Sample records for excess variance method

  1. Size of Self-Gravity Wakes from Cassini UVIS Tracking Occultations and Ring Transparency Statistics

    NASA Astrophysics Data System (ADS)

    Esposito, Larry W.; Rehnberg, Morgan; Colwell, Joshua E.; Sremcevic, Miodrag

    2017-10-01

    We compare two methods for determining the size of self-gravity wakes in Saturn’s rings. Analysis of gaps seen in UVIS occultations gives a power law distribution from 10-100 m (Rehnberg et al. 2017). Excess variance from UVIS occultations can be related to characteristic clump widths, a method which extends the work of Showalter and Nicholson (1990) to more arbitrary shadow distributions. In the middle A ring, we use results from Colwell et al. (2017) for the variance and results from Jerousek et al. (2016) for the relative size of gaps and wakes to estimate the wake width consistent with the excess variance observed there. Our method gives W = sqrt(A) * E/T^2 * (1 + S/W), where A is the area observed by UVIS in an integration period, E is the measured excess variance above Poisson statistics, T is the mean transparency, and S and W are the separation and width of self-gravity wakes in the granola bar model of Colwell et al. (2006). We find W ~ 10 m and infer the wavelength of the fastest growing instability, λ_Toomre = S + W ~ 30 m. This is consistent with the calculation of the Toomre wavelength from the surface mass density of the A ring, and with the highest resolution UVIS star occultations.
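
    A minimal numerical illustration of the relation quoted above, assuming the separation-to-width ratio S/W is supplied externally (e.g. from the Jerousek et al. gap/wake ratios). All input values in the example call are placeholders, not the paper's fitted numbers.

    ```python
    import numpy as np

    # Sketch of the quoted relation W = sqrt(A) * E / T**2 * (1 + S/W).
    # With the ratio S/W taken as known, the right-hand side is explicit.
    def wake_width(area_m2, excess_variance, transparency, s_over_w):
        """Wake width W (metres) from the excess-variance relation above.

        area_m2         -- area A observed by UVIS in one integration period (m^2)
        excess_variance -- measured excess variance E above Poisson statistics
        transparency    -- mean transparency T of the ring
        s_over_w        -- separation-to-width ratio S/W (granola-bar model)
        """
        return np.sqrt(area_m2) * excess_variance / transparency**2 * (1.0 + s_over_w)

    # Placeholder inputs only, for illustration:
    print(wake_width(area_m2=1.0e4, excess_variance=0.02, transparency=0.5, s_over_w=2.0))
    ```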

  2. Size of Self-Gravity Wakes from Cassini UVIS Tracking Occultations and Ring Transparency Statistics

    NASA Astrophysics Data System (ADS)

    Esposito, L. W.; Rehnberg, M.; Colwell, J. E.; Sremcevic, M.

    2017-12-01

    We compare two methods for determining the size of self-gravity wakes in Saturn's rings. Analysis of gaps seen in UVIS occultations gives a power law distribution from 10-100 m (Rehnberg et al. 2017). Excess variance from UVIS occultations can be related to characteristic clump widths, a method which extends the work of Showalter and Nicholson (1990) to more arbitrary shadow distributions. In the middle A ring, we use results from Colwell et al. (2017) for the variance and results from Jerousek et al. (2016) for the relative size of gaps and wakes to estimate the wake width consistent with the excess variance observed there. Our method gives W = sqrt(A) * E/T^2 * (1 + S/W), where A is the area observed by UVIS in an integration period, E is the measured excess variance above Poisson statistics, T is the mean transparency, and S and W are the separation and width of self-gravity wakes in the granola bar model of Colwell et al. (2006). We find W ~ 10 m and infer the wavelength of the fastest growing instability, λ_T = S + W ~ 30 m. This is consistent with the calculation of the Toomre wavelength from the surface mass density of the A ring, and with the highest resolution UVIS star occultations.

  3. Can Twitter be used to predict county excessive alcohol consumption rates?

    PubMed Central

    Ashford, Robert D.; Hemmons, Jessie; Summers, Dan; Hamilton, Casey

    2018-01-01

    Objectives The current study analyzes a large set of Twitter data from 1,384 US counties to determine whether excessive alcohol consumption rates can be predicted by the words being posted from each county. Methods Data from over 138 million county-level tweets were analyzed using predictive modeling, differential language analysis, and mediating language analysis. Results Twitter language data captures cross-sectional patterns of excessive alcohol consumption beyond that of sociodemographic factors (e.g. age, gender, race, income, education), and can be used to accurately predict rates of excessive alcohol consumption. Additionally, mediation analysis found that Twitter topics (e.g. ‘ready gettin leave’) can explain much of the variance associated between socioeconomics and excessive alcohol consumption. Conclusions Twitter data can be used to predict public health concerns such as excessive drinking. Using mediation analysis in conjunction with predictive modeling allows for a high portion of the variance associated with socioeconomic status to be explained. PMID:29617408

  4. More than Drought: Precipitation Variance, Excessive Wetness, Pathogens and the Future of the Western Edge of the Eastern Deciduous Forest.

    PubMed

    Hubbart, Jason A; Guyette, Richard; Muzika, Rose-Marie

    2016-10-01

    For many regions of the Earth, anthropogenic climate change is expected to result in increasingly divergent climate extremes. However, little is known about how increasing climate variance may affect ecosystem productivity. Forest ecosystems may be particularly susceptible to this problem considering the complex organizational structure of specialized species niche adaptations. Forest decline is often attributable to multiple stressors including prolonged heat, wildfire and insect outbreaks. These disturbances, often categorized as megadisturbances, can push temperate forests beyond sustainability thresholds. Absent from much of the contemporary forest health literature, however, is the discussion of excessive precipitation that may affect other disturbances synergistically or that might represent a principal stressor. Here, specific points of evidence are provided including historic climatology, variance predictions from global change modeling, Midwestern paleo climate data, local climate influences on net ecosystem exchange and productivity, and pathogen influences on oak mortality. Data sources reveal potential trends, deserving further investigation, indicating that the western edge of the Eastern Deciduous forest may be impacted by ongoing increased precipitation, precipitation variance and excessive wetness. Data presented, in conjunction with recent regional forest health concerns, suggest that climate variance including drought and excessive wetness should be equally considered for forest ecosystem resilience against increasingly dynamic climate. This communication serves as an alert to the need for studies on potential impacts of increasing climate variance and excessive wetness in forest ecosystem health and productivity in the Midwest US and similar forest ecosystems globally. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Disease Mapping of Zero-excessive Mesothelioma Data in Flanders

    PubMed Central

    Neyens, Thomas; Lawson, Andrew B.; Kirby, Russell S.; Nuyts, Valerie; Watjou, Kevin; Aregay, Mehreteab; Carroll, Rachel; Nawrot, Tim S.; Faes, Christel

    2016-01-01

    Purpose To investigate the distribution of mesothelioma in Flanders using Bayesian disease mapping models that account for both an excess of zeros and overdispersion. Methods The numbers of newly diagnosed mesothelioma cases within all Flemish municipalities between 1999 and 2008 were obtained from the Belgian Cancer Registry. To deal with overdispersion, zero-inflation and geographical association, the hurdle combined model was proposed, which has three components: a Bernoulli zero-inflation mixture component to account for excess zeros, a gamma random effect to adjust for overdispersion and a normal conditional autoregressive random effect to attribute spatial association. This model was compared with other existing methods in literature. Results The results indicate that hurdle models with a random effects term accounting for extra-variance in the Bernoulli zero-inflation component fit the data better than hurdle models that do not take overdispersion in the occurrence of zeros into account. Furthermore, traditional models that do not take into account excessive zeros but contain at least one random effects term that models extra-variance in the counts have better fits compared to their hurdle counterparts. In other words, the extra-variability, due to an excess of zeros, can be accommodated by spatially structured and/or unstructured random effects in a Poisson model such that the hurdle mixture model is not necessary. Conclusions Models taking into account zero-inflation do not always provide better fits to data with excessive zeros than less complex models. In this study, a simple conditional autoregressive model identified a cluster in mesothelioma cases near a former asbestos processing plant (Kapelle-op-den-Bos). This observation is likely linked with historical local asbestos exposures. Future research will clarify this. PMID:27908590

  6. Robust LOD scores for variance component-based linkage analysis.

    PubMed

    Blangero, J; Williams, J T; Almasy, L

    2000-01-01

    The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.

  7. Robust Likelihoods for Inflationary Gravitational Waves from Maps of Cosmic Microwave Background Polarization

    NASA Technical Reports Server (NTRS)

    Switzer, Eric Ryan; Watts, Duncan J.

    2016-01-01

    The B-mode polarization of the cosmic microwave background provides a unique window into tensor perturbations from inflationary gravitational waves. Survey effects complicate the estimation and description of the power spectrum on the largest angular scales. The pixel-space likelihood yields parameter distributions without the power spectrum as an intermediate step, but it does not have the large suite of tests available to power spectral methods. Searches for primordial B-modes must rigorously reject and rule out contamination. Many forms of contamination vary or are uncorrelated across epochs, frequencies, surveys, or other data treatment subsets. The cross power and the power spectrum of the difference of subset maps provide approaches to reject and isolate excess variance. We develop an analogous joint pixel-space likelihood. Contamination not modeled in the likelihood produces parameter-dependent bias and complicates the interpretation of the difference map. We describe a null test that consistently weights the difference map. Excess variance should either be explicitly modeled in the covariance or be removed through reprocessing the data.

  8. Disease mapping of zero-excessive mesothelioma data in Flanders.

    PubMed

    Neyens, Thomas; Lawson, Andrew B; Kirby, Russell S; Nuyts, Valerie; Watjou, Kevin; Aregay, Mehreteab; Carroll, Rachel; Nawrot, Tim S; Faes, Christel

    2017-01-01

    To investigate the distribution of mesothelioma in Flanders using Bayesian disease mapping models that account for both an excess of zeros and overdispersion. The numbers of newly diagnosed mesothelioma cases within all Flemish municipalities between 1999 and 2008 were obtained from the Belgian Cancer Registry. To deal with overdispersion, zero inflation, and geographical association, the hurdle combined model was proposed, which has three components: a Bernoulli zero-inflation mixture component to account for excess zeros, a gamma random effect to adjust for overdispersion, and a normal conditional autoregressive random effect to attribute spatial association. This model was compared with other existing methods in literature. The results indicate that hurdle models with a random effects term accounting for extra variance in the Bernoulli zero-inflation component fit the data better than hurdle models that do not take overdispersion in the occurrence of zeros into account. Furthermore, traditional models that do not take into account excessive zeros but contain at least one random effects term that models extra variance in the counts have better fits compared to their hurdle counterparts. In other words, the extra variability, due to an excess of zeros, can be accommodated by spatially structured and/or unstructured random effects in a Poisson model such that the hurdle mixture model is not necessary. Models taking into account zero inflation do not always provide better fits to data with excessive zeros than less complex models. In this study, a simple conditional autoregressive model identified a cluster in mesothelioma cases near a former asbestos processing plant (Kapelle-op-den-Bos). This observation is likely linked with historical local asbestos exposures. Future research will clarify this. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Save money by understanding variance and tolerancing.

    PubMed

    Stuart, K

    2007-01-01

    Manufacturing processes are inherently variable, which results in component and assembly variance. Unless process capability, variance and tolerancing are fully understood, incorrect design tolerances may be applied, which will lead to more expensive tooling, inflated production costs, high reject rates, product recalls and excessive warranty costs. A methodology is described for correctly allocating tolerances and performing appropriate analyses.

  10. Analysis of overdispersed count data: application to the Human Papillomavirus Infection in Men (HIM) Study.

    PubMed

    Lee, J-H; Han, G; Fulp, W J; Giuliano, A R

    2012-06-01

    The Poisson model can be applied to the count of events occurring within a specific time period. The main feature of the Poisson model is the assumption that the mean and variance of the count data are equal. However, this equal mean-variance relationship rarely occurs in observational data. In most cases, the observed variance is larger than the assumed variance, which is called overdispersion. Further, when the observed data involve excessive zero counts, the problem of overdispersion results in underestimating the variance of the estimated parameter, and thus produces a misleading conclusion. We illustrated the use of four models for overdispersed count data that may be attributed to excessive zeros. These are Poisson, negative binomial, zero-inflated Poisson and zero-inflated negative binomial models. The example data in this article deal with the number of incidents involving human papillomavirus infection. The four models resulted in differing statistical inferences. The Poisson model, which is widely used in epidemiology research, underestimated the standard errors and overstated the significance of some covariates.
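
    As a hedged illustration of the four model families compared above (not the HIM analysis itself), the sketch below fits Poisson, negative binomial, zero-inflated Poisson, and zero-inflated negative binomial models to simulated overdispersed, zero-heavy counts using statsmodels; the simulated outcome and covariate are stand-ins.

    ```python
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.discrete.count_model import (
        ZeroInflatedPoisson, ZeroInflatedNegativeBinomialP)

    # Simulated counts with overdispersion and excess zeros (stand-in data).
    rng = np.random.default_rng(1)
    n = 500
    x = rng.standard_normal(n)
    X = sm.add_constant(x)
    lam = np.exp(0.3 + 0.5 * x)
    y = rng.negative_binomial(1.0, 1.0 / (1.0 + lam)).astype(float)
    y[rng.random(n) < 0.3] = 0.0                      # inflate the zero count

    fits = {
        "Poisson": sm.Poisson(y, X).fit(disp=False),
        "NegBin":  sm.NegativeBinomial(y, X).fit(disp=False),
        "ZIP":     ZeroInflatedPoisson(y, X, exog_infl=X).fit(disp=False),
        "ZINB":    ZeroInflatedNegativeBinomialP(y, X, exog_infl=X).fit(disp=False),
    }
    for name, res in fits.items():
        # Lower AIC suggests a better trade-off between fit and complexity.
        print(f"{name:7s} AIC = {res.aic:9.1f}")
    ```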

  11. Accounting for Dark Current Accumulated during Readout of Hubble's ACS/WFC Detectors

    NASA Astrophysics Data System (ADS)

    Ryon, Jenna E.; Grogin, Norman A.; Coe, Dan A.; ACS Team

    2018-06-01

    We investigate the properties of excess dark current accumulated during the 100-second full-frame readout of the Advanced Camera for Surveys (ACS) Wide Field Channel (WFC) detectors. This excess dark current, called "readout dark", gives rise to ambient background gradients and hot columns in each ACS/WFC image. While readout dark signal is removed from science images during the bias correction step in CALACS, the additional noise from the readout dark is currently not taken into account. We develop a method to estimate the readout dark noise properties in ACS/WFC observations. We update the error (ERR) extensions of superbias images to include the appropriate noise from the ambient readout dark gradient and stable hot columns. In recent data, this amounts to about 5 e-/pixel added variance in the rows farthest from the WFC serial registers, and about 7 to 30 e-/pixel added variance along the stable hot columns. We also flag unstable hot columns in the superbias data quality (DQ) extensions. The new reference file pipeline for ACS/WFC implements these updates to our superbias creation process.
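
    A hedged sketch of the bookkeeping described above: adding a readout-dark variance map in quadrature to the ERR extension of a superbias file. The file and array names ("superbias.fits", "readout_dark_variance.npy") are hypothetical placeholders, not CALACS reference-file names.

    ```python
    import numpy as np
    from astropy.io import fits

    # Hypothetical inputs: a superbias image with an ERR extension and a
    # per-pixel readout-dark variance map in electrons/pixel.
    readout_dark_var = np.load("readout_dark_variance.npy")

    with fits.open("superbias.fits", mode="update") as hdul:
        err = hdul["ERR"].data.astype(np.float64)
        # Combine the existing error with the extra variance in quadrature.
        hdul["ERR"].data = np.sqrt(err**2 + readout_dark_var).astype(np.float32)
        hdul.flush()
    ```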

  12. Adjusting for overdispersion in piecewise exponential regression models to estimate excess mortality rate in population-based research.

    PubMed

    Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard

    2016-10-01

    In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including a quasi-likelihood, robust standard errors estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed the presence of significant inherent overdispersion (p-value <0.001). However, the flexible piecewise exponential model showed the smallest overdispersion parameter (3.2, versus 21.3 for the non-flexible piecewise exponential models). We showed that there were no major differences between methods. However, using flexible piecewise regression modelling, with either a quasi-likelihood or robust standard errors, was the best approach as it deals with both overdispersion due to model misspecification and true or inherent overdispersion.
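
    A generic, hedged sketch of the kind of checks described above, using a Poisson GLM with a person-time offset on simulated data (not the relative-survival setup of the paper): estimate the Pearson dispersion statistic, then compare naive, quasi-likelihood, and robust-standard-error fits.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Simulated stand-in data: covariates, person-years at risk, and deaths
    # with extra-Poisson noise so that the model is overdispersed.
    rng = np.random.default_rng(2)
    n = 2000
    X = sm.add_constant(rng.standard_normal((n, 2)))
    pyears = rng.uniform(0.5, 5.0, n)
    mu = pyears * np.exp(-2.0 + 0.3 * X[:, 1])
    deaths = rng.poisson(mu * rng.gamma(shape=2.0, scale=0.5, size=n))  # gamma mixing -> overdispersion

    model = sm.GLM(deaths, X, family=sm.families.Poisson(), offset=np.log(pyears))
    naive  = model.fit()                    # assumes variance = mean
    quasi  = model.fit(scale="X2")          # quasi-likelihood: Pearson chi2 / df scale
    robust = model.fit(cov_type="HC0")      # sandwich (robust) standard errors

    print("Pearson dispersion:", naive.pearson_chi2 / naive.df_resid)
    print("naive / quasi / robust SE of first covariate:",
          naive.bse[1], quasi.bse[1], robust.bse[1])
    ```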

  13. A twin study of body dysmorphic concerns.

    PubMed

    Monzani, B; Rijsdijk, F; Anson, M; Iervolino, A C; Cherkas, L; Spector, T; Mataix-Cols, D

    2012-09-01

    Dysmorphic concern refers to an excessive preoccupation with a perceived or slight defect in physical appearance. It lies on a continuum of severity from no or minimal concerns to severe concerns over one's appearance. The present study examined the heritability of dysmorphic concerns in a large sample of twins. Twins from the St Thomas UK twin registry completed a valid and reliable self-report measure of dysmorphic concerns, which also includes questions about perceived body odour and malfunction. Twin modelling methods (female twins only, n=3544) were employed to decompose the variance in the liability to dysmorphic concerns into additive genetic, shared and non-shared environmental factors. Model-fitting analyses showed that genetic factors accounted for approximately 44% [95% confidence intervals (CI) 36-50%] of the variance in dysmorphic concerns, with non-shared environmental factors and measurement error accounting for the remaining variance (56%; 95% CI 50-63%). Shared environmental factors were negligible. The results remained unchanged when excluding individuals reporting an objective medical condition/injury accounting for their concern in physical appearance. Over-concern with a perceived or slight defect in physical appearance is a heritable trait, with non-shared environmental factors also playing an important role in its causation. The results are relevant for various psychiatric disorders characterized by excessive concerns in body appearance, odour or function, including but not limited to body dysmorphic disorder.

  14. Upper Limits on the 21 cm Epoch of Reionization Power Spectrum from One Night with LOFAR

    NASA Astrophysics Data System (ADS)

    Patil, A. H.; Yatawatta, S.; Koopmans, L. V. E.; de Bruyn, A. G.; Brentjens, M. A.; Zaroubi, S.; Asad, K. M. B.; Hatef, M.; Jelić, V.; Mevius, M.; Offringa, A. R.; Pandey, V. N.; Vedantham, H.; Abdalla, F. B.; Brouw, W. N.; Chapman, E.; Ciardi, B.; Gehlot, B. K.; Ghosh, A.; Harker, G.; Iliev, I. T.; Kakiichi, K.; Majumdar, S.; Mellema, G.; Silva, M. B.; Schaye, J.; Vrbanec, D.; Wijnholds, S. J.

    2017-03-01

    We present the first limits on the Epoch of Reionization 21 cm H I power spectra, in the redshift range z = 7.9-10.6, using the Low-Frequency Array (LOFAR) High-Band Antenna (HBA). In total, 13.0 hr of data were used from observations centered on the North Celestial Pole. After subtraction of the sky model and the noise bias, we detect a non-zero excess variance Δ²_I = (56 ± 13 mK)² (1σ) and a best 2σ upper limit of Δ²_21 < (79.6 mK)² at k = 0.053 h cMpc⁻¹ in the range z = 9.6-10.6. The excess variance decreases when optimizing the smoothness of the direction- and frequency-dependent gain calibration, and with increasing the completeness of the sky model. It is likely caused by (i) residual side-lobe noise on calibration baselines, (ii) leverage due to nonlinear effects, (iii) noise and ionosphere-induced gain errors, or a combination thereof. Further analyses of the excess variance will be discussed in forthcoming publications.

  15. High variance in reproductive success generates a false signature of a genetic bottleneck in populations of constant size: a simulation study

    PubMed Central

    2013-01-01

    Background Demographic bottlenecks can severely reduce the genetic variation of a population or a species. Establishing whether low genetic variation is caused by a bottleneck or a constantly low effective number of individuals is important to understand a species’ ecology and evolution, and it has implications for conservation management. Recent studies have evaluated the power of several statistical methods developed to identify bottlenecks. However, the false positive rate, i.e. the rate with which a bottleneck signal is misidentified in demographically stable populations, has received little attention. We analyse this type of error (type I) in forward computer simulations of stable populations having greater than Poisson variance in reproductive success (i.e., variance in family sizes). The assumption of Poisson variance underlies bottleneck tests, yet it is commonly violated in species with high fecundity. Results With large variance in reproductive success (Vk ≥ 40, corresponding to a ratio between effective and census size smaller than 0.1), tests based on allele frequencies, allelic sizes, and DNA sequence polymorphisms (heterozygosity excess, M-ratio, and Tajima’s D test) tend to show erroneous signals of a bottleneck. Similarly, strong evidence of population decline is erroneously detected when ancestral and current population sizes are estimated with the model based method MSVAR. Conclusions Our results suggest caution when interpreting the results of bottleneck tests in species showing high variance in reproductive success. Particularly in species with high fecundity, computer simulations are recommended to confirm the occurrence of a population bottleneck. PMID:24131797

  16. The More, the Better? Curvilinear Effects of Job Autonomy on Well-Being From Vitamin Model and PE-Fit Theory Perspectives.

    PubMed

    Stiglbauer, Barbara; Kovacs, Carrie

    2017-12-28

    In organizational psychology research, autonomy is generally seen as a job resource with a monotone positive relationship with desired occupational outcomes such as well-being. However, both Warr's vitamin model and person-environment (PE) fit theory suggest that negative outcomes may result from excesses of some job resources, including autonomy. Thus, the current studies used survey methodology to explore cross-sectional relationships between environmental autonomy, person-environment autonomy (mis)fit, and well-being. We found that autonomy and autonomy (mis)fit explained between 6% and 22% of variance in well-being, depending on type of autonomy (scheduling, method, or decision-making) and type of (mis)fit operationalization (atomistic operationalization through the separate assessment of actual and ideal autonomy levels vs. molecular operationalization through the direct assessment of perceived autonomy (mis)fit). Autonomy (mis)fit (PE-fit perspective) explained more unique variance in well-being than environmental autonomy itself (vitamin model perspective). Detrimental effects of autonomy excess on well-being were most evident for method autonomy and least consistent for decision-making autonomy. We argue that too-much-of-a-good-thing effects of job autonomy on well-being exist, but suggest that these may be dependent upon sample characteristics (range of autonomy levels), type of operationalization (molecular vs. atomistic fit), autonomy facet (method, scheduling, or decision-making), as well as individual and organizational moderators. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  17. Measuring the Power Spectrum with Peculiar Velocities

    NASA Astrophysics Data System (ADS)

    Macaulay, Edward; Feldman, H. A.; Ferreira, P. G.; Jaffe, A. H.; Agarwal, S.; Hudson, M. J.; Watkins, R.

    2012-01-01

    The peculiar velocities of galaxies are an inherently valuable cosmological probe, providing an unbiased estimate of the distribution of matter on scales much larger than the depth of the survey. Much research interest has been motivated by the high dipole moment of our local peculiar velocity field, which suggests a large-scale excess in the matter power spectrum, and can appear to be in some tension with the ΛCDM model. We use a composite catalogue of 4,537 peculiar velocity measurements with a characteristic depth of 33 h⁻¹ Mpc to estimate the matter power spectrum. We compare the constraints with this method, directly studying the full peculiar velocity catalogue, to results from Macaulay et al. (2011), studying minimum variance moments of the velocity field, as calculated by Watkins, Feldman & Hudson (2009) and Feldman, Watkins & Hudson (2010). We find good agreement with the ΛCDM model on scales of k > 0.01 h Mpc⁻¹. We find an excess of power on scales of k < 0.01 h Mpc⁻¹, although with a 1σ uncertainty which includes the ΛCDM model. We find that the uncertainty in the excess at these scales is larger than an alternative result studying only moments of the velocity field, which is due to the minimum variance weights used to calculate the moments. At small scales, we are able to clearly discriminate between linear and nonlinear clustering in simulated peculiar velocity catalogues, and find some evidence (although less clear) for linear clustering in the real peculiar velocity data.

  18. Power spectrum estimation from peculiar velocity catalogues

    NASA Astrophysics Data System (ADS)

    Macaulay, E.; Feldman, H. A.; Ferreira, P. G.; Jaffe, A. H.; Agarwal, S.; Hudson, M. J.; Watkins, R.

    2012-09-01

    The peculiar velocities of galaxies are an inherently valuable cosmological probe, providing an unbiased estimate of the distribution of matter on scales much larger than the depth of the survey. Much research interest has been motivated by the high dipole moment of our local peculiar velocity field, which suggests a large-scale excess in the matter power spectrum and can appear to be in some tension with the Λ cold dark matter (ΛCDM) model. We use a composite catalogue of 4537 peculiar velocity measurements with a characteristic depth of 33 h⁻¹ Mpc to estimate the matter power spectrum. We compare the constraints with this method, directly studying the full peculiar velocity catalogue, to results by Macaulay et al., studying minimum variance moments of the velocity field, as calculated by Feldman, Watkins & Hudson. We find good agreement with the ΛCDM model on scales of k > 0.01 h Mpc⁻¹. We find an excess of power on scales of k < 0.01 h Mpc⁻¹ with a 1σ uncertainty which includes the ΛCDM model. We find that the uncertainty in excess at these scales is larger than an alternative result studying only moments of the velocity field, which is due to the minimum variance weights used to calculate the moments. At small scales, we are able to clearly discriminate between linear and non-linear clustering in simulated peculiar velocity catalogues and find some evidence (although less clear) for linear clustering in the real peculiar velocity data.

  19. Estimation and Uncertainty Analysis of Impacts of Future Heat Waves on Mortality in the Eastern United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Jianyong; Zhou, Ying; Gao, Yang

    Background: It is anticipated that climate change will influence heat-related mortality in the future. However, the estimation of excess mortality attributable to future heat waves is subject to large uncertainties, which have not been examined under the latest greenhouse gas emission scenarios. Objectives: We estimated the future heat wave impact on mortality in the eastern United States (~ 1,700 counties) under two Representative Concentration Pathways (RCPs) and analyzed the sources of uncertainties. Methods: Using dynamically downscaled hourly temperature projections in 2057-2059, we calculated heat wave days and episodes based on four heat wave metrics, and estimated the excess mortality attributable to them. The sources of uncertainty in estimated excess mortality were apportioned using a variance-decomposition method. Results: In the eastern U.S., the excess mortality attributable to heat waves could range from 200 to 7,807, with a mean of 2,379 persons/year in 2057-2059. The projected average excess mortality in the RCP 4.5 and 8.5 scenarios was 1,403 and 3,556 persons/year, respectively. Excess mortality would be relatively high in the southern and eastern coastal areas. The major sources of uncertainty in the estimates are the relative risk of heat wave mortality, the RCP scenarios, and the heat wave definitions. Conclusions: The estimated mortality risks from future heat waves are likely an order of magnitude higher than their current level and lead to thousands of deaths each year under the RCP8.5 scenario. The substantial spatial variability in estimated county-level heat mortality suggests that effective mitigation and adaptation measures should be developed based on spatially resolved data.

  20. Correction of Excessive Precipitation over Steep Mountains in a General Circulation Model (GCM)

    NASA Technical Reports Server (NTRS)

    Chao, Winston C.

    2012-01-01

    Excessive precipitation over steep and high mountains (EPSM) is a well-known problem in GCMs and regional climate models even at a resolution as high as 19km. The affected regions include the Andes, the Himalayas, Sierra Madre, New Guinea and others. This problem also shows up in some data assimilation products. Among the possible causes investigated in this study, we found that the most important one, by far, is a missing upward transport of heat out of the boundary layer due to the vertical circulations forced by the daytime subgrid-scale upslope winds, which in turn is forced by heated boundary layer on the slopes. These upslope winds are associated with large subgrid-scale topographic variance, which is found over steep mountains. Without such subgrid-scale heat ventilation, the resolvable-scale upslope flow in the boundary layer generated by surface sensible heat flux along the mountain slopes is excessive. Such an excessive resolvable-scale upslope flow in the boundary layer combined with the high moisture content in the boundary layer results in excessive moisture transport toward mountaintops, which in turn gives rise to excessive precipitation over the affected regions. We have parameterized the effects of subgrid-scale heated-slope-induced vertical circulation (SHVC) by removing heat from the boundary layer and depositing it in the layers higher up when topographic variance exceeds a critical value. Test results using NASA/Goddard's GEOS-5 GCM have shown that the EPSM problem is largely solved.
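
    A toy, hedged sketch of the SHVC idea as described in the abstract: when a column's subgrid topographic variance exceeds a critical value, a fraction of the boundary-layer heating tendency is moved to layers aloft while conserving the column-integrated energy. The threshold, fraction, and layer indices are illustrative, not GEOS-5 values.

    ```python
    import numpy as np

    def shvc_ventilation(heating, dp, topo_var, topo_var_crit, frac=0.5,
                         bl=slice(0, 3), aloft=slice(3, 8)):
        """Redistribute part of the boundary-layer heating tendency upward.

        heating  -- heating tendency per layer (K/s), index 0 nearest the surface
        dp       -- layer pressure thicknesses (Pa), used as mass weights
        topo_var -- subgrid-scale topographic variance of the column
        """
        heating = heating.copy()
        if topo_var <= topo_var_crit:
            return heating
        # Column-integrated energy removed from the boundary layer...
        removed = frac * np.sum(heating[bl] * dp[bl])
        heating[bl] *= (1.0 - frac)
        # ...and deposited uniformly per unit mass in the layers aloft.
        heating[aloft] += removed / np.sum(dp[aloft])
        return heating
    ```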

  1. Fast Minimum Variance Beamforming Based on Legendre Polynomials.

    PubMed

    Bae, MooHo; Park, Sung Bae; Kwon, Sung Jae

    2016-09-01

    Currently, minimum variance beamforming (MV) is actively investigated as a method that can improve the performance of an ultrasound beamformer, in terms of the lateral and contrast resolution. However, this method has the disadvantage of excessive computational complexity since the inverse spatial covariance matrix must be calculated. Some noteworthy methods among various attempts to solve this problem include beam space adaptive beamforming methods and the fast MV method based on principal component analysis, which are similar in that the original signal in the element space is transformed to another domain using an orthonormal basis matrix and the dimension of the covariance matrix is reduced by approximating the matrix only with important components of the matrix, hence making the inversion of the matrix very simple. Recently, we proposed a new method with further reduced computational demand that uses Legendre polynomials as the basis matrix for such a transformation. In this paper, we verify the efficacy of the proposed method through Field II simulations as well as in vitro and in vivo experiments. The results show that the approximation error of this method is less than or similar to those of the above-mentioned methods and that the lateral response of point targets and the contrast-to-speckle noise in anechoic cysts are also better than or similar to those methods when the dimensionality of the covariance matrices is reduced to the same dimension.
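
    A hedged sketch, not the authors' implementation: beamspace minimum-variance weights computed after projecting element-space data onto an orthonormalized Legendre-polynomial basis, so that only a small L x L covariance matrix needs inverting. The array size, beamspace dimension, diagonal loading, and broadside steering vector are illustrative assumptions.

    ```python
    import numpy as np

    M, L = 64, 8                                    # elements in aperture, reduced dimension
    rng = np.random.default_rng(0)
    snapshots = rng.standard_normal((M, 200))       # stand-in (presteered) channel data

    # Legendre polynomials sampled across the aperture, orthonormalized by QR.
    x = np.linspace(-1.0, 1.0, M)
    B = np.polynomial.legendre.legvander(x, L - 1)  # M x L basis
    Q, _ = np.linalg.qr(B)                          # orthonormal columns

    # Beamspace data and small L x L covariance with diagonal loading.
    y = Q.T @ snapshots
    R = (y @ y.conj().T) / y.shape[1]
    R += 1e-2 * np.real(np.trace(R)) / L * np.eye(L)

    # Minimum-variance weights for a broadside look direction (all-ones steering
    # vector in element space, projected into the beamspace).
    a = Q.T @ np.ones(M)
    Ri_a = np.linalg.solve(R, a)
    w = Ri_a / (a.conj() @ Ri_a)

    output = w.conj() @ y                           # one beamformed sample per snapshot
    ```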

  2. Fat-free mass is not lower 24 months postbariatric surgery than nonoperated matched controls

    PubMed Central

    Strain, Gladys Witt; Ebel, Faith; Honohan, Jamie; Gagner, Michel; Dakin, Gregory F.; Pomp, Alfons; Gallagher, Dympna

    2017-01-01

    Objective Concerns about an excessive loss of fat-free mass (FFM) after bariatric surgery prompted this comparison of operated versus matched nonoperated controls regarding FFM. Setting University Hospital and University Research Unit in an urban medical center. Methods Body composition with bioelectric impedance (Tanita 310, Tanita Corp, Arlington Heights, IL) was measured approximately 2 years after bariatric surgery in weight stable patients and nonoperated weight stable controls matched for body mass index (BMI), gender, and age. t tests provided comparisons. Analysis of variance was used to compare FFM changes for 4 procedures. Levene’s test evaluated variance. Results Patients (n = 252; 24.7 ± 15 mo after surgery) and nonoperated controls (n = 252) were matched for gender (71.8% female), age (44.5 ± 11.0 yr), and BMI (32.8 ± 7.0 kg/m2). Patients had different surgical procedures: 107 gastric bypasses (RYGBs), 62 biliopancreatic diversions with duodenal switch (BPD/DSs), 40 adjustable gastric bands (AGBs), and 43 sleeve gastrectomies (LSGs). FFM percentage was significantly higher in the operated patients than controls, 66% versus 62%, P < .0001. For 3 procedures, the FFM was significantly higher; however, AGBs changed only 7.3 BMI units and FFM was not significantly different from their matched controls, 59.8% versus 58.2%. Across surgical groups, FFM percentage differed, P < .0001 (RYGB 66.5 ± 9.2%, BPD/DS 74.0 ± 9.3%, AGB 59.8 ± 7.0%, LSG 59.6 ± 9.3%). Variance was not different (P = .17). Conclusion Weight-reduced bariatric surgery patients have greater FFM compared with nonoperated matched controls. These findings support surgically assisted weight loss as a physiologic process and in general patients do not suffer from excessive FFM depletion after bariatric procedures. PMID:27387700

  3. Seasonal Predictability in a Model Atmosphere.

    NASA Astrophysics Data System (ADS)

    Lin, Hai

    2001-07-01

    The predictability of atmospheric mean-seasonal conditions in the absence of externally varying forcing is examined. A perfect-model approach is adopted, in which a global T21 three-level quasigeostrophic atmospheric model is integrated over 21 000 days to obtain a reference atmospheric orbit. The model is driven by a time-independent forcing, so that the only source of time variability is the internal dynamics. The forcing is set to perpetual winter conditions in the Northern Hemisphere (NH) and perpetual summer in the Southern Hemisphere. A significant temporal variability in the NH 90-day mean states is observed. The component of that variability associated with the higher-frequency motions, or climate noise, is estimated using a method developed by Madden. In the polar region, and to a lesser extent in the midlatitudes, the temporal variance of the winter means is significantly greater than the climate noise, suggesting some potential predictability in those regions. Forecast experiments are performed to see whether the presence of variance in the 90-day mean states that is in excess of the climate noise leads to some skill in the prediction of these states. Ensemble forecast experiments with nine members starting from slightly different initial conditions are performed for 200 different 90-day means along the reference atmospheric orbit. The serial correlation between the ensemble means and the reference orbit shows that there is skill in the 90-day mean predictions. The skill is concentrated in those regions of the NH that have the largest variance in excess of the climate noise. An EOF analysis shows that nearly all the predictive skill in the seasonal means is associated with one mode of variability with a strong axisymmetric component.

  4. Relationship between family history of alcohol addiction, parents' education level, and smartphone problem use scale scores.

    PubMed

    Beison, Ashley; Rademacher, David J

    2017-03-01

    Background and aims Smartphones are ubiquitous. As smartphones increased in popularity, researchers realized that people were becoming dependent on their smartphones. The purpose here was to provide a better understanding of the factors related to problematic smartphone use (PSPU). Methods The participants were 100 undergraduates (25 males, 75 females) whose ages ranged from 18 to 23 (mean age = 20 years). The participants completed questionnaires to assess gender, ethnicity, year in college, father's education level, mother's education level, family income, age, family history of alcoholism, and PSPU. The Family Tree Questionnaire assessed family history of alcoholism. The Mobile Phone Problem Use Scale (MPPUS) and the Adapted Cell Phone Addiction Test (ACPAT) were used to determine the degree of PSPU. Whereas the MPPUS measures tolerance, escape from other problems, withdrawal, craving, and negative life consequences, the ACPAT measures preoccupation (salience), excessive use, neglecting work, anticipation, lack of control, and neglecting social life. Results Family history of alcoholism and father's education level together explained 26% of the variance in the MPPUS scores and 25% of the variance in the ACPAT scores. The inclusion of mother's education level, ethnicity, family income, age, year in college, and gender did not significantly increase the proportion of variance explained for either MPPUS or ACPAT scores. Discussion and conclusions Family history of alcoholism and father's education level are good predictors of PSPU. As 74%-75% of the variance in PSPU scale scores was not explained, future studies should aim to explain this variance.

  5. Excessive Acquisition in Hoarding

    PubMed Central

    Frost, Randy O.; Tolin, David F.; Steketee, Gail; Fitch, Kristin E.; Selbo-Bruns, Alexandra

    2009-01-01

    Compulsive hoarding (the acquisition of and failure to discard large numbers of possessions) is associated with substantial health risk, impairment, and economic burden. However, little research has examined separate components of this definition, particularly excessive acquisition. The present study examined acquisition in hoarding. Participants, 878 self-identified with hoarding and 665 family informants (not matched to hoarding participants), completed an internet survey. Among hoarding participants who met criteria for clinically significant hoarding, 61% met criteria for a diagnosis of compulsive buying and approximately 85% reported excessive acquisition. Family informants indicated that nearly 95% exhibited excessive acquisition. Those who acquired excessively had more severe hoarding; their hoarding had an earlier onset and resulted in more psychiatric work impairment days; and they experienced more symptoms of obsessive-compulsive disorder, depression, and anxiety. Two forms of excessive acquisition (buying and free things) each contributed independent variance in the prediction of hoarding severity and related symptoms. PMID:19261435

  6. X-Ray Emission from Active Galactic Nuclei with Intermediate-Mass Black Holes

    NASA Astrophysics Data System (ADS)

    Dewangan, G. C.; Mathur, S.; Griffiths, R. E.; Rao, A. R.

    2008-12-01

    We present a systematic X-ray study of eight active galactic nuclei (AGNs) with intermediate-mass black holes (M_BH ~ 8-95 × 10^4 M⊙) based on 12 XMM-Newton observations. The sample includes the two prototype AGNs in this class, NGC 4395 and POX 52, and six other AGNs discovered with the Sloan Digital Sky Survey. These AGNs show some of the strongest X-ray variability, with the normalized excess variances being the largest and the power density break timescales being the shortest observed among radio-quiet AGNs. The excess-variance-luminosity correlation appears to depend on both the BH mass and the Eddington luminosity ratio. The break timescale-black hole mass relations for AGN with IMBHs are consistent with that observed for massive AGNs. We find that the FWHM of the Hβ/Hα line is uncorrelated with the BH mass, but shows strong anticorrelation with the Eddington luminosity ratio. Four AGNs show clear evidence for soft X-ray excess emission (kT_in ~ 150-200 eV). X-ray spectra of three other AGNs are consistent with the presence of the soft excess emission. NGC 4395, with the lowest L/L_Edd, lacks the soft excess emission. Evidently, small black hole mass is not the primary driver of strong soft X-ray excess emission from AGNs. The X-ray spectral properties and optical-to-X-ray spectral energy distributions of these AGNs are similar to those of Seyfert 1 galaxies. The observed X-ray/UV properties of AGNs with IMBHs are consistent with these AGNs being low-mass extensions of more massive AGNs, with those with high Eddington luminosity ratio looking more like narrow-line Seyfert 1s and those with low L/L_Edd looking more like broad-line Seyfert 1 galaxies.

  7. Relationship between family history of alcohol addiction, parents’ education level, and smartphone problem use scale scores

    PubMed Central

    Beison, Ashley; Rademacher, David J.

    2017-01-01

    Background and aims Smartphones are ubiquitous. As smartphones increased in popularity, researchers realized that people were becoming dependent on their smartphones. The purpose here was to provide a better understanding of the factors related to problematic smartphone use (PSPU). Methods The participants were 100 undergraduates (25 males, 75 females) whose ages ranged from 18 to 23 (mean age = 20 years). The participants completed questionnaires to assess gender, ethnicity, year in college, father’s education level, mother’s education level, family income, age, family history of alcoholism, and PSPU. The Family Tree Questionnaire assessed family history of alcoholism. The Mobile Phone Problem Use Scale (MPPUS) and the Adapted Cell Phone Addiction Test (ACPAT) were used to determine the degree of PSPU. Whereas the MPPUS measures tolerance, escape from other problems, withdrawal, craving, and negative life consequences, the ACPAT measures preoccupation (salience), excessive use, neglecting work, anticipation, lack of control, and neglecting social life. Results Family history of alcoholism and father’s education level together explained 26% of the variance in the MPPUS scores and 25% of the variance in the ACPAT scores. The inclusion of mother’s education level, ethnicity, family income, age, year in college, and gender did not significantly increase the proportion of variance explained for either MPPUS or ACPAT scores. Discussion and conclusions Family history of alcoholism and father’s education level are good predictors of PSPU. As 74%–75% of the variance in PSPU scale scores was not explained, future studies should aim to explain this variance. PMID:28316252

  8. The Compass Paradigm for the Systematic Evaluation of U.S. Army Command and Control Systems Using Neural Network and Discrete Event Computer Simulation

    DTIC Science & Technology

    2005-11-01

    interest has a large variance so that excessive run lengths are required. This naturally invokes the interest for searches for effective variance ... years since World War II the nature, organization, and mode of the operation of command organizations within the Army has remained virtually... Laboratory began a series of studies and projects focused on investigating the nature of military command and control (C2) operations. The questions

  9. Beliefs about excessive exercise in eating disorders: the role of obsessions and compulsions.

    PubMed

    Naylor, Heather; Mountford, Victoria; Brown, Gary

    2011-01-01

    This study aimed to develop an understanding of excessive exercise in eating disorders by exploring the role of exercise beliefs, obsessive beliefs and obsessive-compulsive behaviours. Sixty-four women were recruited from eating disorder services and 75 non-clinical women were recruited from a university. Exercise beliefs and behaviours, obsessive beliefs and behaviours and eating disorder psychopathology were assessed using self-report questionnaires. There was an association between exercise beliefs, obsessive beliefs and obsessive-compulsive behaviours in the eating-disordered group, but not in the non-eating-disordered group. In the eating-disordered group obsessive beliefs and obsessive-compulsive behaviours were associated with a significant proportion of variance in exercise beliefs after controlling for eating disorder psychopathology and BMI. In the non-eating-disordered group obsessive beliefs and behaviours were associated with beliefs about exercise as a method of affect regulation after controlling for BMI. The results are compatible with a model in which obsessive beliefs and exercise beliefs could maintain exercise in eating disorders. This has implications for the assessment and treatment of excessive exercise. Further research is necessary to determine the causality of these relationships. Copyright © 2011 John Wiley & Sons, Ltd and Eating Disorders Association.

  10. Ensemble X-ray variability of active galactic nuclei. II. Excess variance and updated structure function

    NASA Astrophysics Data System (ADS)

    Vagnetti, F.; Middei, R.; Antonucci, M.; Paolillo, M.; Serafinelli, R.

    2016-09-01

    Context. Most investigations of the X-ray variability of active galactic nuclei (AGN) have been concentrated on the detailed analyses of individual, nearby sources. A relatively small number of studies have treated the ensemble behaviour of the more general AGN population in wider regions of the luminosity-redshift plane. Aims: We want to determine the ensemble variability properties of a rich AGN sample, called Multi-Epoch XMM Serendipitous AGN Sample (MEXSAS), extracted from the fifth release of the XMM-Newton Serendipitous Source Catalogue (XMMSSC-DR5), with redshift between ~0.1 and ~5, and X-ray luminosities in the 0.5-4.5 keV band between ~10^42 erg/s and ~10^47 erg/s. Methods: We urge caution on the use of the normalised excess variance (NXS), noting that it may lead to underestimate variability if used improperly. We use the structure function (SF), updating our previous analysis for a smaller sample. We propose a correction to the NXS variability estimator, taking account of the light curve duration in the rest frame on the basis of the knowledge of the variability behaviour gained by SF studies. Results: We find an ensemble increase of the X-ray variability with the rest-frame time lag τ, given by SF ∝ τ^0.12. We confirm an inverse dependence on the X-ray luminosity, approximately as SF ∝ L_X^-0.19. We analyse the SF in different X-ray bands, finding a dependence of the variability on the frequency as SF ∝ ν^-0.15, corresponding to a so-called softer when brighter trend. In turn, this dependence allows us to parametrically correct the variability estimated in observer-frame bands to that in the rest frame, resulting in a moderate (≲15%) shift upwards (V-correction). Conclusions: Ensemble X-ray variability of AGNs is best described by the structure function. An improper use of the normalised excess variance may lead to an underestimate of the intrinsic variability, so that appropriate corrections to the data or the models must be applied to prevent these effects. Full Table 1 is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/593/A55
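
    For concreteness, a minimal sketch of the two variability estimators discussed above for a single light curve, assuming arrays of times t, fluxes f, and measurement errors e; the binning, rest-frame conversion, and noise corrections used in the paper are omitted.

    ```python
    import numpy as np

    def normalised_excess_variance(f, e):
        """NXS: (sample variance minus mean squared measurement error) / mean^2."""
        return (np.var(f, ddof=1) - np.mean(e**2)) / np.mean(f)**2

    def structure_function(t, f, lag_edges):
        """Mean squared flux difference, binned by time lag (a simple SF estimate)."""
        dt = np.abs(t[:, None] - t[None, :])
        d2 = (f[:, None] - f[None, :])**2
        sf = np.full(len(lag_edges) - 1, np.nan)
        for i, (lo, hi) in enumerate(zip(lag_edges[:-1], lag_edges[1:])):
            sel = (dt > 0) & (dt >= lo) & (dt < hi)
            if sel.any():
                sf[i] = d2[sel].mean()
        return sf
    ```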

  11. Distribution of kriging errors, the implications and how to communicate them

    NASA Astrophysics Data System (ADS)

    Li, Hong Yi; Milne, Alice; Webster, Richard

    2016-04-01

    Kriging in one form or another has become perhaps the most popular method for spatial prediction in environmental science. Each prediction is unbiased and of minimum variance, which itself is estimated. The kriging variances depend on the mathematical model chosen to describe the spatial variation; different models, however plausible, give rise to different minimized variances. Practitioners often compare models by so-called cross-validation before finally choosing the most appropriate for their kriging. One proceeds as follows. One removes a unit (a sampling point) from the whole set, kriges the value there and compares the kriged value with the value observed to obtain the deviation or error. One repeats the process for each and every point in turn and for all plausible models. One then computes the mean errors (MEs) and the mean of the squared errors (MSEs). Ideally a squared error should equal the corresponding kriging variance (σ_K²), and so one is advised to choose the model for which on average the squared errors most nearly equal the kriging variances, i.e. the ratio MSDR = MSE/σ_K² ≈ 1. Maximum likelihood estimation of models almost guarantees that the MSDR equals 1, and so the kriging variances are unbiased predictors of the squared error across the region. The method is based on the assumption that the errors have a normal distribution. The squared deviation ratio (SDR) should therefore be distributed as χ² with one degree of freedom, with a median of 0.455. We have found that often the median of the SDR (MedSDR) is less, in some instances much less, than 0.455 even though the mean of the SDR is close to 1. It seems that in these cases the distributions of the errors are leptokurtic, i.e. they have an excess of predictions close to the true values, excesses near the extremes and a dearth of predictions in between. In these cases the kriging variances are poor measures of the uncertainty at individual sites. The uncertainty is typically under-estimated for the extreme observations and compensated for by over-estimating for other observations. Statisticians must tell users this when they present maps of predictions. We illustrate the situation with results from mapping salinity in land reclaimed from the Yangtze delta in the Gulf of Hangzhou, China. There the apparent electrical conductivity (ECa) of the topsoil was measured at 525 points in a field of 2.3 ha. The marginal distribution of the observations was strongly positively skewed, and so the observed ECa values were transformed to their logarithms to give an approximately symmetric distribution. That distribution was strongly platykurtic with short tails and no evident outliers. The logarithms were analysed as a mixed model of quadratic drift plus correlated random residuals with a spherical variogram. The kriged predictions deviated from their true values with an MSDR of 0.993, but with a MedSDR of 0.324. The coefficient of kurtosis of the deviations was 1.45, i.e. substantially larger than the 0 expected for a normal distribution. The reasons for this behaviour are being sought. The most likely explanation is that there are spatial outliers, i.e. points at which the observed values differ markedly from those at their closest neighbours.
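
    A small sketch of the cross-validation summary statistics described above, assuming leave-one-out kriging has already produced, for every sampling point, the observed value, the kriged prediction, and the kriging variance (all hypothetical arrays here).

    ```python
    import numpy as np

    def cross_validation_stats(observed, predicted, kriging_var):
        """ME, MSE, MSDR and MedSDR from leave-one-out kriging results."""
        error = predicted - observed
        sdr = error**2 / kriging_var        # squared deviation ratio, point by point
        return {
            "ME": error.mean(),             # ideally close to 0 (unbiasedness)
            "MSE": (error**2).mean(),
            "MSDR": sdr.mean(),             # ideally close to 1
            "MedSDR": np.median(sdr),       # close to 0.455 if the errors are normal
        }
    ```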

  12. Distribution of kriging errors, the implications and how to communicate them

    NASA Astrophysics Data System (ADS)

    Li, HongYi; Milne, Alice; Webster, Richard

    2015-04-01

    Kriging in one form or another has become perhaps the most popular method for spatial prediction in environmental science. Each prediction is unbiased and of minimum variance, which itself is estimated. The kriging variances depend on the mathematical model chosen to describe the spatial variation; different models, however plausible, give rise to different minimized variances. Practitioners often compare models by so-called cross-validation before finally choosing the most appropriate for their kriging. One proceeds as follows. One removes a unit (a sampling point) from the whole set, kriges the value there and compares the kriged value with the value observed to obtain the deviation or error. One repeats the process for each and every point in turn and for all plausible models. One then computes the mean errors (MEs) and the mean of the squared errors (MSEs). Ideally a squared error should equal the corresponding kriging variance (σ_K²), and so one is advised to choose the model for which on average the squared errors most nearly equal the kriging variances, i.e. the ratio MSDR = MSE/σ_K² ≈ 1. Maximum likelihood estimation of models almost guarantees that the MSDR equals 1, and so the kriging variances are unbiased predictors of the squared error across the region. The method is based on the assumption that the errors have a normal distribution. The squared deviation ratio (SDR) should therefore be distributed as χ² with one degree of freedom, with a median of 0.455. We have found that often the median of the SDR (MedSDR) is less, in some instances much less, than 0.455 even though the mean of the SDR is close to 1. It seems that in these cases the distributions of the errors are leptokurtic, i.e. they have an excess of predictions close to the true values, excesses near the extremes and a dearth of predictions in between. In these cases the kriging variances are poor measures of the uncertainty at individual sites. The uncertainty is typically under-estimated for the extreme observations and compensated for by over-estimating for other observations. Statisticians must tell users this when they present maps of predictions. We illustrate the situation with results from mapping salinity in land reclaimed from the Yangtze delta in the Gulf of Hangzhou, China. There the apparent electrical conductivity (ECa) of the topsoil was measured at 525 points in a field of 2.3 ha. The marginal distribution of the observations was strongly positively skewed, and so the observed ECa values were transformed to their logarithms to give an approximately symmetric distribution. That distribution was strongly platykurtic with short tails and no evident outliers. The logarithms were analysed as a mixed model of quadratic drift plus correlated random residuals with a spherical variogram. The kriged predictions deviated from their true values with an MSDR of 0.993, but with a MedSDR of 0.324. The coefficient of kurtosis of the deviations was 1.45, i.e. substantially larger than the 0 expected for a normal distribution. The reasons for this behaviour are being sought. The most likely explanation is that there are spatial outliers, i.e. points at which the observed values differ markedly from those at their closest neighbours.

  13. Particle sizes in Saturn's rings from UVIS stellar occultations 1. Variations with ring region

    NASA Astrophysics Data System (ADS)

    Colwell, J. E.; Esposito, L. W.; Cooney, J. H.

    2018-01-01

    The Cassini spacecraft's Ultraviolet Imaging Spectrograph (UVIS) includes a high speed photometer (HSP) that has observed stellar occultations by Saturn's rings with a radial resolution of ∼10 m. In the absence of intervening ring material, the time series of measurements by the HSP is described by Poisson statistics in which the variance equals the mean. The finite sizes of the ring particles occulting the star lead to a variance that is larger than the mean due to correlations in the blocking of photons due to finite particle size and due to random variations in the number of individual particles in each measurement area. This effect was first exploited by Showalter and Nicholson (1990) with the stellar occultation observed by Voyager 2. At a given optical depth, a larger excess variance corresponds to larger particles or clumps, which results in greater variation of the signal from measurement to measurement. Here we present analysis of the excess variance in occultations observed by Cassini UVIS. We observe differences in the best-fitting particle size in different ring regions. The C ring plateaus show a distinctly smaller effective particle size, R, than the background C ring, while the background C ring itself shows a positive correlation between R and optical depth. The innermost 700 km of the B ring has a distribution of excess variance with optical depth that is consistent with the C ring ramp and C ring but not with the remainder of the B1 region. The Cassini Division, while similar to the C ring in spectral and structural properties, has different trends in effective particle size with optical depth. There are discrete jumps in R on either side of the Cassini Division ramp, while the C ring ramp shows a smooth transition in R from the C ring to the B ring. The A ring is dominated by self-gravity wakes whose shadow size depends on the occultation geometry. The spectral "halo" regions around the strongest density waves in the A ring correspond to decreases in R. There is also a pronounced dip in R at the Mimas 5:3 bending wave corresponding to an increase in optical depth there, suggesting that at these waves small particles are liberated from clumps or self-gravity wakes leading to a reduction in effective particle size and an increase in optical depth.
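
    A hedged sketch of the basic statistic behind this analysis: for HSP photon counts grouped into radial bins, the excess variance is the amount by which the sample variance exceeds the Poisson expectation (the mean). The binning below is illustrative; the paper's geometry-dependent modelling of the excess is not reproduced.

    ```python
    import numpy as np

    def excess_variance_per_bin(counts, samples_per_bin):
        """Sample variance minus mean for consecutive groups of HSP count samples.

        counts          -- 1-D array of raw photon counts along the occultation track
        samples_per_bin -- number of consecutive samples grouped into each radial bin
        """
        n_bins = counts.size // samples_per_bin
        binned = counts[:n_bins * samples_per_bin].reshape(n_bins, samples_per_bin)
        mean = binned.mean(axis=1)
        var = binned.var(axis=1, ddof=1)
        return var - mean   # > 0 when finite particle/clump sizes correlate the blocking
    ```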

  14. Estimation and Uncertainty Analysis of Impacts of Future Heat Waves on Mortality in the Eastern United States

    PubMed Central

    Wu, Jianyong; Zhou, Ying; Gao, Yang; Fu, Joshua S.; Johnson, Brent A.; Huang, Cheng; Kim, Young-Min

    2013-01-01

    Background: Climate change is anticipated to influence heat-related mortality in the future. However, estimates of excess mortality attributable to future heat waves are subject to large uncertainties and have not been projected under the latest greenhouse gas emission scenarios. Objectives: We estimated future heat wave mortality in the eastern United States (approximately 1,700 counties) under two Representative Concentration Pathways (RCPs) and investigated sources of uncertainty. Methods: Using dynamically downscaled hourly temperature projections for 2057–2059, we projected heat wave days that were defined using four heat wave metrics and estimated the excess mortality attributable to them. We apportioned the sources of uncertainty in excess mortality estimates using a variance-decomposition method. Results: Estimates suggest that excess mortality attributable to heat waves in the eastern United States would result in 200–7,807 deaths/year (mean 2,379 deaths/year) in 2057–2059. Average excess mortality projections under RCP4.5 and RCP8.5 scenarios were 1,403 and 3,556 deaths/year, respectively. Excess mortality would be relatively high in the southern states and eastern coastal areas (excluding Maine). The major sources of uncertainty were the relative risk estimates for mortality on heat wave versus non–heat wave days, the RCP scenarios, and the heat wave definitions. Conclusions: Mortality risks from future heat waves may be an order of magnitude higher than the mortality risks reported in 2002–2004, with thousands of heat wave–related deaths per year in the study area projected under the RCP8.5 scenario. Substantial spatial variability in county-level heat mortality estimates suggests that effective mitigation and adaptation measures should be developed based on spatially resolved data. Citation: Wu J, Zhou Y, Gao Y, Fu JS, Johnson BA, Huang C, Kim YM, Liu Y. 2014. Estimation and uncertainty analysis of impacts of future heat waves on mortality in the eastern United States. Environ Health Perspect 122:10–16; http://dx.doi.org/10.1289/ehp.1306670 PMID:24192064
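
    The variance-apportionment step can be illustrated with a simple sum-of-squares decomposition over a factorial table of excess-mortality estimates; the factor levels and numbers below are invented, and the study's own variance-decomposition method is not reproduced.

```python
import numpy as np
import pandas as pd

# Hypothetical excess-mortality estimates for every combination of factors.
rng = np.random.default_rng(3)
grid = pd.MultiIndex.from_product(
    [["RCP4.5", "RCP8.5"], ["HW1", "HW2", "HW3", "HW4"], ["RR_low", "RR_mid", "RR_high"]],
    names=["scenario", "hw_definition", "relative_risk"],
).to_frame(index=False)
grid["excess_deaths"] = rng.normal(loc=2400, scale=800, size=len(grid))

grand_mean = grid["excess_deaths"].mean()
total_ss = ((grid["excess_deaths"] - grand_mean) ** 2).sum()
for factor in ["scenario", "hw_definition", "relative_risk"]:
    # Main-effect sum of squares: spread of the factor-level means around the grand mean.
    group_means = grid.groupby(factor)["excess_deaths"].transform("mean")
    factor_ss = ((group_means - grand_mean) ** 2).sum()
    print(f"{factor:15s} explains {100 * factor_ss / total_ss:5.1f}% of the variance")
```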

  15. Are stock prices too volatile to be justified by the dividend discount model?

    NASA Astrophysics Data System (ADS)

    Akdeniz, Levent; Salih, Aslıhan Altay; Ok, Süleyman Tuluğ

    2007-03-01

    This study investigates excess stock price volatility using the variance bound framework of LeRoy and Porter [The present-value relation: tests based on implied variance bounds, Econometrica 49 (1981) 555-574] and of Shiller [Do stock prices move too much to be justified by subsequent changes in dividends? Am. Econ. Rev. 71 (1981) 421-436.]. The conditional variance bound relationship is examined using cross-sectional data simulated from the general equilibrium asset pricing model of Brock [Asset prices in a production economy, in: J.J. McCall (Ed.), The Economics of Information and Uncertainty, University of Chicago Press, Chicago (for N.B.E.R.), 1982]. Results show that the conditional variance bounds hold, hence, our hypothesis of the validity of the dividend discount model cannot be rejected. Moreover, in our setting, markets are efficient and stock prices are neither affected by herd psychology nor by the outcome of noise trading by naive investors; thus, we are able to control for market efficiency. Consequently, we show that one cannot infer any conclusions about market efficiency from the unconditional variance bounds tests.

  16. Low iron stores: a risk factor for excessive hair loss in non-menopausal women.

    PubMed

    Deloche, Claire; Bastien, Philippe; Chadoutaud, Stéphanie; Galan, Pilar; Bertrais, Sandrine; Hercberg, Serge; de Lacharrière, Olivier

    2007-01-01

    Iron deficiency has been suspected to represent one of the possible causes of excessive hair loss in women. The aim of our study was to assess this relationship in a very large population of 5110 women aged between 35 and 60 years. Hair loss was evaluated using a standardized questionnaire sent to all volunteers. The iron status was assessed by a serum ferritin assay carried out in each volunteer. Multivariate analysis allowed us to identify three categories: "absence of hair loss" (43%), "moderate hair loss" (48%) and "excessive hair loss" (9%). Among the women affected by excessive hair loss, a larger proportion of women (59%) had low iron stores (< 40 microg/L) compared to the remainder of the population (48%). Analysis of variance and logistic regression show that a low iron store represents a risk factor for hair loss in non-menopausal women.

  17. 76 FR 47068 - Approval and Promulgation of Air Quality Implementation Plans; Delaware; Section 110(a)(2...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-04

    ... concerns involving provisions in existing SIPs and with EPA's statements in other proposals that it would...) Existing provisions related to excess emissions during periods of start-up, shutdown, or malfunction (SSM... (ii) existing provisions related to ``director's variance'' or ``director's discretion'' that purport...

  18. Attentional bias in excessive Internet gamers: Experimental investigations using an addiction Stroop and a visual probe.

    PubMed

    Jeromin, Franziska; Nyenhuis, Nele; Barke, Antonia

    2016-03-01

    Background and aims Internet Gaming Disorder is included in the Diagnostic and statistical manual of mental disorders (5th edition) as a disorder that merits further research. The diagnostic criteria are based on those for Substance Use Disorder and Gambling Disorder. Excessive gamblers and persons with Substance Use Disorder show attentional biases towards stimuli related to their addictions. We investigated whether excessive Internet gamers show a similar attentional bias, by using two established experimental paradigms. Methods We measured reaction times of excessive Internet gamers and non-gamers (N = 51, 23.7 ± 2.7 years) by using an addiction Stroop with computer-related and neutral words, as well as a visual probe with computer-related and neutral pictures. Mixed design analyses of variance with the between-subjects factor group (gamer/non-gamer) and the within-subjects factor stimulus type (computer-related/neutral) were calculated for the reaction times as well as for valence and familiarity ratings of the stimulus material. Results In the addiction Stroop, an interaction for group × word type was found: Only gamers showed longer reaction times to computer-related words compared to neutral words, thus exhibiting an attentional bias. In the visual probe, no differences in reaction time between computer-related and neutral pictures were found in either group, but the gamers were faster overall. Conclusions An attentional bias towards computer-related stimuli was found in excessive Internet gamers, by using an addiction Stroop but not by using a visual probe. A possible explanation for the discrepancy could lie in the fact that the visual probe may have been too easy for the gamers.

  19. Attentional bias in excessive Internet gamers: Experimental investigations using an addiction Stroop and a visual probe

    PubMed Central

    Jeromin, Franziska; Nyenhuis, Nele; Barke, Antonia

    2016-01-01

    Background and aims Internet Gaming Disorder is included in the Diagnostic and statistical manual of mental disorders (5th edition) as a disorder that merits further research. The diagnostic criteria are based on those for Substance Use Disorder and Gambling Disorder. Excessive gamblers and persons with Substance Use Disorder show attentional biases towards stimuli related to their addictions. We investigated whether excessive Internet gamers show a similar attentional bias, by using two established experimental paradigms. Methods We measured reaction times of excessive Internet gamers and non-gamers (N = 51, 23.7 ± 2.7 years) by using an addiction Stroop with computer-related and neutral words, as well as a visual probe with computer-related and neutral pictures. Mixed design analyses of variance with the between-subjects factor group (gamer/non-gamer) and the within-subjects factor stimulus type (computer-related/neutral) were calculated for the reaction times as well as for valence and familiarity ratings of the stimulus material. Results In the addiction Stroop, an interaction for group × word type was found: Only gamers showed longer reaction times to computer-related words compared to neutral words, thus exhibiting an attentional bias. In the visual probe, no differences in reaction time between computer-related and neutral pictures were found in either group, but the gamers were faster overall. Conclusions An attentional bias towards computer-related stimuli was found in excessive Internet gamers, by using an addiction Stroop but not by using a visual probe. A possible explanation for the discrepancy could lie in the fact that the visual probe may have been too easy for the gamers. PMID:28092198

  20. Bayesian generalized least squares regression with application to log Pearson type 3 regional skew estimation

    NASA Astrophysics Data System (ADS)

    Reis, D. S.; Stedinger, J. R.; Martins, E. S.

    2005-10-01

    This paper develops a Bayesian approach to analysis of a generalized least squares (GLS) regression model for regional analyses of hydrologic data. The new approach allows computation of the posterior distributions of the parameters and the model error variance using a quasi-analytic approach. Two regional skew estimation studies illustrate the value of the Bayesian GLS approach for regional statistical analysis of a shape parameter and demonstrate that regional skew models can be relatively precise with effective record lengths in excess of 60 years. With Bayesian GLS the marginal posterior distribution of the model error variance and the corresponding mean and variance of the parameters can be computed directly, thereby providing a simple but important extension of the regional GLS regression procedures popularized by Tasker and Stedinger (1989), which is sensitive to the likely values of the model error variance when it is small relative to the sampling error in the at-site estimator.

  1. ON PREDICTING INFRAGRAVITY ENERGY IN THE SURF ZONE.

    USGS Publications Warehouse

    Sallenger,, Asbury H.; Holman, Robert A.; Edge, Billy L.

    1985-01-01

    Flow data were obtained in the surf zone across a barred profile during a storm. RMS cross-shore velocities due to waves in the infragravity band (wave periods greater than 20 s) had maxima in excess of 0.5 m/s over the bar crest. For comparison to measured spectra, synthetic spectra of cross-shore flow were computed using measured nearshore profiles. The structure, in the infragravity band, of these synthetic spectra corresponded reasonably well with the structure of the measured spectra. Total variances of measured cross-shore flow within the infragravity band were nondimensionalized by dividing by total infragravity variances of synthetic spectra. These nondimensional variances were independent of distance offshore and increased with the square of the breaker height. Thus, cross-shore flow due to infragravity waves can be estimated with knowledge of the nearshore profile and incident wave conditions.

  2. Excessive consumption of dietary supplements among professionals working in gyms in Pelotas, Rio Grande do Sul State, Brazil, 2012.

    PubMed

    Cava, Tatiane Araujo; Madruga, Samanta Winck; Teixeira, Gesiane Dias Trindade; Reichert, Felipe Fossati; Silva, Marcelo Cozzensa da; Rombaldi, Airton José

    2017-01-01

    To investigate the prevalence and factors associated with excessive consumption of dietary supplements among professionals working at gyms in Pelotas, Rio Grande do Sul State, Brazil. This is a cross-sectional study with all local fitness professionals identified in 2012; excessive consumption of dietary supplements was defined as the use of three or more types of supplements simultaneously; multivariate analysis was carried out using Poisson regression with robust variance. A total of 497 professionals were interviewed; the prevalence of excessive consumption of dietary supplements was 10.5% (95%CI 7.9;13.5); there was an association with male sex (PR=3.2; 95%CI 1.6;6.7) and with length of time of dietary supplement consumption ≥4 years when compared to <1 year (PR=2.8; 95%CI 1.7;4.7); lower consumption was found among professionals with higher levels of education, regardless of whether they had a degree in physical education or not (p=0.007). The prevalence of excessive consumption of dietary supplements can be considered high and was associated with sociodemographic variables.
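
    Poisson regression with a robust (sandwich) variance is a standard way to estimate prevalence ratios for a binary outcome in cross-sectional data; the sketch below, using statsmodels and invented covariates (sex and years of supplement use), illustrates that generic technique rather than the authors' analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Invented cross-sectional data: binary outcome (excessive supplement use).
rng = np.random.default_rng(7)
n = 497
df = pd.DataFrame({
    "male": rng.integers(0, 2, n),
    "years_use_ge4": rng.integers(0, 2, n),
})
# Generate the outcome with a log link so the true prevalence ratios are exp(0.9) and exp(0.7).
risk = np.exp(-2.2 + 0.9 * df["male"] + 0.7 * df["years_use_ge4"])
df["excessive"] = rng.binomial(1, risk)

X = sm.add_constant(df[["male", "years_use_ge4"]])
# "Modified Poisson" regression: Poisson family with a robust (HC0) covariance.
fit = sm.GLM(df["excessive"], X, family=sm.families.Poisson()).fit(cov_type="HC0")
print(np.exp(fit.params))       # prevalence ratio estimates
print(np.exp(fit.conf_int()))   # robust 95% confidence intervals
```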

  3. Interpersonal Problems and Their Relationship to Depression, Self-Esteem, and Malignant Self-Regard.

    PubMed

    Huprich, Steven K; Lengu, Ketrin; Evich, Carly

    2016-12-01

    DSM-5 Section III recommends that level of personality functioning be assessed. This requires an assessment of self and other representations. Malignant self-regard (MSR) is a way of assessing the level of functioning of those with a masochistic, self-defeating, depressive, or vulnerably narcissistic personality. In Study 1, 840 undergraduates were assessed for MSR, depressive symptoms, self-esteem, anaclitic and introjective depression, and interpersonal problems. MSR, self-esteem, depressive symptoms, and anaclitic and introjective depression were correlated with multiple dimensions of interpersonal problems, and MSR predicted the most variance in interpersonal scales measuring social inhibition, nonassertion, over-accommodation, and excessive self-sacrifice. MSR, anaclitic, and introjective depression predicted unique variance in six of the eight domains of interpersonal problems assessed. In Study 2, 68 undergraduates were provided positive or negative feedback. Consistent with theory, MSR predicted unique variance in state anxiety but not state anger. Results support the validity of the MSR construct.

  4. Consistency between the luminosity function of resolved millisecond pulsars and the galactic center excess

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ploeg, Harrison; Gordon, Chris; Crocker, Roland

    Fermi Large Area Telescope data reveal an excess of GeV gamma rays from the direction of the Galactic Center and bulge. Several explanations have been proposed for this excess including an unresolved population of millisecond pulsars (MSPs) and self-annihilating dark matter. It has been claimed that a key discriminant for or against the MSP explanation can be extracted from the properties of the luminosity function describing this source population. Specifically, is the luminosity function of the putative MSPs in the Galactic Center consistent with that characterizing the resolved MSPs in the Galactic disk? To investigate this we have used a Bayesian Markov Chain Monte Carlo to evaluate the posterior distribution of the parameters of the MSP luminosity function describing both resolved MSPs and the Galactic Center excess. At variance with some other claims, our analysis reveals that, within current uncertainties, both data sets can be well fit with the same luminosity function.

  5. The relationship between psychosocial job stress and burnout in emergency departments: an exploratory study.

    PubMed

    García-Izquierdo, Mariano; Ríos-Rísquez, María Isabel

    2012-01-01

    The purpose of this study was to examine the relationship and predictive power of various psychosocial job stressors for the 3 dimensions of burnout in emergency departments. This study was structured as a cross-sectional design, with a questionnaire as the tool. The data were gathered using an anonymous questionnaire in 3 hospitals in Spain. The sample consisted of 191 emergency department nurses. Burnout was evaluated by the Maslach Burnout Inventory and the job stressors by the Nursing Stress Scale. The Burnout Model in this study consisted of 3 dimensions: emotional exhaustion, cynicism, and reduced professional efficacy. The model that predicted the emotional exhaustion dimension was formed by 2 variables: Excessive workload and lack of emotional support. These 2 variables explained 19.4% of variance in emotional exhaustion. Cynicism had 4 predictors that explained 25.8% of variance: Interpersonal conflicts, lack of social support, excessive workload, and type of contract. Finally, variability in reduced professional efficacy was predicted by 3 variables: Interpersonal conflicts, lack of social support, and the type of shift worked, which explained 10.4% of variance. From the point of view of nurse leaders, organizational interventions, and the management of human resources, this analysis of the principal causes of burnout is particularly useful to select, prioritize, and implement preventive measures that will improve the quality of care offered to patients and the well-being of personnel. Copyright © 2012 Elsevier Inc. All rights reserved.

  6. A nonparametric mean-variance smoothing method to assess Arabidopsis cold stress transcriptional regulator CBF2 overexpression microarray data.

    PubMed

    Hu, Pingsha; Maiti, Tapabrata

    2011-01-01

    Microarray is a powerful tool for genome-wide gene expression analysis. In microarray expression data, often mean and variance have certain relationships. We present a non-parametric mean-variance smoothing method (NPMVS) to analyze differentially expressed genes. In this method, a nonlinear smoothing curve is fitted to estimate the relationship between mean and variance. Inference is then made upon shrinkage estimation of posterior means assuming variances are known. Different methods have been applied to simulated datasets, in which a variety of mean and variance relationships were imposed. The simulation study showed that NPMVS outperformed the other two popular shrinkage estimation methods in some mean-variance relationships; and NPMVS was competitive with the two methods in other relationships. A real biological dataset, in which a cold stress transcription factor gene, CBF2, was overexpressed, has also been analyzed with the three methods. Gene ontology and cis-element analysis showed that NPMVS identified more cold and stress responsive genes than the other two methods did. The good performance of NPMVS is mainly due to its shrinkage estimation for both means and variances. In addition, NPMVS exploits a non-parametric regression between mean and variance, instead of assuming a specific parametric relationship between mean and variance. The source code written in R is available from the authors on request.
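
    The idea of borrowing strength across genes by smoothing the mean-variance relationship can be sketched as follows; this uses a lowess trend and an ad hoc shrinkage weight, which is only in the spirit of NPMVS, and the paper's actual shrinkage estimation of posterior means and variances is not reproduced.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(11)
n_genes, n_reps = 2000, 3
true_sd = 0.2 + 0.1 * rng.random(n_genes)
true_means = rng.normal(8, 2, n_genes)
data = rng.normal(true_means[:, None], true_sd[:, None], size=(n_genes, n_reps))

gene_mean = data.mean(axis=1)
gene_var = data.var(axis=1, ddof=1)          # noisy with only 3 replicates

# Non-parametric trend of log-variance versus mean across all genes.
trend = lowess(np.log(gene_var + 1e-8), gene_mean, frac=0.3, return_sorted=False)
prior_var = np.exp(trend)

# Shrink each gene's variance toward the fitted trend (the weight is a tuning choice).
w = 0.7
var_shrunk = w * prior_var + (1 - w) * gene_var
print(np.column_stack([gene_var[:5], var_shrunk[:5]]))
```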

  7. A Nonparametric Mean-Variance Smoothing Method to Assess Arabidopsis Cold Stress Transcriptional Regulator CBF2 Overexpression Microarray Data

    PubMed Central

    Hu, Pingsha; Maiti, Tapabrata

    2011-01-01

    Microarray is a powerful tool for genome-wide gene expression analysis. In microarray expression data, often mean and variance have certain relationships. We present a non-parametric mean-variance smoothing method (NPMVS) to analyze differentially expressed genes. In this method, a nonlinear smoothing curve is fitted to estimate the relationship between mean and variance. Inference is then made upon shrinkage estimation of posterior means assuming variances are known. Different methods have been applied to simulated datasets, in which a variety of mean and variance relationships were imposed. The simulation study showed that NPMVS outperformed the other two popular shrinkage estimation methods in some mean-variance relationships; and NPMVS was competitive with the two methods in other relationships. A real biological dataset, in which a cold stress transcription factor gene, CBF2, was overexpressed, has also been analyzed with the three methods. Gene ontology and cis-element analysis showed that NPMVS identified more cold and stress responsive genes than the other two methods did. The good performance of NPMVS is mainly due to its shrinkage estimation for both means and variances. In addition, NPMVS exploits a non-parametric regression between mean and variance, instead of assuming a specific parametric relationship between mean and variance. The source code written in R is available from the authors on request. PMID:21611181

  8. Ex Post Facto Monte Carlo Variance Reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Booth, Thomas E.

    The variance in Monte Carlo particle transport calculations is often dominated by a few particles whose importance increases manyfold on a single transport step. This paper describes a novel variance reduction method that uses a large importance change as a trigger to resample the offending transport step. That is, the method is employed only after (ex post facto) a random walk attempts a transport step that would otherwise introduce a large variance in the calculation. Improvements in two Monte Carlo transport calculations are demonstrated empirically using an ex post facto method. First, the method is shown to reduce the variance in a penetration problem with a cross-section window. Second, the method empirically appears to modify a point detector estimator from an infinite variance estimator to a finite variance estimator.

  9. 77 FR 76099 - Yorkville ETF Trust and Yorkville ETF Advisors, LLC; Notice of Application

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-26

    ... given trading day, or from day to day, such variances occur as a result of third party market forces... that the proposed distribution system will be orderly because competitive forces will ensure that the... include concerns about undue influence by a fund of funds over underlying funds, excessive layering of...

  10. A Flexible Framework Hydrological Informatic Modeling System - HIMS

    NASA Astrophysics Data System (ADS)

    WANG, L.; Wang, Z.; Changming, L.; Li, J.; Bai, P.

    2017-12-01

    Simulating the water cycle in a way that fits the temporal and spatial characteristics of the study area is important for flood prediction and accurate streamflow simulation, because soil properties, landscape, climate, and land management are the critical factors shaping the non-linear rainfall-runoff relationship at watershed scales. Most existing hydrological models, with a fixed single structure and mode, cannot simulate the water cycle at different places with customized mechanisms. This study develops the Hydro-Informatic Modeling System (HIMS) model to address this problem, with a module for each critical hydrological process and multiple options for various scenarios. HIMS has a structure accounting for the two runoff generation mechanisms of infiltration excess and saturation excess, and estimates runoff with different methods including the Time Variance Gain Model (TVGM) and LCM, which performs well in ungauged areas, besides the widely used Soil Conservation Service Curve Number (SCS-CN) method. The channel routing module contains the widely used Muskingum method and the kinematic wave equation with a new solution method. HIMS model performance with its symbolic runoff generation model LCM was evaluated through comparison with observed streamflow datasets of the Lhasa River watershed at hourly, daily, and monthly time steps. Comparisons between simulated and observed streamflows gave NSE values higher than 0.87 and WE within ±20%. A water balance analysis of precipitation, streamflow, actual evapotranspiration (ET), and soil moisture change was conducted at the annual time step, and comparison with literature results for the Lhasa River watershed indicated that the HIMS model performance is reliable.
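
    Of the runoff-generation options listed, the SCS-CN method is the most standard and simplest to illustrate; a minimal sketch in millimetres with the usual initial-abstraction ratio of 0.2 follows. The TVGM and LCM formulations are specific to HIMS and are not reproduced here.

```python
def scs_cn_runoff(precip_mm: float, curve_number: float) -> float:
    """Direct runoff depth (mm) from the SCS Curve Number method."""
    S = 25400.0 / curve_number - 254.0      # potential maximum retention (mm)
    Ia = 0.2 * S                            # initial abstraction (common default ratio)
    if precip_mm <= Ia:
        return 0.0
    return (precip_mm - Ia) ** 2 / (precip_mm - Ia + S)

# Example: a 60 mm storm on a watershed with CN = 75.
print(f"runoff = {scs_cn_runoff(60.0, 75.0):.1f} mm")
```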

  11. Statistical study of EBR-II fuel elements manufactured by the cold line at Argonne-West and by Atomics International

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harkness, A. L.

    1977-09-01

    Nine elements from each batch of fuel elements manufactured for the EBR-II reactor have been analyzed for ²³⁵U content by NDA methods. These values, together with those of the manufacturer, are used to estimate the product variance and the variances of the two measuring methods. These variances are compared with the variances computed from the stipulations of the contract. A method is derived for resolving the several variances into their within-batch and between-batch components. Some of these variance components have also been estimated by independent and more familiar conventional methods for comparison.
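
    In its simplest balanced form, the within-batch and between-batch resolution described here is the classical one-way ANOVA method-of-moments decomposition; the sketch below uses made-up assay values with nine elements per batch, as in the record, and twelve hypothetical batches.

```python
import numpy as np

def variance_components(batches):
    """Method-of-moments within- and between-batch variances (balanced one-way design)."""
    batches = np.asarray(batches, dtype=float)   # shape: (n_batches, n_per_batch)
    k, n = batches.shape
    grand = batches.mean()
    msb = n * ((batches.mean(axis=1) - grand) ** 2).sum() / (k - 1)   # between-batch mean square
    msw = batches.var(axis=1, ddof=1).mean()                          # within-batch mean square
    return msw, max(0.0, (msb - msw) / n)

rng = np.random.default_rng(5)
batch_means = rng.normal(66.0, 0.3, size=12)                 # 12 hypothetical batches
data = rng.normal(batch_means[:, None], 0.2, size=(12, 9))   # 9 elements per batch
within, between = variance_components(data)
print(f"within-batch variance  = {within:.4f}")
print(f"between-batch variance = {between:.4f}")
```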

  12. Association between overuse of mobile phones on quality of sleep and general health among occupational health and safety students.

    PubMed

    Eyvazlou, Meysam; Zarei, Esmaeil; Rahimi, Azin; Abazari, Malek

    2016-01-01

    Concerns about health problems due to the increasing use of mobile phones are growing. Excessive use of mobile phones can affect the quality of sleep, one of the important issues in the health literature, and the general health of people. Therefore, this study investigated the relationship between the excessive use of mobile phones and general health and quality of sleep among 450 Occupational Health and Safety (OH&S) students in five universities of medical sciences in the North East of Iran in 2014. To achieve this objective, the Cell Phone Overuse Scale, the Pittsburgh Sleep Quality Index (PSQI) and the General Health Questionnaire (GHQ) were used. In addition to descriptive statistical methods, independent t-tests, Pearson correlation, analysis of variance (ANOVA) and multiple regression tests were performed. The results revealed that half of the students had a poor level of sleep quality and most of them were considered unhealthy. The Pearson correlation coefficient indicated a significant association between the excessive use of mobile phones and the total score of general health and the quality of sleep. In addition, the results of the multiple regression showed that the excessive use of mobile phones has a significant relationship with each of the four subscales of general health and with the quality of sleep. Furthermore, the results of the multivariate regression indicated that the quality of sleep has a simultaneous effect on each of the four scales of general health. Overall, a simultaneous study of the effects of mobile phones on the quality of sleep and general health could serve as a trigger for intervention programs to improve students' general health status, quality of sleep and, consequently, educational performance.

  13. Adaptive Prior Variance Calibration in the Bayesian Continual Reassessment Method

    PubMed Central

    Zhang, Jin; Braun, Thomas M.; Taylor, Jeremy M.G.

    2012-01-01

    Use of the Continual Reassessment Method (CRM) and other model-based approaches to design in Phase I clinical trials has increased due to the ability of the CRM to identify the maximum tolerated dose (MTD) better than the 3+3 method. However, the CRM can be sensitive to the variance selected for the prior distribution of the model parameter, especially when a small number of patients are enrolled. While methods have emerged to adaptively select skeletons and to calibrate the prior variance only at the beginning of a trial, there has not been any approach developed to adaptively calibrate the prior variance throughout a trial. We propose three systematic approaches to adaptively calibrate the prior variance during a trial and compare them via simulation to methods proposed to calibrate the variance at the beginning of a trial. PMID:22987660

  14. Online Estimation of Allan Variance Coefficients Based on a Neural-Extended Kalman Filter

    PubMed Central

    Miao, Zhiyong; Shen, Feng; Xu, Dingjie; He, Kunpeng; Tian, Chunmiao

    2015-01-01

    As a noise analysis method for inertial sensors, the traditional Allan variance method requires the storage of a large amount of data and manual analysis for an Allan variance graph. Although the existing online estimation methods avoid the storage of data and the painful procedure of drawing slope lines for estimation, they require complex transformations and even cause errors during the modeling of dynamic Allan variance. To solve these problems, first, a new state-space model that directly models the stochastic errors to obtain a nonlinear state-space model was established for inertial sensors. Then, a neural-extended Kalman filter algorithm was used to estimate the Allan variance coefficients. The real noises of an ADIS16405 IMU and fiber optic gyro-sensors were analyzed by the proposed method and traditional methods. The experimental results show that the proposed method is more suitable to estimate the Allan variance coefficients than the traditional methods. Moreover, the proposed method effectively avoids the storage of data and can be easily implemented using an online processor. PMID:25625903
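
    For reference, the traditional (non-overlapping) Allan variance that the filter is designed to track can be computed offline in a few lines; the sketch below uses simulated white gyro noise rather than ADIS16405 or fiber-optic gyro data, and does not implement the neural-extended Kalman filter itself.

```python
import numpy as np

def allan_variance(y, fs, m):
    """Non-overlapping Allan variance at averaging factor m (tau = m / fs)."""
    y = np.asarray(y, dtype=float)
    n_blocks = len(y) // m
    block_means = y[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(block_means) ** 2), m / fs

fs = 100.0                                   # sample rate (Hz)
rng = np.random.default_rng(2)
gyro = 0.01 * rng.standard_normal(200_000)   # simulated angle-rate noise
for m in (1, 10, 100, 1000):
    avar, tau = allan_variance(gyro, fs, m)
    print(f"tau = {tau:7.2f} s   Allan deviation = {np.sqrt(avar):.5f}")
```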

  15. Measuring the scale dependence of intrinsic alignments using multiple shear estimates

    NASA Astrophysics Data System (ADS)

    Leonard, C. Danielle; Mandelbaum, Rachel

    2018-06-01

    We present a new method for measuring the scale dependence of the intrinsic alignment (IA) contamination to the galaxy-galaxy lensing signal, which takes advantage of multiple shear estimation methods applied to the same source galaxy sample. By exploiting the resulting correlation of both shape noise and cosmic variance, our method can provide an increase in the signal-to-noise of the measured IA signal as compared to methods which rely on the difference of the lensing signal from multiple photometric redshift bins. For a galaxy-galaxy lensing measurement which uses LSST sources and DESI lenses, the signal-to-noise on the IA signal from our method is predicted to improve by a factor of ∼2 relative to the method of Blazek et al. (2012), for pairs of shear estimates which yield substantially different measured IA amplitudes and highly correlated shape noise terms. We show that statistical error necessarily dominates the measurement of intrinsic alignments using our method. We also consider a physically motivated extension of the Blazek et al. (2012) method which assumes that all nearby galaxy pairs, rather than only excess pairs, are subject to IA. In this case, the signal-to-noise of the method of Blazek et al. (2012) is improved.

  16. Evaluation of three lidar scanning strategies for turbulence measurements

    NASA Astrophysics Data System (ADS)

    Newman, J. F.; Klein, P. M.; Wharton, S.; Sathe, A.; Bonin, T. A.; Chilson, P. B.; Muschinski, A.

    2015-11-01

    Several errors occur when a traditional Doppler-beam swinging (DBS) or velocity-azimuth display (VAD) strategy is used to measure turbulence with a lidar. To mitigate some of these errors, a scanning strategy was recently developed which employs six beam positions to independently estimate the u, v, and w velocity variances and covariances. In order to assess the ability of these different scanning techniques to measure turbulence, a Halo scanning lidar, WindCube v2 pulsed lidar and ZephIR continuous wave lidar were deployed at field sites in Oklahoma and Colorado with collocated sonic anemometers. Results indicate that the six-beam strategy mitigates some of the errors caused by VAD and DBS scans, but the strategy is strongly affected by errors in the variance measured at the different beam positions. The ZephIR and WindCube lidars overestimated horizontal variance values by over 60 % under unstable conditions as a result of variance contamination, where additional variance components contaminate the true value of the variance. A correction method was developed for the WindCube lidar that uses variance calculated from the vertical beam position to reduce variance contamination in the u and v variance components. The correction method reduced WindCube variance estimates by over 20 % at both the Oklahoma and Colorado sites under unstable conditions, when variance contamination is largest. This correction method can be easily applied to other lidars that contain a vertical beam position and is a promising method for accurately estimating turbulence with commercially available lidars.

  17. Evaluation of three lidar scanning strategies for turbulence measurements

    NASA Astrophysics Data System (ADS)

    Newman, Jennifer F.; Klein, Petra M.; Wharton, Sonia; Sathe, Ameya; Bonin, Timothy A.; Chilson, Phillip B.; Muschinski, Andreas

    2016-05-01

    Several errors occur when a traditional Doppler beam swinging (DBS) or velocity-azimuth display (VAD) strategy is used to measure turbulence with a lidar. To mitigate some of these errors, a scanning strategy was recently developed which employs six beam positions to independently estimate the u, v, and w velocity variances and covariances. In order to assess the ability of these different scanning techniques to measure turbulence, a Halo scanning lidar, WindCube v2 pulsed lidar, and ZephIR continuous wave lidar were deployed at field sites in Oklahoma and Colorado with collocated sonic anemometers. Results indicate that the six-beam strategy mitigates some of the errors caused by VAD and DBS scans, but the strategy is strongly affected by errors in the variance measured at the different beam positions. The ZephIR and WindCube lidars overestimated horizontal variance values by over 60 % under unstable conditions as a result of variance contamination, where additional variance components contaminate the true value of the variance. A correction method was developed for the WindCube lidar that uses variance calculated from the vertical beam position to reduce variance contamination in the u and v variance components. The correction method reduced WindCube variance estimates by over 20 % at both the Oklahoma and Colorado sites under unstable conditions, when variance contamination is largest. This correction method can be easily applied to other lidars that contain a vertical beam position and is a promising method for accurately estimating turbulence with commercially available lidars.

  18. An exploration of the influence of thinness expectancies and eating pathology on compensatory exercise.

    PubMed

    Garner, Ashton; Davis-Becker, Kendra; Fischer, Sarah

    2014-08-01

    Compensatory exercise (exercise performed in an effort to control weight/shape or in response to caloric intake) and thinness expectancies (beliefs that thinness will improve the overall quality of life) are strongly linked to the development, maintenance, severity, and outcome of eating disorders. There is little literature, however, examining the relationship between compensatory exercise and thinness expectancies. The goal of the current study was to examine whether thinness expectancies contribute significant variance in the endorsement of excessive exercise over and above binge eating, restraint, and shape and weight concerns. A total of 677 undergraduate women (mean age=18.73) completed self-report measures of thinness expectancies and eating disorder symptoms (TREI and EDE-Q). There was a significant association between thinness expectancies and frequency of compensatory exercise behavior. Restraint and subjective binge episodes accounted for significant variance in compensatory exercise. Frequency of objective binge episodes did not, nor did endorsement of thinness expectancies. These findings suggest a potential profile of individuals engaging in compensatory exercise as individuals who actively restrict their diets, feel as if they have binged when they violate those restrictions, and feel the need to excessively exercise to compensate for the subjective binges. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Methods to Estimate the Variance of Some Indices of the Signal Detection Theory: A Simulation Study

    ERIC Educational Resources Information Center

    Suero, Manuel; Privado, Jesús; Botella, Juan

    2017-01-01

    A simulation study is presented to evaluate and compare three methods to estimate the variance of the estimates of the parameters d' and C of the signal detection theory (SDT). Several methods have been proposed to calculate the variance of their estimators, d' and c. Those methods have been mostly assessed by…
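
    For context, the SDT indices in question and a classical delta-method approximation for the sampling variance of d' (in the style of Gourevitch and Galanter) can be written down directly; the sketch below shows one such analytic formula that simulation studies of this kind evaluate, not the study's own code, and it assumes hit and false-alarm rates strictly between 0 and 1.

```python
import numpy as np
from scipy.stats import norm

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """d', criterion c, and an approximate sampling variance for d'."""
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    h = hits / n_signal                       # hit rate (must be strictly in (0, 1))
    f = false_alarms / n_noise                # false-alarm rate (must be strictly in (0, 1))
    zh, zf = norm.ppf(h), norm.ppf(f)
    d_prime = zh - zf
    c = -0.5 * (zh + zf)
    # Delta-method variance approximation for d'.
    var_d = (h * (1 - h) / (n_signal * norm.pdf(zh) ** 2)
             + f * (1 - f) / (n_noise * norm.pdf(zf) ** 2))
    return d_prime, c, var_d

d, c, v = sdt_indices(hits=40, misses=10, false_alarms=12, correct_rejections=38)
print(f"d' = {d:.3f}, c = {c:.3f}, SE(d') ≈ {np.sqrt(v):.3f}")
```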

  20. Turbulence Variance Characteristics in the Unstable Atmospheric Boundary Layer above Flat Pine Forest

    NASA Astrophysics Data System (ADS)

    Asanuma, Jun

    Variances of the velocity components and scalars are important as indicators of turbulence intensity. They can also be utilized to estimate surface fluxes in several types of "variance methods", and the estimated fluxes can be regional values if the variances from which they are calculated are regionally representative measurements. Motivated by these considerations, variances measured by an aircraft in the unstable ABL over a flat pine forest during HAPEX-Mobilhy were analyzed within the context of similarity scaling arguments. The variances of temperature and vertical velocity within the atmospheric surface layer were found to follow closely the Monin-Obukhov similarity (MOS) theory, and to yield reasonable estimates of the surface sensible heat fluxes when used in variance methods. This validates the variance methods with aircraft measurements. On the other hand, the specific humidity variances were influenced by the surface heterogeneity and clearly failed to obey MOS. A simple analysis based on the similarity law for free convection produced a comprehensible and quantitative picture regarding the effect of surface flux heterogeneity on the statistical moments, and revealed that variances of the active and passive scalars become dissimilar because of their different roles in turbulence. The analysis also indicated that the mean quantities are affected by the heterogeneity as well, but to a lesser extent than the variances. The temperature variances in the mixed layer (ML) were examined by using a generalized top-down bottom-up diffusion model with some combinations of velocity scales and inversion flux models. The results showed that the surface shear stress exerts considerable influence on the lower ML. ML variance methods were also tested with the temperature and vertical velocity variances, and their feasibility was investigated. Finally, the variances in the ML were analyzed in terms of the local similarity concept; the results confirmed the original hypothesis by Panofsky and McCormick that the local scaling in terms of the local buoyancy flux defines the lower bound of the moments.

  1. Variance analysis of forecasted streamflow maxima in a wet temperate climate

    NASA Astrophysics Data System (ADS)

    Al Aamery, Nabil; Fox, James F.; Snyder, Mark; Chandramouli, Chandra V.

    2018-05-01

    Coupling global climate models, hydrologic models and extreme value analysis provides a method to forecast streamflow maxima; however, the elusive variance structure of the results hinders confidence in application. Directly correcting the bias of forecasts using the relative change between forecast and control simulations has been shown to marginalize hydrologic uncertainty, reduce model bias, and remove systematic variance when predicting mean monthly and mean annual streamflow, prompting our investigation for streamflow maxima. We assess the variance structure of streamflow maxima using realizations of emission scenario, global climate model type and project phase, downscaling methods, bias correction, extreme value methods, and hydrologic model inputs and parameterization. Results show that the relative change of streamflow maxima did not depend on systematic variance from the annual maxima versus peak over threshold method applied, although we stress that researchers should strictly adhere to rules from extreme value theory when applying the peak over threshold method. Regardless of which method is applied, extreme value model fitting does add variance to the projection, and the variance is an increasing function of the return period. Unlike the relative change of mean streamflow, results show that the variance of the maxima's relative change depended on all climate model factors tested as well as hydrologic model inputs and calibration. Ensemble projections forecast an increase of streamflow maxima for 2050 with pronounced forecast standard error, including increases of +30(±21), +38(±34) and +51(±85)% for 2-, 20- and 100-year streamflow events for the wet temperate region studied. The variance of maxima projections was dominated by climate model factors and extreme value analyses.
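
    The extreme value step in such a modelling chain, fitting maxima and reading off return levels, can be sketched with scipy's GEV implementation; the synthetic annual maxima below stand in for simulated streamflow, and the block-maxima route shown is only one of the two approaches (annual maxima versus peak over threshold) compared in the record.

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic 60-year annual maxima series (arbitrary flow units).
annual_maxima = genextreme.rvs(c=-0.1, loc=300.0, scale=80.0, size=60, random_state=42)

# Fit a GEV distribution to the annual maxima.
shape, loc, scale = genextreme.fit(annual_maxima)

# Return level for return period T years: the quantile exceeded with probability 1/T per year.
for T in (2, 20, 100):
    level = genextreme.isf(1.0 / T, shape, loc=loc, scale=scale)
    print(f"{T:4d}-year streamflow maximum ≈ {level:7.1f} (same units as input)")
```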

  2. Estimating integrated variance in the presence of microstructure noise using linear regression

    NASA Astrophysics Data System (ADS)

    Holý, Vladimír

    2017-07-01

    Using financial high-frequency data for estimation of the integrated variance of asset prices is beneficial, but with an increasing number of observations so-called microstructure noise occurs. This noise can significantly bias the realized variance estimator. We propose a method for estimation of the integrated variance that is robust to microstructure noise, as well as for testing the presence of the noise. Our method utilizes linear regression in which realized variances estimated from different data subsamples act as the dependent variable while the number of observations acts as the explanatory variable. We compare the proposed estimator with other methods on simulated data for several microstructure noise structures.
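
    Under the common i.i.d. noise model the expected realized variance grows roughly linearly with the number of observations used, E[RV_n] ≈ IV + 2nω², so regressing realized variances computed at several subsampling frequencies on n recovers the integrated variance as the intercept; the sketch below illustrates that generic idea on simulated prices and should not be read as the paper's exact estimator or test.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 23_400                                      # one trading day of 1-second returns
true_vol = 0.01
efficient = np.cumsum(rng.normal(0, true_vol / np.sqrt(n), n))
noisy_log_price = efficient + rng.normal(0, 2e-4, n)    # add microstructure noise

sample_sizes, realized_vars = [], []
for step in (1, 5, 15, 30, 60, 120, 300):       # subsample at several frequencies
    prices = noisy_log_price[::step]
    realized_vars.append(np.sum(np.diff(prices) ** 2))
    sample_sizes.append(len(prices) - 1)

# Linear regression RV = a + b * n: the intercept a estimates the integrated variance.
b, a = np.polyfit(sample_sizes, realized_vars, deg=1)
print(f"intercept (IV estimate) = {a:.6f}, true IV = {true_vol**2:.6f}")
print(f"slope/2 (noise variance estimate) = {b / 2:.2e}")
```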

  3. On long-only information-based portfolio diversification framework

    NASA Astrophysics Data System (ADS)

    Santos, Raphael A.; Takada, Hellinton H.

    2014-12-01

    Using the concepts from information theory, it is possible to improve the traditional frameworks for long-only asset allocation. In modern portfolio theory, the investor has two basic procedures: the choice of a portfolio that maximizes its risk-adjusted excess return or the mixed allocation between the maximum Sharpe portfolio and the risk-free asset. In the literature, the first procedure was already addressed using information theory. One contribution of this paper is the consideration of the second procedure in the information theory context. The performance of these approaches was compared with three traditional asset allocation methodologies: the Markowitz's mean-variance, the resampled mean-variance and the equally weighted portfolio. Using simulated and real data, the information theory-based methodologies were verified to be more robust when dealing with the estimation errors.
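
    The classical benchmark for the second procedure is the tangency (maximum Sharpe) portfolio mixed with the risk-free asset; a minimal unconstrained sketch with invented inputs is given below. The information-theoretic allocations of the paper are not reproduced, and a real long-only implementation would need a constrained optimiser.

```python
import numpy as np

def tangency_weights(mu, cov, rf):
    """Unconstrained maximum Sharpe ratio (tangency) portfolio weights."""
    excess = np.asarray(mu) - rf
    raw = np.linalg.solve(np.asarray(cov), excess)   # Sigma^{-1} (mu - rf)
    return raw / raw.sum()

mu = np.array([0.08, 0.12, 0.10])                    # hypothetical expected returns
cov = np.array([[0.040, 0.006, 0.004],
                [0.006, 0.090, 0.010],
                [0.004, 0.010, 0.060]])              # hypothetical covariance matrix
rf = 0.03
w = tangency_weights(mu, cov, rf)
sharpe = (w @ mu - rf) / np.sqrt(w @ cov @ w)
print("weights:", np.round(w, 3), " Sharpe ratio:", round(sharpe, 3))

# Holding a fraction lambda in the tangency portfolio and (1 - lambda) in the
# risk-free asset traces out the capital allocation line.
```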

  4. Variance Difference between Maximum Likelihood Estimation Method and Expected A Posteriori Estimation Method Viewed from Number of Test Items

    ERIC Educational Resources Information Center

    Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S.

    2016-01-01

    The aim of this study is to determine variance difference between maximum likelihood and expected A posteriori estimation methods viewed from number of test items of aptitude test. The variance presents an accuracy generated by both maximum likelihood and Bayes estimation methods. The test consists of three subtests, each with 40 multiple-choice…

  5. Methods to estimate the between‐study variance and its uncertainty in meta‐analysis†

    PubMed Central

    Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian PT; Langan, Dean; Salanti, Georgia

    2015-01-01

    Meta‐analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between‐study variability, which is typically modelled using a between‐study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between‐study variance, has been long challenged. Our aim is to identify known methods for estimation of the between‐study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between‐study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel and for continuous data the restricted maximum likelihood estimator are better alternatives to estimate the between‐study variance. Based on the scenarios and results presented in the published studies, we recommend the Q‐profile method and the alternative approach based on a ‘generalised Cochran between‐study variance statistic’ to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence‐based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd. PMID:26332144
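
    As a point of reference for the estimators being compared, the long-challenged DerSimonian and Laird moment estimator of the between-study variance takes only a few lines; the study effects and within-study variances below are invented, and the record's recommendation is to prefer the Paule-Mandel or REML estimators with Q-profile-type confidence intervals in practice.

```python
import numpy as np

def dersimonian_laird_tau2(effects, variances):
    """Moment (DerSimonian-Laird) estimator of the between-study variance."""
    y = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)     # fixed-effect (inverse-variance) weights
    y_bar = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_bar) ** 2)                 # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (Q - (len(y) - 1)) / c)

effects = [0.30, 0.10, 0.45, 0.22, 0.05, 0.38]       # hypothetical study estimates
variances = [0.02, 0.03, 0.05, 0.01, 0.04, 0.02]     # hypothetical within-study variances
print(f"tau^2 (DL) = {dersimonian_laird_tau2(effects, variances):.4f}")
```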

  6. THE NANOGRAV NINE-YEAR DATA SET: EXCESS NOISE IN MILLISECOND PULSAR ARRIVAL TIMES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lam, M. T.; Jones, M. L.; McLaughlin, M. A.

    Gravitational wave (GW) astronomy using a pulsar timing array requires high-quality millisecond pulsars (MSPs), correctable interstellar propagation delays, and high-precision measurements of pulse times of arrival. Here we identify noise in timing residuals that exceeds that predicted for arrival time estimation for MSPs observed by the North American Nanohertz Observatory for Gravitational Waves. We characterize the excess noise using variance and structure function analyses. We find that 26 out of 37 pulsars show inconsistencies with a white-noise-only model based on the short timescale analysis of each pulsar, and we demonstrate that the excess noise has a red power spectrum for 15 pulsars. We also decompose the excess noise into chromatic (radio-frequency-dependent) and achromatic components. Associating the achromatic red-noise component with spin noise and including additional power-spectrum-based estimates from the literature, we estimate a scaling law in terms of spin parameters (frequency and frequency derivative) and data-span length and compare it to the scaling law of Shannon and Cordes. We briefly discuss our results in terms of detection of GWs at nanohertz frequencies.
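
    The structure function used in such characterizations has a simple definition, SF(τ) = ⟨[x(t+τ) − x(t)]²⟩; the sketch below computes it for an evenly sampled series, whereas real timing residuals are unevenly sampled and the published analysis bins pairs by lag instead.

```python
import numpy as np

def structure_function(x, max_lag):
    """First-order structure function of an evenly sampled series."""
    x = np.asarray(x, dtype=float)
    lags = np.arange(1, max_lag + 1)
    return lags, np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

rng = np.random.default_rng(9)
# Red-noise-like residuals (random walk) versus white noise.
red = np.cumsum(rng.normal(0, 0.1, 1000))
white = rng.normal(0, 0.1, 1000)
for name, series in (("red", red), ("white", white)):
    lags, sf = structure_function(series, max_lag=100)
    # The red-noise structure function grows with lag; white noise stays flat.
    print(f"{name:5s} SF(1)={sf[0]:.4f}  SF(100)={sf[-1]:.4f}")
```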

  7. Evaluation of three lidar scanning strategies for turbulence measurements

    DOE PAGES

    Newman, Jennifer F.; Klein, Petra M.; Wharton, Sonia; ...

    2016-05-03

    Several errors occur when a traditional Doppler beam swinging (DBS) or velocity-azimuth display (VAD) strategy is used to measure turbulence with a lidar. To mitigate some of these errors, a scanning strategy was recently developed which employs six beam positions to independently estimate the u, v, and w velocity variances and covariances. In order to assess the ability of these different scanning techniques to measure turbulence, a Halo scanning lidar, WindCube v2 pulsed lidar, and ZephIR continuous wave lidar were deployed at field sites in Oklahoma and Colorado with collocated sonic anemometers. Results indicate that the six-beam strategy mitigates some of the errors caused by VAD and DBS scans, but the strategy is strongly affected by errors in the variance measured at the different beam positions. The ZephIR and WindCube lidars overestimated horizontal variance values by over 60 % under unstable conditions as a result of variance contamination, where additional variance components contaminate the true value of the variance. A correction method was developed for the WindCube lidar that uses variance calculated from the vertical beam position to reduce variance contamination in the u and v variance components. The correction method reduced WindCube variance estimates by over 20 % at both the Oklahoma and Colorado sites under unstable conditions, when variance contamination is largest. This correction method can be easily applied to other lidars that contain a vertical beam position and is a promising method for accurately estimating turbulence with commercially available lidars.

  8. Evaluation of three lidar scanning strategies for turbulence measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newman, Jennifer F.; Klein, Petra M.; Wharton, Sonia

    Several errors occur when a traditional Doppler beam swinging (DBS) or velocity-azimuth display (VAD) strategy is used to measure turbulence with a lidar. To mitigate some of these errors, a scanning strategy was recently developed which employs six beam positions to independently estimate the u, v, and w velocity variances and covariances. In order to assess the ability of these different scanning techniques to measure turbulence, a Halo scanning lidar, WindCube v2 pulsed lidar, and ZephIR continuous wave lidar were deployed at field sites in Oklahoma and Colorado with collocated sonic anemometers. Results indicate that the six-beam strategy mitigates some of the errors caused by VAD and DBS scans, but the strategy is strongly affected by errors in the variance measured at the different beam positions. The ZephIR and WindCube lidars overestimated horizontal variance values by over 60 % under unstable conditions as a result of variance contamination, where additional variance components contaminate the true value of the variance. A correction method was developed for the WindCube lidar that uses variance calculated from the vertical beam position to reduce variance contamination in the u and v variance components. The correction method reduced WindCube variance estimates by over 20 % at both the Oklahoma and Colorado sites under unstable conditions, when variance contamination is largest. This correction method can be easily applied to other lidars that contain a vertical beam position and is a promising method for accurately estimating turbulence with commercially available lidars.

  9. A Survey of X-Ray Variability in Seyfert 1 Galaxies with XMM-Newton to study the soft excess and the broad Fe lines

    NASA Astrophysics Data System (ADS)

    Ponti, Gabriele

    The nature of the soft excess and the presence of the broad Fe lines is still nowadays highly debated because the different absorption/emission models are degenerate. Spectral variability studies have the potential to break this degeneracy. I will present the results of a spectral variability RMS survey of the 36 brightest type 1 Seyfert galaxies observed by XMM-Newton for more than 30 ks. More than 80% of the sources are significantly variable, as already measured, on longer timescales, with RXTE (Markowitz et al. 2004). About half of the sample show lower variability in the soft energy band, indicating that the emission from the soft excess is more stable than that of the continuum, while the other sources show a soft excess that is as variable as the continuum. About half of the sample do not show an excess of variability where the warm absorber component imprints its stronger features, suggesting that for these sources the soft excess is not produced by a relativistic absorbing wind. In a few bright and well exposed sources it has been possible to measure an excess of variability at the energy of the broad component of the Fe K line, in agreement with the broad emission line interpretation. For the sources where more than one observation was available, the stability of the shape of the RMS spectrum has been investigated. Moreover, the results of the computation of the excess variance of all the radio quiet type 1 AGN in the XMM-Newton database will be presented. The relations between variability, black hole mass, accretion rate and luminosity are investigated and their scatter measured.
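
    The excess variance commonly quoted in this kind of AGN variability work is the sample variance of the light curve minus the mean squared measurement error, usually normalized by the squared mean count rate (σ²_NXS); the generic sketch below is not tied to the specific RMS-spectrum pipeline of this survey.

```python
import numpy as np

def normalized_excess_variance(rate, rate_err):
    """sigma^2_NXS = (S^2 - <sigma_err^2>) / <x>^2 for a binned light curve."""
    rate = np.asarray(rate, dtype=float)
    err = np.asarray(rate_err, dtype=float)
    s2 = rate.var(ddof=1)                    # sample variance of the light curve
    return (s2 - np.mean(err ** 2)) / rate.mean() ** 2

rng = np.random.default_rng(4)
# Simulated light curve: intrinsic variability plus measurement noise.
true_rate = 10.0 * (1.0 + 0.2 * np.sin(np.linspace(0, 6 * np.pi, 300)))
err = np.full(300, 0.5)
observed = true_rate + rng.normal(0, err)
print(f"sigma^2_NXS = {normalized_excess_variance(observed, err):.4f}")
```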

  10. Deterministic theory of Monte Carlo variance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ueki, T.; Larsen, E.W.

    1996-12-31

    The theoretical estimation of variance in Monte Carlo transport simulations, particularly those using variance reduction techniques, is a substantially unsolved problem. In this paper, the authors describe a theory that predicts the variance in a variance reduction method proposed by Dwivedi. Dwivedi's method combines the exponential transform with angular biasing. The key element of this theory is a new modified transport problem, containing the Monte Carlo weight w as an extra independent variable, which simulates Dwivedi's Monte Carlo scheme. The (deterministic) solution of this modified transport problem yields an expression for the variance. The authors give computational results that validate this theory.

  11. Defining Soil Materials for 3-D Models of the Near Surface: Preliminary Findings

    DTIC Science & Technology

    2012-03-01

    ... platykurtic. The corresponding box plot and 95 percent confidence intervals for mean and median are below the histogram. ... platykurtic (Table 3). A higher kurtosis means more of the variance in a dataset is the result of infrequent extreme ... Descriptive term: > 1.0 excessively peaked (leptokurtic); 1.0 normally peaked (mesokurtic); < 1.0 deficiently peaked (platykurtic). 4.2.3 Applications of ...

  12. A note on variance estimation in random effects meta-regression.

    PubMed

    Sidik, Kurex; Jonkman, Jeffrey N

    2005-01-01

    For random effects meta-regression inference, variance estimation for the parameter estimates is discussed. Because estimated weights are used for meta-regression analysis in practice, the assumed or estimated covariance matrix used in meta-regression is not strictly correct, due to possible errors in estimating the weights. Therefore, this note investigates the use of a robust variance estimation approach for obtaining variances of the parameter estimates in random effects meta-regression inference. This method treats the assumed covariance matrix of the effect measure variables as a working covariance matrix. Using an example of meta-analysis data from clinical trials of a vaccine, the robust variance estimation approach is illustrated in comparison with two other methods of variance estimation. A simulation study is presented, comparing the three methods of variance estimation in terms of bias and coverage probability. We find that, despite the seeming suitability of the robust estimator for random effects meta-regression, the improved variance estimator of Knapp and Hartung (2003) yields the best performance among the three estimators, and thus may provide the best protection against errors in the estimated weights.

  13. Spatial Prediction and Optimized Sampling Design for Sodium Concentration in Groundwater

    PubMed Central

    Shabbir, Javid; M. AbdEl-Salam, Nasser; Hussain, Tajammal

    2016-01-01

    Sodium is an integral part of water, and its excessive amount in drinking water causes high blood pressure and hypertension. In the present paper, spatial distribution of sodium concentration in drinking water is modeled and optimized sampling designs for selecting sampling locations is calculated for three divisions in Punjab, Pakistan. Universal kriging and Bayesian universal kriging are used to predict the sodium concentrations. Spatial simulated annealing is used to generate optimized sampling designs. Different estimation methods (i.e., maximum likelihood, restricted maximum likelihood, ordinary least squares, and weighted least squares) are used to estimate the parameters of the variogram model (i.e, exponential, Gaussian, spherical and cubic). It is concluded that Bayesian universal kriging fits better than universal kriging. It is also observed that the universal kriging predictor provides minimum mean universal kriging variance for both adding and deleting locations during sampling design. PMID:27683016

  14. Cure Schedule for Stycast 2651/Catalyst 11.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kropka, Jamie Michael; McCoy, John D.

    2017-11-01

    The Henkel technical data sheet (TDS) for Stycast 2651/Catalyst 11 lists three alternate cure schedules for the material, each of which would result in a different state of reaction and different material properties. Here, a cure schedule that attains full reaction of the material is defined. The use of this cure schedule will eliminate variance in material properties due to changes in the cure state of the material, and the cure schedule will serve as the method to make material prior to characterizing properties. The following recommendation was motivated by (1) a desire to cure at a single temperature for ease of manufacture and (2) a desire to keep the cure temperature low (to minimize residual stress build-up associated with the cooldown from the cure temperature to room temperature) without excessively limiting the cure reaction due to vitrification (i.e., material glass transition temperature, Tg, exceeding cure temperature).

  15. VizieR Online Data Catalog: RR Lyrae in SDSS Stripe 82 (Suveges+, 2012)

    NASA Astrophysics Data System (ADS)

    Suveges, M.; Sesar, B.; Varadi, M.; Mowlavi, N.; Becker, A. C.; Ivezic, Z.; Beck, M.; Nienartowicz, K.; Rimoldini, L.; Dubath, P.; Bartholdi, P.; Eyer, L.

    2013-05-01

    We propose a robust principal component analysis framework for the exploitation of multiband photometric measurements in large surveys. Period search results are improved using the time-series of the first principal component due to its optimized signal-to-noise ratio. The presence of correlated excess variations in the multivariate time-series enables the detection of weaker variability. Furthermore, the direction of the largest variance differs for certain types of variable stars. This can be used as an efficient attribute for classification. The application of the method to a subsample of Sloan Digital Sky Survey Stripe 82 data yielded 132 high-amplitude delta Scuti variables. We also found 129 new RR Lyrae variables, complementary to the catalogue of Sesar et al., extending the halo area mapped by Stripe 82 RR Lyrae stars towards the Galactic bulge. The sample also comprises 25 multiperiodic or Blazhko RR Lyrae stars. (8 data files).

  16. Microclimate monitoring of Ariadne's house (Pompeii, Italy) for preventive conservation of fresco paintings.

    PubMed

    Merello, Paloma; García-Diego, Fernando-Juan; Zarzo, Manuel

    2012-11-28

    Ariadne's house, located at the city center of ancient Pompeii, is of great archaeological value due to the fresco paintings decorating several rooms. In order to assess the risks for long-term conservation affecting the valuable mural paintings, 26 temperature data-loggers and 26 relative humidity data-loggers were located in four rooms of the house for the monitoring of ambient conditions. Data recorded during 372 days were analyzed by means of graphical descriptive methods and analysis of variance (ANOVA). Results revealed an effect of the roof type and number of walls of the room. Excessive temperatures were observed during the summer in rooms covered with transparent roofs, and corrective actions were taken. Moreover, higher humidity values were recorded by sensors on the floor level. The present work provides guidelines about the type, number, calibration and position of thermohygrometric sensors recommended for the microclimate monitoring of mural paintings in outdoor or semi-confined environments.

  17. A method to estimate the contribution of regional genetic associations to complex traits from summary association statistics.

    PubMed

    Pare, Guillaume; Mao, Shihong; Deng, Wei Q

    2016-06-08

    Despite considerable efforts, known genetic associations only explain a small fraction of predicted heritability. Regional associations combine information from multiple contiguous genetic variants and can improve variance explained at established association loci. However, regional associations are not easily amenable to estimation using summary association statistics because of sensitivity to linkage disequilibrium (LD). We now propose a novel method, LD Adjusted Regional Genetic Variance (LARGV), to estimate phenotypic variance explained by regional associations using summary statistics while accounting for LD. Our method is asymptotically equivalent to a multiple linear regression model when no interaction or haplotype effects are present. It has several applications, such as ranking of genetic regions according to variance explained or comparison of variance explained by two or more regions. Using height and BMI data from the Health Retirement Study (N = 7,776), we show that most genetic variance lies in a small proportion of the genome and that previously identified linkage peaks have higher than expected regional variance.

  18. A method to estimate the contribution of regional genetic associations to complex traits from summary association statistics

    PubMed Central

    Pare, Guillaume; Mao, Shihong; Deng, Wei Q.

    2016-01-01

    Despite considerable efforts, known genetic associations only explain a small fraction of predicted heritability. Regional associations combine information from multiple contiguous genetic variants and can improve variance explained at established association loci. However, regional associations are not easily amenable to estimation using summary association statistics because of sensitivity to linkage disequilibrium (LD). We now propose a novel method, LD Adjusted Regional Genetic Variance (LARGV), to estimate phenotypic variance explained by regional associations using summary statistics while accounting for LD. Our method is asymptotically equivalent to a multiple linear regression model when no interaction or haplotype effects are present. It has several applications, such as ranking of genetic regions according to variance explained or comparison of variance explained by two or more regions. Using height and BMI data from the Health Retirement Study (N = 7,776), we show that most genetic variance lies in a small proportion of the genome and that previously identified linkage peaks have higher than expected regional variance. PMID:27273519

  19. Comparing transformation methods for DNA microarray data

    PubMed Central

    Thygesen, Helene H; Zwinderman, Aeilko H

    2004-01-01

    Background When DNA microarray data are used for gene clustering, genotype/phenotype correlation studies, or tissue classification, the signal intensities are usually transformed and normalized in several steps in order to improve comparability and signal/noise ratio. These steps may include subtraction of an estimated background signal, subtracting the reference signal, smoothing (to account for nonlinear measurement effects), and more. Different authors use different approaches, and it is generally not clear to users which method they should prefer. Results We used the ratio between biological variance and measurement variance (which is an F-like statistic) as a quality measure for transformation methods, and we demonstrate a method for maximizing that variance ratio on real data. We explore a number of transformation issues, including Box-Cox transformation, baseline shift, partial subtraction of the log-reference signal and smoothing. It appears that the optimal choice of parameters for the transformation methods depends on the data. Further, the behavior of the variance ratio, under the null hypothesis of zero biological variance, appears to depend on the choice of parameters. Conclusions The use of replicates in microarray experiments is important. Adjustment for the null-hypothesis behavior of the variance ratio is critical to the selection of transformation method. PMID:15202953

  20. Comparing transformation methods for DNA microarray data.

    PubMed

    Thygesen, Helene H; Zwinderman, Aeilko H

    2004-06-17

    When DNA microarray data are used for gene clustering, genotype/phenotype correlation studies, or tissue classification, the signal intensities are usually transformed and normalized in several steps in order to improve comparability and signal/noise ratio. These steps may include subtraction of an estimated background signal, subtracting the reference signal, smoothing (to account for nonlinear measurement effects), and more. Different authors use different approaches, and it is generally not clear to users which method they should prefer. We used the ratio between biological variance and measurement variance (which is an F-like statistic) as a quality measure for transformation methods, and we demonstrate a method for maximizing that variance ratio on real data. We explore a number of transformation issues, including Box-Cox transformation, baseline shift, partial subtraction of the log-reference signal and smoothing. It appears that the optimal choice of parameters for the transformation methods depends on the data. Further, the behavior of the variance ratio, under the null hypothesis of zero biological variance, appears to depend on the choice of parameters. The use of replicates in microarray experiments is important. Adjustment for the null-hypothesis behavior of the variance ratio is critical to the selection of transformation method.
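
    A hedged sketch of the central idea, assuming a genes-by-replicates matrix of strictly positive intensities: compute the F-like ratio of biological (between-gene) to measurement (within-gene) variance and scan Box-Cox parameters for the value that maximizes it (function and argument names are illustrative):

```python
import numpy as np
from scipy import stats

def variance_ratio(data):
    """F-like ratio of biological to measurement variance for a
    genes x replicates matrix of transformed intensities."""
    between = np.var(data.mean(axis=1), ddof=1)          # biological variance
    within = np.mean(np.var(data, axis=1, ddof=1))       # measurement variance
    return between / within

def best_boxcox_lambda(raw, lambdas=np.linspace(-1, 2, 31)):
    """Pick the Box-Cox lambda that maximizes the variance ratio.
    `raw` must contain strictly positive intensities."""
    ratios = [variance_ratio(stats.boxcox(raw.ravel(), lmbda=lam).reshape(raw.shape))
              for lam in lambdas]
    best = int(np.argmax(ratios))
    return lambdas[best], ratios[best]
```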

  1. SU-F-T-39: Comparing Nomograms for Ordering of Palladium-103 Seeds for Dynamic Intraoperative Prostate Seed Implantation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, P; Wang, L; Riegel, A

    Purpose: Several nomograms exist for the purpose of ordering palladium-103 seeds for permanent prostate seed implants. Excess seeds pose additional radiation safety risks and increase the cost of care. This study compared three seed ordering nomograms with seed counts from dynamic intra-operative PSI to determine (1) the cause of excess seeds and (2) the optimal nomogram for our institution. Methods: Pre-operative and intra-operative clinical data were collected for 100 Gy (n=151) and 125 Gy (n=224) prostate seed implants. The number of implanted seeds which would have given D90=100% was normalized to that criterion and a seed strength of 2U. This was plotted against intra-operative prostate volume and compared to two previously published nomograms and an in-house nomogram. A linear fit was produced and confidence intervals were calculated. The causes of excess seeds were assessed by comparing pre- and intra-operative prostate volumes, variability of D90 around 100%, and variance of seed strength from 2U. Results: Of the 375 total cases, 97.6% had excess seeds. On average, 27.17±12.91% of ordered seeds were wasted. Of this percentage, 6.98±5.47% of excess seeds were due to overestimation of pre-operative prostate volume, 1.10±0.88% were due to D90<100%, 1.17±0.67% were due to seed strength over 2U, and 17.36±7.79% could not be directly attributed to a specific reason. The latter percentage may be due to overestimation of the in-house nomogram. Two of three nomograms substantially overestimated the number of seeds required. The third nomogram underestimated the required seed number for smaller prostate treatment volumes. A linear fit to the clinical data was derived and 99.9% confidence intervals were calculated. Conclusion: Over 85% of clinical cases wasted over 15% of ordered seeds. Two of three nomograms overestimated the required number of seeds. The upper 99.9% C.I. of the clinical data may provide a more reasonable nomogram for Pd-103 seed ordering.

  2. Methods for calculating confidence and credible intervals for the residual between-study variance in random effects meta-regression models

    PubMed Central

    2014-01-01

    Background Meta-regression is becoming increasingly used to model study level covariate effects. However this type of statistical analysis presents many difficulties and challenges. Here two methods for calculating confidence intervals for the magnitude of the residual between-study variance in random effects meta-regression models are developed. A further suggestion for calculating credible intervals using informative prior distributions for the residual between-study variance is presented. Methods Two recently proposed and, under the assumptions of the random effects model, exact methods for constructing confidence intervals for the between-study variance in random effects meta-analyses are extended to the meta-regression setting. The use of Generalised Cochran heterogeneity statistics is extended to the meta-regression setting and a Newton-Raphson procedure is developed to implement the Q profile method for meta-analysis and meta-regression. WinBUGS is used to implement informative priors for the residual between-study variance in the context of Bayesian meta-regressions. Results Results are obtained for two contrasting examples, where the first example involves a binary covariate and the second involves a continuous covariate. Intervals for the residual between-study variance are wide for both examples. Conclusions Statistical methods, and R computer software, are available to compute exact confidence intervals for the residual between-study variance under the random effects model for meta-regression. These frequentist methods are almost as easily implemented as their established counterparts for meta-analysis. Bayesian meta-regressions are also easily performed by analysts who are comfortable using WinBUGS. Estimates of the residual between-study variance in random effects meta-regressions should be routinely reported and accompanied by some measure of their uncertainty. Confidence and/or credible intervals are well-suited to this purpose. PMID:25196829
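
    To illustrate the Q profile idea extended to meta-regression, the sketch below inverts the generalised Q statistic with a bracketing root finder instead of the Newton-Raphson procedure described in the paper; the names and the upper search limit for the between-study variance are assumptions:

```python
import numpy as np
from scipy import optimize, stats

def q_statistic(tau2, y, v, X):
    """Generalised Q statistic of a meta-regression at a given tau^2."""
    w = 1.0 / (v + tau2)
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    resid = y - X @ beta
    return float(np.sum(w * resid**2))

def q_profile_ci(y, v, X, level=0.95, tau2_max=100.0):
    """Q-profile confidence interval for the residual between-study variance."""
    k, p = X.shape
    q_upper = stats.chi2.ppf(1 - (1 - level) / 2, df=k - p)  # defines the lower bound
    q_lower = stats.chi2.ppf((1 - level) / 2, df=k - p)      # defines the upper bound
    f_lo = lambda t: q_statistic(t, y, v, X) - q_upper
    f_hi = lambda t: q_statistic(t, y, v, X) - q_lower
    lower = 0.0 if f_lo(0.0) < 0 else optimize.brentq(f_lo, 0.0, tau2_max)
    upper = 0.0 if f_hi(0.0) < 0 else optimize.brentq(f_hi, 0.0, tau2_max)
    return lower, upper
```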

  3. Applications of non-parametric statistics and analysis of variance on sample variances

    NASA Technical Reports Server (NTRS)

    Myers, R. H.

    1981-01-01

    Nonparametric methods that are available for NASA-type applications are discussed. An attempt is made here to survey what can be used, to offer recommendations as to when each method would be applicable, and to compare the methods, when possible, with the usual normal-theory procedures that are available for the Gaussian analog. It is important here to point out the hypotheses that are being tested, the assumptions that are being made, and the limitations of the nonparametric procedures. The appropriateness of performing analysis of variance on sample variances is also discussed and studied. This procedure is followed in several NASA simulation projects. On the surface this would appear to be a reasonably sound procedure. However, the difficulties involved center around the normality problem and the basic homogeneous variance assumption that is made in usual analysis of variance problems. These difficulties are discussed and guidelines are given for using the methods.

  4. Estimation of genetic parameters and their sampling variances of quantitative traits in the type 2 modified augmented design

    USDA-ARS?s Scientific Manuscript database

    We proposed a method to estimate the error variance among non-replicated genotypes, thus to estimate the genetic parameters by using replicated controls. We derived formulas to estimate sampling variances of the genetic parameters. Computer simulation indicated that the proposed methods of estimatin...

  5. THE LONGEST TIMESCALE X-RAY VARIABILITY REVEALS EVIDENCE FOR ACTIVE GALACTIC NUCLEI IN THE HIGH ACCRETION STATE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang Youhong, E-mail: youhong.zhang@mail.tsinghua.edu.cn

    2011-01-01

    The All Sky Monitor (ASM) on board the Rossi X-ray Timing Explorer has continuously monitored a number of active galactic nuclei (AGNs) with similar sampling rates for 14 years, from 1996 January to 2009 December. Utilizing the archival ASM data of 27 AGNs, we calculate the normalized excess variances of the 300-day binned X-ray light curves on the longest timescale (between 300 days and 14 years) explored so far. The observed variance appears to be independent of AGN black-hole mass and bolometric luminosity. According to the scaling relation of black-hole mass (and bolometric luminosity) from galactic black hole X-ray binaries (GBHs) to AGNs, the break timescales that correspond to the break frequencies detected in the power spectral density (PSD) of our AGNs are larger than the binsize (300 days) of the ASM light curves. As a result, the singly broken power-law (soft-state) PSD predicts the variance to be independent of mass and luminosity. Nevertheless, the doubly broken power-law (hard-state) PSD predicts, with the widely accepted ratio of the two break frequencies, that the variance increases with increasing mass and decreases with increasing luminosity. Therefore, the independence of the observed variance on mass and luminosity suggests that AGNs should have soft-state PSDs. Taking into account the scaling of the break timescale with mass and luminosity synchronously, the observed variances are also more consistent with the soft-state than the hard-state PSD predictions. With the averaged variance of AGNs and the soft-state PSD assumption, we obtain a universal PSD amplitude of 0.030 ± 0.022. By analogy with the GBH PSDs in the high/soft state, the longest timescale variability supports the standpoint that AGNs are scaled-up GBHs in the high accretion state, as already implied by the direct PSD analysis.
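
    The abstract does not restate the estimator, but the normalized excess variance conventionally used for such light curves is the sample variance minus the mean squared measurement error, scaled by the squared mean count rate; a minimal sketch:

```python
import numpy as np

def normalized_excess_variance(flux, flux_err):
    """Normalized excess variance of a binned light curve."""
    flux = np.asarray(flux, float)
    s2 = np.var(flux, ddof=1)                    # total observed variance
    mse = np.mean(np.asarray(flux_err) ** 2)     # mean measurement-error variance
    return (s2 - mse) / np.mean(flux) ** 2
```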

  6. New Variance-Reducing Methods for the PSD Analysis of Large Optical Surfaces

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2010-01-01

    Edge data of a measured surface map of a circular optic result in large variance or "spectral leakage" behavior in the corresponding Power Spectral Density (PSD) data. In this paper we present two new, alternative methods for reducing such variance in the PSD data by replacing the zeros outside the circular area of a surface map by non-zero values either obtained from a PSD fit (method 1) or taken from the inside of the circular area (method 2).

  7. Reproducibility of the Internal Load and Performance-Based Responses to Simulated Amateur Boxing.

    PubMed

    Thomson, Edward D; Lamb, Kevin L

    2017-12-01

    Thomson, ED and Lamb, KL. Reproducibility of the internal load and performance-based responses to simulated amateur boxing. J Strength Cond Res 31(12): 3396-3402, 2017-The aim of this study was to examine the reproducibility of the internal load and performance-based responses to repeated bouts of a three-round amateur boxing simulation protocol (boxing conditioning and fitness test [BOXFIT]). Twenty-eight amateur boxers completed 2 familiarization trials before performing 2 complete trials of the BOXFIT, separated by 4-7 days. To characterize the internal load, mean (HRmean) and peak (HRpeak) heart rate, breath-by-breath oxygen uptake (V̇O2), aerobic energy expenditure, excess carbon dioxide production (CO2excess), and ratings of perceived exertion were recorded throughout each round, and blood lactate determined post-BOXFIT. Additionally, an indication of the performance-based demands of the BOXFIT was provided by a measure of acceleration of the punches thrown in each round. Analyses revealed there were no significant differences (p > 0.05) between repeated trials in any round for all dependent measures. The typical error (coefficient of variation %) for all but 1 marker of internal load (CO2excess) was 1.2-16.5% and reflected a consistency that was sufficient for the detection of moderate changes in variables owing to an intervention. The reproducibility of the punch accelerations was high (coefficient of variation % range = 2.1-2.7%). In general, these findings suggest that the internal load and performance-based efforts recorded during the BOXFIT are reproducible and, thereby, offer practitioners a method by which meaningful changes impacting on performance could be identified.
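
    A small sketch of the reliability statistics quoted above, assuming one vector of values per trial for the same participants; the typical error is the standard deviation of the between-trial differences divided by sqrt(2), expressed here as a percentage of the grand mean rather than via the log-transformed formulation sometimes used:

```python
import numpy as np

def typical_error_cv(trial1, trial2):
    """Typical error and coefficient of variation (%) for two repeated trials."""
    t1, t2 = np.asarray(trial1, float), np.asarray(trial2, float)
    te = np.std(t2 - t1, ddof=1) / np.sqrt(2)        # typical (within-subject) error
    cv_percent = 100.0 * te / np.mean(np.concatenate([t1, t2]))
    return te, cv_percent
```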

  8. Short version of the Smartphone Addiction Scale adapted to Spanish and French: Towards a cross-cultural research in problematic mobile phone use.

    PubMed

    Lopez-Fernandez, Olatz

    2017-01-01

    Research into smartphone addiction has followed the scientific literature on problematic mobile phone use developed during the last decade, with valid screening scales being developed to identify maladaptive behaviour associated with this technology, usually in adolescent populations. This study adapts the short version of the Smartphone Addiction Scale [SAS-SV] into Spanish and into French. The aim of the study was to (i) examine the scale's psychometric properties in both languages, (ii) estimate the prevalence of potential excessive smartphone use among Spanish and Belgian adults, and (iii) compare the addictive symptomatology measured by the SAS-SV between potentially excessive users from both countries. Data were collected via online surveys administered to 281 and 144 voluntary participants from both countries respectively, aged over 18 years and recruited from academic environments. Results indicated that the reliability was excellent (i.e., Cronbach alphas: Spain: .88 and Belgium: .90), and the validity was very good (e.g., unifactoriality, with 49% and 54% of variance explained through exploratory factor analysis, respectively). Findings showed that the prevalence of potential excessive smartphone use was 12.5% for Spanish participants and 21.5% for francophone Belgians. The scale showed that at least 60% of excessive users endorsed withdrawal and tolerance symptoms in both countries, although the proposed addictive symptomatology did not cover the entire group of estimated excessive users and cultural differences appeared. This first cross-cultural study discusses the excessive smartphone use construct from the perspective of its addictive pathway. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.

    PubMed

    Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio

    2017-11-01

    Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
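
    A hedged sketch of the model structure only: EMG is simulated as zero-mean Gaussian noise whose variance follows an inverse gamma distribution, and a crude square-and-smooth proxy is fitted in place of the paper's marginal-likelihood estimator (the window length and all names are assumptions):

```python
import numpy as np
from scipy import stats

def simulate_emg(n, alpha, beta, seed=None):
    """Simulate EMG samples under the variance-distribution model."""
    rng = np.random.default_rng(seed)
    var = stats.invgamma.rvs(alpha, scale=beta, size=n, random_state=rng)
    return rng.normal(0.0, np.sqrt(var)), var

def crude_variance_fit(emg, window=200):
    """Crude stand-in for the estimator: square and smooth the EMG to get a
    running variance proxy, then fit an inverse gamma distribution to it."""
    power = np.asarray(emg, float) ** 2
    smoothed = np.convolve(power, np.ones(window) / window, mode="valid")
    alpha_hat, _, beta_hat = stats.invgamma.fit(smoothed, floc=0.0)
    return alpha_hat, beta_hat
```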

  10. Alcohol hangover symptoms and their contribution to the overall hangover severity.

    PubMed

    Penning, Renske; McKinney, Adele; Verster, Joris C

    2012-01-01

    Scientific literature suggests a large number of symptoms that may be present the day after excessive alcohol consumption. The purpose of this study was to explore the presence and severity of hangover symptoms, and determine their interrelationship. A survey was conducted among n = 1410 Dutch students examining their drinking behavior and latest alcohol hangover. The severity of 47 presumed hangover symptoms were scored on a 10-point scale ranging from 0 (absent) to 10 (maximal). Factor analysis was conducted to summarize the data into groups of associated symptoms that contribute significantly to the alcohol hangover and symptoms that do not. About half of the participants (56.1%, n = 791) reported having had a hangover during the past month. Most commonly reported and most severe hangover symptoms were fatigue (95.5%) and thirst (89.1%). Factor analysis revealed 11 factors that together account for 62% of variance. The most prominent factor 'drowsiness' (explained variance 28.8%) included symptoms such as drowsiness, fatigue, sleepiness and weakness. The second factor 'cognitive problems' (explained variance 5.9%) included symptoms such as reduced alertness, memory and concentration problems. Other factors, including the factor 'disturbed water balance' comprising frequently reported symptoms such as 'dry mouth' and 'thirst', contributed much less to the overall hangover (explained variance <5%). Drowsiness and impaired cognitive functioning are the two dominant features of alcohol hangover.

  11. Methods to Estimate the Between-Study Variance and Its Uncertainty in Meta-Analysis

    ERIC Educational Resources Information Center

    Veroniki, Areti Angeliki; Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian P. T.; Langan, Dean; Salanti, Georgia

    2016-01-01

    Meta-analyses are typically used to estimate the overall/mean of an outcome of interest. However, inference about between-study variability, which is typically modelled using a between-study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between-study variance,…

  12. Applying an economical scale-aware PDF-based turbulence closure model in NOAA NCEP GCMs.

    NASA Astrophysics Data System (ADS)

    Belochitski, A.; Krueger, S. K.; Moorthi, S.; Bogenschutz, P.; Cheng, A.

    2017-12-01

    A novel unified representation of sub-grid scale (SGS) turbulence, cloudiness, and shallow convection is being implemented into the NOAA NCEP Global Forecasting System (GFS) general circulation model. The approach, known as Simplified High Order Closure (SHOC), is based on predicting a joint PDF of SGS thermodynamic variables and vertical velocity, and using it to diagnose turbulent diffusion coefficients, SGS fluxes, condensation, and cloudiness. Unlike other similar methods, comparatively few new prognostic variables need to be introduced, making the technique computationally efficient. In the base version of SHOC this is the SGS turbulent kinetic energy (TKE), and in the developmental version the SGS TKE plus the variances of total water and moist static energy (MSE). SHOC is now incorporated into a version of GFS that will become a part of the NOAA Next Generation Global Prediction System based around NOAA GFDL's FV3 dynamical core, the NOAA Environmental Modeling System (NEMS) coupled modeling infrastructure software, and a set of novel physical parameterizations. Turbulent diffusion coefficients computed by SHOC are now used in place of those produced by the boundary layer turbulence and shallow convection parameterizations. The large-scale microphysics scheme is no longer used to calculate cloud fraction or the large-scale condensation/deposition; instead, SHOC provides these quantities. The radiative transfer parameterization uses the cloudiness computed by SHOC. An outstanding problem with the implementation of SHOC in the NCEP global models is excessively large high-level tropical cloudiness. Comparison of the moments of the SGS PDF diagnosed by SHOC to the moments calculated in a GigaLES simulation of a tropical deep convection case (GATE) shows that SHOC diagnoses PDF distributions of total cloud water and MSE that are too narrow in the areas of deep convective detrainment. A subsequent sensitivity study of SHOC's diagnosed cloud fraction (CF) to higher-order input moments of the SGS PDF demonstrated that CF is improved if SHOC is provided with correct variances of total water and MSE. Consequently, SHOC was modified to include two new prognostic equations for the variances of total water and MSE, and coupled with the Chikira-Sugiyama parameterization of deep convection to include the effects of detrainment on the prognostic variances.

  13. Meta-analysis with missing study-level sample variance data.

    PubMed

    Chowdhry, Amit K; Dworkin, Robert H; McDermott, Michael P

    2016-07-30

    We consider a study-level meta-analysis with a normally distributed outcome variable and possibly unequal study-level variances, where the object of inference is the difference in means between a treatment and control group. A common complication in such an analysis is missing sample variances for some studies. A frequently used approach is to impute the weighted (by sample size) mean of the observed variances (mean imputation). Another approach is to include only those studies with variances reported (complete case analysis). Both mean imputation and complete case analysis are only valid under the missing-completely-at-random assumption, and even then the inverse variance weights produced are not necessarily optimal. We propose a multiple imputation method employing gamma meta-regression to impute the missing sample variances. Our method takes advantage of study-level covariates that may be used to provide information about the missing data. Through simulation studies, we show that multiple imputation, when the imputation model is correctly specified, is superior to competing methods in terms of confidence interval coverage probability and type I error probability when testing a specified group difference. Finally, we describe a similar approach to handling missing variances in cross-over studies. Copyright © 2016 John Wiley & Sons, Ltd.
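
    A deliberately simplified stand-in for the proposed approach: regress the log of the reported sample variances on study-level covariates and draw imputations from that fit. The paper uses gamma meta-regression with proper multiple imputation (including parameter uncertainty), so the lognormal shortcut and all names below are assumptions:

```python
import numpy as np

def impute_missing_variances(v, Z, n_imputations=20, seed=None):
    """Impute missing sample variances from study-level covariates.

    v : sample variances with np.nan where missing, shape (k,)
    Z : covariate matrix including an intercept column, shape (k, q)
    """
    rng = np.random.default_rng(seed)
    v = np.asarray(v, float)
    obs = ~np.isnan(v)
    beta, *_ = np.linalg.lstsq(Z[obs], np.log(v[obs]), rcond=None)
    resid = np.log(v[obs]) - Z[obs] @ beta
    sigma = np.std(resid, ddof=Z.shape[1])
    imputations = []
    for _ in range(n_imputations):
        v_imp = v.copy()
        draw = Z[~obs] @ beta + rng.normal(0.0, sigma, size=int((~obs).sum()))
        v_imp[~obs] = np.exp(draw)        # back to the variance scale
        imputations.append(v_imp)
    return imputations                     # analyse each set and pool with Rubin's rules
```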

  14. Noise and drift analysis of non-equally spaced timing data

    NASA Technical Reports Server (NTRS)

    Vernotte, F.; Zalamansky, G.; Lantz, E.

    1994-01-01

    Generally, it is possible to obtain equally spaced timing data from oscillators. The measurement of the drifts and noises affecting oscillators is then performed by using a variance (Allan variance, modified Allan variance, or time variance) or a system of several variances (multivariance method). However, in some cases, several samples, or even several sets of samples, are missing. In the case of millisecond pulsar timing data, for instance, observations are quite irregularly spaced in time. Nevertheless, since some observations are very close together (one minute) and since the timing data sequence is very long (more than ten years), information on both short-term and long-term stability is available. Unfortunately, a direct variance analysis is not possible without interpolating missing data. Different interpolation algorithms (linear interpolation, cubic spline) are used to calculate variances in order to verify that they neither lose information nor add erroneous information. A comparison of the results of the different algorithms is given. Finally, the multivariance method was adapted to the measurement sequence of the millisecond pulsar timing data: the responses of each variance of the system are calculated for each type of noise and drift, with the same missing samples as in the pulsar timing sequence. An estimation of precision, dynamics, and separability of this method is given.
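
    A minimal sketch of one of the strategies described above: linearly interpolate the irregular timing data onto a regular grid and then compute an overlapping Allan variance (tau0 is the grid spacing, m the averaging factor; the multivariance system itself is not reproduced):

```python
import numpy as np

def allan_variance(phase, tau0, m):
    """Overlapping Allan variance of equally spaced phase data at tau = m * tau0."""
    x = np.asarray(phase, float)
    n = len(x)
    d2 = x[2 * m:n] - 2.0 * x[m:n - m] + x[:n - 2 * m]   # second differences
    return np.sum(d2**2) / (2.0 * (m * tau0) ** 2 * len(d2))

def allan_from_irregular(times, phase, tau0, m):
    """Resample irregular timing data onto a regular grid by linear
    interpolation before computing the Allan variance."""
    grid = np.arange(times[0], times[-1], tau0)
    return allan_variance(np.interp(grid, times, phase), tau0, m)
```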

  15. Variance estimation when using inverse probability of treatment weighting (IPTW) with survival analysis.

    PubMed

    Austin, Peter C

    2016-12-30

    Propensity score methods are used to reduce the effects of observed confounding when using observational data to estimate the effects of treatments or exposures. A popular method of using the propensity score is inverse probability of treatment weighting (IPTW). When using this method, a weight is calculated for each subject that is equal to the inverse of the probability of receiving the treatment that was actually received. These weights are then incorporated into the analyses to minimize the effects of observed confounding. Previous research has found that these methods result in unbiased estimation when estimating the effect of treatment on survival outcomes. However, conventional methods of variance estimation were shown to result in biased estimates of standard error. In this study, we conducted an extensive set of Monte Carlo simulations to examine different methods of variance estimation when using a weighted Cox proportional hazards model to estimate the effect of treatment. We considered three variance estimation methods: (i) a naïve model-based variance estimator; (ii) a robust sandwich-type variance estimator; and (iii) a bootstrap variance estimator. We considered estimation of both the average treatment effect and the average treatment effect in the treated. We found that the use of a bootstrap estimator resulted in approximately correct estimates of standard errors and confidence intervals with the correct coverage rates. The other estimators resulted in biased estimates of standard errors and confidence intervals with incorrect coverage rates. Our simulations were informed by a case study examining the effect of statin prescribing on mortality. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
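
    The bootstrap idea can be sketched compactly if the weighted Cox model is replaced by a simple IPTW-weighted difference in means (a simplification of the survival setting studied above; y, treat and ps stand for outcome, treatment indicator and estimated propensity score, and each resample is assumed to contain both groups):

```python
import numpy as np

def iptw_bootstrap_se(y, treat, ps, n_boot=1000, seed=None):
    """Point estimate and bootstrap standard error of an IPTW-weighted effect."""
    rng = np.random.default_rng(seed)
    y, treat, ps = (np.asarray(a, float) for a in (y, treat, ps))
    w = np.where(treat == 1, 1.0 / ps, 1.0 / (1.0 - ps))   # ATE weights

    def estimate(idx):
        yi, ti, wi = y[idx], treat[idx], w[idx]
        mu1 = np.average(yi[ti == 1], weights=wi[ti == 1])
        mu0 = np.average(yi[ti == 0], weights=wi[ti == 0])
        return mu1 - mu0

    n = len(y)
    boot = [estimate(rng.integers(0, n, n)) for _ in range(n_boot)]
    return estimate(np.arange(n)), np.std(boot, ddof=1)
```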

  16. Asian Adolescents with Excess Weight are at Higher Risk for Insulin Resistance than Non-Asian Peers.

    PubMed

    Elsamadony, Ahmed; Yates, Kathy F; Sweat, Victoria; Yau, Po Lai; Mangone, Alex; Joseph, Adriana; Fierman, Arthur; Convit, Antonio

    2017-11-01

    The purpose of this study was to evaluate whether Asian American adolescents have higher metabolic risk from excess weight than non-Asians. Seven hundred thirty-three students, aged 14 to 19 years old, completed a school-based health screening. The 427 Asian and 306 non-Asian students were overall equivalent on age, sex, and family income. Height, weight, waist circumference, percent body fat, and blood pressure were measured. Fasting triglycerides, high- and low-density lipoproteins, glucose, and insulin levels were measured. Asian and non-Asians in lean or overweight/obesity groups were contrasted on the five factors that make up the metabolic syndrome. Asian adolescents carrying excess weight had significantly higher insulin resistance (IR), triglyceride levels, and waist-height ratios (W/H), despite a significantly lower overall BMI than corresponding non-Asians. Similarly, Asians had a stronger relationship between W/H and the degree of IR than non-Asian counterparts; 35% and 18% of the variances were explained (R² = 0.35, R² = 0.18), respectively, resulting in a significant W/H by racial group interaction (F change [1,236] = 11.56, P < 0.01). Despite lower overall BMI, Asians have higher IR and triglyceride levels from excess weight than their non-Asian counterparts. One-size-fits-all public health policies targeting youth should be reconsidered and attention paid to Asian adolescents, including those with mild degrees of excess weight. © 2017 The Obesity Society.

  17. Automatic variance reduction for Monte Carlo simulations via the local importance function transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, S.A.

    1996-02-01

    The author derives a transformed transport problem that can be solved theoretically by analog Monte Carlo with zero variance. However, the Monte Carlo simulation of this transformed problem cannot be implemented in practice, so he develops a method for approximating it. The approximation to the zero variance method consists of replacing the continuous adjoint transport solution in the transformed transport problem by a piecewise continuous approximation containing local biasing parameters obtained from a deterministic calculation. He uses the transport and collision processes of the transformed problem to bias distance-to-collision and selection of post-collision energy groups and trajectories in a traditional Monte Carlo simulation of "real" particles. He refers to the resulting variance reduction method as the Local Importance Function Transform (LIFT) method. He demonstrates the efficiency of the LIFT method for several 3-D, linearly anisotropic scattering, one-group, and multigroup problems. In these problems the LIFT method is shown to be more efficient than the AVATAR scheme, which is one of the best variance reduction techniques currently available in a state-of-the-art Monte Carlo code. For most of the problems considered, the LIFT method produces higher figures of merit than AVATAR, even when the LIFT method is used as a "black box". There are some problems that cause trouble for most variance reduction techniques, and the LIFT method is no exception. For example, the author demonstrates that problems with voids, or low density regions, can cause a reduction in the efficiency of the LIFT method. However, the LIFT method still performs better than survival biasing and AVATAR in these difficult cases.

  18. Fat-free mass is not lower 24 months postbariatric surgery than nonoperated matched controls.

    PubMed

    Strain, Gladys Witt; Ebel, Faith; Honohan, Jamie; Gagner, Michel; Dakin, Gregory F; Pomp, Alfons; Gallagher, Dympna

    2017-01-01

    Concerns about an excessive loss of fat-free mass (FFM) after bariatric surgery prompted this comparison of operated versus matched nonoperated controls regarding FFM. University Hospital and University Research Unit in an urban medical center. Body composition with bioelectric impedance (Tanita 310, Tanita Corp, Arlington Heights, IL) was measured approximately 2 years after bariatric surgery in weight stable patients and nonoperated weight stable controls matched for body mass index (BMI), gender, and age. t tests provided comparisons. Analysis of variance was used to compare FFM changes for 4 procedures. Levene's test evaluated variance. Patients (n = 252; 24.7±15 mo after surgery) and nonoperated controls (n = 252) were matched for gender (71.8% female), age (44.5±11.0 yr), and BMI (32.8±7.0 kg/m²). Patients had different surgical procedures: 107 gastric bypasses (RYGBs), 62 biliopancreatic diversions with duodenal switch (BPD/DSs), 40 adjustable gastric bands (AGBs), and 43 sleeve gastrectomies (LSGs). FFM percentage was significantly higher in the operated patients than controls, 66% versus 62%, P<.0001. For 3 procedures, the FFM was significantly higher; however, AGBs changed only 7.3 BMI units and FFM was not significantly different from their matched controls, 59.8% versus 58.2%. Across surgical groups, FFM percentage differed, P<.0001 (RYGB 66.5±9.2%, BPD/DS 74.0±9.3%, AGB 59.8±7.0%, LSG 59.6±9.3%). Variance was not different (P = .17). Weight-reduced bariatric surgery patients have greater FFM compared with nonoperated matched controls. These findings support surgically assisted weight loss as a physiologic process and in general patients do not suffer from excessive FFM depletion after bariatric procedures. Copyright © 2017 American Society for Bariatric Surgery. Published by Elsevier Inc. All rights reserved.

  19. Masticatory muscle activity assessment and reliability of a portable electromyographic instrument.

    PubMed

    Bowley, J F; Marx, D B

    2001-03-01

    Masticatory muscle hyperactivity is thought to produce muscle pain and tension headaches and can cause excessive wear or breakage of restorative dental materials used in the treatment of prosthodontic patients. The quantification and identification of this type of activity is an important consideration in the preoperative diagnosis and treatment planning phase of prosthodontic care. This study investigated the quantification process in complete denture/overdenture patients with natural mandibular tooth abutments and explored the reliability of instrumentation used to assess this parafunctional activity. The nocturnal EMG activity in asymptomatic complete denture/overdenture subjects was assessed with and without prostheses worn during sleep. Because of the large variance within and between subjects, the investigators evaluated the reliability of the 3 instruments used to test nocturnal EMG activity in the sample. Electromyographic activity data of denture/overdenture subjects revealed no differences between prostheses worn versus not worn during sleep but demonstrated a very large variance factor. Further investigation of the instrumentation demonstrated a consistent in vitro as well as in vivo reliability in controlled laboratory studies. The portable EMG instrumentation used in this study revealed a large, uncontrollable variance factor within and between subjects that greatly complicated the diagnosis of parafunctional activity in prosthodontic patients.

  20. Recent worldwide expansion of Nosema ceranae (Microsporidia) in Apis mellifera populations inferred from multilocus patterns of genetic variation.

    PubMed

    Gómez-Moracho, T; Bartolomé, C; Bello, X; Martín-Hernández, R; Higes, M; Maside, X

    2015-04-01

    Nosema ceranae has been found infecting Apis mellifera colonies with increasing frequency and it now represents a major threat to the health and long-term survival of these honeybees worldwide. However, so far little is known about the population genetics of this parasite. Here, we describe the patterns of genetic variation at three genomic loci in a collection of isolates from all over the world. Our main findings are: (i) the levels of genetic polymorphism (πS≈1%) do not vary significantly across its distribution range, (ii) there is substantial evidence for recombination among haplotypes, (iii) the best part of the observed genetic variance corresponds to differences within bee colonies (up to 88% of the total variance), (iv) parasites collected from Asian honeybees (Apis cerana and Apis florea) display significant differentiation from those obtained from Apis mellifera (8-16% of the total variance, p<0.01) and (v) there is a significant excess of low frequency variants over neutral expectations among samples obtained from A. mellifera, but not from Asian honeybees. Overall these results are consistent with a recent colonization and rapid expansion of N. ceranae throughout A. mellifera colonies. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Measuring systems of hard to get objects: problems with analysis of measurement results

    NASA Astrophysics Data System (ADS)

    Gilewska, Grazyna

    2005-02-01

    Limited access to the metrological parameters of measured objects is a problem in many measurements, especially for biological objects, whose parameters are very often determined indirectly. When access to the measurement object is very limited, random components dominate the measurement results. Every measuring process is subject to conditions that limit how it can be treated (e.g., increasing the number of measurement repetitions to decrease the random limiting error). These may be temporal or financial limitations or, in the case of biological objects, a small sample volume, the influence of the measuring tool and observer on the object, or fatigue effects in the patient. Taking these difficulties into consideration, the author developed and verified the practical application of methods for rejecting outlying observations and, subsequently, novel methods for eliminating measured data with excess variance, in order to decrease the standard deviation of the mean with a limited amount of data at an accepted level of confidence. The elaborated methods were verified on measurements of knee-joint space width obtained from radiographs. The measurements were carried out indirectly on digital images of the radiographs. The results confirmed the validity of the elaborated methodology and measurement procedures. Such a methodology is especially important when standard approaches do not bring the expected effects.

  2. Estimation of Additive, Dominance, and Imprinting Genetic Variance Using Genomic Data

    PubMed Central

    Lopes, Marcos S.; Bastiaansen, John W. M.; Janss, Luc; Knol, Egbert F.; Bovenhuis, Henk

    2015-01-01

    Traditionally, exploration of genetic variance in humans, plants, and livestock species has been limited mostly to the use of additive effects estimated using pedigree data. However, with the development of dense panels of single-nucleotide polymorphisms (SNPs), the exploration of genetic variation of complex traits is moving from quantifying the resemblance between family members to the dissection of genetic variation at individual loci. With SNPs, we were able to quantify the contribution of additive, dominance, and imprinting variance to the total genetic variance by using a SNP regression method. The method was validated in simulated data and applied to three traits (number of teats, backfat, and lifetime daily gain) in three purebred pig populations. In simulated data, the estimates of additive, dominance, and imprinting variance were very close to the simulated values. In real data, dominance effects account for a substantial proportion of the total genetic variance (up to 44%) for these traits in these populations. The contribution of imprinting to the total phenotypic variance of the evaluated traits was relatively small (1–3%). Our results indicate a strong relationship between additive variance explained per chromosome and chromosome length, which has been described previously for other traits in other species. We also show that a similar linear relationship exists for dominance and imprinting variance. These novel results improve our understanding of the genetic architecture of the evaluated traits and shows promise to apply the SNP regression method to other traits and species, including human diseases. PMID:26438289

  3. Relationship of work-family conflict, self-reported social support and job satisfaction to burnout syndrome among medical workers in southwest China: A cross-sectional study

    PubMed Central

    Yang, Shujuan; Liu, Danping; Liu, Hongbo; Zhang, Juying; Duan, Zhanqi

    2017-01-01

    Background Burnout is a psychosomatic syndrome widely observed in Chinese medical workers due to the increasing cost of medical treatment, excessive workload, and excessive prescribing behavior. No studies have evaluated the interrelationship among occupational burnout, work-family conflict, social support, and job satisfaction in medical workers. The aim of this study was to evaluate these relationships among medical workers in southwest China. Methods This cross-sectional study was conducted between March 2013 and December 2013, and was based on the fifth National Health Service Survey (NHSS). A total of 1382 medical workers were enrolled in the study. Pearson correlation analysis and general linear model univariate analysis were used to evaluate the relationship of work-family conflict, self-reported social support, and job satisfaction with burnout syndrome in medical workers. Results We observed that five dimensions of job satisfaction and self-reported social support were negatively associated with burnout syndrome, whereas three dimensions of work-family conflict showed a positive correlation. In a four-stage general linear model analysis, we found that demographic factors accounted for 5.4% of individual variance in burnout syndrome (F = 4.720, P<0.001, R2 = 0.054), and that work-family conflict, self-reported social support, and job satisfaction accounted for 2.6% (F = 5.93, P<0.001, R2 = 0.080), 5.7% (F = 9.532, P<0.001, R2 = 0.137) and 17.8% (F = 21.608, P<0.001, R2 = 0.315) of the variance, respectively. In the fourth stage of analysis, female gender and a lower technical title correlated to a higher level of burnout syndrome, and medical workers without administrative duties had more serious burnout syndrome than those with administrative duties. Conclusions In conclusion, the present study suggests that work-family conflict and self-reported social support slightly affect the level of burnout syndrome, and that job satisfaction is a much stronger influence on burnout syndrome in medical workers of southwest China. PMID:28207821

  4. A New Nonparametric Levene Test for Equal Variances

    ERIC Educational Resources Information Center

    Nordstokke, David W.; Zumbo, Bruno D.

    2010-01-01

    Tests of the equality of variances are sometimes used on their own to compare variability across groups of experimental or non-experimental conditions but they are most often used alongside other methods to support assumptions made about variances. A new nonparametric test of equality of variances is described and compared to current "gold…

  5. Statistically Self-Consistent and Accurate Errors for SuperDARN Data

    NASA Astrophysics Data System (ADS)

    Reimer, A. S.; Hussey, G. C.; McWilliams, K. A.

    2018-01-01

    The Super Dual Auroral Radar Network (SuperDARN)-fitted data products (e.g., spectral width and velocity) are produced using weighted least squares fitting. We present a new First-Principles Fitting Methodology (FPFM) that utilizes the first-principles approach of Reimer et al. (2016) to estimate the variance of the real and imaginary components of the mean autocorrelation function (ACF) lags. SuperDARN ACFs fitted by the FPFM do not use ad hoc or empirical criteria. Currently, the weighting used to fit the ACF lags is derived from ad hoc estimates of the ACF lag variance. Additionally, an overcautious lag filtering criterion is used that sometimes discards data that contain useful information. In low signal-to-noise (SNR) and/or low signal-to-clutter regimes, the ad hoc variance and empirical criterion lead to underestimated errors for the fitted parameters because the relative contributions of signal, noise, and clutter to the ACF variance are not taken into consideration. The FPFM variance expressions include contributions of signal, noise, and clutter. The clutter is estimated using the maximal power-based self-clutter estimator derived by Reimer and Hussey (2015). The FPFM was successfully implemented and tested using synthetic ACFs generated with the radar data simulator of Ribeiro, Ponomarenko, et al. (2013). The fitted parameters and the fitted-parameter errors produced by the FPFM are compared with those of the current SuperDARN fitting software, FITACF. Using self-consistent statistical analysis, the FPFM produces reliable quantitative measures of the errors of the fitted parameters. For an SNR in excess of 3 dB and velocity error below 100 m/s, the FPFM produces 52% more data points than FITACF.

  6. Modification of inertial oscillations by the mesoscale eddy field

    NASA Astrophysics Data System (ADS)

    Elipot, Shane; Lumpkin, Rick; Prieto, Germán

    2010-09-01

    The modification of near-surface near-inertial oscillations (NIOs) by the geostrophic vorticity is studied globally from an observational standpoint. Surface drifters are used to estimate NIO characteristics. Despite its spatial resolution limits, altimetry is used to estimate the geostrophic vorticity. Three characteristics of NIOs are considered: the relative frequency shift with respect to the local inertial frequency; the near-inertial variance; and the inverse excess bandwidth, which is interpreted as a decay time scale. The geostrophic mesoscale flow shifts the frequency of NIOs by approximately half its vorticity. Equatorward of 30°N and S, this effect is added to a global pattern of blue shift of NIOs. While the global pattern of near-inertial variance is interpretable in terms of wind forcing, it is also observed that the geostrophic vorticity organizes the near-inertial variance; it is maximum for near zero values of the Laplacian of the vorticity and decreases for nonzero values, albeit not as much for positive as for negative values. Because the Laplacian of vorticity and vorticity are anticorrelated in the altimeter data set, overall, more near-inertial variance is found in anticyclonic vorticity regions than in cyclonic regions. While this is compatible with anticyclones trapping NIOs, the organization of near-inertial variance by the Laplacian of vorticity is also in very good agreement with previous theoretical and numerical predictions. The inverse bandwidth is a decreasing function of the gradient of vorticity, which acts like the gradient of planetary vorticity to increase the decay of NIOs from the ocean surface. Because the altimetry data set captures the largest vorticity gradients in energetic mesoscale regions, it is also observed that NIOs decay faster in large geostrophic eddy kinetic energy regions.

  7. A de-noising method using the improved wavelet threshold function based on noise variance estimation

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Wang, Weida; Xiang, Changle; Han, Lijin; Nie, Haizhao

    2018-01-01

    Precise and efficient noise variance estimation is very important when the wavelet transform is used to analyze signals and extract signal features. Because the accuracy of traditional noise variance estimation is strongly affected by fluctuations in the noise values, this study puts forward a strategy of using a two-state Gaussian mixture model to classify the high-frequency wavelet coefficients at the finest scale, which takes both efficiency and accuracy into account. Based on the noise variance estimate, a novel improved wavelet threshold function is proposed that combines the advantages of the hard and soft threshold functions, and on the basis of the noise variance estimation algorithm and the improved threshold function, a novel wavelet threshold de-noising method is put forward. The method is tested and validated using random signals and bench test data from an electro-mechanical transmission system. The test results indicate that the wavelet threshold de-noising method based on noise variance estimation performs well in processing the test signals of the electro-mechanical transmission system: it effectively eliminates the interference of transient signals, including voltage, current, and oil pressure, and favorably maintains the dynamic characteristics of the signals.
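
    For orientation, a baseline soft-threshold de-noiser using PyWavelets, with the noise standard deviation estimated from the finest-scale detail coefficients; the paper's two-state Gaussian mixture classification and improved threshold function are not reproduced here:

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Baseline wavelet threshold de-noising with a universal soft threshold."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # robust noise estimate
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))      # universal threshold
    shrunk = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(shrunk, wavelet)[: len(signal)]
```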

  8. Cosmic variance of the galaxy cluster weak lensing signal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gruen, D.; Seitz, S.; Becker, M. R.

    Intrinsic variations of the projected density profiles of clusters of galaxies at fixed mass are a source of uncertainty for cluster weak lensing. We present a semi-analytical model to account for this effect, based on a combination of variations in halo concentration, ellipticity and orientation, and the presence of correlated haloes. We calibrate the parameters of our model at the 10 per cent level to match the empirical cosmic variance of cluster profiles at M_200m ≈ 10^14-10^15 h^-1 M_⊙, z = 0.25-0.5 in a cosmological simulation. We show that weak lensing measurements of clusters significantly underestimate mass uncertainties if intrinsic profile variations are ignored, and that our model can be used to provide correct mass likelihoods. Effects on the achievable accuracy of weak lensing cluster mass measurements are particularly strong for the most massive clusters and deep observations (with ≈20 per cent uncertainty from cosmic variance alone at M_200m ≈ 10^15 h^-1 M_⊙ and z = 0.25), but significant also under typical ground-based conditions. We show that neglecting intrinsic profile variations leads to biases in the mass-observable relation constrained with weak lensing, both for intrinsic scatter and overall scale (the latter at the 15 per cent level). Furthermore, these biases are in excess of the statistical errors of upcoming surveys and can be avoided if the cosmic variance of cluster profiles is accounted for.

  9. Cosmic variance of the galaxy cluster weak lensing signal

    DOE PAGES

    Gruen, D.; Seitz, S.; Becker, M. R.; ...

    2015-04-13

    Intrinsic variations of the projected density profiles of clusters of galaxies at fixed mass are a source of uncertainty for cluster weak lensing. We present a semi-analytical model to account for this effect, based on a combination of variations in halo concentration, ellipticity and orientation, and the presence of correlated haloes. We calibrate the parameters of our model at the 10 per cent level to match the empirical cosmic variance of cluster profiles at M_200m ≈ 10^14–10^15 h^−1 M_⊙, z = 0.25–0.5 in a cosmological simulation. We show that weak lensing measurements of clusters significantly underestimate mass uncertainties if intrinsic profile variations are ignored, and that our model can be used to provide correct mass likelihoods. Effects on the achievable accuracy of weak lensing cluster mass measurements are particularly strong for the most massive clusters and deep observations (with ≈20 per cent uncertainty from cosmic variance alone at M_200m ≈ 10^15 h^−1 M_⊙ and z = 0.25), but significant also under typical ground-based conditions. We show that neglecting intrinsic profile variations leads to biases in the mass-observable relation constrained with weak lensing, both for intrinsic scatter and overall scale (the latter at the 15 per cent level). Furthermore, these biases are in excess of the statistical errors of upcoming surveys and can be avoided if the cosmic variance of cluster profiles is accounted for.

  10. Chandra Detection of Intracluster X-Ray sources in Virgo

    NASA Astrophysics Data System (ADS)

    Hou, Meicun; Li, Zhiyuan; Peng, Eric W.; Liu, Chengze

    2017-09-01

    We present a survey of X-ray point sources in the nearest and dynamically young galaxy cluster, Virgo, using archival Chandra observations that sample the vicinity of 80 early-type member galaxies. The X-ray source populations at the outskirts of these galaxies are of particular interest. We detect a total of 1046 point sources (excluding galactic nuclei) out to a projected galactocentric radius of ~40 kpc and down to a limiting 0.5-8 keV luminosity of ~2 × 10^38 erg s^−1. Based on the cumulative spatial and flux distributions of these sources, we statistically identify ~120 excess sources that are not associated with the main stellar content of the individual galaxies, nor with the cosmic X-ray background. This excess is significant at a 3.5σ level, when Poisson error and cosmic variance are taken into account. On the other hand, no significant excess sources are found at the outskirts of a control sample of field galaxies, suggesting that at least some fraction of the excess sources around the Virgo galaxies are truly intracluster X-ray sources. Assisted with ground-based and HST optical imaging of Virgo, we discuss the origins of these intracluster X-ray sources, in terms of supernova-kicked low-mass X-ray binaries (LMXBs), globular clusters, LMXBs associated with the diffuse intracluster light, stripped nucleated dwarf galaxies and free-floating massive black holes.
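
    The significance quoted above folds Poisson error and cosmic variance into a single error budget. The toy calculation below shows the arithmetic of such an estimate; the expected count and the 10% cosmic-variance fraction are invented placeholders, so the result is not expected to reproduce the 3.5σ figure.

```python
# Illustrative significance estimate for an excess source count when both Poisson
# error and cosmic variance contribute. Numbers other than the detected total are
# placeholders, not the survey's actual values.
import math

n_detected = 1046            # total point sources (from the abstract)
n_expected = 926             # hypothetical expected count from galaxies + CXB
excess = n_detected - n_expected

poisson_var = n_expected                  # Poisson variance of the expected count
cosmic_var = (0.10 * n_expected) ** 2     # assumed ~10% fractional cosmic variance

significance = excess / math.sqrt(poisson_var + cosmic_var)
print(f"excess = {excess}, significance ~ {significance:.1f} sigma")
```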

  11. An experimental 392-year documentary-based multi-proxy (vine and grain) reconstruction of May-July temperatures for Kőszeg, West-Hungary

    NASA Astrophysics Data System (ADS)

    Kiss, Andrea; Wilson, Rob; Bariska, István

    2011-07-01

    In this paper, we present a 392-year-long preliminary temperature reconstruction for western Hungary. The reconstructed series is based on five vine- and grain-related historical phenological series from the town of Kőszeg. We apply dendrochronological methods for both signal assessment of the phenological series and the resultant temperature reconstruction. As a proof of concept, the present reconstruction explains 57% of the variance of May-July Budapest mean temperatures and is well verified with coefficient of efficiency values in excess of 0.45. The developed temperature reconstruction portrays warm conditions during the late seventeenth and early eighteenth centuries with a period of cooling until the coldest reconstructed period centred around 1815, which was followed by a period of warming until the 1860s. The phenological evidence analysed here represents an important data source from which non-biased estimates of past climate can be derived that may provide information at all possible time-scales.

  12. Microclimate monitoring of Ariadne’s house (Pompeii, Italy) for preventive conservation of fresco paintings

    PubMed Central

    2012-01-01

    Background Ariadne’s house, located at the city center of ancient Pompeii, is of great archaeological value due to the fresco paintings decorating several rooms. In order to assess the risks for long-term conservation affecting the valuable mural paintings, 26 temperature data-loggers and 26 relative humidity data-loggers were located in four rooms of the house for the monitoring of ambient conditions. Results Data recorded during 372 days were analyzed by means of graphical descriptive methods and analysis of variance (ANOVA). Results revealed an effect of the roof type and number of walls of the room. Excessive temperatures were observed during the summer in rooms covered with transparent roofs, and corrective actions were taken. Moreover, higher humidity values were recorded by sensors on the floor level. Conclusions The present work provides guidelines about the type, number, calibration and position of thermohygrometric sensors recommended for the microclimate monitoring of mural paintings in outdoor or semi-confined environments. PMID:23190798

  13. Exercise dependence as a mediator of the exercise and eating disorders relationship: a pilot study.

    PubMed

    Cook, Brian; Hausenblas, Heather; Crosby, Ross D; Cao, Li; Wonderlich, Stephen A

    2015-01-01

    Excessive exercise is a common feature of eating disorders (ED) and is associated with earlier ED onset, more ED symptoms, and higher persistence of ED behavior. Research indicates that exercise amount alone is not associated with ED. The purpose of this study was to investigate pathological attitudes and behaviors related to exercise (e.g., exercise dependence) as a mediator of the exercise and ED relationship. Participants were 43 women with an ED who completed measures of ED symptoms, exercise behavior, and exercise dependence. Analyses were conducted using the indirect bootstrapping method for examining mediation. Exercise dependence mediated the relationship between exercise and ED. This mediation model accounted for 14.34% of the variance in the relationship. Our results extend the literature by offering preliminary evidence of a psychological variable that may be a candidate for future interventions on the exercise and ED relationship. Implications and suggestions for future research are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.
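
    A minimal sketch of the indirect bootstrapping test of mediation is given below, with synthetic data standing in for the exercise, exercise-dependence, and ED-symptom measures; the variable names and effect sizes are hypothetical.

```python
# Bootstrap test of an indirect (mediated) effect a*b: exercise -> dependence -> ED.
# Synthetic data; a percentile confidence interval excluding zero suggests mediation.
import numpy as np

rng = np.random.default_rng(0)
n = 43
exercise = rng.normal(size=n)
dependence = 0.6 * exercise + rng.normal(scale=0.8, size=n)          # mediator
ed_symptoms = 0.5 * dependence + 0.1 * exercise + rng.normal(scale=0.8, size=n)

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                       # x -> mediator slope
    X = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]      # mediator -> y slope, adjusting for x
    return a * b

boot = []
for _ in range(5000):
    idx = rng.integers(0, n, n)                      # resample cases with replacement
    boot.append(indirect_effect(exercise[idx], dependence[idx], ed_symptoms[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
```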

  14. A time dependent mixing model to close PDF equations for transport in heterogeneous aquifers

    NASA Astrophysics Data System (ADS)

    Schüler, L.; Suciu, N.; Knabner, P.; Attinger, S.

    2016-10-01

    Probability density function (PDF) methods are a promising alternative for predicting the transport of solutes in groundwater under uncertainty. They make it possible to derive the evolution equations of the mean concentration and the concentration variance, used in moment methods. The mixing model, describing the transport of the PDF in concentration space, is essential for both methods. Finding a satisfactory mixing model is still an open question and, given the rather elaborate nature of PDF methods, a difficult undertaking. Both the PDF equation and the concentration variance equation depend on the same mixing model. This connection is used to find and test an improved mixing model for the much easier to handle concentration variance. Subsequently, this mixing model is transferred to the PDF equation and tested. The newly proposed mixing model yields significantly improved results for both variance modelling and PDF modelling.

  15. Marginalized multilevel hurdle and zero-inflated models for overdispersed and correlated count data with excess zeros.

    PubMed

    Kassahun, Wondwosen; Neyens, Thomas; Molenberghs, Geert; Faes, Christel; Verbeke, Geert

    2014-11-10

    Count data are collected repeatedly over time in many applications, such as biology, epidemiology, and public health. Such data are often characterized by the following three features. First, correlation due to the repeated measures is usually accounted for using subject-specific random effects, which are assumed to be normally distributed. Second, the sample variance may exceed the mean, and hence, the theoretical mean-variance relationship is violated, leading to overdispersion. This is usually allowed for based on a hierarchical approach, combining a Poisson model with gamma distributed random effects. Third, an excess of zeros beyond what standard count distributions can predict is often handled by either the hurdle or the zero-inflated model. A zero-inflated model assumes two processes as sources of zeros and combines a count distribution with a discrete point mass as a mixture, while the hurdle model separately handles zero observations and positive counts, where then a truncated-at-zero count distribution is used for the non-zero state. In practice, however, all these three features can appear simultaneously. Hence, a modeling framework that incorporates all three is necessary, and this presents challenges for the data analysis. Such models, when conditionally specified, will naturally have a subject-specific interpretation. However, adopting their purposefully modified marginalized versions leads to a direct marginal or population-averaged interpretation for parameter estimates of covariate effects, which is the primary interest in many applications. In this paper, we present a marginalized hurdle model and a marginalized zero-inflated model for correlated and overdispersed count data with excess zero observations and then illustrate these further with two case studies. The first dataset focuses on the Anopheles mosquito density around a hydroelectric dam, while adolescents' involvement in work, to earn money and support their families or themselves, is studied in the second example. Sub-models, which result from omitting zero-inflation and/or overdispersion features, are also considered for comparison's purpose. Analysis of the two datasets showed that accounting for the correlation, overdispersion, and excess zeros simultaneously resulted in a better fit to the data and, more importantly, that omission of any of them leads to incorrect marginal inference and erroneous conclusions about covariate effects. Copyright © 2014 John Wiley & Sons, Ltd.

  16. Application of spatial synoptic classification in evaluating links between heat stress and cardiovascular mortality and morbidity in Prague, Czech Republic

    NASA Astrophysics Data System (ADS)

    Urban, Aleš; Kyselý, Jan

    2018-01-01

    Spatial synoptic classification (SSC) is employed here for the first time in assessing heat-related mortality and morbidity in Central Europe. It is applied to examine links between weather patterns and cardiovascular (CVD) mortality and morbidity in an extended summer season (16 May-15 September) during 1994-2009. As in previous studies, two SSC air masses (AMs)—dry tropical (DT) and moist tropical (MT)—are associated with significant excess CVD mortality in Prague, while effects on CVD hospital admissions are small and insignificant. Excess mortality from ischaemic heart diseases is more strongly associated with DT, while MT has an adverse effect especially on cerebrovascular mortality. Links between the oppressive AMs and excess mortality relate also to conditions on previous days, as DT and MT occur in typical sequences. The highest CVD mortality deviations are found 1 day after a hot spell's onset, when temperature as well as frequency of the oppressive AMs are highest. This peak is typically followed by a DT-to-MT weather transition, characterized by a decrease in temperature and an increase in humidity. The transition between the upward (DT) and downward (MT) phases is associated with the largest excess CVD mortality, and the change contributes to the increased and more lagged effects on cerebrovascular mortality. The study highlights the importance of critically evaluating SSC's applicability and benefits within warning systems relative to other synoptic and epidemiological approaches. Only a subset of days with the oppressive AMs is associated with excess mortality, and regression models accounting for possible meteorological and other factors explain little of the mortality variance.

  17. Geographical variation of cerebrovascular disease in New York State: the correlation with income

    PubMed Central

    Han, Daikwon; Carrow, Shannon S; Rogerson, Peter A; Munschauer, Frederick E

    2005-01-01

    Background Income is known to be associated with cerebrovascular disease; however, little is known about the more detailed relationship between cerebrovascular disease and income. We examined the hypothesis that the geographical distribution of cerebrovascular disease in New York State may be predicted by a nonlinear model using income as a surrogate socioeconomic risk factor. Results We used spatial clustering methods to identify areas with high and low prevalence of cerebrovascular disease at the ZIP code level after smoothing rates and correcting for edge effects; geographic locations of high and low clusters of cerebrovascular disease in New York State were identified with and without income adjustment. To examine effects of income, we calculated the excess number of cases using a non-linear regression with cerebrovascular disease rates taken as the dependent variable and income and income squared taken as independent variables. The resulting regression equation was: excess rate = 32.075 − 1.22×10^-4(income) + 8.068×10^-10(income^2), and both income and income squared variables were significant at the 0.01 level. When income was included as a covariate in the non-linear regression, the number and size of clusters of high cerebrovascular disease prevalence decreased. Some 87 ZIP codes exceeded the critical value of the local statistic yielding a relative risk of 1.2. The majority of low cerebrovascular disease prevalence geographic clusters disappeared when the non-linear income effect was included. For linear regression, the excess rate of cerebrovascular disease falls with income; each $10,000 increase in median income of each ZIP code resulted in an average reduction of 3.83 observed cases. The significant nonlinear effect indicates a lessening of this income effect with increasing income. Conclusion Income is a non-linear predictor of excess cerebrovascular disease rates, with both low and high observed cerebrovascular disease rate areas associated with higher income. Income alone explains a significant amount of the geographical variance in cerebrovascular disease across New York State since both high and low clusters of cerebrovascular disease dissipate or disappear with income adjustment. Geographical modeling, including non-linear effects of income, may allow for better identification of other non-traditional risk factors. PMID:16242043
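
    The quadratic fit reported above can be reproduced in form (though not in its coefficients) with a simple polynomial regression on simulated ZIP-code data, as sketched below.

```python
# Sketch of the quadratic (non-linear in income) fit reported in the abstract,
# using synthetic ZIP-code data; the recovered coefficients will not match the
# published ones exactly.
import numpy as np

rng = np.random.default_rng(1)
income = rng.uniform(20_000, 120_000, size=300)              # median ZIP income ($)
excess_rate = 32.075 - 1.22e-4 * income + 8.068e-10 * income**2 \
              + rng.normal(scale=1.0, size=income.size)      # noisy "observed" rates

# np.polyfit returns coefficients from the highest degree down: [b2, b1, b0]
b2, b1, b0 = np.polyfit(income, excess_rate, deg=2)
print(f"excess rate = {b0:.3f} + {b1:.3e}*income + {b2:.3e}*income^2")
```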

  18. Variance approach for multi-objective linear programming with fuzzy random of objective function coefficients

    NASA Astrophysics Data System (ADS)

    Indarsih, Indrati, Ch. Rini

    2016-02-01

    In this paper, we define the variance of fuzzy random variables through alpha levels. We present a theorem showing that the variance of a fuzzy random variable is a fuzzy number. We consider a multi-objective linear programming (MOLP) problem with fuzzy random objective function coefficients and solve it by a variance approach. The approach transforms the MOLP problem with fuzzy random objective function coefficients into an MOLP problem with fuzzy objective function coefficients. By the weighting method, this becomes a linear program with fuzzy coefficients, which we solve by the simplex method for fuzzy linear programming.

  19. Analysis of conditional genetic effects and variance components in developmental genetics.

    PubMed

    Zhu, J

    1995-12-01

    A genetic model with additive-dominance effects and genotype x environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t-1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects.

  20. Analysis of Conditional Genetic Effects and Variance Components in Developmental Genetics

    PubMed Central

    Zhu, J.

    1995-01-01

    A genetic model with additive-dominance effects and genotype X environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t - 1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects. PMID:8601500

  1. An application of the LC-LSTM framework to the self-esteem instability case.

    PubMed

    Alessandri, Guido; Vecchione, Michele; Donnellan, Brent M; Tisak, John

    2013-10-01

    The present research evaluates the stability of self-esteem as assessed by a daily version of the Rosenberg (Society and the adolescent self-image, Princeton University Press, Princeton, 1965) general self-esteem scale (RGSE). The scale was administered to 391 undergraduates for five consecutive days. The longitudinal data were analyzed using the integrated LC-LSTM framework that allowed us to evaluate: (1) the measurement invariance of the RGSE, (2) its stability and change across the 5-day assessment period, (3) the amount of variance attributable to stable and transitory latent factors, and (4) the criterion-related validity of these factors. Results provided evidence for measurement invariance, mean-level stability, and rank-order stability of daily self-esteem. Latent state-trait analyses revealed that variances in scores of the RGSE can be decomposed into six components: stable self-esteem (40 %), ephemeral (or temporal-state) variance (36 %), stable negative method variance (9 %), stable positive method variance (4 %), specific variance (1 %) and random error variance (10 %). Moreover, latent factors associated with daily self-esteem were associated with measures of depression, implicit self-esteem, and grade point average.

  2. Recovering Wood and McCarthy's ERP-prototypes by means of ERP-specific procrustes-rotation.

    PubMed

    Beauducel, André

    2018-02-01

    The misallocation of treatment-variance on the wrong component has been discussed in the context of temporal principal component analysis of event-related potentials. There is, until now, no rotation method that can perfectly recover Wood and McCarthy's prototypes without making use of additional information on treatment effects. In order to close this gap, two new methods for component rotation were proposed. After Varimax-prerotation, the first method identifies very small slopes of successive loadings. The corresponding loadings are set to zero in a target matrix for event-related orthogonal partial Procrustes- (EPP-) rotation. The second method generates Gaussian normal distributions around the peaks of the Varimax-loadings and performs orthogonal Procrustes-rotation towards these Gaussian distributions. Oblique versions of this Gaussian event-related Procrustes- (GEP) rotation and of EPP-rotation are based on Promax-rotation. A simulation study revealed that the new orthogonal rotations recover Wood and McCarthy's prototypes and eliminate misallocation of treatment-variance. In an additional simulation study with a more pronounced overlap of the prototypes, GEP Promax-rotation reduced the variance misallocation slightly more than EPP Promax-rotation. Compared with existing methods, Varimax- and conventional Promax-rotations resulted in substantial misallocations of variance in simulation studies when components had temporal overlap, whereas a substantially reduced misallocation of variance occurred with the EPP-, EPP Promax-, GEP-, and GEP Promax-rotations. Misallocation of variance can thus be minimized by means of the new rotation methods; making use of information on the temporal order of the loadings may allow for improved rotation of temporal PCA components. Copyright © 2017 Elsevier B.V. All rights reserved.
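
    A simplified reading of the Gaussian event-related Procrustes idea is sketched below: Gaussian targets are centred on the peaks of Varimax-rotated loadings and an orthogonal Procrustes step rotates the loadings toward them. The Gaussian width, the toy loadings, and the omission of the Promax/oblique variants are assumptions of this sketch, not the authors' implementation.

```python
# Rough sketch of a Gaussian-target orthogonal Procrustes rotation for temporal loadings.
import numpy as np
from scipy.linalg import orthogonal_procrustes

def gaussian_target(loadings, width=5.0):
    """One Gaussian per component, centred on the peak time point of its loadings."""
    t = np.arange(loadings.shape[0])
    target = np.zeros_like(loadings)
    for k in range(loadings.shape[1]):
        peak = np.argmax(np.abs(loadings[:, k]))
        amp = loadings[peak, k]
        target[:, k] = amp * np.exp(-0.5 * ((t - peak) / width) ** 2)
    return target

def gep_rotate(varimax_loadings, width=5.0):
    target = gaussian_target(varimax_loadings, width)
    R, _ = orthogonal_procrustes(varimax_loadings, target)   # minimises ||A R - B||_F
    return varimax_loadings @ R

# Example with fake temporal loadings (time points x components)
rng = np.random.default_rng(2)
A = rng.normal(scale=0.1, size=(100, 3))
A[20:30, 0] += 1.0; A[45:60, 1] += 1.0; A[70:85, 2] += 1.0
rotated = gep_rotate(A)
```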

  3. [Theory, method and application of method R on estimation of (co)variance components].

    PubMed

    Liu, Wen-Zhong

    2004-07-01

    The theory, method and application of Method R for estimating (co)variance components are reviewed so that the method can be used appropriately. Estimation requires R values, which are regressions of predicted random effects calculated using the complete dataset on predicted random effects calculated using random subsets of the same data. By using a multivariate iteration algorithm based on a transformation matrix, combined with the preconditioned conjugate gradient method to solve the mixed model equations, the computational efficiency of Method R is much improved. Method R is computationally inexpensive, and the sampling errors and approximate credible intervals of estimates can be obtained. Disadvantages of Method R include a larger sampling variance than other methods for the same data, and biased estimates in small datasets. As an alternative method, Method R can be used in larger datasets. It is necessary to study its theoretical properties further and to broaden its range of application.

  4. [A proposal for a new definition of excess mortality associated with influenza-epidemics and its estimation].

    PubMed

    Takahashi, M; Tango, T

    2001-05-01

    As methods for estimating excess mortality associated with influenza epidemics, Serfling's cyclical regression model and the Kawai and Fukutomi model with seasonal indices have been proposed. Excess mortality under the old definition (i.e., the number of deaths actually recorded in excess of the number expected on the basis of past seasonal experience) includes random error, the portion of variation attributable to chance. In addition, it disregards the seasonal range of random variation in mortality. In this paper, we propose a new definition of excess mortality associated with influenza epidemics and a new estimation method that address these issues within the Kawai and Fukutomi framework. The new definition and estimation method are constructed as follows. Factors producing variation in mortality in months with influenza epidemics are divided into two groups: (1) influenza itself and (2) other factors (in practice, random variation). The range of variation in mortality due to the latter (the normal range) can be estimated from months without influenza epidemics, and excess mortality is defined as deaths above this normal range. Because the new method accounts for the variation in mortality observed in months without influenza epidemics, it provides reasonable estimates of excess mortality by separating out the random component of variation. A further characteristic is that the proposed estimate can serve as a criterion for a test of statistical significance.
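
    A minimal numerical sketch of the proposed definition is shown below: the normal range is estimated from months without epidemics and only deaths above its upper bound count as excess. The 1.96 z-value and the synthetic monthly counts are illustrative assumptions, not the paper's seasonal model.

```python
# Excess mortality as deaths above the "normal range" estimated from non-epidemic months.
import numpy as np

rng = np.random.default_rng(3)
non_epidemic_deaths = rng.normal(1000, 40, size=60)    # baseline monthly counts (toy data)
epidemic_month_deaths = np.array([1120, 1210, 1065])   # months with influenza epidemics

mu, sd = non_epidemic_deaths.mean(), non_epidemic_deaths.std(ddof=1)
upper_normal = mu + 1.96 * sd                          # upper bound of random variation

excess = np.clip(epidemic_month_deaths - upper_normal, 0, None)
print(f"normal-range upper bound: {upper_normal:.0f}; excess deaths: {excess.round(0)}")
```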

  5. An Empirical Assessment of Defense Contractor Risk 1976-1984.

    DTIC Science & Technology

    1986-06-01

    A model to evaluate Department of Defense contract pricing, financing, and profit policies. An empirical assessment of the defense contractor risk-return relationship is performed utilizing four methods: mean-variance analysis of rate of return, the Capital Asset Pricing Model, mean-variance analysis of total ...

  6. The effects of an exercise with a stick on the lumbar spine and hip movement patterns during forward bending in patients with lumbar flexion syndrome.

    PubMed

    Yoon, Ji-yeon; Kim, Ji-won; Kang, Min-hyeok; An, Duk-hyun; Oh, Jae-seop

    2015-01-01

    Forward bending is frequently performed in daily activities. However, excessive lumbar flexion during forward bending has been reported as a risk factor for low back pain. Therefore, we examined the effects of an exercise strategy using a stick on the angular displacement and movement onset of lumbar and hip flexion during forward-bending exercises in patients with lumbar flexion syndrome. Eighteen volunteers with lumbar flexion syndrome were recruited for this study. Subjects performed forward-bending exercises with and without a straight stick while standing. The angular displacement and movement onset of lumbar and hip flexion during forward-bending exercises were measured using a three-dimensional motion analysis system. The significance of differences between the two conditions (with stick vs. without stick) was assessed using one-way repeated-measures analysis of variance. When using a stick during a forward-bending exercise, the peak angular displacement of lumbar flexion decreased significantly, and those of right- and left-hip flexion increased significantly compared with those without a stick. The movement onset of lumbar flexion occurred significantly later, and the onset of right-hip flexion occurred significantly earlier, with a stick than without. Based on these findings, the stick exercise was an effective method for preventing excessive lumbar flexion and for promoting hip flexion during a forward-bending exercise. These findings will be useful for clinicians to teach self-exercise during forward bending in patients with lumbar flexion syndrome.

  7. Apparatuses and methods for removal of ink buildup

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cudzinovic, Michael; Pass, Thomas; Rogers, Rob

    A substrate patterning method including the steps of spraying ink on a surface of a substrate, the spraying of the ink resulting in an overspray of excess ink past an edge of the substrate; changing a temperature of the excess ink to cause a change in a viscosity of the excess ink; and removing the excess ink having the changed viscosity.

  8. An apparent contradiction: increasing variability to achieve greater precision?

    PubMed

    Rosenblatt, Noah J; Hurt, Christopher P; Latash, Mark L; Grabiner, Mark D

    2014-02-01

    To understand the relationship between variability of foot placement in the frontal plane and stability of gait patterns, we explored how constraining mediolateral foot placement during walking affects the structure of kinematic variance in the lower-limb configuration space during the swing phase of gait. Ten young subjects walked under three conditions: (1) unconstrained (normal walking), (2) constrained (walking overground with visual guides for foot placement to achieve the measured unconstrained step width) and, (3) beam (walking on elevated beams spaced to achieve the measured unconstrained step width). The uncontrolled manifold analysis of the joint configuration variance was used to quantify two variance components, one that did not affect the mediolateral trajectory of the foot in the frontal plane ("good variance") and one that affected this trajectory ("bad variance"). Based on recent studies, we hypothesized that across conditions (1) the index of the synergy stabilizing the mediolateral trajectory of the foot (the normalized difference between the "good variance" and "bad variance") would systematically increase and (2) the changes in the synergy index would be associated with a disproportionate increase in the "good variance." Both hypotheses were confirmed. We conclude that an increase in the "good variance" component of the joint configuration variance may be an effective method of ensuring high stability of gait patterns during conditions requiring increased control of foot placement, particularly if a postural threat is present. Ultimately, designing interventions that encourage a larger amount of "good variance" may be a promising method of improving stability of gait patterns in populations such as older adults and neurological patients.
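
    The split into "good" and "bad" variance can be illustrated with a bare-bones uncontrolled-manifold computation, sketched below for a made-up Jacobian and joint-angle data; the dimensions, the per-DOF normalization, and the form of the synergy index are assumptions of this sketch.

```python
# Minimal uncontrolled-manifold sketch: split joint-configuration variance into a
# component that leaves the task variable unchanged ("good") and one that changes it
# ("bad"), then form a normalized synergy index. All data are synthetic.
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_joints = 200, 7
joints = rng.normal(size=(n_trials, n_joints))        # de-meaned joint configurations
J = rng.normal(size=(1, n_joints))                    # Jacobian of one task variable

# Basis of the UCM = null space of J; its orthogonal complement spans "bad" directions.
_, _, Vt = np.linalg.svd(J)
ucm_basis = Vt[1:].T                                  # n_joints x (n_joints - 1)
ort_basis = Vt[:1].T                                  # n_joints x 1

proj_ucm = joints @ ucm_basis
proj_ort = joints @ ort_basis
v_ucm = np.var(proj_ucm, axis=0, ddof=1).sum() / ucm_basis.shape[1]  # per-DOF "good" variance
v_ort = np.var(proj_ort, axis=0, ddof=1).sum() / ort_basis.shape[1]  # per-DOF "bad" variance
synergy_index = (v_ucm - v_ort) / (v_ucm + v_ort)
print(f"good={v_ucm:.3f}, bad={v_ort:.3f}, synergy index={synergy_index:.3f}")
```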

  9. Discrete velocity computations with stochastic variance reduction of the Boltzmann equation for gas mixtures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clarke, Peter; Varghese, Philip; Goldstein, David

    We extend a variance reduced discrete velocity method developed at UT Austin [1, 2] to gas mixtures with large mass ratios and flows with trace species. The mixture is stored as a collection of independent velocity distribution functions, each with a unique grid in velocity space. Different collision types (A-A, A-B, B-B, etc.) are treated independently, and the variance reduction scheme is formulated with different equilibrium functions for each separate collision type. The individual treatment of species enables increased focus on species important to the physics of the flow, even if the important species are present in trace amounts. Themore » method is verified through comparisons to Direct Simulation Monte Carlo computations and the computational workload per time step is investigated for the variance reduced method.« less

  10. Characterization of turbulence stability through the identification of multifractional Brownian motions

    NASA Astrophysics Data System (ADS)

    Lee, K. C.

    2013-02-01

    Multifractional Brownian motions have become popular as flexible models in describing real-life signals of high-frequency features in geoscience, microeconomics, and turbulence, to name a few. The time-changing Hurst exponent, which describes regularity levels depending on time measurements, and variance, which relates to an energy level, are two parameters that characterize multifractional Brownian motions. This research suggests a combined method of estimating the time-changing Hurst exponent and variance using the local variation of sampled paths of signals. The method consists of two phases: initially estimating global variance and then accurately estimating the time-changing Hurst exponent. A simulation study shows its performance in estimation of the parameters. The proposed method is applied to characterization of atmospheric stability in which descriptive statistics from the estimated time-changing Hurst exponent and variance classify stable atmosphere flows from unstable ones.

  11. Is residual memory variance a valid method for quantifying cognitive reserve? A longitudinal application.

    PubMed

    Zahodne, Laura B; Manly, Jennifer J; Brickman, Adam M; Narkhede, Atul; Griffith, Erica Y; Guzman, Vanessa A; Schupf, Nicole; Stern, Yaakov

    2015-10-01

    Cognitive reserve describes the mismatch between brain integrity and cognitive performance. Older adults with high cognitive reserve are more resilient to age-related brain pathology. Traditionally, cognitive reserve is indexed indirectly via static proxy variables (e.g., years of education). More recently, cross-sectional studies have suggested that reserve can be expressed as residual variance in episodic memory performance that remains after accounting for demographic factors and brain pathology (whole brain, hippocampal, and white matter hyperintensity volumes). The present study extends these methods to a longitudinal framework in a community-based cohort of 244 older adults who underwent two comprehensive neuropsychological and structural magnetic resonance imaging sessions over 4.6 years. On average, residual memory variance decreased over time, consistent with the idea that cognitive reserve is depleted over time. Individual differences in change in residual memory variance predicted incident dementia, independent of baseline residual memory variance. Multiple-group latent difference score models revealed tighter coupling between brain and language changes among individuals with decreasing residual memory variance. These results suggest that changes in residual memory variance may capture a dynamic aspect of cognitive reserve and could be a useful way to summarize individual cognitive responses to brain changes. Change in residual memory variance among initially non-demented older adults was a better predictor of incident dementia than residual memory variance measured at one time-point. Copyright © 2015. Published by Elsevier Ltd.

  12. Is residual memory variance a valid method for quantifying cognitive reserve? A longitudinal application

    PubMed Central

    Zahodne, Laura B.; Manly, Jennifer J.; Brickman, Adam M.; Narkhede, Atul; Griffith, Erica Y.; Guzman, Vanessa A.; Schupf, Nicole; Stern, Yaakov

    2016-01-01

    Cognitive reserve describes the mismatch between brain integrity and cognitive performance. Older adults with high cognitive reserve are more resilient to age-related brain pathology. Traditionally, cognitive reserve is indexed indirectly via static proxy variables (e.g., years of education). More recently, cross-sectional studies have suggested that reserve can be expressed as residual variance in episodic memory performance that remains after accounting for demographic factors and brain pathology (whole brain, hippocampal, and white matter hyperintensity volumes). The present study extends these methods to a longitudinal framework in a community-based cohort of 244 older adults who underwent two comprehensive neuropsychological and structural magnetic resonance imaging sessions over 4.6 years. On average, residual memory variance decreased over time, consistent with the idea that cognitive reserve is depleted over time. Individual differences in change in residual memory variance predicted incident dementia, independent of baseline residual memory variance. Multiple-group latent difference score models revealed tighter coupling between brain and language changes among individuals with decreasing residual memory variance. These results suggest that changes in residual memory variance may capture a dynamic aspect of cognitive reserve and could be a useful way to summarize individual cognitive responses to brain changes. Change in residual memory variance among initially non-demented older adults was a better predictor of incident dementia than residual memory variance measured at one time-point. PMID:26348002
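
    The residual approach can be caricatured with ordinary least squares, as in the sketch below: memory is regressed on demographic and brain variables and the residual is retained as the reserve index. The synthetic variables and the use of simple OLS (rather than the latent-variable models used in the study) are assumptions of the sketch.

```python
# "Residual" reserve index: regress memory on demographics and brain measures and keep
# the residual. Data and coefficients are made up for illustration.
import numpy as np

rng = np.random.default_rng(5)
n = 244
age = rng.normal(75, 6, n)
education = rng.normal(12, 3, n)
brain_vol = rng.normal(1000, 80, n)      # whole-brain volume, arbitrary units
wmh = rng.normal(5, 2, n)                # white-matter hyperintensity volume
memory = 0.03 * education - 0.02 * age + 0.004 * brain_vol - 0.05 * wmh \
         + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), age, education, brain_vol, wmh])
beta, *_ = np.linalg.lstsq(X, memory, rcond=None)
residual_reserve = memory - X @ beta     # larger residual -> memory better than expected
print("residual memory variance:", residual_reserve.var(ddof=X.shape[1]))
```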

  13. Statistical aspects of quantitative real-time PCR experiment design.

    PubMed

    Kitchen, Robert R; Kubista, Mikael; Tichopad, Ales

    2010-04-01

    Experiments using quantitative real-time PCR to test hypotheses are limited by technical and biological variability; we seek to minimise sources of confounding variability through optimum use of biological and technical replicates. The quality of an experiment design is commonly assessed by calculating its prospective power. Such calculations rely on knowledge of the expected variances of the measurements of each group of samples and the magnitude of the treatment effect; the estimation of which is often uninformed and unreliable. Here we introduce a method that exploits a small pilot study to estimate the biological and technical variances in order to improve the design of a subsequent large experiment. We measure the variance contributions at several 'levels' of the experiment design and provide a means of using this information to predict both the total variance and the prospective power of the assay. A validation of the method is provided through a variance analysis of representative genes in several bovine tissue-types. We also discuss the effect of normalisation to a reference gene in terms of the measured variance components of the gene of interest. Finally, we describe a software implementation of these methods, powerNest, that gives the user the opportunity to input data from a pilot study and interactively modify the design of the assay. The software automatically calculates expected variances, statistical power, and optimal design of the larger experiment. powerNest enables the researcher to minimise the total confounding variance and maximise prospective power for a specified maximum cost for the large study. Copyright 2010 Elsevier Inc. All rights reserved.

  14. Neuroticism explains unwanted variance in Implicit Association Tests of personality: possible evidence for an affective valence confound.

    PubMed

    Fleischhauer, Monika; Enge, Sören; Miller, Robert; Strobel, Alexander; Strobel, Anja

    2013-01-01

    Meta-analytic data highlight the value of the Implicit Association Test (IAT) as an indirect measure of personality. Based on evidence suggesting that confounding factors such as cognitive abilities contribute to the IAT effect, this study provides a first investigation of whether basic personality traits explain unwanted variance in the IAT. In a gender-balanced sample of 204 volunteers, the Big-Five dimensions were assessed via self-report, peer-report, and IAT. By means of structural equation modeling (SEM), latent Big-Five personality factors (based on self- and peer-report) were estimated and their predictive value for unwanted variance in the IAT was examined. In a first analysis, unwanted variance was defined in the sense of method-specific variance which may result from differences in task demands between the two IAT block conditions and which can be mirrored by the absolute size of the IAT effects. In a second analysis, unwanted variance was examined in a broader sense defined as those systematic variance components in the raw IAT scores that are not explained by the latent implicit personality factors. In contrast to the absolute IAT scores, this also considers biases associated with the direction of IAT effects (i.e., whether they are positive or negative in sign), biases that might result, for example, from the IAT's stimulus or category features. None of the explicit Big-Five factors was predictive for method-specific variance in the IATs (first analysis). However, when considering unwanted variance that goes beyond pure method-specific variance (second analysis), a substantial effect of neuroticism occurred that may have been driven by the affective valence of IAT attribute categories and the facilitated processing of negative stimuli, typically associated with neuroticism. The findings thus point to the necessity of using attribute category labels and stimuli of similar affective valence in personality IATs to avoid confounding due to recoding.

  15. Non-Gaussian Distribution of DNA Barcode Extension In Nanochannels Using High-throughput Imaging

    NASA Astrophysics Data System (ADS)

    Sheats, Julian; Reinhart, Wesley; Reifenberger, Jeff; Gupta, Damini; Muralidhar, Abhiram; Cao, Han; Dorfman, Kevin

    2015-03-01

    We present experimental data for the extension of internal segments of highly confined DNA using a high-throughput experimental setup. Barcode-labeled E. coli genomic DNA molecules were imaged at a high areal density in square nanochannels with sizes ranging from 40 nm to 51 nm in width. Over 25,000 molecules were used to obtain more than 1,000,000 measurements for genomic distances between 2,500 bp and 100,000 bp. The distribution of extensions has positive excess kurtosis and is skewed left due to weak backfolding in the channel. As a result, the two Odijk theories for the chain extension and variance bracket the experimental data. We compared to predictions of a harmonic approximation for the confinement free energy and show that it produces a substantial error in the variance. These results suggest an inherent error associated with any statistical analysis of barcoded DNA that relies on harmonic models for chain extension.

  16. Cross-bispectrum computation and variance estimation

    NASA Technical Reports Server (NTRS)

    Lii, K. S.; Helland, K. N.

    1981-01-01

    A method for the estimation of cross-bispectra of discrete real time series is developed. The asymptotic variance properties of the bispectrum are reviewed, and a method for the direct estimation of bispectral variance is given. The symmetry properties are described which minimize the computations necessary to obtain a complete estimate of the cross-bispectrum in the right-half-plane. A procedure is given for computing the cross-bispectrum by subdividing the domain into rectangular averaging regions which help reduce the variance of the estimates and allow easy application of the symmetry relationships to minimize the computational effort. As an example of the procedure, the cross-bispectrum of a numerically generated, exponentially distributed time series is computed and compared with theory.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Yue; Liu, Xin; Loeb, Abraham

    We perform a systematic search for sub-parsec binary supermassive black holes (BHs) in normal broad-line quasars at z < 0.8, using multi-epoch Sloan Digital Sky Survey (SDSS) spectroscopy of the broad Hβ line. Our working model is that (1) one and only one of the two BHs in the binary is active; (2) the active BH dynamically dominates its own broad-line region (BLR) in the binary system, so that the mean velocity of the BLR reflects the mean velocity of its host BH; (3) the inactive companion BH is orbiting at a distance of a few R_BLR, where R_BLR ∼ 0.01-0.1 pc is the BLR size. We search for the expected line-of-sight acceleration of the broad-line velocity from binary orbital motion by cross-correlating SDSS spectra from two epochs separated by up to several years in the quasar rest frame. Out of ∼700 pairs of spectra for which we have good measurements of the velocity shift between two epochs (1σ error ∼40 km s^−1), we detect 28 systems with significant velocity shifts in broad Hβ, among which 7 are the best candidates for the hypothesized binaries, 4 are most likely due to broad-line variability in single BHs, and the rest are ambiguous. Continued spectroscopic observations of these candidates will easily strengthen or disprove these claims. We use the distribution of the observed accelerations (mostly non-detections) to place constraints on the abundance of such binary systems among the general quasar population. Excess variance in the velocity shift is inferred for observations separated by longer than 0.4 yr (quasar rest frame). Attributing all the excess to binary motion would imply that most of the quasars in this sample must be in binaries, that the inactive BH must be on average more massive than the active one, and that the binary separation is at most a few times the size of the BLR. However, if this excess variance is partly or largely due to long-term broad-line variability, the requirement of a large population of close binaries is much weakened or even disfavored for massive companions. Future time-domain spectroscopic surveys of normal quasars can provide vital prior information on the structure function of stochastic velocity shifts induced by broad-line variability in single BHs. Such surveys with improved spectral quality, increased time baseline, and more epochs can greatly improve the statistical constraints of this method on the general binary population in broad-line quasars, further shrink the allowed binary parameter space, and detect true sub-parsec binaries.

  18. The Threat of Common Method Variance Bias to Theory Building

    ERIC Educational Resources Information Center

    Reio, Thomas G., Jr.

    2010-01-01

    The need for more theory building scholarship remains one of the pressing issues in the field of HRD. Researchers can employ quantitative, qualitative, and/or mixed methods to support vital theory-building efforts, understanding however that each approach has its limitations. The purpose of this article is to explore common method variance bias as…

  19. ADHD and Method Variance: A Latent Variable Approach Applied to a Nationally Representative Sample of College Freshmen

    ERIC Educational Resources Information Center

    Konold, Timothy R.; Glutting, Joseph J.

    2008-01-01

    This study employed a correlated trait-correlated method application of confirmatory factor analysis to disentangle trait and method variance from measures of attention-deficit/hyperactivity disorder obtained at the college level. The two trait factors were "Diagnostic and Statistical Manual of Mental Disorders-Fourth Edition" ("DSM-IV")…

  20. Influences of optical-spectrum errors on excess relative intensity noise in a fiber-optic gyroscope

    NASA Astrophysics Data System (ADS)

    Zheng, Yue; Zhang, Chunxi; Li, Lijing

    2018-03-01

    The excess relative intensity noise (RIN) generated from broadband sources degrades the angular-random-walk performance of a fiber-optic gyroscope dramatically. Many methods have been proposed and managed to suppress the excess RIN. However, the properties of the excess RIN under the influences of different optical errors in the fiber-optic gyroscope have not been systematically investigated. Therefore, it is difficult for the existing RIN-suppression methods to achieve the optimal results in practice. In this work, the influences of different optical-spectrum errors on the power spectral density of the excess RIN are theoretically analyzed. In particular, the properties of the excess RIN affected by the raised-cosine-type ripples in the optical spectrum are elaborately investigated. Experimental measurements of the excess RIN corresponding to different optical-spectrum errors are in good agreement with our theoretical analysis, demonstrating its validity. This work provides a comprehensive understanding of the properties of the excess RIN under the influences of different optical-spectrum errors. Potentially, it can be utilized to optimize the configurations of the existing RIN-suppression methods by accurately evaluating the power spectral density of the excess RIN.

  1. Diallel analysis for sex-linked and maternal effects.

    PubMed

    Zhu, J; Weir, B S

    1996-01-01

    Genetic models including sex-linked and maternal effects as well as autosomal gene effects are described. Monte Carlo simulations were conducted to compare efficiencies of estimation by minimum norm quadratic unbiased estimation (MINQUE) and restricted maximum likelihood (REML) methods. MINQUE(1), which has 1 for all prior values, has a similar efficiency to MINQUE(θ), which requires prior estimates of parameter values. MINQUE(1) has the advantage over REML of unbiased estimation and convenient computation. An adjusted unbiased prediction (AUP) method is developed for predicting random genetic effects. AUP is desirable for its easy computation and unbiasedness of both mean and variance of predictors. The jackknife procedure is appropriate for estimating the sampling variances of estimated variances (or covariances) and of predicted genetic effects. A t-test based on jackknife variances is applicable for detecting significance of variation. Worked examples from mice and silkworm data are given in order to demonstrate variance and covariance estimation and genetic effect prediction.
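
    A minimal sketch of the delete-one jackknife used to attach a sampling variance (and hence a t-type test) to an estimate is given below; the estimator shown is just the sample variance, standing in for a genetic variance component.

```python
# Delete-one jackknife for the sampling variance of an arbitrary estimator.
import numpy as np

def jackknife_variance(data, estimator):
    n = len(data)
    theta_i = np.array([estimator(np.delete(data, i)) for i in range(n)])  # leave-one-out
    theta_bar = theta_i.mean()
    return (n - 1) / n * np.sum((theta_i - theta_bar) ** 2)

rng = np.random.default_rng(6)
sample = rng.normal(loc=0.0, scale=2.0, size=50)
est = np.var(sample, ddof=1)                            # point estimate
se = np.sqrt(jackknife_variance(sample, lambda d: np.var(d, ddof=1)))
print(f"estimate = {est:.3f}, jackknife SE = {se:.3f}, t = {est / se:.2f}")
```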

  2. The mean and variance of phylogenetic diversity under rarefaction

    PubMed Central

    Matsen, Frederick A.

    2013-01-01

    Summary Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing the exact solution mean and variance to those calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparisons of samples of different depth are required. PMID:23833701

  3. The mean and variance of phylogenetic diversity under rarefaction.

    PubMed

    Nipperess, David A; Matsen, Frederick A

    2013-06-01

    Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing the exact solution mean and variance to those calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparisons of samples of different depth are required.
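
    The exact-versus-Monte-Carlo comparison can be illustrated with the classical rarefaction formula for species richness, as sketched below; the PD formulae derived in the paper are analogous but involve branch lengths and are not reproduced here. The toy abundance vector is invented.

```python
# Exact expected species richness under rarefaction vs. a Monte Carlo check.
import numpy as np
from math import comb

counts = np.array([50, 20, 10, 5, 3, 1, 1])      # individuals per species (toy data)
N, m = counts.sum(), 30                           # total individuals, rarefied depth

# Exact expectation: sum over species of P(species appears in a subsample of size m)
exact = sum(1 - comb(N - c, m) / comb(N, m) for c in counts)

# Monte Carlo check by repeated random subsampling without replacement
rng = np.random.default_rng(7)
pool = np.repeat(np.arange(counts.size), counts)  # one label per individual
mc = np.mean([np.unique(rng.choice(pool, m, replace=False)).size for _ in range(5000)])
print(f"exact = {exact:.3f}, Monte Carlo = {mc:.3f}")
```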

  4. Deflation as a method of variance reduction for estimating the trace of a matrix inverse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gambhir, Arjun Singh; Stathopoulos, Andreas; Orginos, Kostas

    Many fields require computing the trace of the inverse of a large, sparse matrix. The typical method used for such computations is the Hutchinson method which is a Monte Carlo (MC) averaging over matrix quadratures. To improve its convergence, several variance reduction techniques have been proposed. In this paper, we study the effects of deflating the near null singular value space. We make two main contributions. First, we analyze the variance of the Hutchinson method as a function of the deflated singular values and vectors. Although this provides good intuition in general, by assuming additionally that the singular vectors are random unitary matrices, we arrive at concise formulas for the deflated variance that include only the variance and mean of the singular values. We make the remarkable observation that deflation may increase variance for Hermitian matrices but not for non-Hermitian ones. This is a rare, if not unique, property where non-Hermitian matrices outperform Hermitian ones. The theory can be used as a model for predicting the benefits of deflation. Second, we use deflation in the context of a large scale application of "disconnected diagrams" in Lattice QCD. On lattices, Hierarchical Probing (HP) has previously provided an order of magnitude of variance reduction over MC by removing "error" from neighboring nodes of increasing distance in the lattice. Although deflation used directly on MC yields a limited improvement of 30% in our problem, when combined with HP they reduce variance by a factor of over 150 compared to MC. For this, we pre-computed the 1000 smallest singular values of an ill-conditioned matrix of size 25 million. Furthermore, using PRIMME and a domain-specific Algebraic Multigrid preconditioner, we perform one of the largest eigenvalue computations in Lattice QCD at a fraction of the cost of our trace computation.

  5. Deflation as a method of variance reduction for estimating the trace of a matrix inverse

    DOE PAGES

    Gambhir, Arjun Singh; Stathopoulos, Andreas; Orginos, Kostas

    2017-04-06

    Many fields require computing the trace of the inverse of a large, sparse matrix. The typical method used for such computations is the Hutchinson method which is a Monte Carlo (MC) averaging over matrix quadratures. To improve its convergence, several variance reduction techniques have been proposed. In this paper, we study the effects of deflating the near null singular value space. We make two main contributions. First, we analyze the variance of the Hutchinson method as a function of the deflated singular values and vectors. Although this provides good intuition in general, by assuming additionally that the singular vectors are random unitary matrices, we arrive at concise formulas for the deflated variance that include only the variance and mean of the singular values. We make the remarkable observation that deflation may increase variance for Hermitian matrices but not for non-Hermitian ones. This is a rare, if not unique, property where non-Hermitian matrices outperform Hermitian ones. The theory can be used as a model for predicting the benefits of deflation. Second, we use deflation in the context of a large scale application of "disconnected diagrams" in Lattice QCD. On lattices, Hierarchical Probing (HP) has previously provided an order of magnitude of variance reduction over MC by removing "error" from neighboring nodes of increasing distance in the lattice. Although deflation used directly on MC yields a limited improvement of 30% in our problem, when combined with HP they reduce variance by a factor of over 150 compared to MC. For this, we pre-computed the 1000 smallest singular values of an ill-conditioned matrix of size 25 million. Furthermore, using PRIMME and a domain-specific Algebraic Multigrid preconditioner, we perform one of the largest eigenvalue computations in Lattice QCD at a fraction of the cost of our trace computation.
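
    A toy sketch of Hutchinson trace estimation with deflation of a known low-dimensional subspace is given below. The diagonal test matrix, the probe count, and the choice of deflation space are illustrative assumptions; nothing here corresponds to the lattice-QCD setting, Hierarchical Probing, or PRIMME.

```python
# Hutchinson trace estimator with and without deflating a near-null subspace.
import numpy as np

rng = np.random.default_rng(8)
n, k, n_samples = 200, 5, 200
A = np.diag(np.concatenate([rng.uniform(0.01, 0.05, k),      # a few tiny eigenvalues
                            rng.uniform(1.0, 2.0, n - k)]))
Ainv = np.linalg.inv(A)

# Deflation subspace: eigenvectors of the k smallest eigenvalues (coordinate axes here)
V = np.eye(n)[:, :k]
deflated_part = np.trace(V.T @ Ainv @ V)                      # handled exactly
P = np.eye(n) - V @ V.T                                       # projector off the subspace

def hutchinson(M, samples):
    z = rng.choice([-1.0, 1.0], size=(n, samples))            # Rademacher probes
    return np.mean(np.einsum("is,is->s", z, M @ z))           # mean of z^T M z

plain = hutchinson(Ainv, n_samples)
deflated = hutchinson(P @ Ainv @ P, n_samples) + deflated_part
print(f"true={np.trace(Ainv):.1f}  plain MC={plain:.1f}  deflated MC={deflated:.1f}")
```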

  6. Optimal distribution of integration time for intensity measurements in degree of linear polarization polarimetry.

    PubMed

    Li, Xiaobo; Hu, Haofeng; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie

    2016-04-04

    We consider a degree of linear polarization (DOLP) polarimetry system, which performs two intensity measurements at orthogonal polarization states to estimate the DOLP. We show that if the total integration time of the intensity measurements is fixed, the variance of the DOLP estimator depends on the distribution of integration time between the two intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the DOLP estimator can be decreased. In this paper, we obtain an approximate closed-form solution for the optimal distribution of integration time by employing the Delta method and the Lagrange multiplier method. According to the theoretical analyses and real-world experiments, it is shown that the variance of the DOLP estimator can be decreased for any value of DOLP. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improve the measurement accuracy of the polarimetry system.
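
    The optimization described here can be reproduced in miniature under a shot-noise assumption: model each intensity as Poisson counts over its integration time, propagate to the DOLP estimator with the Delta method, and minimize over the time split with the total time fixed. This is a hedged sketch of that reasoning, not the paper's exact derivation; the photon rates are assumed values and the closed-form split follows from the toy model's Lagrange-multiplier condition.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def dolp_variance(t1, T, I1, I2):
    """Delta-method variance of P_hat = (I1 - I2) / (I1 + I2) when the two
    intensities are estimated from Poisson counts over times t1 and T - t1."""
    t2 = T - t1
    return 4.0 * (I2**2 * I1 / t1 + I1**2 * I2 / t2) / (I1 + I2)**4

T, I1, I2 = 1.0, 1000.0, 400.0          # total time (s) and photon rates (assumed values)
res = minimize_scalar(dolp_variance, bounds=(1e-6, T - 1e-6),
                      args=(T, I1, I2), method="bounded")
t1_closed = T * np.sqrt(I2) / (np.sqrt(I1) + np.sqrt(I2))   # Lagrange-multiplier optimum
print(f"numerical t1* = {res.x:.4f} s, closed form t1* = {t1_closed:.4f} s")
print(f"variance at optimum {dolp_variance(res.x, T, I1, I2):.3e} "
      f"vs equal split {dolp_variance(T / 2, T, I1, I2):.3e}")
```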

  7. Use of the 50-g glucose challenge test to predict excess delivery weight.

    PubMed

    Beksac, M Sinan; Tanacan, Atakan; Hakli, Duygu A; Ozyuncu, Ozgur

    2018-07-01

    To identify a cut-off value for the 50-g glucose challenge test (GCT) that predicts excess delivery weight. A retrospective study was conducted among pregnant women who underwent a 50-g GCT at Hacettepe University Hospital, Ankara, Turkey, between January 1, 2000, and December 31, 2016. Patients with singleton pregnancies who delivered live neonates after 28 weeks of pregnancy were included. Patients were classified according to their 50-g GCT values into group 1 (<7.770 mmol/L); group 2 (7.770 to <8.880 mmol/L); group 3 (8.880-9.990 mmol/L); or group 4 (>9.990 mmol/L). Classification and regression tree data mining was performed to identify the 50-g GCT cut-off value corresponding to a substantial increase in delivery weight. Median delivery weights were 3100 g in group 1 (n=352), 3200 g in group 2 (n=165), 3720 g in group 3 (n=47), and 3865 g in group 4 (n=20). Gravidity, 50-g GCT value, and pregnancy duration at delivery explained 30.6% of the observed variance in delivery weight. The maternal blood glucose cut-off required to predict excessive delivery weight was 8.741 mmol/L. The 50-g GCT can be used to identify women at risk of delivering offspring with excessive delivery weight. © 2018 International Federation of Gynecology and Obstetrics.
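
    For readers unfamiliar with classification and regression trees, the cut-off search amounts to finding the single split of the GCT value that best separates delivery weights. A depth-one regression tree does exactly that; the sketch below uses simulated, purely illustrative data (not the study's records), so the recovered threshold only mimics the reported 8.741 mmol/L by construction.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
# Hypothetical stand-in data: 584 pregnancies with a 50-g GCT value (mmol/L)
# and a delivery weight (g) that jumps above an assumed cut-off of ~8.74 mmol/L.
gct = rng.uniform(5.0, 11.0, size=584)
weight = 3100 + 600 * (gct > 8.74) + rng.normal(0, 350, size=584)

# A depth-1 regression tree finds the single GCT split that best explains
# delivery weight, i.e. a data-driven cut-off value.
tree = DecisionTreeRegressor(max_depth=1).fit(gct.reshape(-1, 1), weight)
print(f"estimated GCT cut-off: {tree.tree_.threshold[0]:.3f} mmol/L")
```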

  8. Simple Epidemiological Dynamics Explain Phylogenetic Clustering of HIV from Patients with Recent Infection

    PubMed Central

    Volz, Erik M.; Koopman, James S.; Ward, Melissa J.; Brown, Andrew Leigh; Frost, Simon D. W.

    2012-01-01

    Phylogenies of highly genetically variable viruses such as HIV-1 are potentially informative of epidemiological dynamics. Several studies have demonstrated the presence of clusters of highly related HIV-1 sequences, particularly among recently HIV-infected individuals, which have been used to argue for a high transmission rate during acute infection. Using a large set of HIV-1 subtype B pol sequences collected from men who have sex with men, we demonstrate that virus from recent infections tends to be phylogenetically clustered at a greater rate than virus from patients with chronic infection (‘excess clustering’) and also tends to cluster with other recent HIV infections rather than chronic, established infections (‘excess co-clustering’), consistent with previous reports. To determine the role that a higher infectivity during acute infection may play in excess clustering and co-clustering, we developed a simple model of HIV infection that incorporates an early period of intensified transmission, and explicitly considers the dynamics of phylogenetic clusters alongside the dynamics of acute and chronic infected cases. We explored the potential for clustering statistics to be used for inference of acute stage transmission rates and found that no single statistic explains very much variance in the parameters controlling acute stage transmission rates. We demonstrate that high transmission rates during the acute stage are not the main cause of excess clustering of virus from patients with early/acute infection compared to chronic infection, which may simply reflect the shorter time since transmission in acute infection. Higher transmission during acute infection can result in excess co-clustering of sequences, while the extent of clustering observed is most sensitive to the fraction of infections sampled. PMID:22761556

  9. Correction of Excessive Precipitation Over Steep and High Mountains in a General Circulation Model

    NASA Technical Reports Server (NTRS)

    Chao, Winston C.

    2012-01-01

    Excessive precipitation over steep and high mountains (EPSM) is a well-known problem in GCMs and meso-scale models. This problem impairs simulation and data assimilation products. Among the possible causes investigated in this study, we found that the most important one, by far, is a missing upward transport of heat out of the boundary layer due to the vertical circulations forced by the daytime upslope winds, which are forced by the heated boundary layer on subgrid-scale slopes. These upslope winds are associated with large subgrid-scale topographic variation, which is found over steep and high mountains. Without such subgrid-scale heat ventilation, the resolvable-scale upslope flow in the boundary layer generated by surface sensible heat flux along the mountain slopes is excessive. Such an excessive resolvable-scale upslope flow combined with the high moisture content in the boundary layer results in excessive moisture transport toward mountaintops, which in turn gives rise to EPSM. Other possible causes of EPSM that we have investigated include 1) a poorly designed horizontal moisture flux in the terrain-following coordinates, 2) the condition for cumulus convection being too easily satisfied at mountaintops, 3) the presence of conditional instability of the computational kind, and 4) the absence of blocked flow drag. These are all minor or inconsequential. We have parameterized the ventilation effects of the subgrid-scale heated-slope-induced vertical circulation (SHVC) by removing heat from the boundary layer and depositing it in layers higher up when the topographic variance exceeds a critical value. Test results using NASA/Goddard's GEOS-5 GCM have shown that this largely solved the EPSM problem.

  10. Alcohol expectancies longitudinally predict drinking and the alcohol myopia effects of relief, self-inflation, and excess.

    PubMed

    Lac, Andrew; Brack, Nathaniel

    2018-02-01

    Alcohol myopia theory posits that alcohol consumption attenuates information processing capacity, and that expectancy beliefs together with intake level are responsible for experiences in myopic effects (relief, self-inflation, and excess). Adults (N=413) averaging 36.39 (SD=13.02) years of age completed the Comprehensive Effects of Alcohol questionnaire at baseline, followed by alcohol use measures (frequency and quantity) and the Alcohol Myopia Scale one month later. Three structural equation models based on differing construct manifestations of alcohol expectancies served to longitudinally forecast alcohol use and myopia. In Model 1, overall expectancy predicted greater alcohol use and higher levels of all three myopic effects. In Model 2, specifying separate positive and negative expectancy factors, positive but not negative expectancy predicted greater use. Furthermore, positive expectancy and use explained higher myopic relief and higher self-inflation, whereas positive expectancy, negative expectancy, and use explained higher myopic excess. In Model 3, the seven specific expectancy subscales (sociability, tension reduction, liquid courage, sexuality, cognitive and behavioral impairment, risk and aggression, and self-perception) were simultaneously specified as predictors. Tension reduction expectancy, sexuality expectancy, and use contributed to higher myopic relief; sexuality expectancy and use explained higher myopic self-inflation; and risk and aggression expectancy and use accounted for higher myopic excess. Across all three predictive models, the total variance explained ranged from 12 to 19% for alcohol use, 50 to 51% for relief, 29 to 34% for self-inflation, and 32 to 35% for excess. Findings support that the type of alcohol myopia experienced is a concurrent function of self-fulfilling alcohol prophecies and drinking levels. The interpreted measurement manifestation of expectancy yielded different prevention implications. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. A novel hybrid scattering order-dependent variance reduction method for Monte Carlo simulations of radiative transfer in cloudy atmosphere

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Cui, Shengcheng; Yang, Jun; Gao, Haiyang; Liu, Chao; Zhang, Zhibo

    2017-03-01

    We present a novel hybrid scattering order-dependent variance reduction method to accelerate the convergence rate in both forward and backward Monte Carlo radiative transfer simulations involving highly forward-peaked scattering phase function. This method is built upon a newly developed theoretical framework that not only unifies both forward and backward radiative transfer in scattering-order-dependent integral equation, but also generalizes the variance reduction formalism in a wide range of simulation scenarios. In previous studies, variance reduction is achieved either by using the scattering phase function forward truncation technique or the target directional importance sampling technique. Our method combines both of them. A novel feature of our method is that all the tuning parameters used for phase function truncation and importance sampling techniques at each order of scattering are automatically optimized by the scattering order-dependent numerical evaluation experiments. To make such experiments feasible, we present a new scattering order sampling algorithm by remodeling integral radiative transfer kernel for the phase function truncation method. The presented method has been implemented in our Multiple-Scaling-based Cloudy Atmospheric Radiative Transfer (MSCART) model for validation and evaluation. The main advantage of the method is that it greatly improves the trade-off between numerical efficiency and accuracy order by order.

  12. Some refinements on the comparison of areal sampling methods via simulation

    Treesearch

    Jeffrey Gove

    2017-01-01

    The design of forest inventories and development of new sampling methods useful in such inventories normally have a two-fold target of design unbiasedness and minimum variance in mind. Many considerations such as costs go into the choices of sampling method for operational and other levels of inventory. However, the variance in terms of meeting a specified level of...

  13. Efficiently estimating salmon escapement uncertainty using systematically sampled data

    USGS Publications Warehouse

    Reynolds, Joel H.; Woody, Carol Ann; Gove, Nancy E.; Fair, Lowell F.

    2007-01-01

    Fish escapement is generally monitored using nonreplicated systematic sampling designs (e.g., via visual counts from towers or hydroacoustic counts). These sampling designs support a variety of methods for estimating the variance of the total escapement. Unfortunately, all the methods give biased results, with the magnitude of the bias being determined by the underlying process patterns. Fish escapement commonly exhibits positive autocorrelation and nonlinear patterns, such as diurnal and seasonal patterns. For these patterns, a poor choice of variance estimator can needlessly increase the uncertainty managers have to deal with in sustaining fish populations. We illustrate the effect of sampling design and variance estimator choice on variance estimates of total escapement for anadromous salmonids from systematic samples of fish passage. Using simulated tower counts of sockeye salmon Oncorhynchus nerka escapement on the Kvichak River, Alaska, five variance estimators for nonreplicated systematic samples were compared to determine the least biased. Using the least biased variance estimator, four confidence interval estimators were compared for expected coverage and mean interval width. Finally, five systematic sampling designs were compared to determine the design giving the smallest average variance estimate for total annual escapement. For nonreplicated systematic samples of fish escapement, all variance estimators were positively biased. Compared to the other estimators, the least biased estimator reduced bias by 12% to 98% on average. All confidence intervals gave effectively identical results. Replicated systematic sampling designs consistently provided the smallest average estimated variance among those compared.
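
    The estimator comparison in this record can be mimicked with a small simulation: generate an autocorrelated, diurnally patterned passage series, draw a nonreplicated systematic sample, and compare the naive simple-random-sampling variance estimator with a successive-difference estimator against the true sampling variance over all possible systematic starts. All values below (rates, autocorrelation, sampling interval) are illustrative assumptions, not the Kvichak River simulation.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical hourly fish passage with a diurnal pattern and positive autocorrelation
# (a stand-in for tower counts; numbers are illustrative only).
hours = np.arange(24 * 30)
diurnal = 200 + 150 * np.sin(2 * np.pi * hours / 24)
noise = np.convolve(rng.normal(0, 60, hours.size), np.ones(6) / 6, mode="same")
y = np.clip(diurnal + noise, 0, None)
N, k = y.size, 6                      # sample every 6th hour
n = N // k

def estimators(sample):
    """Total-escapement estimate plus two variance estimators for a systematic sample."""
    t_hat = N / n * sample.sum()
    fpc = 1 - n / N
    v_srs = N**2 * fpc * sample.var(ddof=1) / n              # treats the sample as SRS
    d = np.diff(sample)
    v_sd = N**2 * fpc * (d**2).sum() / (2 * n * (n - 1))     # successive-difference estimator
    return t_hat, v_srs, v_sd

# True sampling variance of the total over all k possible systematic samples.
totals = [N / n * y[start::k].sum() for start in range(k)]
true_var = np.var(totals)
_, v_srs, v_sd = estimators(y[0::k])
print(f"true var {true_var:.3e}, SRS estimator {v_srs:.3e}, "
      f"successive-difference estimator {v_sd:.3e}")
```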

  14. Intelligent ensemble T-S fuzzy neural networks with RCDPSO_DM optimization for effective handling of complex clinical pathway variances.

    PubMed

    Du, Gang; Jiang, Zhibin; Diao, Xiaodi; Yao, Yang

    2013-07-01

    Takagi-Sugeno (T-S) fuzzy neural networks (FNNs) can be used to handle complex, fuzzy, uncertain clinical pathway (CP) variances. However, there are many drawbacks, such as slow training rate, propensity to become trapped in a local minimum and poor ability to perform a global search. In order to improve overall performance of variance handling by T-S FNNs, a new CP variance handling method is proposed in this study. It is based on random cooperative decomposing particle swarm optimization with double mutation mechanism (RCDPSO_DM) for T-S FNNs. Moreover, the proposed integrated learning algorithm, combining the RCDPSO_DM algorithm with a Kalman filtering algorithm, is applied to optimize antecedent and consequent parameters of constructed T-S FNNs. Then, a multi-swarm cooperative immigrating particle swarm algorithm ensemble method is used for intelligent ensemble T-S FNNs with RCDPSO_DM optimization to further improve stability and accuracy of CP variance handling. Finally, two case studies on liver and kidney poisoning variances in osteosarcoma preoperative chemotherapy are used to validate the proposed method. The result demonstrates that intelligent ensemble T-S FNNs based on the RCDPSO_DM achieves superior performances, in terms of stability, efficiency, precision and generalizability, over PSO ensemble of all T-S FNNs with RCDPSO_DM optimization, single T-S FNNs with RCDPSO_DM optimization, standard T-S FNNs, standard Mamdani FNNs and T-S FNNs based on other algorithms (cooperative particle swarm optimization and particle swarm optimization) for CP variance handling. Therefore, it makes CP variance handling more effective. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Additive Partial Least Squares for efficient modelling of independent variance sources demonstrated on practical case studies.

    PubMed

    Luoma, Pekka; Natschläger, Thomas; Malli, Birgit; Pawliczek, Marcin; Brandstetter, Markus

    2018-05-12

    A model recalibration method based on additive Partial Least Squares (PLS) regression is generalized for multi-adjustment scenarios of independent variance sources (referred to as additive PLS - aPLS). aPLS allows for effortless model readjustment under changing measurement conditions and the combination of independent variance sources with the initial model by means of additive modelling. We demonstrate these distinguishing features on two NIR spectroscopic case studies. In case study 1, aPLS was used as a readjustment method for an emerging offset; the achieved RMS error of prediction (1.91 a.u.) was at a similar level as before the offset occurred (2.11 a.u.). In case study 2, a calibration combining different variance sources was conducted; the achieved performance was sufficient, with an absolute error better than 0.8% of the mean concentration, thereby compensating for the negative effects of two independent variance sources. The presented results show the applicability of the aPLS approach. The main advantages of the method are that the original model stays unadjusted and that the modelling is conducted on concrete changes in the spectra, thus supporting efficient (in most cases straightforward) modelling. Additionally, the method is put into context of existing machine learning algorithms. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. flowVS: channel-specific variance stabilization in flow cytometry.

    PubMed

    Azad, Ariful; Rajwa, Bartek; Pothen, Alex

    2016-07-28

    Comparing phenotypes of heterogeneous cell populations from multiple biological conditions is at the heart of scientific discovery based on flow cytometry (FC). When the biological signal is measured by the average expression of a biomarker, standard statistical methods require that variance be approximately stabilized in populations to be compared. Since the mean and variance of a cell population are often correlated in fluorescence-based FC measurements, a preprocessing step is needed to stabilize the within-population variances. We present a variance-stabilization algorithm, called flowVS, that removes the mean-variance correlations from cell populations identified in each fluorescence channel. flowVS transforms each channel from all samples of a data set by the inverse hyperbolic sine (asinh) transformation. For each channel, the parameters of the transformation are optimally selected by Bartlett's likelihood-ratio test so that the populations attain homogeneous variances. The optimum parameters are then used to transform the corresponding channels in every sample. flowVS is therefore an explicit variance-stabilization method that stabilizes within-population variances in each channel by evaluating the homoskedasticity of clusters with a likelihood-ratio test. With two publicly available datasets, we show that flowVS removes the mean-variance dependence from raw FC data and makes the within-population variance relatively homogeneous. We demonstrate that alternative transformation techniques such as flowTrans, flowScape, logicle, and FCSTrans might not stabilize variance. Besides flow cytometry, flowVS can also be applied to stabilize variance in microarray data. With a publicly available data set we demonstrate that flowVS performs as well as the VSN software, a state-of-the-art approach developed for microarrays. The homogeneity of variance in cell populations across FC samples is desirable when extracting features uniformly and comparing cell populations with different levels of marker expressions. The newly developed flowVS algorithm solves the variance-stabilization problem in FC and microarrays by optimally transforming data with the help of Bartlett's likelihood-ratio test. On two publicly available FC datasets, flowVS stabilizes within-population variances more evenly than the available transformation and normalization techniques. flowVS-based variance stabilization can help in performing comparison and alignment of phenotypically identical cell populations across different samples. flowVS and the datasets used in this paper are publicly available in Bioconductor.
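
    A stripped-down version of the core idea, selecting the asinh cofactor that makes within-population variances most homogeneous, can be written with SciPy's Bartlett test. flowVS optimizes a likelihood-ratio statistic across clusters for every channel of a data set; the sketch below is a simplified single-channel stand-in with pre-assigned populations and an assumed cofactor grid, not the Bioconductor implementation.

```python
import numpy as np
from scipy.stats import bartlett

def bartlett_statistic(channel_values, labels, cofactor):
    """Bartlett statistic across populations after the asinh(x / cofactor) transform."""
    transformed = np.arcsinh(channel_values / cofactor)
    groups = [transformed[labels == g] for g in np.unique(labels)]
    return bartlett(*groups).statistic

def select_cofactor(channel_values, labels, grid=np.logspace(0, 4, 60)):
    """Pick the cofactor whose asinh transform makes the within-population
    variances most homogeneous (smallest Bartlett statistic)."""
    stats = [bartlett_statistic(channel_values, labels, c) for c in grid]
    return grid[int(np.argmin(stats))]

# Hypothetical one-channel example: two populations with mean-dependent variance.
rng = np.random.default_rng(2)
x = np.concatenate([rng.gamma(2.0, 50.0, 5000), rng.gamma(20.0, 50.0, 5000)])
labels = np.repeat([0, 1], 5000)
print("selected cofactor:", select_cofactor(x, labels))
```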

  17. A Multilevel AR(1) Model: Allowing for Inter-Individual Differences in Trait-Scores, Inertia, and Innovation Variance.

    PubMed

    Jongerling, Joran; Laurenceau, Jean-Philippe; Hamaker, Ellen L

    2015-01-01

    In this article we consider a multilevel first-order autoregressive [AR(1)] model with random intercepts, random autoregression, and random innovation variance (i.e., the level 1 residual variance). Including random innovation variance is an important extension of the multilevel AR(1) model for two reasons. First, between-person differences in innovation variance are important from a substantive point of view, in that they capture differences in sensitivity and/or exposure to unmeasured internal and external factors that influence the process. Second, using simulation methods we show that modeling the innovation variance as fixed across individuals, when it should be modeled as a random effect, leads to biased parameter estimates. Additionally, we use simulation methods to compare maximum likelihood estimation to Bayesian estimation of the multilevel AR(1) model and investigate the trade-off between the number of individuals and the number of time points. We provide an empirical illustration by applying the extended multilevel AR(1) model to daily positive affect ratings from 89 married women over the course of 42 consecutive days.
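
    The data-generating model discussed in this record is easy to simulate, which also makes the bias argument concrete: fitting a common innovation variance to data generated with person-specific variances misstates the level-1 process. The sketch below simulates a multilevel AR(1) with random intercepts, random autoregression, and random (log-normal) innovation variance; the dimensions echo the empirical illustration (89 persons, 42 days), but every distributional choice is an assumption.

```python
import numpy as np

def simulate_multilevel_ar1(n_people=89, n_days=42, seed=3):
    """Simulate y[i, t] = mu_i + phi_i * (y[i, t-1] - mu_i) + e[i, t], with
    person-specific trait mu_i, inertia phi_i, and innovation variance sigma2_i
    (the random innovation variance extension)."""
    rng = np.random.default_rng(seed)
    mu = rng.normal(5.0, 1.0, n_people)                              # random intercepts (traits)
    phi = np.clip(rng.normal(0.3, 0.15, n_people), -0.95, 0.95)      # random inertia
    sigma2 = np.exp(rng.normal(np.log(0.8), 0.4, n_people))          # random innovation variance
    y = np.empty((n_people, n_days))
    y[:, 0] = mu + rng.normal(0, np.sqrt(sigma2 / (1 - phi**2)))     # stationary start
    for t in range(1, n_days):
        y[:, t] = mu + phi * (y[:, t - 1] - mu) + rng.normal(0, np.sqrt(sigma2))
    return y

y = simulate_multilevel_ar1()
print(y.shape, y.mean().round(2), y.std().round(2))
```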

  18. Merton's problem for an investor with a benchmark in a Barndorff-Nielsen and Shephard market.

    PubMed

    Lennartsson, Jan; Lindberg, Carl

    2015-01-01

    To try to outperform an externally given benchmark with known weights is the most common equity mandate in the financial industry. For quantitative investors, this task is predominantly approached by optimizing their portfolios consecutively over short time horizons with one-period models. We seek in this paper to provide a theoretical justification to this practice when the underlying market is of Barndorff-Nielsen and Shephard type. This is done by verifying that an investor who seeks to maximize her expected terminal exponential utility of wealth in excess of her benchmark will in fact use an optimal portfolio equivalent to the one-period Markowitz mean-variance problem in continuum under the corresponding Black-Scholes market. Further, we can represent the solution to the optimization problem as in Feynman-Kac form. Hence, the problem, and its solution, is analogous to Merton's classical portfolio problem, with the main difference that Merton maximizes expected utility of terminal wealth, not wealth in excess of a benchmark.

  19. flowVS: channel-specific variance stabilization in flow cytometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azad, Ariful; Rajwa, Bartek; Pothen, Alex

    Comparing phenotypes of heterogeneous cell populations from multiple biological conditions is at the heart of scientific discovery based on flow cytometry (FC). When the biological signal is measured by the average expression of a biomarker, standard statistical methods require that variance be approximately stabilized in populations to be compared. Since the mean and variance of a cell population are often correlated in fluorescence-based FC measurements, a preprocessing step is needed to stabilize the within-population variances.

  20. flowVS: channel-specific variance stabilization in flow cytometry

    DOE PAGES

    Azad, Ariful; Rajwa, Bartek; Pothen, Alex

    2016-07-28

    Comparing phenotypes of heterogeneous cell populations from multiple biological conditions is at the heart of scientific discovery based on flow cytometry (FC). When the biological signal is measured by the average expression of a biomarker, standard statistical methods require that variance be approximately stabilized in populations to be compared. Since the mean and variance of a cell population are often correlated in fluorescence-based FC measurements, a preprocessing step is needed to stabilize the within-population variances.

  1. Excess amino acid polymorphism in mitochondrial DNA: contrasts among genes from Drosophila, mice, and humans.

    PubMed

    Rand, D M; Kann, L M

    1996-07-01

    Recent studies of mitochondrial DNA (mtDNA) variation in mammals and Drosophila have shown an excess of amino acid variation within species (replacement polymorphism) relative to the number of silent and replacement differences fixed between species. To examine further this pattern of nonneutral mtDNA evolution, we present sequence data for the ND3 and ND5 genes from 59 lines of Drosophila melanogaster and 29 lines of D. simulans. Of interest are the frequency spectra of silent and replacement polymorphisms, and potential variation among genes and taxa in the departures from neutral expectations. The Drosophila ND3 and ND5 data show no significant excess of replacement polymorphism using the McDonald-Kreitman test. These data are in contrast to significant departures from neutrality for the ND3 gene in mammals and other genes in Drosophila mtDNA (cytochrome b and ATPase 6). Pooled across genes, however, both Drosophila and human mtDNA show very significant excesses of amino acid polymorphism. Silent polymorphisms at ND5 show a significantly higher variance in frequency than replacement polymorphisms, and the latter show a significant skew toward low frequencies (Tajima's D = -1.954). These patterns are interpreted in light of the nearly neutral theory where mildly deleterious amino acid haplotypes are observed as ephemeral variants within species but do not contribute to divergence. The patterns of polymorphism and divergence at charge-altering amino acid sites are presented for the Drosophila ND5 gene to examine the evolution of functionally distinct mutations. Excess charge-altering polymorphism is observed at the carboxyl terminal and excess charge-altering divergence is detected at the amino terminal. While the mildly deleterious model fits as a net effect in the evolution of nonrecombining mitochondrial genomes, these data suggest that opposing evolutionary pressures may act on different regions of mitochondrial genes and genomes.
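
    The McDonald-Kreitman test referred to here is a 2x2 contingency comparison of replacement versus silent changes that are polymorphic within species versus fixed between species, usually evaluated with Fisher's exact test. The counts below are hypothetical, chosen only to illustrate an excess of replacement polymorphism; they are not the ND3/ND5 data.

```python
from scipy.stats import fisher_exact

# McDonald-Kreitman 2x2 table (hypothetical counts):
# rows = replacement vs silent changes, columns = polymorphic vs fixed.
table = [[24, 4],    # replacement: polymorphic, fixed
         [26, 17]]   # silent:      polymorphic, fixed
odds_ratio, p_value = fisher_exact(table)
# Neutrality predicts equal replacement/silent ratios in both columns;
# an excess of replacement polymorphism appears as an odds ratio above 1.
print(f"odds ratio {odds_ratio:.2f}, Fisher exact p = {p_value:.3f}")
```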

  2. A model and variance reduction method for computing statistical outputs of stochastic elliptic partial differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vidal-Codina, F., E-mail: fvidal@mit.edu; Nguyen, N.C., E-mail: cuongng@mit.edu; Giles, M.B., E-mail: mike.giles@maths.ox.ac.uk

    We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.

  3. Investigation of noise properties in grating-based x-ray phase tomography with reverse projection method

    NASA Astrophysics Data System (ADS)

    Bao, Yuan; Wang, Yan; Gao, Kun; Wang, Zhi-Li; Zhu, Pei-Ping; Wu, Zi-Yu

    2015-10-01

    The relationship between noise variance and spatial resolution in grating-based x-ray phase computed tomography (PCT) imaging is investigated with the reverse projection extraction method, and the noise variances of the reconstructed absorption coefficient and refractive index decrement are compared. For the differential phase contrast method, the noise variance in the differential projection images follows the same inverse-square law with spatial resolution as in conventional absorption-based x-ray imaging projections. However, both theoretical analysis and simulations demonstrate that in PCT the noise variance of the reconstructed refractive index decrement scales with spatial resolution following an inverse linear relationship at fixed slice thickness, while the noise variance of the reconstructed absorption coefficient follows the inverse cubic law. The results indicate that, for the same noise variance level, PCT imaging may enable higher spatial resolution than conventional absorption computed tomography (ACT), while ACT benefits more from degraded spatial resolution. This could provide useful guidance for imaging the inner structure of a sample at higher spatial resolution. Project supported by the National Basic Research Program of China (Grant No. 2012CB825800), the Science Fund for Creative Research Groups, the Knowledge Innovation Program of the Chinese Academy of Sciences (Grant Nos. KJCX2-YW-N42 and Y4545320Y2), and the National Natural Science Foundation of China (Grant Nos. 11475170, 11205157, 11305173, 11205189, 11375225, 11321503, 11179004, and U1332109).

  4. Estimation of genetic connectedness diagnostics based on prediction errors without the prediction error variance-covariance matrix.

    PubMed

    Holmes, John B; Dodds, Ken G; Lee, Michael A

    2017-03-02

    An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitudes smaller than the number of random effect levels, the computational requirements for our method should be reduced.

  5. Once upon Multivariate Analyses: When They Tell Several Stories about Biological Evolution.

    PubMed

    Renaud, Sabrina; Dufour, Anne-Béatrice; Hardouin, Emilie A; Ledevin, Ronan; Auffray, Jean-Christophe

    2015-01-01

    Geometric morphometrics aims to characterize the geometry of complex traits. It is therefore by essence multivariate. The most popular methods to investigate patterns of differentiation in this context are (1) the Principal Component Analysis (PCA), which is an eigenvalue decomposition of the total variance-covariance matrix among all specimens; (2) the Canonical Variate Analysis (CVA, a.k.a. linear discriminant analysis (LDA) for more than two groups), which aims at separating the groups by maximizing the between-group to within-group variance ratio; (3) the between-group PCA (bgPCA), which investigates patterns of between-group variation without standardizing by the within-group variance. Standardizing by the within-group variance, as performed in the CVA, distorts the relationships among groups, an effect that is particularly strong if the variance is oriented in a comparable way in all groups. Such a shared direction of main morphological variance may occur and have a biological meaning, for instance corresponding to the most frequent standing genetic variation in a population. Here we undertake a case study of the evolution of house mouse molar shape across various islands, based on a real dataset and simulations. We investigated how patterns of main variance influence the depiction of among-group differentiation as interpreted from the PCA, bgPCA, and CVA. Without arguing that one method performs 'better' than another, it rather emerges that working on the total or between-group variance (PCA and bgPCA) will tend to put the focus on the role of the direction of main variance as a line of least resistance to evolution. Standardizing by the within-group variance (CVA), by dampening the expression of this line of least resistance, has the potential to reveal other relevant patterns of differentiation that may otherwise be blurred.
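
    The three ordinations contrasted in this record can be reproduced side by side with scikit-learn: PCA on all specimens (total variance), PCA on the group means with specimens projected onto those axes (bgPCA), and LDA as the CVA analogue. The simulated "shape" data below deliberately share one dominant within-group direction of variance, the situation the record highlights; all sizes and scales are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)
# Hypothetical "shape" data: 4 island groups sharing one dominant within-group
# direction of variance (a stand-in for Procrustes shape coordinates).
n_per, p = 40, 6
shared_axis = rng.normal(size=p)
shared_axis /= np.linalg.norm(shared_axis)
X, groups = [], []
for g in range(4):
    centre = rng.normal(scale=0.5, size=p)                      # group mean
    scores = rng.normal(scale=2.0, size=(n_per, 1))             # shared main axis of variance
    X.append(centre + scores * shared_axis + rng.normal(scale=0.3, size=(n_per, p)))
    groups += [g] * n_per
X, groups = np.vstack(X), np.array(groups)

pca_scores = PCA(n_components=2).fit_transform(X)               # total variance
group_means = np.vstack([X[groups == g].mean(axis=0) for g in range(4)])
bg_axes = PCA(n_components=2).fit(group_means)                  # between-group PCA
bgpca_scores = bg_axes.transform(X)                             # project all specimens
cva_scores = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, groups)
print(pca_scores.shape, bgpca_scores.shape, cva_scores.shape)
```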

  6. [Analysis of variance of repeated data measured by water maze with SPSS].

    PubMed

    Qiu, Hong; Jin, Guo-qin; Jin, Ru-feng; Zhao, Wei-kang

    2007-01-01

    To introduce a method for analyzing repeated measures data from water maze experiments with SPSS 11.0, and to offer a reference statistical method for clinical and basic medicine researchers who use repeated measures designs. The repeated measures and multivariate analysis of variance (ANOVA) procedures of the general linear model in SPSS were used, with pairwise comparisons among different groups and different measurement times. Firstly, Mauchly's test of sphericity should be used to judge whether the repeatedly measured data are correlated. If any (P

  7. An Analysis of Variance Approach for the Estimation of Response Time Distributions in Tests

    ERIC Educational Resources Information Center

    Attali, Yigal

    2010-01-01

    Generalizability theory and analysis of variance methods are employed, together with the concept of objective time pressure, to estimate response time distributions and the degree of time pressure in timed tests. By estimating response time variance components due to person, item, and their interaction, and fixed effects due to item types and…

  8. Procedures for estimating confidence intervals for selected method performance parameters.

    PubMed

    McClure, F D; Lee, J K

    2001-01-01

    Procedures for estimating confidence intervals (CIs) for the repeatability variance (σr²), the reproducibility variance (σR² = σL² + σr²), the laboratory component (σL²), and their corresponding standard deviations σr, σR, and σL, respectively, are presented. In addition, CIs for the ratio of the repeatability component to the reproducibility variance (σr²/σR²) and the ratio of the laboratory component to the reproducibility variance (σL²/σR²) are also presented.

  9. Sampling in freshwater environments: suspended particle traps and variability in the final data.

    PubMed

    Barbizzi, Sabrina; Pati, Alessandra

    2008-11-01

    This paper reports a practical method to estimate measurement uncertainty, including sampling, derived from the approach implemented by Ramsey for soil investigations. The methodology has been applied to estimate the measurement uncertainty (sampling and analysis) of (137)Cs activity concentration (Bq kg(-1)) and total carbon content (%) in suspended particle sampling in a freshwater ecosystem. Uncertainty estimates for the between-locations, sampling, and analysis components have been evaluated. For the considered measurands, the relative expanded measurement uncertainties are 12.3% for (137)Cs and 4.5% for total carbon. For (137)Cs, the measurement (sampling + analysis) variance gives the major contribution to the total variance, while for total carbon the spatial variance is the dominant contributor to the total variance. The limitations and advantages of this basic method are discussed.

  10. Comparison of amplitude-decorrelation, speckle-variance and phase-variance OCT angiography methods for imaging the human retina and choroid

    PubMed Central

    Gorczynska, Iwona; Migacz, Justin V.; Zawadzki, Robert J.; Capps, Arlie G.; Werner, John S.

    2016-01-01

    We compared the performance of three OCT angiography (OCTA) methods: speckle variance, amplitude decorrelation and phase variance for imaging of the human retina and choroid. Two averaging methods, split spectrum and volume averaging, were compared to assess the quality of the OCTA vascular images. All data were acquired using a swept-source OCT system at 1040 nm central wavelength, operating at 100,000 A-scans/s. We performed a quantitative comparison using a contrast-to-noise (CNR) metric to assess the capability of the three methods to visualize the choriocapillaris layer. For evaluation of the static tissue noise suppression in OCTA images we proposed to calculate CNR between the photoreceptor/RPE complex and the choriocapillaris layer. Finally, we demonstrated that implementation of intensity-based OCT imaging and OCT angiography methods allows for visualization of retinal and choroidal vascular layers known from anatomic studies in retinal preparations. OCT projection imaging of data flattened to selected retinal layers was implemented to visualize retinal and choroidal vasculature. User guided vessel tracing was applied to segment the retinal vasculature. The results were visualized in a form of a skeletonized 3D model. PMID:27231598

  11. iTemplate: A template-based eye movement data analysis approach.

    PubMed

    Xiao, Naiqi G; Lee, Kang

    2018-02-08

    Current eye movement data analysis methods rely on defining areas of interest (AOIs). Due to the fact that AOIs are created and modified manually, variances in their size, shape, and location are unavoidable. These variances affect not only the consistency of the AOI definitions, but also the validity of the eye movement analyses based on the AOIs. To reduce the variances in AOI creation and modification and achieve a procedure to process eye movement data with high precision and efficiency, we propose a template-based eye movement data analysis method. Using a linear transformation algorithm, this method registers the eye movement data from each individual stimulus to a template. Thus, users only need to create one set of AOIs for the template in order to analyze eye movement data, rather than creating a unique set of AOIs for all individual stimuli. This change greatly reduces the error caused by the variance from manually created AOIs and boosts the efficiency of the data analysis. Furthermore, this method can help researchers prepare eye movement data for some advanced analysis approaches, such as iMap. We have developed software (iTemplate) with a graphic user interface to make this analysis method available to researchers.

  12. Imaging and characterizing shear wave and shear modulus under orthogonal acoustic radiation force excitation using OCT Doppler variance method.

    PubMed

    Zhu, Jiang; Qu, Yueqiao; Ma, Teng; Li, Rui; Du, Yongzhao; Huang, Shenghai; Shung, K Kirk; Zhou, Qifa; Chen, Zhongping

    2015-05-01

    We report on a novel acoustic radiation force orthogonal excitation optical coherence elastography (ARFOE-OCE) technique for imaging shear wave and quantifying shear modulus under orthogonal acoustic radiation force (ARF) excitation using the optical coherence tomography (OCT) Doppler variance method. The ARF perpendicular to the OCT beam is produced by a remote ultrasonic transducer. A shear wave induced by ARF excitation propagates parallel to the OCT beam. The OCT Doppler variance method, which is sensitive to the transverse vibration, is used to measure the ARF-induced vibration. For analysis of the shear modulus, the Doppler variance method is utilized to visualize shear wave propagation instead of Doppler OCT method, and the propagation velocity of the shear wave is measured at different depths of one location with the M scan. In order to quantify shear modulus beyond the OCT imaging depth, we move ARF to a deeper layer at a known step and measure the time delay of the shear wave propagating to the same OCT imaging depth. We also quantitatively map the shear modulus of a cross-section in a tissue-equivalent phantom after employing the B scan.

  13. In-vitro antioxidant and antibacterial activities of Xanthium strumarium L. extracts on methicillin-susceptible and methicillin-resistant Staphylococcus aureus

    PubMed Central

    Rad, Javad Sharifi; Alfatemi, Seyedeh Mahsan Hoseini; Rad, Majid Sharifi; Iriti, Marcello

    2013-01-01

    Background and Aims: The excessive and repeated use of antibiotics in medicine has led to the development of antibiotic-resistant microbial strains, including Staphylococcus aureus, whose emergence of antibiotic-resistant strains has reduced the number of antibiotics available to treat clinical infections caused by this bacterium. In this study, the antioxidant and antimicrobial activities of a methanolic extract of Xanthium strumarium L. leaves were evaluated on methicillin-susceptible and methicillin-resistant Staphylococcus aureus (MRSA) spp. Materials and Methods: The antiradical and antioxidant activities of the X. strumarium L. leaf extract were evaluated based on its ability to scavenge the synthetic 1,1-diphenyl-2-picrylhydrazyl (DPPH) free radical and by the paired diene method, respectively, whereas the antimicrobial activity was assayed by the disc diffusion method. Statistical Analysis: Data were subjected to analysis of variance following a completely randomized design to determine the least significant difference at P < 0.05 using SPSS v. 11.5. Results and Conclusions: The IC50 values of the extract were 0.02 mg/mL and 0.09 mg/mL for the antioxidant and DPPH-scavenging capacity, respectively. The X. strumarium extract affected both methicillin-sensitive Staphylococcus aureus and MRSA, though the antibacterial activity was more effective on methicillin-susceptible S. aureus spp. The antibacterial and antioxidant activities exhibited by the methanol extract may justify the traditional use of this plant as a folk remedy worldwide. PMID:25284944

  14. In vivo macular pigment measurements: a comparison of resonance Raman spectroscopy and heterochromatic flicker photometry

    PubMed Central

    Hogg, R E; Anderson, R S; Stevenson, M R; Zlatkova, M B; Chakravarthy, U

    2007-01-01

    Aim To investigate whether two methods of measuring macular pigment—namely, heterochromatic flicker photometry (HFP) and resonance Raman spectroscopy (RRS)—yield comparable data. Methods Macular pigment was measured using HFP and RRS in the right eye of 107 participants aged 20–79 years. Correlations between methods were sought and regression models generated. RRS was recorded as Raman counts and HFP as macular pigment optical density (MPOD). The average of the top three of five Raman counts was compared with MPOD obtained at 0.5° eccentricity, and an integrated measure (spatial profile; MPODsp) computed from four stimulus sizes on HFP. Results The coefficient of variation was 12.0% for MPODsp and 13.5% for Raman counts. MPODsp exhibited significant correlations with Raman counts (r = 0.260, p = 0.012), whereas MPOD at 0.5° did not correlate significantly (r = 0.163, p = 0.118). MPODsp was not significantly correlated with age (p = 0.062), whereas MPOD at 0.5° was positively correlated (p = 0.011). Raman counts showed a significant decrease with age (p = 0.002) and were significantly lower when pupil size was smaller (p = 0.015). Conclusions Despite statistically significant correlations, the correlations were weak, with in excess of 90% of the variance between MPODsp and Raman counts remaining unexplained, meriting further research. PMID:16825281

  15. Means and Variances without Calculus

    ERIC Educational Resources Information Center

    Kinney, John J.

    2005-01-01

    This article gives a method of finding discrete approximations to continuous probability density functions and shows examples of its use, allowing students without calculus access to the calculation of means and variances.
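
    The idea is simply to replace the integral definitions of the mean and variance with sums over a fine grid of the density. A minimal sketch, assuming a truncated grid wide enough to carry essentially all of the probability mass:

```python
import numpy as np

def discrete_mean_var(pdf, lo, hi, n=10_000):
    """Approximate the mean and variance of a continuous density by a
    discrete distribution on a fine grid (no calculus required)."""
    x = np.linspace(lo, hi, n)
    w = pdf(x)
    w = w / w.sum()                       # normalise grid weights to probabilities
    mean = np.sum(w * x)
    var = np.sum(w * (x - mean) ** 2)
    return mean, var

# Example: exponential density with rate 2 on a truncated grid;
# the exact answers are mean = 0.5 and variance = 0.25.
mean, var = discrete_mean_var(lambda x: 2 * np.exp(-2 * x), 0.0, 20.0)
print(f"mean ≈ {mean:.4f}, variance ≈ {var:.4f}")
```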

  16. Distribution and magnitude of type I error of model-based multipoint lod scores: implications for multipoint mod scores.

    PubMed

    Xing, Chao; Elston, Robert C

    2006-07-01

    The multipoint lod score and mod score methods have been advocated for their superior power in detecting linkage. However, little has been done to determine the distribution of multipoint lod scores or to examine the properties of mod scores. In this paper we study the distribution of multipoint lod scores both analytically and by simulation. We also study by simulation the distribution of maximum multipoint lod scores when maximized over different penetrance models. The multipoint lod score is approximately normally distributed with mean and variance that depend on marker informativity, marker density, the specified genetic model, the number of pedigrees, pedigree structure, and the pattern of affection status. When the multipoint lod scores are maximized over a set of assumed penetrance models, an excess of false positive indications of linkage appears under dominant analysis models with low penetrances and under recessive analysis models with high penetrances. Therefore, caution should be taken in interpreting results when employing multipoint lod score and mod score approaches, in particular when inferring the level of linkage significance and the mode of inheritance of a trait.

  17. An improved method for bivariate meta-analysis when within-study correlations are unknown.

    PubMed

    Hong, Chuan; D Riley, Richard; Chen, Yong

    2018-03-01

    Multivariate meta-analysis, which jointly analyzes multiple and possibly correlated outcomes in a single analysis, is becoming increasingly popular in recent years. An attractive feature of the multivariate meta-analysis is its ability to account for the dependence between multiple estimates from the same study. However, standard inference procedures for multivariate meta-analysis require the knowledge of within-study correlations, which are usually unavailable. This limits standard inference approaches in practice. Riley et al proposed a working model and an overall synthesis correlation parameter to account for the marginal correlation between outcomes, where the only data needed are those required for a separate univariate random-effects meta-analysis. As within-study correlations are not required, the Riley method is applicable to a wide variety of evidence synthesis situations. However, the standard variance estimator of the Riley method is not entirely correct under many important settings. As a consequence, the coverage of a function of pooled estimates may not reach the nominal level even when the number of studies in the multivariate meta-analysis is large. In this paper, we improve the Riley method by proposing a robust variance estimator, which is asymptotically correct even when the model is misspecified (ie, when the likelihood function is incorrect). Simulation studies of a bivariate meta-analysis, in a variety of settings, show a function of pooled estimates has improved performance when using the proposed robust variance estimator. In terms of individual pooled estimates themselves, the standard variance estimator and robust variance estimator give similar results to the original method, with appropriate coverage. The proposed robust variance estimator performs well when the number of studies is relatively large. Therefore, we recommend the use of the robust method for meta-analyses with a relatively large number of studies (eg, m≥50). When the sample size is relatively small, we recommend the use of the robust method under the working independence assumption. We illustrate the proposed method through 2 meta-analyses. Copyright © 2017 John Wiley & Sons, Ltd.

  18. Cross-frequency and band-averaged response variance prediction in the hybrid deterministic-statistical energy analysis method

    NASA Astrophysics Data System (ADS)

    Reynders, Edwin P. B.; Langley, Robin S.

    2018-08-01

    The hybrid deterministic-statistical energy analysis method has proven to be a versatile framework for modeling built-up vibro-acoustic systems. The stiff system components are modeled deterministically, e.g., using the finite element method, while the wave fields in the flexible components are modeled as diffuse. In the present paper, the hybrid method is extended such that not only the ensemble mean and variance of the harmonic system response can be computed, but also of the band-averaged system response. This variance represents the uncertainty that is due to the assumption of a diffuse field in the flexible components of the hybrid system. The developments start with a cross-frequency generalization of the reciprocity relationship between the total energy in a diffuse field and the cross spectrum of the blocked reverberant loading at the boundaries of that field. By making extensive use of this generalization in a first-order perturbation analysis, explicit expressions are derived for the cross-frequency and band-averaged variance of the vibrational energies in the diffuse components and for the cross-frequency and band-averaged variance of the cross spectrum of the vibro-acoustic field response of the deterministic components. These expressions are extensively validated against detailed Monte Carlo analyses of coupled plate systems in which diffuse fields are simulated by randomly distributing small point masses across the flexible components, and good agreement is found.

  19. A simple and exploratory way to determine the mean-variance relationship in generalized linear models.

    PubMed

    Tsou, Tsung-Shan

    2007-03-30

    This paper introduces an exploratory way to determine how variance relates to the mean in generalized linear models. This novel method employs the robust likelihood technique introduced by Royall and Tsou. A urinary data set collected by Ginsberg et al. and the fabric data set analysed by Lee and Nelder are considered to demonstrate the applicability and simplicity of the proposed technique. Application of the proposed method could easily reveal a mean-variance relationship that would generally be left unnoticed, or that would require more complex modelling to detect. Copyright (c) 2006 John Wiley & Sons, Ltd.

  20. Measuring kinetics of complex single ion channel data using mean-variance histograms.

    PubMed

    Patlak, J B

    1993-07-01

    The measurement of single ion channel kinetics is difficult when those channels exhibit subconductance events. When the kinetics are fast, and when the current magnitudes are small, as is the case for Na+, Ca2+, and some K+ channels, these difficulties can lead to serious errors in the estimation of channel kinetics. I present here a method, based on the construction and analysis of mean-variance histograms, that can overcome these problems. A mean-variance histogram is constructed by calculating the mean current and the current variance within a brief "window" (a set of N consecutive data samples) superimposed on the digitized raw channel data. Systematic movement of this window over the data produces large numbers of mean-variance pairs which can be assembled into a two-dimensional histogram. Defined current levels (open, closed, or sublevel) appear in such plots as low variance regions. The total number of events in such low variance regions is estimated by curve fitting and plotted as a function of window width. This function decreases with the same time constants as the original dwell time probability distribution for each of the regions. The method can therefore be used: 1) to present a qualitative summary of the single channel data from which the signal-to-noise ratio, open channel noise, steadiness of the baseline, and number of conductance levels can be quickly determined; 2) to quantify the dwell time distribution in each of the levels exhibited. In this paper I present the analysis of a Na+ channel recording that had a number of complexities. The signal-to-noise ratio was only about 8 for the main open state; open channel noise and fast flickers to other states were present, as were a substantial number of subconductance states. "Standard" half-amplitude threshold analysis of these data produced open and closed time histograms that were well fitted by the sum of two exponentials, but with apparently erroneous time constants, whereas the mean-variance histogram technique provided a more credible analysis of the open, closed, and subconductance times for the patch. I also show that the method produces accurate results on simulated data in a wide variety of conditions, whereas the half-amplitude method, when applied to complex simulated data, shows the same errors as were apparent in the real data. The utility and the limitations of this new method are discussed.
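
    The construction itself, sliding an N-sample window along the digitized trace and binning the resulting (mean, variance) pairs into a 2-D histogram, is straightforward to sketch. The simulated trace below (three current levels plus Gaussian noise) and all parameter values are illustrative assumptions, not the Na+ channel recording analyzed in the record.

```python
import numpy as np

def mean_variance_pairs(trace, window):
    """Slide a window of `window` samples along a digitized current trace and
    return the (mean, variance) pair for every window position."""
    kernel = np.ones(window) / window
    mean = np.convolve(trace, kernel, mode="valid")
    mean_sq = np.convolve(trace**2, kernel, mode="valid")
    return mean, mean_sq - mean**2

def mean_variance_histogram(trace, window, bins=100):
    """2-D histogram of the mean-variance pairs; defined current levels
    (closed, sublevel, open) appear as low-variance regions."""
    m, v = mean_variance_pairs(trace, window)
    return np.histogram2d(m, v, bins=bins)

# Hypothetical single-channel trace: closed (0 pA), sublevel (0.5 pA), open (1 pA)
# dwells plus Gaussian noise; values are illustrative only.
rng = np.random.default_rng(5)
levels = rng.choice([0.0, 0.5, 1.0], size=2000, p=[0.5, 0.2, 0.3])
trace = np.repeat(levels, 50) + rng.normal(0, 0.12, 2000 * 50)
H, mean_edges, var_edges = mean_variance_histogram(trace, window=20)
print(H.shape)
```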

  1. Heuristics for Understanding the Concepts of Interaction, Polynomial Trend, and the General Linear Model.

    ERIC Educational Resources Information Center

    Thompson, Bruce

    The relationship between analysis of variance (ANOVA) methods and their analogs (analysis of covariance and multiple analyses of variance and covariance--collectively referred to as OVA methods) and the more general analytic case is explored. A small heuristic data set is used, with a hypothetical sample of 20 subjects, randomly assigned to five…

  2. Benefits of Using Planned Comparisons Rather Than Post Hoc Tests: A Brief Review with Examples.

    ERIC Educational Resources Information Center

    DuRapau, Theresa M.

    The rationale behind analysis of variance (including analysis of covariance and multiple analyses of variance and covariance) methods is reviewed, and unplanned and planned methods of evaluating differences between means are briefly described. Two advantages of using planned or a priori tests over unplanned or post hoc tests are presented. In…

  3. Economic Pressure and Marital Quality: An Illustration of the Method Variance Problem in the Causal Modeling of Family Processes.

    ERIC Educational Resources Information Center

    Lorenz, Frederick O.; And Others

    1991-01-01

    Examined effects of method variance on models linking family economic pressure, marital quality, and expressions of hostility and warmth among 76 couples. Observer reports yielded results linking economic pressure to marital quality indirectly through interactional processes such as hostility. Self-reports or spouses' reports made it difficult to…

  4. Research on the Characteristics of Alzheimer's Disease Using EEG

    NASA Astrophysics Data System (ADS)

    Ueda, Taishi; Musha, Toshimitsu; Yagi, Tohru

    In this paper, we propose a new method for diagnosing Alzheimer's disease (AD) on the basis of electroencephalograms (EEG). The method, termed the Power Variance Function (PVF) method, quantifies the variance of the power at each frequency. Using the proposed method, the power of the EEG at each frequency was calculated with the wavelet transform, and the corresponding variances were defined as the PVF. After the PVF histogram of 55 healthy people was approximated by a Generalized Extreme Value (GEV) distribution, we evaluated the PVF of 22 patients with AD and 25 patients with mild cognitive impairment (MCI). As a result, the values for all AD and MCI subjects were abnormal. In particular, the PVF in the θ band for MCI patients was abnormally high, and the PVF in the α band for AD patients was low.
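
    A rough stand-in for the PVF computation: estimate the power of the EEG at each frequency over time, take the variance across time at each frequency, and fit a GEV distribution to PVF values from a normative group. The sketch below substitutes an STFT for the paper's wavelet transform and uses simulated signals, so the sampling rate, window length, and GEV fit are all assumptions.

```python
import numpy as np
from scipy.signal import stft
from scipy.stats import genextreme

def power_variance_function(eeg, fs=200.0, nperseg=256):
    """Variance over time of the spectral power at each frequency
    (a time-frequency stand-in for the wavelet-based PVF)."""
    f, _, Z = stft(eeg, fs=fs, nperseg=nperseg)
    power = np.abs(Z) ** 2
    return f, power.var(axis=1)

# Hypothetical resting EEG: 10 Hz alpha rhythm with slowly drifting amplitude plus noise.
rng = np.random.default_rng(6)
t = np.arange(0, 60, 1 / 200.0)
eeg = ((1 + 0.5 * np.sin(2 * np.pi * 0.1 * t)) * np.sin(2 * np.pi * 10 * t)
       + rng.normal(0, 0.5, t.size))
f, pvf = power_variance_function(eeg)

# Fit a Generalized Extreme Value distribution to PVF values pooled from a
# normative group (simulated here), mirroring the healthy-control step in the record.
norm_pvf = np.concatenate([power_variance_function(rng.normal(0, 1, t.size))[1]
                           for _ in range(20)])
shape, loc, scale = genextreme.fit(norm_pvf)
print(f"GEV shape={shape:.2f}, loc={loc:.3g}, scale={scale:.3g}")
```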

  5. 41 CFR 302-7.201 - Is temporary storage in excess of authorized limits and excess valuation of goods and services...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... excess of authorized limits and excess valuation of goods and services payable at Government expense? 302... Government expense? No, charges for excess weight, valuation above the minimum amount, and services obtained... HOUSEHOLD GOODS AND PROFESSIONAL BOOKS, PAPERS, AND EQUIPMENT (PBP&E) Actual Expense Method § 302-7.201 Is...

  6. 41 CFR 302-7.201 - Is temporary storage in excess of authorized limits and excess valuation of goods and services...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... excess of authorized limits and excess valuation of goods and services payable at Government expense? 302... Government expense? No, charges for excess weight, valuation above the minimum amount, and services obtained... HOUSEHOLD GOODS AND PROFESSIONAL BOOKS, PAPERS, AND EQUIPMENT (PBP&E) Actual Expense Method § 302-7.201 Is...

  7. Importance Sampling Variance Reduction in GRESS ATMOSIM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wakeford, Daniel Tyler

    This document is intended to introduce the importance sampling method of variance reduction to a Geant4 user for application to neutral particle Monte Carlo transport through the atmosphere, as implemented in GRESS ATMOSIM.

  8. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
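
    A minimal numerical sketch of the ANOVA identities behind this kind of variance component analysis, assuming balanced one-factor (patient-by-fraction) setup-error data; the function and variable names below are illustrative, not the note's own code.

```python
import numpy as np

def anova_variance_components(errors):
    """Estimate between-subject (systematic) and within-subject (random)
    variance components from balanced setup-error data.

    errors : 2-D array, shape (n_subjects, n_fractions), balanced design.
    Uses the one-factor random-effects identities
        E[MSB] = sigma2_within + n * sigma2_between,  E[MSW] = sigma2_within.
    """
    errors = np.asarray(errors, dtype=float)
    k, n = errors.shape                       # k subjects, n repeats each
    subject_means = errors.mean(axis=1)
    grand_mean = errors.mean()

    msb = n * np.sum((subject_means - grand_mean) ** 2) / (k - 1)            # between-subject mean square
    msw = np.sum((errors - subject_means[:, None]) ** 2) / (k * (n - 1))     # within-subject mean square

    sigma2_within = msw
    sigma2_between = max((msb - msw) / n, 0.0)   # truncate negative estimates at zero
    return sigma2_between, sigma2_within

# Example: 10 patients, 5 fractions each (synthetic data)
rng = np.random.default_rng(0)
data = rng.normal(0.0, 2.0 ** 0.5, (10, 1)) + rng.normal(0.0, 1.0, (10, 5))
print(anova_variance_components(data))
```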

  9. Evidence of Convergent and Discriminant Validity of Child, Teacher, and Peer Reports of Teacher-Student Support

    PubMed Central

    Li, Yan; Hughes, Jan N.; Kwok, Oi-man; Hsu, Hsien-Yuan

    2012-01-01

    This study investigated the construct validity of measures of teacher-student support in a sample of 709 ethnically diverse second and third grade academically at-risk students. Confirmatory factor analysis investigated the convergent and discriminant validities of teacher, child, and peer reports of teacher-student support and child conduct problems. Results supported the convergent and discriminant validity of scores on the measures. Peer reports accounted for the largest proportion of trait variance and non-significant method variance. Child reports accounted for the smallest proportion of trait variance and the largest method variance. A model with two latent factors provided a better fit to the data than a model with one factor, providing further evidence of the discriminant validity of measures of teacher-student support. Implications for research, policy, and practice are discussed. PMID:21767024

  10. Determining the bias and variance of a deterministic finger-tracking algorithm.

    PubMed

    Morash, Valerie S; van der Velden, Bas H M

    2016-06-01

    Finger tracking has the potential to expand haptic research and applications, as eye tracking has done in vision research. In research applications, it is desirable to know the bias and variance associated with a finger-tracking method. However, assessing the bias and variance of a deterministic method is not straightforward. Multiple measurements of the same finger position data will not produce different results, implying zero variance. Here, we present a method of assessing deterministic finger-tracking variance and bias through comparison to a non-deterministic measure. A proof-of-concept is presented using a video-based finger-tracking algorithm developed for the specific purpose of tracking participant fingers during a psychological research study. The algorithm uses ridge detection on videos of the participant's hand, and estimates the location of the right index fingertip. The algorithm was evaluated using data from four participants, who explored tactile maps using only their right index finger and all right-hand fingers. The algorithm identified the index fingertip in 99.78 % of one-finger video frames and 97.55 % of five-finger video frames. Although the algorithm produced slightly biased and more dispersed estimates relative to a human coder, these differences (x = 0.08 cm, y = 0.04 cm) and standard deviations (σx = 0.16 cm, σy = 0.21 cm) were small compared to the size of a fingertip (1.5-2.0 cm). Some example finger-tracking results are provided where corrections are made using the bias and variance estimates.
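
    The comparison to a non-deterministic reference can be sketched as follows; this is a hedged illustration assuming per-frame coordinates from the algorithm and from a human coder, not the authors' evaluation pipeline.

```python
import numpy as np

def bias_and_dispersion(tracked_xy, reference_xy):
    """Compare deterministic tracker output against a non-deterministic
    reference (e.g., a human coder) frame by frame.

    tracked_xy, reference_xy : arrays of shape (n_frames, 2), in cm.
    Returns per-axis bias (mean difference) and the standard deviation
    of the differences.
    """
    diff = np.asarray(tracked_xy, float) - np.asarray(reference_xy, float)
    bias = diff.mean(axis=0)         # (bias_x, bias_y)
    sd = diff.std(axis=0, ddof=1)    # (sd_x, sd_y)
    return bias, sd

def bias_corrected(tracked_xy, bias):
    """Apply the estimated bias as a constant correction to tracker output."""
    return np.asarray(tracked_xy, float) - bias
```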

  11. Confidence intervals for the between-study variance in random-effects meta-analysis using generalised heterogeneity statistics: should we use unequal tails?

    PubMed

    Jackson, Dan; Bowden, Jack

    2016-09-07

    Confidence intervals for the between study variance are useful in random-effects meta-analyses because they quantify the uncertainty in the corresponding point estimates. Methods for calculating these confidence intervals have been developed that are based on inverting hypothesis tests using generalised heterogeneity statistics. Whilst, under the random effects model, these new methods furnish confidence intervals with the correct coverage, the resulting intervals are usually very wide, making them uninformative. We discuss a simple strategy for obtaining 95 % confidence intervals for the between-study variance with a markedly reduced width, whilst retaining the nominal coverage probability. Specifically, we consider the possibility of using methods based on generalised heterogeneity statistics with unequal tail probabilities, where the tail probability used to compute the upper bound is greater than 2.5 %. This idea is assessed using four real examples and a variety of simulation studies. Supporting analytical results are also obtained. Our results provide evidence that using unequal tail probabilities can result in shorter 95 % confidence intervals for the between-study variance. We also show some further results for a real example that illustrates how shorter confidence intervals for the between-study variance can be useful when performing sensitivity analyses for the average effect, which is usually the parameter of primary interest. We conclude that using unequal tail probabilities when computing 95 % confidence intervals for the between-study variance, when using methods based on generalised heterogeneity statistics, can result in shorter confidence intervals. We suggest that those who find the case for using unequal tail probabilities convincing should use the '1-4 % split', where greater tail probability is allocated to the upper confidence bound. The 'width-optimal' interval that we present deserves further investigation.
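
    A rough sketch of a Q-profile-style interval with unequal tail probabilities, assuming the generalised Q statistic is approximately chi-square with k-1 degrees of freedom and decreasing in the between-study variance; it is not the authors' exact generalised heterogeneity machinery, and the search range is an assumption.

```python
import numpy as np
from scipy import stats, optimize

def q_gen(tau2, y, v):
    """Generalised Q statistic at a candidate between-study variance tau2."""
    w = 1.0 / (v + tau2)
    mu = np.sum(w * y) / np.sum(w)
    return np.sum(w * (y - mu) ** 2)

def tau2_ci_unequal_tails(y, v, tail_lower=0.01, tail_upper=0.04, tau2_max=100.0):
    """Q-profile style CI for the between-study variance with unequal tails
    (default: the '1-4 % split', larger tail probability on the upper bound).

    y : study effect estimates, v : within-study variances.
    Because Q(tau2) decreases in tau2, the LOWER bound comes from the upper
    chi-square quantile and the UPPER bound from the lower quantile.
    """
    y, v = np.asarray(y, float), np.asarray(v, float)
    df = len(y) - 1
    q_for_lower = stats.chi2.ppf(1.0 - tail_lower, df)
    q_for_upper = stats.chi2.ppf(tail_upper, df)

    def bound(target):
        f = lambda t2: q_gen(t2, y, v) - target
        if f(0.0) <= 0.0:
            return 0.0              # bound truncated at zero
        if f(tau2_max) > 0.0:
            return tau2_max         # search range too small; widen tau2_max
        return optimize.brentq(f, 0.0, tau2_max)

    return bound(q_for_lower), bound(q_for_upper)

# Toy example with 8 studies
rng = np.random.default_rng(1)
v = rng.uniform(0.02, 0.1, 8)
y = rng.normal(0.3, np.sqrt(v + 0.05))
print(tau2_ci_unequal_tails(y, v))
```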

  12. Data assimilation method based on the constraints of confidence region

    NASA Astrophysics Data System (ADS)

    Li, Yong; Li, Siming; Sheng, Yao; Wang, Luheng

    2018-03-01

    The ensemble Kalman filter (EnKF) is a prominent data assimilation method that is widely used and studied in fields including meteorology and oceanography. However, due to the limited sample size or an imprecise dynamics model, the forecast error variance is often underestimated, which can lead to filter divergence. Additionally, the assimilation results in the initial stage are poor if the initial condition settings differ greatly from the true initial state. To address these problems, a variance inflation procedure is usually adopted. In this paper, we propose a new method, called EnCR, that estimates the inflation parameter of the forecast error variance of the EnKF from the constraints of a confidence region constructed from the observations. In the new method, the state estimate is more robust to both inaccurate forecast models and initial condition settings. The new method is compared with other adaptive data assimilation methods in the Lorenz-63 and Lorenz-96 models under various model parameter settings, and the simulation results show that it performs better than the competing methods.
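
    For orientation, a minimal stochastic EnKF analysis step with a fixed multiplicative inflation factor is sketched below; the EnCR contribution is precisely to choose that factor adaptively from a confidence region built on the observations, which is not reproduced here.

```python
import numpy as np

def enkf_analysis(ensemble, obs, obs_var, H, inflation=1.0, rng=None):
    """One stochastic EnKF analysis step with multiplicative covariance
    inflation (fixed factor here; an adaptive scheme would estimate it).

    ensemble : (n_ens, n_state) forecast ensemble
    obs      : (n_obs,) observation vector
    obs_var  : scalar observation-error variance
    H        : (n_obs, n_state) linear observation operator
    """
    rng = np.random.default_rng() if rng is None else rng
    X = np.asarray(ensemble, float)
    n_ens = X.shape[0]

    # Inflate forecast spread about the ensemble mean
    xbar = X.mean(axis=0)
    X = xbar + np.sqrt(inflation) * (X - xbar)

    # Sample forecast covariance and Kalman gain
    A = X - X.mean(axis=0)
    Pf = A.T @ A / (n_ens - 1)
    R = obs_var * np.eye(len(obs))
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)

    # Update each member with perturbed observations
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=(n_ens, len(obs)))
    return X + (perturbed - X @ H.T) @ K.T
```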

  13. Method for controlling powertrain pumps

    DOEpatents

    Sime, Karl Andrew; Spohn, Brian L; Demirovic, Besim; Martini, Ryan D; Miller, Jean Marie

    2013-10-22

    A method of controlling a pump supplying a fluid to a transmission includes sensing a requested power and an excess power for a powertrain. The requested power substantially meets the needs of the powertrain, while the excess power is not part of the requested power. The method includes sensing a triggering condition in response to the ability to convert the excess power into heat in the transmission, and determining that an operating temperature of the transmission is below a maximum. The method also includes determining a calibrated baseline and a dissipation command for the pump. The calibrated baseline command is configured to supply the fluid based upon the requested power, and the dissipation command is configured to supply additional fluid and consume the excess power with the pump. The method operates the pump at a combined command, which is equal to the calibrated baseline command plus the dissipation command.
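
    The control logic can be summarised in a few lines; the sketch below is illustrative only, with hypothetical names and a simple linear dissipation term rather than the patent's calibrated commands.

```python
def pump_command(requested_power, excess_power, trans_temp, max_temp,
                 baseline_cmd, dissipation_gain):
    """Illustrative combined pump command: a calibrated baseline sized for
    the requested power, plus a dissipation term that consumes excess power
    as transmission heat, enabled only while the transmission stays below
    its maximum operating temperature."""
    dissipation_cmd = 0.0
    if excess_power > 0.0 and trans_temp < max_temp:
        dissipation_cmd = dissipation_gain * excess_power   # hypothetical linear sizing
    return baseline_cmd + dissipation_cmd
```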

  14. The Importance of Variance in Statistical Analysis: Don't Throw Out the Baby with the Bathwater.

    ERIC Educational Resources Information Center

    Peet, Martha W.

    This paper analyzes what happens to the effect size of a given dataset when the variance is removed by categorization for the purpose of applying "OVA" methods (analysis of variance, analysis of covariance). The dataset is from a classic study by Holzinger and Swineford (1939) in which more than 20 ability tests were administered to 301…

  15. Adult Second Language Reading in the USA: The Effects of Readers' Gender and Test Method

    ERIC Educational Resources Information Center

    Brantmeier, Cindy

    2006-01-01

    Bernhardt (2003) claims that half of the variance in second language (L2) reading is accounted for by first language literacy (20%) and second language knowledge (30%), and that one of the central goals of current L2 reading research should be to investigate the 50% of variance that remains unexplained. Part of this variance consists of…

  16. Variance stabilization and normalization for one-color microarray data using a data-driven multiscale approach.

    PubMed

    Motakis, E S; Nason, G P; Fryzlewicz, P; Rutter, G A

    2006-10-15

    Many standard statistical techniques are effective on data that are normally distributed with constant variance. Microarray data typically violate these assumptions since they come from non-Gaussian distributions with a non-trivial mean-variance relationship. Several methods have been proposed that transform microarray data to stabilize variance and draw its distribution towards the Gaussian. Some methods, such as log or generalized log, rely on an underlying model for the data. Others, such as the spread-versus-level plot, do not. We propose an alternative data-driven multiscale approach, called the Data-Driven Haar-Fisz for microarrays (DDHFm) with replicates. DDHFm has the advantage of being 'distribution-free' in the sense that no parametric model for the underlying microarray data is required to be specified or estimated; hence, DDHFm can be applied very generally, not just to microarray data. DDHFm achieves very good variance stabilization of microarray data with replicates and produces transformed intensities that are approximately normally distributed. Simulation studies show that it performs better than other existing methods. Application of DDHFm to real one-color cDNA data validates these results. The R package of the Data-Driven Haar-Fisz transform (DDHFm) for microarrays is available in Bioconductor and CRAN.
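
    As a point of reference, the fixed (Poisson) Haar-Fisz transform that DDHFm generalises can be sketched as follows; DDHFm's data-driven step, which estimates the mean-variance relationship instead of assuming the square-root rule, is not reproduced here.

```python
import numpy as np

def haar_fisz(x):
    """Classic Haar-Fisz variance-stabilizing transform for a Poisson-like
    vector whose length is a power of two (fixed sqrt rule, not DDHFm)."""
    x = np.asarray(x, float)
    n = len(x)
    J = int(np.log2(n))
    assert 2 ** J == n, "length must be a power of two"

    sm = x.copy()
    details = []
    # Haar decomposition, storing Fisz-modified detail coefficients
    for _ in range(J):
        even, odd = sm[0::2], sm[1::2]
        s = (even + odd) / 2.0
        d = (even - odd) / 2.0
        details.append(np.where(s > 0, d / np.sqrt(s), 0.0))   # Fisz step
        sm = s

    # Reconstruct with the modified details
    rec = sm
    for f in reversed(details):
        out = np.empty(2 * len(rec))
        out[0::2] = rec + f
        out[1::2] = rec - f
        rec = out
    return rec
```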

  17. A benchmark for statistical microarray data analysis that preserves actual biological and technical variance.

    PubMed

    De Hertogh, Benoît; De Meulder, Bertrand; Berger, Fabrice; Pierre, Michael; Bareke, Eric; Gaigneaux, Anthoula; Depiereux, Eric

    2010-01-11

    Recent reanalysis of spike-in datasets underscored the need for new and more accurate benchmark datasets for statistical microarray analysis. We present here a fresh method using biologically-relevant data to evaluate the performance of statistical methods. Our novel method ranks the probesets from a dataset composed of publicly-available biological microarray data and extracts subset matrices with precise information/noise ratios. Our method can be used to determine the capability of different methods to better estimate variance for a given number of replicates. The mean-variance and mean-fold change relationships of the matrices revealed a closer approximation of biological reality. Performance analysis refined the results from benchmarks published previously. We show that the Shrinkage t test (close to Limma) was the best of the methods tested, except when two replicates were examined, where the Regularized t test and the Window t test performed slightly better. The R scripts used for the analysis are available at http://urbm-cluster.urbm.fundp.ac.be/~bdemeulder/.

  18. Easy and accurate variance estimation of the nonparametric estimator of the partial area under the ROC curve and its application.

    PubMed

    Yu, Jihnhee; Yang, Luge; Vexler, Albert; Hutson, Alan D

    2016-06-15

    The receiver operating characteristic (ROC) curve is a popular technique with applications, for example, investigating the accuracy of a biomarker to delineate between disease and non-disease groups. A common measure of accuracy of a given diagnostic marker is the area under the ROC curve (AUC). In contrast with the AUC, the partial area under the ROC curve (pAUC) looks into the area with certain specificities (i.e., true negative rate) only, and it can often be clinically more relevant than examining the entire ROC curve. The pAUC is commonly estimated based on a U-statistic with the plug-in sample quantile, making the estimator a non-traditional U-statistic. In this article, we propose an accurate and easy method to obtain the variance of the nonparametric pAUC estimator. The proposed method is easy to implement for both one biomarker test and the comparison of two correlated biomarkers because it simply adapts the existing variance estimator of U-statistics. We show the accuracy and other advantages of the proposed variance estimation method by broadly comparing it with previously existing methods. Further, we develop an empirical likelihood inference method based on the proposed variance estimator through a simple implementation. In an application, we demonstrate that, depending on the inferences by either the AUC or pAUC, we can make a different decision on the prognostic ability of the same set of biomarkers. Copyright © 2016 John Wiley & Sons, Ltd.

  19. A comparison of selection at list time and time-stratified sampling for estimating suspended sediment loads

    Treesearch

    Robert B. Thomas; Jack Lewis

    1993-01-01

    Time-stratified sampling of sediment for estimating suspended load is introduced and compared to selection at list time (SALT) sampling. Both methods provide unbiased estimates of load and variance. The magnitude of the variance of the two methods is compared using five storm populations of suspended sediment flux derived from turbidity data. Under like conditions,...

  20. 40 CFR 52.2020 - Identification of plan.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) Compliance and test methods 11/1/97 6/8/98, 63 FR 31116 (c)(131). Subchapter D—Motor Vehicle Emissions... 70893 (c)(229). Section 130.107 Variances 10/5/02 12/8/04, 69 FR 70893 (c)(229). Section 130.108 Test... 70895 (c)(230). Section 130.414 Modification of variance 10/11/08 10/18/10, 75 FR 63717. TEST METHODS...

  1. 40 CFR 52.2020 - Identification of plan.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...)(131). Section 126.303(a) Compliance and test methods 11/1/97 6/8/98, 63 FR 31116 (c)(131). Subchapter...)(229). Section 130.107 Variances 10/5/02 12/8/04, 69 FR 70893 (c)(229). Section 130.108 Test procedures... 70895 (c)(230). Section 130.414 Modification of variance 10/11/08 10/18/10, 75 FR 63717. TEST METHODS...

  2. Multimethod Assessment of Psychopathy in Relation to Factors of Internalizing and Externalizing from the Personality Assessment Inventory: The Impact of Method Variance and Suppressor Effects

    ERIC Educational Resources Information Center

    Blonigen, Daniel M.; Patrick, Christopher J.; Douglas, Kevin S.; Poythress, Norman G.; Skeem, Jennifer L.; Lilienfeld, Scott O.; Edens, John F.; Krueger, Robert F.

    2010-01-01

    Research to date has revealed divergent relations across factors of psychopathy measures with criteria of "internalizing" (INT; anxiety, depression) and "externalizing" (EXT; antisocial behavior, substance use). However, failure to account for method variance and suppressor effects has obscured the consistency of these findings…

  3. Optimized doppler optical coherence tomography for choroidal capillary vasculature imaging

    NASA Astrophysics Data System (ADS)

    Liu, Gangjun; Qi, Wenjuan; Yu, Lingfeng; Chen, Zhongping

    2011-03-01

    In this paper, we analyzed the retinal and choroidal blood vasculature in the posterior segment of the human eye with optimized color Doppler and Doppler variance optical coherence tomography. Depth-resolved structure, color Doppler and Doppler variance images were compared. Blood vessels down to the capillary level could be resolved with the optimized optical coherence color Doppler and Doppler variance method. For in-vivo imaging of human eyes, the bulk-motion-induced bulk phase must be identified and removed before using the color Doppler method. It was found that the Doppler variance method is not sensitive to bulk motion and can be used without removing the bulk phase. A novel, simple and fast segmentation algorithm to identify the retinal pigment epithelium (RPE) was proposed and used to segment the retinal and choroidal layers. The algorithm was based on the detected OCT signal intensity difference between different layers. A spectrometer-based Fourier domain OCT system with a central wavelength of 890 nm and a bandwidth of 150 nm was used in this study. The 3-dimensional imaging volume contained 120 sequential two-dimensional images with 2048 A-lines per image. The total imaging time was 12 seconds and the imaging area was 5 × 5 mm².

  4. A note on the kappa statistic for clustered dichotomous data.

    PubMed

    Zhou, Ming; Yang, Zhao

    2014-06-30

    The kappa statistic is widely used to assess the agreement between two raters. Motivated by a simulation-based cluster bootstrap method to calculate the variance of the kappa statistic for clustered physician-patients dichotomous data, we investigate its special correlation structure and develop a new simple and efficient data generation algorithm. For the clustered physician-patients dichotomous data, based on the delta method and its special covariance structure, we propose a semi-parametric variance estimator for the kappa statistic. An extensive Monte Carlo simulation study is performed to evaluate the performance of the new proposal and five existing methods with respect to the empirical coverage probability, root-mean-square error, and average width of the 95% confidence interval for the kappa statistic. The variance estimator ignoring the dependence within a cluster is generally inappropriate, and the variance estimators from the new proposal, bootstrap-based methods, and the sampling-based delta method perform reasonably well for at least a moderately large number of clusters (e.g., the number of clusters K ⩾50). The new proposal and sampling-based delta method provide convenient tools for efficient computations and non-simulation-based alternatives to the existing bootstrap-based methods. Moreover, the new proposal has acceptable performance even when the number of clusters is as small as K = 25. To illustrate the practical application of all the methods, one psychiatric research data and two simulated clustered physician-patients dichotomous data are analyzed. Copyright © 2014 John Wiley & Sons, Ltd.
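
    A sketch of the simulation-based cluster bootstrap that motivates the paper, assuming dichotomous ratings and integer cluster labels; the proposed delta-method variance estimator itself is not reproduced here.

```python
import numpy as np

def kappa(r1, r2):
    """Cohen's kappa for two dichotomous raters (0/1 codes)."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    po = np.mean(r1 == r2)
    p1, p2 = r1.mean(), r2.mean()
    pe = p1 * p2 + (1 - p1) * (1 - p2)
    return (po - pe) / (1 - pe)

def cluster_bootstrap_kappa_var(r1, r2, clusters, n_boot=2000, seed=0):
    """Cluster-bootstrap variance of kappa for clustered (e.g.
    physician-patient) dichotomous ratings: resample whole clusters with
    replacement and recompute kappa each time."""
    rng = np.random.default_rng(seed)
    r1, r2, clusters = map(np.asarray, (r1, r2, clusters))
    ids = np.unique(clusters)
    reps = []
    for _ in range(n_boot):
        chosen = rng.choice(ids, size=len(ids), replace=True)
        idx = np.concatenate([np.flatnonzero(clusters == c) for c in chosen])
        reps.append(kappa(r1[idx], r2[idx]))
    return np.var(reps, ddof=1)
```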

  5. Bootstrap-based methods for estimating standard errors in Cox's regression analyses of clustered event times.

    PubMed

    Xiao, Yongling; Abrahamowicz, Michal

    2010-03-30

    We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster-bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data, with latent cluster-level random effects, which are ignored in the conventional Cox's model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs, and type I error rates, and acceptable coverage rates, regardless of the true random effects distribution, and avoid serious variance under-estimation by conventional Cox-based standard errors. However, the two-step bootstrap method over-estimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of cluster event times.

  6. Association Between Medicare Hospital Readmission Penalties and 30-Day Combined Excess Readmission and Mortality

    PubMed Central

    Abdul-Aziz, Ahmad A.; Hayward, Rodney A.; Aaronson, Keith D.; Hummel, Scott L.

    2017-01-01

    IMPORTANCE US hospitals receive financial penalties for excess risk-standardized 30-day readmissions and mortality in Medicare patients. Under current policy, readmission prevention is incentivized over 10-fold more than mortality reduction. OBJECTIVE To determine how penalties for US hospitals would change if policy equally weighted 30-day readmissions and mortality. DESIGN, SETTING, AND PARTICIPANTS Publicly available hospital-level data for fiscal year 2014 were obtained, including the excess readmission ratio (ERR; risk-standardized predicted over expected 30-day readmissions) and 30-day mortality rates for heart failure, pneumonia, and acute myocardial infarction, as well as readmission penalties (as percent of Medicare Diagnosis Group payments). An excess mortality ratio (EMR) was calculated by dividing the risk-standardized predicted mortality by the national average mortality. Case-weighted aggregate ERR (ERRAGG) and EMR (EMRAGG) were calculated, and an excess combined outcome ratio (ECORAGG) was created by averaging ERRAGG and EMRAGG. We examined associations between readmission penalties, ERRAGG, EMRAGG, and ECORAGG. Analysis of variance was used to compare readmission penalties in hospitals with concordant (both ratios >1 or <1) and discordant performance by ERRAGG and ECORAGG. MAIN OUTCOMES AND MEASURES The primary outcome investigated was the association between readmission penalties and the calculated excess combined outcome ratio (ECORAGG). RESULTS In 1963 US hospitals with complete data, readmission penalties closely tracked excess readmissions (r = 0.81; P < .001), but were minimally and inversely related with excess mortality (r = −0.12; P < .001) and only modestly correlated with excess combined readmission and mortality (r = 0.36; P < .001). Using hospitals with concordant ERRAGG and ECORAGG as the reference group, 17% of hospitals had an ECORAGG ratio less than 1 (ie, superior combined mortality/readmission outcome) with an ERRAGG ratio greater than 1, and received higher mean (SD) readmission penalties (0.41% [0.28%] vs 0.29% [0.37%]; P < .001); 16% of US hospitals had an ECORAGG ratio greater than 1 (ie, inferior combined mortality/readmission outcome) with an ERRAGG ratio less than 1, and received minimal mean (SD) readmission penalties (0.08% [0.12%]; P < .001 for comparison with reference). CONCLUSIONS AND RELEVANCE In fiscal year 2014, financial penalties for one-third of US hospitals would have been substantially altered if 30-day readmission and mortality were considered equally important. Under most circumstances, patients would rather avoid death than rehospitalization. Current Medicare financial penalties do not meet the goals of aligning incentives and fairly reimbursing hospitals for patient-centered outcomes. PMID:27784054
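
    The ratio arithmetic described above can be illustrated as follows, under the assumption of per-condition inputs for a single hospital; the exact case-weighting used in the study may differ.

```python
import numpy as np

def combined_outcome_ratios(pred_readm, exp_readm, pred_mort, natl_mort, case_weights):
    """Illustrative reconstruction of the aggregate ratios:
    ERR  = risk-standardized predicted / expected 30-day readmissions
    EMR  = risk-standardized predicted mortality / national average mortality
    ERR_AGG and EMR_AGG are case-weighted aggregates over the three
    conditions (HF, pneumonia, AMI); ECOR_AGG is their simple average."""
    err = np.asarray(pred_readm, float) / np.asarray(exp_readm, float)
    emr = np.asarray(pred_mort, float) / np.asarray(natl_mort, float)
    w = np.asarray(case_weights, float)
    err_agg = np.sum(w * err) / np.sum(w)
    emr_agg = np.sum(w * emr) / np.sum(w)
    ecor_agg = (err_agg + emr_agg) / 2.0
    return err_agg, emr_agg, ecor_agg
```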

  7. Predictors of Abdominal Pain in Depressed Pediatric Inflammatory Bowel Disease Patients

    PubMed Central

    Srinath, Arvind I.; Goyal, Alka; Zimmerman, Lori A.; Newara, Melissa C.; Kirshner, Margaret A.; McCarthy, F. Nicole; Keljo, David; Binion, David; Bousvaros, Athos; DeMaso, David R.; Youk, Ada; Szigethy, Eva M.

    2015-01-01

    Background Pediatric patients with inflammatory bowel disease (IBD) have high rates of abdominal pain. The study aims were to (1) Evaluate biological and psychological correlates of abdominal pain in depressed youth with IBD, (2) Determine predictors of abdominal pain in Crohn’s disease (CD) and ulcerative colitis (UC). Methods 765 patients ages 9–17 with IBD seen over 3 years at two sites were screened for depression. Depressed youth completed comprehensive assessments for abdominal pain, psychological (depression and anxiety), and biological (IBD-related, through disease activity indices and laboratory values) realms. Results 217 patients with IBD (161 CD, 56 UC) were depressed. 163 (120 CD, 43 UC) patients had complete API scores. In CD, abdominal pain was associated with depression (r=0.33; p<0.001), diarrhea (r=0.34; p=0.001), ESR (r=0.22; p=0.02), low albumin (r=0.24; p=.01), weight loss (r=0.33; p=0.001), and abdominal tenderness (r=0.38, p=0.002). A multivariate model with these significant correlates represented 32% of the variance in pain. Only depression (p=0.03), weight loss (p=0.04), and abdominal tenderness (p=0.01) predicted pain for CD patients. In UC, pain was associated with depression (r=0.46; p=0.002) and nocturnal stools (r=.32; p=.046). In the multivariate model with these significant correlates 23% of the variance was explained, and only depression (p=0.02) predicted pain. Conclusions The psychological state of pediatric patients with IBD may increase the sensitivity to abdominal pain. Thus, screening for and treating comorbid depression may prevent excessive medical testing and unnecessary escalation of IBD medications. PMID:24983975

  8. Constructing inverse probability weights for continuous exposures: a comparison of methods.

    PubMed

    Naimi, Ashley I; Moodie, Erica E M; Auger, Nathalie; Kaufman, Jay S

    2014-03-01

    Inverse probability-weighted marginal structural models with binary exposures are common in epidemiology. Constructing inverse probability weights for a continuous exposure can be complicated by the presence of outliers, and the need to identify a parametric form for the exposure and account for nonconstant exposure variance. We explored the performance of various methods to construct inverse probability weights for continuous exposures using Monte Carlo simulation. We generated two continuous exposures and binary outcomes using data sampled from a large empirical cohort. The first exposure followed a normal distribution with homoscedastic variance. The second exposure followed a contaminated Poisson distribution, with heteroscedastic variance equal to the conditional mean. We assessed six methods to construct inverse probability weights using: a normal distribution, a normal distribution with heteroscedastic variance, a truncated normal distribution with heteroscedastic variance, a gamma distribution, a t distribution (1, 3, and 5 degrees of freedom), and a quantile binning approach (based on 10, 15, and 20 exposure categories). We estimated the marginal odds ratio for a single-unit increase in each simulated exposure in a regression model weighted by the inverse probability weights constructed using each approach, and then computed the bias and mean squared error for each method. For the homoscedastic exposure, the standard normal, gamma, and quantile binning approaches performed best. For the heteroscedastic exposure, the quantile binning, gamma, and heteroscedastic normal approaches performed best. Our results suggest that the quantile binning approach is a simple and versatile way to construct inverse probability weights for continuous exposures.
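
    One of the compared approaches, stabilized weights built from normal densities with a homoscedastic conditional variance, can be sketched as below; variable names are illustrative, and the weighted outcome model would then be fit with these weights.

```python
import numpy as np
from scipy import stats

def stabilized_normal_weights(a, covariates):
    """Stabilized inverse probability weights for a continuous exposure:
    sw_i = f(a_i) / f(a_i | L_i), with both densities taken as normal and
    the conditional mean fitted by ordinary least squares."""
    a = np.asarray(a, float)
    X = np.column_stack([np.ones(len(a)), np.asarray(covariates, float)])

    # Denominator: normal density of A given covariates (homoscedastic)
    beta, *_ = np.linalg.lstsq(X, a, rcond=None)
    resid = a - X @ beta
    sd_cond = resid.std(ddof=X.shape[1])
    f_cond = stats.norm.pdf(a, loc=X @ beta, scale=sd_cond)

    # Numerator: marginal normal density of A (stabilization)
    f_marg = stats.norm.pdf(a, loc=a.mean(), scale=a.std(ddof=1))
    return f_marg / f_cond
```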

  9. Spectral analysis of the Earth's topographic potential via 2D-DFT: a new data-based degree variance model to degree 90,000

    NASA Astrophysics Data System (ADS)

    Rexer, Moritz; Hirt, Christian

    2015-09-01

    Classical degree variance models (such as Kaula's rule or the Tscherning-Rapp model) often rely on low-resolution gravity data and so are subject to extrapolation when used to describe the decay of the gravity field at short spatial scales. This paper presents a new degree variance model based on the recently published GGMplus near-global land areas 220 m resolution gravity maps (Geophys Res Lett 40(16):4279-4283, 2013). We investigate and use a 2D-DFT (discrete Fourier transform) approach to transform GGMplus gravity grids into degree variances. The method is described in detail and its approximation errors are studied using closed-loop experiments. Focus is placed on tiling, azimuth averaging, and windowing effects in the 2D-DFT method and on analytical fitting of degree variances. Approximation errors of the 2D-DFT procedure on the (spherical harmonic) degree variance are found to be at the 10-20 % level. The importance of the reference surface (sphere, ellipsoid or topography) of the gravity data for correct interpretation of degree variance spectra is highlighted. The effect of the underlying mass arrangement (spherical or ellipsoidal approximation) on the degree variances is found to be crucial at short spatial scales. A rule-of-thumb for transformation of spectra between spherical and ellipsoidal approximation is derived. Application of the 2D-DFT on GGMplus gravity maps yields a new degree variance model to degree 90,000. The model is supported by GRACE, GOCE, EGM2008 and forward-modelled gravity at 3 billion land points over all land areas within the SRTM data coverage and provides gravity signal variances at the surface of the topography. The model yields omission errors of 9 mGal for gravity (1.5 cm for geoid effects) at scales of 10 km, 4 mGal (1 mm) at 2-km scales, and 2 mGal (0.2 mm) at 1-km scales.

  10. Importance sampling variance reduction for the Fokker–Planck rarefied gas particle method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collyer, B.S., E-mail: benjamin.collyer@gmail.com; London Mathematical Laboratory, 14 Buckingham Street, London WC2N 6DF; Connaughton, C.

    The Fokker–Planck approximation to the Boltzmann equation, solved numerically by stochastic particle schemes, is used to provide estimates for rarefied gas flows. This paper presents a variance reduction technique for a stochastic particle method that is able to greatly reduce the uncertainty of the estimated flow fields when the characteristic speed of the flow is small in comparison to the thermal velocity of the gas. The method relies on importance sampling, requiring minimal changes to the basic stochastic particle scheme. We test the importance sampling scheme on a homogeneous relaxation, planar Couette flow and a lid-driven-cavity flow, and find that our method is able to greatly reduce the noise of estimated quantities. Significantly, we find that as the characteristic speed of the flow decreases, the variance of the noisy estimators becomes independent of the characteristic speed.
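
    The underlying idea, sampling from a proposal concentrated where the integrand matters and reweighting by the likelihood ratio, is illustrated below on a toy rare-event problem; it is not the Fokker-Planck particle scheme itself.

```python
import numpy as np
from scipy import stats

# Estimate E[g(X)] for X ~ N(0,1) with g concentrated in the tail,
# comparing naive sampling with a shifted proposal plus reweighting.
rng = np.random.default_rng(0)
g = lambda x: (x > 3.0).astype(float)        # rare-event indicator
n = 100_000

# Naive Monte Carlo
x = rng.normal(0.0, 1.0, n)
naive = g(x)

# Importance sampling from a proposal centred on the rare region
y = rng.normal(3.0, 1.0, n)
w = stats.norm.pdf(y, 0, 1) / stats.norm.pdf(y, 3, 1)   # likelihood ratio
weighted = g(y) * w

print("naive:    mean %.2e  estimator variance %.2e" % (naive.mean(), naive.var() / n))
print("imp.samp: mean %.2e  estimator variance %.2e" % (weighted.mean(), weighted.var() / n))
```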

  11. Automated variance reduction for MCNP using deterministic methods.

    PubMed

    Sweezy, J; Brown, F; Booth, T; Chiaramonte, J; Preeg, B

    2005-01-01

    In order to reduce the user's time and the computer time needed to solve deep penetration problems, an automated variance reduction capability has been developed for the MCNP Monte Carlo transport code. This new variance reduction capability developed for MCNP5 employs the PARTISN multigroup discrete ordinates code to generate mesh-based weight windows. The technique of using deterministic methods to generate importance maps has been widely used to increase the efficiency of deep penetration Monte Carlo calculations. The application of this method in MCNP uses the existing mesh-based weight window feature to translate the MCNP geometry into geometry suitable for PARTISN. The adjoint flux, which is calculated with PARTISN, is used to generate mesh-based weight windows for MCNP. Additionally, the MCNP source energy spectrum can be biased based on the adjoint energy spectrum at the source location. This method can also use angle-dependent weight windows.

  12. Optimal distribution of integration time for intensity measurements in Stokes polarimetry.

    PubMed

    Li, Xiaobo; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie; Hu, Haofeng

    2015-10-19

    We consider the typical Stokes polarimetry system, which performs four intensity measurements to estimate a Stokes vector. We show that if the total integration time of the intensity measurements is fixed, the variance of the Stokes vector estimator depends on how the integration time is distributed across the four intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the Stokes vector estimator can be decreased. In this paper, we obtain the closed-form solution of the optimal distribution of integration time by employing the Lagrange multiplier method. According to the theoretical analysis and a real-world experiment, the total variance of the Stokes vector estimator can be decreased by about 40% in the case discussed in this paper. The proposed method can effectively decrease the measurement variance and thus statistically improve the measurement accuracy of the polarimetric system.
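
    Under the generic assumption that each intensity measurement contributes a term a_i / t_i to the estimator variance, the Lagrange-multiplier solution takes a simple closed form; the coefficients a_i, which in the paper depend on the instrument matrix and incident intensities, are treated as given here.

```python
import numpy as np

def optimal_time_split(noise_coeffs, total_time):
    """Allocate a fixed total integration time T over the intensity
    measurements when Var = sum_i a_i / t_i.  Minimising subject to
    sum_i t_i = T with a Lagrange multiplier gives
        t_i = T * sqrt(a_i) / sum_j sqrt(a_j)."""
    a = np.asarray(noise_coeffs, float)
    return total_time * np.sqrt(a) / np.sqrt(a).sum()

# Example: unequal (assumed) noise coefficients for the four measurements
a = np.array([1.0, 0.5, 2.0, 0.8])
t = optimal_time_split(a, total_time=1.0)
print(t, "optimal variance:", np.sum(a / t), "equal split:", np.sum(a / 0.25))
```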

  13. A consistent transported PDF model for treating differential molecular diffusion

    NASA Astrophysics Data System (ADS)

    Wang, Haifeng; Zhang, Pei

    2016-11-01

    Differential molecular diffusion is a fundamentally significant phenomenon in all multi-component turbulent reacting or non-reacting flows caused by the different rates of molecular diffusion of energy and species concentrations. In the transported probability density function (PDF) method, the differential molecular diffusion can be treated by using a mean drift model developed by McDermott and Pope. This model correctly accounts for the differential molecular diffusion in the scalar mean transport and yields a correct DNS limit of the scalar variance production. The model, however, misses the molecular diffusion term in the scalar variance transport equation, which yields an inconsistent prediction of the scalar variance in the transported PDF method. In this work, a new model is introduced to remedy this problem that can yield a consistent scalar variance prediction. The model formulation along with its numerical implementation is discussed, and the model validation is conducted in a turbulent mixing layer problem.

  14. A comparison of maximum likelihood and other estimators of eigenvalues from several correlated Monte Carlo samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beer, M.

    1980-12-01

    The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results, are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.
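
    The maximum-likelihood combination under multivariate normality reduces to a generalised-least-squares average; a minimal sketch, with the covariance matrix assumed given, is shown below.

```python
import numpy as np

def ml_combined_eigenvalue(estimates, cov):
    """Minimum-variance (maximum-likelihood under multivariate normality)
    combination of several correlated estimates of the same quantity:
        k_hat = (1' C^-1 x) / (1' C^-1 1),  Var(k_hat) = 1 / (1' C^-1 1).
    In practice C is built from sample variances and correlation
    coefficients, as discussed in the abstract."""
    x = np.asarray(estimates, float)
    C = np.asarray(cov, float)
    Cinv_ones = np.linalg.solve(C, np.ones_like(x))
    denom = np.ones_like(x) @ Cinv_ones
    k_hat = (x @ Cinv_ones) / denom
    return k_hat, 1.0 / denom
```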

  15. 30 CFR 817.73 - Disposal of excess spoil: Durable rock fills.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 3 2011-07-01 2011-07-01 false Disposal of excess spoil: Durable rock fills...-UNDERGROUND MINING ACTIVITIES § 817.73 Disposal of excess spoil: Durable rock fills. The regulatory authority may approve the alternative method of disposal of excess durable rock spoil by gravity placement in...

  16. 30 CFR 816.73 - Disposal of excess spoil: Durable rock fills.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 3 2010-07-01 2010-07-01 false Disposal of excess spoil: Durable rock fills...-SURFACE MINING ACTIVITIES § 816.73 Disposal of excess spoil: Durable rock fills. The regulatory authority may approve the alternative method of disposal of excess durable rock spoil by gravity placement in...

  17. 30 CFR 817.73 - Disposal of excess spoil: Durable rock fills.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 3 2012-07-01 2012-07-01 false Disposal of excess spoil: Durable rock fills...-UNDERGROUND MINING ACTIVITIES § 817.73 Disposal of excess spoil: Durable rock fills. The regulatory authority may approve the alternative method of disposal of excess durable rock spoil by gravity placement in...

  18. 30 CFR 817.73 - Disposal of excess spoil: Durable rock fills.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 3 2010-07-01 2010-07-01 false Disposal of excess spoil: Durable rock fills...-UNDERGROUND MINING ACTIVITIES § 817.73 Disposal of excess spoil: Durable rock fills. The regulatory authority may approve the alternative method of disposal of excess durable rock spoil by gravity placement in...

  19. 30 CFR 817.73 - Disposal of excess spoil: Durable rock fills.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 3 2013-07-01 2013-07-01 false Disposal of excess spoil: Durable rock fills...-UNDERGROUND MINING ACTIVITIES § 817.73 Disposal of excess spoil: Durable rock fills. The regulatory authority may approve the alternative method of disposal of excess durable rock spoil by gravity placement in...

  20. 30 CFR 817.73 - Disposal of excess spoil: Durable rock fills.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 3 2014-07-01 2014-07-01 false Disposal of excess spoil: Durable rock fills...-UNDERGROUND MINING ACTIVITIES § 817.73 Disposal of excess spoil: Durable rock fills. The regulatory authority may approve the alternative method of disposal of excess durable rock spoil by gravity placement in...

  1. 30 CFR 816.73 - Disposal of excess spoil: Durable rock fills.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 3 2012-07-01 2012-07-01 false Disposal of excess spoil: Durable rock fills...-SURFACE MINING ACTIVITIES § 816.73 Disposal of excess spoil: Durable rock fills. The regulatory authority may approve the alternative method of disposal of excess durable rock spoil by gravity placement in...

  2. 30 CFR 816.73 - Disposal of excess spoil: Durable rock fills.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 3 2014-07-01 2014-07-01 false Disposal of excess spoil: Durable rock fills...-SURFACE MINING ACTIVITIES § 816.73 Disposal of excess spoil: Durable rock fills. The regulatory authority may approve the alternative method of disposal of excess durable rock spoil by gravity placement in...

  3. 30 CFR 816.73 - Disposal of excess spoil: Durable rock fills.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 3 2011-07-01 2011-07-01 false Disposal of excess spoil: Durable rock fills...-SURFACE MINING ACTIVITIES § 816.73 Disposal of excess spoil: Durable rock fills. The regulatory authority may approve the alternative method of disposal of excess durable rock spoil by gravity placement in...

  4. 30 CFR 816.73 - Disposal of excess spoil: Durable rock fills.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 3 2013-07-01 2013-07-01 false Disposal of excess spoil: Durable rock fills...-SURFACE MINING ACTIVITIES § 816.73 Disposal of excess spoil: Durable rock fills. The regulatory authority may approve the alternative method of disposal of excess durable rock spoil by gravity placement in...

  5. A close examination of double filtering with fold change and t test in microarray analysis

    PubMed Central

    2009-01-01

    Background Many researchers use the double filtering procedure with fold change and t test to identify differentially expressed genes, in the hope that the double filtering will provide extra confidence in the results. Due to its simplicity, the double filtering procedure has been popular with applied researchers despite the development of more sophisticated methods. Results This paper, for the first time to our knowledge, provides theoretical insight into the drawback of the double filtering procedure. We show that fold change assumes all genes to have a common variance, while the t statistic assumes gene-specific variances; the two statistics are based on contradictory assumptions. Under the assumption that gene variances arise from a mixture of a common variance and gene-specific variances, we develop the theoretically most powerful likelihood ratio test statistic. We further demonstrate that the posterior inference based on a Bayesian mixture model and the widely used significance analysis of microarrays (SAM) statistic are better approximations to the likelihood ratio test than the double filtering procedure. Conclusion We demonstrate through hypothesis testing theory, simulation studies and real data examples that well-constructed shrinkage testing methods, which can be united under the mixture gene variance assumption, can considerably outperform the double filtering procedure. PMID:19995439
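
    For concreteness, the double-filtering rule and a simple SAM-style shrinkage statistic can be sketched as follows; the cut-offs and shrinkage constant are illustrative, and the paper's likelihood ratio test is not reproduced.

```python
import numpy as np

def double_filter(x, y, fc_cut=1.0, t_cut=2.0):
    """Double-filtering rule: call a gene if |log2 fold change| and an
    unpooled two-sample t statistic both exceed their cut-offs.
    x, y : log2 expression matrices, genes in rows, replicates in columns."""
    d = x.mean(1) - y.mean(1)
    se = np.sqrt(x.var(1, ddof=1) / x.shape[1] + y.var(1, ddof=1) / y.shape[1])
    t = d / se
    return (np.abs(d) > fc_cut) & (np.abs(t) > t_cut)

def shrinkage_t(x, y, s0=None):
    """SAM-style shrinkage statistic: add a small constant s0 (here the
    median per-gene standard error) to the denominator, interpolating
    between the fold-change and t-statistic extremes whose variance
    assumptions the paper shows to be contradictory."""
    d = x.mean(1) - y.mean(1)
    se = np.sqrt(x.var(1, ddof=1) / x.shape[1] + y.var(1, ddof=1) / y.shape[1])
    s0 = np.median(se) if s0 is None else s0
    return d / (se + s0)
```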

  6. Negative Marking and the Student Physician – A Descriptive Study of Nigerian Medical Schools

    PubMed Central

    Ndu, Ikenna Kingsley; Ekwochi, Uchenna; Di Osuorah, Chidiebere; Asinobi, Isaac Nwabueze; Nwaneri, Michael Osita; Uwaezuoke, Samuel Nkachukwu; Amadi, Ogechukwu Franscesca; Okeke, Ifeyinwa Bernadette; Chinawa, Josephat Maduabuchi; Orjioke, Casmir James Ginikanwa

    2016-01-01

    Background There is considerable debate about the two most commonly used scoring methods, namely, the formula scoring (popularly referred to as negative marking method in our environment) and number right scoring methods. Although the negative marking scoring system attempts to discourage students from guessing in order to increase test reliability and validity, there is the view that it is an excessive and unfair penalty that also increases anxiety. Feedback from students is part of the education process; thus, this study assessed the perception of medical students about negative marking method for multiple choice question (MCQ) examination formats and also the effect of gender and risk-taking behavior on scores obtained with this assessment method. Methods This was a prospective multicenter survey carried out among fifth year medical students in Enugu State University and the University of Nigeria. A structured questionnaire was administered to 175 medical students from the two schools, while a class test was administered to medical students from Enugu State University. Qualitative statistical methods including frequencies, percentages, and chi square were used to analyze categorical variables. Quantitative statistics using analysis of variance was used to analyze continuous variables. Results Inquiry into assessment format revealed that most of the respondents preferred MCQs (65.9%). One hundred and thirty students (74.3%) had an unfavorable perception of negative marking. Thirty-nine students (22.3%) agreed that negative marking reduces the tendency to guess and increases the validity of MCQs examination format in testing knowledge content of a subject compared to 108 (61.3%) who disagreed with this assertion (χ2 = 23.0, df = 1, P = 0.000). The median score of the students who were not graded with negative marking was significantly higher than the score of the students graded with negative marking (P = 0.001). There was no statistically significant difference in the risk-taking behavior between male and female students in their MCQ answering patterns with negative marking method (P = 0.618). Conclusions In the assessment of students, it is more desirable to adopt fair penalties for discouraging guessing rather than excessive penalties for incorrect answers, which could intimidate students in negative marking schemes. There is no consensus on the penalty for an incorrect answer. Thus, there is a need for continued research into an effective and objective assessment tool that will ensure that the students’ final score in a test truly represents their level of knowledge. PMID:29349304

  7. High-Dimensional Heteroscedastic Regression with an Application to eQTL Data Analysis

    PubMed Central

    Daye, Z. John; Chen, Jinbo; Li, Hongzhe

    2011-01-01

    Summary We consider the problem of high-dimensional regression under non-constant error variances. Despite being a common phenomenon in biological applications, heteroscedasticity has, so far, been largely ignored in high-dimensional analysis of genomic data sets. We propose a new methodology that allows non-constant error variances for high-dimensional estimation and model selection. Our method incorporates heteroscedasticity by simultaneously modeling both the mean and variance components via a novel doubly regularized approach. Extensive Monte Carlo simulations indicate that our proposed procedure can result in better estimation and variable selection than existing methods when heteroscedasticity arises from the presence of predictors explaining error variances and outliers. Further, we demonstrate the presence of heteroscedasticity in and apply our method to an expression quantitative trait loci (eQTLs) study of 112 yeast segregants. The new procedure can automatically account for heteroscedasticity in identifying the eQTLs that are associated with gene expression variations and lead to smaller prediction errors. These results demonstrate the importance of considering heteroscedasticity in eQTL data analysis. PMID:22547833

  8. R package MVR for Joint Adaptive Mean-Variance Regularization and Variance Stabilization

    PubMed Central

    Dazard, Jean-Eudes; Xu, Hua; Rao, J. Sunil

    2015-01-01

    We present an implementation in the R language for statistical computing of our recent non-parametric joint adaptive mean-variance regularization and variance stabilization procedure. The method is specifically suited for handling difficult problems posed by high-dimensional multivariate datasets (p ≫ n paradigm), such as in ‘omics’-type data, among which are that the variance is often a function of the mean, variable-specific estimators of variances are not reliable, and tests statistics have low powers due to a lack of degrees of freedom. The implementation offers a complete set of features including: (i) normalization and/or variance stabilization function, (ii) computation of mean-variance-regularized t and F statistics, (iii) generation of diverse diagnostic plots, (iv) synthetic and real ‘omics’ test datasets, (v) computationally efficient implementation, using C interfacing, and an option for parallel computing, (vi) manual and documentation on how to setup a cluster. To make each feature as user-friendly as possible, only one subroutine per functionality is to be handled by the end-user. It is available as an R package, called MVR (‘Mean-Variance Regularization’), downloadable from the CRAN. PMID:26819572

  9. Poisson pre-processing of nonstationary photonic signals: Signals with equality between mean and variance.

    PubMed

    Poplová, Michaela; Sovka, Pavel; Cifra, Michal

    2017-01-01

    Photonic signals are broadly exploited in communication and sensing and they typically exhibit Poisson-like statistics. In a common scenario where the intensity of the photonic signals is low and one needs to remove a nonstationary trend of the signals for any further analysis, one faces an obstacle: due to the dependence between the mean and variance typical for a Poisson-like process, information about the trend remains in the variance even after the trend has been subtracted, possibly yielding artifactual results in further analyses. Commonly available detrending or normalizing methods cannot cope with this issue. To alleviate this issue we developed a suitable pre-processing method for the signals that originate from a Poisson-like process. In this paper, a Poisson pre-processing method for nonstationary time series with Poisson distribution is developed and tested on computer-generated model data and experimental data of chemiluminescence from human neutrophils and mung seeds. The presented method transforms a nonstationary Poisson signal into a stationary signal with a Poisson distribution while preserving the type of photocount distribution and phase-space structure of the signal. The importance of the suggested pre-processing method is shown in Fano factor and Hurst exponent analysis of both computer-generated model signals and experimental photonic signals. It is demonstrated that our pre-processing method is superior to standard detrending-based methods whenever further signal analysis is sensitive to variance of the signal.

  10. Poisson pre-processing of nonstationary photonic signals: Signals with equality between mean and variance

    PubMed Central

    Poplová, Michaela; Sovka, Pavel

    2017-01-01

    Photonic signals are broadly exploited in communication and sensing and they typically exhibit Poisson-like statistics. In a common scenario where the intensity of the photonic signals is low and one needs to remove a nonstationary trend of the signals for any further analysis, one faces an obstacle: due to the dependence between the mean and variance typical for a Poisson-like process, information about the trend remains in the variance even after the trend has been subtracted, possibly yielding artifactual results in further analyses. Commonly available detrending or normalizing methods cannot cope with this issue. To alleviate this issue we developed a suitable pre-processing method for the signals that originate from a Poisson-like process. In this paper, a Poisson pre-processing method for nonstationary time series with Poisson distribution is developed and tested on computer-generated model data and experimental data of chemiluminescence from human neutrophils and mung seeds. The presented method transforms a nonstationary Poisson signal into a stationary signal with a Poisson distribution while preserving the type of photocount distribution and phase-space structure of the signal. The importance of the suggested pre-processing method is shown in Fano factor and Hurst exponent analysis of both computer-generated model signals and experimental photonic signals. It is demonstrated that our pre-processing method is superior to standard detrending-based methods whenever further signal analysis is sensitive to variance of the signal. PMID:29216207

  11. Some variance reduction methods for numerical stochastic homogenization

    PubMed Central

    Blanc, X.; Le Bris, C.; Legoll, F.

    2016-01-01

    We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. PMID:27002065

  12. 40 CFR 52.2020 - Identification of plan.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...)(131). Section 126.303(a) Compliance and test methods 11/1/97 6/8/98, 63 FR 31116 (c)(131). Subchapter....107 Variances 10/5/02 12/8/04, 69 FR 70893 (c)(229). Section 130.108 Test procedures 10/5/02 12/8/04... 70895 (c)(230). Section 130.414 Modification of variance 10/11/08 10/18/10, 75 FR 63717. TEST METHODS...

  13. Measuring kinetics of complex single ion channel data using mean-variance histograms.

    PubMed Central

    Patlak, J B

    1993-01-01

    The measurement of single ion channel kinetics is difficult when those channels exhibit subconductance events. When the kinetics are fast, and when the current magnitudes are small, as is the case for Na+, Ca2+, and some K+ channels, these difficulties can lead to serious errors in the estimation of channel kinetics. I present here a method, based on the construction and analysis of mean-variance histograms, that can overcome these problems. A mean-variance histogram is constructed by calculating the mean current and the current variance within a brief "window" (a set of N consecutive data samples) superimposed on the digitized raw channel data. Systematic movement of this window over the data produces large numbers of mean-variance pairs which can be assembled into a two-dimensional histogram. Defined current levels (open, closed, or sublevel) appear in such plots as low variance regions. The total number of events in such low variance regions is estimated by curve fitting and plotted as a function of window width. This function decreases with the same time constants as the original dwell time probability distribution for each of the regions. The method can therefore be used: 1) to present a qualitative summary of the single channel data from which the signal-to-noise ratio, open channel noise, steadiness of the baseline, and number of conductance levels can be quickly determined; 2) to quantify the dwell time distribution in each of the levels exhibited. In this paper I present the analysis of a Na+ channel recording that had a number of complexities. The signal-to-noise ratio was only about 8 for the main open state; open channel noise and fast flickers to other states were present, as were a substantial number of subconductance states. "Standard" half-amplitude threshold analysis of these data produced open and closed time histograms that were well fitted by the sum of two exponentials, but with apparently erroneous time constants, whereas the mean-variance histogram technique provided a more credible analysis of the open, closed, and subconductance times for the patch. I also show that the method produces accurate results on simulated data in a wide variety of conditions, whereas the half-amplitude method, when applied to complex simulated data, showed the same errors as were apparent in the real data. The utility and the limitations of this new method are discussed. PMID:7690261
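
    The histogram construction itself is straightforward to sketch, assuming a digitized current trace and a fixed window of N samples; the curve fitting of low-variance regions as a function of window width is not reproduced here.

```python
import numpy as np

def mean_variance_histogram(current, window, mean_bins=100, var_bins=100):
    """Build a mean-variance histogram from a single-channel current trace:
    slide a window of N consecutive samples along the record, compute the
    mean and variance inside each window, and accumulate the pairs into a
    2-D histogram. Defined conductance levels appear as low-variance modes."""
    current = np.asarray(current, float)
    n = len(current) - window + 1
    idx = np.arange(window)[None, :] + np.arange(n)[:, None]
    segments = current[idx]                       # (n_windows, window)
    means = segments.mean(axis=1)
    variances = segments.var(axis=1, ddof=1)
    hist, mean_edges, var_edges = np.histogram2d(means, variances,
                                                 bins=(mean_bins, var_bins))
    return hist, mean_edges, var_edges
```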

  14. Poisson denoising on the sphere

    NASA Astrophysics Data System (ADS)

    Schmitt, J.; Starck, J. L.; Fadili, J.; Grenier, I.; Casandjian, J. M.

    2009-08-01

    In the scope of the Fermi mission, Poisson noise removal should improve data quality and make source detection easier. This paper presents a method for Poisson data denoising on the sphere, called the Multi-Scale Variance Stabilizing Transform on the Sphere (MS-VSTS). This method is based on a Variance Stabilizing Transform (VST), a transform which aims to stabilize a Poisson data set such that each stabilized sample has an (asymptotically) constant variance. In addition, for the VST used in the method, the transformed data are asymptotically Gaussian. Thus, MS-VSTS consists of decomposing the data into a sparse multi-scale dictionary (wavelets, curvelets, ridgelets...), and then applying a VST to the coefficients in order to get quasi-Gaussian stabilized coefficients. In the present article, the multi-scale transform used is the Isotropic Undecimated Wavelet Transform. Hypothesis tests are then made to detect significant coefficients, and the denoised image is reconstructed with an iterative method based on Hybrid Steepest Descent (HST). The method is tested on simulated Fermi data.

  15. Measurement System Characterization in the Presence of Measurement Errors

    NASA Technical Reports Server (NTRS)

    Commo, Sean A.

    2012-01-01

    In the calibration of a measurement system, data are collected in order to estimate a mathematical model between one or more factors of interest and a response. Ordinary least squares is a method employed to estimate the regression coefficients in the model. The method assumes that the factors are known without error; yet, it is implicitly known that the factors contain some uncertainty. In the literature, this uncertainty is known as measurement error. The measurement error affects both the estimates of the model coefficients and the prediction, or residual, errors. There are some methods, such as orthogonal least squares, that are employed in situations where measurement errors exist, but these methods do not directly incorporate the magnitude of the measurement errors. This research proposes a new method, known as modified least squares, that combines the principles of least squares with knowledge about the measurement errors. This knowledge is expressed in terms of the variance ratio - the ratio of response error variance to measurement error variance.

  16. Weighted analysis of paired microarray experiments.

    PubMed

    Kristiansson, Erik; Sjögren, Anders; Rudemo, Mats; Nerman, Olle

    2005-01-01

    In microarray experiments quality often varies, for example between samples and between arrays. The need for quality control is therefore strong. A statistical model and a corresponding analysis method are suggested for experiments with pairing, including designs with individuals observed before and after treatment and many experiments with two-colour spotted arrays. The model is of mixed type with some parameters estimated by an empirical Bayes method. Differences in quality are modelled by individual variances and correlations between repetitions. The method is applied to three real and several simulated datasets. Two of the real datasets are of Affymetrix type with patients profiled before and after treatment, and the third dataset is of two-colour spotted cDNA type. In all cases, the patients or arrays had different estimated variances, leading to distinctly unequal weights in the analysis. We also suggest plots which illustrate the variances and correlations that affect the weights computed by our analysis method. For simulated data the improvement relative to previously published methods without weighting is shown to be substantial.

  17. Imaging shear wave propagation for elastic measurement using OCT Doppler variance method

    NASA Astrophysics Data System (ADS)

    Zhu, Jiang; Miao, Yusi; Qu, Yueqiao; Ma, Teng; Li, Rui; Du, Yongzhao; Huang, Shenghai; Shung, K. Kirk; Zhou, Qifa; Chen, Zhongping

    2016-03-01

    In this study, we have developed an acoustic radiation force orthogonal excitation optical coherence elastography (ARFOE-OCE) method for the visualization of the shear wave and the calculation of the shear modulus based on the OCT Doppler variance method. The vibration perpendicular to the OCT detection direction is induced by the remote acoustic radiation force (ARF) and the shear wave propagating along the OCT beam is visualized by the OCT M-scan. The homogeneous agar phantom and two-layer agar phantom are measured using the ARFOE-OCE system. The results show that the ARFOE-OCE system has the ability to measure the shear modulus beyond the OCT imaging depth. The OCT Doppler variance method, instead of the OCT Doppler phase method, is used for vibration detection without the need of high phase stability and phase wrapping correction. An M-scan instead of the B-scan for the visualization of the shear wave also simplifies the data processing.

  18. Multi-population Genomic Relationships for Estimating Current Genetic Variances Within and Genetic Correlations Between Populations.

    PubMed

    Wientjes, Yvonne C J; Bijma, Piter; Vandenplas, Jérémie; Calus, Mario P L

    2017-10-01

    Different methods are available to calculate multi-population genomic relationship matrices. Since those matrices differ in base population, it is anticipated that the method used to calculate genomic relationships affects the estimate of genetic variances, covariances, and correlations. The aim of this article is to define the multi-population genomic relationship matrix to estimate current genetic variances within and genetic correlations between populations. The genomic relationship matrix containing two populations consists of four blocks, one block for population 1, one block for population 2, and two blocks for relationships between the populations. It is known, based on literature, that by using current allele frequencies to calculate genomic relationships within a population, current genetic variances are estimated. In this article, we theoretically derived the properties of the genomic relationship matrix to estimate genetic correlations between populations and validated it using simulations. When the scaling factor of across-population genomic relationships is equal to the product of the square roots of the scaling factors for within-population genomic relationships, the genetic correlation is estimated unbiasedly even though estimated genetic variances do not necessarily refer to the current population. When this property is not met, the correlation based on estimated variances should be multiplied by a correction factor based on the scaling factors. In this study, we present a genomic relationship matrix which directly estimates current genetic variances as well as genetic correlations between populations. Copyright © 2017 by the Genetics Society of America.

  19. Hyperactivity in Anorexia Nervosa: Warming Up Not Just Burning-Off Calories

    PubMed Central

    Carrera, Olaia; Adan, Roger A. H.; Gutierrez, Emilio; Danner, Unna N.; Hoek, Hans W.; van Elburg, Annemarie A.; Kas, Martien J. H.

    2012-01-01

    Excessive physical activity is a common feature in Anorexia Nervosa (AN) that interferes with the recovery process. Animal models have demonstrated that ambient temperature modulates physical activity in semi-starved animals. The aim of the present study was to assess the effect of ambient temperature on physical activity in AN patients in the acute phase of the illness. Thirty-seven patients with AN wore an accelerometer to measure physical activity within the first week of contacting a specialized eating disorder center. Standardized measures of anxiety, depression and eating disorder psychopathology were assessed. Corresponding daily values for ambient temperature were obtained from local meteorological stations. Ambient temperature was negatively correlated with physical activity (r = −.405) and was the only variable that accounted for a significant portion of the variance in physical activity (p = .034). Consistent with recent research with an analogous animal model of the disorder, our findings suggest that ambient temperature is a critical factor contributing to the expression of excessive physical activity levels in AN. Keeping patients warm may prove to be a beneficial treatment option for this symptom. PMID:22848634

  20. Non-Immunogenic Structurally and Biologically Intact Tissue Matrix Grafts for the Immediate Repair of Ballistic-Induced Vascular and Nerve Tissue Injury in Combat Casualty Care

    DTIC Science & Technology

    2005-07-01

    as an access graft is addressed using statistical methods below. Graft consistency can be defined statistically as the variance associated with the sample of grafts tested in...measured using a refractometer (Brix % method). The equilibration data are shown in Graph 1. The results suggest the following equilibration scheme: 40% v/v

  1. A Bayesian Network Based Global Sensitivity Analysis Method for Identifying Dominant Processes in a Multi-physics Model

    NASA Astrophysics Data System (ADS)

    Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.

    2016-12-01

    Sensitivity analysis has been an important tool in groundwater modeling to identify the influential parameters. Among various sensitivity analysis methods, the variance-based global sensitivity analysis has gained popularity for its model independence characteristic and capability of providing accurate sensitivity measurements. However, the conventional variance-based method only considers uncertainty contribution of single model parameters. In this research, we extended the variance-based method to consider more uncertainty sources and developed a new framework to allow flexible combinations of different uncertainty components. We decompose the uncertainty sources into a hierarchical three-layer structure: scenario, model and parametric. Furthermore, each layer of uncertainty source is capable of containing multiple components. An uncertainty and sensitivity analysis framework was then constructed following this three-layer structure using Bayesian network. Different uncertainty components are represented as uncertain nodes in this network. Through the framework, variance-based sensitivity analysis can be implemented with great flexibility of using different grouping strategies for uncertainty components. The variance-based sensitivity analysis thus is improved to be able to investigate the importance of an extended range of uncertainty sources: scenario, model, and other different combinations of uncertainty components which can represent certain key model system processes (e.g., groundwater recharge process, flow reactive transport process). For test and demonstration purposes, the developed methodology was implemented into a test case of real-world groundwater reactive transport modeling with various uncertainty sources. The results demonstrate that the new sensitivity analysis method is able to estimate accurate importance measurements for any uncertainty sources which were formed by different combinations of uncertainty components. The new methodology can provide useful information for environmental management and decision-makers to formulate policies and strategies.
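
    The Bayesian-network framework itself is not reproduced here; the sketch below only illustrates the conventional variance-based (Sobol) first-order index that the framework extends, using a Monte Carlo estimator on a toy three-parameter model (all names and sample sizes are illustrative):

      import numpy as np

      def first_order_sobol(model, n_samples, n_vars, rng):
          """Monte Carlo estimate of first-order Sobol indices using the
          Saltelli-style estimator with two independent sample matrices."""
          A = rng.uniform(size=(n_samples, n_vars))
          B = rng.uniform(size=(n_samples, n_vars))
          fA, fB = model(A), model(B)
          var_y = np.var(np.concatenate([fA, fB]))
          S = np.empty(n_vars)
          for i in range(n_vars):
              ABi = A.copy()
              ABi[:, i] = B[:, i]          # replace column i of A with that of B
              S[i] = np.mean(fB * (model(ABi) - fA)) / var_y
          return S

      # Toy model with unequal sensitivities (inputs scaled to [0, 1])
      def model(X):
          return 4.0 * X[:, 0] + 2.0 * X[:, 1] + 0.5 * X[:, 2]

      rng = np.random.default_rng(2)
      print(first_order_sobol(model, 100_000, 3, rng))  # larger index => more influential input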

  2. A comparison of two follow-up analyses after multiple analysis of variance, analysis of variance, and descriptive discriminant analysis: A case study of the program effects on education-abroad programs

    Treesearch

    Alvin H. Yu; Garry Chick

    2010-01-01

    This study compared the utility of two different post-hoc tests after detecting significant differences within factors on multiple dependent variables using multivariate analysis of variance (MANOVA). We compared the univariate F test (the Scheffé method) to descriptive discriminant analysis (DDA) using an educational-tour survey of university study-...

  3. Jackknife Estimation of Sampling Variance of Ratio Estimators in Complex Samples: Bias and the Coefficient of Variation. Research Report. ETS RR-06-19

    ERIC Educational Resources Information Center

    Oranje, Andreas

    2006-01-01

    A multitude of methods has been proposed to estimate the sampling variance of ratio estimates in complex samples (Wolter, 1985). Hansen and Tepping (1985) studied some of those variance estimators and found that a high coefficient of variation (CV) of the denominator of a ratio estimate is indicative of a biased estimate of the standard error of a…

  4. A Study of Impact Point Detecting Method Based on Seismic Signal

    NASA Astrophysics Data System (ADS)

    Huo, Pengju; Zhang, Yu; Xu, Lina; Huang, Yong

    The projectile landing position has to be determined for projectile recovery and range determination in targeting tests. In this paper, a global search method based on the velocity variance is proposed. To verify the applicability of this method, a simulation analysis over an area of four million square meters was conducted using the same array structure as the commonly used linear positioning method, and MATLAB was used to compare and analyze the two methods. The simulation results show that the global search method based on the velocity variance has high positioning accuracy and stability and can meet the needs of impact point location.

  5. Superresolution SAR Imaging Algorithm Based on Mvm and Weighted Norm Extrapolation

    NASA Astrophysics Data System (ADS)

    Zhang, P.; Chen, Q.; Li, Z.; Tang, Z.; Liu, J.; Zhao, L.

    2013-08-01

    In this paper, we present an extrapolation approach, which uses a minimum weighted norm constraint and minimum variance spectrum estimation, for improving synthetic aperture radar (SAR) resolution. The minimum variance method is a robust high-resolution spectrum estimation method. Based on the theory of SAR imaging, the signal model of SAR imagery is analyzed and shown to be amenable to data extrapolation methods for improving the resolution of SAR images. The method is used to extrapolate the effective bandwidth in the phase history domain, and better results are obtained compared with the adaptive weighted norm extrapolation (AWNE) method and the traditional imaging method on both simulated data and actual measured data.
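
    The full SAR extrapolation algorithm is not reproduced here; the following sketch only illustrates the generic minimum variance (Capon) spectrum estimator, P(f) = 1 / (a^H R^-1 a), that the approach builds on, with an arbitrary test signal and model order:

      import numpy as np

      def minimum_variance_spectrum(x, order, freqs):
          """Capon (minimum variance) spectral estimate P(f) = 1 / (a^H R^-1 a),
          where R is the order x order sample covariance of the signal."""
          x = np.asarray(x, dtype=complex)
          # Build the sample covariance matrix from sliding snapshots
          snapshots = np.array([x[i:i + order] for i in range(len(x) - order)])
          R = snapshots.conj().T @ snapshots / snapshots.shape[0]
          R_inv = np.linalg.inv(R + 1e-6 * np.eye(order))   # light diagonal loading
          n = np.arange(order)
          spectrum = []
          for f in freqs:
              a = np.exp(2j * np.pi * f * n)                # steering vector at frequency f
              spectrum.append(1.0 / np.real(a.conj() @ R_inv @ a))
          return np.array(spectrum)

      # Two sinusoids in noise; spectral peaks should appear near f = 0.10 and 0.13
      rng = np.random.default_rng(3)
      t = np.arange(512)
      sig = np.exp(2j * np.pi * 0.10 * t) + np.exp(2j * np.pi * 0.13 * t)
      sig += 0.5 * (rng.normal(size=t.size) + 1j * rng.normal(size=t.size))
      P = minimum_variance_spectrum(sig, order=32, freqs=np.linspace(0.0, 0.5, 501))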

  6. Optimal design criteria - prediction vs. parameter estimation

    NASA Astrophysics Data System (ADS)

    Waldl, Helmut

    2014-05-01

    G-optimality is a popular design criterion for optimal prediction; it tries to minimize the kriging variance over the whole design region. A G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction, it is self-evident to use the kriging variance as a measure of uncertainty for the estimates. However, computing the kriging variance, and even more so the empirical kriging variance, is very costly, and finding the maximum kriging variance in high-dimensional regions can be so time-demanding that in practice the G-optimal design cannot really be found with currently available computer equipment. We cannot always avoid this problem by using space-filling designs, because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation. A D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield basically different designs. The Pareto frontier of these two competing determinant criteria corresponds to designs that perform well under both criteria. Under certain conditions, searching for the G-optimal design on the above Pareto frontier yields almost as good results as searching for the G-optimal design in the whole design region, while the maximum of the empirical kriging variance has to be computed only a few times. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon and nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.

  7. Alternative methods of salt disposal at the seven salt sites for a nuclear waste repository

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1987-02-01

    This study discusses the various alternative salt management techniques for the disposal of excess mined salt at seven potentially acceptable nuclear waste repository sites: Deaf Smith and Swisher Counties, Texas; Richton and Cypress Creek Domes, Mississippi; Vacherie Dome, Louisiana; and Davis and Lavender Canyons, Utah. Because the repository development involves the underground excavation of corridors and waste emplacement rooms, in either bedded or domed salt formations, excess salt will be mined and must be disposed of offsite. The salt disposal alternatives examined for all the sites include commercial use, ocean disposal, deep well injection, landfill disposal, and underground mine disposal. These alternatives (and other site-specific disposal methods) are reviewed, using estimated amounts of excavated, backfilled, and excess salt. Methods of transporting the excess salt are discussed, along with possible impacts of each disposal method and potential regulatory requirements. A preferred method of disposal is recommended for each potentially acceptable repository site. 14 refs., 5 tabs.

  8. Iterative raw measurements restoration method with penalized weighted least squares approach for low-dose CT

    NASA Astrophysics Data System (ADS)

    Takahashi, Hisashi; Goto, Taiga; Hirokawa, Koichi; Miyazaki, Osamu

    2014-03-01

    Statistical iterative reconstruction and post-log data restoration algorithms for CT noise reduction have been widely studied, and these techniques have enabled us to reduce irradiation doses while maintaining image quality. In low dose scanning, electronic noise becomes obvious and results in some non-positive signals in the raw measurements. The non-positive signal should be converted to a positive signal so that it can be log-transformed. Since conventional conversion methods do not consider the local variance on the sinogram, they have difficulty controlling the strength of the filtering. Thus, in this work, we propose a method to convert the non-positive signal to a positive signal mainly by controlling the local variance. The method is implemented in two separate steps. First, an iterative restoration algorithm based on penalized weighted least squares is used to mitigate the effect of electronic noise. The algorithm preserves the local mean and reduces the local variance induced by the electronic noise. Second, raw measurements smoothed by the iterative algorithm are converted to a positive signal according to a function which replaces the non-positive signal with its local mean. In phantom studies, we confirm that the proposed method properly preserves the local mean and reduces the variance induced by the electronic noise. Our technique results in dramatically reduced shading artifacts and can also successfully cooperate with the post-log data filter to reduce streak artifacts.

  9. The patterns of genomic variances and covariances across genome for milk production traits between Chinese and Nordic Holstein populations.

    PubMed

    Li, Xiujin; Lund, Mogens Sandø; Janss, Luc; Wang, Chonglong; Ding, Xiangdong; Zhang, Qin; Su, Guosheng

    2017-03-15

    With the development of SNP chips, SNP information provides an efficient approach to further disentangle different patterns of genomic variances and covariances across the genome for traits of interest. Due to the interaction between genotype and environment as well as possible differences in genetic background, it is reasonable to treat the performances of a biological trait in different populations as different but genetic correlated traits. In the present study, we performed an investigation on the patterns of region-specific genomic variances, covariances and correlations between Chinese and Nordic Holstein populations for three milk production traits. Variances and covariances between Chinese and Nordic Holstein populations were estimated for genomic regions at three different levels of genome region (all SNP as one region, each chromosome as one region and every 100 SNP as one region) using a novel multi-trait random regression model which uses latent variables to model heterogeneous variance and covariance. In the scenario of the whole genome as one region, the genomic variances, covariances and correlations obtained from the new multi-trait Bayesian method were comparable to those obtained from a multi-trait GBLUP for all the three milk production traits. In the scenario of each chromosome as one region, BTA 14 and BTA 5 accounted for very large genomic variance, covariance and correlation for milk yield and fat yield, whereas no specific chromosome showed very large genomic variance, covariance and correlation for protein yield. In the scenario of every 100 SNP as one region, most regions explained <0.50% of genomic variance and covariance for milk yield and fat yield, and explained <0.30% for protein yield, while some regions could present large variance and covariance. Although overall correlations between two populations for the three traits were positive and high, a few regions still showed weakly positive or highly negative genomic correlations for milk yield and fat yield. The new multi-trait Bayesian method using latent variables to model heterogeneous variance and covariance could work well for estimating the genomic variances and covariances for all genome regions simultaneously. Those estimated genomic parameters could be useful to improve the genomic prediction accuracy for Chinese and Nordic Holstein populations using a joint reference data in the future.

  10. Heat and solute tracers: how do they compare in heterogeneous aquifers?

    PubMed

    Irvine, Dylan J; Simmons, Craig T; Werner, Adrian D; Graf, Thomas

    2015-04-01

    A comparison of groundwater velocity in heterogeneous aquifers estimated from hydraulic methods, heat and solute tracers was made using numerical simulations. Aquifer heterogeneity was described by geostatistical properties of the Borden, Cape Cod, North Bay, and MADE aquifers. Both heat and solute tracers displayed little systematic under- or over-estimation in velocity relative to a hydraulic control. The worst cases were under-estimates of 6.63% for solute and 2.13% for the heat tracer. Both under- and over-estimation of velocity from the heat tracer relative to the solute tracer occurred. Differences between the estimates from the tracer methods increased as the mean velocity decreased, owing to differences in rates of molecular diffusion and thermal conduction. The variance in estimated velocity using all methods increased as the variance in log-hydraulic conductivity (K) and correlation length scales increased. The variance in velocity for each scenario was remarkably small when compared to σ²ln(K) for all methods tested. The largest variability identified was for the solute tracer where 95% of velocity estimates ranged by a factor of 19 in simulations where 95% of the K values varied by almost four orders of magnitude. For the same K-fields, this range was a factor of 11 for the heat tracer. The variance in estimated velocity was always lowest when using heat as a tracer. The study results suggest that a solute tracer will provide more understanding about the variance in velocity caused by aquifer heterogeneity and a heat tracer provides a better approximation of the mean velocity. © 2013, National Ground Water Association.

  11. Reduction of bias and variance for evaluation of computer-aided diagnostic schemes.

    PubMed

    Li, Qiang; Doi, Kunio

    2006-04-01

    Computer-aided diagnostic (CAD) schemes have been developed to assist radiologists in detecting various lesions in medical images. In addition to the development, an equally important problem is the reliable evaluation of the performance levels of various CAD schemes. It is good to see that more and more investigators are employing more reliable evaluation methods such as leave-one-out and cross validation, instead of less reliable methods such as resubstitution, for assessing their CAD schemes. However, the common applications of leave-one-out and cross-validation evaluation methods do not necessarily imply that the estimated performance levels are accurate and precise. Pitfalls often occur in the use of leave-one-out and cross-validation evaluation methods, and they lead to unreliable estimation of performance levels. In this study, we first identified a number of typical pitfalls for the evaluation of CAD schemes, and conducted a Monte Carlo simulation experiment for each of the pitfalls to demonstrate quantitatively the extent of bias and/or variance caused by the pitfall. Our experimental results indicate that considerable bias and variance may exist in the estimated performance levels of CAD schemes if one employs various flawed leave-one-out and cross-validation evaluation methods. In addition, for promoting and utilizing a high standard for reliable evaluation of CAD schemes, we attempt to make recommendations, whenever possible, for overcoming these pitfalls. We believe that, with the recommended evaluation methods, we can considerably reduce the bias and variance in the estimated performance levels of CAD schemes.

  12. Partial Variance of Increments Method in Solar Wind Observations and Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Greco, A.; Matthaeus, W. H.; Perri, S.; Osman, K. T.; Servidio, S.; Wan, M.; Dmitruk, P.

    2018-02-01

    The method called "PVI" (Partial Variance of Increments) has been increasingly used in analysis of spacecraft and numerical simulation data since its inception in 2008. The purpose of the method is to study the kinematics and formation of coherent structures in space plasmas, a topic that has gained considerable attention, leading the development of identification methods, observations, and associated theoretical research based on numerical simulations. This review paper will summarize key features of the method and provide a synopsis of the main results obtained by various groups using the method. This will enable new users or those considering methods of this type to find details and background collected in one place.

  13. Global self-esteem and method effects: competing factor structures, longitudinal invariance, and response styles in adolescents.

    PubMed

    Urbán, Róbert; Szigeti, Réka; Kökönyei, Gyöngyi; Demetrovics, Zsolt

    2014-06-01

    The Rosenberg Self-Esteem Scale (RSES) is a widely used measure for assessing self-esteem, but its factor structure is debated. Our goals were to compare 10 alternative models for the RSES and to quantify and predict the method effects. This sample involves two waves (N =2,513 9th-grade and 2,370 10th-grade students) from five waves of a school-based longitudinal study. The RSES was administered in each wave. The global self-esteem factor with two latent method factors yielded the best fit to the data. The global factor explained a large amount of the common variance (61% and 46%); however, a relatively large proportion of the common variance was attributed to the negative method factor (34 % and 41%), and a small proportion of the common variance was explained by the positive method factor (5% and 13%). We conceptualized the method effect as a response style and found that being a girl and having a higher number of depressive symptoms were associated with both low self-esteem and negative response style, as measured by the negative method factor. Our study supported the one global self-esteem construct and quantified the method effects in adolescents.

  14. Global self-esteem and method effects: competing factor structures, longitudinal invariance and response styles in adolescents

    PubMed Central

    Urbán, Róbert; Szigeti, Réka; Kökönyei, Gyöngyi; Demetrovics, Zsolt

    2013-01-01

    The Rosenberg Self-Esteem Scale (RSES) is a widely used measure for assessing self-esteem, but its factor structure is debated. Our goals were to compare 10 alternative models for the RSES and to quantify and predict the method effects. This sample involves two waves (N=2513 ninth-grade and 2370 tenth-grade students) from five waves of a school-based longitudinal study. The RSES was administered in each wave. The global self-esteem factor with two latent method factors yielded the best fit to the data. The global factor explained a large amount of the common variance (61% and 46%); however, a relatively large proportion of the common variance was attributed to the negative method factor (34% and 41%), and a small proportion of the common variance was explained by the positive method factor (5% and 13%). We conceptualized the method effect as a response style and found that being a girl and having a higher number of depressive symptoms were associated with both low self-esteem and negative response style, as measured by the negative method factor. Our study supported the one global self-esteem construct and quantified the method effects in adolescents. PMID:24061931

  15. A semi-learning algorithm for noise rejection: an fNIRS study on ADHD children

    NASA Astrophysics Data System (ADS)

    Sutoko, Stephanie; Funane, Tsukasa; Katura, Takusige; Sato, Hiroki; Kiguchi, Masashi; Maki, Atsushi; Monden, Yukifumi; Nagashima, Masako; Yamagata, Takanori; Dan, Ippeita

    2017-02-01

    In pediatric studies, the quality of functional near infrared spectroscopy (fNIRS) signals is often reduced by motion artifacts. These artifacts are likely to mislead brain functionality analysis, causing false discoveries. While noise correction methods and their performance have been investigated, these methods require several parameter assumptions that apparently result in noise overfitting. In contrast, the rejection of noisy signals serves as a preferable method because it maintains the originality of the signal waveform. Here, we describe a semi-learning algorithm to detect and eliminate noisy signals. The algorithm dynamically adjusts noise detection according to predetermined noise criteria, which are spikes, unusual activation values (averaged amplitude signals within the brain activation period), and high activation variances (among trials). The criteria were organized sequentially in the algorithm, and signals were assessed in order against each criterion. By initially setting an acceptable rejection rate, criteria causing excessive data rejection are neglected, whereas criteria with tolerable rejection rates effectively eliminate noise. fNIRS data measured during an attention response paradigm (oddball task) in children with attention deficit/hyperactivity disorder (ADHD) were utilized to evaluate and optimize the algorithm's performance. This algorithm successfully substituted for the visual noise identification used in previous studies and consistently found significantly lower activation of the right prefrontal and parietal cortices in ADHD patients than in typically developing children. Thus, we conclude that the semi-learning algorithm confers more objective and standardized judgment for noise rejection and presents a promising alternative to visual noise rejection.

  16. Some variance reduction methods for numerical stochastic homogenization.

    PubMed

    Blanc, X; Le Bris, C; Legoll, F

    2016-04-28

    We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. © 2016 The Author(s).
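
    The corrector-problem techniques surveyed in the paper are not reproduced here; as a generic illustration of Monte Carlo variance reduction of the kind borrowed from engineering practice, the sketch below compares plain and antithetic-variates estimators on a simple integrand:

      import numpy as np

      def mc_plain(f, n, rng):
          """Plain Monte Carlo estimate of E[f(U)], U ~ Uniform(0, 1)."""
          y = f(rng.uniform(size=n))
          return y.mean(), y.std(ddof=1) / np.sqrt(n)

      def mc_antithetic(f, n, rng):
          """Antithetic variates: pair each draw u with 1 - u and average the pair;
          for monotone f the pair averages have smaller variance than single draws."""
          u = rng.uniform(size=n // 2)
          y = 0.5 * (f(u) + f(1.0 - u))
          return y.mean(), y.std(ddof=1) / np.sqrt(n // 2)

      # Example integrand: E[exp(U)] = e - 1
      rng = np.random.default_rng(5)
      print(mc_plain(np.exp, 100_000, rng))       # (estimate, standard error)
      print(mc_antithetic(np.exp, 100_000, rng))  # same cost, visibly smaller standard error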

  17. Decomposing variation in male reproductive success: age-specific variances and covariances through extra-pair and within-pair reproduction.

    PubMed

    Lebigre, Christophe; Arcese, Peter; Reid, Jane M

    2013-07-01

    Age-specific variances and covariances in reproductive success shape the total variance in lifetime reproductive success (LRS), age-specific opportunities for selection, and population demographic variance and effective size. Age-specific (co)variances in reproductive success achieved through different reproductive routes must therefore be quantified to predict population, phenotypic and evolutionary dynamics in age-structured populations. While numerous studies have quantified age-specific variation in mean reproductive success, age-specific variances and covariances in reproductive success, and the contributions of different reproductive routes to these (co)variances, have not been comprehensively quantified in natural populations. We applied 'additive' and 'independent' methods of variance decomposition to complete data describing apparent (social) and realised (genetic) age-specific reproductive success across 11 cohorts of socially monogamous but genetically polygynandrous song sparrows (Melospiza melodia). We thereby quantified age-specific (co)variances in male within-pair and extra-pair reproductive success (WPRS and EPRS) and the contributions of these (co)variances to the total variances in age-specific reproductive success and LRS. 'Additive' decomposition showed that within-age and among-age (co)variances in WPRS across males aged 2-4 years contributed most to the total variance in LRS. Age-specific (co)variances in EPRS contributed relatively little. However, extra-pair reproduction altered age-specific variances in reproductive success relative to the social mating system, and hence altered the relative contributions of age-specific reproductive success to the total variance in LRS. 'Independent' decomposition showed that the (co)variances in age-specific WPRS, EPRS and total reproductive success, and the resulting opportunities for selection, varied substantially across males that survived to each age. Furthermore, extra-pair reproduction increased the variance in age-specific reproductive success relative to the social mating system to a degree that increased across successive age classes. This comprehensive decomposition of the total variances in age-specific reproductive success and LRS into age-specific (co)variances attributable to two reproductive routes showed that within-age and among-age covariances contributed substantially to the total variance and that extra-pair reproduction can alter the (co)variance structure of age-specific reproductive success. Such covariances and impacts should consequently be integrated into theoretical assessments of demographic and evolutionary processes in age-structured populations. © 2013 The Authors. Journal of Animal Ecology © 2013 British Ecological Society.

  18. Fuel cell stack monitoring and system control

    DOEpatents

    Keskula, Donald H.; Doan, Tien M.; Clingerman, Bruce J.

    2004-02-17

    A control method for monitoring a fuel cell stack in a fuel cell system in which the actual voltage and actual current from the fuel cell stack are monitored. A relationship between voltage and current over the operating range of the fuel cell is preestablished. A variance value between the actual measured voltage and the expected voltage magnitude for a given actual measured current is calculated and compared with a predetermined allowable variance. An output is generated if the calculated variance value exceeds the predetermined variance. The predetermined voltage-current relationship for the fuel cell is represented as a polarization curve at given operating conditions of the fuel cell.
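
    A hedged sketch of the monitoring logic described in this abstract (the polarization-curve points, allowable variance, and function names are invented for illustration and are not taken from the patent):

      import numpy as np

      # Illustrative polarization curve: expected stack voltage (V) vs. stack current (A)
      CURRENT_POINTS = np.array([0.0, 50.0, 100.0, 150.0, 200.0])
      VOLTAGE_POINTS = np.array([95.0, 82.0, 74.0, 66.0, 55.0])
      ALLOWED_VARIANCE = 4.0   # allowable deviation (V) at the given operating conditions

      def check_stack(measured_current, measured_voltage):
          """Compare the measured voltage with the value expected from the polarization
          curve at the measured current; flag the stack if the deviation is too large."""
          expected_voltage = np.interp(measured_current, CURRENT_POINTS, VOLTAGE_POINTS)
          variance_value = abs(measured_voltage - expected_voltage)
          return variance_value > ALLOWED_VARIANCE, expected_voltage, variance_value

      flag, expected, deviation = check_stack(120.0, 64.0)   # deviation ~6.8 V -> flag is True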

  19. Validity and extension of the SCS-CN method for computing infiltration and rainfall-excess rates

    NASA Astrophysics Data System (ADS)

    Mishra, Surendra Kumar; Singh, Vijay P.

    2004-12-01

    A criterion is developed for determining the validity of the Soil Conservation Service curve number (SCS-CN) method. According to this criterion, the existing SCS-CN method is found to be applicable when the potential maximum retention, S, is less than or equal to twice the total rainfall amount. The criterion is tested using published data of two watersheds. Separating the steady infiltration from capillary infiltration, the method is extended for predicting infiltration and rainfall-excess rates. The extended SCS-CN method is tested using 55 sets of laboratory infiltration data on soils varying from Plainfield sand to Yolo light clay, and the computed and observed infiltration and rainfall-excess rates are found to be in good agreement.
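
    The extended formulation is not reproduced here; the sketch below implements only the standard SCS-CN rainfall-excess relation, Q = (P − Ia)² / (P − Ia + S) with Ia = 0.2S, together with the S ≤ 2P applicability check stated in the abstract (units and inputs are illustrative):

      def scs_cn_runoff(P_mm, CN, lambda_ia=0.2):
          """Standard SCS-CN rainfall-excess depth (mm) for storm rainfall P_mm and
          curve number CN; initial abstraction Ia = lambda_ia * S."""
          S = 25400.0 / CN - 254.0          # potential maximum retention (mm)
          Ia = lambda_ia * S                # initial abstraction (mm)
          if P_mm <= Ia:
              return 0.0, S
          Q = (P_mm - Ia) ** 2 / (P_mm - Ia + S)
          return Q, S

      P, CN = 60.0, 75.0
      Q, S = scs_cn_runoff(P, CN)           # Q ~ 14.5 mm of rainfall excess
      valid = S <= 2.0 * P                  # applicability criterion from the abstract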

  20. Analysis of signal-dependent sensor noise on JPEG 2000-compressed Sentinel-2 multi-spectral images

    NASA Astrophysics Data System (ADS)

    Uss, M.; Vozel, B.; Lukin, V.; Chehdi, K.

    2017-10-01

    The processing chain of Sentinel-2 MultiSpectral Instrument (MSI) data involves filtering and compression stages that modify MSI sensor noise. As a result, noise in Sentinel-2 Level-1C data distributed to users becomes processed. We demonstrate that the processed noise variance model is bivariate: noise variance depends on image intensity (caused by the signal-dependency of photon counting detectors) and on signal-to-noise ratio (SNR; caused by filtering/compression). To provide information on processed noise parameters, which is missing in the Sentinel-2 metadata, we propose to use a blind noise parameter estimation approach. Existing methods are restricted to a univariate noise model. Therefore, we propose an extension of the existing vcNI+fBm blind noise parameter estimation method to a multivariate noise model, mvcNI+fBm, and apply it to each band of Sentinel-2A data. The obtained results clearly demonstrate that noise variance is affected by filtering/compression for SNR less than about 15. Processed noise variance is reduced by a factor of 2 - 5 in homogeneous areas as compared to noise variance for high SNR values. Estimates of the noise variance model parameters are provided for each Sentinel-2A band. The Sentinel-2A MSI Level-1C noise models obtained in this paper could be useful for end users and researchers working in a variety of remote sensing applications.

  1. Simultaneous estimation of cross-validation errors in least squares collocation applied for statistical testing and evaluation of the noise variance components

    NASA Astrophysics Data System (ADS)

    Behnabian, Behzad; Mashhadi Hossainali, Masoud; Malekzadeh, Ahad

    2018-02-01

    The cross-validation technique is a popular method to assess and improve the quality of prediction by least squares collocation (LSC). We present a formula for direct estimation of the vector of cross-validation errors (CVEs) in LSC which is much faster than element-wise CVE computation. We show that a quadratic form of the CVEs follows a Chi-squared distribution. Furthermore, an a posteriori noise variance factor is derived from the quadratic form of the CVEs. In order to detect blunders in the observations, the estimated standardized CVE is proposed as the test statistic, which can be applied when noise variances are known or unknown. We use LSC together with the methods proposed in this research for interpolation of crustal subsidence in the northern coast of the Gulf of Mexico. The results show that after detecting and removing outliers, the root mean square (RMS) of the CVEs and the estimated noise standard deviation are reduced by about 51 and 59%, respectively. In addition, the RMS of the LSC prediction error at data points and the RMS of the estimated noise of observations are decreased by 39 and 67%, respectively. However, the RMS of the LSC prediction error on a regular grid of interpolation points covering the area is only reduced by about 4%, which is a consequence of the sparse distribution of data points in this case study. The influence of gross errors on LSC prediction results is also investigated by lower cutoff CVEs. After elimination of outliers, the RMS of this type of error is also reduced, by 19.5% for a 5 km radius of vicinity. We propose a method using standardized CVEs for classification of the dataset into three groups with presumed different noise variances. The noise variance components for each of the groups are estimated using the restricted maximum-likelihood method via the Fisher scoring technique. Finally, LSC assessment measures were computed for the estimated heterogeneous noise variance model and compared with those of the homogeneous model. The advantage of the proposed method is the reduction in estimated noise levels for those groups with fewer noisy data points.
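
    The LSC derivation itself is not reproduced here; the analogous classical shortcut for ordinary least squares, e_cv,i = e_i / (1 − h_ii), illustrates how a whole vector of cross-validation errors can be obtained directly instead of by element-wise refitting (Python/NumPy sketch with simulated data):

      import numpy as np

      def loo_cv_errors(X, y):
          """Leave-one-out residuals for ordinary least squares without refitting:
          e_cv,i = e_i / (1 - h_ii), with h_ii the leverages from the hat matrix."""
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          residuals = y - X @ beta
          leverages = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)   # hat-matrix diagonal
          return residuals / (1.0 - leverages)

      # Sanity check against brute-force refitting with each point left out
      rng = np.random.default_rng(6)
      X = np.column_stack([np.ones(30), rng.normal(size=(30, 2))])
      y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(scale=0.3, size=30)
      fast = loo_cv_errors(X, y)
      brute = np.array([y[i] - X[i] @ np.linalg.lstsq(np.delete(X, i, 0),
                                                      np.delete(y, i), rcond=None)[0]
                        for i in range(len(y))])
      assert np.allclose(fast, brute)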

  2. Image Enhancement via Subimage Histogram Equalization Based on Mean and Variance

    PubMed Central

    2017-01-01

    This paper puts forward a novel image enhancement method via Mean and Variance based Subimage Histogram Equalization (MVSIHE), which effectively increases the contrast of the input image with brightness and details well preserved compared with some other methods based on histogram equalization (HE). Firstly, the histogram of input image is divided into four segments based on the mean and variance of luminance component, and the histogram bins of each segment are modified and equalized, respectively. Secondly, the result is obtained via the concatenation of the processed subhistograms. Lastly, the normalization method is deployed on intensity levels, and the integration of the processed image with the input image is performed. 100 benchmark images from a public image database named CVG-UGR-Database are used for comparison with other state-of-the-art methods. The experiment results show that the algorithm can not only enhance image information effectively but also well preserve brightness and details of the original image. PMID:29403529

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aziz, Mohd Khairul Bazli Mohd, E-mail: mkbazli@yahoo.com; Yusof, Fadhilah, E-mail: fadhilahy@utm.my; Daud, Zalina Mohd, E-mail: zalina@ic.utm.my

    Recently, many rainfall network design techniques have been developed, discussed and compared by many researchers. Present-day hydrological studies require higher levels of accuracy from collected data. In numerous basins, the rain gauge stations are located without clear scientific understanding. In this study, an attempt is made to redesign the rain gauge network for Johor, Malaysia in order to meet the required level of accuracy preset by rainfall data users. The existing network of 84 rain gauges in Johor is optimized and redesigned into new locations by using rainfall, humidity, solar radiation, temperature and wind speed data collected during the monsoon season (November - February) of 1975 until 2008. This study used the combination of a geostatistics method (variance-reduction method) and simulated annealing as the optimization algorithm during the redesign process. The result shows that the new rain gauge locations provide the minimum value of estimated variance. This shows that the combination of the geostatistics method (variance-reduction method) and simulated annealing is successful in the development of the new optimum rain gauge system.

  4. Optimal Superpositioning of Flexible Molecule Ensembles

    PubMed Central

    Gapsys, Vytautas; de Groot, Bert L.

    2013-01-01

    Analysis of the internal dynamics of a biological molecule requires the successful removal of overall translation and rotation. Particularly for flexible or intrinsically disordered peptides, this is a challenging task due to the absence of a well-defined reference structure that could be used for superpositioning. In this work, we started the analysis with a widely known formulation of an objective for the problem of superimposing a set of multiple molecules as variance minimization over an ensemble. A negative effect of this superpositioning method is the introduction of ambiguous rotations, where different rotation matrices may be applied to structurally similar molecules. We developed two algorithms to resolve the suboptimal rotations. The first approach minimizes the variance together with the distance of a structure to a preceding molecule in the ensemble. The second algorithm seeks for minimal variance together with the distance to the nearest neighbors of each structure. The newly developed methods were applied to molecular-dynamics trajectories and normal-mode ensembles of the Aβ peptide, RS peptide, and lysozyme. These new (to our knowledge) superpositioning methods combine the benefits of variance and distance between nearest-neighbor(s) minimization, providing a solution for the analysis of intrinsic motions of flexible molecules and resolving ambiguous rotations. PMID:23332072

  5. Genetic variance of tolerance and the toxicant threshold model.

    PubMed

    Tanaka, Yoshinari; Mano, Hiroyuki; Tatsuta, Haruki

    2012-04-01

    A statistical genetics method is presented for estimating the genetic variance (heritability) of tolerance to pollutants on the basis of a standard acute toxicity test conducted on several isofemale lines of cladoceran species. To analyze the genetic variance of tolerance in the case when the response is measured as a few discrete states (quantal endpoints), the authors attempted to apply the threshold character model in quantitative genetics to the threshold model separately developed in ecotoxicology. The integrated threshold model (toxicant threshold model) assumes that the response of a particular individual occurs at a threshold toxicant concentration and that the individual tolerance characterized by the individual's threshold value is determined by genetic and environmental factors. As a case study, the heritability of tolerance to p-nonylphenol in the cladoceran species Daphnia galeata was estimated by using the maximum likelihood method and nested analysis of variance (ANOVA). Broad-sense heritability was estimated to be 0.199 ± 0.112 by the maximum likelihood method and 0.184 ± 0.089 by ANOVA; both results implied that the species examined had the potential to acquire tolerance to this substance by evolutionary change. Copyright © 2012 SETAC.

  6. Estimation of the biserial correlation and its sampling variance for use in meta-analysis.

    PubMed

    Jacobs, Perke; Viechtbauer, Wolfgang

    2017-06-01

    Meta-analyses are often used to synthesize the findings of studies examining the correlational relationship between two continuous variables. When only dichotomous measurements are available for one of the two variables, the biserial correlation coefficient can be used to estimate the product-moment correlation between the two underlying continuous variables. Unlike the point-biserial correlation coefficient, biserial correlation coefficients can therefore be integrated with product-moment correlation coefficients in the same meta-analysis. The present article describes the estimation of the biserial correlation coefficient for meta-analytic purposes and reports simulation results comparing different methods for estimating the coefficient's sampling variance. The findings indicate that commonly employed methods yield inconsistent estimates of the sampling variance across a broad range of research situations. In contrast, consistent estimates can be obtained using two methods that appear to be unknown in the meta-analytic literature. A variance-stabilizing transformation for the biserial correlation coefficient is described that allows for the construction of confidence intervals for individual coefficients with close to nominal coverage probabilities in most of the examined conditions. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
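
    The sampling-variance estimators compared in the paper are not reproduced here; the sketch below only shows the textbook biserial correlation estimator itself, r_b = (M1 − M0)/s_Y · pq/φ(Φ⁻¹(p)), assuming an underlying normal variable behind the dichotomy (data are simulated for illustration):

      import numpy as np
      from scipy.stats import norm

      def biserial_correlation(y, group):
          """Textbook biserial correlation between a continuous variable y and a
          dichotomized variable `group` (0/1) assumed to reflect an underlying
          normal variable cut at a threshold."""
          y = np.asarray(y, dtype=float)
          group = np.asarray(group).astype(bool)
          p = group.mean()                       # proportion in group 1
          q = 1.0 - p
          ordinate = norm.pdf(norm.ppf(p))       # standard normal density at the cut point
          s_y = y.std()                          # SD of the full sample
          return (y[group].mean() - y[~group].mean()) / s_y * (p * q / ordinate)

      # Example with simulated latent bivariate-normal data (true correlation 0.5)
      rng = np.random.default_rng(7)
      z = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=5000)
      y, x_latent = z[:, 0], z[:, 1]
      group = (x_latent > 0.3).astype(int)       # dichotomize the second variable
      print(biserial_correlation(y, group))      # close to 0.5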

  7. Analytic variance estimates of Swank and Fano factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gutierrez, Benjamin; Badano, Aldo; Samuelson, Frank, E-mail: frank.samuelson@fda.hhs.gov

    Purpose: Variance estimates for detector energy resolution metrics can be used as stopping criteria in Monte Carlo simulations for the purpose of ensuring a small uncertainty of those metrics and for the design of variance reduction techniques. Methods: The authors derive an estimate for the variance of two energy resolution metrics, the Swank factor and the Fano factor, in terms of statistical moments that can be accumulated without significant computational overhead. The authors examine the accuracy of these two estimators and demonstrate how the estimates of the coefficient of variation of the Swank and Fano factors behave with data from a Monte Carlo simulation of an indirect x-ray imaging detector. Results: The authors' analyses suggest that the accuracy of their variance estimators is appropriate for estimating the actual variances of the Swank and Fano factors for a variety of distributions of detector outputs. Conclusions: The variance estimators derived in this work provide a computationally convenient way to estimate the error or coefficient of variation of the Swank and Fano factors during Monte Carlo simulations of radiation imaging systems.

  8. The Statistical Power of Planned Comparisons.

    ERIC Educational Resources Information Center

    Benton, Roberta L.

    Basic principles underlying statistical power are examined; and issues pertaining to effect size, sample size, error variance, and significance level are highlighted via the use of specific hypothetical examples. Analysis of variance (ANOVA) and related methods remain popular, although other procedures sometimes have more statistical power against…

  9. Blind Deconvolution Method of Image Deblurring Using Convergence of Variance

    DTIC Science & Technology

    2011-03-24

    random variable x is [9] f_X(x) = (1/(√(2π)·σ)) · exp(−(x−m)²/(2σ²)), −∞ < x < ∞, σ > 0 (6), where m is the mean and σ² is the variance. [Figure 1: Gaussian distribution]...of the MAP Estimation algorithm when N was set to 50. The APEX method is not without its own difficulties when dealing with astronomical data

  10. Analysis of Variance in the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    Deloach, Richard

    2010-01-01

    This paper is a tutorial introduction to the analysis of variance (ANOVA), intended as a reference for aerospace researchers who are being introduced to the analytical methods of the Modern Design of Experiments (MDOE), or who may have other opportunities to apply this method. One-way and two-way fixed-effects ANOVA, as well as random effects ANOVA, are illustrated in practical terms that will be familiar to most practicing aerospace researchers.
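
    As a minimal companion example (not part of the tutorial itself), a one-way fixed-effects ANOVA can be run in a few lines with SciPy; the three groups below are invented measurements:

      import numpy as np
      from scipy.stats import f_oneway

      # Hypothetical response measurements at three test configurations
      rng = np.random.default_rng(8)
      config_a = rng.normal(1.00, 0.05, size=12)
      config_b = rng.normal(1.03, 0.05, size=12)
      config_c = rng.normal(1.10, 0.05, size=12)

      # One-way fixed-effects ANOVA: is at least one configuration mean different?
      f_stat, p_value = f_oneway(config_a, config_b, config_c)
      print(f"F = {f_stat:.2f}, p = {p_value:.4f}")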

  11. Swarm based mean-variance mapping optimization (MVMOS) for solving economic dispatch

    NASA Astrophysics Data System (ADS)

    Khoa, T. H.; Vasant, P. M.; Singh, M. S. Balbir; Dieu, V. N.

    2014-10-01

    The economic dispatch (ED) is an essential optimization task in the power generation system. It is defined as the process of allocating the real power output of generation units to meet the required load demand so that their total operating cost is minimized while satisfying all physical and operational constraints. This paper introduces a novel optimization method named Swarm-based Mean-Variance Mapping Optimization (MVMOS). The technique is an extension of the original single-particle mean-variance mapping optimization (MVMO). Its features make it a potentially attractive algorithm for solving optimization problems. The proposed method is implemented for three test power systems, including 3, 13 and 20 thermal generation units with quadratic cost functions, and the obtained results are compared with many other methods available in the literature. Test results indicate that the proposed method can be efficiently implemented for solving the economic dispatch problem.

  12. On the Spike Train Variability Characterized by Variance-to-Mean Power Relationship.

    PubMed

    Koyama, Shinsuke

    2015-07-01

    We propose a statistical method for modeling the non-Poisson variability of spike trains observed in a wide range of brain regions. Central to our approach is the assumption that the variance and the mean of interspike intervals are related by a power function characterized by two parameters: the scale factor and exponent. It is shown that this single assumption allows the variability of spike trains to have an arbitrary scale and various dependencies on the firing rate in the spike count statistics, as well as in the interval statistics, depending on the two parameters of the power function. We also propose a statistical model for spike trains that exhibits the variance-to-mean power relationship. Based on this, a maximum likelihood method is developed for inferring the parameters from rate-modulated spike trains. The proposed method is illustrated on simulated and experimental spike trains.
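
    The likelihood-based inference method is not reproduced here; the sketch below only illustrates the empirical variance-to-mean power relationship by fitting Var(ISI) = scale · Mean(ISI)^exponent across groups of interspike intervals via log-log regression (simulated gamma-distributed ISIs, for which the exponent is 2):

      import numpy as np

      def fit_variance_mean_power(isi_groups):
          """Fit Var(ISI) = scale * Mean(ISI)**exponent across groups of interspike
          intervals (e.g., different firing-rate conditions) by log-log regression."""
          means = np.array([np.mean(g) for g in isi_groups])
          variances = np.array([np.var(g, ddof=1) for g in isi_groups])
          exponent, log_scale = np.polyfit(np.log(means), np.log(variances), 1)
          return np.exp(log_scale), exponent

      # Gamma-distributed ISIs with fixed shape k have Var = Mean^2 / k (exponent 2)
      rng = np.random.default_rng(9)
      groups = [rng.gamma(shape=4.0, scale=s, size=2000) for s in (0.01, 0.02, 0.05, 0.1)]
      scale, exponent = fit_variance_mean_power(groups)   # exponent near 2, scale near 0.25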

  13. Optimal allocation of testing resources for statistical simulations

    NASA Astrophysics Data System (ADS)

    Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick

    2015-07-01

    Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data of the input variables to better characterize their probability distributions can reduce the variance of statistical estimates. The methodology proposed determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses multivariate t-distribution and Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given an amount of available data. This method handles independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable in the output function and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.

  14. 31 CFR 351.67 - What happens if any person purchases book-entry Series EE savings bonds in excess of the maximum...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... bonds in excess of the maximum annual amount? We reserve the right to take any action we deem necessary to adjust the excess, including the right to remove the excess bonds from your New Treasury Direct account and refund the payment price to your bank account of record using the ACH method of payment. ...

  15. Quantum cryptography approaching the classical limit.

    PubMed

    Weedbrook, Christian; Pirandola, Stefano; Lloyd, Seth; Ralph, Timothy C

    2010-09-10

    We consider the security of continuous-variable quantum cryptography as we approach the classical limit, i.e., when the unknown preparation noise at the sender's station becomes significantly noisy or thermal (even by as much as 10^4 times greater than the variance of the vacuum mode). We show that, provided the channel transmission losses do not exceed 50%, the security of quantum cryptography is not dependent on the channel transmission, and is therefore incredibly robust against significant amounts of excess preparation noise. We extend these results to consider for the first time quantum cryptography at wavelengths considerably longer than optical and find that regions of security still exist all the way down to the microwave.

  16. An empirical study of statistical properties of variance partition coefficients for multi-level logistic regression models

    USGS Publications Warehouse

    Li, Ji; Gray, B.R.; Bates, D.M.

    2008-01-01

    Partitioning the variance of a response by design levels is challenging for binomial and other discrete outcomes. Goldstein (2003) proposed four definitions for variance partitioning coefficients (VPC) under a two-level logistic regression model. In this study, we explicitly derived formulae for the multi-level logistic regression model and subsequently studied the distributional properties of the calculated VPCs. Using simulations and a vegetation dataset, we demonstrated associations between different VPC definitions, the importance of methods for estimating VPCs (by comparing VPCs obtained using Laplace and penalized quasi-likelihood methods), and bivariate dependence between VPCs calculated at different levels. Such an empirical study lends immediate support to wider applications of VPCs in scientific data analysis.
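
    One commonly used definition (the latent-variable formulation) for a two-level logistic model is VPC = σ²_u / (σ²_u + π²/3), where π²/3 is the level-1 variance of the standard logistic distribution; a minimal sketch (the level-2 variance value is illustrative):

      import math

      def latent_variable_vpc(sigma2_u):
          """Latent-variable VPC for a two-level logistic model: the level-2 variance
          divided by the level-2 variance plus the level-1 logistic variance pi^2/3."""
          level1_var = math.pi ** 2 / 3.0        # variance of the standard logistic distribution
          return sigma2_u / (sigma2_u + level1_var)

      print(latent_variable_vpc(0.5))            # ~0.13: about 13% of variance at level 2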

  17. In-vitro antioxidant and antibacterial activities of Xanthium strumarium L. extracts on methicillin-susceptible and methicillin-resistant Staphylococcus aureus.

    PubMed

    Rad, Javad Sharifi; Alfatemi, Seyedeh Mahsan Hoseini; Rad, Majid Sharifi; Iriti, Marcello

    2013-10-01

    The excessive and repeated use of antibiotics in medicine has led to the development of antibiotic-resistant microbial strains, including Staphylococcus aureus, for which the emergence of antibiotic-resistant strains has reduced the number of antibiotics available to treat clinical infections caused by this bacterium. In this study, antioxidant and antimicrobial activities of a methanolic extract of Xanthium strumarium L. leaves were evaluated against methicillin-susceptible and methicillin-resistant Staphylococcus aureus (MRSA) spp. Antiradical and antioxidant activities of the X. strumarium L. leaf extract were evaluated based on its ability to scavenge the synthetic 1,1-diphenyl-2-picrylhydrazyl (DPPH) free radical and by the paired diene method, respectively, whereas the antimicrobial activity was assayed by the disc diffusion method. Data were subjected to analysis of variance following a completely randomized design to determine the least significant difference at P < 0.05 using SPSS v. 11.5. The IC50 values of the extract were 0.02 mg/mL and 0.09 mg/mL for the antioxidant and DPPH-scavenging capacity, respectively. The X. strumarium extract affected both methicillin-susceptible Staphylococcus aureus and MRSA, though the antibacterial activity was more effective on methicillin-susceptible S. aureus spp. The antibacterial and antioxidant activities exhibited by the methanol extract may justify the traditional use of this plant as a folk remedy worldwide.

  18. Sociodemographic, perceived and objective need indicators of mental health treatment use and treatment-seeking intentions among primary care medical patients.

    PubMed

    Elhai, Jon D; Voorhees, Summer; Ford, Julian D; Min, Kyeong Sam; Frueh, B Christopher

    2009-01-30

    We explored sociodemographic and illness/need associations with both recent mental healthcare utilization intensity and self-reported behavioral intentions to seek treatment. Data were examined from a community sample of 201 participants presenting for medical appointments at a Midwestern U.S. primary care clinic, in a cross-sectional survey study. Using non-linear regression analyses accounting for the excess of zero values in treatment visit counts, we found that both sociodemographic and illness/need models were significantly predictive of both recent treatment utilization intensity and intentions to seek treatment. Need models added substantial variance in prediction, above and beyond sociodemographic models. Variables with the greatest predictive role in explaining past treatment utilization intensity were greater depression severity, perceived need for treatment, older age, and lower income. Robust variables in predicting intentions to seek treatment were greater depression severity, perceived need for treatment, and more positive treatment attitudes. This study extends research findings on mental health treatment utilization, specifically addressing medical patients and using statistical methods appropriate to examining treatment visit counts, and demonstrates the importance of both objective and subjective illness/need variables in predicting recent service use intensity and intended future utilization.
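
    The "non-linear regression analyses accounting for the excess of zero values" described above can be sketched with a zero-inflated count model. The snippet below is a hedged illustration on simulated data (the predictor name and parameters are invented, not the study's), assuming statsmodels' ZeroInflatedPoisson class.

    ```python
    # Hedged sketch: zero-inflated Poisson regression for visit counts with many
    # zeros, using statsmodels on simulated illustrative data.
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.discrete.count_model import ZeroInflatedPoisson

    rng = np.random.default_rng(0)
    n = 500
    depression = rng.normal(size=n)              # illustrative predictor
    x = sm.add_constant(depression)

    # Simulate counts with structural zeros plus a Poisson component
    p_zero = 0.4
    lam = np.exp(0.2 + 0.6 * depression)
    y = np.where(rng.random(n) < p_zero, 0, rng.poisson(lam))

    model = ZeroInflatedPoisson(y, x, exog_infl=np.ones((n, 1)), inflation='logit')
    result = model.fit(disp=False)
    print(result.summary())
    ```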

  19. Tanning addiction and psychopathology: Further evaluation of anxiety disorders and substance abuse

    PubMed Central

    Ashrafioun, Lisham; Bonar, Erin E.

    2014-01-01

    Background Little research has investigated the correlates of problematic tanning and tanning dependence. Objective To identify characteristics associated with problematic tanning and tanning dependence, and to evaluate simultaneously the associations of variables as correlates of problematic tanning and tanning dependence. Method To assess tanning-related characteristics, psychopathology, and demographics, we administered questionnaires to 533 tanning university students; 31% met criteria for tanning dependence, 12% for problematic tanning. Results Both problematic tanning and tanning dependence were significantly associated with being female (p < .001; p < .001, respectively) and with higher scores on screening measures of obsessive-compulsive (p < .001, p = .005, respectively) and body dysmorphic disorders (p = .019, p < .001, respectively). Frequency of tanning in the past month was the strongest correlate of problematic tanning (p < .001) and tanning dependence (p < .001) when included in a model that controlled for shared variance among demographics and psychopathology. Limitations The sample was recruited from one university and contained only self-report measures. Conclusion Results suggest that those who engage in excessive tanning may also have significant psychiatric distress. Additional research is needed to characterize compulsive, problematic tanning as well as its rates, correlates, and risk factors among diverse samples. PMID:24373775

  20. Tongue Motion Patterns in Post-Glossectomy and Typical Speakers: A Principal Components Analysis

    PubMed Central

    Stone, Maureen; Langguth, Julie M.; Woo, Jonghye; Chen, Hegang; Prince, Jerry L.

    2015-01-01

    Purpose In this study, the authors examined changes in tongue motion caused by glossectomy surgery. A speech task that involved subtle changes in tongue-tip positioning (the motion from /i/ to /s/) was measured. The hypothesis was that patients would have limited motion on the tumor (resected) side and would compensate with greater motion on the nontumor side in order to elevate the tongue tip and blade for /s/. Method Velocity fields were extracted from tagged magnetic resonance images in the left, middle, and right tongue of 3 patients and 10 controls. Principal components (PCs) analysis quantified motion differences and distinguished between the subject groups. Results PCs 1 and 2 represented variance in (a) size and independence of the tongue tip, and (b) direction of motion of the tip, body, or both. Patients and controls were correctly separated by a small number of PCs. Conclusions Motion of the tumor slice was different between patients and controls, but the nontumor side of the patients’ tongues did not show excessive or adaptive motion. Both groups contained apical and laminal /s/ users, and 1 patient created apical /s/ in a highly unusual manner. PMID:24023377
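
    The analysis rests on standard principal components of flattened velocity fields. A minimal sketch of that step (on simulated data, with toy dimensions rather than the study's tagged-MRI fields) is below.

    ```python
    # Minimal sketch (simulated data): PCA on flattened velocity fields, keeping
    # the few components that capture most of the motion variance, in the spirit
    # of using a small number of PCs to separate patients from controls.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    fields = rng.normal(size=(13, 200))      # 13 speakers x flattened velocity field

    pca = PCA(n_components=2)
    scores = pca.fit_transform(fields)        # per-speaker scores on PC1 and PC2
    print(pca.explained_variance_ratio_)      # variance captured by each PC
    print(scores.shape)                       # (13, 2)
    ```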

  1. Social media addiction: What is the role of content in YouTube?

    PubMed Central

    Balakrishnan, Janarthanan; Griffiths, Mark D.

    2017-01-01

    Background YouTube, the online video creation and sharing site, supports both video content viewing and content creation activities. For a minority of people, the time spent engaging with YouTube can be excessive and potentially problematic. Method This study analyzed the relationship between content viewing, content creation, and YouTube addiction in a survey of 410 Indian-student YouTube users. It also examined the influence of content, social, technology, and process gratifications on user inclination toward YouTube content viewing and content creation. Results The results demonstrated that content creation in YouTube had a closer relationship with YouTube addiction than content viewing. Furthermore, social gratification was found to have a significant influence on both types of YouTube activities, whereas technology gratification did not significantly influence them. Among all perceived gratifications, content gratification had the highest relationship coefficient value with YouTube content creation inclination. The model fit and variance extracted by the endogenous constructs were good, which further validated the results of the analysis. Conclusion The study facilitates new ways to explore user gratification in using YouTube and how the channel responds to it. PMID:28914072

  2. Social media addiction: What is the role of content in YouTube?

    PubMed

    Balakrishnan, Janarthanan; Griffiths, Mark D

    2017-09-01

    Background YouTube, the online video creation and sharing site, supports both video content viewing and content creation activities. For a minority of people, the time spent engaging with YouTube can be excessive and potentially problematic. Method This study analyzed the relationship between content viewing, content creation, and YouTube addiction in a survey of 410 Indian-student YouTube users. It also examined the influence of content, social, technology, and process gratifications on user inclination toward YouTube content viewing and content creation. Results The results demonstrated that content creation in YouTube had a closer relationship with YouTube addiction than content viewing. Furthermore, social gratification was found to have a significant influence on both types of YouTube activities, whereas technology gratification did not significantly influence them. Among all perceived gratifications, content gratification had the highest relationship coefficient value with YouTube content creation inclination. The model fit and variance extracted by the endogenous constructs were good, which further validated the results of the analysis. Conclusion The study facilitates new ways to explore user gratification in using YouTube and how the channel responds to it.

  3. Vector space methods of photometric analysis. II - Refinement of the MK grid for B stars. III - The two components of ultraviolet reddening

    NASA Technical Reports Server (NTRS)

    Massa, D.

    1980-01-01

    This paper discusses systematic errors which arise from exclusive use of the MK system to determine reddening. It is found that implementation of uvby, beta photometry to refine the qualitative MK grid substantially reduces stellar mismatch error. A working definition of 'identical' ubvy, beta types is investigated and the relationship of uvby to B-V color excesses is determined. A comparison is also made of the hydrogen based uvby, beta types with the MK system based on He and metal lines. A small core correlated effective temperature luminosity error in the MK system for the early B stars is observed along with a breakdown of the MK luminosity criteria for the late B stars. The second part investigates the wavelength dependence of interstellar extinction in the ultraviolet wavelength range observed with the TD-1 satellite. In this study the sets of identical stars employed to find reddening are determined more precisely than in previous studies and consist only of normal, nonsupergiant stars. A multivariate analysis of variance techniques in an unbiased coordinate system is used for determining the wavelength dependence of reddening.

  4. A Simple Method for Deriving the Confidence Regions for the Penalized Cox’s Model via the Minimand Perturbation†

    PubMed Central

    Lin, Chen-Yen; Halabi, Susan

    2017-01-01

    We propose a minimand perturbation method to derive the confidence regions for the regularized estimators for the Cox’s proportional hazards model. Although the regularized estimation procedure produces a more stable point estimate, it remains challenging to provide an interval estimator or an analytic variance estimator for the associated point estimate. Based on the sandwich formula, the current variance estimator provides a simple approximation, but its finite sample performance is not entirely satisfactory. Besides, the sandwich formula can only provide variance estimates for the non-zero coefficients. In this article, we present a generic description for the perturbation method and then introduce a computation algorithm using the adaptive least absolute shrinkage and selection operator (LASSO) penalty. Through simulation studies, we demonstrate that our method can better approximate the limiting distribution of the adaptive LASSO estimator and produces more accurate inference compared with the sandwich formula. The simulation results also indicate the possibility of extending the applications to the adaptive elastic-net penalty. We further demonstrate our method using data from a phase III clinical trial in prostate cancer. PMID:29326496

  5. A Simple Method for Deriving the Confidence Regions for the Penalized Cox's Model via the Minimand Perturbation.

    PubMed

    Lin, Chen-Yen; Halabi, Susan

    2017-01-01

    We propose a minimand perturbation method to derive the confidence regions for the regularized estimators for the Cox's proportional hazards model. Although the regularized estimation procedure produces a more stable point estimate, it remains challenging to provide an interval estimator or an analytic variance estimator for the associated point estimate. Based on the sandwich formula, the current variance estimator provides a simple approximation, but its finite sample performance is not entirely satisfactory. Besides, the sandwich formula can only provide variance estimates for the non-zero coefficients. In this article, we present a generic description for the perturbation method and then introduce a computation algorithm using the adaptive least absolute shrinkage and selection operator (LASSO) penalty. Through simulation studies, we demonstrate that our method can better approximate the limiting distribution of the adaptive LASSO estimator and produces more accurate inference compared with the sandwich formula. The simulation results also indicate the possibility of extending the applications to the adaptive elastic-net penalty. We further demonstrate our method using data from a phase III clinical trial in prostate cancer.
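
    The general perturbation-resampling idea (random weights applied to the minimand, refitting, and empirical quantiles of the refitted coefficients) can be sketched as below. This is not the authors' adaptive-LASSO algorithm; it assumes lifelines' built-in L1-penalized Cox model and its bundled example data as stand-ins.

    ```python
    # Conceptual sketch of perturbation-resampling intervals for a penalized Cox
    # fit: Exp(1) weights perturb the objective, the model is refitted, and the
    # empirical 2.5/97.5 percentiles of the coefficients form interval estimates.
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter
    from lifelines.datasets import load_rossi

    df = load_rossi()                       # example survival data shipped with lifelines
    B = 200
    draws = []
    for b in range(B):
        w = np.random.default_rng(b).exponential(1.0, size=len(df))
        dfb = df.assign(_w=w)
        cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)   # L1-penalized stand-in
        cph.fit(dfb, duration_col='week', event_col='arrest', weights_col='_w')
        draws.append(cph.params_.values)

    lo, hi = np.percentile(np.vstack(draws), [2.5, 97.5], axis=0)
    print(pd.DataFrame({'2.5%': lo, '97.5%': hi}, index=cph.params_.index))
    ```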

  6. Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials.

    PubMed

    Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A; Burgueño, Juan; Bandeira E Sousa, Massaine; Crossa, José

    2018-03-28

    In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe) where the random genetic effects of the lines are modeled with the markers (or pedigree). With the objective of further modeling the genetic residual of the lines, we incorporated the random intercepts of the lines ([Formula: see text]) and generated another three models. Each of these 6 models were fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian Kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interactions models with unstructured variance-covariances (MUC) using GB and GK kernels (4 model-method). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that includes some negative and close to zero phenotypic correlations among environments. The two models (MDs and MDE with the random intercept of the lines and the GK method) were computationally efficient and gave high prediction accuracy in the two maize data sets. Regarding the more complex G×E wheat data sets, the prediction accuracy of the model-method combination with G×E, MDs and MDe, including the random intercepts of the lines with GK method had important savings in computing time as compared with the G×E interaction multi-environment models with unstructured variance-covariances but with lower genomic prediction accuracy. Copyright © 2018 Cuevas et al.

  7. Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials

    PubMed Central

    Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A.; Burgueño, Juan; Bandeira e Sousa, Massaine; Crossa, José

    2018-01-01

    In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe) where the random genetic effects of the lines are modeled with the markers (or pedigree). With the objective of further modeling the genetic residual of the lines, we incorporated the random intercepts of the lines (l) and generated another three models. Each of these 6 models were fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian Kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interactions models with unstructured variance-covariances (MUC) using GB and GK kernels (4 model-method). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that includes some negative and close to zero phenotypic correlations among environments. The two models (MDs and MDE with the random intercept of the lines and the GK method) were computationally efficient and gave high prediction accuracy in the two maize data sets. Regarding the more complex G×E wheat data sets, the prediction accuracy of the model-method combination with G×E, MDs and MDe, including the random intercepts of the lines with GK method had important savings in computing time as compared with the G×E interaction multi-environment models with unstructured variance-covariances but with lower genomic prediction accuracy. PMID:29476023
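
    For orientation, the two kernels compared above can be built from a marker matrix as in the sketch below. The bandwidth convention (dividing squared distances by their median) is an assumption for illustration; the study's exact scaling may differ.

    ```python
    # Minimal sketch: linear (GB, GBLUP-type) and Gaussian (GK) kernels from a
    # centered marker matrix X (lines x markers), using toy 0/1/2 genotypes.
    import numpy as np
    from scipy.spatial.distance import pdist, squareform

    rng = np.random.default_rng(0)
    X = rng.integers(0, 3, size=(50, 500)).astype(float)   # toy genotypes
    X -= X.mean(axis=0)                                     # center columns

    GB = X @ X.T / X.shape[1]                               # linear kernel

    D2 = squareform(pdist(X, 'sqeuclidean'))                # squared distances
    h = 1.0                                                 # bandwidth parameter
    GK = np.exp(-h * D2 / np.median(D2[D2 > 0]))            # Gaussian kernel

    print(GB.shape, GK.shape)
    ```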

  8. Modelling heterogeneity variances in multiple treatment comparison meta-analysis – Are informative priors the better solution?

    PubMed Central

    2013-01-01

    Background Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variances for all involved treatment comparisons are equal (i.e., the ‘common variance’ assumption). This approach ‘borrows strength’ for heterogeneity estimation across treatment comparisons, and thus, adds valuable precision when data are sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently, 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. Methods In this paper we describe four novel approaches to modeling heterogeneity variance - two novel model structures, and two approaches for use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. Results In both data sets, use of moderately informative variance priors constructed from the pairwise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. Lastly, treatment rankings were similar among the novel approaches, but considerably different when compared with the homogeneous variance approach. Conclusions MTC models using a homogeneous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice. PMID:23311298
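
    A moderately informative heterogeneity prior can be anchored on a pairwise-meta-analysis estimate of between-trial variance. The sketch below computes a DerSimonian-Laird estimate from toy pairwise data; it is illustrative only and not the paper's exact prior construction.

    ```python
    # Hedged sketch: DerSimonian-Laird between-trial variance from pairwise data,
    # which could anchor a moderately informative heterogeneity prior for an MTC.
    import numpy as np

    def dersimonian_laird_tau2(effects, variances):
        w = 1.0 / np.asarray(variances)
        theta = np.asarray(effects)
        mu_fixed = np.sum(w * theta) / np.sum(w)
        Q = np.sum(w * (theta - mu_fixed) ** 2)
        k = len(theta)
        denom = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        return max(0.0, (Q - (k - 1)) / denom)

    # toy pairwise log-odds-ratios and their variances
    tau2 = dersimonian_laird_tau2([0.2, -0.1, 0.4, 0.05], [0.04, 0.09, 0.06, 0.05])
    print(tau2)   # could inform, e.g., the scale of a half-normal prior on tau
    ```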

  9. TESTING HOMOGENEITY WITH GALAXY STAR FORMATION HISTORIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoyle, Ben; Jimenez, Raul; Tojeiro, Rita

    2013-01-01

    Observationally confirming spatial homogeneity on sufficiently large cosmological scales is of importance to test one of the underpinning assumptions of cosmology, and is also imperative for correctly interpreting dark energy. A challenging aspect of this is that homogeneity must be probed inside our past light cone, while observations take place on the light cone. The star formation history (SFH) in the galaxy fossil record provides a novel way to do this. We calculate the SFH of stacked luminous red galaxy (LRG) spectra obtained from the Sloan Digital Sky Survey. We divide the LRG sample into 12 equal-area contiguous sky patches and 10 redshift slices (0.2 < z < 0.5), which correspond to 120 blocks of volume ≈0.04 Gpc³. Using the SFH in a time period that samples the history of the universe between look-back times 11.5 and 13.4 Gyr as a proxy for homogeneity, we calculate the posterior distribution for the excess large-scale variance due to inhomogeneity, and find that the most likely solution is no extra variance at all. At 95% credibility, there is no evidence of deviations larger than 5.8%.
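
    The "posterior distribution for the excess large-scale variance" can be illustrated with a simple grid posterior, assuming Gaussian block measurements with known per-block errors and a common mean fixed at the inverse-variance-weighted average. The numbers below are toy values, not the paper's likelihood or data.

    ```python
    # Hedged sketch: grid posterior for an extra (excess) variance term across
    # blocks, each giving a measurement x_i with known error s_i.
    import numpy as np

    x = np.array([1.02, 0.98, 1.01, 0.97, 1.03, 1.00])   # block SFH summary values
    s = np.full_like(x, 0.02)                            # per-block measurement errors

    grid = np.linspace(0.0, 0.1, 400)                    # candidate extra std. dev.
    logpost = []
    for sig in grid:
        var = s**2 + sig**2
        mu = np.sum(x / var) / np.sum(1.0 / var)
        logpost.append(-0.5 * np.sum((x - mu) ** 2 / var + np.log(var)))
    logpost = np.array(logpost)
    post = np.exp(logpost - logpost.max())
    post /= np.trapz(post, grid)
    print(grid[np.argmax(post)])                         # most likely extra scatter
    ```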

  10. Influential input classification in probabilistic multimedia models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maddalena, Randy L.; McKone, Thomas E.; Hsieh, Dennis P.H.

    1999-05-01

    Monte Carlo analysis is a statistical simulation method that is often used to assess and quantify the outcome variance in complex environmental fate and effects models. Total outcome variance of these models is a function of (1) the uncertainty and/or variability associated with each model input and (2) the sensitivity of the model outcome to changes in the inputs. To propagate variance through a model using Monte Carlo techniques, each variable must be assigned a probability distribution. The validity of these distributions directly influences the accuracy and reliability of the model outcome. To efficiently allocate resources for constructing distributions one should first identify the most influential set of variables in the model. Although existing sensitivity and uncertainty analysis methods can provide a relative ranking of the importance of model inputs, they fail to identify the minimum set of stochastic inputs necessary to sufficiently characterize the outcome variance. In this paper, we describe and demonstrate a novel sensitivity/uncertainty analysis method for assessing the importance of each variable in a multimedia environmental fate model. Our analyses show that for a given scenario, a relatively small number of input variables influence the central tendency of the model and an even smaller set determines the shape of the outcome distribution. For each input, the level of influence depends on the scenario under consideration. This information is useful for developing site specific models and improving our understanding of the processes that have the greatest influence on the variance in outcomes from multimedia models.
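
    A generic Monte Carlo influence ranking (not the paper's specific classification scheme) can be sketched as follows: sample the inputs, run a toy fate model, and rank inputs by their squared rank correlation with the output. All distributions and the model equation here are invented for illustration.

    ```python
    # Generic sketch: Monte Carlo sampling of inputs to a toy multimedia-style
    # model, ranking inputs by squared Spearman correlation with the output.
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    n = 10_000
    inputs = {
        "emission_rate": rng.lognormal(0.0, 1.0, n),
        "partition_coeff": rng.lognormal(1.0, 0.3, n),
        "degradation_rate": rng.lognormal(-1.0, 0.8, n),
    }

    # Toy fate model: concentration ~ emission / (degradation * partitioning)
    output = inputs["emission_rate"] / (inputs["degradation_rate"] * inputs["partition_coeff"])

    influence = {k: spearmanr(v, output)[0] ** 2 for k, v in inputs.items()}
    for name, score in sorted(influence.items(), key=lambda kv: -kv[1]):
        print(f"{name:18s} {score:.2f}")
    ```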

  11. Non-Invasive Methods for Iron Concentration Assessment

    NASA Astrophysics Data System (ADS)

    Carneiro, Antonio A. O.; Baffa, Oswaldo; Angulo, Ivan L.; Covas, Dimas T.

    2002-08-01

    Iron excess is commonly observed in patients with transfusional iron overload. The iron chelation therapy in these patients requires accurate determination of the magnitude of iron excess. The most promising method for noninvasive assessment of iron stores is based on measurements of hepatic magnetic susceptibility.

  12. Selecting band combinations with thematic mapper data

    NASA Technical Reports Server (NTRS)

    Sheffield, C. A.

    1983-01-01

    A problem arises in making color composite images because there are 210 different possible color presentations of TM three-band images. A method is given for reducing that 210 to a single choice, decided by the statistics of a scene or subscene, and taking into full account any correlations that exist between different bands. Instead of using total variance as the measure for information content of the band triplets, the ellipsoid of maximum volume is selected which discourages selection of bands with high correlation. The band triplet is obtained by computing and ranking in order the determinants of each 3 x 3 principal submatrix of the original matrix M. After selection of the best triplet, the assignment of colors is made by using the actual variances (the diagonal elements of M): green (maximum variance), red (second largest variance), blue (smallest variance).
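
    The determinant-ranking step described above is compact enough to sketch directly: build the band covariance matrix M, evaluate the determinant of every 3 x 3 principal submatrix, pick the triplet maximizing it, and assign colors by decreasing variance. The data here are random toy bands, not TM imagery.

    ```python
    # Minimal sketch of determinant-based triplet selection from the band
    # covariance matrix M, with green/red/blue assigned by decreasing variance.
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(0)
    bands = rng.normal(size=(7, 10_000))           # toy data: 7 TM bands x pixels
    M = np.cov(bands)

    best = max(combinations(range(7), 3),
               key=lambda idx: np.linalg.det(M[np.ix_(idx, idx)]))
    order = sorted(best, key=lambda i: -M[i, i])   # decreasing variance
    colors = dict(zip(["green", "red", "blue"], order))
    print(colors)
    ```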

  13. Unsupervised background-constrained tank segmentation of infrared images in complex background based on the Otsu method.

    PubMed

    Zhou, Yulong; Gao, Min; Fang, Dan; Zhang, Baoquan

    2016-01-01

    In an effort to implement fast and effective tank segmentation from infrared images in complex background, the threshold of the maximum between-class variance method (i.e., the Otsu method) is analyzed and the working mechanism of the Otsu method is discussed. Subsequently, a fast and effective method for tank segmentation from infrared images in complex background is proposed based on the Otsu method via constraining the complex background of the image. Considering the complexity of background, the original image is firstly divided into three classes of target region, middle background and lower background via maximizing the sum of their between-class variances. Then, the unsupervised background constraint is implemented based on the within-class variance of the target region and hence the original image can be simplified. Finally, the Otsu method is applied to the simplified image for threshold selection. Experimental results on a variety of tank infrared images (880 × 480 pixels) in complex background demonstrate that the proposed method achieves better segmentation performance and is even comparable with manual segmentation. In addition, its average running time is only 9.22 ms, implying that the new method performs well in real-time processing.
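
    The underlying two-class Otsu step (maximizing between-class variance over gray levels) is sketched below on a random toy image; the paper's three-class, background-constrained extension is not reproduced here.

    ```python
    # Sketch of the standard Otsu threshold: pick the gray level that maximizes
    # the between-class variance of a 256-level histogram.
    import numpy as np

    def otsu_threshold(image: np.ndarray) -> int:
        hist = np.bincount(image.ravel(), minlength=256).astype(float)
        p = hist / hist.sum()
        omega = np.cumsum(p)                      # class-0 probability up to each level
        mu = np.cumsum(p * np.arange(256))        # cumulative mean
        mu_t = mu[-1]
        with np.errstate(divide="ignore", invalid="ignore"):
            sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
        sigma_b2 = np.nan_to_num(sigma_b2)
        return int(np.argmax(sigma_b2))           # level with max between-class variance

    img = np.random.default_rng(0).integers(0, 256, size=(64, 64)).astype(np.uint8)
    print(otsu_threshold(img))
    ```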

  14. Trends in Elevated Triglyceride in Adults: United States, 2001-2012

    MedlinePlus

    All variance estimates accounted for the complex survey design of the National Health and Nutrition Examination Survey using Taylor series linearization.

  15. Testing Interaction Effects without Discarding Variance.

    ERIC Educational Resources Information Center

    Lopez, Kay A.

    Analysis of variance (ANOVA) and multiple regression are two of the most commonly used methods of data analysis in behavioral science research. Although ANOVA was intended for use with experimental designs, educational researchers have used ANOVA extensively in aptitude-treatment interaction (ATI) research. This practice tends to make researchers…

  16. Dominance, Information, and Hierarchical Scaling of Variance Space.

    ERIC Educational Resources Information Center

    Ceurvorst, Robert W.; Krus, David J.

    1979-01-01

    A method for computation of dominance relations and for construction of their corresponding hierarchical structures is presented. The link between dominance and variance allows integration of the mathematical theory of information with least squares statistical procedures without recourse to logarithmic transformations of the data. (Author/CTM)

  17. Validating genomic reliabilities and gains from phenotypic updates

    USDA-ARS?s Scientific Manuscript database

    Reliability can be validated from the variance of the difference of earlier and later estimated breeding values as a fraction of the genetic variance. This new method avoids using squared correlations that can be biased downward by selection. Published genomic reliabilities of U.S. young bulls agree...

  18. Estimating Variances of Horizontal Wind Fluctuations in Stable Conditions

    NASA Astrophysics Data System (ADS)

    Luhar, Ashok K.

    2010-05-01

    Information concerning the average wind speed and the variances of lateral and longitudinal wind velocity fluctuations is required by dispersion models to characterise turbulence in the atmospheric boundary layer. When the winds are weak, the scalar average wind speed and the vector average wind speed need to be clearly distinguished and both lateral and longitudinal wind velocity fluctuations assume equal importance in dispersion calculations. We examine commonly-used methods of estimating these variances from wind-speed and wind-direction statistics measured separately, for example, by a cup anemometer and a wind vane, and evaluate the implied relationship between the scalar and vector wind speeds, using measurements taken under low-wind stable conditions. We highlight several inconsistencies inherent in the existing formulations and show that the widely-used assumption that the lateral velocity variance is equal to the longitudinal velocity variance is not necessarily true. We derive improved relations for the two variances, and although data under stable stratification are considered for comparison, our analysis is applicable more generally.
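
    The scalar/vector wind-speed distinction and the rotation into longitudinal and lateral components can be sketched from raw samples as below. This is not the paper's improved formulae for separately measured speed and direction statistics; the toy series is simulated.

    ```python
    # Sketch from raw samples: scalar vs vector mean wind speed, and
    # longitudinal/lateral variances after rotating into the mean wind direction.
    import numpy as np

    rng = np.random.default_rng(0)
    speed = np.abs(rng.normal(1.0, 0.5, 5000))              # weak-wind toy series
    theta = rng.normal(0.0, 1.0, 5000)                      # wind direction (radians)

    u, v = speed * np.cos(theta), speed * np.sin(theta)
    scalar_mean = speed.mean()
    vector_mean = np.hypot(u.mean(), v.mean())              # <= scalar mean by construction

    align = np.arctan2(v.mean(), u.mean())                  # mean wind direction
    ul = u * np.cos(align) + v * np.sin(align)              # longitudinal component
    vl = -u * np.sin(align) + v * np.cos(align)             # lateral component
    print(scalar_mean, vector_mean, ul.var(), vl.var())
    ```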

  19. Mixed model approaches for diallel analysis based on a bio-model.

    PubMed

    Zhu, J; Weir, B S

    1996-12-01

    A MINQUE(1) procedure, which is minimum norm quadratic unbiased estimation (MINQUE) method with 1 for all the prior values, is suggested for estimating variance and covariance components in a bio-model for diallel crosses. Unbiasedness and efficiency of estimation were compared for MINQUE(1), restricted maximum likelihood (REML) and MINQUE theta which has parameter values for the prior values. MINQUE(1) is almost as efficient as MINQUE theta for unbiased estimation of genetic variance and covariance components. The bio-model is efficient and robust for estimating variance and covariance components for maternal and paternal effects as well as for nuclear effects. A procedure of adjusted unbiased prediction (AUP) is proposed for predicting random genetic effects in the bio-model. The jack-knife procedure is suggested for estimation of sampling variances of estimated variance and covariance components and of predicted genetic effects. Worked examples are given for estimation of variance and covariance components and for prediction of genetic merits.

  20. Clinical excellence: evidence on the assessment of senior doctors' applications to the UK Advisory Committee on Clinical Excellence Awards. Analysis of complete national data set

    PubMed Central

    Campbell, John L; Abel, Gary

    2016-01-01

    Objectives To inform the rational deployment of assessor resource in the evaluation of applications to the UK Advisory Committee on Clinical Excellence Awards (ACCEA). Setting ACCEA are responsible for a scheme to financially reward senior doctors in England and Wales who are assessed to be working over and above the standard expected of their role. Participants Anonymised applications of consultants and senior academic GPs for awards were considered by members of 14 regional subcommittees and 2 national assessing committees during the 2014–2015 round of applications. Design It involved secondary analysis of complete anonymised national data set. Primary and secondary outcome measures We analysed scores for each of 1916 applications for a clinical excellence award across 4 levels of award. Scores were provided by members of 16 subcommittees. We assessed the reliability of assessments and described the variance in the assessment of scores. Results Members of regional subcommittees assessed 1529 new applications and 387 renewal applications. Average scores increased with the level of application being made. On average, applications were assessed by 9.5 assessors. The highest contributions to the variance in individual assessors' assessments of applications were attributable to assessors or to residual variance. The applicant accounted for around a quarter of the variance in scores for new bronze applications, with this proportion decreasing for higher award levels. Reliability in excess of 0.7 can be attained where 4 assessors score bronze applications, with twice as many assessors being required for higher levels of application. Conclusions Assessment processes pertaining in the competitive allocation of public funds need to be credible and efficient. The present arrangements for assessing and scoring applications are defensible, depending on the level of reliability judged to be required in the assessment process. Some relatively minor reconfiguration in approaches to scoring might usefully be considered in future rounds of assessment. PMID:27256095
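
    The link between assessor numbers and reliability follows a generalizability-type formula: the reliability of a mean over n assessors is the applicant variance divided by the applicant variance plus the error variance over n. The variance shares below are made-up illustrations, not the paper's estimates.

    ```python
    # Illustrative sketch (made-up variance shares): reliability of a mean score
    # over n assessors, in the spirit of the variance decomposition above.
    def reliability(var_applicant: float, var_error: float, n_assessors: int) -> float:
        # Spearman-Brown / generalizability-style reliability of a mean of n scores
        return var_applicant / (var_applicant + var_error / n_assessors)

    for n in (1, 4, 8, 12):
        print(n, round(reliability(0.25, 0.75, n), 2))
    # How quickly reliability rises with n depends strongly on the applicant share
    # of the total variance, which the paper estimates from the scoring data.
    ```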

  1. Burnout contagion among intensive care nurses.

    PubMed

    Bakker, Arnold B; Le Blanc, Pascale M; Schaufeli, Wilmar B

    2005-08-01

    This paper reports a study investigating whether burnout is contagious. Burnout has been recognized as a problem in intensive care units for a long time. Previous research has focused primarily on its organizational antecedents, such as excessive workload or high patient care demands, time pressure and intensive use of sophisticated technology. The present study took a totally different perspective by hypothesizing that--in intensive care units--burnout is communicated from one nurse to another. A questionnaire on work and well-being was completed by 1849 intensive care unit nurses working in one of 80 intensive care units in 12 different European countries in 1994. The results are being reported now because they formed part of a larger study that was only finally analysed recently. The questionnaire was translated from English to the language of each of these countries, and then back-translated to English. Respondents indicated the prevalence of burnout among their colleagues, and completed scales to assess working conditions and job burnout. Analysis of variance indicated that the between-unit variance on a measure of perceived burnout complaints among colleagues was statistically significant and substantially larger than the within-unit variance. This implies that there is considerable agreement (consensus) within intensive care units regarding the prevalence of burnout. In addition, the results of multilevel analyses showed that burnout complaints among colleagues in intensive care units made a statistically significant and unique contribution to explaining variance in individual nurses' and whole units' experiences of burnout, i.e. emotional exhaustion, depersonalization and reduced personal accomplishment. Moreover, for both emotional exhaustion and depersonalization, perceived burnout complaints among colleagues was the most important predictor of burnout at the individual and unit levels, even after controlling for the impact of well-known organizational stressors as conceptualized in the demand-control model. Burnout is contagious: it may cross over from one nurse to another.

  2. Idiosyncratic risk in the Dow Jones Eurostoxx50 Index

    NASA Astrophysics Data System (ADS)

    Daly, Kevin; Vo, Vinh

    2008-07-01

    Recent evidence by Campbell et al. [J.Y. Campbell, M. Lettau, B.G. Malkiel, Y. Xu, Have individual stocks become more volatile? An empirical exploration of idiosyncratic risk, The Journal of Finance (February) (2001)] shows an increase in firm-level volatility and a decline of the correlation among stock returns in the US. In relation to the Euro-Area stock markets, we find that both aggregate firm-level volatility and average stock market correlation have trended upwards. We estimate a linear model of the market risk-return relationship nested in an EGARCH(1, 1)-M model for conditional second moments. We then show that traditional estimates of the conditional risk-return relationship, which use ex-post excess-returns as the conditioning information set, lead to joint tests of the theoretical model (usually the ICAPM) and of the Efficient Market Hypothesis in its strong form. To overcome this problem we propose alternative measures of expected market risk based on implied volatility extracted from traded option prices and we discuss the conditions under which implied volatility depends solely on expected risk. We then regress market excess-returns on lagged market implied variance computed from implied market volatility to estimate the relationship between expected market excess-returns and expected market risk. We investigate whether, as predicted by the ICAPM, the expected market risk is the main factor in explaining the market risk premium and the latter is independent of aggregate idiosyncratic risk.

  3. A two step Bayesian approach for genomic prediction of breeding values.

    PubMed

    Shariati, Mohammad M; Sørensen, Peter; Janss, Luc

    2012-05-21

    In genomic models that assign an individual variance to each marker, the contribution of one marker to the posterior distribution of the marker variance is only one degree of freedom (df), which introduces many variance parameters with only little information per variance parameter. A better alternative could be to form clusters of markers with similar effects where markers in a cluster have a common variance. Therefore, the influence of each marker group of size p on the posterior distribution of the marker variances will be p df. The simulated data from the 15th QTL-MAS workshop were analyzed such that SNP markers were ranked based on their effects and markers with similar estimated effects were grouped together. In step 1, all markers with minor allele frequency more than 0.01 were included in a SNP-BLUP prediction model. In step 2, markers were ranked based on their estimated variance on the trait in step 1 and each 150 markers were assigned to one group with a common variance. In further analyses, subsets of 1500 and 450 markers with largest effects in step 2 were kept in the prediction model. Grouping markers outperformed SNP-BLUP model in terms of accuracy of predicted breeding values. However, the accuracies of predicted breeding values were lower than Bayesian methods with marker specific variances. Grouping markers is less flexible than allowing each marker to have a specific marker variance but, by grouping, the power to estimate marker variances increases. A prior knowledge of the genetic architecture of the trait is necessary for clustering markers and appropriate prior parameterization.

  4. Input-variable sensitivity assessment for sediment transport relations

    NASA Astrophysics Data System (ADS)

    Fernández, Roberto; Garcia, Marcelo H.

    2017-09-01

    A methodology to assess input-variable sensitivity for sediment transport relations is presented. The Mean Value First Order Second Moment Method (MVFOSM) is applied to two bed load transport equations showing that it may be used to rank all input variables in terms of how their specific variance affects the overall variance of the sediment transport estimation. In sites where data are scarce or nonexistent, the results obtained may be used to (i) determine what variables would have the largest impact when estimating sediment loads in the absence of field observations and (ii) design field campaigns to specifically measure those variables for which a given transport equation is most sensitive; in sites where data are readily available, the results would allow quantifying the effect that the variance associated with each input variable has on the variance of the sediment transport estimates. An application of the method to two transport relations using data from a tropical mountain river in Costa Rica is implemented to exemplify the potential of the method in places where input data are limited. Results are compared against Monte Carlo simulations to assess the reliability of the method and validate its results. For both of the sediment transport relations used in the sensitivity analysis, accurate knowledge of sediment size was found to have more impact on sediment transport predictions than precise knowledge of other input variables such as channel slope and flow discharge.
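
    The MVFOSM idea (first-order propagation of input variances through the transport relation, then ranking inputs by their variance contribution) is sketched below for a generic power-law relation. The equation, parameter values, and variable names are invented for illustration, not the paper's equations or data.

    ```python
    # Minimal MVFOSM sketch: propagate input variances through a toy
    # transport-style relation and rank inputs by their variance contribution.
    import numpy as np

    def model(x):
        slope, discharge, grain_size = x
        return 0.05 * slope**1.5 * discharge**1.2 * grain_size**-0.8

    mean = np.array([0.02, 15.0, 0.004])          # slope, discharge (m3/s), D50 (m)
    std = np.array([0.004, 3.0, 0.002])           # input standard deviations

    grad = np.empty(3)
    for i in range(3):                             # central finite differences
        dx = np.zeros(3)
        dx[i] = 1e-6 * mean[i]
        grad[i] = (model(mean + dx) - model(mean - dx)) / (2 * dx[i])

    contrib = (grad * std) ** 2                    # per-input variance contributions
    total_var = contrib.sum()
    for name, c in zip(["slope", "discharge", "grain_size"], contrib / total_var):
        print(f"{name:12s} {c:.2f}")
    ```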

  5. Dangers in Using Analysis of Covariance Procedures.

    ERIC Educational Resources Information Center

    Campbell, Kathleen T.

    Problems associated with the use of analysis of covariance (ANCOVA) as a statistical control technique are explained. Three problems relate to the use of "OVA" methods (analysis of variance, analysis of covariance, multivariate analysis of variance, and multivariate analysis of covariance) in general. These are: (1) the wasting of information when…

  6. Explaining the Sex Difference in Dyslexia

    ERIC Educational Resources Information Center

    Arnett, Anne B.; Pennington, Bruce F.; Peterson, Robin L.; Willcutt, Erik G.; DeFries, John C.; Olson, Richard K.

    2017-01-01

    Background: Males are diagnosed with dyslexia more frequently than females, even in epidemiological samples. This may be explained by greater variance in males' reading performance. Methods: We expand on previous research by rigorously testing the variance difference theory, and testing for mediation of the sex difference by cognitive correlates.…

  7. Analysis of Developmental Data: Comparison Among Alternative Methods

    ERIC Educational Resources Information Center

    Wilson, Ronald S.

    1975-01-01

    To examine the ability of the correction factor epsilon to counteract statistical bias in univariate analysis, an analysis of variance (adjusted by epsilon) and a multivariate analysis of variance were performed on the same data. The results indicated that univariate analysis is a fully protected design when used with epsilon. (JMB)

  8. Formative Use of Intuitive Analysis of Variance

    ERIC Educational Resources Information Center

    Trumpower, David L.

    2013-01-01

    Students' informal inferential reasoning (IIR) is often inconsistent with the normative logic underlying formal statistical methods such as Analysis of Variance (ANOVA), even after instruction. In two experiments reported here, student's IIR was assessed using an intuitive ANOVA task at the beginning and end of a statistics course. In both…

  9. Naive Analysis of Variance

    ERIC Educational Resources Information Center

    Braun, W. John

    2012-01-01

    The Analysis of Variance is often taught in introductory statistics courses, but it is not clear that students really understand the method. This is because the derivation of the test statistic and p-value requires a relatively sophisticated mathematical background which may not be well-remembered or understood. Thus, the essential concept behind…

  10. A Primer on Multivariate Analysis of Variance (MANOVA) for Behavioral Scientists

    ERIC Educational Resources Information Center

    Warne, Russell T.

    2014-01-01

    Reviews of statistical procedures (e.g., Bangert & Baumberger, 2005; Kieffer, Reese, & Thompson, 2001; Warne, Lazo, Ramos, & Ritter, 2012) show that one of the most common multivariate statistical methods in psychological research is multivariate analysis of variance (MANOVA). However, MANOVA and its associated procedures are often not…

  11. Variance Estimation Using Replication Methods in Structural Equation Modeling with Complex Sample Data

    ERIC Educational Resources Information Center

    Stapleton, Laura M.

    2008-01-01

    This article discusses replication sampling variance estimation techniques that are often applied in analyses using data from complex sampling designs: jackknife repeated replication, balanced repeated replication, and bootstrapping. These techniques are used with traditional analyses such as regression, but are currently not used with structural…
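
    The replication idea behind these estimators can be illustrated with a simple delete-one jackknife for the variance of a mean; a real survey application would delete whole primary sampling units and is not specific to SEM, so this is only a minimal sketch on simulated data.

    ```python
    # Minimal delete-one jackknife sketch for the variance of an estimate (a mean),
    # illustrating the replication idea behind JRR/BRR/bootstrap variance methods.
    import numpy as np

    rng = np.random.default_rng(0)
    y = rng.normal(50, 10, size=200)

    theta_hat = y.mean()
    n = len(y)
    replicates = np.array([np.delete(y, i).mean() for i in range(n)])
    jk_var = (n - 1) / n * np.sum((replicates - replicates.mean()) ** 2)
    print(theta_hat, jk_var, y.var(ddof=1) / n)     # jackknife vs textbook variance
    ```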

  12. 20180312 - Reproducibility and variance of liver effects in subchronic and chronic repeat dose toxicity studies (SOT)

    EPA Science Inventory

    In vivo studies provide reference data to evaluate alternative methods for predicting toxicity. However, the reproducibility and variance of effects observed across multiple in vivo studies is not well understood. The US EPA’s Toxicity Reference Database (ToxRefDB) stores d...

  13. Denoising Medical Images using Calculus of Variations

    PubMed Central

    Kohan, Mahdi Nakhaie; Behnam, Hamid

    2011-01-01

    We propose a method for medical image denoising using calculus of variations and local variance estimation by shaped windows. This method reduces any additive noise and preserves small patterns and edges of images. A pyramid structure-texture decomposition of images is used to separate noise and texture components based on local variance measures. The experimental results show that the proposed method yields visual improvement as well as better SNR, RMSE and PSNR than common medical image denoising methods. Experimental results in denoising a sample Magnetic Resonance image show that SNR, PSNR and RMSE have been improved by 19%, 9% and 21%, respectively. PMID:22606674

  14. Selecting the right statistical model for analysis of insect count data by using information theoretic measures.

    PubMed

    Sileshi, G

    2006-10-01

    Researchers and regulatory agencies often make statistical inferences from insect count data using modelling approaches that assume homogeneous variance. Such models do not allow for formal appraisal of variability which in its different forms is the subject of interest in ecology. Therefore, the objectives of this paper were to (i) compare models suitable for handling variance heterogeneity and (ii) select optimal models to ensure valid statistical inferences from insect count data. The log-normal, standard Poisson, Poisson corrected for overdispersion, zero-inflated Poisson, the negative binomial distribution and zero-inflated negative binomial models were compared using six count datasets on foliage-dwelling insects and five families of soil-dwelling insects. Akaike's and Schwarz Bayesian information criteria were used for comparing the various models. Over 50% of the counts were zeros even in locally abundant species such as Ootheca bennigseni Weise, Mesoplatys ochroptera Stål and Diaecoderus spp. The Poisson model after correction for overdispersion and the standard negative binomial distribution model provided better description of the probability distribution of seven out of the 11 insects than the log-normal, standard Poisson, zero-inflated Poisson or zero-inflated negative binomial models. It is concluded that excess zeros and variance heterogeneity are common data phenomena in insect counts. If not properly modelled, these properties can invalidate the normal distribution assumptions resulting in biased estimation of ecological effects and jeopardizing the integrity of the scientific inferences. Therefore, it is recommended that statistical models appropriate for handling these data properties be selected using objective criteria to ensure efficient statistical inference.
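
    The information-criterion comparison described above can be sketched with statsmodels GLMs on simulated overdispersed counts; zero-inflated variants also exist in statsmodels but are omitted here for brevity, and the data are toy values rather than insect counts.

    ```python
    # Sketch: comparing Poisson and negative binomial fits to overdispersed counts
    # by AIC using statsmodels GLMs.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    nobs = 300
    x = sm.add_constant(rng.normal(size=nobs))
    mu = np.exp(0.5 + 0.8 * x[:, 1])
    counts = rng.negative_binomial(n=2, p=2 / (2 + mu))     # overdispersed counts

    poisson_fit = sm.GLM(counts, x, family=sm.families.Poisson()).fit()
    negbin_fit = sm.GLM(counts, x, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
    print("Poisson AIC:", round(poisson_fit.aic, 1))
    print("NegBin  AIC:", round(negbin_fit.aic, 1))
    ```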

  15. The relationship between addictive use of social media and video games and symptoms of psychiatric disorders: A large-scale cross-sectional study.

    PubMed

    Schou Andreassen, Cecilie; Billieux, Joël; Griffiths, Mark D; Kuss, Daria J; Demetrovics, Zsolt; Mazzoni, Elvis; Pallesen, Ståle

    2016-03-01

    Over the last decade, research into "addictive technological behaviors" has substantially increased. Research has also demonstrated strong associations between addictive use of technology and comorbid psychiatric disorders. In the present study, 23,533 adults (mean age 35.8 years, ranging from 16 to 88 years) participated in an online cross-sectional survey examining whether demographic variables, symptoms of attention-deficit/hyperactivity disorder (ADHD), obsessive-compulsive disorder (OCD), anxiety, and depression could explain variance in addictive use (i.e., compulsive and excessive use associated with negative outcomes) of two types of modern online technologies: social media and video games. Correlations between symptoms of addictive technology use and mental disorder symptoms were all positive and significant, including the weak interrelationship between the two addictive technological behaviors. Age appeared to be inversely related to the addictive use of these technologies. Being male was significantly associated with addictive use of video games, whereas being female was significantly associated with addictive use of social media. Being single was positively related to both addictive social networking and video gaming. Hierarchical regression analyses showed that demographic factors explained between 11 and 12% of the variance in addictive technology use. The mental health variables explained between 7 and 15% of the variance. The study significantly adds to our understanding of mental health symptoms and their role in addictive use of modern technology, and suggests that the concept of Internet use disorder (i.e., "Internet addiction") as a unified construct is not warranted. (c) 2016 APA, all rights reserved).

  16. Long memory in patterns of mobile phone usage

    NASA Astrophysics Data System (ADS)

    Owczarczuk, Marcin

    2012-02-01

    In this article we show that usage of a mobile phone, i.e. daily series of number of calls made by a customer, exhibits long memory. We use a sample of 4502 postpaid users from a Polish mobile operator and study their two-year billing history. We estimate Hurst exponent by nine estimators: aggregated variance method, differencing the variance, absolute values of the aggregated series, Higuchi's method, residuals of regression, the R/S method, periodogram method, modified periodogram method and Whittle estimator. We also analyze empirically relations between estimators. Long memory implies an inertial effect in clients' behavior which may be used by mobile operators to accelerate usage and gain additional profit.
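
    Of the estimators listed, the aggregated variance method is the simplest to sketch: block-average the series at several scales m, note that the variance of the block means scales as m^(2H-2), and read H off a log-log slope. The series below is an i.i.d. toy, not the billing data.

    ```python
    # Sketch of the aggregated variance estimator of the Hurst exponent H.
    import numpy as np

    def hurst_aggvar(x, scales=(1, 2, 4, 8, 16, 32, 64)):
        x = np.asarray(x, dtype=float)
        variances = []
        for m in scales:
            k = len(x) // m
            blocks = x[: k * m].reshape(k, m).mean(axis=1)   # aggregated series
            variances.append(blocks.var())
        slope = np.polyfit(np.log(scales), np.log(variances), 1)[0]
        return 1.0 + slope / 2.0

    calls = np.random.default_rng(0).poisson(5, size=4096)   # i.i.d. toy series
    print(round(hurst_aggvar(calls), 2))                     # ~0.5 when memory is short
    ```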

  17. Mapping carcass and meat quality QTL on Sus Scrofa chromosome 2 in commercial finishing pigs

    PubMed Central

    Heuven, Henri CM; van Wijk, Rik HJ; Dibbits, Bert; van Kampen, Tony A; Knol, Egbert F; Bovenhuis, Henk

    2009-01-01

    Quantitative trait loci (QTL) affecting carcass and meat quality located on SSC2 were identified using variance component methods. A large number of traits involved in meat and carcass quality was analysed in a commercial crossbred population: 1855 pigs sired by 17 boars from a synthetic line, which were homozygous (A/A) for IGF2. Using combined linkage and linkage disequilibrium mapping (LDLA), several QTL significantly affecting loin muscle mass, ham weight and ham muscles (outer ham and knuckle ham) and meat quality traits, such as Minolta-L* and -b*, ultimate pH and Japanese colour score were detected. These results agreed well with previous QTL-studies involving SSC2. Since our study is carried out on crossbreds, different QTL may be segregating in the parental lines. To address this question, we compared models with a single QTL-variance component with models allowing for separate sire and dam QTL-variance components. The same QTL were identified using a single QTL variance component model compared to a model allowing for separate variances with minor differences with respect to QTL location. However, the variance component method made it possible to detect QTL segregating in the paternal line (e.g. HAMB), the maternal lines (e.g. Ham) or in both (e.g. pHu). Combining association and linkage information among haplotypes slightly improved the significance of the QTL compared to an analysis using linkage information only. PMID:19284675

  18. Automated data processing and radioassays.

    PubMed

    Samols, E; Barrows, G H

    1978-04-01

    Radioassays include (1) radioimmunoassays, (2) competitive protein-binding assays based on competition for limited antibody or specific binding protein, (3) immunoradiometric assay, based on competition for excess labeled antibody, and (4) radioreceptor assays. Most mathematical models describing the relationship between labeled ligand binding and unlabeled ligand concentration have been based on the law of mass action or the isotope dilution principle. These models provide useful data reduction programs, but are theoretically unsatisfactory because competitive radioassay usually is not based on classical dilution principles, labeled and unlabeled ligand do not have to be identical, antibodies (or receptors) are frequently heterogeneous, equilibrium usually is not reached, and there is probably steric and cooperative influence on binding. An alternative, more flexible mathematical model based on the probability of binding collisions being restricted by the surface area of reactive divalent sites on antibody and on univalent antigen has been derived. Application of these models to automated data reduction allows standard curves to be fitted by a mathematical expression, and unknown values are calculated from binding data. The virtues and pitfalls of point-to-point data reduction, linear transformations, and curvilinear fitting approaches are presented. A third-order polynomial using the square root of concentration closely approximates the mathematical model based on probability, and in our experience this method provides the most acceptable results with all varieties of radioassays. With this curvilinear system, linear point connection should be used between the zero standard and the beginning of significant dose response, and also towards saturation. The importance of limiting the range of reported automated assay results to that portion of the standard curve that delivers optimal sensitivity is stressed. Published methods for automated data reduction of Scatchard plots for radioreceptor assay are limited by calculation of a single mean K value. The quality of the input data is generally the limiting factor in achieving good precision with automated as with manual data reduction. The major advantages of computerized curve fitting include: (1) handling large amounts of data rapidly and without computational error; (2) providing useful quality-control data; (3) indicating within-batch variance of the test results; (4) providing ongoing quality-control charts and between assay variance.
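
    The curvilinear standard-curve idea described above (a third-order polynomial of response against the square root of concentration, inverted to read off unknowns) can be sketched as below. The standards, responses, and function names are purely illustrative.

    ```python
    # Illustrative sketch: cubic fit of response vs sqrt(concentration), with
    # unknown doses read back by numerical inversion of the fitted curve.
    import numpy as np

    std_conc = np.array([0.0, 1.0, 2.5, 5.0, 10.0, 25.0, 50.0])       # standard doses
    response = np.array([100.0, 92.0, 83.0, 70.0, 55.0, 33.0, 20.0])  # % bound (toy)

    coeffs = np.polyfit(np.sqrt(std_conc), response, deg=3)

    def dose_from_response(r, grid=np.linspace(0, np.sqrt(50), 2000)):
        fitted = np.polyval(coeffs, grid)
        return grid[np.argmin(np.abs(fitted - r))] ** 2     # invert on a fine grid

    print(round(dose_from_response(60.0), 2))               # unknown with 60% bound
    ```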

  19. Comparing estimates of genetic variance across different relationship models.

    PubMed

    Legarra, Andres

    2016-02-01

    Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.
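
    The statistic Dk described above (average self-relationship minus the average of all relationships) is easy to compute for any relationship matrix; the sketch below uses a toy genomic-style matrix and an invented variance component purely for illustration.

    ```python
    # Sketch: Dk = mean(diag(K)) - mean(K), used to refer an estimated genetic
    # variance to a chosen reference population.
    import numpy as np

    def dk(K: np.ndarray) -> float:
        return float(np.mean(np.diag(K)) - np.mean(K))

    rng = np.random.default_rng(0)
    Z = rng.normal(size=(30, 400))
    K = Z @ Z.T / Z.shape[1]              # toy genomic-style relationship matrix

    sigma2_hat = 1.8                      # estimated variance component (toy value)
    print(dk(K), dk(K) * sigma2_hat)      # expected genetic variance in the reference set
    ```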

  20. Hydraulic Conductivity Estimation using Bayesian Model Averaging and Generalized Parameterization

    NASA Astrophysics Data System (ADS)

    Tsai, F. T.; Li, X.

    2006-12-01

    Non-uniqueness in parameterization scheme is an inherent problem in groundwater inverse modeling due to limited data. To cope with the non-uniqueness problem of parameterization, we introduce a Bayesian Model Averaging (BMA) method to integrate a set of selected parameterization methods. The estimation uncertainty in BMA includes the uncertainty in individual parameterization methods as the within-parameterization variance and the uncertainty from using different parameterization methods as the between-parameterization variance. Moreover, the generalized parameterization (GP) method is considered in the geostatistical framework in this study. The GP method aims at increasing the flexibility of parameterization through the combination of a zonation structure and an interpolation method. The use of BMP with GP avoids over-confidence in a single parameterization method. A normalized least-squares estimation (NLSE) is adopted to calculate the posterior probability for each GP. We employee the adjoint state method for the sensitivity analysis on the weighting coefficients in the GP method. The adjoint state method is also applied to the NLSE problem. The proposed methodology is implemented to the Alamitos Barrier Project (ABP) in California, where the spatially distributed hydraulic conductivity is estimated. The optimal weighting coefficients embedded in GP are identified through the maximum likelihood estimation (MLE) where the misfits between the observed and calculated groundwater heads are minimized. The conditional mean and conditional variance of the estimated hydraulic conductivity distribution using BMA are obtained to assess the estimation uncertainty.
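
    The BMA combination used conceptually above decomposes the total uncertainty into a within-parameterization part and a between-parameterization part around the weighted mean. The weights and per-model estimates in this sketch are toy values, not NLSE-based posterior probabilities.

    ```python
    # Sketch of the BMA combination: posterior-weighted mean plus total variance
    # split into within-model and between-model parts.
    import numpy as np

    weights = np.array([0.5, 0.3, 0.2])          # posterior model probabilities
    means = np.array([2.1, 2.6, 1.9])            # per-parameterization estimates (log K)
    variances = np.array([0.04, 0.09, 0.06])     # per-parameterization variances

    bma_mean = np.sum(weights * means)
    within = np.sum(weights * variances)
    between = np.sum(weights * (means - bma_mean) ** 2)
    print(bma_mean, within, between, within + between)
    ```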

  1. Time-on-task decrements in "steer clear" performance of patients with sleep apnea and narcolepsy

    NASA Technical Reports Server (NTRS)

    Findley, L. J.; Suratt, P. M.; Dinges, D. F.

    1999-01-01

    Loss of attention with time-on-task reflects the increasing instability of the waking state during performance in experimentally induced sleepiness. To determine whether patients with disorders of excessive sleepiness also displayed time-on-task decrements indicative of wake state instability, visual sustained attention performance on "Steer Clear," a computerized simple RT driving simulation task, was compared among 31 patients with untreated sleep apnea, 16 patients with narcolepsy, and 14 healthy control subjects. Vigilance decrement functions were generated by analyzing the number of collisions in each of six four-minute periods of Steer Clear task performance in a mixed-model analysis of variance and linear regression equations. As expected, patients had more Steer Clear collisions than control subjects (p=0.006). However, the inter-subject variability in errors among the narcoleptic patients was four-fold that of the apnea patients, and 100-fold that of the control volunteers; the variance in errors among untreated apnea patients was 27 times that of controls. The results of transformed collision data revealed main effects for group (p=0.006), time-on-task (p=0.001), and a significant interaction (p=0.022). Control subjects showed no clear evidence of increasing collision errors with time-on-task (adjusted R2=0.22), while apnea patients showed a trend toward vigilance decrement (adjusted R2=0.42, p=0.097), and narcolepsy patients evidenced a robust linear vigilance decrement (adjusted R2=0.87, p=0.004). The association of disorders of excessive somnolence with escalating time-on-task decrements makes it imperative that assessments of neurobehavioral performance in patients involve task durations and analyses that evaluate the underlying vulnerability of potentially sleepy patients to decrements over time on tasks that require sustained attention and timely responses, both of which are key components of safe driving performance.

  2. Analysis of messy data with heteroscedastic in mean models

    NASA Astrophysics Data System (ADS)

    Trianasari, Nurvita; Sumarni, Cucu

    2016-02-01

    In data analysis, we are often faced with data that do not meet some of the usual assumptions; such data are often called messy data. This problem is a consequence of outliers that bias the estimation. To analyze messy data, there are three approaches: standard analysis, data transformation, and non-standard methods of analysis. Simulations were conducted to compare the performance of three test procedures for comparing means when the model variances are not homogeneous. Data for each simulation scenario were generated 500 times. We then compared the means using three methods: the Welch test, mixed models, and the Welch-r test (the Welch test applied to ranks). Data generation was done in R version 3.1.2. Based on the simulation results, all three methods can be used in both the normal and non-normal (homoscedastic) cases. The three methods work very well on balanced or unbalanced data when there is no violation of the homogeneity-of-variance assumption. For balanced data, the three methods still showed excellent performance despite violation of the homogeneity-of-variance assumption, even when the degree of heterogeneity is high, as shown by power above 90 percent, with the best results for the Welch method (98.4%) and the Welch-r method (97.8%). For unbalanced data, the Welch method performs very well in the case of moderate, positively paired heterogeneity, with 98.2% power. The mixed models method performs very well in the case of high, negatively paired heterogeneity. The Welch-r method works very well in both cases. However, if the level of variance heterogeneity is very high, the power of all methods decreases, especially for the mixed models method. The methods that still work well enough (power above 50%) are the Welch-r method (62.6%) and the Welch method (58.6%) in the case of balanced data. If the data are unbalanced, the Welch-r method works well enough in the cases of highly heterogeneous positively or negatively paired variances, with power of 68.8% and 51%, respectively. The Welch method performs well enough only in the case of highly heterogeneous positively paired variances, with power of 64.8%, while the mixed models method is good in the case of highly heterogeneous negatively paired variances, with 54.6% power. In general, when variances are not homogeneous, the Welch method applied to ranked data (Welch-r) performs better than the other methods.
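
    A sketch of the Welch heteroscedastic one-way test and its rank-based variant (Welch-r) compared above, written from the standard Welch formulas; the simulated groups are illustrative and do not reproduce the paper's scenarios.

    ```python
    import numpy as np
    from scipy import stats

    def welch_anova(groups):
        """Welch's heteroscedastic one-way test; returns (F, df1, df2, p)."""
        k = len(groups)
        n = np.array([len(g) for g in groups], dtype=float)
        m = np.array([np.mean(g) for g in groups])
        v = np.array([np.var(g, ddof=1) for g in groups])
        w = n / v                                   # precision weights
        mw = np.sum(w * m) / np.sum(w)              # weighted grand mean
        A = np.sum(w * (m - mw) ** 2) / (k - 1)
        tmp = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
        B = 1 + 2 * (k - 2) / (k ** 2 - 1) * tmp
        F = A / B
        df1, df2 = k - 1, (k ** 2 - 1) / (3 * tmp)
        return F, df1, df2, stats.f.sf(F, df1, df2)

    def welch_r(groups):
        """Welch test applied to rank-transformed data (Welch-r)."""
        pooled = np.concatenate(groups)
        ranks = stats.rankdata(pooled)
        out, i = [], 0
        for g in groups:
            out.append(ranks[i:i + len(g)])
            i += len(g)
        return welch_anova(out)

    rng = np.random.default_rng(1)
    groups = [rng.normal(0, 1, 20), rng.normal(0, 3, 20), rng.normal(1, 5, 20)]
    print(welch_anova(groups))
    print(welch_r(groups))
    ```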

  3. Variances and uncertainties of the sample laboratory-to-laboratory variance (S(L)2) and standard deviation (S(L)) associated with an interlaboratory study.

    PubMed

    McClure, Foster D; Lee, Jung K

    2012-01-01

    The validation process for an analytical method usually employs an interlaboratory study conducted as a balanced completely randomized model involving a specified number of randomly chosen laboratories, each analyzing a specified number of randomly allocated replicates. For such studies, formulas to obtain approximate unbiased estimates of the variance and uncertainty of the sample laboratory-to-laboratory (lab-to-lab) STD (S(L)) have been developed, primarily to account for the uncertainty of S(L) when an uncertainty budget that includes it must be developed. For the sake of completeness on this topic, formulas to estimate the variance and uncertainty of the sample lab-to-lab variance (S(L)2) were also developed. In some cases, it was necessary to derive the formulas based on an approximate distribution for S(L)2.

  4. Quantifying noise in optical tweezers by allan variance.

    PubMed

    Czerwinski, Fabian; Richardson, Andrew C; Oddershede, Lene B

    2009-07-20

    Much effort is put into minimizing noise in optical tweezers experiments because noise and drift can mask fundamental behaviours of, e.g., single molecule assays. Various initiatives have been taken to reduce or eliminate noise but it has been difficult to quantify their effect. We propose to use Allan variance as a simple and efficient method to quantify noise in optical tweezers setups. We apply the method to determine the optimal measurement time, frequency, and detection scheme, and quantify the effect of acoustic noise in the lab. The method can also be used on-the-fly for determining optimal parameters of running experiments.
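
    A minimal sketch of the (non-overlapping) Allan variance for a uniformly sampled bead-position trace; the synthetic signal and the chosen averaging times are illustrative.

    ```python
    import numpy as np

    def allan_variance(x, fs, taus):
        """Non-overlapping Allan variance of signal x sampled at fs: for each
        averaging time tau, 0.5 * mean squared difference of consecutive interval means."""
        out = []
        for tau in taus:
            m = int(round(tau * fs))                   # samples per averaging interval
            k = len(x) // m
            means = x[:k * m].reshape(k, m).mean(axis=1)
            out.append(0.5 * np.mean(np.diff(means) ** 2))
        return np.array(out)

    fs = 10_000.0                                       # sampling frequency, Hz
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(0)
    pos = rng.normal(0, 1, t.size) + 0.05 * t           # white noise plus slow drift
    taus = np.logspace(-3, 0, 20)
    print(allan_variance(pos, fs, taus)[:3])
    ```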

  5. Excess electrons in methanol clusters: Beyond the one-electron picture

    NASA Astrophysics Data System (ADS)

    Pohl, Gábor; Mones, Letif; Turi, László

    2016-10-01

    We performed a series of comparative quantum chemical calculations on various size negatively charged methanol clusters, (CH3OH)n-. The clusters are examined in their optimized geometries (n = 2-4), and in geometries taken from mixed quantum-classical molecular dynamics simulations at finite temperature (n = 2-128). These latter structures model potential electron binding sites in methanol clusters and in bulk methanol. In particular, we compute the vertical detachment energy (VDE) of an excess electron from increasing size methanol cluster anions using quantum chemical computations at various levels of theory including a one-electron pseudopotential model, several density functional theory (DFT) based methods, MP2 and coupled-cluster CCSD(T) calculations. The results suggest that at least four methanol molecules are needed to bind an excess electron on a hydrogen bonded methanol chain in a dipole bound state. Larger methanol clusters are able to form stronger interactions with an excess electron. The two simulated excess electron binding motifs in methanol clusters, interior and surface states, correlate well with distinct, experimentally found VDE tendencies with size. Interior states in a solvent cavity are stabilized significantly stronger than electron states on cluster surfaces. Although we find that all the examined quantum chemistry methods more or less overestimate the strength of the experimental excess electron stabilization, MP2, LC-BLYP, and BHandHLYP methods with diffuse basis sets provide a significantly better estimate of the VDE than traditional DFT methods (BLYP, B3LYP, X3LYP, PBE0). A comparison to the better performing many electron methods indicates that the examined one-electron pseudopotential can be reasonably used in simulations for systems of larger size.

  6. Excess electrons in methanol clusters: Beyond the one-electron picture.

    PubMed

    Pohl, Gábor; Mones, Letif; Turi, László

    2016-10-28

    We performed a series of comparative quantum chemical calculations on various size negatively charged methanol clusters, (CH3OH)n-. The clusters are examined in their optimized geometries (n = 2-4), and in geometries taken from mixed quantum-classical molecular dynamics simulations at finite temperature (n = 2-128). These latter structures model potential electron binding sites in methanol clusters and in bulk methanol. In particular, we compute the vertical detachment energy (VDE) of an excess electron from increasing size methanol cluster anions using quantum chemical computations at various levels of theory including a one-electron pseudopotential model, several density functional theory (DFT) based methods, MP2 and coupled-cluster CCSD(T) calculations. The results suggest that at least four methanol molecules are needed to bind an excess electron on a hydrogen bonded methanol chain in a dipole bound state. Larger methanol clusters are able to form stronger interactions with an excess electron. The two simulated excess electron binding motifs in methanol clusters, interior and surface states, correlate well with distinct, experimentally found VDE tendencies with size. Interior states in a solvent cavity are stabilized significantly stronger than electron states on cluster surfaces. Although we find that all the examined quantum chemistry methods more or less overestimate the strength of the experimental excess electron stabilization, MP2, LC-BLYP, and BHandHLYP methods with diffuse basis sets provide a significantly better estimate of the VDE than traditional DFT methods (BLYP, B3LYP, X3LYP, PBE0). A comparison to the better performing many electron methods indicates that the examined one-electron pseudopotential can be reasonably used in simulations for systems of larger size.

  7. Annual variation in event-scale precipitation δ2H at Barrow, AK, reflects vapor source region

    NASA Astrophysics Data System (ADS)

    Putman, Annie L.; Feng, Xiahong; Sonder, Leslie J.; Posmentier, Eric S.

    2017-04-01

    In this study, precipitation isotopic variations at Barrow, AK, USA, are linked to conditions at the moisture source region, along the transport path, and at the precipitation site. Seventy precipitation events between January 2009 and March 2013 were analyzed for δ2H and deuterium excess. For each precipitation event, vapor source regions were identified with the hybrid single-particle Lagrangian integrated trajectory (HYSPLIT) air parcel tracking program in back-cast mode. The results show that the vapor source region migrated annually, with the most distal (proximal) and southerly (northerly) vapor source regions occurring during the winter (summer). This may be related to equatorial expansion and poleward contraction of the polar circulation cell and the extent of Arctic sea ice cover. Annual cycles of vapor source region latitude and δ2H in precipitation were in phase; depleted (enriched) δ2H values were associated with winter (summer) and distal (proximal) vapor source regions. Precipitation δ2H responded to variation in vapor source region as reflected by significant correlations of δ2H with the following three parameters: (1) total cooling between lifted condensation level (LCL) and precipitating cloud at Barrow, ΔTcool, (2) meteorological conditions at the evaporation site quantified by 2 m dew point, Td, and (3) whether the vapor transport path crossed the Brooks and/or Alaskan ranges, expressed as a Boolean variable, mtn. These three variables explained 54 % of the variance (p < 0.001) in precipitation δ2H with a sensitivity of -3.51 ± 0.55 ‰ °C-1 (p < 0.001) to ΔTcool, 3.23 ± 0.83 ‰ °C-1 (p < 0.001) to Td, and -32.11 ± 11.04 ‰ (p = 0.0049) depletion when mtn is true. The magnitude of each effect on isotopic composition also varied with vapor source region proximity. For storms with proximal vapor source regions (where ΔTcool < 7 °C), ΔTcool explained 3 % of the variance in δ2H, Td alone accounted for 43 %, while mtn explained 2 %. For storms with distal vapor sources (ΔTcool > 7 °C), ΔTcool explained 22 %, Td explained only 1 %, and mtn explained 18 %. The deuterium excess annual cycle lagged the δ2H cycle by 2-3 months, so the direct correlation between the two variables is weak. Vapor source region relative humidity with respect to the sea surface temperature, hss, explained 34 % of the variance in deuterium excess (-0.395 ± 0.067 ‰ %-1, p < 0.001). The patterns in our data suggest that on an annual scale, isotopic ratios of precipitation at Barrow may respond to changes in the southerly extent of the polar circulation cell, a relationship that may be applicable to interpretation of long-term climate change records like ice cores.
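
    As a sketch of the regression structure described above, the snippet below fits δ2H against ΔTcool, Td, and the mountain-crossing indicator by ordinary least squares; the data are synthetic, generated with coefficients echoing the reported sensitivities, and are not the Barrow observations.

    ```python
    import numpy as np

    # Synthetic stand-ins for the three predictors (not the Barrow data).
    rng = np.random.default_rng(0)
    n = 70
    dT_cool = rng.uniform(0, 15, n)              # cooling between LCL and cloud, deg C
    Td = rng.uniform(-25, 5, n)                  # 2 m dew point at the vapor source, deg C
    mtn = rng.integers(0, 2, n).astype(float)    # Boolean: path crossed mountain ranges
    d2H = -3.5 * dT_cool + 3.2 * Td - 32.0 * mtn - 120.0 + rng.normal(0, 10, n)

    # Ordinary least squares for d2H ~ dT_cool + Td + mtn + intercept.
    X = np.column_stack([dT_cool, Td, mtn, np.ones(n)])
    beta, *_ = np.linalg.lstsq(X, d2H, rcond=None)
    resid = d2H - X @ beta
    r2 = 1 - resid.var() / d2H.var()             # fraction of variance explained
    print(np.round(beta, 2), round(r2, 2))
    ```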

  8. Noise parameter estimation for poisson corrupted images using variance stabilization transforms.

    PubMed

    Jin, Xiaodan; Xu, Zhenyu; Hirakawa, Keigo

    2014-03-01

    Noise is present in all images captured by real-world image sensors. The Poisson distribution is said to model the stochastic nature of the photon arrival process and agrees with the distribution of measured pixel values. We propose a method for estimating unknown noise parameters from Poisson corrupted images using properties of variance stabilization. With a significantly lower computational complexity and improved stability, the proposed estimation technique yields noise parameters that are comparable in accuracy to the state-of-the-art methods.
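
    The abstract does not spell out the transform used, so the snippet below only illustrates the general idea with the classical Anscombe transform on synthetic Poisson counts: after stabilization the variance is approximately constant (near 1) regardless of the underlying intensity.

    ```python
    import numpy as np

    def anscombe(x):
        """Classical Anscombe transform; approximately stabilizes Poisson variance to 1."""
        return 2.0 * np.sqrt(x + 3.0 / 8.0)

    rng = np.random.default_rng(0)
    for lam in (5, 20, 100):
        counts = rng.poisson(lam, size=100_000)
        print(lam,
              round(np.var(counts), 2),            # raw variance grows with intensity
              round(np.var(anscombe(counts)), 3))  # stabilized variance stays near 1
    ```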

  9. Systems biology analysis of drivers underlying hallmarks of cancer cell metabolism

    NASA Astrophysics Data System (ADS)

    Zielinski, Daniel C.; Jamshidi, Neema; Corbett, Austin J.; Bordbar, Aarash; Thomas, Alex; Palsson, Bernhard O.

    2017-01-01

    Malignant transformation is often accompanied by significant metabolic changes. To identify drivers underlying these changes, we calculated metabolic flux states for the NCI60 cell line collection and correlated the variance between metabolic states of these lines with their other properties. The analysis revealed a remarkably consistent structure underlying high flux metabolism. The three primary uptake pathways, glucose, glutamine and serine, are each characterized by three features: (1) metabolite uptake sufficient for the stoichiometric requirement to sustain observed growth, (2) overflow metabolism, which scales with excess nutrient uptake over the basal growth requirement, and (3) redox production, which also scales with nutrient uptake but greatly exceeds the requirement for growth. We discovered that resistance to chemotherapeutic drugs in these lines broadly correlates with the amount of glucose uptake. These results support an interpretation of the Warburg effect and glutamine addiction as features of a growth state that provides resistance to metabolic stress through excess redox and energy production. Furthermore, the observed overflow metabolism may indicate that mitochondrial catabolic capacity is a key constraint setting an upper limit on the possible rate of cofactor production. These results provide a greater context within which the metabolic alterations in cancer can be understood.

  10. Response of mantle transition zone thickness to plume buoyancy flux

    NASA Astrophysics Data System (ADS)

    Das Sharma, S.; Ramesh, D. S.; Li, X.; Yuan, X.; Sreenivas, B.; Kind, R.

    2010-01-01

    The debate concerning thermal plumes in the Earth's mantle, their geophysical detection and depth characterization remains contentious. Available geophysical, petrological and geochemical evidence is at variance regarding the very existence of mantle plumes. Utilizing P-to-S converted seismic waves (P receiver functions) from the 410 and 660 km discontinuities, we investigate disposition of these boundaries beneath a number of prominent hotspot regions. The thickness of the mantle transition zone (MTZ), measured as P660s-P410s differential times (tMTZ), is determined. Our analyses suggest that the MTZ thickness beneath some hotspots correlates with the plume strength. The relationship between tMTZ, in response to the thermal perturbation, and the strength of plumes, as buoyancy flux B, follows a power law. This B-tMTZ behavior provides unprecedented insights into the relation of buoyancy flux and excess temperature at 410-660 km depth below hotspots. We find that the strongest hotspots, which are located in the Pacific, are indeed plumes originating at the MTZ or deeper. According to the detected power law, even the strongest plumes may not shrink the transition zone by significantly more than ~40 km (corresponding to a maximum of 300-400° excess temperature).

  11. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST)

    PubMed Central

    Xu, Chonggang; Gertner, George

    2013-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037

  12. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST).

    PubMed

    Xu, Chonggang; Gertner, George

    2011-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements.
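
    A compact sketch of the classic search-curve FAST estimator for main-effect (first-order) indices; the toy model, the frequency set, and the number of harmonics are illustrative assumptions.

    ```python
    import numpy as np

    def fast_first_order(model, omegas, n=10001, harmonics=4):
        """Classic FAST estimate of first-order sensitivity indices.
        Each input is sampled on the search curve x_i = 0.5 + arcsin(sin(w_i*s))/pi."""
        s = np.linspace(-np.pi, np.pi, n, endpoint=False)
        X = 0.5 + np.arcsin(np.sin(np.outer(omegas, s))) / np.pi   # shape (d, n)
        y = model(X)
        total_var = np.var(y)
        indices = []
        for w in omegas:
            d_i = 0.0
            for p in range(1, harmonics + 1):
                A = np.mean(y * np.cos(p * w * s))   # Fourier coefficients at the
                B = np.mean(y * np.sin(p * w * s))   # harmonics of frequency w
                d_i += 2.0 * (A ** 2 + B ** 2)       # partial variance contribution
            indices.append(d_i / total_var)
        return np.array(indices)

    # Toy additive model on [0, 1]^3; frequencies chosen so low-order harmonics do not overlap.
    model = lambda X: X[0] + 2.0 * X[1] + 0.5 * X[2]
    print(fast_first_order(model, omegas=[11, 21, 29]))
    ```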

  13. Statistical characteristics of excess fiber length in loose tubes of optical cable

    NASA Astrophysics Data System (ADS)

    Andreev, Vladimir A.; Gavryushin, Sergey A.; Popov, Boris V.; Popov, Victor B.; Vazhdaev, Michael A.

    2017-04-01

    This paper presents an analysis of measurements of excess fiber length in the loose tubes of optical cable made during the post-process quality control of finished products. To estimate the numerical characteristics of the excess fiber length, a method for processing the results of direct, multiple, equally accurate measurements was used. The experimental results show that the excess fiber length remains constant for the loose-tube manufacturing technology used.

  14. On the statistical significance of excess events: Remarks of caution and the need for a standard method of calculation

    NASA Technical Reports Server (NTRS)

    Staubert, R.

    1985-01-01

    Methods for calculating the statistical significance of excess events and the interpretation of the formally derived values are discussed. It is argued that a simple formula for a conservative estimate should generally be used in order to provide a common understanding of quoted values.

  15. Trunk and hip biomechanics influence anterior cruciate loading mechanisms in physically active participants.

    PubMed

    Frank, Barnett; Bell, David R; Norcross, Marc F; Blackburn, J Troy; Goerger, Benjamin M; Padua, Darin A

    2013-11-01

    Excessive trunk motion and deficits in neuromuscular control (NMC) of the lumbopelvic hip complex are risk factors for anterior cruciate ligament (ACL) injury. However, the relationship between trunk motion, NMC of the lumbopelvic hip complex, and triplanar knee loads during a sidestep cutting task has not been examined. To determine whether there is an association between multiplanar trunk motion, NMC of the lumbopelvic hip complex, and triplanar knee loads associated with ACL injury during a sidestep cutting task. Descriptive laboratory study. The hip and knee biomechanics and trunk motion of 30 participants (15 male, 15 female) were analyzed during a sidestep cutting task using an optoelectric camera system interfaced to a force plate. Trunk and lower extremity biomechanics were calculated from the kinematic and ground-reaction force data during the first 50% of the stance time during the cutting task. Pearson product moment correlation coefficients were calculated between trunk and lower extremity biomechanics. Multiple linear regression analyses were carried out to determine the amount of variance in triplanar knee loading explained by trunk motion and hip moments. A greater internal knee varus moment (mean, 0.11 ± 0.12 N·m/(kg·m)) was associated with less transverse-plane trunk rotation away from the stance limb (mean, 20.25° ± 4.42°; r = -0.46, P = .011) and a greater internal hip adduction moment (mean, 0.33 ± 0.25 N·m/(kg·m); r = 0.83, P < .05). A greater internal knee external rotation moment (mean, 0.11 ± 0.08 N·m/(kg·m)) was associated with greater forward trunk flexion (mean, 7.62° ± 5.28°; r = 0.42, P = .020) and a greater hip internal rotation moment (mean, 0.15 ± 0.16 N·m/(kg·m); r = 0.59, P = .001). Trunk rotation and hip adduction moment explained 81% (P < .05) of the variance in knee varus moment. Trunk flexion and hip internal rotation moment explained 48% (P < .05) of the variance in knee external rotation moment. Limited trunk rotation displacement toward the new direction of travel and hip adduction moment are associated with an increased internal knee varus moment, while a combined increase in trunk flexion displacement and hip internal rotation moment is associated with a higher internal knee external rotation moment. Prevention interventions for ACL injury should encourage trunk rotation toward the new direction of travel and limit excessive trunk flexion while adjusting frontal- and transverse-plane hip NMC.

  16. Low Dose PET Image Reconstruction with Total Variation Using Alternating Direction Method.

    PubMed

    Yu, Xingjian; Wang, Chenye; Hu, Hongjie; Liu, Huafeng

    2016-01-01

    In this paper, a total variation (TV) minimization strategy is proposed to overcome the problem of sparse spatial resolution and large amounts of noise in low dose positron emission tomography (PET) imaging reconstruction. Two types of objective function were established based on two statistical models of measured PET data, least-square (LS) TV for the Gaussian distribution and Poisson-TV for the Poisson distribution. To efficiently obtain high quality reconstructed images, the alternating direction method (ADM) is used to solve these objective functions. As compared with the iterative shrinkage/thresholding (IST) based algorithms, the proposed ADM can make full use of the TV constraint and its convergence rate is faster. The performance of the proposed approach is validated through comparisons with the expectation-maximization (EM) method using synthetic and experimental biological data. In the comparisons, the results of both LS-TV and Poisson-TV are taken into consideration to find which models are more suitable for PET imaging, in particular low-dose PET. To evaluate the results quantitatively, we computed bias, variance, and the contrast recovery coefficient (CRC) and drew profiles of the reconstructed images produced by the different methods. The results show that both Poisson-TV and LS-TV can provide a high visual quality at a low dose level. The bias and variance of the proposed LS-TV and Poisson-TV methods are 20% to 74% less at all counting levels than those of the EM method. Poisson-TV gives the best performance in terms of high-accuracy reconstruction with the lowest bias and variance as compared to the ground truth (14.3% less bias and 21.9% less variance). In contrast, LS-TV gives the best performance in terms of the high contrast of the reconstruction with the highest CRC.

  17. Low Dose PET Image Reconstruction with Total Variation Using Alternating Direction Method

    PubMed Central

    Yu, Xingjian; Wang, Chenye; Hu, Hongjie; Liu, Huafeng

    2016-01-01

    In this paper, a total variation (TV) minimization strategy is proposed to overcome the problem of sparse spatial resolution and large amounts of noise in low dose positron emission tomography (PET) imaging reconstruction. Two types of objective function were established based on two statistical models of measured PET data, least-square (LS) TV for the Gaussian distribution and Poisson-TV for the Poisson distribution. To efficiently obtain high quality reconstructed images, the alternating direction method (ADM) is used to solve these objective functions. As compared with the iterative shrinkage/thresholding (IST) based algorithms, the proposed ADM can make full use of the TV constraint and its convergence rate is faster. The performance of the proposed approach is validated through comparisons with the expectation-maximization (EM) method using synthetic and experimental biological data. In the comparisons, the results of both LS-TV and Poisson-TV are taken into consideration to find which models are more suitable for PET imaging, in particular low-dose PET. To evaluate the results quantitatively, we computed bias, variance, and the contrast recovery coefficient (CRC) and drew profiles of the reconstructed images produced by the different methods. The results show that both Poisson-TV and LS-TV can provide a high visual quality at a low dose level. The bias and variance of the proposed LS-TV and Poisson-TV methods are 20% to 74% less at all counting levels than those of the EM method. Poisson-TV gives the best performance in terms of high-accuracy reconstruction with the lowest bias and variance as compared to the ground truth (14.3% less bias and 21.9% less variance). In contrast, LS-TV gives the best performance in terms of the high contrast of the reconstruction with the highest CRC. PMID:28005929
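
    The PET forward model is beyond a short example, but the alternating direction idea carries over to a 1-D least-squares TV denoising problem, minimize 0.5*||x - y||^2 + lam*||Dx||_1; the sketch below is an ADMM illustration of an LS-TV objective, not the authors' reconstruction code.

    ```python
    import numpy as np

    def tv_denoise_admm(y, lam=1.0, rho=1.0, iters=200):
        """ADMM for min_x 0.5*||x - y||^2 + lam*||D x||_1, D = first-difference operator."""
        n = len(y)
        D = np.diff(np.eye(n), axis=0)             # (n-1, n) first-difference matrix
        A = np.eye(n) + rho * D.T @ D              # constant x-update system matrix
        z = np.zeros(n - 1)
        u = np.zeros(n - 1)
        soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
        x = y.copy()
        for _ in range(iters):
            x = np.linalg.solve(A, y + rho * D.T @ (z - u))   # quadratic subproblem
            Dx = D @ x
            z = soft(Dx + u, lam / rho)                       # shrinkage on the differences
            u = u + Dx - z                                    # dual update
        return x

    rng = np.random.default_rng(0)
    truth = np.repeat([0.0, 2.0, 1.0, 3.0], 50)               # piecewise-constant signal
    noisy = truth + rng.normal(0, 0.5, truth.size)
    recon = tv_denoise_admm(noisy, lam=2.0)
    print(round(np.mean((noisy - truth) ** 2), 3), round(np.mean((recon - truth) ** 2), 3))
    ```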

  18. Gap-filling methods to impute eddy covariance flux data by preserving variance.

    NASA Astrophysics Data System (ADS)

    Kunwor, S.; Staudhammer, C. L.; Starr, G.; Loescher, H. W.

    2015-12-01

    To represent carbon dynamics, in terms of the exchange of CO2 between the terrestrial ecosystem and the atmosphere, eddy covariance (EC) data have been collected using eddy flux towers at various sites across the globe for more than two decades. However, measurements from EC data are missing for various reasons: precipitation, routine maintenance, or lack of vertical turbulence. In order to have estimates of net ecosystem exchange of carbon dioxide (NEE) with high precision and accuracy, robust gap-filling methods to impute missing data are required. While the methods used so far have provided robust estimates of the mean value of NEE, little attention has been paid to preserving the variance structures embodied by the flux data. Preserving the variance of these data will provide unbiased and precise estimates of NEE over time that mimic natural fluctuations. We used a non-linear regression approach with moving windows of different lengths (15, 30, and 60 days) to estimate non-linear regression parameters for one year of flux data from a longleaf pine site at the Joseph Jones Ecological Research Center. We used the Michaelis-Menten and Van't Hoff functions as our base. We assessed the potential physiological drivers of these parameters with linear models using micrometeorological predictors. We then used a parameter prediction approach to refine the non-linear gap-filling equations based on micrometeorological conditions. This provides us with an opportunity to incorporate additional variables, such as vapor pressure deficit (VPD) and volumetric water content (VWC), into the equations. Our preliminary results indicate that improvements in gap-filling can be gained with a 30-day moving window and additional micrometeorological predictors (as indicated by a lower root mean square error (RMSE) of the predicted values of NEE). Our next steps are to use these parameter predictions from moving windows to gap-fill the data with and without incorporation of potential driver variables of the traditionally used parameters. Then, the predicted values from these methods and from 'traditional' gap-filling methods (using 12 fixed monthly windows) will be compared to assess the extent to which variance is preserved. Further, this method will be applied to impute artificially created gaps to analyze whether variance is preserved.
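
    A sketch of the non-linear regression step, assuming a rectangular-hyperbola (Michaelis-Menten type) light-response form for daytime NEE; the parameter names and synthetic data are illustrative, not the longleaf pine flux record.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def light_response(par, alpha, p_max, r_d):
        """Michaelis-Menten (rectangular hyperbola) daytime NEE model:
        uptake saturates with PAR; r_d represents ecosystem respiration."""
        return -(alpha * par * p_max) / (alpha * par + p_max) + r_d

    rng = np.random.default_rng(0)
    par = rng.uniform(0, 2000, 300)                        # PAR, umol m-2 s-1
    true = light_response(par, alpha=0.05, p_max=25.0, r_d=3.0)
    nee = true + rng.normal(0, 1.0, par.size)              # synthetic half-hourly NEE

    popt, pcov = curve_fit(light_response, par, nee, p0=[0.03, 20.0, 2.0])
    print(np.round(popt, 3))                               # recovered alpha, p_max, r_d

    # Gap-filling: predict NEE for periods where the flux measurement is missing.
    missing_par = np.array([150.0, 800.0, 1600.0])
    print(np.round(light_response(missing_par, *popt), 2))
    ```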

  19. About the dark and bright sides of self-efficacy: workaholism and work engagement.

    PubMed

    Del Líbano, Mario; Llorens, Susana; Salanova, Marisa; Schaufeli, Wilmar B

    2012-07-01

    Taking the Resources-Experiences-Demands Model (RED Model) by Salanova and colleagues as our starting point, we tested how work self-efficacy relates positively to negative (i.e., work overload and work-family conflict) and positive outcomes (i.e., job satisfaction and organizational commitment), through the mediating role of workaholism (health impairment process) and work engagement (motivational process). In a sample of 386 administrative staff from a Spanish University (65% women), Structural Equation Modeling provided full evidence for the research model. In addition, Multivariate Analyses of Variance showed that self-efficacy was only related positively to one of the two dimensions of workaholism, namely, working excessively. Finally, we discuss the theoretical and practical contributions in terms of the RED Model.

  20. Evaluation of the impact of observations on blended sea surface winds in a two-dimensional variational scheme using degrees of freedom

    NASA Astrophysics Data System (ADS)

    Wang, Ting; Xiang, Jie; Fei, Jianfang; Wang, Yi; Liu, Chunxia; Li, Yuanxiang

    2017-12-01

    This paper presents an evaluation of the observational impacts on blended sea surface winds from a two-dimensional variational data assimilation (2D-Var) scheme. We begin by briefly introducing the sensitivity of the analysis with respect to observations in variational data assimilation systems and its relationship with the degrees of freedom for signal (DFS); the DFS concept is then applied to the 2D-Var sea surface wind blending scheme. Two methods, a priori and a posteriori, are used to estimate the DFS of the zonal (u) and meridional (v) components of winds in the 2D-Var blending scheme. The a posteriori method can obtain almost the same results as the a priori method. Because only by-products of the blending scheme are used for the a posteriori method, the computation time is reduced significantly. The magnitude of the DFS is critically related to the observational and background error statistics. Changing the observational and background error variances can affect the DFS value. Because the observation error variances are assumed to be uniform, the observational influence at each observational location is related to the background error variance, and observations located where the background error variances are larger have greater influence. The average observational influence of u and v with respect to the analysis is about 40%, implying that the background influence with respect to the analysis is about 60%.
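
    A toy numerical sketch of the degrees of freedom for signal in a linear analysis, taking DFS = tr(HK) with gain K = B H^T (H B H^T + R)^-1; the matrices below are invented and far smaller than a real 2D-Var blending problem.

    ```python
    import numpy as np

    def dfs(H, B, R):
        """Degrees of freedom for signal, tr(HK), with gain K = B H^T (H B H^T + R)^-1."""
        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
        return np.trace(H @ K)

    n_state, n_obs = 6, 3
    H = np.zeros((n_obs, n_state))
    H[0, 0] = H[1, 2] = H[2, 5] = 1.0     # observe three state components directly
    # Background covariance: exponential correlation with variance 0.8.
    B = 0.8 * np.exp(-0.5 * np.abs(np.subtract.outer(range(n_state), range(n_state))))
    R = 0.4 * np.eye(n_obs)               # uniform observation error variances

    # Value lies between 0 and n_obs; larger B relative to R gives a larger DFS.
    print(round(dfs(H, B, R), 3))
    ```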

  1. Visually scoring hirsutism

    PubMed Central

    Yildiz, Bulent O.; Bolour, Sheila; Woods, Keslie; Moore, April; Azziz, Ricardo

    2010-01-01

    BACKGROUND Hirsutism is the presence of excess body or facial terminal (coarse) hair growth in females in a male-like pattern, affects 5–15% of women, and is an important sign of underlying androgen excess. Different methods are available for the assessment of hair growth in women. METHODS We conducted a literature search and analyzed the published studies that reported methods for the assessment of hair growth. We review the basic physiology of hair growth, the development of methods for visually quantifying hair growth, the comparison of these methods with objective measurements of hair growth, how hirsutism may be defined using a visual scoring method, the influence of race and ethnicity on hirsutism, and the impact of hirsutism in diagnosing androgen excess and polycystic ovary syndrome. RESULTS Objective methods for the assessment of hair growth including photographic evaluations and microscopic measurements are available but these techniques have limitations for clinical use, including a significant degree of complexity and a high cost. Alternatively, methods for visually scoring or quantifying the amount of terminal body and facial hair growth have been in use since the early 1920s; these methods are semi-quantitative at best and subject to significant inter-observer variability. The most common visual method of scoring the extent of body and facial terminal hair growth in use today is based on a modification of the method originally described by Ferriman and Gallwey in 1961 (i.e. the mFG method). CONCLUSION Overall, the mFG scoring method is a useful visual instrument for assessing excess terminal hair growth, and the presence of hirsutism, in women. PMID:19567450

  2. On the multiple imputation variance estimator for control-based and delta-adjusted pattern mixture models.

    PubMed

    Tang, Yongqiang

    2017-12-01

    Control-based pattern mixture models (PMM) and delta-adjusted PMMs are commonly used as sensitivity analyses in clinical trials with non-ignorable dropout. These PMMs assume that the statistical behavior of outcomes varies by pattern in the experimental arm in the imputation procedure, but the imputed data are typically analyzed by a standard method such as the primary analysis model. In the multiple imputation (MI) inference, Rubin's variance estimator is generally biased when the imputation and analysis models are uncongenial. One objective of the article is to quantify the bias of Rubin's variance estimator in the control-based and delta-adjusted PMMs for longitudinal continuous outcomes. These PMMs assume the same observed data distribution as the mixed effects model for repeated measures (MMRM). We derive analytic expressions for the MI treatment effect estimator and the associated Rubin's variance in these PMMs and MMRM as functions of the maximum likelihood estimator from the MMRM analysis and the observed proportion of subjects in each dropout pattern when the number of imputations is infinite. The asymptotic bias is generally small or negligible in the delta-adjusted PMM, but can be sizable in the control-based PMM. This indicates that the inference based on Rubin's rule is approximately valid in the delta-adjusted PMM. A simple variance estimator is proposed to ensure asymptotically valid MI inferences in these PMMs, and compared with the bootstrap variance. The proposed method is illustrated by the analysis of an antidepressant trial, and its performance is further evaluated via a simulation study. © 2017, The International Biometric Society.
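
    For reference, a small sketch of the standard Rubin's-rule combination that the article analyzes, given point estimates and their model-based variances from m completed datasets; the numbers are placeholders.

    ```python
    import numpy as np

    def rubins_rules(estimates, variances):
        """Combine m multiply-imputed estimates: total variance = within + (1 + 1/m)*between."""
        estimates = np.asarray(estimates, dtype=float)
        variances = np.asarray(variances, dtype=float)
        m = len(estimates)
        q_bar = estimates.mean()                 # pooled point estimate
        u_bar = variances.mean()                 # within-imputation variance
        b = estimates.var(ddof=1)                # between-imputation variance
        t = u_bar + (1 + 1 / m) * b              # Rubin's total variance
        return q_bar, t

    est = [1.8, 2.1, 1.9, 2.3, 2.0]              # treatment-effect estimates, 5 imputations
    var = [0.25, 0.27, 0.24, 0.26, 0.25]         # their model-based variances
    print(rubins_rules(est, var))
    ```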

  3. Empirical single sample quantification of bias and variance in Q-ball imaging.

    PubMed

    Hainline, Allison E; Nath, Vishwesh; Parvathaneni, Prasanna; Blaber, Justin A; Schilling, Kurt G; Anderson, Adam W; Kang, Hakmook; Landman, Bennett A

    2018-02-06

    The bias and variance of high angular resolution diffusion imaging methods have not been thoroughly explored in the literature; the simulation extrapolation (SIMEX) and bootstrap techniques may be used to estimate the bias and variance of high angular resolution diffusion imaging metrics. The SIMEX approach is well established in the statistics literature and uses simulation of increasingly noisy data to extrapolate back to a hypothetical case with no noise. The bias of calculated metrics can then be computed by subtracting the SIMEX estimate from the original pointwise measurement. The SIMEX technique has been studied in the context of diffusion imaging to accurately capture the bias in fractional anisotropy measurements in DTI. Herein, we extend the application of SIMEX and bootstrap approaches to characterize bias and variance in metrics obtained from a Q-ball imaging reconstruction of high angular resolution diffusion imaging data. The results demonstrate that SIMEX and bootstrap approaches provide consistent estimates of the bias and variance of generalized fractional anisotropy, respectively. The RMSE for the generalized fractional anisotropy estimates shows a 7% decrease in white matter and an 8% decrease in gray matter when compared with the observed generalized fractional anisotropy estimates. On average, the bootstrap technique results in SD estimates that are approximately 97% of the true variation in white matter, and 86% in gray matter. Both SIMEX and bootstrap methods are flexible, estimate population characteristics based on single scans, and may be extended for bias and variance estimation on a variety of high angular resolution diffusion imaging metrics. © 2018 International Society for Magnetic Resonance in Medicine.
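
    A generic SIMEX sketch in the spirit described above: add increasing amounts of extra noise, track a noise-biased statistic, fit a quadratic in the added-noise level, and extrapolate back to the no-noise case; the statistic and data are illustrative, not a Q-ball metric.

    ```python
    import numpy as np

    def simex(w, sigma, statistic, lambdas=(0.5, 1.0, 1.5, 2.0), n_rep=200, seed=0):
        """Simulation-extrapolation: add extra noise of variance lambda*sigma^2,
        average the statistic at each lambda, fit a quadratic in lambda,
        and extrapolate to lambda = -1 (the hypothetical noise-free case)."""
        rng = np.random.default_rng(seed)
        lams = [0.0] + list(lambdas)
        means = []
        for lam in lams:
            reps = n_rep if lam > 0 else 1
            vals = [statistic(w + np.sqrt(lam) * sigma * rng.standard_normal(w.size))
                    for _ in range(reps)]
            means.append(np.mean(vals))
        coef = np.polyfit(lams, means, deg=2)
        return np.polyval(coef, -1.0)

    rng = np.random.default_rng(1)
    x = rng.normal(1.0, 0.5, 5000)            # "true" quantity
    sigma = 0.4                               # known measurement-noise SD
    w = x + rng.normal(0, sigma, x.size)      # observed, noise-corrupted values

    stat = lambda v: np.mean(np.abs(v))       # a noise-biased statistic (illustrative)
    print(round(stat(x), 4), round(stat(w), 4), round(simex(w, sigma, stat), 4))
    ```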

  4. Automatic Bayes Factors for Testing Equality- and Inequality-Constrained Hypotheses on Variances.

    PubMed

    Böing-Messing, Florian; Mulder, Joris

    2018-05-03

    In comparing characteristics of independent populations, researchers frequently expect a certain structure of the population variances. These expectations can be formulated as hypotheses with equality and/or inequality constraints on the variances. In this article, we consider the Bayes factor for testing such (in)equality-constrained hypotheses on variances. Application of Bayes factors requires specification of a prior under every hypothesis to be tested. However, specifying subjective priors for variances based on prior information is a difficult task. We therefore consider so-called automatic or default Bayes factors. These methods avoid the need for the user to specify priors by using information from the sample data. We present three automatic Bayes factors for testing variances. The first is a Bayes factor with equal priors on all variances, where the priors are specified automatically using a small share of the information in the sample data. The second is the fractional Bayes factor, where a fraction of the likelihood is used for automatic prior specification. The third is an adjustment of the fractional Bayes factor such that the parsimony of inequality-constrained hypotheses is properly taken into account. The Bayes factors are evaluated by investigating different properties such as information consistency and large sample consistency. Based on this evaluation, it is concluded that the adjusted fractional Bayes factor is generally recommendable for testing equality- and inequality-constrained hypotheses on variances.

  5. Influence diagnostics in meta-regression model.

    PubMed

    Shi, Lei; Zuo, ShanShan; Yu, Dalei; Zhou, Xiaohua

    2017-09-01

    This paper studies influence diagnostics in the meta-regression model, including case deletion diagnostics and local influence analysis. We derive the subset deletion formulae for the estimation of the regression coefficients and the heterogeneity variance and obtain the corresponding influence measures. The DerSimonian and Laird estimation and maximum likelihood estimation methods in meta-regression are considered, respectively, to derive the results. Internal and external residuals and leverage measures are defined. Local influence analyses based on the case-weights, response, covariate, and within-variance perturbation schemes are explored. We introduce a method that simultaneously perturbs the responses, covariates, and within-variance to obtain a local influence measure, which has the advantage of allowing the influence magnitudes of influential studies under different perturbations to be compared. An example is used to illustrate the proposed methodology. Copyright © 2017 John Wiley & Sons, Ltd.

  6. Quantitative PET Imaging in Drug Development: Estimation of Target Occupancy.

    PubMed

    Naganawa, Mika; Gallezot, Jean-Dominique; Rossano, Samantha; Carson, Richard E

    2017-12-11

    Positron emission tomography, an imaging tool using radiolabeled tracers in humans and preclinical species, has been widely used in recent years in drug development, particularly in the central nervous system. One important goal of PET in drug development is assessing the occupancy of various molecular targets (e.g., receptors, transporters, enzymes) by exogenous drugs. The current linear mathematical approaches used to determine occupancy using PET imaging experiments are presented. These algorithms use results from multiple regions with different target content in two scans, a baseline (pre-drug) scan and a post-drug scan. New mathematical estimation approaches to determine target occupancy, using maximum likelihood, are presented. A major challenge in these methods is the proper definition of the covariance matrix of the regional binding measures, accounting for different variance of the individual regional measures and their nonzero covariance, factors that have been ignored by conventional methods. The novel methods are compared to standard methods using simulation and real human occupancy data. The simulation data showed the expected reduction in variance and bias using the proper maximum likelihood methods, when the assumptions of the estimation method matched those in simulation. Between-method differences for data from human occupancy studies were less obvious, in part due to small dataset sizes. These maximum likelihood methods form the basis for development of improved PET covariance models, in order to minimize bias and variance in PET occupancy studies.

  7. Sound recovery via intensity variations of speckle pattern pixels selected with variance-based method

    NASA Astrophysics Data System (ADS)

    Zhu, Ge; Yao, Xu-Ri; Qiu, Peng; Mahmood, Waqas; Yu, Wen-Kai; Sun, Zhi-Bin; Zhai, Guang-Jie; Zhao, Qing

    2018-02-01

    In general, sound waves can cause vibration of the objects encountered along their traveling path. If we make a laser beam illuminate the rough surface of an object, it will be scattered into a speckle pattern that vibrates with these sound waves. Here, an efficient variance-based method is proposed to recover the sound information from speckle patterns captured by a high-speed camera. This method allows us to select, from a small region of the speckle patterns, the pixels that have large variances of the gray-value variations over time. The gray-value variations of these pixels are summed together according to a simple model to recover the sound with a high signal-to-noise ratio. Meanwhile, our method significantly simplifies the computation compared with the traditional digital-image-correlation technique. The effectiveness of the proposed method has been verified by applying it to a variety of objects. The experimental results illustrate that the proposed method is robust to the quality of the speckle patterns and requires more than an order of magnitude less time to process the same number of speckle patterns. In our experiment, a sound signal with a duration of 1.876 s is recovered from various objects with a computation time of only 5.38 s.
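
    A sketch of the selection-and-summation idea: compute each pixel's temporal variance over the speckle stack, keep the highest-variance pixels, and sum their mean-removed gray-value traces; the frames are synthetic and the sign-alignment step is an added simplification, not part of the paper's model.

    ```python
    import numpy as np

    def recover_sound(frames, n_pixels=50):
        """frames: (T, H, W) stack of speckle images from a high-speed camera.
        Select the pixels with the largest temporal variance and sum their traces."""
        T = frames.shape[0]
        traces = frames.reshape(T, -1).astype(float)
        traces -= traces.mean(axis=0)                 # gray-value variation over time
        var = traces.var(axis=0)
        idx = np.argsort(var)[-n_pixels:]             # highest-variance pixels
        sel = traces[:, idx]
        # Simple sign alignment (assumption): flip traces anti-correlated with the first one.
        signs = np.sign(sel.T @ sel[:, 0])
        signal = (sel * signs).sum(axis=1)
        return signal / np.max(np.abs(signal))        # normalized recovered waveform

    # Synthetic example: a 1 kHz tone modulating random speckle intensities.
    rng = np.random.default_rng(0)
    fs, T, H, W = 20_000, 2000, 32, 32
    tone = np.sin(2 * np.pi * 1000 * np.arange(T) / fs)
    gain = rng.normal(0, 1, (H, W))                   # per-pixel sensitivity to the vibration
    frames = 100 + 5 * tone[:, None, None] * gain + rng.normal(0, 1, (T, H, W))
    print(recover_sound(frames)[:5])
    ```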

  8. A method for estimating peak and time of peak streamflow from excess rainfall for 10- to 640-acre watersheds in the Houston, Texas, metropolitan area

    USGS Publications Warehouse

    Asquith, William H.; Cleveland, Theodore G.; Roussel, Meghan C.

    2011-01-01

    Estimates of peak and time of peak streamflow for small watersheds (less than about 640 acres) in a suburban to urban, low-slope setting are needed for drainage design that is cost-effective and risk-mitigated. During 2007-10, the U.S. Geological Survey (USGS), in cooperation with the Harris County Flood Control District and the Texas Department of Transportation, developed a method to estimate peak and time of peak streamflow from excess rainfall for 10- to 640-acre watersheds in the Houston, Texas, metropolitan area. To develop the method, 24 watersheds in the study area with drainage areas less than about 3.5 square miles (2,240 acres) and with concomitant rainfall and runoff data were selected. The method is based on conjunctive analysis of rainfall and runoff data in the context of the unit hydrograph method and the rational method. For the unit hydrograph analysis, a gamma distribution model of unit hydrograph shape (a gamma unit hydrograph) was chosen and parameters estimated through matching of modeled peak and time of peak streamflow to observed values on a storm-by-storm basis. Watershed mean or watershed-specific values of peak and time to peak ("time to peak" is a parameter of the gamma unit hydrograph and is distinct from "time of peak") of the gamma unit hydrograph were computed. Two regression equations to estimate peak and time to peak of the gamma unit hydrograph that are based on watershed characteristics of drainage area and basin-development factor (BDF) were developed. For the rational method analysis, a lag time (time-R), volumetric runoff coefficient, and runoff coefficient were computed on a storm-by-storm basis. Watershed-specific values of these three metrics were computed. A regression equation to estimate time-R based on drainage area and BDF was developed. Overall arithmetic means of volumetric runoff coefficient (0.41 dimensionless) and runoff coefficient (0.25 dimensionless) for the 24 watersheds were used to express the rational method in terms of excess rainfall (the excess rational method). Both the unit hydrograph method and excess rational method are shown to provide similar estimates of peak and time of peak streamflow. The results from the two methods can be combined by using arithmetic means. A nomograph is provided that shows the respective relations of the arithmetic-mean peak and time of peak streamflow to drainage areas ranging from 10 to 640 acres. The nomograph also shows the respective relations for selected BDF ranging from undeveloped to fully developed conditions. The nomograph represents the peak streamflow for 1 inch of excess rainfall based on drainage area and BDF; the peak streamflow for design storms from the nomograph can be multiplied by the excess rainfall to estimate peak streamflow. Time of peak streamflow is readily obtained from the nomograph. Therefore, given excess rainfall values derived from watershed-loss models, which are beyond the scope of this report, the nomograph represents a method for estimating peak and time of peak streamflow for applicable watersheds in the Houston metropolitan area. Lastly, analysis of the relative influence of BDF on peak streamflow is provided, and the results indicate a 0.04 log10 cubic feet per second change of peak streamflow per positive unit of change in BDF. This relative change can be used to adjust peak streamflow from the method or other hydrologic methods for a given BDF to other BDF values; example computations are provided.
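
    A sketch using one common gamma unit hydrograph parameterization (assumed here, not necessarily the report's exact form), q(t)/qp = [(t/Tp) exp(1 - t/Tp)]^K, together with the linear scaling of peak streamflow by excess rainfall noted above; the parameter values are hypothetical, not outputs of the report's regressions.

    ```python
    import numpy as np

    def gamma_unit_hydrograph(t, qp, tp, k):
        """One common dimensionless gamma unit hydrograph form (assumed):
        q(t)/qp = [(t/tp) * exp(1 - t/tp)]^k, peaking at q = qp when t = tp."""
        r = t / tp
        return qp * (r * np.exp(1.0 - r)) ** k

    # Hypothetical values for a small watershed.
    qp, tp, k = 120.0, 0.75, 3.0           # peak (cfs per inch of excess rain), time to peak (h), shape
    t = np.linspace(0.0, 6.0, 241)
    uh = gamma_unit_hydrograph(t, qp, tp, k)

    excess_rainfall = 1.8                   # inches of excess rainfall for a design storm
    design_peak = qp * excess_rainfall      # peak scales linearly with excess rainfall
    print(round(uh.max(), 1), round(t[np.argmax(uh)], 2), round(design_peak, 1))
    ```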

  9. Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies

    PubMed Central

    Rukhin, Andrew L.

    2011-01-01

    A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when the method variances are considered to be known, an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed. PMID:26989583

  10. Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies.

    PubMed

    Rukhin, Andrew L

    2011-01-01

    A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when the method variances are considered to be known, an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed.

  11. Bayesian Factor Analysis When Only a Sample Covariance Matrix Is Available

    ERIC Educational Resources Information Center

    Hayashi, Kentaro; Arav, Marina

    2006-01-01

    In traditional factor analysis, the variance-covariance matrix or the correlation matrix has often been a form of inputting data. In contrast, in Bayesian factor analysis, the entire data set is typically required to compute the posterior estimates, such as Bayes factor loadings and Bayes unique variances. We propose a simple method for computing…

  12. Heteroscedastic Tests Statistics for One-Way Analysis of Variance: The Trimmed Means and Hall's Transformation Conjunction

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2005-01-01

    To deal with nonnormal and heterogeneous data for the one-way fixed effect analysis of variance model, the authors adopted a trimmed means method in conjunction with Hall's invertible transformation into a heteroscedastic test statistic (Alexander-Govern test or Welch test). The results of simulation experiments showed that the proposed technique…

  13. Cognitive and Linguistic Sources of Variance in 2-Year-Olds' Speech-Sound Discrimination: A Preliminary Investigation

    ERIC Educational Resources Information Center

    Lalonde, Kaylah; Holt, Rachael Frush

    2014-01-01

    Purpose: This preliminary investigation explored potential cognitive and linguistic sources of variance in 2- year-olds' speech-sound discrimination by using the toddler change/no-change procedure and examined whether modifications would result in a procedure that can be used consistently with younger 2-year-olds. Method: Twenty typically…

  14. Teaching Principles of One-Way Analysis of Variance Using M&M's Candy

    ERIC Educational Resources Information Center

    Schwartz, Todd A.

    2013-01-01

    I present an active learning classroom exercise illustrating essential principles of one-way analysis of variance (ANOVA) methods. The exercise is easily conducted by the instructor and is instructive (as well as enjoyable) for the students. This is conducive for demonstrating many theoretical and practical issues related to ANOVA and lends itself…

  15. An Investigation of the Raudenbush (1988) Test for Studying Variance Heterogeneity.

    ERIC Educational Resources Information Center

    Harwell, Michael

    1997-01-01

    The meta-analytic method proposed by S. W. Raudenbush (1988) for studying variance heterogeneity was studied. Results of a Monte Carlo study indicate that the Type I error rate of the test is sensitive to even modestly platykurtic score distributions and to the ratio of study sample size to the number of studies. (SLD)

  16. Comparing the Effectiveness of SPSS and EduG Using Different Designs for Generalizability Theory

    ERIC Educational Resources Information Center

    Teker, Gulsen Tasdelen; Guler, Nese; Uyanik, Gulden Kaya

    2015-01-01

    Generalizability theory (G theory) provides a broad conceptual framework for social sciences such as psychology and education, and a comprehensive construct for numerous measurement events by using analysis of variance, a strong statistical method. G theory, as an extension of both classical test theory and analysis of variance, is a model which…

  17. Missing Data and Multiple Imputation in the Context of Multivariate Analysis of Variance

    ERIC Educational Resources Information Center

    Finch, W. Holmes

    2016-01-01

    Multivariate analysis of variance (MANOVA) is widely used in educational research to compare means on multiple dependent variables across groups. Researchers faced with the problem of missing data often use multiple imputation of values in place of the missing observations. This study compares the performance of 2 methods for combining p values in…

  18. Asymptotic Effect of Misspecification in the Random Part of the Multilevel Model

    ERIC Educational Resources Information Center

    Berkhof, Johannes; Kampen, Jarl Kennard

    2004-01-01

    The authors examine the asymptotic effect of omitting a random coefficient in the multilevel model and derive expressions for the change in (a) the variance components estimator and (b) the estimated variance of the fixed effects estimator. They apply the method of moments, which yields a closed form expression for the omission effect. In…

  19. Bias and Precision of Measures of Association for a Fixed-Effect Multivariate Analysis of Variance Model

    ERIC Educational Resources Information Center

    Kim, Soyoung; Olejnik, Stephen

    2005-01-01

    The sampling distributions of five popular measures of association with and without two bias adjusting methods were examined for the single factor fixed-effects multivariate analysis of variance model. The number of groups, sample sizes, number of outcomes, and the strength of association were manipulated. The results indicate that all five…

  20. Fuel cell stack monitoring and system control

    DOEpatents

    Keskula, Donald H.; Doan, Tien M.; Clingerman, Bruce J.

    2005-01-25

    A control method for monitoring a fuel cell stack in a fuel cell system in which the actual voltage and actual current from the fuel cell stack are monitored. A relationship between voltage and current over the operating range of the fuel cell is preestablished. A variance value between the actual measured voltage and the expected voltage magnitude for a given actual measured current is calculated and compared with a predetermined allowable variance. An output is generated if the calculated variance value exceeds the predetermined variance. The predetermined voltage-current relationship for the fuel cell is represented as a polarization curve at given operating conditions of the fuel cell. Other polarization curves may be generated and used for fuel cell stack monitoring based on different operating pressures, temperatures, and hydrogen quantities.
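
    As a rough illustration of the variance check described above, the sketch below interpolates an expected voltage from a tabulated polarization curve at the measured current and flags the stack when the measured voltage deviates by more than an allowed amount. The curve values, function names and the 4 V threshold are hypothetical, not taken from the patent.

      import numpy as np

      # Hypothetical tabulated polarization curve: stack current (A) vs expected voltage (V).
      curve_current = np.array([0.0, 50.0, 100.0, 150.0, 200.0, 250.0])
      curve_voltage = np.array([95.0, 88.0, 83.0, 78.0, 72.0, 64.0])

      def check_stack(measured_current, measured_voltage, allowed_variance=4.0):
          """Compare the measured stack voltage with the value expected from the
          polarization curve at the same current; flag if the deviation is too large."""
          expected_voltage = np.interp(measured_current, curve_current, curve_voltage)
          variance_value = measured_voltage - expected_voltage
          out_of_range = abs(variance_value) > allowed_variance
          return expected_voltage, variance_value, out_of_range

      exp_v, dev, flag = check_stack(120.0, 76.5)
      print(f"expected {exp_v:.1f} V, deviation {dev:.1f} V, alarm: {flag}")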

  1. Statistical Parameter Study of the Time Interval Distribution for Nonparalyzable, Paralyzable, and Hybrid Dead Time Models

    NASA Astrophysics Data System (ADS)

    Syam, Nur Syamsi; Maeng, Seongjin; Kim, Myo Gwang; Lim, Soo Yeon; Lee, Sang Hoon

    2018-05-01

    A large dead time of a Geiger Mueller (GM) detector may cause a large count loss in radiation measurements and consequently may distort the Poisson statistics of radiation events into a new distribution. The new distribution will have different statistical parameters compared to the original distribution. Therefore, the variance, skewness, and excess kurtosis, in association with the observed count rate, of the time interval distribution for the well-known nonparalyzable, paralyzable, and nonparalyzable-paralyzable hybrid dead time models of a Geiger Mueller detector were studied using Monte Carlo simulation (GMSIM). These parameters were then compared with the statistical parameters of a perfect detector to observe the change in the distribution. The results show that the behaviors of the statistical parameters for the three dead time models are different. The values of the skewness and the excess kurtosis of the nonparalyzable model are equal or very close to those of the perfect detector, which are ≅2 for skewness and ≅6 for excess kurtosis, while the statistical parameters of the paralyzable and hybrid models reach minimum values near the maximum observed count rates. The different trends of the three models resulting from the GMSIM simulation can be used to distinguish the dead time behavior of a GM counter, i.e. whether the GM counter can be described best by the nonparalyzable, paralyzable, or hybrid model. In a future study, these statistical parameters need to be analyzed further to determine whether they can be used to estimate the dead time of each model, particularly for the paralyzable and hybrid models.
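
    The reference values quoted above follow from the fact that, for a perfect detector, Poisson arrivals give exponentially distributed time intervals, whose skewness is 2 and excess kurtosis is 6; a nonparalyzable dead time merely shifts that distribution by the dead time, leaving both parameters unchanged. The short Monte Carlo sketch below (not the GMSIM code used in the study; the rate, dead time and sample size are arbitrary) reproduces this behaviour.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      rate, tau, n = 1.0e4, 1.0e-4, 200_000   # true rate (1/s), dead time (s), number of events

      # Poisson process: arrival times from exponential inter-arrival times.
      arrivals = np.cumsum(rng.exponential(1.0 / rate, size=n))

      def nonparalyzable(times, tau):
          """Keep an event only if it arrives at least tau after the last *recorded* event."""
          recorded = [times[0]]
          for t in times[1:]:
              if t - recorded[-1] >= tau:
                  recorded.append(t)
          return np.asarray(recorded)

      for label, times in [("perfect", arrivals), ("nonparalyzable", nonparalyzable(arrivals, tau))]:
          intervals = np.diff(times)
          print(label,
                "skewness %.2f" % stats.skew(intervals),
                "excess kurtosis %.2f" % stats.kurtosis(intervals))
      # Both cases give skewness ~2 and excess kurtosis ~6, as quoted above for the
      # perfect detector and the nonparalyzable model.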

  2. Watching pornographic pictures on the Internet: role of sexual arousal ratings and psychological-psychiatric symptoms for using Internet sex sites excessively.

    PubMed

    Brand, Matthias; Laier, Christian; Pawlikowski, Mirko; Schächtle, Ulrich; Schöler, Tobias; Altstötter-Gleich, Christine

    2011-06-01

    Excessive or addictive Internet use can be linked to different online activities, such as Internet gaming or cybersex. The usage of Internet pornography sites is one important facet of online sexual activity. The aim of the present work was to examine potential predictors of a tendency toward cybersex addiction in terms of subjective complaints in everyday life due to online sexual activities. We focused on the subjective evaluation of Internet pornographic material with respect to sexual arousal and emotional valence, as well as on psychological symptoms as potential predictors. We examined 89 heterosexual, male participants with an experimental task assessing subjective sexual arousal and emotional valence of Internet pornographic pictures. The Internet Addiction Test (IAT) and a modified version of the IAT for online sexual activities (IATsex), as well as several further questionnaires measuring psychological symptoms and facets of personality were also administered to the participants. Results indicate that self-reported problems in daily life linked to online sexual activities were predicted by subjective sexual arousal ratings of the pornographic material, global severity of psychological symptoms, and the number of sex applications used when being on Internet sex sites in daily life, while the time spent on Internet sex sites (minutes per day) did not significantly contribute to explanation of variance in IATsex score. Personality facets were not significantly correlated with the IATsex score. The study demonstrates the important role of subjective arousal and psychological symptoms as potential correlates of development or maintenance of excessive online sexual activity.

  3. Proportion of general factor variance in a hierarchical multiple-component measuring instrument: a note on a confidence interval estimation procedure.

    PubMed

    Raykov, Tenko; Zinbarg, Richard E

    2011-05-01

    A confidence interval construction procedure for the proportion of explained variance by a hierarchical, general factor in a multi-component measuring instrument is outlined. The method provides point and interval estimates for the proportion of total scale score variance that is accounted for by the general factor, which could be viewed as common to all components. The approach may also be used for testing composite (one-tailed) or simple hypotheses about this proportion, and is illustrated with a pair of examples. ©2010 The British Psychological Society.

  4. Dimensionality and noise in energy selective x-ray imaging

    PubMed Central

    Alvarez, Robert E.

    2013-01-01

    Purpose: To develop and test a method to quantify the effect of dimensionality on the noise in energy selective x-ray imaging. Methods: The Cramér-Rao lower bound (CRLB), a universal lower limit of the covariance of any unbiased estimator, is used to quantify the noise. It is shown that increasing dimensionality always increases the variance or, at best, leaves it unchanged. An analytic formula for the increase in variance in an energy selective x-ray system is derived. The formula is used to gain insight into the dependence of the increase in variance on the properties of the additional basis functions, the measurement noise covariance, and the source spectrum. The formula is also used with computer simulations to quantify the dependence of the additional variance on these factors. Simulated images of an object with three materials are used to demonstrate the trade-off of increased information with dimensionality and noise. The images are computed from energy selective data with a maximum likelihood estimator. Results: The increase in variance depends most importantly on the dimension and on the properties of the additional basis functions. With the attenuation coefficients of cortical bone, soft tissue, and adipose tissue as the basis functions, the increase in variance of the bone component from two to three dimensions is 1.4 × 10³. With the soft tissue component, it is 2.7 × 10⁴. If the attenuation coefficient of a high atomic number contrast agent is used as the third basis function, there is only a slight increase in the variance from two to three basis functions, 1.03 and 7.4 for the bone and soft tissue components, respectively. The changes in spectrum shape with beam hardening also have a substantial effect. They increase the variance by a factor of approximately 200 for the bone component and 220 for the soft tissue component as the soft tissue object thickness increases from 1 to 30 cm. Decreasing the energy resolution of the detectors increases the variance of the bone component markedly with three-dimensional processing, approximately a factor of 25 as the resolution decreases from 100 to 3 bins. The increase with two-dimensional processing for adipose tissue is a factor of two, and with the contrast agent as the third material for two or three dimensions it is also a factor of two for both components. The simulated images show that a maximum likelihood estimator can be used to process energy selective x-ray data to produce images with noise close to the CRLB. Conclusions: The method presented can be used to compute the effects of the object attenuation coefficients and the x-ray system properties on the relationship of dimensionality and noise in energy selective x-ray imaging systems. PMID:24320442

  5. Read-noise characterization of focal plane array detectors via mean-variance analysis.

    PubMed

    Sperline, R P; Knight, A K; Gresham, C A; Koppenaal, D W; Hieftje, G M; Denton, M B

    2005-11-01

    Mean-variance analysis is described as a method for characterization of the read-noise and gain of focal plane array (FPA) detectors, including charge-coupled devices (CCDs), charge-injection devices (CIDs), and complementary metal-oxide-semiconductor (CMOS) multiplexers (infrared arrays). Practical FPA detector characterization is outlined. The nondestructive readout capability available in some CIDs and FPA devices is discussed as a means for signal-to-noise ratio improvement. Derivations of the equations are fully presented to unify understanding of this method by the spectroscopic community.
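
    A generic photon-transfer sketch of the mean-variance idea is given below, assuming the usual model in which the variance of a flat-field signal in digital numbers equals the read-noise variance plus the mean signal divided by the gain; the detector parameters and frame sizes are invented for illustration and are not tied to any of the devices discussed in the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      gain_e_per_dn, read_noise_e = 2.5, 6.0          # assumed "true" detector parameters
      mean_electrons = np.linspace(500, 20000, 12)    # flat-field exposure levels

      means, variances = [], []
      for mu_e in mean_electrons:
          # Photon shot noise (Poisson, in electrons) plus Gaussian read noise, converted to DN.
          frame = (rng.poisson(mu_e, size=100_000) + rng.normal(0, read_noise_e, 100_000)) / gain_e_per_dn
          means.append(frame.mean())
          variances.append(frame.var(ddof=1))

      # Mean-variance relation in DN: variance = read_noise_DN**2 + mean / gain
      slope, intercept = np.polyfit(means, variances, 1)
      print("estimated gain (e-/DN):", 1.0 / slope)
      print("estimated read noise (e-):", np.sqrt(intercept) / slope)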

  6. Statistics of some atmospheric turbulence records relevant to aircraft response calculations

    NASA Technical Reports Server (NTRS)

    Mark, W. D.; Fischer, R. W.

    1981-01-01

    Methods for characterizing atmospheric turbulence are described. The methods illustrated include maximum likelihood estimation of the integral scale and intensity of records obeying the von Karman transverse power spectral form, constrained least-squares estimation of the parameters of a parametric representation of autocorrelation functions, estimation of the power spectral density of the instantaneous variance of a record with temporally fluctuating variance, and estimation of the probability density functions of various turbulence components. Descriptions of the computer programs used in the computations are given, and a full listing of these programs is included.

  7. 31 CFR 359.52 - What happens if any person purchases book-entry Series I savings bonds in excess of the maximum...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the maximum amount? We reserve the right to take any action we deem necessary to adjust the excess, including the right to remove the excess bonds from your New Treasury Direct account and refund the payment price to your bank account of record using the ACH method of payment. ...

  8. Total haemoglobin mass, but not haemoglobin concentration, is associated with preoperative cardiopulmonary exercise testing-derived oxygen-consumption variables.

    PubMed

    Otto, J M; Plumb, J O M; Wakeham, D; Clissold, E; Loughney, L; Schmidt, W; Montgomery, H E; Grocott, M P W; Richards, T

    2017-05-01

    Cardiopulmonary exercise testing (CPET) measures peak exertional oxygen consumption (V̇O2peak) and that at the anaerobic threshold (V̇O2 at AT, i.e. the point at which anaerobic metabolism contributes substantially to overall metabolism). Lower values are associated with excess postoperative morbidity and mortality. A reduced haemoglobin concentration ([Hb]) results from a reduction in total haemoglobin mass (tHb-mass) or an increase in plasma volume. Thus, tHb-mass might be a more useful measure of oxygen-carrying capacity and might correlate better with CPET-derived fitness measures in preoperative patients than does circulating [Hb]. Before major elective surgery, CPET was performed, and both tHb-mass (optimized carbon monoxide rebreathing method) and circulating [Hb] were determined. In 42 patients (83% male), [Hb] was unrelated to V̇O2 at AT and V̇O2peak (r = 0.02, P = 0.89 and r = 0.04, P = 0.80, respectively) and explained none of the variance in either measure. In contrast, tHb-mass was related to both (r = 0.661, P < 0.0001 and r = 0.483, P = 0.001 for V̇O2 at AT and V̇O2peak, respectively). The tHb-mass explained 44% of variance in V̇O2 at AT (P < 0.0001) and 23% in V̇O2peak (P = 0.001). In contrast to [Hb], tHb-mass is an important determinant of physical fitness before major elective surgery. Further studies should determine whether low tHb-mass is predictive of poor outcome and whether targeted increases in tHb-mass might thus improve outcome. © The Author 2017. Published by Oxford University Press on behalf of the British Journal of Anaesthesia. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  9. Strong evidence for a genetic contribution to late-onset Alzheimer's disease mortality: a population-based study.

    PubMed

    Kauwe, John S K; Ridge, Perry G; Foster, Norman L; Cannon-Albright, Lisa A

    2013-01-01

    Alzheimer's disease (AD) is an international health concern that has a devastating effect on patients and families. While several genetic risk factors for AD have been identified much of the genetic variance in AD remains unexplained. There are limited published assessments of the familiality of Alzheimer's disease. Here we present the largest genealogy-based analysis of AD to date. We assessed the familiality of AD in The Utah Population Database (UPDB), a population-based resource linking electronic health data repositories for the state with the computerized genealogy of the Utah settlers and their descendants. We searched UPDB for significant familial clustering of AD to evaluate the genetic contribution to disease. We compared the Genealogical Index of Familiality (GIF) between AD individuals and randomly selected controls and estimated the Relative Risk (RR) for a range of family relationships. Finally, we identified pedigrees with a significant excess of AD deaths. The GIF analysis showed that pairs of individuals dying from AD were significantly more related than expected. This excess of relatedness was observed for both close and distant relationships. RRs for death from AD among relatives of individuals dying from AD were significantly increased for both close and more distant relatives. Multiple pedigrees had a significant excess of AD deaths. These data strongly support a genetic contribution to the observed clustering of individuals dying from AD. This report is the first large population-based assessment of the familiality of AD mortality and provides the only reported estimates of relative risk of AD mortality in extended relatives to date. The high-risk pedigrees identified show a true excess of AD mortality (not just multiple cases) and are greater in depth and width than published AD pedigrees. The presence of these high-risk pedigrees strongly supports the possibility of rare predisposition variants not yet identified.

  10. Comment on Hoffman and Rovine (2007): SPSS MIXED can estimate models with heterogeneous variances.

    PubMed

    Weaver, Bruce; Black, Ryan A

    2015-06-01

    Hoffman and Rovine (Behavior Research Methods, 39:101-117, 2007) have provided a very nice overview of how multilevel models can be useful to experimental psychologists. They included two illustrative examples and provided both SAS and SPSS commands for estimating the models they reported. However, upon examining the SPSS syntax for the models reported in their Table 3, we found no syntax for models 2B and 3B, both of which have heterogeneous error variances. Instead, there is syntax that estimates similar models with homogeneous error variances and a comment stating that SPSS does not allow heterogeneous errors. But that is not correct. We provide SPSS MIXED commands to estimate models 2B and 3B with heterogeneous error variances and obtain results nearly identical to those reported by Hoffman and Rovine in their Table 3. Therefore, contrary to the comment in Hoffman and Rovine's syntax file, SPSS MIXED can estimate models with heterogeneous error variances.

  11. Soil moisture sensitivity of autotrophic and heterotrophic forest floor respiration in boreal xeric pine and mesic spruce forests

    NASA Astrophysics Data System (ADS)

    Ťupek, Boris; Launiainen, Samuli; Peltoniemi, Mikko; Heikkinen, Jukka; Lehtonen, Aleksi

    2016-04-01

    Litter decomposition rates in most process-based soil carbon models depend on environmental conditions and are linked with soil heterotrophic CO2 emissions, and they serve for estimating soil carbon sequestration; thus, by the mass balance equation, the variation in measured litter inputs and measured heterotrophic soil CO2 effluxes should indicate the soil carbon stock changes needed by soil carbon management for mitigation of anthropogenic CO2 emissions, provided that the sensitivity functions of the applied model suit the environmental conditions, e.g. soil temperature and moisture. We evaluated the response forms of autotrophic and heterotrophic forest floor respiration to soil temperature and moisture in four boreal forest sites of the International Cooperative Programme on Assessment and Monitoring of Air Pollution Effects on Forests (ICP Forests) in a soil trenching experiment during 2015 in southern Finland. As expected, both autotrophic and heterotrophic forest floor respiration components were primarily controlled by soil temperature, and exponential regression models generally explained more than 90% of the variance. Soil moisture regression models on average explained less than 10% of the variance, and the response forms varied between Gaussian for the autotrophic forest floor respiration component and linear for the heterotrophic forest floor respiration component. Although soil moisture explained only a small percentage of the variance in soil heterotrophic respiration, the observed reduction of CO2 emissions at higher moisture levels suggests that the soil moisture response of soil carbon models that do not account for the reduction due to excessive moisture should be re-evaluated in order to estimate correct levels of soil carbon stock changes. Our further study will include evaluation of process-based soil carbon models against annual heterotrophic respiration and soil carbon stocks.

  12. Interlaboratory comparability, bias, and precision for four laboratories measuring constituents in precipitation, November 1982-August 1983

    USGS Publications Warehouse

    Brooks, M.H.; Schroder, L.J.; Malo, B.A.

    1985-01-01

    Four laboratories were evaluated in their analysis of identical natural and simulated precipitation water samples. Interlaboratory comparability was evaluated using analysis of variance coupled with Duncan's multiple range test, and linear-regression models describing the relations between individual laboratory analytical results for natural precipitation samples. Results of the statistical analyses indicate that certain pairs of laboratories produce different results when analyzing identical samples. Analyte bias for each laboratory was examined using analysis of variance coupled with Duncan's multiple range test on data produced by the laboratories from the analysis of identical simulated precipitation samples. Bias for a given analyte produced by a single laboratory has been indicated when the laboratory mean for that analyte is shown to be significantly different from the mean for the most-probable analyte concentrations in the simulated precipitation samples. Ion-chromatographic methods for the determination of chloride, nitrate, and sulfate have been compared with the colorimetric methods that were also in use during the study period. Comparisons were made using analysis of variance coupled with Duncan's multiple range test for means produced by the two methods. Analyte precision for each laboratory has been estimated by calculating a pooled variance for each analyte. Analyte estimated precisions have been compared using F-tests and differences in analyte precisions for laboratory pairs have been reported. (USGS)

  13. A comparison between Poisson and zero-inflated Poisson regression models with an application to number of black spots in Corriedale sheep

    PubMed Central

    Naya, Hugo; Urioste, Jorge I; Chang, Yu-Mei; Rodrigues-Motta, Mariana; Kremer, Roberto; Gianola, Daniel

    2008-01-01

    Dark spots in the fleece area are often associated with dark fibres in wool, which limits its competitiveness with other textile fibres. Field data from a sheep experiment in Uruguay revealed an excess number of zeros for dark spots. We compared the performance of four Poisson and zero-inflated Poisson (ZIP) models under four simulation scenarios. All models performed reasonably well under the same scenario for which the data were simulated. The deviance information criterion favoured a Poisson model with residual, while the ZIP model with a residual gave estimates closer to their true values under all simulation scenarios. Both Poisson and ZIP models with an error term at the regression level performed better than their counterparts without such an error. Field data from Corriedale sheep were analysed with Poisson and ZIP models with residuals. Parameter estimates were similar for both models. Although the posterior distribution of the sire variance was skewed due to a small number of rams in the dataset, the median of this variance suggested a scope for genetic selection. The main environmental factor was the age of the sheep at shearing. In summary, age related processes seem to drive the number of dark spots in this breed of sheep. PMID:18558072

  14. Meaningless comparisons lead to false optimism in medical machine learning

    PubMed Central

    Kording, Konrad; Recht, Benjamin

    2017-01-01

    A new trend in medicine is the use of algorithms to analyze big datasets, e.g. using everything your phone measures about you for diagnostics or monitoring. However, these algorithms are commonly compared against weak baselines, which may contribute to excessive optimism. To assess how well an algorithm works, scientists typically ask how well its output correlates with medically assigned scores. Here we perform a meta-analysis to quantify how the literature evaluates their algorithms for monitoring mental wellbeing. We find that the bulk of the literature (∼77%) uses meaningless comparisons that ignore patient baseline state. For example, having an algorithm that uses phone data to diagnose mood disorders would be useful. However, it is possible to explain over 80% of the variance of some mood measures in the population by simply guessing that each patient has their own average mood—the patient-specific baseline. Thus, an algorithm that just predicts that our mood is like it usually is can explain the majority of variance, but is, obviously, entirely useless. Comparing to the wrong (population) baseline has a massive effect on the perceived quality of algorithms and produces baseless optimism in the field. To solve this problem we propose “user lift” that reduces these systematic errors in the evaluation of personalized medical monitoring. PMID:28949964
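
    The toy simulation below, with invented patient counts and mood scales, illustrates the point about baselines: predicting each patient's own average mood already explains a large share of the variance, so an algorithm must be judged against that patient-specific baseline ("user lift") rather than against the population mean.

      import numpy as np

      rng = np.random.default_rng(3)
      n_patients, n_days = 200, 60

      # Simulated mood scores: a stable patient-specific level plus day-to-day noise.
      patient_level = rng.normal(5.0, 2.0, size=(n_patients, 1))
      mood = patient_level + rng.normal(0.0, 1.0, size=(n_patients, n_days))

      def r2(y_true, y_pred):
          ss_res = np.sum((y_true - y_pred) ** 2)
          ss_tot = np.sum((y_true - y_true.mean()) ** 2)
          return 1.0 - ss_res / ss_tot

      # "Population" baseline: predict the grand mean for everyone.
      pop_pred = np.full_like(mood, mood.mean())
      # Patient-specific baseline: predict each patient's own average mood.
      patient_pred = np.repeat(mood.mean(axis=1, keepdims=True), n_days, axis=1)

      print("R2 vs population mean :", round(r2(mood, pop_pred), 2))      # ~0
      print("R2 vs per-patient mean:", round(r2(mood, patient_pred), 2))  # large, ~0.8
      # "User lift" would measure how much an algorithm improves on the second number.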

  15. Variance Component Selection With Applications to Microbiome Taxonomic Data.

    PubMed

    Zhai, Jing; Kim, Juhyun; Knox, Kenneth S; Twigg, Homer L; Zhou, Hua; Zhou, Jin J

    2018-01-01

    High-throughput sequencing technology has enabled population-based studies of the role of the human microbiome in disease etiology and exposure response. Microbiome data are summarized as counts or composition of the bacterial taxa at different taxonomic levels. An important problem is to identify the bacterial taxa that are associated with a response. One method is to test the association of a specific taxon with phenotypes in a linear mixed effect model, which incorporates phylogenetic information among bacterial communities. Another type of approach considers all taxa in a joint model and achieves selection via a penalization method, which ignores phylogenetic information. In this paper, we consider regression analysis by treating bacterial taxa at different levels as multiple random effects. For each taxon, a kernel matrix is calculated based on distance measures in the phylogenetic tree and acts as one variance component in the joint model. Then taxonomic selection is achieved by the lasso (least absolute shrinkage and selection operator) penalty on variance components. Our method integrates biological information into the variable selection problem and greatly improves selection accuracies. Simulation studies demonstrate the superiority of our methods versus existing methods, for example, group-lasso. Finally, we apply our method to a longitudinal microbiome study of Human Immunodeficiency Virus (HIV) infected patients. We implement our method using the high performance computing language Julia. Software and detailed documentation are freely available at https://github.com/JingZhai63/VCselection.

  16. Comparison of variance estimators for meta-analysis of instrumental variable estimates

    PubMed Central

    Schmidt, AF; Hingorani, AD; Jefferis, BJ; White, J; Groenwold, RHH; Dudbridge, F

    2016-01-01

    Background: Mendelian randomization studies perform instrumental variable (IV) analysis using genetic IVs. Results of individual Mendelian randomization studies can be pooled through meta-analysis. We explored how different variance estimators influence the meta-analysed IV estimate. Methods: Two versions of the delta method (IV before or after pooling), four bootstrap estimators, a jack-knife estimator and a heteroscedasticity-consistent (HC) variance estimator were compared using simulation. Two types of meta-analyses were compared, a two-stage meta-analysis pooling results, and a one-stage meta-analysis pooling datasets. Results: Using a two-stage meta-analysis, coverage of the point estimate using bootstrapped estimators deviated from nominal levels at weak instrument settings and/or outcome probabilities ≤ 0.10. The jack-knife estimator was the least biased resampling method, the HC estimator often failed at outcome probabilities ≤ 0.50 and overall the delta method estimators were the least biased. In the presence of between-study heterogeneity, the delta method before meta-analysis performed best. Using a one-stage meta-analysis all methods performed equally well and better than two-stage meta-analysis of greater or equal size. Conclusions: In the presence of between-study heterogeneity, two-stage meta-analyses should preferentially use the delta method before meta-analysis. Weak instrument bias can be reduced by performing a one-stage meta-analysis. PMID:27591262
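
    For orientation, the sketch below implements the first-order delta-method standard error for a single Wald-ratio IV estimate, the simplest member of the family of delta-method estimators compared in the paper; the input effect sizes are invented, and the paper's full simulation set-up (pooling, bootstrap, jack-knife and HC estimators) is not reproduced here.

      import numpy as np

      def wald_ratio_delta(beta_zy, se_zy, beta_zx, se_zx, cov_zy_zx=0.0):
          """Wald ratio IV estimate and its first-order delta-method standard error.

          beta_zy, se_zy : gene-outcome association and its standard error
          beta_zx, se_zx : gene-exposure association and its standard error
          """
          beta_iv = beta_zy / beta_zx
          var_iv = (se_zy**2 / beta_zx**2
                    + beta_zy**2 * se_zx**2 / beta_zx**4
                    - 2.0 * beta_zy * cov_zy_zx / beta_zx**3)
          return beta_iv, np.sqrt(var_iv)

      # Example numbers (illustrative only).
      est, se = wald_ratio_delta(beta_zy=0.08, se_zy=0.02, beta_zx=0.25, se_zx=0.03)
      print(f"IV estimate {est:.3f}, delta-method SE {se:.3f}")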

  17. Estimating the mass variance in neutron multiplicity counting-A comparison of approaches

    NASA Astrophysics Data System (ADS)

    Dubi, C.; Croft, S.; Favalli, A.; Ocherashvili, A.; Pedersen, B.

    2017-12-01

    In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α, n) production and the induced fission source responsible for multiplication. This study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC-laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.
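
    A minimal sketch of the bootstrap approach is shown below: measurement cycles are resampled with replacement and the first three sampled factorial moments are recomputed for each replicate, the spread across replicates giving the statistical uncertainty. The cycle data are simulated, and the point-model step that converts the moments into an effective mass is only indicated in a comment, since it is not spelled out in the abstract.

      import numpy as np

      rng = np.random.default_rng(7)

      def factorial_moments(counts):
          """First three sampled factorial moments of an event-triggered count sample."""
          c = np.asarray(counts, dtype=float)
          return np.array([c.mean(),
                           (c * (c - 1)).mean() / 2.0,
                           (c * (c - 1) * (c - 2)).mean() / 6.0])

      # Hypothetical cycle data: one array of triggered-gate counts per measurement cycle.
      cycles = [rng.poisson(1.8, size=5000) for _ in range(100)]

      def bootstrap_moments(cycles, n_boot=200):
          """Resample whole cycles with replacement and recompute the moments each time."""
          boots = []
          for _ in range(n_boot):
              idx = rng.integers(0, len(cycles), size=len(cycles))
              sample = np.concatenate([cycles[i] for i in idx])
              boots.append(factorial_moments(sample))
          return np.array(boots)

      boots = bootstrap_moments(cycles)
      print("bootstrap std of the three factorial moments:", boots.std(axis=0))
      # In the application, each bootstrap replicate would be propagated through the
      # point-model equations to a mass value, and the spread of those masses taken
      # as the statistical uncertainty of the estimated mass.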

  18. Estimating the mass variance in neutron multiplicity counting $-$ A comparison of approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubi, C.; Croft, S.; Favalli, A.

    In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α,n) production and the induced fission source responsible for multiplication. This study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC-laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.

  19. Estimating the mass variance in neutron multiplicity counting $-$ A comparison of approaches

    DOE PAGES

    Dubi, C.; Croft, S.; Favalli, A.; ...

    2017-09-14

    In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α,n) production and the induced fission source responsible for multiplication. This study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC-laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.

  20. Validated LC-MS/MS method for the determination of 3-hydroxflavone and its glucuronide in blood and bioequivalent buffers: application to pharmacokinetic, absorption, and metabolism studies.

    PubMed

    Xu, Beibei; Yang, Guanyi; Ge, Shufan; Yin, Taijun; Hu, Ming; Gao, Song

    2013-11-01

    The purpose of this study is to develop an UPLC-MS/MS method to quantify 3-hydroxyflavone (3-HF) and its metabolite, 3-hydroxyflavone-glucuronide (3-HFG), from biological samples. A Waters BEH C8 column was used with acetonitrile/0.1% formic acid in water as mobile phases. The mass analysis was performed in an API 5500 Qtrap mass spectrometer via multiple reaction monitoring (MRM) in positive scan mode. One-step protein precipitation by acetonitrile was used to extract the analytes from blood. The results showed that the linear response range was 0.61-2500.00 nM for 3-HF and 0.31-2500.00 nM for 3-HFG. The intra-day variance is less than 16.5% and the accuracy is within 77.7-90.6% for 3-HF; the variance is less than 15.9% and the accuracy within 85.1-114.7% for 3-HFG. The inter-day variance is less than 20.2% and the accuracy is within 110.6-114.2% for 3-HF; the variance is less than 15.6% and the accuracy within 83.0-89.4% for 3-HFG. The analysis was done within 4.0 min. Only 10 μl of blood is needed due to the high sensitivity of this method. The validated method was successfully applied to a pharmacokinetic study in A/J mice, a transport study in the Caco-2 cell culture model, and a glucuronidation study using mouse liver and intestine microsomes. The applications revealed that this method can be used for 3-HF and 3-HFG analysis in blood as well as in bioequivalent buffers such as HBSS and KPI. Copyright © 2013 Elsevier B.V. All rights reserved.

  1. Validated LC-MS/MS method for the determination of 3-Hydroxflavone and its glucuronide in blood and bioequivalent buffers: Application to pharmacokinetic, absorption, and metabolism studies

    PubMed Central

    Xu, Beibei; Yang, Guanyi; Ge, Shufan; Yin, Taijun; Hu, Ming; Gao, Song

    2015-01-01

    The purpose of this study is to develop an UPLC-MS/MS method to quantify 3-hydroxy-flavone (3-HF) and its metabolite, 3-hydroxyflavone-glucuronide (3-HFG), from biological samples. A Waters BEH C8 column was used with acetonitrile/0.1% formic acid in water as mobile phases. The mass analysis was performed in an API 5500 Qtrap mass spectrometer via multiple reaction monitoring (MRM) in positive scan mode. One-step protein precipitation by acetonitrile was used to extract the analytes from plasma. The results showed that the linear response range was 0.61–2,500.00 nM for 3-HF and 0.31–2,500.00 nM for 3-HFG. The intra-day variance is less than 16.48% and the accuracy is within 77.70–90.64% for 3-HF; the variance is less than 15.86% and the accuracy within 85.08–114.70% for 3-HFG. The inter-day variance is less than 20.23% and the accuracy is within 110.58–114.2% for 3-HF; the variance is less than 15.59% and the accuracy within 83.00–89.40% for 3-HFG. The analysis was done within 4.0 min. Only 10 μL of blood is needed due to the high sensitivity of this method. The validated method was successfully applied to a pharmacokinetic study in A/J mice, a transport study in the Caco-2 cell culture model, and a glucuronidation study using mouse liver and intestine microsomes. The applications revealed that this method can be used for 3-HF and 3-HFG analysis in blood as well as in bioequivalent buffers such as HBSS and KPI. PMID:23973631

  2. Two proposed convergence criteria for Monte Carlo solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forster, R.A.; Pederson, S.P.; Booth, T.E.

    1992-01-01

    The central limit theorem (CLT) can be applied to a Monte Carlo solution if two requirements are satisfied: (1) the random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large. When these two conditions are satisfied, a confidence interval (CI) based on the normal distribution with a specified coverage probability can be formed. The first requirement is generally satisfied by the knowledge of the Monte Carlo tally being used. The Monte Carlo practitioner has a limited number of marginal methods to assess the fulfillment of the second requirement, such as statistical error reduction proportional to 1/√N with error magnitude guidelines. Two proposed methods are discussed in this paper to assist in deciding if N is large enough: estimating the relative variance of the variance (VOV) and examining the empirical history score probability density function (pdf).
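
    A small sketch of the first proposed criterion is given below, using the common estimator of the relative variance of the variance of the sample mean (the form used, for example, in MCNP's statistical checks); the example score distributions and the 0.1 rule of thumb are illustrative assumptions, not figures taken from the report.

      import numpy as np

      def relative_vov(scores):
          """Relative variance of the variance of the sample mean.

          A common rule of thumb is to require VOV < 0.1 before trusting the
          confidence interval; VOV should also fall off roughly as 1/N.
          """
          x = np.asarray(scores, dtype=float)
          n = x.size
          d = x - x.mean()
          return np.sum(d**4) / np.sum(d**2) ** 2 - 1.0 / n

      rng = np.random.default_rng(0)
      well_behaved = rng.exponential(1.0, 100_000)   # light-tailed history scores
      heavy_tailed = rng.pareto(1.5, 100_000)        # effectively infinite-variance scores
      print("VOV, well behaved:", relative_vov(well_behaved))
      print("VOV, heavy tailed:", relative_vov(heavy_tailed))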

  3. Estimation of genetic variance for macro- and micro-environmental sensitivity using double hierarchical generalized linear models

    PubMed Central

    2013-01-01

    Background Genetic variation for environmental sensitivity indicates that animals are genetically different in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature) and called macro-environmental or unknown and called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate bias and precision of resulting estimates of genetic parameters and to develop and evaluate use of Akaike’s information criterion using h-likelihood to select the best fitting model. Methods We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for residual variance to estimate genetic variance for micro-environmental sensitivity using a double hierarchical generalized linear model in ASReml. Akaike’s information criterion was constructed as model selection criterion using approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate bias and precision of estimated genetic parameters. Results Designs with 100 sires, each with at least 100 offspring, are required to have standard deviations of estimated variances lower than 50% of the true value. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically, no bias was observed for estimates of any of the parameters. Using Akaike’s information criterion the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance for micro- and macro-environmental sensitivities existed. Conclusion The algorithm and model selection criterion presented here can contribute to better understand genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires each with 100 offspring. PMID:23827014

  4. Chiral Analysis by Tandem Mass Spectrometry Using the Kinetic Method, by Polarimetry, and by [Superscript 1]H NMR Spectroscopy

    ERIC Educational Resources Information Center

    Fedick, Patrick W.; Bain, Ryan M.; Bain, Kinsey; Cooks, R. Graham

    2017-01-01

    The goal of this laboratory exercise was for students to understand the concept of chirality and how enantiomeric excess (ee) is experimentally determined using the analysis of ibuprofen as an example. Students determined the enantiomeric excess of the analyte by three different instrumental methods: mass spectrometry, nuclear magnetic resonance…

  5. Variance reduction for Fokker–Planck based particle Monte Carlo schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorji, M. Hossein, E-mail: gorjih@ifd.mavt.ethz.ch; Andric, Nemanja; Jenny, Patrick

    Recently, Fokker–Planck based particle Monte Carlo schemes have been proposed and evaluated for simulations of rarefied gas flows [1–3]. In this paper, the variance reduction for particle Monte Carlo simulations based on the Fokker–Planck model is considered. First, deviational based schemes were derived and reviewed, and it is shown that these deviational methods are not appropriate for practical Fokker–Planck based rarefied gas flow simulations. This is due to the fact that the deviational schemes considered in this study lead either to instabilities in the case of two-weight methods or to large statistical errors if the direct sampling method is applied. Motivated by this conclusion, we developed a novel scheme based on correlated stochastic processes. The main idea here is to synthesize an additional stochastic process with a known solution, which is simultaneously solved together with the main one. By correlating the two processes, the statistical errors can dramatically be reduced; especially for low Mach numbers. To assess the methods, homogeneous relaxation, planar Couette and lid-driven cavity flows were considered. For these test cases, it could be demonstrated that variance reduction based on parallel processes is very robust and effective.
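
    The control-variate example below captures the idea of correlating the main process with an auxiliary process whose expectation is known; it is a generic sketch with an artificial integrand, not the Fokker–Planck particle scheme developed in the paper.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000

      # Main process: a quantity whose expectation we want (here E[exp(Z)] with Z ~ N(0,1)).
      z = rng.normal(size=n)
      x = np.exp(z)

      # Auxiliary, strongly correlated process with a known expectation:
      # Y = 1 + Z + Z**2/2 (truncated Taylor series of exp(Z)), so E[Y] = 1.5.
      y = 1.0 + z + 0.5 * z**2
      known_mean_y = 1.5

      # Optimal control-variate coefficient and the corrected estimator.
      c = np.cov(x, y)[0, 1] / np.var(y)
      x_cv = x - c * (y - known_mean_y)

      print("plain estimate     :", x.mean(), "+/-", x.std(ddof=1) / np.sqrt(n))
      print("correlated estimate:", x_cv.mean(), "+/-", x_cv.std(ddof=1) / np.sqrt(n))
      # Both are unbiased for E[exp(Z)] = exp(0.5) ~ 1.6487; the corrected one has a
      # much smaller statistical error because the two processes are highly correlated.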

  6. Interlaboratory study of a liquid chromatography method for erythromycin: determination of uncertainty.

    PubMed

    Dehouck, P; Vander Heyden, Y; Smeyers-Verbeke, J; Massart, D L; Marini, R D; Chiap, P; Hubert, Ph; Crommen, J; Van de Wauw, W; De Beer, J; Cox, R; Mathieu, G; Reepmeyer, J C; Voigt, B; Estevenon, O; Nicolas, A; Van Schepdael, A; Adams, E; Hoogmartens, J

    2003-08-22

    Erythromycin is a mixture of macrolide antibiotics produced by Saccharopolyspora erythraea during fermentation. A new method for the analysis of erythromycin by liquid chromatography has previously been developed. It makes use of an Astec C18 polymeric column. After validation in one laboratory, the method was now validated in an interlaboratory study. Validation studies are commonly used to test the fitness of the analytical method prior to its use for routine quality testing. The data derived in the interlaboratory study can be used to make an uncertainty statement as well. The relationship between validation and uncertainty statement is not clear for many analysts and there is a need to show how the existing data, derived during validation, can be used in practice. Eight laboratories participated in this interlaboratory study. The set-up allowed the determination of the repeatability variance, s²r, and the between-laboratory variance, s²L. Combining s²r and s²L gives the reproducibility variance, s²R. It has been shown how these data can be used in future by a single laboratory that wants to make an uncertainty statement concerning the same analysis.
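
    A minimal sketch of how the repeatability and between-laboratory variances combine into a reproducibility variance from a balanced interlaboratory design is given below; the assay values and the numbers of laboratories and replicates are invented, and the formulas are the standard one-way ANOVA expectations rather than figures from the study.

      import numpy as np

      def reproducibility_components(results):
          """Repeatability, between-laboratory and reproducibility variances from a
          balanced one-way layout (rows = laboratories, columns = replicates)."""
          data = np.asarray(results, dtype=float)
          p, n = data.shape                       # p laboratories, n replicates each
          lab_means = data.mean(axis=1)
          ms_within = ((data - lab_means[:, None]) ** 2).sum() / (p * (n - 1))
          ms_between = n * ((lab_means - data.mean()) ** 2).sum() / (p - 1)
          s2_r = ms_within                                 # repeatability variance
          s2_L = max((ms_between - ms_within) / n, 0.0)    # between-laboratory variance
          return s2_r, s2_L, s2_r + s2_L                   # s2_R = reproducibility variance

      # Hypothetical erythromycin A contents (%), 4 laboratories x 3 replicates.
      example = [[78.1, 78.4, 78.0],
                 [77.6, 77.9, 77.7],
                 [78.8, 78.5, 78.9],
                 [78.2, 78.1, 78.4]]
      s2_r, s2_L, s2_R = reproducibility_components(example)
      print(f"s2_r={s2_r:.4f}  s2_L={s2_L:.4f}  s2_R={s2_R:.4f}")
      # A standard uncertainty for a future single result in one laboratory can be
      # taken as sqrt(s2_R); this is the link between validation data and an
      # uncertainty statement.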

  7. Point focusing using loudspeaker arrays from the perspective of optimal beamforming.

    PubMed

    Bai, Mingsian R; Hsieh, Yu-Hao

    2015-06-01

    Sound focusing aims to create a concentrated acoustic field in the region surrounded by a loudspeaker array. This problem was tackled in previous research via the Helmholtz integral approach, brightness control, acoustic contrast control, etc. In this paper, the same problem was revisited from the perspective of beamforming. A source array model is reformulated in terms of the steering matrix between the source and the field points, which lends itself to the use of beamforming algorithms such as minimum variance distortionless response (MVDR) and linearly constrained minimum variance (LCMV) originally intended for sensor arrays. The beamforming methods are compared with the conventional methods in terms of beam pattern, directional index, and control effort. Objective tests are conducted to assess the audio quality by using perceptual evaluation of audio quality (PEAQ). Experiments on the produced sound field and listening tests are conducted in a listening room, with results processed using analysis of variance and regression analysis. In contrast to the conventional energy-based methods, the results have shown that the proposed methods are phase-sensitive in light of the distortionless constraint in formulating the array filters, which helps enhance audio quality and focusing performance.
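
    The sketch below shows the MVDR weight computation in its textbook form; for loudspeaker focusing the steering vectors would be built from the source-to-field-point steering matrix mentioned above rather than from the plane-wave model used here, and the array geometry, angles and loading factor are all illustrative assumptions.

      import numpy as np

      def mvdr_weights(R, a):
          """Minimum-variance distortionless-response weights: minimise w^H R w
          subject to w^H a = 1, giving w = R^{-1} a / (a^H R^{-1} a)."""
          Ra = np.linalg.solve(R, a)
          return Ra / (a.conj() @ Ra)

      def steering(n_elem, angle_rad, spacing=0.5):
          """Plane-wave steering vector for a uniform line array (spacing in wavelengths)."""
          k = 2 * np.pi
          return np.exp(1j * k * spacing * np.arange(n_elem) * np.sin(angle_rad))

      a_focus = steering(8, np.deg2rad(0.0))           # desired focal direction
      a_leak = steering(8, np.deg2rad(40.0))           # direction to suppress
      R = np.outer(a_leak, a_leak.conj()) + 0.01 * np.eye(8)   # "interference + noise" model
      w = mvdr_weights(R, a_focus)
      print("response at focus :", abs(w.conj() @ a_focus))    # = 1 (distortionless)
      print("response at 40 deg:", abs(w.conj() @ a_leak))     # strongly attenuated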

  8. Comparison of reproducibility of natural head position using two methods.

    PubMed

    Khan, Abdul Rahim; Rajesh, R N G; Dinesh, M R; Sanjay, N; Girish, K S; Venkataraghavan, Karthik

    2012-01-01

    Lateral cephalometric radiographs have become virtually indispensable to orthodontists in the treatment of patients. They are important in orthodontic growth analysis, diagnosis, treatment planning, monitoring of therapy and evaluation of final treatment outcome. The purpose of this study was to evaluate and compare the maximum reproducibility with minimum variation of natural head position using two methods, i.e. the mirror method and the fluid level device method. The study included two sets of 40 lateral cephalograms taken using two methods of obtaining natural head position, (1) the mirror method and (2) the fluid level device method, with a time interval of 2 months. Inclusion criteria: subjects were randomly selected, aged between 18 and 26 years. Exclusion criteria: history of orthodontic treatment; any history of respiratory tract problems or chronic mouth breathing; any congenital deformity; history of traumatically-induced deformity; history of myofascial pain syndrome; any previous history of head and neck surgery. The results showed that the two methods for obtaining natural head position were comparable, but reproducibility was highest with the fluid level device, as shown by Dahlberg's coefficient and the Bland-Altman plot, and the minimum variance was seen with the fluid level device method, as shown by precision and Pearson correlation. In summary, the mirror method and the fluid level device method for obtaining natural head position were comparable without any significant difference, and the fluid level device method was more reproducible and showed less variance than the mirror method.

  9. A binding energy study of the Atomic Mass Evaluation 2012 and an updated beta-decay study of neutron-rich 74Cu

    NASA Astrophysics Data System (ADS)

    Tracy, James L., Jr.

    A study of ground state binding energy values listed in the Atomic Mass Evaluation 2012 (AME2012) using an interpretive approach, as opposed to the exploratory methods of previous models, is presented. This model is based on a postulate requiring all protons to pair with available neutrons to form bound alpha clusters as the ground state for an N = Z core upon which excess neutrons are added. For each core, the trend of the binding energy as a function of excess neutrons in the isotopic chain can be fit with a three-term quadratic function. The quadratic parameter reveals a smooth decaying exponential function. By re-envisioning the determination of mass excess, the constant-term fit parameters, representing N = Z nuclei, reveal a near-symmetry around Z = 50. The linear fit parameters exhibit trends which are linear functions of core size. A neutron drip-line prediction is compared against current models. By considering the possibility of an alpha-cluster core, a new ground-state structure grouping scheme is presented; nucleon-nucleon pairing is shown to have a greater role in level filling. This model, referred to as the Alpha-Deuteron-Neutron Model, yields promising first results when considering root-mean-square variances from the AME2012. The beta-decay of the neutron-rich isotope 74Cu has been studied using three high-purity Germanium clover detectors at the Holifield Radioactive Ion Beam Facility at Oak Ridge National Laboratory. A high-resolution mass separator greatly improved the purity of the 74Cu beam by removing isobaric contaminants, thus allowing decay through its isobar chain to the stable 74Ge at the center of the LeRIBSS detector array without any decay chain member dominating. Using coincidence gating techniques, 121 gamma-rays associated with 74Cu were isolated from the collective singles spectrum. Eighty-seven of these were placed in an expanded level scheme, and updated beta-feeding level intensities and log( ft) values are presented based on multiple newly-placed excited states up to 6.8 MeV. The progression of simulated Total Absorption gamma-ray Spectroscopy (TAGS) based on known levels and beta feeding values from previous measurements to this evaluation are presented and demonstrate the need for a TAGS measurement of this isotope to gain a more complete understanding of its decay scheme.

  10. Unequal Cell Frequencies in Analysis of Variance: A Review and Extension of Methodology for Multiple Missing Observations.

    ERIC Educational Resources Information Center

    Proger, Barton B.; And Others

    Many researchers assume that unequal cell frequencies in analysis of variance (ANOVA) designs result from poor planning. However, there are several valid reasons why one might have to analyze an unequal-n data matrix. The present study reviewed four categories of methods for treating unequal-n matrices by ANOVA: (a) unaltered data (least-squares…

  11. Temperature variation effects on stochastic characteristics for low-cost MEMS-based inertial sensor error

    NASA Astrophysics Data System (ADS)

    El-Diasty, M.; El-Rabbany, A.; Pagiatakis, S.

    2007-11-01

    We examine the effect of varying the temperature points on MEMS inertial sensors' noise models using Allan variance and least-squares spectral analysis (LSSA). Allan variance is a method of representing root-mean-square random drift error as a function of averaging times. LSSA is an alternative to the classical Fourier methods and has been applied successfully by a number of researchers in the study of the noise characteristics of experimental series. Static data sets are collected at different temperature points using two MEMS-based IMUs, namely MotionPakII and Crossbow AHRS300CC. The performance of the two MEMS inertial sensors is predicted from the Allan variance estimation results at different temperature points and the LSSA is used to study the noise characteristics and define the sensors' stochastic model parameters. It is shown that the stochastic characteristics of MEMS-based inertial sensors can be identified using Allan variance estimation and LSSA and the sensors' stochastic model parameters are temperature dependent. Also, the Kaiser window FIR low-pass filter is used to investigate the effect of de-noising stage on the stochastic model. It is shown that the stochastic model is also dependent on the chosen cut-off frequency.
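
    A generic, non-overlapping Allan variance computation of the kind used in such analyses is sketched below on a synthetic static record; the sampling rate, noise level and bias are invented and do not correspond to the MotionPakII or AHRS300CC data.

      import numpy as np

      def allan_deviation(omega, fs, taus):
          """Non-overlapping Allan deviation of a rate signal omega sampled at fs Hz,
          for a list of averaging times taus (s)."""
          adev = []
          for tau in taus:
              m = int(round(tau * fs))            # samples per averaging bin
              n_bins = omega.size // m
              bins = omega[:n_bins * m].reshape(n_bins, m).mean(axis=1)
              # Allan variance: 0.5 * mean of squared successive-bin differences.
              avar = 0.5 * np.mean(np.diff(bins) ** 2)
              adev.append(np.sqrt(avar))
          return np.array(adev)

      # Synthetic static gyro record: white noise (angle random walk) plus a constant bias.
      fs = 100.0
      rng = np.random.default_rng(5)
      omega = 0.02 + 0.5 * rng.normal(size=int(3600 * fs))   # deg/s

      taus = np.logspace(-1, 2, 10)
      for tau, ad in zip(taus, allan_deviation(omega, fs, taus)):
          print(f"tau = {tau:7.2f} s   Allan deviation = {ad:.4f} deg/s")
      # For pure white noise the Allan deviation falls as 1/sqrt(tau); departures from
      # that slope at long tau reveal bias instability and rate random walk terms.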

  12. Turbulent variance characteristics of temperature and humidity over a non-uniform land surface for an agricultural ecosystem in China

    NASA Astrophysics Data System (ADS)

    Gao, Z. Q.; Bian, L. G.; Chen, Z. G.; Sparrow, M.; Zhang, J. H.

    2006-05-01

    This paper describes the application of the variance method for flux estimation over a mixed agricultural region in China. Eddy covariance and flux variance measurements were conducted in a near-surface layer over a non-uniform land surface in the central plain of China from 7 June to 20 July 2002. During this period, the mean canopy height was about 0.50 m. The study site consisted of grass (10% of area), beans (15%), corn (15%) and rice (60%). Under unstable conditions, the standard deviations of temperature and water vapor density (normalized by appropriate scaling parameters), observed by a single instrument, followed the Monin-Obukhov similarity theory. The similarity constants for heat (C-T) and water vapor (C-q) were 1.09 and 1.49, respectively. In comparison with direct measurements using eddy covariance techniques, the flux variance method, on average, underestimated sensible heat flux by 21% and latent heat flux by 24%, which may be attributed to the fact that the observed slight deviations (20% or 30% at most) of the similarity "constants" may be within the expected range of variation of a single instrument from the generally-valid relations.

  13. Intra-individual variation in urinary iodine concentration: effect of statistical correction on population distribution using seasonal three-consecutive-day spot urine in children

    PubMed Central

    Ji, Xiaohong; Liu, Peng; Sun, Zhenqi; Su, Xiaohui; Wang, Wei; Gao, Yanhui; Sun, Dianjun

    2016-01-01

    Objective: To determine the effect of statistical correction for intra-individual variation on estimated urinary iodine concentration (UIC) by sampling on 3 consecutive days in four seasons in children. Setting: School-aged children from urban and rural primary schools in Harbin, Heilongjiang, China. Participants: 748 and 640 children aged 8–11 years were recruited from urban and rural schools, respectively, in Harbin. Primary and secondary outcome measures: The spot urine samples were collected once a day for 3 consecutive days in each season over 1 year. The UIC of the first day was corrected by two statistical correction methods: the average correction method (average of days 1 and 2; average of days 1, 2 and 3) and the variance correction method (UIC of day 1 corrected by two replicates and by three replicates). The variance correction method determined the SD between subjects (Sb) and within subjects (Sw), and calculated the correction coefficient (Fi), Fi = Sb/(Sb + Sw/di), where di was the number of observations; the UIC of day 1 was then corrected using the resulting correction equation. Results: The variance correction methods showed the overall Fi was 0.742 for 2 days’ correction and 0.829 for 3 days’ correction; the values for the seasons spring, summer, autumn and winter were 0.730, 0.684, 0.706 and 0.703 for 2 days’ correction and 0.809, 0.742, 0.796 and 0.804 for 3 days’ correction, respectively. After removal of the individual effect, the correlation coefficient between consecutive days was 0.224, and between non-consecutive days 0.050. Conclusions: The variance correction method is effective for correcting intra-individual variation in estimated UIC following sampling on 3 consecutive days in four seasons in children. The method varies little between ages, sexes and urban or rural setting, but does vary between seasons. PMID:26920442
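
    The sketch below applies the quoted coefficient Fi = Sb/(Sb + Sw/di) to simulated data. Because the correction equation itself is elided in this extract, the final step assumes the usual shrinkage form, pulling each day-1 value toward the group mean by Fi; that assumption, along with the simulated iodine levels, should be checked against the original paper.

      import numpy as np

      def correct_uic(day1, replicates, d):
          """Shrink day-1 urinary iodine values toward the group mean using the
          coefficient quoted in the abstract, Fi = Sb / (Sb + Sw / di).

          day1       : day-1 UIC per child
          replicates : 2-D array (children x di days) used to split the variation
          d          : number of observations per child (di)
          """
          reps = np.asarray(replicates, dtype=float)
          sw = np.sqrt(np.mean(reps.var(axis=1, ddof=1)))                      # within-subject SD
          sb = np.sqrt(max(reps.mean(axis=1).var(ddof=1) - sw**2 / d, 0.0))    # between-subject SD
          fi = sb / (sb + sw / d)
          group_mean = reps.mean()
          # Assumed form of the elided correction equation: pull day-1 values toward
          # the group mean by the coefficient Fi.
          return group_mean + fi * (np.asarray(day1, dtype=float) - group_mean), fi

      rng = np.random.default_rng(11)
      true_level = rng.lognormal(np.log(180), 0.3, size=300)          # hypothetical children
      obs = true_level[:, None] * rng.lognormal(0, 0.4, size=(300, 3))
      corrected, fi = correct_uic(obs[:, 0], obs, d=3)
      print("Fi =", round(fi, 3), " raw SD:", round(obs[:, 0].std(), 1),
            " corrected SD:", round(corrected.std(), 1))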

  14. Genomic BLUP including additive and dominant variation in purebreds and F1 crossbreds, with an application in pigs.

    PubMed

    Vitezica, Zulma G; Varona, Luis; Elsen, Jean-Michel; Misztal, Ignacy; Herring, William; Legarra, Andrès

    2016-01-29

    Most developments in quantitative genetics theory focus on the study of intra-breed/line concepts. With the availability of massive genomic information, it becomes necessary to revisit the theory for crossbred populations. We propose methods to construct genomic covariances with additive and non-additive (dominance) inheritance in the case of pure lines and crossbred populations. We describe substitution effects and dominant deviations across two pure parental populations and the crossbred population. Gene effects are assumed to be independent of the origin of alleles and allelic frequencies can differ between parental populations. Based on these assumptions, the theoretical variance components (additive and dominant) are obtained as a function of marker effects and allelic frequencies. The additive genetic variance in the crossbred population includes the biological additive and dominant effects of a gene and a covariance term. Dominance variance in the crossbred population is proportional to the product of the heterozygosity coefficients of both parental populations. A genomic BLUP (best linear unbiased prediction) equivalent model is presented. We illustrate this approach by using pig data (two pure lines and their cross, including 8265 phenotyped and genotyped sows). For the total number of piglets born, the dominance variance in the crossbred population represented about 13 % of the total genetic variance. Dominance variation is only marginally important for litter size in the crossbred population. We present a coherent marker-based model that includes purebred and crossbred data and additive and dominant actions. Using this model, it is possible to estimate breeding values, dominant deviations and variance components in a dataset that comprises data on purebred and crossbred individuals. These methods can be exploited to plan assortative mating in pig, maize or other species, in order to generate superior crossbred individuals in terms of performance.

  15. [Application of single-band brightness variance ratio to the interference dissociation of cloud for satellite data].

    PubMed

    Qu, Wei-ping; Liu, Wen-qing; Liu, Jian-guo; Lu, Yi-huai; Zhu, Jun; Qin, Min; Liu, Cheng

    2006-11-01

    In satellite remote-sensing detection, cloud acts as an interference source that degrades data retrieval, so discriminating cloud fields with high fidelity is a prerequisite for subsequent research. A new method rooted in the atmospheric radiation characteristics of the cloud layer is presented here: a single-band brightness variance ratio is used to detect the relative intensity of cloud clutter so as to delineate cloud fields rapidly and exactly, and the formulae for the brightness variance ratio of a satellite image, the image reflectance variance ratio, and the brightness temperature variance ratio of a thermal infrared image are also given, enabling cloud elimination to produce data free from cloud interference. According to the variation in penetrating capability between spectral bands, an objective evaluation is made of their cloud penetration together with the factors that influence the penetration effect. Finally, a multi-band data fusion task is completed using infrared-penetration image data from cirrus nothus. The reconstructed image data are of good quality and exactitude, reproducing the real visible-band data covered by the cloud fields. Statistics indicate the consistency of the waveband relativity with the image data after data fusion.

  16. Multi-method Assessment of Psychopathy in Relation to Factors of Internalizing and Externalizing from the Personality Assessment Inventory: The Impact of Method Variance and Suppressor Effects

    PubMed Central

    Blonigen, Daniel M.; Patrick, Christopher J.; Douglas, Kevin S.; Poythress, Norman G.; Skeem, Jennifer L.; Lilienfeld, Scott O.; Edens, John F.; Krueger, Robert F.

    2010-01-01

    Research to date has revealed divergent relations across factors of psychopathy measures with criteria of internalizing (INT; anxiety, depression) and externalizing (EXT; antisocial behavior, substance use). However, failure to account for method variance and suppressor effects has obscured the consistency of these findings across distinct measures of psychopathy. Using a large correctional sample, the current study employed a multi-method approach to psychopathy assessment (self-report, interview/file review) to explore convergent and discriminant relations between factors of psychopathy measures and latent criteria of INT and EXT derived from the Personality Assessment Inventory (PAI; L. Morey, 2007). Consistent with prediction, scores on the affective-interpersonal factor of psychopathy were negatively associated with INT and negligibly related to EXT, whereas scores on the social deviance factor exhibited positive associations (moderate and large, respectively) with both INT and EXT. Notably, associations were highly comparable across the psychopathy measures when accounting for method variance (in the case of EXT) and when assessing for suppressor effects (in the case of INT). Findings are discussed in terms of implications for clinical assessment and evaluation of the validity of interpretations drawn from scores on psychopathy measures. PMID:20230156

  17. Developing Parametric Models for the Assembly of Machine Fixtures for Virtual Multiaxial CNC Machining Centers

    NASA Astrophysics Data System (ADS)

    Balaykin, A. V.; Bezsonov, K. A.; Nekhoroshev, M. V.; Shulepov, A. P.

    2018-01-01

    This paper dwells upon a variance parameterization method. Variance, or dimensional, parameterization is based on sketching, with various parametric links superimposed on the sketch objects and user-imposed constraints in the form of an equation system that determines the parametric dependencies. The method is fully integrated into a top-down design methodology to enable the creation of multi-variant and flexible fixture assembly models, as all the modeling operations are hierarchically linked in the build tree. In this research the authors consider a parameterization method for machine tooling used to manufacture parts on multiaxial CNC machining centers in a real manufacturing process. The developed method significantly reduces tooling design time when a part's geometric parameters change. It also shortens design and pre-production engineering, in particular the development of control programs for CNC equipment and for control and measuring machines, and automates the release of design and engineering documentation. Variance parameterization helps to optimize the construction of parts as well as machine tooling using integrated CAE systems. In the framework of this study, the authors demonstrate a comprehensive approach to parametric modeling of machine tooling in the CAD package used in the real manufacturing process of aircraft engines.

  18. Genomic scan as a tool for assessing the genetic component of phenotypic variance in wild populations.

    PubMed

    Herrera, Carlos M

    2012-01-01

    Methods for estimating quantitative trait heritability in wild populations have been developed in recent years which take advantage of the increased availability of genetic markers to reconstruct pedigrees or estimate relatedness between individuals, but their application to real-world data is not exempt from difficulties. This chapter describes a recent marker-based technique which, by adopting a genomic scan approach and focusing on the relationship between phenotypes and genotypes at the individual level, avoids the problems inherent to marker-based estimators of relatedness. This method allows the quantification of the genetic component of phenotypic variance ("degree of genetic determination" or "heritability in the broad sense") in wild populations and is applicable whenever phenotypic trait values and multilocus data for a large number of genetic markers (e.g., amplified fragment length polymorphisms, AFLPs) are simultaneously available for a sample of individuals from the same population. The method proceeds by first identifying those markers whose variation across individuals is significantly correlated with individual phenotypic differences ("adaptive loci"). The proportion of phenotypic variance in the sample that is statistically accounted for by individual differences in adaptive loci is then estimated by fitting a linear model to the data, with trait value as the dependent variable and scores of adaptive loci as independent ones. The method can be easily extended to accommodate quantitative or qualitative information on biologically relevant features of the environment experienced by each sampled individual, in which case estimates of the environmental and genotype × environment components of phenotypic variance can also be obtained.
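
    The two-step procedure (select "adaptive loci", then take the R-squared of a linear model of trait on those loci) can be sketched in a few lines; the significance cutoff and the plain least-squares fit below are simplifying assumptions, not the chapter's exact protocol.

      import numpy as np
      from scipy import stats

      def broad_sense_h2(trait, markers, alpha=0.05):
          # markers: (n_individuals, n_loci) 0/1 AFLP scores; trait: (n_individuals,)
          trait = np.asarray(trait, float)
          pvals = np.array([stats.pearsonr(markers[:, j], trait)[1]
                            for j in range(markers.shape[1])])
          adaptive = markers[:, pvals < alpha]          # loci correlated with phenotype
          X = np.column_stack([np.ones(len(trait)), adaptive])
          beta, *_ = np.linalg.lstsq(X, trait, rcond=None)
          resid = trait - X @ beta
          return 1.0 - resid.var() / trait.var()        # share of variance explained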

  19. Added sugars: consumption and associated factors among adults and the elderly. São Paulo, Brazil.

    PubMed

    Bueno, Milena Baptista; Marchioni, Dirce Maria Lobo; César, Chester Luis Galvão; Fisberg, Regina Mara

    2012-06-01

    To investigate added sugar intake, its main dietary sources, and factors associated with excessive intake of added sugar. A population-based household survey was carried out in São Paulo, the largest city in Brazil. Cluster sampling was performed and the study sample comprised 689 adults and 622 elderly individuals. Dietary intake was estimated based on a 24-hour food recall. Usual nutrient intake was estimated by correcting for the within-person variance of intake using the Iowa State University (ISU) method. Linear regression analysis was conducted to identify factors associated with added sugar intake. The average share of energy intake (EI) from added sugars was 9.1% (95% CI: 8.9%; 9.4%) among adults and 8.4% (95% CI: 8.2%; 8.7%) among the elderly (p < 0.05). Average added sugar intake (% EI) was higher among women than among men (p < 0.05). Soft drinks were the main source of added sugar among adults, while table sugar was the main source among the elderly. Added sugar intake increased with age among adults. Moreover, a higher socioeconomic level was associated with greater added sugar intake in the same group. Added sugar intake is higher among younger adults of higher socioeconomic level. Soft drinks and table sugar accounted for more than 50% of the sugar consumed.

  20. Evaluation and recommendation of sensitivity analysis methods for application to Stochastic Human Exposure and Dose Simulation models.

    PubMed

    Mokhtari, Amirhossein; Christopher Frey, H; Zheng, Junyu

    2006-11-01

    Sensitivity analyses of exposure or risk models can help identify the most significant factors to aid in risk management or to prioritize additional research to reduce uncertainty in the estimates. However, sensitivity analysis is challenged by non-linearity, interactions between inputs, and multiple days or time scales. Selected sensitivity analysis methods are evaluated with respect to their applicability to human exposure models with such features using a testbed. The testbed is a simplified version of the US Environmental Protection Agency's Stochastic Human Exposure and Dose Simulation (SHEDS) model. The methods evaluated include the Pearson and Spearman correlation, sample and rank regression, analysis of variance, Fourier amplitude sensitivity test (FAST), and Sobol's method. The first five methods are known as "sampling-based" techniques, whereas the latter two methods are known as "variance-based" techniques. The main objective of the test cases was to identify the main and total contributions of individual inputs to the output variance. Sobol's method and FAST directly quantified these measures of sensitivity. Results show that sensitivity of an input typically changed when evaluated under different time scales (e.g., daily versus monthly). All methods provided similar insights regarding less important inputs; however, Sobol's method and FAST provided more robust insights with respect to sensitivity of important inputs compared to the sampling-based techniques. Thus, the sampling-based methods can be used in a screening step to identify unimportant inputs, followed by application of more computationally intensive refined methods to a smaller set of inputs. The implications of time variation in sensitivity results for risk management are briefly discussed.
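
    For the variance-based side of such an evaluation, first-order and total-effect Sobol' indices can be estimated with the pick-and-freeze scheme sketched below (Saltelli and Jansen estimators); the toy model and sample size are placeholders, not the SHEDS testbed.

      import numpy as np

      def sobol_indices(model, bounds, n=2**12, seed=0):
          # model: callable taking an (n, d) array and returning n outputs
          rng = np.random.default_rng(seed)
          lo, hi = np.asarray(bounds, float).T
          d = len(lo)
          A = lo + (hi - lo) * rng.random((n, d))
          B = lo + (hi - lo) * rng.random((n, d))
          yA, yB = model(A), model(B)
          var = np.var(np.concatenate([yA, yB]), ddof=1)
          S1, ST = np.empty(d), np.empty(d)
          for i in range(d):
              ABi = A.copy()
              ABi[:, i] = B[:, i]                          # "freeze" all inputs but x_i
              yABi = model(ABi)
              S1[i] = np.mean(yB * (yABi - yA)) / var      # main (first-order) effect
              ST[i] = 0.5 * np.mean((yA - yABi)**2) / var  # total effect
          return S1, ST

      # example with a toy exposure model y = x0 + 2*x1 + x0*x2
      S1, ST = sobol_indices(lambda X: X[:, 0] + 2*X[:, 1] + X[:, 0]*X[:, 2],
                             bounds=[(0, 1)] * 3)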

  1. Methods for Improving Information from ’Undesigned’ Human Factors Experiments.

    DTIC Science & Technology

    Human factors engineering, Information processing, Regression analysis, Experimental design, Least squares method, Analysis of variance, Correlation techniques, Matrices (Mathematics), Multiple disciplines, Mathematical prediction

  2. Differential Variance Analysis: a direct method to quantify and visualize dynamic heterogeneities

    NASA Astrophysics Data System (ADS)

    Pastore, Raffaele; Pesce, Giuseppe; Caggioni, Marco

    2017-03-01

    Many amorphous materials show spatially heterogeneous dynamics, as different regions of the same system relax at different rates. Such a signature, known as Dynamic Heterogeneity, has been crucial to understand the nature of the jamming transition in simple model systems and is currently considered very promising to characterize more complex fluids of industrial and biological relevance. Unfortunately, measurements of dynamic heterogeneities typically require sophisticated experimental set-ups and are performed by few specialized groups. It is now possible to quantitatively characterize the relaxation process and the emergence of dynamic heterogeneities using a straightforward method, here validated on video microscopy data of hard-sphere colloidal glasses. We call this method Differential Variance Analysis (DVA), since it focuses on the variance of the differential frames, obtained subtracting images at different time-lags. Moreover, direct visualization of dynamic heterogeneities naturally appears in the differential frames, when the time-lag is set to the one corresponding to the maximum dynamic susceptibility. This approach opens the way to effectively characterize and tailor a wide variety of soft materials, from complex formulated products to biological tissues.
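
    A minimal version of the analysis, i.e. the variance of differential frames as a function of time-lag, can be written directly on an array of grayscale frames; the normalization comment below is an assumption about how a relaxation function would be extracted, not the authors' exact recipe.

      import numpy as np

      def differential_variance(frames, lags):
          # frames: (T, H, W) grayscale video; returns V(tau) for each lag tau
          frames = np.asarray(frames, dtype=np.float64)
          return np.array([np.mean(np.var(frames[tau:] - frames[:-tau], axis=(1, 2)))
                           for tau in lags])

      # a relaxation-like curve (rising from 0 toward 1 as the sample decorrelates)
      # could be obtained, assuming the largest lag fully decorrelates the frames:
      # V = differential_variance(frames, lags); q = V / V[-1]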

  3. Hedged Monte-Carlo: low variance derivative pricing with objective probabilities

    NASA Astrophysics Data System (ADS)

    Potters, Marc; Bouchaud, Jean-Philippe; Sestovic, Dragan

    2001-01-01

    We propose a new ‘hedged’ Monte-Carlo (HMC) method to price financial derivatives, which allows the optimal hedge to be determined simultaneously. The inclusion of the optimal hedging strategy allows one to reduce the financial risk associated with option trading and, for the very same reason, considerably reduces the variance of our HMC scheme as compared to previous methods. The explicit accounting of the hedging cost naturally converts the objective probability into the ‘risk-neutral’ one. This allows a consistent use of purely historical time series to price derivatives and obtain their residual risk. The method can be used to price a large class of exotic options, including those with path-dependent and early-exercise features.
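
    A stripped-down backward-induction version of such a scheme, with a polynomial basis shared by the price and the hedge, might look as follows; the basis choice, degree, and absence of rescaling are simplifications rather than the published method.

      import numpy as np

      def hedged_mc_price(paths, payoff, r=0.0, dt=1.0, deg=3):
          # paths: (n_paths, n_steps + 1) simulated underlying prices
          n, _ = paths.shape
          disc = np.exp(-r * dt)
          C = payoff(paths[:, -1])                     # option value at maturity
          for k in range(paths.shape[1] - 2, 0, -1):
              x, x_next = paths[:, k], paths[:, k + 1]
              P = np.vander(x, deg + 1)                # basis for the price C_k(x)
              H = P * (disc * x_next - x)[:, None]     # basis for hedge * price move
              coef, *_ = np.linalg.lstsq(np.hstack([P, H]), disc * C, rcond=None)
              C = P @ coef[:deg + 1]                   # fitted price at step k
          # at t = 0 all paths share the same spot: fit a scalar price and hedge
          dS0 = disc * paths[:, 1] - paths[:, 0]
          X0 = np.column_stack([np.ones(n), dS0])
          c0, _phi0 = np.linalg.lstsq(X0, disc * C, rcond=None)[0]
          return c0

      # hypothetical usage, e.g. for a European call on simulated GBM paths:
      # price = hedged_mc_price(gbm_paths, lambda s: np.maximum(s - K, 0.0), r, dt)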

  4. Specificity of reliable change models and review of the within-subjects standard deviation as an error term.

    PubMed

    Hinton-Bayre, Anton D

    2011-02-01

    There is an ongoing debate over the preferred method(s) for determining the reliable change (RC) in individual scores over time. In the present paper, specificity comparisons of several classic and contemporary RC models were made using a real data set. This included a more detailed review of a new RC model recently proposed in this journal, that used the within-subjects standard deviation (WSD) as the error term. It was suggested that the RC(WSD) was more sensitive to change and theoretically superior. The current paper demonstrated that even in the presence of mean practice effects, false-positive rates were comparable across models when reliability was good and initial and retest variances were equivalent. However, when variances differed, discrepancies in classification across models became evident. Notably, the RC using the WSD provided unacceptably high false-positive rates in this setting. It was considered that the WSD was never intended for measuring change in this manner. The WSD actually combines systematic and error variance. The systematic variance comes from measurable between-treatment differences, commonly referred to as practice effect. It was further demonstrated that removal of the systematic variance and appropriate modification of the residual error term for the purpose of testing individual change yielded an error term already published and criticized in the literature. A consensus on the RC approach is needed. To that end, further comparison of models under varied conditions is encouraged.
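
    For reference, the classic reliable-change form that such comparisons start from uses the standard error of the difference built from test-retest reliability; the sketch below (with an optional mean practice-effect correction) is that textbook version, not the WSD-based model the paper critiques.

      import numpy as np

      def reliable_change(baseline, retest, r_xx, practice=0.0):
          # RC = (retest - baseline - mean practice effect) / SE of the difference,
          # with SEdiff = sqrt(2) * SD_baseline * sqrt(1 - reliability)
          baseline = np.asarray(baseline, float)
          retest = np.asarray(retest, float)
          se_diff = np.sqrt(2.0) * baseline.std(ddof=1) * np.sqrt(1.0 - r_xx)
          rc = (retest - baseline - practice) / se_diff
          return rc        # |rc| > 1.96 is the conventional criterion for reliable change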

  5. A Versatile Omnibus Test for Detecting Mean and Variance Heterogeneity

    PubMed Central

    Bailey, Matthew; Kauwe, John S. K.; Maxwell, Taylor J.

    2014-01-01

    Recent research has revealed loci that display variance heterogeneity through various means such as biological disruption, linkage disequilibrium (LD), gene-by-gene (GxG), or gene-by-environment (GxE) interaction. We propose a versatile likelihood ratio test that allows joint testing for mean and variance heterogeneity (LRTMV) or either effect alone (LRTM or LRTV) in the presence of covariates. Using extensive simulations for our method and others, we found that all parametric tests were sensitive to non-normality regardless of any trait transformations. Coupling our test with the parametric bootstrap solves this issue. Using simulations and empirical data from a known mean-only functional variant we demonstrate how linkage disequilibrium (LD) can produce variance-heterogeneity loci (vQTL) in a predictable fashion based on differential allele frequencies, high D’ and relatively low r2 values. We propose that a joint test for mean and variance heterogeneity is more powerful than a variance-only test for detecting vQTL. This takes advantage of loci that also have mean effects without sacrificing much power to detect variance-only effects. We discuss using vQTL as an approach to detect gene-by-gene interactions and also how vQTL are related to relationship loci (rQTL) and how both can create prior hypotheses for each other and reveal the relationships between traits and possibly between components of a composite trait. PMID:24482837
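
    A bare-bones joint mean-variance likelihood-ratio test for a single variant (no covariates, normal errors, log-linear variance model) can be written with scipy; the covariate adjustment and parametric bootstrap the authors recommend are left out of this sketch.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import chi2

      def _nll_full(theta, y, g):
          b0, b1, c0, c1 = theta
          mu, logvar = b0 + b1 * g, c0 + c1 * g
          return 0.5 * np.sum(logvar + (y - mu) ** 2 / np.exp(logvar))

      def _nll_null(theta, y):
          b0, c0 = theta
          return 0.5 * np.sum(c0 + (y - b0) ** 2 / np.exp(c0))

      def lrt_mv(y, g):
          # genotype g coded 0/1/2; null model has neither mean nor variance effect
          y, g = np.asarray(y, float), np.asarray(g, float)
          start = [y.mean(), 0.0, np.log(y.var()), 0.0]
          full = minimize(_nll_full, start, args=(y, g), method="Nelder-Mead")
          null = minimize(_nll_null, [y.mean(), np.log(y.var())], args=(y,),
                          method="Nelder-Mead")
          stat = 2.0 * (null.fun - full.fun)
          return stat, chi2.sf(stat, df=2)        # joint test: 2 degrees of freedom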

  6. Multi-objective Optimization of Solar Irradiance and Variance at Pertinent Inclination Angles

    NASA Astrophysics Data System (ADS)

    Jain, Dhanesh; Lalwani, Mahendra

    2018-05-01

    The performance of a photovoltaic panel is highly affected by changes in atmospheric conditions and in the angle of inclination. This article evaluates the optimum tilt angle and orientation angle (surface azimuth angle) for a solar photovoltaic array in order to obtain maximum solar irradiance and to reduce the variance of radiation over different sets or subsets of time periods. Non-linear regression and adaptive neuro-fuzzy inference system (ANFIS) methods are used for predicting the solar radiation. The results of ANFIS are more accurate than those of non-linear regression. These results are further used for evaluating the correlation and for estimating the optimum combination of tilt angle and orientation angle with the help of the general algebraic modelling system and a multi-objective genetic algorithm. The hourly average solar irradiation is calculated for different combinations of tilt angle and orientation angle using horizontal-surface radiation data for Jodhpur (Rajasthan, India). The hourly average solar irradiance is calculated for three cases: zero variance, actual variance, and double variance, at different time scenarios. It is concluded that monthly collected solar radiation produces better results than bimonthly, seasonally, half-yearly and yearly collected solar radiation. The profit obtained with a monthly varying angle is 4.6% higher with zero variance and 3.8% higher with actual variance than with an annually fixed angle.

  7. Model-based variance-stabilizing transformation for Illumina microarray data.

    PubMed

    Lin, Simon M; Du, Pan; Huber, Wolfgang; Kibbe, Warren A

    2008-02-01

    Variance stabilization is a step in the preprocessing of microarray data that can greatly benefit the performance of subsequent statistical modeling and inference. Due to the often limited number of technical replicates for Affymetrix and cDNA arrays, achieving variance stabilization can be difficult. Although the Illumina microarray platform provides a larger number of technical replicates on each array (usually over 30 randomly distributed beads per probe), these replicates have not been leveraged in the current log2 data transformation process. We devised a variance-stabilizing transformation (VST) method that takes advantage of the technical replicates available on an Illumina microarray. We have compared VST with log2 and Variance-stabilizing normalization (VSN) by using the Kruglyak bead-level data (2006) and Barnes titration data (2005). The results of the Kruglyak data suggest that VST stabilizes variances of bead-replicates within an array. The results of the Barnes data show that VST can improve the detection of differentially expressed genes and reduce false-positive identifications. We conclude that although both VST and VSN are built upon the same model of measurement noise, VST stabilizes the variance better and more efficiently for the Illumina platform by leveraging the availability of a larger number of within-array replicates. The algorithms and Supplementary Data are included in the lumi package of Bioconductor, available at: www.bioconductor.org.
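
    Transforms in this family are variants of the generalized log (arsinh-type) function shown below; the actual VST estimates its parameters from the within-array bead-level replicates, which this sketch does not attempt.

      import numpy as np

      def generalized_log(x, c):
          # glog: approximately linear for small intensities and logarithmic for
          # large ones, so transformed intensities have roughly constant variance
          x = np.asarray(x, float)
          return np.log((x + np.sqrt(x ** 2 + c)) / 2.0)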

  8. An approach to the analysis of performance of quasi-optimum digital phase-locked loops.

    NASA Technical Reports Server (NTRS)

    Polk, D. R.; Gupta, S. C.

    1973-01-01

    An approach to the analysis of performance of quasi-optimum digital phase-locked loops (DPLL's) is presented. An expression for the characteristic function of the prior error in the state estimate is derived, and from this expression an infinite dimensional equation for the prior error variance is obtained. The prior error-variance equation is a function of the communication system model and the DPLL gain and is independent of the method used to derive the DPLL gain. Two approximations are discussed for reducing the prior error-variance equation to finite dimension. The effectiveness of one approximation in analyzing DPLL performance is studied.

  9. Beliefs and Intentions for Skin Protection and Exposure

    PubMed Central

    Heckman, Carolyn J.; Manne, Sharon L.; Kloss, Jacqueline D.; Bass, Sarah Bauerle; Collins, Bradley; Lessin, Stuart R.

    2010-01-01

    Objectives To evaluate Fishbein’s Integrative Model in predicting young adults’ skin protection, sun exposure, and indoor tanning intentions. Methods 212 participants completed an online survey. Results Damage distress, self-efficacy, and perceived control accounted for 34% of the variance in skin protection intentions. Outcome beliefs and low self-efficacy for sun avoidance accounted for 25% of the variance in sun exposure intentions. Perceived damage, outcome evaluation, norms, and indoor tanning prototype accounted for 32% of the variance in indoor tanning intentions. Conclusions Future research should investigate whether these variables predict exposure and protection behaviors and whether intervening can reduce young adults’ skin cancer risk behaviors. PMID:22251761

  10. Analyzing Test-Taking Behavior: Decision Theory Meets Psychometric Theory.

    PubMed

    Budescu, David V; Bo, Yuanchao

    2015-12-01

    We investigate the implications of penalizing incorrect answers to multiple-choice tests, from the perspective of both test-takers and test-makers. To do so, we use a model that combines a well-known item response theory model with prospect theory (Kahneman and Tversky, Prospect theory: An analysis of decision under risk, Econometrica 47:263-91, 1979). Our results reveal that when test-takers are fully informed of the scoring rule, the use of any penalty has detrimental effects for both test-takers (they are always penalized in excess, particularly those who are risk averse and loss averse) and test-makers (the bias of the estimated scores, as well as the variance and skewness of their distribution, increase as a function of the severity of the penalty).

  11. Perfectionism and Personality Disorders as Predictors of Symptoms and Interpersonal Problems.

    PubMed

    Dimaggio, Giancarlo; Lysaker, Paul H; Calarco, Teresa; Pedone, Roberto; Marsigli, Nicola; Riccardi, Ilaria; Sabatelli, Beatrice; Carcione, Antonino; Paviglianiti, Alessandra

    2015-01-01

    Maladaptive perfectionism is a common factor in many disorders and is correlated with some personality dysfunctions. Less clear is how dimensions such as concern over mistakes, doubts about actions, and parental criticism are linked to overall suffering. Additionally, correlations between perfectionism and personality disorders are poorly explored in clinical samples. In this study we compared treatment-seeking individuals (n=93) and a community sample (n=100) on dimensions of maladaptive perfectionism, personality disorders, symptoms, and interpersonal problems. Results in both samples revealed that maladaptive perfectionism was strongly associated with general suffering, interpersonal problems, and a broad range of personality-disordered traits. Excessive concern over one's errors, and to some extent doubts about actions, predicted unique additional variance beyond the presence of personality pathology in explaining symptoms and interpersonal problems.

  12. Self-esteem and self-efficacy; perceived parenting and family climate; and depression in university students.

    PubMed

    Oliver, J M; Paull, J C

    1995-07-01

    This study examined associations among self-esteem and self-efficacy; perceived unfavorable Parental Rearing Style (perceived PRS) and unfavorable family climate in the family of origin; and depression in undergraduates still in frequent contact with their families (N = 186). Unfavorable perceived PRS and family climate were construed as "affectionless control," in which parents and family provide little affection, but excessive control. Constructs were measured by the Self-Esteem Inventory, the Self-Efficacy Scale, the Child Report of Parental Behavior Inventory, the Family Environment Scale, and the Beck Inventory. Perceived "affectionless control" in both PRS and family climate accounted for about 13% of the variance in self-esteem, self-efficacy, and depression. Neither introversion nor depression mediated the relation between family socialization and self-esteem.

  13. Allocating Sample Sizes to Reduce Budget for Fixed-Effect 2×2 Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2016-01-01

    This article discusses the sample size requirements for the interaction, row, and column effects, respectively, by forming a linear contrast for a 2×2 factorial design for fixed-effects heterogeneous analysis of variance. The proposed method uses the Welch t test and its corresponding degrees of freedom to calculate the final sample size in a…

  14. A Framework for Analyzing Biometric Template Aging and Renewal Prediction

    DTIC Science & Technology

    2009-03-01

    databases has sufficient data to support template aging over an extended period of time. Another assumption is that there is significant variance to...mentioned above for enrollment also apply to verification. When combining enrollment and verification, there is a significant amount of variance that... significant advancement in the biometrics body of knowledge. This research presents the CTARP Framework, a novel foundational framework for methods of

  15. A New Method for Estimating the Variance Overlap between the Short and the Long Form of a Psychological Test

    ERIC Educational Resources Information Center

    Pfeiffer, Nils; Hagemann, Dirk; Backenstrass, Matthias

    2011-01-01

    In response to the low standards in short form development, Smith, McCarthy, and Anderson (2000) introduced a set of guidelines for the construction and evaluation of short forms of psychological tests. One of their recommendations requires researchers to show that the variance overlap between the short form and its long form is adequate. This…

  16. Risk aversion and risk seeking in multicriteria forest management: a Markov decision process approach

    Treesearch

    Joseph Buongiorno; Mo Zhou; Craig Johnston

    2017-01-01

    Markov decision process models were extended to reflect some consequences of the risk attitude of forestry decision makers. One approach consisted of maximizing the expected value of a criterion subject to an upper bound on the variance or, symmetrically, minimizing the variance subject to a lower bound on the expected value.  The other method used the certainty...

  17. Hydrogeophysical Assessment of Aquifer Uncertainty Using Simulated Annealing driven MRF-Based Stochastic Joint Inversion

    NASA Astrophysics Data System (ADS)

    Oware, E. K.

    2017-12-01

    Geophysical quantification of hydrogeological parameters typically involves limited noisy measurements coupled with inadequate understanding of the target phenomenon. Hence, a deterministic solution is unrealistic in light of the largely uncertain inputs. Stochastic imaging (SI), in contrast, provides multiple equiprobable realizations that enable probabilistic assessment of aquifer properties in a realistic manner. Generation of geologically realistic prior models is central to SI frameworks. Higher-order statistics for representing prior geological features in SI are, however, usually borrowed from training images (TIs), which may produce undesirable outcomes if the TIs are unrepresentative of the target structures. The Markov random field (MRF)-based SI strategy provides a data-driven alternative to TI-based SI algorithms. In the MRF-based method, the simulation of spatial features is guided by Gibbs energy (GE) minimization. Local configurations with smaller GEs have higher likelihood of occurrence and vice versa. The parameters of the Gibbs distribution for computing the GE are estimated from the hydrogeophysical data, thereby enabling the generation of site-specific structures in the absence of reliable TIs. In Metropolis-like SI methods, the variance of the transition probability controls the jump-size. The procedure is a standard Markov chain Monte Carlo (McMC) method when a constant variance is assumed, and becomes simulated annealing (SA) when the variance (cooling temperature) is allowed to decrease gradually with time. We observe that in certain problems, the large variance typically employed at the beginning to hasten burn-in may not be ideal for sampling at the equilibrium state. The power of SA stems from its flexibility to adaptively scale the variance at different stages of the sampling. Degeneration of results was reported in a previous implementation of the MRF-based SI strategy based on a constant variance. Here, we present an updated version of the algorithm based on SA that appears to resolve the degeneration problem with seemingly improved results. We illustrate the performance of the SA version with a joint inversion of time-lapse concentration and electrical resistivity measurements in a hypothetical trinary hydrofacies aquifer characterization problem.
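
    The contrast drawn here (a constant proposal variance gives a standard random-walk McMC sampler, while a gradually shrinking variance gives a simulated-annealing-like search) can be illustrated with a generic sampler; the geometric cooling schedule below is an arbitrary choice for the sketch, not the algorithm of the abstract.

      import numpy as np

      def sa_metropolis(logpost, x0, n_iter=20000, sigma0=1.0, sigma_min=0.05, seed=0):
          # Random-walk Metropolis whose proposal std decays geometrically:
          # constant sigma -> standard McMC; decaying sigma -> SA-like jump control.
          rng = np.random.default_rng(seed)
          x = np.array(x0, float)
          lp = logpost(x)
          decay = (sigma_min / sigma0) ** (1.0 / n_iter)
          samples = np.empty((n_iter, x.size))
          sigma = sigma0
          for t in range(n_iter):
              prop = x + sigma * rng.standard_normal(x.size)
              lp_prop = logpost(prop)
              if np.log(rng.random()) < lp_prop - lp:
                  x, lp = prop, lp_prop
              samples[t] = x
              sigma *= decay        # "cooling": shrink the jump size over time
          return samples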

  18. Excess zinc ions are a competitive inhibitor for carboxypeptidase A

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirose, J.; Ando, S.; Kidani, Y.

    The mechanism for inhibition of enzyme activity by excess zinc ions has been studied by kinetic and equilibrium dialysis methods at pH 8.2, I = 0.5 M. With carboxypeptidase A (bovine pancreas), peptide (carbobenzoxyglycyl-L-phenylalanine and hippuryl-L-phenylalanine) and ester (hippuryl-L-phenyl lactate) substrates were inhibited competitively by excess zinc ions. The K_i values for excess zinc ions with carboxypeptidase A at pH 8.2 are all similar. The apparent constant for dissociation of excess zinc ions from carboxypeptidase A was also obtained by equilibrium dialysis at pH 8.2 and was 2.4 x 10^-5 M, very close to the K_i values above. With arsanilazotyrosine-248 carboxypeptidase A ((Azo-CPD)Zn), hippuryl-L-phenylalanine, carbobenzoxyglycyl-L-phenylalanine, and hippuryl-L-phenyl lactate were also inhibited with a competitive pattern by excess zinc ions, and the K_i values were (3.0-3.5) x 10^-5 M. The apparent constant for dissociation of excess zinc ions from arsanilazotyrosine-248 carboxypeptidase A, which was obtained from absorption changes at 510 nm, was 3.2 x 10^-5 M and is similar to the K_i values for ((Azo-CPD)Zn). The apparent dissociation and inhibition constants, which were obtained by inhibition of enzyme activity and spectrophotometric and equilibrium dialysis methods with native carboxypeptidase A and arsanilazotyrosine-248 carboxypeptidase A, were almost the same. This agreement between the apparent dissociation and inhibition constants indicates that the zinc binding to the enzymes directly relates to the inhibition of enzyme activity by excess zinc ions. Excess zinc ions were competitive inhibitors for both peptide and ester substrates. This behavior is believed to arise by the excess zinc ions fixing the enzyme in a conformation to which the substrates cannot bind.

  19. The Hurt of Judgment in Excessive Weight Women: A Hermeneutic Study.

    PubMed

    Mehrdad, Neda; Hossein Abbasi, Nahid; Nikbakht Nasrabadi, Alireza

    2015-04-23

    Excess weight is one of the growing problems of present-day society and one of the threatening health conditions around the world. Despite many efforts at prevention and treatment, or even surgery, the prevalence of excess weight has not decreased worldwide. While most studies of excess weight have concentrated on why people gain excess weight or how its prevention and treatment should be performed, there is a lack of knowledge about what people with excess weight really experience in their daily lives. Understanding the lived experience of excess weight in women is linked with their health and society's health, and it indirectly develops nursing knowledge to improve the quality of, and access to, holistic health care for women with excess weight. The aim of the study was to describe, with a deeper understanding, the lived experience of excess weight in women. Using a hermeneutic phenomenological approach and van Manen's analysis method, in-depth semi-structured interviews were conducted with twelve women who had lived experience of excess weight. The hurt of judgment was the main theme that emerged in the process of data analysis. This theme was derived from three sub-themes: social judgment, being different, and being seen. These findings can prove helpful in promoting nursing knowledge concerning a holistic approach to communicating with people who carry excess weight.

  20. Effect of Foot Hyperpronation on Lumbar Lordosis and Thoracic Kyphosis in Standing Position Using 3-Dimensional Ultrasound-Based Motion Analysis System

    PubMed Central

    Farokhmanesh, Khatere; Shirzadian, Toraj; Mahboubi, Mohammad; Shahri, Mina Neyakan

    2014-01-01

    Based on clinical observations, foot hyperpronation is very common. Excessive pronation (hyperpronation) can cause malalignment of the lower extremities. This most often leads to functional and structural deficits. The aim of this study was to assess the effect of foot hyperpronation on lumbar lordosis and thoracic kyphosis. Thirty-five healthy subjects (age range, 18-30 years) were asked to stand on 4 positions including a flat surface (normal position) and on wedges angled at 10, 15, and 20 degrees. Sampling was done using simple random sampling. Measurements were made by a motion analysis system. For data analysis, the SPSS software (ver. 18) using paired t-test and repeated measures analysis of variance (ANOVA) was applied. The eversion created by the wedges caused a significant increase in lumbar lordosis and thoracic kyphosis. The most significant change occurred between two consecutive positions of flat surface and the first wedge. The t-test for repeated measures showed a high correlation between each two consecutive positions. The results showed that with increased bilateral foot pronation, lumbar lordosis and thoracic kyphosis increased as well. In fact, each of these results is a compensation phenomenon. Further studies are required to determine long-term results of excessive foot pronation and its probable effect on damage progression. PMID:25169004

  1. Effect of foot hyperpronation on lumbar lordosis and thoracic kyphosis in standing position using 3-dimensional ultrasound-based motion analysis system.

    PubMed

    Farokhmanesh, Khatere; Shirzadian, Toraj; Mahboubi, Mohammad; Shahri, Mina Neyakan

    2014-06-17

    Based on clinical observations, foot hyperpronation is very common. Excessive pronation (hyperpronation) can cause malalignment of the lower extremities. This most often leads to functional and structural deficits. The aim of this study was to assess the effect of foot hyperpronation on lumbar lordosis and thoracic kyphosis. Thirty-five healthy subjects (age range, 18-30 years) were asked to stand on 4 positions including a flat surface (normal position) and on wedges angled at 10, 15, and 20 degrees. Sampling was done using simple random sampling. Measurements were made by a motion analysis system. For data analysis, the SPSS software (ver. 18) using paired t-test and repeated measures analysis of variance (ANOVA) was applied. The eversion created by the wedges caused a significant increase in lumbar lordosis and thoracic kyphosis. The most significant change occurred between two consecutive positions of flat surface and the first wedge. The t-test for repeated measures showed a high correlation between each two consecutive positions. The results showed that with increased bilateral foot pronation, lumbar lordosis and thoracic kyphosis increased as well. In fact, each of these results is a compensation phenomenon. Further studies are required to determine long-term results of excessive foot pronation and its probable effect on damage progression.

  2. Video denoising using low rank tensor decomposition

    NASA Astrophysics Data System (ADS)

    Gui, Lihua; Cui, Gaochao; Zhao, Qibin; Wang, Dongsheng; Cichocki, Andrzej; Cao, Jianting

    2017-03-01

    Reducing noise in a video sequence is of vital importance in many real-world applications. One popular method is block-matching collaborative filtering. However, the main drawback of this method is that the noise standard deviation for the whole video sequence must be known in advance. In this paper, we present a tensor-based denoising framework that considers 3D patches instead of 2D patches. By collecting similar 3D patches non-locally, we employ low-rank tensor decomposition for collaborative filtering. Since we specify a non-informative prior over the noise precision parameter, the noise variance can be inferred automatically from the observed video data. Therefore, our method is more practical, as it does not require knowing the noise variance. Experiments on video denoising demonstrate the effectiveness of the proposed method.
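
    As a simplified, matrix-rather-than-tensor analogue of the idea, a stack of similar vectorized patches can be denoised by truncating its SVD; the Bayesian rank and noise inference described in the paper is replaced here by a fixed rank chosen by the user.

      import numpy as np

      def lowrank_denoise(patch_stack, rank):
          # patch_stack: (n_patches, patch_len) matrix of similar, vectorized patches
          U, s, Vt = np.linalg.svd(patch_stack, full_matrices=False)
          s[rank:] = 0.0                     # keep the signal subspace, drop the rest
          return (U * s) @ Vt                # denoised patches, same shape as input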

  3. Search-free license plate localization based on saliency and local variance estimation

    NASA Astrophysics Data System (ADS)

    Safaei, Amin; Tang, H. L.; Sanei, S.

    2015-02-01

    In recent years, the performance and accuracy of automatic license plate number recognition (ALPR) systems have greatly improved; however, the increasing number of applications for such systems has made ALPR research more challenging than ever. The inherent computational complexity of search-dependent algorithms remains a major problem for current ALPR systems. This paper proposes a novel search-free method of localization based on the estimation of saliency and local variance. Gabor functions are then used to validate the choice of candidate license plate. The algorithm was applied to three image datasets with different levels of complexity and the results compared with a number of benchmark methods, particularly in terms of speed. The proposed method outperforms the state-of-the-art methods and can be used for real-time applications.
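
    A windowed local-variance map, one ingredient of such a search-free localizer, takes only a few lines; the window size and percentile threshold below are placeholders, not the paper's tuned values.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def local_variance(gray, win=15):
          # windowed variance: E[x^2] - (E[x])^2 over a win x win neighbourhood
          g = np.asarray(gray, np.float64)
          mean = uniform_filter(g, win)
          mean_sq = uniform_filter(g * g, win)
          return np.clip(mean_sq - mean * mean, 0.0, None)

      # candidate plate pixels are unusually textured regions, e.g.:
      # v = local_variance(img); mask = v > np.percentile(v, 99)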

  4. WE-AB-207A-12: HLCC Based Quantitative Evaluation Method of Image Artifact in Dental CBCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Y; Wu, S; Qi, H

    Purpose: Image artifacts are usually evaluated qualitatively via visual observation of the reconstructed images, which is susceptible to subjective factors due to the lack of an objective evaluation criterion. In this work, we propose a Helgason-Ludwig consistency condition (HLCC) based evaluation method to quantify the severity level of different image artifacts in dental CBCT. Methods: Our evaluation method consists of four steps: 1) Acquire cone-beam CT (CBCT) projections; 2) Convert the 3D CBCT projection to a fan-beam projection by extracting its central-plane projection; 3) Convert the fan-beam projection to a parallel-beam projection utilizing a sinogram-based or detail-based rebinning algorithm; 4) Obtain the HLCC profile by integrating the parallel-beam projection per view and calculate the wave percentage and variance of the HLCC profile, which can be used to describe the severity level of image artifacts. Results: Several sets of dental CBCT projections containing only one type of artifact (i.e. geometry, scatter, beam hardening, lag and noise artifact) were simulated using gDRR, a GPU tool developed for efficient, accurate, and realistic simulation of CBCT projections. These simulated CBCT projections were used to test our proposed method. The HLCC profile wave percentage and variance induced by geometry distortion are about 3∼21 times and 16∼393 times as large as those of the artifact-free projection, respectively. The increase factors of wave percentage and variance are 6 and 133 times for beam hardening, 19 and 1184 times for scatter, and 4 and 16 times for lag artifacts, respectively. In contrast, for the noisy projection the wave percentage, variance and inconsistency level are almost the same as those of the noise-free one. Conclusion: We have proposed a quantitative evaluation method of image artifact based on HLCC theory. According to our simulation results, the severity of different artifact types is found to be in the following order: Scatter > Geometry > Beam hardening > Lag > Noise > Artifact-free in dental CBCT.
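
    For the zeroth-order consistency condition, step 4 reduces to summing each parallel-beam view and summarizing how much that per-view total "waves"; the exact definition of the wave percentage below is an assumption, not the authors' published formula.

      import numpy as np

      def hlcc_profile_metrics(parallel_sino):
          # parallel_sino: (n_views, n_detectors); for consistent data the integral
          # of each parallel-beam view (zeroth HLCC moment) is the same for all views
          profile = parallel_sino.sum(axis=1)
          wave_percentage = 100.0 * (profile.max() - profile.min()) / profile.mean()
          return profile, wave_percentage, profile.var(ddof=1)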

  5. Dimensionality and noise in energy selective x-ray imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alvarez, Robert E.

    Purpose: To develop and test a method to quantify the effect of dimensionality on the noise in energy selective x-ray imaging. Methods: The Cramér-Rao lower bound (CRLB), a universal lower limit of the covariance of any unbiased estimator, is used to quantify the noise. It is shown that increasing dimensionality always increases, or at best leaves the same, the variance. An analytic formula for the increase in variance in an energy selective x-ray system is derived. The formula is used to gain insight into the dependence of the increase in variance on the properties of the additional basis functions, the measurement noise covariance, and the source spectrum. The formula is also used with computer simulations to quantify the dependence of the additional variance on these factors. Simulated images of an object with three materials are used to demonstrate the trade-off of increased information with dimensionality and noise. The images are computed from energy selective data with a maximum likelihood estimator. Results: The increase in variance depends most importantly on the dimension and on the properties of the additional basis functions. With the attenuation coefficients of cortical bone, soft tissue, and adipose tissue as the basis functions, the increase in variance of the bone component from two to three dimensions is 1.4 × 10^3. With the soft tissue component, it is 2.7 × 10^4. If the attenuation coefficient of a high atomic number contrast agent is used as the third basis function, there is only a slight increase in the variance from two to three basis functions, 1.03 and 7.4 for the bone and soft tissue components, respectively. The changes in spectrum shape with beam hardening also have a substantial effect. They increase the variance by a factor of approximately 200 for the bone component and 220 for the soft tissue component as the soft tissue object thickness increases from 1 to 30 cm. Decreasing the energy resolution of the detectors increases the variance of the bone component markedly with three dimension processing, approximately a factor of 25 as the resolution decreases from 100 to 3 bins. The increase with two dimension processing for adipose tissue is a factor of two and with the contrast agent as the third material for two or three dimensions is also a factor of two for both components. The simulated images show that a maximum likelihood estimator can be used to process energy selective x-ray data to produce images with noise close to the CRLB. Conclusions: The method presented can be used to compute the effects of the object attenuation coefficients and the x-ray system properties on the relationship of dimensionality and noise in energy selective x-ray imaging systems.
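
    The core quantity is the Cramér-Rao bound for a linearized measurement model; the toy numbers below only illustrate the paper's first point, that adding a basis function cannot decrease the variance of the components already present (they are not the paper's attenuation data).

      import numpy as np

      def crlb(A, R):
          # lower bound on the covariance of any unbiased estimator for a linearized
          # model with sensitivity matrix A and measurement noise covariance R
          return np.linalg.inv(A.T @ np.linalg.inv(R) @ A)

      rng = np.random.default_rng(1)
      A3 = rng.random((8, 3))                # 8 spectral measurements, 3 basis functions
      R = np.diag(rng.random(8) + 0.1)
      var2 = np.diag(crlb(A3[:, :2], R))     # two-dimensional processing
      var3 = np.diag(crlb(A3, R))[:2]        # same components with a third basis function
      assert np.all(var3 >= var2 - 1e-12)    # extra dimensionality never reduces variance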

  6. Quantifying the impact of fixed effects modeling of clusters in multiple imputation for cluster randomized trials

    PubMed Central

    Andridge, Rebecca. R.

    2011-01-01

    In cluster randomized trials (CRTs), identifiable clusters rather than individuals are randomized to study groups. Resulting data often consist of a small number of clusters with correlated observations within a treatment group. Missing data often present a problem in the analysis of such trials, and multiple imputation (MI) has been used to create complete data sets, enabling subsequent analysis with well-established analysis methods for CRTs. We discuss strategies for accounting for clustering when multiply imputing a missing continuous outcome, focusing on estimation of the variance of group means as used in an adjusted t-test or ANOVA. These analysis procedures are congenial to (can be derived from) a mixed effects imputation model; however, this imputation procedure is not yet available in commercial statistical software. An alternative approach that is readily available and has been used in recent studies is to include fixed effects for cluster, but the impact of using this convenient method has not been studied. We show that under this imputation model the MI variance estimator is positively biased and that smaller ICCs lead to larger overestimation of the MI variance. Analytical expressions for the bias of the variance estimator are derived in the case of data missing completely at random (MCAR), and cases in which data are missing at random (MAR) are illustrated through simulation. Finally, various imputation methods are applied to data from the Detroit Middle School Asthma Project, a recent school-based CRT, and differences in inference are compared. PMID:21259309

  7. Variance-Based Sensitivity Analysis to Support Simulation-Based Design Under Uncertainty

    DOE PAGES

    Opgenoord, Max M. J.; Allaire, Douglas L.; Willcox, Karen E.

    2016-09-12

    Sensitivity analysis plays a critical role in quantifying uncertainty in the design of engineering systems. A variance-based global sensitivity analysis is often used to rank the importance of input factors, based on their contribution to the variance of the output quantity of interest. However, this analysis assumes that all input variability can be reduced to zero, which is typically not the case in a design setting. Distributional sensitivity analysis (DSA) instead treats the uncertainty reduction in the inputs as a random variable, and defines a variance-based sensitivity index function that characterizes the relative contribution to the output variance as a function of the amount of uncertainty reduction. This paper develops a computationally efficient implementation for the DSA formulation and extends it to include distributions commonly used in engineering design under uncertainty. Application of the DSA method to the conceptual design of a commercial jetliner demonstrates how the sensitivity analysis provides valuable information to designers and decision-makers on where and how to target uncertainty reduction efforts.

  8. Variance-Based Sensitivity Analysis to Support Simulation-Based Design Under Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Opgenoord, Max M. J.; Allaire, Douglas L.; Willcox, Karen E.

    Sensitivity analysis plays a critical role in quantifying uncertainty in the design of engineering systems. A variance-based global sensitivity analysis is often used to rank the importance of input factors, based on their contribution to the variance of the output quantity of interest. However, this analysis assumes that all input variability can be reduced to zero, which is typically not the case in a design setting. Distributional sensitivity analysis (DSA) instead treats the uncertainty reduction in the inputs as a random variable, and defines a variance-based sensitivity index function that characterizes the relative contribution to the output variance as a function of the amount of uncertainty reduction. This paper develops a computationally efficient implementation for the DSA formulation and extends it to include distributions commonly used in engineering design under uncertainty. Application of the DSA method to the conceptual design of a commercial jetliner demonstrates how the sensitivity analysis provides valuable information to designers and decision-makers on where and how to target uncertainty reduction efforts.

  9. Empirical Bayes estimation of undercount in the decennial census.

    PubMed

    Cressie, N

    1989-12-01

    Empirical Bayes methods are used to estimate the extent of the undercount at the local level in the 1980 U.S. census. "Grouping of like subareas from areas such as states, counties, and so on into strata is a useful way of reducing the variance of undercount estimators. By modeling the subareas within a stratum to have a common mean and variances inversely proportional to their census counts, and by taking into account sampling of the areas (e.g., by dual-system estimation), empirical Bayes estimators that compromise between the (weighted) stratum average and the sample value can be constructed. The amount of compromise is shown to depend on the relative importance of stratum variance to sampling variance. These estimators are evaluated at the state level (51 states, including Washington, D.C.) and stratified on race/ethnicity (3 strata) using data from the 1980 postenumeration survey (PEP 3-8, for the noninstitutional population)." excerpt
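
    The compromise described here is the usual empirical-Bayes shrinkage weight; this sketch uses an unweighted stratum mean and known variance components, both simplifications of the dual-system setting in the paper.

      import numpy as np

      def eb_undercount(y, sampling_var, stratum_var):
          # y: direct (e.g. dual-system) undercount estimates for subareas in a stratum
          y = np.asarray(y, float)
          w = stratum_var / (stratum_var + np.asarray(sampling_var, float))
          # shrink noisy subarea estimates toward the stratum average; the larger the
          # sampling variance relative to the stratum variance, the stronger the pull
          return w * y + (1.0 - w) * y.mean()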

  10. Excessive behaviors in clinical practice--A state of the art article.

    PubMed

    Punzi, Elisabeth H

    2016-01-01

    This paper concerns difficulties with excessive food intake, sexual activities, romantic relationships, gambling, Internet use, shopping, and exercise: behaviors that might cause considerable suffering. Excessive behaviors are seen as expressions of underlying difficulties that often co-occur with other psychological difficulties, and behaviors may accompany or replace each other. Moreover, they might pass unnoticed in clinical practice. Given the complexity of excessive behaviors, integrated and individualized treatment has been recommended. This paper presents an overview of the terminology concerning excessive behaviors, and the impact of naming is acknowledged. Thereafter, methods for identification and assessment, as well as treatment needs, are discussed. Because identification, assessment, and treatment occur in an interaction between client and practitioner, this paper presents a discussion of the need to empower practitioners to identify and assess excessive behaviors and provide an integrated treatment. Moreover, the need to support practitioners' capacity to handle and tolerate the overwhelming suffering and the negative consequences connected to excessive behaviors is discussed. Qualitative studies are suggested in order to understand the meaning of excessive behaviors, treatment needs, and the interaction between client and practitioner.

  11. Mapping the Regional Influence of Genetics on Brain Structure Variability - A Tensor-Based Morphometry Study

    PubMed Central

    Brun, Caroline; Leporé, Natasha; Pennec, Xavier; Lee, Agatha D.; Barysheva, Marina; Madsen, Sarah K.; Avedissian, Christina; Chou, Yi-Yu; de Zubicaray, Greig I.; McMahon, Katie; Wright, Margaret; Toga, Arthur W.; Thompson, Paul M.

    2010-01-01

    Genetic and environmental factors influence brain structure and function profoundly. The search for heritable anatomical features and their influencing genes would be accelerated with detailed 3D maps showing the degree to which brain morphometry is genetically determined. As part of an MRI study that will scan 1150 twins, we applied Tensor-Based Morphometry to compute morphometric differences in 23 pairs of identical twins and 23 pairs of same-sex fraternal twins (mean age: 23.8 ± 1.8 SD years). All 92 twins’ 3D brain MRI scans were nonlinearly registered to a common space using a Riemannian fluid-based warping approach to compute volumetric differences across subjects. A multi-template method was used to improve volume quantification. Vector fields driving each subject’s anatomy onto the common template were analyzed to create maps of local volumetric excesses and deficits relative to the standard template. Using a new structural equation modeling method, we computed the voxelwise proportion of variance in volumes attributable to additive (A) or dominant (D) genetic factors versus shared environmental (C) or unique environmental factors (E). The method was also applied to various anatomical regions of interest (ROIs). As hypothesized, the overall volumes of the brain, basal ganglia, thalamus, and each lobe were under strong genetic control; local white matter volumes were mostly controlled by common environment. After adjusting for individual differences in overall brain scale, genetic influences were still relatively high in the corpus callosum and in early-maturing brain regions such as the occipital lobes, while environmental influences were greater in frontal brain regions which have a more protracted maturational time-course. PMID:19446645
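
    At a single voxel or ROI, the simplest ancestor of this structural-equation approach is Falconer's comparison of identical and fraternal twin correlations; the sketch below ignores the additive/dominant distinction and the covariate adjustment used in the study.

      import numpy as np

      def falconer_ace(mz_pairs, dz_pairs):
          # *_pairs: (n_pairs, 2) trait values (e.g. a regional volume) per twin pair
          r_mz = np.corrcoef(mz_pairs[:, 0], mz_pairs[:, 1])[0, 1]
          r_dz = np.corrcoef(dz_pairs[:, 0], dz_pairs[:, 1])[0, 1]
          a2 = 2.0 * (r_mz - r_dz)     # additive genetic share of variance
          c2 = 2.0 * r_dz - r_mz       # shared-environment share
          e2 = 1.0 - r_mz              # unique environment plus measurement error
          return a2, c2, e2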

  12. FW/CADIS-Ω: An Angle-Informed Hybrid Method for Neutron Transport

    NASA Astrophysics Data System (ADS)

    Munk, Madicken

    The development of methods for deep-penetration radiation transport is of continued importance for radiation shielding, nonproliferation, nuclear threat reduction, and medical applications. As these applications become more ubiquitous, the need for transport methods that can accurately and reliably model the systems' behavior will persist. For these types of systems, hybrid methods are often the best choice to obtain a reliable answer in a short amount of time. Hybrid methods leverage the speed and uniform uncertainty distribution of a deterministic solution to bias Monte Carlo transport to reduce the variance in the solution. At present, the Consistent Adjoint-Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) hybrid methods are the gold standard by which to model systems that have deeply-penetrating radiation. They use an adjoint scalar flux to generate variance reduction parameters for Monte Carlo. However, in problems where there exists strong anisotropy in the flux, CADIS and FW-CADIS are not as effective at reducing the problem variance as in isotropic problems. This dissertation covers the theoretical background, implementation, and characterization of a set of angle-informed hybrid methods that can be applied to strongly anisotropic deep-penetration radiation transport problems. These methods use a forward-weighted adjoint angular flux to generate variance reduction parameters for Monte Carlo. As a result, they leverage both adjoint and contributon theory for variance reduction. They have been named CADIS-Ω and FW-CADIS-Ω. To characterize CADIS-Ω, several characterization problems with flux anisotropies were devised. These problems contain different physical mechanisms by which flux anisotropy is induced. Additionally, a series of novel anisotropy metrics by which to quantify flux anisotropy are used to characterize the methods beyond standard Figure of Merit (FOM) and relative error metrics. As a result, a more thorough investigation into the effects of anisotropy and the degree of anisotropy on Monte Carlo convergence is possible. The results from the characterization of CADIS-Ω show that it performs best in strongly anisotropic problems that have preferential particle flowpaths, but only if the flowpaths are not composed of air. Further, the characterization of the method's sensitivity to deterministic angular discretization showed that CADIS-Ω has less sensitivity to discretization than CADIS for both quadrature order and PN order. However, more variation in the results was observed in response to changing quadrature order than PN order. Further, as a result of the forward-normalization in the Ω methods, ray effect mitigation was observed in many of the characterization problems. The characterization of the CADIS-Ω method in this dissertation serves to outline a path forward for further hybrid methods development. In particular, the response of the Ω method to changes in quadrature order and PN order, together with its ray effect mitigation, are strong indicators that the method is more resilient than its predecessors to strong anisotropies in the flux. With further method characterization, the full potential of the Ω methods can be realized. The method can then be applied to geometrically complex, materially diverse problems and help to advance system modelling in deep-penetration radiation transport problems with strong anisotropies in the flux.
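
    For orientation, plain CADIS derives its biased source and weight-window targets from a scalar adjoint flux as sketched below; the Ω methods differ precisely in replacing that scalar flux with a forward-weighted adjoint angular flux, which this sketch does not attempt to reproduce.

      import numpy as np

      def cadis_parameters(source, adjoint_flux):
          # source, adjoint_flux: 1-D arrays over mesh cells (scalar-flux CADIS only)
          response = np.sum(source * adjoint_flux)          # estimated detector response
          biased_source = source * adjoint_flux / response  # importance-weighted sampling
          ww_centers = response / np.maximum(adjoint_flux, 1e-30)  # target particle weights
          return biased_source, ww_centers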

  13. A proposed case-control framework to probabilistically classify individual deaths as expected or excess during extreme hot weather events.

    PubMed

    Henderson, Sarah B; Gauld, Jillian S; Rauch, Stephen A; McLean, Kathleen E; Krstic, Nikolas; Hondula, David M; Kosatsky, Tom

    2016-11-15

    Most excess deaths that occur during extreme hot weather events do not have natural heat recorded as an underlying or contributing cause. This study aims to identify the specific individuals who died because of hot weather using only secondary data. A novel approach was developed in which the expected number of deaths was repeatedly sampled from all deaths that occurred during a hot weather event, and compared with deaths during a control period. The deaths were compared with respect to five factors known to be associated with hot weather mortality. Individuals were ranked by their presence in significant models over 100 trials of 10,000 repetitions. Those with the highest rankings were identified as probable excess deaths. Sensitivity analyses were performed on a range of model combinations. These methods were applied to a 2009 hot weather event in greater Vancouver, Canada. The excess deaths identified were sensitive to differences in model combinations, particularly between univariate and multivariate approaches. One multivariate and one univariate combination were chosen as the best models for further analyses. The individuals identified by multiple combinations suggest that marginalized populations in greater Vancouver are at higher risk of death during hot weather. This study proposes novel methods for classifying specific deaths as expected or excess during a hot weather event. Further work is needed to evaluate performance of the methods in simulation studies and against clinically identified cases. If confirmed, these methods could be applied to a wide range of populations and events of interest.

  14. Performance of chromatographic systems to model soil-water sorption.

    PubMed

    Hidalgo-Rodríguez, Marta; Fuguet, Elisabet; Ràfols, Clara; Rosés, Martí

    2012-08-24

    A systematic approach for evaluating how well chromatographic systems model the sorption of neutral organic compounds by soil from water is presented in this work. It is based on the examination of the three sources of error that determine the overall variance obtained when soil-water partition coefficients are correlated against chromatographic retention factors: the variance of the soil-water sorption data, the variance of the chromatographic data, and the variance attributed to the dissimilarity between the two systems. These contributions of variance are easily predicted through the characterization of the systems by the solvation parameter model. According to this method, several chromatographic systems besides the reference octanol-water partition system have been selected to test their performance in the emulation of soil-water sorption. The results from the experimental correlations agree with the predicted variances. The high-performance liquid chromatography system based on an immobilized artificial membrane and the micellar electrokinetic chromatography systems of sodium dodecylsulfate and sodium taurocholate provide the most precise correlation models. They have been shown to predict the soil-water sorption coefficients of several tested herbicides well. Octanol-water partitions and high-performance liquid chromatography measurements using C18 columns are less suited for the estimation of soil-water partition coefficients. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. Sampling hazelnuts for aflatoxin: uncertainty associated with sampling, sample preparation, and analysis.

    PubMed

    Ozay, Guner; Seyhan, Ferda; Yilmaz, Aysun; Whitaker, Thomas B; Slate, Andrew B; Giesbrecht, Francis

    2006-01-01

    The variability associated with the aflatoxin test procedure used to estimate aflatoxin levels in bulk shipments of hazelnuts was investigated. Sixteen 10 kg samples of shelled hazelnuts were taken from each of 20 lots that were suspected of aflatoxin contamination. The total variance associated with testing shelled hazelnuts was estimated and partitioned into sampling, sample preparation, and analytical variance components. Each variance component increased as aflatoxin concentration (either B1 or total) increased. With the use of regression analysis, mathematical expressions were developed to model the relationship between aflatoxin concentration and the total, sampling, sample preparation, and analytical variances. The expressions for these relationships were used to estimate the variance for any sample size, subsample size, and number of analyses for a specific aflatoxin concentration. The sampling, sample preparation, and analytical variances associated with estimating aflatoxin in a hazelnut lot at a total aflatoxin level of 10 ng/g and using a 10 kg sample, a 50 g subsample, dry comminution with a Robot Coupe mill, and a high-performance liquid chromatographic analytical method are 174.40, 0.74, and 0.27, respectively. The sampling, sample preparation, and analytical steps of the aflatoxin test procedure accounted for 99.4, 0.4, and 0.2% of the total variability, respectively.
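
    The percentage contributions quoted above follow directly from the reported variance components; a short check using the values at a total aflatoxin level of 10 ng/g:

        # Reported variance components at 10 ng/g total aflatoxin
        # (10 kg sample, 50 g subsample, HPLC analysis).
        components = {"sampling": 174.40, "sample preparation": 0.74, "analysis": 0.27}

        total = sum(components.values())
        for step, var in components.items():
            print(f"{step:>18s}: {var:7.2f}  ({100 * var / total:4.1f}% of total variance)")
        # -> sampling ~99.4%, sample preparation ~0.4%, analysis ~0.2%, matching the abstract.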

  16. Novel health monitoring method using an RGB camera.

    PubMed

    Hassan, M A; Malik, A S; Fofi, D; Saad, N; Meriaudeau, F

    2017-11-01

    In this paper we present a novel health monitoring method that estimates the heart rate and respiratory rate using an RGB camera. The heart rate and the respiratory rate are estimated from the photoplethysmography (PPG) and the respiratory motion. The method mainly operates by using the green spectrum of the RGB camera to generate a multivariate PPG signal and performing multivariate de-noising on the video signal to extract the resultant PPG signal. A periodicity-based voting scheme (PVS) was used to measure the heart rate and respiratory rate from the estimated PPG signal. We evaluated our proposed method against a state-of-the-art heart rate measuring method for two scenarios using the MAHNOB-HCI database and a self-collected naturalistic-environment database. The methods were furthermore evaluated in various naturalistic-environment scenarios such as a motion variance session and a skin tone variance session. Our proposed method operated robustly during the experiments and outperformed the state-of-the-art heart rate measuring methods by compensating for the effects of the naturalistic environment.

  17. Fully moderated T-statistic for small sample size gene expression arrays.

    PubMed

    Yu, Lianbo; Gulati, Parul; Fernandez, Soledad; Pennell, Michael; Kirschner, Lawrence; Jarjoura, David

    2011-09-15

    Gene expression microarray experiments with few replications lead to great variability in estimates of gene variances. Several Bayesian methods have been developed to reduce this variability and to increase power. Thus far, moderated t methods assumed a constant coefficient of variation (CV) for the gene variances. We provide evidence against this assumption, and extend the method by allowing the CV to vary with gene expression. Our CV varying method, which we refer to as the fully moderated t-statistic, was compared to three other methods (ordinary t, and two moderated t predecessors). A simulation study and a familiar spike-in data set were used to assess the performance of the testing methods. The results showed that our CV varying method had higher power than the other three methods, identified a greater number of true positives in spike-in data, fit simulated data under varying assumptions very well, and in a real data set better identified higher expressing genes that were consistent with functional pathways associated with the experiments.
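
    For orientation, the sketch below shows the usual empirical-Bayes variance shrinkage behind moderated t statistics (a constant prior variance s0^2 with d0 prior degrees of freedom, as in the earlier moderated t methods the abstract refers to); the paper's contribution of letting the prior CV vary with expression level is not reproduced here, and all numbers are illustrative assumptions.

        import numpy as np

        def moderated_t(x, y, s0_sq=0.05, d0=4.0):
            """Two-sample moderated t with a constant prior variance s0_sq and
            d0 prior degrees of freedom (illustrative values, not fitted)."""
            n1, n2 = len(x), len(y)
            d = n1 + n2 - 2                                   # residual degrees of freedom
            s_sq = (np.var(x, ddof=1) * (n1 - 1) +
                    np.var(y, ddof=1) * (n2 - 1)) / d         # pooled gene-wise variance
            s_tilde_sq = (d0 * s0_sq + d * s_sq) / (d0 + d)   # shrink toward the prior
            se = np.sqrt(s_tilde_sq * (1.0 / n1 + 1.0 / n2))
            return (np.mean(x) - np.mean(y)) / se

        rng = np.random.default_rng(1)
        print(moderated_t(rng.normal(0.5, 0.3, 3), rng.normal(0.0, 0.3, 3)))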

  18. Relationship of work-family conflict, self-reported social support and job satisfaction to burnout syndrome among medical workers in southwest China: A cross-sectional study.

    PubMed

    Yang, Shujuan; Liu, Danping; Liu, Hongbo; Zhang, Juying; Duan, Zhanqi

    2017-01-01

    Burnout is a psychosomatic syndrome widely observed in Chinese medical workers due to the increasing cost of medical treatment, excessive workload, and excessive prescribing behavior. No studies have evaluated the interrelationship among occupational burnout, work-family conflict, social support, and job satisfaction in medical workers. The aim of this study was to evaluate these relationships among medical workers in southwest China. This cross-sectional study was conducted between March 2013 and December 2013, and was based on the fifth National Health Service Survey (NHSS). A total of 1382 medical workers were enrolled in the study. Pearson correlation analysis and general linear model univariate analysis were used to evaluate the relationship of work-family conflict, self-reported social support, and job satisfaction with burnout syndrome in medical workers. We observed that five dimensions of job satisfaction and self-reported social support were negatively associated with burnout syndrome, whereas three dimensions of work-family conflict showed a positive correlation. In a four-stage general linear model analysis, we found that demographic factors accounted for 5.4% of individual variance in burnout syndrome (F = 4.720, P<0.001, R2 = 0.054), and that work-family conflict, self-reported social support, and job satisfaction accounted for 2.6% (F = 5.93, P<0.001, R2 = 0.080), 5.7% (F = 9.532, P<0.001, R2 = 0.137) and 17.8% (F = 21.608, P<0.001, R2 = 0.315) of the variance, respectively. In the fourth stage of analysis, female gender and a lower technical title correlated to a higher level of burnout syndrome, and medical workers without administrative duties had more serious burnout syndrome than those with administrative duties. In conclusion, the present study suggests that work-family conflict and self-reported social support slightly affect the level of burnout syndrome, and that job satisfaction is a much stronger influence on burnout syndrome in medical workers of southwest China.

  19. Poisson noise removal with pyramidal multi-scale transforms

    NASA Astrophysics Data System (ADS)

    Woiselle, Arnaud; Starck, Jean-Luc; Fadili, Jalal M.

    2013-09-01

    In this paper, we introduce a method to stabilize the variance of decimated transforms using one or two variance stabilizing transforms (VST). These VSTs are applied to the 3-D Meyer wavelet pyramidal transform, which is the core of the first-generation 3D curvelets. This allows us to extend these 3-D curvelets to handle Poisson noise, which we then apply to the denoising of a simulated cosmological volume.
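
    The core idea of a variance stabilizing transform for Poisson data can be illustrated with the classical Anscombe transform; this is a generic sketch and not necessarily the VST the authors attach to the Meyer wavelet pyramid.

        import numpy as np

        def anscombe(x):
            """Anscombe VST: Poisson counts -> approximately unit-variance Gaussian."""
            return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

        def inverse_anscombe(y):
            """Simple algebraic inverse (biased at very low counts)."""
            return (np.asarray(y) / 2.0) ** 2 - 3.0 / 8.0

        rng = np.random.default_rng(0)
        for lam in (5.0, 20.0, 100.0):
            counts = rng.poisson(lam, size=100_000)
            print(f"lambda={lam:6.1f}  var(counts)={counts.var():7.2f}  "
                  f"var(anscombe)={anscombe(counts).var():5.3f}")  # ~1 once lambda is not too small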

  20. Optical Pattern Recognition for Missile Guidance.

    DTIC Science & Technology

    1979-10-01

    to the voltage-dependent sensitometry noted earlier, to the low intensity available and to the broadband nature of the Xe source used. ... (Fig. 6) that was used to control the modulator ... measures, namely, carrier period variance, carrier phase modulation or phase variance, and instantaneous frequency ... This equalizing correlator system is another method by which the flexibility and repertoire of

  1. A Sociotechnical Systems Approach To Coastal Marine Spatial Planning

    DTIC Science & Technology

    2016-12-01

    the authors followed the MEAD step of identifying variances and creating a matrix of these variances. Then the authors were able to propose methods ... potential politics involved, and the risks involved in proposing and attempting to start up a new marine aquaculture operation. ... DLNR Board Responsiveness/Review Time ... Assessment Value ... Redesign suggestions: have a coordinating group or person (with knowledge

  2. Automated real time constant-specificity surveillance for disease outbreaks.

    PubMed

    Wieland, Shannon C; Brownstein, John S; Berger, Bonnie; Mandl, Kenneth D

    2007-06-13

    For real time surveillance, detection of abnormal disease patterns is based on a difference between patterns observed, and those predicted by models of historical data. The usefulness of outbreak detection strategies depends on their specificity; the false alarm rate affects the interpretation of alarms. We evaluate the specificity of five traditional models: autoregressive, Serfling, trimmed seasonal, wavelet-based, and generalized linear. We apply each to 12 years of emergency department visits for respiratory infection syndromes at a pediatric hospital, finding that the specificity of the five models was almost always a non-constant function of the day of the week, month, and year of the study (p < 0.05). We develop an outbreak detection method, called the expectation-variance model, based on generalized additive modeling to achieve a constant specificity by accounting for not only the expected number of visits, but also the variance of the number of visits. The expectation-variance model achieves constant specificity on all three time scales, as well as earlier detection and improved sensitivity compared to traditional methods in most circumstances. Modeling the variance of visit patterns enables real-time detection with known, constant specificity at all times. With constant specificity, public health practitioners can better interpret the alarms and better evaluate the cost-effectiveness of surveillance systems.
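
    A minimal sketch of the idea of alarming on both the expected count and its modeled variance. The paper fits these quantities with generalized additive models; the Gaussian threshold, function names, and numbers below are illustrative assumptions only.

        import numpy as np
        from scipy.stats import norm

        def alarm(observed, expected, variance, specificity=0.95):
            """Flag a day when the observed visit count exceeds the model's
            expectation by more than a quantile of its modeled variability."""
            z = (observed - expected) / np.sqrt(variance)
            return z > norm.ppf(specificity)

        # Illustrative values: expected 40 visits with variance 60 (overdispersed).
        print(alarm(observed=55, expected=40.0, variance=60.0))   # True
        print(alarm(observed=46, expected=40.0, variance=60.0))   # False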

  3. Groundwater management under uncertainty using a stochastic multi-cell model

    NASA Astrophysics Data System (ADS)

    Joodavi, Ata; Zare, Mohammad; Ziaei, Ali Naghi; Ferré, Ty P. A.

    2017-08-01

    The optimization of spatially complex groundwater management models over long time horizons requires the use of computationally efficient groundwater flow models. This paper presents a new stochastic multi-cell lumped-parameter aquifer model that explicitly considers uncertainty in groundwater recharge. To achieve this, the multi-cell model is combined with the constrained-state formulation method. In this method, the lower and upper bounds of groundwater heads are incorporated into the mass balance equation using indicator functions. This provides expressions for the means, variances and covariances of the groundwater heads, which can be included in the constraint set in an optimization model. This method was used to formulate two separate stochastic models: (i) groundwater flow in a two-cell aquifer model with normal and non-normal distributions of groundwater recharge; and (ii) groundwater management in a multiple cell aquifer in which the differences between groundwater abstractions and water demands are minimized. The comparison between the results obtained from the proposed modeling technique with those from Monte Carlo simulation demonstrates the capability of the proposed models to approximate the means, variances and covariances. Significantly, considering covariances between the heads of adjacent cells allows a more accurate estimate of the variances of the groundwater heads. Moreover, this modeling technique requires no discretization of state variables, thus offering an efficient alternative to computationally demanding methods.

  4. Geomagnetic field model for the last 5 My: time-averaged field and secular variation

    NASA Astrophysics Data System (ADS)

    Hatakeyama, Tadahiro; Kono, Masaru

    2002-11-01

    Structure of the geomagnetic field has been studied using paleomagnetic direction data of the last 5 million years obtained from lava flows. The method we used is the nonlinear version, similar to the works of Gubbins and Kelly [Nature 365 (1993) 829], Johnson and Constable [Geophys. J. Int. 122 (1995) 488; Geophys. J. Int. 131 (1997) 643], and Kelly and Gubbins [Geophys. J. Int. 128 (1997) 315], but we determined the time-averaged field (TAF) and the paleosecular variation (PSV) simultaneously. As pointed out in our previous work [Earth Planet. Space 53 (2001) 31], the observed mean field directions are affected by the fluctuation of the field, as described by the PSV model. This effect is not excessively large, but cannot be neglected while considering the mean field. We propose that the new TAF+PSV model is a better representation of the ancient magnetic field, since both the average and fluctuation of the field are consistently explained. In the inversion procedure, we used direction cosines instead of inclinations and declinations, as the latter quantities show singularity or unstable behavior at high latitudes. The obtained model gives a reasonably good fit to the observed means and variances of direction cosines. In the TAF model, the geocentric axial dipole term (g10) is the dominant component; it is much more pronounced than that in the present magnetic field. The equatorial dipole component is quite small, after averaging over time. The model shows a very smooth spatial variation; the nondipole components also seem to be averaged out quite effectively over time. Among the other coefficients, the geocentric axial quadrupole term (g20) is significantly larger than the other components. On the other hand, the axial octupole term (g30) is much smaller than that in a TAF model excluding the PSV effect. It is likely that the effect of PSV is most clearly seen in this term, which is consistent with the conclusion reached in our previous work. The PSV model shows large variance of the (2,1) component, which is in good agreement with the previous PSV models obtained by forward approaches. It is also indicated that the variance of the axial dipole term is very small. This is in conflict with the studies based on paleointensity data, but we show that this conclusion is not inconsistent with the paleointensity data because a substantial part of the apparent scatter in paleointensities may be attributable to effects other than the fluctuations in g10 itself.

  5. Mixed emotions: Sensitivity to facial variance in a crowd of faces.

    PubMed

    Haberman, Jason; Lee, Pegan; Whitney, David

    2015-01-01

    The visual system automatically represents summary information from crowds of faces, such as the average expression. This is a useful heuristic insofar as it provides critical information about the state of the world, not simply information about the state of one individual. However, the average alone is not sufficient for making decisions about how to respond to a crowd. The variance or heterogeneity of the crowd--the mixture of emotions--conveys information about the reliability of the average, essential for determining whether the average can be trusted. Despite its importance, the representation of variance within a crowd of faces has yet to be examined. This is addressed here in three experiments. In the first experiment, observers viewed a sample set of faces that varied in emotion, and then adjusted a subsequent set to match the variance of the sample set. To isolate variance as the summary statistic of interest, the average emotion of both sets was random. Results suggested that observers had information regarding crowd variance. The second experiment verified that this was indeed a uniquely high-level phenomenon, as observers were unable to derive the variance of an inverted set of faces as precisely as an upright set of faces. The third experiment replicated and extended the first two experiments using method-of-constant-stimuli. Together, these results show that the visual system is sensitive to emergent information about the emotional heterogeneity, or ambivalence, in crowds of faces.

  6. Repeatability and reproducibility of ribotyping and its computer interpretation.

    PubMed

    Lefresne, Gwénola; Latrille, Eric; Irlinger, Françoise; Grimont, Patrick A D

    2004-04-01

    Many molecular typing methods are difficult to interpret because their repeatability (within-laboratory variance) and reproducibility (between-laboratory variance) have not been thoroughly studied. In the present work, ribotyping of coryneform bacteria was the basis of a study involving within-gel and between-gel repeatability and between-laboratory reproducibility (two laboratories involved). The effect of different technical protocols, different algorithms, and different software for fragment size determination was studied. Analysis of variance (ANOVA) showed, within a laboratory, that there was no significant added variance between gels. However, between-laboratory variance was significantly higher than within-laboratory variance. This may be due to the use of different protocols. An experimental function was calculated to transform the data and make them compatible (i.e., erase the between-laboratory variance). The use of different interpolation algorithms (spline, Schaffer and Sederoff) was a significant source of variation in one laboratory only. The use of either Taxotron (Institut Pasteur) or GelCompar (Applied Maths) was not a significant source of added variation when the same algorithm (spline) was used. However, the use of Bio-Gene (Vilber Lourmat) dramatically increased the error (within laboratory, within gel) in one laboratory, while decreasing the error in the other laboratory; this might be due to automatic normalization attempts. These results were taken into account for building a database and performing automatic pattern identification using Taxotron. Conversion of the data considerably improved the identification of patterns irrespective of the laboratory in which the data were obtained.

  7. A new method of linkage analysis using LOD scores for quantitative traits supports linkage of monoamine oxidase activity to D17S250 in the Collaborative Study on the Genetics of Alcoholism pedigrees.

    PubMed

    Curtis, David; Knight, Jo; Sham, Pak C

    2005-09-01

    Although LOD score methods have been applied to diseases with complex modes of inheritance, linkage analysis of quantitative traits has tended to rely on non-parametric methods based on regression or variance components analysis. Here, we describe a new method for LOD score analysis of quantitative traits which does not require specification of a mode of inheritance. The technique is derived from the MFLINK method for dichotomous traits. A range of plausible transmission models is constructed, constrained to yield the correct population mean and variance for the trait but differing with respect to the contribution to the variance due to the locus under consideration. Maximized LOD scores under homogeneity and admixture are calculated, as is a model-free LOD score which compares the maximized likelihoods under admixture assuming linkage and no linkage. These LOD scores have known asymptotic distributions and hence can be used to provide a statistical test for linkage. The method has been implemented in a program called QMFLINK. It was applied to data sets simulated using a variety of transmission models and to a measure of monoamine oxidase activity in 105 pedigrees from the Collaborative Study on the Genetics of Alcoholism. With the simulated data, the results showed that the new method could detect linkage well if the true allele frequency for the trait was close to that specified. However, it performed poorly on models in which the true allele frequency was much rarer. For the Collaborative Study on the Genetics of Alcoholism data set only a modest overlap was observed between the results obtained from the new method and those obtained when the same data were analysed previously using regression and variance components analysis. Of interest is that D17S250 produced a maximized LOD score under homogeneity and admixture of 2.6 but did not indicate linkage using the previous methods. However, this region did produce evidence for linkage in a separate data set, suggesting that QMFLINK may have been able to detect a true linkage which was not picked up by the other methods. The application of model-free LOD score analysis to quantitative traits is novel and deserves further evaluation of its merits and disadvantages relative to other methods.

  8. Autism and urinary exogenous neuropeptides: development of an on-line SPE-HPLC-tandem mass spectrometry method to test the opioid excess theory.

    PubMed

    Dettmer, K; Hanna, D; Whetstone, P; Hansen, R; Hammock, B D

    2007-08-01

    Autism is a complex neurodevelopmental disorder with unknown etiology. One hypothesis regarding etiology in autism is the "opioid peptide excess" theory that postulates that excessive amounts of exogenous opioid-like peptides derived from dietary proteins are detectable in urine and that these compounds may be pathophysiologically important in autism. A selective LC-MS/MS method was developed to analyze gliadinomorphin, beta-casomorphin, deltorphin 1, and deltorphin 2 in urine. The method is based on on-line SPE extraction of the neuropeptides from urine, column switching, and subsequent HPLC analysis. A limit of detection of 0.25 ng/mL was achieved for all analytes. Analyte recovery rates from urine ranged between 78% and 94%, with relative standard deviations of 0.2-6.8%. The method was used to screen 69 urine samples from children with and without autism spectrum disorders for the occurrence of neuropeptides. The target neuropeptides were not detected above the detection limit in either sample set.

  9. Estimating stochastic noise using in situ measurements from a linear wavefront slope sensor.

    PubMed

    Bharmal, Nazim Ali; Reeves, Andrew P

    2016-01-15

    It is shown how the solenoidal component of noise from the measurements of a wavefront slope sensor can be utilized to estimate the total noise: specifically, the ensemble noise variance. It is well known that solenoidal noise is orthogonal to the reconstruction of the wavefront under conditions of low scintillation (absence of wavefront vortices). Therefore, it can be retrieved even with a nonzero slope signal present. By explicitly estimating the solenoidal noise from an ensemble of slopes, it can be retrieved for any wavefront sensor configuration. Furthermore, the ensemble variance is demonstrated to be related to the total noise variance via a straightforward relationship. This relationship is revealed via the method of the explicit estimation: it consists of a small, heuristic set of four constants that do not depend on the underlying statistics of the incoming wavefront. These constants seem to apply to all situations-data from a laboratory experiment as well as many configurations of numerical simulation-so the method is concluded to be generic.

  10. Within- and between-person and group variance in behavior and beliefs in cross-cultural longitudinal data.

    PubMed

    Deater-Deckard, Kirby; Godwin, Jennifer; Lansford, Jennifer E; Bacchini, Dario; Bombi, Anna Silvia; Bornstein, Marc H; Chang, Lei; Di Giunta, Laura; Dodge, Kenneth A; Malone, Patrick S; Oburu, Paul; Pastorelli, Concetta; Skinner, Ann T; Sorbring, Emma; Steinberg, Laurence; Tapanya, Sombat; Alampay, Liane Peña; Uribe Tirado, Liliana Maria; Zelli, Arnaldo; Al-Hassan, Suha M

    2018-01-01

    This study grapples with what it means to be part of a cultural group, from a statistical modeling perspective. The method we present compares within- and between-cultural group variability, in behaviors in families. We demonstrate the method using a cross-cultural study of adolescent development and parenting, involving three biennial waves of longitudinal data from 1296 eight-year-olds and their parents (multiple cultures in nine countries). Family members completed surveys about parental negativity and positivity, child academic and social-emotional adjustment, and attitudes about parenting and adolescent behavior. Variance estimates were computed at the cultural group, person, and within-person level using multilevel models. Of the longitudinally consistent variance, most was within and not between cultural groups-although there was a wide range of between-group differences. This approach to quantifying cultural group variability may prove valuable when applied to quantitative studies of acculturation. Copyright © 2017 The Foundation for Professionals in Services for Adolescents. All rights reserved.

  11. Impact of the Fano Factor on Position and Energy Estimation in Scintillation Detectors.

    PubMed

    Bora, Vaibhav; Barrett, Harrison H; Jha, Abhinav K; Clarkson, Eric

    2015-02-01

    The Fano factor for an integer-valued random variable is defined as the ratio of its variance to its mean. Light from various scintillation crystals has been reported to have Fano factors from sub-Poisson (Fano factor < 1) to super-Poisson (Fano factor > 1). For a given mean, a smaller Fano factor implies a smaller variance and thus less noise. We investigated whether lower noise in the scintillation light results in better spatial and energy resolution. The impact of the Fano factor on the estimation of position of interaction and energy deposited in simple gamma-camera geometries is estimated by two methods - calculating the Cramér-Rao bound and estimating the variance of a maximum likelihood estimator. The methods are consistent with each other and indicate that when estimating the position of interaction and energy deposited by a gamma-ray photon, the Fano factor of a scintillator does not affect the spatial resolution. A smaller Fano factor results in a better energy resolution.
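
    The Fano factor itself is straightforward to estimate from repeated count measurements; a small sketch with simulated Poisson, super-Poisson, and sub-Poisson counts (all parameters are illustrative, not tied to any particular scintillator):

        import numpy as np

        def fano_factor(counts):
            """Ratio of variance to mean of an integer-valued sample."""
            counts = np.asarray(counts, dtype=float)
            return counts.var(ddof=1) / counts.mean()

        rng = np.random.default_rng(0)
        poisson = rng.poisson(1000, size=50_000)                        # Fano ~ 1
        # Poisson with a gamma-fluctuating mean (negative binomial): Fano ~ 1 + scale = 1.5
        super_poisson = rng.poisson(rng.gamma(shape=2000, scale=0.5, size=50_000))
        # Binomial counts: Fano = 1 - p = 0.5 (sub-Poisson)
        sub_poisson = rng.binomial(n=2000, p=0.5, size=50_000)

        for name, c in [("Poisson", poisson), ("super-Poisson", super_poisson),
                        ("sub-Poisson", sub_poisson)]:
            print(f"{name:>14s}: Fano = {fano_factor(c):.3f}")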

  12. 41 CFR 102-36.90 - How do we find out what personal property is available as excess?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... use the following methods to find out what excess personal property is available: (a) Check GSAXcess...®, access http://www.gsaxcess.gov. (b) Contact or submit want lists to regional GSA Personal Property...

  13. On testing an unspecified function through a linear mixed effects model with multiple variance components

    PubMed Central

    Wang, Yuanjia; Chen, Huaihou

    2012-01-01

    Summary We examine a generalized F-test of a nonparametric function through penalized splines and a linear mixed effects model representation. With a mixed effects model representation of penalized splines, we imbed the test of an unspecified function into a test of some fixed effects and a variance component in a linear mixed effects model with nuisance variance components under the null. The procedure can be used to test a nonparametric function or varying-coefficient with clustered data, compare two spline functions, test the significance of an unspecified function in an additive model with multiple components, and test a row or a column effect in a two-way analysis of variance model. Through a spectral decomposition of the residual sum of squares, we provide a fast algorithm for computing the null distribution of the test, which significantly improves the computational efficiency over bootstrap. The spectral representation reveals a connection between the likelihood ratio test (LRT) in a multiple variance components model and a single component model. We examine our methods through simulations, where we show that the power of the generalized F-test may be higher than the LRT, depending on the hypothesis of interest and the true model under the alternative. We apply these methods to compute the genome-wide critical value and p-value of a genetic association test in a genome-wide association study (GWAS), where the usual bootstrap is computationally intensive (up to 10^8 simulations) and asymptotic approximation may be unreliable and conservative. PMID:23020801

  14. On testing an unspecified function through a linear mixed effects model with multiple variance components.

    PubMed

    Wang, Yuanjia; Chen, Huaihou

    2012-12-01

    We examine a generalized F-test of a nonparametric function through penalized splines and a linear mixed effects model representation. With a mixed effects model representation of penalized splines, we imbed the test of an unspecified function into a test of some fixed effects and a variance component in a linear mixed effects model with nuisance variance components under the null. The procedure can be used to test a nonparametric function or varying-coefficient with clustered data, compare two spline functions, test the significance of an unspecified function in an additive model with multiple components, and test a row or a column effect in a two-way analysis of variance model. Through a spectral decomposition of the residual sum of squares, we provide a fast algorithm for computing the null distribution of the test, which significantly improves the computational efficiency over bootstrap. The spectral representation reveals a connection between the likelihood ratio test (LRT) in a multiple variance components model and a single component model. We examine our methods through simulations, where we show that the power of the generalized F-test may be higher than the LRT, depending on the hypothesis of interest and the true model under the alternative. We apply these methods to compute the genome-wide critical value and p-value of a genetic association test in a genome-wide association study (GWAS), where the usual bootstrap is computationally intensive (up to 10(8) simulations) and asymptotic approximation may be unreliable and conservative. © 2012, The International Biometric Society.

  15. Measuring disorganized speech in schizophrenia: automated analysis explains variance in cognitive deficits beyond clinician-rated scales.

    PubMed

    Minor, K S; Willits, J A; Marggraf, M P; Jones, M N; Lysaker, P H

    2018-04-25

    Conveying information cohesively is an essential element of communication that is disrupted in schizophrenia. These disruptions are typically expressed through disorganized symptoms, which have been linked to neurocognitive, social cognitive, and metacognitive deficits. Automated analysis can objectively assess disorganization within sentences, between sentences, and across paragraphs by comparing explicit communication to a large text corpus. Little work in schizophrenia has tested: (1) links between disorganized symptoms measured via automated analysis and neurocognition, social cognition, or metacognition; and (2) whether automated analysis explains incremental variance in cognitive processes beyond clinician-rated scales. Disorganization was measured in schizophrenia (n = 81) with Coh-Metrix 3.0, an automated program that calculates basic and complex language indices. Trained staff also assessed neurocognition, social cognition, metacognition, and clinician-rated disorganization. Findings showed that all three cognitive processes were significantly associated with at least one automated index of disorganization. When automated analysis was compared with a clinician-rated scale, it accounted for significant variance in neurocognition and metacognition beyond the clinician-rated measure. When combined, these two methods explained 28-31% of the variance in neurocognition, social cognition, and metacognition. This study illustrated how automated analysis can highlight the specific role of disorganization in neurocognition, social cognition, and metacognition. Generally, those with poor cognition also displayed more disorganization in their speech, making it difficult for listeners to process essential information needed to tie the speaker's ideas together. Our findings showcase how implementing a mixed-methods approach in schizophrenia can explain substantial variance in cognitive processes.

  16. Testing variance components by two jackknife methods

    USDA-ARS?s Scientific Manuscript database

    The jackknife method, a resampling technique, has been widely used for statistical tests for years. The pseudo-value based jackknife method (defined as the pseudo jackknife method) is commonly used to reduce the bias of an estimate; however, sometimes it could result in large variation for an estimate a...
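
    A brief sketch of the delete-one, pseudo-value jackknife referred to above, applied here to a sample variance as the statistic of interest; the specific variance-component tests compared in the manuscript are not reproduced, and the data are simulated for illustration.

        import numpy as np
        from scipy import stats

        def jackknife_pseudovalues(x, statistic):
            """Delete-one jackknife pseudo-values for an arbitrary statistic."""
            x = np.asarray(x, dtype=float)
            n = len(x)
            theta_full = statistic(x)
            theta_loo = np.array([statistic(np.delete(x, i)) for i in range(n)])
            return n * theta_full - (n - 1) * theta_loo

        rng = np.random.default_rng(0)
        x = rng.normal(0.0, 2.0, size=30)                 # true variance = 4
        pv = jackknife_pseudovalues(x, lambda s: np.var(s, ddof=1))

        est = pv.mean()                                   # bias-reduced jackknife estimate
        se = pv.std(ddof=1) / np.sqrt(len(pv))            # standard error from pseudo-values
        t, p = stats.ttest_1samp(pv, popmean=0.0)         # e.g. test H0: variance component = 0
        print(f"estimate={est:.3f}  se={se:.3f}  t={t:.2f}  p={p:.3g}")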

  17. Spacecraft methods and structures with enhanced attitude control that facilitates gyroscope substitutions

    NASA Technical Reports Server (NTRS)

    Li, Rongsheng (Inventor); Kurland, Jeffrey A. (Inventor); Dawson, Alec M. (Inventor); Wu, Yeong-Wei A. (Inventor); Uetrecht, David S. (Inventor)

    2004-01-01

    Methods and structures are provided that enhance attitude control during gyroscope substitutions by ensuring that a spacecraft's attitude control system does not drive its absolute-attitude sensors out of their capture ranges. In a method embodiment, an operational process-noise covariance Q of a Kalman filter is temporarily replaced with a substantially greater interim process-noise covariance. This replacement increases the weight given to the most recent attitude measurements and hastens the reduction of attitude errors and gyroscope bias errors. The error effect of the substituted gyroscopes is reduced and the absolute-attitude sensors are not driven out of their capture range. In another method embodiment, this replacement is preceded by the temporary replacement of an operational measurement-noise variance R with a substantially larger interim measurement-noise variance to reduce transients during the gyroscope substitutions.
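
    A minimal sketch of the covariance-inflation idea described above, using a generic scalar Kalman filter rather than the patent's attitude filter (all names and numbers are assumptions): temporarily replacing the operational process-noise covariance with a larger interim value raises the gain, so the filter weights recent absolute-attitude measurements more heavily and errors introduced by the substituted gyroscope decay faster.

        import numpy as np

        def kalman_step(x, P, z, Q, R, F=1.0, H=1.0):
            """One predict/update cycle of a scalar Kalman filter."""
            x_pred = F * x
            P_pred = F * P * F + Q            # inflating Q here increases trust in new measurements
            K = P_pred * H / (H * P_pred * H + R)
            x_new = x_pred + K * (z - H * x_pred)
            P_new = (1.0 - K * H) * P_pred
            return x_new, P_new, K

        Q_oper, Q_interim, R = 1e-6, 1e-2, 1e-3    # illustrative covariances
        x, P = 0.0, 1e-4
        for k, z in enumerate([0.05, 0.04, 0.05, 0.06]):   # attitude-sensor measurements (toy)
            Q = Q_interim if k < 2 else Q_oper     # interim Q right after the gyro swap
            x, P, K = kalman_step(x, P, z, Q, R)
            print(f"step {k}: gain={K:.3f}  estimate={x:.4f}")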

  18. A geostatistics-informed hierarchical sensitivity analysis method for complex groundwater flow and transport modeling

    NASA Astrophysics Data System (ADS)

    Dai, Heng; Chen, Xingyuan; Ye, Ming; Song, Xuehang; Zachara, John M.

    2017-05-01

    Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study, we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multilayer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed input variables.
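
    For readers unfamiliar with variance-based global sensitivity analysis, the sketch below estimates standard first-order Sobol indices for a toy model using a pick-and-freeze (Saltelli-type) estimator; it is not the hierarchical grouped-index scheme developed in the paper, and the model and sample sizes are illustrative.

        import numpy as np

        def sobol_first_order(f, d, n=100_000, rng=None):
            """First-order Sobol indices via a pick-and-freeze estimator,
            for independent standard-normal inputs (assumed in this sketch)."""
            rng = rng or np.random.default_rng(0)
            A = rng.normal(size=(n, d))
            B = rng.normal(size=(n, d))
            fA, fB = f(A), f(B)
            var = np.var(np.concatenate([fA, fB]), ddof=1)
            S = np.empty(d)
            for i in range(d):
                ABi = A.copy()
                ABi[:, i] = B[:, i]            # replace only input x_i
                S[i] = np.mean(fB * (f(ABi) - fA)) / var
            return S

        # Toy model y = 3*x1 + x2: analytic first-order indices are 0.9 and 0.1.
        print(sobol_first_order(lambda X: 3.0 * X[:, 0] + X[:, 1], d=2))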

  19. The emergence of modern statistics in agricultural science: analysis of variance, experimental design and the reshaping of research at Rothamsted Experimental Station, 1919-1933.

    PubMed

    Parolini, Giuditta

    2015-01-01

    During the twentieth century statistical methods have transformed research in the experimental and social sciences. Qualitative evidence has largely been replaced by quantitative results and the tools of statistical inference have helped foster a new ideal of objectivity in scientific knowledge. The paper will investigate this transformation by considering the genesis of analysis of variance and experimental design, statistical methods nowadays taught in every elementary course of statistics for the experimental and social sciences. These methods were developed by the mathematician and geneticist R. A. Fisher during the 1920s, while he was working at Rothamsted Experimental Station, where agricultural research was in turn reshaped by Fisher's methods. Analysis of variance and experimental design required new practices and instruments in field and laboratory research, and imposed a redistribution of expertise among statisticians, experimental scientists and the farm staff. On the other hand the use of statistical methods in agricultural science called for a systematization of information management and made computing an activity integral to the experimental research done at Rothamsted, permanently integrating the statisticians' tools and expertise into the station research programme. Fisher's statistical methods did not remain confined within agricultural research and by the end of the 1950s they had come to stay in psychology, sociology, education, chemistry, medicine, engineering, economics, quality control, just to mention a few of the disciplines which adopted them.

  20. A Geostatistics-Informed Hierarchical Sensitivity Analysis Method for Complex Groundwater Flow and Transport Modeling

    NASA Astrophysics Data System (ADS)

    Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.

    2017-12-01

    Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multi-layer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially-distributed input variables.

  1. Predictors of excessive use of social media and excessive online gaming in Czech teenagers.

    PubMed

    Spilková, Jana; Chomynová, Pavla; Csémy, Ladislav

    2017-12-01

    Background and aims Young people's involvement in online gaming and the use of social media are increasing rapidly, resulting in a high number of excessive Internet users in recent years. The objective of this paper is to analyze the situation of excessive Internet use among adolescents in the Czech Republic and to reveal determinants of excessive use of social media and excessive online gaming. Methods Data from secondary school students (N = 4,887) were collected within the 2015 European School Survey Project on Alcohol and Other Drugs. Logistic regression models were constructed to describe the individual and familial discriminative factors and the impact of the health risk behavior of (a) excessive users of social media and (b) excessive players of online games. Results The models confirmed important gender-specific distinctions - while girls are more prone to online communication and social media use, online gaming is far more prevalent among boys. The analysis did not indicate an influence of family composition on either the excessive use of social media or excessive online gaming, and only marginal effects for the type of school attended. We found a connection between the excessive use of social media and binge drinking and an inverse relation between excessive online gaming and daily smoking. Discussion and conclusion The non-existence of significant associations between family environment and excessive Internet use confirmed that this phenomenon is widespread across the social and economic strata of the teenage population, indicating a need for further studies on the topic.

  2. Simultaneous analysis of 17O/16O, 18O/16O and 2H/1H of gypsum hydration water by cavity ring‐down laser spectroscopy

    PubMed Central

    Mather, Ian; Rolfe, James; Evans, Nicholas P.; Herwartz, Daniel; Staubwasser, Michael; Hodell, David A.

    2015-01-01

    Rationale The recent development of cavity ring‐down laser spectroscopy (CRDS) instruments capable of measuring 17O‐excess in water has created new opportunities for studying the hydrologic cycle. Here we apply this new method to studying the triple oxygen (17O/16O, 18O/16O) and hydrogen (2H/1H) isotope ratios of gypsum hydration water (GHW), which can provide information about the conditions under which the mineral formed and subsequent post‐depositional interaction with other fluids. Methods We developed a semi‐automated procedure for extracting GHW by slowly heating the sample to 400°C in vacuo and cryogenically trapping the evolved water. The isotopic composition (δ17O, δ18O and δ2H values) of the GHW is subsequently measured by CRDS. The extraction apparatus allows the dehydration of five samples and one standard simultaneously, thereby increasing the long‐term precision and sample throughput compared with previous methods. The apparatus is also useful for distilling brines prior to isotopic analysis. A direct comparison is made between results of 17O‐excess in GHW obtained by CRDS and fluorination followed by isotope ratio mass spectrometry (IRMS) of O2. Results The long‐term analytical precision of our method of extraction and isotopic analysis of GHW by CRDS is ±0.07‰ for δ17O values, ±0.13‰ for δ18O values and ±0.49‰ for δ2H values (all ±1SD), and ±1.1‰ and ±8 per meg for the deuterium‐excess and 17O‐excess, respectively. Accurate measurement of the 17O‐excess values of GHW, of both synthetic and natural samples, requires the use of a micro‐combustion module (MCM). This accessory removes contaminants (VOCs, H2S, etc.) from the water vapour stream that interfere with the wavelengths used for spectroscopic measurement of water isotopologues. CRDS/MCM and IRMS methods yield similar isotopic results for the analysis of both synthetic and natural gypsum samples within analytical error of the two methods. Conclusions We demonstrate that precise and simultaneous isotopic measurements of δ17O, δ18O and δ2H values, and the derived deuterium‐excess and 17O‐excess, can be obtained from GHW and brines using a new extraction apparatus and subsequent measurement by CRDS. This method provides new opportunities for the application of water isotope tracers in hydrologic and paleoclimatologic research. © 2015 The Authors. Rapid Communications in Mass Spectrometry Published by John Wiley & Sons Ltd. PMID:26443399
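
    For reference, the derived quantities quoted above are simple functions of the measured delta values; the sketch below uses the conventional definitions (deuterium excess d = δ2H - 8·δ18O in per mil, and 17O-excess as the logarithmic delta-prime difference with a 0.528 slope, in per meg). The exact definitions used in the paper should be checked, and the delta values below are illustrative, not data from the study.

        import numpy as np

        def d_excess(d2H, d18O):
            """Deuterium excess in per mil (conventional definition)."""
            return d2H - 8.0 * d18O

        def O17_excess(d17O, d18O, slope=0.528):
            """17O-excess in per meg, using the logarithmic (delta-prime) definition."""
            dp17 = np.log(d17O / 1000.0 + 1.0)
            dp18 = np.log(d18O / 1000.0 + 1.0)
            return (dp17 - slope * dp18) * 1e6

        # Illustrative delta values (per mil) for a hypothetical hydration-water sample.
        d18O, d17O, d2H = -5.20, -2.72, -45.0
        print(f"d-excess   = {d_excess(d2H, d18O):6.1f} per mil")
        print(f"17O-excess = {O17_excess(d17O, d18O):6.0f} per meg")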

  3. Comparative test on several forms of background error covariance in 3DVar

    NASA Astrophysics Data System (ADS)

    Shao, Aimei

    2013-04-01

    The background error covariance matrix (hereinafter the B matrix) plays an important role in the three-dimensional variational (3DVar) data assimilation method. However, it is difficult to obtain the B matrix accurately because the true atmospheric state is unknown, so several methods have been developed to estimate it (e.g. the NMC method, the innovation analysis method, recursive filters, and ensemble methods such as EnKF). Prior to further development and application of these methods, it is worth studying and evaluating how the B matrices they produce behave in 3DVar. For this reason, NCEP reanalysis and forecast data are used to test the effectiveness of several B matrices with the VAF method (Huang, 1999). Here the NCEP analysis is treated as the truth, so the forecast error is known. Data from 2006 to 2007 are used as the samples to estimate the B matrix, and data from 2008 are used to verify the assimilation effects. The 48 h and 24 h forecasts valid at the same time are used to estimate the B matrix with the NMC method. The B matrix can be represented by a correlation part (a non-diagonal matrix) and a variance part (a diagonal matrix of variances). In numerous 3DVar systems, a Gaussian filter function is used as an approximation to the variation of correlation coefficients with distance. On the basis of this assumption, the following forms of the B matrix are designed and tested with VAF in the comparative experiments: (1) the error variance and characteristic lengths are fixed and set to their mean values averaged over the analysis domain; (2) as (1), but the mean characteristic lengths are reduced to 50 percent of the original for height and 60 percent for temperature; (3) as (2), but the error variance, calculated directly from the historical data, is space-dependent; (4) the error variance and characteristic lengths are all calculated directly from the historical data; (5) the B matrix is estimated directly from the historical data; (6) as (5), but a localization is applied; (7) the B matrix is estimated by the NMC method but the error variance is reduced by a factor of 1.7 so that its value is close to that calculated from the true forecast error samples; (8) as (7), but with the localization of (6) applied. Experimental results with the different B matrices show that, for the Gaussian-type B matrix, the characteristic lengths calculated from the true error samples do not yield good analysis results, whereas reduced characteristic lengths (about half of the original) lead to a good analysis. If the B matrix estimated directly from the historical data is used in 3DVar, the assimilation does not perform at its best; better assimilation results are obtained when the reduced characteristic length and localization are applied. Even so, this has no obvious advantage over a Gaussian-type B matrix with the optimal characteristic length. This implies that the Gaussian-type B matrix, widely used in operational 3DVar systems, can produce a good analysis with appropriate characteristic lengths; the crucial problem is how to determine those characteristic lengths. (This work is supported by the National Natural Science Foundation of China (41275102, 40875063) and the Fundamental Research Funds for the Central Universities (lzujbky-2010-9).)
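
    As an illustration of the Gaussian-type background error covariance discussed above, a minimal sketch builds B = D C D from a diagonal matrix D of error standard deviations and a Gaussian correlation matrix C with characteristic length L; the 1-D grid, error variances, and length scales are assumed values, not those of the experiments.

        import numpy as np

        def gaussian_B(sigma, coords, L):
            """Background error covariance B = D C D with Gaussian correlations
            C_ij = exp(-d_ij^2 / (2 L^2)) and error std devs sigma on the diagonal of D."""
            coords = np.asarray(coords, dtype=float)
            d = np.abs(coords[:, None] - coords[None, :])   # pairwise distances on a 1-D grid
            C = np.exp(-0.5 * (d / L) ** 2)
            D = np.diag(sigma)
            return D @ C @ D

        x = np.arange(0.0, 1000.0, 100.0)          # 1-D grid in km (assumed)
        sigma = np.full(x.size, 1.5)               # assumed error standard deviation
        B_long = gaussian_B(sigma, x, L=300.0)     # original characteristic length
        B_short = gaussian_B(sigma, x, L=150.0)    # ~50% reduced length, as tested above
        print(B_long[0, :3].round(3), B_short[0, :3].round(3))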

  4. Methods for analyzing matched designs with double controls: excess risk is easily estimated and misinterpreted when evaluating traffic deaths.

    PubMed

    Redelmeier, Donald A; Tibshirani, Robert J

    2018-06-01

    To demonstrate analytic approaches for matched studies where two controls are linked to each case and events are accumulating counts rather than binary outcomes. A secondary intent is to clarify the distinction between total risk and excess risk (unmatched vs. matched perspectives). We review past research testing whether elections can lead to increased traffic risks. The results are reinterpreted by analyzing both the total count of individuals in fatal crashes and the excess count of individuals in fatal crashes, each time accounting for the matched double controls. Overall, 1,546 individuals were in fatal crashes on the 10 election days (average = 155/d), and 2,593 individuals were in fatal crashes on the 20 control days (average = 130/d). Poisson regression of total counts yielded a relative risk of 1.19 (95% confidence interval: 1.12-1.27). Poisson regression of excess counts yielded a relative risk of 3.22 (95% confidence interval: 2.72-3.80). The discrepancy between analyses of total counts and excess counts replicated with alternative statistical models and was visualized in graphical displays. Available approaches provide methods for analyzing count data in matched designs with double controls and help clarify the distinction between increases in total risk and increases in excess risk. Copyright © 2018 Elsevier Inc. All rights reserved.
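
    A small numerical sketch of the total-count versus excess-count contrast reported above, using the published aggregate counts (10 election days, 20 matched control days); it reproduces the total-count relative risk of about 1.19 but is only a crude illustration of the distinction, not the paper's Poisson regression with matched double controls.

        # Aggregate counts from the abstract: individuals in fatal crashes.
        election_days, control_days = 10, 20
        election_total, control_total = 1546, 2593

        rate_election = election_total / election_days   # ~155 per day
        rate_control = control_total / control_days      # ~130 per day

        rr_total = rate_election / rate_control          # relative risk of the TOTAL count
        excess_per_day = rate_election - rate_control    # ~25 excess individuals per election day

        print(f"total-count relative risk  = {rr_total:.2f}")     # ~1.19, matching the abstract
        print(f"excess individuals per day = {excess_per_day:.1f}")
        # Modeling the EXCESS count rather than the total yields a much larger relative
        # measure (3.22 in the abstract), which is why the two perspectives are easily conflated.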

  5. Explanation of Obsessive-Compulsive Disorder and Major Depressive Disorder on the Basis of Thought-Action Fusion

    PubMed Central

    Ghamari Kivi, Hossein; Mohammadipour Rik, Ne’mat; Sadeghi Movahhed, Fariba

    2013-01-01

    Objective: Thought-action fusion (TAF) refers to the tendency to assume an incorrect causal relationship between one’s own thoughts and external reality, in which thoughts and actions are treated as equivalent. This construct contributes to the development and maintenance of many psychological disorders. The aim of the present study was to predict obsessive-compulsive disorder (OCD) and its subtypes, and major depressive disorder (MDD), from TAF and its levels. Methods: Two groups of 50 persons each, with OCD and MDD respectively, were selected by convenience sampling in private and governmental psychiatric centers in Ardabil, Iran. They then completed the Beck Depression Inventory, the Padua Inventory and the TAF scale. Data were analysed using stepwise multiple regression analysis. Results: TAF or its subtypes (moral TAF, likelihood-self TAF and likelihood-others TAF) explained 14% of MDD variance (p < 0.01), 15% of OCD variance (p < 0.01), and 8-21% of the variance of OCD subtypes (p < 0.05). Moral TAF levels were high in both OCD and MDD. Conclusion: The TAF construct is not a factor specific to OCD; it is present in MDD, too. Declaration of interest: None. PMID:24644509

  6. Linear response to long wavelength fluctuations using curvature simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baldauf, Tobias; Zaldarriaga, Matias; Seljak, Uroš

    2016-09-01

    We study the local response to long wavelength fluctuations in cosmological N-body simulations, focusing on the matter and halo power spectra, halo abundance and non-linear transformations of the density field. The long wavelength mode is implemented using an effective curved cosmology and a mapping of time and distances. The method provides an alternative, more direct, way to measure the isotropic halo biases. Limiting ourselves to the linear case, we find generally good agreement between the biases obtained from the curvature method and the traditional power spectrum method at the level of a few percent. We also study the response of halo counts to changes in the variance of the field and find that the slope of the relation between the responses to density and variance differs from the naïve derivation assuming a universal mass function by approximately 8–20%. This has implications for measurements of the amplitude of local non-Gaussianity using scale dependent bias. We also analyze the halo power spectrum and halo-dark matter cross-spectrum response to long wavelength fluctuations and derive second order halo bias from it, as well as the super-sample variance contribution to the galaxy power spectrum covariance matrix.

  7. Evaluation of three statistical prediction models for forensic age prediction based on DNA methylation.

    PubMed

    Smeers, Inge; Decorte, Ronny; Van de Voorde, Wim; Bekaert, Bram

    2018-05-01

    DNA methylation is a promising biomarker for forensic age prediction. A challenge that has emerged in recent studies is the fact that prediction errors become larger with increasing age due to interindividual differences in epigenetic ageing rates. This phenomenon of non-constant variance or heteroscedasticity violates an assumption of the often used method of ordinary least squares (OLS) regression. The aim of this study was to evaluate alternative statistical methods that do take heteroscedasticity into account in order to provide more accurate, age-dependent prediction intervals. A weighted least squares (WLS) regression is proposed as well as a quantile regression model. Their performances were compared against an OLS regression model based on the same dataset. Both models provided age-dependent prediction intervals which account for the increasing variance with age, but WLS regression performed better in terms of success rate in the current dataset. However, quantile regression might be a preferred method when dealing with a variance that is not only non-constant, but also not normally distributed. Ultimately the choice of which model to use should depend on the observed characteristics of the data. Copyright © 2018 Elsevier B.V. All rights reserved.
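
    A compact sketch of the weighted least squares idea evaluated above, with weights inversely proportional to an estimated age-dependent variance; the linear mean model, the variance model built from absolute residuals, and all simulated data are illustrative assumptions (the paper additionally considers quantile regression, not shown here).

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)

        # Simulated heteroscedastic data: a methylation-based predictor vs. age,
        # with residual spread growing with age (illustrative only).
        n = 200
        age = rng.uniform(18, 80, n)
        meth = 0.2 + 0.008 * age + rng.normal(0, 0.002 * age, n)

        X = sm.add_constant(meth)

        # Ordinary least squares ignores the non-constant variance.
        ols = sm.OLS(age, X).fit()

        # WLS with weights proportional to 1/variance: regress the absolute OLS
        # residuals on the predictor and use the squared fitted values as a variance proxy.
        abs_resid = np.abs(ols.resid)
        var_fit = sm.OLS(abs_resid, X).fit()
        weights = 1.0 / np.maximum(var_fit.fittedvalues, 1e-6) ** 2
        wls = sm.WLS(age, X, weights=weights).fit()

        print("OLS coefficients:", ols.params.round(2))
        print("WLS coefficients:", wls.params.round(2))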

  8. Ideal, nonideal, and no-marker variables: The confirmatory factor analysis (CFA) marker technique works when it matters.

    PubMed

    Williams, Larry J; O'Boyle, Ernest H

    2015-09-01

    A persistent concern in the management and applied psychology literature is the effect of common method variance on observed relations among variables. Recent work (i.e., Richardson, Simmering, & Sturman, 2009) evaluated 3 analytical approaches to controlling for common method variance, including the confirmatory factor analysis (CFA) marker technique. Their findings indicated significant problems with this technique, especially with nonideal marker variables (those with theoretical relations with substantive variables). Based on their simulation results, Richardson et al. concluded that not correcting for method variance provides more accurate estimates than using the CFA marker technique. We reexamined the effects of using marker variables in a simulation study and found the degree of error in estimates of a substantive factor correlation was relatively small in most cases, and much smaller than the error associated with making no correction. Further, in instances in which the error was large, the correlations between the marker and substantive scales were higher than those found in organizational research with marker variables. We conclude that in most practical settings, the CFA marker technique yields parameter estimates close to their true values, and the criticisms made by Richardson et al. are overstated. (c) 2015 APA, all rights reserved.

  9. Excess under-5 female mortality across India: a spatial analysis using 2011 census data.

    PubMed

    Guilmoto, Christophe Z; Saikia, Nandita; Tamrakar, Vandana; Bora, Jayanta Kumar

    2018-06-01

    Excess female mortality causes half of the missing women (estimated deficit of women in countries with suspiciously low proportion of females in their population) today. Globally, most of these avoidable deaths of women occur during childhood in China and India. We aimed to estimate excess female under-5 mortality rate (U5MR) for India's 35 states and union territories and 640 districts. Using the summary birth history method (or Brass method), we derived district-level estimates of U5MR by sex from 2011 census data. We used data from 46 countries with no evidence of gender bias for mortality to estimate the effects and intensity of excess female mortality at district level. We used a detailed spatial and statistical analysis to highlight the correlates of excess mortality at district level. Excess female U5MR was 18·5 per 1000 livebirths (95% CI 13·1-22·6) in India 2000-2005, which corresponds to an estimated 239 000 excess deaths (169 000-293 000) per year. More than 90% of districts had excess female mortality, but the four largest states in northern India (Uttar Pradesh, Bihar, Rajasthan, and Madhya Pradesh) accounted for two-thirds of India's total number. Low economic development, gender inequity, and high fertility were the main predictors of excess female mortality. Spatial analysis confirmed the strong spatial clustering of postnatal discrimination against girls in India. The considerable effect of gender bias on mortality in India highlights the need for more proactive engagement with the issue of postnatal sex discrimination and a focus on the northern districts. Notably, these regions are not the same as those most affected by skewed sex ratio at birth. None. Copyright © 2018 The Author(s). Published by Elsevier Ltd. This is an Open Access article under the CC BY-NC-ND 4.0 license. Published by Elsevier Ltd.. All rights reserved.
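
    A toy illustration of the observed-minus-expected logic behind excess mortality estimates follows; the reference female-to-male ratio and the district rates are hypothetical numbers, not values from the study.

        def excess_female_u5mr(male_rate, female_rate, ref_female_to_male=0.85):
            """Rates per 1000 livebirths; the reference ratio stands in for the
            relationship estimated from countries with no evidence of gender bias."""
            expected_female = ref_female_to_male * male_rate
            return female_rate - expected_female

        # Hypothetical district: male U5MR 60, female U5MR 70 per 1000 livebirths
        print(excess_female_u5mr(60.0, 70.0))  # 70 - 51 = 19 excess deaths per 1000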

  10. OCT aspects of dental hard tissue changes induced by excessive occlusal forces

    NASA Astrophysics Data System (ADS)

    Scrieciu, Monica; Mercuţ, Veronica; Popescu, Sanda Mihaela; Tǎrâţǎ, Daniela; Osiac, Eugen

    2018-03-01

    The study purpose is to highlight, using OCT, changes in the dental hard tissues of a tooth with dental wear caused by excessive occlusal forces. Methods: a central incisor extracted for periodontal reasons was cleaned and embedded in a black acrylic resin block. The block was sectioned along the longitudinal axis of the tooth and prepared for OCT analysis. Results: The OCT signal showed differences between the labial and palatal dental hard tissue structures, even in areas not subjected to excessive occlusal loading. Conclusion: The OCT signal highlights changes in dental hard tissue structures according to the areas subjected to excessive occlusal loading.

  11. Clinical assessment of excessive daytime sleepiness in the diagnosis of sleep disorders.

    PubMed

    Rosenberg, Russell P

    2015-12-01

    Daytime sleepiness is common, but, in some individuals, it can be excessive and lead to distress and impairment. For many of these individuals, excessive daytime sleepiness is simply caused by poor sleep habits or self-imposed sleep times that are not sufficient to maintain alertness throughout the day. For others, daytime sleepiness may be related to a more serious disorder or condition such as narcolepsy, idiopathic hypersomnia, or obstructive sleep apnea. Clinicians must be familiar with the disorders associated with excessive daytime sleepiness and the assessment methods used to diagnose these disorders in order to identify patients who need treatment. © Copyright 2015 Physicians Postgraduate Press, Inc.

  12. Excess Cancers Among HIV-Infected People in the United States

    PubMed Central

    Pfeiffer, Ruth M.; Shiels, Meredith S.; Li, Jianmin; Hall, H. Irene; Engels, Eric A.

    2015-01-01

    Background: Nearly 900 000 people in the United States are living with diagnosed human immunodeficiency virus (HIV) infection and therefore increased cancer risk. The total number of cancers occurring among HIV-infected people and the excess number above expected background cases are unknown. Methods: We derived cancer incidence rates for the United States HIV-infected and general populations from Poisson models applied to linked HIV and cancer registry data and from Surveillance, Epidemiology, and End Results program data, respectively. We applied these rates to estimates of people living with diagnosed HIV at mid-year 2010 to estimate total and expected cancer counts, respectively. We subtracted expected from total cancers to estimate excess cancers. Results: An estimated 7760 (95% confidence interval [CI] = 7330 to 8320) cancers occurred in 2010 among HIV-infected people, of which 3920 cancers (95% CI = 3480 to 4470) or 50% (95% CI = 48 to 54%) were in excess of expected. The most common excess cancers were non-Hodgkin’s lymphoma (NHL; n = 1440 excess cancers, occurring in 88% excess), Kaposi’s sarcoma (KS, n = 910, 100% excess), anal cancer (n = 740, 97% excess), and lung cancer (n = 440, 52% excess). The proportion of excess cancers that were AIDS defining (ie, KS, NHL, cervical cancer) declined with age and time since AIDS diagnosis (both P < .001). For anal cancer, 83% of excess cases occurred among men who have sex with men, and 71% among those living five or more years since AIDS onset. Among injection drug users, 22% of excess cancers were lung cancer, and 16% were liver cancer. Conclusions: The excess cancer burden in the US HIV population is substantial, and patterns across groups highlight opportunities for cancer control initiatives targeted to HIV-infected people. PMID:25663691
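
    The excess-cancer arithmetic reduces to applying two incidence rates to the same population; the sketch below uses hypothetical rates, not the registry-derived values of the study.

        def excess_cancers(n_people, rate_hiv_per_100k, rate_general_per_100k):
            """Total and expected counts from incidence rates; excess = total - expected."""
            total = n_people * rate_hiv_per_100k / 1e5
            expected = n_people * rate_general_per_100k / 1e5
            return total, expected, total - expected

        # Hypothetical rates applied to ~900 000 people living with diagnosed HIV
        total, expected, excess = excess_cancers(900_000, 860.0, 430.0)
        print(f"total={total:.0f}, expected={expected:.0f}, excess={excess:.0f}")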

  13. Gender differences in exercise dependence and eating disorders in young adults: a path analysis of a conceptual model.

    PubMed

    Meulemans, Shelli; Pribis, Peter; Grajales, Tevni; Krivak, Gretchen

    2014-11-05

    The purpose of our study was to determine the prevalence of exercise dependence (EXD) among college students and to investigate the roles of EXD and gender in exercise behavior and eating disorders. Excessive exercise can become an addiction known as exercise dependence. In our population of 517 college students, 3.3% were at risk for EXD and 8% were at risk for an eating disorder. We used path analysis, the simplest case of structural equation modeling (SEM), to investigate the role of EXD and exercise behavior in eating disorders. We observed a small direct effect of gender on eating disorders. In females we observed significant direct effects of exercise behavior (r = -0.17, p = 0.009) and EXD (r = 0.34, p < 0.001) on eating pathology. We also observed an indirect effect of exercise behavior on eating pathology (r = 0.16) through EXD (r = 0.48, r2 = 0.23, p < 0.001). In females the total variance of eating pathology explained by the SEM model was 9%. In males we observed a direct effect of EXD (r = 0.23, p < 0.001) on eating pathology. We also observed an indirect effect of exercise behavior on eating pathology (r = 0.11) through EXD (r = 0.49, r2 = 0.24, p < 0.001). In males the total variance of eating pathology explained by the SEM model was 5%.
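
    The decomposition into direct and indirect effects can be reproduced with two regressions, as in the minimal sketch below (simulated data and coefficients, not the study's sample; the indirect effect is the product of the two path coefficients).

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 500
        exercise = rng.normal(size=n)
        exd = 0.5 * exercise + rng.normal(size=n)                   # path a
        eating = 0.3 * exd - 0.1 * exercise + rng.normal(size=n)    # paths b and c'

        a = sm.OLS(exd, sm.add_constant(exercise)).fit().params[1]
        model_b = sm.OLS(eating, sm.add_constant(np.column_stack([exd, exercise]))).fit()
        b, c_prime = model_b.params[1], model_b.params[2]

        print(f"direct effect c' = {c_prime:.2f}, indirect effect a*b = {a * b:.2f}")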

  14. WE-G-18A-06: Sinogram Restoration in Helical Cone-Beam CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Little, K; Riviere, P La

    2014-06-15

    Purpose: To extend CT sinogram restoration, which has been shown in 2D to reduce noise and to correct for geometric effects and other degradations at a low computational cost, from 2D to a 3D helical cone-beam geometry. Methods: A method for calculating sinogram degradation coefficients for a helical cone-beam geometry was proposed. These values were used to perform penalized-likelihood sinogram restoration on simulated data that were generated from the FORBILD thorax phantom. Sinogram restorations were performed using both a quadratic penalty and the edge-preserving Huber penalty. After sinogram restoration, Fourier-based analytical methods were used to obtain reconstructions. Resolution-variance trade-offs were investigated for several locations within the reconstructions for the purpose of comparing sinogram restoration to no restoration. In order to compare potential differences, reconstructions were performed using different groups of neighbors in the penalty, two analytical reconstruction methods (Katsevich and single-slice rebinning), and differing helical pitches. Results: The resolution-variance properties of reconstructions restored using sinogram restoration with a Huber penalty outperformed those of reconstructions with no restoration. However, the use of a quadratic sinogram restoration penalty did not lead to an improvement over performing no restoration at the outer regions of the phantom. Application of the Huber penalty to neighbors both within a view and across views did not perform as well as only applying the penalty to neighbors within a view. General improvements in resolution-variance properties using sinogram restoration with the Huber penalty were not dependent on the reconstruction method used or the magnitude of the helical pitch. Conclusion: Sinogram restoration for noise and degradation effects for helical cone-beam CT is feasible and should be able to be applied to clinical data. When applied with the edge-preserving Huber penalty, sinogram restoration leads to an improvement in resolution-variance tradeoffs.
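
    The edge-preserving Huber penalty at the heart of the restoration can be illustrated on a one-dimensional "sinogram row"; the following sketch is a generic penalized denoiser under assumed parameters, not the authors' helical cone-beam implementation.

        import numpy as np

        def huber(t, delta):
            """Quadratic near zero, linear in the tails: smooths noise while preserving edges."""
            return np.where(np.abs(t) <= delta,
                            0.5 * t ** 2,
                            delta * (np.abs(t) - 0.5 * delta))

        def penalized_denoise(y, beta=1.0, delta=0.1, n_iter=200, step=0.1):
            """Minimize ||x - y||^2 + beta * sum huber(x_i - x_{i-1}) by gradient descent."""
            x = y.copy()
            for _ in range(n_iter):
                d = np.diff(x)
                dpsi = np.where(np.abs(d) <= delta, d, delta * np.sign(d))  # huber'(d)
                grad = 2 * (x - y)
                grad[:-1] += beta * (-dpsi)
                grad[1:] += beta * dpsi
                x -= step * grad
            return x

        # Piecewise-constant "row" plus noise, then restored with the Huber-penalized fit
        noisy_row = np.repeat([1.0, 3.0, 2.0], 50) + np.random.default_rng(2).normal(0, 0.2, 150)
        restored = penalized_denoise(noisy_row)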

  15. Excess Costs Associated with Possible Misdiagnosis of Alzheimer’s Disease Among Patients with Vascular Dementia in a UK CPRD Population

    PubMed Central

    Happich, Michael; Kirson, Noam Y.; Desai, Urvi; King, Sarah; Birnbaum, Howard G.; Reed, Catherine; Belger, Mark; Lenox-Smith, Alan; Price, David

    2016-01-01

    Background: Prior diagnosis of Alzheimer’s disease (AD) among patients later diagnosed with vascular dementia (VaD) has been associated with excess costs, suggesting potential benefits of earlier rule-out of AD diagnosis. Objective: To investigate whether prior diagnosis with AD among patients with VaD is associated with excess costs in the UK. Methods: Patients with a final VaD diagnosis, continuous data visibility for≥6 months prior to index date, and linkage to Hospital Episode Statistics data were retrospectively selected from de-identified Clinical Practice Research Datalink data. Patients with AD diagnosis before a final VaD diagnosis were matched to similar patients with no prior AD diagnosis using propensity score methods. Annual excess healthcare costs were calculated for 5 years post-index, stratified by time to final diagnosis. Results: Of 9,311 patients with VaD, 508 (6%) had prior AD diagnosis with a median time to VaD diagnosis exceeding 2 years from index date. Over the entire follow-up period, patients with prior AD diagnosis had accumulated healthcare costs that were approximately GBP2,000 higher than those for matched counterparts (mostly due to higher hospitalization costs). Cost differentials peaked particularly in the period including the final VaD diagnosis, with excess costs quickly declining thereafter. Conclusion: Potential misdiagnosis of AD among UK patients with VaD resulted in substantial excess costs. The decline in excess costs following a final VaD diagnosis suggests potential benefits from earlier rule-out of AD. PMID:27163798

  16. Alcohol Electronic Screening and Brief Intervention: A Community Guide Systematic Review

    PubMed Central

    Tansil, Kristin A.; Esser, Marissa B.; Sandhu, Paramjit; Reynolds, Jeffrey A.; Elder, Randy W.; Williamson, Rebecca S.; Chattopadhyay, Sajal K.; Bohm, Michele K.; Brewer, Robert D.; McKnight-Eily, Lela R.; Hungerford, Daniel W.; Toomey, Traci L.; Hingson, Ralph W.; Fielding, Jonathan E.

    2016-01-01

    Context Excessive drinking is responsible for 1 in 10 deaths among working-age adults in the U.S. annually. Alcohol screening and brief intervention (ASBI) is an effective, but underutilized, intervention for reducing excessive drinking among adults. Electronic screening and brief intervention (e-SBI) uses electronic devices to deliver key elements of ASBI, and has the potential to expand population reach. Evidence acquisition Using Community Guide methods, a systematic review of the scientific literature on the effectiveness of e-SBI for reducing excessive alcohol consumption and related harms was conducted. The search covered studies published from 1967 to October 2011. A total of 31 studies with 36 study arms met quality criteria, and were included in the review. Analyses were conducted in 2012. Evidence synthesis Twenty-four studies (28 study arms) provided results for excessive drinkers only and seven studies (eight study arms) reported results for all drinkers. Nearly all studies found that e-SBI reduced excessive alcohol consumption and related harms: nine study arms reported a median 23.9% reduction in binge drinking intensity (maximum drinks/binge episode) and nine study arms reported a median 16.5% reduction in binge drinking frequency. Reductions in drinking measures were sustained for up to 12 months. Conclusion According to Community Guide rules of evidence, e-SBI is an effective method for reducing excessive alcohol consumption and related harms among intervention participants. Implementation of e-SBI could complement population-level strategies previously recommended by the Community Preventive Services Task Force for reducing excessive drinking (e.g., increasing alcohol taxes and regulating alcohol outlet density). PMID:27745678

  17. Alcohol Electronic Screening and Brief Intervention: A Community Guide Systematic Review.

    PubMed

    Tansil, Kristin A; Esser, Marissa B; Sandhu, Paramjit; Reynolds, Jeffrey A; Elder, Randy W; Williamson, Rebecca S; Chattopadhyay, Sajal K; Bohm, Michele K; Brewer, Robert D; McKnight-Eily, Lela R; Hungerford, Daniel W; Toomey, Traci L; Hingson, Ralph W; Fielding, Jonathan E

    2016-11-01

    Excessive drinking is responsible for one in ten deaths among working-age adults in the U.S. annually. Alcohol screening and brief intervention is an effective but underutilized intervention for reducing excessive drinking among adults. Electronic screening and brief intervention (e-SBI) uses electronic devices to deliver key elements of alcohol screening and brief intervention, with the potential to expand population reach. Using Community Guide methods, a systematic review of the scientific literature on the effectiveness of e-SBI for reducing excessive alcohol consumption and related harms was conducted. The search covered studies published from 1967 to October 2011. A total of 31 studies with 36 study arms met quality criteria and were included in the review. Analyses were conducted in 2012. Twenty-four studies (28 study arms) provided results for excessive drinkers only and seven studies (eight study arms) reported results for all drinkers. Nearly all studies found that e-SBI reduced excessive alcohol consumption and related harms: nine study arms reported a median 23.9% reduction in binge-drinking intensity (maximum drinks/binge episode) and nine study arms reported a median 16.5% reduction in binge-drinking frequency. Reductions in drinking measures were sustained for up to 12 months. According to Community Guide rules of evidence, e-SBI is an effective method for reducing excessive alcohol consumption and related harms among intervention participants. Implementation of e-SBI could complement population-level strategies previously recommended by the Community Preventive Services Task Force for reducing excessive drinking (e.g., increasing alcohol taxes and regulating alcohol outlet density). Published by Elsevier Inc.

  18. Prediction of activity-related energy expenditure using accelerometer-derived physical activity under free-living conditions: a systematic review.

    PubMed

    Jeran, S; Steinbrecher, A; Pischon, T

    2016-08-01

    Activity-related energy expenditure (AEE) might be an important factor in the etiology of chronic diseases. However, measurement of free-living AEE is usually not feasible in large-scale epidemiological studies but instead has traditionally been estimated based on self-reported physical activity. Recently, accelerometry has been proposed for objective assessment of physical activity, but it is unclear to what extent this method explains the variance in AEE. We conducted a systematic review searching the MEDLINE database (until 2014) for studies that estimated AEE based on accelerometry-assessed physical activity in adults under free-living conditions (using the doubly labeled water method). Extracted study characteristics were sample size, accelerometer (type (uniaxial, triaxial), metrics (for example, activity counts, steps, acceleration), recording period, body position, wear time), explained variance of AEE (R²) and number of additional predictors. The relation of univariate and multivariate R² with study characteristics was analyzed using nonparametric tests. Nineteen articles were identified. Examination of various accelerometers or subpopulations in one article was treated separately, resulting in 28 studies. Sample sizes ranged from 10 to 149. In most studies the accelerometer was triaxial, worn at the trunk, during waking hours and reported activity counts as output metric. Recording periods ranged from 5 to 15 days. The variance of AEE explained by accelerometer-assessed physical activity ranged from 4 to 80% (median crude R² = 26%). Sample size was inversely related to the explained variance. Inclusion of 1 to 3 other predictors in addition to accelerometer output significantly increased the explained variance to a range of 12.5-86% (median total R² = 41%). The increase did not depend on the number of added predictors. We conclude that there is large heterogeneity across studies in the explained variance of AEE when estimated based on accelerometry. Thus, data on predicted AEE based on accelerometry-assessed physical activity need to be interpreted cautiously.

  19. Improved variance estimation of classification performance via reduction of bias caused by small sample size.

    PubMed

    Wickenberg-Bolin, Ulrika; Göransson, Hanna; Fryknäs, Mårten; Gustafsson, Mats G; Isaksson, Anders

    2006-03-13

    Supervised learning for classification of cancer employs a set of design examples to learn how to discriminate between tumors. In practice it is crucial to confirm that the classifier is robust with good generalization performance to new examples, or at least that it performs better than random guessing. A suggested alternative is to obtain a confidence interval of the error rate using repeated design and test sets selected from available examples. However, it is known that even in the ideal situation of repeated designs and tests with completely novel samples in each cycle, a small test set size leads to a large bias in the estimate of the true variance between design sets. Therefore, different methods for small sample performance estimation, such as a recently proposed procedure called Repeated Random Sampling (RSS), are also expected to result in heavily biased estimates, which in turn translates into biased confidence intervals. Here we explore such biases and develop a refined algorithm called Repeated Independent Design and Test (RIDT). Our simulations reveal that repeated designs and tests based on resampling in a fixed bag of samples yield a biased variance estimate. We also demonstrate that it is possible to obtain an improved variance estimate by means of a procedure that explicitly models how this bias depends on the number of samples used for testing. For the special case of repeated designs and tests using new samples for each design and test, we present an exact analytical expression for how the expected value of the bias decreases with the size of the test set. We show that via modeling and subsequent reduction of the small sample bias, it is possible to obtain an improved estimate of the variance of classifier performance between design sets. However, the uncertainty of the variance estimate is large in the simulations performed, indicating that the method in its present form cannot be directly applied to small data sets.
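
    The bias mechanism can be seen in a small experiment: repeatedly splitting one fixed sample into design and test sets and examining the spread of test error rates. The sketch below (simulated data; not the RIDT algorithm itself) shows how binomial test-set noise of order p(1-p)/n_test inflates the raw variance of the estimates.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(3)
        X = rng.normal(size=(100, 20))
        y = (X[:, 0] + 0.5 * rng.normal(size=100) > 0).astype(int)

        errors = []
        for seed in range(200):
            X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=15, random_state=seed)
            clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
            errors.append(np.mean(clf.predict(X_te) != y_te))

        errors = np.asarray(errors)
        # The raw variance of the error estimates mixes true between-design variance
        # with binomial test-set noise, which dominates when the test set is small.
        p = errors.mean()
        print(errors.var(ddof=1), p * (1 - p) / 15)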

  20. Coding and Commonality Analysis: Non-ANOVA Methods for Analyzing Data from Experiments.

    ERIC Educational Resources Information Center

    Thompson, Bruce

    The advantages and disadvantages of three analytic methods used to analyze experimental data in educational research are discussed. The same hypothetical data set is used with all methods for a direct comparison. The Analysis of Variance (ANOVA) method and its several analogs are collectively labeled OVA methods and are evaluated. Regression…

  1. Statistical Models for the Analysis of Zero-Inflated Pain Intensity Numeric Rating Scale Data.

    PubMed

    Goulet, Joseph L; Buta, Eugenia; Bathulapalli, Harini; Gueorguieva, Ralitza; Brandt, Cynthia A

    2017-03-01

    Pain intensity is often measured in clinical and research settings using the 0 to 10 numeric rating scale (NRS). NRS scores are recorded as discrete values, and in some samples they may display a high proportion of zeroes and a right-skewed distribution. Despite this, statistical methods for normally distributed data are frequently used in the analysis of NRS data. We present results from an observational cross-sectional study examining the association of NRS scores with patient characteristics using data collected from a large cohort of 18,935 veterans in Department of Veterans Affairs care diagnosed with a potentially painful musculoskeletal disorder. The mean (variance) NRS pain was 3.0 (7.5), and 34% of patients reported no pain (NRS = 0). We compared the following statistical models for analyzing NRS scores: linear regression, generalized linear models (Poisson and negative binomial), zero-inflated and hurdle models for data with an excess of zeroes, and a cumulative logit model for ordinal data. We examined model fit, interpretability of results, and whether conclusions about the predictor effects changed across models. In this study, models that accommodate zero inflation provided a better fit than the other models. These models should be considered for the analysis of NRS data with a large proportion of zeroes. We examined and analyzed pain data from a large cohort of veterans with musculoskeletal disorders. We found that many reported no current pain on the NRS on the diagnosis date. We present several alternative statistical methods for the analysis of pain intensity data with a large proportion of zeroes. Published by Elsevier Inc.
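
    A minimal sketch of fitting a zero-inflated Poisson model to simulated 0-10 NRS-like scores is given below; it assumes statsmodels (version 0.9 or later) and uses made-up covariates rather than the study's cohort.

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.discrete.count_model import ZeroInflatedPoisson

        rng = np.random.default_rng(4)
        n = 2000
        age = rng.uniform(20, 80, n)
        always_zero = rng.random(n) < 0.3                      # structural zeros ("no pain")
        counts = rng.poisson(np.exp(0.5 + 0.01 * (age - 50)))  # count process
        nrs = np.where(always_zero, 0, np.clip(counts, 0, 10))

        X = sm.add_constant(age)
        # Constant-only inflation model; covariates could be added to exog_infl as well
        zip_fit = ZeroInflatedPoisson(nrs, X, exog_infl=np.ones((n, 1))).fit(disp=False)
        print(zip_fit.summary())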

  2. Comparison of particle swarm optimization and simulated annealing for locating additional boreholes considering combined variance minimization

    NASA Astrophysics Data System (ADS)

    Soltani-Mohammadi, Saeed; Safa, Mohammad; Mokhtari, Hadi

    2016-10-01

    One of the most important stages in complementary exploration is optimally designing the additional drilling pattern, that is, defining the optimum number and location of additional boreholes. A great deal of research has been carried out in this regard; in most of the proposed algorithms, kriging variance minimization is defined as the objective function, serving as a criterion for uncertainty assessment, and the problem is solved through optimization methods. Although the kriging variance has many advantages for defining the objective function, it is not sensitive to local variability. As a result, the only factors evaluated for locating the additional boreholes are the initial data configuration and the variogram model parameters, and the effects of local variability are omitted. In this paper, with the goal of accounting for local variability in the assessment of boundary uncertainty, the application of a combined variance to define the objective function is investigated. To verify the applicability of the proposed objective function, it is used to locate additional boreholes in the Esfordi phosphate mine through metaheuristic optimization methods, namely simulated annealing and particle swarm optimization. Comparison of results from the proposed objective function and conventional methods indicates that the changes imposed on the objective function have made the algorithm output sensitive to variations in grade, domain boundaries and the thickness of the mineralization domain. The comparison between the results of the different optimization algorithms shows that, for the presented case, particle swarm optimization is more appropriate than simulated annealing.
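
    A generic simulated-annealing loop of the kind used for such borehole-placement problems is sketched below; the objective here is a simple stand-in (distance to existing boreholes), not the combined-variance criterion of the paper.

        import numpy as np

        def simulated_annealing(objective, bounds, n_iter=5000, t0=1.0, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(bounds).T
            x = rng.uniform(lo, hi)
            best_x, best_f = x.copy(), objective(x)
            f = best_f
            for k in range(n_iter):
                temp = t0 * (1 - k / n_iter) + 1e-9            # linear cooling schedule
                cand = np.clip(x + rng.normal(0, 0.05 * (hi - lo)), lo, hi)
                f_cand = objective(cand)
                # Accept improvements always, worse moves with Boltzmann probability
                if f_cand < f or rng.random() < np.exp(-(f_cand - f) / temp):
                    x, f = cand, f_cand
                    if f < best_f:
                        best_x, best_f = x.copy(), f
            return best_x, best_f

        # Stand-in objective: residual "uncertainty" is high far from existing boreholes
        existing = np.array([[0.2, 0.3], [0.7, 0.8], [0.5, 0.1]])
        objective = lambda p: -np.min(np.linalg.norm(existing - p, axis=1))
        print(simulated_annealing(objective, bounds=[(0, 1), (0, 1)]))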

  3. Improving estimates of genetic maps: a meta-analysis-based approach.

    PubMed

    Stewart, William C L

    2007-07-01

    Inaccurate genetic (or linkage) maps can reduce the power to detect linkage, increase type I error, and distort haplotype and relationship inference. To improve the accuracy of existing maps, I propose a meta-analysis-based method that combines independent map estimates into a single estimate of the linkage map. The method uses the variance of each independent map estimate to combine them efficiently, whether the map estimates use the same set of markers or not. As compared with a joint analysis of the pooled genotype data, the proposed method is attractive for three reasons: (1) it has comparable efficiency to the maximum likelihood map estimate when the pooled data are homogeneous; (2) relative to existing map estimation methods, it can have increased efficiency when the pooled data are heterogeneous; and (3) it avoids the practical difficulties of pooling human subjects data. On the basis of simulated data modeled after two real data sets, the proposed method can reduce the sampling variation of linkage maps commonly used in whole-genome linkage scans. Furthermore, when the independent map estimates are also maximum likelihood estimates, the proposed method performs as well as or better than when they are estimated by the program CRIMAP. Since variance estimates of maps may not always be available, I demonstrate the feasibility of three different variance estimators. Overall, the method should prove useful to investigators who need map positions for markers not contained in publicly available maps, and to those who wish to minimize the negative effects of inaccurate maps. Copyright 2007 Wiley-Liss, Inc.
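
    The core of the combination step is inverse-variance weighting, sketched below with toy numbers rather than real map estimates.

        import numpy as np

        def combine_estimates(estimates, variances):
            """Combine independent estimates, each weighted by the reciprocal of its variance."""
            estimates, variances = np.asarray(estimates), np.asarray(variances)
            weights = 1.0 / variances
            combined = np.sum(weights * estimates) / np.sum(weights)
            combined_var = 1.0 / np.sum(weights)
            return combined, combined_var

        # Two independent estimates of the same inter-marker distance (in cM)
        print(combine_estimates([2.1, 2.6], [0.09, 0.25]))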

  4. Costs for Hospital Stays in the United States, 2011

    MedlinePlus

    ... detailed description of HCUP, more information on the design of the Nationwide Inpatient Sample (NIS), and methods ... Nationwide Inpatient Sample (NIS) Variances, 2001. HCUP Methods Series Report #2003-2. Online. June 2005 (revised June ...

  5. On the physical origins of interaction-induced vibrational (hyper)polarizabilities.

    PubMed

    Zaleśny, Robert; Garcia-Borràs, Marc; Góra, Robert W; Medved', Miroslav; Luis, Josep M

    2016-08-10

    This paper presents the results of a pioneering exploration of the physical origins of vibrational contributions to the interaction-induced electric properties of molecular complexes. In order to analyze the excess nuclear relaxation (hyper)polarizabilities, a new scheme was proposed which relies on the computationally efficient Bishop-Hasan-Kirtman method for determining the nuclear relaxation contributions to electric properties. The extension presented herein is general and can be used with any interaction-energy partitioning method. As an example, in this study we employed the variational-perturbational interaction-energy decomposition scheme (at the MP2/aug-cc-pVQZ level) and the extended transition state method by employing three exchange-correlation functionals (BLYP, LC-BLYP, and LC-BLYP-dDsC) to study the excess properties of the HCN dimer. It was observed that the first-order electrostatic contribution to the excess nuclear relaxation polarizability cancels with the negative exchange repulsion term out to a large extent, resulting in a positive value of Δα(nr) due to the contributions from the delocalization and the dispersion terms. In the case of the excess nuclear relaxation first hyperpolarizability, the pattern of interaction contributions is very similar to that for Δα(nr), both in terms of their sign as well as relative magnitude. Finally, our results show that the LC-BLYP and LC-BLYP-dDsC functionals, which yield smaller values of the orbital relaxation term than BLYP, are more successful in predicting excess properties.

  6. Higher-order cumulants and spectral kurtosis for early detection of subterranean termites

    NASA Astrophysics Data System (ADS)

    de la Rosa, Juan José González; Moreno Muñoz, Antonio

    2008-02-01

    This paper deals with termite detection in unfavorable SNR scenarios via signal processing using higher-order statistics. The results could be extrapolated to all impulse-like insect emissions; the application is non-destructive termite detection. Fourth-order cumulants in the time and frequency domains enhance detection and complete the characterization of termite emissions, which are non-Gaussian in essence. Sliding higher-order cumulants pinpoint distinctive time instants, even for low-amplitude impulses, complementing the sliding variance, which only reveals power excesses in the signal. The spectral kurtosis reveals the non-Gaussian characteristics (the peakedness of the probability density function) associated with these non-stationary measurements, especially in the near-ultrasound frequency band. Well-contrasted estimators have been used to compute the higher-order statistics. The novel findings are presented via graphical examples.
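
    The contrast between a sliding variance and a sliding fourth-order statistic can be reproduced on a synthetic signal, as in the sketch below (window length and burst parameters are arbitrary choices for the example).

        import numpy as np
        from scipy.stats import kurtosis

        rng = np.random.default_rng(5)
        signal = rng.normal(0, 1, 5000)
        signal[2500:2510] += 6 * rng.standard_t(df=3, size=10)   # short impulsive burst

        def sliding_stat(x, win, stat):
            """Evaluate a statistic on half-overlapping windows of length win."""
            return np.array([stat(x[i:i + win]) for i in range(0, len(x) - win, win // 2)])

        var_track = sliding_stat(signal, 256, np.var)
        kurt_track = sliding_stat(signal, 256, lambda w: kurtosis(w, fisher=True))
        # The burst barely moves the variance but produces a clear excess-kurtosis peak
        print(var_track.max(), kurt_track.max())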

  7. Demographic and occupational correlates of workaholism.

    PubMed

    Taris, Toon W; Van Beek, Ilona; Schaufeli, Wilmar B

    2012-04-01

    Drawing on a convenience sample of 9,160 Dutch employees, the present study examined whether commonly held ideas about the associations between demographic, professional, and occupational characteristics and workaholism would be observed. For example, it is sometimes assumed that managers are more likely to display workaholic tendencies than others. Analysis of variance was used to relate workaholism scores (measured as the combination of working excessively and working compulsively) to participant age, sex, employment status (self-employed or not), profession, and occupational sector. Relatively high average scores on workaholism were obtained by workers in the agriculture, construction, communication, consultancy, and commerce/trade sectors, as well as managers and higher professionals. Low scores were found for those in the public administration and services industry sectors, and for nurses, social workers, and paramedics. The other characteristics were not or only weakly related to workaholism.

  8. Evaluation of Kurtosis into the product of two normally distributed variables

    NASA Astrophysics Data System (ADS)

    Oliveira, Amílcar; Oliveira, Teresa; Seijas-Macías, Antonio

    2016-06-01

    Kurtosis (κ) is a measure of the "peakedness" of the distribution of a real-valued random variable. We study the evolution of the kurtosis of the product of two normally distributed variables. The product of two normal variables is a very common problem in several areas of study, such as physics, economics, and psychology. Normal variables have a constant value of kurtosis (κ = 3), independent of the values of the two parameters, mean and variance. In fact, the excess kurtosis is defined as κ − 3, so the excess kurtosis of the normal distribution is zero. The product of two normally distributed variables is a function of the parameters of the two variables and the correlation between them, and the excess kurtosis lies in [0, 6] for independent variables and in [0, 12] when correlation between them is allowed.
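
    The quoted ranges are easy to check numerically; the following simulation sketch estimates the excess kurtosis of the product of two standard normal variables for a few correlation values.

        import numpy as np
        from scipy.stats import kurtosis

        rng = np.random.default_rng(6)
        n = 1_000_000
        for rho in (0.0, 0.5, 1.0):
            z1 = rng.standard_normal(n)
            z2 = rho * z1 + np.sqrt(1 - rho ** 2) * rng.standard_normal(n)
            prod = z1 * z2
            # fisher=True returns excess kurtosis: ~6 for rho=0, ~12 for rho=1
            print(f"rho={rho:.1f}  excess kurtosis ~ {kurtosis(prod, fisher=True):.2f}")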

  9. A method for energy window optimization for quantitative tasks that includes the effects of model-mismatch on bias: application to Y-90 bremsstrahlung SPECT imaging.

    PubMed

    Rong, Xing; Du, Yong; Frey, Eric C

    2012-06-21

    Quantitative Yttrium-90 ((90)Y) bremsstrahlung single photon emission computed tomography (SPECT) imaging has shown great potential to provide reliable estimates of (90)Y activity distribution for targeted radionuclide therapy dosimetry applications. One factor that potentially affects the reliability of the activity estimates is the choice of the acquisition energy window. In contrast to imaging conventional gamma photon emitters, where the acquisition energy windows are usually placed around photopeaks, there has been great variation in the choice of the acquisition energy window for (90)Y imaging due to the continuous and broad energy distribution of the bremsstrahlung photons. In quantitative imaging of conventional gamma photon emitters, previous methods for optimizing the acquisition energy window assumed unbiased estimators and used the variance in the estimates as a figure of merit (FOM). However, for situations, such as (90)Y imaging, where there are errors in the modeling of the image formation process used in the reconstruction, there will be bias in the activity estimates. In (90)Y bremsstrahlung imaging this will be especially important due to the high levels of scatter, multiple scatter, and collimator septal penetration and scatter. Variance is therefore not a complete measure of the reliability of the estimates and hence not a complete FOM. To address this, we first aimed to develop a new method to optimize the energy window that accounts for both the bias due to model-mismatch and the variance of the activity estimates. We applied this method to optimize the acquisition energy window for quantitative (90)Y bremsstrahlung SPECT imaging in microsphere brachytherapy. Since absorbed dose is defined as the energy absorbed from the radiation per unit mass of tissue, in this new method we proposed a mass-weighted root mean squared error of the volume of interest (VOI) activity estimates as the FOM. To calculate this FOM, two analytical expressions were derived for calculating the bias due to model-mismatch and the variance of the VOI activity estimates, respectively. To obtain the optimal acquisition energy window for general situations of interest in clinical (90)Y microsphere imaging, we generated phantoms with multiple tumors of various sizes and various tumor-to-normal activity concentration ratios using a digital phantom that realistically simulates human anatomy, simulated (90)Y microsphere imaging with a clinical SPECT system and typical imaging parameters using a previously validated Monte Carlo simulation code, and used a previously proposed method for modeling the image degrading effects in quantitative SPECT reconstruction. The obtained optimal acquisition energy window was 100-160 keV. The values of the proposed FOM were much larger than the FOM taking into account only the variance of the activity estimates, thus demonstrating in our experiment that the bias of the activity estimates due to model-mismatch was a more important factor than the variance in terms of limiting the reliability of activity estimates.
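
    One plausible reading of the proposed figure of merit is a mass-weighted root mean squared error that combines the squared model-mismatch bias and the variance of each VOI activity estimate; the sketch below encodes that reading with hypothetical VOI values and is not the authors' implementation.

        import numpy as np

        def mass_weighted_rmse(bias, variance, mass):
            """bias, variance: per-VOI errors of the activity estimates; mass: VOI masses."""
            bias, variance, mass = map(np.asarray, (bias, variance, mass))
            mse = bias ** 2 + variance           # mean squared error per VOI
            return np.sqrt(np.sum(mass * mse) / np.sum(mass))

        # Hypothetical VOIs: two tumours and normal liver (masses in grams)
        print(mass_weighted_rmse(bias=[0.05, 0.08, 0.02],
                                 variance=[0.01, 0.02, 0.005],
                                 mass=[20.0, 35.0, 1500.0]))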

  10. FINE GRAIN NUCLEAR EMULSION

    DOEpatents

    Oliver, A.J.

    1962-04-24

    A method of preparing nuclear track emulsions having mean grain sizes less than 0.1 micron is described. The method comprises adding silver nitrate to potassium bromide at a rate at which there is always a constant, critical excess of silver ions. For minimum size grains, the silver ion concentration is maintained at the critical level of about pAg 2.0 to 5.0 during precipitation, pAg being defined as the negative logarithm of the silver ion concentration. It is preferred to eliminate the excess silver at the conclusion of the precipitation steps. The emulsion is processed by methods in all other respects generally similar to the methods of the prior art. (AEC)

  11. A wet chemical method for the estimation of carbon in uranium carbides.

    PubMed

    Chandramouli, V; Yadav, R B; Rao, P R

    1987-09-01

    A wet chemical method for the estimation of carbon in uranium carbides has been developed, based on oxidation with a saturated solution of sodium dichromate in 9M sulphuric acid, absorption of the evolved carbon dioxide in a known excess of barium hydroxide solution, and titration of the excess of barium hydroxide with standard potassium hydrogen phthalate solution. The carbon content obtained is in good agreement with that obtained by combustion and titration.

  12. Estimating the energetic cost of feeding excess dietary nitrogen to dairy cows.

    PubMed

    Reed, K F; Bonfá, H C; Dijkstra, J; Casper, D P; Kebreab, E

    2017-09-01

    Feeding N in excess of requirement could require the use of additional energy to metabolize excess protein, and to synthesize and excrete urea; however, the amount and fate of this energy is unknown. Little progress has been made on this topic in recent decades, so an extension of work published in 1970 was conducted to quantify the effect of excess N on ruminant energetics. In part 1 of this study, the results of previous work were replicated using a simple linear regression to estimate the effect of excess N on energy balance. In part 2, mixed model methodology and a larger data set were used to improve upon the previously reported linear regression methods. In part 3, heat production, retained energy, and milk energy replaced the composite energy balance variable previously proposed as the dependent variable to narrow the effect of excess N. In addition, rumen degradable and undegradable protein intakes were estimated using table values and included as covariates in part 3. Excess N had opposite and approximately equal effects on heat production (+4.1 to +7.6 kcal/g of excess N) and retained energy (-4.2 to -6.6 kcal/g of excess N) but had a larger negative effect on milk gross energy (-52 to -68 kcal/g of excess N). The results suggest that feeding excess N increases heat production, but more investigation is required to determine why excess N has such a large effect on milk gross energy production. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  13. En face projection imaging of the human choroidal layers with tracking SLO and swept source OCT angiography methods

    NASA Astrophysics Data System (ADS)

    Gorczynska, Iwona; Migacz, Justin; Zawadzki, Robert J.; Sudheendran, Narendran; Jian, Yifan; Tiruveedhula, Pavan K.; Roorda, Austin; Werner, John S.

    2015-07-01

    We tested and compared the capabilities of multiple optical coherence tomography (OCT) angiography methods: phase variance, amplitude decorrelation and speckle variance, with application of the split spectrum technique, to image the chorioretinal complex of the human eye. To test whether OCT imaging stability could be improved, we utilized a real-time tracking scanning laser ophthalmoscopy (TSLO) system combined with a swept source OCT setup. In addition, we implemented a post-processing volume averaging method for improved angiographic image quality and reduction of motion artifacts. The OCT system operated at a central wavelength of 1040 nm to enable sufficient depth penetration into the choroid. Imaging was performed in the eyes of healthy volunteers and patients diagnosed with age-related macular degeneration.
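
    The basic speckle-variance computation is simply the per-pixel variance of the OCT intensity across repeated B-scans; the sketch below uses random numbers as stand-ins for real frames.

        import numpy as np

        def speckle_variance(frames):
            """frames: array of shape (N_repeats, depth, width) of OCT intensities.
            Variance across repeats is high where flowing blood decorrelates the speckle."""
            return np.var(frames, axis=0)

        frames = np.random.default_rng(7).gamma(2.0, 1.0, size=(8, 512, 500))
        sv_image = speckle_variance(frames)
        print(sv_image.shape)  # (512, 500)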

  14. Verbal Working Memory in Children With Cochlear Implants

    PubMed Central

    Caldwell-Tarr, Amanda; Low, Keri E.; Lowenstein, Joanna H.

    2017-01-01

    Purpose Verbal working memory in children with cochlear implants and children with normal hearing was examined. Participants Ninety-three fourth graders (47 with normal hearing, 46 with cochlear implants) participated, all of whom were in a longitudinal study and had working memory assessed 2 years earlier. Method A dual-component model of working memory was adopted, and a serial recall task measured storage and processing. Potential predictor variables were phonological awareness, vocabulary knowledge, nonverbal IQ, and several treatment variables. Potential dependent functions were literacy, expressive language, and speech-in-noise recognition. Results Children with cochlear implants showed deficits in storage and processing, similar in size to those at second grade. Predictors of verbal working memory differed across groups: Phonological awareness explained the most variance in children with normal hearing; vocabulary explained the most variance in children with cochlear implants. Treatment variables explained little of the variance. Where potentially dependent functions were concerned, verbal working memory accounted for little variance once the variance explained by other predictors was removed. Conclusions The verbal working memory deficits of children with cochlear implants arise due to signal degradation, which limits their abilities to acquire phonological awareness. That hinders their abilities to store items using a phonological code. PMID:29075747

  15. Choice in experiential learning: True preferences or experimental artifacts?

    PubMed

    Ashby, Nathaniel J S; Konstantinidis, Emmanouil; Yechiam, Eldad

    2017-03-01

    The rate of selecting different options in the decisions-from-feedback paradigm is commonly used to measure preferences resulting from experiential learning. While convergence to a single option increases with experience, some variance in choice remains even when options are static and offer fixed rewards. Employing a decisions-from-feedback paradigm followed by a policy-setting task, we examined whether the observed variance in choice is driven by factors related to the paradigm itself: Continued exploration (e.g., believing options are non-stationary) or exploitation of perceived outcome patterns (i.e., a belief that sequential choices are not independent). Across two studies, participants showed variance in their choices, which was related (i.e., proportional) to the policies they set. In addition, in Study 2, participants' reported under-confidence was associated with the amount of choice variance in later choices and policies. These results suggest that variance in choice is better explained by participants lacking confidence in knowing which option is better, rather than methodological artifacts (i.e., exploration or failures to recognize outcome independence). As such, the current studies provide evidence for the decisions-from-feedback paradigm's validity as a behavioral research method for assessing learned preferences. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Is Excessive Polypharmacy a Transient or Persistent Phenomenon? A Nationwide Cohort Study in Taiwan

    PubMed Central

    Wang, Yi-Jen; Chiang, Shu-Chiung; Lee, Pei-Chen; Chen, Yu-Chun; Chou, Li-Fang; Chou, Yueh-Ching; Chen, Tzeng-Ji

    2018-01-01

    Objectives: Target populations with persistent polypharmacy should be identified prior to implementing strategies against inappropriate medication use, yet limited information regarding such populations is available. The main objectives were to explore the trends of excessive polypharmacy, whether transient or persistent, at the individual level. The secondary objectives were to identify the factors associated with persistently excessive polypharmacy and to estimate the probabilities for repeatedly excessive polypharmacy. Methods: Retrospective cohort analyses of excessive polypharmacy, defined as prescription of ≥ 10 medicines at an ambulatory visit, from 2001 to 2013 were conducted using a nationally representative claims database in Taiwan. Survival analyses with log-rank test of adult patients with first-time excessive polypharmacy were conducted to predict the probabilities, stratified by age and sex, of having repeatedly excessive polypharmacy. Results: During the study period, excessive polypharmacy occurred in 5.4% of patients for the first time. Among them, 63.9% had repeatedly excessive polypharmacy and the probabilities were higher in men and old people. Men versus women, and old versus middle-aged and young people had shorter median excessive polypharmacy-free times (9.4 vs. 5.5 months, 5.3 vs. 10.1 and 35.0 months, both p < 0.001). Overall, the probabilities of having no repeatedly excessive polypharmacy within 3 months, 6 months, and 1 year were 59.9, 53.6, and 48.1%, respectively. Conclusion: Although male and old patients were more likely to have persistently excessive polypharmacy, most cases of excessive polypharmacy were transient or did not re-appear in the short run. Systemic deprescribing measures should be tailored to at-risk groups. PMID:29515446

  17. Comparison of gene-based rare variant association mapping methods for quantitative traits in a bovine population with complex familial relationships.

    PubMed

    Zhang, Qianqian; Guldbrandtsen, Bernt; Calus, Mario P L; Lund, Mogens Sandø; Sahana, Goutam

    2016-08-17

    There is growing interest in the role of rare variants in the variation of complex traits due to increasing evidence that rare variants are associated with quantitative traits. However, association methods that are commonly used for mapping common variants are not effective to map rare variants. Besides, livestock populations have large half-sib families and the occurrence of rare variants may be confounded with family structure, which makes it difficult to disentangle their effects from family mean effects. We compared the power of methods that are commonly applied in human genetics to map rare variants in cattle using whole-genome sequence data and simulated phenotypes. We also studied the power of mapping rare variants using linear mixed models (LMM), which are the method of choice to account for both family relationships and population structure in cattle. We observed that the power of the LMM approach was low for mapping a rare variant (defined as those that have frequencies lower than 0.01) with a moderate effect (5 to 8 % of phenotypic variance explained by multiple rare variants that vary from 5 to 21 in number) contributing to a QTL with a sample size of 1000. In contrast, across the scenarios studied, statistical methods that are specialized for mapping rare variants increased power regardless of whether multiple rare variants or a single rare variant underlie a QTL. Different methods for combining rare variants in the test single nucleotide polymorphism set resulted in similar power irrespective of the proportion of total genetic variance explained by the QTL. However, when the QTL variance is very small (only 0.1 % of the total genetic variance), these specialized methods for mapping rare variants and LMM generally had no power to map the variants within a gene with sample sizes of 1000 or 5000. We observed that the methods that combine multiple rare variants within a gene into a meta-variant generally had greater power to map rare variants compared to LMM. Therefore, it is recommended to use rare variant association mapping methods to map rare genetic variants that affect quantitative traits in livestock, such as bovine populations.
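
    A simple burden-style collapsing test of the kind compared in such studies can be sketched as follows (simulated genotypes and phenotype; a real cattle analysis would additionally model family relationships, e.g. with an LMM).

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(8)
        n, n_variants = 1000, 15
        maf = rng.uniform(0.001, 0.01, n_variants)                 # rare allele frequencies
        genotypes = rng.binomial(2, maf, size=(n, n_variants))     # 0/1/2 allele counts
        effects = rng.choice([0.0, 0.5], size=n_variants, p=[0.6, 0.4])
        phenotype = genotypes @ effects + rng.normal(size=n)

        burden = genotypes.sum(axis=1)                             # collapsed meta-variant
        fit = sm.OLS(phenotype, sm.add_constant(burden)).fit()
        print(fit.pvalues[1])                                      # gene-level association test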

  18. Quantifying the vascular response to ischemia with speckle variance optical coherence tomography

    PubMed Central

    Poole, Kristin M.; McCormack, Devin R.; Patil, Chetan A.; Duvall, Craig L.; Skala, Melissa C.

    2014-01-01

    Longitudinal monitoring techniques for preclinical models of vascular remodeling are critical to the development of new therapies for pathological conditions such as ischemia and cancer. In models of skeletal muscle ischemia in particular, there is a lack of quantitative, non-invasive and long term assessment of vessel morphology. Here, we have applied speckle variance optical coherence tomography (OCT) methods to quantitatively assess vascular remodeling and growth in a mouse model of peripheral arterial disease. This approach was validated on two different mouse strains known to have disparate rates and abilities of recovering following induction of hind limb ischemia. These results establish the potential for speckle variance OCT as a tool for quantitative, preclinical screening of pro- and anti-angiogenic therapies. PMID:25574425

  19. An analytic technique for statistically modeling random atomic clock errors in estimation

    NASA Technical Reports Server (NTRS)

    Fell, P. J.

    1981-01-01

    Minimum variance estimation requires that the statistics of random observation errors be modeled properly. If measurements are derived through the use of atomic frequency standards, then one source of error affecting the observable is random fluctuation in frequency. This is the case, for example, with range and integrated Doppler measurements from satellites of the Global Positioning System, and with baseline determination for geodynamic applications. An analytic method is presented which approximates the statistics of this random process. The procedure starts with a model of the Allan variance for a particular oscillator and develops the statistics of range and integrated Doppler measurements. A series of five first-order Markov processes is used to approximate the power spectral density obtained from the Allan variance.
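
    For reference, the (non-overlapping) Allan variance that the procedure starts from can be computed as below; the synthetic white-frequency noise stands in for a real clock record.

        import numpy as np

        def allan_variance(y, m):
            """y: fractional frequency samples; m: averaging factor (samples per block).
            sigma_y^2(tau) = 0.5 * <(ybar_{k+1} - ybar_k)^2> for tau = m * sample interval."""
            n_blocks = len(y) // m
            block_means = y[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
            return 0.5 * np.mean(np.diff(block_means) ** 2)

        y = np.random.default_rng(9).normal(0, 1e-12, 100_000)
        for m in (1, 10, 100, 1000):
            print(m, allan_variance(y, m))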

  20. Approximate sample size formulas for the two-sample trimmed mean test with unequal variances.

    PubMed

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2007-05-01

    Yuen's two-sample trimmed mean test statistic is one of the most robust methods to apply when variances are heterogeneous. The present study develops formulas for the sample size required for the test. The formulas are applicable for the cases of unequal variances, non-normality and unequal sample sizes. Given the specified alpha and the power (1-beta), the minimum sample size needed by the proposed formulas under various conditions is less than is given by the conventional formulas. Moreover, given a specified size of sample calculated by the proposed formulas, simulation results show that Yuen's test can achieve statistical power which is generally superior to that of the approximate t test. A numerical example is provided.
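
    A quick illustration of Yuen's trimmed-mean test under unequal variances and unequal sample sizes follows; it assumes SciPy 1.7 or later, where stats.ttest_ind accepts a trim argument implementing Yuen's test.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(10)
        a = rng.normal(0.0, 1.0, 40)
        b = rng.normal(0.5, 3.0, 25)          # unequal variance, unequal n

        # Yuen's test: Welch-type statistic on 20%-trimmed means (requires SciPy >= 1.7)
        yuen = stats.ttest_ind(a, b, equal_var=False, trim=0.2)
        welch = stats.ttest_ind(a, b, equal_var=False)
        print(yuen.pvalue, welch.pvalue)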
