Sample records for reducing sample variance

  1. Estimating the encounter rate variance in distance sampling

    USGS Publications Warehouse

    Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.

    2009-01-01

    The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.
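
    The estimators discussed above are specific to the distance-sampling literature, but the basic calculation is easy to illustrate. Below is a minimal Python sketch (not the authors' exact estimators) that computes a between-line encounter rate variance treating the K transect lines as a simple random sample, and then repeats the calculation after poststratifying adjacent lines; the pairing rule and toy counts are illustrative assumptions.

    ```python
    import numpy as np

    def encounter_rate_var(n_k, l_k):
        """Between-line variance of the encounter rate n/L, treating the K lines
        as a simple random sample (line lengths used as weights).  This is one
        generic form; the paper compares several related estimators."""
        n_k, l_k = np.asarray(n_k, float), np.asarray(l_k, float)
        K, L, n = len(n_k), l_k.sum(), n_k.sum()
        rate = n / L
        # length-weighted squared deviations of per-line rates from the pooled rate
        return K / (L**2 * (K - 1)) * np.sum(l_k**2 * (n_k / l_k - rate) ** 2)

    def poststratified_var(n_k, l_k, stratum):
        """Combine within-stratum variances of the stratum encounter rates,
        weighting by (stratum length / total length)^2.  Poststratifying adjacent
        lines reduces bias when a systematic design follows a spatial trend."""
        n_k, l_k = np.asarray(n_k, float), np.asarray(l_k, float)
        L, var = l_k.sum(), 0.0
        for s in np.unique(stratum):
            m = stratum == s
            var += (l_k[m].sum() / L) ** 2 * encounter_rate_var(n_k[m], l_k[m])
        return var

    # toy systematic survey: 8 equal-length lines with counts trending north to south
    counts  = np.array([2, 3, 5, 6, 9, 11, 14, 16])
    lengths = np.full(8, 1.0)
    strata  = np.repeat(np.arange(4), 2)          # pair neighbouring lines
    print(encounter_rate_var(counts, lengths))
    print(poststratified_var(counts, lengths, strata))
    ```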

  2. Network Structure and Biased Variance Estimation in Respondent Driven Sampling

    PubMed Central

    Verdery, Ashton M.; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J.

    2015-01-01

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network. PMID:26679927

  3. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST)

    PubMed Central

    Xu, Chonggang; Gertner, George

    2013-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037
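
    As context for the abstract above, here is a minimal sketch of the classical search-curve FAST estimate of first-order partial variances; it does not implement the paper's extensions to interaction effects, and the frequency set and Ishigami test function are illustrative assumptions rather than an interference-free production setup.

    ```python
    import numpy as np

    def fast_first_order(model, omegas, n_samples=1025, harmonics=4):
        """Classical search-curve FAST: each input is driven by its own frequency,
        and the partial variance of input i is recovered from the Fourier
        amplitudes at the harmonics of omega_i."""
        s = np.pi * (2 * np.arange(1, n_samples + 1) - n_samples - 1) / n_samples
        # search-curve transformation onto [0, 1] for every input
        x = 0.5 + np.arcsin(np.sin(np.outer(omegas, s))) / np.pi
        y = model(x)
        p = np.arange(1, (n_samples - 1) // 2 + 1)
        A = np.cos(np.outer(p, s)) @ y / n_samples
        B = np.sin(np.outer(p, s)) @ y / n_samples
        spectrum = A**2 + B**2
        total_var = 2 * spectrum.sum()
        partial = [2 * spectrum[[h * w - 1 for h in range(1, harmonics + 1)]].sum()
                   for w in omegas]
        return np.array(partial) / total_var      # first-order sensitivity indices

    def ishigami(x):                  # inputs rescaled from [0, 1] to [-pi, pi]
        x = -np.pi + 2 * np.pi * x
        return np.sin(x[0]) + 7 * np.sin(x[1])**2 + 0.1 * x[2]**4 * np.sin(x[0])

    # frequencies are illustrative only; real FAST uses interference-free sets
    print(fast_first_order(ishigami, omegas=[11, 35, 77]))
    ```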

  4. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST).

    PubMed

    Xu, Chonggang; Gertner, George

    2011-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements.

  5. Generalized Variance Function Applications in Forestry

    Treesearch

    James Alegria; Charles T. Scott

    1991-01-01

    Adequately predicting the sampling errors of tabular data can reduce printing costs by eliminating the need to publish separate sampling error tables. Two generalized variance functions (GVFs) found in the literature and three GVFs derived for this study were evaluated for their ability to predict the sampling error of tabular forestry estimates. The recommended GVFs...

  6. Comparison of the efficiency between two sampling plans for aflatoxins analysis in maize

    PubMed Central

    Mallmann, Adriano Olnei; Marchioro, Alexandro; Oliveira, Maurício Schneider; Rauber, Ricardo Hummes; Dilkin, Paulo; Mallmann, Carlos Augusto

    2014-01-01

    Variance and performance of two sampling plans for aflatoxins quantification in maize were evaluated. Eight lots of maize were sampled using two plans: manual, using sampling spear for kernels; and automatic, using a continuous flow to collect milled maize. Total variance and sampling, preparation, and analysis variance were determined and compared between plans through multifactor analysis of variance. Four theoretical distribution models were used to compare aflatoxins quantification distributions in eight maize lots. The acceptance and rejection probabilities for a lot under certain aflatoxin concentration were determined using variance and the information on the selected distribution model to build the operational characteristic curves (OC). Sampling and total variance were lower at the automatic plan. The OC curve from the automatic plan reduced both consumer and producer risks in comparison to the manual plan. The automatic plan is more efficient than the manual one because it expresses more accurately the real aflatoxin contamination in maize. PMID:24948911

  7. Efficiently estimating salmon escapement uncertainty using systematically sampled data

    USGS Publications Warehouse

    Reynolds, Joel H.; Woody, Carol Ann; Gove, Nancy E.; Fair, Lowell F.

    2007-01-01

    Fish escapement is generally monitored using nonreplicated systematic sampling designs (e.g., via visual counts from towers or hydroacoustic counts). These sampling designs support a variety of methods for estimating the variance of the total escapement. Unfortunately, all the methods give biased results, with the magnitude of the bias being determined by the underlying process patterns. Fish escapement commonly exhibits positive autocorrelation and nonlinear patterns, such as diurnal and seasonal patterns. For these patterns, poor choice of variance estimator can needlessly increase the uncertainty managers have to deal with in sustaining fish populations. We illustrate the effect of sampling design and variance estimator choice on variance estimates of total escapement for anadromous salmonids from systematic samples of fish passage. Using simulated tower counts of sockeye salmon Oncorhynchus nerka escapement on the Kvichak River, Alaska, five variance estimators for nonreplicated systematic samples were compared to determine the least biased. Using the least biased variance estimator, four confidence interval estimators were compared for expected coverage and mean interval width. Finally, five systematic sampling designs were compared to determine the design giving the smallest average variance estimate for total annual escapement. For nonreplicated systematic samples of fish escapement, all variance estimators were positively biased. Compared to the other estimators, the least biased estimator reduced bias by, on average, from 12% to 98%. All confidence intervals gave effectively identical results. Replicated systematic sampling designs consistently provided the smallest average estimated variance among those compared.
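
    The specific estimators compared in the paper are not reproduced here, but the contrast between a simple-random-sampling variance formula and a successive-difference estimator, a common choice for nonreplicated systematic samples with smooth trends, can be sketched as follows; the simulated passage data and the expansion to a total are illustrative assumptions.

    ```python
    import numpy as np

    def srs_var_of_total(y, N):
        """Variance of the expanded total N*mean(y), pretending the systematic
        sample is a simple random sample (usually biased for trended counts)."""
        n = len(y)
        return N**2 * (1 - n / N) * np.var(y, ddof=1) / n

    def successive_difference_var_of_total(y, N):
        """Successive-difference estimator: replaces the sample variance with half
        the mean squared difference of neighbouring observations, which removes
        much of the contribution of smooth trends such as a seasonal run curve."""
        n = len(y)
        sd2 = np.sum(np.diff(y) ** 2) / (2 * (n - 1))
        return N**2 * (1 - n / N) * sd2 / n

    # simulated run: a slow seasonal pulse of fish passage plus count noise,
    # observed with a nonreplicated systematic sample every 6th hour
    rng = np.random.default_rng(1)
    hours = np.arange(24 * 30)
    trend = 400 * np.exp(-0.5 * ((hours - 360) / 120) ** 2)
    passage = rng.poisson(trend + 20)
    sample = passage[::6]
    print(srs_var_of_total(sample, N=hours.size))
    print(successive_difference_var_of_total(sample, N=hours.size))
    ```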

  8. Allocating Sample Sizes to Reduce Budget for Fixed-Effect 2×2 Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2016-01-01

    This article discusses the sample size requirements for the interaction, row, and column effects, respectively, by forming a linear contrast for a 2×2 factorial design for fixed-effects heterogeneous analysis of variance. The proposed method uses the Welch t test and its corresponding degrees of freedom to calculate the final sample size in a…
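
    The allocation procedure itself is not given in the truncated abstract; as background, the sketch below shows the ingredients it builds on: the Welch-Satterthwaite degrees of freedom and an approximate power calculation with a noncentral t, wrapped in a simple search for the smallest per-group sizes. Function names and the heterogeneous-variance scenario are illustrative assumptions.

    ```python
    import numpy as np
    from scipy import stats

    def welch_power(n1, n2, mean_diff, sd1, sd2, alpha=0.05):
        """Approximate power of the Welch t test for a two-group contrast, using
        the Welch-Satterthwaite degrees of freedom and a noncentral t."""
        se2 = sd1**2 / n1 + sd2**2 / n2
        df = se2**2 / ((sd1**2 / n1)**2 / (n1 - 1) + (sd2**2 / n2)**2 / (n2 - 1))
        ncp = mean_diff / np.sqrt(se2)             # noncentrality parameter
        t_crit = stats.t.ppf(1 - alpha / 2, df)
        return 1 - stats.nct.cdf(t_crit, df, ncp) + stats.nct.cdf(-t_crit, df, ncp)

    def smallest_n(mean_diff, sd1, sd2, power=0.8, ratio=1.0):
        """Smallest per-group sizes (n1, n2 = ratio*n1) reaching the target power."""
        n1 = 2
        while welch_power(n1, int(np.ceil(ratio * n1)), mean_diff, sd1, sd2) < power:
            n1 += 1
        return n1, int(np.ceil(ratio * n1))

    print(smallest_n(mean_diff=0.5, sd1=1.0, sd2=2.0))
    ```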

  9. Consistent Small-Sample Variances for Six Gamma-Family Measures of Ordinal Association

    ERIC Educational Resources Information Center

    Woods, Carol M.

    2009-01-01

    Gamma-family measures are bivariate ordinal correlation measures that form a family because they all reduce to Goodman and Kruskal's gamma in the absence of ties (1954). For several gamma-family indices, more than one variance estimator has been introduced. In previous research, the "consistent" variance estimator described by Cliff and…

  10. Importance sampling variance reduction for the Fokker–Planck rarefied gas particle method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collyer, B.S., E-mail: benjamin.collyer@gmail.com; London Mathematical Laboratory, 14 Buckingham Street, London WC2N 6DF; Connaughton, C.

    The Fokker–Planck approximation to the Boltzmann equation, solved numerically by stochastic particle schemes, is used to provide estimates for rarefied gas flows. This paper presents a variance reduction technique for a stochastic particle method that is able to greatly reduce the uncertainty of the estimated flow fields when the characteristic speed of the flow is small in comparison to the thermal velocity of the gas. The method relies on importance sampling, requiring minimal changes to the basic stochastic particle scheme. We test the importance sampling scheme on a homogeneous relaxation, planar Couette flow and a lid-driven-cavity flow, and find that our method is able to greatly reduce the noise of estimated quantities. Significantly, we find that as the characteristic speed of the flow decreases, the variance of the noisy estimators becomes independent of the characteristic speed.
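
    The paper's particle-weighting scheme is specific to the Fokker–Planck solver; the sketch below is only a generic textbook illustration of likelihood-ratio importance sampling, estimating a quantity dominated by rare contributions by sampling from a shifted proposal and reweighting by the density ratio.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, threshold = 100_000, 4.0                  # estimate P(X > 4), X ~ N(0, 1)

    # Plain Monte Carlo: almost every sample misses the tail, so the estimate is noisy.
    x = rng.normal(0.0, 1.0, n)
    plain = (x > threshold).mean()

    # Importance sampling: draw from a proposal shifted into the tail and reweight
    # each sample by the likelihood ratio p(y)/q(y) of target over proposal.
    y = rng.normal(threshold, 1.0, n)
    weights = np.exp(-0.5 * y**2 + 0.5 * (y - threshold) ** 2)   # N(0,1) / N(4,1)
    tilted = ((y > threshold) * weights).mean()

    exact = 3.167e-5                             # Phi(-4), for reference
    print(plain, tilted, exact)
    ```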

  11. Why weight? Modelling sample and observational level variability improves power in RNA-seq analyses

    PubMed Central

    Liu, Ruijie; Holik, Aliaksei Z.; Su, Shian; Jansz, Natasha; Chen, Kelan; Leong, Huei San; Blewitt, Marnie E.; Asselin-Labat, Marie-Liesse; Smyth, Gordon K.; Ritchie, Matthew E.

    2015-01-01

    Variations in sample quality are frequently encountered in small RNA-sequencing experiments, and pose a major challenge in a differential expression analysis. Removal of high variation samples reduces noise, but at a cost of reducing power, thus limiting our ability to detect biologically meaningful changes. Similarly, retaining these samples in the analysis may not reveal any statistically significant changes due to the higher noise level. A compromise is to use all available data, but to down-weight the observations from more variable samples. We describe a statistical approach that facilitates this by modelling heterogeneity at both the sample and observational levels as part of the differential expression analysis. At the sample level this is achieved by fitting a log-linear variance model that includes common sample-specific or group-specific parameters that are shared between genes. The estimated sample variance factors are then converted to weights and combined with observational level weights obtained from the mean–variance relationship of the log-counts-per-million using ‘voom’. A comprehensive analysis involving both simulations and experimental RNA-sequencing data demonstrates that this strategy leads to a universally more powerful analysis and fewer false discoveries when compared to conventional approaches. This methodology has wide application and is implemented in the open-source ‘limma’ package. PMID:25925576
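
    The method is implemented in the open-source R/Bioconductor 'limma' package; as a language-agnostic illustration of the central idea, down-weighting observations from more variable samples in per-gene weighted regression, here is a small numpy sketch. The simulated data, the known sample standard deviations, and the inverse-variance weights are simplifying assumptions, not limma's actual estimation procedure.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_genes, groups = 500, np.array([0, 0, 0, 1, 1, 1])   # 3 control vs 3 treated
    sample_sd = np.array([1, 1, 3, 1, 1, 3])              # samples 3 and 6 are noisy

    # simulate log-expression: 10% of genes shift by +1 in the treated group
    effect = (rng.random(n_genes) < 0.1).astype(float)
    y = effect[:, None] * groups + rng.normal(0, sample_sd, (n_genes, len(groups)))

    X = np.column_stack([np.ones_like(groups), groups]).astype(float)

    def fit(y, weights):
        """Per-gene (weighted) least squares; returns the group-effect estimates."""
        W = np.diag(weights)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y.T)
        return beta[1]

    ols_effects = fit(y, np.ones(len(groups)))
    wls_effects = fit(y, 1.0 / sample_sd**2)      # down-weight the noisy samples

    # spread of effect estimates among true null genes: smaller under weighting
    print(np.std(ols_effects[effect == 0]), np.std(wls_effects[effect == 0]))
    ```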

  12. Empirical Bayes estimation of undercount in the decennial census.

    PubMed

    Cressie, N

    1989-12-01

    Empirical Bayes methods are used to estimate the extent of the undercount at the local level in the 1980 U.S. census. "Grouping of like subareas from areas such as states, counties, and so on into strata is a useful way of reducing the variance of undercount estimators. By modeling the subareas within a stratum to have a common mean and variances inversely proportional to their census counts, and by taking into account sampling of the areas (e.g., by dual-system estimation), empirical Bayes estimators that compromise between the (weighted) stratum average and the sample value can be constructed. The amount of compromise is shown to depend on the relative importance of stratum variance to sampling variance. These estimators are evaluated at the state level (51 states, including Washington, D.C.) and stratified on race/ethnicity (3 strata) using data from the 1980 postenumeration survey (PEP 3-8, for the noninstitutional population)." excerpt
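
    A minimal sketch of the kind of shrinkage described above is given below: each area's undercount estimate is pulled toward a weighted stratum mean, with the amount of shrinkage governed by the ratio of between-area (stratum) variance to sampling variance, the latter taken inversely proportional to the census count. The moment-style estimate of the stratum variance and the toy numbers are illustrative assumptions.

    ```python
    import numpy as np

    def eb_shrinkage(undercount_rate, census_count, c=1.0):
        """Shrink each area's estimated undercount rate toward the (weighted)
        stratum mean.  Sampling variance is modelled as c / census_count; the
        between-area variance tau2 is a crude moment estimate."""
        sampling_var = c / census_count
        w0 = 1.0 / sampling_var
        stratum_mean = np.average(undercount_rate, weights=w0)
        tau2 = max(np.average((undercount_rate - stratum_mean) ** 2, weights=w0)
                   - sampling_var.mean(), 0.0)
        shrink = tau2 / (tau2 + sampling_var)   # weight given to the area's own value
        return shrink * undercount_rate + (1 - shrink) * stratum_mean

    rates  = np.array([0.8, 2.5, 1.1, 4.0, 1.9])        # per-area undercount, percent
    counts = np.array([5.0, 0.5, 3.0, 0.2, 1.0]) * 1e6  # census counts per area
    print(eb_shrinkage(rates, counts, c=2e4))           # small areas shrink the most
    ```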

  13. Simulation Study Using a New Type of Sample Variance

    NASA Technical Reports Server (NTRS)

    Howe, D. A.; Lainson, K. J.

    1996-01-01

    We evaluate with simulated data a new type of sample variance for the characterization of frequency stability. The new statistic (referred to as TOTALVAR and its square root TOTALDEV) is a better predictor of long-term frequency variations than the present sample Allan deviation. The statistical model uses the assumption that a time series of phase or frequency differences is wrapped (periodic) with overall frequency difference removed. We find that the variability at long averaging times is reduced considerably for the five models of power-law noise commonly encountered with frequency standards and oscillators.
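
    TOTALVAR has a precise definition in the frequency-metrology literature that is not reproduced here; the sketch below only illustrates the ingredients the abstract mentions: the ordinary non-overlapping Allan variance of fractional-frequency data, plus a detrend-and-extend step standing in for the "wrapped" series idea. It should not be read as the exact TOTALVAR estimator.

    ```python
    import numpy as np

    def allan_variance(y, m):
        """Non-overlapping Allan variance of fractional-frequency data y at an
        averaging factor m (tau = m * tau0)."""
        n_blocks = len(y) // m
        block_means = y[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        return 0.5 * np.mean(np.diff(block_means) ** 2)

    def extended_allan_variance(y, m):
        """Rough illustration of the 'total' idea: remove the overall frequency
        drift, extend the series (here by reflection), and average the Allan
        variance over the longer record.  Not the exact TOTALVAR definition."""
        t = np.arange(len(y))
        detrended = y - np.polyval(np.polyfit(t, y, 1), t)
        extended = np.concatenate([detrended[::-1], detrended, detrended[::-1]])
        return allan_variance(extended, m)

    rng = np.random.default_rng(3)
    y = np.cumsum(rng.normal(0, 1e-12, 4096))       # random-walk frequency noise
    for m in (1, 16, 256):
        print(m, np.sqrt(allan_variance(y, m)), np.sqrt(extended_allan_variance(y, m)))
    ```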

  14. Why weight? Modelling sample and observational level variability improves power in RNA-seq analyses.

    PubMed

    Liu, Ruijie; Holik, Aliaksei Z; Su, Shian; Jansz, Natasha; Chen, Kelan; Leong, Huei San; Blewitt, Marnie E; Asselin-Labat, Marie-Liesse; Smyth, Gordon K; Ritchie, Matthew E

    2015-09-03

    Variations in sample quality are frequently encountered in small RNA-sequencing experiments, and pose a major challenge in a differential expression analysis. Removal of high variation samples reduces noise, but at a cost of reducing power, thus limiting our ability to detect biologically meaningful changes. Similarly, retaining these samples in the analysis may not reveal any statistically significant changes due to the higher noise level. A compromise is to use all available data, but to down-weight the observations from more variable samples. We describe a statistical approach that facilitates this by modelling heterogeneity at both the sample and observational levels as part of the differential expression analysis. At the sample level this is achieved by fitting a log-linear variance model that includes common sample-specific or group-specific parameters that are shared between genes. The estimated sample variance factors are then converted to weights and combined with observational level weights obtained from the mean-variance relationship of the log-counts-per-million using 'voom'. A comprehensive analysis involving both simulations and experimental RNA-sequencing data demonstrates that this strategy leads to a universally more powerful analysis and fewer false discoveries when compared to conventional approaches. This methodology has wide application and is implemented in the open-source 'limma' package. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  15. Evaluation and optimization of sampling errors for the Monte Carlo Independent Column Approximation

    NASA Astrophysics Data System (ADS)

    Räisänen, Petri; Barker, W. Howard

    2004-07-01

    The Monte Carlo Independent Column Approximation (McICA) method for computing domain-average broadband radiative fluxes is unbiased with respect to the full ICA, but its flux estimates contain conditional random noise. McICA's sampling errors are evaluated here using a global climate model (GCM) dataset and a correlated-k distribution (CKD) radiation scheme. Two approaches to reduce McICA's sampling variance are discussed. The first is to simply restrict all of McICA's samples to cloudy regions. This avoids wasting precious few samples on essentially homogeneous clear skies. Clear-sky fluxes need to be computed separately for this approach, but this is usually done in GCMs for diagnostic purposes anyway. Second, accuracy can be improved by repeated sampling, and averaging those CKD terms with large cloud radiative effects. Although this naturally increases computational costs over the standard CKD model, random errors for fluxes and heating rates are reduced by typically 50% to 60%, for the present radiation code, when the total number of samples is increased by 50%. When both variance reduction techniques are applied simultaneously, globally averaged flux and heating rate random errors are reduced by a factor of ∼3.

  16. Factors influencing heterogeneity of radiation-induced DNA-damage measured by the alkaline comet assay.

    PubMed

    Seidel, Clemens; Lautenschläger, Christine; Dunst, Jürgen; Müller, Arndt-Christian

    2012-04-20

    To investigate whether different conditions of DNA structure and radiation treatment could modify heterogeneity of response. Additionally to study variance as a potential parameter of heterogeneity for radiosensitivity testing. Two-hundred leukocytes per sample of healthy donors were split into four groups. I: Intact chromatin structure; II: Nucleoids of histone-depleted DNA; III: Nucleoids of histone-depleted DNA with 90 mM DMSO as antioxidant. Response to single (I-III) and twice (IV) irradiation with 4 Gy and repair kinetics were evaluated using %Tail-DNA. Heterogeneity of DNA damage was determined by calculation of variance of DNA-damage (V) and mean variance (Mvar), mutual comparisons were done by one-way analysis of variance (ANOVA). Heterogeneity of initial DNA-damage (I, 0 min repair) increased without histones (II). Absence of histones was balanced by addition of antioxidants (III). Repair reduced heterogeneity of all samples (with and without irradiation). However double irradiation plus repair led to a higher level of heterogeneity distinguishable from single irradiation and repair in intact cells. Increase of mean DNA damage was associated with a similarly elevated variance of DNA damage (r = +0.88). Heterogeneity of DNA-damage can be modified by histone level, antioxidant concentration, repair and radiation dose and was positively correlated with DNA damage. Experimental conditions might be optimized by reducing scatter of comet assay data by repair and antioxidants, potentially allowing better discrimination of small differences. Amount of heterogeneity measured by variance might be an additional useful parameter to characterize radiosensitivity.

  17. Evaluation of sampling plans to detect Cry9C protein in corn flour and meal.

    PubMed

    Whitaker, Thomas B; Trucksess, Mary W; Giesbrecht, Francis G; Slate, Andrew B; Thomas, Francis S

    2004-01-01

    StarLink is a genetically modified corn that produces an insecticidal protein, Cry9C. Studies were conducted to determine the variability and Cry9C distribution among sample test results when Cry9C protein was estimated in a bulk lot of corn flour and meal. Emphasis was placed on measuring sampling and analytical variances associated with each step of the test procedure used to measure Cry9C in corn flour and meal. Two commercially available enzyme-linked immunosorbent assay kits were used: one for the determination of Cry9C protein concentration and the other for % StarLink seed. The sampling and analytical variances associated with each step of the Cry9C test procedures were determined for flour and meal. Variances were found to be functions of Cry9C concentration, and regression equations were developed to describe the relationships. Because of the larger particle size, sampling variability associated with cornmeal was about double that for corn flour. For cornmeal, the sampling variance accounted for 92.6% of the total testing variability. The observed sampling and analytical distributions were compared with the Normal distribution. In almost all comparisons, the null hypothesis that the Cry9C protein values were sampled from a Normal distribution could not be rejected at 95% confidence limits. The Normal distribution and the variance estimates were used to evaluate the performance of several Cry9C protein sampling plans for corn flour and meal. Operating characteristic curves were developed and used to demonstrate the effect of increasing sample size on reducing false positives (seller's risk) and false negatives (buyer's risk).

  18. Variability of cytokine gene expression in intestinal tissue and the impact of normalization with the use of reference genes.

    PubMed

    McGowan, Ian; Janocko, Laura; Burneisen, Shaun; Bhat, Anand; Richardson-Harman, Nicola

    2015-01-01

    To determine the intra- and inter-subject variability of mucosal cytokine gene expression in rectal biopsies from healthy volunteers and to screen cytokine and chemokine mRNA as potential biomarkers of mucosal inflammation. Rectal biopsies were collected from 8 participants (3 biopsies per participant) and 1 additional participant (10 biopsies). Quantitative reverse transcription polymerase chain reaction (RT-qPCR) was used to quantify IL-1β, IL-6, IL-12p40, IL-8, IFN-γ, MIP-1α, MIP-1β, RANTES, and TNF-α gene expression in the rectal tissue. The intra-assay, inter-biopsy and inter-subject variance was measured in the eight participants. Bootstrap re-sampling of the biopsy measurements was performed to determine the accuracy of gene expression data obtained for 10 biopsies obtained from one participant. Cytokines were both non-normalized and normalized using four reference genes (GAPDH, β-actin, β2 microglobulin, and CD45). Cytokine measurement accuracy was increased with the number of biopsy samples, per person; four biopsies were typically needed to produce a mean result within a 95% confidence interval of the subject's cytokine level approximately 80% of the time. Intra-assay precision (% geometric standard deviation) ranged between 8.2 and 96.9 with high variance between patients and even between different biopsies from the same patient. Variability was not greatly reduced with the use of reference genes to normalize data. The number of biopsy samples required to provide an accurate result varied by target although 4 biopsy samples per subject and timepoint, provided for >77% accuracy across all targets tested. Biopsies within the same subjects and between subjects had similar levels of variance while variance within a biopsy (intra-assay) was generally lower. Normalization of inflammatory cytokines against reference genes failed to consistently reduce variance. The accuracy and reliability of mRNA expression of inflammatory cytokines will set a ceiling on the ability of these measures to predict mucosal inflammation. Techniques to reduce variability should be developed within a larger cohort of individuals before normative reference values can be validated. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Factors influencing heterogeneity of radiation-induced DNA-damage measured by the alkaline comet assay

    PubMed Central

    2012-01-01

    Background To investigate whether different conditions of DNA structure and radiation treatment could modify heterogeneity of response. Additionally to study variance as a potential parameter of heterogeneity for radiosensitivity testing. Methods Two-hundred leukocytes per sample of healthy donors were split into four groups. I: Intact chromatin structure; II: Nucleoids of histone-depleted DNA; III: Nucleoids of histone-depleted DNA with 90 mM DMSO as antioxidant. Response to single (I-III) and twice (IV) irradiation with 4 Gy and repair kinetics were evaluated using %Tail-DNA. Heterogeneity of DNA damage was determined by calculation of variance of DNA-damage (V) and mean variance (Mvar), mutual comparisons were done by one-way analysis of variance (ANOVA). Results Heterogeneity of initial DNA-damage (I, 0 min repair) increased without histones (II). Absence of histones was balanced by addition of antioxidants (III). Repair reduced heterogeneity of all samples (with and without irradiation). However double irradiation plus repair led to a higher level of heterogeneity distinguishable from single irradiation and repair in intact cells. Increase of mean DNA damage was associated with a similarly elevated variance of DNA damage (r = +0.88). Conclusions Heterogeneity of DNA-damage can be modified by histone level, antioxidant concentration, repair and radiation dose and was positively correlated with DNA damage. Experimental conditions might be optimized by reducing scatter of comet assay data by repair and antioxidants, potentially allowing better discrimination of small differences. Amount of heterogeneity measured by variance might be an additional useful parameter to characterize radiosensitivity. PMID:22520045

  20. Using the Positive and Negative Syndrome Scale (PANSS) to Define Different Domains of Negative Symptoms

    PubMed Central

    Khan, Anzalee; Keefe, Richard S. E.

    2017-01-01

    Background: Reduced emotional experience and expression are two domains of negative symptoms. The authors assessed these two domains of negative symptoms using previously developed Positive and Negative Syndrome Scale (PANSS) factors. Using an existing dataset, the authors predicted three different elements of everyday functioning (social, vocational, and everyday activities) with these two factors, as well as with performance on measures of functional capacity. Methods: A large (n=630) sample of people with schizophrenia was used as the data source of this study. Using regression analyses, the authors predicted the three different aspects of everyday functioning, first with just the two Positive and Negative Syndrome Scale factors and then with a global negative symptom factor. Finally, we added neurocognitive performance and functional capacity as predictors. Results: The Positive and Negative Syndrome Scale reduced emotional experience factor accounted for 21 percent of the variance in everyday social functioning, while reduced emotional expression accounted for no variance. The total Positive and Negative Syndrome Scale negative symptom factor accounted for less variance (19%) than the reduced experience factor alone. The Positive and Negative Syndrome Scale expression factor accounted for, at most, one percent of the variance in any of the functional outcomes, with or without the addition of other predictors. Implications: Reduced emotional experience measured with the Positive and Negative Syndrome Scale, often referred to as “avolition and anhedonia,” specifically predicted impairments in social outcomes. Further, reduced experience predicted social impairments better than emotional expression or the total Positive and Negative Syndrome Scale negative symptom factor. In this cross-sectional study, reduced emotional experience was specifically related with social outcomes, accounting for essentially no variance in work or everyday activities, and being the sole meaningful predictor of impairment in social outcomes. PMID:29410933

  1. Post-stratified estimation: with-in strata and total sample size recommendations

    Treesearch

    James A. Westfall; Paul L. Patterson; John W. Coulston

    2011-01-01

    Post-stratification is used to reduce the variance of estimates of the mean. Because the stratification is not fixed in advance, within-strata sample sizes can be quite small. The survey statistics literature provides some guidance on minimum within-strata sample sizes; however, the recommendations and justifications are inconsistent and apply broadly for many...
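
    As background to the within-strata sample-size question, a minimal sketch of post-stratified estimation and its usual large-sample variance approximation is shown below; the stratum weights and simulated data are illustrative assumptions.

    ```python
    import numpy as np

    def post_stratified_mean_and_var(y, stratum, weights):
        """Post-stratified estimate of the population mean and the standard
        large-sample variance approximation
            V = (1/n) * sum_h W_h S_h^2  +  (1/n^2) * sum_h (1 - W_h) S_h^2,
        with known population stratum weights W_h and within-stratum sample
        variances S_h^2.  Very small strata make S_h^2 (and hence V) unstable,
        which is the sample-size issue the record addresses."""
        n = len(y)
        mean = var1 = var2 = 0.0
        for h, W in weights.items():
            yh = y[stratum == h]
            mean += W * yh.mean()
            var1 += W * yh.var(ddof=1)
            var2 += (1 - W) * yh.var(ddof=1)
        return mean, var1 / n + var2 / n**2

    rng = np.random.default_rng(4)
    stratum = rng.choice(["forest", "nonforest"], size=60, p=[0.7, 0.3])
    y = np.where(stratum == "forest", rng.normal(120, 30, 60), rng.normal(15, 10, 60))
    print(post_stratified_mean_and_var(y, stratum, {"forest": 0.7, "nonforest": 0.3}))
    ```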

  2. Improved Horvitz-Thompson Estimation of Model Parameters from Two-phase Stratified Samples: Applications in Epidemiology

    PubMed Central

    Breslow, Norman E.; Lumley, Thomas; Ballantyne, Christie M; Chambless, Lloyd E.; Kulich, Michal

    2009-01-01

    The case-cohort study involves two-phase sampling: simple random sampling from an infinite super-population at phase one and stratified random sampling from a finite cohort at phase two. Standard analyses of case-cohort data involve solution of inverse probability weighted (IPW) estimating equations, with weights determined by the known phase two sampling fractions. The variance of parameter estimates in (semi)parametric models, including the Cox model, is the sum of two terms: (i) the model based variance of the usual estimates that would be calculated if full data were available for the entire cohort; and (ii) the design based variance from IPW estimation of the unknown cohort total of the efficient influence function (IF) contributions. This second variance component may be reduced by adjusting the sampling weights, either by calibration to known cohort totals of auxiliary variables correlated with the IF contributions or by their estimation using these same auxiliary variables. Both adjustment methods are implemented in the R survey package. We derive the limit laws of coefficients estimated using adjusted weights. The asymptotic results suggest practical methods for construction of auxiliary variables that are evaluated by simulation of case-cohort samples from the National Wilms Tumor Study and by log-linear modeling of case-cohort data from the Atherosclerosis Risk in Communities Study. Although not semiparametric efficient, estimators based on adjusted weights may come close to achieving full efficiency within the class of augmented IPW estimators. PMID:20174455

  3. A model and variance reduction method for computing statistical outputs of stochastic elliptic partial differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vidal-Codina, F., E-mail: fvidal@mit.edu; Nguyen, N.C., E-mail: cuongng@mit.edu; Giles, M.B., E-mail: mike.giles@maths.ox.ac.uk

    We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
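
    The structure being exploited, shifting most samples to a cheap approximation and correcting with a few expensive evaluations, can be shown with a stripped-down two-level estimator; the toy "models" below stand in for the reduced basis and HDG solvers and are purely illustrative, not the paper's algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def high_fidelity(z):            # stand-in for the expensive high-fidelity solve
        return np.sin(z) + 0.05 * z**2

    def low_fidelity(z):             # stand-in for a cheap surrogate
        return np.sin(z)

    # Two-level estimator of E[s_HF(Z)], Z ~ N(0, 1):
    #   E[s_HF] = E[s_LF] + E[s_HF - s_LF]
    # The first term uses many cheap samples; the second (small variance because
    # the two models are strongly correlated) uses only a few expensive ones.
    z_cheap = rng.normal(size=100_000)
    z_exp = rng.normal(size=200)
    two_level = (low_fidelity(z_cheap).mean()
                 + (high_fidelity(z_exp) - low_fidelity(z_exp)).mean())

    # Plain Monte Carlo with the same expensive budget, for comparison.
    plain = high_fidelity(rng.normal(size=200)).mean()

    exact = 0.05                     # E[sin Z] = 0 and E[0.05 Z^2] = 0.05
    print(two_level, plain, exact)
    ```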

  4. The quantitative LOD score: test statistic and sample size for exclusion and linkage of quantitative traits in human sibships.

    PubMed

    Page, G P; Amos, C I; Boerwinkle, E

    1998-04-01

    We present a test statistic, the quantitative LOD (QLOD) score, for the testing of both linkage and exclusion of quantitative-trait loci in randomly selected human sibships. As with the traditional LOD score, the boundary values of 3, for linkage, and -2, for exclusion, can be used for the QLOD score. We investigated the sample sizes required for inferring exclusion and linkage, for various combinations of linked genetic variance, total heritability, recombination distance, and sibship size, using fixed-size sampling. The sample sizes required for both linkage and exclusion were not qualitatively different and depended on the percentage of variance being linked or excluded and on the total genetic variance. Information regarding linkage and exclusion in sibships larger than size 2 increased approximately as the number of possible pairs, n(n-1)/2, up to sibships of size 6. Increasing the recombination distance (θ) between the marker and the trait loci empirically reduced the power for both linkage and exclusion, approximately as a function of (1 - 2θ)⁴.

  5. Strategies Used by Adults to Reduce Their Prescription Drug Costs

    MedlinePlus

    ... on their 2010 income (5). Data source and methods: Data from the 2011 NHIS were used for ... sample design of NHIS. The Taylor series linearization method was chosen for variance estimation. All estimates shown ...

  6. Using the Positive and Negative Syndrome Scale (PANSS) to Define Different Domains of Negative Symptoms: Prediction of Everyday Functioning by Impairments in Emotional Expression and Emotional Experience.

    PubMed

    Harvey, Philip D; Khan, Anzalee; Keefe, Richard S E

    2017-12-01

    Background: Reduced emotional experience and expression are two domains of negative symptoms. The authors assessed these two domains of negative symptoms using previously developed Positive and Negative Syndrome Scale (PANSS) factors. Using an existing dataset, the authors predicted three different elements of everyday functioning (social, vocational, and everyday activities) with these two factors, as well as with performance on measures of functional capacity. Methods: A large (n=630) sample of people with schizophrenia was used as the data source of this study. Using regression analyses, the authors predicted the three different aspects of everyday functioning, first with just the two Positive and Negative Syndrome Scale factors and then with a global negative symptom factor. Finally, we added neurocognitive performance and functional capacity as predictors. Results: The Positive and Negative Syndrome Scale reduced emotional experience factor accounted for 21 percent of the variance in everyday social functioning, while reduced emotional expression accounted for no variance. The total Positive and Negative Syndrome Scale negative symptom factor accounted for less variance (19%) than the reduced experience factor alone. The Positive and Negative Syndrome Scale expression factor accounted for, at most, one percent of the variance in any of the functional outcomes, with or without the addition of other predictors. Implications: Reduced emotional experience measured with the Positive and Negative Syndrome Scale, often referred to as "avolition and anhedonia," specifically predicted impairments in social outcomes. Further, reduced experience predicted social impairments better than emotional expression or the total Positive and Negative Syndrome Scale negative symptom factor. In this cross-sectional study, reduced emotional experience was specifically related with social outcomes, accounting for essentially no variance in work or everyday activities, and being the sole meaningful predictor of impairment in social outcomes.

  7. On the impact of relatedness on SNP association analysis.

    PubMed

    Gross, Arnd; Tönjes, Anke; Scholz, Markus

    2017-12-06

    When testing for SNP (single nucleotide polymorphism) associations in related individuals, observations are not independent. Simple linear regression assuming independent normally distributed residuals results in an increased type I error, and the power of the test is also affected in a more complicated manner. Inflation of type I error is often successfully corrected by genomic control. However, this reduces the power of the test when relatedness is of concern. In the present paper, we derive explicit formulae to investigate how heritability and strength of relatedness contribute to variance inflation of the effect estimate of the linear model. Further, we study the consequences of variance inflation on hypothesis testing and compare the results with those of genomic control correction. We apply the developed theory to the publicly available HapMap trio data (N=129), the Sorbs (a self-contained population with N=977 characterised by a cryptic relatedness structure) and synthetic family studies with different sample sizes (ranging from N=129 to N=999) and different degrees of relatedness. We derive explicit and easy-to-apply approximation formulae to estimate the impact of relatedness on the variance of the effect estimate of the linear regression model. Variance inflation increases with increasing heritability. Relatedness structure also impacts the degree of variance inflation, as shown for the example family structures. Variance inflation is smallest for HapMap trios, followed by a synthetic family study corresponding to the trio data but with larger sample size than HapMap. The next strongest inflation is observed for the Sorbs, and finally for a synthetic family study with a more extreme relatedness structure but a similar sample size to the Sorbs. Type I error increases rapidly with increasing inflation. However, for smaller significance levels, power increases with increasing inflation while the opposite holds for larger significance levels. When genomic control is applied, type I error is preserved while power decreases rapidly with increasing variance inflation. Stronger relatedness and higher heritability result in increased variance of the effect estimate of simple linear regression analysis. While type I error rates are generally inflated, the behaviour of power is more complex since power can be increased or reduced depending on relatedness and the heritability of the phenotype. Genomic control cannot be recommended to deal with inflation due to relatedness. Although it preserves type I error, the loss in power can be considerable. We provide a simple formula for estimating variance inflation given the relatedness structure and the heritability of a trait of interest. As a rule of thumb, variance inflation below 1.05 does not require correction, and simple linear regression analysis is still appropriate.

  8. Repeat sample intraocular pressure variance in induced and naturally ocular hypertensive monkeys.

    PubMed

    Dawson, William W; Dawson, Judyth C; Hope, George M; Brooks, Dennis E; Percicot, Christine L

    2005-12-01

    To compare repeat-sample means variance of laser induced ocular hypertension (OH) in rhesus monkeys with the repeat-sample mean variance of natural OH in age-range matched monkeys of similar and dissimilar pedigrees. Multiple monocular, retrospective, intraocular pressure (IOP) measures were recorded repeatedly during a short sampling interval (SSI, 1-5 months) and a long sampling interval (LSI, 6-36 months). There were 5-13 eyes in each SSI and LSI subgroup. Each interval contained subgroups from the Florida with natural hypertension (NHT), induced hypertension (IHT1) Florida monkeys, unrelated (Strasbourg, France) induced hypertensives (IHT2), and Florida age-range matched controls (C). Repeat-sample individual variance means and related IOPs were analyzed by a parametric analysis of variance (ANOV) and results compared to non-parametric Kruskal-Wallis ANOV. As designed, all group intraocular pressure distributions were significantly different (P < or = 0.009) except for the two (Florida/Strasbourg) induced OH groups. A parametric 2 x 4 design ANOV for mean variance showed large significant effects due to treatment group and sampling interval. Similar results were produced by the nonparametric ANOV. Induced OH sample variance (LSI) was 43x the natural OH sample variance-mean. The same relationship for the SSI was 12x. Laser induced ocular hypertension in rhesus monkeys produces large IOP repeat-sample variance mean results compared to controls and natural OH.

  9. Signal detection theory and vestibular perception: III. Estimating unbiased fit parameters for psychometric functions.

    PubMed

    Chaudhuri, Shomesh E; Merfeld, Daniel M

    2013-03-01

    Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
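
    The bias-reduced fit itself is specific to the paper; for context, here is a plain cumulative-Gaussian maximum-likelihood fit to binary responses, the kind of fit whose spread parameter becomes biased when stimuli come from an adaptive staircase. Function and parameter names, and the simulated session, are illustrative assumptions.

    ```python
    import numpy as np
    from scipy import optimize, stats

    def fit_cumulative_gaussian(stimulus, response):
        """Maximum-likelihood fit of P(correct) = Phi((x - mu) / sigma) to binary
        responses; returns (mu, sigma)."""
        def neg_log_lik(params):
            mu, log_sigma = params
            p = stats.norm.cdf((stimulus - mu) / np.exp(log_sigma))
            p = np.clip(p, 1e-9, 1 - 1e-9)
            return -np.sum(response * np.log(p) + (1 - response) * np.log(1 - p))
        res = optimize.minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
        mu, log_sigma = res.x
        return mu, np.exp(log_sigma)

    # simulate a session: stimulus levels and Bernoulli responses from a known model
    rng = np.random.default_rng(6)
    true_mu, true_sigma = 0.0, 1.5
    x = rng.uniform(-4, 4, 300)
    r = (rng.random(300) < stats.norm.cdf((x - true_mu) / true_sigma)).astype(float)
    print(fit_cumulative_gaussian(x, r))
    ```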

  10. Developing the Noncentrality Parameter for Calculating Group Sample Sizes in Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2011-01-01

    Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…

  11. A multiple-objective optimal exploration strategy

    USGS Publications Warehouse

    Christakos, G.; Olea, R.A.

    1988-01-01

    Exploration for natural resources is accomplished through partial sampling of extensive domains. Such imperfect knowledge is subject to sampling error. Complex systems of equations resulting from modelling based on the theory of correlated random fields are reduced to simple analytical expressions providing global indices of estimation variance. The indices are utilized by multiple objective decision criteria to find the best sampling strategies. The approach is not limited by the geometric nature of the sampling, covers a wide range in spatial continuity and leads to a step-by-step procedure. © 1988.

  12. K-Fold Crossvalidation in Canonical Analysis.

    ERIC Educational Resources Information Center

    Liang, Kun-Hsia; And Others

    1995-01-01

    A computer-assisted, K-fold cross-validation technique is discussed in the framework of canonical correlation analysis of randomly generated data sets. Analysis results suggest that this technique can effectively reduce the contamination of canonical variates and canonical correlations by sample-specific variance components. (Author/SLD)
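
    A brief sketch of the idea: fit canonical variates on training folds and evaluate the canonical correlation on the held-out fold, so that sample-specific variance inflates the training correlation but not the cross-validated one. The use of scikit-learn's CCA and the random data are illustrative assumptions, not the procedure used in the cited study.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import CCA
    from sklearn.model_selection import KFold

    rng = np.random.default_rng(7)
    n = 120
    latent = rng.normal(size=n)                       # shared signal linking X and Y
    X = np.column_stack([latent, rng.normal(size=(n, 5))]) + rng.normal(0, 1, (n, 6))
    Y = np.column_stack([latent, rng.normal(size=(n, 3))]) + rng.normal(0, 1, (n, 4))

    cca = CCA(n_components=1)
    train_r, test_r = [], []
    for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
        cca.fit(X[train], Y[train])
        u_tr, v_tr = cca.transform(X[train], Y[train])
        u_te, v_te = cca.transform(X[test], Y[test])
        train_r.append(np.corrcoef(u_tr[:, 0], v_tr[:, 0])[0, 1])
        test_r.append(np.corrcoef(u_te[:, 0], v_te[:, 0])[0, 1])

    # the training correlation is inflated by sample-specific variance;
    # the K-fold estimate is the honest figure
    print(np.mean(train_r), np.mean(test_r))
    ```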

  13. A general unified framework to assess the sampling variance of heritability estimates using pedigree or marker-based relationships.

    PubMed

    Visscher, Peter M; Goddard, Michael E

    2015-01-01

    Heritability is a population parameter of importance in evolution, plant and animal breeding, and human medical genetics. It can be estimated using pedigree designs and, more recently, using relationships estimated from markers. We derive the sampling variance of the estimate of heritability for a wide range of experimental designs, assuming that estimation is by maximum likelihood and that the resemblance between relatives is solely due to additive genetic variation. We show that well-known results for balanced designs are special cases of a more general unified framework. For pedigree designs, the sampling variance is inversely proportional to the variance of relationship in the pedigree and it is proportional to 1/N, whereas for population samples it is approximately proportional to 1/N², where N is the sample size. Variation in relatedness is a key parameter in the quantification of the sampling variance of heritability. Consequently, the sampling variance is high for populations with large recent effective population size (e.g., humans) because this causes low variation in relationship. However, even using human population samples, low sampling variance is possible with high N. Copyright © 2015 by the Genetics Society of America.
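
    For population samples, a commonly quoted approximation consistent with the 1/N² scaling above is var(ĥ²) ≈ 2 / (N² var(A_jk)), where var(A_jk) is the variance of pairwise relatedness; the default value of 2 × 10⁻⁵ used in the sketch below for conventionally unrelated humans is an assumption taken from the wider literature, not from this abstract.

    ```python
    import numpy as np

    def se_h2_population_sample(n, var_relatedness=2e-5):
        """Approximate standard error of a marker-based heritability estimate from
        a population sample of conventionally unrelated individuals:
            var(h2_hat) ~ 2 / (N^2 * var(A_jk)).
        The default var(A_jk) = 2e-5 is an assumed, commonly quoted value."""
        return np.sqrt(2.0 / (n**2 * var_relatedness))

    for n in (3_000, 10_000, 30_000):
        print(n, round(se_h2_population_sample(n), 3))
    ```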

  14. The use of spatio-temporal correlation to forecast critical transitions

    NASA Astrophysics Data System (ADS)

    Karssenberg, Derek; Bierkens, Marc F. P.

    2010-05-01

    Complex dynamical systems may have critical thresholds at which the system shifts abruptly from one state to another. Such critical transitions have been observed in systems ranging from the human body system to financial markets and the Earth system. Forecasting the timing of critical transitions before they are reached is of paramount importance because critical transitions are associated with a large shift in dynamical regime of the system under consideration. However, it is hard to forecast critical transitions, because the state of the system shows relatively little change before the threshold is reached. Recently, it was shown that increased spatio-temporal autocorrelation and variance can serve as alternative early warning signals for critical transitions. However, thus far these second order statistics have not been used for forecasting in a data assimilation framework. Here we show that the use of spatio-temporal autocorrelation and variance in the state of the system reduces the uncertainty in the predicted timing of critical transitions compared to classical approaches that use the value of the system state only. This is shown by assimilating observed spatio-temporal autocorrelation and variance into a dynamical system model using a Particle Filter. We adapt a well-studied distributed model of a logistically growing resource with a fixed grazing rate. The model describes the transition from an underexploited system with high resource biomass to overexploitation as grazing pressure crosses the critical threshold, which is a fold bifurcation. To represent limited prior information, we use a large variance in the prior probability distributions of model parameters and the system driver (grazing rate). First, we show that the rate of increase in spatio-temporal autocorrelation and variance prior to reaching the critical threshold is relatively consistent across the uncertainty range of the driver and parameter values used. This indicates that increases in spatio-temporal autocorrelation and variance are consistent predictors of a critical transition, even under the condition of a poorly defined system. Second, we perform data assimilation experiments using an artificial exhaustive data set generated by one realization of the model. To mimic real-world sampling, an observational data set is created from this exhaustive data set. This is done by sampling on a regular spatio-temporal grid, supplemented by sampling locations at a short distance. Spatial and temporal autocorrelation in this observational data set is calculated for different spatial and temporal separation (lag) distances. To assign appropriate weights to observations (here, autocorrelation values and variance) in the Particle Filter, the covariance matrix of the error in these observations is required. This covariance matrix is estimated using Monte Carlo sampling, selecting a different random position of the sampling network relative to the exhaustive data set for each realization. At each update moment in the Particle Filter, observed autocorrelation values are assimilated into the model and the state of the model is updated. Using this approach, it is shown that the use of autocorrelation reduces the uncertainty in the forecasted timing of a critical transition compared to runs without data assimilation. The relative performance of spatial versus temporal autocorrelation depends on the timing and number of observational data. This study is restricted to a single model only. However, it is becoming increasingly clear that spatio-temporal autocorrelation and variance can be used as early warning signals for a large number of systems. Thus, it is expected that spatio-temporal autocorrelation and variance are valuable in data assimilation frameworks in a large number of dynamical systems.
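
    The early-warning indicators being assimilated, rising variance and lag-1 autocorrelation as the threshold is approached, are simple to compute; the sketch below estimates them in a rolling window for a toy time series whose recovery rate decays toward a transition. The model, window length, and noise level are illustrative assumptions; the paper assimilates the spatio-temporal versions of these statistics with a particle filter.

    ```python
    import numpy as np

    def rolling_indicators(series, window):
        """Rolling variance and lag-1 autocorrelation, the two generic
        early-warning statistics for an approaching critical transition."""
        var, ac1 = [], []
        for t in range(window, len(series)):
            w = series[t - window:t]
            var.append(np.var(w))
            ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
        return np.array(var), np.array(ac1)

    # toy system: the recovery rate (resilience) decays linearly toward zero, so
    # fluctuations become slower and larger as the transition is approached
    rng = np.random.default_rng(8)
    steps = 2000
    recovery = np.linspace(0.5, 0.01, steps)
    x = np.zeros(steps)
    for t in range(1, steps):
        x[t] = x[t - 1] - recovery[t] * x[t - 1] + rng.normal(0, 0.1)

    variance, autocorr = rolling_indicators(x, window=200)
    print(variance[0], variance[-1])     # variance grows toward the transition
    print(autocorr[0], autocorr[-1])     # lag-1 autocorrelation rises toward 1
    ```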

  15. Estimation of Variance in the Case of Complex Samples.

    ERIC Educational Resources Information Center

    Groenewald, A. C.; Stoker, D. J.

    In a complex sampling scheme it is desirable to select the primary sampling units (PSUs) without replacement to prevent duplications in the sample. Since the estimation of the sampling variances is more complicated when the PSUs are selected without replacement, L. Kish (1965) recommends that the variance be calculated using the formulas…

  16. Comparison of point counts and territory mapping for detecting effects of forest management on songbirds

    USGS Publications Warehouse

    Newell, Felicity L.; Sheehan, James; Wood, Petra Bohall; Rodewald, Amanda D.; Buehler, David A.; Keyser, Patrick D.; Larkin, Jeffrey L.; Beachy, Tiffany A.; Bakermans, Marja H.; Boves, Than J.; Evans, Andrea; George, Gregory A.; McDermott, Molly E.; Perkins, Kelly A.; White, Matthew; Wigley, T. Bently

    2013-01-01

    Point counts are commonly used to assess changes in bird abundance, including analytical approaches such as distance sampling that estimate density. Point-count methods have come under increasing scrutiny because effects of detection probability and field error are difficult to quantify. For seven forest songbirds, we compared fixed-radii counts (50 m and 100 m) and density estimates obtained from distance sampling to known numbers of birds determined by territory mapping. We applied point-count analytic approaches to a typical forest management question and compared results to those obtained by territory mapping. We used a before–after control impact (BACI) analysis with a data set collected across seven study areas in the central Appalachians from 2006 to 2010. Using a 50-m fixed radius, variance in error was at least 1.5 times that of the other methods, whereas a 100-m fixed radius underestimated actual density by >3 territories per 10 ha for the most abundant species. Distance sampling improved accuracy and precision compared to fixed-radius counts, although estimates were affected by birds counted outside 10-ha units. In the BACI analysis, territory mapping detected an overall treatment effect for five of the seven species, and effects were generally consistent each year. In contrast, all point-count methods failed to detect two treatment effects due to variance and error in annual estimates. Overall, our results highlight the need for adequate sample sizes to reduce variance, and skilled observers to reduce the level of error in point-count data. Ultimately, the advantages and disadvantages of different survey methods should be considered in the context of overall study design and objectives, allowing for trade-offs among effort, accuracy, and power to detect treatment effects.

  17. Bed load transport over a broad range of timescales: Determination of three regimes of fluctuations

    NASA Astrophysics Data System (ADS)

    Ma, Hongbo; Heyman, Joris; Fu, Xudong; Mettra, Francois; Ancey, Christophe; Parker, Gary

    2014-12-01

    This paper describes the relationship between the statistics of bed load transport flux and the timescale over which it is sampled. A stochastic formulation is developed for the probability distribution function of bed load transport flux, based on the Ancey et al. (2008) theory. An analytical solution for the variance of bed load transport flux over differing sampling timescales is presented. The solution demonstrates that the timescale dependence of the variance of bed load transport flux reduces to a three-regime relation demarcated by an intermittency timescale (tI) and a memory timescale (tc). As the sampling timescale increases, this variance passes through an intermittent stage (≪tI), an invariant stage (tI < t < tc), and a memoryless stage (≫ tc). We propose a dimensionless number (Ra) to represent the relative strength of fluctuation, which provides a common ground for comparison of fluctuation strength among different experiments, as well as different sampling timescales for each experiment. Our analysis indicates that correlated motion and the discrete nature of bed load particles are responsible for this three-regime behavior. We use the data from three experiments with high temporal resolution of bed load transport flux to validate the proposed three-regime behavior. The theoretical solution for the variance agrees well with all three sets of experimental data. Our findings contribute to the understanding of the observed fluctuations of bed load transport flux over monosize/multiple-size grain beds, to the characterization of an inherent connection between short-term measurements and long-term statistics, and to the design of appropriate sampling strategies for bed load transport flux.

  18. Feasibility Study for Design of a Biocybernetic Communication System

    DTIC Science & Technology

    1975-08-01

    electrode for the Within Words variance and Between Words variance for each of the 255 data samples in the 6-sec epoch. If a given sample point was not...contributing to the computer classification of the word, the ratio of the two variances (i.e., the F-statistic) should be small. On the other hand...if the Between Word variance was significantly higher than the Within Word variance for a given sample point, we can assume with some confidence

  19. Sampling hazelnuts for aflatoxin: uncertainty associated with sampling, sample preparation, and analysis.

    PubMed

    Ozay, Guner; Seyhan, Ferda; Yilmaz, Aysun; Whitaker, Thomas B; Slate, Andrew B; Giesbrecht, Francis

    2006-01-01

    The variability associated with the aflatoxin test procedure used to estimate aflatoxin levels in bulk shipments of hazelnuts was investigated. Sixteen 10 kg samples of shelled hazelnuts were taken from each of 20 lots that were suspected of aflatoxin contamination. The total variance associated with testing shelled hazelnuts was estimated and partitioned into sampling, sample preparation, and analytical variance components. Each variance component increased as aflatoxin concentration (either B1 or total) increased. With the use of regression analysis, mathematical expressions were developed to model the relationship between aflatoxin concentration and the total, sampling, sample preparation, and analytical variances. The expressions for these relationships were used to estimate the variance for any sample size, subsample size, and number of analyses for a specific aflatoxin concentration. The sampling, sample preparation, and analytical variances associated with estimating aflatoxin in a hazelnut lot at a total aflatoxin level of 10 ng/g and using a 10 kg sample, a 50 g subsample, dry comminution with a Robot Coupe mill, and a high-performance liquid chromatographic analytical method are 174.40, 0.74, and 0.27, respectively. The sampling, sample preparation, and analytical steps of the aflatoxin test procedure accounted for 99.4, 0.4, and 0.2% of the total variability, respectively.
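
    A short calculation shows how variance components of this kind translate into shares of the total variability and how the total changes with the test procedure. The inverse-proportional scaling with sample mass, subsample mass, and number of analyses is the assumption commonly made in Whitaker-type sampling studies and is adopted here only for illustration; the baseline values are those reported above for a 10 kg sample, 50 g subsample, and one analysis at 10 ng/g total aflatoxin.

      # Variance components reported at the baseline test procedure
      # (10 kg sample, 50 g subsample, 1 analysis) for 10 ng/g total aflatoxin
      s2_sampling, s2_prep, s2_analysis = 174.40, 0.74, 0.27

      total = s2_sampling + s2_prep + s2_analysis
      for name, s2 in [("sampling", s2_sampling), ("preparation", s2_prep), ("analysis", s2_analysis)]:
          print(f"{name:12s} variance {s2:8.2f}  ({100 * s2 / total:.1f}% of total)")

      def total_variance(sample_kg=10.0, subsample_g=50.0, n_analyses=1):
          """Assumed Whitaker-style scaling: each component shrinks in inverse
          proportion to the amount of material used (or the number of analyses)."""
          return (s2_sampling * 10.0 / sample_kg
                  + s2_prep * 50.0 / subsample_g
                  + s2_analysis / n_analyses)

      print("baseline total variance  :", round(total_variance(), 2))
      print("20 kg sample, 2 analyses :", round(total_variance(sample_kg=20.0, n_analyses=2), 2))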

  20. Women's empowerment and domestic violence: the role of sociocultural determinants in maternal and child undernutrition in tribal and rural communities in South India.

    PubMed

    Sethuraman, Kavita; Lansdown, Richard; Sullivan, Keith

    2006-06-01

    Moderate malnutrition continues to affect 46% of children under five years of age and 47% of rural women in India. Women's lack of empowerment is believed to be an important factor in the persistent prevalence of malnutrition. In India, women's empowerment often varies by community, with tribes sometimes being the most progressive. To explore the relationship between women's empowerment, maternal nutritional status, and the nutritional status of their children aged 6 to 24 months in rural and tribal communities. This study in rural Karnataka, India, included tribal and rural subjects and used both qualitative and quantitative methods of data collection. Structured interviews with mothers were performed and anthropometric measurements were obtained for 820 mother-child pairs. The data were analyzed by multivariate and logistic regression. Some degree of malnutrition was seen in 83.5% of children and 72.4% of mothers in the sample. Biological variables explained most of the variance in nutritional status, followed by health-care seeking and women's empowerment variables; socioeconomic variables explained the least amount of variance. Women's empowerment variables were significantly associated with child nutrition and explained 5.6% of the variance in the sample. Maternal experience of psychological abuse and sexual coercion increased the risk of malnutrition in mothers and children. Domestic violence was experienced by 34% of mothers in the sample. In addition to the known investments needed to reduce malnutrition, improving women's nutrition, promoting gender equality, empowering women, and ending violence against women could further reduce the prevalence of malnutrition in this segment of the Indian population.

  1. Sampling benthic macroinvertebrates in a large flood-plain river: Considerations of study design, sample size, and cost

    USGS Publications Warehouse

    Bartsch, L.A.; Richardson, W.B.; Naimo, T.J.

    1998-01-01

    Estimation of benthic macroinvertebrate populations over large spatial scales is difficult due to the high variability in abundance and the cost of sample processing and taxonomic analysis. To determine a cost-effective, statistically powerful sample design, we conducted an exploratory study of the spatial variation of benthic macroinvertebrates in a 37 km reach of the Upper Mississippi River. We sampled benthos at 36 sites within each of two strata, contiguous backwater and channel border. Three standard ponar (525 cm(2)) grab samples were obtained at each site ('Original Design'). Analysis of variance and sampling cost of strata-wide estimates for abundance of Oligochaeta, Chironomidae, and total invertebrates showed that only one ponar sample per site ('Reduced Design') yielded essentially the same abundance estimates as the Original Design, while reducing the overall cost by 63%. A posteriori statistical power analysis (alpha = 0.05, beta = 0.20) on the Reduced Design estimated that at least 18 sites per stratum were needed to detect differences in mean abundance between contiguous backwater and channel border areas for Oligochaeta, Chironomidae, and total invertebrates. Statistical power was nearly identical for the three taxonomic groups. The abundances of several taxa of concern (e.g., Hexagenia mayflies and Musculium fingernail clams) were too spatially variable to estimate power with our method. Resampling simulations indicated that to achieve adequate sampling precision for Oligochaeta, at least 36 sample sites per stratum would be required, whereas a sampling precision of 0.2 would not be attained with any sample size for Hexagenia in channel border areas, or Chironomidae and Musculium in both strata given the variance structure of the original samples. Community-wide diversity indices (Brillouin and 1-Simpsons) increased as sample area per site increased. The backwater area had higher diversity than the channel border area. The number of sampling sites required to sample benthic macroinvertebrates during our sampling period depended on the study objective and ranged from 18 to more than 40 sites per stratum. No single sampling regime would efficiently and adequately sample all components of the macroinvertebrate community.
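
    For a rough check of site requirements of this kind, the standard two-sample normal approximation gives the number of sites per stratum needed to detect a specified difference in mean abundance; the effect size and standard deviation below are placeholders rather than values from the Mississippi River data.

      from math import ceil
      from scipy.stats import norm

      def sites_per_stratum(delta, sigma, alpha=0.05, power=0.80):
          """Two-sample normal approximation: sites needed in each stratum to detect a
          difference `delta` in mean abundance when the within-stratum SD is `sigma`."""
          z_a = norm.ppf(1 - alpha / 2)
          z_b = norm.ppf(power)
          return ceil(2 * ((z_a + z_b) * sigma / delta) ** 2)

      # Hypothetical example: detect a difference of 200 organisms per sample unit
      # when the within-stratum standard deviation is 300 organisms per sample unit.
      print(sites_per_stratum(delta=200, sigma=300))   # -> 36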

  2. Precipitation estimation in mountainous terrain using multivariate geostatistics. Part II: isohyetal maps

    USGS Publications Warehouse

    Hevesi, Joseph A.; Flint, Alan L.; Istok, Jonathan D.

    1992-01-01

    Values of average annual precipitation (AAP) may be important for hydrologic characterization of a potential high-level nuclear-waste repository site at Yucca Mountain, Nevada. Reliable measurements of AAP are sparse in the vicinity of Yucca Mountain, and estimates of AAP were needed for an isohyetal mapping over a 2600-square-mile watershed containing Yucca Mountain. Estimates were obtained with a multivariate geostatistical model developed using AAP and elevation data from a network of 42 precipitation stations in southern Nevada and southeastern California. An additional 1531 elevations were obtained to improve estimation accuracy. Isohyets representing estimates obtained using univariate geostatistics (kriging) defined a smooth and continuous surface. Isohyets representing estimates obtained using multivariate geostatistics (cokriging) defined an irregular surface that more accurately represented expected local orographic influences on AAP. Cokriging results included a maximum estimate within the study area of 335 mm at an elevation of 7400 ft, an average estimate of 157 mm for the study area, and an average estimate of 172 mm at eight locations in the vicinity of the potential repository site. Kriging estimates tended to be lower in comparison because the increased AAP expected for remote mountainous topography was not adequately represented by the available sample. Regression results between cokriging estimates and elevation were similar to regression results between measured AAP and elevation. The position of the cokriging 250-mm isohyet relative to the boundaries of pinyon pine and juniper woodlands provided indirect evidence of improved estimation accuracy because the cokriging result agreed well with investigations by others concerning the relationship between elevation, vegetation, and climate in the Great Basin. Calculated estimation variances were also mapped and compared to evaluate improvements in estimation accuracy. Cokriging estimation variances were reduced by an average of 54% relative to kriging variances within the study area. Cokriging reduced estimation variances at the potential repository site by 55% relative to kriging. The usefulness of an existing network of stations for measuring AAP within the study area was evaluated using cokriging variances, and twenty additional stations were located for the purpose of improving the accuracy of future isohyetal mappings. Using the expanded network of stations, the maximum cokriging estimation variance within the study area was reduced by 78% relative to the existing network, and the average estimation variance was reduced by 52%.

  3. Design of a sampling plan to detect ochratoxin A in green coffee.

    PubMed

    Vargas, E A; Whitaker, T B; Dos Santos, E A; Slate, A B; Lima, F B; Franca, R C A

    2006-01-01

    The establishment of maximum limits for ochratoxin A (OTA) in coffee by importing countries requires that coffee-producing countries develop scientifically based sampling plans to assess OTA contents in lots of green coffee before coffee enters the market, thus reducing consumer exposure to OTA, minimizing the number of lots rejected, and reducing financial loss for producing countries. A study was carried out to design an official sampling plan to determine OTA in green coffee produced in Brazil. Twenty-five lots of green coffee (type 7 - approximately 160 defects) were sampled according to an experimental protocol where 16 test samples were taken from each lot (total of 16 kg) resulting in a total of 800 OTA analyses. The total, sampling, sample preparation, and analytical variances were 10.75 (CV = 65.6%), 7.80 (CV = 55.8%), 2.84 (CV = 33.7%), and 0.11 (CV = 6.6%), respectively, assuming a regulatory limit of 5 microg kg(-1) OTA and using a 1 kg sample, Romer RAS mill, 25 g sub-samples, and high performance liquid chromatography. The observed OTA distribution among the 16 OTA sample results was compared to several theoretical distributions. The two-parameter lognormal distribution was selected to model OTA test results for green coffee as it gave the best fit across all 25 lot distributions. Specific computer software was developed using the variance and distribution information to predict the probability of accepting or rejecting coffee lots at specific OTA concentrations. The acceptance probability was used to compute an operating characteristic (OC) curve specific to a sampling plan design. The OC curve was used to predict the rejection of good lots (sellers' or exporters' risk) and the acceptance of bad lots (buyers' or importers' risk).
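
    An operating characteristic curve of the kind described can be sketched by assuming, purely for illustration, that a single test result is lognormally distributed around the true lot concentration with a constant coefficient of variation; the 65.6% CV and the 5 microg/kg limit come from the abstract, while the constant-CV assumption across concentrations is a simplification introduced here.

      import numpy as np
      from scipy.stats import lognorm

      LIMIT = 5.0   # regulatory limit, micrograms OTA per kg
      CV = 0.656    # total CV of a single test result at the limit (from the study)

      def p_accept(true_conc, cv=CV, limit=LIMIT):
          """Probability that one test result falls at or below the limit, assuming the
          test result is lognormal with mean equal to the true lot concentration."""
          sigma2 = np.log(1.0 + cv**2)                  # lognormal shape from the CV
          mu = np.log(true_conc) - 0.5 * sigma2         # so that the mean equals true_conc
          return lognorm.cdf(limit, s=np.sqrt(sigma2), scale=np.exp(mu))

      for c in (1, 2.5, 5, 7.5, 10, 20):
          print(f"lot at {c:5.1f} ug/kg -> P(accept) = {p_accept(c):.2f}")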

  4. Estimating means and variances: The comparative efficiency of composite and grab samples.

    PubMed

    Brumelle, S; Nemetz, P; Casey, D

    1984-03-01

    This paper compares the efficiencies of two sampling techniques for estimating a population mean and variance. One procedure, called grab sampling, consists of collecting and analyzing one sample per period. The second procedure, called composite sampling, collects n samples per period which are then pooled and analyzed as a single sample. We review the well known fact that composite sampling provides a superior estimate of the mean. However, it is somewhat surprising that composite sampling does not always generate a more efficient estimate of the variance. For populations with platykurtic distributions, grab sampling gives a more efficient estimate of the variance, whereas composite sampling is better for leptokurtic distributions. These conditions on kurtosis can be related to peakedness and skewness. For example, a necessary condition for composite sampling to provide a more efficient estimate of the variance is that the population density function evaluated at the mean (i.e., f(μ)) be greater than [Formula: see text]. If [Formula: see text], then a grab sample is more efficient. In spite of this result, however, composite sampling does provide a smaller estimate of standard error than does grab sampling in the context of estimating population means.
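
    The kurtosis effect can be reproduced with a small simulation: over many replications, estimate the population variance either from grab samples (one analysed value per period) or from composites (n pooled samples analysed as one, with the variance estimate rescaled by n), and compare the spread of the two estimators for a platykurtic (uniform) and a leptokurtic (Laplace) population. The sample sizes and distributions are arbitrary choices, not those analysed in the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      P, n, reps = 20, 5, 20_000      # periods, samples pooled per composite, replications

      def spread_of_variance_estimates(draw):
          grab_est, comp_est = [], []
          for _ in range(reps):
              grab = draw((P,))                       # one analysed sample per period
              comp = draw((P, n)).mean(axis=1)        # n samples pooled, analysed as one
              grab_est.append(grab.var(ddof=1))
              comp_est.append(n * comp.var(ddof=1))   # rescale: Var(mean of n) = sigma^2 / n
          return np.std(grab_est), np.std(comp_est)

      # Platykurtic (uniform) vs leptokurtic (Laplace) populations, both with unit variance
      def uniform(size):
          return rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size)

      def laplace(size):
          return rng.laplace(0.0, 1.0 / np.sqrt(2.0), size)

      for name, draw in [("uniform (platykurtic)", uniform), ("Laplace (leptokurtic)", laplace)]:
          g, c = spread_of_variance_estimates(draw)
          print(f"{name:22s} SD of grab estimate {g:.3f}   SD of composite estimate {c:.3f}")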

  5. Denoising Medical Images using Calculus of Variations

    PubMed Central

    Kohan, Mahdi Nakhaie; Behnam, Hamid

    2011-01-01

    We propose a method for medical image denoising using calculus of variations and local variance estimation by shaped windows. This method reduces any additive noise and preserves small patterns and edges of images. A pyramid structure-texture decomposition of images is used to separate noise and texture components based on local variance measures. The experimental results show that the proposed method yields visual improvement as well as a better SNR, RMSE and PSNR than common medical image denoising methods. Experimental results in denoising a sample Magnetic Resonance image show that SNR, PSNR and RMSE have been improved by 19%, 9% and 21%, respectively. PMID:22606674

  6. Previous Estimates of Mitochondrial DNA Mutation Level Variance Did Not Account for Sampling Error: Comparing the mtDNA Genetic Bottleneck in Mice and Humans

    PubMed Central

    Wonnapinij, Passorn; Chinnery, Patrick F.; Samuels, David C.

    2010-01-01

    In cases of inherited pathogenic mitochondrial DNA (mtDNA) mutations, a mother and her offspring generally have large and seemingly random differences in the amount of mutated mtDNA that they carry. Comparisons of measured mtDNA mutation level variance values have become an important issue in determining the mechanisms that cause these large random shifts in mutation level. These variance measurements have been made with samples of quite modest size, which should be a source of concern because higher-order statistics, such as variance, are poorly estimated from small sample sizes. We have developed an analysis of the standard error of variance from a sample of size n, and we have defined error bars for variance measurements based on this standard error. We calculate variance error bars for several published sets of measurements of mtDNA mutation level variance and show how the addition of the error bars alters the interpretation of these experimental results. We compare variance measurements from human clinical data and from mouse models and show that the mutation level variance is clearly higher in the human data than it is in the mouse models at both the primary oocyte and offspring stages of inheritance. We discuss how the standard error of variance can be used in the design of experiments measuring mtDNA mutation level variance. Our results show that variance measurements based on fewer than 20 measurements are generally unreliable and ideally more than 50 measurements are required to reliably compare variances with less than a 2-fold difference. PMID:20362273
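
    The paper derives its own expression for the standard error of a variance estimate; the sketch below instead uses the general moment formula Var(s^2) = m4/n - s^4(n - 3)/(n(n - 1)), which reduces to approximately s^2*sqrt(2/(n - 1)) for normal data. The heteroplasmy levels are invented for illustration.

      import numpy as np

      def se_of_sample_variance(x):
          """Approximate standard error of the sample variance s^2, using the moment
          formula Var(s^2) = m4/n - s^4*(n-3)/(n*(n-1)), with the fourth central
          moment m4 and s^2 both estimated from the data themselves."""
          x = np.asarray(x, dtype=float)
          n = x.size
          s2 = x.var(ddof=1)
          m4 = np.mean((x - x.mean()) ** 4)
          return np.sqrt(m4 / n - s2**2 * (n - 3) / (n * (n - 1)))

      # Hypothetical heteroplasmy levels (%) measured in a small set of offspring
      levels = np.array([12.0, 35.0, 18.0, 44.0, 27.0, 9.0, 31.0, 22.0, 40.0, 15.0])
      s2 = levels.var(ddof=1)
      se = se_of_sample_variance(levels)
      print(f"sample variance = {s2:.1f}, approximate error bar (+/- 1 SE) = {se:.1f}")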

  7. Efficient Strategies for Estimating the Spatial Coherence of Backscatter

    PubMed Central

    Hyun, Dongwoon; Crowley, Anna Lisa C.; Dahl, Jeremy J.

    2017-01-01

    The spatial coherence of ultrasound backscatter has been proposed to reduce clutter in medical imaging, to measure the anisotropy of the scattering source, and to improve the detection of blood flow. These techniques rely on correlation estimates that are obtained using computationally expensive strategies. In this study, we assess existing spatial coherence estimation methods and propose three computationally efficient modifications: a reduced kernel, a downsampled receive aperture, and the use of an ensemble correlation coefficient. The proposed methods are implemented in simulation and in vivo studies. Reducing the kernel to a single sample improved computational throughput and improved axial resolution. Downsampling the receive aperture was found to have negligible effect on estimator variance, and improved computational throughput by an order of magnitude for a downsample factor of 4. The ensemble correlation estimator demonstrated lower variance than the currently used average correlation. Combining the three methods, the throughput was improved 105-fold in simulation with a downsample factor of 4 and 20-fold in vivo with a downsample factor of 2. PMID:27913342

  8. Increasing selection response by Bayesian modeling of heterogeneous environmental variances

    USDA-ARS?s Scientific Manuscript database

    Heterogeneity of environmental variance among genotypes reduces selection response because genotypes with higher variance are more likely to be selected than low-variance genotypes. Modeling heterogeneous variances to obtain weighted means corrected for heterogeneous variances is difficult in likel...

  9. Performance of Language-Coordinated Collective Systems: A Study of Wine Recognition and Description

    PubMed Central

    Zubek, Julian; Denkiewicz, Michał; Dębska, Agnieszka; Radkowska, Alicja; Komorowska-Mach, Joanna; Litwin, Piotr; Stępień, Magdalena; Kucińska, Adrianna; Sitarska, Ewa; Komorowska, Krystyna; Fusaroli, Riccardo; Tylén, Kristian; Rączaszek-Leonardi, Joanna

    2016-01-01

    Most of our perceptions of and engagements with the world are shaped by our immersion in social interactions, cultural traditions, tools and linguistic categories. In this study we experimentally investigate the impact of two types of language-based coordination on the recognition and description of complex sensory stimuli: that of red wine. Participants were asked to taste, remember and successively recognize samples of wines within a larger set in a two-by-two experimental design: (1) either individually or in pairs, and (2) with or without the support of a sommelier card—a cultural linguistic tool designed for wine description. Both effectiveness of recognition and the kinds of errors in the four conditions were analyzed. While our experimental manipulations did not impact recognition accuracy, bias-variance decomposition of error revealed non-trivial differences in how participants solved the task. Pairs generally displayed reduced bias and increased variance compared to individuals; however, the variance dropped significantly when they used the sommelier card. The variance-reducing effect of the sommelier card was observed only in pairs; individuals did not seem to benefit from the cultural linguistic tool. Analysis of descriptions generated with the aid of sommelier cards shows that pairs were more coherent and discriminative than individuals. The findings are discussed in terms of global properties and dynamics of collective systems when constrained by different types of cultural practices. PMID:27729875

  10. Jackknife Estimation of Sampling Variance of Ratio Estimators in Complex Samples: Bias and the Coefficient of Variation. Research Report. ETS RR-06-19

    ERIC Educational Resources Information Center

    Oranje, Andreas

    2006-01-01

    A multitude of methods has been proposed to estimate the sampling variance of ratio estimates in complex samples (Wolter, 1985). Hansen and Tepping (1985) studied some of those variance estimators and found that a high coefficient of variation (CV) of the denominator of a ratio estimate is indicative of a biased estimate of the standard error of a…

  11. Correcting for Systematic Bias in Sample Estimates of Population Variances: Why Do We Divide by n-1?

    ERIC Educational Resources Information Center

    Mittag, Kathleen Cage

    An important topic presented in introductory statistics courses is the estimation of population parameters using samples. Students learn that when estimating population variances using sample data, we always get an underestimate of the population variance if we divide by n rather than n-1. One implication of this correction is that the degree of…
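
    The bias is easy to demonstrate with a brief simulation: for small samples, dividing the sum of squared deviations by n systematically underestimates the population variance, while dividing by n-1 does not. The population parameters below are arbitrary.

      import numpy as np

      rng = np.random.default_rng(42)
      n, reps = 5, 100_000                     # small samples, many replications

      samples = rng.normal(loc=10.0, scale=2.0, size=(reps, n))   # population variance = 4
      biased = samples.var(axis=1, ddof=0)     # divide by n
      unbiased = samples.var(axis=1, ddof=1)   # divide by n - 1

      print("average of (divide by n)   :", biased.mean().round(2))    # ~ 3.2 = (n-1)/n * 4
      print("average of (divide by n-1) :", unbiased.mean().round(2))  # ~ 4.0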

  12. Understanding the Degrees of Freedom of Sample Variance by Using Microsoft Excel

    ERIC Educational Resources Information Center

    Ding, Jian-Hua; Jin, Xian-Wen; Shuai, Ling-Ying

    2017-01-01

    In this article, the degrees of freedom of the sample variance are simulated by using the Visual Basic for Applications of Microsoft Excel 2010. The simulation file dynamically displays why the sample variance should be calculated by dividing the sum of squared deviations by n-1 rather than n, which is helpful for students to grasp the meaning of…

  13. tscvh R Package: Computational of the two samples test on microarray-sequencing data

    NASA Astrophysics Data System (ADS)

    Fajriyah, Rohmatul; Rosadi, Dedi

    2017-12-01

    We present a new R package, tscvh (two-sample cross-variance homogeneity). The package implements the cross-variance statistical test proposed and introduced by Fajriyah ([3] and [4]), based on the cross-variance concept. The test can be used as an alternative test for the significance of the difference between two means when the sample size is small, a situation that commonly arises in bioinformatics research. Based on the statistical distribution of the test statistic, the p-value is also provided. The package is built under the assumption of homogeneity of variance between samples.

  14. Sampling in freshwater environments: suspended particle traps and variability in the final data.

    PubMed

    Barbizzi, Sabrina; Pati, Alessandra

    2008-11-01

    This paper reports a practical method to estimate the measurement uncertainty, including sampling, derived from the approach implemented by Ramsey for soil investigations. The methodology has been applied to estimate the measurement uncertainty (sampling and analysis) of (137)Cs activity concentration (Bq kg(-1)) and total carbon content (%) in suspended particle sampling in a freshwater ecosystem. Uncertainty estimates for the between-location, sampling and analysis components have been evaluated. For the considered measurands, the relative expanded measurement uncertainties are 12.3% for (137)Cs and 4.5% for total carbon. For (137)Cs, the measurement (sampling+analysis) variance gives the major contribution to the total variance, while for total carbon the spatial variance is the dominant contributor to the total variance. The limitations and advantages of this basic method are discussed.

  15. Estimating individual glomerular volume in the human kidney: clinical perspectives.

    PubMed

    Puelles, Victor G; Zimanyi, Monika A; Samuel, Terence; Hughson, Michael D; Douglas-Denton, Rebecca N; Bertram, John F; Armitage, James A

    2012-05-01

    Measurement of individual glomerular volumes (IGV) has allowed the identification of drivers of glomerular hypertrophy in subjects without overt renal pathology. This study aims to highlight the relevance of IGV measurements with possible clinical implications and determine how many profiles must be measured in order to achieve stable size distribution estimates. We re-analysed 2250 IGV estimates obtained using the disector/Cavalieri method in 41 African and 34 Caucasian Americans. Pooled IGV analysis of mean and variance was conducted. Monte-Carlo (Jackknife) simulations determined the effect of the number of sampled glomeruli on mean IGV. Lin's concordance coefficient (R(C)), coefficient of variation (CV) and coefficient of error (CE) measured reliability. IGV mean and variance increased with overweight and hypertensive status. Superficial glomeruli were significantly smaller than juxtamedullary glomeruli in all subjects (P < 0.01), by race (P < 0.05) and in obese individuals (P < 0.01). Subjects with multiple chronic kidney disease (CKD) comorbidities showed significant increases in IGV mean and variability. Overall, mean IGV was particularly reliable with nine or more sampled glomeruli (R(C) > 0.95, <5% difference in CV and CE). These observations were not affected by a reduced sample size and did not disrupt the inverse linear correlation between mean IGV and estimated total glomerular number. Multiple comorbidities for CKD are associated with increased IGV mean and variance within subjects, including overweight, obesity and hypertension. Zonal selection and the number of sampled glomeruli do not represent drawbacks for future longitudinal biopsy-based studies of glomerular size and distribution.

  16. Sample size calculation in cost-effectiveness cluster randomized trials: optimal and maximin approaches.

    PubMed

    Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F

    2014-07-10

    In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials where the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels but may lose much efficiency when misspecifying the variance ratio. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.

  17. Variance partitioning of stream diatom, fish, and invertebrate indicators of biological condition

    USGS Publications Warehouse

    Zuellig, Robert E.; Carlisle, Daren M.; Meador, Michael R.; Potapova, Marina

    2012-01-01

    Stream indicators used to make assessments of biological condition are influenced by many possible sources of variability. To examine this issue, we used multiple-year and multiple-reach diatom, fish, and invertebrate data collected from 20 least-disturbed and 46 developed stream segments between 1993 and 2004 as part of the US Geological Survey National Water Quality Assessment Program. We used a variance-component model to summarize the relative and absolute magnitude of 4 variance components (among-site, among-year, site × year interaction, and residual) in indicator values (observed/expected ratio [O/E] and regional multimetric indices [MMI]) among assemblages and between basin types (least-disturbed and developed). We used multiple-reach samples to evaluate discordance in site assessments of biological condition caused by sampling variability. Overall, patterns in variance partitioning were similar among assemblages and basin types with one exception. Among-site variance dominated the relative contribution to the total variance (64–80% of total variance), residual variance (sampling variance) accounted for more variability (8–26%) than interaction variance (5–12%), and among-year variance was always negligible (0–0.2%). The exception to this general pattern was for invertebrates at least-disturbed sites where variability in O/E indicators was partitioned between among-site and residual (sampling) variance (among-site  =  36%, residual  =  64%). This pattern was not observed for fish and diatom indicators (O/E and regional MMI). We suspect that unexplained sampling variability is what largely remained after the invertebrate indicators (O/E predictive models) had accounted for environmental differences among least-disturbed sites. The influence of sampling variability on discordance of within-site assessments was assemblage or basin-type specific. Discordance among assessments was nearly 2× greater in developed basins (29–31%) than in least-disturbed sites (15–16%) for invertebrates and diatoms, whereas discordance among assessments based on fish did not differ between basin types (least-disturbed  =  16%, developed  =  17%). Assessments made using invertebrate and diatom indicators from a single reach disagreed with other samples collected within the same stream segment nearly ⅓ of the time in developed basins, compared to ⅙ for all other cases.

  18. Structural changes and out-of-sample prediction of realized range-based variance in the stock market

    NASA Astrophysics Data System (ADS)

    Gong, Xu; Lin, Boqiang

    2018-03-01

    This paper aims to examine the effects of structural changes on forecasting the realized range-based variance in the stock market. Considering structural changes in variance in the stock market, we develop the HAR-RRV-SC model on the basis of the HAR-RRV model. Subsequently, the HAR-RRV and HAR-RRV-SC models are used to forecast the realized range-based variance of S&P 500 Index. We find that there are many structural changes in variance in the U.S. stock market, and the period after the financial crisis contains more structural change points than the period before the financial crisis. The out-of-sample results show that the HAR-RRV-SC model significantly outperforms the HAR-RRV model when they are employed to forecast the 1-day, 1-week, and 1-month realized range-based variances, which means that structural changes can improve out-of-sample prediction of realized range-based variance. The out-of-sample results remain robust across the alternative rolling fixed-window, the alternative threshold value in ICSS algorithm, and the alternative benchmark models. More importantly, we believe that considering structural changes can help improve the out-of-sample performances of most other existing HAR-RRV-type models in addition to the models used in this paper.

  19. Stratum variance estimation for sample allocation in crop surveys. [Great Plains Corridor

    NASA Technical Reports Server (NTRS)

    Perry, C. R., Jr.; Chhikara, R. S. (Principal Investigator)

    1980-01-01

    The problem of determining stratum variances needed in achieving an optimum sample allocation for crop surveys by remote sensing is investigated by considering an approach based on the concept of stratum variance as a function of the sampling unit size. A methodology using the existing and easily available information of historical crop statistics is developed for obtaining initial estimates of stratum variances. The procedure is applied to estimate stratum variances for wheat in the U.S. Great Plains and is evaluated based on the numerical results thus obtained. It is shown that the proposed technique is viable and performs satisfactorily, with the use of a conservative value for the field size and the crop statistics from the small political subdivision level, when the estimated stratum variances were compared to those obtained using the LANDSAT data.
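
    Once stratum variances are available, the optimum allocation itself follows the standard Neyman rule, with stratum sample sizes proportional to the product of stratum size and stratum standard deviation; the stratum counts and standard deviations below are placeholders, not the Great Plains wheat statistics.

      import numpy as np

      def neyman_allocation(N_h, S_h, n_total):
          """Allocate a total sample of n_total units across strata in proportion
          to N_h * S_h (stratum size times stratum standard deviation)."""
          N_h, S_h = np.asarray(N_h, float), np.asarray(S_h, float)
          weights = N_h * S_h
          return np.round(n_total * weights / weights.sum()).astype(int)

      # Hypothetical strata: numbers of sampling units and standard deviations of
      # wheat proportion per unit (illustrative values only).
      N_h = [1200, 800, 500]
      S_h = [0.12, 0.25, 0.40]
      print(neyman_allocation(N_h, S_h, n_total=100))   # -> [26 37 37]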

  20. The Expected Sample Variance of Uncorrelated Random Variables with a Common Mean and Some Applications in Unbalanced Random Effects Models

    ERIC Educational Resources Information Center

    Vardeman, Stephen B.; Wendelberger, Joanne R.

    2005-01-01

    There is a little-known but very simple generalization of the standard result that for uncorrelated random variables with common mean [mu] and variance [sigma][superscript 2], the expected value of the sample variance is [sigma][superscript 2]. The generalization justifies the use of the usual standard error of the sample mean in possibly…

  1. Social capital and neo-materialist contextual determinants of sense of insecurity in the neighbourhood: a multilevel analysis in Southern Sweden.

    PubMed

    Lindström, Martin; Lindström, Christine; Moghaddassi, Mahnaz; Merlo, Juan

    2006-12-01

    The aim of this study was to investigate the influence of contextual (social capital and neo-materialist) and individual factors on sense of insecurity in the neighbourhood. The 2000 public health survey in Scania is a cross-sectional study. A total of 13,715 persons answered a postal questionnaire, which is 59% of the random sample. A multilevel logistic regression model, with individuals at the first level and municipalities at the second, was performed. The effect (median odds ratios, intra-class correlation, cross-level modification and odds ratios) of individual and municipality/city quarter (social capital and police district) factors on sense of insecurity was analysed. The crude variance between municipalities/city quarters was not affected by individual factors. The introduction of administrative police district in the model reduced the municipality variance, although some of the significant variance between municipalities remained. The introduction of social capital did not affect the municipality variance. This study suggests that the neo-materialist factor administrative police district may partly explain the individual's sense of insecurity in the neighbourhood.

  2. Blinded sample size re-estimation in three-arm trials with 'gold standard' design.

    PubMed

    Mütze, Tobias; Friede, Tim

    2017-10-15

    In this article, we study blinded sample size re-estimation in the 'gold standard' design with internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active and a placebo control in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials at which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing-Ganju variance estimator results in underpowered trials, as it is expected because an overestimation of the variance and thus the sample size is in general required for the re-estimation procedure to eventually meet the target power. To overcome this problem, we propose an inflation factor for the sample size re-estimation with the Xing-Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing-Ganju variance estimator such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and, therefore, can be calculated prior to a trial. Moreover, we prove that the sample size re-estimation based on the Xing-Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd.
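
    The overpowering caused by the blinded one-sample variance estimator can be illustrated with a simple two-arm simplification of the problem (rather than the three-arm 'gold standard' design): pooling the pilot data without group labels inflates the variance estimate by a term that depends on the treatment effect, and hence inflates the re-estimated sample size. The effect size, variance and pilot size below are invented for illustration.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(7)
      sigma, delta = 1.0, 0.5          # true SD and assumed relevant difference (illustrative)
      n_pilot = 40                     # internal pilot, split evenly between two arms

      def n_per_arm(sd, delta=delta, alpha=0.05, power=0.9):
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          return int(np.ceil(2 * (z * sd / delta) ** 2))

      reestimated = []
      for _ in range(5000):
          a = rng.normal(0.0, sigma, n_pilot // 2)    # control arm
          b = rng.normal(delta, sigma, n_pilot // 2)  # treatment arm
          blinded = np.concatenate([a, b])            # group labels hidden
          one_sample_sd = blinded.std(ddof=1)         # ignores the difference in arm means
          reestimated.append(n_per_arm(one_sample_sd))

      print("target n per arm with the true SD:", n_per_arm(sigma))
      print("average blinded re-estimate      :", round(float(np.mean(reestimated))))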

  3. Meta-analysis with missing study-level sample variance data.

    PubMed

    Chowdhry, Amit K; Dworkin, Robert H; McDermott, Michael P

    2016-07-30

    We consider a study-level meta-analysis with a normally distributed outcome variable and possibly unequal study-level variances, where the object of inference is the difference in means between a treatment and control group. A common complication in such an analysis is missing sample variances for some studies. A frequently used approach is to impute the weighted (by sample size) mean of the observed variances (mean imputation). Another approach is to include only those studies with variances reported (complete case analysis). Both mean imputation and complete case analysis are only valid under the missing-completely-at-random assumption, and even then the inverse variance weights produced are not necessarily optimal. We propose a multiple imputation method employing gamma meta-regression to impute the missing sample variances. Our method takes advantage of study-level covariates that may be used to provide information about the missing data. Through simulation studies, we show that multiple imputation, when the imputation model is correctly specified, is superior to competing methods in terms of confidence interval coverage probability and type I error probability when testing a specified group difference. Finally, we describe a similar approach to handling missing variances in cross-over studies. Copyright © 2016 John Wiley & Sons, Ltd.
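
    A minimal sketch of the mean-imputation approach criticized above, feeding a fixed-effect inverse-variance pooled estimate, is given below; the study-level values are invented, and the paper's preferred gamma meta-regression multiple imputation is not reproduced here.

      import numpy as np

      # Hypothetical study-level data: mean difference, total sample size, and sample
      # variance of the outcome (np.nan where a study did not report a variance).
      effects = np.array([0.40, 0.10, 0.55, 0.25, 0.35])
      n = np.array([40, 120, 60, 200, 80])
      variances = np.array([0.90, np.nan, 1.40, np.nan, 1.10])

      # Mean imputation: replace missing variances with the sample-size-weighted mean
      # of the observed ones (valid only under missing-completely-at-random).
      observed = ~np.isnan(variances)
      imputed = variances.copy()
      imputed[~observed] = np.average(variances[observed], weights=n[observed])

      # Fixed-effect inverse-variance pooling; for simplicity each study's variance of
      # the effect estimate is taken here as (sample variance) / n.
      w = n / imputed
      pooled = np.sum(w * effects) / np.sum(w)
      se = np.sqrt(1.0 / np.sum(w))
      print(f"pooled effect = {pooled:.3f} (SE {se:.3f})")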

  4. Effects of diversity on multiagent systems: Minority games

    NASA Astrophysics Data System (ADS)

    Wong, K. Y. Michael; Lim, S. W.; Gao, Zhuo

    2005-06-01

    We consider a version of large population games whose agents compete for resources using strategies with adaptable preferences. The games can be used to model economic markets, ecosystems, or distributed control. Diversity of initial preferences of strategies is introduced by randomly assigning biases to the strategies of different agents. We find that diversity among the agents reduces their maladaptive behavior. We find interesting scaling relations with diversity for the variance and other parameters such as the convergence time, the fraction of fickle agents, and the variance of wealth, illustrating their dynamical origin. When diversity increases, the scaling dynamics is modified by kinetic sampling and waiting effects. Analyses yield excellent agreement with simulations.

  5. Girsanov's transformation based variance reduced Monte Carlo simulation schemes for reliability estimation in nonlinear stochastic dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanjilal, Oindrila, E-mail: oindrila@civil.iisc.ernet.in; Manohar, C.S., E-mail: manohar@civil.iisc.ernet.in

    The study considers the problem of simulation based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations. - Highlights: • The distance minimizing control forces minimize a bound on the sampling variance. • Establishing Girsanov controls via solution of a two-point boundary value problem. • Girsanov controls via Volterra's series representation for the transfer functions.
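
    The variance-reduction principle behind such importance sampling schemes can be illustrated with a static toy problem, leaving out the stochastic-dynamics and control-selection machinery: sampling from a distribution shifted toward the failure region and reweighting by the likelihood ratio gives a much smaller standard error than direct Monte Carlo for the same number of samples. The threshold and sample size below are arbitrary.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(3)
      N, threshold = 100_000, 4.0            # failure when a standard normal exceeds 4
      p_true = norm.sf(threshold)            # exact value, used only to judge the estimates

      # Direct Monte Carlo
      x = rng.standard_normal(N)
      direct = (x > threshold).astype(float)

      # Importance sampling: draw from N(threshold, 1) and reweight by the
      # likelihood ratio of the original density to the shifted density.
      y = rng.normal(threshold, 1.0, N)
      lr = norm.pdf(y) / norm.pdf(y, loc=threshold)
      weighted = (y > threshold) * lr

      for name, est in [("direct MC", direct), ("importance sampling", weighted)]:
          print(f"{name:20s} estimate {est.mean():.3e}   std. error {est.std(ddof=1) / np.sqrt(N):.1e}")
      print(f"exact value          {p_true:.3e}")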

  6. Increasing point-count duration increases standard error

    USGS Publications Warehouse

    Smith, W.P.; Twedt, D.J.; Hamel, P.B.; Ford, R.P.; Wiedenfeld, D.A.; Cooper, R.J.

    1998-01-01

    We examined data from point counts of varying duration in bottomland forests of west Tennessee and the Mississippi Alluvial Valley to determine if counting interval influenced sampling efficiency. Estimates of standard error increased as point count duration increased both for cumulative number of individuals and species in both locations. Although point counts appear to yield data with standard errors proportional to means, a square root transformation of the data may stabilize the variance. Using long (>10 min) point counts may reduce sample size and increase sampling error, both of which diminish statistical power and thereby the ability to detect meaningful changes in avian populations.

  7. Jackknife variance of the partial area under the empirical receiver operating characteristic curve.

    PubMed

    Bandos, Andriy I; Guo, Ben; Gur, David

    2017-04-01

    Receiver operating characteristic analysis provides an important methodology for assessing traditional (e.g., imaging technologies and clinical practices) and new (e.g., genomic studies, biomarker development) diagnostic problems. The area under the clinically/practically relevant part of the receiver operating characteristic curve (partial area or partial area under the receiver operating characteristic curve) is an important performance index summarizing diagnostic accuracy at multiple operating points (decision thresholds) that are relevant to actual clinical practice. A robust estimate of the partial area under the receiver operating characteristic curve is provided by the area under the corresponding part of the empirical receiver operating characteristic curve. We derive a closed-form expression for the jackknife variance of the partial area under the empirical receiver operating characteristic curve. Using the derived analytical expression, we investigate the differences between the jackknife variance and a conventional variance estimator. The relative properties in finite samples are demonstrated in a simulation study. The developed formula enables an easy way to estimate the variance of the empirical partial area under the receiver operating characteristic curve, thereby substantially reducing the computation burden, and provides important insight into the structure of the variability. We demonstrate that when compared with the conventional approach, the jackknife variance has substantially smaller bias, and leads to a more appropriate type I error rate of the Wald-type test. The use of the jackknife variance is illustrated in the analysis of a data set from a diagnostic imaging study.
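
    A delete-one jackknife for the empirical partial area can be sketched as follows; the empirical ROC curve is evaluated on a fine FPR grid as an approximation, the FPR range [0, 0.2] is an arbitrary choice, and the case scores are simulated rather than taken from the imaging study.

      import numpy as np

      def partial_auc(pos, neg, max_fpr=0.2, grid_size=512):
          """Area under the empirical ROC curve for FPR in [0, max_fpr], approximated
          by evaluating the curve on a fine FPR grid."""
          thresholds = np.sort(np.concatenate([pos, neg]))[::-1]
          fpr = np.concatenate([[0.0], [np.mean(neg >= t) for t in thresholds]])
          tpr = np.concatenate([[0.0], [np.mean(pos >= t) for t in thresholds]])
          uniq = np.unique(fpr)                                    # collapse ties in FPR,
          tpr_u = np.array([tpr[fpr == f].max() for f in uniq])    # keeping the highest TPR
          grid = np.linspace(0.0, max_fpr, grid_size)
          t = np.interp(grid, uniq, tpr_u)
          return float(np.sum((t[1:] + t[:-1]) / 2.0 * np.diff(grid)))

      def jackknife_variance(pos, neg, **kw):
          """Delete-one jackknife over all cases (diseased and non-diseased)."""
          stats = [partial_auc(np.delete(pos, i), neg, **kw) for i in range(len(pos))]
          stats += [partial_auc(pos, np.delete(neg, j), **kw) for j in range(len(neg))]
          stats = np.asarray(stats)
          n = stats.size
          return (n - 1) / n * np.sum((stats - stats.mean()) ** 2)

      rng = np.random.default_rng(11)
      pos = rng.normal(1.0, 1.0, 60)   # scores for diseased cases (simulated)
      neg = rng.normal(0.0, 1.0, 80)   # scores for non-diseased cases (simulated)

      theta = partial_auc(pos, neg)
      se = np.sqrt(jackknife_variance(pos, neg))
      print(f"empirical partial AUC (FPR <= 0.2) = {theta:.3f}, jackknife SE = {se:.3f}")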

  8. An efficient sampling approach for variance-based sensitivity analysis based on the law of total variance in the successive intervals without overlapping

    NASA Astrophysics Data System (ADS)

    Yun, Wanying; Lu, Zhenzhou; Jiang, Xian

    2018-06-01

    To efficiently execute the variance-based global sensitivity analysis, the law of total variance in the successive intervals without overlapping is proved at first, on which an efficient space-partition sampling-based approach is subsequently proposed in this paper. Through partitioning the sample points of output into different subsets according to different inputs, the proposed approach can efficiently evaluate all the main effects concurrently by one group of sample points. In addition, there is no need for optimizing the partition scheme in the proposed approach. The maximum length of subintervals is decreased by increasing the number of sample points of model input variables in the proposed approach, which guarantees the convergence condition of the space-partition approach well. Furthermore, a new interpretation on the thought of partition is illuminated from the perspective of the variance ratio function. Finally, three test examples and one engineering application are employed to demonstrate the accuracy, efficiency and robustness of the proposed approach.
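
    The space-partition idea for main effects can be illustrated directly: draw a single Monte Carlo sample, partition the sample points according to successive intervals of one input, and estimate Var(E[Y|Xi]) from the within-bin means of the output. The Ishigami test function and the bin and sample counts below are standard illustrative choices, not the examples used in the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      N, n_bins = 100_000, 50

      # Ishigami function, a standard test case for variance-based sensitivity analysis
      a, b = 7.0, 0.1
      X = rng.uniform(-np.pi, np.pi, size=(N, 3))
      Y = np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2 + b * X[:, 2] ** 4 * np.sin(X[:, 0])

      def main_effect(xi, y, n_bins=n_bins):
          """First-order index from one sample: partition the range of xi into successive
          intervals and estimate Var(E[Y | Xi]) from the within-bin means of y."""
          edges = np.quantile(xi, np.linspace(0.0, 1.0, n_bins + 1))
          bins = np.clip(np.searchsorted(edges, xi, side="right") - 1, 0, n_bins - 1)
          bin_means = np.array([y[bins == k].mean() for k in range(n_bins)])
          weights = np.bincount(bins, minlength=n_bins) / len(y)
          return float(np.sum(weights * (bin_means - y.mean()) ** 2) / y.var())

      for i in range(3):
          print(f"S{i + 1} ~ {main_effect(X[:, i], Y):.3f}")
      # Analytical values for this function are roughly S1 = 0.31, S2 = 0.44, S3 = 0.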

  9. Comparative test on several forms of background error covariance in 3DVar

    NASA Astrophysics Data System (ADS)

    Shao, Aimei

    2013-04-01

    The background error covariance matrix (hereinafter referred to as the B matrix) plays an important role in the three-dimensional variational (3DVar) data assimilation method. However, it is difficult to obtain the B matrix accurately because the true atmospheric state is unknown. Several methods have therefore been developed to estimate it (e.g., the NMC method, innovation analysis, recursive filters, and ensemble methods such as the EnKF). Prior to further development and application of these methods, the behaviour of the B matrices they produce within 3DVar is worth studying and evaluating. For this reason, NCEP reanalysis and forecast data are used to test the effectiveness of several B matrices with the VAF method (Huang, 1999). Here the NCEP analysis is treated as the truth, so the forecast error is known. Data from 2006 to 2007 are used as the samples to estimate the B matrix, and data from 2008 are used to verify the assimilation results. The 48 h and 24 h forecasts valid at the same time are used to estimate the B matrix with the NMC method. The B matrix can be represented by a correlation part (a non-diagonal matrix) and a variance part (a diagonal matrix of variances). In numerous 3DVar systems a Gaussian filter function is used as an approximation to represent the variation of correlation coefficients with distance. On the basis of this assumption, the following forms of the B matrix are designed and tested with VAF in comparative experiments: (1) error variances and characteristic lengths are fixed and set to their mean values averaged over the analysis domain; (2) as in (1), but the mean characteristic lengths are reduced to 50 percent of the original for height and 60 percent for temperature; (3) as in (2), but the error variance calculated directly from the historical data is space-dependent; (4) error variances and characteristic lengths are both calculated directly from the historical data; (5) the B matrix is estimated directly from the historical data; (6) as in (5), but with a localization step; (7) the B matrix is estimated by the NMC method but the error variance is reduced by a factor of 1.7 so that its value is close to that calculated from the true forecast error samples; (8) as in (7), but with the localization of (6). Experimental results with the different B matrices show that, for the Gaussian-type B matrix, the characteristic lengths calculated from the true error samples do not produce a good analysis; however, reduced characteristic lengths (about half of the original) do. If the B matrix estimated directly from the historical data is used in 3DVar, the assimilation performance does not reach its best. Better assimilation results are obtained with the reduced characteristic lengths and localization; even so, this has no obvious advantage over the Gaussian-type B matrix with the optimal characteristic length. This implies that the Gaussian-type B matrix, widely used in operational 3DVar systems, can yield a good analysis with appropriate characteristic lengths. The crucial problem is how to determine those characteristic lengths. (This work is supported by the National Natural Science Foundation of China (41275102, 40875063), and the Fundamental Research Funds for the Central Universities (lzujbky-2010-9).)
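
    The Gaussian-type covariance model referred to above can be written down in a few lines: a correlation matrix whose entries decay as a Gaussian function of separation, scaled by a diagonal matrix of error variances, with the characteristic length as the tunable parameter. The grid, variances and length scale below are arbitrary illustrative values.

      import numpy as np

      def gaussian_b_matrix(coords, variances, length_scale):
          """Gaussian-type background error covariance: B = D^{1/2} C D^{1/2}, where
          C_ij = exp(-|r_i - r_j|^2 / (2 L^2)) and D is the diagonal of error variances."""
          coords = np.asarray(coords, dtype=float)          # shape (n_points, n_dims)
          d2 = np.sum((coords[:, None, :] - coords[None, :, :]) ** 2, axis=-1)
          C = np.exp(-d2 / (2.0 * length_scale ** 2))
          s = np.sqrt(np.asarray(variances, dtype=float))
          return C * np.outer(s, s)

      # 1-D grid of 10 points, 100 km apart, with spatially varying error variance
      x = np.arange(10, dtype=float)[:, None] * 100.0       # km
      var = np.linspace(1.0, 4.0, 10)                       # (model units)^2
      B = gaussian_b_matrix(x, var, length_scale=250.0)     # characteristic length 250 km
      print(B.shape, "correlation between points 0 and 3:",
            round(B[0, 3] / np.sqrt(B[0, 0] * B[3, 3]), 3))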

  10. Precision of systematic and random sampling in clustered populations: habitat patches and aggregating organisms.

    PubMed

    McGarvey, Richard; Burch, Paul; Matthews, Janet M

    2016-01-01

    Natural populations of plants and animals spatially cluster because (1) suitable habitat is patchy, and (2) within suitable habitat, individuals aggregate further into clusters of higher density. We compare the precision of random and systematic field sampling survey designs under these two processes of species clustering. Second, we evaluate the performance of 13 estimators for the variance of the sample mean from a systematic survey. Replicated simulated surveys, as counts from 100 transects, allocated either randomly or systematically within the study region, were used to estimate population density in six spatial point populations including habitat patches and Matérn circular clustered aggregations of organisms, together and in combination. The standard one-start aligned systematic survey design, a uniform 10 x 10 grid of transects, was much more precise. Variances of the 10 000 replicated systematic survey mean densities were one-third to one-fifth of those from randomly allocated transects, implying transect sample sizes giving equivalent precision by random survey would need to be three to five times larger. Organisms being restricted to patches of habitat was alone sufficient to yield this precision advantage for the systematic design. But this improved precision for systematic sampling in clustered populations is underestimated by standard variance estimators used to compute confidence intervals. True variance for the survey sample mean was computed from the variance of 10 000 simulated survey mean estimates. Testing 10 published and three newly proposed variance estimators, the two variance estimators (v) that corrected for inter-transect correlation (ν₈ and ν(W)) were the most accurate and also the most precise in clustered populations. These greatly outperformed the two "post-stratification" variance estimators (ν₂ and ν₃) that are now more commonly applied in systematic surveys. Similar variance estimator performance rankings were found with a second differently generated set of spatial point populations, ν₈ and ν(W) again being the best performers in the longer-range autocorrelated populations. However, no systematic variance estimators tested were free from bias. On balance, systematic designs bring more narrow confidence intervals in clustered populations, while random designs permit unbiased estimates of (often wider) confidence interval. The search continues for better estimators of sampling variance for the systematic survey mean.

  11. An Analysis of Variance Framework for Matrix Sampling.

    ERIC Educational Resources Information Center

    Sirotnik, Kenneth

    Significant cost savings can be achieved with the use of matrix sampling in estimating population parameters from psychometric data. The statistical design is intuitively simple, using the framework of the two-way classification analysis of variance technique. For example, the mean and variance are derived from the performance of a certain grade…

  12. Estimation of genetic parameters and their sampling variances of quantitative traits in the type 2 modified augmented design

    USDA-ARS?s Scientific Manuscript database

    We proposed a method to estimate the error variance among non-replicated genotypes, thus to estimate the genetic parameters by using replicated controls. We derived formulas to estimate sampling variances of the genetic parameters. Computer simulation indicated that the proposed methods of estimatin...

  13. Estimation of within-stratum variance for sample allocation: Foreign commodity production forecasting

    NASA Technical Reports Server (NTRS)

    Chhikara, R. S.; Perry, C. R., Jr. (Principal Investigator)

    1980-01-01

    The problem of determining the stratum variances required for an optimum sample allocation for remotely sensed crop surveys is investigated with emphasis on an approach based on the concept of stratum variance as a function of the sampling unit size. A methodology using the existing and easily available information of historical statistics is developed for obtaining initial estimates of stratum variances. The procedure is applied to estimate stratum variances for wheat in the U.S. Great Plains and is evaluated based on the numerical results obtained. It is shown that the proposed technique is viable and performs satisfactorily with the use of a conservative value (smaller than the expected value) for the field size and with the use of crop statistics from the small political division level.

  14. flowVS: channel-specific variance stabilization in flow cytometry.

    PubMed

    Azad, Ariful; Rajwa, Bartek; Pothen, Alex

    2016-07-28

    Comparing phenotypes of heterogeneous cell populations from multiple biological conditions is at the heart of scientific discovery based on flow cytometry (FC). When the biological signal is measured by the average expression of a biomarker, standard statistical methods require that variance be approximately stabilized in populations to be compared. Since the mean and variance of a cell population are often correlated in fluorescence-based FC measurements, a preprocessing step is needed to stabilize the within-population variances. We present a variance-stabilization algorithm, called flowVS, that removes the mean-variance correlations from cell populations identified in each fluorescence channel. flowVS transforms each channel from all samples of a data set by the inverse hyperbolic sine (asinh) transformation. For each channel, the parameters of the transformation are optimally selected by Bartlett's likelihood-ratio test so that the populations attain homogeneous variances. The optimum parameters are then used to transform the corresponding channels in every sample. flowVS is therefore an explicit variance-stabilization method that stabilizes within-population variances in each channel by evaluating the homoskedasticity of clusters with a likelihood-ratio test. With two publicly available datasets, we show that flowVS removes the mean-variance dependence from raw FC data and makes the within-population variance relatively homogeneous. We demonstrate that alternative transformation techniques such as flowTrans, flowScape, logicle, and FCSTrans might not stabilize variance. Besides flow cytometry, flowVS can also be applied to stabilize variance in microarray data. With a publicly available data set we demonstrate that flowVS performs as well as the VSN software, a state-of-the-art approach developed for microarrays. The homogeneity of variance in cell populations across FC samples is desirable when extracting features uniformly and comparing cell populations with different levels of marker expressions. The newly developed flowVS algorithm solves the variance-stabilization problem in FC and microarrays by optimally transforming data with the help of Bartlett's likelihood-ratio test. On two publicly available FC datasets, flowVS stabilizes within-population variances more evenly than the available transformation and normalization techniques. flowVS-based variance stabilization can help in performing comparison and alignment of phenotypically identical cell populations across different samples. flowVS and the datasets used in this paper are publicly available in Bioconductor.
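
    The core step of choosing an asinh cofactor so that within-population variances become homogeneous under Bartlett's test can be sketched as a small grid search; the clusters below are simulated with a mean-dependent spread to mimic raw fluorescence channels, and the cofactor grid is an arbitrary choice rather than the package's actual optimization.

      import numpy as np
      from scipy.stats import bartlett

      rng = np.random.default_rng(5)

      # Simulated fluorescence "populations" whose spread grows with their mean,
      # mimicking the mean-variance dependence of raw FC channels.
      means = [200.0, 1000.0, 5000.0, 20000.0]
      clusters = [rng.normal(m, 0.2 * m, 2000) for m in means]

      def bartlett_stat(cofactor):
          """Bartlett statistic across clusters after an asinh(x / cofactor) transform."""
          transformed = [np.arcsinh(c / cofactor) for c in clusters]
          return bartlett(*transformed).statistic

      # Grid search for the cofactor that makes within-cluster variances most homogeneous
      cofactors = np.logspace(0, 4, 81)
      stats = [bartlett_stat(cf) for cf in cofactors]
      best = cofactors[int(np.argmin(stats))]
      print(f"selected asinh cofactor ~ {best:.0f} (Bartlett statistic {min(stats):.1f})")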

  15. On the Likely Utility of Hybrid Weights Optimized for Variances in Hybrid Error Covariance Models

    NASA Astrophysics Data System (ADS)

    Satterfield, E.; Hodyss, D.; Kuhl, D.; Bishop, C. H.

    2017-12-01

    Because of imperfections in ensemble data assimilation schemes, one cannot assume that the ensemble covariance is equal to the true error covariance of a forecast. Previous work demonstrated how information about the distribution of true error variances given an ensemble sample variance can be revealed from an archive of (observation-minus-forecast, ensemble-variance) data pairs. Here, we derive a simple and intuitively compelling formula to obtain the mean of this distribution of true error variances given an ensemble sample variance from (observation-minus-forecast, ensemble-variance) data pairs produced by a single run of a data assimilation system. This formula takes the form of a Hybrid weighted average of the climatological forecast error variance and the ensemble sample variance. Here, we test the extent to which these readily obtainable weights can be used to rapidly optimize the covariance weights used in Hybrid data assimilation systems that employ weighted averages of static covariance models and flow-dependent ensemble based covariance models. Univariate data assimilation and multi-variate cycling ensemble data assimilation are considered. In both cases, it is found that our computationally efficient formula gives Hybrid weights that closely approximate the optimal weights found through the simple but computationally expensive process of testing every plausible combination of weights.

  16. Estimating individual glomerular volume in the human kidney: clinical perspectives

    PubMed Central

    Puelles, Victor G.; Zimanyi, Monika A.; Samuel, Terence; Hughson, Michael D.; Douglas-Denton, Rebecca N.; Bertram, John F.

    2012-01-01

    Background. Measurement of individual glomerular volumes (IGV) has allowed the identification of drivers of glomerular hypertrophy in subjects without overt renal pathology. This study aims to highlight the relevance of IGV measurements with possible clinical implications and determine how many profiles must be measured in order to achieve stable size distribution estimates. Methods. We re-analysed 2250 IGV estimates obtained using the disector/Cavalieri method in 41 African and 34 Caucasian Americans. Pooled IGV analysis of mean and variance was conducted. Monte-Carlo (Jackknife) simulations determined the effect of the number of sampled glomeruli on mean IGV. Lin’s concordance coefficient (RC), coefficient of variation (CV) and coefficient of error (CE) measured reliability. Results. IGV mean and variance increased with overweight and hypertensive status. Superficial glomeruli were significantly smaller than juxtamedullary glomeruli in all subjects (P < 0.01), by race (P < 0.05) and in obese individuals (P < 0.01). Subjects with multiple chronic kidney disease (CKD) comorbidities showed significant increases in IGV mean and variability. Overall, mean IGV was particularly reliable with nine or more sampled glomeruli (RC > 0.95, <5% difference in CV and CE). These observations were not affected by a reduced sample size and did not disrupt the inverse linear correlation between mean IGV and estimated total glomerular number. Conclusions. Multiple comorbidities for CKD are associated with increased IGV mean and variance within subjects, including overweight, obesity and hypertension. Zonal selection and the number of sampled glomeruli do not represent drawbacks for future longitudinal biopsy-based studies of glomerular size and distribution. PMID:21984554

  17. Increasing precision of turbidity-based suspended sediment concentration and load estimates.

    PubMed

    Jastram, John D; Zipper, Carl E; Zelazny, Lucian W; Hyer, Kenneth E

    2010-01-01

    Turbidity is an effective tool for estimating and monitoring suspended sediments in aquatic systems. Turbidity can be measured in situ remotely and at fine temporal scales as a surrogate for suspended sediment concentration (SSC), providing opportunity for a more complete record of SSC than is possible with physical sampling approaches. However, there is variability in turbidity-based SSC estimates and in sediment loadings calculated from those estimates. This study investigated the potential to improve turbidity-based SSC estimates, and by extension the resulting sediment loading estimates, by incorporating hydrologic variables that can be monitored remotely and continuously (typically 15-min intervals) into the SSC estimation procedure. On the Roanoke River in southwestern Virginia, hydrologic stage, turbidity, and other water-quality parameters were monitored with in situ instrumentation; suspended sediments were sampled manually during elevated turbidity events; samples were analyzed for SSC and physical properties including particle-size distribution and organic C content; and rainfall was quantified by geologic source area. The study identified physical properties of the suspended-sediment samples that contribute to SSC estimation variance and hydrologic variables that explained variability of those physical properties. Results indicated that the inclusion of any of the measured physical properties in turbidity-based SSC estimation models reduces unexplained variance. Further, the use of hydrologic variables to represent these physical properties, along with turbidity, resulted in a model, relying solely on data collected remotely and continuously, that estimated SSC with less variance than a conventional turbidity-based univariate model, allowing a more precise estimate of sediment loading. Modeling results are consistent with known mechanisms governing sediment transport in hydrologic systems.
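    The core comparison is between a turbidity-only regression and a multivariate regression that adds continuously monitored hydrologic covariates, judged by residual variance. The sketch below uses invented synthetic data and hypothetical variable names purely to illustrate that comparison; it is not the study's model.

    ```python
    # Hedged sketch: univariate turbidity model vs. a multivariate model with an
    # added hydrologic covariate, compared by residual variance. Data are synthetic.
    import numpy as np

    def fit_ols(X, y):
        """Ordinary least squares with an intercept; returns coefficients and residual variance."""
        X1 = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        resid = y - X1 @ beta
        return beta, resid.var(ddof=X1.shape[1])

    rng = np.random.default_rng(1)
    n = 200
    stage = rng.gamma(2.0, 1.0, n)                        # hydrologic stage (assumed covariate)
    turb = 0.8 * stage + rng.normal(0, 0.2, n)            # turbidity tracks stage
    log_ssc = 1.2 * turb + 0.5 * stage + rng.normal(0, 0.3, n)

    _, var_uni = fit_ols(turb.reshape(-1, 1), log_ssc)
    _, var_multi = fit_ols(np.column_stack([turb, stage]), log_ssc)
    print(var_uni, var_multi)   # the multivariate residual variance should be smaller
    ```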

  18. Evaluation and recommendation of sensitivity analysis methods for application to Stochastic Human Exposure and Dose Simulation models.

    PubMed

    Mokhtari, Amirhossein; Christopher Frey, H; Zheng, Junyu

    2006-11-01

    Sensitivity analyses of exposure or risk models can help identify the most significant factors to aid in risk management or to prioritize additional research to reduce uncertainty in the estimates. However, sensitivity analysis is challenged by non-linearity, interactions between inputs, and multiple days or time scales. Selected sensitivity analysis methods are evaluated with respect to their applicability to human exposure models with such features using a testbed. The testbed is a simplified version of the US Environmental Protection Agency's Stochastic Human Exposure and Dose Simulation (SHEDS) model. The methods evaluated include the Pearson and Spearman correlations, sample and rank regression, analysis of variance, Fourier amplitude sensitivity test (FAST), and Sobol's method. The first five methods are known as "sampling-based" techniques, whereas the latter two methods are known as "variance-based" techniques. The main objective of the test cases was to identify the main and total contributions of individual inputs to the output variance. Sobol's method and FAST directly quantified these measures of sensitivity. Results show that the sensitivity of an input typically changed when evaluated under different time scales (e.g., daily versus monthly). All methods provided similar insights regarding less important inputs; however, Sobol's method and FAST provided more robust insights with respect to sensitivity of important inputs compared to the sampling-based techniques. Thus, the sampling-based methods can be used in a screening step to identify unimportant inputs, followed by application of more computationally intensive refined methods to a smaller set of inputs. The implications of time variation in sensitivity results for risk management are briefly discussed.
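    The "main" and "total" contributions referred to here can be estimated with the standard pick-freeze sampling scheme. The sketch below applies that scheme to a toy model, not to SHEDS; the model and sample size are assumptions for illustration only.

    ```python
    # Self-contained sketch of variance-based (Sobol'-type) first-order and total
    # sensitivity indices using pick-freeze sampling on a toy model.
    import numpy as np

    def sobol_indices(model, d, n=2 ** 12, seed=None):
        rng = np.random.default_rng(seed)
        A = rng.random((n, d))
        B = rng.random((n, d))
        fA, fB = model(A), model(B)
        var = np.var(np.concatenate([fA, fB]), ddof=1)
        S1, ST = np.empty(d), np.empty(d)
        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                          # freeze all inputs except input i
            fABi = model(ABi)
            S1[i] = np.mean(fB * (fABi - fA)) / var      # first-order index (Saltelli-type estimator)
            ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var  # total-effect index (Jansen-type estimator)
        return S1, ST

    # Toy model with an interaction, so total effects exceed first-order effects for x0 and x2.
    model = lambda X: X[:, 0] + 2.0 * X[:, 1] + X[:, 0] * X[:, 2]
    S1, ST = sobol_indices(model, d=3, seed=0)
    print(np.round(S1, 2), np.round(ST, 2))
    ```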

  19. A new statistic to express the uncertainty of kriging predictions for purposes of survey planning.

    NASA Astrophysics Data System (ADS)

    Lark, R. M.; Lapworth, D. J.

    2014-05-01

    It is well known that one advantage of kriging for spatial prediction is that, given the random effects model, the prediction error variance can be computed a priori for alternative sampling designs. This allows one to compare sampling schemes, in particular sampling at different densities, and so to decide on one which meets requirements in terms of the uncertainty of the resulting predictions. However, the planning of sampling schemes must account not only for statistical considerations, but also for logistics and cost. This requires effective communication between statisticians, soil scientists and data users/sponsors such as managers, regulators or civil servants. In our experience the latter parties are not necessarily able to interpret the prediction error variance as a measure of uncertainty for decision making. In some contexts (particularly the solution of very specific problems at large cartographic scales, e.g. site remediation and precision farming) it is possible to translate uncertainty of predictions into a loss function directly comparable with the cost incurred in increasing precision. Often, however, sampling must be planned for more generic purposes (e.g. baseline or exploratory geochemical surveys). In this latter context the prediction error variance may be of limited value to a non-statistician who has to make a decision on sample intensity and associated cost. We propose an alternative criterion for these circumstances to aid communication between statisticians and data users about the uncertainty of geostatistical surveys based on different sampling intensities. The criterion is the consistency of estimates made from two non-coincident instantiations of a proposed sample design. We consider square sample grids; one instantiation is offset from the second by half the grid spacing along the rows and along the columns. If a sample grid is coarse relative to the important scales of variation in the target property, then the consistency of predictions from the two instantiations is expected to be small, and can be increased by reducing the grid spacing. The measure of consistency is the correlation between estimates from the two instantiations of the sample grid, averaged over a grid cell. We call this the offset correlation; it can be calculated from the variogram. We propose that this measure is easier to grasp intuitively than the prediction error variance, and has the advantage of having an upper bound (1.0), which will aid its interpretation. This quality measure is illustrated for some hypothetical examples, considering both ordinary kriging and factorial kriging of the variable of interest. It is also illustrated using data on metal concentrations in the soil of north-east England.

  20. Applying Incremental Sampling Methodology to Soils Containing Heterogeneously Distributed Metallic Residues to Improve Risk Analysis.

    PubMed

    Clausen, J L; Georgian, T; Gardner, K H; Douglas, T A

    2018-01-01

    This study compares conventional grab sampling to incremental sampling methodology (ISM) to characterize metal contamination at a military small-arms range. Grab sample results had large variances, positively skewed non-normal distributions, extreme outliers, and poor agreement between duplicate samples even when samples were co-located within tens of centimeters of each other. The extreme outliers strongly influenced the grab sample means for the primary contaminants lead (Pb) and antimony (Sb). In contrast, median and mean metal concentrations were similar for the ISM samples. ISM significantly reduced measurement uncertainty of estimates of the mean, increasing data quality (e.g., for environmental risk assessments) with fewer samples (e.g., decreasing total project costs). Based on Monte Carlo resampling simulations, grab sampling resulted in highly variable means and upper confidence limits of the mean relative to ISM.
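    The Monte Carlo comparison can be illustrated with a simple resampling experiment. The sketch below invents a highly skewed lognormal "site" and hypothetical grab and increment counts; it only demonstrates why compositing many increments stabilizes the estimate of the mean, and is not the study's simulation.

    ```python
    # Hedged illustration: variability of grab-sample means vs. incremental (composite)
    # sample means when concentrations are highly skewed. All values are invented.
    import numpy as np

    rng = np.random.default_rng(42)
    site = rng.lognormal(mean=3.0, sigma=1.5, size=100_000)   # skewed Pb concentrations (made up)

    def grab_mean(n_grabs=10):
        """Mean of a few individually analyzed grab samples."""
        return rng.choice(site, n_grabs).mean()

    def ism_mean(n_increments=50):
        """A composite physically averages many increments before analysis."""
        return rng.choice(site, n_increments).mean()

    grab = [grab_mean() for _ in range(1000)]
    ism = [ism_mean() for _ in range(1000)]
    print("grab-sample mean CV:", np.std(grab) / np.mean(grab))
    print("ISM mean CV:        ", np.std(ism) / np.mean(ism))
    ```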

  1. Enhanced algorithms for stochastic programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishna, Alamuru S.

    1993-09-01

    In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems. We describe improvements to the current techniques in both these areas. We studied different ways of using importance sampling techniques in the context of stochastic programming, by varying the choice of approximation functions used in this method. We concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient. This reduces the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive one. We then implemented various variance reduction techniques to estimate the mean of the piecewise-linear function. This method achieved similar variance reductions in orders of magnitude less time than applying variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved before the stochastic problem, and its solution is used to speed up the algorithm. We have devised a new decomposition scheme to improve the convergence of this algorithm.
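    One way to read the idea of replacing an expensive recourse function with a cheap piecewise-linear approximation is as a control variate: spend most Monte Carlo effort on the inexpensive function and only a small sample on the expensive correction. The sketch below uses toy stand-in functions, not a stochastic program, and is only meant to illustrate that variance-reduction pattern.

    ```python
    # Hedged sketch: piecewise-linear approximation used as a control variate.
    # "expensive" and "cheap" are toy stand-ins for a recourse function and its
    # piecewise-linear approximation; they are not from the dissertation.
    import numpy as np

    rng = np.random.default_rng(7)

    def expensive(x):
        return np.exp(0.5 * x) + 0.1 * np.sin(5 * x)

    def cheap(x):
        # Piecewise-linear interpolation through a handful of evaluations of expensive().
        knots = np.linspace(-3, 3, 7)
        return np.interp(x, knots, expensive(knots))

    # Small sample on the expensive correction, large sample on the cheap part.
    x_small = rng.normal(size=500)
    x_large = rng.normal(size=200_000)
    cv_estimate = np.mean(expensive(x_small) - cheap(x_small)) + np.mean(cheap(x_large))
    naive_estimate = np.mean(expensive(x_small))
    print(cv_estimate, naive_estimate)
    ```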

  2. Genetic and nonshared environmental factors affect the likelihood of being charged with driving under the influence (DUI) and driving while intoxicated (DWI).

    PubMed

    Beaver, Kevin M; Barnes, J C

    2012-12-01

    Driving under the influence (DUI) and driving while intoxicated (DWI) are related to a range of serious health, legal, and financial costs. Given the costs to society of DUIs and DWIs, there has been interest in identifying the causes of DUIs and DWIs. The current study added to this existing knowledge base by estimating genetic and environmental effects on DUIs and DWIs in a sample of twins drawn from the National Longitudinal Study of Adolescent Health (Add Health). The results of the analyses revealed that genetic factors explained 53% of the variance in DUIs/DWIs and the nonshared environment explained 47% of the variance. Shared environmental factors explained none of the variance in DUIs/DWIs. We conclude with a discussion of the results, the limitations of the study, and how the findings might be compatible with policies designed to reduce DUIs and DWIs. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. A BASIS FOR MODIFYING THE TANK 12 COMPOSITE SAMPLING DESIGN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shine, G.

    The SRR sampling campaign to obtain residual solids material from the Savannah River Site (SRS) Tank Farm Tank 12 primary vessel resulted in obtaining appreciable material in all 6 planned source samples from the mound strata but only in 5 of the 6 planned source samples from the floor stratum. Consequently, the design of the compositing scheme presented in the Tank 12 Sampling and Analysis Plan, Pavletich (2014a), must be revised. Analytical Development of SRNL statistically evaluated the sampling uncertainty associated with using various compositing arrays and splitting one or more samples for compositing. The variance of the simple mean of composite sample concentrations is a reasonable standard to investigate the impact of the following sampling options. Composite Sample Design Option (a). Assign only 1 source sample from the floor stratum and 1 source sample from each of the mound strata to each of the composite samples. Each source sample contributes material to only 1 composite sample. Two source samples from the floor stratum would not be used. Composite Sample Design Option (b). Assign 2 source samples from the floor stratum and 1 source sample from each of the mound strata to each composite sample. This implies that one source sample from the floor must be used twice, with 2 composite samples sharing material from this particular source sample. All five source samples from the floor would be used. Composite Sample Design Option (c). Assign 3 source samples from the floor stratum and 1 source sample from each of the mound strata to each composite sample. This implies that several of the source samples from the floor stratum must be assigned to more than one composite sample. All five source samples from the floor would be used. Using fewer than 12 source samples will increase the sampling variability over that of the Basic Composite Sample Design, Pavletich (2013). Considering the impact on the variance of the simple mean of the composite sample concentrations, the recommendation is to construct each composite sample using four or five source samples. Although the variance using 5 source samples per composite sample (Composite Sample Design Option (c)) was slightly less than the variance using 4 source samples per composite sample (Composite Sample Design Option (b)), there is no practical difference between those variances. This does not consider that the measurement error variance, which is the same for all composite sample design options considered in this report, will further dilute any differences. Composite Sample Design Option (a) had the largest variance for the mean concentration in the three composite samples and should be avoided. These results are consistent with Pavletich (2014b), which utilizes a low elevation and a high elevation mound source sample and two floor source samples for each composite sample. Utilizing the four source samples per composite design, Pavletich (2014b) utilizes aliquots of Floor Sample 4 for two composite samples.

  4. Approximate sample size formulas for the two-sample trimmed mean test with unequal variances.

    PubMed

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2007-05-01

    Yuen's two-sample trimmed mean test statistic is one of the most robust methods to apply when variances are heterogeneous. The present study develops formulas for the sample size required for the test. The formulas are applicable for the cases of unequal variances, non-normality and unequal sample sizes. Given the specified alpha and the power (1-beta), the minimum sample size needed by the proposed formulas under various conditions is less than that given by the conventional formulas. Moreover, given a sample size calculated by the proposed formulas, simulation results show that Yuen's test can achieve statistical power which is generally superior to that of the approximate t test. A numerical example is provided.
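    For readers unfamiliar with the statistic whose sample-size requirements are studied here, the sketch below implements Yuen's two-sample trimmed-mean test in its standard textbook form (trimmed means, winsorized variances, Welch-type degrees of freedom). The 20% trimming proportion is a common default, not a value prescribed by this paper.

    ```python
    # Standard Yuen two-sample trimmed-mean test (textbook form), for reference.
    import numpy as np
    from scipy import stats

    def yuen_test(x, y, trim=0.2):
        parts = []
        for a in (np.sort(np.asarray(x)), np.sort(np.asarray(y))):
            n = len(a)
            g = int(np.floor(trim * n))
            h = n - 2 * g                              # effective sample size after trimming
            tmean = a[g:n - g].mean()                  # trimmed mean
            wins = np.clip(a, a[g], a[n - g - 1])      # winsorized sample
            d = (n - 1) * wins.var(ddof=1) / (h * (h - 1))
            parts.append((tmean, d, h))
        (m1, d1, h1), (m2, d2, h2) = parts
        t = (m1 - m2) / np.sqrt(d1 + d2)
        df = (d1 + d2) ** 2 / (d1 ** 2 / (h1 - 1) + d2 ** 2 / (h2 - 1))
        return t, df, 2 * stats.t.sf(abs(t), df)

    rng = np.random.default_rng(3)
    print(yuen_test(rng.normal(0, 1, 25), rng.normal(0.8, 3, 40)))
    ```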

  5. Calibrating SALT: a sampling scheme to improve estimates of suspended sediment yield

    Treesearch

    Robert B. Thomas

    1986-01-01

    SALT (Selection At List Time) is a variable probability sampling scheme that provides unbiased estimates of suspended sediment yield and its variance. SALT performs better than standard schemes in estimating variance. Sampling probabilities are based on a sediment rating function which promotes greater sampling intensity during periods of high...

  6. How does variance in fertility change over the demographic transition?

    PubMed Central

    Hruschka, Daniel J.; Burger, Oskar

    2016-01-01

    Most work on the human fertility transition has focused on declines in mean fertility. However, understanding changes in the variance of reproductive outcomes can be equally important for evolutionary questions about the heritability of fertility, individual determinants of fertility and changing patterns of reproductive skew. Here, we document how variance in completed fertility among women (45–49 years) differs across 200 surveys in 72 low- to middle-income countries where fertility transitions are currently in progress at various stages. Nearly all (91%) of samples exhibit variance consistent with a Poisson process of fertility, which places systematic, and often severe, theoretical upper bounds on the proportion of variance that can be attributed to individual differences. In contrast to the pattern of total variance, these upper bounds increase from high- to mid-fertility samples, then decline again as samples move from mid to low fertility. Notably, the lowest fertility samples often deviate from a Poisson process. This suggests that as populations move to low fertility their reproduction shifts from a rate-based process to a focus on an ideal number of children. We discuss the implications of these findings for predicting completed fertility from individual-level variables. PMID:27022082
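    The Poisson bound mentioned here follows from the law of total variance: if each woman's completed fertility is Poisson with her own rate, then Var(N) = E[N] + Var(rate), so the share of variance attributable to individual differences is at most 1 - mean/variance. The sketch below checks this on simulated counts; the gamma distribution of rates is an assumption for illustration, not the surveys' data.

    ```python
    # Hedged sketch of the Poisson upper bound on the individual-differences share of variance.
    import numpy as np

    def individual_share_upper_bound(counts):
        counts = np.asarray(counts)
        m, v = counts.mean(), counts.var(ddof=1)
        return max(0.0, 1.0 - m / v)       # 1 - mean/variance, from Var(N) = E[N] + Var(rate)

    rng = np.random.default_rng(5)
    rates = rng.gamma(shape=8.0, scale=0.5, size=5000)   # heterogeneous fertility rates (assumed)
    births = rng.poisson(rates)                          # completed fertility, Poisson given each rate
    print(individual_share_upper_bound(births))          # close to Var(rate)/Var(N) = 1/3 here
    ```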

  7. A more powerful test based on ratio distribution for retention noninferiority hypothesis.

    PubMed

    Deng, Ling; Chen, Gang

    2013-03-11

    Rothmann et al. (2003) proposed a method for the statistical inference of a fraction retention noninferiority (NI) hypothesis. A fraction retention hypothesis is defined as a ratio of the new treatment effect versus the control effect in the context of a time-to-event endpoint. One of the major concerns using this method in the design of an NI trial is that, with a limited sample size, the power of the study is usually very low. This makes an NI trial impractical, particularly when using a time-to-event endpoint. To improve power, Wang et al. (2006) proposed a ratio test based on asymptotic normality theory. Under a strong assumption (equal variance of the NI test statistic under null and alternative hypotheses), the sample size using Wang's test was much smaller than that using Rothmann's test. However, in practice, the assumption of equal variance is generally questionable for an NI trial design. This assumption is removed in the ratio test proposed in this article, which is derived directly from a Cauchy-like ratio distribution. In addition, using this method, the fundamental assumption used in Rothmann's test, that the observed control effect is always positive, that is, the observed hazard ratio for placebo over the control is greater than 1, is no longer necessary. Without assuming equal variance under null and alternative hypotheses, the sample size required for an NI trial can be significantly reduced by using the proposed ratio test for a fraction retention NI hypothesis.

  8. Does an uneven sample size distribution across settings matter in cross-classified multilevel modeling? Results of a simulation study.

    PubMed

    Milliren, Carly E; Evans, Clare R; Richmond, Tracy K; Dunn, Erin C

    2018-06-06

    Recent advances in multilevel modeling allow for modeling non-hierarchical levels (e.g., youth in non-nested schools and neighborhoods) using cross-classified multilevel models (CCMM). Current practice is to cluster samples from one context (e.g., schools) and utilize the observations however they are distributed from the second context (e.g., neighborhoods). However, it is unknown whether an uneven distribution of sample size across these contexts leads to incorrect estimates of random effects in CCMMs. Using the school and neighborhood data structure in Add Health, we examined the effect of neighborhood sample size imbalance on the estimation of variance parameters in models predicting BMI. We differentially assigned students from a given school to neighborhoods within that school's catchment area using three scenarios of (im)balance. 1000 random datasets were simulated for each of five combinations of school- and neighborhood-level variance and imbalance scenarios, for a total of 15,000 simulated data sets. For each simulation, we calculated 95% CIs for the variance parameters to determine whether the true simulated variance fell within the interval. Across all simulations, the "true" school and neighborhood variance parameters were estimated 93-96% of the time. Only 5% of models failed to capture neighborhood variance; 6% failed to capture school variance. These results suggest that there is no systematic bias in the ability of CCMM to capture the true variance parameters regardless of the distribution of students across neighborhoods. Ongoing efforts to use CCMM are warranted and can proceed without concern for the sample imbalance across contexts. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. Noise and drift analysis of non-equally spaced timing data

    NASA Technical Reports Server (NTRS)

    Vernotte, F.; Zalamansky, G.; Lantz, E.

    1994-01-01

    Generally, it is possible to obtain equally spaced timing data from oscillators. The measurement of the drifts and noises affecting oscillators is then performed by using a variance (Allan variance, modified Allan variance, or time variance) or a system of several variances (multivariance method). However, in some cases, several samples, or even several sets of samples, are missing. In the case of millisecond pulsar timing data, for instance, observations are quite irregularly spaced in time. Nevertheless, since some observations are very close together (one minute) and since the timing data sequence is very long (more than ten years), information on both short-term and long-term stability is available. Unfortunately, a direct variance analysis is not possible without interpolating missing data. Different interpolation algorithms (linear interpolation, cubic spline) are used to calculate variances in order to verify that they neither lose information nor add erroneous information. A comparison of the results of the different algorithms is given. Finally, the multivariance method was adapted to the measurement sequence of the millisecond pulsar timing data: the responses of each variance of the system are calculated for each type of noise and drift, with the same missing samples as in the pulsar timing sequence. An estimation of precision, dynamics, and separability of this method is given.
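    The interpolation step discussed here is easy to prototype: resample the unevenly spaced timing residuals onto an even grid (linear interpolation below; a cubic spline is the other option mentioned) and compute the overlapping Allan variance from the interpolated time-error data. The data in the sketch are synthetic white noise, and interpolation can distort the shortest averaging times, which is part of what such a verification must check.

    ```python
    # Hedged sketch: linear interpolation of irregular timing residuals onto an even
    # grid, followed by the overlapping Allan variance.
    import numpy as np

    def allan_variance(x, tau0, m):
        """Overlapping Allan variance from time-error data x sampled every tau0,
        evaluated at averaging time m * tau0."""
        d2 = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]         # second differences of the phase data
        return np.mean(d2 ** 2) / (2.0 * (m * tau0) ** 2)

    rng = np.random.default_rng(11)
    t_obs = np.sort(rng.uniform(0, 3650, 400))            # irregular observation epochs (days)
    x_obs = rng.normal(0, 1e-6, t_obs.size)               # timing residuals (seconds), synthetic

    tau0 = 1.0                                             # 1-day even grid
    t_grid = np.arange(t_obs[0], t_obs[-1], tau0)
    x_grid = np.interp(t_grid, t_obs, x_obs)               # linear interpolation of missing epochs

    for m in (1, 10, 100):
        print(m * tau0, allan_variance(x_grid, tau0, m))
    ```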

  10. Importance Sampling Variance Reduction in GRESS ATMOSIM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wakeford, Daniel Tyler

    This document is intended to introduce the importance sampling method of variance reduction to a Geant4 user for application to neutral particle Monte Carlo transport through the atmosphere, as implemented in GRESS ATMOSIM.

  11. Variances and uncertainties of the sample laboratory-to-laboratory variance (S(L)2) and standard deviation (S(L)) associated with an interlaboratory study.

    PubMed

    McClure, Foster D; Lee, Jung K

    2012-01-01

    The validation process for an analytical method usually employs an interlaboratory study conducted as a balanced completely randomized model involving a specified number of randomly chosen laboratories, each analyzing a specified number of randomly allocated replicates. For such studies, formulas to obtain approximate unbiased estimates of the variance and uncertainty of the sample laboratory-to-laboratory (lab-to-lab) STD (S(L)) have been developed primarily to account for the uncertainty of S(L) when there is a need to develop an uncertainty budget that includes the uncertainty of S(L). For the sake of completeness on this topic, formulas to estimate the variance and uncertainty of the sample lab-to-lab variance (S(L)2) were also developed. In some cases, it was necessary to derive the formulas based on an approximate distribution for S(L)2.

  12. A structured sparse regression method for estimating isoform expression level from multi-sample RNA-seq data.

    PubMed

    Zhang, L; Liu, X J

    2016-06-03

    With the rapid development of next-generation high-throughput sequencing technology, RNA-seq has become a standard and important technique for transcriptome analysis. For multi-sample RNA-seq data, existing expression estimation methods usually deal with each RNA-seq sample separately, ignoring the fact that read distributions are consistent across multiple samples. In the current study, we propose a structured sparse regression method, SSRSeq, to estimate isoform expression using multi-sample RNA-seq data. SSRSeq uses a non-parametric model to capture the general tendency of non-uniform read distribution for all genes across multiple samples. Additionally, our method adds a structured sparse regularization, which not only incorporates the sparse specificity between a gene and its corresponding isoform expression levels, but also reduces the effects of noisy reads, especially for lowly expressed genes and isoforms. Four real datasets were used to evaluate our method on isoform expression estimation. Compared with other popular methods, SSRSeq reduced the variance between multiple samples and produced more accurate isoform expression estimates, and thus more meaningful biological interpretations.

  13. A new approach to importance sampling for the simulation of false alarms. [in radar systems

    NASA Technical Reports Server (NTRS)

    Lu, D.; Yao, K.

    1987-01-01

    In this paper, a modified importance sampling technique for improving the convergence of importance sampling is given. By using this approach to estimate low false alarm rates in radar simulations, the number of Monte Carlo runs can be reduced significantly. For one-dimensional exponential, Weibull, and Rayleigh distributions, a uniformly minimum variance unbiased estimator is obtained. For the Gaussian distribution, the estimator in this approach is uniformly better than that of the previously known importance sampling approach. For a cell averaging system, by combining this technique and group sampling, the reduction in Monte Carlo runs for a reference cell of 20 and a false alarm rate of 1E-6 is on the order of 170 as compared to the previously known importance sampling approach.
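    The basic mechanism being improved upon is standard importance sampling for rare events: shift the sampling density toward the detection threshold and reweight by the likelihood ratio. The sketch below illustrates only that baseline idea for a Gaussian statistic, not the paper's modified scheme; the threshold and sample sizes are arbitrary.

    ```python
    # Minimal illustration of importance sampling for a rare false-alarm probability.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    threshold = 4.75                      # roughly a 1e-6 false-alarm rate for N(0, 1)
    n = 100_000

    # Naive Monte Carlo: almost no samples exceed the threshold.
    naive = np.mean(rng.normal(size=n) > threshold)

    # Importance sampling: draw from N(threshold, 1) and reweight by f(x)/g(x).
    z = rng.normal(loc=threshold, size=n)
    weights = np.exp(-threshold * z + threshold ** 2 / 2.0)
    is_est = np.mean((z > threshold) * weights)

    print(naive, is_est, stats.norm.sf(threshold))   # exact tail probability for comparison
    ```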

  14. A two-step sensitivity analysis for hydrological signatures in Jinhua River Basin, East China

    NASA Astrophysics Data System (ADS)

    Pan, S.; Fu, G.; Chiang, Y. M.; Xu, Y. P.

    2016-12-01

    Owing to model complexity and the large number of parameters, calibration and sensitivity analysis are difficult processes for distributed hydrological models. In this study, a two-step sensitivity analysis approach is proposed for analyzing the hydrological signatures in Jinhua River Basin, East China, using the Distributed Hydrology-Soil-Vegetation Model (DHSVM). A rough sensitivity analysis is first conducted to obtain a preliminary set of influential parameters via analysis of variance. The number of parameters was greatly reduced from eighty-three to sixteen. Afterwards, the sixteen parameters are further analyzed with a variance-based global sensitivity analysis, i.e., Sobol's sensitivity analysis method, to achieve robust sensitivity rankings and parameter contributions. Parallel computing is applied to reduce the computational burden of the variance-based sensitivity analysis. The results reveal that only a small number of model parameters are significantly sensitive, including the rain LAI multiplier, lateral conductivity, porosity, field capacity, wilting point of clay loam, understory monthly LAI, understory minimum resistance and root zone depths of croplands. Finally, several hydrological signatures are used to investigate the performance of DHSVM. Results show that a high value of the efficiency criterion did not necessarily indicate excellent performance on the hydrological signatures. For most samples from Sobol's sensitivity analysis, water yield was simulated very well; however, the lowest and maximum annual daily runoffs were underestimated, and most seven-day minimum runoffs were overestimated. Nevertheless, good performance on these three signatures is still achieved in a number of samples. Analysis of peak flow shows that small and medium floods are simulated very well, while slight underestimation occurs for large floods. This work supports further multi-objective calibration of the DHSVM model and indicates where the reliability and credibility of model simulations can be improved.

  15. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    PubMed

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.

  16. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data

    PubMed Central

    Dazard, Jean-Eudes; Rao, J. Sunil

    2012-01-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput “omics” data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel “similarity statistic”-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called ‘MVR’ (‘Mean-Variance Regularization’), downloadable from the CRAN website. PMID:22711950

  17. Handling nonnormality and variance heterogeneity for quantitative sublethal toxicity tests.

    PubMed

    Ritz, Christian; Van der Vliet, Leana

    2009-09-01

    The advantages of using regression-based techniques to derive endpoints from environmental toxicity data are clear, and slowly, this superior analytical technique is gaining acceptance. As use of regression-based analysis becomes more widespread, some of the associated nuances and potential problems come into sharper focus. Looking at data sets that cover a broad spectrum of standard test species, we noticed that some model fits to data failed to meet two key assumptions (variance homogeneity and normality) that are necessary for correct statistical analysis via regression-based techniques. Failure to meet these assumptions is often caused by reduced variance at the concentrations showing severe adverse effects. Although commonly used with linear regression analysis, transforming only the response variable is not appropriate when fitting data using nonlinear regression techniques. Through analysis of sample data sets, including Lemna minor, Eisenia andrei (terrestrial earthworm), and algae, we show that both the so-called Box-Cox transformation and use of the Poisson distribution can help to correct variance heterogeneity and nonnormality and so allow nonlinear regression analysis to be implemented. Both the Box-Cox transformation and the Poisson distribution can be readily implemented into existing protocols for statistical analysis. By correcting for nonnormality and variance heterogeneity, these two statistical tools can be used to encourage the transition to regression-based analysis and the deprecation of less-desirable and less-flexible analytical techniques, such as linear interpolation.
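    The Box-Cox step itself is routine to apply. The sketch below estimates a single transformation parameter by maximum likelihood and checks that within-group variances become more homogeneous; the synthetic concentration groups are assumptions for illustration, and the full method described above transforms both sides of the nonlinear concentration-response model rather than only the response shown here.

    ```python
    # Hedged sketch: maximum-likelihood Box-Cox transformation and its effect on
    # within-group variance homogeneity. Data are synthetic.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(9)
    # Positive responses whose variance shrinks at the high-effect (low-response) end.
    groups = [rng.gamma(16.0, mu / 16.0, 20) for mu in (100, 60, 30, 10)]
    y = np.concatenate(groups)

    y_bc, lam = stats.boxcox(y)                                # ML estimate of lambda
    y_bc_groups = np.split(y_bc, np.cumsum([len(g) for g in groups])[:-1])

    print("lambda:", round(lam, 2))
    print("raw variances:        ", [round(np.var(g), 2) for g in groups])
    print("transformed variances:", [round(np.var(g), 2) for g in y_bc_groups])
    ```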

  18. Analysis of components of variance in multiple-reader studies of computer-aided diagnosis with different tasks

    NASA Astrophysics Data System (ADS)

    Beiden, Sergey V.; Wagner, Robert F.; Campbell, Gregory; Metz, Charles E.; Chan, Heang-Ping; Nishikawa, Robert M.; Schnall, Mitchell D.; Jiang, Yulei

    2001-06-01

    In recent years, the multiple-reader, multiple-case (MRMC) study paradigm has become widespread for receiver operating characteristic (ROC) assessment of systems for diagnostic imaging and computer-aided diagnosis. We review how MRMC data can be analyzed in terms of the multiple components of the variance (case, reader, interactions) observed in those studies. Such information is useful for the design of pivotal studies from results of a pilot study and also for studying the effects of reader training. Recently, several of the present authors have demonstrated methods to generalize the analysis of multiple variance components to the case where unaided readers of diagnostic images are compared with readers who receive the benefit of a computer assist (CAD). For this case it is necessary to model the possibility that several of the components of variance might be reduced when readers incorporate the computer assist, compared to the unaided reading condition. We review results of this kind of analysis on three previously published MRMC studies, two of which were applications of CAD to diagnostic mammography and one was an application of CAD to screening mammography. The results for the three cases are seen to differ, depending on the reader population sampled and the task of interest. Thus, it is not possible to generalize a particular analysis of variance components beyond the tasks and populations actually investigated.

  19. Standard Deviation for Small Samples

    ERIC Educational Resources Information Center

    Joarder, Anwar H.; Latif, Raja M.

    2006-01-01

    Neater representations for variance are given for small sample sizes, especially for 3 and 4. With these representations, variance can be calculated without a calculator if sample sizes are small and observations are integers, and an upper bound for the standard deviation is immediate. Accessible proofs of lower and upper bounds are presented for…

  20. Variance Estimation Using Replication Methods in Structural Equation Modeling with Complex Sample Data

    ERIC Educational Resources Information Center

    Stapleton, Laura M.

    2008-01-01

    This article discusses replication sampling variance estimation techniques that are often applied in analyses using data from complex sampling designs: jackknife repeated replication, balanced repeated replication, and bootstrapping. These techniques are used with traditional analyses such as regression, but are currently not used with structural…
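    One of the techniques named here, jackknife repeated replication, can be illustrated in its simplest delete-one-cluster (JK1) form: drop one primary sampling unit at a time, reweight the remainder, recompute the estimate, and combine the replicates. The data, weights, and cluster structure below are invented for illustration and do not reflect the article's examples.

    ```python
    # Hedged sketch of a delete-one-cluster (JK1) jackknife variance estimate
    # for a weighted mean from a clustered design.
    import numpy as np

    def weighted_mean(y, w):
        return np.sum(w * y) / np.sum(w)

    def jk1_variance(y, w, psu):
        """Drop each PSU in turn, reweight the remaining clusters, and combine replicates."""
        full = weighted_mean(y, w)
        clusters = np.unique(psu)
        G = len(clusters)
        reps = []
        for g in clusters:
            keep = psu != g
            w_rep = w[keep] * G / (G - 1)          # standard JK1 reweighting of remaining PSUs
            reps.append(weighted_mean(y[keep], w_rep))
        reps = np.array(reps)
        return (G - 1) / G * np.sum((reps - full) ** 2)

    rng = np.random.default_rng(4)
    psu = np.repeat(np.arange(20), 30)                       # 20 clusters of 30 observations
    y = rng.normal(50 + 2 * rng.normal(size=20)[psu], 5)     # cluster-correlated outcome
    w = rng.uniform(0.5, 1.5, y.size)                        # survey weights (invented)
    print(jk1_variance(y, w, psu))
    ```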

  1. Variance Reduction Factor of Nuclear Data for Integral Neutronics Parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiba, G., E-mail: go_chiba@eng.hokudai.ac.jp; Tsuji, M.; Narabayashi, T.

    We propose a new quantity, a variance reduction factor, to identify nuclear data for which further improvements are required to reduce the uncertainties of target integral neutronics parameters. Important energy ranges can also be identified with this variance reduction factor. Variance reduction factors are calculated for several integral neutronics parameters, and their usefulness is demonstrated.

  2. Monte Carlo isotopic inventory analysis for complex nuclear systems

    NASA Astrophysics Data System (ADS)

    Phruksarojanakun, Phiphat

    Monte Carlo Inventory Simulation Engine (MCise) is a newly developed method for calculating the isotopic inventory of materials. It offers the promise of modeling materials with complex processes and irradiation histories, which pose challenges for current, deterministic tools, and has strong analogies to Monte Carlo (MC) neutral particle transport. The analog method, including considerations for simple, complex and loop flows, is fully developed. In addition, six variance reduction tools provide MCise with unique capabilities to improve the statistical precision of MC simulations. Forced Reaction forces an atom to undergo a desired number of reactions in a given irradiation environment. Biased Reaction Branching primarily focuses on improving statistical results of the isotopes that are produced from rare reaction pathways. Biased Source Sampling aims at increasing frequencies of sampling rare initial isotopes as the starting particles. Reaction Path Splitting increases the population by splitting the atom at each reaction point, creating one new atom for each decay or transmutation product. Delta Tracking is recommended for high-frequency pulsing to reduce the computing time. Lastly, Weight Window is introduced as a strategy to decrease large deviations of weight due to the use of variance reduction techniques. A figure of merit is necessary to compare the efficiency of different variance reduction techniques. A number of possibilities for the figure of merit are explored, two of which are robust and subsequently used. One is based on the relative error of a known target isotope (1/R_T^2) and the other on the overall detection limit corrected by the relative error (1/(D_k R_T^2)). An automated Adaptive Variance-reduction Adjustment (AVA) tool is developed to iteratively define parameters for some variance reduction techniques in a problem with a target isotope. Sample problems demonstrate that AVA improves both precision and accuracy of a target result in an efficient manner. Potential applications of MCise include molten salt fueled reactors and liquid breeders in fusion blankets. As an example, the inventory analysis of a liquid actinide fuel in the In-Zinerator, a sub-critical power reactor driven by a fusion source, is examined. The result confirms MCise as a reliable tool for inventory analysis of complex nuclear systems.

  3. Implications of the field sampling procedure of the LUCAS Topsoil Survey for uncertainty in soil organic carbon concentrations.

    NASA Astrophysics Data System (ADS)

    Lark, R. M.; Rawlins, B. G.; Lark, T. A.

    2014-05-01

    The LUCAS Topsoil survey is a pan-European Union initiative in which soil data were collected according to standard protocols from 19 967 sites. Any inference about soil variables is subject to uncertainty due to different sources of variability in the data. In this study we examine the likely magnitude of uncertainty due to the field-sampling protocol. The published sampling protocol (LUCAS, 2009) describes a procedure to form a composite soil sample from aliquots collected to a depth of between approximately 15 and 20 cm. A v-shaped hole to the target depth is cut with a spade, and a slice is then cut from one of the exposed surfaces. This methodology gives rather less control of the sampling depth than protocols used in other soil and geochemical surveys; this may be a substantial source of variation in uncultivated soils with strong contrasts between an organic-rich A-horizon and an underlying B-horizon. We extracted all representative profile descriptions from soil series recorded in the memoir of the 1:250 000-scale map of Northern England (Soil Survey of England and Wales, 1984) where the base of the A-horizon is less than 20 cm below the surface. The Soil Associations in which these 14 series are significant members cover approximately 17% of the area of Northern England, and are expected to be the mineral soils with the largest organic content. Soil organic carbon (SOC) content and horizon thickness were extracted for the A- and B-horizons, and bulk density was taken either from recorded values or from a pedotransfer function prediction. For any proposed angle of the v-shaped hole, the proportions of A- and B-horizon in the resulting sample may be computed by trigonometry. From the bulk density and SOC concentration of the horizons, the SOC concentration of the sample can be computed. For each Soil Series we drew 1000 random samples from a trapezoidal distribution of angles, with uniform density over the range corresponding to depths 15-20 cm and zero density for angles corresponding to depths larger than 21 cm or less than 14 cm. We computed the corresponding variance of sample SOC contents. We found that the variance in SOC determinations attributable to variation in sample depth for these uncultivated soils was of the same order of magnitude as the estimate of the subsampling + analytical variance component (both on a log scale) that we previously computed for soils in the UK (Rawlins et al., 2009). It seems unnecessary to accept this source of uncertainty, given the effort undertaken to reduce the analytical variation, which is no larger (and often smaller) than this variation due to the field protocol. If pan-European soil monitoring is to be based on the LUCAS Topsoil survey, as suggested by an initial report, uncertainty could be reduced if the sampling depth was specified as a single depth rather than the current depth range. LUCAS. 2009. Instructions for Surveyors. Technical reference document C-1: General implementation, Land Cover and Use, Water management, Soil, Transect, Photos. European Commission, Eurostat. Rawlins, B.G., Scheib, A.J., Lark, R.M. & Lister, T.R. 2009. Sampling and analytical plus subsampling variance components for five soil indicators observed at regional scale. European Journal of Soil Science 60, 740-747.
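    The trigonometric argument can be sketched numerically. The snippet below is a hedged reconstruction of the geometry: for a v-shaped cut of depth d through an A-horizon of thickness a, the triangular cross-section implies that the fraction of excavated material coming from the B-horizon is ((d - a)/d)^2. The horizon properties are illustrative values, not figures from the soil memoir, and the trapezoidal depth distribution is generated as a sum of two uniforms.

    ```python
    # Hedged Monte Carlo sketch of the variance in composite SOC due to sampling depth alone.
    import numpy as np

    rng = np.random.default_rng(8)
    n = 1000

    a = 12.0                    # A-horizon thickness (cm), illustrative
    soc_a, soc_b = 8.0, 1.5     # SOC (% by mass) in the A- and B-horizons, illustrative
    bd_a, bd_b = 0.9, 1.3       # bulk density (g/cm^3) in the A- and B-horizons, illustrative

    # Trapezoidal sampling depth: flat over 15-20 cm with 1 cm ramps (sum of two uniforms).
    d = rng.uniform(14.0, 20.0, n) + rng.uniform(0.0, 1.0, n)

    # Triangular cross-section: fraction of excavated volume from below the A-horizon.
    frac_b_vol = np.clip((d - a) / d, 0.0, None) ** 2
    frac_a_vol = 1.0 - frac_b_vol
    mass_a, mass_b = frac_a_vol * bd_a, frac_b_vol * bd_b
    soc_sample = (mass_a * soc_a + mass_b * soc_b) / (mass_a + mass_b)

    print("variance of log(sample SOC) due to depth alone:", np.var(np.log(soc_sample)))
    ```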

  4. Hidden Item Variance in Multiple Mini-Interview Scores

    ERIC Educational Resources Information Center

    Zaidi, Nikki L.; Swoboda, Christopher M.; Kelcey, Benjamin M.; Manuel, R. Stephen

    2017-01-01

    The extant literature has largely ignored a potentially significant source of variance in multiple mini-interview (MMI) scores by "hiding" the variance attributable to the sample of attributes used on an evaluation form. This potential source of hidden variance can be defined as rating items, which typically comprise an MMI evaluation…

  5. RESPONDENT-DRIVEN SAMPLING AS MARKOV CHAIN MONTE CARLO

    PubMed Central

    GOEL, SHARAD; SALGANIK, MATTHEW J.

    2013-01-01

    Respondent-driven sampling (RDS) is a recently introduced, and now widely used, technique for estimating disease prevalence in hidden populations. RDS data are collected through a snowball mechanism, in which current sample members recruit future sample members. In this paper we present respondent-driven sampling as Markov chain Monte Carlo (MCMC) importance sampling, and we examine the effects of community structure and the recruitment procedure on the variance of RDS estimates. Past work has assumed that the variance of RDS estimates is primarily affected by segregation between healthy and infected individuals. We examine an illustrative model to show that this is not necessarily the case, and that bottlenecks anywhere in the networks can substantially affect estimates. We also show that variance is inflated by a common design feature in which sample members are encouraged to recruit multiple future sample members. The paper concludes with suggestions for implementing and evaluating respondent-driven sampling studies. PMID:19572381

  6. Increased gender variance in autism spectrum disorders and attention deficit hyperactivity disorder.

    PubMed

    Strang, John F; Kenworthy, Lauren; Dominska, Aleksandra; Sokoloff, Jennifer; Kenealy, Laura E; Berl, Madison; Walsh, Karin; Menvielle, Edgardo; Slesaransky-Poe, Graciela; Kim, Kyung-Eun; Luong-Tran, Caroline; Meagher, Haley; Wallace, Gregory L

    2014-11-01

    Evidence suggests over-representation of autism spectrum disorders (ASDs) and behavioral difficulties among people referred for gender issues, but rates of the wish to be the other gender (gender variance) among different neurodevelopmental disorders are unknown. This chart review study explored rates of gender variance as reported by parents on the Child Behavior Checklist (CBCL) in children with different neurodevelopmental disorders. Children with ASD (N = 147, 24 females and 123 males), attention deficit hyperactivity disorder (ADHD; N = 126, 38 females and 88 males), or a medical neurodevelopmental disorder (N = 116, 57 females and 59 males) were compared with two non-referred groups [a control sample (N = 165, 61 females and 104 males) and non-referred participants in the CBCL standardization sample (N = 1,605, 754 females and 851 males)]. Significantly greater proportions of participants with ASD (5.4%) or ADHD (4.8%) had parent-reported gender variance than in the combined medical group (1.7%) or non-referred comparison groups (0-0.7%). As compared to non-referred comparisons, participants with ASD were 7.59 times more likely to express gender variance; participants with ADHD were 6.64 times more likely to express gender variance. The medical neurodevelopmental disorder group did not differ from non-referred samples in the likelihood of expressing gender variance. Gender variance was related to elevated emotional symptoms in ADHD, but not in ASD. After accounting for sex ratio differences between the neurodevelopmental disorder and non-referred comparison groups, gender variance occurred equally in females and males.

  7. FOCIS: A forest classification and inventory system using LANDSAT and digital terrain data

    NASA Technical Reports Server (NTRS)

    Strahler, A. H.; Franklin, J.; Woodcook, C. E.; Logan, T. L.

    1981-01-01

    Accurate, cost-effective stratification of forest vegetation and timber inventory is the primary goal of a Forest Classification and Inventory System (FOCIS). Conventional timber stratification using photointerpretation can be time-consuming, costly, and inconsistent from analyst to analyst. FOCIS was designed to overcome these problems by using machine processing techniques to extract and process tonal, textural, and terrain information from registered LANDSAT multispectral and digital terrain data. Comparison of samples from timber strata identified by FOCIS and by conventional procedures showed that both have about the same potential to reduce the variance of timber volume estimates relative to simple random sampling.

  8. Analysis of the variation in OCT measurements of a structural bottleneck for eye-brain transfer of visual information from 3D volumes of the optic nerve head, PIMD-Average [0;2π]

    NASA Astrophysics Data System (ADS)

    Söderberg, Per G.; Malmberg, Filip; Sandberg-Melin, Camilla

    2016-03-01

    The present study aimed to analyze the clinical usefulness of the thinnest cross section of the nerve fibers in the optic nerve head averaged over the circumference of the optic nerve head. 3D volumes of the optic nerve head of the same eye were captured at two different visits spaced in time by 1-4 weeks, in 13 subjects diagnosed with early to moderate glaucoma. At each visit 3 volumes containing the optic nerve head were captured independently with a Topcon OCT-2000 system. In each volume, the average shortest distance between the inner surface of the retina and the central limit of the pigment epithelium around the optic nerve head circumference, PIMD-Average [0;2π], was determined semiautomatically. The measurements were analyzed with an analysis of variance for estimation of the variance components for subjects, visits, volumes and semi-automatic measurements of PIMD-Average [0;2π]. It was found that the variance for subjects was on the order of five times the variance for visits, and the variance for visits was on the order of five times the variance for volumes. The variance for semi-automatic measurements of PIMD-Average [0;2π] was three orders of magnitude lower than the variance for volumes. A 95% confidence interval for mean PIMD-Average [0;2π] was estimated as 1.00 +/- 0.13 mm (d.f. = 12). The variance estimates indicate that PIMD-Average [0;2π] is not suitable for comparison between a one-time estimate in a subject and a population reference interval. Cross-sectional independent group comparisons of PIMD-Average [0;2π] averaged over subjects will require inconveniently large sample sizes. However, cross-sectional independent group comparison of averages of within-subject differences between baseline and follow-up can be made with reasonable sample sizes. Assuming a loss rate of 0.1 PIMD-Average [0;2π] per year and 4 visits per year, it was found that approximately 18 months of follow-up is required before a significant change of PIMD-Average [0;2π] can be observed with a power of 0.8. This is shorter than what has been observed both for HRT measurements and automated perimetry measurements with a similar observation rate. It is concluded that PIMD-Average [0;2π] has the potential to detect deterioration of glaucoma more quickly than currently available primary diagnostic instruments. To increase the efficiency of PIMD-Average [0;2π] further, the variation among visits within subjects has to be reduced.

  9. Neurocognitive impairment in a large sample of homeless adults with mental illness.

    PubMed

    Stergiopoulos, V; Cusi, A; Bekele, T; Skosireva, A; Latimer, E; Schütz, C; Fernando, I; Rourke, S B

    2015-04-01

    This study examines neurocognitive functioning in a large, well-characterized sample of homeless adults with mental illness and assesses demographic and clinical factors associated with neurocognitive performance. A total of 1500 homeless adults with mental illness enrolled in the At Home Chez Soi study completed neuropsychological measures assessing speed of information processing, memory, and executive functioning. Sociodemographic and clinical data were also collected. Linear regression analyses were conducted to examine factors associated with neurocognitive performance. Approximately half of our sample met criteria for psychosis, major depressive disorder, and alcohol or substance use disorder, and nearly half had experienced severe traumatic brain injury. Overall, 72% of participants demonstrated cognitive impairment, including deficits in processing speed (48%), verbal learning (71%) and recall (67%), and executive functioning (38%). The overall statistical model explained 19.8% of the variance in the neurocognitive summary score, with reduced neurocognitive performance associated with older age, lower education, first language other than English or French, Black or Other ethnicity, and the presence of psychosis. Homeless adults with mental illness experience impairment in multiple neuropsychological domains. Much of the variance in our sample's cognitive performance remains unexplained, highlighting the need for further research in the mechanisms underlying cognitive impairment in this population. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  10. Automatic Bayes Factors for Testing Equality- and Inequality-Constrained Hypotheses on Variances.

    PubMed

    Böing-Messing, Florian; Mulder, Joris

    2018-05-03

    In comparing characteristics of independent populations, researchers frequently expect a certain structure of the population variances. These expectations can be formulated as hypotheses with equality and/or inequality constraints on the variances. In this article, we consider the Bayes factor for testing such (in)equality-constrained hypotheses on variances. Application of Bayes factors requires specification of a prior under every hypothesis to be tested. However, specifying subjective priors for variances based on prior information is a difficult task. We therefore consider so-called automatic or default Bayes factors. These methods avoid the need for the user to specify priors by using information from the sample data. We present three automatic Bayes factors for testing variances. The first is a Bayes factor with equal priors on all variances, where the priors are specified automatically using a small share of the information in the sample data. The second is the fractional Bayes factor, where a fraction of the likelihood is used for automatic prior specification. The third is an adjustment of the fractional Bayes factor such that the parsimony of inequality-constrained hypotheses is properly taken into account. The Bayes factors are evaluated by investigating different properties such as information consistency and large sample consistency. Based on this evaluation, it is concluded that the adjusted fractional Bayes factor is generally recommendable for testing equality- and inequality-constrained hypotheses on variances.

  11. Portfolio of automated trading systems: complexity and learning set size issues.

    PubMed

    Raudys, Sarunas

    2013-03-01

    In this paper, we consider using profit/loss histories of multiple automated trading systems (ATSs) as N input variables in portfolio management. By means of multivariate statistical analysis and simulation studies, we analyze the influences of sample size (L) and input dimensionality on the accuracy of determining the portfolio weights. We find that degradation in portfolio performance due to inexact estimation of N means and N(N - 1)/2 correlations is proportional to N/L; however, estimation of N variances does not worsen the result. To reduce unhelpful sample size/dimensionality effects, we perform a clustering of N time series and split them into a small number of blocks. Each block is composed of mutually correlated ATSs. It generates an expert trading agent based on a nontrainable 1/N portfolio rule. To increase the diversity of the expert agents, we use training sets of different lengths for clustering. In the output of the portfolio management system, the regularized mean-variance framework-based fusion agent is developed in each walk-forward step of an out-of-sample portfolio validation experiment. Experiments with the real financial data (2003-2012) confirm the effectiveness of the suggested approach.

  12. The mean and variance of phylogenetic diversity under rarefaction

    PubMed Central

    Matsen, Frederick A.

    2013-01-01

    Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing exact solution mean and variance to those calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparisons of samples of different depth are required. PMID:23833701
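
    The rarefaction calculation the paper generalizes can be illustrated with the classical species-richness case, for which the exact hypergeometric mean and variance are long established. The sketch below (Python, toy stem counts assumed; no phylogeny, so richness stands in for PD) checks the exact formulae against Monte Carlo subsampling, mirroring the validation strategy described above.

      # Exact rarefaction mean/variance for species richness, checked by Monte Carlo.
      import numpy as np
      from math import comb

      counts = np.array([50, 30, 12, 5, 2, 1])   # stem counts per species (toy data)
      N, n = counts.sum(), 40                    # total individuals, rarefied depth

      q = np.array([comb(N - c, n) / comb(N, n) for c in counts])  # P(species absent)
      mean_exact = np.sum(1.0 - q)
      var_exact = np.sum(q * (1.0 - q))
      for i in range(len(counts)):
          for j in range(i + 1, len(counts)):
              q_ij = comb(N - counts[i] - counts[j], n) / comb(N, n)
              var_exact += 2.0 * (q_ij - q[i] * q[j])

      # Monte Carlo check: repeatedly draw n individuals without replacement
      rng = np.random.default_rng(1)
      pool = np.repeat(np.arange(len(counts)), counts)
      rich = [len(np.unique(rng.choice(pool, size=n, replace=False)))
              for _ in range(20000)]
      print(f"exact: mean={mean_exact:.3f}, var={var_exact:.3f}")
      print(f"MC   : mean={np.mean(rich):.3f}, var={np.var(rich, ddof=1):.3f}")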

  13. The mean and variance of phylogenetic diversity under rarefaction.

    PubMed

    Nipperess, David A; Matsen, Frederick A

    2013-06-01

    Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing exact solution mean and variance to those calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparisons of samples of different depth are required.

  14. Shape variation in the human pelvis and limb skeleton: Implications for obstetric adaptation.

    PubMed

    Kurki, Helen K; Decrausaz, Sarah-Louise

    2016-04-01

    Under the obstetrical dilemma (OD) hypothesis, selection acts on the human female pelvis to ensure a sufficiently sized obstetric canal for birthing a large-brained, broad shouldered neonate, while bipedal locomotion selects for a narrower and smaller pelvis. Despite this female-specific stabilizing selection, variability of linear dimensions of the pelvic canal and overall size are not reduced in females, suggesting shape may instead be variable among females of a population. Female canal shape has been shown to vary among populations, while male canal shape does not. Within this context, we examine within-population canal shape variation in comparison with that of noncanal aspects of the pelvis and the limbs. Nine skeletal samples (total female n = 101, male n = 117) representing diverse body sizes and shapes were included. Principal components analysis was applied to size-adjusted variables of each skeletal region. A multivariate variance was calculated using the weighted PC scores for all components in each model and F-ratios used to assess differences in within-population variances between sexes and skeletal regions. Within both sexes, multivariate canal shape variance is significantly greater than noncanal pelvis and limb variances, while limb variance is greater than noncanal pelvis variance in some populations. Multivariate shape variation is not consistently different between the sexes in any of the skeletal regions. Diverse selective pressures, including obstetrics, locomotion, load carrying, and others may act on canal shape, as well as genetic drift and plasticity, thus increasing variation in morphospace while protecting obstetric sufficiency. © 2015 Wiley Periodicals, Inc.

  15. Exploring the factor structure of the Food Cravings Questionnaire-Trait in Cuban adults

    PubMed Central

    Rodríguez-Martín, Boris C.; Molerio-Pérez, Osana

    2014-01-01

    Food cravings refer to an intense desire to eat specific foods. The Food Cravings Questionnaire-Trait (FCQ-T) is the most commonly used instrument to assess food cravings as a multidimensional construct. Its 39 items have an underlying nine-factor structure for both the original English and Spanish version; but subsequent studies yielded fewer factors. As a result, a 15-item version of the FCQ-T with one-factor structure has been proposed (FCQ-T-reduced; see this Research Topic). The current study aimed to explore the factor structure of the Spanish version for both the FCQ-T and FCQ-T-reduced in a sample of 1241 Cuban adults. Results showed a four-factor structure for the FCQ-T, which explained 55% of the variance. Factors were highly correlated. Using the items of the FCQ-T-reduced only showed a one-factor structure, which explained 52% of the variance. Both versions of the FCQ-T were positively correlated with body mass index (BMI), scores on the Food Thoughts Suppression Inventory and weight cycling. In addition, women had higher scores than men and restrained eaters had higher scores than unrestrained eaters. To summarize, results showed that (1) the FCQ-T factor structure was significantly reduced in Cuban adults and (2) the FCQ-T-reduced may represent a good alternative to efficiently assess food craving on a trait level. PMID:24672503

  16. Influence function based variance estimation and missing data issues in case-cohort studies.

    PubMed

    Mark, S D; Katki, H

    2001-12-01

    Recognizing that the efficiency in relative risk estimation for the Cox proportional hazards model is largely constrained by the total number of cases, Prentice (1986) proposed the case-cohort design in which covariates are measured on all cases and on a random sample of the cohort. Subsequent to Prentice, other methods of estimation and sampling have been proposed for these designs. We formalize an approach to variance estimation suggested by Barlow (1994), and derive a robust variance estimator based on the influence function. We consider the applicability of the variance estimator to all the proposed case-cohort estimators, and derive the influence function when known sampling probabilities in the estimators are replaced by observed sampling fractions. We discuss the modifications required when cases are missing covariate information. The missingness may occur by chance, and be completely at random; or may occur as part of the sampling design, and depend upon other observed covariates. We provide an adaptation of S-plus code that allows estimating influence function variances in the presence of such missing covariates. Using examples from our current case-cohort studies on esophageal and gastric cancer, we illustrate how our results are useful in solving design and analytic issues that arise in practice.

  17. Applications of non-parametric statistics and analysis of variance on sample variances

    NASA Technical Reports Server (NTRS)

    Myers, R. H.

    1981-01-01

    Nonparametric methods that are available for NASA-type applications are discussed. An attempt is made here to survey what can be used, to offer recommendations as to when each would be applicable, and to compare the methods, when possible, with the usual normal-theory procedures that are available for the Gaussian analog. It is important to point out the hypotheses being tested, the assumptions being made, and the limitations of the nonparametric procedures. The appropriateness of performing analysis of variance on sample variances is also discussed and studied. This procedure is followed in several NASA simulation projects. On the surface it would appear to be a reasonably sound procedure. However, the difficulties involved center around the normality problem and the basic homogeneous variance assumption that is made in usual analysis of variance problems. These difficulties are discussed and guidelines are given for using the methods.

  18. Efficient design of cluster randomized trials with treatment-dependent costs and treatment-dependent unknown variances.

    PubMed

    van Breukelen, Gerard J P; Candel, Math J J M

    2018-06-10

    Cluster randomized trials evaluate the effect of a treatment on persons nested within clusters, where treatment is randomly assigned to clusters. Current equations for the optimal sample size at the cluster and person level assume that the outcome variances and/or the study costs are known and homogeneous between treatment arms. This paper presents efficient yet robust designs for cluster randomized trials with treatment-dependent costs and treatment-dependent unknown variances, and compares these with 2 practical designs. First, the maximin design (MMD) is derived, which maximizes the minimum efficiency (minimizes the maximum sampling variance) of the treatment effect estimator over a range of treatment-to-control variance ratios. The MMD is then compared with the optimal design for homogeneous variances and costs (balanced design), and with that for homogeneous variances and treatment-dependent costs (cost-considered design). The results show that the balanced design is the MMD if the treatment-to-control cost ratio is the same at both design levels (cluster, person) and within the range for the treatment-to-control variance ratio. It still is highly efficient and better than the cost-considered design if the cost ratio is within the range for the squared variance ratio. Outside that range, the cost-considered design is better and highly efficient, but it is not the MMD. An example shows sample size calculation for the MMD, and the computer code (SPSS and R) is provided as supplementary material. The MMD is recommended for trial planning if the study costs are treatment-dependent and homogeneity of variances cannot be assumed. © 2018 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

  19. Variance-Stable R-Estimators.

    DTIC Science & Technology

    1984-05-01

    By means of the concept of the change-of-variance function we investigate the stability properties of the asymptotic variance of R-estimators. This allows us to construct the optimal V-robust R-estimator that minimizes the asymptotic variance at the model, under the side condition of a bounded change-of-variance function. Finally, we discuss the connection between this function and an influence function for two-sample rank tests introduced by Eplett (1980). (Author)

  20. Reducing variation in a rabbit vaccine safety study with particular emphasis on housing conditions and handling.

    PubMed

    Verwer, Cynthia M; van der Ark, Arno; van Amerongen, Geert; van den Bos, Ruud; Hendriksen, Coenraad F M

    2009-04-01

    This paper describes the results of a study of the effects of modified housing conditions, conditioning and habituation on humans using a rabbit model for monitoring whole-cell pertussis vaccine (pWCV)-induced adverse effects. The study has been performed with reference to previous vaccine safety studies of pWCV in rabbits in which results were difficult to interpret due to the large variation in experimental outcome, especially in the key parameter deep-body temperature (T(b)). Certain stressful laboratory conditions, as well as procedures involving humans, e.g. blood sampling, inoculation and cage-cleaning, were hypothesized to cause this large variation. The results of this study show that under modified housing conditions rabbits have normal circadian body temperatures. This allowed discrimination of pWCV-induced adverse effects in which handled rabbits tended to show a dose-related increase in temperature after inoculation with little variance, whereas non-handled rabbits did not. Effects of experimental and routine procedures on body temperature were significantly reduced under modified conditions and were within the normal T(b) range. Handled animals reacted less strongly and with less variance to experimental procedures, such as blood sampling, injection and cage-cleaning, than non-handled rabbits. Overall, handling had a positive effect on the behaviour of the animals. Data show that the housing modifications have provided a more robust model for monitoring pWCV adverse effects. Furthermore, conditioning and habituation of rabbits to humans reduce the variation in experimental outcome, which might allow for a reduction in the number of animals used. In addition, this also reduces distress and thus contributes to refining this animal model.

  1. How much is enough? An analysis of CD measurement amount for mask characterization

    NASA Astrophysics Data System (ADS)

    Ullrich, Albrecht; Richter, Jan

    2009-10-01

    The demands on CD (critical dimension) metrology amount in terms of both reproducibility and measurement uncertainty steadily increase from node to node. Different mask characterization requirements have to be addressed, such as very small features, unevenly distributed features, contacts, and semi-dense structures, to name only a few. Usually this enhanced need is met by an increasing number of CD measurements, where the new CD requirements are added to the well-established CD characterization recipe. This leads straightforwardly to prolonged cycle times and highly complex evaluation routines. At the same time, mask processes are continuously improved to become more stable. The enhanced stability offers potential to actually reduce the number of measurements. Thus, in this work we start to address the fundamental question of how many CD measurements are needed for mask characterization for a given confidence level. We used analysis of variance (ANOVA) to distinguish various contributors such as the mask making process, measurement tool stability and measurement methodology. These contributions have been investigated for classical photomask CD specifications, e.g. mean to target, CD uniformity, target offset tolerance and x-y bias. We found that, depending on the specification, the relative importance of the contributors changes. Interestingly, short- and long-term metrology contributions are not the only dominant ones. The number of measurements and their spatial distribution on the mask layout (sampling methodology) can also be the most important part of the variance. The knowledge of these contributions can be used to optimize the sampling plan. As a major finding, we conclude that there is potential to reduce a significant number of measurements without losing confidence at all. Here, full sampling in x and y as well as full sampling for different features can be shortened substantially, by almost 50%.

  2. Comparative study of various pixel photodiodes for digital radiography: Junction structure, corner shape and noble window opening

    NASA Astrophysics Data System (ADS)

    Kang, Dong-Uk; Cho, Minsik; Lee, Dae Hee; Yoo, Hyunjun; Kim, Myung Soo; Bae, Jun Hyung; Kim, Hyoungtaek; Kim, Jongyul; Kim, Hyunduk; Cho, Gyuseong

    2012-05-01

    Recently, large-size 3-transistor (3-Tr) active pixel complementary metal-oxide-silicon (CMOS) image sensors have increasingly been used for medium-size digital X-ray radiography, such as dental computed tomography (CT), mammography and nondestructive testing (NDT) for consumer products. We designed and fabricated 50 µm × 50 µm 3-Tr test pixels having a pixel photodiode with various structures and shapes by using the TSMC 0.25-µm standard CMOS process to compare their optical characteristics. The pixel photodiode output was continuously sampled while a test pixel was continuously illuminated by using 550-nm light at a constant intensity. The measurement was repeated 300 times for each test pixel to obtain reliable results on the mean and the variance of the pixel output at each sampling time. The sampling rate was 50 kHz, and the reset period was 200 msec. To estimate the conversion gain, we used the mean-variance method. From the measured results, the n-well/p-substrate photodiode, among the 3 photodiode structures available in a standard CMOS process, showed the best performance at a low illumination equivalent to the typical X-ray signal range. The quantum efficiencies of the n+/p-well, n-well/p-substrate, and n+/p-substrate photodiodes were 18.5%, 62.1%, and 51.5%, respectively. From a comparison of pixels with rounded and rectangular corners, we found that a rounded corner structure could reduce the dark current in large-size pixels. A pixel with four rounded corners showed a reduced dark current of about 200 fA compared to a pixel with four rectangular corners in our pixel sample size. Photodiodes with round p-implant openings showed about 5% higher dark current, but about 34% higher sensitivities, than the conventional photodiodes.
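
    The mean-variance (photon transfer) estimate of conversion gain mentioned above rests on the fact that, for shot-noise-limited pixels, the output variance grows linearly with the output mean and the slope is the gain in digital numbers (DN) per electron. A minimal sketch with synthetic frames follows; the gain and read-noise values are assumptions, not the measured devices.

      # Mean-variance (photon transfer) method: slope of variance vs mean = conversion gain.
      import numpy as np

      rng = np.random.default_rng(2)
      gain_true = 0.08          # DN per electron (assumed for the simulation)
      read_noise_dn = 1.5       # RMS read noise in DN (assumed)
      exposures = np.linspace(500, 20000, 15)   # mean collected electrons per frame

      means, variances = [], []
      for n_e in exposures:
          electrons = rng.poisson(n_e, size=300)             # shot noise, 300 repeats
          signal_dn = gain_true * electrons + rng.normal(0, read_noise_dn, 300)
          means.append(signal_dn.mean())
          variances.append(signal_dn.var(ddof=1))

      slope, intercept = np.polyfit(means, variances, 1)
      print(f"estimated conversion gain: {slope:.4f} DN/e-  (true {gain_true})")
      print(f"estimated read noise variance: {intercept:.2f} DN^2")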

  3. Temporal variability in urinary levels of drinking water disinfection byproducts dichloroacetic acid and trichloroacetic acid among men

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yi-Xin; Zeng, Qiang; Wang, Le

    Urinary haloacetic acids (HAAs), such as dichloroacetic acid (DCAA) and trichloroacetic acid (TCAA), have been suggested as potential biomarkers of exposure to drinking water disinfection byproducts (DBPs). However, variable exposure to and the short elimination half-lives of these biomarkers can result in considerable variability in urinary measurements, leading to exposure misclassification. Here we examined the variability of DCAA and TCAA levels in the urine among eleven men who provided urine samples on 8 days over 3 months. The urinary concentrations of DCAA and TCAA were measured by gas chromatography coupled with electron capture detection. We calculated the intraclass correlation coefficients (ICCs) to characterize the within-person and between-person variances and computed the sensitivity and specificity to assess how well single or multiple urine collections accurately determined personal 3-month average DCAA and TCAA levels. The within-person variance was much higher than the between-person variance for all three sample types (spot, first morning, and 24-h urine samples) for DCAA (ICC=0.08–0.37) and TCAA (ICC=0.09–0.23), regardless of the sampling interval. A single-spot urinary sample predicted high (top 33%) 3-month average DCAA and TCAA levels with high specificity (0.79 and 0.78, respectively) but relatively low sensitivity (0.47 and 0.50, respectively). Collecting two or three urine samples from each participant improved the classification. The poor reproducibility of the measured urinary DCAA and TCAA concentrations indicates that a single measurement may not accurately reflect individual long-term exposure. Collection of multiple urine samples from one person is an option for reducing exposure classification errors in studies exploring the effects of DBP exposure on reproductive health. - Highlights: • We evaluated the variability of DCAA and TCAA levels in the urine among men. • Urinary DCAA and TCAA levels varied greatly over a 3-month period. • A single measurement may not accurately reflect personal long-term exposure levels. • Collecting multiple samples from one person improved the exposure classification.
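
    The intraclass correlation coefficient used above partitions the total variance into between-person and within-person components, typically via one-way random-effects ANOVA. A minimal sketch with simulated log-concentrations follows; the variance components are assumed so that within-person variance dominates, as in the study.

      # ICC from one-way random-effects ANOVA: between-person / (between + within).
      import numpy as np

      rng = np.random.default_rng(3)
      n_subjects, n_repeats = 11, 8
      between_sd, within_sd = 0.3, 0.8           # assumed: within >> between, low ICC

      person_mean = rng.normal(0.0, between_sd, n_subjects)
      x = person_mean[:, None] + rng.normal(0.0, within_sd, (n_subjects, n_repeats))

      grand = x.mean()
      ms_between = n_repeats * np.sum((x.mean(axis=1) - grand) ** 2) / (n_subjects - 1)
      ms_within = np.sum((x - x.mean(axis=1, keepdims=True)) ** 2) / (
          n_subjects * (n_repeats - 1))

      sigma2_between = max((ms_between - ms_within) / n_repeats, 0.0)
      icc = sigma2_between / (sigma2_between + ms_within)
      print(f"ICC = {icc:.2f}  (low values mean a single spot sample is unreliable)")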

  4. Respondent-driven sampling as Markov chain Monte Carlo.

    PubMed

    Goel, Sharad; Salganik, Matthew J

    2009-07-30

    Respondent-driven sampling (RDS) is a recently introduced, and now widely used, technique for estimating disease prevalence in hidden populations. RDS data are collected through a snowball mechanism, in which current sample members recruit future sample members. In this paper we present RDS as Markov chain Monte Carlo importance sampling, and we examine the effects of community structure and the recruitment procedure on the variance of RDS estimates. Past work has assumed that the variance of RDS estimates is primarily affected by segregation between healthy and infected individuals. We examine an illustrative model to show that this is not necessarily the case, and that bottlenecks anywhere in the networks can substantially affect estimates. We also show that variance is inflated by a common design feature in which the sample members are encouraged to recruit multiple future sample members. The paper concludes with suggestions for implementing and evaluating RDS studies.
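
    The bottleneck effect described above can be reproduced on a toy two-community network: a degree-reweighted random-walk estimate (an RDS-II/Volz-Heckathorn-style estimator, used here as a stand-in for the recruitment process) shows far larger variance across replicate samples than random sampling of the same size. All network sizes and probabilities below are illustrative assumptions.

      # Random-walk (RDS-style) prevalence estimation on a two-block network.
      import numpy as np

      rng = np.random.default_rng(4)
      n_block, p_in, p_out = 250, 0.05, 0.002
      n = 2 * n_block
      block = np.repeat([0, 1], n_block)

      # Block-model adjacency plus a within-block ring to avoid isolated nodes
      P = np.where(block[:, None] == block[None, :], p_in, p_out)
      A = (rng.random((n, n)) < P).astype(int)
      A = np.triu(A, 1); A = A + A.T
      for b in (0, 1):
          idx = np.where(block == b)[0]
          A[idx, np.roll(idx, 1)] = 1; A[np.roll(idx, 1), idx] = 1

      y = (rng.random(n) < np.where(block == 0, 0.30, 0.05)).astype(float)  # "infected"
      deg = A.sum(axis=1)
      neighbors = [np.where(A[i])[0] for i in range(n)]

      def rds_estimate(walk_len=300):
          node = rng.integers(n)
          w_sum = y_sum = 0.0
          for _ in range(walk_len):
              node = rng.choice(neighbors[node])
              w_sum += 1.0 / deg[node]          # reweight by inverse degree
              y_sum += y[node] / deg[node]
          return y_sum / w_sum

      reps = 400
      rds = np.array([rds_estimate() for _ in range(reps)])
      srs = np.array([y[rng.integers(0, n, 300)].mean() for _ in range(reps)])
      print(f"true prevalence {y.mean():.3f}")
      print(f"random-walk estimate: mean {rds.mean():.3f}, var {rds.var(ddof=1):.5f}")
      print(f"random-sample estimate: mean {srs.mean():.3f}, var {srs.var(ddof=1):.5f}")
      print(f"approximate design effect: {rds.var(ddof=1) / srs.var(ddof=1):.1f}")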

  5. Minimum variance geographic sampling

    NASA Technical Reports Server (NTRS)

    Terrell, G. R. (Principal Investigator)

    1980-01-01

    Resource inventories require samples with geographical scatter, sometimes not as widely spaced as would be hoped. A simple model of correlation over distances is used to create a minimum variance unbiased estimate of population means. The fitting procedure is illustrated with data used to estimate Missouri corn acreage.

  6. Development of rotation sample designs for the estimation of crop acreages

    NASA Technical Reports Server (NTRS)

    Lycthuan-Lee, T. G. (Principal Investigator)

    1981-01-01

    The idea behind the use of rotation sample designs is that the variation of the crop acreage of a particular sample unit from year to year is usually less than the variation of crop acreage between units within a particular year. The estimation theory is based on an additive mixed analysis of variance model with years as fixed effects, a_t, and sample units as a variable factor. The rotation patterns are decided upon according to: (1) the number of sample units in the design each year; (2) the number of units retained in the following years; and (3) the number of years to complete the rotation pattern. Different analytic formulae are given for the variance of a_t, and variance comparisons are made between the rotation patterns and a complete survey.

  7. Improved variance estimation of classification performance via reduction of bias caused by small sample size.

    PubMed

    Wickenberg-Bolin, Ulrika; Göransson, Hanna; Fryknäs, Mårten; Gustafsson, Mats G; Isaksson, Anders

    2006-03-13

    Supervised learning for classification of cancer employs a set of design examples to learn how to discriminate between tumors. In practice it is crucial to confirm that the classifier is robust with good generalization performance to new examples, or at least that it performs better than random guessing. A suggested alternative is to obtain a confidence interval of the error rate using repeated design and test sets selected from available examples. However, it is known that even in the ideal situation of repeated designs and tests with completely novel samples in each cycle, a small test set size leads to a large bias in the estimate of the true variance between design sets. Therefore different methods for small sample performance estimation, such as a recently proposed procedure called repeated random sampling (RRS), are also expected to result in heavily biased estimates, which in turn translates into biased confidence intervals. Here we explore such biases and develop a refined algorithm called Repeated Independent Design and Test (RIDT). Our simulations reveal that repeated designs and tests based on resampling in a fixed bag of samples yield a biased variance estimate. We also demonstrate that it is possible to obtain an improved variance estimate by means of a procedure that explicitly models how this bias depends on the number of samples used for testing. For the special case of repeated designs and tests using new samples for each design and test, we present an exact analytical expression for how the expected value of the bias decreases with the size of the test set. We show that via modeling and subsequent reduction of the small sample bias, it is possible to obtain an improved estimate of the variance of classifier performance between design sets. However, the uncertainty of the variance estimate is large in the simulations performed, indicating that the method in its present form cannot be directly applied to small data sets.

  8. Statistical classification techniques for engineering and climatic data samples

    NASA Technical Reports Server (NTRS)

    Temple, E. C.; Shipman, J. R.

    1981-01-01

    Fisher's sample linear discriminant function is modified through an appropriate alteration of the common sample variance-covariance matrix. The alteration consists of adding nonnegative values to the eigenvalues of the sample variance-covariance matrix. The desired result of this modification is to increase the number of correct classifications by the new linear discriminant function over Fisher's function. This study is limited to the two-group discriminant problem.
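
    A minimal sketch of the eigenvalue modification described above: the pooled sample variance-covariance matrix is eigendecomposed, a nonnegative constant is added to the eigenvalues, and the matrix is reassembled before forming the linear discriminant. The data, dimensions, and the added constant are illustrative assumptions, not the study's settings.

      # Eigenvalue-inflated pooled covariance in a two-group linear discriminant.
      import numpy as np

      rng = np.random.default_rng(5)
      p, n_per = 20, 15                              # many variables, few samples
      mu1, mu2 = np.zeros(p), np.full(p, 0.4)
      train1 = rng.normal(mu1, 1.0, (n_per, p))
      train2 = rng.normal(mu2, 1.0, (n_per, p))
      test1 = rng.normal(mu1, 1.0, (500, p))
      test2 = rng.normal(mu2, 1.0, (500, p))

      m1, m2 = train1.mean(axis=0), train2.mean(axis=0)
      S = ((train1 - m1).T @ (train1 - m1) + (train2 - m2).T @ (train2 - m2)) / (
          2 * n_per - 2)                             # pooled sample covariance

      def accuracy(S_used):
          w = np.linalg.solve(S_used, m1 - m2)       # discriminant direction
          c = w @ (m1 + m2) / 2.0
          correct1 = (test1 @ w > c).mean()          # true group 1 classified as 1
          correct2 = (test2 @ w <= c).mean()         # true group 2 classified as 2
          return (correct1 + correct2) / 2.0

      vals, vecs = np.linalg.eigh(S)
      S_mod = vecs @ np.diag(vals + 0.5) @ vecs.T    # add a nonnegative value to eigenvalues
      print(f"accuracy, unmodified covariance: {accuracy(S):.3f}")
      print(f"accuracy, eigenvalue-inflated:   {accuracy(S_mod):.3f}")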

  9. Estimating acreage by double sampling using LANDSAT data

    NASA Technical Reports Server (NTRS)

    Pont, F.; Horwitz, H.; Kauth, R. (Principal Investigator)

    1982-01-01

    Double sampling techniques employing LANDSAT data for estimating the acreage of corn and soybeans were investigated and evaluated. The evaluation was based on estimated costs and correlations between two existing procedures having differing cost/variance characteristics, and included consideration of their individual merits when coupled with a fictional 'perfect' procedure of zero bias and variance. Two features of the analysis are: (1) the simultaneous estimation of two or more crops; and (2) the imposition of linear cost constraints among two or more types of resource. A reasonably realistic operational scenario was postulated. The costs were estimated from current experience with the measurement procedures involved, and the correlations were estimated from a set of 39 LACIE-type sample segments located in the U.S. Corn Belt. For a fixed variance of the estimate, double sampling with the two existing LANDSAT measurement procedures can result in a 25% or 50% cost reduction. Double sampling which included the fictional perfect procedure results in a more cost-effective combination when it is used with the lower cost/higher variance representative of the existing procedures.

  10. Fixed precision sampling plans for white apple leafhopper (Homoptera: Cicadellidae) on apple.

    PubMed

    Beers, Elizabeth H; Jones, Vincent P

    2004-10-01

    Constant precision sampling plans for the white apple leafhopper, Typhlocyba pomaria McAtee, were developed so that it could be used as an indicator species for system stability as new integrated pest management programs without broad-spectrum pesticides are developed. Taylor's power law was used to model the relationship between the mean and the variance, and Green's constant precision sequential sample equation was used to develop sampling plans. Bootstrap simulations of the sampling plans showed greater precision (D = 0.25) than the desired precision (Do = 0.3), particularly at low mean population densities. We found that by adjusting the Do value in Green's equation to 0.4, we were able to reduce the average sample number by 25% while providing an average D = 0.31. The sampling plan described allows T. pomaria to be used as a reasonable indicator species of agroecosystem stability in Washington apple orchards.
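
    The two ingredients named above can be sketched directly: Taylor's power law is fit by regressing log variance on log mean across samples, and the fitted coefficients feed Green's fixed-precision stop line. The counts below are simulated and the stop line is written in its commonly cited form, so treat this as an illustration rather than the published plan.

      # Taylor's power law fit and Green's fixed-precision stop line (illustrative).
      import numpy as np

      rng = np.random.default_rng(6)

      # Simulated leafhopper counts: 30 fields, 40 leaves each, aggregated dispersion
      means_true = rng.uniform(0.5, 8.0, 30)
      counts = [rng.negative_binomial(n=2.0, p=2.0 / (2.0 + m), size=40)
                for m in means_true]
      m = np.array([c.mean() for c in counts])
      v = np.array([c.var(ddof=1) for c in counts])

      b, log_a = np.polyfit(np.log(m), np.log(v), 1)   # variance = a * mean^b
      a = np.exp(log_a)
      print(f"Taylor's power law: a={a:.2f}, b={b:.2f}")

      # Green's stop line: stop once the cumulative count T_n reaches
      # T_n >= (a / D0^2)^(1/(2-b)) * n^((b-1)/(b-2))   for fixed precision D0
      D0 = 0.3
      n = np.arange(5, 51, 5)
      T_stop = (a / D0 ** 2) ** (1.0 / (2.0 - b)) * n ** ((b - 1.0) / (b - 2.0))
      for ni, ti in zip(n, T_stop):
          print(f"after {ni:2d} samples, stop if cumulative count >= {ti:6.1f}")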

  11. Prediction of Self-Management Behavior among Iranian Women with Type 2 Diabetes: Application of the Theory of Reasoned Action along with Self-Efficacy (ETRA).

    PubMed

    Didarloo, A R; Shojaeizadeh, D; Gharaaghaji Asl, R; Habibzadeh, H; Niknami, Sh; Pourali, R

    2012-02-01

    Continuous performance of diabetes self-care behaviors has been shown to be an effective strategy to control diabetes and to prevent or reduce its related complications. This study aimed to investigate predictors of self-care behavior based on the extended theory of reasoned action with self-efficacy (ETRA) among women with type 2 diabetes in Iran. A sample of 352 women with type 2 diabetes referred to a diabetes clinic in Khoy, Iran, was enrolled using nonprobability sampling. Appropriate instruments were designed to measure the variables of interest (diabetes knowledge, personal beliefs, subjective norm, self-efficacy and behavioral intention along with self-care behaviors). Reliability and validity of the instruments were tested using Cronbach's alpha coefficients (all values above 0.70) and a panel of experts. A statistically significant correlation existed between the independent constructs of the proposed model and the model-related dependent constructs; the ETRA model, along with its related external factors, explained 41.5% of the variance in intentions and 25.3% of the variance in actual behavior. Among the constructs of the model, self-efficacy was the strongest predictor of intentions among women with type 2 diabetes, as it alone explained 31.3% of the variance in intentions and 11.4% of the variance in self-care behavior. The strong ability of the extended theory of reasoned action with self-efficacy to forecast and explain diabetes self-management can serve as a basis for educational intervention. To improve diabetes self-management behavior and control the disease, educational interventions based on the proposed model are suggested.

  12. Girsanov's transformation based variance reduced Monte Carlo simulation schemes for reliability estimation in nonlinear stochastic dynamics

    NASA Astrophysics Data System (ADS)

    Kanjilal, Oindrila; Manohar, C. S.

    2017-07-01

    The study considers the problem of simulation based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and, the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations.

  13. A comparison of selection at list time and time-stratified sampling for estimating suspended sediment loads

    Treesearch

    Robert B. Thomas; Jack Lewis

    1993-01-01

    Time-stratified sampling of sediment for estimating suspended load is introduced and compared to selection at list time (SALT) sampling. Both methods provide unbiased estimates of load and variance. The magnitude of the variance of the two methods is compared using five storm populations of suspended sediment flux derived from turbidity data. Under like conditions,...

  14. Some refinements on the comparison of areal sampling methods via simulation

    Treesearch

    Jeffrey Gove

    2017-01-01

    The design of forest inventories and development of new sampling methods useful in such inventories normally have a two-fold target of design unbiasedness and minimum variance in mind. Many considerations such as costs go into the choices of sampling method for operational and other levels of inventory. However, the variance in terms of meeting a specified level of...

  15. New Variance-Reducing Methods for the PSD Analysis of Large Optical Surfaces

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2010-01-01

    Edge data of a measured surface map of a circular optic result in large variance or "spectral leakage" behavior in the corresponding Power Spectral Density (PSD) data. In this paper we present two new, alternative methods for reducing such variance in the PSD data by replacing the zeros outside the circular area of a surface map by non-zero values either obtained from a PSD fit (method 1) or taken from the inside of the circular area (method 2).
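
    A simplified one-dimensional analogue of the idea above: computing a PSD with zeros outside the measurement aperture introduces a sharp edge and hence leakage, whereas filling the exterior with values taken from inside the aperture suppresses it. The mirror-fill rule below is an illustrative stand-in for the paper's two methods (PSD-fit fill and interior-copy fill), not their implementation.

      # 1-D illustration of spectral leakage from zero fill vs interior fill.
      import numpy as np

      rng = np.random.default_rng(7)
      n, aperture = 1024, 512
      x = np.arange(n)
      surface = (0.6 + 0.3 * x / n                      # piston + tilt (nonzero at the edge)
                 + 0.4 * np.sin(2 * np.pi * 12 * x / n)
                 + 0.002 * rng.normal(size=n))

      left, right = (n - aperture) // 2, (n + aperture) // 2
      mask = np.zeros(n, dtype=bool)
      mask[left:right] = True
      inside = surface[mask]

      zero_filled = np.where(mask, surface, 0.0)        # hard edge -> leakage

      mirror_filled = surface.copy()                    # fill exterior from the interior
      mirror_filled[:left] = inside[:left][::-1]
      mirror_filled[right:] = inside[-(n - right):][::-1]

      def psd(sig):
          return np.abs(np.fft.rfft(sig)) ** 2 / len(sig)

      hi_freq = slice(100, None)                        # bins well away from the signal peak
      print(f"mean high-frequency PSD, zero fill:     {psd(zero_filled)[hi_freq].mean():.2e}")
      print(f"mean high-frequency PSD, interior fill: {psd(mirror_filled)[hi_freq].mean():.2e}")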

  16. Retest of a Principal Components Analysis of Two Household Environmental Risk Instruments.

    PubMed

    Oneal, Gail A; Postma, Julie; Odom-Maryon, Tamara; Butterfield, Patricia

    2016-08-01

    Household Risk Perception (HRP) and Self-Efficacy in Environmental Risk Reduction (SEERR) instruments were developed for a public health nurse-delivered intervention designed to reduce home-based, environmental health risks among rural, low-income families. The purpose of this study was to test both instruments in a second low-income population that differed geographically and economically from the original sample. Participants (N = 199) were recruited from the Women, Infants, and Children (WIC) program. Paper and pencil surveys were collected at WIC sites by research-trained student nurses. Exploratory principal components analysis (PCA) was conducted, and comparisons were made to the original PCA for the purpose of data reduction. Instruments showed satisfactory Cronbach alpha values for all components. HRP components were reduced from five to four, which explained 70% of variance. The components were labeled sensed risks, unseen risks, severity of risks, and knowledge. In contrast to the original testing, environmental tobacco smoke (ETS) items did not form a separate component of the HRP. The SEERR analysis demonstrated four components explaining 71% of variance, with similar patterns of items as in the first study, including a component on ETS, but some differences in item location. Although low-income populations constituted both samples, differences in demographics and risk exposures may have played a role in component and item locations. Findings provided justification for changing or reducing items, and for tailoring the instruments to population-level risks and behaviors. Although analytic refinement will continue, both instruments advance the measurement of environmental health risk perception and self-efficacy. © 2016 Wiley Periodicals, Inc.

  17. Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats

    USGS Publications Warehouse

    Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.

    2012-01-01

    This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected using ADCP from moving boats from three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements associated with different sampling times, assuming uncorrelated flow fields is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.

  18. Evaluation of three lidar scanning strategies for turbulence measurements

    NASA Astrophysics Data System (ADS)

    Newman, J. F.; Klein, P. M.; Wharton, S.; Sathe, A.; Bonin, T. A.; Chilson, P. B.; Muschinski, A.

    2015-11-01

    Several errors occur when a traditional Doppler-beam swinging (DBS) or velocity-azimuth display (VAD) strategy is used to measure turbulence with a lidar. To mitigate some of these errors, a scanning strategy was recently developed which employs six beam positions to independently estimate the u, v, and w velocity variances and covariances. In order to assess the ability of these different scanning techniques to measure turbulence, a Halo scanning lidar, WindCube v2 pulsed lidar and ZephIR continuous wave lidar were deployed at field sites in Oklahoma and Colorado with collocated sonic anemometers. Results indicate that the six-beam strategy mitigates some of the errors caused by VAD and DBS scans, but the strategy is strongly affected by errors in the variance measured at the different beam positions. The ZephIR and WindCube lidars overestimated horizontal variance values by over 60 % under unstable conditions as a result of variance contamination, where additional variance components contaminate the true value of the variance. A correction method was developed for the WindCube lidar that uses variance calculated from the vertical beam position to reduce variance contamination in the u and v variance components. The correction method reduced WindCube variance estimates by over 20 % at both the Oklahoma and Colorado sites under unstable conditions, when variance contamination is largest. This correction method can be easily applied to other lidars that contain a vertical beam position and is a promising method for accurately estimating turbulence with commercially available lidars.

  19. Evaluation of three lidar scanning strategies for turbulence measurements

    NASA Astrophysics Data System (ADS)

    Newman, Jennifer F.; Klein, Petra M.; Wharton, Sonia; Sathe, Ameya; Bonin, Timothy A.; Chilson, Phillip B.; Muschinski, Andreas

    2016-05-01

    Several errors occur when a traditional Doppler beam swinging (DBS) or velocity-azimuth display (VAD) strategy is used to measure turbulence with a lidar. To mitigate some of these errors, a scanning strategy was recently developed which employs six beam positions to independently estimate the u, v, and w velocity variances and covariances. In order to assess the ability of these different scanning techniques to measure turbulence, a Halo scanning lidar, WindCube v2 pulsed lidar, and ZephIR continuous wave lidar were deployed at field sites in Oklahoma and Colorado with collocated sonic anemometers. Results indicate that the six-beam strategy mitigates some of the errors caused by VAD and DBS scans, but the strategy is strongly affected by errors in the variance measured at the different beam positions. The ZephIR and WindCube lidars overestimated horizontal variance values by over 60 % under unstable conditions as a result of variance contamination, where additional variance components contaminate the true value of the variance. A correction method was developed for the WindCube lidar that uses variance calculated from the vertical beam position to reduce variance contamination in the u and v variance components. The correction method reduced WindCube variance estimates by over 20 % at both the Oklahoma and Colorado sites under unstable conditions, when variance contamination is largest. This correction method can be easily applied to other lidars that contain a vertical beam position and is a promising method for accurately estimating turbulence with commercially available lidars.

  20. A stochastic hybrid model for pricing forward-start variance swaps

    NASA Astrophysics Data System (ADS)

    Roslan, Teh Raihana Nazirah

    2017-11-01

    Recently, market players have been exposed to the astounding increase in the trading volume of variance swaps. In this paper, the forward-start nature of a variance swap is inspected, where hybridizations of equity and interest rate models are used to evaluate the price of discretely-sampled forward-start variance swaps. The Heston stochastic volatility model is extended to incorporate the dynamics of the Cox-Ingersoll-Ross (CIR) stochastic interest rate model. This is essential since previous studies on variance swaps mainly focused on instantaneous-start variance swaps without considering interest rate effects. This hybrid model produces an efficient semi-closed form pricing formula through the development of forward characteristic functions. The performance of this formula is investigated via simulations to demonstrate how the formula performs for different sampling times and against the real market scenario. Comparison with the Monte Carlo simulation, which was set as our main reference point, reveals that our pricing formula achieves almost the same precision in a shorter execution time.
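
    A Monte Carlo benchmark of the kind used as the reference point above can be sketched for the Heston part of the model alone (constant interest rate, so the CIR dynamics and the forward-start feature are omitted): simulate the discretely-sampled realized variance and average it to obtain the fair strike. All parameter values are illustrative assumptions.

      # Monte Carlo fair strike of a discretely-sampled variance swap under Heston.
      import numpy as np

      rng = np.random.default_rng(8)
      S0, v0 = 100.0, 0.04
      kappa, theta, sigma_v, rho, r = 2.0, 0.05, 0.3, -0.7, 0.02
      T, n_obs, n_paths = 1.0, 252, 20000
      dt = T / n_obs

      S = np.full(n_paths, S0)
      v = np.full(n_paths, v0)
      sum_sq_log_ret = np.zeros(n_paths)

      for _ in range(n_obs):
          z1 = rng.standard_normal(n_paths)
          z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n_paths)
          v_pos = np.maximum(v, 0.0)                      # full-truncation Euler
          S_new = S * np.exp((r - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
          v = v + kappa * (theta - v_pos) * dt + sigma_v * np.sqrt(v_pos * dt) * z2
          sum_sq_log_ret += np.log(S_new / S) ** 2
          S = S_new

      # Fair strike: expected annualized realized variance (in variance points)
      realized_var = sum_sq_log_ret / T
      fair_strike = realized_var.mean()
      stderr = realized_var.std(ddof=1) / np.sqrt(n_paths)
      print(f"fair variance strike: {fair_strike:.5f} +/- {1.96 * stderr:.5f}")
      print(f"(long-run variance theta = {theta}, initial variance v0 = {v0})")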

  1. Control Variate Estimators of Survivor Growth from Point Samples

    Treesearch

    Francis A. Roesch; Paul C. van Deusen

    1993-01-01

    Two estimators of the control variate type for survivor growth from remeasured point samples are proposed and compared with more familiar estimators. The large reductions in variance, observed in many cases for estimators constructed with control variates, are also realized in this application. A simulation study yielded consistent reductions in variance which were often...
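
    A minimal sketch of a control-variate estimator of the kind referred to above: the variable of interest (labelled survivor growth here) is adjusted using an auxiliary variable whose population mean is known, which reduces the variance of the estimate whenever the two are correlated. The data are synthetic, not point-sample data.

      # Control-variate adjustment of a sample mean using a correlated auxiliary variable.
      import numpy as np

      rng = np.random.default_rng(9)
      n = 60
      x_mean_known = 10.0                          # known population mean of the control
      x = rng.normal(x_mean_known, 2.0, n)         # auxiliary variable (e.g. prior volume)
      y = 3.0 + 0.8 * x + rng.normal(0.0, 1.0, n)  # survivor growth, correlated with x

      beta = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
      y_cv = y.mean() - beta * (x.mean() - x_mean_known)   # control-variate estimator

      var_plain = np.var(y, ddof=1) / n
      var_cv = var_plain * (1.0 - np.corrcoef(x, y)[0, 1] ** 2)  # approximate variance
      print(f"plain mean estimate {y.mean():.3f}, variance {var_plain:.4f}")
      print(f"control-variate estimate {y_cv:.3f}, variance {var_cv:.4f}")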

  2. A Monte Carlo Study of Levene's Test of Homogeneity of Variance: Empirical Frequencies of Type I Error in Normal Distributions.

    ERIC Educational Resources Information Center

    Neel, John H.; Stallings, William M.

    An influential statistics text recommends Levene's test for homogeneity of variance. A recent note suggests that Levene's test is upwardly biased for small samples. Another report shows inflated alpha estimates and low power. Neither study utilized more than two sample sizes. This Monte Carlo study involved sampling from a normal population for…
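
    A Monte Carlo check in the spirit of the study above is straightforward to reproduce: sample repeatedly from normal populations with equal variances and record how often Levene's test rejects at the nominal level for several small sample sizes. The sketch below uses SciPy's implementation with both mean- and median-centred variants.

      # Empirical Type I error of Levene's test under H0 (equal variances, normal data).
      import numpy as np
      from scipy.stats import levene

      rng = np.random.default_rng(10)
      alpha, n_reps = 0.05, 5000

      for n in (5, 10, 25):
          rej_mean = rej_median = 0
          for _ in range(n_reps):
              g1 = rng.normal(0.0, 1.0, n)
              g2 = rng.normal(0.0, 1.0, n)        # same variance: H0 is true
              rej_mean += levene(g1, g2, center="mean").pvalue < alpha
              rej_median += levene(g1, g2, center="median").pvalue < alpha
          print(f"n={n:2d}: empirical Type I error  "
                f"mean-centred {rej_mean / n_reps:.3f}, "
                f"median-centred {rej_median / n_reps:.3f}")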

  3. Approximate Sample Size Formulas for Testing Group Mean Differences when Variances Are Unequal in One-Way ANOVA

    ERIC Educational Resources Information Center

    Guo, Jiin-Huarng; Luh, Wei-Ming

    2008-01-01

    This study proposes an approach for determining appropriate sample size for Welch's F test when unequal variances are expected. Given a certain maximum deviation in population means and using the quantile of F and t distributions, there is no need to specify a noncentrality parameter and it is easy to estimate the approximate sample size needed…
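
    A simulation-based version of the same sample-size question can serve as a cross-check: for an assumed mean difference and unequal variances, increase the per-group n until Welch's test (shown here in its two-group t form, the simplest case of Welch's procedure) reaches the target power. The effect size, variance ratio, and power target below are assumptions.

      # Simulated power of Welch's test with unequal variances, scanning per-group n.
      import numpy as np
      from scipy.stats import ttest_ind

      rng = np.random.default_rng(11)
      delta, sd1, sd2 = 0.5, 1.0, 2.0        # mean difference and unequal SDs
      alpha, target_power, n_reps = 0.05, 0.80, 2000

      for n in range(10, 201, 10):
          rejections = 0
          for _ in range(n_reps):
              g1 = rng.normal(0.0, sd1, n)
              g2 = rng.normal(delta, sd2, n)
              rejections += ttest_ind(g1, g2, equal_var=False).pvalue < alpha
          power = rejections / n_reps
          if power >= target_power:
              print(f"approximately n = {n} per group gives power {power:.2f}")
              break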

  4. Sampling design optimisation for rainfall prediction using a non-stationary geostatistical model

    NASA Astrophysics Data System (ADS)

    Wadoux, Alexandre M. J.-C.; Brus, Dick J.; Rico-Ramirez, Miguel A.; Heuvelink, Gerard B. M.

    2017-09-01

    The accuracy of spatial predictions of rainfall by merging rain-gauge and radar data is partly determined by the sampling design of the rain-gauge network. Optimising the locations of the rain-gauges may increase the accuracy of the predictions. Existing spatial sampling design optimisation methods are based on minimisation of the spatially averaged prediction error variance under the assumption of intrinsic stationarity. Over the past years, substantial progress has been made to deal with non-stationary spatial processes in kriging. Various well-documented geostatistical models relax the assumption of stationarity in the mean, while recent studies show the importance of considering non-stationarity in the variance for environmental processes occurring in complex landscapes. We optimised the sampling locations of rain-gauges using an extension of the Kriging with External Drift (KED) model for prediction of rainfall fields. The model incorporates both non-stationarity in the mean and in the variance, which are modelled as functions of external covariates such as radar imagery, distance to radar station and radar beam blockage. Spatial predictions are made repeatedly over time, each time recalibrating the model. The space-time averaged KED variance was minimised by Spatial Simulated Annealing (SSA). The methodology was tested using a case study predicting daily rainfall in the north of England for a one-year period. Results show that (i) the proposed non-stationary variance model outperforms the stationary variance model, and (ii) a small but significant decrease of the rainfall prediction error variance is obtained with the optimised rain-gauge network. In particular, it pays off to place rain-gauges at locations where the radar imagery is inaccurate, while keeping the distribution over the study area sufficiently uniform.

  5. Determining Optimal Location and Numbers of Sample Transects for Characterization of UXO Sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BILISOLY, ROGER L.; MCKENNA, SEAN A.

    2003-01-01

    Previous work on sample design has been focused on constructing designs for samples taken at point locations. Significantly less work has been done on sample design for data collected along transects. A review of approaches to point and transect sampling design shows that transects can be considered as a sequential set of point samples. Any two sampling designs can be compared through using each one to predict the value of the quantity being measured on a fixed reference grid. The quality of a design is quantified in two ways: computing either the sum or the product of the eigenvalues of the variance matrix of the prediction error. An important aspect of this analysis is that the reduction of the mean prediction error variance (MPEV) can be calculated for any proposed sample design, including one with straight and/or meandering transects, prior to taking those samples. This reduction in variance can be used as a "stopping rule" to determine when enough transect sampling has been completed on the site. Two approaches for the optimization of the transect locations are presented. The first minimizes the sum of the eigenvalues of the predictive error, and the second minimizes the product of these eigenvalues. Simulated annealing is used to identify transect locations that meet either of these objectives. This algorithm is applied to a hypothetical site to determine the optimal locations of two iterations of meandering transects given a previously existing straight transect. The MPEV calculation is also used on both a hypothetical site and on data collected at the Isleta Pueblo to evaluate its potential as a stopping rule. Results show that three or four rounds of systematic sampling with straight parallel transects covering 30 percent or less of the site can reduce the initial MPEV by as much as 90 percent. The amount of reduction in MPEV can be used as a stopping rule, but the relationship between MPEV and the results of excavation versus no-further-action decisions is site specific and cannot be calculated prior to the sampling. It may be advantageous to use the reduction in MPEV as a stopping rule for systematic sampling across the site that can then be followed by focused sampling in areas identified as having UXO during the systematic sampling. The techniques presented here provide answers to the questions of "Where to sample?" and "When to stop?" and are capable of running in near real time to support iterative site characterization campaigns.

  6. Repeated measurements of mite and pet allergen levels in house dust over a time period of 8 years.

    PubMed

    Antens, C J M; Oldenwening, M; Wolse, A; Gehring, U; Smit, H A; Aalberse, R C; Kerkhof, M; Gerritsen, J; de Jongste, J C; Brunekreef, B

    2006-12-01

    Studies of the association between indoor allergen exposure and the development of allergic diseases have often measured allergen exposure at one point in time. We investigated the variability of house dust mite (Der p 1, Der f 1) and cat (Fel d 1) allergen in Dutch homes over a period of 8 years. Data were obtained in the Dutch PIAMA birth cohort study. Dust from the child's mattress, the parents' mattress and the living room floor was collected at four points in time, when the child was 3 months, 4, 6 and 8 years old. Dust samples were analysed for Der p 1, Der f 1 and Fel d 1 by sandwich enzyme immuno assay. Mite allergen concentrations for the child's mattress, the parents' mattress and the living room floor were moderately correlated between time-points. Agreement was better for cat allergen. For Der p 1 and Der f 1 on the child's mattress, the within-home variance was close to or smaller than the between-home variance in most cases. For Fel d 1, the within-home variance was almost always smaller than the between-home variance. Results were similar for allergen levels expressed per gram of dust and allergen levels expressed per square metre of the sampled surface. Variance ratios were smaller when samples were taken at shorter time intervals than at longer time intervals. Over a period of 4 years, mite and cat allergens measured in house dust are sufficiently stable to use single measurements with confidence in epidemiological studies. The within-home variance was larger when samples were taken 8 years apart so that over such long periods, repetition of sampling is recommended.

  7. Minimizing the Standard Deviation of Spatially Averaged Surface Cross-Sectional Data from the Dual-Frequency Precipitation Radar

    NASA Technical Reports Server (NTRS)

    Meneghini, Robert; Kim, Hyokyung

    2016-01-01

    For an airborne or spaceborne radar, the precipitation-induced path attenuation can be estimated from the measurements of the normalized surface cross section, sigma 0, in the presence and absence of precipitation. In one implementation, the mean rain-free estimate and its variability are found from a lookup table (LUT) derived from previously measured data. For the dual-frequency precipitation radar aboard the global precipitation measurement satellite, the nominal table consists of the statistics of the rain-free sigma 0 over a 0.5 deg x 0.5 deg latitude-longitude grid using a three-month set of input data. However, a problem with the LUT is an insufficient number of samples in many cells. An alternative table is constructed by a stepwise procedure that begins with the statistics over a 0.25 deg x 0.25 deg grid. If the number of samples at a cell is too few, the area is expanded, cell by cell, choosing at each step that cell that minimizes the variance of the data. The question arises, however, as to whether the selected region corresponds to the smallest variance. To address this question, a second type of variable-averaging grid is constructed using all possible spatial configurations and computing the variance of the data within each region. Comparisons of the standard deviations for the fixed and variable-averaged grids are given as a function of incidence angle and surface type using a three-month set of data. The advantage of variable spatial averaging is that the average standard deviation can be reduced relative to the fixed grid while satisfying the minimum sample requirement.

  8. Gender variance in childhood and sexual orientation in adulthood: a prospective study.

    PubMed

    Steensma, Thomas D; van der Ende, Jan; Verhulst, Frank C; Cohen-Kettenis, Peggy T

    2013-11-01

    Several retrospective and prospective studies have reported on the association between childhood gender variance and sexual orientation and gender discomfort in adulthood. In most of the retrospective studies, samples were drawn from the general population. The samples in the prospective studies consisted of clinically referred children. In understanding the extent to which the association applies for the general population, prospective studies using random samples are needed. This prospective study examined the association between childhood gender variance, and sexual orientation and gender discomfort in adulthood in the general population. In 1983, we measured childhood gender variance, in 406 boys and 473 girls. In 2007, sexual orientation and gender discomfort were assessed. Childhood gender variance was measured with two items from the Child Behavior Checklist/4-18. Sexual orientation was measured for four parameters of sexual orientation (attraction, fantasy, behavior, and identity). Gender discomfort was assessed by four questions (unhappiness and/or uncertainty about one's gender, wish or desire to be of the other gender, and consideration of living in the role of the other gender). For both men and women, the presence of childhood gender variance was associated with homosexuality for all four parameters of sexual orientation, but not with bisexuality. The report of adulthood homosexuality was 8 to 15 times higher for participants with a history of gender variance (10.2% to 12.2%), compared to participants without a history of gender variance (1.2% to 1.7%). The presence of childhood gender variance was not significantly associated with gender discomfort in adulthood. This study clearly showed a significant association between childhood gender variance and a homosexual sexual orientation in adulthood in the general population. In contrast to the findings in clinically referred gender-variant children, the presence of a homosexual sexual orientation in adulthood was substantially lower. © 2012 International Society for Sexual Medicine.

  9. Designing Intervention Studies: Selected Populations, Range Restrictions, and Statistical Power

    PubMed Central

    Miciak, Jeremy; Taylor, W. Pat; Stuebing, Karla K.; Fletcher, Jack M.; Vaughn, Sharon

    2016-01-01

    An appropriate estimate of statistical power is critical for the design of intervention studies. Although the inclusion of a pretest covariate in the test of the primary outcome can increase statistical power, samples selected on the basis of pretest performance may demonstrate range restriction on the selection measure and other correlated measures. This can result in attenuated pretest-posttest correlations, reducing the variance explained by the pretest covariate. We investigated the implications of two potential range restriction scenarios: direct truncation on a selection measure and indirect range restriction on correlated measures. Empirical and simulated data indicated direct range restriction on the pretest covariate greatly reduced statistical power and necessitated sample size increases of 82%–155% (dependent on selection criteria) to achieve equivalent statistical power to parameters with unrestricted samples. However, measures demonstrating indirect range restriction required much smaller sample size increases (32%–71%) under equivalent scenarios. Additional analyses manipulated the correlations between measures and pretest-posttest correlations to guide planning experiments. Results highlight the need to differentiate between selection measures and potential covariates and to investigate range restriction as a factor impacting statistical power. PMID:28479943

  10. Designing Intervention Studies: Selected Populations, Range Restrictions, and Statistical Power.

    PubMed

    Miciak, Jeremy; Taylor, W Pat; Stuebing, Karla K; Fletcher, Jack M; Vaughn, Sharon

    2016-01-01

    An appropriate estimate of statistical power is critical for the design of intervention studies. Although the inclusion of a pretest covariate in the test of the primary outcome can increase statistical power, samples selected on the basis of pretest performance may demonstrate range restriction on the selection measure and other correlated measures. This can result in attenuated pretest-posttest correlations, reducing the variance explained by the pretest covariate. We investigated the implications of two potential range restriction scenarios: direct truncation on a selection measure and indirect range restriction on correlated measures. Empirical and simulated data indicated direct range restriction on the pretest covariate greatly reduced statistical power and necessitated sample size increases of 82%-155% (dependent on selection criteria) to achieve equivalent statistical power to parameters with unrestricted samples. However, measures demonstrating indirect range restriction required much smaller sample size increases (32%-71%) under equivalent scenarios. Additional analyses manipulated the correlations between measures and pretest-posttest correlations to guide planning experiments. Results highlight the need to differentiate between selection measures and potential covariates and to investigate range restriction as a factor impacting statistical power.
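
    As a rough, self-contained illustration of the range-restriction effect described above (not the authors' simulation code), the sketch below assumes bivariate-normal pretest and posttest scores with an arbitrary population correlation of 0.7, selects the lowest-scoring 25% either directly on the pretest or indirectly on a screener correlated 0.6 with it, and compares the surviving pretest-posttest correlations.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 100_000, 0.7                       # assumed population pretest-posttest correlation

# Bivariate-normal pretest (x) and posttest (y).
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

def corr_after_selection(mask):
    """Pretest-posttest correlation within the selected subsample."""
    return np.corrcoef(x[mask], y[mask])[0, 1]

full = corr_after_selection(np.ones(n, dtype=bool))
direct = corr_after_selection(x < np.quantile(x, 0.25))      # direct truncation on the pretest
z = 0.6 * x + np.sqrt(1 - 0.6**2) * rng.standard_normal(n)   # screener correlated 0.6 with pretest
indirect = corr_after_selection(z < np.quantile(z, 0.25))    # indirect range restriction

for label, r in [("unrestricted", full), ("direct truncation", direct), ("indirect restriction", indirect)]:
    print(f"{label:>20}: r = {r:.2f}, variance explained = {r**2:.2f}")
```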

  11. Network Sampling with Memory: A proposal for more efficient sampling from social networks.

    PubMed

    Mouw, Ted; Verdery, Ashton M

    2012-08-01

    Techniques for sampling from networks have grown into an important area of research across several fields. For sociologists, the possibility of sampling from a network is appealing for two reasons: (1) A network sample can yield substantively interesting data about network structures and social interactions, and (2) it is useful in situations where study populations are difficult or impossible to survey with traditional sampling approaches because of the lack of a sampling frame. Despite its appeal, methodological concerns about the precision and accuracy of network-based sampling methods remain. In particular, recent research has shown that sampling from a network using a random walk based approach such as Respondent Driven Sampling (RDS) can result in high design effects (DE)-the ratio of the sampling variance to the sampling variance of simple random sampling (SRS). A high design effect means that more cases must be collected to achieve the same level of precision as SRS. In this paper we propose an alternative strategy, Network Sampling with Memory (NSM), which collects network data from respondents in order to reduce design effects and, correspondingly, the number of interviews needed to achieve a given level of statistical power. NSM combines a "List" mode, where all individuals on the revealed network list are sampled with the same cumulative probability, with a "Search" mode, which gives priority to bridge nodes connecting the current sample to unexplored parts of the network. We test the relative efficiency of NSM compared to RDS and SRS on 162 school and university networks from Add Health and Facebook that range in size from 110 to 16,278 nodes. The results show that the average design effect for NSM on these 162 networks is 1.16, which is very close to the efficiency of a simple random sample (DE=1), and 98.5% lower than the average DE we observed for RDS.
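
    The design effect (DE) cited here has a simple computational form. The following sketch, with toy replicate estimates rather than the Add Health or Facebook networks, shows how DE could be obtained by comparing the variance of repeated design-based estimates of a proportion with the simple-random-sampling variance p(1 - p)/n.

```python
import numpy as np

def design_effect(estimates, p, n):
    """DE = sampling variance of the design's estimator / SRS variance p(1-p)/n."""
    return np.var(estimates, ddof=1) / (p * (1 - p) / n)

# Toy replicates: estimates 1.5x as spread out as SRS would produce (hypothetical values).
rng = np.random.default_rng(1)
p, n = 0.3, 500
srs_sd = np.sqrt(p * (1 - p) / n)
replicates = rng.normal(p, 1.5 * srs_sd, size=2000)
print(f"DE ≈ {design_effect(replicates, p, n):.2f}")   # ≈ 1.5**2 = 2.25
```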

  12. Network Sampling with Memory: A proposal for more efficient sampling from social networks

    PubMed Central

    Mouw, Ted; Verdery, Ashton M.

    2013-01-01

    Techniques for sampling from networks have grown into an important area of research across several fields. For sociologists, the possibility of sampling from a network is appealing for two reasons: (1) A network sample can yield substantively interesting data about network structures and social interactions, and (2) it is useful in situations where study populations are difficult or impossible to survey with traditional sampling approaches because of the lack of a sampling frame. Despite its appeal, methodological concerns about the precision and accuracy of network-based sampling methods remain. In particular, recent research has shown that sampling from a network using a random walk based approach such as Respondent Driven Sampling (RDS) can result in high design effects (DE)—the ratio of the sampling variance to the sampling variance of simple random sampling (SRS). A high design effect means that more cases must be collected to achieve the same level of precision as SRS. In this paper we propose an alternative strategy, Network Sampling with Memory (NSM), which collects network data from respondents in order to reduce design effects and, correspondingly, the number of interviews needed to achieve a given level of statistical power. NSM combines a “List” mode, where all individuals on the revealed network list are sampled with the same cumulative probability, with a “Search” mode, which gives priority to bridge nodes connecting the current sample to unexplored parts of the network. We test the relative efficiency of NSM compared to RDS and SRS on 162 school and university networks from Add Health and Facebook that range in size from 110 to 16,278 nodes. The results show that the average design effect for NSM on these 162 networks is 1.16, which is very close to the efficiency of a simple random sample (DE=1), and 98.5% lower than the average DE we observed for RDS. PMID:24159246

  13. The relationship between psychosocial job stress and burnout in emergency departments: an exploratory study.

    PubMed

    García-Izquierdo, Mariano; Ríos-Rísquez, María Isabel

    2012-01-01

    The purpose of this study was to examine the relationship and predictive power of various psychosocial job stressors for the 3 dimensions of burnout in emergency departments. This study used a cross-sectional design with a questionnaire as the data-collection tool. The data were gathered using an anonymous questionnaire in 3 hospitals in Spain. The sample consisted of 191 emergency department nurses. Burnout was evaluated by the Maslach Burnout Inventory and the job stressors by the Nursing Stress Scale. The burnout model in this study consisted of 3 dimensions: emotional exhaustion, cynicism, and reduced professional efficacy. The model that predicted the emotional exhaustion dimension was formed by 2 variables, excessive workload and lack of emotional support, which together explained 19.4% of the variance in emotional exhaustion. Cynicism had 4 predictors that explained 25.8% of the variance: interpersonal conflicts, lack of social support, excessive workload, and type of contract. Finally, variability in reduced professional efficacy was predicted by 3 variables, interpersonal conflicts, lack of social support, and the type of shift worked, which explained 10.4% of the variance. From the point of view of nurse leaders, organizational interventions, and the management of human resources, this analysis of the principal causes of burnout is particularly useful to select, prioritize, and implement preventive measures that will improve the quality of care offered to patients and the well-being of personnel. Copyright © 2012 Elsevier Inc. All rights reserved.

  14. Bootstrap Estimation and Testing for Variance Equality.

    ERIC Educational Resources Information Center

    Olejnik, Stephen; Algina, James

    The purpose of this study was to develop a single procedure for comparing population variances that could be used across different distribution forms. Bootstrap methodology was used to estimate the variability of the sample variance statistic when the population distribution was normal, platykurtic, or leptokurtic. The data for the study were generated and…
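
    The abstract above is truncated, so the following is only a generic sketch of the idea: bootstrap resampling is used to put a confidence interval on the ratio of two sample variances, with one normal and one leptokurtic toy sample; the paper's actual test statistic and decision rule may differ.

```python
import numpy as np

rng = np.random.default_rng(2)

def bootstrap_var_ratio_ci(x, y, n_boot=5000, alpha=0.05):
    """Percentile bootstrap CI for var(x)/var(y); excluding 1 suggests unequal variances."""
    ratios = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(x, size=x.size, replace=True)
        yb = rng.choice(y, size=y.size, replace=True)
        ratios[b] = np.var(xb, ddof=1) / np.var(yb, ddof=1)
    return np.percentile(ratios, [100 * alpha / 2, 100 * (1 - alpha / 2)])

x = rng.normal(0, 1.0, size=60)          # normal sample
y = rng.standard_t(df=3, size=60)        # heavy-tailed (leptokurtic) sample
lo, hi = bootstrap_var_ratio_ci(x, y)
print(f"95% bootstrap CI for var(x)/var(y): ({lo:.2f}, {hi:.2f})")
```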

  15. Determining Sample Sizes for Precise Contrast Analysis with Heterogeneous Variances

    ERIC Educational Resources Information Center

    Jan, Show-Li; Shieh, Gwowen

    2014-01-01

    The analysis of variance (ANOVA) is one of the most frequently used statistical analyses in practical applications. Accordingly, the single and multiple comparison procedures are frequently applied to assess the differences among mean effects. However, the underlying assumption of homogeneous variances may not always be tenable. This study…

  16. Relating the Hadamard Variance to MCS Kalman Filter Clock Estimation

    NASA Technical Reports Server (NTRS)

    Hutsell, Steven T.

    1996-01-01

    The Global Positioning System (GPS) Master Control Station (MCS) currently makes significant use of the Allan Variance. This two-sample variance equation has proven excellent as a handy, understandable tool, both for time domain analysis of GPS cesium frequency standards, and for fine tuning the MCS's state estimation of these atomic clocks. The Allan Variance does not explicitly converge for noise types with alpha less than or equal to minus 3 and can be greatly affected by frequency drift. Because GPS rubidium frequency standards exhibit non-trivial aging and aging noise characteristics, the basic Allan Variance analysis must be augmented in order to (a) compensate for a dynamic frequency drift, and (b) characterize two additional noise types, specifically alpha = minus 3, and alpha = minus 4. As the GPS program progresses, we will utilize a larger percentage of rubidium frequency standards than ever before. Hence, GPS rubidium clock characterization will require more attention than ever before. The three-sample variance, commonly referred to as a renormalized Hadamard Variance, is unaffected by linear frequency drift, converges for alpha greater than minus 5, and thus has utility for modeling noise in GPS rubidium frequency standards. This paper demonstrates the potential of Hadamard Variance analysis in GPS operations, and presents an equation that relates the Hadamard Variance to the MCS's Kalman filter process noises.
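
    For concreteness, the sketch below computes the basic two-sample (Allan) and three-sample (Hadamard) variances at the shortest averaging time from a toy fractional-frequency series with an artificial linear drift; the drift slope and noise level are arbitrary, and the paper's renormalization and Kalman-filter mapping are not reproduced here.

```python
import numpy as np

def allan_variance(y):
    """Two-sample (Allan) variance: 0.5 * <(y[k+1] - y[k])^2>."""
    return 0.5 * np.mean(np.diff(y) ** 2)

def hadamard_variance(y):
    """Three-sample (Hadamard) variance: <(y[k+2] - 2*y[k+1] + y[k])^2> / 6.
    The second difference cancels a linear frequency drift exactly."""
    d2 = y[2:] - 2.0 * y[1:-1] + y[:-2]
    return np.mean(d2 ** 2) / 6.0

rng = np.random.default_rng(3)
noise = rng.normal(0, 1e-12, size=10_000)           # white frequency noise (toy level)
drift = 3e-12 * np.arange(noise.size)               # artificial linear drift

print("Allan   :", allan_variance(noise), allan_variance(noise + drift))       # inflated by drift
print("Hadamard:", hadamard_variance(noise), hadamard_variance(noise + drift)) # essentially unchanged
```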

  17. Evaluation of three lidar scanning strategies for turbulence measurements

    DOE PAGES

    Newman, Jennifer F.; Klein, Petra M.; Wharton, Sonia; ...

    2016-05-03

    Several errors occur when a traditional Doppler beam swinging (DBS) or velocity–azimuth display (VAD) strategy is used to measure turbulence with a lidar. To mitigate some of these errors, a scanning strategy was recently developed which employs six beam positions to independently estimate the u, v, and w velocity variances and covariances. In order to assess the ability of these different scanning techniques to measure turbulence, a Halo scanning lidar, WindCube v2 pulsed lidar, and ZephIR continuous wave lidar were deployed at field sites in Oklahoma and Colorado with collocated sonic anemometers. Results indicate that the six-beam strategy mitigates some of the errors caused by VAD and DBS scans, but the strategy is strongly affected by errors in the variance measured at the different beam positions. The ZephIR and WindCube lidars overestimated horizontal variance values by over 60% under unstable conditions as a result of variance contamination, where additional variance components contaminate the true value of the variance. A correction method was developed for the WindCube lidar that uses variance calculated from the vertical beam position to reduce variance contamination in the u and v variance components. The correction method reduced WindCube variance estimates by over 20% at both the Oklahoma and Colorado sites under unstable conditions, when variance contamination is largest. This correction method can be easily applied to other lidars that contain a vertical beam position and is a promising method for accurately estimating turbulence with commercially available lidars.

  18. Evaluation of three lidar scanning strategies for turbulence measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newman, Jennifer F.; Klein, Petra M.; Wharton, Sonia

    Several errors occur when a traditional Doppler beam swinging (DBS) or velocity–azimuth display (VAD) strategy is used to measure turbulence with a lidar. To mitigate some of these errors, a scanning strategy was recently developed which employs six beam positions to independently estimate the u, v, and w velocity variances and covariances. In order to assess the ability of these different scanning techniques to measure turbulence, a Halo scanning lidar, WindCube v2 pulsed lidar, and ZephIR continuous wave lidar were deployed at field sites in Oklahoma and Colorado with collocated sonic anemometers. Results indicate that the six-beam strategy mitigates some of the errors caused by VAD and DBS scans, but the strategy is strongly affected by errors in the variance measured at the different beam positions. The ZephIR and WindCube lidars overestimated horizontal variance values by over 60% under unstable conditions as a result of variance contamination, where additional variance components contaminate the true value of the variance. A correction method was developed for the WindCube lidar that uses variance calculated from the vertical beam position to reduce variance contamination in the u and v variance components. The correction method reduced WindCube variance estimates by over 20% at both the Oklahoma and Colorado sites under unstable conditions, when variance contamination is largest. This correction method can be easily applied to other lidars that contain a vertical beam position and is a promising method for accurately estimating turbulence with commercially available lidars.

  19. The mass media are an important context for adolescents' sexual behavior.

    PubMed

    L'Engle, Kelly Ladin; Brown, Jane D; Kenneavy, Kristin

    2006-03-01

    This study compared influences from the mass media (television, music, movies, magazines) on adolescents' sexual intentions and behaviors to other socialization contexts, including family, religion, school, and peers. A sample of 1011 Black and White adolescents from 14 middle schools in the Southeastern United States completed linked mail surveys about their media use and in-home Audio-CASI interviews about their sexual intentions and behaviors. Analysis of the sexual content in 264 media vehicles used by respondents was also conducted. Exposure to sexual content across media, and perceived support from the media for teen sexual behavior, were the main media influence measures. Media explained 13% of the variance in intentions to initiate sexual intercourse in the near future, and 8-10% of the variance in light and heavy sexual behaviors, which was comparable to other contexts. Media influences also demonstrated significant associations with intentions and behaviors after all other factors were considered. All contextual factors, including media, explained 54% of the variance in sexual intentions and 21-33% of the variance in sexual behaviors. Adolescents who are exposed to more sexual content in the media, and who perceive greater support from the media for teen sexual behavior, report greater intentions to engage in sexual intercourse and more sexual activity. Mass media are an important context for adolescents' sexual socialization, and media influences should be considered in research and interventions with early adolescents to reduce sexual activity.

  20. An evaluation of soil sampling for 137Cs using various field-sampling volumes.

    PubMed

    Nyhan, J W; White, G C; Schofield, T G; Trujillo, G

    1983-05-01

    The sediments from a liquid effluent receiving area at the Los Alamos National Laboratory and soils from an intensive study area in the fallout pathway of Trinity were sampled for 137Cs using 25-, 500-, 2500- and 12,500-cm3 field sampling volumes. A highly replicated sampling program was used to determine mean concentrations and inventories of 137Cs at each site, as well as estimates of spatial, aliquoting, and counting variance components of the radionuclide data. The sampling methods were also analyzed as a function of soil size fractions collected in each field sampling volume and of the total cost of the program for a given variation in the radionuclide survey results. Coefficients of variation (CV) of 137Cs inventory estimates ranged from 0.063 to 0.14 for Mortandad Canyon sediments, whereas CV values for Trinity soils were observed from 0.38 to 0.57. Spatial variance components of 137Cs concentration data were usually found to be larger than either the aliquoting or counting variance estimates and were inversely related to field sampling volume at the Trinity intensive site. Subsequent optimization studies of the sampling schemes demonstrated that each aliquot should be counted once, and that only 2-4 aliquots out of as many as 30 collected need be assayed for 137Cs. The optimization studies showed that as sample costs increased to 45 man-hours of labor per sample, the variance of the mean 137Cs concentration decreased dramatically, but decreased very little with additional labor.
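
    The optimization described in the last two sentences is a standard cost-variance allocation over nested variance components. The sketch below uses placeholder variance components and labor costs (not the Mortandad Canyon or Trinity values) and searches for the combination of field samples, aliquots, and counts that minimizes the variance of the mean concentration within a fixed labor budget.

```python
import numpy as np
from itertools import product

# Placeholder variance components (concentration^2) and labor costs (person-hours per unit).
var_spatial, var_aliquot, var_count = 4.0, 0.6, 0.2
cost_sample, cost_aliquot, cost_count = 1.0, 0.25, 0.5
budget = 45.0                                   # labor budget per survey (assumed)

def mean_variance(n, a, c):
    """Variance of the site mean with n field samples, a aliquots/sample, c counts/aliquot."""
    return var_spatial / n + var_aliquot / (n * a) + var_count / (n * a * c)

def cost(n, a, c):
    return n * (cost_sample + a * (cost_aliquot + c * cost_count))

feasible = (plan for plan in product(range(1, 41), range(1, 31), range(1, 4))
            if cost(*plan) <= budget)
n, a, c = min(feasible, key=lambda plan: mean_variance(*plan))
print(f"n={n} samples, a={a} aliquots, c={c} counts  ->  "
      f"Var(mean)={mean_variance(n, a, c):.3f}, cost={cost(n, a, c):.1f} h")
```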

  1. Advanced overlay: sampling and modeling for optimized run-to-run control

    NASA Astrophysics Data System (ADS)

    Subramany, Lokesh; Chung, WoongJae; Samudrala, Pavan; Gao, Haiyong; Aung, Nyan; Gomez, Juan Manuel; Gutjahr, Karsten; Park, DongSuk; Snow, Patrick; Garcia-Medina, Miguel; Yap, Lipkong; Demirer, Onur Nihat; Pierson, Bill; Robinson, John C.

    2016-03-01

    In recent years overlay (OVL) control schemes have become more complicated in order to meet the ever shrinking margins of advanced technology nodes. As a result, this brings up new challenges to be addressed for effective run-to-run OVL control. This work addresses two of these challenges by new advanced analysis techniques: (1) sampling optimization for run-to-run control and (2) bias-variance tradeoff in modeling. The first challenge in a high order OVL control strategy is to optimize the number of measurements and the locations on the wafer, so that the "sample plan" of measurements provides high quality information about the OVL signature on the wafer with acceptable metrology throughput. We solve this tradeoff between accuracy and throughput by using a smart sampling scheme which utilizes various design-based and data-based metrics to increase model accuracy and reduce model uncertainty while avoiding wafer to wafer and within wafer measurement noise caused by metrology, scanner or process. This sort of sampling scheme, combined with an advanced field by field extrapolated modeling algorithm, helps to maximize model stability and minimize on product overlay (OPO). Second, the use of higher order overlay models means more degrees of freedom, which enables increased capability to correct for complicated overlay signatures, but also increases sensitivity to process or metrology induced noise. This is also known as the bias-variance trade-off. A high order model that minimizes the bias between the modeled and raw overlay signature on a single wafer will also have a higher variation from wafer to wafer or lot to lot, unless an advanced modeling approach is used. In this paper, we characterize the bias-variance trade-off to find the optimal scheme. The sampling and modeling solutions proposed in this study are validated by advanced process control (APC) simulations to estimate run-to-run performance, lot-to-lot and wafer-to-wafer model term monitoring to estimate stability, and ultimately high volume manufacturing tests to monitor OPO by densely measured OVL data.

  2. A partially reflecting random walk on spheres algorithm for electrical impedance tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maire, Sylvain, E-mail: maire@univ-tln.fr; Simon, Martin, E-mail: simon@math.uni-mainz.de

    2015-12-15

    In this work, we develop a probabilistic estimator for the voltage-to-current map arising in electrical impedance tomography. This novel so-called partially reflecting random walk on spheres estimator enables Monte Carlo methods to compute the voltage-to-current map in an embarrassingly parallel manner, which is an important issue with regard to the corresponding inverse problem. Our method uses the well-known random walk on spheres algorithm inside subdomains where the diffusion coefficient is constant and employs replacement techniques motivated by finite difference discretization to deal with both mixed boundary conditions and interface transmission conditions. We analyze the global bias and the variance of the new estimator both theoretically and experimentally. Subsequently, the variance of the new estimator is considerably reduced via a novel control variate conditional sampling technique which yields a highly efficient hybrid forward solver coupling probabilistic and deterministic algorithms.

  3. Biased Brownian dynamics for rate constant calculation.

    PubMed

    Zou, G; Skeel, R D; Subramaniam, S

    2000-08-01

    An enhanced sampling method, biased Brownian dynamics, is developed for the calculation of diffusion-limited biomolecular association reaction rates with high energy or entropy barriers. Biased Brownian dynamics introduces a biasing force in addition to the electrostatic force between the reactants, and it associates a probability weight with each trajectory. A simulation loses weight when movement is along the biasing force and gains weight when movement is against the biasing force. The sampling of trajectories is then biased, but the sampling is unbiased when the trajectory outcomes are multiplied by their weights. With a suitable choice of the biasing force, more reacted trajectories are sampled. As a consequence, the variance of the estimate is reduced. In our test case, biased Brownian dynamics gives a sevenfold improvement in central processing unit (CPU) time with the choice of a simple centripetal biasing force.
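
    A minimal sketch of the weighting idea, not the authors' biomolecular simulation: a 1-D walker is pushed toward an absorbing "reaction" boundary by a constant bias, and each trajectory carries the likelihood-ratio weight of its displacements so the weighted hit fraction remains an unbiased estimate of the unbiased-dynamics reaction probability. The geometry, step size, and bias strength are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

def reaction_probability(bias, n_traj=5_000, start=25.0, target=0.0, escape=30.0, step=1.0):
    """Weighted fraction of walkers that reach `target` before `escape`.

    Unbiased steps are N(0, step^2); biased steps are N(-bias, step^2), i.e. pushed
    toward the target.  Each step multiplies the trajectory weight by the density
    ratio p_unbiased(dx) / p_biased(dx), keeping the estimator unbiased."""
    hits = 0.0
    for _ in range(n_traj):
        x, log_w = start, 0.0
        while target < x < escape:
            dx = rng.normal(-bias, step)
            log_w += (-dx**2 + (dx + bias) ** 2) / (2.0 * step**2)
            x += dx
        if x <= target:
            hits += np.exp(log_w)
    return hits / n_traj

print(f"unbiased estimate : {reaction_probability(bias=0.0):.3f}")
print(f"biased estimate   : {reaction_probability(bias=0.3):.3f}  (same mean, more reacted paths)")
```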

  4. Development of composite calibration standard for quantitative NDE by ultrasound and thermography

    NASA Astrophysics Data System (ADS)

    Dayal, Vinay; Benedict, Zach G.; Bhatnagar, Nishtha; Harper, Adam G.

    2018-04-01

    Inspection of aircraft components for damage utilizing ultrasonic Non-Destructive Evaluation (NDE) is a time intensive endeavor. Additional time spent during aircraft inspections translates to added cost to the company performing them, and as such, reducing this expenditure is of great importance. There is also great variance in the calibration samples from one entity to another due to a lack of a common calibration set. By characterizing damage types, we can condense the required calibration sets and reduce the time required to perform calibration while also providing procedures for the fabrication of these standard sets. We present here our effort to fabricate composite samples with known defects and quantify the size and location of defects, such as delaminations, and impact damage. Ultrasonic and Thermographic images are digitally enhanced to accurately measure the damage size. Ultrasonic NDE is compared with thermography.

  5. The effect of covariate mean differences on the standard error and confidence interval for the comparison of treatment means.

    PubMed

    Liu, Xiaofeng Steven

    2011-05-01

    The use of covariates is commonly believed to reduce the unexplained error variance and the standard error for the comparison of treatment means, but the reduction in the standard error is neither guaranteed nor uniform over different sample sizes. The covariate mean differences between the treatment conditions can inflate the standard error of the covariate-adjusted mean difference and can actually produce a larger standard error for the adjusted mean difference than that for the unadjusted mean difference. When the covariate observations are conceived of as randomly varying from one study to another, the covariate mean differences can be related to a Hotelling's T². Using this Hotelling's T² statistic, one can always find a minimum sample size to achieve a high probability of reducing the standard error and confidence interval width for the adjusted mean difference. ©2010 The British Psychological Society.
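
    A textbook one-covariate ANCOVA expression makes the inflation mechanism concrete: Var(adjusted difference) = MSE * [1/n1 + 1/n2 + (x̄1 - x̄2)² / SSx(within)], so a covariate mean difference adds a term to the standard error even while the covariate shrinks MSE. The sketch below applies this standard formula to toy data; it is not the paper's notation or derivation.

```python
import numpy as np

def adjusted_se(y1, y2, x1, x2):
    """SE of the covariate-adjusted mean difference in a one-covariate ANCOVA."""
    n1, n2 = len(y1), len(y2)
    xc1, xc2 = x1 - x1.mean(), x2 - x2.mean()
    ssx = (xc1**2).sum() + (xc2**2).sum()                            # within-group SS of covariate
    beta = (xc1 @ (y1 - y1.mean()) + xc2 @ (y2 - y2.mean())) / ssx   # pooled within-group slope
    resid = np.concatenate([(y1 - y1.mean()) - beta * xc1,
                            (y2 - y2.mean()) - beta * xc2])
    mse = (resid**2).sum() / (n1 + n2 - 3)
    return np.sqrt(mse * (1 / n1 + 1 / n2 + (x1.mean() - x2.mean())**2 / ssx))

rng = np.random.default_rng(5)
x1, x2 = rng.normal(0.0, 1, 40), rng.normal(0.8, 1, 40)    # covariate means differ between groups
y1 = 0.6 * x1 + rng.normal(0, 1, 40)
y2 = 0.6 * x2 + 0.5 + rng.normal(0, 1, 40)
print(f"adjusted SE = {adjusted_se(y1, y2, x1, x2):.3f}")
```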

  6. Reference tissue modeling with parameter coupling: application to a study of SERT binding in HIV

    NASA Astrophysics Data System (ADS)

    Endres, Christopher J.; Hammoud, Dima A.; Pomper, Martin G.

    2011-04-01

    When applicable, it is generally preferred to evaluate positron emission tomography (PET) studies using a reference tissue-based approach as that avoids the need for invasive arterial blood sampling. However, most reference tissue methods have been shown to have a bias that is dependent on the level of tracer binding, and the variability of parameter estimates may be substantially affected by noise level. In a study of serotonin transporter (SERT) binding in HIV dementia, it was determined that applying parameter coupling to the simplified reference tissue model (SRTM) reduced the variability of parameter estimates and yielded the strongest between-group significant differences in SERT binding. The use of parameter coupling makes the application of SRTM more consistent with conventional blood input models and reduces the total number of fitted parameters, thus should yield more robust parameter estimates. Here, we provide a detailed evaluation of the application of parameter constraint and parameter coupling to [11C]DASB PET studies. Five quantitative methods, including three methods that constrain the reference tissue clearance (kr2) to a common value across regions were applied to the clinical and simulated data to compare measurement of the tracer binding potential (BPND). Compared with standard SRTM, either coupling of kr2 across regions or constraining kr2 to a first-pass estimate improved the sensitivity of SRTM to measuring a significant difference in BPND between patients and controls. Parameter coupling was particularly effective in reducing the variance of parameter estimates, which was less than 50% of the variance obtained with standard SRTM. A linear approach was also improved when constraining kr2 to a first-pass estimate, although the SRTM-based methods yielded stronger significant differences when applied to the clinical study. This work shows that parameter coupling reduces the variance of parameter estimates and may better discriminate between-group differences in specific binding.

  7. Using variance components to estimate power in a hierarchically nested sampling design improving monitoring of larval Devils Hole pupfish

    USGS Publications Warehouse

    Dzul, Maria C.; Dixon, Philip M.; Quist, Michael C.; Dinsomore, Stephen J.; Bower, Michael R.; Wilson, Kevin P.; Gaines, D. Bailey

    2013-01-01

    We used variance components to assess allocation of sampling effort in a hierarchically nested sampling design for ongoing monitoring of early life history stages of the federally endangered Devils Hole pupfish (DHP) (Cyprinodon diabolis). Sampling design for larval DHP included surveys (5 days each spring 2007–2009), events, and plots. Each survey was comprised of three counting events, where DHP larvae on nine plots were counted plot by plot. Statistical analysis of larval abundance included three components: (1) evaluation of power from various sample size combinations, (2) comparison of power in fixed and random plot designs, and (3) assessment of yearly differences in the power of the survey. Results indicated that increasing the sample size at the lowest level of sampling represented the most realistic option to increase the survey's power, fixed plot designs had greater power than random plot designs, and the power of the larval survey varied by year. This study provides an example of how monitoring efforts may benefit from coupling variance components estimation with power analysis to assess sampling design.

  8. Experimental layout, data analysis, and thresholds in ELISA testing of maize for aphid-borne viruses.

    PubMed

    Caciagli, P; Verderio, A

    2003-06-30

    Several aspects of enzyme-linked immunosorbent assay (ELISA) procedures and data analysis have been examined in an attempt to find a rapid and reliable method for discriminating between 'positive' and 'negative' results when testing a large number of samples. A layout of ELISA plates was designed to reduce uncontrolled variation and to optimize the number of negative and positive controls. A transformation using the fourth root (A^(1/4)) of the optical density readings corrected for the blank (A) stabilized the variance of most ELISA data examined. Transformed A values were used to calculate the true limits, at a set protection level, for false positives (C) and false negatives (D). Methods are discussed to reduce the number of undifferentiated samples, i.e. the samples with responses falling between C and D. The whole procedure was set up for use with an electronic spreadsheet. With the addition of a few instructions of the type 'if … then … else' in the spreadsheet, the ELISA results were obtained in the simple trichotomous form 'negative/undefined/positive'. This allowed rapid analysis of more than 1100 maize samples tested for the presence of seven aphid-borne viruses (in fact almost 8000 ELISA samples).
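
    A minimal sketch of the trichotomous call, assuming the fourth-root transform above and simple one-sided control-based limits; the paper's exact formulas for the limits C and D at a given protection level are not reproduced, and the control readings here are invented.

```python
import numpy as np
from scipy import stats

def classify_elisa(samples, neg_controls, pos_controls, protection=0.01):
    """Classify blank-corrected absorbances as negative/undefined/positive.

    Readings are fourth-root transformed (variance stabilization); C and D are
    taken here as one-sided limits from the transformed negative and positive
    controls at the chosen protection level (an assumption, not the paper's rule)."""
    t = lambda a: np.asarray(a, dtype=float) ** 0.25
    neg, pos, s = t(neg_controls), t(pos_controls), t(samples)
    C = neg.mean() + stats.t.ppf(1 - protection, len(neg) - 1) * neg.std(ddof=1)
    D = pos.mean() - stats.t.ppf(1 - protection, len(pos) - 1) * pos.std(ddof=1)
    return np.where(s <= C, "negative", np.where(s >= D, "positive", "undefined"))

print(classify_elisa(samples=[0.02, 0.10, 0.60],
                     neg_controls=[0.01, 0.02, 0.03, 0.02, 0.015, 0.025],
                     pos_controls=[0.50, 0.70, 0.65, 0.55, 0.60, 0.62]))
```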

  9. A Bayesian sequential design with adaptive randomization for 2-sided hypothesis test.

    PubMed

    Yu, Qingzhao; Zhu, Lin; Zhu, Han

    2017-11-01

    Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to more efficiently assign newly recruited patients to different treatment arms. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate chosen to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence on the design of changing the prior distributions. Simulation studies are applied to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when total sample size is fixed, the proposed design can achieve greater power and/or require a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes. The proposed method can further reduce the required sample size. Copyright © 2017 John Wiley & Sons, Ltd.
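
    The fixed-allocation analogue of the minimum-variance idea is the classical Neyman allocation; the sketch below shows only that piece with arbitrary arm standard deviations, not the paper's Bayesian updating of the randomization rate or its critical-value algorithms.

```python
import numpy as np

def neyman_allocation(sigma1, sigma2):
    """Fraction of patients assigned to arm 1 that minimizes Var(mean1 - mean2)
    for a fixed total sample size: r = sigma1 / (sigma1 + sigma2)."""
    return sigma1 / (sigma1 + sigma2)

def diff_variance(r, sigma1, sigma2, n_total):
    """Variance of the difference in arm means when a fraction r goes to arm 1."""
    return sigma1**2 / (r * n_total) + sigma2**2 / ((1 - r) * n_total)

s1, s2, n = 2.0, 1.0, 120                     # assumed arm SDs and total sample size
r_opt = neyman_allocation(s1, s2)
print(f"minimum-variance randomization rate to arm 1: {r_opt:.2f}")
print(f"Var(diff) at r = 0.50: {diff_variance(0.50, s1, s2, n):.4f}")
print(f"Var(diff) at r_opt   : {diff_variance(r_opt, s1, s2, n):.4f}")
```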

  10. Variance reduction for Fokker–Planck based particle Monte Carlo schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorji, M. Hossein, E-mail: gorjih@ifd.mavt.ethz.ch; Andric, Nemanja; Jenny, Patrick

    Recently, Fokker–Planck based particle Monte Carlo schemes have been proposed and evaluated for simulations of rarefied gas flows [1–3]. In this paper, the variance reduction for particle Monte Carlo simulations based on the Fokker–Planck model is considered. First, deviational based schemes were derived and reviewed, and it is shown that these deviational methods are not appropriate for practical Fokker–Planck based rarefied gas flow simulations. This is due to the fact that the deviational schemes considered in this study lead either to instabilities in the case of two-weight methods or to large statistical errors if the direct sampling method is applied.more » Motivated by this conclusion, we developed a novel scheme based on correlated stochastic processes. The main idea here is to synthesize an additional stochastic process with a known solution, which is simultaneously solved together with the main one. By correlating the two processes, the statistical errors can dramatically be reduced; especially for low Mach numbers. To assess the methods, homogeneous relaxation, planar Couette and lid-driven cavity flows were considered. For these test cases, it could be demonstrated that variance reduction based on parallel processes is very robust and effective.« less

  11. Improvements for retrieval of cloud droplet size by the POLDER instrument

    NASA Astrophysics Data System (ADS)

    Shang, H.; Husi, L.; Bréon, F. M.; Ma, R.; Chen, L.; Wang, Z.

    2017-12-01

    The principles of cloud droplet size retrieval via Polarization and Directionality of the Earth's Reflectance (POLDER) require that clouds be horizontally homogeneous. The retrieval is performed by combining all measurements from an area of 150 km × 150 km to compensate for POLDER's insufficient directional sampling. Using POLDER-like data simulated with the RT3 model, we investigate the impact of cloud horizontal inhomogeneity and directional sampling on the retrieval and analyze which spatial resolution is potentially accessible from the measurements. Case studies show that the sub-grid-scale variability in droplet effective radius (CDR) can significantly reduce valid retrievals and introduce small biases to the CDR (~1.5 µm) and effective variance (EV) estimates. Nevertheless, the sub-grid-scale variations in EV and cloud optical thickness (COT) only influence the EV retrievals and not the CDR estimate. In the directional sampling cases studied, the retrieval using limited observations is accurate and is largely free of random noise. Several improvements have been made to the original POLDER droplet size retrieval. For example, measurements in the primary rainbow region (137-145°) are used to ensure retrievals of large droplets (>15 µm) and to reduce the uncertainties caused by cloud heterogeneity. An optimal resolution of 0.8° is determined by considering successful retrievals and cloud horizontal homogeneity. The improved algorithm is applied to POLDER measurements from 2008, and we further compare our retrievals with cloud effective radius estimates from the Moderate Resolution Imaging Spectroradiometer (MODIS). The results indicate that, on a global scale, the cloud effective radius and effective variance are larger over the central ocean than over inland and coastal areas. Over heavily polluted regions, the cloud droplets have smaller effective radii and narrower distributions due to the influence of aerosol particles.

  12. Validation of Resource Utilization Groups version III for Home Care (RUG-III/HC): evidence from a Canadian home care jurisdiction.

    PubMed

    Poss, Jeffrey W; Hirdes, John P; Fries, Brant E; McKillop, Ian; Chase, Mary

    2008-04-01

    The case-mix system Resource Utilization Groups version III for Home Care (RUG-III/HC) was derived using a modest data sample from Michigan, but to date no comprehensive large scale validation has been done. This work examines the performance of the RUG-III/HC classification using a large sample from Ontario, Canada. Cost episodes over a 13-week period were aggregated from individual level client billing records and matched to assessment information collected using the Resident Assessment Instrument for Home Care, from which classification rules for RUG-III/HC are drawn. The dependent variable, service cost, was constructed using formal services plus informal care valued at approximately one-half that of a replacement worker. An analytic dataset of 29,921 episodes showed a skewed distribution with over 56% of cases falling into the lowest hierarchical level, reduced physical functions. Case-mix index values for formal and informal cost showed very close similarities to those found in the Michigan derivation. Explained variance for a function of combined formal and informal cost was 37.3% (20.5% for formal cost alone), with personal support services as well as informal care showing the strongest fit to the RUG-III/HC classification. RUG-III/HC validates well compared with the Michigan derivation work. Potential enhancements to the present classification should consider the large numbers of undifferentiated cases in the reduced physical function group, and the low explained variance for professional disciplines.

  13. Help-seeking intentions for early dementia diagnosis in a sample of Irish adults.

    PubMed

    Devoy, Susan; Simpson, Ellen Elizabeth Anne

    2017-08-01

    To identify factors that may increase intentions to seek help for an early dementia diagnosis. Early dementia diagnosis in Ireland is low, reducing the opportunity for intervention, which can delay progression, reduce psychological distress and increase social supports. Using the theory of planned behaviour (TPB) and a mixed methods approach, three focus groups were conducted (N = 22) to elicit attitudes and beliefs about help seeking for an early dementia diagnosis. The findings informed the development of the Help Seeking Intentions for Early Dementia Diagnosis (HSIEDD) questionnaire, which was piloted and then administered to a sample of community dwelling adults from Dublin and Kildare (N = 95). Content analysis revealed participants held knowledge of the symptoms of dementia but not about available interventions. Facilitators of help seeking were family, friends and peers alongside well informed health professionals. Barriers to seeking help were a lack of knowledge, fear, loss, stigma and inaccessible services. The quantitative findings suggest the TPB constructs account for almost 28% of the variance in intentions to seek help for an early diagnosis of dementia, after controlling for sociodemographic variables and knowledge of dementia. In the final step of the regression analysis, the main predictors of help seeking were knowledge of dementia and subjective norm, accounting for 6% and 8% of the variance, respectively. Future interventions should aim to increase awareness of the support available to those experiencing early memory problems, and should highlight the supportive role that family, friends, peers and health professionals could provide.

  14. Sample Size Calculations for Precise Interval Estimation of the Eta-Squared Effect Size

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2015-01-01

    Analysis of variance is one of the most frequently used statistical analyses in the behavioral, educational, and social sciences, and special attention has been paid to the selection and use of an appropriate effect size measure of association in analysis of variance. This article presents the sample size procedures for precise interval estimation…

  15. Bias and Precision of Measures of Association for a Fixed-Effect Multivariate Analysis of Variance Model

    ERIC Educational Resources Information Center

    Kim, Soyoung; Olejnik, Stephen

    2005-01-01

    The sampling distributions of five popular measures of association with and without two bias adjusting methods were examined for the single factor fixed-effects multivariate analysis of variance model. The number of groups, sample sizes, number of outcomes, and the strength of association were manipulated. The results indicate that all five…

  16. Reducing statistical uncertainties in simulated organ doses of phantoms immersed in water

    DOE PAGES

    Hiller, Mauritius M.; Veinot, Kenneth G.; Easterly, Clay E.; ...

    2016-08-13

    In this study, methods are addressed to reduce the computational time to compute organ-dose rate coefficients using Monte Carlo techniques. Several variance reduction techniques are compared including the reciprocity method, importance sampling, weight windows and the use of the ADVANTG software package. For low-energy photons, the runtime was reduced by a factor of 10^5 when using the reciprocity method for kerma computation for immersion of a phantom in contaminated water. This is particularly significant since impractically long simulation times are required to achieve reasonable statistical uncertainties in organ dose for low-energy photons in this source medium and geometry. Although the MCNP Monte Carlo code is used in this paper, the reciprocity technique can be used equally well with other Monte Carlo codes.

  17. Patterns and predictors of growth in divorced fathers' health status and substance use.

    PubMed

    DeGarmo, David S; Reid, John B; Leve, Leslie D; Chamberlain, Patricia; Knutson, John F

    2010-03-01

    Health status and substance use trajectories are described over 18 months for a county sample of 230 divorced fathers of young children aged 4 to 11. One third of the sample was clinically depressed. Health problems, drinking, and hard drug use were stable over time for the sample, whereas depression, smoking, and marijuana use exhibited overall mean reductions. Variance components revealed significant individual differences in average levels and trajectories for health and substance use outcomes. Controlling for fathers' antisociality, negative life events, and social support, fathering identity predicted reductions in health-related problems and marijuana use. Father involvement reduced drinking and marijuana use. Antisociality was the strongest risk factor for health and substance use outcomes. Implications for application of a generative fathering perspective in practice and preventive interventions are discussed.

  18. Structural Studies of Amorphous Materials by Fluctuation Electron Microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Treacy, Michael M. J.

    Fluctuation Electron Microscopy (FEM) is a technique that examines the fluctuations in electron scattering across a uniformly thin amorphous sample. The statistics of the intensity fluctuations, mean and variance, reveal any underlying medium-range order present in the structure. The goals of this project were: (1) To determine the fundamentals of the scattering physics that gives rise to the variance signal in fluctuation electron microscopy (FEM); (2) To use these discoveries to find ways to quantify FEM; (3) To apply the FEM method to interesting and technologically important families of amorphous materials, particularly those with important applications in energy-related processes. Excellent progress was made in items (1) and (2). In stage (3) we did not examine the metamict zircons, as proposed. Instead, we examined films of polycrystalline and amorphous semi-conducting diamond. Significant accomplishments are: (1) A Reverse Monte Carlo procedure was successfully implemented to invert FEM data into a structural model. This is computer-intensive, but it demonstrated that diffraction and FEM data from amorphous silicon are most consistent with a paracrystallite model. This means that there is more diamond-like topology present in amorphous silicon than is predicted by the continuous random network model. (2) There is significant displacement decoherence arising in diffraction from amorphous silicon and carbon. The samples are being bombarded by the electron beam and atoms do not stay still while being irradiated – much more than was formerly understood. The atom motions cause the destructive and constructive interferences in the diffraction pattern to fluctuate with time, and it is the time-averaged speckle that is being measured. The variance is reduced by a factor m, 4 ≤ m ≤ 1000, relative to that predicted by kinematical scattering theory. (3) Speckle intensity obeys a gamma distribution, where the mean intensity Ī and m are the two parameters governing the shape of the gamma distribution profile. m is determined by the illumination spatial coherence, which is normally very high, and mostly by the displacement decoherence within the sample. (4) Amorphous materials are more affected by the electron beam than are crystalline materials. Different samples exhibit different disruptibility, as measured by the effective values of m that fit the data. (5) Understanding the origin of the displacement decoherence better should lead to efficient methods for computing the observed variance from amorphous materials.

  19. Kruskal-Wallis test: BASIC computer program to perform nonparametric one-way analysis of variance and multiple comparisons on ranks of several independent samples.

    PubMed

    Theodorsson-Norheim, E

    1986-08-01

    Multiple t tests at a fixed p level are frequently used to analyse biomedical data where analysis of variance followed by multiple comparisons, or adjustment of the p values according to Bonferroni, would be more appropriate. The Kruskal-Wallis test is a nonparametric 'analysis of variance' which may be used to compare several independent samples. The present program is written in an elementary subset of BASIC and will perform the Kruskal-Wallis test followed by multiple comparisons between the groups on practically any computer programmable in BASIC.
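
    A present-day equivalent of that workflow, sketched in Python rather than BASIC: scipy's Kruskal-Wallis test followed by Bonferroni-adjusted pairwise rank comparisons on toy data. The pairwise Mann-Whitney follow-up is a common substitute and may differ from the rank-based multiple-comparison procedure implemented in the original program.

```python
import numpy as np
from itertools import combinations
from scipy import stats

# Three independent toy samples.
groups = {
    "A": np.array([3.1, 2.8, 3.6, 3.3, 2.9]),
    "B": np.array([4.0, 4.4, 3.9, 4.7, 4.1]),
    "C": np.array([3.0, 3.2, 2.7, 3.1, 2.6]),
}

H, p = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {H:.2f}, p = {p:.4f}")

# Bonferroni-adjusted pairwise comparisons as the follow-up.
pairs = list(combinations(groups, 2))
for a, b in pairs:
    _, p_pair = stats.mannwhitneyu(groups[a], groups[b], alternative="two-sided")
    print(f"{a} vs {b}: Bonferroni-adjusted p = {min(1.0, p_pair * len(pairs)):.4f}")
```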

  20. Applications of GARCH models to energy commodities

    NASA Astrophysics Data System (ADS)

    Humphreys, H. Brett

    This thesis uses GARCH methods to examine different aspects of the energy markets. The first part of the thesis examines seasonality in the variance. This study modifies the standard univariate GARCH models to test for seasonal components in both the constant and the persistence in natural gas, heating oil and soybeans. These commodities exhibit seasonal price movements and, therefore, may exhibit seasonal variances. In addition, the heating oil model is tested for a structural change in variance during the Gulf War. The results indicate the presence of an annual seasonal component in the persistence for all commodities. Out-of-sample volatility forecasting for natural gas outperforms standard forecasts. The second part of this thesis uses a multivariate GARCH model to examine volatility spillovers within the crude oil forward curve and between the London and New York crude oil futures markets. Using these results the effect of spillovers on dynamic hedging is examined. In addition, this research examines cointegration within the oil markets using investable returns rather than fixed prices. The results indicate the presence of strong volatility spillovers between both markets, weak spillovers from the front of the forward curve to the rest of the curve, and cointegration between the long term oil price on the two markets. The spillover dynamic hedge models lead to a marginal benefit in terms of variance reduction, but a substantial decrease in the variability of the dynamic hedge; thereby decreasing the transactions costs associated with the hedge. The final portion of the thesis uses portfolio theory to demonstrate how the energy mix consumed in the United States could be chosen given a national goal to reduce the risks to the domestic macroeconomy of unanticipated energy price shocks. An efficient portfolio frontier of U.S. energy consumption is constructed using a covariance matrix estimated with GARCH models. The results indicate that while the electric utility industry is operating close to the minimum variance position, a shift towards coal consumption would reduce price volatility for overall U.S. energy consumption. With the inclusion of potential externality costs, the shift remains away from oil but towards natural gas instead of coal.
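
    As background for the univariate part of the discussion, the sketch below runs a plain GARCH(1,1) conditional-variance recursion on toy returns with a volatile stretch in the middle; the parameter values are arbitrary, and the seasonal, structural-break, and multivariate specifications estimated in the thesis are not implemented here.

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """GARCH(1,1) filter: sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1].
    A seasonal variant would let omega (and/or the persistence alpha + beta)
    vary with the calendar month instead of staying constant."""
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty(r.size)
    sigma2[0] = r.var()                         # simple initialization
    for t in range(1, r.size):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

rng = np.random.default_rng(7)
rets = np.concatenate([rng.normal(0, 0.01, 300),     # calm stretch
                       rng.normal(0, 0.03, 100),     # volatile stretch
                       rng.normal(0, 0.01, 300)])
sig2 = garch11_variance(rets, omega=1e-6, alpha=0.08, beta=0.90)
print("mean conditional variance, calm vs volatile stretch:",
      sig2[:300].mean(), sig2[300:400].mean())
```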

  1. Control algorithms for dynamic attenuators.

    PubMed

    Hsieh, Scott S; Pelc, Norbert J

    2014-06-01

    The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. Also, the authors develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward projected DICOM files of a thorax and an abdomen. The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current modulation) without increasing peak variance. The 15-element piecewise-linear dynamic attenuator reduces dose by an average of 42%, and the perfect attenuator reduces dose by an average of 50%. Improvements in peak variance are several times larger than improvements in mean variance. Heuristic control eliminates the need for a prescan. For the piecewise-linear attenuator, the cost of heuristic control is an increase in dose of 9%. The proposed iterated WMV minimization produces results that are within a few percent of the true solution. Dynamic attenuators show potential for significant dose reduction. A wide class of dynamic attenuators can be accurately controlled using the described methods.

  2. Portfolio optimization with skewness and kurtosis

    NASA Astrophysics Data System (ADS)

    Lam, Weng Hoe; Jaaman, Saiful Hafizah Hj.; Isa, Zaidi

    2013-04-01

    Mean and variance of return distributions are two important parameters of the mean-variance model in portfolio optimization. However, the mean-variance model will become inadequate if the returns of assets are not normally distributed. Therefore, higher moments such as skewness and kurtosis cannot be ignored. Risk averse investors prefer portfolios with high skewness and low kurtosis so that the probability of getting negative rates of return will be reduced. The objective of this study is to compare the portfolio compositions as well as performances between the mean-variance model and mean-variance-skewness-kurtosis model by using the polynomial goal programming approach. The results show that the incorporation of skewness and kurtosis will change the optimal portfolio compositions. The mean-variance-skewness-kurtosis model outperforms the mean-variance model because the mean-variance-skewness-kurtosis model takes skewness and kurtosis into consideration. Therefore, the mean-variance-skewness-kurtosis model is more appropriate for the investors of Malaysia in portfolio optimization.
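
    The moment inputs to that comparison are straightforward to compute; the sketch below evaluates the mean, variance, skewness, and excess kurtosis of a portfolio's return series for toy weights and simulated asset returns. The polynomial goal programming step that trades these moments off against one another is not shown.

```python
import numpy as np
from scipy import stats

def portfolio_moments(weights, returns):
    """Mean, variance, skewness and excess kurtosis of the portfolio return series.
    weights: asset weights summing to 1; returns: T x N matrix of asset returns."""
    rp = np.asarray(returns) @ np.asarray(weights)
    return rp.mean(), rp.var(ddof=1), stats.skew(rp), stats.kurtosis(rp)

rng = np.random.default_rng(8)
R = np.column_stack([rng.normal(0.0010, 0.010, 1000),
                     rng.normal(0.0008, 0.012, 1000),
                     0.01 * rng.standard_t(4, 1000)])     # one heavy-tailed asset
for w in ([1/3, 1/3, 1/3], [0.5, 0.5, 0.0]):
    m, v, s, k = portfolio_moments(w, R)
    print(f"w={w}: mean={m:.5f} var={v:.6f} skew={s:.2f} excess kurtosis={k:.2f}")
```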

  3. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    PubMed

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

    While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our objective was to guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
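
    A minimal sketch of the estimator and its precision under stated assumptions (arbitrary M, P, survey size, and design effect, not the Harare figures): the population size is M/P, and the delta method propagates the design-effect-inflated variance of P into a confidence interval, which widens sharply when P is small.

```python
import numpy as np

def multiplier_estimate(M, p_hat, n, design_effect=2.0, z=1.96):
    """Population size N = M / P with a delta-method confidence interval.

    M             : unique objects distributed (or service users counted)
    p_hat, n      : proportion reporting receipt in the survey, and survey size
    design_effect : assumed RDS design effect inflating Var(p_hat)"""
    var_p = design_effect * p_hat * (1 - p_hat) / n
    N = M / p_hat
    se_N = M * np.sqrt(var_p) / p_hat**2          # delta method: |dN/dP| = M / P^2
    return N, (N - z * se_N, N + z * se_N)

N, (lo, hi) = multiplier_estimate(M=4000, p_hat=0.15, n=350)
print(f"N ≈ {N:,.0f}, 95% CI ≈ ({lo:,.0f}, {hi:,.0f})")
```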

  4. Gender Differences in Variance and Means on the Naglieri Non-Verbal Ability Test: Data from the Philippines

    ERIC Educational Resources Information Center

    Vista, Alvin; Care, Esther

    2011-01-01

    Background: Research on gender differences in intelligence has focused mostly on samples from Western countries and empirical evidence on gender differences from Southeast Asia is relatively sparse. Aims: This article presents results on gender differences in variance and means on a non-verbal intelligence test using a national sample of public…

  5. Sampling intraspecific variability in leaf functional traits: Practical suggestions to maximize collected information.

    PubMed

    Petruzzellis, Francesco; Palandrani, Chiara; Savi, Tadeja; Alberti, Roberto; Nardini, Andrea; Bacaro, Giovanni

    2017-12-01

    The choice of the best sampling strategy to capture mean values of functional traits for a species/population, while maintaining information about traits' variability and minimizing the sampling size and effort, is an open issue in functional trait ecology. Intraspecific variability (ITV) of functional traits strongly influences sampling size and effort. However, while adequate information is available about intraspecific variability between individuals (ITV_BI) and among populations (ITV_POP), relatively few studies have analyzed intraspecific variability within individuals (ITV_WI). Here, we provide an analysis of ITV_WI of two foliar traits, namely specific leaf area (SLA) and osmotic potential (π), in a population of Quercus ilex L. We assessed the baseline ITV_WI level of variation between the two traits and provided the minimum and optimal sampling size in order to take into account ITV_WI, comparing sampling optimization outputs with those previously proposed in the literature. Different factors accounted for different amounts of variance of the two traits. SLA variance was mostly spread within individuals (43.4% of the total variance), while π variance was mainly spread between individuals (43.2%). Strategies that did not account for all the canopy strata produced mean values not representative of the sampled population. The minimum size to adequately capture the studied functional traits corresponded to 5 leaves taken randomly from 5 individuals, while the most accurate and feasible sampling size was 4 leaves taken randomly from 10 individuals. We demonstrate that the spatial structure of the canopy could significantly affect trait variability. Moreover, different strategies for different traits could be implemented during sampling surveys. We partially confirm sampling sizes previously proposed in the recent literature and encourage future analyses involving different traits.
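
    A one-way sketch of the within- versus between-individual partition for a single trait, using simulated leaves on simulated trees rather than the Quercus ilex measurements; the study's fuller design (canopy strata, multiple traits) would call for a nested mixed model instead.

```python
import numpy as np
import pandas as pd

def variance_components(df, individual="tree", trait="sla"):
    """ANOVA estimates of between-individual and within-individual (leaf) variance."""
    groups = df.groupby(individual)[trait]
    k, n_i = groups.ngroups, groups.size().to_numpy()
    n, grand = n_i.sum(), df[trait].mean()
    ms_between = (n_i * (groups.mean().to_numpy() - grand) ** 2).sum() / (k - 1)
    ms_within = ((df[trait] - groups.transform("mean")) ** 2).sum() / (n - k)
    n0 = (n - (n_i**2).sum() / n) / (k - 1)               # effective leaves per tree
    var_between = max(0.0, (ms_between - ms_within) / n0)
    return var_between, ms_within

# Toy data: 10 trees, 5 leaves each, tree-effect SD 2, leaf-level SD 1.5 (assumed).
rng = np.random.default_rng(6)
trees = np.repeat(np.arange(10), 5)
sla = 15 + rng.normal(0, 2.0, 10)[trees] + rng.normal(0, 1.5, trees.size)
df = pd.DataFrame({"tree": trees, "sla": sla})
vb, vw = variance_components(df)
print(f"between-individual: {vb:.2f}   within-individual: {vw:.2f}   "
      f"ITV_WI share: {vw / (vb + vw):.0%}")
```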

  6. Ex Post Facto Monte Carlo Variance Reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Booth, Thomas E.

    The variance in Monte Carlo particle transport calculations is often dominated by a few particles whose importance increases manyfold on a single transport step. This paper describes a novel variance reduction method that uses a large importance change as a trigger to resample the offending transport step. That is, the method is employed only after (ex post facto) a random walk attempts a transport step that would otherwise introduce a large variance in the calculation. Improvements in two Monte Carlo transport calculations are demonstrated empirically using an ex post facto method. First, the method is shown to reduce the variance in a penetration problem with a cross-section window. Second, the method empirically appears to modify a point detector estimator from an infinite variance estimator to a finite variance estimator.

  7. Discrete velocity computations with stochastic variance reduction of the Boltzmann equation for gas mixtures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clarke, Peter; Varghese, Philip; Goldstein, David

    We extend a variance reduced discrete velocity method developed at UT Austin [1, 2] to gas mixtures with large mass ratios and flows with trace species. The mixture is stored as a collection of independent velocity distribution functions, each with a unique grid in velocity space. Different collision types (A-A, A-B, B-B, etc.) are treated independently, and the variance reduction scheme is formulated with different equilibrium functions for each separate collision type. The individual treatment of species enables increased focus on species important to the physics of the flow, even if the important species are present in trace amounts. The method is verified through comparisons to Direct Simulation Monte Carlo computations, and the computational workload per time step is investigated for the variance reduced method.

  8. How patients think about social responsibility of public hospitals in China?

    PubMed

    Liu, Wenbin; Shi, Lizheng; Pong, Raymond W; Chen, Yingyao

    2016-08-11

    Hospital social responsibility is receiving increasing attention, especially in China where major changes to the healthcare system have taken place. This study examines how patients viewed hospital social responsibility in China and explores the factors that influenced patients' perceptions of hospital social responsibility. A cross-sectional survey was conducted, using a structured questionnaire, on a sample of 5385 patients from 48 public hospitals in three regions of China: Shanghai, Hainan, and Shaanxi. A multilevel regression model was employed to examine factors influencing patients' assessments of hospital social responsibility. Intra-class correlation coefficients (ICCs) were calculated to estimate the proportion of variance in the dependent variables determined at the hospital level. The scores for service quality, appropriateness, accessibility, and professional ethics were positively associated with patients' assessments of hospital social responsibility. Older outpatients tended to give lower assessments, while inpatients in larger hospitals scored higher. After adjusting for the independent variables, the ICC rose from 0.182 to 0.313 for inpatients and from 0.162 to 0.263 for outpatients. The variance at the patient level was reduced by 51.5 % and 48.6 % for inpatients and outpatients, respectively, and the variance at the hospital level was reduced by 16.7 % for both groups. Some hospital and patient characteristics, together with perceptions of service quality, appropriateness, accessibility, and professional ethics, were associated with patients' assessments of public hospital social responsibility. The differences were mainly determined at the patient level. More attention to law-abiding behaviors, cost-effective health services, and charitable works could improve perceptions of hospitals' adherence to social responsibility.

  9. Improving estimates of genetic maps: a meta-analysis-based approach.

    PubMed

    Stewart, William C L

    2007-07-01

    Inaccurate genetic (or linkage) maps can reduce the power to detect linkage, increase type I error, and distort haplotype and relationship inference. To improve the accuracy of existing maps, I propose a meta-analysis-based method that combines independent map estimates into a single estimate of the linkage map. The method uses the variance of each independent map estimate to combine them efficiently, whether the map estimates use the same set of markers or not. As compared with a joint analysis of the pooled genotype data, the proposed method is attractive for three reasons: (1) it has comparable efficiency to the maximum likelihood map estimate when the pooled data are homogeneous; (2) relative to existing map estimation methods, it can have increased efficiency when the pooled data are heterogeneous; and (3) it avoids the practical difficulties of pooling human subjects data. On the basis of simulated data modeled after two real data sets, the proposed method can reduce the sampling variation of linkage maps commonly used in whole-genome linkage scans. Furthermore, when the independent map estimates are also maximum likelihood estimates, the proposed method performs as well as or better than when they are estimated by the program CRIMAP. Since variance estimates of maps may not always be available, I demonstrate the feasibility of three different variance estimators. Overall, the method should prove useful to investigators who need map positions for markers not contained in publicly available maps, and to those who wish to minimize the negative effects of inaccurate maps. Copyright 2007 Wiley-Liss, Inc.
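
    The inverse-variance combination the abstract relies on can be written in a few lines; the positions and variances below are made-up numbers standing in for independent estimates of the same map location, not data from the paper.

```python
# Fixed-effect, inverse-variance pooling of independent map estimates.
import numpy as np

positions = np.array([52.1, 53.4, 51.8])   # cM estimates from three studies
variances = np.array([0.40, 0.90, 0.25])   # variance of each independent estimate

weights = 1.0 / variances
combined = np.sum(weights * positions) / np.sum(weights)
combined_var = 1.0 / np.sum(weights)        # never larger than the smallest input variance

print(f"combined position {combined:.2f} cM, variance {combined_var:.3f}")
```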

  10. Smoothed Spectra, Ogives, and Error Estimates for Atmospheric Turbulence Data

    NASA Astrophysics Data System (ADS)

    Dias, Nelson Luís

    2018-01-01

    A systematic evaluation is conducted of the smoothed spectrum, which is a spectral estimate obtained by averaging over a window of contiguous frequencies. The technique is extended to the ogive, as well as to the cross-spectrum. It is shown that, combined with existing variance estimates for the periodogram, the variance—and therefore the random error—associated with these estimates can be calculated in a straightforward way. The smoothed spectra and ogives are biased estimates; with simple power-law analytical models, correction procedures are devised, as well as a global constraint that enforces Parseval's identity. Several new results are thus obtained: (1) The analytical variance estimates compare well with the sample variance calculated for the Bartlett spectrum and the variance of the inertial subrange of the cospectrum is shown to be relatively much larger than that of the spectrum. (2) Ogives and spectra estimates with reduced bias are calculated. (3) The bias of the smoothed spectrum and ogive is shown to be negligible at the higher frequencies. (4) The ogives and spectra thus calculated have better frequency resolution than the Bartlett spectrum, with (5) gradually increasing variance and relative error towards the low frequencies. (6) Power-law identification and extraction of the rate of dissipation of turbulence kinetic energy are possible directly from the ogive. (7) The smoothed cross-spectrum is a valid inner product and therefore an acceptable candidate for coherence and spectral correlation coefficient estimation by means of the Cauchy-Schwarz inequality. The quadrature, phase function, coherence function and spectral correlation function obtained from the smoothed spectral estimates compare well with the classical ones derived from the Bartlett spectrum.
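
    A minimal sketch of the frequency-smoothing idea is shown below: the raw periodogram is averaged over blocks of contiguous frequencies, which divides its variance by roughly the block length at the cost of frequency resolution. The AR(1) surrogate series and the window length are assumptions, not the paper's turbulence data.

```python
# Smoothed spectral estimate: block-average the periodogram over m adjacent frequencies.
import numpy as np

rng = np.random.default_rng(0)
n, phi = 4096, 0.9
x = np.zeros(n)
for t in range(1, n):                      # AR(1) surrogate signal
    x[t] = phi * x[t - 1] + rng.standard_normal()

freqs = np.fft.rfftfreq(n, d=1.0)
pgram = np.abs(np.fft.rfft(x - x.mean())) ** 2 / n

m = 16                                     # frequencies averaged per smoothed estimate
nb = pgram.size // m
smoothed = pgram[: nb * m].reshape(nb, m).mean(axis=1)
smoothed_freqs = freqs[: nb * m].reshape(nb, m).mean(axis=1)

print(f"raw bins {pgram.size}, smoothed bins {smoothed.size}, "
      f"approx. relative random error {1 / np.sqrt(m):.2f}")
```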

  11. A step towards removing plasma volume variance from the Athlete's Biological Passport: The use of biomarkers to describe vascular volumes from a simple blood test.

    PubMed

    Lobigs, Louisa M; Sottas, Pierre-Edouard; Bourdon, Pitre C; Nikolovski, Zoran; El-Gingo, Mohamed; Varamenti, Evdokia; Peeling, Peter; Dawson, Brian; Schumacher, Yorck O

    2018-02-01

    The haematological module of the Athlete's Biological Passport (ABP) has significantly impacted the prevalence of blood manipulations in elite sports. However, the ABP relies on a number of concentration-based markers of erythropoiesis, such as haemoglobin concentration ([Hb]), which are influenced by shifts in plasma volume (PV). Fluctuations in PV contribute to the majority of biological variance associated with volumetric ABP markers. Our laboratory recently identified a panel of common chemistry markers (from a simple blood test) capable of describing ca 67% of PV variance, presenting an applicable method to account for volume shifts within anti-doping practices. Here, this novel PV marker was included into the ABP adaptive model. Over a six-month period (one test per month), 33 healthy, active males provided blood samples and performed the CO-rebreathing method to record PV (control). In the final month participants performed a single maximal exercise effort to promote a PV shift (mean PV decrease -17%, 95% CI -9.75 to -18.13%). Applying the ABP adaptive model, individualized reference limits for [Hb] and the OFF-score were created, with and without the PV correction. With the PV correction, an average of 66% of [Hb] within-subject variance is explained, narrowing the predicted reference limits, and reducing the number of atypical ABP findings post-exercise. Despite an increase in sensitivity there was no observed loss of specificity with the addition of the PV correction. The novel PV marker presented here has the potential to improve the ABP's rate of correct doping detection by removing the confounding effects of PV variance. Copyright © 2017 John Wiley & Sons, Ltd.

  12. Genetic parameters of legendre polynomials for first parity lactation curves.

    PubMed

    Pool, M H; Janss, L L; Meuwissen, T H

    2000-11-01

    Variance components of the covariance function coefficients in a random regression test-day model were estimated by Legendre polynomials up to a fifth order for first-parity records of Dutch dairy cows using Gibbs sampling. Two Legendre polynomials of equal order were used to model the random part of the lactation curve, one for the genetic component and one for permanent environment. Test-day records from cows registered between 1990 to 1996 and collected by regular milk recording were available. For the data set, 23,700 complete lactations were selected from 475 herds sired by 262 sires. Because the application of a random regression model is limited by computing capacity, we investigated the minimum order needed to fit the variance structure in the data sufficiently. Predictions of genetic and permanent environmental variance structures were compared with bivariate estimates on 30-d intervals. A third-order or higher polynomial modeled the shape of variance curves over DIM with sufficient accuracy for the genetic and permanent environment part. Also, the genetic correlation structure was fitted with sufficient accuracy by a third-order polynomial, but, for the permanent environmental component, a fourth order was needed. Because equal orders are suggested in the literature, a fourth-order Legendre polynomial is recommended in this study. However, a rank of three for the genetic covariance matrix and of four for permanent environment allows a simpler covariance function with a reduced number of parameters based on the eigenvalues and eigenvectors.

  13. Fast content-based image retrieval using dynamic cluster tree

    NASA Astrophysics Data System (ADS)

    Chen, Jinyan; Sun, Jizhou; Wu, Rongteng; Zhang, Yaping

    2008-03-01

    A novel content-based image retrieval data structure is developed in the present work. It can improve searching efficiency significantly. All images are organized into a tree, in which every node is comprised of images with similar features. Images in a child node have more similarity (less variance) among themselves than those in its parent node. This means that every node is a cluster and each of its child nodes is a sub-cluster. Information contained in a node includes not only the number of images, but also the center and the variance of these images. Upon the addition of new images, the tree structure is capable of changing dynamically to ensure the minimization of the total variance of the tree. Subsequently, a heuristic method has been designed to retrieve information from this tree. Given a sample image, the probability that a tree node contains similar images is computed using the center of the node and its variance. If the probability is higher than a certain threshold, the node is recursively checked to locate the similar images, as are its child nodes if their probabilities also exceed that threshold. If not enough similar images are found, a reduced threshold value is adopted to initiate a new search from the root node. The search terminates when sufficient similar images have been found or when the threshold value is too low to be meaningful. Experiments have shown that the proposed dynamic cluster tree is able to improve searching efficiency notably.

  14. Weighting by Inverse Variance or by Sample Size in Random-Effects Meta-Analysis

    ERIC Educational Resources Information Center

    Marin-Martinez, Fulgencio; Sanchez-Meca, Julio

    2010-01-01

    Most of the statistical procedures in meta-analysis are based on the estimation of average effect sizes from a set of primary studies. The optimal weight for averaging a set of independent effect sizes is the inverse variance of each effect size, but in practice these weights have to be estimated, being affected by sampling error. When assuming a…
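
    A small sketch contrasting the two weighting schemes named in the title is given below; the effect sizes, variances, and sample sizes are invented for illustration.

```python
# Averaging independent effect sizes: inverse-variance weights vs. sample-size weights.
import numpy as np

d = np.array([0.30, 0.55, 0.10, 0.42])      # study effect sizes
v = np.array([0.020, 0.060, 0.015, 0.045])  # estimated sampling variances
n = np.array([210, 70, 260, 95])            # total sample sizes

w_inv = 1.0 / v
w_n = n.astype(float)

mean_inv = np.sum(w_inv * d) / np.sum(w_inv)
mean_n = np.sum(w_n * d) / np.sum(w_n)

print(f"inverse-variance weighted mean {mean_inv:.3f} (SE {np.sqrt(1 / np.sum(w_inv)):.3f})")
print(f"sample-size weighted mean      {mean_n:.3f}")
```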

  15. Design Effects and Generalized Variance Functions for the 1990-91 Schools and Staffing Survey (SASS). Volume II. Technical Report.

    ERIC Educational Resources Information Center

    Salvucci, Sameena; And Others

    This technical report provides the results of a study on the calculation and use of generalized variance functions (GVFs) and design effects for the 1990-91 Schools and Staffing Survey (SASS). The SASS is a periodic integrated system of sample surveys conducted by the National Center for Education Statistics (NCES) that produces sampling variances…

  16. Positive couple interactions and daily cortisol: on the stress-protecting role of intimacy.

    PubMed

    Ditzen, Beate; Hoppmann, Christiane; Klumb, Petra

    2008-10-01

    To determine whether intimacy might be associated with reduced daily salivary cortisol levels in couples, thereby adding to the epidemiologic literature on reduced health burden in happy couples. A total of 51 dual-earner couples reported time spent on intimacy, stated their current affect quality, and provided saliva samples for cortisol estimation approximately every 3 hours in a 1-week time-sampling assessment. In addition, participants provided data on chronic problems of work organization. Multilevel analyses revealed that intimacy was significantly associated with reduced daily salivary cortisol levels. There was an interaction effect of intimacy with chronic problems of work organization in terms of their relationship with cortisol levels, suggesting a buffering effect of intimacy on work-related elevated cortisol levels. Above this, the association between intimacy and cortisol was mediated by positive affect. Intimacy and affect together explained 7% of daily salivary cortisol variance. Our results are in line with previous studies on the effect of intimacy on cortisol stress responses in the laboratory as well as with epidemiologic data on health beneficial effects of happy marital relationships.

  17. Hard choices in assessing survival past dams — a comparison of single- and paired-release strategies

    USGS Publications Warehouse

    Zydlewski, Joseph D.; Stich, Daniel S.; Sigourney, Douglas B.

    2017-01-01

    Mark–recapture models are widely used to estimate survival of salmon smolts migrating past dams. Paired releases have been used to improve estimate accuracy by removing components of mortality not attributable to the dam. This method is accompanied by reduced precision because (i) sample size is reduced relative to a single, large release; and (ii) variance calculations inflate error. We modeled an idealized system with a single dam to assess trade-offs between accuracy and precision and compared methods using root mean squared error (RMSE). Simulations were run under predefined conditions (dam mortality, background mortality, detection probability, and sample size) to determine scenarios when the paired release was preferable to a single release. We demonstrate that a paired-release design provides a theoretical advantage over a single-release design only at large sample sizes and high probabilities of detection. At release numbers typical of many survival studies, paired release can result in overestimation of dam survival. Failures to meet model assumptions of a paired release may result in further overestimation of dam-related survival. Under most conditions, a single-release strategy was preferable.
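
    The accuracy-versus-precision trade-off described above can be reproduced with a toy simulation like the one below. The survival rates, detection probability, and release numbers are illustrative assumptions, and the estimators are deliberately simplified relative to a full mark-recapture model.

```python
# Toy comparison of single-release and paired-release dam-survival estimators.
import numpy as np

rng = np.random.default_rng(42)
S_dam, S_bg, p_det = 0.90, 0.95, 0.95   # dam survival, background survival, detection
n_rel, n_sims = 200, 20000

# Single release of n_rel fish above the dam: attributes all mortality to the dam.
det_single = rng.binomial(n_rel, S_dam * S_bg * p_det, n_sims)
est_single = det_single / (n_rel * p_det)

# Paired release: half above the dam, half below as a control; the ratio cancels
# background mortality and detection, but each group is smaller.
half = n_rel // 2
det_trt = rng.binomial(half, S_dam * S_bg * p_det, n_sims)
det_ctl = np.maximum(rng.binomial(half, S_bg * p_det, n_sims), 1)  # avoid divide-by-zero
est_paired = det_trt / det_ctl

def rmse(est):
    return np.sqrt(np.mean((est - S_dam) ** 2))

print(f"single release RMSE {rmse(est_single):.3f} (biased low by background mortality)")
print(f"paired release RMSE {rmse(est_paired):.3f} (approximately unbiased, higher variance)")
```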

  18. Identifiability and Performance Analysis of Output Over-sampling Approach to Direct Closed-loop Identification

    NASA Astrophysics Data System (ADS)

    Sun, Lianming; Sano, Akira

    The output over-sampling based closed-loop identification algorithm is investigated in this paper. Some intrinsic properties of the continuous stochastic noise and of the plant input and output in the over-sampling approach are analyzed, and they are used to demonstrate identifiability in the over-sampling approach and to evaluate its identification performance. Furthermore, the selection of the plant model order, the asymptotic variance of the estimated parameters, and the asymptotic variance of the frequency response of the estimated model are also explored. It is shown that the over-sampling approach can guarantee identifiability and greatly improve the performance of closed-loop identification.

  19. Is the Web as good as the lab? Comparable performance from Web and lab in cognitive/perceptual experiments.

    PubMed

    Germine, Laura; Nakayama, Ken; Duchaine, Bradley C; Chabris, Christopher F; Chatterjee, Garga; Wilmer, Jeremy B

    2012-10-01

    With the increasing sophistication and ubiquity of the Internet, behavioral research is on the cusp of a revolution that will do for population sampling what the computer did for stimulus control and measurement. It remains a common assumption, however, that data from self-selected Web samples must involve a trade-off between participant numbers and data quality. Concerns about data quality are heightened for performance-based cognitive and perceptual measures, particularly those that are timed or that involve complex stimuli. In experiments run with uncompensated, anonymous participants whose motivation for participation is unknown, reduced conscientiousness or lack of focus could produce results that would be difficult to interpret due to decreased overall performance, increased variability of performance, or increased measurement noise. Here, we addressed the question of data quality across a range of cognitive and perceptual tests. For three key performance metrics-mean performance, performance variance, and internal reliability-the results from self-selected Web samples did not differ systematically from those obtained from traditionally recruited and/or lab-tested samples. These findings demonstrate that collecting data from uncompensated, anonymous, unsupervised, self-selected participants need not reduce data quality, even for demanding cognitive and perceptual experiments.

  20. Patterns and Predictors of Growth in Divorced Fathers' Health Status and Substance Use

    PubMed Central

    DeGarmo, David S.; Reid, John B.; Leve, Leslie D.; Chamberlain, Patricia; Knutson, John F.

    2009-01-01

    Health status and substance use trajectories are described over 18 months for a county sample of 230 divorced fathers of young children aged 4 to 11. One third of the sample was clinically depressed. Health problems, drinking, and hard drug use were stable over time for the sample, whereas depression, smoking, and marijuana use exhibited overall mean reductions. Variance components revealed significant individual differences in average levels and trajectories for health and substance use outcomes. Controlling for fathers' antisociality, negative life events, and social support, fathering identity predicted reductions in health-related problems and marijuana use. Father involvement reduced drinking and marijuana use. Antisociality was the strongest risk factor for health and substance use outcomes. Implications for application of a generative fathering perspective in practice and preventive interventions are discussed. PMID:19477763

  1. [Physicochemical and microbiological evaluation of 3 commercial guava jams (Psidium guajava L.)].

    PubMed

    López, R; Ramírez, A O; Graziani de Fariñas, L

    2000-09-01

    Four different production batches were taken from each brand. Samples were purchased from retail markets in Maracay, Cagua, and Turmero (Venezuela). The average physical and chemical values were: vacuum = 38.81 cm Hg; pH = 3.28; titratable acidity (% citric acid) = 0.59%; degrees Brix = 67.24; reducing sugars = 55.28%; total sugars = 62.28%; and the color parameters a = +14.44, b = +8.77, and L = 17.09. Mold, yeast, and aerobic plate counts were lower than 10 CFU/g, indicating excellent microbiological quality of the product. The degrees Brix and acidity of the studied jams fulfil the COVENIN (1) requirements for jam products, but the pH falls outside the required range. According to the analysis of variance, there were highly significant differences between the brands and among the batches of each brand for all physical and chemical properties evaluated.

  2. Age-specific survival of male golden-cheeked warblers on the Fort Hood Military Reservation, Texas

    USGS Publications Warehouse

    Duarte, Adam; Hines, James E.; Nichols, James D.; Hatfield, Jeffrey S.; Weckerly, Floyd W.

    2014-01-01

    Population models are essential components of large-scale conservation and management plans for the federally endangered Golden-cheeked Warbler (Setophaga chrysoparia; hereafter GCWA). However, existing models are based on vital rate estimates calculated using relatively small data sets that are now more than a decade old. We estimated more current, precise adult and juvenile apparent survival (Φ) probabilities and their associated variances for male GCWAs. In addition to providing estimates for use in population modeling, we tested hypotheses about spatial and temporal variation in Φ. We assessed whether a linear trend in Φ or a change in the overall mean Φ corresponded to an observed increase in GCWA abundance during 1992-2000 and if Φ varied among study plots. To accomplish these objectives, we analyzed long-term GCWA capture-resight data from 1992 through 2011, collected across seven study plots on the Fort Hood Military Reservation using a Cormack-Jolly-Seber model structure within program MARK. We also estimated Φ process and sampling variances using a variance-components approach. Our results did not provide evidence of site-specific variation in adult Φ on the installation. Because of a lack of data, we could not assess whether juvenile Φ varied spatially. We did not detect a strong temporal association between GCWA abundance and Φ. Mean estimates of Φ for adult and juvenile male GCWAs for all years analyzed were 0.47 with a process variance of 0.0120 and a sampling variance of 0.0113 and 0.28 with a process variance of 0.0076 and a sampling variance of 0.0149, respectively. Although juvenile Φ did not differ greatly from previous estimates, our adult Φ estimate suggests previous GCWA population models were overly optimistic with respect to adult survival. These updated Φ probabilities and their associated variances will be incorporated into new population models to assist with GCWA conservation decision making.

  3. Iterative raw measurements restoration method with penalized weighted least squares approach for low-dose CT

    NASA Astrophysics Data System (ADS)

    Takahashi, Hisashi; Goto, Taiga; Hirokawa, Koichi; Miyazaki, Osamu

    2014-03-01

    Statistical iterative reconstruction and post-log data restoration algorithms for CT noise reduction have been widely studied, and these techniques have enabled us to reduce irradiation doses while maintaining image quality. In low-dose scanning, electronic noise becomes significant and results in some non-positive signals in the raw measurements. A non-positive signal must be converted to a positive signal so that it can be log-transformed. Since conventional conversion methods do not consider the local variance on the sinogram, they have difficulty controlling the strength of the filtering. Thus, in this work, we propose a method to convert the non-positive signal to a positive signal mainly by controlling the local variance. The method is implemented in two separate steps. First, an iterative restoration algorithm based on penalized weighted least squares is used to mitigate the effect of electronic noise. The algorithm preserves the local mean and reduces the local variance induced by the electronic noise. Second, raw measurements smoothed by the iterative algorithm are converted to positive signals according to a function that replaces each non-positive signal with its local mean. In phantom studies, we confirm that the proposed method properly preserves the local mean and reduces the variance induced by the electronic noise. Our technique results in dramatically reduced shading artifacts and can also successfully cooperate with the post-log data filter to reduce streak artifacts.

  4. Accounting for nonsampling error in estimates of HIV epidemic trends from antenatal clinic sentinel surveillance

    PubMed Central

    Eaton, Jeffrey W.; Bao, Le

    2017-01-01

    Objectives The aim of the study was to propose and demonstrate an approach to allow additional nonsampling uncertainty about HIV prevalence measured at antenatal clinic sentinel surveillance (ANC-SS) in model-based inferences about trends in HIV incidence and prevalence. Design Mathematical model fitted to surveillance data with Bayesian inference. Methods We introduce a variance inflation parameter σ²infl that accounts for the uncertainty of nonsampling errors in ANC-SS prevalence. It is additive to the sampling error variance. Three approaches are tested for estimating σ²infl using ANC-SS and household survey data from 40 subnational regions in nine countries in sub-Saharan Africa, as defined in UNAIDS 2016 estimates. Methods were compared using in-sample fit and out-of-sample prediction of ANC-SS data, fit to household survey prevalence data, and the computational implications. Results Introducing the additional variance parameter σ²infl increased the error variance around ANC-SS prevalence observations by a median of 2.7 times (interquartile range 1.9-3.8). Using only sampling error in ANC-SS prevalence (σ²infl = 0), coverage of 95% prediction intervals was 69% in out-of-sample prediction tests. This increased to 90% after introducing the additional variance parameter σ²infl. The revised probabilistic model improved model fit to household survey prevalence and increased epidemic uncertainty intervals most during the early epidemic period before 2005. Estimating σ²infl did not increase the computational cost of model fitting. Conclusions We recommend estimating nonsampling error in ANC-SS as an additional parameter in Bayesian inference using the Estimation and Projection Package model. This approach may prove useful for incorporating other data sources, such as routine prevalence from prevention of mother-to-child transmission testing, into future epidemic estimates. PMID:28296801
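
    The core of the variance-inflation idea is small enough to show directly: the uncertainty attached to each surveillance observation becomes the sampling variance plus the nonsampling term. The numbers below are illustrative, and the sketch works on the prevalence scale rather than the transformed scale an actual EPP fit would use.

```python
# Widening a prevalence interval with an additional nonsampling variance term.
import math

prev = 0.18            # observed ANC-SS prevalence
se_sampling = 0.015    # sampling standard error from the survey design
sigma2_infl = 0.0006   # assumed nonsampling error variance

def interval(p, var, z=1.96):
    half = z * math.sqrt(var)
    return round(p - half, 3), round(p + half, 3)

print("sampling error only:", interval(prev, se_sampling ** 2))
print("with inflation term:", interval(prev, se_sampling ** 2 + sigma2_infl))
```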

  5. Interlaboratory comparability, bias, and precision for four laboratories measuring constituents in precipitation, November 1982-August 1983

    USGS Publications Warehouse

    Brooks, M.H.; Schroder, L.J.; Malo, B.A.

    1985-01-01

    Four laboratories were evaluated in their analysis of identical natural and simulated precipitation water samples. Interlaboratory comparability was evaluated using analysis of variance coupled with Duncan's multiple range test, and linear-regression models describing the relations between individual laboratory analytical results for natural precipitation samples. Results of the statistical analyses indicate that certain pairs of laboratories produce different results when analyzing identical samples. Analyte bias for each laboratory was examined using analysis of variance coupled with Duncan's multiple range test on data produced by the laboratories from the analysis of identical simulated precipitation samples. Bias for a given analyte produced by a single laboratory is indicated when the laboratory mean for that analyte is significantly different from the mean of the most probable analyte concentrations in the simulated precipitation samples. Ion-chromatographic methods for the determination of chloride, nitrate, and sulfate were compared with the colorimetric methods that were also in use during the study period. Comparisons were made using analysis of variance coupled with Duncan's multiple range test for means produced by the two methods. Analyte precision for each laboratory was estimated by calculating a pooled variance for each analyte. Estimated analyte precisions were compared using F-tests, and differences in analyte precisions for laboratory pairs are reported. (USGS)
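
    The precision comparison described above can be sketched as an F-test on the ratio of two laboratories' replicate variances for one analyte. The replicate values below are invented, and a real analysis would pool variances across several samples as the abstract describes.

```python
# F-test comparing the precision (replicate variance) of two laboratories.
import numpy as np
from scipy import stats

lab_a = np.array([2.10, 2.14, 2.08, 2.12, 2.11, 2.09, 2.13, 2.10])  # sulfate, mg/L
lab_b = np.array([2.05, 2.21, 2.02, 2.18, 2.26, 1.98, 2.16, 2.07])

var_a, var_b = lab_a.var(ddof=1), lab_b.var(ddof=1)
f_stat = max(var_a, var_b) / min(var_a, var_b)
df = len(lab_a) - 1
p_value = 2 * (1 - stats.f.cdf(f_stat, df, df))   # two-sided test of equal precision

print(f"var A {var_a:.5f}, var B {var_b:.5f}, F = {f_stat:.2f}, p = {p_value:.3f}")
```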

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hiller, Mauritius M.; Veinot, Kenneth G.; Easterly, Clay E.

    In this study, methods are addressed to reduce the computational time needed to compute organ-dose rate coefficients using Monte Carlo techniques. Several variance reduction techniques are compared, including the reciprocity method, importance sampling, weight windows, and the use of the ADVANTG software package. For low-energy photons, the runtime was reduced by a factor of 10^5 when using the reciprocity method for kerma computation for immersion of a phantom in contaminated water. This is particularly significant since impractically long simulation times are required to achieve reasonable statistical uncertainties in organ dose for low-energy photons in this source medium and geometry. Although the MCNP Monte Carlo code is used in this paper, the reciprocity technique can be used equally well with other Monte Carlo codes.

  7. Effects of low sampling rate in the digital data-transition tracking loop

    NASA Technical Reports Server (NTRS)

    Mileant, A.; Million, S.; Hinedi, S.

    1994-01-01

    This article describes the performance of the all-digital data-transition tracking loop (DTTL) with coherent and noncoherent sampling using nonlinear theory. The effects of few samples per symbol and of noncommensurate sampling and symbol rates are addressed and analyzed. Their impact on the probability density and variance of the phase error are quantified through computer simulations. It is shown that the performance of the all-digital DTTL approaches its analog counterpart when the sampling and symbol rates are noncommensurate (i.e., the number of samples per symbol is an irrational number). The loop signal-to-noise ratio (SNR) (inverse of phase error variance) degrades when the number of samples per symbol is an odd integer but degrades even further for even integers.

  8. The Influence of trisomy 21 on facial form and variability.

    PubMed

    Starbuck, John M; Cole, Theodore M; Reeves, Roger H; Richtsmeier, Joan T

    2017-11-01

    Triplication of chromosome 21 (trisomy 21) results in Down syndrome (DS), the most common live-born human aneuploidy. Individuals with DS have a unique facial appearance that can include form changes and altered variability. Using 3D photogrammetric images, 3D coordinate locations of 20 anatomical landmarks, and Euclidean Distance Matrix Analysis methods, we quantitatively test the hypothesis that children with DS (n = 55) exhibit facial form and variance differences relative to two different age-matched (4-12 years) control samples of euploid individuals: biological siblings of individuals with DS (n = 55) and euploid individuals without a sibling with DS (n = 55). Approximately 36% of measurements differ significantly between DS and DS-sibling samples, whereas 46% differ significantly between DS and unrelated control samples. Nearly 14% of measurements differ significantly in variance between DS and DS-sibling samples, while 18% of measurements differ significantly in variance between DS and unrelated euploid control samples. Of those measures that showed a significant difference in variance, all were relatively increased in the sample of DS individuals. These results indicate that faces of children with DS are quantitatively more similar to their siblings than to unrelated euploid individuals and exhibit consistent, but slightly increased, variation, with most individuals falling within the range of normal variation established by euploid samples. These observations provide indirect evidence of the strength of the genetic underpinnings of the resemblance between relatives and the resistance of craniofacial development to genetic perturbations caused by trisomy 21, while underscoring the complexity of the genotype-phenotype map. © 2017 Wiley Periodicals, Inc.

  9. Power and Sample Size Calculations for Testing Linear Combinations of Group Means under Variance Heterogeneity with Applications to Meta and Moderation Analyses

    ERIC Educational Resources Information Center

    Shieh, Gwowen; Jan, Show-Li

    2015-01-01

    The general formulation of a linear combination of population means permits a wide range of research questions to be tested within the context of ANOVA. However, it has been stressed in many research areas that the homogeneous variances assumption is frequently violated. To accommodate the heterogeneity of variance structure, the…

  10. Genomic Analysis of Complex Microbial Communities in Wounds

    DTIC Science & Technology

    2012-01-01

    thoroughly in the ecology literature. Permutation Multivariate Analysis of Variance (PerMANOVA). We used PerMANOVA to test the null-hypothesis of no...difference between the bacterial communities found within a single wound compared to those from different patients (α = 0.05). PerMANOVA is a...permutation-based version of the multivariate analysis of variance (MANOVA). PerMANOVA uses the distances between samples to partition variance and

  11. Control algorithms for dynamic attenuators

    PubMed Central

    Hsieh, Scott S.; Pelc, Norbert J.

    2014-01-01

    Purpose: The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. Methods: The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. Also, the authors develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward projected DICOM files of a thorax and an abdomen. Results: The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current modulation) without increasing peak variance. The 15-element piecewise-linear dynamic attenuator reduces dose by an average of 42%, and the perfect attenuator reduces dose by an average of 50%. Improvements in peak variance are several times larger than improvements in mean variance. Heuristic control eliminates the need for a prescan. For the piecewise-linear attenuator, the cost of heuristic control is an increase in dose of 9%. The proposed iterated WMV minimization produces results that are within a few percent of the true solution. Conclusions: Dynamic attenuators show potential for significant dose reduction. A wide class of dynamic attenuators can be accurately controlled using the described methods. PMID:24877818

  12. A comparison of maximum likelihood and other estimators of eigenvalues from several correlated Monte Carlo samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beer, M.

    1980-12-01

    The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription, and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results, are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.

  13. Assessing differential gene expression with small sample sizes in oligonucleotide arrays using a mean-variance model.

    PubMed

    Hu, Jianhua; Wright, Fred A

    2007-03-01

    The identification of the genes that are differentially expressed in two-sample microarray experiments remains a difficult problem when the number of arrays is very small. We discuss the implications of using ordinary t-statistics and examine other commonly used variants. For oligonucleotide arrays with multiple probes per gene, we introduce a simple model relating the mean and variance of expression, possibly with gene-specific random effects. Parameter estimates from the model have natural shrinkage properties that guard against inappropriately small variance estimates, and the model is used to obtain a differential expression statistic. A limiting value to the positive false discovery rate (pFDR) for ordinary t-tests provides motivation for our use of the data structure to improve variance estimates. Our approach performs well compared to other proposed approaches in terms of the false discovery rate.

  14. Internet Gaming Disorder Explains Unique Variance in Psychological Distress and Disability After Controlling for Comorbid Depression, OCD, ADHD, and Anxiety.

    PubMed

    Pearcy, Benjamin T D; McEvoy, Peter M; Roberts, Lynne D

    2017-02-01

    This study extends knowledge about the relationship of Internet Gaming Disorder (IGD) to other established mental disorders by exploring comorbidities with anxiety, depression, Attention Deficit Hyperactivity Disorder (ADHD), and obsessive compulsive disorder (OCD), and assessing whether IGD accounts for unique variance in distress and disability. An online survey was completed by a convenience sample that engages in Internet gaming (N = 404). Participants meeting criteria for IGD based on the Personal Internet Gaming Disorder Evaluation-9 (PIE-9) reported higher comorbidity with depression, OCD, ADHD, and anxiety compared with those who did not meet the IGD criteria. IGD explained a small proportion of unique variance in distress (1%) and disability (3%). IGD accounted for a larger proportion of unique variance in disability than anxiety and ADHD, and a similar proportion to depression. Replications with clinical samples using longitudinal designs and structured diagnostic interviews are required.

  15. Structural and incremental validity of the Wechsler Adult Intelligence Scale-Fourth Edition with a clinical sample.

    PubMed

    Nelson, Jason M; Canivez, Gary L; Watkins, Marley W

    2013-06-01

    Structural and incremental validity of the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV; Wechsler, 2008a) was examined with a sample of 300 individuals referred for evaluation at a university-based clinic. Confirmatory factor analysis indicated that the WAIS-IV structure was best represented by 4 first-order factors as well as a general intelligence factor in a direct hierarchical model. The general intelligence factor accounted for the most common and total variance among the subtests. Incremental validity analyses indicated that the Full Scale IQ (FSIQ) generally accounted for medium to large portions of academic achievement variance. For all measures of academic achievement, the first-order factors combined accounted for significant achievement variance beyond that accounted for by the FSIQ, but individual factor index scores contributed trivial amounts of achievement variance. Implications for interpreting WAIS-IV results are discussed. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  16. Estimation variance bounds of importance sampling simulations in digital communication systems

    NASA Technical Reports Server (NTRS)

    Lu, D.; Yao, K.

    1991-01-01

    In practical applications of importance sampling (IS) simulation, two basic problems are encountered, that of determining the estimation variance and that of evaluating the proper IS parameters needed in the simulations. The authors derive new upper and lower bounds on the estimation variance which are applicable to IS techniques. The upper bound is simple to evaluate and may be minimized by the proper selection of the IS parameter. Thus, lower and upper bounds on the improvement ratio of various IS techniques relative to the direct Monte Carlo simulation are also available. These bounds are shown to be useful and computationally simple to obtain. Based on the proposed technique, one can readily find practical suboptimum IS parameters. Numerical results indicate that these bounding techniques are useful for IS simulations of linear and nonlinear communication systems with intersymbol interference in which bit error rate and IS estimation variances cannot be obtained readily using prior techniques.
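
    A minimal importance-sampling example helps make the variance question concrete: the variance of the IS estimator depends strongly on the biasing parameter, here the mean shift of the proposal density. The Gaussian tail problem and the shift values are assumptions for illustration and are not the bounding technique of the abstract itself.

```python
# Importance sampling of a Gaussian tail probability with a mean-shifted proposal.
import numpy as np

rng = np.random.default_rng(0)
threshold, n = 4.0, 100_000                 # estimate P(Z > 4), Z ~ N(0,1)

def is_estimate(shift):
    """Return (estimate, variance of the estimator) for a N(shift, 1) proposal."""
    z = rng.normal(shift, 1.0, n)
    lr = np.exp(-shift * z + 0.5 * shift ** 2)   # likelihood ratio N(0,1)/N(shift,1)
    contrib = (z > threshold) * lr
    return contrib.mean(), contrib.var(ddof=1) / n

for shift in (0.0, 2.0, 4.0):               # shift 0 is plain Monte Carlo
    est, var = is_estimate(shift)
    print(f"shift {shift:.0f}: estimate {est:.2e}, estimator variance {var:.2e}")
```

    Placing the proposal mean near the threshold minimizes the variance of the weighted indicator, which is the kind of parameter choice the bounds discussed in the abstract are meant to guide.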

  17. A fractional factorial probabilistic collocation method for uncertainty propagation of hydrologic model parameters in a reduced dimensional space

    NASA Astrophysics Data System (ADS)

    Wang, S.; Huang, G. H.; Huang, W.; Fan, Y. R.; Li, Z.

    2015-10-01

    In this study, a fractional factorial probabilistic collocation method is proposed to reveal statistical significance of hydrologic model parameters and their multi-level interactions affecting model outputs, facilitating uncertainty propagation in a reduced dimensional space. The proposed methodology is applied to the Xiangxi River watershed in China to demonstrate its validity and applicability, as well as its capability of revealing complex and dynamic parameter interactions. A set of reduced polynomial chaos expansions (PCEs) only with statistically significant terms can be obtained based on the results of factorial analysis of variance (ANOVA), achieving a reduction of uncertainty in hydrologic predictions. The predictive performance of reduced PCEs is verified by comparing against standard PCEs and the Monte Carlo with Latin hypercube sampling (MC-LHS) method in terms of reliability, sharpness, and Nash-Sutcliffe efficiency (NSE). Results reveal that the reduced PCEs are able to capture hydrologic behaviors of the Xiangxi River watershed, and they are efficient functional representations for propagating uncertainties in hydrologic predictions.

  18. Improving speech-in-noise recognition for children with hearing loss: Potential effects of language abilities, binaural summation, and head shadow

    PubMed Central

    Nittrouer, Susan; Caldwell-Tarr, Amanda; Tarr, Eric; Lowenstein, Joanna H.; Rice, Caitlin; Moberly, Aaron C.

    2014-01-01

    Objective: This study examined speech recognition in noise for children with hearing loss, compared it to recognition for children with normal hearing, and examined mechanisms that might explain variance in children’s abilities to recognize speech in noise. Design: Word recognition was measured in two levels of noise, both when the speech and noise were co-located in front and when the noise came separately from one side. Four mechanisms were examined as factors possibly explaining variance: vocabulary knowledge, sensitivity to phonological structure, binaural summation, and head shadow. Study sample: Participants were 113 eight-year-old children. Forty-eight had normal hearing (NH) and 65 had hearing loss: 18 with hearing aids (HAs), 19 with one cochlear implant (CI), and 28 with two CIs. Results: Phonological sensitivity explained a significant amount of between-groups variance in speech-in-noise recognition. Little evidence of binaural summation was found. Head shadow was similar in magnitude for children with NH and with CIs, regardless of whether they wore one or two CIs. Children with HAs showed reduced head shadow effects. Conclusion: These outcomes suggest that in order to improve speech-in-noise recognition for children with hearing loss, intervention needs to be comprehensive, focusing on both language abilities and auditory mechanisms. PMID:23834373

  19. Perceived parental child rearing and attachment as predictors of anxiety and depressive disorder symptoms in children: The mediational role of attachment.

    PubMed

    Chorot, Paloma; Valiente, Rosa M; Magaz, Ana M; Santed, Miguel A; Sandin, Bonifacio

    2017-07-01

    The present study aimed to examine (a) the relative contribution of perceived parental child-rearing behaviors and attachment to anxiety and depressive symptoms, and (b) the role of attachment as a possible mediator of the association between parental rearing and anxiety and depression. A sample of 1002 children (aged 9-12 years) completed a booklet of self-report questionnaires measuring parental rearing behaviors, attachment towards peers, and DSM anxiety and depressive disorder symptoms. We found that parental aversiveness, parental neglect, and fearful/preoccupied attachment each accounted for a significant amount of the variance in both anxiety and depressive symptoms. In addition, parental overcontrol was found to account for unique variance in anxiety, whereas communication/warmth accounted for a significant proportion of the variance in depression. Notably, fearful/preoccupied attachment mediated the association between parental rearing behaviors and both anxiety and depression. Parental rearing behaviors and attachment to peers may act as risk factors for the development and/or maintenance of anxiety and depressive symptomatology in children. These findings may help in outlining preventive and/or treatment programs to reduce clinical anxiety and depression during childhood. Copyright © 2017. Published by Elsevier B.V.

  20. Overcoming confounded controls in the analysis of gene expression data from microarray experiments.

    PubMed

    Bhattacharya, Soumyaroop; Long, Dang; Lyons-Weiler, James

    2003-01-01

    A potential limitation of data from microarray experiments exists when improper control samples are used. In cancer research, comparisons of tumour expression profiles to those from normal samples are challenging due to tissue heterogeneity (mixed cell populations). A specific example exists in a published colon cancer dataset, in which tissue heterogeneity was reported among the normal samples. In this paper, we show how to overcome or avoid the problem of using normal samples that do not derive from the same tissue of origin as the tumour. We advocate an exploratory unsupervised bootstrap analysis that can reveal unexpected and undesired, but strongly supported, clusters of samples that reflect tissue differences instead of tumour versus normal differences. All of the algorithms used in the analysis, including the maximum difference subset algorithm, unsupervised bootstrap analysis, the pooled variance t-test for finding differentially expressed genes, and the jackknife to reduce false positives, are incorporated into our online Gene Expression Data Analyzer (http://bioinformatics.upmc.edu/GE2/GEDA.html).

  1. Representativeness of laboratory sampling procedures for the analysis of trace metals in soil.

    PubMed

    Dubé, Jean-Sébastien; Boudreault, Jean-Philippe; Bost, Régis; Sona, Mirela; Duhaime, François; Éthier, Yannic

    2015-08-01

    This study was conducted to assess the representativeness of laboratory sampling protocols for purposes of trace metal analysis in soil. Five laboratory protocols were compared, including conventional grab sampling, to assess the influence of sectorial splitting, sieving, and grinding on measured trace metal concentrations and their variability. It was concluded that grinding was the most important factor in controlling the variability of trace metal concentrations. Grinding increased the reproducibility of sample mass reduction by rotary sectorial splitting by up to two orders of magnitude. Combined with rotary sectorial splitting, grinding increased the reproducibility of trace metal concentrations by almost three orders of magnitude compared to grab sampling. Moreover, results showed that if grinding is used as part of a mass reduction protocol by sectorial splitting, the effect of sieving on reproducibility became insignificant. Gy's sampling theory and practice was also used to analyze the aforementioned sampling protocols. While the theoretical relative variances calculated for each sampling protocol qualitatively agreed with the experimental variances, their quantitative agreement was very poor. It was assumed that the parameters used in the calculation of theoretical sampling variances may not correctly estimate the constitutional heterogeneity of soils or soil-like materials. Finally, the results have highlighted the pitfalls of grab sampling, namely, the fact that it does not exert control over incorrect sampling errors and that it is strongly affected by distribution heterogeneity.

  2. A covariance correction that accounts for correlation estimation to improve finite-sample inference with generalized estimating equations: A study on its applicability with structured correlation matrices

    PubMed Central

    Westgate, Philip M.

    2016-01-01

    When generalized estimating equations (GEE) incorporate an unstructured working correlation matrix, the variances of regression parameter estimates can inflate due to the estimation of the correlation parameters. In previous work, an approximation for this inflation that results in a corrected version of the sandwich formula for the covariance matrix of regression parameter estimates was derived. Use of this correction for correlation structure selection also reduces the over-selection of the unstructured working correlation matrix. In this manuscript, we conduct a simulation study to demonstrate that an increase in variances of regression parameter estimates can occur when GEE incorporates structured working correlation matrices as well. Correspondingly, we show the ability of the corrected version of the sandwich formula to improve the validity of inference and correlation structure selection. We also study the relative influences of two popular corrections to a different source of bias in the empirical sandwich covariance estimator. PMID:27818539

  4. Fully moderated T-statistic for small sample size gene expression arrays.

    PubMed

    Yu, Lianbo; Gulati, Parul; Fernandez, Soledad; Pennell, Michael; Kirschner, Lawrence; Jarjoura, David

    2011-09-15

    Gene expression microarray experiments with few replications lead to great variability in estimates of gene variances. Several Bayesian methods have been developed to reduce this variability and to increase power. Thus far, moderated t methods assumed a constant coefficient of variation (CV) for the gene variances. We provide evidence against this assumption, and extend the method by allowing the CV to vary with gene expression. Our CV varying method, which we refer to as the fully moderated t-statistic, was compared to three other methods (ordinary t, and two moderated t predecessors). A simulation study and a familiar spike-in data set were used to assess the performance of the testing methods. The results showed that our CV varying method had higher power than the other three methods, identified a greater number of true positives in spike-in data, fit simulated data under varying assumptions very well, and in a real data set better identified higher expressing genes that were consistent with functional pathways associated with the experiments.
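
    The general moderated-t idea that this paper extends can be sketched in a few lines: each gene's variance is shrunk toward a prior value before forming the test statistic. The prior degrees of freedom d0 and prior variance s0sq are assumed constants here, whereas the fully moderated t above lets the prior depend on expression level.

```python
# Moderated t-statistics for a two-group comparison with shrunken gene variances.
import numpy as np

rng = np.random.default_rng(1)
n_genes, n_per_group = 1000, 3
group1 = rng.normal(0.0, 1.0, (n_genes, n_per_group))
group2 = rng.normal(0.0, 1.0, (n_genes, n_per_group))

diff = group1.mean(axis=1) - group2.mean(axis=1)
s2 = (group1.var(axis=1, ddof=1) + group2.var(axis=1, ddof=1)) / 2.0  # pooled variance
d = 2 * (n_per_group - 1)                   # residual degrees of freedom per gene

d0, s0sq = 4.0, float(np.median(s2))        # assumed prior df and prior variance
s2_mod = (d0 * s0sq + d * s2) / (d0 + d)    # shrink each gene's variance toward the prior

t_mod = diff / np.sqrt(s2_mod * (2.0 / n_per_group))   # moderated t, df = d + d0
print(f"smallest raw variance {s2.min():.4f} -> moderated {s2_mod[s2.argmin()]:.4f}")
```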

  5. Validation study of the Questionnaire on School Maladjustment Problems (QSMP).

    PubMed

    de la Fuente Arias, Jesús; Peralta Sánchez, Francisco Javier; Sánchez Roda, María Dolores; Trianes Torres, María Victoria

    2012-05-01

    The aim of this study was to analyze the exploratory and confirmatory structure, as well as other psychometric properties, of the Cuestionario de Problemas de Convivencia Escolar (CPCE; in Spanish, the Questionnaire on School Maladjustment Problems [QSMP]), using a sample of Spanish adolescents. The instrument was administered to 60 secondary education teachers (53.4% females and 46.6% males) between the ages of 28 and 54 years (M= 41.2, SD= 11.5), who evaluated a total of 857 adolescent students. The first-order exploratory factor analysis identified 7 factors, explaining a total variance of 62%. A second-order factor analysis yielded three dimensions that explain 84% of the variance. A confirmatory factor analysis was subsequently performed in order to reduce the number of factors obtained in the exploratory analysis as well as the number of items. Lastly, we present the results of reliability, internal consistency, and validity indices. These results and their implications for future research and for the practice of educational guidance and intervention are discussed in the conclusions.

  6. Improved classification accuracy in 1- and 2-dimensional NMR metabolomics data using the variance stabilising generalised logarithm transformation

    PubMed Central

    Parsons, Helen M; Ludwig, Christian; Günther, Ulrich L; Viant, Mark R

    2007-01-01

    Background Classifying nuclear magnetic resonance (NMR) spectra is a crucial step in many metabolomics experiments. Since several multivariate classification techniques depend upon the variance of the data, it is important to first minimise any contribution from unwanted technical variance arising from sample preparation and analytical measurements, and thereby maximise any contribution from wanted biological variance between different classes. The generalised logarithm (glog) transform was developed to stabilise the variance in DNA microarray datasets, but has rarely been applied to metabolomics data. In particular, it has not been rigorously evaluated against other scaling techniques used in metabolomics, nor tested on all forms of NMR spectra including 1-dimensional (1D) 1H, projections of 2D 1H, 1H J-resolved (pJRES), and intact 2D J-resolved (JRES). Results Here, the effects of the glog transform are compared against two commonly used variance stabilising techniques, autoscaling and Pareto scaling, as well as unscaled data. The four methods are evaluated in terms of the effects on the variance of NMR metabolomics data and on the classification accuracy following multivariate analysis, the latter achieved using principal component analysis followed by linear discriminant analysis. For two of three datasets analysed, classification accuracies were highest following glog transformation: 100% accuracy for discriminating 1D NMR spectra of hypoxic and normoxic invertebrate muscle, and 100% accuracy for discriminating 2D JRES spectra of fish livers sampled from two rivers. For the third dataset, pJRES spectra of urine from two breeds of dog, the glog transform and autoscaling achieved equal highest accuracies. Additionally we extended the glog algorithm to effectively suppress noise, which proved critical for the analysis of 2D JRES spectra. Conclusion We have demonstrated that the glog and extended glog transforms stabilise the technical variance in NMR metabolomics datasets. This significantly improves the discrimination between sample classes and has resulted in higher classification accuracies compared to unscaled, autoscaled or Pareto scaled data. Additionally we have confirmed the broad applicability of the glog approach using three disparate datasets from different biological samples using 1D NMR spectra, 1D projections of 2D JRES spectra, and intact 2D JRES spectra. PMID:17605789
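
    For reference, one widely used parameterization of the generalised logarithm (following Durbin and Rocke) is glog(x) = log(x + sqrt(x^2 + lambda)); the sketch below assumes a fixed lambda, whereas in practice it is estimated from replicate spectra.

```python
# Generalised log (glog) transform for variance stabilisation.
import numpy as np

def glog(x, lam):
    """Behaves like log(2x) for large x and stays finite (and smooth) near zero."""
    return np.log(x + np.sqrt(x ** 2 + lam))

intensities = np.array([0.0, 0.5, 5.0, 50.0, 5000.0])
print(glog(intensities, lam=1.0))
```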

  7. Quantizing and sampling considerations in digital phased-locked loops

    NASA Technical Reports Server (NTRS)

    Hurst, G. T.; Gupta, S. C.

    1974-01-01

    The quantizer problem is first considered. The conditions under which the uniform white sequence model for the quantizer error is valid are established independent of the sampling rate. An equivalent spectral density is defined for the quantizer error resulting in an effective SNR value. This effective SNR may be used to determine quantized performance from infinitely fine quantized results. Attention is given to sampling rate considerations. Sampling rate characteristics of the digital phase-locked loop (DPLL) structure are investigated for the infinitely fine quantized system. The predicted phase error variance equation is examined as a function of the sampling rate. Simulation results are presented and a method is described which enables the minimum required sampling rate to be determined from the predicted phase error variance equations.

  8. Determining a one-tailed upper limit for future sample relative reproducibility standard deviations.

    PubMed

    McClure, Foster D; Lee, Jung K

    2006-01-01

    A formula was developed to determine a one-tailed 100p% upper limit for future sample percent relative reproducibility standard deviations (RSD_R,% = 100·s_R/ȳ), where s_R is the sample reproducibility standard deviation, defined as the square root of the sum of the sample repeatability variance (s_r²) and the sample laboratory-to-laboratory variance (s_L²), i.e., s_R = sqrt(s_r² + s_L²), and ȳ is the sample mean. The future RSD_R,% is expected to arise from a population of potential RSD_R,% values whose true mean is ζ_R,% = 100·σ_R/μ, where σ_R and μ are the population reproducibility standard deviation and mean, respectively.
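
    For concreteness, the reproducibility quantities defined above can be computed from a collaborative-study layout as in the Python sketch below; the data and the ANOVA-style split into repeatability and laboratory components are invented for illustration, and the paper's one-tailed upper-limit formula itself is not reproduced here.

      import numpy as np

      # toy collaborative-study data: rows = laboratories, columns = replicate analyses
      y = np.array([[4.9, 5.1, 5.0],
                    [5.4, 5.6, 5.5],
                    [4.7, 4.8, 4.6]])

      n_lab, n_rep = y.shape
      s_r2 = y.var(axis=1, ddof=1).mean()          # repeatability variance s_r^2
      var_means = y.mean(axis=1).var(ddof=1)       # variance of laboratory means
      s_L2 = max(var_means - s_r2 / n_rep, 0.0)    # laboratory-to-laboratory variance s_L^2
      s_R = np.sqrt(s_r2 + s_L2)                   # reproducibility standard deviation
      rsd_R = 100 * s_R / y.mean()                 # RSD_R,% relative to the sample mean
      print(f"s_R = {s_R:.3f}, RSD_R% = {rsd_R:.1f}")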

  9. Viscoelastic characterization of soft biological materials

    NASA Astrophysics Data System (ADS)

    Nayar, Vinod Timothy

    Progressive and irreversible retinal diseases are among the primary causes of blindness in the United States, attacking the cells in the eye that transform environmental light into neural signals for the optic pathway. Medical implants designed to restore visual function to afflicted patients can cause mechanical stress and ultimately damage to the host tissues. Research shows that an accurate understanding of the mechanical properties of the biological tissues can reduce damage and lead to designs with improved safety and efficacy. Prior studies on the mechanical properties of biological tissues show characterization of these materials can be affected by environmental, length-scale, time, mounting, stiffness, size, viscoelastic, and methodological conditions. Using porcine sclera tissue, the effects of environmental, time, and mounting conditions are evaluated when using nanoindentation. Quasi-static tests are used to measure reduced modulus during extended exposure to phosphate-buffered saline (PBS), alongside a chemical and mechanical analysis of mounting the sample to a solid substrate using cyanoacrylate. The less destructive nature of nanoindentation tests allows for variance of tests within a single sample to be compared to the variance between samples. The results indicate that the environmental, time, and mounting conditions can be controlled for using modified nanoindentation procedures for biological samples and are in line with average modulus values from previous studies but with increased precision. By using the quasi-static and dynamic characterization capabilities of the nanoindentation setup, the additional stiffness and viscoelastic variables are measured. Different quasi-static control methods were evaluated along with maximum load parameters and produced no significant difference in reported reduced modulus values. Dynamic characterization tests varied frequency and quasi-static load, showing that the agar could be modeled as a linearly elastic material. The effects of sample stiffness were evaluated by testing both the quasi-static and dynamic mechanical properties of different concentration agar samples, ranging from 0.5% to 5.0%. The dynamic nanoindentation protocol showed some sensitivity to sample stiffness, but characterization remained consistently applicable to soft biological materials. Comparative experiments were performed on both 0.5% and 5.0% agar as well as porcine eye tissue samples using published dynamic macrocompression standards. By comparing these new tests to those obtained with nanoindentation, the effects due to length-scale, stiffness, size, viscoelastic, and methodological conditions are evaluated. Both testing methodologies can be adapted for the environmental and mounting conditions, but the limitations of standardized macro-scale tests are explored. The factors affecting mechanical characterization of soft and thin viscoelastic biological materials are investigated and a comprehensive protocol is presented. This work produces material mechanical properties for use in improving future medical implant designs for a wide variety of biological tissues and materials.

  10. A Technique for Developing Probabilistic Properties of Earth Materials

    DTIC Science & Technology

    1988-04-01

    Recoverable nomenclature from the indexed excerpt: E = expected value; F = ratio of the between-sample variance to the within-sample variance; z = number of increments in the covariance analysis; VL = loading Poisson's ratio; VUN = unloading Poisson's ratio.

  11. Pain-related work interference is a key factor in a worker/workplace model of work absence duration due to musculoskeletal conditions in Canadian nurses.

    PubMed

    Murray, Eleanor; Franche, Renée-Louise; Ibrahim, Selahadin; Smith, Peter; Carnide, Nancy; Côté, Pierre; Gibson, Jane; Guzman, Jaime; Koehoorn, Mieke; Mustard, Cameron

    2013-12-01

    To examine the role of pain experiences in relation to work absence, within the context of other worker health factors and workplace factors among Canadian nurses with work-related musculoskeletal (MSK) injury. Structural equation modeling was used on a sample of 941 employed, female, direct care nurses with at least one day of work absence due to a work-related MSK injury, from the cross-sectional 2005 National Survey of the Work and Health of Nurses. The final model suggests that pain severity and pain-related work interference mediate the impact of the following worker health and workplace factors on work absence duration: depression, back problems, age, unionization, workplace physical demands and low job control. The model accounted for 14 % of the variance in work absence duration and 46.6 % of the variance in pain-related work interference. Our findings support a key role for pain severity and pain-related work interference in mediating the effects of workplace factors and worker health factors on work absence duration. Future interventions should explore reducing pain-related work interference through addressing workplace issues, such as providing modified work, reducing physical demands, and increasing job control.

  12. A long time ago, where were the galaxies far, far away?

    NASA Astrophysics Data System (ADS)

    Sirko, Edwin

    How did the universe get from then to now? I examine this broad cosmological problem from two perspectives: forward and backward. In the forward perspective, I implement a method of generating initial conditions for N-body simulations that accurately models real-space statistical properties, such as the mass variance in spheres and the correlation function. The method requires running ensembles of simulations because the power in the DC mode is no longer assumed to be zero. For moderately sized boxes, I demonstrate that the new method corrects the previously widely ignored underestimate in the mass variance in spheres and the shape of the correlation function. In the backward perspective, I use reconstruction techniques to transform a simulated or observed cosmological density field back in time to the early universe. A simple reconstruction technique is used to sharpen the baryon acoustic peak in the correlation function in simulations. At z = 0.3, one can reduce the sample variance error bar on the acoustic scale by at least a factor of 2 and in principle by nearly a factor of 4. This has significant implications for future observational surveys aiming to measure the cosmological distance scale. Another reconstruction technique, Monge-Ampere-Kantorovich reconstruction, is used on evolved N-body simulations to calibrate its effectiveness in recovering the linear power spectrum. A new "memory model" parametrizes the evolution of Fourier modes into two parameters that describe the amount of memory a given mode retains and how much the mode has been scrambled by nonlinear evolution. Reconstruction is spectacularly successful in restoring the memory of Fourier modes and reducing the scrambling; however, the success of reconstruction is not so obvious when considering the power spectrum alone. I apply reconstruction to a volume-limited sample of galaxies from the Sloan Digital Sky Survey and conclude that linear bias is not a good model in the range 0.01 h Mpc^-1 ≲ k ≲ 0.5 h Mpc^-1. The most impressive success of reconstruction applied to real data is that the confidence interval on the normalization of the power spectrum is typically halved when using the reconstructed instead of the nonlinear power spectrum.

  13. MMPI-2 Symptom Validity (FBS) Scale: psychometric characteristics and limitations in a Veterans Affairs neuropsychological setting.

    PubMed

    Gass, Carlton S; Odland, Anthony P

    2014-01-01

    The Minnesota Multiphasic Personality Inventory-2 (MMPI-2) Symptom Validity (Fake Bad Scale [FBS]) Scale is widely used to assist in determining noncredible symptom reporting, despite a paucity of detailed research regarding its itemmetric characteristics. Originally designed for use in civil litigation, the FBS is often used in a variety of clinical settings. The present study explored its fundamental psychometric characteristics in a sample of 303 patients who were consecutively referred for a comprehensive examination in a Veterans Affairs (VA) neuropsychology clinic. FBS internal consistency (reliability) was .77. Its underlying factor structure consisted of three unitary dimensions (Tiredness/Distractibility, Stomach/Head Discomfort, and Claimed Virtue of Self/Others) accounting for 28.5% of the total variance. The FBS's internal structure showed factoral discordance, as Claimed Virtue was negatively related to most of the FBS and to its somatic complaint components. Scores on this 12-item FBS component reflected a denial of socially undesirable attitudes and behaviors (Antisocial Practices Scale) that is commonly expressed by the 1,138 males in the MMPI-2 normative sample. These 12 items significantly reduced FBS reliability, introducing systematic error variance. In this VA neuropsychological referral setting, scores on the FBS have ambiguous meaning because of its structural discordance.

  14. Adaptive pre-specification in randomized trials with and without pair-matching.

    PubMed

    Balzer, Laura B; van der Laan, Mark J; Petersen, Maya L

    2016-11-10

    In randomized trials, adjustment for measured covariates during the analysis can reduce variance and increase power. To avoid misleading inference, the analysis plan must be pre-specified. However, it is often unclear a priori which baseline covariates (if any) should be adjusted for in the analysis. Consider, for example, the Sustainable East Africa Research in Community Health (SEARCH) trial for HIV prevention and treatment. There are 16 matched pairs of communities and many potential adjustment variables, including region, HIV prevalence, male circumcision coverage, and measures of community-level viral load. In this paper, we propose a rigorous procedure to data-adaptively select the adjustment set, which maximizes the efficiency of the analysis. Specifically, we use cross-validation to select from a pre-specified library the candidate targeted maximum likelihood estimator (TMLE) that minimizes the estimated variance. For further gains in precision, we also propose a collaborative procedure for estimating the known exposure mechanism. Our small sample simulations demonstrate the promise of the methodology to maximize study power, while maintaining nominal confidence interval coverage. We show how our procedure can be tailored to the scientific question (intervention effect for the study sample vs. for the target population) and study design (pair-matched or not). Copyright © 2016 John Wiley & Sons, Ltd.

  15. A path analysis model of factors influencing children's requests for unhealthy foods.

    PubMed

    Pettigrew, Simone; Jongenelis, Michelle; Miller, Caroline; Chapman, Kathy

    2017-01-01

    Little is known about the complex combination of factors influencing the extent to which children request unhealthy foods from their parents. The aim of this study was to develop a comprehensive model of influencing factors to provide insight into potential methods of reducing these requests. A web panel provider was used to administer a national online survey to a sample of 1302 Australian parent-child dyads (total sample n=2604). Initial univariate analyses identified potential predictors of children's requests for and consumption of unhealthy foods. The identified variables were subsequently incorporated into a path analysis model that included both parents' and children's reports of children's requests for unhealthy foods. The resulting model accounted for a substantial 31% of the variance in parent-reported food request frequency and 27% of the variance in child-reported request frequency. The variable demonstrating the strongest direct association with both parents' and children's reports of request frequency was the frequency of children's current intake of unhealthy foods. Parents' and children's exposure to food advertising and television viewing time were also positively associated with children's unhealthy food requests. The results highlight the need to break the habitual provision of unhealthy foods to avoid a vicious cycle of requests resulting in consumption. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Multiobjective sampling design for parameter estimation and model discrimination in groundwater solute transport

    USGS Publications Warehouse

    Knopman, Debra S.; Voss, Clifford I.

    1989-01-01

    Sampling design for site characterization studies of solute transport in porous media is formulated as a multiobjective problem. Optimal design of a sampling network is a sequential process in which the next phase of sampling is designed on the basis of all available physical knowledge of the system. Three objectives are considered: model discrimination, parameter estimation, and cost minimization. For the first two objectives, physically based measures of the value of information obtained from a set of observations are specified. In model discrimination, value of information of an observation point is measured in terms of the difference in solute concentration predicted by hypothesized models of transport. Points of greatest difference in predictions can contribute the most information to the discriminatory power of a sampling design. Sensitivity of solute concentration to a change in a parameter contributes information on the relative variance of a parameter estimate. Inclusion of points in a sampling design with high sensitivities to parameters tends to reduce variance in parameter estimates. Cost minimization accounts for both the capital cost of well installation and the operating costs of collection and analysis of field samples. Sensitivities, discrimination information, and well installation and sampling costs are used to form coefficients in the multiobjective problem in which the decision variables are binary (zero/one), each corresponding to the selection of an observation point in time and space. The solution to the multiobjective problem is a noninferior set of designs. To gain insight into effective design strategies, a one-dimensional solute transport problem is hypothesized. Then, an approximation of the noninferior set is found by enumerating 120 designs and evaluating objective functions for each of the designs. Trade-offs between pairs of objectives are demonstrated among the models. The value of an objective function for a given design is shown to correspond to the ability of a design to actually meet an objective.

  17. A Cosmic Variance Cookbook

    NASA Astrophysics Data System (ADS)

    Moster, Benjamin P.; Somerville, Rachel S.; Newman, Jeffrey A.; Rix, Hans-Walter

    2011-04-01

    Deep pencil beam surveys (< 1 deg²) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by "cosmic variance." This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10^11 M_sun is ~38%, while it is ~27% for GEMS and ~12% for COSMOS. For galaxies of m* ~ 10^10 M_sun, the relative cosmic variance is ~19% for GOODS, ~13% for GEMS, and ~6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z̄ = 2 for small fields and massive galaxies, while for larger fields and intermediate mass galaxies, cosmic variance is less serious.

  18. The magnitude and colour of noise in genetic negative feedback systems.

    PubMed

    Voliotis, Margaritis; Bowsher, Clive G

    2012-08-01

    The comparative ability of transcriptional and small RNA-mediated negative feedback to control fluctuations or 'noise' in gene expression remains unexplored. Both autoregulatory mechanisms usually suppress the average (mean) of the protein level and its variability across cells. The variance of the number of proteins per molecule of mean expression is also typically reduced compared with the unregulated system, but is almost never below the value of one. This relative variance often substantially exceeds a recently obtained, theoretical lower limit for biochemical feedback systems. Adding the transcriptional or small RNA-mediated control has different effects. Transcriptional autorepression robustly reduces both the relative variance and persistence (lifetime) of fluctuations. Both benefits combine to reduce noise in downstream gene expression. Autorepression via small RNA can achieve more extreme noise reduction and typically has less effect on the mean expression level. However, it is often more costly to implement and is more sensitive to rate parameters. Theoretical lower limits on the relative variance are known to decrease slowly as a measure of the cost per molecule of mean expression increases. However, the proportional increase in cost to achieve substantial noise suppression can be different away from the optimal frontier-for transcriptional autorepression, it is frequently negligible.

  19. Evolution of sociality by natural selection on variances in reproductive fitness: evidence from a social bee.

    PubMed

    Stevens, Mark I; Hogendoorn, Katja; Schwarz, Michael P

    2007-08-29

    The Central Limit Theorem (CLT) is a statistical principle that states that as the number of repeated samples from any population increases, the variance among sample means will decrease and means will become more normally distributed. It has been conjectured that the CLT has the potential to provide benefits for group living in some animals via greater predictability in food acquisition, if the number of foraging bouts increases with group size. The potential existence of benefits for group living derived from a purely statistical principle is highly intriguing and it has implications for the origins of sociality. Here we show that in a social allodapine bee the relationship between cumulative food acquisition (measured as total brood weight) and colony size accords with the CLT. We show that deviations from expected food income decrease with group size, and that brood weights become more normally distributed both over time and with increasing colony size, as predicted by the CLT. Larger colonies are better able to match egg production to expected food intake, and better able to avoid costs associated with producing more brood than can be reared while reducing the risk of under-exploiting the food resources that may be available. These benefits to group living derive from a purely statistical principle, rather than from ecological, ergonomic or genetic factors, and could apply to a wide variety of species. This in turn suggests that the CLT may provide benefits at the early evolutionary stages of sociality and that evolution of group size could result from selection on variances in reproductive fitness. In addition, these benefits may help explain why sociality has evolved in some groups and not others.
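
    The statistical behaviour the argument relies on is easy to reproduce numerically. The Python sketch below is a generic illustration, with the skewed "food income" distribution and bout numbers chosen arbitrarily: the variance of the mean shrinks roughly as 1/n and its distribution becomes more symmetric as the number of bouts grows.

      import numpy as np

      rng = np.random.default_rng(42)
      population = rng.exponential(scale=1.0, size=100_000)    # skewed per-bout income

      for n in (1, 4, 16, 64):                                  # foraging bouts pooled
          means = rng.choice(population, size=(10_000, n)).mean(axis=1)
          skew = ((means - means.mean())**3).mean() / means.std()**3
          print(f"n={n:3d}  variance of mean={means.var():.4f}  skewness={skew:.2f}")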

  20. Estimation of the biserial correlation and its sampling variance for use in meta-analysis.

    PubMed

    Jacobs, Perke; Viechtbauer, Wolfgang

    2017-06-01

    Meta-analyses are often used to synthesize the findings of studies examining the correlational relationship between two continuous variables. When only dichotomous measurements are available for one of the two variables, the biserial correlation coefficient can be used to estimate the product-moment correlation between the two underlying continuous variables. Unlike the point-biserial correlation coefficient, biserial correlation coefficients can therefore be integrated with product-moment correlation coefficients in the same meta-analysis. The present article describes the estimation of the biserial correlation coefficient for meta-analytic purposes and reports simulation results comparing different methods for estimating the coefficient's sampling variance. The findings indicate that commonly employed methods yield inconsistent estimates of the sampling variance across a broad range of research situations. In contrast, consistent estimates can be obtained using two methods that appear to be unknown in the meta-analytic literature. A variance-stabilizing transformation for the biserial correlation coefficient is described that allows for the construction of confidence intervals for individual coefficients with close to nominal coverage probabilities in most of the examined conditions. Copyright © 2016 John Wiley & Sons, Ltd.
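
    As a rough illustration of the estimator (though not of the sampling-variance comparisons reported in the paper), the standard conversion from the point-biserial to the biserial correlation can be sketched as follows; the simulated data and the dichotomisation threshold are assumptions.

      import numpy as np
      from scipy.stats import norm, pointbiserialr

      rng = np.random.default_rng(7)
      x = rng.normal(size=500)                                  # latent continuous variable
      y = 0.5 * x + rng.normal(scale=np.sqrt(0.75), size=500)   # true correlation 0.5
      d = (x > norm.ppf(0.7)).astype(int)                       # dichotomised at the 70th percentile

      r_pb = pointbiserialr(d, y).correlation                   # point-biserial correlation
      p = d.mean()                                              # proportion in the upper group
      h = norm.pdf(norm.ppf(p))                                 # normal ordinate at the threshold
      r_bis = r_pb * np.sqrt(p * (1 - p)) / h                   # biserial estimate of the latent correlation
      print(round(r_pb, 3), round(r_bis, 3))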

  1. Mixed emotions: Sensitivity to facial variance in a crowd of faces.

    PubMed

    Haberman, Jason; Lee, Pegan; Whitney, David

    2015-01-01

    The visual system automatically represents summary information from crowds of faces, such as the average expression. This is a useful heuristic insofar as it provides critical information about the state of the world, not simply information about the state of one individual. However, the average alone is not sufficient for making decisions about how to respond to a crowd. The variance or heterogeneity of the crowd--the mixture of emotions--conveys information about the reliability of the average, essential for determining whether the average can be trusted. Despite its importance, the representation of variance within a crowd of faces has yet to be examined. This is addressed here in three experiments. In the first experiment, observers viewed a sample set of faces that varied in emotion, and then adjusted a subsequent set to match the variance of the sample set. To isolate variance as the summary statistic of interest, the average emotion of both sets was random. Results suggested that observers had information regarding crowd variance. The second experiment verified that this was indeed a uniquely high-level phenomenon, as observers were unable to derive the variance of an inverted set of faces as precisely as an upright set of faces. The third experiment replicated and extended the first two experiments using method-of-constant-stimuli. Together, these results show that the visual system is sensitive to emergent information about the emotional heterogeneity, or ambivalence, in crowds of faces.

  2. Variance components estimation for continuous and discrete data, with emphasis on cross-classified sampling designs

    USGS Publications Warehouse

    Gray, Brian R.; Gitzen, Robert A.; Millspaugh, Joshua J.; Cooper, Andrew B.; Licht, Daniel S.

    2012-01-01

    Variance components may play multiple roles (cf. Cox and Solomon 2003). First, magnitudes and relative magnitudes of the variances of random factors may have important scientific and management value in their own right. For example, variation in levels of invasive vegetation among and within lakes may suggest causal agents that operate at both spatial scales – a finding that may be important for scientific and management reasons. Second, variance components may also be of interest when they affect precision of means and covariate coefficients. For example, variation in the effect of water depth on the probability of aquatic plant presence in a study of multiple lakes may vary by lake. This variation will affect the precision of the average depth-presence association. Third, variance component estimates may be used when designing studies, including monitoring programs. For example, to estimate the numbers of years and of samples per year required to meet long-term monitoring goals, investigators need estimates of within and among-year variances. Other chapters in this volume (Chapters 7, 8, and 10) as well as extensive external literature outline a framework for applying estimates of variance components to the design of monitoring efforts. For example, a series of papers with an ecological monitoring theme examined the relative importance of multiple sources of variation, including variation in means among sites, years, and site-years, for the purposes of temporal trend detection and estimation (Larsen et al. 2004, and references therein).
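
    A minimal sketch of estimating among-group and within-group variance components with a random-intercept model is given below; the lake/site setting, the simulated values, and the use of statsmodels' MixedLM are illustrative assumptions, not the methods developed in the chapter.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(3)
      n_lakes, n_sites = 12, 8
      lake_effect = np.repeat(rng.normal(scale=1.0, size=n_lakes), n_sites)   # among-lake SD = 1.0
      df = pd.DataFrame({
          "lake": np.repeat(np.arange(n_lakes), n_sites),
          "y": lake_effect + rng.normal(scale=0.5, size=n_lakes * n_sites),   # within-lake SD = 0.5
      })

      fit = smf.mixedlm("y ~ 1", df, groups=df["lake"]).fit()
      print("among-lake variance :", float(fit.cov_re.iloc[0, 0]))
      print("within-lake variance:", fit.scale)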

  3. Abbreviated neuropsychological assessment in schizophrenia

    PubMed Central

    Harvey, Philip D.; Keefe, Richard S. E.; Patterson, Thomas L.; Heaton, Robert K.; Bowie, Christopher R.

    2008-01-01

    The aim of this study was to identify the best subset of neuropsychological tests for prediction of several different aspects of functioning in a large (n = 236) sample of older people with schizophrenia. While the validity of abbreviated assessment methods has been examined before, there has never been a comparative study of the prediction of different elements of cognitive impairment, real-world outcomes, and performance-based measures of functional capacity. Scores on 10 different tests from a neuropsychological assessment battery were used to predict global neuropsychological (NP) performance (indexed with averaged scores or calculated general deficit scores), performance-based indices of everyday-living skills and social competence, and case-manager ratings of real-world functioning. Forward entry stepwise regression analyses were used to identify the best predictors for each of the outcomes measures. Then, the analyses were adjusted for estimated premorbid IQ, which reduced the magnitude, but not the structure, of the correlations. Substantial amounts (over 70%) of the variance in overall NP performance were accounted for by a limited number of NP tests. Considerable variance in measures of functional capacity was also accounted for by a limited number of tests. Different tests constituted the best predictor set for each outcome measure. A substantial proportion of the variance in several different NP and functional outcomes can be accounted for by a small number of NP tests that can be completed in a few minutes, although there is considerable unexplained variance. However, the abbreviated assessments that best predict different outcomes vary across outcomes. Future studies should determine whether responses to pharmacological and remediation treatments can be captured with brief assessments as well. PMID:18720182

  4. Multisite Reliability of Cognitive BOLD Data

    PubMed Central

    Brown, Gregory G.; Mathalon, Daniel H.; Stern, Hal; Ford, Judith; Mueller, Bryon; Greve, Douglas N.; McCarthy, Gregory; Voyvodic, Jim; Glover, Gary; Diaz, Michele; Yetter, Elizabeth; Burak Ozyurt, I.; Jorgensen, Kasper W.; Wible, Cynthia G.; Turner, Jessica A.; Thompson, Wesley K.; Potkin, Steven G.

    2010-01-01

    Investigators perform multi-site functional magnetic resonance imaging studies to increase statistical power, to enhance generalizability, and to improve the likelihood of sampling relevant subgroups. Yet undesired site variation in imaging methods could off-set these potential advantages. We used variance components analysis to investigate sources of variation in the blood oxygen level dependent (BOLD) signal across four 3T magnets in voxelwise and region of interest (ROI) analyses. Eighteen participants traveled to four magnet sites to complete eight runs of a working memory task involving emotional or neutral distraction. Person variance was more than 10 times larger than site variance for five of six ROIs studied. Person-by-site interactions, however, contributed sizable unwanted variance to the total. Averaging over runs increased between-site reliability, with many voxels showing good to excellent between-site reliability when eight runs were averaged and regions of interest showing fair to good reliability. Between-site reliability depended on the specific functional contrast analyzed in addition to the number of runs averaged. Although median effect size was correlated with between-site reliability, dissociations were observed for many voxels. Brain regions where the pooled effect size was large but between-site reliability was poor were associated with reduced individual differences. Brain regions where the pooled effect size was small but between-site reliability was excellent were associated with a balance of participants who displayed consistently positive or consistently negative BOLD responses. Although between-site reliability of BOLD data can be good to excellent, acquiring highly reliable data requires robust activation paradigms, ongoing quality assurance, and careful experimental control. PMID:20932915

  5. Evaluating Composite Sampling Methods of Bacillus Spores at Low Concentrations

    PubMed Central

    Hess, Becky M.; Amidan, Brett G.; Anderson, Kevin K.; Hutchison, Janine R.

    2016-01-01

    Restoring all facility operations after the 2001 Amerithrax attacks took years to complete, highlighting the need to reduce remediation time. Some of the most time intensive tasks were environmental sampling and sample analyses. Composite sampling allows disparate samples to be combined, with only a single analysis needed, making it a promising method to reduce response times. We developed a statistical experimental design to test three different composite sampling methods: 1) single medium single pass composite (SM-SPC): a single cellulose sponge samples multiple coupons with a single pass across each coupon; 2) single medium multi-pass composite: a single cellulose sponge samples multiple coupons with multiple passes across each coupon (SM-MPC); and 3) multi-medium post-sample composite (MM-MPC): a single cellulose sponge samples a single surface, and then multiple sponges are combined during sample extraction. Five spore concentrations of Bacillus atrophaeus Nakamura spores were tested; concentrations ranged from 5 to 100 CFU/coupon (0.00775 to 0.155 CFU/cm2). Study variables included four clean surface materials (stainless steel, vinyl tile, ceramic tile, and painted dry wallboard) and three grime coated/dirty materials (stainless steel, vinyl tile, and ceramic tile). Analysis of variance for the clean study showed two significant factors: composite method (p< 0.0001) and coupon material (p = 0.0006). Recovery efficiency (RE) was higher overall using the MM-MPC method compared to the SM-SPC and SM-MPC methods. RE with the MM-MPC method for concentrations tested (10 to 100 CFU/coupon) was similar for ceramic tile, dry wall, and stainless steel for clean materials. RE was lowest for vinyl tile with both composite methods. Statistical tests for the dirty study showed RE was significantly higher for vinyl and stainless steel materials, but lower for ceramic tile. These results suggest post-sample compositing can be used to reduce sample analysis time when responding to a Bacillus anthracis contamination event of clean or dirty surfaces. PMID:27736999

  6. Evaluating Composite Sampling Methods of Bacillus Spores at Low Concentrations.

    PubMed

    Hess, Becky M; Amidan, Brett G; Anderson, Kevin K; Hutchison, Janine R

    2016-01-01

    Restoring all facility operations after the 2001 Amerithrax attacks took years to complete, highlighting the need to reduce remediation time. Some of the most time intensive tasks were environmental sampling and sample analyses. Composite sampling allows disparate samples to be combined, with only a single analysis needed, making it a promising method to reduce response times. We developed a statistical experimental design to test three different composite sampling methods: 1) single medium single pass composite (SM-SPC): a single cellulose sponge samples multiple coupons with a single pass across each coupon; 2) single medium multi-pass composite: a single cellulose sponge samples multiple coupons with multiple passes across each coupon (SM-MPC); and 3) multi-medium post-sample composite (MM-MPC): a single cellulose sponge samples a single surface, and then multiple sponges are combined during sample extraction. Five spore concentrations of Bacillus atrophaeus Nakamura spores were tested; concentrations ranged from 5 to 100 CFU/coupon (0.00775 to 0.155 CFU/cm2). Study variables included four clean surface materials (stainless steel, vinyl tile, ceramic tile, and painted dry wallboard) and three grime coated/dirty materials (stainless steel, vinyl tile, and ceramic tile). Analysis of variance for the clean study showed two significant factors: composite method (p< 0.0001) and coupon material (p = 0.0006). Recovery efficiency (RE) was higher overall using the MM-MPC method compared to the SM-SPC and SM-MPC methods. RE with the MM-MPC method for concentrations tested (10 to 100 CFU/coupon) was similar for ceramic tile, dry wall, and stainless steel for clean materials. RE was lowest for vinyl tile with both composite methods. Statistical tests for the dirty study showed RE was significantly higher for vinyl and stainless steel materials, but lower for ceramic tile. These results suggest post-sample compositing can be used to reduce sample analysis time when responding to a Bacillus anthracis contamination event of clean or dirty surfaces.

  7. Evaluating Composite Sampling Methods of Bacillus spores at Low Concentrations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hess, Becky M.; Amidan, Brett G.; Anderson, Kevin K.

    Restoring facility operations after the 2001 Amerithrax attacks took over three months to complete, highlighting the need to reduce remediation time. The most time intensive tasks were environmental sampling and sample analyses. Composite sampling allows disparate samples to be combined, with only a single analysis needed, making it a promising method to reduce response times. We developed a statistical experimental design to test three different composite sampling methods: 1) single medium single pass composite: a single cellulose sponge samples multiple coupons; 2) single medium multi-pass composite: a single cellulose sponge is used to sample multiple coupons; and 3) multi-medium post-sample composite: a single cellulose sponge samples a single surface, and then multiple sponges are combined during sample extraction. Five spore concentrations of Bacillus atrophaeus Nakamura spores were tested; concentrations ranged from 5 to 100 CFU/coupon (0.00775 to 0.155 CFU/cm2, respectively). Study variables included four clean surface materials (stainless steel, vinyl tile, ceramic tile, and painted wallboard) and three grime coated/dirty materials (stainless steel, vinyl tile, and ceramic tile). Analysis of variance for the clean study showed two significant factors: composite method (p-value < 0.0001) and coupon material (p-value = 0.0008). Recovery efficiency (RE) was higher overall using the post-sample composite (PSC) method compared to single medium composite from both clean and grime coated materials. RE with the PSC method for concentrations tested (10 to 100 CFU/coupon) was similar for ceramic tile, painted wall board, and stainless steel for clean materials. RE was lowest for vinyl tile with both composite methods. Statistical tests for the dirty study showed RE was significantly higher for vinyl and stainless steel materials, but significantly lower for ceramic tile. These results suggest post-sample compositing can be used to reduce sample analysis time when responding to a Bacillus anthracis contamination event of clean or dirty surfaces.

  8. The effect of thermal variance on the phenotype of marine turtle offspring.

    PubMed

    Horne, C R; Fuller, W J; Godley, B J; Rhodes, K A; Snape, R; Stokes, K L; Broderick, A C

    2014-01-01

    Temperature can have a profound effect on the phenotype of reptilian offspring, yet the bulk of current research considers the effects of constant incubation temperatures on offspring morphology, with few studies examining the natural thermal variance that occurs in the wild. Over two consecutive nesting seasons, we placed temperature data loggers in 57 naturally incubating clutches of loggerhead sea turtles Caretta caretta and found that greater diel thermal variance during incubation significantly reduced offspring mass, potentially reducing survival of hatchlings during their journey from the nest to offshore waters and beyond. With predicted scenarios of climate change, behavioral plasticity in nest site selection may be key for the survival of ectothermic species, particularly those with temperature-dependent sex determination.

  9. The relationship between observational scale and explained variance in benthic communities

    PubMed Central

    Flood, Roger D.; Frisk, Michael G.; Garza, Corey D.; Lopez, Glenn R.; Maher, Nicole P.

    2018-01-01

    This study addresses the impact of spatial scale on explaining variance in benthic communities. In particular, the analysis estimated the fraction of community variation that occurred at a spatial scale smaller than the sampling interval (i.e., the geographic distance between samples). This estimate is important because it sets a limit on the amount of community variation that can be explained based on the spatial configuration of a study area and sampling design. Six benthic data sets were examined that consisted of faunal abundances, common environmental variables (water depth, grain size, and surficial percent cover), and sonar backscatter treated as a habitat proxy (categorical acoustic provinces). Redundancy analysis was coupled with spatial variograms generated by multiscale ordination to quantify the explained and residual variance at different spatial scales and within and between acoustic provinces. The amount of community variation below the sampling interval of the surveys (< 100 m) was estimated to be 36–59% of the total. Once adjusted for this small-scale variation, > 71% of the remaining variance was explained by the environmental and province variables. Furthermore, these variables effectively explained the spatial structure present in the infaunal community. Overall, no scale problems remained to compromise inferences, and unexplained infaunal community variation had no apparent spatial structure within the observational scale of the surveys (> 100 m), although small-scale gradients (< 100 m) below the observational scale may be present. PMID:29324746

  10. Behavior of sensitivities in the one-dimensional advection-dispersion equation: Implications for parameter estimation and sampling design

    USGS Publications Warehouse

    Knopman, Debra S.; Voss, Clifford I.

    1987-01-01

    The spatial and temporal variability of sensitivities has a significant impact on parameter estimation and sampling design for studies of solute transport in porous media. Physical insight into the behavior of sensitivities is offered through an analysis of analytically derived sensitivities for the one-dimensional form of the advection-dispersion equation. When parameters are estimated in regression models of one-dimensional transport, the spatial and temporal variability in sensitivities influences variance and covariance of parameter estimates. Several principles account for the observed influence of sensitivities on parameter uncertainty. (1) Information about a physical parameter may be most accurately gained at points in space and time with a high sensitivity to the parameter. (2) As the distance of observation points from the upstream boundary increases, maximum sensitivity to velocity during passage of the solute front increases and the consequent estimate of velocity tends to have lower variance. (3) The frequency of sampling must be “in phase” with the S shape of the dispersion sensitivity curve to yield the most information on dispersion. (4) The sensitivity to the dispersion coefficient is usually at least an order of magnitude less than the sensitivity to velocity. (5) The assumed probability distribution of random error in observations of solute concentration determines the form of the sensitivities. (6) If variance in random error in observations is large, trends in sensitivities of observation points may be obscured by noise and thus have limited value in predicting variance in parameter estimates among designs. (7) Designs that minimize the variance of one parameter may not necessarily minimize the variance of other parameters. (8) The time and space interval over which an observation point is sensitive to a given parameter depends on the actual values of the parameters in the underlying physical system.
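
    The kind of sensitivity examined here can be approximated numerically from the standard one-dimensional advection-dispersion solution, as in the sketch below; the parameter values are arbitrary and the finite-difference sensitivities stand in for the analytically derived expressions used in the paper.

      import numpy as np
      from scipy.special import erfc

      def conc(x, t, v, D, c0=1.0):
          # continuous-source solution of the 1-D advection-dispersion equation
          return 0.5 * c0 * erfc((x - v * t) / (2.0 * np.sqrt(D * t)))

      def sensitivity(x, t, v, D, which, rel_step=1e-4):
          # central finite-difference approximation of dC/d(parameter)
          if which == "v":
              h = rel_step * v
              return (conc(x, t, v + h, D) - conc(x, t, v - h, D)) / (2 * h)
          h = rel_step * D
          return (conc(x, t, v, D + h) - conc(x, t, v, D - h)) / (2 * h)

      x, v, D = 10.0, 1.0, 0.1                      # observation point and parameters
      for t in (5.0, 10.0, 15.0):                   # before, during, after front passage
          print(f"t={t:5.1f}  dC/dv={sensitivity(x, t, v, D, 'v'):8.3f}"
                f"  dC/dD={sensitivity(x, t, v, D, 'D'):8.3f}")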

  11. Adaptive design of an X-ray magnetic circular dichroism spectroscopy experiment with Gaussian process modelling

    NASA Astrophysics Data System (ADS)

    Ueno, Tetsuro; Hino, Hideitsu; Hashimoto, Ai; Takeichi, Yasuo; Sawada, Masahiro; Ono, Kanta

    2018-01-01

    Spectroscopy is a widely used experimental technique, and enhancing its efficiency can have a strong impact on materials research. We propose an adaptive design for spectroscopy experiments that uses a machine learning technique to improve efficiency. We examined X-ray magnetic circular dichroism (XMCD) spectroscopy for the applicability of a machine learning technique to spectroscopy. An XMCD spectrum was predicted by Gaussian process modelling with learning of an experimental spectrum using a limited number of observed data points. Adaptive sampling of data points with maximum variance of the predicted spectrum successfully reduced the total data points for the evaluation of magnetic moments while providing the required accuracy. The present method reduces the time and cost for XMCD spectroscopy and has potential applicability to various spectroscopies.
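
    The acquisition loop described above - fit a Gaussian process to the points measured so far, then measure next where the predictive variance is largest - can be sketched as follows; the synthetic spectrum, kernel, and number of iterations are assumptions for illustration, and scikit-learn's GaussianProcessRegressor stands in for the authors' model.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      def spectrum(e):
          # stand-in for the measured signal at photon energy e (arbitrary units)
          return np.exp(-(e - 3.0)**2 / 0.1) - 0.6 * np.exp(-(e - 3.6)**2 / 0.05)

      grid = np.linspace(2.0, 5.0, 400).reshape(-1, 1)     # candidate measurement energies
      measured = list(np.linspace(2.0, 5.0, 5))            # coarse initial scan

      for _ in range(15):                                  # adaptive measurements
          X = np.array(measured).reshape(-1, 1)
          gp = GaussianProcessRegressor(RBF(0.2) + WhiteKernel(1e-4),
                                        normalize_y=True).fit(X, spectrum(X.ravel()))
          mean, std = gp.predict(grid, return_std=True)
          measured.append(float(grid[np.argmax(std), 0]))  # next point: largest predictive variance

      print(f"total points measured: {len(measured)}")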

  12. Auto Regressive Moving Average (ARMA) Modeling Method for Gyro Random Noise Using a Robust Kalman Filter

    PubMed Central

    Huang, Lei

    2015-01-01

    To solve the problem in which the conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using robust Kalman filtering is developed. The ARMA model parameters are employed as state arguments. Unknown time-varying estimators of observation noise are used to obtain the estimated mean and variance of the observation noise. Using robust Kalman filtering, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of rapid convergence and high accuracy. Thus, the required sample size is reduced. It can be applied to modeling applications for gyro random noise in which a fast and accurate ARMA modeling method is required. PMID:26437409

  13. Zinc, iron, and lead: relations to head start children's cognitive scores and teachers' ratings of behavior.

    PubMed

    Hubbs-Tait, Laura; Kennedy, Tay Seacord; Droke, Elizabeth A; Belanger, David M; Parker, Jill R

    2007-01-01

    The objective of this study was to conduct a preliminary investigation of lead, zinc, and iron levels in relation to child cognition and behavior in a small sample of Head Start children. The design was cross-sectional and correlational. Participants were 42 3- to 5-year-old children attending rural Head Start centers. Nonfasting blood samples of whole blood lead, plasma zinc, and ferritin were collected. Teachers rated children's behavior on the California Preschool Social Competency Scale, Howes' Sociability subscale, and the Preschool Behavior Questionnaire. Children were tested individually with the McCarthy Scales of Children's Abilities. Hierarchical regression analyses revealed that zinc and ferritin jointly explained 25% of the variance in McCarthy Scales of Children's Abilities verbal scores. Lead levels explained 25% of the variance in teacher ratings of girls' sociability and 20% of the variance in teacher ratings of girls' classroom competence. Zinc levels explained 39% of the variance in teacher ratings of boys' anxiety. Univariate analysis of variance revealed that the four children low in zinc and iron had significantly higher blood lead (median=0.23 micromol/L [4.73 microg/dL]) than the 31 children sufficient in zinc or iron (median=0.07 micromol/L [1.54 microg/dL]) or the 7 children sufficient in both (median=0.12 micromol/L [2.52 microg/dL]), suggesting an interaction among the three minerals. Within this small low-income sample, the results imply both separate and interacting effects of iron, zinc, and lead. They underscore the importance of studying these three minerals in larger samples of low-income preschool children to make more definitive conclusions.

  14. Variation of gene expression in Bacillus subtilis samples of fermentation replicates.

    PubMed

    Zhou, Ying; Yu, Wen-Bang; Ye, Bang-Ce

    2011-06-01

    The application of comprehensive gene expression profiling technologies to compare wild and mutated microorganism samples or to assess molecular differences between various treatments has been widely used. However, little is known about the normal variation of gene expression in microorganisms. In this study, an Agilent customized microarray representing 4,106 genes was used to quantify transcript levels of five-repeated flasks to assess normal variation in Bacillus subtilis gene expression. CV analysis and analysis of variance were employed to investigate the normal variance of genes and the components of variance, respectively. The results showed that above 80% of the total variation was caused by biological variance. For the 12 replicates, 451 of 4,106 genes exhibited variance with CV values over 10%. The functional category enrichment analysis demonstrated that these variable genes were mainly involved in cell type differentiation, cell type localization, cell cycle and DNA processing, and spore or cyst coat. Using power analysis, the minimal biological replicate number for a B. subtilis microarray experiment was determined to be six. The results contribute to the definition of the baseline level of variability in B. subtilis gene expression and emphasize the importance of replicate microarray experiments.

  15. The Statistical Power of Planned Comparisons.

    ERIC Educational Resources Information Center

    Benton, Roberta L.

    Basic principles underlying statistical power are examined; and issues pertaining to effect size, sample size, error variance, and significance level are highlighted via the use of specific hypothetical examples. Analysis of variance (ANOVA) and related methods remain popular, although other procedures sometimes have more statistical power against…
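
    How effect size, sample size, and significance level jointly determine power can be illustrated with a simple two-sample calculation (a t-test rather than the planned ANOVA contrasts discussed in the entry); statsmodels is assumed to be available.

      from statsmodels.stats.power import TTestIndPower

      analysis = TTestIndPower()
      for d in (0.2, 0.5, 0.8):                 # small, medium, large effect sizes
          for n in (20, 50, 100):               # observations per group
              power = analysis.power(effect_size=d, nobs1=n, alpha=0.05)
              print(f"d={d}, n per group={n}: power={power:.2f}")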

  16. Overlap between treatment and control distributions as an effect size measure in experiments.

    PubMed

    Hedges, Larry V; Olkin, Ingram

    2016-03-01

    The proportion π of treatment group observations that exceed the control group mean has been proposed as an effect size measure for experiments that randomly assign independent units into 2 groups. We give the exact distribution of a simple estimator of π based on the standardized mean difference and use it to study the small sample bias of this estimator. We also give the minimum variance unbiased estimator of π under 2 models, one in which the variance of the mean difference is known and one in which the variance is unknown. We show how to use the relation between the standardized mean difference and the overlap measure to compute confidence intervals for π and show that these results can be used to obtain unbiased estimators, large sample variances, and confidence intervals for 3 related effect size measures based on the overlap. Finally, we show how the effect size π can be used in a meta-analysis. (c) 2016 APA, all rights reserved.
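
    A simple version of the estimator, π̂ = Φ(d) with d the standardized mean difference, together with a delta-method confidence interval based on the common large-sample variance of d, is sketched below; this follows the general relation described in the abstract rather than the exact or minimum variance unbiased estimators derived in the paper.

      import numpy as np
      from scipy.stats import norm

      def overlap_effect_size(treat, control):
          n1, n2 = len(treat), len(control)
          sp = np.sqrt(((n1 - 1) * np.var(treat, ddof=1) +
                        (n2 - 1) * np.var(control, ddof=1)) / (n1 + n2 - 2))
          d = (np.mean(treat) - np.mean(control)) / sp             # standardized mean difference
          var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))   # common large-sample variance of d
          lo, hi = d - 1.96 * np.sqrt(var_d), d + 1.96 * np.sqrt(var_d)
          # pi: probability that a treatment observation exceeds the control mean
          return norm.cdf(d), (norm.cdf(lo), norm.cdf(hi))

      rng = np.random.default_rng(0)
      pi_hat, ci = overlap_effect_size(rng.normal(0.5, 1, 60), rng.normal(0.0, 1, 60))
      print(round(pi_hat, 3), tuple(round(c, 3) for c in ci))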

  17. Statistical analysis tables for truncated or censored samples

    NASA Technical Reports Server (NTRS)

    Cohen, A. C.; Cooley, C. G.

    1971-01-01

    Compilation describes characteristics of truncated and censored samples, and presents six illustrations of practical use of tables in computing mean and variance estimates for normal distribution using selected samples.

  18. Longitudinal design considerations to optimize power to detect variances and covariances among rates of change: Simulation results based on actual longitudinal studies

    PubMed Central

    Rast, Philippe; Hofer, Scott M.

    2014-01-01

    We investigated the power to detect variances and covariances in rates of change in the context of existing longitudinal studies using linear bivariate growth curve models. Power was estimated by means of Monte Carlo simulations. Our findings show that typical longitudinal study designs have substantial power to detect both variances and covariances among rates of change in a variety of cognitive, physical functioning, and mental health outcomes. We performed simulations to investigate how the number and spacing of occasions, total duration of the study, effect size, and error variance jointly affect power and required sample size. The relation of growth rate reliability (GRR) and effect size to the sample size required to achieve power ≥ .80 was non-linear, with the required sample size decreasing rapidly as GRR increases. The results presented here stand in contrast to previous simulation results and recommendations (Hertzog, Lindenberger, Ghisletta, & von Oertzen, 2006; Hertzog, von Oertzen, Ghisletta, & Lindenberger, 2008; von Oertzen, Ghisletta, & Lindenberger, 2010), which are limited due to confounds between study length and number of waves, error variance with GRR, and parameter values that fall largely outside the range of actual study values. Power to detect change is generally low in the early phases (i.e. first years) of longitudinal studies but can substantially increase if the design is optimized. We recommend additional assessments, including embedded intensive measurement designs, to improve power in the early phases of long-term longitudinal studies. PMID:24219544

  19. Errors in radial velocity variance from Doppler wind lidar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, H.; Barthelmie, R. J.; Doubrawa, P.

    A high-fidelity lidar turbulence measurement technique relies on accurate estimates of radial velocity variance that are subject to both systematic and random errors determined by the autocorrelation function of radial velocity, the sampling rate, and the sampling duration. Our paper quantifies the effect of the volumetric averaging in lidar radial velocity measurements on the autocorrelation function and the dependence of the systematic and random errors on the sampling duration, using both statistically simulated and observed data. For current-generation scanning lidars and sampling durations of about 30 min and longer, during which the stationarity assumption is valid for atmospheric flows, the systematic error is negligible but the random error exceeds about 10%.

  20. Errors in radial velocity variance from Doppler wind lidar

    DOE PAGES

    Wang, H.; Barthelmie, R. J.; Doubrawa, P.; ...

    2016-08-29

    A high-fidelity lidar turbulence measurement technique relies on accurate estimates of radial velocity variance that are subject to both systematic and random errors determined by the autocorrelation function of radial velocity, the sampling rate, and the sampling duration. Our paper quantifies the effect of the volumetric averaging in lidar radial velocity measurements on the autocorrelation function and the dependence of the systematic and random errors on the sampling duration, using both statistically simulated and observed data. For current-generation scanning lidars and sampling durations of about 30 min and longer, during which the stationarity assumption is valid for atmospheric flows, the systematic error is negligible but the random error exceeds about 10%.

  1. Associations of gender inequality with child malnutrition and mortality across 96 countries.

    PubMed

    Marphatia, A A; Cole, T J; Grijalva-Eternod, C; Wells, J C K

    2016-01-01

    National efforts to reduce low birth weight (LBW) and child malnutrition and mortality prioritise economic growth. However, this may be ineffective, while rising gross domestic product (GDP) also imposes health costs, such as obesity and non-communicable disease. There is a need to identify other potential routes for improving child health. We investigated associations of the Gender Inequality Index (GII), a national marker of women's disadvantages in reproductive health, empowerment and labour market participation, with the prevalence of LBW, child malnutrition (stunting and wasting) and mortality under 5 years in 96 countries, adjusting for national GDP. The GII displaced GDP as a predictor of LBW, explaining 36% of the variance. Independent of GDP, the GII explained 10% of the variance in wasting and stunting and 41% of the variance in child mortality. Simulations indicated that reducing GII could lead to major reductions in LBW, child malnutrition and mortality in low- and middle-income countries. Independent of national wealth, reducing women's disempowerment relative to men may reduce LBW and promote child nutritional status and survival. Longitudinal studies are now needed to evaluate the impact of efforts to reduce societal gender inequality.

  2. Comparison of Clinpro Cario L-Pop estimates with CIA lactic acid estimates of the oral microflora.

    PubMed

    Gerardu, Véronique; Heijnsbroek, Muriel; Buijs, Mark; van der Weijden, Fridus; Ten Cate, Bob; van Loveren, Cor

    2006-04-01

    Clinpro Cario L-Pop (CCLP) is a semiquantitative test claimed to determine the general potential for caries development and to monitor individual caries risk. This test translates the capacity of the tongue microflora to produce lactic acid into a score of 1-9, indicating a low, medium or high risk for caries development. The aim of this randomized, crossover clinical trial was to evaluate the CCLP on its variation over time and its capacity to monitor the effect of three different oral hygiene procedures. The CCLP readings were compared with measurements of lactic acid in tongue biofilm and plaque samples by capillary ion analysis (CIA). After four washout periods, the distribution of scores in the low-, medium-, and high-risk categories was 10%, 16%, and 74%, respectively. Out of 30 subjects, 11 scored consistently in the same category. The coefficients of variation of lactic acid concentrations were 31% for tongue samples and 25% for plaque samples. After using antimicrobial toothpaste and mouthwash, the number of high-risk scores was reduced to 33%; reduced acidogenicity was also found in tongue and plaque samples. We conclude that CCLP can be used to monitor and stimulate compliance with an antimicrobial oral hygiene protocol.

  3. NIPTmer: rapid k-mer-based software package for detection of fetal aneuploidies.

    PubMed

    Sauk, Martin; Žilina, Olga; Kurg, Ants; Ustav, Eva-Liina; Peters, Maire; Paluoja, Priit; Roost, Anne Mari; Teder, Hindrek; Palta, Priit; Brison, Nathalie; Vermeesch, Joris R; Krjutškov, Kaarel; Salumets, Andres; Kaplinski, Lauris

    2018-04-04

    Non-invasive prenatal testing (NIPT) is a recent and rapidly evolving method for detecting genetic lesions, such as aneuploidies, of a fetus. However, there is a need for faster and cheaper laboratory and analysis methods to make NIPT more widely accessible. We have developed a novel software package for detection of fetal aneuploidies from next-generation low-coverage whole genome sequencing data. Our tool - NIPTmer - is based on counting pre-defined per-chromosome sets of unique k-mers from raw sequencing data and applying a linear regression model to the counts. Additionally, the filtering process used for k-mer list creation allows one to take into account the genetic variance in a specific sample, thus reducing this source of uncertainty. The processing time of one sample is less than 10 CPU-minutes on a high-end workstation. NIPTmer was validated on a cohort of 583 NIPT samples and it correctly predicted 37 non-mosaic fetal aneuploidies. NIPTmer has the potential to significantly reduce the time and complexity of NIPT post-sequencing analysis compared to mapping-based methods. For non-commercial users the software package is freely available at http://bioinfo.ut.ee/NIPTMer/ .

  4. Simple Penalties on Maximum-Likelihood Estimates of Genetic Parameters to Reduce Sampling Variation

    PubMed Central

    Meyer, Karin

    2016-01-01

    Multivariate estimates of genetic parameters are subject to substantial sampling variation, especially for smaller data sets and more than a few traits. A simple modification of standard, maximum-likelihood procedures for multivariate analyses to estimate genetic covariances is described, which can improve estimates by substantially reducing their sampling variances. This is achieved by maximizing the likelihood subject to a penalty. Borrowing from Bayesian principles, we propose a mild, default penalty—derived assuming a Beta distribution of scale-free functions of the covariance components to be estimated—rather than laboriously attempting to determine the stringency of penalization from the data. An extensive simulation study is presented, demonstrating that such penalties can yield very worthwhile reductions in loss, i.e., the difference from population values, for a wide range of scenarios and without distorting estimates of phenotypic covariances. Moreover, mild default penalties tend not to increase loss in difficult cases and, on average, achieve reductions in loss of similar magnitude to computationally demanding schemes to optimize the degree of penalization. Pertinent details required for the adaptation of standard algorithms to locate the maximum of the likelihood function are outlined. PMID:27317681

  5. An internal pilot design for prospective cancer screening trials with unknown disease prevalence.

    PubMed

    Brinton, John T; Ringham, Brandy M; Glueck, Deborah H

    2015-10-13

    For studies that compare the diagnostic accuracy of two screening tests, the sample size depends on the prevalence of disease in the study population, and on the variance of the outcome. Both parameters may be unknown during the design stage, which makes finding an accurate sample size difficult. To solve this problem, we propose adapting an internal pilot design. In this adapted design, researchers will accrue some percentage of the planned sample size, then estimate both the disease prevalence and the variances of the screening tests. The updated estimates of the disease prevalence and variance are used to conduct a more accurate power and sample size calculation. We demonstrate that in large samples, the adapted internal pilot design produces no Type I inflation. For small samples (N less than 50), we introduce a novel adjustment of the critical value to control the Type I error rate. We apply the method to two proposed prospective cancer screening studies: 1) a small oral cancer screening study in individuals with Fanconi anemia and 2) a large oral cancer screening trial. Conducting an internal pilot study without adjusting the critical value can cause Type I error rate inflation in small samples, but not in large samples. An internal pilot approach usually achieves goal power and, for most studies with sample size greater than 50, requires no Type I error correction. Further, we have provided a flexible and accurate approach to bound Type I error below a goal level for studies with small sample size.
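
    A hedged sketch of the core recalculation (formulas and all numbers below are assumed for illustration, not taken from the study): interim data update the disease prevalence and the variability of the accuracy difference, and the total accrual is recomputed from those estimates.

```python
# Sketch (assumed formulas and values): updating a planned sample size after an
# internal pilot re-estimates disease prevalence and outcome variability.
import math
from scipy.stats import norm

alpha, power, delta = 0.05, 0.80, 0.10   # type I error, target power, accuracy difference to detect

def required_accrual(prevalence, sd_diff):
    """Diseased cases needed for a paired difference, inflated by 1/prevalence."""
    z = norm.ppf(1.0 - alpha / 2.0) + norm.ppf(power)
    n_diseased = (z * sd_diff / delta) ** 2
    return math.ceil(n_diseased / prevalence)

# Design-stage guesses versus internal-pilot estimates (illustrative values)
print("planned accrual:", required_accrual(prevalence=0.10, sd_diff=0.35))
print("updated accrual:", required_accrual(prevalence=0.04, sd_diff=0.45))
```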

  6. Re-estimating sample size in cluster randomised trials with active recruitment within clusters.

    PubMed

    van Schie, S; Moerbeek, M

    2014-08-30

    Often only a limited number of clusters can be obtained in cluster randomised trials, although many potential participants can be recruited within each cluster. Thus, active recruitment is feasible within the clusters. To obtain an efficient sample size in a cluster randomised trial, the cluster level and individual level variance should be known before the study starts, but this is often not the case. We suggest using an internal pilot study design to address this problem of unknown variances. A pilot can be useful to re-estimate the variances and re-calculate the sample size during the trial. Using simulated data, it is shown that an initially low or high power can be adjusted using an internal pilot with the type I error rate remaining within an acceptable range. The intracluster correlation coefficient can be re-estimated with more precision, which has a positive effect on the sample size. We conclude that an internal pilot study design may be used if active recruitment is feasible within a limited number of clusters. Copyright © 2014 John Wiley & Sons, Ltd.
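
    The kind of recalculation involved can be sketched with standard design-effect algebra (assumed effect size, cluster count, and variance components; this is not the authors' simulation code): after the internal pilot updates the cluster-level and individual-level variances, the number of participants needed per cluster is re-derived for a fixed number of clusters.

```python
# Sketch (assumed values): re-estimating cluster size after an internal pilot
# updates the cluster-level and individual-level variance components.
import math
from scipy.stats import norm

alpha, power, delta = 0.05, 0.80, 0.5   # detectable mean difference (assumed)
clusters_per_arm = 15                   # limited, fixed number of clusters

def persons_per_cluster(var_cluster, var_individual):
    z = norm.ppf(1.0 - alpha / 2.0) + norm.ppf(power)
    total_var = var_cluster + var_individual
    icc = var_cluster / total_var
    n_simple = 2.0 * (z / delta) ** 2 * total_var      # per arm, ignoring clustering
    denom = clusters_per_arm - n_simple * icc
    if denom <= 0:
        raise ValueError("infeasible with this number of clusters")
    # Solve n_simple * (1 + (m - 1) * icc) = clusters_per_arm * m for m
    return math.ceil(n_simple * (1.0 - icc) / denom)

print("design stage:", persons_per_cluster(0.05, 0.95))   # initial guesses
print("after pilot :", persons_per_cluster(0.10, 0.90))   # re-estimated components
```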

  7. Variance components of short-term biomarkers of manganese exposure in an inception cohort of welding trainees.

    PubMed

    Baker, Marissa G; Simpson, Christopher D; Sheppard, Lianne; Stover, Bert; Morton, Jackie; Cocker, John; Seixas, Noah

    2015-01-01

    Various biomarkers of exposure have been explored as a way to quantitatively estimate an internal dose of manganese (Mn) exposure, but given the tight regulation of Mn in the body, inter-individual variability in baseline Mn levels, and variability in timing between exposure and uptake into various biological tissues, identification of a valuable and useful biomarker for Mn exposure has been elusive. Thus, a mixed model estimating variance components using restricted maximum likelihood was used to assess the within- and between-subject variance components in whole blood, plasma, and urine (MnB, MnP, and MnU, respectively) in a group of nine newly-exposed apprentice welders, on whom baseline and subsequent longitudinal samples were taken over a three month period. In MnB, the majority of variance was found to be between subjects (94%), while in MnP and MnU the majority of variance was found to be within subjects (79% and 99%, respectively), even when controlling for timing of sample. While blood seemed to exhibit a homeostatic control of Mn, plasma and urine, with the majority of the variance within subjects, did not. Results presented here demonstrate the importance of repeat measure or longitudinal study designs when assessing biomarkers of Mn, and the spurious associations that could result from cross-sectional analyses. Copyright © 2014 Elsevier GmbH. All rights reserved.
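
    A minimal sketch of this kind of partitioning (simulated data with illustrative values; the study's model also adjusted for timing of sample): a random-intercept mixed model fitted by REML splits the total variance into between-subject and within-subject components.

```python
# Sketch (simulated data, not the study's): partitioning within- vs between-
# subject variance of a repeated biomarker with a random-intercept mixed model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subjects, n_visits = 9, 12           # e.g., a small cohort with repeat samples
between_sd, within_sd = 2.0, 0.5       # assumed variance structure

subject = np.repeat(np.arange(n_subjects), n_visits)
subject_mean = rng.normal(10.0, between_sd, n_subjects)[subject]
y = subject_mean + rng.normal(0.0, within_sd, n_subjects * n_visits)
df = pd.DataFrame({"subject": subject, "biomarker": y})

# REML fit of a random-intercept model: Var(total) = Var(between) + Var(within)
fit = smf.mixedlm("biomarker ~ 1", df, groups=df["subject"]).fit(reml=True)
var_between = float(fit.cov_re.iloc[0, 0])
var_within = float(fit.scale)
share_between = var_between / (var_between + var_within)
print(f"between={var_between:.2f}  within={var_within:.2f}  "
      f"between-subject share={100 * share_between:.0f}%")
```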

  8. Integration of electromagnetic induction sensor data in soil sampling scheme optimization using simulated annealing.

    PubMed

    Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G

    2015-07-01

    Soil survey is generally time-consuming, labor-intensive, and costly. Optimization of the sampling scheme allows one to reduce the number of sampling points without decreasing, or even while increasing, the accuracy of the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors could be effectively used to direct soil sampling design for assessing spatial variability of soil moisture. A protocol, using a field-scale bulk ECa survey, has been applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used as a method to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first criterion (minimization of the mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expectation of the distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of the weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the gridded ECa data as the weighting function; and the third criterion (mean of average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion utilizes the variogram model of soil water content estimated in a previous trial. The procedures, or a combination of them, were tested and compared in a real case. Simulated annealing was implemented by the software MSANOS, which is able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach found the optimal solution in a reasonable computation time. The use of the bulk ECa gradient as an exhaustive variable, known at any node of an interpolation grid, has allowed the optimization of the sampling scheme, distinguishing among areas with different priority levels.
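
    A bare-bones sketch of spatial simulated annealing under the MMSD criterion (field geometry, point counts, and cooling parameters are assumed; this is not the MSANOS implementation): one sampling location is jittered per iteration and the move is accepted with the usual Metropolis rule.

```python
# Sketch: spatial simulated annealing of a sampling scheme under the MMSD
# criterion (minimize the mean distance from field locations to the nearest
# sampling point). Field size, point counts and cooling law are assumed.
import numpy as np

rng = np.random.default_rng(5)
field = rng.uniform(0.0, 100.0, size=(2000, 2))   # evaluation points in a 100 m x 100 m field
design = rng.uniform(0.0, 100.0, size=(20, 2))    # initial 20 sampling locations

def mmsd(points):
    d = np.linalg.norm(field[:, None, :] - points[None, :, :], axis=2)
    return d.min(axis=1).mean()

temp, cooling = 5.0, 0.995                        # assumed cooling law
current = mmsd(design)
for _ in range(5000):
    candidate = design.copy()
    i = rng.integers(len(candidate))
    candidate[i] = np.clip(candidate[i] + rng.normal(0.0, temp, 2), 0.0, 100.0)
    proposed = mmsd(candidate)
    if proposed < current or rng.random() < np.exp((current - proposed) / max(temp, 1e-9)):
        design, current = candidate, proposed     # Metropolis acceptance
    temp *= cooling

print(f"optimized MMSD = {current:.2f} m")
```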

  9. Environmental stress, inbreeding, and the nature of phenotypic and genetic variance in Drosophila melanogaster.

    PubMed Central

    Fowler, Kevin; Whitlock, Michael C

    2002-01-01

    Fifty-two lines of Drosophila melanogaster founded by single-pair population bottlenecks were used to study the effects of inbreeding and environmental stress on phenotypic variance, genetic variance and survivorship. Cold temperature and high density cause reduced survivorship, but these stresses do not cause repeatable changes in the phenotypic variance of most wing morphological traits. Wing area, however, does show increased phenotypic variance under both types of environmental stress. This increase is no greater in inbred than in outbred lines, showing that inbreeding does not increase the developmental effects of stress. Conversely, environmental stress does not increase the extent of inbreeding depression. Genetic variance is not correlated with environmental stress, although the amount of genetic variation varies significantly among environments and lines vary significantly in their response to environmental change. Drastic changes in the environment can cause changes in phenotypic and genetic variance, but not in a way reliably predicted by the notion of 'stress'. PMID:11934358

  10. S175. AMOTIVATION IS ASSOCIATED WITH SMALLER VENTRAL STRIATUM VOLUMES IN OLDER PATIENTS WITH SCHIZOPHRENIA

    PubMed Central

    Caravaggio, Fernando; Fervaha, Gagan; Iwata, Yusuke; Plitman, Eric; Chung, Jun Ku; Nakajima, Shinichiro; Mar, Wanna; Gerretsen, Philip; Kim, Julia; Chakravarty, Mallar; Mulsant, Benoit; Pollock, Bruce; Mamo, David; Remington, Gary; Graff-Guerrero, Ariel

    2018-01-01

    Abstract Background Motivational deficits are prevalent in patients with schizophrenia, persist despite antipsychotic treatment, and predict long‐term outcomes. Evidence suggests that patients with greater amotivation have smaller ventral striatum (VS) volumes. We wished to replicate this finding in a sample of older, chronically medicated patients with schizophrenia. Using structural imaging and positron emission tomography, we examined whether amotivation uniquely predicted VS volumes beyond the effects of striatal dopamine D2/3 receptor (D2/3R) blockade by antipsychotics. Methods Data from 41 older schizophrenia patients (mean age: 60.2 ± 6.7; 11 female) were reanalysed from previously published imaging data. We constructed multivariate linear stepwise regression models with VS volumes as the dependent variable and various sociodemographic and clinical variables as the initial predictors: age, gender, total brain volume, and antipsychotic striatal D2/3R occupancy. Amotivation was included as a subsequent step to determine any unique relationships with VS volumes beyond the contribution of the covariates. In a reduced sample (n = 36), general cognition was also included as a covariate. Results Amotivation uniquely explained 8% and 6% of the variance in right and left VS volumes, respectively (right: β = -.38, t = -2.48, P = .01; left: β = -.31, t = -2.17, P = .03). Considering cognition, amotivation levels uniquely explained 9% of the variance in right VS volumes (β = -.43, t = -0.26, P = .03). Discussion We replicate and extend the finding of reduced VS volumes with greater amotivation. We demonstrate this relationship uniquely beyond the potential contributions of striatal D2/3R blockade by antipsychotics. Elucidating the structural correlates of amotivation in schizophrenia may help develop treatments for this presently irremediable deficit.

  11. Perceived Stress Scale: confirmatory factor analysis of the PSS14 and PSS10 versions in two samples of pregnant women from the BRISA cohort.

    PubMed

    Yokokura, Ana Valéria Carvalho Pires; Silva, Antônio Augusto Moura da; Fernandes, Juliana de Kássia Braga; Del-Ben, Cristina Marta; Figueiredo, Felipe Pinheiro de; Barbieri, Marco Antonio; Bettiol, Heloisa

    2017-12-18

    This study aimed to assess the dimensional structure, reliability, convergent validity, discriminant validity, and scalability of the Perceived Stress Scale (PSS). The sample consisted of 1,447 pregnant women in São Luís (Maranhão State) and 1,400 in Ribeirão Preto (São Paulo State), Brazil. The 14 and 10-item versions of the scale were assessed using confirmatory factor analysis, using weighted least squares means and variance (WLSMV). In both cities, the two-factor models (positive factors, measuring resilience to stressful situations, and negative factors, measuring stressful situations) showed better fit than the single-factor models. The two-factor models for the complete (PSS14) and reduced scale (PSS10) showed good internal consistency (Cronbach's alpha ≥ 0.70). All the factor loadings were ≥ 0.50, except for items 8 and 12 of the negative dimension and item 13 of the positive dimension. The correlations between both dimensions of stress and psychological violence showed the expected magnitude (0.46-0.59), providing evidence of an adequate convergent construct validity. The correlations between the scales' positive and negative dimensions were around 0.74-0.78, less than 0.85, which suggests adequate discriminant validity. Extracted mean variance and scalability were slightly higher for PSS10 than for PSS14. The results were consistent in both cities. In conclusion, the single-factor solution is not recommended for assessing stress in pregnant women. The reduced, 10-item two-factor scale appears to be more appropriate for measuring perceived stress in pregnant women.

  12. Trend analysis and selected summary statistics of annual mean streamflow for 38 selected long-term U.S. Geological Survey streamgages in Texas, water years 1916-2012

    USGS Publications Warehouse

    Asquith, William H.; Barbie, Dana L.

    2014-01-01

    Selected summary statistics (L-moments) and estimates of respective sampling variances were computed for the 35 streamgages lacking statistically significant trends. From the L-moments and estimated sampling variances, weighted means or regional values were computed for each L-moment. An example application is included demonstrating how the L-moments could be used to evaluate the magnitude and frequency of annual mean streamflow.
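
    One common way to form such weighted regional values is to weight each station inversely to its estimated sampling variance, sketched below with made-up station values (the report's actual weighting scheme may differ).

```python
# Sketch (made-up values): inverse-variance weighting of station L-moment
# estimates into a regional value, one plausible reading of "weighted means".
import numpy as np

l1 = np.array([120.0, 95.0, 150.0, 110.0])        # station first L-moments (mean annual flow)
samp_var = np.array([80.0, 40.0, 200.0, 60.0])    # estimated sampling variances

w = 1.0 / samp_var
regional = np.sum(w * l1) / np.sum(w)             # inverse-variance weighted mean
regional_se = np.sqrt(1.0 / np.sum(w))            # standard error of the weighted mean
print(f"regional L1 = {regional:.1f} +/- {regional_se:.1f}")
```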

  13. Trends in Elevated Triglyceride in Adults: United States, 2001-2012

    MedlinePlus

    ... All variance estimates accounted for the complex survey design using Taylor series linearization ( 10 ). Percentage estimates for the total adult ... al. National Health and Nutrition Examination Survey: Sample design, 2007–2010. ... KM. Taylor series methods. In: Introduction to variance estimation. 2nd ed. ...

  14. Effects of sample size on estimates of population growth rates calculated with matrix models.

    PubMed

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated whether sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
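
    A toy simulation of the Jensen's-inequality effect described here (hypothetical two-stage matrix and vital rates, not the study's data): lambda is recomputed from vital rates estimated on samples of increasing size, and the mean bias relative to the "true" lambda shrinks as the sample grows.

```python
# Sketch (illustrative only): bias in the population growth rate (lambda) when
# vital rates are estimated from small samples. The 2-stage matrix and rates
# below are hypothetical, not the cited study's data.
import numpy as np

rng = np.random.default_rng(1)
true_survival, true_fecundity = 0.5, 1.2           # assumed population vital rates

def dominant_lambda(survival, fecundity):
    """Dominant eigenvalue of a minimal 2-stage projection matrix."""
    A = np.array([[0.0, fecundity],
                  [survival, 0.0]])
    return np.max(np.abs(np.linalg.eigvals(A)))

true_lambda = dominant_lambda(true_survival, true_fecundity)

for n in (10, 50, 250, 1000):                      # individuals sampled per vital rate
    lambdas = [
        dominant_lambda(rng.binomial(n, true_survival) / n,    # sampled survival
                        rng.poisson(true_fecundity * n) / n)   # sampled fecundity
        for _ in range(5000)
    ]
    print(f"n={n:5d}  mean bias in lambda = {np.mean(lambdas) - true_lambda:+.4f}")
```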

  15. A high-resolution speleothem record of western equatorial Pacific rainfall: Implications for Holocene ENSO evolution

    NASA Astrophysics Data System (ADS)

    Chen, Sang; Hoffmann, Sharon S.; Lund, David C.; Cobb, Kim M.; Emile-Geay, Julien; Adkins, Jess F.

    2016-05-01

    The El Niño-Southern Oscillation (ENSO) is the primary driver of interannual climate variability in the tropics and subtropics. Despite substantial progress in understanding ocean-atmosphere feedbacks that drive ENSO today, relatively little is known about its behavior on centennial and longer timescales. Paleoclimate records from lakes, corals, molluscs and deep-sea sediments generally suggest that ENSO variability was weaker during the mid-Holocene (4-6 kyr BP) than the late Holocene (0-4 kyr BP). However, discrepancies amongst the records preclude a clear timeline of Holocene ENSO evolution and therefore the attribution of ENSO variability to specific climate forcing mechanisms. Here we present δ18O results from a U-Th dated speleothem in Malaysian Borneo sampled at sub-annual resolution. The δ18O of Borneo rainfall is a robust proxy of regional convective intensity and precipitation amount, both of which are directly influenced by ENSO activity. Our estimates of stalagmite δ18O variance at ENSO periods (2-7 yr) show a significant reduction in interannual variability during the mid-Holocene (3240-3380 and 5160-5230 yr BP) relative to both the late Holocene (2390-2590 yr BP) and early Holocene (6590-6730 yr BP). The Borneo results are therefore inconsistent with lacustrine records of ENSO from the eastern equatorial Pacific that show little or no ENSO variance during the early Holocene. Instead, our results support coral, mollusc and foraminiferal records from the central and eastern equatorial Pacific that show a mid-Holocene minimum in ENSO variance. Reduced mid-Holocene interannual δ18O variability in Borneo coincides with an overall minimum in mean δ18O from 3.5 to 5.5 kyr BP. Persistent warm pool convection would tend to enhance the Walker circulation during the mid-Holocene, which likely contributed to reduced ENSO variance during this period. This finding implies that both convective intensity and interannual variability in Borneo are driven by coupled air-sea dynamics that are sensitive to precessional insolation forcing. Isolating the exact mechanisms that drive long-term ENSO evolution will require additional high-resolution paleoclimatic reconstructions and further investigation of Holocene tropical climate evolution using coupled climate models.
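
    As a sketch of the band-limited variance estimate involved (a synthetic sub-annual series and assumed filter settings, not the speleothem data): the record is band-pass filtered to the 2-7 yr ENSO band and the variance of the filtered series summarizes interannual variability in that window.

```python
# Sketch (synthetic series, assumed resolution): interannual (2-7 yr) variance
# of a sub-annually resolved delta-18O record via a Butterworth band-pass filter.
import numpy as np
from scipy.signal import butter, filtfilt

dt_years = 0.25                       # assumed 4 samples per year
fs = 1.0 / dt_years                   # sampling frequency in cycles per year
t = np.arange(0.0, 200.0, dt_years)
rng = np.random.default_rng(7)
d18o = 0.3 * np.sin(2.0 * np.pi * t / 4.0) + 0.2 * rng.standard_normal(t.size)

b, a = butter(4, [1.0 / 7.0, 1.0 / 2.0], btype="bandpass", fs=fs)  # 2-7 yr band
enso_band = filtfilt(b, a, d18o)
print(f"ENSO-band variance = {enso_band.var():.3f} permil^2")
```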

  16. Spatial and temporal variance in fatty acid and stable isotope signatures across trophic levels in large river systems

    USGS Publications Warehouse

    Fritts, Andrea; Knights, Brent C.; Lafrancois, Toben D.; Bartsch, Lynn; Vallazza, Jon; Bartsch, Michelle; Richardson, William B.; Karns, Byron N.; Bailey, Sean; Kreiling, Rebecca

    2018-01-01

    Fatty acid and stable isotope signatures allow researchers to better understand food webs, food sources, and trophic relationships. Research in marine and lentic systems has indicated that the variance of these biomarkers can exhibit substantial differences across spatial and temporal scales, but this type of analysis has not been completed for large river systems. Our objectives were to evaluate variance structures for fatty acids and stable isotopes (i.e. δ13C and δ15N) of seston, threeridge mussels, hydropsychid caddisflies, gizzard shad, and bluegill across spatial scales (10s-100s km) in large rivers of the Upper Mississippi River Basin, USA that were sampled annually for two years, and to evaluate the implications of this variance on the design and interpretation of trophic studies. The highest variance for both isotopes was present at the largest spatial scale for all taxa (except seston δ15N) indicating that these isotopic signatures are responding to factors at a larger geographic level rather than being influenced by local-scale alterations. Conversely, the highest variance for fatty acids was present at the smallest spatial scale (i.e. among individuals) for all taxa except caddisflies, indicating that the physiological and metabolic processes that influence fatty acid profiles can differ substantially between individuals at a given site. Our results highlight the need to consider the spatial partitioning of variance during sample design and analysis, as some taxa may not be suitable to assess ecological questions at larger spatial scales.

  17. Protection motivation theory and physical activity: a longitudinal test among a representative population sample of Canadian adults.

    PubMed

    Plotnikoff, Ronald C; Rhodes, Ryan E; Trinh, Linda

    2009-11-01

    The purpose of this study was to examine the Protection Motivation Theory (PMT) to predict physical activity (PA) behaviour in a large, population-based sample of adults. One thousand six hundred and two randomly selected individuals completed two telephone interviews over two consecutive six-month periods assessing PMT constructs. PMT explained 35 per cent and 20 per cent of the variance in intention and behaviour respectively. Coping cognitions as moderators of threat explained 1 per cent of the variance in intention and behaviour. Age and gender as moderators of threat did not provide additional variance in the models. We conclude that salient PMT predictors (e.g. self-efficacy) may guide the development of effective PA interventions in the general population.

  18. Determinants of fast food consumption among Iranian high school students based on planned behavior theory.

    PubMed

    Sharifirad, Gholamreza; Yarmohammadi, Parastoo; Azadbakht, Leila; Morowatisharifabad, Mohammad Ali; Hassanzadeh, Akbar

    2013-01-01

    This study was conducted to identify some factors (beliefs and norms) which are related to fast food consumption among high school students in Isfahan, Iran. We used the framework of the theory of planned behavior (TPB) to predict this behavior. Cross-sectional data were available from high school students (n = 521) who were recruited by cluster randomized sampling. All of the students completed a questionnaire assessing the variables of the standard TPB model, including attitude, subjective norms, and perceived behavior control (PBC), as well as the additional variables past behavior and actual behavior control (ABC). The TPB variables explained 25.7% of the variance in intentions, with positive attitude as the strongest (β = 0.31, P < 0.001) and subjective norms as the weakest (β = 0.29, P < 0.001) determinant. Concurrently, intentions accounted for 6% of the variance in fast food consumption. Past behavior and ABC accounted for an additional 20.4% of the variance in fast food consumption. Overall, the present study suggests that the TPB model is useful in predicting beliefs and norms related to fast food consumption among adolescents. Subjective norms in the standard TPB model, and past behavior in the TPB model with additional variables (past behavior and actual behavior control), were the most powerful predictors of fast food consumption. Therefore, the TPB model may be a useful framework for planning intervention programs to reduce fast food consumption by students.

  19. Sampling frequency of ciliated protozoan microfauna for seasonal distribution research in marine ecosystems.

    PubMed

    Xu, Henglong; Yong, Jiang; Xu, Guangjian

    2015-12-30

    Sampling frequency is important to obtain sufficient information for temporal research of microfauna. To determine an optimal strategy for exploring the seasonal variation in ciliated protozoa, a dataset from the Yellow Sea, northern China was studied. Samples were collected with 24 (biweekly), 12 (monthly), 8 (bimonthly per season) and 4 (seasonally) sampling events. Compared to the 24 samplings (100%), the 12-, 8- and 4-samplings recovered 94%, 94%, and 78% of the total species, respectively. To reveal the seasonal distribution, the 8-sampling regime may capture >75% of the seasonal variance, while the traditional 4-sampling regime may explain only <65% of the total variance. With the increase of the sampling frequency, the biotic data showed stronger correlations with seasonal variables (e.g., temperature, salinity) in combination with nutrients. It is suggested that eight sampling events per year may be an optimal sampling strategy for ciliated protozoan seasonal research in marine ecosystems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Assessment of the impact of sampler change on the uncertainty related to geothermal water sampling

    NASA Astrophysics Data System (ADS)

    Wątor, Katarzyna; Mika, Anna; Sekuła, Klaudia; Kmiecik, Ewa

    2018-02-01

    The aim of this study is to assess the impact of a change of samplers on the uncertainty associated with the process of geothermal water sampling. The study was carried out on geothermal water exploited in the Podhale region, southern Poland (Małopolska province). To estimate the uncertainty associated with sampling, the results of determinations of metasilicic acid (H2SiO3) in normal and duplicate samples collected in two series were used (in each series the samples were collected by a qualified sampler). Chemical analyses were performed using the ICP-OES method in the certified Hydrogeochemical Laboratory of the Hydrogeology and Engineering Geology Department at the AGH University of Science and Technology in Krakow (Certificate of Polish Centre for Accreditation No. AB 1050). To evaluate the uncertainty arising from sampling, the empirical approach was implemented, based on the double analysis of normal and duplicate samples taken from the same well in each series of testing. The analyses of the results were done using the ROBAN software, based on robust analysis of variance (rANOVA). The research showed that, in the case of qualified and experienced samplers, the uncertainty connected with sampling can be reduced, which results in a small overall measurement uncertainty.
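
    The duplicate-sample calculation at the heart of the empirical approach can be sketched as follows (made-up concentrations; the study itself used robust ANOVA via the ROBAN software rather than the classical formula shown here).

```python
# Sketch (assumed data): the empirical duplicate-sample approach to measurement
# uncertainty. Each well is sampled twice; the within-pair mean square estimates
# the combined sampling + analysis variance (classical, non-robust version).
import numpy as np

# rows = wells, columns = (normal sample, duplicate sample), e.g. H2SiO3 in mg/L
pairs = np.array([
    [55.2, 54.8],
    [61.0, 60.1],
    [47.5, 48.3],
    [52.9, 53.4],
    [58.7, 57.9],
])

within_ms = np.mean((pairs[:, 0] - pairs[:, 1]) ** 2 / 2.0)  # within-pair mean square
s_meas = np.sqrt(within_ms)                                  # sampling + analysis SD
expanded_u = 2.0 * s_meas                                    # expanded uncertainty, k = 2
print(f"measurement SD = {s_meas:.2f} mg/L, relative expanded uncertainty = "
      f"{100.0 * expanded_u / pairs.mean():.1f}%")
```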

  1. Explaining Common Variance Shared by Early Numeracy and Literacy

    ERIC Educational Resources Information Center

    Davidse, N. J.; De Jong, M. T.; Bus, A. G.

    2014-01-01

    How can it be explained that early literacy and numeracy share variance? We specifically tested whether the correlation between four early literacy skills (rhyming, letter knowledge, emergent writing, and orthographic knowledge) and simple sums (non-symbolic and story condition) reduced after taking into account preschool attention control,…

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luis, Alfredo

    The use of Renyi entropy as an uncertainty measure alternative to variance leads to the study of states with quantum fluctuations below the levels established by Gaussian states, which are the position-momentum minimum uncertainty states according to variance. We examine the quantum properties of states with exponential wave functions, which combine reduced fluctuations with practical feasibility.

  3. The magnitude and colour of noise in genetic negative feedback systems

    PubMed Central

    Voliotis, Margaritis; Bowsher, Clive G.

    2012-01-01

    The comparative ability of transcriptional and small RNA-mediated negative feedback to control fluctuations or ‘noise’ in gene expression remains unexplored. Both autoregulatory mechanisms usually suppress the average (mean) of the protein level and its variability across cells. The variance of the number of proteins per molecule of mean expression is also typically reduced compared with the unregulated system, but is almost never below the value of one. This relative variance often substantially exceeds a recently obtained, theoretical lower limit for biochemical feedback systems. Adding the transcriptional or small RNA-mediated control has different effects. Transcriptional autorepression robustly reduces both the relative variance and persistence (lifetime) of fluctuations. Both benefits combine to reduce noise in downstream gene expression. Autorepression via small RNA can achieve more extreme noise reduction and typically has less effect on the mean expression level. However, it is often more costly to implement and is more sensitive to rate parameters. Theoretical lower limits on the relative variance are known to decrease slowly as a measure of the cost per molecule of mean expression increases. However, the proportional increase in cost to achieve substantial noise suppression can be different away from the optimal frontier—for transcriptional autorepression, it is frequently negligible. PMID:22581772

  4. Logistic and Multiple Regression: A Two-Pronged Approach to Accurately Estimate Cost Growth in Major DoD Weapon Systems

    DTIC Science & Technology

    2004-03-01

    Breusch-Pagan test for constant variance of the residuals. Using Microsoft Excel® we calculate a p-value of 0.841237. This high p-value, which is above ... our alpha of 0.05, indicates that our residuals indeed pass the Breusch-Pagan test for constant variance. In addition to the assumption tests, we ... Wilk Test for Normality – Support (Reduced) Model (OLS). Finally, we perform a Breusch-Pagan test for constant variance of the residuals. Using

  5. Unbiased Estimates of Variance Components with Bootstrap Procedures

    ERIC Educational Resources Information Center

    Brennan, Robert L.

    2007-01-01

    This article provides general procedures for obtaining unbiased estimates of variance components for any random-model balanced design under any bootstrap sampling plan, with the focus on designs of the type typically used in generalizability theory. The results reported here are particularly helpful when the bootstrap is used to estimate standard…

  6. Explaining the Sex Difference in Dyslexia

    ERIC Educational Resources Information Center

    Arnett, Anne B.; Pennington, Bruce F.; Peterson, Robin L.; Willcutt, Erik G.; DeFries, John C.; Olson, Richard K.

    2017-01-01

    Background: Males are diagnosed with dyslexia more frequently than females, even in epidemiological samples. This may be explained by greater variance in males' reading performance. Methods: We expand on previous research by rigorously testing the variance difference theory, and testing for mediation of the sex difference by cognitive correlates.…

  7. A Scheme for Regrouping WISC-R Subtests.

    ERIC Educational Resources Information Center

    Groff, Martin G.; Hubble, Larry M.

    1984-01-01

    Reviews WISC-R factor analytic findings for developing a scheme for regrouping WISC-R subtests, consisting of verbal comprehension and spatial subtests. Subtests comprising these groupings are shown to have more common variance than specific variance and cluster together consistently across the samples of WISC-R scores. (Author/JAC)

  8. Variance approximations for assessments of classification accuracy

    Treesearch

    R. L. Czaplewski

    1994-01-01

    Variance approximations are derived for the weighted and unweighted kappa statistics, the conditional kappa statistic, and conditional probabilities. These statistics are useful to assess classification accuracy, such as accuracy of remotely sensed classifications in thematic maps when compared to a sample of reference classifications made in the field. Published...

  9. Analysis of Variance with Summary Statistics in Microsoft® Excel®

    ERIC Educational Resources Information Center

    Larson, David A.; Hsu, Ko-Cheng

    2010-01-01

    Students regularly are asked to solve Single Factor Analysis of Variance problems given only the sample summary statistics (number of observations per category, category means, and corresponding category standard deviations). Most undergraduate students today use Excel for data analysis of this type. However, Excel, like all other statistical…

  10. Using Robust Variance Estimation to Combine Multiple Regression Estimates with Meta-Analysis

    ERIC Educational Resources Information Center

    Williams, Ryan

    2013-01-01

    The purpose of this study was to explore the use of robust variance estimation for combining commonly specified multiple regression models and for combining sample-dependent focal slope estimates from diversely specified models. The proposed estimator obviates traditionally required information about the covariance structure of the dependent…

  11. Variance in the chemical composition of dry beans determined from UV spectral fingerprints

    USDA-ARS?s Scientific Manuscript database

    Nine varieties of dry beans representing 5 market classes were grown in 3 states (Maryland, Michigan, and Nebraska) and sub-samples were collected for each variety (row composites from each plot). Aqueous methanol extracts were analyzed in triplicate by UV spectrophotometry. Analysis of variance-p...

  12. OCT Amplitude and Speckle Statistics of Discrete Random Media.

    PubMed

    Almasian, Mitra; van Leeuwen, Ton G; Faber, Dirk J

    2017-11-01

    Speckle, amplitude fluctuations in optical coherence tomography (OCT) images, contains information on sub-resolution structural properties of the imaged sample. Speckle statistics could therefore be utilized in the characterization of biological tissues. However, a rigorous theoretical framework relating OCT speckle statistics to structural tissue properties has yet to be developed. As a first step, we present a theoretical description of OCT speckle, relating the OCT amplitude variance to size and organization for samples of discrete random media (DRM). Starting the calculations from the size and organization of the scattering particles, we analytically find expressions for the OCT amplitude mean, amplitude variance, the backscattering coefficient and the scattering coefficient. We assume fully developed speckle and verify the validity of this assumption by experiments on controlled samples of silica microspheres suspended in water. We show that the OCT amplitude variance is sensitive to sub-resolution changes in size and organization of the scattering particles. Experimentally determined and theoretically calculated optical properties are compared and in good agreement.

  13. A novel hybrid scattering order-dependent variance reduction method for Monte Carlo simulations of radiative transfer in cloudy atmosphere

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Cui, Shengcheng; Yang, Jun; Gao, Haiyang; Liu, Chao; Zhang, Zhibo

    2017-03-01

    We present a novel hybrid scattering order-dependent variance reduction method to accelerate the convergence rate in both forward and backward Monte Carlo radiative transfer simulations involving highly forward-peaked scattering phase function. This method is built upon a newly developed theoretical framework that not only unifies both forward and backward radiative transfer in scattering-order-dependent integral equation, but also generalizes the variance reduction formalism in a wide range of simulation scenarios. In previous studies, variance reduction is achieved either by using the scattering phase function forward truncation technique or the target directional importance sampling technique. Our method combines both of them. A novel feature of our method is that all the tuning parameters used for phase function truncation and importance sampling techniques at each order of scattering are automatically optimized by the scattering order-dependent numerical evaluation experiments. To make such experiments feasible, we present a new scattering order sampling algorithm by remodeling integral radiative transfer kernel for the phase function truncation method. The presented method has been implemented in our Multiple-Scaling-based Cloudy Atmospheric Radiative Transfer (MSCART) model for validation and evaluation. The main advantage of the method is that it greatly improves the trade-off between numerical efficiency and accuracy order by order.

  14. Association between virtues and posttraumatic growth: preliminary evidence from a Chinese community sample after earthquakes

    PubMed Central

    Duan, Wenjie

    2015-01-01

    Objective. Relationship, vitality, and conscientiousness are three fundamental virtues that have been recently identified as important individual differences for health, well-being, and positive development. This cross-sectional study attempted to explore the relationship between the three constructs and post-traumatic growth (PTG) in three groups: an indirect trauma sample without post-traumatic stress disorder (PTSD), a direct trauma sample without PTSD, and a direct trauma sample with PTSD. Methods. A total of 340 community participants from Sichuan Province, Mainland China were involved in the study, most of whom had experienced the Wenchuan and Lushan earthquakes. Participants were required to complete self-reported questionnaire packages at one time point to obtain their scores on virtues (Chinese Virtues Questionnaire), PTSD (PTSD Checklist-Specific), and PTG (Post-traumatic Growth Inventory-Chinese). Results. Significant and positive correlations between the three virtues and PTG were identified (r = .39–.56; p < .01). Further stepwise regression analysis revealed that, in the indirect trauma sample, vitality explained 32% of the variance in PTG. In the direct trauma sample without PTSD, relationship and conscientiousness together explained 32% of the variance in PTG, whereas in the direct trauma sample with PTSD, only conscientiousness accounted for 31% of the variance in PTG. Conclusion. This cross-sectional investigation partly revealed the roles of different virtues in trauma contexts. Findings suggest important implications for strengths-based treatment. PMID:25870774

  15. Symptoms of acute stress in Jewish and Arab Israeli citizens during the Second Lebanon War.

    PubMed

    Yahav, Rivka; Cohen, Miri

    2007-10-01

    The "Second Lebanon War" exposed northern Israel to massive missile attacks, aimed at civilian centers, Jewish and Arab, for a period of several weeks. To assess prevalence of acute stress disorder (ASD) and acute stress symptoms (ASS) in Jewish and Arab samples, and their correlates with demographic and exposure variables. Telephone survey conducted in the third week of the second Lebanon war with a random sample of 133 Jewish and 66 Arab adult residents of northern Israel. ASD, ASS and symptoms-related impairment were measured by the Acute Stress Disorder Interview (ASDI) questionnaire, in addition to war-related exposure and demographic data. The majority of respondents experienced at least one of four symptom groups of ASD, 5.5% of the Jewish respondents and 20.3% of the Arabs met the criteria of ASD. Higher rates of Arab respondents reported symptoms of dissociation, reexperiencing and arousal, but a similar rate of avoidance was reported by the two samples. Higher mean scores of ASS and of symptoms-related impairment were reported by the Arab respondents. According to multiple regression analyses, younger age, female gender, Arab ethnicity and experiencing the war more intensely as a stressor significantly explained ASS variance, while Arab ethnicity and proximity to missiles exploding significantly explained the variance of symptoms-related impairment. A substantial rate of participants experienced symptoms of acute stress, while for only small proportion were the symptoms consistent with ASD. Higher ASD and ASS were reported by the Arab sample, calling attention to the need to build interventions to reduce the present symptoms and to help prepare for possible similar situations in the future.

  16. Sample and population exponents of generalized Taylor's law.

    PubMed

    Giometto, Andrea; Formentin, Marco; Rinaldo, Andrea; Cohen, Joel E; Maritan, Amos

    2015-06-23

    Taylor's law (TL) states that the variance V of a nonnegative random variable is a power function of its mean M; i.e., V = aM^b. TL has been verified extensively for population abundance in ecology, as well as in physics and other natural sciences. Its ubiquitous empirical verification suggests a context-independent mechanism. Sample exponents b measured empirically via the scaling of sample mean and variance typically cluster around the value b = 2. Some theoretical models of population growth, however, predict a broad range of values for the population exponent b pertaining to the mean and variance of population density, depending on details of the growth process. Is the widely reported sample exponent b ≃ 2 the result of ecological processes or could it be a statistical artifact? Here, we apply large deviations theory and finite-sample arguments to show exactly that in a broad class of growth models the sample exponent is b ≃ 2 regardless of the underlying population exponent. We derive a generalized TL in terms of sample and population exponents b_jk for the scaling of the kth vs. the jth cumulants. The sample exponent b_jk depends predictably on the number of samples, and for finite samples we obtain b_jk ≃ k/j asymptotically in time, a prediction that we verify in two empirical examples. Thus, the sample exponent b ≃ 2 may indeed be a statistical artifact and not dependent on population dynamics under conditions that we specify exactly. Given the broad class of models investigated, our results apply to many fields where TL is used although inadequately understood.
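
    A small numerical illustration of the finite-sample phenomenon (a generic multiplicative growth model with assumed parameters, not the paper's models or data): regressing log sample variance on log sample mean across blocks of finite samples yields a slope close to 2.

```python
# Sketch: the Taylor's law sample exponent b in V = a * M**b, estimated by
# regressing log sample variance on log sample mean across blocks of finite
# samples from a multiplicative growth process (all parameters illustrative).
import numpy as np

rng = np.random.default_rng(2)
n_per_block = 25
log_means, log_vars = [], []

for t in range(10, 42, 2):                       # blocks differ in elapsed time
    growth = rng.choice([0.5, 2.0], size=(n_per_block, t))
    abundance = 100.0 * growth.prod(axis=1)      # heavy-tailed population densities
    log_means.append(np.log(abundance.mean()))
    log_vars.append(np.log(abundance.var(ddof=1)))

b, log_a = np.polyfit(log_means, log_vars, 1)    # slope = sample exponent
print(f"sample exponent b ~= {b:.2f}")           # expected to cluster near 2
```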

  17. Community Health Centers: Providers, Patients, and Content of Care

    MedlinePlus

    ... Statistics (NCHS). NAMCS uses a multistage probability sample design involving geographic primary sampling units (PSUs), physician practices ... 05 level. To account for the complex sample design during variance estimation, all analyses were performed using ...

  18. Additive-dominance genetic model analyses for late-maturity alpha-amylase activity in a bread wheat factorial crossing population.

    PubMed

    Rasul, Golam; Glover, Karl D; Krishnan, Padmanaban G; Wu, Jixiang; Berzonsky, William A; Ibrahim, Amir M H

    2015-12-01

    Elevated level of late maturity α-amylase activity (LMAA) can result in low falling number scores, reduced grain quality, and downgrade of wheat (Triticum aestivum L.) class. A mating population was developed by crossing parents with different levels of LMAA. The F2 and F3 hybrids and their parents were evaluated for LMAA, and data were analyzed using the R software package 'qgtools' integrated with an additive-dominance genetic model and a mixed linear model approach. Simulated results showed high testing powers for additive and additive × environment variances, and comparatively low powers for dominance and dominance × environment variances. All variance components and their proportions to the phenotypic variance for the parents and hybrids were significant except for the dominance × environment variance. The estimated narrow-sense heritability and broad-sense heritability for LMAA were 14 and 54%, respectively. Highly significant negative additive effects for parents suggest that spring wheat cultivars 'Lancer' and 'Chester' can serve as good general combiners, and that 'Kinsman' and 'Seri-82' had negative specific combining ability in some hybrids despite their own significant positive additive effects, suggesting they can be used as parents to reduce LMAA levels. Seri-82 showed a very good general combining ability effect when used as a male parent, indicating the importance of reciprocal effects. Highly significant negative dominance effects and high-parent heterosis for hybrids demonstrated that the specific hybrid combinations Chester × Kinsman, 'Lerma52' × Lancer, Lerma52 × 'LoSprout' and 'Janz' × Seri-82 could be generated to produce cultivars with significantly reduced LMAA levels.

  19. Analysis of job satisfaction, burnout, and intent of respiratory care practitioners to leave the field or the job.

    PubMed

    Shelledy, D C; Mikles, S P; May, D F; Youtsey, J W

    1992-01-01

    Increased stress, burnout, and lack of job satisfaction may contribute to a decline in work performance, absenteeism, and intent to leave one's job or field. We undertook to determine organizational, job-specific, and personal predictors of level of burnout among respiratory care practitioners (RCPs). We also examined the relationships among burnout, job satisfaction (JS), absenteeism, and RCPs' intent to leave their job or the field. A pilot-tested assessment instrument was mailed to all active NBRC-credentialed RCPs in Georgia (n = 788). There were 458 usable returns (58% response rate). A random sample of 10% of the nonrespondents (n = 33) was then surveyed by telephone, and the results were compared to those of the mail respondents. Variables were compared to burnout and JS scores by correlational analysis, which was followed by stepwise multiple regression analyses to determine the ability of the independent variables to predict burnout and JS scores when used in combination. There were no significant differences between respondents and sampled nonrespondents in burnout scores (p = 0.56) or JS (p = 0.24). Prediction of burnout: The coefficient of multiple correlation, R2, indicated that in combination the independent variables accounted for 61% of the variance in burnout scores. The strongest predictor of burnout was job stress. Other job-related predictors of burnout were size of department, satisfaction with work, satisfaction with co-workers and co-worker support, job independence and job control, recognition by nursing, and role clarity. Personal-variable predictors were age, number of previous jobs held, social support, and intent to leave the field of respiratory care. Prediction of job satisfaction: R2 indicated that, in combination, the independent variables accounted for 63% of the variance observed in satisfaction with work, 36% of the variance observed in satisfaction with pay, 36% of the variance in satisfaction with promotions, 62% of the variance in satisfaction with supervision, and 48% of the variance in satisfaction with co-workers. Predictors of work-satisfaction level were recognition by physicians and nursing, age, burn-out level, absenteeism, and intent to leave the field. Predictors of level of satisfaction with pay were actual salary, job independence, organizational climate, ease of obtaining time off, job stress, absenteeism, intent to leave the field, and number of dependent children. Predictors of level of satisfaction with promotions were recognition by nursing, participation in decision making, job stress, intent to leave the field, past turnover rates, and absenteeism. Predictors of level of satisfaction with supervision included supervisor support, role clarity, independence, and ease of obtaining time off. The strongest predictor of level of satisfaction with co-workers was co-worker support. As overall level of JS increased, level of burnout decreased significantly (r = -0.59, p less than 0.001). As burnout level increased, increases occurred in absenteeism (r = 0.22, p less than 0.001), intent to leave the job (r = 0.48, p less than 0.001), and intent to leave the field (r = 0.51, p less than 0.001). Reduced job stress, increased job independence and job control, improved role clarity, and higher levels of JS were all associated with lower levels of burnout. Managerial attention to these factors may improve patient care and reduce absenteeism and turnover among RCPs.

  20. Assessing the relationship between computational speed and precision: a case study comparing an interpreted versus compiled programming language using a stochastic simulation model in diabetes care.

    PubMed

    McEwan, Phil; Bergenheim, Klas; Yuan, Yong; Tetlow, Anthony P; Gordon, Jason P

    2010-01-01

    Simulation techniques are well suited to modelling diseases yet can be computationally intensive. This study explores the relationship between modelled effect size, statistical precision, and efficiency gains achieved using variance reduction and an executable programming language. A published simulation model designed to model a population with type 2 diabetes mellitus based on the UKPDS 68 outcomes equations was coded in both Visual Basic for Applications (VBA) and C++. Efficiency gains due to the programming language were evaluated, as was the impact of antithetic variates to reduce variance, using predicted QALYs over a 40-year time horizon. The use of C++ provided a 75- and 90-fold reduction in simulation run time when using mean and sampled input values, respectively. For a series of 50 one-way sensitivity analyses, this would yield a total run time of 2 minutes when using C++, compared with 155 minutes for VBA when using mean input values. The use of antithetic variates typically resulted in a 53% reduction in the number of simulation replications and run time required. When drawing all input values to the model from distributions, the use of C++ and variance reduction resulted in a 246-fold improvement in computation time compared with VBA - for which the evaluation of 50 scenarios would correspondingly require 3.8 hours (C++) and approximately 14.5 days (VBA). The choice of programming language used in an economic model, as well as the methods for improving precision of model output can have profound effects on computation time. When constructing complex models, more computationally efficient approaches such as C++ and variance reduction should be considered; concerns regarding model transparency using compiled languages are best addressed via thorough documentation and model validation.
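
    The variance-reduction half of the comparison can be sketched independently of the diabetes model (the outcome function below is a toy monotone stand-in, not the UKPDS 68 equations, and all values are illustrative): antithetic draws are paired so that each pair partially cancels its own sampling noise.

```python
# Sketch: antithetic variates for Monte Carlo variance reduction. The outcome
# function is a toy monotone transform of a normal draw, not the UKPDS 68 model.
import numpy as np

rng = np.random.default_rng(3)

def outcome(z):
    """Toy QALY-like outcome, monotone in the underlying standard normal draw."""
    return 10.0 / (1.0 + np.exp(-0.5 * z))

n_pairs = 20_000
z = rng.standard_normal(n_pairs)

plain = outcome(rng.standard_normal(2 * n_pairs))   # 2n independent evaluations
antithetic = 0.5 * (outcome(z) + outcome(-z))       # n pairs, also 2n evaluations

var_plain = plain.var(ddof=1) / (2 * n_pairs)       # variance of the plain estimator
var_anti = antithetic.var(ddof=1) / n_pairs         # variance of the antithetic estimator
print(f"plain mean {plain.mean():.4f}, antithetic mean {antithetic.mean():.4f}")
print(f"estimator variance reduced by a factor of {var_plain / var_anti:.1f}")
```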

  1. Genetic landscapes GIS Toolbox: tools to map patterns of genetic divergence and diversity.

    USGS Publications Warehouse

    Vandergast, Amy G.; Perry, William M.; Lugo, Roberto V.; Hathaway, Stacie A.

    2011-01-01

    The Landscape Genetics GIS Toolbox contains tools that run in the Geographic Information System software, ArcGIS, to map genetic landscapes and to summarize multiple genetic landscapes as average and variance surfaces. These tools can be used to visualize the distribution of genetic diversity across geographic space and to study associations between patterns of genetic diversity and geographic features or other geo-referenced environmental data sets. Together, these tools create genetic landscape surfaces directly from tables containing genetic distance or diversity data and sample location coordinates, greatly reducing the complexity of building and analyzing these raster surfaces in a Geographic Information System.

  2. Predictors of Burnout Among Nurses in Taiwan.

    PubMed

    Lee, Huan-Fang; Yen, Miaofen; Fetzer, Susan; Chien, Tsair Wei

    2015-08-01

    Nurse burnout is a crucial issue for health care professionals and impacts nurse turnover and nursing shortages. Individual and situational factors are related to nurse burnout, with predictors of burnout differing among cultures and health care systems. The predictors of nurse burnout in Asia, particularly Taiwan, are unknown. The purpose of this study was to investigate the predictors of burnout among a national sample of nurses in Taiwan. A secondary data analysis of a nationwide database investigated the predictors of burnout among 1,846 nurses in Taiwan. Hierarchical regression analysis determined the relationship between predictors and burnout. Predictors of Taiwanese nurse burnout were age, physical/psychological symptoms, job satisfaction, work engagement, and work environment. The most significant predictors were physical/psychological symptoms and work engagement. The variables explained 35%, 39%, and 18% of the variance in emotional exhaustion, personal accomplishment, and depersonalization, respectively, for 54% of the total variance in burnout. Individual characteristics and nurse self-awareness, especially work engagement, can impact Taiwanese nurses' burnout. Nurse burnout predictors provide administrators with information to develop strategies, including education programs and support services, to reduce nurse burnout.

  3. Radiation Transport in Random Media With Large Fluctuations

    NASA Astrophysics Data System (ADS)

    Olson, Aaron; Prinja, Anil; Franke, Brian

    2017-09-01

    Neutral particle transport in media exhibiting large and complex material property spatial variation is modeled by representing cross sections as lognormal random functions of space, generated through a nonlinear memoryless transformation of a Gaussian process with covariance uniquely determined by the covariance of the cross section. A Karhunen-Loève decomposition of the Gaussian process is implemented to efficiently generate realizations of the random cross sections, and Woodcock Monte Carlo is used to transport particles on each realization and generate benchmark solutions for the mean and variance of the particle flux as well as probability densities of the particle reflectance and transmittance. A computationally efficient stochastic collocation method is implemented to directly compute the statistical moments such as the mean and variance, while a polynomial chaos expansion in conjunction with stochastic collocation provides a convenient surrogate model that also produces probability densities of output quantities of interest. Extensive numerical testing demonstrates that use of stochastic reduced-order modeling provides an accurate and cost-effective alternative to random sampling for particle transport in random media.
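
    A compact sketch of the sampling step described here (a one-dimensional slab, an exponential covariance, and all parameter values are assumed): the Karhunen-Loève expansion of the underlying Gaussian process is truncated, mode amplitudes are drawn, and the result is exponentiated to give lognormal cross-section realizations.

```python
# Sketch (assumed covariance and parameters): Karhunen-Loeve expansion of a
# Gaussian process, exponentiated to give lognormal random cross sections on a
# 1-D slab. This is not the cited implementation.
import numpy as np

n_cells, length, corr_len = 200, 10.0, 1.0
mean_g, sigma_g = 0.0, 0.5                     # log-space mean and std dev (assumed)
x = np.linspace(0.0, length, n_cells)

# Covariance matrix of the underlying Gaussian process (exponential kernel)
cov = sigma_g**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# Karhunen-Loeve modes are the eigenpairs of the covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
n_modes = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), 0.95)) + 1  # keep ~95% of variance

rng = np.random.default_rng(6)
def realization():
    xi = rng.standard_normal(n_modes)                     # independent mode amplitudes
    g = mean_g + eigvecs[:, :n_modes] @ (np.sqrt(eigvals[:n_modes]) * xi)
    return np.exp(g)                                      # lognormal cross section

sigma_t = realization()
print(f"kept {n_modes} KL modes; mean cross section = {sigma_t.mean():.3f}")
```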

  4. Work-family conflict and well-being in university employees.

    PubMed

    Winefield, Helen R; Boyd, Carolyn; Winefield, Anthony H

    2014-01-01

    This is one of the first reported studies to have reviewed the role of work-family conflict in university employees, both academic and nonacademic. The goal of this research was to examine the role of work-family conflict as a mediator of relationships between features of the work environment and worker well-being and organizational outcomes. A sample of 3,326 Australian university workers responded to an online survey. Work-family conflict added substantially to the explained variance in physical symptoms and psychological strain after taking account of job demands and control, and to a lesser extent to the variance in job performance. However, it had no extra impact on organizational commitment, which was most strongly predicted by job autonomy. Despite differing in workloads and work-family conflict, academic ("faculty") and nonacademic staff demonstrated similar predictors of worker and organizational outcomes. Results suggest two pathways through which management policies may be effective in improving worker well-being and productivity: improving job autonomy has mainly direct effects, while reducing job demands is mediated by consequent reductions in work-family conflict.

  5. Development of a survey instrument to measure connectivity to evaluate national public health preparedness and response performance.

    PubMed

    Dorn, Barry C; Savoia, Elena; Testa, Marcia A; Stoto, Michael A; Marcus, Leonard J

    2007-01-01

    Survey instruments for evaluating public health preparedness have focused on measuring the structure and capacity of local, state, and federal agencies, rather than linkages among structure, process, and outcomes. To focus evaluation on the latter, we evaluated the linkages among individuals, organizations, and systems using the construct of "connectivity" and developed a measurement instrument. Results from focus groups of emergency preparedness first responders generated 62 items used in the development sample of 187 respondents. Item reduction and factor analyses were conducted to confirm the scale's components. The 62 items were reduced to 28. Five scales explained 70% of the total variance (number of items, percent variance explained, Cronbach's alpha): connectivity with the system (8, 45%, 0.94), coworkers (7, 7%, 0.91), organization (7, 12%, 0.93), and perceptions (6, 6%, 0.90). Discriminant validity was found to be consistent with the factor structure. We developed a Connectivity Measurement Tool for the public health workforce consisting of a 34-item questionnaire found to be a reliable measure of connectivity with preliminary evidence of construct validity.

  6. Quantification of the overall measurement uncertainty associated with the passive moss biomonitoring technique: Sample collection and processing.

    PubMed

    Aboal, J R; Boquete, M T; Carballeira, A; Casanova, A; Debén, S; Fernández, J A

    2017-05-01

    In this study we examined 6080 data points gathered by our research group during more than 20 years of research on the moss biomonitoring technique, in order to quantify the variability generated by different aspects of the protocol and to calculate the overall measurement uncertainty associated with the technique. The median variance of the concentrations of different pollutants measured in moss tissues attributed to the different methodological aspects was high, reaching values of 2851 (ng·g⁻¹)² for Cd (sample treatment), 35.1 (μg·g⁻¹)² for Cu (sample treatment), and 861.7 (ng·g⁻¹)² for Hg (material selection). These variances correspond to standard deviations that constitute 67%, 126% and 59% of the regional background levels of these elements in the study region. The overall measurement uncertainty associated with the worst experimental protocol (5 subsamples, refrigerated, washed, 5 × 5 m size of the sampling area and once a year sampling) was between 2 and 6 times higher than that associated with the optimal protocol (30 subsamples, dried, unwashed, 20 × 20 m size of the sampling area and once a week sampling), and between 1.5 and 7 times higher than that associated with the standardized protocol (30 subsamples and once a year sampling). The overall measurement uncertainty associated with the standardized protocol could generate variations of between 14 and 47% in the regional background levels of Cd, Cu, Hg, Pb and Zn in the study area and much higher levels of variation in polluted sampling sites. We demonstrated that although the overall measurement uncertainty of the technique is still high, it can be reduced by using already well defined aspects of the protocol. Further standardization of the protocol together with application of the information on the overall measurement uncertainty would improve the reliability and comparability of the results of different biomonitoring studies, thus extending use of the technique beyond the context of scientific research. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Systematic sampling for suspended sediment

    Treesearch

    Robert B. Thomas

    1991-01-01

    Abstract - Because of high costs or complex logistics, scientific populations cannot be measured entirely and must be sampled. Accepted scientific practice holds that sample selection be based on statistical principles to assure objectivity when estimating totals and variances. Probability sampling--obtaining samples with known probabilities--is the only method that...

  8. Visualizing the Sample Standard Deviation

    ERIC Educational Resources Information Center

    Sarkar, Jyotirmoy; Rashid, Mamunur

    2017-01-01

    The standard deviation (SD) of a random sample is defined as the square-root of the sample variance, which is the "mean" squared deviation of the sample observations from the sample mean. Here, we interpret the sample SD as the square-root of twice the mean square of all pairwise half deviations between any two sample observations. This…
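
    A quick numerical check of this pairwise interpretation, under the usual n - 1 definition of the sample variance; the data are arbitrary.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
x = rng.normal(size=30)

# Usual definition: square root of the sample variance (ddof=1)
sd_usual = np.std(x, ddof=1)

# Pairwise interpretation: SD = sqrt(2 * mean of squared half deviations),
# where a half deviation is (x_i - x_j) / 2 over all distinct pairs
half_devs = np.array([(xi - xj) / 2.0 for xi, xj in combinations(x, 2)])
sd_pairwise = np.sqrt(2.0 * np.mean(half_devs**2))

print(np.isclose(sd_usual, sd_pairwise))   # True
```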

  9. TESTING THE EFFECTS OF EXPANSION ON SOLAR WIND TURBULENCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vech, Daniel; Chen, Christopher H K, E-mail: dvech@umich.edu

    2016-11-20

    We present a multi-spacecraft approach to test the predictions of recent studies on the effect of solar wind expansion on the radial spectral, variance, and local 3D anisotropies of the turbulence. We found that on small scales (5000–10,000 km) the power levels of the B-trace structure functions do not depend on the sampling direction with respect to the radial, suggesting that on this scale the effect of expansion is small, possibly due to fast turbulent timescales. On larger scales (110–135 R_E), the fluctuations of the radial magnetic field component are reduced by ∼20% compared to the transverse (perpendicular to radial) ones, which could be due to expansion confining the fluctuations into the plane perpendicular to radial. For the local 3D spectral anisotropy, the B-trace structure functions showed dependence on the sampling direction with respect to radial. The anisotropy in the perpendicular plane is reduced when the increments are taken perpendicular with respect to radial, which could be an effect of expansion.

  10. Evaluation of the Acceptance Journeys Social Marketing Campaign to Reduce Homophobia.

    PubMed

    Hull, Shawnika J; Davis, Catasha R; Hollander, Gary; Gasiorowicz, Mari; Jeffries, William L; Gray, Simone; Bertolli, Jeanne; Mohr, Anneke

    2017-01-01

    To evaluate the effectiveness of the Acceptance Journeys social marketing campaign to reduce homophobia in the Black community in Milwaukee, Wisconsin. We assessed the campaign's effectiveness using a rolling cross-sectional survey. Data were collected annually online between 2011 and 2015. Each year, a unique sample of Black and White adults, aged 30 years and older, was surveyed in the treatment city (Milwaukee) and in 2 comparison cities that did not have antihomophobia campaigns (St. Louis, MO, and Cleveland, OH; for total sample, n = 3592). Black self-identification and Milwaukee residence were significantly associated with exposure to the campaign, suggesting successful message targeting. The relationship between exposure and acceptance of gay men was significantly mediated through attitudes toward gay men, perceptions of community acceptance, and perceptions of the impact of stigma on gay men, but not through rejection of stereotypes. This model accounted for 39% of variance in acceptance. This evidence suggests that the Acceptance Journeys model of social marketing may be a promising strategy for addressing homophobia in US Black communities.

  11. Teachers' emotional experiences and exhaustion as predictors of emotional labor in the classroom: an experience sampling study.

    PubMed

    Keller, Melanie M; Chang, Mei-Lin; Becker, Eva S; Goetz, Thomas; Frenzel, Anne C

    2014-01-01

    Emotional exhaustion (EE) is the core component in the study of teacher burnout, with significant impact on teachers' professional lives. Yet, its relation to teachers' emotional experiences and emotional labor (EL) during instruction remains unclear. Thirty-nine German secondary teachers were surveyed about their EE (trait), and via the experience sampling method on their momentary (state; N = 794) emotional experiences (enjoyment, anxiety, anger) and momentary EL (suppression, faking). Teachers reported that in 99 and 39% of all lessons, they experienced enjoyment and anger, respectively, whereas they experienced anxiety less frequently. Teachers reported suppressing or faking their emotions during roughly a third of all lessons. Furthermore, EE was reflected in teachers' decreased experiences of enjoyment and increased experiences of anger. On an intra-individual level, all three emotions predict EL, whereas on an inter-individual level, only anger evokes EL. Explained variances in EL (within: 39%, between: 67%) stress the relevance of emotions in teaching and within the context of teacher burnout. Beyond implying the importance of reducing anger, our findings suggest the potential of enjoyment lessening EL and thereby reducing teacher burnout.

  12. Cross-bispectrum computation and variance estimation

    NASA Technical Reports Server (NTRS)

    Lii, K. S.; Helland, K. N.

    1981-01-01

    A method for the estimation of cross-bispectra of discrete real time series is developed. The asymptotic variance properties of the bispectrum are reviewed, and a method for the direct estimation of bispectral variance is given. The symmetry properties are described which minimize the computations necessary to obtain a complete estimate of the cross-bispectrum in the right-half-plane. A procedure is given for computing the cross-bispectrum by subdividing the domain into rectangular averaging regions which help reduce the variance of the estimates and allow easy application of the symmetry relationships to minimize the computational effort. As an example of the procedure, the cross-bispectrum of a numerically generated, exponentially distributed time series is computed and compared with theory.
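
    The sketch below shows a generic direct (FFT-based) cross-bispectrum estimate in which raw bispectral products are averaged over non-overlapping segments to reduce estimator variance, loosely analogous to the rectangular averaging regions described above; it does not reproduce the authors' symmetry-based reduction to the right-half-plane, and all names and signals are illustrative.

```python
import numpy as np

def cross_bispectrum(x, y, z, seg_len):
    """Direct cross-bispectrum estimate B(f1, f2) ~ E[X(f1) Y(f2) Z*(f1+f2)],
    averaged over non-overlapping segments to reduce estimator variance."""
    n_seg = len(x) // seg_len
    B = np.zeros((seg_len, seg_len), dtype=complex)
    f1 = np.arange(seg_len)[:, None]
    f2 = np.arange(seg_len)[None, :]
    for k in range(n_seg):
        sl = slice(k * seg_len, (k + 1) * seg_len)
        X, Y, Z = np.fft.fft(x[sl]), np.fft.fft(y[sl]), np.fft.fft(z[sl])
        B += X[f1] * Y[f2] * np.conj(Z[(f1 + f2) % seg_len])
    return B / n_seg

rng = np.random.default_rng(3)
s = rng.exponential(size=4096) - 1.0          # non-Gaussian (skewed) series
x = s + 0.1 * rng.normal(size=s.size)
B = cross_bispectrum(x, x, s, seg_len=128)    # cross-bispectrum of x, x, s
print(np.abs(B).max())
```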

  13. Testing a theory of organizational culture, climate and youth outcomes in child welfare systems: a United States national study.

    PubMed

    Williams, Nathaniel J; Glisson, Charles

    2014-04-01

    Theories of organizational culture and climate (OCC) applied to child welfare systems hypothesize that strategic dimensions of organizational culture influence organizational climate and that OCC explains system variance in youth outcomes. This study provides the first structural test of the direct and indirect effects of culture and climate on youth outcomes in a national sample of child welfare systems and isolates specific culture and climate dimensions most associated with youth outcomes. The study applies multilevel path analysis (ML-PA) to a U.S. nationwide sample of 2,380 youth in 73 child welfare systems participating in the second National Survey of Child and Adolescent Well-being. Youths were selected in a national, two-stage, stratified random sample design. Youths' psychosocial functioning was assessed by caregivers' responses to the Child Behavior Checklist at intake and at 18-month follow-up. OCC was assessed by front-line caseworkers' (N=1,740) aggregated responses to the Organizational Social Context measure. Comparison of the a priori and subsequent trimmed models confirmed a reduced model that excluded rigid organizational culture and explained 70% of the system variance in youth outcomes. Controlling for youth- and system-level covariates, systems with more proficient and less resistant organizational cultures exhibited more functional, more engaged, and less stressful climates. Systems with more proficient cultures and more engaged, more functional, and more stressful climates exhibited superior youth outcomes. Findings suggest child welfare administrators can support service effectiveness with interventions that improve specific dimensions of culture and climate. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Testing a theory of organizational culture, climate and youth outcomes in child welfare systems: A United States national study

    PubMed Central

    Williams, Nathaniel J.; Glisson, Charles

    2013-01-01

    Theories of organizational culture and climate (OCC) applied to child welfare systems hypothesize that strategic dimensions of organizational culture influence organizational climate and that OCC explains system variance in youth outcomes. This study provides the first structural test of the direct and indirect effects of culture and climate on youth outcomes in a national sample of child welfare systems and isolates specific culture and climate dimensions most associated with youth outcomes. The study applies multilevel path analysis (ML-PA) to a U.S. nationwide sample of 2,380 youth in 73 child welfare systems participating in the second National Survey of Child and Adolescent Well-being. Youths were selected in a national, two-stage, stratified random sample design. Youths’ psychosocial functioning was assessed by caregivers’ responses to the Child Behavior Checklist at intake and at 18-month follow-up. OCC was assessed by front-line caseworkers’ (N=1,740) aggregated responses to the Organizational Social Context measure. Comparison of the a priori and subsequent trimmed models confirmed a reduced model that excluded rigid organizational culture and explained 70% of the system variance in youth outcomes. Controlling for youth- and system-level covariates, systems with more proficient and less resistant organizational cultures exhibited more functional, more engaged, and less stressful climates. Systems with more proficient cultures and more engaged, more functional, and more stressful climates exhibited superior youth outcomes. Findings suggest child welfare administrators can support service effectiveness with interventions that improve specific dimensions of culture and climate. PMID:24094999

  15. Speech comprehension and emotional/behavioral problems in children with specific language impairment (SLI).

    PubMed

    Gregl, Ana; Kirigin, Marin; Bilać, Snjeiana; Sućeska Ligutić, Radojka; Jaksić, Nenad; Jakovljević, Miro

    2014-09-01

    This research aims to investigate differences in speech comprehension between children with specific language impairment (SLI) and their developmentally normal peers, and the relationship between speech comprehension and emotional/behavioral problems on Achenbach's Child Behavior Checklist (CBCL) and Caregiver Teacher's Report Form (C-TRF) according to the DSM-IV. The clinical sample comprised 97 preschool children with SLI, while the peer sample comprised 60 developmentally normal preschool children. Children with SLI had significant delays in speech comprehension and more emotional/behavioral problems than peers. In children with SLI, speech comprehension significantly correlated with scores on the Attention Deficit/Hyperactivity Problems (CBCL and C-TRF) and Pervasive Developmental Problems (CBCL) scales (p < 0.05). In the peer sample, speech comprehension significantly correlated with scores on the Affective Problems and Attention Deficit/Hyperactivity Problems (C-TRF) scales. Regression analysis showed that 12.8% of variance in speech comprehension is saturated with 5 CBCL variables, of which Attention Deficit/Hyperactivity (beta = -0.281) and Pervasive Developmental Problems (beta = -0.280) are statistically significant (p < 0.05). In the reduced regression model, Attention Deficit/Hyperactivity explains 7.3% of the variance in speech comprehension (beta = -0.270, p < 0.01). It is possible that, to a certain degree, the same neurodevelopmental process lies in the background of problems with speech comprehension, problems with attention and hyperactivity, and pervasive developmental problems. This study confirms the importance of triage for behavioral problems and attention training in the rehabilitation of children with SLI and children with normal language development that exhibit ADHD symptoms.

  16. Regional Expertise and Culture Proficiency

    DTIC Science & Technology

    2012-09-01

    Tool for Planners; Developing and Testing REC Rating ... rated each competency and each behavior as more important than those respondents who had been deployed. We computed an independent sample t-test to ... each group; however, it was likely that the homogeneity of variance assumption of the t-test was violated. Therefore, we pooled the variance across

  17. Empirical data and the variance-covariance matrix for the 1969 Smithsonian Standard Earth (2)

    NASA Technical Reports Server (NTRS)

    Gaposchkin, E. M.

    1972-01-01

    The empirical data used in the 1969 Smithsonian Standard Earth (2) are presented. The variance-covariance matrix, or the normal equations, used for correlation analysis is considered. The format and contents of the matrix, available on magnetic tape, are described and a sample printout is given.

  18. Design and analysis of three-arm trials with negative binomially distributed endpoints.

    PubMed

    Mütze, Tobias; Munk, Axel; Friede, Tim

    2016-02-20

    A three-arm clinical trial design with an experimental treatment, an active control, and a placebo control, commonly referred to as the gold standard design, enables testing of non-inferiority or superiority of the experimental treatment compared with the active control. In this paper, we propose methods for designing and analyzing three-arm trials with negative binomially distributed endpoints. In particular, we develop a Wald-type test with a restricted maximum-likelihood variance estimator for testing non-inferiority or superiority. For this test, sample size and power formulas as well as optimal sample size allocations will be derived. The performance of the proposed test will be assessed in an extensive simulation study with regard to type I error rate, power, sample size, and sample size allocation. For the purpose of comparison, Wald-type statistics with a sample variance estimator and an unrestricted maximum-likelihood estimator are included in the simulation study. We found that the proposed Wald-type test with a restricted variance estimator performed well across the considered scenarios and is therefore recommended for application in clinical trials. The methods proposed are motivated and illustrated by a recent clinical trial in multiple sclerosis. The R package ThreeArmedTrials, which implements the methods discussed in this paper, is available on CRAN. Copyright © 2015 John Wiley & Sons, Ltd.

  19. A Practical Methodology for Quantifying Random and Systematic Components of Unexplained Variance in a Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Deloach, Richard; Obara, Clifford J.; Goodman, Wesley L.

    2012-01-01

    This paper documents a check standard wind tunnel test conducted in the Langley 0.3-Meter Transonic Cryogenic Tunnel (0.3M TCT) that was designed and analyzed using the Modern Design of Experiments (MDOE). The test was designed to partition the unexplained variance of typical wind tunnel data samples into two constituent components, one attributable to ordinary random error, and one attributable to systematic error induced by covariate effects. Covariate effects in wind tunnel testing are discussed, with examples. The impact of systematic (non-random) unexplained variance on the statistical independence of sequential measurements is reviewed. The corresponding correlation among experimental errors is discussed, as is the impact of such correlation on experimental results generally. The specific experiment documented herein was organized as a formal test for the presence of unexplained variance in representative samples of wind tunnel data, in order to quantify the frequency with which such systematic error was detected, and its magnitude relative to ordinary random error. Levels of systematic and random error reported here are representative of those quantified in other facilities, as cited in the references.
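
    One simple way to separate random from systematic components of unexplained variance, assuming replicated check-standard measurements grouped into blocks (for example, by test session), is a one-way random-effects decomposition in which the between-block mean square in excess of the within-block mean square is attributed to covariate-induced (systematic) error. This is a hedged sketch of that general idea, not the MDOE analysis used in the paper, and the data are hypothetical.

```python
import numpy as np

def partition_unexplained_variance(blocks):
    """Split replicate-measurement variance into a within-block (random)
    component and a between-block (systematic, covariate-induced) component
    using a one-way random-effects ANOVA decomposition."""
    blocks = [np.asarray(b, dtype=float) for b in blocks]
    k = len(blocks)
    n = len(blocks[0])                       # assumes equal block sizes
    grand = np.mean(np.concatenate(blocks))
    ms_within = np.mean([b.var(ddof=1) for b in blocks])
    ms_between = n * sum((b.mean() - grand) ** 2 for b in blocks) / (k - 1)
    var_random = ms_within
    var_systematic = max((ms_between - ms_within) / n, 0.0)
    return var_random, var_systematic

# Hypothetical check-standard data: 5 blocks of 8 repeat measurements,
# with a slow drift (covariate effect) shifting the block means
rng = np.random.default_rng(4)
data = [0.02 * j + rng.normal(scale=0.05, size=8) for j in range(5)]
print(partition_unexplained_variance(data))
```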

  20. Replication of a gene-environment interaction Via Multimodel inference: additive-genetic variance in adolescents' general cognitive ability increases with family-of-origin socioeconomic status.

    PubMed

    Kirkpatrick, Robert M; McGue, Matt; Iacono, William G

    2015-03-01

    The present study of general cognitive ability attempts to replicate and extend previous investigations of a biometric moderator, family-of-origin socioeconomic status (SES), in a sample of 2,494 pairs of adolescent twins, non-twin biological siblings, and adoptive siblings assessed with individually administered IQ tests. We hypothesized that SES would covary positively with additive-genetic variance and negatively with shared-environmental variance. Important potential confounds unaddressed in some past studies, such as twin-specific effects, assortative mating, and differential heritability by trait level, were found to be negligible. In our main analysis, we compared models by their sample-size corrected AIC, and base our statistical inference on model-averaged point estimates and standard errors. Additive-genetic variance increased with SES-an effect that was statistically significant and robust to model specification. We found no evidence that SES moderated shared-environmental influence. We attempt to explain the inconsistent replication record of these effects, and provide suggestions for future research.

  1. Replication of a Gene-Environment Interaction via Multimodel Inference: Additive-Genetic Variance in Adolescents’ General Cognitive Ability Increases with Family-of-Origin Socioeconomic Status

    PubMed Central

    Kirkpatrick, Robert M.; McGue, Matt; Iacono, William G.

    2015-01-01

    The present study of general cognitive ability attempts to replicate and extend previous investigations of a biometric moderator, family-of-origin socioeconomic status (SES), in a sample of 2,494 pairs of adolescent twins, non-twin biological siblings, and adoptive siblings assessed with individually administered IQ tests. We hypothesized that SES would covary positively with additive-genetic variance and negatively with shared-environmental variance. Important potential confounds unaddressed in some past studies, such as twin-specific effects, assortative mating, and differential heritability by trait level, were found to be negligible. In our main analysis, we compared models by their sample-size corrected AIC, and base our statistical inference on model-averaged point estimates and standard errors. Additive-genetic variance increased with SES—an effect that was statistically significant and robust to model specification. We found no evidence that SES moderated shared-environmental influence. We attempt to explain the inconsistent replication record of these effects, and provide suggestions for future research. PMID:25539975

  2. Convenience samples and caregiving research: how generalizable are the findings?

    PubMed

    Pruchno, Rachel A; Brill, Jonathan E; Shands, Yvonne; Gordon, Judith R; Genderson, Maureen Wilson; Rose, Miriam; Cartwright, Francine

    2008-12-01

    We contrast characteristics of respondents recruited using convenience strategies with those of respondents recruited by random digit dial (RDD) methods. We compare sample variances, means, and interrelationships among variables generated from the convenience and RDD samples. Women aged 50 to 64 who work full time and provide care to a community-dwelling older person were recruited using either RDD (N = 55) or convenience methods (N = 87). Telephone interviews were conducted using reliable, valid measures of demographics, characteristics of the care recipient, help provided to the care recipient, evaluations of caregiver-care recipient relationship, and outcomes common to caregiving research. Convenience and RDD samples had similar variances on 68.4% of the examined variables. We found significant mean differences for 63% of the variables examined. Bivariate correlations suggest that one would reach different conclusions using the convenience and RDD sample data sets. Researchers should use convenience samples cautiously, as they may have limited generalizability.

  3. Selecting band combinations with thematic mapper data

    NASA Technical Reports Server (NTRS)

    Sheffield, C. A.

    1983-01-01

    A problem arises in making color composite images because there are 210 different possible color presentations of TM three-band images. A method is given for reducing that 210 to a single choice, decided by the statistics of a scene or subscene, and taking into full account any correlations that exist between different bands. Instead of using total variance as the measure for information content of the band triplets, the ellipsoid of maximum volume is selected which discourages selection of bands with high correlation. The band triplet is obtained by computing and ranking in order the determinants of each 3 x 3 principal submatrix of the original matrix M. After selection of the best triplet, the assignment of colors is made by using the actual variances (the diagonal elements of M): green (maximum variance), red (second largest variance), blue (smallest variance).
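
    A small sketch of the maximum-volume selection rule as described: rank the determinants of the 3 x 3 principal submatrices of the band covariance matrix, take the largest, and assign green, red, and blue by decreasing band variance. The covariance matrix here is randomly generated purely for illustration.

```python
import numpy as np
from itertools import combinations

def best_band_triplet(M):
    """Choose the three-band combination whose 3x3 principal submatrix of the
    band covariance matrix M has the largest determinant (maximum-volume
    ellipsoid), then assign display colors by the bands' own variances."""
    n = M.shape[0]
    best = max(combinations(range(n), 3),
               key=lambda t: np.linalg.det(M[np.ix_(t, t)]))
    variances = np.diag(M)[list(best)]
    order = np.argsort(variances)[::-1]              # descending variance
    colors = dict(zip(["green", "red", "blue"], np.array(best)[order]))
    return best, colors

# Hypothetical covariance matrix for 7 TM bands
rng = np.random.default_rng(5)
A = rng.normal(size=(7, 7))
M = A @ A.T                                          # symmetric positive definite
print(best_band_triplet(M))
```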

  4. Optimal two-phase sampling design for comparing accuracies of two binary classification rules.

    PubMed

    Xu, Huiping; Hui, Siu L; Grannis, Shaun

    2014-02-10

    In this paper, we consider the design for comparing the performance of two binary classification rules, for example, two record linkage algorithms or two screening tests. Statistical methods are well developed for comparing these accuracy measures when the gold standard is available for every unit in the sample, or in a two-phase study when the gold standard is ascertained only in the second phase in a subsample using a fixed sampling scheme. However, these methods do not attempt to optimize the sampling scheme to minimize the variance of the estimators of interest. In comparing the performance of two classification rules, the parameters of primary interest are the difference in sensitivities, specificities, and positive predictive values. We derived the analytic variance formulas for these parameter estimates and used them to obtain the optimal sampling design. The efficiency of the optimal sampling design is evaluated through an empirical investigation that compares the optimal sampling with simple random sampling and with proportional allocation. Results of the empirical study show that the optimal sampling design is similar for estimating the difference in sensitivities and in specificities, and both achieve a substantial amount of variance reduction with an over-sample of subjects with discordant results and under-sample of subjects with concordant results. A heuristic rule is recommended when there is no prior knowledge of individual sensitivities and specificities, or the prevalence of the true positive findings in the study population. The optimal sampling is applied to a real-world example in record linkage to evaluate the difference in classification accuracy of two matching algorithms. Copyright © 2013 John Wiley & Sons, Ltd.

  5. A Probabilistic Mass Estimation Algorithm for a Novel 7- Channel Capacitive Sample Verification Sensor

    NASA Technical Reports Server (NTRS)

    Wolf, Michael

    2012-01-01

    A document describes an algorithm created to estimate the mass placed on a sample verification sensor (SVS) designed for lunar or planetary robotic sample return missions. A novel SVS measures the capacitance between a rigid bottom plate and an elastic top membrane in seven locations. As additional sample material (soil and/or small rocks) is placed on the top membrane, the deformation of the membrane increases the capacitance. The mass estimation algorithm addresses both the calibration of each SVS channel, and also addresses how to combine the capacitances read from each of the seven channels into a single mass estimate. The probabilistic approach combines the channels according to the variance observed during the training phase, and provides not only the mass estimate, but also a value for the certainty of the estimate. SVS capacitance data is collected for known masses under a wide variety of possible loading scenarios, though in all cases, the distribution of sample within the canister is expected to be approximately uniform. A capacitance-vs-mass curve is fitted to this data, and is subsequently used to determine the mass estimate for the single channel's capacitance reading during the measurement phase. This results in seven different mass estimates, one for each SVS channel. Moreover, the variance of the calibration data is used to place a Gaussian probability distribution function (pdf) around this mass estimate. To blend these seven estimates, the seven pdfs are combined into a single Gaussian distribution function, providing the final mean and variance of the estimate. This blending technique essentially takes the final estimate as an average of the estimates of the seven channels, weighted by the inverse of the channel's variance.
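
    Reading the blend as a standard inverse-variance-weighted combination of per-channel Gaussian estimates, a minimal sketch might look as follows; the channel means and variances are hypothetical, not sensor data.

```python
import numpy as np

def fuse_channel_estimates(means, variances):
    """Blend per-channel Gaussian mass estimates into a single estimate,
    weighting each channel by the inverse of its (calibration) variance."""
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances
    fused_var = 1.0 / weights.sum()
    fused_mean = fused_var * np.sum(weights * means)
    return fused_mean, fused_var

# Hypothetical per-channel mass estimates (grams) and calibration variances
means = [101.0, 98.5, 100.2, 103.1, 99.4, 100.9, 97.8]
variances = [4.0, 9.0, 2.5, 16.0, 3.0, 5.0, 12.0]
print(fuse_channel_estimates(means, variances))
```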

  6. Attributing variance in supportive care needs during cancer: culture-service, and individual differences, before clinical factors.

    PubMed

    Fielding, Richard; Lam, Wendy Wing Tak; Shun, Shiow Ching; Okuyama, Toru; Lai, Yeur Hur; Wada, Makoto; Akechi, Tatsuo; Li, Wylie Wai Yee

    2013-01-01

    Studies using the Supportive Care Needs Survey (SCNS) report high levels of unmet supportive care needs (SCNs) in psychological and less-so physical & daily living domains, interpreted as reflecting disease/treatment-coping deficits. However, service and culture differences may account for unmet SCNs variability. We explored if service and culture differences better account for observed SCNs patterns. Hong Kong (n = 180), Taiwanese (n = 263) and Japanese (n = 109) CRC patients' top 10 ranked SCNS-34 items were contrasted. Mean SCNS-34 domain scores were compared by sample and treatment status, then adjusted for sample composition, disease stage and treatment status using multivariate hierarchical regression. All samples were assessed at comparable time-points. SCNs were most prevalent among Japanese and least among Taiwanese patients. Japanese patients emphasized Psychological (domain mean = 40.73) and Health systems and information (HSI) (38.61) SCN domains, whereas Taiwanese and Hong Kong patients emphasized HSI (27.41; 32.92) and Patient care & support (PCS) (19.70; 18.38) SCN domains. Mean Psychological domain scores differed: Hong Kong = 9.72, Taiwan = 17.84 and Japan = 40.73 (p<0.03-0.001, Bonferroni). Other SCN domains differed only between Chinese and Japanese samples (all p<0.001). Treatment status differentiated Taiwanese more starkly than Hong Kong patients. After adjustment, sample origin accounted for most variance in SCN domain scores (p<0.001), followed by age (p = 0.01-0.001) and employment status (p = 0.01-0.001). Treatment status and Disease stage, though retained, accounted for least variance. Overall accounted variance remained low. Health service and/or cultural influences, age and occupation differences, and less so clinical factors, differentially account for significant variation in published studies of SCNs.

  7. Assessing how much couples work at their relationship: the behavioral self-regulation for effective relationships scale.

    PubMed

    Wilson, Keithia L; Charker, Jill; Lizzio, Alf; Halford, Kim; Kimlin, Siobhan

    2005-09-01

    It is widely believed that satisfying couple relationships require work by the partners. The authors equated the concept of work to relationship self-regulation and developed a scale to assess this construct. A factor analysis of the scale in a sample of 187 newlywed couples showed it comprised 2 factors of relationship strategies and effort. The factor structure was replicated in an independent sample of 97 newlywed couples. In both samples the scale had good internal consistency and high convergent validity between self- and partner-report forms. Self-regulation accounted for substantial variance in relationship satisfaction in both newlywed samples and in a 3rd sample of 61 long-married couples. The self-regulation and satisfaction association was independent of mood or self-report common method variance. (c) 2005 APA, all rights reserved

  8. Modelling heterogeneity variances in multiple treatment comparison meta-analysis--are informative priors the better solution?

    PubMed

    Thorlund, Kristian; Thabane, Lehana; Mills, Edward J

    2013-01-11

    Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variance for all involved treatment comparisons is equal (i.e., the 'common variance' assumption). This approach 'borrows strength' for heterogeneity estimation across treatment comparisons, and thus adds valuable precision when data is sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. In this paper we describe four novel approaches to modeling heterogeneity variance - two novel model structures, and two approaches for use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. In both data sets, use of moderately informative variance priors constructed from the pairwise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. Lastly, treatment rankings were similar among the novel approaches, but considerably different when compared with the homogeneous variance approach. MTC models using a homogeneous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice.

  9. Oregon ground-water quality and its relation to hydrogeological factors; a statistical approach

    USGS Publications Warehouse

    Miller, T.L.; Gonthier, J.B.

    1984-01-01

    An appraisal of Oregon ground-water quality was made using existing data accessible through the U.S. Geological Survey computer system. The data available for about 1,000 sites were separated by aquifer units and hydrologic units. Selected statistical moments were described for 19 constituents including major ions. About 96 percent of all sites in the data base were sampled only once. The sample data were classified by aquifer unit and hydrologic unit and analysis of variance was run to determine if significant differences exist between the units within each of these two classifications for the same 19 constituents on which statistical moments were determined. Results of the analysis of variance indicated both classification variables performed about the same, but aquifer unit did provide more separation for some constituents. Samples from the Rogue River basin were classified by location within the flow system and type of flow system. The samples were then analyzed using analysis of variance on 14 constituents to determine if there were significant differences between subsets classified by flow path. Results of this analysis were not definitive, but classification as to the type of flow system did indicate potential for segregating water-quality data into distinct subsets. (USGS)

  10. Prediction of activity-related energy expenditure using accelerometer-derived physical activity under free-living conditions: a systematic review.

    PubMed

    Jeran, S; Steinbrecher, A; Pischon, T

    2016-08-01

    Activity-related energy expenditure (AEE) might be an important factor in the etiology of chronic diseases. However, measurement of free-living AEE is usually not feasible in large-scale epidemiological studies but instead has traditionally been estimated based on self-reported physical activity. Recently, accelerometry has been proposed for objective assessment of physical activity, but it is unclear to what extent this method explains the variance in AEE. We conducted a systematic review searching the MEDLINE database (until 2014) for studies that estimated AEE based on accelerometry-assessed physical activity in adults under free-living conditions (using the doubly labeled water method). Extracted study characteristics were sample size, accelerometer (type (uniaxial, triaxial), metrics (for example, activity counts, steps, acceleration), recording period, body position, wear time), explained variance of AEE (R(2)) and number of additional predictors. The relation of univariate and multivariate R(2) with study characteristics was analyzed using nonparametric tests. Nineteen articles were identified. Examination of various accelerometers or subpopulations in one article was treated separately, resulting in 28 studies. Sample sizes ranged from 10 to 149. In most studies the accelerometer was triaxial, worn at the trunk, during waking hours and reported activity counts as output metric. Recording periods ranged from 5 to 15 days. The variance of AEE explained by accelerometer-assessed physical activity ranged from 4 to 80% (median crude R(2)=26%). Sample size was inversely related to the explained variance. Inclusion of 1 to 3 other predictors in addition to accelerometer output significantly increased the explained variance to a range of 12.5-86% (median total R(2)=41%). The increase did not depend on the number of added predictors. We conclude that there is large heterogeneity across studies in the explained variance of AEE when estimated based on accelerometry. Thus, data on predicted AEE based on accelerometry-assessed physical activity need to be interpreted cautiously.

  11. Estimating fluvial wood discharge from timelapse photography with varying sampling intervals

    NASA Astrophysics Data System (ADS)

    Anderson, N. K.

    2013-12-01

    There is recent focus on calculating wood budgets for streams and rivers to help inform management decisions, ecological studies and carbon/nutrient cycling models. Most work has measured in situ wood in temporary storage along stream banks or estimated wood inputs from banks. Little effort has been employed monitoring and quantifying wood in transport during high flows. This paper outlines a procedure for estimating total seasonal wood loads using non-continuous coarse interval sampling and examines differences in estimation between sampling at 1, 5, 10 and 15 minutes. Analysis is performed on wood transport for the Slave River in Northwest Territories, Canada. Relative to the 1 minute dataset, precision decreased by 23%, 46% and 60% for the 5, 10 and 15 minute datasets, respectively. Five and 10 minute sampling intervals provided unbiased equal variance estimates of 1 minute sampling, whereas 15 minute intervals were biased towards underestimation by 6%. Stratifying estimates by day and by discharge increased precision over non-stratification by 4% and 3%, respectively. Not including wood transported during ice break-up, the total minimum wood load estimated at this site is 3300 ± 800 m³ for the 2012 runoff season. The vast majority of the imprecision in total wood volumes came from variance in estimating average volume per log. (Figure caption: comparison of proportions and variance across sample intervals using bootstrap sampling to achieve equal n; each trial was sampled at n = 100, repeated 10,000 times and averaged, and trials were averaged to obtain an estimate for each sample interval; dashed lines represent values from the one minute dataset.)
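
    A hedged sketch of the general idea of comparing coarser sampling intervals against a fine-interval record, with bootstrap resampling used to gauge estimator precision at each interval; the simulated 1-minute counts, interval choices, and scaling below are illustrative only and do not reproduce the paper's stratified estimates.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical 1-minute record of wood pieces observed in transport (6 hours)
minutes = 6 * 60
counts_1min = rng.poisson(lam=0.8, size=minutes)

def total_estimate(counts, interval):
    """Estimate the record total from observations taken every `interval`
    minutes, scaling the sampled mean up to the full record length."""
    sampled = counts[::interval]
    return sampled.mean() * len(counts)

def bootstrap_sd(counts, interval, n_boot=10_000):
    """Bootstrap the sampled observations to gauge estimator precision."""
    sampled = counts[::interval]
    boots = rng.choice(sampled, size=(n_boot, sampled.size), replace=True)
    return (boots.mean(axis=1) * len(counts)).std(ddof=1)

for interval in (1, 5, 10, 15):
    est = total_estimate(counts_1min, interval)
    sd = bootstrap_sd(counts_1min, interval)
    print(f"{interval:>2}-min sampling: estimate {est:7.0f}, bootstrap SD {sd:6.1f}")
```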

  12. Mixed model approaches for diallel analysis based on a bio-model.

    PubMed

    Zhu, J; Weir, B S

    1996-12-01

    A MINQUE(1) procedure, which is the minimum norm quadratic unbiased estimation (MINQUE) method with 1 for all the prior values, is suggested for estimating variance and covariance components in a bio-model for diallel crosses. Unbiasedness and efficiency of estimation were compared for MINQUE(1), restricted maximum likelihood (REML) and MINQUE(θ), which uses parameter values for the prior values. MINQUE(1) is almost as efficient as MINQUE(θ) for unbiased estimation of genetic variance and covariance components. The bio-model is efficient and robust for estimating variance and covariance components for maternal and paternal effects as well as for nuclear effects. A procedure of adjusted unbiased prediction (AUP) is proposed for predicting random genetic effects in the bio-model. The jack-knife procedure is suggested for estimation of sampling variances of estimated variance and covariance components and of predicted genetic effects. Worked examples are given for estimation of variance and covariance components and for prediction of genetic merits.
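
    The suggested jack-knife step can be illustrated generically: delete-one replicates of an estimator (here a plain sample variance stands in for an estimated variance component) yield an estimate of its sampling variance. This is a minimal sketch of the jack-knife itself, not of the bio-model or MINQUE machinery.

```python
import numpy as np

def jackknife_variance(data, estimator):
    """Delete-one jackknife estimate of the sampling variance of a statistic
    (for example, an estimated variance component)."""
    data = np.asarray(data, dtype=float)
    n = len(data)
    replicates = np.array([estimator(np.delete(data, i)) for i in range(n)])
    return (n - 1) / n * np.sum((replicates - replicates.mean()) ** 2)

# Hypothetical example: sampling variance of a sample-variance estimate
rng = np.random.default_rng(9)
x = rng.normal(scale=2.0, size=50)
print(jackknife_variance(x, lambda d: np.var(d, ddof=1)))
```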

  13. Genetic Variance in Homophobia: Evidence from Self- and Peer Reports.

    PubMed

    Zapko-Willmes, Alexandra; Kandler, Christian

    2018-01-01

    The present twin study combined self- and peer assessments of twins' general homophobia targeting gay men in order to replicate previous behavior genetic findings across different rater perspectives and to disentangle self-rater-specific variance from common variance in self- and peer-reported homophobia (i.e., rater-consistent variance). We hypothesized rater-consistent variance in homophobia to be attributable to genetic and nonshared environmental effects, and self-rater-specific variance to be partially accounted for by genetic influences. A sample of 869 twins and 1329 peer raters completed a seven item scale containing cognitive, affective, and discriminatory homophobic tendencies. After correction for age and sex differences, we found most of the genetic contributions (62%) and significant nonshared environmental contributions (16%) to individual differences in self-reports on homophobia to be also reflected in peer-reported homophobia. A significant genetic component, however, was self-report-specific (38%), suggesting that self-assessments alone produce inflated heritability estimates to some degree. Different explanations are discussed.

  14. Pedestrian self-reported use of smart phones: Positive attitudes and high exposure influence intentions to cross the road while distracted.

    PubMed

    Lennon, Alexia; Oviedo-Trespalacios, Oscar; Matthews, Sarah

    2017-01-01

    Pedestrian crashes are an important issue globally as pedestrians are a highly vulnerable road user group, accounting for approximately 35% of road deaths worldwide each year. In highly motorised countries, pedestrian distraction by hand held technological devices appears to be an increasing factor in such crashes. An online survey (N=363) was conducted to 1) obtain prevalence information regarding the extent to which people cross the road while simultaneously using mobile phones for potentially distracting activities; 2) identify whether younger adult pedestrians are more exposed to/at risk of injury due to this cause than older adults; and 3) explore whether the Theory of Planned Behaviour (TPB) might provide insight into the factors influencing the target behaviours. Self-reported frequency of using a smart phone for three levels of distraction (visual and cognitive: texting/internet; cognitive only: voice calls; audio only: listening to music) while crossing the road was collected. Results indicated that about 20% of the sample had high exposure to smart phone use while crossing, especially 18-30 year olds who were significantly more likely than other age groups to report frequent exposure. TPB constructs of Attitude, Subjective Norm, and Perceived Behavioural Control significantly predicted intentions to use a smart phone while crossing the road, accounting for 62% of variance in Intentions for the entire sample, and 54% of the variance for 18-30 year olds. Additional variables of Mobile Phone Involvement and Group Norms provided an additional significant 6% of the variance explained for both groups. Attitude was by far the strongest predictor for both the whole sample and for 18-30 year olds, accounting for 38% and 41% explained variance, respectively. This suggests that pedestrians with positive attitudes towards using their smart phones while crossing the road have stronger intentions to do so. Moreover, high exposure was associated with stronger intentions to use a smart phone while crossing, and the effect was large, suggesting high frequency mobile phone use may lead to riskier habits, such as failing to interrupt use while crossing the road. Interventions should target pedestrians under 30 years old and aim to strengthen negative attitudes towards using smart phones while crossing, or to challenge the perceived advantages or emphasise the disadvantages of using one's phone while crossing in order to reduce intentions to do so. Young people's perceptions that others in their social group approve of smart phone use while crossing could also be an important factor to address. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. An Overdetermined System for Improved Autocorrelation Based Spectral Moment Estimator Performance

    NASA Technical Reports Server (NTRS)

    Keel, Byron M.

    1996-01-01

    Autocorrelation based spectral moment estimators are typically derived using the Fourier transform relationship between the power spectrum and the autocorrelation function along with using either an assumed form of the autocorrelation function, e.g., Gaussian, or a generic complex form and applying properties of the characteristic function. Passarelli has used a series expansion of the general complex autocorrelation function and has expressed the coefficients in terms of central moments of the power spectrum. A truncation of this series will produce a closed system of equations which can be solved for the central moments of interest. The autocorrelation function at various lags is estimated from samples of the random process under observation. These estimates themselves are random variables and exhibit a bias and variance that is a function of the number of samples used in the estimates and the operational signal-to-noise ratio. This contributes to a degradation in performance of the moment estimators. This dissertation investigates the use of autocorrelation function estimates at higher order lags to reduce the bias and standard deviation in spectral moment estimates. In particular, Passarelli's series expansion is cast in terms of an overdetermined system to form a framework under which the application of additional autocorrelation function estimates at higher order lags can be defined and assessed. The solution of the overdetermined system is the least squares solution. Furthermore, an overdetermined system can be solved for any moment or moments of interest and is not tied to a particular form of the power spectrum or corresponding autocorrelation function. As an application of this approach, autocorrelation based variance estimators are defined by a truncation of Passarelli's series expansion and applied to simulated Doppler weather radar returns which are characterized by a Gaussian shaped power spectrum. The performance of the variance estimators determined from a closed system is shown to improve through the application of additional autocorrelation lags in an overdetermined system. This improvement is greater in the narrowband spectrum region where the information is spread over more lags of the autocorrelation function. The number of lags needed in the overdetermined system is a function of the spectral width, the number of terms in the series expansion, the number of samples used in estimating the autocorrelation function, and the signal-to-noise ratio. The overdetermined system provides a robustness to the chosen variance estimator by expanding the region of spectral widths and signal-to-noise ratios over which the estimator can perform as compared to the closed system.
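
    As a simplified stand-in for the overdetermined-system idea (not Passarelli's expansion itself), one can fit ln|R(l)| = ln S - 2(pi*w*l)^2 by least squares over several autocorrelation lags of a Gaussian-shaped spectrum; with more lags than the two unknowns the system is overdetermined, and the extra lags help average down estimation noise. Symbols, the noise model, and values below are illustrative.

```python
import numpy as np

def gaussian_acf(lags, power, width):
    """Autocorrelation magnitude for a Gaussian-shaped power spectrum with
    normalized spectral width `width` (cycles per sample)."""
    return power * np.exp(-2.0 * (np.pi * width * lags) ** 2)

def width_from_lags(r_mag, lags):
    """Least-squares fit of ln|R(l)| = ln S - 2 (pi w l)^2 over several lags;
    with more lags than unknowns this is an overdetermined system."""
    A = np.column_stack([np.ones(lags.size), lags.astype(float) ** 2])
    coef, *_ = np.linalg.lstsq(A, np.log(r_mag), rcond=None)
    w2 = -coef[1] / (2.0 * np.pi ** 2)
    return np.sqrt(max(w2, 0.0))

rng = np.random.default_rng(7)
lags = np.arange(1, 6)                               # lags 1..5 -> overdetermined
true_acf = gaussian_acf(lags, power=1.0, width=0.04)
noisy_acf = true_acf * np.exp(rng.normal(scale=0.02, size=lags.size))
print(width_from_lags(noisy_acf, lags))              # close to 0.04
```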

  16. An Unbiased Estimator of Gene Diversity with Improved Variance for Samples Containing Related and Inbred Individuals of any Ploidy

    PubMed Central

    Harris, Alexandre M.; DeGiorgio, Michael

    2016-01-01

    Gene diversity, or expected heterozygosity (H), is a common statistic for assessing genetic variation within populations. Estimation of this statistic decreases in accuracy and precision when individuals are related or inbred, due to increased dependence among allele copies in the sample. The original unbiased estimator of expected heterozygosity underestimates true population diversity in samples containing relatives, as it only accounts for sample size. More recently, a general unbiased estimator of expected heterozygosity was developed that explicitly accounts for related and inbred individuals in samples. Though unbiased, this estimator's variance is greater than that of the original estimator. To address this issue, we introduce a general unbiased estimator of gene diversity for samples containing related or inbred individuals, which employs the best linear unbiased estimator of allele frequencies, rather than the commonly used sample proportion. We examine the properties of this estimator, H̃_BLUE, relative to alternative estimators using simulations and theoretical predictions, and show that it predominantly has the smallest mean squared error relative to others. Further, we empirically assess the performance of H̃_BLUE on a global human microsatellite dataset of 5795 individuals, from 267 populations, genotyped at 645 loci. Additionally, we show that the improved variance of H̃_BLUE leads to improved estimates of the population differentiation statistic, FST, which employs measures of gene diversity within its calculation. Finally, we provide an R script, BestHet, to compute this estimator from genomic and pedigree data. PMID:28040781
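
    For orientation, the sketch below computes the classical sample-size-corrected estimator of gene diversity, H = n/(n-1) * (1 - sum_i p_i^2), which the abstract contrasts with the BLUE-based estimator; it assumes unrelated, non-inbred individuals, and the allele labels are hypothetical.

```python
import numpy as np
from collections import Counter

def gene_diversity(alleles):
    """Classical unbiased estimator of gene diversity (expected heterozygosity)
    from a sample of n allele copies: H = n/(n-1) * (1 - sum_i p_i^2).
    Assumes unrelated, non-inbred individuals, unlike the BLUE-based
    estimator discussed in the abstract above."""
    n = len(alleles)
    freqs = np.array(list(Counter(alleles).values())) / n
    return n / (n - 1) * (1.0 - np.sum(freqs ** 2))

# Hypothetical microsatellite locus: allele labels for 10 allele copies
sample = ["a1", "a2", "a1", "a3", "a2", "a1", "a4", "a2", "a1", "a3"]
print(round(gene_diversity(sample), 3))
```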

  17. Estimating the variance and integral scale of the transmissivity field using head residual increments

    USGS Publications Warehouse

    Zheng, Li; Silliman, Stephen E.

    2000-01-01

    A modification of previously published solutions regarding the spatial variation of hydraulic heads is discussed whereby the semivariogram of increments of head residuals (termed head residual increments, HRIs) is related to the variance and integral scale of the transmissivity field. A first‐order solution is developed for the case of a transmissivity field which is isotropic and whose second‐order behavior can be characterized by an exponential covariance structure. The estimates of the variance σ_Y² and the integral scale λ of the log transmissivity field are then obtained via fitting a theoretical semivariogram for the HRI to its sample semivariogram. This approach is applied to head data sampled from a series of two‐dimensional, simulated aquifers with isotropic, exponential covariance structures and varying degrees of heterogeneity (σ_Y² = 0.25, 0.5, 1.0, 2.0, and 5.0). The results show that this method provided reliable estimates for both λ and σ_Y² in aquifers with the value of σ_Y² up to 2.0, but the errors in those estimates were higher for σ_Y² equal to 5.0. It is also demonstrated through numerical experiments and theoretical arguments that the head residual increments will provide a sample semivariogram with a lower variance than will the use of the head residuals without calculation of increments.
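
    A generic sketch of the sample-semivariogram step (binning half the squared differences of values, standing in here for head residual increments, by pair distance); the subsequent fit of the theoretical HRI semivariogram that yields the transmissivity variance and integral scale is specific to the paper and is not reproduced. Coordinates and values are hypothetical.

```python
import numpy as np

def sample_semivariogram(coords, values, bin_edges):
    """Empirical semivariogram: half the mean squared difference of values
    for point pairs grouped into distance bins."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq_diff = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)          # count each pair once
    dists, sq = d[iu], sq_diff[iu]
    centers, gamma = [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (dists >= lo) & (dists < hi)
        if mask.any():
            centers.append(0.5 * (lo + hi))
            gamma.append(0.5 * sq[mask].mean())
    return np.array(centers), np.array(gamma)

# Hypothetical head-residual-increment values at scattered well locations
rng = np.random.default_rng(8)
coords = rng.uniform(0.0, 100.0, size=(200, 2))
values = rng.normal(size=200)
h, g = sample_semivariogram(coords, values, np.linspace(0.0, 50.0, 11))
print(h, g)
```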

  18. The effect of a family-based intervention with a cognitive-behavioral approach on elder abuse.

    PubMed

    Khanlary, Zahra; Maarefvand, Masoomeh; Biglarian, Akbar; Heravi-Karimooi, Majideh

    2016-01-01

    Elder abuse may become a health issue in developing countries, including Iran. The purpose of this investigation was to study the effectiveness of Family-Based Cognitive-Behavioral Social Work (FBCBSW) in reducing elder abuse. In a randomized clinical trial in Iran, 27 elders participated in intervention and control groups. The intervention groups received a five-session FBCBSW intervention and completed the Domestic-Elder-Abuse-Questionnaire (DEAQ), which evaluates elder abuse at baseline and follow-ups. Repeated measures of analysis of variance (ANOVA) and the Wilcoxon test were used to analyze the data. The repeated measures ANOVA revealed that FBCBSW was successful in reducing elder abuse. The Wilcoxon test indicated that emotional neglect, care neglect, financial neglect, curtailment of personal autonomy, psychological abuse, and financial abuse significantly decreased over time, but there was no statistically significant difference in physical abuse before and after the intervention. The findings from this study suggest that FBCBSW is a promising approach to reducing elder abuse and warrants further study with larger samples.

  19. Variance in prey abundance influences time budgets of breeding seabirds: Evidence from pigeon guillemots Cepphus columba

    USGS Publications Warehouse

    Litzow, Michael A.; Piatt, John F.

    2003-01-01

    We use data on pigeon guillemots Cepphus columba to test the hypothesis that discretionary time in breeding seabirds is correlated with variance in prey abundance. We measured the amount of time that guillemots spent at the colony before delivering fish to chicks ("resting time") in relation to fish abundance as measured by beach seines and bottom trawls. Radio telemetry showed that resting time was inversely correlated with time spent diving for fish during foraging trips (r = -0.95). Pigeon guillemots fed their chicks either Pacific sand lance Ammodytes hexapterus, a schooling midwater fish, which exhibited high interannual variance in abundance (CV = 181%), or a variety of non-schooling demersal fishes, which were less variable in abundance (average CV = 111%). Average resting times were 46% higher at colonies where schooling prey dominated the diet. Individuals at these colonies reduced resting times 32% during years of low food abundance, but did not reduce meal delivery rates. In contrast, individuals feeding on non-schooling fishes did not reduce resting times during low food years, but did reduce meal delivery rates by 27%. Interannual variance in resting times was greater for the schooling group than for the non-schooling group. We conclude from these differences that time allocation in pigeon guillemots is more flexible when variable schooling prey dominate diets. Resting times were also 27% lower for individuals feeding two-chick rather than one-chick broods. The combined effects of diet and brood size on adult time budgets may help to explain higher rates of brood reduction for pigeon guillemot chicks fed non-schooling fishes.

  20. Spanish adaptation of social withdrawal motivation and frequency scales.

    PubMed

    Indias García, Sílvia; De Paúl Ochotorena, Joaquín

    2016-11-01

    To adapt into Spanish three scales measuring frequency (SWFS) and motivation for social withdrawal (CSPS and SWMS) and to develop a scale capable of assessing the five motivations for social withdrawal. Participants were 1,112 Spanish adolescents, aged 12-17 years. The sample was randomly split into two groups in which exploratory and confirmatory (CFA) factor analyses were performed separately. A sample of adolescents in residential care (n = 128) was also used to perform discriminant validity analyses. SWFS was reduced to eight items that account for 40% of explained variance (PVE), and its reliability is high. SWMS worked adequately in the original version, according to CFA. Some items from the CSPS were removed from the final Spanish version. The newly developed scale (SWMS-5D) is composed of 20 items including five subscales: Peer Isolation, Unsociability, Shyness, Low Mood and Avoidance. Analyses reveal adequate convergent and discriminant validities. The resulting SWFS-8 and SWMS-5D could be considered useful instruments to assess frequency and motivation for social withdrawal in Spanish samples.

  1. Educational Module Intervention for Radiographers to Reduce Repetition Rate of Routine Digital Chest Radiography in Makkah Region of Saudi Arabia Tertiary Hospitals: Protocol of a Quasi-Experimental Study.

    PubMed

    Almalki, Abdullah A; Abdul Manaf, Rosliza; Hanafiah Juni, Muhamad; Kadir Shahar, Hayati; Noor, Noramaliza Mohd; Gabbad, Abdelsafi

    2017-09-26

    Repetition of an image is a critical event in any radiology department. When the repetition rate of routine digital chest radiographs is high, radiation exposure of staff and patients is increased. In addition, repetition consumes the equipment's life span, thus affecting the annual budget of the department. The aim of this study is to determine the impact of a printed educational module on reducing the repetition rate of routine digital chest radiography among radiographers in Makkah Region tertiary hospitals. A quasi-experimental time series with a control group will be conducted in Makkah Region tertiary hospitals for 8 months starting in the second quarter of 2017. Four hospitals out of 5 in the region will be selected; 2 of them will be selected as the control group and the other 2 as the intervention group. Stratification and a simple random sampling technique will be used to sample 56 radiographers in each group. Pre- and postintervention assessments will be conducted to determine the radiographer knowledge, motivation, and skills and repetition rate of chest radiographs. Radiographs of the chest performed by sampled radiographers in the selected hospitals will be collected for 2 weeks before and after the intervention. A piloted questionnaire will be distributed and collected by a researcher in both groups. One-way multivariate analysis of variance and 2-way repeated multivariate analysis of variance will be used to analyze the data. It is expected that the repetition rate in the intervention group will decline after implementing the intervention and the change will be statistically significant (P<.05). Furthermore, it is expected that the knowledge, motivation, and skill levels in the intervention group will increase significantly among radiographers after implementation of the intervention (P<.05). Meanwhile, knowledge, motivation, and skills in the control group will not change. A quasi-experimental time series with a control will be conducted to investigate the effect of printed educational material in reducing the repetition rate of routine digital chest radiographs among radiographers in tertiary hospitals in the Makkah Region of Saudi Arabia. ©Abdullah A. Almalki, Rosliza Abdul Manaf, Muhamad Hanafiah Juni, Hayati Kadir Shahar, Noramaliza Mohd Noor, Abdelsafi Gabbad. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 26.09.2017.

  2. The Evolution of Human Intelligence and the Coefficient of Additive Genetic Variance in Human Brain Size

    ERIC Educational Resources Information Center

    Miller, Geoffrey F.; Penke, Lars

    2007-01-01

    Most theories of human mental evolution assume that selection favored higher intelligence and larger brains, which should have reduced genetic variance in both. However, adult human intelligence remains highly heritable, and is genetically correlated with brain size. This conflict might be resolved by estimating the coefficient of additive genetic…

  3. Bayesian Factor Analysis When Only a Sample Covariance Matrix Is Available

    ERIC Educational Resources Information Center

    Hayashi, Kentaro; Arav, Marina

    2006-01-01

    In traditional factor analysis, the variance-covariance matrix or the correlation matrix has often been a form of inputting data. In contrast, in Bayesian factor analysis, the entire data set is typically required to compute the posterior estimates, such as Bayes factor loadings and Bayes unique variances. We propose a simple method for computing…

  4. Heuristics for Understanding the Concepts of Interaction, Polynomial Trend, and the General Linear Model.

    ERIC Educational Resources Information Center

    Thompson, Bruce

    The relationship between analysis of variance (ANOVA) methods and their analogs (analysis of covariance and multiple analyses of variance and covariance--collectively referred to as OVA methods) and the more general analytic case is explored. A small heuristic data set is used, with a hypothetical sample of 20 subjects, randomly assigned to five…

  5. Parents' Behavioral Norms as Predictors of Adolescent Sexual Activity and Contraceptive Use.

    ERIC Educational Resources Information Center

    Baker, Sharon A.; And Others

    1988-01-01

    Used clustered sample household survey of 329 males and females aged 14 to 17, and 470 of their parents to examine influence of parental factors on adolescent sexual behavior and contraceptive use. Found parents' reported behavioral norms accounted for 5% of variance in whether adolescents had had intercourse, and for 33% of variance in…

  6. Approximate Confidence Intervals for Moment-Based Estimators of the Between-Study Variance in Random Effects Meta-Analysis

    ERIC Educational Resources Information Center

    Jackson, Dan; Bowden, Jack; Baker, Rose

    2015-01-01

    Moment-based estimators of the between-study variance are very popular when performing random effects meta-analyses. This type of estimation has many advantages including computational and conceptual simplicity. Furthermore, by using these estimators in large samples, valid meta-analyses can be performed without the assumption that the treatment…

  7. An Investigation of the Raudenbush (1988) Test for Studying Variance Heterogeneity.

    ERIC Educational Resources Information Center

    Harwell, Michael

    1997-01-01

    The meta-analytic method proposed by S. W. Raudenbush (1988) for studying variance heterogeneity was studied. Results of a Monte Carlo study indicate that the Type I error rate of the test is sensitive to even modestly platykurtic score distributions and to the ratio of study sample size to the number of studies. (SLD)

  8. Linear Algebra and Sequential Importance Sampling for Network Reliability

    DTIC Science & Technology

    2011-12-01

    The first test case is an Erdős-Rényi graph with 100 vertices and 150 edges. Figure 1 depicts the relative variance of the three algorithms, including the TOP-DOWN algorithm. [Figure 1: Relative variance of various algorithms on an Erdős-Rényi graph, 100 vertices, 250 edges. Key: Solid = TOP-DOWN algorithm.]

  9. ADHD and Method Variance: A Latent Variable Approach Applied to a Nationally Representative Sample of College Freshmen

    ERIC Educational Resources Information Center

    Konold, Timothy R.; Glutting, Joseph J.

    2008-01-01

    This study employed a correlated trait-correlated method application of confirmatory factor analysis to disentangle trait and method variance from measures of attention-deficit/hyperactivity disorder obtained at the college level. The two trait factors were "Diagnostic and Statistical Manual of Mental Disorders-Fourth Edition" ("DSM-IV")…

  10. Standard errors in forest area

    Treesearch

    Joseph McCollum

    2002-01-01

    I trace the development of standard error equations for forest area, beginning with the theory behind double sampling and the variance of a product. The discussion shifts to the particular problem of forest area - at which time the theory becomes relevant. There are subtle difficulties in figuring out which variance of a product equation should be used. The equations...

  11. 3D facial landmarks: Inter-operator variability of manual annotation

    PubMed Central

    2014-01-01

    Background Manual annotation of landmarks is a known source of variance that exists in all fields of medical imaging and influences the accuracy and interpretation of results. However, the variability of human facial landmarks is only sparsely addressed in the current literature, as opposed to, e.g., the research fields of orthodontics and cephalometrics. We present a full facial 3D annotation procedure and a sparse set of manually annotated landmarks, in an effort to reduce operator time and minimize the variance. Method Facial scans from 36 voluntary unrelated blood donors from the Danish Blood Donor Study were randomly chosen. Six operators twice manually annotated 73 anatomical and pseudo-landmarks, using a three-step scheme producing a dense point correspondence map. We analyzed both the intra- and inter-operator variability using mixed-model ANOVA. We then compared four sparse sets of landmarks in order to construct a dense correspondence map of the 3D scans with minimum point variance. Results The anatomical landmarks of the eye were associated with the lowest variance, particularly the center of the pupils, whereas points on the jaw and eyebrows had the highest variation. Intra-operator variability and variability across portraits were marginal. Using a sparse set of landmarks (n=14) that captures the whole face, the dense point mean variance was reduced from 1.92 to 0.54 mm. Conclusion The inter-operator variability was primarily associated with particular landmarks, where more leniently defined landmarks had the highest variability. The variables embedded in the portrait and the reliability of a trained operator had only marginal influence on the variability. Further, using 14 of the annotated landmarks we were able to reduce the variability and create a dense correspondence mesh capturing all facial features. PMID:25306436

  12. Managing risk and expected financial return from selective expansion of operating room capacity: mean-variance analysis of a hospital's portfolio of surgeons.

    PubMed

    Dexter, Franklin; Ledolter, Johannes

    2003-07-01

    Surgeons using the same amount of operating room (OR) time differ in their achieved hospital contribution margins (revenue minus variable costs) by >1000%. Thus, to improve the financial return from perioperative facilities, OR strategic decisions should selectively focus additional OR capacity and capital purchasing on a few surgeons or subspecialties. These decisions use estimates of each surgeon's and/or subspecialty's contribution margin per OR hour. The estimates are subject to uncertainty (e.g., from outliers). We account for the uncertainties by using mean-variance portfolio analysis (i.e., quadratic programming). This method characterizes the problem of selectively expanding OR capacity based on the expected financial return and risk of different portfolios of surgeons. The assessment reveals whether the choices, of which surgeons have their OR capacity expanded, are sensitive to the uncertainties in the surgeons' contribution margins per OR hour. Thus, mean-variance analysis reduces the chance of making strategic decisions based on spurious information. We also assess the financial benefit of using mean-variance portfolio analysis when the planned expansion of OR capacity is well diversified over at least several surgeons or subspecialties. Our results show that, in such circumstances, there may be little benefit from further changing the portfolio to reduce its financial risk. Surgeon and subspecialty specific hospital financial data are uncertain, a fact that should be taken into account when making decisions about expanding operating room capacity. We show that mean-variance portfolio analysis can incorporate this uncertainty, thereby guiding operating room management decision-making and reducing the chance of a strategic decision being made based on spurious information.
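
    As a rough illustration of the mean-variance idea applied to a portfolio of surgeons, the sketch below uses hypothetical contribution margins and covariances (not the authors' data) and the classical closed-form Markowitz solution with equality constraints; a real capacity decision would instead solve a quadratic program with non-negativity constraints on the weights.

    ```python
    import numpy as np

    # Hypothetical expected contribution margins per OR hour (in $1000s) for four
    # surgeons/subspecialties, and the covariance of those estimates (uncertainty
    # from outliers, case mix, payer mix, etc.).
    mu = np.array([1.2, 2.0, 1.5, 2.6])
    Sigma = np.array([[0.20, 0.02, 0.01, 0.03],
                      [0.02, 0.50, 0.04, 0.10],
                      [0.01, 0.04, 0.30, 0.05],
                      [0.03, 0.10, 0.05, 0.90]])

    target = 1.8  # required expected margin per OR hour of added capacity

    # Closed-form solution that minimizes portfolio variance subject to the
    # weights summing to 1 and the expected margin equaling the target.
    inv = np.linalg.inv(Sigma)
    ones = np.ones_like(mu)
    A = ones @ inv @ ones
    B = ones @ inv @ mu
    C = mu @ inv @ mu
    D = A * C - B ** 2
    w = ((C - B * target) * (inv @ ones) + (A * target - B) * (inv @ mu)) / D

    print("capacity shares:", np.round(w, 3))
    print("expected margin:", round(float(w @ mu), 3))
    print("portfolio variance:", round(float(w @ Sigma @ w), 4))
    ```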

  13. Linear models for airborne-laser-scanning-based operational forest inventory with small field sample size and highly correlated LiDAR data

    USGS Publications Warehouse

    Junttila, Virpi; Kauranne, Tuomo; Finley, Andrew O.; Bradford, John B.

    2015-01-01

    Modern operational forest inventory often uses remotely sensed data that cover the whole inventory area to produce spatially explicit estimates of forest properties through statistical models. The data obtained by airborne light detection and ranging (LiDAR) correlate well with many forest inventory variables, such as the tree height, the timber volume, and the biomass. To construct an accurate model over thousands of hectares, LiDAR data must be supplemented with several hundred field sample measurements of forest inventory variables. This can be costly and time consuming. Different LiDAR-data-based and spatial-data-based sampling designs can reduce the number of field sample plots needed. However, problems arising from the features of the LiDAR data, such as a large number of predictors compared with the sample size (overfitting) or a strong correlation among predictors (multicollinearity), may decrease the accuracy and precision of the estimates and predictions. To overcome these problems, a Bayesian linear model with the singular value decomposition of predictors, combined with regularization, is proposed. The model performance in predicting different forest inventory variables is verified in ten inventory areas from two continents, where the number of field sample plots is reduced using different sampling designs. The results show that, with an appropriate field plot selection strategy and the proposed linear model, the total relative error of the predicted forest inventory variables is only 5%–15% larger using 50 field sample plots than the error of a linear model estimated with several hundred field sample plots when we sum up the error due to both the model noise variance and the model’s lack of fit.
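
    The combination of a singular value decomposition of the predictors with regularization can be sketched as a simple ridge-type estimator. This is an illustrative stand-in for the Bayesian model described in the abstract, with made-up LiDAR metrics and plot measurements.

    ```python
    import numpy as np

    def svd_ridge(X, y, alpha=1.0):
        """Ridge regression solved through the SVD of the centered predictors.

        Shrinking directions with small singular values counters both overfitting
        (many LiDAR predictors, few field plots) and multicollinearity.
        """
        x_mean, y_mean = X.mean(axis=0), y.mean()
        Xc, yc = X - x_mean, y - y_mean
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        # Filter factors s_i / (s_i^2 + alpha) implement the shrinkage.
        coef = Vt.T @ ((s / (s ** 2 + alpha)) * (U.T @ yc))
        intercept = y_mean - x_mean @ coef
        return coef, intercept

    # Hypothetical example: 50 field plots, 30 correlated LiDAR height/density metrics.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 30))
    X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=50)      # strong collinearity
    y = 3.0 * X[:, 0] + rng.normal(scale=0.5, size=50)  # e.g. stand volume

    coef, intercept = svd_ridge(X, y, alpha=5.0)
    predictions = X @ coef + intercept
    ```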

  14. Effects of Reduced Acuity and Stereo Acuity on Saccades and Reaching Movements in Adults With Amblyopia and Strabismus.

    PubMed

    Niechwiej-Szwedo, Ewa; Goltz, Herbert C; Colpa, Linda; Chandrakumar, Manokaraananthan; Wong, Agnes M F

    2017-02-01

    Our previous work has shown that amblyopia disrupts the planning and execution of visually-guided saccadic and reaching movements. We investigated the association between the clinical features of amblyopia and aspects of visuomotor behavior that are disrupted by amblyopia. A total of 55 adults with amblyopia (22 anisometropic, 18 strabismic, 15 mixed mechanism), 14 adults with strabismus without amblyopia, and 22 visually-normal control participants completed a visuomotor task while their eye and hand movements were recorded. Univariate and multivariate analyses were performed to assess the association between three clinical predictors of amblyopia (amblyopic eye [AE] acuity, stereo sensitivity, and eye deviation) and seven kinematic outcomes, including saccadic and reach latency, interocular saccadic and reach latency difference, saccadic and reach precision, and PA/We ratio (an index of reach control strategy efficacy using online feedback correction). Amblyopic eye acuity explained 28% of the variance in saccadic latency, and 48% of the variance in mean saccadic latency difference between the amblyopic and fellow eyes (i.e., interocular latency difference). In contrast, for reach latency, AE acuity explained only 10% of the variance. Amblyopic eye acuity was associated with reduced endpoint saccadic (23% of variance) and reach (22% of variance) precision in the amblyopic group. In the strabismus without amblyopia group, stereo sensitivity and eye deviation did not explain any significant variance in saccadic and reach latency or precision. Stereo sensitivity was the best clinical predictor of deficits in reach control strategy, explaining 23% of total variance of PA/We ratio in the amblyopic group and 12% of variance in the strabismus without amblyopia group when viewing with the amblyopic/nondominant eye. Deficits in eye and limb movement initiation (latency) and target localization (precision) were associated with amblyopic acuity deficit, whereas changes in the sensorimotor reach strategy were associated with deficits in stereopsis. Importantly, more than 50% of variance was not explained by the measured clinical features. Our findings suggest that other factors, including higher order visual processing and attention, may have an important role in explaining the kinematic deficits observed in amblyopia.

  15. Diallel analysis for sex-linked and maternal effects.

    PubMed

    Zhu, J; Weir, B S

    1996-01-01

    Genetic models including sex-linked and maternal effects as well as autosomal gene effects are described. Monte Carlo simulations were conducted to compare efficiencies of estimation by minimum norm quadratic unbiased estimation (MINQUE) and restricted maximum likelihood (REML) methods. MINQUE(1), which has 1 for all prior values, has a similar efficiency to MINQUE(θ), which requires prior estimates of parameter values. MINQUE(1) has the advantage over REML of unbiased estimation and convenient computation. An adjusted unbiased prediction (AUP) method is developed for predicting random genetic effects. AUP is desirable for its easy computation and unbiasedness of both mean and variance of predictors. The jackknife procedure is appropriate for estimating the sampling variances of estimated variances (or covariances) and of predicted genetic effects. A t-test based on jackknife variances is applicable for detecting significance of variation. Worked examples from mice and silkworm data are given in order to demonstrate variance and covariance estimation and genetic effect prediction.
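
    The delete-one jackknife mentioned for the sampling variances can be sketched generically. The estimator below is simply the sample variance of simulated observations, not the MINQUE/AUP machinery of the paper; the point is only to show how the jackknife replicates yield a standard error and a t-type statistic.

    ```python
    import numpy as np

    def jackknife_variance(data, estimator):
        """Delete-one jackknife estimate of the sampling variance of `estimator`."""
        n = len(data)
        theta_full = estimator(data)
        # Leave-one-out replicates of the statistic.
        replicates = np.array([estimator(np.delete(data, i)) for i in range(n)])
        theta_dot = replicates.mean()
        var_jack = (n - 1) / n * np.sum((replicates - theta_dot) ** 2)
        return theta_full, var_jack

    rng = np.random.default_rng(1)
    y = rng.normal(loc=10.0, scale=2.0, size=40)

    # Statistic of interest: the sample variance itself.
    est, var_est = jackknife_variance(y, lambda v: np.var(v, ddof=1))
    t_stat = est / np.sqrt(var_est)   # t-type test for significance of variation
    print(f"variance estimate {est:.2f}, jackknife SE {np.sqrt(var_est):.2f}, t = {t_stat:.2f}")
    ```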

  16. [Theory, method and application of method R on estimation of (co)variance components].

    PubMed

    Liu, Wen-Zhong

    2004-07-01

    The theory, method and application of Method R for the estimation of (co)variance components were reviewed so that the method can be used appropriately. Estimation requires R values, which are regressions of predicted random effects calculated from the complete dataset on predicted random effects calculated from random subsets of the same data. By using a multivariate iteration algorithm based on a transformation matrix, combined with the preconditioned conjugate gradient method to solve the mixed model equations, the computational efficiency of Method R is much improved. Method R is computationally inexpensive, and the sampling errors and approximate credible intervals of the estimates can be obtained. Disadvantages of Method R include a larger sampling variance than other methods for the same data and biased estimates in small datasets. As an alternative method, Method R can be used in larger datasets. It is necessary to study its theoretical properties and broaden its application range further.

  17. Predicting negative drinking consequences: examining descriptive norm perception.

    PubMed

    Benton, Stephen L; Downey, Ronald G; Glider, Peggy S; Benton, Sherry A; Shin, Kanghyun; Newton, Douglas W; Arck, William; Price, Amy

    2006-05-01

    This study explored how much variance in college student negative drinking consequences is explained by descriptive norm perception, beyond that accounted for by student gender and self-reported alcohol use. A derivation sample (N=7565; 54% women) and a replication sample (N=8924; 55.5% women) of undergraduate students completed the Campus Alcohol Survey in classroom settings. Hierarchical regression analyses revealed that student gender and average number of drinks when "partying" were significantly related to harmful consequences resulting from drinking. Men reported more consequences than did women, and drinking amounts were positively correlated with consequences. However, descriptive norm perception did not explain any additional variance beyond that attributed to gender and alcohol use. Furthermore, there was no significant three-way interaction among student gender, alcohol use, and descriptive norm perception. Norm perception contributed no significant variance in explaining harmful consequences beyond that explained by college student gender and alcohol use.

  18. Structure of the Wechsler Intelligence Scale for Children--Fourth Edition among a national sample of referred students.

    PubMed

    Watkins, Marley W

    2010-12-01

    The structure of the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV; D. Wechsler, 2003a) was analyzed via confirmatory factor analysis among a national sample of 355 students referred for psychoeducational evaluation by 93 school psychologists from 35 states. The structure of the WISC-IV core battery was best represented by four first-order factors as per D. Wechsler (2003b), plus a general intelligence factor in a direct hierarchical model. The general factor was the predominant source of variation among WISC-IV subtests, accounting for 48% of the total variance and 75% of the common variance. The largest 1st-order factor, Processing Speed, accounted for only 6.1% of the total and 9.5% of the common variance. Given these explanatory contributions, recommendations favoring interpretation of the 1st-order factor scores over the general intelligence score appear to be misguided.

  19. Developing optimum sample size and multistage sampling plans for Lobesia botrana (Lepidoptera: Tortricidae) larval infestation and injury in northern Greece.

    PubMed

    Ifoulis, A A; Savopoulou-Soultani, M

    2006-10-01

    The purpose of this research was to quantify the spatial pattern and develop a sampling program for larvae of Lobesia botrana Denis and Schiffermüller (Lepidoptera: Tortricidae), an important vineyard pest in northern Greece. Taylor's power law and Iwao's patchiness regression were used to model the relationship between the mean and the variance of larval counts. Analysis of covariance was carried out, separately for infestation and injury, with combined second and third generation data, for vine and half-vine sample units. Common regression coefficients were estimated to permit use of the sampling plan over a wide range of conditions. Optimum sample sizes for infestation and injury, at three levels of precision, were developed. An investigation of a multistage sampling plan with a nested analysis of variance showed that if the goal of sampling is focusing on larval infestation, three grape clusters should be sampled in a half-vine; if the goal of sampling is focusing on injury, then two grape clusters per half-vine are recommended.
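
    Taylor's power law fits log variance against log mean across sampling units, and the fitted coefficients feed directly into a sample size calculation. The sketch below uses hypothetical larval counts, not the authors' data, and the common enumerative formula n = (t/D)^2 * a * m^(b-2), where D is the desired precision expressed as a fraction of the mean; it illustrates the general approach rather than the specific multistage plan of the paper.

    ```python
    import numpy as np
    from scipy.stats import linregress

    # Hypothetical larval counts per grape cluster from several vineyard plots
    # (each row: counts from one plot / sampling occasion).
    plots = [np.array([0, 1, 3, 0, 2, 5, 1, 0]),
             np.array([2, 4, 7, 1, 3, 6, 2, 5]),
             np.array([0, 0, 1, 2, 0, 1, 0, 0]),
             np.array([5, 9, 3, 8, 12, 4, 6, 7])]

    means = np.array([p.mean() for p in plots])
    variances = np.array([p.var(ddof=1) for p in plots])

    # Taylor's power law: s^2 = a * m^b, fitted as log s^2 = log a + b * log m.
    fit = linregress(np.log(means), np.log(variances))
    a, b = np.exp(fit.intercept), fit.slope

    def optimum_sample_size(m, D=0.25, t=1.96):
        """Clusters to sample so the standard error is a fraction D of the mean m."""
        return (t / D) ** 2 * a * m ** (b - 2.0)

    print(f"a = {a:.2f}, b = {b:.2f}")
    print("n at m = 2 larvae/cluster:", round(float(optimum_sample_size(2.0))))
    ```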

  20. Attitudes to reducing violence towards women: punishment or prevention?

    PubMed

    Martin, J L; O'Shea, M L; Romans, S E; Anderson, J C; Mullen, P E

    1993-04-14

    To investigate the attitudes of abused and nonabused women to reducing physical and sexual violence in the community. A random community sample of 3000 women was surveyed by postal questionnaire as part of the Otago Women's Health Survey. Seventy three percent (n = 1663) of those under 65 replied. As well as demographic, mental health and abuse information, responses to the question "what steps would you like to see taken to reduce the incidence of sexual and physical harm to women and children?" were analysed. Education was the most favoured approach to reducing violence in the community, followed by increased punishment of the offender. Women who had experienced sexual abuse, particularly as children, were more likely to advocate measures other than punishment. Rural women, those without formal qualifications and those who were not abused were more likely to advocate increased punishment, or made no comment. The finding that victims of sexual assault were likely to report a preference for prevention over punishment highlights the importance of representing the views of the community which appear to be at variance with more extreme views publicized in the media.

  1. An evaluation of flow-stratified sampling for estimating suspended sediment loads

    Treesearch

    Robert B. Thomas; Jack Lewis

    1995-01-01

    Abstract - Flow-stratified sampling is a new method for sampling water quality constituents such as suspended sediment to estimate loads. As with selection-at-list-time (SALT) and time-stratified sampling, flow-stratified sampling is a statistical method requiring random sampling, and yielding unbiased estimates of load and variance. It can be used to estimate event...

  2. Determinants of Fast Food Consumption among Iranian High School Students Based on Planned Behavior Theory

    PubMed Central

    Sharifirad, Gholamreza; Yarmohammadi, Parastoo; Azadbakht, Leila; Morowatisharifabad, Mohammad Ali; Hassanzadeh, Akbar

    2013-01-01

    Objective. This study was conducted to identify some factors (beliefs and norms) related to fast food consumption among high school students in Isfahan, Iran. We used the framework of the theory of planned behavior (TPB) to predict this behavior. Subjects & Methods. Cross-sectional data were available from high school students (n = 521) who were recruited by cluster randomized sampling. All of the students completed a questionnaire assessing variables of the standard TPB model, including attitude, subjective norms, and perceived behavior control (PBC), and the additional variables past behavior and actual behavior control (ABC). Results. The TPB variables explained 25.7% of the variance in intentions, with positive attitude as the strongest (β = 0.31, P < 0.001) and subjective norms as the weakest (β = 0.29, P < 0.001) determinant. Concurrently, intentions accounted for 6% of the variance in fast food consumption. Past behavior and ABC accounted for an additional 20.4% of the variance in fast food consumption. Conclusion. Overall, the present study suggests that the TPB model is useful in predicting beliefs and norms related to fast food consumption among adolescents. Subjective norms in the standard TPB model, and past behavior in the extended model (with past behavior and actual behavior control), were the most powerful predictors of fast food consumption. Therefore, the TPB model may be a useful framework for planning intervention programs to reduce fast food consumption by students. PMID:23936635

  3. Equality in Educational Policy and the Heritability of Educational Attainment

    PubMed Central

    Colodro-Conde, Lucía; Rijsdijk, Frühling; Tornero-Gómez, María J.; Sánchez-Romera, Juan F.; Ordoñana, Juan R.

    2015-01-01

    Secular variation in the heritability of educational attainment is proposed to be due to the implementation of more egalitarian educational policies, which increased equality in educational opportunities in the second part of the 20th century. The hypothesized mechanism is a decrease in shared environmental influences (e.g., family socioeconomic status or parents' education) on educational attainment, leaving more room for genetic differences between individuals to affect variation in the trait. However, this hypothesis has not yet been supported by consistent evidence. Support for this effect relies mainly on comparisons between countries adopting different educational systems or between different time periods within a country reflecting changes in general policy. Using a population-based sample of 1271 pairs of adult twins, we analyzed the effect of the introduction of a specific educational policy in Spain in 1970. The shared-environmental variance decreased, leading to an increase in heritability in the post-reform cohort (44 vs. 67%) for males. Unstandardized estimates of genetic variance were of a similar magnitude (.56 vs. .57) between cohorts, while shared environmental variance decreased from .56 to .04. Heritability remained in the same range for women (40 vs. 34%). Our results support the role of educational policy in affecting the relative weight of genetic and environmental factors on educational attainment, such that increasing equality in educational opportunities increases heritability estimates by reducing variation of non-genetic familial origin. PMID:26618539
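
    A back-of-the-envelope version of this cohort comparison can be done with Falconer's formulas, which decompose twin correlations into additive genetic (A), shared environmental (C), and nonshared environmental (E) variance. The twin correlations below are hypothetical, and the published estimates come from full ACE model fitting rather than this shortcut; the sketch only illustrates how a drop in the shared-environment share raises the heritability estimate.

    ```python
    def falconer_ace(r_mz, r_dz):
        """Falconer decomposition of standardized variance from twin correlations."""
        a2 = 2.0 * (r_mz - r_dz)   # additive genetic (heritability)
        c2 = 2.0 * r_dz - r_mz     # shared environment
        e2 = 1.0 - r_mz            # nonshared environment plus error
        return a2, c2, e2

    # Hypothetical MZ/DZ twin correlations for educational attainment in the
    # pre-reform and post-reform cohorts (illustrative numbers only).
    for label, r_mz, r_dz in [("pre-reform", 0.80, 0.58), ("post-reform", 0.75, 0.41)]:
        a2, c2, e2 = falconer_ace(r_mz, r_dz)
        print(f"{label}: A={a2:.2f}  C={c2:.2f}  E={e2:.2f}")
    ```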

  4. Gender differences in psychosocial predictors of texting while driving.

    PubMed

    Struckman-Johnson, Cindy; Gaster, Samuel; Struckman-Johnson, Dave; Johnson, Melissa; May-Shinagle, Gabby

    2015-01-01

    A sample of 158 male and 357 female college students at a midwestern university participated in an on-line study of psychosocial motives for texting while driving. Men and women did not differ in self-reported ratings of how often they texted while driving. However, more women sent texts of less than a sentence while more men sent texts of 1-5 sentences. More women than men said they would quit texting while driving due to police warnings, receiving information about texting dangers, being shown graphic pictures of texting accidents, and being in a car accident. A hierarchical regression for men's data revealed that lower levels of feeling distracted by texting while driving (20% of the variance), higher levels of cell phone dependence (11.5% of the variance), risky behavioral tendencies (6.5% of the variance) and impulsivity (2.3% of the variance) were significantly associated with more texting while driving (total model variance=42%). A separate regression for women revealed that higher levels of cell phone dependence (10.4% of the variance), risky behavioral tendencies (9.9% of the variance), texting distractibility (6.2% of the variance), crash risk estimates (2.2% of the variance) and driving confidence (1.3% of the variance) were significantly associated with more texting while driving (total model variance=31%). Friendship potential and need for intimacy were not related to men's or women's texting while driving. Implications of the results for gender-specific prevention strategies are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Estimation of genetic parameters and response to selection for a continuous trait subject to culling before testing.

    PubMed

    Arnason, T; Albertsdóttir, E; Fikse, W F; Eriksson, S; Sigurdsson, A

    2012-02-01

    The consequences of assuming a zero environmental covariance between a binary trait 'test-status' and a continuous trait on the estimates of genetic parameters by restricted maximum likelihood and Gibbs sampling and on response from genetic selection when the true environmental covariance deviates from zero were studied. Data were simulated for two traits (one that culling was based on and a continuous trait) using the following true parameters, on the underlying scale: h² = 0.4; r(A) = 0.5; r(E) = 0.5, 0.0 or -0.5. The selection on the continuous trait was applied to five subsequent generations where 25 sires and 500 dams produced 1500 offspring per generation. Mass selection was applied in the analysis of the effect on estimation of genetic parameters. Estimated breeding values were used in the study of the effect of genetic selection on response and accuracy. The culling frequency was either 0.5 or 0.8 within each generation. Each of 10 replicates included 7500 records on 'test-status' and 9600 animals in the pedigree file. Results from bivariate analysis showed unbiased estimates of variance components and genetic parameters when true r(E) = 0.0. For r(E) = 0.5, variance components (13-19% bias) and especially (50-80%) were underestimated for the continuous trait, while heritability estimates were unbiased. For r(E) = -0.5, heritability estimates of test-status were unbiased, while genetic variance and heritability of the continuous trait together with were overestimated (25-50%). The bias was larger for the higher culling frequency. Culling always reduced genetic progress from selection, but the genetic progress was found to be robust to the use of wrong parameter values of the true environmental correlation between test-status and the continuous trait. Use of a bivariate linear-linear model reduced bias in genetic evaluations, when data were subject to culling. © 2011 Blackwell Verlag GmbH.

  6. A Twin Study on Perceived Stress, Depressive Symptoms, and Marriage.

    PubMed

    Beam, Christopher R; Dinescu, Diana; Emery, Robert; Turkheimer, Eric

    2017-03-01

    Marriage is associated with reductions in both perceived stress and depressive symptoms, two constructs found to be influenced by common genetic effects. A study of sibling twins was used to test whether marriage decreases the proportion of variance in depressive symptoms accounted for by genetic and environmental effects underlying perceived stress. The sample consisted of 1,612 male and female twin pairs from the University of Washington Twin Registry. The stress-buffering role of marriage was tested relative to two unmarried groups: the never married and the divorced. Multivariate twin models showed that marriage reduced genetic effects of perceived stress on depressive symptoms but did not reduce environmental effects. The findings suggest a potential marital trade-off for women: access to a spouse may decrease genetic effects of perceived stress on depressive symptoms, although marital and family demands may increase environmental effects of perceived stress on depressive symptoms.

  7. A Twin Study on Perceived Stress, Depressive Symptoms, and Marriage

    PubMed Central

    Beam, Christopher R.; Dinescu, Diana; Emery, Robert E.; Turkheimer, Eric

    2017-01-01

    Marriage is associated with reductions in both perceived stress and depressive symptoms, two constructs found to be influenced by common genetic effects. A study of sibling twins was used to test whether marriage decreases the proportion of variance in depressive symptoms accounted for by genetic and environmental effects underlying perceived stress. The sample consisted of 1,612 male and female twin pairs from the University of Washington Twin Registry. The stress-buffering role of marriage was tested relative to two unmarried groups: the never married and the divorced. Multivariate twin models showed that marriage reduced genetic effects of perceived stress on depressive symptoms, but did not reduce environmental effects. The findings suggest a potential marital trade-off for women: Access to a spouse may decrease genetic effects of perceived stress on depressive symptoms, although marital and family demands may increase environmental effects of perceived stress on depressive symptoms. PMID:28661771

  8. Risk factors for psychopathology in children with intellectual disability: a prospective longitudinal population-based study.

    PubMed

    Wallander, J L; Dekker, M C; Koot, H M

    2006-04-01

    This study examined risk factors for the development of psychopathology in children with intellectual disability (ID) in the developmental, biological, family and social-ecological domains. A population sample of 968 children, aged 6-18, enrolled in special schools in The Netherlands for educable and trainable ID were assessed at Time 1. A random 58% were re-contacted about 1 year later, resulting in a sample of 474 at Time 2. Psychopathology was highly consistent over 1 year. Risk factors jointly accounted for significant, but small, portions of the variance in development of psychopathology. Child physical symptoms, family dysfunction and previous parental mental health treatment reported at Time 1 were uniquely associated with new psychopathology at Time 2. Prevention and early intervention research to find ways to reduce the incidence of psychopathology, possibly targeting family functioning, appear important.

  9. On the nature and nurture of intelligence and specific cognitive abilities: the more heritable, the more culture dependent.

    PubMed

    Kan, Kees-Jan; Wicherts, Jelte M; Dolan, Conor V; van der Maas, Han L J

    2013-12-01

    To further knowledge concerning the nature and nurture of intelligence, we scrutinized how heritability coefficients vary across specific cognitive abilities both theoretically and empirically. Data from 23 twin studies (combined N = 7,852) showed that (a) in adult samples, culture-loaded subtests tend to demonstrate greater heritability coefficients than do culture-reduced subtests; and (b) in samples of both adults and children, a subtest's proportion of variance shared with general intelligence is a function of its cultural load. These findings require an explanation because they do not follow from mainstream theories of intelligence. The findings are consistent with our hypothesis that heritability coefficients differ across cognitive abilities as a result of differences in the contribution of genotype-environment covariance. The counterintuitive finding that the most heritable abilities are the most culture-dependent abilities sheds a new light on the long-standing nature-nurture debate of intelligence.

  10. Family members' unique perspectives of the family: examining their scope, size, and relations to individual adjustment.

    PubMed

    Jager, Justin; Bornstein, Marc H; Putnick, Diane L; Hendricks, Charlene

    2012-06-01

    Using the McMaster Family Assessment Device (Epstein, Baldwin, & Bishop, 1983) and incorporating the perspectives of adolescent, mother, and father, this study examined each family member's "unique perspective" or nonshared, idiosyncratic view of the family. We used a modified multitrait-multimethod confirmatory factor analysis that (a) isolated for each family member's 6 reports of family dysfunction the nonshared variance (a combination of variance idiosyncratic to the individual and measurement error) from variance shared by 1 or more family members and (b) extracted common variance across each family member's set of nonshared variances. The sample included 128 families from a U.S. East Coast metropolitan area. Each family member's unique perspective generalized across his or her different reports of family dysfunction and accounted for a sizable proportion of his or her own variance in reports of family dysfunction. In addition, after holding level of dysfunction constant across families and controlling for a family's shared variance (agreement regarding family dysfunction), each family member's unique perspective was associated with his or her own adjustment. Future applications and competing alternatives for what these "unique perspectives" reflect about the family are discussed. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  11. Family Members' Unique Perspectives of the Family: Examining their Scope, Size, and Relations to Individual Adjustment

    PubMed Central

    Jager, Justin; Bornstein, Marc H.; Putnick, Diane L.; Hendricks, Charlene

    2012-01-01

    Using the Family Assessment Device (FAD; Epstein, Baldwin, & Bishop, 1983) and incorporating the perspectives of adolescent, mother, and father, this study examined each family member's “unique perspective” or non-shared, idiosyncratic view of the family. To do so we used a modified multitrait-multimethod confirmatory factor analysis that (1) isolated for each family member's six reports of family dysfunction the non-shared variance (a combination of variance idiosyncratic to the individual and measurement error) from variance shared by one or more family members and (2) extracted common variance across each family member's set of non-shared variances. The sample included 128 families from a U.S. East Coast metropolitan area. Each family member's unique perspective generalized across his or her different reports of family dysfunction and accounted for a sizable proportion of his or her own variance in reports of family dysfunction. Additionally, after holding level of dysfunction constant across families and controlling for a family's shared variance (agreement regarding family dysfunction), each family member's unique perspective was associated with his or her own adjustment. Future applications and competing alternatives for what these “unique perspectives” reflect about the family are discussed. PMID:22545933

  12. Estimating the mass variance in neutron multiplicity counting-A comparison of approaches

    NASA Astrophysics Data System (ADS)

    Dubi, C.; Croft, S.; Favalli, A.; Ocherashvili, A.; Pedersen, B.

    2017-12-01

    In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α, n) production and the induced fission source responsible for multiplication. This study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC-laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.
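
    The bootstrap option can be sketched in a few lines. The cycle counts below are simulated, and the mass function is a labeled placeholder: the real point-model inversion from singles, doubles and triples to effective mass involves detector efficiency, gate fractions and multiplication, and is not reproduced here. The sketch only shows how resampling the triggered counts propagates into a spread of mass estimates.

    ```python
    import numpy as np

    def factorial_moments(counts):
        """First three factorial moments of an event-triggered count distribution."""
        c = np.asarray(counts, dtype=float)
        m1 = c.mean()
        m2 = (c * (c - 1)).mean()
        m3 = (c * (c - 1) * (c - 2)).mean()
        return m1, m2, m3

    def mass_from_moments(m1, m2, m3):
        """Placeholder for the point-model inversion to effective 240Pu mass;
        a hypothetical stand-in so the bootstrap machinery runs."""
        return m2

    rng = np.random.default_rng(7)
    cycle_counts = rng.poisson(3.0, size=10_000)   # simulated triggered counts

    point_mass = mass_from_moments(*factorial_moments(cycle_counts))
    boot = []
    for _ in range(500):
        resample = rng.choice(cycle_counts, size=cycle_counts.size, replace=True)
        boot.append(mass_from_moments(*factorial_moments(resample)))
    mass_sd = np.std(boot, ddof=1)
    print(f"mass estimate {point_mass:.3f} +/- {mass_sd:.3f} (bootstrap 1-sigma)")
    ```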

  13. Estimating the mass variance in neutron multiplicity counting - A comparison of approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubi, C.; Croft, S.; Favalli, A.

    In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α,n) production and the induced fission source responsible for multiplication. This study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC-laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.

  14. Estimating the mass variance in neutron multiplicity counting - A comparison of approaches

    DOE PAGES

    Dubi, C.; Croft, S.; Favalli, A.; ...

    2017-09-14

    In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α,n) production and the induced fission source responsible for multiplication. This study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC-laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.

  15. Spatial and temporal study of nitrate concentration in groundwater by means of coregionalization

    USGS Publications Warehouse

    D'Agostino, V.; Greene, E.A.; Passarella, G.; Vurro, M.

    1998-01-01

    Spatial and temporal behavior of hydrochemical parameters in groundwater can be studied using tools provided by geostatistics. The cross-variogram can be used to measure the spatial increments between observations at two given times as a function of distance (spatial structure). Taking into account the existence of such a spatial structure, two different data sets (sampled at two different times), representing concentrations of the same hydrochemical parameter, can be analyzed by cokriging in order to reduce the uncertainty of the estimation. In particular, if one of the two data sets is a subset of the other (that is, an undersampled set), cokriging allows us to study the spatial distribution of the hydrochemical parameter at that time, while also considering the statistical characteristics of the full data set established at a different time. This paper presents an application of cokriging by using temporal subsets to study the spatial distribution of nitrate concentration in the aquifer of the Lucca Plain, central Italy. Three data sets of nitrate concentration in groundwater were collected during three different periods in 1991. The first set was from 47 wells, but the second and the third are undersampled and represent 28 and 27 wells, respectively. Comparing the result of cokriging with ordinary kriging showed an improvement of the uncertainty in terms of reducing the estimation variance. The application of cokriging to the undersampled data sets reduced the uncertainty in estimating nitrate concentration and at the same time decreased the cost of the field sampling and laboratory analysis.
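
    The empirical cross-variogram referred to above can be estimated from pairs of wells sampled at both times by binning pairwise increment products by separation distance. The sketch below uses synthetic well coordinates and nitrate values, not the Lucca Plain data, and stops short of the cokriging step itself.

    ```python
    import numpy as np

    def cross_variogram(coords, z1, z2, bins):
        """Empirical (cross-)variogram: 0.5 * mean of increment products per lag bin.
        When z1 and z2 are the same variable this reduces to the ordinary semivariogram."""
        d = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
        dz1 = z1[:, None] - z1[None, :]
        dz2 = z2[:, None] - z2[None, :]
        iu = np.triu_indices(len(z1), k=1)          # count each pair once
        h, prod = d[iu], 0.5 * (dz1 * dz2)[iu]
        gamma = [prod[(h >= lo) & (h < hi)].mean() for lo, hi in zip(bins[:-1], bins[1:])]
        centers = 0.5 * (bins[:-1] + bins[1:])
        return centers, np.array(gamma)

    # Synthetic example: 40 wells, nitrate sampled at two periods.
    rng = np.random.default_rng(3)
    xy = rng.uniform(0, 10_000, size=(40, 2))          # well coordinates (m)
    n1 = 20 + 0.002 * xy[:, 0] + rng.normal(0, 3, 40)  # period-1 nitrate (mg/L)
    n2 = n1 + rng.normal(0, 1.5, 40)                   # period-2, correlated with period 1

    lag_bins = np.linspace(0, 6000, 7)
    lags, gamma12 = cross_variogram(xy, n1, n2, lag_bins)
    print(np.round(lags), np.round(gamma12, 2))
    ```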

  16. iTemplate: A template-based eye movement data analysis approach.

    PubMed

    Xiao, Naiqi G; Lee, Kang

    2018-02-08

    Current eye movement data analysis methods rely on defining areas of interest (AOIs). Due to the fact that AOIs are created and modified manually, variances in their size, shape, and location are unavoidable. These variances affect not only the consistency of the AOI definitions, but also the validity of the eye movement analyses based on the AOIs. To reduce the variances in AOI creation and modification and achieve a procedure to process eye movement data with high precision and efficiency, we propose a template-based eye movement data analysis method. Using a linear transformation algorithm, this method registers the eye movement data from each individual stimulus to a template. Thus, users only need to create one set of AOIs for the template in order to analyze eye movement data, rather than creating a unique set of AOIs for all individual stimuli. This change greatly reduces the error caused by the variance from manually created AOIs and boosts the efficiency of the data analysis. Furthermore, this method can help researchers prepare eye movement data for some advanced analysis approaches, such as iMap. We have developed software (iTemplate) with a graphic user interface to make this analysis method available to researchers.
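
    The registration step can be sketched as a least-squares affine fit between matched landmark positions on an individual stimulus and the template. This is an illustrative reconstruction under stated assumptions, not the iTemplate implementation; landmark and fixation coordinates are hypothetical.

    ```python
    import numpy as np

    def fit_affine(src, dst):
        """Least-squares 2D affine transform mapping src landmarks onto dst (template)."""
        A = np.hstack([src, np.ones((len(src), 1))])      # (n, 3): x, y, 1
        params, *_ = np.linalg.lstsq(A, dst, rcond=None)  # (3, 2) transform matrix
        return params

    def apply_affine(params, points):
        return np.hstack([points, np.ones((len(points), 1))]) @ params

    # Hypothetical matched landmarks (eye corners, nose tip, mouth corners) on one
    # face stimulus and on the template image, in pixel coordinates.
    stimulus = np.array([[120, 140], [205, 138], [162, 200], [130, 255], [200, 252]], float)
    template = np.array([[100, 120], [200, 120], [150, 190], [110, 250], [190, 250]], float)

    T = fit_affine(stimulus, template)
    # Register raw fixation coordinates recorded on the stimulus into template space,
    # so a single set of template AOIs can be reused for every stimulus.
    fixations = np.array([[150.0, 160.0], [180.0, 230.0]])
    print(apply_affine(T, fixations))
    ```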

  17. Applying the Hájek Approach in Formula-Based Variance Estimation. Research Report. ETS RR-17-24

    ERIC Educational Resources Information Center

    Qian, Jiahe

    2017-01-01

    The variance formula derived for a two-stage sampling design without replacement employs the joint inclusion probabilities in the first-stage selection of clusters. One of the difficulties encountered in data analysis is the lack of information about such joint inclusion probabilities. One way to solve this issue is by applying Hájek's…

  18. Aspects of First Year Statistics Students' Reasoning When Performing Intuitive Analysis of Variance: Effects of Within- and Between-Group Variability

    ERIC Educational Resources Information Center

    Trumpower, David L.

    2015-01-01

    Making inferences about population differences based on samples of data, that is, performing intuitive analysis of variance (IANOVA), is common in everyday life. However, the intuitive reasoning of individuals when making such inferences (even following statistics instruction), often differs from the normative logic of formal statistics. The…

  19. Stroop Color-Word Interference Test: Normative data for the Latin American Spanish speaking adult population.

    PubMed

    Rivera, D; Perrin, P B; Stevens, L F; Garza, M T; Weil, C; Saracho, C P; Rodríguez, W; Rodríguez-Agudelo, Y; Rábago, B; Weiler, G; García de la Cadena, C; Longoni, M; Martínez, C; Ocampo-Barba, N; Aliaga, A; Galarza-Del-Angel, J; Guerra, A; Esenarro, L; Arango-Lasprilla, J C

    2015-01-01

    To generate normative data on the Stroop Test across 11 countries in Latin America, with country-specific adjustments for gender, age, and education, where appropriate. The sample consisted of 3,977 healthy adults who were recruited from Argentina, Bolivia, Chile, Cuba, El Salvador, Guatemala, Honduras, Mexico, Paraguay, Peru, and Puerto Rico. Each subject was administered the Stroop Test as part of a larger neuropsychological battery. A standardized five-step statistical procedure was used to generate the norms. The final multiple linear regression models explained 14-36% of the variance in Stroop Word scores, 12-41% of the variance in Stroop Color scores, 14-36% of the variance in Stroop Word-Color scores, and 4-15% of the variance in Stroop Interference scores. Although t-tests showed significant differences between men and women on the Stroop test, none of the countries had an effect size larger than 0.3. As a result, gender-adjusted norms were not generated. This is the first normative multicenter study conducted in Latin America to create norms for the Stroop Test in a Spanish-speaking sample. This study will therefore have important implications for the future of neuropsychology research and practice throughout the region.

  20. Risk assessment and stock market volatility in the Eurozone: 1986-2014

    NASA Astrophysics Data System (ADS)

    Menezes, Rui; Oliveira, Álvaro

    2015-04-01

    This paper studies the stock market return's volatility in the Eurozone as an input for evaluating the market risk. Stock market returns are endogenously determined by long-term interest rate changes and so is the return's conditional variance. The conditional variance is the time-dependent variance of the underlying variable. In other words, it is the variance of the returns measured at each moment t, so it changes through time depending on the specific market structure at each time observation. Thus, a multivariate EGARCH model is proposed to capture the complex nature of this network. By network, in this context, we mean the chain of stock exchanges that co-move and interact in such a way that a shock in one of them propagates up to the other ones (contagion). Previous studies provide evidence that the Eurozone stock exchanges are deeply integrated. The results indicate that asymmetry and leverage effects exist along with fat tails and endogeneity. In-sample and out-of-sample forecasting tests provide clear evidence that the multivariate EGARCH model performs better than the univariate counterpart to predict the behavior of returns both before and after the 2008 crisis.
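
    A univariate counterpart of such a model can be estimated in a few lines. The sketch below assumes the Python arch package and simulated daily returns in a pandas Series; it is only the single-series analogue of the multivariate EGARCH network described above, included to show where the asymmetry (leverage) term and fat-tailed errors enter.

    ```python
    import numpy as np
    import pandas as pd
    from arch import arch_model

    # Hypothetical daily returns (in percent) for one Eurozone index.
    rng = np.random.default_rng(42)
    returns = pd.Series(rng.normal(0.0, 1.0, 2500),
                        index=pd.bdate_range("2005-01-03", periods=2500))

    # EGARCH(1,1) with an asymmetry (leverage) term (o=1) and Student-t errors
    # to accommodate the fat tails reported for the Eurozone indices.
    model = arch_model(returns, mean="Constant", vol="EGARCH",
                       p=1, o=1, q=1, dist="t")
    result = model.fit(disp="off")
    print(result.summary())

    # One-step-ahead conditional variance forecast (out-of-sample risk input).
    print(result.forecast(horizon=1).variance.iloc[-1])
    ```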

  1. Analysis of Modified SMI Method for Adaptive Array Weight Control. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Dilsavor, Ronald Louis

    1989-01-01

    An adaptive array is used to receive a desired signal in the presence of weak interference signals which need to be suppressed. A modified sample matrix inversion (SMI) algorithm controls the array weights. The modification leads to increased interference suppression by subtracting a fraction of the noise power from the diagonal elements of the covariance matrix. The modified algorithm maximizes an intuitive power ratio criterion. The expected values and variances of the array weights, output powers, and power ratios as functions of the fraction and the number of snapshots are found and compared to computer simulation and real experimental array performance. Reduced-rank covariance approximations and errors in the estimated covariance are also described.
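
    The described modification, subtracting a fraction of the noise power from the diagonal of the estimated covariance before inversion, can be sketched as below. The array geometry, signal scenario, and the chosen fraction are hypothetical, and the sketch is not the thesis code; it only shows where the diagonal subtraction enters the SMI weight computation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_elem, n_snap = 8, 200
    d = 0.5  # element spacing in wavelengths (uniform linear array)

    def steering(theta_deg):
        k = np.arange(n_elem)
        return np.exp(2j * np.pi * d * k * np.sin(np.radians(theta_deg)))

    # Snapshots: weak interferer at 30 degrees plus unit-power noise (desired signal absent).
    noise = (rng.normal(size=(n_elem, n_snap)) + 1j * rng.normal(size=(n_elem, n_snap))) / np.sqrt(2)
    interf = 0.3 * steering(30.0)[:, None] * (rng.normal(size=n_snap) + 1j * rng.normal(size=n_snap)) / np.sqrt(2)
    X = noise + interf

    R_hat = X @ X.conj().T / n_snap   # sample covariance matrix
    sigma2 = 1.0                      # noise power, assumed known here
    frac = 0.5                        # fraction of noise power to subtract (hypothetical)

    # Modified SMI: subtract frac * sigma2 from the diagonal before inversion,
    # which deepens nulls on weak interference relative to plain SMI.
    R_mod = R_hat - frac * sigma2 * np.eye(n_elem)
    s = steering(0.0)                 # desired signal direction
    w = np.linalg.solve(R_mod, s)
    w /= (w.conj() @ s)               # unit response toward the desired signal

    print("response toward interferer (dB):",
          20 * np.log10(abs(w.conj() @ steering(30.0))))
    ```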

  2. Passive air sampling using semipermeable membrane devices at different wind-speeds in situ calibrated by performance reference compounds.

    PubMed

    Söderström, Hanna S; Bergqvist, Per-Anders

    2004-09-15

    Semipermeable membrane devices (SPMDs) are passive samplers used to measure the vapor phase of organic pollutants in air. This study tested whether extremely high wind-speeds during a 21-day sampling increased the sampling rates of polycyclic aromatic hydrocarbons (PAHs) and polychlorinated biphenyls (PCBs), and whether the release of performance reference compounds (PRCs) was related to the uptakes at different wind-speeds. Five samplers were deployed in an indoor, unheated, and dark wind tunnel with different wind-speeds at each site (6-50 m s(-1)). In addition, one sampler was deployed outside the wind tunnel and one outside the building. To test whether a sampler, designed to reduce the wind-speeds, decreased the uptake and release rates, each sampler in the wind tunnel included two SPMDs positioned inside a protective device and one unprotected SPMD outside the device. The highest amounts of PAHs and PCBs were found in the SPMDs exposed to the assumed highest wind-speeds. Thus, the SPMD sampling rates increased with increasing wind-speeds, indicating that the uptake was largely controlled by the boundary layer at the membrane-air interface. The coefficient of variance (introduced by the 21-day sampling and the chemical analysis) for the air concentrations of three PAHs and three PCBs, calculated using the PRC data, was 28-46%. Thus, the PRCs had a high ability to predict site effects of wind and assess the actual sampling situation. Comparison between protected and unprotected SPMDs showed that the sampler design reduced the wind-speed inside the devices and thereby the uptake and release rates.

  3. Variance in binary stellar population synthesis

    NASA Astrophysics Data System (ADS)

    Breivik, Katelyn; Larson, Shane L.

    2016-03-01

    In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations in less than a week, thus allowing a full exploration of the variance associated with a binary stellar evolution model.

  4. Studying Variance in the Galactic Ultra-compact Binary Population

    NASA Astrophysics Data System (ADS)

    Larson, Shane L.; Breivik, Katelyn

    2017-01-01

    In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations on week-long timescales, thus allowing a full exploration of the variance associated with a binary stellar evolution model.

  5. Assessment of metabolic phenotypic variability in children’s urine using 1H NMR spectroscopy

    NASA Astrophysics Data System (ADS)

    Maitre, Léa; Lau, Chung-Ho E.; Vizcaino, Esther; Robinson, Oliver; Casas, Maribel; Siskos, Alexandros P.; Want, Elizabeth J.; Athersuch, Toby; Slama, Remy; Vrijheid, Martine; Keun, Hector C.; Coen, Muireann

    2017-04-01

    The application of metabolic phenotyping in clinical and epidemiological studies is limited by a poor understanding of inter-individual, intra-individual and temporal variability in metabolic phenotypes. Using 1H NMR spectroscopy we characterised short-term variability in urinary metabolites measured from 20 children aged 8-9 years old. Daily spot morning, night-time and pooled (50:50 morning and night-time) urine samples across six days (18 samples per child) were analysed, and 44 metabolites quantified. Intraclass correlation coefficients (ICC) and mixed effect models were applied to assess the reproducibility and biological variance of metabolic phenotypes. Excellent analytical reproducibility and precision were demonstrated for the 1H NMR spectroscopic platform (median CV 7.2%). Pooled samples captured the best inter-individual variability, with an ICC of 0.40 (median). Trimethylamine, N-acetyl neuraminic acid, 3-hydroxyisobutyrate, 3-hydroxybutyrate/3-aminoisobutyrate, tyrosine, valine and 3-hydroxyisovalerate exhibited the highest stability with over 50% of variance specific to the child. The pooled sample was shown to capture the most inter-individual variance in the metabolic phenotype, which is of importance for molecular epidemiology study design. A substantial proportion of the variation in the urinary metabolome of children is specific to the individual, underlining the potential of such data to inform clinical and exposome studies conducted early in life.
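
    For readers who want to reproduce this kind of reproducibility calculation on their own data, a minimal sketch of an ICC estimate from a random-intercept mixed model is shown below; the data frame and column names are hypothetical, and the paper's mixed models may include further terms beyond a simple random intercept.

```python
import pandas as pd
import statsmodels.api as sm

def icc_from_mixed_model(df, value_col, subject_col):
    """Intraclass correlation of one metabolite: between-child variance
    divided by total variance, from a random-intercept mixed model."""
    model = sm.MixedLM.from_formula(f"{value_col} ~ 1", groups=subject_col, data=df)
    fit = model.fit(reml=True)
    var_between = float(fit.cov_re.iloc[0, 0])   # random-intercept (between-child) variance
    var_within = fit.scale                       # residual (within-child) variance
    return var_between / (var_between + var_within)

# hypothetical usage: repeated urine measurements per child
# df = pd.read_csv("urine_metabolites.csv")     # columns: child_id, trimethylamine, ...
# print(icc_from_mixed_model(df, "trimethylamine", "child_id"))
```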

  6. Teacher Burnout: A Comparison of Two Cultures Using Confirmatory Factor and Item Response Models

    PubMed Central

    Denton, Ellen-ge; Chaplin, William F.; Wall, Melanie

    2014-01-01

    The present study addresses teacher burnout and in particular cultural differences and similarities in burnout. We used the Maslach Burnout Inventory Education Survey (MBI-ES) as the starting point for developing a latent model of burnout in two cultures: Jamaica W.I. teachers (N= 150) and New York City teachers (N= 150). We confirm a latent 3-factor structure, using a subset of the items from the MBI-ES that adequately fit both samples. We tested different degrees of measurement invariance (model fit statistics, scale reliabilities, residual variances, item thresholds, and total variance) to describe and compare cultural differences. Results indicate some differences between the samples at the structure and item levels. We found that factor variances were slightly higher in the New York City teacher sample. Emotional Exhaustion (EE) was a more informative construct for differentiating among teachers at moderate levels of burnout, as opposed to extreme high or low levels of burnout, in both cultures. In contrast, Depersonalization in the Workplace (DW) was more informative at the more extreme levels of burnout among both teacher samples. By studying the influence of culture on the experience of burnout we can further our understanding of burnout and potentially discover factors that might prevent burnout among primary and secondary school teachers. PMID:25729572

  7. Intra-class correlation estimates for assessment of vitamin A intake in children.

    PubMed

    Agarwal, Girdhar G; Awasthi, Shally; Walter, Stephen D

    2005-03-01

    In many community-based surveys, multi-level sampling is inherent in the design. In the design of these studies, especially to calculate the appropriate sample size, investigators need good estimates of the intra-class correlation coefficient (ICC), along with the cluster size, to adjust for variance inflation due to clustering at each level. The present study used data on the assessment of clinical vitamin A deficiency and intake of vitamin A-rich food in children in a district in India. For the survey, 16 households were sampled from 200 villages nested within eight randomly-selected blocks of the district. ICCs and components of variance were estimated from a three-level hierarchical random effects analysis of variance model. Estimates of ICCs and variance components were obtained at village and block levels. Between-cluster variation was evident at each level of clustering. In these estimates, ICCs were inversely related to cluster size, but the design effect could be substantial for large clusters. At the block level, most ICC estimates were below 0.07. At the village level, many ICC estimates ranged from 0.014 to 0.45. These estimates may provide useful information for the design of epidemiological studies in which the sampled (or allocated) units range in size from households to large administrative zones.
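
    The practical use of such ICC estimates in sample-size planning is the design effect, DEFF = 1 + (m - 1) * ICC for clusters of size m; the numbers below are illustrative only.

```python
def design_effect(icc, cluster_size):
    """Variance inflation due to clustering: DEFF = 1 + (m - 1) * ICC."""
    return 1.0 + (cluster_size - 1) * icc

def clustered_sample_size(n_srs, icc, cluster_size):
    """Inflate a simple-random-sample size to allow for clustering."""
    return int(round(n_srs * design_effect(icc, cluster_size)))

# e.g. a village-level ICC of 0.05 with 16 households per village
print(design_effect(0.05, 16))               # 1.75
print(clustered_sample_size(400, 0.05, 16))  # 700
```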

  8. Modelling heterogeneity variances in multiple treatment comparison meta-analysis – Are informative priors the better solution?

    PubMed Central

    2013-01-01

    Background Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data-driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variances for all involved treatment comparisons are equal (i.e., the ‘common variance’ assumption). This approach ‘borrows strength’ for heterogeneity estimation across treatment comparisons, and thus, adds valuable precision when data are sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently, 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. Methods In this paper we describe four novel approaches to modeling heterogeneity variance - two novel model structures, and two approaches for use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. Results In both data sets, use of moderately informative variance priors constructed from the pairwise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. Lastly, treatment rankings were similar among the novel approaches, but considerably different when compared with the homogeneous variance approach. Conclusions MTC models using a homogeneous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice. PMID:23311298

  9. Geostatistical modeling of riparian forest microclimate and its implications for sampling

    USGS Publications Warehouse

    Eskelson, B.N.I.; Anderson, P.D.; Hagar, J.C.; Temesgen, H.

    2011-01-01

    Predictive models of microclimate under various site conditions in forested headwater stream - riparian areas are poorly developed, and sampling designs for characterizing underlying riparian microclimate gradients are sparse. We used riparian microclimate data collected at eight headwater streams in the Oregon Coast Range to compare ordinary kriging (OK), universal kriging (UK), and kriging with external drift (KED) for point prediction of mean maximum air temperature (Tair). Several topographic and forest structure characteristics were considered as site-specific parameters. Height above stream and distance to stream were the most important covariates in the KED models, which outperformed OK and UK in terms of root mean square error. Sample patterns were optimized based on the kriging variance and the weighted means of shortest distance criterion using the simulated annealing algorithm. The optimized sample patterns outperformed systematic sample patterns in terms of mean kriging variance mainly for small sample sizes. These findings suggest methods for increasing efficiency of microclimate monitoring in riparian areas.
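
    As a point of reference for the kriging comparison, the sketch below implements plain ordinary kriging with its kriging variance for a single prediction location; the exponential covariance model and its parameters are illustrative assumptions, and covariates such as height above stream would enter through a drift term in the KED models actually favoured by the study.

```python
import numpy as np

def exp_cov(h, sill=1.0, corr_range=500.0):
    """Exponential covariance model; sill and range are illustrative values."""
    return sill * np.exp(-h / corr_range)

def ordinary_kriging(xy_obs, z_obs, xy_new, cov=exp_cov):
    """Ordinary-kriging point prediction and kriging variance at one location."""
    n = len(z_obs)
    d_obs = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    d_new = np.linalg.norm(xy_obs - xy_new, axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(d_obs)            # covariances among observations
    A[-1, -1] = 0.0                   # unbiasedness (Lagrange) row/column
    b = np.append(cov(d_new), 1.0)
    sol = np.linalg.solve(A, b)
    lam, mu = sol[:n], sol[-1]
    pred = lam @ z_obs
    krig_var = cov(0.0) - lam @ cov(d_new) - mu
    return pred, krig_var

# toy usage with synthetic air-temperature observations on a 1 km square
rng = np.random.default_rng(1)
xy = rng.uniform(0, 1000, size=(30, 2))
z = 20 + 0.002 * xy[:, 0] + rng.normal(0, 0.3, 30)
print(ordinary_kriging(xy, z, np.array([500.0, 500.0])))
```

    Minimizing the mean of this kriging variance over candidate sample locations is the criterion the study optimizes with simulated annealing.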

  10. Uncertainty importance analysis using parametric moment ratio functions.

    PubMed

    Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen

    2014-02-01

    This article presents a new importance analysis framework, called parametric moment ratio function, for measuring the reduction of model output uncertainty when the distribution parameters of inputs are changed, and the emphasis is put on the mean and variance ratio functions with respect to the variances of model inputs. The proposed concepts efficiently guide the analyst to achieve a targeted reduction on the model output mean and variance by operating on the variances of model inputs. The unbiased and progressive unbiased Monte Carlo estimators are also derived for the parametric mean and variance ratio functions, respectively. Only a single set of samples is needed to implement the proposed importance analysis with these estimators, so the computational cost is independent of input dimensionality. An analytical test example with highly nonlinear behavior is introduced for illustrating the engineering significance of the proposed importance analysis technique and verifying the efficiency and convergence of the derived Monte Carlo estimators. Finally, the moment ratio function is applied to a planar 10-bar structure for achieving a targeted 50% reduction of the model output variance. © 2013 Society for Risk Analysis.
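
    The idea of a variance ratio function can be illustrated with a brute-force Monte Carlo sketch that simply re-simulates the model after shrinking one input's variance; the article's contribution is precisely to avoid this re-simulation with single-sample unbiased estimators, so the code below is illustrative only (the model and input distributions are made up).

```python
import numpy as np

def output_variance(model, means, sds, n=200_000, seed=0):
    """Crude Monte Carlo estimate of Var[Y] for independent normal inputs."""
    rng = np.random.default_rng(seed)
    x = rng.normal(means, sds, size=(n, len(means)))
    return np.var(model(x), ddof=1)

def variance_ratio(model, means, sds, idx, scale):
    """Ratio of output variances after scaling the variance of input `idx`
    (brute force; the paper derives single-sample unbiased estimators)."""
    sds_new = np.array(sds, dtype=float)
    sds_new[idx] *= np.sqrt(scale)          # scale the variance, not the sd
    return output_variance(model, means, sds_new) / output_variance(model, means, sds)

# toy nonlinear model Y = x0**2 + x0*x1 + x2: halving Var(x0) cuts Var(Y) markedly
model = lambda x: x[:, 0] ** 2 + x[:, 0] * x[:, 1] + x[:, 2]
print(variance_ratio(model, means=[0, 0, 0], sds=[1, 1, 1], idx=0, scale=0.5))
```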

  11. Support, shape and number of replicate samples for tree foliage analysis.

    PubMed

    Luyssaert, Sebastiaan; Mertens, Jan; Raitio, Hannu

    2003-06-01

    Many fundamental features of a sampling program are determined by the heterogeneity of the object under study and the settings for the error (alpha), the power (beta), the effect size (ES), the number of replicate samples, and sample support, which is a feature that is often overlooked. The number of replicates, alpha, beta, ES, and sample support are interconnected. The effect of the sample support and its shape on the required number of replicate samples was investigated by means of a resampling method. The method was applied to a simulated distribution of Cd in the crown of a Salix fragilis L. tree. Increasing the dimensions of the sample support results in a decrease in the variance of the element concentration under study. Analysis of the variance is often the foundation of statistical tests; therefore, valid statistical testing requires the use of a fixed sample support during the experiment. This requirement might be difficult to meet in time-series analyses and long-term monitoring programs. Sample supports with their largest dimension oriented along the direction of greatest heterogeneity, i.e. the crown height, give more accurate results than supports with other shapes. Taking the relationships between the sample support and the variance of the element concentrations in tree crowns into account provides guidelines for sampling efficiency in terms of precision and costs. In terms of time, the optimal support to test whether the average Cd concentration of the crown exceeds a threshold value is 0.405 m3 (alpha = 0.05, beta = 0.20, ES = 1.0 mg kg(-1) dry mass). The average weight of this support is 23 g dry mass, and 11 replicate samples need to be taken. It should be noted that in this case the optimal support applies to Cd under conditions similar to those of the simulation, but not necessarily to all examinations for this tree species, element, and hypothesis test.
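
    The interplay of alpha, beta, ES and the support-dependent variance can be illustrated with a standard normal-approximation sample-size formula; this is a generic sketch, not the resampling procedure of the study, and the standard deviation used is hypothetical.

```python
from math import ceil
from scipy.stats import norm

def n_replicates(sd, effect_size, alpha=0.05, beta=0.20, one_sided=True):
    """Normal-approximation number of replicates to detect a mean shift of
    `effect_size` given a replicate-to-replicate standard deviation `sd`."""
    z_alpha = norm.ppf(1 - alpha) if one_sided else norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(1 - beta)
    return ceil(((z_alpha + z_beta) * sd / effect_size) ** 2)

# hypothetical Cd standard deviation of 1.3 mg kg(-1) dry mass for a given support
print(n_replicates(sd=1.3, effect_size=1.0))   # 11
```

    Because a larger support lowers sd, the same formula shows why fewer replicates are needed as the support grows, at the price of heavier individual samples.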

  12. Mindfulness Is Associated with Increased Hedonic Capacity Among Chronic Pain Patients Receiving Extended Opioid Pharmacotherapy

    PubMed Central

    Thomas, Elizabeth A.; Garland, Eric L.

    2016-01-01

    Objectives Chronic pain and long-term opioid use may lead to a persistent deficit in hedonic capacity, characterized by increased sensitivity to aversive states and insensitivity to natural rewards. Dispositional mindfulness has been linked with improved emotion regulation and pain coping. The aim of the current study was to examine associations between dispositional mindfulness, hedonic capacity, and pain-related interference in an opioid-using chronic pain sample. Methods Data were obtained from a sample of 115 chronic pain patients on long-term opioid therapy (68% females, M age=48.3, SD=13.6) who completed the Five Facet Mindfulness Questionnaire (FFMQ), the Snaith Hamilton Anhedonia and Pleasure Scale (SHAPS), the Brief Pain Inventory (BPI), and a psychiatric assessment of major depression. Bivariate correlations, hierarchical multiple regression, and path analysis were used to determine if dispositional mindfulness scores (FFMQ) predicted variance in hedonic capacity (SHAPS), and if hedonic capacity mediated the association between mindfulness and pain interference. Results We observed a significant positive correlation between dispositional mindfulness and hedonic capacity scores, r=.33, p<.001. Hierarchical regression indicated that after controlling for pain interference and major depressive disorder diagnosis, dispositional mindfulness explained a significant portion of variance in hedonic capacity (Beta = .30, p< .01). The association between dispositional mindfulness and pain interference was mediated by hedonic capacity (b = −.011, SE=.005, 95% C.I. = −.004 to −.024, full model R2=.39). Discussion Findings indicate that dispositional mindfulness was associated with hedonic capacity among this chronic pain sample. In light of this association, it is plausible that interventions that increase mindfulness may reduce pain-related impairment among opioid-using patients by enhancing hedonic capacity. PMID:28060783

  13. Spatial variation of ultrafine particles and black carbon in two cities: results from a short-term measurement campaign.

    PubMed

    Klompmaker, Jochem O; Montagne, Denise R; Meliefste, Kees; Hoek, Gerard; Brunekreef, Bert

    2015-03-01

    Recently, short-term monitoring campaigns have been carried out to investigate the spatial variation of air pollutants within cities. Typically, such campaigns are based on short-term measurements at relatively large numbers of locations. It is largely unknown how well these studies capture the spatial variation of long term average concentrations. The aim of this study was to evaluate the within-site temporal and between-site spatial variation of the concentration of ultrafine particles (UFPs) and black carbon (BC) in a short-term monitoring campaign. In Amsterdam and Rotterdam (the Netherlands) measurements of number counts of particles larger than 10nm as a surrogate for UFP and BC were performed at 80 sites per city. Each site was measured in three different seasons of 2013 (winter, spring, summer). Sites were selected from busy urban streets, urban background, regional background and near highways, waterways and green areas, to obtain sufficient spatial contrast. Continuous measurements were performed for 30 min per site between 9 and 16 h to avoid traffic spikes of the rush hour. Concentrations were simultaneously measured at a reference site to correct for temporal variation. We calculated within- and between-site variance components reflecting temporal and spatial variations. Variance ratios were compared with previous campaigns with longer sampling durations per sample (24h to 14 days). The within-site variance was 2.17 and 2.44 times higher than the between-site variance for UFP and BC, respectively. In two previous studies based upon longer sampling duration much smaller variance ratios were found (0.31 and 0.09 for UFP and BC). Correction for temporal variation from a reference site was less effective for the short-term monitoring campaign compared to the campaigns with longer duration. Concentrations of BC and UFP were on average 1.6 and 1.5 times higher at urban street compared to urban background sites. No significant differences between the other site types and urban background were found. The high within to between-site concentration variances may result in the loss of precision and low explained variance when average concentrations from short-term campaigns are used to develop land use regression models. Copyright © 2014 Elsevier B.V. All rights reserved.
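
    The within- and between-site components quoted above come from a variance-components decomposition of repeated visits to the same sites; a minimal balanced one-way random-effects sketch is given below with synthetic data (the study's actual analysis also corrects each visit for temporal variation using the reference site).

```python
import numpy as np

def variance_components(conc):
    """Within- and between-site variance components from a balanced
    one-way random-effects layout.

    conc: (n_sites, n_repeats) array of temporally adjusted concentrations.
    """
    n_sites, n_rep = conc.shape
    site_means = conc.mean(axis=1)
    msw = conc.var(axis=1, ddof=1).mean()        # within-site mean square
    msb = n_rep * site_means.var(ddof=1)         # between-site mean square
    var_within = msw
    var_between = max((msb - msw) / n_rep, 0.0)  # truncate at zero if negative
    return var_within, var_between

# toy data: 80 sites, 3 seasonal 30-min visits each (site effect + visit noise)
rng = np.random.default_rng(2)
site_effect = rng.normal(0, 1.0, size=(80, 1))
obs = 10 + site_effect + rng.normal(0, 1.5, size=(80, 3))
vw, vb = variance_components(obs)
print(vw / vb)   # within/between variance ratio, cf. the 2.17 and 2.44 reported
```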

  14. Dynamic Repertoire of Intrinsic Brain States Is Reduced in Propofol-Induced Unconsciousness

    PubMed Central

    Liu, Xiping; Pillay, Siveshigan

    2015-01-01

    The richness of conscious experience is thought to scale with the size of the repertoire of causal brain states, and it may be diminished in anesthesia. We estimated the state repertoire from dynamic analysis of intrinsic functional brain networks in conscious sedated and unconscious anesthetized rats. Functional magnetic resonance images were obtained from 30-min whole-brain resting-state blood oxygen level-dependent (BOLD) signals at propofol infusion rates of 20 and 40 mg/kg/h, intravenously. Dynamic brain networks were defined at the voxel level by sliding window analysis of regional homogeneity (ReHo) or coincident threshold crossings (CTC) of the BOLD signal acquired in nine sagittal slices. The state repertoire was characterized by the temporal variance of the number of voxels with significant ReHo or positive CTC. From low to high propofol dose, the temporal variances of ReHo and CTC were reduced by 78%±20% and 76%±20%, respectively. Both baseline and propofol-induced reduction of CTC temporal variance increased from lateral to medial position. Group analysis showed a 20% reduction in the number of unique states at the higher propofol dose. Analysis of temporal variance in 12 anatomically defined regions of interest predicted that the largest changes occurred in visual cortex, parietal cortex, and caudate-putamen. The results suggest that the repertoire of large-scale brain states derived from the spatiotemporal dynamics of intrinsic networks is substantially reduced at an anesthetic dose associated with loss of consciousness. PMID:24702200

  15. Normative morphometric data for cerebral cortical areas over the lifetime of the adult human brain.

    PubMed

    Potvin, Olivier; Dieumegarde, Louis; Duchesne, Simon

    2017-08-01

    Proper normative data for anatomical measurements of cortical regions, allowing quantification of brain abnormalities, are lacking. We developed norms for regional cortical surface areas, thicknesses, and volumes based on cross-sectional MRI scans from 2713 healthy individuals aged 18 to 94 years using 23 samples provided by 21 independent research groups. The segmentation was conducted using FreeSurfer, a widely used and freely available automated segmentation software. Models predicting regional cortical estimates of each hemisphere were produced using age, sex, estimated total intracranial volume (eTIV), scanner manufacturer, magnetic field strength, and interactions as predictors. The explained variance for the left/right cortex was 76%/76% for surface area, 43%/42% for thickness, and 80%/80% for volume. The mean explained variance for all regions was 41% for surface areas, 27% for thicknesses, and 46% for volumes. Age, sex and eTIV predicted most of the explained variance for surface areas and volumes, while age was the main predictor for thicknesses. Scanner characteristics generally predicted a limited amount of variance, but this effect was stronger for thicknesses than surface areas and volumes. For new individuals, estimates of their expected surface area, thickness and volume based on their characteristics and the scanner characteristics can be obtained using the derived formulas, as well as Z score effect sizes denoting the extent of the deviation from the normative sample. Models predicting normative values were validated in independent samples of healthy adults, showing satisfactory validation R2. Deviations from the normative sample were measured in individuals with mild Alzheimer's disease and schizophrenia and expected patterns of deviations were observed. Crown Copyright © 2017. Published by Elsevier Inc. All rights reserved.
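
    Applying the published formulas to a new individual reduces to a predicted value and a residual standard deviation; the sketch below shows only that final Z-score step, with made-up numbers rather than the paper's coefficients.

```python
def normative_z(observed, predicted, residual_sd):
    """Deviation of a new individual from the normative model, expressed
    as a Z score (observed minus predicted, in residual-SD units)."""
    return (observed - predicted) / residual_sd

# hypothetical example: a regional cortical volume predicted from age, sex,
# eTIV and scanner terms of the published formulas (all numbers illustrative)
print(round(normative_z(observed=4150.0, predicted=4600.0, residual_sd=310.0), 2))  # -1.45
```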

  16. Non-stationary internal tides observed with satellite altimetry

    NASA Astrophysics Data System (ADS)

    Ray, R. D.; Zaron, E. D.

    2011-09-01

    Temporal variability of the internal tide is inferred from a 17-year combined record of Topex/Poseidon and Jason satellite altimeters. A global sampling of along-track sea-surface height wavenumber spectra finds that non-stationary variance is generally 25% or less of the average variance at wavenumbers characteristic of mode-1 tidal internal waves. With some exceptions the non-stationary variance does not exceed 0.25 cm2. The mode-2 signal, where detectable, contains a larger fraction of non-stationary variance, typically 50% or more. Temporal subsetting of the data reveals interannual variability barely significant compared with tidal estimation error from 3-year records. Comparison of summer vs. winter conditions shows only one region of noteworthy seasonal changes, the northern South China Sea. Implications for the anticipated SWOT altimeter mission are briefly discussed.

  17. Non-Stationary Internal Tides Observed with Satellite Altimetry

    NASA Technical Reports Server (NTRS)

    Ray, Richard D.; Zaron, E. D.

    2011-01-01

    Temporal variability of the internal tide is inferred from a 17-year combined record of Topex/Poseidon and Jason satellite altimeters. A global sampling of along-track sea-surface height wavenumber spectra finds that non-stationary variance is generally 25% or less of the average variance at wavenumbers characteristic of mode-1 tidal internal waves. With some exceptions the non-stationary variance does not exceed 0.25 cm2. The mode-2 signal, where detectable, contains a larger fraction of non-stationary variance, typically 50% or more. Temporal subsetting of the data reveals interannual variability barely significant compared with tidal estimation error from 3-year records. Comparison of summer vs. winter conditions shows only one region of noteworthy seasonal changes, the northern South China Sea. Implications for the anticipated SWOT altimeter mission are briefly discussed.

  18. Numerically stable algorithm for combining census and sample estimates with the multivariate composite estimator

    Treesearch

    R. L. Czaplewski

    2009-01-01

    The minimum variance multivariate composite estimator is a relatively simple sequential estimator for complex sampling designs (Czaplewski 2009). Such designs combine a probability sample of expensive field data with multiple censuses and/or samples of relatively inexpensive multi-sensor, multi-resolution remotely sensed data. Unfortunately, the multivariate composite...

  19. Linear score tests for variance components in linear mixed models and applications to genetic association studies.

    PubMed

    Qu, Long; Guennel, Tobias; Marshall, Scott L

    2013-12-01

    Following the rapid development of genome-scale genotyping technologies, genetic association mapping has become a popular tool to detect genomic regions responsible for certain (disease) phenotypes, especially in early-phase pharmacogenomic studies with limited sample size. In response to such applications, a good association test needs to be (1) applicable to a wide range of possible genetic models, including, but not limited to, the presence of gene-by-environment or gene-by-gene interactions and non-linearity of a group of marker effects, (2) accurate in small samples, fast to compute on the genomic scale, and amenable to large scale multiple testing corrections, and (3) reasonably powerful to locate causal genomic regions. The kernel machine method represented in linear mixed models provides a viable solution by transforming the problem into testing the nullity of variance components. In this study, we consider score-based tests by choosing a statistic linear in the score function. When the model under the null hypothesis has only one error variance parameter, our test is exact in finite samples. When the null model has more than one variance parameter, we develop a new moment-based approximation that performs well in simulations. Through simulations and analysis of real data, we demonstrate that the new test possesses most of the aforementioned characteristics, especially when compared to existing quadratic score tests or restricted likelihood ratio tests. © 2013, The International Biometric Society.

  20. Impact of multicollinearity on small sample hydrologic regression models

    NASA Astrophysics Data System (ADS)

    Kroll, Charles N.; Song, Peter

    2013-06-01

    Often hydrologic regression models are developed with ordinary least squares (OLS) procedures. The use of OLS with highly correlated explanatory variables produces multicollinearity, which creates highly sensitive parameter estimators with inflated variances and improper model selection. It is not clear how to best address multicollinearity in hydrologic regression models. Here a Monte Carlo simulation is developed to compare four techniques to address multicollinearity: OLS, OLS with variance inflation factor screening (VIF), principal component regression (PCR), and partial least squares regression (PLS). The performance of these four techniques was observed for varying sample sizes, correlation coefficients between the explanatory variables, and model error variances consistent with hydrologic regional regression models. The negative effects of multicollinearity are magnified at smaller sample sizes, higher correlations between the variables, and larger model error variances (smaller R2). The Monte Carlo simulation indicates that if the true model is known, multicollinearity is present, and the estimation and statistical testing of regression parameters are of interest, then PCR or PLS should be employed. If the model is unknown, or if the interest is solely in model predictions, it is recommended that OLS be employed since using more complicated techniques did not produce any improvement in model performance. A leave-one-out cross-validation case study was also performed using low-streamflow data sets from the eastern United States. Results indicate that OLS with stepwise selection generally produces models across study regions with varying levels of multicollinearity that are as good as biased regression techniques such as PCR and PLS.
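
    A hedged sketch of the four candidate techniques on synthetic collinear data is given below, using statsmodels for VIF screening and scikit-learn for PCR and PLS; the VIF threshold, component counts and the data-generating model are illustrative, not those of the Monte Carlo experiment.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.cross_decomposition import PLSRegression
from statsmodels.stats.outliers_influence import variance_inflation_factor

# synthetic regional-regression data with two highly correlated basin descriptors
rng = np.random.default_rng(3)
n = 40
x1 = rng.normal(size=n)
x2 = 0.95 * x1 + 0.1 * rng.normal(size=n)          # nearly collinear with x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])
y = 1.0 * x1 + 0.5 * x3 + rng.normal(0, 0.5, n)

# OLS with VIF screening: drop explanatory variables whose VIF exceeds a threshold
Xc = np.column_stack([np.ones(n), X])              # intercept column for VIF computation
vifs = [variance_inflation_factor(Xc, j) for j in range(1, Xc.shape[1])]
keep = [j for j, v in enumerate(vifs) if v < 10]
ols_vif = LinearRegression().fit(X[:, keep], y)

# principal component regression: regress on the leading principal components
pcs = PCA(n_components=2).fit_transform(X)
pcr = LinearRegression().fit(pcs, y)

# partial least squares regression with two components
pls = PLSRegression(n_components=2).fit(X, y)

print("VIFs:", np.round(vifs, 1), "columns kept after screening:", keep)
print("R2 (VIF-screened OLS, PCR, PLS):",
      round(ols_vif.score(X[:, keep], y), 3),
      round(pcr.score(pcs, y), 3),
      round(pls.score(X, y), 3))
```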

  1. Costs for Hospital Stays in the United States, 2011

    MedlinePlus

    ... detailed description of HCUP, more information on the design of the Nationwide Inpatient Sample (NIS), and methods ... Nationwide Inpatient Sample (NIS) Variances, 2001. HCUP Methods Series Report #2003-2. Online. June 2005 (revised June ...

  2. Improving EEG-Based Motor Imagery Classification for Real-Time Applications Using the QSA Method.

    PubMed

    Batres-Mendoza, Patricia; Ibarra-Manzano, Mario A; Guerra-Hernandez, Erick I; Almanza-Ojeda, Dora L; Montoro-Sanjose, Carlos R; Romero-Troncoso, Rene J; Rostro-Gonzalez, Horacio

    2017-01-01

    We present an improvement to the quaternion-based signal analysis (QSA) technique to extract electroencephalography (EEG) signal features with a view to developing real-time applications, particularly in motor imagery (IM) cognitive processes. The proposed methodology (iQSA, improved QSA) extracts features such as the average, variance, homogeneity, and contrast of EEG signals related to motor imagery in a more efficient manner (i.e., by reducing the number of samples needed to classify the signal and improving the classification percentage) compared to the original QSA technique. Specifically, we can sample the signal in variable time periods (from 0.5 s to 3 s, in half-a-second intervals) to determine the relationship between the number of samples and their effectiveness in classifying signals. In addition, to strengthen the classification process a number of boosting-technique-based decision trees were implemented. The results show an 82.30% accuracy rate for 0.5 s samples and 73.16% for 3 s samples. This is a significant improvement compared to the original QSA technique that offered results from 33.31% to 40.82% without sampling window and from 33.44% to 41.07% with sampling window, respectively. We can thus conclude that iQSA is better suited to develop real-time applications.

  3. Improving EEG-Based Motor Imagery Classification for Real-Time Applications Using the QSA Method

    PubMed Central

    Batres-Mendoza, Patricia; Guerra-Hernandez, Erick I.; Almanza-Ojeda, Dora L.; Montoro-Sanjose, Carlos R.

    2017-01-01

    We present an improvement to the quaternion-based signal analysis (QSA) technique to extract electroencephalography (EEG) signal features with a view to developing real-time applications, particularly in motor imagery (IM) cognitive processes. The proposed methodology (iQSA, improved QSA) extracts features such as the average, variance, homogeneity, and contrast of EEG signals related to motor imagery in a more efficient manner (i.e., by reducing the number of samples needed to classify the signal and improving the classification percentage) compared to the original QSA technique. Specifically, we can sample the signal in variable time periods (from 0.5 s to 3 s, in half-a-second intervals) to determine the relationship between the number of samples and their effectiveness in classifying signals. In addition, to strengthen the classification process a number of boosting-technique-based decision trees were implemented. The results show an 82.30% accuracy rate for 0.5 s samples and 73.16% for 3 s samples. This is a significant improvement compared to the original QSA technique that offered results from 33.31% to 40.82% without sampling window and from 33.44% to 41.07% with sampling window, respectively. We can thus conclude that iQSA is better suited to develop real-time applications. PMID:29348744

  4. Genetic consequences of polygyny and social structure in an Indian fruit bat, Cynopterus sphinx. II. Variance in male mating success and effective population size.

    PubMed

    Storz, J F; Bhat, H R; Kunz, T H

    2001-06-01

    Variance in reproductive success is a primary determinant of genetically effective population size (Ne), and thus has important implications for the role of genetic drift in the evolutionary dynamics of animal taxa characterized by polygynous mating systems. Here we report the results of a study designed to test the hypothesis that polygynous mating results in significantly reduced Ne in an age-structured population. This hypothesis was tested in a natural population of a harem-forming fruit bat, Cynopterus sphinx (Chiroptera: Pteropodidae), in western India. The influence of the mating system on the ratio of variance Ne to adult census number (N) was assessed using a mathematical model designed for age-structured populations that incorporated demographic and genetic data. Male mating success was assessed by means of direct and indirect paternity analysis using 10-locus microsatellite genotypes of adults and progeny from two consecutive breeding periods (n = 431 individually marked bats). Combined results from both analyses were used to infer the effective number of male parents in each breeding period. The relative proportion of successfully reproducing males and the size distribution of paternal sibships comprising each offspring cohort revealed an extremely high within-season variance in male mating success (up to 9.2 times higher than Poisson expectation). The resultant estimate of Ne/N for the C. sphinx study population was 0.42. As a result of polygynous mating, the predicted rate of drift (1/2Ne per generation) was 17.6% higher than expected from a Poisson distribution of male mating success. However, the estimated Ne/N was well within the 0.25-0.75 range expected for age-structured populations under normal demographic conditions. The life-history schedule of C. sphinx is characterized by a disproportionately short sexual maturation period scaled to adult life span. Consequently, the influence of polygynous mating on Ne/N is mitigated by the extensive overlap of generations. In C. sphinx, turnover of breeding males between seasons ensures a broader sampling of the adult male gamete pool than expected from the variance in mating success within a single breeding period.
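
    To see how variance in mating success depresses Ne, the classical single-generation variance effective size, Ne = (N*k_bar - 1) / (k_bar - 1 + Vk/k_bar), can be evaluated for Poisson versus strongly skewed offspring numbers. This textbook approximation ignores the age structure and overlapping generations that the study explicitly models, so the numbers below are illustrative only.

```python
def variance_effective_size(n_adults, mean_k, var_k):
    """Classical single-generation variance effective size,
    Ne = (N * mean_k - 1) / (mean_k - 1 + var_k / mean_k).
    Not the age-structured model used in the study."""
    return (n_adults * mean_k - 1) / (mean_k - 1 + var_k / mean_k)

N, k_bar = 200, 2.0
ne_poisson = variance_effective_size(N, k_bar, var_k=k_bar)        # Poisson offspring numbers
ne_skewed = variance_effective_size(N, k_bar, var_k=9.2 * k_bar)   # strongly skewed mating success
print(round(ne_poisson, 1), round(ne_skewed, 1))
print("drift per generation:", 1 / (2 * ne_poisson), 1 / (2 * ne_skewed))
```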

  5. Recovering Wood and McCarthy's ERP-prototypes by means of ERP-specific procrustes-rotation.

    PubMed

    Beauducel, André

    2018-02-01

    The misallocation of treatment variance on the wrong component has been discussed in the context of temporal principal component analysis of event-related potentials. There is, until now, no rotation method that can perfectly recover Wood and McCarthy's prototypes without making use of additional information on treatment effects. In order to close this gap, two new methods for component rotation were proposed. After Varimax prerotation, the first method identifies very small slopes of successive loadings. The corresponding loadings are set to zero in a target matrix for event-related orthogonal partial Procrustes (EPP) rotation. The second method generates Gaussian normal distributions around the peaks of the Varimax loadings and performs orthogonal Procrustes rotation towards these Gaussian distributions. Oblique versions of this Gaussian event-related Procrustes (GEP) rotation and of EPP rotation are based on Promax rotation. A simulation study revealed that the new orthogonal rotations recover Wood and McCarthy's prototypes and eliminate misallocation of treatment variance. In an additional simulation study with a more pronounced overlap of the prototypes, GEP Promax rotation reduced the variance misallocation slightly more than EPP Promax rotation. In comparison with existing methods, Varimax and conventional Promax rotations resulted in substantial misallocations of variance in simulation studies when components had temporal overlap, whereas a substantially reduced misallocation of variance occurred with the EPP, EPP Promax, GEP, and GEP Promax rotations. Misallocation of variance can thus be minimized by means of the new rotation methods; making use of information on the temporal order of the loadings may allow for improvements of the rotation of temporal PCA components. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Biochemical phenotypes to discriminate microbial subpopulations and improve outbreak detection.

    PubMed

    Galar, Alicia; Kulldorff, Martin; Rudnick, Wallis; O'Brien, Thomas F; Stelling, John

    2013-01-01

    Clinical microbiology laboratories worldwide constitute an invaluable resource for monitoring emerging threats and the spread of antimicrobial resistance. We studied the growing number of biochemical tests routinely performed on clinical isolates to explore their value as epidemiological markers. Microbiology laboratory results from January 2009 through December 2011 from a 793-bed hospital stored in WHONET were examined. Variables included patient location, collection date, organism, and 47 biochemical and 17 antimicrobial susceptibility test results reported by Vitek 2. To identify biochemical tests that were particularly valuable (stable with repeat testing, but good variability across the species) or problematic (inconsistent results with repeat testing), three types of variance analyses were performed on isolates of K. pneumoniae: descriptive analysis of discordant biochemical results in same-day isolates, an average within-patient variance index, and generalized linear mixed model variance component analysis. 4,200 isolates of K. pneumoniae were identified from 2,485 patients, 32% of whom had multiple isolates. The first two variance analyses highlighted SUCT, TyrA, GlyA, and GGT as "nuisance" biochemicals for which discordant within-patient test results impacted a high proportion of patient results, while dTAG had relatively good within-patient stability with good heterogeneity across the species. Variance component analyses confirmed the relative stability of dTAG, and identified additional biochemicals such as PHOS with a large between-patient to within-patient variance ratio. A reduced subset of biochemicals improved the robustness of strain definition for carbapenem-resistant K. pneumoniae. Surveillance analyses suggest that the reduced biochemical profile could improve the timeliness and specificity of outbreak detection algorithms. The statistical approaches explored can improve the robust recognition of microbial subpopulations with routinely available biochemical test results, of value in the timely detection of outbreak clones and evolutionarily important genetic events.

  7. Constraining Particle Variation in Lunar Regolith for Simulant Design

    NASA Technical Reports Server (NTRS)

    Schrader, Christian M.; Rickman, Doug; Stoeser, Douglas; Hoelzer, Hans

    2008-01-01

    Simulants are used by the lunar engineering community to develop and test technologies for In Situ Resource Utilization (ISRU), excavation and drilling, and for mitigation of hazards to machinery and human health. Working with the United States Geological Survey (USGS), other NASA centers, private industry and academia, Marshall Space Flight Center (MSFC) is leading NASA's lunar regolith simulant program. There are two main efforts: simulant production and simulant evaluation. This work requires a highly detailed understanding of regolith particle type, size, and shape distribution, and of bulk density. The project has developed Figure of Merit (FoM) algorithms to quantitatively compare these characteristics between two materials. The FoM can be used to compare two lunar regolith samples, regolith to simulant, or two parcels of simulant. In work presented here, we use the FoM algorithm to examine the variance of particle type in Apollo 16 highlands regolith core and surface samples. For this analysis we have used internally consistent particle type data for the 90-150 μm fraction of Apollo core 64001/64002 from station 4, core 60009/60010 from station 10, and surface samples from various Apollo 16 stations. We calculate mean modal compositions for each core and for the group of surface samples and quantitatively compare samples of each group to its mean as a measurement of within-group variance; we also calculate an FoM for every sample against the mean composition of 64001/64002. This gives variation with depth at two locations and between Apollo 16 stations. Of the tested groups, core 60009/60010 has the highest internal variance with an average FoM score of 0.76 and core 64001/64002 has the lowest with an average FoM of 0.92. The surface samples have a low but intermediate internal variance with an average FoM of 0.79. FoMs calculated against the 64001/64002 mean reference composition range from 0.79-0.97 for 64001/64002, from 0.41-0.91 for 60009/60010, and from 0.54-0.93 for the surface samples. Six samples fall below 0.70, and they are also the least mature (i.e., have the lowest Is/FeO). Because agglutinates are the dominant particle type and the agglutinate population increases with sample maturity (Is/FeO), the maturity of the sample relative to the reference is a prime determinant of the particle type FoM score within these highland samples.
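
    The abstract does not spell out the FoM algorithm itself, so the sketch below uses one simple, generic similarity index between two modal (particle-type) compositions purely to illustrate the kind of comparison being made; it is not the project's FoM, and the modal fractions are hypothetical.

```python
import numpy as np

def composition_similarity(sample, reference):
    """A figure-of-merit-style score in [0, 1]: one minus half the sum of
    absolute differences between two modal compositions (fractions that
    sum to 1). Illustrative only; not the project's FoM algorithm."""
    p = np.asarray(sample, dtype=float)
    q = np.asarray(reference, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    return 1.0 - 0.5 * np.abs(p - q).sum()

# hypothetical modal fractions: agglutinates, plagioclase, lithics, glass, other
reference = [0.55, 0.20, 0.12, 0.08, 0.05]
immature = [0.30, 0.35, 0.18, 0.10, 0.07]
print(composition_similarity(reference, reference))  # 1.0
print(composition_similarity(immature, reference))   # lower score for the immature sample
```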

  8. Heritability of physical activity traits in Brazilian families: the Baependi Heart Study

    PubMed Central

    2011-01-01

    Background It is commonly recognized that physical activity has familial aggregation; however, the genetic influences on physical activity phenotypes are not well characterized. This study aimed to (1) estimate the heritability of physical activity traits in Brazilian families; and (2) investigate whether genetic and environmental variance components contribute differently to the expression of these phenotypes in males and females. Methods The sample that constitutes the Baependi Heart Study is comprised of 1,693 individuals in 95 Brazilian families. The phenotypes were self-reported in a questionnaire based on the WHO-MONICA instrument. Variance component approaches, implemented in the SOLAR (Sequential Oligogenic Linkage Analysis Routines) computer package, were applied to estimate the heritability and to evaluate the heterogeneity of variance components by gender on the studied phenotypes. Results The heritability estimates were intermediate (35%) for weekly physical activity among non-sedentary subjects (weekly PA_NS), and low (9-14%) for sedentarism, weekly physical activity (weekly PA), and level of daily physical activity (daily PA). Significant evidence for heterogeneity in variance components by gender was observed for the sedentarism and weekly PA phenotypes. No significant gender differences in genetic or environmental variance components were observed for the weekly PA_NS trait. The daily PA phenotype was predominantly influenced by environmental factors, with larger effects in males than in females. Conclusions Heritability estimates for physical activity phenotypes in this sample of the Brazilian population were significant in both males and females, and varied from low to intermediate magnitude. Significant evidence for heterogeneity in variance components by gender was observed. These data add to the knowledge of the physical activity traits in the Brazilian study population, and are concordant with the notion of significant biological determination in active behavior. PMID:22126647

  9. Heritability of physical activity traits in Brazilian families: the Baependi Heart Study.

    PubMed

    Horimoto, Andréa R V R; Giolo, Suely R; Oliveira, Camila M; Alvim, Rafael O; Soler, Júlia P; de Andrade, Mariza; Krieger, José E; Pereira, Alexandre C

    2011-11-29

    It is commonly recognized that physical activity has familial aggregation; however, the genetic influences on physical activity phenotypes are not well characterized. This study aimed to (1) estimate the heritability of physical activity traits in Brazilian families; and (2) investigate whether genetic and environmental variance components contribute differently to the expression of these phenotypes in males and females. The sample that constitutes the Baependi Heart Study is comprised of 1,693 individuals in 95 Brazilian families. The phenotypes were self-reported in a questionnaire based on the WHO-MONICA instrument. Variance component approaches, implemented in the SOLAR (Sequential Oligogenic Linkage Analysis Routines) computer package, were applied to estimate the heritability and to evaluate the heterogeneity of variance components by gender on the studied phenotypes. The heritability estimates were intermediate (35%) for weekly physical activity among non-sedentary subjects (weekly PA_NS), and low (9-14%) for sedentarism, weekly physical activity (weekly PA), and level of daily physical activity (daily PA). Significant evidence for heterogeneity in variance components by gender was observed for the sedentarism and weekly PA phenotypes. No significant gender differences in genetic or environmental variance components were observed for the weekly PA_NS trait. The daily PA phenotype was predominantly influenced by environmental factors, with larger effects in males than in females. Heritability estimates for physical activity phenotypes in this sample of the Brazilian population were significant in both males and females, and varied from low to intermediate magnitude. Significant evidence for heterogeneity in variance components by gender was observed. These data add to the knowledge of the physical activity traits in the Brazilian study population, and are concordant with the notion of significant biological determination in active behavior.
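
    The heritability reported here is the share of phenotypic variance attributed to additive genetic effects; the sketch below shows only that final ratio, taking the variance components as given, whereas SOLAR estimates them from the family structure.

```python
def narrow_sense_heritability(var_additive, var_environment):
    """h2 = additive genetic variance / total phenotypic variance."""
    return var_additive / (var_additive + var_environment)

# hypothetical variance components for weekly PA among non-sedentary subjects
print(round(narrow_sense_heritability(var_additive=0.35, var_environment=0.65), 2))  # 0.35
```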

  10. GPZ: non-stationary sparse Gaussian processes for heteroscedastic uncertainty estimation in photometric redshifts

    NASA Astrophysics Data System (ADS)

    Almosallam, Ibrahim A.; Jarvis, Matt J.; Roberts, Stephen J.

    2016-10-01

    The next generation of cosmology experiments will be required to use photometric redshifts rather than spectroscopic redshifts. Obtaining accurate and well-characterized photometric redshift distributions is therefore critical for Euclid, the Large Synoptic Survey Telescope and the Square Kilometre Array. However, determining accurate variance predictions alongside single point estimates is crucial, as they can be used to optimize the sample of galaxies for the specific experiment (e.g. weak lensing, baryon acoustic oscillations, supernovae), trading off between completeness and reliability in the galaxy sample. The various sources of uncertainty in measurements of the photometry and redshifts put a lower bound on the accuracy that any model can hope to achieve. The intrinsic uncertainty associated with estimates is often non-uniform and input-dependent, commonly known in statistics as heteroscedastic noise. However, existing approaches are susceptible to outliers and do not take into account variance induced by non-uniform data density and in most cases require manual tuning of many parameters. In this paper, we present a Bayesian machine learning approach that jointly optimizes the model with respect to both the predictive mean and variance, which we refer to as Gaussian processes for photometric redshifts (GPZ). The predictive variance of the model takes into account both the variance due to data density and photometric noise. Using the Sloan Digital Sky Survey (SDSS) DR12 data, we show that our approach substantially outperforms other machine learning methods for photo-z estimation and their associated variance, such as TPZ and ANNZ2. We provide MATLAB and Python implementations that are available to download at https://github.com/OxfordML/GPz.

  11. Improving the precision of lake ecosystem metabolism estimates by identifying predictors of model uncertainty

    USGS Publications Warehouse

    Rose, Kevin C.; Winslow, Luke A.; Read, Jordan S.; Read, Emily K.; Solomon, Christopher T.; Adrian, Rita; Hanson, Paul C.

    2014-01-01

    Diel changes in dissolved oxygen are often used to estimate gross primary production (GPP) and ecosystem respiration (ER) in aquatic ecosystems. Despite the widespread use of this approach to understand ecosystem metabolism, we are only beginning to understand the degree and underlying causes of uncertainty for metabolism model parameter estimates. Here, we present a novel approach to improve the precision and accuracy of ecosystem metabolism estimates by identifying physical metrics that indicate when metabolism estimates are highly uncertain. Using datasets from seventeen instrumented GLEON (Global Lake Ecological Observatory Network) lakes, we discovered that many physical characteristics correlated with uncertainty, including PAR (photosynthetically active radiation, 400-700 nm), daily variance in Schmidt stability, and wind speed. Low PAR was a consistent predictor of high variance in GPP model parameters, but also corresponded with low ER model parameter variance. We identified a threshold (30% of clear sky PAR) below which GPP parameter variance increased rapidly and was significantly greater in nearly all lakes compared with variance on days with PAR levels above this threshold. The relationship between daily variance in Schmidt stability and GPP model parameter variance depended on trophic status, whereas daily variance in Schmidt stability was consistently positively related to ER model parameter variance. Wind speeds in the range of ~0.8-3 m s–1 were consistent predictors of high variance for both GPP and ER model parameters, with greater uncertainty in eutrophic lakes. Our findings can be used to reduce ecosystem metabolism model parameter uncertainty and identify potential sources of that uncertainty.

  12. Conserving genomic variability in large mammals: Effect of population fluctuations and variance in male reproductive success on variability in Yellowstone bison

    Treesearch

    Andres Perez-Figueroa; Rick L. Wallen; Tiago Antao; Jason A. Coombs; Michael K. Schwartz; P. J. White; Gordon Luikart

    2012-01-01

    Loss of genetic variation through genetic drift can reduce population viability. However, relatively little is known about loss of variation caused by the combination of fluctuating population size and variance in reproductive success in age structured populations. We built an individual-based computer simulation model to examine how actual culling and hunting...

  13. Commonly Unrecognized Error Variance in Statewide Assessment Programs: Sources of Error Variance and What Can Be Done to Reduce Them

    ERIC Educational Resources Information Center

    Brockmann, Frank

    2011-01-01

    State testing programs today are more extensive than ever, and their results are required to serve more purposes and high-stakes decisions than one might have imagined. Assessment results are used to hold schools, districts, and states accountable for student performance and to help guide a multitude of important decisions. This report describes…

  14. Simultaneous estimation of cross-validation errors in least squares collocation applied for statistical testing and evaluation of the noise variance components

    NASA Astrophysics Data System (ADS)

    Behnabian, Behzad; Mashhadi Hossainali, Masoud; Malekzadeh, Ahad

    2018-02-01

    The cross-validation technique is a popular method to assess and improve the quality of prediction by least squares collocation (LSC). We present a formula for direct estimation of the vector of cross-validation errors (CVEs) in LSC which is much faster than element-wise CVE computation. We show that a quadratic form of CVEs follows Chi-squared distribution. Furthermore, a posteriori noise variance factor is derived by the quadratic form of CVEs. In order to detect blunders in the observations, estimated standardized CVE is proposed as the test statistic which can be applied when noise variances are known or unknown. We use LSC together with the methods proposed in this research for interpolation of crustal subsidence in the northern coast of the Gulf of Mexico. The results show that after detection and removing outliers, the root mean square (RMS) of CVEs and estimated noise standard deviation are reduced about 51 and 59%, respectively. In addition, RMS of LSC prediction error at data points and RMS of estimated noise of observations are decreased by 39 and 67%, respectively. However, RMS of LSC prediction error on a regular grid of interpolation points covering the area is only reduced about 4% which is a consequence of sparse distribution of data points for this case study. The influence of gross errors on LSC prediction results is also investigated by lower cutoff CVEs. It is indicated that after elimination of outliers, RMS of this type of errors is also reduced by 19.5% for a 5 km radius of vicinity. We propose a method using standardized CVEs for classification of dataset into three groups with presumed different noise variances. The noise variance components for each of the groups are estimated using restricted maximum-likelihood method via Fisher scoring technique. Finally, LSC assessment measures were computed for the estimated heterogeneous noise variance model and compared with those of the homogeneous model. The advantage of the proposed method is the reduction in estimated noise levels for those groups with the fewer number of noisy data points.
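
    The spirit of a direct (non-element-wise) cross-validation error vector can be conveyed with the classical fast leave-one-out formula for ordinary least squares, e_i / (1 - h_ii), where h_ii are the leverages; this is not the authors' LSC-specific derivation, but it illustrates how all CVEs can be obtained from a single fit.

```python
import numpy as np

def loo_cv_errors(X, y):
    """Leave-one-out cross-validation errors of an ordinary least-squares fit,
    computed in one pass via the hat-matrix leverages: e_i / (1 - h_ii)."""
    Q, _ = np.linalg.qr(X)
    h = np.sum(Q**2, axis=1)                       # leverages h_ii
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid / (1.0 - h)

# toy check against brute-force leave-one-out refitting
rng = np.random.default_rng(4)
X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(0, 0.3, 50)
fast = loo_cv_errors(X, y)
brute = np.array([
    y[i] - X[i] @ np.linalg.lstsq(np.delete(X, i, 0), np.delete(y, i), rcond=None)[0]
    for i in range(len(y))
])
print(np.allclose(fast, brute))                    # True
```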

  15. Stress in junior enlisted air force women with and without children.

    PubMed

    Hopkins-Chadwick, Denise L; Ryan-Wenger, Nancy

    2009-04-01

    The objective was to determine if there are differences between young enlisted military women with and without preschool children on role strain, stress, health, and military career aspiration and to identify the best predictors of these variables. The study used a cross-sectional descriptive design of 50 junior Air Force women with preschool children and 50 women without children. There were no differences between women with and without children in role strain, stress, health, and military career aspiration. In all women, higher stress was moderately predictive of higher role strain (39.9% of variance explained) but a poor predictor of career aspiration (3.8% of variance explained). Lower mental health scores were predicted by high stress symptoms (27.9% of variance explained), low military career aspiration (4.1% of variance explained), high role strain (4.0% of variance explained), and being non-White (3.9% of variance explained). Aspiration for a military career was predicted by high perceived availability of military resources (16.8% of variance explained), low family of origin socioeconomic status (4.5% of variance explained), and better mental health status (3.3% of variance explained). Contrary to theoretical expectations, in this sample, motherhood was not a significant variable. Increased role strain, stress, and decreased health as well as decreased military career aspiration were evident in both groups and may have more to do with individual coping skills and other unmeasured resources. More research is needed to determine what nursing interventions are needed to best support both groups of women.

  16. Cosmology without cosmic variance

    DOE PAGES

    Bernstein, Gary M.; Cai, Yan -Chuan

    2011-10-01

    The growth of structures in the Universe is described by a function G that is predicted by the combination of the expansion history of the Universe and the laws of gravity within it. We examine the improvements in constraints on G that are available from the combination of a large-scale galaxy redshift survey with a weak gravitational lensing survey of background sources. We describe a new combination of such observations that in principle yields a measure of the growth rate that is free of sample variance, i.e. the uncertainty in G can be reduced without bound by increasing the number of redshifts obtained within a finite survey volume. The addition of background weak lensing data to a redshift survey increases information on G by an amount equivalent to a 10-fold increase in the volume of a standard redshift-space distortion measurement - if the lensing signal can be measured to sub-per cent accuracy. This argues that a combined lensing and redshift survey over a common low-redshift volume of the Universe is a more powerful test of general relativity than an isolated redshift survey over larger volume at high redshift, especially as surveys begin to cover most of the available sky.

  17. Variance in Broad Reading Accounted for by Measures of Reading Speed Embedded within Maze and Comprehension Rate Measures

    ERIC Educational Resources Information Center

    Hale, Andrea D.; Skinner, Christopher H.; Wilhoit, Brian; Ciancio, Dennis; Morrow, Jennifer A.

    2012-01-01

    Maze and reading comprehension rate measures are calculated by using measures of reading speed and measures of accuracy (i.e., correctly selected words or answers). In sixth- and seventh-grade samples, we found that the measures of reading speed embedded within our Maze measures accounted for 50% and 39% of broad reading score (BRS) variance,…

  18. Conceptualizing and Testing Random Indirect Effects and Moderated Mediation in Multilevel Models: New Procedures and Recommendations

    ERIC Educational Resources Information Center

    Bauer, Daniel J.; Preacher, Kristopher J.; Gil, Karen M.

    2006-01-01

    The authors propose new procedures for evaluating direct, indirect, and total effects in multilevel models when all relevant variables are measured at Level 1 and all effects are random. Formulas are provided for the mean and variance of the indirect and total effects and for the sampling variances of the average indirect and total effects.…

  19. Attributing Variance in Supportive Care Needs during Cancer: Culture, Service, and Individual Differences, before Clinical Factors

    PubMed Central

    Fielding, Richard; Lam, Wendy Wing Tak; Shun, Shiow Ching; Okuyama, Toru; Lai, Yeur Hur; Wada, Makoto; Akechi, Tatsuo; Li, Wylie Wai Yee

    2013-01-01

    Background: Studies using the Supportive Care Needs Survey (SCNS) report high levels of unmet supportive care needs (SCNs) in psychological and less-so physical & daily living domains, interpreted as reflecting disease/treatment-coping deficits. However, service and culture differences may account for unmet SCNs variability. We explored if service and culture differences better account for observed SCNs patterns. Methods: Hong Kong (n = 180), Taiwanese (n = 263) and Japanese (n = 109) CRC patients’ top 10 ranked SCNS-34 items were contrasted. Mean SCNS-34 domain scores were compared by sample and treatment status, then adjusted for sample composition, disease stage and treatment status using multivariate hierarchical regression. Results: All samples were assessed at comparable time-points. SCNs were most prevalent among Japanese and least among Taiwanese patients. Japanese patients emphasized Psychological (domain mean = 40.73) and Health systems and information (HSI) (38.61) SCN domains, whereas Taiwanese and Hong Kong patients emphasized HSI (27.41; 32.92) and Patient care & support (PCS) (19.70; 18.38) SCN domains. Mean Psychological domain scores differed: Hong Kong = 9.72, Taiwan = 17.84 and Japan = 40.73 (p<0.03–0.001, Bonferroni). Other SCN domains differed only between Chinese and Japanese samples (all p<0.001). Treatment status differentiated Taiwanese more starkly than Hong Kong patients. After adjustment, sample origin accounted for most variance in SCN domain scores (p<0.001), followed by age (p = 0.01–0.001) and employment status (p = 0.01–0.001). Treatment status and Disease stage, though retained, accounted for least variance. Overall accounted variance remained low. Conclusions: Health service and/or cultural influences, age and occupation differences, and less so clinical factors, differentially account for significant variation in published studies of SCNs. PMID:23741467

  20. Estimating unconsolidated sediment cover thickness by using the horizontal distance to a bedrock outcrop as secondary information

    NASA Astrophysics Data System (ADS)

    Kitterød, Nils-Otto

    2017-08-01

    Unconsolidated sediment cover thickness (D) above bedrock was estimated by using a publicly available well database from Norway, GRANADA. General challenges associated with such databases typically involve clustering and bias. However, if information about the horizontal distance to the nearest bedrock outcrop (L) is included, does the spatial estimation of D improve? This idea was tested by comparing two cross-validation results: ordinary kriging (OK), where L was disregarded; and co-kriging (CK), where the cross-covariance between D and L was included. The analysis showed only minor differences between OK and CK with respect to the differences between estimated and true values. However, the CK results generally gave lower estimation variance than the OK results. All observations were declustered and transformed to standard normal probability density functions before estimation and back-transformed for the cross-validation analysis. The semivariogram analysis gave correlation lengths for D and L of approximately 10 and 6 km, respectively. These correlations reduce the estimation variance in the cross-validation analysis because more than 50 % of the data material had two or more observations within a radius of 5 km. The small-scale variance of D, however, was about 50 % of the total variance, which gave an accuracy of less than 60 % for most of the cross-validation cases. Despite the noisy character of the observations, the analysis demonstrated that L can be used as secondary information to reduce the estimation variance of D.
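
    The normal-score step mentioned above (transforming observations to a standard normal distribution before estimation and back-transforming afterwards) can be sketched compactly. The values below are hypothetical and the rank-based plotting positions are one common convention, not necessarily the one used in the study.

    ```python
    import numpy as np
    from scipy.stats import norm

    def normal_score_transform(values):
        """Map sample values to standard-normal scores via their empirical ranks."""
        n = len(values)
        ranks = np.argsort(np.argsort(values))      # 0 .. n-1
        probs = (ranks + 0.5) / n                   # plotting positions in (0, 1)
        scores = norm.ppf(probs)
        table = np.sort(values)                     # kept for the back-transform
        return scores, table

    def back_transform(scores, table):
        """Map normal scores back to the data scale by inverse interpolation."""
        n = len(table)
        probs = (np.arange(n) + 0.5) / n
        return np.interp(norm.cdf(scores), probs, table)

    # toy usage with skewed, hypothetical sediment thicknesses (metres)
    depth = np.array([0.5, 1.2, 2.0, 3.5, 7.0, 15.0, 42.0])
    z, tab = normal_score_transform(depth)
    print(np.round(back_transform(z, tab), 2))      # recovers the original values
    ```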

  1. Sensitivity analysis of simulated SOA loadings using a variance-based statistical approach: SENSITIVITY ANALYSIS OF SOA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shrivastava, Manish; Zhao, Chun; Easter, Richard C.

    We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to 7 selected tunable model parameters: 4 involving emissions of anthropogenic and biogenic volatile organic compounds, anthropogenic semi-volatile and intermediate volatility organics (SIVOCs), and NOx; 2 involving dry deposition of SOA precursor gases; and 1 involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250-member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the tunable parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether particle-phase transformation of SOA from semi-volatile to non-volatile is on or off, is the dominant contributor to the variance of simulated surface-level daytime SOA (65% domain-average contribution). We also split the simulations into 2 subsets of 125 each, depending on whether the volatility transformation is turned on or off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to non-volatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to the dominance of intermediate to high NOx conditions throughout the simulated domain. The two parameters related to dry deposition of SOA precursor gases also have very low contributions to SOA variance. This study highlights the large sensitivity of SOA loadings to the particle-phase transformation of SOA volatility, which is neglected in most previous models.
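
    The workflow described above (quasi-Monte Carlo sampling of the parameter space, an ensemble of model runs, then a variance-based attribution of the output) can be illustrated on a toy surrogate. The stand-in model, parameter names, sample size, and the crude linear-model decomposition below are all assumptions for illustration; they are not the chemical transport model or the exact generalized linear model method of the study.

    ```python
    import numpy as np
    from scipy.stats import qmc

    # Hypothetical stand-in for the chemical transport model: SOA loading as a
    # simple function of scaled parameters (emissions, deposition, on/off switch).
    def toy_soa_model(p):
        bio, sivoc, nox, dep, vol_switch = p.T
        base = 2.0 * bio + 1.5 * sivoc - 0.5 * dep + 0.2 * nox
        return np.where(vol_switch > 0.5, 2.5 * base, base)   # transformation on/off

    # Quasi-Monte Carlo (Sobol) sample of the 5-parameter space on [0, 1]^5
    sampler = qmc.Sobol(d=5, scramble=True, seed=1)
    X = sampler.random(256)
    y = toy_soa_model(X)

    # Crude first-order variance attribution: fit a linear model with the on/off
    # factor as a 0/1 main effect, then read off each term's explained variance
    # (valid here because the sampled inputs are essentially uncorrelated).
    design = np.column_stack([np.ones(len(y)), X[:, :4], (X[:, 4] > 0.5).astype(float)])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    contrib = coef[1:] ** 2 * design[:, 1:].var(axis=0)
    print(np.round(contrib / y.var(), 3))   # fractional variance contributions
    ```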

  2. Teachers’ emotional experiences and exhaustion as predictors of emotional labor in the classroom: an experience sampling study

    PubMed Central

    Keller, Melanie M.; Chang, Mei-Lin; Becker, Eva S.; Goetz, Thomas; Frenzel, Anne C.

    2014-01-01

    Emotional exhaustion (EE) is the core component in the study of teacher burnout, with significant impact on teachers’ professional lives. Yet, its relation to teachers’ emotional experiences and emotional labor (EL) during instruction remains unclear. Thirty-nine German secondary teachers were surveyed about their EE (trait), and via the experience sampling method on their momentary (state; N = 794) emotional experiences (enjoyment, anxiety, anger) and momentary EL (suppression, faking). Teachers reported experiencing enjoyment and anger in 99% and 39% of all lessons, respectively, whereas they experienced anxiety less frequently. Teachers reported suppressing or faking their emotions during roughly a third of all lessons. Furthermore, EE was reflected in teachers’ decreased experiences of enjoyment and increased experiences of anger. On an intra-individual level, all three emotions predicted EL, whereas on an inter-individual level, only anger evoked EL. Explained variances in EL (within: 39%, between: 67%) stress the relevance of emotions in teaching and within the context of teacher burnout. Beyond implying the importance of reducing anger, our findings suggest that enjoyment may lessen EL and thereby reduce teacher burnout. PMID:25566124

  3. Ultraviolet Shadowing of RNA Can Cause Significant Chemical Damage in Seconds

    PubMed Central

    Kladwang, Wipapat; Hum, Justine; Das, Rhiju

    2012-01-01

    Chemical purity of RNA samples is important for high-precision studies of RNA folding and catalytic behavior, but photodamage accrued during ultraviolet (UV) shadowing steps of sample preparation can reduce this purity. Here, we report the quantitation of UV-induced damage by using reverse transcription and single-nucleotide-resolution capillary electrophoresis. We found photolesions in a dozen natural and artificial RNAs; across multiple sequence contexts, dominantly at but not limited to pyrimidine doublets; and from multiple lamps recommended for UV shadowing. Irradiation time-courses revealed detectable damage within a few seconds of exposure for 254 nm lamps held at a distance of 5 to 10 cm from 0.5-mm thickness gels. Under these conditions, 200-nucleotide RNAs subjected to 20 seconds of UV shadowing incurred damage to 16-27% of molecules; and, due to a ‘skin effect’, the molecule-by-molecule distribution of lesions gave 4-fold higher variance than a Poisson distribution. Thicker gels, longer wavelength lamps, and shorter exposure times reduced but did not eliminate damage. These results suggest that RNA biophysical studies should report precautions taken to avoid artifactual heterogeneity from UV shadowing. PMID:22816040

  4. Gender differences in variance and means on the Naglieri Non-verbal Ability Test: data from the Philippines.

    PubMed

    Vista, Alvin; Care, Esther

    2011-06-01

    Research on gender differences in intelligence has focused mostly on samples from Western countries and empirical evidence on gender differences from Southeast Asia is relatively sparse. This article presents results on gender differences in variance and means on a non-verbal intelligence test using a national sample of public school students from the Philippines. More than 2,700 sixth graders from public schools across the country were tested with the Naglieri Non-verbal Ability Test (NNAT). Variance ratios (VRs) and log-transformed VRs were computed. Proportion ratios for each of the ability levels were also calculated and a chi-square goodness-of-fit test was performed. An analysis of variance was performed to determine the overall gender difference in mean scores as well as within each of three age subgroups. Our data show non-existent or trivial gender difference in mean scores. However, the tails of the distributions show differences between the males and females, with greater variability among males in the upper half of the distribution and greater variability among females in the lower half of the distribution. Descriptions of the results and their implications are discussed. Results on mean score differences support the hypothesis that there are no significant gender differences in cognitive ability. The unusual results regarding differences in variance and the male-female proportion in the tails require more complex investigations. ©2010 The British Psychological Society.
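
    The variance-ratio computation referred to above is simple to sketch. The score vectors below are synthetic and purely illustrative; they are not the NNAT data.

    ```python
    import numpy as np

    def variance_ratio(group_a, group_b):
        """Variance ratio of two samples and its natural log (VR > 1 means group_a
        is more variable); ddof=1 gives the unbiased sample variances."""
        vr = np.var(group_a, ddof=1) / np.var(group_b, ddof=1)
        return vr, np.log(vr)

    # hypothetical male and female score vectors
    rng = np.random.default_rng(7)
    males = rng.normal(50, 11, 1400)
    females = rng.normal(50, 10, 1350)
    vr, log_vr = variance_ratio(males, females)
    print(round(vr, 3), round(log_vr, 3))
    ```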

  5. Additive genetic variance in polyandry enables its evolution, but polyandry is unlikely to evolve through sexy or good sperm processes.

    PubMed

    Travers, L M; Simmons, L W; Garcia-Gonzalez, F

    2016-05-01

    Polyandry is widespread despite its costs. The sexually selected sperm hypotheses ('sexy' and 'good' sperm) posit that sperm competition plays a role in the evolution of polyandry. Two poorly studied assumptions of these hypotheses are the presence of additive genetic variance in polyandry and sperm competitiveness. Using a quantitative genetic breeding design in a natural population of Drosophila melanogaster, we first established the potential for polyandry to respond to selection. We then investigated whether polyandry can evolve through sexually selected sperm processes. We measured lifetime polyandry and offensive sperm competitiveness (P2 ) while controlling for sampling variance due to male × male × female interactions. We also measured additive genetic variance in egg-to-adult viability and controlled for its effect on P2 estimates. Female lifetime polyandry showed significant and substantial additive genetic variance and evolvability. In contrast, we found little genetic variance or evolvability in P2 or egg-to-adult viability. Additive genetic variance in polyandry highlights its potential to respond to selection. However, the low levels of genetic variance in sperm competitiveness suggest that the evolution of polyandry may not be driven by sexy sperm or good sperm processes. © 2016 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2016 European Society For Evolutionary Biology.

  6. Performance of some biotic indices in the real variable world: a case study at different spatial scales in North-Western Mediterranean Sea.

    PubMed

    Tataranni, Mariella; Lardicci, Claudio

    2010-01-01

    The aim of this study was to analyse the variability of four different benthic biotic indices (AMBI, BENTIX, H', M-AMBI) in two marine coastal areas of the North-Western Mediterranean Sea. In each coastal area, 36 replicates were randomly selected according to a hierarchical sampling design, which allowed estimation of the variance components of the indices associated with four different spatial scales (ranging from metres to kilometres). All the analyses were performed in two different sampling periods in order to evaluate whether the observed trends were consistent over time. The variance components of the four indices revealed complex trends and different patterns in the two sampling periods. These results highlight that, independently of the index employed, a rigorous and appropriate sampling design taking different spatial scales into account should always be used in order to avoid erroneous classifications and to develop effective monitoring programs.
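
    The hierarchical design described above partitions index variance across nested spatial scales. As a minimal sketch, the code below shows the standard ANOVA-type variance-component estimator for a balanced one-level nested design (sites with replicates); the study's four-scale hierarchy extends the same idea with further nesting. Site counts, replicate counts, and index values are hypothetical.

    ```python
    import numpy as np

    def nested_variance_components(data):
        """Variance components for a balanced one-way nested design.

        `data` is an (n_sites, n_replicates) array; returns the estimated
        between-site and within-site (residual) variance components.
        """
        a, n = data.shape
        site_means = data.mean(axis=1)
        ms_within = ((data - site_means[:, None]) ** 2).sum() / (a * (n - 1))
        ms_between = n * site_means.var(ddof=1)
        sigma2_site = max((ms_between - ms_within) / n, 0.0)
        return sigma2_site, ms_within

    # hypothetical index values: 6 sites, 6 replicates per site
    rng = np.random.default_rng(3)
    site_effects = rng.normal(0, 0.4, 6)
    values = 3.0 + site_effects[:, None] + rng.normal(0, 0.8, (6, 6))
    print(nested_variance_components(values))
    ```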

  7. Estimating the variance for heterogeneity in arm-based network meta-analysis.

    PubMed

    Piepho, Hans-Peter; Madden, Laurence V; Roger, James; Payne, Roger; Williams, Emlyn R

    2018-04-19

    Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended. Copyright © 2018 John Wiley & Sons, Ltd.

  8. Sampling design considerations for demographic studies: a case of colonial seabirds

    USGS Publications Warehouse

    Kendall, William L.; Converse, Sarah J.; Doherty, Paul F.; Naughton, Maura B.; Anders, Angela; Hines, James E.; Flint, Elizabeth

    2009-01-01

    For the purposes of making many informed conservation decisions, the main goal for data collection is to assess population status and allow prediction of the consequences of candidate management actions. Reducing the bias and variance of estimates of population parameters reduces uncertainty in population status and projections, thereby reducing the overall uncertainty under which a population manager must make a decision. In capture-recapture studies, imperfect detection of individuals, unobservable life-history states, local movement outside study areas, and tag loss can cause bias or precision problems with estimates of population parameters. Furthermore, excessive disturbance to individuals during capture-recapture sampling may be of concern because disturbance may have demographic consequences. We address these problems using as an example a monitoring program for Black-footed Albatross (Phoebastria nigripes) and Laysan Albatross (Phoebastria immutabilis) nesting populations in the northwestern Hawaiian Islands. To mitigate these estimation problems, we describe a synergistic combination of sampling design and modeling approaches. Solutions include multiple capture periods per season and multistate, robust design statistical models, dead recoveries and incidental observations, telemetry and data loggers, buffer areas around study plots to neutralize the effect of local movements outside study plots, and double banding and statistical models that account for band loss. We also present a variation on the robust capture-recapture design and a corresponding statistical model that minimizes disturbance to individuals. For the albatross case study, this less invasive robust design was more time efficient and, when used in combination with a traditional robust design, reduced the standard error of detection probability by 14% with only two hours of additional effort in the field. These field techniques and associated modeling approaches are applicable to studies of most taxa being marked and in some cases have individually been applied to studies of birds, fish, herpetofauna, and mammals.

  9. Planning and processing multistage samples with a computer program—MUST.

    Treesearch

    John W. Hazard; Larry E. Stewart

    1974-01-01

    A computer program was written to handle multistage sampling designs in insect populations. It is, however, general enough to be used for any population where the number of stages does not exceed three. The program handles three types of sampling situations, all of which assume equal probability sampling. Option 1 takes estimates of sample variances, costs, and either...

  10. Amotivation is associated with smaller ventral striatum volumes in older patients with schizophrenia.

    PubMed

    Caravaggio, Fernando; Fervaha, Gagan; Iwata, Yusuke; Plitman, Eric; Chung, Jun Ku; Nakajima, Shinichiro; Mar, Wanna; Gerretsen, Philip; Kim, Julia; Chakravarty, M Mallar; Mulsant, Benoit; Pollock, Bruce; Mamo, David; Remington, Gary; Graff-Guerrero, Ariel

    2018-03-01

    Motivational deficits are prevalent in patients with schizophrenia, persist despite antipsychotic treatment, and predict long-term outcomes. Evidence suggests that patients with greater amotivation have smaller ventral striatum (VS) volumes. We wished to replicate this finding in a sample of older, chronically medicated patients with schizophrenia. Using structural imaging and positron emission tomography, we examined whether amotivation uniquely predicted VS volumes beyond the effects of striatal dopamine D2/3 receptor (D2/3R) blockade by antipsychotics. Data from 41 older schizophrenia patients (mean age: 60.2 ± 6.7; 11 female) were reanalysed from previously published imaging data. We constructed multivariate linear stepwise regression models with VS volumes as the dependent variable and various sociodemographic and clinical variables as the initial predictors: age, gender, total brain volume, and antipsychotic striatal D2/3R occupancy. Amotivation was included as a subsequent step to determine any unique relationships with VS volumes beyond the contribution of the covariates. In a reduced sample (n = 36), general cognition was also included as a covariate. Amotivation uniquely explained 8% and 6% of the variance in right and left VS volumes, respectively (right: β = -.38, t = -2.48, P = .01; left: β = -.31, t = -2.17, P = .03). Considering cognition, amotivation levels uniquely explained 9% of the variance in right VS volumes (β = -.43, t = -0.26, P = .03). We replicate and extend the finding of reduced VS volumes with greater amotivation. We demonstrate this relationship uniquely beyond the potential contributions of striatal D2/3R blockade by antipsychotics. Elucidating the structural correlates of amotivation in schizophrenia may help develop treatments for this presently irremediable deficit. Copyright © 2017 John Wiley & Sons, Ltd.

  11. GAGA: a new algorithm for genomic inference of geographic ancestry reveals fine level population substructure in Europeans.

    PubMed

    Lao, Oscar; Liu, Fan; Wollstein, Andreas; Kayser, Manfred

    2014-02-01

    Attempts to detect genetic population substructure in humans are troubled by the fact that the vast majority of the total amount of observed genetic variation is present within populations rather than between populations. Here we introduce a new algorithm for transforming a genetic distance matrix that reduces the within-population variation considerably. Extensive computer simulations revealed that the transformed matrix captured the genetic population differentiation better than the original one which was based on the T1 statistic. In an empirical genomic data set comprising 2,457 individuals from 23 different European subpopulations, the proportion of individuals that were determined as a genetic neighbour to another individual from the same sampling location increased from 25% with the original matrix to 52% with the transformed matrix. Similarly, the percentage of genetic variation explained between populations by means of Analysis of Molecular Variance (AMOVA) increased from 1.62% to 7.98%. Furthermore, the first two dimensions of a classical multidimensional scaling (MDS) using the transformed matrix explained 15% of the variance, compared to 0.7% obtained with the original matrix. Application of MDS with Mclust, SPA with Mclust, and GemTools algorithms to the same dataset also showed that the transformed matrix gave a better association of the genetic clusters with the sampling locations, and particularly so when it was used in the AMOVA framework with a genetic algorithm. Overall, the new matrix transformation introduced here substantially reduces the within-population genetic differentiation, and can be broadly applied to methods such as AMOVA to enhance their sensitivity to reveal population substructure. We herewith provide a publicly available (http://www.erasmusmc.nl/fmb/resources/GAGA) model-free method for improved genetic population substructure detection that can be applied to human as well as any other species data in future studies relevant to evolutionary biology, behavioural ecology, medicine, and forensics.

  12. Intraclass Correlation Coefficients for Obesity Indicators and Energy Balance-Related Behaviors Among New York City Public Elementary Schools.

    PubMed

    Gray, Heewon Lee; Burgermaster, Marissa; Tipton, Elizabeth; Contento, Isobel R; Koch, Pamela A; Di Noia, Jennifer

    2016-04-01

    Sample size and statistical power calculation should consider clustering effects when schools are the unit of randomization in intervention studies. The objective of the current study was to investigate how student outcomes are clustered within schools in an obesity prevention trial. Baseline data from the Food, Health & Choices project were used. Participants were 9- to 13-year-old students enrolled in 20 New York City public schools (n = 1,387). Body mass index (BMI) was calculated based on measures of height and weight, and body fat percentage was measured with a Tanita® body composition analyzer (Model SC-331s). Energy balance-related behaviors were self-reported with a frequency questionnaire. To examine the cluster effects, intraclass correlation coefficients (ICCs) were calculated as school variance over total variance for outcome variables. School-level covariates, percentage of students eligible for free and reduced-price lunch, percentage Black or Hispanic, and English language learners were added in the model to examine ICC changes. The ICCs for obesity indicators are: .026 for BMI percentile, .031 for BMI z-score, .035 for percentage of overweight students, .037 for body fat percentage, and .041 for absolute BMI. The ICC ranges for the six energy balance-related behaviors are .008 to .044 for fruit and vegetables, .013 to .055 for physical activity, .031 to .052 for recreational screen time, .013 to .091 for sweetened beverages, .033 to .121 for processed packaged snacks, and .020 to .083 for fast food. When school-level covariates were included in the model, ICC changes varied from -95% to 85%. This is the first study reporting ICCs for obesity-related anthropometric and behavioral outcomes among New York City public schools. The results of the study may aid sample size estimation for future school-based cluster randomized controlled trials in similar urban settings and populations. Additionally, identifying school-level covariates that can reduce cluster effects is important when analyzing data. © 2015 Society for Public Health Education.
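
    The ICC defined above (school-level variance over total variance) is commonly estimated from a random-intercept model. The sketch below shows one way to do this with statsmodels on synthetic student-level data; the school count, sample sizes, and effect sizes are hypothetical, not the Food, Health & Choices data.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # hypothetical student-level data: 20 schools, 70 students each
    rng = np.random.default_rng(11)
    schools = np.repeat(np.arange(20), 70)
    school_effect = rng.normal(0, 0.2, 20)[schools]     # between-school spread
    bmi_z = school_effect + rng.normal(0, 1.0, schools.size)
    df = pd.DataFrame({"school": schools, "bmi_z": bmi_z})

    # random-intercept model: total variance = between-school + within-school
    model = smf.mixedlm("bmi_z ~ 1", df, groups="school").fit()
    between = float(model.cov_re.iloc[0, 0])            # school-level variance
    within = model.scale                                 # residual variance
    icc = between / (between + within)
    print(round(icc, 3))
    ```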

  13. Replica approach to mean-variance portfolio optimization

    NASA Astrophysics Data System (ADS)

    Varga-Haszonits, Istvan; Caccioli, Fabio; Kondor, Imre

    2016-12-01

    We consider the problem of mean-variance portfolio optimization for a generic covariance matrix subject to the budget constraint and the constraint for the expected return, with the application of the replica method borrowed from the statistical physics of disordered systems. We find that the replica symmetry of the solution does not need to be assumed, but emerges as the unique solution of the optimization problem. We also check the stability of this solution and find that the eigenvalues of the Hessian are positive for r = N/T < 1, where N is the dimension of the portfolio and T the length of the time series used to estimate the covariance matrix. At the critical point r = 1 a phase transition takes place. The out-of-sample estimation error blows up at this point as 1/(1 - r), independently of the covariance matrix or the expected return, displaying the universality not only of the critical exponent, but also of the critical point. As a conspicuous illustration of the dangers of in-sample estimates, the optimal in-sample variance is found to vanish at the critical point, inversely proportional to the divergent estimation error.
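
    The in-sample versus out-of-sample behaviour near r = N/T = 1 is easy to see in a toy Monte Carlo. The sketch below uses i.i.d. unit-variance returns (so the true covariance is the identity and the true minimum variance is 1/N) and only the budget constraint; these are simplifying assumptions for illustration, whereas the paper treats generic covariance matrices analytically with the replica method.

    ```python
    import numpy as np

    def min_variance_portfolio(sample_cov):
        """Global minimum-variance weights under the budget constraint sum(w) = 1."""
        inv = np.linalg.inv(sample_cov)
        ones = np.ones(sample_cov.shape[0])
        return inv @ ones / (ones @ inv @ ones)

    rng = np.random.default_rng(5)
    N = 50
    for T in (500, 100, 60, 55):                   # r = N/T approaching 1
        R = rng.standard_normal((T, N))            # i.i.d. returns, true cov = I
        S = np.cov(R, rowvar=False)
        w = min_variance_portfolio(S)
        in_sample = float(w @ S @ w)               # optimistic, shrinks toward 0
        out_of_sample = float(w @ w)               # true variance, blows up near r = 1
        print(f"r={N/T:.2f}  in-sample={in_sample:.4f}  out-of-sample={out_of_sample:.4f}")
    ```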

  14. Examination of Variables That May Affect the Relationship Between Cognition and Functional Status in Individuals with Mild Cognitive Impairment: A Meta-Analysis

    PubMed Central

    Mcalister, Courtney; Schmitter-Edgecombe, Maureen; Lamb, Richard

    2016-01-01

    The objective of this meta-analysis was to improve understanding of the heterogeneity in the relationship between cognition and functional status in individuals with mild cognitive impairment (MCI). Demographic, clinical, and methodological moderators were examined. Cognition explained an average of 23% of the variance in functional outcomes. Executive function measures explained the largest amount of variance (37%), whereas global cognitive status and processing speed measures explained the least (20%). Short- and long-delayed memory measures accounted for more variance (35% and 31%) than immediate memory measures (18%), and the relationship between cognition and functional outcomes was stronger when assessed with informant-report (28%) compared with self-report (21%). Demographics, sample characteristics, and type of everyday functioning measures (i.e., questionnaire, performance-based) explained relatively little variance compared with cognition. Executive functioning, particularly measured by Trails B, was a strong predictor of everyday functioning in individuals with MCI. A large proportion of variance remained unexplained by cognition. PMID:26743326

  15. Measuring what matters: Effectively predicting language and literacy in children with cochlear implants

    PubMed Central

    Nittrouer, Susan; Caldwell, Amanda; Holloman, Christopher

    2012-01-01

    Objective: To evaluate how well various language measures typically used with very young children after they receive cochlear implants predict language and literacy skills as they enter school. Methods: Subjects were 50 children who had just completed kindergarten and were 6 or 7 years of age. All had previously participated in a longitudinal study from 12 to 48 months of age. 27 children had severe-to-profound hearing loss and wore cochlear implants, 8 had moderate hearing loss and wore hearing aids, and 15 had normal hearing. A latent variable of language/literacy skill was constructed from scores on six kinds of measures: (1) language comprehension; (2) expressive vocabulary; (3) phonological awareness; (4) literacy; (5) narrative skill; and (6) processing speed. Five kinds of language measures obtained at six-month intervals from 12 to 48 months of age were used as predictor variables in correlational analyses: (1) language comprehension; (2) expressive vocabulary; (3) syntactic structure of productive speech; (4) form and (5) function of language used in language samples. Results: Outcomes quantified how much variance in kindergarten language/literacy performance was explained by each predictor variable, at each earlier age of testing. Comprehension measures consistently predicted roughly 25 to 50 percent of the variance in kindergarten language/literacy performance, and were the only effective predictors before 24 months of age. Vocabulary and syntactic complexity were strong predictors after roughly 36 months of age. Amount of speech produced in language samples and number of answers to parental queries explained moderate amounts of variance in performance after 24 months of age. Number of manual gestures and nonspeech vocalizations produced in language samples explained little to no variance before 24 months of age, and after that were negatively correlated with kindergarten performance. The number of imitations produced in language samples at 24 months of age explained about 10 percent of variance in kindergarten performance, but was otherwise not correlated or negatively correlated with kindergarten outcomes. Conclusions: Before 24 months of age, the best predictor of later language success is language comprehension. In general, measures that index a child’s cognitive processing of language are the most sensitive predictors of school-age language abilities. PMID:22648088

  16. Reconstructing a herbivore's diet using a novel rbcL DNA mini-barcode for plants.

    PubMed

    Erickson, David L; Reed, Elizabeth; Ramachandran, Padmini; Bourg, Norman A; McShea, William J; Ottesen, Andrea

    2017-05-01

    Next Generation Sequencing and the application of metagenomic analyses can be used to answer questions about animal diet choice and study the consequences of selective foraging by herbivores. The quantification of herbivore diet choice with respect to native versus exotic plant species is particularly relevant given concerns of invasive species establishment and their effects on ecosystems. While increased abundance of white-tailed deer (Odocoileus virginianus) appears to correlate with increased incidence of invasive plant species, data supporting a causal link is scarce. We used a metabarcoding approach (PCR amplicons of the plant rbcL gene) to survey the diet of white-tailed deer (fecal samples), from a forested site in Warren County, Virginia with a comprehensive plant species inventory and corresponding reference collection of plant barcode and chloroplast sequences. We sampled fecal pellet piles and extracted DNA from 12 individual deer in October 2014. These samples were compared to a reference DNA library of plant species collected within the study area. For 72% of the amplicons, we were able to assign taxonomy at the species level, which provides for the first time sufficient taxonomic resolution to quantify the relative frequency at which native and exotic plant species are being consumed by white-tailed deer. For each of the 12 individual deer we collected three subsamples from the same fecal sample, resulting in sequencing 36 total samples. Using Qiime, we quantified the plant DNA found in all 36 samples, and found that variance within samples was less than variance between samples (F = 1.73, P = 0.004), indicating additional subsamples may not be necessary. Species level diversity ranged from 60 to 93 OTUs per individual and nearly 70% of all plant sequences recovered were from native plant species. The number of species detected was significantly reduced (range 4-12) when we excluded species whose OTU composed <1% of each sample's total. When compared to the abundance of native and non-native plants inventoried in the local community, our results support the observation that white-tailed deer have strong foraging preferences, but these preferences were not consistent for species in either class. Deer forage behaviour may favour some exotic species, but not all.

  17. Estimating total suspended sediment yield with probability sampling

    Treesearch

    Robert B. Thomas

    1985-01-01

    The ""Selection At List Time"" (SALT) scheme controls sampling of concentration for estimating total suspended sediment yield. The probability of taking a sample is proportional to its estimated contribution to total suspended sediment discharge. This procedure gives unbiased estimates of total suspended sediment yield and the variance of the...

  18. Credit Building in IDA Programs: Early Findings of a Longitudinal Study

    ERIC Educational Resources Information Center

    Birkenmaier, Julie; Curley, Jami; Kelly, Patrick

    2012-01-01

    Objective: This article reports on the impact of the Individual Development Account (IDA) program on credit. Method: Using a convenience sample of IDA participants (N = 165), data were analyzed using paired sample "t" tests, independent sample "t" test, one-way analysis of variance, Mann-Whitney "U" Tests, and…

  19. A Note on Sample Size and Solution Propriety for Confirmatory Factor Analytic Models

    ERIC Educational Resources Information Center

    Jackson, Dennis L.; Voth, Jennifer; Frey, Marc P.

    2013-01-01

    Determining an appropriate sample size for use in latent variable modeling techniques has presented ongoing challenges to researchers. In particular, small sample sizes are known to present concerns over sampling error for the variances and covariances on which model estimation is based, as well as for fit indexes and convergence failures. The…

  20. An approach to the analysis of performance of quasi-optimum digital phase-locked loops.

    NASA Technical Reports Server (NTRS)

    Polk, D. R.; Gupta, S. C.

    1973-01-01

    An approach to the analysis of performance of quasi-optimum digital phase-locked loops (DPLL's) is presented. An expression for the characteristic function of the prior error in the state estimate is derived, and from this expression an infinite dimensional equation for the prior error variance is obtained. The prior error-variance equation is a function of the communication system model and the DPLL gain and is independent of the method used to derive the DPLL gain. Two approximations are discussed for reducing the prior error-variance equation to finite dimension. The effectiveness of one approximation in analyzing DPLL performance is studied.

  1. Beliefs and Intentions for Skin Protection and Exposure

    PubMed Central

    Heckman, Carolyn J.; Manne, Sharon L.; Kloss, Jacqueline D.; Bass, Sarah Bauerle; Collins, Bradley; Lessin, Stuart R.

    2010-01-01

    Objectives: To evaluate Fishbein’s Integrative Model in predicting young adults’ skin protection, sun exposure, and indoor tanning intentions. Methods: 212 participants completed an online survey. Results: Damage distress, self-efficacy, and perceived control accounted for 34% of the variance in skin protection intentions. Outcome beliefs and low self-efficacy for sun avoidance accounted for 25% of the variance in sun exposure intentions. Perceived damage, outcome evaluation, norms, and indoor tanning prototype accounted for 32% of the variance in indoor tanning intentions. Conclusions: Future research should investigate whether these variables predict exposure and protection behaviors and whether intervening can reduce young adults’ skin cancer risk behaviors. PMID:22251761

  2. Within-Tunnel Variations in Pressure Data for Three Transonic Wind Tunnels

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard

    2014-01-01

    This paper compares the results of pressure measurements made on the same test article with the same test matrix in three transonic wind tunnels. A comparison is presented of the unexplained variance associated with polar replicates acquired in each tunnel. The impact of a significant component of systematic (not random) unexplained variance is reviewed, and the results of analyses of variance are presented to assess the degree of significant systematic error in these representative wind tunnel tests. Total uncertainty estimates are reported for 140 samples of pressure data, quantifying the effects of within-polar random errors and between-polar systematic bias errors.

  3. Analytical pricing formulas for hybrid variance swaps with regime-switching

    NASA Astrophysics Data System (ADS)

    Roslan, Teh Raihana Nazirah; Cao, Jiling; Zhang, Wenjun

    2017-11-01

    The problem of pricing discretely-sampled variance swaps under stochastic volatility, stochastic interest rate and regime-switching is considered in this paper. The Heston stochastic volatility model is extended by adding the Cox-Ingersoll-Ross (CIR) stochastic interest rate model. In addition, the parameters of the model are permitted to have transitions following a Markov chain process which is continuous and observable. This hybrid model can be used to illustrate certain macroeconomic conditions, for example the changing phases of business stages. The outcome of our regime-switching hybrid model is presented in terms of analytical pricing formulas for variance swaps.

  4. Estimating rare events in biochemical systems using conditional sampling.

    PubMed

    Sundar, V S

    2017-01-28

    The paper focuses on the development of variance reduction strategies to estimate rare events in biochemical systems. Obtaining this probability using brute force Monte Carlo simulations in conjunction with the stochastic simulation algorithm (Gillespie's method) is computationally prohibitive. To circumvent this, importance sampling tools such as the weighted stochastic simulation algorithm and the doubly weighted stochastic simulation algorithm have been proposed. However, these strategies require an additional step of determining the important region to sample from, which is not straightforward for most problems. In this paper, we apply the subset simulation method, developed as a variance reduction tool in the context of structural engineering, to the problem of rare event estimation in biochemical systems. The main idea is that the rare event probability is expressed as a product of more frequent conditional probabilities. These conditional probabilities are estimated with high accuracy using Monte Carlo simulations, specifically the Markov chain Monte Carlo method with the modified Metropolis-Hastings algorithm. Generating sample realizations of the state vector using the stochastic simulation algorithm is viewed as mapping the discrete-state continuous-time random process to the standard normal random variable vector. This viewpoint opens up the possibility of applying more sophisticated and efficient sampling schemes developed elsewhere to problems in stochastic chemical kinetics. The results obtained using the subset simulation method are compared with existing variance reduction strategies for a few benchmark problems, and a satisfactory improvement in computational time is demonstrated.
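
    The core subset-simulation idea (building a rare-event probability as a product of more frequent conditional probabilities, with each conditional level sampled by a restricted Metropolis step) can be sketched on a scalar toy problem rather than a biochemical network. The target P(X > 4) for a standard normal X, the level fraction p0, sample size, and proposal scale below are all illustrative choices, not the paper's settings.

    ```python
    import numpy as np

    def subset_simulation(threshold, n=2000, p0=0.1, seed=0):
        """Estimate P(X > threshold) for X ~ N(0, 1) with subset simulation."""
        rng = np.random.default_rng(seed)
        x = rng.standard_normal(n)
        prob = 1.0
        while True:
            level = np.quantile(x, 1 - p0)          # next intermediate level
            if level >= threshold:
                return prob * np.mean(x > threshold)
            prob *= p0
            seeds = x[x > level]
            # grow each seed into a short Markov chain that stays above `level`;
            # acceptance ratio is the standard-normal density ratio phi(prop)/phi(cur)
            chains = []
            steps = int(np.ceil(n / len(seeds)))
            for cur in seeds:
                for _ in range(steps):
                    prop = cur + rng.normal(0, 1.0)
                    if prop > level and rng.random() < np.exp(0.5 * (cur**2 - prop**2)):
                        cur = prop
                    chains.append(cur)
            x = np.array(chains[:n])

    print(subset_simulation(4.0))   # compare with the exact tail 1 - Phi(4) ~ 3.17e-5
    ```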

  5. Bet-hedging applications for conservation

    USGS Publications Warehouse

    Boyce, M.S.; Kirsch, E.M.; Servheen, C.

    2002-01-01

    One of the early tenets of conservation biology is that population viability is enhanced by maintaining multiple populations of a species. The strength of this tenet is justified by principles of bet-hedging. Management strategies that reduce variance in population size will also reduce the risk of extinction. Asynchrony in population fluctuations in independent populations reduces variance in the aggregate of populations, whereas environmental correlation among areas increases the risk that all populations will go extinct. We review the theoretical rationale of bet-hedging and suggest applications for conservation management of least terns in Nebraska and grizzly bears in the northern Rocky Mountains of the United States. The risk of extinction for least terns will be reduced if we can sustain the small central Platte River population in addition to the larger population on the lower Platte. Similarly, restoring grizzly bears to the Bitterroot wilderness of Idaho and Montana can reduce the probability of extinction for grizzly bears in the Rocky Mountains of the United States by as much as 69-93%.

  6. Cigarette smoke chemistry market maps under Massachusetts Department of Public Health smoking conditions.

    PubMed

    Morton, Michael J; Laffoon, Susan W

    2008-06-01

    This study extends the market mapping concept introduced by Counts et al. (Counts, M.E., Hsu, F.S., Tewes, F.J., 2006. Development of a commercial cigarette "market map" comparison methodology for evaluating new or non-conventional cigarettes. Regul. Toxicol. Pharmacol. 46, 225-242) to include both temporal cigarette and testing variation, as well as machine smoking with more intense puffing parameters, as defined by the Massachusetts Department of Public Health (MDPH). The study was conducted over a two-year period and involved a total of 23 different commercial cigarette brands from the U.S. marketplace. Market mapping prediction intervals were developed for 40 mainstream cigarette smoke constituents and the potential utility of the market map as a comparison tool for new brands was demonstrated. The over-time character of the data allowed for the variance structure of the smoke constituents to be more completely characterized than is possible with one-time sample data. The variance was partitioned among brand-to-brand differences, temporal differences, and the remaining residual variation using a mixed random and fixed effects model. It was shown that a conventional weighted least squares model typically gave similar prediction intervals to those of the more complicated mixed model. For most constituents there was less difference between the prediction intervals calculated from over-time samples and those calculated from one-time samples than had been anticipated. One-time sample maps may be adequate for many purposes if the user is aware of their limitations. Cigarette tobacco fillers were analyzed for nitrate, nicotine, tobacco-specific nitrosamines, ammonia, chlorogenic acid, and reducing sugars. The filler information was used to improve predicting relationships for several of the smoke constituents, and it was concluded that the effects of filler chemistry on smoke chemistry were partial explanations of the observed brand-to-brand variation.

  7. A generalized Levene's scale test for variance heterogeneity in the presence of sample correlation and group uncertainty.

    PubMed

    Soave, David; Sun, Lei

    2017-09-01

    We generalize Levene's test for variance (scale) heterogeneity between k groups for more complex data, when there are sample correlation and group membership uncertainty. Following a two-stage regression framework, we show that least absolute deviation regression must be used in the stage 1 analysis to ensure a correct asymptotic χ²_(k-1)/(k-1) distribution of the generalized scale (gS) test statistic. We then show that the proposed gS test is independent of the generalized location test, under the joint null hypothesis of no mean and no variance heterogeneity. Consequently, we generalize the recently proposed joint location-scale (gJLS) test, valuable in settings where there is an interaction effect but one interacting variable is not available. We evaluate the proposed method via an extensive simulation study and two genetic association application studies. © 2017 The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
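
    The classic two-stage idea underlying this family of tests is easy to sketch: stage 1 removes location (with independent data and known groups, a least-absolute-deviation fit on group indicators reduces to group medians), and stage 2 tests whether the absolute residuals differ across groups. The sketch below is this simple Brown-Forsythe-style version; the paper's gS test additionally handles sample correlation and group-membership uncertainty, which are not shown here, and the group data are hypothetical.

    ```python
    import numpy as np
    from scipy import stats

    def two_stage_scale_test(groups):
        """Stage 1: remove location with group medians (an L1 fit).
        Stage 2: one-way ANOVA on the absolute residuals (scale test)."""
        abs_resid = [np.abs(g - np.median(g)) for g in groups]
        return stats.f_oneway(*abs_resid)

    # hypothetical groups with equal means but unequal variances
    rng = np.random.default_rng(2)
    g0 = rng.normal(0, 1.0, 300)
    g1 = rng.normal(0, 1.3, 250)
    g2 = rng.normal(0, 1.6, 120)
    print(two_stage_scale_test([g0, g1, g2]))
    print(stats.levene(g0, g1, g2, center="median"))   # built-in equivalent
    ```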

  8. Mindfulness and emotion regulation difficulties in generalized anxiety disorder: Preliminary evidence for independent and overlapping contributions

    PubMed Central

    Roemer, Lizabeth; Lee, Jonathan K.; Salters-Pedneault, Kristalyn; Erisman, Shannon M.; Orsillo, Susan M.; Mennin, Douglas S.

    2013-01-01

    Diminished levels of mindfulness (awareness and acceptance/nonjudgment) and difficulties in emotion regulation have both been proposed to play a role in symptoms of generalized anxiety disorder (GAD); the current studies investigated these relationships in a nonclinical and a clinical sample. In the first study, among a sample of 395 individuals at an urban commuter campus, we found that self reports of both emotion regulation difficulties and aspects of mindfulness accounted for unique variance in GAD symptom severity, above and beyond shared variance with depressive and anxious symptoms, as well as shared variance with one another. In the second study, we found that individuals diagnosed with clinically significant GAD (n = 16) reported significantly lower levels of mindfulness and significantly higher levels of difficulties in emotion regulation than individuals in a non-anxious control group (n = 16). Results are discussed in terms of directions for future research and potential implications for treatment development. PMID:19433145

  9. Evidence of Convergent and Discriminant Validity of Child, Teacher, and Peer Reports of Teacher-Student Support

    PubMed Central

    Li, Yan; Hughes, Jan N.; Kwok, Oi-man; Hsu, Hsien-Yuan

    2012-01-01

    This study investigated the construct validity of measures of teacher-student support in a sample of 709 ethnically diverse second and third grade academically at-risk students. Confirmatory factor analysis investigated the convergent and discriminant validities of teacher, child, and peer reports of teacher-student support and child conduct problems. Results supported the convergent and discriminant validity of scores on the measures. Peer reports accounted for the largest proportion of trait variance and non-significant method variance. Child reports accounted for the smallest proportion of trait variance and the largest method variance. A model with two latent factors provided a better fit to the data than a model with one factor, providing further evidence of the discriminant validity of measures of teacher-student support. Implications for research, policy, and practice are discussed. PMID:21767024

  10. Characterization of turbulence stability through the identification of multifractional Brownian motions

    NASA Astrophysics Data System (ADS)

    Lee, K. C.

    2013-02-01

    Multifractional Brownian motions have become popular as flexible models for describing real-life signals with high-frequency features in geoscience, microeconomics, and turbulence, to name a few. The time-changing Hurst exponent, which describes regularity levels depending on time measurements, and the variance, which relates to an energy level, are the two parameters that characterize multifractional Brownian motions. This research proposes a combined method for estimating the time-changing Hurst exponent and variance using the local variation of sampled paths of signals. The method consists of two phases: initially estimating the global variance and then accurately estimating the time-changing Hurst exponent. A simulation study demonstrates its performance in estimating the parameters. The proposed method is applied to the characterization of atmospheric stability, in which descriptive statistics from the estimated time-changing Hurst exponent and variance classify stable atmospheric flows from unstable ones.
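
    A generic way to read a Hurst exponent off local variations is to compare increment variances at two lags within a sliding window, since for fractional Brownian motion Var[X(t+k) - X(t)] scales like k^(2H). The sketch below is this simple estimator, not the paper's specific two-phase procedure; window length and the toy Brownian-motion test signal are illustrative choices.

    ```python
    import numpy as np

    def local_hurst(signal, window=256):
        """Rolling Hurst-exponent estimate from increment variances at lags 1 and 2.

        For fBm, Var of lag-k increments ~ k^(2H), so H = 0.5 * log2(V2 / V1).
        """
        half = window // 2
        hurst = np.full(signal.size, np.nan)
        for t in range(half, signal.size - half):
            seg = signal[t - half:t + half]
            v1 = np.var(np.diff(seg))
            v2 = np.var(seg[2:] - seg[:-2])
            hurst[t] = 0.5 * np.log2(v2 / v1)
        return hurst

    # toy check on ordinary Brownian motion, for which H = 0.5
    rng = np.random.default_rng(4)
    bm = np.cumsum(rng.standard_normal(4096))
    print(round(np.nanmean(local_hurst(bm)), 2))   # should hover near 0.5
    ```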

  11. Effects of Axial Vibration on Needle Insertion into the Tail Veins of Rats and Subsequent Serial Blood Corticosterone Levels

    PubMed Central

    Clement, Ryan S; Unger, Erica L; Ocón-Grove, Olga M; Cronin, Thomas L; Mulvihill, Maureen L

    2016-01-01

    Blood collection is commonplace in biomedical research. Obtaining sufficient sample while minimizing animal stress requires significant skill and practice. Repeated needle punctures can cause discomfort and lead to variable release of stress hormones, potentially confounding analysis. We designed a handheld device to reduce the force necessary for needle insertion by using low-frequency, axial (forward and backward) micromotions (that is, vibration) delivered to the needle during venipuncture. Tests with cadaver rat-tail segments (n = 18) confirmed that peak insertion forces were reduced by 73% on average with needle vibration. A serial blood-sampling study was then conducted by using Sprague–Dawley rats divided into 2 groups based on needle condition used to cause bleeds: vibration on (n = 10) and vibration off (n = 9). On 3 days (1 wk apart), 3 tail-vein blood collections were performed in each subject at 1-h intervals. To evaluate associated stress levels, plasma corticosterone concentration was quantified by radioimmunoassay and behavior (that is, movement and vocalization) was scored by blinded review of blood-sampling videos. After the initial trial, average corticosterone was lower (46% difference), the mean intrasubject variance trended lower (72%), and behavioral indications of stress were rated lower for the vibration-on group compared with the vibration-off group. Adding controlled vibrations to needles during insertion may decrease the stress associated with blood sampling from rats—an important methodologic advance for investigators studying and assessing stress processes and a refinement over current blood sampling techniques. PMID:27025813

  12. Constrained Maximum Likelihood Estimation for Model Calibration Using Summary-level Information from External Big Data Sources

    PubMed Central

    Chatterjee, Nilanjan; Chen, Yi-Hau; Maas, Paige; Carroll, Raymond J.

    2016-01-01

    Information from various public and private data sources of extremely large sample sizes are now increasingly available for research purposes. Statistical methods are needed for utilizing information from such big data sources while analyzing data from individual studies that may collect more detailed information required for addressing specific hypotheses of interest. In this article, we consider the problem of building regression models based on individual-level data from an “internal” study while utilizing summary-level information, such as information on parameters for reduced models, from an “external” big data source. We identify a set of very general constraints that link internal and external models. These constraints are used to develop a framework for semiparametric maximum likelihood inference that allows the distribution of covariates to be estimated using either the internal sample or an external reference sample. We develop extensions for handling complex stratified sampling designs, such as case-control sampling, for the internal study. Asymptotic theory and variance estimators are developed for each case. We use simulation studies and a real data application to assess the performance of the proposed methods in contrast to the generalized regression (GR) calibration methodology that is popular in the sample survey literature. PMID:27570323

  13. Convergence and Efficiency of Adaptive Importance Sampling Techniques with Partial Biasing

    NASA Astrophysics Data System (ADS)

    Fort, G.; Jourdain, B.; Lelièvre, T.; Stoltz, G.

    2018-04-01

    We propose a new Monte Carlo method to efficiently sample a multimodal distribution (known up to a normalization constant). We consider a generalization of the discrete-time Self Healing Umbrella Sampling method, which can also be seen as a generalization of well-tempered metadynamics. The dynamics is based on an adaptive importance technique. The importance function relies on the weights (namely the relative probabilities) of disjoint sets which form a partition of the space. These weights are unknown but are learnt on the fly yielding an adaptive algorithm. In the context of computational statistical physics, the logarithm of these weights is, up to an additive constant, the free-energy, and the discrete valued function defining the partition is called the collective variable. The algorithm falls into the general class of Wang-Landau type methods, and is a generalization of the original Self Healing Umbrella Sampling method in two ways: (i) the updating strategy leads to a larger penalization strength of already visited sets in order to escape more quickly from metastable states, and (ii) the target distribution is biased using only a fraction of the free-energy, in order to increase the effective sample size and reduce the variance of importance sampling estimators. We prove the convergence of the algorithm and analyze numerically its efficiency on a toy example.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amidan, Brett G.; Hutchison, Janine R.

    There are many sources of variability in the sample collection and analysis process. This paper addresses many, but not all, of these sources. The main focus of this paper was to better understand and estimate variability due to differences between samplers. Variability between days was also studied, as well as random variability within each sampler. Experiments were performed using multiple surface materials (ceramic and stainless steel), multiple contaminant concentrations (10 spores and 100 spores), and with and without the presence of interfering material. All testing was done with sponge sticks using 10-inch by 10-inch coupons. Bacillus atrophaeus was used as the BA surrogate. Spores were deposited using wet deposition. Grime was coated on the coupons planned to include the interfering material (Section 3.3). Samples were prepared and analyzed at PNNL using the CDC protocol (Section 3.4) and then cultured and counted. Five samplers were trained so that samples were taken using the same protocol. Each sampler randomly sampled eight coupons each day, four coupons with 10 spores deposited and four coupons with 100 spores deposited. Each day consisted of one material being tested. The clean samples (no interfering material) were run first, followed by the dirty samples (coated with interfering material). There was a significant difference in recovery efficiency between the coupons with 10 spores deposited (mean of 48.9%) and those with 100 spores deposited (mean of 59.8%). There was no general significant difference between the clean and dirty (containing interfering material) coupons or between the two surface materials; however, there was a significant interaction between concentration amount and presence of interfering material. The recovery efficiency was close to the same for coupons with 10 spores deposited, but for the coupons with 100 spores deposited, the recovery efficiency for the dirty samples was significantly larger (65.9% dirty vs. 53.6% clean) (see Figure 4.1). Variance component analysis was used to estimate the amount of variability attributable to each source. There was little difference in variability between dirty and clean samples, or between materials, so these results were pooled together. There was a significant difference by amount of concentration deposited, so results were separated for the 10-spore and 100-spore tests. In each case the within-sampler variability was the largest, with variances of 426.2 for 10 spores and 173.1 for 100 spores. The within-sampler variability constitutes the variability among the four samples of similar material, interfering material, and concentration taken by each sampler. The between-sampler variance was estimated to be 0 for 10 spores and 1.2 for 100 spores. The between-day variance was estimated to be 42.1 for 10 spores and 78.9 for 100 spores. Standard deviations can be calculated in each case by taking the square root of the variance.
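
    A variance-component decomposition of this kind (between-sampler, between-day, and residual within-sampler variability for a crossed design) can be fitted with a mixed model. The sketch below is one way to do this with statsmodels on synthetic recovery-efficiency data; the design sizes, effect magnitudes, and the crossed variance-component formulation are assumptions for illustration, not the study's actual analysis or data.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # hypothetical recovery efficiencies: 5 samplers x 5 days x 4 coupons
    rng = np.random.default_rng(9)
    samplers = np.repeat(np.arange(5), 20)
    days = np.tile(np.repeat(np.arange(5), 4), 5)
    recovery = (55 + rng.normal(0, 1.0, 5)[samplers]      # between-sampler spread
                   + rng.normal(0, 7.0, 5)[days]          # between-day spread
                   + rng.normal(0, 15.0, samplers.size))  # within-sampler noise
    df = pd.DataFrame({"recovery": recovery, "sampler": samplers,
                       "day": days, "one": 1})

    # crossed random effects for sampler and day via variance-component formulas;
    # re_formula="0" suppresses the redundant random intercept of the single group
    model = sm.MixedLM.from_formula(
        "recovery ~ 1", data=df, groups="one", re_formula="0",
        vc_formula={"sampler": "0 + C(sampler)", "day": "0 + C(day)"},
    ).fit()
    print("variance components (labelled in model.summary()):", np.round(model.vcomp, 1))
    print("within-sampler (residual) variance:", round(model.scale, 1))
    ```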

  15. Self-perception and value system as possible predictors of stress.

    PubMed

    Sivberg, B

    1998-03-01

    This study was directed towards personality-related, value system and sociodemographic variables of nursing students in a situation of change, using a longitudinal perspective to measure their improvement in principle-based moral judgement (Kohlberg; Rest) as possible predictors of stress. Three subgroups of students were included from the commencement of the first three-year academic nursing programme in 1993. The students came from the colleges of health at Jönköping, Växjö and Kristianstad in the south of Sweden. A principal component factor analysis (varimax) was performed using data obtained from the students in the spring of 1994 (n = 122) and in the spring of 1996 (n = 112). There were 23 variables, of which two were sociodemographic, eight represented self-image, six were self-values, six were interpersonal values, and one was principle-based moral judgement. The analysis of data from students in the first year of a three-year programme demonstrated eight factors that explained 68.8% of the variance. The most important factors were: (1) ascendant decisive disorderly sociability and nonpractical mindedness (18.1% of the variance); (2) original vigour person-related trust (13.3% of the variance); (3) orderly nonvigour achievement (8.9% of the variance); and (4) independent leadership (7.9% of the variance). (The term 'ascendancy' refers to self-confidence, and 'vigour' denotes responding well to challenges and coping with stress.) The analysis in 1996 demonstrated nine factors, of which the most important were: (1) ascendant original sociability with decisive nonconformist leadership (18.2% of the variance); (2) cautious person-related responsibility (12.6% of the variance); (3) orderly nonvariety achievement (8.4% of the variance); and (4) nonsupportive benevolent conformity (7.2% of the variance). A comparison of the two most prominent factors in 1994 and 1996 showed the process of change to be stronger for 18.2% and weaker for 30% of the variance. Principle-based moral judgement was measured in March 1994 and in May 1996, using the Swedish version of the Defining Issues Test and Index P. The result was that Index P for the students at Jönköping changed significantly (paired samples t-test) between 1994 and 1996 (p = 0.028), but that for the Växjö and Kristianstad students did not. The mean of Index P was 44.3% at Växjö, which was greater than the international average for college students (42.3%); it differed significantly in the spring of 1996 (independent samples t-test), but not in 1994, from the students at Jönköping (p = 0.032) and Kristianstad (p = 0.025). Index P was very heterogeneous for the group of students at Växjö, with the result that the paired samples t-test only reached a value close to significance. The conclusion of this study was that, if self-perception and value system are predictors of stress, only one-third of the students had improved their ability to cope with stress at the end of the programme. This article contains the author's application to the teaching process of reflecting on the structure of expectations in professional ethical relationships.

  16. Sensitivity of the Hydrogen Epoch of Reionization Array and its build-out stages to one-point statistics from redshifted 21 cm observations

    NASA Astrophysics Data System (ADS)

    Kittiwisit, Piyanat; Bowman, Judd D.; Jacobs, Daniel C.; Beardsley, Adam P.; Thyagarajan, Nithyanandan

    2018-03-01

    We present a baseline sensitivity analysis of the Hydrogen Epoch of Reionization Array (HERA) and its build-out stages to one-point statistics (variance, skewness, and kurtosis) of redshifted 21 cm intensity fluctuation from the Epoch of Reionization (EoR) based on realistic mock observations. By developing a full-sky 21 cm light-cone model, taking into account the proper field of view and frequency bandwidth, utilizing a realistic measurement scheme, and assuming perfect foreground removal, we show that HERA will be able to recover statistics of the sky model with high sensitivity by averaging over measurements from multiple fields. All build-out stages will be able to detect variance, while skewness and kurtosis should be detectable for HERA128 and larger. We identify sample variance as the limiting constraint of the measurements at the end of reionization. The sensitivity can also be further improved by performing frequency windowing. In addition, we find that strong sample variance fluctuation in the kurtosis measured from an individual field of observation indicates the presence of outlying cold or hot regions in the underlying fluctuations, a feature that can potentially be used as an EoR bubble indicator.
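    A small sketch of the one-point statistics in question, computed on toy fields; it is only meant to show how averaging measurements over several independent fields beats down the sample variance of each statistic, and makes no attempt to model the HERA instrument or the 21 cm signal:

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(8)

def one_point_stats(field):
    """Variance, skewness and excess kurtosis of a toy brightness field."""
    x = field.ravel()
    return np.var(x), skew(x), kurtosis(x)

# Toy: 20 independent fields drawn from the same (slightly skewed) distribution.
fields = [rng.gamma(shape=4.0, scale=1.0, size=(64, 64)) for _ in range(20)]
stats_per_field = np.array([one_point_stats(f) for f in fields])

mean_stats = stats_per_field.mean(axis=0)
single_field_scatter = stats_per_field.std(axis=0, ddof=1)
sample_err = single_field_scatter / np.sqrt(len(fields))   # averaging over fields shrinks the error
for name, m, e, s in zip(("variance", "skewness", "kurtosis"),
                         mean_stats, sample_err, single_field_scatter):
    print(f"{name:>8}: {m:6.3f} ± {e:.3f}  (single-field scatter {s:.3f})")
```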

  17. Clinical brain MR imaging prescriptions in Talairach space: technologist- and computer-driven methods.

    PubMed

    Weiss, Kenneth L; Pan, Hai; Storrs, Judd; Strub, William; Weiss, Jane L; Jia, Li; Eldevik, O Petter

    2003-05-01

    Variability in patient head positioning may yield substantial interstudy image variance in the clinical setting. We describe and test three-step technologist and computer-automated algorithms designed to image the brain in a standard reference system and reduce variance. Triple oblique axial images obtained parallel to the Talairach anterior commissure (AC)-posterior commissure (PC) plane were reviewed in a prospective analysis of 126 consecutive patients. Requisite roll, yaw, and pitch corrections, determined independently by three authors and subsequently by consensus, were compared with the technologists' actual graphical prescriptions and those generated by a novel computer-automated three-step (CATS) program. Automated pitch determinations generated with Statistical Parametric Mapping '99 (SPM'99) were also compared. Requisite pitch correction (15.2 degrees +/- 10.2 degrees) far exceeded that for roll (-0.6 degrees +/- 3.7 degrees) and yaw (-0.9 degrees +/- 4.7 degrees) in terms of magnitude and variance (P < .001). Technologist and computer-generated prescriptions substantially reduced interpatient image variance with regard to roll (3.4 degrees and 3.9 degrees vs 13.5 degrees), yaw (0.6 degrees and 2.5 degrees vs 22.3 degrees), and pitch (28.6 degrees, 18.5 degrees with CATS, and 59.3 degrees with SPM'99 vs 104 degrees). CATS performed worse than the technologists in yaw prescription, and it was equivalent in roll and pitch prescriptions. Talairach prescriptions better approximated standard CT canthomeatal angulations (9 degrees vs 24 degrees) and provided more efficient brain coverage than that of routine axial imaging. Brain MR prescriptions corrected for direct roll, yaw, and Talairach AC-PC pitch can be readily achieved by trained technologists or automated computer algorithms. This ability will substantially reduce interpatient variance, allow better approximation of standard CT angulation, and yield more efficient brain coverage than that of routine clinical axial imaging.

  18. Atmospheric pressure loading effects on Global Positioning System coordinate determinations

    NASA Technical Reports Server (NTRS)

    Vandam, Tonie M.; Blewitt, Geoffrey; Heflin, Michael B.

    1994-01-01

    Earth deformation signals caused by atmospheric pressure loading are detected in vertical position estimates at Global Positioning System (GPS) stations. Surface displacements due to changes in atmospheric pressure account for up to 24% of the total variance in the GPS height estimates. The detected loading signals are larger at higher latitudes where pressure variations are greatest; the largest effect is observed at Fairbanks, Alaska (latitude 65 deg), with a signal root mean square (RMS) of 5 mm. Out of 19 continuously operating GPS sites (with a mean of 281 daily solutions per site), 18 show a positive correlation between the GPS vertical estimates and the modeled loading displacements. Accounting for loading reduces the variance of the vertical station positions on 12 of the 19 sites investigated. Removing the modeled pressure loading from GPS determinations of baseline length for baselines longer than 6000 km reduces the variance on 73 of the 117 baselines investigated. The slight increase in variance for some of the sites and baselines is consistent with expected statistical fluctuations. The results from most stations are consistent with approximately 65% of the modeled pressure load being found in the GPS vertical position measurements. Removing an annual signal from both the measured heights and the modeled load time series leaves this value unchanged. The source of the remaining discrepancy between the modeled and observed loading signal may be the result of (1) anisotropic effects in the Earth's loading response, (2) errors in GPS estimates of tropospheric delay, (3) errors in the surface pressure data, or (4) annual signals in the time series of loading and station heights. In addition, we find that using site dependent coefficients, determined by fitting local pressure to the modeled radial displacements, reduces the variance of the measured station heights as well as or better than using the global convolution sum.
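    A minimal illustration of the kind of comparison described above, with synthetic daily series and hypothetical numbers: a least-squares admittance estimates the fraction of the modeled load present in the heights, and removing the model reduces the height variance:

```python
import numpy as np

rng = np.random.default_rng(9)
n_days = 281  # roughly the mean number of daily solutions per site quoted above

# Hypothetical modeled pressure-loading displacement (mm) and measured GPS heights
# containing about 65% of that signal plus measurement noise.
modeled = 5.0 * rng.standard_normal(n_days)
measured = 0.65 * modeled + 4.0 * rng.standard_normal(n_days)

# Least-squares admittance: how much of the modeled load shows up in the heights.
admittance = np.dot(modeled, measured) / np.dot(modeled, modeled)
residual = measured - modeled          # heights after removing the full modeled load
print(f"estimated admittance: {admittance:.2f}")
print(f"height variance before/after removing the model: "
      f"{measured.var():.1f} / {residual.var():.1f} mm^2")
```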

  19. On the brain structure heterogeneity of autism: Parsing out acquisition site effects with significance-weighted principal component analysis.

    PubMed

    Martinez-Murcia, Francisco Jesús; Lai, Meng-Chuan; Górriz, Juan Manuel; Ramírez, Javier; Young, Adam M H; Deoni, Sean C L; Ecker, Christine; Lombardo, Michael V; Baron-Cohen, Simon; Murphy, Declan G M; Bullmore, Edward T; Suckling, John

    2017-03-01

    Neuroimaging studies have reported structural and physiological differences that could help understand the causes and development of Autism Spectrum Disorder (ASD). Many of them rely on multisite designs, with the recruitment of larger samples increasing statistical power. However, recent large-scale studies have put some findings into question, considering the results to be strongly dependent on the database used, and demonstrating the substantial heterogeneity within this clinically defined category. One major source of variance may be the acquisition of the data in multiple centres. In this work we analysed the differences found in the multisite, multi-modal neuroimaging database from the UK Medical Research Council Autism Imaging Multicentre Study (MRC AIMS) in terms of both diagnosis and acquisition sites. Since the dissimilarities between sites were higher than between diagnostic groups, we developed a technique called Significance Weighted Principal Component Analysis (SWPCA) to reduce the undesired intensity variance due to acquisition site and to increase the statistical power in detecting group differences. After eliminating site-related variance, statistically significant group differences were found, including Broca's area and the temporo-parietal junction. However, discriminative power was not sufficient to classify diagnostic groups, yielding classification accuracies close to random. Our work supports recent claims that ASD is a highly heterogeneous condition that is difficult to globally characterize by neuroimaging, and therefore different (and more homogeneous) subgroups should be defined to obtain a deeper understanding of ASD. Hum Brain Mapp 38:1208-1223, 2017. © 2016 Wiley Periodicals, Inc.

  20. Mass fluctuation kinetics: Capturing stochastic effects in systems of chemical reactions through coupled mean-variance computations

    NASA Astrophysics Data System (ADS)

    Gómez-Uribe, Carlos A.; Verghese, George C.

    2007-01-01

    The intrinsic stochastic effects in chemical reactions, and particularly in biochemical networks, may result in behaviors significantly different from those predicted by deterministic mass action kinetics (MAK). Analyzing stochastic effects, however, is often computationally taxing and complex. The authors describe here the derivation and application of what they term the mass fluctuation kinetics (MFK), a set of deterministic equations to track the means, variances, and covariances of the concentrations of the chemical species in the system. These equations are obtained by approximating the dynamics of the first and second moments of the chemical master equation. Apart from needing knowledge of the system volume, the MFK description requires only the same information used to specify the MAK model, and is not significantly harder to write down or apply. When the effects of fluctuations are negligible, the MFK description typically reduces to MAK. The MFK equations are capable of describing the average behavior of the network substantially better than MAK, because they incorporate the effects of fluctuations on the evolution of the means. They also account for the effects of the means on the evolution of the variances and covariances, to produce quite accurate uncertainty bands around the average behavior. The MFK computations, although approximate, are significantly faster than Monte Carlo methods for computing first and second moments in systems of chemical reactions. They may therefore be used, perhaps along with a few Monte Carlo simulations of sample state trajectories, to efficiently provide a detailed picture of the behavior of a chemical system.
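    As a toy instance of coupled mean-variance equations in this spirit (not the general MFK derivation), consider a birth-death process with constant production rate k and first-order degradation rate gamma; for these linear rates the moment equations close exactly and integrate in a few lines:

```python
# Birth-death process: production at rate k, degradation at rate gamma * x.
# Moment equations from the chemical master equation:
#   d<x>/dt  = k - gamma * <x>
#   dVar/dt  = k + gamma * <x> - 2 * gamma * Var
k, gamma = 10.0, 1.0
dt, t_end = 1e-3, 10.0

m, v = 0.0, 0.0   # mean and variance of the copy number, starting from an empty state
for _ in range(int(t_end / dt)):
    dm = k - gamma * m
    dv = k + gamma * m - 2.0 * gamma * v
    m += dm * dt
    v += dv * dt

# Both tend to k / gamma at steady state (a Poisson distribution), as expected.
print(f"steady-state mean {m:.2f}, variance {v:.2f}")
```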

  1. The Australian National Sub-acute and Non-acute Patient Casemix Classification (AN-SNAP): its application and value in a stroke rehabilitation programme.

    PubMed

    Lowthian, P; Disler, P; Ma, S; Eagar, K; Green, J; de Graaff, S

    2000-10-01

    The aim was to investigate whether the Australian National Sub-acute and Non-acute Patient Casemix Classification (SNAP) and Functional Independence Measure and Functional Related Group (Version 2) (FIM-FRG2) casemix systems can be used to predict functional outcome, and reduce the variance of length of stay (LOS) of patients undergoing rehabilitation after strokes. The study comprised a retrospective analysis of the records of patients admitted to the Cedar Court Healthsouth Rehabilitation Hospital for rehabilitation after stroke. The sample included 547 patients (83.3% of those admitted with stroke during this period). Patient data were stratified for analysis into the five SNAP or nine FIM-FRG2 groups, on the basis of the admission FIM scores and age. The AN-SNAP classification accounted for a 30.7% reduction in the variance of LOS and a 44.2% reduction for motor FIM, while the FIM-FRG2 accounted for reductions of 33.5% and 56.4%, respectively. Comparison of the Cedar Court data with the national AN-SNAP data showed differences in the LOS and functional outcomes of older, severely disabled patients. Intensive rehabilitation in selected patients of this type appears to have positive effects, albeit with a slightly longer period of inpatient rehabilitation. Casemix classifications can be powerful management tools. Although FIM-FRG2 accounts for more reduction in variance than SNAP, division into nine groups meant that some contained few subjects. This paper supports the introduction of AN-SNAP as the standard casemix tool for rehabilitation in Australia, which will hopefully lead to rational, adequate funding of the rehabilitation phase of care.

  2. Accelerated Creep Testing of High Strength Aramid Webbing

    NASA Technical Reports Server (NTRS)

    Jones, Thomas C.; Doggett, William R.; Stnfield, Clarence E.; Valverde, Omar

    2012-01-01

    A series of preliminary accelerated creep tests were performed on four variants of 12K and 24K lbf rated Vectran webbing to help develop an accelerated creep test methodology and analysis capability for high strength aramid webbings. The variants included pristine, aged, folded and stitched samples. This class of webbings is used in the restraint layer of habitable, inflatable space structures, for which the lifetime properties are currently not well characterized. The Stepped Isothermal Method was used to accelerate the creep life of the webbings and a novel stereo photogrammetry system was used to measure the full-field strains. A custom MATLAB code is described, and used to reduce the strain data to produce master creep curves for the test samples. Initial results show good correlation between replicates; however, it is clear that a larger number of samples are needed to build confidence in the consistency of the results. It is noted that local fiber breaks affect the creep response in a similar manner to increasing the load, thus raising the creep rate and reducing the time to creep failure. The stitched webbings produced the highest variance between replicates, due to the combination of higher local stresses and thread-on-fiber damage. Large variability in the strength of the webbings is also shown to have an impact on the range of predicted creep life.

  3. Estimation of genetic connectedness diagnostics based on prediction errors without the prediction error variance-covariance matrix.

    PubMed

    Holmes, John B; Dodds, Ken G; Lee, Michael A

    2017-03-02

    An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitude smaller than the number of random effect levels, the computational requirements for our method should be reduced.

  4. Practice reduces task relevant variance modulation and forms nominal trajectory

    NASA Astrophysics Data System (ADS)

    Osu, Rieko; Morishige, Ken-Ichi; Nakanishi, Jun; Miyamoto, Hiroyuki; Kawato, Mitsuo

    2015-12-01

    Humans are capable of achieving complex tasks with redundant degrees of freedom. Much attention has been paid to task relevant variance modulation as an indication of online feedback control strategies to cope with motor variability. Meanwhile, it has been discussed that the brain learns internal models of environments to realize feedforward control with nominal trajectories. Here we examined trajectory variance in both spatial and temporal domains to elucidate the relative contribution of these control schemas. We asked subjects to learn reaching movements with multiple via-points, and found that hand trajectories converged to stereotyped trajectories with the reduction of task relevant variance modulation as learning proceeded. Furthermore, variance reduction was not always associated with task constraints but was highly correlated with the velocity profile. A model assuming noise both on the nominal trajectory and motor command was able to reproduce the observed variance modulation, supporting an expression of nominal trajectories in the brain. The learning-related decrease in task-relevant modulation revealed a reduction in the influence of optimal feedback around the task constraints. After practice, the major part of computation seems to be taken over by the feedforward controller around the nominal trajectory with feedback added only when it becomes necessary.

  5. On Using a Pilot Sample Variance for Sample Size Determination in the Detection of Differences between Two Means: Power Consideration

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2013-01-01

    The a priori determination of a proper sample size necessary to achieve some specified power is an important problem encountered frequently in practical studies. To establish the needed sample size for a two-sample "t" test, researchers may conduct the power analysis by specifying scientifically important values as the underlying population means…
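    A short sketch of the sample-size calculation the abstract refers to, using statsmodels and made-up planning numbers; the article's point is that the pilot variance is itself an estimate, so the power actually achieved by the resulting design is a random quantity:

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical planning numbers: a pilot sample standard deviation of 8 units and a
# scientifically important mean difference of 5 units between the two groups.
pilot_sd = 8.0
important_diff = 5.0
effect_size = important_diff / pilot_sd   # Cohen's d based on the pilot variance

n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                           alpha=0.05, power=0.80,
                                           alternative="two-sided")
print(f"required sample size per group: {n_per_group:.1f}")
# Because the pilot variance is random, the nominal 80% power is only the target,
# not a guarantee for the realized study.
```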

  6. Design tradeoffs for trend assessment in aquatic biological monitoring programs

    USGS Publications Warehouse

    Gurtz, Martin E.; Van Sickle, John; Carlisle, Daren M.; Paulsen, Steven G.

    2013-01-01

    Assessments of long-term (multiyear) temporal trends in biological monitoring programs are generally undertaken without an adequate understanding of the temporal variability of biological communities. When the sources and levels of variability are unknown, managers cannot make informed choices in sampling design to achieve monitoring goals in a cost-effective manner. We evaluated different trend sampling designs by estimating components of both short- and long-term variability in biological indicators of water quality in streams. Invertebrate samples were collected from 32 sites—9 urban, 6 agricultural, and 17 relatively undisturbed (reference) streams—distributed throughout the United States. Between 5 and 12 yearly samples were collected at each site during the period 1993–2008, plus 2 samples within a 10-week index period during either 2007 or 2008. These data allowed calculation of four sources of variance for invertebrate indicators: among sites, among years within sites, interaction among sites and years (site-specific annual variation), and among samples collected within an index period at a site (residual). When estimates of these variance components are known, changes to sampling design can be made to improve trend detection. Design modifications that result in the ability to detect the smallest trend with the fewest samples are, from most to least effective: (1) increasing the number of years in the sampling period (duration of the monitoring program), (2) decreasing the interval between samples, and (3) increasing the number of repeat-visit samples per year (within an index period). This order of improvement in trend detection, which achieves the greatest gain for the fewest samples, is the same whether trends are assessed at an individual site or an average trend of multiple sites. In multiple-site surveys, increasing the number of sites has an effect similar to that of decreasing the sampling interval; the benefit of adding sites is greater when a new set of different sites is selected for each sampling effort than when the same sites are sampled each time. Understanding variance components of the ecological attributes of interest can lead to more cost-effective monitoring designs to detect trends.
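    A hedged sketch of how the design levers compare once the variance components are known, for a single site with a linear trend and hypothetical components (site-by-year interaction and within-index-period residual both set to 9 squared indicator units); with these illustrative values the ranking matches the ordering reported above:

```python
import numpy as np

var_site_year = 9.0   # hypothetical site-specific annual variation
var_residual = 9.0    # hypothetical within-index-period (repeat-visit) variation

def trend_se(duration, interval=1, n_revisits=1):
    """Standard error of the OLS trend slope at one site.

    Each sampled year contributes a mean carrying interaction variance plus
    residual variance divided by the number of repeat visits.
    """
    years = np.arange(0, duration + 1, interval)
    per_year_var = var_site_year + var_residual / n_revisits
    return np.sqrt(per_year_var / np.sum((years - years.mean()) ** 2))

# Compare the three design changes discussed in the abstract against a baseline.
print("baseline: 10-yr program, sample every 2 yr, 1 visit ->", round(trend_se(10, 2, 1), 3))
print("longer duration (16-yr program)                     ->", round(trend_se(16, 2, 1), 3))
print("shorter interval (sample every year)                ->", round(trend_se(10, 1, 1), 3))
print("more repeat visits (3 per index period)             ->", round(trend_se(10, 2, 3), 3))
```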

  7. Genomic scan as a tool for assessing the genetic component of phenotypic variance in wild populations.

    PubMed

    Herrera, Carlos M

    2012-01-01

    Methods for estimating quantitative trait heritability in wild populations have been developed in recent years which take advantage of the increased availability of genetic markers to reconstruct pedigrees or estimate relatedness between individuals, but their application to real-world data is not exempt from difficulties. This chapter describes a recent marker-based technique which, by adopting a genomic scan approach and focusing on the relationship between phenotypes and genotypes at the individual level, avoids the problems inherent to marker-based estimators of relatedness. This method allows the quantification of the genetic component of phenotypic variance ("degree of genetic determination" or "heritability in the broad sense") in wild populations and is applicable whenever phenotypic trait values and multilocus data for a large number of genetic markers (e.g., amplified fragment length polymorphisms, AFLPs) are simultaneously available for a sample of individuals from the same population. The method proceeds by first identifying those markers whose variation across individuals is significantly correlated with individual phenotypic differences ("adaptive loci"). The proportion of phenotypic variance in the sample that is statistically accounted for by individual differences in adaptive loci is then estimated by fitting a linear model to the data, with trait value as the dependent variable and scores of adaptive loci as independent ones. The method can be easily extended to accommodate quantitative or qualitative information on biologically relevant features of the environment experienced by each sampled individual, in which case estimates of the environmental and genotype × environment components of phenotypic variance can also be obtained.
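    A toy sketch of the two-step procedure described above (screen for adaptive loci, then fit a linear model and read off the explained variance), with simulated presence/absence markers standing in for AFLP scores:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_ind, n_loci = 200, 500
# Hypothetical AFLP scores (band presence/absence) and a toy phenotype that
# depends on the first 10 loci plus environmental noise.
markers = rng.integers(0, 2, size=(n_ind, n_loci)).astype(float)
trait = markers[:, :10].sum(axis=1) + rng.normal(0.0, 2.0, n_ind)

# Step 1: genomic scan, keeping "adaptive loci" whose variation correlates with the trait.
pvals = np.array([stats.pearsonr(markers[:, j], trait)[1] for j in range(n_loci)])
adaptive = np.flatnonzero(pvals < 1e-3)   # stringent per-locus threshold, for illustration only

# Step 2: linear model trait ~ adaptive loci; its R^2 estimates the genetic component
# of phenotypic variance (broad-sense degree of genetic determination).
X = np.column_stack([np.ones(n_ind), markers[:, adaptive]])
beta, *_ = np.linalg.lstsq(X, trait, rcond=None)
r2 = 1.0 - np.sum((trait - X @ beta) ** 2) / np.sum((trait - trait.mean()) ** 2)
print(f"{adaptive.size} adaptive loci; estimated genetic determination ≈ {r2:.2f}")
# Note: selecting loci on the same data inflates R^2; a real analysis would use
# cross-validation or a permutation-based correction.
```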

  8. Integrating mean and variance heterogeneities to identify differentially expressed genes.

    PubMed

    Ouyang, Weiwei; An, Qiang; Zhao, Jinying; Qin, Huaizhen

    2016-12-06

    In functional genomics studies, tests on mean heterogeneity have been widely employed to identify differentially expressed genes with distinct mean expression levels under different experimental conditions. Variance heterogeneity (i.e., the difference between condition-specific variances) of gene expression levels is simply neglected or calibrated out as a nuisance. The mean heterogeneity in the expression level of a gene reflects one aspect of its distribution alteration, and variance heterogeneity induced by condition change may reflect another aspect. Change in condition may alter both mean and some higher-order characteristics of the distributions of expression levels of susceptible genes. In this report, we put forth the concept of mean-variance differentially expressed (MVDE) genes, whose expression means and variances are sensitive to the change in experimental condition. We mathematically proved the null independence of existent mean heterogeneity tests and variance heterogeneity tests. Based on the independence, we proposed an integrative mean-variance test (IMVT) to combine gene-wise mean heterogeneity and variance heterogeneity induced by condition change. The IMVT outperformed its competitors under comprehensive simulations of normality and Laplace settings. For moderate samples, the IMVT well controlled type I error rates, and so did the existent mean heterogeneity tests (i.e., the Welch t test (WT) and the moderated Welch t test (MWT)) and the procedure of separate tests on mean and variance heterogeneities (SMVT), but the likelihood ratio test (LRT) severely inflated type I error rates. In the presence of variance heterogeneity, the IMVT appeared noticeably more powerful than all the valid mean heterogeneity tests. Application to the gene profiles of peripheral circulating B cells raised solid evidence of informative variance heterogeneity. After adjusting for background data structure, the IMVT replicated previous discoveries and identified novel experiment-wide significant MVDE genes. Our results indicate a tremendous potential gain from integrating informative variance heterogeneity after adjusting for global confounders and background data structure. The proposed informative integration test better summarizes the impacts of condition change on expression distributions of susceptible genes than do the existent competitors. Therefore, particular attention should be paid to explicitly exploit the variance heterogeneity induced by condition change in functional genomics analysis.
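    A simplified stand-in for the integrative idea above: test mean heterogeneity with a Welch t test, variance heterogeneity with Levene's test, and combine the two (asymptotically independent) p-values. The published IMVT uses its own integrative statistic; Fisher's method is used here only for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical expression values for one gene under two conditions:
# nearly equal means but clearly different variances.
a = rng.normal(5.0, 1.0, size=30)
b = rng.normal(5.2, 2.0, size=30)

_, p_mean = stats.ttest_ind(a, b, equal_var=False)   # mean heterogeneity (Welch t)
_, p_var = stats.levene(a, b)                        # variance heterogeneity

# Under the joint null the two p-values are approximately independent, so they can
# be combined; this is not the exact IMVT statistic from the paper.
_, p_joint = stats.combine_pvalues([p_mean, p_var], method="fisher")
print(f"p_mean={p_mean:.3f}  p_var={p_var:.3f}  p_joint={p_joint:.3f}")
```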

  9. Variance analysis of forecasted streamflow maxima in a wet temperate climate

    NASA Astrophysics Data System (ADS)

    Al Aamery, Nabil; Fox, James F.; Snyder, Mark; Chandramouli, Chandra V.

    2018-05-01

    Coupling global climate models, hydrologic models and extreme value analysis provides a method to forecast streamflow maxima; however, the elusive variance structure of the results hinders confidence in application. Directly correcting the bias of forecasts using the relative change between forecast and control simulations has been shown to marginalize hydrologic uncertainty, reduce model bias, and remove systematic variance when predicting mean monthly and mean annual streamflow, prompting our investigation for streamflow maxima. We assess the variance structure of streamflow maxima using realizations of emission scenario, global climate model type and project phase, downscaling methods, bias correction, extreme value methods, and hydrologic model inputs and parameterization. Results show that the relative change of streamflow maxima was not dependent on systematic variance from the annual maxima versus peak over threshold method applied, although we stress that researchers must strictly adhere to the rules of extreme value theory when applying the peak over threshold method. Regardless of which method is applied, extreme value model fitting does add variance to the projection, and the variance is an increasing function of the return period. Unlike the relative change of mean streamflow, results show that the variance of the maxima's relative change was dependent on all climate model factors tested as well as hydrologic model inputs and calibration. Ensemble projections forecast an increase of streamflow maxima for 2050 with pronounced forecast standard error, including an increase of +30(±21), +38(±34) and +51(±85)% for 2, 20 and 100 year streamflow events for the wet temperate region studied. The variance of maxima projections was dominated by climate model factors and extreme value analyses.
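    A minimal sketch of the extreme value step only (not the full climate-hydrology chain): fit a GEV distribution to synthetic annual maxima, read off return levels, and bootstrap the fit to see that the estimation variance grows with the return period:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Hypothetical annual maximum streamflow series (m^3/s) for a control period.
annmax = stats.genextreme.rvs(c=-0.1, loc=100, scale=30, size=40, random_state=rng)

# Fit a GEV and read off return levels for several return periods.
c, loc, scale = stats.genextreme.fit(annmax)
for T in (2, 20, 100):
    level = stats.genextreme.ppf(1 - 1 / T, c, loc, scale)
    print(f"{T:>3}-yr return level: {level:7.1f}")

# Bootstrap the fit to see how sampling variance grows with the return period.
boots = {T: [] for T in (2, 20, 100)}
for _ in range(200):
    resampled = rng.choice(annmax, size=annmax.size, replace=True)
    cb, lb, sb = stats.genextreme.fit(resampled)
    for T in boots:
        boots[T].append(stats.genextreme.ppf(1 - 1 / T, cb, lb, sb))
for T, vals in boots.items():
    print(f"std of {T:>3}-yr level estimate: {np.std(vals):6.1f}")
```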

  10. High brightness, low coherence, digital holographic microscopy for 3D visualization of an in-vitro sandwiched biological sample.

    PubMed

    Abdelsalam, D G; Yasui, Takeshi

    2017-05-01

    We achieve practically a bright-field digital holographic microscopy (DHM) configuration free from coherent noise for three-dimensional (3D) visualization of an in-vitro sandwiched sarcomere sample. Visualization of such sandwiched samples by conventional atomic force microscope (AFM) is impossible, while visualization using DHM with long coherent lengths is challenging. The proposed configuration is comprised of an ultrashort pulse laser source and a Mach-Zehnder interferometer in transmission. Periodically poled lithium niobate (PPLN) crystal was used to convert the fundamental beam by second harmonic generation (SHG) to the generated beam fit to the CCD camera used. The experimental results show that the contrast of the reconstructed phase image is improved to a higher degree compared to a He-Ne laser based result. We attribute this improvement to two things: the feature of the femtosecond pulse light, which acts as a chopper for coherent noise suppression, and the fact that the variance of a coherent mode can be reduced by a factor of 9 due to low loss through a nonlinear medium.

  11. Movie denoising by average of warped lines.

    PubMed

    Bertalmío, Marcelo; Caselles, Vicent; Pardo, Alvaro

    2007-09-01

    Here, we present an efficient method for movie denoising that does not require any motion estimation. The method is based on the well-known fact that averaging several realizations of a random variable reduces the variance. For each pixel to be denoised, we look for close similar samples along the level surface passing through it. With these similar samples, we estimate the denoised pixel. Close similar samples are found by warping lines in spatiotemporal neighborhoods. To that end, we present an algorithm, based on a method for epipolar line matching in stereo pairs, which has per-line complexity O(N), where N is the number of columns in the image. In this way, when applied to the image sequence, our algorithm is computationally efficient, having a complexity of the order of the total number of pixels. Furthermore, we show that the presented method is unsupervised and is adapted to denoise image sequences with additive white noise while respecting the visual details of the movie frames. We have also experimented with other types of noise with satisfactory results.
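    The variance-reduction principle the method rests on can be checked in a few lines: averaging k aligned noisy realizations of the same pixel divides the noise variance by k. The actual algorithm finds those realizations by warping lines rather than assuming they are given, so this only illustrates the underlying statistics:

```python
import numpy as np

rng = np.random.default_rng(5)
clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))   # toy "frame": a horizontal ramp
sigma = 0.2

# Averaging k aligned noisy realizations of the same pixel reduces the noise
# variance by a factor of k.
for k in (1, 4, 16):
    noisy = clean + sigma * rng.standard_normal((k, *clean.shape))
    denoised = noisy.mean(axis=0)
    mse = np.mean((denoised - clean) ** 2)
    print(f"k={k:2d}  residual variance ≈ {mse:.4f}  (theory: {sigma**2 / k:.4f})")
```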

  12. Reconstructing a herbivore’s diet using a novel rbcL DNA mini-barcode for plants

    USGS Publications Warehouse

    Erickson, David L.; Reed, Elizabeth; Ramachandran, Padmini; Bourg, Norman; McShea, William J.; Ottesen, Andrea

    2017-01-01

    Next Generation Sequencing and the application of metagenomic analyses can be used to answer questions about animal diet choice and study the consequences of selective foraging by herbivores. The quantification of herbivore diet choice with respect to native versus exotic plant species is particularly relevant given concerns of invasive species establishment and their effects on ecosystems. While increased abundance of white-tailed deer (Odocoileus virginianus) appears to correlate with increased incidence of invasive plant species, data supporting a causal link is scarce. We used a metabarcoding approach (PCR amplicons of the plant rbcL gene) to survey the diet of white-tailed deer (fecal samples) from a forested site in Warren County, Virginia with a comprehensive plant species inventory and corresponding reference collection of plant barcode and chloroplast sequences. We sampled fecal pellet piles and extracted DNA from 12 individual deer in October 2014. These samples were compared to a reference DNA library of plant species collected within the study area. For 72% of the amplicons, we were able to assign taxonomy at the species level, which provides, for the first time, sufficient taxonomic resolution to quantify the relative frequency at which native and exotic plant species are being consumed by white-tailed deer. For each of the 12 individual deer we collected three subsamples from the same fecal sample, resulting in sequencing 36 total samples. Using Qiime, we quantified the plant DNA found in all 36 samples, and found that variance within samples was less than variance between samples (F = 1.73, P = 0.004), indicating additional subsamples may not be necessary. Species level diversity ranged from 60 to 93 OTUs per individual and nearly 70% of all plant sequences recovered were from native plant species. The number of species detected was significantly reduced (range 4–12) when we excluded species whose OTUs composed <1% of each sample’s total. When compared to the abundance of native and non-native plants inventoried in the local community, our results support the observation that white-tailed deer have strong foraging preferences, but these preferences were not consistent for species in either class. Deer forage behaviour may favour some exotic species, but not all.

  13. Reconstructing a herbivore’s diet using a novel rbcL DNA mini-barcode for plants

    PubMed Central

    Erickson, David L.; Reed, Elizabeth; Ramachandran, Padmini; Bourg, Norman A.; Ottesen, Andrea

    2017-01-01

    Next Generation Sequencing and the application of metagenomic analyses can be used to answer questions about animal diet choice and study the consequences of selective foraging by herbivores. The quantification of herbivore diet choice with respect to native versus exotic plant species is particularly relevant given concerns of invasive species establishment and their effects on ecosystems. While increased abundance of white-tailed deer (Odocoileus virginianus) appears to correlate with increased incidence of invasive plant species, data supporting a causal link is scarce. We used a metabarcoding approach (PCR amplicons of the plant rbcL gene) to survey the diet of white-tailed deer (fecal samples) from a forested site in Warren County, Virginia with a comprehensive plant species inventory and corresponding reference collection of plant barcode and chloroplast sequences. We sampled fecal pellet piles and extracted DNA from 12 individual deer in October 2014. These samples were compared to a reference DNA library of plant species collected within the study area. For 72% of the amplicons, we were able to assign taxonomy at the species level, which provides, for the first time, sufficient taxonomic resolution to quantify the relative frequency at which native and exotic plant species are being consumed by white-tailed deer. For each of the 12 individual deer we collected three subsamples from the same fecal sample, resulting in sequencing 36 total samples. Using Qiime, we quantified the plant DNA found in all 36 samples, and found that variance within samples was less than variance between samples (F = 1.73, P = 0.004), indicating additional subsamples may not be necessary. Species level diversity ranged from 60 to 93 OTUs per individual and nearly 70% of all plant sequences recovered were from native plant species. The number of species detected was significantly reduced (range 4–12) when we excluded species whose OTUs composed <1% of each sample’s total. When compared to the abundance of native and non-native plants inventoried in the local community, our results support the observation that white-tailed deer have strong foraging preferences, but these preferences were not consistent for species in either class. Deer forage behaviour may favour some exotic species, but not all. PMID:28533898

  14. Health Insurance Coverage: Early Release of Estimates from the National Health Interview Survey, January -- June 2013

    MedlinePlus

    ... Park, NC) to account for the complex sample design of NHIS, taking into account stratum and primary sampling unit (PSU) identifiers. The Taylor series linearization method was chosen for variance estimation. Trends ...

  15. The spatial structure and temporal synchrony of water quality in stream networks

    NASA Astrophysics Data System (ADS)

    Abbott, Benjamin; Gruau, Gerard; Zarneske, Jay; Barbe, Lou; Gu, Sen; Kolbe, Tamara; Thomas, Zahra; Jaffrezic, Anne; Moatar, Florentina; Pinay, Gilles

    2017-04-01

    To feed nine billion people in 2050 while maintaining viable aquatic ecosystems will require an understanding of nutrient pollution dynamics throughout stream networks. Most regulatory frameworks such as the European Water Framework Directive and U.S. Clean Water Act, focus on nutrient concentrations in medium to large rivers. This strategy is appealing because large rivers integrate many small catchments and total nutrient loads drive eutrophication in estuarine and oceanic ecosystems. However, there is growing evidence that to understand and reduce downstream nutrient fluxes we need to look upstream. While headwater streams receive the bulk of nutrients in river networks, the relationship between land cover and nutrient flux often breaks down for small catchments, representing an important ecological unknown since 90% of global stream length occurs in catchments smaller than 15 km2. Though continuous monitoring of thousands of small streams is not feasible, what if we could learn what we needed about where and when to implement monitoring and conservation efforts with periodic sampling of headwater catchments? To address this question we performed repeat synoptic sampling of 56 nested catchments ranging in size from 1 to 370 km2 in western France. Spatial variability in carbon and nutrient concentrations decreased non-linearly as catchment size increased, with thresholds in variance for organic carbon and nutrients occurring between 36 and 68 km2. While it is widely held that temporal variance is higher in smaller streams, we observed consistent temporal variance across spatial scales and the ranking of catchments based on water quality showed strong synchrony in the water chemistry response to seasonal variation and hydrological events. We used these observations to develop two simple management frameworks. The subcatchment leverage concept proposes that mitigation and restoration efforts are more likely to succeed when implemented at spatial scales expressing high variability in the target parameter, which indicates decreased system inertia and demonstrates that alternative system responses are possible. The subcatchment synchrony concept suggests that periodic sampling of headwaters can provide valuable information about pollutant sources and inherent resilience in subcatchments and that if agricultural activity were redistributed based on this assessment of catchment vulnerability to nutrient loading, water quality could be improved while maintaining crop yields.

  16. Convenience Samples and Caregiving Research: How Generalizable Are the Findings?

    ERIC Educational Resources Information Center

    Pruchno, Rachel A.; Brill, Jonathan E.; Shands, Yvonne; Gordon, Judith R.; Genderson, Maureen Wilson; Rose, Miriam; Cartwright, Francine

    2008-01-01

    Purpose: We contrast characteristics of respondents recruited using convenience strategies with those of respondents recruited by random digit dial (RDD) methods. We compare sample variances, means, and interrelationships among variables generated from the convenience and RDD samples. Design and Methods: Women aged 50 to 64 who work full time and…

  17. Regression sampling: some results for resource managers and researchers

    Treesearch

    William G. O'Regan; Robert W. Boyd

    1974-01-01

    Regression sampling is widely used in natural resources management and research to estimate quantities of resources per unit area. This note brings together results found in the statistical literature in the application of this sampling technique. Conditional and unconditional estimators are listed and for each estimator, exact variances and unbiased estimators for the...

  18. Factor Covariance Analysis in Subgroups.

    ERIC Educational Resources Information Center

    Pennell, Roger

    The problem considered is that of an investigator sampling two or more correlation matrices and desiring to fit a model where a factor pattern matrix is assumed to be identical across samples and we need to estimate only the factor covariance matrix and the unique variance for each sample. A flexible, least squares solution is worked out and…

  19. Stability of measures from children's interviews: the effects of time, sample length, and topic.

    PubMed

    Heilmann, John; DeBrock, Lindsay; Riley-Tillman, T Chris

    2013-08-01

    The purpose of this study was to examine the reliability of, and sources of variability in, language measures from interviews collected from young school-age children. Two 10-min interviews were collected from 20 at-risk kindergarten children by an examiner using a standardized set of questions. Test-retest reliability coefficients were calculated for 8 language measures. Generalizability theory (G-theory) analyses were completed to document the variability introduced into the measures from the child, session, sample length, and topic. Significant and strong reliability correlation coefficients were observed for most of the language sample measures. The G-theory analyses revealed that most of the variance in the language measures was attributed to the child. Session, sample length, and topic accounted for negligible amounts of variance in most of the language measures. Measures from interviews were reliable across sessions, and the sample length and topic did not have a substantial impact on the reliability of the language measures. Implications regarding the clinical feasibility of language sample analysis for assessment and progress monitoring are discussed.

  20. Dose coverage calculation using a statistical shape model—applied to cervical cancer radiotherapy

    NASA Astrophysics Data System (ADS)

    Tilly, David; van de Schoot, Agustinus J. A. J.; Grusell, Erik; Bel, Arjan; Ahnesjö, Anders

    2017-05-01

    A comprehensive methodology for treatment simulation and evaluation of dose coverage probabilities is presented in which a population-based statistical shape model (SSM) provides samples of fraction-specific patient geometry deformations. The learning data consist of vector fields from deformable image registration of repeated imaging, giving intra-patient deformations that are mapped to an average patient serving as a common frame of reference. The SSM is created by extracting the most dominating eigenmodes through principal component analysis of the deformations from all patients. The sampling of a deformation is thus reduced to sampling weights for enough of the most dominating eigenmodes that describe the deformations. For the cervical cancer patient datasets in this work, we found seven eigenmodes to be sufficient to capture 90% of the variance in the deformations, and only three eigenmodes for stability in the simulated dose coverage probabilities. The normality assumption of the eigenmode weights was tested and found relevant for the 20 most dominating eigenmodes except for the first. Individualization of the SSM is demonstrated to be improved using two deformation samples from a new patient. The probabilistic evaluation provided additional information about the trade-offs compared to conventional single-dataset treatment planning.
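    A compact sketch of the SSM construction and sampling steps described above, on random stand-in data; the eigenmodes come from a principal component analysis of the centered deformation vectors, and sampling a new deformation reduces to drawing normally distributed weights for the retained modes:

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical training set: 50 deformation fields, each flattened to a vector
# (in practice these come from deformable registration mapped to an average patient).
n_train, n_dof = 50, 3000
deformations = rng.standard_normal((n_train, n_dof)) * np.linspace(3.0, 0.1, n_dof)

mean_def = deformations.mean(axis=0)
centered = deformations - mean_def

# Principal component analysis via SVD; rows of Vt are the eigenmodes.
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)
n_modes = int(np.searchsorted(np.cumsum(explained), 0.90)) + 1   # modes capturing 90% of variance
print("eigenmodes retained:", n_modes)

# Sampling a fraction-specific deformation reduces to sampling normally distributed
# weights for the retained eigenmodes (the normality assumption discussed above).
mode_std = s[:n_modes] / np.sqrt(n_train - 1)
weights = rng.standard_normal(n_modes) * mode_std
sampled_deformation = mean_def + weights @ Vt[:n_modes]
print("sampled deformation vector:", sampled_deformation.shape)
```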

  1. Effects of caffeine supplementation in post-thaw human semen over different incubation periods.

    PubMed

    Pariz, J R; Hallak, J

    2016-11-01

    This study aimed to evaluate the effects of caffeine supplementation in post-cryopreservation human semen over different incubation periods. After collection by masturbation, 17 semen samples were analysed according to World Health Organization criteria, processed and cryopreserved with TEST-yolk buffer (1 : 1) in liquid nitrogen. After a thawing protocol, samples were incubated with 2 mm of caffeine for 0, 5, 15, 30 or 60 min, followed by analysis of motility and mitochondrial activity using 3,3'-diaminobenzidine (DAB). Mean variance analysis was performed, and P < 0.05 was the adopted significance threshold. Samples incubated for 15 min showed increased progressive motility compared to other periods of incubation, as well as a reduced percentage of immotile spermatozoa (P < 0.05). In samples incubated for 5 min, increased mitochondrial activity above 50% was observed (DABI and DABII). Although cryosurvival rates were low after the cryopreservation process, incubation with caffeine was associated with an increase in sperm motility, particularly 15-min incubation, suggesting that incubation with caffeine can be an important tool in patients with worsening seminal quality undergoing infertility treatment. © 2016 Blackwell Verlag GmbH.

  2. A genetic survey of Salvinia minima in the southern United States

    USGS Publications Warehouse

    Madeira, Paul T.; Jacono, C.C.; Tipping, Phil; Van, Thai K.; Center, Ted D.

    2003-01-01

    The genetic relationships among 68 samples of Salvinia minima (Salviniaceae) were investigated using RAPD analysis. Neighbor joining, principal components, and AMOVA analyses were used to detect differences among geographically referenced samples within and outside of Florida. Genetic distances (Nei and Li) ranged up to 0.48, although most were under 0.30, still relatively high levels for an introduced, clonally reproducing plant. Despite this diversity, AMOVA analysis yielded no indication that the Florida plants, as a group, were significantly different from the plants sampled elsewhere in its adventive, North American range. A single, genetically dissimilar population probably exists in the recent (1998) horticultural introduction to Mississippi. When the samples were grouped into 10 regional (but artificial) units and analyzed using AMOVA, the between-region variance was only 7.7%. Genetic similarity among these regions may indicate introduction and dispersal from common sources. The reduced aggressiveness of Florida populations (compared to other states) may be due to herbivory. The weevil Cyrtobagous salviniae, a selective feeder, is found in Florida but not other states. The genetic similarity also suggests that there are no obvious genetic obstacles to the establishment or efficacy of C. salviniae as a biological control agent on S. minima outside of Florida.

  3. Are Apparent Sex Differences in Mean IQ Scores Created in Part by Sample Restriction and Increased Male Variance?

    ERIC Educational Resources Information Center

    Dykiert, Dominika; Gale, Catharine R.; Deary, Ian J.

    2009-01-01

    This study investigated the possibility that apparent sex differences in IQ are at least partly created by the degree of sample restriction from the baseline population. We used a nationally representative sample, the 1970 British Cohort Study. Sample sizes varied from 6518 to 11,389 between data-collection sweeps. Principal components analysis of…

  4. Robust versus consistent variance estimators in marginal structural Cox models.

    PubMed

    Enders, Dirk; Engel, Susanne; Linder, Roland; Pigeot, Iris

    2018-06-11

    In survival analyses, inverse-probability-of-treatment (IPT) and inverse-probability-of-censoring (IPC) weighted estimators of parameters in marginal structural Cox models are often used to estimate treatment effects in the presence of time-dependent confounding and censoring. In most applications, a robust variance estimator of the IPT and IPC weighted estimator is calculated leading to conservative confidence intervals. This estimator assumes that the weights are known rather than estimated from the data. Although a consistent estimator of the asymptotic variance of the IPT and IPC weighted estimator is generally available, applications and thus information on the performance of the consistent estimator are lacking. Reasons might be a cumbersome implementation in statistical software, which is further complicated by missing details on the variance formula. In this paper, we therefore provide a detailed derivation of the variance of the asymptotic distribution of the IPT and IPC weighted estimator and explicitly state the necessary terms to calculate a consistent estimator of this variance. We compare the performance of the robust and consistent variance estimators in an application based on routine health care data and in a simulation study. The simulation reveals no substantial differences between the 2 estimators in medium and large data sets with no unmeasured confounding, but the consistent variance estimator performs poorly in small samples or under unmeasured confounding, if the number of confounders is large. We thus conclude that the robust estimator is more appropriate for all practical purposes. Copyright © 2018 John Wiley & Sons, Ltd.

  5. Population ecology of breeding Pacific common eiders on the Yukon-Kuskokwim Delta, Alaska

    USGS Publications Warehouse

    Wilson, Heather M.; Flint, Paul L.; Powell, Abby N.; Grand, J. Barry; Moral, Christine L.

    2012-01-01

    Populations of Pacific common eiders (Somateria mollissima v-nigrum) on the Yukon-Kuskokwim Delta (YKD) in western Alaska declined by 50–90% from 1957 to 1992 and then stabilized at reduced numbers from the early 1990s to the present. We investigated the underlying processes affecting their population dynamics by collection and analysis of demographic data from Pacific common eiders at 3 sites on the YKD (1991–2004) for 29 site-years. We examined variation in components of reproduction, tested hypotheses about the influence of specific ecological factors on life-history variables, and investigated their relative contributions to local population dynamics. Reproductive output was low and variable, both within and among individuals, whereas apparent survival of adult females was high and relatively invariant (0.89 ± 0.005). All reproductive parameters varied across study sites and years. Clutch initiation dates ranged from 4 May to 28 June, with peak (modal) initiation occurring on 26 May. Females at an island study site consistently initiated clutches 3–5 days earlier in each year than those on 2 mainland sites. Population variance in nest initiation date was negatively related to the peak, suggesting increased synchrony in years of delayed initiation. On average, total clutch size (laid) ranged from 4.8 to 6.6 eggs, and declined with date of nest initiation. After accounting for partial predation and non-viability of eggs, average clutch size at hatch ranged from 2.0 to 5.8 eggs. Within seasons, daily survival probability (DSP) of nests was lowest during egg-laying and late-initiation dates. Estimated nest survival varied considerably across sites and years (mean = 0.55, range: 0.06–0.92), but process variance in nest survival was relatively low (0.02, CI: 0.01–0.05), indicating that most variance was likely attributed to sampling error. We found evidence that observer effects may have reduced overall nest survival by 0.0–0.36 across site-years. Study sites with lower sample sizes and more frequent visitations appeared to experience greater observer effects. In general, Pacific common eiders exhibited high spatio-temporal variance in reproductive components. Larger clutch sizes and high nest survival at early initiation dates suggested directional selection favoring early nesting. However, stochastic environmental effects may have precluded response to this apparent selection pressure. Our results suggest that females breeding early in the season have the greatest reproductive value, as these birds lay the largest clutches and have the highest probability of successfully hatching. We developed stochastic, stage-based, matrix population models that incorporated observed spatio-temporal (process) variance and co-variation in vital rates, and projected the stable stage distribution () and population growth rate (λ). We used perturbation analyses to examine the relative influence of changes in vital rates on λ and variance decomposition to assess the proportion of variation in λ explained by process variation in each vital rate. In addition to matrix-based λ, we estimated λ using capture–recapture approaches, and log-linear regression. We found the stable age distribution for Pacific common eiders was weighted heavily towards experienced adult females (≥4 yr of age), and all calculations of λ indicated that the YKD population was stable to slightly increasing (λmatrix = 1.02, CI: 1.00–1.04); λreverse-capture–recapture = 1.05, CI: 0.99–1.11; λlog-linear = 1.04, CI: 0.98–1.10). 
Perturbation analyses suggested the population would respond most dramatically to changes in adult female survival (relative influence of adult survival was 1.5 times that of fecundity), whereas retrospective variation in λ was primarily explained by fecundity parameters (60%), particularly duckling survival (42%). Among components of fecundity, sensitivities were highest for duckling survival, suggesti

  6. [Effective size of subpopulations in early-run sockeye salmon Oncorhynchus nerka from Azabach'e Lake (Kamchatka): the effect of relative reproductive success of different-year cohorts].

    PubMed

    Efremov, V V

    2004-05-01

    The effect of variation in the reproductive success of cohorts born in different years (within a generation) on the effective subpopulation (breeding group) size was examined in early-run sockeye salmon Oncorhynchus nerka from Azabach'e Lake (Kamchatka). The annual variation in census size and overlapping of year classes reduced the ratio of the effective subpopulation size to the census size by 7 to 88% in different subpopulations. The total effect of the variance of reproductive success in individual years and the variance of reproductive success of different cohorts reduced the effective size/census size ratio by 68-96%.

  7. Hedged Monte-Carlo: low variance derivative pricing with objective probabilities

    NASA Astrophysics Data System (ADS)

    Potters, Marc; Bouchaud, Jean-Philippe; Sestovic, Dragan

    2001-01-01

    We propose a new ‘hedged’ Monte-Carlo (HMC) method to price financial derivatives, which allows the optimal hedge to be determined simultaneously. The inclusion of the optimal hedging strategy allows one to reduce the financial risk associated with option trading, and for the very same reason considerably reduces the variance of our HMC scheme compared to previous methods. The explicit accounting of the hedging cost naturally converts the objective probability into the ‘risk-neutral’ one. This allows a consistent use of purely historical time series to price derivatives and obtain their residual risk. The method can be used to price a large class of exotic options, including those with path-dependent and early exercise features.
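    A hedged illustration of why including the hedge shrinks the estimator variance. The sketch prices a European call on simulated paths (r = 0 for simplicity) and uses the known Black-Scholes delta as the hedging strategy; the published HMC instead determines the optimal hedge and price jointly by least squares on (possibly historical) paths:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
S0, K, sigma, T = 100.0, 100.0, 0.2, 1.0   # zero interest rate keeps the bookkeeping simple
n_paths, n_steps = 20_000, 50
dt = T / n_steps

# Simulate geometric Brownian motion paths under the pricing measure.
z = rng.standard_normal((n_paths, n_steps))
log_paths = np.cumsum(-0.5 * sigma**2 * dt + sigma * np.sqrt(dt) * z, axis=1)
S = S0 * np.exp(np.hstack([np.zeros((n_paths, 1)), log_paths]))

payoff = np.maximum(S[:, -1] - K, 0.0)

def bs_delta(s, tau):
    """Black-Scholes call delta with r = 0, used here as the hedging strategy."""
    tau = max(tau, 1e-12)
    d1 = (np.log(s / K) + 0.5 * sigma**2 * tau) / (sigma * np.sqrt(tau))
    return norm.cdf(d1)

# Accumulate the discrete delta-hedge P&L along each path; it has zero mean under
# the pricing measure, so subtracting it leaves the price unchanged.
hedge_pnl = np.zeros(n_paths)
for i in range(n_steps):
    hedge_pnl += bs_delta(S[:, i], T - i * dt) * (S[:, i + 1] - S[:, i])

plain = payoff               # plain Monte Carlo estimator of the option value
hedged = payoff - hedge_pnl  # hedged estimator: same mean, much smaller variance
print(f"plain MC : {plain.mean():.3f} ± {plain.std(ddof=1) / np.sqrt(n_paths):.3f}")
print(f"hedged MC: {hedged.mean():.3f} ± {hedged.std(ddof=1) / np.sqrt(n_paths):.3f}")
```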

  8. Shared genetic variance between obesity and white matter integrity in Mexican Americans.

    PubMed

    Spieker, Elena A; Kochunov, Peter; Rowland, Laura M; Sprooten, Emma; Winkler, Anderson M; Olvera, Rene L; Almasy, Laura; Duggirala, Ravi; Fox, Peter T; Blangero, John; Glahn, David C; Curran, Joanne E

    2015-01-01

    Obesity is a chronic metabolic disorder that may also lead to reduced white matter integrity, potentially due to shared genetic risk factors. Genetic correlation analyses were conducted in a large cohort of Mexican American families in San Antonio (N = 761, 58% females, ages 18-81 years; mean age 41.3 ± 14.5) from the Genetics of Brain Structure and Function Study. Shared genetic variance was calculated between measures of adiposity [body mass index (BMI; kg/m²) and waist circumference (WC; inches)] and whole-brain and regional measurements of cerebral white matter integrity (fractional anisotropy, FA). Whole-brain average and regional FA values for 10 major white matter tracts were calculated from high angular resolution diffusion tensor imaging data (DTI; 1.7 × 1.7 × 3 mm; 55 directions). Additive genetic factors explained intersubject variance in BMI (heritability, h² = 0.58), WC (h² = 0.57), and FA (h² = 0.49). FA shared significant portions of genetic variance with BMI in the genu (ρG = -0.25), body (ρG = -0.30), and splenium (ρG = -0.26) of the corpus callosum, internal capsule (ρG = -0.29), and thalamic radiation (ρG = -0.31) (all p's = 0.043). The strongest evidence of shared variance was between BMI/WC and FA in the superior fronto-occipital fasciculus (ρG = -0.39, p = 0.020; ρG = -0.39, p = 0.030), which highlights region-specific variation in the neural correlates of obesity. This may suggest that increased obesity and reduced white matter integrity share common genetic risk factors.

  9. The Relationship Between Domestic Partner Violence and Suicidal Behaviors in an Adult Community Sample: Examining Hope Agency and Pathways as Protective Factors.

    PubMed

    Chang, Edward C; Yu, Elizabeth A; Kahle, Emma R; Du, Yifeng; Chang, Olivia D; Jilani, Zunaira; Yu, Tina; Hirsch, Jameson K

    2017-10-01

    We examined an additive and interactive model involving domestic partner violence (DPV) and hope in accounting for suicidal behaviors in a sample of 98 community adults. Results showed that DPV accounted for a significant amount of variance in suicidal behaviors. Hope further augmented the prediction model and accounted for suicidal behaviors beyond DPV. Finally, we found that DPV significantly interacted with both dimensions of hope to further account for additional variance in suicidal behaviors above and beyond the independent effects of DPV and hope. Implications for the role of hope in the relationship between DPV and suicidal behaviors are discussed.
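
    As a generic sketch of the additive-plus-interaction (moderation) regression structure described above, the code below simulates data (hypothetical variables, not the study's sample), then adds DPV, hope, and their product term in sequence and reports the incremental variance explained at each step.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    n = 98   # same size as the community sample above, but simulated data

    # Hypothetical, simulated predictors: DPV exposure and a hope score.
    dpv = rng.normal(0, 1, n)
    hope = rng.normal(0, 1, n)
    suicidal = 0.4 * dpv - 0.3 * hope - 0.25 * dpv * hope + rng.normal(0, 1, n)

    def r2(y, X):
        return sm.OLS(y, sm.add_constant(X)).fit().rsquared

    step1 = r2(suicidal, np.column_stack([dpv]))
    step2 = r2(suicidal, np.column_stack([dpv, hope]))
    step3 = r2(suicidal, np.column_stack([dpv, hope, dpv * hope]))

    print(f"R^2, DPV only     : {step1:.2f}")
    print(f"R^2, + hope       : {step2:.2f}  (delta = {step2 - step1:.2f})")
    print(f"R^2, + DPV x hope : {step3:.2f}  (delta = {step3 - step2:.2f})")
    ```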

  10. Expected values and variances of Bragg peak intensities measured in a nanocrystalline powder diffraction experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Öztürk, Hande; Noyan, I. Cevdet

    A rigorous study of sampling and intensity statistics applicable to a powder diffraction experiment as a function of crystallite size is presented. Our analysis yields approximate equations for the expected value, variance and standard deviations for both the number of diffracting grains and the corresponding diffracted intensity for a given Bragg peak. The classical formalism published in 1948 by Alexander, Klug & Kummer [J. Appl. Phys. (1948), 19, 742–753] appears here as a special case, limited to large crystallite sizes. It is observed that both the Lorentz probability expression and the statistics equations used in the classical formalism are inapplicable for nanocrystalline powder samples.
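
    The paper's expressions are not reproduced in this record. As a rough illustration of the sampling statistics involved, the sketch below treats the number of grains diffracting into a given Bragg peak as a binomial draw from the irradiated crystallites, with a small orientation probability p standing in for the Lorentz-factor term; the binomial model and the value of p are assumptions for illustration, not the paper's derivation.

    ```python
    import numpy as np

    # Toy model: each of n irradiated crystallites contributes to a given Bragg
    # peak with small probability p, so N_diff ~ Binomial(n, p) with
    # E[N] = n*p and Var[N] = n*p*(1 - p).
    def grain_stats(n_grains, p):
        mean = n_grains * p
        var = n_grains * p * (1.0 - p)
        rel_spread = np.sqrt(var) / mean if mean > 0 else np.inf
        return mean, var, rel_spread

    p = 5e-4  # assumed probability that a single grain is oriented to diffract
    for n_grains in (10**4, 10**6, 10**8):   # coarse powder vs. many nanograins
        mean, var, rel = grain_stats(n_grains, p)
        print(f"n = {n_grains:>9d}:  E[N] = {mean:10.1f}  Var = {var:12.1f}  "
              f"sigma/mean = {rel:.3f}")
    ```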

  11. Expected values and variances of Bragg peak intensities measured in a nanocrystalline powder diffraction experiment

    DOE PAGES

    Öztürk, Hande; Noyan, I. Cevdet

    2017-08-24

    A rigorous study of sampling and intensity statistics applicable to a powder diffraction experiment as a function of crystallite size is presented. Our analysis yields approximate equations for the expected value, variance and standard deviations for both the number of diffracting grains and the corresponding diffracted intensity for a given Bragg peak. The classical formalism published in 1948 by Alexander, Klug & Kummer [J. Appl. Phys. (1948), 19, 742–753] appears here as a special case, limited to large crystallite sizes. It is observed that both the Lorentz probability expression and the statistics equations used in the classical formalism are inapplicable for nanocrystalline powder samples.

  12. Variance in total levels of phospholipase C zeta (PLC-ζ) in human sperm may limit the applicability of quantitative immunofluorescent analysis as a diagnostic indicator of oocyte activation capability.

    PubMed

    Kashir, Junaid; Jones, Celine; Mounce, Ginny; Ramadan, Walaa M; Lemmon, Bernadette; Heindryckx, Bjorn; de Sutter, Petra; Parrington, John; Turner, Karen; Child, Tim; McVeigh, Enda; Coward, Kevin

    2013-01-01

    Objective: To examine whether similar levels of phospholipase C zeta (PLC-ζ) protein are present in sperm from men whose ejaculates resulted in normal oocyte activation, and to examine whether a predominant pattern of PLC-ζ localization is linked to normal oocyte activation ability. Design: Laboratory study. Setting: University laboratory. Patients: Control subjects (men with proven oocyte activation capacity; n = 16) and men whose sperm resulted in recurrent intracytoplasmic sperm injection failure (oocyte activation deficient [OAD]; n = 5). Intervention: Quantitative immunofluorescent analysis of PLC-ζ protein in human sperm. Main outcome measures: Total levels of PLC-ζ fluorescence, proportions of sperm exhibiting PLC-ζ immunoreactivity, and proportions of PLC-ζ localization patterns in sperm from control and OAD men. Results: Sperm from control subjects presented a significantly higher proportion of sperm exhibiting PLC-ζ immunofluorescence compared with infertile men diagnosed with OAD (82.6% and 27.4%, respectively). Total levels of PLC-ζ in sperm from individual control and OAD patients exhibited significant variance, with sperm from 10 of the 16 control subjects (62.5%) exhibiting levels similar to OAD samples. Predominant PLC-ζ localization patterns varied between control and OAD samples with no predictable or consistent pattern. Conclusions: The results indicate that sperm from control men exhibited significant variance in total levels of PLC-ζ protein, as well as significant variance in the predominant localization pattern. Such variance may hinder the diagnostic application of quantitative PLC-ζ immunofluorescent analysis. Copyright © 2013 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.

  13. Variance associated with walking velocity during force platform gait analysis of a heterogeneous sample of clinically normal dogs.

    PubMed

    Piazza, Alexander M; Binversie, Emily E; Baker, Lauren A; Nemke, Brett; Sample, Susannah J; Muir, Peter

    2017-04-01

    OBJECTIVE To determine whether walking at specific ranges of absolute and relative (V*) velocity would aid efficient capture of gait trial data with low ground reaction force (GRF) variance in a heterogeneous sample of dogs. ANIMALS 17 clinically normal dogs of various breeds, ages, and sexes. PROCEDURES Each dog was walked across a force platform at its preferred velocity, with controlled acceleration within 0.5 m/s². Ranges in V* were created for height at the highest point of the shoulders (withers; WHV*). Variance effects from 8 walking absolute velocity ranges and associated WHV* ranges were examined by means of repeated-measures ANCOVA. RESULTS The individual dog effect provided the greatest contribution to variance. Narrow velocity ranges typically resulted in capture of a smaller percentage of valid trials and were not consistently associated with lower variance. The WHV* range of 0.33 to 0.46 allowed capture of valid trials efficiently, with no significant effects on peak vertical force and vertical impulse. CONCLUSIONS AND CLINICAL RELEVANCE Dogs with severe lameness may be unable to trot or may have a decline in mobility with gait trial repetition. Gait analysis involving evaluation of individual dogs at their preferred absolute velocity, such that dogs are evaluated at a similar V*, may facilitate efficient capture of valid trials without significant effects on GRF. Use of individual velocity ranges derived from a WHV* range of 0.33 to 0.46 can account for heterogeneity and appears suitable for use in clinical trials involving dogs at a walking gait.
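
    The record does not define WHV* explicitly. Assuming it is the usual dimensionless (Froude-type) relative velocity v / sqrt(g * h), with h the withers height, the sketch below screens walking trials against the 0.33 to 0.46 band reported above; the formula and example values are assumptions for illustration.

    ```python
    import math

    G = 9.81  # m/s^2

    def relative_velocity(v_mps, withers_height_m):
        """Dimensionless (Froude-type) walking velocity: v / sqrt(g * h).

        Assumed definition of the WHV* index discussed above; the paper may
        normalize differently.
        """
        return v_mps / math.sqrt(G * withers_height_m)

    def trial_is_valid(v_mps, withers_height_m, lo=0.33, hi=0.46):
        return lo <= relative_velocity(v_mps, withers_height_m) <= hi

    # Example: two dogs of different size walked at different absolute velocities.
    trials = [
        ("small terrier",   0.40, 0.85),   # (withers height m, velocity m/s)
        ("small terrier",   0.40, 1.05),
        ("large retriever", 0.62, 1.05),
        ("large retriever", 0.62, 1.40),
    ]
    for dog, h, v in trials:
        print(f"{dog:16s} v = {v:.2f} m/s  V* = {relative_velocity(v, h):.2f}  "
              f"valid = {trial_is_valid(v, h)}")
    ```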

  14. Attenuating effects of ecosystem management on coral reefs.

    PubMed

    Steneck, Robert S; Mumby, Peter J; MacDonald, Chancey; Rasher, Douglas B; Stoyle, George

    2018-05-01

    Managing diverse ecosystems is challenging because structuring drivers are often processes having diffuse impacts that attenuate from the people who were "managed" to the expected ecosystem-wide outcome. Coral reef fishes targeted for management only indirectly link to the ecosystem's foundation (reef corals). Three successively weakening interaction tiers separate management of fishing from coral abundance. We studied 12 islands along the 700-km eastern Caribbean archipelago, comparing fished and unfished coral reefs. Fishing reduced biomass of carnivorous (snappers and groupers) and herbivorous (parrotfish and surgeonfish) fishes. We document attenuating but important effects of managing fishing, which explained 37% of variance in parrotfish abundance, 20% of variance in harmful algal abundance, and 17% of variance in juvenile coral abundance. The explained variance increased when we quantified herbivory using area-specific bite rates. Local fisheries management resulted in a 62% increase in the archipelago's juvenile coral density, improving the ecosystem's recovery potential from major disturbances.

  15. Attenuating effects of ecosystem management on coral reefs

    PubMed Central

    Rasher, Douglas B.; Stoyle, George

    2018-01-01

    Managing diverse ecosystems is challenging because structuring drivers are often processes having diffuse impacts that attenuate from the people who were “managed” to the expected ecosystem-wide outcome. Coral reef fishes targeted for management only indirectly link to the ecosystem’s foundation (reef corals). Three successively weakening interaction tiers separate management of fishing from coral abundance. We studied 12 islands along the 700-km eastern Caribbean archipelago, comparing fished and unfished coral reefs. Fishing reduced biomass of carnivorous (snappers and groupers) and herbivorous (parrotfish and surgeonfish) fishes. We document attenuating but important effects of managing fishing, which explained 37% of variance in parrotfish abundance, 20% of variance in harmful algal abundance, and 17% of variance in juvenile coral abundance. The explained variance increased when we quantified herbivory using area-specific bite rates. Local fisheries management resulted in a 62% increase in the archipelago’s juvenile coral density, improving the ecosystem’s recovery potential from major disturbances. PMID:29750192

  16. Accounting for sampling error when inferring population synchrony from time-series data: a Bayesian state-space modelling approach with applications.

    PubMed

    Santin-Janin, Hugues; Hugueny, Bernard; Aubry, Philippe; Fouchet, David; Gimenez, Olivier; Pontier, Dominique

    2014-01-01

    Data collected to inform time variations in natural population size are tainted by sampling error. Ignoring sampling error in population dynamics models induces bias in parameter estimators, e.g., density-dependence. In particular, when sampling errors are independent among populations, the classical estimator of the synchrony strength (zero-lag correlation) is biased downward. However, this bias is rarely taken into account in synchrony studies, although it may lead to overemphasizing the role of intrinsic factors (e.g., dispersal) with respect to extrinsic factors (the Moran effect) in generating population synchrony, as well as to underestimating the extinction risk of a metapopulation. The aim of this paper was first to illustrate the extent of the bias that can be encountered in empirical studies when sampling error is neglected. Second, we presented a state-space modelling approach that explicitly accounts for sampling error when quantifying population synchrony. Third, we exemplified our approach with datasets for which sampling variance (i) has been previously estimated, and (ii) has to be jointly estimated with population synchrony. Finally, we compared our results to those of a standard approach neglecting sampling variance. We showed that ignoring sampling variance can mask a synchrony pattern whatever its true value and that the common practice of averaging few replicates of population size estimates performed poorly at reducing the bias of the classical estimator of the synchrony strength. The state-space model used in this study provides a flexible way of accurately quantifying the strength of synchrony patterns from most population size data encountered in field studies, including over-dispersed count data. We provided a user-friendly R-program and a tutorial example to encourage further studies aiming at quantifying the strength of population synchrony to account for uncertainty in population size estimates.
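
    A minimal simulation (assumed dynamics, not the paper's datasets) of the central point above: adding independent sampling error to two perfectly synchronous population series pulls the naive zero-lag correlation well below its true value.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n_years = 40

    # Two populations driven by the same environmental signal (true synchrony = 1
    # on the log scale); dynamics and noise levels are assumed for illustration.
    shared_env = rng.normal(0.0, 0.4, n_years)
    log_n1 = 5.0 + np.cumsum(shared_env)
    log_n2 = 4.5 + np.cumsum(shared_env)

    true_corr = np.corrcoef(log_n1, log_n2)[0, 1]

    # Observed abundances include independent sampling (count) error.
    naive_corrs = []
    for _ in range(1000):
        obs1 = log_n1 + rng.normal(0.0, 0.5, n_years)
        obs2 = log_n2 + rng.normal(0.0, 0.5, n_years)
        naive_corrs.append(np.corrcoef(obs1, obs2)[0, 1])

    print(f"true zero-lag correlation      : {true_corr:.2f}")
    print(f"naive estimate (mean over reps): {np.mean(naive_corrs):.2f}")
    ```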

  17. Accounting for Sampling Error When Inferring Population Synchrony from Time-Series Data: A Bayesian State-Space Modelling Approach with Applications

    PubMed Central

    Santin-Janin, Hugues; Hugueny, Bernard; Aubry, Philippe; Fouchet, David; Gimenez, Olivier; Pontier, Dominique

    2014-01-01

    Background Data collected to inform time variations in natural population size are tainted by sampling error. Ignoring sampling error in population dynamics models induces bias in parameter estimators, e.g., density-dependence. In particular, when sampling errors are independent among populations, the classical estimator of the synchrony strength (zero-lag correlation) is biased downward. However, this bias is rarely taken into account in synchrony studies, although it may lead to overemphasizing the role of intrinsic factors (e.g., dispersal) with respect to extrinsic factors (the Moran effect) in generating population synchrony, as well as to underestimating the extinction risk of a metapopulation. Methodology/Principal findings The aim of this paper was first to illustrate the extent of the bias that can be encountered in empirical studies when sampling error is neglected. Second, we presented a state-space modelling approach that explicitly accounts for sampling error when quantifying population synchrony. Third, we exemplified our approach with datasets for which sampling variance (i) has been previously estimated, and (ii) has to be jointly estimated with population synchrony. Finally, we compared our results to those of a standard approach neglecting sampling variance. We showed that ignoring sampling variance can mask a synchrony pattern whatever its true value and that the common practice of averaging few replicates of population size estimates performed poorly at reducing the bias of the classical estimator of the synchrony strength. Conclusion/Significance The state-space model used in this study provides a flexible way of accurately quantifying the strength of synchrony patterns from most population size data encountered in field studies, including over-dispersed count data. We provided a user-friendly R-program and a tutorial example to encourage further studies aiming at quantifying the strength of population synchrony to account for uncertainty in population size estimates. PMID:24489839

  18. Genetic and environmental variance in content dimensions of the MMPI.

    PubMed

    Rose, R J

    1988-08-01

    To evaluate genetic and environmental variance in the Minnesota Multiphasic Personality Inventory (MMPI), I studied nine factor scales identified in the first item factor analysis of normal adult MMPIs in a sample of 820 adolescent and young adult co-twins. Conventional twin comparisons documented heritable variance in six of the nine MMPI factors (Neuroticism, Psychoticism, Extraversion, Somatic Complaints, Inadequacy, and Cynicism), whereas significant influence from shared environmental experience was found for four factors (Masculinity versus Femininity, Extraversion, Religious Orthodoxy, and Intellectual Interests). Genetic variance in the nine factors was more evident in results from twin sisters than in those from twin brothers, and a developmental-genetic analysis, using hierarchical multiple regressions of double-entry matrices of the twins' raw data, revealed that in four MMPI factor scales, genetic effects were significantly modulated by age or gender or their interaction during the developmental period from early adolescence to early adulthood.
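
    The abstract does not state which twin statistics were used. As a generic illustration of how conventional twin comparisons apportion variance, the sketch below applies Falconer's approximation (h2 = 2*(r_MZ - r_DZ), c2 = 2*r_DZ - r_MZ, e2 = 1 - r_MZ) to hypothetical twin correlations; the inputs are invented, not values from the MMPI study.

    ```python
    def falconer_decomposition(r_mz, r_dz):
        """Classical Falconer approximation for twin data.

        h2: additive genetic variance, c2: shared environment, e2: unique
        environment. Inputs are hypothetical, not from the MMPI study above.
        """
        h2 = 2.0 * (r_mz - r_dz)
        c2 = 2.0 * r_dz - r_mz
        e2 = 1.0 - r_mz
        return h2, c2, e2

    # Illustrative monozygotic and dizygotic correlations for two scales.
    for scale, r_mz, r_dz in [("Neuroticism", 0.48, 0.22), ("Orthodoxy", 0.62, 0.45)]:
        h2, c2, e2 = falconer_decomposition(r_mz, r_dz)
        print(f"{scale:12s} h2 = {h2:.2f}  c2 = {c2:.2f}  e2 = {e2:.2f}")
    ```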

  19. Estimating Required Contingency Funds for Construction Projects using Multiple Linear Regression

    DTIC Science & Technology

    2006-03-01

    Breusch-Pagan test, in which the null hypothesis states that the residuals have constant variance. The alternate hypothesis is that the residuals do not... variance, the Breusch-Pagan test provides statistical evidence that the assumption is justified. For the proposed model, the p-value is 0.173... entire test sample.
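
    As a self-contained illustration of the Breusch-Pagan check described in this excerpt, the sketch below fits an ordinary least-squares model to synthetic data (a stand-in, not the thesis's contingency-fund dataset) and tests the residuals for constant variance with statsmodels.

    ```python
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.diagnostic import het_breuschpagan

    rng = np.random.default_rng(0)
    n = 200

    # Synthetic stand-in for a cost-estimation dataset (not the thesis's data).
    x1 = rng.uniform(1, 10, n)          # e.g. project size
    x2 = rng.uniform(0, 5, n)           # e.g. design completeness score
    y = 3.0 + 1.5 * x1 - 0.8 * x2 + rng.normal(0, 1.0, n)   # homoskedastic errors

    X = sm.add_constant(np.column_stack([x1, x2]))
    fit = sm.OLS(y, X).fit()

    # H0: residuals have constant variance (homoskedasticity).
    lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(fit.resid, X)
    print(f"Breusch-Pagan LM p-value: {lm_pvalue:.3f}")
    # A p-value above the chosen alpha (e.g. 0.05) gives no evidence against
    # the constant-variance assumption, as with the excerpt's p = 0.173.
    ```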

  20. Estimation of a Constant False Alarm Rate Processing Loss for a High-Resolution Maritime Radar System

    DTIC Science & Technology

    2008-08-01

    a sample with clutter of mean level y0 and noise of variance σ², with a threshold t_CA = β_CA·z_CA. Using the results presented in [15, 16, 23], it can... level y0 and noise of variance σ², with a threshold t_CA = β_CA·z_CA. Using (3.107) and (3.98), the expression for the expected Pd of a Swerling 2 target can
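
    The threshold expression in this excerpt is reconstructed above as t_CA = β_CA·z_CA. The sketch below shows the standard cell-averaging CFAR relationship under exponential (square-law) clutter-plus-noise, where the scale factor β = Pfa^(-1/N) - 1 applied to the sum of N reference cells yields a desired false-alarm probability; this is the textbook CA-CFAR result and an assumption here, not necessarily the report's exact formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Cell-averaging CFAR under exponential (square-law) noise-plus-clutter:
    # threshold t = beta * z, where z is the sum of N reference cells and
    # beta = Pfa**(-1/N) - 1 targets the desired false-alarm probability.
    N_ref = 16
    pfa_target = 1e-3
    beta = pfa_target ** (-1.0 / N_ref) - 1.0

    n_trials = 200_000
    mean_level = 2.0                                      # unknown clutter+noise power
    cut = rng.exponential(mean_level, n_trials)           # cell under test, noise only
    ref = rng.exponential(mean_level, (n_trials, N_ref))  # reference window
    z = ref.sum(axis=1)
    false_alarms = np.mean(cut > beta * z)

    print(f"beta = {beta:.4f}")
    print(f"target Pfa = {pfa_target:.1e}, empirical Pfa = {false_alarms:.1e}")
    ```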
