Sample records for variance components approach

  1. Heritability construction for provenance and family selection

    Treesearch

    Fan H. Kung; Calvin F. Bey

    1977-01-01

    Concepts and procedures for heritability estimations through the variance components and the unified F-statistics approach are described. The variance components approach is illustrated by five possible family selection schemes within a diallel mating test, while the unified F-statistics approach is demonstrated by a geographic variation study. In a balanced design, the...
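
    To make the variance components route to heritability concrete, here is a minimal sketch under a balanced half-sib family assumption; it is not the paper's diallel selection schemes, and the data and the half-sib coefficient of 4 are illustrative assumptions.

```python
# Minimal sketch: heritability from variance components in a balanced
# one-way family design (half-sib assumption). Illustrative only; the
# paper's diallel selection schemes are more elaborate.
import numpy as np

rng = np.random.default_rng(1)
n_fam, n_off = 40, 10                      # families, offspring per family
fam_effect = rng.normal(0, 1.0, n_fam)     # true between-family sd = 1
y = fam_effect[:, None] + rng.normal(0, 2.0, (n_fam, n_off))

fam_means = y.mean(axis=1)
ms_between = n_off * fam_means.var(ddof=1)  # E[MS_B] = sigma2_w + n*sigma2_f
ms_within = ((y - fam_means[:, None]) ** 2).sum() / (n_fam * (n_off - 1))

sigma2_f = (ms_between - ms_within) / n_off   # between-family component
sigma2_w = ms_within                          # within-family component
h2 = 4 * sigma2_f / (sigma2_f + sigma2_w)     # half-sib: sigma2_f ~ (1/4) V_A
print(f"sigma2_f={sigma2_f:.2f}, sigma2_w={sigma2_w:.2f}, h2={h2:.2f}")
```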

  2. Mixed model approaches for diallel analysis based on a bio-model.

    PubMed

    Zhu, J; Weir, B S

    1996-12-01

    A MINQUE(1) procedure, which is the minimum norm quadratic unbiased estimation (MINQUE) method with 1 for all the prior values, is suggested for estimating variance and covariance components in a bio-model for diallel crosses. Unbiasedness and efficiency of estimation were compared for MINQUE(1), restricted maximum likelihood (REML), and MINQUE(θ), which uses parameter values for the priors. MINQUE(1) is almost as efficient as MINQUE(θ) for unbiased estimation of genetic variance and covariance components. The bio-model is efficient and robust for estimating variance and covariance components for maternal and paternal effects as well as for nuclear effects. A procedure of adjusted unbiased prediction (AUP) is proposed for predicting random genetic effects in the bio-model. The jackknife procedure is suggested for estimation of sampling variances of estimated variance and covariance components and of predicted genetic effects. Worked examples are given for estimation of variance and covariance components and for prediction of genetic merits.
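
    The suggested jackknife step can be sketched generically: delete one family at a time, re-estimate the variance component, and form the jackknife variance. This illustrates only the resampling idea, not Zhu and Weir's bio-model implementation; the method-of-moments estimator and the data are assumptions.

```python
# Sketch: delete-one-family jackknife for the sampling variance of a
# between-family variance component (generic; not the bio-model itself).
import numpy as np

def sigma2_between(y):
    """Method-of-moments between-group component for a balanced layout."""
    n = y.shape[1]
    ms_b = n * y.mean(axis=1).var(ddof=1)
    ms_w = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (y.shape[0] * (n - 1))
    return (ms_b - ms_w) / n

rng = np.random.default_rng(2)
y = rng.normal(0, 1.5, 30)[:, None] + rng.normal(0, 2.0, (30, 8))

theta_full = sigma2_between(y)
# Leave out one family at a time and recompute the estimate.
loo = np.array([sigma2_between(np.delete(y, i, axis=0)) for i in range(y.shape[0])])
k = y.shape[0]
jack_var = (k - 1) / k * ((loo - loo.mean()) ** 2).sum()
print(f"estimate={theta_full:.3f}, jackknife SE={np.sqrt(jack_var):.3f}")
```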

  3. Proportion of general factor variance in a hierarchical multiple-component measuring instrument: a note on a confidence interval estimation procedure.

    PubMed

    Raykov, Tenko; Zinbarg, Richard E

    2011-05-01

    A confidence interval construction procedure for the proportion of explained variance by a hierarchical, general factor in a multi-component measuring instrument is outlined. The method provides point and interval estimates for the proportion of total scale score variance that is accounted for by the general factor, which could be viewed as common to all components. The approach may also be used for testing composite (one-tailed) or simple hypotheses about this proportion, and is illustrated with a pair of examples. ©2010 The British Psychological Society.
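
    As a crude stand-in for the authors' latent-variable interval procedure, the sketch below bootstraps the share of total scale variance carried by the first principal component; the simulated items, loadings, and the use of a PC instead of a fitted general factor are all illustrative assumptions.

```python
# Crude bootstrap analogue of an interval estimate for a proportion of
# explained variance (here: first principal component's share of total
# scale variance). Raykov and Zinbarg's procedure is model-based; this
# sketch only illustrates interval estimation for a variance proportion.
import numpy as np

rng = np.random.default_rng(3)
n, p = 300, 6
g = rng.normal(size=(n, 1))                        # common ("general") factor
items = g @ np.full((1, p), 0.7) + rng.normal(0, 0.7, (n, p))

def prop_first_pc(x):
    ev = np.linalg.eigvalsh(np.cov(x, rowvar=False))
    return ev[-1] / ev.sum()                       # largest eigenvalue / total

boot = np.array([prop_first_pc(items[rng.integers(0, n, n)]) for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"point={prop_first_pc(items):.3f}, 95% CI=({lo:.3f}, {hi:.3f})")
```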

  4. Approaches to Capture Variance Differences in Rest fMRI Networks in the Spatial Geometric Features: Application to Schizophrenia.

    PubMed

    Gopal, Shruti; Miller, Robyn L; Baum, Stefi A; Calhoun, Vince D

    2016-01-01

    Identification of functionally connected regions while at rest has been at the forefront of research focusing on understanding interactions between different brain regions. Studies have utilized a variety of approaches, including seed-based as well as data-driven approaches, to identify such networks. Most such techniques involve differentiating groups based on group mean measures. There has been little work focused on differences in spatial characteristics of resting fMRI data. We present a method to identify between-group differences in the variability of the cluster characteristics of network regions within components estimated via independent vector analysis (IVA). IVA is a blind source separation approach shown to perform well in capturing individual subject variability within a group model. We evaluate performance of the approach using simulations and then apply it to a relatively large schizophrenia data set (82 schizophrenia patients and 89 healthy controls). We postulate that group differences in the intra-network distributional characteristics of resting state network voxel intensities might indirectly capture important distinctions between the brain function of healthy and clinical populations. Results demonstrate that specific areas of the brain, the superior and middle temporal gyri, which are involved in language and recognition of emotions, show greater component-level variance in amplitude weights for schizophrenia patients than healthy controls. Statistically significant correlation between component-level spatial variance and component volume was observed in 19 of the 27 non-artifactual components, implying an evident relationship between the two parameters. Additionally, a greater spread in the distance of the cluster peak of a component from the centroid was observed in schizophrenia patients compared to healthy controls for seven components. These results indicate that there is hidden potential in exploring variance and possibly higher-order measures in resting state networks to better understand diseases such as schizophrenia. It furthers comprehension of how spatial characteristics can highlight previously unexplored differences between populations such as schizophrenia patients and healthy controls.

  5. Principal variance component analysis of crop composition data: a case study on herbicide-tolerant cotton.

    PubMed

    Harrison, Jay M; Howard, Delia; Malven, Marianne; Halls, Steven C; Culler, Angela H; Harrigan, George G; Wolfinger, Russell D

    2013-07-03

    Compositional studies on genetically modified (GM) and non-GM crops have consistently demonstrated that their respective levels of key nutrients and antinutrients are remarkably similar and that other factors such as germplasm and environment contribute more to compositional variability than transgenic breeding. We propose that graphical and statistical approaches that can provide meaningful evaluations of the relative impact of different factors to compositional variability may offer advantages over traditional frequentist testing. A case study on the novel application of principal variance component analysis (PVCA) in a compositional assessment of herbicide-tolerant GM cotton is presented. Results of the traditional analysis of variance approach confirmed the compositional equivalence of the GM and non-GM cotton. The multivariate approach of PVCA provided further information on the impact of location and germplasm on compositional variability relative to GM.

  6. An Analysis of Variance Approach for the Estimation of Response Time Distributions in Tests

    ERIC Educational Resources Information Center

    Attali, Yigal

    2010-01-01

    Generalizability theory and analysis of variance methods are employed, together with the concept of objective time pressure, to estimate response time distributions and the degree of time pressure in timed tests. By estimating response time variance components due to person, item, and their interaction, and fixed effects due to item types and…

  7. Principal Component Analysis for Normal-Distribution-Valued Symbolic Data.

    PubMed

    Wang, Huiwen; Chen, Meiling; Shi, Xiaojun; Li, Nan

    2016-02-01

    This paper puts forward a new approach to principal component analysis (PCA) for normal-distribution-valued symbolic data, which has a vast potential of applications in the economic and management fields. We derive a full set of numerical characteristics and the variance-covariance structure for such data, which forms the foundation for our analytical PCA approach. Unlike the prevailing representative-type approaches in the literature, which use only centers, vertices, etc., our approach is able to use all of the variance information in the original data. The paper also provides an accurate approach to constructing the observations in a PC space based on the linear additivity property of the normal distribution. The effectiveness of the proposed method is illustrated by simulated numerical experiments. Finally, our method is applied to explain the puzzle of the risk-return tradeoff in China's stock market.

  8. Robust LOD scores for variance component-based linkage analysis.

    PubMed

    Blangero, J; Williams, J T; Almasy, L

    2000-01-01

    The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.

  9. A General Approach to Defining Latent Growth Components

    ERIC Educational Resources Information Center

    Mayer, Axel; Steyer, Rolf; Mueller, Horst

    2012-01-01

    We present a 3-step approach to defining latent growth components. In the first step, a measurement model with at least 2 indicators for each time point is formulated to identify measurement error variances and obtain latent variables that are purged from measurement error. In the second step, we use contrast matrices to define the latent growth…

  10. How Reliable Are Students' Evaluations of Teaching Quality? A Variance Components Approach

    ERIC Educational Resources Information Center

    Feistauer, Daniela; Richter, Tobias

    2017-01-01

    The inter-rater reliability of university students' evaluations of teaching quality was examined with cross-classified multilevel models. Students (N = 480) evaluated lectures and seminars over three years with a standardised evaluation questionnaire, yielding 4224 data points. The total variance of these student evaluations was separated into the…

  11. Heritability of physical activity traits in Brazilian families: the Baependi Heart Study

    PubMed Central

    2011-01-01

    Background: It is commonly recognized that physical activity has familial aggregation; however, the genetic influences on physical activity phenotypes are not well characterized. This study aimed to (1) estimate the heritability of physical activity traits in Brazilian families; and (2) investigate whether genetic and environmental variance components contribute differently to the expression of these phenotypes in males and females. Methods: The sample that constitutes the Baependi Heart Study is comprised of 1,693 individuals in 95 Brazilian families. The phenotypes were self-reported in a questionnaire based on the WHO-MONICA instrument. Variance component approaches, implemented in the SOLAR (Sequential Oligogenic Linkage Analysis Routines) computer package, were applied to estimate the heritability and to evaluate the heterogeneity of variance components by gender on the studied phenotypes. Results: The heritability estimates were intermediate (35%) for weekly physical activity among non-sedentary subjects (weekly PA_NS), and low (9-14%) for sedentarism, weekly physical activity (weekly PA), and level of daily physical activity (daily PA). Significant evidence for heterogeneity in variance components by gender was observed for the sedentarism and weekly PA phenotypes. No significant gender differences in genetic or environmental variance components were observed for the weekly PA_NS trait. The daily PA phenotype was predominantly influenced by environmental factors, with larger effects in males than in females. Conclusions: Heritability estimates for physical activity phenotypes in this sample of the Brazilian population were significant in both males and females, and varied from low to intermediate magnitude. Significant evidence for heterogeneity in variance components by gender was observed. These data add to the knowledge of the physical activity traits in the Brazilian study population, and are concordant with the notion of significant biological determination in active behavior. PMID:22126647

  12. Heritability of physical activity traits in Brazilian families: the Baependi Heart Study.

    PubMed

    Horimoto, Andréa R V R; Giolo, Suely R; Oliveira, Camila M; Alvim, Rafael O; Soler, Júlia P; de Andrade, Mariza; Krieger, José E; Pereira, Alexandre C

    2011-11-29

    It is commonly recognized that physical activity has familial aggregation; however, the genetic influences on physical activity phenotypes are not well characterized. This study aimed to (1) estimate the heritability of physical activity traits in Brazilian families; and (2) investigate whether genetic and environmental variance components contribute differently to the expression of these phenotypes in males and females. The sample that constitutes the Baependi Heart Study is comprised of 1,693 individuals in 95 Brazilian families. The phenotypes were self-reported in a questionnaire based on the WHO-MONICA instrument. Variance component approaches, implemented in the SOLAR (Sequential Oligogenic Linkage Analysis Routines) computer package, were applied to estimate the heritability and to evaluate the heterogeneity of variance components by gender on the studied phenotypes. The heritability estimates were intermediate (35%) for weekly physical activity among non-sedentary subjects (weekly PA_NS), and low (9-14%) for sedentarism, weekly physical activity (weekly PA), and level of daily physical activity (daily PA). Significant evidence for heterogeneity in variance components by gender was observed for the sedentarism and weekly PA phenotypes. No significant gender differences in genetic or environmental variance components were observed for the weekly PA_NS trait. The daily PA phenotype was predominantly influenced by environmental factors, with larger effects in males than in females. Heritability estimates for physical activity phenotypes in this sample of the Brazilian population were significant in both males and females, and varied from low to intermediate magnitude. Significant evidence for heterogeneity in variance components by gender was observed. These data add to the knowledge of the physical activity traits in the Brazilian study population, and are concordant with the notion of significant biological determination in active behavior.

  13. A Unified Approach to Functional Principal Component Analysis and Functional Multiple-Set Canonical Correlation.

    PubMed

    Choi, Ji Yeh; Hwang, Heungsun; Yamamoto, Michio; Jung, Kwanghee; Woodward, Todd S

    2017-06-01

    Functional principal component analysis (FPCA) and functional multiple-set canonical correlation analysis (FMCCA) are data reduction techniques for functional data that are collected in the form of smooth curves or functions over a continuum such as time or space. In FPCA, low-dimensional components are extracted from a single functional dataset such that they explain the most variance of the dataset, whereas in FMCCA, low-dimensional components are obtained from each of multiple functional datasets in such a way that the associations among the components are maximized across the different sets. In this paper, we propose a unified approach to FPCA and FMCCA. The proposed approach subsumes both techniques as special cases. Furthermore, it permits a compromise between the techniques, such that components are obtained from each set of functional data to maximize their associations across different datasets, while accounting for the variance of the data well. We propose a single optimization criterion for the proposed approach, and develop an alternating regularized least squares algorithm to minimize the criterion in combination with basis function approximations to functions. We conduct a simulation study to investigate the performance of the proposed approach based on synthetic data. We also apply the approach for the analysis of multiple-subject functional magnetic resonance imaging data to obtain low-dimensional components of blood-oxygen level-dependent signal changes of the brain over time, which are highly correlated across the subjects as well as representative of the data. The extracted components are used to identify networks of neural activity that are commonly activated across the subjects while carrying out a working memory task.

  14. Principal components analysis in clinical studies.

    PubMed

    Zhang, Zhongheng; Castelló, Adela

    2017-09-01

    In multivariate analysis, independent variables are usually correlated to each other, which can introduce multicollinearity in the regression models. One approach to solve this problem is to apply principal components analysis (PCA) over these variables. This method uses orthogonal transformation to represent sets of potentially correlated variables with principal components (PC) that are linearly uncorrelated. PCs are ordered so that the first PC has the largest possible variance and only some components are selected to represent the correlated variables. As a result, the dimension of the variable space is reduced. This tutorial illustrates how to perform PCA in the R environment; the example is a simulated dataset in which two PCs are responsible for the majority of the variance in the data. Furthermore, the visualization of PCA is highlighted.
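
    The tutorial itself uses R; the sketch below is an illustrative Python analogue of the same workflow, with simulated, nearly collinear predictors replaced by their two leading principal components before regression.

```python
# Sketch of the tutorial's idea in Python rather than R: replace
# correlated predictors with their leading principal components before
# regression, removing multicollinearity.
import numpy as np

rng = np.random.default_rng(4)
n = 200
z = rng.normal(size=(n, 2))
X = np.column_stack([z[:, 0], z[:, 0] + 0.05 * rng.normal(size=n),  # nearly collinear
                     z[:, 1], z[:, 1] + 0.05 * rng.normal(size=n)])
y = z[:, 0] - 2 * z[:, 1] + rng.normal(0, 0.5, n)

Xc = X - X.mean(axis=0)
# PCA via SVD of the centered design matrix.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / (s**2).sum()
print("variance shares:", np.round(explained, 3))   # two PCs dominate

k = 2                                               # keep the two dominant PCs
scores = Xc @ Vt[:k].T                              # uncorrelated regressors
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), scores]), y, rcond=None)
print("PC-regression coefficients:", np.round(beta, 3))
```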

  15. Detection of gene-environment interaction in pedigree data using genome-wide genotypes.

    PubMed

    Nivard, Michel G; Middeldorp, Christel M; Lubke, Gitta; Hottenga, Jouke-Jan; Abdellaoui, Abdel; Boomsma, Dorret I; Dolan, Conor V

    2016-12-01

    Heritability may be estimated using phenotypic data collected in relatives or in distantly related individuals using genome-wide single nucleotide polymorphism (SNP) data. We combined these approaches by re-parameterizing the model proposed by Zaitlen et al and extended this model to include moderation of (total and SNP-based) genetic and environmental variance components by a measured moderator. By means of data simulation, we demonstrated that the type 1 error rates of the proposed test are correct and parameter estimates are accurate. As an application, we considered the moderation by age or year of birth of variance components associated with body mass index (BMI), height, attention problems (AP), and symptoms of anxiety and depression. The genetic variance of BMI was found to increase with age, but the environmental variance displayed a greater increase with age, resulting in a proportional decrease of the heritability of BMI. Environmental variance of height increased with year of birth. The environmental variance of AP increased with age. These results illustrate the assessment of moderation of environmental and genetic effects, when estimating heritability from combined SNP and family data. The assessment of moderation of genetic and environmental variance will enhance our understanding of the genetic architecture of complex traits.

  16. Using variance structure to quantify responses to perturbation in fish catches

    USGS Publications Warehouse

    Vidal, Tiffany E.; Irwin, Brian J.; Wagner, Tyler; Rudstam, Lars G.; Jackson, James R.; Bence, James R.

    2017-01-01

    We present a case study evaluation of gill-net catches of Walleye Sander vitreus to assess potential effects of large-scale changes in Oneida Lake, New York, including the disruption of trophic interactions by double-crested cormorants Phalacrocorax auritus and invasive dreissenid mussels. We used the empirical long-term gill-net time series and a negative binomial linear mixed model to partition the variability in catches into spatial and coherent temporal variance components, hypothesizing that variance partitioning can help quantify spatiotemporal variability and determine whether variance structure differs before and after large-scale perturbations. We found that the mean catch and the total variability of catches decreased following perturbation but that not all sampling locations responded in a consistent manner. There was also evidence of some spatial homogenization concurrent with a restructuring of the relative productivity of individual sites. Specifically, offshore sites generally became more productive following the estimated break point in the gill-net time series. These results provide support for the idea that variance structure is responsive to large-scale perturbations; therefore, variance components have potential utility as statistical indicators of response to a changing environment more broadly. The modeling approach described herein is flexible and would be transferable to other systems and metrics. For example, variance partitioning could be used to examine responses to alternative management regimes, to compare variability across physiographic regions, and to describe differences among climate zones. Understanding how individual variance components respond to perturbation may yield finer-scale insights into ecological shifts than focusing on patterns in the mean responses or total variability alone.
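
    A minimal sketch of the variance-partitioning idea, assuming a balanced site-by-year table and a Gaussian two-way random-effects ANOVA in place of the authors' negative binomial mixed model; all quantities are simulated.

```python
# Simplified variance partitioning for a site-by-year catch table using
# a Gaussian two-way random-effects ANOVA (the study itself fits a
# negative binomial linear mixed model; this only sketches the idea of
# splitting variability into spatial and coherent temporal components).
import numpy as np

rng = np.random.default_rng(5)
n_site, n_year = 12, 25
site = rng.normal(0, 1.0, n_site)[:, None]          # spatial component
year = rng.normal(0, 0.6, n_year)[None, :]          # coherent temporal component
y = 3.0 + site + year + rng.normal(0, 0.8, (n_site, n_year))  # log-scale catches

ms_site = n_year * y.mean(axis=1).var(ddof=1)
ms_year = n_site * y.mean(axis=0).var(ddof=1)
resid = y - y.mean(axis=1, keepdims=True) - y.mean(axis=0, keepdims=True) + y.mean()
ms_err = (resid ** 2).sum() / ((n_site - 1) * (n_year - 1))

var_site = max((ms_site - ms_err) / n_year, 0.0)    # E[MS_site] = s2_e + n_year*s2_site
var_year = max((ms_year - ms_err) / n_site, 0.0)
print(f"spatial={var_site:.2f}, temporal={var_year:.2f}, residual={ms_err:.2f}")
```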

  17. Introductory Guide to the Statistics of Molecular Genetics

    ERIC Educational Resources Information Center

    Eley, Thalia C.; Rijsdijk, Fruhling

    2005-01-01

    Background: This introductory guide presents the main two analytical approaches used by molecular geneticists: linkage and association. Methods: Traditional linkage and association methods are described, along with more recent advances in methodologies such as those using a variance components approach. Results: New methods are being developed all…

  18. Detection of gene–environment interaction in pedigree data using genome-wide genotypes

    PubMed Central

    Nivard, Michel G; Middeldorp, Christel M; Lubke, Gitta; Hottenga, Jouke-Jan; Abdellaoui, Abdel; Boomsma, Dorret I; Dolan, Conor V

    2016-01-01

    Heritability may be estimated using phenotypic data collected in relatives or in distantly related individuals using genome-wide single nucleotide polymorphism (SNP) data. We combined these approaches by re-parameterizing the model proposed by Zaitlen et al and extended this model to include moderation of (total and SNP-based) genetic and environmental variance components by a measured moderator. By means of data simulation, we demonstrated that the type 1 error rates of the proposed test are correct and parameter estimates are accurate. As an application, we considered the moderation by age or year of birth of variance components associated with body mass index (BMI), height, attention problems (AP), and symptoms of anxiety and depression. The genetic variance of BMI was found to increase with age, but the environmental variance displayed a greater increase with age, resulting in a proportional decrease of the heritability of BMI. Environmental variance of height increased with year of birth. The environmental variance of AP increased with age. These results illustrate the assessment of moderation of environmental and genetic effects, when estimating heritability from combined SNP and family data. The assessment of moderation of genetic and environmental variance will enhance our understanding of the genetic architecture of complex traits. PMID:27436263

  19. Detection of mastitis in dairy cattle by use of mixture models for repeated somatic cell scores: a Bayesian approach via Gibbs sampling.

    PubMed

    Odegård, J; Jensen, J; Madsen, P; Gianola, D; Klemetsdal, G; Heringstad, B

    2003-11-01

    The distribution of somatic cell scores could be regarded as a mixture of at least two components depending on a cow's udder health status. A heteroscedastic two-component Bayesian normal mixture model with random effects was developed and implemented via Gibbs sampling. The model was evaluated using datasets consisting of simulated somatic cell score records. Somatic cell score was simulated as a mixture representing two alternative udder health statuses ("healthy" or "diseased"). Animals were assigned randomly to the two components according to the probability of group membership (Pm). Random effects (additive genetic and permanent environment), when included, had identical distributions across mixture components. Posterior probabilities of putative mastitis were estimated for all observations, and model adequacy was evaluated using measures of sensitivity, specificity, and posterior probability of misclassification. Fitting different residual variances in the two mixture components caused some bias in estimation of parameters. When the components were difficult to disentangle, so were their residual variances, causing bias in estimation of Pm and of location parameters of the two underlying distributions. When all variance components were identical across mixture components, the mixture model analyses returned parameter estimates essentially without bias and with a high degree of precision. Including random effects in the model increased the probability of correct classification substantially. No sizable differences in probability of correct classification were found between models in which a single cow effect (ignoring relationships) was fitted and models where this effect was split into genetic and permanent environmental components, utilizing relationship information. When genetic and permanent environmental effects were fitted, the between-replicate variance of estimates of posterior means was smaller because the model accounted for random genetic drift.

  20. Analysis of molecular variance inferred from metric distances among DNA haplotypes: application to human mitochondrial DNA restriction data.

    PubMed

    Excoffier, L; Smouse, P E; Quattro, J M

    1992-06-01

    We present here a framework for the study of molecular variation within a single species. Information on DNA haplotype divergence is incorporated into an analysis of variance format, derived from a matrix of squared-distances among all pairs of haplotypes. This analysis of molecular variance (AMOVA) produces estimates of variance components and F-statistic analogs, designated here as phi-statistics, reflecting the correlation of haplotypic diversity at different levels of hierarchical subdivision. The method is flexible enough to accommodate several alternative input matrices, corresponding to different types of molecular data, as well as different types of evolutionary assumptions, without modifying the basic structure of the analysis. The significance of the variance components and phi-statistics is tested using a permutational approach, eliminating the normality assumption that is conventional for analysis of variance but inappropriate for molecular data. Application of AMOVA to human mitochondrial DNA haplotype data shows that population subdivisions are better resolved when some measure of molecular differences among haplotypes is introduced into the analysis. At the intraspecific level, however, the additional information provided by knowing the exact phylogenetic relations among haplotypes or by a nonlinear translation of restriction-site change into nucleotide diversity does not significantly modify the inferred population genetic structure. Monte Carlo studies show that site sampling does not fundamentally affect the significance of the molecular variance components. The AMOVA treatment is easily extended in several different directions and it constitutes a coherent and flexible framework for the statistical analysis of molecular data.
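
    The core computation can be sketched for the simplest two-level case, assuming squared Euclidean distances and a single grouping level; Excoffier et al.'s framework additionally handles deeper hierarchies and arbitrary molecular distance matrices.

```python
# Two-level AMOVA sketch: partition squared pairwise distances among and
# within populations and permute population labels to test Phi_ST.
import numpy as np

def amova_phi(d2, labels):
    """Phi_ST from a matrix of squared distances and population labels."""
    n = len(labels)
    ss_total = d2.sum() / (2 * n)                 # sum over unordered pairs / N
    pops = np.unique(labels)
    ss_within = sum(d2[np.ix_(labels == p, labels == p)].sum()
                    / (2 * (labels == p).sum()) for p in pops)
    ss_among = ss_total - ss_within
    sizes = np.array([(labels == p).sum() for p in pops])
    n0 = (n - (sizes**2).sum() / n) / (len(pops) - 1)   # effective group size
    ms_among = ss_among / (len(pops) - 1)
    sigma2_w = ss_within / (n - len(pops))
    sigma2_a = (ms_among - sigma2_w) / n0
    return sigma2_a / (sigma2_a + sigma2_w)

rng = np.random.default_rng(6)
x = np.vstack([rng.normal(m, 1, (30, 5)) for m in (0.0, 1.0)])  # two populations
labels = np.repeat([0, 1], 30)
d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)             # squared distances

phi = amova_phi(d2, labels)
perm = [amova_phi(d2, rng.permutation(labels)) for _ in range(999)]
p = (1 + sum(v >= phi for v in perm)) / 1000
print(f"Phi_ST={phi:.3f}, permutation p={p:.3f}")
```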

  1. On the additive and dominant variance and covariance of individuals within the genomic selection scope.

    PubMed

    Vitezica, Zulma G; Varona, Luis; Legarra, Andres

    2013-12-01

    Genomic evaluation models can fit additive and dominant SNP effects. Under quantitative genetics theory, additive or "breeding" values of individuals are generated by substitution effects, which involve both "biological" additive and dominant effects of the markers. Dominance deviations include only a portion of the biological dominant effects of the markers. Additive variance includes variation due to the additive and dominant effects of the markers. We describe a matrix of dominant genomic relationships across individuals, D, which is similar to the G matrix used in genomic best linear unbiased prediction. This matrix can be used in a mixed-model context for genomic evaluations or to estimate dominant and additive variances in the population. From the "genotypic" value of individuals, an alternative parameterization defines additive and dominance as the parts attributable to the additive and dominant effect of the markers. This approach underestimates the additive genetic variance and overestimates the dominance variance. Transforming the variances from one model into the other is trivial if the distribution of allelic frequencies is known. We illustrate these results with mouse data (four traits, 1884 mice, and 10,946 markers) and simulated data (2100 individuals and 10,000 markers). Variance components were estimated correctly in the model, considering breeding values and dominance deviations. For the model considering genotypic values, the inclusion of dominant effects biased the estimate of additive variance. Genomic models were more accurate for the estimation of variance components than their pedigree-based counterparts.
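
    A sketch of the two relationship matrices from a 0/1/2 genotype matrix, using VanRaden-style centering for G and a dominance-deviation coding (homozygotes to -2q² and -2p², heterozygotes to 2pq) consistent with the breeding-value parameterization; treat the details as illustrative assumptions rather than the paper's exact code.

```python
# Sketch: build additive (G) and dominance (D) genomic relationship
# matrices from a 0/1/2 genotype matrix. G follows VanRaden's coding;
# the dominance coding (-2p^2, 2pq, -2q^2 for genotypes 0/1/2) matches
# a dominance-deviation parameterization, but is an assumption here.
import numpy as np

rng = np.random.default_rng(7)
n_ind, n_snp = 200, 1000
p = rng.uniform(0.05, 0.95, n_snp)                  # allele frequencies
M = rng.binomial(2, p, (n_ind, n_snp))              # genotypes coded 0/1/2

q = 1 - p
Z = M - 2 * p                                       # centered additive coding
G = Z @ Z.T / (2 * p * q).sum()

# Dominance-deviation coding per genotype class.
W = np.where(M == 0, -2 * p**2, np.where(M == 1, 2 * p * q, -2 * q**2))
D = W @ W.T / ((2 * p * q) ** 2).sum()
print(G.shape, D.shape, np.diag(G).mean().round(2), np.diag(D).mean().round(2))
```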

  2. Bias and robustness of uncertainty components estimates in transient climate projections

    NASA Astrophysics Data System (ADS)

    Hingray, Benoit; Blanchet, Juliette; Vidal, Jean-Philippe

    2016-04-01

    A critical issue in climate change studies is the estimation of uncertainties in projections along with the contribution of the different uncertainty sources, including scenario uncertainty, the different components of model uncertainty and internal variability. Quantifying the different uncertainty sources actually faces different problems. For instance, and for the sake of simplicity, an estimate of model uncertainty is classically obtained from the empirical variance of the climate responses obtained for the different modeling chains. These estimates are however biased. Another difficulty arises from the limited number of members that are classically available for most modeling chains. In this case, the climate response of one given chain and the effect of its internal variability may actually be difficult, if not impossible, to separate. The estimates of the scenario uncertainty, model uncertainty and internal variability components are thus likely to lack robustness. We explore the importance of the bias and the robustness of the estimates for two classical Analysis of Variance (ANOVA) approaches: a Single Time approach (STANOVA), based on the only data available for the considered projection lead time, and a time series based approach (QEANOVA), which assumes quasi-ergodicity of climate outputs over the whole available climate simulation period (Hingray and Saïd, 2014). We explore both issues for a simple but classical configuration where uncertainties in projections are composed of two single sources: model uncertainty and internal climate variability. The bias in model uncertainty estimates is explored from theoretical expressions of unbiased estimators developed for both ANOVA approaches. The robustness of uncertainty estimates is explored for multiple synthetic ensembles of time series projections generated with Monte Carlo simulations. For both ANOVA approaches, when the empirical variance of climate responses is used to estimate model uncertainty, the bias is always positive. It can be especially high with STANOVA. In the most critical configurations, when the number of members available for each modeling chain is small (< 3) and when internal variability explains most of the total uncertainty variance (75% or more), the overestimation is higher than 100% of the true model uncertainty variance. The bias can be considerably reduced with a time series ANOVA approach, owing to the multiple time steps accounted for. The longer the transient time period used for the analysis, the larger the reduction. When a quasi-ergodic ANOVA approach is applied to decadal data for the whole 1980-2100 period, the bias is reduced by a factor of 2.5 to 20, depending on the projection lead time. In all cases, the bias is likely to be non-negligible for a large number of climate impact studies, resulting in a likely large overestimation of the contribution of model uncertainty to total variance. For both approaches, the robustness of all uncertainty estimates is higher when more members are available, when internal variability is smaller and/or when the response-to-uncertainty ratio is higher. QEANOVA estimates are much more robust than STANOVA ones: QEANOVA simulated confidence intervals are roughly 3 to 5 times smaller than STANOVA ones. Except for STANOVA when fewer than 3 members are available, the robustness is rather high for total uncertainty and moderate for internal variability estimates. For model uncertainty or response-to-uncertainty ratio estimates, the robustness is conversely low for QEANOVA and very low for STANOVA. In the most critical configurations (small number of members, large internal variability), large over- or underestimation of uncertainty components is thus very likely. To propose relevant uncertainty analyses and avoid misleading interpretations, estimates of uncertainty components should therefore be bias corrected and ideally come with estimates of their robustness. This work is part of the COMPLEX Project (European Collaborative Project FP7-ENV-2012 number: 308601; http://www.complex.ac.uk/). Hingray, B., Saïd, M., 2014. Partitioning internal variability and model uncertainty components in a multimodel multireplicate ensemble of climate projections. J. Climate. doi:10.1175/JCLI-D-13-00629.1. Hingray, B., Blanchet, J. (in revision) Unbiased estimators for uncertainty components in transient climate projections. J. Climate. Hingray, B., Blanchet, J., Vidal, J.P. (in revision) Robustness of uncertainty components estimates in climate projections. J. Climate.
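
    The single-time bias described above can be reproduced with a small Monte Carlo sketch (all numbers are illustrative assumptions): with n members per chain, the naive variance of chain means overestimates model uncertainty by roughly the internal variability divided by n, and subtracting the pooled within-chain variance over n removes the bias.

```python
# Monte Carlo sketch of the bias discussed above: with n members per
# chain, the naive variance of chain means overestimates model
# uncertainty by about sigma2_internal / n; an unbiased estimator
# subtracts the pooled within-chain variance over n. Illustrative only.
import numpy as np

rng = np.random.default_rng(8)
n_chain, n_member = 10, 2
sigma2_model, sigma2_internal = 1.0, 3.0            # internal variability dominates

naive, corrected = [], []
for _ in range(5000):
    mu = rng.normal(0, np.sqrt(sigma2_model), n_chain)        # chain climate responses
    x = mu[:, None] + rng.normal(0, np.sqrt(sigma2_internal), (n_chain, n_member))
    v_means = x.mean(axis=1).var(ddof=1)
    v_within = x.var(axis=1, ddof=1).mean()
    naive.append(v_means)
    corrected.append(v_means - v_within / n_member)

print(f"true={sigma2_model}, naive={np.mean(naive):.2f}, "
      f"corrected={np.mean(corrected):.2f}")
```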

  3. Principal component and spatial correlation analysis of spectroscopic-imaging data in scanning probe microscopy.

    PubMed

    Jesse, Stephen; Kalinin, Sergei V

    2009-02-25

    An approach for the analysis of multi-dimensional, spectroscopic-imaging data based on principal component analysis (PCA) is explored. PCA selects and ranks relevant response components based on variance within the data. It is shown that for examples with small relative variations between spectra, the first few PCA components closely coincide with results obtained using model fitting, and this is achieved at rates approximately four orders of magnitude faster. For cases with strong response variations, PCA allows an effective approach to rapidly process, de-noise, and compress data. The prospects for PCA combined with correlation function analysis of component maps as a universal tool for data analysis and representation in microscopy are discussed.
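
    A minimal sketch of this workflow on a synthetic cube: unfold to (pixels x spectrum), rank components by variance, and reconstruct from the leading ones to de-noise and compress; no claim to reproduce the authors' scanning probe data.

```python
# Sketch of the PCA workflow described above: unfold a spectroscopic-
# imaging cube to (pixels x spectrum), rank components by variance, and
# reconstruct from the first few to de-noise and compress.
import numpy as np

rng = np.random.default_rng(9)
nx, ny, ns = 32, 32, 128                            # grid and spectrum length
t = np.linspace(0, 1, ns)
maps = rng.random((2, nx, ny))                      # two spatially varying weights
spectra = np.stack([np.sin(6 * t), np.exp(-4 * t)]) # two underlying responses
cube = np.tensordot(maps, spectra, axes=(0, 0)) + rng.normal(0, 0.05, (nx, ny, ns))

X = cube.reshape(-1, ns)
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
share = s**2 / (s**2).sum()
print("variance shares of first 4 PCs:", np.round(share[:4], 3))

k = 2                                               # keep the meaningful components
denoised = (U[:, :k] * s[:k]) @ Vt[:k] + X.mean(axis=0)
component_maps = (Xc @ Vt[:k].T).reshape(nx, ny, k) # maps for correlation analysis
print("reconstruction RMS error:", np.sqrt(((denoised - X) ** 2).mean()).round(4))
```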

  4. Analysis of a genetically structured variance heterogeneity model using the Box-Cox transformation.

    PubMed

    Yang, Ye; Christensen, Ole F; Sorensen, Daniel

    2011-02-01

    Over recent years, statistical support for the presence of genetic factors operating at the level of the environmental variance has come from fitting a genetically structured heterogeneous variance model to field or experimental data in various species. Misleading results may arise due to skewness of the marginal distribution of the data. To investigate how the scale of measurement affects inferences, the genetically structured heterogeneous variance model is extended to accommodate the family of Box-Cox transformations. Litter size data in rabbits and pigs that had previously been analysed in the untransformed scale were reanalysed in a scale equal to the mode of the marginal posterior distribution of the Box-Cox parameter. In the rabbit data, the statistical evidence for a genetic component at the level of the environmental variance is considerably weaker than that resulting from an analysis in the original metric. In the pig data, the statistical evidence is stronger, but the coefficient of correlation between additive genetic effects affecting mean and variance changes sign, compared to the results in the untransformed scale. The study confirms that inferences on variances can be strongly affected by the presence of asymmetry in the distribution of data. We recommend that to avoid one important source of spurious inferences, future work seeking support for a genetic component acting on environmental variation using a parametric approach based on normality assumptions confirms that these are met.
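
    The scale-sensitivity point can be illustrated with a short sketch, assuming synthetic skewed data and scipy's maximum likelihood Box-Cox estimate in place of the authors' Bayesian treatment: apparent variance heterogeneity between groups can shrink markedly after transformation.

```python
# Sketch: estimate a Box-Cox transformation by maximum likelihood and
# compare group variance heterogeneity before and after. Requires
# positive data; the skewed example data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
g1 = rng.lognormal(mean=1.0, sigma=0.4, size=500)   # skewed "litter size" proxies
g2 = rng.lognormal(mean=1.3, sigma=0.4, size=500)

y = np.concatenate([g1, g2])
y_t, lam = stats.boxcox(y)                          # ML estimate of lambda
t1, t2 = y_t[:500], y_t[500:]

print(f"lambda = {lam:.2f}")
print(f"variance ratio raw:         {g2.var(ddof=1) / g1.var(ddof=1):.2f}")
print(f"variance ratio transformed: {t2.var(ddof=1) / t1.var(ddof=1):.2f}")
```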

  5. GIS-based niche modeling for mapping species' habitats

    USGS Publications Warehouse

    Rotenberry, J.T.; Preston, K.L.; Knick, S.

    2006-01-01

    Ecological "niche modeling" using presence-only locality data and large-scale environmental variables provides a powerful tool for identifying and mapping suitable habitat for species over large spatial extents. We describe a niche modeling approach that identifies a minimum (rather than an optimum) set of basic habitat requirements for a species, based on the assumption that constant environmental relationships in a species' distribution (i.e., variables that maintain a consistent value where the species occurs) are most likely to be associated with limiting factors. Environmental variables that take on a wide range of values where a species occurs are less informative because they do not limit a species' distribution, at least over the range of variation sampled. This approach is operationalized by partitioning Mahalanobis D² (the standardized difference between values of a set of environmental variables for any point and mean values for those same variables calculated from all points at which a species was detected) into independent components. The smallest of these components represents the linear combination of variables with minimum variance; increasingly larger components represent larger variances and are increasingly less limiting. We illustrate this approach using the California Gnatcatcher (Polioptila californica Brewster) and provide SAS code to implement it.

  6. Variance component and breeding value estimation for genetic heterogeneity of residual variance in Swedish Holstein dairy cattle.

    PubMed

    Rönnegård, L; Felleki, M; Fikse, W F; Mulder, H A; Strandberg, E

    2013-04-01

    Trait uniformity, or micro-environmental sensitivity, may be studied through individual differences in residual variance. These differences appear to be heritable, and the need exists, therefore, to fit models to predict breeding values explaining differences in residual variance. The aim of this paper is to estimate breeding values for micro-environmental sensitivity (vEBV) in milk yield and somatic cell score, and their associated variance components, on a large dairy cattle data set having more than 1.6 million records. Estimation of variance components, ordinary breeding values, and vEBV was performed using standard variance component estimation software (ASReml), applying the methodology for double hierarchical generalized linear models. Estimation using ASReml took less than 7 d on a Linux server. The genetic standard deviations for residual variance were 0.21 and 0.22 for somatic cell score and milk yield, respectively, which indicate moderate genetic variance for residual variance and imply that a standard deviation change in vEBV for one of these traits would alter the residual variance by 20%. This study shows that estimation of variance components, estimated breeding values and vEBV, is feasible for large dairy cattle data sets using standard variance component estimation software. The possibility to select for uniformity in Holstein dairy cattle based on these estimates is discussed. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  7. Principal component of explained variance: An efficient and optimal data dimension reduction framework for association studies.

    PubMed

    Turgeon, Maxime; Oualkacha, Karim; Ciampi, Antonio; Miftah, Hanane; Dehghan, Golsa; Zanke, Brent W; Benedet, Andréa L; Rosa-Neto, Pedro; Greenwood, Celia Mt; Labbe, Aurélie

    2018-05-01

    The genomics era has led to an increase in the dimensionality of data collected in the investigation of biological questions. In this context, dimension-reduction techniques can be used to summarise high-dimensional signals into low-dimensional ones, to further test for association with one or more covariates of interest. This paper revisits one such approach, previously known as principal component of heritability and renamed here as principal component of explained variance (PCEV). As its name suggests, the PCEV seeks a linear combination of outcomes in an optimal manner, by maximising the proportion of variance explained by one or several covariates of interest. By construction, this method optimises power; however, due to its computational complexity, it has unfortunately received little attention in the past. Here, we propose a general analytical PCEV framework that builds on the assets of the original method, i.e. conceptually simple and free of tuning parameters. Moreover, our framework extends the range of applications of the original procedure by providing a computationally simple strategy for high-dimensional outcomes, along with exact and asymptotic testing procedures that drastically reduce its computational cost. We investigate the merits of the PCEV using an extensive set of simulations. Furthermore, the use of the PCEV approach is illustrated using three examples taken from the fields of epigenetics and brain imaging.
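
    The optimization behind PCEV reduces to a generalized eigenproblem between the model and residual covariance of the outcomes. The sketch below is a bare-bones version on simulated data, without the paper's high-dimensional strategy or exact tests.

```python
# Bare-bones PCEV sketch: find the linear combination of outcomes that
# maximizes the proportion of variance explained by a single covariate,
# via a generalized eigenproblem on model vs. residual covariance.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(11)
n, p = 500, 10
x = rng.normal(size=n)
B = np.zeros(p); B[:3] = 0.5                        # covariate affects 3 outcomes
Y = np.outer(x, B) + rng.normal(0, 1, (n, p))

xc = (x - x.mean())[:, None]
Yc = Y - Y.mean(axis=0)
beta = np.linalg.lstsq(xc, Yc, rcond=None)[0]       # outcome-wise regressions
fitted = xc @ beta
resid = Yc - fitted

V_model = fitted.T @ fitted / n                     # covariance explained by x
V_resid = resid.T @ resid / n                       # residual covariance
vals, vecs = eigh(V_model, V_resid)                 # generalized eigenproblem
w = vecs[:, -1]                                     # PCEV loadings (top eigenpair)
prop = vals[-1] / (1 + vals[-1])                    # max proportion explained
print(f"max proportion of variance explained: {prop:.3f}")
print("loadings:", np.round(w / np.abs(w).max(), 2))
```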

  8. Assessing variance components in multilevel linear models using approximate Bayes factors: A case study of ethnic disparities in birthweight

    PubMed Central

    Saville, Benjamin R.; Herring, Amy H.; Kaufman, Jay S.

    2013-01-01

    Racial/ethnic disparities in birthweight are a large source of differential morbidity and mortality worldwide and have remained largely unexplained in epidemiologic models. We assess the impact of maternal ancestry and census tract residence on infant birth weights in New York City and the modifying effects of race and nativity by incorporating random effects in a multilevel linear model. Evaluating the significance of these predictors involves the test of whether the variances of the random effects are equal to zero. This is problematic because the null hypothesis lies on the boundary of the parameter space. We generalize an approach for assessing random effects in the two-level linear model to a broader class of multilevel linear models by scaling the random effects to the residual variance and introducing parameters that control the relative contribution of the random effects. After integrating over the random effects and variance components, the resulting integrals needed to calculate the Bayes factor can be efficiently approximated with Laplace’s method. PMID:24082430

  9. [Analytic methods for seed models with genotype x environment interactions].

    PubMed

    Zhu, J

    1996-01-01

    Genetic models with genotype effect (G) and genotype x environment interaction effect (GE) are proposed for analyzing generation means of seed quantitative traits in crops. The total genetic effect (G) is partitioned into seed direct genetic effect (G0), cytoplasm genetic effect (C), and maternal plant genetic effect (Gm). Seed direct genetic effect (G0) can be further partitioned into direct additive (A) and direct dominance (D) genetic components. Maternal genetic effect (Gm) can also be partitioned into maternal additive (Am) and maternal dominance (Dm) genetic components. The total genotype x environment interaction effect (GE) can also be partitioned into direct genetic by environment interaction effect (G0E), cytoplasm genetic by environment interaction effect (CE), and maternal genetic by environment interaction effect (GmE). G0E can be partitioned into direct additive by environment interaction (AE) and direct dominance by environment interaction (DE) genetic components. GmE can also be partitioned into maternal additive by environment interaction (AmE) and maternal dominance by environment interaction (DmE) genetic components. Partitions of genetic components are listed for parents, F1, F2 and backcrosses. A set of parents and their reciprocal F1 and F2 seeds is applicable for efficient analysis of seed quantitative traits. The MINQUE(0/1) method can be used for estimating variance and covariance components. Unbiased estimation of covariance components between two traits can also be obtained by the MINQUE(0/1) method. Random genetic effects in seed models are predictable by the Adjusted Unbiased Prediction (AUP) approach with the MINQUE(0/1) method. The jackknife procedure is suggested for estimation of sampling variances of estimated variance and covariance components and of predicted genetic effects, which can be further used in t-tests of parameters. Unbiasedness and efficiency for estimating variance components and predicting genetic effects are tested by Monte Carlo simulations.

  10. Sampling in freshwater environments: suspended particle traps and variability in the final data.

    PubMed

    Barbizzi, Sabrina; Pati, Alessandra

    2008-11-01

    This paper reports one practical method to estimate the measurement uncertainty including sampling, derived from the approach implemented by Ramsey for soil investigations. The methodology has been applied to estimate the measurement uncertainty (sampling and analysis) of ¹³⁷Cs activity concentration (Bq kg⁻¹) and total carbon content (%) in suspended particle sampling in a freshwater ecosystem. Uncertainty estimates for the between-locations, sampling, and analysis components have been evaluated. For the considered measurands, the relative expanded measurement uncertainties are 12.3% for ¹³⁷Cs and 4.5% for total carbon. For ¹³⁷Cs, the measurement (sampling + analysis) variance gives the major contribution to the total variance, while for total carbon the spatial variance is the dominant contributor to the total variance. The limitations and advantages of this basic method are discussed.
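
    A simplified stand-in for the duplicate design behind this approach (a classical balanced nested ANOVA rather than Ramsey's robust ANOVA; all values simulated): two samples per location, each analysed twice, separate the measurement variance into sampling and analysis components.

```python
# Sketch: balanced nested design (locations > samples > analyses) and
# classical ANOVA to split measurement variance into sampling and
# analysis parts. Ramsey's method uses robust ANOVA; this is simpler.
import numpy as np

rng = np.random.default_rng(12)
n_loc = 8
loc = rng.normal(100, 10, n_loc)                          # true location levels
samp = loc[:, None] + rng.normal(0, 4, (n_loc, 2))        # 2 samples per location
y = samp[..., None] + rng.normal(0, 2, (n_loc, 2, 2))     # 2 analyses per sample

ms_anal = y.var(axis=2, ddof=1).mean()                    # within-sample scatter
samp_means = y.mean(axis=2)
ms_samp = 2 * samp_means.var(axis=1, ddof=1).mean()       # samples within location
loc_means = samp_means.mean(axis=1)
ms_loc = 4 * loc_means.var(ddof=1)

s2_analysis = ms_anal                                     # E[MS_anal] = s2_a
s2_sampling = max((ms_samp - ms_anal) / 2, 0)             # E[MS_samp] = s2_a + 2*s2_s
s2_between_loc = max((ms_loc - ms_samp) / 4, 0)
s2_meas = s2_sampling + s2_analysis                       # measurement variance
U_rel = 2 * np.sqrt(s2_meas) / y.mean() * 100             # expanded uncertainty, k=2
print(f"sampling={s2_sampling:.1f}, analysis={s2_analysis:.1f}, "
      f"between-location={s2_between_loc:.1f}, relative expanded U={U_rel:.1f}%")
```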

  11. Multivariate analysis of variance of designed chromatographic data. A case study involving fermentation of rooibos tea.

    PubMed

    Marini, Federico; de Beer, Dalene; Walters, Nico A; de Villiers, André; Joubert, Elizabeth; Walczak, Beata

    2017-03-17

    An ultimate goal of investigations of rooibos plant material subjected to different stages of fermentation is to identify the chemical changes taking place in the phenolic composition, using an untargeted approach and chromatographic fingerprints. Realization of this goal requires, among others, identification of the main components of the plant material involved in chemical reactions during the fermentation process. Quantitative chromatographic data for the compounds in extracts of green, semi-fermented and fermented rooibos form the basis of this preliminary study following a targeted approach. The aim is to estimate whether treatment has a significant effect based on all quantified compounds and to identify the compounds which contribute significantly to it. Analysis of variance is performed using modern multivariate methods such as ANOVA-Simultaneous Component Analysis, ANOVA-Target Projection and regularized MANOVA. This study is the first one in which all three approaches are compared and evaluated. For the data studied, all three methods reveal the same significance of the fermentation effect on the extract compositions, but they lead to different interpretations of it. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Sparse principal component analysis in medical shape modeling

    NASA Astrophysics Data System (ADS)

    Sjöstrand, Karl; Stegmann, Mikkel B.; Larsen, Rasmus

    2006-03-01

    Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims at producing easily interpreted models through sparse loadings, i.e. each new variable is a linear combination of a subset of the original variables. One of the aims of using SPCA is the possible separation of the results into isolated and easily identifiable effects. This article introduces SPCA for shape analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA algorithm has been implemented using Matlab and is available for download. The general behavior of the algorithm is investigated, and strengths and weaknesses are discussed. The original report on the SPCA algorithm argues that the ordering of modes is not an issue. We disagree on this point and propose several approaches to establish sensible orderings. A method that orders modes by decreasing variance and maximizes the sum of variances for all modes is presented and investigated in detail.

  13. Multivariate classification of small order watersheds in the Quabbin Reservoir Basin, Massachusetts

    USGS Publications Warehouse

    Lent, R.M.; Waldron, M.C.; Rader, J.C.

    1998-01-01

    A multivariate approach was used to analyze hydrologic, geologic, geographic, and water-chemistry data from small order watersheds in the Quabbin Reservoir Basin in central Massachusetts. Eighty-three small order watersheds were delineated, and landscape attributes defining hydrologic, geologic, and geographic features of the watersheds were compiled from geographic information system data layers. Principal components analysis was used to evaluate 11 chemical constituents collected bi-weekly for 1 year at 15 surface-water stations in order to subdivide the basin into subbasins comprised of watersheds with similar water quality characteristics. Three principal components accounted for about 90 percent of the variance in water chemistry data. The principal components were defined as a biogeochemical variable related to wetland density, an acid-neutralization variable, and a road-salt variable related to density of primary roads. Three subbasins were identified. Analysis of variance and multiple comparisons of means were used to identify significant differences in stream water chemistry and landscape attributes among subbasins. All stream water constituents were significantly different among subbasins. Multiple regression techniques were used to relate stream water chemistry to landscape attributes. Important differences in landscape attributes were related to wetlands, slope, and soil type.

  14. Finite Element Model Calibration Approach for Ares I-X

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Gaspar, James L.; Lazor, Daniel R.; Parks, Russell A.; Bartolotta, Paul A.

    2010-01-01

    Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of non-conventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pretest predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.

  15. Finite Element Model Calibration Approach for Ares I-X

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Lazor, Daniel R.; Gaspar, James L.; Parks, Russel A.; Bartolotta, Paul A.

    2010-01-01

    Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of nonconventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pre-test predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.

  16. A variance-decomposition approach to investigating multiscale habitat associations

    USGS Publications Warehouse

    Lawler, J.J.; Edwards, T.C.

    2006-01-01

    The recognition of the importance of spatial scale in ecology has led many researchers to take multiscale approaches to studying habitat associations. However, few of the studies that investigate habitat associations at multiple spatial scales have considered the potential effects of cross-scale correlations in measured habitat variables. When cross-scale correlations in such studies are strong, conclusions drawn about the relative strength of habitat associations at different spatial scales may be inaccurate. Here we adapt and demonstrate an analytical technique based on variance decomposition for quantifying the influence of cross-scale correlations on multiscale habitat associations. We used the technique to quantify the variation in nest-site locations of Red-naped Sapsuckers (Sphyrapicus nuchalis) and Northern Flickers (Colaptes auratus) associated with habitat descriptors at three spatial scales. We demonstrate how the method can be used to identify components of variation that are associated only with factors at a single spatial scale as well as shared components of variation that represent cross-scale correlations. Despite the fact that no explanatory variables in our models were highly correlated (r < 0.60), we found that shared components of variation reflecting cross-scale correlations accounted for roughly half of the deviance explained by the models. These results highlight the importance of both conducting habitat analyses at multiple spatial scales and of quantifying the effects of cross-scale correlations in such analyses. Given the limits of conventional analytical techniques, we recommend alternative methods, such as the variance-decomposition technique demonstrated here, for analyzing habitat associations at multiple spatial scales. © The Cooper Ornithological Society 2006.
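
    The decomposition can be sketched for two scales with nested regressions, where the shared component measures what cross-scale correlation prevents attributing to either scale alone (a generic illustration, not the study's three-scale nest-site models).

```python
# Sketch of variance decomposition across two spatial scales: partition
# explained variation into pure scale-1, pure scale-2, and a shared
# component reflecting cross-scale correlation, using differences of R^2
# from nested regressions.
import numpy as np

rng = np.random.default_rng(13)
n = 400
common = rng.normal(size=n)                         # source of cross-scale correlation
x1 = common + rng.normal(0, 1, n)                   # fine-scale descriptor
x2 = common + rng.normal(0, 1, n)                   # landscape-scale descriptor
y = x1 + x2 + rng.normal(0, 1, n)

def r2(*cols):
    X = np.column_stack([np.ones(n), *cols])
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return 1 - resid.var() / y.var()

r2_full, r2_1, r2_2 = r2(x1, x2), r2(x1), r2(x2)
pure1 = r2_full - r2_2                              # unique to scale 1
pure2 = r2_full - r2_1                              # unique to scale 2
shared = r2_full - pure1 - pure2                    # cross-scale overlap
print(f"pure scale 1={pure1:.2f}, pure scale 2={pure2:.2f}, shared={shared:.2f}")
```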

  17. Procedures for estimating confidence intervals for selected method performance parameters.

    PubMed

    McClure, F D; Lee, J K

    2001-01-01

    Procedures for estimating confidence intervals (CIs) for the repeatability variance (σr²), the reproducibility variance (σR² = σL² + σr²), the laboratory component (σL²), and their corresponding standard deviations σr, σR, and σL, respectively, are presented. In addition, CIs for the ratio of the repeatability component to the reproducibility variance (σr²/σR²) and the ratio of the laboratory component to the reproducibility variance (σL²/σR²) are also presented.
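
    For the repeatability component alone, the classical chi-square interval is straightforward. A minimal sketch assuming a balanced collaborative study with synthetic data; the intervals for σL², σR², and the ratios require the additional machinery developed in the paper and are not shown.

```python
# Chi-square confidence interval for the repeatability variance
# sigma_r^2, estimated by the pooled within-laboratory mean square.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
p, n = 8, 3                        # 8 labs, 3 replicates each (synthetic)
data = rng.normal(10, 0.5, size=(p, n)) + rng.normal(0, 0.8, size=(p, 1))

ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (p * (n - 1))
df = p * (n - 1)                   # degrees of freedom of the estimate
alpha = 0.05
ci = (df * ms_within / chi2.ppf(1 - alpha / 2, df),
      df * ms_within / chi2.ppf(alpha / 2, df))
print(f"sigma_r^2 = {ms_within:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```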

  18. Decomposing genomic variance using information from GWA, GWE and eQTL analysis.

    PubMed

    Ehsani, A; Janss, L; Pomp, D; Sørensen, P

    2016-04-01

    A commonly used procedure in genome-wide association (GWA), genome-wide expression (GWE) and expression quantitative trait locus (eQTL) analyses is based on a bottom-up experimental approach that attempts to individually associate molecular variants with complex traits. Top-down modeling of the entire set of genomic data and partitioning of the overall variance into subcomponents may provide further insight into the genetic basis of complex traits. To test this approach, we performed a whole-genome variance components analysis and partitioned the genomic variance using information from GWA, GWE and eQTL analyses of growth-related traits in a mouse F2 population. We characterized the mouse trait genetic architecture by ordering single nucleotide polymorphisms (SNPs) based on their P-values and studying the areas under the curve (AUCs). The observed traits were found to have a genomic variance profile that differed significantly from that expected of a trait under an infinitesimal model. This situation was particularly true for both body weight and body fat, for which the AUCs were much higher compared with that of glucose. In addition, SNPs with a high degree of trait-specific regulatory potential (SNPs associated with subset of transcripts that significantly associated with a specific trait) explained a larger proportion of the genomic variance than did SNPs with high overall regulatory potential (SNPs associated with transcripts using traditional eQTL analysis). We introduced AUC measures of genomic variance profiles that can be used to quantify relative importance of SNPs as well as degree of deviation of a trait's inheritance from an infinitesimal model. The shape of the curve aids global understanding of traits: The steeper the left-hand side of the curve, the fewer the number of SNPs controlling most of the phenotypic variance. © 2015 Stichting International Foundation for Animal Genetics.
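
    The AUC idea can be pictured with a toy profile. The sketch assumes one plausible reading of the measure (rank SNPs by P-value, accumulate the variance each explains, integrate the normalized curve); the paper's exact definition may differ, and the data are simulated.

```python
# Toy genomic variance profile: a steep left-hand side means few
# SNPs carry most of the variance; AUC near 0.5 resembles an
# infinitesimal-model-like trait.
import numpy as np

rng = np.random.default_rng(3)
m = 5000
pvals = rng.uniform(size=m)
var_explained = rng.exponential(size=m)
var_explained[pvals < 0.01] *= 20      # a few strong SNPs (toy architecture)

order = np.argsort(pvals)              # rank SNPs by significance
curve = np.cumsum(var_explained[order])
curve /= curve[-1]                     # normalized cumulative variance profile
auc = curve.mean()                     # rectangle-rule integral over [0, 1]
print(f"AUC = {auc:.3f} (about 0.5 for an infinitesimal-model-like profile)")
```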

  19. Objective determination of image end-members in spectral mixture analysis of AVIRIS data

    NASA Technical Reports Server (NTRS)

    Tompkins, Stefanie; Mustard, John F.; Pieters, Carle M.; Forsyth, Donald W.

    1993-01-01

    Spectral mixture analysis has been shown to be a powerful, multifaceted tool for analysis of multi- and hyper-spectral data. Applications of AVIRIS data have ranged from mapping soils and bedrock to ecosystem studies. During the first phase of the approach, a set of end-members are selected from an image cube (image end-members) that best account for its spectral variance within a constrained, linear least squares mixing model. These image end-members are usually selected using a priori knowledge and successive trial and error solutions to refine the total number and physical location of the end-members. However, in many situations a more objective method of determining these essential components is desired. We approach the problem of image end-member determination objectively by using the inherent variance of the data. Unlike purely statistical methods such as factor analysis, this approach derives solutions that conform to a physically realistic model.
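
    A common way to implement the constrained linear mixing step is non-negative least squares with a weighted sum-to-one row appended to the end-member matrix. The sketch below uses that device with synthetic end-member spectra; it is not the authors' objective selection procedure, only the unmixing step that such a procedure feeds.

```python
# Constrained linear spectral unmixing: non-negative abundances with
# an approximate sum-to-one constraint via a heavily weighted row.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)
bands, k = 50, 3
E = np.abs(rng.normal(size=(bands, k)))      # end-member spectra (columns)
true_f = np.array([0.6, 0.3, 0.1])
pixel = E @ true_f + rng.normal(0, 0.01, size=bands)

w = 100.0                                    # weight on the sum-to-one row
A = np.vstack([E, w * np.ones((1, k))])
b = np.concatenate([pixel, [w]])
fractions, resid = nnls(A, b)
print(fractions)                             # ~ [0.6, 0.3, 0.1]
```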

  20. Analysis of Wind Tunnel Polar Replicates Using the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    Deloach, Richard; Micol, John R.

    2010-01-01

    The role of variance in a Modern Design of Experiments analysis of wind tunnel data is reviewed, with distinctions made between explained and unexplained variance. The partitioning of unexplained variance into systematic and random components is illustrated, with examples of the elusive systematic component provided for various types of real-world tests. The importance of detecting and defending against systematic unexplained variance in wind tunnel testing is discussed, and the random and systematic components of unexplained variance are examined for a representative wind tunnel data set acquired in a test in which a missile is used as a test article. The adverse impact of correlated (non-independent) experimental errors is described, and recommendations are offered for replication strategies that facilitate the quantification of random and systematic unexplained variance.

  1. Bayesian approach to non-Gaussian field statistics for diffusive broadband terahertz pulses.

    PubMed

    Pearce, Jeremy; Jian, Zhongping; Mittleman, Daniel M

    2005-11-01

    We develop a closed-form expression for the probability distribution function for the field components of a diffusive broadband wave propagating through a random medium. We consider each spectral component to provide an individual observation of a random variable, the configurationally averaged spectral intensity. Since the intensity determines the variance of the field distribution at each frequency, this random variable serves as the Bayesian prior that determines the form of the non-Gaussian field statistics. This model agrees well with experimental results.

  2. Differential contribution of genomic regions to marked genetic variation and prediction of quantitative traits in broiler chickens.

    PubMed

    Abdollahi-Arpanahi, Rostam; Morota, Gota; Valente, Bruno D; Kranis, Andreas; Rosa, Guilherme J M; Gianola, Daniel

    2016-02-03

    Genome-wide association studies in humans have found enrichment of trait-associated single nucleotide polymorphisms (SNPs) in coding regions of the genome and depletion of these in intergenic regions. However, a recent release of the ENCyclopedia of DNA elements showed that ~80 % of the human genome has a biochemical function. Similar studies on the chicken genome are lacking, thus assessing the relative contribution of its genic and non-genic regions to variation is relevant for biological studies and genetic improvement of chicken populations. A dataset including 1351 birds that were genotyped with the 600K Affymetrix platform was used. We partitioned SNPs according to genome annotation data into six classes to characterize the relative contribution of genic and non-genic regions to genetic variation as well as their predictive power using all available quality-filtered SNPs. Target traits were body weight, ultrasound measurement of breast muscle and hen house egg production in broiler chickens. Six genomic regions were considered: intergenic regions, introns, missense, synonymous, 5' and 3' untranslated regions, and regions that are located 5 kb upstream and downstream of coding genes. Genomic relationship matrices were constructed for each genomic region and fitted in the models, separately or simultaneously. Kernel-based ridge regression was used to estimate variance components and assess predictive ability. Contribution of each class of genomic regions to dominance variance was also considered. Variance component estimates indicated that all genomic regions contributed to marked additive genetic variation and that the class of synonymous regions tended to have the greatest contribution. The marked dominance genetic variation explained by each class of genomic regions was similar and negligible (~0.05). In terms of prediction mean-square error, the whole-genome approach showed the best predictive ability. All genic and non-genic regions contributed to phenotypic variation for the three traits studied. Overall, the contribution of additive genetic variance to the total genetic variance was much greater than that of dominance variance. Our results show that all genomic regions are important for the prediction of the targeted traits, and the whole-genome approach was reaffirmed as the best tool for genome-enabled prediction of quantitative traits.
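
    The partitioning rests on building one genomic relationship matrix (GRM) per annotation class and fitting each as a separate random effect. A minimal sketch using VanRaden's first GRM; the genotypes and the three annotation classes below are random stand-ins for classes such as intergenic, intronic, or synonymous SNPs.

```python
# One GRM per SNP class, each entering a mixed model as the
# covariance of one random effect (one variance component per class).
import numpy as np

rng = np.random.default_rng(5)
n, m = 200, 1000
geno = rng.integers(0, 3, size=(n, m)).astype(float)   # 0/1/2 genotypes
classes = rng.integers(0, 3, size=m)                   # toy annotation classes

def grm(M):
    p = M.mean(axis=0) / 2.0
    Z = M - 2.0 * p                                    # center by 2p
    return (Z @ Z.T) / (2.0 * p * (1.0 - p)).sum()     # VanRaden method 1

G_by_class = {c: grm(geno[:, classes == c]) for c in np.unique(classes)}
print({c: G.shape for c, G in G_by_class.items()})
```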

  3. Age-specific survival of male golden-cheeked warblers on the Fort Hood Military Reservation, Texas

    USGS Publications Warehouse

    Duarte, Adam; Hines, James E.; Nichols, James D.; Hatfield, Jeffrey S.; Weckerly, Floyd W.

    2014-01-01

    Population models are essential components of large-scale conservation and management plans for the federally endangered Golden-cheeked Warbler (Setophaga chrysoparia; hereafter GCWA). However, existing models are based on vital rate estimates calculated using relatively small data sets that are now more than a decade old. We estimated more current, precise adult and juvenile apparent survival (Φ) probabilities and their associated variances for male GCWAs. In addition to providing estimates for use in population modeling, we tested hypotheses about spatial and temporal variation in Φ. We assessed whether a linear trend in Φ or a change in the overall mean Φ corresponded to an observed increase in GCWA abundance during 1992-2000 and if Φ varied among study plots. To accomplish these objectives, we analyzed long-term GCWA capture-resight data from 1992 through 2011, collected across seven study plots on the Fort Hood Military Reservation using a Cormack-Jolly-Seber model structure within program MARK. We also estimated Φ process and sampling variances using a variance-components approach. Our results did not provide evidence of site-specific variation in adult Φ on the installation. Because of a lack of data, we could not assess whether juvenile Φ varied spatially. We did not detect a strong temporal association between GCWA abundance and Φ. Mean estimates of Φ for adult and juvenile male GCWAs for all years analyzed were 0.47 with a process variance of 0.0120 and a sampling variance of 0.0113 and 0.28 with a process variance of 0.0076 and a sampling variance of 0.0149, respectively. Although juvenile Φ did not differ greatly from previous estimates, our adult Φ estimate suggests previous GCWA population models were overly optimistic with respect to adult survival. These updated Φ probabilities and their associated variances will be incorporated into new population models to assist with GCWA conservation decision making.
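
    The separation of process from sampling variance can be illustrated with a moments-style calculation: the variance among annual estimates is approximately the process variance plus the average sampling variance. The paper uses a formal variance-components analysis via program MARK; the numbers below are invented.

```python
# Moments-style split of total variation in annual survival estimates
# into process variance and mean sampling variance.
import numpy as np

phi_hat = np.array([0.45, 0.52, 0.40, 0.49, 0.47, 0.51])  # annual estimates
se_hat = np.array([0.06, 0.05, 0.07, 0.06, 0.05, 0.06])   # their SEs

total_var = phi_hat.var(ddof=1)
sampling_var = np.mean(se_hat ** 2)
process_var = max(total_var - sampling_var, 0.0)           # truncate at zero
print(f"process variance ~ {process_var:.4f}, "
      f"mean sampling variance ~ {sampling_var:.4f}")
```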

  4. Genomic scan as a tool for assessing the genetic component of phenotypic variance in wild populations.

    PubMed

    Herrera, Carlos M

    2012-01-01

    Methods for estimating quantitative trait heritability in wild populations have been developed in recent years which take advantage of the increased availability of genetic markers to reconstruct pedigrees or estimate relatedness between individuals, but their application to real-world data is not exempt from difficulties. This chapter describes a recent marker-based technique which, by adopting a genomic scan approach and focusing on the relationship between phenotypes and genotypes at the individual level, avoids the problems inherent to marker-based estimators of relatedness. This method allows the quantification of the genetic component of phenotypic variance ("degree of genetic determination" or "heritability in the broad sense") in wild populations and is applicable whenever phenotypic trait values and multilocus data for a large number of genetic markers (e.g., amplified fragment length polymorphisms, AFLPs) are simultaneously available for a sample of individuals from the same population. The method proceeds by first identifying those markers whose variation across individuals is significantly correlated with individual phenotypic differences ("adaptive loci"). The proportion of phenotypic variance in the sample that is statistically accounted for by individual differences in adaptive loci is then estimated by fitting a linear model to the data, with trait value as the dependent variable and scores of adaptive loci as independent ones. The method can be easily extended to accommodate quantitative or qualitative information on biologically relevant features of the environment experienced by each sampled individual, in which case estimates of the environmental and genotype × environment components of phenotypic variance can also be obtained.
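
    The two regression steps of the method can be sketched directly. The marker matrix, screening threshold, and effect sizes below are hypothetical; note that an R² computed from loci selected on the same data is known to be optimistic, which this sketch does not correct for.

```python
# Two-step genomic-scan estimate of the genetic component of
# phenotypic variance: screen markers, then regress on the retained
# "adaptive loci" and read off R^2.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n, m = 150, 300
markers = rng.integers(0, 2, size=(n, m)).astype(float)   # 0/1 AFLP scores
trait = markers[:, :5] @ np.full(5, 0.8) + rng.normal(size=n)

# Step 1: per-marker association tests
pvals = np.array([stats.pearsonr(markers[:, j], trait)[1] for j in range(m)])
adaptive = markers[:, pvals < 0.001]        # threshold is illustrative

# Step 2: linear model, trait ~ adaptive loci; R^2 as the estimate
X = np.column_stack([np.ones(n), adaptive])
beta, *_ = np.linalg.lstsq(X, trait, rcond=None)
r2 = 1 - ((trait - X @ beta) ** 2).sum() / ((trait - trait.mean()) ** 2).sum()
print(f"estimated genetic component of phenotypic variance ~ {r2:.2f}")
```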

  5. Removing an intersubject variance component in a general linear model improves multiway factoring of event-related spectral perturbations in group EEG studies.

    PubMed

    Spence, Jeffrey S; Brier, Matthew R; Hart, John; Ferree, Thomas C

    2013-03-01

    Linear statistical models are used very effectively to assess task-related differences in EEG power spectral analyses. Mixed models, in particular, accommodate more than one variance component in a multisubject study, where many trials of each condition of interest are measured on each subject. Generally, intra- and intersubject variances are both important to determine correct standard errors for inference on functions of model parameters, but it is often assumed that intersubject variance is the most important consideration in a group study. In this article, we show that, under common assumptions, estimates of some functions of model parameters, including estimates of task-related differences, are properly tested relative to the intrasubject variance component only. A substantial gain in statistical power can arise from the proper separation of variance components when there is more than one source of variability. We first develop this result analytically, then show how it benefits a multiway factoring of spectral, spatial, and temporal components from EEG data acquired in a group of healthy subjects performing a well-studied response inhibition task. Copyright © 2011 Wiley Periodicals, Inc.
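
    A random-intercept mixed model makes the separation concrete: the subject intercept absorbs intersubject variance, so a within-subject condition contrast is tested against the trial-level (intrasubject) component. A sketch with hypothetical column names and simulated data using statsmodels; the paper's multiway spectral-spatial-temporal factoring is not reproduced.

```python
# Mixed model for a two-condition, multi-trial, multi-subject design.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
subjects, trials = 20, 40
rows = []
for s in range(subjects):
    subj_effect = rng.normal(0, 1.0)        # intersubject variability
    for cond in (0, 1):
        power = 5 + 0.3 * cond + subj_effect + rng.normal(0, 0.8, trials)
        rows += [{"subject": s, "condition": cond, "power": p} for p in power]
df = pd.DataFrame(rows)

model = smf.mixedlm("power ~ condition", df, groups=df["subject"])
fit = model.fit()
print(fit.summary())   # the condition effect is assessed against the
                       # residual (intrasubject) variance component
```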

  6. Smoothing of the bivariate LOD score for non-normal quantitative traits.

    PubMed

    Buil, Alfonso; Dyer, Thomas D; Almasy, Laura; Blangero, John

    2005-12-30

    Variance component analysis provides an efficient method for performing linkage analysis for quantitative traits. However, the type I error of variance components-based likelihood ratio testing may be affected when phenotypic data are non-normally distributed (especially with high values of kurtosis). This results in inflated LOD scores when the normality assumption does not hold. Even though different solutions have been proposed to deal with this problem for univariate phenotypes, little work has been done in the multivariate case. We present an empirical approach to adjust the inflated LOD scores obtained from a bivariate phenotype that violates the assumption of normality. Using the Collaborative Study on the Genetics of Alcoholism data available for the Genetic Analysis Workshop 14, we show how bivariate linkage analysis with leptokurtic traits gives an inflated type I error. We perform a novel correction that achieves acceptable levels of type I error.

  7. Genomic BLUP including additive and dominant variation in purebreds and F1 crossbreds, with an application in pigs.

    PubMed

    Vitezica, Zulma G; Varona, Luis; Elsen, Jean-Michel; Misztal, Ignacy; Herring, William; Legarra, Andrès

    2016-01-29

    Most developments in quantitative genetics theory focus on the study of intra-breed/line concepts. With the availability of massive genomic information, it becomes necessary to revisit the theory for crossbred populations. We propose methods to construct genomic covariances with additive and non-additive (dominance) inheritance in the case of pure lines and crossbred populations. We describe substitution effects and dominance deviations across two pure parental populations and the crossbred population. Gene effects are assumed to be independent of the origin of alleles and allelic frequencies can differ between parental populations. Based on these assumptions, the theoretical variance components (additive and dominance) are obtained as a function of marker effects and allelic frequencies. The additive genetic variance in the crossbred population includes the biological additive and dominance effects of a gene and a covariance term. Dominance variance in the crossbred population is proportional to the product of the heterozygosity coefficients of both parental populations. A genomic BLUP (best linear unbiased prediction) equivalent model is presented. We illustrate this approach by using pig data (two pure lines and their cross, including 8265 phenotyped and genotyped sows). For the total number of piglets born, the dominance variance in the crossbred population represented about 13 % of the total genetic variance. Dominance variation is only marginally important for litter size in the crossbred population. We present a coherent marker-based model that includes purebred and crossbred data and additive and dominance actions. Using this model, it is possible to estimate breeding values, dominance deviations and variance components in a dataset that comprises data on purebred and crossbred individuals. These methods can be exploited to plan assortative mating in pig, maize or other species, in order to generate superior crossbred individuals in terms of performance.
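
    The stated proportionality for crossbred dominance variance can be illustrated numerically: per locus, the contribution scales with the product of the two parental heterozygosities times the squared dominance effect. One common way to write this is shown below; treat it as an illustration of the proportionality under simplifying assumptions (independent loci, known frequencies), not as the paper's full derivation.

```python
# Per-locus crossbred dominance variance ~ (2*p1*q1) * (2*p2*q2) * d^2,
# summed over loci. Frequencies and effects here are simulated.
import numpy as np

rng = np.random.default_rng(8)
m = 500
p1 = rng.uniform(0.05, 0.95, m)      # allele frequencies, parental line 1
p2 = rng.uniform(0.05, 0.95, m)      # allele frequencies, parental line 2
d = rng.normal(0, 0.1, m)            # biological dominance effects

var_dom = np.sum((2 * p1 * (1 - p1)) * (2 * p2 * (1 - p2)) * d ** 2)
print(f"dominance variance in the F1 cross ~ {var_dom:.3f}")
```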

  8. Biochemical phenotypes to discriminate microbial subpopulations and improve outbreak detection.

    PubMed

    Galar, Alicia; Kulldorff, Martin; Rudnick, Wallis; O'Brien, Thomas F; Stelling, John

    2013-01-01

    Clinical microbiology laboratories worldwide constitute an invaluable resource for monitoring emerging threats and the spread of antimicrobial resistance. We studied the growing number of biochemical tests routinely performed on clinical isolates to explore their value as epidemiological markers. Microbiology laboratory results from January 2009 through December 2011 from a 793-bed hospital stored in WHONET were examined. Variables included patient location, collection date, organism, and 47 biochemical and 17 antimicrobial susceptibility test results reported by Vitek 2. To identify biochemical tests that were particularly valuable (stable with repeat testing, but good variability across the species) or problematic (inconsistent results with repeat testing), three types of variance analyses were performed on isolates of K. pneumoniae: descriptive analysis of discordant biochemical results in same-day isolates, an average within-patient variance index, and generalized linear mixed model variance component analysis. 4,200 isolates of K. pneumoniae were identified from 2,485 patients, 32% of whom had multiple isolates. The first two variance analyses highlighted SUCT, TyrA, GlyA, and GGT as "nuisance" biochemicals for which discordant within-patient test results impacted a high proportion of patient results, while dTAG had relatively good within-patient stability with good heterogeneity across the species. Variance component analyses confirmed the relative stability of dTAG, and identified additional biochemicals such as PHOS with a large between-patient to within-patient variance ratio. A reduced subset of biochemicals improved the robustness of strain definition for carbapenem-resistant K. pneumoniae. Surveillance analyses suggest that the reduced biochemical profile could improve the timeliness and specificity of outbreak detection algorithms. The statistical approaches explored can improve the robust recognition of microbial subpopulations with routinely available biochemical test results, of value in the timely detection of outbreak clones and evolutionarily important genetic events.

  9. Modified multiblock partial least squares path modeling algorithm with backpropagation neural networks approach

    NASA Astrophysics Data System (ADS)

    Yuniarto, Budi; Kurniawan, Robert

    2017-03-01

    PLS Path Modeling (PLS-PM) differs from covariance-based SEM in that it uses a variance- or component-based approach; hence PLS-PM is also known as component-based SEM. Multiblock Partial Least Squares (MBPLS) is a PLS regression method that can be used in PLS path modeling, where it is known as Multiblock PLS Path Modeling (MBPLS-PM). This method uses an iterative procedure in its algorithm. This research aims to modify MBPLS-PM with a backpropagation neural network approach. The result shows that the MBPLS-PM algorithm can be modified by using the backpropagation neural network approach to replace the iterative process in the backward and forward steps used to obtain the matrix t and the matrix u. With this modification, the model parameters obtained do not differ significantly from those obtained by the original MBPLS-PM algorithm.

  10. Measuring self-rated productivity: factor structure and variance component analysis of the Health and Work Questionnaire.

    PubMed

    von Thiele Schwarz, Ulrica; Sjöberg, Anders; Hasson, Henna; Tafvelin, Susanne

    2014-12-01

    To test the factor structure and variance components of the productivity subscales of the Health and Work Questionnaire (HWQ). A total of 272 individuals from one company answered the HWQ scale, including three dimensions (efficiency, quality, and quantity) that the respondent rated from three perspectives: their own, their supervisor's, and their coworkers'. A confirmatory factor analysis was performed, and common and unique variance components evaluated. A common factor explained 81% of the variance (reliability 0.95). All dimensions and rater perspectives contributed with unique variance. The final model provided a perfect fit to the data. Efficiency, quality, and quantity and three rater perspectives are valid parts of the self-rated productivity measurement model, but with a large common factor. Thus, the HWQ can be analyzed either as one factor or by extracting the unique variance for each subdimension.

  11. Decomposing the relation between Rapid Automatized Naming (RAN) and reading ability.

    PubMed

    Arnell, Karen M; Joanisse, Marc F; Klein, Raymond M; Busseri, Michael A; Tannock, Rosemary

    2009-09-01

    The Rapid Automatized Naming (RAN) test involves rapidly naming sequences of items presented in a visual array. RAN has generated considerable interest because RAN performance predicts reading achievement. This study sought to determine what elements of RAN are responsible for the shared variance between RAN and reading performance using a series of cognitive tasks and a latent variable modelling approach. Participants performed RAN measures, a test of reading speed and comprehension, and six tasks, which tapped various hypothesised components of the RAN. RAN shared 10% of the variance with reading comprehension and 17% with reading rate. Together, the decomposition tasks explained 52% and 39% of the variance shared between RAN and reading comprehension and between RAN and reading rate, respectively. Significant predictors suggested that working memory encoding underlies part of the relationship between RAN and reading ability.

  12. Statistical study of EBR-II fuel elements manufactured by the cold line at Argonne-West and by Atomics International

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harkness, A. L.

    1977-09-01

    Nine elements from each batch of fuel elements manufactured for the EBR-II reactor have been analyzed for ²³⁵U content by NDA methods. These values, together with those of the manufacturer, are used to estimate the product variance and the variances of the two measuring methods. These variances are compared with the variances computed from the stipulations of the contract. A method is derived for resolving the several variances into their within-batch and between-batch components. Some of these variance components have also been estimated by independent and more familiar conventional methods for comparison.

  13. [Relevance and validity of a new French composite index to measure poverty on a geographical level].

    PubMed

    Challier, B; Viel, J F

    2001-02-01

    A number of disease conditions are influenced by deprivation. Geographical measurement of deprivation can provide an independent contribution to individual measures by accounting for the social context. Such a geographical approach, based on deprivation indices, is classical in Great Britain but scarcely used in France. The objective of this work was to build and validate an index readily usable in French municipalities and cantons. Socioeconomic data (unemployment, occupations, housing specifications, income, etc.) were derived from the 1990 census of municipalities and cantons in the Doubs département. A new index was built by principal components analysis on the municipality data. The validity of the new index was checked and tested for correlations with British deprivation indices. Principal components analysis on municipality data identified four components (explaining 76% of the variance). Only the first component (CP1, explaining 42% of the variance) was retained. Content validity (wide choice of potential deprivation items, correlation between items and CP1: 0.52 to 0.96) and construct validity (CP1 socially relevant; Cronbach's alpha=0.91; correlation between CP1 and three out of four British indices ranging from 0.73 to 0.88) were sufficient. Analysis on canton data supported that on municipality data. As validation of the new index proved satisfactory, the user must choose between indices. The new index, CP1, is closer to the local background and was derived from data from a French département; it is therefore better adapted to more descriptive approaches such as health care planning. To examine the relationship between deprivation and health with a more etiological approach, the British indices (anteriority, international comparisons) would be more appropriate, but CP1, once validated in various health problem situations, should be most useful for French studies.
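
    Building such an index amounts to a PCA on standardized census variables followed by retention of the first component's scores. A minimal sketch with invented variable names; the real index was built from 1990 French census items.

```python
# PCA-based deprivation index: standardize items, keep the first
# principal component (CP1) as the index.
import numpy as np

rng = np.random.default_rng(9)
n = 120                                     # municipalities (toy)
deprivation = rng.normal(size=n)
X = np.column_stack([
    deprivation + rng.normal(0, 0.5, n),    # unemployment rate
    deprivation + rng.normal(0, 0.6, n),    # % unskilled occupations
    deprivation + rng.normal(0, 0.7, n),    # % overcrowded housing
    rng.normal(size=n),                     # unrelated item
])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)

U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
explained = s ** 2 / (s ** 2).sum()
cp1 = Xs @ Vt[0]                            # scores on the first component
print(f"CP1 explains {explained[0]:.0%} of the variance")
```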

  14. Modelling heterogeneity variances in multiple treatment comparison meta-analysis--are informative priors the better solution?

    PubMed

    Thorlund, Kristian; Thabane, Lehana; Mills, Edward J

    2013-01-11

    Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data-driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variance for all involved treatment comparisons is equal (i.e., the 'common variance' assumption). This approach 'borrows strength' for heterogeneity estimation across treatment comparisons, and thus adds valuable precision when data are sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently, 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. In this paper we describe four novel approaches to modeling heterogeneity variance: two novel model structures, and two approaches for use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. In both data sets, use of moderately informative variance priors constructed from the pairwise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. Lastly, treatment rankings were similar among the novel approaches, but considerably different when compared with the homogeneous variance approach. MTC models using a homogeneous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice.

  15. Source-space ICA for MEG source imaging.

    PubMed

    Jonmohamadi, Yaqub; Jones, Richard D

    2016-02-01

    One of the most widely used approaches in electroencephalography/magnetoencephalography (MEG) source imaging is application of an inverse technique (such as dipole modelling or sLORETA) on the components extracted by independent component analysis (ICA) (sensor-space ICA + inverse technique). The advantage of this approach over an inverse technique alone is that it can identify and localize multiple concurrent sources. Among inverse techniques, the minimum-variance beamformers offer a high spatial resolution. However, to obtain both the high spatial resolution of the beamformer and the ability to handle multiple concurrent sources, sensor-space ICA + beamformer is not an ideal combination. We propose source-space ICA for MEG as a powerful alternative approach which can provide the high spatial resolution of the beamformer and handle multiple concurrent sources. The concept of source-space ICA for MEG is to apply the beamformer first and then singular value decomposition + ICA. In this paper we have compared source-space ICA with sensor-space ICA both in simulation and in real MEG. The simulations included two challenging scenarios of correlated/concurrent cluster sources. Source-space ICA provided superior performance in spatial reconstruction of source maps, even though both techniques performed equally from a temporal perspective. Real MEG data from two healthy subjects with visual stimuli were also used to compare the performance of sensor-space ICA and source-space ICA. We have also proposed a new variant of the minimum-variance beamformer called weight-normalized linearly-constrained minimum-variance with orthonormal lead-field. As sensor-space ICA-based source reconstruction is popular in EEG and MEG imaging, and given that source-space ICA has superior spatial performance, it is expected that source-space ICA will supersede its predecessor in many applications.

  16. Variance Component Selection With Applications to Microbiome Taxonomic Data.

    PubMed

    Zhai, Jing; Kim, Juhyun; Knox, Kenneth S; Twigg, Homer L; Zhou, Hua; Zhou, Jin J

    2018-01-01

    High-throughput sequencing technology has enabled population-based studies of the role of the human microbiome in disease etiology and exposure response. Microbiome data are summarized as counts or composition of the bacterial taxa at different taxonomic levels. An important problem is to identify the bacterial taxa that are associated with a response. One method is to test the association of a specific taxon with phenotypes in a linear mixed effect model, which incorporates phylogenetic information among bacterial communities. Another type of approach considers all taxa in a joint model and achieves selection via a penalization method, which ignores phylogenetic information. In this paper, we consider regression analysis by treating bacterial taxa at different levels as multiple random effects. For each taxon, a kernel matrix is calculated based on distance measures in the phylogenetic tree and acts as one variance component in the joint model. Taxonomic selection is then achieved by the lasso (least absolute shrinkage and selection operator) penalty on the variance components. Our method integrates biological information into the variable selection problem and greatly improves selection accuracy. Simulation studies demonstrate the superiority of our method over existing methods such as the group lasso. Finally, we apply our method to a longitudinal microbiome study of Human Immunodeficiency Virus (HIV) infected patients. We implement our method using the high-performance computing language Julia. Software and detailed documentation are freely available at https://github.com/JingZhai63/VCselection.

  17. Trait and State Variance in Oppositional Defiant Disorder Symptoms: A Multi-Source Investigation with Spanish Children

    PubMed Central

    Preszler, Jonathan; Burns, G. Leonard; Litson, Kaylee; Geiser, Christian; Servera, Mateu

    2016-01-01

    The objective was to determine and compare the trait and state components of oppositional defiant disorder (ODD) symptom reports across multiple informants. Mothers, fathers, primary teachers, and secondary teachers rated the occurrence of the ODD symptoms in 810 Spanish children (55% boys) on two occasions (end first and second grades). Single source latent state-trait (LST) analyses revealed that ODD symptom ratings from all four sources showed more trait (M = 63%) than state residual (M = 37%) variance. A multiple source LST analysis revealed substantial convergent validity of mothers’ and fathers’ trait variance components (M = 68%) and modest convergent validity of state residual variance components (M = 35%). In contrast, primary and secondary teachers showed low convergent validity relative to mothers for trait variance (Ms = 31%, 32%, respectively) and essentially zero convergent validity relative to mothers for state residual variance (Ms = 1%, 3%, respectively). Although ODD symptom ratings reflected slightly more trait- than state-like constructs within each of the four sources separately across occasions, strong convergent validity for the trait variance only occurred within settings (i.e., mothers with fathers; primary with secondary teachers) with the convergent validity of the trait and state residual variance components being low to non-existent across settings. These results suggest that ODD symptom reports are trait-like across time for individual sources with this trait variance, however, only having convergent validity within settings. Implications for assessment of ODD are discussed. PMID:27148784

  18. Modelling temporal variance of component temperatures and directional anisotropy over vegetated canopy

    NASA Astrophysics Data System (ADS)

    Bian, Zunjian; du, yongming; li, hua

    2016-04-01

    Land surface temperature (LST) is a key variable that plays an important role in hydrological, meteorological and climatological studies. Thermal infrared directional anisotropy is one of the essential factors in LST retrieval and in its application to longwave radiance estimation. Many approaches have been proposed to estimate directional brightness temperatures (DBT) over natural and urban surfaces, but fewer efforts have focused on 3-D scenes, and the surface component temperatures used in DBT models are quite difficult to acquire. Therefore, a model combining TRGM (Thermal-region Radiosity-Graphics combined Model) with an energy balance method is proposed in this paper, in an attempt to simulate component temperatures and DBT simultaneously in a row-planted canopy. The surface thermodynamic equilibrium is finally determined by iterating between TRGM and the energy balance method. The combined model was validated against top-of-canopy DBTs from airborne observations. The results indicate that the proposed model performs well in simulating directional anisotropy, especially the hotspot effect. Although the model overestimates the DBT with a bias of 1.2 K, it can serve as a data reference for studying the temporal variance of component temperatures and DBTs when field measurements are inaccessible.

  19. A constrained multinomial Probit route choice model in the metro network: Formulation, estimation and application

    PubMed Central

    Zhang, Yongsheng; Wei, Heng; Zheng, Kangning

    2017-01-01

    Considering that metro network expansion provides more alternative routes, it is attractive to integrate the impacts of the route set and of the interdependency among alternative routes on route choice probability into route choice modeling. Therefore, the formulation, estimation and application of a constrained multinomial probit (CMNP) route choice model in the metro network are carried out in this paper. The utility function is formulated with three components: the compensatory component is a function of influencing factors; the non-compensatory component measures the impacts of the route set on utility; and, following a multivariate normal distribution, the covariance of the error component is structured into three parts, representing the correlation among routes, the transfer variance of a route, and the unobserved variance, respectively. Because the multivariate normal probability density function involves multidimensional integrals, the CMNP model is rewritten in a hierarchical Bayes formulation, and a Metropolis-Hastings sampling based Markov chain Monte Carlo approach is constructed to estimate all parameters. Based on Guangzhou Metro data, reliable estimation results are obtained. Furthermore, the proposed CMNP model also shows good forecasting performance for the calculation of route choice probabilities and good application performance for transfer flow volume prediction. PMID:28591188

  20. Measurement System Analyses - Gauge Repeatability and Reproducibility Methods

    NASA Astrophysics Data System (ADS)

    Cepova, Lenka; Kovacikova, Andrea; Cep, Robert; Klaput, Pavel; Mizera, Ondrej

    2018-02-01

    The submitted article focuses on a detailed explanation of the average and range method (Automotive Industry Action Group, Measurement System Analysis approach) and of the honest Gauge Repeatability and Reproducibility method (Evaluating the Measurement Process approach). The measured data (thickness of plastic parts) were evaluated by both methods and their results were compared by numerical evaluation. The two methods were also compared and their advantages and disadvantages discussed. One difference between the methods is the calculation of variation components. The AIAG method calculates the variation components from standard deviations (so the sum of the variation components does not give 100 %), whereas the honest GRR study calculates the variation components from variances, where the sum of all variation components (part-to-part variation, EV & AV) gives the total variation of 100 %. Acceptance of both methods by the professional community, their future use, and their acceptance by the manufacturing industry are also discussed. Nowadays, the AIAG approach is the leading method in the industry.
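
    The variance-based accounting used by the honest GRR study can be reproduced from the ANOVA mean squares of a balanced parts-by-operators study; on the variance scale the percent contributions sum to 100 %. A sketch with simulated measurements; negative component estimates are truncated at zero.

```python
# Gauge R&R variance components from a balanced p x o x n study.
import numpy as np

rng = np.random.default_rng(10)
p, o, n = 10, 3, 3
part = rng.normal(0, 1.0, size=(p, 1, 1))       # part-to-part variation
oper = rng.normal(0, 0.3, size=(1, o, 1))       # operator (reproducibility)
y = 20 + part + oper + rng.normal(0, 0.2, size=(p, o, n))  # + repeatability

gm = y.mean()
ybar_p = y.mean(axis=(1, 2))
ybar_o = y.mean(axis=(0, 2))
cell = y.mean(axis=2)
ms_p = o * n * ((ybar_p - gm) ** 2).sum() / (p - 1)
ms_o = p * n * ((ybar_o - gm) ** 2).sum() / (o - 1)
ms_po = n * ((cell - ybar_p[:, None] - ybar_o[None, :] + gm) ** 2).sum() \
        / ((p - 1) * (o - 1))
ms_e = ((y - cell[:, :, None]) ** 2).sum() / (p * o * (n - 1))

var_e = ms_e                                    # repeatability (EV)
var_po = max((ms_po - ms_e) / n, 0.0)           # interaction part of AV
var_o = max((ms_o - ms_po) / (p * n), 0.0)      # operator part of AV
var_part = max((ms_p - ms_po) / (o * n), 0.0)   # part-to-part
total = var_e + var_po + var_o + var_part
for name, v in [("EV", var_e), ("AV", var_o + var_po),
                ("part-to-part", var_part)]:
    print(f"{name}: {100 * v / total:.1f}% of total variance")
```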

  1. Portfolio optimization using median-variance approach

    NASA Astrophysics Data System (ADS)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of these approaches assume that the data are normally distributed, which is generally not true. As an alternative, in this paper, we employ the median-variance approach to improve portfolio optimization. This approach caters for both normal and non-normal data distributions. With this representation, we analyze and compare the rate of return and risk between mean-variance and median-variance based portfolios consisting of 30 stocks from Bursa Malaysia. The results in this study show that the median-variance approach produces a lower risk for each level of return than the mean-variance approach.
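
    A median-variance program can be set up like Markowitz's, with the mean-return constraint replaced by one on the portfolio's median historical return. The sketch below uses simulated heavy-tailed returns in place of the 30 Bursa Malaysia stocks; the median constraint is non-smooth, so a general-purpose solver such as SLSQP is only a heuristic here.

```python
# Minimize portfolio variance subject to a target on the *median*
# return, long-only and fully invested.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)
T, k = 250, 30
R = rng.standard_t(df=4, size=(T, k)) * 0.01 + 0.0005   # heavy-tailed returns
cov = np.cov(R, rowvar=False)
target = 0.0004

def port_var(w):
    return w @ cov @ w

cons = [
    {"type": "eq", "fun": lambda w: w.sum() - 1.0},
    {"type": "ineq", "fun": lambda w: np.median(R @ w) - target},
]
w0 = np.full(k, 1.0 / k)
res = minimize(port_var, w0, bounds=[(0, 1)] * k, constraints=cons,
               method="SLSQP")
print(f"risk (stdev) = {np.sqrt(res.fun):.4f}, "
      f"median return = {np.median(R @ res.x):.5f}")
```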

  2. Corrected confidence bands for functional data using principal components.

    PubMed

    Goldsmith, J; Greven, S; Crainiceanu, C

    2013-03-01

    Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. Copyright © 2013, The International Biometric Society.

  4. A Multisource Approach to Assessing Child Maltreatment From Records, Caregivers, and Children.

    PubMed

    Sierau, Susan; Brand, Tilman; Manly, Jody Todd; Schlesier-Michel, Andrea; Klein, Annette M; Andreas, Anna; Garzón, Leonhard Quintero; Keil, Jan; Binser, Martin J; von Klitzing, Kai; White, Lars O

    2017-02-01

    Practitioners and researchers alike face the challenge that different sources report inconsistent information regarding child maltreatment. The present study capitalizes on concordance and discordance between different sources and probes applicability of a multisource approach to data from three perspectives on maltreatment-Child Protection Services (CPS) records, caregivers, and children. The sample comprised 686 participants in early childhood (3- to 8-year-olds; n = 275) or late childhood/adolescence (9- to 16-year-olds; n = 411), 161 from two CPS sites and 525 from the community oversampled for psychosocial risk. We established three components within a factor-analytic approach: the shared variance between sources on presence of maltreatment (convergence), nonshared variance resulting from the child's own perspective, and the caregiver versus CPS perspective. The shared variance between sources was the strongest predictor of caregiver- and self-reported child symptoms. Child perspective and caregiver versus CPS perspective mainly added predictive strength of symptoms in late childhood/adolescence over and above convergence in the case of emotional maltreatment, lack of supervision, and physical abuse. By contrast, convergence almost fully accounted for child symptoms for failure to provide. Our results suggest consistent information from different sources reporting on maltreatment is, on average, the best indicator of child risk.

  5. Variance components estimation for continuous and discrete data, with emphasis on cross-classified sampling designs

    USGS Publications Warehouse

    Gray, Brian R.; Gitzen, Robert A.; Millspaugh, Joshua J.; Cooper, Andrew B.; Licht, Daniel S.

    2012-01-01

    Variance components may play multiple roles (cf. Cox and Solomon 2003). First, magnitudes and relative magnitudes of the variances of random factors may have important scientific and management value in their own right. For example, variation in levels of invasive vegetation among and within lakes may suggest causal agents that operate at both spatial scales – a finding that may be important for scientific and management reasons. Second, variance components may also be of interest when they affect precision of means and covariate coefficients. For example, the effect of water depth on the probability of aquatic plant presence in a study of multiple lakes may vary by lake. This variation will affect the precision of the average depth-presence association. Third, variance component estimates may be used when designing studies, including monitoring programs. For example, to estimate the numbers of years and of samples per year required to meet long-term monitoring goals, investigators need estimates of within- and among-year variances. Other chapters in this volume (Chapters 7, 8, and 10) as well as extensive external literature outline a framework for applying estimates of variance components to the design of monitoring efforts. For example, a series of papers with an ecological monitoring theme examined the relative importance of multiple sources of variation, including variation in means among sites, years, and site-years, for the purposes of temporal trend detection and estimation (Larsen et al. 2004, and references therein).
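
    The third role, design, often reduces to plugging variance-component estimates into the variance of a design's estimator. A back-of-envelope sketch for the variance of an annual mean, with invented component values.

```python
# How the precision of an annual status estimate responds to more
# sites and more within-site replicates, given variance components.
var_year = 0.04        # among-year (process) variance
var_site_year = 0.10   # site-by-year interaction
var_resid = 0.25       # within-site residual (e.g., among plots)

for n_sites in (5, 10, 20, 40):
    for n_reps in (1, 3):
        v = var_year + var_site_year / n_sites + var_resid / (n_sites * n_reps)
        print(f"{n_sites:3d} sites x {n_reps} reps: Var(annual mean) = {v:.4f}")
# var_year does not shrink with more sites, so beyond some point only
# additional years of monitoring improve trend detection.
```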

  6. The dependability of medical students' performance ratings as documented on in-training evaluations.

    PubMed

    van Barneveld, Christina

    2005-03-01

    To demonstrate an approach to obtain an unbiased estimate of the dependability of students' performance ratings during training, when the data-collection design includes nesting of student in rater, unbalanced nest sizes, and dependent observations. In 2003, two variance components analyses of in-training evaluation (ITE) report data were conducted using urGENOVA software. In the first analysis, the dependability for the nested and unbalanced data-collection design was calculated. In the second analysis, an approach using multiple generalizability studies was used to obtain an unbiased estimate of the student variance component, resulting in an unbiased estimate of dependability. Results suggested that there is bias in estimates of the dependability of students' performance on ITEs that are attributable to the data-collection design. When the bias was corrected, the results indicated that the dependability of ratings of student performance was almost zero. The combination of the multiple generalizability studies method and the use of specialized software provides an unbiased estimate of the dependability of ratings of student performance on ITE scores for data-collection designs that include nesting of student in rater, unbalanced nest sizes, and dependent observations.

  7. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
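
    Under the one-factor random-effects model the note assumes, the point estimates follow directly from the ANOVA mean squares. A minimal sketch with simulated shifts; the confidence intervals and the multifactor extension discussed in the note are not shown.

```python
# ANOVA-based estimates of systematic (between-patient) and random
# (within-patient) setup error components.
import numpy as np

rng = np.random.default_rng(12)
patients, fractions = 30, 5
mu, sigma_sys, sigma_rand = 1.0, 1.5, 2.0      # mm (toy values)
shift = mu + rng.normal(0, sigma_sys, size=(patients, 1)) \
        + rng.normal(0, sigma_rand, size=(patients, fractions))

ms_between = fractions * ((shift.mean(axis=1) - shift.mean()) ** 2).sum() \
             / (patients - 1)
ms_within = ((shift - shift.mean(axis=1, keepdims=True)) ** 2).sum() \
            / (patients * (fractions - 1))
sigma_sys_hat = np.sqrt(max((ms_between - ms_within) / fractions, 0.0))
sigma_rand_hat = np.sqrt(ms_within)
print(f"Sigma (systematic) ~ {sigma_sys_hat:.2f} mm, "
      f"sigma (random) ~ {sigma_rand_hat:.2f} mm")
```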

  8. A Versatile Omnibus Test for Detecting Mean and Variance Heterogeneity

    PubMed Central

    Bailey, Matthew; Kauwe, John S. K.; Maxwell, Taylor J.

    2014-01-01

    Recent research has revealed loci that display variance heterogeneity through various means such as biological disruption, linkage disequilibrium (LD), gene-by-gene (GxG), or gene-by-environment (GxE) interaction. We propose a versatile likelihood ratio test that allows joint testing for mean and variance heterogeneity (LRT_MV) or either effect alone (LRT_M or LRT_V) in the presence of covariates. Using extensive simulations for our method and others, we found that all parametric tests were sensitive to non-normality regardless of any trait transformations. Coupling our test with the parametric bootstrap solves this issue. Using simulations and empirical data from a known mean-only functional variant, we demonstrate how linkage disequilibrium (LD) can produce variance-heterogeneity loci (vQTL) in a predictable fashion based on differential allele frequencies, high D′ and relatively low r² values. We propose that a joint test for mean and variance heterogeneity is more powerful than a variance-only test for detecting vQTL. This takes advantage of loci that also have mean effects without sacrificing much power to detect variance-only effects. We discuss using vQTL as an approach to detect gene-by-gene interactions and also how vQTL are related to relationship loci (rQTL) and how both can create prior hypotheses for each other and reveal the relationships between traits and possibly between components of a composite trait. PMID:24482837
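
    The joint test can be sketched for the simple case of a normal trait, genotype groups, and no covariates: each group gets its own mean and variance in the full model, and the LRT has 2(k−1) degrees of freedom for k groups. The parametric-bootstrap correction the authors recommend for non-normal traits is omitted here.

```python
# Minimal LRT_MV-style test for joint mean and variance heterogeneity.
import numpy as np
from scipy.stats import chi2, norm

rng = np.random.default_rng(13)
geno = rng.integers(0, 3, size=600)                  # 0/1/2 allele copies
trait = rng.normal(0.2 * geno, 1.0 + 0.3 * geno)     # mean and variance effects

def loglik(y, mu, sd):
    return norm.logpdf(y, mu, sd).sum()

ll_full = sum(loglik(trait[geno == g], trait[geno == g].mean(),
                     trait[geno == g].std()) for g in np.unique(geno))
ll_null = loglik(trait, trait.mean(), trait.std())
lrt = 2 * (ll_full - ll_null)
df = 2 * (len(np.unique(geno)) - 1)                  # 2*(k-1) for k groups
print(f"LRT = {lrt:.1f}, p = {chi2.sf(lrt, df):.2e}")
```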

  10. Linkage disequilibrium and association mapping.

    PubMed

    Weir, B S

    2008-01-01

    Linkage disequilibrium refers to the association between alleles at different loci. The standard definition applies to two alleles in the same gamete, and it can be regarded as the covariance of indicator variables for the states of those two alleles. The corresponding correlation coefficient ρ is the parameter that arises naturally in discussions of tests of association between markers and genetic diseases. A general treatment of association tests makes use of the additive and nonadditive components of variance for the disease gene. In almost all expressions that describe the behavior of association tests, additive variance components are modified by the squared correlation coefficient ρ² and the nonadditive variance components by ρ⁴, suggesting that nonadditive components have less influence than additive components on association tests.
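
    The quantities involved are quick to compute from gamete frequencies. A sketch with made-up haplotype frequencies, showing D, D′, the squared correlation r² that scales additive components, and the r⁴ factor that scales nonadditive ones.

```python
# LD bookkeeping from the four gamete (haplotype) frequencies.
# Frequencies for alleles (A,a) x (B,b); they must sum to 1.
p_AB, p_Ab, p_aB, p_ab = 0.45, 0.05, 0.05, 0.45
p_A, p_B = p_AB + p_Ab, p_AB + p_aB

D = p_AB - p_A * p_B
Dmax = min(p_A * (1 - p_B), (1 - p_A) * p_B) if D > 0 else \
       min(p_A * p_B, (1 - p_A) * (1 - p_B))
D_prime = D / Dmax
r2 = D ** 2 / (p_A * (1 - p_A) * p_B * (1 - p_B))
print(f"D = {D:.3f}, D' = {D_prime:.3f}, r^2 = {r2:.3f}, r^4 = {r2**2:.3f}")
```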

  11. The impact of case specificity and generalisable skills on clinical performance: a correlated traits-correlated methods approach.

    PubMed

    Wimmers, Paul F; Fung, Cha-Chi

    2008-06-01

    The finding of case or content specificity in medical problem solving moved the focus of research away from generalisable skills towards the importance of content knowledge. However, controversy about the content dependency of clinical performance and the generalisability of skills remains. This study aimed to explore the relative impact of both perspectives (case specificity and generalisable skills) on different components (history taking, physical examination, communication) of clinical performance within and across cases. Data from a clinical performance examination (CPX) taken by 350 Year 3 students were used in a correlated traits-correlated methods (CTCM) approach using confirmatory factor analysis, whereby 'traits' refers to generalisable skills and 'methods' to individual cases. The baseline CTCM model was analysed and compared with four nested models using structural equation modelling techniques. The CPX consisted of three skills components and five cases. Comparison of the four different models with the least-restricted baseline CTCM model revealed that a model with uncorrelated generalisable skills factors and correlated case-specific knowledge factors represented the data best. The generalisable processes found in history taking, physical examination and communication were responsible for half the explained variance, in comparison with the variance related to case specificity. Pure knowledge-based and pure skill-based perspectives on clinical performance both seem too one-dimensional, and the new evidence supports the idea that a substantial amount of variance is attributable to both aspects of performance. It can be concluded that generalisable skills and specialised knowledge go hand in hand: both are essential aspects of clinical performance.

  12. High-Dimensional Heteroscedastic Regression with an Application to eQTL Data Analysis

    PubMed Central

    Daye, Z. John; Chen, Jinbo; Li, Hongzhe

    2011-01-01

    Summary We consider the problem of high-dimensional regression under non-constant error variances. Despite being a common phenomenon in biological applications, heteroscedasticity has, so far, been largely ignored in high-dimensional analysis of genomic data sets. We propose a new methodology that allows non-constant error variances for high-dimensional estimation and model selection. Our method incorporates heteroscedasticity by simultaneously modeling both the mean and variance components via a novel doubly regularized approach. Extensive Monte Carlo simulations indicate that our proposed procedure can result in better estimation and variable selection than existing methods when heteroscedasticity arises from the presence of predictors explaining error variances and outliers. Further, we demonstrate the presence of heteroscedasticity in and apply our method to an expression quantitative trait loci (eQTL) study of 112 yeast segregants. The new procedure can automatically account for heteroscedasticity in identifying the eQTLs that are associated with gene expression variations and lead to smaller prediction errors. These results demonstrate the importance of considering heteroscedasticity in eQTL data analysis. PMID:22547833
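
    The following is a rough alternating sketch in the spirit of jointly regularizing mean and variance models; the authors' actual doubly regularized estimator differs in its details, and the tuning constants here are placeholders.

        import numpy as np
        from sklearn.linear_model import Lasso

        def doubly_regularized(X, y, lam_mean=0.05, lam_var=0.05, n_iter=5):
            # X and y are assumed standardized, so intercepts are omitted.
            beta = Lasso(alpha=lam_mean, fit_intercept=False).fit(X, y)
            gamma = None
            for _ in range(n_iter):
                resid = y - beta.predict(X)
                # Penalized fit of the log error variance on the predictors.
                gamma = Lasso(alpha=lam_var, fit_intercept=False).fit(
                    X, np.log(resid ** 2 + 1e-8))
                sigma = np.exp(0.5 * gamma.predict(X))
                w = 1.0 / sigma                      # downweight noisy rows
                # Weighted lasso for the mean via row rescaling.
                beta = Lasso(alpha=lam_mean, fit_intercept=False).fit(
                    X * w[:, None], y * w)
            return beta, gamma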

  13. Distribution of a low dose compound within pharmaceutical tablet by using multivariate curve resolution on Raman hyperspectral images.

    PubMed

    Boiret, Mathieu; de Juan, Anna; Gorretta, Nathalie; Ginot, Yves-Michel; Roger, Jean-Michel

    2015-01-25

    In this work, Raman hyperspectral images and multivariate curve resolution-alternating least squares (MCR-ALS) are used to study the distribution of actives and excipients within a pharmaceutical drug product. This article is mainly focused on the distribution of a low dose constituent. Different approaches are compared, using initially filtered or non-filtered data, or using a column-wise augmented dataset before starting the MCR-ALS iterative process including appended information on the low dose component. In the studied formulation, magnesium stearate is used as a lubricant to improve powder flowability. With a theoretical concentration of 0.5% (w/w) in the drug product, the spectral variance contained in the data is weak. By using a principal component analysis (PCA) filtered dataset as a first step of the MCR-ALS approach, the lubricant information is lost in the non-explained variance and its associated distribution in the tablet cannot be highlighted. A sufficient number of components to generate the PCA noise-filtered matrix has to be used in order to keep the lubricant variability within the data set analyzed or, otherwise, work with the raw non-filtered data. Different models are built using an increasing number of components to perform the PCA reduction. It is shown that the magnesium stearate information can be extracted from a PCA model using a minimum of 20 components. In the last part, a column-wise augmented matrix, including a reference spectrum of the lubricant, is used before starting the MCR-ALS process. PCA reduction is performed on the augmented matrix, so the magnesium stearate contribution is included within the MCR-ALS calculations. By using an appropriate PCA reduction, with a sufficient number of components, or by using an augmented dataset including appended information on the low dose component, the distributions of the two actives, the two main excipients and the low dose lubricant are correctly recovered. Copyright © 2014 Elsevier B.V. All rights reserved.
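
    The point about PCA noise-filtering can be reproduced on synthetic data: a trace constituent contributes so little spectral variance that a low-rank reconstruction discards it unless enough components are retained. The mixture below is invented for illustration, not the paper's formulation.

        import numpy as np

        def pca_filter(X, n_components):
            # Reconstruct X from its first n_components principal components.
            mu = X.mean(axis=0)
            U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
            return U[:, :n_components] * s[:n_components] @ Vt[:n_components] + mu

        rng = np.random.default_rng(0)
        pure = rng.random((5, 200))                       # 5 pure-component spectra
        conc = rng.dirichlet([20, 20, 20, 20, 0.1], 500)  # last one at trace level
        X = conc @ pure + 1e-3 * rng.standard_normal((500, 200))
        for k in (3, 4, 5, 20):
            resid = X - pca_filter(X, k)
            print(k, np.sqrt((resid ** 2).mean()))        # variance left unmodeled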

  14. Cross-frequency and band-averaged response variance prediction in the hybrid deterministic-statistical energy analysis method

    NASA Astrophysics Data System (ADS)

    Reynders, Edwin P. B.; Langley, Robin S.

    2018-08-01

    The hybrid deterministic-statistical energy analysis method has proven to be a versatile framework for modeling built-up vibro-acoustic systems. The stiff system components are modeled deterministically, e.g., using the finite element method, while the wave fields in the flexible components are modeled as diffuse. In the present paper, the hybrid method is extended such that the ensemble mean and variance can be computed not only for the harmonic system response but also for the band-averaged system response. This variance represents the uncertainty that is due to the assumption of a diffuse field in the flexible components of the hybrid system. The developments start with a cross-frequency generalization of the reciprocity relationship between the total energy in a diffuse field and the cross spectrum of the blocked reverberant loading at the boundaries of that field. By making extensive use of this generalization in a first-order perturbation analysis, explicit expressions are derived for the cross-frequency and band-averaged variance of the vibrational energies in the diffuse components and for the cross-frequency and band-averaged variance of the cross spectrum of the vibro-acoustic field response of the deterministic components. These expressions are extensively validated against detailed Monte Carlo analyses of coupled plate systems in which diffuse fields are simulated by randomly distributing small point masses across the flexible components, and good agreement is found.

  15. Enhancing target variance in personality impressions: highlighting the person in person perception.

    PubMed

    Paulhus, D L; Reynolds, S

    1995-12-01

    D. A. Kenny (1994) estimated the components of personality rating variance to be 15, 20, and 20% for target, rater, and relationship, respectively. To enhance trait variance and minimize rater variance, we designed a series of studies of personality perception in discussion groups (N = 79, 58, and 59). After completing a Big Five questionnaire, participants met 7 times in small groups. After Meetings 1 and 7, group members rated each other. By applying the Social Relations Model (D. A. Kenny and L. La Voie, 1984) to each Big Five dimension at each point in time, we were able to evaluate 6 rating effects as well as rating validity. Among the findings were that (a) target variance was the largest component (almost 30%), whereas rater variance was small (less than 11%); (b) rating validity improved significantly with acquaintance, although target variance did not; and (c) no reciprocity was found, but projection was significant for Agreeableness.

  16. It Depends on the Partner: Person-Related Sources of Efficacy Beliefs and Performance for Athlete Pairs.

    PubMed

    Habeeb, Christine M; Eklund, Robert C; Coffee, Pete

    2017-06-01

    This study explored person-related sources of variance in athletes' efficacy beliefs and performances when performing in pairs with distinguishable roles differing in partner dependence. College cheerleaders (n = 102) performed their role in repeated performance trials of two low- and two high-difficulty paired-stunt tasks with three different partners. Data were obtained on self-, other-, and collective efficacy beliefs and subjective performances, and objective performance assessments were obtained from digital recordings. Using the social relations model framework, total variance in each belief/assessment was partitioned, for each role, into numerical components of person-related variance relative to the self, the other, and the collective. Variance component by performance role by task-difficulty repeated-measures analyses of variance revealed that the largest person-related variance component differed by athlete role and increased in size in high-difficulty tasks. Results suggest that the extent to which the athlete's performance depends on a partner relates to the extent to which the partner is a source of self-, other-, and collective efficacy beliefs.

  17. Discrimination of various paper types using diffuse reflectance ultraviolet-visible near-infrared (UV-Vis-NIR) spectroscopy: forensic application to questioned documents.

    PubMed

    Kumar, Raj; Kumar, Vinay; Sharma, Vishal

    2015-06-01

    Diffuse reflectance ultraviolet-visible-near-infrared (UV-Vis-NIR) spectroscopy is applied as a means of differentiating various types of writing, office, and photocopy papers (collected from stationery shops in India) on the basis of reflectance and absorbance spectra that otherwise seem to be almost alike in different illumination conditions. In order to minimize bias, spectra from both sides of paper were obtained. In addition, three spectra from three different locations (from one side) were recorded covering the upper, middle, and bottom portions of the paper sample, and the mean average reflectivity of both the sides was calculated. A significant difference was observed in mean average reflectivity of Side A and Side B of the paper using Student's paired t-test. Three different approaches were used for discrimination: (1) qualitative features of the whole set of samples, (2) principal component analysis, and (3) a combination of both approaches. On the basis of the first approach, i.e., qualitative features, 96.49% discriminating power (DP) was observed, which shows highly significant results with the UV-Vis-NIR technique. In the second approach the discriminating power is further enhanced by incorporating the principal component analysis (PCA) statistical method, where this method describes each UV-Vis spectrum in a group through numerical loading values connected to the first few principal components. All components described 100% variance of the samples, but only the first three PCs are good enough to explain the variance (PC1 = 51.64%, PC2 = 47.52%, and PC3 = 0.54%) of the samples; i.e., the first three PCs described 99.70% of the data. In the third approach, the four samples (C, G, K, and N) out of the total of 19 that were not differentiated using qualitative features (approach no. 1) were subjected to PCA. The first two PCs described 99.37% of the spectral features. The discrimination was achieved by using a loading plot between PC1 and PC2. It is therefore concluded that maximum discrimination of writing, office, and photocopy paper could be achieved on the basis of the second approach. Hence, the present inexpensive analytical method can be appropriate for application to routine questioned document examination work in forensic laboratories because it provides nondestructive, quantitative, reliable, and repeatable results.
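
    Discriminating power is conventionally reported as the number of discriminated pairs over the number of possible pairs. A toy sketch with PC scores and a distance rule follows; the threshold criterion is a hypothetical stand-in for the paper's comparison criteria.

        import numpy as np
        from itertools import combinations

        def discriminating_power(scores, threshold):
            pairs = list(combinations(range(len(scores)), 2))
            hits = sum(np.linalg.norm(scores[i] - scores[j]) > threshold
                       for i, j in pairs)
            return hits / len(pairs)

        rng = np.random.default_rng(1)
        pc_scores = rng.standard_normal((19, 2))   # e.g. PC1/PC2 for 19 papers
        dp = discriminating_power(pc_scores, threshold=0.5)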

  18. Mapping carcass and meat quality QTL on Sus Scrofa chromosome 2 in commercial finishing pigs

    PubMed Central

    Heuven, Henri CM; van Wijk, Rik HJ; Dibbits, Bert; van Kampen, Tony A; Knol, Egbert F; Bovenhuis, Henk

    2009-01-01

    Quantitative trait loci (QTL) affecting carcass and meat quality located on SSC2 were identified using variance component methods. A large number of traits involved in meat and carcass quality was measured in a commercial crossbred population: 1855 pigs sired by 17 boars from a synthetic line, which were homozygous (A/A) for IGF2. Using combined linkage and linkage disequilibrium mapping (LDLA), several QTL significantly affecting loin muscle mass, ham weight and ham muscles (outer ham and knuckle ham) and meat quality traits, such as Minolta-L* and -b*, ultimate pH and Japanese colour score were detected. These results agreed well with previous QTL-studies involving SSC2. Since our study is carried out on crossbreds, different QTL may be segregating in the parental lines. To address this question, we compared models with a single QTL-variance component with models allowing for separate sire and dam QTL-variance components. The same QTL were identified using a single QTL variance component model compared to a model allowing for separate variances with minor differences with respect to QTL location. However, the variance component method made it possible to detect QTL segregating in the paternal line (e.g. HAMB), the maternal lines (e.g. Ham) or in both (e.g. pHu). Combining association and linkage information among haplotypes slightly improved the significance of the QTL compared to an analysis using linkage information only. PMID:19284675

  19. Biochemical Phenotypes to Discriminate Microbial Subpopulations and Improve Outbreak Detection

    PubMed Central

    Galar, Alicia; Kulldorff, Martin; Rudnick, Wallis; O'Brien, Thomas F.; Stelling, John

    2013-01-01

    Background Clinical microbiology laboratories worldwide constitute an invaluable resource for monitoring emerging threats and the spread of antimicrobial resistance. We studied the growing number of biochemical tests routinely performed on clinical isolates to explore their value as epidemiological markers. Methodology/Principal Findings Microbiology laboratory results from January 2009 through December 2011 from a 793-bed hospital stored in WHONET were examined. Variables included patient location, collection date, organism, and 47 biochemical and 17 antimicrobial susceptibility test results reported by Vitek 2. To identify biochemical tests that were particularly valuable (stable with repeat testing, but good variability across the species) or problematic (inconsistent results with repeat testing), three types of variance analyses were performed on isolates of K. pneumoniae: descriptive analysis of discordant biochemical results in same-day isolates, an average within-patient variance index, and generalized linear mixed model variance component analysis. Results: 4,200 isolates of K. pneumoniae were identified from 2,485 patients, 32% of whom had multiple isolates. The first two variance analyses highlighted SUCT, TyrA, GlyA, and GGT as “nuisance” biochemicals for which discordant within-patient test results impacted a high proportion of patient results, while dTAG had relatively good within-patient stability with good heterogeneity across the species. Variance component analyses confirmed the relative stability of dTAG, and identified additional biochemicals such as PHOS with a large between-patient to within-patient variance ratio. A reduced subset of biochemicals improved the robustness of strain definition for carbapenem-resistant K. pneumoniae. Surveillance analyses suggest that the reduced biochemical profile could improve the timeliness and specificity of outbreak detection algorithms. Conclusions The statistical approaches explored can improve the robust recognition of microbial subpopulations with routinely available biochemical test results, of value in the timely detection of outbreak clones and evolutionarily important genetic events. PMID:24391936

  20. Use of a Principal Components Analysis for the Generation of Daily Time Series.

    NASA Astrophysics Data System (ADS)

    Dreveton, Christine; Guillou, Yann

    2004-07-01

    A new approach for generating daily time series is considered in response to the weather-derivatives market. This approach consists of performing a principal components analysis to create independent variables, the values of which are then generated separately with a random process. Weather derivatives are financial or insurance products that give companies the opportunity to cover themselves against adverse climate conditions. The aim of a generator is to provide a wider range of feasible situations to be used in an assessment of risk. Generation of a temperature time series is required by insurers or bankers for pricing weather options. The provision of conditional probabilities and a good representation of the interannual variance are the main challenges of a generator when used for weather derivatives. The generator was developed according to this new approach using a principal components analysis and was applied to the daily average temperature time series of the Paris-Montsouris station in France. The observed dataset was homogenized and the trend was removed to represent correctly the present climate. The results obtained with the generator show that it represents correctly the interannual variance of the observed climate; this is the main result of the work, because one of the main discrepancies of other generators is their inability to represent accurately the observed interannual climate variance, and this discrepancy is not acceptable for an application to weather derivatives. The generator was also tested to calculate conditional probabilities: for example, the knowledge of the aggregated value of heating degree-days in the middle of the heating season allows one to estimate the probability of reaching a threshold at the end of the heating season. This represents the main application of a climate generator for use with weather derivatives.
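
    In the spirit of the approach described (PCA to obtain independent variables, each then generated with a random process), a minimal numpy sketch follows; the Gaussian model for the PC scores is an assumption made for illustration.

        import numpy as np

        def fit_generator(yearly):           # yearly: (n_years, 365), de-trended
            mu = yearly.mean(axis=0)
            U, s, Vt = np.linalg.svd(yearly - mu, full_matrices=False)
            var = s ** 2 / (len(yearly) - 1) # variance of each PC score
            return mu, Vt, var

        def simulate(mu, Vt, var, n_years, rng):
            # Independent scores with the observed variances preserve the
            # interannual variance of the synthetic series.
            scores = rng.standard_normal((n_years, len(var))) * np.sqrt(var)
            return mu + scores @ Vt

        rng = np.random.default_rng(2)
        obs = 12 + 0.1 * rng.standard_normal((30, 365)).cumsum(axis=1)  # toy data
        synth = simulate(*fit_generator(obs), n_years=100, rng=rng)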


  1. Precision, Reliability, and Effect Size of Slope Variance in Latent Growth Curve Models: Implications for Statistical Power Analysis

    PubMed Central

    Brandmaier, Andreas M.; von Oertzen, Timo; Ghisletta, Paolo; Lindenberger, Ulman; Hertzog, Christopher

    2018-01-01

    Latent Growth Curve Models (LGCM) have become a standard technique to model change over time. Prediction and explanation of inter-individual differences in change are major goals in lifespan research. The major determinants of statistical power to detect individual differences in change are the magnitude of true inter-individual differences in linear change (LGCM slope variance), design precision, alpha level, and sample size. Here, we show that design precision can be expressed as the inverse of effective error. Effective error is determined by instrument reliability and the temporal arrangement of measurement occasions. However, it also depends on another central LGCM component, the variance of the latent intercept and its covariance with the latent slope. We derive a new reliability index for LGCM slope variance—effective curve reliability (ECR)—by scaling slope variance against effective error. ECR is interpretable as a standardized effect size index. We demonstrate how effective error, ECR, and statistical power for a likelihood ratio test of zero slope variance formally relate to each other and how they function as indices of statistical power. We also provide a computational approach to derive ECR for arbitrary intercept-slope covariance. With practical use cases, we argue for the complementary utility of the proposed indices of a study's sensitivity to detect slope variance when making a priori longitudinal design decisions or communicating study designs. PMID:29755377
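
    The abstract defines ECR by scaling slope variance against effective error; a plausible signal-to-total form, stated here as an assumption rather than as the authors' exact expression, is

        \mathrm{ECR} = \frac{\sigma^{2}_{\mathrm{slope}}}{\sigma^{2}_{\mathrm{slope}} + \sigma^{2}_{\mathrm{eff}}}

    where \sigma^{2}_{\mathrm{eff}} is the effective error implied by instrument reliability, the arrangement of measurement occasions, and the intercept variance and its covariance with the slope.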

  2. Neutral Evolution of Multiple Quantitative Characters: A Genealogical Approach

    PubMed Central

    Griswold, Cortland K.; Logsdon, Benjamin; Gomulkiewicz, Richard

    2007-01-01

    The G matrix measures the components of phenotypic variation that are genetically heritable. The structure of G, that is, its principal components and their associated variances, determines, in part, the direction and speed of multivariate trait evolution. In this article we present a framework and results that give the structure of G under the assumption of neutrality. We suggest that a neutral expectation of the structure of G is important because it gives a null expectation for the structure of G from which the unique consequences of selection can be determined. We demonstrate how the processes of mutation, recombination, and drift shape the structure of G. Furthermore, we demonstrate how shared common ancestry between segregating alleles shapes the structure of G. Our results show that shared common ancestry, which manifests itself in the form of a gene genealogy, causes the structure of G to be nonuniform in that the variances associated with the principal components of G decline at an approximately exponential rate. Furthermore we show that the extent of the nonuniformity in the structure of G is enhanced with declines in mutation rates, recombination rates, and numbers of loci and is dependent on the pattern and modality of mutation. PMID:17339224

  3. Thermospheric mass density model error variance as a function of time scale

    NASA Astrophysics Data System (ADS)

    Emmert, J. T.; Sutton, E. K.

    2017-12-01

    In the increasingly crowded low-Earth orbit environment, accurate estimation of orbit prediction uncertainties is essential for collision avoidance. Poor characterization of such uncertainty can result in unnecessary and costly avoidance maneuvers (false positives) or disregard of a collision risk (false negatives). Atmospheric drag is a major source of orbit prediction uncertainty, and is particularly challenging to account for because it exerts a cumulative influence on orbital trajectories and is therefore not amenable to representation by a single uncertainty parameter. To address this challenge, we examine the variance of measured accelerometer-derived and orbit-derived mass densities with respect to predictions by thermospheric empirical models, using the data-minus-model variance as a proxy for model uncertainty. Our analysis focuses mainly on the power spectrum of the residuals, and we construct an empirical model of the variance as a function of time scale (from 1 hour to 10 years), altitude, and solar activity. We find that the power spectral density approximately follows a power-law process but with an enhancement near the 27-day solar rotation period. The residual variance increases monotonically with altitude between 250 and 550 km. There are two components to the variance dependence on solar activity: one component is 180 degrees out of phase (largest variance at solar minimum), and the other component lags 2 years behind solar maximum (largest variance in the descending phase of the solar cycle).

  4. Genetic basis of between-individual and within-individual variance of docility.

    PubMed

    Martin, J G A; Pirotta, E; Petelle, M B; Blumstein, D T

    2017-04-01

    Between-individual variation in phenotypes within a population is the basis of evolution. However, evolutionary and behavioural ecologists have mainly focused on estimating between-individual variance in mean trait and neglected variation in within-individual variance, or predictability of a trait. In fact, an important assumption of mixed-effects models used to estimate between-individual variance in mean traits is that within-individual residual variance (predictability) is identical across individuals. Individual heterogeneity in the predictability of behaviours is a potentially important effect but rarely estimated and accounted for. We used 11 389 measures of docility behaviour from 1576 yellow-bellied marmots (Marmota flaviventris) to estimate between-individual variation in both mean docility and its predictability. We then implemented a double hierarchical animal model to decompose the variances of both mean trait and predictability into their environmental and genetic components. We found that individuals differed both in their docility and in their predictability of docility with a negative phenotypic covariance. We also found significant genetic variance for both mean docility and its predictability but no genetic covariance between the two. This analysis is one of the first to estimate the genetic basis of both mean trait and within-individual variance in a wild population. Our results indicate that equal within-individual variance should not be assumed. We demonstrate the evolutionary importance of the variation in the predictability of docility and illustrate potential bias in models ignoring variation in predictability. We conclude that the variability in the predictability of a trait should not be ignored, and present a coherent approach for its quantification. © 2017 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2017 European Society For Evolutionary Biology.

  5. Modeling Heterogeneous Variance-Covariance Components in Two-Level Models

    ERIC Educational Resources Information Center

    Leckie, George; French, Robert; Charlton, Chris; Browne, William

    2014-01-01

    Applications of multilevel models to continuous outcomes nearly always assume constant residual variance and constant random effects variances and covariances. However, modeling heterogeneity of variance can prove a useful indicator of model misspecification, and in some educational and behavioral studies, it may even be of direct substantive…

  6. Insights into the diurnal cycle of global Earth outgoing radiation using a numerical weather prediction model

    NASA Astrophysics Data System (ADS)

    Gristey, Jake J.; Chiu, J. Christine; Gurney, Robert J.; Morcrette, Cyril J.; Hill, Peter G.; Russell, Jacqueline E.; Brindley, Helen E.

    2018-04-01

    A globally complete, high temporal resolution and multiple-variable approach is employed to analyse the diurnal cycle of Earth's outgoing energy flows. This is made possible via the use of Met Office model output for September 2010 that is assessed alongside regional satellite observations throughout. Principal component analysis applied to the long-wave component of modelled outgoing radiation reveals dominant diurnal patterns related to land surface heating and convective cloud development, respectively explaining 68.5 and 16.0 % of the variance at the global scale. The total variance explained by these first two patterns is markedly less than previous regional estimates from observations, and this analysis suggests that around half of the difference relates to the lack of global coverage in the observations. The first pattern is strongly and simultaneously coupled to the land surface temperature diurnal variations. The second pattern is strongly coupled to the cloud water content and height diurnal variations, but lags the cloud variations by several hours. We suggest that the mechanism controlling the delay is a moistening of the upper troposphere due to the evaporation of anvil cloud. The short-wave component of modelled outgoing radiation, analysed in terms of albedo, exhibits a very dominant pattern explaining 88.4 % of the variance that is related to the angle of incoming solar radiation, and a second pattern explaining 6.7 % of the variance that is related to compensating effects from convective cloud development and marine stratocumulus cloud dissipation. Similar patterns are found in regional satellite observations, but with slightly different timings due to known model biases. The first pattern is controlled by changes in surface and cloud albedo, and Rayleigh and aerosol scattering. The second pattern is strongly coupled to the diurnal variations in both cloud water content and height in convective regions but only cloud water content in marine stratocumulus regions, with substantially shorter lag times compared with the long-wave counterpart. This indicates that the short-wave radiation response to diurnal cloud development and dissipation is more rapid, which is found to be robust in the regional satellite observations. These global, diurnal radiation patterns and their coupling with other geophysical variables demonstrate the process-level understanding that can be gained using this approach and highlight a need for global, diurnal observing systems for Earth outgoing radiation in the future.

  7. Covariate analysis of bivariate survival data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, L.E.

    1992-01-01

    The methods developed are used to analyze the effects of covariates on bivariate survival data when censoring and ties are present. The proposed method provides models for bivariate survival data that include differential covariate effects and censored observations. The proposed models are based on an extension of the univariate Buckley-James estimators which replace censored data points by their expected values, conditional on the censoring time and the covariates. For the bivariate situation, it is necessary to determine the expectation of the failure times for one component conditional on the failure or censoring time of the other component. Two different methods have been developed to estimate these expectations. In the semiparametric approach these expectations are determined from a modification of Burke's estimate of the bivariate empirical survival function. In the parametric approach censored data points are also replaced by their conditional expected values where the expected values are determined from a specified parametric distribution. The model estimation will be based on the revised data set, comprised of uncensored components and expected values for the censored components. The variance-covariance matrix for the estimated covariate parameters has also been derived for both the semiparametric and parametric methods. Data from the Demographic and Health Survey was analyzed by these methods. The two outcome variables are post-partum amenorrhea and breastfeeding; education and parity were used as the covariates. Both the covariate parameter estimates and the variance-covariance estimates for the semiparametric and parametric models will be compared. In addition, a multivariate test statistic was used in the semiparametric model to examine contrasts. The significance of the statistic was determined from a bootstrap distribution of the test statistic.

  8. A Partial Least-Squares Analysis of Health-Related Quality-of-Life Outcomes After Aneurysmal Subarachnoid Hemorrhage.

    PubMed

    Young, Julia M; Morgan, Benjamin R; Mišić, Bratislav; Schweizer, Tom A; Ibrahim, George M; Macdonald, R Loch

    2015-12-01

    Individuals who have aneurysmal subarachnoid hemorrhages (SAHs) experience decreased health-related qualities of life (HRQoLs) that persist after the primary insult. To identify clinical variables that concurrently associate with HRQoL outcomes by using a partial least-squares approach, which has the distinct advantage of explaining multidimensional variance where predictor variables may be highly collinear. Data collected from the CONSCIOUS-1 trial was used to extract 29 clinical variables including SAH presentation, hospital procedures, and demographic information in addition to 5 HRQoL outcome variables for 256 individuals. A partial least-squares analysis was performed by calculating a heterogeneous correlation matrix and applying singular value decomposition to determine components that best represent the correlations between the 2 sets of variables. Bootstrapping was used to estimate statistical significance. The first 2 components accounting for 81.6% and 7.8% of the total variance revealed significant associations between clinical predictors and HRQoL outcomes. The first component identified associations between disability in self-care with longer durations of critical care stay, invasive intracranial monitoring, ventricular drain time, poorer clinical grade on presentation, greater amounts of cerebral spinal fluid drainage, and a history of hypertension. The second component identified associations between disability due to pain and discomfort as well as anxiety and depression with greater body mass index, abnormal heart rate, longer durations of deep sedation and critical care, and higher World Federation of Neurosurgical Societies and Hijdra scores. By applying a data-driven, multivariate approach, we identified robust associations between SAH clinical presentations and HRQoL outcomes. Abbreviations: EQ-VAS, EuroQoL visual analog scale; HRQoL, health-related quality of life; ICU, intensive care unit; IVH, intraventricular hemorrhage; PLS, partial least squares; SAH, subarachnoid hemorrhage; SVD, singular value decomposition; WFNS, World Federation of Neurosurgical Societies.
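
    A minimal sketch of a PLS analysis of this shape (29 collinear predictors, 5 outcomes); it uses scikit-learn's NIPALS-based PLS rather than the paper's heterogeneous-correlation SVD variant, and the data are synthetic.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(3)
        latent = rng.standard_normal((256, 3))          # shared latent structure
        X = latent @ rng.standard_normal((3, 29)) + 0.5 * rng.standard_normal((256, 29))
        Y = latent @ rng.standard_normal((3, 5)) + 0.5 * rng.standard_normal((256, 5))

        pls = PLSRegression(n_components=2).fit(X, Y)
        # x_weights_ / y_weights_ play the role of the components whose
        # significance the paper assesses by bootstrapping.
        r2 = 1 - ((Y - pls.predict(X)) ** 2).sum() / ((Y - Y.mean(0)) ** 2).sum()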

  9. Rank estimation and the multivariate analysis of in vivo fast-scan cyclic voltammetric data

    PubMed Central

    Keithley, Richard B.; Carelli, Regina M.; Wightman, R. Mark

    2010-01-01

    Principal component regression has been used in the past to separate current contributions from different neuromodulators measured with in vivo fast-scan cyclic voltammetry. Traditionally, a percent cumulative variance approach has been used to determine the rank of the training set voltammetric matrix during model development; however, this approach suffers from several disadvantages, including the use of arbitrary percentages and the requirement of extreme precision of training sets. Here we propose that Malinowski’s F-test, a method based on a statistical analysis of the variance contained within the training set, can be used to improve factor selection for the analysis of in vivo fast-scan cyclic voltammetric data. These two methods of rank estimation were compared at all steps in the calibration protocol including the number of principal components retained, overall noise levels, model validation as determined using a residual analysis procedure, and predicted concentration information. By analyzing 119 training sets from two different laboratories amassed over several years, we were able to gain insight into the heterogeneity of in vivo fast-scan cyclic voltammetric data and study how differences in factor selection propagate throughout the entire principal component regression analysis procedure. Visualizing cyclic voltammetric representations of the data contained in the retained and discarded principal components showed that using Malinowski’s F-test for rank estimation of in vivo training sets allowed for noise to be more accurately removed. Malinowski’s F-test also improved the robustness of our criterion for judging multivariate model validity, even though signal-to-noise ratios of the data varied. In addition, pH change was the majority noise carrier of in vivo training sets while dopamine prediction was more sensitive to noise. PMID:20527815
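
    For reference, the traditional criterion that the paper argues against can be written in a few lines; the 99.5% cutoff below is exactly the kind of arbitrary percentage at issue.

        import numpy as np

        def rank_by_cumulative_variance(X, fraction=0.995):
            # Smallest number of PCs whose cumulative variance reaches `fraction`.
            s2 = np.linalg.svd(X - X.mean(axis=0), compute_uv=False) ** 2
            cum = np.cumsum(s2) / s2.sum()
            return int(np.searchsorted(cum, fraction)) + 1   # retained PCs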

  10. Management Accounting in School Food Service.

    ERIC Educational Resources Information Center

    Bryan, E. Lewis; Friedlob, G. Thomas

    1982-01-01

    Describes a model for establishing control of school food services through analysis of the aggregate variances of quantity, collection, and price, and of their separate components. The separable component variances are identified, measured, and compared monthly to help supervisors identify exactly where plans and operations vary. (Author/MLF)

  11. Evaluating in Vitro Culture Medium of Gut Microbiome with Orthogonal Experimental Design and a Metaproteomics Approach.

    PubMed

    Li, Leyuan; Zhang, Xu; Ning, Zhibin; Mayne, Janice; Moore, Jasmine I; Butcher, James; Chiang, Cheng-Kang; Mack, David; Stintzi, Alain; Figeys, Daniel

    2018-01-05

    In vitro culture-based approaches are time- and cost-effective solutions for rapidly evaluating the effects of drugs or natural compounds against microbiomes. The nutritional composition of the culture medium is an important determinant for effectively maintaining the gut microbiome in vitro. This study combines orthogonal experimental design and a metaproteomics approach to obtaining functional insights into the effects of different medium components on the microbiome. Our results show that the metaproteomic profile responds differently to medium components, including inorganic salts, bile salts, mucin, and short-chain fatty acids. Multifactor analysis of variance further revealed significant main and interaction effects of inorganic salts, bile salts, and mucin on the different functional groups of gut microbial proteins. While a broad regulating effect was observed on basic metabolic pathways, different medium components also showed significant modulations on cell wall, membrane, and envelope biogenesis and cell motility related functions. In particular, flagellar assembly related proteins were significantly responsive to the presence of mucin. This study provides information on the functional influences of medium components on the in vitro growth of microbiome communities and gives insight into the key components that must be considered when selecting and optimizing media for culturing ex vivo microbiotas.

  12. Analysis of components of variance in multiple-reader studies of computer-aided diagnosis with different tasks

    NASA Astrophysics Data System (ADS)

    Beiden, Sergey V.; Wagner, Robert F.; Campbell, Gregory; Metz, Charles E.; Chan, Heang-Ping; Nishikawa, Robert M.; Schnall, Mitchell D.; Jiang, Yulei

    2001-06-01

    In recent years, the multiple-reader, multiple-case (MRMC) study paradigm has become widespread for receiver operating characteristic (ROC) assessment of systems for diagnostic imaging and computer-aided diagnosis. We review how MRMC data can be analyzed in terms of the multiple components of the variance (case, reader, interactions) observed in those studies. Such information is useful for the design of pivotal studies from results of a pilot study and also for studying the effects of reader training. Recently, several of the present authors have demonstrated methods to generalize the analysis of multiple variance components to the case where unaided readers of diagnostic images are compared with readers who receive the benefit of a computer assist (CAD). For this case it is necessary to model the possibility that several of the components of variance might be reduced when readers incorporate the computer assist, compared to the unaided reading condition. We review results of this kind of analysis on three previously published MRMC studies, two of which were applications of CAD to diagnostic mammography and one was an application of CAD to screening mammography. The results for the three cases are seen to differ, depending on the reader population sampled and the task of interest. Thus, it is not possible to generalize a particular analysis of variance components beyond the tasks and populations actually investigated.

  13. Analysis of conditional genetic effects and variance components in developmental genetics.

    PubMed

    Zhu, J

    1995-12-01

    A genetic model with additive-dominance effects and genotype x environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t-1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects.

  14. Analysis of Conditional Genetic Effects and Variance Components in Developmental Genetics

    PubMed Central

    Zhu, J.

    1995-01-01

    A genetic model with additive-dominance effects and genotype X environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t - 1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects. PMID:8601500

  15. Microarchitecture and Bone Quality in the Human Calcaneus; Local Variations of Fabric Anisotropy

    PubMed Central

    Souzanchi, M F; Palacio-Mancheno, P E; Borisov, Y; Cardoso, L; Cowin, SC

    2012-01-01

    The local variability of microarchitecture of human trabecular calcaneus bone is investigated using high resolution microCT scanning. The fabric tensor is employed as the measure of the microarchitecture of the pore structure of a porous medium. It is hypothesized that a fabric tensor-dependent poroelastic ultrasound approach will more effectively predict the data variance than will porosity alone. The specific aims of the present study are i) to quantify the morphology and local anisotropy of the calcaneus microarchitecture with respect to anatomical directions, ii) to determine the interdependence, or lack thereof, of microarchitecture parameters, fabric, and volumetric bone mineral density (vBMD), and iii) to determine the relative ability of vBMD and fabric measurements in evaluating the variance in ultrasound wave velocity measurements along orthogonal directions in the human calcaneus. Our results show that the microarchitecture in the analyzed regions of human calcanei is anisotropic, with a preferred alignment along the posterior-anterior direction. Strong correlation was found between most scalar architectural parameters and vBMD. However, no statistical correlation was found between vBMD and the fabric components, the measures of the pore microstructure orientation. Therefore, among the parameters usually considered for cancellous bone (i.e., classic histomorphometric parameters such as porosity, trabecular thickness, number and separation), only fabric components explain the data variance that cannot be explained by vBMD, a global mass measurement, which lacks the sensitivity and selectivity to distinguish osteoporotic from healthy subjects because it is insensitive to directional changes in bone architecture. This study demonstrates that a multi-directional, fabric-dependent poroelastic ultrasound approach has the capability of characterizing anisotropic bone properties (bone quality) beyond bone mass, and could help to better understand anisotropic changes in bone architecture using ultrasound. PMID:22807141

  16. Validity Evidence and Scoring Guidelines for Standardized Patient Encounters and Patient Notes From a Multisite Study of Clinical Performance Examinations in Seven Medical Schools.

    PubMed

    Park, Yoon Soo; Hyderi, Abbas; Heine, Nancy; May, Win; Nevins, Andrew; Lee, Ming; Bordage, Georges; Yudkowsky, Rachel

    2017-11-01

    To examine validity evidence of local graduation competency examination scores from seven medical schools using shared cases and to provide rater training protocols and guidelines for scoring patient notes (PNs). Between May and August 2016, clinical cases were developed, shared, and administered across seven medical schools (990 students participated). Raters were calibrated using training protocols, and guidelines were developed collaboratively across sites to standardize scoring. Data included scores from standardized patient encounters for history taking, physical examination, and PNs. Descriptive statistics were used to examine scores from the different assessment components. Generalizability studies (G-studies) using variance components were conducted to estimate reliability for composite scores. Validity evidence was collected for response process (rater perception), internal structure (variance components, reliability), relations to other variables (interassessment correlations), and consequences (composite score). Student performance varied by case and task. In the PNs, justification of differential diagnosis was the most discriminating task. G-studies showed that schools accounted for less than 1% of total variance; however, for the PNs, there were differences in scores for varying cases and tasks across schools, indicating a school effect. Composite score reliability was maximized when the PN was weighted between 30% and 40%. Raters preferred using case-specific scoring guidelines with clear point-scoring systems. This multisite study presents validity evidence for PN scores based on scoring rubric and case-specific scoring guidelines that offer rigor and feedback for learners. Variability in PN scores across participating sites may signal different approaches to teaching clinical reasoning among medical schools.

  17. Matrix approach to uncertainty assessment and reduction for modeling terrestrial carbon cycle

    NASA Astrophysics Data System (ADS)

    Luo, Y.; Xia, J.; Ahlström, A.; Zhou, S.; Huang, Y.; Shi, Z.; Wang, Y.; Du, Z.; Lu, X.

    2017-12-01

    Terrestrial ecosystems absorb approximately 30% of the anthropogenic carbon dioxide emissions. This estimate has been deduced indirectly: combining analyses of atmospheric carbon dioxide concentrations with ocean observations to infer the net terrestrial carbon flux. In contrast, when knowledge about the terrestrial carbon cycle is integrated into different terrestrial carbon models, they make widely different predictions. To improve the terrestrial carbon models, we have recently developed a matrix approach to uncertainty assessment and reduction. Specifically, the terrestrial carbon cycle has been commonly represented by a series of carbon balance equations to track carbon influxes into and effluxes out of individual pools in earth system models. This representation matches our understanding of carbon cycle processes well and can be reorganized into one matrix equation without changing any modeled carbon cycle processes and mechanisms. We have developed matrix equations of several global land C cycle models, including CLM3.5, 4.0 and 4.5, CABLE, LPJ-GUESS, and ORCHIDEE. Indeed, the matrix equation is generic and can be applied to other land carbon models. This matrix approach offers a suite of new diagnostic tools, such as the 3-dimensional (3-D) parameter space, traceability analysis, and variance decomposition, for uncertainty analysis. For example, predictions of carbon dynamics with complex land models can be placed in a 3-D parameter space (carbon input, residence time, and storage potential) as a common metric to measure how much model predictions are different. The latter can be traced to its source components by decomposing model predictions to a hierarchy of traceable components. Then, variance decomposition can help attribute the spread in predictions among multiple models to precisely identify sources of uncertainty. The highly uncertain components can be constrained by data as the matrix equation makes data assimilation computationally possible. We will illustrate various applications of this matrix approach to uncertainty assessment and reduction for terrestrial carbon cycle models.
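
    A hedged sketch of the generic matrix form described here, dX/dt = B*u(t) - A*K*X(t); the symbols follow common usage in this literature, and the three-pool numbers are invented for illustration.

        import numpy as np

        u = 10.0                              # carbon input (e.g., GPP)
        B = np.array([0.6, 0.4, 0.0])         # allocation of input to pools
        K = np.diag([0.5, 0.1, 0.01])         # turnover rates (1/yr)
        A = np.array([[-1.0,  0.0,  0.0],     # transfers between pools
                      [ 0.3, -1.0,  0.0],
                      [ 0.1,  0.2, -1.0]])

        M = A @ K
        X_ss = np.linalg.solve(-M, B * u)     # steady-state storage per pool
        tau = np.linalg.solve(-M, B)          # residence times (traceability)
        # Storage = carbon input x residence time: the decomposition used to
        # trace differences between models to their source components.
        assert np.allclose(X_ss, u * tau)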

  18. The effects of r- and K-selection on components of variance for two quantitative traits.

    PubMed

    Long, T; Long, G

    1974-03-01

    The genetic and environmental components of variance for two quantitative characters were measured in the descendants of Drosophila melanogaster populations which had been grown for several generations at densities of 100, 200, 300, and 400 eggs per vial. Populations subject to intermediate densities had a greater proportion of phenotypic variance available for selection than populations from either extreme. Selection on either character would be least effective under pure r-selection, a frequent attribute of selection programs.

  19. Unbiased Estimates of Variance Components with Bootstrap Procedures

    ERIC Educational Resources Information Center

    Brennan, Robert L.

    2007-01-01

    This article provides general procedures for obtaining unbiased estimates of variance components for any random-model balanced design under any bootstrap sampling plan, with the focus on designs of the type typically used in generalizability theory. The results reported here are particularly helpful when the bootstrap is used to estimate standard…

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrell, Kathryn, E-mail: kfarrell@ices.utexas.edu; Oden, J. Tinsley, E-mail: oden@ices.utexas.edu; Faghihi, Danial, E-mail: danial@ices.utexas.edu

    A general adaptive modeling algorithm for selection and validation of coarse-grained models of atomistic systems is presented. A Bayesian framework is developed to address uncertainties in parameters, data, and model selection. Algorithms for computing output sensitivities to parameter variances, model evidence and posterior model plausibilities for given data, and for computing what are referred to as Occam Categories in reference to a rough measure of model simplicity, make up components of the overall approach. Computational results are provided for representative applications.

  1. A comparison of various approaches to evaluating erosion risks and designing erosion control measures

    NASA Astrophysics Data System (ADS)

    Kapicka, Jiri

    2015-04-01

    At present there is one methodology in the Czech Republic for computing and comparing erosion risks, and it also contains a method for designing erosion control measures. The basis of this methodology is the Universal Soil Loss Equation (USLE) and its result, the long-term average annual rate of erosion (G). This methodology is used by landscape planners. Data and statistics from the database of erosion events in the Czech Republic show that many problems and damages arise from local erosion episodes. The extent of these events and their impact depend on local precipitation, the current plant phase and soil conditions. Such erosion events can damage agricultural land, municipal property and water infrastructure even at locations that are in good condition from the point of view of the long-term average annual rate of erosion. An alternative way of computing and comparing erosion risks is an episode-based approach. This paper presents a comparison of various approaches to computing erosion risks. The comparison was carried out for a locality from the database of erosion events on agricultural land in the Czech Republic where two erosion events have been recorded. The study area is a simple agricultural parcel without any barriers that could strongly influence water flow and sediment transport. The computation of erosion risks (for all methodologies) was based on laboratory analysis of soil samples taken in the study area. Results of the USLE and MUSLE methodologies and results from the mathematical model Erosion 3D were compared. Differences in the spatial distribution of the places with the highest soil erosion are compared and discussed. Another part presents differences in the designed erosion control measures when the design is based on the different methodologies. The results show the variance in computed erosion risks produced by the different methodologies. These variances can open a discussion about different approaches to computing and evaluating erosion risks in areas of differing importance.
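
    For orientation, the long-term methodology rests on the multiplicative USLE form A = R * K * LS * C * P; a one-line sketch with invented factor values (the units must be mutually consistent, e.g. the metric factor system):

        def usle(R, K, LS, C, P):
            # Long-term average annual soil loss A = R * K * LS * C * P.
            return R * K * LS * C * P

        G = usle(R=45.0, K=0.4, LS=1.2, C=0.25, P=1.0)   # t/ha/yr, toy values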

  2. On testing an unspecified function through a linear mixed effects model with multiple variance components

    PubMed Central

    Wang, Yuanjia; Chen, Huaihou

    2012-01-01

    Summary We examine a generalized F-test of a nonparametric function through penalized splines and a linear mixed effects model representation. With a mixed effects model representation of penalized splines, we imbed the test of an unspecified function into a test of some fixed effects and a variance component in a linear mixed effects model with nuisance variance components under the null. The procedure can be used to test a nonparametric function or varying-coefficient with clustered data, compare two spline functions, test the significance of an unspecified function in an additive model with multiple components, and test a row or a column effect in a two-way analysis of variance model. Through a spectral decomposition of the residual sum of squares, we provide a fast algorithm for computing the null distribution of the test, which significantly improves the computational efficiency over bootstrap. The spectral representation reveals a connection between the likelihood ratio test (LRT) in a multiple variance components model and a single component model. We examine our methods through simulations, where we show that the power of the generalized F-test may be higher than the LRT, depending on the hypothesis of interest and the true model under the alternative. We apply these methods to compute the genome-wide critical value and p-value of a genetic association test in a genome-wide association study (GWAS), where the usual bootstrap is computationally intensive (up to 10^8 simulations) and asymptotic approximation may be unreliable and conservative. PMID:23020801

  3. On testing an unspecified function through a linear mixed effects model with multiple variance components.

    PubMed

    Wang, Yuanjia; Chen, Huaihou

    2012-12-01

    We examine a generalized F-test of a nonparametric function through penalized splines and a linear mixed effects model representation. With a mixed effects model representation of penalized splines, we imbed the test of an unspecified function into a test of some fixed effects and a variance component in a linear mixed effects model with nuisance variance components under the null. The procedure can be used to test a nonparametric function or varying-coefficient with clustered data, compare two spline functions, test the significance of an unspecified function in an additive model with multiple components, and test a row or a column effect in a two-way analysis of variance model. Through a spectral decomposition of the residual sum of squares, we provide a fast algorithm for computing the null distribution of the test, which significantly improves the computational efficiency over bootstrap. The spectral representation reveals a connection between the likelihood ratio test (LRT) in a multiple variance components model and a single component model. We examine our methods through simulations, where we show that the power of the generalized F-test may be higher than the LRT, depending on the hypothesis of interest and the true model under the alternative. We apply these methods to compute the genome-wide critical value and p-value of a genetic association test in a genome-wide association study (GWAS), where the usual bootstrap is computationally intensive (up to 10^8 simulations) and asymptotic approximation may be unreliable and conservative. © 2012, The International Biometric Society.
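
    For statistics of this type, the computational shortcut described above reduces to simulating a weighted sum of independent chi-square(1) variables whose weights are the eigenvalues from the spectral decomposition; a generic sketch follows (the eigenvalues are illustrative, and the paper's exact statistic differs in detail).

        import numpy as np

        def null_quantile(eigvals, q=0.95, n_sim=100_000, seed=0):
            # Monte Carlo the weighted chi-square mixture null; far cheaper
            # than refitting the model in a full bootstrap.
            rng = np.random.default_rng(seed)
            draws = rng.chisquare(1, size=(n_sim, len(eigvals))) @ np.asarray(eigvals)
            return np.quantile(draws, q)

        crit = null_quantile([2.1, 0.7, 0.2, 0.05])   # e.g. genome-wide critical value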

  4. Recovering Wood and McCarthy's ERP-prototypes by means of ERP-specific procrustes-rotation.

    PubMed

    Beauducel, André

    2018-02-01

    The misallocation of treatment-variance on the wrong component has been discussed in the context of temporal principal component analysis of event-related potentials. There is, until now, no rotation-method that can perfectly recover Wood and McCarthy's prototypes without making use of additional information on treatment-effects. In order to close this gap, two new methods for component rotation were proposed. After Varimax-prerotation, the first method identifies very small slopes of successive loadings. The corresponding loadings are set to zero in a target-matrix for event-related orthogonal partial Procrustes- (EPP-) rotation. The second method generates Gaussian normal distributions around the peaks of the Varimax-loadings and performs orthogonal Procrustes-rotation towards these Gaussian distributions. Oblique versions of this Gaussian event-related Procrustes- (GEP) rotation and of EPP-rotation are based on Promax-rotation. A simulation study revealed that the new orthogonal rotations recover Wood and McCarthy's prototypes and eliminate misallocation of treatment-variance. In an additional simulation study with a more pronounced overlap of the prototypes, GEP Promax-rotation reduced the variance misallocation slightly more than EPP Promax-rotation. Comparison with Existing Method(s): Varimax- and conventional Promax-rotations resulted in substantial misallocations of variance in simulation studies when components had temporal overlap. A substantially reduced misallocation of variance occurred with the EPP-, EPP Promax-, GEP-, and GEP Promax-rotations. Misallocation of variance can be minimized by means of the new rotation methods. Making use of information on the temporal order of the loadings may allow for improvements of the rotation of temporal PCA components. Copyright © 2017 Elsevier B.V. All rights reserved.
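
    A minimal sketch of the Gaussian-target idea using orthogonal Procrustes rotation (scipy provides the solver); peak positions drive the targets as described above, while the Gaussian width here is a hypothetical choice.

        import numpy as np
        from scipy.linalg import orthogonal_procrustes
        from scipy.stats import norm

        rng = np.random.default_rng(4)
        t = np.arange(100)                              # time points
        L = np.column_stack([norm.pdf(t, 30, 8),        # toy temporal loadings
                             norm.pdf(t, 55, 8)])
        L = L + 0.01 * rng.standard_normal(L.shape)

        peaks = L.argmax(axis=0)                        # peak of each component
        target = np.column_stack([norm.pdf(t, p, 10) for p in peaks])
        R, _ = orthogonal_procrustes(L, target)         # rotation with L @ R ~ target
        L_rot = L @ R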

  5. Dimensionality and noise in energy selective x-ray imaging

    PubMed Central

    Alvarez, Robert E.

    2013-01-01

    Purpose: To develop and test a method to quantify the effect of dimensionality on the noise in energy selective x-ray imaging. Methods: The Cramér-Rao lower bound (CRLB), a universal lower limit of the covariance of any unbiased estimator, is used to quantify the noise. It is shown that increasing dimensionality always increases, or at best leaves the same, the variance. An analytic formula for the increase in variance in an energy selective x-ray system is derived. The formula is used to gain insight into the dependence of the increase in variance on the properties of the additional basis functions, the measurement noise covariance, and the source spectrum. The formula is also used with computer simulations to quantify the dependence of the additional variance on these factors. Simulated images of an object with three materials are used to demonstrate the trade-off of increased information with dimensionality and noise. The images are computed from energy selective data with a maximum likelihood estimator. Results: The increase in variance depends most importantly on the dimension and on the properties of the additional basis functions. With the attenuation coefficients of cortical bone, soft tissue, and adipose tissue as the basis functions, the increase in variance of the bone component from two to three dimensions is 1.4 × 10^3. With the soft tissue component, it is 2.7 × 10^4. If the attenuation coefficient of a high atomic number contrast agent is used as the third basis function, there is only a slight increase in the variance from two to three basis functions, 1.03 and 7.4 for the bone and soft tissue components, respectively. The changes in spectrum shape with beam hardening also have a substantial effect. They increase the variance by a factor of approximately 200 for the bone component and 220 for the soft tissue component as the soft tissue object thickness increases from 1 to 30 cm. Decreasing the energy resolution of the detectors increases the variance of the bone component markedly with three dimension processing, approximately a factor of 25 as the resolution decreases from 100 to 3 bins. The increase with two dimension processing for adipose tissue is a factor of two and with the contrast agent as the third material for two or three dimensions is also a factor of two for both components. The simulated images show that a maximum likelihood estimator can be used to process energy selective x-ray data to produce images with noise close to the CRLB. Conclusions: The method presented can be used to compute the effects of the object attenuation coefficients and the x-ray system properties on the relationship of dimensionality and noise in energy selective x-ray imaging systems. PMID:24320442
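
    The dimensionality effect can be checked with a generic CRLB computation for a linearized measurement model y = M a + noise; the bin counts and matrices below are toys, not the paper's system model.

        import numpy as np

        def crlb_cov(M, C):
            # Inverse Fisher information M^T C^{-1} M for Gaussian noise.
            return np.linalg.inv(M.T @ np.linalg.solve(C, M))

        rng = np.random.default_rng(5)
        M3 = rng.random((5, 3))              # 5 energy measurements, 3 basis materials
        C = np.diag(rng.random(5) + 0.5)     # measurement noise covariance
        v2 = crlb_cov(M3[:, :2], C)[0, 0]    # variance, two-basis processing
        v3 = crlb_cov(M3, C)[0, 0]           # variance, three-basis processing
        assert v3 >= v2                      # an added dimension never reduces variance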

  6. Designing a Robust Micromixer Based on Fluid Stretching

    NASA Astrophysics Data System (ADS)

    Mott, David; Gautam, Dipesh; Voth, Greg; Oran, Elaine

    2010-11-01

    A metric for measuring fluid stretching based on finite-time Lyapunov exponents is described, and the use of this metric for optimizing mixing in microfluidic components is explored. The metric is implemented within an automated design approach called the Computational Toolbox (CTB). The CTB designs components by adding geometric features, such as grooves of various shapes, to a microchannel. The transport produced by each of these features in isolation was pre-computed and stored as an "advection map" for that feature, and the flow through a composite geometry that combines these features is calculated rapidly by applying the corresponding maps in sequence. A genetic algorithm search then chooses the feature combination that optimizes a user-specified metric. Metrics based on the variance of concentration generally require the user to specify the fluid distributions at inflow, which leads to different mixer designs for different inflow arrangements. The stretching metric is independent of the fluid arrangement at inflow. Mixers designed using the stretching metric are compared to those designed using a variance of concentration metric and show excellent performance across a variety of inflow distributions and diffusivities.
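
    A minimal sketch of the advection-map idea, under stated assumptions: each feature is represented by a precomputed map of cross-section positions, a composite geometry is evaluated by applying the maps in sequence, and a candidate design is scored by a variance-of-concentration metric. The two toy maps and the scoring grid are invented stand-ins for the Toolbox's stored maps.

```python
import numpy as np

def groove_a(p):   # hypothetical map: shear in y driven by z
    y, z = p[:, 0], p[:, 1]
    return np.column_stack([y + 0.3 * np.sin(np.pi * z), z])

def groove_b(p):   # hypothetical map: fold in z driven by y
    y, z = p[:, 0], p[:, 1]
    return np.column_stack([y, z + 0.3 * np.sin(np.pi * y)])

def apply_sequence(points, maps):
    """Flow through a composite geometry: apply per-feature maps in order."""
    for m in maps:
        points = m(points)
    return points

rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(20000, 2))   # cross-section sample points
dye = (pts[:, 0] > 0).astype(float)         # one particular inflow split

out = apply_sequence(pts, [groove_a, groove_b, groove_a])

# variance of cell-averaged concentration on a coarse 10 x 10 grid
iy = np.clip(((out[:, 0] + 1) * 5).astype(int), 0, 9)
iz = np.clip(((out[:, 1] + 1) * 5).astype(int), 0, 9)
cells = iy * 10 + iz
conc = np.array([dye[cells == c].mean() for c in np.unique(cells)])
print(conc.var())   # lower variance indicates better mixing
```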

  7. An efficient sampling approach for variance-based sensitivity analysis based on the law of total variance in the successive intervals without overlapping

    NASA Astrophysics Data System (ADS)

    Yun, Wanying; Lu, Zhenzhou; Jiang, Xian

    2018-06-01

    To efficiently execute variance-based global sensitivity analysis, the law of total variance in successive non-overlapping intervals is first proved, and an efficient space-partition sampling-based approach is then proposed on this basis. By partitioning the sample points of the output into different subsets according to different inputs, the proposed approach can efficiently evaluate all the main effects concurrently from one group of sample points. In addition, there is no need to optimize the partition scheme in the proposed approach. The maximum length of the subintervals decreases as the number of sample points of the model input variables increases, which ensures that the convergence condition of the space-partition approach is well satisfied. Furthermore, a new interpretation of the idea of partitioning is given from the perspective of the variance ratio function. Finally, three test examples and one engineering application are employed to demonstrate the accuracy, efficiency and robustness of the proposed approach.
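
    The heart of the approach, estimating all main effects from a single sample by splitting each input's sorted sample into successive non-overlapping subsets and applying the law of total variance, can be sketched as follows. The bin count and the Ishigami test function are illustrative choices rather than the paper's examples.

```python
import numpy as np

def main_effects(X, y, n_bins=50):
    """First-order sensitivity indices from one sample: partition each
    input's sorted sample into successive non-overlapping subsets, then
    S_i ~ Var(E[Y | X_i in subset]) / Var(Y) (law of total variance)."""
    total_var = y.var()
    S = []
    for i in range(X.shape[1]):
        chunks = np.array_split(y[np.argsort(X[:, i])], n_bins)
        means = np.array([c.mean() for c in chunks])
        sizes = np.array([c.size for c in chunks])
        grand = np.average(means, weights=sizes)
        S.append(np.average((means - grand) ** 2, weights=sizes) / total_var)
    return np.array(S)

# Ishigami function, a standard sensitivity-analysis benchmark
rng = np.random.default_rng(1)
X = rng.uniform(-np.pi, np.pi, size=(100000, 3))
y = (np.sin(X[:, 0]) + 7 * np.sin(X[:, 1]) ** 2
     + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0]))
print(main_effects(X, y))   # analytic values are about (0.31, 0.44, 0.00)
```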

  8. Image informative maps for component-wise estimating parameters of signal-dependent noise

    NASA Astrophysics Data System (ADS)

    Uss, Mykhail L.; Vozel, Benoit; Lukin, Vladimir V.; Chehdi, Kacem

    2013-01-01

    We deal with the problem of blind estimation of signal-dependent noise parameters from mono-component image data. Multispectral or color images can be processed in a component-wise manner. The main results rest on the assumption that the image texture and noise parameter estimation problems are interdependent. A two-dimensional fractal Brownian motion (fBm) model is used to describe image texture locally. A polynomial model is assumed for the dependence of the signal-dependent noise variance on image intensity. Using the maximum likelihood approach, estimates of both fBm-model and noise parameters are obtained. It is demonstrated that Fisher information (FI) on noise parameters contained in an image is distributed nonuniformly over intensity coordinates (an image intensity range). It is also shown how to find the most informative intensities and the corresponding image areas for a given noisy image. The proposed estimator benefits from these detected areas to improve the estimation accuracy of signal-dependent noise parameters. Finally, the potential estimation accuracy (Cramér-Rao Lower Bound, or CRLB) of noise parameters is derived, providing confidence intervals of these estimates for a given image. In the experiment, the proposed and existing state-of-the-art noise variance estimators are compared for a large image database using CRLB-based statistical efficiency criteria.
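
    A highly simplified sketch of the polynomial noise model: local block means and variances are collected and a first-degree polynomial var(I) = a + b·I is fitted to them. The real estimator uses an fBm texture model, maximum likelihood, and informative-area selection; the smooth synthetic scene below sidesteps texture entirely.

```python
import numpy as np

def local_mean_variance(img, block=8):
    """Per-block sample means and variances; a crude stand-in for the
    fBm-based local texture/noise separation used in the paper."""
    h, w = (s - s % block for s in img.shape)
    b = img[:h, :w].reshape(h // block, block, w // block, block)
    b = b.transpose(0, 2, 1, 3).reshape(-1, block * block)
    return b.mean(axis=1), b.var(axis=1, ddof=1)

# simulate signal-dependent noise with variance a + b * intensity
rng = np.random.default_rng(2)
truth = np.tile(np.linspace(50, 200, 512), (512, 1))   # smooth scene
noisy = truth + rng.normal(0.0, np.sqrt(4.0 + 0.5 * truth))
m, v = local_mean_variance(noisy)
b_hat, a_hat = np.polyfit(m, v, 1)   # fit var(I) = a + b * I
print(a_hat, b_hat)                  # should land near (4.0, 0.5)
```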

  9. Estimation of hyper-parameters for a hierarchical model of combined cortical and extra-brain current sources in the MEG inverse problem.

    PubMed

    Morishige, Ken-ichi; Yoshioka, Taku; Kawawaki, Dai; Hiroe, Nobuo; Sato, Masa-aki; Kawato, Mitsuo

    2014-11-01

    One of the major obstacles in estimating cortical currents from MEG signals is the disturbance caused by magnetic artifacts derived from extra-cortical current sources such as heartbeats and eye movements. To remove the effect of such extra-brain sources, we improved the hybrid hierarchical variational Bayesian method (hyVBED) proposed by Fujiwara et al. (NeuroImage, 2009). hyVBED simultaneously estimates cortical and extra-brain source currents by placing dipoles on cortical surfaces as well as extra-brain sources. This method requires EOG data for an EOG forward model that describes the relationship between eye dipoles and electric potentials. In contrast, our improved approach requires no EOG and less a priori knowledge about the current variance of extra-brain sources. We propose a new method, "extra-dipole," that optimally selects hyper-parameter values regarding current variances of the cortical surface and extra-brain source dipoles. With the selected parameter values, the cortical and extra-brain dipole currents were accurately estimated from the simulated MEG data. The performance of this method was demonstrated to be better than conventional approaches, such as principal component analysis and independent component analysis, which use only statistical properties of MEG signals. Furthermore, we applied our proposed method to measured MEG data during covert pursuit of a smoothly moving target and confirmed its effectiveness. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. Genetic control of residual variance of yearling weight in Nellore beef cattle.

    PubMed

    Iung, L H S; Neves, H H R; Mulder, H A; Carvalheiro, R

    2017-04-01

    There is evidence for genetic variability in residual variance of livestock traits, which offers the potential for selection for increased uniformity of production. Different statistical approaches have been employed to study this topic; however, little is known about the concordance between them. The aim of our study was to investigate the genetic heterogeneity of residual variance on yearling weight (YW; 291.15 ± 46.67) in a Nellore beef cattle population; to compare the results of the statistical approaches, the two-step approach and the double hierarchical generalized linear model (DHGLM); and to evaluate the effectiveness of power transformation to accommodate scale differences. The comparison was based on genetic parameters, accuracy of EBV for residual variance, and cross-validation to assess predictive performance of both approaches. A total of 194,628 yearling weight records from 625 sires were used in the analysis. The results supported the hypothesis of genetic heterogeneity of residual variance on YW in Nellore beef cattle and the opportunity for selection, measured through the genetic coefficient of variation of residual variance (0.10 to 0.12 for the two-step approach and 0.17 for DHGLM, using an untransformed data set). However, low estimates of genetic variance associated with positive genetic correlations between mean and residual variance (about 0.20 for two-step and 0.76 for DHGLM for an untransformed data set) limit the genetic response to selection for uniformity of production while simultaneously increasing YW itself. Moreover, large sire families are needed to obtain accurate estimates of genetic merit for residual variance, as indicated by the low heritability estimates (<0.007). Box-Cox transformation was able to decrease the dependence of the variance on the mean and decreased the estimates of genetic parameters for residual variance. The transformation reduced but did not eliminate all the genetic heterogeneity of residual variance, highlighting its presence beyond the scale effect. The DHGLM showed higher predictive ability of EBV for residual variance and therefore should be preferred over the two-step approach.

  11. Use of a threshold animal model to estimate calving ease and stillbirth (co)variance components for US Holsteins

    USDA-ARS's Scientific Manuscript database

    (Co)variance components for calving ease and stillbirth in US Holsteins were estimated using a single-trait threshold animal model and two different sets of data edits. Six sets of approximately 250,000 records each were created by randomly selecting herd codes without replacement from the data used...

  12. An Empirical Temperature Variance Source Model in Heated Jets

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Bridges, James

    2012-01-01

    An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations is divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is written subsequently using a Green's function method while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determine the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.

  13. A Bayesian Network Based Global Sensitivity Analysis Method for Identifying Dominant Processes in a Multi-physics Model

    NASA Astrophysics Data System (ADS)

    Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.

    2016-12-01

    Sensitivity analysis has been an important tool in groundwater modeling to identify the influential parameters. Among various sensitivity analysis methods, the variance-based global sensitivity analysis has gained popularity for its model independence characteristic and capability of providing accurate sensitivity measurements. However, the conventional variance-based method only considers uncertainty contribution of single model parameters. In this research, we extended the variance-based method to consider more uncertainty sources and developed a new framework to allow flexible combinations of different uncertainty components. We decompose the uncertainty sources into a hierarchical three-layer structure: scenario, model and parametric. Furthermore, each layer of uncertainty source is capable of containing multiple components. An uncertainty and sensitivity analysis framework was then constructed following this three-layer structure using Bayesian network. Different uncertainty components are represented as uncertain nodes in this network. Through the framework, variance-based sensitivity analysis can be implemented with great flexibility of using different grouping strategies for uncertainty components. The variance-based sensitivity analysis thus is improved to be able to investigate the importance of an extended range of uncertainty sources: scenario, model, and other different combinations of uncertainty components which can represent certain key model system processes (e.g., groundwater recharge process, flow reactive transport process). For test and demonstration purposes, the developed methodology was implemented into a test case of real-world groundwater reactive transport modeling with various uncertainty sources. The results demonstrate that the new sensitivity analysis method is able to estimate accurate importance measurements for any uncertainty sources which were formed by different combinations of uncertainty components. The new methodology can provide useful information for environmental management and decision-makers to formulate policies and strategies.

  14. A Short Interspersed Nuclear Element (SINE)-Based Real-Time PCR Approach to Detect and Quantify Porcine Component in Meat Products.

    PubMed

    Zhang, Chi; Fang, Xin; Qiu, Haopu; Li, Ning

    2015-01-01

    Real-time PCR amplification of mitochondrial genes cannot be used for DNA quantification, and amplification of single-copy DNA does not provide ideal sensitivity. Moreover, cross-reactions among similar species are commonly observed in published methods that amplify repetitive sequences, which has hindered their further application. The purpose of this study was to establish a short interspersed nuclear element (SINE)-based real-time PCR approach with high species specificity that could be used for DNA quantification. After massive screening of candidate Sus scrofa SINEs, one optimal combination of primers and probe was selected, which had no cross-reaction with other common meat species. The LOD of the method was 44 fg DNA/reaction. Further, quantification tests showed this approach was practical for DNA estimation without tissue-dependent variation. Thus, this study provides a new tool for qualitative detection of porcine components, which could be promising in the QC of meat products.
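
    For the quantification step, real-time PCR methods of this kind conventionally rely on a standard curve in which the cycle threshold (Ct) is linear in the log of template amount. The generic sketch below uses invented calibration values, not the assay's actual data.

```python
import numpy as np

# standard curve: Ct is linear in log10(template amount); calibration
# values here are invented for illustration
log10_dna = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # log10 fg DNA / reaction
ct = np.array([35.1, 31.8, 28.4, 25.1, 21.7])     # measured cycle thresholds
slope, intercept = np.polyfit(log10_dna, ct, 1)

efficiency = 10 ** (-1 / slope) - 1    # amplification efficiency (~1.0 ideal)

def quantify(ct_sample):
    """Convert a sample's Ct back to fg DNA per reaction."""
    return 10 ** ((ct_sample - intercept) / slope)

print(round(efficiency, 3), quantify(27.0))
```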

  15. Correlational structure of ‘frontal’ tests and intelligence tests indicates two components with asymmetrical neurostructural correlates in old age

    PubMed Central

    Cox, Simon R.; MacPherson, Sarah E.; Ferguson, Karen J.; Nissan, Jack; Royle, Natalie A.; MacLullich, Alasdair M.J.; Wardlaw, Joanna M.; Deary, Ian J.

    2014-01-01

    Both general fluid intelligence (gf) and performance on some ‘frontal tests’ of cognition decline with age. Both types of ability are at least partially dependent on the integrity of the frontal lobes, which also deteriorate with age. Overlap between these two methods of assessing complex cognition in older age remains unclear. Such overlap could be investigated using inter-test correlations alone, as in previous studies, but this would be enhanced by ascertaining whether frontal test performance and gf share neurobiological variance. To this end, we examined relationships between gf and 6 frontal tests (Tower, Self-Ordered Pointing, Simon, Moral Dilemmas, Reversal Learning and Faux Pas tests) in 90 healthy males, aged ~ 73 years. We interpreted their correlational structure using principal component analysis, and in relation to MRI-derived regional frontal lobe volumes (relative to maximal healthy brain size). gf correlated significantly and positively (.24 ≤ r ≤ .53) with the majority of frontal test scores. Some frontal test scores also exhibited shared variance after controlling for gf. Principal component analysis of test scores identified units of gf-common and gf-independent variance. The former was associated with variance in the left dorsolateral (DL) and anterior cingulate (AC) regions, and the latter with variance in the right DL and AC regions. Thus, we identify two biologically-meaningful components of variance in complex cognitive performance in older age and suggest that age-related changes to DL and AC have the greatest cognitive impact. PMID:25278641

  16. Correlational structure of 'frontal' tests and intelligence tests indicates two components with asymmetrical neurostructural correlates in old age.

    PubMed

    Cox, Simon R; MacPherson, Sarah E; Ferguson, Karen J; Nissan, Jack; Royle, Natalie A; MacLullich, Alasdair M J; Wardlaw, Joanna M; Deary, Ian J

    2014-09-01

    Both general fluid intelligence (gf) and performance on some 'frontal tests' of cognition decline with age. Both types of ability are at least partially dependent on the integrity of the frontal lobes, which also deteriorate with age. Overlap between these two methods of assessing complex cognition in older age remains unclear. Such overlap could be investigated using inter-test correlations alone, as in previous studies, but this would be enhanced by ascertaining whether frontal test performance and gf share neurobiological variance. To this end, we examined relationships between gf and 6 frontal tests (Tower, Self-Ordered Pointing, Simon, Moral Dilemmas, Reversal Learning and Faux Pas tests) in 90 healthy males, aged ~ 73 years. We interpreted their correlational structure using principal component analysis, and in relation to MRI-derived regional frontal lobe volumes (relative to maximal healthy brain size). gf correlated significantly and positively (.24 ≤ r ≤ .53) with the majority of frontal test scores. Some frontal test scores also exhibited shared variance after controlling for gf. Principal component analysis of test scores identified units of gf-common and gf-independent variance. The former was associated with variance in the left dorsolateral (DL) and anterior cingulate (AC) regions, and the latter with variance in the right DL and AC regions. Thus, we identify two biologically-meaningful components of variance in complex cognitive performance in older age and suggest that age-related changes to DL and AC have the greatest cognitive impact.

  17. Predictors of burnout among correctional mental health professionals.

    PubMed

    Gallavan, Deanna B; Newman, Jody L

    2013-02-01

    This study focused on the experience of burnout among a sample of correctional mental health professionals. We examined the relationship of a linear combination of optimism, work-family conflict, and attitudes toward prisoners with two dimensions derived from the Maslach Burnout Inventory and the Professional Quality of Life Scale. Initially, three subscales from the Maslach Burnout Inventory and two subscales from the Professional Quality of Life Scale were subjected to principal components analysis with oblimin rotation in order to identify underlying dimensions among the subscales. This procedure resulted in two components accounting for approximately 75% of the variance (r = -.27). The first component was labeled Negative Experience of Work because it seemed to tap the experience of being emotionally spent, detached, and socially avoidant. The second component was labeled Positive Experience of Work and seemed to tap a sense of competence, success, and satisfaction in one's work. Two multiple regression analyses were subsequently conducted, in which Negative Experience of Work and Positive Experience of Work, respectively, were predicted from a linear combination of optimism, work-family conflict, and attitudes toward prisoners. In the first analysis, 44% of the variance in Negative Experience of Work was accounted for, with work-family conflict and optimism accounting for the most variance. In the second analysis, 24% of the variance in Positive Experience of Work was accounted for, with optimism and attitudes toward prisoners accounting for the most variance.
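
    The two-step pipeline (extract components first, then regress each component on the predictor set) can be sketched as below. Plain PCA stands in for the oblimin-rotated solution, and the simulated data are placeholders, so the printed R² values will be near zero rather than the reported 44% and 24%.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 200
subscales = rng.normal(size=(n, 5))    # placeholder MBI/ProQOL subscales
predictors = rng.normal(size=(n, 3))   # optimism, conflict, attitudes

# step 1: reduce subscales to two components (plain PCA here; the study
# applied an oblique oblimin rotation on top of this)
components = PCA(n_components=2).fit_transform(subscales)

# step 2: regress each component on the predictors and report R^2
for k in range(2):
    model = LinearRegression().fit(predictors, components[:, k])
    r2 = model.score(predictors, components[:, k])
    print(f"component {k + 1}: R^2 = {r2:.2f}")
```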

  18. Multi-allelic haplotype model based on genetic partition for genomic prediction and variance component estimation using SNP markers.

    PubMed

    Da, Yang

    2015-12-18

    The amount of functional genomic information has been growing rapidly but remains largely unused in genomic selection. Genomic prediction and estimation using haplotypes in genome regions with functional elements such as all genes of the genome can be an approach to integrate functional and structural genomic information for genomic selection. Towards this goal, this article develops a new haplotype approach for genomic prediction and estimation. A multi-allelic haplotype model treating each haplotype as an 'allele' was developed for genomic prediction and estimation based on the partition of a multi-allelic genotypic value into additive and dominance values. Each additive value is expressed as a function of h - 1 additive effects, where h = number of alleles or haplotypes, and each dominance value is expressed as a function of h(h - 1)/2 dominance effects. For a sample of q individuals, the limit number of effects is 2q - 1 for additive effects and is the number of heterozygous genotypes for dominance effects. Additive values are factorized as a product between the additive model matrix and the h - 1 additive effects, and dominance values are factorized as a product between the dominance model matrix and the h(h - 1)/2 dominance effects. Genomic additive relationship matrix is defined as a function of the haplotype model matrix for additive effects, and genomic dominance relationship matrix is defined as a function of the haplotype model matrix for dominance effects. Based on these results, a mixed model implementation for genomic prediction and variance component estimation that jointly use haplotypes and single markers is established, including two computing strategies for genomic prediction and variance component estimation with identical results. The multi-allelic genetic partition fills a theoretical gap in genetic partition by providing general formulations for partitioning multi-allelic genotypic values and provides a haplotype method based on the quantitative genetics model towards the utilization of functional and structural genomic information for genomic prediction and estimation.
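
    A minimal sketch of the haplotype-as-allele bookkeeping: a count matrix Z records how many copies of each regional haplotype an individual carries, and a genomic additive relationship matrix is built from the centered counts. The VanRaden-style scaling used here is an assumption for illustration; the article defines its own model matrices for the additive and dominance partitions.

```python
import numpy as np

rng = np.random.default_rng(4)
q, h = 6, 4                             # individuals, distinct haplotypes
haps = rng.integers(0, h, size=(q, 2))  # two haplotypes per individual

# count matrix Z: Z[i, a] = copies of haplotype a carried by individual i
Z = np.zeros((q, h))
for i, (h1, h2) in enumerate(haps):
    Z[i, h1] += 1
    Z[i, h2] += 1

# centering the counts leaves h - 1 free additive directions, matching
# the h - 1 additive effects in the multi-allelic partition above
freq = Z.mean(axis=0) / 2
Zc = Z - 2 * freq
G_add = Zc @ Zc.T / (2 * np.sum(freq * (1 - freq)))  # VanRaden-style scale
print(np.round(G_add, 2))
```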

  19. Gaussian statistics for palaeomagnetic vectors

    USGS Publications Warehouse

    Love, J.J.; Constable, C.G.

    2003-01-01

    With the aim of treating the statistics of palaeomagnetic directions and intensities jointly and consistently, we represent the mean and the variance of palaeomagnetic vectors, at a particular site and of a particular polarity, by a probability density function in a Cartesian three-space of orthogonal magnetic-field components consisting of a single (unimodal) non-zero mean, spherically-symmetrical (isotropic) Gaussian function. For palaeomagnetic data of mixed polarities, we consider a bimodal distribution consisting of a pair of such symmetrical Gaussian functions, with equal, but opposite, means and equal variances. For both the Gaussian and bi-Gaussian distributions, and in the spherical three-space of intensity, inclination, and declination, we obtain analytical expressions for the marginal density functions, the cumulative distributions, and the expected values and variances for each spherical coordinate (including the angle with respect to the axis of symmetry of the distributions). The mathematical expressions for the intensity and off-axis angle are closed-form and especially manageable, with the intensity distribution being Rayleigh-Rician. In the limit of small relative vectorial dispersion, the Gaussian (bi-Gaussian) directional distribution approaches a Fisher (Bingham) distribution and the intensity distribution approaches a normal distribution. In the opposite limit of large relative vectorial dispersion, the directional distributions approach a spherically-uniform distribution and the intensity distribution approaches a Maxwell distribution. We quantify biases in estimating the properties of the vector field resulting from the use of simple arithmetic averages, such as estimates of the intensity or the inclination of the mean vector, or the variances of these quantities. With the statistical framework developed here and using the maximum-likelihood method, which gives unbiased estimates in the limit of large data numbers, we demonstrate how to formulate the inverse problem, and how to estimate the mean and variance of the magnetic vector field, even when the data consist of mixed combinations of directions and intensities. We examine palaeomagnetic secular-variation data from Hawaii and Réunion, and although these two sites are on almost opposite latitudes, we find significant differences in the mean vector and differences in the local vectorial variances, with the Hawaiian data being particularly anisotropic. These observations are inconsistent with a description of the mean field as being a simple geocentric axial dipole and with secular variation being statistically symmetrical with respect to reflection through the equatorial plane. Finally, our analysis of palaeomagnetic acquisition data from the 1960 Kilauea flow in Hawaii and the Holocene Xitle flow in Mexico, is consistent with the widely held suspicion that directional data are more accurate than intensity data.

  20. Gaussian statistics for palaeomagnetic vectors

    NASA Astrophysics Data System (ADS)

    Love, J. J.; Constable, C. G.

    2003-03-01

    With the aim of treating the statistics of palaeomagnetic directions and intensities jointly and consistently, we represent the mean and the variance of palaeomagnetic vectors, at a particular site and of a particular polarity, by a probability density function in a Cartesian three-space of orthogonal magnetic-field components consisting of a single (unimodal) non-zero mean, spherically-symmetrical (isotropic) Gaussian function. For palaeomagnetic data of mixed polarities, we consider a bimodal distribution consisting of a pair of such symmetrical Gaussian functions, with equal, but opposite, means and equal variances. For both the Gaussian and bi-Gaussian distributions, and in the spherical three-space of intensity, inclination, and declination, we obtain analytical expressions for the marginal density functions, the cumulative distributions, and the expected values and variances for each spherical coordinate (including the angle with respect to the axis of symmetry of the distributions). The mathematical expressions for the intensity and off-axis angle are closed-form and especially manageable, with the intensity distribution being Rayleigh-Rician. In the limit of small relative vectorial dispersion, the Gaussian (bi-Gaussian) directional distribution approaches a Fisher (Bingham) distribution and the intensity distribution approaches a normal distribution. In the opposite limit of large relative vectorial dispersion, the directional distributions approach a spherically-uniform distribution and the intensity distribution approaches a Maxwell distribution. We quantify biases in estimating the properties of the vector field resulting from the use of simple arithmetic averages, such as estimates of the intensity or the inclination of the mean vector, or the variances of these quantities. With the statistical framework developed here and using the maximum-likelihood method, which gives unbiased estimates in the limit of large data numbers, we demonstrate how to formulate the inverse problem, and how to estimate the mean and variance of the magnetic vector field, even when the data consist of mixed combinations of directions and intensities. We examine palaeomagnetic secular-variation data from Hawaii and Réunion, and although these two sites are on almost opposite latitudes, we find significant differences in the mean vector and differences in the local vectorial variances, with the Hawaiian data being particularly anisotropic. These observations are inconsistent with a description of the mean field as being a simple geocentric axial dipole and with secular variation being statistically symmetrical with respect to reflection through the equatorial plane. Finally, our analysis of palaeomagnetic acquisition data from the 1960 Kilauea flow in Hawaii and the Holocene Xitle flow in Mexico, is consistent with the widely held suspicion that directional data are more accurate than intensity data.
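
    The bias from simple arithmetic averaging that the two records above quantify is easy to reproduce by simulation: for isotropic Gaussian vectors, the mean of intensities inflates with dispersion while the intensity of the mean vector does not. The field value and dispersions below are arbitrary illustrative numbers.

```python
import numpy as np

rng = np.random.default_rng(5)
mean_field = np.array([0.0, 0.0, 40.0])   # arbitrary field, arbitrary units
for sigma in (5.0, 20.0, 60.0):           # small to large dispersion
    v = mean_field + sigma * rng.standard_normal((100000, 3))
    f_bar = np.linalg.norm(v, axis=1).mean()   # arithmetic mean intensity
    f_vec = np.linalg.norm(v.mean(axis=0))     # intensity of the mean vector
    print(f"sigma={sigma:5.1f}  mean|v|={f_bar:6.1f}  |mean v|={f_vec:6.1f}")
```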

  1. Evaluation of three lidar scanning strategies for turbulence measurements

    NASA Astrophysics Data System (ADS)

    Newman, J. F.; Klein, P. M.; Wharton, S.; Sathe, A.; Bonin, T. A.; Chilson, P. B.; Muschinski, A.

    2015-11-01

    Several errors occur when a traditional Doppler-beam swinging (DBS) or velocity-azimuth display (VAD) strategy is used to measure turbulence with a lidar. To mitigate some of these errors, a scanning strategy was recently developed which employs six beam positions to independently estimate the u, v, and w velocity variances and covariances. In order to assess the ability of these different scanning techniques to measure turbulence, a Halo scanning lidar, WindCube v2 pulsed lidar and ZephIR continuous wave lidar were deployed at field sites in Oklahoma and Colorado with collocated sonic anemometers. Results indicate that the six-beam strategy mitigates some of the errors caused by VAD and DBS scans, but the strategy is strongly affected by errors in the variance measured at the different beam positions. The ZephIR and WindCube lidars overestimated horizontal variance values by over 60 % under unstable conditions as a result of variance contamination, where additional variance components contaminate the true value of the variance. A correction method was developed for the WindCube lidar that uses variance calculated from the vertical beam position to reduce variance contamination in the u and v variance components. The correction method reduced WindCube variance estimates by over 20 % at both the Oklahoma and Colorado sites under unstable conditions, when variance contamination is largest. This correction method can be easily applied to other lidars that contain a vertical beam position and is a promising method for accurately estimating turbulence with commercially available lidars.

  2. Evaluation of three lidar scanning strategies for turbulence measurements

    NASA Astrophysics Data System (ADS)

    Newman, Jennifer F.; Klein, Petra M.; Wharton, Sonia; Sathe, Ameya; Bonin, Timothy A.; Chilson, Phillip B.; Muschinski, Andreas

    2016-05-01

    Several errors occur when a traditional Doppler beam swinging (DBS) or velocity-azimuth display (VAD) strategy is used to measure turbulence with a lidar. To mitigate some of these errors, a scanning strategy was recently developed which employs six beam positions to independently estimate the u, v, and w velocity variances and covariances. In order to assess the ability of these different scanning techniques to measure turbulence, a Halo scanning lidar, WindCube v2 pulsed lidar, and ZephIR continuous wave lidar were deployed at field sites in Oklahoma and Colorado with collocated sonic anemometers. Results indicate that the six-beam strategy mitigates some of the errors caused by VAD and DBS scans, but the strategy is strongly affected by errors in the variance measured at the different beam positions. The ZephIR and WindCube lidars overestimated horizontal variance values by over 60 % under unstable conditions as a result of variance contamination, where additional variance components contaminate the true value of the variance. A correction method was developed for the WindCube lidar that uses variance calculated from the vertical beam position to reduce variance contamination in the u and v variance components. The correction method reduced WindCube variance estimates by over 20 % at both the Oklahoma and Colorado sites under unstable conditions, when variance contamination is largest. This correction method can be easily applied to other lidars that contain a vertical beam position and is a promising method for accurately estimating turbulence with commercially available lidars.
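
    The six-beam idea in the two records above reduces to a small linear system: each beam's radial-velocity variance is a known quadratic combination of the six velocity (co)variances, so six well-chosen beam directions let all six be solved for. A sketch with an illustrative five-azimuths-plus-vertical geometry (the specific angles are assumptions, not a particular instrument's):

```python
import numpy as np

# five beams at 45 degrees elevation, equally spaced in azimuth, plus one
# vertical beam
az = np.deg2rad([0, 72, 144, 216, 288])
el = np.deg2rad(45.0)
beams = [(np.sin(a) * np.cos(el), np.cos(a) * np.cos(el), np.sin(el))
         for a in az]
beams.append((0.0, 0.0, 1.0))

# Var(v_r) = a^2 uu + b^2 vv + c^2 ww + 2ab uv + 2ac uw + 2bc vw
M = np.array([[a * a, b * b, c * c, 2 * a * b, 2 * a * c, 2 * b * c]
              for a, b, c in beams])

true = np.array([1.2, 0.8, 0.3, 0.1, 0.05, -0.02])   # uu vv ww uv uw vw
radial_var = M @ true                  # the six measured radial variances
print(np.linalg.solve(M, radial_var))  # recovers all six (co)variances
```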

  3. Dimensionality and noise in energy selective x-ray imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alvarez, Robert E.

    Purpose: To develop and test a method to quantify the effect of dimensionality on the noise in energy selective x-ray imaging. Methods: The Cramér-Rao lower bound (CRLB), a universal lower limit of the covariance of any unbiased estimator, is used to quantify the noise. It is shown that increasing dimensionality always increases, or at best leaves the same, the variance. An analytic formula for the increase in variance in an energy selective x-ray system is derived. The formula is used to gain insight into the dependence of the increase in variance on the properties of the additional basis functions, the measurement noise covariance, and the source spectrum. The formula is also used with computer simulations to quantify the dependence of the additional variance on these factors. Simulated images of an object with three materials are used to demonstrate the trade-off of increased information with dimensionality and noise. The images are computed from energy selective data with a maximum likelihood estimator. Results: The increase in variance depends most importantly on the dimension and on the properties of the additional basis functions. With the attenuation coefficients of cortical bone, soft tissue, and adipose tissue as the basis functions, the increase in variance of the bone component from two to three dimensions is 1.4 × 10³. With the soft tissue component, it is 2.7 × 10⁴. If the attenuation coefficient of a high atomic number contrast agent is used as the third basis function, there is only a slight increase in the variance from two to three basis functions, 1.03 and 7.4 for the bone and soft tissue components, respectively. The changes in spectrum shape with beam hardening also have a substantial effect. They increase the variance by a factor of approximately 200 for the bone component and 220 for the soft tissue component as the soft tissue object thickness increases from 1 to 30 cm. Decreasing the energy resolution of the detectors increases the variance of the bone component markedly with three-dimensional processing, approximately a factor of 25 as the resolution decreases from 100 to 3 bins. The increase with two-dimensional processing for adipose tissue is a factor of two, and with the contrast agent as the third material, for two or three dimensions, it is also a factor of two for both components. The simulated images show that a maximum likelihood estimator can be used to process energy selective x-ray data to produce images with noise close to the CRLB. Conclusions: The method presented can be used to compute the effects of the object attenuation coefficients and the x-ray system properties on the relationship of dimensionality and noise in energy selective x-ray imaging systems.

  4. Planning additional drilling campaign using two-space genetic algorithm: A game theoretical approach

    NASA Astrophysics Data System (ADS)

    Kumral, Mustafa; Ozer, Umit

    2013-03-01

    Grade and tonnage are the most important technical uncertainties in mining ventures because of the use of estimations/simulations, which are mostly generated from drill data. Open pit mines are planned and designed on the basis of the blocks representing the entire orebody. Each block has a different estimation/simulation variance, reflecting uncertainty to some extent. The estimation/simulation realizations are submitted to the mine production scheduling process. However, the use of a block model with varying estimation/simulation variances will lead to serious risk in the scheduling. In the setting of multiple simulations, the dispersion variances of blocks can be thought of as reflecting technical uncertainties. However, the dispersion variance cannot handle uncertainty associated with varying estimation/simulation variances of blocks. This paper proposes an approach that generates the configuration of the best additional drilling campaign so as to produce more homogeneous estimation/simulation variances of blocks. In other words, the objective is to find the best drilling configuration in such a way as to minimize grade uncertainty under a budget constraint. The uncertainty measure of the optimization process in this paper is the interpolation variance, which considers data locations and grades. The problem is expressed as a minmax problem, which focuses on finding the best worst-case performance, i.e., minimizing the interpolation variance of the block generating the maximum interpolation variance. Since the optimization model requires computing the interpolation variances of blocks being simulated/estimated in each iteration, the problem cannot be solved by standard optimization tools. This motivates the use of a two-space genetic algorithm (GA) approach to solve the problem. The technique has two spaces: feasible drill hole configurations with minimization of interpolation variance, and drill hole simulations with maximization of interpolation variance. The two spaces interact to find a minmax solution iteratively. A case study was conducted to demonstrate the performance of the approach. The findings showed that the approach could be used to plan a new drilling campaign.

  5. The structure of cross-cultural musical diversity.

    PubMed

    Rzeszutek, Tom; Savage, Patrick E; Brown, Steven

    2012-04-22

    Human cultural traits, such as languages, musics, rituals and material objects, vary widely across cultures. However, the majority of comparative analyses of human cultural diversity focus on between-culture variation without consideration for within-culture variation. In contrast, biological approaches to genetic diversity, such as the analysis of molecular variance (AMOVA) framework, partition genetic diversity into both within- and between-population components. We attempt here for the first time to quantify both components of cultural diversity by applying the AMOVA model to music. By employing this approach with 421 traditional songs from 16 Austronesian-speaking populations, we show that the vast majority of musical variability is due to differences within populations rather than differences between. This demonstrates a striking parallel to the structure of genetic diversity in humans. A neighbour-net analysis of pairwise population musical divergence shows a large amount of reticulation, indicating the pervasive occurrence of borrowing and/or convergent evolution of musical features across populations.

  6. The structure of cross-cultural musical diversity

    PubMed Central

    Rzeszutek, Tom; Savage, Patrick E.; Brown, Steven

    2012-01-01

    Human cultural traits, such as languages, musics, rituals and material objects, vary widely across cultures. However, the majority of comparative analyses of human cultural diversity focus on between-culture variation without consideration for within-culture variation. In contrast, biological approaches to genetic diversity, such as the analysis of molecular variance (AMOVA) framework, partition genetic diversity into both within- and between-population components. We attempt here for the first time to quantify both components of cultural diversity by applying the AMOVA model to music. By employing this approach with 421 traditional songs from 16 Austronesian-speaking populations, we show that the vast majority of musical variability is due to differences within populations rather than differences between. This demonstrates a striking parallel to the structure of genetic diversity in humans. A neighbour-net analysis of pairwise population musical divergence shows a large amount of reticulation, indicating the pervasive occurrence of borrowing and/or convergent evolution of musical features across populations. PMID:22072606
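
    The within/between decomposition at the core of AMOVA can be illustrated with a plain one-way ANOVA partition of a song-level trait. Real AMOVA operates on pairwise distances, so this scalar version with invented data is a simplification of the analysis in the two records above.

```python
import numpy as np

def variance_shares(values, groups):
    """One-way partition of the total sum of squares into within- and
    between-population shares."""
    ss_total = ((values - values.mean()) ** 2).sum()
    ss_within = sum(
        ((values[groups == g] - values[groups == g].mean()) ** 2).sum()
        for g in np.unique(groups))
    return ss_within / ss_total, 1 - ss_within / ss_total

# toy song-level trait for three populations with small between-group shifts
rng = np.random.default_rng(6)
groups = np.repeat([0, 1, 2], 140)
values = rng.normal(0.0, 1.0, groups.size) + 0.3 * groups
within, between = variance_shares(values, groups)
print(f"within: {within:.0%}  between: {between:.0%}")
```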

  7. Multilevel Dynamic Generalized Structured Component Analysis for Brain Connectivity Analysis in Functional Neuroimaging Data.

    PubMed

    Jung, Kwanghee; Takane, Yoshio; Hwang, Heungsun; Woodward, Todd S

    2016-06-01

    We extend dynamic generalized structured component analysis (GSCA) to enhance its data-analytic capability in structural equation modeling of multi-subject time series data. Time series data of multiple subjects are typically hierarchically structured, where time points are nested within subjects who are in turn nested within a group. The proposed approach, named multilevel dynamic GSCA, accommodates the nested structure in time series data. Explicitly taking the nested structure into account, the proposed method allows investigating subject-wise variability of the loadings and path coefficients by looking at the variance estimates of the corresponding random effects, as well as fixed loadings between observed and latent variables and fixed path coefficients between latent variables. We demonstrate the effectiveness of the proposed approach by applying the method to the multi-subject functional neuroimaging data for brain connectivity analysis, where time series data-level measurements are nested within subjects.

  8. Multivariate and geo-spatial approach for seawater quality of Chidiyatappu Bay, south Andaman Islands, India.

    PubMed

    Jha, Dilip Kumar; Vinithkumar, Nambali Valsalan; Sahu, Biraja Kumar; Dheenan, Palaiya Sukumaran; Das, Apurba Kumar; Begum, Mehmuna; Devi, Marimuthu Prashanthi; Kirubagaran, Ramalingam

    2015-07-15

    Chidiyatappu Bay is one of the least disturbed marine environments of the Andaman & Nicobar Islands, a union territory of India. Oceanic flushing from the southeast and northwest is prevalent in this bay, and anthropogenic activity in the adjoining environment is minimal. Considering the pristine nature of this bay, seawater samples collected from 12 sampling stations covering three seasons were analyzed. Principal Component Analysis (PCA) explained 69.9% of the total variance and exhibited strong factor loadings for nitrite, chlorophyll a and phaeophytin. In addition, one-way analysis of variance (ANOVA), regression analysis, box-whisker plots and Geographical Information System-based hot spot analysis further simplified and supported the multivariate results. The results obtained are important for establishing reference conditions for comparative study with other similar ecosystems in the region. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. EGSIEM combination service: combination of GRACE monthly K-band solutions on normal equation level

    NASA Astrophysics Data System (ADS)

    Meyer, Ulrich; Jean, Yoomin; Arnold, Daniel; Jäggi, Adrian

    2017-04-01

    The European Gravity Service for Improved Emergency Management (EGSIEM) project offers a scientific combination service, combining for the first time monthly GRACE gravity fields of different analysis centers (ACs) on normal equation (NEQ) level and thus taking all correlations between the gravity field coefficients and pre-eliminated orbit and instrument parameters correctly into account. Optimal weights for the individual NEQs are commonly derived by variance component estimation (VCE), as is the case for the products of the International VLBI Service (IVS) or the DTRF2008 reference frame realisation that are also derived by combination on NEQ-level. But variance factors are based on post-fit residuals and strongly depend on observation sampling and noise modeling, which both are very diverse in case of the individual EGSIEM ACs. These variance factors do not necessarily represent the true error levels of the estimated gravity field parameters that are still governed by analysis noise. We present a combination approach where weights are derived on solution level, thereby taking the analysis noise into account.

  10. Characterization, parameter estimation, and aircraft response statistics of atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1981-01-01

    A non-Gaussian three-component model of atmospheric turbulence is postulated that accounts for readily observable features of turbulence velocity records, their autocorrelation functions, and their spectra. Methods for computing probability density functions and mean exceedance rates of a generic aircraft response variable are developed using non-Gaussian turbulence characterizations readily extracted from velocity recordings. A maximum likelihood method is developed for optimal estimation of the integral scale and intensity of records possessing von Kármán transverse or longitudinal spectra. Formulas for the variances of such parameter estimates are developed. The maximum likelihood and least-squares approaches are combined to yield a method for estimating the autocorrelation function parameters of a two-component model for turbulence.

  11. Modelling safety of multistate systems with ageing components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kołowrocki, Krzysztof; Soszyńska-Budny, Joanna

    An innovative approach to safety analysis of multistate ageing systems is presented. Basic notions of ageing multistate systems safety analysis are introduced. The system components and the system multistate safety functions are defined. The mean values and variances of the multistate systems' lifetimes in the safety state subsets and the mean values of their lifetimes in the particular safety states are defined. The multistate system risk function and the moment of the system exceeding the critical safety state are introduced. Application of the proposed multistate system safety models to the evaluation and prediction of the safety characteristics of the consecutive “m out of n: F” system is presented as well.

  12. Variance components of short-term biomarkers of manganese exposure in an inception cohort of welding trainees.

    PubMed

    Baker, Marissa G; Simpson, Christopher D; Sheppard, Lianne; Stover, Bert; Morton, Jackie; Cocker, John; Seixas, Noah

    2015-01-01

    Various biomarkers of exposure have been explored as a way to quantitatively estimate an internal dose of manganese (Mn) exposure, but given the tight regulation of Mn in the body, inter-individual variability in baseline Mn levels, and variability in timing between exposure and uptake into various biological tissues, identification of a valuable and useful biomarker for Mn exposure has been elusive. Thus, a mixed model estimating variance components using restricted maximum likelihood was used to assess the within- and between-subject variance components in whole blood, plasma, and urine (MnB, MnP, and MnU, respectively) in a group of nine newly-exposed apprentice welders, on whom baseline and subsequent longitudinal samples were taken over a three month period. In MnB, the majority of variance was found to be between subjects (94%), while in MnP and MnU the majority of variance was found to be within subjects (79% and 99%, respectively), even when controlling for timing of sample. While blood seemed to exhibit a homeostatic control of Mn, plasma and urine, with the majority of the variance within subjects, did not. Results presented here demonstrate the importance of repeat measure or longitudinal study designs when assessing biomarkers of Mn, and the spurious associations that could result from cross-sectional analyses. Copyright © 2014 Elsevier GmbH. All rights reserved.
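
    The variance decomposition in this design is exactly what a random-intercept mixed model estimates. A sketch with simulated repeated biomarker measurements follows; the variance ratio is chosen to mimic a homeostatically controlled marker such as blood Mn, and all numbers are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
subjects = np.repeat(np.arange(9), 10)              # 9 welders, 10 samples
baseline = rng.normal(10.0, 2.0, 9)[subjects]       # between-subject SD 2.0
y = baseline + rng.normal(0.0, 0.5, subjects.size)  # within-subject SD 0.5
df = pd.DataFrame({"y": y, "subject": subjects})

fit = smf.mixedlm("y ~ 1", df, groups=df["subject"]).fit()
between = float(fit.cov_re.iloc[0, 0])   # between-subject variance
within = fit.scale                       # residual (within-subject) variance
print(f"between-subject share: {between / (between + within):.0%}")
```

    With the simulated variance ratio above, the between-subject share lands near the 94% the study reports for whole blood.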

  13. Contrast model for three-dimensional vehicles in natural lighting and search performance analysis

    NASA Astrophysics Data System (ADS)

    Witus, Gary; Gerhart, Grant R.; Ellis, R. Darin

    2001-09-01

    Ground vehicles in natural lighting tend to have significant and systematic variation in luminance through the presented area. This arises, in large part, from the vehicle surfaces having different orientations and shadowing relative to the source of illumination and the position of the observer. These systematic differences create the appearance of a structured 3D object. The 3D appearance is an important factor in search, figure-ground segregation, and object recognition. We present a contrast metric to predict search and detection performance that accounts for the 3D structure. The approach first computes the contrast of the front (or rear), side, and top surfaces. The vehicle contrast metric is the area-weighted sum of the absolute values of the contrasts of the component surfaces. The 3D structure contrast metric, together with target height, accounts for more than 80% of the variance in probability of detection and 75% of the variance in search time. When false alarm effects are discounted, they account for 89% of the variance in probability of detection and 95% of the variance in search time. The predictive power of the signature metric, when calibrated to half the data and evaluated against the other half, is 90% of the explanatory power.
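
    The metric itself is a one-line computation once per-surface contrasts are available; normalizing by total presented area is an assumption of this sketch, and the surface values are invented.

```python
def vehicle_contrast(surfaces):
    """Area-weighted sum of absolute surface contrasts.
    surfaces: iterable of (presented_area, contrast) pairs."""
    total = sum(area for area, _ in surfaces)
    return sum(area * abs(c) for area, c in surfaces) / total

# hypothetical front, side, and top surfaces of a vehicle
print(vehicle_contrast([(2.0, -0.15), (6.0, 0.05), (4.0, 0.30)]))
```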

  14. Genetic and environmental contributions to body mass index: comparative analysis of monozygotic twins, dizygotic twins and same-age unrelated siblings.

    PubMed

    Segal, N L; Feng, R; McGuire, S A; Allison, D B; Miller, S

    2009-01-01

    Earlier studies have established that a substantial percentage of variance in obesity-related phenotypes is explained by genetic components. However, only one study has used both virtual twins (VTs) and biological twins and was able to simultaneously estimate additive genetic, non-additive genetic, shared environmental and unshared environmental components in body mass index (BMI). Our current goal was to re-estimate four components of variance in BMI, applying a more rigorous model to biological and virtual multiples with additional data. Virtual multiples share the same family environment, offering unique opportunities to estimate common environmental influence on phenotypes that cannot be separated from the non-additive genetic component using only biological multiples. Data included 929 individuals from 164 monozygotic twin pairs, 156 dizygotic twin pairs, five triplet sets, one quadruplet set, 128 VT pairs, two virtual triplet sets and two virtual quadruplet sets. Virtual multiples consist of one biological child (or twins or triplets) plus one same-aged adoptee who are all raised together since infancy. We estimated the additive genetic, non-additive genetic, shared environmental and unshared random components in BMI using a linear mixed model. The analysis was adjusted for age, age², age³, height, height², height³, gender and race. Both non-additive genetic and common environmental contributions were significant in our model (P-values < 0.0001). No significant additive genetic contribution was found. In all, 63.6% (95% confidence interval (CI) 51.8-75.3%) of the total variance of BMI was explained by a non-additive genetic component, 25.7% (95% CI 13.8-37.5%) by a common environmental component and the remaining 10.7% by an unshared component. Our results suggest that genetic components play an essential role in BMI and that common environmental factors such as diet or exercise also affect BMI. This conclusion is consistent with our earlier study using a smaller sample and shows the utility of virtual multiples for separating non-additive genetic variance from common environmental variance.

  15. Adaptive Prior Variance Calibration in the Bayesian Continual Reassessment Method

    PubMed Central

    Zhang, Jin; Braun, Thomas M.; Taylor, Jeremy M.G.

    2012-01-01

    Use of the Continual Reassessment Method (CRM) and other model-based approaches to design in Phase I clinical trials has increased due to the ability of the CRM to identify the maximum tolerated dose (MTD) better than the 3+3 method. However, the CRM can be sensitive to the variance selected for the prior distribution of the model parameter, especially when a small number of patients are enrolled. While methods have emerged to adaptively select skeletons and to calibrate the prior variance only at the beginning of a trial, there has not been any approach developed to adaptively calibrate the prior variance throughout a trial. We propose three systematic approaches to adaptively calibrate the prior variance during a trial and compare them via simulation to methods proposed to calibrate the variance at the beginning of a trial. PMID:22987660

  16. Comparing estimates of genetic variance across different relationship models.

    PubMed

    Legarra, Andres

    2016-02-01

    Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.
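
    The correction is straightforward to apply given a relationship matrix: compute Dk as the average self-relationship minus the average of all relationships, then multiply the estimated variance component by it. The matrix and variance value below are invented for illustration.

```python
import numpy as np

def dk(K):
    """Dk = average self-relationship minus the average of all (self- and
    across-) relationships, as defined in the record above."""
    return np.mean(np.diag(K)) - np.mean(K)

# invented identity-by-state-like relationship matrix for four individuals
K = np.array([[1.00, 0.20, 0.10, 0.05],
              [0.20, 0.95, 0.15, 0.10],
              [0.10, 0.15, 1.05, 0.20],
              [0.05, 0.10, 0.20, 1.00]])
sigma2_hat = 2.4   # variance component estimated under this K
print(dk(K), dk(K) * sigma2_hat)   # variance referred to the population
```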

  17. Multilevel covariance regression with correlated random effects in the mean and variance structure.

    PubMed

    Quintero, Adrian; Lesaffre, Emmanuel

    2017-09-01

    Multivariate regression methods generally assume a constant covariance matrix for the observations. In case a heteroscedastic model is needed, the parametric and nonparametric covariance regression approaches can be restrictive in the literature. We propose a multilevel regression model for the mean and covariance structure, including random intercepts in both components and allowing for correlation between them. The implied conditional covariance function can be different across clusters as a result of the random effect in the variance structure. In addition, allowing for correlation between the random intercepts in the mean and covariance makes the model convenient for skewedly distributed responses. Furthermore, it permits us to analyse directly the relation between the mean response level and the variability in each cluster. Parameter estimation is carried out via Gibbs sampling. We compare the performance of our model to other covariance modelling approaches in a simulation study. Finally, the proposed model is applied to the RN4CAST dataset to identify the variables that impact burnout of nurses in Belgium. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Modeling Menstrual Cycle Length and Variability at the Approach of Menopause Using Hierarchical Change Point Models

    PubMed Central

    Huang, Xiaobi; Elliott, Michael R.; Harlow, Siobán D.

    2013-01-01

    As women approach menopause, the patterns of their menstrual cycle lengths change. To study these changes, we need to jointly model both the mean and variability of cycle length. Our proposed model incorporates separate mean and variance change points for each woman and a hierarchical model to link them together, along with regression components to include predictors of menopausal onset such as age at menarche and parity. Additional complexity arises from the fact that the calendar data have substantial missingness due to hormone use, surgery, and failure to report. We integrate multiple imputation and time-to-event modeling in a Bayesian estimation framework to deal with different forms of the missingness. Posterior predictive model checks are applied to evaluate the model fit. Our method successfully models patterns of women’s menstrual cycle trajectories throughout their late reproductive life and identifies change points for mean and variability of segment length, providing insight into the menopausal process. More generally, our model points the way toward increasing use of joint mean-variance models to predict health outcomes and better understand disease processes. PMID:24729638

  19. Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction.

    PubMed

    Huang, Ling; Zhang, Hongping; Xu, Peiliang; Geng, Jianghui; Wang, Cheng; Liu, Jingnan

    2017-02-27

    Ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have recently been introduced to model the spatial correlation and variability of the ionosphere; they intrinsically assume that the ionosphere field is stochastically stationary but do not take the random observational errors into account. In this paper, by treating the spatial statistical information on the ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and the measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the ionospheric signals and the TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with ordinary Kriging and with polynomial interpolations using spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of the results indicate that the daily ionospheric variations during the experimental period, as characterized by the proposed approach, agree well with the other methods, ranging from 10 to 80 TEC Units (TECU, 1 TECU = 1 × 10¹⁶ electrons/m²) with an overall mean of 28.2 TECU. The proposed method produces estimates whose general TEC level is as smooth as that of ordinary Kriging but with a smaller standard deviation, around 3 TECU, than the other methods. The residual results show that the interpolation precision of the proposed method is better than that of ordinary Kriging and polynomial interpolation by about 1.2 TECU and 0.7 TECU, respectively. The root mean squared error of the proposed new Kriging with variance components is within 1.5 TECU and is smaller than those of the other methods under comparison by about 1 TECU. When compared with ionospheric grid points, the mean squared error of the proposed method is within 6 TECU and smaller than that of Kriging, indicating that the proposed method can produce more accurate ionospheric delays and better estimation accuracy over the China regional area.
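
    For orientation, a bare-bones ordinary Kriging interpolator of the kind the paper compares against is sketched below; the spherical semivariogram and every parameter value are placeholders, and the paper's variance-component estimation for signal and noise is not shown.

    ```python
    import numpy as np

    def spherical_gamma(h, nugget=1.0, sill=25.0, a=8.0):
        """Spherical semivariogram (TECU^2; illustrative parameters)."""
        h = np.asarray(h, dtype=float)
        g = nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)
        return np.where(h == 0.0, 0.0, np.where(h >= a, sill, g))

    def ordinary_kriging(xy, z, x0):
        """Ordinary Kriging prediction of TEC at x0 from observations z at xy."""
        n = len(z)
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
        A = np.ones((n + 1, n + 1))
        A[:n, :n] = spherical_gamma(d)
        A[n, n] = 0.0                          # Lagrange-multiplier block
        b = np.ones(n + 1)
        b[:n] = spherical_gamma(np.linalg.norm(xy - x0, axis=1))
        w = np.linalg.solve(A, b)              # weights sum to 1 (unbiasedness)
        return w[:n] @ z

    xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.5], [2.0, 2.0]])  # stations
    z = np.array([22.0, 25.0, 27.0, 31.0])                           # TECU
    print(ordinary_kriging(xy, z, np.array([1.0, 1.0])))
    ```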

  20. Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction

    PubMed Central

    Huang, Ling; Zhang, Hongping; Xu, Peiliang; Geng, Jianghui; Wang, Cheng; Liu, Jingnan

    2017-01-01

    Ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have recently been introduced to model the spatial correlation and variability of the ionosphere; they intrinsically assume that the ionosphere field is stochastically stationary but do not take the random observational errors into account. In this paper, by treating the spatial statistical information on the ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and the measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the ionospheric signals and the TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with ordinary Kriging and with polynomial interpolations using spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of the results indicate that the daily ionospheric variations during the experimental period, as characterized by the proposed approach, agree well with the other methods, ranging from 10 to 80 TEC Units (TECU, 1 TECU = 1 × 10¹⁶ electrons/m²) with an overall mean of 28.2 TECU. The proposed method produces estimates whose general TEC level is as smooth as that of ordinary Kriging but with a smaller standard deviation, around 3 TECU, than the other methods. The residual results show that the interpolation precision of the proposed method is better than that of ordinary Kriging and polynomial interpolation by about 1.2 TECU and 0.7 TECU, respectively. The root mean squared error of the proposed new Kriging with variance components is within 1.5 TECU and is smaller than those of the other methods under comparison by about 1 TECU. When compared with ionospheric grid points, the mean squared error of the proposed method is within 6 TECU and smaller than that of Kriging, indicating that the proposed method can produce more accurate ionospheric delays and better estimation accuracy over the China regional area. PMID:28264424

  1. Evaluation of three lidar scanning strategies for turbulence measurements

    DOE PAGES

    Newman, Jennifer F.; Klein, Petra M.; Wharton, Sonia; ...

    2016-05-03

    Several errors occur when a traditional Doppler beam swinging (DBS) or velocity–azimuth display (VAD) strategy is used to measure turbulence with a lidar. To mitigate some of these errors, a scanning strategy was recently developed which employs six beam positions to independently estimate the u, v, and w velocity variances and covariances. In order to assess the ability of these different scanning techniques to measure turbulence, a Halo scanning lidar, WindCube v2 pulsed lidar, and ZephIR continuous wave lidar were deployed at field sites in Oklahoma and Colorado with collocated sonic anemometers. Results indicate that the six-beam strategy mitigates some of the errors caused by VAD and DBS scans, but the strategy is strongly affected by errors in the variance measured at the different beam positions. The ZephIR and WindCube lidars overestimated horizontal variance values by over 60 % under unstable conditions as a result of variance contamination, where additional variance components contaminate the true value of the variance. A correction method was developed for the WindCube lidar that uses variance calculated from the vertical beam position to reduce variance contamination in the u and v variance components. The correction method reduced WindCube variance estimates by over 20 % at both the Oklahoma and Colorado sites under unstable conditions, when variance contamination is largest. This correction method can be easily applied to other lidars that contain a vertical beam position and is a promising method for accurately estimating turbulence with commercially available lidars.
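
    The geometry behind variance contamination can be sketched in a few lines. Assuming a simplified two-beam east/west DBS retrieval at a fixed elevation angle (the 62° below is only a rough WindCube-like value, not a number from the paper), vertical velocities that differ between the two beam locations leak into the u estimate.

    ```python
    import numpy as np

    phi = np.deg2rad(62.0)               # beam elevation angle (assumed value)

    def u_from_dbs(vr_east, vr_west):
        """u from opposing radial velocities; exact only when w is identical
        at both beam locations, which fails in convective conditions."""
        return (vr_east - vr_west) / (2.0 * np.cos(phi))

    rng = np.random.default_rng(5)
    u = 8.0                              # true horizontal wind (m/s)
    w_e = rng.normal(0.0, 0.8, 10000)    # vertical velocity at the east beam
    w_w = rng.normal(0.0, 0.8, 10000)    # ... and at the west beam
    u_hat = u_from_dbs(u * np.cos(phi) + w_e * np.sin(phi),
                       -u * np.cos(phi) + w_w * np.sin(phi))
    # The u estimate inherits a w-driven term, (tan(phi)/2)^2 * var(w_e - w_w):
    print(u_hat.var(), (np.tan(phi) / 2.0) ** 2 * (w_e - w_w).var())
    ```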

  2. Evaluation of three lidar scanning strategies for turbulence measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newman, Jennifer F.; Klein, Petra M.; Wharton, Sonia

    Several errors occur when a traditional Doppler beam swinging (DBS) or velocity–azimuth display (VAD) strategy is used to measure turbulence with a lidar. To mitigate some of these errors, a scanning strategy was recently developed which employs six beam positions to independently estimate the u, v, and w velocity variances and covariances. In order to assess the ability of these different scanning techniques to measure turbulence, a Halo scanning lidar, WindCube v2 pulsed lidar, and ZephIR continuous wave lidar were deployed at field sites in Oklahoma and Colorado with collocated sonic anemometers. Results indicate that the six-beam strategy mitigates some of the errors caused by VAD and DBS scans, but the strategy is strongly affected by errors in the variance measured at the different beam positions. The ZephIR and WindCube lidars overestimated horizontal variance values by over 60 % under unstable conditions as a result of variance contamination, where additional variance components contaminate the true value of the variance. A correction method was developed for the WindCube lidar that uses variance calculated from the vertical beam position to reduce variance contamination in the u and v variance components. The correction method reduced WindCube variance estimates by over 20 % at both the Oklahoma and Colorado sites under unstable conditions, when variance contamination is largest. This correction method can be easily applied to other lidars that contain a vertical beam position and is a promising method for accurately estimating turbulence with commercially available lidars.

  3. An apparent contradiction: increasing variability to achieve greater precision?

    PubMed

    Rosenblatt, Noah J; Hurt, Christopher P; Latash, Mark L; Grabiner, Mark D

    2014-02-01

    To understand the relationship between variability of foot placement in the frontal plane and stability of gait patterns, we explored how constraining mediolateral foot placement during walking affects the structure of kinematic variance in the lower-limb configuration space during the swing phase of gait. Ten young subjects walked under three conditions: (1) unconstrained (normal walking), (2) constrained (walking overground with visual guides for foot placement to achieve the measured unconstrained step width) and (3) beam (walking on elevated beams spaced to achieve the measured unconstrained step width). The uncontrolled manifold analysis of the joint configuration variance was used to quantify two variance components, one that did not affect the mediolateral trajectory of the foot in the frontal plane ("good variance") and one that affected this trajectory ("bad variance"). Based on recent studies, we hypothesized that across conditions (1) the index of the synergy stabilizing the mediolateral trajectory of the foot (the normalized difference between the "good variance" and "bad variance") would systematically increase and (2) the changes in the synergy index would be associated with a disproportionate increase in the "good variance." Both hypotheses were confirmed. We conclude that an increase in the "good variance" component of the joint configuration variance may be an effective method of ensuring high stability of gait patterns during conditions requiring increased control of foot placement, particularly if a postural threat is present. Ultimately, designing interventions that encourage a larger amount of "good variance" may be a promising method of improving stability of gait patterns in populations such as older adults and neurological patients.
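
    A compact sketch of the uncontrolled-manifold split used above is given below, assuming a linearized task with a hypothetical Jacobian J mapping joint deviations to the mediolateral foot position; the per-degree-of-freedom normalization follows the usual UCM convention.

    ```python
    import numpy as np

    def synergy_index(configs, J):
        """Split joint-configuration variance into a component that leaves
        the task variable unchanged ('good') and one that changes it ('bad')."""
        X = configs - configs.mean(axis=0)
        d = np.linalg.matrix_rank(J)
        _, _, Vt = np.linalg.svd(J)
        ucm, ort = Vt[d:], Vt[:d]          # null-space / row-space bases of J
        n_dof = X.shape[1]
        v_good = np.sum((X @ ucm.T) ** 2) / (len(X) * (n_dof - d))
        v_bad = np.sum((X @ ort.T) ** 2) / (len(X) * d)
        v_tot = (v_good * (n_dof - d) + v_bad * d) / n_dof
        return v_good, v_bad, (v_good - v_bad) / v_tot  # last: synergy index

    J = np.array([[0.3, 0.5, 0.4, 0.2]])   # hypothetical 1 x 4 task Jacobian
    configs = np.random.default_rng(7).normal(size=(200, 4))
    print(synergy_index(configs, J))       # index near 0 for isotropic noise
    ```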

  4. Observations of the scale-dependent turbulence and evaluation of the flux-gradient relationship for sensible heat for a closed Douglas-Fir canopy in very weak wind conditions

    DOE PAGES

    Vickers, D.; Thomas, C.

    2014-05-13

    Observations of the scale-dependent turbulent fluxes and variances above, within and beneath a tall closed Douglas-Fir canopy in very weak winds are examined. The daytime subcanopy vertical velocity spectra exhibit a double-peak structure with peaks at time scales of 0.8 s and 51.2 s. A double-peak structure is also observed in the daytime subcanopy heat flux cospectra. The daytime momentum flux cospectra inside the canopy and in the subcanopy are characterized by a relatively large cross-wind component, likely due to the extremely light and variable winds, such that the definition of a mean wind direction, and subsequent partitioning of the momentum flux into along- and cross-wind components, has little physical meaning. Positive values of both momentum flux components in the subcanopy contribute to upward transfer of momentum, consistent with the observed mean wind speed profile. In the canopy at night at the smallest resolved scales, we find relatively large momentum fluxes (compared to at larger scales), and increasing vertical velocity variance with decreasing time scale, consistent with very small eddies likely generated by wake shedding from the canopy elements that transport momentum but not heat. We find unusually large values of the velocity aspect ratio within the canopy, consistent with enhanced suppression of the horizontal wind components compared to the vertical by the canopy. The flux-gradient approach for sensible heat flux is found to be valid for the subcanopy and above-canopy layers when considered separately; however, single source approaches that ignore the canopy fail because they make the heat flux appear to be counter-gradient when in fact it is aligned with the local temperature gradient in both the subcanopy and above-canopy layers. Modeled sensible heat fluxes above dark warm closed canopies are likely underestimated using typical values of the Stanton number.

  5. Observations of the scale-dependent turbulence and evaluation of the flux-gradient relationship for sensible heat for a closed Douglas-Fir canopy in very weak wind conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vickers, D.; Thomas, C.

    Observations of the scale-dependent turbulent fluxes and variances above, within and beneath a tall closed Douglas-Fir canopy in very weak winds are examined. The daytime subcanopy vertical velocity spectra exhibit a double-peak structure with peaks at time scales of 0.8 s and 51.2 s. A double-peak structure is also observed in the daytime subcanopy heat flux cospectra. The daytime momentum flux cospectra inside the canopy and in the subcanopy are characterized by a relatively large cross-wind component, likely due to the extremely light and variable winds, such that the definition of a mean wind direction, and subsequent partitioning of the momentum flux into along- and cross-wind components, has little physical meaning. Positive values of both momentum flux components in the subcanopy contribute to upward transfer of momentum, consistent with the observed mean wind speed profile. In the canopy at night at the smallest resolved scales, we find relatively large momentum fluxes (compared to at larger scales), and increasing vertical velocity variance with decreasing time scale, consistent with very small eddies likely generated by wake shedding from the canopy elements that transport momentum but not heat. We find unusually large values of the velocity aspect ratio within the canopy, consistent with enhanced suppression of the horizontal wind components compared to the vertical by the canopy. The flux-gradient approach for sensible heat flux is found to be valid for the subcanopy and above-canopy layers when considered separately; however, single source approaches that ignore the canopy fail because they make the heat flux appear to be counter-gradient when in fact it is aligned with the local temperature gradient in both the subcanopy and above-canopy layers. Modeled sensible heat fluxes above dark warm closed canopies are likely underestimated using typical values of the Stanton number.

  6. A combined approach of self-referencing and Principle Component Thermography for transient, steady, and selective heating scenarios

    NASA Astrophysics Data System (ADS)

    Omar, M. A.; Parvataneni, R.; Zhou, Y.

    2010-09-01

    This manuscript describes the implementation of a two-step processing procedure composed of self-referencing and Principle Component Thermography (PCT). The combined approach enables the processing of thermograms from transient (flash), steady (halogen) and selective (induction) thermal perturbations. First, the research discusses the three basic processing schemes typically applied in thermography, namely mathematical-transformation-based processing, curve-fitting processing, and direct contrast-based calculations. The proposed algorithm utilizes the self-referencing scheme to create a sub-sequence that contains the maximum contrast information and to compute the anomalies' depth values. The PCT step then operates on the sub-sequence frames by re-arranging their data content (pixel values) spatially and temporally and highlighting the data variance. The PCT is mainly used as a mathematical means to enhance the defects' contrast, enabling retrieval of their shape and size. The results show that the proposed combined scheme is effective in processing multiple-size defects in a sandwich steel structure in real time (<30 Hz) and with full spatial coverage, without the need for an a priori defect-free area.
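
    One common formulation of the PCT step is a singular value decomposition of the standardized space-time matrix built from the thermogram sub-sequence; the sketch below uses invented array shapes and random data as a stand-in for real thermograms.

    ```python
    import numpy as np

    def pct(frames, n_components=3):
        """PCT on a (T, H, W) thermogram sub-sequence: returns the leading
        empirical orthogonal functions as images."""
        T, H, W = frames.shape
        A = frames.reshape(T, H * W).T               # pixels x time
        A = (A - A.mean(axis=0)) / A.std(axis=0)     # standardize each frame
        U, S, Vt = np.linalg.svd(A, full_matrices=False)
        return U[:, :n_components].T.reshape(n_components, H, W)

    frames = np.random.default_rng(3).normal(size=(50, 64, 64))  # stand-in data
    eofs = pct(frames)   # defect contrast concentrates in a few leading EOFs
    ```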

  7. Multiple Damage Progression Paths in Model-Based Prognostics

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Goebel, Kai Frank

    2011-01-01

    Model-based prognostics approaches employ domain knowledge about a system, its components, and how they fail through the use of physics-based models. Component wear is driven by several different degradation phenomena, each resulting in its own damage progression path, and these paths overlap to contribute to the overall degradation of the component. We develop a model-based prognostics methodology using particle filters, in which the problem of characterizing multiple damage progression paths is cast as a joint state-parameter estimation problem. The estimate is represented as a probability distribution, allowing the prediction of end of life and remaining useful life within a probabilistic framework that supports uncertainty management. We also develop a novel variance control mechanism that maintains an uncertainty bound around the hidden parameters to limit the amount of estimation uncertainty and, consequently, reduce prediction uncertainty. We construct a detailed physics-based model of a centrifugal pump, to which we apply our model-based prognostics algorithms. We illustrate the operation of the prognostic solution with a number of simulation-based experiments and demonstrate the performance of the chosen approach when multiple damage mechanisms are active.

  8. Genus- and species-level identification of dermatophyte fungi by surface-enhanced Raman spectroscopy.

    PubMed

    Witkowska, Evelin; Jagielski, Tomasz; Kamińska, Agnieszka

    2018-03-05

    This paper demonstrates that surface-enhanced Raman spectroscopy (SERS) coupled with principal component analysis (PCA) can serve as a fast and reliable technique for detection and identification of dermatophyte fungi at both the genus and species level. Dermatophyte infections are the most common mycotic diseases worldwide, affecting a quarter of the human population. Currently, there is no optimal method for detection and identification of fungal diseases, as each has certain limitations. Here, for the first time, we have achieved, with high accuracy, differentiation of dermatophytes representing three major genera, i.e. Trichophyton, Microsporum, and Epidermophyton. The first two principal components (PCs), PC-1 and PC-2, together accounted for 97% of the total variance. Additionally, species-level identification within the Trichophyton genus has been performed. PC-1 and PC-2, which are the most diagnostically significant, explain 98% of the variance in the data obtained from spectra of Trichophyton rubrum, Trichophyton mentagrophytes, Trichophyton interdigitale and Trichophyton tonsurans. This study offers a new diagnostic approach for the identification of dermatophytes. Being fast, reliable and cost-effective, it has the potential to be incorporated into clinical practice to improve diagnostics of medically important fungi. Copyright © 2017 Elsevier B.V. All rights reserved.
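
    The analysis pattern, PCA on preprocessed spectra followed by inspection of the PC-1/PC-2 scores, can be mimicked on synthetic stand-in data; the class means, sample counts, and spectral length below are all inventions.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    # Stand-in "spectra" for three genera: 20 samples each, 800 wavenumber bins.
    spectra = np.concatenate([rng.normal(m, 0.05, size=(20, 800))
                              for m in (0.2, 0.5, 0.9)])
    pca = PCA(n_components=2).fit(spectra)
    scores = pca.transform(spectra)             # samples separate along PC-1
    print(pca.explained_variance_ratio_.sum())  # variance captured by PC-1+PC-2
    ```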

  9. Genus- and species-level identification of dermatophyte fungi by surface-enhanced Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Witkowska, Evelin; Jagielski, Tomasz; Kamińska, Agnieszka

    2018-03-01

    This paper demonstrates that surface-enhanced Raman spectroscopy (SERS) coupled with principal component analysis (PCA) can serve as a fast and reliable technique for detection and identification of dermatophyte fungi at both the genus and species level. Dermatophyte infections are the most common mycotic diseases worldwide, affecting a quarter of the human population. Currently, there is no optimal method for detection and identification of fungal diseases, as each has certain limitations. Here, for the first time, we have achieved, with high accuracy, differentiation of dermatophytes representing three major genera, i.e. Trichophyton, Microsporum, and Epidermophyton. The first two principal components (PCs), PC-1 and PC-2, together accounted for 97% of the total variance. Additionally, species-level identification within the Trichophyton genus has been performed. PC-1 and PC-2, which are the most diagnostically significant, explain 98% of the variance in the data obtained from spectra of Trichophyton rubrum, Trichophyton mentagrophytes, Trichophyton interdigitale and Trichophyton tonsurans. This study offers a new diagnostic approach for the identification of dermatophytes. Being fast, reliable and cost-effective, it has the potential to be incorporated into clinical practice to improve diagnostics of medically important fungi.

  10. Multifactorial inheritance with cultural transmission and assortative mating. II. a general model of combined polygenic and cultural inheritance.

    PubMed Central

    Cloninger, C R; Rice, J; Reich, T

    1979-01-01

    A general linear model of combined polygenic-cultural inheritance is described. The model allows for phenotypic assortative mating, common environment, maternal and paternal effects, and genic-cultural correlation. General formulae for phenotypic correlation between family members in extended pedigrees are given for both primary and secondary assortative mating. A FORTRAN program BETA, available upon request, is used to provide maximum likelihood estimates of the parameters from reported correlations. American data about IQ and Burks' culture index are analyzed. Both cultural and genetic components of phenotypic variance are observed to make significant and substantial contributions to familial resemblance in IQ. The correlation between the environments of DZ twins is found to equal that of singleton sibs, not that of MZ twins. Burks' culture index is found to be an imperfect measure of midparent IQ rather than an index of home environment as previously assumed. Conditions under which the parameters of the model may be uniquely and precisely estimated are discussed. Interpretation of variance components in the presence of assortative mating and genic-cultural covariance is reviewed. A conservative, but robust, approach to the use of environmental indices is described. PMID:453202

  11. Untargeted MS-based small metabolite identification from the plant leaves and stems of Impatiens balsamina.

    PubMed

    Chua, Lee Suan

    2016-09-01

    The identification of plant metabolites is very important for the understanding of plant physiology including plant growth, development and defense mechanism, particularly for herbal medicinal plants. The metabolite profile could possibly be used for future drug discovery since the pharmacological activities of the indigenous herbs have been proven for centuries. An untargeted mass spectrometric approach was used to identify metabolites from the leaves and stems of Impatiens balsamina using LC-DAD-MS/MS. The putative compounds are mostly from the groups of phenolic, organic and amino acids which are essential for plant growth and as intermediates for other compounds. Alanine appeared to be the main amino acid in the plant because many alanine derived metabolites were detected. There are also several secondary metabolites from the groups of benzopyrones, benzofuranones, naphthoquinones, alkaloids and flavonoids. The widely reported bioactive components such as kaempferol, quercetin and their glycosylated, lawsone and its derivatives were detected in this study. The results also revealed that aqueous methanol could extract flavonoids better than water, and mostly, flavonoids were detected from the leaf samples. The score plots of component analysis show that there is a minor variance in the metabolite profiles of water and aqueous methanolic extracts with 21.5 and 30.5% of the total variance for the first principal component at the positive and negative ion modes, respectively. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  12. Estimating Sampling Selection Bias in Human Genetics: A Phenomenological Approach

    PubMed Central

    Risso, Davide; Taglioli, Luca; De Iasio, Sergio; Gueresi, Paola; Alfani, Guido; Nelli, Sergio; Rossi, Paolo; Paoli, Giorgio; Tofanelli, Sergio

    2015-01-01

    This research is the first empirical attempt to calculate the various components of the hidden bias associated with the sampling strategies routinely-used in human genetics, with special reference to surname-based strategies. We reconstructed surname distributions of 26 Italian communities with different demographic features across the last six centuries (years 1447–2001). The degree of overlapping between "reference founding core" distributions and the distributions obtained from sampling the present day communities by probabilistic and selective methods was quantified under different conditions and models. When taking into account only one individual per surname (low kinship model), the average discrepancy was 59.5%, with a peak of 84% by random sampling. When multiple individuals per surname were considered (high kinship model), the discrepancy decreased by 8–30% at the cost of a larger variance. Criteria aimed at maximizing locally-spread patrilineages and long-term residency appeared to be affected by recent gene flows much more than expected. Selection of the more frequent family names following low kinship criteria proved to be a suitable approach only for historically stable communities. In any other case true random sampling, despite its high variance, did not return more biased estimates than other selective methods. Our results indicate that the sampling of individuals bearing historically documented surnames (founders' method) should be applied, especially when studying the male-specific genome, to prevent an over-stratification of ancient and recent genetic components that heavily biases inferences and statistics. PMID:26452043

  13. Estimating Sampling Selection Bias in Human Genetics: A Phenomenological Approach.

    PubMed

    Risso, Davide; Taglioli, Luca; De Iasio, Sergio; Gueresi, Paola; Alfani, Guido; Nelli, Sergio; Rossi, Paolo; Paoli, Giorgio; Tofanelli, Sergio

    2015-01-01

    This research is the first empirical attempt to calculate the various components of the hidden bias associated with the sampling strategies routinely-used in human genetics, with special reference to surname-based strategies. We reconstructed surname distributions of 26 Italian communities with different demographic features across the last six centuries (years 1447-2001). The degree of overlapping between "reference founding core" distributions and the distributions obtained from sampling the present day communities by probabilistic and selective methods was quantified under different conditions and models. When taking into account only one individual per surname (low kinship model), the average discrepancy was 59.5%, with a peak of 84% by random sampling. When multiple individuals per surname were considered (high kinship model), the discrepancy decreased by 8-30% at the cost of a larger variance. Criteria aimed at maximizing locally-spread patrilineages and long-term residency appeared to be affected by recent gene flows much more than expected. Selection of the more frequent family names following low kinship criteria proved to be a suitable approach only for historically stable communities. In any other case true random sampling, despite its high variance, did not return more biased estimates than other selective methods. Our results indicate that the sampling of individuals bearing historically documented surnames (founders' method) should be applied, especially when studying the male-specific genome, to prevent an over-stratification of ancient and recent genetic components that heavily biases inferences and statistics.

  14. Save money by understanding variance and tolerancing.

    PubMed

    Stuart, K

    2007-01-01

    Manufacturing processes are inherently variable, which results in component and assembly variance. Unless process capability, variance and tolerancing are fully understood, incorrect design tolerances may be applied, which will lead to more expensive tooling, inflated production costs, high reject rates, product recalls and excessive warranty costs. A methodology is described for correctly allocating tolerances and performing appropriate analyses.
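
    A worked contrast between worst-case and statistical (root-sum-square) stack-up makes the point; the tolerances below and the 3-sigma process assumption are illustrative only.

    ```python
    import numpy as np

    tols = np.array([0.10, 0.05, 0.08])      # component tolerances (mm, invented)
    worst_case = tols.sum()                  # additive stack-up: 0.23 mm
    sigma = tols / 3.0                       # assume each tolerance spans 3 sigma
    rss = 3.0 * np.sqrt(np.sum(sigma ** 2))  # statistical stack-up: ~0.14 mm
    print(worst_case, rss)
    ```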

  15. Environmental Influences on Well-Being: A Dyadic Latent Panel Analysis of Spousal Similarity

    ERIC Educational Resources Information Center

    Schimmack, Ulrich; Lucas, Richard E.

    2010-01-01

    This article uses dyadic latent panel analysis (DLPA) to examine environmental influences on well-being. DLPA requires longitudinal dyadic data. It decomposes the observed variance of both members of a dyad into a trait, state, and an error component. Furthermore, state variance is decomposed into initial and new state variance. Total observed…

  16. Correlation between academic achievement goal orientation and the performance of Malaysian students in an Indian medical school.

    PubMed

    Barkur, Rajashekar Rao; Govindan, Sreejith; Kamath, Asha

    2013-01-01

    According to goal orientation theory, achievement goals are defined as the terminal point towards which one's efforts are directed. The four academic achievement goal orientations commonly recognised are mastery, performance approach, performance avoidance and work avoidance. The objective of this study was to understand the goal orientation of second year undergraduate medical students and how it correlates with their academic performance. The study population consisted of 244 second year Bachelor of Medicine and Bachelor of Surgery (MBBS) students of Melaka Manipal Medical College, Manipal campus, Manipal University, India. Students were categorised as high performers and low performers based on their first year university examination marks. Their goal orientations were assessed through a validated questionnaire developed by Was et al. These components were analysed by independent-sample t-tests and correlated with the students' first year university examination marks. Confirmatory component factor analysis extracted four factors, which accounted for 40.8% of the total variance in goal orientation. The performance approach goal orientation alone explained 16.7% of the variance, followed by mastery (10.8%), performance avoidance (7.7%) and work avoidance (5.7%). The Cronbach's alpha for the 19 items, which contributed to the internal consistency of the tool, was observed to be 0.635. A strong positive correlation was shown between the performance approach, performance avoidance and work avoidance orientations. Of the four goal orientations, only the mean scores in work avoidance orientation differed between low performers and high performers (5.0 vs. 4.3; P = 0.0003). A work avoidance type of goal orientation among the low performer group may account for their lower performance compared with the high performer group. This indicates that academic achievement goal orientation may play a role in the performance of undergraduate medical students.

  17. Exploring individual differences in children's mathematical skills: a correlational and dimensional approach.

    PubMed

    Sigmundsson, H; Polman, R C J; Lorås, H

    2013-08-01

    Individual differences in mathematical skills are typically explained by an innate capability to solve mathematical tasks. At the behavioural level this implies a consistent level of mathematical achievement that can be captured by strong relationships between tasks, as well as by a single statistical dimension that underlies performance on all mathematical tasks. To investigate this general assumption, the present study explored interrelations and dimensions of mathematical skills. For this purpose, 68 ten-year-old children from two schools were tested using nine mathematics tasks from the Basic Knowledge in Mathematics Test. Relatively low-to-moderate correlations between the mathematics tasks indicated most tasks shared less than 25% of their variance. There were four principal components, accounting for 70% of the variance in mathematical skill across tasks and participants. The high specificity in mathematical skills was discussed in relation to the principle of task specificity of learning.

  18. Rapid Communication: Large exploitable genetic variability exists to shorten age at slaughter in cattle.

    PubMed

    Berry, D P; Cromie, A R; Judge, M M

    2017-10-01

    Apprehension among consumers is mounting about the efficiency with which cattle convert feedstuffs into human-edible protein and energy, as well as the consequential effects on the environment. Most (genetic) studies that attempt to address these issues have focused on efficiency metrics defined over a certain period of an animal's life cycle, predominantly the period representing the linear phase of growth. The age at which an animal reaches the carcass specifications for slaughter, however, is also known to vary between breeds; less is known about the extent of the within-breed variability in age at slaughter. Therefore, the objective of the present study was to quantify the phenotypic and genetic variability in the age at which cattle reach a predefined carcass weight and subcutaneous fat cover. A novel trait, labeled here as the deviation in age at slaughter (DAGE), was represented by the unexplained variability from a statistical model with age at slaughter as the dependent variable and with the fixed effects of, among others, carcass weight and fat score (scale 1 to 15, scored by video image analysis of the carcass at slaughter). Variance components for DAGE were estimated using either a 2-step approach (i.e., the DAGE phenotype derived first and then variance components estimated) or a 1-step approach (i.e., variance components for age at slaughter estimated directly in a mixed model that included the fixed effects of, among others, carcass weight and carcass fat score as well as a random direct additive genetic effect). The raw phenotypic SD in DAGE was 44.2 d. The genetic SD and heritability for DAGE estimated using the 1-step or 2-step models varied from 14.2 to 15.1 d and from 0.23 to 0.26 (SE 0.02), respectively. Assuming the (genetic) variability in the number of days from birth to reaching desired carcass specifications can be exploited without any associated unfavorable repercussions, considerable potential exists to improve not only the (feed) efficiency of the animal and farm system but also the environmental footprint of the system. The beauty of the proposed approach, relative to strategies that select directly on the feed intake complex and enteric methane emissions, is that data on age at slaughter are generally readily available. Of course, faster gains may potentially be achieved if a dual objective of improving animal efficiency per day coupled with reducing days to slaughter were pursued.
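
    The 2-step definition of the trait can be sketched as a plain least-squares adjustment; every coefficient and sample size below is simulated, not an estimate from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 500
    carcass_wt = rng.normal(350.0, 30.0, n)        # kg (invented)
    fat_score = rng.integers(1, 16, n).astype(float)
    age = 480.0 + 0.4 * carcass_wt + 3.0 * fat_score + rng.normal(0.0, 44.0, n)
    X = np.column_stack([np.ones(n), carcass_wt, fat_score])
    beta, *_ = np.linalg.lstsq(X, age, rcond=None)
    dage = age - X @ beta    # deviation in age at slaughter (DAGE), step 1
    print(dage.std())        # compare with the reported ~44-day phenotypic SD
    ```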

  19. Variance approach for multi-objective linear programming with fuzzy random of objective function coefficients

    NASA Astrophysics Data System (ADS)

    Indarsih, Indrati, Ch. Rini

    2016-02-01

    In this paper, we define the variance of fuzzy random variables through alpha levels. We present a theorem showing that the variance of a fuzzy random variable is a fuzzy number. We consider multi-objective linear programming (MOLP) with fuzzy random objective function coefficients and solve the problem by a variance approach. The approach transforms the MOLP with fuzzy random objective function coefficients into an MOLP with fuzzy objective function coefficients. By weighting methods, we obtain a linear program with fuzzy coefficients, which we solve by the simplex method for fuzzy linear programming.

  20. Identifying the interferences of irrigation on evapotranspiration variability over the Northern High Plains

    NASA Astrophysics Data System (ADS)

    Zeng, R.; Cai, X.

    2016-12-01

    Irrigation has considerably interfered with hydrological processes in arid and semi-arid areas with heavy irrigated agriculture. With the increasing demand for food production and the rising evaporative demand due to climate change, irrigation water consumption is expected to increase, which would aggravate the interference with hydrologic processes. Current studies focus on the impact of irrigation on the mean value of evapotranspiration (ET) at either local or regional scale; however, how irrigation changes the variability of ET is not well understood. This study analyzes the impact of extensive irrigation on ET variability in the Northern High Plains. We apply an ET variance decomposition framework developed from our previous work to quantify the effects of both climate and irrigation on ET variance in Northern High Plains watersheds. Based on climate and water table observations, we assess the monthly ET variance and its components for two periods: the 1930s-1960s, with less irrigation development, and the 1970s-2010s, with more development. It is found that irrigation not only caused the well-recognized groundwater drawdown and stream depletion problems in the region, but also buffered ET variance from climatic fluctuations. In addition to increasing food productivity, irrigation also stabilizes crop yield by mitigating the impact of hydroclimatic variability. With complementary water supply from irrigation, ET often approaches potential ET, and thus the observed ET variance is attributed more to climatic variables, especially temperature; meanwhile, irrigation causes significant seasonal fluctuations in groundwater storage. For sustainable water resources management in the Northern High Plains, we argue that both the mean value and the variance of ET should be considered together in regulating irrigation in this region.

  1. Robust Means Modeling: An Alternative for Hypothesis Testing of Independent Means under Variance Heterogeneity and Nonnormality

    ERIC Educational Resources Information Center

    Fan, Weihua; Hancock, Gregory R.

    2012-01-01

    This study proposes robust means modeling (RMM) approaches for hypothesis testing of mean differences for between-subjects designs in order to control the biasing effects of nonnormality and variance inequality. Drawing from structural equation modeling (SEM), the RMM approaches make no assumption of variance homogeneity and employ robust…

  2. Selective impact of disease on short-term and long-term components of self-reported memory: a population-based HUNT study

    PubMed Central

    Almkvist, Ove; Bosnes, Ole; Bosnes, Ingunn; Stordal, Eystein

    2017-01-01

    Background Subjective memory is commonly considered to be a unidimensional measure. However, theories of performance-based memory suggest that subjective memory could be divided into more than one dimension. Objective To divide subjective memory into theoretically related components of memory and explore the relationship to disease. Methods In this study, various aspects of self-reported memory were studied with respect to demographics and diseases in the third wave of the HUNT epidemiological study in middle Norway. The study included all individuals 55 years of age or older who responded to a nine-item questionnaire on subjective memory and questionnaires on health (n=18 633). Results A principal component analysis of the memory items yielded two memory components (the criterion used was an eigenvalue above 1), which together accounted for 54% of the total variance. The components were interpreted as long-term memory (LTM; the first component; 43% of the total variance) and short-term memory (STM; the second component; 11% of the total variance). Memory impairment was significantly related to all diseases (except Bechterew's disease), most strongly to brain infarction, heart failure, diabetes, cancer, chronic obstructive pulmonary disease and whiplash. For most diseases, the STM component was more affected than the LTM component; however, in cancer, the opposite pattern was seen. Conclusions Subjective memory impairment as measured in HUNT contained two components, which were differentially associated with diseases. PMID:28490551

  3. Least Squares Solution of Small Sample Multiple-Master PSInSAR System

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Ding, Xiao Li; Lu, Zhong

    2010-03-01

    In this paper we propose a least squares based approach for multi-temporal SAR interferometry that allows the deformation rate to be estimated without phase unwrapping. The approach utilizes a series of multi-master wrapped differential interferograms with short baselines and focuses only on the arcs constructed by two nearby points at which there are no phase ambiguities. During the estimation, an outlier detector is used to identify and remove the arcs with phase ambiguities, and the pseudoinverse of the a priori variance component matrix is taken as the weight of the correlated observations in the model. The parameters at points can be obtained by an indirect adjustment model with constraints when several reference points are available. The proposed approach is verified by a set of simulated data.

  4. Locating anger in the hierarchical structure of affect: comment on Carver and Harmon-Jones (2009).

    PubMed

    Watson, David

    2009-03-01

    C. S. Carver and E. Harmon-Jones (2009) have presented considerable evidence to support their argument that "anger relates to an appetitive or approach motivational system, whereas anxiety relates to an aversive or avoidance motivational system" (p. 183). However, they have failed to take sufficient account of the extensive psychometric data indicating that anger is strongly related to anxiety (and other negative affects) and more weakly associated with the positive affects. Considering all of the available evidence, the most accurate conclusion is that anger shows both approach and avoidance properties. Moreover, viewed in the context of the hierarchical structure of affect, some evidence suggests that the nonspecific component of anger (i.e., its shared variance with the other negative affects) is primarily related to the aversive or avoidance motivational system, whereas its specific component (i.e., its unique qualities that distinguish it from other negative affects) has a stronger link to the appetitive or approach system. The author concludes by considering the broader implications of these data for affective structure. (c) 2009 APA, all rights reserved.

  5. A Bayesian framework for adaptive selection, calibration, and validation of coarse-grained models of atomistic systems

    NASA Astrophysics Data System (ADS)

    Farrell, Kathryn; Oden, J. Tinsley; Faghihi, Danial

    2015-08-01

    A general adaptive modeling algorithm for selection and validation of coarse-grained models of atomistic systems is presented. A Bayesian framework is developed to address uncertainties in parameters, data, and model selection. Algorithms for computing output sensitivities to parameter variances, model evidence and posterior model plausibilities for given data, and for computing what are referred to as Occam Categories in reference to a rough measure of model simplicity, make up components of the overall approach. Computational results are provided for representative applications.

  6. Economic sustainability assessment in semi-steppe rangelands.

    PubMed

    Mofidi Chelan, Morteza; Alijanpour, Ahmad; Barani, Hossein; Motamedi, Javad; Azadi, Hossein; Van Passel, Steven

    2018-05-08

    This study was conducted to determine indices and components for economic sustainability assessment in the pastoral units of the Sahand summer rangelands. The method was a descriptive-analytical survey of experts and researchers using questionnaires. Analysis of variance showed that the mean values of the economic components differ significantly from each other, with the efficiency component having the highest mean value (0.57). Analysis of the rangeland pastoral units with the technique for order-preference by similarity to ideal solution (TOPSIS) indicated that, from an economic sustainability standpoint, the Garehgol (Ci = 0.519) and Badir Khan (Ci = 0.129) pastoral units ranked first and last, respectively. This study provides policy makers with a clear understanding of existing resources and opportunities, which is crucial for approaching economically sustainable development. Accordingly, this study can help better define sustainable development goals and monitor the progress of achieving them. Copyright © 2018 Elsevier B.V. All rights reserved.
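
    For readers unfamiliar with TOPSIS, a minimal implementation is sketched below; the decision matrix, weights, and benefit/cost labels are invented, and only the closeness coefficient Ci mirrors the abstract.

    ```python
    import numpy as np

    def topsis(M, w, benefit):
        """Closeness coefficient Ci of each alternative (row of M) to the
        ideal solution; benefit marks criteria where larger is better."""
        R = M / np.linalg.norm(M, axis=0)      # vector-normalize each criterion
        V = R * w                              # apply criterion weights
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_pos = np.linalg.norm(V - ideal, axis=1)
        d_neg = np.linalg.norm(V - anti, axis=1)
        return d_neg / (d_pos + d_neg)         # Ci in [0, 1]; larger ranks higher

    M = np.array([[0.80, 120.0, 3.2],          # pastoral units x criteria
                  [0.55, 90.0, 4.1],           # (all values invented)
                  [0.90, 150.0, 2.5]])
    w = np.array([0.5, 0.3, 0.2])
    benefit = np.array([True, True, False])    # third criterion treated as cost
    print(topsis(M, w, benefit))
    ```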

  7. Damping Effect of an Unsaturated-Saturated System on Tempospatial Variations of Pressure Head and Specific Flux

    NASA Astrophysics Data System (ADS)

    Yang, C.; Zhang, Y. K.; Liang, X.

    2014-12-01

    The damping effect of an unsaturated-saturated system on tempospatial variations of pressure head and specific flux was investigated. The variance and covariance of both pressure head and specific flux in such a system due to a white noise infiltration were obtained by solving the moment equations of water flow in the system and verified with Monte Carlo simulations. It was found that both the pressure head and specific flux in this case are temporally non-stationary. The variance is zero at early time due to the deterministic initial condition used, then increases with time, and approaches an asymptotic limit at late time. Both pressure head and specific flux are also non-stationary in space, since the variance decreases from source to sink. The unsaturated-saturated system behaves as a noise filter: it damps both the pressure head and specific flux, i.e., reduces their variations and enhances their correlation. The effect is stronger in the upper unsaturated zone than in the lower unsaturated zone and the saturated zone. As a noise filter, the unsaturated-saturated system is mainly a low-pass filter, filtering out the high-frequency components in the time series of hydrological variables. The damping effect is much stronger in the unsaturated zone than in the saturated zone.
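
    The low-pass filtering claim has a simple discrete analogue: white noise pushed through a first-order filter loses most of its variance. The sketch below is an analogy only, not the paper's moment-equation solution, and the filter constant is arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=20000)       # white-noise "infiltration" series
    alpha = 0.02                     # arbitrary first-order filter constant
    y = np.empty_like(x)
    y[0] = 0.0
    for t in range(1, x.size):       # damps the high-frequency components
        y[t] = (1.0 - alpha) * y[t - 1] + alpha * x[t]
    print(x.var(), y.var())          # output variance is strongly reduced
    ```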

  8. A classical regression framework for mediation analysis: fitting one model to estimate mediation effects.

    PubMed

    Saunders, Christina T; Blume, Jeffrey D

    2017-10-26

    Mediation analysis explores the degree to which an exposure's effect on an outcome is diverted through a mediating variable. We describe a classical regression framework for conducting mediation analyses in which estimates of causal mediation effects and their variance are obtained from the fit of a single regression model. The vector of changes in exposure pathway coefficients, which we named the essential mediation components (EMCs), is used to estimate standard causal mediation effects. Because these effects are often simple functions of the EMCs, an analytical expression for their model-based variance follows directly. Given this formula, it is instructive to revisit the performance of routinely used variance approximations (e.g., delta method and resampling methods). Requiring the fit of only one model reduces the computation time required for complex mediation analyses and permits the use of a rich suite of regression tools that are not easily implemented on a system of three equations, as would be required in the Baron-Kenny framework. Using data from the BRAIN-ICU study, we provide examples to illustrate the advantages of this framework and compare it with the existing approaches. © The Author 2017. Published by Oxford University Press.

  9. Additive-dominance genetic model analyses for late-maturity alpha-amylase activity in a bread wheat factorial crossing population.

    PubMed

    Rasul, Golam; Glover, Karl D; Krishnan, Padmanaban G; Wu, Jixiang; Berzonsky, William A; Ibrahim, Amir M H

    2015-12-01

    Elevated levels of late maturity α-amylase activity (LMAA) can result in low falling number scores, reduced grain quality, and downgrading of wheat (Triticum aestivum L.) class. A mating population was developed by crossing parents with different levels of LMAA. The F2 and F3 hybrids and their parents were evaluated for LMAA, and data were analyzed using the R software package 'qgtools' integrated with an additive-dominance genetic model and a mixed linear model approach. Simulated results showed high testing powers for additive and additive × environment variances, and comparatively low powers for dominance and dominance × environment variances. All variance components and their proportions of the phenotypic variance for the parents and hybrids were significant except for the dominance × environment variance. The estimated narrow-sense and broad-sense heritabilities for LMAA were 14 and 54%, respectively. Highly significant negative additive effects for parents suggest that the spring wheat cultivars 'Lancer' and 'Chester' can serve as good general combiners, and that 'Kinsman' and 'Seri-82' had negative specific combining ability in some hybrids despite their own significant positive additive effects, suggesting they can be used as parents to reduce LMAA levels. Seri-82 showed a very good general combining ability effect when used as a male parent, indicating the importance of reciprocal effects. Highly significant negative dominance effects and high-parent heterosis for hybrids demonstrated that the specific hybrid combinations Chester × Kinsman, 'Lerma52' × Lancer, Lerma52 × 'LoSprout' and 'Janz' × Seri-82 could be generated to produce cultivars with significantly reduced LMAA levels.

  10. Self-esteem, social participation, and quality of life in patients with multiple sclerosis.

    PubMed

    Mikula, Pavol; Nagyova, Iveta; Krokavcova, Martina; Vitkova, Marianna; Rosenberger, Jaroslav; Szilasiova, Jarmila; Gdovinova, Zuzana; Stewart, Roy E; Groothoff, Johan W; van Dijk, Jitse P

    2017-07-01

    The aim of this study is to explore whether self-esteem and social participation are associated with the physical and mental quality of life (Physical Component Summary, Mental Component Summary) and whether self-esteem can mediate the association between these variables. We collected information from 118 consecutive multiple sclerosis patients. Age, gender, disease duration, disability status, and participation were significant predictors of Physical Component Summary, explaining 55.4 percent of the total variance. Self-esteem fully mediated the association between social participation and Mental Component Summary (estimate/standard error = -4.872; p < 0.001) and along with disability status explained 48.3 percent of the variance in Mental Component Summary. These results can be used in intervention and educational programs.

  11. Meta-analysis with missing study-level sample variance data.

    PubMed

    Chowdhry, Amit K; Dworkin, Robert H; McDermott, Michael P

    2016-07-30

    We consider a study-level meta-analysis with a normally distributed outcome variable and possibly unequal study-level variances, where the object of inference is the difference in means between a treatment and control group. A common complication in such an analysis is missing sample variances for some studies. A frequently used approach is to impute the weighted (by sample size) mean of the observed variances (mean imputation). Another approach is to include only those studies with variances reported (complete case analysis). Both mean imputation and complete case analysis are only valid under the missing-completely-at-random assumption, and even then the inverse variance weights produced are not necessarily optimal. We propose a multiple imputation method employing gamma meta-regression to impute the missing sample variances. Our method takes advantage of study-level covariates that may be used to provide information about the missing data. Through simulation studies, we show that multiple imputation, when the imputation model is correctly specified, is superior to competing methods in terms of confidence interval coverage probability and type I error probability when testing a specified group difference. Finally, we describe a similar approach to handling missing variances in cross-over studies. Copyright © 2016 John Wiley & Sons, Ltd.
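
    A stripped-down version of the imputation idea, a gamma GLM with log link fitted to the observed variances and used to draw a value for a missing one, is sketched below on simulated inputs; the full multiple-imputation procedure (repeated draws with parameter uncertainty) is omitted.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(9)
    logn = rng.uniform(3.0, 6.0, size=40)   # covariate, e.g. log sample size
    s2 = rng.gamma(4.0, np.exp(2.0 - 0.3 * logn) / 4.0)  # observed variances
    X = sm.add_constant(logn)
    fit = sm.GLM(s2, X,
                 family=sm.families.Gamma(link=sm.families.links.Log())).fit()

    mu = fit.predict(np.array([[1.0, 4.5]]))[0]  # mean for a study missing s2
    k = 1.0 / fit.scale                          # gamma shape from dispersion
    imputed = rng.gamma(k, mu / k)               # one stochastic imputation draw
    ```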

  12. Subacute casemix classification for stroke rehabilitation in Australia. How well does AN-SNAP v2 explain variance in outcomes?

    PubMed

    Kohler, Friedbert; Renton, Roger; Dickson, Hugh G; Estell, John; Connolly, Carol E

    2011-02-01

    We sought the best predictors of length of stay, discharge destination and functional improvement for inpatients undergoing rehabilitation following a stroke, and compared these predictors against AN-SNAP v2. The Oxfordshire classification subgroup, sociodemographic data and functional data were collected for patients admitted between 1997 and 2007 with a diagnosis of recent stroke. The data were factor analysed using principal components analysis for categorical data (CATPCA). Categorical regression analysis was performed to determine the best predictors of length of stay, discharge destination, and functional improvement. A total of 1154 patients were included in the study. Principal components analysis indicated that the data were effectively unidimensional, with length of stay being the most important component. Regression analysis demonstrated that the best predictor was the admission motor FIM score, explaining 38.9% of the variance for length of stay, 37.4% of the variance for functional improvement and 16% of the variance for discharge destination. The best explanatory variable in our inpatient rehabilitation service is the admission motor FIM. The AN-SNAP v2 classification is a less effective explanatory variable. This needs to be taken into account when using the AN-SNAP v2 classification for clinical or funding purposes.

  13. Using Structural Equation Modeling To Fit Models Incorporating Principal Components.

    ERIC Educational Resources Information Center

    Dolan, Conor; Bechger, Timo; Molenaar, Peter

    1999-01-01

    Considers models incorporating principal components from the perspectives of structural-equation modeling. These models include the following: (1) the principal-component analysis of patterned matrices; (2) multiple analysis of variance based on principal components; and (3) multigroup principal-components analysis. Discusses fitting these models…

  14. Variance computations for functional of absolute risk estimates.

    PubMed

    Pfeiffer, R M; Petracci, E

    2011-07-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates.
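
    For intuition, the influence-function recipe can be applied to the sample mean, whose influence function is x minus the mean; this toy example is ours, not one of the paper's absolute-risk functionals.

    ```python
    import numpy as np

    x = np.random.default_rng(2).exponential(size=500)
    IF = x - x.mean()                      # influence function of the mean
    var_mean = np.mean(IF ** 2) / x.size   # IF-based variance of the estimator
    print(var_mean, x.var() / x.size)      # matches the textbook formula here
    ```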

  15. Variance computations for functional of absolute risk estimates

    PubMed Central

    Pfeiffer, R.M.; Petracci, E.

    2011-01-01

    We present a simple influence function based approach to compute the variances of estimates of absolute risk and functions of absolute risk. We apply this approach to criteria that assess the impact of changes in the risk factor distribution on absolute risk for an individual and at the population level. As an illustration we use an absolute risk prediction model for breast cancer that includes modifiable risk factors in addition to standard breast cancer risk factors. Influence function based variance estimates for absolute risk and the criteria are compared to bootstrap variance estimates. PMID:21643476

  16. The relation between societal factors and different forms of prejudice: A cross-national approach on target-specific and generalized prejudice.

    PubMed

    Meeusen, Cecil; Kern, Anna

    2016-01-01

    The goal of this paper was to investigate the generalizability of prejudice across contexts by analyzing associations between different types of prejudice in a cross-national perspective and by investigating the relation between country-specific contextual factors and target-specific prejudices. Relying on the European Social Survey (2008), results indicated that prejudices were indeed positively associated, confirming the existence of a generalized prejudice component. In addition to substantial cross-national differences in associational strength, within-country variance in target-specific associations was also observed. This suggested that the motivations for prejudice largely vary according to the intergroup context. Two aspects of the intergroup context - economic conditions and cultural values - were shown to be related to generalized and target-specific components of prejudice. Future research on prejudice and context should take an integrative approach that considers both the idea of generalized and specific prejudice simultaneously. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Using variance components to estimate power in a hierarchically nested sampling design improving monitoring of larval Devils Hole pupfish

    USGS Publications Warehouse

    Dzul, Maria C.; Dixon, Philip M.; Quist, Michael C.; Dinsmore, Stephen J.; Bower, Michael R.; Wilson, Kevin P.; Gaines, D. Bailey

    2013-01-01

    We used variance components to assess allocation of sampling effort in a hierarchically nested sampling design for ongoing monitoring of early life history stages of the federally endangered Devils Hole pupfish (DHP) (Cyprinodon diabolis). The sampling design for larval DHP included surveys (5 days each spring 2007–2009), events, and plots. Each survey comprised three counting events, in which DHP larvae on nine plots were counted plot by plot. Statistical analysis of larval abundance included three components: (1) evaluation of power from various sample size combinations, (2) comparison of power in fixed and random plot designs, and (3) assessment of yearly differences in the power of the survey. Results indicated that increasing the sample size at the lowest level of sampling represented the most realistic option to increase the survey's power, that fixed plot designs had greater power than random plot designs, and that the power of the larval survey varied by year. This study provides an example of how monitoring efforts may benefit from coupling variance components estimation with power analysis to assess sampling design.
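
    For readers unfamiliar with the machinery, the sketch below shows how variance components are extracted from a balanced two-stage nested design (events within surveys, plots within events) with ANOVA method-of-moments estimators; the simulated counts and dimensions are illustrative, not the DHP monitoring data.

```python
import numpy as np

# Simulated balanced nested design: E events within each of S surveys,
# P plots counted within each event (all names and sizes illustrative).
rng = np.random.default_rng(0)
S, E, P = 5, 3, 9
survey_eff = rng.normal(0, 2.0, size=(S, 1, 1))
event_eff = rng.normal(0, 1.0, size=(S, E, 1))
counts = 20 + survey_eff + event_eff + rng.normal(0, 0.5, size=(S, E, P))

grand = counts.mean()
survey_means = counts.mean(axis=(1, 2))            # shape (S,)
event_means = counts.mean(axis=2)                  # shape (S, E)

# ANOVA mean squares for the balanced nested layout.
ms_survey = E * P * ((survey_means - grand) ** 2).sum() / (S - 1)
ms_event = P * ((event_means - survey_means[:, None]) ** 2).sum() / (S * (E - 1))
ms_error = ((counts - event_means[..., None]) ** 2).sum() / (S * E * (P - 1))

# Method-of-moments variance component estimates.
var_plot = ms_error
var_event = (ms_event - ms_error) / P
var_survey = (ms_survey - ms_event) / (E * P)
print(var_survey, var_event, var_plot)   # population values: 4.0, 1.0, 0.25
```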

  18. [Determination and principal component analysis of mineral elements based on ICP-OES in Nitraria roborowskii fruits from different regions].

    PubMed

    Yuan, Yuan-Yuan; Zhou, Yu-Bi; Sun, Jing; Deng, Juan; Bai, Ying; Wang, Jie; Lu, Xue-Feng

    2017-06-01

    The contents of mineral elements in Nitraria roborowskii samples from fifteen different regions were determined by inductively coupled plasma-optical emission spectrometry (ICP-OES), and the elemental characteristics were analyzed by principal component analysis. The results indicated that 18 mineral elements were analyzed in N. roborowskii, among which V was not detected. Na, K and Ca were present at high concentrations. Ti showed the largest variance in content, while K showed the smallest. Four principal components were extracted from the original data. The cumulative variance contribution rate was 81.542%, and the variance contribution of the first principal component was 44.997%, indicating that Cr, Fe, P and Ca were the characteristic elements of N. roborowskii. Thus, the established method is simple and precise and can be used for the determination of mineral elements in N. roborowskii Kom. fruits. The elemental distribution characteristics among N. roborowskii fruits are related to geographical origin, as clearly revealed by PCA. These results provide a good basis for the comprehensive utilization of N. roborowskii. Copyright© by the Chinese Pharmaceutical Association.

  19. A pattern recognition approach to transistor array parameter variance

    NASA Astrophysics Data System (ADS)

    da F. Costa, Luciano; Silva, Filipi N.; Comin, Cesar H.

    2018-06-01

    The properties of semiconductor devices, including bipolar junction transistors (BJTs), are known to vary substantially in terms of their parameters. In this work, an experimental approach, including pattern recognition concepts and methods such as principal component analysis (PCA) and linear discriminant analysis (LDA), was used to investigate the variation among BJTs belonging to integrated circuits known as transistor arrays. It was shown that a good deal of the device variance can be captured using only two PCA axes. It was also verified that, though the variation of parameters is substantially small for BJTs from the same array, larger variation arises between BJTs from distinct arrays, suggesting that device characteristics should be considered in more critical analog designs. As a consequence of its supervised nature, LDA was able to provide a substantial separation of the BJTs into clusters corresponding to each transistor array. In addition, the LDA mapping into two dimensions revealed a clear relationship between the considered measurements. Interestingly, a specific mapping suggested by the PCA, involving the total harmonic distortion variation expressed in terms of the average voltage gain, yielded an even better separation between the transistor array clusters. All in all, this work yielded interesting results from both semiconductor engineering and pattern recognition perspectives.

  20. Genetic analysis of growth traits in Polled Nellore cattle raised on pasture in tropical region using Bayesian approaches.

    PubMed

    Lopes, Fernando Brito; Magnabosco, Cláudio Ulhôa; Paulini, Fernanda; da Silva, Marcelo Corrêa; Miyagi, Eliane Sayuri; Lôbo, Raysildo Barbosa

    2013-01-01

    (Co)variance components and genetic parameters were estimated for adjusted weights at 120 (W120), 240 (W240), 365 (W365) and 450 (W450) days of age in Polled Nellore cattle raised on pasture and born between 1987 and 2010. Analyses were performed using an animal model, with herd-year-season of birth and calf sex as fixed contemporary-group effects and age of cow as a covariate. Gibbs sampling was used to estimate (co)variance components, genetic parameters and additive genetic effects, which accounted for a great proportion of the total variation in these traits. High direct heritability estimates for the growth traits were revealed, with means of 0.43, 0.61, 0.72 and 0.67 for W120, W240, W365 and W450, respectively. Maternal heritabilities were 0.07 and 0.08 for W120 and W240, respectively. Direct additive genetic correlations between the weights at 120, 240, 365 and 450 days of age were strong and positive, ranging from 0.68 to 0.98. Direct-maternal genetic correlations were negative for W120 and W240, with estimates ranging from -0.54 to -0.31. Estimates of maternal heritability ranged from 0.056 to 0.092 for W120 and from 0.064 to 0.096 for W240. This study showed that genetic progress is possible for the growth traits we studied, which is a novel and favorable indicator for an upcoming and promising Polled Zebu breed in tropical regions. Maternal effects influenced the performance of weight at 120 and 240 days of age. These effects should be taken into account in genetic analyses of growth traits by fitting them as a genetic or a permanent environmental effect, or both. In general, given the medium-to-high estimates of environmental (co)variance components, management and feeding conditions for Polled Nellore raised on pasture in tropical regions of Brazil need improvement, and growth performance can be enhanced.

  1. Modeling individual differences in text reading fluency: a different pattern of predictors for typically developing and dyslexic readers

    PubMed Central

    Zoccolotti, Pierluigi; De Luca, Maria; Marinelli, Chiara V.; Spinelli, Donatella

    2014-01-01

    This study was aimed at predicting individual differences in text reading fluency. The basic proposal included two factors, i.e., the ability to decode letter strings (measured by discrete pseudo-word reading) and integration of the various sub-components involved in reading (measured by Rapid Automatized Naming, RAN). Subsequently, a third factor was added to the model, i.e., naming of discrete digits. In order to use homogeneous measures, all contributing variables considered the entire processing of the item, including pronunciation time. The model, which was based on commonality analysis, was applied to data from a group of 43 typically developing readers (11- to 13-year-olds) and a group of 25 chronologically matched dyslexic children. In typically developing readers, both orthographic decoding and integration of reading sub-components contributed significantly to the overall prediction of text reading fluency. The model prediction was higher (from ca. 37 to 52% of the explained variance) when we included the naming of discrete digits variable, which had a suppressive effect on pseudo-word reading. In the dyslexic readers, the variance explained by the two-factor model was high (69%) and did not change when the third factor was added. The lack of a suppression effect was likely due to the prominent individual differences in poor orthographic decoding of the dyslexic children. Analyses on data from both groups of children were replicated using patches of colors as stimuli (both in the RAN task and in the discrete naming task), obtaining similar results. We conclude that it is possible to predict much of the variance in text-reading fluency using basic processes, such as orthographic decoding and integration of reading sub-components, even without taking into consideration higher-order linguistic factors such as lexical, semantic and contextual abilities. The validity of using proximal vs. distal causes to predict reading fluency is discussed. PMID:25477856

  2. Selective impact of disease on short-term and long-term components of self-reported memory: a population-based HUNT study.

    PubMed

    Almkvist, Ove; Bosnes, Ole; Bosnes, Ingunn; Stordal, Eystein

    2017-05-09

    Subjective memory is commonly considered to be a unidimensional measure. However, theories of performance-based memory suggest that subjective memory could be divided into more than one dimension. To divide subjective memory into theoretically related components of memory and explore the relationship to disease. In this study, various aspects of self-reported memory were studied with respect to demographics and diseases in the third wave of the HUNT epidemiological study in middle Norway. The study included all individuals 55 years of age or older who responded to a nine-item questionnaire on subjective memory and questionnaires on health (n=18 633). A principal component analysis of the memory items resulted in two memory components (criterion: eigenvalue above 1), which together accounted for 54% of the total variance. The components were interpreted as long-term memory (LTM; the first component; 43% of the total variance) and short-term memory (STM; the second component; 11% of the total variance). Memory impairment was significantly related to all diseases (except Bechterew's disease), most strongly to brain infarction, heart failure, diabetes, cancer, chronic obstructive pulmonary disease and whiplash. For most diseases, the STM component was more affected than the LTM component; in cancer, however, the opposite pattern was seen. Subjective memory impairment as measured in HUNT contained two components, which were differentially associated with diseases. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  3. Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method

    NASA Astrophysics Data System (ADS)

    Tsai, F. T. C.; Elshall, A. S.

    2014-12-01

    Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
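
    The within-/between-model split described above is an instance of the law of total variance taken over posterior model probabilities. A toy numeric sketch (the probabilities and moments are invented, not the groundwater model of the study):

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])           # posterior model probabilities
mu = np.array([10.0, 12.0, 9.0])        # model-conditional predictive means
var_within = np.array([1.0, 1.5, 0.8])  # model-conditional predictive variances

bma_mean = np.sum(p * mu)
within = np.sum(p * var_within)              # E[Var(y | model)]
between = np.sum(p * (mu - bma_mean) ** 2)   # Var(E[y | model])
total = within + between                     # law of total variance
print(bma_mean, within, between, total)
```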

  4. Instrument Psychometrics: Parental Satisfaction and Quality Indicators of Perinatal Palliative Care.

    PubMed

    Wool, Charlotte

    2015-10-01

    Despite a life-limiting fetal diagnosis, prenatal attachment often occurs in varying degrees, resulting in role identification by an individual as a parent. Parents recognize quality care and report their satisfaction when interfacing with health care providers. The aim was to test an instrument measuring parental satisfaction and quality indicators with parents electing to continue a pregnancy after learning of a life-limiting fetal diagnosis. A cross-sectional survey design gathered data using a computer-mediated platform. Subjects were parents (n=405) who opted to continue a pregnancy affected by a life-limiting diagnosis. Factor analysis using principal component analysis with Varimax rotation was used to validate the instrument, evaluate components, and summarize the explained variance achieved among quality indicator items. The Prenatal Scale was reduced to 37 items with a three-component solution explaining 66.19% of the variance and internal consistency reliability of 0.98. The Intrapartum Scale included 37 items with a four-component solution explaining 66.93% of the variance and a Cronbach α of 0.977. The Postnatal Scale was reduced to 44 items with a six-component solution explaining 67.48% of the variance. Internal consistency reliability was 0.975. The Parental Satisfaction and Quality Indicators of Perinatal Palliative Care Instrument is a valid and reliable measure for parent-reported quality care and satisfaction. Use of this instrument will enable clinicians and researchers to measure quality indicators and parental satisfaction. The instrument is useful for assessing, analyzing, and reporting data on quality for care delivered during the prenatal, intrapartum, and postnatal periods.

  5. Pixel-level multisensor image fusion based on matrix completion and robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.

    2016-01-01

    Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.

  6. Deconvolution of the vestibular evoked myogenic potential.

    PubMed

    Lütkenhöner, Bernd; Basel, Türker

    2012-02-07

    The vestibular evoked myogenic potential (VEMP) and the associated variance modulation can be understood by a convolution model. Two functions of time are incorporated into the model: the motor unit action potential (MUAP) of an average motor unit, and the temporal modulation of the MUAP rate of all contributing motor units, briefly called rate modulation. The latter is the function of interest, whereas the MUAP acts as a filter that distorts the information contained in the measured data. Here, it is shown how to recover the rate modulation by undoing the filtering using a deconvolution approach. The key aspects of our deconvolution algorithm are as follows: (1) the rate modulation is described in terms of just a few parameters; (2) the MUAP is calculated by Wiener deconvolution of the VEMP with the rate modulation; (3) the model parameters are optimized using a figure-of-merit function where the most important term quantifies the difference between measured and model-predicted variance modulation. The effectiveness of the algorithm is demonstrated with simulated data. An analysis of real data confirms the view that there are basically two components, which roughly correspond to the waves p13-n23 and n34-p44 of the VEMP. The rate modulation corresponding to the first, inhibitory component is much stronger than that corresponding to the second, excitatory component. But the latter is more extended so that the two modulations have almost the same equivalent rectangular duration. Copyright © 2011 Elsevier Ltd. All rights reserved.
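
    As a rough illustration of the central operation, the sketch below performs Wiener deconvolution in the frequency domain to undo a convolution with a known kernel. The toy "MUAP" and "rate modulation" waveforms and the SNR constant are invented for the demonstration; the paper's algorithm additionally parameterizes the rate modulation and optimizes a figure-of-merit function.

```python
import numpy as np

def wiener_deconvolve(signal, kernel, snr=100.0):
    """Recover x from y = x * kernel (convolution) by Wiener deconvolution.

    snr is an assumed signal-to-noise power ratio; 1/snr regularizes the
    spectral division where the kernel's transfer function is small.
    """
    n = len(signal)
    K = np.fft.rfft(kernel, n)
    Y = np.fft.rfft(signal, n)
    return np.fft.irfft(Y * np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr), n)

# Toy demonstration: a biphasic "MUAP" filters a smooth "rate modulation".
t = np.linspace(0, 0.1, 512)                        # 100 ms window
rate = np.exp(-((t - 0.02) / 0.005) ** 2)           # burst near 20 ms
muap = np.diff(np.exp(-((t - 0.01) / 0.003) ** 2), prepend=0.0)
vemp = np.convolve(rate, muap)[: len(t)]
rate_hat = wiener_deconvolve(vemp, muap, snr=1e4)   # approximately `rate`
```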

  7. The variance modulation associated with the vestibular evoked myogenic potential.

    PubMed

    Lütkenhöner, Bernd; Rudack, Claudia; Basel, Türker

    2011-07-01

    Model considerations suggest that the sound-induced inhibition underlying the vestibular evoked myogenic potential (VEMP) briefly reduces the variance of the electromyogram (EMG) from which the VEMP is derived. Although more difficult to investigate, this inhibitory modulation of the variance promises to be a specific measure of the inhibition, in that respect being superior to the VEMP itself. This study aimed to verify the theoretical predictions. Archived data from 672 clinical VEMP investigations, comprising about 300,000 EMG records altogether, were pooled. Both the complete data pool and subsets of data representing VEMPs of varying degrees of distinctness were analyzed. The data were generally normalized so that the EMG had variance one. Regarding VEMP deflection p13, the data confirm the theoretical predictions. At the latency of deflection n23, however, an additional excitatory component, showing a maximal effect around 30 ms, appears to contribute. Studying the variance modulation may help to identify and characterize different components of the VEMP. In particular, it appears to be possible to distinguish between inhibition and excitation. The variance modulation provides information not being available in the VEMP itself. Thus, studying this measure may significantly contribute to our understanding of the VEMP phenomenon. Copyright © 2010 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  8. Sampling hazelnuts for aflatoxin: uncertainty associated with sampling, sample preparation, and analysis.

    PubMed

    Ozay, Guner; Seyhan, Ferda; Yilmaz, Aysun; Whitaker, Thomas B; Slate, Andrew B; Giesbrecht, Francis

    2006-01-01

    The variability associated with the aflatoxin test procedure used to estimate aflatoxin levels in bulk shipments of hazelnuts was investigated. Sixteen 10 kg samples of shelled hazelnuts were taken from each of 20 lots that were suspected of aflatoxin contamination. The total variance associated with testing shelled hazelnuts was estimated and partitioned into sampling, sample preparation, and analytical variance components. Each variance component increased as aflatoxin concentration (either B1 or total) increased. With the use of regression analysis, mathematical expressions were developed to model the relationship between aflatoxin concentration and the total, sampling, sample preparation, and analytical variances. The expressions for these relationships were used to estimate the variance for any sample size, subsample size, and number of analyses for a specific aflatoxin concentration. The sampling, sample preparation, and analytical variances associated with estimating aflatoxin in a hazelnut lot at a total aflatoxin level of 10 ng/g and using a 10 kg sample, a 50 g subsample, dry comminution with a Robot Coupe mill, and a high-performance liquid chromatographic analytical method are 174.40, 0.74, and 0.27, respectively. The sampling, sample preparation, and analytical steps of the aflatoxin test procedure accounted for 99.4, 0.4, and 0.2% of the total variability, respectively.
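
    The reported split can be checked directly: with component variances of 174.40, 0.74, and 0.27, the shares are about 99.4%, 0.4%, and 0.2% of the total. A small sketch reproducing that arithmetic, together with the standard consequence that averaging k independent samples divides the sampling variance by k:

```python
sampling, prep, analysis = 174.40, 0.74, 0.27  # variances at 10 ng/g total aflatoxin
total = sampling + prep + analysis
for name, v in [("sampling", sampling), ("prep", prep), ("analysis", analysis)]:
    print(f"{name:9s} {100 * v / total:5.1f}% of total variance")

# Averaging k independent 10 kg samples scales the sampling variance by 1/k.
k = 4
print("total variance with 4 samples:", sampling / k + prep + analysis)
```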

  9. Comparison of multipoint linkage analyses for quantitative traits in the CEPH data: parametric LOD scores, variance components LOD scores, and Bayes factors.

    PubMed

    Sung, Yun Ju; Di, Yanming; Fu, Audrey Q; Rothstein, Joseph H; Sieh, Weiva; Tong, Liping; Thompson, Elizabeth A; Wijsman, Ellen M

    2007-01-01

    We performed multipoint linkage analyses with multiple programs and models for several gene expression traits in the Centre d'Etude du Polymorphisme Humain families. All analyses provided consistent results for both peak location and shape. Variance-components (VC) analysis gave wider peaks and Bayes factors gave fewer peaks. Among programs from the MORGAN package, lm_multiple performed better than lm_markers, resulting in less Markov-chain Monte Carlo (MCMC) variability between runs, and the program lm_twoqtl provided higher LOD scores by also including either a polygenic component or an additional quantitative trait locus.

  10. Comparison of multipoint linkage analyses for quantitative traits in the CEPH data: parametric LOD scores, variance components LOD scores, and Bayes factors

    PubMed Central

    Sung, Yun Ju; Di, Yanming; Fu, Audrey Q; Rothstein, Joseph H; Sieh, Weiva; Tong, Liping; Thompson, Elizabeth A; Wijsman, Ellen M

    2007-01-01

    We performed multipoint linkage analyses with multiple programs and models for several gene expression traits in the Centre d'Etude du Polymorphisme Humain families. All analyses provided consistent results for both peak location and shape. Variance-components (VC) analysis gave wider peaks and Bayes factors gave fewer peaks. Among programs from the MORGAN package, lm_multiple performed better than lm_markers, resulting in less Markov-chain Monte Carlo (MCMC) variability between runs, and the program lm_twoqtl provided higher LOD scores by also including either a polygenic component or an additional quantitative trait locus. PMID:18466597

  11. Patient phenotypes associated with outcomes after aneurysmal subarachnoid hemorrhage: a principal component analysis.

    PubMed

    Ibrahim, George M; Morgan, Benjamin R; Macdonald, R Loch

    2014-03-01

    Predictors of outcome after aneurysmal subarachnoid hemorrhage have been determined previously through hypothesis-driven methods that often exclude putative covariates and require a priori knowledge of potential confounders. Here, we apply a data-driven approach, principal component analysis, to identify baseline patient phenotypes that may predict neurological outcomes. Principal component analysis was performed on 120 subjects enrolled in a prospective randomized trial of clazosentan for the prevention of angiographic vasospasm. Correlation matrices were created using a combination of Pearson, polyserial, and polychoric regressions among 46 variables. Scores of significant components (with eigenvalues>1) were included in multivariate logistic regression models with incidence of severe angiographic vasospasm, delayed ischemic neurological deficit, and long-term outcome as outcomes of interest. Sixteen significant principal components accounting for 74.6% of the variance were identified. A single component dominated by the patients' initial hemodynamic status, World Federation of Neurosurgical Societies score, neurological injury, and initial neutrophil/leukocyte counts was significantly associated with poor outcome. Two additional components were associated with angiographic vasospasm, of which one was also associated with delayed ischemic neurological deficit. The first was dominated by the aneurysm-securing procedure, subarachnoid clot clearance, and intracerebral hemorrhage, whereas the second had high contributions from markers of anemia and albumin levels. Principal component analysis, a data-driven approach, identified patient phenotypes that are associated with worse neurological outcomes. Such data reduction methods may provide a better approximation of unique patient phenotypes and may inform clinical care as well as patient recruitment into clinical trials. http://www.clinicaltrials.gov. Unique identifier: NCT00111085.

  12. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST)

    PubMed Central

    Xu, Chonggang; Gertner, George

    2013-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. To date, FAST analysis has mainly been confined to estimating the partial variances contributed by the main effects of model parameters and has not allowed for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037
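
    A compact sketch of the classic search-curve FAST estimator described above: each input is driven along a periodic search curve at its own integer frequency, and the Fourier power of the model output at that frequency and its harmonics estimates the input's partial variance. The frequency set, sample size, and additive test model are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

def fast_first_order(model, omegas, n=1001, harmonics=4):
    """First-order sensitivity indices via the classic search-curve FAST.

    model  : callable taking an (n, d) array of inputs in [0, 1]
    omegas : interference-free integer driver frequencies, one per input
    """
    s = 2 * np.pi * np.arange(n) / n
    x = 0.5 + np.arcsin(np.sin(np.outer(s, omegas))) / np.pi  # search curve
    y = model(x)
    coeffs = np.fft.rfft(y) / n
    spectrum = 2 * np.abs(coeffs) ** 2     # variance carried by each frequency
    total_var = spectrum[1:].sum()         # equals np.var(y) up to rounding
    indices = []
    for w in omegas:
        ks = [p * w for p in range(1, harmonics + 1)]
        indices.append(spectrum[ks].sum() / total_var)
    return np.array(indices)

# Additive test model y = x1 + 2*x2: exact first-order indices are 0.2, 0.8.
print(fast_first_order(lambda x: x[:, 0] + 2 * x[:, 1], omegas=[11, 21]))
```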

  13. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST).

    PubMed

    Xu, Chonggang; Gertner, George

    2011-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. To date, FAST analysis has mainly been confined to estimating the partial variances contributed by the main effects of model parameters and has not allowed for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements.

  14. Ethnic and socioeconomic differences in variability in nutritional biomarkers.

    PubMed

    Kant, Ashima K; Graubard, Barry I

    2008-05-01

    Several studies have reported ethnic, education, and income differentials in concentrations of selected nutritional biomarkers in the US population. Although biomarker measurements are not subject to biased self-reports, biologic variability due to individual characteristics and behaviors related to dietary exposures contributes to within-subject variability and measurement error. We aimed to establish whether the magnitude of components of variance for nutritional biomarkers also differs in these high-risk groups. We used data from 2 replicate measurements of serum concentrations of vitamins A, C, D, and E; folate; carotenoids; ferritin; and selenium in the third National Health and Nutrition Examination Survey second examination subsample (n = 948) to examine the within-subject and between-subject components of variance. We used multivariate regression methods with log-transformed analyte concentrations as outcomes to estimate the ratios of the within-subject to between-subject components of variance by categories of ethnicity, income, and education. In non-Hispanic blacks, the within-subject to between-subject variance ratio for beta-cryptoxanthin concentration was higher (0.23; 95% CI: 0.17, 0.29) relative to non-Hispanic whites (0.13; 0.11, 0.16) and Mexican Americans (0.11; 0.07, 0.14), and the lutein + zeaxanthin ratio was higher (0.29; 0.21, 0.38) relative to Mexican Americans (0.15; 0.10, 0.19). Higher income was associated with larger within-subject to between-subject variance ratios for serum vitamin C and red blood cell folate concentrations but smaller ratios for serum vitamin A. Overall, there were few consistent up- or down-trends in the direction of covariate-adjusted variability by ethnicity, income, or education. Population groups at high risk of adverse nutritional profiles did not have larger variance ratios for most of the examined biomarkers.
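
    The within-subject to between-subject variance ratios discussed above are estimated from replicate measurements per person. A minimal sketch with two replicates per subject, using one-way random-effects (ANOVA) moment estimators on simulated log-scale concentrations; the numbers are toy values, not the NHANES III data.

```python
import numpy as np

# Two replicate measurements per subject (simulated, log scale).
rng = np.random.default_rng(0)
n = 948
subject = rng.normal(0.0, 1.0, size=n)              # between-subject spread
reps = subject[:, None] + rng.normal(0.0, 0.5, size=(n, 2))

# One-way random-effects (ANOVA) estimators with k = 2 replicates.
ms_within = ((reps - reps.mean(axis=1, keepdims=True)) ** 2).sum() / n
ms_between = 2 * reps.mean(axis=1).var(ddof=1)
var_within = ms_within                      # population value 0.25
var_between = (ms_between - ms_within) / 2  # population value 1.0
print(var_within / var_between)             # variance ratio, here about 0.25
```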

  15. Effect of multiple perfusion components on pseudo-diffusion coefficient in intravoxel incoherent motion imaging

    NASA Astrophysics Data System (ADS)

    Kuai, Zi-Xiang; Liu, Wan-Yu; Zhu, Yue-Min

    2017-11-01

    The aim of this work was to investigate the effect of multiple perfusion components on the pseudo-diffusion coefficient D* in the bi-exponential intravoxel incoherent motion (IVIM) model. Simulations were first performed to examine how the presence of multiple perfusion components influences D*. Real data from the livers (n = 31), spleens (n = 31) and kidneys (n = 31) of 31 volunteers were then acquired using DWI for the in vivo study, and the number of perfusion components in these tissues was determined, together with their perfusion fractions and D*, using an adaptive multi-exponential IVIM model. Finally, the bi-exponential model was applied to the real data, and the mean, standard deviation and coefficient of variation of D* as well as the fitting residual were calculated over the 31 volunteers for each of the three tissues and compared between them. The results of both the simulations and the in vivo study showed that, for the bi-exponential IVIM model, both the variance of D* and the fitting residual tended to increase when the number of perfusion components was increased or when the difference between perfusion components became large. In addition, it was found that the kidney presented the fewest perfusion components among the three tissues. The present study demonstrated that multi-component perfusion is a main factor causing high variance of D*, and that the bi-exponential model should be used only when the tissues under investigation have few perfusion components, for example the kidney.
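
    A hedged sketch of fitting the bi-exponential IVIM model to a normalized signal decay with scipy; the b-values, true parameters, noise level, and parameter bounds below are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, f, d_star, d):
    """Bi-exponential IVIM signal model, normalized so S(0) = 1."""
    return f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d)

b = np.array([0, 10, 20, 40, 80, 150, 300, 500, 800], dtype=float)  # s/mm^2
true = dict(f=0.15, d_star=0.020, d=0.0012)                         # mm^2/s
rng = np.random.default_rng(0)
signal = ivim(b, **true) + rng.normal(0, 0.005, size=b.size)

# Bounds keep D* well separated from D, mimicking the usual convention
# that the pseudo-diffusion coefficient is the faster-decaying component.
popt, pcov = curve_fit(ivim, b, signal, p0=[0.1, 0.01, 0.001],
                       bounds=([0, 0.003, 0], [0.5, 0.1, 0.003]))
print(popt, np.sqrt(np.diag(pcov)))   # estimates and their standard errors
```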

  16. Multilevel modelling of somatotype components: the Portuguese sibling study on growth, fitness, lifestyle and health.

    PubMed

    Pereira, Sara; Katzmarzyk, Peter T; Gomes, Thayse Natacha; Souza, Michele; Chaves, Raquel N; Santos, Fernanda K Dos; Santos, Daniel; Hedeker, Donald; Maia, José A R

    2017-06-01

    Somatotype is a complex trait influenced by different genetic and environmental factors as well as by other covariates whose effects are still unclear. To (1) estimate siblings' resemblance in their general somatotype; (2) identify sib-pair (brother-brother (BB), sister-sister (SS), brother-sister (BS)) similarities in individual somatotype components; (3) examine the degree to which between and within variances differ among sib-ships; and (4) investigate the effects of physical activity (PA) and family socioeconomic status (SES) on these relationships. The sample comprises 1058 Portuguese siblings (538 females) aged 9-20 years. Somatotype was calculated using the Heath-Carter method, while PA and SES information was obtained by questionnaire. Multilevel modelling was done in SuperMix software. Older subjects showed the lowest values for endomorphy and mesomorphy, but the highest values for ectomorphy; and more physically active subjects showed the highest values for mesomorphy. In general, the familiality of somatotype was moderate (ρ = 0.35). Same-sex siblings had the strongest resemblance (endomorphy: ρ_SS > ρ_BB > ρ_BS; mesomorphy: ρ_BB = ρ_SS > ρ_BS; ectomorphy: ρ_BB > ρ_SS > ρ_BS). For the ectomorphy and mesomorphy components, BS pairs showed the highest between sib-ship variance but the lowest within sib-ship variance, while for endomorphy BS pairs showed the lowest between and within sib-ship variances. These results highlight the significant familial effects on somatotype and the complexity of the role of familial resemblance in explaining variance in somatotypes.

  17. The Genetic and Environmental Etiologies of the Relations between Cognitive Skills and Components of Reading Ability

    PubMed Central

    Christopher, Micaela E.; Keenan, Janice M.; Hulslander, Jacqueline; DeFries, John C.; Miyake, Akira; Wadsworth, Sally J.; Willcutt, Erik; Pennington, Bruce; Olson, Richard K.

    2016-01-01

    While previous research has shown cognitive skills to be important predictors of reading ability in children, the respective roles of genetic and environmental influences on these relations remain an open question. The present study explored the genetic and environmental etiologies underlying the relations between selected executive functions and cognitive abilities (working memory, inhibition, processing speed, and naming speed) and three components of reading ability (word reading, reading comprehension, and listening comprehension). Twin pairs drawn from the Colorado Front Range (n = 676; 224 monozygotic pairs; 452 dizygotic pairs) between the ages of eight and 16 (M = 11.11) were assessed on multiple measures of each cognitive and reading-related skill. Each cognitive and reading-related skill was modeled as a latent variable, and behavioral genetic analyses estimated the portions of phenotypic variance on each latent variable due to genetic, shared environmental, and nonshared environmental influences. The covariance between the cognitive skills and reading-related skills was driven primarily by genetic influences. The cognitive skills also shared large amounts of genetic variance, as did the reading-related skills. The common cognitive genetic variance was highly correlated with the common reading genetic variance, suggesting that genetic influences involved in general cognitive processing are also important for reading ability. Skill-specific genetic variance in working memory and processing speed also predicted components of reading ability. Taken together, the present study supports a genetic association between children's cognitive ability and reading ability. PMID:26974208

  18. Regional assessment of trends in vegetation change dynamics using principal component analysis

    NASA Astrophysics Data System (ADS)

    Osunmadewa, B. A.; Csaplovics, E.; R. A., Majdaldin; Adeofun, C. O.; Aralova, D.

    2016-10-01

    Vegetation forms the basis for the existence of animals and humans. Due to changes in climate and human perturbation, most of the natural vegetation of the world has undergone some form of transformation, both in composition and structure. Increased anthropogenic activity over recent decades has posed a serious threat to the natural vegetation of Nigeria: many vegetated areas have either been converted to other land uses, such as agriculture following deforestation, or been lost completely through indiscriminate removal of trees for charcoal, fuelwood and timber production. This study therefore aims at examining the rate and degree of change in vegetation cover through the application of Principal Component Analysis (PCA) in the dry sub-humid region of Nigeria, using Normalized Difference Vegetation Index (NDVI) data spanning 1983-2011. The method used for the analysis is the T-mode orientation approach, also known as standardized PCA, while trends are examined using ordinary least squares, the median (Theil-Sen) trend and the monotonic trend. The trend analysis shows both positive and negative trends in vegetation change dynamics over the 29-year period examined. Five components were used for the Principal Component Analysis. The first component explains about 98% of the total variance of the vegetation (NDVI) data, while components 2-5 have lower variance percentages (< 1% each). Two ancillary land use/land cover datasets from 2000 and 2009, obtained from the European Space Agency (ESA), were used to further explain the changes observed in the Normalized Difference Vegetation Index. The land use data show changes in land use pattern that can be attributed to anthropogenic activities such as cutting of trees for charcoal production, fuelwood and agricultural practices. This study demonstrates the ability of remote sensing data to monitor vegetation change in the dry sub-humid region of Nigeria.

  19. Genomic estimation of additive and dominance effects and impact of accounting for dominance on accuracy of genomic evaluation in sheep populations.

    PubMed

    Moghaddar, N; van der Werf, J H J

    2017-12-01

    The objectives of this study were to estimate the additive and dominance variance components of several weight and ultrasound-scanned body composition traits in purebred and combined cross-bred sheep populations based on single nucleotide polymorphism (SNP) marker genotypes, and then to investigate the effect of fitting additive and dominance effects on the accuracy of genomic evaluation. Additive and dominance variance components were estimated in a mixed model equation based on average information restricted maximum likelihood, using additive and dominance (co)variances between animals calculated from 48,599 SNP marker genotypes. Genomic prediction was based on genomic best linear unbiased prediction (GBLUP), and the accuracy of prediction was assessed based on a random 10-fold cross-validation. Across different weight and scanned body composition traits, dominance variance ranged from 0.0% to 7.3% of the phenotypic variance in the purebred population and from 7.1% to 19.2% in the combined cross-bred population. In the combined cross-bred population, the range of dominance variance decreased to 3.1%-9.9% after accounting for heterosis effects. Accounting for dominance effects significantly improved the likelihood of the fitted model in the combined cross-bred population. This study showed substantial dominance genetic variance for weight and ultrasound-scanned body composition traits, particularly in the cross-bred population; however, the improvement in the accuracy of genomic breeding values was small and statistically not significant. Dominance variance estimates in the combined cross-bred population could be overestimated if heterosis is not fitted in the model. © 2017 Blackwell Verlag GmbH.

  20. Attempts to Simulate Anisotropies of Solar Wind Fluctuations Using MHD with a Turning Magnetic Field

    NASA Technical Reports Server (NTRS)

    Ghosh, Sanjoy; Roberts, D. Aaron

    2010-01-01

    We examine a "two-component" model of the solar wind to see if any of the observed anisotropies of the fields can be explained in light of the need for various quantities, such as the magnetic minimum variance direction, to turn along with the Parker spiral. Previous results used a 3-D MHD spectral code to show that neither Q2D nor slab-wave components will turn their wave vectors in a turning Parker-like field, and that nonlinear interactions between the components are required to reproduce observations. In these new simulations we use higher resolution in both decaying and driven cases, and with and without a turning background field, to see what, if any, conditions lead to variance anisotropies similar to observations. We focus especially on the middle spectral range, and not the energy-containing scales, of the simulation for comparison with the solar wind. Preliminary results have shown that it is very difficult to produce the required variances with a turbulent cascade.

  1. Some variance reduction methods for numerical stochastic homogenization

    PubMed Central

    Blanc, X.; Le Bris, C.; Legoll, F.

    2016-01-01

    We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. PMID:27002065
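
    As one concrete member of the family of techniques surveyed, the sketch below demonstrates antithetic variates, a classical variance reduction device: each uniform draw u is paired with 1 - u and the two evaluations are averaged. The integrand is a toy choice; this is not the homogenization-specific estimator studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def plain_mc(f, n):
    return f(rng.random(n)).mean()

def antithetic_mc(f, n):
    u = rng.random(n // 2)                 # half the draws, paired with 1 - u
    return 0.5 * (f(u) + f(1.0 - u)).mean()

f = lambda u: np.exp(u)                    # smooth monotone integrand, E[f] = e - 1
est_plain = np.array([plain_mc(f, 1000) for _ in range(500)])
est_anti = np.array([antithetic_mc(f, 1000) for _ in range(500)])
print(est_plain.var(), est_anti.var())     # antithetic variance is much smaller
```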

  2. Markowitz portfolio optimization model employing fuzzy measure

    NASA Astrophysics Data System (ADS)

    Ramli, Suhailywati; Jaaman, Saiful Hafizah

    2017-04-01

    Markowitz in 1952 introduced the mean-variance methodology for portfolio selection problems. His pioneering research has shaped the portfolio risk-return model, and portfolio optimization has become one of the most important research fields in modern finance. This paper extends the classical Markowitz mean-variance portfolio selection model by applying fuzzy measures to determine risk and return. The original mean-variance model serves as a benchmark and is compared with fuzzy mean-variance models in which returns are modeled by specific types of fuzzy numbers. The fuzzy models give better performance than the classical mean-variance approach. Numerical examples employing Malaysian share market data illustrate these models.
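
    For reference, the classical (non-fuzzy) benchmark has a closed form: minimizing the portfolio variance w'Σw subject to full investment and a target expected return gives the weights through two Lagrange multipliers. A sketch with invented returns and covariances; note that the solution is unconstrained in sign and may include short positions.

```python
import numpy as np

mu = np.array([0.08, 0.12, 0.10])          # expected returns (illustrative)
Sigma = np.array([[0.10, 0.02, 0.01],
                  [0.02, 0.12, 0.03],
                  [0.01, 0.03, 0.08]])     # return covariance matrix

def mean_variance_weights(mu, Sigma, target):
    """Markowitz weights: minimize w'Sw s.t. sum(w) = 1 and w'mu = target."""
    inv = np.linalg.inv(Sigma)
    ones = np.ones_like(mu)
    A = np.array([[ones @ inv @ ones, ones @ inv @ mu],
                  [mu @ inv @ ones, mu @ inv @ mu]])
    lam = np.linalg.solve(A, np.array([1.0, target]))   # Lagrange multipliers
    return inv @ (lam[0] * ones + lam[1] * mu)

w = mean_variance_weights(mu, Sigma, target=0.10)
print(w, w @ mu, w @ Sigma @ w)   # weights, achieved return, portfolio variance
```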

  3. Isolating and Examining Sources of Suppression and Multicollinearity in Multiple Linear Regression.

    PubMed

    Beckstead, Jason W

    2012-03-30

    The presence of suppression (and multicollinearity) in multiple regression analysis complicates interpretation of predictor-criterion relationships. The mathematical conditions that produce suppression in regression analysis have received considerable attention in the methodological literature but until now nothing in the way of an analytic strategy to isolate, examine, and remove suppression effects has been offered. In this article such an approach, rooted in confirmatory factor analysis theory and employing matrix algebra, is developed. Suppression is viewed as the result of criterion-irrelevant variance operating among predictors. Decomposition of predictor variables into criterion-relevant and criterion-irrelevant components using structural equation modeling permits derivation of regression weights with the effects of criterion-irrelevant variance omitted. Three examples with data from applied research are used to illustrate the approach: the first assesses child and parent characteristics to explain why some parents of children with obsessive-compulsive disorder accommodate their child's compulsions more so than do others, the second examines various dimensions of personal health to explain individual differences in global quality of life among patients following heart surgery, and the third deals with quantifying the relative importance of various aptitudes for explaining academic performance in a sample of nursing students. The approach is offered as an analytic tool for investigators interested in understanding predictor-criterion relationships when complex patterns of intercorrelation among predictors are present and is shown to augment dominance analysis.

  4. On approaches to analyze the sensitivity of simulated hydrologic fluxes to model parameters in the community land model

    DOE PAGES

    Bao, Jie; Hou, Zhangshuan; Huang, Maoyi; ...

    2015-12-04

    Here, effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash-Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field observed values. Four sensitivity analysis (SA) approaches, including analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on support vector machine, are investigated. Results suggest that these approaches show consistent measurement of the impacts of major hydrologic parameters on response variables, but with differences in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets and output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of the Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes to be adjusted in the parameter values during parametric model optimization.

  5. Yielding physically-interpretable emulators - A Sparse PCA approach

    NASA Astrophysics Data System (ADS)

    Galelli, S.; Alsahaf, A.; Giuliani, M.; Castelletti, A.

    2015-12-01

    Projection-based techniques, such as Proper Orthogonal Decomposition (POD), are a common approach to surrogating high-fidelity process-based models with lower-order dynamic emulators. With POD, the dimensionality reduction is achieved by using observations, or 'snapshots', generated with the high-fidelity model to project the entire set of input and state variables of this model onto a smaller set of basis functions that account for most of the variability in the data. While the reduction efficiency and variance control of POD techniques are usually very high, the resulting emulators are structurally highly complex and can hardly be given a physically meaningful interpretation, as each basis is a projection of the entire set of inputs and states. In this work, we propose a novel approach based on Sparse Principal Component Analysis (SPCA) that combines the several assets of POD methods with the potential for ex-post interpretation of the emulator structure. SPCA reduces the number of non-zero coefficients in the basis functions by identifying a sparse matrix of coefficients. While the resulting set of basis functions may retain less variance of the snapshots, the presence of only a few non-zero coefficients assists in the interpretation of the underlying physical processes. The SPCA approach is tested on the reduction of a 1D hydro-ecological model (DYRESM-CAEDYM) used to describe the main ecological and hydrodynamic processes in Tono Dam, Japan. An experimental comparison against a standard POD approach shows that SPCA achieves the same accuracy in emulating a given output variable - for the same level of dimensionality reduction - while yielding better insights into the main process dynamics.
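
    A minimal illustration of the interpretability argument using scikit-learn: on data with two disjoint blocks of correlated variables, ordinary PCA spreads weight over every variable, while SparsePCA concentrates each component on one block. The synthetic data and penalty value are illustrative; this is not the DYRESM-CAEDYM emulation workflow.

```python
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

rng = np.random.default_rng(0)
n = 200
latent = rng.normal(size=(n, 2))
X = 0.1 * rng.normal(size=(n, 10))           # ten variables, mostly noise
X[:, :3] += latent[:, [0]]                   # block driven by factor 1
X[:, 3:6] += latent[:, [1]]                  # block driven by factor 2

dense = PCA(n_components=2).fit(X).components_
sparse = SparsePCA(n_components=2, alpha=1.0, random_state=0).fit(X).components_
print(np.round(dense, 2))    # nonzero weight on every variable
print(np.round(sparse, 2))   # weight concentrated on each three-variable block
```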

  6. VARIANCE ANISOTROPY IN KINETIC PLASMAS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parashar, Tulasi N.; Matthaeus, William H.; Oughton, Sean

    Solar wind fluctuations admit well-documented anisotropies of the variance matrix, or polarization, related to the mean magnetic field direction. Typically, one finds a ratio of perpendicular variance to parallel variance of the order of 9:1 for the magnetic field. Here we study the question of whether a kinetic plasma spontaneously generates and sustains parallel variances when initiated with only perpendicular variance. We find that parallel variance grows and saturates at about 5% of the perpendicular variance in a few nonlinear times irrespective of the Reynolds number. For sufficiently large systems (Reynolds numbers) the variance approaches values consistent with the solar wind observations.

  7. Unraveling additive from nonadditive effects using genomic relationship matrices.

    PubMed

    Muñoz, Patricio R; Resende, Marcio F R; Gezan, Salvador A; Resende, Marcos Deon Vilela; de Los Campos, Gustavo; Kirst, Matias; Huber, Dudley; Peter, Gary F

    2014-12-01

    The application of quantitative genetics in plant and animal breeding has largely focused on additive models, which may also capture dominance and epistatic effects. Partitioning genetic variance into its additive and nonadditive components using pedigree-based best linear unbiased prediction (P-BLUP) models is difficult with most commonly available family structures. However, the availability of dense panels of molecular markers makes possible the use of additive- and dominance-realized genomic relationships for the estimation of variance components and the prediction of genetic values (G-BLUP). We evaluated height data from a multifamily population of the tree species Pinus taeda with a systematic series of models accounting for additive, dominance, and first-order epistatic interactions (additive by additive, dominance by dominance, and additive by dominance), using either pedigree- or marker-based information. We show that, compared with the pedigree, use of realized genomic relationships in marker-based models yields a substantially more precise separation of additive and nonadditive components of genetic variance. We conclude that the marker-based relationship matrices in a model including additive and nonadditive effects performed better, improving breeding value prediction. Moreover, our results suggest that, for tree height in this population, the additive and nonadditive components of genetic variance are similar in magnitude. This novel result improves our current understanding of the genetic control and architecture of a quantitative trait and should be considered when developing breeding strategies. Copyright © 2014 by the Genetics Society of America.

  8. Parameter estimation in 3D affine and similarity transformation: implementation of variance component estimation

    NASA Astrophysics Data System (ADS)

    Amiri-Simkooei, A. R.

    2018-01-01

    Three-dimensional (3D) coordinate transformations, generally consisting of origin shifts, axes rotations, scale changes, and skew parameters, are widely used in many geomatics applications. Although in some geodetic applications simplified transformation models are used based on the assumption of small transformation parameters, in other fields of applications such parameters are indeed large. The algorithms of two recent papers on the weighted total least-squares (WTLS) problem are used for the 3D coordinate transformation. The methodology can be applied to the case when the transformation parameters are generally large of which no approximate values of the parameters are required. Direct linearization of the rotation and scale parameters is thus not required. The WTLS formulation is employed to take into consideration errors in both the start and target systems on the estimation of the transformation parameters. Two of the well-known 3D transformation methods, namely affine (12, 9, and 8 parameters) and similarity (7 and 6 parameters) transformations, can be handled using the WTLS theory subject to hard constraints. Because the method can be formulated by the standard least-squares theory with constraints, the covariance matrix of the transformation parameters can directly be provided. The above characteristics of the 3D coordinate transformation are implemented in the presence of different variance components, which are estimated using the least squares variance component estimation. In particular, the estimability of the variance components is investigated. The efficacy of the proposed formulation is verified on two real data sets.

  9. Concerns about a variance approach to X-ray diffractometric estimation of microfibril angle in wood

    Treesearch

    Steve P. Verrill; David E. Kretschmann; Victoria L. Herian; Michael C. Wiemann; Harry A. Alden

    2011-01-01

    In this article, we raise three technical concerns about Evans’ 1999 Appita Journal “variance approach” to estimating microfibril angle (MFA). The first concern is associated with the approximation of the variance of an X-ray intensity half-profile by a function of the MFA and the natural variability of the MFA. The second concern is associated with the approximation...

  10. Concerns about a variance approach to the X-ray diffractometric estimation of microfibril angle in wood

    Treesearch

    Steve P. Verrill; David E. Kretschmann; Victoria L. Herian; Michael Wiemann; Harry A. Alden

    2010-01-01

    In this paper we raise three technical concerns about Evans’s 1999 Appita Journal “variance approach” to estimating microfibril angle. The first concern is associated with the approximation of the variance of an X-ray intensity half-profile by a function of the microfibril angle and the natural variability of the microfibril angle, S2...

  11. A note on variance estimation in random effects meta-regression.

    PubMed

    Sidik, Kurex; Jonkman, Jeffrey N

    2005-01-01

    For random effects meta-regression inference, variance estimation for the parameter estimates is discussed. Because estimated weights are used for meta-regression analysis in practice, the assumed or estimated covariance matrix used in meta-regression is not strictly correct, due to possible errors in estimating the weights. Therefore, this note investigates the use of a robust variance estimation approach for obtaining variances of the parameter estimates in random effects meta-regression inference. This method treats the assumed covariance matrix of the effect measure variables as a working covariance matrix. Using an example of meta-analysis data from clinical trials of a vaccine, the robust variance estimation approach is illustrated in comparison with two other methods of variance estimation. A simulation study is presented, comparing the three methods of variance estimation in terms of bias and coverage probability. We find that, despite the seeming suitability of the robust estimator for random effects meta-regression, the improved variance estimator of Knapp and Hartung (2003) yields the best performance among the three estimators, and thus may provide the best protection against errors in the estimated weights.
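    A minimal sketch of the working-covariance idea, assuming a simple diagonal weight matrix: the estimated inverse-variance weights are treated as a working choice only, and a sandwich (robust) estimator guards against errors in them. Function and variable names are illustrative, and tau2 is taken as given rather than estimated as in the paper; the Knapp and Hartung correction the study favors is a different adjustment.

    ```python
    import numpy as np

    def meta_regression_robust(X, y, v, tau2):
        """Weighted meta-regression with a robust (sandwich) variance estimate;
        X: (k, p) moderators, y: effect estimates, v: within-study variances."""
        w = 1.0 / (v + tau2)
        W = np.diag(w)
        bread = np.linalg.inv(X.T @ W @ X)
        beta = bread @ X.T @ W @ y
        r = y - X @ beta
        meat = X.T @ W @ np.diag(r ** 2) @ W @ X
        return beta, bread @ meat @ bread              # estimate, robust covariance

    k = 12
    X = np.column_stack([np.ones(k), np.random.rand(k)])   # intercept + moderator
    y = np.random.rand(k)
    beta, V = meta_regression_robust(X, y, v=np.full(k, 0.04), tau2=0.01)
    ```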

  12. Architectural measures of the cancellous bone of the mandibular condyle identified by principal components analysis.

    PubMed

    Giesen, E B W; Ding, M; Dalstra, M; van Eijden, T M G J

    2003-09-01

As several morphological parameters of cancellous bone express more or less the same architectural measure, we applied principal components analysis to group these measures and correlated the resulting components with the mechanical properties. Cylindrical specimens (n = 24) were obtained in different orientations from embalmed mandibular condyles; the angle between the first principal direction and the axis of the specimen, expressing the orientation of the trabeculae, ranged from 10 degrees to 87 degrees. Morphological parameters were determined by a method based on Archimedes' principle and by micro-CT scanning, and the mechanical properties were obtained by mechanical testing. The principal components analysis was used to obtain a set of independent components to describe the morphology. This set was entered into linear regression analyses for explaining the variance in mechanical properties. The principal components analysis revealed four components: amount of bone, number of trabeculae, trabecular orientation, and miscellaneous. They accounted for about 90% of the variance in the morphological variables. The component loadings indicated that a higher amount of bone was primarily associated with more plate-like trabeculae, and not with more or thicker trabeculae. The trabecular orientation was most determinative (about 50%) in explaining stiffness, strength, and failure energy. The amount of bone was second most determinative and increased the explained variance to about 72%. These results suggest that trabecular orientation and amount of bone are important in explaining the anisotropic mechanical properties of the cancellous bone of the mandibular condyle.
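    The analysis pipeline, PCA to obtain independent components followed by linear regression on the component scores, can be sketched as below; the arrays are random placeholders standing in for the 24 specimens' morphological and mechanical measurements.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    morph = np.random.rand(24, 10)              # placeholder morphological parameters
    stiff = np.random.rand(24)                  # placeholder mechanical property

    pca = PCA(n_components=4).fit(morph)
    scores = pca.transform(morph)               # independent component scores
    print(pca.explained_variance_ratio_.sum())  # ~0.90 in the study

    reg = LinearRegression().fit(scores, stiff)
    print(reg.score(scores, stiff))             # R^2: variance explained
    ```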

  13. Self-esteem Is Mostly Stable Across Young Adulthood: Evidence from Latent STARTS Models.

    PubMed

    Wagner, Jenny; Lüdtke, Oliver; Trautwein, Ulrich

    2016-08-01

How stable is self-esteem? This long-standing debate has led to different conclusions across different areas of psychology. Longitudinal data and up-to-date statistical models have recently indicated that self-esteem has stable and autoregressive trait-like components and state-like components. We applied latent STARTS models with the goal of replicating previous findings in a longitudinal sample of young adults (N = 4,532; Mage = 19.60, SD = 0.85; 55% female). In addition, we applied multigroup models to extend previous findings on different patterns of stability for men versus women and for people with high versus low levels of depressive symptoms. We found evidence for the general pattern of a major proportion of stable and autoregressive trait variance and a smaller yet substantial amount of state variance in self-esteem across 10 years. Furthermore, multigroup models suggested substantial differences in the variance components: Females showed more state variability than males. Individuals with higher levels of depressive symptoms showed more state and less autoregressive trait variance in self-esteem. Results are discussed with respect to the ongoing trait-state debate and possible implications of the group differences that we found in the stability of self-esteem. © 2015 Wiley Periodicals, Inc.

  14. Seasonal Predictability in a Model Atmosphere.

    NASA Astrophysics Data System (ADS)

    Lin, Hai

    2001-07-01

The predictability of atmospheric mean-seasonal conditions in the absence of externally varying forcing is examined. A perfect-model approach is adopted, in which a global T21 three-level quasigeostrophic atmospheric model is integrated over 21 000 days to obtain a reference atmospheric orbit. The model is driven by a time-independent forcing, so that the only source of time variability is the internal dynamics. The forcing is set to perpetual winter conditions in the Northern Hemisphere (NH) and perpetual summer in the Southern Hemisphere. A significant temporal variability in the NH 90-day mean states is observed. The component of that variability associated with the higher-frequency motions, or climate noise, is estimated using a method developed by Madden. In the polar region, and to a lesser extent in the midlatitudes, the temporal variance of the winter means is significantly greater than the climate noise, suggesting some potential predictability in those regions. Forecast experiments are performed to see whether the presence of variance in the 90-day mean states that is in excess of the climate noise leads to some skill in the prediction of these states. Ensemble forecast experiments with nine members starting from slightly different initial conditions are performed for 200 different 90-day means along the reference atmospheric orbit. The serial correlation between the ensemble means and the reference orbit shows that there is skill in the 90-day mean predictions. The skill is concentrated in those regions of the NH that have the largest variance in excess of the climate noise. An EOF analysis shows that nearly all the predictive skill in the seasonal means is associated with one mode of variability with a strong axisymmetric component.

  15. Statistically Self-Consistent and Accurate Errors for SuperDARN Data

    NASA Astrophysics Data System (ADS)

    Reimer, A. S.; Hussey, G. C.; McWilliams, K. A.

    2018-01-01

The Super Dual Auroral Radar Network (SuperDARN)-fitted data products (e.g., spectral width and velocity) are produced using weighted least squares fitting. We present a new First-Principles Fitting Methodology (FPFM) that utilizes the first-principles approach of Reimer et al. (2016) to estimate the variance of the real and imaginary components of the mean autocorrelation function (ACF) lags. SuperDARN ACFs fitted by the FPFM do not use ad hoc or empirical criteria. Currently, the weighting used to fit the ACF lags is derived from ad hoc estimates of the ACF lag variance. Additionally, an overcautious lag filtering criterion is used that sometimes discards data that contain useful information. In low signal-to-noise (SNR) and/or low signal-to-clutter regimes, the ad hoc variance and empirical criterion lead to underestimated errors for the fitted parameters because the relative contributions of signal, noise, and clutter to the ACF variance are not taken into consideration. The FPFM variance expressions include the contributions of signal, noise, and clutter. The clutter is estimated using the maximal power-based self-clutter estimator derived by Reimer and Hussey (2015). The FPFM was successfully implemented and tested using synthetic ACFs generated with the radar data simulator of Ribeiro, Ponomarenko, et al. (2013). The fitted parameters and the fitted-parameter errors produced by the FPFM are compared with those of the current SuperDARN fitting software, FITACF. Using self-consistent statistical analysis, the FPFM produces reliable quantitative measures of the errors of the fitted parameters. For an SNR in excess of 3 dB and velocity error below 100 m/s, the FPFM produces 52% more data points than FITACF.
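    The core fitting step, weighted least squares with per-observation variances, can be sketched generically as follows. This is not the FPFM itself, only the WLS machinery it feeds; the FPFM's point is that var_y must come from physically motivated signal/noise/clutter variance expressions rather than ad hoc estimates. Names and the toy design are illustrative.

    ```python
    import numpy as np

    def wls_fit(A, y, var_y):
        """Weighted least squares with per-observation variances:
        minimize sum((y - A @ x)**2 / var_y)."""
        w = 1.0 / var_y
        Aw = A * w[:, None]
        cov = np.linalg.inv(A.T @ Aw)      # parameter covariance, valid only if
        x = cov @ (Aw.T @ y)               # var_y reflects the true variances
        return x, cov

    A = np.column_stack([np.ones(20), np.arange(20.0)])  # toy design (e.g. ACF lags)
    y = A @ np.array([1.0, -0.2]) + np.random.normal(0, 0.1, 20)
    x, cov = wls_fit(A, y, var_y=np.full(20, 0.01))
    ```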

  16. A diffusion-based approach to stochastic individual growth and energy budget, with consequences to life-history optimization and population dynamics.

    PubMed

    Filin, I

    2009-06-01

Using diffusion processes, I model stochastic individual growth, given exogenous hazards and starvation risk. By maximizing survival to final size, I determine optimal life histories (e.g. switching size for a habitat/dietary shift) from two ratios: mean growth rate over growth variance (diffusion coefficient) and mortality rate over mean growth rate; all are size dependent. For example, switching size decreases with either ratio, if both are positive. I provide examples and compare with previous work on risk-sensitive foraging and the energy-predation trade-off. I then decompose individual size into reversibly and irreversibly growing components, e.g. reserves and structure. I provide a general expression for optimal structural growth, when reserves grow stochastically. I conclude that increased growth variance of reserves delays structural growth (raises the threshold size for its commencement) but may eventually lead to larger structures. The effect depends on whether the structural trait is related to foraging or defence. Implications for population dynamics are discussed.

  17. Automatic Estimation of the Radiological Inventory for the Dismantling of Nuclear Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia-Bermejo, R.; Felipe, A.; Gutierrez, S.

The estimation of the radiological inventory of nuclear facilities to be dismantled is a process that includes information related to the physical inventory of the whole plant and to the radiological survey. The radiological inventory of all the components and civil structures of the plant can be estimated with mathematical models using a statistical approach. A computer application has been developed in order to obtain the radiological inventory in an automatic way. Results: A computer application that is able to estimate the radiological inventory from the radiological measurements or the characterization program has been developed. This computer application includes the statistical functions needed for the estimation of central tendency and variability, e.g. mean, median, variance, confidence intervals, and coefficients of variation. It is a necessary tool for estimating the radiological inventory of a nuclear facility and a powerful aid to decision making in future sampling surveys.
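    The statistical functions listed in the record are standard; a compact sketch with toy activity measurements (the lognormal draws are placeholders, not plant data):

    ```python
    import numpy as np
    from scipy import stats

    x = np.random.lognormal(mean=1.0, sigma=0.5, size=40)  # toy activity data

    mean, median = x.mean(), np.median(x)
    var, cv = x.var(ddof=1), x.std(ddof=1) / x.mean()      # variance, variation coeff.
    ci95 = stats.t.interval(0.95, len(x) - 1, loc=mean, scale=stats.sem(x))
    print(mean, median, var, cv, ci95)
    ```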

  18. Relationship between rice yield and climate variables in southwest Nigeria using multiple linear regression and support vector machine analysis

    NASA Astrophysics Data System (ADS)

    Oguntunde, Philip G.; Lischeid, Gunnar; Dietrich, Ottfried

    2018-03-01

This study examines the variations of climate variables and rice yield and quantifies the relationships among them using multiple linear regression, principal component analysis, and support vector machine (SVM) analysis in southwest Nigeria. The climate and yield data used were for a period of 36 years, between 1980 and 2015. Similar to the observed decrease (P < 0.001) in rice yield, pan evaporation, solar radiation, and wind speed declined significantly. Eight principal components exhibited an eigenvalue > 1 and explained 83.1% of the total variance of the predictor variables. The SVM regression function using the scores of the first principal component explained about 75% of the variance in rice yield data, and linear regression about 64%. SVM regression between annual solar radiation values and yield explained 67% of the variance. Only the first component of the principal component analysis (PCA) exhibited a clear long-term trend and sometimes short-term variance similar to that of rice yield. Short-term fluctuations of the scores of PC1 are closely coupled to those of rice yield during the 1986-1993 and the 2006-2013 periods, thereby revealing the inter-annual sensitivity of rice production to climate variability. Solar radiation stands out as the climate variable of highest influence on rice yield, and the influence was especially strong during the monsoon and post-monsoon periods, which correspond to the vegetative, booting, flowering, and grain filling stages in the study area. The outcome is expected to provide a more in-depth, region-specific climate-rice linkage for the screening of better cultivars that can positively respond to future climate fluctuations, as well as providing information that may help optimize planting dates for improved radiation use efficiency in the study area.
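    A minimal sketch of the PCA-then-SVM-regression pipeline the study describes, with random placeholder arrays standing in for the 36-year climate and yield series:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    climate = np.random.rand(36, 12)            # placeholder: 36 years x 12 predictors
    rice = np.random.rand(36)                   # placeholder annual rice yields

    scores = PCA(n_components=8).fit_transform(StandardScaler().fit_transform(climate))
    svr = SVR(kernel="rbf").fit(scores[:, [0]], rice)   # PC1 scores only
    print(svr.score(scores[:, [0]], rice))              # R^2 (~0.75 in the study)
    ```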

  19. Personality assessment and model comparison with behavioral data: A statistical framework and empirical demonstration with bonobos (Pan paniscus).

    PubMed

    Martin, Jordan S; Suarez, Scott A

    2017-08-01

Interest in quantifying consistent among-individual variation in primate behavior, also known as personality, has grown rapidly in recent decades. Although behavioral coding is the most frequently utilized method for assessing primate personality, limitations in current statistical practice prevent researchers from utilizing the full potential of their coding datasets. These limitations include the use of extensive data aggregation, not modeling biologically relevant sources of individual variance during repeatability estimation, not partitioning between-individual (co)variance prior to modeling personality structure, the misuse of principal component analysis, and an over-reliance upon exploratory statistical techniques to compare personality models across populations, species, and data collection methods. In this paper, we propose a statistical framework for primate personality research designed to address these limitations. Our framework synthesizes recently developed mixed-effects modeling approaches for quantifying behavioral variation with an information-theoretic model selection paradigm for confirmatory personality research. After detailing a multi-step analytic procedure for personality assessment and model comparison, we employ this framework to evaluate seven models of personality structure in zoo-housed bonobos (Pan paniscus). We find that differences between sexes, ages, zoos, time of observation, and social group composition contributed to significant behavioral variance. Independently of these factors, however, personality nonetheless accounted for a moderate to high proportion of variance in average behavior across observational periods. A personality structure derived from past rating research receives the strongest support relative to our model set. This model suggests that personality variation across the measured behavioral traits is best described by two correlated but distinct dimensions reflecting individual differences in affiliation and sociability (Agreeableness) as well as activity level, social play, and neophilia toward non-threatening stimuli (Openness). These results underscore the utility of our framework for quantifying personality in primates and facilitating greater integration between the behavioral ecological and comparative psychological approaches to personality research. © 2017 Wiley Periodicals, Inc.
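    As an illustration of the repeatability step in such a framework, here is a minimal mixed-effects sketch with simulated data: among-individual variance is estimated while adjusting for a fixed effect, and repeatability is its share of the total. All names and values are illustrative, and the paper's full procedure involves considerably richer models.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n_ind, n_obs = 20, 10
    ind = np.repeat(np.arange(n_ind), n_obs)
    age = rng.normal(size=n_ind * n_obs)          # a biologically relevant fixed effect
    behav = rng.normal(size=n_ind)[ind] + 0.3 * age + rng.normal(size=n_ind * n_obs)
    df = pd.DataFrame({"behav": behav, "age": age, "ind": ind})

    m = smf.mixedlm("behav ~ age", df, groups=df["ind"]).fit()
    var_id = float(m.cov_re.iloc[0, 0])           # among-individual variance
    print(var_id / (var_id + m.scale))            # adjusted repeatability
    ```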

  20. Some variance reduction methods for numerical stochastic homogenization.

    PubMed

    Blanc, X; Le Bris, C; Legoll, F

    2016-04-28

    We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. © 2016 The Author(s).
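    One of the simplest such techniques is antithetic sampling; a toy sketch is below, with a cheap stand-in function replacing the expensive corrector-problem solve. For a monotone quantity of interest, pairing each draw with its sign-flipped counterpart yields negatively correlated replicas and a lower empirical variance.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def corrector_qoi(u):
        """Stand-in for an expensive corrector-problem solve; u parameterizes
        the random medium (a monotone functional of a Gaussian field here)."""
        return np.exp(0.5 * u).mean()

    u = rng.standard_normal((2000, 64))
    plain = np.array([corrector_qoi(ui) for ui in u])
    anti = 0.5 * (plain + np.array([corrector_qoi(-ui) for ui in u]))
    # Each antithetic sample costs two solves, but its variance is lower:
    print(plain.var(ddof=1), anti.var(ddof=1))
    ```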

  1. Isolating the cow-specific part of residual energy intake in lactating dairy cows using random regressions.

    PubMed

    Fischer, A; Friggens, N C; Berry, D P; Faverdin, P

    2018-07-01

The ability to properly assess and accurately phenotype true differences in feed efficiency among dairy cows is key to the development of breeding programs for improving feed efficiency. The variability among individuals in feed efficiency is commonly characterised by the residual intake approach. Residual feed intake is represented by the residuals of a linear regression of intake on the corresponding quantities of the biological functions that consume (or release) energy. However, the residuals include model fitting and measurement errors as well as any variability in cow efficiency. The objective of this study was to isolate the individual animal variability in feed efficiency from the residual component. Two separate models were fitted. In one, the standard residual energy intake (REI) was calculated as the residual of a multiple linear regression of lactation average net energy intake (NEI) on lactation average milk energy output, average metabolic BW, as well as lactation loss and gain of body condition score. In the other, a linear mixed model was used to simultaneously fit fixed linear regressions and cow-specific random levels and coefficients on the biological traits and the intercept, using fortnightly repeated measures of the variables. This method split the predicted NEI into two parts: one quantifying the population mean intercept and coefficients, and one quantifying cow-specific deviations in the intercept and coefficients. The cow-specific part of predicted NEI was assumed to isolate true differences in feed efficiency among cows. NEI and associated energy expenditure phenotypes were available for the first 17 fortnights of lactation from 119 Holstein cows, all fed a constant energy-rich diet. Mixed models fitting cow-specific intercepts and coefficients to different combinations of the aforementioned energy expenditure traits, calculated on a fortnightly basis, were compared. The variance of REI estimated with the lactation average model represented only 8% of the variance of measured NEI. Among all compared mixed models, the variance of the cow-specific part of predicted NEI represented between 53% and 59% of the variance of REI estimated from the lactation average model, or between 4% and 5% of the variance of measured NEI. The remaining 41% to 47% of the variance of REI estimated with the lactation average model may therefore reflect model fitting errors or measurement errors. In conclusion, the use of a mixed model framework with cow-specific random regressions seems to be a promising method to isolate the cow-specific component of REI in dairy cows.
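    The lactation-average REI baseline is simply the residual of an ordinary regression; a toy sketch follows (variable names and simulated values are illustrative, and the paper's mixed model additionally fits cow-specific random regressions, which this sketch omits).

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 119                                      # cows
    milk_e = rng.normal(100, 10, n)              # milk energy output
    mbw = rng.normal(120, 8, n)                  # metabolic body weight
    bcs_loss = rng.normal(0, 1, n)               # body condition score loss
    bcs_gain = rng.normal(0, 1, n)               # body condition score gain
    nei = 0.6 * milk_e + 0.4 * mbw + rng.normal(0, 5, n)

    X = np.column_stack([np.ones(n), milk_e, mbw, bcs_loss, bcs_gain])
    beta, *_ = np.linalg.lstsq(X, nei, rcond=None)
    rei = nei - X @ beta                         # residual energy intake
    print(rei.var(ddof=1) / nei.var(ddof=1))     # share of NEI variance left in REI
    ```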

  2. Ignoring correlation in uncertainty and sensitivity analysis in life cycle assessment: what is the risk?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groen, E.A., E-mail: Evelyne.Groen@gmail.com; Heijungs, R.; Leiden University, Einsteinweg 2, Leiden 2333 CC

Life cycle assessment (LCA) is an established tool to quantify the environmental impact of a product. A good assessment of uncertainty is important for making well-informed decisions in comparative LCA, as well as for correctly prioritising data collection efforts. Under- or overestimation of output uncertainty (e.g. output variance) will lead to incorrect decisions in such matters. The presence of correlations between input parameters during uncertainty propagation can increase or decrease the output variance. However, most LCA studies that include uncertainty analysis ignore correlations between input parameters during uncertainty propagation, which may lead to incorrect conclusions. Two approaches to include correlations between input parameters during uncertainty propagation and global sensitivity analysis were studied: an analytical approach and a sampling approach. The use of both approaches is illustrated for an artificial case study of electricity production. Results demonstrate that both approaches yield approximately the same output variance and sensitivity indices for this specific case study. Furthermore, we demonstrate that the analytical approach can be used to quantify the risk of ignoring correlations between input parameters during uncertainty propagation in LCA. We demonstrate that: (1) we can predict if including correlations among input parameters in uncertainty propagation will increase or decrease output variance; (2) we can quantify the risk of ignoring correlations on the output variance and the global sensitivity indices. Moreover, this procedure requires only little data. - Highlights: • Ignoring correlation leads to under- or overestimation of the output variance. • We demonstrated that the risk of ignoring correlation can be quantified. • The procedure proposed is generally applicable in life cycle assessment. • In some cases, ignoring correlation has a minimal effect on decision-making tools.
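    The sampling approach can be sketched in a few lines: propagate correlated versus uncorrelated draws through a toy model output and compare output variances. All numbers are illustrative, not the paper's electricity case study.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    mu = np.array([1.0, 0.5])                    # mean input parameters
    sd = np.array([0.1, 0.2])                    # input standard deviations

    def impact(p):                               # toy LCA output: product of inputs
        return p[..., 0] * p[..., 1]

    for rho in (0.0, 0.8):                       # ignoring vs. including correlation
        C = np.diag(sd) @ np.array([[1.0, rho], [rho, 1.0]]) @ np.diag(sd)
        samples = rng.multivariate_normal(mu, C, size=100_000)
        print(rho, impact(samples).var())        # variance shifts with correlation
    ```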

  3. Modeling rainfall-runoff relationship using multivariate GARCH model

    NASA Astrophysics Data System (ADS)

    Modarres, R.; Ouarda, T. B. M. J.

    2013-08-01

The traditional hydrologic time series approaches are used for modeling, simulating and forecasting the conditional mean of hydrologic variables but neglect their time-varying variance, or second-order moment. This paper introduces the multivariate Generalized Autoregressive Conditional Heteroscedasticity (MGARCH) modeling approach to show how the variance-covariance relationship between hydrologic variables varies in time. These approaches are also useful to estimate the dynamic conditional correlation between hydrologic variables. To illustrate the novelty and usefulness of MGARCH models in hydrology, two major types of MGARCH models, the bivariate diagonal VECH and constant conditional correlation (CCC) models, are applied to show the variance-covariance structure and dynamic correlation in a rainfall-runoff process. The bivariate diagonal VECH-GARCH(1,1) and CCC-GARCH(1,1) models indicated both short-run and long-run persistency in the conditional variance-covariance matrix of the rainfall-runoff process. The conditional variance of rainfall appears to have a stronger persistency, especially long-run persistency, than the conditional variance of streamflow, which shows a short-lived drastic increasing pattern and a stronger short-run persistency. The conditional covariance and conditional correlation coefficients have different features for each bivariate rainfall-runoff process, with different degrees of stationarity and dynamic nonlinearity. The spatial and temporal pattern of variance-covariance features may reflect the signature of different physical and hydrological variables such as drainage area, topography, soil moisture and ground water fluctuations on the strength, stationarity and nonlinearity of the conditional variance-covariance for a rainfall-runoff process.
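    For readers unfamiliar with GARCH recursions, a univariate GARCH(1,1) simulation shows the conditional-variance persistence the abstract refers to; the multivariate VECH and CCC models extend this recursion to a full variance-covariance matrix. Parameter values here are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    omega, alpha, beta = 0.05, 0.10, 0.85        # alpha + beta < 1: stationary
    T = 1000
    h = np.empty(T)                              # conditional variance
    e = np.empty(T)                              # innovations (e.g. rainfall anomaly)
    h[0] = omega / (1.0 - alpha - beta)          # unconditional variance
    e[0] = np.sqrt(h[0]) * rng.standard_normal()
    for t in range(1, T):
        h[t] = omega + alpha * e[t - 1] ** 2 + beta * h[t - 1]
        e[t] = np.sqrt(h[t]) * rng.standard_normal()
    # alpha + beta near 1 reproduces the long-run variance persistence noted above
    ```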

  4. Flow-Cell-Induced Dispersion in Flow-through Absorbance Detection Systems: True Column Effluent Peak Variance.

    PubMed

    Dasgupta, Purnendu K; Shelor, Charles Phillip; Kadjo, Akinde Florence; Kraiczek, Karsten G

    2018-02-06

Following a brief overview of the emergence of absorbance detection in liquid chromatography, we focus on the dispersion caused by the absorbance measurement cell and its inlet. A simple experiment is proposed wherein chromatographic flow and conditions are held constant but a variable portion of the column effluent is directed into the detector. The temporal peak variance (σ_t,obs²), which increases as the flow rate (F) through the detector decreases, is found to be well described as a quadratic function of 1/F. This allows the extrapolation of the results to zero residence time in the detector and thence the determination of the true variance of the peak prior to the detector (this includes the contribution of all preceding components). This general approach should be equally applicable to detection systems other than absorbance. We also report experiments in which the inlet/outlet system remains the same but the path length is varied. This allows one to assess the individual contributions of the cell itself and the inlet/outlet system to the total observed peak variance. The dispersion in the cell itself has often been modeled as a flow-independent parameter, dependent only on the cell volume. Except for very long path/large volume cells, this paradigm is simply incorrect.
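    The extrapolation the authors describe amounts to fitting the observed variance as a quadratic in 1/F and evaluating it at 1/F = 0; a sketch with made-up numbers:

    ```python
    import numpy as np

    F = np.array([0.2, 0.4, 0.6, 0.8, 1.0])          # flow rates, toy values
    var_obs = np.array([9.0, 4.5, 3.4, 2.9, 2.7])    # observed temporal variances

    coef = np.polyfit(1.0 / F, var_obs, deg=2)       # variance as quadratic in 1/F
    print(np.polyval(coef, 0.0))                     # extrapolated pre-detector variance
    ```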

  5. Smooth empirical Bayes estimation of observation error variances in linear systems

    NASA Technical Reports Server (NTRS)

    Martz, H. F., Jr.; Lian, M. W.

    1972-01-01

    A smooth empirical Bayes estimator was developed for estimating the unknown random scale component of each of a set of observation error variances. It is shown that the estimator possesses a smaller average squared error loss than other estimators for a discrete time linear system.

  6. Genetic variance in micro-environmental sensitivity for milk and milk quality in Walloon Holstein cattle.

    PubMed

    Vandenplas, J; Bastin, C; Gengler, N; Mulder, H A

    2013-09-01

Animals that are robust to environmental changes are desirable in the current dairy industry. Genetic differences in micro-environmental sensitivity can be studied through heterogeneity of residual variance between animals. However, residual variance between animals is usually assumed to be homogeneous in traditional genetic evaluations. The aim of this study was to investigate genetic heterogeneity of residual variance by estimating variance components in residual variance for milk yield, somatic cell score, contents in milk (g/dL) of 2 groups of milk fatty acids (i.e., saturated and unsaturated fatty acids), and the content in milk of one individual fatty acid (i.e., oleic acid, C18:1 cis-9), for first-parity Holstein cows in the Walloon Region of Belgium. A total of 146,027 test-day records from 26,887 cows in 747 herds were available. All cows had at least 3 records and a known sire. These sires had at least 10 cows with records and each herd × test-day had at least 5 cows. The 5 traits were analyzed separately based on fixed lactation curve and random regression test-day models for the mean. Estimation of variance components was performed by iteratively running an expectation-maximization REML algorithm implemented via double hierarchical generalized linear models. Based on fixed lactation curve test-day mean models, heritability for residual variance ranged between 1.01 × 10⁻³ and 4.17 × 10⁻³ for all traits. The genetic standard deviation in residual variance (i.e., approximately the genetic coefficient of variation of residual variance) ranged between 0.12 and 0.17. Therefore, some genetic variance in micro-environmental sensitivity existed in the Walloon Holstein dairy cattle for the 5 studied traits. The standard deviations due to herd × test-day and permanent environment in residual variance ranged between 0.36 and 0.45 for the herd × test-day effect and between 0.55 and 0.97 for the permanent environmental effect. Therefore, nongenetic effects also contributed substantially to micro-environmental sensitivity. Addition of random regressions to the mean model did not reduce heterogeneity in residual variance, indicating that genetic heterogeneity of residual variance was not simply an effect of an incomplete mean model. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  7. Improvement of Prediction Ability for Genomic Selection of Dairy Cattle by Including Dominance Effects

    PubMed Central

    Sun, Chuanyu; VanRaden, Paul M.; Cole, John B.; O'Connell, Jeffrey R.

    2014-01-01

    Dominance may be an important source of non-additive genetic variance for many traits of dairy cattle. However, nearly all prediction models for dairy cattle have included only additive effects because of the limited number of cows with both genotypes and phenotypes. The role of dominance in the Holstein and Jersey breeds was investigated for eight traits: milk, fat, and protein yields; productive life; daughter pregnancy rate; somatic cell score; fat percent and protein percent. Additive and dominance variance components were estimated and then used to estimate additive and dominance effects of single nucleotide polymorphisms (SNPs). The predictive abilities of three models with both additive and dominance effects and a model with additive effects only were assessed using ten-fold cross-validation. One procedure estimated dominance values, and another estimated dominance deviations; calculation of the dominance relationship matrix was different for the two methods. The third approach enlarged the dataset by including cows with genotype probabilities derived using genotyped ancestors. For yield traits, dominance variance accounted for 5 and 7% of total variance for Holsteins and Jerseys, respectively; using dominance deviations resulted in smaller dominance and larger additive variance estimates. For non-yield traits, dominance variances were very small for both breeds. For yield traits, including additive and dominance effects fit the data better than including only additive effects; average correlations between estimated genetic effects and phenotypes showed that prediction accuracy increased when both effects rather than just additive effects were included. No corresponding gains in prediction ability were found for non-yield traits. Including cows with derived genotype probabilities from genotyped ancestors did not improve prediction accuracy. The largest additive effects were located on chromosome 14 near DGAT1 for yield traits for both breeds; those SNPs also showed the largest dominance effects for fat yield (both breeds) as well as for Holstein milk yield. PMID:25084281

  8. Genetic influences on the difference in variability of height, weight and body mass index between Caucasian and East Asian adolescent twins.

    PubMed

    Hur, Y-M; Kaprio, J; Iacono, W G; Boomsma, D I; McGue, M; Silventoinen, K; Martin, N G; Luciano, M; Visscher, P M; Rose, R J; He, M; Ando, J; Ooki, S; Nonaka, K; Lin, C C H; Lajunen, H R; Cornes, B K; Bartels, M; van Beijsterveldt, C E M; Cherny, S S; Mitchell, K

    2008-10-01

Twin studies are useful for investigating the causes of trait variation between as well as within a population. The goals of the present study were two-fold: First, we aimed to compare the total phenotypic, genetic and environmental variances of height, weight and BMI between Caucasians and East Asians using twins. Secondly, we intended to estimate the extent to which genetic and environmental factors contribute to differences in variability of height, weight and BMI between Caucasians and East Asians. Height and weight data from 3735 Caucasian and 1584 East Asian twin pairs (age: 13-15 years) from Australia, China, Finland, Japan, the Netherlands, South Korea, Taiwan and the United States were used for analyses. Maximum likelihood twin correlations and variance components model-fitting analyses were conducted to fulfill the goals of the present study. The absolute genetic variances for height, weight and BMI were consistently greater in Caucasians than in East Asians, with corresponding differences in total variances for all three body measures. In all, 80 to 100% of the differences in total variances of height, weight and BMI between the two population groups were associated with genetic differences. Height, weight and BMI were more variable in Caucasian than in East Asian adolescents. Genetic variances for these three body measures were also larger in Caucasians than in East Asians. Variance components model-fitting analyses indicated that genetic factors contributed to the difference in variability of height, weight and BMI between the two population groups. Association studies for these body measures should take account of our findings of differences in genetic variances between the two population groups.

  9. Genetic influences on the difference in variability of height, weight and body mass index between Caucasian and East Asian adolescent twins

    PubMed Central

    Hur, Y-M; Kaprio, J; Iacono, WG; Boomsma, DI; McGue, M; Silventoinen, K; Martin, NG; Luciano, M; Visscher, PM; Rose, RJ; He, M; Ando, J; Ooki, S; Nonaka, K; Lin, CCH; Lajunen, HR; Cornes, BK; Bartels, M; van Beijsterveldt, CEM; Cherny, SS; Mitchell, K

    2008-01-01

Objective Twin studies are useful for investigating the causes of trait variation between as well as within a population. The goals of the present study were two-fold: First, we aimed to compare the total phenotypic, genetic and environmental variances of height, weight and BMI between Caucasians and East Asians using twins. Secondly, we intended to estimate the extent to which genetic and environmental factors contribute to differences in variability of height, weight and BMI between Caucasians and East Asians. Design Height and weight data from 3735 Caucasian and 1584 East Asian twin pairs (age: 13–15 years) from Australia, China, Finland, Japan, the Netherlands, South Korea, Taiwan and the United States were used for analyses. Maximum likelihood twin correlations and variance components model-fitting analyses were conducted to fulfill the goals of the present study. Results The absolute genetic variances for height, weight and BMI were consistently greater in Caucasians than in East Asians, with corresponding differences in total variances for all three body measures. In all, 80 to 100% of the differences in total variances of height, weight and BMI between the two population groups were associated with genetic differences. Conclusion Height, weight and BMI were more variable in Caucasian than in East Asian adolescents. Genetic variances for these three body measures were also larger in Caucasians than in East Asians. Variance components model-fitting analyses indicated that genetic factors contributed to the difference in variability of height, weight and BMI between the two population groups. Association studies for these body measures should take account of our findings of differences in genetic variances between the two population groups. PMID:18779828

  10. Modeling additive and non-additive effects in a hybrid population using genome-wide genotyping: prediction accuracy implications

    PubMed Central

    Bouvet, J-M; Makouanzi, G; Cros, D; Vigneron, Ph

    2016-01-01

Hybrids are broadly used in plant breeding and accurate estimation of variance components is crucial for optimizing genetic gain. Genome-wide information may be used to explore models designed to assess the extent of additive and non-additive variance and test their prediction accuracy for genomic selection. Ten linear mixed models, involving pedigree- and marker-based relationship matrices among parents, were developed to estimate additive (A), dominance (D) and epistatic (AA, AD and DD) effects. Five complementary models, involving the gametic phase to estimate marker-based relationships among hybrid progenies, were developed to assess the same effects. The models were compared using tree height and 3303 single-nucleotide polymorphism markers from 1130 cloned individuals obtained via controlled crosses of 13 Eucalyptus urophylla females with 9 Eucalyptus grandis males. Akaike information criterion (AIC), variance ratios, asymptotic correlation matrices of estimates, goodness-of-fit, prediction accuracy and mean square error (MSE) were used for the comparisons. The variance components and variance ratios differed according to the model. Models with a parent marker-based relationship matrix performed better than those that were pedigree-based, that is, an absence of singularities, lower AIC, higher goodness-of-fit and accuracy and smaller MSE. However, AD and DD variances were estimated with high standard errors. Using the same criteria, progeny gametic phase-based models performed better in fitting the observations and predicting genetic values. However, DD variance could not be separated from the dominance variance and null estimates were obtained for AA and AD effects. This study highlighted the advantages of progeny models using genome-wide information. PMID:26328760

  11. Combining the Hanning windowed interpolated FFT in both directions

    NASA Astrophysics Data System (ADS)

    Chen, Kui Fu; Li, Yan Feng

    2008-06-01

The interpolated fast Fourier transform (IFFT) has been proposed as a way to eliminate the picket fence effect (PFE) of the fast Fourier transform. The modulus-based IFFT, cited in most relevant references, makes use of only the 1st and 2nd highest spectral lines. An approach using three principal spectral lines is proposed. This new approach combines both directions of the complex-spectrum-based IFFT with the Hanning window. The optimal weight to minimize the estimation variance is established from a first-order Taylor series expansion of the noise interference. A numerical simulation is carried out, and the results are compared with the Cramer-Rao bound. It is demonstrated that the proposed approach has a lower estimation variance than the two-spectral-line approach. The improvement depends on the extent to which the sampling deviates from the coherent condition, with at best a variance reduction of 2/7. However, it is also shown that the estimation variance of the IFFT with the Hanning window is significantly higher than that without windowing.
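    For contrast, the standard two-line, modulus-based variant with a Hann window (Grandke's interpolation) takes only a few lines; the proposed method instead combines three principal lines in both directions of the complex spectrum with an optimal weight. A runnable sketch:

    ```python
    import numpy as np

    fs, n = 1000.0, 1024
    f_true = 123.4                                   # off-bin frequency (Hz)
    x = np.sin(2 * np.pi * f_true * np.arange(n) / fs)

    X = np.abs(np.fft.rfft(x * np.hanning(n)))
    k = X[1:-1].argmax() + 1                         # highest spectral line
    k2 = k + 1 if X[k + 1] > X[k - 1] else k - 1     # second-highest neighbour
    alpha = X[k2] / X[k]
    delta = (2.0 * alpha - 1.0) / (alpha + 1.0)      # Grandke's Hann interpolation
    if k2 < k:
        delta = -delta
    print((k + delta) * fs / n)                      # close to 123.4
    ```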

  12. Bayesian generalized least squares regression with application to log Pearson type 3 regional skew estimation

    NASA Astrophysics Data System (ADS)

    Reis, D. S.; Stedinger, J. R.; Martins, E. S.

    2005-10-01

    This paper develops a Bayesian approach to analysis of a generalized least squares (GLS) regression model for regional analyses of hydrologic data. The new approach allows computation of the posterior distributions of the parameters and the model error variance using a quasi-analytic approach. Two regional skew estimation studies illustrate the value of the Bayesian GLS approach for regional statistical analysis of a shape parameter and demonstrate that regional skew models can be relatively precise with effective record lengths in excess of 60 years. With Bayesian GLS the marginal posterior distribution of the model error variance and the corresponding mean and variance of the parameters can be computed directly, thereby providing a simple but important extension of the regional GLS regression procedures popularized by Tasker and Stedinger (1989), which is sensitive to the likely values of the model error variance when it is small relative to the sampling error in the at-site estimator.
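    At a fixed model error variance, the GLS estimator and its covariance are closed-form; the Bayesian treatment in the paper then effectively integrates these quantities over the posterior of the model error variance. A generic sketch with illustrative names and toy data:

    ```python
    import numpy as np

    def gls(X, y, Sigma):
        """GLS estimate and covariance at a fixed error covariance Sigma
        (sampling covariance of at-site estimates plus model error variance)."""
        Si = np.linalg.inv(Sigma)
        cov_beta = np.linalg.inv(X.T @ Si @ X)
        beta = cov_beta @ (X.T @ Si @ y)
        return beta, cov_beta

    n = 30
    X = np.column_stack([np.ones(n), np.random.rand(n)])      # regional covariates
    y = X @ np.array([0.2, 0.5]) + np.random.normal(0, 0.1, n)
    Sigma = np.diag(np.full(n, 0.01)) + 0.005 * np.eye(n)     # sampling + model error
    beta, cov_beta = gls(X, y, Sigma)
    ```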

  13. Clustering analysis strategies for electron energy loss spectroscopy (EELS).

    PubMed

    Torruella, Pau; Estrader, Marta; López-Ortega, Alberto; Baró, Maria Dolors; Varela, Maria; Peiró, Francesca; Estradé, Sònia

    2018-02-01

In this work, the use of cluster analysis algorithms, widely applied in the field of big data, is proposed to explore and analyze electron energy loss spectroscopy (EELS) data sets. Three different data clustering approaches have been tested, both with simulated and experimental data from Fe3O4/Mn3O4 core/shell nanoparticles. The first method consists of applying data clustering directly to the acquired spectra. A second approach is to analyze spectral variance with principal component analysis (PCA) within a given data cluster. Lastly, data clustering on PCA score maps is discussed. The advantages and requirements of each approach are studied. Results demonstrate how clustering is able to recover compositional and oxidation state information from EELS data with minimal user input, giving great prospects for its usage in EEL spectroscopy. Copyright © 2017 Elsevier B.V. All rights reserved.
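    The first two strategies can be sketched with standard tools: cluster the raw spectra, then run PCA within one cluster. The spectrum image below is a random placeholder for an EELS data cube flattened to (pixels x energy channels).

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA

    spectra = np.random.rand(64 * 64, 500)        # placeholder spectrum image
    labels = KMeans(n_clusters=3, n_init=10).fit_predict(spectra)

    core = spectra[labels == 0]                   # one cluster, e.g. the core phase
    print(PCA(n_components=3).fit(core).explained_variance_ratio_)
    ```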

  14. Portfolio optimization with mean-variance model

    NASA Astrophysics Data System (ADS)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk and achieve the target rate of return. The mean-variance model has been proposed in portfolio optimization. The mean-variance model is an optimization model that aims to minimize the portfolio risk, which is the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the optimal weights differ across the component stocks. Moreover, investors can achieve the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
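    Dropping the target-return constraint for brevity, the global minimum-variance weights have a closed form, w = Σ⁻¹1 / (1ᵀΣ⁻¹1); the full mean-variance problem adds an expected-return constraint. A sketch with placeholder returns:

    ```python
    import numpy as np

    returns = np.random.rand(200, 20) * 0.02 - 0.01   # placeholder weekly returns
    Sigma = np.cov(returns, rowvar=False)

    ones = np.ones(Sigma.shape[0])
    w = np.linalg.solve(Sigma, ones)                  # minimum-variance direction
    w /= w.sum()                                      # fully invested portfolio
    print(w @ Sigma @ w)                              # portfolio variance
    ```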

  15. A clinical economics workstation for risk-adjusted health care cost management.

    PubMed Central

    Eisenstein, E. L.; Hales, J. W.

    1995-01-01

    This paper describes a healthcare cost accounting system which is under development at Duke University Medical Center. Our approach differs from current practice in that this system will dynamically adjust its resource usage estimates to compensate for variations in patient risk levels. This adjustment is made possible by introducing a new cost accounting concept, Risk-Adjusted Quantity (RQ). RQ divides case-level resource usage variances into their risk-based component (resource consumption differences attributable to differences in patient risk levels) and their non-risk-based component (resource consumption differences which cannot be attributed to differences in patient risk levels). Because patient risk level is a factor in estimating resource usage, this system is able to simultaneously address the financial and quality dimensions of case cost management. In effect, cost-effectiveness analysis is incorporated into health care cost management. PMID:8563361

  16. Analysis of genetic effects of nuclear-cytoplasmic interaction on quantitative traits: genetic model for diploid plants.

    PubMed

    Han, Lide; Yang, Jian; Zhu, Jun

    2007-06-01

A genetic model was proposed for simultaneously analyzing the genetic effects of the nucleus, the cytoplasm, and nuclear-cytoplasmic interaction (NCI), as well as their genotype by environment (GE) interaction, for quantitative traits of diploid plants. In the model, the NCI effects were further partitioned into additive and dominance nuclear-cytoplasmic interaction components. Mixed linear model approaches were used for statistical analysis. On the basis of diallel cross designs, Monte Carlo simulations showed that the genetic model was robust for estimating variance components under several situations without specific effects. Random genetic effects were predicted by an adjusted unbiased prediction (AUP) method. Data on four quantitative traits (boll number, lint percentage, fiber length, and micronaire) in Upland cotton (Gossypium hirsutum L.) were analyzed as a worked example to show the effectiveness of the model.

  17. Discordance between net analyte signal theory and practical multivariate calibration.

    PubMed

    Brown, Christopher D

    2004-08-01

    Lorber's concept of net analyte signal is reviewed in the context of classical and inverse least-squares approaches to multivariate calibration. It is shown that, in the presence of device measurement error, the classical and inverse calibration procedures have radically different theoretical prediction objectives, and the assertion that the popular inverse least-squares procedures (including partial least squares, principal components regression) approximate Lorber's net analyte signal vector in the limit is disproved. Exact theoretical expressions for the prediction error bias, variance, and mean-squared error are given under general measurement error conditions, which reinforce the very discrepant behavior between these two predictive approaches, and Lorber's net analyte signal theory. Implications for multivariate figures of merit and numerous recently proposed preprocessing treatments involving orthogonal projections are also discussed.

  18. Improved estimation of subject-level functional connectivity using full and partial correlation with empirical Bayes shrinkage.

    PubMed

    Mejia, Amanda F; Nebel, Mary Beth; Barber, Anita D; Choe, Ann S; Pekar, James J; Caffo, Brian S; Lindquist, Martin A

    2018-05-15

Reliability of subject-level resting-state functional connectivity (FC) is determined in part by the statistical techniques employed in its estimation. Methods that pool information across subjects to inform estimation of subject-level effects (e.g., Bayesian approaches) have been shown to enhance reliability of subject-level FC. However, fully Bayesian approaches are computationally demanding, while empirical Bayesian approaches typically rely on using repeated measures to estimate the variance components in the model. Here, we avoid the need for repeated measures by proposing a novel measurement error model for FC describing the different sources of variance and error, which we use to perform empirical Bayes shrinkage of subject-level FC towards the group average. In addition, since the traditional intra-class correlation coefficient (ICC) is inappropriate for biased estimates, we propose a new reliability measure denoted the mean squared error intra-class correlation coefficient (ICC_MSE) to properly assess the reliability of the resulting (biased) estimates. We apply the proposed techniques to test-retest resting-state fMRI data on 461 subjects from the Human Connectome Project to estimate connectivity between 100 regions identified through independent components analysis (ICA). We consider both correlation and partial correlation as the measure of FC and assess the benefit of shrinkage for each measure, as well as the effects of scan duration. We find that shrinkage estimates of subject-level FC exhibit substantially greater reliability than traditional estimates across various scan durations, even for the most reliable connections and regardless of connectivity measure. Additionally, we find partial correlation reliability to be highly sensitive to the choice of penalty term, and to be generally worse than that of full correlations except for certain connections and a narrow range of penalty values. This suggests that the penalty needs to be chosen carefully when using partial correlations. Copyright © 2018. Published by Elsevier Inc.
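    The shrinkage step itself can be illustrated with a method-of-moments empirical Bayes sketch for a single connection; the paper's measurement error model is richer (separate variance sources, scan-duration effects), and all names and numbers here are illustrative.

    ```python
    import numpy as np

    def eb_shrink(subj_fc, var_within):
        """Shrink subject-level FC for one connection toward the group mean.
        var_within: (assumed known) sampling variance of a subject estimate."""
        group_mean = subj_fc.mean()
        var_between = max(subj_fc.var(ddof=1) - var_within, 0.0)
        lam = var_between / (var_between + var_within)   # reliability weight
        return lam * subj_fc + (1.0 - lam) * group_mean

    fc = np.random.normal(0.3, 0.1, 461)                 # placeholder FC estimates
    print(eb_shrink(fc, var_within=0.005)[:3])
    ```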

  19. Doping Among Professional Athletes in Iran: A Test of Akers's Social Learning Theory.

    PubMed

    Kabiri, Saeed; Cochran, John K; Stewart, Bernadette J; Sharepour, Mahmoud; Rahmati, Mohammad Mahdi; Shadmanfaat, Syede Massomeh

    2018-04-01

The use of performance-enhancing drugs (PED) is common among Iranian professional athletes. As this phenomenon is a social problem, the main purpose of this research is to explain why athletes engage in "doping" activity, using social learning theory. For this purpose, a sample of 589 professional athletes from Rasht, Iran, was used to test assumptions related to social learning theory. The results showed that there are positive and significant relationships between the components of social learning theory (differential association, differential reinforcement, imitation, and definitions) and doping behavior (past, present, and future use of PED). The structural modeling analysis indicated that the components of social learning theory account for 36% of the variance in past doping behavior, 35% of the variance in current doping behavior, and 32% of the variance in future use of PED.

  20. Study on nondestructive discrimination of genuine and counterfeit wild ginsengs using NIRS

    NASA Astrophysics Data System (ADS)

    Lu, Q.; Fan, Y.; Peng, Z.; Ding, H.; Gao, H.

    2012-07-01

A new approach for the nondestructive discrimination between genuine wild ginsengs and counterfeit ones by near infrared spectroscopy (NIRS) was developed. Both discriminant analysis and a back propagation artificial neural network (BP-ANN) were applied to establish models for discrimination. Optimal modeling wavelengths were determined based on the anomalous spectral information of counterfeit samples. Through principal component analysis (PCA) of various wild ginseng samples, genuine and counterfeit, the cumulative percentages of variance of the principal components were obtained, serving as a reference for determining the number of principal component (PC) factors. Discriminant analysis achieved an identification ratio of 88.46%. With samples' truth values as its outputs, a three-layer BP-ANN model was built, which yielded a higher discrimination accuracy of 100%. The overall results sufficiently demonstrate that NIRS combined with a BP-ANN classification algorithm performs better on ginseng discrimination than discriminant analysis, and can be used as a rapid and nondestructive method for the detection of counterfeit wild ginsengs in the food and pharmaceutical industries.
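    A three-layer BP-ANN corresponds to a single-hidden-layer perceptron trained by backpropagation; a sketch of the PCA-then-ANN pipeline follows, with placeholder spectra and labels (sizes are illustrative, not the study's data).

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier

    nir = np.random.rand(52, 700)                 # placeholder NIR spectra
    truth = np.random.randint(0, 2, 52)           # 1 = genuine, 0 = counterfeit

    scores = PCA(n_components=6).fit_transform(nir)
    ann = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000).fit(scores, truth)
    print(ann.score(scores, truth))               # training discrimination accuracy
    ```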

  1. Statistical modelling of thermal annealing of fission tracks in apatite

    NASA Astrophysics Data System (ADS)

    Laslett, G. M.; Galbraith, R. F.

    1996-12-01

    We develop an improved methodology for modelling the relationship between mean track length, temperature, and time in fission track annealing experiments. We consider "fanning Arrhenius" models, in which contours of constant mean length on an Arrhenius plot are straight lines meeting at a common point. Features of our approach are explicit use of subject matter knowledge, treating mean length as the response variable, modelling of the mean-variance relationship with two components of variance, improved modelling of the control sample, and using information from experiments in which no tracks are seen. This approach overcomes several weaknesses in previous models and provides a robust six parameter model that is widely applicable. Estimation is via direct maximum likelihood which can be implemented using a standard numerical optimisation package. Because the model is highly nonlinear, some reparameterisations are needed to achieve stable estimation and calculation of precisions. Experience suggests that precisions are more convincingly estimated from profile log-likelihood functions than from the information matrix. We apply our method to the B-5 and Sr fluorapatite data of Crowley et al. (1991) and obtain well-fitting models in both cases. For the B-5 fluorapatite, our model exhibits less fanning than that of Crowley et al. (1991), although fitted mean values above 12 μm are fairly similar. However, predictions can be different, particularly for heavy annealing at geological time scales, where our model is less retentive. In addition, the refined error structure of our model results in tighter prediction errors, and has components of error that are easier to verify or modify. For the Sr fluorapatite, our fitted model for mean lengths does not differ greatly from that of Crowley et al. (1991), but our error structure is quite different.

  2. A longitudinal model for functional connectivity networks using resting-state fMRI.

    PubMed

    Hart, Brian; Cribben, Ivor; Fiecas, Mark

    2018-06-04

    Many neuroimaging studies collect functional magnetic resonance imaging (fMRI) data in a longitudinal manner. However, the current fMRI literature lacks a general framework for analyzing functional connectivity (FC) networks in fMRI data obtained from a longitudinal study. In this work, we build a novel longitudinal FC model using a variance components approach. First, for all subjects' visits, we account for the autocorrelation inherent in the fMRI time series data using a non-parametric technique. Second, we use a generalized least squares approach to estimate 1) the within-subject variance component shared across the population, 2) the baseline FC strength, and 3) the FC's longitudinal trend. Our novel method for longitudinal FC networks seeks to account for the within-subject dependence across multiple visits, the variability due to the subjects being sampled from a population, and the autocorrelation present in fMRI time series data, while restricting the number of parameters in order to make the method computationally feasible and stable. We develop a permutation testing procedure to draw valid inference on group differences in the baseline FC network and change in FC over longitudinal time between a set of patients and a comparable set of controls. To examine performance, we run a series of simulations and apply the model to longitudinal fMRI data collected from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Overall, we found no difference in the global FC network between Alzheimer's disease patients and healthy controls, but did find differing local aging patterns in the FC between the left hippocampus and the posterior cingulate cortex. Copyright © 2018 Elsevier Inc. All rights reserved.

  3. Haplotype-Based Association Analysis via Variance-Components Score Test

    PubMed Central

Tzeng, Jung-Ying; Zhang, Daowen

    2007-01-01

    Haplotypes provide a more informative format of polymorphisms for genetic association analysis than do individual single-nucleotide polymorphisms. However, the practical efficacy of haplotype-based association analysis is challenged by a trade-off between the benefits of modeling abundant variation and the cost of the extra degrees of freedom. To reduce the degrees of freedom, several strategies have been considered in the literature. They include (1) clustering evolutionarily close haplotypes, (2) modeling the level of haplotype sharing, and (3) smoothing haplotype effects by introducing a correlation structure for haplotype effects and studying the variance components (VC) for association. Although the first two strategies enjoy a fair extent of power gain, empirical evidence showed that VC methods may exhibit only similar or less power than the standard haplotype regression method, even in cases of many haplotypes. In this study, we report possible reasons that cause the underpowered phenomenon and show how the power of the VC strategy can be improved. We construct a score test based on the restricted maximum likelihood or the marginal likelihood function of the VC and identify its nontypical limiting distribution. Through simulation, we demonstrate the validity of the test and investigate the power performance of the VC approach and that of the standard haplotype regression approach. With suitable choices for the correlation structure, the proposed method can be directly applied to unphased genotypic data. Our method is applicable to a wide-ranging class of models and is computationally efficient and easy to implement. The broad coverage and the fast and easy implementation of this method make the VC strategy an effective tool for haplotype analysis, even in modern genomewide association studies. PMID:17924336
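    The score statistic for a variance component takes a quadratic-form shape; a stripped-down sketch under a null linear model is below (K would encode haplotype similarity, and the nontypical limiting distribution, a mixture of chi-squares, must be handled e.g. by permutation). This simplifies the paper's REML/marginal-likelihood-based test; all names are illustrative.

    ```python
    import numpy as np

    def vc_score_stat(y, X, K):
        """Variance-component score statistic Q = r' K r, with r the residuals
        of the null (fixed-effects-only) fit; K encodes haplotype similarity."""
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ b
        return r @ K @ r

    n = 100
    G = np.random.rand(n, 5)                      # illustrative haplotype design
    K = G @ G.T                                   # similarity kernel
    X = np.ones((n, 1))                           # null model: intercept only
    y = np.random.normal(size=n)
    print(vc_score_stat(y, X, K))                 # compare against a permutation null
    ```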

  4. Gene set analysis using variance component tests.

    PubMed

    Huang, Yen-Tsung; Lin, Xihong

    2013-06-28

Gene set analyses have become increasingly important in genomic research, as many complex diseases are influenced jointly by alterations of numerous genes. Genes often act together as a functional repertoire, e.g., a biological pathway/network, and are highly correlated. However, most of the existing gene set analysis methods do not fully account for the correlation among the genes. Here we propose to tackle this important feature of a gene set to improve statistical power in gene set analyses. We propose to model the effects of an independent variable, e.g., exposure/biological status (yes/no), on multiple gene expression values in a gene set using a multivariate linear regression model, where the correlation among the genes is explicitly modeled using a working covariance matrix. We develop TEGS (Test for the Effect of a Gene Set), a variance component test for the gene set effects obtained by assuming a common distribution for the regression coefficients in the multivariate linear regression model, and calculate the p-values using permutation and a scaled chi-square approximation. We show using simulations that type I error is protected under different choices of working covariance matrices and power is improved as the working covariance approaches the true covariance. The global test is a special case of TEGS when correlation among genes in a gene set is ignored. Using both simulation data and a published diabetes dataset, we show that our test outperforms the commonly used approaches, the global test and gene set enrichment analysis (GSEA). We develop a gene set analysis method (TEGS) under the multivariate regression framework, which directly models the interdependence of the expression values in a gene set using a working covariance. TEGS outperforms two widely used methods, GSEA and the global test, in both simulation and a diabetes microarray dataset.

  5. Genetic analyses using GGE model and a mixed linear model approach, and stability analyses using AMMI bi-plot for late-maturity alpha-amylase activity in bread wheat genotypes.

    PubMed

    Rasul, Golam; Glover, Karl D; Krishnan, Padmanaban G; Wu, Jixiang; Berzonsky, William A; Fofana, Bourlaye

    2017-06-01

    Low falling number and discounting of grain when it is downgraded in class are the consequences of excessive late-maturity α-amylase activity (LMAA) in bread wheat (Triticum aestivum L.). Grain expressing high LMAA produces poorer quality bread products. To effectively breed for low LMAA, it is necessary to understand what genes control it and how they are expressed, particularly when genotypes are grown in different environments. In this study, an International Collection (IC) of 18 spring wheat genotypes and another set of 15 spring wheat cultivars adapted to South Dakota (SD), USA were assessed to characterize the genetic component of LMAA over 5 and 13 environments, respectively. The data were analysed using a GGE model with a mixed linear model approach, and stability analysis was presented using an AMMI bi-plot in R software. All estimated variance components and their proportions of the total phenotypic variance were highly significant for both sets of genotypes, which was validated by the AMMI model analysis. Broad-sense heritability for LMAA was higher in SD adapted cultivars (53%) compared to that in the IC (49%). Significant genetic effects and stability analyses showed that some genotypes, e.g. 'Lancer', 'Chester' and 'LoSprout' from the IC, and 'Alsen', 'Traverse' and 'Forefront' from the SD cultivars, could be used as parents to develop new cultivars expressing low levels of LMAA. Stability analysis using an AMMI bi-plot revealed that 'Chester', 'Lancer' and 'Advance' were the most stable across environments, while in contrast, 'Kinsman', 'Lerma52' and 'Traverse' exhibited the lowest stability for LMAA across environments.

  6. Reconstruction of Local Sea Levels at South West Pacific Islands—A Multiple Linear Regression Approach (1988-2014)

    NASA Astrophysics Data System (ADS)

    Kumar, V.; Melet, A.; Meyssignac, B.; Ganachaud, A.; Kessler, W. S.; Singh, A.; Aucan, J.

    2018-02-01

    Rising sea levels are a critical concern in small island nations. The problem is especially serious in the western south Pacific, where the total sea level rise over the last 60 years has been up to 3 times the global average. In this study, we aim at reconstructing sea levels at selected sites in the region (Suva and Lautoka, Fiji, and Nouméa, New Caledonia) as a multilinear regression (MLR) of atmospheric and oceanic variables. We focus on sea level variability at interannual-to-interdecadal time scales, and on the trend, over the 1988-2014 period. Local sea levels are first expressed as a sum of steric and mass changes. Then a dynamical approach is used based on wind stress curl as a proxy for the thermosteric component, as wind stress curl anomalies can modulate the thermocline depth and the resultant sea levels via Rossby wave propagation. Statistically significant predictors among wind stress curl, halosteric sea level, zonal/meridional wind stress components, and sea surface temperature are used to construct an MLR model simulating local sea levels. Although we are focusing on the local scale, the global mean sea level still needs to be accounted for. Our reconstructions provide insights into key drivers of sea level variability at the selected sites, showing that while local dynamics and the global signal modulate sea level to a given extent, most of the variance is driven by regional factors. On average, the MLR model is able to reproduce 82% of the variance in island sea level, and could be used to derive local sea level projections via downscaling of climate models.
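
    As an illustration of the regression machinery only (synthetic numbers, not the study's data or its actual predictor set), a least-squares MLR of a monthly sea level series on a few stand-in predictor anomalies:

    ```python
    # Toy multiple linear regression reconstruction of a sea level series.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 324  # monthly samples spanning 1988-2014

    # Stand-in predictors (e.g., wind stress curl, halosteric SL, SST anomalies)
    X = rng.normal(size=(n, 3))
    beta_true = np.array([0.8, 0.5, 0.2])
    sea_level = X @ beta_true + 0.3 * rng.normal(size=n)

    # Least-squares fit with an intercept, then the variance explained (R^2)
    A = np.column_stack([np.ones(n), X])
    coef, *_ = np.linalg.lstsq(A, sea_level, rcond=None)
    r2 = 1.0 - np.var(sea_level - A @ coef) / np.var(sea_level)
    print(coef, r2)
    ```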

  7. Using landscape typologies to model socioecological systems: Application to agriculture of the United States Gulf Coast

    DOE PAGES

    Preston, Benjamin L.; King, Anthony Wayne; Mei, Rui; ...

    2016-02-11

    Agricultural enterprises are vulnerable to the effects of climate variability and change. Improved understanding of the determinants of vulnerability and adaptive capacity in agricultural systems is important for projecting and managing future climate risk. At present, three analytical tools dominate methodological approaches to understanding agroecological vulnerability to climate: process-based crop models, empirical crop models, and integrated assessment models. A common weakness of these approaches is their limited treatment of socio-economic conditions and human agency in modeling agroecological processes and outcomes. This study proposes a framework that uses spatial cluster analysis to generate regional socioecological typologies that capture geographic variance in regional agricultural production and enable attribution of that variance to climatic, topographic, edaphic, and socioeconomic components. This framework was applied to historical corn production (1986-2010) in the U.S. Gulf of Mexico region as a testbed. The results demonstrate that regional socioeconomic heterogeneity is an important driving force in human-dominated ecosystems, which, we hypothesize, is a function of the link between socioeconomic conditions and the adaptive capacity of agricultural systems. Meaningful representation of future agricultural responses to climate variability and change is contingent upon understanding interactions among biophysical conditions, socioeconomic conditions, and human agency, and upon their incorporation in predictive models.
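
    The typology step lends itself to a short sketch. Below is a generic clustering pass over standardized unit-level descriptors, as one plausible reading of "spatial cluster analysis" (the feature count, cluster count, and names are placeholders, not the study's variables):

    ```python
    # Conceptual sketch: cluster spatial units into socioecological typologies.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(2)
    features = rng.normal(size=(500, 6))  # 500 counties x 6 descriptors (toy)

    z = StandardScaler().fit_transform(features)      # standardize descriptors
    labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(z)
    print(np.bincount(labels))                        # size of each typology
    ```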

  8. Assessing the Structure of the Ways of Coping Questionnaire in Fibromyalgia Patients Using Common Factor Analytic Approaches.

    PubMed

    Van Liew, Charles; Santoro, Maya S; Edwards, Larissa; Kang, Jeremy; Cronan, Terry A

    2016-01-01

    The Ways of Coping Questionnaire (WCQ) is a widely used measure of coping processes. Despite its use in a variety of populations, there has been concern about the stability and structure of the WCQ across different populations. This study examines the factor structure of the WCQ in a large sample of individuals diagnosed with fibromyalgia. The participants were 501 adults (478 women) who were part of a larger intervention study. Participants completed the WCQ at their 6-month assessment. Foundational factoring approaches were performed on the data (i.e., maximum likelihood factoring [MLF], iterative principal factoring [IPF], principal axis factoring [PAF], and principal components factoring [PCF]) with oblique oblimin rotation. Various criteria were evaluated to determine the number of factors to be extracted, including Kaiser's rule, Scree plot visual analysis, 5 and 10% unique variance explained, 70 and 80% communal variance explained, and Horn's parallel analysis (PA). It was concluded that the 4-factor PAF solution was the preferable solution, based on PA extraction and the fact that this solution minimizes nonvocality and multivocality. The present study highlights the need for more research focused on defining the limits of the WCQ and the degree to which population-specific and context-specific subscale adjustments are needed.
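
    Horn's parallel analysis, one of the retention criteria listed above, compares observed eigenvalues against those of random data of the same dimensions. A minimal sketch (correlation-matrix version, synthetic stand-in data):

    ```python
    # Minimal Horn's parallel analysis: retain components whose observed
    # eigenvalues exceed the mean eigenvalues of same-sized random data.
    import numpy as np

    def parallel_analysis(data, n_iter=100, seed=0):
        rng = np.random.default_rng(seed)
        n, p = data.shape
        obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
        rand = np.zeros(p)
        for _ in range(n_iter):
            r = rng.normal(size=(n, p))
            rand += np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
        rand /= n_iter
        return int(np.sum(obs > rand))

    X = np.random.default_rng(3).normal(size=(501, 20))  # stand-in responses
    print(parallel_analysis(X))  # suggested number of factors to retain
    ```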

  9. A perspective on interaction effects in genetic association studies

    PubMed Central

    2016-01-01

    ABSTRACT The identification of gene–gene and gene–environment interaction in human traits and diseases is an active area of research that generates high expectations and most often leads to high disappointment. This is partly explained by a misunderstanding of the inherent characteristics of standard regression-based interaction analyses. Here, I revisit and untangle major theoretical aspects of interaction tests in the special case of linear regression; in particular, I discuss variable coding schemes, interpretation of effect estimates, statistical power, and estimation of variance explained with regard to various hypothetical interaction patterns. Linking these components, it appears first that the simplest biological interaction models, in which the magnitude of a genetic effect depends on a common exposure, are among the most difficult to identify. Second, I highlight the shortcomings of the current strategy for evaluating the contribution of interaction effects to the variance of quantitative outcomes and argue for the use of new approaches to overcome this issue. Finally, I explore the advantages and limitations of multivariate interaction models, when testing for interaction between multiple SNPs and/or multiple exposures, over univariate approaches. Together, these new insights can be leveraged for future method development and to improve our understanding of the genetic architecture of multifactorial traits. PMID:27390122

  10. Geochemistry of sediments in the Northern and Central Adriatic Sea

    NASA Astrophysics Data System (ADS)

    De Lazzari, A.; Rampazzo, G.; Pavoni, B.

    2004-03-01

    Major, minor and trace elements, loss on ignition, specific surface area, quantities of calcite and dolomite, qualitative mineralogical composition, grain-size distribution and organic micropollutants (PAH, PCB, DDT) were determined on surficial marine sediments sampled during the 1990 ASCOP (Adriatic Scientific Cooperative Program) cruise. Mineralogical composition and carbonate content of the samples were found to be comparable with data previously reported in the literature, whereas the geochemical composition and distribution of major, minor and trace elements for samples in international waters and in the central basin had never been reported before. The large amount of information contained in these variables of different origin was processed with a comprehensive approach that establishes the relations among the components through the mathematical-statistical calculation of principal components (factors). These account for the major part of the data variance while losing only marginal parts of the information, and they are independent of the units of measure. The sample descriptors concerning natural components and contamination load are discussed by means of a statistical model based on an R-mode factor analysis, which yields four significant factors that explain 86.8% of the total variance and represent important relationships between grain size, mineralogy, geochemistry and organic micropollutants. A description and an interpretation of the factor composition are discussed on the basis of pollution inputs, basin geology and hydrodynamics. The areal distribution of the factors showed that the fine grain-size fraction, with oxides and hydroxides of colloidal origin, is the main means of transport and thus the principal link between the chemical, physical and granulometric elements in the Adriatic.

  11. Estimating spatial and temporal components of variation in count data using negative binomial mixed models

    USGS Publications Warehouse

    Irwin, Brian J.; Wagner, Tyler; Bence, James R.; Kepler, Megan V.; Liu, Weihai; Hayes, Daniel B.

    2013-01-01

    Partitioning total variability into its component temporal and spatial sources is a powerful way to better understand time series and elucidate trends. The data available for such analyses of fish and other populations are usually nonnegative integer counts of the number of organisms, often dominated by many low values with few observations of relatively high abundance. These characteristics are not well approximated by the Gaussian distribution. We present a detailed description of a negative binomial mixed-model framework that can be used to model count data and quantify temporal and spatial variability. We applied these models to data from four fishery-independent surveys of Walleyes Sander vitreus across the Great Lakes basin. Specifically, we fitted models to gill-net catches from Wisconsin waters of Lake Superior; Oneida Lake, New York; Saginaw Bay in Lake Huron, Michigan; and Ohio waters of Lake Erie. These long-term monitoring surveys varied in overall sampling intensity, the total catch of Walleyes, and the proportion of zero catches. Parameter estimation included the negative binomial scaling parameter, and we quantified the random effects as the variations among gill-net sampling sites, the variations among sampled years, and site × year interactions. This framework (i.e., the application of a mixed model appropriate for count data in a variance-partitioning context) represents a flexible approach that has implications for monitoring programs (e.g., trend detection) and for examining the potential of individual variance components to serve as response metrics to large-scale anthropogenic perturbations or ecological changes.
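
    Fitting such models calls for specialized mixed-model software, but the data-generating idea is easy to sketch: log-scale site, year, and site-by-year random effects feeding a gamma-Poisson (negative binomial) count. All variance parameters below are invented for illustration:

    ```python
    # Toy simulation of counts with site/year variance components.
    import numpy as np

    rng = np.random.default_rng(4)
    n_sites, n_years, theta = 30, 20, 2.0   # theta: NB size (dispersion)

    site = rng.normal(0.0, 0.6, n_sites)             # among-site variation
    year = rng.normal(0.0, 0.4, n_years)             # among-year variation
    sxy = rng.normal(0.0, 0.3, (n_sites, n_years))   # site x year interaction

    mu = np.exp(1.0 + site[:, None] + year[None, :] + sxy)
    counts = rng.negative_binomial(theta, theta / (theta + mu))  # gamma-Poisson
    print(counts.mean(), (counts == 0).mean())       # skewed, zero-heavy counts
    ```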

  12. Distribution of lod scores in oligogenic linkage analysis.

    PubMed

    Williams, J T; North, K E; Martin, L J; Comuzzie, A G; Göring, H H; Blangero, J

    2001-01-01

    In variance component oligogenic linkage analysis, the residual additive genetic variance can be driven to its boundary of zero when estimating the effect of the ith quantitative trait locus. Using quantitative trait Q1 from the Genetic Analysis Workshop 12 simulated general population data, we compare the observed lod scores from oligogenic linkage analysis with the empirical lod score distribution under a null model of no linkage. We find that zero residual additive genetic variance in the null model alters the usual distribution of the likelihood-ratio statistic.

  13. Excitation variance matching with limited configuration interaction expansions in variational Monte Carlo

    DOE PAGES

    Robinson, Paul J.; Pineda Flores, Sergio D.; Neuscamman, Eric

    2017-10-28

    In the regime where traditional approaches to electronic structure cannot afford to achieve accurate energy differences via exhaustive wave function flexibility, rigorous approaches to balancing different states’ accuracies become desirable. As a direct measure of a wave function’s accuracy, the energy variance offers one route to achieving such a balance. Here, we develop and test a variance matching approach for predicting excitation energies within the context of variational Monte Carlo and selective configuration interaction. In a series of tests on small but difficult molecules, we demonstrate that the approach is effective at delivering accurate excitation energies when the wave function is far from the exhaustive flexibility limit. Results in C3, where we combine this approach with variational Monte Carlo orbital optimization, are especially encouraging.

  14. Stochastic uncertainty analysis for solute transport in randomly heterogeneous media using a Karhunen‐Loève‐based moment equation approach

    USGS Publications Warehouse

    Liu, Gaisheng; Lu, Zhiming; Zhang, Dongxiao

    2007-01-01

    A new approach has been developed for solving solute transport problems in randomly heterogeneous media using the Karhunen‐Loève‐based moment equation (KLME) technique proposed by Zhang and Lu (2004). The KLME approach combines the Karhunen‐Loève decomposition of the underlying random conductivity field and the perturbative and polynomial expansions of dependent variables including the hydraulic head, flow velocity, dispersion coefficient, and solute concentration. The equations obtained in this approach are sequential, and their structure is formulated in the same form as the original governing equations such that any existing simulator, such as Modular Three‐Dimensional Multispecies Transport Model for Simulation of Advection, Dispersion, and Chemical Reactions of Contaminants in Groundwater Systems (MT3DMS), can be directly applied as the solver. Through a series of two‐dimensional examples, the validity of the KLME approach is evaluated against the classical Monte Carlo simulations. Results indicate that under the flow and transport conditions examined in this work, the KLME approach provides an accurate representation of the mean concentration. For the concentration variance, the accuracy of the KLME approach is good when the conductivity variance is 0.5. As the conductivity variance increases up to 1.0, the mismatch on the concentration variance becomes large, although the mean concentration can still be accurately reproduced by the KLME approach. Our results also indicate that when the conductivity variance is relatively large, neglecting the effects of the cross terms between velocity fluctuations and local dispersivities, as done in some previous studies, can produce noticeable errors, and a rigorous treatment of the dispersion terms becomes more appropriate.
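
    The decomposition at the heart of the KLME technique can be sketched at grid level: eigendecompose a discretized covariance of the random field and truncate the expansion to the leading modes. The grid, correlation length, and variance below are made-up illustration values:

    ```python
    # Grid-level Karhunen-Loeve sketch for a random log-conductivity field.
    import numpy as np

    x = np.linspace(0.0, 1.0, 200)
    sigma2, corr_len = 1.0, 0.2
    cov = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

    vals, vecs = np.linalg.eigh(cov)              # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]        # sort descending

    n_kl = 20                                     # retained KL modes
    xi = np.random.default_rng(5).normal(size=n_kl)
    field = vecs[:, :n_kl] @ (np.sqrt(vals[:n_kl]) * xi)  # one realization
    print(vals[:n_kl].sum() / vals.sum())         # variance fraction retained
    ```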

  15. Dominance Genetic Variance for Traits Under Directional Selection in Drosophila serrata

    PubMed Central

    Sztepanacz, Jacqueline L.; Blows, Mark W.

    2015-01-01

    In contrast to our growing understanding of patterns of additive genetic variance in single- and multi-trait combinations, the relative contribution of nonadditive genetic variance, particularly dominance variance, to multivariate phenotypes is largely unknown. While mechanisms for the evolution of dominance genetic variance have been, and to some degree remain, subject to debate, the pervasiveness of dominance is widely recognized and may play a key role in several evolutionary processes. Theoretical and empirical evidence suggests that the contribution of dominance variance to phenotypic variance may increase with the correlation between a trait and fitness; however, direct tests of this hypothesis are few. Using a multigenerational breeding design in an unmanipulated population of Drosophila serrata, we estimated additive and dominance genetic covariance matrices for multivariate wing-shape phenotypes, together with a comprehensive measure of fitness, to determine whether there is an association between directional selection and dominance variance. Fitness, a trait unequivocally under directional selection, had no detectable additive genetic variance, but significant dominance genetic variance contributing 32% of the phenotypic variance. For single and multivariate morphological traits, however, no relationship was observed between trait–fitness correlations and dominance variance. A similar proportion of additive and dominance variance was found to contribute to phenotypic variance for single traits, and double the amount of additive compared to dominance variance was found for the multivariate trait combination under directional selection. These data suggest that for many fitness components a positive association between directional selection and dominance genetic variance may not be expected. PMID:25783700

  16. A Practical Methodology for Quantifying Random and Systematic Components of Unexplained Variance in a Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Deloach, Richard; Obara, Clifford J.; Goodman, Wesley L.

    2012-01-01

    This paper documents a check standard wind tunnel test conducted in the Langley 0.3-Meter Transonic Cryogenic Tunnel (0.3M TCT) that was designed and analyzed using the Modern Design of Experiments (MDOE). The test was designed to partition the unexplained variance of typical wind tunnel data samples into two constituent components, one attributable to ordinary random error and one attributable to systematic error induced by covariate effects. Covariate effects in wind tunnel testing are discussed, with examples. The impact of systematic (non-random) unexplained variance on the statistical independence of sequential measurements is reviewed. The corresponding correlation among experimental errors is discussed, as is the impact of such correlation on experimental results generally. The specific experiment documented herein was organized as a formal test for the presence of unexplained variance in representative samples of wind tunnel data, in order to quantify the frequency with which such systematic error was detected and its magnitude relative to ordinary random error. Levels of systematic and random error reported here are representative of those quantified in other facilities, as cited in the references.
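
    One way to picture the partition (a toy sketch, not the MDOE analysis itself): replicate check-standard measurements grouped into blocks, where within-block scatter estimates ordinary random error and the excess between-block scatter estimates the systematic, covariate-induced component:

    ```python
    # Method-of-moments split of unexplained variance into random and
    # systematic parts from blocked replicates (all numbers invented).
    import numpy as np

    rng = np.random.default_rng(6)
    g, n = 10, 8                              # 10 blocks, 8 replicates each
    block_shift = rng.normal(0.0, 0.05, g)    # systematic (covariate) component
    y = 1.0 + block_shift[:, None] + rng.normal(0.0, 0.02, (g, n))

    within = y.var(axis=1, ddof=1).mean()               # random error variance
    between = y.mean(axis=1).var(ddof=1) - within / n   # systematic variance
    print(f"random sd ~ {within ** 0.5:.3f}, "
          f"systematic sd ~ {max(between, 0.0) ** 0.5:.3f}")
    ```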

  17. Variance Estimation for NAEP Data Using a Resampling-Based Approach: An Application of Cognitive Diagnostic Models. Research Report. ETS RR-10-26

    ERIC Educational Resources Information Center

    Hsieh, Chueh-an; Xu, Xueli; von Davier, Matthias

    2010-01-01

    This paper presents an application of a jackknifing approach to variance estimation of ability inferences for groups of students, using a multidimensional discrete model for item response data. The data utilized to demonstrate the approach come from the National Assessment of Educational Progress (NAEP). In contrast to the operational approach…
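
    The resampling idea itself is compact. Below is a generic delete-one jackknife variance estimate for a statistic; operational NAEP jackknifing is organized around paired sampling units, which this toy version does not attempt to reproduce:

    ```python
    # Generic delete-one jackknife variance estimate.
    import numpy as np

    def jackknife_variance(values, stat=np.mean):
        n = len(values)
        reps = np.array([stat(np.delete(values, i)) for i in range(n)])
        return (n - 1) / n * np.sum((reps - reps.mean()) ** 2)

    scores = np.random.default_rng(7).normal(250, 35, size=100)  # toy scores
    # For the mean, the jackknife reproduces the usual s^2/n exactly:
    print(jackknife_variance(scores), scores.var(ddof=1) / 100)
    ```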

  1. Environmental rather than genetic factors determine the variation in the age of the infancy to childhood transition: a twins study.

    PubMed

    German, Alina; Livshits, Gregory; Peter, Inga; Malkin, Ida; Dubnov, Jonathan; Akons, Hannah; Shmoish, Michael; Hochberg, Ze'ev

    2015-03-01

    Using a twin study, we sought to assess the relative contributions of genetic and environmental factors to the age at transition from infancy to childhood (ICT). The subjects were 56 pairs of monozygotic twins, 106 pairs of dizygotic twins, and 106 pairs of regular siblings (SBs), for a total of 536 children. Their ICT was determined, and a variance component analysis was implemented to estimate components of the familial variance, with simultaneous adjustment for potential covariates. We found a substantial contribution of the common environment shared by all types of SBs, which explained 27.7% of the total variance in ICT, whereas the common twin environment explained 9.2% of the variance, gestational age 3.5%, and birth weight 1.8%. In addition, 8.7% was attributable to sex difference, but we found no detectable contribution of genetic factors to inter-individual variation in ICT age. Developmental plasticity impacts much of human growth. Here we show that of the ∼50% of the variance in adult height contributed by the ICT, 42.2% is attributable to adaptive cues represented by the shared twin and SB environment, with no detectable genetic involvement. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Analysis of stimulus-related activity in rat auditory cortex using complex spectral coefficients

    PubMed Central

    Krause, Bryan M.

    2013-01-01

    The neural mechanisms of sensory responses recorded from the scalp or cortical surface remain controversial. Evoked vs. induced response components (i.e., changes in mean vs. variance) are associated with bottom-up vs. top-down processing, but trial-by-trial response variability can confound this interpretation. Phase reset of ongoing oscillations has also been postulated to contribute to sensory responses. In this article, we present evidence that responses under passive listening conditions are dominated by variable evoked response components. We measured the mean, variance, and phase of complex time-frequency coefficients of epidurally recorded responses to acoustic stimuli in rats. During the stimulus, changes in mean, variance, and phase tended to co-occur. After the stimulus, there was a small, low-frequency offset response in the mean and modest, prolonged desynchronization in the alpha band. Simulations showed that trial-by-trial variability in the mean can account for most of the variance and phase changes observed during the stimulus. This variability was state dependent, with smallest variability during periods of greatest arousal. Our data suggest that cortical responses to auditory stimuli reflect variable inputs to the cortical network. These analyses suggest that caution should be exercised when interpreting variance and phase changes in terms of top-down cortical processing. PMID:23657279
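
    The quantities discussed can be read off per-trial complex time-frequency coefficients directly: the trial mean captures the evoked part, the variance of the mean-removed coefficients the induced part, and the inter-trial coherence the phase consistency. A sketch on simulated coefficients (values are arbitrary):

    ```python
    # Evoked/induced/phase summaries from complex coefficients, one
    # time-frequency bin across trials (simulated data).
    import numpy as np

    rng = np.random.default_rng(8)
    n_trials = 200
    phase = rng.uniform(0.0, 2.0 * np.pi, n_trials)
    z = 1.0 + 0.5 * np.exp(1j * phase)      # fixed evoked part + variable part

    evoked = z.mean()                       # change in mean
    induced = np.var(z - evoked)            # residual (trial-varying) power
    itc = np.abs(np.mean(z / np.abs(z)))    # inter-trial phase coherence
    print(abs(evoked), induced, itc)
    ```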

  3. Estimating the periodic components of a biomedical signal through inverse problem modelling and Bayesian inference with sparsity enforcing prior

    NASA Astrophysics Data System (ADS)

    Dumitru, Mircea; Djafari, Ali-Mohammad

    2015-01-01

    Recent developments in chronobiology call for an analysis of variation in the periodic components of signals expressing biological rhythms. A precise estimation of the periodic components vector is required. The classical approaches, based on FFT methods, are inefficient given the particularities of the data (short length). In this paper we propose a new method using sparsity as prior information (a reduced number of non-zero components). The law considered is the Student-t distribution, viewed as the marginal distribution of an Infinite Gaussian Scale Mixture (IGSM) defined via hidden variables representing the inverse variances and modelled by a Gamma distribution. The hyperparameters are modelled using conjugate priors, i.e. Inverse Gamma distributions. The expression of the joint posterior law of the unknown periodic components vector, hidden variables and hyperparameters is obtained, and the unknowns are then estimated via Joint Maximum A Posteriori (JMAP) and the Posterior Mean (PM). For the PM estimator, the expression of the posterior law is approximated by a separable one via the Bayesian Variational Approximation (BVA), using the Kullback-Leibler (KL) divergence. Finally we show results on synthetic data in cancer treatment applications.

  4. A Bayesian approach to estimating variance components within a multivariate generalizability theory framework.

    PubMed

    Jiang, Zhehan; Skorupski, William

    2017-12-12

    In many behavioral research areas, multivariate generalizability theory (mG theory) has typically been used to investigate the reliability of certain multidimensional assessments. However, traditional mG-theory estimation, namely via frequentist approaches, has limits that lead researchers to fail to take full advantage of the information that mG theory can offer regarding the reliability of measurements. Alternatively, Bayesian methods provide more information than frequentist approaches can offer. This article presents instructional guidelines on how to implement mG-theory analyses in a Bayesian framework; in particular, BUGS code is presented to fit commonly seen designs from mG theory, including single-facet designs, two-facet crossed designs, and two-facet nested designs. In addition to concrete examples that are closely related to the selected designs and the corresponding BUGS code, a simulated dataset is provided to demonstrate the utility and advantages of the Bayesian approach. This article is intended to serve as a tutorial reference for applied researchers and methodologists conducting mG-theory studies.

  5. On the Fallibility of Principal Components in Research

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.; Li, Tenglong

    2017-01-01

    The measurement error in principal components extracted from a set of fallible measures is discussed and evaluated. It is shown that as long as one or more measures in a given set of observed variables contains error of measurement, so also does any principal component obtained from the set. The error variance in any principal component is shown…

  6. Neurobiological studies of risk assessment: a comparison of expected utility and mean-variance approaches.

    PubMed

    D'Acremont, Mathieu; Bossaerts, Peter

    2008-12-01

    When modeling valuation under uncertainty, economists generally prefer expected utility because it has an axiomatic foundation, meaning that the resulting choices will satisfy a number of rationality requirements. In expected utility theory, values are computed by multiplying probabilities of each possible state of nature by the payoff in that state and summing the results. The drawback of this approach is that all state probabilities need to be dealt with separately, which becomes extremely cumbersome when it comes to learning. Finance academics and professionals, however, prefer to value risky prospects in terms of a trade-off between expected reward and risk, where the latter is usually measured in terms of reward variance. This mean-variance approach is fast and simple and greatly facilitates learning, but it impedes assigning values to new gambles on the basis of those of known ones. To date, it is unclear whether the human brain computes values in accordance with expected utility theory or with mean-variance analysis. In this article, we discuss the theoretical and empirical arguments that favor one or the other theory. We also propose a new experimental paradigm that could determine whether the human brain follows the expected utility or the mean-variance approach. Behavioral results of implementation of the paradigm are discussed.
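
    The contrast between the two valuation rules is easy to state numerically. A toy sketch for a two-outcome gamble, with an assumed square-root utility and an arbitrary risk-aversion weight:

    ```python
    # Expected utility vs. mean-variance valuation of a simple gamble.
    import numpy as np

    payoffs = np.array([0.0, 100.0])
    probs = np.array([0.5, 0.5])

    # Expected utility: sum over states of p(s) * u(payoff(s))
    u = np.sqrt                     # assumed concave utility function
    eu = float(probs @ u(payoffs))

    # Mean-variance: expected reward penalized by the reward variance
    mean = float(probs @ payoffs)
    var = float(probs @ (payoffs - mean) ** 2)
    mv = mean - 0.01 * var          # 0.01 = assumed risk-aversion coefficient
    print(eu, mv)
    ```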

  7. Waveform-based spaceborne GNSS-R wind speed observation: Demonstration and analysis using UK TechDemoSat-1 data

    NASA Astrophysics Data System (ADS)

    Wang, Feng; Yang, Dongkai; Zhang, Bo; Li, Weiqiang

    2018-03-01

    This paper explores two types of mathematical functions to fit the single- and full-frequency waveforms of spaceborne Global Navigation Satellite System-Reflectometry (GNSS-R), respectively. The metrics of the waveforms, such as the noise floor, peak magnitude, mid-point position of the leading edge, leading edge slope and trailing edge slope, can be derived from the parameters of the proposed models. Because the quality of the UK TDS-1 data is not at the level required by a remote sensing mission, waveforms buried in noise or originating from ice/land are removed, using the peak-to-mean ratio and the cosine similarity of the waveform, before wind speeds are retrieved. Single-parameter retrieval models are developed by comparing the peak magnitude, leading edge slope and trailing edge slope derived from the parameters of the proposed models with in situ wind speeds from the ASCAT scatterometer. To improve the retrieval accuracy, three types of multi-parameter observations based on principal component analysis (PCA), the minimum variance (MV) estimator and a Back Propagation (BP) network are implemented. The results indicate that, compared to the best results of the single-parameter observation, the approaches based on principal component analysis and minimum variance could not significantly improve retrieval accuracy; the BP networks, however, obtained an improvement, with RMSEs of 2.55 m/s and 2.53 m/s for the single- and full-frequency waveforms, respectively.

  8. Integrating Nonadditive Genomic Relationship Matrices into the Study of Genetic Architecture of Complex Traits.

    PubMed

    Nazarian, Alireza; Gezan, Salvador A

    2016-03-01

    The study of genetic architecture of complex traits has been dramatically influenced by implementing genome-wide analytical approaches during recent years. Of particular interest are genomic prediction strategies which make use of genomic information for predicting phenotypic responses instead of detecting trait-associated loci. In this work, we present the results of a simulation study to improve our understanding of the statistical properties of estimation of genetic variance components of complex traits, and of additive, dominance, and genetic effects through best linear unbiased prediction methodology. Simulated dense marker information was used to construct genomic additive and dominance matrices, and multiple alternative pedigree- and marker-based models were compared to determine if including a dominance term into the analysis may improve the genetic analysis of complex traits. Our results showed that a model containing a pedigree- or marker-based additive relationship matrix along with a pedigree-based dominance matrix provided the best partitioning of genetic variance into its components, especially when some degree of true dominance effects was expected to exist. Also, we noted that the use of a marker-based additive relationship matrix along with a pedigree-based dominance matrix had the best performance in terms of accuracy of correlations between true and estimated additive, dominance, and genetic effects. © The American Genetic Association 2015. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
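
    For context, one common construction of a marker-based additive relationship matrix is VanRaden's first method; the sketch below applies it to a random 0/1/2 genotype matrix (a generic recipe, not necessarily the exact matrices used in the cited simulations):

    ```python
    # VanRaden-style genomic additive relationship matrix G = ZZ'/(2*sum(p*q)).
    import numpy as np

    rng = np.random.default_rng(9)
    M = rng.integers(0, 3, size=(100, 1000)).astype(float)  # individuals x SNPs

    p = M.mean(axis=0) / 2.0                 # allele frequencies
    Z = M - 2.0 * p                          # center genotypes by 2p
    G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))
    print(G.shape, np.diag(G).mean())        # diagonal averages near 1
    ```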

  9. Prediction of lethal/effective concentration/dose in the presence of multiple auxiliary covariates and components of variance

    USGS Publications Warehouse

    Gutreuter, S.; Boogaard, M.A.

    2007-01-01

    Predictors of the percentile lethal/effective concentration/dose are commonly used measures of efficacy and toxicity. Typically such quantal-response predictors (e.g., the exposure required to kill 50% of some population) are estimated from simple bioassays wherein organisms are exposed to a gradient of several concentrations of a single agent. The toxicity of an agent may be influenced by auxiliary covariates, however, and more complicated experimental designs may introduce multiple variance components. Prediction methods for such cases have lagged behind. A conventional two-stage approach consists of multiple bivariate predictions of, say, the median lethal concentration followed by regression of those predictions on the auxiliary covariates. We propose a more effective and parsimonious class of generalized nonlinear mixed-effects models for prediction of lethal/effective dose/concentration from auxiliary covariates. We demonstrate examples using data from a study regarding the effects of pH and additions of variable quantities of 2',5'-dichloro-4'-nitrosalicylanilide (niclosamide) on the toxicity of 3-trifluoromethyl-4-nitrophenol to larval sea lamprey (Petromyzon marinus). The new models yielded unbiased predictions and root-mean-squared errors (RMSEs) of prediction for the exposure required to kill 50 and 99.9% of some population that were 29 to 82% smaller, respectively, than those from the conventional two-stage procedure. The model class is flexible and easily implemented using commonly available software. © 2007 SETAC.

  10. Perturbative approach to covariance matrix of the matter power spectrum

    NASA Astrophysics Data System (ADS)

    Mohammed, Irshad; Seljak, Uroš; Vlah, Zvonimir

    2017-04-01

    We evaluate the covariance matrix of the matter power spectrum using perturbation theory up to dominant terms at 1-loop order and compare it to numerical simulations. We decompose the covariance matrix into the disconnected (Gaussian) part, trispectrum from the modes outside the survey (supersample variance) and trispectrum from the modes inside the survey, and show how the different components contribute to the overall covariance matrix. We find the agreement with the simulations is at a 10 per cent level up to k ~ 1 h Mpc^-1. We show that all the connected components are dominated by the large-scale modes (k < 0.1 h Mpc^-1), regardless of the values of the wave vectors k, k' of the covariance matrix, suggesting that one must be careful in applying the jackknife or bootstrap methods to the covariance matrix. We perform an eigenmode decomposition of the connected part of the covariance matrix, showing that at higher k, it is dominated by a single eigenmode. The full covariance matrix can be approximated as the disconnected part only, with the connected part being treated as an external nuisance parameter with a known scale dependence, and a known prior on its variance for a given survey volume. Finally, we provide a prescription for how to evaluate the covariance matrix from small box simulations without the need to simulate large volumes.

  11. Definition of the limit of quantification in the presence of instrumental and non-instrumental errors. Comparison among various definitions applied to the calibration of zinc by inductively coupled plasma-mass spectrometry

    NASA Astrophysics Data System (ADS)

    Badocco, Denis; Lavagnini, Irma; Mondin, Andrea; Favaro, Gabriella; Pastore, Paolo

    2015-12-01

    A limit of quantification (LOQ) in the presence of instrumental and non-instrumental errors is proposed. It is defined theoretically by combining the two-component variance regression and LOQ schemas already present in the literature, and it is applied to the calibration of zinc by the ICP-MS technique. At low concentration levels, the two-component variance LOQ definition should always be used, above all when a clean room is not available. Three LOQ definitions were considered: one in the concentration domain and two in the signal domain. The LOQ computed in the concentration domain, proposed by Currie, was completed by adding the third-order terms in the Taylor expansion, because they are of the same order of magnitude as the second-order ones and therefore cannot be neglected. In this context, the error propagation was simplified by eliminating the correlation contributions through the use of independent random variables. Among the signal-domain definitions, particular attention was devoted to the recently proposed approach based on at least one significant digit in the measurement. The resulting relative LOQ values were very large, preventing quantitative analysis. It was found that the Currie schemas in the signal and concentration domains gave similar LOQ values, but the former formulation is to be preferred as it is more easily computed.
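
    A minimal sketch of the two-component idea, assuming the common form sd(c)^2 = s0^2 + (eta*c)^2 (a constant absolute term plus a proportional term) and a 10% relative-standard-deviation criterion; neither the functional form nor the parameter values are taken from the paper:

    ```python
    # RSD-based LOQ under an assumed two-component variance model.
    import math

    def loq_two_component(s0, eta, target_rsd=0.10):
        """Smallest c with sd(c)/c <= target_rsd; unattainable if eta >= target."""
        if eta >= target_rsd:
            return math.inf
        return s0 / math.sqrt(target_rsd ** 2 - eta ** 2)

    print(loq_two_component(s0=0.5, eta=0.05))  # in the concentration units of s0
    ```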

  12. Classification of time-of-flight secondary ion mass spectrometry spectra from complex Cu-Fe sulphides by principal component analysis and artificial neural networks.

    PubMed

    Kalegowda, Yogesh; Harmer, Sarah L

    2013-01-08

    Artificial neural network (ANN) and a hybrid principal component analysis-artificial neural network (PCA-ANN) classifiers have been successfully implemented for classification of static time-of-flight secondary ion mass spectrometry (ToF-SIMS) mass spectra collected from complex Cu-Fe sulphides (chalcopyrite, bornite, chalcocite and pyrite) at different flotation conditions. ANNs are very good pattern classifiers because of: their ability to learn and generalise patterns that are not linearly separable; their fault and noise tolerance capability; and high parallelism. In the first approach, fragments from the whole ToF-SIMS spectrum were used as input to the ANN, the model yielded high overall correct classification rates of 100% for feed samples, 88% for conditioned feed samples and 91% for Eh modified samples. In the second approach, the hybrid pattern classifier PCA-ANN was integrated. PCA is a very effective multivariate data analysis tool applied to enhance species features and reduce data dimensionality. Principal component (PC) scores which accounted for 95% of the raw spectral data variance, were used as input to the ANN, the model yielded high overall correct classification rates of 88% for conditioned feed samples and 95% for Eh modified samples. Copyright © 2012 Elsevier B.V. All rights reserved.
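
    A generic version of the hybrid pipeline (synthetic stand-in data; the 95% variance cut-off mirrors the description above, while the network size and everything else are assumptions):

    ```python
    # PCA-ANN pipeline sketch: PCs carrying ~95% of the variance feed a
    # small neural-network classifier (toy spectra, not ToF-SIMS data).
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(10)
    spectra = rng.normal(size=(120, 300))   # 120 spectra x 300 mass fragments
    mineral = rng.integers(0, 4, size=120)  # 4 sulphide classes (toy labels)

    clf = make_pipeline(
        StandardScaler(),
        PCA(n_components=0.95),             # keep 95% of the variance
        MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
    )
    clf.fit(spectra, mineral)
    print(clf.score(spectra, mineral))      # training accuracy only (toy)
    ```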

  13. An integrated approach of comparative genomics and heritability analysis of pig and human on obesity trait: evidence for candidate genes on human chromosome 2.

    PubMed

    Kim, Jaemin; Lee, Taeheon; Kim, Tae-Hun; Lee, Kyung-Tai; Kim, Heebal

    2012-12-19

    The traditional candidate gene approach has been widely used for the study of complex diseases, including obesity. However, this approach is largely limited by its dependence on existing knowledge of the presumed biology of the phenotype under investigation. Our combined strategy of comparative genomics and chromosomal heritability estimation for obesity traits, subscapular skinfold thickness and back-fat thickness in Korean cohorts and the pig (Sus scrofa), may overcome the limitations of candidate gene analysis and allow us to better understand genetic predisposition to human obesity. We found common genes, including FTO, the fat mass and obesity associated gene, identified from significant SNPs by association studies of each trait. These common genes were related to blood pressure and arterial stiffness (P = 1.65E-05) and type 2 diabetes (P = 0.00578). Through SNP-based estimation of the genetic variance component (heritability) for each chromosome, we observed a significant positive correlation (r = 0.479) between the genetic contributions of human and pig to obesity traits. Furthermore, we noted that human chromosome 2 (syntenic to pig chromosomes 3 and 15) was most important in explaining the phenotypic variance for obesity. Obesity genetics still awaits further discovery. Navigating syntenic regions suggests obesity candidate genes on chromosome 2 that were previously known to be associated with obesity-related diseases: MRPL33, PARD3B, ERBB4, STK39, and ZNF385B.

  14. Geomagnetic field model for the last 5 My: time-averaged field and secular variation

    NASA Astrophysics Data System (ADS)

    Hatakeyama, Tadahiro; Kono, Masaru

    2002-11-01

    The structure of the geomagnetic field has been studied using the paleomagnetic direction data of the last 5 million years obtained from lava flows. The method we used is the nonlinear version, similar to the works of Gubbins and Kelly [Nature 365 (1993) 829], Johnson and Constable [Geophys. J. Int. 122 (1995) 488; Geophys. J. Int. 131 (1997) 643], and Kelly and Gubbins [Geophys. J. Int. 128 (1997) 315], but we determined the time-averaged field (TAF) and the paleosecular variation (PSV) simultaneously. As pointed out in our previous work [Earth Planet. Space 53 (2001) 31], the observed mean field directions are affected by the fluctuation of the field, as described by the PSV model. This effect is not excessively large, but cannot be neglected when considering the mean field. We propose that the new TAF+PSV model is a better representation of the ancient magnetic field, since both the average and the fluctuation of the field are consistently explained. In the inversion procedure, we used direction cosines instead of inclinations and declinations, as the latter quantities show singularity or unstable behavior at high latitudes. The obtained model gives a reasonably good fit to the observed means and variances of direction cosines. In the TAF model, the geocentric axial dipole term (g10) is the dominant component; it is much more pronounced than in the present magnetic field. The equatorial dipole component is quite small after averaging over time. The model shows a very smooth spatial variation; the nondipole components also seem to be averaged out quite effectively over time. Among the other coefficients, the geocentric axial quadrupole term (g20) is significantly larger than the other components. On the other hand, the axial octupole term (g30) is much smaller than in a TAF model excluding the PSV effect. It is likely that the effect of PSV is most clearly seen in this term, which is consistent with the conclusion reached in our previous work. The PSV model shows a large variance of the (2,1) component, which is in good agreement with previous PSV models obtained by forward approaches. It is also indicated that the variance of the axial dipole term is very small. This is in conflict with studies based on paleointensity data, but we show that this conclusion is not inconsistent with the paleointensity data, because a substantial part of the apparent scatter in paleointensities may be attributable to effects other than fluctuations in g10 itself.

  15. On Certain New Methodology for Reducing Sensor and Readout Electronics Circuitry Noise in Digital Domain

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Miko, Joseph; Bradley, Damon; Heinzen, Katherine

    2008-01-01

    NASA's Hubble Space Telescope (HST) and upcoming cosmology science missions carry instruments with multiple focal planes populated with many large sensor detector arrays. These sensors are passively cooled to low temperatures for low-level light (L3) and near-infrared (NIR) signal detection, and the sensor readout electronics circuitry must perform at extremely low noise levels to enable new required science measurements. Because we are at the technological edge of enhanced performance for sensors and readout electronics circuitry, as determined by the thermal noise level at a given temperature in the analog domain, we must find new ways of further compensating for the noise in the digital signal domain. To facilitate this new approach, state-of-the-art sensors are augmented at their array hardware boundaries by non-illuminated reference pixels, which can be used to reduce noise attributed to the sensors. A few methodologies have been proposed for processing the information carried by reference pixels in the digital domain, as employed by the Hubble Space Telescope and James Webb Space Telescope projects. These methods use spatial and temporal statistical parameters derived from boundary reference pixel information to enhance the active (non-reference) pixel signals. To move beyond this heritage methodology, we apply the NASA-developed technology known as the Hilbert-Huang Transform Data Processing System (HHT-DPS) to reference pixel information processing, for use in reconfigurable hardware on board a spaceflight instrument or in post-processing on the ground. The methodology examines signal processing for a 2-D domain in which high-variance components of the thermal noise are carried by both active and reference pixels, similar to the processing of low-voltage differential signals and the subtraction of a single analog reference pixel from all active pixels on the sensor. Heritage methods using the aforementioned statistical parameters in the digital domain (such as statistical averaging of the reference pixels themselves) zero out the high-variance components, and the counterpart components in the active pixels remain uncorrected. This paper describes how the new methodology was demonstrated through analysis of fast-varying noise components using the HHT-DPS developed at NASA and the high-level programming language MATLAB (trademark of MathWorks Inc.), as well as alternative methods for correcting for the high-variance noise component, using HgCdTe sensor data. NASA Hubble Space Telescope data post-processing, as well as on-board processing of data from all sensor channels in future deep-space cosmology projects, would benefit from this effort.

  16. [Theory, method and application of method R on estimation of (co)variance components].

    PubMed

    Liu, Wen-Zhong

    2004-07-01

    The theory, method and application of Method R for the estimation of (co)variance components were reviewed so that the method can be used appropriately. Estimation requires R values, which are regressions of predicted random effects calculated from the complete dataset on predicted random effects calculated from random subsets of the same data. By using a multivariate iteration algorithm based on a transformation matrix, combined with the preconditioned conjugate gradient method to solve the mixed model equations, the computational efficiency of Method R is much improved. Method R is computationally inexpensive, and the sampling errors and approximate credible intervals of the estimates can be obtained. Disadvantages of Method R include a larger sampling variance than other methods for the same data, and biased estimates in small datasets. As an alternative method, Method R can be used on larger datasets. It is necessary to study its theoretical properties further and to broaden its application range.

  17. Genetic Analysis of Growth Traits in Polled Nellore Cattle Raised on Pasture in Tropical Region Using Bayesian Approaches

    PubMed Central

    Lopes, Fernando Brito; Magnabosco, Cláudio Ulhôa; Paulini, Fernanda; da Silva, Marcelo Corrêa; Miyagi, Eliane Sayuri; Lôbo, Raysildo Barbosa

    2013-01-01

    Components of (co)variance and genetic parameters were estimated for adjusted weights at ages 120 (W120), 240 (W240), 365 (W365) and 450 (W450) days for Polled Nellore cattle raised on pasture and born between 1987 and 2010. Analyses were performed using an animal model, considering as fixed effects herd-year-season of birth and calf sex as contemporary groups, with the age of the cow as a covariate. Gibbs sampling was used to estimate (co)variance components, genetic parameters and additive genetic effects, which accounted for a great proportion of the total variation in these traits. High direct heritability estimates for the growth traits were revealed, with means of 0.43, 0.61, 0.72 and 0.67 for W120, W240, W365 and W450, respectively. Maternal heritabilities were 0.07 and 0.08 for W120 and W240, respectively. Direct additive genetic correlations between the weights at 120, 240, 365 and 450 days old were strong and positive, ranging from 0.68 to 0.98. Direct-maternal genetic correlations were negative for W120 and W240, with estimates ranging from −0.31 to −0.54. Estimates of maternal heritability ranged from 0.056 to 0.092 for W120 and from 0.064 to 0.096 for W240. This study showed that genetic progress is possible for the growth traits we studied, which is a novel and favorable indicator for an upcoming and promising Polled Zebu breed in tropical regions. Maternal effects influenced the performance of weight at 120 and 240 days old. These effects should be taken into account in genetic analyses of growth traits by fitting them as a genetic or a permanent environmental effect, or both. In general, given the medium-to-high estimates of environmental (co)variance components, management and feeding conditions for Polled Nellore raised on pasture in tropical regions of Brazil need improvement, and growth performance can be enhanced. PMID:24040412

  18. Capability and Development Risk Management in System-of-Systems Architectures: A Portfolio Approach to Decision-Making

    DTIC Science & Technology

    2012-04-30

    tool that provides a means of balancing capability development against cost and interdependent risks through the use of modern portfolio theory ... (Focardi, 2007; Tutuncu & Cornuejols, 2007) that are extensions of modern portfolio and control theory. The reformulation allows for possible changes ... Acquisition: Wave Model context • An Investment Portfolio Approach – Mean-Variance Approach – Mean-Variance: A Robust Version • Concept

  19. Optical phase-locked loop (OPLL) for free-space laser communications with heterodyne detection

    NASA Technical Reports Server (NTRS)

    Win, Moe Z.; Chen, Chien-Chung; Scholtz, Robert A.

    1991-01-01

    Several advantages of coherent free-space optical communications are outlined. A theoretical analysis is formulated for an OPLL disturbed by shot noise, modulation noise, and frequency noise consisting of a white component, a 1/f component, and a 1/f-squared component. Each of the noise components is characterized by its associated power spectral density. It is shown that the effect of modulation depends only on the ratio of loop bandwidth to data rate, and is negligible for an OPLL with a loop bandwidth smaller than one-fourth the data rate. The total phase error variance as a function of loop bandwidth is displayed for several values of carrier signal-to-noise ratio. The optimal loop bandwidth is also calculated as a function of carrier signal-to-noise ratio. An OPLL experiment is performed, where it is shown that the measured phase error variance closely matches the theoretical predictions.

  20. Assessing factorial invariance of two-way rating designs using three-way methods

    PubMed Central

    Kroonenberg, Pieter M.

    2015-01-01

    Assessing the factorial invariance of two-way rating designs such as ratings of concepts on several scales by different groups can be carried out with three-way models such as the Parafac and Tucker models. By their definitions these models are double-metric factorially invariant. The differences between these models lie in their handling of the links between the concept and scale spaces. These links may consist of unrestricted linking (Tucker2 model), invariant component covariances but variable variances per group and per component (Parafac model), zero covariances and variances different per group but not per component (Replicated Tucker3 model) and strict invariance (Component analysis on the average matrix). This hierarchy of invariant models, and the procedures by which to evaluate the models against each other, is illustrated in some detail with an international data set from attachment theory. PMID:25620936

  1. Application of principal component analysis (PCA) as a sensory assessment tool for fermented food products.

    PubMed

    Ghosh, Debasree; Chattopadhyay, Parimal

    2012-06-01

    The objective of the work was to use the method of quantitative descriptive analysis (QDA) to describe the sensory attributes of fermented food products prepared with the incorporation of lactic cultures. Panellists were selected and trained to evaluate various attributes, especially color and appearance, body texture, flavor, overall acceptability and acidity, of fermented food products such as cow milk curd and soymilk curd, idli, sauerkraut and probiotic ice cream. Principal component analysis (PCA) identified six significant principal components that accounted for more than 90% of the variance in the sensory attribute data. Overall product quality was modelled as a function of the principal components using multiple least squares regression (R² = 0.8). The results from PCA were statistically analyzed by analysis of variance (ANOVA). These findings demonstrate the utility of quantitative descriptive analysis for identifying and measuring the fermented food product attributes that are important for consumer acceptability.
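
    The dimension-reduction step in such a study reduces to fitting a PCA and keeping enough components to pass a cumulative-variance threshold. A minimal sketch with simulated panel scores (the attribute count, sample size and latent structure are assumptions, not the study's data):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Hypothetical panel data: 40 samples scored on 8 sensory attributes (1-9 scale),
# driven by two latent factors (e.g., "freshness" and "texture") plus noise.
latent = rng.normal(size=(40, 2))
loadings = rng.normal(size=(2, 8))
scores = 5 + latent @ loadings + 0.3 * rng.normal(size=(40, 8))

# Standardise attributes first so no single scale dominates the components.
pca = PCA().fit(StandardScaler().fit_transform(scores))
cum = np.cumsum(pca.explained_variance_ratio_)
k = int(np.searchsorted(cum, 0.90)) + 1   # smallest k covering >= 90% of variance
print("cumulative explained variance:", np.round(cum, 3))
print(f"retain {k} components to cover >= 90% of the variance")
```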

  2. Improved classification accuracy in 1- and 2-dimensional NMR metabolomics data using the variance stabilising generalised logarithm transformation

    PubMed Central

    Parsons, Helen M; Ludwig, Christian; Günther, Ulrich L; Viant, Mark R

    2007-01-01

    Background: Classifying nuclear magnetic resonance (NMR) spectra is a crucial step in many metabolomics experiments. Since several multivariate classification techniques depend upon the variance of the data, it is important to first minimise any contribution from unwanted technical variance arising from sample preparation and analytical measurements, and thereby maximise any contribution from wanted biological variance between different classes. The generalised logarithm (glog) transform was developed to stabilise the variance in DNA microarray datasets, but has rarely been applied to metabolomics data. In particular, it has not been rigorously evaluated against other scaling techniques used in metabolomics, nor tested on all forms of NMR spectra including 1-dimensional (1D) ¹H spectra, 1D projections of 2D ¹H J-resolved spectra (pJRES), and intact 2D J-resolved spectra (JRES). Results: Here, the effects of the glog transform are compared against two commonly used variance stabilising techniques, autoscaling and Pareto scaling, as well as unscaled data. The four methods are evaluated in terms of their effects on the variance of NMR metabolomics data and on the classification accuracy following multivariate analysis, the latter achieved using principal component analysis followed by linear discriminant analysis. For two of the three datasets analysed, classification accuracies were highest following glog transformation: 100% accuracy for discriminating 1D NMR spectra of hypoxic and normoxic invertebrate muscle, and 100% accuracy for discriminating 2D JRES spectra of fish livers sampled from two rivers. For the third dataset, pJRES spectra of urine from two breeds of dog, the glog transform and autoscaling achieved equal highest accuracies. Additionally we extended the glog algorithm to effectively suppress noise, which proved critical for the analysis of 2D JRES spectra. Conclusion: We have demonstrated that the glog and extended glog transforms stabilise the technical variance in NMR metabolomics datasets. This significantly improves the discrimination between sample classes and has resulted in higher classification accuracies compared to unscaled, autoscaled or Pareto scaled data. Additionally we have confirmed the broad applicability of the glog approach using three disparate datasets from different biological samples using 1D NMR spectra, 1D projections of 2D JRES spectra, and intact 2D JRES spectra. PMID:17605789
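
    The glog transform itself is compact enough to state directly. A minimal sketch follows, with autoscaling and Pareto scaling included for comparison; in practice the transform parameter λ is optimised against replicate technical variation, which is omitted here:

```python
import numpy as np

def glog(y, lam=1.0):
    """Generalised logarithm: behaves like log(y) for y >> sqrt(lam), but stays
    finite and roughly linear near zero, stabilising the variance of small
    intensities. lam is normally estimated from technical replicates."""
    return np.log((y + np.sqrt(y**2 + lam)) / 2.0)

def autoscale(x):
    # Unit variance per variable: inflates noise-only variables.
    return (x - x.mean(axis=0)) / x.std(axis=0)

def pareto(x):
    # Intermediate scaling: divide by the square root of the std.
    return (x - x.mean(axis=0)) / np.sqrt(x.std(axis=0))

# For large y, glog(y) ~ log(y); at y = 0 it equals log(sqrt(lam)/2), so small
# or zero intensities no longer explode the way log(y) would.
y = np.array([0.0, 0.1, 1.0, 10.0, 1000.0])
print(np.round(glog(y, lam=1.0), 3))
```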

  3. Variance partitioning of stream diatom, fish, and invertebrate indicators of biological condition

    USGS Publications Warehouse

    Zuellig, Robert E.; Carlisle, Daren M.; Meador, Michael R.; Potapova, Marina

    2012-01-01

    Stream indicators used to make assessments of biological condition are influenced by many possible sources of variability. To examine this issue, we used multiple-year and multiple-reach diatom, fish, and invertebrate data collected from 20 least-disturbed and 46 developed stream segments between 1993 and 2004 as part of the US Geological Survey National Water Quality Assessment Program. We used a variance-component model to summarize the relative and absolute magnitude of 4 variance components (among-site, among-year, site × year interaction, and residual) in indicator values (observed/expected ratio [O/E] and regional multimetric indices [MMI]) among assemblages and between basin types (least-disturbed and developed). We used multiple-reach samples to evaluate discordance in site assessments of biological condition caused by sampling variability. Overall, patterns in variance partitioning were similar among assemblages and basin types with one exception. Among-site variance dominated the relative contribution to the total variance (64–80% of total variance), residual variance (sampling variance) accounted for more variability (8–26%) than interaction variance (5–12%), and among-year variance was always negligible (0–0.2%). The exception to this general pattern was for invertebrates at least-disturbed sites where variability in O/E indicators was partitioned between among-site and residual (sampling) variance (among-site = 36%, residual = 64%). This pattern was not observed for fish and diatom indicators (O/E and regional MMI). We suspect that unexplained sampling variability is what largely remained after the invertebrate indicators (O/E predictive models) had accounted for environmental differences among least-disturbed sites. The influence of sampling variability on discordance of within-site assessments was assemblage or basin-type specific. Discordance among assessments was nearly 2× greater in developed basins (29–31%) than in least-disturbed sites (15–16%) for invertebrates and diatoms, whereas discordance among assessments based on fish did not differ between basin types (least-disturbed = 16%, developed = 17%). Assessments made using invertebrate and diatom indicators from a single reach disagreed with other samples collected within the same stream segment nearly ⅓ of the time in developed basins, compared to ⅙ for all other cases.
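
    For a balanced design, the four-way split reported here (among-site, among-year, site × year, residual) can be recovered by method of moments from the ANOVA mean squares. The sketch below simulates a balanced layout with assumed variance components and inverts the expected mean squares; real monitoring data are unbalanced and would need REML or a Bayesian fit instead:

```python
import numpy as np

rng = np.random.default_rng(7)

# Balanced layout: A sites x B years x n replicate reaches (hypothetical sizes).
A, B, n = 20, 4, 2
v_site, v_year, v_int, v_res = 64.0, 0.1, 8.0, 16.0
site = rng.normal(0, np.sqrt(v_site), A)[:, None, None]
year = rng.normal(0, np.sqrt(v_year), B)[None, :, None]
inter = rng.normal(0, np.sqrt(v_int), (A, B))[:, :, None]
y = 50 + site + year + inter + rng.normal(0, np.sqrt(v_res), (A, B, n))

# Mean squares for the fully random two-way model with replication
grand = y.mean()
msa = B * n * ((y.mean(axis=(1, 2)) - grand) ** 2).sum() / (A - 1)
msb = A * n * ((y.mean(axis=(0, 2)) - grand) ** 2).sum() / (B - 1)
cell = y.mean(axis=2)
msab = n * ((cell - y.mean(axis=(1, 2))[:, None]
             - y.mean(axis=(0, 2))[None, :] + grand) ** 2).sum() / ((A - 1) * (B - 1))
mse = ((y - cell[:, :, None]) ** 2).sum() / (A * B * (n - 1))

# Method-of-moments solutions from the expected mean squares
s2_res = mse
s2_int = max((msab - mse) / n, 0.0)
s2_year = max((msb - msab) / (A * n), 0.0)
s2_site = max((msa - msab) / (B * n), 0.0)
total = s2_site + s2_year + s2_int + s2_res
for name, v in [("site", s2_site), ("year", s2_year),
                ("site x year", s2_int), ("residual", s2_res)]:
    print(f"{name:12s} {v:7.2f}  ({100 * v / total:4.1f}% of total)")
```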

  4. Quantifying the Influence of Dynamics Across Scales on Regional Climate Uncertainty in Western North America

    NASA Astrophysics Data System (ADS)

    Goldenson, Naomi L.

    Uncertainties in climate projections at the regional scale are inevitably larger than those for global mean quantities. Here, focusing on western North American regional climate, several approaches are taken to quantifying uncertainties starting with the output of global climate model projections. Internal variance is found to be an important component of the projection uncertainty up and down the west coast. To quantify internal variance and other projection uncertainties in existing climate models, we evaluate different ensemble configurations. Using a statistical framework to simultaneously account for multiple sources of uncertainty, we find internal variability can be quantified consistently using a large ensemble or an ensemble of opportunity that includes small ensembles from multiple models and climate scenarios. The latter offers the advantage of also producing estimates of uncertainty due to model differences. We conclude that climate projection uncertainties are best assessed using small single-model ensembles from as many model-scenario pairings as computationally feasible. We then conduct a small single-model ensemble of simulations using the Model for Prediction Across Scales with physics from the Community Atmosphere Model Version 5 (MPAS-CAM5) and prescribed historical sea surface temperatures. In the global variable-resolution domain, the finest resolution (at 30 km) is in our region of interest over western North America and upwind over the northeast Pacific. In the finer-scale region, extreme precipitation from atmospheric rivers (ARs) is connected to tendencies in seasonal snowpack in the mountains of the Northwest United States and California. In most of the Cascade Mountains, winters with more AR days are associated with less snowpack, in contrast to the northern Rockies and California's Sierra Nevada. In snowpack observations and reanalysis of the atmospheric circulation, we find similar relationships between the frequency of AR events and winter-season snowpack in the western United States. In spring, however, there is not a clear relationship between the number of AR days and seasonal mean snowpack across the model ensemble, so caution is urged in interpreting the historical record in the spring season. Finally, the representation of the El Niño-Southern Oscillation (ENSO), an important source of interannual climate predictability in some regions, is explored in a large single-model ensemble using ensemble Empirical Orthogonal Functions (EOFs) to find modes of variance across the entire ensemble at once. The leading EOF is ENSO. The principal components (PCs) of the next three EOFs exhibit a lead-lag relationship with the ENSO signal captured in the first PC. The second PC, with most of its variance in the summer season, is the most strongly cross-correlated with the first. This approach offers insight into how the model represents this important atmosphere-ocean interaction. Taken together, these varied approaches quantify the implications of climate projections regionally, identify processes that make snowpack water resources vulnerable, and seek insight into how to better simulate the large-scale climate modes controlling regional variability.

  5. Attitudes toward and approaches to learning first-year university mathematics.

    PubMed

    Alkhateeb, Haitham M; Hammoudi, Lakhdar

    2006-08-01

    This study examined, for 180 undergraduate students enrolled in a first-year university calculus course, the relationship between attitudes toward mathematics and approaches to learning mathematics, using the Mathematics Attitude Scale and the Approaches to Learning Mathematics Questionnaire, respectively. Regression analyses indicated that scores on the Mathematics Attitude Scale were negatively related to scores on the Surface Approach, accounting for 10.4% of the variance, and positively related to scores on the Deep Approach to learning mathematics, accounting for 31.7% of the variance.

  6. Utility functions predict variance and skewness risk preferences in monkeys

    PubMed Central

    Genest, Wilfried; Stauffer, William R.; Schultz, Wolfram

    2016-01-01

    Utility is the fundamental variable thought to underlie economic choices. In particular, utility functions are believed to reflect preferences toward risk, a key decision variable in many real-life situations. To assess the validity of utility representations, it is therefore important to examine risk preferences. In turn, this approach requires formal definitions of risk. A standard approach is to focus on the variance of reward distributions (variance-risk). In this study, we also examined a form of risk related to the skewness of reward distributions (skewness-risk). Thus, we tested the extent to which empirically derived utility functions predicted preferences for variance-risk and skewness-risk in macaques. The expected utilities calculated for various symmetrical and skewed gambles served to define formally the direction of stochastic dominance between gambles. In direct choices, the animals’ preferences followed both second-order (variance) and third-order (skewness) stochastic dominance. Specifically, for gambles with different variance but identical expected values (EVs), the monkeys preferred high-variance gambles at low EVs and low-variance gambles at high EVs; in gambles with different skewness but identical EVs and variances, the animals preferred positively over symmetrical and negatively skewed gambles in a strongly transitive fashion. Thus, the utility functions predicted the animals’ preferences for variance-risk and skewness-risk. Using these well-defined forms of risk, this study shows that monkeys’ choices conform to the internal reward valuations suggested by their utility functions. This result implies a representation of utility in monkeys that accounts for both variance-risk and skewness-risk preferences. PMID:27402743
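
    The core logic, checking whether preferences implied by a utility function agree with choices between gambles of equal expected value, fits in a few lines. The gambles and utility functions below are illustrative stand-ins, not the reward distributions used with the animals:

```python
import numpy as np

# Two gambles with identical EV (0.5) but different variance (hypothetical
# reward volumes, outcome: probability).
low_var = {0.4: 0.5, 0.6: 0.5}
high_var = {0.2: 0.5, 0.8: 0.5}

def expected_utility(gamble, u):
    return sum(p * u(x) for x, p in gamble.items())

# A concave (risk-averse) and a convex (risk-seeking) utility over this range
concave = lambda x: np.sqrt(x)
convex = lambda x: x**2

for name, u in [("concave", concave), ("convex", convex)]:
    eu_lo = expected_utility(low_var, u)
    eu_hi = expected_utility(high_var, u)
    pref = "low-variance" if eu_lo > eu_hi else "high-variance"
    print(f"{name} utility prefers the {pref} gamble")

# Second-order stochastic dominance: for a mean-preserving spread, the
# low-variance gamble dominates, so *every* concave utility should prefer it;
# a locally convex utility segment flips the preference, as printed above.
```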

  8. Asymptotic Effect of Misspecification in the Random Part of the Multilevel Model

    ERIC Educational Resources Information Center

    Berkhof, Johannes; Kampen, Jarl Kennard

    2004-01-01

    The authors examine the asymptotic effect of omitting a random coefficient in the multilevel model and derive expressions for the change in (a) the variance components estimator and (b) the estimated variance of the fixed effects estimator. They apply the method of moments, which yields a closed form expression for the omission effect. In…

  9. The Dissociation of Word Reading and Text Comprehension: Evidence from Component Skills.

    ERIC Educational Resources Information Center

    Oakhill, J. V.; Cain, K.; Bryant, P. E.

    2003-01-01

    Discusses the relative contribution of several theoretically relevant skills and abilities in accounting for variance in both word reading and text comprehension. Data is presented from two waves of a longitudinal study. Shows there is a dissociation between the skills and abilities that account for variance in word reading, and those that account…

  10. Constructing inverse probability weights for continuous exposures: a comparison of methods.

    PubMed

    Naimi, Ashley I; Moodie, Erica E M; Auger, Nathalie; Kaufman, Jay S

    2014-03-01

    Inverse probability-weighted marginal structural models with binary exposures are common in epidemiology. Constructing inverse probability weights for a continuous exposure can be complicated by the presence of outliers, and the need to identify a parametric form for the exposure and account for nonconstant exposure variance. We explored the performance of various methods to construct inverse probability weights for continuous exposures using Monte Carlo simulation. We generated two continuous exposures and binary outcomes using data sampled from a large empirical cohort. The first exposure followed a normal distribution with homoscedastic variance. The second exposure followed a contaminated Poisson distribution, with heteroscedastic variance equal to the conditional mean. We assessed six methods to construct inverse probability weights using: a normal distribution, a normal distribution with heteroscedastic variance, a truncated normal distribution with heteroscedastic variance, a gamma distribution, a t distribution (1, 3, and 5 degrees of freedom), and a quantile binning approach (based on 10, 15, and 20 exposure categories). We estimated the marginal odds ratio for a single-unit increase in each simulated exposure in a regression model weighted by the inverse probability weights constructed using each approach, and then computed the bias and mean squared error for each method. For the homoscedastic exposure, the standard normal, gamma, and quantile binning approaches performed best. For the heteroscedastic exposure, the quantile binning, gamma, and heteroscedastic normal approaches performed best. Our results suggest that the quantile binning approach is a simple and versatile way to construct inverse probability weights for continuous exposures.
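
    As one concrete reading of two of the compared approaches, the sketch below builds stabilized weights for a simulated continuous exposure using (a) normal densities from a linear exposure model and (b) decile binning with a multinomial model. The data-generating process and model choices are illustrative assumptions, not the paper's simulation design:

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                        # a single confounder
x = 0.5 * z + rng.normal(size=n)              # continuous exposure

# (a) Stabilized weights from normal densities: marginal density over
#     conditional density, with E[x|z] from a simple linear fit.
num = stats.norm.pdf(x, loc=x.mean(), scale=x.std())
coef = np.polyfit(z, x, 1)
resid = x - np.polyval(coef, z)
den = stats.norm.pdf(x, loc=np.polyval(coef, z), scale=resid.std())
sw_normal = num / den

# (b) Quantile binning: discretise the exposure into deciles and model
#     P(bin | z) with a multinomial logistic regression.
cuts = np.quantile(x, np.linspace(0, 1, 11)[1:-1])
xb = np.digitize(x, cuts)                     # decile index 0..9
mlr = LogisticRegression(max_iter=1000).fit(z.reshape(-1, 1), xb)
p_den = mlr.predict_proba(z.reshape(-1, 1))[np.arange(n), xb]
p_num = np.bincount(xb, minlength=10) / n
sw_bin = p_num[xb] / p_den

for name, w in [("normal-density", sw_normal), ("quantile-bin", sw_bin)]:
    print(f"{name:15s} weights: mean {w.mean():.2f}, 99th pct {np.quantile(w, 0.99):.2f}")
# Well-behaved stabilized weights should have mean ~1 and no extreme tails; the
# binned weights are typically better bounded when the exposure density is
# misspecified, at the cost of coarsening the exposure.
```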

  11. Dominance genetic variance for traits under directional selection in Drosophila serrata.

    PubMed

    Sztepanacz, Jacqueline L; Blows, Mark W

    2015-05-01

    In contrast to our growing understanding of patterns of additive genetic variance in single- and multi-trait combinations, the relative contribution of nonadditive genetic variance, particularly dominance variance, to multivariate phenotypes is largely unknown. While mechanisms for the evolution of dominance genetic variance have been, and to some degree remain, subject to debate, the pervasiveness of dominance is widely recognized and may play a key role in several evolutionary processes. Theoretical and empirical evidence suggests that the contribution of dominance variance to phenotypic variance may increase with the correlation between a trait and fitness; however, direct tests of this hypothesis are few. Using a multigenerational breeding design in an unmanipulated population of Drosophila serrata, we estimated additive and dominance genetic covariance matrices for multivariate wing-shape phenotypes, together with a comprehensive measure of fitness, to determine whether there is an association between directional selection and dominance variance. Fitness, a trait unequivocally under directional selection, had no detectable additive genetic variance, but significant dominance genetic variance contributing 32% of the phenotypic variance. For single and multivariate morphological traits, however, no relationship was observed between trait-fitness correlations and dominance variance. A similar proportion of additive and dominance variance was found to contribute to phenotypic variance for single traits, and double the amount of additive compared to dominance variance was found for the multivariate trait combination under directional selection. These data suggest that for many fitness components a positive association between directional selection and dominance genetic variance may not be expected. Copyright © 2015 by the Genetics Society of America.

  12. The influence of iliotibial band syndrome history on running biomechanics examined via principal components analysis.

    PubMed

    Foch, Eric; Milner, Clare E

    2014-01-03

    Iliotibial band syndrome (ITBS) is a common knee overuse injury among female runners. Atypical discrete trunk and lower extremity biomechanics during running may be associated with the etiology of ITBS. Examining discrete data points limits the interpretation of a waveform to a single value. Characterizing entire kinematic and kinetic waveforms may provide additional insight into biomechanical factors associated with ITBS. Therefore, the purpose of this cross-sectional investigation was to determine whether female runners with previous ITBS exhibited differences in kinematics and kinetics compared to controls using a principal components analysis (PCA) approach. Forty participants comprised two groups: previous ITBS and controls. Principal component scores were retained for the first three principal components and were analyzed using independent t-tests. The retained principal components accounted for 93-99% of the total variance within each waveform. Runners with previous ITBS exhibited low principal component one scores for frontal plane hip angle. Principal component one accounted for the overall magnitude in hip adduction which indicated that runners with previous ITBS assumed less hip adduction throughout stance. No differences in the remaining retained principal component scores for the waveforms were detected among groups. A smaller hip adduction angle throughout the stance phase of running may be a compensatory strategy to limit iliotibial band strain. This running strategy may have persisted after ITBS symptoms subsided. © 2013 Published by Elsevier Ltd.

  13. Voltage controlling mechanisms in low resistivity silicon solar cells: A unified approach

    NASA Technical Reports Server (NTRS)

    Weizer, V. G.; Swartz, C. K.; Hart, R. E.; Godlewski, M. P.

    1984-01-01

    An experimental technique capable of resolving the dark saturation current into its base and emitter components is used as the basis of an analysis in which the voltage limiting mechanisms were determined for a variety of high voltage, low resistivity silicon solar cells. The cells studied include the University of Florida hi-low emitter cell, the NASA and the COMSAT multi-step diffused cells, the Spire Corporation ion-implanted emitter cell, and the University of New South Wales MINMIS and MINP cells. The results proved to be, in general, at variance with prior expectations. Most surprising was the finding that the MINP and the MINMIS voltage improvements are due, to a considerable extent, to a previously unrecognized optimization of the base component of the saturation current. This result is substantiated by an independent analysis of the material used to fabricate these devices.

  15. The infinitesimal model: Definition, derivation, and implications.

    PubMed

    Barton, N H; Etheridge, A M; Véber, A

    2017-12-01

    Our focus here is on the infinitesimal model. In this model, one or several quantitative traits are described as the sum of a genetic and a non-genetic component, the first being distributed within families as a normal random variable centred at the average of the parental genetic components, and with a variance independent of the parental traits. Thus, the variance that segregates within families is not perturbed by selection, and can be predicted from the variance components. This does not necessarily imply that the trait distribution across the whole population should be Gaussian, and indeed selection or population structure may have a substantial effect on the overall trait distribution. One of our main aims is to identify some general conditions on the allelic effects for the infinitesimal model to be accurate. We first review the long history of the infinitesimal model in quantitative genetics. Then we formulate the model at the phenotypic level in terms of individual trait values and relationships between individuals, but including different evolutionary processes: genetic drift, recombination, selection, mutation, population structure, …. We give a range of examples of its application to evolutionary questions related to stabilising selection, assortative mating, effective population size and response to selection, habitat preference and speciation. We provide a mathematical justification of the model as the limit as the number M of underlying loci tends to infinity of a model with Mendelian inheritance, mutation and environmental noise, when the genetic component of the trait is purely additive. We also show how the model generalises to include epistatic effects. We prove in particular that, within each family, the genetic components of the individual trait values in the current generation are indeed normally distributed with a variance independent of ancestral traits, up to an error of order 1∕M. Simulations suggest that in some cases the convergence may be as fast as 1∕M. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
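
    The model's central claim, that within-family segregation variance is untouched by selection on the parents, is easy to see in simulation. A minimal sketch under assumed values (population size, trait variance, truncation fraction), ignoring inbreeding, which would slowly erode the segregation variance:

```python
import numpy as np

rng = np.random.default_rng(3)

# Infinitesimal model: offspring genetic value = mid-parent value plus
# Mendelian segregation noise with variance V0/2, independent of the parents.
N, V0 = 10000, 1.0
g = rng.normal(0.0, np.sqrt(V0), N)

for gen in range(1, 6):
    parents = np.sort(g)[int(0.8 * N):]          # truncation selection: top 20%
    mothers = rng.choice(parents, N)
    fathers = rng.choice(parents, N)
    g = (mothers + fathers) / 2 + rng.normal(0.0, np.sqrt(V0 / 2), N)
    print(f"gen {gen}: mean {g.mean():5.2f}, population var {g.var():4.2f}, "
          f"segregation var (fixed by the model) {V0 / 2:4.2f}")
```

    Across generations the mean rises and the between-family variance shrinks under selection, while the within-family component stays at V0/2, which is exactly the property the abstract emphasises.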

  16. Joint variability of global runoff and global sea surface temperatures

    USGS Publications Warehouse

    McCabe, G.J.; Wolock, D.M.

    2008-01-01

    Global land surface runoff and sea surface temperatures (SST) are analyzed to identify the primary modes of variability of these hydroclimatic data for the period 1905-2002. A monthly water-balance model first is used with global monthly temperature and precipitation data to compute time series of annual gridded runoff for the analysis period. The annual runoff time series data are combined with gridded annual sea surface temperature data, and the combined dataset is subjected to a principal components analysis (PCA) to identify the primary modes of variability. The first three components from the PCA explain 29% of the total variability in the combined runoff/SST dataset. The first component explains 15% of the total variance and primarily represents long-term trends in the data. The long-term trends in SSTs are evident as warming in all of the oceans. The associated long-term trends in runoff suggest increasing flows for parts of North America, South America, Eurasia, and Australia; decreasing runoff is most notable in western Africa. The second principal component explains 9% of the total variance and reflects variability of the El Niño-Southern Oscillation (ENSO) and its associated influence on global annual runoff patterns. The third component explains 5% of the total variance and indicates a response of global annual runoff to variability in North Atlantic SSTs. The association between runoff and North Atlantic SSTs may explain an apparent steplike change in runoff that occurred around 1970 for a number of continental regions.

  17. Correlation between Relatives given Complete Genotypes: from Identity by Descent to Identity by Function

    PubMed Central

    Sverdlov, Serge; Thompson, Elizabeth A.

    2013-01-01

    In classical quantitative genetics, the correlation between the phenotypes of individuals with unknown genotypes and a known pedigree relationship is expressed in terms of probabilities of IBD states. In existing approaches to the inverse problem, where genotypes are observed but pedigree relationships are not, dependence between phenotypes is either modeled as Bayesian uncertainty or mapped to an IBD model via inferred relatedness parameters. Neither approach yields a relationship between genotypic similarity and phenotypic similarity with a probabilistic interpretation corresponding to a generative model. We introduce a generative model for diploid allele effects based on the classic infinite-alleles mutation process. This approach motivates the concept of IBF (Identity by Function). The phenotypic covariance between two individuals given their diploid genotypes is expressed in terms of functional identity states. The IBF parameters define a genetic architecture for a trait without reference to specific alleles or populations. Given full genome sequences, we treat a gene-scale functional region, rather than a SNP, as a QTL, modeling patterns of dominance for multiple alleles. Applications demonstrated by simulation include phenotype and effect prediction and association, and estimation of heritability and classical variance components. A simulation case study of the Missing Heritability problem illustrates a decomposition of heritability under the IBF framework into Explained and Unexplained components. PMID:23851163

  18. Combined proportional and additive residual error models in population pharmacokinetic modelling.

    PubMed

    Proost, Johannes H

    2017-11-15

    In pharmacokinetic modelling, a combined proportional and additive residual error model is often preferred over a proportional or additive residual error model. Different approaches have been proposed, but a comparison between approaches is still lacking. The theoretical background of the methods is described. Method VAR assumes that the variance of the residual error is the sum of the statistically independent proportional and additive components; this method can be coded in three ways. Method SD assumes that the standard deviation of the residual error is the sum of the proportional and additive components. Using datasets from literature and simulations based on these datasets, the methods are compared using NONMEM. The different coding of methods VAR yield identical results. Using method SD, the values of the parameters describing residual error are lower than for method VAR, but the values of the structural parameters and their inter-individual variability are hardly affected by the choice of the method. Both methods are valid approaches in combined proportional and additive residual error modelling, and selection may be based on OFV. When the result of an analysis is used for simulation purposes, it is essential that the simulation tool uses the same method as used during analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
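
    The two parameterisations differ only in how the proportional and additive pieces are combined, which a few lines make concrete (the prop and add magnitudes are illustrative, not estimates from the paper):

```python
import numpy as np

# f is the model prediction; 'prop' and 'add' are the proportional and
# additive error magnitudes (illustrative values).
f = np.linspace(1.0, 100.0, 5)
prop, add = 0.15, 2.0

sd_var = np.sqrt((prop * f) ** 2 + add ** 2)   # method VAR: variances add
sd_sd = prop * f + add                         # method SD: standard deviations add

print("prediction :", f)
print("method VAR :", np.round(sd_var, 2))
print("method SD  :", np.round(sd_sd, 2))
# Method SD always yields the larger sd for the same (prop, add), which is
# consistent with the abstract's observation that fitted residual-error
# parameters come out lower under method SD than under method VAR.
```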

  19. Estimating the Reliability of Single-Item Life Satisfaction Measures: Results from Four National Panel Studies

    ERIC Educational Resources Information Center

    Lucas, Richard E.; Donnellan, M. Brent

    2012-01-01

    Life satisfaction is often assessed using single-item measures. However, estimating the reliability of these measures can be difficult because internal consistency coefficients cannot be calculated. Existing approaches use longitudinal data to isolate occasion-specific variance from variance that is either completely stable or variance that…

  20. Spatial correlation of probabilistic earthquake ground motion and loss

    USGS Publications Warehouse

    Wesson, R.L.; Perkins, D.M.

    2001-01-01

    Spatial correlation of annual earthquake ground motions and losses can be used to estimate the variance of annual losses to a portfolio of properties exposed to earthquakes. A direct method is described for the calculation of the spatial correlation of earthquake ground motions and losses. Calculations for the direct method can be carried out using either numerical quadrature or a discrete, matrix-based approach. Numerical results for this method are compared with those calculated from a simple Monte Carlo simulation. Spatial correlation of ground motion and loss is induced by the systematic attenuation of ground motion with distance from the source, by common site conditions, and by the finite length of fault ruptures. Spatial correlation is also strongly dependent on the partitioning of the variability, given an event, into interevent and intraevent components. Intraevent variability reduces the spatial correlation of losses; interevent variability increases it. The higher the spatial correlation, the larger the variance in losses to a portfolio, and the more likely extreme values become. This result underscores the importance of accurately determining the relative magnitudes of intraevent and interevent variability in ground-motion studies, because of their strong impact on estimated earthquake losses to a portfolio. The direct method offers an alternative to simulation for calculating the variance of losses to a portfolio, which may reduce the amount of calculation required.
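
    The matrix form of the portfolio calculation is simple once a spatial correlation model is chosen. The sketch below is not the paper's direct method; it just evaluates Var(Σ losses) = Σᵢⱼ ρᵢⱼ σᵢ σⱼ under an assumed exponential correlation model, with a distance-independent floor standing in for interevent variability:

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical portfolio: 50 sites with per-site annual loss std sigma_i.
n = 50
xy = rng.uniform(0, 100, (n, 2))             # site coordinates (km)
sigma = rng.uniform(0.5, 2.0, n)             # per-site loss std (arbitrary units)
d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)

def portfolio_std(corr_len, rho_floor):
    """Portfolio loss std for exponentially decaying spatial correlation plus
    a distance-independent floor (a stand-in for interevent variability)."""
    rho = rho_floor + (1 - rho_floor) * np.exp(-d / corr_len)
    cov = rho * np.outer(sigma, sigma)       # Var(sum) = sum_ij rho_ij s_i s_j
    return np.sqrt(cov.sum())

print("independent sites      :", round(np.sqrt((sigma**2).sum()), 2))
print("intraevent only (20 km):", round(portfolio_std(20.0, 0.0), 2))
print("plus interevent floor  :", round(portfolio_std(20.0, 0.3), 2))
# Higher spatial correlation inflates the portfolio variance, making extreme
# aggregate losses more likely - the paper's central point.
```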

  1. Genetic parameters of legendre polynomials for first parity lactation curves.

    PubMed

    Pool, M H; Janss, L L; Meuwissen, T H

    2000-11-01

    Variance components of the covariance function coefficients in a random regression test-day model were estimated by Legendre polynomials up to a fifth order for first-parity records of Dutch dairy cows using Gibbs sampling. Two Legendre polynomials of equal order were used to model the random part of the lactation curve, one for the genetic component and one for permanent environment. Test-day records from cows registered between 1990 to 1996 and collected by regular milk recording were available. For the data set, 23,700 complete lactations were selected from 475 herds sired by 262 sires. Because the application of a random regression model is limited by computing capacity, we investigated the minimum order needed to fit the variance structure in the data sufficiently. Predictions of genetic and permanent environmental variance structures were compared with bivariate estimates on 30-d intervals. A third-order or higher polynomial modeled the shape of variance curves over DIM with sufficient accuracy for the genetic and permanent environment part. Also, the genetic correlation structure was fitted with sufficient accuracy by a third-order polynomial, but, for the permanent environmental component, a fourth order was needed. Because equal orders are suggested in the literature, a fourth-order Legendre polynomial is recommended in this study. However, a rank of three for the genetic covariance matrix and of four for permanent environment allows a simpler covariance function with a reduced number of parameters based on the eigenvalues and eigenvectors.
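
    The covariance-function machinery is easy to sketch: given a covariance matrix K of the random-regression coefficients, the genetic covariance between any two days is phi(t1)' K phi(t2). The K below and the use of unnormalised Legendre polynomials are simplifying assumptions (random-regression software typically uses normalised polynomials and higher orders):

```python
import numpy as np
from numpy.polynomial import legendre

# Days in milk standardised to [-1, 1], and a hypothetical 3x3 covariance
# matrix K of the genetic random-regression coefficients (order-2 polynomial).
dim = np.arange(5, 306, 30)
t = 2 * (dim - dim.min()) / (dim.max() - dim.min()) - 1
K = np.array([[4.0, 0.5, 0.1],
              [0.5, 1.0, 0.2],
              [0.1, 0.2, 0.3]])

# Design matrix of Legendre polynomials P0..P2 evaluated at each standardised day
phi = np.column_stack([legendre.legval(t, np.eye(3)[j]) for j in range(3)])

# Covariance function: Cov(g_t1, g_t2) = phi(t1)' K phi(t2)
G = phi @ K @ phi.T
var_curve = np.diag(G)                 # genetic variance along the lactation
corr = G / np.sqrt(np.outer(var_curve, var_curve))
print("genetic variance by DIM:", np.round(var_curve, 2))
print("corr(first, last test-day):", round(corr[0, -1], 2))
```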

  2. The influence of acceleration loading curve characteristics on traumatic brain injury.

    PubMed

    Post, Andrew; Blaine Hoshizaki, T; Gilchrist, Michael D; Brien, Susan; Cusimano, Michael D; Marshall, Shawn

    2014-03-21

    To prevent brain trauma, understanding the mechanism of injury is essential; once the mechanism has been identified, technologies can be developed to aid prevention. The incidence of brain injury is linked to how the kinematics of a brain injury event affect the internal structures of the brain, so it is essential to describe how the characteristics of linear and rotational acceleration influence specific traumatic brain injury lesions. The purpose of this study was therefore to examine how the characteristics of linear and rotational acceleration pulses account for the variance in predicting TBI lesions, namely contusion, subdural hematoma (SDH), subarachnoid hemorrhage (SAH), and epidural hematoma (EDH), using a principal components analysis (PCA). Monorail impacts were conducted to simulate the falls that caused the TBI lesions. From these reconstructions, the characteristics of the linear and rotational acceleration were determined and used for the PCA. The results indicated that peak resultant acceleration variables did not account for any of the variance in predicting TBI lesions. The majority of the variance was accounted for by the duration of the resultant and component linear and rotational accelerations, with the x-, y- and z-axis components of the acceleration characteristics accounting for most of the remainder. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Random Regression Models Using Legendre Polynomials to Estimate Genetic Parameters for Test-day Milk Protein Yields in Iranian Holstein Dairy Cattle.

    PubMed

    Naserkheil, Masoumeh; Miraie-Ashtiani, Seyed Reza; Nejati-Javaremi, Ardeshir; Son, Jihyun; Lee, Deukhwan

    2016-12-01

    The objective of this study was to estimate the genetic parameters of milk protein yields in Iranian Holstein dairy cattle. A total of 1,112,082 test-day milk protein yield records of 167,269 first lactation Holstein cows, calved from 1990 to 2010, were analyzed. Estimates of the variance components, heritability, and genetic correlations for milk protein yields were obtained using a random regression test-day model. Milking times, herd, age of recording, year, and month of recording were included as fixed effects in the model. Additive genetic and permanent environmental random effects for the lactation curve were taken into account by applying orthogonal Legendre polynomials of the fourth order in the model. The lowest and highest additive genetic variances were estimated at the beginning and end of lactation, respectively. Permanent environmental variance was higher at both extremes. Residual variance was lowest at the middle of the lactation and contrarily, heritability increased during this period. Maximum heritability was found during the 12th lactation stage (0.213±0.007). Genetic, permanent, and phenotypic correlations among test-days decreased as the interval between consecutive test-days increased. A relatively large data set was used in this study; therefore, the estimated (co)variance components for random regression coefficients could be used for national genetic evaluation of dairy cattle in Iran.

  5. Deletion Diagnostics for the Generalised Linear Mixed Model with independent random effects

    PubMed Central

    Ganguli, B.; Roy, S. Sen; Naskar, M.; Malloy, E. J.; Eisen, E. A.

    2015-01-01

    The Generalised Linear Mixed Model (GLMM) is widely used for modelling environmental data. However, such data are prone to influential observations, which can distort the estimated exposure-response curve, particularly in regions of high exposure. Deletion diagnostics for iterative estimation schemes commonly derive the deleted estimates from a single iteration of the full system, holding certain pivotal quantities, such as the information matrix, constant. In this paper, we present an approximate formula for the deleted estimates and Cook's distance for the GLMM which does not assume that the estimates of variance parameters are unaffected by deletion. The procedure allows the user to calculate standardised DFBETAs for mean as well as variance parameters. In certain cases, such as when using the GLMM as a device for smoothing, such residuals for the variance parameters are interesting in their own right. In general, the procedure leads to deleted estimates of mean parameters which are corrected for the effect of deletion on variance components, as estimation of the two sets of parameters is interdependent. The probabilistic behaviour of these residuals is investigated and a simulation-based procedure suggested for their standardisation. The method is used to identify influential individuals in an occupational cohort exposed to silica. The results show that failure to conduct post-model-fitting diagnostics for variance components can lead to erroneous conclusions about the fitted curve and unstable confidence intervals. PMID:26626135

  6. Multilevel modeling of single-case data: A comparison of maximum likelihood and Bayesian estimation.

    PubMed

    Moeyaert, Mariola; Rindskopf, David; Onghena, Patrick; Van den Noortgate, Wim

    2017-12-01

    The focus of this article is to describe Bayesian estimation, including construction of prior distributions, and to compare parameter recovery under the Bayesian framework (using weakly informative priors) and the maximum likelihood (ML) framework in the context of multilevel modeling of single-case experimental data. Bayesian estimation results were found similar to ML estimation results in terms of the treatment effect estimates, regardless of the functional form and degree of information included in the prior specification in the Bayesian framework. In terms of the variance component estimates, both the ML and Bayesian estimation procedures result in biased and less precise variance estimates when the number of participants is small (i.e., 3). By increasing the number of participants to 5 or 7, the relative bias is close to 5% and more precise estimates are obtained for all approaches, except for the inverse-Wishart prior using the identity matrix. When a more informative prior was added, more precise estimates for the fixed effects and random effects were obtained, even when only 3 participants were included. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  7. Influence of mom and dad: quantitative genetic models for maternal effects and genomic imprinting.

    PubMed

    Santure, Anna W; Spencer, Hamish G

    2006-08-01

    The expression of an imprinted gene is dependent on the sex of the parent it was inherited from, and as a result reciprocal heterozygotes may display different phenotypes. In contrast, maternal genetic terms arise when the phenotype of an offspring is influenced by the phenotype of its mother beyond the direct inheritance of alleles. Both maternal effects and imprinting may contribute to resemblance between offspring of the same mother. We demonstrate that two standard quantitative genetic models for deriving breeding values, population variances and covariances between relatives, are not equivalent when maternal genetic effects and imprinting are acting. Maternal and imprinting effects introduce both sex-dependent and generation-dependent effects that result in differences in the way additive and dominance effects are defined for the two approaches. We use a simple example to demonstrate that both imprinting and maternal genetic effects add extra terms to covariances between relatives and that model misspecification may over- or underestimate true covariances or lead to extremely variable parameter estimation. Thus, an understanding of various forms of parental effects is essential in correctly estimating quantitative genetic variance components.

  8. Decoding the auditory brain with canonical component analysis.

    PubMed

    de Cheveigné, Alain; Wong, Daniel D E; Di Liberto, Giovanni M; Hjortkjær, Jens; Slaney, Malcolm; Lalor, Edmund

    2018-05-15

    The relation between a stimulus and the evoked brain response can shed light on perceptual processes within the brain. Signals derived from this relation can also be harnessed to control external devices for Brain Computer Interface (BCI) applications. While the classic event-related potential (ERP) is appropriate for isolated stimuli, more sophisticated "decoding" strategies are needed to address continuous stimuli such as speech, music or environmental sounds. Here we describe an approach based on Canonical Correlation Analysis (CCA) that finds the optimal transform to apply to both the stimulus and the response to reveal correlations between the two. Compared to prior methods based on forward or backward models for stimulus-response mapping, CCA finds significantly higher correlation scores, thus providing increased sensitivity to relatively small effects, and supports classifier schemes that yield higher classification scores. CCA strips the brain response of variance unrelated to the stimulus, and the stimulus representation of variance that does not affect the response, and thus improves observations of the relation between stimulus and response. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
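
    The essence of the approach, finding paired transforms of stimulus and response that maximise their correlation, can be sketched with scikit-learn's CCA on simulated data. The channel counts and latent structure below are assumptions, not the paper's EEG setup:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(5)

# Hypothetical data: a 10-channel stimulus representation (X) and a
# 32-channel EEG-like response (Y) sharing two latent components.
n = 2000
latent = rng.normal(size=(n, 2))
X = latent @ rng.normal(size=(2, 10)) + 0.5 * rng.normal(size=(n, 10))
Y = latent @ rng.normal(size=(2, 32)) + 2.0 * rng.normal(size=(n, 32))

cca = CCA(n_components=2).fit(X, Y)
Xc, Yc = cca.transform(X, Y)
for k in range(2):
    r = np.corrcoef(Xc[:, k], Yc[:, k])[0, 1]
    print(f"canonical pair {k + 1}: r = {r:.2f}")
# CCA strips stimulus variance that never reaches the response and response
# variance unrelated to the stimulus, which is why the canonical correlations
# exceed anything obtainable by correlating raw channels directly.
```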

  9. Experimental Effects and Individual Differences in Linear Mixed Models: Estimating the Relationship between Spatial, Object, and Attraction Effects in Visual Attention

    PubMed Central

    Kliegl, Reinhold; Wei, Ping; Dambacher, Michael; Yan, Ming; Zhou, Xiaolin

    2011-01-01

    Linear mixed models (LMMs) provide a still underused methodological perspective on combining experimental and individual-differences research. Here we illustrate this approach with two-rectangle cueing in visual attention (Egly et al., 1994). We replicated previous experimental cue-validity effects relating to a spatial shift of attention within an object (spatial effect), to attention switch between objects (object effect), and to the attraction of attention toward the display centroid (attraction effect), also taking into account the design-inherent imbalance of valid and other trials. We simultaneously estimated variance/covariance components of subject-related random effects for these spatial, object, and attraction effects in addition to their mean reaction times (RTs). The spatial effect showed a strong positive correlation with mean RT and a strong negative correlation with the attraction effect. The analysis of individual differences suggests that slow subjects engage attention more strongly at the cued location than fast subjects. We compare this joint LMM analysis of experimental effects and associated subject-related variances and correlations with two frequently used alternative statistical procedures. PMID:21833292
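
    The joint estimation of experimental effects and subject-level variance/covariance components is what a random-slope LMM provides. A minimal sketch on simulated cue-validity data using statsmodels (all sizes and effect magnitudes are assumptions, not values from the study):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)

# Hypothetical cueing data: each subject has a mean RT and a subject-specific
# cue-validity (spatial) effect that grows with mean RT, as in the abstract.
subjects, trials = 30, 80
base = rng.normal(500, 40, subjects)                             # mean RT (ms)
effect = 30 + 0.4 * (base - 500) + rng.normal(0, 8, subjects)    # slower -> larger
rows = []
for s in range(subjects):
    for _ in range(trials):
        invalid = rng.integers(0, 2)                             # 1 = invalid cue
        rt = base[s] + invalid * effect[s] + rng.normal(0, 50)
        rows.append((s, invalid, rt))
df = pd.DataFrame(rows, columns=["subject", "invalid", "rt"])

# Random intercept and random cue-validity slope per subject; the estimated
# random-effects covariance captures the effect-size-vs-mean-RT correlation.
fit = smf.mixedlm("rt ~ invalid", df, groups=df["subject"],
                  re_formula="~invalid").fit()
print(fit.summary())
```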

  10. Dopamine D1 Sensitivity in the Prefrontal Cortex Predicts General Cognitive Abilities and is Modulated by Working Memory Training

    ERIC Educational Resources Information Center

    Wass, Christopher; Pizzo, Alessandro; Sauce, Bruno; Kawasumi, Yushi; Sturzoiu, Tudor; Ree, Fred; Otto, Tim; Matzel, Louis D.

    2013-01-01

    A common source of variance (i.e., "general intelligence") underlies an individual's performance across diverse tests of cognitive ability, and evidence indicates that the processing efficacy of working memory may serve as one such source of common variance. One component of working memory, selective attention, has been reported to…

  11. Shifting patterns of variance in adolescent alcohol use: Testing consumption as a developing trait-state.

    PubMed

    Nealis, Logan J; Thompson, Kara D; Krank, Marvin D; Stewart, Sherry H

    2016-04-01

    While average rates of change in adolescent alcohol consumption are frequently studied, variability arising from situational and dispositional influences on alcohol use has been comparatively neglected. We used variance decomposition to test differences in variability resulting from year-to-year fluctuations in use (i.e., state-like) and from stable individual differences (i.e., trait-like), using data from the Project on Adolescent Trajectories and Health (PATH), a cohort-sequential study spanning grades 7 to 11 with three cohorts starting in grades seven, eight, and nine, respectively. We tested variance components for alcohol volume, frequency, and quantity in the overall sample, and changes in components over time within each cohort. Sex differences were tested. Most variability in alcohol use reflected state-like variation (47-76%), with a relatively smaller proportion of trait-like variation (19-36%). These proportions shifted across cohorts as youth got older, with increases in trait-like variance from early adolescence (14-30%) to later adolescence (30-50%). Trends were similar for males and females, although females showed higher trait-like variance in alcohol frequency than males throughout development (26-43% vs. 11-25%). For alcohol volume and frequency, males showed the greatest increase in trait-like variance earlier in development (i.e., grades 8-10) compared to females (i.e., grades 9-11). The relative strength of situational and dispositional influences on adolescent alcohol use has important implications for preventative interventions, which should ideally target problematic alcohol use before it becomes more ingrained and trait-like. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Shape variation in the human pelvis and limb skeleton: Implications for obstetric adaptation.

    PubMed

    Kurki, Helen K; Decrausaz, Sarah-Louise

    2016-04-01

    Under the obstetrical dilemma (OD) hypothesis, selection acts on the human female pelvis to ensure a sufficiently sized obstetric canal for birthing a large-brained, broad shouldered neonate, while bipedal locomotion selects for a narrower and smaller pelvis. Despite this female-specific stabilizing selection, variability of linear dimensions of the pelvic canal and overall size are not reduced in females, suggesting shape may instead be variable among females of a population. Female canal shape has been shown to vary among populations, while male canal shape does not. Within this context, we examine within-population canal shape variation in comparison with that of noncanal aspects of the pelvis and the limbs. Nine skeletal samples (total female n = 101, male n = 117) representing diverse body sizes and shapes were included. Principal components analysis was applied to size-adjusted variables of each skeletal region. A multivariate variance was calculated using the weighted PC scores for all components in each model and F-ratios used to assess differences in within-population variances between sexes and skeletal regions. Within both sexes, multivariate canal shape variance is significantly greater than noncanal pelvis and limb variances, while limb variance is greater than noncanal pelvis variance in some populations. Multivariate shape variation is not consistently different between the sexes in any of the skeletal regions. Diverse selective pressures, including obstetrics, locomotion, load carrying, and others may act on canal shape, as well as genetic drift and plasticity, thus increasing variation in morphospace while protecting obstetric sufficiency. © 2015 Wiley Periodicals, Inc.

  13. Bias Correction and Random Error Characterization for the Assimilation of HRDI Line-of-Sight Wind Measurements

    NASA Technical Reports Server (NTRS)

    Tangborn, Andrew; Menard, Richard; Ortland, David; Einaudi, Franco (Technical Monitor)

    2001-01-01

    A new approach to the analysis of systematic and random observation errors is presented in which the error statistics are obtained using forecast data rather than observations from a different instrument type. The analysis is carried out at an intermediate retrieval level, instead of the more typical state variable space. The method is applied to measurements made by the High Resolution Doppler Imager (HRDI) on board the Upper Atmosphere Research Satellite (UARS). HRDI, a limb sounder, is the only satellite instrument measuring winds in the stratosphere, and the only instrument of any kind making global wind measurements in the upper atmosphere. HRDI measures Doppler shifts in two different O2 absorption bands (the gamma and B bands), and the retrieved products are the tangent-point line-of-sight (LOS) wind component (level 2 retrieval) and the horizontal wind components (level 3 retrieval). This analysis is carried out on a level 1.9 retrieval, in which the contributions from different points along the line of sight have not been removed. Biases are calculated from O-F (observed minus forecast) LOS wind components and are separated into a measurement parameter space consisting of 16 different values. The bias dependence on these parameters (plus an altitude dependence) is used to create a bias correction scheme applied at the level 1.9 retrieval. The random error component is analyzed by separating the gamma and B band observations and locating observation pairs where both bands are very nearly looking at the same location at the same time. It is shown that the two observation streams are uncorrelated and that this allows the forecast error variance to be estimated. The bias correction is found to cut the effective observation error variance in half.

  14. Workspace location influences joint coordination during reaching in post-stroke hemiparesis

    PubMed Central

    Reisman, Darcy S.; Scholz, John P.

    2006-01-01

    The purpose of this study was to determine the influence of workspace location on joint coordination in persons with post-stroke hemiparesis when trunk motion was required to complete reaches beyond the arm’s functional reach length. Seven subjects with mild right hemiparesis following a stroke and seven age and gender matched control subjects participated. Joint motions and characteristics of hand and trunk movement were measured over multiple repetitions. The variance (across trials) of joint combinations was partitioned into two components at every point in the hand’s trajectory using the uncontrolled manifold approach; the first component is a measure of the extent to which equivalent joint combinations are used to control a given hand path, and reflects performance flexibility. The second component of joint variance reflects the use of non-equivalent joint combinations, which lead to hand path error. Compared to the control subjects, persons with hemiparesis demonstrated a significantly greater amount of non-equivalent joint variability related to control of the hand’s path and of the hand’s position relative to the trunk when reaching toward the hemiparetic side (ipsilaterally), but not when reaching to the less involved side. The relative timing of the hand and trunk was also altered when reaching ipsilaterally. The current findings support the idea that the previously proposed “arm compensatory synergy” may be deficient in subjects with hemiparesis. This deficiency may be due to one or a combination of factors: changes in central commands that are thought to set the gain of the arm compensatory synergy; a limited ability to combine shoulder abduction and elbow extension that limits the expression of an appropriately set arm compensatory synergy; or a reduction of the necessary degrees-of-freedom needed to adequately compensate for poor trunk control when reaching ipsilaterally. PMID:16328275

  15. Triggers in advanced neurological conditions: prediction and management of the terminal phase.

    PubMed

    Hussain, Jamilla; Adams, Debi; Allgar, Victoria; Campbell, Colin

    2014-03-01

    The challenge to provide a palliative care service for individuals with advanced neurological conditions is compounded by variability in disease trajectories and symptom profiles. The National End of Life Care Programme (2010) recommended seven 'triggers' for a palliative approach to care for patients with advanced neurological conditions. The aims were to establish the frequency of the triggers in the palliative phase and to determine whether they could be reduced to fewer components; management of the terminal phase was also evaluated. Retrospective study of 62 consecutive patients, since deceased, who had been under the care of a specialist palliative neurology service. Principal component analysis (PCA) was performed to establish the interrelationship between triggers. The frequency of triggers increased as each patient approached death. PCA found that four symptom components explained 76.8% of the variance. These represented: rapid physical decline; significant complex symptoms, including pain; infection in combination with cognitive impairment; and risk of aspiration. Median follow-up under the palliative care service was 336 days. In 56.5% of patients, the cause of death was pneumonia. The terminal phase was recognised in 72.6%. The duration of the terminal phase was 8.8 days on average, and the Liverpool Care of the Dying Pathway was commenced in 33.9%. All carers were offered bereavement support. Referral criteria based on the triggers can facilitate appropriate and timely patient access to palliative care. The components deduced through PCA have face validity; however, larger studies prospectively validating the triggers are required. Closer scrutiny of the terminal phase is necessary to optimise management.

  16. Improving the precision of lake ecosystem metabolism estimates by identifying predictors of model uncertainty

    USGS Publications Warehouse

    Rose, Kevin C.; Winslow, Luke A.; Read, Jordan S.; Read, Emily K.; Solomon, Christopher T.; Adrian, Rita; Hanson, Paul C.

    2014-01-01

    Diel changes in dissolved oxygen are often used to estimate gross primary production (GPP) and ecosystem respiration (ER) in aquatic ecosystems. Despite the widespread use of this approach to understand ecosystem metabolism, we are only beginning to understand the degree and underlying causes of uncertainty for metabolism model parameter estimates. Here, we present a novel approach to improve the precision and accuracy of ecosystem metabolism estimates by identifying physical metrics that indicate when metabolism estimates are highly uncertain. Using datasets from seventeen instrumented GLEON (Global Lake Ecological Observatory Network) lakes, we discovered that many physical characteristics correlated with uncertainty, including PAR (photosynthetically active radiation, 400-700 nm), daily variance in Schmidt stability, and wind speed. Low PAR was a consistent predictor of high variance in GPP model parameters, but also corresponded with low ER model parameter variance. We identified a threshold (30% of clear sky PAR) below which GPP parameter variance increased rapidly and was significantly greater in nearly all lakes compared with variance on days with PAR levels above this threshold. The relationship between daily variance in Schmidt stability and GPP model parameter variance depended on trophic status, whereas daily variance in Schmidt stability was consistently positively related to ER model parameter variance. Wind speeds in the range of ~0.8-3 m s⁻¹ were consistent predictors of high variance for both GPP and ER model parameters, with greater uncertainty in eutrophic lakes. Our findings can be used to reduce ecosystem metabolism model parameter uncertainty and identify potential sources of that uncertainty.
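
    The reported thresholds translate directly into a screening rule; a minimal sketch (daily arrays assumed; the 30% and 0.8-3 m s⁻¹ cutoffs are the ones named above):

```python
import numpy as np

def flag_uncertain_days(par, clear_sky_par, wind_speed):
    """Boolean mask of days on which metabolism parameter estimates are
    likely to be highly uncertain: PAR below 30% of clear-sky PAR (high
    GPP parameter variance) or mean wind speed in roughly 0.8-3 m/s
    (high GPP and ER parameter variance)."""
    low_light = par < 0.30 * clear_sky_par
    calm_wind = (wind_speed >= 0.8) & (wind_speed <= 3.0)
    return low_light | calm_wind
```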

  17. Short communication: Discrimination between retail bovine milks with different fat contents using chemometrics and fatty acid profiling.

    PubMed

    Vargas-Bello-Pérez, Einar; Toro-Mujica, Paula; Enriquez-Hidalgo, Daniel; Fellenberg, María Angélica; Gómez-Cortés, Pilar

    2017-06-01

    We used a multivariate chemometric approach to differentiate or associate retail bovine milks with different fat contents and non-dairy beverages, using fatty acid profiles and statistical analysis. We collected samples of bovine milk (whole, semi-skim, and skim; n = 62) and non-dairy beverages (n = 27), and we analyzed them using gas-liquid chromatography. Principal component analysis of the fatty acid data yielded 3 significant principal components, which accounted for 72% of the total variance in the data set. Principal component 1 was related to saturated fatty acids (C4:0, C6:0, C8:0, C12:0, C14:0, C17:0, and C18:0) and monounsaturated fatty acids (C14:1 cis-9, C16:1 cis-9, C17:1 cis-9, and C18:1 trans-11); whole milk samples were clearly differentiated from the rest using this principal component. Principal component 2 differentiated semi-skim milk samples by n-3 fatty acid content (C20:3n-3, C20:5n-3, and C22:6n-3). Principal component 3 was related to C18:2 trans-9,trans-12 and C20:4n-6, and its lower scores were observed in skim milk and non-dairy beverages. A cluster analysis yielded 3 groups: group 1 consisted of only whole milk samples, group 2 was represented mainly by semi-skim milks, and group 3 included skim milk and non-dairy beverages. Overall, the present study showed that a multivariate chemometric approach is a useful tool for differentiating or associating retail bovine milks and non-dairy beverages using their fatty acid profile. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
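
    The overall workflow (standardize the fatty acid profiles, extract principal components, then cluster the scores) can be sketched generically; this is an illustrative reconstruction, not the authors' code, and the input matrix is hypothetical:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

def chemometric_grouping(fa_profiles, n_components=3, n_groups=3):
    """fa_profiles: (n_samples, n_fatty_acids) matrix of fatty acid
    percentages. Returns PC scores, loadings (which fatty acids drive
    each component), and cluster labels for the samples."""
    X = StandardScaler().fit_transform(fa_profiles)
    pca = PCA(n_components=n_components).fit(X)
    scores = pca.transform(X)
    print("variance explained:", pca.explained_variance_ratio_.sum())
    labels = AgglomerativeClustering(n_clusters=n_groups).fit_predict(scores)
    return scores, pca.components_, labels
```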

  18. Observations of the scale-dependent turbulence and evaluation of the flux–gradient relationship for sensible heat for a closed Douglas-fir canopy in very weak wind conditions

    DOE PAGES

    Vickers, D.; Thomas, C. K.

    2014-09-16

    Observations of the scale-dependent turbulent fluxes, variances, and the bulk transfer parameterization for sensible heat above, within, and beneath a tall closed Douglas-fir canopy in very weak winds are examined. The daytime sub-canopy vertical velocity spectra exhibit a double-peak structure with peaks at timescales of 0.8 s and 51.2 s. A double-peak structure is also observed in the daytime sub-canopy heat flux co-spectra. The daytime momentum flux co-spectra in the upper bole space and in the sub-canopy are characterized by a relatively large cross-wind component, likely due to the extremely light and variable winds, such that the definition of a mean wind direction, and subsequent partitioning of the momentum flux into along- and cross-wind components, has little physical meaning. Positive values of both momentum flux components in the sub-canopy contribute to upward transfer of momentum, consistent with the observed sub-canopy secondary wind speed maximum. For the smallest resolved scales in the canopy at nighttime, we find increasing vertical velocity variance with decreasing timescale, consistent with very small eddies possibly generated by wake shedding from the canopy elements that transport momentum, but not heat. Unusually large values of the velocity aspect ratio within the canopy were observed, consistent with enhanced suppression of the horizontal wind components compared to the vertical by the very dense canopy. The flux–gradient approach for sensible heat flux is found to be valid for the sub-canopy and above-canopy layers when considered separately in spite of the very small fluxes on the order of a few W m⁻² in the sub-canopy. However, single-source approaches that ignore the canopy fail because they make the heat flux appear to be counter-gradient when in fact it is aligned with the local temperature gradient in both the sub-canopy and above-canopy layers. While sub-canopy Stanton numbers agreed well with values typically reported in the literature, our estimates for the above-canopy Stanton number were much larger, which likely leads to underestimated modeled sensible heat fluxes above dark warm closed canopies.

  19. Spectral discrimination of bleached and healthy submerged corals based on principal components analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holden, H.; LeDrew, E.

    1997-06-01

    Remote discrimination of substrate types in relatively shallow coastal waters has been limited by the spatial and spectral resolution of available sensors. An additional limiting factor is the strong attenuating influence of the water column over the substrate. As a result, there have been limited attempts to map submerged ecosystems such as coral reefs based on spectral characteristics. Both healthy and bleached corals were measured at depth with a hand-held spectroradiometer, and their spectra compared. Two separate principal components analyses (PCA) were performed on two sets of spectral data. The PCA revealed that there is indeed a spectral difference based on health. In the first data set, the first component (healthy coral) explains 46.82%, while the second component (bleached coral) explains 46.35% of the variance. In the second data set, the first component (bleached coral) explained 46.99%; the second component (healthy coral) explained 36.55%; and the third component (healthy coral) explained 15.44% of the total variance in the original data. These results are encouraging with respect to using an airborne spectroradiometer to identify areas of bleached corals, thus enabling accurate monitoring over time.

  1. Portfolio optimization with skewness and kurtosis

    NASA Astrophysics Data System (ADS)

    Lam, Weng Hoe; Jaaman, Saiful Hafizah Hj.; Isa, Zaidi

    2013-04-01

    Mean and variance of return distributions are two important parameters of the mean-variance model in portfolio optimization. However, the mean-variance model becomes inadequate if the returns of assets are not normally distributed, so higher moments such as skewness and kurtosis cannot be ignored. Risk-averse investors prefer portfolios with high skewness and low kurtosis so that the probability of getting negative rates of return is reduced. The objective of this study is to compare the portfolio compositions as well as performances of the mean-variance model and the mean-variance-skewness-kurtosis model by using the polynomial goal programming approach. The results show that the incorporation of skewness and kurtosis changes the optimal portfolio compositions. The mean-variance-skewness-kurtosis model outperforms the mean-variance model because it takes skewness and kurtosis into consideration, and is therefore more appropriate for Malaysian investors in portfolio optimization.
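
    A two-stage polynomial goal programming scheme of the kind described can be sketched as follows: first optimize each moment alone over the feasible set to obtain its ideal value, then minimize the summed relative deviations from those ideals. This is a simplified sketch (all PGP powers set to 1, long-only weights, ideal values assumed precomputed), not the authors' exact formulation:

```python
import numpy as np
from scipy.optimize import minimize

def sample_moments(w, R):
    """Mean, variance, skewness, kurtosis of portfolio returns R @ w,
    where R is an (n_obs, n_assets) matrix of historical returns."""
    r = R @ w
    mu, sd = r.mean(), r.std()
    skew = ((r - mu) ** 3).mean() / sd ** 3
    kurt = ((r - mu) ** 4).mean() / sd ** 4
    return mu, sd ** 2, skew, kurt

def pgp_portfolio(R, ideals):
    """Stage 2 of PGP: ideals = (M, V, S, K), the single-objective optima
    of mean, variance, skewness and kurtosis over the same feasible set
    (assumed nonzero here, since they scale the deviations)."""
    M, V, S, K = ideals

    def objective(w):
        mu, var, skew, kurt = sample_moments(w, R)
        return (max(0.0, (M - mu) / abs(M)) +      # shortfall in mean
                max(0.0, (var - V) / V) +          # excess variance
                max(0.0, (S - skew) / abs(S)) +    # shortfall in skewness
                max(0.0, (kurt - K) / K))          # excess kurtosis

    n = R.shape[1]
    res = minimize(objective, np.full(n, 1.0 / n), method="SLSQP",
                   bounds=[(0.0, 1.0)] * n,
                   constraints=[{"type": "eq",
                                 "fun": lambda w: w.sum() - 1.0}])
    return res.x
```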

  2. Genetic covariance between components of male reproductive success: within-pair vs. extra-pair paternity in song sparrows

    PubMed Central

    Reid, J M; Arcese, P; Losdat, S

    2014-01-01

    The evolutionary trajectories of reproductive systems, including both male and female multiple mating and hence polygyny and polyandry, are expected to depend on the additive genetic variances and covariances in and among components of male reproductive success achieved through different reproductive tactics. However, genetic covariances among key components of male reproductive success have not been estimated in wild populations. We used comprehensive paternity data from socially monogamous but genetically polygynandrous song sparrows (Melospiza melodia) to estimate additive genetic variance and covariance in the total number of offspring a male sired per year outside his social pairings (i.e. his total extra-pair reproductive success achieved through multiple mating) and his liability to sire offspring produced by his socially paired female (i.e. his success in defending within-pair paternity). Both components of male fitness showed nonzero additive genetic variance, and the estimated genetic covariance was positive, implying that males with high additive genetic value for extra-pair reproduction also have high additive genetic propensity to sire their socially paired female's offspring. There was consequently no evidence of a genetic or phenotypic trade-off between male within-pair paternity success and extra-pair reproductive success. Such positive genetic covariance might be expected to facilitate ongoing evolution of polygyny and could also shape the ongoing evolution of polyandry through indirect selection. PMID:25186454

  3. An application of the LC-LSTM framework to the self-esteem instability case.

    PubMed

    Alessandri, Guido; Vecchione, Michele; Donnellan, Brent M; Tisak, John

    2013-10-01

    The present research evaluates the stability of self-esteem as assessed by a daily version of the Rosenberg (Society and the adolescent self-image, Princeton University Press, Princeton, 1965) general self-esteem scale (RGSE). The scale was administered to 391 undergraduates for five consecutive days. The longitudinal data were analyzed using the integrated LC-LSTM framework that allowed us to evaluate: (1) the measurement invariance of the RGSE, (2) its stability and change across the 5-day assessment period, (3) the amount of variance attributable to stable and transitory latent factors, and (4) the criterion-related validity of these factors. Results provided evidence for measurement invariance, mean-level stability, and rank-order stability of daily self-esteem. Latent state-trait analyses revealed that variance in scores of the RGSE can be decomposed into six components: stable self-esteem (40%), ephemeral (or temporal-state) variance (36%), stable negative method variance (9%), stable positive method variance (4%), specific variance (1%), and random error variance (10%). Moreover, latent factors associated with daily self-esteem were associated with measures of depression, implicit self-esteem, and grade point average.

  4. Genetic Variance in Homophobia: Evidence from Self- and Peer Reports.

    PubMed

    Zapko-Willmes, Alexandra; Kandler, Christian

    2018-01-01

    The present twin study combined self- and peer assessments of twins' general homophobia targeting gay men in order to replicate previous behavior genetic findings across different rater perspectives and to disentangle self-rater-specific variance from common variance in self- and peer-reported homophobia (i.e., rater-consistent variance). We hypothesized rater-consistent variance in homophobia to be attributable to genetic and nonshared environmental effects, and self-rater-specific variance to be partially accounted for by genetic influences. A sample of 869 twins and 1329 peer raters completed a seven item scale containing cognitive, affective, and discriminatory homophobic tendencies. After correction for age and sex differences, we found most of the genetic contributions (62%) and significant nonshared environmental contributions (16%) to individual differences in self-reports on homophobia to be also reflected in peer-reported homophobia. A significant genetic component, however, was self-report-specific (38%), suggesting that self-assessments alone produce inflated heritability estimates to some degree. Different explanations are discussed.

  5. Heritability of Performance Deficit Accumulation During Acute Sleep Deprivation in Twins

    PubMed Central

    Kuna, Samuel T.; Maislin, Greg; Pack, Frances M.; Staley, Bethany; Hachadoorian, Robert; Coccaro, Emil F.; Pack, Allan I.

    2012-01-01

    Study Objectives: To determine if the large and highly reproducible interindividual differences in rates of performance deficit accumulation during sleep deprivation, as determined by the number of lapses on a sustained reaction time test, the Psychomotor Vigilance Task (PVT), arise from a heritable trait. Design: Prospective, observational cohort study. Setting: Academic medical center. Participants: There were 59 monozygotic (mean age 29.2 ± 6.8 [SD] yr; 15 male and 44 female pairs) and 41 dizygotic (mean age 26.6 ± 7.6 yr; 15 male and 26 female pairs) same-sex twin pairs with a normal polysomnogram. Interventions: Thirty-eight hr of monitored, continuous sleep deprivation. Measurements and Results: Participants performed the 10-min PVT every 2 hr during the sleep deprivation protocol. The primary outcome was change from baseline in square root transformed total lapses (response time ≥ 500 ms) per trial. Participant-specific linear rates of performance deficit accumulation were separated from circadian effects using multiple linear regression. Using the classic approach to assess heritability, the intraclass correlation coefficients for accumulating deficits resulted in a broad-sense heritability (h²) estimate of 0.834. The mean within-pair and among-pair heritability estimate determined by analysis of variance-based methods was 0.715. When variance components of mixed-effect multilevel models were estimated by maximum likelihood estimation and used to determine the proportions of phenotypic variance explained by genetic and nongenetic factors, 51.1% (standard error = 8.4%, P < 0.0001) of twin variance was attributed to combined additive and dominance genetic effects. Conclusion: Genetic factors explain a large fraction of interindividual variance among rates of performance deficit accumulation on the PVT during sleep deprivation. Citation: Kuna ST; Maislin G; Pack FM; Staley B; Hachadoorian R; Coccaro EF; Pack AI. Heritability of performance deficit accumulation during acute sleep deprivation in twins. SLEEP 2012;35(9):1223-1233. PMID:22942500
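
    The "classic approach" of comparing intraclass correlations across zygosity can be sketched as follows (one textbook variant, shown for illustration; the study's ANOVA and mixed-model estimates would differ in detail):

```python
import numpy as np

def twin_icc(pairs):
    """One-way ANOVA intraclass correlation for twin pairs.
    pairs: (n_pairs, 2) phenotype values (e.g. rate of lapse
    accumulation) for the two members of each pair."""
    n = len(pairs)
    grand = pairs.mean()
    ms_between = 2 * ((pairs.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((pairs[:, 0] - pairs[:, 1]) ** 2).sum() / (2 * n)
    return (ms_between - ms_within) / (ms_between + ms_within)

# Falconer-style heritability from MZ and DZ twin correlations
# (one classic formula among the several methods used in the study):
# h2 = 2 * (twin_icc(mz_pairs) - twin_icc(dz_pairs))
```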

  6. Increasing Specificity of Correlate Research: Exploring Correlates of Children’s Lunchtime and After-School Physical Activity

    PubMed Central

    Stanley, Rebecca M.; Ridley, Kate; Olds, Timothy S.; Dollman, James

    2014-01-01

    Background The lunchtime and after-school contexts are critical windows in a school day for children to be physically active. While numerous studies have investigated correlates of children’s habitual physical activity, few have explored correlates of physical activity occurring at lunchtime and after-school from a social-ecological perspective. Exploring correlates that influence physical activity occurring in specific contexts can potentially improve the prediction and understanding of physical activity. Using a context-specific approach, this study investigated correlates of children’s lunchtime and after-school physical activity. Methods Cross-sectional data were collected from 423 South Australian children aged 10.0–13.9 years (200 boys; 223 girls) attending 10 different schools. Lunchtime and after-school physical activity was assessed using accelerometers. Correlates were assessed using purposely developed context-specific questionnaires. Correlated Component Regression analysis was conducted to derive correlates of context-specific physical activity and determine the variance explained by prediction equations. Results The model of boys’ lunchtime physical activity contained 6 correlates and explained 25% of the variance. For girls, the model explained 17% variance from 9 correlates. Enjoyment of walking during lunchtime was the strongest correlate for both boys and girls. Boys’ and girls’ after-school physical activity models explained 20% variance from 14 correlates and 7% variance from the single item correlate, “I do an organised sport or activity after-school because it gets you fit”, respectively. Conclusions Increasing specificity of correlate research has enabled the identification of unique features of, and a more in-depth interpretation of, lunchtime and after-school physical activity behaviour and is a potential strategy for advancing the physical activity correlate research field. The findings of this study could be used to inform and tailor gender-specific public health messages and interventions for promoting lunchtime and after-school physical activity in children. PMID:24809440

  7. On the Relation between the General Affective Meaning and the Basic Sublexical, Lexical, and Inter-lexical Features of Poetic Texts—A Case Study Using 57 Poems of H. M. Enzensberger

    PubMed Central

    Ullrich, Susann; Aryani, Arash; Kraxenberger, Maria; Jacobs, Arthur M.; Conrad, Markus

    2017-01-01

    The literary genre of poetry is inherently related to the expression and elicitation of emotion via both content and form. To explore the nature of this affective impact at an extremely basic textual level, we collected ratings on eight different general affective meaning scales—valence, arousal, friendliness, sadness, spitefulness, poeticity, onomatopoeia, and liking—for 57 German poems (“die verteidigung der wölfe”) which the contemporary author H. M. Enzensberger had labeled as either “friendly,” “sad,” or “spiteful.” Following Jakobson's (1960) view on the vivid interplay of hierarchical text levels, we used multiple regression analyses to explore the specific influences of affective features from three different text levels (sublexical, lexical, and inter-lexical) on the perceived general affective meaning of the poems using three types of predictors: (1) Lexical predictor variables capturing the mean valence and arousal potential of words; (2) Inter-lexical predictors quantifying peaks, ranges, and dynamic changes within the lexical affective content; (3) Sublexical measures of basic affective tone according to sound-meaning correspondences at the sublexical level (see Aryani et al., 2016). We find the lexical predictors to account for a major amount of up to 50% of the variance in affective ratings. Moreover, inter-lexical and sublexical predictors account for a large portion of additional variance in the perceived general affective meaning. Together, the affective properties of all used textual features account for 43–70% of the variance in the affective ratings and still for 23–48% of the variance in the more abstract aesthetic ratings. In sum, our approach represents a novel method that successfully relates a prominent part of variance in perceived general affective meaning in this corpus of German poems to quantitative estimates of affective properties of textual components at the sublexical, lexical, and inter-lexical level. PMID:28123376

  8. A Generalized DIF Effect Variance Estimator for Measuring Unsigned Differential Test Functioning in Mixed Format Tests

    ERIC Educational Resources Information Center

    Penfield, Randall D.; Algina, James

    2006-01-01

    One approach to measuring unsigned differential test functioning is to estimate the variance of the differential item functioning (DIF) effect across the items of the test. This article proposes two estimators of the DIF effect variance for tests containing dichotomous and polytomous items. The proposed estimators are direct extensions of the…
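
    The record is truncated here, but the general strategy it names, treating item-level DIF estimates as noisy draws around a common effect and estimating their true between-item variance, is often written as a method-of-moments estimator; a generic sketch (not necessarily the authors' proposed estimators):

```python
import numpy as np

def dif_effect_variance(dif_estimates, sampling_variances):
    """Method-of-moments estimate of the between-item variance of DIF
    effects: observed variance of the item-level DIF estimates minus
    their average sampling variance, floored at zero."""
    observed = np.var(dif_estimates, ddof=1)
    return max(0.0, observed - np.mean(sampling_variances))
```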

  9. Resting-state test-retest reliability of a priori defined canonical networks over different preprocessing steps.

    PubMed

    Varikuti, Deepthi P; Hoffstaedter, Felix; Genon, Sarah; Schwender, Holger; Reid, Andrew T; Eickhoff, Simon B

    2017-04-01

    Resting-state functional connectivity analysis has become a widely used method for the investigation of human brain connectivity and pathology. The measurement of neuronal activity by functional MRI, however, is impeded by various nuisance signals that reduce the stability of functional connectivity. Several methods exist to address this predicament, but little consensus has yet been reached on the most appropriate approach. Given the crucial importance of reliability for the development of clinical applications, we here investigated the effect of various confound removal approaches on the test-retest reliability of functional-connectivity estimates in two previously defined functional brain networks. Our results showed that gray matter masking improved the reliability of connectivity estimates, whereas denoising based on principal components analysis reduced it. We additionally observed that refraining from using any correction for global signals provided the best test-retest reliability, but failed to reproduce anti-correlations between what have been previously described as antagonistic networks. This suggests that improved reliability can come at the expense of potentially poorer biological validity. Consistent with this, we observed that reliability was proportional to the retained variance, which presumably included structured noise, such as reliable nuisance signals (for instance, noise induced by cardiac processes). We conclude that compromises are necessary between maximizing test-retest reliability and removing variance that may be attributable to non-neuronal sources.

  10. Right-sizing statistical models for longitudinal data.

    PubMed

    Wood, Phillip K; Steinley, Douglas; Jackson, Kristina M

    2015-12-01

    Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to "right-size" the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting, overly parsimonious models to more complex, better-fitting alternatives and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically underidentified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A 3-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation-covariation patterns. The orthogonal free curve slope intercept (FCSI) growth model is considered a general model that includes, as special cases, many models, including the factor mean (FM) model (McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, hierarchical linear models (HLMs), repeated-measures multivariate analysis of variance (MANOVA), and the linear slope intercept (linearSI) growth model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparing several candidate parametric growth and chronometric models in a Monte Carlo study. (c) 2015 APA, all rights reserved.

  11. Impacts of using inbred animals in studies for detection of quantitative trait loci.

    PubMed

    Freyer, G; Vukasinovic, N; Cassell, B

    2009-02-01

    Effects of utilizing inbred and noninbred family structures in experiments for detection of quantitative trait loci (QTL) were compared in this simulation study. Simulations were based on a general pedigree design originating from 2 unrelated sires. A variance component approach of mapping QTL was applied to simulated data that reflected common family structures from dairy populations. Five different family structures were considered: FS0 without inbreeding, FS1 with an inbred sire from an aunt-nephew mating, FS2 with an inbred sire originating from a half-sib mating, FS3 and FS4 based on FS2 but containing an increased number of offspring of the inbred sire (FS3), and another extremely inbred sire with its final offspring (FS4). Sixty replicates of each of the 5 family structures, in each of 2 simulation scenarios, were analyzed to emulate a realistic QTL-analysis setting. The largest proportion of QTL position estimates within the correct interval of 3 cM, the best test-statistic profiles, and the smallest average bias were obtained from the pedigrees described by FS4 and FS2. The approach does not depend on the kind and number of genetic markers. Inbreeding is not a recommended practice for commercial dairy production because of possible inbreeding depression, but inbred animals and their offspring that already exist could be advantageous for QTL mapping, because of reduced genetic variance in inbred parents.

  12. Assessing implementation difficulties in tobacco use prevention and cessation counselling among dental providers.

    PubMed

    Amemori, Masamitsu; Michie, Susan; Korhonen, Tellervo; Murtomaa, Heikki; Kinnunen, Taru H

    2011-05-26

    Tobacco use adversely affects oral health. Clinical guidelines recommend that dental providers promote tobacco abstinence and provide patients who use tobacco with brief tobacco use cessation counselling. Research shows that these guidelines are seldom implemented, however. To improve guideline adherence and to develop effective interventions, it is essential to understand provider behaviour and challenges to implementation. This study aimed to develop a theoretically informed measure for assessing among dental providers implementation difficulties related to tobacco use prevention and cessation (TUPAC) counselling guidelines, to evaluate those difficulties among a sample of dental providers, and to investigate a possible underlying structure of applied theoretical domains. A 35-item questionnaire was developed based on key theoretical domains relevant to the implementation behaviours of healthcare providers. Specific items were drawn mostly from the literature on TUPAC counselling studies of healthcare providers. The data were collected from dentists (n = 73) and dental hygienists (n = 22) in 36 dental clinics in Finland using a web-based survey. Of 95 providers, 73 participated (76.8%). We used Cronbach's alpha to ascertain the internal consistency of the questionnaire. Mean domain scores were calculated to assess different aspects of implementation difficulties and exploratory factor analysis to assess the theoretical domain structure. The authors agreed on the labels assigned to the factors on the basis of their component domains and the broader behavioural and theoretical literature. Internal consistency values for theoretical domains varied from 0.50 ('emotion') to 0.71 ('environmental context and resources'). The domain environmental context and resources had the lowest mean score (21.3%; 95% confidence interval [CI], 17.2 to 25.4) and was identified as a potential implementation difficulty. The domain emotion provided the highest mean score (60%; 95% CI, 55.0 to 65.0). Three factors were extracted that explain 70.8% of the variance: motivation (47.6% of variance, α = 0.86), capability (13.3% of variance, α = 0.83), and opportunity (10.0% of variance, α = 0.71). This study demonstrated a theoretically informed approach to identifying possible implementation difficulties in TUPAC counselling among dental providers. This approach provides a method for moving from diagnosing implementation difficulties to designing and evaluating interventions.
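
    The internal consistency figures quoted above are Cronbach's alpha values computed per theoretical domain; for reference, a minimal sketch of the statistic (standard formula, illustrative names):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for the items of one theoretical domain.
    items: (n_respondents, n_items) matrix of item responses."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)
```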

  13. Sequential experimental design based generalised ANOVA

    NASA Astrophysics Data System (ADS)

    Chakraborty, Souvik; Chowdhury, Rajib

    2016-07-01

    Over the last decade, surrogate modelling technique has gained wide popularity in the field of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and regression/interpolation for generating the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting probability of failure of three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimate of the failure probability.

  14. Tidal analysis of surface currents in the Porsanger fjord in northern Norway

    NASA Astrophysics Data System (ADS)

    Stramska, Malgorzata; Jankowski, Andrzej; Cieszyńska, Agata

    2016-04-01

    In this presentation we describe surface currents in the Porsanger fjord (Porsangerfjorden), located in the European Arctic in the vicinity of the Barents Sea. Our analysis is based on data collected in the summer of 2014 using a High Frequency radar system. Our interest in this fjord comes from the fact that this is a region of high climatic sensitivity. One of our long-term goals is to develop an improved understanding of the ongoing changes and interactions between this fjord and the large-scale atmospheric and oceanic conditions. To derive a better understanding of these changes, one must first improve the knowledge of the physical processes that create the environment of the fjord; the present study is the first step in this direction. Our main objective in this presentation is to evaluate the importance of tidal forcing. Tides in the Porsanger fjord are substantial, with a tidal range on the order of 3 meters. Tidal analysis attributes about 99% of the variance in the sea level time series recorded in Honningsvåg to tides. The most important tidal component based on sea level data is the M2 component (amplitude of ~90 cm). The S2 and N2 components (amplitudes of ~20 cm) also play a significant role in the semidiurnal sea level oscillations. The most important diurnal component is K1, with an amplitude of about 8 cm. Tidal analysis led us to the conclusion that the most important tidal component in the observed surface currents is also the M2 component, followed by the S2 component. Our results indicate that, in contrast to sea level, only about 10-20% of the variance in surface currents can be attributed to tidal currents; the remaining 80-90% can be credited to wind-induced and geostrophic currents. This work was funded by the Norway Grants (NCBR contract No. 201985, project NORDFLUX). Partial support for MS comes from the Institute of Oceanology (IO PAN).
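
    The variance attribution quoted above follows from a harmonic fit; a minimal sketch for the four constituents named (plain least squares rather than a full tidal analysis package, using standard constituent periods):

```python
import numpy as np

# Angular frequencies (rad/hour); periods: M2 12.4206 h, S2 12.0 h,
# N2 12.6583 h, K1 23.9345 h.
OMEGAS = 2 * np.pi / np.array([12.4206, 12.0, 12.6583, 23.9345])

def tidal_fit(t_hours, series):
    """Least-squares harmonic fit. Returns the constituent amplitudes and
    the fraction of variance in `series` explained by the fitted tide."""
    cols = [np.ones_like(t_hours)]
    for w in OMEGAS:
        cols += [np.cos(w * t_hours), np.sin(w * t_hours)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, series, rcond=None)
    tide = A @ coef
    amplitudes = np.hypot(coef[1::2], coef[2::2])
    explained = 1 - np.var(series - tide) / np.var(series)
    return amplitudes, explained
```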

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Kunkun; Congedo, Pietro M.

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation keeps containing few terms, so that the cost to resolve repeatedly the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than the one of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.

  16. Perturbative approach to covariance matrix of the matter power spectrum

    DOE PAGES

    Mohammed, Irshad; Seljak, Uros; Vlah, Zvonimir

    2016-12-14

    Here, we evaluate the covariance matrix of the matter power spectrum using perturbation theory up to dominant terms at 1-loop order and compare it to numerical simulations. We decompose the covariance matrix into the disconnected (Gaussian) part, trispectrum from the modes outside the survey (beat coupling or super-sample variance), and trispectrum from the modes inside the survey, and show how the different components contribute to the overall covariance matrix. We find the agreement with the simulations is at a 10% level up to k ~ 1 h Mpc⁻¹. We also show that all the connected components are dominated by the large-scale modes (k < 0.1 h Mpc⁻¹), regardless of the values of the wavevectors k, k′ of the covariance matrix, suggesting that one must be careful in applying the jackknife or bootstrap methods to the covariance matrix. We perform an eigenmode decomposition of the connected part of the covariance matrix, showing that at higher k it is dominated by a single eigenmode. Furthermore, the full covariance matrix can be approximated as the disconnected part only, with the connected part being treated as an external nuisance parameter with a known scale dependence, and a known prior on its variance for a given survey volume. Finally, we provide a prescription for how to evaluate the covariance matrix from small box simulations without the need to simulate large volumes.
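
    For orientation, the disconnected (Gaussian) part referred to above is diagonal in the band powers; a minimal sketch of that piece only (standard mode-counting form, with N_i the number of independent Fourier modes per bin):

```python
import numpy as np

def disconnected_covariance(p_k, n_modes):
    """Disconnected (Gaussian) part of the band-power covariance:
    Cov(P_i, P_j) = delta_ij * 2 * P_i**2 / N_i, where p_k holds the
    band powers P_i and n_modes the mode counts N_i per k bin."""
    return np.diag(2.0 * p_k ** 2 / n_modes)
```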

  17. ICA-based artefact removal and accelerated fMRI acquisition for improved resting state network imaging

    PubMed Central

    Griffanti, Ludovica; Salimi-Khorshidi, Gholamreza; Beckmann, Christian F.; Auerbach, Edward J.; Douaud, Gwenaëlle; Sexton, Claire E.; Zsoldos, Enikő; Ebmeier, Klaus P; Filippini, Nicola; Mackay, Clare E.; Moeller, Steen; Xu, Junqian; Yacoub, Essa; Baselli, Giuseppe; Ugurbil, Kamil; Miller, Karla L.; Smith, Stephen M.

    2014-01-01

    The identification of resting state networks (RSNs) and the quantification of their functional connectivity in resting-state fMRI (rfMRI) are seriously hindered by the presence of artefacts, many of which overlap spatially or spectrally with RSNs. Moreover, recent developments in fMRI acquisition yield data with higher spatial and temporal resolutions, but may increase artefacts both spatially and/or temporally. Hence the correct identification and removal of non-neural fluctuations is crucial, especially in accelerated acquisitions. In this paper we investigate the effectiveness of three data-driven cleaning procedures, compare standard against higher (spatial and temporal) resolution accelerated fMRI acquisitions, and investigate the combined effect of different acquisitions and different cleanup approaches. We applied single-subject independent component analysis (ICA), followed by automatic component classification with FMRIB’s ICA-based X-noiseifier (FIX) to identify artefactual components. We then compared two first-level (within-subject) cleaning approaches for removing those artefacts and motion-related fluctuations from the data. The effectiveness of the cleaning procedures were assessed using timeseries (amplitude and spectra), network matrix and spatial map analyses. For timeseries and network analyses we also tested the effect of a second-level cleaning (informed by group-level analysis). Comparing these approaches, the preferable balance between noise removal and signal loss was achieved by regressing out of the data the full space of motion-related fluctuations and only the unique variance of the artefactual ICA components. Using similar analyses, we also investigated the effects of different cleaning approaches on data from different acquisition sequences. With the optimal cleaning procedures, functional connectivity results from accelerated data were statistically comparable or significantly better than the standard (unaccelerated) acquisition, and, crucially, with higher spatial and temporal resolution. Moreover, we were able to perform higher dimensionality ICA decompositions with the accelerated data, which is very valuable for detailed network analyses. PMID:24657355
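
    The preferred "soft" cleanup, regressing out the full motion space but only the unique variance of the noise components, amounts to a joint fit followed by partial removal; a minimal sketch (illustrative names, not the FIX implementation):

```python
import numpy as np

def soft_clean(data, motion, noise_ics, signal_ics):
    """Fit motion regressors and all ICA component time series jointly,
    then remove the fitted motion contribution in full plus only the
    unique variance of the noise components (variance shared with the
    signal components is retained).

    data       : (n_timepoints, n_voxels) BOLD time series
    motion     : (n_timepoints, n_motion) motion-related regressors
    noise_ics  : (n_timepoints, n_noise) artefactual component series
    signal_ics : (n_timepoints, n_signal) non-artefactual series
    """
    X = np.column_stack([motion, noise_ics, signal_ics])
    beta, *_ = np.linalg.lstsq(X, data, rcond=None)
    n_bad = motion.shape[1] + noise_ics.shape[1]
    return data - X[:, :n_bad] @ beta[:n_bad]
```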

  1. Job satisfaction among a multigenerational nursing workforce.

    PubMed

    Wilson, Barbara; Squires, Mae; Widger, Kimberley; Cranley, Lisa; Tourangeau, Ann

    2008-09-01

    To explore generational differences in job satisfaction. Effective retention strategies are required to mitigate the international nursing shortage. Job satisfaction, a strong and consistent predictor of retention, may differ across generations. Understanding generational differences in job satisfaction may lead to increasing clarity about generation-specific retention approaches. The Ontario Nurse Survey collected data from 6541 Registered Nurses. Participants were categorized as Baby Boomer, Generation X or Generation Y based on birth year. Multivariate analysis of variance explored generational differences for overall job satisfaction and specific satisfaction components. In overall job satisfaction and five specific satisfaction components, Baby Boomers were significantly more satisfied than Generations X and Y. It is imperative to improve job satisfaction for younger generations of nurses. Strategies may include creating a shared-governance framework in which nurses are empowered to make decisions; implementing shared governance through nurse-led unit-based councils may lead to greater job satisfaction, particularly for younger nurses. Opportunities to self-schedule or job-share, together with aggressive provision and support of education and career-development opportunities, are other potential approaches to increasing job satisfaction for younger generations of nurses.

  2. Traceability of Opuntia ficus-indica L. Miller by ICP-MS multi-element profile and chemometric approach.

    PubMed

    Mottese, Antonio Francesco; Naccari, Clara; Vadalà, Rossella; Bua, Giuseppe Daniel; Bartolomeo, Giovanni; Rando, Rossana; Cicero, Nicola; Dugo, Giacomo

    2018-01-01

    Opuntia ficus-indica L. Miller fruits, particularly 'Ficodindia dell'Etna' of Biancavilla (PDO), 'Fico d'india tradizionale di Roccapalumba' with protected brand and samples from an experimental field in Pezzolo (Sicily), were analyzed by inductively coupled plasma mass spectrometry in order to determine the multi-element profile. A multivariate chemometric approach, specifically principal component analysis (PCA), was applied to determine how mineral elements may serve as markers of geographic origin, which would be useful for traceability. PCA allowed us to verify that the geographical origin of prickly pear fruits is significantly influenced by trace element content, and the results found in Biancavilla PDO samples were linked to the geological composition of this volcanic area. Two principal components accounted for 72.03% of the total variance in the data; in more detail, PC1 explained 45.51% and PC2 26.52%. This study demonstrated that PCA is an integrated tool for the traceability of food products and, at the same time, a useful method of authentication of typical local fruits such as prickly pear. © 2017 Society of Chemical Industry.

  3. Structured penalties for functional linear models-partially empirical eigenvectors for regression.

    PubMed

    Randolph, Timothy W; Harezlak, Jaroslaw; Feng, Ziding

    2012-01-01

    One of the challenges with functional data is incorporating geometric structure, or local correlation, into the analysis. This structure is inherent in the output from an increasing number of biomedical technologies, and a functional linear model is often used to estimate the relationship between the predictor functions and scalar responses. Common approaches to the problem of estimating a coefficient function typically involve two stages: regularization and estimation. Regularization is usually done via dimension reduction, projecting onto a predefined span of basis functions or a reduced set of eigenvectors (principal components). In contrast, we present a unified approach that directly incorporates geometric structure into the estimation process by exploiting the joint eigenproperties of the predictors and a linear penalty operator. In this sense, the components in the regression are 'partially empirical' and the framework is provided by the generalized singular value decomposition (GSVD). The form of the penalized estimation is not new, but the GSVD clarifies the process and informs the choice of penalty by making explicit the joint influence of the penalty and predictors on the bias, variance and performance of the estimated coefficient function. Laboratory spectroscopy data and simulations are used to illustrate the concepts.
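
    The penalized estimate the authors analyze through the GSVD has the familiar form b = argmin ||y - Xb||² + λ||Lb||²; computationally it can be obtained from an augmented least-squares system. A minimal sketch (the GSVD of the pair (X, L) is the analytical tool; here we only solve for the estimate):

```python
import numpy as np

def penalized_functional_fit(X, y, L, lam):
    """Solve b = argmin ||y - X b||^2 + lam * ||L b||^2 by stacking.
    X : (n, p) discretized predictor functions
    L : (q, p) penalty operator, e.g. a second-difference matrix
        L = np.diff(np.eye(p), n=2, axis=0) for a smoothness penalty.
    """
    X_aug = np.vstack([X, np.sqrt(lam) * L])
    y_aug = np.concatenate([y, np.zeros(L.shape[0])])
    coef, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)
    return coef
```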

  4. A Novel Acoustic Sensor Approach to Classify Seeds Based on Sound Absorption Spectra

    PubMed Central

    Gasso-Tortajada, Vicent; Ward, Alastair J.; Mansur, Hasib; Brøchner, Torben; Sørensen, Claus G.; Green, Ole

    2010-01-01

    A non-destructive and novel in situ acoustic sensor approach based on sound absorption spectra was developed for identifying and classifying different seed types. The absorption coefficient spectra were determined using the impedance tube measurement method. Subsequently, a multivariate statistical analysis, i.e., principal component analysis (PCA), was performed as a way to generate a classification of the seeds based on the soft independent modelling of class analogy (SIMCA) method. The results show that the sound absorption coefficient spectra of different seed types present characteristic patterns which are highly dependent on seed size and shape. In general, seed particle size and sphericity were inversely related to the absorption coefficient. PCA presented reliable grouping capabilities within the diverse seed types, since 95% of the total spectral variance was described by the first two principal components. Furthermore, the SIMCA classification model based on the absorption spectra achieved optimal results, as 100% of the evaluation samples were correctly classified. This study lays the initial groundwork for an innovative method that opens new possibilities in agriculture and industry for classifying and determining the physical properties of seeds and other materials. PMID:22163455

  5. An improved K-means clustering algorithm in agricultural image segmentation

    NASA Astrophysics Data System (ADS)

    Cheng, Huifeng; Peng, Hui; Liu, Shanmei

    Image segmentation is the first important step in image analysis and image processing. In this paper, based on the characteristics of color crop images, we first transform the image's color space from RGB to HSI, and then select a proper initial clustering center and cluster number using a mean-variance approach and rough set theory before performing the clustering calculation, so as to automatically and rapidly segment the color components and accurately extract target objects from the background. This provides a reliable basis for identification, analysis, follow-up calculation and processing of crop images. Experimental results demonstrate that the improved k-means clustering algorithm is able to reduce the computational load and enhance the precision and accuracy of clustering.

  6. Characterization of nonGaussian atmospheric turbulence for prediction of aircraft response statistics

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1977-01-01

    Mathematical expressions were derived for the exceedance rates and probability density functions of aircraft response variables using a turbulence model that consists of a low-frequency component plus a variance-modulated Gaussian turbulence component. The functional form of experimentally observed concave exceedance curves was predicted theoretically, the strength of the concave contribution being governed by the coefficient of variation of the time-fluctuating variance of the turbulence. Differences in the functional forms of response exceedance curves and probability densities also were shown to depend primarily on this same coefficient of variation. Criteria were established for the validity of the local stationarity assumption required in the derivations of the exceedance curves and probability density functions. These criteria are shown to depend on the relative time scale of the fluctuations in the variance, the fluctuations in the turbulence itself, and the nominal duration of the relevant aircraft impulse response function. Metrics that can be generated from turbulence recordings for testing the validity of the local stationarity assumption were also developed.
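
    A process of the kind described, a low-frequency component plus Gaussian noise whose variance is slowly modulated, is straightforward to simulate for checking such exceedance-curve behaviour; a minimal sketch (arbitrary smoothing kernel and lognormal variance process; the smoothing lowers the realized coefficient of variation somewhat):

```python
import numpy as np

def simulate_turbulence(n, var_timescale=500, cv=0.5, seed=0):
    """Low-frequency component plus a variance-modulated Gaussian
    component. `cv` sets the coefficient of variation of the (unsmoothed)
    variance process, the quantity that governs how concave the
    exceedance curves become."""
    rng = np.random.default_rng(seed)
    sigma_ln = np.sqrt(np.log(1.0 + cv ** 2))
    raw_var = rng.lognormal(mean=-0.5 * sigma_ln ** 2, sigma=sigma_ln,
                            size=n)
    kernel = np.ones(var_timescale) / var_timescale
    slow_var = np.convolve(raw_var, kernel, mode="same")
    modulated = rng.standard_normal(n) * np.sqrt(slow_var)
    low_freq = np.convolve(rng.standard_normal(n), kernel, mode="same")
    return low_freq + modulated
```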

  7. Identifying sources of emerging organic contaminants in a mixed use watershed using principal components analysis.

    PubMed

    Karpuzcu, M Ekrem; Fairbairn, David; Arnold, William A; Barber, Brian L; Kaufenberg, Elizabeth; Koskinen, William C; Novak, Paige J; Rice, Pamela J; Swackhamer, Deborah L

    2014-01-01

    Principal components analysis (PCA) was used to identify sources of emerging organic contaminants in the Zumbro River watershed in southeastern Minnesota. Two main principal components (PCs) were identified, which together explained more than 50% of the variance in the data. Principal Component 1 (PC1) was attributed to urban wastewater-derived sources, including municipal wastewater and residential septic tank effluents, while Principal Component 2 (PC2) was attributed to agricultural sources. The variances of the concentrations of cotinine, DEET, and the prescription drugs carbamazepine, erythromycin, and sulfamethoxazole were best explained by PC1, while the variances of the concentrations of the agricultural pesticides atrazine, metolachlor, and acetochlor were best explained by PC2. Mixed-use compounds carbaryl, iprodione, and daidzein did not group specifically with either PC1 or PC2. Furthermore, although caffeine and acetaminophen have historically been associated with human use, they could not be attributed to a single dominant land-use category (e.g., urban/residential or agricultural). Contributions from septic systems did not clarify the source of these two compounds, suggesting that additional sources, such as runoff from biosolid-amended soils, may exist. Based on these results, PCA may be a useful way to broadly categorize the sources of new and previously uncharacterized emerging contaminants, or may help to clarify transport pathways in a given area. Acetaminophen and caffeine were not ideal markers for urban/residential contamination sources in the study area and may need to be reconsidered as such in other areas as well.

  8. Analysis of the torsional storage modulus of human hair and its relation to hair morphology and cosmetic processing.

    PubMed

    Wortmann, Franz J; Wortmann, Gabriele; Haake, Hans-Martin; Eisfeld, Wolf

    2014-01-01

    Measurements of three different hair samples (virgin and treated) by the torsional pendulum method (22°C, 22% RH) show a systematic decrease of the torsional storage modulus G' with increasing fiber diameter, i.e., polar moment of inertia. G' is therefore not a material constant for hair. This change of G' implies a systematic component of data variance, which significantly contributes to the limitations of the torsional method for cosmetic claim support. Fitting the data on the basis of a core/shell model for cortex and cuticle makes it possible to separate this systematic component of variance and to greatly enhance the discriminative power of the test. The fitting procedure also provides values for the torsional storage moduli of the morphological components, confirming that the cuticle modulus is substantially higher than that of the cortex. The results give consistent insight into the changes imparted to the morphological components by the cosmetic treatments.
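
    One possible form of such a core/shell fit is sketched below: the apparent modulus is assumed to be a polar-moment-weighted average of core (cortex) and shell (cuticle) moduli with a fixed cuticle thickness. This functional form and all numbers are illustrative assumptions, not the authors' exact model.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    T = 3.0  # assumed cuticle (shell) thickness in micrometers -- hypothetical

    def g_apparent(R, g_cortex, g_cuticle):
        # Moduli weighted by the polar moments of inertia of core and shell.
        core = np.clip(R - T, 0.0, None) ** 4
        return (g_cortex * core + g_cuticle * (R**4 - core)) / R**4

    rng = np.random.default_rng(3)
    R = rng.uniform(25.0, 50.0, 80)                       # fiber radii, micrometers
    G = g_apparent(R, 1.2, 6.0) + 0.05 * rng.standard_normal(80)  # synthetic G', GPa

    popt, _ = curve_fit(g_apparent, R, G, p0=[1.0, 5.0])
    print("fitted G'_cortex ~ %.2f GPa, G'_cuticle ~ %.2f GPa" % tuple(popt))
    ```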

  9. An approach to the analysis of performance of quasi-optimum digital phase-locked loops.

    NASA Technical Reports Server (NTRS)

    Polk, D. R.; Gupta, S. C.

    1973-01-01

    An approach to the analysis of performance of quasi-optimum digital phase-locked loops (DPLL's) is presented. An expression for the characteristic function of the prior error in the state estimate is derived, and from this expression an infinite dimensional equation for the prior error variance is obtained. The prior error-variance equation is a function of the communication system model and the DPLL gain and is independent of the method used to derive the DPLL gain. Two approximations are discussed for reducing the prior error-variance equation to finite dimension. The effectiveness of one approximation in analyzing DPLL performance is studied.

  10. Hierarchical Bayes approach for subgroup analysis.

    PubMed

    Hsu, Yu-Yi; Zalkikar, Jyoti; Tiwari, Ram C

    2017-01-01

    In clinical data analysis, both treatment effect estimation and consistency assessment are important for a better understanding of drug efficacy for the benefit of subjects in individual subgroups. The linear mixed-effects model has been used for subgroup analysis to describe treatment differences among subgroups with great flexibility. The hierarchical Bayes approach has been applied to the linear mixed-effects model to derive the posterior distributions of overall and subgroup treatment effects. In this article, we discuss the selection of priors for the variance components in hierarchical Bayes, estimation and decision making for the overall treatment effect, as well as consistency assessment of the treatment effects across the subgroups based on the posterior predictive p-value. Decision procedures are suggested using either the posterior probability or the Bayes factor. These decision procedures and their properties are illustrated using a simulated example with normally distributed responses and repeated measurements.
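
    The pooling idea behind hierarchical Bayes subgroup estimates can be shown with a conjugate normal-normal toy model in which subgroup effects are shrunk toward the overall effect; the known variances and all numbers below are assumptions, and the article's actual analysis would rely on MCMC.

    ```python
    import numpy as np

    y = np.array([0.40, 0.15, 0.55, 0.30])    # hypothetical subgroup estimates
    s2 = np.array([0.04, 0.05, 0.06, 0.03])   # their sampling variances
    tau2 = 0.02                                # assumed between-subgroup variance

    # Overall effect: precision-weighted mean under the hierarchical model.
    mu = np.sum(y / (s2 + tau2)) / np.sum(1.0 / (s2 + tau2))

    # Subgroup posterior means: compromise between own estimate and overall.
    post = (y / s2 + mu / tau2) / (1.0 / s2 + 1.0 / tau2)
    print("overall:", round(float(mu), 3), "shrunken subgroup effects:", post.round(3))
    ```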

  11. The IfE Global Gravity Field Model Recovered from GOCE Orbit and Gradiometer Data

    NASA Astrophysics Data System (ADS)

    Wu, Hu; Müller, Jürgen; Brieden, Phillip

    2015-03-01

    An independent global gravity field model is computed from the GOCE orbit and gradiometer data using our own IfE software. We analysed the same data period that was considered for the first released GOCE models. The Acceleration Approach is applied to process the orbit data. The gravity gradients are processed in the framework of the remove-restore technique, by which the low-frequency noise of the original gradients is removed. For the combined solution, the normal equations are combined by means of variance component estimation. The result, in terms of accumulated geoid height error calculated from the coefficient differences w.r.t. EGM2008, is about 11 cm at D/O 200, which corresponds to the accuracy level of the first released TIM and DIR solutions. This indicates that our IfE model has performance comparable to the other official GOCE models.
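
    A simplified version of combining two data sources by variance component estimation is sketched below: two least-squares groups (stand-ins for orbit and gradiometer normal equations) are iteratively reweighted by their estimated variance factors, in the spirit of a Helmert/Förstner iteration. Sizes and noise levels are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    x_true = np.array([1.0, -2.0, 0.5])
    A1 = rng.normal(size=(100, 3)); y1 = A1 @ x_true + 0.1 * rng.standard_normal(100)
    A2 = rng.normal(size=(150, 3)); y2 = A2 @ x_true + 0.5 * rng.standard_normal(150)

    s1 = s2 = 1.0                                  # initial variance components
    for _ in range(20):
        N1, N2 = A1.T @ A1, A2.T @ A2
        N = N1 / s1 + N2 / s2                      # combined normal matrix
        x = np.linalg.solve(N, A1.T @ y1 / s1 + A2.T @ y2 / s2)
        Ninv = np.linalg.inv(N)
        v1, v2 = y1 - A1 @ x, y2 - A2 @ x          # group residuals
        r1 = len(y1) - np.trace(Ninv @ N1) / s1    # redundancy contributions
        r2 = len(y2) - np.trace(Ninv @ N2) / s2
        s1, s2 = (v1 @ v1) / r1, (v2 @ v2) / r2

    print("estimated sigmas:", np.sqrt(s1), np.sqrt(s2))  # approx. 0.1 and 0.5
    ```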

  12. Comparing Between- and Within-Group Variances in a Two-Level Study: A Latent Variable Modeling Approach to Evaluating Their Relationship

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.; Akaeze, Hope O.

    2017-01-01

    This note is concerned with examining the relationship between within-group and between-group variances in two-level nested designs. A latent variable modeling approach is outlined that permits point and interval estimation of their ratio and allows their comparison in a multilevel study. The procedure can also be used to test various hypotheses…

  13. Risk aversion and risk seeking in multicriteria forest management: a Markov decision process approach

    Treesearch

    Joseph Buongiorno; Mo Zhou; Craig Johnston

    2017-01-01

    Markov decision process models were extended to reflect some consequences of the risk attitude of forestry decision makers. One approach consisted of maximizing the expected value of a criterion subject to an upper bound on the variance or, symmetrically, minimizing the variance subject to a lower bound on the expected value.  The other method used the certainty...

  14. Posterior Predictive Bayesian Phylogenetic Model Selection

    PubMed Central

    Lewis, Paul O.; Xie, Wangang; Chen, Ming-Hui; Fan, Yu; Kuo, Lynn

    2014-01-01

    We present two distinctly different posterior predictive approaches to Bayesian phylogenetic model selection and illustrate these methods using examples from green algal protein-coding cpDNA sequences and flowering plant rDNA sequences. The Gelfand–Ghosh (GG) approach allows dissection of an overall measure of model fit into components due to posterior predictive variance (GGp) and goodness-of-fit (GGg), which distinguishes this method from the posterior predictive P-value approach. The conditional predictive ordinate (CPO) method provides a site-specific measure of model fit useful for exploratory analyses and can be combined over sites yielding the log pseudomarginal likelihood (LPML) which is useful as an overall measure of model fit. CPO provides a useful cross-validation approach that is computationally efficient, requiring only a sample from the posterior distribution (no additional simulation is required). Both GG and CPO add new perspectives to Bayesian phylogenetic model selection based on the predictive abilities of models and complement the perspective provided by the marginal likelihood (including Bayes Factor comparisons) based solely on the fit of competing models to observed data. [Bayesian; conditional predictive ordinate; CPO; L-measure; LPML; model selection; phylogenetics; posterior predictive.] PMID:24193892
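
    The CPO/LPML computation reduces to a harmonic-mean identity over posterior draws: CPO_i is the inverse of the posterior mean of 1/p(y_i | theta). A numerically stable sketch on synthetic per-site log-likelihoods is given below; the draws are hypothetical, not from a real phylogenetic posterior.

    ```python
    import numpy as np
    from scipy.special import logsumexp

    rng = np.random.default_rng(5)
    S, n_sites = 2000, 50
    # Hypothetical log p(y_i | theta_s): S posterior draws x n_sites sites.
    log_lik = -1.0 - 0.3 * rng.standard_normal((S, n_sites))

    # log CPO_i = log S - logsumexp_s(-log p(y_i | theta_s)), stable in log space.
    log_cpo = np.log(S) - logsumexp(-log_lik, axis=0)
    lpml = log_cpo.sum()                      # log pseudomarginal likelihood
    print("LPML:", round(float(lpml), 2))
    ```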

  15. Stable “Trait” Variance of Temperament as a Predictor of the Temporal Course of Depression and Social Phobia

    PubMed Central

    Naragon-Gainey, Kristin; Gallagher, Matthew W.; Brown, Timothy A.

    2013-01-01

    A large body of research has found robust associations between dimensions of temperament (e.g., neuroticism, extraversion) and the mood and anxiety disorders. However, mood-state distortion (i.e., the tendency for current mood state to bias ratings of temperament) likely confounds these associations, rendering their interpretation and validity unclear. This issue is of particular relevance to clinical populations who experience elevated levels of general distress. The current study used the “trait-state-occasion” latent variable model (Cole, Martin, & Steiger, 2005) to separate the stable components of temperament from transient, situational influences such as current mood state. We examined the predictive power of the time-invariant components of temperament on the course of depression and social phobia in a large, treatment-seeking sample with mood and/or anxiety disorders (N = 826). Participants were assessed three times over the course of one year, using interview and self-report measures; most participants received treatment during this time. Results indicated that both neuroticism/behavioral inhibition (N/BI) and behavioral activation/positive affect (BA/P) consisted largely of stable, time-invariant variance (57% to 78% of total variance). Furthermore, the time-invariant components of N/BI and BA/P were uniquely and incrementally predictive of change in depression and social phobia, adjusting for initial symptom levels. These results suggest that the removal of state variance bolsters the effect of temperament on psychopathology among clinically distressed individuals. Implications for temperament-psychopathology models, psychopathology assessment, and the stability of traits are discussed. PMID:24016004

  16. Adaptive increase in force variance during fatigue in tasks with low redundancy.

    PubMed

    Singh, Tarkeshwar; S K M, Varadhan; Zatsiorsky, Vladimir M; Latash, Mark L

    2010-11-26

    We tested a hypothesis that fatigue of an element (a finger) leads to an adaptive neural strategy that involves an increase in force variability in the other finger(s) and an increase in co-variation of commands to fingers to keep total force variability relatively unchanged. We tested this hypothesis using a system with small redundancy (two fingers) and a marginally redundant system (with an additional constraint related to the total moment of force produced by the fingers, unstable condition). The subjects performed isometric accurate rhythmic force production tasks by the index (I) finger and two fingers (I and middle, M) pressing together before and after a fatiguing exercise by the I finger. Fatigue led to a large increase in force variance in the I-finger task and a smaller increase in the IM-task. We quantified two components of variance in the space of hypothetical commands to fingers, finger modes. Under both stable and unstable conditions, there was a large increase in the variance component that did not affect total force and a much smaller increase in the component that did. This resulted in an increase in an index of the force-stabilizing synergy. These results indicate that marginal redundancy is sufficient to allow the central nervous system to use adaptive increase in variability to shield important variables from effects of fatigue. We offer an interpretation of these results based on a recent development of the equilibrium-point hypothesis known as the referent configuration hypothesis. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  17. Stable "trait" variance of temperament as a predictor of the temporal course of depression and social phobia.

    PubMed

    Naragon-Gainey, Kristin; Gallagher, Matthew W; Brown, Timothy A

    2013-08-01

    A large body of research has found robust associations between dimensions of temperament (e.g., neuroticism, extraversion) and the mood and anxiety disorders. However, mood-state distortion (i.e., the tendency for current mood state to bias ratings of temperament) likely confounds these associations, rendering their interpretation and validity unclear. This issue is of particular relevance to clinical populations who experience elevated levels of general distress. The current study used the "trait-state-occasion" latent variable model (D. A. Cole, N. C. Martin, & J. H. Steiger, 2005) to separate the stable components of temperament from transient, situational influences such as current mood state. We examined the predictive power of the time-invariant components of temperament on the course of depression and social phobia in a large, treatment-seeking sample with mood and/or anxiety disorders (N = 826). Participants were assessed 3 times over the course of 1 year, using interview and self-report measures; most participants received treatment during this time. Results indicated that both neuroticism/behavioral inhibition (N/BI) and behavioral activation/positive affect (BA/P) consisted largely of stable, time-invariant variance (57% to 78% of total variance). Furthermore, the time-invariant components of N/BI and BA/P were uniquely and incrementally predictive of change in depression and social phobia, adjusting for initial symptom levels. These results suggest that the removal of state variance bolsters the effect of temperament on psychopathology among clinically distressed individuals. Implications for temperament-psychopathology models, psychopathology assessment, and the stability of traits are discussed. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  18. Estimating the number of pure chemical components in a mixture by X-ray absorption spectroscopy.

    PubMed

    Manceau, Alain; Marcus, Matthew; Lenoir, Thomas

    2014-09-01

    Principal component analysis (PCA) is a multivariate data analysis approach commonly used in X-ray absorption spectroscopy to estimate the number of pure compounds in multicomponent mixtures. This approach seeks to describe a large number of multicomponent spectra as weighted sums of a smaller number of component spectra. These component spectra are in turn considered to be linear combinations of the spectra from the actual species present in the system from which the experimental spectra were taken. The dimension of the experimental dataset is given by the number of meaningful abstract components, as estimated by the cascade or variance of the eigenvalues (EVs), the factor indicator function (IND), or the F-test on reduced EVs. It is shown on synthetic and real spectral mixtures that the performance of the IND and F-test critically depends on the amount of noise in the data, and may result in considerable underestimation or overestimation of the number of components even for a signal-to-noise (s/n) ratio of the order of 80 (σ = 20) in a XANES dataset. For a given s/n ratio, the accuracy of the component recovery from a random mixture depends on the size of the dataset and the number of components, which is not known in advance, and deteriorates for larger datasets because the analysis picks up more noise components. The scree plot of the EVs for the components yields one or two values close to the significant number of components, but the result can be ambiguous and its uncertainty is unknown. A new estimator, NSS-stat, which incorporates the experimental error into XANES data analysis, is introduced and tested. It is shown that NSS-stat produces superior results compared with the three traditional forms of PCA-based component-number estimation. A user-friendly graphical interface for the calculation of EVs, IND, F-test, and NSS-stat from a XANES dataset has been developed under LabVIEW for Windows and is supplied in the supporting information. Its possible application to EXAFS data is discussed, and several XANES and EXAFS datasets are also included for download.
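
    For concreteness, the sketch below computes Malinowski's factor indicator function IND(n) = RE(n)/(c − n)² from the eigenvalues of a synthetic spectral data matrix and reports its minimum as the estimated number of components; the dataset and noise level are hypothetical, and NSS-stat itself is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    r, c, true_k = 100, 30, 3                 # spectra x channels, 3 components
    X = rng.normal(size=(r, true_k)) @ rng.normal(size=(true_k, c))
    X += 0.05 * rng.standard_normal((r, c))   # measurement noise

    ev = np.linalg.eigvalsh(X.T @ X)[::-1]    # eigenvalues, descending

    ind = []
    for n in range(1, c):                     # candidate component counts
        re = np.sqrt(ev[n:].sum() / (r * (c - n)))   # real error RE(n)
        ind.append(re / (c - n) ** 2)         # IND(n) = RE(n) / (c - n)^2
    print("IND minimum at n =", int(np.argmin(ind)) + 1)   # ideally true_k
    ```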

  19. Genetic and environmental transmission of body mass index fluctuation.

    PubMed

    Bergin, Jocilyn E; Neale, Michael C; Eaves, Lindon J; Martin, Nicholas G; Heath, Andrew C; Maes, Hermine H

    2012-11-01

    This study sought to determine the relationship between body mass index (BMI) fluctuation and cardiovascular disease phenotypes, diabetes, and depression, and the role of genetic and environmental factors in individual differences in BMI fluctuation, using the extended twin-family model (ETFM). This study included 14,763 twins and their relatives. Health and Lifestyle Questionnaires were obtained from 28,492 individuals from the Virginia 30,000 dataset, including twins, parents, siblings, spouses, and children of twins. Self-reported cardiovascular disease, diabetes, and depression data were available. From self-reported height and weight, BMI fluctuation was calculated as the difference between highest and lowest BMI after age 18, for individuals aged 18-80 years. Logistic regression analyses were used to determine the relationship between BMI fluctuation and disease status. The ETFM was used to estimate the significance and contribution of genetic and environmental factors, cultural transmission, and assortative mating components to BMI fluctuation, while controlling for age. We tested sex differences in additive and dominant genetic effects and in parental, non-parental, twin, and unique environmental effects. BMI fluctuation was highly associated with disease status, independent of BMI. Genetic effects accounted for ~34% of the variance in BMI fluctuation in males and ~43% of the variance in females. The majority of the variance was accounted for by environmental factors, about a third of which were shared among twins. Assortative mating and cultural transmission accounted for only a small proportion of variance in this phenotype. Since there are substantial health risks associated with BMI fluctuation, and environmental components of BMI fluctuation account for over 60% of the variance in males and over 50% of the variance in females, environmental risk factors may be appropriate targets for reducing BMI fluctuation.

  20. Phylogenomics of Lophotrochozoa with Consideration of Systematic Error.

    PubMed

    Kocot, Kevin M; Struck, Torsten H; Merkel, Julia; Waits, Damien S; Todt, Christiane; Brannock, Pamela M; Weese, David A; Cannon, Johanna T; Moroz, Leonid L; Lieb, Bernhard; Halanych, Kenneth M

    2017-03-01

    Phylogenomic studies have improved understanding of deep metazoan phylogeny and show promise for resolving incongruences among analyses based on limited numbers of loci. One region of the animal tree that has been especially difficult to resolve, even with phylogenomic approaches, is relationships within Lophotrochozoa (the animal clade that includes molluscs, annelids, and flatworms among others). Lack of resolution in phylogenomic analyses could be due to insufficient phylogenetic signal, limitations in taxon and/or gene sampling, or systematic error. Here, we investigated why lophotrochozoan phylogeny has been such a difficult question to answer by identifying and reducing sources of systematic error. We supplemented existing data with 32 new transcriptomes spanning the diversity of Lophotrochozoa and constructed a new set of Lophotrochozoa-specific core orthologs. Of these, 638 orthologous groups (OGs) passed strict screening for paralogy using a tree-based approach. In order to reduce possible sources of systematic error, we calculated branch-length heterogeneity, evolutionary rate, percent missing data, compositional bias, and saturation for each OG and analyzed increasingly stricter subsets of only the most stringent (best) OGs for these five variables. Principal component analysis of the values for each factor examined for each OG revealed that compositional heterogeneity and average patristic distance contributed most to the variance observed along the first principal component while branch-length heterogeneity and, to a lesser extent, saturation contributed most to the variance observed along the second. Missing data did not strongly contribute to either. Additional sensitivity analyses examined effects of removing taxa with heterogeneous branch lengths, large amounts of missing data, and compositional heterogeneity. Although our analyses do not unambiguously resolve lophotrochozoan phylogeny, we advance the field by reducing the list of viable hypotheses. Moreover, our systematic approach for dissection of phylogenomic data can be applied to explore sources of incongruence and poor support in any phylogenomic data set. [Annelida; Brachiopoda; Bryozoa; Entoprocta; Mollusca; Nemertea; Phoronida; Platyzoa; Polyzoa; Spiralia; Trochozoa.]. © The Author(s) 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  1. Evolution in fluctuating environments: decomposing selection into additive components of the Robertson-Price equation.

    PubMed

    Engen, Steinar; Saether, Bernt-Erik

    2014-03-01

    We analyze the stochastic components of the Robertson-Price equation for the evolution of quantitative characters, which enables decomposition of the selection differential into components due to demographic and environmental stochasticity. We show how these two types of stochasticity affect the evolution of multivariate quantitative characters by defining demographic and environmental variances as components of individual fitness. The exact covariance formula for selection is decomposed into three components: the deterministic mean value, as well as stochastic demographic and environmental components. We show that demographic and environmental stochasticity generate random genetic drift and fluctuating selection, respectively. This provides a common theoretical framework for linking ecological and evolutionary processes. Demographic stochasticity can cause random variation in selection differentials independent of fluctuating selection caused by environmental variation. We use this model of selection to illustrate that the effect on the expected selection differential of random variation in individual fitness depends on population size, and that the strength of fluctuating selection is affected by how environmental variation affects the covariance in Malthusian fitness between individuals with different phenotypes. Thus, our approach enables us to partition out the effects of fluctuating selection from the effects of selection due to random variation in individual fitness caused by demographic stochasticity. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.

  2. Phenomenology of mixed states: a principal component analysis study.

    PubMed

    Bertschy, G; Gervasoni, N; Favre, S; Liberek, C; Ragama-Pardos, E; Aubry, J-M; Gex-Fabry, M; Dayer, A

    2007-12-01

    To contribute to the definition of external and internal limits of mixed states and study the place of dysphoric symptoms in the psychopathology of mixed states. One hundred and sixty-five inpatients with major mood episodes were diagnosed as presenting with either pure depression, mixed depression (depression plus at least three manic symptoms), full mixed state (full depression and full mania), mixed mania (mania plus at least three depressive symptoms) or pure mania, using an adapted version of the Mini International Neuropsychiatric Interview (DSM-IV version). They were evaluated using a 33-item inventory of depressive, manic and mixed affective signs and symptoms. Principal component analysis without rotation yielded three components that together explained 43.6% of the variance. The first component (24.3% of the variance) contrasted typical depressive symptoms with typical euphoric, manic symptoms. The second component, labeled 'dysphoria', (13.8%) had strong positive loadings for irritability, distressing sensitivity to light and noise, impulsivity and inner tension. The third component (5.5%) included symptoms of insomnia. Median scores for the first component significantly decreased from the pure depression group to the pure mania group. For the dysphoria component, scores were highest among patients with full mixed states and decreased towards both patients with pure depression and those with pure mania. Principal component analysis revealed that dysphoria represents an important dimension of mixed states.

  3. Clinical Insight Into Latent Variables of Psychiatric Questionnaires for Mood Symptom Self-Assessment

    PubMed Central

    Saunders, Kate; Bilderbeck, Amy; Palmius, Niclas; Goodwin, Guy; De Vos, Maarten

    2017-01-01

    Background We recently described a new questionnaire to monitor mood called mood zoom (MZ). MZ comprises 6 items assessing mood symptoms on a 7-point Likert scale; we had previously used standard principal component analysis (PCA) to tentatively understand its properties, but the presence of multiple nonzero loadings obstructed the interpretation of its latent variables. Objective The aim of this study was to rigorously investigate the internal properties and latent variables of MZ using an algorithmic approach which may lead to more interpretable results than PCA. Additionally, we explored three other widely used psychiatric questionnaires to investigate latent variable structure similarities with MZ: (1) Altman self-rating mania scale (ASRM), assessing mania; (2) quick inventory of depressive symptomatology (QIDS) self-report, assessing depression; and (3) generalized anxiety disorder (7-item) (GAD-7), assessing anxiety. Methods We elicited responses from 131 participants: 48 bipolar disorder (BD), 32 borderline personality disorder (BPD), and 51 healthy controls (HC), collected longitudinally (median [interquartile range, IQR]: 363 [276] days). Participants were requested to complete ASRM, QIDS, and GAD-7 weekly (all 3 questionnaires were completed on the Web) and MZ daily (using a custom-based smartphone app). We applied sparse PCA (SPCA) to determine the latent variables for the four questionnaires, where a small subset of the original items contributes toward each latent variable. Results We found that MZ had great consistency across the three cohorts studied. Three main principal components were derived using SPCA, which can be tentatively interpreted as (1) anxiety and sadness, (2) positive affect, and (3) irritability. The MZ principal component comprising anxiety and sadness explains most of the variance in BD and BPD, whereas the positive affect of MZ explains most of the variance in HC. The latent variables in ASRM were identical for the patient groups but different for HC; nevertheless, the latent variables shared common items across both the patient group and HC. On the contrary, QIDS had overall very different principal components across groups; sleep was a key element in HC and BD but was absent in BPD. In GAD-7, nervousness was the principal component explaining most of the variance in BD and HC. Conclusions This study has important implications for understanding self-reported mood. MZ has a consistent, intuitively interpretable latent variable structure and hence may be a good instrument for generic mood assessment. Irritability appears to be the key distinguishing latent variable between BD and BPD and might be useful for differential diagnosis. Anxiety and sadness are closely interlinked, a finding that might inform treatment effects to jointly address these covarying symptoms. Anxiety and nervousness appear to be amongst the cardinal latent variable symptoms in BD and merit close attention in clinical practice. PMID:28546141
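
    A minimal SPCA sketch is given below using scikit-learn's SparsePCA on hypothetical 6-item responses; zero loadings are what make each latent variable attributable to a small subset of items, unlike standard PCA. The data, item count, and regularization strength are all assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import SparsePCA

    rng = np.random.default_rng(7)
    X = rng.normal(size=(300, 6))            # 300 daily responses x 6 items

    spca = SparsePCA(n_components=3, alpha=1.0, random_state=0).fit(X)
    # Zero entries in components_ mean the item does not contribute to that
    # latent variable, which is what makes the components interpretable.
    print(np.round(spca.components_, 2))
    ```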

  4. Systems Engineering Programmatic Estimation Using Technology Variance

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.

    2000-01-01

    Unique and innovative system programmatic estimation is conducted using the variance of the packaged technologies. Covariance analysis is performed on the subsystems and components comprising the system of interest. Technological "return" and "variation" parameters are estimated. These parameters are combined with the model error to arrive at a measure of system development stability. The resulting estimates provide valuable information concerning the potential cost growth of the system under development.

  5. Applying Rasch model analysis in the development of the Cantonese Tone Identification Test (CANTIT).

    PubMed

    Lee, Kathy Y S; Lam, Joffee H S; Chan, Kit T Y; van Hasselt, Charles Andrew; Tong, Michael C F

    2017-01-01

    We applied Rasch analysis to evaluate the internal structure of a lexical tone perception test known as the Cantonese Tone Identification Test (CANTIT). A 75-item pool (CANTIT-75) with pictures and sound tracks was developed. Respondents were required to make a four-alternative forced choice on each item. A short version of 30 items (CANTIT-30) was developed based on fit statistics, difficulty estimates, and content evaluation. Internal structure was evaluated by fit statistics and Rasch factor analysis (RFA). Two hundred children with normal hearing and 141 children with hearing impairment were recruited. For CANTIT-75, all infit and 97% of outfit values were < 2.0. RFA revealed that 40.1% of the total variance was explained by the Rasch measure; the first residual component explained 2.5% of the total variance, with an eigenvalue of 3.1. For CANTIT-30, all infit and outfit values were < 2.0. The Rasch measure explained 38.8% of the total variance; the first residual component explained 3.9% of the total variance, with an eigenvalue of 1.9. The Rasch model provides excellent guidance for the development of short forms. Both CANTIT-75 and CANTIT-30 possess a satisfactory internal structure as construct validity evidence in measuring the lexical tone identification ability of Cantonese speakers.
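
    The infit and outfit statistics used above can be computed from standardized Rasch residuals, as in the sketch below; abilities and difficulties are simulated here, whereas a real analysis would first estimate them from the response data.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    theta = rng.normal(0.0, 1.0, 300)          # person abilities (simulated)
    b = np.linspace(-2.0, 2.0, 30)             # item difficulties (simulated)

    P = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))  # Rasch P(correct)
    X = (rng.random(P.shape) < P).astype(float)               # responses

    resid = X - P
    W = P * (1.0 - P)                          # model variance per response
    outfit = (resid**2 / W).mean(axis=0)       # unweighted mean-square fit
    infit = (resid**2).sum(axis=0) / W.sum(axis=0)  # information-weighted fit
    print("items with infit or outfit >= 2.0:",
          int(np.sum((infit >= 2.0) | (outfit >= 2.0))))
    ```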

  6. Minimum number of measurements for evaluating soursop (Annona muricata L.) yield.

    PubMed

    Sánchez, C F B; Teodoro, P E; Londoño, S; Silva, L A; Peixoto, L A; Bhering, L L

    2017-05-31

    Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of soursop (Annona muricata L.) genotypes based on fruit yield. Sixteen measurements of fruit yield from 71 soursop genotypes were carried out between 2000 and 2016. In order to estimate r with the best accuracy, four procedures were used: analysis of variance, principal component analysis based on the correlation matrix, principal component analysis based on the phenotypic variance and covariance matrix, and structural analysis based on the correlation matrix. The minimum number of measurements needed to predict the actual value of individuals was estimated. Principal component analysis using the phenotypic variance and covariance matrix provided the most accurate estimates of both r and the number of measurements required for accurate evaluation of fruit yield in soursop. Our results indicate that selection of soursop genotypes with high fruit yield can be performed based on the third and fourth measurements in the early years and/or based on the eighth and ninth measurements at more advanced stages.
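
    One of the procedures above, the analysis-of-variance estimate of r, together with the usual projection of the number of measurements needed for a target reliability R, is sketched below on synthetic yield data; the formula m = R(1 − r)/(r(1 − R)) is the standard Spearman-Brown-type result, not something specific to this paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    g, k = 71, 16                                     # genotypes x measurements
    geno = rng.normal(0.0, 1.0, (g, 1))               # genotype effects
    Y = 10.0 + geno + rng.normal(0.0, 1.5, (g, k))    # synthetic yields

    ms_g = k * Y.mean(axis=1).var(ddof=1)             # between-genotype mean square
    ms_e = ((Y - Y.mean(axis=1, keepdims=True)) ** 2).sum() / (g * (k - 1))
    r = (ms_g - ms_e) / (ms_g + (k - 1) * ms_e)       # repeatability coefficient

    R_target = 0.90
    m_min = R_target * (1 - r) / (r * (1 - R_target))
    print(f"r = {r:.2f}; measurements needed for R = 0.90: {np.ceil(m_min):.0f}")
    ```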

  7. Repeatable source, site, and path effects on the standard deviation for empirical ground-motion prediction models

    USGS Publications Warehouse

    Lin, P.-S.; Chiou, B.; Abrahamson, N.; Walling, M.; Lee, C.-T.; Cheng, C.-T.

    2011-01-01

    In this study, we quantify the reduction in the standard deviation for empirical ground-motion prediction models obtained by removing the ergodic assumption. We partition the modeling error (residual) into five components, three of which represent the repeatable source-location-specific, site-specific, and path-specific deviations from the population mean. A variance estimation procedure for these error components is developed for use with a set of recordings from earthquakes not heavily clustered in space. With most source locations and propagation paths sampled only once, we opt to exploit the spatial correlation of residuals to estimate the variances associated with the path-specific and the source-location-specific deviations. The estimation procedure is applied to ground-motion amplitudes from 64 shallow earthquakes in Taiwan recorded at 285 sites with at least 10 recordings per site. The estimated variance components are used to quantify the reduction in aleatory variability that can be used in hazard analysis for a single site and for a single path. For peak ground acceleration and spectral accelerations at periods of 0.1, 0.3, 0.5, 1.0, and 3.0 s, we find that the single-site standard deviations are 9%-14% smaller than the total standard deviation, whereas the single-path standard deviations are 39%-47% smaller.
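
    The single-site reduction can be illustrated by estimating a repeatable site term as each site's mean residual and removing it, as in the synthetic sketch below; the full study instead estimates five error components jointly, so this is only the simplest special case.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)
    n_sites, n_rec = 285, 10
    site = rng.normal(0.0, 0.2, (n_sites, 1))              # repeatable site terms
    resid = site + rng.normal(0.0, 0.5, (n_sites, n_rec))  # total residuals

    sigma_total = resid.std(ddof=1)
    site_hat = resid.mean(axis=1, keepdims=True)           # estimated site terms
    sigma_ss = (resid - site_hat).std(ddof=1)              # single-site scatter
    print(f"total sigma {sigma_total:.3f} -> single-site {sigma_ss:.3f} "
          f"({100 * (1 - sigma_ss / sigma_total):.0f}% reduction)")
    ```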

  8. Analysis and interpretation of satellite fragmentation data

    NASA Technical Reports Server (NTRS)

    Tan, Arjun

    1987-01-01

    The velocity perturbations of the fragments of a satellite can shed valuable information regarding the nature and intensity of the fragmentation. A feasibility study on calculating the velocity perturbations from existing equations was carried out by analyzing 23 major documented fragmentation events. It was found that whereas the calculated values of the radial components of the velocity change were often unusually high, those in the two other orthogonal directions were mostly reasonable. Since the uncertainties in the radial component necessarily translate into uncertainties in the total velocity change, it is suggested that alternative expressions for the radial component of velocity be sought for the purpose of determining the cause of the fragmentation from the total velocity change. The calculated variances in the velocity perturbations in the two directions orthogonal to the radial vector indicate that they have the smallest values for collision induced breakups and the largest values for low-intensity explosion induced breakups. The corresponding variances for high-intensity explosion induced breakups generally have values intermediate between those of the two extreme categories. A three-dimensional plot of the variances in the two orthogonal velocity perturbations and the plane change angle shows a clear separation between the three major types of breakups. This information is used to reclassify a number of satellite fragmentation events of unknown category.

  9. A precipitation regionalization and regime for Iran based on multivariate analysis

    NASA Astrophysics Data System (ADS)

    Raziei, Tayeb

    2018-02-01

    Monthly precipitation time series of 155 synoptic stations distributed over Iran, covering the 1990-2014 time period, were used to identify areas with different precipitation time variability and regimes, utilizing S-mode principal component analysis (PCA) and cluster analysis (CA) preceded by T-mode PCA, respectively. Taking into account the maximum loading values of the rotated components, the first approach revealed five sub-regions characterized by different precipitation time variability, while the second method delineated eight sub-regions featuring different precipitation regimes. The sub-regions identified by the two methods, although partly overlapping, differ in their areal extent and complement each other, as they are useful for different purposes and applications. Northwestern Iran and the Caspian Sea area were found to be the two most distinctive Iranian precipitation sub-regions considering both time variability and precipitation regime, since they were well captured, with relatively identical areas, by the two approaches. However, the areal extents of the other three sub-regions identified by the first approach did not coincide with the coverage of their counterpart sub-regions defined by the second approach. Results suggest that the precipitation sub-regions identified by the two methods are not necessarily the same, as the first method, which accounts for the variance of the data, groups stations with similar temporal variability, while the second, which considers a fixed climatology defined by the average over the period 1990-2014, clusters stations having a similar march of monthly precipitation.

  10. Comparing the Performance of Approaches for Testing the Homogeneity of Variance Assumption in One-Factor ANOVA Models

    ERIC Educational Resources Information Center

    Wang, Yan; Rodríguez de Gil, Patricia; Chen, Yi-Hsin; Kromrey, Jeffrey D.; Kim, Eun Sook; Pham, Thanh; Nguyen, Diep; Romano, Jeanine L.

    2017-01-01

    Various tests to check the homogeneity of variance assumption have been proposed in the literature, yet there is no consensus as to their robustness when the assumption of normality does not hold. This simulation study evaluated the performance of 14 tests for the homogeneity of variance assumption in one-way ANOVA models in terms of Type I error…
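
    As a concrete example of the kinds of tests such simulations compare, the sketch below runs Bartlett's test and two Levene variants (mean- and median-centered, the latter often called Brown-Forsythe) on the same heavy-tailed synthetic groups; group sizes, scales, and distributions are assumptions.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)
    # Heavy-tailed groups (t with 3 df); the third group has a larger scale.
    groups = [rng.standard_t(df=3, size=50) * s for s in (1.0, 1.0, 1.5)]

    tests = [("Bartlett", stats.bartlett),
             ("Levene (mean)", lambda *g: stats.levene(*g, center="mean")),
             ("Levene (median)", lambda *g: stats.levene(*g, center="median"))]
    for name, test in tests:
        stat, p = test(*groups)
        print(f"{name:16s} stat={stat:6.2f}  p={p:.3f}")
    ```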

  11. Assessing differential gene expression with small sample sizes in oligonucleotide arrays using a mean-variance model.

    PubMed

    Hu, Jianhua; Wright, Fred A

    2007-03-01

    The identification of the genes that are differentially expressed in two-sample microarray experiments remains a difficult problem when the number of arrays is very small. We discuss the implications of using ordinary t-statistics and examine other commonly used variants. For oligonucleotide arrays with multiple probes per gene, we introduce a simple model relating the mean and variance of expression, possibly with gene-specific random effects. Parameter estimates from the model have natural shrinkage properties that guard against inappropriately small variance estimates, and the model is used to obtain a differential expression statistic. A limiting value to the positive false discovery rate (pFDR) for ordinary t-tests provides motivation for our use of the data structure to improve variance estimates. Our approach performs well compared to other proposed approaches in terms of the false discovery rate.

  12. Comparing development of synaptic proteins in rat visual, somatosensory, and frontal cortex.

    PubMed

    Pinto, Joshua G A; Jones, David G; Murphy, Kathryn M

    2013-01-01

    Two theories have influenced our understanding of cortical development: the integrated network theory, where synaptic development is coordinated across areas; and the cascade theory, where the cortex develops in a wave-like manner from sensory to non-sensory areas. These different views on cortical development raise challenges for current studies aimed at comparing detailed maturation of the connectome among cortical areas. We have taken a different approach to compare synaptic development in rat visual, somatosensory, and frontal cortex by measuring expression of pre-synaptic (synapsin and synaptophysin) proteins that regulate vesicle cycling, and post-synaptic density (PSD-95 and Gephyrin) proteins that anchor excitatory or inhibitory (E-I) receptors. We also compared development of the balances between the pairs of pre- or post-synaptic proteins, and the overall pre- to post-synaptic balance, to address functional maturation and emergence of the E-I balance. We found that development of the individual proteins and the post-synaptic index overlapped among the three cortical areas, but the pre-synaptic index matured later in frontal cortex. Finally, we applied a neuroinformatics approach using principal component analysis and found that three components captured development of the synaptic proteins. The first component accounted for 64% of the variance in protein expression and reflected total protein expression, which overlapped among the three cortical areas. The second component was gephyrin and the E-I balance, it emerged as sequential waves starting in somatosensory, then frontal, and finally visual cortex. The third component was the balance between pre- and post-synaptic proteins, and this followed a different developmental trajectory in somatosensory cortex. Together, these results give the most support to an integrated network of synaptic development, but also highlight more complex patterns of development that vary in timing and end point among the cortical areas.

  13. The validity of a behavioural multiple-mini-interview within an assessment centre for selection into specialty training

    PubMed Central

    2014-01-01

    Background: Entry into specialty training was determined by a National Assessment Centre (NAC) approach using a combination of a behavioural Multiple-Mini-Interview (MMI) and a written Situational Judgement Test (SJT). We wanted to know whether interviewers could make reliable and valid decisions about the non-cognitive characteristics of candidates using the MMI, with the purpose of selecting them into general practice specialty training. Second, we explored the concurrent validity of the MMI with the SJT. Methods: A variance components analysis estimated the reliability and sources of measurement error. Further modelling estimated the optimal configurations for future MMI iterations. We calculated the relationship of the MMI with the SJT. Results: Data were available from 1382 candidates, 254 interviewers, six MMI questions, five alternate forms of a 50-item SJT, and 11 assessment centres. For a single MMI question and one assessor, 28% of the variance between scores was due to candidate-to-candidate variation. Interviewer subjectivity, in particular the varying views that interviewers had of particular candidates, accounted for 40% of the variance in scores. The generalisability coefficient for a six-question MMI was 0.7; to achieve 0.8 would require ten questions. A disattenuated correlation with the SJT (r = 0.35), and in particular a raw score correlation with the subdomain related to clinical knowledge (r = 0.25), demonstrated evidence for construct and concurrent validity. Less than two per cent of candidates would have failed the MMI. Conclusion: The MMI is a moderately reliable method of assessment in the context of a National Assessment Centre approach. The largest source of error relates to aspects of interviewer subjectivity, suggesting that enhanced interviewer training would be beneficial. MMIs need to be sufficiently long for precise comparison for ranking purposes. In order to justify long-term sustainable use of the MMI in a postgraduate assessment centre approach, more theoretical work is required to understand how written and performance-based tests of non-cognitive attributes can be combined in a way that achieves acceptable generalisability and has validity. PMID:25123968
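
    The reported projection from six to ten questions follows directly from the variance proportions via a generalisability (Spearman-Brown-type) calculation, sketched below using the 28% candidate variance from the abstract and treating all remaining variance as error that averages over independent questions.

    ```python
    # Variance proportions per single MMI question (from the abstract): 28%
    # candidate variance; the remaining 72% is treated here as error that
    # averages over independent questions/interviewers. Illustrative only.
    var_candidate, var_error = 0.28, 0.72

    def g_coefficient(n_questions: int) -> float:
        return var_candidate / (var_candidate + var_error / n_questions)

    for n in (1, 6, 10):
        print(f"{n:2d} questions: G = {g_coefficient(n):.2f}")
    # Six questions give G ~ 0.70 and ten give G ~ 0.80, matching the abstract.
    ```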

  14. Understanding the relative role of dispersion mechanisms across basin scales

    NASA Astrophysics Data System (ADS)

    Di Lazzaro, M.; Zarlenga, A.; Volpi, E.

    2016-05-01

    Different mechanisms are understood to represent the primary sources of the variance of travel time distribution in natural catchments. To quantify the fraction of variance introduced by each component, dispersion coefficients have been earlier defined in the framework of geomorphology-based rainfall-runoff models. In this paper we compare over a wide range of basin sizes and for a variety of runoff conditions the relative role of geomorphological dispersion, related to the heterogeneity of path lengths, and hillslope kinematic dispersion, generated by flow processes within the hillslopes. Unlike previous works, our approach does not focus on a specific study case; instead, we try to generalize results already obtained in previous literature stemming from the definition of a few significant parameters related to the metrics of the catchment and flow dynamics. We further extend this conceptual framework considering the effects of two additional variance-producing processes: the first covers the random variability of hillslope velocities (i.e. of travel times over hillslopes); the second deals with non-uniform production of runoff over the basin (specifically related to drainage density). Results are useful to clarify the role of hillslope kinematic dispersion and define under which conditions it counteracts or reinforces geomorphological dispersion. We show how its sign is ruled by the specific spatial distribution of hillslope lengths within the basin, as well as by flow conditions. Interestingly, while negative in a wide range of cases, kinematic dispersion is expected to become invariantly positive when the variability of hillslope velocity is large.

  15. Principal component analysis of Mn(salen) catalysts.

    PubMed

    Teixeira, Filipe; Mosquera, Ricardo A; Melo, André; Freire, Cristina; Cordeiro, M Natália D S

    2014-12-14

    The theoretical study of Mn(salen) catalysts has been traditionally performed under the assumption that Mn(acacen') (acacen' = 3,3'-(ethane-1,2-diylbis(azanylylidene))bis(prop-1-en-olate)) is an appropriate surrogate for the larger Mn(salen) complexes. In this work, the geometry and the electronic structure of several Mn(salen) and Mn(acacen') model complexes were studied using Density Functional Theory (DFT) at diverse levels of approximation, with the aim of understanding the effects of truncation, metal oxidation, axial coordination, substitution on the aromatic rings of the salen ligand and chirality of the diimine bridge, as well as the choice of the density functional and basis set. To achieve this goal, geometric and structural data, obtained from these calculations, were subjected to Principal Component Analysis (PCA) and PCA with orthogonal rotation of the components (rPCA). The results show the choice of basis set to be of paramount importance, accounting for up to 30% of the variance in the data, while the differences between salen and acacen' complexes account for about 9% of the variance in the data, and are mostly related to the conformation of the salen/acacen' ligand around the metal centre. Variations in the spin state and oxidation state of the metal centre also account for large fractions of the total variance (up to 10% and 9%, respectively). Other effects, such as the nature of the diimine bridge or the presence of an alkyl substituent in the 3,3 and 5,5 positions of the aldehyde moiety, were found to be less important in terms of explaining the variance within the data set. A matrix of discriminants was compiled using the loadings of the principal and rotated components that best performed in the classification of the entries in the data. The scores obtained from its application to the data set were used as independent variables for devising linear models of different properties, with satisfactory prediction capabilities.

  16. Construct validity of the abbreviated mental test in older medical inpatients.

    PubMed

    Antonelli Incalzi, R; Cesari, M; Pedone, C; Carosella, L; Carbonin, P U

    2003-01-01

    To evaluate validity and internal structure of the Abbreviated Mental Test (AMT), and to assess the dependence of the internal structure upon the characteristics of the patients examined. Cross-sectional examination using data from the Italian Group of Pharmacoepidemiology in the Elderly (GIFA) database. Twenty-four acute care wards of Geriatrics or General Medicine. Two thousand eight hundred and eight patients consecutively admitted over a 4-month period. Demographic characteristics, functional status, medical conditions and performance on AMT were collected at discharge. Sensitivity, specificity and predictive values of the AMT <7 versus a diagnosis of dementia made according to DSM-III-R criteria were computed. The internal structure of AMT was assessed by principal component analysis. The analysis was performed on the whole population and stratified for age (<65, 65-80 and >80 years), gender, education (<6 or >5 years) and presence of congestive heart failure (CHF). AMT achieved high sensitivity (81%), specificity (84%) and negative predictive value (99%), but a low positive predictive value of 25%. The principal component analysis isolated two components: the former component represents the orientation to time and space and explains 45% of AMT variance; the latter is linked to memory and attention and explains 13% of variance. Comparable results were obtained after stratification by age, gender or education. In patients with CHF, only 48.3% of the cumulative variance was explained; the factor accounting for most (34.6%) of the variance explained was mainly related to the three items assessing memory. AMT >6 rules out dementia very reliably, whereas AMT <7 requires a second level cognitive assessment to confirm dementia. AMT is bidimensional and maintains the same internal structure across classes defined by selected social and demographic characteristics, but not in CHF patients. It is likely that its internal structure depends on the type of patients. The use of a sum-score could conceal some part of the information provided by the AMT. Copyright 2003 S. Karger AG, Basel

  17. Maximum Likelihood and Minimum Distance Applied to Univariate Mixture Distributions.

    ERIC Educational Resources Information Center

    Wang, Yuh-Yin Wu; Schafer, William D.

    This Monte-Carlo study compared modified Newton (NW), expectation-maximization algorithm (EM), and minimum Cramer-von Mises distance (MD), used to estimate parameters of univariate mixtures of two components. Data sets were fixed at size 160 and manipulated by mean separation, variance ratio, component proportion, and non-normality. Results…

  18. Local distributions of wealth to describe health inequalities in India: a new approach for analyzing nationally representative household survey data, 1992-2008.

    PubMed

    Bassani, Diego G; Corsi, Daniel J; Gaffey, Michelle F; Barros, Aluisio J D

    2014-01-01

    Worse health outcomes including higher morbidity and mortality are most often observed among the poorest fractions of a population. In this paper we present and validate national, regional and state-level distributions of national wealth index scores, for urban and rural populations, derived from household asset data collected in six survey rounds in India between 1992-3 and 2007-8. These new indices and their sub-national distributions allow for comparative analyses of a standardized measure of wealth across time and at various levels of population aggregation in India. Indices were derived through principal components analysis (PCA) performed using standardized variables from a correlation matrix to minimize differences in variance. Valid and simple indices were constructed with the minimum number of assets needed to produce scores with enough variability to allow definition of unique decile cut-off points in each urban and rural area of all states. For all indices, the first PCA components explained between 36% and 43% of the variance in household assets. Using sub-national distributions of national wealth index scores, mean height-for-age z-scores increased from the poorest to the richest wealth quintiles for all surveys, and stunting prevalence was higher among the poorest and lower among the wealthiest. Urban and rural decile cut-off values for India, for the six regions and for the 24 major states revealed large variability in wealth by geographical area and level, and rural wealth score gaps exceeded those observed in urban areas. The large variability in sub-national distributions of national wealth index scores indicates the importance of accounting for such variation when constructing wealth indices and deriving score distribution cut-off points. Such an approach allows for proper within-sample economic classification, resulting in scores that are valid indicators of wealth and correlate well with health outcomes, and enables wealth-related analyses at whichever geographical area and level may be most informative for policy-making processes.

  19. Within-Tunnel Variations in Pressure Data for Three Transonic Wind Tunnels

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard

    2014-01-01

    This paper compares the results of pressure measurements made on the same test article with the same test matrix in three transonic wind tunnels. A comparison is presented of the unexplained variance associated with polar replicates acquired in each tunnel. The impact of a significant component of systematic (not random) unexplained variance is reviewed, and the results of analyses of variance are presented to assess the degree of significant systematic error in these representative wind tunnel tests. Total uncertainty estimates are reported for 140 samples of pressure data, quantifying the effects of within-polar random errors and between-polar systematic bias errors.

  20. A General Definition of the Heritable Variation That Determines the Potential of a Population to Respond to Selection

    PubMed Central

    Bijma, Piter

    2011-01-01

    Genetic selection is a major force shaping life on earth. In classical genetic theory, response to selection is the product of the strength of selection and the additive genetic variance in a trait. The additive genetic variance reflects a population’s intrinsic potential to respond to selection. The ordinary additive genetic variance, however, ignores the social organization of life. With social interactions among individuals, individual trait values may depend on genes in others, a phenomenon known as indirect genetic effects. Models accounting for indirect genetic effects, however, lack a general definition of heritable variation. Here I propose a general definition of the heritable variation that determines the potential of a population to respond to selection. This generalizes the concept of heritable variance to any inheritance model and level of organization. The result shows that heritable variance determining potential response to selection is the variance among individuals in the heritable quantity that determines the population mean trait value, rather than the usual additive genetic component of phenotypic variance. It follows, therefore, that heritable variance may exceed phenotypic variance among individuals, which is impossible in classical theory. This work also provides a measure of the utilization of heritable variation for response to selection and integrates two well-known models of maternal genetic effects. The result shows that relatedness between the focal individual and the individuals affecting its fitness is a key determinant of the utilization of heritable variance for response to selection. PMID:21926298

  1. A general definition of the heritable variation that determines the potential of a population to respond to selection.

    PubMed

    Bijma, Piter

    2011-12-01

    Genetic selection is a major force shaping life on earth. In classical genetic theory, response to selection is the product of the strength of selection and the additive genetic variance in a trait. The additive genetic variance reflects a population's intrinsic potential to respond to selection. The ordinary additive genetic variance, however, ignores the social organization of life. With social interactions among individuals, individual trait values may depend on genes in others, a phenomenon known as indirect genetic effects. Models accounting for indirect genetic effects, however, lack a general definition of heritable variation. Here I propose a general definition of the heritable variation that determines the potential of a population to respond to selection. This generalizes the concept of heritable variance to any inheritance model and level of organization. The result shows that heritable variance determining potential response to selection is the variance among individuals in the heritable quantity that determines the population mean trait value, rather than the usual additive genetic component of phenotypic variance. It follows, therefore, that heritable variance may exceed phenotypic variance among individuals, which is impossible in classical theory. This work also provides a measure of the utilization of heritable variation for response to selection and integrates two well-known models of maternal genetic effects. The result shows that relatedness between the focal individual and the individuals affecting its fitness is a key determinant of the utilization of heritable variance for response to selection.

  2. Incorporating Love- and Rayleigh-wave magnitudes, unequal earthquake and explosion variance assumptions and interstation complexity for improved event screening

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Dale N; Bonner, Jessie L; Stroujkova, Anastasia

Our objective is to improve seismic event screening using the properties of surface waves. We are accomplishing this through (1) the development of a Love-wave magnitude formula that is complementary to the Russell (2006) formula for Rayleigh waves and (2) quantifying differences in complexities and magnitude variances for earthquake- and explosion-generated surface waves. We have applied the M_s(VMAX) analysis (Bonner et al., 2006) using both Love and Rayleigh waves to events in the Middle East and the Korean Peninsula. For the Middle East dataset, consisting of approximately 100 events, the Love-wave M_s(VMAX) is greater than the Rayleigh-wave M_s(VMAX) estimated for individual stations for the majority of the events and azimuths, with the exception of the measurements for the smaller events from European stations to the northeast. It is unclear whether these smaller events suffer from magnitude bias for the Love waves or whether the paths, which include the Caspian and Mediterranean, have variable attenuation for Love and Rayleigh waves. For the Korean Peninsula, we have estimated Rayleigh- and Love-wave magnitudes for 31 earthquakes and two nuclear explosions, including the 25 May 2009 event. For 25 of the earthquakes, the network-averaged Love-wave magnitude is larger than the Rayleigh-wave estimate. For the 2009 nuclear explosion, the Love-wave M_s(VMAX) was 3.1 while the Rayleigh-wave magnitude was 3.6. We are also utilizing the potential of observed variances in M_s estimates that differ significantly between earthquake and explosion populations. We have considered two possible methods for incorporating unequal variances into the discrimination problem and compared the performance of various approaches on a population of 73 western United States earthquakes and 131 Nevada Test Site explosions. The first approach replaces the M_s component with M_s + a*sigma, where sigma denotes the interstation standard deviation obtained from the stations in the sample that produced the M_s value; the usual linear discriminant a*M_s + b*m_b becomes a*M_s + b*m_b + c*sigma. In the second approach, we estimate the optimum hybrid linear-quadratic discriminant function resulting from the unequal-variance assumption. We observed slight improvement for the discriminant functions resulting from the theoretical interpretations of the unequal-variance assumption. We have also studied the complexity of the "magnitude spectra" at each station. Our hypothesis is that explosion spectra should have fewer focal-mechanism-produced complexities in the magnitude spectra than earthquakes. We have developed an intrastation "complexity" metric delta-M_s, where delta-M_s(i) = M_s(i) - M_s(i+1) at periods i between 9 and 25 seconds. The complexity by itself has discriminating power but does not add substantially to the conditional hybrid discriminant that incorporates the differing spreads of the earthquake and explosion standard deviations.
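
    A hedged sketch of the discriminant idea on synthetic features (the class means and spreads below are invented for illustration; only the population sizes come from the abstract): a linear discriminant over (M_s, m_b, sigma) corresponds to a*M_s + b*m_b + c*sigma, while QDA stands in for the hybrid linear-quadratic rule implied by unequal variances.

      import numpy as np
      from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                                 QuadraticDiscriminantAnalysis)

      rng = np.random.default_rng(0)
      n_eq, n_ex = 73, 131                 # population sizes from the abstract
      eq = np.column_stack([rng.normal(4.0, 0.4, n_eq),     # Ms
                            rng.normal(4.0, 0.3, n_eq),     # mb
                            rng.normal(0.25, 0.08, n_eq)])  # interstation sigma
      ex = np.column_stack([rng.normal(3.3, 0.4, n_ex),
                            rng.normal(4.2, 0.3, n_ex),
                            rng.normal(0.12, 0.04, n_ex)])
      X = np.vstack([eq, ex])
      y = np.array([1] * n_eq + [0] * n_ex)    # 1 = earthquake, 0 = explosion

      lda = LinearDiscriminantAnalysis().fit(X, y)     # linear rule a*Ms + b*mb + c*sigma
      qda = QuadraticDiscriminantAnalysis().fit(X, y)  # relaxes the equal-variance assumption
      print("LDA accuracy:", lda.score(X, y), "QDA accuracy:", qda.score(X, y))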

  3. An Initial Study of the Sensitivity of Aircraft Vortex Spacing System (AVOSS) Spacing Sensitivity to Weather and Configuration Input Parameters

    NASA Technical Reports Server (NTRS)

    Riddick, Stephen E.; Hinton, David A.

    2000-01-01

    A study has been performed on a computer code modeling an aircraft wake vortex spacing system during final approach. This code represents an initial engineering model of a system to calculate reduced approach separation criteria needed to increase airport productivity. This report evaluates model sensitivity toward various weather conditions (crosswind, crosswind variance, turbulent kinetic energy, and thermal gradient), code configurations (approach corridor option, and wake demise definition), and post-processing techniques (rounding of provided spacing values, and controller time variance).

  4. Empirical single sample quantification of bias and variance in Q-ball imaging.

    PubMed

    Hainline, Allison E; Nath, Vishwesh; Parvathaneni, Prasanna; Blaber, Justin A; Schilling, Kurt G; Anderson, Adam W; Kang, Hakmook; Landman, Bennett A

    2018-02-06

The bias and variance of high angular resolution diffusion imaging (HARDI) methods have not been thoroughly explored in the literature; the simulation extrapolation (SIMEX) and bootstrap techniques offer a principled way to estimate them. The SIMEX approach is well established in the statistics literature and uses simulation of increasingly noisy data to extrapolate back to a hypothetical case with no noise. The bias of calculated metrics can then be computed by subtracting the SIMEX estimate from the original pointwise measurement. The SIMEX technique has previously been used in diffusion imaging to accurately capture the bias in fractional anisotropy measurements in DTI. Herein, we extend the application of SIMEX and bootstrap approaches to characterize bias and variance in metrics obtained from a Q-ball imaging reconstruction of HARDI data. The results demonstrate that SIMEX and bootstrap approaches provide consistent estimates of the bias and variance of generalized fractional anisotropy, respectively. The RMSE for the generalized fractional anisotropy estimates shows a 7% decrease in white matter and an 8% decrease in gray matter when compared with the observed generalized fractional anisotropy estimates. On average, the bootstrap technique results in SD estimates that are approximately 97% of the true variation in white matter, and 86% in gray matter. Both SIMEX and bootstrap methods are flexible, estimate population characteristics based on single scans, and may be extended for bias and variance estimation on a variety of HARDI metrics. © 2018 International Society for Magnetic Resonance in Medicine.
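
    A minimal SIMEX sketch on a toy scalar metric, assuming a known Gaussian noise level and a quadratic extrapolant (the paper applies the idea to Q-ball metrics such as generalized fractional anisotropy, not to this toy statistic):

      import numpy as np

      rng = np.random.default_rng(1)
      sigma = 0.1                                   # assumed known measurement-noise SD
      x_obs = 1.0 + rng.normal(0.0, sigma, 500)     # noisy observations of true value 1
      metric = lambda x: np.mean(x ** 2)            # nonlinear metric, biased under noise

      # SIMEX: re-add noise at levels lambda, then extrapolate the trend to lambda = -1
      lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
      sim_vals = [np.mean([metric(x_obs + rng.normal(0.0, np.sqrt(lam) * sigma, 500))
                           for _ in range(200)]) for lam in lambdas]
      coef = np.polyfit(lambdas, sim_vals, 2)       # quadratic extrapolant
      simex_est = np.polyval(coef, -1.0)            # hypothetical noise-free value
      print(f"naive = {metric(x_obs):.4f}, SIMEX = {simex_est:.4f}, "
            f"bias estimate = {metric(x_obs) - simex_est:.4f}")   # ~ sigma**2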

  5. A Hybrid Model for Research on Subjective Well-Being: Examining Common- and Component-Specific Sources of Variance in Life Satisfaction, Positive Affect, and Negative Affect

    ERIC Educational Resources Information Center

    Busseri, Michael; Sadava, Stanley; DeCourville, Nancy

    2007-01-01

    The primary components of subjective well-being (SWB) include life satisfaction (LS), positive affect (PA), and negative affect (NA). There is little consensus, however, concerning how these components form a model of SWB. In this paper, six longitudinal studies varying in demographic characteristics, length of time between assessment periods,…

  6. Automatic segmentation of colon glands using object-graphs.

    PubMed

    Gunduz-Demir, Cigdem; Kandemir, Melih; Tosun, Akif Burak; Sokmensuer, Cenk

    2010-02-01

Gland segmentation is an important step in automating the analysis of biopsies that contain glandular structures. However, this remains a challenging problem, as variation in staining, fixation, and sectioning procedures leads to a considerable amount of artifacts and variance in tissue sections, which may result in huge variance in gland appearance. In this work, we report a new approach for gland segmentation. This approach decomposes the tissue image into a set of primitive objects and segments glands by making use of the organizational properties of these objects, which are quantified through the definition of object-graphs. As opposed to the previous literature, the proposed approach employs object-based information for the gland segmentation problem, instead of using pixel-based information alone. Working with images of colon tissues, our experiments demonstrate that the proposed object-graph approach yields high segmentation accuracy for the training and test sets and significantly improves the segmentation performance of its pixel-based counterparts. The experiments also show that the object-based structure of the proposed approach provides more tolerance to artifacts and variances in tissues.

  7. Efficient principal component analysis for multivariate 3D voxel-based mapping of brain functional imaging data sets as applied to FDG-PET and normal aging.

    PubMed

    Zuendorf, Gerhard; Kerrouche, Nacer; Herholz, Karl; Baron, Jean-Claude

    2003-01-01

    Principal component analysis (PCA) is a well-known technique for reduction of dimensionality of functional imaging data. PCA can be looked at as the projection of the original images onto a new orthogonal coordinate system with lower dimensions. The new axes explain the variance in the images in decreasing order of importance, showing correlations between brain regions. We used an efficient, stable and analytical method to work out the PCA of Positron Emission Tomography (PET) images of 74 normal subjects using [(18)F]fluoro-2-deoxy-D-glucose (FDG) as a tracer. Principal components (PCs) and their relation to age effects were investigated. Correlations between the projections of the images on the new axes and the age of the subjects were carried out. The first two PCs could be identified as being the only PCs significantly correlated to age. The first principal component, which explained 10% of the data set variance, was reduced only in subjects of age 55 or older and was related to loss of signal in and adjacent to ventricles and basal cisterns, reflecting expected age-related brain atrophy with enlarging CSF spaces. The second principal component, which accounted for 8% of the total variance, had high loadings from prefrontal, posterior parietal and posterior cingulate cortices and showed the strongest correlation with age (r = -0.56), entirely consistent with previously documented age-related declines in brain glucose utilization. Thus, our method showed that the effect of aging on brain metabolism has at least two independent dimensions. This method should have widespread applications in multivariate analysis of brain functional images. Copyright 2002 Wiley-Liss, Inc.

  8. Optimizing occupational exposure measurement strategies when estimating the log-scale arithmetic mean value--an example from the reinforced plastics industry.

    PubMed

    Lampa, Erik G; Nilsson, Leif; Liljelind, Ingrid E; Bergdahl, Ingvar A

    2006-06-01

    When assessing occupational exposures, repeated measurements are in most cases required. Repeated measurements are more resource intensive than a single measurement, so careful planning of the measurement strategy is necessary to assure that resources are spent wisely. The optimal strategy depends on the objectives of the measurements. Here, two different models of random effects analysis of variance (ANOVA) are proposed for the optimization of measurement strategies by the minimization of the variance of the estimated log-transformed arithmetic mean value of a worker group, i.e. the strategies are optimized for precise estimation of that value. The first model is a one-way random effects ANOVA model. For that model it is shown that the best precision in the estimated mean value is always obtained by including as many workers as possible in the sample while restricting the number of replicates to two or at most three regardless of the size of the variance components. The second model introduces the 'shared temporal variation' which accounts for those random temporal fluctuations of the exposure that the workers have in common. It is shown for that model that the optimal sample allocation depends on the relative sizes of the between-worker component and the shared temporal component, so that if the between-worker component is larger than the shared temporal component more workers should be included in the sample and vice versa. The results are illustrated graphically with an example from the reinforced plastics industry. If there exists a shared temporal variation at a workplace, that variability needs to be accounted for in the sampling design and the more complex model is recommended.
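
    A sketch of the allocation logic for the one-way random-effects model, with assumed variance components and a fixed measurement budget; the variance of the estimated group mean is sigma2_B/k + sigma2_W/(k*n) for k workers with n replicates each:

      # Var(estimated group mean) under the one-way random-effects model:
      #   Var = sigma2_B / k + sigma2_W / (k * n),  k workers, n replicates each
      var_between = 0.8    # between-worker variance (assumption)
      var_within = 1.2     # within-worker day-to-day variance (assumption)
      budget = 24          # total measurements affordable (assumption)

      print(" workers  replicates  Var(mean)")
      for n_reps in (2, 3, 4, 6):
          k = budget // n_reps
          var_mean = var_between / k + var_within / (k * n_reps)
          print(f"{k:8d} {n_reps:11d}  {var_mean:.4f}")
      # Spending the budget on many workers with 2-3 replicates each minimizes
      # Var(mean), matching the paper's conclusion for the one-way model.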

  9. Discontinuity of the annuity curves. III. Two types of vital variability in Drosophila melanogaster.

    PubMed

    Bychkovskaia, I B; Mylnikov, S V; Mozhaev, G A

    2016-01-01

We confirm the five-phase structure of Drosophila annuity curves established earlier. The annuity curves were composed of a stable five-phase component and a variable one, the variable component being due to differences in phase durations. Both the stable and the variable components were apparent for 60 generations. A stochastic component was described as well. Viability variance, which characterizes the «reaction norm», was likewise apparent in all generations. Thus, both types of variability appear to be inherited.

  10. Turbulence Variance Characteristics in the Unstable Atmospheric Boundary Layer above Flat Pine Forest

    NASA Astrophysics Data System (ADS)

    Asanuma, Jun

Variances of the velocity components and scalars are important as indicators of turbulence intensity. They can also be used to estimate surface fluxes in several types of "variance methods", and the estimated fluxes can be regional values if the variances from which they are calculated are regionally representative measurements. With these motivations, variances measured by an aircraft in the unstable ABL over a flat pine forest during HAPEX-Mobilhy were analyzed within the context of similarity scaling arguments. The variances of temperature and vertical velocity within the atmospheric surface layer were found to follow the Monin-Obukhov similarity (MOS) theory closely, and to yield reasonable estimates of the surface sensible heat fluxes when used in variance methods. This validates variance methods based on aircraft measurements. On the other hand, the specific humidity variances were influenced by surface heterogeneity and clearly failed to obey MOS. A simple analysis based on the similarity law for free convection produced a comprehensible and quantitative picture of the effect of surface flux heterogeneity on the statistical moments, and revealed that variances of active and passive scalars become dissimilar because of their different roles in turbulence. The analysis also indicated that the mean quantities are affected by the heterogeneity as well, but to a lesser extent than the variances. The temperature variances in the mixed layer (ML) were examined by using a generalized top-down bottom-up diffusion model with several combinations of velocity scales and inversion flux models. The results showed that the surface shear stress exerts considerable influence on the lower ML. ML variance methods were also tested with the temperature and vertical velocity variances, and their feasibility was investigated. Finally, the variances in the ML were analyzed in terms of the local similarity concept; the results confirmed the original hypothesis of Panofsky and McCormick that local scaling in terms of the local buoyancy flux defines the lower bound of the moments.

  11. Statistical Approaches to Adjusting Weights for Dependent Arms in Network Meta-analysis.

    PubMed

    Su, Yu-Xuan; Tu, Yu-Kang

    2018-05-22

Network meta-analysis compares multiple treatments in terms of their efficacy and harm by including evidence from randomized controlled trials. Most clinical trials use a parallel design, where patients are randomly allocated to different treatments and receive only one treatment. However, some trials use within-person designs such as split-body, split-mouth and cross-over designs, where each patient may receive more than one treatment. Data from treatment arms within these trials are no longer independent, so the correlations between dependent arms need to be accounted for within the statistical analyses; ignoring them may lead to incorrect conclusions. The main objective of this study is to develop statistical approaches to adjusting weights for dependent arms within special-design trials. We demonstrate three approaches: the data augmentation approach, the adjusting-variance approach, and the reducing-weight approach. All three can be implemented in current statistical tools such as R and Stata. An example of periodontal regeneration was used to demonstrate how these approaches can be undertaken and implemented within statistical software packages, and to compare results from the different approaches. The adjusting-variance approach can be implemented within the network package in Stata, while the reducing-weight approach requires computer programming to set up the within-study variance-covariance matrix. This article is protected by copyright. All rights reserved.

  12. Identifiability and Performance Analysis of Output Over-sampling Approach to Direct Closed-loop Identification

    NASA Astrophysics Data System (ADS)

    Sun, Lianming; Sano, Akira

An output over-sampling based closed-loop identification algorithm is investigated in this paper. Some intrinsic properties of the continuous stochastic noise and of the plant input and output under over-sampling are analyzed, and they are used to demonstrate identifiability in the over-sampling approach and to evaluate its identification performance. Furthermore, the selection of the plant model order, the asymptotic variance of the estimated parameters and the asymptotic variance of the frequency response of the estimated model are also explored. The analysis shows that the over-sampling approach can guarantee identifiability and greatly improve the performance of closed-loop identification.

  13. A Bias and Variance Analysis for Multistep-Ahead Time Series Forecasting.

    PubMed

    Ben Taieb, Souhaib; Atiya, Amir F

    2016-01-01

    Multistep-ahead forecasts can either be produced recursively by iterating a one-step-ahead time series model or directly by estimating a separate model for each forecast horizon. In addition, there are other strategies; some of them combine aspects of both aforementioned concepts. In this paper, we present a comprehensive investigation into the bias and variance behavior of multistep-ahead forecasting strategies. We provide a detailed review of the different multistep-ahead strategies. Subsequently, we perform a theoretical study that derives the bias and variance for a number of forecasting strategies. Finally, we conduct a Monte Carlo experimental study that compares and evaluates the bias and variance performance of the different strategies. From the theoretical and the simulation studies, we analyze the effect of different factors, such as the forecast horizon and the time series length, on the bias and variance components, and on the different multistep-ahead strategies. Several lessons are learned, and recommendations are given concerning the advantages, disadvantages, and best conditions of use of each strategy.
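
    A compact sketch contrasting the recursive and direct strategies on a simulated AR(1) series (the model, horizon, and sample size are assumptions for illustration):

      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(2)
      y = np.zeros(500)
      for t in range(1, 500):                  # simulate AR(1): y_t = 0.8 y_{t-1} + eps
          y[t] = 0.8 * y[t - 1] + rng.normal()

      h = 3                                    # forecast horizon (assumption)
      one_step = LinearRegression().fit(y[:-1].reshape(-1, 1), y[1:])   # recursive block
      direct_h = LinearRegression().fit(y[:-h].reshape(-1, 1), y[h:])   # one model per h

      x = np.array([[y[-1]]])
      for _ in range(h):                       # iterate the one-step model h times
          x = one_step.predict(x).reshape(1, 1)
      print("recursive h-step forecast:", x.item())
      print("direct    h-step forecast:", direct_h.predict([[y[-1]]]).item())
      # Recursive: one model, low variance, bias compounds under misspecification.
      # Direct: a model per horizon, less bias, typically higher variance.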

  14. Genome-Assisted Prediction of Quantitative Traits Using the R Package sommer.

    PubMed

    Covarrubias-Pazaran, Giovanny

    2016-01-01

Most traits of agronomic importance are quantitative in nature, and genetic markers have been used for decades to dissect such traits. Recently, genomic selection has earned attention as next generation sequencing technologies became feasible for major and minor crops. Mixed models have become a key tool for fitting genomic selection models, but most current genomic selection software can only include a single variance component other than the error, making hybrid prediction using additive, dominance and epistatic effects unfeasible for species displaying heterotic effects. Moreover, likelihood-based software for fitting mixed models with multiple random effects that allows the user to specify the variance-covariance structure of random effects has not been fully exploited. A new open-source R package called sommer is presented to facilitate the use of mixed models for genomic selection and hybrid prediction purposes, using more than one variance component and allowing specification of covariance structures. The use of sommer for genomic prediction is demonstrated through several examples using maize and wheat genotypic and phenotypic data. At its core, the program contains three algorithms for estimating variance components: average information (AI), expectation-maximization (EM) and efficient mixed model association (EMMA). Kernels for calculating the additive, dominance and epistatic relationship matrices are included, along with other useful functions for genomic analysis. Results from sommer were comparable to those of other software, but the analysis was faster than Bayesian counterparts by a magnitude of hours to days. In addition, the ability to deal with missing data, combined with greater flexibility and speed than other REML-based software, was achieved by putting together some of the most efficient algorithms to fit models in a gentle environment such as R.
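
    sommer itself is an R package; the sketch below is a language-neutral illustration of the core single-variance-component case (not sommer's API): when the variance ratio is known, genomic prediction reduces to GBLUP/ridge regression. The marker matrix and variance values are assumptions.

      import numpy as np

      rng = np.random.default_rng(3)
      n, m = 60, 300                                     # individuals, markers (assumptions)
      M = rng.integers(0, 3, size=(n, m)).astype(float)  # genotypes coded 0/1/2
      M -= M.mean(axis=0)                                # center marker columns
      u_true = rng.normal(0.0, 0.05, m)                  # true marker effects
      y = M @ u_true + rng.normal(0.0, 0.5, n)           # phenotypes

      # GBLUP via the ridge identity u_hat = M'(MM' + lam*I)^(-1) y, where
      # lam = sigma2_e / sigma2_u is assumed known here; packages such as sommer
      # estimate the variance components by REML (AI/EM/EMMA) instead.
      lam = 0.25 / 0.0025                                # variance ratio (assumption)
      u_hat = M.T @ np.linalg.solve(M @ M.T + lam * np.eye(n), y)
      gebv = M @ u_hat                                   # genomic estimated breeding values
      print("corr(true signal, GEBV):", np.corrcoef(M @ u_true, gebv)[0, 1])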

  15. Distributed Sensible Heat Flux Measurements for Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Huwald, H.; Brauchli, T.; Lehning, M.; Higgins, C. W.

    2015-12-01

The sensible heat flux component of the surface energy balance is typically computed using eddy covariance or two-point profile measurements, while alternative approaches such as the flux variance (FV) method based on convective scaling have been much less explored and applied. The FV method certainly has a few limitations and constraints, but it may be an interesting and competitive method in low-cost, power-limited wireless sensor networks (WSN), with the advantage of providing spatio-temporal sensible heat flux over the domain of the network. In a first step, parameters such as sampling frequency, sensor response time, and averaging interval are investigated. We then explore the applicability and potential of the FV method for use in WSN in a field experiment. Low-cost sensor systems are tested and compared against reference instruments (3D sonic anemometers) to evaluate the performance and limitations of the sensors as well as of the method with respect to the standard calculations. Comparison experiments were carried out at several sites to gauge the flux measurements from the low-cost systems over different surface types (gravel, grass, water). This study should also serve as an example of spatially distributed sensible heat flux measurements.
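
    A sketch of the free-convection flux-variance relation often used in such studies, H = rho*cp*sqrt(kappa*g*z/T)*(sigma_T/C1)^(3/2); the constant C1 and the sample numbers are assumptions, and the paper's exact formulation may differ:

      import numpy as np

      # Free-convection flux-variance relation (Tillman-type):
      #   H = rho * cp * sqrt(kappa * g * z / T) * (sigma_T / C1)**1.5
      rho, cp = 1.2, 1005.0        # air density (kg m-3), specific heat (J kg-1 K-1)
      kappa, g = 0.4, 9.81         # von Karman constant, gravity (m s-2)
      z, T_mean = 2.0, 298.0       # sensor height (m), mean air temperature (K)
      C1 = 0.95                    # empirical constant (assumed value)
      sigma_T = 0.35               # measured temperature SD over the interval (K)

      H = rho * cp * np.sqrt(kappa * g * z / T_mean) * (sigma_T / C1) ** 1.5
      print(f"sensible heat flux ~ {H:.0f} W m-2")   # ~44 W m-2 with these numbers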

  16. On decomposing stimulus and response waveforms in event-related potentials recordings.

    PubMed

    Yin, Gang; Zhang, Jun

    2011-06-01

Event-related potentials (ERPs) reflect the brain activities related to specific behavioral events, and are obtained by averaging across many trial repetitions with individual trials aligned to the onset of a specific event, e.g., the onset of the stimulus (s-aligned) or the onset of the behavioral response (r-aligned). However, the s-aligned and r-aligned ERP waveforms do not purely reflect, respectively, the underlying stimulus (S-) or response (R-) component waveform, due to their cross-contamination in the recorded ERP waveforms. Zhang [J. Neurosci. Methods, 80, pp. 49-63, 1998] proposed an algorithm to recover the pure S-component and R-component waveforms from the s-aligned and r-aligned ERP average waveforms. However, due to the nature of this inverse problem, a direct solution is sensitive to noise that disproportionately affects low-frequency components, hindering the practical implementation of this algorithm. Here, we apply the Wiener deconvolution technique to deal with noise in the input data, and investigate a Tikhonov regularization approach to obtain a stable solution that is robust against variance in the sampling of the reaction-time distribution (when the number of trials is low). Our method is demonstrated using data from a Go/NoGo experiment on image classification and recognition.
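
    A generic sketch of the regularized-deconvolution idea (not the exact algorithm of the paper): a Tikhonov term stabilizes the Fourier-domain inverse filter against noise. The kernel, noise level, and regularization parameter are assumptions.

      import numpy as np

      rng = np.random.default_rng(4)
      n = 256
      t = np.arange(n)
      x = np.exp(-0.5 * ((t - 80) / 6.0) ** 2)       # "pure" component waveform
      h = np.exp(-t / 20.0)                          # smearing kernel (e.g. RT spread)
      h /= h.sum()
      y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))
      y += rng.normal(0.0, 0.01, n)                  # observed: smeared and noisy

      H, Y = np.fft.fft(h), np.fft.fft(y)
      alpha = 1e-3                                   # Tikhonov parameter (assumption)
      X_hat = np.conj(H) * Y / (np.abs(H) ** 2 + alpha)   # regularized inverse filter
      x_hat = np.real(np.fft.ifft(X_hat))
      print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
      # alpha = 0 is the unstable direct inverse; larger alpha trades bias for stability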

  17. Finite mixture model: A maximum likelihood estimation approach on time series data

    NASA Astrophysics Data System (ADS)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

Statisticians have recently emphasized fitting finite mixture models by maximum likelihood estimation because of its desirable asymptotic properties: the estimator is consistent as the sample size increases to infinity, is asymptotically unbiased, and attains the smallest variance among comparable estimators in large samples. Maximum likelihood estimation is therefore adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
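
    A minimal sketch of maximum likelihood fitting of a two-component Gaussian mixture via EM, on synthetic data standing in for the price/exchange-rate observations (all numbers are assumptions):

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(5)
      # Synthetic two-regime observations standing in for the real series
      data = np.concatenate([rng.normal(-1.0, 0.5, 300),
                             rng.normal(2.0, 1.0, 700)]).reshape(-1, 1)

      gm = GaussianMixture(n_components=2, random_state=0).fit(data)  # EM under the hood
      print("weights:", gm.weights_.round(3))
      print("means:  ", gm.means_.ravel().round(3))
      print("sds:    ", np.sqrt(gm.covariances_).ravel().round(3))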

  18. Transition of Attention in Terminal Area NextGen Operations Using Synthetic Vision Systems

    NASA Technical Reports Server (NTRS)

    Ellis, Kyle K. E.; Kramer, Lynda J.; Shelton, Kevin J.; Arthur, Shelton, J. J., III; Prinzel, Lance J., III; Norman, Robert M.

    2011-01-01

This experiment investigates the capability of Synthetic Vision Systems (SVS) to provide significant situation awareness in terminal area operations, specifically in low-visibility conditions. The use of a Head-Up Display (HUD) and Head-Down Displays (HDD) with SVS is contrasted with baseline standard head-down displays in terms of induced workload and pilot behavior at 1400 RVR visibility levels. Variances in performance and pilot behavior were reviewed for acceptability when using the HUD or HDD with SVS under reduced minimums to acquire the visual references necessary to continue to land. The data suggest superior performance for HUD implementations. Improved attentional behavior is also suggested for HDD implementations of SVS for low-visibility approach and landing operations.

  19. Heterosis and combining ability: a diallel cross of three geographically isolated populations of Pacific abalone Haliotis discus hannai Ino

    NASA Astrophysics Data System (ADS)

    Deng, Yuewen; Liu, Xiao; Zhang, Guofan; Wu, Fucun

    2010-11-01

We conducted a complete diallel cross among three geographically isolated populations of Pacific abalone Haliotis discus hannai Ino to determine the heterosis and the combining ability of growth traits at the spat stage. The three populations were collected from Qingdao (Q) and Dalian (D) in China, and Miyagi (M) in Japan. We measured the shell length, shell width, and total weight. The magnitude of the general combining ability (GCA) variance was more pronounced than the specific combining ability (SCA) variance, which is evidenced by both the ratio of the genetic component in total variation and the GCA/SCA values. The component variances of GCA and SCA were significant for all three traits (P < 0.05), indicating the importance of additive and non-additive genetic effects in determining the expression of these traits. The reciprocal maternal effects (RE) were also significant for these traits (P < 0.05). Our results suggest that population D was the best general combiner in breeding programs to improve growth traits. The DM cross had the highest heterosis values for all three traits.

  20. The influence of SO4 and NO3 to the acidity (pH) of rainwater using minimum variance quadratic unbiased estimation (MIVQUE) and maximum likelihood methods

    NASA Astrophysics Data System (ADS)

    Dilla, Shintia Ulfa; Andriyana, Yudhie; Sudartianto

    2017-03-01

Acid rain causes many harmful effects. It is formed by two strong acids, sulfuric acid (H2SO4) and nitric acid (HNO3), where sulfuric acid is derived from SO2 and nitric acid from NOx (x = 1, 2). The purpose of this research is to determine the influence of the SO4 and NO3 levels contained in rain on the acidity (pH) of rainwater. The data are incomplete panel data with a two-way error component model. Panel data are a collection of observations recorded over time; a panel is said to be incomplete if individuals have different numbers of observations. The model used in this research is a random effects model (REM). Minimum variance quadratic unbiased estimation (MIVQUE) is used to estimate the variance of the error components, while maximum likelihood estimation is used to estimate the parameters. As a result, we obtain the following model: Ŷ* = 0.41276446 - 0.00107302X1 + 0.00215470X2.
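
    A sketch of a random-effects panel fit in Python under stated assumptions: the paper estimates variance components by MIVQUE and parameters by maximum likelihood, whereas statsmodels' MixedLM shown here uses (RE)ML throughout, and the simulated unbalanced panel and column names are invented:

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(6)
      rows = []
      for station in range(10):                   # unbalanced (incomplete) panel
          u = rng.normal(0.0, 0.3)                # station-level random effect
          for _ in range(rng.integers(5, 15)):    # different number of epochs per station
              so4, no3 = rng.gamma(2.0, 1.0), rng.gamma(2.0, 0.5)
              ph = 5.6 - 0.10 * so4 - 0.05 * no3 + u + rng.normal(0.0, 0.2)
              rows.append((station, so4, no3, ph))
      df = pd.DataFrame(rows, columns=["station", "so4", "no3", "ph"])

      # Random-effects fit; variance components here come from (RE)ML, not MIVQUE
      result = smf.mixedlm("ph ~ so4 + no3", df, groups=df["station"]).fit()
      print(result.params)                        # fixed effects: intercept, so4, no3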

  1. Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluid simulation

    NASA Astrophysics Data System (ADS)

    Tang, Kunkun; Congedo, Pietro M.; Abgrall, Rémi

    2016-06-01

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation keeps containing few terms, so that the cost to resolve repeatedly the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than the one of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.
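
    A brute-force sketch of the variance-based (ANOVA/Sobol') quantities that the sparse PDD surrogate is designed to deliver cheaply, here estimated by plain Monte Carlo with Jansen's pick-freeze formula on an assumed toy function:

      import numpy as np

      rng = np.random.default_rng(7)

      def f(x):
          """Toy model with unequal input importances (assumption)."""
          return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

      n, d = 100_000, 3
      A, B = rng.random((n, d)), rng.random((n, d))
      yA, yB = f(A), f(B)
      var_y = np.var(np.concatenate([yA, yB]))

      # Jansen's pick-freeze estimator of the first-order Sobol' index S_i
      for i in range(d):
          AB = A.copy()
          AB[:, i] = B[:, i]              # resample only coordinate i
          S_i = 1.0 - 0.5 * np.mean((yB - f(AB)) ** 2) / var_y
          print(f"S_{i} ~ {S_i:.3f}")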

  2. Automatic image equalization and contrast enhancement using Gaussian mixture modeling.

    PubMed

    Celik, Turgay; Tjahjadi, Tardi

    2012-01-01

    In this paper, we propose an adaptive image equalization algorithm that automatically enhances the contrast in an input image. The algorithm uses the Gaussian mixture model to model the image gray-level distribution, and the intersection points of the Gaussian components in the model are used to partition the dynamic range of the image into input gray-level intervals. The contrast equalized image is generated by transforming the pixels' gray levels in each input interval to the appropriate output gray-level interval according to the dominant Gaussian component and the cumulative distribution function of the input interval. To take account of the hypothesis that homogeneous regions in the image represent homogeneous silences (or set of Gaussian components) in the image histogram, the Gaussian components with small variances are weighted with smaller values than the Gaussian components with larger variances, and the gray-level distribution is also used to weight the components in the mapping of the input interval to the output interval. Experimental results show that the proposed algorithm produces better or comparable enhanced images than several state-of-the-art algorithms. Unlike the other algorithms, the proposed algorithm is free of parameter setting for a given dynamic range of the enhanced image and can be applied to a wide range of image types.

  3. Automated Classification and Removal of EEG Artifacts With SVM and Wavelet-ICA.

    PubMed

    Sai, Chong Yeh; Mokhtar, Norrima; Arof, Hamzah; Cumming, Paul; Iwahashi, Masahiro

    2018-05-01

    Brain electrical activity recordings by electroencephalography (EEG) are often contaminated with signal artifacts. Procedures for automated removal of EEG artifacts are frequently sought for clinical diagnostics and brain-computer interface applications. In recent years, a combination of independent component analysis (ICA) and discrete wavelet transform has been introduced as standard technique for EEG artifact removal. However, in performing the wavelet-ICA procedure, visual inspection or arbitrary thresholding may be required for identifying artifactual components in the EEG signal. We now propose a novel approach for identifying artifactual components separated by wavelet-ICA using a pretrained support vector machine (SVM). Our method presents a robust and extendable system that enables fully automated identification and removal of artifacts from EEG signals, without applying any arbitrary thresholding. Using test data contaminated by eye blink artifacts, we show that our method performed better in identifying artifactual components than did existing thresholding methods. Furthermore, wavelet-ICA in conjunction with SVM successfully removed target artifacts, while largely retaining the EEG source signals of interest. We propose a set of features including kurtosis, variance, Shannon's entropy, and range of amplitude as training and test data of SVM to identify eye blink artifacts in EEG signals. This combinatorial method is also extendable to accommodate multiple types of artifacts present in multichannel EEG. We envision future research to explore other descriptive features corresponding to other types of artifactual components.
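
    A sketch of the classification stage only, with synthetic component time series standing in for wavelet-ICA outputs (shapes, spike rates, and noise levels are assumptions); the feature set mirrors the kurtosis/variance/entropy/range list from the abstract:

      import numpy as np
      from scipy.stats import kurtosis, entropy
      from sklearn.svm import SVC

      rng = np.random.default_rng(8)

      def features(sig):
          """Kurtosis, variance, Shannon entropy, and amplitude range of a component."""
          hist, _ = np.histogram(sig, bins=32, density=True)
          return [kurtosis(sig), np.var(sig), entropy(hist[hist > 0]), np.ptp(sig)]

      # Synthetic stand-ins: smooth "neural" components vs spiky "eye-blink" components
      neural = [rng.normal(0.0, 1.0, 1000) for _ in range(100)]
      blinks = [np.where(rng.random(1000) < 0.01, 15.0, 0.0) + rng.normal(0.0, 1.0, 1000)
                for _ in range(100)]
      X = np.array([features(s) for s in neural + blinks])
      y = np.array([0] * 100 + [1] * 100)          # 1 = artifactual component

      clf = SVC(kernel="rbf").fit(X, y)            # stands in for the pretrained SVM
      print("training accuracy:", clf.score(X, y))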

  4. Climate and Human Pressure Constraints Co-Explain Regional Plant Invasion at Different Spatial Scales

    PubMed Central

    García-Baquero, Gonzalo; Caño, Lidia; Biurrun, Idoia; García-Mijangos, Itziar; Loidi, Javier; Herrera, Mercedes

    2016-01-01

    Alien species invasion represents a global threat to biodiversity and ecosystems. Explaining invasion patterns in terms of environmental constraints will help us to assess invasion risks and plan control strategies. We aim to identify plant invasion patterns in the Basque Country (Spain), and to determine the effects of climate and human pressure on that pattern. We modeled the regional distribution of 89 invasive plant species using two approaches. First, distance-based Moran’s eigenvector maps were used to partition variation in the invasive species richness, S, into spatial components at broad and fine scales; redundancy analysis was then used to explain those components on the basis of climate and human pressure descriptors. Second, we used generalized additive mixed modeling to fit species-specific responses to the same descriptors. Climate and human pressure descriptors have different effects on S at different spatial scales. Broad-scale spatially structured temperature and precipitation, and fine-scale spatially structured human population density and percentage of natural and semi-natural areas, explained altogether 38.7% of the total variance. The distribution of 84% of the individually tested species was related to either temperature, precipitation or both, and 68% was related to either population density or natural and semi-natural areas, displaying similar responses. The spatial pattern of the invasive species richness is strongly environmentally forced, mainly by climate factors. Since individual species responses were proved to be both similarly constrained in shape and explained variance by the same environmental factors, we conclude that the pattern of invasive species richness results from individual species’ environmental preferences. PMID:27741276

  5. Stratum variance estimation for sample allocation in crop surveys. [Great Plains Corridor

    NASA Technical Reports Server (NTRS)

    Perry, C. R., Jr.; Chhikara, R. S. (Principal Investigator)

    1980-01-01

The problem of determining stratum variances needed to achieve an optimum sample allocation for crop surveys by remote sensing is investigated by considering an approach based on the concept of stratum variance as a function of the sampling unit size. A methodology using existing and easily available historical crop statistics is developed for obtaining initial estimates of stratum variances. The procedure is applied to estimate stratum variances for wheat in the U.S. Great Plains and is evaluated based on the numerical results thus obtained. It is shown that the proposed technique is viable and performs satisfactorily, with the use of a conservative value for the field size and crop statistics at the small political subdivision level, when the estimated stratum variances are compared to those obtained using LANDSAT data.
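
    A sketch of why the stratum variances matter for allocation: under Neyman allocation the sample is split proportionally to N_h*sigma_h, so historically derived variance estimates feed directly into the survey design. All numbers below are assumptions:

      import numpy as np

      N = np.array([120, 300, 80])        # sampling units per stratum (assumption)
      sigma = np.array([5.0, 2.0, 9.0])   # estimated stratum SDs, e.g. from history
      n_total = 60                        # total units the budget allows

      weights = N * sigma                 # Neyman allocation: n_h proportional to N_h*sigma_h
      n_h = np.round(n_total * weights / weights.sum()).astype(int)
      print("per-stratum sample sizes:", n_h)   # high-variance strata sampled harder
      # (rounding may shift the total by one unit; a production design rebalances)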

  6. A fully-stochasticized, age-structured population model for population viability analysis of fish: Lower Missouri River endangered pallid sturgeon example

    USGS Publications Warehouse

    Wildhaber, Mark L.; Albers, Janice; Green, Nicholas; Moran, Edward H.

    2017-01-01

    We develop a fully-stochasticized, age-structured population model suitable for population viability analysis (PVA) of fish and demonstrate its use with the endangered pallid sturgeon (Scaphirhynchus albus) of the Lower Missouri River as an example. The model incorporates three levels of variance: parameter variance (uncertainty about the value of a parameter itself) applied at the iteration level, temporal variance (uncertainty caused by random environmental fluctuations over time) applied at the time-step level, and implicit individual variance (uncertainty caused by differences between individuals) applied within the time-step level. We found that population dynamics were most sensitive to survival rates, particularly age-2+ survival, and to fecundity-at-length. The inclusion of variance (unpartitioned or partitioned), stocking, or both generally decreased the influence of individual parameters on population growth rate. The partitioning of variance into parameter and temporal components had a strong influence on the importance of individual parameters, uncertainty of model predictions, and quasiextinction risk (i.e., pallid sturgeon population size falling below 50 age-1+ individuals). Our findings show that appropriately applying variance in PVA is important when evaluating the relative importance of parameters, and reinforce the need for better and more precise estimates of crucial life-history parameters for pallid sturgeon.

  7. GPZ: non-stationary sparse Gaussian processes for heteroscedastic uncertainty estimation in photometric redshifts

    NASA Astrophysics Data System (ADS)

    Almosallam, Ibrahim A.; Jarvis, Matt J.; Roberts, Stephen J.

    2016-10-01

The next generation of cosmology experiments will be required to use photometric redshifts rather than spectroscopic redshifts. Obtaining accurate and well-characterized photometric redshift distributions is therefore critical for Euclid, the Large Synoptic Survey Telescope and the Square Kilometre Array. However, determining accurate variance predictions alongside single point estimates is crucial, as they can be used to optimize the sample of galaxies for the specific experiment (e.g. weak lensing, baryon acoustic oscillations, supernovae), trading off between completeness and reliability in the galaxy sample. The various sources of uncertainty in measurements of the photometry and redshifts put a lower bound on the accuracy that any model can hope to achieve. The intrinsic uncertainty associated with estimates is often non-uniform and input-dependent, commonly known in statistics as heteroscedastic noise. However, existing approaches are susceptible to outliers, do not take into account variance induced by non-uniform data density, and in most cases require manual tuning of many parameters. In this paper, we present a Bayesian machine learning approach that jointly optimizes the model with respect to both the predictive mean and variance, which we refer to as Gaussian processes for photometric redshifts (GPZ). The predictive variance of the model takes into account both the variance due to data density and photometric noise. Using the Sloan Digital Sky Survey (SDSS) DR12 data, we show that our approach substantially outperforms other machine learning methods for photo-z estimation and their associated variance, such as TPZ and ANNZ2. We provide MATLAB and Python implementations that are available to download at https://github.com/OxfordML/GPz.

  8. Principal Component and Linkage Analysis of Cardiovascular Risk Traits in the Norfolk Isolate

    PubMed Central

    Cox, Hannah C.; Bellis, Claire; Lea, Rod A.; Quinlan, Sharon; Hughes, Roger; Dyer, Thomas; Charlesworth, Jac; Blangero, John; Griffiths, Lyn R.

    2009-01-01

Objective(s): An individual's risk of developing cardiovascular disease (CVD) is influenced by genetic factors. This study focussed on mapping genetic loci for CVD-risk traits in a unique population isolate derived from Norfolk Island. Methods: This investigation focussed on 377 individuals descended from the population founders. Principal component analysis was used to extract orthogonal components from 11 cardiovascular risk traits. Multipoint variance component methods, implemented in SOLAR, were used to assess genome-wide linkage to the derived factors. A total of 285 of the 377 related individuals were informative for linkage analysis. Results: A total of 4 principal components accounting for 83% of the total variance were derived. Principal component 1 was loaded with body size indicators; principal component 2 with body size, cholesterol and triglyceride levels; principal component 3 with the blood pressures; and principal component 4 with LDL-cholesterol and total cholesterol levels. Suggestive evidence of linkage for principal component 2 (h2 = 0.35) was observed on chromosome 5q35 (LOD = 1.85; p = 0.0008), while peak regions on chromosomes 10p11.2 (LOD = 1.27; p = 0.005) and 12q13 (LOD = 1.63; p = 0.003) were observed to segregate with principal components 1 (h2 = 0.33) and 4 (h2 = 0.42), respectively. Conclusion(s): This study investigated a number of CVD risk traits in a unique isolated population. The findings support the clustering of CVD risk traits and provide interesting evidence of a region on chromosome 5q35 segregating with weight, waist circumference, HDL-c and total triglyceride levels. PMID:19339786

  9. Comparing Independent Component Analysis with Principle Component Analysis in Detecting Alterations of Porphyry Copper Deposit (case Study: Ardestan Area, Central Iran)

    NASA Astrophysics Data System (ADS)

    Mahmoudishadi, S.; Malian, A.; Hosseinali, F.

    2017-09-01

Image processing techniques in the transform domain are employed as analysis tools for enhancing the detection of mineral deposits. Decomposing the image into important components increases the probability of mineral extraction. In this study, the performance of Principal Component Analysis (PCA) and Independent Component Analysis (ICA) has been evaluated for the visible and near-infrared (VNIR) and shortwave infrared (SWIR) subsystems of ASTER data. Ardestan is located in part of the Central Iranian Volcanic Belt, which hosts many well-known porphyry copper deposits. This research investigated the propylitic and argillic alteration zones and the outer mineralogy zone in part of the Ardestan region. The two approaches were applied to discriminate alteration zones from igneous bedrock using the major absorptions of indicator minerals from the alteration and mineralogy zones in the spectral range of the ASTER bands. Specialized PC components (PC2, PC3 and PC6) were used to identify pyrite and the argillic and propylitic zones, distinguished from igneous bedrock in an RGB color composite image. Based on the eigenvalues, components 2, 3 and 6 account for 4.26%, 0.9% and 0.09% of the total variance of the data for the Ardestan scene, respectively. For the purpose of discriminating the alteration and mineralogy zones of the porphyry copper deposit from bedrock, the corresponding ICA independent components (IC2, IC3 and IC6) separate these zones more accurately than the noisier PCA bands. The results of the ICA method also conform to the locations of the lithological units of the Ardestan region.
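
    A sketch of the PCA-versus-ICA comparison on synthetic band data (the mixing weights and noise level are assumptions; the study used ASTER VNIR/SWIR bands): PCA ranks orthogonal directions by variance, while FastICA seeks statistically independent, non-Gaussian components.

      import numpy as np
      from sklearn.decomposition import PCA, FastICA

      rng = np.random.default_rng(9)
      n_pix = 5000
      # Two independent non-Gaussian "spectral" sources mixed into six synthetic bands
      s = np.column_stack([rng.laplace(size=n_pix), rng.random(n_pix) - 0.5])
      bands = s @ rng.normal(size=(2, 6)) + 0.1 * rng.normal(size=(n_pix, 6))

      pca = PCA(n_components=2).fit(bands)
      pcs = pca.transform(bands)                   # orthogonal, ranked by variance
      ics = FastICA(n_components=2, random_state=0).fit_transform(bands)
      print("PC variance shares:", pca.explained_variance_ratio_)
      print("PC spreads:", pcs.std(axis=0), "IC spreads:", ics.std(axis=0))
      # ICA maximizes statistical independence, which is why its components can
      # isolate alteration-related signals that PCA mixes across several PCs.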

  10. Landsat-TM identification of Amblyomma variegatum (Acari: Ixodidae) habitats in Guadeloupe

    NASA Technical Reports Server (NTRS)

    Hugh-Jones, M.; Barre, N.; Nelson, G.; Wehnes, K.; Warner, J.; Garvin, J.; Garris, G.

    1992-01-01

The feasibility of identifying specific habitats of the African bont tick, Amblyomma variegatum, from Landsat-TM images was investigated by comparing remotely sensed images of visible farms in Grande Terre (Guadeloupe) with field observations made in the same period of time (1986-1987). The different tick habitats could be separated using principal component analysis. The analysis clustered the sites by large and small variance of band values, and by vegetation and moisture indexes. It was found that herds in heterogeneous sites with large variances had more ticks than those in homogeneous or low-variance sites. Within the heterogeneous sites, those with high vegetation and moisture indexes had more ticks than those with low values.

  11. An alternative approach to confidence interval estimation for the win ratio statistic.

    PubMed

    Luo, Xiaodong; Tian, Hong; Mohanty, Surya; Tsai, Wei Yann

    2015-03-01

Pocock et al. (2012, European Heart Journal 33, 176-182) proposed a win ratio approach to analyzing composite endpoints comprised of outcomes with different clinical priorities. In this article, we establish a statistical framework for this approach. We derive the null hypothesis and propose a closed-form variance estimator for the win ratio statistic in the all-pairwise-matching situation. Our simulation study shows that the proposed variance estimator performs well regardless of the magnitude of the treatment effect size and the type of the joint distribution of the outcomes. © 2014, The International Biometric Society.
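
    A minimal sketch of the win ratio computation itself, using all pairwise comparisons with a two-level outcome priority on simulated data (the distributions and the larger-is-better convention are assumptions):

      import numpy as np

      rng = np.random.default_rng(10)
      # (time to death, time to hospitalization) per patient; larger = better here
      treat = rng.exponential([10.0, 5.0], size=(50, 2))
      ctrl = rng.exponential([8.0, 4.0], size=(60, 2))

      wins = losses = 0
      for t in treat:
          for c in ctrl:
              if t[0] != c[0]:              # higher-priority outcome decides first
                  wins += t[0] > c[0]
                  losses += t[0] < c[0]
              elif t[1] != c[1]:            # otherwise fall through to next priority
                  wins += t[1] > c[1]
                  losses += t[1] < c[1]
      print("win ratio:", wins / losses)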

  12. Heritability of somatotype components: a multivariate analysis.

    PubMed

    Peeters, M W; Thomis, M A; Loos, R J F; Derom, C A; Fagard, R; Claessens, A L; Vlietinck, R F; Beunen, G P

    2007-08-01

To study the genetic and environmental determination of variation in Heath-Carter somatotype (ST) components (endomorphy, mesomorphy and ectomorphy). Multivariate path analysis on twin data. Eight hundred and three members of 424 adult Flemish twin pairs (18-34 years of age). The results indicate the significance of sex differences and of the covariation between the three ST components. After age regression, variation of the population in ST components and their covariation is explained by additive genetic sources of variance (A), shared (familial) environment (C) and unique environment (E). In men, additive genetic sources of variance explain 28.0% (CI 8.7-50.8%), 86.3% (71.6-90.2%) and 66.5% (37.4-85.1%) of the variation in endomorphy, mesomorphy and ectomorphy, respectively. For women, the corresponding values are 32.3% (8.9-55.6%), 82.0% (67.7-87.7%) and 70.1% (48.9-81.8%). For all components in men and women, more than 70% of the total variation was explained by sources of variance shared between the three components, emphasising the importance of analysing the ST in a multivariate way. The findings suggest that the high heritabilities for mesomorphy and ectomorphy reported in earlier twin studies in adolescence are maintained in adulthood. For endomorphy, which represents a relative measure of subcutaneous adipose tissue, the results suggest heritability may be considerably lower than most values reported in earlier studies on adolescent twins. The heritability is also lower than values reported for, for example, body mass index (BMI), which reflects not only the weight of organs and adipose tissue but also muscle and bone tissue. Considering the differences in heritability between musculoskeletal robustness (mesomorphy) and subcutaneous adipose tissue (endomorphy), it may be questioned whether studying the genetics of BMI will eventually lead to a better understanding of the genetics of fatness, obesity and overweight.

  13. A Filtering of Incomplete GNSS Position Time Series with Probabilistic Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Gruszczynski, Maciej; Klos, Anna; Bogusz, Janusz

    2018-04-01

For the first time, we introduce probabilistic principal component analysis (pPCA) for the spatio-temporal filtering of Global Navigation Satellite System (GNSS) position time series, to estimate and remove the Common Mode Error (CME) without interpolation of missing values. We used data from International GNSS Service (IGS) stations which contributed to the latest International Terrestrial Reference Frame (ITRF2014). The efficiency of the proposed algorithm was tested on simulated incomplete time series; the CME was then estimated for a set of 25 stations located in Central Europe. The newly applied pPCA was compared with previously used algorithms, which showed that this method is capable of resolving the problem of proper spatio-temporal filtering of GNSS time series characterized by different observation time spans. We showed that filtering can be carried out with the pPCA method even when two time series in the dataset share fewer than 100 common epochs of observations. The 1st Principal Component (PC) explained more than 36% of the total variance represented by the time series residuals (series with the deterministic model removed); compared to the variances of the other PCs (less than 8%), this means that common signals are significant in GNSS residuals. A clear improvement in the spectral indices of the power-law noise was noticed for the Up component, reflected by an average shift towards white noise from -0.98 to -0.67 (30%). We observed a significant average reduction in the uncertainty of the station velocities estimated from the filtered residuals, by 35, 28 and 69% for the North, East, and Up components, respectively. The CME series were also analyzed in the context of environmental mass loading influences on the filtering results. Subtracting the environmental loading models from the GNSS residuals reduces the estimated CME variance by 20 and 65% for the horizontal and vertical components, respectively.
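
    A sketch of the stacking-and-filtering step using ordinary PCA on complete data; handling missing epochs is precisely what pPCA adds and is not reproduced here, and the network size and noise levels are assumptions:

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(11)
      n_epochs, n_sta = 1000, 25
      cme = np.cumsum(rng.normal(0.0, 0.05, n_epochs))              # common-mode signal
      resid = cme[:, None] + rng.normal(0.0, 0.3, (n_epochs, n_sta))  # station residuals

      pca = PCA(n_components=1).fit(resid)
      cme_est = pca.inverse_transform(pca.transform(resid))         # rank-1 reconstruction
      filtered = resid - cme_est
      print("variance share of PC1:", round(pca.explained_variance_ratio_[0], 3))
      print("residual RMS before/after:", resid.std().round(3), filtered.std().round(3))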

  14. Change of a motor synergy for dampening hand vibration depending on a task difficulty.

    PubMed

    Togo, Shunta; Kagawa, Takahiro; Uno, Yoji

    2014-10-01

The present study investigated the relationship between the number of usable degrees of freedom (DOFs) and joint coordination during a task in which humans dampen hand vibration. Participants stood on a platform generating an anterior-posterior oscillation and held a water-filled cup. Their usable DOFs were varied under the following limb-constraint conditions: (1) no constraint; (2) ankle constrained; and (3) ankle and knee constrained. Kinematic whole-body data were recorded using a three-dimensional position measurement system. The jerk of each body part was evaluated as an index of oscillation intensity. To quantify joint coordination, an uncontrolled manifold (UCM) analysis was applied, and the variance of the joints related to hand jerk was divided into two components: a UCM component that did not affect hand jerk and an orthogonal (ORT) component that directly affected hand jerk. The results showed that hand jerk was significantly smaller with a cup filled with water than with a cup containing stones, regardless of limb constraint condition. Thus, participants dampened their hand vibration by utilizing the usable joint DOFs. According to the UCM analysis, increasing the oscillation velocity and decreasing the usable DOFs through limb constraints led to an increase in the total variance of the joints and in the UCM component, indicating that a synergy dampening hand vibration was enhanced. These results show that the variance of the usable joint DOFs fits the UCM subspace more closely when the joints are varied by increasing velocity and limb constraints, and suggest that humans adopt enhanced synergies to achieve more difficult tasks.
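
    A sketch of the UCM variance split for a linearized task J*dq = dx (the Jacobian and joint variabilities are assumptions): joint-space deviations are projected onto the null space of J, which leaves the task variable unchanged (UCM component), and onto its complement (ORT component).

      import numpy as np
      from scipy.linalg import null_space

      rng = np.random.default_rng(12)
      J = np.array([[0.30, 0.25, 0.20, 0.15]])     # task Jacobian: 1 task x 4 joints
      dq = rng.normal(0.0, 1.0, (200, 4)) * [0.5, 1.0, 0.8, 1.2]  # joint deviations

      N = null_space(J)                            # orthonormal basis of the UCM (3-D)
      uc = dq @ N                                  # coordinates within the UCM
      ort = dq @ (J.T / np.linalg.norm(J))         # coordinate along the task direction

      v_ucm = uc.var(axis=0).sum() / N.shape[1]    # per-DOF variance in each subspace
      v_ort = ort.var() / 1.0
      print(f"V_UCM = {v_ucm:.3f}, V_ORT = {v_ort:.3f}, ratio = {v_ucm / v_ort:.2f}")
      # ratio > 1 indicates a synergy stabilizing the task variable (here, hand jerk)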

  15. Bioequivalence evaluation of two brands of amoxicillin/clavulanic acid 250/125 mg combination tablets in healthy human volunteers: use of replicate design approach.

    PubMed

    Idkaidek, Nasir M; Al-Ghazawi, Ahmad; Najib, Naji M

    2004-12-01

The purpose of this study was to apply a replicate design approach to a bioequivalence study of an amoxicillin/clavulanic acid combination following a 250/125 mg oral dose to 23 subjects, and to compare the analysis of individual bioequivalence with average bioequivalence. This was conducted as a 2-treatment, 2-sequence, 4-period crossover study. Average bioequivalence was shown, while the individual bioequivalence approach did not succeed in showing bioequivalence. In conclusion, compared with the average bioequivalence approach, the individual bioequivalence approach is a strong statistical tool for testing intra-subject variances as well as the subject-by-formulation interaction variance. Copyright © 2004 John Wiley & Sons, Ltd.

  16. Some New Results on Grubbs’ Estimators.

    DTIC Science & Technology

    1983-06-01

Consider a two-way classification with n rows and r columns and the usual model of analysis of variance, except that the error components of the model may have heterogeneous variances by columns. Grubbs provided unbiased estimators of the column error variances that depend on the observations y_ij, i = 1, ..., n, j = 1, ..., r, and the model y_ij = mu_i + beta_j + e_ij, (1) where mu_i represents the mean response of row i and beta_j represents the effect of column j.

  18. Estimation of genetic parameters for heat stress, including dominance gene effects, on milk yield in Thai Holstein dairy cattle.

    PubMed

    Boonkum, Wuttigrai; Duangjinda, Monchai

    2015-03-01

Heat stress in tropical regions strongly and negatively affects milk production in dairy cattle. Genetic selection for heat tolerance in dairy cattle is a powerful technique to improve genetic performance. Therefore, the current study aimed to estimate genetic parameters and investigate the threshold point of heat stress for milk yield. The data included 52,701 first-parity test-day milk yield records from 6,247 Thai Holstein dairy cattle, covering the period 1990 to 2007. A random regression test-day model with EM-REML was used to estimate variance components, genetic parameters and milk production loss. A decline in milk production was found when the temperature-humidity index (THI) exceeded a threshold of 74, and the decline was associated with a high percentage of Holstein genetics. All variance component estimates increased with THI. The estimated heritability of test-day milk yield was 0.231. Dominance variance as a proportion of additive variance (0.035) indicated that non-additive effects might not be of concern for milk genetics studies in Thai Holstein cattle. Correlations between genetic and permanent environmental effects, for regular conditions and due to heat stress, were -0.223 and -0.521, respectively. The heritability and genetic correlations from this study show that simultaneous selection for milk production and heat tolerance is possible. © 2014 Japanese Society of Animal Science.

  19. Two dynamic regimes in the human gut microbiome.

    PubMed

    Gibbons, Sean M; Kearney, Sean M; Smillie, Chris S; Alm, Eric J

    2017-02-01

    The gut microbiome is a dynamic system that changes with host development, health, behavior, diet, and microbe-microbe interactions. Prior work on gut microbial time series has largely focused on autoregressive models (e.g. Lotka-Volterra). However, we show that most of the variance in microbial time series is non-autoregressive. In addition, we show how community state-clustering is flawed when it comes to characterizing within-host dynamics and that more continuous methods are required. Most organisms exhibited stable, mean-reverting behavior suggestive of fixed carrying capacities and abundant taxa were largely shared across individuals. This mean-reverting behavior allowed us to apply sparse vector autoregression (sVAR)—a multivariate method developed for econometrics—to model the autoregressive component of gut community dynamics. We find a strong phylogenetic signal in the non-autoregressive co-variance from our sVAR model residuals, which suggests niche filtering. We show how changes in diet are also non-autoregressive and that Operational Taxonomic Units strongly correlated with dietary variables have much less of an autoregressive component to their variance, which suggests that diet is a major driver of microbial dynamics. Autoregressive variance appears to be driven by multi-day recovery from frequent facultative anaerobe blooms, which may be driven by fluctuations in luminal redox. Overall, we identify two dynamic regimes within the human gut microbiota: one likely driven by external environmental fluctuations, and the other by internal processes.
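
    The sVAR idea can be sketched as a set of column-wise lasso regressions of each taxon at time t on the whole community at time t-1; the residuals then carry the non-autoregressive variance the authors emphasize. This is a generic sketch, not the authors' implementation, and the penalty alpha and placeholder data are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_var1(X, alpha=0.01):
    """Fit a sparse VAR(1) by lasso-regressing each taxon at time t
    on the full community at time t-1; X has shape (time, taxa).
    Returns the coefficient matrix A and the residuals, i.e. the
    non-autoregressive part of the dynamics."""
    past, present = X[:-1], X[1:]
    A = np.zeros((X.shape[1], X.shape[1]))
    for j in range(X.shape[1]):
        A[j] = Lasso(alpha=alpha).fit(past, present[:, j]).coef_
    resid = present - past @ A.T
    return A, resid

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))  # placeholder for log-abundance time series
A, resid = sparse_var1(X)
print("autoregressive variance fraction:", 1 - resid.var() / X[1:].var())
```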

  20. DEVELOPMENT OF A METHOD TO QUANTIFY THE IMPACT ...

    EPA Pesticide Factsheets

    Advances in human health risk assessment, especially for contaminants encountered by the inhalation route, have evolved so that the uncertainty factors (UF) used in the extrapolation of non-cancer effects across species (UFA) have been split into the respective pharmacodynamic (PD) and pharmacokinetic (PK) components. Present EPA default values for these components are divided into two half-logs (e.g., 10 to the 0.5 power or 3.16), so that their multiplication yields the 10-fold UF customarily seen in Agency risk assessments as UFA. The state of the science at present does not support a detailed evaluation of species-dependent and human interindividual variance of PD, but more data exist by which PK variance can be examined and quantified both across species and within the human species. Because metabolism accounts for much of the PK variance, we sought to examine the impact that differences in hepatic enzyme content exerts upon risk-relevant PK outcomes among humans. Because of the age and ethnic diversity expressed in the human organ donor population and the wide availability of tissues from these human organ donors, a program was developed to include information from those tissues in characterizing human interindividual PK variance. An Interagency Agreement with CDC/NIOSH Taft Laboratory, a Cooperative Agreement with CIIT Centers for Health Research, and a collaborative agreement with NHEERL/ETD were established to successfully complete the project. The di

  1. The Multidimensional Influence of Acculturation on Digit Symbol-Coding and Wisconsin Card Sorting Test in Hispanics.

    PubMed

    Krch, Denise; Lequerica, Anthony; Arango-Lasprilla, Juan Carlos; Rogers, Heather L; DeLuca, John; Chiaravalloti, Nancy D

    2015-01-01

    The purpose of the current study was to evaluate the relative contribution of acculturation to two tests of nonverbal test performance in Hispanics. This study compared 40 Hispanics and 20 non-Hispanic whites on Digit Symbol-Coding (DSC) and the Wisconsin Card Sorting Test (WCST) and evaluated the relative contribution of the various acculturation components to cognitive test performance in the Hispanic group. Hispanics performed significantly worse on DSC and WCST relative to non-Hispanic whites. Multiple regressions conducted within the Hispanic group revealed that language use uniquely accounted for 11.0% of the variance on the DSC, 18.8% of the variance on WCST categories completed, and 13.0% of the variance in perseverative errors on the WCST. Additionally, years of education in the United States uniquely accounted for 14.9% of the variance in DSC. The significant impact of acculturation on DSC and WCST lends support to the view that nonverbal cognitive tests are not necessarily culture free. The differential contribution of acculturation proxies highlights the importance of considering these separate components when interpreting performance on neuropsychological tests in clinical and research settings. Factors such as the country where education was received may in fact be more meaningful than the years of education attained. Thus, acculturation should be considered an important factor in any cognitive evaluation of culturally diverse individuals.

  2. Identification of regional activation by factorization of high-density surface EMG signals: A comparison of Principal Component Analysis and Non-negative Matrix factorization.

    PubMed

    Gallina, Alessio; Garland, S Jayne; Wakeling, James M

    2018-05-22

    In this study, we investigated whether principal component analysis (PCA) and non-negative matrix factorization (NMF) perform similarly for the identification of regional activation within the human vastus medialis (VM). EMG signals from 64 locations over the VM were collected from twelve participants while performing a low-force isometric knee extension. The envelope of the EMG signal of each channel was calculated by low-pass filtering (8 Hz) the monopolar EMG signal after rectification. The data matrix was factorized using PCA and NMF, and up to 5 factors were considered for each algorithm. The explained variance, spatial weights, and temporal scores of the two algorithms were compared using Pearson correlation. For both PCA and NMF, a single factor explained approximately 70% of the variance of the signal, while two and three factors explained just over 85% and 90%, respectively. The variance explained by PCA and NMF was highly comparable (R > 0.99). Spatial weights and temporal scores extracted with non-negative reconstruction of PCA and NMF were highly associated (all p < 0.001, mean R > 0.97). Regional VM activation can be identified using high-density surface EMG and factorization algorithms. Regional activation explains up to 30% of the variance of the signal, as identified through both PCA and NMF. Copyright © 2018 Elsevier Ltd. All rights reserved.
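
    The PCA-versus-NMF comparison can be reproduced schematically on a simulated non-negative envelope matrix. The variance-accounted-for convention used here for NMF (uncentered reconstruction error) is one of several possible conventions and is an assumption, as is the placeholder data.

```python
import numpy as np
from sklearn.decomposition import NMF, PCA

rng = np.random.default_rng(1)
# Stand-in for rectified, low-pass-filtered 64-channel EMG envelopes.
envelopes = np.abs(rng.normal(size=(2000, 64)))

for k in range(1, 4):
    pca = PCA(n_components=k).fit(envelopes)
    nmf = NMF(n_components=k, init="nndsvd", max_iter=500).fit(envelopes)
    recon = nmf.inverse_transform(nmf.transform(envelopes))
    # Uncentered variance accounted for by the rank-k NMF reconstruction.
    nmf_vaf = 1 - ((envelopes - recon) ** 2).sum() / (envelopes ** 2).sum()
    print(k, pca.explained_variance_ratio_.sum().round(3), round(nmf_vaf, 3))
```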

  3. Does the Assessment of Recovery Capital scale reflect a single or multiple domains?

    PubMed

    Arndt, Stephan; Sahker, Ethan; Hedden, Suzy

    2017-01-01

    The goal of this study was to determine whether the 50-item Assessment of Recovery Capital scale represents a single general measure or whether multiple domains might be psychometrically useful for research or clinical applications. Data are from a cross-sectional de-identified existing program evaluation information data set with 1,138 clients entering substance use disorder treatment. Principal components and iterated factor analysis were used on the domain scores. Multiple group factor analysis provided a quasi-confirmatory factor analysis. The solution accounted for 75.24% of the total variance, suggesting that 10 factors provide a reasonably good fit. However, Tucker's congruence coefficients between the factor structure and defining weights (0.41-0.52) suggested a poor fit to the hypothesized 10-domain structure. Principal components of the 10-domain scores yielded one factor whose eigenvalue was greater than one (5.93), accounting for 75.8% of the common variance. A few domains had perceptible but small unique variance components suggesting that a few of the domains may warrant enrichment. Our findings suggest that there is one general factor, with a caveat. Using the 10 measures inflates the chance for Type I errors. Using one general measure avoids this issue, is simple to interpret, and could reduce the number of items. However, those seeking to maximally predict later recovery success may need to use the full instrument and all 10 domains.

  4. Estimating multilevel logistic regression models when the number of clusters is low: a comparison of different statistical software procedures.

    PubMed

    Austin, Peter C

    2010-04-22

    Multilevel logistic regression models are increasingly being used to analyze clustered data in medical, public health, epidemiological, and educational research. Procedures for estimating the parameters of such models are available in many statistical software packages. There is currently little evidence on the minimum number of clusters necessary to reliably fit multilevel regression models. We conducted a Monte Carlo study to compare the performance of different statistical software procedures for estimating multilevel logistic regression models when the number of clusters was low. We examined procedures available in BUGS, HLM, R, SAS, and Stata. We found that there were qualitative differences in the performance of different software procedures for estimating multilevel logistic models when the number of clusters was low. Among the likelihood-based procedures, estimation methods based on adaptive Gauss-Hermite approximations to the likelihood (glmer in R and xtlogit in Stata) or adaptive Gaussian quadrature (Proc NLMIXED in SAS) tended to have superior performance for estimating variance components when the number of clusters was small, compared to software procedures based on penalized quasi-likelihood. However, only Bayesian estimation with BUGS allowed for accurate estimation of variance components when there were fewer than 10 clusters. For all statistical software procedures, estimation of variance components tended to be poor when there were only five subjects per cluster, regardless of the number of clusters.
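
    A sketch of the data-generating process typically used in such Monte Carlo studies follows: a random-intercept logistic model with few clusters and a known random-effect variance, against which each software procedure's variance-component estimate can be checked. Estimation itself (adaptive quadrature, penalized quasi-likelihood, MCMC) is left to the packages named above; all parameter values here are illustrative.

```python
import numpy as np

def simulate_multilevel_logit(n_clusters=5, cluster_size=50,
                              sigma_u=1.0, beta=(0.0, 0.5), seed=0):
    """Random-intercept logistic data: logit(p_ij) = b0 + b1*x_ij + u_i,
    with u_i ~ N(0, sigma_u^2)."""
    rng = np.random.default_rng(seed)
    u = rng.normal(0.0, sigma_u, n_clusters)
    cluster = np.repeat(np.arange(n_clusters), cluster_size)
    x = rng.normal(size=cluster.size)
    eta = beta[0] + beta[1] * x + u[cluster]
    y = rng.binomial(1, 1 / (1 + np.exp(-eta)))
    return cluster, x, y

cluster, x, y = simulate_multilevel_logit()
# On the latent (logistic) scale the intraclass correlation is
# sigma_u^2 / (sigma_u^2 + pi^2 / 3).
print("latent-scale ICC:", round(1.0 / (1.0 + np.pi**2 / 3), 3))
```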

  5. The Pattern Across the Continental United States of Evapotranspiration Variability Associated with Water Availability

    NASA Technical Reports Server (NTRS)

    Koster, Randal D.; Salvucci, Guido D.; Rigden, Angela J.; Jung, Martin; Collatz, G. James; Schubert, Siegfried D.

    2015-01-01

    The spatial pattern across the continental United States of the interannual variance of warm-season water-dependent evapotranspiration, a pattern of relevance to land-atmosphere feedback, cannot be measured directly. Alternative and indirect approaches to estimating the pattern, however, do exist, and given the uncertainty of each, we use several such approaches here. We first quantify the water-dependent evapotranspiration variance pattern inherent in two derived evapotranspiration datasets available from the literature. We then search for the pattern in proxy geophysical variables (air temperature, stream flow, and NDVI) known to have strong ties to evapotranspiration. The variances inherent in all of the different (and mostly independent) data sources show some differences but are generally strongly consistent: they all show a large variance signal down the center of the U.S., with lower variances toward the east and (for the most part) toward the west. The robustness of the pattern across the datasets suggests that it indeed represents the pattern operating in nature. Using Budyko's hydroclimatic framework, we show that the pattern can largely be explained by the relative strength of water and energy controls on evapotranspiration across the continent.

  6. Quantifying the uncertainty in heritability.

    PubMed

    Furlotte, Nicholas A; Heckerman, David; Lippert, Christoph

    2014-05-01

    The use of mixed models to determine narrow-sense heritability and related quantities such as SNP heritability has received much recent attention. Less attention has been paid to the inherent variability in these estimates. One approach for quantifying variability in estimates of heritability is a frequentist approach, in which heritability is estimated using maximum likelihood and its variance is quantified through an asymptotic normal approximation. An alternative approach is to quantify the uncertainty in heritability through its Bayesian posterior distribution. In this paper, we develop the latter approach, make it computationally efficient and compare it to the frequentist approach. We show theoretically that, for a sufficiently large sample size and intermediate values of heritability, the two approaches provide similar results. Using the Atherosclerosis Risk in Communities cohort, we show empirically that the two approaches can give different results and that the variance/uncertainty can remain large.
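
    The Bayesian alternative described above can be sketched as a grid posterior over heritability under the standard mixed-model likelihood y ~ N(0, h²K + (1-h²)I), using one eigendecomposition of the kinship matrix K. The flat prior, the unit total variance (phenotype standardized), and the toy block-structured kinship are simplifying assumptions of this sketch, not the paper's exact method.

```python
import numpy as np

def h2_posterior(y, K, grid=np.linspace(0.01, 0.99, 99)):
    """Grid posterior over h2 under y ~ N(0, h2*K + (1-h2)*I),
    flat prior, total variance fixed at 1 (y standardized)."""
    d, U = np.linalg.eigh(K)           # one eigendecomposition, reused per grid point
    yt = U.T @ y
    logliks = np.array([-0.5 * (np.log(h2 * d + 1 - h2).sum()
                                + (yt**2 / (h2 * d + 1 - h2)).sum())
                        for h2 in grid])
    post = np.exp(logliks - logliks.max())
    return grid, post / post.sum()

# Toy example: 50 families of 4, within-family kinship 0.5, true h2 = 0.6.
rng = np.random.default_rng(2)
K = np.kron(np.eye(50), np.full((4, 4), 0.5)) + 0.5 * np.eye(200)
y = rng.multivariate_normal(np.zeros(200), 0.6 * K + 0.4 * np.eye(200))
y = (y - y.mean()) / y.std()
grid, post = h2_posterior(y, K)
print("posterior mean h2:", round((grid * post).sum(), 2))
```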

  7. A consistent transported PDF model for treating differential molecular diffusion

    NASA Astrophysics Data System (ADS)

    Wang, Haifeng; Zhang, Pei

    2016-11-01

    Differential molecular diffusion is a fundamentally significant phenomenon in all multi-component turbulent reacting or non-reacting flows caused by the different rates of molecular diffusion of energy and species concentrations. In the transported probability density function (PDF) method, the differential molecular diffusion can be treated by using a mean drift model developed by McDermott and Pope. This model correctly accounts for the differential molecular diffusion in the scalar mean transport and yields a correct DNS limit of the scalar variance production. The model, however, misses the molecular diffusion term in the scalar variance transport equation, which yields an inconsistent prediction of the scalar variance in the transported PDF method. In this work, a new model is introduced to remedy this problem that can yield a consistent scalar variance prediction. The model formulation along with its numerical implementation is discussed, and the model validation is conducted in a turbulent mixing layer problem.

  8. Gender Differences in Marital Status Moderation of Genetic and Environmental Influences on Subjective Health.

    PubMed

    Finkel, Deborah; Franz, Carol E; Horwitz, Briana; Christensen, Kaare; Gatz, Margaret; Johnson, Wendy; Kaprio, Jaako; Korhonen, Tellervo; Niederheiser, Jenae; Petersen, Inge; Rose, Richard J; Silventoinen, Karri

    2015-10-14

    From the IGEMS Consortium, data were available from 26,579 individuals aged 23 to 102 years on 3 subjective health items: self-rated health (SRH), health compared to others (COMP), and impact of health on activities (ACT). Marital status was a marker of environmental resources that may moderate genetic and environmental influences on subjective health. Results differed for the 3 subjective health items, indicating that they do not tap the same construct. Although there was little impact of marital status on variance components for women, marital status was a significant modifier of variance in all 3 subjective health measures for men. For both SRH and ACT, single men demonstrated greater shared and nonshared environmental variance than married men. For the COMP variable, genetic variance was greater for single men vs. married men. Results suggest gender differences in the role of marriage as a source of resources that are associated with subjective health.

  9. Ecohydrological perspective of phytogenic organic and inorganic components in Greek lignites: a quantitative reinterpretation

    NASA Astrophysics Data System (ADS)

    Mulder, Christian; Sakorafa, Vasiliki; Burragato, Francesco; Visscher, Henk

    2000-06-01

    A consensus about the development of freshwater wetlands in relation to time and space is urgently required. Our study aims to address this issue by providing additional data for a fine-scaled comparison of local depositional settings of Greek mires during the Pliocene and Pleistocene. Lignite profiles exhibit phytogenic organic components (macerals) that have been used to investigate the structure of past peat-forming vegetation and its successional series. The organic petrology of lignite samples from the opencast mines of Komanos (Ptolemais) and Choremi (Megalopolis) was analyzed to assess the water supply, wetland type, nutrient status and vegetation physiognomy. A holistic approach (a study of ecosystems as complete entities) was carried out for a paleoecological reconstruction of the mires. Huminite, liptinite and inertinite were traced by means of their chemical and morphological differences together with their morphogenic and taphonomic affinities. The problem of combining independent information from different approaches in a multivariate calibration setup has been considered. Linear regression, non-metric multidimensional scaling and one-way analysis of variance tested the occurrence of palynological and petrological proxies. Although lignite formation and deposition are less related to humid periods than expected, the differences occurring in the reconstructed development stages appear to be related to astronomically forced climate fluctuations.

  10. Unique relation between surface-limited evaporation and relative humidity profiles holds in both field data and climate model simulations

    NASA Astrophysics Data System (ADS)

    Salvucci, G.; Rigden, A. J.; Gentine, P.; Lintner, B. R.

    2013-12-01

    A new method was recently proposed for estimating evapotranspiration (ET) from weather station data without requiring measurements of surface limiting factors (e.g. soil moisture, leaf area, canopy conductance) [Salvucci and Gentine, 2013, PNAS, 110(16): 6287-6291]. Required measurements include diurnal air temperature, specific humidity, wind speed, net shortwave radiation, and either measured or estimated incoming longwave radiation and ground heat flux. The approach is built around the idea that the key, rate-limiting, parameter of typical ET models, the land-surface resistance to water vapor transport, can be estimated from an emergent relationship between the diurnal cycle of the relative humidity profile and ET. The emergent relation is that the vertical variance of the relative humidity profile is less than what would occur for increased or decreased evaporation rates, suggesting that land-atmosphere feedback processes minimize this variance. This relation was found to hold over a wide range of climate conditions (arid to humid) and limiting factors (soil moisture, leaf area, energy) at a set of Ameriflux field sites. While the field tests in Salvucci and Gentine (2013) supported the minimum variance hypothesis, the analysis did not reveal the mechanisms responsible for the behavior. Instead the paper suggested, heuristically, that the results were due to an equilibration of the relative humidity between the land surface and the surface layer of the boundary layer. Here we apply this method using surface meteorological fields simulated by a global climate model (GCM), and compare the predicted ET to that simulated by the climate model. Similar to the field tests, the GCM simulated ET is in agreement with that predicted by minimizing the profile relative humidity variance. A reasonable interpretation of these results is that the feedbacks responsible for the minimization of the profile relative humidity variance in nature are represented in the climate model. The climate model components, in particular the land surface model and boundary layer representation, can thus be analyzed in controlled numerical experiments to discern the specific processes leading to the observed behavior. Results of this analysis will be presented.

  11. Assessing implementation difficulties in tobacco use prevention and cessation counselling among dental providers

    PubMed Central

    2011-01-01

    Background Tobacco use adversely affects oral health. Clinical guidelines recommend that dental providers promote tobacco abstinence and provide patients who use tobacco with brief tobacco use cessation counselling. Research shows that these guidelines are seldom implemented, however. To improve guideline adherence and to develop effective interventions, it is essential to understand provider behaviour and challenges to implementation. This study aimed to develop a theoretically informed measure for assessing, among dental providers, implementation difficulties related to tobacco use prevention and cessation (TUPAC) counselling guidelines, to evaluate those difficulties among a sample of dental providers, and to investigate a possible underlying structure of the applied theoretical domains. Methods A 35-item questionnaire was developed based on key theoretical domains relevant to the implementation behaviours of healthcare providers. Specific items were drawn mostly from the literature on TUPAC counselling studies of healthcare providers. The data were collected from dentists (n = 73) and dental hygienists (n = 22) in 36 dental clinics in Finland using a web-based survey. Of 95 providers, 73 participated (76.8%). We used Cronbach's alpha to ascertain the internal consistency of the questionnaire. Mean domain scores were calculated to assess different aspects of implementation difficulties, and exploratory factor analysis was used to assess the theoretical domain structure. The authors agreed on the labels assigned to the factors on the basis of their component domains and the broader behavioural and theoretical literature. Results Internal consistency values for the theoretical domains varied from 0.50 ('emotion') to 0.71 ('environmental context and resources'). The domain environmental context and resources had the lowest mean score (21.3%; 95% confidence interval [CI], 17.2 to 25.4) and was identified as a potential implementation difficulty. The domain emotion provided the highest mean score (60%; 95% CI, 55.0 to 65.0). Three factors were extracted that explain 70.8% of the variance: motivation (47.6% of variance, α = 0.86), capability (13.3% of variance, α = 0.83), and opportunity (10.0% of variance, α = 0.71). Conclusions This study demonstrated a theoretically informed approach to identifying possible implementation difficulties in TUPAC counselling among dental providers. This approach provides a method for moving from diagnosing implementation difficulties to designing and evaluating interventions. PMID:21615948
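
    The internal-consistency statistic reported above is straightforward to compute from an item-score matrix; the simulated one-factor items below are a placeholder for questionnaire data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(3)
common = rng.normal(size=(100, 1))                     # one shared factor
items = common + rng.normal(size=(100, 5))             # 5 noisy items
print(round(cronbach_alpha(items), 2))                 # approx 0.83
```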

  12. Specific Trauma Subtypes Improve the Predictive Validity of the Harvard Trauma Questionnaire in Iraqi Refugees

    PubMed Central

    Arnetz, Bengt B.; Broadbridge, Carissa L.; Jamil, Hikmet; Lumley, Mark A.; Pole, Nnamdi; Barkho, Evone; Fakhouri, Monty; Talia, Yousif Rofa; Arnetz, Judith E.

    2014-01-01

    Background Trauma exposure contributes to poor mental health among refugees, and exposure often is measured using a cumulative index of items from the Harvard Trauma Questionnaire (HTQ). Few studies, however, have asked whether trauma subtypes derived from the HTQ could be superior to this cumulative index in predicting mental health outcomes. Methods A community sample of recently arrived Iraqi refugees (N = 298) completed the HTQ and measures of posttraumatic stress disorder (PTSD) and depression symptoms. Results Principal components analysis of HTQ items revealed a 5-component subtype model of trauma that accounted for more item variance than a 1-component solution. These trauma subtypes also accounted for more variance in PTSD and depression symptoms (12% and 10%, respectively) than did the cumulative trauma index (7% and 3%, respectively). Discussion Trauma subtypes provided more information than cumulative trauma in the prediction of negative mental health outcomes. Therefore, use of these subtypes may enhance the utility of the HTQ when assessing at-risk populations. PMID:24549491

  13. Predicting Levels of Reading and Writing Achievement in Typically Developing, English-Speaking 2nd and 5th Graders

    PubMed Central

    Jones, Jasmin Niedo; Abbott, Robert D.; Berninger, Virginia W.

    2014-01-01

    Human traits tend to fall along normal distributions. The aim of this research was to evaluate an evidence-based conceptual framework for predicting expected individual differences in reading and writing achievement outcomes for typically developing readers and writers in early and middle childhood from Verbal Reasoning with or without Working Memory Components (phonological, orthographic, and morphological word storage and processing units, phonological and orthographic loops, and rapid switching attention for cross-code integration). Verbal Reasoning (reconceptualized as Bidirectional Cognitive-Linguistic Translation) plus the Working Memory Components (reconceptualized as a language learning system) accounted for more variance than Verbal Reasoning alone, except for handwriting for which Working Memory Components alone were better predictors. Which predictors explained unique variance varied within and across reading (oral real word and pseudoword accuracy and rate, reading comprehension) and writing (handwriting, spelling, composing) skills and grade levels (second and fifth) in this longitudinal study. Educational applications are illustrated and theoretical and practical significance discussed. PMID:24948868

  14. A variation reduction allocation model for quality improvement to minimize investment and quality costs by considering suppliers’ learning curve

    NASA Astrophysics Data System (ADS)

    Rosyidi, C. N.; Jauhari, WA; Suhardi, B.; Hamada, K.

    2016-02-01

    Quality improvement must be performed in a company to maintain the competitiveness of its products in the market. The goal of such improvement is to increase customer satisfaction and the profitability of the company. In current practice, a company needs several suppliers to provide the components for the assembly of a final product; hence, quality improvement of the final product must involve the suppliers. In this paper, an optimization model to allocate variance reduction is developed. Variance reduction is an important element of quality improvement for both the manufacturer and the suppliers. To improve the quality of the suppliers' components, the manufacturer must invest a portion of its financial resources in the suppliers' learning processes. The objective function of the model minimizes the total cost, which consists of the investment cost and the quality costs, both internal and external. The learning curve determines how the suppliers' employees respond to the learning processes that reduce the variance of the components.
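
    One plausible reading of this trade-off, with invented functional forms: training investment grows linearly with effort per supplier, component variance shrinks along a power-law learning curve, and quality loss is Taguchi-style (proportional to variance). All parameters and both functional forms are hypothetical, not the paper's model.

```python
import numpy as np
from scipy.optimize import minimize

c, k = 50.0, 2000.0                        # per-hour training cost, loss coefficient
sigma0_sq = np.array([0.40, 0.25, 0.60])   # initial component variances per supplier
b = np.array([0.30, 0.20, 0.45])           # power-law learning exponents

def total_cost(t):
    """Investment in supplier training plus Taguchi-type quality loss,
    with variance shrinking along a power-law learning curve."""
    variance = sigma0_sq * t ** (-b)
    return c * t.sum() + k * variance.sum()

res = minimize(total_cost, x0=np.ones(3), bounds=[(1.0, None)] * 3)
print(res.x.round(1), round(res.fun, 1))   # optimal training hours per supplier
```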

  15. Specific trauma subtypes improve the predictive validity of the Harvard Trauma Questionnaire in Iraqi refugees.

    PubMed

    Arnetz, Bengt B; Broadbridge, Carissa L; Jamil, Hikmet; Lumley, Mark A; Pole, Nnamdi; Barkho, Evone; Fakhouri, Monty; Talia, Yousif Rofa; Arnetz, Judith E

    2014-12-01

    Trauma exposure contributes to poor mental health among refugees, and exposure often is measured using a cumulative index of items from the Harvard Trauma Questionnaire (HTQ). Few studies, however, have asked whether trauma subtypes derived from the HTQ could be superior to this cumulative index in predicting mental health outcomes. A community sample of recently arrived Iraqi refugees (N = 298) completed the HTQ and measures of posttraumatic stress disorder (PTSD) and depression symptoms. Principal components analysis of HTQ items revealed a 5-component subtype model of trauma that accounted for more item variance than a 1-component solution. These trauma subtypes also accounted for more variance in PTSD and depression symptoms (12 and 10%, respectively) than did the cumulative trauma index (7 and 3%, respectively). Trauma subtypes provided more information than cumulative trauma in the prediction of negative mental health outcomes. Therefore, use of these subtypes may enhance the utility of the HTQ when assessing at-risk populations.
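
    The reported comparison, variance in an outcome explained by a cumulative sum score versus principal-component subtype scores, can be sketched as two regressions. The data here are simulated (an outcome tied to one item cluster), not the study's, and the item counts are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
items = rng.binomial(1, 0.3, size=(298, 16))        # stand-in for HTQ exposure items
ptsd = items[:, :4].sum(1) + rng.normal(size=298)   # outcome driven by one "subtype"

cumulative = items.sum(1, keepdims=True)            # cumulative trauma index
subtypes = PCA(n_components=5).fit_transform(items) # 5-component subtype scores

for name, X in [("cumulative", cumulative), ("subtypes", subtypes)]:
    print(name, round(LinearRegression().fit(X, ptsd).score(X, ptsd), 3))
```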

  16. Impact of an equality constraint on the class-specific residual variances in regression mixtures: A Monte Carlo simulation study

    PubMed Central

    Kim, Minjung; Lamont, Andrea E.; Jaki, Thomas; Feaster, Daniel; Howe, George; Van Horn, M. Lee

    2015-01-01

    Regression mixture models are a novel approach for modeling heterogeneous effects of predictors on an outcome. In the model-building process, residual variances are often disregarded and simplifying assumptions are made without thorough examination of the consequences. This simulation study investigated the impact of an equality constraint on the residual variances across latent classes. We examine the consequences of constraining the residual variances on class enumeration (finding the true number of latent classes) and on parameter estimates, under a number of different simulation conditions meant to reflect the type of heterogeneity likely to exist in applied analyses. Results showed that bias in class enumeration increased as the difference in residual variances between the classes increased. Also, an inappropriate equality constraint on the residual variances greatly impacted estimated class sizes and showed the potential to greatly impact parameter estimates in each class. Results suggest that it is important to make assumptions about residual variances with care and to carefully report what assumptions are made. PMID:26139512

  17. Evaluation of genetic components in traits related to superovulation, in vitro fertilization, and embryo transfer in Holstein cattle

    USDA-ARS?s Scientific Manuscript database

    The objectives of this study were to estimate variance components and identify regions of the genome associated with traits related to embryo transfer in Holsteins. Reproductive technologies are used in the dairy industry to increase the reproductive rate of superior females. A drawback of these met...

  18. Physical context for theoretical approaches to sediment transport magnitude-frequency analysis in alluvial channels

    NASA Astrophysics Data System (ADS)

    Sholtes, Joel; Werbylo, Kevin; Bledsoe, Brian

    2014-10-01

    Theoretical approaches to magnitude-frequency analysis (MFA) of sediment transport in channels couple continuous flow probability density functions (PDFs) with power law flow-sediment transport relations (rating curves) to produce closed-form equations relating MFA metrics such as the effective discharge, Qeff, and fraction of sediment transported by discharges greater than Qeff, f+, to statistical moments of the flow PDF and rating curve parameters. These approaches have proven useful in understanding the theoretical drivers behind the magnitude and frequency of sediment transport. However, some of their basic assumptions and findings may not apply to natural rivers and streams with more complex flow-sediment transport relationships or management and design scenarios, which have finite time horizons. We use simple numerical experiments to test the validity of theoretical MFA approaches in predicting the magnitude and frequency of sediment transport. Median values of Qeff and f+ generated from repeated, synthetic, finite flow series diverge from those produced with theoretical approaches using the same underlying flow PDF. The closed-form relation for f+ is a monotonically increasing function of flow variance. However, using finite flow series, we find that f+ increases with flow variance to a threshold that increases with flow record length. By introducing a sediment entrainment threshold, we present a physical mechanism for the observed diverging relationship between Qeff and flow variance in fine and coarse-bed channels. Our work shows that through complex and threshold-driven relationships sediment transport mode, channel morphology, flow variance, and flow record length all interact to influence estimates of what flow frequencies are most responsible for transporting sediment in alluvial channels.
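
    The closed-form MFA setup can be sketched numerically: a flow PDF is multiplied by a power-law rating curve Qs = a*Q^b, the effective discharge Qeff is the peak of that product, and f+ is the fraction of long-term load carried by flows above Qeff. The lognormal PDF moments and rating-curve parameters a and b below are hypothetical.

```python
import numpy as np
from scipy import stats

a, b = 1e-4, 1.8                             # rating curve Qs = a * Q**b
flow_pdf = stats.lognorm(s=1.0, scale=50.0)  # flow PDF (hypothetical moments)

Q = np.linspace(1, 2000, 20000)
transport = flow_pdf.pdf(Q) * a * Q**b       # transport effectiveness curve
Qeff = Q[transport.argmax()]                 # effective discharge

# Fraction of the long-term load moved by flows above Qeff.
load = np.cumsum(transport)
f_plus = 1 - load[Q <= Qeff][-1] / load[-1]
print(round(Qeff, 1), round(f_plus, 3))
```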

  19. The seasonal predictability of blocking frequency in two seasonal prediction systems (CMCC, Met-Office) and the associated representation of low-frequency variability.

    NASA Astrophysics Data System (ADS)

    Athanasiadis, Panos; Gualdi, Silvio; Scaife, Adam A.; Bellucci, Alessio; Hermanson, Leon; MacLachlan, Craig; Arribas, Alberto; Materia, Stefano; Borelli, Andrea

    2014-05-01

    Low-frequency variability is a fundamental component of the atmospheric circulation. Extratropical teleconnections, the occurrence of blocking and the slow modulation of the jet streams and storm tracks are all different aspects of low-frequency variability. Part of the latter is attributed to the chaotic nature of the atmosphere and is inherently unpredictable. On the other hand, primarily as a response to boundary forcings, tropospheric low-frequency variability includes components that are potentially predictable. Seasonal forecasting faces the difficult task of predicting these components. Particularly in the extratropics, the current generation of seasonal forecasting systems seem to be approaching this target by realistically initializing most components of the climate system, using higher resolution and utilizing large ensemble sizes. Two seasonal prediction systems (Met-Office GloSea and CMCC-SPS-v1.5) are analyzed in terms of their representation of different aspects of extratropical low-frequency variability. The current operational Met-Office system achieves unprecedentedly high scores in predicting the winter-mean phase of the North Atlantic Oscillation (NAO, corr. 0.74 at 500 hPa) and the Pacific-N. American pattern (PNA, corr. 0.82). The CMCC system, considering its small ensemble size and coarse resolution, also achieves good scores (0.42 for NAO, 0.51 for PNA). Despite these positive features, both models suffer from biases in low-frequency variance, particularly in the N. Atlantic. Consequently, it is found that their intrinsic variability patterns (sectoral EOFs) differ significantly from the observed ones, and the known teleconnections are underrepresented. Regarding the representation of N. hemisphere blocking, after bias correction both systems exhibit a realistic climatology of blocking frequency. In this assessment, instantaneous blocking and large-scale persistent blocking events are identified using daily geopotential height fields at 500 hPa. Given a documented strong relationship between high-latitude N. Atlantic blocking and the NAO, one would expect predictive skill for the seasonal frequency of blocking comparable to that of the NAO. However, this remains elusive. Future efforts should be in the direction of reducing model biases not only in the mean but also in variability (band-passed variances).

  20. Mental health stigmatisation in deployed UK Armed Forces: a principal components analysis.

    PubMed

    Fertout, Mohammed; Jones, N; Keeling, M; Greenberg, N

    2015-12-01

    UK military research suggests that there is a significant link between current psychological symptoms, mental health stigmatisation and perceived barriers to care (stigma/BTC). Few studies have explored the construct of stigma/BTC in depth amongst deployed UK military personnel. Three survey datasets containing a stigma/BTC scale obtained during UK deployments to Iraq and Afghanistan were combined (n=3405 personnel). Principal component analysis was used to identify the key components of stigma/BTC. The relationship between psychological symptoms, the stigma/BTC components and help seeking were examined. Two components were identified: 'potential loss of personal military credibility and trust' (stigma Component 1, five items, 49.4% total model variance) and 'negative perceptions of mental health services and barriers to help seeking' (Component 2, six items, 11.2% total model variance). Component 1 was endorsed by 37.8% and Component 2 by 9.4% of personnel. Component 1 was associated with both assessed and subjective mental health, medical appointments and admission to hospital. Stigma Component 2 was associated with subjective and assessed mental health but not with medical appointments. Neither component was associated with help-seeking for subjective psycho-social problems. Potential loss of credibility and trust appeared to be associated with help-seeking for medical reasons but not for help-seeking for subjective psychosocial problems. Those experiencing psychological symptoms appeared to minimise the effects of stigma by seeking out a socially acceptable route into care, such as the medical consultation, whereas those who experienced a subjective mental health problem appeared willing to seek help from any source. Published by the BMJ Publishing Group Limited.

  1. Ancestral Relationships Using Metafounders: Finite Ancestral Populations and Across Population Relationships

    PubMed Central

    Legarra, Andres; Christensen, Ole F.; Vitezica, Zulma G.; Aguilar, Ignacio; Misztal, Ignacy

    2015-01-01

    Recent use of genomic (marker-based) relationships shows that relationships exist within and across base populations (breeds or lines). However, current treatment of pedigree relationships is unable to consider relationships within or across base populations, although such relationships must exist due to the finite size of the ancestral population and connections between populations. This complicates the conciliation of both approaches and, in particular, combining pedigree with genomic relationships. We present a coherent theoretical framework to consider base populations in pedigree relationships. We suggest a conceptual framework that considers each ancestral population as a finite-sized pool of gametes. This generates across-individual relationships and contrasts with the classical view, in which each population is considered an infinite, unrelated pool. Several ancestral populations may be connected and therefore related. Each ancestral population can be represented as a “metafounder,” a pseudo-individual included as a founder of the pedigree and similar to an “unknown parent group.” Metafounders have self- and across relationships according to a set of parameters, which measure ancestral relationships, i.e., homozygosities within populations and relationships across populations. These parameters can be estimated from existing pedigree and marker genotypes using maximum likelihood or a method based on summary statistics, for arbitrarily complex pedigrees. Equivalences of genetic variance and variance components between the classical and this new parameterization are shown. Segregation variance in crosses of populations is modeled. Efficient algorithms for computation of relationship matrices, their inverses, and inbreeding coefficients are presented. Use of metafounders leads to compatibility of genomic and pedigree relationship matrices and to simple computing algorithms. Examples and code are given. PMID:25873631

  2. Second-moment budgets in cloud topped boundary layers: A large-eddy simulation study

    NASA Astrophysics Data System (ADS)

    Heinze, Rieke; Mironov, Dmitrii; Raasch, Siegfried

    2015-06-01

    A detailed analysis of second-order moment budgets for cloud topped boundary layers (CTBLs) is performed using high-resolution large-eddy simulation (LES). Two CTBLs are simulated—one with trade wind shallow cumuli, and the other with nocturnal marine stratocumuli. Approximations to the ensemble-mean budgets of the Reynolds-stress components, of the fluxes of two quasi-conservative scalars, and of the scalar variances and covariance are computed by averaging the LES data over horizontal planes and over several hundred time steps. Importantly, the subgrid scale contributions to the budget terms are accounted for. Analysis of the LES-based second-moment budgets reveals, among other things, a paramount importance of the pressure scrambling terms in the Reynolds-stress and scalar-flux budgets. The pressure-strain correlation tends to evenly redistribute kinetic energy between the components, leading to the growth of horizontal-velocity variances at the expense of the vertical-velocity variance which is produced by buoyancy over most of both CTBLs. The pressure gradient-scalar covariances are the major sink terms in the budgets of scalar fluxes. The third-order transport proves to be of secondary importance in the scalar-flux budgets. However, it plays a key role in maintaining budgets of TKE and of the scalar variances and covariance. Results from the second-moment budget analysis suggest that the accuracy of description of the CTBL structure within the second-order closure framework strongly depends on the fidelity of parameterizations of the pressure scrambling terms in the flux budgets and of the third-order transport terms in the variance budgets. This article was corrected on 26 JUN 2015. See the end of the full text for details.

  3. Principal Component Analysis of Lipid Molecule Conformational Changes in Molecular Dynamics Simulations.

    PubMed

    Buslaev, Pavel; Gordeliy, Valentin; Grudinin, Sergei; Gushchin, Ivan

    2016-03-08

    Molecular dynamics simulations of lipid bilayers are ubiquitous nowadays. Usually, either global properties of the bilayer or some particular characteristics of each lipid molecule are evaluated in such simulations, but the structural properties of the molecules as a whole are rarely studied. Here, we show how a comprehensive quantitative description of conformational space and dynamics of a single lipid molecule can be achieved via the principal component analysis (PCA). We illustrate the approach by analyzing and comparing simulations of DOPC bilayers obtained using eight different force fields: all-atom generalized AMBER, CHARMM27, CHARMM36, Lipid14, and Slipids and united-atom Berger, GROMOS43A1-S3, and GROMOS54A7. Similarly to proteins, most of the structural variance of a lipid molecule can be described by only a few principal components. These major components are similar in different simulations, although there are notable distinctions between the older and newer force fields and between the all-atom and united-atom force fields. The DOPC molecules in the simulations generally equilibrate on the time scales of tens to hundreds of nanoseconds. The equilibration is the slowest in the GAFF simulation and the fastest in the Slipids simulation. Somewhat unexpectedly, the equilibration in the united-atom force fields is generally slower than in the all-atom force fields. Overall, there is a clear separation between the more variable previous generation force fields and significantly more similar new generation force fields (CHARMM36, Lipid14, Slipids). We expect that the presented approaches will be useful for quantitative analysis of conformations and dynamics of individual lipid molecules in other simulations of lipid bilayers.
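
    The per-molecule PCA described above reduces, once frames are aligned, to an SVD of the centered, flattened atomic coordinates. A minimal sketch, with a random array standing in for an aligned DOPC trajectory:

```python
import numpy as np

def lipid_pca(coords):
    """PCA of a single lipid's conformational ensemble.
    coords: (n_frames, n_atoms, 3) trajectory, assumed already aligned."""
    X = coords.reshape(coords.shape[0], -1)
    X = X - X.mean(axis=0)
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    var = s**2 / (X.shape[0] - 1)
    return var / var.sum(), Vt          # explained-variance ratios, PC axes

rng = np.random.default_rng(5)
coords = rng.normal(size=(1000, 50, 3))  # placeholder for an aligned trajectory
ratios, _ = lipid_pca(coords)
print("variance captured by first 3 PCs:", round(ratios[:3].sum(), 3))
```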

  4. A chemometric approach for characterization of serum transthyretin in familial amyloidotic polyneuropathy type I (FAP-I) by electrospray ionization-ion mobility mass spectrometry.

    PubMed

    Pont, Laura; Sanz-Nebot, Victoria; Vilaseca, Marta; Jaumot, Joaquim; Tauler, Roma; Benavente, Fernando

    2018-05-01

    In this study, we describe a chemometric data analysis approach to assist in the interpretation of the complex datasets from the analysis of high-molecular-mass oligomeric proteins by ion mobility mass spectrometry (IM-MS). The homotetrameric protein transthyretin (TTR) is involved in familial amyloidotic polyneuropathy type I (FAP-I). FAP-I is associated with a specific TTR mutant variant (TTR(Met30)) that can be easily detected by analyzing the monomeric forms of the mutant protein. However, the mechanism of protein misfolding and aggregation onset, which could be triggered by structural changes in the native tetrameric protein, remains under investigation. Serum TTR from healthy controls and FAP-I patients was purified under non-denaturing conditions by conventional immunoprecipitation in solution and analyzed by IM-MS. IM-MS allowed separation and characterization of several tetrameric, trimeric and dimeric TTR gas ions due to their differential drift times. After appropriate data pre-processing, multivariate curve resolution alternating least squares (MCR-ALS) was applied to the complex datasets. A group of seven independent components, characterized by their ion mobility profiles and mass spectra, was resolved to explain the observed data variance in control and patient samples. Then, principal component analysis (PCA) and partial least squares discriminant analysis (PLS-DA) were considered for exploration and classification. Only four out of the seven resolved components were enough for an accurate differentiation. Furthermore, the specific TTR ions identified in the mass spectra of these components and the resolved ion mobility profiles provided a straightforward insight into the most relevant oligomeric TTR proteoforms for the disease. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. Compounding approach for univariate time series with nonstationary variances

    NASA Astrophysics Data System (ADS)

    Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich

    2015-12-01

    A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.

  6. Compounding approach for univariate time series with nonstationary variances.

    PubMed

    Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich

    2015-12-01

    A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.
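
    The compounding recipe can be sketched on a toy nonstationary series: extract local variances from non-overlapping windows, then mix Gaussians over the empirical variance distribution to reproduce the heavy-tailed long-run law. The variance-wandering process below is invented for illustration and is not the paper's fan or exchange-rate data.

```python
import numpy as np

def local_variances(x, window=100):
    """Variance estimated within each non-overlapping window."""
    n = len(x) // window
    return x[: n * window].reshape(n, window).var(axis=1, ddof=1)

# Toy nonstationary series: Gaussian with a slowly wandering variance.
rng = np.random.default_rng(6)
sigma = np.exp(0.5 * np.cumsum(rng.normal(0, 0.05, 50000)) / 50)
x = rng.normal(0, sigma)

v = local_variances(x)
# Compounding: draw a variance from its empirical distribution, then a
# Gaussian with that variance; the mixture mimics the long-run statistics.
x_comp = rng.normal(0, np.sqrt(rng.choice(v, size=len(x))))
print("kurtosis, original vs compounded:",
      round(((x / x.std())**4).mean(), 2),
      round(((x_comp / x_comp.std())**4).mean(), 2))
```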

  7. Once upon Multivariate Analyses: When They Tell Several Stories about Biological Evolution.

    PubMed

    Renaud, Sabrina; Dufour, Anne-Béatrice; Hardouin, Emilie A; Ledevin, Ronan; Auffray, Jean-Christophe

    2015-01-01

    Geometric morphometrics aims to characterize the geometry of complex traits. It is therefore multivariate in essence. The most popular methods to investigate patterns of differentiation in this context are (1) the Principal Component Analysis (PCA), which is an eigenvalue decomposition of the total variance-covariance matrix among all specimens; (2) the Canonical Variate Analysis (CVA, a.k.a. linear discriminant analysis (LDA) for more than two groups), which aims at separating the groups by maximizing the between-group to within-group variance ratio; (3) the between-group PCA (bgPCA), which investigates patterns of between-group variation without standardizing by the within-group variance. Standardizing within-group variance, as performed in the CVA, distorts the relationships among groups, an effect that is particularly strong if the variance is similarly oriented in all groups. Such a shared direction of main morphological variance may occur and have a biological meaning, for instance corresponding to the most frequent standing genetic variation in a population. Here we undertake a case study of the evolution of house mouse molar shape across various islands, based on a real dataset and simulations. We investigated how patterns of main variance influence the depiction of among-group differentiation under the PCA, bgPCA and CVA. Without arguing that one method performs 'better' than another, it emerges that working on the total or between-group variance (PCA and bgPCA) will tend to put the focus on the role of the direction of main variance as a line of least resistance to evolution. Standardizing by the within-group variance (CVA), by dampening the expression of this line of least resistance, has the potential to reveal other relevant patterns of differentiation that may otherwise be blurred.
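
    The three analyses can be contrasted on simulated grouped data: PCA of all specimens, bgPCA as a PCA of group means, and CVA as a bgPCA after whitening by the pooled within-group covariance. This is a schematic comparison, not the authors' morphometric pipeline; the group structure and covariances are simulated so that within-group variance (axis 1) and between-group separation (equal loadings) point in different directions.

```python
import numpy as np

def pca_axes(X):
    Xc = X - X.mean(0)
    return np.linalg.svd(Xc, full_matrices=False)[2]

def bg_pca_axes(X, groups):
    """PCA of the group means: between-group variation only."""
    means = np.array([X[groups == g].mean(0) for g in np.unique(groups)])
    return pca_axes(means)

def cva_axes(X, groups):
    """Whiten by the pooled within-group covariance, then bgPCA;
    returned rows are discriminant directions in the original space."""
    labels = np.unique(groups)
    W = sum(np.cov(X[groups == g].T) for g in labels) / len(labels)
    d, U = np.linalg.eigh(W)
    whiten = U @ np.diag(d**-0.5) @ U.T
    return bg_pca_axes(X @ whiten, groups) @ whiten

rng = np.random.default_rng(7)
groups = np.repeat(np.arange(3), 40)
# Large within-group variance along axis 1; group means shifted along (1,...,1).
X = rng.normal(size=(120, 5)) @ np.diag([3, 1, 1, 1, 1]) + groups[:, None]
for name, axes in [("PCA", pca_axes(X)), ("bgPCA", bg_pca_axes(X, groups)),
                   ("CVA", cva_axes(X, groups))]:
    print(name, axes[0].round(2))
```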

  8. Good genes, genetic compatibility and the evolution of polyandry: use of the diallel cross to address competing hypotheses.

    PubMed

    Ivy, T M

    2007-03-01

    Genetic benefits can enhance the fitness of polyandrous females through the high intrinsic genetic quality of females' mates or through the interaction between female and male genes. I used a full diallel cross, a quantitative genetics design that involves all possible crosses among a set of genetically homogeneous lines, to determine the mechanism through which polyandrous female decorated crickets (Gryllodes sigillatus) obtain genetic benefits. I measured several traits related to fitness and partitioned the phenotypic variance into components representing the contribution of additive genetic variance ('good genes'), nonadditive genetic variance (genetic compatibility), as well as maternal and paternal effects. The results reveal a significant variance attributable to both nonadditive and additive sources in the measured traits, and their influence depended on which trait was considered. The lack of congruence in sources of phenotypic variance among these fitness-related traits suggests that the evolution and maintenance of polyandry are unlikely to have resulted from one selective influence, but rather are the result of the collective effects of a number of factors.

  9. Estimating variance components and breeding values for number of oocytes and number of embryos in dairy cattle using a single-step genomic evaluation.

    PubMed

    Cornelissen, M A M C; Mullaart, E; Van der Linde, C; Mulder, H A

    2017-06-01

    Reproductive technologies such as multiple ovulation and embryo transfer (MOET) and ovum pick-up (OPU) accelerate genetic improvement in dairy breeding schemes. To enhance the efficiency of embryo production, breeding values for traits such as number of oocytes (NoO) and number of MOET embryos (NoM) can help in selection of donors with high MOET or OPU efficiency. The aim of this study was therefore to estimate variance components and (genomic) breeding values for NoO and NoM based on Dutch Holstein data. Furthermore, a 10-fold cross-validation was carried out to assess the accuracy of pedigree and genomic breeding values for NoO and NoM. For NoO, 40,734 OPU sessions between 1993 and 2015 were analyzed. These OPU sessions originated from 2,543 donors, from which 1,144 were genotyped. For NoM, 35,695 sessions between 1994 and 2015 were analyzed. These MOET sessions originated from 13,868 donors, from which 3,716 were genotyped. Analyses were done using only pedigree information and using a single-step genomic BLUP (ssGBLUP) approach combining genomic information and pedigree information. Heritabilities were very similar based on pedigree information or based on ssGBLUP [i.e., 0.32 (standard error = 0.03) for NoO and 0.21 (standard error = 0.01) for NoM with pedigree, 0.31 (standard error = 0.03) for NoO, and 0.22 (standard error = 0.01) for NoM with ssGBLUP]. For animals without their own information as mimicked in the cross-validation, the accuracy of pedigree-based breeding values was 0.46 for NoO and NoM. The accuracies of genomic breeding values from ssGBLUP were 0.54 for NoO and 0.52 for NoM. These results show that including genomic information increases the accuracies. These moderate accuracies in combination with a large genetic variance show good opportunities for selection of potential bull dams. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  10. Genome-wide association study for ketosis in US Jerseys using producer-recorded data.

    PubMed

    Parker Gaddis, K L; Megonigal, J H; Clay, J S; Wolfe, C W

    2018-01-01

    Ketosis is one of the most frequently reported metabolic health events in dairy herds. Several genetic analyses of ketosis in dairy cattle have been conducted; however, few have focused specifically on Jersey cattle. The objectives of this research included estimating variance components for susceptibility to ketosis and identification of genomic regions associated with ketosis in Jersey cattle. Voluntary producer-recorded health event data related to ketosis were available from Dairy Records Management Systems (Raleigh, NC). Standardization was implemented to account for the various acronyms used by producers to designate an incidence of ketosis. Events were restricted to the first reported incidence within 60 d after calving in first through fifth parities. After editing, there were a total of 42,233 records from 23,865 cows. A total of 1,750 genotyped animals were used for genomic analyses using 60,671 markers. Because of the binary nature of the trait, a threshold animal model was fitted using THRGIBBS1F90 (version 2.110) using only pedigree information, and genomic information was incorporated using a single-step genomic BLUP approach. Individual single nucleotide polymorphism (SNP) effects and the proportion of variance explained by 10-SNP windows were calculated using postGSf90 (version 1.38). Heritability of susceptibility to ketosis was 0.083 [standard deviation (SD) = 0.021] and 0.078 (SD = 0.018) in pedigree-based and genomic analyses, respectively. The marker with the largest associated effect was located on chromosome 10 at 66.3 Mbp. The 10-SNP window explaining the largest proportion of variance (0.70%) was located on chromosome 6 beginning at 56.1 Mbp. Gene Ontology (GO) and Medical Subject Heading (MeSH) enrichment analyses identified several overrepresented processes and terms related to immune function. Our results indicate that there is a genetic component related to ketosis susceptibility in Jersey cattle and, as such, genetic selection for improved resistance to ketosis is feasible. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  11. Estimation of within-stratum variance for sample allocation: Foreign commodity production forecasting

    NASA Technical Reports Server (NTRS)

    Chhikara, R. S.; Perry, C. R., Jr. (Principal Investigator)

    1980-01-01

    The problem of determining the stratum variances required for an optimum sample allocation for remotely sensed crop surveys is investigated, with emphasis on an approach based on the concept of stratum variance as a function of the sampling unit size. A methodology using existing and easily available historical statistics is developed for obtaining initial estimates of stratum variances. The procedure is applied to wheat in the U.S. Great Plains and is evaluated based on the numerical results obtained. It is shown that the proposed technique is viable and performs satisfactorily with the use of a conservative value (smaller than the expected value) for the field size and with the use of crop statistics from the small political division level.
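
    Once initial stratum variances are in hand, the standard optimum (Neyman) allocation samples each stratum in proportion to N_h * S_h. The stratum sizes and standard deviations below are hypothetical, not the report's wheat figures.

```python
import numpy as np

def neyman_allocation(N_h, S_h, n_total):
    """Neyman allocation: sample size in stratum h proportional to
    N_h * S_h, given stratum sizes N_h and stratum std devs S_h."""
    w = N_h * S_h
    return np.maximum(1, np.round(n_total * w / w.sum())).astype(int)

N_h = np.array([120, 300, 80])    # hypothetical stratum sizes
S_h = np.array([5.0, 2.0, 8.0])   # initial stratum std-dev estimates
print(neyman_allocation(N_h, S_h, 60))  # [20 20 21]
```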

  12. Capturing multidimensionality in stroke aphasia: mapping principal behavioural components to neural structures

    PubMed Central

    Butler, Rebecca A.

    2014-01-01

    Stroke aphasia is a multidimensional disorder in which patient profiles reflect variation along multiple behavioural continua. We present a novel approach to separating the principal aspects of chronic aphasic performance and isolating their neural bases. Principal components analysis was used to extract core factors underlying the performance of 31 participants with chronic stroke aphasia on a large, detailed battery of behavioural assessments. The rotated principal components analysis revealed three key factors, which we labelled as phonology, semantic and executive/cognition on the basis of the common elements in the tests that loaded most strongly on each component. The phonology factor explained the most variance, followed by the semantic factor and then the executive-cognition factor. The use of principal components analysis rendered participants’ scores on these three factors orthogonal and therefore ideal for use as simultaneous continuous predictors in a voxel-based correlational methodology analysis of high resolution structural scans. Phonological processing ability was uniquely related to left posterior perisylvian regions including Heschl’s gyrus, posterior middle and superior temporal gyri and superior temporal sulcus, as well as the white matter underlying the posterior superior temporal gyrus. The semantic factor was uniquely related to the left anterior middle temporal gyrus and the underlying temporal stem. The executive-cognition factor was not correlated selectively with the structural integrity of any particular region, as might be expected in light of the widely distributed and multi-functional nature of the regions that support executive functions. The identified phonological and semantic areas align well with those highlighted by other methodologies such as functional neuroimaging and neurostimulation. The use of principal components analysis allowed us to characterize the neural bases of participants’ behavioural performance more robustly and selectively than the use of raw assessment scores or diagnostic classifications, because principal components analysis extracts statistically unique, orthogonal behavioural components of interest. As such, in addition to improving our understanding of lesion–symptom mapping in stroke aphasia, the same approach could be used to clarify brain–behaviour relationships in other neurological disorders. PMID:25348632

  13. A two-dimensional spectrum analysis for sedimentation velocity experiments of mixtures with heterogeneity in molecular weight and shape.

    PubMed

    Brookes, Emre; Cao, Weiming; Demeler, Borries

    2010-02-01

    We report a model-independent analysis approach for fitting sedimentation velocity data which permits simultaneous determination of shape and molecular weight distributions for mono- and polydisperse solutions of macromolecules. Our approach allows for heterogeneity in the frictional domain, providing a more faithful description of the experimental data for cases where frictional ratios are not identical for all components. Because of increased accuracy in the frictional properties of each component, our method also provides more reliable molecular weight distributions in the general case. The method is based on a fine-grained two-dimensional grid search over s and f/f0, where the grid is a linear combination of whole boundary models represented by finite element solutions of the Lamm equation with sedimentation and diffusion parameters corresponding to the grid points. A Monte Carlo approach is used to characterize confidence limits for the determined solutes. Computational algorithms addressing the very large memory needs for a fine-grained search are discussed. The method is suitable for globally fitting multi-speed experiments, and constraints based on prior knowledge about the experimental system can be imposed. Time- and radially invariant noise can be eliminated. Serial and parallel implementations of the method are presented. We demonstrate with simulated and experimental data of known composition that our method provides superior accuracy and lower variance fits to experimental data compared to other methods in use today, and show that it can be used to identify modes of aggregation and slow polymerization.
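
    The numerical core of such a fit, expressing the observed boundary as a constrained linear combination of whole-boundary models on an (s, f/f0) grid, can be sketched with a non-negative least squares solve, which is one standard way to impose non-negative amplitudes (an assumption here, not a claim about this paper's exact solver). In the real method each column is a finite element Lamm equation solution; simulate_boundary below is a crude sigmoid stand-in used only to make the sketch self-contained.

      import numpy as np
      from scipy.optimize import nnls

      def simulate_boundary(s, ff0, t, r):
          """Placeholder for a finite element Lamm equation solution at one
          (s, f/f0) grid point; returns a flattened set of boundary scans."""
          front = 6.0 + 3e-5 * s * t[:, None]       # toy moving boundary position
          width = 0.01 * ff0                        # toy diffusional broadening
          return (1.0 / (1.0 + np.exp(-(r[None, :] - front) / width))).ravel()

      t = np.linspace(0, 3600, 40)                  # scan times (s)
      r = np.linspace(5.9, 7.2, 100)                # radial positions (cm)
      s_grid = np.linspace(1, 10, 20)               # sedimentation coefficients
      ff0_grid = np.linspace(1.0, 4.0, 10)          # frictional ratios

      # Design matrix: one column per (s, f/f0) grid point.
      A = np.column_stack([simulate_boundary(s, f, t, r)
                           for s in s_grid for f in ff0_grid])
      data = A[:, [30, 95]] @ np.array([0.7, 0.3])  # synthetic two-solute mixture

      amplitudes, residual_norm = nnls(A, data)     # non-negative grid weights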

  14. Intuitive Analysis of Variance-- A Formative Assessment Approach

    ERIC Educational Resources Information Center

    Trumpower, David

    2013-01-01

    This article describes an assessment activity that can show students how much they already understand about statistics intuitively, while also alerting them to common misunderstandings. How the activity can be used formatively to help improve students' conceptual understanding of analysis of variance is discussed. (Contains 1 figure and 1 table.)

  15. Advanced Communication Processing Techniques Held in Ruidoso, New Mexico on 14-17 May 1989

    DTIC Science & Technology

    1990-01-01

    Only fragments of the scanned proceedings are legible. Recoverable excerpts list evaluation criteria (probability of detection and false alarm, variances of parameter estimators, probability of correct classification and rejection) and reference the standard Neyman-Pearson approach to detection; the remainder of the excerpt is OCR noise from a feature-selection table.

  16. Logistic and Multiple Regression: A Two-Pronged Approach to Accurately Estimate Cost Growth in Major DoD Weapon Systems

    DTIC Science & Technology

    2004-03-01

    Recoverable excerpts describe a Breusch-Pagan test for constant variance of the regression residuals: using Microsoft Excel®, a p-value of 0.841237 was calculated; this high p-value, above the alpha of 0.05, indicates that the residuals pass the test. The excerpts also reference a Shapiro-Wilk test for normality applied to the support (reduced) model (OLS).
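
    These spreadsheet diagnostics are easy to reproduce programmatically; a minimal sketch with statsmodels and SciPy on synthetic data (the regressors and response are placeholders, not the cost-growth model):

      import numpy as np
      import statsmodels.api as sm
      from statsmodels.stats.diagnostic import het_breuschpagan
      from scipy import stats

      rng = np.random.default_rng(1)
      X = sm.add_constant(rng.normal(size=(60, 2)))
      y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(size=60)

      fit = sm.OLS(y, X).fit()
      lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(fit.resid, X)
      print(f"Breusch-Pagan p = {lm_pvalue:.3f}")   # > 0.05: no evidence against constant variance
      print(f"Shapiro-Wilk p = {stats.shapiro(fit.resid).pvalue:.3f}")  # normality check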

  17. A comparison of methods for DPLL loop filter design

    NASA Technical Reports Server (NTRS)

    Aguirre, S.; Hurd, W. J.; Kumar, R.; Statman, J.

    1986-01-01

    Four design methodologies for loop filters for a class of digital phase-locked loops (DPLLs) are presented. The first design maps an optimum analog filter into the digital domain; the second designs a filter that minimizes, in discrete time, a weighted combination of the variance of the phase error due to noise and the sum square of the deterministic phase-error component; the third uses Kalman filter estimation theory to design a filter composed of a least squares fading memory estimator and a predictor. The last design relies on classical theory, including rules for the design of compensators. Linear analysis is used throughout the article to compare the different designs, covering stability, steady-state performance and transient behavior of the loops. Design methodology is not critical when the loop update rate can be made high relative to the loop bandwidth, as the performance then approaches that of continuous time. For low update rates, however, the minimization method is significantly superior to the other methods.

  18. Quantifying the uncertainty in heritability

    PubMed Central

    Furlotte, Nicholas A; Heckerman, David; Lippert, Christoph

    2014-01-01

    The use of mixed models to determine narrow-sense heritability and related quantities such as SNP heritability has received much recent attention. Less attention has been paid to the inherent variability in these estimates. One approach for quantifying variability in estimates of heritability is a frequentist approach, in which heritability is estimated using maximum likelihood and its variance is quantified through an asymptotic normal approximation. An alternative approach is to quantify the uncertainty in heritability through its Bayesian posterior distribution. In this paper, we develop the latter approach, make it computationally efficient and compare it to the frequentist approach. We show theoretically that, for a sufficiently large sample size and intermediate values of heritability, the two approaches provide similar results. Using the Atherosclerosis Risk in Communities cohort, we show empirically that the two approaches can give different results and that the variance/uncertainty can remain large. PMID:24670270
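
    A toy version of the Bayesian route: with a relatedness matrix K and the model y ~ N(0, σ²(h²K + (1-h²)I)), a flat-prior posterior for h² can be evaluated on a grid after one eigendecomposition. The sketch profiles out the nuisance variance σ² rather than integrating it, and simulates both K and y; the paper's method is more general and far more efficient.

      import numpy as np

      rng = np.random.default_rng(2)
      n = 300
      B = rng.normal(size=(n, 50))
      K = B @ B.T / 50                      # toy positive semi-definite "kinship"
      h2_true, s2 = 0.5, 1.0
      cov = s2 * (h2_true * K + (1 - h2_true) * np.eye(n))
      y = rng.multivariate_normal(np.zeros(n), cov)

      d, U = np.linalg.eigh(K)              # one-time rotation of the data
      yt = U.T @ y

      def loglik(h2):
          lam = h2 * d + (1 - h2)           # eigenvalues of h2*K + (1-h2)*I
          s2_hat = np.mean(yt**2 / lam)     # ML estimate of total variance given h2
          return -0.5 * (np.sum(np.log(lam)) + n * np.log(s2_hat) + n)

      grid = np.linspace(0.01, 0.99, 99)
      ll = np.array([loglik(h) for h in grid])
      post = np.exp(ll - ll.max()); post /= post.sum()   # flat-prior grid posterior
      print("posterior mean h2:", np.sum(grid * post))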

  19. Separation of Trend and Chaotic Components of Time Series and Estimation of Their Characteristics by Linear Splines

    NASA Astrophysics Data System (ADS)

    Kryanev, A. V.; Ivanov, V. V.; Romanova, A. O.; Sevastyanov, L. A.; Udumyan, D. K.

    2018-03-01

    This paper considers the problem of separating the trend and the chaotic component of chaotic time series in the absence of information on the characteristics of the chaotic component. Such a problem arises in nuclear physics, biomedicine, and many other applied fields. The scheme has two stages. At the first stage, smoothing linear splines with different values of the smoothing parameter are used to separate the "trend component." At the second stage, the method of least squares is used to find the unknown variance σ² of the noise component.
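
    The two-stage scheme can be illustrated with SciPy's smoothing splines (k=1 gives linear splines). The series is synthetic, and the smoothing parameter is set from the simulation's known noise level purely for illustration; in practice it must be chosen from the data.

      import numpy as np
      from scipy.interpolate import UnivariateSpline

      rng = np.random.default_rng(3)
      t = np.linspace(0, 10, 200)
      sigma_true = 0.4
      y = np.sin(t) + 0.3 * t + rng.normal(scale=sigma_true, size=t.size)  # trend + noise

      # Stage 1: separate the trend component with a smoothing linear spline.
      trend = UnivariateSpline(t, y, k=1, s=t.size * sigma_true**2)

      # Stage 2: least-squares estimate of the unknown noise variance.
      resid = y - trend(t)
      sigma2_hat = np.sum(resid**2) / (resid.size - 1)
      print(f"estimated sigma^2 = {sigma2_hat:.3f} (true {sigma_true**2:.3f})")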

  20. Retest of a Principal Components Analysis of Two Household Environmental Risk Instruments.

    PubMed

    Oneal, Gail A; Postma, Julie; Odom-Maryon, Tamara; Butterfield, Patricia

    2016-08-01

    Household Risk Perception (HRP) and Self-Efficacy in Environmental Risk Reduction (SEERR) instruments were developed for a public health nurse-delivered intervention designed to reduce home-based environmental health risks among rural, low-income families. The purpose of this study was to test both instruments in a second low-income population that differed geographically and economically from the original sample. Participants (N = 199) were recruited from the Women, Infants, and Children (WIC) program. Paper-and-pencil surveys were collected at WIC sites by research-trained student nurses. Exploratory principal components analysis (PCA) was conducted, and comparisons were made to the original PCA for the purpose of data reduction. Instruments showed satisfactory Cronbach alpha values for all components. HRP components were reduced from five to four, which explained 70% of variance. The components were labeled sensed risks, unseen risks, severity of risks, and knowledge. In contrast to the original testing, the environmental tobacco smoke (ETS) items did not form a separate component of the HRP. The SEERR analysis demonstrated four components explaining 71% of variance, with patterns of items similar to those in the first study, including a component on ETS, but some differences in item location. Although low-income populations constituted both samples, differences in demographics and risk exposures may have played a role in component and item locations. Findings provided justification for changing or reducing items, and for tailoring the instruments to population-level risks and behaviors. Although analytic refinement will continue, both instruments advance the measurement of environmental health risk perception and self-efficacy. © 2016 Wiley Periodicals, Inc.
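
    For reference, the internal-consistency statistic reported for each component reduces to a few lines; a generic sketch on placeholder item data:

      import numpy as np

      def cronbach_alpha(items):
          """items: (respondents x items) matrix for one component/scale."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_vars / total_var)

      rng = np.random.default_rng(4)
      latent = rng.normal(size=(199, 1))                # N = 199, as in the study
      items = latent + 0.8 * rng.normal(size=(199, 5))  # 5 correlated placeholder items
      print(f"alpha = {cronbach_alpha(items):.2f}")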

  1. Variability of ICA decomposition may impact EEG signals when used to remove eyeblink artifacts

    PubMed Central

    PONTIFEX, MATTHEW B.; GWIZDALA, KATHRYN L.; PARKS, ANDREW C.; BILLINGER, MARTIN; BRUNNER, CLEMENS

    2017-01-01

    Despite the growing use of independent component analysis (ICA) algorithms for isolating and removing eyeblink-related activity from EEG data, we have limited understanding of how variability associated with ICA uncertainty may be influencing the reconstructed EEG signal after removing the eyeblink artifact components. To characterize the magnitude of this ICA uncertainty and to understand the extent to which it may influence findings within ERP and EEG investigations, ICA decompositions of EEG data from 32 college-aged young adults were repeated 30 times for three popular ICA algorithms. Following each decomposition, eyeblink components were identified and removed. The remaining components were back-projected, and the resulting clean EEG data were further used to analyze ERPs. Findings revealed that ICA uncertainty results in variation in P3 amplitude as well as variation across all EEG sampling points, but differs across ICA algorithms as a function of the spatial location of the EEG channel. This investigation highlights the potential of ICA uncertainty to introduce additional sources of variance when the data are back-projected without artifact components. Careful selection of ICA algorithms and parameters can reduce the extent to which ICA uncertainty may introduce an additional source of variance within ERP/EEG studies. PMID:28026876
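
    The decompose-remove-back-project loop under study can be sketched with scikit-learn's FastICA. Real EEG pipelines (e.g., EEGLAB or MNE) add filtering and principled component selection; here blink detection is simplified to correlation with the most frontal channel, and the data are synthetic.

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(5)
      n_samples, n_channels = 5000, 8
      eeg = rng.normal(size=(n_samples, n_channels))
      blink = (rng.random(n_samples) < 0.002).astype(float)
      blink = np.convolve(blink, np.hanning(100), mode="same")   # smooth blink bursts
      eeg += np.outer(blink, np.linspace(5, 0.5, n_channels))    # frontal-weighted artifact

      ica = FastICA(n_components=n_channels, random_state=0)
      sources = ica.fit_transform(eeg)                           # samples x components

      # Flag the component most correlated with the most frontal channel as the blink.
      corr = [abs(np.corrcoef(sources[:, k], eeg[:, 0])[0, 1]) for k in range(n_channels)]
      sources[:, int(np.argmax(corr))] = 0.0                     # remove artifact component

      eeg_clean = ica.inverse_transform(sources)                 # back-projection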

  2. Exploring Oral Cancer Patients' Preference in Medical Decision Making and Quality of Life.

    PubMed

    Cheng, Sun-Long; Liao, Hsien-Hua; Shueng, Pei-Wei; Lee, Hsi-Chieh; Cheewakriangkrai, Chalong; Chang, Chi-Chang

    2017-01-01

    Little is known about how shared medical decision making (SMDM) relates to quality of life in oral cancer. This study aimed to understand what drives patients' SMDM and to explore the interrelated components of quality of life, so that patients can be offered early assessment. All consenting patients completed the SMDM questionnaire and the 36-Item Short Form (SF-36). Regression analyses were conducted to find predictors of quality of life among oral cancer patients. The proposed model predicted 57.4% of the variance in patients' SF-36 mental component scores; patient mental component summary scores were associated with smoking habit (β=-0.3449, p=0.022), autonomy (β=-0.226, p=0.018) and control preference (β=-0.388, p=0.007). The proposed model predicted 42.6% of the variance in patients' SF-36 physical component scores; patient physical component summary scores were associated with higher education (β=0.288, p=0.007), employment status (β=-0.225, p=0.033), perceived involvement (β=-0.606, p=0.011) and risk communication (β=-0.558, p=0.019). Future research is necessary to determine whether oral cancer patients would benefit from early screening and intervention to address shared medical decision making.

  3. Geochemical differentiation processes for arc magma of the Sengan volcanic cluster, Northeastern Japan, constrained from principal component analysis

    NASA Astrophysics Data System (ADS)

    Ueki, Kenta; Iwamori, Hikaru

    2017-10-01

    In this study, with a view to understanding the structure of high-dimensional geochemical data and the chemical processes at work in the evolution of arc magmas, we employed principal component analysis (PCA) to evaluate the compositional variations of volcanic rocks from the Sengan volcanic cluster of the Northeastern Japan Arc. We analyzed the trace element compositions of various arc volcanic rocks sampled from 17 different volcanoes in the cluster. The PCA results demonstrated that the first three principal components accounted for 86% of the geochemical variation in the magma of the Sengan region. Based on the relationships between the principal components and the major elements, the mass-balance relationships with respect to the contributions of minerals, the composition of plagioclase phenocrysts, the geothermal gradient, and the seismic velocity structure in the crust, the first, second, and third principal components appear to represent magma mixing, crystallization of olivine/pyroxene, and crystallization of plagioclase, respectively. These accounted for 59%, 20%, and 6%, respectively, of the variance over the entire compositional range, indicating that magma mixing accounted for the largest share of the geochemical variation of the arc magma. Our results indicate that crustal processes dominate the geochemical variation of magma in the Sengan volcanic cluster.

  4. Speckle variance optical coherence tomography of blood flow in the beating mouse embryonic heart.

    PubMed

    Grishina, Olga A; Wang, Shang; Larina, Irina V

    2017-05-01

    Efficient separation of blood and cardiac wall in the beating embryonic heart is essential and critical for experiment-based computational modelling and analysis of early-stage cardiac biomechanics. Although speckle variance optical coherence tomography (SV-OCT), which relies on calculating the intensity variance over consecutively acquired frames, is a powerful approach for segmenting fluid flow from static tissue, applying it in the beating embryonic heart remains challenging because moving structures generate an SV signal indistinguishable from that of blood. Here, we demonstrate a modified four-dimensional SV-OCT approach that effectively separates the blood flow from the dynamic heart wall in the beating mouse embryonic heart. The method takes advantage of the periodic motion of the cardiac wall and is based on calculating the SV signal over frames corresponding to the same phase of the heartbeat cycle. Through comparison with Doppler OCT imaging, we validate this speckle-based approach and show its advantages: insensitivity to flow direction and velocity, and reduced influence from heart wall movement. The approach has potential in a variety of applications relying on visualization and segmentation of blood flow in periodically moving structures, such as mechanical simulation studies and finite element modelling. Picture: Four-dimensional speckle variance OCT imaging shows the blood flow inside the beating heart of an E8.5 mouse embryo. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
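
    The modified SV computation amounts to taking the intensity variance across frames that share a cardiac phase rather than across consecutive frames. A schematic numpy version on synthetic frames, with idealized phase gating:

      import numpy as np

      def gated_speckle_variance(frames, phase_labels, n_phases):
          """frames: (n_frames, H, W) OCT intensity; phase_labels: cardiac phase
          index per frame. Returns one speckle-variance image per heartbeat phase."""
          frames = np.asarray(frames, dtype=float)
          return np.stack([frames[phase_labels == p].var(axis=0)
                           for p in range(n_phases)])

      rng = np.random.default_rng(6)
      n_frames, n_phases = 120, 8
      frames = rng.normal(1.0, 0.05, size=(n_frames, 64, 64))
      frames[:, 20:40, 20:40] += rng.normal(0, 0.5, size=(n_frames, 20, 20))  # "flow" region
      phase_labels = np.arange(n_frames) % n_phases   # frames binned by heartbeat phase
      sv = gated_speckle_variance(frames, phase_labels, n_phases)
      # High variance marks blood flow; the wall, static at matched phases, stays low.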

  5. Variance and covariance estimates for weaning weight of Senepol cattle.

    PubMed

    Wright, D W; Johnson, Z B; Brown, C J; Wildeus, S

    1991-10-01

    Variance and covariance components were estimated for weaning weight from Senepol field data for use in the reduced animal model for a maternally influenced trait. The 4,634 weaning records were used to evaluate 113 sires and 1,406 dams on the island of St. Croix. Estimates of direct additive genetic variance (σ²A), maternal additive genetic variance (σ²M), covariance between direct and maternal additive genetic effects (σAM), permanent maternal environmental variance (σ²PE), and residual variance (σ²ε) were calculated by equating variances estimated from a sire-dam model and a sire-maternal grandsire model, with and without the inverse of the numerator relationship matrix (A⁻¹), to their expectations. Estimates were σ²A, 139.05 and 138.14 kg²; σ²M, 307.04 and 288.90 kg²; σAM, -117.57 and -103.76 kg²; σ²PE, -258.35 and -243.40 kg²; and σ²ε, 588.18 and 577.72 kg² with and without A⁻¹, respectively. Heritability estimates for direct additive effects (h²A) were .211 and .210 with and without A⁻¹, respectively. Heritability estimates for maternal additive effects (h²M) were .47 and .44 with and without A⁻¹, respectively. Correlations between direct and maternal effects (rAM) were -.57 and -.52 with and without A⁻¹, respectively.

  6. Job Tasks as Determinants of Thoracic Aerosol Exposure in the Cement Production Industry.

    PubMed

    Notø, Hilde; Nordby, Karl-Christian; Skare, Øivind; Eduard, Wijnand

    2017-12-15

    The aims of this study were to identify important determinants and investigate the variance components of thoracic aerosol exposure for workers in the production departments of European cement plants. Personal thoracic aerosol measurements and questionnaire information (Notø et al., 2015) were the basis for this study. Determinants categorized on three levels were selected to describe the exposure relationships separately for the job types production, cleaning, maintenance, foreman, administration, laboratory, and other jobs by linear mixed models. The influence of plant and job determinants on the variance components was explored separately and also combined in full models (plant&job) against models with no determinants (null). The best mixed models describing the exposure for each job type were selected by the lowest Akaike information criterion (AIC; Akaike, 1974) after running all possible combinations of the determinants. Tasks that significantly increased thoracic aerosol exposure above the mean level for production workers were packing and shipping, raw meal, cement and filter cleaning, and de-clogging of the cyclones. For maintenance workers, time spent on welding and dismantling before repair work increased the exposure, while time spent on electrical maintenance and oiling decreased it. Administration work decreased the exposure among foremen. A subjective tidiness factor scored by the research team explained up to a 3-fold variation (among cleaners) in thoracic aerosol levels. Within-worker (WW) variance contained a major part of the total variance (35-58%) for all job types. Job determinants had little influence on the WW variance (0-4% reduction), some influence on the between-plant (BP) variance (5-39% reduction for production, maintenance, and other jobs, respectively, but a 79% increase for foremen) and a substantial influence on the between-worker within-plant variance (30-96% for production, foremen, and other workers). Plant determinants had little influence on the WW variance (0-2% reduction), some influence on the between-worker variance (0-1% reduction and 8% increase), and considerable influence on the BP variance (36-58% reduction) compared to the null models. Some job tasks contribute to low levels of thoracic aerosol exposure and others to higher exposure among cement plant workers; thus, job task may predict exposure in this industry. Dust control measures in the packing and shipping departments and in the areas of raw meal and cement handling could contribute substantially to reducing exposure levels. Rotation between low- and higher-exposed tasks may help equalize exposure levels between high- and low-exposed workers as a temporary solution before more permanent dust reduction measures are implemented. A tidy plant may reduce the overall exposure for almost all workers regardless of job type. © The Author 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
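
    A sketch of this kind of mixed model with statsmodels: plant as the grouping factor, a between-worker-within-plant variance component, and the residual playing the within-worker role. The column names, effect sizes, and data are hypothetical, not the study's.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(7)
      rows = []
      for p in range(8):                            # plants
          bp = rng.normal(0, 0.4)                   # between-plant effect
          for w in range(6):                        # workers within plant
              bw = rng.normal(0, 0.3)               # between-worker-within-plant effect
              for _ in range(5):                    # repeated measurements per worker
                  task = rng.integers(0, 2)         # e.g., packing/shipping vs other
                  rows.append(dict(plant=p, worker=f"{p}-{w}", task=task,
                                   log_exposure=1.0 + 0.5 * task + bp + bw
                                                + rng.normal(0, 0.5)))
      df = pd.DataFrame(rows)

      model = sm.MixedLM.from_formula(
          "log_exposure ~ task", data=df, groups="plant",
          re_formula="1",                           # between-plant variance
          vc_formula={"worker": "0 + C(worker)"})   # between-worker-within-plant variance
      result = model.fit()
      print(result.summary())                       # residual variance = within-worker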

  7. [Application of the elliptic fourier functions to the description of avian egg shape].

    PubMed

    Ávila, Dennis Denis

    2014-12-01

    Egg shape is difficult to quantify due to the lack of an exact formula to describe its geometry. Here I describe a simple algorithm to characterize and compare egg shapes using elliptic Fourier functions. These functions can delineate any closed contour and have previously been applied to describe several biological objects. I describe, step by step, the process of data acquisition and processing and the use of the SHAPE software to extract function coefficients in a case study. I compared egg shapes in three bird species representing different reproductive strategies: Cuban Parakeet (Aratinga euops), Royal Tern (Thalasseus maximus) and Cuban Blackbird (Dives atroviolaceus). Using 73 digital pictures of eggs kept in Cuban scientific collections, I calculated Fourier descriptors with 4, 6, 8, 16 and 20 harmonics. Descriptors were reduced by a Principal Component Analysis, and the scores of the eigenvalues that account for 90% of the variance were used in a Linear Discriminant Function to analyze the possibility of differentiating eggs according to their shapes. Using four harmonics, the first five components accounted for 97% of the shape variance; more harmonics diluted the variance, increasing to eight the number of components needed to explain most of the variation. Convex polygons in the discriminant space showed a clear separation between species, allowing reliable discrimination (classification errors of 7-15%). Misclassifications were related to specific egg shape variability between species. In the case study, A. euops eggs were perfectly classified, but for the other species errors ranged from 5 to 29%, depending on the number of harmonics and components used. The proposed algorithm, despite its apparent mathematical complexity, has many advantages for describing egg shape and allows a deeper understanding of the factors related to this variable.
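
    The pipeline maps directly onto available Python tools; a sketch assuming the third-party pyefd package for the elliptic Fourier coefficients. The contour generator and species parameters are toy stand-ins for digitized egg photographs.

      import numpy as np
      from pyefd import elliptic_fourier_descriptors   # third-party package (assumed installed)
      from sklearn.decomposition import PCA
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      def egg_contour(elongation, asymmetry, n=200):
          """Toy closed egg-like contour; real inputs come from digitized photographs."""
          t = np.linspace(0, 2 * np.pi, n)
          r = 1 + asymmetry * np.cos(t)
          return np.column_stack([elongation * r * np.cos(t), r * np.sin(t)])

      rng = np.random.default_rng(8)
      species_params = [(1.3, 0.10), (1.5, 0.05), (1.2, 0.20)]   # 3 placeholder species
      X, y = [], []
      for label, (elong, asym) in enumerate(species_params):
          for _ in range(25):
              c = egg_contour(elong + rng.normal(0, 0.03), asym + rng.normal(0, 0.02))
              coeffs = elliptic_fourier_descriptors(c, order=4, normalize=True)
              X.append(coeffs.ravel()); y.append(label)
      X, y = np.array(X), np.array(y)

      # Keep the PCs explaining ~90% of shape variance, then discriminate species.
      scores = PCA(n_components=0.90).fit_transform(X)
      lda = LinearDiscriminantAnalysis().fit(scores, y)
      print("training accuracy:", lda.score(scores, y))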

  8. Genetic and environmental influences on Diagnostic and Statistical Manual of Mental Disorders-Fifth Edition (DSM-5) maladaptive personality traits and their connections with normative personality traits.

    PubMed

    Wright, Zara E; Pahlen, Shandell; Krueger, Robert F

    2017-05-01

    The Diagnostic and Statistical Manual for Mental Disorders-Fifth Edition (DSM-5) proposes an alternative model for personality disorders, which includes maladaptive-level personality traits. These traits can be operationalized by the Personality Inventory for the DSM-5 (PID-5). Although there has been extensive research on genetic and environmental influences on normative level personality, the heritability of the DSM-5 traits remains understudied. The present study addresses this gap in the literature by assessing traits indexed by the PID-5 and the International Personality Item Pool NEO (IPIP-NEO) in adult twins (N = 1,812 individuals). Research aims include (a) replicating past findings of the heritability of normative level personality as measured by the IPIP-NEO as a benchmark for studying maladaptive level traits, (b) ascertaining univariate heritability estimates of maladaptive level traits as measured by the PID-5, (c) establishing how much variation in personality pathology can be attributed to the same genetic components affecting variation in normative level personality, and (d) determining residual variance in personality pathology domains after variance attributable to genetic and environmental components of general personality has been removed. Results revealed that PID-5 traits reflect similar levels of heritability to that of IPIP-NEO traits. Further, maladaptive and normative level traits that correlate at the phenotypic level also correlate at the genotypic level, indicating overlapping genetic components contribute to variance in both. Nevertheless, we also found evidence for genetic and environmental components unique to maladaptive level personality traits, not shared with normative level traits. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  9. Missile Systems Maintenance, AFSC 411XOB/C.

    DTIC Science & Technology

    1988-04-01

    Only fragments of the scanned report are legible. Recoverable excerpts state that agreement between technicians' ratings was quantified as interrater reliability, assessed through components of variance of the senior technicians' ratings; the remaining text is OCR noise from an equipment listing (transistors, input/output peripheral devices, microprocessors, power supplies).

  10. Genetic variation of the riparian pioneer tree species populus nigra. II. Variation In susceptibility to the foliar rust melampsora larici-populina

    PubMed

    Legionnet; Muranty; Lefevre

    1999-04-01

    Partial resistance of Populus nigra L. to three races of the foliar rust Melampsora larici-populina Kleb. was studied in a field trial and in laboratory tests, using a collection of P. nigra originating from different places throughout France. No total resistance was found. The partial resistance was split into epidemiological components, which proved to be under genetic control. Various patterns of association among epidemiological component values were found, and principal components analysis revealed their relationships. Only 24% of the variance in field susceptibility could be explained by variation in the epidemiological components of susceptibility. This variable was significantly correlated with susceptibility to the most ancient and widespread race of the pathogen, and with variables related to the size of the lesions of the different races. Analysis of variance showed significant differences in susceptibility between regions and between stands within a region. Up to 20% of the variation was between regions, and up to 22% between stands, so these genetic factors appeared to be more differentiated than the neutral diversity (up to 3.5%; Legionnet & Lefevre, 1996). However, no clear pattern of geographical distribution of diversity was detected.

  11. The assessment of facial variation in 4747 British school children.

    PubMed

    Toma, Arshed M; Zhurov, Alexei I; Playle, Rebecca; Marshall, David; Rosin, Paul L; Richmond, Stephen

    2012-12-01

    The aim of this study is to identify key components contributing to facial variation in a large population-based sample of 15.5-year-old children (2514 females and 2233 males). The subjects were recruited from the Avon Longitudinal Study of Parents and Children. Three-dimensional facial images were obtained for each subject using two high-resolution Konica Minolta laser scanners. Twenty-one reproducible facial landmarks were identified and their coordinates were recorded. The facial images were registered using Procrustes analysis. Principal component analysis was then employed to identify independent groups of correlated coordinates. For the total data set, 14 principal components (PCs) were identified which explained 82 per cent of the total variance, with the first three components accounting for 46 per cent of the variance. Similar results were obtained for males and females separately with only subtle gender differences in some PCs. Facial features may be treated as a multidimensional statistical continuum with respect to the PCs. The first three PCs characterize the face in terms of height, width, and prominence of the nose. The derived PCs may be useful to identify and classify faces according to a scale of normality.

  12. How Many Environmental Impact Indicators Are Needed in the Evaluation of Product Life Cycles?

    PubMed

    Steinmann, Zoran J N; Schipper, Aafke M; Hauck, Mara; Huijbregts, Mark A J

    2016-04-05

    Numerous indicators are currently available for environmental impact assessments, especially in the field of Life Cycle Impact Assessment (LCIA). Because decision-making on the basis of hundreds of indicators simultaneously is unfeasible, a nonredundant key set of indicators representative of the overall environmental impact is needed. We aimed to find such a nonredundant set of indicators based on their mutual correlations. We have used Principal Component Analysis (PCA) in combination with an optimization algorithm to find an optimal set of indicators out of 135 impact indicators calculated for 976 products from the ecoinvent database. The first four principal components covered 92% of the variance in product rankings, showing the potential for indicator reduction. The same amount of variance (92%) could be covered by a minimal set of six indicators, related to climate change, ozone depletion, the combined effects of acidification and eutrophication, terrestrial ecotoxicity, marine ecotoxicity, and land use. In comparison, four commonly used resource footprints (energy, water, land, materials) together accounted for 84% of the variance in product rankings. We conclude that the plethora of environmental indicators can be reduced to a small key set, representing the major part of the variation in environmental impacts between product life cycles.

  13. Variance stabilization and normalization for one-color microarray data using a data-driven multiscale approach.

    PubMed

    Motakis, E S; Nason, G P; Fryzlewicz, P; Rutter, G A

    2006-10-15

    Many standard statistical techniques are effective on data that are normally distributed with constant variance. Microarray data typically violate these assumptions since they come from non-Gaussian distributions with a non-trivial mean-variance relationship. Several methods have been proposed that transform microarray data to stabilize variance and draw its distribution towards the Gaussian. Some methods, such as log or generalized log, rely on an underlying model for the data. Others, such as the spread-versus-level plot, do not. We propose an alternative data-driven multiscale approach, called the Data-Driven Haar-Fisz for microarrays (DDHFm) with replicates. DDHFm has the advantage of being 'distribution-free' in the sense that no parametric model for the underlying microarray data is required to be specified or estimated; hence, DDHFm can be applied very generally, not just to microarray data. DDHFm achieves very good variance stabilization of microarray data with replicates and produces transformed intensities that are approximately normally distributed. Simulation studies show that it performs better than other existing methods. Application of DDHFm to real one-color cDNA data validates these results. The R package of the Data-Driven Haar-Fisz transform (DDHFm) for microarrays is available in Bioconductor and CRAN.
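
    For context, the classical (Poisson-motivated) Haar-Fisz transform that DDHFm generalizes can be written compactly; DDHFm replaces the square root below with a mean-variance function estimated from replicate data. The sketch assumes the input length is a power of two.

      import numpy as np

      def haar_fisz(v):
          """Classical Haar-Fisz variance-stabilizing transform."""
          s = np.asarray(v, dtype=float)
          details = []
          while len(s) > 1:
              sm = (s[0::2] + s[1::2]) / 2        # Haar smooth coefficients
              de = (s[0::2] - s[1::2]) / 2        # Haar detail coefficients
              details.append(np.where(sm > 0, de / np.sqrt(sm), 0.0))  # Fisz step
              s = sm
          u = s                                   # invert with the modified details
          for f in reversed(details):
              out = np.empty(2 * len(u))
              out[0::2], out[1::2] = u + f, u - f
              u = out
          return u

      rng = np.random.default_rng(9)
      lam = np.repeat([2.0, 50.0], 512)           # intensity-dependent variance
      x = rng.poisson(lam).astype(float)
      z = haar_fisz(x)
      print(z[:512].var(), z[512:].var())         # variances roughly equalized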

  14. Linear score tests for variance components in linear mixed models and applications to genetic association studies.

    PubMed

    Qu, Long; Guennel, Tobias; Marshall, Scott L

    2013-12-01

    Following the rapid development of genome-scale genotyping technologies, genetic association mapping has become a popular tool to detect genomic regions responsible for certain (disease) phenotypes, especially in early-phase pharmacogenomic studies with limited sample size. In response to such applications, a good association test needs to be (1) applicable to a wide range of possible genetic models, including, but not limited to, the presence of gene-by-environment or gene-by-gene interactions and non-linearity of a group of marker effects, (2) accurate in small samples, fast to compute on the genomic scale, and amenable to large scale multiple testing corrections, and (3) reasonably powerful to locate causal genomic regions. The kernel machine method represented in linear mixed models provides a viable solution by transforming the problem into testing the nullity of variance components. In this study, we consider score-based tests by choosing a statistic linear in the score function. When the model under the null hypothesis has only one error variance parameter, our test is exact in finite samples. When the null model has more than one variance parameter, we develop a new moment-based approximation that performs well in simulations. Through simulations and analysis of real data, we demonstrate that the new test possesses most of the aforementioned characteristics, especially when compared to existing quadratic score tests or restricted likelihood ratio tests. © 2013, The International Biometric Society.
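
    For context, the familiar quadratic variance-component score statistic Q = r'Kr for the kernel machine framework this paper builds on is easy to sketch; the paper's actual contribution, a statistic linear in the score function with better small-sample behaviour, is not implemented here. A permutation reference stands in for the moment-based approximations (e.g., Satterthwaite or Davies) used in practice.

      import numpy as np

      rng = np.random.default_rng(12)
      n, p = 80, 3
      X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
      G = rng.integers(0, 3, size=(n, 10)).astype(float)   # 10 markers in a region
      K = G @ G.T                                          # linear kernel
      y = X @ rng.normal(size=p) + 0.15 * G @ rng.normal(size=10) + rng.normal(size=n)

      H = X @ np.linalg.solve(X.T @ X, X.T)                # hat matrix of the null model
      r = y - H @ y                                        # null-model residuals
      Q_obs = r @ K @ r / (r @ r / (n - p))                # quadratic score statistic

      # Permutation reference distribution for the p-value.
      Q_perm = []
      for _ in range(2000):
          rp = rng.permutation(r)
          Q_perm.append(rp @ K @ rp / (rp @ rp / (n - p)))
      print("p =", np.mean(np.array(Q_perm) >= Q_obs))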

  15. Quantitative genetic analysis of the body composition and blood pressure association in two ethnically diverse populations.

    PubMed

    Ghosh, Sudipta; Dosaev, Tasbulat; Prakash, Jai; Livshits, Gregory

    2017-04-01

    The major aim of this study was to conduct comparative quantitative-genetic analysis of the body composition (BCP) and somatotype (STP) variation, as well as their correlations with blood pressure (BP) in two ethnically, culturally and geographically different populations: Santhal, indigenous ethnic group from India and Chuvash, indigenous population from Russia. Correspondently two pedigree-based samples were collected from 1,262 Santhal and1,558 Chuvash individuals, respectively. At the first stage of the study, descriptive statistics and a series of univariate regression analyses were calculated. Finally, multiple and multivariate regression (MMR) analyses, with BP measurements as dependent variables and age, sex, BCP and STP as independent variables were carried out in each sample separately. The significant and independent covariates of BP were identified and used for re-examination in pedigree-based variance decomposition analysis. Despite clear and significant differences between the populations in BCP/STP, both Santhal and Chuvash were found to be predominantly mesomorphic irrespective of their sex. According to MMR analyses variation of BP significantly depended on age and mesomorphic component in both samples, and in addition on sex, ectomorphy and fat mass index in Santhal and on fat free mass index in Chuvash samples, respectively. Additive genetic component contributes to a substantial proportion of blood pressure and body composition variance. Variance component analysis in addition to above mentioned results suggests that additive genetic factors influence BP and BCP/STP associations significantly. © 2017 Wiley Periodicals, Inc.

  16. Genome-wide interaction of genotype by erythrocyte n-3 PUFAs contributes to phenotypic variance of diabetes-related traits

    USDA-ARS?s Scientific Manuscript database

    While genome-wide association studies (GWAS) and candidate gene approach have identified many genetic variants that contribute to disease risk as main effects, the impact of genotype by environment (GxE) interactions remains rather under-surveyed. The present study aimed to examine variance contribu...

  17. Blinded sample size re-estimation in three-arm trials with 'gold standard' design.

    PubMed

    Mütze, Tobias; Friede, Tim

    2017-10-15

    In this article, we study blinded sample size re-estimation in the 'gold standard' design with an internal pilot study for normally distributed outcomes. The 'gold standard' design is a three-arm clinical trial design that includes an active and a placebo control in addition to an experimental treatment. We focus on the absolute margin approach to hypothesis testing in three-arm trials, in which the non-inferiority of the experimental treatment and the assay sensitivity are assessed by pairwise comparisons. We compare several blinded sample size re-estimation procedures in a simulation study assessing operating characteristics including power and type I error. We find that sample size re-estimation based on the popular one-sample variance estimator results in overpowered trials. Moreover, sample size re-estimation based on unbiased variance estimators such as the Xing-Ganju variance estimator results in underpowered trials; this is expected, because an overestimation of the variance, and thus of the sample size, is in general required for the re-estimation procedure to eventually meet the target power. To overcome this problem, we propose an inflation factor for the sample size re-estimation with the Xing-Ganju variance estimator and show that this approach results in adequately powered trials. Because of favorable features of the Xing-Ganju variance estimator, such as unbiasedness and a distribution independent of the group means, the inflation factor does not depend on the nuisance parameter and can therefore be calculated prior to a trial. Moreover, we prove that sample size re-estimation based on the Xing-Ganju variance estimator does not bias the effect estimate. Copyright © 2017 John Wiley & Sons, Ltd.
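
    Why the one-sample estimator overshoots is easy to see by simulation: computed on blinded (pooled) data, it absorbs the spread of the group means, so re-estimated sample sizes come out too large and trials end up overpowered. The group means, common variance, and arm sizes below are illustrative only.

      import numpy as np

      rng = np.random.default_rng(13)
      means = {"experimental": 0.0, "active": 0.0, "placebo": -0.4}  # hypothetical
      sigma, n_per_arm = 1.0, 30

      pooled_vars = []
      for _ in range(5000):
          blinded = np.concatenate(
              [rng.normal(mu, sigma, n_per_arm) for mu in means.values()])
          pooled_vars.append(blinded.var(ddof=1))   # one-sample (blinded) estimator

      # Mean estimate exceeds sigma^2 = 1: inflated by the between-group mean spread,
      # which is what drives the overpowering described above.
      print(round(float(np.mean(pooled_vars)), 3))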

  18. Effect of components of a workplace lactation program on breastfeeding duration among employees of a public-sector employer.

    PubMed

    Balkam, Jane A Johnston; Cadwell, Karin; Fein, Sara B

    2011-07-01

    The purpose of this study was to evaluate the impact of the individual services offered through the workplace lactation program of one large public-sector employer on the duration of any breastfeeding and of exclusive breastfeeding, where exclusive breastfeeding was defined as feeding only human milk at milk feedings. A cross-sectional mailed survey approach was used. The sample (n = 128) consisted of women who had used at least one component of the lactation program in the past 3 years and who were still employed at the same organization when data were collected. Descriptive statistics included frequency distributions and contingency table analysis. Chi-square analysis was used for comparison of groups, and both analysis of variance (ANOVA) and univariate analysis of variance from a general linear model were used for comparison of means. The survey respondents were primarily older, white, married, well-educated, high-income women. More of the women who received each lactation program service were exclusively breastfeeding at 6 months of infant age in all categories of services, with significant differences in the categories of telephone support and return-to-work consultation. After adjusting for race and work status, logistic regression analysis showed that the number of services received was positively related to exclusive breastfeeding at 6 months and that participation in a return-to-work consultation was positively related to any breastfeeding at 6 months. The study demonstrated that the workplace lactation program had a positive impact on duration of breastfeeding for the women who participated. Participation in the telephone support and return-to-work consultation services, and the total number of services used, were related to longer duration of exclusive and/or any breastfeeding.

  19. Positive influences of home food environment on primary-school children's diet and weight status: a structural equation model approach.

    PubMed

    Ong, Jia Xin; Ullah, Shahid; Magarey, Anthea; Leslie, Eva

    2016-10-01

    The mechanism by which the home food environment (HFE) influences childhood obesity is unclear. The present study aimed to investigate the relationship between HFE and childhood obesity as mediated by diet in primary-school children. Cross-sectional data were collected from parents and primary-school children participating in the Obesity Prevention and Lifestyle Evaluation Project; only children aged 9-11 years took part. Matched parent/child data (n 3323) were analysed. Exploratory factor analysis identified underlying components of twenty-one HFE items; these were linked to child diet (meeting guidelines for fruit, vegetable and non-core food intakes) and measured child BMI in structural equation modelling, adjusting for confounders. The setting comprised twenty geographically bounded metropolitan and regional South Australian communities, with school children and their parents recruited from primary schools in the selected communities. In the initial exploratory factor analysis, the nineteen remaining items yielded eight factors with eigenvalues >1·0 (72·4 % of total variance). A five-factor structure incorporating ten items described the HFE. After adjusting for age, gender, socio-economic status and physical activity, all associations in the model were significant (P<0·05), explaining 9·3 % and 4·5 % of the variance in child diet and BMI, respectively. A more positive HFE was directly and indirectly associated with lower BMI in children through child diet. The robust statistical methodology used in the present study provides support for a model of direct and indirect dynamics between the HFE and childhood obesity. The model can be tested in future longitudinal and intervention studies to identify the most effective components of the HFE to target in childhood obesity prevention efforts.

  20. "Score the Core" Web-based pathologist training tool improves the accuracy of breast cancer IHC4 scoring.

    PubMed

    Engelberg, Jesse A; Retallack, Hanna; Balassanian, Ronald; Dowsett, Mitchell; Zabaglo, Lila; Ram, Arishneel A; Apple, Sophia K; Bishop, John W; Borowsky, Alexander D; Carpenter, Philip M; Chen, Yunn-Yi; Datnow, Brian; Elson, Sarah; Hasteh, Farnaz; Lin, Fritz; Moatamed, Neda A; Zhang, Yanhong; Cardiff, Robert D

    2015-11-01

    Hormone receptor status is an integral component of decision-making in breast cancer management. IHC4 score is an algorithm that combines hormone receptor, HER2, and Ki-67 status to provide a semiquantitative prognostic score for breast cancer. High accuracy and low interobserver variance are important to ensure the score is accurately calculated; however, few previous efforts have been made to measure or decrease interobserver variance. We developed a Web-based training tool, called "Score the Core" (STC) using tissue microarrays to train pathologists to visually score estrogen receptor (using the 300-point H score), progesterone receptor (percent positive), and Ki-67 (percent positive). STC used a reference score calculated from a reproducible manual counting method. Pathologists in the Athena Breast Health Network and pathology residents at associated institutions completed the exercise. By using STC, pathologists improved their estrogen receptor H score and progesterone receptor and Ki-67 proportion assessment and demonstrated a good correlation between pathologist and reference scores. In addition, we collected information about pathologist performance that allowed us to compare individual pathologists and measures of agreement. Pathologists' assessment of the proportion of positive cells was closer to the reference than their assessment of the relative intensity of positive cells. Careful training and assessment should be used to ensure the accuracy of breast biomarkers. This is particularly important as breast cancer diagnostics become increasingly quantitative and reproducible. Our training tool is a novel approach for pathologist training that can serve as an important component of ongoing quality assessment and can improve the accuracy of breast cancer prognostic biomarkers. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Genome-wide linkage scan for loci of musical aptitude in Finnish families: evidence for a major locus at 4q22

    PubMed Central

    Pulli, K; Karma, K; Norio, R; Sistonen, P; Göring, H H H; Järvelä, I

    2008-01-01

    Background: Music perception and performance are comprehensive human cognitive functions and thus provide an excellent model system for studying human behaviour and brain function. However, the molecules involved in mediating music perception and performance are so far uncharacterised. Objective: To unravel the biological background of music perception, using molecular and statistical genetic approaches. Methods: 15 Finnish multigenerational families (with a total of 234 family members) were recruited via a nationwide search. The phenotype of all family members was determined using three tests used in defining musical aptitude: a test for auditory structuring ability (Karma Music test; KMT) commonly used in Finland, and the Seashore pitch and time discrimination subtests (SP and ST respectively) used internationally. We calculated heritabilities and performed a genome-wide variance components-based linkage scan using genotype data for 1113 microsatellite markers. Results: The heritability estimates were 42% for KMT, 57% for SP, 21% for ST and 48% for the combined music test scores. Significant evidence of linkage was obtained on chromosome 4q22 (LOD 3.33) and suggestive evidence of linkage at 8q13-21 (LOD 2.29) with the combined music test scores, using variance component linkage analyses. The major contribution of the 4q22 locus was obtained for the KMT (LOD 2.91). Interestingly, a positive LOD score of 1.69 was shown at 18q, a region previously linked to dyslexia (DYX6) using combined music test scores. Conclusion: Our results show that there is a genetic contribution to musical aptitude that is likely to be regulated by several predisposing genes or variants. PMID:18424507

  2. Narrow band quantitative and multivariate electroencephalogram analysis of peri-adolescent period.

    PubMed

    Martinez, E I Rodríguez; Barriga-Paulino, C I; Zapata, M I; Chinchilla, C; López-Jiménez, A M; Gómez, C M

    2012-08-24

    The peri-adolescent period is a crucial developmental moment of transition from childhood to emergent adulthood. The present report analyses differences in the Power Spectrum (PS) of the Electroencephalogram (EEG) between late childhood (24 children between 8 and 13 years old) and young adulthood (24 young adults between 18 and 23 years old). The narrow band analysis of the Electroencephalogram was computed in the frequency range of 0-20 Hz. The analysis of means and variances suggested that six frequency ranges mature at different rates at these ages, namely: low delta, delta-theta, low alpha, high alpha, low beta and high beta. For most of these bands, maturation seems to occur later in anterior sites than in posterior sites. Correlational analysis showed a weaker pattern of correlation between different frequencies in children than in young adults, suggesting a certain asynchrony in the maturation of the different rhythms. The topographical analysis revealed similar topographies of the different rhythms in children and young adults. Principal Component Analysis (PCA) demonstrated the same internal structure for the Electroencephalogram of both age groups and made it possible to separate four subcomponents in the alpha range, all of which peaked at a lower frequency in children than in young adults. The present approaches complement classical broad-rhythm analysis of the brain and resolve some of its uncertainties. Children have higher absolute power than young adults across the 0-20 Hz range, and the correlation of Power Spectrum (PS) with age, together with the comparison of variances across ages, showed that six frequency ranges can distinguish the level of EEG maturation in children and adults. Establishing the maturational order of the different frequencies and their possible maturational interdependence would require a complete series including all intermediate ages.

  3. Impact of an equality constraint on the class-specific residual variances in regression mixtures: A Monte Carlo simulation study.

    PubMed

    Kim, Minjung; Lamont, Andrea E; Jaki, Thomas; Feaster, Daniel; Howe, George; Van Horn, M Lee

    2016-06-01

    Regression mixture models are a novel approach to modeling the heterogeneous effects of predictors on an outcome. In the model-building process, residual variances are often disregarded and simplifying assumptions are made without thorough examination of the consequences. In this simulation study, we investigated the impact of an equality constraint on the residual variances across latent classes. We examined the consequences of constraining the residual variances on class enumeration (finding the true number of latent classes) and on the parameter estimates, under a number of different simulation conditions meant to reflect the types of heterogeneity likely to exist in applied analyses. The results showed that bias in class enumeration increased as the difference in residual variances between the classes increased. Also, an inappropriate equality constraint on the residual variances greatly affected the estimated class sizes and showed the potential to greatly affect the parameter estimates in each class. These results suggest that it is important to make assumptions about residual variances with care and to report clearly what assumptions are made.

  4. Where does work stress come from? A generalizability analysis of stress in police officers.

    PubMed

    Lucas, Todd; Weidner, Nathan; Janisse, James

    2012-01-01

    Differences among workers and workplace stressors both contribute to perceiving work as stressful. However, the relative importance of these sources to work stress is not well delineated, and the extent to which work stress additionally reflects unique matches between specific workers and particular job stressors is also unclear. In this study, we use generalizability theory to specify and compare sources of variance in stress associated with police work. US police officers (N = 115) provided ratings of 60 stressors commonly associated with policing duties. Primary and secondary stress appraisal ratings reflected differences among officers in tendencies to generally perceive work stressors as stressful (14-15% officer effect), and also agreement among officers in viewing some stressors as more stressful than others (18-19% stressor effect). However, ratings especially reflected distinct pairings of officers and stressors (38-41% interaction effect). Additional analyses revealed individual differences and stressor characteristics associated with each variance component, including an officer × stressor interaction: compared to officers low in neuroticism, highly neurotic officers provided lower primary appraisal ratings of stressors generally seen as not serious, and higher primary appraisal ratings of stressors that were seen as serious. We discuss implications of the current approach for the continued study of stress at work.
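
    The decomposition for a fully crossed persons x stressors design with one rating per cell can be sketched with the classical expected-mean-squares estimators; with a single observation per cell, the interaction is confounded with error, as in the officer x stressor component above. The data below are simulated, not the study's.

      import numpy as np

      def g_study_components(R):
          """R: (persons x stressors) rating matrix, one observation per cell.
          Returns variance components (person, stressor, interaction+error)."""
          n_p, n_s = R.shape
          grand = R.mean()
          ms_p = n_s * np.sum((R.mean(axis=1) - grand) ** 2) / (n_p - 1)
          ms_s = n_p * np.sum((R.mean(axis=0) - grand) ** 2) / (n_s - 1)
          resid = R - R.mean(axis=1, keepdims=True) - R.mean(axis=0, keepdims=True) + grand
          ms_ps = np.sum(resid ** 2) / ((n_p - 1) * (n_s - 1))
          return (max((ms_p - ms_ps) / n_s, 0.0),   # officer component
                  max((ms_s - ms_ps) / n_p, 0.0),   # stressor component
                  ms_ps)                            # officer x stressor (+ error)

      rng = np.random.default_rng(10)
      officers = rng.normal(0, 1.0, size=(115, 1))      # person main effects
      stressors = rng.normal(0, 1.1, size=(1, 60))      # stressor main effects
      R = 3 + officers + stressors + rng.normal(0, 1.6, size=(115, 60))
      vp, vs, vps = g_study_components(R)
      total = vp + vs + vps
      print([round(v / total, 2) for v in (vp, vs, vps)])  # proportions of total variance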

  5. The impact of seasonal signals on spatio-temporal filtering

    NASA Astrophysics Data System (ADS)

    Gruszczynski, Maciej; Klos, Anna; Bogusz, Janusz

    2016-04-01

    The existence of Common Mode Errors (CMEs) in permanent GNSS networks contributes to spatial and temporal correlation in residual time series. Time series from permanent GNSS stations less than 2,000 km apart are similarly influenced by CME sources such as mismodelling (of Earth Orientation Parameters (EOP), satellite orbits, or antenna phase center variations) during reference frame realization, large-scale atmospheric and hydrospheric effects, and small-scale crustal deformations. Residuals obtained by detrending and deseasonalising topocentric GNSS time series, arranged epoch by epoch, form an observation matrix for each component (North, East, Up) independently. The CME is treated as internal structure of the data: assuming a uniform temporal function across the network, it can be filtered out using a Principal Component Analysis (PCA) approach. Some of the CME sources described above may appear across a wide range of frequencies in GPS residual time series. To determine the impact of seasonal signal modeling on the spatial correlation in the network, and consequently on the results of CME filtering, we chose two modeling approaches. The first, commonly presented by previous authors, modeled only annual and semi-annual oscillations with Least-Squares Estimation (LSE). In the second, the residuals resulted from modeling a deterministic part that included fortnightly periods plus up to the 9th harmonics of the Chandlerian, tropical and draconitic oscillations. Correlation coefficients for the residuals were determined, along with the KMO (Kaiser-Meyer-Olkin) statistic and Bartlett's test of sphericity. For this research we used time series expressed in ITRF2008 and provided by JPL (Jet Propulsion Laboratory); GPS processing was performed with the GIPSY-OASIS software in Precise Point Positioning (PPP) mode. To form a GPS station network that meets the demand of a uniform spatial response to the CME, we chose 18 stations located in Central Europe; the resulting network extends up to 1,500 km. The KMO statistic indicates whether a component analysis may be useful for a chosen data set. We obtained KMO values of 0.87 and 0.62 for the residuals of the Up component after the first and second approaches, respectively, which means that in both cases the residuals share common errors. Bartlett's test of sphericity confirmed that in both cases the residuals are correlated. Another important result is the eigenvalues expressed as the percentage of total variance explained by the first few principal components: for the North, East and Up components we obtained 68%, 75% and 65%, respectively, after the first approach, and 47%, 54% and 52% after the second. The results of CME filtering with the PCA approach, performed on both sets of residual time series, directly influence the uncertainty of the permanent station velocities. In our case, spatial filtering reduced the velocity uncertainty by 0.5 to 0.8 mm for the horizontal components and by 0.6 to 0.9 mm on average for the Up component when annual and semi-annual signals were assumed. However, when the second approach to modeling the deterministic part was used, a deterioration of the velocity uncertainty was noticed for the Up component only, probably due to much higher autocorrelation in those time series compared with the horizontal components.
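
    The stacking-and-filtering step can be sketched in a few lines: arrange the residuals epoch by station, extract the leading principal component(s) by SVD, and subtract their reconstruction. The residual matrix below is synthetic, with one injected network-wide signal.

      import numpy as np

      def pca_filter(residuals, n_modes=1):
          """residuals: (epochs x stations) matrix of detrended, deseasonalised
          topocentric residuals. Removes the leading n_modes common modes."""
          X = residuals - residuals.mean(axis=0)
          U, S, Vt = np.linalg.svd(X, full_matrices=False)
          cme = U[:, :n_modes] @ np.diag(S[:n_modes]) @ Vt[:n_modes]  # common mode estimate
          explained = S**2 / np.sum(S**2)
          return X - cme, explained

      rng = np.random.default_rng(11)
      epochs, stations = 2000, 18
      common = rng.normal(size=(epochs, 1))               # network-wide error source
      resid = 0.8 * common + rng.normal(0.0, 0.5, size=(epochs, stations))
      filtered, explained = pca_filter(resid, n_modes=1)
      print(f"first PC explains {100 * explained[0]:.0f}% of the variance")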

  6. On the impact of a refined stochastic model for airborne LiDAR measurements

    NASA Astrophysics Data System (ADS)

    Bolkas, Dimitrios; Fotopoulos, Georgia; Glennie, Craig

    2016-09-01

    Accurate topographic information is critical for a number of applications in science and engineering. In recent years, airborne light detection and ranging (LiDAR) has become a standard tool for acquiring high quality topographic information. The assessment of airborne LiDAR derived DEMs is typically based on (i) independent ground control points and (ii) forward error propagation utilizing the LiDAR geo-referencing equation. The latter approach is dependent on the stochastic model information of the LiDAR observation components. In this paper, the well-known statistical tool of variance component estimation (VCE) is implemented for a dataset in Houston, Texas, in order to refine the initial stochastic information. Simulations demonstrate the impact of stochastic-model refinement for two practical applications, namely coastal inundation mapping and surface displacement estimation. Results highlight scenarios where erroneous stochastic information is detrimental. Furthermore, the refined stochastic information provides insights on the effect of each LiDAR measurement in the airborne LiDAR error budget. The latter is important for targeting future advancements in order to improve point cloud accuracy.

  7. Optimization of Micro Metal Injection Molding By Using Grey Relational Grade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, M. H. I.; Precision Process Research Group, Dept. of Mechanical and Materials Engineering, Faculty of Engineering, Universiti Kebangsaan Malaysia; Muhamad, N.

    2011-01-17

    Micro metal injection molding (µMIM), a variant of the MIM process, is a promising method for producing near-net-shape metallic micro components of complex geometry. In this paper, µMIM is applied to produce 316L stainless steel micro components. Because of the highly stringent property requirements in µMIM, the study emphasizes optimization of the process parameters, where the Taguchi method combined with Grey Relational Analysis (GRA) is implemented as a novel approach to investigating multiple performance characteristics. The basic idea of GRA is to find a grey relational grade (GRG) that converts the multi-objective case, with density and strength as objectives, into a single-objective case. Considering the 'larger the better' form, results show that injection time (D) is the most significant parameter, followed by injection pressure (A), holding time (E), mold temperature (C) and injection temperature (B). Analysis of variance (ANOVA) is also employed to establish the significance of each parameter involved in this study.
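
    The GRG computation itself is short: normalize each response in larger-the-better form, compute grey relational coefficients against the ideal sequence, and average them. ζ = 0.5 is the customary distinguishing coefficient; the density and strength values below are placeholders, not the paper's measurements.

      import numpy as np

      def grey_relational_grade(responses, zeta=0.5):
          """responses: (runs x responses) matrix, e.g., density and strength.
          Larger-the-better normalization, then GRC and equally weighted GRG."""
          X = np.asarray(responses, dtype=float)
          norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
          delta = np.abs(1.0 - norm)                  # distance to the ideal sequence
          grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
          return grc.mean(axis=1)                     # grey relational grade per run

      # Placeholder density (g/cm^3) and strength (MPa) for five Taguchi runs.
      runs = np.array([[7.60, 410.0],
                       [7.72, 455.0],
                       [7.55, 430.0],
                       [7.80, 470.0],
                       [7.68, 440.0]])
      print(np.round(grey_relational_grade(runs), 3))  # pick the run with the highest GRG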

  8. Daily water and sediment discharges from selected rivers of the eastern United States; a time-series modeling approach

    USGS Publications Warehouse

    Fitzgerald, Michael G.; Karlinger, Michael R.

    1983-01-01

    Time-series models were constructed for the analysis of daily runoff and sediment discharge data from selected rivers of the Eastern United States. Logarithmic transformation and first-order differencing of the data sets were necessary to produce second-order stationary time series and remove seasonal trends. Cyclic models accounted for less than 42 percent of the variance in the water series and 31 percent in the sediment series. Analysis of the apparent oscillations of given frequencies occurring in the data indicates that frequently occurring storms can account for as much as 50 percent of the variation in sediment discharge. Components of the frequency analysis indicate that a linear representation is reasonable for the water-sediment system. Models that incorporate lagged water discharge as input prove superior to univariate techniques in modeling and prediction of sediment discharges. The random component of the models includes measurement and model-specification errors and shows no serial correlation. An index of sediment production within or between drainage basins can be calculated from the model parameters.
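
    A hedged sketch of the transfer-function idea: log-transform and first-difference daily water (Q) and sediment (S) discharge, then regress the differenced log-sediment series on current and lagged differenced log-flow. Synthetic series stand in for the gauge records, and the lag structure (lags 0 and 1) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
logQ = np.cumsum(0.1 * rng.standard_normal(n))         # hypothetical log flow
logS = 1.6 * logQ + 0.4 * np.roll(logQ, 1) + 0.2 * rng.standard_normal(n)

dq, ds = np.diff(logQ), np.diff(logS)                  # first-order differencing

# Design matrix with lags 0 and 1 of the water-discharge input.
X = np.column_stack([dq[1:], dq[:-1]])
y = ds[1:]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

print("lag-0 / lag-1 coefficients:", np.round(beta, 2))
print("lag-1 residual autocorrelation:",
      np.round(np.corrcoef(resid[:-1], resid[1:])[0, 1], 3))
```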

  9. Northern Russian chironomid-based modern summer temperature data set and inference models

    NASA Astrophysics Data System (ADS)

    Nazarova, Larisa; Self, Angela E.; Brooks, Stephen J.; van Hardenbroek, Maarten; Herzschuh, Ulrike; Diekmann, Bernhard

    2015-11-01

    The West and East Siberian data sets and 55 new sites were merged on the basis of their high taxonomic similarity and the strong relationship, relative to other environmental parameters, between mean July air temperature and the distribution of chironomid taxa in both data sets. Multivariate statistical analysis of chironomid and environmental data from the combined data set, consisting of 268 lakes located in northern Russia, suggests that mean July air temperature explains the greatest amount of variance in chironomid distribution compared with the other measured variables (latitude, longitude, altitude, water depth, lake surface area, pH, conductivity, mean January air temperature, and continentality). We established two robust inference models to reconstruct mean summer air temperatures from subfossil chironomids, based on ecological and geographical approaches. The North Russian 2-component WA-PLS model (RMSEP_jack = 1.35 °C, r²_jack = 0.87) can be recommended for application in palaeoclimatic studies in northern Russia. Based on the distinctive chironomid fauna and climatic regimes of Kamchatka, the Far East 2-component WA-PLS model (RMSEP_jack = 1.3 °C, r²_jack = 0.81) has potentially better applicability in Kamchatka.
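
    A simplified sketch of the weighted-averaging (WA) step underlying a WA-PLS transfer function, not the full multi-component WA-PLS algorithm: taxon optima are abundance-weighted means of July temperature across the training lakes, and a reconstruction is the average of those optima weighted by fossil abundances. All data here are simulated.

```python
import numpy as np

rng = np.random.default_rng(2)
n_lakes, n_taxa = 268, 20
temp = rng.uniform(5, 18, n_lakes)                    # mean July air T (deg C)
abund = rng.dirichlet(np.ones(n_taxa), size=n_lakes)  # relative abundances per lake

# WA regression: each taxon's optimum is its abundance-weighted mean temperature.
optima = (abund * temp[:, None]).sum(axis=0) / abund.sum(axis=0)

# WA calibration: infer temperature for a (here synthetic) fossil assemblage.
fossil = rng.dirichlet(np.ones(n_taxa))
t_inferred = (fossil * optima).sum() / fossil.sum()
print(f"inferred mean July air temperature: {t_inferred:.1f} deg C")
```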

  10. Using foreground/background analysis to determine leaf and canopy chemistry

    NASA Technical Reports Server (NTRS)

    Pinzon, J. E.; Ustin, S. L.; Hart, Q. J.; Jacquemoud, S.; Smith, M. O.

    1995-01-01

    Spectral Mixture Analysis (SMA) has become a well-established procedure for analyzing imaging spectrometry data; however, the technique is relatively insensitive to minor sources of spectral variation (e.g., discriminating stressed from unstressed vegetation, or variations in canopy chemistry). Other statistical approaches have been tried, e.g., stepwise multiple linear regression (SMLR) analysis to predict canopy chemistry. Grossman et al. reported that SMLR is sensitive to measurement error and that the prediction of minor chemical components is not independent of patterns observed in more dominant spectral components like water. Further, they observed that the relationships were strongly dependent on the mode of expressing reflectance (R, -log R) and whether chemistry was expressed on a weight (g/g) or area (g/sq m) basis. Thus, alternative multivariate techniques need to be examined. Smith et al. reported a revised SMA, which they termed Foreground/Background Analysis (FBA), that permits directing the analysis along any axis of variance by identifying mutually orthonormal vectors through the n-dimensional spectral volume. Here, we report an application of the FBA technique to the detection of canopy chemistry using a modified form of the analysis.

  11. A longitudinal examination of event-related potentials sensitive to monetary reward and loss feedback from late childhood to middle adolescence.

    PubMed

    Kujawa, Autumn; Carroll, Ashley; Mumper, Emma; Mukherjee, Dahlia; Kessel, Ellen M; Olino, Thomas; Hajcak, Greg; Klein, Daniel N

    2017-11-04

    Brain regions involved in reward processing undergo developmental changes from childhood to adolescence, and alterations in reward-related brain function are thought to contribute to the development of psychopathology. Event-related potentials (ERPs), such as the reward positivity (RewP) component, are valid measures of reward responsiveness that are easily assessed across development and provide insight into the temporal dynamics of reward processing. Little work has systematically examined developmental changes in ERPs sensitive to reward. In this longitudinal study of 75 youth assessed 3 times across 6 years, we used principal components analyses (PCA) to differentiate ERPs sensitive to monetary reward and loss feedback in late childhood, early adolescence, and middle adolescence. We then tested the reliability of, and developmental changes in, these ERPs. A greater number of ERP components differentiated reward and loss feedback in late childhood compared to adolescence, but components in childhood accounted for only a small proportion of variance. A component consistent with RewP was the only one to consistently emerge at each of the 3 assessments. RewP demonstrated acceptable reliability, particularly from early to middle adolescence, though reliability estimates varied depending on scoring approach and developmental period. The magnitude of the RewP component did not significantly change across time. Results provide insight into developmental changes in the structure of ERPs sensitive to reward, and indicate that RewP is a consistently observed and relatively stable measure of reward responsiveness, particularly across adolescence. Copyright © 2017. Published by Elsevier B.V.
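
    A minimal sketch of using PCA to pull an ERP component out of averaged feedback-locked waveforms. This is generic sklearn PCA on a simulated subjects x timepoints matrix, far simpler than the study's ERP-PCA pipeline, and the RewP latency and amplitudes are invented.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
n_subj, n_time = 75, 300                  # e.g. 300 samples spanning the epoch
t = np.arange(n_time)
rewp_shape = np.exp(-0.5 * ((t - 130) / 20) ** 2)   # positivity in a mid-epoch window

amp = rng.normal(5.0, 2.0, n_subj)        # per-subject RewP amplitude (simulated)
erp = amp[:, None] * rewp_shape + rng.standard_normal((n_subj, n_time))

pca = PCA(n_components=5)
scores = pca.fit_transform(erp)           # per-subject component scores
print("variance explained:", np.round(pca.explained_variance_ratio_, 2))
# Components whose loadings (pca.components_) peak in the RewP window would
# then be scored per subject and compared between reward and loss feedback.
```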

  12. Statistical aspects of quantitative real-time PCR experiment design.

    PubMed

    Kitchen, Robert R; Kubista, Mikael; Tichopad, Ales

    2010-04-01

    Experiments using quantitative real-time PCR to test hypotheses are limited by technical and biological variability; we seek to minimise sources of confounding variability through optimum use of biological and technical replicates. The quality of an experiment design is commonly assessed by calculating its prospective power. Such calculations rely on knowledge of the expected variances of the measurements of each group of samples and the magnitude of the treatment effect; the estimation of which is often uninformed and unreliable. Here we introduce a method that exploits a small pilot study to estimate the biological and technical variances in order to improve the design of a subsequent large experiment. We measure the variance contributions at several 'levels' of the experiment design and provide a means of using this information to predict both the total variance and the prospective power of the assay. A validation of the method is provided through a variance analysis of representative genes in several bovine tissue-types. We also discuss the effect of normalisation to a reference gene in terms of the measured variance components of the gene of interest. Finally, we describe a software implementation of these methods, powerNest, that gives the user the opportunity to input data from a pilot study and interactively modify the design of the assay. The software automatically calculates expected variances, statistical power, and optimal design of the larger experiment. powerNest enables the researcher to minimise the total confounding variance and maximise prospective power for a specified maximum cost for the large study. Copyright 2010 Elsevier Inc. All rights reserved.
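
    A hedged sketch of the pilot-study idea, not a reproduction of powerNest: estimate the biological and technical variance components from a balanced nested design via expected mean squares, then predict the variance of a group mean and the approximate power of a larger two-group experiment. The design sizes, effect size and Cq values are all simulated assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
b, t = 6, 3                                   # pilot: 6 animals x 3 PCR replicates
sigma_bio, sigma_tech = 0.8, 0.3
cq = (rng.normal(0, sigma_bio, b)[:, None]
      + rng.normal(0, sigma_tech, (b, t)))    # Cq values, one row per animal

ms_within = cq.var(axis=1, ddof=1).mean()     # estimates s2_tech
ms_between = t * cq.mean(axis=1).var(ddof=1)  # estimates s2_tech + t * s2_bio
s2_tech = ms_within
s2_bio = max((ms_between - ms_within) / t, 0.0)

# Predicted variance of a group mean with B animals and T replicates each,
# and approximate power to detect a 1-cycle treatment effect at alpha = 0.05.
B, T, delta, alpha = 10, 2, 1.0, 0.05
var_mean = s2_bio / B + s2_tech / (B * T)
se_diff = np.sqrt(2 * var_mean)
power = stats.norm.sf(stats.norm.isf(alpha / 2) - delta / se_diff)
print(f"s2_bio={s2_bio:.2f}, s2_tech={s2_tech:.2f}, predicted power={power:.2f}")
```

    Sweeping B and T under a cost constraint, as the article describes, then amounts to minimising var_mean subject to a budget on B and B*T.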

  13. Population ecology of breeding Pacific common eiders on the Yukon-Kuskokwim Delta, Alaska

    USGS Publications Warehouse

    Wilson, Heather M.; Flint, Paul L.; Powell, Abby N.; Grand, J. Barry; Moral, Christine L.

    2012-01-01

    Populations of Pacific common eiders (Somateria mollissima v-nigrum) on the Yukon-Kuskokwim Delta (YKD) in western Alaska declined by 50–90% from 1957 to 1992 and then stabilized at reduced numbers from the early 1990s to the present. We investigated the underlying processes affecting their population dynamics by collecting and analyzing demographic data from Pacific common eiders at 3 sites on the YKD (1991–2004), for 29 site-years. We examined variation in components of reproduction, tested hypotheses about the influence of specific ecological factors on life-history variables, and investigated their relative contributions to local population dynamics. Reproductive output was low and variable, both within and among individuals, whereas apparent survival of adult females was high and relatively invariant (0.89 ± 0.005). All reproductive parameters varied across study sites and years. Clutch initiation dates ranged from 4 May to 28 June, with peak (modal) initiation occurring on 26 May. Females at an island study site consistently initiated clutches 3–5 days earlier in each year than those at 2 mainland sites. Population variance in nest initiation date was negatively related to the peak, suggesting increased synchrony in years of delayed initiation. On average, total clutch size (laid) ranged from 4.8 to 6.6 eggs, and declined with date of nest initiation. After accounting for partial predation and non-viability of eggs, average clutch size at hatch ranged from 2.0 to 5.8 eggs. Within seasons, daily survival probability (DSP) of nests was lowest during egg-laying and at late initiation dates. Estimated nest survival varied considerably across sites and years (mean = 0.55, range: 0.06–0.92), but process variance in nest survival was relatively low (0.02, CI: 0.01–0.05), indicating that most variance was likely attributable to sampling error. We found evidence that observer effects may have reduced overall nest survival by 0.0–0.36 across site-years. Study sites with smaller sample sizes and more frequent visitations appeared to experience greater observer effects. In general, Pacific common eiders exhibited high spatio-temporal variance in reproductive components. Larger clutch sizes and high nest survival at early initiation dates suggested directional selection favoring early nesting. However, stochastic environmental effects may have precluded a response to this apparent selection pressure. Our results suggest that females breeding early in the season have the greatest reproductive value, as these birds lay the largest clutches and have the highest probability of successfully hatching. We developed stochastic, stage-based matrix population models that incorporated observed spatio-temporal (process) variance and co-variation in vital rates, and projected the stable stage distribution and population growth rate (λ). We used perturbation analyses to examine the relative influence of changes in vital rates on λ, and variance decomposition to assess the proportion of variation in λ explained by process variation in each vital rate. In addition to matrix-based λ, we estimated λ using capture–recapture approaches and log-linear regression. We found the stable age distribution for Pacific common eiders was weighted heavily towards experienced adult females (≥4 yr of age), and all calculations of λ indicated that the YKD population was stable to slightly increasing (λ-matrix = 1.02, CI: 1.00–1.04; λ-reverse-capture–recapture = 1.05, CI: 0.99–1.11; λ-log-linear = 1.04, CI: 0.98–1.10).
Perturbation analyses suggested the population would respond most dramatically to changes in adult female survival (the relative influence of adult survival was 1.5 times that of fecundity), whereas retrospective variation in λ was primarily explained by fecundity parameters (60%), particularly duckling survival (42%). Among components of fecundity, sensitivities were highest for duckling survival.
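
    A minimal sketch of the stage-based matrix approach used above: project λ as the dominant eigenvalue and obtain sensitivities from the left and right eigenvectors (Caswell's s_ij = v_i w_j / <v, w>). The 4-stage matrix below is illustrative, not the eiders' estimated matrix, and the stochastic, covarying-vital-rate machinery of the paper is omitted.

```python
import numpy as np

A = np.array([[0.00, 0.05, 0.15, 0.25],   # top row: fecundity (female ducklings)
              [0.70, 0.00, 0.00, 0.00],   # sub-diagonal: stage transitions
              [0.00, 0.80, 0.00, 0.00],
              [0.00, 0.00, 0.89, 0.89]])  # adult survival and stasis

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
lam = eigvals.real[k]
w = np.abs(eigvecs[:, k].real); w /= w.sum()       # stable stage distribution

eigvalsT, left = np.linalg.eig(A.T)
kT = np.argmin(np.abs(eigvalsT - lam))             # match the dominant eigenvalue
v = np.abs(left[:, kT].real)                       # reproductive values

S = np.outer(v, w) / (v @ w)                       # sensitivities dlam/da_ij
E = S * A / lam                                    # elasticities

print(f"lambda = {lam:.3f}")
print("stable stage distribution:", np.round(w, 2))
print("adult-survival elasticity:", round(E[3, 3], 2))
```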

  14. Using the PLUM procedure of SPSS to fit unequal variance and generalized signal detection models.

    PubMed

    DeCarlo, Lawrence T

    2003-02-01

    The recent addition of a procedure in SPSS for the analysis of ordinal regression models offers a simple means for researchers to fit the unequal-variance normal signal detection model and other extended signal detection models. The present article shows how to implement the analysis and how to interpret the SPSS output. Examples of fitting the unequal-variance normal model and other generalized signal detection models are given. The approach offers a convenient means of applying signal detection theory to a wide range of research.
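
    A hedged sketch of fitting the unequal-variance Gaussian SDT model by direct maximum likelihood, an alternative route to the SPSS PLUM ordinal-regression approach described in the article: criteria are parameterised as a first cutpoint plus positive increments, and the signal distribution has free mean d and standard deviation. The confidence-rating counts are invented.

```python
import numpy as np
from scipy import stats, optimize

noise = np.array([174, 172, 104, 92, 41, 8])    # rating "1" (sure noise) ... "6"
signal = np.array([46, 57, 66, 101, 154, 173])  # counts per confidence category

def nll(params):
    d, log_sigma = params[0], params[1]
    # Strictly increasing criteria: first cutpoint plus exp() increments.
    crit = np.cumsum(np.concatenate(([params[2]], np.exp(params[3:]))))
    edges = np.concatenate(([-np.inf], crit, [np.inf]))
    p_noise = np.diff(stats.norm.cdf(edges, loc=0.0, scale=1.0))
    p_signal = np.diff(stats.norm.cdf(edges, loc=d, scale=np.exp(log_sigma)))
    return -(noise @ np.log(p_noise) + signal @ np.log(p_signal))

x0 = np.concatenate(([1.0, 0.0, -1.0], np.zeros(4)))   # d, log sigma_s, 5 criteria
fit = optimize.minimize(nll, x0, method="Nelder-Mead",
                        options={"maxiter": 20000, "xatol": 1e-6})
d_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(f"d = {d_hat:.2f}, signal SD = {sigma_hat:.2f}")  # SD > 1 => unequal variance
```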

  15. Statistics of some atmospheric turbulence records relevant to aircraft response calculations

    NASA Technical Reports Server (NTRS)

    Mark, W. D.; Fischer, R. W.

    1981-01-01

    Methods for characterizing atmospheric turbulence are described. The methods illustrated include maximum likelihood estimation of the integral scale and intensity of records obeying the von Karman transverse power spectral form, constrained least-squares estimation of the parameters of a parametric representation of autocorrelation functions, estimation of the power spectral density of the instantaneous variance of a record with temporally fluctuating variance, and estimation of the probability density functions of various turbulence components. Descriptions of the computer programs used in the computations are given, and a full listing of these programs is included.
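
    A hedged sketch of estimating the intensity (sigma) and integral scale (L) of one common form of the von Karman transverse spectrum. The report uses maximum likelihood; this sketch substitutes a simple least-squares fit, and the "measured" PSD is just the model perturbed by noise, purely for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

A = 1.339
def von_karman_transverse(omega, sigma, L):
    """One common transverse von Karman PSD form (omega in rad/m)."""
    x = (A * L * omega) ** 2
    return sigma**2 * L / np.pi * (1 + 8.0 / 3.0 * x) / (1 + x) ** (11.0 / 6.0)

omega = np.logspace(-4, -1, 200)          # spatial frequency (rad/m)
true_sigma, true_L = 1.5, 500.0           # intensity (m/s), integral scale (m)
rng = np.random.default_rng(5)
psd = von_karman_transverse(omega, true_sigma, true_L) \
      * rng.lognormal(0.0, 0.2, omega.size)

(sig_hat, L_hat), _ = curve_fit(von_karman_transverse, omega, psd,
                                p0=[1.0, 300.0])
print(f"sigma = {sig_hat:.2f} m/s, L = {L_hat:.0f} m")
```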

  16. Local hyperspectral data multisharpening based on linear/linear-quadratic nonnegative matrix factorization by integrating lidar data

    NASA Astrophysics Data System (ADS)

    Benhalouche, Fatima Zohra; Karoui, Moussa Sofiane; Deville, Yannick; Ouamri, Abdelaziz

    2015-10-01

    In this paper, a new Spectral-Unmixing-based approach, using Nonnegative Matrix Factorization (NMF), is proposed to locally multi-sharpen hyperspectral data by integrating a Digital Surface Model (DSM) obtained from LIDAR data. In this new approach, the nature of the local mixing model is detected using the local variance of the object elevations. The hyper/multispectral images are explored using small zones. In each zone, the variance of the object elevations is calculated from the DSM data for that zone. This variance is compared to a threshold value, and the appropriate linear/linear-quadratic spectral unmixing technique is used in the considered zone to independently unmix the hyperspectral and multispectral data, using an adequate linear/linear-quadratic NMF-based approach. The spectral and spatial information thus extracted from the hyper/multispectral images, respectively, are then recombined in the considered zone according to the selected mixing model. Experiments based on synthetic hyper/multispectral data are carried out to evaluate the performance of the proposed multi-sharpening approach and of literature linear/linear-quadratic approaches applied to the whole hyper/multispectral data. In these experiments, real DSM data are used to generate synthetic data containing linear and linear-quadratic mixed-pixel zones. The DSM data are also used for locally detecting the nature of the mixing model in the proposed approach. Globally, the proposed approach yields good spatial and spectral fidelities for the multi-sharpened data and significantly outperforms the literature methods used.
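
    A minimal sketch of the zone-wise decision step only: the DSM elevation variance in each small zone selects a linear or linear-quadratic mixing model, with plain sklearn NMF standing in for the linear branch and the linear-quadratic branch left as a stub. The cube, zone size and threshold are all illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(6)
dsm = rng.gamma(2.0, 3.0, (60, 60))        # hypothetical object elevations (m)
hyper = rng.random((60, 60, 50))           # hypothetical hyperspectral cube
zone, tau = 20, 15.0                       # zone size (px), variance threshold

for i in range(0, 60, zone):
    for j in range(0, 60, zone):
        var = dsm[i:i + zone, j:j + zone].var()
        pixels = hyper[i:i + zone, j:j + zone].reshape(-1, 50)
        if var < tau:                      # flat zone: linear mixing assumed
            model = NMF(n_components=3, init="nndsvda", max_iter=500)
            abund = model.fit_transform(pixels)   # per-pixel abundances
            endmembers = model.components_        # endmember spectra
        else:                              # high relief: linear-quadratic mixing
            pass  # a linear-quadratic NMF variant would be called here
        print(f"zone ({i},{j}): var={var:.1f} ->",
              "linear" if var < tau else "linear-quadratic")
```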

  17. Local Distributions of Wealth to Describe Health Inequalities in India: A New Approach for Analyzing Nationally Representative Household Survey Data, 1992–2008

    PubMed Central

    Bassani, Diego G.; Corsi, Daniel J.; Gaffey, Michelle F.; Barros, Aluisio J. D.

    2014-01-01

    Background Worse health outcomes including higher morbidity and mortality are most often observed among the poorest fractions of a population. In this paper we present and validate national, regional and state-level distributions of national wealth index scores, for urban and rural populations, derived from household asset data collected in six survey rounds in India between 1992–3 and 2007–8. These new indices and their sub-national distributions allow for comparative analyses of a standardized measure of wealth across time and at various levels of population aggregation in India. Methods Indices were derived through principal components analysis (PCA) performed using standardized variables from a correlation matrix to minimize differences in variance. Valid and simple indices were constructed with the minimum number of assets needed to produce scores with enough variability to allow definition of unique decile cut-off points in each urban and rural area of all states. Results For all indices, the first PCA components explained between 36% and 43% of the variance in household assets. Using sub-national distributions of national wealth index scores, mean height-for-age z-scores increased from the poorest to the richest wealth quintiles for all surveys, and stunting prevalence was higher among the poorest and lower among the wealthiest. Urban and rural decile cut-off values for India, for the six regions and for the 24 major states revealed large variability in wealth by geographical area and level, and rural wealth score gaps exceeded those observed in urban areas. Conclusions The large variability in sub-national distributions of national wealth index scores indicates the importance of accounting for such variation when constructing wealth indices and deriving score distribution cut-off points. Such an approach allows for proper within-sample economic classification, resulting in scores that are valid indicators of wealth and correlate well with health outcomes, and enables wealth-related analyses at whichever geographical area and level may be most informative for policy-making processes. PMID:25356667
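
    A minimal sketch of the asset-index construction described above: standardise binary asset indicators (equivalent to PCA on the correlation matrix), take the first principal component as the wealth score, and compute separate decile cut-off points for urban and rural households. The assets and urban share are simulated.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
n_hh = 5000
assets = (rng.random((n_hh, 12)) < rng.uniform(0.1, 0.8, 12)).astype(float)
urban = rng.random(n_hh) < 0.3

z = (assets - assets.mean(axis=0)) / assets.std(axis=0)  # correlation-matrix PCA
pca = PCA(n_components=1).fit(z)
score = pca.transform(z).ravel()                         # wealth index score
print(f"variance explained by first component: {pca.explained_variance_ratio_[0]:.0%}")

# Separate decile cut-off points for urban and rural households.
for name, mask in [("urban", urban), ("rural", ~urban)]:
    cuts = np.percentile(score[mask], np.arange(10, 100, 10))
    print(name, "decile cut-offs:", np.round(cuts, 2))
```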

  18. DREEM on: validation of the Dundee Ready Education Environment Measure in Pakistan.

    PubMed

    Khan, Junaid Sarfraz; Tabasum, Saima; Yousafzai, Usman Khalil; Fatima, Mehreen

    2011-09-01

    To validate the DREEM in the medical education environment of Punjab, Pakistan. The DREEM questionnaire was collected anonymously from final-year Bachelor of Medicine, Bachelor of Surgery students in the private and public medical colleges affiliated with the University of Health Sciences, Lahore. Data were analyzed using Principal Component Analysis with Varimax rotation. The response rate was 84.14%. The average DREEM score was 125. Confirmatory and exploratory factor analyses were applied under the conditions of eigenvalues > 1 and loadings ≥ 0.3. In the confirmatory factor analysis, five components were extracted, accounting for 40.10% of the variance; in the exploratory factor analysis, ten components were extracted, accounting for 52.33% of the variance. The 50 items had an internal consistency reliability of 0.91 (Cronbach's alpha). The Spearman-Brown value was 0.868, showing the reliability of the analysis. In both analyses the subscales produced were sensible, but the mismatch with the original was largely due to contextual and cultural differences between the English-language original and the Pakistani setting. DREEM is a generic instrument that will do well with regional modifications to suit individual, contextual and cultural settings.
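
    A minimal sketch of the internal-consistency check reported above: Cronbach's alpha for a 50-item instrument, computed from a respondents x items matrix. The 0-4 Likert responses below are simulated, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(8)
n_resp, n_items = 400, 50
ability = rng.normal(0, 1, n_resp)          # latent trait driving all items
items = np.clip(np.round(2 + ability[:, None]
                         + rng.normal(0, 1.2, (n_resp, n_items))), 0, 4)

k = n_items
sum_item_vars = items.var(axis=0, ddof=1).sum()
total_var = items.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - sum_item_vars / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```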

  19. Compatible Models of Carbon Content of Individual Trees on a Cunninghamia lanceolata Plantation in Fujian Province, China

    PubMed Central

    Zhuo, Lin; Tao, Hong; Wei, Hong; Chengzhen, Wu

    2016-01-01

    We tried to establish compatible carbon-content models of individual trees for a Chinese fir (Cunninghamia lanceolata (Lamb.) Hook.) plantation in Fujian province, southeast China. In general, compatibility requires that the sum of the components equal the whole tree, meaning that the sum of the percentages calculated from the component equations should equal 100%. Thus, we used multiple approaches to simulate carbon content in boles, branches, foliage, roots and whole individual trees. The approaches included (i) single optimal fitting (SOF), (ii) nonlinear adjustment in proportion (NAP) and (iii) nonlinear seemingly unrelated regression (NSUR). These approaches were used in combination with predictors based on diameter at breast height (D) and tree height (H), such as D, D²H, DH and D&H (where D&H denotes two separate variables in a bivariate model). Power, exponential and polynomial functions were tested, and a new general function model was proposed in this study. Weighted least-squares regression was employed to eliminate heteroscedasticity. Model performance was evaluated using mean residuals, residual variance, mean square error and the coefficient of determination. The results indicated that models with two-dimensional variables (DH, D²H and D&H) were always superior to those with a single variable (D). The D&H variable combination was found to be the most useful predictor. Of the approaches, SOF could fit each single model optimally, but its estimates deviated because of incompatibilities among components, whereas NAP and NSUR ensured compatible predictions. We also found that the new general model was more accurate than the others. In conclusion, we recommend the new general model for estimating the carbon content of Chinese fir, and suggest it be considered for other vegetation types as well. PMID:26982054
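
    A hedged sketch of the compatibility idea behind NAP: fit a whole-tree model and per-component models, then rescale the component predictions in proportion so they sum exactly to the whole-tree prediction. The power-function form a*(D²H)^b and all coefficients are invented for illustration, not the paper's fitted models.

```python
import numpy as np

def power_model(d2h, a, b):
    return a * d2h ** b

coef = {"bole":    (0.012, 0.95),   # hypothetical fitted parameters
        "branch":  (0.004, 0.85),
        "foliage": (0.003, 0.80),
        "root":    (0.006, 0.88)}
coef_total = (0.024, 0.92)          # hypothetical whole-tree model

D, H = 22.0, 16.0                   # diameter (cm), height (m)
d2h = D**2 * H
total = power_model(d2h, *coef_total)
parts = {c: power_model(d2h, *p) for c, p in coef.items()}

scale = total / sum(parts.values())           # proportional adjustment factor
adjusted = {c: v * scale for c, v in parts.items()}
print({c: round(v, 1) for c, v in adjusted.items()})
print("sum =", round(sum(adjusted.values()), 1), "| whole tree =", round(total, 1))
```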

  20. The evolution and consequences of sex-specific reproductive variance.

    PubMed

    Mullon, Charles; Reuter, Max; Lehmann, Laurent

    2014-01-01

    Natural selection favors alleles that increase the number of offspring produced by their carriers. But in a world that is inherently uncertain within generations, selection also favors alleles that reduce the variance in the number of offspring produced. Although previous studies have established this principle, they have largely ignored fundamental aspects of sexual reproduction and therefore how selection on sex-specific reproductive variance operates. To study the evolution and consequences of sex-specific reproductive variance, we present a population-genetic model of phenotypic evolution in a dioecious population that incorporates previously neglected components of reproductive variance. First, we derive the probability of fixation for mutations that affect male and/or female reproductive phenotypes under sex-specific selection. We find that even in the simplest scenarios, the direction of selection is altered when reproductive variance is taken into account. In particular, previously unaccounted-for covariances between the reproductive outputs of different individuals are expected to play a significant role in determining the direction of selection. Then, the probability of fixation is used to develop a stochastic model of joint male and female phenotypic evolution. We find that sex-specific reproductive variance can be responsible for changes in the course of long-term evolution. Finally, the model is applied to an example of parental-care evolution. Overall, our model allows for the evolutionary analysis of social traits in finite and dioecious populations, where interactions can occur within and between sexes under a realistic scenario of reproduction.
