ERIC Educational Resources Information Center
Luh, Wei-Ming; Guo, Jiin-Huarng
2011-01-01
Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…
Meta-analysis with missing study-level sample variance data.
Chowdhry, Amit K; Dworkin, Robert H; McDermott, Michael P
2016-07-30
We consider a study-level meta-analysis with a normally distributed outcome variable and possibly unequal study-level variances, where the object of inference is the difference in means between a treatment and control group. A common complication in such an analysis is missing sample variances for some studies. A frequently used approach is to impute the weighted (by sample size) mean of the observed variances (mean imputation). Another approach is to include only those studies with variances reported (complete case analysis). Both mean imputation and complete case analysis are only valid under the missing-completely-at-random assumption, and even then the inverse variance weights produced are not necessarily optimal. We propose a multiple imputation method employing gamma meta-regression to impute the missing sample variances. Our method takes advantage of study-level covariates that may be used to provide information about the missing data. Through simulation studies, we show that multiple imputation, when the imputation model is correctly specified, is superior to competing methods in terms of confidence interval coverage probability and type I error probability when testing a specified group difference. Finally, we describe a similar approach to handling missing variances in cross-over studies. Copyright © 2016 John Wiley & Sons, Ltd.
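For orientation, a minimal numpy sketch of the weighted mean-imputation baseline described in this abstract; the variances and sample sizes are invented for illustration, and the multiple-imputation approach the authors propose would replace this single imputed value with draws from a gamma meta-regression.

```python
import numpy as np

# Hypothetical study-level data: v holds reported sampling variances,
# n the study sample sizes; np.nan marks studies with missing variances.
v = np.array([0.42, np.nan, 0.55, 0.31, np.nan, 0.48])
n = np.array([40, 25, 60, 35, 20, 50])

# Mean imputation: replace each missing variance with the
# sample-size-weighted mean of the observed variances.
obs = ~np.isnan(v)
v_bar = np.average(v[obs], weights=n[obs])
v_imp = np.where(obs, v, v_bar)

# Inverse-variance weights for the pooled mean difference.
w = 1.0 / v_imp
print(v_imp, w / w.sum())
```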
Alvin H. Yu; Garry Chick
2010-01-01
This study compared the utility of two different post-hoc tests after detecting significant differences within factors on multiple dependent variables using multivariate analysis of variance (MANOVA). We compared the univariate F test (the Scheffé method) to descriptive discriminant analysis (DDA) using an educational-tour survey of university study-...
Uncertainty importance analysis using parametric moment ratio functions.
Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen
2014-02-01
This article presents a new importance analysis framework, called parametric moment ratio function, for measuring the reduction of model output uncertainty when the distribution parameters of inputs are changed, and the emphasis is put on the mean and variance ratio functions with respect to the variances of model inputs. The proposed concepts efficiently guide the analyst to achieve a targeted reduction on the model output mean and variance by operating on the variances of model inputs. The unbiased and progressive unbiased Monte Carlo estimators are also derived for the parametric mean and variance ratio functions, respectively. Only a set of samples is needed for implementing the proposed importance analysis by the proposed estimators, thus the computational cost is free of input dimensionality. An analytical test example with highly nonlinear behavior is introduced for illustrating the engineering significance of the proposed importance analysis technique and verifying the efficiency and convergence of the derived Monte Carlo estimators. Finally, the moment ratio function is applied to a planar 10-bar structure for achieving a targeted 50% reduction of the model output variance. © 2013 Society for Risk Analysis.
A mean-variance analysis of arbitrage portfolios
NASA Astrophysics Data System (ADS)
Fang, Shuhong
2007-03-01
Based on the careful analysis of the definition of arbitrage portfolio and its return, the author presents a mean-variance analysis of the return of arbitrage portfolios, which implies that Korkie and Turtle's results ( B. Korkie, H.J. Turtle, A mean-variance analysis of self-financing portfolios, Manage. Sci. 48 (2002) 427-443) are misleading. A practical example is given to show the difference between the arbitrage portfolio frontier and the usual portfolio frontier.
Applications of non-parametric statistics and analysis of variance on sample variances
NASA Technical Reports Server (NTRS)
Myers, R. H.
1981-01-01
Nonparametric methods that are available for NASA-type applications are discussed. An attempt will be made here to survey what can be used, to attempt recommendations as to when each would be applicable, and to compare the methods, when possible, with the usual normal-theory procedures that are available for the Gaussian analog. It is important here to point out the hypotheses that are being tested, the assumptions that are being made, and the limitations of the nonparametric procedures. The appropriateness of doing analysis of variance on sample variances is also discussed and studied. This procedure is followed in several NASA simulation projects. On the surface this would appear to be a reasonably sound procedure. However, the difficulties involved center around the normality problem and the basic homogeneous variance assumption that is made in usual analysis of variance problems. These difficulties are discussed and guidelines given for using the methods.
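A robust modern counterpart to running analysis of variance directly on sample variances is Levene's test (here the Brown-Forsythe variant), which is an ANOVA on absolute deviations from group medians and avoids much of the normality difficulty noted above. A minimal scipy sketch with invented data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Three hypothetical groups with unequal spread.
g1 = rng.normal(0, 1.0, 30)
g2 = rng.normal(0, 1.5, 30)
g3 = rng.normal(0, 2.0, 30)

# Brown-Forsythe variant of Levene's test: an ANOVA on absolute
# deviations from the group medians, robust to non-normality.
stat, p = stats.levene(g1, g2, g3, center='median')
print(f"W = {stat:.3f}, p = {p:.4f}")
```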
Determining Sample Sizes for Precise Contrast Analysis with Heterogeneous Variances
ERIC Educational Resources Information Center
Jan, Show-Li; Shieh, Gwowen
2014-01-01
The analysis of variance (ANOVA) is one of the most frequently used statistical analyses in practical applications. Accordingly, the single and multiple comparison procedures are frequently applied to assess the differences among mean effects. However, the underlying assumption of homogeneous variances may not always be tenable. This study…
The Importance of Variance in Statistical Analysis: Don't Throw Out the Baby with the Bathwater.
ERIC Educational Resources Information Center
Peet, Martha W.
This paper analyzes what happens to the effect size of a given dataset when the variance is removed by categorization for the purpose of applying "OVA" methods (analysis of variance, analysis of covariance). The dataset is from a classic study by Holzinger and Swineford (1939) in which more than 20 ability tests were administered to 301…
Genomic Analysis of Complex Microbial Communities in Wounds
2012-01-01
thoroughly in the ecology literature. Permutation Multivariate Analysis of Variance (PerMANOVA). We used PerMANOVA to test the null hypothesis of no difference between the bacterial communities found within a single wound compared to those from different patients (α = 0.05). PerMANOVA is a permutation-based version of the multivariate analysis of variance (MANOVA). PerMANOVA uses the distances between samples to partition variance and
An Analysis of Variance Framework for Matrix Sampling.
ERIC Educational Resources Information Center
Sirotnik, Kenneth
Significant cost savings can be achieved with the use of matrix sampling in estimating population parameters from psychometric data. The statistical design is intuitively simple, using the framework of the two-way classification analysis of variance technique. For example, the mean and variance are derived from the performance of a certain grade…
Excoffier, L; Smouse, P E; Quattro, J M
1992-06-01
We present here a framework for the study of molecular variation within a single species. Information on DNA haplotype divergence is incorporated into an analysis of variance format, derived from a matrix of squared-distances among all pairs of haplotypes. This analysis of molecular variance (AMOVA) produces estimates of variance components and F-statistic analogs, designated here as phi-statistics, reflecting the correlation of haplotypic diversity at different levels of hierarchical subdivision. The method is flexible enough to accommodate several alternative input matrices, corresponding to different types of molecular data, as well as different types of evolutionary assumptions, without modifying the basic structure of the analysis. The significance of the variance components and phi-statistics is tested using a permutational approach, eliminating the normality assumption that is conventional for analysis of variance but inappropriate for molecular data. Application of AMOVA to human mitochondrial DNA haplotype data shows that population subdivisions are better resolved when some measure of molecular differences among haplotypes is introduced into the analysis. At the intraspecific level, however, the additional information provided by knowing the exact phylogenetic relations among haplotypes or by a nonlinear translation of restriction-site change into nucleotide diversity does not significantly modify the inferred population genetic structure. Monte Carlo studies show that site sampling does not fundamentally affect the significance of the molecular variance components. The AMOVA treatment is easily extended in several different directions and it constitutes a coherent and flexible framework for the statistical analysis of molecular data.
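The permutational testing strategy is easy to sketch. The toy Python example below permutes group labels to obtain a p-value for an among-group share of squared pairwise distances; it is a simplified stand-in for AMOVA's hierarchical variance-component decomposition, with all data invented:

```python
import numpy as np

def phi_stat(d2, labels):
    """Toy among-group share of the total sum of squared pairwise
    distances (a simplified analog of AMOVA's phi-statistics)."""
    n = len(labels)
    total = d2[np.triu_indices(n, 1)].sum()
    within = 0.0
    for g in np.unique(labels):
        idx = np.where(labels == g)[0]
        within += d2[np.ix_(idx, idx)][np.triu_indices(idx.size, 1)].sum()
    return (total - within) / total

def perm_pvalue(d2, labels, n_perm=999, seed=0):
    """Permutation p-value: shuffle group labels and recompute."""
    rng = np.random.default_rng(seed)
    obs = phi_stat(d2, labels)
    hits = sum(phi_stat(d2, rng.permutation(labels)) >= obs
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)

# Invented example: 12 haplotypes in two populations of 6.
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0, 1, (6, 4)), rng.normal(1.5, 1, (6, 4))])
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)  # squared distances
labels = np.array([0] * 6 + [1] * 6)
print(perm_pvalue(d2, labels))
```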
Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.
Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
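A minimal numpy sketch of the balanced one-factor random-effects computation the note describes, with invented setup-error data; the between-patient mean square yields the systematic component and the within-patient mean square the random component:

```python
import numpy as np

# Hypothetical balanced data: rows = patients, columns = fractions (mm).
rng = np.random.default_rng(0)
k, n = 8, 5                          # 8 patients, 5 fractions each
true_sys, true_rand = 2.0, 1.5       # assumed for the simulation
x = (true_sys * rng.standard_normal((k, 1))
     + true_rand * rng.standard_normal((k, n)))

grand = x.mean()
msb = n * ((x.mean(axis=1) - grand) ** 2).sum() / (k - 1)  # between-patient MS
msw = x.var(axis=1, ddof=1).mean()                         # within-patient MS

sigma_random = np.sqrt(msw)                    # random (interfraction) error
sigma_system = np.sqrt(max(msb - msw, 0) / n)  # systematic (interpatient) error
print(sigma_system, sigma_random)
```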
Tanner-Smith, Emily E; Tipton, Elizabeth
2014-03-01
Methodologists have recently proposed robust variance estimation as one way to handle dependent effect sizes in meta-analysis. Software macros for robust variance estimation in meta-analysis are currently available for Stata (StataCorp LP, College Station, TX, USA) and SPSS (IBM, Armonk, NY, USA), yet there is little guidance for authors regarding the practical application and implementation of those macros. This paper provides a brief tutorial on the implementation of the Stata and SPSS macros and discusses practical issues meta-analysts should consider when estimating meta-regression models with robust variance estimates. Two example databases are used in the tutorial to illustrate the use of meta-analysis with robust variance estimates. Copyright © 2013 John Wiley & Sons, Ltd.
Distribution of lod scores in oligogenic linkage analysis.
Williams, J T; North, K E; Martin, L J; Comuzzie, A G; Göring, H H; Blangero, J
2001-01-01
In variance component oligogenic linkage analysis it can happen that the residual additive genetic variance bounds to zero when estimating the effect of the ith quantitative trait locus. Using quantitative trait Q1 from the Genetic Analysis Workshop 12 simulated general population data, we compare the observed lod scores from oligogenic linkage analysis with the empirical lod score distribution under a null model of no linkage. We find that zero residual additive genetic variance in the null model alters the usual distribution of the likelihood-ratio statistic.
Variance-Based Sensitivity Analysis to Support Simulation-Based Design Under Uncertainty
Opgenoord, Max M. J.; Allaire, Douglas L.; Willcox, Karen E.
2016-09-12
Sensitivity analysis plays a critical role in quantifying uncertainty in the design of engineering systems. A variance-based global sensitivity analysis is often used to rank the importance of input factors, based on their contribution to the variance of the output quantity of interest. However, this analysis assumes that all input variability can be reduced to zero, which is typically not the case in a design setting. Distributional sensitivity analysis (DSA) instead treats the uncertainty reduction in the inputs as a random variable, and defines a variance-based sensitivity index function that characterizes the relative contribution to the output variance as a function of the amount of uncertainty reduction. This paper develops a computationally efficient implementation for the DSA formulation and extends it to include distributions commonly used in engineering design under uncertainty. Application of the DSA method to the conceptual design of a commercial jetliner demonstrates how the sensitivity analysis provides valuable information to designers and decision-makers on where and how to target uncertainty reduction efforts.
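For orientation: the limiting case that DSA generalizes, the standard variance-based (Sobol) main-effect index, can be estimated by Monte Carlo pick-freeze sampling. A sketch with an invented toy model (this is not the paper's DSA implementation):

```python
import numpy as np

def sobol_first_order(f, d, n=100_000, seed=0):
    """First-order (main-effect) variance-based sensitivity indices via
    pick-freeze Monte Carlo, for independent U(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = f(A), f(B)
    var_y = np.var(np.concatenate([fA, fB]), ddof=1)
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]             # freeze all inputs except the i-th
        S[i] = np.mean(fB * (f(ABi) - fA)) / var_y
    return S

# Invented toy model: Y = X1 + 2*X2 + X1*X3.
model = lambda X: X[:, 0] + 2 * X[:, 1] + X[:, 0] * X[:, 2]
print(sobol_first_order(model, d=3))
```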
Dangers in Using Analysis of Covariance Procedures.
ERIC Educational Resources Information Center
Campbell, Kathleen T.
Problems associated with the use of analysis of covariance (ANCOVA) as a statistical control technique are explained. Three problems relate to the use of "OVA" methods (analysis of variance, analysis of covariance, multivariate analysis of variance, and multivariate analysis of covariance) in general. These are: (1) the wasting of information when…
An Analysis of Variance Approach for the Estimation of Response Time Distributions in Tests
ERIC Educational Resources Information Center
Attali, Yigal
2010-01-01
Generalizability theory and analysis of variance methods are employed, together with the concept of objective time pressure, to estimate response time distributions and the degree of time pressure in timed tests. By estimating response time variance components due to person, item, and their interaction, and fixed effects due to item types and…
Relating the Hadamard Variance to MCS Kalman Filter Clock Estimation
NASA Technical Reports Server (NTRS)
Hutsell, Steven T.
1996-01-01
The Global Positioning System (GPS) Master Control Station (MCS) currently makes significant use of the Allan Variance. This two-sample variance equation has proven excellent as a handy, understandable tool, both for time domain analysis of GPS cesium frequency standards, and for fine tuning the MCS's state estimation of these atomic clocks. The Allan Variance does not explicitly converge for noise types with alpha less than or equal to minus 3 and can be greatly affected by frequency drift. Because GPS rubidium frequency standards exhibit non-trivial aging and aging noise characteristics, the basic Allan Variance analysis must be augmented in order to (a) compensate for a dynamic frequency drift, and (b) characterize two additional noise types, specifically alpha = minus 3, and alpha = minus 4. As the GPS program progresses, we will utilize a larger percentage of rubidium frequency standards than ever before. Hence, GPS rubidium clock characterization will require more attention than ever before. The three-sample variance, commonly referred to as a renormalized Hadamard Variance, is unaffected by linear frequency drift, converges for alpha greater than minus 5, and thus has utility for modeling noise in GPS rubidium frequency standards. This paper demonstrates the potential of Hadamard Variance analysis in GPS operations, and presents an equation that relates the Hadamard Variance to the MCS's Kalman filter process noises.
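The two statistics are directly comparable in code. A short numpy sketch (simulated white frequency noise plus an invented linear drift) shows the Allan variance inflated by the drift while the Hadamard variance is not:

```python
import numpy as np

def allan_var(y):
    """Non-overlapping Allan (two-sample) variance of fractional-frequency
    data at the basic sampling interval."""
    d = np.diff(y)
    return 0.5 * np.mean(d ** 2)

def hadamard_var(y):
    """Hadamard (three-sample) variance: second differences cancel any
    linear frequency drift."""
    d2 = np.diff(y, n=2)
    return np.mean(d2 ** 2) / 6.0

rng = np.random.default_rng(0)
y = rng.standard_normal(1000) + 1.0 * np.arange(1000)  # noise + linear drift
print(allan_var(y), hadamard_var(y))  # Allan inflated by drift; Hadamard not
```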
Environmental Influences on Well-Being: A Dyadic Latent Panel Analysis of Spousal Similarity
ERIC Educational Resources Information Center
Schimmack, Ulrich; Lucas, Richard E.
2010-01-01
This article uses dyadic latent panel analysis (DLPA) to examine environmental influences on well-being. DLPA requires longitudinal dyadic data. It decomposes the observed variance of both members of a dyad into a trait, state, and an error component. Furthermore, state variance is decomposed into initial and new state variance. Total observed…
Comparison of the efficiency between two sampling plans for aflatoxins analysis in maize
Mallmann, Adriano Olnei; Marchioro, Alexandro; Oliveira, Maurício Schneider; Rauber, Ricardo Hummes; Dilkin, Paulo; Mallmann, Carlos Augusto
2014-01-01
Variance and performance of two sampling plans for aflatoxins quantification in maize were evaluated. Eight lots of maize were sampled using two plans: manual, using sampling spear for kernels; and automatic, using a continuous flow to collect milled maize. Total variance and sampling, preparation, and analysis variance were determined and compared between plans through multifactor analysis of variance. Four theoretical distribution models were used to compare aflatoxins quantification distributions in eight maize lots. The acceptance and rejection probabilities for a lot under certain aflatoxin concentration were determined using variance and the information on the selected distribution model to build the operational characteristic curves (OC). Sampling and total variance were lower at the automatic plan. The OC curve from the automatic plan reduced both consumer and producer risks in comparison to the manual plan. The automatic plan is more efficient than the manual one because it expresses more accurately the real aflatoxin contamination in maize. PMID:24948911
Analysis of Wind Tunnel Polar Replicates Using the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
Deloach, Richard; Micol, John R.
2010-01-01
The role of variance in a Modern Design of Experiments analysis of wind tunnel data is reviewed, with distinctions made between explained and unexplained variance. The partitioning of unexplained variance into systematic and random components is illustrated, with examples of the elusive systematic component provided for various types of real-world tests. The importance of detecting and defending against systematic unexplained variance in wind tunnel testing is discussed, and the random and systematic components of unexplained variance are examined for a representative wind tunnel data set acquired in a test in which a missile is used as a test article. The adverse impact of correlated (non-independent) experimental errors is described, and recommendations are offered for replication strategies that facilitate the quantification of random and systematic unexplained variance.
Analysis of Developmental Data: Comparison Among Alternative Methods
ERIC Educational Resources Information Center
Wilson, Ronald S.
1975-01-01
To examine the ability of the correction factor epsilon to counteract statistical bias in univariate analysis, an analysis of variance (adjusted by epsilon) and a multivariate analysis of variance were performed on the same data. The results indicated that univariate analysis is a fully protected design when used with epsilon. (JMB)
Statistical analysis of Skylab 3. [endocrine/metabolic studies of astronauts
NASA Technical Reports Server (NTRS)
Johnston, D. A.
1974-01-01
The results of endocrine/metabolic studies of astronauts on Skylab 3 are reported. One-way analysis of variance, contrasts, two-way unbalanced analysis of variance, and analysis of periodic changes in flight are included. Results for blood tests and urine tests are presented.
An Empirical Assessment of Defense Contractor Risk 1976-1984.
1986-06-01
A model to evaluate Department of Defense contract pricing, financing, and profit policies. An empirical assessment of the defense contractor risk-return relationship is performed utilizing four methods: mean-variance analysis of rate of return, the Capital Asset Pricing Model, mean-variance analysis of total
ERIC Educational Resources Information Center
Lix, Lisa M.; And Others
1996-01-01
Meta-analytic techniques were used to summarize the statistical robustness literature on Type I error properties of alternatives to the one-way analysis of variance "F" test. The James (1951) and Welch (1951) tests performed best under violations of the variance homogeneity assumption, although their use is not always appropriate. (SLD)
Xu, Chonggang; Gertner, George
2013-01-01
Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, the FAST analysis is mainly confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037
Xu, Nan; Veesler, David; Doerschuk, Peter C; Johnson, John E
2018-05-01
The information content of cryo EM data sets exceeds that of the electron scattering potential (cryo EM) density initially derived for structure determination. Previously we demonstrated the power of data variance analysis for characterizing regions of cryo EM density that displayed functionally important variance anomalies associated with maturation cleavage events in Nudaurelia Omega Capensis Virus and the presence or absence of a maturation protease in bacteriophage HK97 procapsids. Here we extend the analysis in two ways. First, instead of imposing icosahedral symmetry on every particle in the data set during the variance analysis, we only assume that the data set as a whole has icosahedral symmetry. This change removes artifacts of high variance along icosahedral symmetry axes, but retains all of the features previously reported in the HK97 data set. Second we present a covariance analysis that reveals correlations in structural dynamics (variance) between the interior of the HK97 procapsid with the protease and regions of the exterior (not seen in the absence of the protease). The latter analysis corresponds well with hydrogen deuterium exchange studies previously published that reveal the same correlation. Copyright © 2018 Elsevier Inc. All rights reserved.
Fleischhauer, Monika; Enge, Sören; Miller, Robert; Strobel, Alexander; Strobel, Anja
2013-01-01
Meta-analytic data highlight the value of the Implicit Association Test (IAT) as an indirect measure of personality. Based on evidence suggesting that confounding factors such as cognitive abilities contribute to the IAT effect, this study provides a first investigation of whether basic personality traits explain unwanted variance in the IAT. In a gender-balanced sample of 204 volunteers, the Big-Five dimensions were assessed via self-report, peer-report, and IAT. By means of structural equation modeling (SEM), latent Big-Five personality factors (based on self- and peer-report) were estimated and their predictive value for unwanted variance in the IAT was examined. In a first analysis, unwanted variance was defined in the sense of method-specific variance which may result from differences in task demands between the two IAT block conditions and which can be mirrored by the absolute size of the IAT effects. In a second analysis, unwanted variance was examined in a broader sense defined as those systematic variance components in the raw IAT scores that are not explained by the latent implicit personality factors. In contrast to the absolute IAT scores, this also considers biases associated with the direction of IAT effects (i.e., whether they are positive or negative in sign), biases that might result, for example, from the IAT's stimulus or category features. None of the explicit Big-Five factors was predictive for method-specific variance in the IATs (first analysis). However, when considering unwanted variance that goes beyond pure method-specific variance (second analysis), a substantial effect of neuroticism occurred that may have been driven by the affective valence of IAT attribute categories and the facilitated processing of negative stimuli, typically associated with neuroticism. The findings thus point to the necessity of using attribute category labels and stimuli of similar affective valence in personality IATs to avoid confounding due to recoding.
A note on variance estimation in random effects meta-regression.
Sidik, Kurex; Jonkman, Jeffrey N
2005-01-01
For random effects meta-regression inference, variance estimation for the parameter estimates is discussed. Because estimated weights are used for meta-regression analysis in practice, the assumed or estimated covariance matrix used in meta-regression is not strictly correct, due to possible errors in estimating the weights. Therefore, this note investigates the use of a robust variance estimation approach for obtaining variances of the parameter estimates in random effects meta-regression inference. This method treats the assumed covariance matrix of the effect measure variables as a working covariance matrix. Using an example of meta-analysis data from clinical trials of a vaccine, the robust variance estimation approach is illustrated in comparison with two other methods of variance estimation. A simulation study is presented, comparing the three methods of variance estimation in terms of bias and coverage probability. We find that, despite the seeming suitability of the robust estimator for random effects meta-regression, the improved variance estimator of Knapp and Hartung (2003) yields the best performance among the three estimators, and thus may provide the best protection against errors in the estimated weights.
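A compact numpy sketch of the working-covariance idea: weighted least squares with inverse-variance working weights, followed by a sandwich (robust) covariance for the coefficients. All numbers are invented, and this is an illustration of the general approach rather than the authors' code:

```python
import numpy as np

def meta_regression_robust(y, X, v):
    """WLS meta-regression with inverse-variance working weights and a
    sandwich (robust) covariance estimate for the coefficients."""
    w = 1.0 / v                        # working weights
    XtW = X.T * w
    bread = np.linalg.inv(XtW @ X)     # (X'WX)^{-1}
    beta = bread @ (XtW @ y)
    r = y - X @ beta                   # residuals
    meat = (XtW * r ** 2) @ XtW.T      # X'W diag(r^2) W X
    return beta, bread @ meat @ bread

# Invented data: 12 studies, intercept plus one moderator.
rng = np.random.default_rng(3)
k = 12
x = rng.random(k)
X = np.column_stack([np.ones(k), x])
v = rng.uniform(0.05, 0.2, k)          # within-study variances
y = 0.3 + 0.5 * x + rng.normal(0, np.sqrt(v))
beta, cov = meta_regression_robust(y, X, v)
print(beta, np.sqrt(np.diag(cov)))     # estimates and robust standard errors
```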
Analysis of Variance: What Is Your Statistical Software Actually Doing?
ERIC Educational Resources Information Center
Li, Jian; Lomax, Richard G.
2011-01-01
Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs, mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…
Why risk is not variance: an expository note.
Cox, Louis Anthony (Tony)
2008-08-01
Variance (or standard deviation) of return is widely used as a measure of risk in financial investment risk analysis applications, where mean-variance analysis is applied to calculate efficient frontiers and undominated portfolios. Why, then, do health, safety, and environmental (HS&E) and reliability engineering risk analysts insist on defining risk more flexibly, as being determined by probabilities and consequences, rather than simply by variances? This note suggests an answer by providing a simple proof that mean-variance decision making violates the principle that a rational decisionmaker should prefer higher to lower probabilities of receiving a fixed gain, all else being equal. Indeed, simply hypothesizing a continuous increasing indifference curve for mean-variance combinations at the origin is enough to imply that a decisionmaker must find unacceptable some prospects that offer a positive probability of gain and zero probability of loss. Unlike some previous analyses of limitations of variance as a risk metric, this expository note uses only simple mathematics and does not require the additional framework of von Neumann Morgenstern utility theory.
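The note's argument reproduces in a few lines of arithmetic: under an invented mean-variance rule, a prospect offering a positive probability of gain and no possibility of loss is ranked below the status quo.

```python
# Two prospects (all numbers invented):
#   A: receive 0 with certainty            -> mean 0, variance 0
#   B: receive G with probability p, else 0 (no possibility of loss)
p, G = 0.5, 100.0
mean_B = p * G                   # 50.0
var_B = p * (1 - p) * G ** 2     # 2500.0

# A mean-variance rule U = mean - lam * variance with lam = 0.05:
lam = 0.05
U_A = 0.0
U_B = mean_B - lam * var_B       # 50 - 125 = -75 < U_A
print(U_A, U_B)                  # the rule rejects B, although B stochastically
                                 # dominates A
```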
40 CFR 264.97 - General ground-water monitoring requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... paragraph (i) of this section. (1) A parametric analysis of variance (ANOVA) followed by multiple... mean levels for each constituent. (2) An analysis of variance (ANOVA) based on ranks followed by...
Thorlund, Kristian; Thabane, Lehana; Mills, Edward J
2013-01-11
Multiple treatment comparison (MTC) meta-analyses are commonly modeled in a Bayesian framework, and weakly informative priors are typically preferred to mirror familiar data-driven frequentist approaches. Random-effects MTCs have commonly modeled heterogeneity under the assumption that the between-trial variances for all involved treatment comparisons are equal (i.e., the 'common variance' assumption). This approach 'borrows strength' for heterogeneity estimation across treatment comparisons and thus adds valuable precision when data are sparse. The homogeneous variance assumption, however, is unrealistic and can severely bias variance estimates. Consequently, 95% credible intervals may not retain nominal coverage, and treatment rank probabilities may become distorted. Relaxing the homogeneous variance assumption may be equally problematic due to reduced precision. To regain good precision, moderately informative variance priors or additional mathematical assumptions may be necessary. In this paper we describe four novel approaches to modeling heterogeneity variance: two novel model structures, and two approaches for use of moderately informative variance priors. We examine the relative performance of all approaches in two illustrative MTC data sets. We particularly compare between-study heterogeneity estimates and model fits, treatment effect estimates and 95% credible intervals, and treatment rank probabilities. In both data sets, use of moderately informative variance priors constructed from the pairwise meta-analysis data yielded the best model fit and narrower credible intervals. Imposing consistency equations on variance estimates, assuming variances to be exchangeable, or using empirically informed variance priors also yielded good model fits and narrow credible intervals. The homogeneous variance model yielded high precision at all times, but overall inadequate estimates of between-trial variances. Lastly, treatment rankings were similar among the novel approaches, but considerably different when compared with the homogeneous variance approach. MTC models using a homogeneous variance structure appear to perform sub-optimally when between-trial variances vary between comparisons. Using informative variance priors, assuming exchangeability or imposing consistency between heterogeneity variances can all ensure sufficiently reliable and realistic heterogeneity estimation, and thus more reliable MTC inferences. All four approaches should be viable candidates for replacing or supplementing the conventional homogeneous variance MTC model, which is currently the most widely used in practice.
Once upon Multivariate Analyses: When They Tell Several Stories about Biological Evolution.
Renaud, Sabrina; Dufour, Anne-Béatrice; Hardouin, Emilie A; Ledevin, Ronan; Auffray, Jean-Christophe
2015-01-01
Geometric morphometrics aims to characterize the geometry of complex traits. It is therefore by essence multivariate. The most popular methods to investigate patterns of differentiation in this context are (1) the Principal Component Analysis (PCA), which is an eigenvalue decomposition of the total variance-covariance matrix among all specimens; (2) the Canonical Variate Analysis (CVA, a.k.a. linear discriminant analysis (LDA) for more than two groups), which aims at separating the groups by maximizing the between-group to within-group variance ratio; (3) the between-group PCA (bgPCA), which investigates patterns of between-group variation without standardizing by the within-group variance. Standardizing within-group variance, as performed in the CVA, distorts the relationships among groups, an effect that is particularly strong if the variance is oriented in a comparable way in all groups. Such a shared direction of main morphological variance may occur and have a biological meaning, for instance corresponding to the most frequent standing genetic variation in a population. Here we undertake a case study of the evolution of house mouse molar shape across various islands, based on a real dataset and simulations. We investigated how patterns of main variance influence the depiction of among-group differentiation according to the interpretation of the PCA, bgPCA and CVA. Without arguing about one method performing 'better' than another, it rather emerges that working on the total or between-group variance (PCA and bgPCA) will tend to put the focus on the role of the direction of main variance as a line of least resistance to evolution. Standardizing by the within-group variance (CVA), by dampening the expression of this line of least resistance, has the potential to reveal other relevant patterns of differentiation that may otherwise be blurred.
von Thiele Schwarz, Ulrica; Sjöberg, Anders; Hasson, Henna; Tafvelin, Susanne
2014-12-01
To test the factor structure and variance components of the productivity subscales of the Health and Work Questionnaire (HWQ). A total of 272 individuals from one company answered the HWQ scale, including three dimensions (efficiency, quality, and quantity) that the respondent rated from three perspectives: their own, their supervisor's, and their coworkers'. A confirmatory factor analysis was performed, and common and unique variance components evaluated. A common factor explained 81% of the variance (reliability 0.95). All dimensions and rater perspectives contributed with unique variance. The final model provided a perfect fit to the data. Efficiency, quality, and quantity and three rater perspectives are valid parts of the self-rated productivity measurement model, but with a large common factor. Thus, the HWQ can be analyzed either as one factor or by extracting the unique variance for each subdimension.
On the Relations among Regular, Equal Unique Variances, and Image Factor Analysis Models.
ERIC Educational Resources Information Center
Hayashi, Kentaro; Bentler, Peter M.
2000-01-01
Investigated the conditions under which the matrix of factor loadings from the factor analysis model with equal unique variances will give a good approximation to the matrix of factor loadings from the regular factor analysis model. Extends the results to the image factor analysis model. Discusses implications for practice. (SLD)
Hu, Pingsha; Maiti, Tapabrata
2011-01-01
Microarray is a powerful tool for genome-wide gene expression analysis. In microarray expression data, often mean and variance have certain relationships. We present a non-parametric mean-variance smoothing method (NPMVS) to analyze differentially expressed genes. In this method, a nonlinear smoothing curve is fitted to estimate the relationship between mean and variance. Inference is then made upon shrinkage estimation of posterior means assuming variances are known. Different methods have been applied to simulated datasets, in which a variety of mean and variance relationships were imposed. The simulation study showed that NPMVS outperformed the other two popular shrinkage estimation methods in some mean-variance relationships; and NPMVS was competitive with the two methods in other relationships. A real biological dataset, in which a cold stress transcription factor gene, CBF2, was overexpressed, has also been analyzed with the three methods. Gene ontology and cis-element analysis showed that NPMVS identified more cold and stress responsive genes than the other two methods did. The good performance of NPMVS is mainly due to its shrinkage estimation for both means and variances. In addition, NPMVS exploits a non-parametric regression between mean and variance, instead of assuming a specific parametric relationship between mean and variance. The source code written in R is available from the authors on request. PMID:21611181
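The smoothing step can be sketched with a lowess fit of per-gene sample variances against means (statsmodels); the data are invented and this captures only the spirit of NPMVS's mean-variance curve, not the authors' implementation:

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

# Hypothetical per-gene summaries across replicates (invented).
rng = np.random.default_rng(7)
mean_expr = rng.uniform(2, 12, 500)
true_var = 0.1 + 0.05 * mean_expr ** 1.5            # assumed trend
var_expr = true_var * rng.chisquare(3, 500) / 3     # noisy sample variances

# Non-parametric regression of variance on mean.
fit = lowess(var_expr, mean_expr, frac=0.3, return_sorted=True)
smoothed_mean, smoothed_var = fit[:, 0], fit[:, 1]
print(smoothed_var[:5])
```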
Analysis of Variance with Summary Statistics in Microsoft® Excel®
ERIC Educational Resources Information Center
Larson, David A.; Hsu, Ko-Cheng
2010-01-01
Students regularly are asked to solve Single Factor Analysis of Variance problems given only the sample summary statistics (number of observations per category, category means, and corresponding category standard deviations). Most undergraduate students today use Excel for data analysis of this type. However, Excel, like all other statistical…
Handling nonnormality and variance heterogeneity for quantitative sublethal toxicity tests.
Ritz, Christian; Van der Vliet, Leana
2009-09-01
The advantages of using regression-based techniques to derive endpoints from environmental toxicity data are clear, and slowly, this superior analytical technique is gaining acceptance. As use of regression-based analysis becomes more widespread, some of the associated nuances and potential problems come into sharper focus. Looking at data sets that cover a broad spectrum of standard test species, we noticed that some model fits to data failed to meet two key assumptions (variance homogeneity and normality) that are necessary for correct statistical analysis via regression-based techniques. Failure to meet these assumptions often is caused by reduced variance at the concentrations showing severe adverse effects. Although commonly used with linear regression analysis, transformation of the response variable only is not appropriate when fitting data using nonlinear regression techniques. Through analysis of sample data sets, including Lemna minor, Eisenia andrei (terrestrial earthworm), and algae, we show that both the so-called Box-Cox transformation and use of the Poisson distribution can help to correct variance heterogeneity and nonnormality and so allow nonlinear regression analysis to be implemented. Both the Box-Cox transformation and the Poisson distribution can be readily implemented into existing protocols for statistical analysis. By correcting for nonnormality and variance heterogeneity, these two statistical tools can be used to encourage the transition to regression-based analysis and the depreciation of less-desirable and less-flexible analytical techniques, such as linear interpolation.
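A minimal scipy sketch of the Box-Cox step on invented concentration-response data; lambda is chosen by maximum likelihood, and in a full nonlinear regression one would transform both sides rather than the response alone, as the paper cautions:

```python
import numpy as np
from scipy import stats

# Invented positive response data whose spread shrinks at high
# concentrations (variance heterogeneity).
rng = np.random.default_rng(2)
conc = np.repeat([0.0, 1.0, 10.0, 100.0], 10)
resp = 100.0 / (1.0 + conc) * rng.lognormal(0, 0.2, 40)

# Box-Cox transformation with maximum-likelihood lambda; the transformed
# response is typically closer to normal with more homogeneous variance.
resp_bc, lam = stats.boxcox(resp)
print(f"estimated lambda = {lam:.3f}")
```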
NASA Astrophysics Data System (ADS)
Asanuma, Jun
Variances of the velocity components and scalars are important as indicators of the turbulence intensity. They also can be utilized to estimate surface fluxes in several types of "variance methods", and the estimated fluxes can be regional values if the variances from which they are calculated are regionally representative measurements. With these motivations, variances measured by an aircraft in the unstable ABL over a flat pine forest during HAPEX-Mobilhy were analyzed within the context of the similarity scaling arguments. The variances of temperature and vertical velocity within the atmospheric surface layer were found to follow closely the Monin-Obukhov similarity theory, and to yield reasonable estimates of the surface sensible heat fluxes when they are used in variance methods. This validates the use of variance methods with aircraft measurements. On the other hand, the specific humidity variances were influenced by the surface heterogeneity and clearly failed to obey MOS. A simple analysis based on the similarity law for free convection produced a comprehensible and quantitative picture regarding the effect of the surface flux heterogeneity on the statistical moments, and revealed that variances of the active and passive scalars become dissimilar because of their different roles in turbulence. The analysis also indicated that the mean quantities are also affected by the heterogeneity but to a lesser extent than the variances. The temperature variances in the mixed layer (ML) were examined by using a generalized top-down bottom-up diffusion model with some combinations of velocity scales and inversion flux models. The results showed that the surface shear stress exerts considerable influence on the lower ML. ML variance methods were also tested with the temperature and vertical velocity variances, and their feasibility was investigated. Finally, the variances in the ML were analyzed in terms of the local similarity concept; the results confirmed the original hypothesis by Panofsky and McCormick that the local scaling in terms of the local buoyancy flux defines the lower bound of the moments.
NASA Technical Reports Server (NTRS)
Alston, D. W.
1981-01-01
The objective of this research was to design a statistical model that could perform an error analysis of curve fits of wind tunnel test data using analysis of variance and regression analysis techniques. Four related subproblems were defined, and by solving each of these a solution to the general research problem was obtained. The capabilities of the evolved true statistical model are considered. The least squares fit is used to determine the nature of the force, moment, and pressure data. The order of the curve fit is increased in order to delete the quadratic effect in the residuals. The analysis of variance is used to determine the magnitude and effect of the error factor associated with the experimental data.
Aligning Event Logs to Task-Time Matrix Clinical Pathways in BPMN for Variance Analysis.
Yan, Hui; Van Gorp, Pieter; Kaymak, Uzay; Lu, Xudong; Ji, Lei; Chiau, Choo Chiap; Korsten, Hendrikus H M; Duan, Huilong
2018-03-01
Clinical pathways (CPs) are popular healthcare management tools to standardize care and ensure quality. Analyzing CP compliance levels and variances is known to be useful for training and CP redesign purposes. Flexible semantics of the business process model and notation (BPMN) language has been shown to be useful for the modeling and analysis of complex protocols. However, in practical cases one may want to exploit that CPs often have the form of task-time matrices. This paper presents a new method parsing complex BPMN models and aligning traces to the models heuristically. A case study on variance analysis is undertaken, where a CP from the practice and two large sets of patients data from an electronic medical record (EMR) database are used. The results demonstrate that automated variance analysis between BPMN task-time models and real-life EMR data are feasible, whereas that was not the case for the existing analysis techniques. We also provide meaningful insights for further improvement.
Theodorsson-Norheim, E
1986-08-01
Multiple t tests at a fixed p level are frequently used to analyse biomedical data where analysis of variance followed by multiple comparisons or the adjustment of the p values according to Bonferroni would be more appropriate. The Kruskal-Wallis test is a nonparametric 'analysis of variance' which may be used to compare several independent samples. The present program is written in an elementary subset of BASIC and will perform the Kruskal-Wallis test followed by multiple comparisons between the groups on practically any computer programmable in BASIC.
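A present-day equivalent of that BASIC program takes a few lines of Python (scipy): the omnibus Kruskal-Wallis test followed by Bonferroni-adjusted pairwise Mann-Whitney comparisons, on invented data:

```python
from scipy import stats

# Three hypothetical treatment groups (values invented).
a = [4.2, 5.1, 3.8, 6.0, 5.5]
b = [6.8, 7.2, 6.1, 7.9, 6.5]
c = [5.0, 4.4, 5.7, 5.2, 4.9]

H, p = stats.kruskal(a, b, c)
print(f"Kruskal-Wallis H = {H:.2f}, p = {p:.4f}")

# Bonferroni-adjusted pairwise follow-up comparisons.
pairs = [("a", a, "b", b), ("a", a, "c", c), ("b", b, "c", c)]
for n1, g1, n2, g2 in pairs:
    u, p_pair = stats.mannwhitneyu(g1, g2, alternative="two-sided")
    print(n1, "vs", n2, "adjusted p =", min(1.0, p_pair * len(pairs)))
```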
Merlo, J; Ohlsson, H; Lynch, K F; Chaix, B; Subramanian, S V
2009-12-01
Social epidemiology investigates both individuals and their collectives. Although the limits that define the individual bodies are very apparent, the collective body's geographical or cultural limits (eg "neighbourhood") are more difficult to discern. Also, epidemiologists normally investigate causation as changes in group means. However, many variables of interest in epidemiology may cause a change in the variance of the distribution of the dependent variable. In spite of that, variance is normally considered a measure of uncertainty or a nuisance rather than a source of substantive information. This reasoning is also true in many multilevel investigations, whereas understanding the distribution of variance across levels should be fundamental. This means-centric reductionism is mostly concerned with risk factors and creates a paradoxical situation, as social medicine is not only interested in increasing the (mean) health of the population, but also in understanding and decreasing inappropriate health and health care inequalities (variance). Critical essay and literature review. The present study promotes (a) the application of measures of variance and clustering to evaluate the boundaries one uses in defining collective levels of analysis (eg neighbourhoods), (b) the combined use of measures of variance and means-centric measures of association, and (c) the investigation of causes of health variation (variance-altering causation). Both measures of variance and means-centric measures of association need to be included when performing contextual analyses. The variance approach, a new aspect of contextual analysis that cannot be interpreted in means-centric terms, allows perspectives to be expanded.
Variation of gene expression in Bacillus subtilis samples of fermentation replicates.
Zhou, Ying; Yu, Wen-Bang; Ye, Bang-Ce
2011-06-01
The application of comprehensive gene expression profiling technologies to compare wild and mutated microorganism samples or to assess molecular differences between various treatments has been widely used. However, little is known about the normal variation of gene expression in microorganisms. In this study, an Agilent customized microarray representing 4,106 genes was used to quantify transcript levels of five-repeated flasks to assess normal variation in Bacillus subtilis gene expression. CV analysis and analysis of variance were employed to investigate the normal variance of genes and the components of variance, respectively. The results showed that above 80% of the total variation was caused by biological variance. For the 12 replicates, 451 of 4,106 genes exhibited variance with CV values over 10%. The functional category enrichment analysis demonstrated that these variable genes were mainly involved in cell type differentiation, cell type localization, cell cycle and DNA processing, and spore or cyst coat. Using power analysis, the minimal biological replicate number for a B. subtilis microarray experiment was determined to be six. The results contribute to the definition of the baseline level of variability in B. subtilis gene expression and emphasize the importance of replicate microarray experiments.
Vowel category dependence of the relationship between palate height, tongue height, and oral area.
Hasegawa-Johnson, Mark; Pizza, Shamala; Alwan, Abeer; Cha, Jul Setsu; Haker, Katherine
2003-06-01
This article evaluates intertalker variance of oral area, logarithm of the oral area, tongue height, and formant frequencies as a function of vowel category. The data consist of coronal magnetic resonance imaging (MRI) sequences and acoustic recordings of 5 talkers, each producing 11 different vowels. Tongue height (left, right, and midsagittal), palate height, and oral area were measured in 3 coronal sections anterior to the oropharyngeal bend and were subjected to multivariate analysis of variance, variance ratio analysis, and regression analysis. The primary finding of this article is that oral area (between palate and tongue) showed less intertalker variance during production of vowels with an oral place of articulation (palatal and velar vowels) than during production of vowels with a uvular or pharyngeal place of articulation. Although oral area variance is place dependent, percentage variance (log area variance) is not place dependent. Midsagittal tongue height in the molar region was positively correlated with palate height during production of palatal vowels, but not during production of nonpalatal vowels. Taken together, these results suggest that small oral areas are characterized by relatively talker-independent vowel targets and that meeting these talker-independent targets is important enough that each talker adjusts his or her own tongue height to compensate for talker-dependent differences in constriction anatomy. Computer simulation results are presented to demonstrate that these results may be explained by an acoustic control strategy: When talkers with very different anatomical characteristics try to match talker-independent formant targets, the resulting area variances are minimized near the primary vocal tract constriction.
Noise and drift analysis of non-equally spaced timing data
NASA Technical Reports Server (NTRS)
Vernotte, F.; Zalamansky, G.; Lantz, E.
1994-01-01
Generally, it is possible to obtain equally spaced timing data from oscillators. The measurement of the drifts and noises affecting oscillators is then performed by using a variance (Allan variance, modified Allan variance, or time variance) or a system of several variances (multivariance method). However, in some cases, several samples, or even several sets of samples, are missing. In the case of millisecond pulsar timing data, for instance, observations are quite irregularly spaced in time. Nevertheless, since some observations are very close together (one minute) and since the timing data sequence is very long (more than ten years), information on both short-term and long-term stability is available. Unfortunately, a direct variance analysis is not possible without interpolating missing data. Different interpolation algorithms (linear interpolation, cubic spline) are used to calculate variances in order to verify that they neither lose information nor add erroneous information. A comparison of the results of the different algorithms is given. Finally, the multivariance method was adapted to the measurement sequence of the millisecond pulsar timing data: the responses of each variance of the system are calculated for each type of noise and drift, with the same missing samples as in the pulsar timing sequence. An estimation of precision, dynamics, and separability of this method is given.
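The variance tools named above are simple to compute once the data sit on a regular grid. Below is a minimal sketch of the non-overlapping Allan variance for equally spaced fractional-frequency data, using synthetic white frequency noise; irregularly spaced series such as pulsar timing residuals would first be interpolated onto the grid (e.g., with np.interp or a cubic spline), as the abstract describes. All names and values are illustrative.

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance of fractional-frequency data y
    for averaging factor m (averaging time tau = m * tau0)."""
    n = len(y) // m
    ybar = y[: n * m].reshape(n, m).mean(axis=1)   # block averages over m samples
    return 0.5 * np.mean(np.diff(ybar) ** 2)       # half mean squared successive difference

rng = np.random.default_rng(0)
y = rng.normal(0.0, 1e-9, 100_000)                 # synthetic white frequency noise
for m in (1, 10, 100, 1000):
    print(m, allan_variance(y, m))                 # scales as sigma^2 / m for white FM
```

The tau-dependence of the resulting curve is what separates noise types (white FM falls as 1/tau, flicker FM is flat), which is the basis of the multivariance separation the authors adapt to the pulsar timing sequence.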
WASP (Write a Scientific Paper) using Excel 9: Analysis of variance.
Grech, Victor
2018-06-01
Analysis of variance (ANOVA) may be required by researchers as an inferential statistical test when more than two means require comparison. This paper explains how to perform ANOVA in Microsoft Excel. Copyright © 2018 Elsevier B.V. All rights reserved.
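For readers working outside Excel, the same one-way test takes a few lines in Python; a minimal scipy sketch with made-up group values:

```python
from scipy import stats

# Three groups of measurements (illustrative numbers)
g1 = [23, 25, 21, 22, 24]
g2 = [28, 27, 30, 26, 29]
g3 = [22, 20, 24, 23, 21]

f_stat, p_value = stats.f_oneway(g1, g2, g3)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```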
Meta-analysis for explaining the variance in public transport demand elasticities in Europe
DOT National Transportation Integrated Search
1998-01-01
Results from past studies on transport demand elasticities show a large variance. This paper assesses key factors that influence the sensitivity of public transport users to transport costs in Europe, by carrying out a comparative analysis of the dif...
Saviane, Chiara; Silver, R Angus
2006-06-15
Synapses play a crucial role in information processing in the brain. Amplitude fluctuations of synaptic responses can be used to extract information about the mechanisms underlying synaptic transmission and its modulation. In particular, multiple-probability fluctuation analysis can be used to estimate the number of functional release sites, the mean probability of release and the amplitude of the mean quantal response from fits of the relationship between the variance and mean amplitude of postsynaptic responses, recorded at different probabilities. To determine these quantal parameters, calculate their uncertainties and the goodness-of-fit of the model, it is important to weight the contribution of each data point in the fitting procedure. We therefore investigated the errors associated with measuring the variance by determining the best estimators of the variance of the variance and have used simulations of synaptic transmission to test their accuracy and reliability under different experimental conditions. For central synapses, which generally have a low number of release sites, the amplitude distribution of synaptic responses is not normal, thus the use of a theoretical variance of the variance based on the normal assumption is not a good approximation. However, appropriate estimators can be derived for the population and for limited sample sizes using a more general expression that involves higher moments and introducing unbiased estimators based on the h-statistics. Our results are likely to be relevant for various applications of fluctuation analysis when few channels or release sites are present.
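The quantity at issue, the variance of the sample variance, has the general moment form Var(s²) = μ₄/n − σ⁴(n−3)/(n(n−1)). A minimal plug-in sketch follows; note the paper's point is precisely that plug-in and normal-theory versions can be poor for non-normal amplitude distributions, which its h-statistic-based unbiased estimators address. Data here are synthetic and illustrative only.

```python
import numpy as np

def var_of_sample_variance(x):
    """Plug-in estimate of Var(s^2) from the general moment formula
    Var(s^2) = mu4/n - sigma^4 (n - 3) / (n (n - 1)).
    Substituting sample moments is only asymptotically unbiased; the paper
    derives unbiased finite-sample versions via h-statistics."""
    x = np.asarray(x, float)
    n = len(x)
    mu4 = np.mean((x - x.mean()) ** 4)   # fourth central sample moment
    s2 = x.var(ddof=1)                   # unbiased sample variance
    return mu4 / n - s2 ** 2 * (n - 3) / (n * (n - 1))

rng = np.random.default_rng(1)
amps = rng.binomial(5, 0.3, size=200) * 1.0   # skewed, non-normal "amplitudes"
print(var_of_sample_variance(amps))
```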
Bayesian Factor Analysis When Only a Sample Covariance Matrix Is Available
ERIC Educational Resources Information Center
Hayashi, Kentaro; Arav, Marina
2006-01-01
In traditional factor analysis, the variance-covariance matrix or the correlation matrix has often been a form of inputting data. In contrast, in Bayesian factor analysis, the entire data set is typically required to compute the posterior estimates, such as Bayes factor loadings and Bayes unique variances. We propose a simple method for computing…
An empirical analysis of the distribution of overshoots in a stationary Gaussian stochastic process
NASA Technical Reports Server (NTRS)
Carter, M. C.; Madison, M. W.
1973-01-01
The frequency distribution of overshoots in a stationary Gaussian stochastic process is analyzed. The primary processes involved in this analysis are computer simulation and statistical estimation. Computer simulation is used to simulate stationary Gaussian stochastic processes that have selected autocorrelation functions. An analysis of the simulation results reveals a frequency distribution for overshoots with a functional dependence on the mean and variance of the process. Statistical estimation is then used to estimate the mean and variance of a process. It is shown that, given an autocorrelation function and estimates of the mean and variance of the process, a frequency distribution for overshoots can be estimated.
Impact of Damping Uncertainty on SEA Model Response Variance
NASA Technical Reports Server (NTRS)
Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand
2010-01-01
Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However, these techniques do not account for uncertainties in the system properties. In the present paper, uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.
NASA Technical Reports Server (NTRS)
Ploutz-Snyder, Robert
2011-01-01
This slide presentation is a series of educational presentations that are on the statistical function of analysis of variance (ANOVA). Analysis of Variance (ANOVA) examines variability between groups, relative to within groups, to determine whether there's evidence that the groups are not from the same population. One other presentation reviews hypothesis testing.
Testing Interaction Effects without Discarding Variance.
ERIC Educational Resources Information Center
Lopez, Kay A.
Analysis of variance (ANOVA) and multiple regression are two of the most commonly used methods of data analysis in behavioral science research. Although ANOVA was intended for use with experimental designs, educational researchers have used ANOVA extensively in aptitude-treatment interaction (ATI) research. This practice tends to make researchers…
Decomposing genomic variance using information from GWA, GWE and eQTL analysis.
Ehsani, A; Janss, L; Pomp, D; Sørensen, P
2016-04-01
A commonly used procedure in genome-wide association (GWA), genome-wide expression (GWE) and expression quantitative trait locus (eQTL) analyses is based on a bottom-up experimental approach that attempts to individually associate molecular variants with complex traits. Top-down modeling of the entire set of genomic data and partitioning of the overall variance into subcomponents may provide further insight into the genetic basis of complex traits. To test this approach, we performed a whole-genome variance components analysis and partitioned the genomic variance using information from GWA, GWE and eQTL analyses of growth-related traits in a mouse F2 population. We characterized the mouse trait genetic architecture by ordering single nucleotide polymorphisms (SNPs) based on their P-values and studying the areas under the curve (AUCs). The observed traits were found to have a genomic variance profile that differed significantly from that expected of a trait under an infinitesimal model. This situation was particularly true for both body weight and body fat, for which the AUCs were much higher compared with that of glucose. In addition, SNPs with a high degree of trait-specific regulatory potential (SNPs associated with subset of transcripts that significantly associated with a specific trait) explained a larger proportion of the genomic variance than did SNPs with high overall regulatory potential (SNPs associated with transcripts using traditional eQTL analysis). We introduced AUC measures of genomic variance profiles that can be used to quantify relative importance of SNPs as well as degree of deviation of a trait's inheritance from an infinitesimal model. The shape of the curve aids global understanding of traits: The steeper the left-hand side of the curve, the fewer the number of SNPs controlling most of the phenotypic variance. © 2015 Stichting International Foundation for Animal Genetics.
2014-01-01
Background Meta-regression is becoming increasingly used to model study level covariate effects. However this type of statistical analysis presents many difficulties and challenges. Here two methods for calculating confidence intervals for the magnitude of the residual between-study variance in random effects meta-regression models are developed. A further suggestion for calculating credible intervals using informative prior distributions for the residual between-study variance is presented. Methods Two recently proposed and, under the assumptions of the random effects model, exact methods for constructing confidence intervals for the between-study variance in random effects meta-analyses are extended to the meta-regression setting. The use of Generalised Cochran heterogeneity statistics is extended to the meta-regression setting and a Newton-Raphson procedure is developed to implement the Q profile method for meta-analysis and meta-regression. WinBUGS is used to implement informative priors for the residual between-study variance in the context of Bayesian meta-regressions. Results Results are obtained for two contrasting examples, where the first example involves a binary covariate and the second involves a continuous covariate. Intervals for the residual between-study variance are wide for both examples. Conclusions Statistical methods, and R computer software, are available to compute exact confidence intervals for the residual between-study variance under the random effects model for meta-regression. These frequentist methods are almost as easily implemented as their established counterparts for meta-analysis. Bayesian meta-regressions are also easily performed by analysts who are comfortable using WinBUGS. Estimates of the residual between-study variance in random effects meta-regressions should be routinely reported and accompanied by some measure of their uncertainty. Confidence and/or credible intervals are well-suited to this purpose. PMID:25196829
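For the plain meta-analysis case (no covariates), the Q-profile method inverts the generalized Q statistic against chi-square quantiles. A minimal sketch follows, using scipy's brentq root-finder in place of the paper's Newton-Raphson procedure; the data are illustrative.

```python
import numpy as np
from scipy import stats, optimize

def q_profile_ci(y, v, level=0.95):
    """Q-profile confidence interval for the between-study variance tau^2
    in a random-effects meta-analysis without covariates (the setting the
    paper extends to meta-regression). y: study effects, v: within-study
    variances."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    k = len(y)

    def Q(tau2):
        w = 1.0 / (v + tau2)                      # inverse-variance weights
        mu = np.sum(w * y) / np.sum(w)            # weighted mean at this tau2
        return np.sum(w * (y - mu) ** 2)          # generalized Q statistic

    alpha = 1 - level
    lo_target = stats.chi2.ppf(1 - alpha / 2, k - 1)  # Q decreases in tau2
    hi_target = stats.chi2.ppf(alpha / 2, k - 1)

    def solve(target):
        if Q(0.0) <= target:                      # interval truncated at zero
            return 0.0
        return optimize.brentq(lambda t: Q(t) - target, 0.0, 1e3)

    return solve(lo_target), solve(hi_target)

y = np.array([0.10, 0.35, -0.05, 0.52, 0.21])
v = np.array([0.02, 0.05, 0.03, 0.06, 0.04])
print(q_profile_ci(y, v))
```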
Tests of Mediation: Paradoxical Decline in Statistical Power as a Function of Mediator Collinearity
Beasley, T. Mark
2013-01-01
Increasing the correlation between the independent variable and the mediator (a coefficient) increases the effect size (ab) for mediation analysis; however, increasing a by definition increases collinearity in mediation models. As a result, the standard errors of product tests increase. The variance inflation due to increases in a at some point outweighs the increase of the effect size (ab) and results in a loss of statistical power. This phenomenon also occurs with nonparametric bootstrapping approaches because the variance of the bootstrap distribution of ab approximates the variance expected from normal theory. Both variances increase dramatically when a exceeds the b coefficient, thus explaining the power decline with increases in a. Implications for statistical analysis and applied researchers are discussed. PMID:24954952
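The mechanism is easy to reproduce by simulation: hold b fixed, grow a, and watch the average Sobel standard error sqrt(b²·SE_a² + a²·SE_b²) rise. The sketch below uses a stripped-down mediation model (no direct X-to-Y path), so it illustrates the direction of the effect rather than the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(2)
n, b = 200, 0.4

def ols_slope(x, z):
    """Simple-regression slope and its standard error."""
    xc = x - x.mean()
    slope = np.sum(xc * (z - z.mean())) / np.sum(xc ** 2)
    resid = z - z.mean() - slope * xc
    se = np.sqrt(np.sum(resid ** 2) / (len(x) - 2) / np.sum(xc ** 2))
    return slope, se

def mean_sobel_se(a, reps=500):
    """Average Sobel SE of the mediated effect a*b as the X->M path grows."""
    ses = []
    for _ in range(reps):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)          # mediator model
        y = b * m + rng.normal(size=n)          # outcome model (no direct path)
        a_hat, se_a = ols_slope(x, m)
        b_hat, se_b = ols_slope(m, y)
        ses.append(np.sqrt(b_hat ** 2 * se_a ** 2 + a_hat ** 2 * se_b ** 2))
    return np.mean(ses)

for a in (0.2, 0.5, 0.8, 1.2):
    print(a, round(mean_sobel_se(a), 4))        # SE grows with a
```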
Online Estimation of Allan Variance Coefficients Based on a Neural-Extended Kalman Filter
Miao, Zhiyong; Shen, Feng; Xu, Dingjie; He, Kunpeng; Tian, Chunmiao
2015-01-01
As a noise analysis method for inertial sensors, the traditional Allan variance method requires the storage of a large amount of data and manual analysis for an Allan variance graph. Although the existing online estimation methods avoid the storage of data and the painful procedure of drawing slope lines for estimation, they require complex transformations and even cause errors during the modeling of dynamic Allan variance. To solve these problems, a new nonlinear state-space model that directly models the stochastic errors of inertial sensors was first established. Then, a neural-extended Kalman filter algorithm was used to estimate the Allan variance coefficients. The real noises of an ADIS16405 IMU and fiber optic gyro-sensors were analyzed by the proposed method and traditional methods. The experimental results show that the proposed method is more suitable to estimate the Allan variance coefficients than the traditional methods. Moreover, the proposed method effectively avoids the storage of data and can be easily implemented using an online processor. PMID:25625903
iTemplate: A template-based eye movement data analysis approach.
Xiao, Naiqi G; Lee, Kang
2018-02-08
Current eye movement data analysis methods rely on defining areas of interest (AOIs). Because AOIs are created and modified manually, variances in their size, shape, and location are unavoidable. These variances affect not only the consistency of the AOI definitions, but also the validity of the eye movement analyses based on the AOIs. To reduce the variances in AOI creation and modification and achieve a procedure to process eye movement data with high precision and efficiency, we propose a template-based eye movement data analysis method. Using a linear transformation algorithm, this method registers the eye movement data from each individual stimulus to a template. Thus, users only need to create one set of AOIs for the template in order to analyze eye movement data, rather than creating a unique set of AOIs for all individual stimuli. This change greatly reduces the error caused by the variance from manually created AOIs and boosts the efficiency of the data analysis. Furthermore, this method can help researchers prepare eye movement data for some advanced analysis approaches, such as iMap. We have developed software (iTemplate) with a graphic user interface to make this analysis method available to researchers.
Formative Use of Intuitive Analysis of Variance
ERIC Educational Resources Information Center
Trumpower, David L.
2013-01-01
Students' informal inferential reasoning (IIR) is often inconsistent with the normative logic underlying formal statistical methods such as Analysis of Variance (ANOVA), even after instruction. In two experiments reported here, student's IIR was assessed using an intuitive ANOVA task at the beginning and end of a statistics course. In both…
A Primer on Multivariate Analysis of Variance (MANOVA) for Behavioral Scientists
ERIC Educational Resources Information Center
Warne, Russell T.
2014-01-01
Reviews of statistical procedures (e.g., Bangert & Baumberger, 2005; Kieffer, Reese, & Thompson, 2001; Warne, Lazo, Ramos, & Ritter, 2012) show that one of the most common multivariate statistical methods in psychological research is multivariate analysis of variance (MANOVA). However, MANOVA and its associated procedures are often not…
Intuitive Analysis of Variance-- A Formative Assessment Approach
ERIC Educational Resources Information Center
Trumpower, David
2013-01-01
This article describes an assessment activity that can show students how much they intuitively understand about statistics, but also alert them to common misunderstandings. How the activity can be used formatively to help improve students' conceptual understanding of analysis of variance is discussed. (Contains 1 figure and 1 table.)
ERIC Educational Resources Information Center
Braun, W. John
2012-01-01
The Analysis of Variance is often taught in introductory statistics courses, but it is not clear that students really understand the method. This is because the derivation of the test statistic and p-value requires a relatively sophisticated mathematical background which may not be well-remembered or understood. Thus, the essential concept behind…
Using the PLUM procedure of SPSS to fit unequal variance and generalized signal detection models.
DeCarlo, Lawrence T
2003-02-01
The recent addition of a procedure in SPSS for the analysis of ordinal regression models offers a simple means for researchers to fit the unequal variance normal signal detection model and other extended signal detection models. The present article shows how to implement the analysis and how to interpret the SPSS output. Examples of fitting the unequal variance normal model and other generalized signal detection models are given. The approach offers a convenient means for applying signal detection theory to a variety of research.
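As background, the basic equal-variance signal detection quantities are one-liners; the unequal-variance model of the article additionally estimates the signal-distribution standard deviation from rating data via ordinal (probit) regression, which the simple yes/no computation below does not attempt. The counts are made up.

```python
from scipy.stats import norm

# Yes/no detection data (illustrative counts)
hits, misses = 75, 25          # signal trials
fas, crs = 30, 70              # noise trials

H = hits / (hits + misses)     # hit rate
F = fas / (fas + crs)          # false-alarm rate

d_prime = norm.ppf(H) - norm.ppf(F)              # equal-variance sensitivity
criterion = -0.5 * (norm.ppf(H) + norm.ppf(F))   # response bias
print(f"d' = {d_prime:.3f}, c = {criterion:.3f}")
```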
Wildhaber, Mark L.; Albers, Janice; Green, Nicholas; Moran, Edward H.
2017-01-01
We develop a fully-stochasticized, age-structured population model suitable for population viability analysis (PVA) of fish and demonstrate its use with the endangered pallid sturgeon (Scaphirhynchus albus) of the Lower Missouri River as an example. The model incorporates three levels of variance: parameter variance (uncertainty about the value of a parameter itself) applied at the iteration level, temporal variance (uncertainty caused by random environmental fluctuations over time) applied at the time-step level, and implicit individual variance (uncertainty caused by differences between individuals) applied within the time-step level. We found that population dynamics were most sensitive to survival rates, particularly age-2+ survival, and to fecundity-at-length. The inclusion of variance (unpartitioned or partitioned), stocking, or both generally decreased the influence of individual parameters on population growth rate. The partitioning of variance into parameter and temporal components had a strong influence on the importance of individual parameters, uncertainty of model predictions, and quasiextinction risk (i.e., pallid sturgeon population size falling below 50 age-1+ individuals). Our findings show that appropriately applying variance in PVA is important when evaluating the relative importance of parameters, and reinforce the need for better and more precise estimates of crucial life-history parameters for pallid sturgeon.
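A minimal sketch of the paper's variance-partitioning idea: draw parameter variance once per replicate, draw temporal variance each year, and track quasi-extinction. All demographic rates below are hypothetical placeholders, not pallid sturgeon estimates.

```python
import numpy as np

rng = np.random.default_rng(3)

def quasi_extinction_risk(n0, years=50, reps=1000, threshold=50):
    """Stochastic age-structured projection with two variance levels:
    parameter variance (one draw per replicate) and temporal variance
    (one draw per year). Rates are hypothetical."""
    ext = 0
    for _ in range(reps):
        s_mean = rng.normal(0.90, 0.02)          # parameter variance: adult survival
        fec = rng.normal(2.0, 0.2)               # parameter variance: fecundity
        n = np.array(n0, float)                  # stages: age 0, age 1, age 2+
        for _ in range(years):
            s = np.clip(rng.normal(s_mean, 0.05), 0, 1)   # temporal variance
            L = np.array([[0.0,  0.0, fec],      # recruits from adults
                          [0.05, 0.0, 0.0],      # low age-0 survival
                          [0.0,  s,   s]])       # adults persist in 2+ class
            n = L @ n
            if n[1:].sum() < threshold:          # age-1+ below quasi-extinction level
                ext += 1
                break
    return ext / reps

print("quasi-extinction risk:", quasi_extinction_risk([5000, 500, 400]))
```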
NASA Astrophysics Data System (ADS)
Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.
2016-12-01
Sensitivity analysis has been an important tool in groundwater modeling to identify the influential parameters. Among various sensitivity analysis methods, the variance-based global sensitivity analysis has gained popularity for its model independence characteristic and capability of providing accurate sensitivity measurements. However, the conventional variance-based method only considers uncertainty contribution of single model parameters. In this research, we extended the variance-based method to consider more uncertainty sources and developed a new framework to allow flexible combinations of different uncertainty components. We decompose the uncertainty sources into a hierarchical three-layer structure: scenario, model and parametric. Furthermore, each layer of uncertainty source is capable of containing multiple components. An uncertainty and sensitivity analysis framework was then constructed following this three-layer structure using Bayesian network. Different uncertainty components are represented as uncertain nodes in this network. Through the framework, variance-based sensitivity analysis can be implemented with great flexibility of using different grouping strategies for uncertainty components. The variance-based sensitivity analysis thus is improved to be able to investigate the importance of an extended range of uncertainty sources: scenario, model, and other different combinations of uncertainty components which can represent certain key model system processes (e.g., groundwater recharge process, flow reactive transport process). For test and demonstration purposes, the developed methodology was implemented into a test case of real-world groundwater reactive transport modeling with various uncertainty sources. The results demonstrate that the new sensitivity analysis method is able to estimate accurate importance measurements for any uncertainty sources which were formed by different combinations of uncertainty components. The new methodology can provide useful information for environmental management and decision-makers to formulate policies and strategies.
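For orientation, the conventional variance-based method that this framework extends estimates first-order indices S_i = V_i/V by Monte Carlo. A minimal Saltelli/Jansen-style sketch on a toy function follows; it does not reproduce the Bayesian-network grouping described above.

```python
import numpy as np

def first_order_sobol(f, d, n=10_000, rng=None):
    """First-order Sobol' indices by the paired-matrix Monte Carlo scheme:
    V_i is estimated from f(B) * (f(A with column i from B) - f(A))."""
    rng = rng or np.random.default_rng(4)
    A = rng.uniform(size=(n, d))
    B = rng.uniform(size=(n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))       # total output variance
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                      # swap in column i from B
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

def model(x):
    """Ishigami-like toy model with strong and weak inputs."""
    x = np.pi * (2 * x - 1)                      # map [0,1] to [-pi, pi]
    return np.sin(x[:, 0]) + 7 * np.sin(x[:, 1]) ** 2 \
        + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

print(first_order_sobol(model, d=3))
```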
Data Analysis and Its Impact on Predicting Schedule & Cost Risk
2006-03-01
…variance of the error term by performing a Breusch-Pagan test for constant variance (Neter et al., 1996:239). … Using Microsoft Excel®, we calculate a p-value of 0.225678 for the Breusch-Pagan test. We compare this p-value to an alpha of 0.05, indicating our assumption of constant variance holds. … We calculate a p-value of 0.121211092 for the Breusch-Pagan test and again compare this p-value to an alpha of 0.05, indicating our assumption of constant variance holds.
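For reference, the Breusch-Pagan test that the report runs in Excel is a one-call diagnostic in statsmodels; a minimal sketch with synthetic homoscedastic data:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 100)
y = 2 + 0.5 * x + rng.normal(0, 1, 100)     # constant-variance errors

X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(fit.resid, X)
print(f"Breusch-Pagan LM p-value: {lm_pvalue:.4f}")  # large p => constant variance
```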
[A self administered survey to assess bullying in schools].
Lecannelier, Felipe; Varela, Jorge; Rodríguez, Jorge; Hoffmann, Marianela; Flores, Fernanda; Ascanio, Lorena
2011-04-01
Bullying is common in schools and has negative consequences. It can be assessed using a self-reported instrument. To validate a Spanish self-reporting tool called "Survey of High School Bullying Abuse of Power" (MIAP). The instrument has 13 questions, of which 7 are multiple choice, rendering a total of 49 items. It was applied to 2,341 children of seventh and eighth grade attending private, subsidized and municipal schools in the city of Concepción, Chile. Expert judgment analysis and reliability estimation using Cronbach's alpha were used to validate the survey. The instrument obtained a Cronbach's alpha coefficient of 0.8892, classified as good. This analysis generated four scales that explained 30.9% of the variance. They were called "Witness Bullying" with 18 items, accounting for 11.4% of the variance, "Bullying Victim" with 12 items, accounting for 7.5% of the variance, "Bullying Perpetrator and Severe Bullying Victim", with 10 items explaining 6.4% of the variance and "Aggressor Bullying" with 6 items accounting for 5.7% of the variance. The MIAP can recognize four basic factors that facilitate the analysis and understanding of bullying, with good levels of reliability and validity. The remaining questions also deliver valuable information.
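Cronbach's alpha, the reliability coefficient reported above, has a short closed form: alpha = k/(k−1) · (1 − sum of item variances / variance of the total score). A minimal sketch with synthetic item scores (not the MIAP data):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    scores = np.asarray(scores, float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)      # variance of total score
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(6)
latent = rng.normal(size=(200, 1))                   # one underlying factor
items = latent + rng.normal(0, 0.8, size=(200, 12))  # 12 correlated items
print(cronbach_alpha(items))
```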
[Analysis of variance of repeated data measured by water maze with SPSS].
Qiu, Hong; Jin, Guo-qin; Jin, Ru-feng; Zhao, Wei-kang
2007-01-01
To introduce the method of analyzing repeated data measured by water maze with SPSS 11.0, and to offer a reference statistical method to clinical and basic medicine researchers who use repeated-measures designs. The repeated-measures and multivariate analysis of variance (ANOVA) procedures of the general linear model in SPSS were used, with pairwise comparisons among groups and among measurement times. Firstly, Mauchly's test of sphericity should be used to judge whether there were relations among the repeatedly measured data. If any (P
Methods to Estimate the Between-Study Variance and Its Uncertainty in Meta-Analysis
ERIC Educational Resources Information Center
Veroniki, Areti Angeliki; Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian P. T.; Langan, Dean; Salanti, Georgia
2016-01-01
Meta-analyses are typically used to estimate the overall/mean of an outcome of interest. However, inference about between-study variability, which is typically modelled using a between-study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between-study variance,…
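The DerSimonian and Laird moment estimator mentioned above is short enough to state in full: tau² = max(0, (Q − (k−1))/c), with Cochran's Q, c = Σw − Σw²/Σw, and fixed-effect weights w = 1/v. A minimal sketch with illustrative data:

```python
import numpy as np

def dersimonian_laird(y, v):
    """DerSimonian-Laird moment estimator of the between-study variance."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                                   # fixed-effect weights
    mu_fe = np.sum(w * y) / np.sum(w)             # fixed-effect pooled mean
    Q = np.sum(w * (y - mu_fe) ** 2)              # Cochran's Q
    k = len(y)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (Q - (k - 1)) / c)

y = [0.10, 0.35, -0.05, 0.52, 0.21]
v = [0.02, 0.05, 0.03, 0.06, 0.04]
print(dersimonian_laird(y, v))
```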
Tang, Yongqiang
2017-12-01
Control-based pattern mixture models (PMM) and delta-adjusted PMMs are commonly used as sensitivity analyses in clinical trials with non-ignorable dropout. These PMMs assume that the statistical behavior of outcomes varies by pattern in the experimental arm in the imputation procedure, but the imputed data are typically analyzed by a standard method such as the primary analysis model. In the multiple imputation (MI) inference, Rubin's variance estimator is generally biased when the imputation and analysis models are uncongenial. One objective of the article is to quantify the bias of Rubin's variance estimator in the control-based and delta-adjusted PMMs for longitudinal continuous outcomes. These PMMs assume the same observed data distribution as the mixed effects model for repeated measures (MMRM). We derive analytic expressions for the MI treatment effect estimator and the associated Rubin's variance in these PMMs and MMRM as functions of the maximum likelihood estimator from the MMRM analysis and the observed proportion of subjects in each dropout pattern when the number of imputations is infinite. The asymptotic bias is generally small or negligible in the delta-adjusted PMM, but can be sizable in the control-based PMM. This indicates that the inference based on Rubin's rule is approximately valid in the delta-adjusted PMM. A simple variance estimator is proposed to ensure asymptotically valid MI inferences in these PMMs, and compared with the bootstrap variance. The proposed method is illustrated by the analysis of an antidepressant trial, and its performance is further evaluated via a simulation study. © 2017, The International Biometric Society.
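The Rubin's-rule variance under scrutiny combines within- and between-imputation components as T = W + (1 + 1/m)B; the paper's point is that this T can be biased when the control-based imputation and analysis models are uncongenial. A minimal sketch of the standard rule (numbers illustrative):

```python
import numpy as np

def rubin_pool(estimates, variances):
    """Pool m multiply-imputed estimates by Rubin's rules:
    T = W + (1 + 1/m) B, with W the mean within-imputation variance
    and B the between-imputation variance of the point estimates."""
    est = np.asarray(estimates, float)
    var = np.asarray(variances, float)
    m = len(est)
    qbar = est.mean()                  # pooled point estimate
    W = var.mean()                     # within-imputation variance
    B = est.var(ddof=1)                # between-imputation variance
    return qbar, W + (1 + 1 / m) * B

theta, T = rubin_pool([1.2, 1.4, 1.1, 1.3, 1.5],
                      [0.20, 0.22, 0.19, 0.21, 0.20])
print(theta, T)
```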
ERIC Educational Resources Information Center
Krus, David J.; Krus, Patricia H.
1978-01-01
The conceptual differences between coded regression analysis and traditional analysis of variance are discussed. Also, a modification of several SPSS routines is proposed which allows for direct interpretation of ANOVA and ANCOVA results in a form stressing the strength and significance of scrutinized relationships. (Author)
Variance analysis of forecasted streamflow maxima in a wet temperate climate
NASA Astrophysics Data System (ADS)
Al Aamery, Nabil; Fox, James F.; Snyder, Mark; Chandramouli, Chandra V.
2018-05-01
Coupling global climate models, hydrologic models and extreme value analysis provides a method to forecast streamflow maxima; however, the elusive variance structure of the results hinders confidence in application. Directly correcting the bias of forecasts using the relative change between forecast and control simulations has been shown to marginalize hydrologic uncertainty, reduce model bias, and remove systematic variance when predicting mean monthly and mean annual streamflow, prompting our investigation of streamflow maxima. We assess the variance structure of streamflow maxima using realizations of emission scenario, global climate model type and project phase, downscaling methods, bias correction, extreme value methods, and hydrologic model inputs and parameterization. Results show that the relative change of streamflow maxima was not dependent on systematic variance from the annual maxima versus peak over threshold method applied, although we stress that researchers must strictly adhere to rules from extreme value theory when applying the peak over threshold method. Regardless of which method is applied, extreme value model fitting does add variance to the projection, and the variance is an increasing function of the return period. Unlike the relative change of mean streamflow, results show that the variance of the maxima's relative change was dependent on all climate model factors tested as well as hydrologic model inputs and calibration. Ensemble projections forecast an increase of streamflow maxima for 2050 with pronounced forecast standard error, including an increase of +30(±21), +38(±34) and +51(±85)% for 2-, 20- and 100-year streamflow events for the wet temperate region studied. The variance of maxima projections was dominated by climate model factors and extreme value analyses.
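The extreme value step can be sketched compactly for the annual-maxima route: fit a generalized extreme value (GEV) distribution and read off T-year return levels as the (1 − 1/T) quantiles. The example below uses synthetic maxima, not the study's streamflow forecasts.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Synthetic annual maximum flows (m^3/s), 60 years
annual_maxima = stats.gumbel_r.rvs(loc=300, scale=80, size=60, random_state=rng)

# Fit a GEV to the annual maxima (the "annual maxima" route in the abstract)
shape, loc, scale = stats.genextreme.fit(annual_maxima)

# T-year return level: the (1 - 1/T) quantile of the fitted GEV
for T in (2, 20, 100):
    print(T, stats.genextreme.ppf(1 - 1 / T, shape, loc, scale))
```

Refitting across an ensemble of forecast realizations would reproduce the kind of return-period-dependent variance the study reports.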
Estimating an Effect Size in One-Way Multivariate Analysis of Variance (MANOVA)
ERIC Educational Resources Information Center
Steyn, H. S., Jr.; Ellis, S. M.
2009-01-01
When two or more univariate population means are compared, the proportion of variation in the dependent variable accounted for by population group membership is eta-squared. This effect size can be generalized by using multivariate measures of association, based on the multivariate analysis of variance (MANOVA) statistics, to establish whether…
A Demonstration of the Analysis of Variance Using Physical Movement and Space
ERIC Educational Resources Information Center
Owen, William J.; Siakaluk, Paul D.
2011-01-01
Classroom demonstrations help students better understand challenging concepts. This article introduces an activity that demonstrates the basic concepts involved in analysis of variance (ANOVA). Students who physically participated in the activity had a better understanding of ANOVA concepts (i.e., higher scores on an exam question answered 2…
NASA Astrophysics Data System (ADS)
Yun, Wanying; Lu, Zhenzhou; Jiang, Xian
2018-06-01
To efficiently execute the variance-based global sensitivity analysis, the law of total variance over successive non-overlapping intervals is first proved, on which an efficient space-partition sampling-based approach is subsequently built in this paper. Through partitioning the sample points of output into different subsets according to different inputs, the proposed approach can efficiently evaluate all the main effects concurrently from one group of sample points. In addition, there is no need to optimize the partition scheme in the proposed approach. The maximum length of subintervals is decreased by increasing the number of sample points of model input variables, which guarantees the convergence condition of the space-partition approach well. Furthermore, a new interpretation of the idea of partitioning is given from the perspective of the variance ratio function. Finally, three test examples and one engineering application are employed to demonstrate the accuracy, efficiency and robustness of the proposed approach.
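The space-partition idea can be illustrated with a simple binned estimator of the main effects, S_i ≈ Var(E[y | bin of x_i]) / Var(y), computed from a single sample. This is a sketch of the general idea, not the authors' exact estimator.

```python
import numpy as np

def main_effects_by_partition(X, y, n_bins=50):
    """First-order sensitivity indices from one sample: partition each
    input's range into quantile bins and take the variance of the
    conditional means of y across bins, relative to the total variance."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    total_var = y.var()
    S = []
    for i in range(X.shape[1]):
        edges = np.quantile(X[:, i], np.linspace(0, 1, n_bins + 1))
        idx = np.clip(np.searchsorted(edges, X[:, i], side="right") - 1,
                      0, n_bins - 1)
        occupied = [b for b in range(n_bins) if np.any(idx == b)]
        means = np.array([y[idx == b].mean() for b in occupied])
        sizes = np.array([np.sum(idx == b) for b in occupied])
        # Weighted variance of the conditional means around the grand mean
        S.append(np.sum(sizes * (means - y.mean()) ** 2) / len(y) / total_var)
    return np.array(S)

rng = np.random.default_rng(8)
X = rng.uniform(size=(20_000, 3))
y = 4 * X[:, 0] + X[:, 1] + rng.normal(0, 0.1, 20_000)
print(main_effects_by_partition(X, y))   # input 0 should dominate
```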
Mcalister, Courtney; Schmitter-Edgecombe, Maureen; Lamb, Richard
2016-01-01
The objective of this meta-analysis was to improve understanding of the heterogeneity in the relationship between cognition and functional status in individuals with mild cognitive impairment (MCI). Demographic, clinical, and methodological moderators were examined. Cognition explained an average of 23% of the variance in functional outcomes. Executive function measures explained the largest amount of variance (37%), whereas global cognitive status and processing speed measures explained the least (20%). Short- and long-delayed memory measures accounted for more variance (35% and 31%) than immediate memory measures (18%), and the relationship between cognition and functional outcomes was stronger when assessed with informant-report (28%) compared with self-report (21%). Demographics, sample characteristics, and type of everyday functioning measures (i.e., questionnaire, performance-based) explained relatively little variance compared with cognition. Executive functioning, particularly measured by Trails B, was a strong predictor of everyday functioning in individuals with MCI. A large proportion of variance remained unexplained by cognition. PMID:26743326
NASA Astrophysics Data System (ADS)
Kitterød, Nils-Otto
2017-08-01
Unconsolidated sediment cover thickness (D) above bedrock was estimated by using a publicly available well database from Norway, GRANADA. General challenges associated with such databases typically involve clustering and bias. However, if information about the horizontal distance to the nearest bedrock outcrop (L) is included, does the spatial estimation of D improve? This idea was tested by comparing two cross-validation results: ordinary kriging (OK) where L was disregarded; and co-kriging (CK) where cross-covariance between D and L was included. The analysis showed only minor differences between OK and CK with respect to differences between estimation and true values. However, the CK results gave in general less estimation variance compared to the OK results. All observations were declustered and transformed to standard normal probability density functions before estimation and back-transformed for the cross-validation analysis. The semivariogram analysis gave correlation lengths for D and L of approx. 10 and 6 km. These correlations reduce the estimation variance in the cross-validation analysis because more than 50 % of the data material had two or more observations within a radius of 5 km. The small-scale variance of D, however, was about 50 % of the total variance, which gave an accuracy of less than 60 % for most of the cross-validation cases. Despite the noisy character of the observations, the analysis demonstrated that L can be used as secondary information to reduce the estimation variance of D.
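The structural-analysis step behind both OK and CK is the empirical semivariogram, gamma(h) = ½ · mean[(z_i − z_j)²] over point pairs at separation h. A minimal sketch with synthetic well coordinates and thicknesses (not the GRANADA data):

```python
import numpy as np

def empirical_semivariogram(coords, z, lags):
    """Empirical semivariogram: half the mean squared difference of z over
    point pairs whose separation distance falls in each lag bin."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    iu = np.triu_indices(len(z), k=1)             # each pair once
    dist = d[iu]
    sq = 0.5 * (z[:, None] - z[None, :])[iu] ** 2
    return np.array([sq[(dist >= lo) & (dist < hi)].mean()
                     for lo, hi in zip(lags[:-1], lags[1:])])

rng = np.random.default_rng(9)
coords = rng.uniform(0, 10_000, size=(300, 2))    # synthetic well locations (m)
depth = rng.gamma(2.0, 5.0, size=300)             # synthetic sediment thickness D
print(empirical_semivariogram(coords, depth, lags=np.linspace(0, 5000, 6)))
```

Fitting a model to this curve supplies the correlation lengths (about 10 and 6 km for D and L in the study) that drive the kriging weights.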
Sampling in freshwater environments: suspended particle traps and variability in the final data.
Barbizzi, Sabrina; Pati, Alessandra
2008-11-01
This paper reports one practical method to estimate the measurement uncertainty including sampling, derived from the approach implemented by Ramsey for soil investigations. The methodology has been applied to estimate the measurement uncertainty (sampling and analysis) of (137)Cs activity concentration (Bq kg(-1)) and total carbon content (%) in suspended particle sampling in a freshwater ecosystem. Uncertainty estimates for the between-location, sampling, and analysis components have been evaluated. For the considered measurands, the relative expanded measurement uncertainties are 12.3% for (137)Cs and 4.5% for total carbon. For (137)Cs, the measurement (sampling+analysis) variance gives the major contribution to the total variance, while for total carbon the spatial variance is the dominant contributor to the total variance. The limitations and advantages of this basic method are discussed.
NASA Astrophysics Data System (ADS)
Beiden, Sergey V.; Wagner, Robert F.; Campbell, Gregory; Metz, Charles E.; Chan, Heang-Ping; Nishikawa, Robert M.; Schnall, Mitchell D.; Jiang, Yulei
2001-06-01
In recent years, the multiple-reader, multiple-case (MRMC) study paradigm has become widespread for receiver operating characteristic (ROC) assessment of systems for diagnostic imaging and computer-aided diagnosis. We review how MRMC data can be analyzed in terms of the multiple components of the variance (case, reader, interactions) observed in those studies. Such information is useful for the design of pivotal studies from results of a pilot study and also for studying the effects of reader training. Recently, several of the present authors have demonstrated methods to generalize the analysis of multiple variance components to the case where unaided readers of diagnostic images are compared with readers who receive the benefit of a computer assist (CAD). For this case it is necessary to model the possibility that several of the components of variance might be reduced when readers incorporate the computer assist, compared to the unaided reading condition. We review results of this kind of analysis on three previously published MRMC studies, two of which were applications of CAD to diagnostic mammography and one was an application of CAD to screening mammography. The results for the three cases are seen to differ, depending on the reader population sampled and the task of interest. Thus, it is not possible to generalize a particular analysis of variance components beyond the tasks and populations actually investigated.
Minor, K S; Willits, J A; Marggraf, M P; Jones, M N; Lysaker, P H
2018-04-25
Conveying information cohesively is an essential element of communication that is disrupted in schizophrenia. These disruptions are typically expressed through disorganized symptoms, which have been linked to neurocognitive, social cognitive, and metacognitive deficits. Automated analysis can objectively assess disorganization within sentences, between sentences, and across paragraphs by comparing explicit communication to a large text corpus. Little work in schizophrenia has tested: (1) links between disorganized symptoms measured via automated analysis and neurocognition, social cognition, or metacognition; and (2) if automated analysis explains incremental variance in cognitive processes beyond clinician-rated scales. Disorganization was measured in schizophrenia (n = 81) with Coh-Metrix 3.0, an automated program that calculates basic and complex language indices. Trained staff also assessed neurocognition, social cognition, metacognition, and clinician-rated disorganization. Findings showed that all three cognitive processes were significantly associated with at least one automated index of disorganization. When automated analysis was compared with a clinician-rated scale, it accounted for significant variance in neurocognition and metacognition beyond the clinician-rated measure. When combined, these two methods explained 28-31% of the variance in neurocognition, social cognition, and metacognition. This study illustrated how automated analysis can highlight the specific role of disorganization in neurocognition, social cognition, and metacognition. Generally, those with poor cognition also displayed more disorganization in their speech-making it difficult for listeners to process essential information needed to tie the speaker's ideas together. Our findings showcase how implementing a mixed-methods approach in schizophrenia can explain substantial variance in cognitive processes.
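Incremental variance of the kind reported here (automated indices adding to a clinician-rated scale) is typically quantified as the change in R² between nested regression models. A minimal sketch with synthetic variables standing in for the measures:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
n = 81
clinician = rng.normal(size=n)                      # clinician-rated disorganization
automated = 0.4 * clinician + rng.normal(size=n)    # correlated automated index
cognition = 0.3 * clinician + 0.3 * automated + rng.normal(size=n)

base = sm.OLS(cognition, sm.add_constant(clinician)).fit()
full = sm.OLS(cognition,
              sm.add_constant(np.column_stack([clinician, automated]))).fit()
print(f"R2, clinician-rated only:            {base.rsquared:.3f}")
print(f"Incremental R2 from automated index: {full.rsquared - base.rsquared:.3f}")
```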
An improved method for bivariate meta-analysis when within-study correlations are unknown.
Hong, Chuan; D Riley, Richard; Chen, Yong
2018-03-01
Multivariate meta-analysis, which jointly analyzes multiple and possibly correlated outcomes in a single analysis, is becoming increasingly popular in recent years. An attractive feature of the multivariate meta-analysis is its ability to account for the dependence between multiple estimates from the same study. However, standard inference procedures for multivariate meta-analysis require the knowledge of within-study correlations, which are usually unavailable. This limits standard inference approaches in practice. Riley et al proposed a working model and an overall synthesis correlation parameter to account for the marginal correlation between outcomes, where the only data needed are those required for a separate univariate random-effects meta-analysis. As within-study correlations are not required, the Riley method is applicable to a wide variety of evidence synthesis situations. However, the standard variance estimator of the Riley method is not entirely correct under many important settings. As a consequence, the coverage of a function of pooled estimates may not reach the nominal level even when the number of studies in the multivariate meta-analysis is large. In this paper, we improve the Riley method by proposing a robust variance estimator, which is asymptotically correct even when the model is misspecified (ie, when the likelihood function is incorrect). Simulation studies of a bivariate meta-analysis, in a variety of settings, show a function of pooled estimates has improved performance when using the proposed robust variance estimator. In terms of individual pooled estimates themselves, the standard variance estimator and robust variance estimator give similar results to the original method, with appropriate coverage. The proposed robust variance estimator performs well when the number of studies is relatively large. Therefore, we recommend the use of the robust method for meta-analyses with a relatively large number of studies (eg, m≥50). When the sample size is relatively small, we recommend the use of the robust method under the working independence assumption. We illustrate the proposed method through 2 meta-analyses. Copyright © 2017 John Wiley & Sons, Ltd.
A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.
Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio
2017-11-01
Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shrivastava, Manish; Zhao, Chun; Easter, Richard C.
We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to 7 selected tunable model parameters: 4 involving emissions of anthropogenic and biogenic volatile organic compounds, anthropogenic semi-volatile and intermediate volatility organics (SIVOCs), and NOx, 2 involving dry deposition of SOA precursor gases, and one involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250 member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the tunable parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether particle-phase transformation of SOA from semi-volatile SOA to non-volatile is on or off, is the dominant contributor to variance of simulated surface-level daytime SOA (65% domain average contribution). We also split the simulations into 2 subsets of 125 each, depending on whether the volatility transformation is turned on/off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to non-volatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to dominance of intermediate to high NOx conditions throughout the simulated domain. The two parameters related to dry deposition of SOA precursor gases also have very low contributions to SOA variance. This study highlights the large sensitivity of SOA loadings to the particle-phase transformation of SOA volatility, which is neglected in most previous models.
ERIC Educational Resources Information Center
Kim, Soyoung; Olejnik, Stephen
2005-01-01
The sampling distributions of five popular measures of association with and without two bias adjusting methods were examined for the single factor fixed-effects multivariate analysis of variance model. The number of groups, sample sizes, number of outcomes, and the strength of association were manipulated. The results indicate that all five…
Missing Data and Multiple Imputation in the Context of Multivariate Analysis of Variance
ERIC Educational Resources Information Center
Finch, W. Holmes
2016-01-01
Multivariate analysis of variance (MANOVA) is widely used in educational research to compare means on multiple dependent variables across groups. Researchers faced with the problem of missing data often use multiple imputation of values in place of the missing observations. This study compares the performance of 2 methods for combining p values in…
Benefits of Using Planned Comparisons Rather Than Post Hoc Tests: A Brief Review with Examples.
ERIC Educational Resources Information Center
DuRapau, Theresa M.
The rationale behind analysis of variance (including analysis of covariance and multiple analyses of variance and covariance) methods is reviewed, and unplanned and planned methods of evaluating differences between means are briefly described. Two advantages of using planned or a priori tests over unplanned or post hoc tests are presented. In…
Sample Size Calculations for Precise Interval Estimation of the Eta-Squared Effect Size
ERIC Educational Resources Information Center
Shieh, Gwowen
2015-01-01
Analysis of variance is one of the most frequently used statistical analyses in the behavioral, educational, and social sciences, and special attention has been paid to the selection and use of an appropriate effect size measure of association in analysis of variance. This article presents the sample size procedures for precise interval estimation…
ERIC Educational Resources Information Center
Thompson, Bruce
The relationship between analysis of variance (ANOVA) methods and their analogs (analysis of covariance and multiple analyses of variance and covariance--collectively referred to as OVA methods) and the more general analytic case is explored. A small heuristic data set is used, with a hypothetical sample of 20 subjects, randomly assigned to five…
Analysis of Variance: Variably Complex
ERIC Educational Resources Information Center
Drummond, Gordon B.; Vowler, Sarah L.
2012-01-01
These authors have previously described how to use the "t" test to compare two groups. In this article, they describe the use of a different test, analysis of variance (ANOVA) to compare more than two groups. ANOVA is a test of group differences: do at least two of the means differ from each other? ANOVA assumes (1) normal distribution…
Teaching Principles of One-Way Analysis of Variance Using M&M's Candy
ERIC Educational Resources Information Center
Schwartz, Todd A.
2013-01-01
I present an active learning classroom exercise illustrating essential principles of one-way analysis of variance (ANOVA) methods. The exercise is easily conducted by the instructor and is instructive (as well as enjoyable) for the students. This is conducive for demonstrating many theoretical and practical issues related to ANOVA and lends itself…
Comparing the Effectiveness of SPSS and EduG Using Different Designs for Generalizability Theory
ERIC Educational Resources Information Center
Teker, Gulsen Tasdelen; Guler, Nese; Uyanik, Gulden Kaya
2015-01-01
Generalizability theory (G theory) provides a broad conceptual framework for social sciences such as psychology and education, and a comprehensive construct for numerous measurement events by using analysis of variance, a strong statistical method. G theory, as an extension of both classical test theory and analysis of variance, is a model which…
ERIC Educational Resources Information Center
Luh, Wei-Ming; Guo, Jiin-Huarng
2005-01-01
To deal with nonnormal and heterogeneous data for the one-way fixed effect analysis of variance model, the authors adopted a trimmed means method in conjunction with Hall's invertible transformation into a heteroscedastic test statistic (Alexander-Govern test or Welch test). The results of simulation experiments showed that the proposed technique…
Analysis of a genetically structured variance heterogeneity model using the Box-Cox transformation.
Yang, Ye; Christensen, Ole F; Sorensen, Daniel
2011-02-01
Over recent years, statistical support for the presence of genetic factors operating at the level of the environmental variance has come from fitting a genetically structured heterogeneous variance model to field or experimental data in various species. Misleading results may arise due to skewness of the marginal distribution of the data. To investigate how the scale of measurement affects inferences, the genetically structured heterogeneous variance model is extended to accommodate the family of Box-Cox transformations. Litter size data in rabbits and pigs that had previously been analysed in the untransformed scale were reanalysed in a scale equal to the mode of the marginal posterior distribution of the Box-Cox parameter. In the rabbit data, the statistical evidence for a genetic component at the level of the environmental variance is considerably weaker than that resulting from an analysis in the original metric. In the pig data, the statistical evidence is stronger, but the coefficient of correlation between additive genetic effects affecting mean and variance changes sign, compared to the results in the untransformed scale. The study confirms that inferences on variances can be strongly affected by the presence of asymmetry in the distribution of data. We recommend that to avoid one important source of spurious inferences, future work seeking support for a genetic component acting on environmental variation using a parametric approach based on normality assumptions confirms that these are met.
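The transformation machinery itself is readily available: scipy's boxcox returns the maximum-likelihood lambda, a rough frequentist analogue of the posterior mode of the Box-Cox parameter used in the paper. A sketch with synthetic skewed data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
# Skewed, strictly positive stand-in for litter-size-like data
litter = rng.gamma(shape=3.0, scale=3.0, size=500) + 1

# Maximum-likelihood Box-Cox parameter and transformed data
transformed, lam = stats.boxcox(litter)
print(f"lambda = {lam:.3f}")
print(f"skew before = {stats.skew(litter):.3f}, after = {stats.skew(transformed):.3f}")
```

Re-running a variance-heterogeneity analysis on the transformed scale, as the paper does, is what reveals how much of the apparent genetic variance-control signal is an artifact of skewness.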
Feasibility of histogram analysis of susceptibility-weighted MRI for staging of liver fibrosis
Yang, Zhao-Xia; Liang, He-Yue; Hu, Xin-Xing; Huang, Ya-Qin; Ding, Ying; Yang, Shan; Zeng, Meng-Su; Rao, Sheng-Xiang
2016-01-01
PURPOSE We aimed to evaluate whether histogram analysis of susceptibility-weighted imaging (SWI) could quantify liver fibrosis grade in patients with chronic liver disease (CLD). METHODS Fifty-three patients with CLD who underwent multi-echo SWI (TEs of 2.5, 5, and 10 ms) were included. Histogram analysis of SWI images were performed and mean, variance, skewness, kurtosis, and the 1st, 10th, 50th, 90th, and 99th percentiles were derived. Quantitative histogram parameters were compared. For significant parameters, further receiver operating characteristic (ROC) analyses were performed to evaluate the potential diagnostic performance for differentiating liver fibrosis stages. RESULTS The number of patients in each pathologic fibrosis grade was 7, 3, 5, 5, and 33 for F0, F1, F2, F3, and F4, respectively. The results of variance (TE: 10 ms), 90th percentile (TE: 10 ms), and 99th percentile (TE: 10 and 5 ms) in F0–F3 group were significantly lower than in F4 group, with areas under the ROC curves (AUCs) of 0.84 for variance and 0.70–0.73 for the 90th and 99th percentiles, respectively. The results of variance (TE: 10 and 5 ms), 99th percentile (TE: 10 ms), and skewness (TE: 2.5 and 5 ms) in F0–F2 group were smaller than those of F3/F4 group, with AUCs of 0.88 and 0.69 for variance (TE: 10 and 5 ms, respectively), 0.68 for 99th percentile (TE: 10 ms), and 0.73 and 0.68 for skewness (TE: 2.5 and 5 ms, respectively). CONCLUSION Magnetic resonance histogram analysis of SWI, particularly the variance, is promising for predicting advanced liver fibrosis and cirrhosis. PMID:27113421
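The analysis pipeline (first-order histogram features per region, then ROC) is straightforward to sketch; the code below uses synthetic values and sklearn's roc_auc_score, not the study's SWI images.

```python
import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score

def histogram_features(roi):
    """First-order histogram features of the kind compared in the study."""
    v = np.ravel(roi)
    return {
        "mean": v.mean(),
        "variance": v.var(ddof=1),
        "skewness": stats.skew(v),
        "kurtosis": stats.kurtosis(v),
        **{f"p{q}": np.percentile(v, q) for q in (1, 10, 50, 90, 99)},
    }

rng = np.random.default_rng(12)
roi = rng.normal(5, 2, size=(64, 64))     # synthetic region-of-interest values
print(histogram_features(roi))

# Illustrative AUC: variance feature for 20 synthetic "patients", 8 with
# advanced fibrosis (labels and feature values are made up)
labels = np.array([0] * 12 + [1] * 8)
variance = np.where(labels, rng.normal(9, 2, 20), rng.normal(6, 2, 20))
print("AUC:", roc_auc_score(labels, variance))
```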
Why you cannot transform your way out of trouble for small counts.
Warton, David I
2018-03-01
While data transformation is a common strategy to satisfy linear modeling assumptions, a theoretical result is used to show that transformation cannot reasonably be expected to stabilize variances for small counts. Under broad assumptions, as counts get smaller, it is shown that the variance becomes proportional to the mean under monotonic transformations g(·) that satisfy g(0)=0, excepting a few pathological cases. A suggested rule-of-thumb is that if many predicted counts are less than one then data transformation cannot reasonably be expected to stabilize variances, even for a well-chosen transformation. This result has clear implications for the analysis of counts as often implemented in the applied sciences, but particularly for multivariate analysis in ecology. Multivariate discrete data are often collected in ecology, typically with a large proportion of zeros, and it is currently widespread to use methods of analysis that do not account for differences in variance across observations nor across responses. Simulations demonstrate that failure to account for the mean-variance relationship can have particularly severe consequences in this context, and also in the univariate context if the sampling design is unbalanced. © 2017 The Authors. Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
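The core claim is easy to check by simulation: for Poisson counts with small means, the variance after a log(y + 1) transform stays roughly proportional to the mean rather than stabilizing. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(13)
for mu in (0.1, 0.5, 1.0, 5.0, 20.0):
    y = rng.poisson(mu, size=100_000)
    t = np.log1p(y)                       # common "stabilizing" transform
    # For small mu, var(t) tracks mu (roughly constant ratio); only for
    # large mu does the transform begin to stabilize the variance.
    print(f"mean={mu:5.1f}  var(log1p(y))={t.var():.3f}  ratio={t.var() / mu:.3f}")
```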
ERIC Educational Resources Information Center
Weber, Elke U.; Shafir, Sharoni; Blais, Ann-Renee
2004-01-01
This article examines the statistical determinants of risk preference. In a meta-analysis of animal risk preference (foraging birds and insects), the coefficient of variation (CV), a measure of risk per unit of return, predicts choices far better than outcome variance, the risk measure of normative models. In a meta-analysis of human risk…
ERIC Educational Resources Information Center
Trumpower, David L.
2015-01-01
Making inferences about population differences based on samples of data, that is, performing intuitive analysis of variance (IANOVA), is common in everyday life. However, the intuitive reasoning of individuals when making such inferences (even following statistics instruction), often differs from the normative logic of formal statistics. The…
ERIC Educational Resources Information Center
Proger, Barton B.; And Others
Many researchers assume that unequal cell frequencies in analysis of variance (ANOVA) designs result from poor planning. However, there are several valid reasons why one might have to analyze an unequal-n data matrix. The present study reviewed four categories of methods for treating unequal-n matrices by ANOVA: (a) unaltered data (least-squares…
An approach to the analysis of performance of quasi-optimum digital phase-locked loops.
NASA Technical Reports Server (NTRS)
Polk, D. R.; Gupta, S. C.
1973-01-01
An approach to the analysis of performance of quasi-optimum digital phase-locked loops (DPLL's) is presented. An expression for the characteristic function of the prior error in the state estimate is derived, and from this expression an infinite dimensional equation for the prior error variance is obtained. The prior error-variance equation is a function of the communication system model and the DPLL gain and is independent of the method used to derive the DPLL gain. Two approximations are discussed for reducing the prior error-variance equation to finite dimension. The effectiveness of one approximation in analyzing DPLL performance is studied.
Mcalister, Courtney; Schmitter-Edgecombe, Maureen; Lamb, Richard
2016-03-01
The objective of this meta-analysis was to improve understanding of the heterogeneity in the relationship between cognition and functional status in individuals with mild cognitive impairment (MCI). Demographic, clinical, and methodological moderators were examined. Cognition explained an average of 23% of the variance in functional outcomes. Executive function measures explained the largest amount of variance (37%), whereas global cognitive status and processing speed measures explained the least (20%). Short- and long-delayed memory measures accounted for more variance (35% and 31%) than immediate memory measures (18%), and the relationship between cognition and functional outcomes was stronger when assessed with informant-report (28%) compared with self-report (21%). Demographics, sample characteristics, and type of everyday functioning measures (i.e., questionnaire, performance-based) explained relatively little variance compared with cognition. Executive functioning, particularly measured by Trails B, was a strong predictor of everyday functioning in individuals with MCI. A large proportion of variance remained unexplained by cognition. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Study on Analysis of Variance on the indigenous wild and cultivated rice species of Manipur Valley
NASA Astrophysics Data System (ADS)
Medhabati, K.; Rohinikumar, M.; Rajiv Das, K.; Henary, Ch.; Dikash, Th.
2012-10-01
The analysis of variance revealed considerable variation among the cultivars and the wild species for yield and other quantitative characters in both years of investigation. The highly significant differences among the cultivars in the year-wise and pooled analyses of variance for all 12 characters reveal that there is enough genetic variability for all the characters studied. The existence of genetic variability is of paramount importance for starting a judicious plant breeding programme. Since introduced high-yielding rice cultivars usually do not perform well, improvement of indigenous cultivars is a clear choice for increasing rice production. The genetic variability of 37 rice germplasms in 12 agronomic characters estimated in the present study can be used in breeding programmes.
Robust LOD scores for variance component-based linkage analysis.
Blangero, J; Williams, J T; Almasy, L
2000-01-01
The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.
MRI Texture Analysis of Background Parenchymal Enhancement of the Breast
Woo, Jun; Amano, Maki; Yanagisawa, Fumi; Yamamoto, Hiroshi; Tani, Mayumi
2017-01-01
Purpose The purpose of this study was to determine texture parameters reflecting the background parenchymal enhancement (BPE) of the breast, acquired using texture analysis (TA). Methods We investigated 52 breasts of 26 subjects who underwent dynamic contrast-enhanced MRI. One experienced reader scored BPE visually (i.e., minimal, mild, moderate, or marked). TA, comprising 12 texture parameters, was performed to distinguish the BPE scores quantitatively. Relationships between the visual BPE scores and texture parameters were evaluated using analysis of variance and receiver operating characteristic analysis. Results The variance and skewness of signal intensity were useful for differentiating between moderate and mild or minimal BPE, and between mild and minimal BPE, respectively, with cutoff values of 356.7 for variance and 0.21 for skewness. Some TA features could be useful for distinguishing breast lesions from the BPE. Conclusion TA may be useful for quantifying the BPE of the breast. PMID:28812015
Genetic and environmental variance in content dimensions of the MMPI.
Rose, R J
1988-08-01
To evaluate genetic and environmental variance in the Minnesota Multiphasic Personality Inventory (MMPI), I studied nine factor scales identified in the first item factor analysis of normal adult MMPIs in a sample of 820 adolescent and young adult co-twins. Conventional twin comparisons documented heritable variance in six of the nine MMPI factors (Neuroticism, Psychoticism, Extraversion, Somatic Complaints, Inadequacy, and Cynicism), whereas significant influence from shared environmental experience was found for four factors (Masculinity versus Femininity, Extraversion, Religious Orthodoxy, and Intellectual Interests). Genetic variance in the nine factors was more evident in results from twin sisters than those of twin brothers, and a developmental-genetic analysis, using hierarchical multiple regressions of double-entry matrixes of the twins' raw data, revealed that in four MMPI factor scales, genetic effects were significantly modulated by age or gender or their interaction during the developmental period from early adolescence to early adulthood.
Ozay, Guner; Seyhan, Ferda; Yilmaz, Aysun; Whitaker, Thomas B; Slate, Andrew B; Giesbrecht, Francis
2006-01-01
The variability associated with the aflatoxin test procedure used to estimate aflatoxin levels in bulk shipments of hazelnuts was investigated. Sixteen 10 kg samples of shelled hazelnuts were taken from each of 20 lots that were suspected of aflatoxin contamination. The total variance associated with testing shelled hazelnuts was estimated and partitioned into sampling, sample preparation, and analytical variance components. Each variance component increased as aflatoxin concentration (either B1 or total) increased. With the use of regression analysis, mathematical expressions were developed to model the relationship between aflatoxin concentration and the total, sampling, sample preparation, and analytical variances. The expressions for these relationships were used to estimate the variance for any sample size, subsample size, and number of analyses for a specific aflatoxin concentration. The sampling, sample preparation, and analytical variances associated with estimating aflatoxin in a hazelnut lot at a total aflatoxin level of 10 ng/g and using a 10 kg sample, a 50 g subsample, dry comminution with a Robot Coupe mill, and a high-performance liquid chromatographic analytical method are 174.40, 0.74, and 0.27, respectively. The sampling, sample preparation, and analytical steps of the aflatoxin test procedure accounted for 99.4, 0.4, and 0.2% of the total variability, respectively.
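The variance-concentration relationships described can be modelled with a power function fitted on log scales; the sketch below is a generic illustration with hypothetical (concentration, variance) pairs, not the study's regression estimates.

```python
import numpy as np

# Hypothetical (aflatoxin concentration ng/g, sampling variance) pairs
conc = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
svar = np.array([18.0, 60.0, 174.4, 520.0, 1500.0])

# Power model var = a * C^b, fitted by least squares on log-log scales
b, log_a = np.polyfit(np.log(conc), np.log(svar), deg=1)
a = np.exp(log_a)
print(f"var ≈ {a:.2f} * C^{b:.2f}")
print("predicted sampling variance at 10 ng/g:", a * 10.0 ** b)
```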
Network Structure and Biased Variance Estimation in Respondent Driven Sampling
Verdery, Ashton M.; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J.
2015-01-01
This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network. PMID:26679927
A close examination of double filtering with fold change and t test in microarray analysis
2009-01-01
Background Many researchers use the double filtering procedure with fold change and t test to identify differentially expressed genes, in the hope that the double filtering will provide extra confidence in the results. Due to its simplicity, the double filtering procedure has been popular with applied researchers despite the development of more sophisticated methods. Results This paper, for the first time to our knowledge, provides theoretical insight into the drawback of the double filtering procedure. We show that fold change assumes all genes to have a common variance, while the t statistic assumes gene-specific variances; the two statistics are based on contradicting assumptions. Under the assumption that gene variances arise from a mixture of a common variance and gene-specific variances, we develop the theoretically most powerful likelihood ratio test statistic. We further demonstrate that the posterior inference based on a Bayesian mixture model and the widely used significance analysis of microarrays (SAM) statistic are better approximations to the likelihood ratio test than the double filtering procedure. Conclusion We demonstrate, through hypothesis testing theory, simulation studies and real data examples, that well constructed shrinkage testing methods, which can be united under the mixture gene variance assumption, can considerably outperform the double filtering procedure. PMID:19995439
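The contradiction between the two filters can be made concrete: the fold change divides every gene's mean difference by (implicitly) the same constant, whereas the t statistic divides by each gene's own standard error. A minimal sketch of both statistics plus a simple shrinkage compromise; the fixed blending weight here is an illustrative placeholder, whereas methods such as SAM or limma estimate it from the data.

```python
import numpy as np

def gene_statistics(x, y, shrink=0.5):
    """x, y: (genes x replicates) log-expression matrices for two conditions."""
    nx, ny = x.shape[1], y.shape[1]
    diff = x.mean(axis=1) - y.mean(axis=1)            # log fold change
    sp2 = ((nx - 1) * x.var(axis=1, ddof=1)
           + (ny - 1) * y.var(axis=1, ddof=1)) / (nx + ny - 2)
    t = diff / np.sqrt(sp2 * (1.0 / nx + 1.0 / ny))   # gene-specific variance
    sp2_shrunk = shrink * sp2.mean() + (1 - shrink) * sp2  # blend toward common
    t_shrunk = diff / np.sqrt(sp2_shrunk * (1.0 / nx + 1.0 / ny))
    return diff, t, t_shrunk

rng = np.random.default_rng(2)
d, t, ts = gene_statistics(rng.normal(size=(1000, 3)), rng.normal(size=(1000, 3)))
```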
Tabachnick, W J; Mecham, J O
1991-03-01
An enzyme-linked immunoassay for detecting bluetongue virus in infected Culicoides variipennis was evaluated using a nested analysis of variance to determine sources of experimental error in the procedure. The major source of variation was differences among individual insects (84% of the total variance). Storing insects at -70 degrees C for two months contributed to experimental variation in the ELISA reading (14% of the total variance) and should be avoided. Replicate assays of individual insects were shown to be unnecessary, since variation among replicate wells and plates was minor (2% of the total variance).
Jackknife for Variance Analysis of Multifactor Experiments.
1982-05-01
The variance-covariance matrix is generated by a subroutine named CORAN (UNIVAC, 1969). The jackknife variances are then punched on computer cards in the same...
The Efficiency of Split Panel Designs in an Analysis of Variance Model
Wang, Wei-Guo; Liu, Hai-Jun
2016-01-01
We consider the efficiency of split panel designs in analysis of variance models, that is, determining the optimal proportion of cross-section series among all samples so as to minimize the variances of best linear unbiased estimators of linear combinations of the parameters. An orthogonal matrix is constructed to obtain manageable expressions for the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of the interest and budget parameters. Additionally, the efficiency of an estimator based on the split panel relative to an estimator based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from a split panel can be quite substantial. We further consider the efficiency of a split panel design given a budget, and transform the problem into a constrained nonlinear integer program. Specifically, an efficient algorithm is designed to solve the constrained nonlinear integer program. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm's efficiency with an empirical example concerning monthly consumer expenditure on food in 1985 in the Netherlands, and the efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447
Rubio-Aparicio, María; Sánchez-Meca, Julio; López-López, José Antonio; Botella, Juan; Marín-Martínez, Fulgencio
2017-11-01
Subgroup analyses allow us to examine the influence of a categorical moderator on the effect size in meta-analysis. We conducted a simulation study using a dichotomous moderator, and compared the impact of pooled versus separate estimates of the residual between-studies variance on the statistical performance of the Q_B(P) and Q_B(S) tests for subgroup analyses, assuming a mixed-effects model. Our results suggested that similar performance can be expected as long as there are at least 20 studies and these are approximately balanced across categories. Conversely, when subgroups were unbalanced, the practical consequences of having heterogeneous residual between-studies variances were more evident, with both tests leading to the wrong statistical conclusion more often than in the conditions with balanced subgroups. A pooled estimate should be preferred for most scenarios, unless the residual between-studies variances are clearly different and there are enough studies in each category to obtain precise separate estimates. © 2017 The British Psychological Society.
High-Dimensional Heteroscedastic Regression with an Application to eQTL Data Analysis
Daye, Z. John; Chen, Jinbo; Li, Hongzhe
2011-01-01
Summary We consider the problem of high-dimensional regression under non-constant error variances. Despite being a common phenomenon in biological applications, heteroscedasticity has, so far, been largely ignored in high-dimensional analysis of genomic data sets. We propose a new methodology that allows non-constant error variances for high-dimensional estimation and model selection. Our method incorporates heteroscedasticity by simultaneously modeling both the mean and variance components via a novel doubly regularized approach. Extensive Monte Carlo simulations indicate that our proposed procedure can result in better estimation and variable selection than existing methods when heteroscedasticity arises from the presence of predictors explaining error variances and outliers. Further, we demonstrate the presence of heteroscedasticity in and apply our method to an expression quantitative trait loci (eQTL) study of 112 yeast segregants. The new procedure can automatically account for heteroscedasticity in identifying the eQTLs that are associated with gene expression variations and lead to smaller prediction errors. These results demonstrate the importance of considering heteroscedasticity in eQTL data analysis. PMID:22547833
Jackknife variance of the partial area under the empirical receiver operating characteristic curve.
Bandos, Andriy I; Guo, Ben; Gur, David
2017-04-01
Receiver operating characteristic analysis provides an important methodology for assessing traditional (e.g., imaging technologies and clinical practices) and new (e.g., genomic studies, biomarker development) diagnostic problems. The area under the clinically/practically relevant part of the receiver operating characteristic curve (partial area or partial area under the receiver operating characteristic curve) is an important performance index summarizing diagnostic accuracy at multiple operating points (decision thresholds) that are relevant to actual clinical practice. A robust estimate of the partial area under the receiver operating characteristic curve is provided by the area under the corresponding part of the empirical receiver operating characteristic curve. We derive a closed-form expression for the jackknife variance of the partial area under the empirical receiver operating characteristic curve. Using the derived analytical expression, we investigate the differences between the jackknife variance and a conventional variance estimator. The relative properties in finite samples are demonstrated in a simulation study. The developed formula enables an easy way to estimate the variance of the empirical partial area under the receiver operating characteristic curve, thereby substantially reducing the computation burden, and provides important insight into the structure of the variability. We demonstrate that when compared with the conventional approach, the jackknife variance has substantially smaller bias, and leads to a more appropriate type I error rate of the Wald-type test. The use of the jackknife variance is illustrated in the analysis of a data set from a diagnostic imaging study.
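As a point of reference for what the closed-form expression replaces, here is the brute-force delete-one jackknife for the empirical partial AUC: a sketch that ignores score ties and recomputes the curve n times, which is exactly the computational burden the paper's analytical formula avoids.

```python
import numpy as np

def empirical_pauc(scores, labels, fpr_max=0.2):
    """Area under the empirical ROC curve restricted to FPR in [0, fpr_max]."""
    order = np.argsort(-scores, kind="stable")
    y = labels[order]
    tpr = np.concatenate([[0.0], np.cumsum(y == 1) / max((y == 1).sum(), 1)])
    fpr = np.concatenate([[0.0], np.cumsum(y == 0) / max((y == 0).sum(), 1)])
    keep = fpr <= fpr_max                      # prefix of the curve
    fx = np.concatenate([fpr[keep], [fpr_max]])
    fy = np.concatenate([tpr[keep], [np.interp(fpr_max, fpr, tpr)]])
    return np.sum(np.diff(fx) * (fy[1:] + fy[:-1]) / 2.0)   # trapezoid rule

def jackknife_variance_pauc(scores, labels, fpr_max=0.2):
    """Delete-one jackknife variance of the empirical partial AUC."""
    n = len(scores)
    theta = np.array([empirical_pauc(np.delete(scores, i),
                                     np.delete(labels, i), fpr_max)
                      for i in range(n)])
    return (n - 1) / n * np.sum((theta - theta.mean()) ** 2)
```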
The microcomputer scientific software series 3: general linear model--analysis of variance.
Harold M. Rauscher
1985-01-01
A BASIC language set of programs, designed for use on microcomputers, is presented. This set of programs will perform the analysis of variance for any statistical model describing either balanced or unbalanced designs. The program computes and displays the degrees of freedom, Type I sum of squares, and the mean square for the overall model, the error, and each factor...
Weighting by Inverse Variance or by Sample Size in Random-Effects Meta-Analysis
ERIC Educational Resources Information Center
Marin-Martinez, Fulgencio; Sanchez-Meca, Julio
2010-01-01
Most of the statistical procedures in meta-analysis are based on the estimation of average effect sizes from a set of primary studies. The optimal weight for averaging a set of independent effect sizes is the inverse variance of each effect size, but in practice these weights have to be estimated, being affected by sampling error. When assuming a…
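For concreteness, a fixed-effect version of inverse-variance weighting (a sketch; in the random-effects model discussed in the article, each weight becomes 1/(v_i + tau^2) with tau^2 the between-studies variance, and the sample-size alternative replaces 1/v_i with n_i):

```python
import numpy as np

def inverse_variance_mean(d, v):
    """Weighted average of independent effect sizes d with variances v."""
    d, w = np.asarray(d, dtype=float), 1.0 / np.asarray(v, dtype=float)
    mean = np.sum(w * d) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))      # standard error of the combined effect
    return mean, se

# Three hypothetical studies: the most precise study pulls the average hardest
print(inverse_variance_mean([0.30, 0.55, 0.10], [0.02, 0.05, 0.01]))
```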
Measuring kinetics of complex single ion channel data using mean-variance histograms.
Patlak, J B
1993-07-01
The measurement of single ion channel kinetics is difficult when those channels exhibit subconductance events. When the kinetics are fast, and when the current magnitudes are small, as is the case for Na+, Ca2+, and some K+ channels, these difficulties can lead to serious errors in the estimation of channel kinetics. I present here a method, based on the construction and analysis of mean-variance histograms, that can overcome these problems. A mean-variance histogram is constructed by calculating the mean current and the current variance within a brief "window" (a set of N consecutive data samples) superimposed on the digitized raw channel data. Systematic movement of this window over the data produces large numbers of mean-variance pairs which can be assembled into a two-dimensional histogram. Defined current levels (open, closed, or sublevel) appear in such plots as low variance regions. The total number of events in such low variance regions is estimated by curve fitting and plotted as a function of window width. This function decreases with the same time constants as the original dwell time probability distribution for each of the regions. The method can therefore be used: 1) to present a qualitative summary of the single channel data from which the signal-to-noise ratio, open channel noise, steadiness of the baseline, and number of conductance levels can be quickly determined; 2) to quantify the dwell time distribution in each of the levels exhibited. In this paper I present the analysis of a Na+ channel recording that had a number of complexities. The signal-to-noise ratio was only about 8 for the main open state; open channel noise and fast flickers to other states were present, as were a substantial number of subconductance states. "Standard" half-amplitude threshold analysis of these data produced open and closed time histograms that were well fitted by the sum of two exponentials, but with apparently erroneous time constants, whereas the mean-variance histogram technique provided a more credible analysis of the open, closed, and subconductance times for the patch. I also show that the method produces accurate results on simulated data in a wide variety of conditions, whereas the half-amplitude method, when applied to complex simulated data, shows the same errors as were apparent in the real data. The utility and the limitations of this new method are discussed.
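The core construction is simple to state in code. The sketch below (illustrative parameters, simulated two-level trace) slides a window of N samples along the digitized record and collects (mean, variance) pairs for a two-dimensional histogram; defined current levels then show up as low-variance clusters.

```python
import numpy as np

def mean_variance_pairs(trace, window):
    """(mean, variance) of every length-`window` segment of the trace."""
    kernel = np.ones(window) / window
    m = np.convolve(trace, kernel, mode="valid")
    m2 = np.convolve(trace ** 2, kernel, mode="valid")
    return m, np.maximum(m2 - m ** 2, 0.0)  # clip tiny negative rounding errors

# Simulated record: closed (0 pA) -> open (-2 pA) -> closed, with noise
rng = np.random.default_rng(3)
trace = np.concatenate([rng.normal(0.0, 0.2, 500),
                        rng.normal(-2.0, 0.2, 500),
                        rng.normal(0.0, 0.2, 500)])
m, v = mean_variance_pairs(trace, window=10)
hist2d, _, _ = np.histogram2d(m, v, bins=50)  # low-variance regions = levels
```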
Characterizing nonconstant instrumental variance in emerging miniaturized analytical techniques.
Noblitt, Scott D; Berg, Kathleen E; Cate, David M; Henry, Charles S
2016-04-07
Measurement variance is a crucial aspect of quantitative chemical analysis. Variance directly affects important analytical figures of merit, including detection limit, quantitation limit, and confidence intervals. Most reported analyses for emerging analytical techniques implicitly assume constant variance (homoskedasticity) by using unweighted regression calibrations. Despite the assumption of constant variance, it is known that most instruments exhibit heteroskedasticity, where variance changes with signal intensity. Ignoring nonconstant variance results in suboptimal calibrations, invalid uncertainty estimates, and incorrect detection limits. Three techniques where homoskedasticity is often assumed were covered in this work to evaluate whether heteroskedasticity had a significant quantitative impact: naked-eye, distance-based detection using paper-based analytical devices (PADs); cathodic stripping voltammetry (CSV) with disposable carbon-ink electrode devices; and microchip electrophoresis (MCE) with conductivity detection. Despite these techniques representing a wide range of chemistries and precision, heteroskedastic behavior was confirmed for each. The general variance forms were analyzed, and recommendations for accounting for nonconstant variance are discussed. Monte Carlo simulations of instrument responses were performed to quantify the benefits of weighted regression, and the sensitivity to uncertainty in the variance function was tested. Results show that heteroskedasticity should be considered during development of new techniques; even moderate uncertainty (30%) in the variance function still results in weighted regression outperforming unweighted regression. We recommend utilizing the power model of variance because it is easy to apply, requires little additional experimentation, and produces higher-precision results and more reliable uncertainty estimates than assuming homoskedasticity. Copyright © 2016 Elsevier B.V. All rights reserved.
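The recommended power model is straightforward to apply: fit var = a*signal^b to replicate variances on log scales, then use the reciprocal fitted variances as regression weights. A minimal sketch with hypothetical calibration data (note that numpy's polyfit expects weights of 1/sigma, not 1/sigma^2):

```python
import numpy as np

# Hypothetical calibration data with replicate standard deviations per level
conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
signal = np.array([1.1, 2.0, 5.3, 9.7, 21.0, 48.5])
sd = np.array([0.05, 0.08, 0.15, 0.25, 0.45, 1.00])

# Power model of variance, var = a * signal^b, fitted on log scales
b, log_a = np.polyfit(np.log(signal), np.log(sd ** 2), deg=1)
model_sd = np.sqrt(np.exp(log_a) * signal ** b)

# Weighted least-squares calibration line (polyfit weights are 1/sigma)
slope, intercept = np.polyfit(conc, signal, deg=1, w=1.0 / model_sd)
print(f"signal ≈ {slope:.3f} * conc + {intercept:.3f}")
```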
Allocating Sample Sizes to Reduce Budget for Fixed-Effect 2×2 Heterogeneous Analysis of Variance
ERIC Educational Resources Information Center
Luh, Wei-Ming; Guo, Jiin-Huarng
2016-01-01
This article discusses the sample size requirements for the interaction, row, and column effects, respectively, by forming a linear contrast for a 2×2 factorial design for fixed-effects heterogeneous analysis of variance. The proposed method uses the Welch t test and its corresponding degrees of freedom to calculate the final sample size in a…
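The Welch machinery referred to above rests on the Welch-Satterthwaite approximate degrees of freedom; a minimal sketch of that generic formula (not the article's sample-size algorithm):

```python
def welch_df(s1, n1, s2, n2):
    """Welch-Satterthwaite degrees of freedom for two independent groups."""
    v1, v2 = s1 ** 2 / n1, s2 ** 2 / n2
    return (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))

# Heterogeneous variances pull df well below the pooled n1 + n2 - 2 = 38
print(welch_df(s1=1.0, n1=20, s2=3.0, n2=20))   # about 23
```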
Teaching Principles of Inference with ANOVA
ERIC Educational Resources Information Center
Tarlow, Kevin R.
2016-01-01
Analysis of variance (ANOVA) is a test of "mean" differences, but the reference to "variances" in the name is often overlooked. Classroom activities are presented to illustrate how ANOVA works with emphasis on how to think critically about inferential reasoning.
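The point about variances can be demonstrated in a few lines: the F statistic is literally a ratio of two variance estimates, between-group to within-group. A classroom-style sketch with made-up data, checked against scipy:

```python
import numpy as np
from scipy import stats

groups = [np.array([4.1, 5.0, 5.5, 4.4]),
          np.array([6.2, 5.8, 7.0, 6.5]),
          np.array([5.1, 4.9, 5.7, 5.3])]

grand = np.concatenate(groups).mean()
k, n = len(groups), sum(len(g) for g in groups)
ms_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups) / (k - 1)
ms_within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (n - k)
print(ms_between / ms_within, stats.f_oneway(*groups).statistic)  # identical
```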
Parsons, Helen M; Ludwig, Christian; Günther, Ulrich L; Viant, Mark R
2007-01-01
Background Classifying nuclear magnetic resonance (NMR) spectra is a crucial step in many metabolomics experiments. Since several multivariate classification techniques depend upon the variance of the data, it is important to first minimise any contribution from unwanted technical variance arising from sample preparation and analytical measurements, and thereby maximise any contribution from wanted biological variance between different classes. The generalised logarithm (glog) transform was developed to stabilise the variance in DNA microarray datasets, but has rarely been applied to metabolomics data. In particular, it has not been rigorously evaluated against other scaling techniques used in metabolomics, nor tested on all forms of NMR spectra including 1-dimensional (1D) 1H, projections of 2D 1H, 1H J-resolved (pJRES), and intact 2D J-resolved (JRES). Results Here, the effects of the glog transform are compared against two commonly used variance stabilising techniques, autoscaling and Pareto scaling, as well as unscaled data. The four methods are evaluated in terms of the effects on the variance of NMR metabolomics data and on the classification accuracy following multivariate analysis, the latter achieved using principal component analysis followed by linear discriminant analysis. For two of three datasets analysed, classification accuracies were highest following glog transformation: 100% accuracy for discriminating 1D NMR spectra of hypoxic and normoxic invertebrate muscle, and 100% accuracy for discriminating 2D JRES spectra of fish livers sampled from two rivers. For the third dataset, pJRES spectra of urine from two breeds of dog, the glog transform and autoscaling achieved equal highest accuracies. Additionally we extended the glog algorithm to effectively suppress noise, which proved critical for the analysis of 2D JRES spectra. Conclusion We have demonstrated that the glog and extended glog transforms stabilise the technical variance in NMR metabolomics datasets. This significantly improves the discrimination between sample classes and has resulted in higher classification accuracies compared to unscaled, autoscaled or Pareto scaled data. Additionally we have confirmed the broad applicability of the glog approach using three disparate datasets from different biological samples using 1D NMR spectra, 1D projections of 2D JRES spectra, and intact 2D JRES spectra. PMID:17605789
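For orientation, sketches of the three variance-stabilisation options compared above. The glog form shown is one common parameterisation, and the transform parameter lambda is normally tuned on technical replicates; the default value below is an arbitrary placeholder.

```python
import numpy as np

def glog(x, lam=1e-4):
    """Generalised log: behaves like log(x) for large x and is approximately
    linear (variance-stabilising) near zero; lam is a tuning parameter."""
    return np.log(x + np.sqrt(x ** 2 + lam))

def autoscale(X):
    """Centre each variable and divide by its standard deviation."""
    return (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

def pareto(X):
    """Centre each variable and divide by the square root of its SD."""
    return (X - X.mean(axis=0)) / np.sqrt(X.std(axis=0, ddof=1))
```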
Motor equivalence during multi-finger accurate force production
Mattos, Daniela; Schöner, Gregor; Zatsiorsky, Vladimir M.; Latash, Mark L.
2014-01-01
We explored the stability of a multi-finger cyclical accurate force production task by analyzing responses to small perturbations applied to one of the fingers and by inter-cycle analysis of variance. Healthy subjects performed two versions of the cyclical task, with and without an explicit target. The "inverse piano" apparatus was used to lift/lower a finger by 1 cm over 0.5 s; the subjects were always instructed to perform the task as accurately as they could at all times. Deviations in the spaces of finger forces and modes (hypothetical commands to individual fingers) were quantified in directions that did not change total force (motor equivalent) and in directions that changed the total force (non-motor equivalent). Motor equivalent deviations started immediately with the perturbation and increased progressively with time. After a sequence of lifting-lowering perturbations leading to the initial conditions, motor equivalent deviations were dominating. These phenomena were less pronounced for analysis performed with respect to the total moment of force about an axis parallel to the forearm/hand. Analysis of inter-cycle variance showed consistently higher variance in a subspace that did not change the total force as compared to the variance that affected total force. We interpret the results as reflections of task-specific stability of the redundant multi-finger system. Large motor equivalent deviations suggest that reactions of the neuromotor system to a perturbation involve large changes of neural commands that do not affect salient performance variables, even during actions with the purpose to correct those salient variables. Consistency between the analyses of motor equivalence and the variance analysis provides additional support for the idea of task-specific stability ensured at a neural level. PMID:25344311
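The motor-equivalent/non-motor-equivalent split has a compact linear-algebra core: deviations are projected onto the null space of the task Jacobian (where total force is unchanged) and onto its orthogonal complement. A sketch for the total-force task, where the Jacobian is a row of ones; the deviation values are illustrative, not study data.

```python
import numpy as np

def me_split(deviation, J):
    """Decompose a finger-force deviation into motor-equivalent (null space
    of J; task variable unchanged) and non-motor-equivalent components."""
    J = np.atleast_2d(J).astype(float)
    P = J.T @ np.linalg.pinv(J @ J.T) @ J    # projector onto range of J^T
    non_me = P @ deviation
    return deviation - non_me, non_me

J = np.ones((1, 4))                          # total force = sum of 4 fingers
dev = np.array([0.5, -0.3, 0.2, -0.2])       # post-perturbation force changes
me, non_me = me_split(dev, J)
print(J @ me, J @ non_me)                    # ~0, and the total-force change
```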
Dexter, Franklin; Ledolter, Johannes
2003-07-01
Surgeons using the same amount of operating room (OR) time differ in their achieved hospital contribution margins (revenue minus variable costs) by >1000%. Thus, to improve the financial return from perioperative facilities, OR strategic decisions should selectively focus additional OR capacity and capital purchasing on a few surgeons or subspecialties. These decisions use estimates of each surgeon's and/or subspecialty's contribution margin per OR hour. The estimates are subject to uncertainty (e.g., from outliers). We account for the uncertainties by using mean-variance portfolio analysis (i.e., quadratic programming). This method characterizes the problem of selectively expanding OR capacity based on the expected financial return and risk of different portfolios of surgeons. The assessment reveals whether the choices, of which surgeons have their OR capacity expanded, are sensitive to the uncertainties in the surgeons' contribution margins per OR hour. Thus, mean-variance analysis reduces the chance of making strategic decisions based on spurious information. We also assess the financial benefit of using mean-variance portfolio analysis when the planned expansion of OR capacity is well diversified over at least several surgeons or subspecialties. Our results show that, in such circumstances, there may be little benefit from further changing the portfolio to reduce its financial risk. Surgeon and subspecialty specific hospital financial data are uncertain, a fact that should be taken into account when making decisions about expanding operating room capacity. We show that mean-variance portfolio analysis can incorporate this uncertainty, thereby guiding operating room management decision-making and reducing the chance of a strategic decision being made based on spurious information.
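The underlying computation is the classic Markowitz one: an allocation's expected contribution margin is w'mu and its risk is w'Sigma w. A toy sketch with hypothetical figures; the paper solves the full quadratic program, whereas here we only compare two candidate allocations.

```python
import numpy as np

mu = np.array([400.0, 650.0, 500.0])        # $ contribution margin per OR hour
Sigma = np.array([[1.0e4, 2.0e3, 1.0e3],    # uncertainty (covariance) in margins
                  [2.0e3, 4.0e4, 5.0e3],
                  [1.0e3, 5.0e3, 2.0e4]])

def portfolio(w):
    """Expected return and variance of capacity allocation w (sums to 1)."""
    w = np.asarray(w)
    return w @ mu, w @ Sigma @ w

print(portfolio([0.0, 1.0, 0.0]))   # concentrated: mean 650, variance 40000
print(portfolio([0.2, 0.5, 0.3]))   # diversified: mean 555, variance 14220
```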
Wonnapinij, Passorn; Chinnery, Patrick F.; Samuels, David C.
2010-01-01
In cases of inherited pathogenic mitochondrial DNA (mtDNA) mutations, a mother and her offspring generally have large and seemingly random differences in the amount of mutated mtDNA that they carry. Comparisons of measured mtDNA mutation level variance values have become an important issue in determining the mechanisms that cause these large random shifts in mutation level. These variance measurements have been made with samples of quite modest size, which should be a source of concern because higher-order statistics, such as variance, are poorly estimated from small sample sizes. We have developed an analysis of the standard error of variance from a sample of size n, and we have defined error bars for variance measurements based on this standard error. We calculate variance error bars for several published sets of measurements of mtDNA mutation level variance and show how the addition of the error bars alters the interpretation of these experimental results. We compare variance measurements from human clinical data and from mouse models and show that the mutation level variance is clearly higher in the human data than it is in the mouse models at both the primary oocyte and offspring stages of inheritance. We discuss how the standard error of variance can be used in the design of experiments measuring mtDNA mutation level variance. Our results show that variance measurements based on fewer than 20 measurements are generally unreliable and ideally more than 50 measurements are required to reliably compare variances with less than a 2-fold difference. PMID:20362273
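A generic moment-based version of such error bars is easy to state. The sketch below uses the general large-sample formula, which reduces to s^2*sqrt(2/(n-1)) for normal data; the paper develops its own analysis tailored to mtDNA mutation levels.

```python
import numpy as np

def variance_se(x):
    """Approximate standard error of the sample variance,
    Var(s^2) ~ (m4 - s^4 (n-3)/(n-1)) / n, with m4 the fourth central moment."""
    x = np.asarray(x, dtype=float)
    n, s2 = len(x), x.var(ddof=1)
    m4 = np.mean((x - x.mean()) ** 4)
    return np.sqrt(max(m4 - s2 ** 2 * (n - 3) / (n - 1), 0.0) / n)

rng = np.random.default_rng(4)
for n in (10, 20, 50, 200):                 # error bars shrink slowly with n
    x = rng.normal(size=n)
    print(n, round(x.var(ddof=1), 3), round(variance_se(x), 3))
```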
Predictors of burnout among correctional mental health professionals.
Gallavan, Deanna B; Newman, Jody L
2013-02-01
This study focused on the experience of burnout among a sample of correctional mental health professionals. We examined the relationship of a linear combination of optimism, work family conflict, and attitudes toward prisoners with two dimensions derived from the Maslach Burnout Inventory and the Professional Quality of Life Scale. Initially, three subscales from the Maslach Burnout Inventory and two subscales from the Professional Quality of Life Scale were subjected to principal components analysis with oblimin rotation in order to identify underlying dimensions among the subscales. This procedure resulted in two components accounting for approximately 75% of the variance (r = -.27). The first component was labeled Negative Experience of Work because it seemed to tap the experience of being emotionally spent, detached, and socially avoidant. The second component was labeled Positive Experience of Work and seemed to tap a sense of competence, success, and satisfaction in one's work. Two multiple regression analyses were subsequently conducted, in which Negative Experience of Work and Positive Experience of Work, respectively, were predicted from a linear combination of optimism, work family conflict, and attitudes toward prisoners. In the first analysis, 44% of the variance in Negative Experience of Work was accounted for, with work family conflict and optimism accounting for the most variance. In the second analysis, 24% of the variance in Positive Experience of Work was accounted for, with optimism and attitudes toward prisoners accounting for the most variance.
Differential distribution of amino acids in plants.
Kumar, Vinod; Sharma, Anket; Kaur, Ravdeep; Thukral, Ashwani Kumar; Bhardwaj, Renu; Ahmad, Parvaiz
2017-05-01
Plants are a rich source of amino acids, and their individual abundance in plants is of great significance, especially in terms of food. Therefore, it is of utmost necessity to create a database of the relative amino acid contents in plants as reported in the literature. Since in most cases complete amino acid profiles were not reported, and the units used, the methods applied, and the plant parts analysed differed, amino acid contents were converted into relative units with respect to lysine for statistical analysis. The most abundant amino acids in plants are glutamic acid and aspartic acid. Pearson's correlation analysis among different amino acids showed that there were no negative correlations between the amino acids. Cluster analysis (CA) was applied to the relative amino acid contents of different families. The Alismataceae, Cyperaceae, Capparaceae, and Cactaceae families had close proximity to each other on the basis of their relative amino acid contents. The first three components of principal component analysis (PCA) explained 79.5% of the total variance. Factor analysis (FA) identified four main underlying factors. Factor 1 accounted for 29.4% of the total variance and had maximum loadings on glycine, isoleucine, leucine, threonine, and valine. Factor 2 explained 25.8% of the total variance and had maximum loadings on alanine, aspartic acid, serine, and tyrosine. Factor 3 explained 14.2% of the total variance and had maximum loadings on arginine and histidine. Factor 4 accounted for 8.3% of the total variance and had maximum loading on proline. The relative contents of the different amino acids presented in this paper are alanine (1.4), arginine (1.8), asparagine (0.7), aspartic acid (2.4), cysteine (0.5), glutamic acid (2.8), glutamine (0.6), glycine (1.0), histidine (0.5), isoleucine (0.9), leucine (1.7), lysine (1.0), methionine (0.4), phenylalanine (0.9), proline (1.1), serine (1.0), threonine (1.0), tryptophan (0.3), tyrosine (0.7) and valine (1.2).
NASA Astrophysics Data System (ADS)
Reynders, Edwin P. B.; Langley, Robin S.
2018-08-01
The hybrid deterministic-statistical energy analysis method has proven to be a versatile framework for modeling built-up vibro-acoustic systems. The stiff system components are modeled deterministically, e.g., using the finite element method, while the wave fields in the flexible components are modeled as diffuse. In the present paper, the hybrid method is extended such that not only the ensemble mean and variance of the harmonic system response can be computed, but also of the band-averaged system response. This variance represents the uncertainty that is due to the assumption of a diffuse field in the flexible components of the hybrid system. The developments start with a cross-frequency generalization of the reciprocity relationship between the total energy in a diffuse field and the cross spectrum of the blocked reverberant loading at the boundaries of that field. By making extensive use of this generalization in a first-order perturbation analysis, explicit expressions are derived for the cross-frequency and band-averaged variance of the vibrational energies in the diffuse components and for the cross-frequency and band-averaged variance of the cross spectrum of the vibro-acoustic field response of the deterministic components. These expressions are extensively validated against detailed Monte Carlo analyses of coupled plate systems in which diffuse fields are simulated by randomly distributing small point masses across the flexible components, and good agreement is found.
Landsat-TM identification of Amblyomma variegatum (Acari: Ixodidae) habitats in Guadeloupe
NASA Technical Reports Server (NTRS)
Hugh-Jones, M.; Barre, N.; Nelson, G.; Wehnes, K.; Warner, J.; Garvin, J.; Garris, G.
1992-01-01
The feasibility of identifying specific habitats of the African bont tick, Amblyomma variegatum, from Landsat-TM images was investigated by comparing remotely sensed images of visible farms in Grande Terre (Guadeloupe) with field observations made in the same period of time (1986-1987). The different tick habitats could be separated using principal component analysis. The analysis clustered the sites by large and small variance of band values, and by vegetation and moisture indexes. It was found that herds in heterogeneous sites with large variances had more ticks than those in homogeneous or low-variance sites. Within the heterogeneous sites, those with high vegetation and moisture indexes had more ticks than those with low values.
Harrison, Jay M; Howard, Delia; Malven, Marianne; Halls, Steven C; Culler, Angela H; Harrigan, George G; Wolfinger, Russell D
2013-07-03
Compositional studies on genetically modified (GM) and non-GM crops have consistently demonstrated that their respective levels of key nutrients and antinutrients are remarkably similar and that other factors such as germplasm and environment contribute more to compositional variability than transgenic breeding. We propose that graphical and statistical approaches that can provide meaningful evaluations of the relative impact of different factors to compositional variability may offer advantages over traditional frequentist testing. A case study on the novel application of principal variance component analysis (PVCA) in a compositional assessment of herbicide-tolerant GM cotton is presented. Results of the traditional analysis of variance approach confirmed the compositional equivalence of the GM and non-GM cotton. The multivariate approach of PVCA provided further information on the impact of location and germplasm on compositional variability relative to GM.
Brooks, M.H.; Schroder, L.J.; Malo, B.A.
1985-01-01
Four laboratories were evaluated in their analysis of identical natural and simulated precipitation water samples. Interlaboratory comparability was evaluated using analysis of variance coupled with Duncan's multiple range test, and linear-regression models describing the relations between individual laboratory analytical results for natural precipitation samples. Results of the statistical analyses indicate that certain pairs of laboratories produce different results when analyzing identical samples. Analyte bias for each laboratory was examined using analysis of variance coupled with Duncan's multiple range test on data produced by the laboratories from the analysis of identical simulated precipitation samples. Bias for a given analyte produced by a single laboratory is indicated when the laboratory mean for that analyte is significantly different from the mean of the most-probable analyte concentrations in the simulated precipitation samples. Ion-chromatographic methods for the determination of chloride, nitrate, and sulfate were compared with the colorimetric methods that were also in use during the study period. Comparisons were made using analysis of variance coupled with Duncan's multiple range test for means produced by the two methods. Analyte precision for each laboratory was estimated by calculating a pooled variance for each analyte. Estimated analyte precisions were compared using F-tests, and differences in analyte precisions for laboratory pairs are reported. (USGS)
Estimating synaptic parameters from mean, variance, and covariance in trains of synaptic responses.
Scheuss, V; Neher, E
2001-10-01
Fluctuation analysis of synaptic transmission using the variance-mean approach has been restricted in the past to steady-state responses. Here we extend this method to short repetitive trains of synaptic responses, during which the response amplitudes are not stationary. We consider intervals between trains, long enough so that the system is in the same average state at the beginning of each train. This allows analysis of ensemble means and variances for each response in a train separately. Thus, modifications in synaptic efficacy during short-term plasticity can be attributed to changes in synaptic parameters. In addition, we provide practical guidelines for the analysis of the covariance between successive responses in trains. Explicit algorithms to estimate synaptic parameters are derived and tested by Monte Carlo simulations on the basis of a binomial model of synaptic transmission, allowing for quantal variability, heterogeneity in the release probability, and postsynaptic receptor saturation and desensitization. We find that the combined analysis of variance and covariance is advantageous in yielding an estimate for the number of release sites, which is independent of heterogeneity in the release probability under certain conditions. Furthermore, it allows one to calculate the apparent quantal size for each response in a sequence of stimuli.
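Under the simple binomial model the ensemble variance is a parabola in the ensemble mean, var = q*I - I^2/N, so the quantal size q and the number of release sites N fall out of a two-coefficient least-squares fit. A sketch with hypothetical amplitudes, constructed here so that q is about 2 pA and N about 200:

```python
import numpy as np

# Hypothetical ensemble means (pA) and variances (pA^2) across conditions
mean_I = np.array([50.0, 120.0, 200.0, 260.0, 300.0])
var_I = np.array([90.0, 170.0, 210.0, 190.0, 150.0])

# Fit var = q*I - I^2/N, i.e. coefficients [q, -1/N] for columns [I, I^2]
A = np.column_stack([mean_I, mean_I ** 2])
(q, c2), *_ = np.linalg.lstsq(A, var_I, rcond=None)
print(f"quantal size q ≈ {q:.2f} pA, release sites N ≈ {-1.0 / c2:.0f}")
```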
Sangnawakij, Patarawan; Böhning, Dankmar; Adams, Stephen; Stanton, Michael; Holling, Heinz
2017-04-30
Statistical inference for analyzing the results from several independent studies on the same quantity of interest has been investigated frequently in recent decades. Typically, any meta-analytic inference requires that the quantity of interest is available from each study together with an estimate of its variability. The current work is motivated by a meta-analysis on comparing two treatments (thoracoscopic and open) of congenital lung malformations in young children. Quantities of interest include continuous end-points such as length of operation or number of chest tube days. As studies only report mean values (and no standard errors or confidence intervals), the question arises how meta-analytic inference can be developed. We suggest two methods to estimate study-specific variances in such a meta-analysis, where only sample means and sample sizes are available in the treatment arms. A general likelihood ratio test is derived for testing equality of variances in two groups. By means of simulation studies, the bias and estimated standard error of the overall mean difference from both methodologies are evaluated and compared with two existing approaches: complete study analysis only and partial variance information. The performance of the test is evaluated in terms of type I error. Additionally, we illustrate these methods in the meta-analysis on comparing thoracoscopic and open surgery for congenital lung malformations and in a meta-analysis on the change in renal function after kidney donation. Copyright © 2017 John Wiley & Sons, Ltd.
Team climate at Antarctic research stations 1996-2000: leadership matters.
Schmidt, Lacey L; Wood, JoAnna; Lugg, Desmond J
2004-08-01
The popular assumption is that extreme environments induce a climate of hostility, incompatibility, and tension by intensifying differences and disagreements among team members. Team members' perceptions of team climate are likely to change over time in an extreme environment, and thus team climate should be considered as a dynamic outcome variable resulting from multiple factors. In order to explore team climate as a dynamic outcome, we explored whether variables at multiple levels of analysis contributed to team climate over time for teams living and working in Antarctica. Data for this study were collected from volunteers involved in Australian National Antarctic Research Expeditions conducted from 1996 to 2000. Multilevel analysis was used to partition and estimate the variance in team climate and to explore factors explaining variance at the group/team, individual, and weekly levels. Most of the variance in perceptions of team climate was at the individual level (57%). Team climate had less variance at the group level (16%) and at the weekly level (26%). Results indicated that perceived leadership effectiveness was significantly related to team climate. Perceived leadership effectiveness accounted for an estimated 77% of the group level variance, which equated to 14% of the overall variance in team climate. Our results suggest that exploring the characteristics and behaviors that constitute effective leadership would contribute to a more complete and useful picture of team climate, as well as guide selection research.
Analysis of conditional genetic effects and variance components in developmental genetics.
Zhu, J
1995-12-01
A genetic model with additive-dominance effects and genotype x environment interactions is presented for quantitative traits with time-dependent measures. The genetic model for phenotypic means at time t conditional on phenotypic means measured at previous time (t-1) is defined. Statistical methods are proposed for analyzing conditional genetic effects and conditional genetic variance components. Conditional variances can be estimated by minimum norm quadratic unbiased estimation (MINQUE) method. An adjusted unbiased prediction (AUP) procedure is suggested for predicting conditional genetic effects. A worked example from cotton fruiting data is given for comparison of unconditional and conditional genetic variances and additive effects.
Thompson, William H; Fransson, Peter
2015-01-01
When studying brain connectivity using fMRI, signal intensity time-series are typically correlated with each other in time to compute estimates of the degree of interaction between different brain regions and/or networks. In the static connectivity case, the problem of defining which connections should be considered significant in the analysis can be addressed in a rather straightforward manner by statistical thresholding based on the magnitude of the correlation coefficients. More recently, interest has come to focus on the dynamical aspects of brain connectivity, and the problem of deciding which brain connections are to be considered relevant in the context of dynamical changes in connectivity provides further options. Since we, in the dynamical case, are interested in changes in connectivity over time, the variance of the correlation time-series becomes a relevant parameter. In this study, we discuss the relationship between the mean and variance of brain connectivity time-series and show that by studying the relation between them, two conceptually different strategies to analyze dynamic functional brain connectivity become available. Using resting-state fMRI data from a cohort of 46 subjects, we show that the mean of fMRI connectivity time-series scales negatively with its variance. This finding leads to the suggestion that magnitude- versus variance-based thresholding strategies will induce different results in studies of dynamic functional brain connectivity. Our assertion is exemplified by showing that the magnitude-based strategy is more sensitive to within-resting-state network (RSN) connectivity compared to between-RSN connectivity, whereas the opposite holds true for a variance-based analysis strategy. The implications of our findings for dynamical functional brain connectivity studies are discussed.
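A minimal sketch of the quantities involved (simulated, not fMRI data): sliding-window correlations between two time-series give a connectivity time-series whose mean supports magnitude-based thresholding and whose variance supports variability-based thresholding.

```python
import numpy as np

def sliding_corr(x, y, window):
    """Pearson correlation in every length-`window` segment of x and y."""
    return np.array([np.corrcoef(x[i:i + window], y[i:i + window])[0, 1]
                     for i in range(len(x) - window + 1)])

rng = np.random.default_rng(5)
shared = rng.normal(size=600)            # common signal between two regions
x = shared + rng.normal(size=600)
y = shared + rng.normal(size=600)
r = sliding_corr(x, y, window=60)
print("mean connectivity:", r.mean(), "| variance over windows:", r.var(ddof=1))
```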
DOE Office of Scientific and Technical Information (OSTI.GOV)
Groen, E.A., E-mail: Evelyne.Groen@gmail.com; Heijungs, R.; Leiden University, Einsteinweg 2, Leiden 2333 CC
Life cycle assessment (LCA) is an established tool to quantify the environmental impact of a product. A good assessment of uncertainty is important for making well-informed decisions in comparative LCA, as well as for correctly prioritising data collection efforts. Under- or overestimation of output uncertainty (e.g. output variance) will lead to incorrect decisions in such matters. The presence of correlations between input parameters during uncertainty propagation can increase or decrease the output variance. However, most LCA studies that include uncertainty analysis ignore correlations between input parameters during uncertainty propagation, which may lead to incorrect conclusions. Two approaches to include correlations between input parameters during uncertainty propagation and global sensitivity analysis were studied: an analytical approach and a sampling approach. The use of both approaches is illustrated for an artificial case study of electricity production. Results demonstrate that both approaches yield approximately the same output variance and sensitivity indices for this specific case study. Furthermore, we demonstrate that the analytical approach can be used to quantify the risk of ignoring correlations between input parameters during uncertainty propagation in LCA. We demonstrate that: (1) we can predict if including correlations among input parameters in uncertainty propagation will increase or decrease output variance; (2) we can quantify the risk of ignoring correlations on the output variance and the global sensitivity indices. Moreover, this procedure requires only little data. - Highlights: • Ignoring correlation leads to under- or overestimation of the output variance. • We demonstrated that the risk of ignoring correlation can be quantified. • The procedure proposed is generally applicable in life cycle assessment. • In some cases, ignoring correlation has a minimal effect on decision-making tools.
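At first order, the analytical approach referred to is the Taylor (delta-method) propagation var(y) ≈ g' Sigma g, with g the gradient at the mean inputs. The sketch below (assumed sensitivities and standard deviations, purely illustrative) shows how positive or negative input correlation shifts the output variance in opposite directions.

```python
import numpy as np

g = np.array([2.0, -1.5])          # assumed sensitivities dy/dx_i at the mean
sd = np.array([0.3, 0.4])          # assumed input standard deviations

for rho in (0.0, 0.8, -0.8):       # correlation between the two inputs
    cov = rho * sd[0] * sd[1]
    Sigma = np.array([[sd[0] ** 2, cov],
                      [cov, sd[1] ** 2]])
    print(f"rho = {rho:+.1f}: var(y) ≈ {g @ Sigma @ g:.3f}")
```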
Fritts, Andrea; Knights, Brent C.; Lafrancois, Toben D.; Bartsch, Lynn; Vallazza, Jon; Bartsch, Michelle; Richardson, William B.; Karns, Byron N.; Bailey, Sean; Kreiling, Rebecca
2018-01-01
Fatty acid and stable isotope signatures allow researchers to better understand food webs, food sources, and trophic relationships. Research in marine and lentic systems has indicated that the variance of these biomarkers can exhibit substantial differences across spatial and temporal scales, but this type of analysis has not been completed for large river systems. Our objectives were to evaluate variance structures for fatty acids and stable isotopes (i.e. δ13C and δ15N) of seston, threeridge mussels, hydropsychid caddisflies, gizzard shad, and bluegill across spatial scales (10s-100s km) in large rivers of the Upper Mississippi River Basin, USA that were sampled annually for two years, and to evaluate the implications of this variance on the design and interpretation of trophic studies. The highest variance for both isotopes was present at the largest spatial scale for all taxa (except seston δ15N) indicating that these isotopic signatures are responding to factors at a larger geographic level rather than being influenced by local-scale alterations. Conversely, the highest variance for fatty acids was present at the smallest spatial scale (i.e. among individuals) for all taxa except caddisflies, indicating that the physiological and metabolic processes that influence fatty acid profiles can differ substantially between individuals at a given site. Our results highlight the need to consider the spatial partitioning of variance during sample design and analysis, as some taxa may not be suitable to assess ecological questions at larger spatial scales.
Optimal Superpositioning of Flexible Molecule Ensembles
Gapsys, Vytautas; de Groot, Bert L.
2013-01-01
Analysis of the internal dynamics of a biological molecule requires the successful removal of overall translation and rotation. Particularly for flexible or intrinsically disordered peptides, this is a challenging task due to the absence of a well-defined reference structure that could be used for superpositioning. In this work, we started the analysis with a widely known formulation of an objective for the problem of superimposing a set of multiple molecules as variance minimization over an ensemble. A negative effect of this superpositioning method is the introduction of ambiguous rotations, where different rotation matrices may be applied to structurally similar molecules. We developed two algorithms to resolve the suboptimal rotations. The first approach minimizes the variance together with the distance of a structure to a preceding molecule in the ensemble. The second algorithm seeks for minimal variance together with the distance to the nearest neighbors of each structure. The newly developed methods were applied to molecular-dynamics trajectories and normal-mode ensembles of the Aβ peptide, RS peptide, and lysozyme. These new (to our knowledge) superpositioning methods combine the benefits of variance and distance between nearest-neighbor(s) minimization, providing a solution for the analysis of intrinsic motions of flexible molecules and resolving ambiguous rotations. PMID:23332072
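For context, the classical baseline that ensemble-variance methods build on is generalised Procrustes superposition: iteratively rotate each centred structure onto the running ensemble mean (Kabsch rotations), which locally minimises the ensemble variance. A sketch of that baseline, not of the authors' two disambiguating algorithms:

```python
import numpy as np

def kabsch(P, Q):
    """Proper rotation (no reflection) that best aligns coordinates P onto Q."""
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

def superpose_ensemble(X, iters=10):
    """X: (structures, atoms, 3). Align all structures to the ensemble mean;
    converges to a local minimum of the ensemble variance."""
    X = X - X.mean(axis=1, keepdims=True)        # remove translation
    for _ in range(iters):
        mean = X.mean(axis=0)
        X = np.stack([(kabsch(P, mean) @ P.T).T for P in X])
    return X
```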
NASA Astrophysics Data System (ADS)
Reis, D. S.; Stedinger, J. R.; Martins, E. S.
2005-10-01
This paper develops a Bayesian approach to analysis of a generalized least squares (GLS) regression model for regional analyses of hydrologic data. The new approach allows computation of the posterior distributions of the parameters and the model error variance using a quasi-analytic approach. Two regional skew estimation studies illustrate the value of the Bayesian GLS approach for regional statistical analysis of a shape parameter and demonstrate that regional skew models can be relatively precise with effective record lengths in excess of 60 years. With Bayesian GLS the marginal posterior distribution of the model error variance and the corresponding mean and variance of the parameters can be computed directly, thereby providing a simple but important extension of the regional GLS regression procedures popularized by Tasker and Stedinger (1989), which is sensitive to the likely values of the model error variance when it is small relative to the sampling error in the at-site estimator.
Kohler, Friedbert; Renton, Roger; Dickson, Hugh G; Estell, John; Connolly, Carol E
2011-02-01
We sought the best predictors for length of stay, discharge destination and functional improvement for inpatients undergoing rehabilitation following a stroke and compared these predictors against AN-SNAP v2. The Oxfordshire classification subgroup, sociodemographic data and functional data were collected for patients admitted between 1997 and 2007 with a diagnosis of recent stroke. The data were factor analysed using Principal Components Analysis for categorical data (CATPCA). Categorical regression analysis was performed to determine the best predictors of length of stay, discharge destination, and functional improvement. A total of 1154 patients were included in the study. Principal components analysis indicated that the data were effectively unidimensional, with length of stay being the most important component. Regression analysis demonstrated that the best predictor was the admission motor FIM score, explaining 38.9% of variance for length of stay, 37.4% of variance for functional improvement and 16% of variance for discharge destination. The best explanatory variable in our inpatient rehabilitation service is the admission motor FIM. AN-SNAP v2 classification is a less effective explanatory variable. This needs to be taken into account when using the AN-SNAP v2 classification for clinical or funding purposes.
De Hertogh, Benoît; De Meulder, Bertrand; Berger, Fabrice; Pierre, Michael; Bareke, Eric; Gaigneaux, Anthoula; Depiereux, Eric
2010-01-11
Recent reanalysis of spike-in datasets underscored the need for new and more accurate benchmark datasets for statistical microarray analysis. We present here a fresh method using biologically relevant data to evaluate the performance of statistical methods. Our novel method ranks the probesets from a dataset composed of publicly available biological microarray data and extracts subset matrices with precise information/noise ratios. Our method can be used to determine the capability of different methods to better estimate variance for a given number of replicates. The mean-variance and mean-fold change relationships of the matrices revealed a closer approximation of biological reality. Performance analysis refined the results from benchmarks published previously. We show that the Shrinkage t test (close to Limma) was the best of the methods tested, except when two replicates were examined, where the Regularized t test and the Window t test performed slightly better. The R scripts used for the analysis are available at http://urbm-cluster.urbm.fundp.ac.be/~bdemeulder/.
The Statistical Power of Planned Comparisons.
ERIC Educational Resources Information Center
Benton, Roberta L.
Basic principles underlying statistical power are examined; and issues pertaining to effect size, sample size, error variance, and significance level are highlighted via the use of specific hypothetical examples. Analysis of variance (ANOVA) and related methods remain popular, although other procedures sometimes have more statistical power against…
Weighted analysis of paired microarray experiments.
Kristiansson, Erik; Sjögren, Anders; Rudemo, Mats; Nerman, Olle
2005-01-01
In microarray experiments quality often varies, for example between samples and between arrays. The need for quality control is therefore strong. A statistical model and a corresponding analysis method are suggested for experiments with pairing, including designs with individuals observed before and after treatment and many experiments with two-colour spotted arrays. The model is of mixed type with some parameters estimated by an empirical Bayes method. Differences in quality are modelled by individual variances and correlations between repetitions. The method is applied to three real and several simulated datasets. Two of the real datasets are of Affymetrix type with patients profiled before and after treatment, and the third dataset is of two-colour spotted cDNA type. In all cases, the patients or arrays had different estimated variances, leading to distinctly unequal weights in the analysis. We also suggest plots that illustrate the variances and correlations affecting the weights computed by our analysis method. For simulated data the improvement relative to previously published methods without weighting is shown to be substantial.
Analysis of Variance in the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
Deloach, Richard
2010-01-01
This paper is a tutorial introduction to the analysis of variance (ANOVA), intended as a reference for aerospace researchers who are being introduced to the analytical methods of the Modern Design of Experiments (MDOE), or who may have other opportunities to apply this method. One-way and two-way fixed-effects ANOVA, as well as random effects ANOVA, are illustrated in practical terms that will be familiar to most practicing aerospace researchers.
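For readers new to the machinery, a one-way fixed-effects ANOVA can be computed in a few lines; the sketch below partitions the total sum of squares into between- and within-group parts and forms the F ratio, cross-checked against scipy's built-in.

```python
import numpy as np
from scipy import stats

def one_way_anova(groups):
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in groups)
    df_b, df_w = k - 1, n - k
    F = (ss_between / df_b) / (ss_within / df_w)
    return F, stats.f.sf(F, df_b, df_w)          # F statistic and p-value

rng = np.random.default_rng(0)
groups = [rng.normal(m, 1.0, 20) for m in (0.0, 0.2, 0.5)]
print(one_way_anova(groups))
print(stats.f_oneway(*groups))                   # should agree
```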
Podsakoff, P M; MacKenzie, S B; Bommer, W H
1996-08-01
A meta-analysis was conducted to estimate more accurately the bivariate relationships between leadership behaviors, substitutes for leadership, and subordinate attitudes, role perceptions, and performance, and to examine the relative strengths of the relationships between these variables. Estimates of 435 relationships were obtained from 22 studies containing 36 independent samples. The findings showed that the combination of the substitutes variables and leader behaviors accounts for a majority of the variance in employee attitudes and role perceptions and a substantial proportion of the variance in in-role and extra-role performance; on average, the substitutes for leadership uniquely accounted for more of the variance in the criterion variables than did leader behaviors.
NASA Astrophysics Data System (ADS)
Shinzato, Takashi
2016-12-01
The portfolio optimization problem in which the variances of the return rates of assets are not identical is analyzed in this paper using the methodology of statistical mechanical informatics, specifically, replica analysis. We defined two characteristic quantities of an optimal portfolio, namely, minimal investment risk and investment concentration, in order to solve the portfolio optimization problem and analytically determined their asymptotical behaviors using replica analysis. Numerical experiments were also performed, and a comparison between the results of our simulation and those obtained via replica analysis validated our proposed method.
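The replica-analysis machinery cannot be reproduced in a few lines, but the classical counterparts of the two quantities named above can be sketched: the minimum-variance portfolio under a budget constraint for assets with non-identical variances, and the investment concentration N·Σw². The variances and the independence assumption below are illustrative.

```python
import numpy as np

N = 50
rng = np.random.default_rng(0)
sigma2 = rng.uniform(0.5, 2.0, N)        # non-identical asset variances (assumption)
Sigma = np.diag(sigma2)                  # independent returns assumed for simplicity

ones = np.ones(N)
w = np.linalg.solve(Sigma, ones)
w /= ones @ w                            # w = Sigma^-1 1 / (1' Sigma^-1 1), budget sum(w) = 1

risk = 0.5 * w @ Sigma @ w               # minimal investment risk, up to normalization (assumption)
concentration = N * (w ** 2).sum()       # investment concentration; 1 for equal weights
print(risk, concentration)
```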
New Variance-Reducing Methods for the PSD Analysis of Large Optical Surfaces
NASA Technical Reports Server (NTRS)
Sidick, Erkin
2010-01-01
Edge data of a measured surface map of a circular optic result in large variance or "spectral leakage" behavior in the corresponding Power Spectral Density (PSD) data. In this paper we present two new, alternative methods for reducing such variance in the PSD data by replacing the zeros outside the circular area of a surface map by non-zero values either obtained from a PSD fit (method 1) or taken from the inside of the circular area (method 2).
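A rough numpy illustration of the idea behind method 2, as I read it (my construction, not the paper's algorithm): fill the region outside the circular aperture with values mirrored from just inside the edge before taking the 2-D FFT, so the hard zero edge no longer drives spectral leakage.

```python
import numpy as np

n = 256
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
r = np.hypot(x, y)
inside = r <= 1.0
surface = np.random.randn(n, n) * 1e-9          # stand-in surface error map

# radial "reflection": map each outside pixel to the interior point mirrored
# across the aperture edge (r -> 2 - r)
theta = np.arctan2(y, x)
ro = np.clip(2.0 - r, 0.0, 1.0)
xi = ((ro * np.cos(theta) + 1) / 2 * (n - 1)).astype(int)
yi = ((ro * np.sin(theta) + 1) / 2 * (n - 1)).astype(int)
filled = surface.copy()
filled[~inside] = surface[yi[~inside], xi[~inside]]

psd_zeros = np.abs(np.fft.fftshift(np.fft.fft2(np.where(inside, surface, 0.0)))) ** 2
psd_fill = np.abs(np.fft.fftshift(np.fft.fft2(filled))) ** 2   # reduced leakage variance
```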
Influential input classification in probabilistic multimedia models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maddalena, Randy L.; McKone, Thomas E.; Hsieh, Dennis P.H.
1999-05-01
Monte Carlo analysis is a statistical simulation method that is often used to assess and quantify the outcome variance in complex environmental fate and effects models. Total outcome variance of these models is a function of (1) the uncertainty and/or variability associated with each model input and (2) the sensitivity of the model outcome to changes in the inputs. To propagate variance through a model using Monte Carlo techniques, each variable must be assigned a probability distribution. The validity of these distributions directly influences the accuracy and reliability of the model outcome. To efficiently allocate resources for constructing distributions, one should first identify the most influential set of variables in the model. Although existing sensitivity and uncertainty analysis methods can provide a relative ranking of the importance of model inputs, they fail to identify the minimum set of stochastic inputs necessary to sufficiently characterize the outcome variance. In this paper, we describe and demonstrate a novel sensitivity/uncertainty analysis method for assessing the importance of each variable in a multimedia environmental fate model. Our analyses show that for a given scenario, a relatively small number of input variables influence the central tendency of the model and an even smaller set determines the shape of the outcome distribution. For each input, the level of influence depends on the scenario under consideration. This information is useful for developing site-specific models and improving our understanding of the processes that have the greatest influence on the variance in outcomes from multimedia models.
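A toy version of the workflow just described, with an invented three-input stand-in for a multimedia fate model: propagate input distributions by Monte Carlo and rank inputs by a crude importance index (squared rank correlation with the outcome).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 10_000
inputs = {
    "emission_rate": rng.lognormal(0.0, 0.5, n),     # illustrative distributions
    "partition_coeff": rng.lognormal(1.0, 0.3, n),
    "degradation_rate": rng.uniform(0.01, 0.1, n),
}

def model(x):
    # hypothetical stand-in for a multimedia fate model
    return x["emission_rate"] * x["partition_coeff"] / x["degradation_rate"]

y = model(inputs)
importance = {k: stats.spearmanr(v, y)[0] ** 2 for k, v in inputs.items()}
print(sorted(importance.items(), key=lambda kv: -kv[1]))   # most influential first
```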
Mixed model approaches for diallel analysis based on a bio-model.
Zhu, J; Weir, B S
1996-12-01
A MINQUE(1) procedure, i.e., the minimum norm quadratic unbiased estimation (MINQUE) method with 1 for all the prior values, is suggested for estimating variance and covariance components in a bio-model for diallel crosses. Unbiasedness and efficiency of estimation were compared for MINQUE(1), restricted maximum likelihood (REML) and MINQUE(θ), which uses parameter values for the priors. MINQUE(1) is almost as efficient as MINQUE(θ) for unbiased estimation of genetic variance and covariance components. The bio-model is efficient and robust for estimating variance and covariance components for maternal and paternal effects as well as for nuclear effects. A procedure of adjusted unbiased prediction (AUP) is proposed for predicting random genetic effects in the bio-model. The jackknife procedure is suggested for estimating the sampling variances of estimated variance and covariance components and of predicted genetic effects. Worked examples are given for estimation of variance and covariance components and for prediction of genetic merits.
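The jackknife step is generic enough to sketch: delete-one re-estimates of any statistic give its sampling variance. The estimator below (a plain sample variance) is only a placeholder for the MINQUE(1) variance components.

```python
import numpy as np

def jackknife_variance(data, estimator):
    """Delete-one jackknife estimate of the sampling variance of `estimator`."""
    n = len(data)
    theta_i = np.array([estimator(np.delete(data, i)) for i in range(n)])
    return (n - 1) / n * ((theta_i - theta_i.mean()) ** 2).sum()

x = np.random.default_rng(2).normal(size=40)
print(jackknife_variance(x, np.var))   # jackknife variance of the variance estimate
```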
Measuring kinetics of complex single ion channel data using mean-variance histograms.
Patlak, J B
1993-01-01
The measurement of single ion channel kinetics is difficult when those channels exhibit subconductance events. When the kinetics are fast, and when the current magnitudes are small, as is the case for Na+, Ca2+, and some K+ channels, these difficulties can lead to serious errors in the estimation of channel kinetics. I present here a method, based on the construction and analysis of mean-variance histograms, that can overcome these problems. A mean-variance histogram is constructed by calculating the mean current and the current variance within a brief "window" (a set of N consecutive data samples) superimposed on the digitized raw channel data. Systematic movement of this window over the data produces large numbers of mean-variance pairs which can be assembled into a two-dimensional histogram. Defined current levels (open, closed, or sublevel) appear in such plots as low-variance regions. The total number of events in such low-variance regions is estimated by curve fitting and plotted as a function of window width. This function decreases with the same time constants as the original dwell time probability distribution for each of the regions. The method can therefore be used: 1) to present a qualitative summary of the single channel data from which the signal-to-noise ratio, open channel noise, steadiness of the baseline, and number of conductance levels can be quickly determined; 2) to quantify the dwell time distribution in each of the levels exhibited. In this paper I present the analysis of a Na+ channel recording that had a number of complexities. The signal-to-noise ratio was only about 8 for the main open state; open channel noise and fast flickers to other states were present, as were a substantial number of subconductance states. "Standard" half-amplitude threshold analysis of these data produced open and closed time histograms that were well fitted by the sum of two exponentials, but with apparently erroneous time constants, whereas the mean-variance histogram technique provided a more credible analysis of the open, closed, and subconductance times for the patch. I also show that the method produces accurate results on simulated data in a wide variety of conditions, whereas the half-amplitude method, when applied to complex simulated data, shows the same errors as were apparent in the real data. The utility and the limitations of this new method are discussed. PMID:7690261
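The core construction lends itself to a compact sketch: slide a window of N samples along the digitized trace, collect (mean, variance) pairs, and bin them into a 2-D histogram in which conductance levels appear as low-variance clusters. The toy trace below is synthetic.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def mean_variance_histogram(current, N, bins=100):
    windows = sliding_window_view(current, N)    # all windows of width N
    means = windows.mean(axis=1)
    variances = windows.var(axis=1)
    return np.histogram2d(means, variances, bins=bins)

# toy trace: closed (0 pA) and open (-2 pA) dwells plus Gaussian noise
rng = np.random.default_rng(3)
levels = np.repeat(rng.choice([0.0, -2.0], 200), 50)
trace = levels + rng.normal(0.0, 0.3, levels.size)
H, mean_edges, var_edges = mean_variance_histogram(trace, N=10)
```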
Management Accounting in School Food Service.
ERIC Educational Resources Information Center
Bryan, E. Lewis; Friedlob, G. Thomas
1982-01-01
Describes a model for establishing control of school food services through analysis of the aggregate variances of quantity, collection, and price, and of their separate components. The separable component variances are identified, measured, and compared monthly to help supervisors identify exactly where plans and operations vary. (Author/MLF)
Mesoscale Gravity Wave Variances from AMSU-A Radiances
NASA Technical Reports Server (NTRS)
Wu, Dong L.
2004-01-01
A variance analysis technique is developed here to extract gravity wave (GW) induced temperature fluctuations from NOAA AMSU-A (Advanced Microwave Sounding Unit-A) radiance measurements. By carefully removing the instrument/measurement noise, the algorithm can produce reliable GW variances with a minimum detectable value as small as 0.1 K². Preliminary analyses with AMSU-A data show that GW variance maps in the stratosphere have very similar distributions to those found with the UARS MLS (Upper Atmosphere Research Satellite Microwave Limb Sounder). However, the AMSU-A offers better horizontal and temporal resolution for observing regional GW variability, such as activity over sub-Antarctic islands.
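A schematic of the variance-extraction step as described (my simplification, not the paper's algorithm): detrend each along-track scan to isolate small-scale fluctuations, then subtract an independently estimated instrument noise variance to leave the GW variance.

```python
import numpy as np

def gw_variance(scan, noise_var, detrend_deg=2):
    """scan: 1-D along-track brightness temperatures (K);
    noise_var: instrument noise variance (K^2), estimated separately."""
    x = np.arange(scan.size)
    trend = np.polyval(np.polyfit(x, scan, detrend_deg), x)
    residual = scan - trend                       # small-scale fluctuations
    return max(residual.var() - noise_var, 0.0)   # GW variance in K^2, floored at zero
```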
An apparent contradiction: increasing variability to achieve greater precision?
Rosenblatt, Noah J; Hurt, Christopher P; Latash, Mark L; Grabiner, Mark D
2014-02-01
To understand the relationship between variability of foot placement in the frontal plane and stability of gait patterns, we explored how constraining mediolateral foot placement during walking affects the structure of kinematic variance in the lower-limb configuration space during the swing phase of gait. Ten young subjects walked under three conditions: (1) unconstrained (normal walking), (2) constrained (walking overground with visual guides for foot placement to achieve the measured unconstrained step width) and, (3) beam (walking on elevated beams spaced to achieve the measured unconstrained step width). The uncontrolled manifold analysis of the joint configuration variance was used to quantify two variance components, one that did not affect the mediolateral trajectory of the foot in the frontal plane ("good variance") and one that affected this trajectory ("bad variance"). Based on recent studies, we hypothesized that across conditions (1) the index of the synergy stabilizing the mediolateral trajectory of the foot (the normalized difference between the "good variance" and "bad variance") would systematically increase and (2) the changes in the synergy index would be associated with a disproportionate increase in the "good variance." Both hypotheses were confirmed. We conclude that an increase in the "good variance" component of the joint configuration variance may be an effective method of ensuring high stability of gait patterns during conditions requiring increased control of foot placement, particularly if a postural threat is present. Ultimately, designing interventions that encourage a larger amount of "good variance" may be a promising method of improving stability of gait patterns in populations such as older adults and neurological patients.
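For readers unfamiliar with the UCM partition, a bare-bones version is sketched below, assuming a known Jacobian J mapping small joint-configuration changes to changes in the performance variable (here, mediolateral foot position): variance in the null space of J is the "good variance", the remainder is "bad", and the synergy index is their normalized difference.

```python
import numpy as np

def ucm_partition(joint_configs, J):
    """joint_configs: (n_trials, n_joints) at a given phase of swing;
    J: (task_dims, n_joints) Jacobian of the performance variable."""
    dev = joint_configs - joint_configs.mean(axis=0)
    _, s, Vt = np.linalg.svd(J)                      # null space of J from SVD
    rank = int((s > 1e-10).sum())
    null_basis = Vt[rank:].T                         # (n_joints, n_joints - rank)
    proj_ucm = dev @ null_basis                      # components not affecting the task
    n, d_ucm = len(dev), null_basis.shape[1]
    d_ort = J.shape[1] - d_ucm
    v_ucm = (proj_ucm ** 2).sum() / (n * d_ucm)      # "good variance" per DOF
    v_ort = ((dev ** 2).sum() - (proj_ucm ** 2).sum()) / (n * d_ort)  # "bad variance"
    synergy_index = (v_ucm - v_ort) / (v_ucm + v_ort)
    return v_ucm, v_ort, synergy_index
```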
Estimating Variances of Horizontal Wind Fluctuations in Stable Conditions
NASA Astrophysics Data System (ADS)
Luhar, Ashok K.
2010-05-01
Information concerning the average wind speed and the variances of lateral and longitudinal wind velocity fluctuations is required by dispersion models to characterise turbulence in the atmospheric boundary layer. When the winds are weak, the scalar average wind speed and the vector average wind speed need to be clearly distinguished and both lateral and longitudinal wind velocity fluctuations assume equal importance in dispersion calculations. We examine commonly-used methods of estimating these variances from wind-speed and wind-direction statistics measured separately, for example, by a cup anemometer and a wind vane, and evaluate the implied relationship between the scalar and vector wind speeds, using measurements taken under low-wind stable conditions. We highlight several inconsistencies inherent in the existing formulations and show that the widely-used assumption that the lateral velocity variance is equal to the longitudinal velocity variance is not necessarily true. We derive improved relations for the two variances, and although data under stable stratification are considered for comparison, our analysis is applicable more generally.
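The standard definitions under discussion (not the paper's improved relations) are easy to state in code: scalar versus vector mean wind speed, and longitudinal/lateral velocity variances computed in the frame of the vector-mean wind direction.

```python
import numpy as np

def wind_stats(speed, direction_rad):
    """speed, direction_rad: 1-D samples, e.g. from a cup anemometer and wind vane."""
    u = speed * np.cos(direction_rad)
    v = speed * np.sin(direction_rad)
    scalar_mean = speed.mean()                      # cup-anemometer style average
    U, V = u.mean(), v.mean()
    vector_mean = np.hypot(U, V)                    # <= scalar_mean, markedly so in weak winds
    theta = np.arctan2(V, U)                        # mean wind direction
    along = u * np.cos(theta) + v * np.sin(theta)   # longitudinal component
    cross = -u * np.sin(theta) + v * np.cos(theta)  # lateral component
    return scalar_mean, vector_mean, along.var(), cross.var()
```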
NASA Astrophysics Data System (ADS)
White, Terri Renee'
The primary purpose of the study was to examine different variables (i.e. science process skill ability, science attitudes, and parents' levels of expectation for their children in science) which may impinge on science education differently for males and females in grades five, seven, and nine. The research question addressed by the study was: What are the differences between science process skill ability, science attitudes, and parents' levels of expectation in science on the academic success of fifth, seventh, and ninth graders in science, and do effects differ according to gender and grade level? The subjects included fifth, seventh, and ninth grade students (n = 543) and their parents (n = 474) from six rural, public elementary schools and two rural, public middle schools in Southern Mississippi. A two-way (grade x gender) multivariate analysis of variance (MANOVA) was used to determine the differences in science process skill abilities of females and males in grades five, seven, and nine. A separate two-way (grade x gender) multivariate analysis of variance was also used to determine the differences in science attitudes of males and females in grades five, seven, and nine. A separate analysis of variance (PPSEX [parent's gender]) with the effect being parents' gender was used to determine differences in parents' levels of expectation for their children's performance in science. An additional separate analysis of variance (SSEX [student's gender]) with the effect being the gender of the student was also used to determine differences in parents' levels of expectation for their children's performance in science. Results of the analyses indicated significant main effects for grade level (p < .001) and gender (p < .001) on the TIPS II. There was no significant grade by gender interaction on the TIPS II. Results for the TOSRA also indicated a significant main effect for grade (p < .001) and a significant grade by sex interaction (p < .001). On variable ATT 5 (enjoyment of science lessons), males' attitudes toward science decreased across the grade levels, whereas females' attitudes decreased from grade five to seven but showed a significant increase from grade seven to nine. The analysis of variance with the parent's gender as the main effect showed no significant difference. The analysis of variance with the student's gender as the main effect also showed no significant difference.
Repeat sample intraocular pressure variance in induced and naturally ocular hypertensive monkeys.
Dawson, William W; Dawson, Judyth C; Hope, George M; Brooks, Dennis E; Percicot, Christine L
2005-12-01
To compare the repeat-sample mean variance of laser-induced ocular hypertension (OH) in rhesus monkeys with that of natural OH in age-range matched monkeys of similar and dissimilar pedigrees. Multiple monocular, retrospective, intraocular pressure (IOP) measures were recorded repeatedly during a short sampling interval (SSI, 1-5 months) and a long sampling interval (LSI, 6-36 months). There were 5-13 eyes in each SSI and LSI subgroup. Each interval contained subgroups of Florida monkeys with natural hypertension (NHT), Florida monkeys with induced hypertension (IHT1), unrelated (Strasbourg, France) induced hypertensives (IHT2), and Florida age-range matched controls (C). Repeat-sample individual variance means and related IOPs were analyzed by a parametric analysis of variance (ANOV), and the results were compared to a non-parametric Kruskal-Wallis ANOV. As designed, all group intraocular pressure distributions were significantly different (P < or = 0.009) except for the two (Florida/Strasbourg) induced OH groups. A parametric 2 x 4 design ANOV for mean variance showed large significant effects due to treatment group and sampling interval. Similar results were produced by the nonparametric ANOV. The induced OH sample variance mean (LSI) was 43x the natural OH sample variance mean. The same relationship for the SSI was 12x. Laser-induced ocular hypertension in rhesus monkeys produces large IOP repeat-sample variance mean results compared to controls and natural OH.
Simulator Evaluation of Lineup Visual Landing Aids for Night Carrier Landing.
1987-03-10
recognized that the system is less than optimum (2,3). Because the information from the meatball is of zero order (displacement only), there are...gives the analysis-of-variance summaries of glideslope performance across the flight segments for TOT glideslope ± 0.3 degrees (± 1.0 meatball), RMS...accepted as reliable. In addition, analysis-of-variance of percent TOT glideslope ± 0.45 degrees (± 1.5 meatball) did not reveal any statistical
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-19
..., Regulations, and Variances, 1100 Wilson Boulevard, Room 2350, Arlington, VA 22209-3939. (4) Hand Delivery or Courier: MSHA, Office of Standards, Regulations, and Variances, 1100 Wilson Boulevard, Room 2350... CONTACT: Mario Distasio, Chief of the Economic Analysis Division, Office of Standards, Regulations, and...
Pereira, Ana Santos; Dâmaso-Rodrigues, Maria Luísa; Amorim, Ana; Daam, Michiel A; Cerejeira, Maria José
2018-06-16
Studies addressing the predicted effects of pesticides in combination with abiotic and biotic factors on aquatic biota in ditches associated with typical Mediterranean agroecosystems are scarce. The current study aimed to evaluate the predicted effects of pesticides along with environmental factors and biota interactions on macroinvertebrate, zooplankton and phytoplankton community compositions in ditches adjacent to Portuguese maize and tomato crop areas. Data were analysed with the variance partitioning procedure based on redundancy analysis (RDA). The total variance in biological community composition was divided into the variance explained by the multi-substance potentially affected fraction [(msPAF) arthropods and primary producers], environmental factors (water chemistry parameters), biotic interactions, shared variance, and unexplained variance. The total explained variance reached 39.4%, and the largest proportion of this explained variance was attributed to msPAF (23.7%). When each group (phytoplankton, zooplankton and macroinvertebrates) was analysed separately, biota interactions and environmental factors explained the largest proportion of variance. Results of this study indicate that besides the presence of pesticide mixtures, environmental factors and biotic interactions also considerably influence field freshwater communities. Subsequently, to increase our understanding of the risk of pesticide mixtures to ecosystem communities in edge-of-field water bodies, variations in environmental and biological factors should also be considered.
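A linear-regression stand-in for the RDA-based partition (variable names are illustrative, not the study's data): the variance in a community response y is split into the fraction uniquely explained by toxic pressure (msPAF), the fraction uniquely explained by environment, their shared fraction, and the unexplained remainder.

```python
import numpy as np

def r2(y, X):
    """Coefficient of determination of y regressed on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

def partition(y, X_tox, X_env):
    """X_tox: (n, p) msPAF predictors; X_env: (n, q) environmental predictors."""
    r_tox, r_env = r2(y, X_tox), r2(y, X_env)
    r_all = r2(y, np.column_stack([X_tox, X_env]))
    unique_tox = r_all - r_env
    unique_env = r_all - r_tox
    shared = r_tox + r_env - r_all
    return unique_tox, unique_env, shared, 1 - r_all   # fractions of total variance
```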
Genetic and environmental influences on blood pressure variability: a study in twins.
Xu, Xiaojing; Ding, Xiuhua; Zhang, Xinyan; Su, Shaoyong; Treiber, Frank A; Vlietinck, Robert; Fagard, Robert; Derom, Catherine; Gielen, Marij; Loos, Ruth J F; Snieder, Harold; Wang, Xiaoling
2013-04-01
Blood pressure variability (BPV) and its reduction in response to antihypertensive treatment are predictors of clinical outcomes; however, little is known about its heritability. In this study, we examined the relative influence of genetic and environmental sources of variance of BPV and the extent to which it may depend on race or sex in young twins. Twins were enrolled from two studies. One study included 703 white twins (308 pairs and 87 singletons) aged 18-34 years, whereas the other included 242 white twins (108 pairs and 26 singletons) and 188 black twins (79 pairs and 30 singletons) aged 12-30 years. BPV was calculated from 24-h ambulatory blood pressure recordings. Twin modeling showed similar results in the separate analyses of both twin studies and in the meta-analysis. Familial aggregation was identified for SBP variability (SBPV) and DBP variability (DBPV), with genetic factors and common environmental factors together accounting for 18-40% and 23-31% of the total variance of SBPV and DBPV, respectively. Unique environmental factors were the largest contributor, explaining up to 82% and 77% of the total variance of SBPV and DBPV, respectively. No sex or race difference in BPV variance components was observed. The results remained the same after adjustment for 24-h blood pressure levels. The variance in BPV is predominantly determined by unique environment in youth and young adults, although familial aggregation due to additive genetic and/or common environment influences was also identified, explaining about 25% of the variance in BPV.
Using Robust Variance Estimation to Combine Multiple Regression Estimates with Meta-Analysis
ERIC Educational Resources Information Center
Williams, Ryan
2013-01-01
The purpose of this study was to explore the use of robust variance estimation for combining commonly specified multiple regression models and for combining sample-dependent focal slope estimates from diversely specified models. The proposed estimator obviates traditionally required information about the covariance structure of the dependent…
A Critical Analysis of IQ Studies of Adopted Children
ERIC Educational Resources Information Center
Richardson, Ken; Norgate, Sarah H.
2006-01-01
The pattern of parent-child correlations in adoption studies has long been interpreted to suggest substantial additive genetic variance underlying variance in IQ. The studies have frequently been criticized on methodological grounds, but those criticisms have not reflected recent perspectives in genetics and developmental theory. Here we apply…
Variance in the chemical composition of dry beans determined from UV spectral fingerprints
USDA-ARS?s Scientific Manuscript database
Nine varieties of dry beans representing 5 market classes were grown in 3 states (Maryland, Michigan, and Nebraska) and sub-samples were collected for each variety (row composites from each plot). Aqueous methanol extracts were analyzed in triplicate by UV spectrophotometry. Analysis of variance-p...
Noise induced hearing loss of forest workers in Turkey.
Tunay, M; Melemez, K
2008-09-01
In this study, a total of 114 forest workers in 3 different groups, defined by age and type of work, underwent audiometric analysis. Variance analysis was applied to the resulting data to determine whether the hearing loss levels of the workers differed significantly. Correlation and regression analyses were applied to determine the relations between hearing loss and age and time of work. The variance analysis revealed statistically significant differences at the 500, 2000 and 4000 Hz frequencies, with the most pronounced difference observed among chainsaw operators at 4000 Hz. The correlation analysis showed significant relations between time of work and hearing loss at the 0.01 confidence level and between age and hearing loss at the 0.05 confidence level. Forest workers using chainsaws should be informed of the risk, protective equipment should be worn, less noisy chainsaws should be used where possible, and workers should undergo audiometric tests when they start work and once a year thereafter.
The analysis of morphometric data on Rocky Mountain wolves and arctic wolves using statistical methods
NASA Astrophysics Data System (ADS)
Ammar Shafi, Muhammad; Saifullah Rusiman, Mohd; Hamzah, Nor Shamsidah Amir; Nor, Maria Elena; Ahmad, Noor’ani; Azia Hazida Mohamad Azmi, Nur; Latip, Muhammad Faez Ab; Hilmi Azman, Ahmad
2018-04-01
Morphometrics is the quantitative analysis of the shape and size of specimens. Morphometric quantitative analyses are commonly used to analyse fossil records and the shape and size of specimens, among other applications. The aim of the study was to find the differences between Rocky Mountain wolves and arctic wolves based on gender. The sample utilised secondary data which included seven independent variables and two dependent variables. Statistical modelling, namely the analysis of variance (ANOVA) and multivariate analysis of variance (MANOVA), was used in the analysis. The results showed differences between arctic wolves and Rocky Mountain wolves based on the independent factors and gender.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chan, Poh Kam; Kosaka, Wataru; Oikawa, Shun-ichi
We have solved the Heisenberg equation of motion for the time evolution of the position and momentum operators for a non-relativistic spinless charged particle in the presence of a weakly non-uniform electric and magnetic field. It is shown that the drift velocity operator obtained in this study agrees with the classical counterpart, and that, using the time-dependent operators, the variances in position and momentum grow with time. The expansion rates of the variances in position and momentum depend on the magnetic gradient scale length but are independent of the electric gradient scale length. In the presence of a weakly non-uniform electric and magnetic field, the theoretical expansion rates of the variances are in good agreement with the numerical analysis. It is analytically shown that the variance in position reaches the square of the interparticle separation within a characteristic time much shorter than the proton collision time of a fusion plasma. After this time, the wavefunctions of neighboring particles would overlap, and as a result, the conventional classical analysis may lose its validity.
On the impact of relatedness on SNP association analysis.
Gross, Arnd; Tönjes, Anke; Scholz, Markus
2017-12-06
When testing for SNP (single nucleotide polymorphism) associations in related individuals, observations are not independent. Simple linear regression assuming independent normally distributed residuals results in an increased type I error, and the power of the test is also affected in a more complicated manner. Inflation of type I error is often successfully corrected by genomic control. However, this reduces the power of the test when relatedness is of concern. In the present paper, we derive explicit formulae to investigate how heritability and strength of relatedness contribute to variance inflation of the effect estimate of the linear model. Further, we study the consequences of variance inflation on hypothesis testing and compare the results with those of genomic control correction. We apply the developed theory to the publicly available HapMap trio data (N=129), the Sorbs (a self-contained population with N=977 characterised by a cryptic relatedness structure) and synthetic family studies with different sample sizes (ranging from N=129 to N=999) and different degrees of relatedness. We derive explicit and easy-to-apply approximation formulae to estimate the impact of relatedness on the variance of the effect estimate of the linear regression model. Variance inflation increases with increasing heritability. Relatedness structure also impacts the degree of variance inflation, as shown for the example family structures. Variance inflation is smallest for the HapMap trios, followed by a synthetic family study corresponding to the trio data but with a larger sample size than HapMap. The next strongest inflation is observed for the Sorbs, and finally for a synthetic family study with a more extreme relatedness structure but a sample size similar to the Sorbs. Type I error increases rapidly with increasing inflation. However, for smaller significance levels, power increases with increasing inflation, while the opposite holds for larger significance levels. When genomic control is applied, type I error is preserved while power decreases rapidly with increasing variance inflation. Stronger relatedness as well as higher heritability result in increased variance of the effect estimate of a simple linear regression analysis. While type I error rates are generally inflated, the behaviour of power is more complex, since power can be increased or reduced depending on relatedness and the heritability of the phenotype. Genomic control cannot be recommended to deal with inflation due to relatedness. Although it preserves type I error, the loss in power can be considerable. We provide a simple formula for estimating variance inflation given the relatedness structure and the heritability of a trait of interest. As a rule of thumb, variance inflation below 1.05 does not require correction and simple linear regression analysis is still appropriate.
Global Distributions of Temperature Variances at Different Stratospheric Altitudes from GPS/MET Data
NASA Astrophysics Data System (ADS)
Gavrilov, N. M.; Karpova, N. V.; Jacobi, Ch.
The GPS/MET measurements at altitudes of 5-35 km are used to obtain global distributions of small-scale temperature variances at different stratospheric altitudes. Individual temperature profiles are smoothed using second-order polynomial approximations in 5-7 km thick layers centered at 10, 20 and 30 km. Temperature deviations from the averaged values and their variances obtained for each profile are averaged for each month of the year during the GPS/MET experiment. Global distributions of temperature variances have an inhomogeneous structure. The locations and latitude distributions of the maxima and minima of the variances depend on altitude and season. One of the reasons for the small-scale temperature perturbations in the stratosphere could be internal gravity waves (IGWs). Some assumptions are made about peculiarities of IGW generation and propagation in the tropo-stratosphere based on the results of the GPS/MET data analysis.
Diallel analysis for sex-linked and maternal effects.
Zhu, J; Weir, B S
1996-01-01
Genetic models including sex-linked and maternal effects as well as autosomal gene effects are described. Monte Carlo simulations were conducted to compare efficiencies of estimation by minimum norm quadratic unbiased estimation (MINQUE) and restricted maximum likelihood (REML) methods. MINQUE(1), which has 1 for all prior values, has a similar efficiency to MINQUE(θ), which requires prior estimates of parameter values. MINQUE(1) has the advantage over REML of unbiased estimation and convenient computation. An adjusted unbiased prediction (AUP) method is developed for predicting random genetic effects. AUP is desirable for its easy computation and unbiasedness of both mean and variance of predictors. The jackknife procedure is appropriate for estimating the sampling variances of estimated variances (or covariances) and of predicted genetic effects. A t-test based on jackknife variances is applicable for detecting significance of variation. Worked examples from mice and silkworm data are given in order to demonstrate variance and covariance estimation and genetic effect prediction.
Comparison of variance estimators for meta-analysis of instrumental variable estimates
Schmidt, AF; Hingorani, AD; Jefferis, BJ; White, J; Groenwold, RHH; Dudbridge, F
2016-01-01
Background: Mendelian randomization studies perform instrumental variable (IV) analysis using genetic IVs. Results of individual Mendelian randomization studies can be pooled through meta-analysis. We explored how different variance estimators influence the meta-analysed IV estimate. Methods: Two versions of the delta method (IV before or after pooling), four bootstrap estimators, a jack-knife estimator and a heteroscedasticity-consistent (HC) variance estimator were compared using simulation. Two types of meta-analyses were compared, a two-stage meta-analysis pooling results, and a one-stage meta-analysis pooling datasets. Results: Using a two-stage meta-analysis, coverage of the point estimate using bootstrapped estimators deviated from nominal levels at weak instrument settings and/or outcome probabilities ≤ 0.10. The jack-knife estimator was the least biased resampling method, the HC estimator often failed at outcome probabilities ≤ 0.50 and overall the delta method estimators were the least biased. In the presence of between-study heterogeneity, the delta method before meta-analysis performed best. Using a one-stage meta-analysis all methods performed equally well and better than two-stage meta-analysis of greater or equal size. Conclusions: In the presence of between-study heterogeneity, two-stage meta-analyses should preferentially use the delta method before meta-analysis. Weak instrument bias can be reduced by performing a one-stage meta-analysis. PMID:27591262
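The delta-method option compared above has a standard closed form for a Wald-type ratio estimate beta = Gamma/gamma (outcome-on-instrument over exposure-on-instrument); a sketch, together with the usual inverse-variance-weighted two-stage pooling, follows. The covariance term is commonly zero in two-sample settings.

```python
import numpy as np

def wald_ratio_delta(Gamma, se_Gamma, gamma, se_gamma, cov=0.0):
    """First-order delta-method standard error for beta = Gamma / gamma."""
    beta = Gamma / gamma
    var = (se_Gamma**2 / gamma**2
           + Gamma**2 * se_gamma**2 / gamma**4
           - 2 * Gamma * cov / gamma**3)
    return beta, np.sqrt(var)

def ivw_meta(betas, ses):
    """Fixed-effect inverse-variance-weighted pooling (the two-stage step)."""
    betas, w = np.asarray(betas), 1 / np.asarray(ses) ** 2
    return (w * betas).sum() / w.sum(), np.sqrt(1 / w.sum())
```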
Johnson, Henry C.; Rosevear, G. Craig
1977-01-01
This study explored the relationship between traditional admissions criteria, performance in the first semester of medical school, and performance on the National Board of Medical Examiners' (NBME) Examination, Part 1 for minority medical students, non-minority medical students, and the two groups combined. Correlational analysis and step-wise multiple regression procedures were used as the analysis techniques. A different pattern of admissions variables related to National Board Part 1 performance for the two groups. The General Information section of the Medical College Admission Test (MCAT) contributed the most variance for the minority student group. MCAT-Science contributed the most variance for the non-minority student group. MCATs accounted for a substantial portion of the variance on the National Board examination. PMID:904005
Jager, Justin; Bornstein, Marc H; Putnick, Diane L; Hendricks, Charlene
2012-06-01
Using the McMaster Family Assessment Device (Epstein, Baldwin, & Bishop, 1983) and incorporating the perspectives of adolescent, mother, and father, this study examined each family member's "unique perspective" or nonshared, idiosyncratic view of the family. We used a modified multitrait-multimethod confirmatory factor analysis that (a) isolated for each family member's 6 reports of family dysfunction the nonshared variance (a combination of variance idiosyncratic to the individual and measurement error) from variance shared by 1 or more family members and (b) extracted common variance across each family member's set of nonshared variances. The sample included 128 families from a U.S. East Coast metropolitan area. Each family member's unique perspective generalized across his or her different reports of family dysfunction and accounted for a sizable proportion of his or her own variance in reports of family dysfunction. In addition, after holding level of dysfunction constant across families and controlling for a family's shared variance (agreement regarding family dysfunction), each family member's unique perspective was associated with his or her own adjustment. Future applications and competing alternatives for what these "unique perspectives" reflect about the family are discussed. PsycINFO Database Record (c) 2012 APA, all rights reserved.
Role of Adenosine Receptor A2A in Traumatic Optic Neuropathies (Addendum)
2016-03-01
inflammation was evaluated using Western blot, Real-Time PCR and immuno-staining analyses. Role of A2AAR signaling in the anti-inflammation effect of ABT... were evaluated by analysis of variance (one-way ANOVA), and the significance of differences between groups was assessed by the...
The Pricing of European Options Under the Constant Elasticity of Variance with Stochastic Volatility
NASA Astrophysics Data System (ADS)
Bock, Bounghun; Choi, Sun-Yong; Kim, Jeong-Hoon
This paper considers a hybrid risky asset price model given by a constant elasticity of variance multiplied by a stochastic volatility factor. A multiscale analysis leads to an asymptotic pricing formula for both a European vanilla option and a barrier option near the zero elasticity of variance. The accuracy of the approximation is provided in a rigorous manner. A numerical experiment for implied volatilities shows that the hybrid model improves on some of the well-known models in view of fitting the data for different maturities.
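The paper derives an asymptotic formula rather than simulating, but a crude Euler-Maruyama Monte Carlo for a vanilla call under a hybrid model of this type is easy to sketch; all parameter values below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
S0, K, r, T = 100.0, 100.0, 0.02, 1.0
gamma = 0.9                  # elasticity parameter (gamma = 1 recovers Black-Scholes)
kappa, m, nu = 1.5, 0.2, 0.3 # mean-reverting stochastic volatility factor (assumed)
n_paths, n_steps = 100_000, 200
dt = T / n_steps

S = np.full(n_paths, S0)
sig = np.full(n_paths, m)
for _ in range(n_steps):
    z1 = rng.standard_normal(n_paths)
    z2 = rng.standard_normal(n_paths)             # independent drivers assumed
    S = np.maximum(S + r*S*dt + sig * S**gamma * np.sqrt(dt)*z1, 1e-8)
    sig = np.abs(sig + kappa*(m - sig)*dt + nu*np.sqrt(dt)*z2)

price = np.exp(-r*T) * np.maximum(S - K, 0.0).mean()
print(price)                 # discounted expected payoff of the vanilla call
```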
Applying Rasch model analysis in the development of the cantonese tone identification test (CANTIT).
Lee, Kathy Y S; Lam, Joffee H S; Chan, Kit T Y; van Hasselt, Charles Andrew; Tong, Michael C F
2017-01-01
Applying Rasch analysis to evaluate the internal structure of a lexical tone perception test known as the Cantonese Tone Identification Test (CANTIT). A 75-item pool (CANTIT-75) with pictures and sound tracks was developed. Respondents were required to make a four-alternative forced choice on each item. A short version of 30 items (CANTIT-30) was developed based on fit statistics, difficulty estimates, and content evaluation. Internal structure was evaluated by fit statistics and Rasch Factor Analysis (RFA). 200 children with normal hearing and 141 children with hearing impairment were recruited. For CANTIT-75, all infit and 97% of outfit values were < 2.0. RFA revealed that 40.1% of the total variance was explained by the Rasch measure. The first residual component explained 2.5% of the total variance, with an eigenvalue of 3.1. For CANTIT-30, all infit and outfit values were < 2.0. The Rasch measure explained 38.8% of the total variance; the first residual component explained 3.9% of the total variance, with an eigenvalue of 1.9. The Rasch model provides excellent guidance for the development of short forms. Both CANTIT-75 and CANTIT-30 possess satisfactory internal structure as construct validity evidence in measuring the lexical tone identification ability of Cantonese speakers.
Bates, S; Jonaitis, D; Nail, S
2013-10-01
Total X-ray Powder Diffraction Analysis (TXRPD) using transmission geometry was able to observe significant variance in measured powder patterns for sucrose lyophilizates with differing residual water contents. Integrated diffraction intensity corresponding to the observed variances was found to be linearly correlated to residual water content as measured by an independent technique. The observed variance was concentrated in two distinct regions of the lyophilizate powder pattern, corresponding to the characteristic sucrose matrix double halo and the high angle diffuse region normally associated with free-water. Full pattern fitting of the lyophilizate powder patterns suggested that the high angle variance was better described by the characteristic diffraction profile of a concentrated sucrose/water system rather than by the free-water diffraction profile. This suggests that the residual water in the sucrose lyophilizates is intimately mixed at the molecular level with sucrose molecules forming a liquid/solid solution. The bound nature of the residual water and its impact on the sucrose matrix gives an enhanced diffraction response between 3.0 and 3.5 beyond that expected for free-water. The enhanced diffraction response allows semi-quantitative analysis of residual water contents within the studied sucrose lyophilizates to levels below 1% by weight. Copyright © 2013 Elsevier B.V. All rights reserved.
Kashir, Junaid; Jones, Celine; Mounce, Ginny; Ramadan, Walaa M; Lemmon, Bernadette; Heindryckx, Bjorn; de Sutter, Petra; Parrington, John; Turner, Karen; Child, Tim; McVeigh, Enda; Coward, Kevin
2013-01-01
To examine whether similar levels of phospholipase C zeta (PLC-ζ) protein are present in sperm from men whose ejaculates resulted in normal oocyte activation, and to examine whether a predominant pattern of PLC-ζ localization is linked to normal oocyte activation ability. Laboratory study. University laboratory. Control subjects (men with proven oocyte activation capacity; n = 16) and men whose sperm resulted in recurrent intracytoplasmic sperm injection failure (oocyte activation deficient [OAD]; n = 5). Quantitative immunofluorescent analysis of PLC-ζ protein in human sperm. Total levels of PLC-ζ fluorescence, proportions of sperm exhibiting PLC-ζ immunoreactivity, and proportions of PLC-ζ localization patterns in sperm from control and OAD men. Sperm from control subjects presented a significantly higher proportion of sperm exhibiting PLC-ζ immunofluorescence compared with infertile men diagnosed with OAD (82.6% and 27.4%, respectively). Total levels of PLC-ζ in sperm from individual control and OAD patients exhibited significant variance, with sperm from 10 out of 16 (62.5%) exhibiting levels similar to OAD samples. Predominant PLC-ζ localization patterns varied between control and OAD samples with no predictable or consistent pattern. The results indicate that sperm from control men exhibited significant variance in total levels of PLC-ζ protein, as well as significant variance in the predominant localization pattern. Such variance may hinder the diagnostic application of quantitative PLC-ζ immunofluorescent analysis. Copyright © 2013 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
Bureau, Alexandre; Duchesne, Thierry
2015-12-01
Splitting extended families into their component nuclear families to apply a genetic association method designed for nuclear families is a widespread practice in familial genetic studies. Dependence among genotypes and phenotypes of nuclear families from the same extended family arises because of genetic linkage of the tested marker with a risk variant or because of familial specificity of genetic effects due to gene-environment interaction. This raises concerns about the validity of inference conducted under the assumption of independence of the nuclear families. We indeed prove theoretically that, in a conditional logistic regression analysis applicable to disease cases and their genotyped parents, the naive model-based estimator of the variance of the coefficient estimates underestimates the true variance. However, simulations with realistic effect sizes of risk variants and variation of this effect from family to family reveal that the underestimation is negligible. The simulations also show the greater efficiency of the model-based variance estimator compared to a robust empirical estimator. Our recommendation is therefore, to use the model-based estimator of variance for inference on effects of genetic variants.
The Conduct System and Its Influence on Student Learning
ERIC Educational Resources Information Center
Stimpson, Matthew T.; Janosik, Steven M.
2015-01-01
In this study, 7 items were used to define a composite variable that measures the perceived effectiveness of student conduct systems. Multivariate Analysis of Variance (MANOVA) was used to test the relationship between perceived level of system effectiveness and self-reported student learning. In the analyses, 49% of the variance in reported…
Combining Study Outcome Measures Using Dominance Adjusted Weights
ERIC Educational Resources Information Center
Makambi, Kepher H.; Lu, Wenxin
2013-01-01
Weighting of studies in meta-analysis is usually implemented by using the estimated inverse variances of treatment effect estimates. However, there is a possibility of one study dominating other studies in the estimation process by taking on a weight that is above some upper limit. We implement an estimator of the heterogeneity variance that takes…
Preszler, Jonathan; Burns, G. Leonard; Litson, Kaylee; Geiser, Christian; Servera, Mateu
2016-01-01
The objective was to determine and compare the trait and state components of oppositional defiant disorder (ODD) symptom reports across multiple informants. Mothers, fathers, primary teachers, and secondary teachers rated the occurrence of the ODD symptoms in 810 Spanish children (55% boys) on two occasions (end of first and second grades). Single-source latent state-trait (LST) analyses revealed that ODD symptom ratings from all four sources showed more trait (M = 63%) than state residual (M = 37%) variance. A multiple-source LST analysis revealed substantial convergent validity of mothers' and fathers' trait variance components (M = 68%) and modest convergent validity of state residual variance components (M = 35%). In contrast, primary and secondary teachers showed low convergent validity relative to mothers for trait variance (Ms = 31%, 32%, respectively) and essentially zero convergent validity relative to mothers for state residual variance (Ms = 1%, 3%, respectively). Although ODD symptom ratings reflected slightly more trait- than state-like constructs within each of the four sources separately across occasions, strong convergent validity for the trait variance occurred only within settings (i.e., mothers with fathers; primary with secondary teachers), with the convergent validity of the trait and state residual variance components being low to non-existent across settings. These results suggest that ODD symptom reports are trait-like across time for individual sources, with this trait variance, however, having convergent validity only within settings. Implications for assessment of ODD are discussed. PMID:27148784
Habeeb, Christine M; Eklund, Robert C; Coffee, Pete
2017-06-01
This study explored person-related sources of variance in athletes' efficacy beliefs and performances when performing in pairs with distinguishable roles differing in partner dependence. College cheerleaders (n = 102) performed their role in repeated performance trials of two low- and two high-difficulty paired-stunt tasks with three different partners. Data were obtained on self-, other-, and collective efficacy beliefs and subjective performances, and objective performance assessments were obtained from digital recordings. Using the social relations model framework, the total variance in each belief/assessment was partitioned, for each role, into numerical components of person-related variance relative to the self, the other, and the collective. Variance component x performance role x task-difficulty repeated-measures analyses of variance revealed that the largest person-related variance component differed by athlete role and increased in size in high-difficulty tasks. Results suggest that the extent to which an athlete's performance depends on a partner relates to the extent to which the partner is a source of self-, other-, and collective efficacy beliefs.
Empirical data and the variance-covariance matrix for the 1969 Smithsonian Standard Earth (2)
NASA Technical Reports Server (NTRS)
Gaposchkin, E. M.
1972-01-01
The empirical data used in the 1969 Smithsonian Standard Earth (2) are presented. The variance-covariance matrix, or the normal equations, used for correlation analysis, are considered. The format and contents of the matrix, available on magnetic tape, are described and a sample printout is given.
Sharif Nia, Hamid; Pahlevan Sharif, Saeed; Koocher, Gerald P; Yaghoobzadeh, Ameneh; Haghdoost, Ali Akbar; Mar Win, Ma Thin; Soleimani, Mohammad Ali
2017-01-01
This study aimed to evaluate the validity and reliability of the Persian version of the Death Anxiety Scale-Extended (DAS-E). A total of 507 patients with end-stage renal disease completed the DAS-E. The factor structure of the scale was evaluated using exploratory factor analysis with an oblique rotation and confirmatory factor analysis. The content and construct validity of the DAS-E were assessed. Average variance extracted, maximum shared squared variance, and average shared squared variance were estimated to assess discriminant and convergent validity. Reliability was assessed using Cronbach's alpha coefficient (α = .839 and .831), composite reliability (CR = .845 and .832), Theta (θ = .893 and .867), and McDonald's Omega (Ω = .796 and .743). The analysis indicated a two-factor solution. Reliability and discriminant validity of the factors were established. Findings revealed that the present scale is a valid and reliable instrument that can be used in the assessment of death anxiety in Iranian patients with end-stage renal disease.
Lachowiec, Jennifer; Shen, Xia; Queitsch, Christine; Carlborg, Örjan
2015-01-01
Efforts to identify loci underlying complex traits generally assume that most genetic variance is additive. Here, we examined the genetics of Arabidopsis thaliana root length and found that the genomic narrow-sense heritability for this trait in the examined population was statistically zero. The low amount of additive genetic variance that could be captured by the genome-wide genotypes likely explains why no associations to root length could be found using standard additive-model-based genome-wide association (GWA) approaches. However, as the broad-sense heritability for root length was significantly larger, and primarily due to epistasis, we also performed an epistatic GWA analysis to map loci contributing to the epistatic genetic variance. Four interacting pairs of loci were revealed, involving seven chromosomal loci that passed a standard multiple-testing corrected significance threshold. The genotype-phenotype maps for these pairs revealed epistasis that cancelled out the additive genetic variance, explaining why these loci were not detected in the additive GWA analysis. Small population sizes, such as in our experiment, increase the risk of identifying false epistatic interactions due to testing for associations with very large numbers of multi-marker genotypes in few phenotyped individuals. Therefore, we estimated the false-positive risk using a new statistical approach that suggested half of the associated pairs to be true positive associations. Our experimental evaluation of candidate genes within the seven associated loci suggests that this estimate is conservative; we identified functional candidate genes that affected root development in four loci that were part of three of the pairs. The statistical epistatic analyses were thus indispensable for confirming known, and identifying new, candidate genes for root length in this population of wild-collected A. thaliana accessions. We also illustrate how epistatic cancellation of the additive genetic variance explains the insignificant narrow-sense and significant broad-sense heritability by using a combination of careful statistical epistatic analyses and functional genetic experiments.
RepExplore: addressing technical replicate variance in proteomics and metabolomics data analysis.
Glaab, Enrico; Schneider, Reinhard
2015-07-01
High-throughput omics datasets often contain technical replicates included to account for technical sources of noise in the measurement process. Although summarizing these replicate measurements by using robust averages may help to reduce the influence of noise on downstream data analysis, the information on the variance across the replicate measurements is lost in the averaging process and therefore typically disregarded in subsequent statistical analyses. We introduce RepExplore, a web service dedicated to exploiting the information captured in the technical replicate variance to provide more reliable and informative differential expression and abundance statistics for omics datasets. The software builds on previously published statistical methods, which have been applied successfully to biomedical omics data but are difficult to use without prior experience in programming or scripting. RepExplore facilitates the analysis by providing fully automated data processing and interactive ranking tables, whisker plots, heat maps and principal component analysis visualizations to interpret omics data and derived statistics. Freely available at http://www.repexplore.tk. Contact: enrico.glaab@uni.lu. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
Thermospheric mass density model error variance as a function of time scale
NASA Astrophysics Data System (ADS)
Emmert, J. T.; Sutton, E. K.
2017-12-01
In the increasingly crowded low-Earth orbit environment, accurate estimation of orbit prediction uncertainties is essential for collision avoidance. Poor characterization of such uncertainty can result in unnecessary and costly avoidance maneuvers (false positives) or disregard of a collision risk (false negatives). Atmospheric drag is a major source of orbit prediction uncertainty, and is particularly challenging to account for because it exerts a cumulative influence on orbital trajectories and is therefore not amenable to representation by a single uncertainty parameter. To address this challenge, we examine the variance of measured accelerometer-derived and orbit-derived mass densities with respect to predictions by thermospheric empirical models, using the data-minus-model variance as a proxy for model uncertainty. Our analysis focuses mainly on the power spectrum of the residuals, and we construct an empirical model of the variance as a function of time scale (from 1 hour to 10 years), altitude, and solar activity. We find that the power spectral density approximately follows a power-law process but with an enhancement near the 27-day solar rotation period. The residual variance increases monotonically with altitude between 250 and 550 km. There are two components to the variance dependence on solar activity: one component is 180 degrees out of phase (largest variance at solar minimum), and the other component lags 2 years behind solar maximum (largest variance in the descending phase of the solar cycle).
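The variance-versus-time-scale construction described above can be illustrated by estimating the power spectral density of data-minus-model residuals and integrating it from high frequencies downward (by Parseval, this gives the variance contributed by fluctuations faster than a given time scale). A minimal sketch, assuming synthetic hourly residuals in place of real accelerometer-derived densities:

```python
import numpy as np
from scipy import signal
from scipy.integrate import trapezoid

rng = np.random.default_rng(1)
n = 24 * 365                      # one year of hourly residual samples
# Toy red-noise residual with a roughly power-law spectrum (illustration only)
white = rng.normal(size=n)
resid = np.cumsum(white) * 0.01 + white * 0.1

fs = 24.0                         # samples per day
f, psd = signal.welch(resid, fs=fs, nperseg=4096)

# Variance from fluctuations faster than time scale tau = 1/f:
# integrate the PSD from frequency fc upward.
var_vs_scale = np.array([trapezoid(psd[f >= fc], f[f >= fc]) for fc in f[1:]])
for fc, v in zip(f[1:3], var_vs_scale[:2]):
    print(f"time scale <= {1 / fc:.1f} d: residual variance {v:.4g}")
```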
Stabilization of cat paw trajectory during locomotion
Klishko, Alexander N.; Farrell, Bradley J.; Beloozerova, Irina N.; Latash, Mark L.
2014-01-01
We investigated which of cat limb kinematic variables during swing of regular walking and accurate stepping along a horizontal ladder are stabilized by coordinated changes of limb segment angles. Three hypotheses were tested: 1) animals stabilize the entire swing trajectory of specific kinematic variables (performance variables); and 2) the level of trajectory stabilization is similar between regular and ladder walking and 3) is higher for forelimbs compared with hindlimbs. We used the framework of the uncontrolled manifold (UCM) hypothesis to quantify the structure of variance of limb kinematics in the limb segment orientation space across steps. Two components of variance were quantified for each potential performance variable, one of which affected it (“bad variance,” variance orthogonal to the UCM, VORT) while the other one did not (“good variance,” variance within the UCM, VUCM). The analysis of five candidate performance variables revealed that cats during both locomotor behaviors stabilize 1) paw vertical position during the entire swing (VUCM > VORT, except in mid-hindpaw swing of ladder walking) and 2) horizontal paw position in initial and terminal swing (except for the entire forepaw swing of regular walking). We also found that the limb length was typically stabilized in midswing, whereas limb orientation was not (VUCM ≤ VORT) for both limbs and behaviors during entire swing. We conclude that stabilization of paw position in early and terminal swing enables accurate and stable locomotion, while stabilization of vertical paw position in midswing helps paw clearance. This study is the first to demonstrate the applicability of the UCM-based analysis to nonhuman movement. PMID:24899676
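The VUCM/VORT split is a projection of across-trial variance onto the null space of a task Jacobian and onto its orthogonal complement, each normalized per dimension. A minimal sketch with an assumed one-dimensional task variable (paw vertical position) and a hypothetical Jacobian, not the cat limb geometry used in the study:

```python
import numpy as np
from scipy.linalg import null_space

# Assumed 1-D task controlled by 3 segment angles (hypothetical Jacobian)
J = np.array([[0.30, 0.45, 0.25]])       # task dims x elemental variables
B_ucm = null_space(J)                    # 3x2 basis: directions leaving the task unchanged
B_ort = J.T / np.linalg.norm(J)          # 3x1 basis orthogonal to the UCM

# Toy across-step deviations of the segment angles from their means
rng = np.random.default_rng(2)
dev = rng.multivariate_normal(np.zeros(3), np.diag([1.0, 0.6, 0.3]), size=50)

v_ucm = (dev @ B_ucm).var(axis=0, ddof=1).sum() / B_ucm.shape[1]
v_ort = (dev @ B_ort).var(axis=0, ddof=1).sum() / B_ort.shape[1]
print(f"V_UCM = {v_ucm:.3f}, V_ORT = {v_ort:.3f} (stabilized if V_UCM > V_ORT)")
```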
McNamee, R L; Eddy, W F
2001-12-01
Analysis of variance (ANOVA) is widely used for the study of experimental data. Here, the reach of this tool is extended to cover the preprocessing of functional magnetic resonance imaging (fMRI) data. This technique, termed visual ANOVA (VANOVA), provides both numerical and pictorial information to aid the user in understanding the effects of various parts of the data analysis. Unlike a formal ANOVA, this method does not depend on the mathematics of orthogonal projections or strictly additive decompositions. An illustrative example is presented and the application of the method to a large number of fMRI experiments is discussed. Copyright 2001 Wiley-Liss, Inc.
Integrating mean and variance heterogeneities to identify differentially expressed genes.
Ouyang, Weiwei; An, Qiang; Zhao, Jinying; Qin, Huaizhen
2016-12-06
In functional genomics studies, tests of mean heterogeneity have been widely employed to identify differentially expressed genes with distinct mean expression levels under different experimental conditions. Variance heterogeneity (i.e., the difference between condition-specific variances) of gene expression levels is typically neglected or treated as a nuisance to be calibrated away. The mean heterogeneity in the expression level of a gene reflects one aspect of its distribution alteration; variance heterogeneity induced by condition change may reflect another. A change in condition may alter both the mean and higher-order characteristics of the distributions of expression levels of susceptible genes. In this report, we put forth the concept of mean-variance differentially expressed (MVDE) genes, whose expression means and variances are sensitive to a change in experimental condition. We mathematically proved the null independence of existing mean heterogeneity tests and variance heterogeneity tests. Based on this independence, we proposed an integrative mean-variance test (IMVT) to combine gene-wise mean heterogeneity and variance heterogeneity induced by condition change. The IMVT outperformed its competitors under comprehensive simulations of normal and Laplace settings. For moderate samples, the IMVT controlled type I error rates well, as did the existing mean heterogeneity tests (the Welch t test (WT) and the moderated Welch t test (MWT)) and the procedure of separate tests on mean and variance heterogeneities (SMVT), but the likelihood ratio test (LRT) severely inflated type I error rates. In the presence of variance heterogeneity, the IMVT appeared noticeably more powerful than all the valid mean heterogeneity tests. Application to the gene profiles of peripheral circulating B cells raised solid evidence of informative variance heterogeneity. After adjusting for background data structure, the IMVT replicated previous discoveries and identified novel experiment-wide significant MVDE genes. Our results indicate a tremendous potential gain from integrating informative variance heterogeneity after adjusting for global confounders and background data structure. The proposed integrative test better summarizes the impacts of condition change on the expression distributions of susceptible genes than do the existing competitors. Therefore, particular attention should be paid to explicitly exploiting the variance heterogeneity induced by condition change in functional genomics analysis.
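The published IMVT statistic is specific to the paper, but the general recipe of combining null-independent mean and variance heterogeneity tests can be sketched with Fisher's combination of a Welch t test and Levene's test. Fisher's method requires independent p-values, which is exactly what the null-independence result above licenses; this is a sketch in the spirit of the idea, not the authors' exact statistic.

```python
import numpy as np
from scipy import stats

def mean_variance_test(x, y):
    """Combine a mean-heterogeneity and a variance-heterogeneity p-value
    via Fisher's method (a sketch, not the published IMVT)."""
    _, p_mean = stats.ttest_ind(x, y, equal_var=False)  # Welch t test
    _, p_var = stats.levene(x, y)                       # variance heterogeneity
    chi2 = -2 * (np.log(p_mean) + np.log(p_var))        # Fisher's combination
    return stats.chi2.sf(chi2, df=4)

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, 40)          # condition A
y = rng.normal(0.3, 1.8, 40)          # condition B: mean and variance shift
print(f"combined p = {mean_variance_test(x, y):.4g}")
```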
NASA Astrophysics Data System (ADS)
Aguirre, E. E.; Karchewski, B.
2017-12-01
DC resistivity surveying is a geophysical method that quantifies the electrical properties of the subsurface of the earth by applying a source current between two electrodes and measuring potential differences between electrodes at known distances from the source. Analytical solutions for a homogeneous half-space and simple subsurface models are well known, as the former is used to define the concept of apparent resistivity. However, in situ properties are heterogeneous meaning that simple analytical models are only an approximation, and ignoring such heterogeneity can lead to misinterpretation of survey results costing time and money. The present study examines the extent to which random variations in electrical properties (i.e. electrical conductivity) affect potential difference readings and therefore apparent resistivities, relative to an assumed homogeneous subsurface model. We simulate the DC resistivity survey using a Finite Difference (FD) approximation of an appropriate simplification of Maxwell's equations implemented in Matlab. Electrical resistivity values at each node in the simulation were defined as random variables with a given mean and variance, and are assumed to follow a log-normal distribution. The Monte Carlo analysis for a given variance of electrical resistivity was performed until the mean and variance in potential difference measured at the surface converged. Finally, we used the simulation results to examine the relationship between variance in resistivity and variation in surface potential difference (or apparent resistivity) relative to a homogeneous half-space model. For relatively low values of standard deviation in the material properties (<10% of mean), we observed a linear correlation between variance of resistivity and variance in apparent resistivity.
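A Monte Carlo loop of the kind described can be sketched as follows. The finite-difference solver is replaced here by a clearly labeled toy stand-in forward function (a harmonic mean of the sampled resistivities), and the log-normal resistivity field is parameterized by its arithmetic mean and variance; a real run would solve the potential equation on the random field.

```python
import numpy as np

def lognormal_params(mean, var):
    """mu, sigma of a log-normal with the given arithmetic mean and variance."""
    sigma2 = np.log(1 + var / mean**2)
    return np.log(mean) - 0.5 * sigma2, np.sqrt(sigma2)

def forward(rho):
    """Toy stand-in for the FD resistivity solver (NOT the real physics):
    a harmonic mean serves as a simple scalar response to the random field."""
    return len(rho) / np.sum(1.0 / rho)

rng = np.random.default_rng(4)
mu, sigma = lognormal_params(mean=100.0, var=(0.1 * 100.0) ** 2)  # 10% std dev

samples = []
for i in range(1, 5001):
    rho = rng.lognormal(mu, sigma, size=500)   # random resistivities at grid nodes
    samples.append(forward(rho))
    if i % 1000 == 0:                          # watch mean and variance converge
        print(f"n={i}: mean={np.mean(samples):.3f}, var={np.var(samples):.5f}")
```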
Cox, Simon R.; MacPherson, Sarah E.; Ferguson, Karen J.; Nissan, Jack; Royle, Natalie A.; MacLullich, Alasdair M.J.; Wardlaw, Joanna M.; Deary, Ian J.
2014-01-01
Both general fluid intelligence (gf) and performance on some ‘frontal tests’ of cognition decline with age. Both types of ability are at least partially dependent on the integrity of the frontal lobes, which also deteriorate with age. Overlap between these two methods of assessing complex cognition in older age remains unclear. Such overlap could be investigated using inter-test correlations alone, as in previous studies, but this would be enhanced by ascertaining whether frontal test performance and gf share neurobiological variance. To this end, we examined relationships between gf and 6 frontal tests (Tower, Self-Ordered Pointing, Simon, Moral Dilemmas, Reversal Learning and Faux Pas tests) in 90 healthy males, aged ~ 73 years. We interpreted their correlational structure using principal component analysis, and in relation to MRI-derived regional frontal lobe volumes (relative to maximal healthy brain size). gf correlated significantly and positively (.24 ≤ r ≤ .53) with the majority of frontal test scores. Some frontal test scores also exhibited shared variance after controlling for gf. Principal component analysis of test scores identified units of gf-common and gf-independent variance. The former was associated with variance in the left dorsolateral (DL) and anterior cingulate (AC) regions, and the latter with variance in the right DL and AC regions. Thus, we identify two biologically-meaningful components of variance in complex cognitive performance in older age and suggest that age-related changes to DL and AC have the greatest cognitive impact. PMID:25278641
Dynamic Repertoire of Intrinsic Brain States Is Reduced in Propofol-Induced Unconsciousness
Liu, Xiping; Pillay, Siveshigan
2015-01-01
The richness of conscious experience is thought to scale with the size of the repertoire of causal brain states, and it may be diminished in anesthesia. We estimated the state repertoire from dynamic analysis of intrinsic functional brain networks in conscious sedated and unconscious anesthetized rats. Functional magnetic resonance images were obtained from 30-min whole-brain resting-state blood oxygen level-dependent (BOLD) signals at propofol infusion rates of 20 and 40 mg/kg/h, intravenously. Dynamic brain networks were defined at the voxel level by sliding window analysis of regional homogeneity (ReHo) or coincident threshold crossings (CTC) of the BOLD signal acquired in nine sagittal slices. The state repertoire was characterized by the temporal variance of the number of voxels with significant ReHo or positive CTC. From low to high propofol dose, the temporal variances of ReHo and CTC were reduced by 78%±20% and 76%±20%, respectively. Both baseline and propofol-induced reduction of CTC temporal variance increased from lateral to medial position. Group analysis showed a 20% reduction in the number of unique states at the higher propofol dose. Analysis of temporal variance in 12 anatomically defined regions of interest predicted that the largest changes occurred in visual cortex, parietal cortex, and caudate-putamen. The results suggest that the repertoire of large-scale brain states derived from the spatiotemporal dynamics of intrinsic networks is substantially reduced at an anesthetic dose associated with loss of consciousness. PMID:24702200
Andridge, Rebecca R.
2011-01-01
In cluster randomized trials (CRTs), identifiable clusters rather than individuals are randomized to study groups. Resulting data often consist of a small number of clusters with correlated observations within a treatment group. Missing data often present a problem in the analysis of such trials, and multiple imputation (MI) has been used to create complete data sets, enabling subsequent analysis with well-established analysis methods for CRTs. We discuss strategies for accounting for clustering when multiply imputing a missing continuous outcome, focusing on estimation of the variance of group means as used in an adjusted t-test or ANOVA. These analysis procedures are congenial to (can be derived from) a mixed effects imputation model; however, this imputation procedure is not yet available in commercial statistical software. An alternative approach that is readily available and has been used in recent studies is to include fixed effects for cluster, but the impact of using this convenient method has not been studied. We show that under this imputation model the MI variance estimator is positively biased and that smaller ICCs lead to larger overestimation of the MI variance. Analytical expressions for the bias of the variance estimator are derived in the case of data missing completely at random (MCAR), and cases in which data are missing at random (MAR) are illustrated through simulation. Finally, various imputation methods are applied to data from the Detroit Middle School Asthma Project, a recent school-based CRT, and differences in inference are compared. PMID:21259309
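The MI variance estimator under discussion is the standard Rubin's-rules combination of within- and between-imputation variance. A minimal sketch with hypothetical estimates from five imputed data sets:

```python
import numpy as np

def rubin_combine(estimates, variances):
    """Combine m multiply-imputed estimates by Rubin's rules."""
    estimates, variances = np.asarray(estimates), np.asarray(variances)
    m = len(estimates)
    qbar = estimates.mean()          # pooled point estimate
    w = variances.mean()             # within-imputation variance
    b = estimates.var(ddof=1)        # between-imputation variance
    t = w + (1 + 1 / m) * b          # total MI variance
    return qbar, t

# Hypothetical group-mean differences from m = 5 imputed data sets
est = [1.92, 2.10, 1.85, 2.03, 1.97]
var = [0.40, 0.38, 0.42, 0.41, 0.39]
qbar, t = rubin_combine(est, var)
print(f"estimate {qbar:.3f}, MI variance {t:.3f}")
```

The bias result in the abstract concerns the total variance t produced under a fixed-effects-for-cluster imputation model; the combination rule itself is generic.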
Influence diagnostics in meta-regression model.
Shi, Lei; Zuo, ShanShan; Yu, Dalei; Zhou, Xiaohua
2017-09-01
This paper studies influence diagnostics in the meta-regression model, including case deletion diagnostics and local influence analysis. We derive the subset deletion formulae for the estimation of the regression coefficient and heterogeneity variance and obtain the corresponding influence measures. The DerSimonian and Laird estimation and maximum likelihood estimation methods in meta-regression are considered, respectively, to derive the results. Internal and external residual and leverage measures are defined. Local influence analyses based on the case-weights, response, covariate, and within-variance perturbation schemes are explored. We introduce a method that simultaneously perturbs responses, covariates, and within-variance to obtain a local influence measure, which has the advantage of allowing comparison of the influence magnitude of influential studies across different perturbations. An example is used to illustrate the proposed methodology. Copyright © 2017 John Wiley & Sons, Ltd.
Yokoyama, Yoshie; Jelenkovic, Aline; Hur, Yoon-Mi; Sund, Reijo; Fagnani, Corrado; Stazi, Maria A; Brescianini, Sonia; Ji, Fuling; Ning, Feng; Pang, Zengchang; Knafo-Noam, Ariel; Mankuta, David; Abramson, Lior; Rebato, Esther; Hopper, John L; Cutler, Tessa L; Saudino, Kimberly J; Nelson, Tracy L; Whitfield, Keith E; Corley, Robin P; Huibregtse, Brooke M; Derom, Catherine A; Vlietinck, Robert F; Loos, Ruth J F; Llewellyn, Clare H; Fisher, Abigail; Bjerregaard-Andersen, Morten; Beck-Nielsen, Henning; Sodemann, Morten; Krueger, Robert F; McGue, Matt; Pahlen, Shandell; Bartels, Meike; van Beijsterveldt, Catharina E M; Willemsen, Gonneke; Harris, Jennifer R; Brandt, Ingunn; Nilsen, Thomas S; Craig, Jeffrey M; Saffery, Richard; Dubois, Lise; Boivin, Michel; Brendgen, Mara; Dionne, Ginette; Vitaro, Frank; Haworth, Claire M A; Plomin, Robert; Bayasgalan, Gombojav; Narandalai, Danshiitsoodol; Rasmussen, Finn; Tynelius, Per; Tarnoki, Adam D; Tarnoki, David L; Ooki, Syuichi; Rose, Richard J; Pietiläinen, Kirsi H; Sørensen, Thorkild I A; Boomsma, Dorret I; Kaprio, Jaakko; Silventoinen, Karri
2018-05-19
The genetic architecture of birth size may differ geographically and over time. We examined differences in the genetic and environmental contributions to birthweight, length and ponderal index (PI) across geographical-cultural regions (Europe, North America and Australia, and East Asia) and across birth cohorts, and how gestational age modifies these effects. Data from 26 twin cohorts in 16 countries including 57 613 monozygotic and dizygotic twin pairs were pooled. Genetic and environmental variations of birth size were estimated using genetic structural equation modelling. The variance of birthweight and length was predominantly explained by shared environmental factors, whereas the variance of PI was explained both by shared and unique environmental factors. Genetic variance contributing to birth size was small. Adjusting for gestational age decreased the proportions of shared environmental variance and increased the proportions of unique environmental variance. Genetic variance was similar in the geographical-cultural regions, but shared environmental variance was smaller in East Asia than in Europe and North America and Australia. The total variance and shared environmental variance of birth length and PI were greater from the birth cohort 1990-99 onwards compared with the birth cohorts from 1970-79 to 1980-89. The contribution of genetic factors to birth size is smaller than that of shared environmental factors, which is partly explained by gestational age. Shared environmental variances of birth length and PI were greater in the latest birth cohorts and differed also across geographical-cultural regions. Shared environmental factors are important when explaining differences in the variation of birth size globally and over time.
ERIC Educational Resources Information Center
Konold, Timothy R.; Glutting, Joseph J.
2008-01-01
This study employed a correlated trait-correlated method application of confirmatory factor analysis to disentangle trait and method variance from measures of attention-deficit/hyperactivity disorder obtained at the college level. The two trait factors were "Diagnostic and Statistical Manual of Mental Disorders-Fourth Edition" ("DSM-IV")…
ERIC Educational Resources Information Center
Marx, Megan D.
2016-01-01
The purpose of this study was to determine variance in mean levels of teacher self-efficacy (TSE) and its three factors--efficacy in student engagement (ESE), efficacy in instructional strategies (EIS), and efficacy in classroom management (ECM)--based on participation and time spent in professional learning communities (PLCs). In this…
ERIC Educational Resources Information Center
Jackson, Dan; Bowden, Jack; Baker, Rose
2015-01-01
Moment-based estimators of the between-study variance are very popular when performing random effects meta-analyses. This type of estimation has many advantages including computational and conceptual simplicity. Furthermore, by using these estimators in large samples, valid meta-analyses can be performed without the assumption that the treatment…
[Trait variability in ontogenesis of epiphytic lichen Hypogymnia physodes (L.) Nyl].
Suetina, Iu G; Glotov, N V
2014-01-01
Ontogenesis of the foliose lichen Hypogymnia physodes is described on the basis of material obtained from natural populations. Ontogenetic dynamics (thallus diameter and the number of lobes) and features of the reproductive structures (the number and diameter of labelloid and galeated soralia) were studied in ecologically different pine forests. We justified rejecting the use of analysis of variance and nonparametric criteria for processing the results. It was shown that the median dynamics and trait variance may be either similar or different throughout ontogenesis. The trait variances in ecologically different ecotopes were shown to differ.
A Bias and Variance Analysis for Multistep-Ahead Time Series Forecasting.
Ben Taieb, Souhaib; Atiya, Amir F
2016-01-01
Multistep-ahead forecasts can either be produced recursively by iterating a one-step-ahead time series model or directly by estimating a separate model for each forecast horizon. In addition, there are other strategies; some of them combine aspects of both aforementioned concepts. In this paper, we present a comprehensive investigation into the bias and variance behavior of multistep-ahead forecasting strategies. We provide a detailed review of the different multistep-ahead strategies. Subsequently, we perform a theoretical study that derives the bias and variance for a number of forecasting strategies. Finally, we conduct a Monte Carlo experimental study that compares and evaluates the bias and variance performance of the different strategies. From the theoretical and the simulation studies, we analyze the effect of different factors, such as the forecast horizon and the time series length, on the bias and variance components, and on the different multistep-ahead strategies. Several lessons are learned, and recommendations are given concerning the advantages, disadvantages, and best conditions of use of each strategy.
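The two basic strategies compared in the paper can be sketched on a toy autoregressive series; scikit-learn's LinearRegression stands in here for an arbitrary one-step or horizon-specific model, and the series, lag order, and horizon are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n, p, H = 300, 2, 4                 # series length, lag order, forecast horizon
y = np.zeros(n)
for t in range(1, n):               # toy AR(1) series
    y[t] = 0.8 * y[t - 1] + rng.normal(scale=0.5)

# Lag matrix: row for time t holds (y[t-1], ..., y[t-p])
X = np.column_stack([y[p - 1 - j:n - 1 - j] for j in range(p)])
target = y[p:]

# Recursive strategy: one one-step model, iterated H times on its own output
one_step = LinearRegression().fit(X, target)
hist = list(y[-p:])                 # last p observations, chronological order
rec = []
for _ in range(H):
    pred = one_step.predict([hist[::-1]])[0]   # most recent lag first
    rec.append(pred)
    hist = hist[1:] + [pred]

# Direct strategy: a separate model trained for each horizon h
direct = []
for h in range(1, H + 1):
    model = LinearRegression().fit(X[:len(X) - h + 1], y[p + h - 1:])
    direct.append(model.predict([y[-p:][::-1]])[0])

print("recursive:", np.round(rec, 3), "direct:", np.round(direct, 3))
```

The bias-variance trade-off discussed above shows up here as recursive forecasts compounding one-step model error (bias) while direct models are fit on fewer effective samples per horizon (variance).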
1951-05-01
procedures to be of high accuracy. Ambiguity of subject responses due to overlap of entries on the record sheets was negligible. Handwriting ... experimental variables on reading errors was carried out by analysis of variance methods. For this purpose it was convenient to consider different classes ... on any scale - an error of one numbered division. For this reason, the results of the analysis of variance of the ... errors by dial types may
Second-moment budgets in cloud topped boundary layers: A large-eddy simulation study
NASA Astrophysics Data System (ADS)
Heinze, Rieke; Mironov, Dmitrii; Raasch, Siegfried
2015-06-01
A detailed analysis of second-order moment budgets for cloud topped boundary layers (CTBLs) is performed using high-resolution large-eddy simulation (LES). Two CTBLs are simulated—one with trade wind shallow cumuli, and the other with nocturnal marine stratocumuli. Approximations to the ensemble-mean budgets of the Reynolds-stress components, of the fluxes of two quasi-conservative scalars, and of the scalar variances and covariance are computed by averaging the LES data over horizontal planes and over several hundred time steps. Importantly, the subgrid scale contributions to the budget terms are accounted for. Analysis of the LES-based second-moment budgets reveals, among other things, a paramount importance of the pressure scrambling terms in the Reynolds-stress and scalar-flux budgets. The pressure-strain correlation tends to evenly redistribute kinetic energy between the components, leading to the growth of horizontal-velocity variances at the expense of the vertical-velocity variance which is produced by buoyancy over most of both CTBLs. The pressure gradient-scalar covariances are the major sink terms in the budgets of scalar fluxes. The third-order transport proves to be of secondary importance in the scalar-flux budgets. However, it plays a key role in maintaining budgets of TKE and of the scalar variances and covariance. Results from the second-moment budget analysis suggest that the accuracy of description of the CTBL structure within the second-order closure framework strongly depends on the fidelity of parameterizations of the pressure scrambling terms in the flux budgets and of the third-order transport terms in the variance budgets.
Genetic control of residual variance of yearling weight in Nellore beef cattle.
Iung, L H S; Neves, H H R; Mulder, H A; Carvalheiro, R
2017-04-01
There is evidence for genetic variability in the residual variance of livestock traits, which offers the potential for selection for increased uniformity of production. Different statistical approaches have been employed to study this topic; however, little is known about the concordance between them. The aim of our study was to investigate the genetic heterogeneity of residual variance of yearling weight (YW; 291.15 ± 46.67) in a Nellore beef cattle population; to compare the results of two statistical approaches, a two-step approach and the double hierarchical generalized linear model (DHGLM); and to evaluate the effectiveness of power transformation to accommodate scale differences. The comparison was based on genetic parameters, accuracy of EBV for residual variance, and cross-validation to assess predictive performance of both approaches. A total of 194,628 yearling weight records from 625 sires were used in the analysis. The results supported the hypothesis of genetic heterogeneity of residual variance of YW in Nellore beef cattle and the opportunity for selection, measured through the genetic coefficient of variation of residual variance (0.10 to 0.12 for the two-step approach and 0.17 for DHGLM, using an untransformed data set). However, low estimates of genetic variance associated with positive genetic correlations between mean and residual variance (about 0.20 for the two-step approach and 0.76 for DHGLM on an untransformed data set) limit the genetic response to selection for uniformity of production while simultaneously increasing YW itself. Moreover, large sire families are needed to obtain accurate estimates of genetic merit for residual variance, as indicated by the low heritability estimates (<0.007). Box-Cox transformation was able to decrease the dependence of the variance on the mean and decreased the estimates of genetic parameters for residual variance. The transformation reduced but did not eliminate all the genetic heterogeneity of residual variance, highlighting its presence beyond the scale effect. The DHGLM showed higher predictive ability of EBV for residual variance and therefore should be preferred over the two-step approach.
Read-noise characterization of focal plane array detectors via mean-variance analysis.
Sperline, R P; Knight, A K; Gresham, C A; Koppenaal, D W; Hieftje, G M; Denton, M B
2005-11-01
Mean-variance analysis is described as a method for characterization of the read-noise and gain of focal plane array (FPA) detectors, including charge-coupled devices (CCDs), charge-injection devices (CIDs), and complementary metal-oxide-semiconductor (CMOS) multiplexers (infrared arrays). Practical FPA detector characterization is outlined. The nondestructive readout capability available in some CIDs and FPA devices is discussed as a means for signal-to-noise ratio improvement. Derivations of the equations are fully presented to unify understanding of this method by the spectroscopic community.
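The mean-variance method rests on the linear relation variance = mean/gain + read_noise² when both are expressed in digital numbers (DN) for shot-noise-limited frames: the slope gives the gain and the intercept the squared read noise. A minimal sketch on simulated frame pairs, with assumed gain and read-noise values; differencing two frames removes fixed-pattern structure.

```python
import numpy as np

rng = np.random.default_rng(6)
gain = 2.0           # assumed sensor gain, e- per DN
read_noise_dn = 3.0  # assumed read noise in DN

means, variances = [], []
for electrons in [200, 500, 1000, 2000, 5000, 10000]:
    f1 = rng.poisson(electrons, 10000) / gain + rng.normal(0, read_noise_dn, 10000)
    f2 = rng.poisson(electrons, 10000) / gain + rng.normal(0, read_noise_dn, 10000)
    means.append((f1.mean() + f2.mean()) / 2)
    variances.append(np.var(f1 - f2, ddof=1) / 2)   # per-frame temporal variance

slope, intercept = np.polyfit(means, variances, 1)  # var = mean/gain + rn^2
print(f"gain ~ {1 / slope:.2f} e-/DN, read noise ~ {np.sqrt(intercept):.2f} DN")
```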
Grogger, P; Sacher, C; Weber, S; Millesi, G; Seemann, R
2018-04-10
Deviations in measuring dentofacial components in a lateral X-ray represent a major hurdle in the subsequent treatment of dysgnathic patients. In a retrospective study, we investigated the most prevalent source of error in the following commonly used cephalometric measurements: the angles Sella-Nasion-Point A (SNA), Sella-Nasion-Point B (SNB) and Point A-Nasion-Point B (ANB); the Wits appraisal; the anteroposterior dysplasia indicator (APDI); and the overbite depth indicator (ODI). Preoperative lateral radiographic images of patients with dentofacial deformities were collected and the landmarks digitally traced by three independent raters. Cephalometric analysis was automatically performed based on 1116 tracings. Error analysis identified the x-coordinate of Point A as the prevalent source of error in all investigated measurements, except SNB, in which it is not incorporated. In SNB, the y-coordinate of Nasion dominated the error variance. SNB showed the lowest inter-rater variation. In addition, our observations confirmed previous studies showing that landmark identification variance follows characteristic error envelopes, here in the largest number of tracings analysed to date. Variance orthogonal to defining planes was of relevance, while variance parallel to planes was not. Taking these findings into account, orthognathic surgeons as well as orthodontists would be able to perform cephalometry more accurately and accomplish better therapeutic results. Copyright © 2018 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Menard, Richard; Chang, Lang-Ping
1998-01-01
A Kalman filter system designed for the assimilation of limb-sounding observations of stratospheric chemical tracers, which has four tunable covariance parameters, was developed in Part I (Menard et al., 1998). The assimilation results of CH4 observations from the Cryogenic Limb Array Etalon Sounder instrument (CLAES) and the Halogen Occultation Experiment instrument (HALOE) on board the Upper Atmosphere Research Satellite are described in this paper. A robust χ² criterion, which provides a statistical validation of the forecast and observational error covariances, was used to estimate the tunable variance parameters of the system. In particular, an estimate of the model error variance was obtained. The effect of model error on the forecast error variance became critical after only three days of assimilation of CLAES observations, although it took 14 days of forecast to double the initial error variance. We further found that the model error due to numerical discretization, as arising in the standard Kalman filter algorithm, is comparable in size to the physical model error due to wind and transport modeling errors together. Separate assimilations of CLAES and HALOE observations were compared to validate the state estimate away from the observed locations. A wave-breaking event that took place several thousand kilometers away from the HALOE observation locations was well captured by the Kalman filter due to highly anisotropic forecast error correlations. The forecast error correlation in the assimilation of the CLAES observations was found to have a structure similar to that in pure forecast mode except for smaller length scales. Finally, we have conducted an analysis of the variance and correlation dynamics to determine their relative importance in chemical tracer assimilation problems. Results show that the optimality of a tracer assimilation system depends, for the most part, on having flow-dependent error correlation rather than on evolving the error variance.
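A χ² criterion of the kind used for tuning can be sketched as the normalized innovation squared summed over analysis times: for consistent covariances, the statistic divided by the total number of observations should be close to 1. The example below is a toy self-consistency check, not the CH4 assimilation system itself.

```python
import numpy as np

def innovation_chi2(innovations, S_matrices):
    """Normalized innovation squared per observation: sum of v' S^-1 v
    over analysis times, divided by the total observation count."""
    chi2, n_obs = 0.0, 0
    for v, S in zip(innovations, S_matrices):
        chi2 += v @ np.linalg.solve(S, v)
        n_obs += len(v)
    return chi2 / n_obs

# Toy check: innovations actually drawn from N(0, S), so the ratio should be ~1
rng = np.random.default_rng(7)
S = np.array([[2.0, 0.3], [0.3, 1.0]])
innov = [rng.multivariate_normal([0, 0], S) for _ in range(500)]
print(f"chi2 / n_obs = {innovation_chi2(innov, [S] * 500):.3f} (expect ~1)")
```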
Publication Bias in Meta-Analysis: Confidence Intervals for Rosenthal's Fail-Safe Number.
Fragkos, Konstantinos C; Tsagris, Michail; Frangos, Christos C
2014-01-01
The purpose of the present paper is to assess the efficacy of confidence intervals for Rosenthal's fail-safe number. Although Rosenthal's estimator is highly used by researchers, its statistical properties are largely unexplored. First of all, we developed statistical theory which allowed us to produce confidence intervals for Rosenthal's fail-safe number. This was produced by discerning whether the number of studies analysed in a meta-analysis is fixed or random. Each case produces different variance estimators. For a given number of studies and a given distribution, we provided five variance estimators. Confidence intervals are examined with a normal approximation and a nonparametric bootstrap. The accuracy of the different confidence interval estimates was then tested by methods of simulation under different distributional assumptions. The half normal distribution variance estimator has the best probability coverage. Finally, we provide a table of lower confidence intervals for Rosenthal's estimator.
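Rosenthal's estimator itself follows from the Stouffer combined z: adding N null studies drives ΣZ/√(k+N) down to the one-tailed cutoff, giving N = (ΣZ)²/z_α² − k. A minimal sketch with hypothetical study z-scores; the confidence-interval construction that is the paper's contribution is beyond this fragment.

```python
import numpy as np
from scipy.stats import norm

def fail_safe_n(z_scores, alpha=0.05):
    """Rosenthal's fail-safe number: null studies needed to pull the
    combined (Stouffer) z below the one-tailed significance cutoff."""
    z = np.asarray(z_scores, dtype=float)
    z_alpha = norm.isf(alpha)            # 1.645 for one-tailed alpha = .05
    return z.sum() ** 2 / z_alpha ** 2 - len(z)

# Hypothetical z-scores from k = 8 studies
z_scores = [2.1, 1.7, 2.8, 0.9, 1.4, 2.2, 1.9, 2.5]
print(f"fail-safe N ~ {fail_safe_n(z_scores):.0f}")
```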
New trends in gender and mathematics performance: a meta-analysis.
Lindberg, Sara M; Hyde, Janet Shibley; Petersen, Jennifer L; Linn, Marcia C
2010-11-01
In this article, we use meta-analysis to analyze gender differences in recent studies of mathematics performance. First, we meta-analyzed data from 242 studies published between 1990 and 2007, representing the testing of 1,286,350 people. Overall, d = 0.05, indicating no gender difference, and variance ratio = 1.08, indicating nearly equal male and female variances. Second, we analyzed data from large data sets based on probability sampling of U.S. adolescents over the past 20 years: the National Longitudinal Surveys of Youth, the National Education Longitudinal Study of 1988, the Longitudinal Study of American Youth, and the National Assessment of Educational Progress. Effect sizes for the gender difference ranged between -0.15 and +0.22. Variance ratios ranged from 0.88 to 1.34. Taken together, these findings support the view that males and females perform similarly in mathematics.
Budde, M.E.; Tappan, G.; Rowland, James; Lewis, J.; Tieszen, L.L.
2004-01-01
The researchers calculated seasonal integrated normalized difference vegetation index (NDVI) for each of 7 years using a time-series of 1-km data from the Advanced Very High Resolution Radiometer (AVHRR) (1992-93, 1995) and SPOT Vegetation (1998-2001) sensors. We used a local variance technique to identify each pixel as normal or either positively or negatively anomalous when compared to its surroundings. We then summarized the number of years that a given pixel was identified as an anomaly. The resulting anomaly maps were analysed using Landsat TM imagery and extensive ground knowledge to assess the results. This technique identified anomalies that can be linked to numerous anthropogenic impacts including agricultural and urban expansion, maintenance of protected areas and increased fallow. Local variance analysis is a reliable method for assessing vegetation degradation resulting from human pressures or increased land productivity from natural resource management practices. © 2004 Published by Elsevier Ltd.
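The local variance technique can be sketched with moving-window mean and variance filters, flagging each pixel as a positive or negative anomaly relative to its surroundings. The window size and threshold below are arbitrary illustrative choices, not the study's parameters.

```python
import numpy as np
from scipy import ndimage

def local_anomalies(ndvi, size=9, k=2.0):
    """Flag pixels departing from the local mean by more than k local
    standard deviations: +1 positive, -1 negative, 0 normal."""
    local_mean = ndimage.uniform_filter(ndvi, size)
    local_sq = ndimage.uniform_filter(ndvi ** 2, size)
    local_std = np.sqrt(np.maximum(local_sq - local_mean ** 2, 0))
    flags = np.zeros(ndvi.shape, dtype=int)
    flags[ndvi > local_mean + k * local_std] = 1
    flags[ndvi < local_mean - k * local_std] = -1
    return flags

# Toy seasonal-integrated NDVI image with one degraded patch
rng = np.random.default_rng(8)
ndvi = rng.normal(0.5, 0.02, size=(100, 100))
ndvi[40:45, 40:45] -= 0.15                       # simulated degradation
flags = local_anomalies(ndvi)                    # one year's anomaly map
print("anomalous pixels:", int(np.abs(flags).sum()))
```

Summing such per-year maps over the 7-year series gives the anomaly-frequency map the abstract describes.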
Biochemical phenotypes to discriminate microbial subpopulations and improve outbreak detection.
Galar, Alicia; Kulldorff, Martin; Rudnick, Wallis; O'Brien, Thomas F; Stelling, John
2013-01-01
Clinical microbiology laboratories worldwide constitute an invaluable resource for monitoring emerging threats and the spread of antimicrobial resistance. We studied the growing number of biochemical tests routinely performed on clinical isolates to explore their value as epidemiological markers. Microbiology laboratory results from January 2009 through December 2011 from a 793-bed hospital, stored in WHONET, were examined. Variables included patient location, collection date, organism, and 47 biochemical and 17 antimicrobial susceptibility test results reported by Vitek 2. To identify biochemical tests that were particularly valuable (stable with repeat testing, but good variability across the species) or problematic (inconsistent results with repeat testing), three types of variance analyses were performed on isolates of K. pneumoniae: descriptive analysis of discordant biochemical results in same-day isolates, an average within-patient variance index, and generalized linear mixed model variance component analysis. 4,200 isolates of K. pneumoniae were identified from 2,485 patients, 32% of whom had multiple isolates. The first two variance analyses highlighted SUCT, TyrA, GlyA, and GGT as "nuisance" biochemicals for which discordant within-patient test results impacted a high proportion of patient results, while dTAG had relatively good within-patient stability with good heterogeneity across the species. Variance component analyses confirmed the relative stability of dTAG, and identified additional biochemicals such as PHOS with a large between-patient to within-patient variance ratio. A reduced subset of biochemicals improved the robustness of strain definition for carbapenem-resistant K. pneumoniae. Surveillance analyses suggest that the reduced biochemical profile could improve the timeliness and specificity of outbreak detection algorithms. The statistical approaches explored can improve the robust recognition of microbial subpopulations with routinely available biochemical test results, of value in the timely detection of outbreak clones and evolutionarily important genetic events.
ERIC Educational Resources Information Center
Watson, Stevie
2009-01-01
This study examined attitudinal and behavioral differences between internal and external locus of control (LOC) consumers on credit card misuse, the importance of money, and compulsive buying. Using multiple analysis of variance and separate analyses of variance, internal LOC consumers were found to have lower scores on credit card misuse and…
ERIC Educational Resources Information Center
Bakir, Saad T.
2010-01-01
We propose a nonparametric (or distribution-free) procedure for testing the equality of several population variances (or scale parameters). The proposed test is a modification of Bakir's (1989, Commun. Statist., Simul-Comp., 18, 757-775) analysis of means by ranks (ANOMR) procedure for testing the equality of several population means. A proof is…
A comparison of coronal and interplanetary current sheet inclinations
NASA Technical Reports Server (NTRS)
Behannon, K. W.; Burlaga, L. F.; Hundhausen, A. J.
1983-01-01
The HAO white-light K-coronameter observations show that the inclination of the heliospheric current sheet at the base of the corona can be either large (nearly vertical with respect to the solar equator) or small during Carrington rotations 1660-1666, and can vary even within a single solar rotation. Voyager 1 and 2 magnetic field observations show crossings of the heliospheric current sheet at distances from the Sun of 1.4 and 2.8 AU. Two cases are considered: one in which the corresponding coronameter data indicate a nearly vertical (north-south) current sheet, and another in which a nearly horizontal, near-equatorial current sheet is indicated. For the crossings of the vertical current sheet, a variance analysis based on hour averages of the magnetic field data gave a minimum variance direction consistent with a steep inclination. The horizontal current sheet was observed by Voyager as a region of mixed polarity and low speeds lasting several days, consistent with multiple crossings of a horizontal but irregular and fluctuating current sheet at 1.4 AU. However, variance analysis of individual current sheet crossings in this interval using 1.92-s averages did not give minimum variance directions consistent with a horizontal current sheet.
Structure analysis of simulated molecular clouds with the Δ-variance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertram, Erik; Klessen, Ralf S.; Glover, Simon C. O.
2015-05-27
Here, we employ the Δ-variance analysis and study the turbulent gas dynamics of simulated molecular clouds (MCs). Our models account for a simplified treatment of time-dependent chemistry and the non-isothermal nature of the gas. We investigate simulations using three different initial mean number densities of n0 = 30, 100 and 300 cm^-3 that span the range of values typical for MCs in the solar neighbourhood. Furthermore, we model the CO line emission in a post-processing step using a radiative transfer code. We evaluate Δ-variance spectra for centroid velocity (CV) maps as well as for integrated intensity and column density maps for various chemical components: the total, H2 and 12CO number density and the integrated intensity of both the 12CO and 13CO (J = 1 → 0) lines. The spectral slopes of the Δ-variance computed on the CV maps for the total and H2 number density are significantly steeper compared to the different CO tracers. We find slopes for the linewidth-size relation ranging from 0.4 to 0.7 for the total and H2 density models, while the slopes for the various CO tracers range from 0.2 to 0.4 and underestimate the values for the total and H2 density by a factor of 1.5-3.0. We demonstrate that optical depth effects can significantly alter the Δ-variance spectra. Furthermore, we report a critical density threshold of 100 cm^-3 at which the Δ-variance slopes of the various CO tracers change sign. We thus conclude that carbon monoxide traces the total cloud structure well only if the average cloud density lies above this limit.
An Investigation of Collaborative Leadership
2013-01-01
businesses. The second will use an analysis of variance (ANOVA) to statistically compare the variance among organizations. Research regarding collaborative ... horizontal column of the "T," a leader is networking across the larger business model to understand how their organization's core skills can be used in ... the exchange of information or services among individuals, groups, or institutions in order to cultivate productive business relationships.
Estimation of the biserial correlation and its sampling variance for use in meta-analysis.
Jacobs, Perke; Viechtbauer, Wolfgang
2017-06-01
Meta-analyses are often used to synthesize the findings of studies examining the correlational relationship between two continuous variables. When only dichotomous measurements are available for one of the two variables, the biserial correlation coefficient can be used to estimate the product-moment correlation between the two underlying continuous variables. Unlike the point-biserial correlation coefficient, biserial correlation coefficients can therefore be integrated with product-moment correlation coefficients in the same meta-analysis. The present article describes the estimation of the biserial correlation coefficient for meta-analytic purposes and reports simulation results comparing different methods for estimating the coefficient's sampling variance. The findings indicate that commonly employed methods yield inconsistent estimates of the sampling variance across a broad range of research situations. In contrast, consistent estimates can be obtained using two methods that appear to be unknown in the meta-analytic literature. A variance-stabilizing transformation for the biserial correlation coefficient is described that allows for the construction of confidence intervals for individual coefficients with close to nominal coverage probabilities in most of the examined conditions. Copyright © 2016 John Wiley & Sons, Ltd.
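The conversion from the point-biserial to the biserial correlation uses the normal ordinate at the dichotomization threshold. A minimal sketch under the latent-normal assumption stated above; the sampling-variance estimators compared in the paper are not reproduced here.

```python
import numpy as np
from scipy.stats import norm, pearsonr

def biserial(x_continuous, y_binary):
    """Biserial correlation from the point-biserial, assuming the binary
    variable arises by thresholding a latent normal variable."""
    y = np.asarray(y_binary)
    p = y.mean()                       # proportion in the "1" group
    r_pb = pearsonr(x_continuous, y)[0]
    h = norm.pdf(norm.ppf(p))          # normal ordinate at the threshold
    return r_pb * np.sqrt(p * (1 - p)) / h

rng = np.random.default_rng(9)
latent = rng.normal(size=500)
x = 0.6 * latent + rng.normal(scale=0.8, size=500)
y = (latent > 0.4).astype(int)         # dichotomized at an arbitrary cut
print(f"biserial r ~ {biserial(x, y):.3f}")
```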
The Use of Online Modules and the Effect on Student Outcomes in a High School Chemistry Class
NASA Astrophysics Data System (ADS)
Lamb, Richard L.; Annetta, Len
2013-10-01
The purpose of the study was to review the efficacy of online chemistry simulations in a high school chemistry class and to discuss the factors that may affect student learning. The sample consisted of 351 high school students exposed to online simulations. Researchers administered a pretest, intermediate test and posttest to measure chemistry content knowledge acquired during the use of online chemistry laboratory simulations. The authors also analyzed student journal entries as an attitudinal measure of chemistry during the simulation experience. The four analyses conducted were repeated-measures analysis of variance, three-way analysis of variance, logistic regression, and multivariate analysis of variance; each provides a slightly different view of the factors bearing on student attitudes and outcomes. Results indicate a statistically significant main effect across grouping type (experimental versus control, p = 0.042, α = 0.05). Analysis of student journal entries suggests that attitudinal factors may affect student outcomes concerning the use of online supplemental instruction. The findings imply that the use of online simulations promotes increased understanding of chemistry content through open-ended and interactive questioning.
Ren, Jing; Bai, Ming; Yang, Xing-Ke; Zhang, Run-Zhi; Ge, Si-Qin
2017-01-01
The success of beetles is mainly attributed to their ability to hide the hindwings under the sclerotised elytra. The acquisition of the transverse folding function of the hind wing was an important event in the evolutionary history of beetles. In this study, the morphological and functional variance in the hind wings of 94 leaf beetle species (Coleoptera: Chrysomelinae) is explored using geometric morphometrics based on 36 landmarks. Principal component analysis and canonical variate analysis indicate that changes in the apical area, anal area, and middle area are three useful phylogenetic features at the subtribe level of leaf beetles. Variation in the apical area is the most pronounced and strongly influences the variance of the entire venation. Partial least squares analysis indicates that the proximal and distal parts of hind wings are weakly associated. Modularity tests confirm that the proximal and distal compartments of hind wings are separate modules. It is deduced that the hind wings of leaf beetles, and possibly of other beetles, underwent significant functional divergence during the evolution of transverse folding, with the proximal and distal compartments evolving into separate functional modules.
Applying the Hájek Approach in Formula-Based Variance Estimation. Research Report. ETS RR-17-24
ERIC Educational Resources Information Center
Qian, Jiahe
2017-01-01
The variance formula derived for a two-stage sampling design without replacement employs the joint inclusion probabilities in the first-stage selection of clusters. One of the difficulties encountered in data analysis is the lack of information about such joint inclusion probabilities. One way to solve this issue is by applying Hájek's…
ERIC Educational Resources Information Center
Nowell, Amy; Hedges, Larry V.
1998-01-01
Uses evidence from seven surveys of the U.S. 12th-grade population and the National Assessment of Educational Progress to show that gender differences in mean and variance in academic achievement are small from 1960 to 1994 but that differences in extreme scores are often substantial. (SLD)
Oregon ground-water quality and its relation to hydrogeological factors; a statistical approach
Miller, T.L.; Gonthier, J.B.
1984-01-01
An appraisal of Oregon ground-water quality was made using existing data accessible through the U.S. Geological Survey computer system. The data available for about 1,000 sites were separated by aquifer units and hydrologic units. Selected statistical moments were described for 19 constituents including major ions. About 96 percent of all sites in the data base were sampled only once. The sample data were classified by aquifer unit and hydrologic unit and analysis of variance was run to determine if significant differences exist between the units within each of these two classifications for the same 19 constituents on which statistical moments were determined. Results of the analysis of variance indicated both classification variables performed about the same, but aquifer unit did provide more separation for some constituents. Samples from the Rogue River basin were classified by location within the flow system and type of flow system. The samples were then analyzed using analysis of variance on 14 constituents to determine if there were significant differences between subsets classified by flow path. Results of this analysis were not definitive, but classification as to the type of flow system did indicate potential for segregating water-quality data into distinct subsets. (USGS)
Optimization of data analysis for the in vivo neutron activation analysis of aluminum in bone.
Mohseni, H K; Matysiak, W; Chettle, D R; Byun, S H; Priest, N; Atanackovic, J; Prestwich, W V
2016-10-01
An existing system at McMaster University has been used for the in vivo measurement of aluminum in human bone. Precise and detailed analysis approaches are necessary to determine the aluminum concentration because of the low levels of aluminum found in the bone and the challenges associated with its detection. Phantoms resembling the composition of the human hand with varying concentrations of aluminum were made for testing the system prior to the application to human studies. A spectral decomposition model and a photopeak fitting model involving the inverse-variance weighted mean and a time-dependent analysis were explored to analyze the results and determine the model with the best performance and lowest minimum detection limit. The results showed that the spectral decomposition and the photopeak fitting model with the inverse-variance weighted mean both provided better results compared to the other methods tested. The spectral decomposition method resulted in a marginally lower detection limit (5 μg Al/g Ca) compared to the inverse-variance weighted mean (5.2 μg Al/g Ca), rendering both equally applicable to human measurements. Copyright © 2016 Elsevier Ltd. All rights reserved.
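The inverse-variance weighted mean used in the photopeak-fitting model pools repeated measurements as follows; the numbers below are hypothetical, not instrument data.

```python
import numpy as np

def ivw_mean(estimates, variances):
    """Inverse-variance weighted mean and its variance."""
    w = 1.0 / np.asarray(variances)
    mean = np.sum(w * np.asarray(estimates)) / w.sum()
    return mean, 1.0 / w.sum()

# Hypothetical per-run aluminum signals (arbitrary units) with variances
est = np.array([4.8, 5.6, 5.1, 4.3])
var = np.array([0.9, 1.4, 0.7, 1.1])
m, v = ivw_mean(est, var)
print(f"pooled {m:.2f} +/- {np.sqrt(v):.2f}")
```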
Li, Yan; Hughes, Jan N.; Kwok, Oi-man; Hsu, Hsien-Yuan
2012-01-01
This study investigated the construct validity of measures of teacher-student support in a sample of 709 ethnically diverse second and third grade academically at-risk students. Confirmatory factor analysis investigated the convergent and discriminant validities of teacher, child, and peer reports of teacher-student support and child conduct problems. Results supported the convergent and discriminant validity of scores on the measures. Peer reports accounted for the largest proportion of trait variance and non-significant method variance. Child reports accounted for the smallest proportion of trait variance and the largest method variance. A model with two latent factors provided a better fit to the data than a model with one factor, providing further evidence of the discriminant validity of measures of teacher-student support. Implications for research, policy, and practice are discussed. PMID:21767024
Minimum number of measurements for evaluating soursop (Annona muricata L.) yield.
Sánchez, C F B; Teodoro, P E; Londoño, S; Silva, L A; Peixoto, L A; Bhering, L L
2017-05-31
Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of soursop (Annona muricata L.) genotypes based on fruit yield. Sixteen measurements of fruit yield from 71 soursop genotypes were carried out between 2000 and 2016. In order to estimate r with the best accuracy, four procedures were used: analysis of variance, principal component analysis based on the correlation matrix, principal component analysis based on the phenotypic variance and covariance matrix, and structural analysis based on the correlation matrix. The minimum number of measurements needed to predict the actual value of individuals was estimated. Principal component analysis using the phenotypic variance and covariance matrix provided the most accurate estimates of both r and the number of measurements required for accurate evaluation of fruit yield in soursop. Our results indicate that selection of soursop genotypes with high fruit yield can be performed based on the third and fourth measurements in the early years and/or based on the eighth and ninth measurements at more advanced stages.
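The ANOVA route to the repeatability coefficient, one of the four procedures compared, and the standard formula for the number of measurements needed to reach a target coefficient of determination can be sketched as follows. The data are simulated genotype-by-year records, not the soursop measurements.

```python
import numpy as np

def repeatability_anova(data):
    """Repeatability from a (genotypes x measurements) matrix via one-way ANOVA
    (the intraclass correlation)."""
    g, k = data.shape
    grand = data.mean()
    msb = k * ((data.mean(axis=1) - grand) ** 2).sum() / (g - 1)          # between genotypes
    msw = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (g * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

def n_measurements(r, target_r2=0.90):
    """Measurements needed for a target coefficient of determination."""
    return target_r2 * (1 - r) / (r * (1 - target_r2))

rng = np.random.default_rng(10)
true = rng.normal(50, 10, size=(71, 1))           # genotype effects
data = true + rng.normal(0, 8, size=(71, 16))     # 16 yearly measurements
r = repeatability_anova(data)
print(f"r = {r:.2f}, need ~{n_measurements(r):.1f} measurements for R2 = 0.90")
```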
Smelter, Andrey; Rouchka, Eric C; Moseley, Hunter N B
2017-08-01
Peak lists derived from nuclear magnetic resonance (NMR) spectra are commonly used as input data for a variety of computer assisted and automated analyses. These include automated protein resonance assignment and protein structure calculation software tools. Prior to these analyses, peak lists must be aligned to each other and sets of related peaks must be grouped based on common chemical shift dimensions. Even when programs can perform peak grouping, they require the user to provide uniform match tolerances or use default values. However, peak grouping is further complicated by multiple sources of variance in peak position limiting the effectiveness of grouping methods that utilize uniform match tolerances. In addition, no method currently exists for deriving peak positional variances from single peak lists for grouping peaks into spin systems, i.e. spin system grouping within a single peak list. Therefore, we developed a complementary pair of peak list registration analysis and spin system grouping algorithms designed to overcome these limitations. We have implemented these algorithms into an approach that can identify multiple dimension-specific positional variances that exist in a single peak list and group peaks from a single peak list into spin systems. The resulting software tools generate a variety of useful statistics on both a single peak list and pairwise peak list alignment, especially for quality assessment of peak list datasets. We used a range of low and high quality experimental solution NMR and solid-state NMR peak lists to assess performance of our registration analysis and grouping algorithms. Analyses show that an algorithm using a single iteration and uniform match tolerances approach is only able to recover from 50 to 80% of the spin systems due to the presence of multiple sources of variance. Our algorithm recovers additional spin systems by reevaluating match tolerances in multiple iterations. To facilitate evaluation of the algorithms, we developed a peak list simulator within our nmrstarlib package that generates user-defined assigned peak lists from a given BMRB entry or database of entries. In addition, over 100,000 simulated peak lists with one or two sources of variance were generated to evaluate the performance and robustness of these new registration analysis and peak grouping algorithms.
Why weight? Modelling sample and observational level variability improves power in RNA-seq analyses
Liu, Ruijie; Holik, Aliaksei Z.; Su, Shian; Jansz, Natasha; Chen, Kelan; Leong, Huei San; Blewitt, Marnie E.; Asselin-Labat, Marie-Liesse; Smyth, Gordon K.; Ritchie, Matthew E.
2015-01-01
Variations in sample quality are frequently encountered in small RNA-sequencing experiments, and pose a major challenge in a differential expression analysis. Removal of high variation samples reduces noise, but at a cost of reducing power, thus limiting our ability to detect biologically meaningful changes. Similarly, retaining these samples in the analysis may not reveal any statistically significant changes due to the higher noise level. A compromise is to use all available data, but to down-weight the observations from more variable samples. We describe a statistical approach that facilitates this by modelling heterogeneity at both the sample and observational levels as part of the differential expression analysis. At the sample level this is achieved by fitting a log-linear variance model that includes common sample-specific or group-specific parameters that are shared between genes. The estimated sample variance factors are then converted to weights and combined with observational level weights obtained from the mean–variance relationship of the log-counts-per-million using ‘voom’. A comprehensive analysis involving both simulations and experimental RNA-sequencing data demonstrates that this strategy leads to a universally more powerful analysis and fewer false discoveries when compared to conventional approaches. This methodology has wide application and is implemented in the open-source ‘limma’ package. PMID:25925576
Genetic basis of between-individual and within-individual variance of docility.
Martin, J G A; Pirotta, E; Petelle, M B; Blumstein, D T
2017-04-01
Between-individual variation in phenotypes within a population is the basis of evolution. However, evolutionary and behavioural ecologists have mainly focused on estimating between-individual variance in mean trait and neglected variation in within-individual variance, or predictability of a trait. In fact, an important assumption of mixed-effects models used to estimate between-individual variance in mean traits is that within-individual residual variance (predictability) is identical across individuals. Individual heterogeneity in the predictability of behaviours is a potentially important effect but rarely estimated and accounted for. We used 11 389 measures of docility behaviour from 1576 yellow-bellied marmots (Marmota flaviventris) to estimate between-individual variation in both mean docility and its predictability. We then implemented a double hierarchical animal model to decompose the variances of both mean trait and predictability into their environmental and genetic components. We found that individuals differed both in their docility and in their predictability of docility with a negative phenotypic covariance. We also found significant genetic variance for both mean docility and its predictability but no genetic covariance between the two. This analysis is one of the first to estimate the genetic basis of both mean trait and within-individual variance in a wild population. Our results indicate that equal within-individual variance should not be assumed. We demonstrate the evolutionary importance of the variation in the predictability of docility and illustrate potential bias in models ignoring variation in predictability. We conclude that the variability in the predictability of a trait should not be ignored, and present a coherent approach for its quantification. © 2017 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2017 European Society For Evolutionary Biology.
EGSIEM combination service: combination of GRACE monthly K-band solutions on normal equation level
NASA Astrophysics Data System (ADS)
Meyer, Ulrich; Jean, Yoomin; Arnold, Daniel; Jäggi, Adrian
2017-04-01
The European Gravity Service for Improved Emergency Management (EGSIEM) project offers a scientific combination service, combining for the first time monthly GRACE gravity fields of different analysis centers (ACs) on normal equation (NEQ) level, thus taking all correlations between the gravity field coefficients and the pre-eliminated orbit and instrument parameters correctly into account. Optimal weights for the individual NEQs are commonly derived by variance component estimation (VCE), as is the case for the products of the International VLBI Service (IVS) or the DTRF2008 reference frame realisation, which are also derived by combination on NEQ level. But variance factors are based on post-fit residuals and depend strongly on observation sampling and noise modeling, both of which differ widely among the individual EGSIEM ACs. These variance factors therefore do not necessarily represent the true error levels of the estimated gravity field parameters, which are still governed by analysis noise. We present a combination approach in which the weights are derived on solution level, thereby taking the analysis noise into account.
Hjerpe, Per; Ohlsson, Henrik; Lindblad, Ulf; Boström, Kristina Bengtsson; Merlo, Juan
2011-04-01
In Skaraborg, Sweden, the economic responsibility for tax-financed prescription drug costs was transferred from the regional administrative level to the local level (health care centre; HCC) in 2003. The aim of this study was to investigate the impact of this decentralization of economic responsibility on adherence to guidelines for prescribing lipid-lowering drugs. Data from all 24 public HCCs in Skaraborg on prescriptions for lipid-lowering drugs during 2003 and 2005 were extracted from the Skaraborg Primary Care Database (SPCD). Multilevel regression analysis (MLRA) was used to disentangle the variances at different levels of data (patient, physician, HCC). The outcome variable on the patient level was the prescription of the recommended statin (yes/no). Sex and age of the patients and sex, age and occupational status of the physician were included as fixed effects. The variance was expressed as the median odds ratio (MOR). The prevalence of adherence to guidelines for the prescription of statins increased from 77% in 2003 to 84% in 2005. The MLRA showed that in 2003 the variance was equally distributed between the HCC and physician levels (MOR(HCC2003)=1.89 vs. MOR(PHYSICIAN2003)=1.88). The variance between physicians and between HCCs decreased considerably between 2003 and 2005. The inclusion of individual and physician characteristics did not explain any of the remaining variance. The decentralized budget appears to have increased adherence to guidelines and reduced inefficient variation in prescribing.
Ma, Kaifeng; Sun, Lidan; Cheng, Tangren; Pan, Huitang; Wang, Jia; Zhang, Qixiang
2018-01-01
Increasing evidence shows that epigenetics plays an important role in phenotypic variance. However, little is known about epigenetic variation in the important ornamental tree Prunus mume. We used amplified fragment length polymorphism (AFLP) and methylation-sensitive amplified polymorphism (MSAP) techniques, together with association analysis and sequencing, to investigate epigenetic variation and its relationships with genetic variance, environmental factors, and traits. In leaf samples, a relative total methylation level of 29.80% was detected in 96 accessions of P. mume, and the relative hemi-methylation level (15.77%) was higher than the relative full methylation level (14.03%). The epigenetic diversity (I* = 0.575, h* = 0.393) was higher than the genetic diversity (I = 0.484, h = 0.319). The cultivated population displayed greater epigenetic diversity than the wild populations in both southwest and southeast China. Epigenetic variance and genetic variance, as well as environmental factors, showed cooperative structures. In particular, leaf length, width and area were positively correlated with the relative full methylation level and total methylation level, indicating that the DNA methylation level played a role in trait variation. In total, 203 AFLP and 423 MSAP associated markers were detected and 68 of them were sequenced. Homology analysis and functional prediction suggested that the candidate marker-linked genes are essential for leaf morphology development and metabolism, implying that these markers play critical roles in the establishment of leaf length, width, area, and the ratio of length to width. PMID:29441078
Methods for Improving Information from ’Undesigned’ Human Factors Experiments.
Human factors engineering, Information processing, Regression analysis, Experimental design, Least squares method, Analysis of variance, Correlation techniques, Matrices (Mathematics), Multiple disciplines, Mathematical prediction
Self-perception and value system as possible predictors of stress.
Sivberg, B
1998-03-01
This study was directed towards personality-related, value-system and sociodemographic variables of nursing students in a situation of change, using a longitudinal perspective to measure their improvement in principle-based moral judgement (Kohlberg; Rest) as possible predictors of stress. Three subgroups of students were included from the commencement of the first three-year academic nursing programme in 1993. The students came from the colleges of health at Jönköping, Växjö and Kristianstad in the south of Sweden. A principal component factor analysis (varimax) was performed using data obtained from the students in the spring of 1994 (n = 122) and in the spring of 1996 (n = 112). There were 23 variables, of which two were sociodemographic, eight represented self-image, six were self-values, six were interpersonal values, and one was principle-based moral judgement. The analysis of data from students in the first year of a three-year programme demonstrated eight factors that explained 68.8% of the variance. The most important factors were: (1) ascendant decisive disorderly sociability and nonpractical mindedness (18.1% of the variance); (2) original vigour person-related trust (13.3% of the variance); (3) orderly nonvigour achievement (8.9% of the variance); and (4) independent leadership (7.9% of the variance). (The term 'ascendancy' refers to self-confidence, and 'vigour' denotes responding well to challenges and coping with stress.) The analysis in 1996 demonstrated nine factors, of which the most important were: (1) ascendant original sociability with decisive nonconformist leadership (18.2% of the variance); (2) cautious person-related responsibility (12.6% of the variance); (3) orderly nonvariety achievement (8.4% of the variance); and (4) nonsupportive benevolent conformity (7.2% of the variance). A comparison of the two most prominent factors in 1994 and 1996 showed the process of change to be stronger for 18.2% and weaker for 30% of the variance. Principle-based moral judgement was measured in March 1994 and in May 1996, using the Swedish version of the Defining Issues Test and Index P. Index P for the students at Jönköping changed significantly (paired samples t-test) between 1994 and 1996 (p = 0.028), but that for the Växjö and Kristianstad students did not. The mean of Index P at Växjö was 44.3%, greater than the international average for college students (42.3%); in the spring of 1996 (independent samples t-test), but not in 1994, it differed significantly from the students at Jönköping (p = 0.032) and Kristianstad (p = 0.025). Index P was very heterogeneous for the group of students at Växjö, with the result that the paired samples t-test only approached significance. The conclusion of this study was that, if self-perception and value system are predictors of stress, only one-third of the students had improved their ability to cope with stress by the end of the programme. This article contains the author's application to the teaching process of reflecting on the structure of expectations in professional ethical relationships.
NASA Astrophysics Data System (ADS)
Gadbury-Amyot, Cynthia C.
This study examined the validity and reliability of portfolio assessment using Messick's (1996, 1995) unified framework of construct validity. Theoretical and empirical evidence was sought for six aspects of construct validity. The sample included twenty student portfolios. Each portfolio was evaluated by seven faculty raters using a primary trait analysis scoring rubric. There was a significant relationship (r = .81-.95; p < .01) between the seven subscales in the scoring rubric, demonstrating measurement of a common construct. Item analysis was conducted to examine convergent and discriminant empirical relationships of the 35 items in the scoring rubric. There was a significant relationship between all items (p < .01), and all but one item was more strongly correlated with its own subscale than with other subscales. However, correlations of items across subscales were predominantly moderate in strength, indicating that items did not strongly discriminate between subscales. A fully crossed, two-facet generalizability (G) study design was used to examine reliability. Analysis of variance demonstrated that the greatest source of variance was the scoring rubric itself, accounting for 78% of the total variance. The smallest source of variance was the interaction between portfolio and rubric (1.15%), indicating that while the seven subscales varied in difficulty level, the relative standing of individual portfolios was maintained across subscales. Faculty rater variance accounted for only 1.28% of total variance. A phi coefficient of .86, analogous to a reliability coefficient in classical test theory, was obtained in the Decision study by increasing the subscales to fourteen and decreasing faculty raters to three. There was a significant relationship between portfolios and grade point average (r = .70; p < .01), and the National Dental Hygiene Board Examination (r = .60; p < .01). The relationship between portfolios and the Central Regional Dental Testing Service examination was both weak and nonsignificant (r = .19; p > .05). An open-ended survey was used to elicit student feedback on portfolio development. A majority of the students (76%) perceived value in the development of programmatic portfolios. In conclusion, the pattern of findings from this study suggests that portfolios can serve as a valid and reliable measure for assessing student competency.
Seabed mapping and characterization of sediment variability using the usSEABED data base
Goff, J.A.; Jenkins, C.J.; Williams, S. Jeffress
2008-01-01
We present a methodology for statistical analysis of randomly located marine sediment point data, and apply it to the US continental shelf portions of usSEABED mean grain size records. The usSEABED database, like many modern, large environmental datasets, is heterogeneous and interdisciplinary. We statistically test the database as a source of mean grain size data, and from it provide a first examination of regional seafloor sediment variability across the entire US continental shelf. Data derived from laboratory analyses ("extracted") and from word-based descriptions ("parsed") are treated separately, and they are compared statistically and deterministically. Data records are selected for spatial analysis by their location within sample regions: polygonal areas defined in ArcGIS chosen by geography, water depth, and data sufficiency. We derive isotropic, binned semivariograms from the data, and invert these for estimates of noise variance, field variance, and decorrelation distance. The highly erratic nature of the semivariograms is a result both of the random locations of the data and of the high level of data uncertainty (noise). This decorrelates the data covariance matrix for the inversion, and largely prevents robust estimation of the fractal dimension. Our comparison of the extracted and parsed mean grain size data demonstrates important differences between the two. In particular, extracted measurements generally produce finer mean grain sizes, lower noise variance, and lower field variance than parsed values. Such relationships can be used to derive a regionally dependent conversion factor between the two. Our analysis of sample regions on the US continental shelf revealed considerable geographic variability in the estimated statistical parameters of field variance and decorrelation distance. Some regional relationships are evident, and overall there is a tendency for field variance to be higher where the average mean grain size is finer grained. Surprisingly, parsed and extracted noise magnitudes correlate with each other, which may indicate that some portion of the data variability that we identify as "noise" is caused by real grain size variability at very short scales. Our analyses demonstrate that by applying a bias-correction proxy, usSEABED data can be used to generate reliable interpolated maps of regional mean grain size and sediment character.
[Exploration of factors influencing the price of herbal medicine based on a VAR model].
Wang, Nuo; Liu, Shu-Zhen; Yang, Guang
2014-10-01
Based on a vector auto-regression (VAR) model, this paper uses Granger causality tests, variance decomposition and impulse response analysis to carry out a comprehensive study of the factors influencing the price of Chinese herbal medicine, including cultivation costs, acreage, natural disasters, residents' demand and inflation. The study found Granger causality relationships between inflation and herbal prices, and between cultivation costs and herbal prices. In the total variance analysis of the Chinese herbal medicine price index, the largest contribution comes from its own fluctuations, followed by cultivation costs and inflation.
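For readers wanting to reproduce this style of analysis, statsmodels exposes the full VAR toolchain (Granger tests, impulse responses, forecast-error variance decomposition); the series below are synthetic stand-ins, not the paper's data:

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.api import VAR

    rng = np.random.default_rng(0)
    n = 200
    cpi = np.cumsum(rng.normal(0.1, 1.0, n))                  # inflation proxy
    cost = 0.5 * cpi + rng.normal(0.0, 1.0, n)                # cultivation cost
    price = 0.3 * np.roll(cost, 1) + 0.2 * np.roll(cpi, 1) + rng.normal(0.0, 1.0, n)
    df = pd.DataFrame({"price": price, "cost": cost, "cpi": cpi}).diff().dropna()

    res = VAR(df).fit(maxlags=4, ic="aic")
    print(res.test_causality("price", ["cpi"], kind="f").summary())  # Granger test
    res.irf(12)                 # impulse responses over 12 periods
    res.fevd(12).summary()      # forecast-error variance decomposition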
Statistical aspects of quantitative real-time PCR experiment design.
Kitchen, Robert R; Kubista, Mikael; Tichopad, Ales
2010-04-01
Experiments using quantitative real-time PCR to test hypotheses are limited by technical and biological variability; we seek to minimise sources of confounding variability through optimum use of biological and technical replicates. The quality of an experiment design is commonly assessed by calculating its prospective power. Such calculations rely on knowledge of the expected variances of the measurements of each group of samples and the magnitude of the treatment effect; the estimation of which is often uninformed and unreliable. Here we introduce a method that exploits a small pilot study to estimate the biological and technical variances in order to improve the design of a subsequent large experiment. We measure the variance contributions at several 'levels' of the experiment design and provide a means of using this information to predict both the total variance and the prospective power of the assay. A validation of the method is provided through a variance analysis of representative genes in several bovine tissue-types. We also discuss the effect of normalisation to a reference gene in terms of the measured variance components of the gene of interest. Finally, we describe a software implementation of these methods, powerNest, that gives the user the opportunity to input data from a pilot study and interactively modify the design of the assay. The software automatically calculates expected variances, statistical power, and optimal design of the larger experiment. powerNest enables the researcher to minimise the total confounding variance and maximise prospective power for a specified maximum cost for the large study. Copyright 2010 Elsevier Inc. All rights reserved.
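The variance bookkeeping behind such nested designs is simple algebra; as a sketch (ours, not the powerNest software), the variance of a group mean with nb biological and nt technical replicates is vb/nb + vt/(nb*nt):

    def group_mean_variance(vb, vt, nb, nt):
        # vb: biological variance, vt: technical variance (pilot estimates)
        return vb / nb + vt / (nb * nt)

    # Hypothetical pilot estimates: compare replicate allocations at fixed cost
    for nb, nt in [(3, 3), (6, 1), (2, 6)]:
        print(nb, nt, group_mean_variance(0.8, 0.2, nb, nt))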
Turner, Rebecca M; Davey, Jonathan; Clarke, Mike J; Thompson, Simon G; Higgins, Julian PT
2012-01-01
Background Many meta-analyses contain only a small number of studies, which makes it difficult to estimate the extent of between-study heterogeneity. Bayesian meta-analysis allows incorporation of external evidence on heterogeneity, and offers advantages over conventional random-effects meta-analysis. To assist in this, we provide empirical evidence on the likely extent of heterogeneity in particular areas of health care. Methods Our analyses included 14 886 meta-analyses from the Cochrane Database of Systematic Reviews. We classified each meta-analysis according to the type of outcome, type of intervention comparison and medical specialty. By modelling the study data from all meta-analyses simultaneously, using the log odds ratio scale, we investigated the impact of meta-analysis characteristics on the underlying between-study heterogeneity variance. Predictive distributions were obtained for the heterogeneity expected in future meta-analyses. Results Between-study heterogeneity variances for meta-analyses in which the outcome was all-cause mortality were found to be on average 17% (95% CI 10–26) of variances for other outcomes. In meta-analyses comparing two active pharmacological interventions, heterogeneity was on average 75% (95% CI 58–95) of variances for non-pharmacological interventions. Meta-analysis size was found to have only a small effect on heterogeneity. Predictive distributions are presented for nine different settings, defined by type of outcome and type of intervention comparison. For example, for a planned meta-analysis comparing a pharmacological intervention against placebo or control with a subjectively measured outcome, the predictive distribution for heterogeneity is a log-normal(−2.13, 1.58²) distribution, which has a median value of 0.12. In an example of meta-analysis of six studies, incorporating external evidence led to a smaller heterogeneity estimate and a narrower confidence interval for the combined intervention effect. Conclusions Meta-analysis characteristics were strongly associated with the degree of between-study heterogeneity, and predictive distributions for heterogeneity differed substantially across settings. The informative priors provided will be very beneficial in future meta-analyses including few studies. PMID:22461129
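A quick check of the quoted predictive distribution (ours, not the authors' code): the median of a log-normal(mu, sigma^2) is exp(mu), and sampling from it gives predictive draws of the heterogeneity variance for a new meta-analysis:

    import math, random

    mu, sigma = -2.13, 1.58
    print(math.exp(mu))                             # ~0.119, the reported median 0.12
    tau2_draw = math.exp(random.gauss(mu, sigma))   # one predictive draw of tau^2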
Applied Multiple Linear Regression: A General Research Strategy
ERIC Educational Resources Information Center
Smith, Brandon B.
1969-01-01
Illustrates some of the basic concepts and procedures for using regression analysis in experimental design, analysis of variance, analysis of covariance, and curvilinear regression. Applications to evaluation of instruction and vocational education programs are illustrated. (GR)
NASA Astrophysics Data System (ADS)
Rexer, Moritz; Hirt, Christian
2015-09-01
Classical degree variance models (such as Kaula's rule or the Tscherning-Rapp model) often rely on low-resolution gravity data and so are subject to extrapolation when used to describe the decay of the gravity field at short spatial scales. This paper presents a new degree variance model based on the recently published GGMplus near-global land areas 220 m resolution gravity maps (Geophys Res Lett 40(16):4279-4283, 2013). We investigate and use a 2D-DFT (discrete Fourier transform) approach to transform GGMplus gravity grids into degree variances. The method is described in detail and its approximation errors are studied using closed-loop experiments. Focus is placed on tiling, azimuth averaging, and windowing effects in the 2D-DFT method and on analytical fitting of degree variances. Approximation errors of the 2D-DFT procedure on the (spherical harmonic) degree variance are found to be at the 10-20 % level. The importance of the reference surface (sphere, ellipsoid or topography) of the gravity data for correct interpretation of degree variance spectra is highlighted. The effect of the underlying mass arrangement (spherical or ellipsoidal approximation) on the degree variances is found to be crucial at short spatial scales. A rule-of-thumb for transformation of spectra between spherical and ellipsoidal approximation is derived. Application of the 2D-DFT on GGMplus gravity maps yields a new degree variance model to degree 90,000. The model is supported by GRACE, GOCE, EGM2008 and forward-modelled gravity at 3 billion land points over all land areas within the SRTM data coverage and provides gravity signal variances at the surface of the topography. The model yields omission errors of 9 mGal for gravity (1.5 cm for geoid effects) at scales of 10 km, 4 mGal (1 mm) at 2-km scales, and 2 mGal (0.2 mm) at 1-km scales.
Microstructure of the IMF turbulences at 2.5 AU
NASA Technical Reports Server (NTRS)
Mavromichalaki, H.; Vassilaki, A.; Marmatsouri, L.; Moussas, X.; Quenby, J. J.; Smith, E. J.
1995-01-01
A detailed analysis of small-period (15-900 sec) magnetohydrodynamic (MHD) turbulences of the interplanetary magnetic field (IMF) has been made using Pioneer-11 high time resolution data (0.75 sec) inside a Corotating Interaction Region (CIR) at a heliocentric distance of 2.5 AU in 1973. The methods used are hodogram analysis, minimum variance matrix analysis and coherence analysis. The minimum variance analysis gives evidence of linearly polarized wave modes. Coherence analysis has shown that the field fluctuations are dominated by fast magnetosonic modes with periods of 15 sec to 15 min. However, it is also shown that some small-amplitude Alfvén waves are present in the trailing edge of this region, with characteristic periods of 15-200 sec. The observed wave modes are locally generated and possibly attributed to the scattering of Alfvén wave energy into random magnetosonic waves.
Hypothesis exploration with visualization of variance
2014-01-01
Background The Consortium for Neuropsychiatric Phenomics (CNP) at UCLA was an investigation into the biological bases of traits such as memory and response inhibition phenotypes—to explore whether they are linked to syndromes including ADHD, Bipolar disorder, and Schizophrenia. An aim of the consortium was in moving from traditional categorical approaches for psychiatric syndromes towards more quantitative approaches based on large-scale analysis of the space of human variation. It represented an application of phenomics—wide-scale, systematic study of phenotypes—to neuropsychiatry research. Results This paper reports on a system for exploration of hypotheses in data obtained from the LA2K, LA3C, and LA5C studies in CNP. ViVA is a system for exploratory data analysis using novel mathematical models and methods for visualization of variance. An example of these methods is called VISOVA, a combination of visualization and analysis of variance, with the flavor of exploration associated with ANOVA in biomedical hypothesis generation. It permits visual identification of phenotype profiles—patterns of values across phenotypes—that characterize groups. Visualization enables screening and refinement of hypotheses about variance structure of sets of phenotypes. Conclusions The ViVA system was designed for exploration of neuropsychiatric hypotheses by interdisciplinary teams. Automated visualization in ViVA supports ‘natural selection’ on a pool of hypotheses, and permits deeper understanding of the statistical architecture of the data. Large-scale perspective of this kind could lead to better neuropsychiatric diagnostics. PMID:25097666
Goldfarb, Charles A; Strauss, Nicole L; Wall, Lindley B; Calfee, Ryan P
2011-02-01
The measurement technique for ulnar variance in the adolescent population has not been well established. The purpose of this study was to assess the reliability of a standard ulnar variance assessment in the adolescent population. Four orthopedic surgeons measured 138 adolescent wrist radiographs for ulnar variance using a standard technique. There were 62 male and 76 female radiographs obtained in a standardized fashion for subjects aged 12 to 18 years. Skeletal age was used for analysis. We determined mean variance and assessed for differences related to age and gender. We also determined the interrater reliability. The mean variance was -0.7 mm for boys and -0.4 mm for girls; there was no significant difference between the 2 groups overall. When subdivided by age and gender, variance in the younger group (≤ 15 y of age) was significantly less negative in girls (boys, -0.8 mm; girls, -0.3 mm; p < .05). There was no significant difference between boys and girls in the older group. The greatest difference between any 2 raters was 1 mm; exact agreement was obtained in 72 subjects. Correlations between raters were high (r_p = 0.87-0.97 in boys and 0.82-0.96 in girls). Interrater reliability was excellent (Cronbach's alpha, 0.97-0.98). Standard assessment techniques for ulnar variance are reliable in the adolescent population. Open growth plates did not interfere with this assessment. Young adolescent boys demonstrated a greater degree of negative ulnar variance compared with young adolescent girls. Copyright © 2011 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
Bujkiewicz, Sylwia; Riley, Richard D
2016-01-01
Multivariate random-effects meta-analysis allows the joint synthesis of correlated results from multiple studies, for example, for multiple outcomes or multiple treatment groups. In a Bayesian univariate meta-analysis of one endpoint, the importance of specifying a sensible prior distribution for the between-study variance is well understood. However, in multivariate meta-analysis, there is little guidance about the choice of prior distributions for the variances or, crucially, the between-study correlation, ρB; for the latter, researchers often use a Uniform(−1,1) distribution assuming it is vague. In this paper, an extensive simulation study and a real illustrative example is used to examine the impact of various (realistically) vague prior distributions for ρB and the between-study variances within a Bayesian bivariate random-effects meta-analysis of two correlated treatment effects. A range of diverse scenarios are considered, including complete and missing data, to examine the impact of the prior distributions on posterior results (for treatment effect and between-study correlation), amount of borrowing of strength, and joint predictive distributions of treatment effectiveness in new studies. Two key recommendations are identified to improve the robustness of multivariate meta-analysis results. First, the routine use of a Uniform(−1,1) prior distribution for ρB should be avoided, if possible, as it is not necessarily vague. Instead, researchers should identify a sensible prior distribution, for example, by restricting values to be positive or negative as indicated by prior knowledge. Second, it remains critical to use sensible (e.g. empirically based) prior distributions for the between-study variances, as an inappropriate choice can adversely impact the posterior distribution for ρB, which may then adversely affect inferences such as joint predictive probabilities. These recommendations are especially important with a small number of studies and missing data. PMID:26988929
Lee, J-H; Han, G; Fulp, W J; Giuliano, A R
2012-06-01
The Poisson model can be applied to the count of events occurring within a specific time period. The main feature of the Poisson model is the assumption that the mean and variance of the count data are equal. However, this equal mean-variance relationship rarely occurs in observational data. In most cases, the observed variance is larger than the assumed variance, which is called overdispersion. Further, when the observed data involve excessive zero counts, the problem of overdispersion results in underestimating the variance of the estimated parameter, and thus produces a misleading conclusion. We illustrated the use of four models for overdispersed count data that may be attributed to excessive zeros. These are Poisson, negative binomial, zero-inflated Poisson and zero-inflated negative binomial models. The example data in this article deal with the number of incidents involving human papillomavirus infection. The four models resulted in differing statistical inferences. The Poisson model, which is widely used in epidemiology research, underestimated the standard errors and overstated the significance of some covariates.
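A hedged illustration of the four models compared in the record, fitted with statsmodels on synthetic zero-inflated counts (the study's HPV data are not reproduced here):

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.discrete.count_model import (ZeroInflatedPoisson,
                                                  ZeroInflatedNegativeBinomialP)

    rng = np.random.default_rng(0)
    n = 500
    x = rng.normal(size=n)
    X = sm.add_constant(x)
    lam = np.exp(0.5 + 0.4 * x)
    y = np.where(rng.random(n) < 0.3, 0, rng.poisson(lam))   # 30% structural zeros

    poisson = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    negbin = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()
    zip_fit = ZeroInflatedPoisson(y, X).fit(disp=False)
    zinb_fit = ZeroInflatedNegativeBinomialP(y, X).fit(disp=False)
    print(poisson.pearson_chi2 / poisson.df_resid)   # >1 flags overdispersion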
Optimal distribution of integration time for intensity measurements in Stokes polarimetry.
Li, Xiaobo; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie; Hu, Haofeng
2015-10-19
We consider the typical Stokes polarimetry system, which performs four intensity measurements to estimate a Stokes vector. We show that if the total integration time of the intensity measurements is fixed, the variance of the Stokes vector estimator depends on the distribution of the integration time over the four intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the Stokes vector estimator can be decreased. In this paper, we obtain the closed-form solution of the optimal distribution of integration time by employing the Lagrange multiplier method. According to the theoretical analysis and a real-world experiment, the total variance of the Stokes vector estimator can be decreased by about 40% in the case discussed in this paper. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improves the measurement accuracy of the polarimetric system.
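The closed-form result has the familiar Lagrange-multiplier shape: if the estimator variance decomposes as sum_i a_i/t_i with total time T fixed, the optimum is t_i proportional to sqrt(a_i). A numeric sketch under that assumed form (the a_i below are illustrative, not the paper's coefficients):

    import numpy as np

    def optimal_times(a, total_time=1.0):
        # Minimize sum(a_i / t_i) subject to sum(t_i) = total_time
        a = np.asarray(a, dtype=float)
        return total_time * np.sqrt(a) / np.sqrt(a).sum()

    a = [1.0, 0.6, 0.6, 0.3]                 # per-channel noise coefficients
    t = optimal_times(a)
    var_opt = sum(ai / ti for ai, ti in zip(a, t))
    var_uniform = sum(ai / 0.25 for ai in a)
    print(t, var_opt / var_uniform)          # optimal split beats a uniform split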
NASA Astrophysics Data System (ADS)
Wang, Ting; Xiang, Jie; Fei, Jianfang; Wang, Yi; Liu, Chunxia; Li, Yuanxiang
2017-12-01
This paper presents an evaluation of the observational impacts on blended sea surface winds from a two-dimensional variational data assimilation (2D-Var) scheme. We begin by briefly introducing the analysis sensitivity with respect to observations in variational data assimilation systems and its relationship with the degrees of freedom for signal (DFS); the DFS concept is then applied to the 2D-Var sea surface wind blending scheme. Two methods, a priori and a posteriori, are used to estimate the DFS of the zonal (u) and meridional (v) components of winds in the 2D-Var blending scheme. The a posteriori method obtains almost the same results as the a priori method, and because it uses only by-products of the blending scheme, its computation time is reduced significantly. The magnitude of the DFS is critically related to the observational and background error statistics: changing the observational and background error variances affects the DFS value. Because the observation error variances are assumed to be uniform, the observational influence at each location is related to the background error variance, and observations located where the background error variances are larger have greater influence. The average observational influence of u and v with respect to the analysis is about 40%, implying that the background influence with respect to the analysis is about 60%.
Poplová, Michaela; Sovka, Pavel; Cifra, Michal
2017-01-01
Photonic signals are broadly exploited in communication and sensing and they typically exhibit Poisson-like statistics. In a common scenario where the intensity of the photonic signals is low and one needs to remove a nonstationary trend of the signals for any further analysis, one faces an obstacle: due to the dependence between the mean and variance typical for a Poisson-like process, information about the trend remains in the variance even after the trend has been subtracted, possibly yielding artifactual results in further analyses. Commonly available detrending or normalizing methods cannot cope with this issue. To alleviate this issue we developed a suitable pre-processing method for the signals that originate from a Poisson-like process. In this paper, a Poisson pre-processing method for nonstationary time series with Poisson distribution is developed and tested on computer-generated model data and experimental data of chemiluminescence from human neutrophils and mung seeds. The presented method transforms a nonstationary Poisson signal into a stationary signal with a Poisson distribution while preserving the type of photocount distribution and phase-space structure of the signal. The importance of the suggested pre-processing method is shown in Fano factor and Hurst exponent analysis of both computer-generated model signals and experimental photonic signals. It is demonstrated that our pre-processing method is superior to standard detrending-based methods whenever further signal analysis is sensitive to variance of the signal.
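A minimal Fano-factor computation of the kind used to evaluate such pre-processing (our sketch; the windowing scheme is an assumption):

    import numpy as np

    def fano_factor(counts, window):
        # Fano factor F = Var(N) / Mean(N) over non-overlapping count windows
        n = len(counts) // window
        sums = np.add.reduceat(counts[:n * window], np.arange(0, n * window, window))
        return sums.var(ddof=1) / sums.mean()

    rng = np.random.default_rng(1)
    photocounts = rng.poisson(3.0, size=10_000)     # stationary Poisson signal
    print(fano_factor(photocounts, window=100))     # ~1 for a Poisson process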
Nelson, Jason M; Canivez, Gary L; Watkins, Marley W
2013-06-01
Structural and incremental validity of the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV; Wechsler, 2008a) was examined with a sample of 300 individuals referred for evaluation at a university-based clinic. Confirmatory factor analysis indicated that the WAIS-IV structure was best represented by 4 first-order factors as well as a general intelligence factor in a direct hierarchical model. The general intelligence factor accounted for the most common and total variance among the subtests. Incremental validity analyses indicated that the Full Scale IQ (FSIQ) generally accounted for medium to large portions of academic achievement variance. For all measures of academic achievement, the first-order factors combined accounted for significant achievement variance beyond that accounted for by the FSIQ, but individual factor index scores contributed trivial amounts of achievement variance. Implications for interpreting WAIS-IV results are discussed. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
Finkel, Deborah; Pedersen, Nancy L
2014-01-01
Intraindividual variability (IIV) in reaction time has been related to cognitive decline, but questions remain about the nature of this relationship. Mean and range in movement and decision time for simple reaction time were available from 241 individuals aged 51-86 years at the fifth testing wave of the Swedish Adoption/Twin Study of Aging. Cognitive performance on four factors was also available: verbal, spatial, memory, and speed. Analyses indicated that range in reaction time could be used as an indicator of IIV. Heritability estimates were 35% for mean reaction time and 20% for range in reaction time. Multivariate analysis indicated that the genetic variance on the memory, speed, and spatial factors is shared with genetic variance for mean or range in reaction time. IIV shares significant genetic variance with fluid ability in late adulthood, over and above the genetic variance shared with mean reaction time.
NASA Astrophysics Data System (ADS)
Dai, Heng; Chen, Xingyuan; Ye, Ming; Song, Xuehang; Zachara, John M.
2017-05-01
Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study, we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multilayer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed input variables.
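The underlying variance decomposition can be illustrated with SALib's Sobol workflow (a sketch only; the grouped, hierarchical indices defined in the study are a custom extension, and the input names below are illustrative):

    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    problem = {"num_vars": 3,
               "names": ["boundary", "permeability", "recharge"],  # illustrative
               "bounds": [[0, 1]] * 3}
    X = saltelli.sample(problem, 1024)
    Y = X[:, 0] + 2 * X[:, 1] + 0.5 * X[:, 0] * X[:, 2]   # stand-in model
    Si = sobol.analyze(problem, Y)
    print(Si["S1"], Si["ST"])        # first-order and total-order indices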
NASA Astrophysics Data System (ADS)
Söderberg, Per G.; Malmberg, Filip; Sandberg-Melin, Camilla
2016-03-01
The present study aimed to analyze the clinical usefulness of the thinnest cross section of the nerve fibers in the optic nerve head averaged over the circumference of the optic nerve head. 3D volumes of the optic nerve head of the same eye were captured at two different visits spaced in time by 1-4 weeks, in 13 subjects diagnosed with early to moderate glaucoma. At each visit 3 volumes containing the optic nerve head were captured independently with a Topcon OCT-2000 system. In each volume, the average shortest distance between the inner surface of the retina and the central limit of the pigment epithelium around the optic nerve head circumference, PIMD-Average [0,2π], was determined semiautomatically. The measurements were analyzed with an analysis of variance for estimation of the variance components for subjects, visits, volumes and semi-automatic measurements of PIMD-Average [0,2π]. It was found that the variance for subjects was on the order of five times the variance for visits, and the variance for visits was on the order of 5 times higher than the variance for volumes. The variance for semi-automatic measurements of PIMD-Average [0,2π] was 3 orders of magnitude lower than the variance for volumes. A 95% confidence interval for mean PIMD-Average [0,2π] was estimated to 1.00 +/- 0.13 mm (D.f. = 12). The variance estimates indicate that PIMD-Average [0,2π] is not suitable for comparison between a one-time estimate in a subject and a population reference interval. Cross-sectional independent group comparisons of PIMD-Average [0,2π] averaged over subjects will require inconveniently large sample sizes. However, cross-sectional independent group comparison of averages of within-subject differences between baseline and follow-up can be made with reasonable sample sizes. Assuming a loss rate of 0.1 PIMD-Average [0,2π] per year and 4 visits per year, it was found that approximately 18 months of follow-up is required before a significant change of PIMD-Average [0,2π] can be observed with a power of 0.8. This is shorter than what has been observed both for HRT measurements and automated perimetry measurements with a similar observation rate. It is concluded that PIMD-Average [0,2π] has the potential to detect deterioration of glaucoma quicker than currently available primary diagnostic instruments. To increase the efficiency of PIMD-Average [0,2π] further, the variation among visits within subject has to be reduced.
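The subjects/visits/volumes comparison follows balanced nested-ANOVA algebra; a reconstruction sketch (ours, not the study's software) of the variance components from expected mean squares:

    import numpy as np

    def nested_components(y):
        # y: subjects x visits x volumes array of PIMD-Average values
        s, v, k = y.shape
        subj_mean = y.mean(axis=(1, 2))
        visit_mean = y.mean(axis=2)
        ms_subj = v * k * ((subj_mean - y.mean()) ** 2).sum() / (s - 1)
        ms_visit = k * ((visit_mean - subj_mean[:, None]) ** 2).sum() / (s * (v - 1))
        ms_vol = ((y - y.mean(axis=2, keepdims=True)) ** 2).sum() / (s * v * (k - 1))
        var_vol = ms_vol
        var_visit = (ms_visit - ms_vol) / k
        var_subj = (ms_subj - ms_visit) / (v * k)
        return var_subj, var_visit, var_vol

    rng = np.random.default_rng(0)
    y = rng.normal(size=(13, 2, 3))     # 13 subjects, 2 visits, 3 volumes
    print(nested_components(y))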
Olivoto, T; Nardino, M; Carvalho, I R; Follmann, D N; Ferrari, M; Szareski, V J; de Pelegrin, A J; de Souza, V Q
2017-03-22
Methodologies using restricted maximum likelihood/best linear unbiased prediction (REML/BLUP) in combination with sequential path analysis in maize are still limited in the literature. Therefore, the aims of this study were: i) to use REML/BLUP-based procedures to estimate variance components, genetic parameters, and genotypic values of simple maize hybrids, and ii) to fit stepwise regressions on genotypic values to form a path diagram with multi-order predictors and minimum multicollinearity that explains the cause-and-effect relationships among grain yield-related traits. Fifteen commercial simple maize hybrids were evaluated in multi-environment trials in a randomized complete block design with four replications. The environmental variance (78.80%) and genotype-by-environment variance (20.83%) accounted for more than 99% of the phenotypic variance of grain yield, which makes direct selection for this trait difficult. The sequential path analysis model allowed the selection of traits with high explanatory power and minimum multicollinearity, resulting in models with an elevated fit (R² > 0.9 and ε < 0.3). The number of kernels per ear (NKE) and thousand-kernel weight (TKW) are the traits with the largest direct effects on grain yield (r = 0.66 and 0.73, respectively). The high accuracy of selection (0.86 and 0.89) associated with the high heritability of the average (0.732 and 0.794) for NKE and TKW, respectively, indicated good reliability and prospects of success in the indirect selection of hybrids with high-yield potential through these traits. The negative direct effect of NKE on TKW (r = -0.856), however, must be considered. The joint use of mixed models and sequential path analysis is effective in the evaluation of maize-breeding trials.
NASA Astrophysics Data System (ADS)
Oguntunde, Philip G.; Lischeid, Gunnar; Dietrich, Ottfried
2018-03-01
This study examines the variations of climate variables and rice yield and quantifies the relationships among them using multiple linear regression, principal component analysis, and support vector machine (SVM) analysis in southwest Nigeria. The climate and yield data covered a period of 36 years, 1980-2015. Similar to the observed decrease (P < 0.001) in rice yield, pan evaporation, solar radiation, and wind speed declined significantly. Eight principal components exhibited an eigenvalue > 1 and explained 83.1% of the total variance of the predictor variables. The SVM regression function using the scores of the first principal component explained about 75% of the variance in rice yield data, and linear regression about 64%. SVM regression between annual solar radiation values and yield explained 67% of the variance. Only the first component of the principal component analysis (PCA) exhibited a clear long-term trend and, at times, short-term variance similar to that of rice yield. Short-term fluctuations of the scores of PC1 are closely coupled to those of rice yield during the 1986-1993 and 2006-2013 periods, revealing the inter-annual sensitivity of rice production to climate variability. Solar radiation stands out as the climate variable with the greatest influence on rice yield, and the influence was especially strong during the monsoon and post-monsoon periods, which correspond to the vegetative, booting, flowering, and grain-filling stages in the study area. The outcome is expected to provide a more in-depth, region-specific climate-rice linkage for the screening of better cultivars that can respond positively to future climate fluctuations, as well as information that may help optimize planting dates for improved radiation use efficiency in the study area.
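A rough analogue (not the study's code) of the PCA-plus-SVM-regression pipeline described, with synthetic data in place of the Nigerian climate series:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    climate = rng.normal(size=(36, 8))        # 36 years x 8 climate variables
    yield_t = climate[:, 0] * 0.7 + rng.normal(scale=0.3, size=36)  # synthetic

    # Regress yield on the first principal component of standardized predictors
    pc1 = PCA(n_components=1).fit_transform(StandardScaler().fit_transform(climate))
    svr = SVR(kernel="rbf").fit(pc1, yield_t)
    print(svr.score(pc1, yield_t))            # R^2 of the SVM regression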
Miyake, Masahiro; Yamashiro, Kenji; Akagi-Kurashige, Yumiko; Oishi, Akio; Tsujikawa, Akitaka; Hangai, Masanori; Yoshimura, Nagahisa
2014-01-01
Purpose To evaluate fundus shape in highly myopic eyes using color maps created through optical coherence tomography (OCT) image analysis. Methods We retrospectively evaluated 182 highly myopic eyes from 113 patients. After obtaining 12 lines of 9-mm radial OCT scans with the fovea at the center, the Bruch's membrane line was plotted and its curvature was measured at 1-µm intervals in each image, which was reflected as a color topography map. For the quantitative analysis of the eye shape, mean absolute curvature and variance of curvature were calculated. Results The color maps allowed staphyloma visualization as a ring of green color at the edge and as that of orange-red color at the bottom. Analyses of mean and variance of curvature revealed that eyes with myopic choroidal neovascularization tended to have relatively flat posterior poles with smooth surfaces, while eyes with chorioretinal atrophy exhibited a steep, curved shape with an undulated surface (P < 0.001). Furthermore, eyes with staphylomas and those without clearly differed in terms of mean curvature and the variance of curvature: 98.4% of eyes with staphylomas had mean curvature ≥ 7.8 × 10^-5 [1/µm] and variance of curvature ≥ 0.26 × 10^-8 [1/µm]. Conclusions We established a novel method to analyze posterior pole shape by using OCT images to construct curvature maps. Our quantitative analysis revealed that fundus shape is associated with myopic complications. These values were also effective in distinguishing eyes with staphylomas from those without. This tool for the quantitative evaluation of eye shape should facilitate future research of myopic complications. PMID:25259853
Murphy, Alistair P; Duffield, Rob; Kellett, Aaron; Reid, Machar
2014-09-01
To investigate the discrepancy between coach and athlete perceptions of internal load and notational analysis of external load in elite junior tennis. Fourteen elite junior tennis players and 6 international coaches were recruited. Ratings of perceived exertion (RPEs) were recorded for individual drills and whole sessions, along with a rating of mental exertion, coach rating of intended session exertion, and athlete heart rate (HR). Furthermore, total stroke count and unforced-error count were notated using video coding after each session, alongside coach and athlete estimations of shots and errors made. Finally, regression analyses explained the variance in the criterion variables of athlete and coach RPE. Repeated-measures analyses of variance and intraclass correlation coefficients revealed that coaches significantly (P < .01) underestimated athlete session RPE, with only moderate correlation (r = .59) demonstrated between coach and athlete. However, athlete drill RPE (P = .14; r = .71) and mental exertion (P = .44; r = .68) were comparable and substantially correlated. No significant differences in estimated stroke count were evident between athlete and coach (P = .21), athlete notational analysis (P = .06), or coach notational analysis (P = .49). Coaches estimated significantly more unforced errors than either athletes or notational analysis (P < .01). Regression analyses found that 54.5% of the variance in coach RPE was explained by intended session exertion and coach drill RPE, while drill RPE and peak HR explained 45.3% of the variance in athlete session RPE. Coaches misinterpreted session RPE but not drill RPE, while inaccurately monitoring error counts. Improved understanding of external- and internal-load monitoring may help coach-athlete relationships in individual sports like tennis avoid maladaptive training.
Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan
2014-01-01
Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as in GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
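To make the multiplicative-error setting concrete, here is a minimal sketch (not the paper's derivation) of how such errors behave: observations y = a·t·(1 + ε), so the error magnitude grows with the true value, and a weighted LS with weights proportional to 1/y² (the natural choice under this model) outperforms ordinary LS. All parameter values are invented.

```python
# Simulate y_i = a*t_i*(1 + eps_i) and compare ordinary vs weighted LS for
# a line through the origin. Under multiplicative errors, large observations
# are noisier, so down-weighting them reduces estimator variance.
import numpy as np

rng = np.random.default_rng(0)
a_true, sigma = 2.0, 0.05            # true slope, relative error level
t = np.linspace(1.0, 10.0, 50)

ols_est, wls_est = [], []
for _ in range(2000):
    y = a_true * t * (1.0 + sigma * rng.standard_normal(t.size))
    ols_est.append(np.sum(t * y) / np.sum(t * t))           # ordinary LS
    w = 1.0 / y**2                                          # multiplicative-error weights
    wls_est.append(np.sum(w * t * y) / np.sum(w * t * t))   # weighted LS
print("OLS mean/std:", np.mean(ols_est), np.std(ols_est))
print("WLS mean/std:", np.mean(wls_est), np.std(wls_est))
```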
The Evolution and Consequences of Sex-Specific Reproductive Variance
Mullon, Charles; Reuter, Max; Lehmann, Laurent
2014-01-01
Natural selection favors alleles that increase the number of offspring produced by their carriers. But in a world that is inherently uncertain within generations, selection also favors alleles that reduce the variance in the number of offspring produced. While previous studies have established this principle, they have largely ignored fundamental aspects of sexual reproduction and therefore how selection on sex-specific reproductive variance operates. To study the evolution and consequences of sex-specific reproductive variance, we present a population-genetic model of phenotypic evolution in a dioecious population that incorporates previously neglected components of reproductive variance. First, we derive the probability of fixation for mutations that affect male and/or female reproductive phenotypes under sex-specific selection. We find that even in the simplest scenarios, the direction of selection is altered when reproductive variance is taken into account. In particular, previously unaccounted for covariances between the reproductive outputs of different individuals are expected to play a significant role in determining the direction of selection. Then, the probability of fixation is used to develop a stochastic model of joint male and female phenotypic evolution. We find that sex-specific reproductive variance can be responsible for changes in the course of long-term evolution. Finally, the model is applied to an example of parental-care evolution. Overall, our model allows for the evolutionary analysis of social traits in finite and dioecious populations, where interactions can occur within and between sexes under a realistic scenario of reproduction. PMID:24172130
Sources and implications of whole-brain fMRI signals in humans
Power, Jonathan D; Plitt, Mark; Laumann, Timothy O; Martin, Alex
2016-01-01
Whole-brain fMRI signals are a subject of intense interest: variance in the global fMRI signal (the spatial mean of all signals in the brain) indexes subject arousal, and psychiatric conditions such as schizophrenia and autism have been characterized by differences in the global fMRI signal. Further, vigorous debates exist on whether global signals ought to be removed from fMRI data. However, surprisingly little research has focused on the empirical properties of whole-brain fMRI signals. Here we map the spatial and temporal properties of the global signal, individually, in 1000+ fMRI scans. Variance in the global fMRI signal is strongly linked to head motion, to hardware artifacts, and to respiratory patterns and their attendant physiologic changes. Many techniques used to prepare fMRI data for analysis fail to remove these uninteresting kinds of global signal fluctuations. Thus, many studies include, at the time of analysis, prominent global effects of yawns, breathing changes, and head motion, among other signals. Such artifacts will mimic dynamic neural activity and will spuriously alter signal covariance throughout the brain. Methods capable of isolating and removing global artifactual variance while preserving putative “neural” variance are needed; this paper adopts no position on the topic of global signal regression. PMID:27751941
Non-destructive X-ray Computed Tomography (XCT) Analysis of Sediment Variance in Marine Cores
NASA Astrophysics Data System (ADS)
Oti, E.; Polyak, L. V.; Dipre, G.; Sawyer, D.; Cook, A.
2015-12-01
Benthic activity within marine sediments can alter the physical properties of the sediment as well as indicate nutrient flux and ocean temperatures. We examine burrowing features in sediment cores from the western Arctic Ocean collected during the 2005 Healy-Oden TransArctic Expedition (HOTRAX) and from the Gulf of Mexico Integrated Ocean Drilling Program (IODP) Expedition 308. While traditional methods for studying bioturbation require physical dissection of the cores, we assess burrowing using an X-ray computed tomography (XCT) scanner. XCT noninvasively images the sediment cores in three dimensions and produces density sensitive images suitable for quantitative analysis. XCT units are recorded as Hounsfield Units (HU), where -999 is air, 0 is water, and 4000-5000 would be a higher density mineral, such as pyrite. We rely on the fundamental assumption that sediments are deposited horizontally, and we analyze the variance over each flat-lying slice. The variance describes the spread of pixel values over a slice. When sediments are reworked, drawing higher and lower density matrix into a layer, the variance increases. Examples of this can be seen in two slices in core 19H-3A from Site U1324 of IODP Expedition 308. The first slice, located 165.6 meters below sea floor consists of relatively undisturbed sediment. Because of this, the majority of the sediment values fall between 1406 and 1497 HU, thus giving the slice a comparatively small variance of 819.7. The second slice, located 166.1 meters below sea floor, features a lower density sediment matrix disturbed by burrow tubes and the inclusion of a high density mineral. As a result, the Hounsfield Units have a larger variance of 1,197.5, which is a result of sediment matrix values that range from 1220 to 1260 HU, the high-density mineral value of 1920 HU and the burrow tubes that range from 1300 to 1410 HU. Analyzing this variance allows us to observe changes in the sediment matrix and more specifically capture where, and to what extent, the burrow tubes deviate from the sediment matrix. Future research will correlate changes in variance due to bioturbation to other features indicating ocean temperatures and nutrient flux, such as foraminifera counts and oxygen isotope data.
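A hedged sketch of the slice-variance idea described above: given a 3-D XCT volume in Hounsfield Units (axis 0 = depth), compute the variance of each flat-lying slice; bioturbated layers that mix high- and low-density material show up as variance peaks. The array `volume` is a synthetic stand-in for real scanner output, with invented HU values.

```python
import numpy as np

rng = np.random.default_rng(1)
volume = rng.normal(1450.0, 30.0, size=(200, 64, 64))   # synthetic core, HU
volume[120, 10:20, 10:20] = 1920.0                      # a high-density inclusion

# Variance of pixel values over each horizontal slice.
slice_var = volume.reshape(volume.shape[0], -1).var(axis=1)
for z in np.argsort(slice_var)[-3:]:
    print(f"slice {z}: variance {slice_var[z]:.1f} HU^2")
```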
Repeatability and reproducibility of ribotyping and its computer interpretation.
Lefresne, Gwénola; Latrille, Eric; Irlinger, Françoise; Grimont, Patrick A D
2004-04-01
Many molecular typing methods are difficult to interpret because their repeatability (within-laboratory variance) and reproducibility (between-laboratory variance) have not been thoroughly studied. In the present work, ribotyping of coryneform bacteria was the basis of a study involving within-gel and between-gel repeatability and between-laboratory reproducibility (two laboratories involved). The effect of different technical protocols, different algorithms, and different software for fragment size determination was studied. Analysis of variance (ANOVA) showed, within a laboratory, that there was no significant added variance between gels. However, between-laboratory variance was significantly higher than within-laboratory variance. This may be due to the use of different protocols. An experimental function was calculated to transform the data and make them compatible (i.e., erase the between-laboratory variance). The use of different interpolation algorithms (spline, Schaffer and Sederoff) was a significant source of variation in one laboratory only. The use of either Taxotron (Institut Pasteur) or GelCompar (Applied Maths) was not a significant source of added variation when the same algorithm (spline) was used. However, the use of Bio-Gene (Vilber Lourmat) dramatically increased the error (within laboratory, within gel) in one laboratory, while decreasing the error in the other laboratory; this might be due to automatic normalization attempts. These results were taken into account for building a database and performing automatic pattern identification using Taxotron. Conversion of the data considerably improved the identification of patterns irrespective of the laboratory in which the data were obtained.
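For readers unfamiliar with the variance-components arithmetic behind such an ANOVA, here is a generic sketch (not the authors' software): replicate fragment-size calls grouped by gel, with the added between-gel variance estimated as (MS_between − MS_within)/n for a balanced design. The data are invented for illustration only.

```python
import numpy as np

gels = [np.array([5.02, 4.98, 5.01]),   # replicate size calls, gel 1
        np.array([5.05, 5.03, 5.04]),   # gel 2
        np.array([4.97, 5.00, 4.99])]   # gel 3
k, n = len(gels), len(gels[0])
grand = np.mean([x.mean() for x in gels])

ms_between = n * sum((x.mean() - grand) ** 2 for x in gels) / (k - 1)
ms_within = sum(((x - x.mean()) ** 2).sum() for x in gels) / (k * (n - 1))
var_between = max(0.0, (ms_between - ms_within) / n)   # added variance between gels
print(f"within-gel variance:  {ms_within:.5f}")
print(f"between-gel variance: {var_between:.5f}")
```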
Systems Engineering Programmatic Estimation Using Technology Variance
NASA Technical Reports Server (NTRS)
Mog, Robert A.
2000-01-01
Unique and innovative system programmatic estimation is conducted using the variance of the packaged technologies. Covariance analysis is performed on the subsystems and components comprising the system of interest. Technological "return" and "variation" parameters are estimated. These parameters are combined with the model error to arrive at a measure of system development stability. The resulting estimates provide valuable information concerning the potential cost growth of the system under development.
Analysis of variances of quasirapidities in collisions of gold nuclei with track-emulsion nuclei
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gulamov, K. G.; Zhokhova, S. I.; Lugovoi, V. V., E-mail: lugovoi@uzsci.net
2012-08-15
A new method of an analysis of variances was developed for studying n-particle correlations of quasirapidities in nucleus-nucleus collisions for a large constant number n of particles. Formulas that generalize the results of the respective analysis to various values of n were derived. Calculations on the basis of simple models indicate that the method is applicable, at least for n ≥ 100. Quasirapidity correlations statistically significant at a level of 36 standard deviations were discovered in collisions between gold nuclei and track-emulsion nuclei at an energy of 10.6 GeV per nucleon. The experimental data obtained in our present study are contrasted against the theory of nucleus-nucleus collisions.
On Stabilizing the Variance of Dynamic Functional Brain Connectivity Time Series.
Thompson, William Hedley; Fransson, Peter
2016-12-01
Assessment of dynamic functional brain connectivity based on functional magnetic resonance imaging (fMRI) data is an increasingly popular strategy to investigate temporal dynamics of the brain's large-scale network architecture. Current practice when deriving connectivity estimates over time is to use the Fisher transformation, which aims to stabilize the variance of correlation values that fluctuate around varying true correlation values. It is, however, unclear how well the stabilization of signal variance performed by the Fisher transformation works for each connectivity time series, when the true correlation is assumed to be fluctuating. This is of importance because many subsequent analyses either assume or perform better when the time series have stable variance or adheres to an approximate Gaussian distribution. In this article, using simulations and analysis of resting-state fMRI data, we analyze the effect of applying different variance stabilization strategies on connectivity time series. We focus our investigation on the Fisher transformation, the Box-Cox (BC) transformation and an approach that combines both transformations. Our results show that, if the intention of stabilizing the variance is to use metrics on the time series, where stable variance or a Gaussian distribution is desired (e.g., clustering), the Fisher transformation is not optimal and may even skew connectivity time series away from being Gaussian. Furthermore, we show that the suboptimal performance of the Fisher transformation can be substantially improved by including an additional BC transformation after the dynamic functional connectivity time series has been Fisher transformed.
Trend Analysis Using Microcomputers.
ERIC Educational Resources Information Center
Berger, Carl F.
A trend analysis statistical package and additional programs for the Apple microcomputer are presented. They illustrate strategies of data analysis suitable to the graphics and processing capabilities of the microcomputer. The programs analyze data sets using examples of: (1) analysis of variance with multiple linear regression; (2) exponential…
A new stratification of mourning dove call-count routes
Blankenship, L.H.; Humphrey, A.B.; MacDonald, D.
1971-01-01
The mourning dove (Zenaidura macroura) call-count survey is a nationwide audio-census of breeding mourning doves. Recent analyses of the call-count routes have utilized a stratification based upon physiographic regions of the United States. An analysis of 5 years of call-count data, based upon stratification using potential natural vegetation, has demonstrated that this new stratification results in strata with greater homogeneity than the physiographic strata, provides lower error variance, and hence generates greater precision in the analysis without an increase in call-count routes. Error variance was reduced approximately 30 percent for the contiguous United States. This indicates that future analysis based upon the new stratification will result in an increased ability to detect significant year-to-year changes.
Principal components analysis in clinical studies.
Zhang, Zhongheng; Castelló, Adela
2017-09-01
In multivariate analysis, independent variables are usually correlated to each other, which can introduce multicollinearity in the regression models. One approach to solve this problem is to apply principal components analysis (PCA) over these variables. This method uses orthogonal transformation to represent sets of potentially correlated variables with principal components (PC) that are linearly uncorrelated. PCs are ordered so that the first PC has the largest possible variance and only some components are selected to represent the correlated variables. As a result, the dimension of the variable space is reduced. This tutorial illustrates how to perform PCA in the R environment, using as an example a simulated dataset in which two PCs are responsible for the majority of the variance in the data. Furthermore, the visualization of PCA results is highlighted.
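The tutorial above works in R; the following is only a minimal Python analogue of the same workflow: simulate correlated predictors driven by two latent components, then check how much variance the first two PCs capture before using the scores in a regression.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
latent = rng.standard_normal((500, 2))                 # two underlying components
loadings = rng.standard_normal((2, 6))                 # six observed variables
X = latent @ loadings + 0.1 * rng.standard_normal((500, 6))

pca = PCA().fit(X)
print("explained variance ratio:", pca.explained_variance_ratio_.round(3))

# Using the first two PC scores in a regression avoids the multicollinearity of X.
scores = pca.transform(X)[:, :2]
```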
Large amplitude MHD waves upstream of the Jovian bow shock
NASA Technical Reports Server (NTRS)
Goldstein, M. L.; Smith, C. W.; Matthaeus, W. H.
1983-01-01
Observations of large amplitude magnetohydrodynamics (MHD) waves upstream of Jupiter's bow shock are analyzed. The waves are found to be right circularly polarized in the solar wind frame which suggests that they are propagating in the fast magnetosonic mode. A complete spectral and minimum variance eigenvalue analysis of the data was performed. The power spectrum of the magnetic fluctuations contains several peaks. The fluctuations at 2.3 mHz have a direction of minimum variance along the direction of the average magnetic field. The direction of minimum variance of these fluctuations lies at approximately 40 deg. to the magnetic field and is parallel to the radial direction. We argue that these fluctuations are waves excited by protons reflected off the Jovian bow shock. The inferred speed of the reflected protons is about two times the solar wind speed in the plasma rest frame. A linear instability analysis is presented which suggests an explanation for many of the observed features of the observations.
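The minimum variance eigenvalue analysis mentioned above is a standard, compact computation; here is a hedged sketch of it on synthetic stand-in data: build the covariance matrix of the magnetic-field components and take the eigenvector of the smallest eigenvalue as the direction of minimum variance (the inferred propagation direction for a plane wave).

```python
import numpy as np

rng = np.random.default_rng(3)
N = 1000
# Synthetic field: two fluctuating components and one quiet component.
B = np.column_stack([np.cos(np.linspace(0, 20, N)),
                     np.sin(np.linspace(0, 20, N)),
                     0.05 * rng.standard_normal(N)])

cov = np.cov(B, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
k_hat = eigvecs[:, 0]                       # minimum-variance direction
print("eigenvalues:", eigvals.round(4))
print("minimum-variance direction:", k_hat.round(3))
```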
Bonetti, Debbie; Johnston, Marie; Clarkson, Jan E; Grimshaw, Jeremy; Pitts, Nigel B; Eccles, Martin; Steen, Nick; Thomas, Ruth; Maclennan, Graeme; Glidewell, Liz; Walker, Anne
2010-04-08
Psychological models are used to understand and predict behaviour in a wide range of settings, but have not been consistently applied to health professional behaviours, and the contribution of differing theories is not clear. This study explored the usefulness of a range of models to predict an evidence-based behaviour -- the placing of fissure sealants. Measures were collected by postal questionnaire from a random sample of general dental practitioners (GDPs) in Scotland. Outcomes were behavioural simulation (scenario decision-making), and behavioural intention. Predictor variables were from the Theory of Planned Behaviour (TPB), Social Cognitive Theory (SCT), Common Sense Self-regulation Model (CS-SRM), Operant Learning Theory (OLT), Implementation Intention (II), Stage Model, and knowledge (a non-theoretical construct). Multiple regression analysis was used to examine the predictive value of each theoretical model individually. Significant constructs from all theories were then entered into a 'cross theory' stepwise regression analysis to investigate their combined predictive value. Behavioural simulation - theory level variance explained was: TPB 31%; SCT 29%; II 7%; OLT 30%. Neither CS-SRM nor stage explained significant variance. In the cross theory analysis, habit (OLT), timeline acute (CS-SRM), and outcome expectancy (SCT) entered the equation, together explaining 38% of the variance. Behavioural intention - theory level variance explained was: TPB 30%; SCT 24%; OLT 58%, CS-SRM 27%. GDPs in the action stage had significantly higher intention to place fissure sealants. In the cross theory analysis, habit (OLT) and attitude (TPB) entered the equation, together explaining 68% of the variance in intention. The study provides evidence that psychological models can be useful in understanding and predicting clinical behaviour. Taking a theory-based approach enables the creation of a replicable methodology for identifying factors that may predict clinical behaviour and so provide possible targets for knowledge translation interventions. Results suggest that more evidence-based behaviour may be achieved by influencing beliefs about the positive outcomes of placing fissure sealants and building a habit of placing them as part of patient management. However a number of conceptual and methodological challenges remain.
Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats
Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.
2012-01-01
This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected using ADCP from moving boats from three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements associated with different sampling times, assuming uncorrelated flow fields is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.
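A hedged toy illustration of the scaling behind this result, not the paper's full model: if successive samples of the flow field are effectively uncorrelated, the variance of an estimate averaged over N samples falls as sigma²/N, so doubling the exposure time halves the random-error variance. All numbers below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)
sigma = 0.08                                  # relative turbulence level (invented)
for n_samples in (25, 50, 100, 200):
    # 5000 Monte Carlo repetitions of an N-sample average of unit discharge.
    means = rng.normal(1.0, sigma, size=(5000, n_samples)).mean(axis=1)
    print(f"N={n_samples:4d}  empirical var={means.var():.2e}  "
          f"sigma^2/N={sigma**2 / n_samples:.2e}")
```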
Piazza, Alexander M; Binversie, Emily E; Baker, Lauren A; Nemke, Brett; Sample, Susannah J; Muir, Peter
2017-04-01
OBJECTIVE To determine whether walking at specific ranges of absolute and relative (V*) velocity would aid efficient capture of gait trial data with low ground reaction force (GRF) variance in a heterogeneous sample of dogs. ANIMALS 17 clinically normal dogs of various breeds, ages, and sexes. PROCEDURES Each dog was walked across a force platform at its preferred velocity, with controlled acceleration within 0.5 m/s². Ranges in V* were created for height at the highest point of the shoulders (withers; WHV*). Variance effects from 8 walking absolute velocity ranges and associated WHV* ranges were examined by means of repeated-measures ANCOVA. RESULTS The individual dog effect provided the greatest contribution to variance. Narrow velocity ranges typically resulted in capture of a smaller percentage of valid trials and were not consistently associated with lower variance. The WHV* range of 0.33 to 0.46 allowed capture of valid trials efficiently, with no significant effects on peak vertical force and vertical impulse. CONCLUSIONS AND CLINICAL RELEVANCE Dogs with severe lameness may be unable to trot or may have a decline in mobility with gait trial repetition. Gait analysis involving evaluation of individual dogs at their preferred absolute velocity, such that dogs are evaluated at a similar V*, may facilitate efficient capture of valid trials without significant effects on GRF. Use of individual velocity ranges derived from a WHV* range of 0.33 to 0.46 can account for heterogeneity and appears suitable for use in clinical trials involving dogs at a walking gait.
Time-Frequency Variability of Kuroshio Meanders in Tokara Strait
NASA Astrophysics Data System (ADS)
Nakamura, H.; Yamashiro, T.; Nishina, A.; Ichikawa, H.
2006-12-01
The Kuroshio path in the northern Okinawa Trough, Japan, located between the continental slope and Tokara Strait, exhibits meandering motions with largest displacements in the East China Sea; these motions have dominant periods in the broad range of 30-90 days. Understanding the dynamic nature of such meanders is crucial to predicting small and large meanders of the Kuroshio path off the south coast of Japan. Previous numerical simulations suggest that the Kuroshio path meanders in the northern Okinawa Trough become nonstationary in variance because of changes in background states of the Kuroshio in the northern Okinawa Trough, but a detailed analysis based on observed data has yet to be performed. The purpose of the present study is to provide a detailed description of the time-frequency variability of Kuroshio path meanders observed in Tokara Strait. Three Kuroshio indicators were subjected to wavelet analysis for the period 1984-2004: the Kuroshio Position Index (KPI) in Tokara Strait, Kuroshio Volume Transport (KVT) in Tokara Strait, and the basal current velocity of the Kuroshio on the continental slope in the northern Okinawa Trough. The 30-90 day variance of the KPI shows a season-fixed nature, with larger amplitudes in the period December-July. The amplitude of the variance in this phenomenon is also modulated by interannual variations, with small variance recorded during 1989-1992, large variance during 1993-1998, and a return to small variance from 1999-2003. This interannual variation is positively correlated with that of the KVT. The largest variance of the KPI during February-April precedes the largest volume transport in April-May by about 1 month, suggesting that eddy vorticity flux strengthens the mean current field. Previous numerical simulations reproduce the recirculation gyre as a cyclonic eddy in the area between the continental slope and Tokara Strait; this gyre is analogous to the northern recirculation gyre associated with the eastward jet. On the basis of data from a moored current-meter situated on the continental slope, the genesis of the 30-90 day meanders within Tokara Strait is ascribed to nonlinear energy transfer from 8-25 day meanders on the continental slope.
Pieterse, Alex L; Carter, Robert T; Evans, Sarah A; Walter, Rebecca A
2010-07-01
In this study, we examined the association among perceptions of racial and/or ethnic discrimination, racial climate, and trauma-related symptoms among 289 racially diverse college undergraduates. Study measures included the Perceived Stress Scale, the Perceived Ethnic Discrimination Questionnaire, the Posttraumatic Stress Disorder Checklist-Civilian Version, and the Racial Climate Scale. Results of a multivariate analysis of variance (MANOVA) indicated that Asian and Black students reported more frequent experiences of discrimination than did White students. Additionally, the MANOVA indicated that Black students perceived the campus racial climate as being more negative than did White and Asian students. A hierarchical regression analysis showed that when controlling for generic life stress, perceptions of discrimination contributed an additional 10% of variance in trauma-related symptoms for Black students, and racial climate contributed an additional 7% of variance in trauma symptoms for Asian students. (c) 2010 APA, all rights reserved.
Taylor, Jacquelyn Y.; Washington, Olivia G. M.; Artinian, Nancy T.; Lichtenberg, Peter
2010-01-01
African Americans are at greater risk for hypertension than are other ethnic groups. This study examined relationships among hypertension, stress, and depression among 120 urban African American parents and grandparents. This study is a secondary analysis of a larger nurse-managed randomized clinical trial testing the effectiveness of a telemonitoring intervention. Baseline data used in analyses, with the exception of medication compliance, were collected at 3 months' follow-up. Health indicators, perceived stress, and social support were examined to determine their relationship with depressive symptoms. A total of 48% of the variance in depressive symptomology was explained by perceived stress and support. Health indicators including average systolic blood pressure explained 21% of the variance in depressive symptomology. The regression analysis using average diastolic blood pressure explained 26% of the variance in depressive symptomology. Based on study results, African Americans should be assessed for perceived stress and social support to alleviate depressive symptomology. PMID:18843828
Why weight? Modelling sample and observational level variability improves power in RNA-seq analyses.
Liu, Ruijie; Holik, Aliaksei Z; Su, Shian; Jansz, Natasha; Chen, Kelan; Leong, Huei San; Blewitt, Marnie E; Asselin-Labat, Marie-Liesse; Smyth, Gordon K; Ritchie, Matthew E
2015-09-03
Variations in sample quality are frequently encountered in small RNA-sequencing experiments, and pose a major challenge in a differential expression analysis. Removal of high variation samples reduces noise, but at a cost of reducing power, thus limiting our ability to detect biologically meaningful changes. Similarly, retaining these samples in the analysis may not reveal any statistically significant changes due to the higher noise level. A compromise is to use all available data, but to down-weight the observations from more variable samples. We describe a statistical approach that facilitates this by modelling heterogeneity at both the sample and observational levels as part of the differential expression analysis. At the sample level this is achieved by fitting a log-linear variance model that includes common sample-specific or group-specific parameters that are shared between genes. The estimated sample variance factors are then converted to weights and combined with observational level weights obtained from the mean-variance relationship of the log-counts-per-million using 'voom'. A comprehensive analysis involving both simulations and experimental RNA-sequencing data demonstrates that this strategy leads to a universally more powerful analysis and fewer false discoveries when compared to conventional approaches. This methodology has wide application and is implemented in the open-source 'limma' package. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
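The method above is implemented in the R 'limma' package; the following is only a toy Python sketch of the sample-weighting idea it describes: estimate how much each sample deviates from the gene-wise means, turn that into a per-sample variance factor, and down-weight the noisier samples. The data and noise levels are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
n_genes, n_samples = 2000, 6
# log-CPM-like matrix; the last sample is deliberately more variable.
logcpm = rng.normal(5.0, 1.0, size=(n_genes, 1)) + \
         rng.standard_normal((n_genes, n_samples)) * \
         np.array([0.3, 0.3, 0.3, 0.3, 0.3, 1.2])

resid = logcpm - logcpm.mean(axis=1, keepdims=True)
sample_var = (resid ** 2).mean(axis=0)        # crude per-sample variance factor
weights = 1.0 / sample_var                    # inverse-variance sample weights
weights /= weights.mean()                     # normalize for readability
print("sample weights:", weights.round(2))    # the degraded sample gets a low weight
```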
Ghosh, Debasree; Chattopadhyay, Parimal
2012-06-01
The objective of the work was to use the method of quantitative descriptive analysis (QDA) to describe the sensory attributes of fermented food products prepared with the incorporation of lactic cultures. Panellists were selected and trained to evaluate various attributes, especially color and appearance, body texture, flavor, overall acceptability and acidity, of fermented food products like cow milk curd and soymilk curd, idli, sauerkraut and probiotic ice cream. Principal component analysis (PCA) identified six significant principal components that accounted for more than 90% of the variance in the sensory attribute data. Overall product quality was modelled as a function of principal components using multiple least squares regression (R² = 0.8). The results from PCA were statistically analyzed by analysis of variance (ANOVA). These findings demonstrate the utility of quantitative descriptive analysis for identifying and measuring the fermented food product attributes that are important for consumer acceptability.
Stability of steady hand force production explored across spaces and methods of analysis.
de Freitas, Paulo B; Freitas, Sandra M S F; Lewis, Mechelle M; Huang, Xuemei; Latash, Mark L
2018-06-01
We used the framework of the uncontrolled manifold (UCM) hypothesis and explored the reliability of several outcome variables across different spaces of analysis during a very simple four-finger accurate force production task. Fourteen healthy, young adults performed the accurate force production task with each hand on 3 days. Small spatial finger perturbations were generated by the "inverse piano" device three times per trial (lifting the fingers 1 cm/0.5 s and lowering them). The data were analyzed using the following main methods: (1) computation of indices of the structure of inter-trial variance and motor equivalence in the space of finger forces and finger modes, and (2) analysis of referent coordinates and apparent stiffness values for the hand. Maximal voluntary force and the index of enslaving (unintentional finger force production) showed good to excellent reliability. Strong synergies stabilizing total force were reflected in both structure of variance and motor equivalence indices. Variance within the UCM and the index of motor equivalent motion dropped over the trial duration and showed good to excellent reliability. Variance orthogonal to the UCM and the index of non-motor equivalent motion dropped over the 3 days and showed poor to moderate reliability. Referent coordinate and apparent stiffness indices co-varied strongly and both showed good reliability. In contrast, the computed index of force stabilization showed poor reliability. The findings are interpreted within the scheme of neural control with referent coordinates involving the hierarchy of two basic commands, the r-command and c-command. The data suggest natural drifts in the finger force space, particularly within the UCM. We interpret these drifts as reflections of a trade-off between stability and optimization of action. The implications of these findings for the UCM framework and future clinical applications are explored in the discussion. Indices of the structure of variance and motor equivalence show good reliability and can be recommended for applied studies.
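For orientation, here is a compact sketch of the UCM variance decomposition for four-finger total force, under simplified assumptions and with simulated finger-force data: the Jacobian of total force is J = [1,1,1,1], the UCM is its null space, and inter-trial variance is split into a component within the UCM (leaving force unchanged) and one orthogonal to it (changing force), each normalized per dimension.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(11)
trials = rng.normal(2.5, 0.3, size=(50, 4))        # finger forces over 50 trials
trials += rng.normal(0.0, 0.2, size=(50, 1))       # shared drift (force-changing)

dev = trials - trials.mean(axis=0)                 # deviations from the mean pattern
J = np.ones((1, 4))                                # Jacobian of total force
ucm = null_space(J)                                # 4x3 orthonormal basis of the UCM
v_ucm = np.sum((dev @ ucm) ** 2) / (3 * len(dev))  # variance per UCM dimension
v_ort = np.sum((dev @ J.T / np.linalg.norm(J)) ** 2) / (1 * len(dev))
# One common synergy index: normalized difference of the two components.
print(f"V_UCM={v_ucm:.4f}  V_ORT={v_ort:.4f}  "
      f"dV={(v_ucm - v_ort) / (v_ucm + v_ort):.2f}")
```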
Development of the Seasonal Migrant Agricultural Worker Stress Scale in Sanliurfa, Southeast Turkey.
Simsek, Zeynep; Ersin, Fatma; Kirmizitoprak, Evin
2016-01-01
Stress is one of the main causes of health problems, especially mental disorders. These health problems cause a significant amount of ability loss and increase cost. It is estimated that by 2020, mental disorders will constitute 15% of the total disease burden, and depression will rank second only after ischemic heart disease. Environmental experiences are paramount in increasing the liability of mental disorders in those who constantly face sustained high levels of stress. The objective of this study was to develop a stress scale for seasonal migrant agricultural workers aged 18 years and older. The sample consisted of 270 randomly selected seasonal migrant agricultural workers. The average age of the participants was 33.1 ± 14, and 50.7% were male. The Cronbach alpha coefficient and test-retest methods were used for reliability analyses. Factor analysis was performed to assess the construct validity of the scale, and the Kaiser-Meyer-Olkin coefficient and Bartlett test were used to determine the suitability of the data for factor analysis. In the reliability analyses, the Cronbach alpha coefficient of internal consistency was calculated as .96, and the test-retest reliability coefficient was .81. In the exploratory factor analysis for validity of the scale, four factors were obtained, and the factors represented workplace physical conditions (25.7% of the total variance), workplace psychosocial and economic factors (19.3% of the total variance), workplace health problems (15.2% of the total variance), and school problems (10.1% of the total variance). The four factors explained 70.3% of the total variance. As a result of the expert opinions and analyses, a stress scale with 48 items was developed. The highest score obtainable from the scale was 144, and the lowest score was 0. An increase in the score indicates an increase in the stress level. The findings show that the scale is a valid and reliable assessment instrument that can be used in epidemiological research and planning interventions.
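A minimal sketch of the Cronbach alpha computation used above for the reliability analysis: alpha = k/(k−1) · (1 − sum of item variances / variance of the total score). The response matrix here is simulated, not the study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: rows = respondents, columns = scale items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(6)
ability = rng.standard_normal((270, 1))                  # shared latent trait
items = ability + 0.5 * rng.standard_normal((270, 10))   # 10 correlated items
print(f"alpha = {cronbach_alpha(items):.2f}")
```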
Bankruptcy Prediction in the Construction Industry: Financial Ratio Analysis
1989-08-01
financial reporting between the two industries. Using this information, an effort will be made to modify the models so that they are applicable to the construction industry. Keywords: Analysis of variance,
Analysis of longitudinal "time series" data in toxicology.
Cox, C; Cory-Slechta, D A
1987-02-01
Studies focusing on chronic toxicity or on the time course of toxicant effect often involve repeated measurements or longitudinal observations of endpoints of interest. Experimental design considerations frequently necessitate between-group comparisons of the resulting trends. Typically, procedures such as the repeated-measures analysis of variance have been used for statistical analysis, even though the required assumptions may not be satisfied in some circumstances. This paper describes an alternative analytical approach which summarizes curvilinear trends by fitting cubic orthogonal polynomials to individual profiles of effect. The resulting regression coefficients serve as quantitative descriptors which can be subjected to group significance testing. Randomization tests based on medians are proposed to provide a comparison of treatment and control groups. Examples from the behavioral toxicology literature are considered, and the results are compared to more traditional approaches, such as repeated-measures analysis of variance.
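Below is a sketch of the workflow this abstract describes, under simplifying assumptions: np.polyfit stands in for true orthogonal polynomials, and group sizes, noise levels and the tested coefficient are invented. Each subject's profile is reduced to polynomial coefficients, and groups are compared with a randomization test on the difference of medians.

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0.0, 1.0, 12)                       # measurement times
control = [2*t + rng.normal(0, .2, t.size) for _ in range(10)]
treated = [2*t - 1.5*t**2 + rng.normal(0, .2, t.size) for _ in range(10)]

def quad_coef(y):
    # Cubic fit; index 1 is the coefficient of t^2 (highest degree first).
    return np.polyfit(t, y, deg=3)[1]

c = np.array([quad_coef(y) for y in control])
x = np.array([quad_coef(y) for y in treated])
observed = abs(np.median(x) - np.median(c))

pooled = np.concatenate([c, x])
count = 0
for _ in range(5000):                               # randomization distribution
    rng.shuffle(pooled)
    count += abs(np.median(pooled[:10]) - np.median(pooled[10:])) >= observed
print(f"randomization p ~ {count / 5000:.3f}")
```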
Jha, Dilip Kumar; Vinithkumar, Nambali Valsalan; Sahu, Biraja Kumar; Dheenan, Palaiya Sukumaran; Das, Apurba Kumar; Begum, Mehmuna; Devi, Marimuthu Prashanthi; Kirubagaran, Ramalingam
2015-07-15
Chidiyatappu Bay is one of the least disturbed marine environments of Andaman & Nicobar Islands, the union territory of India. Oceanic flushing from southeast and northwest direction is prevalent in this bay. Further, anthropogenic activity is minimal in the adjoining environment. Considering the pristine nature of this bay, seawater samples collected from 12 sampling stations covering three seasons were analyzed. Principal Component Analysis (PCA) revealed 69.9% of total variance and exhibited strong factor loading for nitrite, chlorophyll a and phaeophytin. In addition, analysis of variance (ANOVA-one way), regression analysis, box-whisker plots and Geographical Information System based hot spot analysis further simplified and supported multivariate results. The results obtained are important to establish reference conditions for comparative study with other similar ecosystems in the region. Copyright © 2015 Elsevier Ltd. All rights reserved.
A two-step sensitivity analysis for hydrological signatures in Jinhua River Basin, East China
NASA Astrophysics Data System (ADS)
Pan, S.; Fu, G.; Chiang, Y. M.; Xu, Y. P.
2016-12-01
Owing to model complexity and the large number of parameters, calibration and sensitivity analysis are difficult processes for distributed hydrological models. In this study, a two-step sensitivity analysis approach is proposed for analyzing the hydrological signatures in Jinhua River Basin, East China, using the Distributed Hydrology-Soil-Vegetation Model (DHSVM). A rough sensitivity analysis is first conducted via analysis of variance to obtain preliminary influential parameters, greatly reducing the number of parameters from eighty-three to sixteen. Afterwards, the sixteen parameters are further analyzed with a variance-based global sensitivity analysis, i.e., Sobol's method, to achieve robust sensitivity rankings and parameter contributions. Parallel computing is applied to reduce the computational burden of the variance-based sensitivity analysis. The results reveal that only a few model parameters are significantly sensitive, including the rain LAI multiplier, lateral conductivity, porosity, field capacity, wilting point of clay loam, understory monthly LAI, understory minimum resistance and root zone depths of croplands. Finally, several hydrological signatures are used to investigate the performance of DHSVM. Results show that high values of the efficiency criteria did not guarantee excellent performance on the hydrological signatures. For most samples from the Sobol analysis, water yield was simulated very well, but minimum and maximum annual daily runoffs were underestimated, and most seven-day minimum runoffs were overestimated. Nevertheless, good performance on these three signatures still exists in a number of samples. Analysis of peak flow shows that small and medium floods are simulated well, while large floods are slightly underestimated. The work in this study supports further multi-objective calibration of the DHSVM model and indicates where to improve the reliability and credibility of the model simulation.
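For the variance-based (Sobol') step, a hedged sketch using the SALib package is shown below; `toy_model` merely stands in for a DHSVM run, the parameter names are taken from the abstract's sensitive parameters, and the bounds are invented.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["lateral_conductivity", "porosity", "field_capacity"],
    "bounds": [[1e-5, 1e-3], [0.3, 0.6], [0.1, 0.4]],   # illustrative bounds
}

def toy_model(x):
    # Placeholder for a full hydrological-model run returning one signature.
    return x[0] * 1e4 + x[1] ** 2 + 0.1 * x[1] * x[2]

params = saltelli.sample(problem, 1024)            # quasi-random Saltelli design
y = np.array([toy_model(x) for x in params])
si = sobol.analyze(problem, y)
print("first-order indices:", np.round(si["S1"], 3))
print("total-order indices:", np.round(si["ST"], 3))
```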
Variance of foot biomechanical parameters across age groups for the elderly people in Romania
NASA Astrophysics Data System (ADS)
Deselnicu, D. C.; Vasilescu, A. M.; Militaru, G.
2017-10-01
The paper presents the results of a fieldwork study conducted in order to analyze major causal factors that influence the foot deformities and pathologies of elderly women in Romania. The study has an exploratory and descriptive nature and uses quantitative methodology. The sample consisted of 100 elderly women from Romania, ranging from 55 to over 75 years of age. The collected data were analyzed along multiple dimensions using statistical analysis software. The analysis of variance demonstrated significant differences across age groups in terms of several biomechanical parameters, such as travel speed, toe-off phase and support phase, for these elderly women.
Minimum-variance Brownian motion control of an optically trapped probe.
Huang, Yanan; Zhang, Zhipeng; Menq, Chia-Hsiang
2009-10-20
This paper presents a theoretical and experimental investigation of the Brownian motion control of an optically trapped probe. The Langevin equation is employed to describe the motion of the probe experiencing random thermal force and optical trapping force. Since active feedback control is applied to suppress the probe's Brownian motion, actuator dynamics and measurement delay are included in the equation. The equation of motion is simplified to a first-order linear differential equation and transformed to a discrete model for the purpose of controller design and data analysis. The derived model is experimentally verified by comparing the model prediction to the measured response of a 1.87 µm trapped probe subject to proportional control. It is then employed to design the optimal controller that minimizes the variance of the probe's Brownian motion. Theoretical analysis is derived to evaluate the control performance of a specific optical trap. Both experiment and simulation are used to validate the design as well as the theoretical analysis, and to illustrate the performance envelope of the active control. Moreover, adaptive minimum variance control is implemented to maintain optimal performance when the system is time varying, as when operating the actively controlled optical trap in a complex environment.
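A toy sketch of the discrete first-order model described above, with invented parameter values: x[k+1] = a·x[k] + b·u[k] + w[k] under proportional feedback u = −g·x. The closed-loop pole is (a − b·g), and the stationary variance sigma_w²/(1 − pole²) is minimized as the pole approaches zero.

```python
import numpy as np

a, b, sigma_w = 0.95, 1.0, 1.0       # open-loop pole, input gain, noise level
rng = np.random.default_rng(8)

for g in (0.0, 0.4, a / b):          # no control ... pole-cancelling gain
    x, xs = 0.0, []
    for _ in range(200_000):
        u = -g * x                   # proportional feedback
        x = a * x + b * u + sigma_w * rng.standard_normal()
        xs.append(x)
    pole = a - b * g
    print(f"g={g:.2f}  empirical var={np.var(xs):7.2f}  "
          f"theory={sigma_w**2 / (1 - pole**2):7.2f}")
```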
A soil alteration index based on phospholipid fatty acids.
Puglisi, Edoardo; Nicelli, Marco; Capri, Ettore; Trevisan, Marco; Del Re, Attilio A M
2005-12-01
Phospholipid fatty acid (PLFA) analysis has gained great importance in the study of soil microbial community structure. This structure can give an indication of soil status. The purpose of the present paper is to analyse PLFA patterns in altered agricultural soils in order to develop a soil status alteration index. Soils subjected either to intensive agricultural exploitation, or to overflow by municipal and industrial wastes, or to irrigation with saline waters were analysed for PLFA content and compared to adjacent untreated soils by means of different statistical techniques. Principal component analysis separated PLFAs into three groups: unsaturated PLFAs (first axis, 48% of total variance), monounsaturated and cyclopropane PLFAs (second axis, 28% of total variance) and polyunsaturated PLFAs (third axis, 24% of total variance). By means of canonical discriminant analysis, a soil alteration index (SAI) was produced from 15 PLFAs using two data sets. A third data set was used to test the SAI's general validity, together with other data sets reported in the literature. The index validity was confirmed in most cases: SAI gave higher scores for control soils and was generally able to classify soils according to their reported degree of alteration.
Lifestyle Factors in U.S. Residential Electricity Consumption
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanquist, Thomas F.; Orr, Heather M.; Shui, Bin
2012-03-30
A multivariate statistical approach to lifestyle analysis of residential electricity consumption is described and illustrated. Factor analysis of selected variables from the 2005 U.S. Residential Energy Consumption Survey (RECS) identified five lifestyle factors reflecting social and behavioral choices associated with air conditioning, laundry usage, personal computer usage, climate zone of residence, and TV use. These factors were also estimated for 2001 RECS data. Multiple regression analysis using the lifestyle factors yields solutions accounting for approximately 40% of the variance in electricity consumption for both years. By adding the associated household and market characteristics of income, local electricity price and access to natural gas, variance accounted for is increased to approximately 54%. Income contributed only approximately 1% unique variance to the 2005 and 2001 models, indicating that lifestyle factors reflecting social and behavioral choices better account for consumption differences than income. This was not surprising given the 4-fold range of energy use at differing income levels. Geographic segmentation of factor scores is illustrated, and shows distinct clusters of consumption and lifestyle factors, particularly in suburban locations. The implications for tailored policy and planning interventions are discussed in relation to lifestyle issues.
Predictability Experiments With the Navy Operational Global Atmospheric Prediction System
NASA Astrophysics Data System (ADS)
Reynolds, C. A.; Gelaro, R.; Rosmond, T. E.
2003-12-01
There are several areas of research in numerical weather prediction and atmospheric predictability, such as targeted observations and ensemble perturbation generation, where it is desirable to combine information about the uncertainty of the initial state with information about potential rapid perturbation growth. Singular vectors (SVs) provide a framework to accomplish this task in a mathematically rigorous and computationally feasible manner. In this study, SVs are calculated using the tangent and adjoint models of the Navy Operational Global Atmospheric Prediction System (NOGAPS). The analysis error variance information produced by the NRL Atmospheric Variational Data Assimilation System is used as the initial-time SV norm. These VAR SVs are compared to SVs for which total energy is both the initial and final time norms (TE SVs). The incorporation of analysis error variance information has a significant impact on the structure and location of the SVs. This in turn has a significant impact on targeted observing applications. The utility and implications of such experiments in assessing the analysis error variance estimates will be explored. Computing support has been provided by the Department of Defense High Performance Computing Center at the Naval Oceanographic Office Major Shared Resource Center at Stennis, Mississippi.
Biochemical Phenotypes to Discriminate Microbial Subpopulations and Improve Outbreak Detection
Galar, Alicia; Kulldorff, Martin; Rudnick, Wallis; O'Brien, Thomas F.; Stelling, John
2013-01-01
Background Clinical microbiology laboratories worldwide constitute an invaluable resource for monitoring emerging threats and the spread of antimicrobial resistance. We studied the growing number of biochemical tests routinely performed on clinical isolates to explore their value as epidemiological markers. Methodology/Principal Findings Microbiology laboratory results from January 2009 through December 2011 from a 793-bed hospital stored in WHONET were examined. Variables included patient location, collection date, organism, and 47 biochemical and 17 antimicrobial susceptibility test results reported by Vitek 2. To identify biochemical tests that were particularly valuable (stable with repeat testing, but good variability across the species) or problematic (inconsistent results with repeat testing), three types of variance analyses were performed on isolates of K. pneumoniae: descriptive analysis of discordant biochemical results in same-day isolates, an average within-patient variance index, and generalized linear mixed model variance component analysis. Results: 4,200 isolates of K. pneumoniae were identified from 2,485 patients, 32% of whom had multiple isolates. The first two variance analyses highlighted SUCT, TyrA, GlyA, and GGT as “nuisance” biochemicals for which discordant within-patient test results impacted a high proportion of patient results, while dTAG had relatively good within-patient stability with good heterogeneity across the species. Variance component analyses confirmed the relative stability of dTAG, and identified additional biochemicals such as PHOS with a large between-patient to within-patient variance ratio. A reduced subset of biochemicals improved the robustness of strain definition for carbapenem-resistant K. pneumoniae. Surveillance analyses suggest that the reduced biochemical profile could improve the timeliness and specificity of outbreak detection algorithms. Conclusions The statistical approaches explored can improve the robust recognition of microbial subpopulations with routinely available biochemical test results, of value in the timely detection of outbreak clones and evolutionarily important genetic events. PMID:24391936
A Technique for Developing Probabilistic Properties of Earth Materials
1988-04-01
Department of Civil Engineering. Responsibility for coordinating this program was assigned to Mr. A. E. Jackson, Jr., GD, under the supervision of Dr... Notation from the report's symbol list: deformation is assumed to be that of a right circular cylinder; E = expected value; F = ratio of the between-sample variance to the within-sample variance; F = area...; ε_r = true radial strain; ε_z = axial strain; ... = number of increments in the covariance analysis; ν_L = loading Poisson's ratio; ν_UN = unloading Poisson's ratio.
1980-11-01
Table 2 - Analysis of Variance Summary Table: Leader Attributions. Table 3 - Mean Leader Attributions to Four Factors under Success... been examined. Studies (Julian, Hollander, and Regula, 1969; Michener and Lawler, 1975) have indicated that leaders of groups which fail suffer signif... the collective failure induction, one-way analyses of variance were performed comparing the three performance evaluation conditions with the no...
Random effects coefficient of determination for mixed and meta-analysis models
Demidenko, Eugene; Sargent, James; Onega, Tracy
2011-01-01
The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, R_r^2, that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If R_r^2 is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. A value of R_r^2 away from 0 indicates evidence of variance reduction in support of the mixed model. If the random effects coefficient of determination is close to 1, the variance of random effects is very large and random effects turn into free fixed effects, so that the model can be estimated using the dummy variable approach. We derive explicit formulas for R_r^2 in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for combination of 13 studies on tuberculosis vaccine. PMID:23750070
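For the random intercept case specifically, the definition above plausibly reduces to the share of the conditional variance carried by the random intercept, sigma_b²/(sigma_b² + sigma_e²); that reading, not code from the paper, is what this hedged sketch computes using statsmodels' MixedLM on simulated data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
groups = np.repeat(np.arange(30), 10)
b = rng.normal(0.0, 1.0, 30)[groups]            # random intercepts, variance 1
x = rng.standard_normal(groups.size)
y = 1.0 + 0.5 * x + b + rng.normal(0.0, 1.0, groups.size)
df = pd.DataFrame({"y": y, "x": x, "g": groups})

fit = smf.mixedlm("y ~ x", df, groups=df["g"]).fit()
sigma_b2 = float(fit.cov_re.iloc[0, 0])         # random-intercept variance
sigma_e2 = float(fit.scale)                     # residual variance
print(f"R_r^2 ~ {sigma_b2 / (sigma_b2 + sigma_e2):.2f}")   # true value here: 0.5
```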
FMRI group analysis combining effect estimates and their variances
Chen, Gang; Saad, Ziad S.; Nath, Audrey R.; Beauchamp, Michael S.; Cox, Robert W.
2012-01-01
Conventional functional magnetic resonance imaging (FMRI) group analysis makes two key assumptions that are not always justified. First, the data from each subject is condensed into a single number per voxel, under the assumption that within-subject variance for the effect of interest is the same across all subjects or is negligible relative to the cross-subject variance. Second, it is assumed that all data values are drawn from the same Gaussian distribution with no outliers. We propose an approach that does not make such strong assumptions, and present a computationally efficient frequentist approach to FMRI group analysis, which we term mixed-effects multilevel analysis (MEMA), that incorporates both the variability across subjects and the precision estimate of each effect of interest from individual subject analyses. On average, the more accurate tests result in higher statistical power, especially when conventional variance assumptions do not hold, or in the presence of outliers. In addition, various heterogeneity measures are available with MEMA that may assist the investigator in further improving the modeling. Our method allows group effect t-tests and comparisons among conditions and among groups. In addition, it has the capability to incorporate subject-specific covariates such as age, IQ, or behavioral data. Simulations were performed to illustrate power comparisons and the capability of controlling type I errors among various significance testing methods, and the results indicated that the testing statistic we adopted struck a good balance between power gain and type I error control. Our approach is instantiated in an open-source, freely distributed program that may be used on any dataset stored in the NIfTI (Neuroimaging Informatics Technology Initiative) format. To date, the main impediment for more accurate testing that incorporates both within- and cross-subject variability has been the high computational cost. Our efficient implementation makes this approach practical. We recommend its use in lieu of the less accurate approach in the conventional group analysis. PMID:22245637
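The paper's own program implements the full MEMA machinery; this sketch shows only the standard building block such analyses generalize: an inverse-variance, DerSimonian-Laird random-effects combination of per-subject effect estimates and their variances at a single voxel, with all numbers invented.

```python
import numpy as np

beta = np.array([0.8, 1.1, 0.4, 1.6, 0.9, 1.2])       # per-subject effects
var = np.array([0.10, 0.05, 0.20, 0.08, 0.07, 0.12])  # within-subject variances

w = 1.0 / var                                 # fixed-effect (precision) weights
beta_fe = np.sum(w * beta) / np.sum(w)
q = np.sum(w * (beta - beta_fe) ** 2)         # Cochran's Q heterogeneity statistic
tau2 = max(0.0, (q - (len(beta) - 1)) /
           (np.sum(w) - np.sum(w**2) / np.sum(w)))    # between-subject variance

w_re = 1.0 / (var + tau2)                     # random-effects weights
beta_re = np.sum(w_re * beta) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
print(f"tau^2={tau2:.3f}  effect={beta_re:.3f}  z={beta_re / se_re:.2f}")
```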
Rücker, Gerta; Schwarzer, Guido; Carpenter, James; Olkin, Ingram
2009-02-28
For clinical trials with binary endpoints there are a variety of effect measures, for example risk difference, risk ratio and odds ratio (OR). The choice of metric is not always straightforward and should reflect the clinical question. Additional issues arise if the event of interest is rare. In systematic reviews, trials with zero events in both arms are encountered and often excluded from the meta-analysis. The arcsine difference (AS) is a measure which is rarely considered in the medical literature. It appears to have considerable promise, because it handles zeros naturally, and its asymptotic variance does not depend on the event probability. This paper investigates the pros and cons of using the AS as a measure of intervention effect. We give a pictorial representation of its meaning and explore its properties in relation to other measures. Based on analytical calculation of the variance of the arcsine transformation, a more conservative variance estimate for the rare event setting is proposed. Motivated by a published meta-analysis in cardiac surgery, we examine the statistical properties of the various metrics in the rare event setting. We find the variance estimate of the AS to be more stable than that of the log-OR, even if events are rare. However, parameter estimation is biased if the groups are markedly unbalanced. Though, from a theoretical viewpoint, the AS is a natural choice, its practical use is likely to continue to be limited by its less direct interpretation. Copyright (c) 2008 John Wiley & Sons, Ltd.
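A short sketch of the arcsine difference and the variance property discussed above: AS = arcsin(sqrt(p1)) − arcsin(sqrt(p2)), with asymptotic variance 1/(4·n1) + 1/(4·n2), which does not involve the event probabilities, so zero-event arms pose no special problem. The counts below are invented.

```python
import math

def arcsine_difference(x1, n1, x2, n2):
    """Arcsine-difference effect measure and its asymptotic variance."""
    as_diff = math.asin(math.sqrt(x1 / n1)) - math.asin(math.sqrt(x2 / n2))
    var = 1.0 / (4 * n1) + 1.0 / (4 * n2)
    return as_diff, var

# A trial with zero events in one arm is handled without continuity corrections:
est, var = arcsine_difference(3, 100, 0, 100)
print(f"AS = {est:.4f}, SE = {var ** 0.5:.4f}")
```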
Dexter, Franklin; Bayman, Emine O; Dexter, Elisabeth U
2017-12-01
We examined type I and II error rates for analysis of (1) mean hospital length of stay (LOS) versus (2) the percentage of hospital LOS that are overnight. These 2 end points are suitable when LOS is treated as a secondary economic end point. We repeatedly resampled LOS for 5052 discharges of thoracoscopic wedge resections and lung lobectomy at 26 hospitals. The unequal variances t test (Welch method) and the Fisher exact test both were conservative (ie, type I error rate less than the nominal level). The Wilcoxon rank sum test was included as a comparator; its type I error rates did not differ from the nominal level of 0.05 or 0.01. The Fisher exact test was more powerful than the unequal variances t test at detecting differences among hospitals; the estimated odds ratio for obtaining P < .05 with the Fisher exact test versus the unequal variances t test was 1.94 (95% confidence interval, 1.31-3.01). The Fisher exact test and the Wilcoxon-Mann-Whitney test had comparable statistical power in terms of differentiating LOS between hospitals. For studies with LOS as a secondary end point of economic interest, there is currently considerable interest in planning the analysis around the percentage of patients suitable for ambulatory surgery (ie, hospital LOS equals 0 or 1 midnight). Our results show that there need not be a loss of statistical power when groups are compared using this binary end point, as compared with either the Welch method or the Wilcoxon rank sum test.
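A toy version of the two comparisons, using SciPy's Welch t test and Fisher exact test on simulated LOS data (the Poisson distributions and the 0-or-1-midnight cut-off are our stand-ins, not the study's resampled discharge data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
los_a = rng.poisson(3.0, size=200)           # hospital A lengths of stay
los_b = rng.poisson(3.5, size=200)           # hospital B

# (1) mean LOS: unequal-variances (Welch) t test
t, p_welch = stats.ttest_ind(los_a, los_b, equal_var=False)

# (2) binary end point: LOS of 0 or 1 midnight ("ambulatory") vs longer
amb_a, amb_b = np.sum(los_a <= 1), np.sum(los_b <= 1)
table = [[amb_a, len(los_a) - amb_a], [amb_b, len(los_b) - amb_b]]
odds, p_fisher = stats.fisher_exact(table)

print(p_welch, p_fisher)
```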
Variance analysis refines overhead cost control.
Cooper, J C; Suver, J D
1992-02-01
Many healthcare organizations may not fully realize the benefits of standard cost accounting techniques because they fail to routinely report volume variances in their internal reports. If overhead allocation is routinely reported on internal reports, managers can determine whether billing remains current or lost charges occur. Healthcare organizations' use of standard costing techniques can lead to more realistic performance measurements and information system improvements that alert management to losses from unrecovered overhead in time for corrective action.
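As a hedged illustration of the volume-variance idea (all figures below are hypothetical): the variance flags overhead that goes unrecovered when actual activity falls short of the volume assumed when the overhead rate was set.

```python
# Illustrative, hypothetical figures: a volume variance reveals overhead
# not recovered because actual activity fell short of budget.
standard_rate = 42.0          # fixed overhead applied per procedure ($)
budgeted_volume = 1200        # procedures assumed when the rate was set
actual_volume = 1050          # procedures actually performed

applied_overhead = standard_rate * actual_volume
budgeted_overhead = standard_rate * budgeted_volume
volume_variance = budgeted_overhead - applied_overhead   # $6,300 unrecovered
print(volume_variance)
```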
NASA Astrophysics Data System (ADS)
Farshadfar, M.; Farshadfar, E.
The present research was conducted to determine the genetic variability of 18 Lucerne cultivars, based on morphological and biochemical markers. The traits studied were plant height, tiller number, biomass, dry yield, dry yield/biomass, dry leaf/dry yield, macro- and microelements, crude protein, dry matter, crude fiber and ash percentage, and SDS-PAGE in seed and leaf samples. Field experiments included 18 plots of two-meter rows. Data based on morphological, chemical and SDS-PAGE markers were analyzed using SPSSWIN software and the multivariate statistical procedures cluster analysis (UPGMA) and principal component analysis. Analysis of variance and mean comparison for morphological traits revealed significant differences among genotypes. Genotypes 13 and 15 had the greatest values for most traits. The Genotypic Coefficient of Variation (GCV), Phenotypic Coefficient of Variation (PCV) and Heritability (Hb) parameters for different characters ranged from 12.49 to 26.58% for PCV, while the GCV ranged from 6.84 to 18.84%. The greatest value of Hb was 0.94 for stem number. Lucerne genotypes could be classified, based on morphological traits, into four clusters, and 94% of the variance among the genotypes was explained by two principal components. Based on chemical traits they were classified into five groups, and 73.492% of the variance was explained by four principal components; dry matter, protein, fiber, P, K, Na, Mg and Zn had higher variance. Based on the SDS-PAGE patterns, all genotypes were classified into three clusters. The greatest genetic distance was between cultivar 10 and the others; it would therefore be a suitable parent in a breeding program.
Propensity Score Analysis: An Alternative Statistical Approach for HRD Researchers
ERIC Educational Resources Information Center
Keiffer, Greggory L.; Lane, Forrest C.
2016-01-01
Purpose: This paper aims to introduce matching in propensity score analysis (PSA) as an alternative statistical approach for researchers looking to make causal inferences using intact groups. Design/methodology/approach: An illustrative example demonstrated the varying results of analysis of variance, analysis of covariance and PSA on a heuristic…
Brandmaier, Andreas M.; von Oertzen, Timo; Ghisletta, Paolo; Lindenberger, Ulman; Hertzog, Christopher
2018-01-01
Latent Growth Curve Models (LGCM) have become a standard technique to model change over time. Prediction and explanation of inter-individual differences in change are major goals in lifespan research. The major determinants of statistical power to detect individual differences in change are the magnitude of true inter-individual differences in linear change (LGCM slope variance), design precision, alpha level, and sample size. Here, we show that design precision can be expressed as the inverse of effective error. Effective error is determined by instrument reliability and the temporal arrangement of measurement occasions. However, it also depends on another central LGCM component, the variance of the latent intercept and its covariance with the latent slope. We derive a new reliability index for LGCM slope variance—effective curve reliability (ECR)—by scaling slope variance against effective error. ECR is interpretable as a standardized effect size index. We demonstrate how effective error, ECR, and statistical power for a likelihood ratio test of zero slope variance formally relate to each other and how they function as indices of statistical power. We also provide a computational approach to derive ECR for arbitrary intercept-slope covariance. With practical use cases, we argue for the complementary utility of the proposed indices of a study's sensitivity to detect slope variance when making a priori longitudinal design decisions or communicating study designs. PMID:29755377
Meta-Analysis for Primary and Secondary Data Analysis: The Super-Experiment Metaphor.
ERIC Educational Resources Information Center
Jackson, Sally
1991-01-01
Considers the relation between meta-analysis statistics and analysis of variance statistics. Discusses advantages and disadvantages as a primary data analysis tool. Argues that the two approaches are partial paraphrases of one another. Advocates an integrative approach that introduces the best of meta-analytic thinking into primary analysis…
NASA Astrophysics Data System (ADS)
Bao, Yuan; Wang, Yan; Gao, Kun; Wang, Zhi-Li; Zhu, Pei-Ping; Wu, Zi-Yu
2015-10-01
The relationship between noise variance and spatial resolution in grating-based x-ray phase computed tomography (PCT) imaging is investigated with the reverse projection extraction method, and the noise variances of the reconstructed absorption coefficient and refractive index decrement are compared. For the differential phase contrast method, the noise variance in the differential projection images follows the same inverse-square law with spatial resolution as in conventional absorption-based x-ray imaging projections. However, both theoretical analysis and simulations demonstrate that in PCT the noise variance of the reconstructed refractive index decrement follows an inverse linear relationship with spatial resolution at fixed slice thickness, while the noise variance of the reconstructed absorption coefficient follows an inverse cubic law. The results indicate that, for the same noise variance level, PCT imaging may enable higher spatial resolution than conventional absorption computed tomography (ACT), while ACT benefits more from degraded spatial resolution. This could provide useful guidance for imaging the inner structure of a sample at higher spatial resolution. Project supported by the National Basic Research Program of China (Grant No. 2012CB825800), the Science Fund for Creative Research Groups, the Knowledge Innovation Program of the Chinese Academy of Sciences (Grant Nos. KJCX2-YW-N42 and Y4545320Y2), and the National Natural Science Foundation of China (Grant Nos. 11475170, 11205157, 11305173, 11205189, 11375225, 11321503, 11179004, and U1332109).
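Stated compactly, the scaling laws reported above can be written as follows, where Δ denotes the size of the spatial resolution element (the symbol Δ is our notation, not the paper's):

```latex
% Scaling of noise variance with spatial resolution \Delta
% (at fixed slice thickness), as reported above:
\sigma^2_{\mathrm{proj}} \propto \Delta^{-2}  % differential projections
\qquad
\sigma^2_{\delta}        \propto \Delta^{-1}  % refractive index decrement (PCT)
\qquad
\sigma^2_{\mu}           \propto \Delta^{-3}  % absorption coefficient (ACT)
```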
Brier, Matthew R; Mitra, Anish; McCarthy, John E; Ances, Beau M; Snyder, Abraham Z
2015-11-01
Functional connectivity refers to shared signals among brain regions and is typically assessed in a task-free state. Functional connectivity commonly is quantified between signal pairs using Pearson correlation. However, resting-state fMRI is a multivariate process exhibiting a complicated covariance structure. Partial covariance assesses the unique variance shared between two brain regions excluding any widely shared variance, and hence is appropriate for the analysis of multivariate fMRI datasets. However, calculation of partial covariance requires inversion of the covariance matrix, which, in most functional connectivity studies, is not invertible owing to rank deficiency. Here we apply Ledoit-Wolf shrinkage (L2 regularization) to invert the high-dimensional BOLD covariance matrix. We investigate the network organization and brain-state dependence of partial covariance-based functional connectivity. Although resting-state networks (RSNs) are conventionally defined in terms of shared variance, removal of widely shared variance, surprisingly, improved the separation of RSNs in a spring-embedded graphical model. This result suggests that pair-wise unique shared variance plays a heretofore unrecognized role in RSN covariance organization. In addition, application of partial correlation to fMRI data acquired in the eyes-open vs. eyes-closed states revealed focal changes in uniquely shared variance between the thalamus and visual cortices. This result suggests that partial correlation of resting-state BOLD time series reflects functional processes in addition to structural connectivity. Copyright © 2015 Elsevier Inc. All rights reserved.
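A compact sketch of the shrinkage-then-invert step, using scikit-learn's LedoitWolf estimator and the standard precision-to-partial-correlation mapping (array shapes and names are ours; this is not the authors' full pipeline):

```python
import numpy as np
from sklearn.covariance import LedoitWolf

def partial_correlation(ts):
    """Partial correlation between all region pairs from a (time x region)
    BOLD array, using Ledoit-Wolf shrinkage so the covariance stays
    invertible even when regions outnumber time points."""
    prec = np.linalg.inv(LedoitWolf().fit(ts).covariance_)
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)   # standard precision-to-partial-corr map
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

rng = np.random.default_rng(0)
print(partial_correlation(rng.normal(size=(120, 200))).shape)  # (200, 200)
```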
Analysis of signal-dependent sensor noise on JPEG 2000-compressed Sentinel-2 multi-spectral images
NASA Astrophysics Data System (ADS)
Uss, M.; Vozel, B.; Lukin, V.; Chehdi, K.
2017-10-01
The processing chain of Sentinel-2 MultiSpectral Instrument (MSI) data involves filtering and compression stages that modify MSI sensor noise. As a result, noise in the Sentinel-2 Level-1C data distributed to users becomes processed. We demonstrate that the processed noise variance model is bivariate: noise variance depends on image intensity (caused by the signal-dependency of photon counting detectors) and on signal-to-noise ratio (SNR; caused by filtering/compression). To provide information on processed noise parameters, which is missing in the Sentinel-2 metadata, we propose to use a blind noise parameter estimation approach. Existing methods are restricted to univariate noise models. Therefore, we propose an extension of the existing vcNI+fBm blind noise parameter estimation method to the multivariate noise model, mvcNI+fBm, and apply it to each band of Sentinel-2A data. The obtained results clearly demonstrate that noise variance is affected by filtering/compression for SNR less than about 15. Processed noise variance is reduced by a factor of 2-5 in homogeneous areas as compared to the noise variance at high SNR values. Estimates of the noise variance model parameters are provided for each Sentinel-2A band. The Sentinel-2A MSI Level-1C noise models obtained in this paper could be useful for end users and researchers working in a variety of remote sensing applications.
Mulder, Herman A.; Hill, William G.; Knol, Egbert F.
2015-01-01
There is recent evidence from laboratory experiments and analysis of livestock populations that not only the phenotype itself, but also its environmental variance, is under genetic control. Little is known about the relationships between the environmental variance of one trait and mean levels of other traits, however. A genetic covariance between these is expected to lead to nonlinearity between them, for example between birth weight and survival of piglets, where animals of extreme weights have lower survival. The objectives were to derive this nonlinear relationship analytically using multiple regression and apply it to data on piglet birth weight and survival. This study provides a framework to study such nonlinear relationships caused by genetic covariance of environmental variance of one trait and the mean of the other. It is shown that positions of phenotypic and genetic optima may differ and that genetic relationships are likely to be more curvilinear than phenotypic relationships, dependent mainly on the environmental correlation between these traits. Genetic correlations may change if the population means change relative to the optimal phenotypes. Data of piglet birth weight and survival show that the presence of nonlinearity can be partly explained by the genetic covariance between environmental variance of birth weight and survival. The framework developed can be used to assess effects of artificial and natural selection on means and variances of traits and the statistical method presented can be used to estimate trade-offs between environmental variance of one trait and mean levels of others. PMID:25631318
One-shot estimate of MRMC variance: AUC.
Gallas, Brandon D
2006-03-01
One popular study design for estimating the area under the receiver operating characteristic curve (AUC) is the one in which a set of readers reads a set of cases: a fully crossed design in which every reader reads every case. The variability of the subsequent reader-averaged AUC has two sources: the multiple readers and the multiple cases (MRMC). In this article, we present a nonparametric estimate for the variance of the reader-averaged AUC that is unbiased and does not use resampling tools. The one-shot estimate is based on the MRMC variance derived by the mechanistic approach of Barrett et al. (2005), as well as the nonparametric variance of a single-reader AUC derived in the literature on U statistics. We investigate the bias and variance properties of the one-shot estimate through a set of Monte Carlo simulations with simulated model observers and images. The different simulation configurations vary numbers of readers and cases, amounts of image noise and internal noise, as well as how the readers are constructed. We compare the one-shot estimate to a method that uses the jackknife resampling technique with an analysis of variance model at its foundation (Dorfman et al. 1992). The name one-shot highlights that resampling is not used. The one-shot and jackknife estimators behave similarly, with the one-shot being marginally more efficient when the number of cases is small. We have derived a one-shot estimate of the MRMC variance of AUC that is based on a probabilistic foundation with limited assumptions, is unbiased, and compares favorably to an established estimate.
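For reference, the single-reader nonparametric AUC in a fully crossed design is simply a U-statistic over case pairs; the sketch below computes the reader-averaged AUC whose variance the one-shot estimator targets. The one-shot variance estimator itself, with its reader and case moments, is not reproduced here.

```python
import numpy as np

def auc_nonparametric(scores_pos, scores_neg):
    """Mann-Whitney AUC for one reader: the U-statistic success rate over
    all (diseased, normal) case pairs, counting ties as 1/2."""
    diff = scores_pos[:, None] - scores_neg[None, :]
    return np.mean((diff > 0) + 0.5 * (diff == 0))

# Fully crossed design: every reader scores every case.
rng = np.random.default_rng(0)
n_readers, n_pos, n_neg = 5, 40, 40
pos = rng.normal(1.0, 1.0, (n_readers, n_pos))   # diseased-case scores
neg = rng.normal(0.0, 1.0, (n_readers, n_neg))   # normal-case scores
aucs = [auc_nonparametric(pos[r], neg[r]) for r in range(n_readers)]
print(np.mean(aucs))   # reader-averaged AUC; its variability has both
                       # reader and case components (the MRMC problem)
```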
Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.
Dazard, Jean-Eudes; Rao, J Sunil
2012-07-01
The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common value-shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
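A generic variance-shrinkage t-like statistic conveys the flavor of the approach, pulling per-variable variances toward a common target to recover effective degrees of freedom. This toy stand-in is not the MVR algorithm, which clusters variables and regularizes mean and variance jointly.

```python
import numpy as np

def shrunken_t(x, y, lam=0.5):
    """Variance-shrinkage t-like statistics for many variables at once:
    per-variable pooled variances are pulled toward their grand mean with
    weight lam, stabilizing tests when samples are small.
    x, y: (samples x variables) arrays for the two groups."""
    n1, n2 = x.shape[0], y.shape[0]
    s2 = (np.var(x, axis=0, ddof=1) * (n1 - 1) +
          np.var(y, axis=0, ddof=1) * (n2 - 1)) / (n1 + n2 - 2)
    s2_shrunk = lam * s2.mean() + (1 - lam) * s2   # shrink toward common target
    se = np.sqrt(s2_shrunk * (1 / n1 + 1 / n2))
    return (x.mean(axis=0) - y.mean(axis=0)) / se
```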
Cavalié, Olivier; Vernotte, François
2016-04-01
The Allan variance was introduced 50 years ago for analyzing the stability of frequency standards. In addition to its metrological interest, it may also be considered an estimator of the large trends of the power spectral density (PSD) of frequency deviation. For instance, the Allan variance is able to discriminate different types of noise characterized by different power laws in the PSD. The Allan variance has also been used in fields other than time and frequency metrology: for more than 20 years, it has been used in accelerometry, geophysics, geodesy, astrophysics, and even finance. However, it seems that up to now, it has been applied exclusively to time series analysis. We propose here to use the Allan variance on spatial data. Interferometric synthetic aperture radar (InSAR) is used in geophysics to image ground displacements in space [over the synthetic aperture radar (SAR) image spatial coverage] and in time, thanks to the regular SAR image acquisitions by dedicated satellites. The main limitation of the technique is the atmospheric disturbances that affect the radar signal while traveling from the sensor to the ground and back. In this paper, we propose to use the Allan variance for analyzing spatial data from InSAR measurements. The Allan variance was computed in XY mode as well as in radial mode for detecting different types of behavior at different space-scales, in the same way that different types of noise are detected versus the integration time in the classical time and frequency application. We found the radial Allan variance to be the more appropriate way to obtain an estimator insensitive to the spatial axis, and we applied it to SAR data acquired over eastern Turkey for the period 2003-2011. The spatial Allan variance allowed us to characterize noise features classically found in InSAR, such as phase decorrelation producing white noise and atmospheric delays behaving like a random-walk signal. We finally applied the spatial Allan variance to an InSAR time series to detect when the geophysical signal, here the ground motion, emerges from the noise.
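A minimal non-overlapping Allan variance, shown here on time series; for the spatial use described above, the sample index would run along a spatial axis (or radially) instead of time.

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance at averaging length m samples:
    half the mean squared difference of successive bin averages."""
    n_bins = y.size // m
    means = y[: n_bins * m].reshape(n_bins, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

# White noise gives AVAR ~ 1/m, while a random walk grows with m:
# the slope-based noise discrimination mentioned above.
rng = np.random.default_rng(2)
white = rng.normal(size=2**14)
walk = np.cumsum(rng.normal(size=2**14))
for m in (1, 4, 16, 64):
    print(m, allan_variance(white, m), allan_variance(walk, m))
```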
Li, Ji; Gray, B.R.; Bates, D.M.
2008-01-01
Partitioning the variance of a response by design levels is challenging for binomial and other discrete outcomes. Goldstein (2003) proposed four definitions for variance partitioning coefficients (VPC) under a two-level logistic regression model. In this study, we explicitly derived formulae for the multilevel logistic regression model and subsequently studied the distributional properties of the calculated VPCs. Using simulations and a vegetation dataset, we demonstrated associations between different VPC definitions, the importance of methods for estimating VPCs (by comparing VPCs obtained using Laplace and penalized quasi-likelihood methods), and bivariate dependence between VPCs calculated at different levels. Such an empirical study lends immediate support to wider applications of VPC in scientific data analysis.
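One of Goldstein's definitions, the latent-variable VPC for a two-level logistic model, has a simple closed form worth stating (the other definitions compared in the study are simulation- or linearization-based):

```python
import math

def vpc_latent(sigma2_u):
    """Goldstein's latent-variable VPC for a two-level logistic model:
    level-2 variance over total variance, with the level-1 residual fixed
    at the logistic variance pi^2 / 3."""
    return sigma2_u / (sigma2_u + math.pi ** 2 / 3)

print(vpc_latent(0.5))   # ~0.13: 13% of latent variation lies between clusters
```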
Yu, Marcia M L; Sandercock, P Mark L
2012-01-01
During the forensic examination of textile fibers, fibers are usually mounted on glass slides for visual inspection and identification under the microscope. One method that has the capability to accurately identify single textile fibers without subsequent demounting is Raman microspectroscopy. The effect of the mountant Entellan New on the Raman spectra of fibers was investigated to determine if it is suitable for fiber analysis. Raman spectra of synthetic fibers mounted in three different ways were collected and subjected to multivariate analysis. Principal component analysis score plots revealed that while spectra from different fiber classes formed distinct groups, fibers of the same class formed a single group regardless of the mounting method. The spectra of bare fibers and those mounted in Entellan New were found to be statistically indistinguishable by analysis of variance calculations. These results demonstrate that fibers mounted in Entellan New may be identified directly by Raman microspectroscopy without further sample preparation. © 2011 American Academy of Forensic Sciences.
Predicting research use in nursing organizations: a multilevel analysis.
Estabrooks, Carole A; Midodzi, William K; Cummings, Greta G; Wallin, Lars
2007-01-01
No empirical literature was found that explained how organizational context (operationalized as a composite of leadership, culture, and evaluation) influences research utilization. Similarly, no work was found on the interaction of individuals and contextual factors, or the relative importance or contribution of forces at different organizational levels to either such proposed interactions or, ultimately, to research utilization. To determine independent factors that predict research utilization among nurses, taking into account influences at individual nurse, specialty, and hospital levels. Cross-sectional survey data for 4,421 registered nurses in Alberta, Canada were used in a series of multilevel (three levels) modeling analyses to predict research utilization. A multilevel model was developed in MLwiN version 2.0 and used to: (a) estimate simultaneous effects of several predictors and (b) quantify the amount of explained variance in research utilization that could be apportioned to individual, specialty, and hospital levels. There was significant variation in research utilization (p <.05). Factors (remaining in the final model at statistically significant levels) found to predict more research utilization at the three levels of analysis were as follows. At the individual nurse level (Level 1): time spent on the Internet and lower levels of emotional exhaustion. At the specialty level (Level 2): facilitation, nurse-to-nurse collaboration, a higher context (i.e., of nursing culture, leadership, and evaluation), and perceived ability to control policy. At the hospital level (Level 3): only hospital size was significant in the final model. The total variance in research utilization was 1.04, and the intraclass correlations (the percent contribution by contextual factors) were 4% (variance = 0.04, p <.01) at the hospital level and 8% (variance = 0.09, p <.05) at the specialty level. The contribution attributable to individual factors alone was 87% (variance = 0.91, p <.01). Variation in research utilization was explained mainly by differences in individual characteristics, with specialty- and organizational-level factors contributing relatively little by comparison. Among hospital-level factors, hospital size was the only significant determinant of research utilization. Although organizational determinants explained less variance in the model, they were still statistically significant when analyzed alone. These findings suggest that investigations into mechanisms that influence research utilization must address influences at multiple levels of the organization. Such investigations will require careful attention to both methodological and interpretative challenges present when dealing with multiple units of analysis.
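The reported intraclass correlations follow directly from the variance components; a two-line check using the paper's numbers (small differences from the reported percentages are rounding):

```python
# Variance components reported above: individual 0.91, specialty 0.09,
# hospital 0.04 (total 1.04). Each ICC is that level's share of the total.
components = {"individual": 0.91, "specialty": 0.09, "hospital": 0.04}
total = sum(components.values())
for level, var in components.items():
    print(level, round(var / total, 3))   # ~0.87, ~0.09, ~0.04
```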
On the error in crop acreage estimation using satellite (LANDSAT) data
NASA Technical Reports Server (NTRS)
Chhikara, R. (Principal Investigator)
1983-01-01
The problem of crop acreage estimation using satellite data is discussed. Bias and variance of a crop proportion estimate in an area segment obtained from the classification of its multispectral sensor data are derived as functions of the means, variances, and covariance of error rates. The linear discriminant analysis and the class proportion estimation for the two-class case are extended to include a third class of measurement units, where these units are mixed on the ground. Special attention is given to the investigation of mislabeling in training samples and its effect on crop proportion estimation. It is shown that the bias and variance of the estimate of a specific crop acreage proportion increase as the disparity in mislabeling rates between the two classes increases. Some interaction is shown to take place, causing the bias and the variance to decrease at first and then to increase, as the mixed unit class varies in size from 0 to 50 percent of the total area segment.
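As background for how error rates drive the bias, the classic two-class misclassification correction inverts the mixing of omission and commission errors; the paper's three-class extension with mixed units is not reproduced here, and the names below are ours.

```python
def corrected_proportion(q_obs, omission, commission):
    """Classic two-class misclassification correction. The observed crop
    proportion mixes true and false positives,
        q_obs = p * (1 - omission) + (1 - p) * commission,
    so inverting this relation recovers an (approximately) unbiased
    estimate of the true proportion p."""
    return (q_obs - commission) / (1.0 - omission - commission)

print(corrected_proportion(0.42, omission=0.10, commission=0.05))  # ~0.435
```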
Social capital and health-purely a question of context?
Giordano, Giuseppe Nicola; Ohlsson, Henrik; Lindström, Martin
2011-07-01
Debate still surrounds which level of analysis (individual vs. contextual) is most appropriate to investigate the effects of social capital on health. Applying multilevel ecometric analyses to British Household Panel Survey data, we estimated fixed and random effects between five individual-, household- and small area-level social capital indicators and general health. We further compared the variance in health attributable to each level using intraclass correlations. Our results demonstrate that association between social capital and health depends on indicator type and level investigated, with one quarter of total individual-level health variance found at the household level. However, individual-level social capital variables and other health determinants appear to influence contextual-level variance the most. Copyright © 2011 Elsevier Ltd. All rights reserved.
A new variance stabilizing transformation for gene expression data analysis.
Kelmansky, Diana M; Martínez, Elena J; Leiva, Víctor
2013-12-01
In this paper, we introduce a new family of power transformations, which has the generalized logarithm as one of its members, in the same manner as the usual logarithm belongs to the family of Box-Cox power transformations. Although the new family has been developed for analyzing gene expression data, it accommodates a wider range of data with mean-variance relationships. We study the analytical properties of the new family of transformations, as well as the mean-variance relationships that are stabilized by using its members. We propose a methodology based on this new family, which includes a simple strategy for selecting the family member adequate for a data set. We evaluate the finite sample behavior of different classical and robust estimators based on this strategy by Monte Carlo simulations. We analyze real genomic data using the proposed transformation to empirically show how the new methodology allows the variance of these data to be stabilized.
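The generalized logarithm, the family member the abstract singles out, is easy to state; a sketch follows (in practice the parameter lambda would be chosen by a selection strategy such as the paper's, which is not reproduced here):

```python
import numpy as np

def glog(x, lam=1.0):
    """Generalized logarithm: behaves like log(x) for large x but remains
    defined and variance-stabilizing near (and below) zero."""
    return np.log((x + np.sqrt(x ** 2 + lam)) / 2.0)

x = np.array([-2.0, 0.0, 10.0, 1000.0])
print(glog(x, lam=100.0))   # finite everywhere; ~log(x) for large x
```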
Canonical Commonality Analysis.
ERIC Educational Resources Information Center
Leister, K. Dawn
Commonality analysis is a method of partitioning variance that has advantages over more traditional "OVA" methods. Commonality analysis indicates the amount of explanatory power that is "unique" to a given predictor variable and the amount of explanatory power that is "common" to or shared with at least one predictor…
K-Fold Crossvalidation in Canonical Analysis.
ERIC Educational Resources Information Center
Liang, Kun-Hsia; And Others
1995-01-01
A computer-assisted, K-fold cross-validation technique is discussed in the framework of canonical correlation analysis of randomly generated data sets. Analysis results suggest that this technique can effectively reduce the contamination of canonical variates and canonical correlations by sample-specific variance components. (Author/SLD)
Overlap between treatment and control distributions as an effect size measure in experiments.
Hedges, Larry V; Olkin, Ingram
2016-03-01
The proportion π of treatment group observations that exceed the control group mean has been proposed as an effect size measure for experiments that randomly assign independent units into 2 groups. We give the exact distribution of a simple estimator of π based on the standardized mean difference and use it to study the small sample bias of this estimator. We also give the minimum variance unbiased estimator of π under 2 models, one in which the variance of the mean difference is known and one in which the variance is unknown. We show how to use the relation between the standardized mean difference and the overlap measure to compute confidence intervals for π and show that these results can be used to obtain unbiased estimators, large sample variances, and confidence intervals for 3 related effect size measures based on the overlap. Finally, we show how the effect size π can be used in a meta-analysis. (c) 2016 APA, all rights reserved.
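Under a normal, equal-variance model the overlap measure satisfies π = Φ(δ), so a simple (small-sample biased) estimator plugs in the standardized mean difference; a sketch, with the bias corrections and unbiased variants discussed above omitted:

```python
import numpy as np
from scipy.stats import norm

def overlap_effect_size(treat, control):
    """Simple estimator of pi, the proportion of treatment observations
    exceeding the control mean: Phi(d), with d the standardized mean
    difference computed from the pooled standard deviation."""
    n1, n2 = len(treat), len(control)
    sp = np.sqrt(((n1 - 1) * np.var(treat, ddof=1) +
                  (n2 - 1) * np.var(control, ddof=1)) / (n1 + n2 - 2))
    d = (np.mean(treat) - np.mean(control)) / sp
    return norm.cdf(d)
```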
A Canonical Analysis of Career Choice Crystallization and Vocational Maturity.
ERIC Educational Resources Information Center
Blustein, David L.
1988-01-01
Administered measures of vocational maturity and career choice crystallization to 158 community college students. Used canonical analysis to identify relationships between age, gender, career choice crystallization, and vocational maturity. Analysis yielded one significant canonical root, indicating most shared variance between variables was…
Some New Results on Grubbs’ Estimators.
1983-06-01
Dennis A. Brindley and Ralph A. Bradley. Consider a two-way classification with n rows and r columns and the usual analysis-of-variance model, except that the error components of the model may have heterogeneous variances by columns. Grubbs provided unbiased estimators of the column error variances σ_j² that depend on the observations y_ij, i = 1, ..., n, j = 1, ..., r, and the model y_ij = μ_i + β_j + ε_ij (1), where μ_i represents the mean response of row i and β_j the effect of column j.
Sadi, M V; Barrack, E R
1993-04-15
Reliable predictors of the response of prostate cancer to androgen ablation therapy are lacking. The goals of this study were to determine whether nuclear androgen receptor (AR) concentrations in metastatic prostate cancer varied within and between specimens and to correlate this information with the response to therapy. AR concentration was evaluated by computer-assisted image analysis of immunohistochemical staining intensity in 200 malignant epithelial nuclei of each of 17 specimens of Stage D2 prostate cancer obtained before hormonal therapy. The data were correlated with the time to tumor progression (relapse) after hormonal therapy. AR staining intensity varied within specimens, and the variance of staining intensity was significantly greater (P = 0.03) in the poor responders (n = 8; time to progression, < 20 months) than in the good responders (n = 9; time to progression, > or = 20 months). The kurtosis was significantly lower in poor responders (P = 0.04). However, the mean AR staining intensity was not significantly different among patients. The frequency distribution plots of good responders were generally uniform and unimodal, but those of poor responders were flattened (more platykurtic), dispersed, and highly variable. Thus, the AR concentration per cell was significantly more heterogeneous in poor responders. Variance was a significant predictor of response. Five of 6 patients with a high variance (defined as variance greater than the mean) were poor responders, whereas 8 of 11 patients with a low variance were good responders (an overall classification accuracy of 13 of 17, 76%). The greater AR heterogeneity in poor responders may reflect a greater genetic instability in tumors that have progressed further toward androgen independence and may be a valuable predictor of progression.
The pyramid system for multiscale raster analysis
De Cola, L.; Montagne, N.
1993-01-01
Geographical research requires the management and analysis of spatial data at multiple scales. As part of the U.S. Geological Survey's global change research program, a software system has been developed that reads raster data (such as an image or digital elevation model) and produces a pyramid of aggregated lattices as well as various measurements of spatial complexity. For a given raster dataset the system uses the pyramid to report: (1) mean, (2) variance, (3) a spatial autocorrelation parameter based on multiscale analysis of variance, and (4) a monofractal scaling parameter based on the analysis of isoline lengths. The system is applied to 1-km digital elevation model (DEM) data for a 256-km² region of central California, as well as to 64 partitions of the region. PYRAMID, which offers robust descriptions of data complexity, is also used to describe the behavior of topographic aspect with scale. © 1993.
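A minimal version of the pyramid construction: repeated 2x2 block-mean aggregation with per-level summary statistics (the autocorrelation and fractal parameters reported by PYRAMID are not reproduced here):

```python
import numpy as np

def pyramid_stats(raster, levels=4):
    """Aggregate a raster by 2x2 block means and report the mean and
    variance at each level of the resulting pyramid."""
    out = []
    z = np.asarray(raster, float)
    for level in range(levels):
        out.append((level, z.shape, z.mean(), z.var()))
        r, c = (z.shape[0] // 2) * 2, (z.shape[1] // 2) * 2
        z = z[:r, :c].reshape(r // 2, 2, c // 2, 2).mean(axis=(1, 3))
    return out

rng = np.random.default_rng(3)
for row in pyramid_stats(rng.normal(size=(256, 256))):
    print(row)   # variance shrinks under aggregation; its rate of decline
                 # is what a multiscale autocorrelation parameter summarizes
```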
Objective determination of image end-members in spectral mixture analysis of AVIRIS data
NASA Technical Reports Server (NTRS)
Tompkins, Stefanie; Mustard, John F.; Pieters, Carle M.; Forsyth, Donald W.
1993-01-01
Spectral mixture analysis has been shown to be a powerful, multifaceted tool for the analysis of multi- and hyper-spectral data. Applications of AVIRIS data have ranged from mapping soils and bedrock to ecosystem studies. During the first phase of the approach, a set of end-members is selected from an image cube (image end-members) that best accounts for its spectral variance within a constrained, linear least squares mixing model. These image end-members are usually selected using a priori knowledge and successive trial-and-error solutions to refine the total number and physical location of the end-members. However, in many situations a more objective method of determining these essential components is desired. We approach the problem of image end-member determination objectively by using the inherent variance of the data. Unlike purely statistical methods such as factor analysis, this approach derives solutions that conform to a physically realistic model.
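The constrained linear mixing model at the center of the approach can be sketched with nonnegative least squares; here the sum-to-one constraint is imposed softly via a heavily weighted row of ones, a common trick that is not necessarily the authors' solver.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(spectrum, endmembers):
    """Constrained linear unmixing sketch: nonnegative abundances for one
    pixel spectrum given an endmember matrix (bands x endmembers)."""
    E = np.vstack([endmembers, 100.0 * np.ones(endmembers.shape[1])])
    s = np.append(spectrum, 100.0)          # soft sum-to-one constraint
    fractions, residual = nnls(E, s)
    return fractions, residual

rng = np.random.default_rng(4)
E = rng.uniform(0, 1, (50, 3))              # 3 hypothetical endmembers
true = np.array([0.6, 0.3, 0.1])
pixel = E @ true + rng.normal(0, 0.01, 50)
print(unmix(pixel, E)[0])                   # ~ [0.6, 0.3, 0.1]
```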
Mauer, Michael; Caramori, Maria Luiza; Fioretto, Paola; Najafian, Behzad
2015-06-01
Studies of structural-functional relationships have improved understanding of the natural history of diabetic nephropathy (DN). However, in order to consider structural end points for clinical trials, the robustness of the resultant models needs to be verified. This study examined whether structural-functional relationship models derived from a large cohort of type 1 diabetic (T1D) patients with a wide range of renal function are robust. The predictability of models derived from multiple regression analysis and piecewise linear regression analysis was also compared. T1D patients (n = 161) with research renal biopsies were divided into two equal groups matched for albumin excretion rate (AER). Models to explain AER and glomerular filtration rate (GFR) by classical DN lesions in one group (T1D-model, or T1D-M) were applied to the other group (T1D-test, or T1D-T) and regression analyses were performed. T1D-M-derived models explained 70 and 63% of AER variance and 32 and 21% of GFR variance in T1D-M and T1D-T, respectively, supporting the substantial robustness of the models. Piecewise linear regression analyses substantially improved predictability of the models with 83% of AER variance and 66% of GFR variance explained by classical DN glomerular lesions alone. These studies demonstrate that DN structural-functional relationship models are robust, and if appropriate models are used, glomerular lesions alone explain a major proportion of AER and GFR variance in T1D patients. © The Author 2014. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
Gallina, Alessio; Garland, S Jayne; Wakeling, James M
2018-05-22
In this study, we investigated whether principal component analysis (PCA) and non-negative matrix factorization (NMF) perform similarly for the identification of regional activation within the human vastus medialis (VM). EMG signals from 64 locations over the VM were collected from twelve participants while performing a low-force isometric knee extension. The envelope of the EMG signal of each channel was calculated by low-pass filtering (8 Hz) the monopolar EMG signal after rectification. The data matrix was factorized using PCA and NMF, and up to 5 factors were considered for each algorithm. Associations between the explained variance, spatial weights and temporal scores of the two algorithms were assessed using Pearson correlation. For both PCA and NMF, a single factor explained approximately 70% of the variance of the signal, while two and three factors explained just over 85% and 90%, respectively. The variance explained by PCA and NMF was highly comparable (R > 0.99). Spatial weights and temporal scores extracted with non-negative reconstruction of PCA and NMF were highly associated (all p < 0.001, mean R > 0.97). Regional VM activation can be identified using high-density surface EMG and factorization algorithms. Regional activation explains up to 30% of the variance of the signal, as identified through both PCA and NMF. Copyright © 2018 Elsevier Ltd. All rights reserved.
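A small PCA-versus-NMF comparison in the spirit of the study, on synthetic nonnegative envelope-like data (the data generator is our stand-in for the 64-channel EMG envelopes). Note that PCA centers the data while NMF does not, so the NMF "explained variance" analogue below is approximate.

```python
import numpy as np
from sklearn.decomposition import PCA, NMF

rng = np.random.default_rng(5)
# Hypothetical stand-in for EMG envelopes: nonnegative (time x channel)
# data built from two spatial activation patterns.
t = np.linspace(0, 1, 500)
scores = np.abs(np.c_[np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
weights = np.abs(rng.normal(size=(2, 64)))
X = scores @ weights + 0.05 * rng.random((500, 64))

pca = PCA(n_components=2).fit(X)
nmf = NMF(n_components=2, init="nndsvda", max_iter=500).fit(X)
print(pca.explained_variance_ratio_.sum())        # variance explained, PCA
res = X - nmf.transform(X) @ nmf.components_
print(1 - (res ** 2).sum() / ((X - X.mean()) ** 2).sum())  # NMF analogue
```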
The relationship between observational scale and explained variance in benthic communities
Flood, Roger D.; Frisk, Michael G.; Garza, Corey D.; Lopez, Glenn R.; Maher, Nicole P.
2018-01-01
This study addresses the impact of spatial scale on explaining variance in benthic communities. In particular, the analysis estimated the fraction of community variation that occurred at a spatial scale smaller than the sampling interval (i.e., the geographic distance between samples). This estimate is important because it sets a limit on the amount of community variation that can be explained based on the spatial configuration of a study area and sampling design. Six benthic data sets were examined that consisted of faunal abundances, common environmental variables (water depth, grain size, and surficial percent cover), and sonar backscatter treated as a habitat proxy (categorical acoustic provinces). Redundancy analysis was coupled with spatial variograms generated by multiscale ordination to quantify the explained and residual variance at different spatial scales and within and between acoustic provinces. The amount of community variation below the sampling interval of the surveys (< 100 m) was estimated to be 36–59% of the total. Once adjusted for this small-scale variation, > 71% of the remaining variance was explained by the environmental and province variables. Furthermore, these variables effectively explained the spatial structure present in the infaunal community. Overall, no scale problems remained to compromise inferences, and unexplained infaunal community variation had no apparent spatial structure within the observational scale of the surveys (> 100 m), although small-scale gradients (< 100 m) below the observational scale may be present. PMID:29324746
2004-03-01
…constant variance via an analysis of the residuals, as well as the Breusch-Pagan test. As a result, we follow the footsteps of… reasonably normal, which ensures that our residuals meet the assumption of constant variance by passing the Breusch-Pagan test… sections for Research and Development, Test and Evaluation (RDT&E), procurement and military construction (Jarvaise, 1996:3). While differing…
Alcohol hangover symptoms and their contribution to the overall hangover severity.
Penning, Renske; McKinney, Adele; Verster, Joris C
2012-01-01
Scientific literature suggests a large number of symptoms that may be present the day after excessive alcohol consumption. The purpose of this study was to explore the presence and severity of hangover symptoms and determine their interrelationship. A survey was conducted among n = 1410 Dutch students examining their drinking behavior and latest alcohol hangover. The severity of 47 presumed hangover symptoms was scored on a 10-point scale ranging from 0 (absent) to 10 (maximal). Factor analysis was conducted to summarize the data into groups of associated symptoms that contribute significantly to the alcohol hangover and symptoms that do not. About half of the participants (56.1%, n = 791) reported having had a hangover during the past month. The most commonly reported and most severe hangover symptoms were fatigue (95.5%) and thirst (89.1%). Factor analysis revealed 11 factors that together accounted for 62% of the variance. The most prominent factor, 'drowsiness' (explained variance 28.8%), included symptoms such as drowsiness, fatigue, sleepiness and weakness. The second factor, 'cognitive problems' (explained variance 5.9%), included symptoms such as reduced alertness, memory and concentration problems. Other factors, including the factor 'disturbed water balance' comprising frequently reported symptoms such as 'dry mouth' and 'thirst', contributed much less to the overall hangover (explained variance <5%). Drowsiness and impaired cognitive functioning are the two dominant features of the alcohol hangover.
Smoothing of the bivariate LOD score for non-normal quantitative traits.
Buil, Alfonso; Dyer, Thomas D; Almasy, Laura; Blangero, John
2005-12-30
Variance component analysis provides an efficient method for performing linkage analysis for quantitative traits. However, the type I error of variance components-based likelihood ratio testing may be affected when phenotypic data are non-normally distributed (especially with high values of kurtosis). This results in inflated LOD scores when the normality assumption does not hold. Even though different solutions have been proposed to deal with this problem for univariate phenotypes, little work has been done in the multivariate case. We present an empirical approach to adjust the inflated LOD scores obtained from a bivariate phenotype that violates the assumption of normality. Using the Collaborative Study on the Genetics of Alcoholism data available for Genetic Analysis Workshop 14, we show how bivariate linkage analysis with leptokurtic traits gives an inflated type I error. We perform a novel correction that achieves acceptable levels of type I error.
Szczepankiewicz, Filip; van Westen, Danielle; Englund, Elisabet; Westin, Carl-Fredrik; Ståhlberg, Freddy; Lätt, Jimmy; Sundgren, Pia C; Nilsson, Markus
2016-11-15
The structural heterogeneity of tumor tissue can be probed by diffusion MRI (dMRI) in terms of the variance of apparent diffusivities within a voxel. However, the link between the diffusional variance and the tissue heterogeneity is not well-established. To investigate this link we test the hypothesis that diffusional variance, caused by microscopic anisotropy and isotropic heterogeneity, is associated with variable cell eccentricity and cell density in brain tumors. We performed dMRI using a novel encoding scheme for diffusional variance decomposition (DIVIDE) in 7 meningiomas and 8 gliomas prior to surgery. The diffusional variance was quantified from dMRI in terms of the total mean kurtosis (MK_T), and DIVIDE was used to decompose MK_T into components caused by microscopic anisotropy (MK_A) and isotropic heterogeneity (MK_I). Diffusion anisotropy was evaluated in terms of the fractional anisotropy (FA) and microscopic fractional anisotropy (μFA). Quantitative microscopy was performed on the excised tumor tissue, where structural anisotropy and cell density were quantified by structure tensor analysis and cell nuclei segmentation, respectively. In order to validate the DIVIDE parameters, they were correlated to the corresponding parameters derived from microscopy. We found an excellent agreement between the DIVIDE parameters and corresponding microscopy parameters; MK_A correlated with cell eccentricity (r = 0.95, p < 10^-7) and MK_I with the cell density variance (r = 0.83, p < 10^-3). The diffusion anisotropy correlated with structure tensor anisotropy on the voxel scale (FA, r = 0.80, p < 10^-3) and microscopic scale (μFA, r = 0.93, p < 10^-6). A multiple regression analysis showed that the conventional MK_T parameter reflects both variable cell eccentricity and cell density, and therefore lacks specificity in terms of microstructure characteristics. However, specificity was obtained by decomposing the two contributions; MK_A was associated only with cell eccentricity, and MK_I only with cell density variance. The variance in meningiomas was caused primarily by microscopic anisotropy (mean ± s.d.), MK_A = 1.11 ± 0.33 vs MK_I = 0.44 ± 0.20 (p < 10^-3), whereas in the gliomas it was mostly caused by isotropic heterogeneity, MK_I = 0.57 ± 0.30 vs MK_A = 0.26 ± 0.11 (p < 0.05). In conclusion, DIVIDE allows non-invasive mapping of parameters that reflect variable cell eccentricity and density. These results constitute convincing evidence that a link exists between specific aspects of tissue heterogeneity and parameters from dMRI. Decomposing effects of microscopic anisotropy and isotropic heterogeneity facilitates an improved interpretation of tumor heterogeneity as well as diffusion anisotropy on both the microscopic and macroscopic scale. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Exploring learners' self-efficacy, autonomy and motivation toward e-learning.
Huang, Hsiu-Mei; Liaw, Shu-Sheng
2007-10-01
A questionnaire survey was conducted with 116 college students (47 men, 69 women) in Central Taiwan to investigate predictive relationships among four attitudinal variables, perceived self-efficacy, learners' autonomy, intrinsic motivation, and extrinsic motivation toward e-learning. Analysis showed learners' autonomy was predictive of both intrinsic (57% independent variance explained) and extrinsic motivation (61% independent variance explained). Although perceived self-efficacy was not a predictor of intrinsic motivation and extrinsic motivation, it correlated significantly with extrinsic motivation.
Doi, Suhail A R; Barendregt, Jan J; Khan, Shahjahan; Thalib, Lukman; Williams, Gail M
2015-11-01
This article examines an improved alternative to the random effects (RE) model for meta-analysis of heterogeneous studies. It is shown that the known issues of underestimation of the statistical error and spuriously overconfident estimates with the RE model can be resolved by the use of an estimator under the fixed effect (FE) model assumption with a quasi-likelihood based variance structure - the IVhet model. Extensive simulations confirm that this estimator retains a correct coverage probability and a lower observed variance than the RE model estimator, regardless of heterogeneity. When the proposed IVhet method is applied to the controversial meta-analysis of intravenous magnesium for the prevention of mortality after myocardial infarction, the pooled OR is 1.01 (95% CI 0.71-1.46), which not only favors the larger studies but also indicates more uncertainty around the point estimate. In comparison, under the RE model the pooled OR is 0.71 (95% CI 0.57-0.89) which, given the simulation results, reflects underestimation of the statistical error. Given the compelling evidence generated, we recommend that the IVhet model replace both the FE and RE models. To facilitate this, it has been implemented into free meta-analysis software called MetaXL, which can be downloaded from www.epigear.com. Copyright © 2015 Elsevier Inc. All rights reserved.
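A sketch of the IVhet idea as we read it: keep the fixed-effect inverse-variance point estimate, but compute its variance with each study's sampling variance inflated by an estimated tau^2, so the confidence interval widens under heterogeneity. Treat the exact variance formula below as our assumption rather than the paper's specification.

```python
import numpy as np

def ivhet(theta, v):
    """IVhet-style pooled estimate (our reading): fixed-effect weights for
    the point estimate, heterogeneity-inflated terms for its variance."""
    theta, v = np.asarray(theta, float), np.asarray(v, float)
    k = theta.size
    w = 1.0 / v
    est = np.sum(w * theta) / np.sum(w)        # fixed-effect point estimate
    q = np.sum(w * (theta - est) ** 2)
    tau2 = max(0.0, (q - (k - 1)) / (w.sum() - (w**2).sum() / w.sum()))
    var_est = np.sum((w / w.sum()) ** 2 * (v + tau2))   # assumed form
    return est, np.sqrt(var_est)
```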
Cost accounting, management control, and planning in health care.
Siegrist, R B; Blish, C S
1988-02-01
Advantages and pharmacy applications of computerized hospital management-control and planning systems are described. Hospitals must define their product lines; patient cases, not tests or procedures, are the end product. Management involves operational control, management control, and strategic planning. Operational control deals with day-to-day management on the task level. Management control involves ensuring that managers use resources effectively and efficiently to accomplish the organization's objectives. Management control includes both control of unit costs of intermediate products, which are procedures and services used to treat patients and are managed by hospital department heads, and control of intermediate product use per case (managed by the clinician). Information from the operation and management levels feeds into the strategic plan; conversely, the management level controls the plan and the operational level carries it out. In the system developed at New England Medical Center, Boston, Massachusetts, the intermediate product-management system enables managers to identify intermediate products, develop standard costs, simulate changes in departmental costs, and perform variance analysis. The end-product management system creates a patient-level data-base, identifies end products (patient-care groupings), develops standard resource protocols, models alternative assumptions, performs variance analysis, and provides concurrent reporting. Examples are given of pharmacy managers' use of such systems to answer questions in the areas of product costing, product pricing, variance analysis, productivity monitoring, flexible budgeting, modeling and planning, and comparative analysis.(ABSTRACT TRUNCATED AT 250 WORDS)
Influential Observations in Principal Factor Analysis.
ERIC Educational Resources Information Center
Tanaka, Yutaka; Odaka, Yoshimasa
1989-01-01
A method is proposed for detecting influential observations in iterative principal factor analysis. Theoretical influence functions are derived for two components of the common variance decomposition. The major mathematical tool is the influence function derived by Tanaka (1988). (SLD)
Using Structural Equation Modeling To Fit Models Incorporating Principal Components.
ERIC Educational Resources Information Center
Dolan, Conor; Bechger, Timo; Molenaar, Peter
1999-01-01
Considers models incorporating principal components from the perspective of structural equation modeling. These models include the following: (1) the principal-component analysis of patterned matrices; (2) multiple analysis of variance based on principal components; and (3) multigroup principal-components analysis. Discusses fitting these models…
Skills for the 21st Century Supervisor: What Factory Personnel Think.
ERIC Educational Resources Information Center
Hotek, Douglas R.
2002-01-01
Discusses supervisory skills that factory personnel believe are important for leading and improving employee performance in complex manufacturing environments. Highlights include a historical perspective; manufacturing technologies; results of Pareto analysis, comparative analysis, and analysis of variance; and a Taxonomy of Supervisory Skills.…
Bayesian Meta-Analysis of Coefficient Alpha
ERIC Educational Resources Information Center
Brannick, Michael T.; Zhang, Nanhua
2013-01-01
The current paper describes and illustrates a Bayesian approach to the meta-analysis of coefficient alpha. Alpha is the most commonly used estimate of the reliability or consistency (freedom from measurement error) for educational and psychological measures. The conventional approach to meta-analysis uses inverse variance weights to combine…
Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W; Müller, Klaus-Robert; Lemm, Steven
2013-01-01
Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation.
Dzul, Maria C.; Dixon, Philip M.; Quist, Michael C.; Dinsmore, Stephen J.; Bower, Michael R.; Wilson, Kevin P.; Gaines, D. Bailey
2013-01-01
We used variance components to assess allocation of sampling effort in a hierarchically nested sampling design for ongoing monitoring of early life history stages of the federally endangered Devils Hole pupfish (DHP) (Cyprinodon diabolis). The sampling design for larval DHP included surveys (5 days each spring 2007–2009), events, and plots. Each survey comprised three counting events, in which DHP larvae on nine plots were counted plot by plot. Statistical analysis of larval abundance included three components: (1) evaluation of power from various sample size combinations, (2) comparison of power in fixed and random plot designs, and (3) assessment of yearly differences in the power of the survey. Results indicated that increasing the sample size at the lowest level of sampling represented the most realistic option to increase the survey's power, that fixed plot designs had greater power than random plot designs, and that the power of the larval survey varied by year. This study provides an example of how monitoring efforts may benefit from coupling variance components estimation with power analysis to assess sampling design.
ERIC Educational Resources Information Center
Games, Paul A.
1975-01-01
A brief introduction is presented on how multiple regression and linear model techniques can handle data analysis situations that most educators and psychologists think of as appropriate for analysis of variance. (Author/BJG)
Social comparison processes and catastrophising in fibromyalgia: A path analysis.
Cabrera-Perona, V; Buunk, A P; Terol-Cantero, M C; Quiles-Marcos, Y; Martín-Aragón, M
2017-06-01
In addition to coping strategies, social comparison may play a role in illness adjustment. However, little is known about the role of contrast and identification in social comparison in adaptation to fibromyalgia. The aim was to evaluate, through a path analysis in a sample of fibromyalgia patients, the associations between identification and contrast in social comparison, catastrophising, and specific health outcomes (fibromyalgia illness impact and psychological distress). 131 Spanish fibromyalgia outpatients (mean age: 50.15, SD = 11.1) filled out a questionnaire. We present a model that explained 33% of the variance in catastrophising through direct effects of greater use of upward contrast and downward identification. In addition, 35% of the variance in fibromyalgia illness impact was explained by less upward identification, more upward contrast, and more catastrophising, and 42% of the variance in psychological distress by a direct effect of greater use of upward contrast together with higher fibromyalgia illness impact. We suggest that intervention programmes with chronic pain and fibromyalgia patients should focus on enhancing the use of upward identification in social comparison, and on minimising the use of upward contrast and downward identification.
Yuan, Yuan-Yuan; Zhou, Yu-Bi; Sun, Jing; Deng, Juan; Bai, Ying; Wang, Jie; Lu, Xue-Feng
2017-06-01
The contents of elements in Nitraria roborowskii samples from fifteen different regions were determined by inductively coupled plasma-optical emission spectrometry (ICP-OES), and their elemental characteristics were analyzed by principal component analysis (PCA). The results indicated that 18 mineral elements were detected in N. roborowskii, of which V could not be detected. Na, K and Ca showed high concentrations; Ti showed the largest content variance and K the smallest. Four principal components were extracted from the original data. The cumulative variance contribution rate was 81.542%, and the variance contribution of the first principal component was 44.997%, indicating that Cr, Fe, P and Ca were the characteristic elements of N. roborowskii. Thus, the established method is simple and precise and can be used for the determination of mineral elements in N. roborowskii Kom. fruits. The elemental distribution characteristics among N. roborowskii fruits are related to geographical origins, which were clearly revealed by PCA. These results provide a good basis for the comprehensive utilization of N. roborowskii. Copyright© by the Chinese Pharmaceutical Association.
Differential Variance Analysis: a direct method to quantify and visualize dynamic heterogeneities
NASA Astrophysics Data System (ADS)
Pastore, Raffaele; Pesce, Giuseppe; Caggioni, Marco
2017-03-01
Many amorphous materials show spatially heterogeneous dynamics, as different regions of the same system relax at different rates. Such a signature, known as dynamic heterogeneity, has been crucial to understanding the nature of the jamming transition in simple model systems and is currently considered very promising for characterizing more complex fluids of industrial and biological relevance. Unfortunately, measurements of dynamic heterogeneities typically require sophisticated experimental set-ups and are performed by only a few specialized groups. It is now possible to quantitatively characterize the relaxation process and the emergence of dynamic heterogeneities using a straightforward method, here validated on video microscopy data of hard-sphere colloidal glasses. We call this method Differential Variance Analysis (DVA), since it focuses on the variance of the differential frames, obtained by subtracting images at different time-lags. Moreover, direct visualization of dynamic heterogeneities naturally emerges in the differential frames when the time-lag is set to the one corresponding to the maximum dynamic susceptibility. This approach opens the way to effectively characterizing and tailoring a wide variety of soft materials, from complex formulated products to biological tissues.
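A minimal Python sketch of the DVA computation as described above: form differential frames at a given time-lag, take their pixel variance, and average over start times. The plateau normalization and the synthetic random-walk movie are assumptions for illustration, not the authors' exact implementation:

import numpy as np

def differential_variance(frames, lags):
    """Sketch of DVA: for each time-lag, form differential frames by
    subtracting frame pairs, take the pixel variance of each difference,
    and average over start times. frames: (n_frames, height, width)."""
    n = len(frames)
    v = []
    for lag in lags:
        diffs = frames[lag:] - frames[:n - lag]
        v.append(np.var(diffs, axis=(1, 2)).mean())
    v = np.asarray(v)
    return v / v[-1]          # normalize by the large-lag plateau

# Synthetic demo: a random-walk "movie" stands in for microscopy data
rng = np.random.default_rng(0)
frames = np.cumsum(rng.normal(size=(200, 32, 32)), axis=0)
lags = [1, 2, 5, 10, 20, 50]
print(dict(zip(lags, differential_variance(frames, lags).round(3))))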
Estimating Sobol Sensitivity Indices Using Correlations
Sensitivity analysis is a crucial tool in the development and evaluation of complex mathematical models. Sobol's method is a variance-based global sensitivity analysis technique that has been applied to computational models to assess the relative importance of input parameters on...
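As a sketch of the variance-based idea, the first-order Sobol index S_i is the fraction of output variance attributable to input x_i alone, and it can be estimated by a pick-freeze Monte Carlo scheme. The estimator form and the Ishigami test function below are standard in the sensitivity-analysis literature, not taken from this report:

import numpy as np

def first_order_sobol(model, d, n=100_000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices
    for a model with d independent inputs uniform on [0, 1]^d."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    yA, yB = model(A), model(B)
    var_y = np.concatenate([yA, yB]).var()
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]              # all columns from A except x_i
        S[i] = np.mean(yB * (model(ABi) - yA)) / var_y
    return S

def ishigami(x):
    """Ishigami test function, a standard benchmark for Sobol analysis."""
    x = 2.0 * np.pi * (x - 0.5)          # rescale inputs to [-pi, pi]
    return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1])**2
            + 0.1 * x[:, 2]**4 * np.sin(x[:, 0]))

print(first_order_sobol(ishigami, d=3).round(2))   # approx [0.31, 0.44, 0.00]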
[Again review of research design and statistical methods of Chinese Journal of Cardiology].
Kong, Qun-yu; Yu, Jin-ming; Jia, Gong-xian; Lin, Fan-li
2012-11-01
To re-evaluate and compare the research design and the use of statistical methods in the Chinese Journal of Cardiology. The research designs and statistical methods in all original papers published in the Chinese Journal of Cardiology in 2011 were summarized and compared with the 2008 evaluation. (1) There was no difference between the two volumes in the distribution of research designs. Compared with the earlier volume, the use of survival regression and non-parametric tests increased, while the proportion of articles with no statistical analysis decreased. (2) The proportions of flawed articles in the later volume were significantly lower than in the former: 6 (4%) with flaws in design, 5 (3%) with flaws in presentation, and 9 (5%) with incomplete analyses. (3) The rate of correct use of analysis of variance increased, as did that of multi-group comparisons and tests of normality. The error rate of usage, 17% versus 25%, did not differ significantly, largely owing to neglect of the test of homogeneity of variance. The Chinese Journal of Cardiology showed many improvements in the regulation of design and statistics; the homogeneity of variance should receive more attention in future applications.
Robust analysis of semiparametric renewal process models
Lin, Feng-Chang; Truong, Young K.; Fine, Jason P.
2013-01-01
Summary A rate model is proposed for a modulated renewal process comprising a single long sequence, where the covariate process may not capture the dependencies in the sequence as in standard intensity models. We consider partial likelihood-based inferences under a semiparametric multiplicative rate model, which has been widely studied in the context of independent and identical data. Under an intensity model, gap times in a single long sequence may be used naively in the partial likelihood with variance estimation utilizing the observed information matrix. Under a rate model, the gap times cannot be treated as independent and studying the partial likelihood is much more challenging. We employ a mixing condition in the application of limit theory for stationary sequences to obtain consistency and asymptotic normality. The estimator's variance is quite complicated owing to the unknown gap times dependence structure. We adapt block bootstrapping and cluster variance estimators to the partial likelihood. Simulation studies and an analysis of a semiparametric extension of a popular model for neural spike train data demonstrate the practical utility of the rate approach in comparison with the intensity approach. PMID:24550568
Twenty-Five Years of Applications of the Modified Allan Variance in Telecommunications.
Bregni, Stefano
2016-04-01
The Modified Allan Variance (MAVAR) was originally defined in 1981 for measuring frequency stability in precision oscillators. Due to its outstanding accuracy in discriminating power-law noise, it has attracted significant interest among telecommunications engineers since the early 1990s, when it was approved as a standard measure in international standards, recast as the Time Variance (TVAR), for specifying the time stability of network synchronization signals and of equipment clocks. A dozen years later, the use of MAVAR was also introduced for Internet traffic analysis to estimate self-similarity and long-range dependence. In this field, it has demonstrated accuracy and sensitivity superior to most popular tools already in use. This paper surveys the last 25 years of progress in extending the field of application of the MAVAR in telecommunications. First, the rationale and principles of the MAVAR are briefly summarized. Its adaptation as TVAR for the specification of timing stability is presented. The usage of MAVAR/TVAR in telecommunications standards is reviewed. Examples of measurements on real telecommunications equipment clocks are presented, providing an overview of their actual performance in terms of MAVAR. Moreover, applications of MAVAR to network traffic analysis are surveyed. The superior accuracy of MAVAR in estimating long-range dependence is emphasized by highlighting some remarkable practical examples of real network traffic analysis.
Genetic analysis of motor milestones attainment in early childhood.
Peter, I; Vainder, M; Livshits, G
1999-03-01
The age of attainment of four motor developmental milestones (turning over, sitting up without support, pulling up to a standing position, and walking without support) was examined in 822 children, including 626 siblings from families with 2 to 6 children, 68 pairs of dizygotic twins and 30 pairs of monozygotic twins. Correlation analysis, carried out separately for each type of sibship, showed the highest pairwise correlations in monozygotic twins and the lowest in non-twin siblings for all motor milestones. Variance component analysis was used to decompose the variation of each trait into independent components: genetic effects, common twin environment, common sib environment and residual factors. The results revealed that, after adjustment for gestational age, the major proportion of the total variance in the attainment of each motor skill, except pulling up to a standing position, is explained by the common twin environment (50.5 to 66.6%), while a moderate proportion is explained by additive genetic factors (22.2 to 33.5%). Gestational age was found to be an important predictor of the appearance of all motor milestones, with each week of earlier gestation delaying attainment of the motor abilities by 4.5 to 8.6 days. The age of attainment of the standing position was affected only by shared sib environment (33.3% of the total variance) and showed no influence of either genetic factors or common twin environment. Phenotypic between-trait correlations were high and significant for all studied traits (range 0.40 to 0.67, P < 0.01 in all instances). Genetic cross-correlations, however, were not easily interpreted and did not show clear variance trends among the different groups of children.
NASA Astrophysics Data System (ADS)
Kohán, Balázs; Tyler, Jonathan; Jones, Matthew; Kern, Zoltán
2017-04-01
Water stable isotopes are important natural tracers in the hydrological cycle on global, regional and local scales. Daily precipitation water samples were collected from 70 sites over the British Isles on the 23rd, 24th, and 25th January, 2012 [1]. Samples were collected as part of a pilot study for the British Isotopes in Rainfall Project, a community engagement initiative, in collaboration with volunteer weather observers and the UK Met Office. The spatial correlation structure of daily precipitation stable oxygen isotope composition (δ18OP) was explored by variogram analysis [2]. Since the variograms from the raw data suggested a pronounced trend, owing to the spatial trend discussed in the original study [1], a second-order polynomial trend was removed from the raw δ18OP data and variograms were calculated on the residuals. Directional experimental semivariograms were calculated (steps: 10°, tolerance: 30°) and aggregated into variogram surface plots to explore the spatial dependence structure of daily δ18OP. Each daily data set produced distinct variogram plots. (1) A well-expressed anisotropic structure can be seen for Jan 23. The lowest and highest variances were observed in the SW-NE and NNE-SSW directions, respectively. Meteorological observations showed that the majority of the atmospheric flow was SW on this day, so the direction of low variance seems to reflect this flow direction, while the maximum variance might reflect the moisture variance along the elongation of the frontal system. (2) A less characteristic but still expressed anisotropic structure was found for Jan 24, when a warm front passed the British Isles perpendicular to the east coast, leading to a characteristic east-west δ18OP gradient suggestive of progressive rainout. The low-variance central zone has a 100 km radius, which might correspond to the width of the warm front zone. Although the axis of minimum variance was similarly SW-NE, the zone of maximum variance was broader and practically perpendicular to it; in this case, the directions of the axes appear misaligned with the flow direction. (3) No similarly characteristic pattern was observed in the last variogram, calculated from the Jan 25 data set. These preliminary results suggest that variogram analysis is a promising approach to linking δ18OP patterns to atmospheric processes. Funding: NKFIH SNN118205 / ARRS N1-0054. References: [1] Tyler, J.J., Jones, M., Arrowsmith, C., Allott, T., & Leng, M.J. (2016). Spatial patterns in the oxygen isotope composition of daily rainfall in the British Isles. Climate Dynamics 47:1971-1987. [2] Webster, R., & Oliver, M.A. (2007). Geostatistics for Environmental Scientists. John Wiley & Sons, Chichester.
Random effects coefficient of determination for mixed and meta-analysis models.
Demidenko, Eugene; Sargent, James; Onega, Tracy
2012-01-01
The key feature of a mixed model is the presence of random effects. We have developed a coefficient, called the random effects coefficient of determination, [Formula: see text], that estimates the proportion of the conditional variance of the dependent variable explained by random effects. This coefficient takes values from 0 to 1 and indicates how strong the random effects are. The difference from the earlier suggested fixed effects coefficient of determination is emphasized. If [Formula: see text] is close to 0, there is weak support for random effects in the model because the reduction of the variance of the dependent variable due to random effects is small; consequently, random effects may be ignored and the model simplifies to standard linear regression. A value of [Formula: see text] away from 0 indicates evidence of variance reduction in support of the mixed model. If the random effects coefficient of determination is close to 1, the variance of the random effects is very large and the random effects turn into free fixed effects; the model can then be estimated using the dummy variable approach. We derive explicit formulas for [Formula: see text] in three special cases: the random intercept model, the growth curve model, and the meta-analysis model. Theoretical results are illustrated with three mixed model examples: (1) travel time to the nearest cancer center for women with breast cancer in the U.S., (2) cumulative time watching alcohol-related scenes in movies among young U.S. teens, as a risk factor for early drinking onset, and (3) the classic example of the meta-analysis model for the combination of 13 studies on tuberculosis vaccine.
Lundy, J Jason; Coons, Stephen Joel; Wendel, Christopher; Hornbrook, Mark C; Herrinton, Lisa; Grant, Marcia; Krouse, Robert S
2009-03-01
The purpose of this analysis was to determine the unique contribution of household income to the variance explained in psychological well-being (PWB) among a sample of colorectal cancer (CRC) survivors. This study is a secondary analysis of data collected as part of the Health-Related Quality of Life in Long-Term Colorectal Cancer Survivors Study, which included CRC survivors with (cases) and without (controls) ostomies. The dataset included socio-demographic, health status, and health-related quality of life (HRQOL) information. HRQOL was assessed with the modified City of Hope Quality of Life (mCOH-QOL)-Ostomy questionnaire and SF-36v2. To assess the relationship between income and PWB, a hierarchical linear regression model was constructed combining data from both cases and controls. After accounting for the proportion of variance in PWB explained by the other independent variables in the model, the additional variance explained by income was significant (R² increased from 0.228 to 0.250; P = 0.006). Although the study design does not allow causal inference, these results demonstrate a significant relationship between income and PWB in CRC survivors. The findings suggest that for non-randomized group comparisons of HRQOL, income should, at the very least, be included as a control variable in the analysis.
The Relationship between Social Capital in Hospitals and Physician Job Satisfaction
Ommen, Oliver; Driller, Elke; Köhler, Thorsten; Kowalski, Christoph; Ernstmann, Nicole; Neumann, Melanie; Steffen, Petra; Pfaff, Holger
2009-01-01
Background Job satisfaction in the hospital is an important predictor for many significant management ratios. Acceptance in professional life or high workload are known as important predictors for job satisfaction. The influence of social capital in hospitals on job satisfaction within the health care system, however, remains to be determined. Thus, this article aimed at analysing the relationship between overall job satisfaction of physicians and social capital in hospitals. Methods The results of this study are based upon questionnaires sent by mail to 454 physicians working in the field of patient care in 4 different German hospitals in 2002. 277 clinicians responded to the poll, for a response rate of 61%. Analysis was performed using three linear regression models with physician overall job satisfaction as the dependent variable and age, gender, professional experience, workload, and social capital as independent variables. Results The first regression model explained nearly 9% of the variance of job satisfaction. Whereas job satisfaction increased slightly with age, gender and professional experience were not identified as significant factors to explain the variance. Setting up a second model with the addition of subjectively-perceived workload to the analysis, the explained variance increased to 18% and job satisfaction decreased significantly with increasing workload. The third model including social capital in hospital explained 36% of the variance with social capital, professional experience and workload as significant factors. Conclusion This analysis demonstrated that the social capital of an organisation, in addition to professional experience and workload, represents a significant predictor of overall job satisfaction of physicians working in the field of patient care. Trust, mutual understanding, shared aims, and ethical values are qualities of social capital that unify members of social networks and communities and enable them to act cooperatively. PMID:19445692
Variance fluctuations in nonstationary time series: a comparative study of music genres
NASA Astrophysics Data System (ADS)
Jennings, Heather D.; Ivanov, Plamen Ch.; De Martins, Allan M.; da Silva, P. C.; Viswanathan, G. M.
2004-05-01
An important problem in physics concerns the analysis of audio time series generated by transduced acoustic phenomena. Here, we develop a new method to quantify the scaling properties of the local variance of nonstationary time series. We apply this technique to analyze audio signals obtained from selected genres of music. We find quantitative differences in the correlation properties of high art music, popular music, and dance music. We discuss the relevance of these objective findings in relation to the subjective experience of music.
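One simple way to probe the scaling of local variance, in the spirit described above, is to compute the variance within non-overlapping windows of growing size and fit a power law. This Python sketch, with a Brownian test signal in place of an audio recording, is an illustration rather than the authors' exact estimator:

import numpy as np

def local_variance(signal, window_sizes):
    """Mean local variance in non-overlapping windows of each size; the
    slope of log(variance) vs. log(window size) probes the scaling
    exponent of the nonstationary signal."""
    out = []
    for w in window_sizes:
        n = len(signal) // w
        out.append(signal[:n * w].reshape(n, w).var(axis=1).mean())
    return np.asarray(out)

rng = np.random.default_rng(3)
x = np.cumsum(rng.normal(size=100_000))        # Brownian-like test signal
ws = np.array([16, 32, 64, 128, 256, 512])
slope = np.polyfit(np.log(ws), np.log(local_variance(x, ws)), 1)[0]
print(f"scaling exponent ~ {slope:.2f}")       # close to 1 for a random walk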
NASA Astrophysics Data System (ADS)
Motsepa, Tanki; Aziz, Taha; Fatima, Aeeman; Khalique, Chaudry Masood
2018-03-01
The optimal investment-consumption problem under the constant elasticity of variance (CEV) model is investigated from the perspective of Lie group analysis. The Lie symmetry group of the evolution partial differential equation describing the CEV model is derived. The Lie point symmetries are then used to obtain an exact solution of the governing model satisfying a standard terminal condition. Finally, we construct conservation laws of the underlying equation using the general theorem on conservation laws.
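For reference, one common parameterization of the CEV price process is shown below; this is an assumption for illustration, since the paper's exact form of the evolution equation is not reproduced here:

\[
  dS_t \;=\; \mu S_t \, dt \;+\; \sigma S_t^{\gamma}\, dW_t ,
  \qquad
  \operatorname{Var}\left( dS_t \mid S_t \right) \;=\; \sigma^2 S_t^{2\gamma}\, dt ,
\]

where \gamma is the elasticity parameter; \gamma = 1 recovers geometric Brownian motion, for which the instantaneous variance of returns no longer depends on the price level.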
Butera, Katie A; George, Steven Z; Borsa, Paul A; Dover, Geoffrey C
2018-03-05
Transcutaneous electrical nerve stimulation (TENS) is commonly used for reducing musculoskeletal pain to improve function. However, peripheral nerve stimulation using TENS can alter muscle motor output. Few studies examine motor outcomes following TENS in a human pain model. Therefore, this study investigated the influence of TENS sensory stimulation primarily on motor output (strength) and secondarily on pain and disability following exercise-induced delayed-onset muscle soreness (DOMS). Thirty-six participants were randomized to a TENS treatment, TENS placebo, or control group after completing a standardized DOMS protocol. Measures included shoulder strength, pain, mechanical pain sensitivity, and disability. TENS treatment and TENS placebo groups received 90 minutes of active or sham treatment 24, 48, and 72 hours post-DOMS. All participants were assessed daily. A repeated measures analysis of variance and post-hoc analysis indicated that, compared to the control group, strength remained reduced in the TENS treatment group (48 hours post-DOMS, P < 0.05) and TENS placebo group (48 hours post-DOMS, P < 0.05; 72 hours post-DOMS, P < 0.05). A mixed-linear modeling analysis was conducted to examine the strength (motor) change. Randomization group explained 5.6% of between-subject strength variance (P < 0.05). Independent of randomization group, pain explained 8.9% of within-subject strength variance and disability explained 3.3% of between-subject strength variance (both P < 0.05). While active and placebo TENS resulted in prolonged strength inhibition, the results were nonsignificant for pain. Results indicated that higher pain and higher disability were independently related to decreased strength. Regardless of the impact on pain, TENS, or even the perception of TENS, may act as a nocebo for motor output. © 2018 World Institute of Pain.
Comprehensive Analysis of LC/MS Data Using Pseudocolor Plots
NASA Astrophysics Data System (ADS)
Crutchfield, Christopher A.; Olson, Matthew T.; Gourgari, Evgenia; Nesterova, Maria; Stratakis, Constantine A.; Yergey, Alfred L.
2013-02-01
We have developed new applications of the pseudocolor plot for the analysis of LC/MS data. These applications include spectral averaging, analysis of variance, differential comparison of spectra, and qualitative filtering by compound class. These applications have been motivated by the need to better understand LC/MS data generated from analysis of human biofluids. The examples presented use data generated to profile steroid hormones in urine extracts from a Cushing's disease patient relative to a healthy control, but are general to any discovery-based scanning mass spectrometry technique. In addition to new visualization techniques, we introduce a new metric of variance: the relative maximum difference from the mean. We also introduce the concept of substructure-dependent analysis of steroid hormones using precursor ion scans. These new analytical techniques provide an alternative approach to traditional untargeted metabolomics workflow. We present an approach to discovery using MS that essentially eliminates alignment or preprocessing of spectra. Moreover, we demonstrate the concept that untargeted metabolomics can be achieved using low mass resolution instrumentation.
An Analysis of the Readability of Financial Accounting Textbooks.
ERIC Educational Resources Information Center
Smith, Gerald; And Others
1981-01-01
The Flesch formula was used to calculate the readability of 15 financial accounting textbooks. The 15 textbooks represented introductory, intermediate, and advanced levels and also were classified by five different publishers. Two-way analysis of variance and Tukey's post hoc analysis revealed some significant differences. (Author/CT)
Seidel, Clemens; Lautenschläger, Christine; Dunst, Jürgen; Müller, Arndt-Christian
2012-04-20
To investigate whether different conditions of DNA structure and radiation treatment could modify heterogeneity of response, and additionally to study variance as a potential parameter of heterogeneity for radiosensitivity testing. Two hundred leukocytes per sample from healthy donors were split into four groups. I: intact chromatin structure; II: nucleoids of histone-depleted DNA; III: nucleoids of histone-depleted DNA with 90 mM DMSO as an antioxidant. Responses to single (I-III) and double (IV) irradiation with 4 Gy, and repair kinetics, were evaluated using %Tail-DNA. Heterogeneity of DNA damage was determined by calculating the variance of DNA damage (V) and the mean variance (Mvar); mutual comparisons were made by one-way analysis of variance (ANOVA). Heterogeneity of initial DNA damage (I, 0 min repair) increased without histones (II). The absence of histones was balanced by the addition of antioxidants (III). Repair reduced the heterogeneity of all samples (with and without irradiation). However, double irradiation plus repair led to a higher level of heterogeneity, distinguishable from single irradiation and repair in intact cells. An increase in mean DNA damage was associated with a similarly elevated variance of DNA damage (r = +0.88). Heterogeneity of DNA damage can thus be modified by histone level, antioxidant concentration, repair and radiation dose, and was positively correlated with DNA damage. Experimental conditions might be optimized by reducing the scatter of comet assay data through repair and antioxidants, potentially allowing better discrimination of small differences. The amount of heterogeneity measured by variance might be an additional useful parameter to characterize radiosensitivity.
Hierarchical multivariate covariance analysis of metabolic connectivity.
Carbonell, Felix; Charil, Arnaud; Zijdenbos, Alex P; Evans, Alan C; Bedell, Barry J
2014-12-01
Conventional brain connectivity analysis is typically based on the assessment of interregional correlations. Given that correlation coefficients are derived from both covariance and variance, group differences in covariance may be obscured by differences in the variance terms. To facilitate a comprehensive assessment of connectivity, we propose a unified statistical framework that interrogates the individual terms of the correlation coefficient. We have evaluated the utility of this method for metabolic connectivity analysis using [18F]2-fluoro-2-deoxyglucose (FDG) positron emission tomography (PET) data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) study. As an illustrative example of the utility of this approach, we examined metabolic connectivity in angular gyrus and precuneus seed regions of mild cognitive impairment (MCI) subjects with low and high β-amyloid burdens. This new multivariate method allowed us to identify alterations in the metabolic connectome, which would not have been detected using classic seed-based correlation analysis. Ultimately, this novel approach should be extensible to brain network analysis and broadly applicable to other imaging modalities, such as functional magnetic resonance imaging (MRI).
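The motivating identity is the decomposition of the correlation coefficient into covariance and variance terms,

\[
  r_{ij} \;=\; \frac{\operatorname{cov}(x_i, x_j)}{\sqrt{\operatorname{var}(x_i)\,\operatorname{var}(x_j)}} ,
\]

so a group difference in r_{ij} can originate in the covariance, in either variance, or in offsetting changes that leave r_{ij} unchanged; testing the terms separately, as the proposed framework does, removes this ambiguity.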
Estimates of tropical analysis differences in daily values produced by two operational centers
NASA Technical Reports Server (NTRS)
Kasahara, Akira; Mizzi, Arthur P.
1992-01-01
To assess the uncertainty of daily synoptic analyses for the atmospheric state, the intercomparison of three First GARP Global Experiment level IIIb datasets is performed. Daily values of divergence, vorticity, temperature, static stability, vertical motion, mixing ratio, and diagnosed diabatic heating rate are compared for the period of 26 January-11 February 1979. The spatial variance and mean, temporal mean and variance, 2D wavenumber power spectrum, anomaly correlation, and normalized square difference are employed for comparison.
Austin, Peter C
2016-12-30
Propensity score methods are used to reduce the effects of observed confounding when using observational data to estimate the effects of treatments or exposures. A popular method of using the propensity score is inverse probability of treatment weighting (IPTW). When using this method, a weight is calculated for each subject that is equal to the inverse of the probability of receiving the treatment that was actually received. These weights are then incorporated into the analyses to minimize the effects of observed confounding. Previous research has found that these methods result in unbiased estimation when estimating the effect of treatment on survival outcomes. However, conventional methods of variance estimation were shown to result in biased estimates of standard error. In this study, we conducted an extensive set of Monte Carlo simulations to examine different methods of variance estimation when using a weighted Cox proportional hazards model to estimate the effect of treatment. We considered three variance estimation methods: (i) a naïve model-based variance estimator; (ii) a robust sandwich-type variance estimator; and (iii) a bootstrap variance estimator. We considered estimation of both the average treatment effect and the average treatment effect in the treated. We found that the use of a bootstrap estimator resulted in approximately correct estimates of standard errors and confidence intervals with the correct coverage rates. The other estimators resulted in biased estimates of standard errors and confidence intervals with incorrect coverage rates. Our simulations were informed by a case study examining the effect of statin prescribing on mortality. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
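A minimal Python sketch of the weighting-plus-bootstrap recipe, using a continuous outcome and a weighted mean difference for brevity; the study itself used a weighted Cox proportional hazards model, and the propensity model, data, and helper names below are illustrative assumptions:

import numpy as np
from sklearn.linear_model import LogisticRegression

def iptw_weights(X, treated):
    """ATE-type inverse probability of treatment weights: 1/e(x) for
    treated subjects and 1/(1 - e(x)) for controls, where e(x) is the
    estimated propensity score."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    return np.where(treated == 1, 1.0 / ps, 1.0 / (1.0 - ps))

def bootstrap_se(X, treated, y, n_boot=500, seed=0):
    """Bootstrap the entire procedure, refitting the propensity model in
    each resample, mirroring the estimator the study found to give
    approximately correct standard errors."""
    rng = np.random.default_rng(seed)
    n, estimates = len(y), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        Xb, tb, yb = X[idx], treated[idx], y[idx]
        w = iptw_weights(Xb, tb)
        estimates.append(np.average(yb[tb == 1], weights=w[tb == 1])
                         - np.average(yb[tb == 0], weights=w[tb == 0]))
    return np.std(estimates, ddof=1)

# Synthetic demo with a true treatment effect of 1.0 and one confounder
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 2))
treated = rng.binomial(1, 1.0 / (1.0 + np.exp(-X[:, 0])))
y = treated + X[:, 0] + rng.normal(size=1000)
print("bootstrap SE:", round(bootstrap_se(X, treated, y), 3))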
The Coopersmith Self-Esteem Inventory in an Adult Sample.
ERIC Educational Resources Information Center
Noller, Patricia; Shugm, David
1988-01-01
The reliability and validity of the Self-Esteem Inventory developed by S. C. Coopersmith (1975) were evaluated via item-total correlation, discriminant analysis, factor analysis, and analysis of variance of data for 352 Australian adults. The instrument had high internal consistency and discriminated well between subjects with high and low…
Bitzen, Alexander; Sternickel, Karsten; Lewalter, Thorsten; Schwab, Jörg Otto; Yang, Alexander; Schrickel, Jan Wilko; Linhart, Markus; Wolpert, Christian; Jung, Werner; David, Peter; Lüderitz, Berndt; Nickenig, Georg; Lickfett, Lars
2007-10-01
Patients with atrial fibrillation (AF) often exhibit abnormalities of P wave morphology during sinus rhythm. We examined a novel method for automatic P wave analysis in the 24-hour Holter ECG of 60 patients with paroxysmal or persistent AF and 12 healthy subjects. Recorded ECG signals were transferred to the analysis program, where 5-10 P and R waves were manually marked. A wavelet transform performed a time-frequency decomposition to train neural networks. Afterwards, the detected P waves were described using a Gauss function optimized to fit the individual morphology, providing the amplitude and the duration at half P wave height. >96% of P waves were detected, and 47.4 +/- 20.7% were successfully analyzed afterwards. In the patient population, the mean amplitude was 0.073 +/- 0.028 mV (mean variance 0.020 +/- 0.008 mV²), and the mean duration at half height was 23.5 +/- 2.7 ms (mean variance 4.2 +/- 1.6 ms²). In the control group, the mean amplitude (0.105 +/- 0.020 mV) was significantly higher (P < 0.0005) and the mean variance of duration at half height (2.9 +/- 0.6 ms²) significantly lower (P < 0.0085). This method shows promise for the identification of triggering factors of AF.
Development of rotation sample designs for the estimation of crop acreages
NASA Technical Reports Server (NTRS)
Lycthuan-Lee, T. G. (Principal Investigator)
1981-01-01
The idea behind the use of rotation sample designs is that the variation in the crop acreage of a particular sample unit from year to year is usually less than the variation in crop acreage between units within a particular year. The estimation theory is based on an additive mixed analysis of variance model with years as fixed effects, a_t, and sample units as a random factor. The rotation patterns are decided according to: (1) the number of sample units in the design each year; (2) the number of units retained in the following years; and (3) the number of years needed to complete the rotation pattern. Analytic formulae are given for the variance of a_t and for variance comparisons of the rotation patterns against a complete survey.
Global Sensitivity Analysis and Parameter Calibration for an Ecosystem Carbon Model
NASA Astrophysics Data System (ADS)
Safta, C.; Ricciuto, D. M.; Sargsyan, K.; Najm, H. N.; Debusschere, B.; Thornton, P. E.
2013-12-01
We present uncertainty quantification results for a process-based ecosystem carbon model. The model employs 18 parameters and is driven by meteorological data corresponding to years 1992-2006 at the Harvard Forest site. Daily Net Ecosystem Exchange (NEE) observations were available to calibrate the model parameters and test the performance of the model. Posterior distributions show good predictive capabilities for the calibrated model. A global sensitivity analysis was first performed to determine the important model parameters based on their contribution to the variance of NEE. We then proceed to calibrate the model parameters in a Bayesian framework. The daily discrepancies between measured and predicted NEE values were modeled as independent and identically distributed Gaussians with prescribed daily variance according to the recorded instrument error. All model parameters were assumed to have uninformative priors with bounds set according to expert opinion. The global sensitivity results show that the rate of leaf fall (LEAFALL) is responsible for approximately 25% of the total variance in the average NEE for 1992-2005. A set of 4 other parameters, Nitrogen use efficiency (NUE), base rate for maintenance respiration (BR_MR), growth respiration fraction (RG_FRAC), and allocation to plant stem pool (ASTEM) contribute between 5% and 12% to the variance in average NEE, while the rest of the parameters have smaller contributions. The posterior distributions, sampled with a Markov Chain Monte Carlo algorithm, exhibit significant correlations between model parameters. However LEAFALL, the most important parameter for the average NEE, is not informed by the observational data, while less important parameters show significant updates between their prior and posterior densities. The Fisher information matrix values, indicating which parameters are most informed by the experimental observations, are examined to augment the comparison between the calibration and global sensitivity analysis results.
Analysis and application of minimum variance discrete time system identification
NASA Technical Reports Server (NTRS)
Kaufman, H.; Kotob, S.
1975-01-01
An on-line minimum variance parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.
Asquith, William H.; Barbie, Dana L.
2014-01-01
Selected summary statistics (L-moments) and estimates of respective sampling variances were computed for the 35 streamgages lacking statistically significant trends. From the L-moments and estimated sampling variances, weighted means or regional values were computed for each L-moment. An example application is included demonstrating how the L-moments could be used to evaluate the magnitude and frequency of annual mean streamflow.
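A hedged sketch of how regional values can be formed from site L-moments and their sampling variances: inverse-variance weighting is one natural choice, though the report's exact weighting scheme is not spelled out here, and the numbers below are hypothetical.

import numpy as np

def regional_value(l_moments, sampling_vars):
    """Inverse-variance weighted regional mean of a site L-moment and
    the variance of that weighted mean."""
    w = 1.0 / np.asarray(sampling_vars)
    mean = np.sum(w * l_moments) / np.sum(w)
    return mean, 1.0 / np.sum(w)

# Hypothetical L-skewness values and sampling variances for five streamgages
tau3 = np.array([0.12, 0.18, 0.15, 0.10, 0.22])
svar = np.array([0.004, 0.009, 0.006, 0.003, 0.012])
mean, var = regional_value(tau3, svar)
print(f"regional L-skewness = {mean:.3f} (variance {var:.4f})")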
Xu, Beibei; Yang, Guanyi; Ge, Shufan; Yin, Taijun; Hu, Ming; Gao, Song
2013-11-01
The purpose of this study was to develop a UPLC-MS/MS method to quantify 3-hydroxyflavone (3-HF) and its metabolite, 3-hydroxyflavone-glucuronide (3-HFG), in biological samples. A Waters BEH C8 column was used with acetonitrile/0.1% formic acid in water as mobile phases. The mass analysis was performed in an API 5500 Qtrap mass spectrometer via multiple reaction monitoring (MRM) in positive scan mode. One-step protein precipitation with acetonitrile was used to extract the analytes from blood. The results showed that the linear response range was 0.61-2500.00 nM for 3-HF and 0.31-2500.00 nM for 3-HFG. The intra-day variance was less than 16.5% with accuracy of 77.7-90.6% for 3-HF, and variance less than 15.9% with accuracy of 85.1-114.7% for 3-HFG. The inter-day variance was less than 20.2% with accuracy of 110.6-114.2% for 3-HF, and variance less than 15.6% with accuracy of 83.0-89.4% for 3-HFG. The analysis was completed within 4.0 min. Only 10 μl of blood is needed owing to the high sensitivity of this method. The validated method was successfully applied to a pharmacokinetic study in A/J mice, a transport study in the Caco-2 cell culture model, and a glucuronidation study using mouse liver and intestine microsomes. These applications show that the method can be used for 3-HF and 3-HFG analysis in blood as well as in buffers such as HBSS and KPI. Copyright © 2013 Elsevier B.V. All rights reserved.
German, Alina; Livshits, Gregory; Peter, Inga; Malkin, Ida; Dubnov, Jonathan; Akons, Hannah; Shmoish, Michael; Hochberg, Ze'ev
2015-03-01
Using a twin study, we sought to assess the contributions of genetic versus environmental factors affecting the age at transition from infancy to childhood (ICT). The subjects were 56 pairs of monozygotic twins, 106 pairs of dizygotic twins, and 106 pairs of regular siblings (SBs), for a total of 536 children. Their ICT was determined, and a variance component analysis was implemented to estimate components of the familial variance, with simultaneous adjustment for potential covariates. We found a substantial contribution of the common environment shared by all types of SBs, which explained 27.7% of the total variance in ICT, whereas the common twin environment explained 9.2% of the variance, gestational age 3.5%, and birth weight 1.8%. In addition, 8.7% was attributable to sex differences, but we found no detectable contribution of genetic factors to inter-individual variation in ICT age. Developmental plasticity impacts much of human growth. Here we show that of the ∼50% of the variance in adult height provided by the ICT, 42.2% is attributable to adaptive cues represented by shared twin and SB environment, with no detectable genetic involvement. Copyright © 2015 Elsevier Inc. All rights reserved.
D'Acremont, Mathieu; Bossaerts, Peter
2008-12-01
When modeling valuation under uncertainty, economists generally prefer expected utility because it has an axiomatic foundation, meaning that the resulting choices will satisfy a number of rationality requirements. In expected utility theory, values are computed by multiplying probabilities of each possible state of nature by the payoff in that state and summing the results. The drawback of this approach is that all state probabilities need to be dealt with separately, which becomes extremely cumbersome when it comes to learning. Finance academics and professionals, however, prefer to value risky prospects in terms of a trade-off between expected reward and risk, where the latter is usually measured in terms of reward variance. This mean-variance approach is fast and simple and greatly facilitates learning, but it impedes assigning values to new gambles on the basis of those of known ones. To date, it is unclear whether the human brain computes values in accordance with expected utility theory or with mean-variance analysis. In this article, we discuss the theoretical and empirical arguments that favor one or the other theory. We also propose a new experimental paradigm that could determine whether the human brain follows the expected utility or the mean-variance approach. Behavioral results of implementation of the paradigm are discussed.
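The two valuation rules contrasted in this abstract can be written compactly as follows; the risk-aversion weight \lambda and the 1/2 convention in the mean-variance form are standard textbook choices, not the article's notation:

\[
  \mathrm{EU}(g) \;=\; \sum_{s} p_s \, u(x_s) ,
  \qquad
  \mathrm{MV}(g) \;=\; \mathbb{E}[x] \;-\; \frac{\lambda}{2}\,\operatorname{Var}(x) .
\]

Under the mean-variance rule, learning only requires updating two running moments of experienced rewards, whereas expected utility requires tracking each state probability p_s separately.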
NASA Astrophysics Data System (ADS)
Pratama Wahyu Hidayat, Putra; Hary Murti, Antonius; Sudarmaji; Shirly, Agung; Tiofan, Bani; Damayanti, Shinta
2018-03-01
Geometry is an important parameter in hydrocarbon exploration and exploitation: it has a significant effect on the amount of resources or reserves, on rock spreading, and on risk analysis. The existence of geological structures or faults is one factor affecting geometry. This study was conducted as an effort to enhance seismic image quality in a fault-dominated area, the offshore Madura Strait. Over the past 10 years, Oligo-Miocene carbonate rock has been only sparsely explored in the Madura Strait area, mainly because migration and trap geometry remain risks of concern. This study determines the boundary of each fault zone in subsurface images generated by converting seismic data into the variance attribute. The variance attribute is a multitrace seismic attribute derived from seismic amplitude data. The resulting variance sections of the Madura Strait area take values near zero (0) where the seismic data are continuous and near one (1) where they are discontinuous. The variance sections show the boundary between the RMKS fault zone and the Kendeng zone distinctly. The geological structure and subsurface geometry of the Oligo-Miocene carbonate rock could be identified clearly using this method. In general, structural interpretation to identify the boundaries of fault zones can be well supported by the variance attribute.
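A minimal Python sketch of a variance attribute of the kind described, computed as normalized amplitude variance in a small trace-by-time window; the window sizes and the synthetic faulted section are illustrative assumptions:

import numpy as np
from scipy.ndimage import uniform_filter

def variance_attribute(section, size=(3, 9)):
    """Local amplitude variance in a (traces, samples) window, normalized
    to [0, 1]: values near 0 indicate seismic continuity, values near 1
    indicate discontinuity such as a fault boundary."""
    mean = uniform_filter(section, size=size)
    mean_sq = uniform_filter(section**2, size=size)
    var = np.clip(mean_sq - mean**2, 0.0, None)
    return var / var.max()

# Synthetic section: 50 traces of a sine reflector, offset below trace 25
t = np.linspace(0.0, 1.0, 200)
section = np.tile(np.sin(40.0 * t), (50, 1))
section[25:] = np.roll(section[25:], 5, axis=1)   # simulate a fault throw

attr = variance_attribute(section)
print("most discontinuous trace:", attr.mean(axis=1).argmax())  # near trace 25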
ANALYSES OF NEUROBEHAVIORAL SCREENING DATA: BENCHMARK DOSE ESTIMATION.
Analysis of neurotoxicological screening data such as those of the functional observational battery (FOB) traditionally relies on analysis of variance (ANOVA) with repeated measurements, followed by determination of a no-adverse-effect level (NOAEL). The US EPA has proposed the ...
Analysis of stimulus-related activity in rat auditory cortex using complex spectral coefficients
Krause, Bryan M.
2013-01-01
The neural mechanisms of sensory responses recorded from the scalp or cortical surface remain controversial. Evoked vs. induced response components (i.e., changes in mean vs. variance) are associated with bottom-up vs. top-down processing, but trial-by-trial response variability can confound this interpretation. Phase reset of ongoing oscillations has also been postulated to contribute to sensory responses. In this article, we present evidence that responses under passive listening conditions are dominated by variable evoked response components. We measured the mean, variance, and phase of complex time-frequency coefficients of epidurally recorded responses to acoustic stimuli in rats. During the stimulus, changes in mean, variance, and phase tended to co-occur. After the stimulus, there was a small, low-frequency offset response in the mean and modest, prolonged desynchronization in the alpha band. Simulations showed that trial-by-trial variability in the mean can account for most of the variance and phase changes observed during the stimulus. This variability was state dependent, with smallest variability during periods of greatest arousal. Our data suggest that cortical responses to auditory stimuli reflect variable inputs to the cortical network. These analyses suggest that caution should be exercised when interpreting variance and phase changes in terms of top-down cortical processing. PMID:23657279
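A sketch of the trial statistics discussed above for complex time-frequency coefficients, in Python. The array layout and the phase-consistency measure (magnitude of the mean unit phasor) are common conventions assumed here, not necessarily the authors' exact definitions:

import numpy as np

def trial_statistics(coeffs):
    """Summarize complex time-frequency coefficients across trials.
    coeffs: complex array (n_trials, n_freqs, n_times). Returns the
    evoked component (trial mean), the across-trial variance, and the
    inter-trial phase consistency (magnitude of the mean unit phasor)."""
    evoked = coeffs.mean(axis=0)
    variance = coeffs.var(axis=0)      # mean squared deviation from the mean
    phase_lock = np.abs(np.exp(1j * np.angle(coeffs)).mean(axis=0))
    return evoked, variance, phase_lock

# Demo: a fixed "evoked" coefficient plus complex trial-to-trial noise
rng = np.random.default_rng(0)
signal = 1.0 + 0.5j
noise = 0.3 * (rng.normal(size=(100, 4, 8)) + 1j * rng.normal(size=(100, 4, 8)))
evoked, variance, phase_lock = trial_statistics(signal + noise)
print(evoked.mean().round(2), variance.mean().round(2), phase_lock.mean().round(2))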
Income distribution dependence of poverty measure: A theoretical analysis
NASA Astrophysics Data System (ADS)
Chattopadhyay, Amit K.; Mallick, Sushanta K.
2007-04-01
Using a modified deprivation (or poverty) function, in this paper we theoretically study the changes in poverty with respect to the ‘global’ mean and variance of the income distribution using Indian survey data. We show that when income obeys a log-normal distribution, a rising mean income generally indicates a reduction in poverty, while an increase in the variance of the income distribution increases poverty. This altruistic view for a developing economy, however, is no longer tenable once the poverty index is found to follow a Pareto distribution. Here, although a rising mean income indicates a reduction in poverty, due to the presence of an inflexion point in the poverty function there is a critical value of the variance below which poverty decreases with increasing variance, while beyond this value poverty undergoes a steep increase followed by a decrease with respect to higher variance. Identifying this inflexion point as the poverty line, we show that the Pareto poverty function satisfies all three standard axioms of a poverty index [N.C. Kakwani, Econometrica 43 (1980) 437; A.K. Sen, Econometrica 44 (1976) 219], whereas the log-normal distribution falls short of this requisite. Following these results, we make quantitative predictions to correlate a developing with a developed economy.
Mapping carcass and meat quality QTL on Sus Scrofa chromosome 2 in commercial finishing pigs
Heuven, Henri CM; van Wijk, Rik HJ; Dibbits, Bert; van Kampen, Tony A; Knol, Egbert F; Bovenhuis, Henk
2009-01-01
Quantitative trait loci (QTL) affecting carcass and meat quality located on SSC2 were identified using variance component methods. A large number of traits involved in meat and carcass quality was recorded in a commercial crossbred population: 1855 pigs sired by 17 boars from a synthetic line, which were homozygous (A/A) for IGF2. Using combined linkage and linkage disequilibrium mapping (LDLA), several QTL significantly affecting loin muscle mass, ham weight and ham muscles (outer ham and knuckle ham), and meat quality traits such as Minolta-L* and -b*, ultimate pH and Japanese colour score, were detected. These results agreed well with previous QTL studies involving SSC2. Since our study was carried out on crossbreds, different QTL may be segregating in the parental lines. To address this question, we compared models with a single QTL variance component to models allowing for separate sire and dam QTL variance components. The same QTL were identified using a single QTL variance component model as with a model allowing for separate variances, with minor differences with respect to QTL location. However, the variance component method made it possible to detect QTL segregating in the paternal line (e.g. HAMB), the maternal lines (e.g. Ham) or in both (e.g. pHu). Combining association and linkage information among haplotypes slightly improved the significance of the QTL compared with an analysis using linkage information only. PMID:19284675
Monte Carlo isotopic inventory analysis for complex nuclear systems
NASA Astrophysics Data System (ADS)
Phruksarojanakun, Phiphat
Monte Carlo Inventory Simulation Engine (MCise) is a newly developed method for calculating the isotopic inventory of materials. It offers the promise of modeling materials with complex processes and irradiation histories, which pose challenges for current deterministic tools, and has strong analogies to Monte Carlo (MC) neutral particle transport. The analog method, including considerations for simple, complex and loop flows, is fully developed. In addition, six variance reduction tools provide unique capabilities of MCise to improve the statistical precision of MC simulations. Forced Reaction forces an atom to undergo a desired number of reactions in a given irradiation environment. Biased Reaction Branching primarily focuses on improving statistical results for isotopes that are produced through rare reaction pathways. Biased Source Sampling aims at increasing the frequency with which rare initial isotopes are sampled as starting particles. Reaction Path Splitting increases the population by splitting the atom at each reaction point, creating one new atom for each decay or transmutation product. Delta Tracking is recommended for high-frequency pulsing to reduce computing time. Lastly, Weight Window is introduced as a strategy to decrease large deviations of weight due to the use of variance reduction techniques. A figure of merit is necessary to compare the efficiency of different variance reduction techniques. A number of possibilities for the figure of merit are explored, two of which are robust and subsequently used. One is based on the relative error of a known target isotope, 1/R_T^2, and the other on the overall detection limit corrected by the relative error, 1/(D_k R_T^2). An automated Adaptive Variance-reduction Adjustment (AVA) tool is developed to iteratively define parameters for some variance reduction techniques in a problem with a target isotope. Sample problems demonstrate that AVA improves both the precision and the accuracy of a target result in an efficient manner. Potential applications of MCise include molten salt fueled reactors and liquid breeders in fusion blankets. As an example, the inventory analysis of a liquid actinide fuel in the In-Zinerator, a sub-critical power reactor driven by a fusion source, is examined. The result reaffirms MCise as a reliable tool for inventory analysis of complex nuclear systems.
Factor Analysis of Drawings: Application to College Student Models of the Greenhouse Effect
ERIC Educational Resources Information Center
Libarkin, Julie C.; Thomas, Stephen R.; Ording, Gabriel
2015-01-01
Exploratory factor analysis was used to identify models underlying drawings of the greenhouse effect made by over 200 entering university freshmen. Initial content analysis allowed deconstruction of drawings into salient features, with grouping of these features via factor analysis. A resulting 4-factor solution explains 62% of the data variance,…
The influence of acceleration loading curve characteristics on traumatic brain injury.
Post, Andrew; Blaine Hoshizaki, T; Gilchrist, Michael D; Brien, Susan; Cusimano, Michael D; Marshall, Shawn
2014-03-21
To prevent brain trauma, understanding the mechanism of injury is essential. Once the mechanism of brain injury has been identified, prevention technologies can be developed. The incidence of brain injury is linked to how the kinematics of a brain injury event affect the internal structures of the brain, so it is essential to describe how the characteristics of linear and rotational acceleration influence specific traumatic brain injury lesions. The purpose of this study was therefore to examine how the characteristics of linear and rotational acceleration pulses account for the variance in predicting the outcome of TBI lesions, namely contusion, subdural hematoma (SDH), subarachnoid hemorrhage (SAH), and epidural hematoma (EDH), using a principal components analysis (PCA). Monorail impacts were conducted to simulate the falls that caused the TBI lesions. From these reconstructions, the characteristics of the linear and rotational acceleration were determined and used for the PCA. The results indicated that peak resultant acceleration variables did not account for any of the variance in predicting TBI lesions. The majority of the variance was accounted for by the duration of the resultant and component linear and rotational accelerations, and the x-, y-, and z-axis components of the acceleration characteristics accounted for the majority of the remaining variance after duration. Copyright © 2014 Elsevier Ltd. All rights reserved.
Variance and Predictability of Precipitation at Seasonal-to-Interannual Timescales
NASA Technical Reports Server (NTRS)
Koster, Randal D.; Suarez, Max J.; Heiser, Mark
1999-01-01
A series of atmospheric general circulation model (AGCM) simulations, spanning a total of several thousand years, is used to assess the impact of land-surface and ocean boundary conditions on the seasonal-to-interannual variability and predictability of precipitation in a coupled modeling system. In the first half of the analysis, which focuses on precipitation variance, we show that the contributions of ocean, atmosphere, and land processes to this variance can be characterized, to first order, with a simple linear model. This allows a clean separation of the contributions, from which we find: (1) land and ocean processes have essentially different domains of influence, i.e., the amplification of precipitation variance by land-atmosphere feedback is most important outside of the regions (mainly in the tropics) that are most affected by sea surface temperatures; and (2) the strength of land-atmosphere feedback in a given region is largely controlled by the relative availability of energy and water there. In the second half of the analysis, the potential for seasonal-to-interannual predictability of precipitation is quantified under the assumption that all relevant surface boundary conditions (in the ocean and on land) are known perfectly into the future. We find that the chaotic nature of the atmospheric circulation imposes fundamental limits on predictability in many extratropical regions. Associated with this result is an indication that soil moisture initialization or assimilation in a seasonal-to-interannual forecasting system would be beneficial mainly in transition zones between dry and humid regions.
Jäckle, Sebastian; Wenzelburger, Georg
2015-01-01
Although attitudes toward homosexuality have become more liberal, particularly in industrialized Western countries, there is still a great deal of variance in worldwide levels of homonegativity. Using data from the two most recent waves of the World Values Survey (1999-2004, 2005-2009), this article seeks to explain this variance by means of a multilevel analysis of 79 countries. We include characteristics at the individual level, such as age and gender, as well as aggregate variables linked to specificities of the nation-states. In particular, we focus on a person's religious denomination and religiosity to explain his or her attitude toward homosexuality. We find clear differences in levels of homonegativity among the followers of the individual religions.
Roberts, M A; Milich, R; Loney, J; Caputo, J
1981-09-01
The convergent and discriminant validities of three teacher rating scale measures of the traits of hyperactivity, aggression, and inattention were explored using the multitrait-multimethod matrix approach of Campbell and Fiske (1959), as well as an analysis of variance procedure (Stanley, 1961). In the present study, teachers rated children from their elementary school classrooms on the above traits. The results provided strong evidence for convergent validity. The data also indicated that these traits can be reliably differentiated by teachers, suggesting that research aimed at better understanding the unique contributions of hyperactivity, aggression, and inattention is warranted. The respective benefits of analyzing multitrait-multimethod matrices by employing the ANOVA procedure or by using the Campbell and Fiske (1959) criteria are discussed.
A VLBI variance-covariance analysis interactive computer program. M.S. Thesis
NASA Technical Reports Server (NTRS)
Bock, Y.
1980-01-01
An interactive computer program (in FORTRAN) for the variance-covariance analysis of VLBI experiments is presented for use in experiment planning, simulation studies and optimal design problems. The interactive mode is especially suited to these types of analyses, providing ease of operation as well as savings in time and cost. The geodetic parameters include baseline vector parameters and variations in polar motion and Earth rotation. A discussion of the theory on which the program is based provides an overview of the VLBI process, emphasizing the areas of interest to geodesy. Special emphasis is placed on the problem of determining correlations between simultaneous observations from a network of stations. A model suitable for covariance analyses is presented. Suggestions towards developing optimal observation schedules are included.
Lindström, Martin; Lindström, Christine; Moghaddassi, Mahnaz; Merlo, Juan
2006-12-01
The aim of this study was to investigate the influence of contextual (social capital and neo-materialist) and individual factors on sense of insecurity in the neighbourhood. The 2000 public health survey in Scania is a cross-sectional study. A total of 13,715 persons answered a postal questionnaire, which is 59% of the random sample. A multilevel logistic regression model, with individuals at the first level and municipalities at the second, was performed. The effect (median odds ratios, intra-class correlation, cross-level modification and odds ratios) of individual and municipality/city quarter (social capital and police district) factors on sense of insecurity was analysed. The crude variance between municipalities/city quarters was not affected by individual factors. The introduction of administrative police district in the model reduced the municipality variance, although some of the significant variance between municipalities remained. The introduction of social capital did not affect the municipality variance. This study suggests that the neo-materialist factor administrative police district may partly explain the individual's sense of insecurity in the neighbourhood.
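The abstract reports median odds ratios (MOR) and intra-class correlations for a multilevel logistic model. As a minimal sketch, both summaries can be computed from a between-cluster variance on the logit scale using the standard formulas (MOR per Merlo et al., latent-scale ICC with residual variance pi^2/3); the variance value below is assumed for illustration.

```python
# Sketch: cluster-level summaries from a hypothetical between-municipality
# variance (sigma^2) on the logit scale.
import numpy as np
from scipy.stats import norm

sigma2 = 0.15                                        # assumed cluster-level variance
mor = np.exp(np.sqrt(2 * sigma2) * norm.ppf(0.75))   # median odds ratio
icc = sigma2 / (sigma2 + np.pi**2 / 3)               # latent-scale intra-class correlation
print(f"MOR = {mor:.2f}, ICC = {icc:.3f}")
```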
Budiarto, E; Keijzer, M; Storchi, P R M; Heemink, A W; Breedveld, S; Heijmen, B J M
2014-01-20
Radiotherapy dose delivery in the tumor and surrounding healthy tissues is affected by movements and deformations of the corresponding organs between fractions. The random variations may be characterized by non-rigid, anisotropic principal component analysis (PCA) modes. In this article new dynamic dose deposition matrices, based on established PCA modes, are introduced as a tool to evaluate the mean and the variance of the dose at each target point resulting from any given set of fluence profiles. The method is tested for a simple cubic geometry and for a prostate case. The movements spread out the distributions of the mean dose and cause the variance of the dose to be highest near the edges of the beams. The non-rigidity and anisotropy of the movements are reflected in both quantities. The dynamic dose deposition matrices facilitate the inclusion of the mean and the variance of the dose in the existing fluence-profile optimizer for radiotherapy planning, to ensure robust plans with respect to the movements.
NASA Astrophysics Data System (ADS)
Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.
2016-01-01
Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.
Estimating acreage by double sampling using LANDSAT data
NASA Technical Reports Server (NTRS)
Pont, F.; Horwitz, H.; Kauth, R. (Principal Investigator)
1982-01-01
Double sampling techniques employing LANDSAT data for estimating the acreage of corn and soybeans were investigated and evaluated. The evaluation was based on estimated costs and correlations between two existing procedures having differing cost/variance characteristics, and included consideration of their individual merits when coupled with a fictional 'perfect' procedure of zero bias and variance. Two features of the analysis are: (1) the simultaneous estimation of two or more crops; and (2) the imposition of linear cost constraints among two or more types of resource. A reasonably realistic operational scenario was postulated. The costs were estimated from current experience with the measurement procedures involved, and the correlations were estimated from a set of 39 LACIE-type sample segments located in the U.S. Corn Belt. For a fixed variance of the estimate, double sampling with the two existing LANDSAT measurement procedures can result in a 25% or 50% cost reduction. Double sampling which included the fictional perfect procedure results in a more cost-effective combination when it is used with the lower cost/higher variance representative of the existing procedures.
Evaluation of assumptions in soil moisture triple collocation analysis
USDA-ARS?s Scientific Manuscript database
Triple collocation analysis (TCA) enables estimation of error variances for three or more products that retrieve or estimate the same geophysical variable using mutually-independent methods. Several statistical assumptions regarding the statistical nature of errors (e.g., mutual independence and ort...
Kirkpatrick, Robert M; McGue, Matt; Iacono, William G
2015-03-01
The present study of general cognitive ability attempts to replicate and extend previous investigations of a biometric moderator, family-of-origin socioeconomic status (SES), in a sample of 2,494 pairs of adolescent twins, non-twin biological siblings, and adoptive siblings assessed with individually administered IQ tests. We hypothesized that SES would covary positively with additive-genetic variance and negatively with shared-environmental variance. Important potential confounds unaddressed in some past studies, such as twin-specific effects, assortative mating, and differential heritability by trait level, were found to be negligible. In our main analysis, we compared models by their sample-size corrected AIC, and base our statistical inference on model-averaged point estimates and standard errors. Additive-genetic variance increased with SES-an effect that was statistically significant and robust to model specification. We found no evidence that SES moderated shared-environmental influence. We attempt to explain the inconsistent replication record of these effects, and provide suggestions for future research.
Development and Initial Validation of the Multicultural Personality Inventory (MPI).
Ponterotto, Joseph G; Fietzer, Alexander W; Fingerhut, Esther C; Woerner, Scott; Stack, Lauren; Magaldi-Dopman, Danielle; Rust, Jonathan; Nakao, Gen; Tsai, Yu-Ting; Black, Natasha; Alba, Renaldo; Desai, Miraj; Frazier, Chantel; LaRue, Alyse; Liao, Pei-Wen
2014-01-01
Two studies summarize the development and initial validation of the Multicultural Personality Inventory (MPI). In Study 1, the 115-item prototype MPI was administered to 415 university students, where exploratory factor analysis resulted in a 70-item, 7-factor model. In Study 2, the 70-item MPI and theoretically related companion instruments were administered to a multisite sample of 576 university students. Confirmatory factor analysis found the 7-factor structure to be a relatively good fit to the data (Comparative Fit Index = .954; root mean square error of approximation = .057), and MPI factors predicted variance in criterion variables above and beyond the variance accounted for by broad personality traits (i.e., the Big Five). Study limitations and directions for further validation research are specified.
Giesen, E B W; Ding, M; Dalstra, M; van Eijden, T M G J
2003-09-01
As several morphological parameters of cancellous bone express more or less the same architectural measure, we applied principal components analysis to group these measures and correlated them with the mechanical properties. Cylindrical specimens (n = 24) were obtained in different orientations from embalmed mandibular condyles; the angle between the first principal direction and the axis of the specimen, expressing the orientation of the trabeculae, ranged from 10 degrees to 87 degrees. Morphological parameters were determined by a method based on Archimedes' principle and by micro-CT scanning, and the mechanical properties were obtained by mechanical testing. The principal components analysis was used to obtain a set of independent components to describe the morphology. This set was entered into linear regression analyses for explaining the variance in mechanical properties. The principal components analysis revealed four components: amount of bone, number of trabeculae, trabecular orientation, and miscellaneous. They accounted for about 90% of the variance in the morphological variables. The component loadings indicated that a higher amount of bone was primarily associated with more plate-like trabeculae, and not with more or thicker trabeculae. The trabecular orientation was most determinative (about 50%) in explaining stiffness, strength, and failure energy. The amount of bone was second most determinative and increased the explained variance to about 72%. These results suggest that trabecular orientation and amount of bone are important in explaining the anisotropic mechanical properties of the cancellous bone of the mandibular condyle.
Applying Statistics in the Undergraduate Chemistry Laboratory: Experiments with Food Dyes.
ERIC Educational Resources Information Center
Thomasson, Kathryn; Lofthus-Merschman, Sheila; Humbert, Michelle; Kulevsky, Norman
1998-01-01
Describes several experiments to teach different aspects of the statistical analysis of data using household substances and a simple analysis technique. Each experiment can be performed in three hours. Students learn about treatment of spurious data, application of a pooled variance, linear least-squares fitting, and simultaneous analysis of dyes…
Regression Analysis of Physician Distribution to Identify Areas of Need: Some Preliminary Findings.
ERIC Educational Resources Information Center
Morgan, Bruce B.; And Others
A regression analysis was conducted of factors that help to explain the variance in physician distribution and which identify those factors that influence the maldistribution of physicians. Models were developed for different geographic areas to determine the most appropriate unit of analysis for the Western Missouri Area Health Education Center…
New Trends in Gender and Mathematics Performance: A Meta-Analysis
Lindberg, Sara M.; Hyde, Janet Shibley; Petersen, Jennifer L.; Linn, Marcia C.
2010-01-01
In this paper, we use meta-analysis to analyze gender differences in recent studies of mathematics performance. First, we meta-analyzed data from 242 studies published between 1990 and 2007, representing the testing of 1,286,350 people. Overall, d = .05, indicating no gender difference, and VR = 1.08, indicating nearly equal male and female variances. Second, we analyzed data from large data sets based on probability sampling of U.S. adolescents over the past 20 years: the NLSY, NELS88, LSAY, and NAEP. Effect sizes for the gender difference ranged between −0.15 and +0.22. Variance ratios ranged from 0.88 to 1.34. Taken together these findings support the view that males and females perform similarly in mathematics. PMID:21038941
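For illustration, a minimal sketch of the two summary statistics this meta-analysis reports per study, Cohen's d (pooled-SD standardized mean difference) and the male:female variance ratio VR, computed for one synthetic study; all data are invented.

```python
# Sketch: effect size d and variance ratio VR for one hypothetical study.
import numpy as np

rng = np.random.default_rng(1)
male, female = rng.normal(0.0, 1.0, 500), rng.normal(0.0, 1.0, 500)

s_pooled = np.sqrt(((len(male) - 1) * male.var(ddof=1) +
                    (len(female) - 1) * female.var(ddof=1)) /
                   (len(male) + len(female) - 2))
d = (male.mean() - female.mean()) / s_pooled      # standardized mean difference
vr = male.var(ddof=1) / female.var(ddof=1)        # male:female variance ratio
print(f"d = {d:+.2f}, VR = {vr:.2f}")
```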
Wygant, Dustin B; Arbisi, Paul A; Bianchini, Kevin J; Umlauf, Robert L
2017-04-01
Waddell et al. identified a set of eight non-organic signs in 1980. There has been controversy about their meaning, particularly with respect to their use as validity indicators. The current study examined the degree to which the Waddell signs were associated with measures of somatic amplification or over-reporting in a sample of 230 outpatient chronic pain patients treated at a multidisciplinary pain clinic. The majority of these patients presented with primary back or spinal injuries. The outcome measures used in the study were the Waddell signs, the Modified Somatic Perception Questionnaire, the Pain Disability Index, and the Minnesota Multiphasic Personality Inventory-2 Restructured Form. We examined the Waddell signs using multivariate analysis of variance (MANOVA) and analysis of variance (ANOVA), receiver operating characteristic analysis, classification accuracy, and relative risk ratios. MANOVA and ANOVA showed a significant association between Waddell signs and somatic amplification, and classification analyses showed increased odds of somatic amplification at a Waddell score of 2 or 3. Elevated scores on the Waddell signs (particularly scores higher than 2 and 3) were associated with increased odds of exhibiting somatic over-reporting. Copyright © 2016 Elsevier Inc. All rights reserved.
Factor Analysis of the Aberrant Behavior Checklist in Individuals with Autism Spectrum Disorders
ERIC Educational Resources Information Center
Brinkley, Jason; Nations, Laura; Abramson, Ruth K.; Hall, Alicia; Wright, Harry H.; Gabriels, Robin; Gilbert, John R.; Pericak-Vance, Margaret A. O.; Cuccaro, Michael L.
2007-01-01
Exploratory factor analysis (varimax and promax rotations) of the aberrant behavior checklist-community version (ABC) in 275 individuals with Autism spectrum disorder (ASD) identified four- and five-factor solutions which accounted for greater than 70% of the variance. Confirmatory factor analysis (Lisrel 8.7) revealed indices of moderate fit for…
McGivney, C L; Gough, K F; McGivney, B A; Farries, G; Hill, E W; Katz, L M
2018-06-23
Conflicting results have been reported for risk factors for recurrent laryngeal neuropathy (RLN) based on resting endoscopic evaluation and comparison of single conformation traits, with many traits correlated to one another. The objective was to simplify identification of signalment and conformation traits (i.e. variables) associated with RLN cases and controls diagnosed with exercising overground endoscopy (OGE), using exploratory factor analysis (EFA). Prospective cohort. Pearson's rank correlation was used to establish significance and association between variables collected from n = 188 Thoroughbreds from one stable by observers blinded to OGE results. Exploratory factor analysis was conducted on 9 variables for cases and controls; common elements between variables developed a factor, with variables grouped into 3 factors for cases and controls, respectively. Correlation (loading) between each variable and factor was calculated to rank relationships between variables and cases/controls, with factors retrospectively named based on their underlying correlations with variables. Numerous inter-correlations were present between variables. Most strongly correlated in cases were wither height with body weight (r = 0.70) and ventral neck length (r = 0.68), and in controls body weight with rostral neck circumference (r = 0.58). Wither height (r = 0.61) significantly loaded the top-ranked factor for cases ('height_RLN'), explaining 25% of conformational variance. Ventral neck length (r = 0.69) and age (r = 0.57) significantly loaded the second-ranked factor for cases ('neck length_RLN'), explaining 16% of conformational variance. Rostral neck circumference (r = 0.86) and body weight (r = 0.60) significantly loaded the top-ranked factor for controls ('body size_CON'), explaining 19% of the variance. Wither height (r = 0.84) significantly loaded the second-ranked factor for controls ('height_CON'), explaining 13% of the variance. Horses had not reached skeletal maturity. Exploratory factor analysis allowed weightings to be determined for each variable. Wither height was the predominant conformational feature associated with RLN. Exploratory factor analysis confirms aggregated conformational differences exist between RLN cases and controls, suitable for future evaluations. This article is protected by copyright. All rights reserved.
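As a rough illustration of the EFA workflow described, the sketch below factors a synthetic matrix of correlated conformation variables and reports per-factor explained variance from the loadings. The sample and variable counts mirror the abstract (188 horses, 9 variables, 3 factors); everything else, including the data-generating weights, is assumed.

```python
# Sketch: exploratory factor analysis on synthetic conformation data.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
n, p, k = 188, 9, 3                        # horses, traits, factors
latent = rng.normal(size=(n, k))
weights = rng.normal(size=(k, p))
X = latent @ weights + rng.normal(scale=0.5, size=(n, p))   # correlated traits
X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize before factoring

fa = FactorAnalysis(n_components=k, rotation="varimax").fit(X)
loadings = fa.components_.T                # trait-by-factor loadings
print((loadings**2).sum(axis=0) / p)       # share of variance per factor
```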
Ghosh, Sudipta; Dosaev, Tasbulat; Prakash, Jai; Livshits, Gregory
2017-04-01
The major aim of this study was to conduct a comparative quantitative-genetic analysis of body composition (BCP) and somatotype (STP) variation, as well as their correlations with blood pressure (BP), in two ethnically, culturally and geographically different populations: the Santhal, an indigenous ethnic group from India, and the Chuvash, an indigenous population from Russia. Correspondingly, two pedigree-based samples were collected from 1,262 Santhal and 1,558 Chuvash individuals, respectively. At the first stage of the study, descriptive statistics and a series of univariate regression analyses were calculated. Finally, multiple and multivariate regression (MMR) analyses, with BP measurements as dependent variables and age, sex, BCP and STP as independent variables, were carried out in each sample separately. The significant and independent covariates of BP were identified and used for re-examination in pedigree-based variance decomposition analysis. Despite clear and significant differences between the populations in BCP/STP, both the Santhal and the Chuvash were found to be predominantly mesomorphic irrespective of sex. According to the MMR analyses, variation of BP significantly depended on age and the mesomorphic component in both samples, and in addition on sex, ectomorphy and fat mass index in the Santhal sample and on fat-free mass index in the Chuvash sample. An additive genetic component contributes to a substantial proportion of blood pressure and body composition variance. The variance component analysis additionally suggests that additive genetic factors significantly influence BP and BCP/STP associations. © 2017 Wiley Periodicals, Inc.
Yourganov, Grigori; Schmah, Tanya; Churchill, Nathan W; Berman, Marc G; Grady, Cheryl L; Strother, Stephen C
2014-08-01
The field of fMRI data analysis is rapidly growing in sophistication, particularly in the domain of multivariate pattern classification. However, the interaction between the properties of the analytical model and the parameters of the BOLD signal (e.g. signal magnitude, temporal variance and functional connectivity) is still an open problem. We addressed this problem by evaluating a set of pattern classification algorithms on simulated and experimental block-design fMRI data. The set of classifiers consisted of linear and quadratic discriminants, linear support vector machine, and linear and nonlinear Gaussian naive Bayes classifiers. For linear discriminant, we used two methods of regularization: principal component analysis, and ridge regularization. The classifiers were used (1) to classify the volumes according to the behavioral task that was performed by the subject, and (2) to construct spatial maps that indicated the relative contribution of each voxel to classification. Our evaluation metrics were: (1) accuracy of out-of-sample classification and (2) reproducibility of spatial maps. In simulated data sets, we performed an additional evaluation of spatial maps with ROC analysis. We varied the magnitude, temporal variance and connectivity of simulated fMRI signal and identified the optimal classifier for each simulated environment. Overall, the best performers were linear and quadratic discriminants (operating on principal components of the data matrix) and, in some rare situations, a nonlinear Gaussian naïve Bayes classifier. The results from the simulated data were supported by within-subject analysis of experimental fMRI data, collected in a study of aging. This is the first study that systematically characterizes interactions between analysis model and signal parameters (such as magnitude, variance and correlation) on the performance of pattern classifiers for fMRI. Copyright © 2014 Elsevier Inc. All rights reserved.
Cyber-T web server: differential analysis of high-throughput data.
Kayala, Matthew A; Baldi, Pierre
2012-07-01
The Bayesian regularization method for high-throughput differential analysis, described in Baldi and Long (A Bayesian framework for the analysis of microarray expression data: regularized t-test and statistical inferences of gene changes. Bioinformatics 2001: 17: 509-519) and implemented in the Cyber-T web server, is one of the most widely validated. Cyber-T implements a t-test using a Bayesian framework to compute a regularized variance of the measurements associated with each probe under each condition. This regularized estimate is derived by flexibly combining the empirical measurements with a prior, or background, derived from pooling measurements associated with probes in the same neighborhood. This approach flexibly addresses problems associated with low replication levels and technology biases, not only for DNA microarrays, but also for other technologies, such as protein arrays, quantitative mass spectrometry and next-generation sequencing (RNA-seq). Here we present an update to the Cyber-T web server, incorporating several useful new additions and improvements. Several preprocessing data normalization options, including logarithmic and variance stabilizing normalization (VSN) transforms, are included. To augment two-sample t-tests, a one-way analysis of variance is implemented. Several methods for multiple tests correction, including standard frequentist methods and a probabilistic mixture model treatment, are available. Diagnostic plots allow visual assessment of the results. The web server provides comprehensive documentation and example data sets. The Cyber-T web server, with R source code and data sets, is publicly available at http://cybert.ics.uci.edu/.
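A minimal sketch of the regularization idea described, shrinking a per-probe sample variance toward a background variance pooled over neighboring probes, in the style of the Baldi-Long weighting. The pseudo-count nu0, the data, and the background value are assumed for illustration; this is not Cyber-T's actual code.

```python
# Sketch: Bayesian-style variance regularization for one probe.
import numpy as np

def regularized_var(x, background_var, nu0=10):
    """Blend a probe's sample variance with a pooled background variance;
    nu0 acts as the prior's pseudo-count weight (assumed value)."""
    n = len(x)
    s2 = np.var(x, ddof=1)
    return (nu0 * background_var + (n - 1) * s2) / (nu0 + n - 2)

expr = np.array([7.1, 7.4, 6.9])   # three replicates for one probe
bg = 0.05                          # variance pooled over neighboring probes
print(regularized_var(expr, bg))   # stabilized estimate despite n = 3
```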
Codifference as a practical tool to measure interdependence
NASA Astrophysics Data System (ADS)
Wyłomańska, Agnieszka; Chechkin, Aleksei; Gajda, Janusz; Sokolov, Igor M.
2015-03-01
Correlation and spectral analysis represent the standard tools to study interdependence in statistical data. However, for stochastic processes with heavy-tailed distributions such that the variance diverges, these tools are inadequate. Heavy-tailed processes are ubiquitous in nature and finance. We here discuss codifference as a convenient measure to study statistical interdependence, and we aim to give a short introductory review of its properties. Taking different known stochastic processes as generic examples, we present explicit formulas for their codifferences. We show that for Gaussian processes codifference is equivalent to covariance. For processes with finite variance these two measures behave similarly with time. For processes with infinite variance the covariance does not exist; the codifference, however, remains well defined. We demonstrate the practical importance of the codifference by extracting this function from simulated as well as real data taken from the turbulent plasma of a fusion device and from a financial market. We conclude that the codifference serves as a convenient practical tool to study interdependence for stochastic processes with both infinite and finite variances.
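A quick numerical check of the stated Gaussian property: estimating codifference from empirical characteristic functions and comparing it with the sample covariance. The unit-argument form ln E[e^{i(X-Y)}] - ln E[e^{iX}] - ln E[e^{-iY}] used below is one common convention and is assumed here; for jointly Gaussian variables it reduces exactly to Cov(X, Y).

```python
# Sketch: empirical codifference vs. covariance in the Gaussian case.
import numpy as np

def codifference(x, y):
    ecf = lambda z: np.mean(np.exp(1j * z))   # empirical characteristic function
    return (np.log(ecf(x - y)) - np.log(ecf(x)) - np.log(ecf(-y))).real

rng = np.random.default_rng(3)
cov = [[1.0, 0.6], [0.6, 1.0]]
x, y = rng.multivariate_normal([0, 0], cov, size=200_000).T
print(codifference(x, y), np.cov(x, y)[0, 1])   # both close to 0.6
```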
Bayesian Structural Equation Modeling: A More Flexible Representation of Substantive Theory
ERIC Educational Resources Information Center
Muthen, Bengt; Asparouhov, Tihomir
2012-01-01
This article proposes a new approach to factor analysis and structural equation modeling using Bayesian analysis. The new approach replaces parameter specifications of exact zeros with approximate zeros based on informative, small-variance priors. It is argued that this produces an analysis that better reflects substantive theories. The proposed…
Shadish, William R; Hedges, Larry V; Pustejovsky, James E
2014-04-01
This article presents a d-statistic for single-case designs (SCDs) that is in the same metric as the d-statistic used in between-subjects designs such as randomized experiments, and offers some reasons why such a statistic would be useful in SCD research. The d has a formal statistical development, is accompanied by appropriate power analyses, and can be estimated using user-friendly SPSS macros. We discuss both advantages and disadvantages of d compared to other approaches such as previous d-statistics, overlap statistics, and multilevel modeling. It requires at least three cases for computation and assumes normally distributed outcomes and stationarity, assumptions that are discussed in some detail. We also show how to test these assumptions. The core of the article then demonstrates in depth how to compute d for one study, including estimation of the autocorrelation and the ratio of between-case variance to total variance (between-case plus within-case variance), how to compute power using a macro, and how to use the d to conduct a meta-analysis of studies using single-case designs in the free program R, including syntax in an appendix. This syntax includes how to read data, compute fixed and random effect average effect sizes, prepare a forest plot and a cumulative meta-analysis, estimate various influence statistics to identify studies contributing to heterogeneity and effect size, and do various kinds of publication bias analyses. This d may prove useful for both the analysis and meta-analysis of data from SCDs. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
Kula, Katherine; Hale, Lindsay N; Ghoneima, Ahmed; Tholpady, Sunil; Starbuck, John M
2016-11-01
To compare maxillary mucosal thickening and sinus volumes of unilateral cleft lip and palate (UCLP) subjects with noncleft (non-CLP) controls. Randomized, retrospective study of cone-beam computed tomographs (CBCT). University. Fifteen UCLP subjects and 15 sex- and age-matched non-CLP controls, aged 8 to 14 years. Following institutional review board approval and reliability tests, Dolphin three-dimensional imaging software was used to segment and slice maxillary sinuses on randomly selected CBCTs. The surface area (SA) of the bony sinus and airspace on all sinus slices was determined using Dolphin and multiplied by slice thickness (0.4 mm) to calculate volume. Mucosal thickening was the difference between bony sinus and airspace volumes. The number of slices with bony sinus and airspace outlines was totaled. Right and left sinus values for each group were pooled (t tests, P > .05; n = 30 each group). All measures were compared (principal components analysis, multivariate analysis of variance, analysis of variance) by group and age (P ≤ .016 was considered significant). Principal components analysis axes 1 and 2 explained 89.6% of sample variance, with complete separation between the groups on axis 1 only. Age groups showed some separation on axis 2. Unilateral cleft lip and palate subjects had significantly smaller bony sinus and airspace volumes, fewer bony and airspace slices, and greater mucosal thickening and percentage mucosal thickening when compared with controls. Older subjects had significantly greater bony sinus and airspace volumes than younger subjects. Children with UCLP have significantly more maxillary sinus mucosal thickening and smaller sinuses than controls.
Ménard, Richard; Deshaies-Jacques, Martin; Gasset, Nicolas
2016-09-01
An objective analysis is one of the main components of data assimilation. By combining observations with the output of a predictive model, we combine the best features of each source of information: the complete spatial and temporal coverage provided by models, with a close representation of the truth provided by observations. The process of combining observations with a model output is called an analysis. To produce an analysis requires knowledge of the observation and model errors, as well as their spatial correlation. This paper is devoted to the development of methods of estimation of these error variances and the characteristic length-scale of the model error correlation for operational use in the Canadian objective analysis system. We first argue in favor of using compact-support correlation functions, and then introduce three estimation methods: the Hollingsworth-Lönnberg (HL) method in local and global form, the maximum likelihood method (ML), and the χ² diagnostic method. We perform one-dimensional (1D) simulation studies where the error variance and true correlation length are known, and perform an estimation of both error variances and correlation length where both are non-uniform. We show that a local version of the HL method can capture accurately the error variances and correlation length at each observation site, provided that spatial variability is not too strong. However, the operational objective analysis requires only a single and globally valid correlation length. We examine whether any statistic of the local HL correlation lengths could be a useful estimate, or whether other global estimation methods, such as the global HL, ML, or χ² methods, should be used. We found, in both 1D simulation and using real data, that the ML method is able to capture physically significant aspects of the correlation length, while most other estimates give unphysical and larger length-scale values. This paper describes a proposed improvement of the objective analysis of surface pollutants at Environment and Climate Change Canada (formerly known as Environment Canada). Objective analyses are essentially surface maps of air pollutants that are obtained by combining observations with an air quality model output, and are thought to provide a complete and more accurate representation of the air quality. The highlight of this study is an analysis of methods to estimate the model (or background) error correlation length-scale. The error statistics are an important and critical component of the analysis scheme.
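A toy sketch of the Hollingsworth-Lönnberg idea: fit a correlation model to innovation covariances binned by station separation, then separate background from observation error variance at zero separation. The exponential model, "true" values, and noise level are synthetic assumptions, not the paper's configuration.

```python
# Sketch: HL-style separation of background and observation error variance.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)
dist = np.linspace(10, 500, 50)                 # km, separation-bin centers
sig_b2, sig_o2, L = 2.0, 1.0, 150.0             # synthetic "true" values
cov = sig_b2 * np.exp(-dist / L) + rng.normal(0, 0.05, dist.size)

model = lambda d, s2, Lc: s2 * np.exp(-d / Lc)  # spatially correlated part
(s2_hat, L_hat), _ = curve_fit(model, dist, cov, p0=(1.0, 100.0))

var_at_zero = sig_b2 + sig_o2                   # total innovation variance (synthetic)
print(f"background var ~ {s2_hat:.2f}, length-scale ~ {L_hat:.0f} km, "
      f"obs var ~ {var_at_zero - s2_hat:.2f}")  # nugget at zero separation
```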
Multivariate analysis of selected metals in tannery effluents and related soil.
Tariq, Saadia R; Shah, Munir H; Shaheen, N; Khalique, A; Manzoor, S; Jaffar, M
2005-06-30
Effluent and relevant soil samples from 38 tanning units housed in Kasur, Pakistan, were obtained for metal analysis by a flame atomic absorption spectrophotometric method. The levels of 12 metals, Na, Ca, K, Mg, Fe, Mn, Cr, Co, Cd, Ni, Pb and Zn, were determined in the two media. The data were evaluated towards metal distribution and metal-to-metal correlations. The study evidenced enhanced levels of Cr (391, 16.7 mg/L) and Na (25,519, 9369 mg/L) in tannery effluents and relevant soil samples, respectively. The effluent versus soil trace metal content relationship confirmed that the effluent Cr was strongly correlated with soil Cr. For metal source identification, the techniques of principal component analysis and cluster analysis were applied. The principal component analysis yielded two factors for effluents: factor 1 (49.6% variance) showed significant loadings for Ca, Fe, Mn, Cr, Cd, Ni, Pb and Zn, referring to a tanning-related source for these metals, and factor 2 (12.6% variance), with higher loadings of Na, K, Mg and Co, was associated with the processes during skin/hide treatment. Similarly, two factors with a cumulative variance of 34.8% were obtained for soil samples: factor 1 manifested the contribution from Mg, Mn, Co, Cd, Ni and Pb, which though soil-based is basically effluent-derived, while factor 2 was found associated with Na, K, Ca, Cr and Zn, which referred to a tannery-based source. The dendrograms obtained from cluster analysis also support the observed results. The study exhibits gross pollution of soils with Cr at levels far exceeding the stipulated safe limit laid down for tannery effluents.
Doping Among Professional Athletes in Iran: A Test of Akers's Social Learning Theory.
Kabiri, Saeed; Cochran, John K; Stewart, Bernadette J; Sharepour, Mahmoud; Rahmati, Mohammad Mahdi; Shadmanfaat, Syede Massomeh
2018-04-01
The use of performance-enhancing drugs (PEDs) is common among Iranian professional athletes. As this phenomenon is a social problem, the main purpose of this research is to explain why athletes engage in "doping" activity, using social learning theory. For this purpose, a sample of 589 professional athletes from Rasht, Iran, was used to test assumptions related to social learning theory. The results showed that there are positive and significant relationships between the components of social learning theory (differential association, differential reinforcement, imitation, and definitions) and doping behavior (past, present, and future use of PEDs). The structural modeling analysis indicated that the components of social learning theory accounted for 36% of the variance in past doping behavior, 35% of the variance in current doping behavior, and 32% of the variance in future use of PEDs.
2012-01-01
Background To investigate whether different conditions of DNA structure and radiation treatment could modify heterogeneity of response, and additionally to study variance as a potential parameter of heterogeneity for radiosensitivity testing. Methods Two hundred leukocytes per sample from healthy donors were split into four groups. I: intact chromatin structure; II: nucleoids of histone-depleted DNA; III: nucleoids of histone-depleted DNA with 90 mM DMSO as antioxidant. Response to single (I-III) and double (IV) irradiation with 4 Gy and repair kinetics were evaluated using %Tail-DNA. Heterogeneity of DNA damage was determined by calculation of the variance of DNA damage (V) and mean variance (Mvar); mutual comparisons were done by one-way analysis of variance (ANOVA). Results Heterogeneity of initial DNA damage (I, 0 min repair) increased without histones (II). The absence of histones was balanced by the addition of antioxidants (III). Repair reduced heterogeneity of all samples (with and without irradiation). However, double irradiation plus repair led to a higher level of heterogeneity, distinguishable from single irradiation and repair in intact cells. Increase of mean DNA damage was associated with a similarly elevated variance of DNA damage (r = +0.88). Conclusions Heterogeneity of DNA damage can be modified by histone level, antioxidant concentration, repair and radiation dose, and was positively correlated with DNA damage. Experimental conditions might be optimized by reducing scatter of comet assay data through repair and antioxidants, potentially allowing better discrimination of small differences. The amount of heterogeneity measured by variance might be an additional useful parameter to characterize radiosensitivity. PMID:22520045
Buma, Brian; Costanza, Jennifer K; Riitters, Kurt
2017-11-21
The scale of investigation for disturbance-influenced processes plays a critical role in theoretical assumptions about stability, variance, and equilibrium, as well as conservation reserve and long-term monitoring program design. Critical consideration of scale is required for robust planning designs, especially when anticipating future disturbances whose exact locations are unknown. This research quantified disturbance proportion and pattern (as contagion) at multiple scales across North America. This pattern of scale-associated variability can guide selection of study and management extents, for example, to minimize variance (measured as standard deviation) between any landscapes within an ecoregion. We identified the proportion and pattern of forest disturbance (30 m grain size) across multiple landscape extents up to 180 km². We explored the variance in proportion of disturbed area and the pattern of that disturbance between landscapes (within an ecoregion) as a function of the landscape extent. In many ecoregions, variance between landscapes within an ecoregion was minimal at broad landscape extents (low standard deviation). Gap-dominated regions showed the least variance, while fire-dominated regions showed the largest. Intensively managed ecoregions displayed unique patterns. A majority of the ecoregions showed low variance between landscapes at some scale, indicating that an appropriate extent for incorporating natural regimes and unknown future disturbances was identified. The quantification of the scales of disturbance at the ecoregion level provides guidance for individuals interested in anticipating future disturbances which will occur in unknown spatial locations. Information on the extents required to incorporate disturbance patterns into planning is crucial for that process.
Validation study of the Questionnaire on School Maladjustment Problems (QSMP).
de la Fuente Arias, Jesús; Peralta Sánchez, Francisco Javier; Sánchez Roda, María Dolores; Trianes Torres, María Victoria
2012-05-01
The aim of this study was to analyze the exploratory and confirmatory structure, as well as other psychometric properties, of the Cuestionario de Problemas de Convivencia Escolar (CPCE; in Spanish, the Questionnaire on School Maladjustment Problems [QSMP]), using a sample of Spanish adolescents. The instrument was administered to 60 secondary education teachers (53.4% females and 46.6% males) between the ages of 28 and 54 years (M = 41.2, SD = 11.5), who evaluated a total of 857 adolescent students. The first-order exploratory factor analysis identified 7 factors, explaining 62% of the total variance. A second-order factor analysis yielded three dimensions that explain 84% of the variance. A confirmatory factor analysis was subsequently performed in order to reduce the number of factors obtained in the exploratory analysis as well as the number of items. Lastly, we present the results of reliability, internal consistency, and validity indices. These results and their implications for future research and for the practice of educational guidance and intervention are discussed in the conclusions.
The Role of Rainfall Patterns in Seasonal Malaria Transmission
NASA Astrophysics Data System (ADS)
Bomblies, A.
2010-12-01
Seasonal total precipitation is well known to affect malaria transmission because Anopheles mosquitoes depend on standing water for breeding habitat. However, the within-season temporal pattern of the rainfall influences persistence of standing water and thus rainfall patterns also affect mosquito population dynamics. In this talk, I show that intraseasonal rainfall pattern describes 40% of the variance in simulated mosquito abundance in a Niger Sahel village where malaria is endemic but highly seasonal, demonstrating the necessity for detailed distributed hydrology modeling to explain the variance from this important effect. I apply a field validated, high spatial- and temporal-resolution hydrology model coupled with an entomology model. Using synthetic rainfall time series generated using a stationary first-order Markov Chain model, I hold all variables except hourly rainfall constant, thus isolating the contribution of rainfall pattern to variance in mosquito abundance. I further show the utility of hydrology modeling to assess precipitation effects by analyzing collected water. Time-integrated surface area of pools explains 70% of the variance in mosquito abundance, and time-integrated surface area of pools persisting longer than seven days explains 82% of the variance, showing an improved predictive ability when pool persistence is explicitly modeled at high spatio-temporal resolution. I extend this analysis to investigate the impacts of this effect on malaria vector mosquito populations under climate shift scenarios, holding all climate variables except precipitation constant. In these scenarios, rainfall mean and variance change with climatic change, and the modeling approach evaluates the impact of non-stationarity in rainfall and the associated rainfall patterns on expected mosquito activity.
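A minimal sketch of the stationary first-order Markov-chain rainfall generator mentioned in the talk, with a wet/dry hourly state and exponential wet-hour depths; the transition probabilities, mean depth, and season length are invented for illustration.

```python
# Sketch: two-state first-order Markov-chain hourly rainfall generator.
import numpy as np

rng = np.random.default_rng(5)
p_wd, p_ww = 0.05, 0.60        # P(wet | dry), P(wet | wet), per hour (assumed)
mean_depth = 2.0               # mm per wet hour, exponential depths (assumed)

hours, state, series = 24 * 120, 0, []
for _ in range(hours):                        # one 120-day season
    p_wet = p_ww if state == 1 else p_wd
    state = int(rng.random() < p_wet)
    series.append(rng.exponential(mean_depth) if state else 0.0)

series = np.array(series)
print(f"seasonal total {series.sum():.0f} mm, "
      f"wet fraction {series.astype(bool).mean():.2f}")
```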
Using structural equation modeling for network meta-analysis.
Tu, Yu-Kang; Wu, Yun-Chun
2017-07-14
Network meta-analysis overcomes the limitations of traditional pair-wise meta-analysis by incorporating all available evidence into a general statistical framework for simultaneous comparisons of several treatments. Currently, network meta-analyses are undertaken either within Bayesian hierarchical linear models or frequentist generalized linear mixed models. Structural equation modeling (SEM) is a statistical method originally developed for modeling causal relations among observed and latent variables. As the random effect is explicitly modeled as a latent variable in SEM, it is very flexible for analysts to specify complex random effect structures and to place linear and nonlinear constraints on parameters. The aim of this article is to show how to undertake a network meta-analysis within the statistical framework of SEM. We used an example dataset to demonstrate that the standard fixed and random effect network meta-analysis models can be easily implemented in SEM. The dataset contains results of 26 studies that directly compared three treatment groups A, B and C for prevention of first bleeding in patients with liver cirrhosis. We also showed that a new approach to network meta-analysis based on the technique of the unrestricted weighted least squares (UWLS) method can be undertaken using SEM. For both the fixed and random effect network meta-analyses, SEM yielded similar coefficients and confidence intervals to those reported in the previous literature. The point estimates of the two UWLS models were identical to those of the fixed effect model, but the confidence intervals were wider. This is consistent with results from the traditional pairwise meta-analyses. Compared with the UWLS model with a common variance adjustment factor, the UWLS model with a unique variance adjustment factor has wider confidence intervals when the heterogeneity is larger in the pairwise comparison; the unique variance adjustment factor thus reflects the difference in heterogeneity within each comparison. SEM provides a very flexible framework for univariate and multivariate meta-analysis, and its potential as a powerful tool for advanced meta-analysis remains to be explored.
Analysis of Levene's Test under Design Imbalance.
ERIC Educational Resources Information Center
Keyes, Tim K.; Levy, Martin S.
1997-01-01
H. Levene (1960) proposed a heuristic test for heteroscedasticity in the case of a balanced two-way layout, based on analysis of variance of absolute residuals. Conditions under which design imbalance affects the test's characteristics are identified, and a simple correction involving leverage is proposed. (SLD)
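For reference, Levene's statistic is exactly a one-way ANOVA F computed on absolute deviations from the group means; the sketch below verifies this equivalence against SciPy's implementation (center='mean' reproduces Levene's original form). The group data are synthetic.

```python
# Sketch: Levene's test as ANOVA on absolute residuals, checked vs. SciPy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
groups = [rng.normal(0, s, 30) for s in (1.0, 1.0, 2.0)]   # third group more spread

abs_resid = [np.abs(g - g.mean()) for g in groups]         # |x - group mean|
f_anova, p_anova = stats.f_oneway(*abs_resid)              # ANOVA on those residuals
w, p_levene = stats.levene(*groups, center='mean')         # SciPy's Levene statistic
print(f"{f_anova:.3f} == {w:.3f}, p = {p_levene:.4f}")     # identical statistics
```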
Analyses of mean and turbulent motion in the tropics with the use of unequally spaced data
NASA Technical Reports Server (NTRS)
Kao, S. K.; Nimmo, E. J.
1979-01-01
Wind velocities from 25 km to 60 km over Ascension Island, Fort Sherman and Kwajalein for the period January 1970 to December 1971 are analyzed in order to achieve a better understanding of the mean flow, the eddy kinetic energy and the Eulerian time spectra of the eddy kinetic energy. Since the data are unequally spaced in time, techniques of one-dimensional covariance theory were utilized and an unequally spaced time series analysis was carried out. The theoretical equations for two-dimensional analysis, or wavenumber-frequency analysis, of unequally spaced data were developed. Analysis of the turbulent winds, including the average seasonal variance and eddy kinetic energy, indicated that maximum total variance and energy are associated with the east-west velocity component. This is particularly true for the long-period seasonal waves which dominate the total energy spectrum. Additionally, there is an energy shift for the east-west component into the longer-period waves as altitude increases from 30 km to 50 km.
Can Twitter be used to predict county excessive alcohol consumption rates?
Ashford, Robert D.; Hemmons, Jessie; Summers, Dan; Hamilton, Casey
2018-01-01
Objectives The current study analyzes a large set of Twitter data from 1,384 US counties to determine whether excessive alcohol consumption rates can be predicted by the words being posted from each county. Methods Data from over 138 million county-level tweets were analyzed using predictive modeling, differential language analysis, and mediating language analysis. Results Twitter language data captures cross-sectional patterns of excessive alcohol consumption beyond that of sociodemographic factors (e.g. age, gender, race, income, education), and can be used to accurately predict rates of excessive alcohol consumption. Additionally, mediation analysis found that Twitter topics (e.g. ‘ready gettin leave’) can explain much of the variance associated between socioeconomics and excessive alcohol consumption. Conclusions Twitter data can be used to predict public health concerns such as excessive drinking. Using mediation analysis in conjunction with predictive modeling allows for a high portion of the variance associated with socioeconomic status to be explained. PMID:29617408
Shape variation in the human pelvis and limb skeleton: Implications for obstetric adaptation.
Kurki, Helen K; Decrausaz, Sarah-Louise
2016-04-01
Under the obstetrical dilemma (OD) hypothesis, selection acts on the human female pelvis to ensure a sufficiently sized obstetric canal for birthing a large-brained, broad-shouldered neonate, while bipedal locomotion selects for a narrower and smaller pelvis. Despite this female-specific stabilizing selection, the variability of linear dimensions of the pelvic canal and of overall size is not reduced in females, suggesting shape may instead be variable among females of a population. Female canal shape has been shown to vary among populations, while male canal shape does not. Within this context, we examine within-population canal shape variation in comparison with that of noncanal aspects of the pelvis and the limbs. Nine skeletal samples (total female n = 101, male n = 117) representing diverse body sizes and shapes were included. Principal components analysis was applied to size-adjusted variables of each skeletal region. A multivariate variance was calculated using the weighted PC scores for all components in each model, and F-ratios were used to assess differences in within-population variances between sexes and skeletal regions. Within both sexes, multivariate canal shape variance is significantly greater than noncanal pelvis and limb variances, while limb variance is greater than noncanal pelvis variance in some populations. Multivariate shape variation is not consistently different between the sexes in any of the skeletal regions. Diverse selective pressures, including obstetrics, locomotion, load carrying, and others, may act on canal shape, as may genetic drift and plasticity, thus increasing variation in morphospace while protecting obstetric sufficiency. © 2015 Wiley Periodicals, Inc.
Determination of the STIS CCD Gain
NASA Astrophysics Data System (ADS)
Riley, Allyssa; Monroe, TalaWanda; Lockwood, Sean
2016-09-01
This report summarizes the analysis and absolute gain results of the STIS Cycle 23 special calibration program 14424 that was designed to measure the gain of amplifiers A, C and D at nominal gain settings of 1 and 4 e-/DN. We used the mean-variance technique and the results indicate a <3.5% change in the gain for amplifier D from when it was originally calculated pre-flight. We compared these values to previous measurements from Cycles 17 through 23. This report outlines the observations, methodology, and results of the mean-variance technique.
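A toy sketch of the mean-variance (photon transfer) technique on synthetic flat fields: for shot-noise-limited data, signal variance in DN grows as mean/g, so the inverse slope of variance against mean estimates the gain in e-/DN. The gain and read-noise values below are invented, not the STIS calibration results.

```python
# Sketch: estimating CCD gain from the variance-vs-mean slope.
import numpy as np

rng = np.random.default_rng(7)
gain, read_noise = 4.0, 5.0                 # synthetic "true" e-/DN and e- rms
means_e = np.linspace(2e3, 5e4, 12)         # mean illumination levels in electrons

mean_dn, var_dn = [], []
for mu in means_e:
    flat = rng.poisson(mu, 100_000) + rng.normal(0, read_noise, 100_000)
    dn = flat / gain                        # digitization: DN = electrons / gain
    mean_dn.append(dn.mean()); var_dn.append(dn.var())

slope, _ = np.polyfit(mean_dn, var_dn, 1)   # var(DN) ~ mean(DN)/g + const
print(f"estimated gain = {1 / slope:.2f} e-/DN")
```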
Small Colour Vision Variations and Their Effect in Visual Colorimetry
Keywords: color vision, human performance, test equipment, correlation techniques, statistical processes, colors, analysis of variance, aging (materials), colorimetry, brightness, anomalies, plastics, United Kingdom.
Coons, Stephen Joel; Wendel, Christopher; Hornbrook, Mark C; Herrinton, Lisa; Grant, Marcia; Krouse, Robert S
2009-01-01
Purpose The purpose of this analysis was to determine the unique contribution of household income to the variance explained in psychological well-being (PWB) among a sample of colorectal cancer (CRC) survivors. Methods This study is a secondary analysis of data collected as part of the Health-Related Quality of Life in Long-Term Colorectal Cancer Survivors Study, which included CRC survivors with (cases) and without (controls) ostomies. The dataset included socio-demographic, health status, and health-related quality of life (HRQOL) information. HRQOL was assessed with the modified City of Hope Quality of Life (mCOH-QOL)-Ostomy questionnaire and SF-36v2. To assess the relationship between income and PWB, a hierarchical linear regression model was constructed combining data from both cases and controls. Results After accounting for the proportion of variance in PWB explained by the other independent variables in the model, the additional variance explained by income was significant (R² increased from 0.228 to 0.250; p = 0.006). Conclusions Although the study design does not allow causal inference, these results demonstrate a significant relationship between income and PWB in CRC survivors. The findings suggest that for non-randomized group comparisons of HRQOL, income should, at the very least, be included as a control variable in the analysis. PMID:19132550
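A sketch of the hierarchical-regression step reported, the R² increment when income enters after the other covariates, on synthetic data with hypothetical variable names; the F-test for the increment comes from comparing nested OLS fits.

```python
# Sketch: R-squared increment for income in a hierarchical regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 400
covars = rng.normal(size=(n, 3))             # e.g. age, health status, ostomy (assumed)
income = rng.normal(size=n)
pwb = covars @ [0.3, -0.2, 0.1] + 0.15 * income + rng.normal(size=n)

base = sm.OLS(pwb, sm.add_constant(covars)).fit()
full = sm.OLS(pwb, sm.add_constant(np.column_stack([covars, income]))).fit()
print(f"R2 {base.rsquared:.3f} -> {full.rsquared:.3f}")
print(full.compare_f_test(base))             # (F, p, df) for the increment
```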
77 FR 3121 - Program Integrity: Gainful Employment-Debt Measures; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-23
On June 13, 2011, the Secretary of Education (Secretary) published a notice of final regulations in the Federal Register for Program Integrity: Gainful Employment--Debt Measures (Gainful Employment--Debt Measures) (76 FR 34386). In the preamble of the final regulations, we used the wrong data to calculate the percent of total variance in institutions' repayment rates that may be explained by race/ethnicity. Our intent was to use the data that included all minority students per institution. However, we mistakenly used the data for a subset of minority students per institution. We have now recalculated the total variance using the data that includes all minority students. Through this document, we correct, in the preamble of the Gainful Employment--Debt Measures final regulations, the errors resulting from this misapplication. We do not change the regression analysis model itself; we are using the same model with the appropriate data. Through this notice we also correct, in the preamble of the Gainful Employment--Debt Measures final regulations, our description of one component of the regression analysis. The preamble referred to use of an institutional variable measuring acceptance rates. This description was incorrect; in fact we used an institutional variable measuring retention rates. Correcting this language does not change the regression analysis model itself or the variance explained by the model. The text of the final regulations remains unchanged.
Irreducible Uncertainty in Terrestrial Carbon Projections
NASA Astrophysics Data System (ADS)
Lovenduski, N. S.; Bonan, G. B.
2016-12-01
We quantify and isolate the sources of uncertainty in projections of carbon accumulation by the ocean and terrestrial biosphere over 2006-2100 using output from Earth System Models participating in the 5th Coupled Model Intercomparison Project. We consider three independent sources of uncertainty in our analysis of variance: (1) internal variability, driven by random, internal variations in the climate system, (2) emission scenario, driven by uncertainty in future radiative forcing, and (3) model structure, wherein different models produce different projections given the same emission scenario. Whereas uncertainty in projections of ocean carbon accumulation by 2100 is 100 Pg C and driven primarily by emission scenario, uncertainty in projections of terrestrial carbon accumulation by 2100 is 50% larger than that of the ocean, and driven primarily by model structure. This structural uncertainty is correlated with emission scenario: the variance associated with model structure is an order of magnitude larger under a business-as-usual scenario (RCP8.5) than a mitigation scenario (RCP2.6). In an effort to reduce this structural uncertainty, we apply various model weighting schemes to our analysis of variance in terrestrial carbon accumulation projections. The largest reductions in uncertainty are achieved when giving all the weight to a single model; here the uncertainty is of a similar magnitude to the ocean projections. Such an analysis suggests that this structural uncertainty is irreducible given current terrestrial model development efforts.
Khandoker, Ahsan H; Karmakar, Chandan K; Begg, Rezaul K; Palaniswami, Marimuthu
2007-01-01
As humans age or are influenced by pathology of the neuromuscular system, gait patterns are known to adjust, accommodating for reduced function in the balance control system. The aim of this study was to investigate the effectiveness of a wavelet-based multiscale analysis of a gait variable [minimum toe clearance (MTC)] in deriving indexes for understanding age-related declines in gait performance and screening of balance impairments in the elderly. MTC during walking on a treadmill was analyzed for 30 healthy young, 27 healthy elderly, and 10 falls-risk elderly subjects with a history of tripping falls. The MTC signal from each subject was decomposed into eight detail signals at different wavelet scales by using the discrete wavelet transform. The variances of the detail signals at scales 8 to 1 were calculated. The multiscale exponent (beta) was then estimated from the slope of the variance progression at successive scales. The variance at scale 5 was significantly (p < 0.01) different between the young and healthy elderly groups. Results also suggest that the beta between scales 1 and 2 is effective for recognizing falls-risk gait patterns. These results have implications for quantifying gait dynamics in normal, ageing and pathological conditions. Early detection of gait pattern changes due to ageing and balance impairments using wavelet-based multiscale analysis might provide the opportunity for preemptive measures to be undertaken to avoid injurious falls.
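A rough sketch of the multiscale variance analysis described: decompose a series with the discrete wavelet transform, compute the variance of the detail coefficients at each scale, and take the slope of the variance progression as the multiscale exponent. The wavelet family, decomposition level, and the simulated signal are assumed choices, not the study's MTC data.

```python
# Sketch: wavelet multiscale variance and its slope (multiscale exponent).
import numpy as np
import pywt

rng = np.random.default_rng(9)
mtc = np.cumsum(rng.normal(size=4096)) * 0.01 + rng.normal(0, 0.1, 4096)

coeffs = pywt.wavedec(mtc, 'db4', level=8)           # [cA8, cD8, ..., cD1]
detail_var = [np.var(c) for c in coeffs[1:]][::-1]   # variances at scales 1..8

scales = np.arange(1, 9)
beta, _ = np.polyfit(scales, np.log2(detail_var), 1)  # slope across scales
print(f"variance-progression slope beta = {beta:.2f}")
```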
Does the Assessment of Recovery Capital scale reflect a single or multiple domains?
Arndt, Stephan; Sahker, Ethan; Hedden, Suzy
2017-01-01
The goal of this study was to determine whether the 50-item Assessment of Recovery Capital scale represents a single general measure or whether multiple domains might be psychometrically useful for research or clinical applications. Data are from a cross-sectional de-identified existing program evaluation information data set with 1,138 clients entering substance use disorder treatment. Principal components and iterated factor analysis were used on the domain scores. Multiple group factor analysis provided a quasi-confirmatory factor analysis. The solution accounted for 75.24% of the total variance, suggesting that 10 factors provide a reasonably good fit. However, Tucker's congruence coefficients between the factor structure and defining weights (0.41-0.52) suggested a poor fit to the hypothesized 10-domain structure. Principal components of the 10-domain scores yielded one factor whose eigenvalue was greater than one (5.93), accounting for 75.8% of the common variance. A few domains had perceptible but small unique variance components suggesting that a few of the domains may warrant enrichment. Our findings suggest that there is one general factor, with a caveat. Using the 10 measures inflates the chance for Type I errors. Using one general measure avoids this issue, is simple to interpret, and could reduce the number of items. However, those seeking to maximally predict later recovery success may need to use the full instrument and all 10 domains.
Kerner, Matthew S; Kurrant, Anthony B
2003-12-01
This study was designed to test the efficacy of the theory of planned behavior in predicting intention to engage in leisure-time physical activity and leisure-time physical activity behavior of high school girls. Rating scales were used to assess attitude to leisure-time physical activity, subjective norm, perceived control, and intention to engage in leisure-time physical activity among 129 ninth through twelfth graders. Leisure-time physical activity was obtained from 3-wk. diaries. The first hierarchical multiple regression indicated that perceived control added (R2 change = .033) to the contributions of attitude to leisure-time physical activity and subjective norm in accounting for 50.7% of the total variance of intention to engage in leisure-time physical activity. The second regression analysis indicated that almost 10% of the variance of leisure-time physical activity was explained by intention to engage in leisure-time physical activity and perceived control, with perceived control contributing 6.4%. From both academic and theoretical standpoints, our findings support the theory of planned behavior, although quantitatively the variance of leisure-time physical activity was not well accounted for. In addition, given the small increase in variance explained when perceived control is added as a predictor of intention to engage in leisure-time physical activity, the pragmatism of implementing the measure of perceived control is questionable for this population.
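A hierarchical regression with an R2-change term of this kind can be set up in a few lines; the sketch below uses statsmodels on simulated data (variable names, sample size, and effect sizes are placeholders, not the paper's data):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data mimicking the design: attitude, subjective norm and
# perceived control predicting intention.
rng = np.random.default_rng(3)
n = 129
attitude, norm, control = rng.normal(size=(3, n))
intention = 0.6 * attitude + 0.3 * norm + 0.2 * control + rng.normal(size=n)

# Step 1: attitude + subjective norm; Step 2: add perceived control.
X1 = sm.add_constant(np.column_stack([attitude, norm]))
X2 = sm.add_constant(np.column_stack([attitude, norm, control]))
r2_1 = sm.OLS(intention, X1).fit().rsquared
r2_2 = sm.OLS(intention, X2).fit().rsquared
print(f"R2 step 1 = {r2_1:.3f}, R2 step 2 = {r2_2:.3f}, R2 change = {r2_2 - r2_1:.3f}")
```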
A flexible count data regression model for risk analysis.
Guikema, Seth D; Coffelt, Jeremy P
2008-02-01
In many cases, risk and reliability analyses involve estimating the probabilities of discrete events such as hardware failures and occurrences of disease or death. There is often additional information in the form of explanatory variables that can be used to help estimate the likelihood of different numbers of events in the future through the use of an appropriate regression model, such as a generalized linear model. However, existing generalized linear models (GLMs) are limited in their ability to handle the types of variance structures often encountered in using count data in risk and reliability analysis. In particular, standard models cannot handle both underdispersed data (variance less than the mean) and overdispersed data (variance greater than the mean) in a single coherent modeling framework. This article presents a new GLM based on a reformulation of the Conway-Maxwell Poisson (COM) distribution that is useful for both underdispersed and overdispersed count data and demonstrates this model by applying it to the assessment of electric power system reliability. The results show that the proposed COM GLM can provide fits to data as good as those of the commonly used existing models for overdispersed data sets while outperforming these models for underdispersed data sets.
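The Conway-Maxwell-Poisson distribution underlying the proposed GLM can be explored numerically; this sketch evaluates its pmf by direct normalization and shows how the dispersion parameter nu moves the variance-to-mean ratio above and below 1 (the truncation point and parameter values are our choices, not from the paper):

```python
import numpy as np
from scipy.special import gammaln

def com_poisson_pmf(y, lam, nu, ymax=200):
    """Conway-Maxwell-Poisson pmf via direct normalization (a simple sketch)."""
    j = np.arange(ymax + 1)
    logw = j * np.log(lam) - nu * gammaln(j + 1)   # log of lam^j / (j!)^nu
    logZ = np.logaddexp.reduce(logw)                # log normalizing constant
    return np.exp(y * np.log(lam) - nu * gammaln(y + 1) - logZ)

# nu < 1 gives overdispersion, nu > 1 underdispersion, nu = 1 recovers Poisson.
y = np.arange(201)
for nu in (0.5, 1.0, 2.0):
    p = com_poisson_pmf(y, lam=4.0, nu=nu)
    mean = (y * p).sum()
    var = ((y - mean) ** 2 * p).sum()
    print(f"nu={nu}: mean={mean:.2f}, variance={var:.2f}, var/mean={var/mean:.2f}")
```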
Saunders, Christina T; Blume, Jeffrey D
2017-10-26
Mediation analysis explores the degree to which an exposure's effect on an outcome is diverted through a mediating variable. We describe a classical regression framework for conducting mediation analyses in which estimates of causal mediation effects and their variance are obtained from the fit of a single regression model. The vector of changes in exposure pathway coefficients, which we named the essential mediation components (EMCs), is used to estimate standard causal mediation effects. Because these effects are often simple functions of the EMCs, an analytical expression for their model-based variance follows directly. Given this formula, it is instructive to revisit the performance of routinely used variance approximations (e.g., delta method and resampling methods). Requiring the fit of only one model reduces the computation time required for complex mediation analyses and permits the use of a rich suite of regression tools that are not easily implemented on a system of three equations, as would be required in the Baron-Kenny framework. Using data from the BRAIN-ICU study, we provide examples to illustrate the advantages of this framework and compare it with the existing approaches. © The Author 2017. Published by Oxford University Press.
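For contrast with such model-based variances, the routinely used first-order delta-method (Sobel) approximation for an indirect effect a*b looks like this on toy data. This is the baseline approximation the abstract mentions revisiting, not the EMC estimator itself; all data and effect sizes are hypothetical:

```python
import numpy as np
import statsmodels.api as sm

# Toy mediation data: x -> m -> y with a direct x -> y path.
rng = np.random.default_rng(4)
n = 500
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)             # mediator model: a = 0.5
y = 0.4 * m + 0.2 * x + rng.normal(size=n)   # outcome model:  b = 0.4

fit_m = sm.OLS(m, sm.add_constant(x)).fit()
fit_y = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit()
a, var_a = fit_m.params[1], fit_m.bse[1] ** 2
b, var_b = fit_y.params[2], fit_y.bse[2] ** 2

# First-order delta method (Sobel) variance of the indirect effect a*b.
indirect = a * b
var_delta = b**2 * var_a + a**2 * var_b
print(f"indirect = {indirect:.3f}, SE(delta) = {np.sqrt(var_delta):.3f}")
```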
Fragomeni, Breno de Oliveira; Misztal, Ignacy; Lourenco, Daniela Lino; Aguilar, Ignacio; Okimoto, Ronald; Muir, William M
2014-01-01
The purpose of this study was to determine whether the set of genomic regions inferred as accounting for the majority of genetic variation in quantitative traits remains stable over multiple generations of selection. The data set contained phenotypes for five generations of broiler chickens for body weight, breast meat, and leg score. The population consisted of 294,632 animals over five generations and also included genotypes for 41,036 single nucleotide polymorphisms (SNPs) for 4,866 animals, after quality control. The SNP effects were calculated by a GWAS-type analysis using the single-step genomic BLUP approach for generations 1-3, 2-4, 3-5, and 1-5. Variances were calculated for windows of 20 SNPs. The top ten windows for each trait that explained the largest fraction of the genetic variance across generations were examined. Across generations, the top 10 windows explained more than 0.5% but less than 1% of the total variance. Moreover, the pattern of the windows was not consistent across generations. The windows that explained the greatest variance changed greatly among the combinations of generations, with a few exceptions. In many cases, a window identified as a top window for one combination explained less than 0.1% for the other combinations. We conclude that identification of top SNP windows for a population may have little predictive power for genetic selection in the following generations for the traits evaluated here.
Miyashita, Naohiko; Laurie-Ahlberg, C. C.
1984-01-01
By combining ten second and ten third chromosomes, we investigated chromosomal interaction with respect to the action of the modifier factors on G6PD and 6PGD activities in Drosophila melanogaster. Analysis of variance revealed that highly significant chromosomal interaction exists for both enzyme activities. From the estimated variance components, it was concluded that the variation in enzyme activity attributed to the interaction is as great as the variation attributed to the second chromosome but less than that attributed to the third chromosome. The interaction is not explained by variation in body size (live weight). The interaction arises both from the lack of correlation of second chromosomes across third chromosome backgrounds and from the heterogeneous variance of second chromosomes for different third chromosome backgrounds. Large and constant correlations between G6PD and 6PGD activities were found for third chromosomes with any second chromosome background, whereas the correlations for second chromosomes were much smaller and varied considerably with the third chromosome background. This result suggests that the activity modifiers on the second chromosome are under the influence of third chromosome factors. PMID:6425115
Point focusing using loudspeaker arrays from the perspective of optimal beamforming.
Bai, Mingsian R; Hsieh, Yu-Hao
2015-06-01
Sound focusing aims to create a concentrated acoustic field in the region surrounded by a loudspeaker array. This problem has been tackled in previous research via the Helmholtz integral approach, brightness control, acoustic contrast control, etc. In this paper, the same problem is revisited from the perspective of beamforming. A source array model is reformulated in terms of the steering matrix between the source and the field points, which lends itself to the use of beamforming algorithms such as minimum variance distortionless response (MVDR) and linearly constrained minimum variance (LCMV), originally intended for sensor arrays. The beamforming methods are compared with the conventional methods in terms of beam pattern, directional index, and control effort. Objective tests are conducted to assess the audio quality by using perceptual evaluation of audio quality (PEAQ). Experiments on the produced sound field and listening tests are conducted in a listening room, with results processed using analysis of variance and regression analysis. In contrast to the conventional energy-based methods, the results show that the proposed methods are phase-sensitive in light of the distortionless constraint in formulating the array filters, which helps enhance audio quality and focusing performance.
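The distortionless constraint reduces to a few lines of linear algebra; this sketch computes MVDR weights for a hypothetical free-field line array (the steering model and covariance matrix are invented for illustration, not taken from the paper):

```python
import numpy as np

def mvdr_weights(R, d):
    """MVDR: minimize w^H R w subject to w^H d = 1 (distortionless look direction)."""
    Rinv_d = np.linalg.solve(R, d)
    return Rinv_d / (d.conj() @ Rinv_d)

# Toy uniform line array of 8 sources, half-wavelength spacing; steering
# vector toward a focal angle (free-field plane-wave model, for illustration).
n, theta = 8, np.deg2rad(20)
d = np.exp(1j * np.pi * np.arange(n) * np.sin(theta))
R = np.eye(n) + 0.1 * np.ones((n, n))   # stand-in spatial correlation matrix
w = mvdr_weights(R, d)
print("distortionless response |w^H d| =", abs(w.conj() @ d))  # should be 1
```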
The Transport of Density Fluctuations Throughout the Heliosphere
NASA Technical Reports Server (NTRS)
Zank, G. P.; Jetha, N.; Hu, Q.; Hunana, P.
2012-01-01
The solar wind is recognized as a turbulent magnetofluid, for which the properties of the turbulent velocity and magnetic field fluctuations are often described by the equations of incompressible magnetohydrodynamics (MHD). However, low-frequency density turbulence is also ubiquitous. On the basis of a nearly incompressible formulation of MHD in the expanding inhomogeneous solar wind, we derive the transport equation for the variance of the density fluctuations, ⟨ρ²⟩. The transport equation shows that density fluctuations behave as a passive scalar in the supersonic solar wind. In the absence of sources of density turbulence, such as within 1 AU, the variance scales as ⟨ρ²⟩ ∝ r^(-4). In the outer heliosphere beyond 1 AU, the shear between fast and slow streams, the propagation of shocks, and the creation of interstellar pickup ions all act as sources of density turbulence. The model density fluctuation variance evolves with heliocentric distance within approximately 300 AU as ⟨ρ²⟩ ∝ r^(-3.3), after which it flattens and then slowly increases. This is precisely the radial profile for the density fluctuation variance observed by Voyager 2. Using a different analysis technique, we confirm the radial profile for ⟨ρ²⟩ of Bellamy, Cairns, & Smith using Voyager 2 data. We conclude that a passive scalar description for density fluctuations in the supersonic solar wind can explain the density fluctuation variance observed in both the inner and the outer heliosphere.
Akiyama, Tsuyoshi; Tsuda, Hitoshi; Matsumoto, Satoko; Miyake, Yuko; Kawamura, Yoshiya; Noda, Toshie; Akiskal, Kareen K; Akiskal, Hagop S
2005-03-01
In Japan, Kraepelin's descriptions of the four "fundamental states" of manic depressive illness, Kretschmer's concept of the schizoid temperament, and the obsessional and melancholic type temperament of Shimoda and Tellenbach have been widely accepted. This research investigates the construct validity of these temperaments through factor analysis. TEMPS-A measured the depressive, cyclothymic, hyperthymic and irritable temperaments, and the MPT rigidity, esoteric and isolation subscales measured, respectively, the melancholic type and schizoid temperaments. Factor analysis was implemented on TEMPS-A alone and on the TEMPS-A and MPT combined data. In the TEMPS-A alone analysis, Factor 1 included 1 depressive, 11 cyclothymic and 12 irritable temperament items with a factor loading higher than 0.4; Factor 2 included 1 depressive and 10 hyperthymic temperament items; and Factor 3 included 2 depressive temperament items only. With the TEMPS-A and MPT combined data, Factor 1 included 3 depressive, 11 cyclothymic and 5 irritable temperament items with a factor loading higher than 0.4 (interpreted as the central cyclothymic tendency for all affective temperaments along Kretschmerian lines, accounting for 11.7% of the variance); Factor 2 included 6 hyperthymic temperament items (6.22% of the variance); Factor 3 included 1 cyclothymic, 7 irritable and 1 schizoid temperament items (interpreted as the irritable temperament, accounting for 3.24% of the variance); Factor 4 included 1 depressive temperament and 5 melancholic type items (interpreted as the latter, accounting for 2.66% of the variance); Factor 5 included 5 depressive temperament items, along interpersonal sensitivity and passivity lines, accounting for 2.31% of the variance; and Factor 6 included 4 schizoid temperament items, accounting for 2.07% of the variance. We did not use the Kasahara scale, which some believe better captures the Japanese melancholic type. The sample was 70% male. These analyses confirm the factor validity of the depressive, hyperthymic, cyclothymic and irritable temperaments (TEMPS-A), as well as the melancholic type and the schizoid temperament (MPT). Traits of the depressive and melancholic types emerge as rather distinct. Indeed, our results permit the delineation of an interpersonally sensitive type that "gives in to others" as the core feature of the depressive temperament; this is to be contrasted with the higher functioning, perfectionistic, work-oriented melancholic type. Mood dysregulation is represented by the largest number of traits in this population. Contrary to a widely held belief that the melancholic type, with its devotion to work and to others, is the signature temperament in Japan, cyclothymic traits account for the largest variance in this nonclinical population. The hyperthymic temperament, melancholic type and schizoid temperaments appear largely independent of mood dysregulation. In this Japanese population, TEMPS-A may identify temperament constructs more comprehensively when implemented with melancholic type and schizoid temperament items added to it. The proposed new Japanese Temperament and Personality (JTP) Scale has self-rated items divided into six subscales.
Dehouck, P; Vander Heyden, Y; Smeyers-Verbeke, J; Massart, D L; Marini, R D; Chiap, P; Hubert, Ph; Crommen, J; Van de Wauw, W; De Beer, J; Cox, R; Mathieu, G; Reepmeyer, J C; Voigt, B; Estevenon, O; Nicolas, A; Van Schepdael, A; Adams, E; Hoogmartens, J
2003-08-22
Erythromycin is a mixture of macrolide antibiotics produced by Saccharopolyspora erythraea during fermentation. A new method for the analysis of erythromycin by liquid chromatography has previously been developed. It makes use of an Astec C18 polymeric column. After validation in one laboratory, the method was now validated in an interlaboratory study. Validation studies are commonly used to test the fitness of an analytical method prior to its use for routine quality testing. The data derived in the interlaboratory study can be used to make an uncertainty statement as well, but the relationship between validation and uncertainty statements is not clear to many analysts, and there is a need to show how the existing data, derived during validation, can be used in practice. Eight laboratories participated in this interlaboratory study. The set-up allowed the determination of the repeatability variance, s_r^2, and the between-laboratory variance, s_L^2. Combining s_r^2 and s_L^2 gives the reproducibility variance, s_R^2. It is shown how these data can be used in the future by a single laboratory that wants to make an uncertainty statement concerning the same analysis.
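With a balanced design, the repeatability, between-laboratory, and reproducibility variances follow from a one-way random-effects ANOVA; a sketch on invented interlaboratory data (lab count matches the abstract, replicate count and magnitudes are assumptions):

```python
import numpy as np

# Hypothetical interlaboratory results: 8 labs, 3 replicate assays each
# (balanced design; values are illustrative, not the study's data).
rng = np.random.default_rng(5)
labs, reps = 8, 3
lab_bias = rng.normal(0, 1.5, size=(labs, 1))           # between-lab effect
x = 100 + lab_bias + rng.normal(0, 1.0, size=(labs, reps))

# One-way random-effects ANOVA estimates of the variance components.
ms_within = x.var(axis=1, ddof=1).mean()                # within-lab (repeatability) MS
ms_between = reps * x.mean(axis=1).var(ddof=1)          # between-lab MS
s2_r = ms_within                                        # repeatability variance
s2_L = max((ms_between - ms_within) / reps, 0.0)        # between-lab variance
s2_R = s2_r + s2_L                                      # reproducibility variance
print(f"s2_r={s2_r:.2f}, s2_L={s2_L:.2f}, s2_R={s2_R:.2f}")
```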
Ketefian, S
1981-01-01
The focus of this descriptive study was the relationship between critical thinking, educational preparation, and level of moral judgment in 79 practicing nurses. The Watson-Glaser Critical Thinking Appraisal Test was used to measure critical thinking; information on the participating nurses' educational preparation was obtained from a personal information sheet. Moral judgment was measured by Rest's Defining Issues Test. The hypothesis that critical thinking would be positively related to moral judgment was tested by Pearson product moment correlation; the obtained coefficient of .5326 was significant at the .001 level. The hypothesis that there would be a difference between professional and technical nurses' moral judgments was tested through a one-way analysis of variance. The F ratio (F [1,77] = 9.6) was significant beyond the .01 level. Data also supported the hypothesis that critical thinking and educational preparation would predict greater variance in moral judgment than either variable alone, which was tested through multiple regression analysis (F [2,75] = 18.3, p = .01). Critical thinking and education together accounted for 32.9 percent of the variance in moral judgment. Implications of the findings are discussed for nursing research, practice, and education.
Curtis, David; Knight, Jo; Sham, Pak C
2005-09-01
Although LOD score methods have been applied to diseases with complex modes of inheritance, linkage analysis of quantitative traits has tended to rely on non-parametric methods based on regression or variance components analysis. Here, we describe a new method for LOD score analysis of quantitative traits which does not require specification of a mode of inheritance. The technique is derived from the MFLINK method for dichotomous traits. A range of plausible transmission models is constructed, constrained to yield the correct population mean and variance for the trait but differing with respect to the contribution to the variance due to the locus under consideration. Maximized LOD scores under homogeneity and admixture are calculated, as is a model-free LOD score which compares the maximized likelihoods under admixture assuming linkage and no linkage. These LOD scores have known asymptotic distributions and hence can be used to provide a statistical test for linkage. The method has been implemented in a program called QMFLINK. It was applied to data sets simulated using a variety of transmission models and to a measure of monoamine oxidase activity in 105 pedigrees from the Collaborative Study on the Genetics of Alcoholism. With the simulated data, the results showed that the new method could detect linkage well if the true allele frequency for the trait was close to that specified. However, it performed poorly on models in which the true allele frequency was much rarer. For the Collaborative Study on the Genetics of Alcoholism data set only a modest overlap was observed between the results obtained from the new method and those obtained when the same data were analysed previously using regression and variance components analysis. Of interest is that D17S250 produced a maximized LOD score under homogeneity and admixture of 2.6 but did not indicate linkage using the previous methods. However, this region did produce evidence for linkage in a separate data set, suggesting that QMFLINK may have been able to detect a true linkage which was not picked up by the other methods. The application of model-free LOD score analysis to quantitative traits is novel and deserves further evaluation of its merits and disadvantages relative to other methods.
NASA Astrophysics Data System (ADS)
Liu, WenXiang; Mou, WeiHua; Wang, FeiXue
2012-03-01
With the introduction of triple-frequency signals in GNSS, multi-frequency ionosphere correction technology has been developing rapidly. References indicate that the triple-frequency second-order ionosphere correction is worse than the dual-frequency first-order ionosphere correction because of its larger noise amplification factor. On the assumption that the variances of the three frequency pseudoranges are equal, other references presented the triple-frequency first-order ionosphere correction, which proved worse or better than the dual-frequency first-order correction in different situations. In practice, the PN code rate, carrier-to-noise ratio, DLL parameters, and multipath effect of each frequency are not the same, so the three pseudorange variances are unequal. Under this consideration, a new unequal-weighted triple-frequency first-order ionosphere correction algorithm, which minimizes the variance of the pseudorange ionosphere-free combination, is proposed in this paper. It is found that conventional dual-frequency first-order correction algorithms and the equal-weighted triple-frequency first-order correction algorithm are special cases of the new algorithm. A new pseudorange variance estimation method based on the three-carrier combination is also introduced. Theoretical analysis shows that the new algorithm is optimal. An experiment with COMPASS G3 satellite observations demonstrates that the ionosphere-free pseudorange combination variance of the new algorithm is smaller than that of traditional multi-frequency correction algorithms.
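A minimum-variance combination under linear constraints of this kind can be solved as a small KKT system. The sketch below uses GPS L1/L2/L5 frequencies and invented pseudorange variances as placeholders (the paper's COMPASS values and exact weighting differ; the constraint set here is the standard first-order ionosphere-free formulation, stated as our assumption):

```python
import numpy as np

# GPS-like frequencies (Hz) and assumed unequal pseudorange variances (m^2).
f = np.array([1575.42e6, 1227.60e6, 1176.45e6])
var = np.array([0.09, 0.16, 0.25])

# Minimize sum(a_i^2 var_i) subject to sum(a_i) = 1 (keep geometry) and
# sum(a_i / f_i^2) = 0 (cancel first-order ionosphere). KKT system:
# [2*diag(var)  C^T] [a     ]   [0]
# [C            0  ] [lambda] = [b]
C = np.vstack([np.ones(3), 1.0 / f**2])
b = np.array([1.0, 0.0])
K = np.block([[2 * np.diag(var), C.T], [C, np.zeros((2, 2))]])
a = np.linalg.solve(K, np.concatenate([np.zeros(3), b]))[:3]

print("weights:", np.round(a, 3), " combined variance:", (a**2 * var).sum())
```

Setting one weight to zero and re-solving recovers the dual-frequency special case mentioned in the abstract.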
Understanding Variability To Reduce the Energy and GHG Footprints of U.S. Ethylene Production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, Yuan; Graziano, Diane J.; Riddle, Matthew
2015-11-18
Recent growth in U.S. ethylene production due to the shale gas boom is affecting the U.S. chemical industry's energy and greenhouse gas (GHG) emissions footprints. To evaluate these effects, a systematic, first-principles model of the cradle-to-gate ethylene production system was developed and applied. The variances associated with estimating the energy consumption and GHG emission intensities of U.S. ethylene production, both from conventional natural gas and from shale gas, are explicitly analyzed. A sensitivity analysis illustrates that the large variances in energy intensity are due to process parameters (e.g., compressor efficiency), and that large variances in GHG emissions intensity are due to fugitive emissions from upstream natural gas production. On the basis of these results, the opportunities with the greatest leverage for reducing the energy and GHG footprints are presented. The model and analysis provide energy analysts and policy makers with a better understanding of the drivers of energy use and GHG emissions associated with U.S. ethylene production. They also constitute a rich data resource that can be used to evaluate options for managing the industry's footprints moving forward.
Measuring social impacts of breast carcinoma treatment in Chinese women.
Fielding, Richard; Lam, Wendy W T
2004-06-15
There is no existing instrument that is suitable for measuring the social impact of breast carcinoma (BC) and its treatment among women of Southern Chinese descent. In the current study, the authors assessed the validity of the Chinese Social Adjustment Scale, which was designed to address the need for such an instrument. Five dimensions of social concern were identified in a previous study of Cantonese-speaking Chinese women with BC; these dimensions included family and other relationships, intimacy, private self-image, and public self-image. The authors designed 40 items to address perceptions of change in these areas. These items were administered to a group of 226 women who had received treatment for BC, and factor analysis subsequently was performed to determine construct characteristics. The resulting draft instrument then was administered, along with other measures for the assessment of basic psychometric properties, to a second group of 367 women who recently had undergone surgery for BC. Factor analysis optimally identified 5 factors (corresponding to 33 items): 1) Relationships with Family (10 items, accounting for 22% of variance); 2) Self-Image (7 items, accounting for 15% of variance); 3) Relationships with Friends (7 items, accounting for 8% of variance); 4) Social Enjoyment (4 items, accounting for 6% of variance); and 5) Attractiveness and Sexuality (5 items, accounting for 5% of variance). Subscales were reliable (alpha = 0.63-0.93) and exhibited convergent validity in positive correlations with related measures and divergent validity in appropriate inverse or nonsignificant correlations with other measures. Criterion validity was good, and sensitivity was acceptable. Patterns of change on the scales were consistent with reports in the literature. Self-administration resulted in improved sensitivity. The 33-item Chinese Social Adjustment Scale validly, reliably, and sensitively measures the social impact of BC on Cantonese-speaking Hong Kong Chinese women. Further development of the scale to increase its sensitivity is underway. Copyright 2004 American Cancer Society.
NASA Astrophysics Data System (ADS)
Laube, G.; Schmidt, C.; Fleckenstein, J. H.
2014-12-01
The hyporheic zone (HZ) contributes significantly to whole-stream biogeochemical cycling. Biogeochemical reactions within the HZ are often transport limited; thus, understanding these reactions requires knowledge of the magnitude of hyporheic fluxes (HF) and the residence time (RT) of these fluxes within the HZ. While the hydraulics of HF are relatively well understood, studies addressing the influence of permeability heterogeneity lack systematic analysis and have even produced contradictory results (e.g., [1] vs. [2]). In order to close this gap, this study uses a statistical numerical approach to elucidate the influence of permeability heterogeneity on HF and RT. We simulated and evaluated 3,750 2-D scenarios of sediment heterogeneity by means of Gaussian random fields, with a focus on total HF and the RT distribution. The scenarios were based on ten realizations of every combination of 15 correlation lengths, 5 dipping angles, and 5 permeability variances. Roughly 500 hyporheic stream traces were analyzed per simulation, for a total of almost two million stream traces analyzed for correlations between permeability heterogeneity, HF, and RT. Total HF and the RT variance correlated positively with permeability variance, while the mean RT correlated negatively with permeability variance. In contrast, changes in correlation lengths and dipping angles had little effect on RT and HF. These results provide a possible explanation for the seemingly contradictory conclusions of recent studies, given that the permeability variances in those studies differ by several orders of magnitude. [1] Bardini, L., Boano, F., Cardenas, M.B, Sawyer, A.H, Revelli, R. and Ridolfi, L. "Small-Scale Permeability Heterogeneity Has Negligible Effects on Nutrient Cycling in Streambeds." Geophysical Research Letters, 2013. doi:10.1002/grl.50224. [2] Zhou, Y., Ritzi, R. W., Soltanian, M. R. and Dominic, D. F. "The Influence of Streambed Heterogeneity on Hyporheic Flow in Gravelly Rivers." Groundwater, 2013. doi:10.1111/gwat.12048.
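One common way to realize such correlated log-permeability fields is to impose a correlation length on white noise; the sketch below does this with a Gaussian smoothing kernel. It is an isotropic simplification under invented parameters; the study's fields also vary dipping angle, which this toy omits:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# One realization of a log-permeability field: a correlated Gaussian random
# field made by smoothing white noise and rescaling to a target variance.
rng = np.random.default_rng(6)
nz, nx = 64, 256
corr_len, sigma2 = 8.0, 1.0                  # cells; target log-k variance

field = gaussian_filter(rng.normal(size=(nz, nx)), sigma=corr_len)
field *= np.sqrt(sigma2) / field.std()       # rescale to the target variance
log_k = np.log(1e-11) + field                # hypothetical mean permeability
print("log-k variance:", round(field.var(), 3),
      " k range:", np.exp(log_k).min(), np.exp(log_k).max())
```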
Williams, Larry J; O'Boyle, Ernest H
2015-09-01
A persistent concern in the management and applied psychology literature is the effect of common method variance on observed relations among variables. Recent work (i.e., Richardson, Simmering, & Sturman, 2009) evaluated 3 analytical approaches to controlling for common method variance, including the confirmatory factor analysis (CFA) marker technique. Their findings indicated significant problems with this technique, especially with nonideal marker variables (those with theoretical relations with substantive variables). Based on their simulation results, Richardson et al. concluded that not correcting for method variance provides more accurate estimates than using the CFA marker technique. We reexamined the effects of using marker variables in a simulation study and found the degree of error in estimates of a substantive factor correlation was relatively small in most cases, and much smaller than the error associated with making no correction. Further, in instances in which the error was large, the correlations between the marker and substantive scales were higher than those found in organizational research with marker variables. We conclude that in most practical settings, the CFA marker technique yields parameter estimates close to their true values, and the criticisms made by Richardson et al. are overstated. (c) 2015 APA, all rights reserved.
ERIC Educational Resources Information Center
Sinacore, James M.; And Others
1992-01-01
It is argued that there is a benefit to applying techniques of exploratory data analysis (EDA) to program evaluation. The evaluation of a rehabilitation program for people with rheumatoid arthritis (20 subjects and 21 comparisons) through EDA supports the argument, indicating outcomes more precisely than conventional analysis of variance. (SLD)
Maple-Brown, Louise J; Cunningham, Joan; Nandi, Nirjhar; Hodge, Allison; O'Dea, Kerin
2010-10-29
Epidemiological evidence suggests that fibrinogen and CRP are associated with coronary heart disease risk. High CRP in Indigenous Australians has been reported in previous studies including our 'Diabetes and Related diseases in Urban Indigenous population in Darwin region' (DRUID) Study. We studied levels of fibrinogen and its cross-sectional relationship with traditional and non-traditional cardiovascular risk factors in an urban Indigenous Australian cohort. Fibrinogen data were available from 287 males and 628 females (aged ≥ 15 years) from the DRUID study. Analysis was performed for associations with the following risk factors: diabetes, HbA1c, age, BMI, waist circumference, waist-hip ratio, total cholesterol, triglyceride, HDL cholesterol, C-reactive protein, homocysteine, blood pressure, heart rate, urine ACR, smoking status, alcohol abstinence. Fibrinogen generally increased with age in both genders; levels by age group were higher than those previously reported in other populations, including Native Americans. Fibrinogen was higher in those with than without diabetes (4.24 vs 3.56 g/L, p < 0.001). After adjusting for age and sex, the following were significantly associated with fibrinogen: BMI, waist, waist-hip ratio, systolic blood pressure, heart rate, fasting triglycerides, HDL cholesterol, HbA1c, CRP, ACR and alcohol abstinence. On multivariate regression (age and sex-adjusted) CRP and HbA1c were significant independent predictors of fibrinogen, explaining 27% of its variance; CRP alone explained 25% of fibrinogen variance. On factor analysis, both CRP and fibrinogen clustered with obesity in women (this factor explained 20% of variance); but in men, CRP clustered with obesity (factor explained 18% of variance) whilst fibrinogen clustered with HbA1c and urine ACR (factor explained 13% of variance). Fibrinogen is associated with traditional and non-traditional cardiovascular risk factors in this urban Indigenous cohort and may be a useful biomarker of CVD in this high-risk population. The apparent different associations of fibrinogen with cardiovascular disease risk markers in men and women should be explored further.
Information Literacy and Office Tool Competencies: A Benchmark Study
ERIC Educational Resources Information Center
Heinrichs, John H.; Lim, Jeen-Su
2010-01-01
Present information science literature recognizes the importance of information technology to achieve information literacy. The authors report the results of a benchmarking student survey regarding perceived functional skills and competencies in word-processing and presentation tools. They used analysis of variance and regression analysis to…
ERIC Educational Resources Information Center
Dixon, Paul W.; Ahern, Elsie H.
1973-01-01
EPPS scores from 167 high school seniors (Study 1, S1), 137 introductory psychology students (S2), and students from an innovative college program (S3) were compared using analysis of variance, image analysis, and factor pattern comparison. (Editor)
DREEM on: validation of the Dundee Ready Education Environment Measure in Pakistan.
Khan, Junaid Sarfraz; Tabasum, Saima; Yousafzai, Usman Khalil; Fatima, Mehreen
2011-09-01
To validate the DREEM in the medical education environment of Punjab, Pakistan. The DREEM questionnaire was collected anonymously from final-year Bachelor of Medicine, Bachelor of Surgery students in the private and public medical colleges affiliated with the University of Health Sciences, Lahore. Data were analyzed using principal component analysis with varimax rotation. The response rate was 84.14%. The average DREEM score was 125. Confirmatory and exploratory factor analyses were applied under the conditions of eigenvalues > 1 and loadings ≥ 0.3. In the confirmatory factor analysis, five components were extracted, accounting for 40.10% of the variance; in the exploratory factor analysis, ten components were extracted, accounting for 52.33% of the variance. The 50 items had an internal consistency reliability (Cronbach's alpha) of 0.91. The Spearman-Brown coefficient was 0.868, supporting the reliability of the analysis. In both analyses the subscales produced were sensible, but the mismatch with the original was largely due to contextual and cultural differences between the English and Pakistani settings. DREEM is a generic instrument that will do well with regional modifications to suit individual, contextual and cultural settings.
Hierarchical multivariate covariance analysis of metabolic connectivity
Carbonell, Felix; Charil, Arnaud; Zijdenbos, Alex P; Evans, Alan C; Bedell, Barry J
2014-01-01
Conventional brain connectivity analysis is typically based on the assessment of interregional correlations. Given that correlation coefficients are derived from both covariance and variance, group differences in covariance may be obscured by differences in the variance terms. To facilitate a comprehensive assessment of connectivity, we propose a unified statistical framework that interrogates the individual terms of the correlation coefficient. We have evaluated the utility of this method for metabolic connectivity analysis using [18F]2-fluoro-2-deoxyglucose (FDG) positron emission tomography (PET) data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) study. As an illustrative example of the utility of this approach, we examined metabolic connectivity in angular gyrus and precuneus seed regions of mild cognitive impairment (MCI) subjects with low and high β-amyloid burdens. This new multivariate method allowed us to identify alterations in the metabolic connectome that would not have been detected using classic seed-based correlation analysis. Ultimately, this novel approach should be extensible to brain network analysis and broadly applicable to other imaging modalities, such as functional magnetic resonance imaging (fMRI). PMID:25294129
ERIC Educational Resources Information Center
Barton, Mitch; Yeatts, Paul E.; Henson, Robin K.; Martin, Scott B.
2016-01-01
There has been a recent call to improve data reporting in kinesiology journals, including the appropriate use of univariate and multivariate analysis techniques. For example, a multivariate analysis of variance (MANOVA) with univariate post hocs and a Bonferroni correction is frequently used to investigate group differences on multiple dependent…
ERIC Educational Resources Information Center
Williams, Lynne J.; Abdi, Herve; French, Rebecca; Orange, Joseph B.
2010-01-01
Purpose: In communication disorders research, clinical groups are frequently described based on patterns of performance, but researchers often study only a few participants described by many quantitative and qualitative variables. These data are difficult to handle with standard inferential tools (e.g., analysis of variance or factor analysis)…
Analysis of Variance in Statistical Image Processing
NASA Astrophysics Data System (ADS)
Kurz, Ludwik; Hafed Benteftifa, M.
1997-04-01
A key problem in practical image processing is the detection of specific features in a noisy image. Analysis of variance (ANOVA) techniques can be very effective in such situations, and this book gives a detailed account of the use of ANOVA in statistical image processing. The book begins by describing the statistical representation of images in the various ANOVA models. The authors present a number of computationally efficient algorithms and techniques to deal with such problems as line, edge, and object detection, as well as image restoration and enhancement. By describing the basic principles of these techniques, and showing their use in specific situations, the book will facilitate the design of new algorithms for particular applications. It will be of great interest to graduate students and engineers in the field of image processing and pattern recognition.
Analysis of middle bearing failure in rotor jet engine using tip-timing and tip-clearance techniques
NASA Astrophysics Data System (ADS)
Rzadkowski, R.; Rokicki, E.; Piechowski, L.; Szczepanik, R.
2016-08-01
The reported problem is the failure of the middle bearing in an aircraft rotor engine. Tip-timing, tip-clearance, and variance analyses are carried out on a compressor rotor blade in the seventh stage above the middle bearing. The experimental analyses concern both an aircraft engine with a middle bearing in good working order and an engine with a damaged middle bearing. A numerical analysis of seventh-stage blade free vibration is conducted to explain the experimental results. This appears to be an effective method of predicting middle bearing failure. The results show that the variance first increases in the initial stages of bearing failure, then starts to decrease and stabilize, and then decreases again shortly before complete bearing failure.
The Counseling Competencies Scale: Validation and Refinement
ERIC Educational Resources Information Center
Lambie, Glenn W.; Mullen, Patrick R.; Swank, Jacqueline M.; Blount, Ashley
2018-01-01
Supervisors evaluated counselors-in-training at multiple points during their practicum experience using the Counseling Competencies Scale (CCS; N = 1,070). The CCS evaluations were randomly split to conduct exploratory factor analysis and confirmatory factor analysis, resulting in a 2-factor model (61.5% of the variance explained).
Analysis of 20 magnetic clouds at 1 AU during a solar minimum
NASA Astrophysics Data System (ADS)
Gulisano, A. M.; Dasso, S.; Mandrini, C. H.; Démoulin, P.
We study 20 magnetic clouds observed in situ by the Wind spacecraft at the Lagrangian point L1 from 22 August 1995 to 7 November 1997. In previous works, assuming a cylindrical symmetry for the local magnetic configuration and a satellite trajectory crossing the axis of the cloud, we obtained their orientations using a minimum variance analysis. In this work we compute the orientations and magnetic configurations using a non-linear simultaneous fit of the geometric and physical parameters for a linear force-free model, allowing for a non-null impact parameter. We quantify global magnitudes, such as the relative magnetic helicity per unit length, and compare the values found with the two methods (minimum variance and the simultaneous fit). Full text in Spanish.
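Minimum variance analysis itself reduces to an eigen-decomposition of the field covariance matrix; a self-contained sketch on a synthetic flux-rope-like signal (the test signal and noise level are our choices):

```python
import numpy as np

def minimum_variance_analysis(B):
    """Classic MVA: eigen-decompose the covariance matrix of an (N, 3) array
    of magnetic field measurements. Returns eigenvalues (ascending) and
    eigenvectors (columns); the minimum-variance direction is the first column."""
    M = np.cov(B, rowvar=False)           # 3x3 variance/covariance matrix
    eigvals, eigvecs = np.linalg.eigh(M)  # eigh sorts eigenvalues ascending
    return eigvals, eigvecs

# Toy flux-rope-like signal: field rotation in the (x, y) plane plus noise,
# with a small, nearly constant z component (the minimum-variance direction).
t = np.linspace(-1, 1, 400)
rng = np.random.default_rng(7)
B = np.column_stack([np.cos(np.pi * t / 2), np.sin(np.pi * t / 2),
                     0.1 * np.ones_like(t)]) + 0.02 * rng.normal(size=(400, 3))
lam, vecs = minimum_variance_analysis(B)
print("variances:", np.round(lam, 4), " min-variance direction:", np.round(vecs[:, 0], 3))
```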
Forward light scatter analysis of the eye in a spatially-resolved double-pass optical system.
Nam, Jayoung; Thibos, Larry N; Bradley, Arthur; Himebaugh, Nikole; Liu, Haixia
2011-04-11
An optical analysis is developed to separate forward light scatter of the human eye from the conventional wavefront aberrations in a double-pass optical system. To quantify the separate contributions made by these micro- and macro-aberrations, respectively, to the spot image blur in the Shack-Hartmann aberrometer, we develop a metric called radial variance for spot blur. We prove an additivity property for radial variance that allows us to distinguish between spot blur from macro-aberrations and from micro-aberrations. When the method is applied to tear break-up in the human eye, we find that micro-aberrations in the second pass account for about 87% of the double-pass image blur in the Shack-Hartmann wavefront aberrometer under our experimental conditions. © 2011 Optical Society of America
Comparative test on several forms of background error covariance in 3DVar
NASA Astrophysics Data System (ADS)
Shao, Aimei
2013-04-01
The background error covariance matrix (hereinafter referred to as the B matrix) plays an important role in the three-dimensional variational (3DVar) data assimilation method. However, it is difficult to estimate the B matrix accurately because the true atmospheric state is unknown. Therefore, several methods have been developed to estimate it (e.g., the NMC method, the innovation analysis method, recursive filters, and ensemble methods such as the EnKF). Prior to further development and application of these methods, it is worth studying and evaluating how the B matrices they produce behave in 3DVar. For this reason, NCEP reanalysis and forecast data are used to test the effectiveness of several B matrices with the VAF method (Huang, 1999). Here the NCEP analysis is treated as the truth, so the forecast error is known. Data from 2006 to 2007 are used as samples to estimate the B matrix, and data from 2008 are used to verify the assimilation effects. The 48-h and 24-h forecasts valid at the same time are used to estimate the B matrix with the NMC method. The B matrix can be represented by a correlation part (a non-diagonal matrix) and a variance part (a diagonal matrix of variances). In many 3DVar systems, a Gaussian filter function is used as an approximation to represent the decay of correlation coefficients with distance. On this basis, the following forms of the B matrix are designed and tested with VAF in comparative experiments: (1) the error variance and characteristic lengths are fixed, set to their mean values averaged over the analysis domain; (2) as in (1), but the mean characteristic lengths are reduced to 50% (height) and 60% (temperature) of the original; (3) as in (2), but the error variance, calculated directly from the historical data, is space-dependent; (4) the error variance and characteristic lengths are both calculated directly from the historical data; (5) the B matrix is estimated directly from the historical data; (6) as in (5), but with a localization step; (7) the B matrix is estimated by the NMC method, but the error variance is reduced by a factor of 1.7 so that its value is close to that calculated from the true forecast error samples; (8) as in (7), but with the localization of (6). Experimental results with the different B matrices show that, for a Gaussian-type B matrix, characteristic lengths calculated from the true error samples do not yield good analyses, whereas reduced characteristic lengths (about half of the original) do. If the B matrix estimated directly from the historical data is used in 3DVar, the assimilation effect is not the best attainable. Better assimilation results are obtained by applying both the reduced characteristic lengths and localization; even so, this has no obvious advantage over a Gaussian-type B matrix with optimal characteristic lengths. This implies that the Gaussian-type B matrix widely used in operational 3DVar systems can produce a good analysis with appropriate characteristic lengths; the crucial problem is how to determine them. (This work is supported by the National Natural Science Foundation of China (41275102, 40875063) and the Fundamental Research Funds for the Central Universities (lzujbky-2010-9).)
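For concreteness, a Gaussian-type B matrix of the kind compared above can be assembled from a variance part and a correlation part; a one-dimensional sketch (grid size, spacing, standard deviations, and length scale are all illustrative, not values from the experiments):

```python
import numpy as np

# B = S C S with S = diag(error std devs) and Gaussian correlations.
n, dx = 100, 50.0                   # grid points, spacing (km); both assumed
L = 300.0                           # characteristic length (km), tunable
sigma = np.full(n, 1.5)             # background error std dev (could vary in space)

dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) * dx
C = np.exp(-dist**2 / (2 * L**2))   # Gaussian correlation function
B = np.diag(sigma) @ C @ np.diag(sigma)

# Halving L, as in experiment (2), tightens each observation's region of
# influence; a localization step would instead taper long-range covariances.
print("B[0, 0] =", B[0, 0], " correlation at 3 grid steps:", round(C[0, 3], 3))
```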
Crow, James F
2008-12-01
Although molecular methods, such as QTL mapping, have revealed a number of loci with large effects, it is still likely that the bulk of quantitative variability is due to multiple factors, each with small effect. Typically, these have a large additive component. Conventional wisdom argues that selection, natural or artificial, uses up additive variance and thus depletes its supply. Over time, the variance should be reduced, and at equilibrium be near zero. This is especially expected for fitness and traits highly correlated with it. Yet, populations typically have a great deal of additive variance, and do not seem to run out of genetic variability even after many generations of directional selection. Long-term selection experiments show that populations continue to retain seemingly undiminished additive variance despite large changes in the mean value. I propose that there are several reasons for this. (i) The environment is continually changing, so that what was formerly most fit no longer is. (ii) There is an input of genetic variance from mutation, and sometimes from migration. (iii) As intermediate-frequency alleles increase in frequency towards one, producing less variance (as p → 1, p(1 − p) → 0), others that were originally near zero become more common and increase the variance. Thus, a roughly constant variance is maintained. (iv) There is always selection for fitness and for characters closely related to it. To the extent that the trait is heritable, later generations inherit a disproportionate number of genes acting additively on the trait, thus increasing genetic variance. For these reasons a selected population retains its ability to evolve. Of course, genes with large effect are also important. Conspicuous examples are the small number of loci that changed teosinte to maize, and major phylogenetic changes in the animal kingdom. The relative importance of these, along with duplications, chromosome rearrangements, horizontal transmission and polyploidy, is yet to be determined. It is likely that only a case-by-case analysis will provide the answers. Despite the difficulties that complex interactions cause for evolution in Mendelian populations, such populations nevertheless evolve very well. Long-lasting species must have evolved mechanisms for coping with such problems. Since such difficulties do not arise in asexual populations, a comparison of epistatic patterns in closely related sexual and asexual species might provide some important insights.
Propagation of uncertainty by Monte Carlo simulations in case of basic geodetic computations
NASA Astrophysics Data System (ADS)
Wyszkowska, Patrycja
2017-12-01
The determination of the accuracy of functions of measured or adjusted values may be a problem in geodetic computations. The general law of covariance propagation or, for uncorrelated observations, the propagation of variance (the Gaussian formula) is commonly used for that purpose. That approach is theoretically justified for linear functions. For non-linear functions, a first-order Taylor series expansion is usually used, but that solution is affected by the expansion (truncation) error. The aim of the study is to determine the applicability of the general variance propagation law to the non-linear functions used in basic geodetic computations. The paper presents the errors that result from neglecting the higher-order terms and determines the range over which this simplification is valid. The basis of the analysis is a comparison of the results obtained by the law of propagation of variance and a probabilistic approach, namely Monte Carlo simulation. Both methods are used to determine the accuracy of the following geodetic computations: the Cartesian coordinates of an unknown point in the three-point resection problem, azimuths and distances derived from Cartesian coordinates, and height differences in trigonometric and geometric levelling. These simulations and the analysis of the results confirm that the general law of variance propagation can be applied in basic geodetic computations even if the functions are non-linear. The only condition concerns the accuracy of the observations, which cannot be too low.
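The comparison at the heart of the paper is easy to reproduce for a single non-linear function; the sketch below propagates coordinate uncertainty into an azimuth both ways (coordinates and standard deviations are invented for illustration):

```python
import numpy as np

# Azimuth from two points -- a non-linear function of measured coordinates.
def azimuth(x1, y1, x2, y2):
    return np.arctan2(y2 - y1, x2 - x1)

x1, y1, x2, y2 = 0.0, 0.0, 80.0, 60.0      # hypothetical coordinates (m)
s = 0.01                                   # coordinate std dev (m), all equal

# First-order (Gaussian) propagation: var = g^T diag(s^2) g with g the gradient.
dx, dy = x2 - x1, y2 - y1
d2 = dx**2 + dy**2
g = np.array([dy / d2, -dx / d2, -dy / d2, dx / d2])  # d(az)/d(x1, y1, x2, y2)
var_linear = (g**2).sum() * s**2

# Monte Carlo propagation of the same uncertainty.
rng = np.random.default_rng(8)
sim = azimuth(*(np.array([x1, y1, x2, y2])[:, None]
                + s * rng.normal(size=(4, 200000))))
print(f"linear: {np.sqrt(var_linear):.2e} rad, Monte Carlo: {sim.std():.2e} rad")
```

With observation noise this small relative to the geometry, the two estimates agree closely; inflating s exposes the truncation error the paper quantifies.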
Zheng, Yiqi; Xu, Shaojun; Liu, Jing; Zhao, Yan; Liu, Jianxiu
2017-01-01
Bermudagrass [Cynodon dactylon (L.) Pers.], an important turfgrass used in public parks, home lawns, golf courses and sports fields, is widely distributed in China. In the present study, sequence-related amplified polymorphism (SRAP) markers were used to assess genetic diversity and population structure among 157 indigenous bermudagrass genotypes from 20 provinces in China. The application of 26 SRAP primer pairs produced 340 bands, of which 328 (96.58%) were polymorphic. The polymorphic information content (PIC) ranged from 0.36 to 0.49 with a mean of 0.44. Genetic distance coefficients among accessions ranged from 0.04 to 0.61, with an average of 0.32. The results of STRUCTURE analysis suggested that 157 bermudagrass accessions can be grouped into three subpopulations. Moreover, according to clustering based on the unweighted pair-group method of arithmetic averages (UPGMA), accessions were divided into three major clusters. The UPGMA dendrogram revealed that accessions from identical or adjacent areas were generally, but not entirely, clustered into the same cluster. Comparison of the UPGMA dendrogram and the Bayesian STRUCTURE analysis showed general agreement between the population subdivisions and the genetic relationships among accessions. Principal coordinate analysis (PCoA) with SRAP markers revealed a similar grouping of accessions to the UPGMA dendrogram and STRUCTUE analysis. Analysis of molecular variance (AMOVA) indicated that 18% of total molecular variance was attributed to diversity among subpopulations, while 82% of variance was associated with differences within subpopulations. Our study represents the most comprehensive investigation of the genetic diversity and population structure of bermudagrass in China to date, and provides valuable information for the germplasm collection, genetic improvement, and systematic utilization of bermudagrass.
Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks
Arampatzis, Georgios; Katsoulakis, Markos A.; Pantazis, Yannis
2015-01-01
Existing sensitivity analysis approaches cannot efficiently handle stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis of such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis network with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and to accurately estimate the sensitivities of the remaining, potentially sensitive, parameters. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in “sloppy” systems. In particular, the computational acceleration is quantified by the ratio of the total number of parameters to the number of sensitive parameters. PMID:26161544
Aristizabal, F.; Glavinovic, M. I.
2003-01-01
Tracking spectral changes of rapidly varying signals is a demanding task. In this study, we explore, using Monte Carlo-simulated glutamate-activated AMPA patch and synaptic currents, whether a wavelet analysis offers such a possibility. Unlike Fourier methods, which determine only the frequency content of a signal, wavelet analysis determines both the frequency and the time. This is owing to the nature of the basis functions, which are infinite for Fourier transforms (sines and cosines are infinite) but finite for wavelet analysis (wavelets are localized waves). In agreement with previous reports, the frequency of the stationary patch current fluctuations is higher for larger currents, whereas the mean-variance plots are parabolic. The spectra of the current fluctuations and mean-variance plots are close to the theoretically predicted values. The median frequency of the synaptic and nonstationary patch currents is, however, time dependent, though at the peak of synaptic currents the median frequency is insensitive to the number of glutamate molecules released. Such time dependence demonstrates that the “composite spectra” of the current fluctuations gathered over the whole duration of synaptic currents cannot be used to assess the mean open time or effective mean open time of AMPA channels. The current (patch or synaptic) versus median frequency plots show hysteresis. The median frequency is thus not a simple reflection of the overall receptor saturation levels and is greater during the rise phase for the same saturation level. The hysteresis is due to the higher occupancy of the doubly bound state during the rise phase and not to the spatial spread of the saturation disk, which remains remarkably constant. Albeit time dependent, the variance of the synaptic and nonstationary patch currents can be accurately determined. Nevertheless, the evaluation of the number of AMPA channels and their single-channel current from the mean-variance plots of patch or synaptic currents is not highly accurate, owing to the varying number of activatable AMPA channels caused by desensitization. The spatial nonuniformity of open, bound, and desensitized AMPA channels, and the time dependence and spatial nonuniformity of the glutamate concentration in the synaptic cleft, further reduce the accuracy of estimates of the number of AMPA channels from synaptic currents. In conclusion, wavelet analysis of nonstationary fluctuations of patch and synaptic currents expands our ability to determine accurately the variance and frequency of current fluctuations, demonstrates the limits of applicability of techniques currently used to evaluate the single-channel current and number of AMPA channels, and offers new insights into the mechanisms involved in the generation of unitary quantal events at excitatory central synapses. PMID:14507683
O'Brien, B J; Sculpher, M J
2000-05-01
Current principles of cost-effectiveness analysis emphasize the rank ordering of programs by expected economic return (eg, quality-adjusted life-years gained per dollar expended). This criterion ignores the variance associated with the cost-effectiveness of a program, yet variance is a common measure of risk when financial investment options are appraised. Variation in health care program return is likely to be a criterion of program selection for health care managers with fixed budgets and outcome performance targets. Characterizing health care resource allocation as a risky investment problem, we show how concepts of portfolio analysis from financial economics can be adopted as a conceptual framework for presenting cost-effectiveness data from multiple programs as mean-variance data. Two specific propositions emerge: (1) the current convention of ranking programs by expected return is a special case of the portfolio selection problem in which the decision maker is assumed to be indifferent to risk, and (2) for risk-averse decision makers, the degree of joint risk or covariation in cost-effectiveness between programs will create incentives to diversify an investment portfolio. The conventional normative assumption of risk neutrality for social-level public investment decisions does not apply to a large number of health care resource allocation decisions in which health care managers seek to maximize returns subject to budget constraints and performance targets. Portfolio theory offers a useful framework for studying mean-variance tradeoffs in cost-effectiveness and offers some positive predictions (and explanations) of actual decision making in the health care sector.
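The mean-variance program-selection problem described here has the same closed form as the classic Markowitz minimum-variance portfolio at a target expected return. A hedged sketch follows; the expected returns, covariance matrix, and target are hypothetical numbers chosen for illustration, not figures from the paper.

```python
# Minimal sketch of the portfolio framing: programs as "assets" whose
# returns are cost-effectiveness outcomes (e.g., QALYs gained per dollar).
import numpy as np

mu = np.array([0.012, 0.020, 0.015])        # expected QALYs per $ (hypothetical)
sigma = np.array([[4.0, 1.0, 0.5],
                  [1.0, 9.0, 1.5],
                  [0.5, 1.5, 6.0]]) * 1e-6  # covariance of returns (hypothetical)

def min_variance_weights(mu, sigma, target):
    """Budget shares minimizing variance for a target expected return
    (classic two-constraint Markowitz solution)."""
    inv = np.linalg.inv(sigma)
    ones = np.ones_like(mu)
    A = ones @ inv @ ones
    B = ones @ inv @ mu
    C = mu @ inv @ mu
    D = A * C - B**2
    lam = (A * target - B) / D
    gam = (C - B * target) / D
    return lam * inv @ mu + gam * inv @ ones

w = min_variance_weights(mu, sigma, target=0.016)
print("budget shares:", w.round(3), "portfolio sd:", np.sqrt(w @ sigma @ w))
```

Sweeping the target over a grid traces the efficient frontier that a risk-averse decision maker would consult instead of a single expected-return ranking.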
Analysis of messy data with heteroscedasticity in mean models
NASA Astrophysics Data System (ADS)
Trianasari, Nurvita; Sumarni, Cucu
2016-02-01
In data analysis we often face data that violate standard assumptions; such data are commonly called messy data. The problem arises because outliers bias the estimates or inflate the estimation error. Three approaches are available for analyzing messy data: standard analysis, data transformation, and non-standard methods. Simulations were conducted to compare the performance of three procedures for testing mean differences when the variances are not homogeneous: the Welch test, mixed models, and the Welch test applied to ranked data (Welch-r). For each scenario, 500 data sets were generated, and the analyses were carried out in R version 3.1.2. The simulation results show that all three methods can be used for both normal and non-normal data when the variances are homogeneous. For balanced data, the three methods also perform very well under violations of the homogeneity-of-variance assumption, even when the degree of heterogeneity is high: the power of the tests stays above 90 percent, with the Welch method (98.4%) and the Welch-r method (97.8%) performing best. For unbalanced data, the Welch method performs very well when moderate heterogeneity is positively paired with group size (98.2% power), the mixed-models method performs very well when high heterogeneity is negatively paired, and the Welch-r method performs very well in both cases. When the heterogeneity of variance is very high, however, the power of all methods declines, most markedly for mixed models. The methods that still perform reasonably well (power above 50%) for balanced data are Welch-r (62.6%) and Welch (58.6%). For unbalanced data, Welch-r performs reasonably well under very high heterogeneity with either positive or negative pairing, with powers of 68.8% and 51%, respectively; Welch performs reasonably well only with positive pairing (64.8% power), whereas mixed models perform well only with negative pairing (54.6% power). In general, when variances are not homogeneous, the Welch method applied to ranked data (Welch-r) performs better than the other methods.
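The simulation design above is straightforward to reproduce in miniature with SciPy: the Welch test is ttest_ind(..., equal_var=False), and the Welch-r test is interpreted here as the same test applied to pooled ranks. Sample sizes, effect size, and variances below are placeholders rather than the paper's scenarios (a "positive pairing" would make the larger group also the more variable one).

```python
# Hedged sketch: power of Welch vs. rank-transformed Welch under heteroscedasticity.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_rep(n1=20, n2=40, d=1.0, sd1=1.0, sd2=3.0):
    a = rng.normal(0.0, sd1, n1)
    b = rng.normal(d, sd2, n2)
    p_welch = stats.ttest_ind(a, b, equal_var=False).pvalue
    r = stats.rankdata(np.concatenate([a, b]))       # pooled ranks
    p_welch_r = stats.ttest_ind(r[:n1], r[n1:], equal_var=False).pvalue
    return p_welch < 0.05, p_welch_r < 0.05

reps = np.array([one_rep() for _ in range(500)])     # 500 replicates, as in the paper
print("power Welch:", reps[:, 0].mean(), "power Welch-r:", reps[:, 1].mean())
```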
NASA Astrophysics Data System (ADS)
Sholtes, Joel; Werbylo, Kevin; Bledsoe, Brian
2014-10-01
Theoretical approaches to magnitude-frequency analysis (MFA) of sediment transport in channels couple continuous flow probability density functions (PDFs) with power law flow-sediment transport relations (rating curves) to produce closed-form equations relating MFA metrics such as the effective discharge, Qeff, and fraction of sediment transported by discharges greater than Qeff, f+, to statistical moments of the flow PDF and rating curve parameters. These approaches have proven useful in understanding the theoretical drivers behind the magnitude and frequency of sediment transport. However, some of their basic assumptions and findings may not apply to natural rivers and streams with more complex flow-sediment transport relationships or management and design scenarios, which have finite time horizons. We use simple numerical experiments to test the validity of theoretical MFA approaches in predicting the magnitude and frequency of sediment transport. Median values of Qeff and f+ generated from repeated, synthetic, finite flow series diverge from those produced with theoretical approaches using the same underlying flow PDF. The closed-form relation for f+ is a monotonically increasing function of flow variance. However, using finite flow series, we find that f+ increases with flow variance to a threshold that increases with flow record length. By introducing a sediment entrainment threshold, we present a physical mechanism for the observed diverging relationship between Qeff and flow variance in fine and coarse-bed channels. Our work shows that through complex and threshold-driven relationships sediment transport mode, channel morphology, flow variance, and flow record length all interact to influence estimates of what flow frequencies are most responsible for transporting sediment in alluvial channels.
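The finite-series experiment described above can be reproduced in miniature: draw a synthetic lognormal daily-flow series, apply a power-law rating Qs = aQ^b with an optional entrainment threshold, and locate the effective discharge Qeff and the fraction f+ of load moved by flows above it. All parameter values below are hypothetical.

```python
# Hedged sketch of a magnitude-frequency analysis on a finite flow record.
import numpy as np

rng = np.random.default_rng(1)
flows = rng.lognormal(mean=1.0, sigma=0.8, size=3650)   # ~10 years of daily flows

def mfa(flows, a=1e-3, b=2.0, q_crit=0.0, nbins=60):
    qs = a * np.clip(flows - q_crit, 0.0, None) ** b    # rating with threshold
    load, edges = np.histogram(flows, bins=nbins, weights=qs)  # load per flow bin
    mids = 0.5 * (edges[:-1] + edges[1:])
    q_eff = mids[np.argmax(load)]                        # effective discharge
    f_plus = qs[flows > q_eff].sum() / qs.sum()          # share moved above Qeff
    return q_eff, f_plus

print("Qeff, f+:", mfa(flows))
```

Repeating this over many synthetic series of a given length reproduces the divergence from the closed-form predictions that the authors report.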
System level analysis and control of manufacturing process variation
Hamada, Michael S.; Martz, Harry F.; Eleswarpu, Jay K.; Preissler, Michael J.
2005-05-31
A computer-implemented method is described for determining the variability of a manufacturing system having a plurality of subsystems. Each subsystem is characterized by signal factors, noise factors, control factors, and an output response, all having mean and variance values. Response models are fitted to each subsystem to determine the unknown coefficients that characterize the relationship between the signal, noise, and control factors and the corresponding output response, whose mean and variance are related to those factors. The response models for each subsystem are then coupled to model the output of the manufacturing system as a whole. The coefficients of the fitted response models are randomly varied to propagate variances through the plurality of subsystems, and values of the signal and control factors are found that optimize the output of the manufacturing system to meet a specified criterion.
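A hedged sketch of the propagation step under invented response models: coefficients fitted for two coupled subsystems are drawn at random together with a noise factor, and the system-level output mean and variance are read off the Monte Carlo sample. The linear models, coefficient uncertainties, and settings are all illustrative assumptions, not the patented method's actual models.

```python
# Hedged sketch: Monte Carlo variance propagation through coupled subsystems.
import numpy as np

rng = np.random.default_rng(2)

# fitted coefficients (mean, sd) from each subsystem's regression -- hypothetical
beta1 = rng.normal([2.0, 0.5], [0.05, 0.02], size=(10000, 2))   # subsystem 1
beta2 = rng.normal([1.0, 1.5], [0.04, 0.03], size=(10000, 2))   # subsystem 2

signal, control = 3.0, 1.2                  # candidate factor settings
noise = rng.normal(0.0, 0.3, 10000)         # noise factor draws

y1 = beta1[:, 0] * signal + beta1[:, 1] * control + noise       # subsystem 1 output
y2 = beta2[:, 0] * y1 + beta2[:, 1] * control                   # fed into subsystem 2
print("system mean:", y2.mean(), "system variance:", y2.var())
```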
Msimanga, Huggins Z; Ollis, Robert J
2010-06-01
Principal component analysis (PCA) and partial least squares discriminant analysis (PLS-DA) were used to classify acetaminophen-containing medicines using their attenuated total reflection Fourier transform infrared (ATR-FT-IR) spectra. Four formulations of Tylenol (Arthritis Pain Relief, Extra Strength Pain Relief, 8 Hour Pain Relief, and Extra Strength Pain Relief Rapid Release) along with 98% pure acetaminophen were selected for this study because of the similarity of their spectral features, with correlation coefficients ranging from 0.9857 to 0.9988. Before acquiring spectra for the predictor matrix, the effects on spectral precision with respect to sample particle size (determined by sieve size opening), force gauge of the ATR accessory, sample reloading, and between-tablet variation were examined. Spectra were baseline corrected and normalized to unity before multivariate analysis. Analysis of variance (ANOVA) was used to study spectral precision. The large particles (35 mesh) showed large variance between spectra, while fine particles (120 mesh) indicated good spectral precision based on the F-test. Force gauge setting did not significantly affect precision. Sample reloading using the fine particle size and a constant force gauge setting of 50 units also did not compromise precision. Based on these observations, data acquisition for the predictor matrix was carried out with the fine particles (sieve size opening of 120 mesh) at a constant force gauge setting of 50 units. After removing outliers, PCA successfully classified the five samples in the first and second components, accounting for 45.0% and 24.5% of the variances, respectively. The four-component PLS-DA model (R² = 0.925 and Q² = 0.906) gave good test spectra predictions with an overall average of 0.961 ± 7.1% RSD versus the expected 1.0 prediction for the 20 test spectra used.
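The preprocessing-plus-PCA stage is easy to sketch with scikit-learn. The spectra below are synthetic stand-ins, and the baseline correction is a crude minimum subtraction rather than the authors' procedure; the normalization to unit area mirrors the "normalized to unity" step.

```python
# Hedged sketch: baseline-correct, normalize, and score spectra with PCA.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
spectra = rng.random((25, 600))                    # 25 spectra x 600 wavenumbers (synthetic)

X = spectra - spectra.min(axis=1, keepdims=True)   # crude baseline correction
X = X / X.sum(axis=1, keepdims=True)               # normalize to unity

pca = PCA(n_components=2).fit(X)
scores = pca.transform(X)                          # PC1/PC2 scores for class plots
print("explained variance:", pca.explained_variance_ratio_.round(3))
```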
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Youhong
2011-01-01
The All Sky Monitor (ASM) on board the Rossi X-ray Timing Explorer has continuously monitored a number of active galactic nuclei (AGNs) with similar sampling rates for 14 years, from 1996 January to 2009 December. Utilizing the archival ASM data of 27 AGNs, we calculate the normalized excess variances of the 300-day binned X-ray light curves on the longest timescale (between 300 days and 14 years) explored so far. The observed variance appears to be independent of AGN black-hole mass and bolometric luminosity. According to the scaling relation of black-hole mass (and bolometric luminosity) from galactic black hole X-ray binaries (GBHs) to AGNs, the break timescales that correspond to the break frequencies detected in the power spectral density (PSD) of our AGNs are larger than the binsize (300 days) of the ASM light curves. As a result, the singly broken power-law (soft-state) PSD predicts the variance to be independent of mass and luminosity. Nevertheless, the doubly broken power-law (hard-state) PSD predicts, with the widely accepted ratio of the two break frequencies, that the variance increases with increasing mass and decreases with increasing luminosity. Therefore, the independence of the observed variance on mass and luminosity suggests that AGNs should have soft-state PSDs. Taking into account the scaling of the break timescale with mass and luminosity synchronously, the observed variances are also more consistent with the soft-state than the hard-state PSD predictions. With the averaged variance of AGNs and the soft-state PSD assumption, we obtain a universal PSD amplitude of 0.030 ± 0.022. By analogy with the GBH PSDs in the high/soft state, the longest timescale variability supports the standpoint that AGNs are scaled-up GBHs in the high accretion state, as already implied by the direct PSD analysis.
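The normalized excess variance used in this analysis has a standard definition (e.g., Nandra et al. 1997): the sample variance of the binned light curve minus the mean squared measurement error, scaled by the squared mean flux. A minimal sketch, with illustrative numbers:

```python
# Hedged sketch: normalized excess variance of a binned light curve.
import numpy as np

def nxs_variance(flux, err):
    """sigma^2_NXS = (S^2 - <err^2>) / <flux>^2."""
    flux, err = np.asarray(flux), np.asarray(err)
    s2 = flux.var(ddof=1)                   # sample variance of the bins
    return (s2 - np.mean(err**2)) / flux.mean()**2

# e.g. a 300-day-binned light curve (counts/s) with per-bin errors (made up)
print(nxs_variance([5.1, 4.7, 6.0, 5.5, 4.9], [0.3, 0.3, 0.4, 0.3, 0.3]))
```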
Watkins, Marley W
2010-12-01
The structure of the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV; D. Wechsler, 2003a) was analyzed via confirmatory factor analysis among a national sample of 355 students referred for psychoeducational evaluation by 93 school psychologists from 35 states. The structure of the WISC-IV core battery was best represented by four first-order factors as per D. Wechsler (2003b), plus a general intelligence factor in a direct hierarchical model. The general factor was the predominant source of variation among WISC-IV subtests, accounting for 48% of the total variance and 75% of the common variance. The largest first-order factor, Processing Speed, accounted for only 6.1% of the total and 9.5% of the common variance. Given these explanatory contributions, recommendations favoring interpretation of the first-order factor scores over the general intelligence score appear to be misguided.
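The variance bookkeeping behind figures like "48% of total, 75% of common variance" is simple once standardized loadings are in hand: in a direct hierarchical (bifactor) solution, the general factor's share of total variance is Σλg²/p and its share of common variance is Σλg²/Σ(all λ²). The loadings in this sketch are made up, so the percentages differ from the study's.

```python
# Hedged sketch: variance shares of a general factor from standardized loadings.
import numpy as np

lam_g = np.array([0.70, 0.65, 0.72, 0.60, 0.68])   # general-factor loadings (hypothetical)
lam_s = np.array([0.30, 0.35, 0.25, 0.40, 0.20])   # group-factor loadings (hypothetical)

total_share = np.sum(lam_g**2) / len(lam_g)                        # of total variance
common_share = np.sum(lam_g**2) / (np.sum(lam_g**2) + np.sum(lam_s**2))
print(f"general factor: {total_share:.1%} total, {common_share:.1%} common")
```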
Instrument Psychometrics: Parental Satisfaction and Quality Indicators of Perinatal Palliative Care.
Wool, Charlotte
2015-10-01
Despite a life-limiting fetal diagnosis, prenatal attachment often occurs in varying degrees, resulting in role identification by an individual as a parent. Parents recognize quality care and report their satisfaction when interfacing with health care providers. The aim was to test an instrument measuring parental satisfaction and quality indicators among parents electing to continue a pregnancy after learning of a life-limiting fetal diagnosis. A cross-sectional survey design gathered data using a computer-mediated platform. Subjects were parents (n=405) who opted to continue a pregnancy affected by a life-limiting diagnosis. Factor analysis using principal component analysis with Varimax rotation was used to validate the instrument, evaluate components, and summarize the explained variance achieved among quality indicator items. The Prenatal Scale was reduced to 37 items with a three-component solution explaining 66.19% of the variance and internal consistency reliability of 0.98. The Intrapartum Scale included 37 items with a four-component solution explaining 66.93% of the variance and a Cronbach α of 0.977. The Postnatal Scale was reduced to 44 items with a six-component solution explaining 67.48% of the variance; internal consistency reliability was 0.975. The Parental Satisfaction and Quality Indicators of Perinatal Palliative Care Instrument is a valid and reliable measure of parent-reported quality care and satisfaction. Use of this instrument will enable clinicians and researchers to measure quality indicators and parental satisfaction. The instrument is useful for assessing, analyzing, and reporting data on the quality of care delivered during the prenatal, intrapartum, and postnatal periods.
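Two of the psychometric summaries reported here, component-wise explained variance and Cronbach's alpha, can be sketched in a few lines. The item responses below are simulated placeholders with an induced common factor; cronbach_alpha is a helper written for this sketch, not part of any package.

```python
# Hedged sketch: explained variance of a PCA solution and Cronbach's alpha.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
common = rng.normal(size=(405, 1))                      # shared satisfaction signal
items = common + 0.6 * rng.normal(size=(405, 37))       # 37 items, n = 405 (simulated)

def cronbach_alpha(x):
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

pca = PCA(n_components=3).fit(items)
print("variance explained:", pca.explained_variance_ratio_.sum().round(3))
print("alpha:", round(cronbach_alpha(items), 3))
```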
NASA Astrophysics Data System (ADS)
Varghese, Bino; Hwang, Darryl; Mohamed, Passant; Cen, Steven; Deng, Christopher; Chang, Michael; Duddalwar, Vinay
2017-11-01
Purpose: To evaluate the potential use of wavelet analysis in discriminating benign and malignant renal masses (RM). Materials and Methods: Regions of interest of the whole lesion were manually segmented and co-registered from multiphase CT acquisitions of 144 patients (98 malignant RM: renal cell carcinoma (RCC); 46 benign RM: oncocytoma, lipid-poor angiomyolipoma). Here, the Haar wavelet was used to analyze the grayscale images of the largest segmented tumor in the axial direction. Six metrics (energy, entropy, homogeneity, contrast, standard deviation (SD), and variance) derived from 3 levels of image decomposition in 3 directions (horizontal, vertical, and diagonal), respectively, were used to quantify tumor texture. Independent t-tests or Wilcoxon rank-sum tests, depending on data normality, were used as exploratory univariate analyses. Stepwise logistic regression and receiver operating characteristic (ROC) curve analysis were used to select predictors and assess prediction accuracy, respectively. Results: Consistently, 5 of the 6 wavelet-based texture measures (all except homogeneity) were higher for malignant tumors than for benign ones when accounting for individual texture direction. Homogeneity was consistently lower in malignant than benign tumors irrespective of direction. SD and variance measured in the diagonal direction on the corticomedullary phase showed a significant (p<0.05) difference between benign and malignant tumors. The multivariate model with variance (3 directions) and SD (vertical direction) extracted from the excretory and pre-contrast phases, respectively, showed an area under the ROC curve (AUC) of 0.78 (p < 0.05) in discriminating malignant from benign. Conclusion: Wavelet analysis is a valuable texture evaluation tool to add to radiomics platforms geared toward reliably characterizing and stratifying renal masses.
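A hedged sketch of the feature-extraction step: a 3-level Haar decomposition of a grayscale image with PyWavelets, followed by simple statistics per detail subband (horizontal, vertical, diagonal). The image is a synthetic placeholder, and the entropy/energy definitions are common conventions that may differ from the authors' exact formulas.

```python
# Hedged sketch: wavelet-based texture metrics per Haar detail subband.
import numpy as np
import pywt

rng = np.random.default_rng(5)
image = rng.random((64, 64))                     # stand-in for a segmented CT slice

coeffs = pywt.wavedec2(image, "haar", level=3)   # [cA3, (H3,V3,D3), (H2,V2,D2), (H1,V1,D1)]
features = {}
for lev, details in zip((3, 2, 1), coeffs[1:]):  # details come coarsest-first
    for name, band in zip(("horizontal", "vertical", "diagonal"), details):
        p = band.ravel() ** 2
        p = p / p.sum()                          # normalized energy distribution
        features[f"L{lev}-{name}"] = {
            "energy": float(np.sum(band**2)),
            "entropy": float(-np.sum(p * np.log2(p + 1e-12))),
            "sd": float(band.std()),
            "variance": float(band.var()),
        }
print(features["L1-diagonal"])                   # finest-level diagonal subband
```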
Young, Allison; Klossner, Joanne; Docherty, Carrie L; Dodge, Thomas M; Mensch, James M
2013-01-01
Context: A better understanding of why students leave an undergraduate athletic training education program (ATEP), as well as why they persist, is critical in determining the future membership of our profession. Objective: To better understand how clinical experiences affect student retention in undergraduate ATEPs. Design: Survey-based research using a quantitative and qualitative mixed-methods approach. Setting: Three-year undergraduate ATEPs across District 4 of the National Athletic Trainers' Association. Patients or Other Participants: Seventy-one persistent students and 23 students who left the ATEP prematurely. Data Collection and Analysis: Data were collected using a modified version of the Athletic Training Education Program Student Retention Questionnaire. Multivariate analysis of variance was performed on the quantitative data, followed by a univariate analysis of variance on any significant findings. The qualitative data were analyzed through inductive content analysis. Results: A difference was identified between the persister and dropout groups (Pillai trace = 0.42, F(1,92) = 12.95, P = .01). The follow-up analysis of variance revealed that the persister and dropout groups differed on the anticipatory factors (F(1,92) = 4.29, P = .04), clinical integration (F(1,92) = 6.99, P = .01), and motivation (F(1,92) = 43.12, P = .01) scales. Several themes emerged in the qualitative data, including networks of support, authentic experiential learning, role identity, time commitment, and major or career change. Conclusions: A perceived difference exists in how athletic training students are integrated into their clinical experiences between those students who leave an ATEP and those who stay. Educators may improve retention by emphasizing authentic experiential learning opportunities rather than hours worked, by allowing students to take on more responsibility, and by facilitating networks of support within clinical education experiences. PMID:23672327
Structural Analysis of Covariance and Correlation Matrices.
ERIC Educational Resources Information Center
Joreskog, Karl G.
1978-01-01
A general approach to analysis of covariance structures is considered, in which the variances and covariances or correlations of the observed variables are directly expressed in terms of the parameters of interest. The statistical problems of identification, estimation and testing of such covariance or correlation structures are discussed.…
Spotting Incorrect Rules in Signed-Number Arithmetic by the Individual Consistency Index.
1981-08-01
…meaning of dimensionality of achievement data. It also shows the importance of construct validity, even in criterion-referenced testing of the cognitive … aspect of performance, and that the traditional means of item analysis that are based on taking the variances of binary scores and content analysis…
Inter-individual Differences in the Effects of Aircraft Noise on Sleep Fragmentation.
McGuire, Sarah; Müller, Uwe; Elmenhorst, Eva-Maria; Basner, Mathias
2016-05-01
Environmental noise exposure disturbs sleep and impairs recuperation, and may contribute to the increased risk for (cardiovascular) disease. Noise policy and regulation are usually based on average responses despite potentially large inter-individual differences in the effects of traffic noise on sleep. In this analysis, we investigated what percentage of the total variance in noise-induced awakening reactions can be explained by stable inter-individual differences. We investigated 69 healthy subjects polysomnographically (mean ± standard deviation 40 ± 13 years, range 18-68 years, 32 male) in this randomized, balanced, double-blind, repeated measures laboratory study. This study included one adaptation night, 9 nights with exposure to 40, 80, or 120 road, rail, and/or air traffic noise events (including one noise-free control night), and one recovery night. Mixed-effects models of variance controlling for reaction probability in noise-free control nights, age, sex, number of noise events, and study night showed that 40.5% of the total variance in awakening probability and 52.0% of the total variance in EEG arousal probability were explained by inter-individual differences. If the data set was restricted to the four exposure nights with 80 noise events per night, 46.7% of the total variance in awakening probability and 57.9% of the total variance in EEG arousal probability were explained by inter-individual differences. The results thus demonstrate that, even in this relatively homogeneous, healthy, adult study population, a considerable amount of the variance observed in noise-induced sleep disturbance can be explained by stable inter-individual differences not accounted for by age, gender, or specific study design aspects. It will be important to identify those at higher risk for noise-induced sleep disturbance. Furthermore, the custom of basing noise policy and legislation on average responses should be re-assessed in light of these findings. © 2016 Associated Professional Sleep Societies, LLC.
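The "percent of variance explained by stable inter-individual differences" can be illustrated with a random-intercept mixed model: the subject-level variance over the total is the intraclass correlation. The sketch below uses simulated continuous data and statsmodels; the study's actual models additionally adjusted for covariates and worked with binary awakening reactions.

```python
# Hedged sketch: share of variance attributable to stable subject differences.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
subjects = np.repeat(np.arange(69), 9)                    # 69 subjects x 9 nights
subj_effect = rng.normal(0.0, 0.8, 69)[subjects]          # stable individual trait
y = 2.0 + subj_effect + rng.normal(0.0, 1.0, subjects.size)
df = pd.DataFrame({"y": y, "subject": subjects})

fit = smf.mixedlm("y ~ 1", df, groups=df["subject"]).fit()
between = float(fit.cov_re.iloc[0, 0])                    # subject-level variance
within = fit.scale                                        # residual variance
print("share of variance from inter-individual differences:",
      round(between / (between + within), 3))
```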
Temperament affects sympathetic nervous function in a normal population.
Kim, Bora; Lee, Jae-Hon; Kang, Eun-Ho; Yu, Bum-Hee
2012-09-01
Although specific temperaments have been known to be related to autonomic nervous function in some psychiatric disorders, few studies have examined the relationship between temperament and autonomic nervous function in a normal population. In this study, we examined the effect of temperament on sympathetic nervous function in a normal population. Sixty-eight healthy subjects participated in the present study. Temperament was assessed using the Korean version of the Cloninger Temperament and Character Inventory (TCI). Autonomic nervous function was determined by measuring skin temperature in a resting state, which was recorded for 5 minutes from the palmar surface of the left 5th digit using a thermistor secured with a Velcro® band. Pearson's correlation analysis and multiple linear regression were used to examine the relationship between temperament and skin temperature. A higher harm avoidance score was correlated with a lower skin temperature (i.e., an increased sympathetic tone; r=-0.343, p=0.004), whereas a higher persistence score was correlated with a higher skin temperature (r=0.433, p=0.001). Hierarchical linear regression analysis revealed that harm avoidance independently predicted skin temperature, accounting for 7.1% of the variance after controlling for sex, blood pressure, and state anxiety, and that persistence accounted for a further 5.0% of the variance. These results suggest that high harm avoidance is related to increased sympathetic nervous function, whereas high persistence is related to decreased sympathetic nervous function in a normal population. PMID:22993530
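The hierarchical regression logic reported here, entering the control variables first and then reading off the increment in R² when the temperament score is added, can be sketched as follows. The data are simulated and the coefficient values are arbitrary.

```python
# Hedged sketch: delta-R^2 from hierarchical (two-step) linear regression.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n = 68
controls = rng.normal(size=(n, 3))          # e.g., sex, blood pressure, anxiety
harm_avoidance = rng.normal(size=(n, 1))
skin_temp = (controls @ np.array([0.2, -0.1, 0.3])
             - 0.4 * harm_avoidance[:, 0]
             + rng.normal(0.0, 1.0, n))

r2_base = LinearRegression().fit(controls, skin_temp).score(controls, skin_temp)
X_full = np.hstack([controls, harm_avoidance])
r2_full = LinearRegression().fit(X_full, skin_temp).score(X_full, skin_temp)
print("delta R^2 for harm avoidance:", round(r2_full - r2_base, 3))
```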
Methods for presentation and display of multivariate data
NASA Technical Reports Server (NTRS)
Myers, R. H.
1981-01-01
Methods for the presentation and display of multivariate data are discussed, with emphasis placed on multivariate analysis of variance problems and the Hotelling T² solution in the two-sample case. The methods utilize the concepts of stepwise discriminant analysis and the computation of partial correlation coefficients.
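The two-sample Hotelling T² referenced above, with its usual F transformation, can be computed directly. The groups below are simulated placeholders.

```python
# Hedged sketch: two-sample Hotelling T^2 with the standard F conversion.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
x, y = rng.normal(0, 1, (30, 4)), rng.normal(0.5, 1, (25, 4))

def hotelling_t2(x, y):
    n1, n2, p = len(x), len(y), x.shape[1]
    d = x.mean(0) - y.mean(0)
    s_pooled = ((n1 - 1) * np.cov(x.T) + (n2 - 1) * np.cov(y.T)) / (n1 + n2 - 2)
    t2 = (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(s_pooled, d)
    f = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * t2      # ~ F(p, n1+n2-p-1)
    return t2, stats.f.sf(f, p, n1 + n2 - p - 1)

t2, pval = hotelling_t2(x, y)
print("T^2 =", round(t2, 2), "p =", round(pval, 4))
```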
Statistical analysis tables for truncated or censored samples
NASA Technical Reports Server (NTRS)
Cohen, A. C.; Cooley, C. G.
1971-01-01
The compilation describes the characteristics of truncated and censored samples and presents six illustrations of the practical use of the tables in computing mean and variance estimates for the normal distribution from such samples.
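For a flavor of the estimation problem such tables support, here is a hedged sketch of estimating a normal mean and standard deviation from a truncated sample (only observations above a known cutoff are recorded) by maximizing the truncated-normal log-likelihood; the cutoff and true parameters are illustrative.

```python
# Hedged sketch: MLE of normal parameters from a sample truncated below.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(9)
cutoff = 1.0
full = rng.normal(2.0, 1.5, 2000)
sample = full[full > cutoff]                 # truncated sample actually observed

def neg_loglik(theta):
    mu, log_sd = theta
    sd = np.exp(log_sd)                      # keep sd positive
    # truncated density = pdf renormalized by P(X > cutoff)
    return -(stats.norm.logpdf(sample, mu, sd)
             - stats.norm.logsf(cutoff, mu, sd)).sum()

res = optimize.minimize(neg_loglik, x0=[sample.mean(), np.log(sample.std())])
print("mu_hat:", res.x[0].round(3), "sd_hat:", np.exp(res.x[1]).round(3))
```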
Big Five personality traits: are they really important for the subjective well-being of Indians?
Tanksale, Deepa
2015-02-01
This study empirically examined the relationship between the Big Five personality traits and subjective well-being (SWB) in India. The SWB variables used were life satisfaction, positive affect, and negative affect. A total of 183 participants in the age range 30-40 years from Pune, India, completed the personality and SWB measures. Backward stepwise regression analysis showed that the Big Five traits accounted for 17% of the variance in life satisfaction, 35% of the variance in positive affect, and 28% of the variance in negative affect. Conscientiousness emerged as the strongest predictor of life satisfaction. In line with earlier research findings, neuroticism and extraversion were found to predict negative affect and positive affect, respectively. Neither openness to experience nor agreeableness contributed to SWB. The research emphasises the need to revisit the association between personality and SWB across different cultures, especially non-western cultures. © 2014 International Union of Psychological Science.
Eating Disorders: Explanatory Variables in Caucasian and Hispanic College Women
ERIC Educational Resources Information Center
Aviña, Vanessa; Day, Susan X.
2016-01-01
The authors explored Hispanic and Caucasian college women's (N = 264) behavioral and attitudinal symptoms of eating disorders after controlling for body mass index and internalization of the thinness ideal, as well as the roles of ethnicity and ethnic identity in symptomatology. Correlational analysis, multivariate analysis of variance, and…
The Stanford Prison Experiment in Introductory Psychology Textbooks: A Content Analysis
ERIC Educational Resources Information Center
Bartels, Jared M.
2015-01-01
The present content analysis examines the coverage of theoretical and methodological problems with the Stanford prison experiment (SPE) in a sample of introductory psychology textbooks. Categories included the interpretation and replication of the study, variance in guard behavior, participant selection bias, the presence of demand characteristics…
Determinants of Standard Errors of MLEs in Confirmatory Factor Analysis
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Cheng, Ying; Zhang, Wei
2010-01-01
This paper studies changes of standard errors (SE) of the normal-distribution-based maximum likelihood estimates (MLE) for confirmatory factor models as model parameters vary. Using logical analysis, simplified formulas and numerical verification, monotonic relationships between SEs and factor loadings as well as unique variances are found.…