Science.gov

Sample records for Allan variance analysis

  1. The quantum Allan variance

    NASA Astrophysics Data System (ADS)

    Chabuda, Krzysztof; Leroux, Ian D.; Demkowicz-Dobrzański, Rafał

    2016-08-01

    The instability of an atomic clock is characterized by the Allan variance, a measure widely used to describe the noise of frequency standards. We provide an explicit method to find the ultimate bound on the Allan variance of an atomic clock in the most general scenario where N atoms are prepared in an arbitrarily entangled state and arbitrary measurement and feedback are allowed, including those exploiting coherences between succeeding interrogation steps. While the method is rigorous and general, it becomes numerically challenging for large N and long averaging times.

  2. Application of the Allan Variance to Time Series Analysis in Astrometry and Geodesy: A Review.

    PubMed

    Malkin, Zinovy

    2016-04-01

    The Allan variance (AVAR) was introduced 50 years ago as a statistical tool for assessing the stability of frequency standards. Over the past decades, AVAR has increasingly been used in geodesy and astrometry to assess the noise characteristics in geodetic and astrometric time series. A specific feature of astrometric and geodetic measurements, as compared with clock measurements, is that they are generally associated with uncertainties; thus, an appropriate weighting should be applied during data analysis. In addition, some physically connected scalar time series naturally form series of multidimensional vectors. For example, the three station coordinate time series X, Y, and Z can be combined to analyze 3-D station position variations. The classical AVAR is not intended for processing unevenly weighted and/or multidimensional data. Therefore, AVAR modifications, namely weighted AVAR (WAVAR), multidimensional AVAR (MAVAR), and weighted multidimensional AVAR (WMAVAR), were introduced to overcome these deficiencies. In this paper, a brief review is given of the experience of using AVAR and its modifications in processing astrogeodetic time series.
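
    For reference, the classical AVAR of fractional frequency averages y_i at averaging time τ is given below, together with one illustrative way per-point uncertainties s_i can enter a weighted variant. The weighted form is a hedged sketch of the idea only; Malkin's exact WAVAR definition should be taken from the paper.

```latex
% Classical two-sample (Allan) variance at averaging time \tau:
\sigma_y^2(\tau) = \frac{1}{2(n-1)} \sum_{i=1}^{n-1} \left( y_{i+1} - y_i \right)^2
% Illustrative weighted variant (assumed form; weights built from the
% per-point uncertainties, e.g. w_i = 1/(s_i^2 + s_{i+1}^2)):
\sigma_{y,w}^2(\tau) = \frac{\sum_{i=1}^{n-1} w_i \left( y_{i+1} - y_i \right)^2}
                            {2 \sum_{i=1}^{n-1} w_i}
```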

  3. On the application of Allan variance method for Ring Laser Gyro performance characterization

    SciTech Connect

    Ng, L.C.

    1993-10-15

    This report describes the method of Allan variance and its application to the characterization of a Ring Laser Gyro's (RLG) performance. Allan variance, a time domain analysis technique, is an accepted IEEE standard for gyro specifications. The method was initially developed by David Allan of the National Bureau of Standards to quantify the error statistics of a Cesium beam frequency standard employed as the US frequency standard in the 1960s. The method can, in general, be applied to analyze the error characteristics of any precision measurement instrument. The key attribute of the method is that it allows for a finer, easier characterization and identification of error sources and their contribution to the overall noise statistics. This report presents an overview of the method, explains the relationship between Allan variance and the power spectral density distribution of the underlying noise sources, describes the batch and recursive implementation approaches, validates the Allan variance computation with a simulation model, and illustrates the Allan variance method using data collected from several Honeywell LIMU units.
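
    As a concrete illustration of the batch approach mentioned above, a minimal non-overlapping Allan variance computation might look like the following sketch (Python; the function name, parameters, and the white-noise check are ours, not the report's):

```python
import numpy as np

def allan_variance(rate, fs, m_list):
    """Non-overlapping ("batch") Allan variance of a rate signal.

    rate   : 1-D array of rate samples (e.g., gyro output in deg/s)
    fs     : sampling frequency in Hz
    m_list : ascending cluster sizes (samples averaged per bin)
    """
    taus, avars = [], []
    for m in m_list:
        n_clusters = len(rate) // m
        if n_clusters < 2:
            break
        # Average consecutive clusters of m samples each.
        y = rate[: n_clusters * m].reshape(n_clusters, m).mean(axis=1)
        # Two-sample variance: half the mean squared successive difference.
        avars.append(0.5 * np.mean(np.diff(y) ** 2))
        taus.append(m / fs)
    return np.array(taus), np.array(avars)

# Example: for pure white noise the log-log slope of AVAR vs tau is -1.
fs = 100.0
rate = np.random.default_rng(0).normal(0.0, 0.1, 100_000)
taus, avars = allan_variance(rate, fs, 2 ** np.arange(1, 12))
print(np.polyfit(np.log10(taus), np.log10(avars), 1)[0])  # close to -1
```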

  4. Online estimation of Allan variance coefficients based on a neural-extended Kalman filter.

    PubMed

    Miao, Zhiyong; Shen, Feng; Xu, Dingjie; He, Kunpeng; Tian, Chunmiao

    2015-01-01

    As a noise analysis method for inertial sensors, the traditional Allan variance method requires the storage of a large amount of data and manual analysis of an Allan variance graph. Although the existing online estimation methods avoid the storage of data and the painful procedure of drawing slope lines for estimation, they require complex transformations and can even introduce errors during the modeling of the dynamic Allan variance. To solve these problems, first, a nonlinear state-space model that directly models the stochastic errors of inertial sensors was established. Then, a neural-extended Kalman filter algorithm was used to estimate the Allan variance coefficients. The real noises of an ADIS16405 IMU and fiber-optic gyro sensors were analyzed by the proposed method and the traditional methods. The experimental results show that the proposed method is more suitable for estimating the Allan variance coefficients than the traditional methods. Moreover, the proposed method effectively avoids the storage of data and can be easily implemented on an online processor.
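
    For simple cases, the slope-line reading that the authors automate can also be replaced by an ordinary least-squares fit of a standard three-term noise model to a measured AVAR curve (such as one computed with the sketch above). The snippet below is a generic alternative shown for context, not the paper's neural-extended Kalman filter; the names and the model choice are ours:

```python
import numpy as np

def fit_noise_coefficients(taus, avars):
    """Ordinary least-squares fit of a three-term noise model to an AVAR curve.

    Assumed model (a common textbook decomposition, not the paper's filter):
        AVAR(tau) =  N**2 / tau                angle/velocity random walk
                  + (2 * ln(2) / pi) * B**2    bias instability plateau
                  +  K**2 * tau / 3            rate random walk
    taus, avars : 1-D float arrays from an Allan variance computation
    Returns the coefficients (N, B, K).
    """
    taus = np.asarray(taus, dtype=float)
    A = np.column_stack([1.0 / taus,
                         np.full_like(taus, 2.0 * np.log(2.0) / np.pi),
                         taus / 3.0])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(avars, dtype=float), rcond=None)
    coeffs = np.clip(coeffs, 0.0, None)  # variances cannot be negative
    return tuple(np.sqrt(coeffs))
```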

  5. The dynamic Allan Variance IV: characterization of atomic clock anomalies.

    PubMed

    Galleani, Lorenzo; Tavella, Patrizia

    2015-05-01

    The number of applications where precise clocks play a key role is steadily increasing, satellite navigation being the main example. Precise clock anomalies are hence critical events, and their characterization is a fundamental problem. When an anomaly occurs, the clock stability changes with time, and this variation can be characterized with the dynamic Allan variance (DAVAR). We obtain the DAVAR for a series of common clock anomalies, namely, a sinusoidal term, a phase jump, a frequency jump, and a sudden change in the clock noise variance. These anomalies are particularly common in space clocks. Our analytic results clarify how the clock stability changes during these anomalies.

  6. The dynamic Allan variance II: a fast computational algorithm.

    PubMed

    Galleani, Lorenzo

    2010-01-01

    The stability of an atomic clock can change with time due to several factors, such as temperature, humidity, radiation, aging, and sudden breakdowns. The dynamic Allan variance, or DAVAR, is a representation of the time-varying stability of an atomic clock, and it can be used to monitor the clock behavior. Unfortunately, the computational time of the DAVAR grows very quickly with the length of the analyzed time series. In this article, we present a fast algorithm for the computation of the DAVAR, and we also extend it to the case of missing data. Numerical simulations show that the fast algorithm dramatically reduces the computational time. The fast algorithm is useful when the analyzed time series is long, or when many clocks must be monitored, or when the computational power is low, as happens onboard satellites and space probes.
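
    For orientation, the DAVAR is the ordinary Allan variance recomputed over a sliding analysis window. A naive implementation, whose quadratic cost is exactly what the paper's fast algorithm avoids, could look like this sketch (assumed phase-data convention; the names are ours):

```python
import numpy as np

def davar(phase, tau0, window, step, m_list):
    """Naive dynamic Allan variance: plain AVAR over sliding windows.

    phase  : 1-D array of clock phase (time-error) samples, spacing tau0 (s)
    window : window length in samples; step : window shift in samples
    m_list : ascending averaging factors
    Returns a list of (window_center_time, taus, avars) triples.
    """
    out = []
    for start in range(0, len(phase) - window + 1, step):
        x = phase[start:start + window]
        taus, avars = [], []
        for m in m_list:
            if 2 * m >= len(x):
                break
            # Overlapping Allan variance from phase data: second
            # differences of x at lag m, scaled by 2 * tau**2.
            d2 = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]
            avars.append(np.mean(d2 ** 2) / (2.0 * (m * tau0) ** 2))
            taus.append(m * tau0)
        out.append(((start + window / 2.0) * tau0,
                    np.array(taus), np.array(avars)))
    return out
```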

  7. Allan Variance Computed in Space Domain: Definition and Application to InSAR Data to Characterize Noise and Geophysical Signal.

    PubMed

    Cavalié, Olivier; Vernotte, François

    2016-04-01

    The Allan variance was introduced 50 years ago for analyzing the stability of frequency standards. In addition to its metrological interest, it may also be considered as an estimator of the large trends of the power spectral density (PSD) of frequency deviation. For instance, the Allan variance is able to discriminate different types of noise characterized by different power laws in the PSD. The Allan variance has also been used in fields other than time and frequency metrology: for more than 20 years, it has been used in accelerometry, geophysics, geodesy, astrophysics, and even finance. However, it seems that up to now, it has been applied exclusively to time series analysis. We propose here to use the Allan variance on spatial data. Interferometric synthetic aperture radar (InSAR) is used in geophysics to image ground displacements in space [over the synthetic aperture radar (SAR) image spatial coverage] and in time, thanks to the regular SAR image acquisitions by dedicated satellites. The main limitation of the technique is the atmospheric disturbances that affect the radar signal while traveling from the sensor to the ground and back. In this paper, we propose to use the Allan variance for analyzing spatial data from InSAR measurements. The Allan variance was computed in XY mode as well as in radial mode for detecting different types of behavior at different spatial scales, in the same way as the different types of noise versus the integration time in the classical time and frequency application. We found that the radial Allan variance is the more appropriate way to obtain an estimator insensitive to the spatial axis, and we applied it to SAR data acquired over eastern Turkey for the period 2003-2011. The spatial Allan variance allowed us to characterize noise features classically found in InSAR, such as phase decorrelation, which produces white noise, and atmospheric delays, which behave like a random-walk signal. We finally applied the spatial Allan variance to an InSAR time

  8. Twenty-Five Years of Applications of the Modified Allan Variance in Telecommunications.

    PubMed

    Bregni, Stefano

    2016-04-01

    The Modified Allan Variance (MAVAR) was originally defined in 1981 for measuring frequency stability in precision oscillators. Due to its outstanding accuracy in discriminating power-law noise, it attracted significant interest among telecommunications engineers beginning in the early 1990s, when it was approved as a standard measure in international standards, recast as the Time Variance (TVAR), for specifying the time stability of network synchronization signals and of equipment clocks. A dozen years later, the MAVAR was also introduced in Internet traffic analysis to estimate self-similarity and long-range dependence. In this field, it demonstrated accuracy and sensitivity superior to the most popular tools already in use. This paper surveys the last 25 years of progress in extending the field of application of the MAVAR in telecommunications. First, the rationale and principles of the MAVAR are briefly summarized. Its adaptation as TVAR for the specification of timing stability is presented. The usage of MAVAR/TVAR in telecommunications standards is reviewed. Examples of measurements on real telecommunications equipment clocks are presented, providing an overview of their actual performance in terms of MAVAR. Moreover, applications of MAVAR to network traffic analysis are surveyed. The superior accuracy of MAVAR in estimating long-range dependence is emphasized by highlighting some remarkable practical examples of real network traffic analysis.
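
    For reference, the standard definitions involved: the modified Allan variance computed from time-error samples x_i (sample spacing τ0, averaging time τ = mτ0), and the Time Variance as its rescaling:

```latex
\operatorname{Mod}\sigma_y^2(\tau) =
  \frac{1}{2 m^4 \tau_0^2 \,(N - 3m + 1)}
  \sum_{j=1}^{N-3m+1}
  \left[ \sum_{i=j}^{j+m-1} \left( x_{i+2m} - 2 x_{i+m} + x_i \right) \right]^2,
\qquad
\mathrm{TVAR}(\tau) = \frac{\tau^2}{3}\,\operatorname{Mod}\sigma_y^2(\tau)
```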

  9. On the Design of Attitude-Heading Reference Systems Using the Allan Variance.

    PubMed

    Hidalgo-Carrió, Javier; Arnold, Sascha; Poulakis, Pantelis

    2016-04-01

    The Allan variance is a method for characterizing stochastic processes. The technique was originally developed to characterize the stability of atomic clocks and has also been successfully applied to the characterization of inertial sensors. Inertial navigation systems (INS) can provide accurate results over a short time, but these tend to degrade rapidly over longer time intervals. During the last decade, the performance of inertial sensors has improved significantly, particularly in terms of signal stability, mechanical robustness, and power consumption. The mass and volume of inertial sensors have also been significantly reduced, offering system-level design and accommodation advantages. This paper presents a complete methodology for the characterization and modeling of inertial sensors using the Allan variance, with direct application to navigation systems. Although the concept of sensor fusion is relatively straightforward, accurate characterization and filtering of sensor information are not trivial tasks, yet they are essential for good performance. A complete and reproducible methodology utilizing the Allan variance, including all the intermediate steps, is described. An end-to-end (E2E) process for sensor-error characterization and modeling, up to the final integration in the sensor-fusion scheme, is explained in detail. The strength of this approach is demonstrated with representative tests on novel, high-grade inertial sensors. Experimental navigation results are presented from two distinct robotic applications: a planetary exploration rover prototype and an autonomous underwater vehicle (AUV).

  10. Investigation of Allan variance for determining noise spectral forms with application to microwave radiometry

    NASA Technical Reports Server (NTRS)

    Stanley, William D.

    1994-01-01

    An investigation of the Allan variance method as a possible means for characterizing fluctuations in radiometric noise diodes has been performed. The goal is to separate fluctuation components into white noise, flicker noise, and random-walk noise. The primary means is discrete-time processing, and the study focused on the digital processes involved. Noise satisfying the requirements was generated by direct convolution, fast Fourier transform (FFT) processing in the time domain, and FFT processing in the frequency domain. Some of the numerous results obtained are presented along with the programs used in the study.

  11. Three-dimensional Allan fault plane analysis

    SciTech Connect

    Hoffman, K.S.; Taylor, D.R.; Schnell, R.T.

    1994-12-31

    Allan fault-plane analysis is a useful tool for determining hydrocarbon migration paths and the location of possible traps. While initially developed for Gulf Coast deltaic and interdeltaic environments, fault-plane analysis has been successfully applied in many other geologic settings. Where the geology involves several intersecting faults and greater complexity, many two-dimensional displays are required in the investigation, and it becomes increasingly difficult to accurately visualize both fault relationships and migration routes. Three-dimensional geospatial fault and structure modeling using computer techniques, however, facilitates both visualization and understanding and extends fault-plane analysis to much more complex situations. When a model is viewed in three dimensions, the strata on both sides of a fault can be seen simultaneously while the true structural character of one or more fault surfaces is preserved. Three-dimensional analysis improves the speed and accuracy of the fault-plane methodology.

  12. Power spectrum and Allan variance methods for calibrating single-molecule video-tracking instruments

    PubMed Central

    Lansdorp, Bob M.; Saleh, Omar A.

    2012-01-01

    Single-molecule manipulation instruments, such as optical traps and magnetic tweezers, frequently use video tracking to measure the position of a force-generating probe. The instruments are calibrated by comparing the measured probe motion to a model of Brownian motion in a harmonic potential well; the results of calibration are estimates of the probe drag, α, and spring constant, κ. Here, we present both time- and frequency-domain methods to accurately and precisely extract α and κ from the probe trajectory. In the frequency domain, we discuss methods to estimate the power spectral density (PSD) from data (including windowing and blocking), and we derive an analytical formula for the PSD which accounts both for aliasing and the filtering intrinsic to video tracking. In the time domain, we focus on the Allan variance (AV): we present a theoretical equation for the AV relevant to typical single-molecule setups and discuss the optimal manner for computing the AV from experimental data using octave-sampled overlapping bins. We show that, when using maximum-likelihood methods to fit to the data, both the PSD and AV approaches can extract α and κ in an unbiased and low-error manner, though the AV approach is simpler and more robust.
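
    A minimal version of the octave-sampled, overlapping-bin AV computation described above might read as follows (Python sketch; the maximum-likelihood fitting step discussed in the paper is omitted, and the names are ours):

```python
import numpy as np

def overlapping_allan_variance(x, fs):
    """Overlapping Allan variance of a position trace at octave-spaced taus.

    x  : probe position samples; fs : camera frame rate (Hz)
    Octave sampling (m = 1, 2, 4, ...) keeps the points on a log-log
    plot roughly independent, as advocated in the paper.
    """
    n = len(x)
    c = np.cumsum(np.concatenate(([0.0], np.asarray(x, dtype=float))))
    taus, avs = [], []
    for m in 2 ** np.arange(int(np.log2(n // 3)) + 1):
        # Running mean over bins of m samples (bins overlap by m-1 samples).
        xbar = (c[m:] - c[:-m]) / m
        d = xbar[m:] - xbar[:-m]  # differences of adjacent bins
        avs.append(0.5 * np.mean(d ** 2))
        taus.append(m / fs)
    return np.array(taus), np.array(avs)
```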

  13. Naive Analysis of Variance

    ERIC Educational Resources Information Center

    Braun, W. John

    2012-01-01

    The Analysis of Variance is often taught in introductory statistics courses, but it is not clear that students really understand the method. This is because the derivation of the test statistic and p-value requires a relatively sophisticated mathematical background which may not be well-remembered or understood. Thus, the essential concept behind…

  14. A Historical Perspective on the Development of the Allan Variances and Their Strengths and Weaknesses.

    PubMed

    Allan, David W; Levine, Judah

    2016-04-01

    Over the past 50 years, variances have been developed for characterizing the instabilities of precision clocks and oscillators. These instabilities are often modeled as nonstationary processes, and the variances have been shown to be well-behaved and to be unbiased, efficient descriptors of these types of processes. This paper presents a historical overview of the development of these variances. The time-domain and frequency-domain formulations are presented and their development is described. The strengths and weaknesses of these characterization metrics are discussed. These variances are also shown to be useful in other applications, such as in telecommunication.

  15. Nominal analysis of "variance".

    PubMed

    Weiss, David J

    2009-08-01

    Nominal responses are the natural way for people to report actions or opinions. Because nominal responses do not generate numerical data, they have been underutilized in behavioral research. On those occasions in which nominal responses are elicited, the responses are customarily aggregated over people or trials so that large-sample statistics can be employed. A new analysis is proposed that directly associates differences among responses with particular sources in factorial designs. A pair of nominal responses either matches or does not; when responses do not match, they vary. That analogue to variance is incorporated in the nominal analysis of "variance" (NANOVA) procedure, wherein the proportions of matches associated with sources play the same role as do sums of squares in an ANOVA. The NANOVA table is structured like an ANOVA table. The significance levels of the N ratios formed by comparing proportions are determined by resampling. Fictitious behavioral examples featuring independent groups and repeated measures designs are presented. A Windows program for the analysis is available.

  16. Budget variance analysis using RVUs.

    PubMed

    Berlin, M F; Budzynski, M R

    1998-01-01

    This article details the use of variance analysis as a management tool to evaluate the financial health of the practice. A common financial tool for administrators has been a simple calculation measuring the difference between actual and budgeted financials. Standard cost accounting provides a methodology known as variance analysis to better understand actual vs. budgeted financial streams. The standard variance analysis has been modified by applying relative value units (RVUs) as standards for the practice.

  17. Analysis of Variance: Variably Complex

    ERIC Educational Resources Information Center

    Drummond, Gordon B.; Vowler, Sarah L.

    2012-01-01

    These authors have previously described how to use the "t" test to compare two groups. In this article, they describe the use of a different test, analysis of variance (ANOVA) to compare more than two groups. ANOVA is a test of group differences: do at least two of the means differ from each other? ANOVA assumes (1) normal distribution of…

  1. Warped functional analysis of variance.

    PubMed

    Gervini, Daniel; Carter, Patrick A

    2014-09-01

    This article presents an Analysis of Variance model for functional data that explicitly incorporates phase variability through a time-warping component, allowing for a unified approach to estimation and inference in the presence of amplitude and time variability. The focus is on single-random-factor models, but the approach can be easily generalized to more complex ANOVA models. The behavior of the estimators is studied by simulation, and an application to the analysis of growth curves of flour beetles is presented. Although the model assumes a smooth latent process behind the observed trajectories, smoothness of the observed data is not required; the method can be applied to irregular time grids, which are common in longitudinal studies.

  2. Variance analysis. Part I, Extending flexible budget variance analysis to acuity.

    PubMed

    Finkler, S A

    1991-01-01

    The author reviews the concepts of flexible budget variance analysis, including the price, quantity, and volume variances generated by that technique. He also introduces the concept of acuity variance and provides direction on how such a variance measure can be calculated. Part II in this two-part series on variance analysis will look at how personal computers can be useful in the variance analysis process.

  3. Multireader multicase variance analysis for binary data.

    PubMed

    Gallas, Brandon D; Pennello, Gene A; Myers, Kyle J

    2007-12-01

    Multireader multicase (MRMC) variance analysis has become widely utilized to analyze observer studies for which the summary measure is the area under the receiver operating characteristic (ROC) curve. We extend MRMC variance analysis to binary data and also to generic study designs in which every reader may not interpret every case. A subset of the fundamental moments central to MRMC variance analysis of the area under the ROC curve (AUC) is found to be required. Through multiple simulation configurations, we compare our unbiased variance estimates to naïve estimates across a range of study designs, average percent correct, and numbers of readers and cases.

  4. Nonorthogonal Analysis of Variance Programs: An Evaluation.

    ERIC Educational Resources Information Center

    Hosking, James D.; Hamer, Robert M.

    1979-01-01

    Six computer programs for four methods of nonorthogonal analysis of variance are compared for capabilities, accuracy, cost, transportability, quality of documentation, associated computational capabilities, and ease of use: OSIRIS; SAS; SPSS; MANOVA; BMDP2V; and MULTIVARIANCE. (CTM)

  5. Formative Use of Intuitive Analysis of Variance

    ERIC Educational Resources Information Center

    Trumpower, David L.

    2013-01-01

    Students' informal inferential reasoning (IIR) is often inconsistent with the normative logic underlying formal statistical methods such as Analysis of Variance (ANOVA), even after instruction. In two experiments reported here, students' IIR was assessed using an intuitive ANOVA task at the beginning and end of a statistics course. In…

  6. Uses and abuses of analysis of variance.

    PubMed Central

    Evans, S J

    1983-01-01

    Analysis of variance is a term often quoted to explain the analysis of data in experiments and clinical trials. The relevance of its methodology to clinical trials is shown and an explanation of the principles of the technique is given. The assumptions necessary are examined and the problems caused by their violation are discussed. The dangers of misuse are given with some suggestions for alternative approaches.

  7. Allan Deviation Plot as a Tool for Quartz-Enhanced Photoacoustic Sensors Noise Analysis.

    PubMed

    Giglio, Marilena; Patimisco, Pietro; Sampaolo, Angelo; Scamarcio, Gaetano; Tittel, Frank K; Spagnolo, Vincenzo

    2016-04-01

    We report here on the use of the Allan deviation plot to analyze the long-term stability of a quartz-enhanced photoacoustic (QEPAS) gas sensor. The Allan plot provides information about the optimum averaging time for the QEPAS signal and allows the prediction of its ultimate detection limit. The Allan deviation can also be used to determine the main sources of noise coming from the individual components of the sensor. Quartz tuning fork thermal noise dominates for integration times up to 275 s, whereas at longer averaging times, the main contribution to the sensor noise originates from laser power instabilities.
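
    Operationally, the optimum averaging time read off an Allan deviation plot is simply the abscissa of its minimum; a trivial helper (ours, for illustration only) makes the idea concrete:

```python
import numpy as np

def optimum_averaging_time(taus, adev):
    """Locate the minimum of an Allan deviation curve.

    Up to this integration time, averaging improves the detection limit
    (the tuning-fork thermal-noise regime); beyond it, drift such as
    laser power instability dominates and further averaging hurts.
    """
    i = int(np.argmin(adev))
    return taus[i], adev[i]  # (optimum averaging time, ultimate sensitivity)
```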

  8. Analysis of Variance of Multiply Imputed Data.

    PubMed

    van Ginkel, Joost R; Kroonenberg, Pieter M

    2014-01-01

    As a procedure for handling missing data, multiple imputation consists of estimating the missing data multiple times to create several complete versions of an incomplete data set. All these data sets are analyzed by the same statistical procedure, and the results are pooled for interpretation. So far, no explicit rules for pooling F-tests of (repeated-measures) analysis of variance have been defined. In this paper we outline the appropriate procedure for pooling the results of analysis of variance for multiply imputed data sets. It involves both reformulation of the ANOVA model as a regression model using effect coding of the predictors and application of already existing combination rules for regression models. The proposed procedure is illustrated using three example data sets. The pooled results of these three examples provide plausible F- and p-values.

  9. Analysis of variance of microarray data.

    PubMed

    Ayroles, Julien F; Gibson, Greg

    2006-01-01

    Analysis of variance (ANOVA) is an approach used to identify differentially expressed genes in complex experimental designs. It is based on testing for the significance of the magnitude of effect of two or more treatments taking into account the variance within and between treatment classes. ANOVA is a highly flexible analytical approach that allows investigators to simultaneously assess the contributions of multiple factors to gene expression variation, including technical (dye, batch) effects and biological (sex, genotype, drug, time) ones, as well as interactions between factors. This chapter provides an overview of the theory of linear mixed modeling and the sequence of steps involved in fitting gene-specific models, and discusses essential features of experimental design. Commercial and open-source software for performing ANOVA is widely available.

  10. Automatic variance analysis of multistage care pathways.

    PubMed

    Li, Xiang; Liu, Haifeng; Zhang, Shilei; Mei, Jing; Xie, Guotong; Yu, Yiqin; Li, Jing; Lakshmanan, Geetika T

    2014-01-01

    A care pathway (CP) is a standardized process that consists of multiple care stages, clinical activities and their relations, aimed at ensuring and enhancing the quality of care. However, actual care may deviate from the planned CP, and analysis of these deviations can help clinicians refine the CP and reduce medical errors. In this paper, we propose a CP variance analysis method to automatically identify the deviations between actual patient traces in electronic medical records (EMR) and a multistage CP. As the care stage information is usually unavailable in EMR, we first align every trace with the CP using a hidden Markov model. From the aligned traces, we report three types of deviations for every care stage: additional activities, absent activities and violated constraints, which are identified by using the techniques of temporal logic and binomial tests. The method has been applied to a CP for the management of congestive heart failure and real-world EMR, providing meaningful evidence for the further improvement of care quality.

  11. Correcting an analysis of variance for clustering.

    PubMed

    Hedges, Larry V; Rhoads, Christopher H

    2011-02-01

    A great deal of educational and social data arises from cluster sampling designs where clusters involve schools, classrooms, or communities. A mistake that is sometimes encountered in the analysis of such data is to ignore the effect of clustering and analyse the data as if it were based on a simple random sample. This typically leads to an overstatement of the precision of results and too liberal conclusions about precision and statistical significance of mean differences. This paper gives simple corrections to the test statistics that would be computed in an analysis of variance if clustering were (incorrectly) ignored. The corrections are multiplicative factors depending on the total sample size, the cluster size, and the intraclass correlation structure. For example, the corrected F statistic has Fisher's F distribution with reduced degrees of freedom. The corrected statistic reduces to the F statistic computed by ignoring clustering when the intraclass correlations are zero. It reduces to the F statistic computed using cluster means when the intraclass correlations are unity, and it is in between otherwise. A similar adjustment to the usual statistic for testing a linear contrast among group means is described.
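
    To convey the flavor of such corrections (this is the generic design-effect argument, not the paper's exact multiplicative factors or degree-of-freedom adjustments): with N observations in clusters of size n and intraclass correlation ρ, the variance of a grand mean is inflated by 1 + (n−1)ρ, so a test statistic computed ignoring clustering is too large by roughly the square root of that factor:

```latex
\operatorname{Var}(\bar{y}) = \frac{\sigma^2}{N}\left[ 1 + (n-1)\rho \right],
\qquad
t_{\text{adj}} \approx \frac{t_{\text{naive}}}{\sqrt{1 + (n-1)\rho}}
```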

  12. Applications of non-parametric statistics and analysis of variance on sample variances

    NASA Technical Reports Server (NTRS)

    Myers, R. H.

    1981-01-01

    Nonparametric methods that are available for NASA-type applications are discussed. An attempt is made to survey what can be used, to offer recommendations as to when each would be applicable, and to compare the methods, when possible, with the usual normal-theory procedures that are available for the Gaussian analog. It is important here to point out the hypotheses that are being tested, the assumptions that are being made, and the limitations of the nonparametric procedures. The appropriateness of performing analysis of variance on sample variances is also discussed and studied. This procedure is followed in several NASA simulation projects. On the surface it would appear to be a reasonably sound procedure. However, the difficulties involved center around the normality problem and the basic homogeneous-variance assumption that is made in the usual analysis of variance setting. These difficulties are discussed and guidelines are given for using the methods.

  13. Functional analysis of variance for association studies.

    PubMed

    Vsevolozhskaya, Olga A; Zaykin, Dmitri V; Greenwood, Mark C; Wei, Changshuai; Lu, Qing

    2014-01-01

    While progress has been made in identifying common genetic variants associated with human diseases, for most common complex diseases the identified genetic variants account for only a small proportion of heritability. Challenges remain in finding additional unknown genetic variants predisposing to complex diseases. With the advances in next-generation sequencing technologies, sequencing studies have become commonplace in genetic research. The ongoing exome-sequencing and whole-genome-sequencing studies generate a massive amount of sequencing variants and allow researchers to comprehensively investigate their role in human diseases. The discovery of new disease-associated variants can be enhanced by utilizing powerful and computationally efficient statistical methods. In this paper, we propose a functional analysis of variance (FANOVA) method for testing the association of sequence variants in a genomic region with a qualitative trait. FANOVA has a number of advantages: (1) it tests for a joint effect of gene variants, including both common and rare; (2) it fully utilizes linkage disequilibrium and genetic position information; and (3) it allows for either protective or risk-increasing causal variants. Through simulations, we show that FANOVA outperforms two popular methods, SKAT and a previously proposed method based on functional linear models (FLM), especially when the sample size of a study is small and/or the sequence variants have low to moderate effects. We conduct an empirical study by applying the three methods (FANOVA, SKAT and FLM) to sequencing data from the Dallas Heart Study. While SKAT and FLM detected ANGPTL4 and ANGPTL3, respectively, as associated with obesity, FANOVA was able to identify both genes as associated with obesity.

  14. Wave propagation analysis using the variance matrix.

    PubMed

    Sharma, Richa; Ivan, J Solomon; Narayanamurthy, C S

    2014-10-01

    The propagation of a coherent laser wave-field through a pseudo-random phase plate is studied using the variance matrix estimated from Shack-Hartmann wavefront sensor data. The uncertainty principle is used as a tool in discriminating the data obtained from the Shack-Hartmann wavefront sensor. Quantities of physical interest such as the twist parameter, and the symplectic eigenvalues, are estimated from the wavefront sensor measurements. A distance measure between two variance matrices is introduced and used to estimate the spatial asymmetry of a wave-field in the experiment. The estimated quantities are then used to compare a distorted wave-field with its undistorted counterpart.

  15. Analysis of variance of designed chromatographic data sets: The analysis of variance-target projection approach.

    PubMed

    Marini, Federico; de Beer, Dalene; Joubert, Elizabeth; Walczak, Beata

    2015-07-31

    Direct application of popular approaches, e.g., Principal Component Analysis (PCA) or Partial Least Squares (PLS), to chromatographic data originating from a well-designed experimental study including more than one factor is not recommended. In the case of a well-designed experiment involving two or more factors (crossed or nested), the data are usually decomposed into the contributions associated with the studied factors (and with their interactions), and the individual effect matrices are then analyzed using, e.g., PCA, as in the case of ASCA (analysis of variance combined with simultaneous component analysis). As an alternative to the ASCA method, we propose the application of PLS followed by target projection (TP), which allows a one-factor representation of the model for each column in the design dummy matrix. PLS is applied after proper deflation of the experimental matrix, i.e., to what are called the residuals under the reduced ANOVA model. The proposed approach (ANOVA-TP) is well suited for the study of designed chromatographic data of complex samples. It allows testing the statistical significance of the studied effects and 'biomarker' identification, and enables straightforward visualization and accurate estimation of between- and within-class variance. The proposed approach has been successfully applied to a case study aimed at evaluating the effect of pasteurization on the concentrations of various phenolic constituents of rooibos tea of different quality grades, and its outcomes have been compared to those of ASCA.

  16. A Computer Program to Determine Reliability Using Analysis of Variance

    ERIC Educational Resources Information Center

    Burns, Edward

    1976-01-01

    A computer program, written in Fortran IV, is described which assesses reliability by using analysis of variance. It produces a complete analysis of variance table in addition to reliability coefficients for unadjusted and adjusted data as well as the intraclass correlation for m subjects and n items. (Author)

  17. Relating the Hadamard Variance to MCS Kalman Filter Clock Estimation

    NASA Technical Reports Server (NTRS)

    Hutsell, Steven T.

    1996-01-01

    The Global Positioning System (GPS) Master Control Station (MCS) currently makes significant use of the Allan Variance. This two-sample variance has proven excellent as a handy, understandable tool, both for time domain analysis of GPS cesium frequency standards, and for fine tuning the MCS's state estimation of these atomic clocks. The Allan Variance does not explicitly converge for noise types with alpha less than or equal to minus 3, and it can be greatly affected by frequency drift. Because GPS rubidium frequency standards exhibit non-trivial aging and aging noise characteristics, the basic Allan Variance analysis must be augmented in order to (a) compensate for a dynamic frequency drift, and (b) characterize two additional noise types, specifically alpha = minus 3 and alpha = minus 4. As the GPS program progresses, we will utilize a larger percentage of rubidium frequency standards than ever before. Hence, GPS rubidium clock characterization will require more attention than ever before. The three-sample variance, commonly referred to as a renormalized Hadamard Variance, is unaffected by linear frequency drift, converges for alpha greater than minus 5, and thus has utility for modeling noise in GPS rubidium frequency standards. This paper demonstrates the potential of Hadamard Variance analysis in GPS operations, and presents an equation that relates the Hadamard Variance to the MCS's Kalman filter process noises.
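
    For reference, the three-sample Hadamard variance is built from second differences of adjacent fractional frequency averages, which is why a linear frequency drift cancels exactly:

```latex
H\sigma_y^2(\tau) = \frac{1}{6}
  \left\langle \left( \bar{y}_{k+2} - 2\bar{y}_{k+1} + \bar{y}_k \right)^2 \right\rangle
```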

  18. Variance analysis. Part II, The use of computers.

    PubMed

    Finkler, S A

    1991-09-01

    This is the second in a two-part series on variance analysis. In the first article (JONA, July/August 1991), the author discussed flexible budgeting, including the calculation of price, quantity, volume, and acuity variances. In this second article, the author focuses on the use of computers by nurse managers to aid in the process of calculating, understanding, and justifying variances.

  19. Uncovering hidden variance: pair-wise SNP analysis accounts for additional variance in nicotine dependence

    PubMed Central

    Culverhouse, Robert C.; Saccone, Nancy L.; Stitzel, Jerry A.; Wang, Jen C.; Steinbach, Joseph H.; Goate, Alison M.; Schwantes-An, Tae-Hwi; Grucza, Richard A.; Stevens, Victoria L.; Bierut, Laura J.

    2010-01-01

    Results from genome-wide association studies of complex traits account for only a modest proportion of the trait variance predicted to be due to genetics. We hypothesize that joint analysis of polymorphisms may account for more variance. We evaluated this hypothesis on a case–control smoking phenotype by examining pairs of nicotinic receptor single-nucleotide polymorphisms (SNPs) using the Restricted Partition Method (RPM) on data from the Collaborative Genetic Study of Nicotine Dependence (COGEND). We found evidence of joint effects that increase explained variance. Four signals identified in COGEND were testable in independent American Cancer Society (ACS) data, and three of the four signals replicated. Our results highlight two important lessons: joint effects that increase the explained variance are not limited to loci displaying substantial main effects, and joint effects need not display a significant interaction term in a logistic regression model. These results suggest that the joint analyses of variants may indeed account for part of the genetic variance left unexplained by single SNP analyses. Methodologies that limit analyses of joint effects to variants that demonstrate association in single SNP analyses, or require a significant interaction term, will likely miss important joint effects.

  1. An Analysis of Variance Framework for Matrix Sampling.

    ERIC Educational Resources Information Center

    Sirotnik, Kenneth

    Significant cost savings can be achieved with the use of matrix sampling in estimating population parameters from psychometric data. The statistical design is intuitively simple, using the framework of the two-way classification analysis of variance technique. For example, the mean and variance are derived from the performance of a certain grade…

  2. Determining Sample Sizes for Precise Contrast Analysis with Heterogeneous Variances

    ERIC Educational Resources Information Center

    Jan, Show-Li; Shieh, Gwowen

    2014-01-01

    The analysis of variance (ANOVA) is one of the most frequently used statistical analyses in practical applications. Accordingly, the single and multiple comparison procedures are frequently applied to assess the differences among mean effects. However, the underlying assumption of homogeneous variances may not always be tenable. This study…

  3. Cyclostationary analysis with logarithmic variance stabilisation

    NASA Astrophysics Data System (ADS)

    Borghesani, Pietro; Shahriar, Md Rifat

    2016-03-01

    Second order cyclostationary (CS2) components in vibration or acoustic emission signals are typical symptoms of a wide variety of faults in rotating and alternating mechanical systems. The square envelope spectrum (SES), obtained via the Hilbert transform of the original signal, forms the basis of the most common indicators used for detection of CS2 components. It has been shown that the SES is equivalent to an autocorrelation of the signal's discrete Fourier transform, and that CS2 components are a cause of high correlations in the frequency domain of the signal, thus resulting in peaks in the SES. Statistical tests have been proposed to determine whether peaks in the SES are likely to belong to a normal variability in the signal or are proper symptoms of CS2 components. Despite the need for automated fault recognition and the theoretical soundness of these tests, this approach to machine diagnostics has been mostly neglected in industrial applications. In fact, in a series of experimental applications, even with proper pre-whitening steps, it has been found that healthy machines may produce high spectral correlations and therefore result in a highly biased SES distribution, which can cause a series of false positives. In this paper a new envelope spectrum is defined, with the theoretical intent of rendering the hypothesis test variance-free. The newly proposed indicator proves unbiased in the case of multiple CS2 sources of spectral correlation, thus reducing the risk of false alarms.

  4. Wavelet variance analysis for random fields on a regular lattice.

    PubMed

    Mondal, Debashis; Percival, Donald B

    2012-02-01

    There has been considerable recent interest in using wavelets to analyze time series and images that can be regarded as realizations of certain 1-D and 2-D stochastic processes on a regular lattice. Wavelets give rise to the concept of the wavelet variance (or wavelet power spectrum), which decomposes the variance of a stochastic process on a scale-by-scale basis. The wavelet variance has been applied to a variety of time series, and a statistical theory for estimators of this variance has been developed. While there have been applications of the wavelet variance in the 2-D context (in particular, in works by Unser in 1995 on wavelet-based texture analysis for images and by Lark and Webster in 2004 on analysis of soil properties), a formal statistical theory for such analysis has been lacking. In this paper, we develop the statistical theory by generalizing and extending some of the approaches developed for time series, thus leading to a large-sample theory for estimators of 2-D wavelet variances. We apply our theory to simulated data from Gaussian random fields with exponential covariances and from fractional Brownian surfaces. We demonstrate that the wavelet variance is potentially useful for texture discrimination. We also use our methodology to analyze images of four types of clouds observed over the southeast Pacific Ocean.

  5. Analysis of Variance: What Is Your Statistical Software Actually Doing?

    ERIC Educational Resources Information Center

    Li, Jian; Lomax, Richard G.

    2011-01-01

    Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs: mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…

  6. On variance estimate for covariate adjustment by propensity score analysis.

    PubMed

    Zou, Baiming; Zou, Fei; Shuster, Jonathan J; Tighe, Patrick J; Koch, Gary G; Zhou, Haibo

    2016-09-10

    Propensity score (PS) methods have been used extensively to adjust for confounding factors in the statistical analysis of observational data in comparative effectiveness research. There are four major PS-based adjustment approaches: PS matching, PS stratification, covariate adjustment by PS, and PS-based inverse probability weighting. Though covariate adjustment by PS is one of the most frequently used PS-based methods in clinical research, the conventional variance estimation of the treatment effects estimate under covariate adjustment by PS is biased. As Stampf et al. have shown, this bias in variance estimation is likely to lead to invalid statistical inference and could result in erroneous public health conclusions (e.g., food and drug safety and adverse events surveillance). To address this issue, we propose a two-stage analytic procedure to develop a valid variance estimator for the covariate adjustment by PS analysis strategy. We also carry out a simple empirical bootstrap resampling scheme. Both proposed procedures are implemented in an R function for public use. Extensive simulation results demonstrate the bias in the conventional variance estimator and show that both proposed variance estimators offer valid estimates for the true variance, and they are robust to complex confounding structures. The proposed methods are illustrated for a post-surgery pain study.
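
    A minimal sketch of the resampling route (Python; this mirrors the generic empirical bootstrap the authors mention, not their two-stage analytic estimator, and all names are ours). Refitting the propensity model inside each replicate is the point: it lets the PS estimation error propagate into the variance.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def bootstrap_se_ps_adjustment(X, treat, y, n_boot=500, seed=0):
    """Bootstrap SE of a treatment effect under covariate adjustment by PS.

    X : (n, p) covariates; treat : (n,) 0/1 treatment; y : (n,) outcome
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    effects = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        Xb, tb, yb = X[idx], treat[idx], y[idx]
        if tb.min() == tb.max():  # skip degenerate resamples
            continue
        # Re-estimate the propensity score within the bootstrap replicate.
        ps = LogisticRegression(max_iter=1000).fit(Xb, tb).predict_proba(Xb)[:, 1]
        # Outcome model: y ~ treatment + propensity score.
        fit = LinearRegression().fit(np.column_stack([tb, ps]), yb)
        effects.append(fit.coef_[0])
    return float(np.std(effects, ddof=1))
```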

  7. Exploratory Multivariate Analysis of Variance: Contrasts and Variables.

    ERIC Educational Resources Information Center

    Barcikowski, Robert S.; Elliott, Ronald S.

    The contribution of individual variables to overall multivariate significance in a multivariate analysis of variance (MANOVA) is investigated using a combination of canonical discriminant analysis and Roy-Bose simultaneous confidence intervals. Difficulties with this procedure are discussed, and its advantages are illustrated using examples based…

  8. Analysis of Variance Components for Genetic Markers with Unphased Genotypes.

    PubMed

    Wang, Tao

    2016-01-01

    An ANOVA-type general multi-allele (GMA) model was proposed in Wang (2014) for analysis of variance components for quantitative trait loci or genetic markers with phased or unphased genotypes. In this study, by applying the GMA model, we further examine estimation of the genetic variance components for genetic markers with unphased genotypes based on a random sample from a study population. In the one-locus and two-locus cases, we first derive the least squares estimates (LSE) of model parameters in fitting the GMA model. Then we construct estimators of the genetic variance components for one marker locus in a Hardy-Weinberg disequilibrium population and for two marker loci in an equilibrium population. Meanwhile, we explore the difference between the classical general linear model (GLM) and GMA-based approaches in association analysis of genetic markers with quantitative traits. We show that the GMA model can retain the same partition of the genetic variance components as the traditional Fisher's ANOVA model, while the GLM cannot. We clarify that the standard F-statistics based on the partial reductions in sums of squares from the GLM for testing the fixed allelic effects could be inadequate for testing the existence of the variance component when allelic interactions are present. We point out that the GMA model can reduce the confounding between the allelic effects and allelic interactions, at least for independent alleles. As a result, the GMA model could be more beneficial than the GLM for detecting allelic interactions.

  9. Intuitive Analysis of Variance-- A Formative Assessment Approach

    ERIC Educational Resources Information Center

    Trumpower, David

    2013-01-01

    This article describes an assessment activity that can show students how much they intuitively understand about statistics, but also alert them to common misunderstandings. How the activity can be used formatively to help improve students' conceptual understanding of analysis of variance is discussed. (Contains 1 figure and 1 table.)

  10. Some Computer Programs for Selected Problems in Analysis of Variance.

    ERIC Educational Resources Information Center

    Edwards, Lynne K.; Bland, Patricia C.

    Selected examples using the statistical packages Statistical Package for the Social Sciences (SPSS), the Statistical Analysis System (SAS), and BMDP are presented to facilitate their use and encourage appropriate uses in: (1) a hierarchical design; (2) a confounded factorial design; and (3) variance component estimation procedures. To illustrate…

  11. Allan Sillitoe's Lonely Hero.

    ERIC Educational Resources Information Center

    Obst, Jennifer

    1969-01-01

    The hero of Allan Sillitoe's novel, "The Loneliness of the Long-Distance Runner," differs in many ways from the typical modern existential hero. Unlike the anti-hero, Smith is not searching for values, for he understands what life is and accepts it. He follows a code of honesty and hates "phonies." He is aware of class distinctions and sees the…

  12. Analysis of variance in spectroscopic imaging data from human tissues.

    PubMed

    Kwak, Jin Tae; Reddy, Rohith; Sinha, Saurabh; Bhargava, Rohit

    2012-01-17

    The analysis of cell types and disease using Fourier transform infrared (FT-IR) spectroscopic imaging is promising. The approach lacks an appreciation of the limits of performance for the technology, however, which limits both researcher efforts in improving the approach and acceptance by practitioners. One factor limiting performance is the variance in data arising from biological diversity, measurement noise, or other sources. Here we identify the sources of variation by first employing a high-throughput sampling platform of tissue microarrays (TMAs) to record a sufficiently large and diverse data set. Next, a comprehensive set of analysis of variance (ANOVA) models is employed to analyze the data. By estimating the portions of explained variation, we quantify the primary sources of variation, find the most discriminating spectral metrics, and identify the aspects of the technology to improve. The study provides a framework for the development of protocols for clinical translation and provides guidelines to design statistically valid studies in the spectroscopic analysis of tissue.

  13. Analysis of Variance in the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    Deloach, Richard

    2010-01-01

    This paper is a tutorial introduction to the analysis of variance (ANOVA), intended as a reference for aerospace researchers who are being introduced to the analytical methods of the Modern Design of Experiments (MDOE), or who may have other opportunities to apply this method. One-way and two-way fixed-effects ANOVA, as well as random effects ANOVA, are illustrated in practical terms that will be familiar to most practicing aerospace researchers.
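
    To make the one-way fixed-effects case concrete, here is a minimal sketch in Python; the three groups and their values are hypothetical, not taken from the tutorial itself:

        import numpy as np
        from scipy import stats

        groups = [np.array([4.1, 3.8, 4.4, 4.0]),
                  np.array([4.9, 5.2, 4.7, 5.0]),
                  np.array([3.9, 4.2, 4.1, 4.3])]

        n_total = sum(len(g) for g in groups)
        grand_mean = np.concatenate(groups).mean()

        # Partition the total sum of squares into between-group (treatment)
        # and within-group (error) pieces.
        ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
        ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

        df_between = len(groups) - 1
        df_within = n_total - len(groups)

        F = (ss_between / df_between) / (ss_within / df_within)
        p = stats.f.sf(F, df_between, df_within)  # upper-tail p-value
        print(f"F({df_between},{df_within}) = {F:.2f}, p = {p:.4f}")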

  14. Two-dimensional finite-element temperature variance analysis

    NASA Technical Reports Server (NTRS)

    Heuser, J. S.

    1972-01-01

    The finite element method is extended to thermal analysis by forming a variance analysis of temperature results so that the sensitivity of predicted temperatures to uncertainties in input variables is determined. The temperature fields within a finite number of elements are described in terms of the temperatures of vertices and the variational principle is used to minimize the integral equation describing thermal potential energy. A computer calculation yields the desired solution matrix of predicted temperatures and provides information about initial thermal parameters and their associated errors. Sample calculations show that all predicted temperatures are most affected by temperature values along fixed boundaries; more accurate specification of these temperatures reduces errors in thermal calculations.

  15. Variance reduction in Monte Carlo analysis of rarefied gas diffusion

    NASA Technical Reports Server (NTRS)

    Perlmutter, M.

    1972-01-01

    The present analysis uses the Monte Carlo method to solve the problem of rarefied diffusion between parallel walls. The diffusing molecules are evaporated or emitted from one of two parallel walls and diffused through another molecular species. The analysis treats the diffusing molecule as undergoing a Markov random walk, and the local macroscopic properties are found as the expected value of the random variable, the random walk payoff. By biasing the transition probabilities and changing the collision payoffs, the expected Markov walk payoff is retained but its variance is reduced, so that the Monte Carlo result has a much smaller error.
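
    The following minimal sketch illustrates the same variance-reduction idea on a toy absorbed random walk rather than on the diffusion problem itself: the transition probabilities are biased toward the absorbing wall of interest and the payoff is reweighted by the likelihood ratio, so the expectation is unchanged while the variance shrinks.

        import numpy as np

        rng = np.random.default_rng(0)
        a, b = 2, 8        # absorb at -a (payoff 0) or at +b (payoff 1)

        def one_walk(p_up):
            """Return (payoff, likelihood-ratio weight) for one absorbed walk."""
            x, weight = 0, 1.0
            while -a < x < b:
                if rng.random() < p_up:
                    x += 1
                    weight *= 0.5 / p_up          # correct for the biased step
                else:
                    x -= 1
                    weight *= 0.5 / (1.0 - p_up)
            return (1.0 if x == b else 0.0), weight

        # Exact hitting probability for the unbiased walk is a/(a+b) = 0.2.
        for p_up in (0.5, 0.6):  # unbiased walk, then a mildly biased one
            vals = np.array([pay * w for pay, w in
                             (one_walk(p_up) for _ in range(20000))])
            print(f"p_up={p_up}: estimate={vals.mean():.4f}, "
                  f"variance={vals.var():.5f}")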

  16. FMRI group analysis combining effect estimates and their variances

    PubMed Central

    Chen, Gang; Saad, Ziad S.; Nath, Audrey R.; Beauchamp, Michael S.; Cox, Robert W.

    2012-01-01

    Conventional functional magnetic resonance imaging (FMRI) group analysis makes two key assumptions that are not always justified. First, the data from each subject is condensed into a single number per voxel, under the assumption that within-subject variance for the effect of interest is the same across all subjects or is negligible relative to the cross-subject variance. Second, it is assumed that all data values are drawn from the same Gaussian distribution with no outliers. We propose an approach that does not make such strong assumptions, and present a computationally efficient frequentist approach to FMRI group analysis, which we term mixed-effects multilevel analysis (MEMA), that incorporates both the variability across subjects and the precision estimate of each effect of interest from individual subject analyses. On average, the more accurate tests result in higher statistical power, especially when conventional variance assumptions do not hold, or in the presence of outliers. In addition, various heterogeneity measures are available with MEMA that may assist the investigator in further improving the modeling. Our method allows group effect t-tests and comparisons among conditions and among groups. In addition, it has the capability to incorporate subject-specific covariates such as age, IQ, or behavioral data. Simulations were performed to illustrate power comparisons and the capability of controlling type I errors among various significance testing methods, and the results indicated that the testing statistic we adopted struck a good balance between power gain and type I error control. Our approach is instantiated in an open-source, freely distributed program that may be used on any dataset stored in the Neuroimaging Informatics Technology Initiative (NIfTI) format. To date, the main impediment to more accurate testing that incorporates both within- and cross-subject variability has been the high computational cost. Our efficient implementation makes this approach practical.
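
    A minimal sketch of the central ingredient, precision weighting of per-subject effect estimates by their within-subject variances, shown for a single voxel; the numbers and the crude moment-based estimate of the cross-subject variance are illustrative stand-ins for MEMA's actual estimation:

        import numpy as np

        beta = np.array([2.1, 1.7, 2.8, 0.9, 2.3])   # subject effect estimates
        var_w = np.array([0.4, 0.3, 0.9, 0.2, 0.5])  # within-subject variances

        # Crude moment estimate of the cross-subject variance tau^2.
        tau2 = max(0.0, beta.var(ddof=1) - var_w.mean())
        w = 1.0 / (var_w + tau2)                     # precision weights

        group_effect = np.sum(w * beta) / np.sum(w)
        se = np.sqrt(1.0 / np.sum(w))
        print(f"group effect = {group_effect:.3f} +/- {se:.3f} "
              f"(z = {group_effect / se:.2f})")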

  17. Analysis of variance of thematic mapping experiment data.

    USGS Publications Warehouse

    Rosenfield, G.H.

    1981-01-01

    As an example of the methodology, data from an experiment using three scales of land-use and land-cover mapping have been analyzed. The binomial proportions of correct interpretations have been analyzed both untransformed and transformed by the arcsine and the logit transformations. A weighted analysis of variance adjustment has been used. There is evidence of a significant difference among the three scales of mapping (1:24 000, 1:100 000 and 1:250 000) using the transformed data. Multiple range tests showed that all three scales are different for the arcsine-transformed data.
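
    For reference, a minimal sketch of the two transformations and the corresponding weights for a weighted analysis of variance; the counts are hypothetical, not those of the mapping experiment:

        import numpy as np

        correct = np.array([78, 64, 51])   # correct interpretations per scale
        n = np.array([90, 90, 90])         # interpretations attempted per scale
        p = correct / n

        arcsine = 2.0 * np.arcsin(np.sqrt(p))  # approx. variance 1/n, free of p
        logit = np.log(p / (1.0 - p))          # approx. variance 1/(n p (1-p))

        # Weights for a weighted ANOVA are the reciprocal variances.
        w_arcsine = n.astype(float)
        w_logit = n * p * (1.0 - p)
        print(arcsine, logit)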

  18. Variance reduction in Monte Carlo analysis of rarefied gas diffusion.

    NASA Technical Reports Server (NTRS)

    Perlmutter, M.

    1972-01-01

    The problem of rarefied diffusion between parallel walls is solved using the Monte Carlo method. The diffusing molecules are evaporated or emitted from one of the two parallel walls and diffuse through another molecular species. The Monte Carlo analysis treats the diffusing molecule as undergoing a Markov random walk, and the local macroscopic properties are found as the expected value of the random variable, the random walk payoff. By biasing the transition probabilities and changing the collision payoffs, the expected Markov walk payoff is retained but its variance is reduced so that the Monte Carlo result has a much smaller error.

  19. Correct use of repeated measures analysis of variance.

    PubMed

    Park, Eunsik; Cho, Meehye; Ki, Chang-Seok

    2009-02-01

    In biomedical research, researchers frequently use statistical procedures such as the t-test, standard analysis of variance (ANOVA), or the repeated measures ANOVA to compare means between the groups of interest. These procedures are frequently misused because the experimental conditions or the statistical assumptions necessary to apply them are not fully taken into consideration. In this paper, we demonstrate the correct use of repeated measures ANOVA to prevent or minimize ethical or scientific problems due to its misuse. We also describe the appropriate use of multiple comparison tests for follow-up analysis in repeated measures ANOVA. Finally, we demonstrate the use of repeated measures ANOVA by using real data and the statistical software package SPSS (SPSS Inc., USA).

  20. Analysis of variance of an underdetermined geodetic displacement problem

    SciTech Connect

    Darby, D.

    1982-06-01

    It has been suggested recently that point displacements in a free geodetic network traversing a strike-slip fault may be estimated from repeated surveys by minimizing only those displacement components normal to the strike. It is desirable to justify this procedure. We construct, from estimable quantities, a deformation parameter which is an F-statistic of the type occurring in the analysis of variance of linear models not of full rank. A test of its significance provides the criterion to justify the displacement solution. It is also interesting to study its behaviour as one varies the supposed strike of the fault. Justification of a displacement solution using data from a strike-slip fault is found, but not for data from a rift valley. The technique can be generalized to more complex patterns of deformation such as those expected near the end-zone of a fault in a dislocation model.

  1. The use of analysis of variance procedures in biological studies

    USGS Publications Warehouse

    Williams, B.K.

    1987-01-01

    The analysis of variance (ANOVA) is widely used in biological studies, yet there remains considerable confusion among researchers about the interpretation of hypotheses being tested. Ambiguities arise when statistical designs are unbalanced, and in particular when not all combinations of design factors are represented in the data. This paper clarifies the relationship among hypothesis testing, statistical modelling and computing procedures in ANOVA for unbalanced data. A simple two-factor fixed effects design is used to illustrate three common parametrizations for ANOVA models, and some associations among these parametrizations are developed. Biologically meaningful hypotheses for main effects and interactions are given in terms of each parametrization, and procedures for testing the hypotheses are described. The standard statistical computing procedures in ANOVA are given along with their corresponding hypotheses. Throughout the development unbalanced designs are assumed and attention is given to problems that arise with missing cells.

  2. Beyond the GUM: variance-based sensitivity analysis in metrology

    NASA Astrophysics Data System (ADS)

    Lira, I.

    2016-07-01

    Variance-based sensitivity analysis is a well established tool for evaluating the contribution of the uncertainties in the inputs to the uncertainty in the output of a general mathematical model. While the literature on this subject is quite extensive, it has not found widespread use in metrological applications. In this article we present a succinct review of the fundamentals of sensitivity analysis, in a form that should be useful to most people familiarized with the Guide to the Expression of Uncertainty in Measurement (GUM). Through two examples, it is shown that in linear measurement models, no new knowledge is gained by using sensitivity analysis that is not already available after the terms in the so-called ‘law of propagation of uncertainties’ have been computed. However, if the model behaves non-linearly in the neighbourhood of the best estimates of the input quantities—and if these quantities are assumed to be statistically independent—sensitivity analysis is definitely advantageous for gaining insight into how they can be ranked according to their importance in establishing the uncertainty of the measurand.

  3. Developing the Noncentrality Parameter for Calculating Group Sample Sizes in Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2011-01-01

    Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…

  4. Local variance for multi-scale analysis in geomorphometry

    PubMed Central

    Drăguţ, Lucian; Eisank, Clemens; Strasser, Thomas

    2011-01-01

    Increasing availability of high resolution Digital Elevation Models (DEMs) is leading to a paradigm shift regarding scale issues in geomorphometry, prompting new solutions to cope with multi-scale analysis and detection of characteristic scales. We tested the suitability of the local variance (LV) method, originally developed for image analysis, for multi-scale analysis in geomorphometry. The method consists of: 1) up-scaling land-surface parameters derived from a DEM; 2) calculating LV as the average standard deviation (SD) within a 3 × 3 moving window for each scale level; 3) calculating the rate of change of LV (ROC-LV) from one level to another, and 4) plotting values so obtained against scale levels. We interpreted peaks in the ROC-LV graphs as markers of scale levels where cells or segments match types of pattern elements characterized by (relatively) equal degrees of homogeneity. The proposed method has been applied to LiDAR DEMs in two test areas different in terms of roughness: low relief and mountainous, respectively. For each test area, scale levels for slope gradient, plan, and profile curvatures were produced at constant increments with either resampling (cell-based) or image segmentation (object-based). Visual assessment revealed homogeneous areas that convincingly associate into patterns of land-surface parameters well differentiated across scales. We found that the LV method performed better on scale levels generated through segmentation as compared to up-scaling through resampling. The results indicate that coupling multi-scale pattern analysis with delineation of morphometric primitives is possible. This approach could be further used for developing hierarchical classifications of landform elements. PMID:21779138
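
    A minimal sketch of the LV workflow under stated assumptions: a random correlated field stands in for a LiDAR-derived land-surface parameter, and block averaging stands in for the cell-based resampling of the scale levels.

        import numpy as np
        from scipy import ndimage

        def local_variance(grid):
            """Mean 3x3 moving-window standard deviation of a 2-D grid."""
            m = ndimage.uniform_filter(grid, size=3)
            m2 = ndimage.uniform_filter(grid * grid, size=3)
            sd = np.sqrt(np.clip(m2 - m * m, 0.0, None))
            return sd.mean()

        rng = np.random.default_rng(1)
        dem = rng.normal(size=(256, 256)).cumsum(axis=0).cumsum(axis=1)

        lv = []
        for factor in (1, 2, 4, 8, 16):   # scale levels via block averaging
            coarse = dem.reshape(256 // factor, factor,
                                 256 // factor, factor).mean(axis=(1, 3))
            lv.append(local_variance(coarse))

        roc_lv = 100.0 * np.diff(lv) / np.array(lv[:-1])  # ROC-LV in percent
        print(lv, roc_lv)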

  6. Analysis of Variance of Migmatite Composition II: Comparison of Two Areas.

    PubMed

    Ward, R F; Werner, S L

    1964-03-01

    To obtain comparison with previous results an analysis of variance was made on measurements of proportion of granite and country rock in a second Colorado migmatite. The distributional parameters (mean and variance) of both regions are similar, but the distributions of variance among the three levels of the nested design differ radically.

  7. Analysis of variance (ANOVA) models in lower extremity wounds.

    PubMed

    Reed, James F

    2003-06-01

    Consider a study in which 2 new treatments are being compared with a control group. One way to compare outcomes would simply be to compare the 2 treatments with the control and the 2 treatments against each other using 3 Student t tests. If we were to compare 4 treatment groups, then we would need 6 t tests. The difficulty with using multiple t tests is that as the number of groups increases, so does the likelihood of finding a difference between any pair of groups simply by chance when no real difference exists, which is by definition a Type I error. If we were to perform 3 separate t tests, each at alpha = .05, the experiment-wise error rate would increase to .14. As the number of multiple t tests increases, the experiment-wise error rate increases rather rapidly. The solution to this problem is to use analysis of variance (ANOVA) methods. Three basic ANOVA designs are reviewed, with hypothetical examples drawn from the literature to illustrate single-factor ANOVA, repeated measures ANOVA, and randomized block ANOVA. "No frills" SPSS or SAS code for each of these designs and the examples used are available from the author on request.

  8. Analysis of variance in neuroreceptor ligand imaging studies.

    PubMed

    Ko, Ji Hyun; Reilhac, Anthonin; Ray, Nicola; Rusjan, Pablo; Bloomfield, Peter; Pellecchia, Giovanna; Houle, Sylvain; Strafella, Antonio P

    2011-01-01

    Radioligand positron emission tomography (PET) with dual scan paradigms can provide valuable insight into changes in synaptic neurotransmitter concentration due to experimental manipulation. The residual t-test has been utilized to improve the sensitivity of the t-test in PET studies. However, no further development of statistical tests using residuals has been proposed so far for cases where there are more than two conditions. Here, we propose the residual f-test, a one-way analysis of variance (ANOVA), and examine its feasibility using simulated [¹¹C]raclopride PET data. We also revisit data from our previously published [¹¹C]raclopride PET study, in which 10 individuals underwent three PET scans under different conditions. We found that the residual f-test is superior in sensitivity to the conventional f-test while still controlling the type 1 error. The test will therefore allow us to reliably test hypotheses with the smaller sample sizes often used in exploratory PET studies.

  9. A model selection approach to analysis of variance and covariance.

    PubMed

    Alber, Susan A; Weiss, Robert E

    2009-06-15

    An alternative to analysis of variance is a model selection approach where every partition of the treatment means into clusters with equal value is treated as a separate model. The null hypothesis that all treatments are equal corresponds to the partition with all means in a single cluster. The alternative hypothesis corresponds to the set of all other partitions of treatment means. A model selection approach can also be used for a treatment by covariate interaction, where the null hypothesis and each alternative correspond to a partition of treatments into clusters with equal covariate effects. We extend the partition-as-model approach to simultaneous inference for both treatment main effect and treatment interaction with a continuous covariate with separate partitions for the intercepts and treatment-specific slopes. The model space is the Cartesian product of the intercept partition and the slope partition, and we develop five joint priors for this model space. In four of these priors the intercept and slope partition are dependent. We advise on setting priors over models, and we use the model to analyze an orthodontic data set that compares the frictional resistance created by orthodontic fixtures.

  10. Propagation of variance uncertainty calculation for an autopsy tissue analysis

    SciTech Connect

    Bruckner, L.A.

    1994-07-01

    When a radiochemical analysis is reported, it is often accompanied by an uncertainty value that simply reflects the natural variation in the observed counts due to radioactive decay, the so-called counting statistics. However, when the assay procedure is complex or when the number of counts is large, there are usually other important contributors to the total measurement uncertainty that need to be considered. An assay value is almost useless unless it is accompanied by a measure of the uncertainty associated with that value. The uncertainty value should reflect all the major sources of variation and bias affecting the assay and should provide a specified level of confidence. An approach to uncertainty calculation that includes the uncertainty due to instrument calibration, values of the standards, and intermediate measurements as well as counting statistics is presented and applied to the analysis of an autopsy tissue. This approach, usually called propagation of variance, attempts to clearly distinguish between errors that have systematic (bias) effects and those that have random effects on the assays. The effects of these different types of errors are then propagated to the assay using formal statistical techniques. The result is an uncertainty on the assay that has a defensible level of confidence and which can be traced to individual major contributors. However, since only measurement steps are readily quantified and since all models are approximations, it is emphasized that without empirical verification, a propagation of uncertainty model may be just a fancy model with no connection to reality. 5 refs., 1 fig., 2 tab.

  11. Analysis and application of minimum variance discrete time system identification

    NASA Technical Reports Server (NTRS)

    Kaufman, H.; Kotob, S.

    1975-01-01

    An on-line minimum variance parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.

  12. Edgar Allan Poe and neurology.

    PubMed

    Teive, Hélio Afonso Ghizoni; Paola, Luciano de; Munhoz, Renato Puppi

    2014-06-01

    Edgar Allan Poe was one of the most celebrated writers of all time. He published several masterpieces, some of which include references to neurological diseases. Poe suffered from recurrent depression, suggesting a bipolar disorder, as well as alcohol and drug abuse, which in fact led to his death from complications related to alcoholism. Various hypotheses about the precise cause of his death have been put forward, including Wernicke's encephalopathy.

  13. Allan Bloom, America, and Education.

    ERIC Educational Resources Information Center

    West, Thomas

    2000-01-01

    Refutes the claims of Allan Bloom that the source of the problem with today's universities is modern philosophy, that the writings and ideas of Hobbes and Locke planted the seeds of relativism in American culture, and that the cure is Great Books education. Suggests instead that America's founding principles are the only solution to the failure of…

  14. MCT8 mutation analysis and identification of the first female with Allan-Herndon-Dudley syndrome due to loss of MCT8 expression.

    PubMed

    Frints, Suzanna Gerarda Maria; Lenzner, Steffen; Bauters, Mareike; Jensen, Lars Riff; Van Esch, Hilde; des Portes, Vincent; Moog, Ute; Macville, Merryn Victor Erik; van Roozendaal, Kees; Schrander-Stumpel, Constance Theresia Rimbertha Maria; Tzschach, Andreas; Marynen, Peter; Fryns, Jean-Pierre; Hamel, Ben; van Bokhoven, Hans; Chelly, Jamel; Beldjord, Chérif; Turner, Gillian; Gecz, Jozef; Moraine, Claude; Raynaud, Martine; Ropers, Hans Hilger; Froyen, Guy; Kuss, Andreas Walter

    2008-09-01

    Mutations in the thyroid monocarboxylate transporter 8 gene (MCT8/SLC16A2) have been reported to result in X-linked mental retardation (XLMR) in patients with clinical features of the Allan-Herndon-Dudley syndrome (AHDS). We performed MCT8 mutation analysis including 13 XLMR families with LOD scores >2.0, 401 male MR sibships and 47 sporadic male patients with AHDS-like clinical features. One nonsense mutation (c.629insA) and two missense changes (c.1A>T and c.1673G>A) were identified. Consistent with previous reports on MCT8 missense changes, the patient with c.1673G>A showed an elevated serum T3 level. The c.1A>T change in another patient affects a putative translation start codon, but the same change was present in his healthy brother. In addition, normal serum T3 levels were present, suggesting that the c.1A>T (NM_006517) variation is not responsible for the MR phenotype but indicating that MCT8 translation likely starts with a methionine at position p.75. Moreover, we characterized a de novo translocation t(X;9)(q13.2;p24) in a female patient with full-blown AHDS clinical features, including elevated serum T3 levels. The MCT8 gene was disrupted at the X-breakpoint. A complete loss of MCT8 expression was observed in a fibroblast cell line derived from this patient because of unfavorable nonrandom X-inactivation. Taken together, these data indicate that MCT8 mutations are not common in non-AHDS MR patients, yet they support the view that elevated serum T3 levels can be indicative of AHDS and that AHDS clinical features can be present in female MCT8 mutation carriers whenever there is unfavorable nonrandom X-inactivation.

  15. Variance component estimation for mixed model analysis of cDNA microarray data.

    PubMed

    Sarholz, Barbara; Piepho, Hans-Peter

    2008-12-01

    Microarrays provide a valuable tool for the quantification of gene expression. Usually, however, there is a limited number of replicates leading to unsatisfying variance estimates in a gene-wise mixed model analysis. As thousands of genes are available, it is desirable to combine information across genes. When more than two tissue types or treatments are to be compared it might be advisable to consider the array effect as random. Then information between arrays may be recovered, which can increase accuracy in estimation. We propose a method of variance component estimation across genes for a linear mixed model with two random effects. The method may be extended to models with more than two random effects. We assume that the variance components follow a log-normal distribution. Assuming that the sums of squares from the gene-wise analysis, given the true variance components, follow a scaled χ²-distribution, we adopt an empirical Bayes approach. The variance components are estimated by the expectation of their posterior distribution. The new method is evaluated in a simulation study. Differentially expressed genes are more likely to be detected by tests based on these variance estimates than by tests based on gene-wise variance estimates. This effect is most visible in studies with small array numbers. Analyzing a real data set on maize endosperm the method is shown to work well. PMID:19035549
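
    The sketch below illustrates the borrowing of strength across genes with a scaled chi-square moderation of gene-wise variances. It is an analogue chosen for brevity: the paper's own model places a log-normal prior on the variance components and estimates them by posterior expectations.

        import numpy as np

        rng = np.random.default_rng(2)
        d = 3                                  # residual df per gene
        true_var = rng.lognormal(mean=0.0, sigma=0.7, size=5000)
        s2 = true_var * rng.chisquare(d, size=5000) / d  # gene-wise estimates

        # Crude hyperparameters: prior df d0 and a prior scale s0^2
        # matched to the centre of the observed log-variances.
        d0, s0_2 = 4.0, np.exp(np.mean(np.log(s2)))
        s2_shrunk = (d0 * s0_2 + d * s2) / (d0 + d)  # shrink toward the prior

        print("rmse raw:   ", np.sqrt(np.mean((s2 - true_var) ** 2)))
        print("rmse shrunk:", np.sqrt(np.mean((s2_shrunk - true_var) ** 2)))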

  16. Commonality Analysis: Partitioning Variance to Facilitate Better Understanding of Data

    ERIC Educational Resources Information Center

    Zientek, Linda Reichwein; Thompson, Bruce

    2006-01-01

    In early intervention, researchers often are interested in interpretation aids that can help determine the relative importance of variables when multiple regression models are used, and that facilitate deeper insight into prediction dynamics. Commonality analysis is one approach for helping researchers understand the contributions independent or…

  17. Variance Analysis and Comparison in Computer-Aided Design

    NASA Astrophysics Data System (ADS)

    Ullrich, T.; Schiffer, T.; Schinko, C.; Fellner, D. W.

    2011-09-01

    The need to analyze and visualize differences of very similar objects arises in many research areas: mesh compression, scan alignment, nominal/actual value comparison, quality management, and surface reconstruction to name a few. In computer graphics, for example, differences of surfaces are used for analyzing mesh processing algorithms such as mesh compression. They are also used to validate reconstruction and fitting results of laser scanned surfaces. As laser scanning has become very important for the acquisition and preservation of artifacts, scanned representations are used for documentation as well as analysis of ancient objects. Detailed mesh comparisons can reveal smallest changes and damages. These analysis and documentation tasks are needed not only in the context of cultural heritage but also in engineering and manufacturing. Differences of surfaces are analyzed to check the quality of productions. Our contribution to this problem is a workflow, which compares a reference / nominal surface with an actual, laser-scanned data set. The reference surface is a procedural model whose accuracy and systematics describe the semantic properties of an object; whereas the laser-scanned object is a real-world data set without any additional semantic information.

  18. On the Relations among Regular, Equal Unique Variances, and Image Factor Analysis Models.

    ERIC Educational Resources Information Center

    Hayashi, Kentaro; Bentler, Peter M.

    2000-01-01

    Investigated the conditions under which the matrix of factor loadings from the factor analysis model with equal unique variances will give a good approximation to the matrix of factor loadings from the regular factor analysis model. Extends the results to the image factor analysis model. Discusses implications for practice. (SLD)

  19. Resampling analysis of participant variance to improve the efficiency of sensor modeling perception experiments

    NASA Astrophysics Data System (ADS)

    O'Connor, John D.; Hixson, Jonathan; McKnight, Patrick; Peterson, Matthew S.; Parasuraman, Raja

    2010-04-01

    Night Vision and Electronic Sensors Directorate (NVESD) Modeling and Simulation Division (MSD) sensor models, such as NV Therm IP, are developed through perception experiments that investigate phenomena associated with sensor performance (e.g. sampling, noise, sensitivity). A standardized laboratory perception testing method developed in the mid-1990s has been responsible for advances in sensor modeling that are supported by field sensor performance experiments [1]. The number of participants required to yield dependable results for these experiments could not be estimated because the variance in performance due to participant differences was not known. NVESD and George Mason University (GMU) scientists measured the contribution of participant variance within the overall experimental variance for 22 individuals each exposed to 1008 stimuli. Results of the analysis indicate that the total participant contribution to overall experimental variance was between 1% and 2%.

  20. Errors in the estimation of the variance: implications for multiple-probability fluctuation analysis.

    PubMed

    Saviane, Chiara; Silver, R Angus

    2006-06-15

    Synapses play a crucial role in information processing in the brain. Amplitude fluctuations of synaptic responses can be used to extract information about the mechanisms underlying synaptic transmission and its modulation. In particular, multiple-probability fluctuation analysis can be used to estimate the number of functional release sites, the mean probability of release and the amplitude of the mean quantal response from fits of the relationship between the variance and mean amplitude of postsynaptic responses, recorded at different probabilities. To determine these quantal parameters, calculate their uncertainties and the goodness-of-fit of the model, it is important to weight the contribution of each data point in the fitting procedure. We therefore investigated the errors associated with measuring the variance by determining the best estimators of the variance of the variance and have used simulations of synaptic transmission to test their accuracy and reliability under different experimental conditions. For central synapses, which generally have a low number of release sites, the amplitude distribution of synaptic responses is not normal, thus the use of a theoretical variance of the variance based on the normal assumption is not a good approximation. However, appropriate estimators can be derived for the population and for limited sample sizes using a more general expression that involves higher moments and introducing unbiased estimators based on the h-statistics. Our results are likely to be relevant for various applications of fluctuation analysis when few channels or release sites are present.
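
    A minimal sketch of the underlying point: the variance of the sample variance involves the fourth central moment, so the normal-theory value can be badly wrong for skewed amplitude distributions. Exponential data are used here purely because their moments are known exactly.

        import numpy as np

        rng = np.random.default_rng(3)
        n, reps = 50, 20000
        x = rng.exponential(scale=1.0, size=(reps, n))

        s2 = x.var(axis=1, ddof=1)
        empirical = s2.var()                    # Monte Carlo reference

        sigma2, mu4 = 1.0, 9.0                  # exact moments of Exp(1)
        general = (mu4 - (n - 3) / (n - 1) * sigma2 ** 2) / n
        normal_theory = 2.0 * sigma2 ** 2 / (n - 1)

        print(empirical, general, normal_theory)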

  1. The Parabolic Variance (PVAR): A Wavelet Variance Based on the Least-Square Fit.

    PubMed

    Vernotte, Francois; Lenczner, Michel; Bourgeois, Pierre-Yves; Rubiola, Enrico

    2016-04-01

    This paper introduces the parabolic variance (PVAR), a wavelet variance similar to the Allan variance (AVAR), based on the linear regression (LR) of phase data. The companion article arXiv:1506.05009 [physics.ins-det] details the Ω frequency counter, which implements the LR estimate. The PVAR combines the advantages of AVAR and modified AVAR (MVAR). PVAR is good for long-term analysis because the wavelet spans over 2τ, the same as the AVAR wavelet, and good for short-term analysis because the response to white and flicker PM is 1/τ³ and 1/τ², the same as the MVAR. After setting the theoretical framework, we study the degrees of freedom and the confidence interval for the most common noise types. Then, we focus on the detection of a weak noise process at the transition (or corner) where a faster process rolls off. This new perspective raises the question of which variance detects the weak process with the shortest data record. Our simulations show that PVAR is a fortunate tradeoff. PVAR is superior to MVAR in all cases, exhibits the best ability to divide between fast noise phenomena (up to flicker FM), and is almost as good as AVAR for the detection of random walk and drift. PMID:26571523
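
    For comparison with such statistics, a minimal sketch of the overlapping Allan variance computed from phase data; the simulated noise is white FM, for which σ_y²(τ) should fall off as 1/τ:

        import numpy as np

        def avar_overlapping(x, tau0, m):
            """Overlapping Allan variance at tau = m*tau0 from phase samples x."""
            tau = m * tau0
            d = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]  # second differences
            return np.mean(d * d) / (2.0 * tau * tau)

        rng = np.random.default_rng(4)
        tau0 = 1.0
        y = rng.normal(size=100000)                       # white FM noise
        x = np.concatenate(([0.0], np.cumsum(y))) * tau0  # phase data

        for m in (1, 10, 100, 1000):
            print(m * tau0, avar_overlapping(x, tau0, m))  # ~ 1/(m*tau0)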

  3. Analysis of Variance with Summary Statistics in Microsoft® Excel®

    ERIC Educational Resources Information Center

    Larson, David A.; Hsu, Ko-Cheng

    2010-01-01

    Students regularly are asked to solve Single Factor Analysis of Variance problems given only the sample summary statistics (number of observations per category, category means, and corresponding category standard deviations). Most undergraduate students today use Excel for data analysis of this type. However, Excel, like all other statistical…

  4. A Monte Carlo Investigation of the Analysis of Variance Applied to Non-Independent Bernoulli Variates.

    ERIC Educational Resources Information Center

    Draper, John F., Jr.

    The applicability of the Analysis of Variance, ANOVA, procedures to the analysis of dichotomous repeated measure data is described. The design models for which data were simulated in this investigation were chosen to represent simple cases of two experimental situations: situation one, in which subjects' responses to a single randomly selected set…

  5. [Wavelength selection of the oximetry based on test analysis of variance].

    PubMed

    Lin, Ling; Li, Wei; Zeng, Rui-Li; Liu, Rui-An; Li, Gang; Wu, Xiao-Rong

    2014-07-01

    In order to improve the precision and reliability of the spectral measurement of blood oxygen saturation and enhance the validity of the measurement, the method of test analysis of variance was employed. A preferred wavelength combination was selected by analysing the distribution of the oximetry coefficient at different wavelength combinations, making rational use of statistical theory. Using clinical data collected at different oxygen saturation levels for three wavelength combinations (660 and 940 nm, 660 and 805 nm, and 805 and 940 nm), a single-factor analysis-of-variance model of the oxygen saturation coefficient was established; the relatively preferable wavelength combination can then be selected by comparative analysis of the photoelectric volume pulse at the different combinations, providing reliable intermediate data for further modeling. The experimental results showed that the wavelength combination of 660 and 805 nm responded more significantly to changes in blood oxygen saturation, and that the noise and method error introduced with this combination were smaller than with the other combinations, which can improve the measurement accuracy of oximetry. The study applied test analysis of variance to the selection of the wavelength combination in blood oxygen measurement, with significant results, and provides a new idea for blood oxygen measurement and other related quantitative spectroscopic analyses.

  6. Variance of a potential of mean force obtained using the weighted histogram analysis method.

    PubMed

    Cukier, Robert I

    2013-11-27

    A potential of mean force (PMF) that provides the free energy of a thermally driven system along some chosen reaction coordinate (RC) is a useful descriptor of systems characterized by complex, high dimensional potential energy surfaces. Umbrella sampling window simulations use potential energy restraints to provide more uniform sampling along a RC so that potential energy barriers that would otherwise make equilibrium sampling computationally difficult can be overcome. Combining the results from the different biased window trajectories can be accomplished using the Weighted Histogram Analysis Method (WHAM). Here, we provide an analysis of the variance of a PMF along the reaction coordinate. We assume that the potential restraints used for each window lead to Gaussian distributions for the window reaction coordinate densities and that the data sampling in each window is from an equilibrium ensemble sampled so that successive points are statistically independent. Also, we assume that neighbor window densities overlap, as required in WHAM, and that further-than-neighbor window density overlap is negligible. Then, an analytic expression for the variance of the PMF along the reaction coordinate at a desired level of spatial resolution can be generated. The variance separates into a sum over all windows with two kinds of contributions: One from the variance of the biased window density normalized by the total biased window density and the other from the variance of the local (for each window's coordinate range) PMF. Based on the desired spatial resolution of the PMF, the former variance can be minimized relative to that from the latter. The method is applied to a model system that has features of a complex energy landscape evocative of a protein with two conformational states separated by a free energy barrier along a collective reaction coordinate. The variance can be constructed from data that is already available from the WHAM PMF construction.

  7. Estimating an Effect Size in One-Way Multivariate Analysis of Variance (MANOVA)

    ERIC Educational Resources Information Center

    Steyn, H. S., Jr.; Ellis, S. M.

    2009-01-01

    When two or more univariate population means are compared, the proportion of variation in the dependent variable accounted for by population group membership is eta-squared. This effect size can be generalized by using multivariate measures of association, based on the multivariate analysis of variance (MANOVA) statistics, to establish whether…

  8. Weighting by Inverse Variance or by Sample Size in Random-Effects Meta-Analysis

    ERIC Educational Resources Information Center

    Marin-Martinez, Fulgencio; Sanchez-Meca, Julio

    2010-01-01

    Most of the statistical procedures in meta-analysis are based on the estimation of average effect sizes from a set of primary studies. The optimal weight for averaging a set of independent effect sizes is the inverse variance of each effect size, but in practice these weights have to be estimated, being affected by sampling error. When assuming a…

  9. The Use of the Arc-Sine Transformation in the Analysis of Variance.

    ERIC Educational Resources Information Center

    Milligan, Glenn W.

    1987-01-01

    The use of the arc-sine transformation in analysis of variance can lead to difficult inference situations and pose problems in interpretation. It can also produce tests of noticeably lower power when the null hypothesis is false, and is not recommended as a standard tool. Simulated illustrations are provided. (Author/GDC)

  10. Missing Data and Multiple Imputation in the Context of Multivariate Analysis of Variance

    ERIC Educational Resources Information Center

    Finch, W. Holmes

    2016-01-01

    Multivariate analysis of variance (MANOVA) is widely used in educational research to compare means on multiple dependent variables across groups. Researchers faced with the problem of missing data often use multiple imputation of values in place of the missing observations. This study compares the performance of 2 methods for combining p values in…

  11. Teaching Principles of One-Way Analysis of Variance Using M&M's Candy

    ERIC Educational Resources Information Center

    Schwartz, Todd A.

    2013-01-01

    I present an active learning classroom exercise illustrating essential principles of one-way analysis of variance (ANOVA) methods. The exercise is easily conducted by the instructor and is instructive (as well as enjoyable) for the students. This is conducive for demonstrating many theoretical and practical issues related to ANOVA and lends itself…

  12. A Demonstration of the Analysis of Variance Using Physical Movement and Space

    ERIC Educational Resources Information Center

    Owen, William J.; Siakaluk, Paul D.

    2011-01-01

    Classroom demonstrations help students better understand challenging concepts. This article introduces an activity that demonstrates the basic concepts involved in analysis of variance (ANOVA). Students who physically participated in the activity had a better understanding of ANOVA concepts (i.e., higher scores on an exam question answered 2…

  13. A Note on Noncentrality Parameters for Contrast Tests in a One-Way Analysis of Variance

    ERIC Educational Resources Information Center

    Liu, Xiaofeng Steven

    2010-01-01

    The noncentrality parameter for a contrast test in a one-way analysis of variance is based on the dot product of 2 vectors whose geometric meaning in a Euclidian space offers mnemonic hints about its constituents. Additionally, the noncentrality parameters for a set of orthogonal contrasts sum up to the noncentrality parameter for the omnibus "F"…

  14. Development of Statistically Parallel Tests by Analysis of Unique Item Variance.

    ERIC Educational Resources Information Center

    Ree, Malcolm James

    A method for developing statistically parallel tests based on the analysis of unique item variance was developed. A test population of 907 basic airmen trainees was required to estimate the angle at which an object in a photograph was viewed, selecting from eight possibilities. A FORTRAN program known as VARSEL was used to rank all the test items…

  15. A Primer on Multivariate Analysis of Variance (MANOVA) for Behavioral Scientists

    ERIC Educational Resources Information Center

    Warne, Russell T.

    2014-01-01

    Reviews of statistical procedures (e.g., Bangert & Baumberger, 2005; Kieffer, Reese, & Thompson, 2001; Warne, Lazo, Ramos, & Ritter, 2012) show that one of the most common multivariate statistical methods in psychological research is multivariate analysis of variance (MANOVA). However, MANOVA and its associated procedures are often not…

  16. Cost-variance analysis by DRGs; a technique for clinical budget analysis.

    PubMed

    Voss, G B; Limpens, P G; Brans-Brabant, L J; van Ooij, A

    1997-02-01

    In this article it is shown how a cost accounting system based on DRGs can be valuable in determining changes in clinical practice and explaining alterations in expenditure patterns from one period to another. A cost-variance analysis is performed using data from the orthopedic department for the fiscal years 1993 and 1994. Differences between predicted and observed costs for medical care, such as diagnostic procedures, therapeutic procedures and nursing care, are decomposed into different components: changes in patient volume, case-mix differences, changes in resource use and variations in cost per procedure. Using a DRG cost accounting system proved to be a useful technique for clinical budget analysis. The results may stimulate discussions between hospital managers and medical professionals that explain cost variations by integrating medical and economic aspects of clinical health care. PMID:10165044
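
    A hedged sketch of one common sequential (waterfall) decomposition of the cost variance for a single DRG; the component ordering and all figures are illustrative, not the hospital's own scheme:

        def cost_variance(vol0, vol1, qty0, qty1, price0, price1):
            """vol = patients; qty = procedures per patient; price = cost each."""
            volume_effect = (vol1 - vol0) * qty0 * price0
            resource_effect = vol1 * (qty1 - qty0) * price0
            price_effect = vol1 * qty1 * (price1 - price0)
            total = vol1 * qty1 * price1 - vol0 * qty0 * price0
            return volume_effect, resource_effect, price_effect, total

        # The three effects sum exactly to the total cost variance.
        print(cost_variance(vol0=120, vol1=135, qty0=4.0, qty1=3.6,
                            price0=250.0, price1=265.0))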

  17. [Variance estimation considering multistage sampling design in multistage complex sample analysis].

    PubMed

    Li, Yichong; Zhao, Yinjun; Wang, Limin; Zhang, Mei; Zhou, Maigeng

    2016-03-01

    Multistage sampling is a frequently used method in random sampling surveys in public health. Clustering or dependence between observations often exists in samples generated by multistage sampling, which are therefore called complex samples. Sampling error may be underestimated and the probability of type I error may be increased if the multistage sample design is not taken into consideration in the analysis. As the variance (error) estimator for a complex sample is often complicated, statistical software usually adopts the ultimate cluster variance estimate (UCVE) to approximate it, which simply assumes that the sample comes from one-stage sampling. However, with an increased sampling fraction of primary sampling units, the contribution from subsequent sampling stages is no longer trivial, and the ultimate cluster variance estimate may, therefore, lead to invalid variance estimation. This paper summarizes a method of variance estimation that takes the multistage sampling design into consideration. Its performance is compared with that of UCVE by simulating random sampling under different sampling schemes using real-world data. Simulation showed that as the primary sampling unit (PSU) sampling fraction increased, UCVE tended to generate increasingly biased estimates, whereas accurate estimates were obtained by using the method considering the multistage sampling design.
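
    A minimal sketch of the UCVE itself, which treats the weighted primary sampling unit (PSU) totals as if they came from one-stage sampling with replacement; the optional finite population correction hints at why a large PSU sampling fraction matters. The PSU totals are hypothetical.

        import numpy as np

        def ucve_total(psu_totals, n_psu_population=None):
            """Variance of an estimated total from m weighted PSU totals."""
            z = np.asarray(psu_totals, dtype=float)
            m = len(z)
            v = m / (m - 1) * np.sum((z - z.mean()) ** 2)
            if n_psu_population:                 # optional fpc for a large
                v *= 1.0 - m / n_psu_population  # PSU sampling fraction
            return v

        z = [410.0, 385.0, 450.0, 398.0, 441.0]
        print(ucve_total(z), ucve_total(z, n_psu_population=8))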

  18. Edgar Allan Poe's Physical Cosmology

    NASA Astrophysics Data System (ADS)

    Cappi, Alberto

    1994-06-01

    In this paper I describe the scientific content of Eureka, the prose poem written by Edgar Allan Poe in 1848. In that work, starting from metaphysical assumptions, Poe claims that the Universe is finite in an infinite Space, and that it originated from a primordial Particle, whose fragmentation under the action of a repulsive force caused a diffusion of atoms in space. I will show that his subsequently collapsing universe represents a scientifically acceptable Newtonian model. In the framework of his evolving universe, Poe makes use of contemporary astronomical knowledge, deriving modern concepts such as a primordial atomic state of the universe and a common epoch of galaxy formation. Harrison found in Eureka the first, qualitative solution of Olbers' paradox; I show that Poe also applies the anthropic principle in a modern way, trying to explain why the Universe is so large.

  19. Sensitivity analysis of a two-dimensional probabilistic risk assessment model using analysis of variance.

    PubMed

    Mokhtari, Amirhossein; Frey, H Christopher

    2005-12-01

    This article demonstrates application of sensitivity analysis to risk assessment models with two-dimensional probabilistic frameworks that distinguish between variability and uncertainty. A microbial food safety process risk (MFSPR) model is used as a test bed. The process of identifying key controllable inputs and key sources of uncertainty using sensitivity analysis is challenged by typical characteristics of MFSPR models such as nonlinearity, thresholds, interactions, and categorical inputs. Among many available sensitivity analysis methods, analysis of variance (ANOVA) is evaluated in comparison to commonly used methods based on correlation coefficients. In a two-dimensional risk model, the identification of key controllable inputs that can be prioritized with respect to risk management is confounded by uncertainty. However, as shown here, ANOVA provided robust insights regarding controllable inputs most likely to lead to effective risk reduction despite uncertainty. ANOVA appropriately selected the top six important inputs, while correlation-based methods provided misleading insights. Bootstrap simulation is used to quantify uncertainty in ranks of inputs due to sampling error. For the selected sample size, differences in F values of 60% or more were associated with clear differences in rank order between inputs. Sensitivity analysis results identified inputs related to the storage of ground beef servings at home as the most important. Risk management recommendations are suggested in the form of a consumer advisory for better handling and storage practices.

  20. Allan deviation computations of a linear frequency synthesizer system using frequency domain techniques

    NASA Technical Reports Server (NTRS)

    Wu, Andy

    1995-01-01

    Allan Deviation computations of linear frequency synthesizer systems have been reported previously using real-time simulations. Even though this takes less time than actual measurement, it is still very time consuming to compute the Allan Deviation for long sample times at the desired confidence level. Also, noise processes such as flicker phase noise and flicker frequency noise cannot be simulated precisely. The use of frequency domain techniques can overcome these drawbacks. In this paper the system error model of a fictitious linear frequency synthesizer is developed and its performance using a Cesium (Cs) atomic frequency standard (AFS) as a reference is evaluated using frequency domain techniques. For a linear timing system, the power spectral density at the system output can be computed from the known system transfer functions and the known power spectral densities of the input noise sources. The resulting power spectral density can then be used to compute the Allan Variance at the system output. Sensitivities of the Allan Variance at the system output to each of its independent input noises are obtained; these are valuable for design trade-offs and troubleshooting.
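
    A minimal sketch of the final step under stated assumptions: a flat S_y(f) (white FM) stands in for the synthesizer's output spectrum, and the power spectral density is converted to Allan variance with the standard transfer integral σ_y²(τ) = 2 ∫ S_y(f) sin⁴(πfτ)/(πfτ)² df.

        import numpy as np

        def avar_from_psd(freqs, S_y, tau):
            z = np.pi * freqs * tau
            integrand = 2.0 * S_y * np.sin(z) ** 4 / z ** 2
            # trapezoidal rule, written out to avoid version-specific helpers
            return float(np.sum((integrand[1:] + integrand[:-1])
                                * np.diff(freqs)) / 2.0)

        f = np.linspace(1e-4, 100.0, 2_000_000)  # Hz; kernel vanishes as f -> 0
        h0 = 1e-22                               # white FM level, S_y(f) = h0
        for tau in (1.0, 10.0, 100.0):
            # analytic value for white FM is h0 / (2 tau)
            print(tau, avar_from_psd(f, h0 * np.ones_like(f), tau), h0 / (2 * tau))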

  1. The application of analysis of variance (ANOVA) to different experimental designs in optometry.

    PubMed

    Armstrong, R A; Eperjesi, F; Gilmartin, B

    2002-05-01

    Analysis of variance (ANOVA) is the most efficient method available for the analysis of experimental data. Analysis of variance is a method of considerable complexity and subtlety, with many different variations, each of which applies in a particular experimental context. Hence, it is possible to apply the wrong type of ANOVA to data and, therefore, to draw an erroneous conclusion from an experiment. This article reviews the types of ANOVA most likely to arise in clinical experiments in optometry including the one-way ANOVA ('fixed' and 'random effect' models), two-way ANOVA in randomised blocks, three-way ANOVA, and factorial experimental designs (including the varieties known as 'split-plot' and 'repeated measures'). For each ANOVA, the appropriate experimental design is described, a statistical model is formulated, and the advantages and limitations of each type of design discussed. In addition, the problems of non-conformity to the statistical model and determination of the number of replications are considered.

  2. Hierarchical linear model: thinking outside the traditional repeated-measures analysis-of-variance box.

    PubMed

    Lininger, Monica; Spybrook, Jessaca; Cheatham, Christopher C

    2015-04-01

    Longitudinal designs are common in the field of athletic training. For example, in the Journal of Athletic Training from 2005 through 2010, authors of 52 of the 218 original research articles used longitudinal designs. In 50 of the 52 studies, a repeated-measures analysis of variance was used to analyze the data. A possible alternative to this approach is the hierarchical linear model, which has been readily accepted in other medical fields. In this short report, we demonstrate the use of the hierarchical linear model for analyzing data from a longitudinal study in athletic training. We discuss the relevant hypotheses, model assumptions, analysis procedures, and output from the HLM 7.0 software. We also examine the advantages and disadvantages of using the hierarchical linear model with repeated measures and repeated-measures analysis of variance for longitudinal data.

  3. Combining multivariate statistics and analysis of variance to redesign a water quality monitoring network.

    PubMed

    Guigues, Nathalie; Desenfant, Michèle; Hance, Emmanuel

    2013-09-01

    The objective of this paper was to demonstrate how multivariate statistics combined with the analysis of variance could support decision-making during the process of redesigning a water quality monitoring network with highly heterogeneous datasets in terms of time and space. Principal Component Analysis (PCA) and Hierarchical Cluster Analysis (HCA) were selected to optimise the selection of water quality parameters to be monitored as well as the number and location of monitoring stations. Sampling frequency was specifically investigated through the analysis of variance. The data used were obtained between 2007 and 2010 at the Long-term Environmental Research Monitoring and Testing System (OPE) located in the north-eastern part of France in relation with a geological disposal of radioactive waste project. PCA results showed that no substantial reduction among the parameters was possible as strong correlation only exists between electrical conductivity, calcium or bicarbonates. HCA results were geospatially represented for each field campaign and compared to one another in terms of similarities and differences allowing us to group the monitoring stations into 12 categories. This approach enabled us to take into account not only the spatial variability of water quality but also its temporal variability. Finally, the analysis of variances showed that three very different behaviours occurred: parameters with high temporal variability and low spatial variability (e.g. suspended matter), parameters with high spatial variability and average temporal variability (e.g. calcium) and finally parameters with both high temporal and spatial variability (e.g. nitrate).

  4. Toward a more robust variance-based global sensitivity analysis of model outputs

    SciTech Connect

    Tong, C

    2007-10-15

    Global sensitivity analysis (GSA) measures the variation of a model output as a function of the variations of the model inputs given their ranges. In this paper we consider variance-based GSA methods that do not rely on certain assumptions about the model structure such as linearity or monotonicity. These variance-based methods decompose the output variance into terms of increasing dimensionality called 'sensitivity indices', first introduced by Sobol' [25]. Sobol' developed a method of estimating these sensitivity indices using Monte Carlo simulations. McKay [13] proposed an efficient method using replicated Latin hypercube sampling to compute the 'correlation ratios' or 'main effects', which have been shown to be equivalent to Sobol's first-order sensitivity indices. Practical issues with using these variance estimators are how to choose adequate sample sizes and how to assess the accuracy of the results. This paper proposes a modified McKay main effect method featuring an adaptive procedure for accuracy assessment and improvement. We also extend our adaptive technique to the computation of second-order sensitivity indices. Details of the proposed adaptive procedure as well as numerical results are included in this paper.
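
    A minimal sketch of a first-order ("main effect") index Sᵢ = Var(E[Y|Xᵢ])/Var(Y), estimated by binning each input, i.e. as a correlation ratio; the Ishigami-style test function is illustrative, and the binning estimator stands in for the replicated Latin hypercube design discussed in the paper.

        import numpy as np

        rng = np.random.default_rng(5)
        n = 200_000
        x1, x2, x3 = (rng.uniform(-np.pi, np.pi, n) for _ in range(3))
        y = np.sin(x1) + 7.0 * np.sin(x2) ** 2 + 0.1 * x3 ** 4 * np.sin(x1)

        def main_effect(x, y, bins=50):
            edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
            idx = np.clip(np.searchsorted(edges, x) - 1, 0, bins - 1)
            means = np.array([y[idx == b].mean() for b in range(bins)])
            counts = np.array([(idx == b).sum() for b in range(bins)])
            return np.sum(counts * (means - y.mean()) ** 2) / (len(y) * y.var())

        # x3 acts only through its interaction with x1, so its main effect
        # is near zero even though it matters for the output.
        for name, x in (("x1", x1), ("x2", x2), ("x3", x3)):
            print(name, round(main_effect(x, y), 3))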

  5. Publishing nutrition research: a review of multivariate techniques--part 2: analysis of variance.

    PubMed

    Harris, Jeffrey E; Sheean, Patricia M; Gleason, Philip M; Bruemmer, Barbara; Boushey, Carol

    2012-01-01

    This article is the eighth in a series exploring the importance of research design, statistical analysis, and epidemiology in nutrition and dietetics research, and the second in a series focused on multivariate statistical analytical techniques. The purpose of this review is to examine the statistical technique, analysis of variance (ANOVA), from its simplest form to its multivariate applications. Many dietetics practitioners are familiar with basic ANOVA, but less familiar with multivariate applications such as multiway ANOVA, repeated-measures ANOVA, analysis of covariance, multiple ANOVA, and multiple analysis of covariance. The article addresses all these applications and includes hypothetical and real examples from the field of dietetics.

  6. Structure analysis of simulated molecular clouds with the Δ-variance

    DOE PAGES

    Bertram, Erik; Klessen, Ralf S.; Glover, Simon C. O.

    2015-05-27

    Here, we employ the Δ-variance analysis and study the turbulent gas dynamics of simulated molecular clouds (MCs). Our models account for a simplified treatment of time-dependent chemistry and the non-isothermal nature of the gas. We investigate simulations using three different initial mean number densities of n0 = 30, 100 and 300 cm-3 that span the range of values typical for MCs in the solar neighbourhood. Furthermore, we model the CO line emission in a post-processing step using a radiative transfer code. We evaluate Δ-variance spectra for centroid velocity (CV) maps as well as for integrated intensity and column density maps for various chemical components: the total, H2 and 12CO number density and the integrated intensity of both the 12CO and 13CO (J = 1 → 0) lines. The spectral slopes of the Δ-variance computed on the CV maps for the total and H2 number density are significantly steeper compared to the different CO tracers. We find slopes for the linewidth–size relation ranging from 0.4 to 0.7 for the total and H2 density models, while the slopes for the various CO tracers range from 0.2 to 0.4 and underestimate the values for the total and H2 density by a factor of 1.5–3.0. We demonstrate that optical depth effects can significantly alter the Δ-variance spectra. Furthermore, we report a critical density threshold of 100 cm-3 at which the Δ-variance slopes of the various CO tracers change sign. We thus conclude that carbon monoxide traces the total cloud structure well only if the average cloud density lies above this limit.

  7. Structure analysis of simulated molecular clouds with the Δ-variance

    SciTech Connect

    Bertram, Erik; Klessen, Ralf S.; Glover, Simon C. O.

    2015-05-27

    Here, we employ the Δ-variance analysis and study the turbulent gas dynamics of simulated molecular clouds (MCs). Our models account for a simplified treatment of time-dependent chemistry and the non-isothermal nature of the gas. We investigate simulations using three different initial mean number densities of n0 = 30, 100 and 300 cm-3 that span the range of values typical for MCs in the solar neighbourhood. Furthermore, we model the CO line emission in a post-processing step using a radiative transfer code. We evaluate Δ-variance spectra for centroid velocity (CV) maps as well as for integrated intensity and column density maps for various chemical components: the total, H2 and 12CO number density and the integrated intensity of both the 12CO and 13CO (J = 1 → 0) lines. The spectral slopes of the Δ-variance computed on the CV maps for the total and H2 number density are significantly steeper compared to the different CO tracers. We find slopes for the linewidth–size relation ranging from 0.4 to 0.7 for the total and H2 density models, while the slopes for the various CO tracers range from 0.2 to 0.4 and underestimate the values for the total and H2 density by a factor of 1.5–3.0. We demonstrate that optical depth effects can significantly alter the Δ-variance spectra. Furthermore, we report a critical density threshold of 100 cm-3 at which the Δ-variance slopes of the various CO tracers change sign. We thus conclude that carbon monoxide traces the total cloud structure well only if the average cloud density lies above this limit.

  8. Non-destructive X-ray Computed Tomography (XCT) Analysis of Sediment Variance in Marine Cores

    NASA Astrophysics Data System (ADS)

    Oti, E.; Polyak, L. V.; Dipre, G.; Sawyer, D.; Cook, A.

    2015-12-01

    Benthic activity within marine sediments can alter the physical properties of the sediment as well as indicate nutrient flux and ocean temperatures. We examine burrowing features in sediment cores from the western Arctic Ocean collected during the 2005 Healy-Oden TransArctic Expedition (HOTRAX) and from the Gulf of Mexico Integrated Ocean Drilling Program (IODP) Expedition 308. While traditional methods for studying bioturbation require physical dissection of the cores, we assess burrowing using an X-ray computed tomography (XCT) scanner. XCT noninvasively images the sediment cores in three dimensions and produces density sensitive images suitable for quantitative analysis. XCT units are recorded as Hounsfield Units (HU), where -999 is air, 0 is water, and 4000-5000 would be a higher density mineral, such as pyrite. We rely on the fundamental assumption that sediments are deposited horizontally, and we analyze the variance over each flat-lying slice. The variance describes the spread of pixel values over a slice. When sediments are reworked, drawing higher and lower density matrix into a layer, the variance increases. Examples of this can be seen in two slices in core 19H-3A from Site U1324 of IODP Expedition 308. The first slice, located 165.6 meters below sea floor consists of relatively undisturbed sediment. Because of this, the majority of the sediment values fall between 1406 and 1497 HU, thus giving the slice a comparatively small variance of 819.7. The second slice, located 166.1 meters below sea floor, features a lower density sediment matrix disturbed by burrow tubes and the inclusion of a high density mineral. As a result, the Hounsfield Units have a larger variance of 1,197.5, which is a result of sediment matrix values that range from 1220 to 1260 HU, the high-density mineral value of 1920 HU and the burrow tubes that range from 1300 to 1410 HU. Analyzing this variance allows us to observe changes in the sediment matrix and more specifically capture
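
    The slice-variance measure is straightforward to compute. The sketch below builds a synthetic volume in pseudo-Hounsfield units with a denser "burrow" patch in some slices and reports per-slice variance; all dimensions and HU values are invented, chosen only to mimic the undisturbed-versus-reworked contrast described above.

        import numpy as np

        rng = np.random.default_rng(3)
        # 100 flat-lying slices of 64 x 64 voxels around a ~1450 HU matrix
        volume = rng.normal(1450, 25, size=(100, 64, 64))
        volume[40:60, 20:30, 20:30] += 400   # denser "burrow" infill in slices 40-59

        slice_var = volume.reshape(volume.shape[0], -1).var(axis=1)
        for z in (10, 50):   # undisturbed vs reworked slice
            print(f"slice {z}: variance {slice_var[z]:.0f}")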

  9. Addressing misallocation of variance in principal components analysis of event-related potentials.

    PubMed

    Dien, J

    1998-01-01

    Interpretation of evoked response potentials is complicated by the extensive superposition of multiple electrical events. The most common approach to disentangling these features is principal components analysis (PCA). Critics have demonstrated a number of caveats that complicate interpretation, notably misallocation of variance and latency jitter. This paper describes some further caveats to PCA and uses simulations to evaluate three potential methods for addressing them: parallel analysis, oblique rotations, and spatial PCA. An improved simulation model is introduced for examining these issues. It is concluded that PCA is an essential statistical tool for event-related potential analysis, but only if applied appropriately.

  10. Cortical surface-based analysis reduces bias and variance in kinetic modeling of brain PET data.

    PubMed

    Greve, Douglas N; Svarer, Claus; Fisher, Patrick M; Feng, Ling; Hansen, Adam E; Baare, William; Rosen, Bruce; Fischl, Bruce; Knudsen, Gitte M

    2014-05-15

    Exploratory (i.e., voxelwise) spatial methods are commonly used in neuroimaging to identify areas that show an effect when a region-of-interest (ROI) analysis cannot be performed because no strong a priori anatomical hypothesis exists. However, noise at a single voxel is much higher than noise in a ROI making noise management critical to successful exploratory analysis. This work explores how preprocessing choices affect the bias and variability of voxelwise kinetic modeling analysis of brain positron emission tomography (PET) data. These choices include the use of volume- or cortical surface-based smoothing, level of smoothing, use of voxelwise partial volume correction (PVC), and PVC masking threshold. PVC was implemented using the Müller-Gärtner method with the masking out of voxels with low gray matter (GM) partial volume fraction. Dynamic PET scans of an antagonist serotonin-4 receptor radioligand ([(11)C]SB207145) were collected on sixteen healthy subjects using a Siemens HRRT PET scanner. Kinetic modeling was used to compute maps of non-displaceable binding potential (BPND) after preprocessing. The results showed a complicated interaction between smoothing, PVC, and masking on BPND estimates. Volume-based smoothing resulted in large bias and intersubject variance because it smears signal across tissue types. In some cases, PVC with volume smoothing paradoxically caused the estimated BPND to be less than when no PVC was used at all. When applied in the absence of PVC, cortical surface-based smoothing resulted in dramatically less bias and the least variance of the methods tested for smoothing levels 5 mm and higher. When used in combination with PVC, surface-based smoothing minimized the bias without significantly increasing the variance. Surface-based smoothing resulted in 2-4 times less intersubject variance than when volume smoothing was used. This translates into more than 4 times fewer subjects needed in a group analysis to achieve similarly powered

  11. Efficiency control in large-scale genotyping using analysis of variance.

    PubMed

    Spijker, Geert T; Bruinenberg, Marcel; te Meerman, Gerard J

    2005-01-01

    The efficiency of the genotyping process is determined by many simultaneous factors. In actual genotyping, a production run is often preceded by small-scale experiments to find optimal conditions. We propose to use statistical analysis of production run data as well, to gain insight into factors important for the outcome of genotyping. As an example, we show that analysis of variance (ANOVA) applied to the first-pass results of a genetic study reveals important determinants of genotyping success. The largest factor limiting genotyping appeared to be interindividual variation among DNA samples, which explained 20% of the variance; a smaller reaction volume, sizing failure, and differences among markers each explained approximately 10%. Other potentially important factors, such as sample position within the plate and reusing electrophoresis matrix, appeared to be of minor influence. About 55% of the total variance could be explained by systematic factors. These results show that ANOVA can provide valuable feedback to improve genotyping efficiency. We propose to adjust genotype production runs using principles of experimental design in order to maximize genotyping efficiency at little additional cost.

  12. The Efficiency of Split Panel Designs in an Analysis of Variance Model.

    PubMed

    Liu, Xin; Wang, Wei-Guo; Liu, Hai-Jun

    2016-01-01

    We consider split panel design efficiency in analysis of variance models, that is, the determination of the optimal proportion of cross-section series in all samples, to minimize the variances of best linear unbiased estimators of linear combinations of parameters. An orthogonal matrix is constructed to obtain manageable expressions for the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of interest and budget parameters. Additionally, the efficiency of an estimator based on the split panel relative to an estimator based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from a split panel can be quite substantial. We further consider the efficiency of split panel design, given a budget, and transform it to a constrained nonlinear integer programming problem. Specifically, an efficient algorithm is designed to solve the constrained nonlinear integer programming. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm's efficiency with an empirical example concerning monthly consumer expenditure on food in 1985, in the Netherlands, and the efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447

  13. The Efficiency of Split Panel Designs in an Analysis of Variance Model.

    PubMed

    Liu, Xin; Wang, Wei-Guo; Liu, Hai-Jun

    2016-01-01

    We consider split panel design efficiency in analysis of variance models, that is, the determination of the optimal proportion of cross-section series in all samples, to minimize the variances of best linear unbiased estimators of linear combinations of parameters. An orthogonal matrix is constructed to obtain manageable expressions for the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of interest and budget parameters. Additionally, the efficiency of an estimator based on the split panel relative to an estimator based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from a split panel can be quite substantial. We further consider the efficiency of split panel design, given a budget, and transform it to a constrained nonlinear integer programming problem. Specifically, an efficient algorithm is designed to solve the constrained nonlinear integer programming. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm's efficiency with an empirical example concerning monthly consumer expenditure on food in 1985, in the Netherlands, and the efficient ranges of the algorithm parameters are given to ensure a good solution.

  14. Applying the Generalized Waring model for investigating sources of variance in motor vehicle crash analysis.

    PubMed

    Peng, Yichuan; Lord, Dominique; Zou, Yajie

    2014-12-01

    As one of the major analysis methods, statistical models play an important role in traffic safety analysis. They can be used for a wide variety of purposes, including establishing relationships between variables and understanding the characteristics of a system. The purpose of this paper is to document a new type of model that can help with the latter. This model is based on the Generalized Waring (GW) distribution. The GW model yields more information about the sources of the variance observed in datasets than other traditional models, such as the negative binomial (NB) model. In this regard, the GW model can separate the observed variability into three parts: (1) the randomness, which explains the model's uncertainty; (2) the proneness, which refers to the internal differences between entities or observations; and (3) the liability, which is defined as the variance caused by other external factors that are difficult to identify and have not been included as explanatory variables in the model. The study analyses were accomplished using two observed datasets to explore potential sources of variation. The results show that the GW model can provide meaningful information about sources of variance in crash data and also performs better than the NB model.

  15. The Efficiency of Split Panel Designs in an Analysis of Variance Model

    PubMed Central

    Wang, Wei-Guo; Liu, Hai-Jun

    2016-01-01

    We consider split panel design efficiency in analysis of variance models, that is, the determination of the optimal proportion of cross-section series in all samples, to minimize the variances of best linear unbiased estimators of linear combinations of parameters. An orthogonal matrix is constructed to obtain manageable expressions for the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of interest and budget parameters. Additionally, the efficiency of an estimator based on the split panel relative to an estimator based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from a split panel can be quite substantial. We further consider the efficiency of split panel design, given a budget, and transform it to a constrained nonlinear integer programming problem. Specifically, an efficient algorithm is designed to solve the constrained nonlinear integer programming. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm's efficiency with an empirical example concerning monthly consumer expenditure on food in 1985, in the Netherlands, and the efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447

  16. Image embedded coding with edge preservation based on local variance analysis for mobile applications

    NASA Astrophysics Data System (ADS)

    Luo, Gaoyong; Osypiw, David

    2006-02-01

    Transmitting digital images via mobile devices is often subject to bandwidth constraints that are incompatible with high data rates. Embedded coding for progressive image transmission has recently gained popularity in the image compression community. However, current progressive wavelet-based image coders tend to send information on the lowest-frequency wavelet coefficients first. At very low bit rates, compressed images are therefore dominated by low-frequency information, and high-frequency components belonging to edges are lost, blurring the signal features. This paper presents a new image coder employing edge preservation based on local variance analysis to improve the visual appearance and recognizability of compressed images. The analysis and compression are performed by dividing an image into blocks. A fast lifting wavelet transform is developed with the advantages of being computationally efficient, with boundary effects minimized by changing the wavelet shape for filtering near the boundaries. A modified SPIHT algorithm, which uses more bits to encode the wavelet coefficients and transmits fewer bits in the sorting pass, is implemented to improve performance and reduce the correlation of the coefficients at scalable bit rates. Local variance estimation and edge strength measurement can effectively determine the best bit allocation for each block to preserve local features, by assigning more bits to blocks containing more edges with higher variance and edge strength. Experimental results demonstrate that the method performs well both visually and in terms of MSE and PSNR. The proposed image coder provides a potential solution with parallel computation and low memory requirements for mobile applications.

  17. Discriminating between cultivars and treatments of broccoli using mass spectral fingerprinting and analysis of variance-principal component analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Metabolite fingerprints, obtained with direct injection mass spectrometry (MS) with both positive and negative ionization, were used with analysis of variance-principal components analysis (ANOVA-PCA) to discriminate between cultivars and growing treatments of broccoli. The sample set consisted of ...

  18. Analysis of T-RFLP data using analysis of variance and ordination methods: a comparative study.

    PubMed

    Culman, S W; Gauch, H G; Blackwood, C B; Thies, J E

    2008-09-01

    The analysis of T-RFLP data has developed considerably over the last decade, but there remains a lack of consensus about which statistical analyses offer the best means for finding trends in these data. In this study, we empirically tested and theoretically compared ten diverse T-RFLP datasets derived from soil microbial communities using the more common ordination methods in the literature: principal component analysis (PCA), nonmetric multidimensional scaling (NMS) with Sørensen, Jaccard and Euclidean distance measures, correspondence analysis (CA), detrended correspondence analysis (DCA) and a technique new to T-RFLP data analysis, the Additive Main Effects and Multiplicative Interaction (AMMI) model. Our objectives were i) to determine the distribution of variation in T-RFLP datasets using analysis of variance (ANOVA), ii) to determine the more robust and informative multivariate ordination methods for analyzing T-RFLP data, and iii) to compare the methods based on theoretical considerations. For the 10 datasets examined in this study, ANOVA revealed that the variation from Environment main effects was always small, variation from T-RFs main effects was large, and variation from T-RFxEnvironment (TxE) interactions was intermediate. Larger variation due to TxE indicated larger differences in microbial communities between environments/treatments and thus demonstrated the utility of ANOVA to provide an objective assessment of community dissimilarity. The comparison of statistical methods typically yielded similar empirical results. AMMI, T-RF-centered PCA, and DCA were the most robust methods in terms of producing ordinations that consistently reached a consensus with other methods. In datasets with high sample heterogeneity, NMS analyses with Sørensen and Jaccard distance were the most sensitive for recovery of complex gradients. The theoretical comparison showed that some methods hold distinct advantages for T-RFLP analysis, such as estimations of variation

  19. Chasing change: repeated-measures analysis of variance is so yesterday!

    PubMed

    Dijkers, Marcel P

    2013-03-01

    Change and growth are the bread and butter of rehabilitation research, but to date, most researchers have used less than optimal statistical methods to quantify change, its nature, speed, and form. Hierarchical linear modeling (HLM) (random/mixed effects or latent growth or multilevel modeling, individual/latent growth curve analysis) generally is superior to analysis of (co)variance and other methods, but has been underused in rehabilitation research. Apropos of the publication of 2 didactic articles setting forth the basics of HLM, this commentary sketches some of the advantages of this technique.

  20. The analysis of variance in anaesthetic research: statistics, biography and history.

    PubMed

    Pandit, J J

    2010-12-01

    Multiple t-tests (or their non-parametric equivalents) are often used erroneously to compare the means of three or more groups in anaesthetic research. Methods for correcting the p value regarded as significant can be applied to take account of multiple testing, but these are somewhat arbitrary and do not avoid several unwieldy calculations. The appropriate method for most such comparisons is the 'analysis of variance', which not only economises on the number of statistical procedures, but also indicates whether underlying factors or sub-groups have contributed to any significant results. This article outlines the history, rationale and method of this analysis.

  1. A further analysis for the minimum-variance deconvolution filter performance

    NASA Astrophysics Data System (ADS)

    Chi, Chong-Yung

    1987-06-01

    Chi and Mendel (1984) analyzed the performance of minimum-variance deconvolution (MVD). In this correspondence, a further analysis of the performance of the MVD filter is presented. It is shown that the MVD filter performs like an inverse filter and a whitening filter as SNR goes to infinity, and like a matched filter as SNR goes to zero. The estimation error of the MVD filter is colored noise, but it becomes white when SNR goes to zero. This analysis also connects the error power-spectral density of the MVD filter with the spectrum of the causal-prediction error filter.
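
    The limiting behavior summarized above can be checked numerically with a Wiener-style deconvolution response, G = conj(V)/(|V|^2 + 1/SNR), which shares the MVD filter's frequency-domain form under white-input assumptions (a stand-in for illustration, not Chi's exact derivation). As SNR grows, G approaches the inverse filter 1/V; as SNR vanishes, G becomes proportional to the matched filter conj(V).

        import numpy as np

        wavelet = np.array([1.0, -0.8, 0.3, -0.1])   # illustrative source wavelet
        V = np.fft.rfft(wavelet, 64)

        for snr in (1e6, 1e-6):
            G = np.conj(V) / (np.abs(V) ** 2 + 1.0 / snr)
            inv_err = np.max(np.abs(G - 1.0 / V))            # vs inverse filter
            mat_err = np.max(np.abs(G - snr * np.conj(V)))   # vs scaled matched filter
            print(f"SNR={snr:.0e}: |G - 1/V| max {inv_err:.1e}, "
                  f"|G - SNR*conj(V)| max {mat_err:.1e}")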

  2. Reducing experimental variability in variance-based sensitivity analysis of biochemical reaction systems.

    PubMed

    Zhang, Hong-Xuan; Goutsias, John

    2011-03-21

    Sensitivity analysis is a valuable task for assessing the effects of biological variability on cellular behavior. Available techniques require knowledge of nominal parameter values, which cannot be determined accurately due to experimental uncertainty typical to problems of systems biology. As a consequence, the practical use of existing sensitivity analysis techniques may be seriously hampered by the effects of unpredictable experimental variability. To address this problem, we propose here a probabilistic approach to sensitivity analysis of biochemical reaction systems that explicitly models experimental variability and effectively reduces the impact of this type of uncertainty on the results. The proposed approach employs a recently introduced variance-based method to sensitivity analysis of biochemical reaction systems [Zhang et al., J. Chem. Phys. 134, 094101 (2009)] and leads to a technique that can be effectively used to accommodate appreciable levels of experimental variability. We discuss three numerical techniques for evaluating the sensitivity indices associated with the new method, which include Monte Carlo estimation, derivative approximation, and dimensionality reduction based on orthonormal Hermite approximation. By employing a computational model of the epidermal growth factor receptor signaling pathway, we demonstrate that the proposed technique can greatly reduce the effect of experimental variability on variance-based sensitivity analysis results. We expect that, in cases of appreciable experimental variability, the new method can lead to substantial improvements over existing sensitivity analysis techniques.

  3. Discriminating between cultivars and treatments of broccoli using mass spectral fingerprinting and analysis of variance-principal component analysis.

    PubMed

    Luthria, Devanand L; Lin, Long-Ze; Robbins, Rebecca J; Finley, John W; Banuelos, Gary S; Harnly, James M

    2008-11-12

    Metabolite fingerprints, obtained with direct injection mass spectrometry (MS) with both positive and negative ionization, were used with analysis of variance-principal components analysis (ANOVA-PCA) to discriminate between cultivars and growing treatments of broccoli. The sample set consisted of two cultivars of broccoli, Majestic and Legacy, the first grown with four different levels of Se and the second grown organically and conventionally with two rates of irrigation. Chemical composition differences in the two cultivars and seven treatments produced patterns that were visually and statistically distinguishable using ANOVA-PCA. PCA loadings allowed identification of the molecular and fragment ions that provided the most significant chemical differences. A standardized profiling method for phenolic compounds showed that important discriminating ions were not phenolic compounds. The elution times of the discriminating ions and previous results suggest that they were common sugars and organic acids. ANOVA calculations of the positive and negative ionization MS fingerprints showed that 33% of the variance came from the cultivar, 59% from the growing treatment, and 8% from analytical uncertainty. Although the positive and negative ionization fingerprints differed significantly, there was no difference in the distribution of variance. High variance of individual masses with cultivars or growing treatment was correlated with high PCA loadings. The ANOVA data suggest that only variables with high variance for analytical uncertainty should be deleted. All other variables represent discriminating masses that allow separation of the samples with respect to cultivar and treatment.
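
    The ANOVA-PCA recipe used in this record can be sketched briefly: partition each fingerprint into a grand mean, factor-effect matrices, and residuals, then run PCA on one effect matrix with the residuals added back. The code below does this for a synthetic, balanced 2x2 cultivar-by-treatment design; all data and effect sizes are invented.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(4)
        cultivar = np.repeat([0, 1], 16)                 # 2 cultivars x 16 samples
        treatment = np.tile(np.repeat([0, 1], 8), 2)     # 2 treatments, crossed
        X = rng.normal(size=(32, 40))                    # synthetic MS fingerprints
        X[cultivar == 1, :5] += 1.0                      # cultivar marker ions
        X[treatment == 1, 5:10] += 1.5                   # treatment marker ions

        grand = X.mean(axis=0)
        eff_c = np.vstack([X[cultivar == c].mean(axis=0) - grand for c in cultivar])
        eff_t = np.vstack([X[treatment == t].mean(axis=0) - grand for t in treatment])
        residual = X - grand - eff_c - eff_t

        # PCA on one effect matrix plus residual isolates that factor's pattern.
        scores = PCA(n_components=2).fit_transform(eff_t + residual)
        print("PC1 mean by treatment:",
              [round(float(scores[treatment == t, 0].mean()), 2) for t in (0, 1)])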

  4. Analysis of variance: is there a difference in means and what does it mean?

    PubMed

    Kao, Lillian S; Green, Charles E

    2008-01-01

    To critically evaluate the literature and to design valid studies, surgeons require an understanding of basic statistics. Despite the increasing complexity of reported statistical analyses in surgical journals and the decreasing use of inappropriate statistical methods, errors such as in the comparison of multiple groups still persist. This review introduces the statistical issues relating to multiple comparisons, describes the theoretical basis behind analysis of variance (ANOVA), discusses the essential differences between ANOVA and multiple t-tests, and provides an example of the computations and computer programming used in performing ANOVA.
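
    As a companion to the computations the review describes, here is a minimal one-way ANOVA for three groups in Python; the simulated outcomes are purely illustrative. Three pairwise t-tests at alpha = 0.05 would carry a family-wise error rate of roughly 1 - 0.95^3, about 14%, whereas ANOVA provides a single overall test.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        g1 = rng.normal(10.0, 2.0, 25)   # simulated outcomes, three groups
        g2 = rng.normal(10.5, 2.0, 25)
        g3 = rng.normal(12.0, 2.0, 25)

        f, p = stats.f_oneway(g1, g2, g3)   # one overall test instead of 3 t-tests
        print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")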

  5. Advances in the meta-analysis of heterogeneous clinical trials I: The inverse variance heterogeneity model.

    PubMed

    Doi, Suhail A R; Barendregt, Jan J; Khan, Shahjahan; Thalib, Lukman; Williams, Gail M

    2015-11-01

    This article examines an improved alternative to the random effects (RE) model for meta-analysis of heterogeneous studies. It is shown that the known issues of underestimation of the statistical error and spuriously overconfident estimates with the RE model can be resolved by the use of an estimator under the fixed effect model assumption with a quasi-likelihood based variance structure - the IVhet model. Extensive simulations confirm that this estimator retains a correct coverage probability and a lower observed variance than the RE model estimator, regardless of heterogeneity. When the proposed IVhet method is applied to the controversial meta-analysis of intravenous magnesium for the prevention of mortality after myocardial infarction, the pooled OR is 1.01 (95% CI 0.71-1.46) which not only favors the larger studies but also indicates more uncertainty around the point estimate. In comparison, under the RE model the pooled OR is 0.71 (95% CI 0.57-0.89) which, given the simulation results, reflects underestimation of the statistical error. Given the compelling evidence generated, we recommend that the IVhet model replace both the FE and RE models. To facilitate this, it has been implemented into free meta-analysis software called MetaXL which can be downloaded from www.epigear.com.
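
    The IVhet idea, keeping the fixed-effect (inverse-variance) point estimate but widening its variance for heterogeneity, can be sketched as follows. This follows one published formulation with a DerSimonian-Laird tau^2 plugged into a quasi-likelihood variance; treat it as an illustration of the estimator's structure, not a reference implementation of MetaXL. All input numbers are invented.

        import numpy as np

        def ivhet(effects, variances):
            y, v = np.asarray(effects, float), np.asarray(variances, float)
            w = 1.0 / v
            theta = np.sum(w * y) / np.sum(w)         # fixed-effect point estimate
            q = np.sum(w * (y - theta) ** 2)          # Cochran's Q
            c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
            tau2 = max(0.0, (q - (len(y) - 1)) / c)   # DerSimonian-Laird tau^2
            wn = w / np.sum(w)
            var = np.sum(wn ** 2 * (v + tau2))        # heterogeneity-widened variance
            return theta, np.sqrt(var)

        theta, se = ivhet([0.2, -0.1, 0.4, 0.3], [0.02, 0.05, 0.03, 0.04])
        print(f"pooled effect {theta:.3f}, 95% CI +/- {1.96 * se:.3f}")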

  6. Application of Allan Deviation to Assessing Uncertainties of Continuous-measurement Instruments, and Optimizing Calibration Schemes

    NASA Astrophysics Data System (ADS)

    Jacobson, Gloria; Rella, Chris; Farinas, Alejandro

    2014-05-01

    Technological advancement of instrumentation in atmospheric and other geoscience disciplines over the past decade has led to a shift from discrete sample analysis to continuous, in-situ monitoring. Standard error analysis used for discrete measurements is not sufficient to assess and compare the error contribution of noise and drift from continuous-measurement instruments, and a different statistical analysis approach should be applied. The Allan standard deviation analysis technique developed for atomic clock stability assessment by David W. Allan [1] can be effectively and gainfully applied to continuous measurement instruments. As an example, P. Werle et al. have applied these techniques to signal averaging for atmospheric monitoring by Tunable Diode-Laser Absorption Spectroscopy (TDLAS) [2]. This presentation will build on, and translate, prior foundational publications to provide contextual definitions and guidelines for the practical application of this analysis technique to continuous scientific measurements. The specific example of a Picarro G2401 Cavity Ringdown Spectroscopy (CRDS) analyzer used for continuous atmospheric monitoring of CO2, CH4 and CO will be used to define the basic features of the Allan deviation, assess factors affecting the analysis, and explore the translation from time series to Allan deviation plots for different types of instrument noise (white noise, linear drift, and interpolated data). In addition, the application of the Allan deviation to optimizing and predicting the performance of different calibration schemes will be presented. Even though this presentation will use the specific example of the Picarro G2401 CRDS Analyzer for atmospheric monitoring, the objective is to present the information such that it can be successfully applied to other instrument sets and disciplines. [1] D.W. Allan, "Statistics of Atomic Frequency Standards," Proc. IEEE, vol. 54, pp 221-230, Feb 1966 [2] P. Werle, R. Mücke, F. Slemr, "The Limits
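
    Because the Allan deviation is the thread running through these records, a direct implementation is worth spelling out. The sketch below computes the non-overlapping Allan deviation from its definition, sigma_y^2(tau) = 0.5 * <(ybar_{k+1} - ybar_k)^2>, where the ybar_k are successive tau-long averages. The simulated signal mixes white noise with a small linear drift, so the deviation first falls roughly as tau^(-1/2) and then flattens and turns up once drift dominates; all numbers are illustrative.

        import numpy as np

        def allan_deviation(y, m):
            """Non-overlapping Allan deviation at averaging factor m."""
            n = len(y) // m
            means = y[: n * m].reshape(n, m).mean(axis=1)   # tau-long averages
            return np.sqrt(0.5 * np.mean(np.diff(means) ** 2))

        rng = np.random.default_rng(6)
        n_pts = 100_000                                          # 1 Hz samples, say
        y = rng.normal(0, 1.0, n_pts) + 1e-4 * np.arange(n_pts)  # white noise + drift
        for m in (1, 10, 100, 1000, 10_000):
            print(f"tau = {m:6d} s   ADEV = {allan_deviation(y, m):.4f}")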

  7. The apportionment of total genetic variation by categorical analysis of variance.

    PubMed

    Khang, Tsung Fei; Yap, Von Bing

    2010-01-01

    We suggest the categorical analysis of variance (CATANOVA) as a means of quantifying the proportion of total genetic variation attributed to different sources of variation. This method potentially challenges researchers to rethink conclusions derived from a well-known method known as the analysis of molecular variance (AMOVA). The CATANOVA framework allows explicit definition, and estimation, of two measures of genetic differentiation. These parameters form the subject of interest in many research programmes, but are often confused with the correlation measures defined in AMOVA, which cannot be interpreted as relative contributions of particular sources of variation. Through a simulation approach, we show that under certain conditions, researchers who use AMOVA to estimate these measures of genetic differentiation may attribute an unjustifiably large share of total variation to population labels. Moreover, the two measures can also lead to incongruent conclusions regarding the genetic structure of the populations of interest. Fortunately, one of the two measures seems robust to variations in the relative sample sizes used. Its merits are illustrated in this paper using mitochondrial haplotype and amplified fragment length polymorphism (AFLP) data.

  8. Analysis of variance components reveals the contribution of sample processing to transcript variation.

    PubMed

    van der Veen, Douwe; Oliveira, José Miguel; van den Berg, Willy A M; de Graaff, Leo H

    2009-04-01

    The proper design of DNA microarray experiments requires knowledge of biological and technical variation of the studied biological model. For the filamentous fungus Aspergillus niger, a fast, quantitative real-time PCR (qPCR)-based hierarchical experimental design was used to determine this variation. Analysis of variance components determined the contribution of each processing step to total variation: 68% is due to differences in day-to-day handling and processing, while the fermentor vessel, cDNA synthesis, and qPCR measurement each contributed equally to the remainder of variation. The global transcriptional response to D-xylose was analyzed using Affymetrix microarrays. Twenty-four statistically differentially expressed genes were identified. These encode enzymes required to degrade and metabolize D-xylose-containing polysaccharides, as well as complementary enzymes required to metabolize complex polymers likely present in the vicinity of D-xylose-containing substrates. These results confirm previous findings that the D-xylose signal is interpreted by the fungus as the availability of a multitude of complex polysaccharides. Measurement of a limited number of transcripts in a defined experimental setup followed by analysis of variance components is a fast and reliable method to determine biological and technical variation present in qPCR and microarray studies. This approach provides important parameters for the experimental design of batch-grown filamentous cultures and facilitates the evaluation and interpretation of microarray data.

  9. John A. Scigliano Interviews Allan B. Ellis.

    ERIC Educational Resources Information Center

    Scigliano, John A.

    2000-01-01

    This interview with Allan Ellis focuses on a history of computer applications in education. Highlights include work at the Harvard Graduate School of Education; the New England Education Data System; and efforts to create a computer-based distance learning and development program called ISVD (Information System for Vocational Decisions). (LRW)

  10. The Curious Mind of Allan Bloom.

    ERIC Educational Resources Information Center

    Gardner, Martin

    1988-01-01

    This article reviews Allan Bloom's 1987 book, THE CLOSING OF THE AMERICAN MIND: HOW HIGHER EDUCATION HAS FAILED DEMOCRACY AND IMPOVERISHED THE SOULS OF TODAY'S STUDENTS. Compares Bloom's book with THE HIGHER LEARNING IN AMERICA, a 1930s book by Mortimer Adler and Robert Hutchins. (JDH)

  11. Analysis of open-loop conical scan pointing error and variance estimators

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1993-01-01

    General pointing error and variance estimators for an open-loop conical scan (conscan) system are derived and analyzed. The conscan algorithm is modeled as a weighted least-squares estimator whose inputs are samples of receiver carrier power and its associated measurement uncertainty. When the assumptions of constant measurement noise and zero pointing error estimation are applied, the variance equation is then strictly a function of the carrier power to uncertainty ratio and the operator selectable radius and period input to the algorithm. The performance equation is applied to a 34-m mirror-based beam-waveguide conscan system interfaced with the Block V Receiver Subsystem tracking a Ka-band (32-GHz) downlink. It is shown that for a carrier-to-noise power ratio greater than or equal to 30 dB-Hz, the conscan period for Ka-band operation may be chosen well below the current DSN minimum of 32 sec. The analysis presented forms the basis of future conscan work in both research and development as well as for the upcoming DSN antenna controller upgrade for the new DSS-24 34-m beam-waveguide antenna.

  12. A VLBI variance-covariance analysis interactive computer program. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Bock, Y.

    1980-01-01

    An interactive computer program (in FORTRAN) for the variance covariance analysis of VLBI experiments is presented for use in experiment planning, simulation studies and optimal design problems. The interactive mode is especially suited to these types of analyses providing ease of operation as well as savings in time and cost. The geodetic parameters include baseline vector parameters and variations in polar motion and Earth rotation. A discussion of the theory on which the program is based provides an overview of the VLBI process emphasizing the areas of interest to geodesy. Special emphasis is placed on the problem of determining correlations between simultaneous observations from a network of stations. A model suitable for covariance analyses is presented. Suggestions towards developing optimal observation schedules are included.

  13. Identification of mitochondrial proteins of malaria parasite using analysis of variance.

    PubMed

    Ding, Hui; Li, Dongmei

    2015-02-01

    As a parasitic protozoan, Plasmodium falciparum (P. falciparum) can cause malaria. The mitochondrial proteins of malaria parasite play important roles in the discovery of anti-malarial drug targets. Thus, accurate identification of mitochondrial proteins of malaria parasite is a key step for understanding their functions and finding potential drug targets. In this work, we developed a sequence-based method to identify the mitochondrial proteins of malaria parasite. At first, we extended adjoining dipeptide composition to g-gap dipeptide composition for discretely formulating the protein sequences. Subsequently, the analysis of variance (ANOVA) combined with incremental feature selection (IFS) was used to pick out the optimal features. Finally, the jackknife cross-validation was used to evaluate the performance of the proposed model. Evaluation results showed that the maximum accuracy of 97.1% could be achieved by using 101 optimal 5-gap dipeptides. The comparison with previous methods demonstrated that our method was accurate and efficient.
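
    The ANOVA-based feature-ranking step in this record can be sketched with scikit-learn, whose f_classif computes a per-feature one-way ANOVA F-score. The g-gap dipeptide encoding below follows the usual definition (residue pairs separated by exactly g positions); the random sequences and two-class labels are fabricated, so the ranking itself is meaningless and only the mechanics are shown.

        import numpy as np
        from itertools import product
        from sklearn.feature_selection import f_classif

        AA = "ACDEFGHIKLMNPQRSTVWY"
        PAIRS = ["".join(p) for p in product(AA, repeat=2)]      # 400 dipeptides

        def g_gap_composition(seq, g):
            """Frequencies of residue pairs separated by exactly g positions."""
            counts = dict.fromkeys(PAIRS, 0)
            for i in range(len(seq) - g - 1):
                counts[seq[i] + seq[i + g + 1]] += 1
            total = max(1, len(seq) - g - 1)
            return np.array([counts[p] / total for p in PAIRS])

        rng = np.random.default_rng(7)
        seqs = ["".join(rng.choice(list(AA), 200)) for _ in range(60)]
        labels = np.array([0] * 30 + [1] * 30)                   # fabricated classes
        X = np.vstack([g_gap_composition(s, g=5) for s in seqs])

        F, _ = f_classif(X, labels)                              # per-feature ANOVA
        F = np.nan_to_num(F)                                     # guard constant features
        print("top 5-gap dipeptides:", [PAIRS[i] for i in np.argsort(F)[::-1][:5]])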

  14. Analysis of variance on thickness and electrical conductivity measurements of carbon nanotube thin films

    NASA Astrophysics Data System (ADS)

    Li, Min-Yang; Yang, Mingchia; Vargas, Emily; Neff, Kyle; Vanli, Arda; Liang, Richard

    2016-09-01

    One of the major challenges in controlling the transfer of the electrical and mechanical properties of nanotubes into nanocomposites is the lack of adequate measurement systems to quantify the variations in bulk properties when the nanotubes are used as the reinforcement material. In this study, we conducted one-way analysis of variance (ANOVA) on thickness and conductivity measurements. By analyzing the data collected from both experienced and inexperienced operators, we found some operational details users might overlook that resulted in variations, since conductivity measurements of CNT thin films are very sensitive to thickness measurements. In addition, we demonstrated how measurement issues damaged samples and limited the number of replications, resulting in large variations in the electrical conductivity measurement results. Based on this study, we propose a faster, more reliable approach to measuring the thickness of CNT thin films that operators can follow to make these measurement processes less dependent on operator skill.

  15. fullfact: an R package for the analysis of genetic and maternal variance components from full factorial mating designs.

    PubMed

    Houde, Aimee Lee S; Pitcher, Trevor E

    2016-03-01

    Full factorial breeding designs are useful for quantifying the amount of additive genetic, nonadditive genetic, and maternal variance that explain phenotypic traits. Such variance estimates are important for examining evolutionary potential. Traditionally, full factorial mating designs have been analyzed using a two-way analysis of variance, which may produce negative variance values and is not suited for unbalanced designs. Mixed-effects models do not produce negative variance values and are suited for unbalanced designs. However, extracting the variance components, calculating significance values, and estimating confidence intervals and/or power values for the components are not straightforward using traditional analytic methods. We introduce fullfact - an R package that addresses these issues and facilitates the analysis of full factorial mating designs with mixed-effects models. Here, we summarize the functions of the fullfact package. The observed data functions extract the variance explained by random and fixed effects and provide their significance. We then calculate the additive genetic, nonadditive genetic, and maternal variance components explaining the phenotype. In particular, we integrate nonnormal error structures for estimating these components for nonnormal data types. The resampled data functions are used to produce bootstrap-t confidence intervals, which can then be plotted using a simple function. We explore the fullfact package through a worked example. This package will facilitate the analyses of full factorial mating designs in R, especially for the analysis of binary, proportion, and/or count data types and for the ability to incorporate additional random and fixed effects and power analyses.

  16. Self-validated Variance-based Methods for Sensitivity Analysis of Model Outputs

    SciTech Connect

    Tong, C

    2009-04-20

    Global sensitivity analysis (GSA) has the advantage over local sensitivity analysis in that GSA does not require strong model assumptions such as linearity or monotonicity. As a result, GSA methods such as those based on variance decomposition are well-suited to multi-physics models, which are often plagued by large nonlinearities. However, as with many other sampling-based methods, inadequate sample size can badly degrade result accuracy. A natural remedy is to adaptively increase the sample size until sufficient accuracy is obtained. This paper proposes an iterative methodology comprising mechanisms for guiding sample size selection and self-assessing result accuracy. The elegant features in the proposed methodology are the adaptive refinement strategies for stratified designs. We first apply this iterative methodology to the design of a self-validated first-order sensitivity analysis algorithm. We also extend this methodology to design a self-validated second-order sensitivity analysis algorithm based on refining replicated orthogonal array designs. Several numerical experiments are given to demonstrate the effectiveness of these methods.

  17. Analysis of variance with unbalanced data: an update for ecology & evolution.

    PubMed

    Hector, Andy; von Felten, Stefanie; Schmid, Bernhard

    2010-03-01

    1. Factorial analysis of variance (anova) with unbalanced (non-orthogonal) data is a commonplace but controversial and poorly understood topic in applied statistics. 2. We explain that anova calculates the sum of squares for each term in the model formula sequentially (type I sums of squares) and show how anova tables of adjusted sums of squares are composite tables assembled from multiple sequential analyses. A different anova is performed for each explanatory variable or interaction so that each term is placed last in the model formula in turn and adjusted for the others. 3. The sum of squares for each term in the analysis can be calculated after adjusting only for the main effects of other explanatory variables (type II sums of squares) or, controversially, for both main effects and interactions (type III sums of squares). 4. We summarize the main recent developments and emphasize the shift away from the search for the 'right' anova table in favour of presenting one or more models that best suit the objectives of the analysis.
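
    The type I/II/III distinction is easy to see numerically. Below is a small statsmodels example on a deliberately unbalanced two-way layout, with sum-to-zero contrasts so that the type III table is meaningful; the data are simulated and the imbalance pattern is arbitrary.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(8)
        a = np.repeat(["a1", "a2"], [30, 18])                   # unbalanced factor A
        b = np.concatenate([np.repeat(["b1", "b2"], [20, 10]),  # unequal cells in B
                            np.repeat(["b1", "b2"], [6, 12])])
        y = rng.normal(size=48) + (a == "a2") * 1.0 + (b == "b2") * 0.5
        df = pd.DataFrame({"A": a, "B": b, "y": y})

        # Sum-to-zero contrasts keep the type III table interpretable.
        fit = smf.ols("y ~ C(A, Sum) * C(B, Sum)", data=df).fit()
        for typ in (1, 2, 3):
            print(f"--- type {typ} sums of squares ---")
            print(sm.stats.anova_lm(fit, typ=typ))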

  18. The Cosmology of Edgar Allan Poe

    NASA Astrophysics Data System (ADS)

    Cappi, Alberto

    2011-06-01

    Eureka is a "prose poem" published in 1848, in which Edgar Allan Poe presents his original cosmology. While starting from metaphysical assumptions, Poe develops an evolving Newtonian model of the Universe which has many, and not coincidental, analogies with modern cosmology. Poe was well informed about astronomical and physical discoveries, and he was influenced by both contemporary science and ancient ideas. For these reasons, Eureka is a unique synthesis of metaphysics, art and science.

  19. [The medical history of Edgar Allan Poe].

    PubMed

    Miranda C, Marcelo

    2007-09-01

    Edgar Allan Poe, one of the best American storytellers and poets, suffered an episodic behaviour disorder partially triggered by alcohol and opiate use. Much confusion still exists about the last days of his turbulent life and the cause of his death at an early age. Different etiologies have been proposed to explain his main medical problem; however, complex partial seizures triggered by alcohol, poorly recognized at the time when Poe lived, seem to be the most acceptable hypothesis among those discussed.

  20. Princess Marie Bonaparte, Edgar Allan Poe, and psychobiography.

    PubMed

    Warner, S L

    1991-01-01

    Princess Marie Bonaparte was a colorful yet mysterious member of Freud's inner circle of psychoanalysis. In analysis with Freud beginning in 1925 (she was then 45 years old), she became a lay analyst and writer of many papers and books. Her most ambitious task was a 700-page psychobiography of Edgar Allan Poe that was first published in French in 1933. She was fascinated by Poe's gothic stories--with the return to life of dead persons and the eerie, unexpected turns of events. Her fascination with Poe can be traced to the similarity of their early traumatic life experiences. Bonaparte had lost her mother a month after her birth. Poe's father deserted the family when Edgar was two years old, and his mother died of tuberculosis when he was three. Poe's stories helped him to accommodate to these early traumatic losses. Bonaparte vicariously shared in Poe's loss and the fantasies of the return of the deceased parent in his stories. She was sensitive and empathetic to Poe's inner world because her inner world was similar. The result of this psychological fit between Poe and Bonaparte was her psychobiography, The Life and Works of Edgar Allan Poe. It was a milestone in psychobiography but limited in its psychological scope by its strong emphasis on early childhood trauma. Nevertheless it proved Bonaparte a bona fide creative psychoanalyst and not a dilettante propped up by her friendship with Freud.

  1. Variance Analysis of Wind and Natural Gas Generation under Different Market Structures: Some Observations

    SciTech Connect

    Bush, B.; Jenkin, T.; Lipowicz, D.; Arent, D. J.; Cooke, R.

    2012-01-01

    Does large scale penetration of renewable generation such as wind and solar power pose economic and operational burdens on the electricity system? A number of studies have pointed to the potential benefits of renewable generation as a hedge against the volatility and potential escalation of fossil fuel prices. Research also suggests that the lack of correlation of renewable energy costs with fossil fuel prices means that adding large amounts of wind or solar generation may also reduce the volatility of system-wide electricity costs. Such variance reduction of system costs may be of significant value to consumers due to risk aversion. The analysis in this report recognizes that the potential value of risk mitigation associated with wind generation and natural gas generation may depend on whether one considers the consumer's perspective or the investor's perspective and whether the market is regulated or deregulated. We analyze the risk and return trade-offs for wind and natural gas generation for deregulated markets based on hourly prices and load over a 10-year period using historical data in the PJM Interconnection (PJM) from 1999 to 2008. Similar analysis is then simulated and evaluated for regulated markets under certain assumptions.

  2. Effects of Violations of Data Set Assumptions When Using the Analysis of Variance and Covariance with Unequal Group Sizes.

    ERIC Educational Resources Information Center

    Johnson, Colleen Cook; Rakow, Ernest A.

    This research explored the degree to which group sizes can differ before the robustness of analysis of variance (ANOVA) and analysis of covariance (ANCOVA) is jeopardized. Monte Carlo methodology was used, allowing for the experimental investigation of potential threats to robustness under conditions common to researchers in education. The…
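
    A Monte Carlo check in the spirit of this study takes only a few lines. The sketch below estimates the empirical type I error of one-way ANOVA when a smaller group carries the larger variance, a configuration known to inflate the rate above the nominal alpha; group sizes and variances are invented.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(9)
        n_sim, alpha = 5000, 0.05
        sizes, sds = (10, 40), (4.0, 1.0)   # small group paired with big variance

        rejections = 0
        for _ in range(n_sim):
            g1 = rng.normal(0, sds[0], sizes[0])   # equal true means: H0 holds
            g2 = rng.normal(0, sds[1], sizes[1])
            if stats.f_oneway(g1, g2)[1] < alpha:
                rejections += 1
        print(f"empirical type I error: {rejections / n_sim:.3f} (nominal {alpha})")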

  3. FORTRAN IV Program for One-Way Analysis of Variance with A Priori or A Posteriori Mean Comparisons

    ERIC Educational Resources Information Center

    Fordyce, Michael W.

    1977-01-01

    A flexible Fortran program for computing one way analysis of variance is described. Requiring minimal core space, the program provides a variety of useful group statistics, all summary statistics for the analysis, and all mean comparisons for a priori or a posteriori testing. (Author/JKS)

  4. Analysis of variance of communication latencies in anesthesia: comparing means of multiple log-normal distributions.

    PubMed

    Ledolter, Johannes; Dexter, Franklin; Epstein, Richard H

    2011-10-01

    Anesthesiologists rely on communication over periods of minutes. The analysis of latencies between when messages are sent and responses obtained is an essential component of practical and regulatory assessment of clinical and managerial decision-support systems. Latency data, including times for anesthesia providers to respond to messages, have moderate (n > 20) sample sizes, large coefficients of variation (e.g., 0.60 to 2.50), and heterogeneous coefficients of variation among groups. Highly inaccurate results are obtained either by performing analysis of variance (ANOVA) in the time scale or by performing it in the log scale and then taking the exponential of the result. To overcome these difficulties, one can perform calculation of P values and confidence intervals for mean latencies based on log-normal distributions using generalized pivotal methods. In addition, fixed-effects 2-way ANOVAs can be extended to the comparison of means of log-normal distributions. Pivotal inference does not assume that the coefficients of variation of the studied log-normal distributions are the same, and can be used to assess the proportional effects of 2 factors and their interaction. Latency data can also include a human behavioral component (e.g., complete other activity first), resulting in a bimodal distribution in the log-domain (i.e., a mixture of distributions). An ANOVA can be performed on a homogeneous segment of the data, followed by a single group analysis applied to all or portions of the data using a robust method, insensitive to the probability distribution.
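
    The generalized pivotal calculation recommended above can be sketched for a single group. Following the standard Krishnamoorthy-Mathew construction (stated here from general knowledge, not from the article), a pivot for sigma^2 is (n-1)s^2/U with U ~ chi-square(n-1), and a pivot for the log of the mean, mu + sigma^2/2, follows from it; percentiles of its Monte Carlo distribution give a confidence interval for the mean latency. The latency data are simulated.

        import numpy as np

        rng = np.random.default_rng(10)
        latency = rng.lognormal(mean=3.0, sigma=1.2, size=30)   # simulated seconds

        logs = np.log(latency)
        n, ybar, s2 = len(logs), logs.mean(), logs.var(ddof=1)

        # Pivots: U ~ chi2(n-1) for sigma^2, Z ~ N(0,1) for the mean of the logs.
        m = 100_000
        Z, U = rng.standard_normal(m), rng.chisquare(n - 1, m)
        g_sigma2 = (n - 1) * s2 / U
        g_eta = ybar - Z * np.sqrt(g_sigma2 / n) + g_sigma2 / 2  # pivot for log(mean)

        lo, hi = np.exp(np.percentile(g_eta, [2.5, 97.5]))
        print(f"mean latency {np.exp(ybar + s2 / 2):.1f} s, 95% CI ({lo:.1f}, {hi:.1f}) s")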

  5. Inheritance of dermatoglyphic asymmetry and diversity traits in twins based on factor: variance decomposition analysis.

    PubMed

    Karmakar, Bibha; Malkin, Ida; Kobyliansky, Eugene

    2013-06-01

    Dermatoglyphic asymmetry and diversity traits from a large number of twins (MZ and DZ) were analyzed based on principal factors to evaluate genetic effects and common familial environmental influences on twin data, using maximum likelihood-based variance decomposition analysis. The sample consists of monozygotic (MZ) twins of two sexes (102 male pairs and 138 female pairs) and 120 pairs of dizygotic (DZ) female twins. All asymmetry (DA and FA) and diversity dermatoglyphic traits were clearly separated into factors. These perfectly corroborate earlier studies in different ethnic populations, which indicates that a common biological validity perhaps exists for the underlying component structures of dermatoglyphic characters. Our heritability results in twins clearly showed that DA_F2 is inherited mostly in dominant type (28.0%) and FA_F1 is additive (60.7%), but no significant difference between sexes was observed for these factors. Inheritance is also very prominent in diversity Factor 1, which exactly corroborates our previous findings. The present results are similar to earlier results on finger ridge count diversity in twin data, which suggested that finger ridge count diversity is under genetic control.

  6. Contrasting genetic architectures of schizophrenia and other complex diseases using fast variance-components analysis.

    PubMed

    Loh, Po-Ru; Bhatia, Gaurav; Gusev, Alexander; Finucane, Hilary K; Bulik-Sullivan, Brendan K; Pollack, Samuela J; de Candia, Teresa R; Lee, Sang Hong; Wray, Naomi R; Kendler, Kenneth S; O'Donovan, Michael C; Neale, Benjamin M; Patterson, Nick; Price, Alkes L

    2015-12-01

    Heritability analyses of genome-wide association study (GWAS) cohorts have yielded important insights into complex disease architecture, and increasing sample sizes hold the promise of further discoveries. Here we analyze the genetic architectures of schizophrenia in 49,806 samples from the PGC and nine complex diseases in 54,734 samples from the GERA cohort. For schizophrenia, we infer an overwhelmingly polygenic disease architecture in which ≥71% of 1-Mb genomic regions harbor ≥1 variant influencing schizophrenia risk. We also observe significant enrichment of heritability in GC-rich regions and in higher-frequency SNPs for both schizophrenia and GERA diseases. In bivariate analyses, we observe significant genetic correlations (ranging from 0.18 to 0.85) for several pairs of GERA diseases; genetic correlations were on average 1.3 times stronger than the correlations of overall disease liabilities. To accomplish these analyses, we developed a fast algorithm for multicomponent, multi-trait variance-components analysis that overcomes prior computational barriers that made such analyses intractable at this scale.

  7. Spatial Variance in Resting fMRI Networks of Schizophrenia Patients: An Independent Vector Analysis.

    PubMed

    Gopal, Shruti; Miller, Robyn L; Michael, Andrew; Adali, Tulay; Cetin, Mustafa; Rachakonda, Srinivas; Bustillo, Juan R; Cahill, Nathan; Baum, Stefi A; Calhoun, Vince D

    2016-01-01

    Spatial variability in resting functional MRI (fMRI) brain networks has not been well studied in schizophrenia, a disease known for both neurodevelopmental and widespread anatomic changes. Motivated by abundant evidence of neuroanatomical variability from previous studies of schizophrenia, we draw upon a relatively new approach called independent vector analysis (IVA) to assess this variability in resting fMRI networks. IVA is a blind-source separation algorithm, which segregates fMRI data into temporally coherent but spatially independent networks and has been shown to be especially good at capturing spatial variability among subjects in the extracted networks. We introduce several new ways to quantify differences in variability of IVA-derived networks between schizophrenia patients (SZs = 82) and healthy controls (HCs = 89). Voxelwise amplitude analyses showed significant group differences in the spatial maps of auditory cortex, the basal ganglia, the sensorimotor network, and visual cortex. Tests for differences (HC-SZ) in the spatial variability maps suggest that, at rest, SZs exhibit more activity within externally focused sensory and integrative networks and less activity in the default mode network thought to be related to internal reflection. Additionally, tests for difference of variance between groups further emphasize that SZs exhibit greater network variability. These results, consistent with our prediction of increased spatial variability within SZs, enhance our understanding of the disease and suggest that it is not just the amplitude of connectivity that is different in schizophrenia, but also the consistency in spatial connectivity patterns across subjects. PMID:26106217

  8. Adjusting stream-sediment geochemical maps in the Austrian Bohemian Massif by analysis of variance

    USGS Publications Warehouse

    Davis, J.C.; Hausberger, G.; Schermann, O.; Bohling, G.

    1995-01-01

    The Austrian portion of the Bohemian Massif is a Precambrian terrane composed mostly of highly metamorphosed rocks intruded by a series of granitoids that are petrographically similar. Rocks are exposed poorly and the subtle variations in rock type are difficult to map in the field. A detailed geochemical survey of stream sediments in this region has been conducted and included as part of the Geochemischer Atlas der Republik Österreich, and the variations in stream sediment composition may help refine the geological interpretation. In an earlier study, multivariate analysis of variance (MANOVA) was applied to the stream-sediment data in order to minimize unwanted sampling variation and emphasize relationships between stream sediments and rock types in sample catchment areas. The estimated coefficients were used successfully to correct for the sampling effects throughout most of the region, but also introduced an overcorrection in some areas that seems to result from consistent but subtle differences in composition of specific rock types. By expanding the model to include an additional factor reflecting the presence of a major tectonic unit, the Rohrbach block, the overcorrection is removed. This iterative process simultaneously refines both the geochemical map by removing extraneous variation and the geological map by suggesting a more detailed classification of rock types. © 1995 International Association for Mathematical Geology.

  9. Contrasting genetic architectures of schizophrenia and other complex diseases using fast variance-components analysis.

    PubMed

    Loh, Po-Ru; Bhatia, Gaurav; Gusev, Alexander; Finucane, Hilary K; Bulik-Sullivan, Brendan K; Pollack, Samuela J; de Candia, Teresa R; Lee, Sang Hong; Wray, Naomi R; Kendler, Kenneth S; O'Donovan, Michael C; Neale, Benjamin M; Patterson, Nick; Price, Alkes L

    2015-12-01

    Heritability analyses of genome-wide association study (GWAS) cohorts have yielded important insights into complex disease architecture, and increasing sample sizes hold the promise of further discoveries. Here we analyze the genetic architectures of schizophrenia in 49,806 samples from the PGC and nine complex diseases in 54,734 samples from the GERA cohort. For schizophrenia, we infer an overwhelmingly polygenic disease architecture in which ≥71% of 1-Mb genomic regions harbor ≥1 variant influencing schizophrenia risk. We also observe significant enrichment of heritability in GC-rich regions and in higher-frequency SNPs for both schizophrenia and GERA diseases. In bivariate analyses, we observe significant genetic correlations (ranging from 0.18 to 0.85) for several pairs of GERA diseases; genetic correlations were on average 1.3 times stronger than the correlations of overall disease liabilities. To accomplish these analyses, we developed a fast algorithm for multicomponent, multi-trait variance-components analysis that overcomes prior computational barriers that made such analyses intractable at this scale. PMID:26523775

  10. Contrasting genetic architectures of schizophrenia and other complex diseases using fast variance components analysis

    PubMed Central

    Bhatia, Gaurav; Gusev, Alexander; Finucane, Hilary K; Bulik-Sullivan, Brendan K; Pollack, Samuela J; de Candia, Teresa R; Lee, Sang Hong; Wray, Naomi R; Kendler, Kenneth S; O’Donovan, Michael C; Neale, Benjamin M; Patterson, Nick

    2015-01-01

    Heritability analyses of GWAS cohorts have yielded important insights into complex disease architecture, and increasing sample sizes hold the promise of further discoveries. Here, we analyze the genetic architecture of schizophrenia in 49,806 samples from the PGC, and nine complex diseases in 54,734 samples from the GERA cohort. For schizophrenia, we infer an overwhelmingly polygenic disease architecture in which ≥71% of 1Mb genomic regions harbor ≥1 variant influencing schizophrenia risk. We also observe significant enrichment of heritability in GC-rich regions and in higher-frequency SNPs for both schizophrenia and GERA diseases. In bivariate analyses, we observe significant genetic correlations (ranging from 0.18 to 0.85) among several pairs of GERA diseases; genetic correlations were on average 1.3x stronger than correlations of overall disease liabilities. To accomplish these analyses, we developed a fast algorithm for multi-component, multi-trait variance components analysis that overcomes prior computational barriers that made such analyses intractable at this scale. PMID:26523775

  11. Odor measurements according to EN 13725: A statistical analysis of variance components

    NASA Astrophysics Data System (ADS)

    Klarenbeek, Johannes V.; Ogink, Nico W. M.; van der Voet, Hilko

    2014-04-01

    In Europe, dynamic olfactometry, as described by the European standard EN 13725, has become the preferred method for evaluating odor emissions emanating from industrial and agricultural sources. Key elements of this standard are the quality criteria for trueness and precision (repeatability). Both are linked to standard values of n-butanol in nitrogen. It is assumed in this standard that whenever a laboratory complies with the overall sensory quality criteria for n-butanol, the quality level is transferable to other, environmental, odors. Although olfactometry is well established, little has been done to investigate inter-laboratory variance (reproducibility). Therefore, the objective of this study was to estimate the reproducibility of odor laboratories complying with EN 13725 as well as to investigate the transferability of n-butanol quality criteria to other odorants. Based upon the statistical analysis of 412 odor measurements on 33 sources, distributed in 10 proficiency tests, it was established that laboratory, panel and panel session are components of variance that significantly differ between n-butanol and other odorants (α = 0.05). This finding does not support the transferability of the quality criteria, as determined on n-butanol, to other odorants and as such is a cause for reconsideration of the present single reference odorant as laid down in EN 13725. In the case of non-butanol odorants, the repeatability standard deviation (sr) and reproducibility standard deviation (sR) were calculated to be 0.108 and 0.282 respectively (log base-10). The latter implies that the difference between two consecutive single measurements, performed on the same testing material by two or more laboratories under reproducibility conditions, will not be larger than a factor of 6.3 in 95% of cases. As far as n-butanol odorants are concerned, it was found that the present repeatability standard deviation (sr = 0.108) compares favorably to that of EN 13725 (sr = 0.172). It is therefore
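
    The factor of 6.3 quoted above follows directly from the reported sR. A minimal back-calculation sketch in Python, assuming a two-sided ~95% limit of about 2·sqrt(2) standard deviations for the difference of two independent log10 measurements (this convention is an assumption on our part, not stated in the abstract):

        import math

        s_r = 0.108  # repeatability standard deviation (log10), from the abstract
        s_R = 0.282  # reproducibility standard deviation (log10), from the abstract

        # The difference of two independent log10 measurements has standard
        # deviation sqrt(2)*s_R; a ~95% (two-sigma) band, back-transformed
        # from log10, gives the maximum expected ratio of two measurements.
        ratio_95 = 10 ** (2 * math.sqrt(2) * s_R)
        print(f"95% reproducibility ratio: {ratio_95:.1f}")  # ~6.3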

  12. Variance-based global sensitivity analysis for multiple scenarios and models with implementation using sparse grid collocation

    NASA Astrophysics Data System (ADS)

    Dai, Heng; Ye, Ming

    2015-09-01

    Sensitivity analysis is a vital tool in hydrological modeling to identify influential parameters for inverse modeling and uncertainty analysis, and variance-based global sensitivity analysis has gained popularity. However, the conventional global sensitivity indices are defined with consideration of only parametric uncertainty. Based on a hierarchical structure of parameter, model, and scenario uncertainties and on recently developed techniques of model- and scenario-averaging, this study derives new global sensitivity indices for multiple models and multiple scenarios. To reduce the computational cost of variance-based global sensitivity analysis, the sparse grid collocation method is used to evaluate the mean and variance terms involved in the variance-based global sensitivity analysis. In a simple synthetic case of groundwater flow and reactive transport, it is demonstrated that the global sensitivity indices vary substantially between the four models and three scenarios. Not considering the model and scenario uncertainties might result in biased identification of important model parameters. This problem is resolved by using the new indices defined for multiple models and/or multiple scenarios. This is particularly true when the sensitivity indices and model/scenario probabilities vary substantially. The sparse grid collocation method dramatically reduces the computational cost, in comparison with the popular quasi-random sampling method. The new framework of global sensitivity analysis is mathematically general, and can be applied to a wide range of hydrologic and environmental problems.
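
    For readers unfamiliar with variance-based indices, a minimal first-order Sobol index estimate on a toy function, using a pick-freeze (Saltelli-style) Monte Carlo estimator; the three-parameter model here is hypothetical and merely stands in for the groundwater models of the study:

        import numpy as np

        rng = np.random.default_rng(0)

        def model(x):
            # Hypothetical stand-in for a groundwater flow/transport model:
            # parameter 1 dominates, parameter 3 is nearly non-influential.
            return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 2]

        n, d = 100_000, 3
        A = rng.uniform(-1.0, 1.0, (n, d))
        B = rng.uniform(-1.0, 1.0, (n, d))
        yA, yB = model(A), model(B)
        var_y = yA.var()

        # First-order index S_i = V[E(Y|X_i)] / V[Y], estimated by replacing
        # column i of A with column i of B ("pick-freeze").
        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]
            S_i = np.mean(yB * (model(ABi) - yA)) / var_y
            print(f"S_{i + 1} ~ {S_i:.3f}")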

  13. Methods to estimate the between-study variance and its uncertainty in meta-analysis.

    PubMed

    Veroniki, Areti Angeliki; Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian P T; Langan, Dean; Salanti, Georgia

    2016-03-01

    Meta-analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between-study variability, which is typically modelled using a between-study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between-study variance, has been long challenged. Our aim is to identify known methods for estimation of the between-study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between-study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel and for continuous data the restricted maximum likelihood estimator are better alternatives to estimate the between-study variance. Based on the scenarios and results presented in the published studies, we recommend the Q-profile method and the alternative approach based on a 'generalised Cochran between-study variance statistic' to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence-based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. PMID:26332144
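
    For orientation, minimal NumPy sketches of two of the estimators discussed: the widely used DerSimonian-Laird method-of-moments estimator and the Paule-Mandel estimator recommended above. The bisection bound and the toy data are arbitrary illustrative choices:

        import numpy as np

        def dersimonian_laird(y, v):
            """Method-of-moments (DerSimonian-Laird) between-study variance.
            y: study effect estimates; v: their within-study variances."""
            w = 1.0 / v
            ybar = np.sum(w * y) / np.sum(w)
            Q = np.sum(w * (y - ybar) ** 2)
            k = len(y)
            denom = np.sum(w) - np.sum(w ** 2) / np.sum(w)
            return max(0.0, (Q - (k - 1)) / denom)

        def paule_mandel(y, v, tol=1e-10):
            """Paule-Mandel estimator: choose tau2 so that the generalised Q
            statistic equals its expectation, k - 1 (bisection search)."""
            k = len(y)
            def gen_q(tau2):
                w = 1.0 / (v + tau2)
                ybar = np.sum(w * y) / np.sum(w)
                return np.sum(w * (y - ybar) ** 2)
            if gen_q(0.0) <= k - 1:   # no evidence of heterogeneity
                return 0.0
            lo, hi = 0.0, 10.0 * (np.var(y) + np.max(v))  # ad hoc upper bound
            while hi - lo > tol:      # gen_q decreases in tau2, so bisect
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if gen_q(mid) > k - 1 else (lo, mid)
            return 0.5 * (lo + hi)

        y = np.array([0.12, 0.35, 0.10, 0.48, 0.26])    # toy effect estimates
        v = np.array([0.01, 0.02, 0.015, 0.03, 0.012])  # within-study variances
        print(dersimonian_laird(y, v), paule_mandel(y, v))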

  14. Heteroscedastic Tests Statistics for One-Way Analysis of Variance: The Trimmed Means and Hall's Transformation Conjunction

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2005-01-01

    To deal with nonnormal and heterogeneous data for the one-way fixed effect analysis of variance model, the authors adopted a trimmed means method in conjunction with Hall's invertible transformation into a heteroscedastic test statistic (Alexander-Govern test or Welch test). The results of simulation experiments showed that the proposed technique…

  15. Allocating Sample Sizes to Reduce Budget for Fixed-Effect 2×2 Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2016-01-01

    This article discusses the sample size requirements for the interaction, row, and column effects, respectively, by forming a linear contrast for a 2×2 factorial design for fixed-effects heterogeneous analysis of variance. The proposed method uses the Welch t test and its corresponding degrees of freedom to calculate the final sample size in a…

  16. Approximate confidence intervals for moment-based estimators of the between-study variance in random effects meta-analysis.

    PubMed

    Jackson, Dan; Bowden, Jack; Baker, Rose

    2015-12-01

    Moment-based estimators of the between-study variance are very popular when performing random effects meta-analyses. This type of estimation has many advantages including computational and conceptual simplicity. Furthermore, by using these estimators in large samples, valid meta-analyses can be performed without the assumption that the treatment effects follow a normal distribution. Recently proposed moment-based confidence intervals for the between-study variance are exact under the random effects model but are quite elaborate. Here, we present a much simpler method for calculating approximate confidence intervals of this type. This method uses variance-stabilising transformations as its basis and can be used for a very wide variety of moment-based estimators in both the random effects meta-analysis and meta-regression models.

  17. Analysis of variance is easily misapplied in the analysis of randomized trials: a critique and discussion of alternative statistical approaches.

    PubMed

    Vickers, Andrew J

    2005-01-01

    Analysis of variance (ANOVA) is a statistical method that is widely used in the psychosomatic literature to analyze the results of randomized trials, yet ANOVA does not provide an estimate for the difference between groups, the key variable of interest in a randomized trial. Although the use of ANOVA is frequently justified on the grounds that a trial incorporates more than two groups, the hypothesis tested by ANOVA for these trials ("Are all groups equivalent?") is often scientifically uninteresting. Regression methods are not only applicable to trials with many groups, but can be designed to address specific questions arising from the study design. ANOVA is also frequently used for trials with repeated measures, but the consequent reporting of "group effects," "time effects," and "time-by-group interactions" is a distraction from statistics of clinical and scientific value. Given that ANOVA is easily misapplied in the analysis of randomized trials, alternative approaches such as regression methods should be considered in preference.
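
    The central complaint (ANOVA asks "are all groups equivalent?" without estimating the difference) is easy to make concrete. A hedged sketch with simulated two-arm trial data, where the OLS slope on a treatment indicator is exactly the between-group difference in means, with a standard error attached:

        import numpy as np

        rng = np.random.default_rng(1)
        # Simulated two-arm trial: treatment shifts the outcome by 2 units.
        control = rng.normal(10.0, 3.0, 60)
        treated = rng.normal(12.0, 3.0, 60)

        y = np.concatenate([control, treated])
        x = np.concatenate([np.zeros(60), np.ones(60)])  # treatment indicator

        # OLS with an intercept: the slope equals the difference in group
        # means, the quantity of interest that a bare ANOVA F test never reports.
        X = np.column_stack([np.ones_like(x), x])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        s2 = resid @ resid / (len(y) - 2)
        se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
        print(f"difference = {beta[1]:.2f} +/- {1.96 * se:.2f} (approx. 95% CI)")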

  18. [Analysis of variance of bacterial counts in milk. 1. Characterization of total variance and the components of variance random sampling error, methodologic error and variation between parallel errors during storage].

    PubMed

    Böhmer, L; Hildebrandt, G

    1998-01-01

    In contrast to the prevailing automatized chemical analytical methods, classical microbiological techniques are subject to considerable material- and operator-dependent sources of error. These effects must be objectively considered when assessing the reliability and representativeness of a test result. As an example of error analysis, the deviation of bacterial counts and the influence of the time of testing, the bacterial species involved (total bacterial count, coliform count) and the detection method used (pour-/spread-plate) were determined by repeated testing of parallel samples of pasteurized (stored for 8 days at 10 degrees C) and raw (stored for 3 days at 6 degrees C) milk. Separate characterization of the deviation components, namely the unavoidable random sampling error as well as the methodological error and the variation between parallel samples, was made possible by a test design to which variance analysis was applied. Based on the results of the study, the following conclusions can be drawn: 1. Immediately after filling, the total count deviation in milk mainly followed the Poisson distribution model and allowed a reliable hygiene evaluation of lots even with few samples; subsequently, regardless of the examination procedure used, the setting up of parallel dilution series can be disregarded. 2. With increasing storage period, bacterial multiplication, especially of psychrotrophs, leads to unpredictable changes in the bacterial profile and density. With the increase in errors between samples, it is common to find packages which have acceptable microbiological quality but are already spoiled by the labeled expiry date. As a consequence, a uniform acceptance or rejection of the batch is seldom possible. 3. Because the contamination level of coliforms in certified raw milk mostly lies near the detection limit, coliform counts with high relative deviation are to be expected in milk directly after filling. Since no bacterial multiplication takes place
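
    Conclusion 1 rests on the Poisson model for colony counts, under which a count N carries an unavoidable random sampling error of sqrt(N), i.e. a relative error of 1/sqrt(N). A small illustration (counts chosen arbitrarily):

        import math

        # Relative random sampling error of a Poisson-distributed plate count.
        for n in (10, 30, 100, 300):
            print(f"count {n:4d}: relative sampling error ~ {100 / math.sqrt(n):.1f}%")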

  19. Analysis of Quantitative Traits in Two Long-Term Randomly Mated Soybean Populations I. Genetic Variances

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The genetic effects of long-term random mating and natural selection aided by genetic male sterility were evaluated in two soybean [Glycine max (L.) Merr.] populations: RSII and RSIII. Population means, variances, and heritabilities were estimated to determine the effects of 26 generations of random...

  20. Using Robust Variance Estimation to Combine Multiple Regression Estimates with Meta-Analysis

    ERIC Educational Resources Information Center

    Williams, Ryan

    2013-01-01

    The purpose of this study was to explore the use of robust variance estimation for combining commonly specified multiple regression models and for combining sample-dependent focal slope estimates from diversely specified models. The proposed estimator obviates traditionally required information about the covariance structure of the dependent…

  1. Power Analysis of Selected Parametric and Nonparametric Tests for Heterogeneous Variances in Non-Normal Distributions.

    ERIC Educational Resources Information Center

    Olejnik, Stephen F.; Algina, James

    The present investigation developed power curves for two parametric and two nonparametric procedures for testing the equality of population variances. Both normal and non-normal distributions were considered for the two group design with equal and unequal sample frequencies. The results indicated that when population distributions differed only in…

  2. A Genome-Wide Association Analysis Reveals Epistatic Cancellation of Additive Genetic Variance for Root Length in Arabidopsis thaliana.

    PubMed

    Lachowiec, Jennifer; Shen, Xia; Queitsch, Christine; Carlborg, Örjan

    2015-01-01

    Efforts to identify loci underlying complex traits generally assume that most genetic variance is additive. Here, we examined the genetics of Arabidopsis thaliana root length and found that the genomic narrow-sense heritability for this trait in the examined population was statistically zero. The low amount of additive genetic variance that could be captured by the genome-wide genotypes likely explains why no associations to root length could be found using standard additive-model-based genome-wide association (GWA) approaches. However, as the broad-sense heritability for root length was significantly larger, and primarily due to epistasis, we also performed an epistatic GWA analysis to map loci contributing to the epistatic genetic variance. Four interacting pairs of loci were revealed, involving seven chromosomal loci that passed a standard multiple-testing corrected significance threshold. The genotype-phenotype maps for these pairs revealed epistasis that cancelled out the additive genetic variance, explaining why these loci were not detected in the additive GWA analysis. Small population sizes, such as in our experiment, increase the risk of identifying false epistatic interactions due to testing for associations with very large numbers of multi-marker genotypes in few phenotyped individuals. Therefore, we estimated the false-positive risk using a new statistical approach that suggested half of the associated pairs to be true positive associations. Our experimental evaluation of candidate genes within the seven associated loci suggests that this estimate is conservative; we identified functional candidate genes that affected root development in four loci that were part of three of the pairs. The statistical epistatic analyses were thus indispensable for confirming known, and identifying new, candidate genes for root length in this population of wild-collected A. thaliana accessions. We also illustrate how epistatic cancellation of the additive genetic variance

  3. A Genome-Wide Association Analysis Reveals Epistatic Cancellation of Additive Genetic Variance for Root Length in Arabidopsis thaliana

    PubMed Central

    Lachowiec, Jennifer; Shen, Xia; Queitsch, Christine; Carlborg, Örjan

    2015-01-01

    Efforts to identify loci underlying complex traits generally assume that most genetic variance is additive. Here, we examined the genetics of Arabidopsis thaliana root length and found that the genomic narrow-sense heritability for this trait in the examined population was statistically zero. The low amount of additive genetic variance that could be captured by the genome-wide genotypes likely explains why no associations to root length could be found using standard additive-model-based genome-wide association (GWA) approaches. However, as the broad-sense heritability for root length was significantly larger, and primarily due to epistasis, we also performed an epistatic GWA analysis to map loci contributing to the epistatic genetic variance. Four interacting pairs of loci were revealed, involving seven chromosomal loci that passed a standard multiple-testing corrected significance threshold. The genotype-phenotype maps for these pairs revealed epistasis that cancelled out the additive genetic variance, explaining why these loci were not detected in the additive GWA analysis. Small population sizes, such as in our experiment, increase the risk of identifying false epistatic interactions due to testing for associations with very large numbers of multi-marker genotypes in few phenotyped individuals. Therefore, we estimated the false-positive risk using a new statistical approach that suggested half of the associated pairs to be true positive associations. Our experimental evaluation of candidate genes within the seven associated loci suggests that this estimate is conservative; we identified functional candidate genes that affected root development in four loci that were part of three of the pairs. The statistical epistatic analyses were thus indispensable for confirming known, and identifying new, candidate genes for root length in this population of wild-collected A. thaliana accessions. We also illustrate how epistatic cancellation of the additive genetic variance

  4. Analysis the processing algorithm for the frequency measurement variance of the acousto-optic spectrum analyzer

    NASA Astrophysics Data System (ADS)

    He, Qi-rui; Gan, Lu; Zhou, Ying; Gao, Chun-ming; Zhang, Xi-ren

    2015-08-01

    When the acousto-optic device operates in the Bragg regime, nonlinearity affects the diffracted beam. There are errors between the peak position of the diffracted beam deflection and the input signal's frequency, which reduce the frequency measurement accuracy of the acousto-optic spectrum analyzer. Using the existing optical experimental platform, we first eliminated the CCD background noise by thresholding, and then processed the data by four methods: the peak value method, the Gaussian fitting method, the squared centroids method and the Hilbert transform method. The smallest frequency measurement variance, 31.8 kHz², was obtained with the Gaussian fitting method. This provides theoretical support for reducing the frequency measurement variance of the acousto-optic spectrum analyzer.
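
    A minimal sketch of the Gaussian-fitting approach on a simulated CCD line profile (spot width, noise level and true peak position are invented, not taken from the paper's platform); sub-pixel localisation of the centre is what reduces the measurement variance relative to naive peak picking:

        import numpy as np
        from scipy.optimize import curve_fit

        def gaussian(x, a, mu, sigma, c):
            return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + c

        # Simulated CCD profile of a diffraction spot whose true centre
        # falls between pixels, where naive peak-picking loses accuracy.
        rng = np.random.default_rng(2)
        pix = np.arange(64, dtype=float)
        true_mu = 31.37
        profile = gaussian(pix, 100.0, true_mu, 3.0, 5.0) + rng.normal(0.0, 2.0, pix.size)

        peak_pixel = pix[np.argmax(profile)]                # peak value method
        p0 = (profile.max(), peak_pixel, 2.0, profile.min())
        popt, _ = curve_fit(gaussian, pix, profile, p0=p0)  # Gaussian fitting
        print(f"peak method: {peak_pixel:.2f}  Gaussian fit: {popt[1]:.2f}  true: {true_mu}")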

  5. Quantitative Genetic Analysis of Temperature Regulation in MUS MUSCULUS. I. Partitioning of Variance

    PubMed Central

    Lacy, Robert C.; Lynch, Carol Becker

    1979-01-01

    Heritabilities (from parent-offspring regression) and intraclass correlations of full sibs for a variety of traits were estimated from 225 litters of a heterogeneous stock (HS/Ibg) of laboratory mice. Initial variance partitioning suggested different adaptive functions for physiological, morphological and behavioral adjustments with respect to their thermoregulatory significance. Metabolic heat-production mechanisms appear to have reached their genetic limits, with little additive genetic variance remaining. This study provided no genetic evidence that body size has a close directional association with fitness in cold environments, since heritability estimates for weight gain and adult weight were similar and high, whether or not the animals were exposed to cold. Behavioral heat conservation mechanisms also displayed considerable amounts of genetic variability. However, due to strong evidence from numerous other studies that behavior serves an important adaptive role for temperature regulation in small mammals, we suggest that fluctuating selection pressures may have acted to maintain heritable variation in these traits. PMID:17248909
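
    The parent-offspring regression behind these heritability estimates has a compact simulation analogue. A sketch under a purely additive model with total phenotypic variance scaled to 1 (the value of h² and the sample size are arbitrary); the offspring-on-midparent slope recovers the narrow-sense heritability:

        import numpy as np

        rng = np.random.default_rng(4)
        n, h2 = 1000, 0.4

        # Additive toy model: phenotype = breeding value + environment;
        # offspring breeding value = midparent value + segregation noise.
        g_mum = rng.normal(0.0, np.sqrt(h2), n)
        g_dad = rng.normal(0.0, np.sqrt(h2), n)
        env = lambda: rng.normal(0.0, np.sqrt(1.0 - h2), n)
        p_mid = 0.5 * ((g_mum + env()) + (g_dad + env()))
        g_off = 0.5 * (g_mum + g_dad) + rng.normal(0.0, np.sqrt(h2 / 2), n)
        p_off = g_off + env()

        # The offspring-on-midparent regression slope estimates h2 directly.
        slope = np.cov(p_mid, p_off)[0, 1] / np.var(p_mid, ddof=1)
        print(f"estimated h2 ~ {slope:.2f} (true {h2})")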

  6. How to detect Edgar Allan Poe's 'purloined letter,' or cross-correlation algorithms in digitized video images for object identification, movement evaluation, and deformation analysis

    NASA Astrophysics Data System (ADS)

    Dost, Michael; Vogel, Dietmar; Winkler, Thomas; Vogel, Juergen; Erb, Rolf; Kieselstein, Eva; Michel, Bernd

    2003-07-01

    Cross correlation analysis of digitised grey scale patterns is based on - at least - two images which are compared to each other. The comparison is performed by applying a two-dimensional cross correlation algorithm to a set of local intensity submatrices taken from the pattern matrices of the reference and comparison images in the surroundings of predefined points of interest. Established as an outstanding NDE tool for 2D and 3D deformation field analysis with a focus on micro- and nanoscale applications (microDAC and nanoDAC), the method exhibits additional potential for far wider applications that could help advance homeland security. Because the cross correlation algorithm seems, in some ways, to imitate the "smart" properties of human vision, this "field-of-surface-related" method can provide alternative solutions to some object and process recognition problems that are difficult to solve with more classic "object-related" image processing methods. Detecting differences between two or more images using cross correlation techniques can open new and unusual applications in the identification and detection of hidden objects or objects of unknown origin, in movement or displacement field analysis, and in some aspects of biometric analysis that could be of special interest for homeland security.
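
    A bare-bones sketch of the core operation (a reference submatrix located in a comparison image by exhaustive normalized cross-correlation over a small search window); this is a generic illustration, not the microDAC/nanoDAC implementation itself:

        import numpy as np

        def ncc(a, b):
            """Normalized cross-correlation coefficient of equal-size patches."""
            a = a - a.mean()
            b = b - b.mean()
            return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

        def match_patch(reference, image, top_left, size, search=5):
            """Locate a reference submatrix in a comparison image by
            exhaustive NCC search within +/- `search` pixels."""
            r0, c0 = top_left
            patch = reference[r0:r0 + size, c0:c0 + size]
            best = (-2.0, (0, 0))
            for dr in range(-search, search + 1):
                for dc in range(-search, search + 1):
                    cand = image[r0 + dr:r0 + dr + size, c0 + dc:c0 + dc + size]
                    if cand.shape == patch.shape:
                        best = max(best, (ncc(patch, cand), (dr, dc)))
            return best  # (correlation peak, displacement vector)

        rng = np.random.default_rng(3)
        img = rng.random((64, 64))
        moved = np.roll(img, (2, -1), axis=(0, 1))    # known displacement
        print(match_patch(img, moved, (20, 20), 16))  # -> (~1.0, (2, -1))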

  7. Obituary: Allan R. Sandage (1926-2010)

    NASA Astrophysics Data System (ADS)

    Devorkin, David

    2011-12-01

    Allan Rex Sandage died of pancreatic cancer at his home in San Gabriel, California, in the shadow of Mount Wilson, on November 13, 2010. Born in Iowa City, Iowa, on June 18, 1926, he was 84 years old at his death, leaving his wife, former astronomer Mary Connelly Sandage, and two sons, David and John. He also left a legacy to the world of astronomical knowledge that has long been universally admired and appreciated, making his name synonymous with late 20th-Century observational cosmology. The only child of Charles Harold Sandage, a professor of advertising who helped establish that academic specialty after obtaining a PhD in business administration, and Dorothy Briggs Sandage, whose father was president of Graceland College in Iowa, Allan Sandage grew up in a thoroughly intellectual, university oriented atmosphere but also a peripatetic one taking him to Philadelphia and later to Illinois as his father rose in his career. During his 2 years in Philadelphia, at about age eleven, Allan developed a curiosity about astronomy stimulated by a friend's interest. His father bought him a telescope and he used it to systematically record sunspots, and later attempted to make a larger 6-inch reflector, a project left uncompleted. As a teenager Allan read widely, especially astronomy books of all kinds, recalling in particular The Glass Giant of Palomar as well as popular works by Eddington and Hubble (The Realm of the Nebulae) in the early 1940s. Although his family was Mormon, of the Reorganized Church, he was not practicing, though he later sporadically attended a Methodist church in Oxford, Iowa during his college years. Sandage knew by his high school years that he would engage in some form of intellectual life related to astronomy. He particularly recalls an influential science teacher at Miami University in Oxford, Ohio named Ray Edwards, who inspired him to think critically and "not settle for any hand-waving of any kind." [Interview of Allan Rex Sandage by Spencer

  8. Uranium series dating of Allan Hills ice

    NASA Astrophysics Data System (ADS)

    Fireman, E. L.

    1986-03-01

    Uranium-238 decay series nuclides dissolved in Antarctic ice samples were measured in areas of both high and low concentrations of volcanic glass shards. Ice from the Allan Hills site (high shard content) had high Ra-226, Th-230 and U-234 activities but similarly low U-238 activities in comparison with Antarctic ice samples without shards. The Ra-226, Th-230 and U-234 excesses were found to be proportional to the shard content, while the U-238 decay series results were consistent with the assumption that alpha decay products recoiled into the ice from the shards. Through this method of uranium series dating, it was learned that the Allan Hills Cul de Sac ice is approximately 325,000 years old.

  9. Uranium series dating of Allan Hills ice

    NASA Technical Reports Server (NTRS)

    Fireman, E. L.

    1986-01-01

    Uranium-238 decay series nuclides dissolved in Antarctic ice samples were measured in areas of both high and low concentrations of volcanic glass shards. Ice from the Allan Hills site (high shard content) had high Ra-226, Th-230 and U-234 activities but similarly low U-238 activities in comparison with Antarctic ice samples without shards. The Ra-226, Th-230 and U-234 excesses were found to be proportional to the shard content, while the U-238 decay series results were consistent with the assumption that alpha decay products recoiled into the ice from the shards. Through this method of uranium series dating, it was learned that the Allan Hills Cul de Sac ice is approximately 325,000 years old.

  10. Statistical model to perform error analysis of curve fits of wind tunnel test data using the techniques of analysis of variance and regression analysis

    NASA Technical Reports Server (NTRS)

    Alston, D. W.

    1981-01-01

    The objective of this research was to design a statistical model that could perform an error analysis of curve fits of wind tunnel test data using analysis of variance and regression analysis techniques. Four related subproblems were defined, and by solving each of these a solution to the general research problem was obtained. The capabilities of the resulting statistical model are considered. The least squares fit is used to determine the nature of the force, moment, and pressure data. The order of the curve fit is increased in order to delete the quadratic effect in the residuals. The analysis of variance is used to determine the magnitude and effect of the error factor associated with the experimental data.

  11. Assessing land cover performance in Senegal, West Africa using 1-km integrated NDVI and local variance analysis

    USGS Publications Warehouse

    Budde, M.E.; Tappan, G.; Rowland, J.; Lewis, J.; Tieszen, L.L.

    2004-01-01

    We calculated the seasonal integrated normalized difference vegetation index (NDVI) for each of 7 years using a time-series of 1-km data from the Advanced Very High Resolution Radiometer (AVHRR) (1992-93, 1995) and SPOT Vegetation (1998-2001) sensors. We used a local variance technique to identify each pixel as normal or either positively or negatively anomalous when compared to its surroundings. We then summarized the number of years that a given pixel was identified as an anomaly. The resulting anomaly maps were analysed using Landsat TM imagery and extensive ground knowledge to assess the results. This technique identified anomalies that can be linked to numerous anthropogenic impacts including agricultural and urban expansion, maintenance of protected areas and increased fallow. Local variance analysis is a reliable method for assessing vegetation degradation resulting from human pressures or increased land productivity from natural resource management practices. © 2004 Published by Elsevier Ltd.
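
    One possible reading of the local variance technique is a moving-window z-score that flags each pixel against its neighborhood mean and standard deviation (window size and threshold below are arbitrary illustrative choices, not the paper's); summing the per-year flag maps then yields the "number of years anomalous" summary described above:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def local_anomalies(ndvi, window=9, k=1.5):
            """Flag pixels whose integrated NDVI departs from the local
            neighborhood mean by more than k local standard deviations.
            Returns -1 (negative anomaly), 0 (normal), +1 (positive)."""
            mean = uniform_filter(ndvi, size=window)
            mean_sq = uniform_filter(ndvi ** 2, size=window)
            std = np.sqrt(np.maximum(mean_sq - mean ** 2, 1e-12))
            z = (ndvi - mean) / std
            return np.where(z > k, 1, np.where(z < -k, -1, 0))

        # Stack of yearly anomaly maps -> years each pixel was anomalous.
        years = [np.random.default_rng(s).random((100, 100)) for s in range(7)]
        count_map = sum(np.abs(local_anomalies(y)) for y in years)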

  12. 32. SCIENTISTS ALLAN COX (SEATED), RICHARD DOELL, AND BRENT DALRYMPLE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    32. SCIENTISTS ALLAN COX (SEATED), RICHARD DOELL, AND BRENT DALRYMPLE AT CONTROL PANEL, ABOUT 1965. - U.S. Geological Survey, Rock Magnetics Laboratory, 345 Middlefield Road, Menlo Park, San Mateo County, CA

  13. View-angle-dependent AIRS Cloudiness and Radiance Variance: Analysis and Interpretation

    NASA Technical Reports Server (NTRS)

    Gong, Jie; Wu, Dong L.

    2013-01-01

    Upper tropospheric clouds play an important role in the global energy budget and hydrological cycle. Significant view-angle asymmetry has been observed in upper-level tropical clouds derived from eight years of Atmospheric Infrared Sounder (AIRS) 15 um radiances. Here, we find that the asymmetry also exists in the extra-tropics. It is larger during day than during night, more prominent near elevated terrain, and closely associated with deep convection and wind shear. The cloud radiance variance, a proxy for cloud inhomogeneity, shows asymmetry characteristics consistent with those in the AIRS cloudiness. The leading causes of the view-dependent cloudiness asymmetry are the local time difference and small-scale organized cloud structures. The local time difference (1-1.5 hr) of upper-level (UL) clouds between the two AIRS outermost views can create part of the observed asymmetry. On the other hand, small-scale tilted and banded structures of the UL clouds can induce about half of the observed view-angle-dependent differences in the AIRS cloud radiances and their variances. This estimate is inferred from an analogous study using Microwave Humidity Sounder (MHS) radiances observed during a period when there were simultaneous measurements at two different view angles from the NOAA-18 and -19 satellites. The existence of tilted cloud structures and asymmetric 15 um and 6.7 um cloud radiances implies that cloud statistics are view-angle dependent and should be taken into account in radiative transfer calculations, measurement uncertainty evaluations and cloud climatology investigations. In addition, the momentum forcing in the upper troposphere from tilted clouds is also likely asymmetric, which can affect atmospheric circulation anisotropically.

  14. The Effects of Violations of Data Set Assumptions When Using the Oneway, Fixed-Effects Analysis of Variance and the One Concomitant Analysis of Covariance.

    ERIC Educational Resources Information Center

    Johnson, Colleen Cook; Rakow, Ernest A.

    1994-01-01

    This research is an empirical study, through Monte Carlo simulation, of the effects of violations of the assumptions for the oneway fixed-effects analysis of variance (ANOVA) and analysis of covariance (ANCOVA). Research reaffirms findings of previous studies that suggest that ANOVA and ANCOVA be avoided when group sizes are not equal. (SLD)

  15. Directional variance adjustment: bias reduction in covariance matrices based on factor analysis with an application to portfolio optimization.

    PubMed

    Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W; Müller, Klaus-Robert; Lemm, Steven

    2013-01-01

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation. PMID:23844016

  16. Directional Variance Adjustment: Bias Reduction in Covariance Matrices Based on Factor Analysis with an Application to Portfolio Optimization

    PubMed Central

    Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W.; Müller, Klaus-Robert; Lemm, Steven

    2013-01-01

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation. PMID:23844016

  17. Comments on the statistical analysis of excess variance in the COBE differential microwave radiometer maps

    NASA Astrophysics Data System (ADS)

    Wright, E. L.; Smoot, G. F.; Kogut, A.; Hinshaw, G.; Tenorio, L.; Lineweaver, C.; Bennett, C. L.; Lubin, P. M.

    1994-01-01

    Cosmic anisotropy produces an excess variance σ²_sky in the ΔT maps produced by the Differential Microwave Radiometer (DMR) on the Cosmic Background Explorer (COBE) that is over and above the instrument noise. After smoothing to an effective resolution of 10 deg, this excess, σ_sky(10 deg), provides an estimate for the amplitude of the primordial density perturbation power spectrum with a cosmic uncertainty of only 12%. We employ detailed Monte Carlo techniques to express the amplitude derived from this statistic in terms of the universal root mean square (rms) quadrupole amplitude, (Q²_RMS)^0.5. The effects of monopole and dipole subtraction and the non-Gaussian shape of the DMR beam cause the derived (Q²_RMS)^0.5 to be 5%-10% larger than would be derived using simplified analytic approximations. We also investigate the properties of two other map statistics: the actual quadrupole and the Boughn-Cottingham statistic. Both the σ_sky(10 deg) statistic and the Boughn-Cottingham statistic are consistent with the (Q²_RMS)^0.5 = 17 ± 5 μK reported by Smoot et al. (1992) and Wright et al. (1992).

  18. Commonality Analysis: A Method of Analyzing Unique and Common Variance Proportions.

    ERIC Educational Resources Information Center

    Kroff, Michael W.

    This paper considers the use of commonality analysis as an effective tool for analyzing relationships between variables in multiple regression or canonical correlation analysis (CCA). The merits of commonality analysis are discussed and the procedure for running commonality analysis is summarized as a four-step process. A heuristic example is…

  19. SU-E-T-41: Analysis of GI Dose Variability Due to Intrafraction Setup Variance

    SciTech Connect

    Phillips, J; Wolfgang, J

    2014-06-01

    Purpose: Proton SBRT (stereotactic body radiation therapy) can be an effective modality for treatment of gastrointestinal tumors, but is limited in practice due to sensitivity with respect to variation in the RPL (radiological path length). Small intrafractional shifts in patient anatomy can lead to significant changes in the dose distribution. This study describes a tool designed to visualize uncertainties in radiological depth in patient CTs and aid in treatment plan design. Methods: This project utilizes the Shadie toolkit, a GPU-based framework that allows for real-time interactive calculations for volume visualization. Current SBRT simulation practice consists of a serial CT acquisition for the assessment of inter- and intra-fractional motion utilizing patient-specific immobilization systems. Shadie was used to visualize potential uncertainties, including RPL variance and changes in gastric content. Input for this procedure consisted of two patient CT sets, contours of the desired organ, and a pre-calculated dose. In this study, we performed rigid registrations between sets of 4DCTs obtained from a patient with varying setup conditions. Custom visualizations are written by the user in Shadie, permitting one to create color-coded displays derived from a calculation along each ray. Results: Serial CT data acquired on subsequent days were analyzed for variation in RPL and gastric content. Specific shaders were created to visualize clinically relevant features, including the RPL integrated up to organs of interest. Using pre-calculated dose distributions and utilizing segmentation masks as additional input allowed us to further refine the display output from Shadie and create tools suitable for clinical usage. Conclusion: We have demonstrated a method to visualize potential uncertainty for intrafractional proton radiotherapy. We believe this software could prove a useful tool to guide those looking to design treatment plans least insensitive

  20. An introduction to analysis of variance (ANOVA) with special reference to data from clinical experiments in optometry.

    PubMed

    Armstrong, R A; Slade, S V; Eperjesi, F

    2000-05-01

    This article is aimed primarily at eye care practitioners who are undertaking advanced clinical research, and who wish to apply analysis of variance (ANOVA) to their data. ANOVA is a data analysis method of great utility and flexibility. This article describes why and how ANOVA was developed, the basic logic which underlies the method and the assumptions that the method makes for it to be validly applied to data from clinical experiments in optometry. The application of the method to the analysis of a simple data set is then described. In addition, the methods available for making planned comparisons between treatment means and for making post hoc tests are evaluated. The problem of determining the number of replicates or patients required in a given experimental situation is also discussed.

  1. Monte Carlo probabilistic sensitivity analysis for patient level simulation models: efficient estimation of mean and variance using ANOVA.

    PubMed

    O'Hagan, Anthony; Stevenson, Matt; Madan, Jason

    2007-10-01

    Probabilistic sensitivity analysis (PSA) is required to account for uncertainty in cost-effectiveness calculations arising from health economic models. The simplest way to perform PSA in practice is by Monte Carlo methods, which involves running the model many times using randomly sampled values of the model inputs. However, this can be impractical when the economic model takes appreciable amounts of time to run. This situation arises, in particular, for patient-level simulation models (also known as micro-simulation or individual-level simulation models), where a single run of the model simulates the health care of many thousands of individual patients. The large number of patients required in each run to achieve accurate estimation of cost-effectiveness means that only a relatively small number of runs is possible. For this reason, it is often said that PSA is not practical for patient-level models. We develop a way to reduce the computational burden of Monte Carlo PSA for patient-level models, based on the algebra of analysis of variance. Methods are presented to estimate the mean and variance of the model output, with formulae for determining optimal sample sizes. The methods are simple to apply and will typically reduce the computational demand very substantially.
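
    The gist of the ANOVA-based method can be shown in toy form: the variance of run-level outputs mixes parameter (input) uncertainty with patient-level Monte Carlo noise, and the noise component can be subtracted. Everything below (the cost model, all numbers, the known within-run variance) is invented; in the paper's method the within-run component is estimated from the ANOVA algebra rather than assumed known:

        import numpy as np

        rng = np.random.default_rng(5)

        def run_model(theta, n_patients):
            """Stand-in patient-level simulation: the mean cost depends on the
            sampled input theta; patients add Monte Carlo noise (sd = 5)."""
            return rng.normal(theta, 5.0, n_patients).mean()

        R, N = 200, 100                    # model runs x patients per run
        thetas = rng.normal(50.0, 2.0, R)  # PSA draws of an uncertain input
        outputs = np.array([run_model(t, N) for t in thetas])

        # The raw variance of run-level outputs overstates input uncertainty
        # because it includes patient-level noise of magnitude sigma^2 / N.
        var_total = outputs.var(ddof=1)
        var_within = 5.0 ** 2 / N          # known here; estimable from replicates
        var_inputs = var_total - var_within
        print(f"variance due to input uncertainty ~ {var_inputs:.2f} (true 4.0)")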

  2. Global sensitivity analysis of a SWAT model: comparison of the variance-based and moment-independent approaches

    NASA Astrophysics Data System (ADS)

    Khorashadi Zadeh, Farkhondeh; Sarrazin, Fanny; Nossent, Jiri; Pianosi, Francesca; van Griensven, Ann; Wagener, Thorsten; Bauwens, Willy

    2015-04-01

    Uncertainty in parameters is a well-known source of model output uncertainty, which undermines model reliability and restricts model application. A large number of parameters, in addition to a lack of data, limits calibration efficiency and also leads to higher parameter uncertainty. Global Sensitivity Analysis (GSA) is a set of mathematical techniques that provides quantitative information about the contribution of different sources of uncertainty (e.g. model parameters) to the model output uncertainty. Therefore, identifying influential and non-influential parameters using GSA can improve model calibration efficiency and consequently reduce model uncertainty. In this paper, moment-independent density-based GSA methods that consider the entire model output distribution - i.e. the Probability Density Function (PDF) or Cumulative Distribution Function (CDF) - are compared with the widely-used variance-based method and their differences are discussed. Moreover, the effect of the model output definition on parameter ranking results is investigated using the Nash-Sutcliffe Efficiency (NSE) and model bias as example outputs. To this end, 26 flow parameters of a SWAT model of the River Zenne (Belgium) are analysed. In order to assess the robustness of the sensitivity indices, bootstrapping is applied and 95% confidence intervals are estimated. The results show that, although the variance-based method is easy to implement and interpret, it provides wider confidence intervals, especially for non-influential parameters, than the density-based methods. Therefore, density-based methods may be a useful complement to variance-based methods for identifying non-influential parameters.

  3. Radial forcing and Edgar Allan Poe's lengthening pendulum

    NASA Astrophysics Data System (ADS)

    McMillan, Matthew; Blasing, David; Whitney, Heather M.

    2013-09-01

    Inspired by Edgar Allan Poe's The Pit and the Pendulum, we investigate a radially driven, lengthening pendulum. We first show that increasing the length of an undriven pendulum at a uniform rate does not amplify the oscillations in a manner consistent with the behavior of the scythe in Poe's story. We discuss parametric amplification and the transfer of energy (through the parameter of the pendulum's length) to the oscillating part of the system. In this manner, radial driving can easily and intuitively be understood, and the fundamental concept applied in many other areas. We propose and show by a numerical model that appropriately timed radial forcing can increase the oscillation amplitude in a manner consistent with Poe's story. Our analysis contributes a computational exploration of the complex harmonic motion that can result from radially driving a pendulum and sheds light on a mechanism by which oscillations can be amplified parametrically. These insights should prove especially valuable in the undergraduate physics classroom, where investigations into pendulums and oscillations are commonplace.
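
    The undriven, uniformly lengthening case is easy to reproduce numerically. For a pendulum whose length L(t) is prescribed, the equation of motion is theta'' + (2 L'/L) theta' + (g/L) sin(theta) = 0; a sketch with an arbitrary lengthening rate (consistent with the paper's first result, the angular amplitude decays rather than grows):

        import numpy as np
        from scipy.integrate import solve_ivp

        g = 9.81
        rate = 0.05                   # arbitrary uniform lengthening rate (m/s)
        L = lambda t: 1.0 + rate * t  # prescribed cord length

        def rhs(t, y):
            theta, omega = y
            # theta'' = -(2 L'/L) theta' - (g/L) sin(theta); the L' term
            # couples the length change to the swing.
            return [omega, -(2 * rate / L(t)) * omega - (g / L(t)) * np.sin(theta)]

        sol = solve_ivp(rhs, (0.0, 40.0), [0.1, 0.0], max_step=0.01)
        final_amp = np.max(np.abs(sol.y[0][sol.t > 35.0]))
        print(f"initial amplitude 0.100 rad, final amplitude ~ {final_amp:.3f} rad")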

  4. Analysis of Variances and Covariances is Misleading as a Guide to a Common Factor Model.

    ERIC Educational Resources Information Center

    Humphreys, Lloyd G.; Park, Randolph K.

    1981-01-01

    The "factor" analyses published by Schultz, Kaye, and Hoyer (1980) confused component and factor analysis and led to unwarranted conclusions. The principal factors method yields two factors which support the a priori expectation of a difference between intelligence tasks and spontaneous flexibility tasks. (Author/RD)

  5. Biomarker profiling and reproducibility study of MALDI-MS measurements of Escherichia coli by analysis of variance-principal component analysis.

    PubMed

    Chen, Ping; Lu, Yao; Harrington, Peter B

    2008-03-01

    Matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) has proved useful for the characterization of bacteria and the detection of biomarkers. A key challenge for MALDI-MS measurements of bacteria is overcoming the relatively large variability in peak intensities. A soft tool combining analysis of variance and principal component analysis (ANOVA-PCA) (Harrington, P. D.; Vieira, N. E.; Chen, P.; Espinoza, J.; Nien, J. K.; Romero, R.; Yergey, A. L. Chemom. Intell. Lab. Syst. 2006, 82, 283-293. Harrington, P. D.; Vieira, N. E.; Espinoza, J.; Nien, J. K.; Romero, R.; Yergey, A. L. Anal. Chim. Acta. 2005, 544, 118-127) was applied to investigate the effects of the experimental factors associated with MALDI-MS studies of microorganisms. The variance of the measurements was partitioned with ANOVA, and the variance of target factors combined with the residual error was subjected to PCA to provide an easy-to-understand statistical test. The statistical significance of these factors can be visualized with 95% Hotelling T2 confidence intervals. ANOVA-PCA is useful for facilitating the detection of biomarkers in that it can remove from the measurements the variance corresponding to other experimental factors that might be mistaken for a biomarker. Four strains of Escherichia coli at four different growth ages were used to study the reproducibility of MALDI-MS measurements. ANOVA-PCA was used to disclose potential biomarker proteins associated with different growth stages.
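
    A compact sketch of the ANOVA-PCA recipe on simulated intensity data (matrix sizes, factor levels and the injected strain effect are all made up, and this is our reading of the method, not the authors' code): the centred data are partitioned into sequential factor-mean matrices, and PCA is applied to the factor of interest plus the residual error, so that factor effects can be judged against the residual noise:

        import numpy as np

        rng = np.random.default_rng(6)

        # Simulated MALDI-MS-like data: 4 strains x 4 growth ages x 3
        # replicates, 200 m/z channels; channel 0 carries a strain effect.
        strains = np.repeat(np.arange(4), 12)
        ages = np.tile(np.repeat(np.arange(4), 3), 4)
        X = rng.normal(0.0, 1.0, (48, 200))
        X[:, 0] += 4.0 * strains

        def effect_matrix(M, labels):
            """Row-wise level-mean matrix for one ANOVA factor."""
            out = np.zeros_like(M)
            for lev in np.unique(labels):
                out[labels == lev] = M[labels == lev].mean(axis=0)
            return out

        # Sequential ANOVA partition of the centred data.
        Xc = X - X.mean(axis=0)
        M_strain = effect_matrix(Xc, strains)
        M_age = effect_matrix(Xc - M_strain, ages)
        E = Xc - M_strain - M_age

        # PCA (via SVD) of the target factor plus residual error: if the
        # strain effect exceeds the noise, scores separate by strain on PC1.
        U, s, _ = np.linalg.svd(M_strain + E, full_matrices=False)
        scores = U * s
        print([round(scores[strains == k, 0].mean(), 2) for k in range(4)])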

  6. Obituary: Allan R. Sandage (1926-2010)

    NASA Astrophysics Data System (ADS)

    Devorkin, David

    2011-12-01

    Allan Rex Sandage died of pancreatic cancer at his home in San Gabriel, California, in the shadow of Mount Wilson, on November 13, 2010. Born in Iowa City, Iowa, on June 18, 1926, he was 84 years old at his death, leaving his wife, former astronomer Mary Connelly Sandage, and two sons, David and John. He also left a legacy to the world of astronomical knowledge that has long been universally admired and appreciated, making his name synonymous with late 20th-Century observational cosmology. The only child of Charles Harold Sandage, a professor of advertising who helped establish that academic specialty after obtaining a PhD in business administration, and Dorothy Briggs Sandage, whose father was president of Graceland College in Iowa, Allan Sandage grew up in a thoroughly intellectual, university oriented atmosphere but also a peripatetic one taking him to Philadelphia and later to Illinois as his father rose in his career. During his 2 years in Philadelphia, at about age eleven, Allan developed a curiosity about astronomy stimulated by a friend's interest. His father bought him a telescope and he used it to systematically record sunspots, and later attempted to make a larger 6-inch reflector, a project left uncompleted. As a teenager Allan read widely, especially astronomy books of all kinds, recalling in particular The Glass Giant of Palomar as well as popular works by Eddington and Hubble (The Realm of the Nebulae) in the early 1940s. Although his family was Mormon, of the Reorganized Church, he was not practicing, though he later sporadically attended a Methodist church in Oxford, Iowa during his college years. Sandage knew by his high school years that he would engage in some form of intellectual life related to astronomy. He particularly recalls an influential science teacher at Miami University in Oxford, Ohio named Ray Edwards, who inspired him to think critically and "not settle for any hand-waving of any kind." [Interview of Allan Rex Sandage by Spencer

  7. A comparison of analysis of variance and correlation methods for investigating cognitive development with functional magnetic resonance imaging.

    PubMed

    Fair, Damien A; Brown, Timothy T; Petersen, Steven E; Schlaggar, Bradley L

    2006-01-01

    Statistical approaches used in functional magnetic resonance imaging (fMRI) to study cognitive development are varied and evolving. Two approaches have generally been used. These are between-group end-point analysis of variance (ANOVA) and age-related regression. Differences in these 2 approaches could produce different results when applied to a single data set. Event-related fMRI data from a group of typically developing participants (n = 95; age range = 7-35 years) performing controlled lexical processing tasks were analyzed using both methods. Results from the 2 approaches showed significant overlap, but also noteworthy differences. The results suggest that for regions showing age-related changes, correlation was relatively more sensitive to more linear changes whereas ANOVA was relatively more sensitive to less-linear changes. These findings suggest that full characterization of developmental dynamics will require converging methodologies.

  8. Variance components, heritability and correlation analysis of anther and ovary size during the floral development of bread wheat.

    PubMed

    Guo, Zifeng; Chen, Dijun; Schnurbusch, Thorsten

    2015-06-01

    Anther and ovary development play an important role in grain setting, a crucial factor determining wheat (Triticum aestivum L.) yield. One aim of this study was to determine the heritability of anther and ovary size at different positions within a spikelet at seven floral developmental stages and conduct a variance components analysis. Relationships between anther and ovary size and other traits were also assessed. The thirty central European winter wheat genotypes used in this study were based on reduced height (Rht) and photoperiod sensitivity (Ppd) genes with variable genetic backgrounds. Identical experimental designs were conducted in a greenhouse and field simultaneously. Heritability of anther and ovary size indicated strong genetic control. Variance components analysis revealed that anther and ovary sizes of floret 3 (i.e. F3, the third floret from the spikelet base) and floret 4 (F4) were more sensitive to the environment compared with those in floret 1 (F1). Good correlations were found between spike dry weight and anther and ovary size in both greenhouse and field, suggesting that anther and ovary size are good predictors of each other, as well as spike dry weight in both conditions. Relationships between spike dry weight and anther and ovary size at F3/4 positions were stronger than at F1, suggesting that F3/4 anther and ovary size are better predictors of spike dry weight. Generally, ovary size showed a closer relationship with spike dry weight than anther size, suggesting that ovary size is a more reliable predictor of spike dry weight.

  9. Analysis of NDVI variance across landscapes and seasons allows assessment of degradation and resilience to shocks in Mediterranean dry ecosystems

    NASA Astrophysics Data System (ADS)

    Liniger, Hanspeter; Jucker Riva, Matteo; Schwilch, Gudrun

    2016-04-01

    Mapping and assessment of desertification is a primary basis for effective management of dryland ecosystems. Vegetation cover and biomass density are key elements for the ecological functioning of dry ecosystems, and at the same time an effective indicator of desertification, land degradation and sustainable land management. The Normalized Difference Vegetation Index (NDVI) is widely used to estimate vegetation density and cover. However, the reflectance of vegetation, and thus the NDVI values, are influenced by several factors such as type of canopy, type of land use and seasonality. For example, low NDVI values could be associated with a degraded forest, with a healthy forest under dry climatic conditions, with an area used as pasture, or with an area managed to reduce the fuel load. We propose a simple method to analyse the variance of the NDVI signal considering the main factors that shape the vegetation. This variance analysis enables us to detect and categorize degradation much more precisely than simple NDVI analysis. The methodology comprises, firstly, identifying homogeneous landscape areas in terms of aspect, slope, land use and disturbance regime (if relevant). Secondly, the NDVI is calculated from Landsat multispectral images and the vegetation potential for each landscape is determined based on a percentile (the highest 10% of values). Thirdly, the difference between the NDVI value of each pixel and the potential is used to establish degradation categories. Through this methodology, we are able to identify realistic objectives for restoration, allowing a targeted choice of management options for degraded areas. For example, afforestation would only be done in areas that show potential for forest growth. Moreover, we can measure the effectiveness of management practices in terms of vegetation growth across different landscapes and conditions. Additionally, the same methodology can be applied to a time series of multispectral images, allowing detection and quantification of

  10. Mean-variance portfolio analysis data for optimizing community-based photovoltaic investment.

    PubMed

    Shakouri, Mahmoud; Lee, Hyun Woo

    2016-03-01

    The amount of electricity generated by Photovoltaic (PV) systems is affected by factors such as shading, building orientation and roof slope. To increase electricity generation and reduce volatility in the generation of PV systems, a portfolio of PV systems can be made which takes advantage of the potential synergy among neighboring buildings. This paper contains data supporting the research article entitled: PACPIM: new decision-support model of optimized portfolio analysis for community-based photovoltaic investment [1]. We present a set of data relating to physical properties of 24 houses in Oregon, USA, along with simulated hourly electricity data for the installed PV systems. The Matlab code developed to construct optimized portfolios is also provided. The application of these files can be generalized to a variety of communities interested in investing in PV systems. PMID:26937458
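    For orientation, a minimum-variance portfolio over correlated PV systems can be computed in closed form; this is a generic sketch with synthetic generation data, not the paper's Matlab code, and it ignores no-short constraints:

```python
import numpy as np

# Closed-form minimum-variance weights w = S^-1 1 / (1' S^-1 1), where S is
# the covariance of hourly generation across systems. Data are synthetic.
rng = np.random.default_rng(2)
common = rng.normal(size=(8760, 1))                  # shared weather signal
gen = 5.0 + common + rng.normal(size=(8760, 24))     # 24 hypothetical systems

S = np.cov(gen, rowvar=False)                        # generation covariance
ones = np.ones(S.shape[0])
w = np.linalg.solve(S, ones)
w /= w @ ones                                        # weights sum to 1

print("min-variance portfolio sd:", np.sqrt(w @ S @ w))
print("equal-weight portfolio sd:", np.sqrt(ones @ S @ ones) / S.shape[0])
```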

  12. Exploring Omics data from designed experiments using analysis of variance multiblock Orthogonal Partial Least Squares.

    PubMed

    Boccard, Julien; Rudaz, Serge

    2016-05-12

    Many experimental factors may have an impact on chemical or biological systems. A thorough investigation of the potential effects and interactions between the factors is made possible by rationally planning the trials using systematic procedures, i.e. design of experiments. However, assessing factors' influences often remains a challenging task when dealing with hundreds to thousands of correlated variables, whereas only a limited number of samples is available. In that context, most of the existing strategies involve the ANOVA-based partitioning of sources of variation and the separate analysis of ANOVA submatrices using multivariate methods, to account for both the intrinsic characteristics of the data and the study design. However, these approaches lack the ability to summarise the data using a single model and remain somewhat limited for detecting and interpreting subtle perturbations hidden in complex Omics datasets. In the present work, a supervised multiblock algorithm based on the Orthogonal Partial Least Squares (OPLS) framework is proposed for the joint analysis of ANOVA submatrices. This strategy has several advantages: (i) the evaluation of a unique multiblock model accounting for all sources of variation; (ii) the computation of a robust estimator (goodness of fit) for assessing the reliability of the ANOVA decomposition; (iii) the investigation of an effect-to-residuals ratio to quickly evaluate the relative importance of each effect and (iv) an easy interpretation of the model with appropriate outputs. Case studies from metabolomics and transcriptomics, highlighting the ability of the method to handle Omics data obtained from fixed-effects full factorial designs, are proposed for illustration purposes. Signal variations are easily related to main effects or interaction terms, while relevant biochemical information can be derived from the models.

  13. WE-D-BRE-07: Variance-Based Sensitivity Analysis to Quantify the Impact of Biological Uncertainties in Particle Therapy

    SciTech Connect

    Kamp, F.; Brueningk, S.C.; Wilkens, J.J.

    2014-06-15

    Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation, and on the dose per fraction. The needed biological parameters, as well as their dependency on ion species and ion energy, are typically subject to large (relative) uncertainties of up to 20–40% or even more. Therefore it is necessary to estimate the resulting uncertainties in e.g. RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10^4 to 10^6 times, each run with a different set of input parameters, randomly varied according to their assigned distribution. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, the only influential part) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps, showing dependencies of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations which result in extreme deviations of the result, and the input parameter for which an uncertainty reduction is the most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact. The method is very flexible, model independent, and enables a broad assessment
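    A hand-rolled sketch of such a variance-based sensitivity index, using the pick-freeze (Sobol) estimator on a toy linear-quadratic effect model; the parameter ranges and the model itself are invented stand-ins for the clinical setting:

```python
import numpy as np

# Pick-freeze (Sobol) first-order indices: S_i = Cov(Y_A, Y_mixed_i)/Var(Y),
# where Y_mixed_i reuses sample A only for input i. Ranges are hypothetical.
rng = np.random.default_rng(3)
n = 100_000
bounds = {"alpha": (0.1, 0.5), "beta": (0.01, 0.05), "dose": (1.8, 2.2)}

def model(alpha, beta, dose):
    return dose * (alpha + beta * dose)     # toy linear-quadratic effect

A = {k: rng.uniform(lo, hi, n) for k, (lo, hi) in bounds.items()}
B = {k: rng.uniform(lo, hi, n) for k, (lo, hi) in bounds.items()}

yA = model(**A)
for name in bounds:
    # freeze every input at B's sample except `name`, which is taken from A
    S = np.cov(yA, model(**dict(B, **{name: A[name]})))[0, 1] / yA.var()
    print(f"S_{name} = {S:.2f}")
```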

  14. Comparing tongue shapes from ultrasound imaging using smoothing spline analysis of variance.

    PubMed

    Davidson, Lisa

    2006-07-01

    Ultrasound imaging of the tongue is increasingly common in speech production research. However, there has been little standardization regarding the quantification and statistical analysis of ultrasound data. In linguistic studies, researchers may want to determine whether the tongue shape for an articulation under two different conditions (e.g., consonants in word-final versus word-medial position) is the same or different. This paper demonstrates how the smoothing spline ANOVA (SS ANOVA) can be applied to the comparison of tongue curves [Gu, Smoothing Spline ANOVA Models (Springer, New York, 2002)]. The SS ANOVA is a technique for determining whether or not there are significant differences between the smoothing splines that are the best fits for two data sets being compared. If the interaction term of the SS ANOVA model is statistically significant, then the groups have different shapes. Since the interaction may be significant even if only a small section of the curves is different (i.e., the tongue root is the same, but the tip of one group is raised), Bayesian confidence intervals are used to determine which sections of the curves are statistically different. SS ANOVAs are illustrated with some data comparing obstruents produced in word-final and word-medial coda position.
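    The following is not Gu's SS ANOVA but a simplified sketch of the same comparison task: fit smoothing splines to two groups of curves and bootstrap a pointwise confidence band for their difference (all tongue curves here are synthetic):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Simplified stand-in for SS ANOVA: spline fits per group plus a bootstrap
# pointwise band for the group difference. Curve data are synthetic.
rng = np.random.default_rng(4)
x = np.linspace(0, 1, 40)                          # normalized tongue axis
curves_a = np.sin(np.pi * x) + rng.normal(0, 0.05, (25, 40))
curves_b = 1.1 * np.sin(np.pi * x) + rng.normal(0, 0.05, (25, 40))

grid = np.linspace(0, 1, 200)
def fit(curves):
    return UnivariateSpline(x, curves.mean(axis=0), s=0.01)(grid)

diffs = [fit(curves_a[rng.integers(0, 25, 25)]) -
         fit(curves_b[rng.integers(0, 25, 25)]) for _ in range(500)]
lo, hi = np.percentile(diffs, [2.5, 97.5], axis=0)
print("fraction of curve that differs:", ((lo > 0) | (hi < 0)).mean())
```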

  15. A Variance Decomposition Approach to Uncertainty Quantification and Sensitivity Analysis of the J&E Model

    PubMed Central

    Moradi, Ali; Tootkaboni, Mazdak; Pennell, Kelly G.

    2015-01-01

    The Johnson and Ettinger (J&E) model is the most widely used vapor intrusion model in the United States. It is routinely used as part of hazardous waste site assessments to evaluate the potential for vapor intrusion exposure risks. This study incorporates mathematical approaches that allow sensitivity and uncertainty of the J&E model to be evaluated. In addition to performing Monte Carlo simulations to examine the uncertainty in the J&E model output, a powerful global sensitivity analysis technique based on Sobol indices is used to evaluate J&E model sensitivity to variations in the input parameters. The results suggest that the J&E model is most sensitive to the building air exchange rate, regardless of soil type and source depth. Building air exchange rate is not routinely measured during vapor intrusion investigations, but clearly improved estimates and/or measurements of the air exchange rate would lead to improved model predictions. It is also found that the J&E model is more sensitive to effective diffusivity than to effective permeability. Field measurements of effective diffusivity are not commonly collected during vapor intrusion investigations; however, consideration of this parameter warrants additional attention. Finally, the effects of input uncertainties on model predictions for different scenarios (e.g. sandy soil as compared to clayey soil, and “shallow” sources as compared to “deep” sources) are evaluated. Our results not only identify the range of variability to be expected depending on the scenario at hand, but also mark the important cases where special care is needed when estimating the input parameters to which the J&E model is most sensitive. PMID:25947051

  16. A variance decomposition approach to uncertainty quantification and sensitivity analysis of the Johnson and Ettinger model.

    PubMed

    Moradi, Ali; Tootkaboni, Mazdak; Pennell, Kelly G

    2015-02-01

    The Johnson and Ettinger (J&E) model is the most widely used vapor intrusion model in the United States. It is routinely used as part of hazardous waste site assessments to evaluate the potential for vapor intrusion exposure risks. This study incorporates mathematical approaches that allow sensitivity and uncertainty of the J&E model to be evaluated. In addition to performing Monte Carlo simulations to examine the uncertainty in the J&E model output, a powerful global sensitivity analysis technique based on Sobol indices is used to evaluate J&E model sensitivity to variations in the input parameters. The results suggest that the J&E model is most sensitive to the building air exchange rate, regardless of soil type and source depth. Building air exchange rate is not routinely measured during vapor intrusion investigations, but clearly improved estimates and/or measurements of the air exchange rate would lead to improved model predictions. It is also found that the J&E model is more sensitive to effective diffusivity than to effective permeability. Field measurements of effective diffusivity are not commonly collected during vapor intrusion investigations; however, consideration of this parameter warrants additional attention. Finally, the effects of input uncertainties on model predictions for different scenarios (e.g., sandy soil as compared to clayey soil, and "shallow" sources as compared to "deep" sources) are evaluated. Our results not only identify the range of variability to be expected depending on the scenario at hand, but also mark the important cases where special care is needed when estimating the input parameters to which the J&E model is most sensitive.
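    Assuming the SALib package is available, a Sobol analysis of this kind can be sketched as follows; `je_like` is a toy stand-in, not the actual J&E model, and the parameter bounds are invented:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Sobol indices via SALib on a hypothetical attenuation proxy; only the
# three parameters highlighted in the abstract are varied.
problem = {
    "num_vars": 3,
    "names": ["air_exchange", "eff_diffusivity", "eff_permeability"],
    "bounds": [[0.1, 1.5], [1e-7, 1e-5], [1e-13, 1e-11]],
}

def je_like(X):
    ae, d, k = X[:, 0], X[:, 1], X[:, 2]
    return d / ae + 0.01 * np.log(k)        # toy stand-in, not the J&E model

X = saltelli.sample(problem, 1024)          # N*(2D+2) model evaluations
Si = sobol.analyze(problem, je_like(X))
for name, s1 in zip(problem["names"], Si["S1"]):
    print(f"first-order S1[{name}] = {s1:.2f}")
```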

  18. The Self According to Allan Bloom and Charles Reich.

    ERIC Educational Resources Information Center

    Aspy, David N.; Aspy, Cheryl B.

    1998-01-01

    Discusses the works of Charles Reich and Allan Bloom that have helped to shape current social and political debate concerning self theory. Both Reich and Bloom were concerned with the relationship between self and environment. Argues that it is important to ensure that the cultural role of self theory is clearly interpreted and applied. (MKA)

  19. Biotechnology Symposium - In Memoriam, the Late Dr. Allan Zipf

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A one-day biotechnology symposium was held at Alabama A&M University (AAMU), Normal, AL on June 4, 2004 in memory of the late Dr. Allan Zipf (Sept 1953-Jan 2004). Dr. Zipf was a Research Associate Professor at the Department of Plant and Soil Sciences, AAMU, who collaborated extensively with ARS/MS...

  20. Allan Sandage: The Architect of Expansion

    NASA Astrophysics Data System (ADS)

    Bonnet-Bidaud, J. M.

    1998-07-01

    He was one of the handful of pioneers who opened up the extragalactic world. For nearly 50 years, Allan Sandage has pursued the quest begun by the "master" Edwin Hubble: measuring the expansion rate of the Universe. An encounter with a living legend of cosmology...

  1. Simulation Study Using a New Type of Sample Variance

    NASA Technical Reports Server (NTRS)

    Howe, D. A.; Lainson, K. J.

    1996-01-01

    We evaluate with simulated data a new type of sample variance for the characterization of frequency stability. The new statistic (referred to as TOTALVAR and its square root TOTALDEV) is a better predictor of long-term frequency variations than the present sample Allan deviation. The statistical model uses the assumption that a time series of phase or frequency differences is wrapped (periodic) with overall frequency difference removed. We find that the variability at long averaging times is reduced considerably for the five models of power-law noise commonly encountered with frequency standards and oscillators.
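    For orientation, an overlapping Allan deviation and a reflection-extended variant in the spirit of TOTALDEV can be sketched as follows; this illustrates the idea of extending the series, not the exact TOTALVAR estimator of the paper:

```python
import numpy as np

# Overlapping Allan deviation from fractional-frequency data, plus the same
# statistic on a series extended by reflection at both ends (a sketch of
# the TOTALDEV idea, not the paper's exact estimator).
def adev(y, m):
    """Overlapping Allan deviation at averaging factor m (tau = m*tau0)."""
    ybar = np.convolve(y, np.ones(m) / m, mode="valid")   # m-sample averages
    d = ybar[m:] - ybar[:-m]                              # adjacent averages
    return np.sqrt(0.5 * np.mean(d ** 2))

def totdev_like(y, m):
    """Same statistic on a reflection-extended series."""
    ext = np.concatenate([y[m:0:-1], y, y[-2:-m - 2:-1]])
    return adev(ext, m)

rng = np.random.default_rng(5)
y = rng.normal(size=4096)                                 # white FM toy data
for m in (1, 4, 16, 64):
    print(m, adev(y, m), totdev_like(y, m))
```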

  2. Nuclear Material Variance Calculation

    1995-01-01

    MAVARIC (Materials Accounting VARiance Calculations) is a custom spreadsheet that significantly reduces the effort required to make the variance and covariance calculations needed to determine the detection sensitivity of a materials accounting system and loss of special nuclear material (SNM). The user is required to enter information into one of four data tables depending on the type of term in the materials balance (MB) equation. The four data tables correspond to input transfers, output transfers, and two types of inventory terms, one for nondestructive assay (NDA) measurements and one for measurements made by chemical analysis. Each data entry must contain an identification number and a short description, as well as values for the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements during an accounting period. The user must also specify the type of error model (additive or multiplicative) associated with each measurement, and possible correlations between transfer terms. Predefined spreadsheet macros are used to perform the variance and covariance calculations for each term based on the corresponding set of entries. MAVARIC has been used for sensitivity studies of chemical separation facilities, fuel processing and fabrication facilities, and gas centrifuge and laser isotope enrichment facilities.
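    A minimal sketch of the kind of materials-balance variance calculation such a tool automates, assuming independent multiplicative errors and ignoring inter-term correlations; all masses and error values are hypothetical:

```python
import numpy as np

# MB = inputs - outputs - inventory change; Var(MB) accumulates each term's
# contribution under a multiplicative relative-error model, with n repeated
# measurements averaged per term. All numbers are invented.
terms = {            # sign, SNM mass (kg), relative sd, n measurements
    "inputs":    (+1, 120.0, 0.010, 12),
    "outputs":   (-1, 118.5, 0.012, 12),
    "delta_inv": (-1,   1.0, 0.050,  2),
}

mb, var_mb = 0.0, 0.0
for sign, mass, rel_sd, n in terms.values():
    mb += sign * mass
    var_mb += (mass * rel_sd) ** 2 / n    # independent errors, averaged n times

print(f"MB = {mb:.2f} kg, sigma(MB) = {np.sqrt(var_mb):.3f} kg")
print("3-sigma alarm limit:", 3 * np.sqrt(var_mb))
```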

  3. Meta-analysis of variance: an illustration comparing the effects of two dietary interventions on variability in weight.

    PubMed

    Senior, Alistair M; Gosby, Alison K; Lu, Jing; Simpson, Stephen J; Raubenheimer, David

    2016-01-01

    Meta-analysis, which drives evidence-based practice, typically focuses on the average response of subjects to a treatment. For instance in nutritional research the difference in average weight of participants on different diets is typically used to draw conclusions about the relative efficacy of interventions. As a result of their focus on the mean, meta-analyses largely overlook the effects of treatments on inter-subject variability. Recent tools from the study of biological evolution, where inter-individual variability is one of the key ingredients for evolution by natural selection, now allow us to study inter-subject variability using established meta-analytic models. Here we use meta-analysis to study how low carbohydrate (LC) ad libitum diets and calorie restricted diets affect variance in mass. We find that LC ad libitum diets may have a more variable outcome than diets that prescribe a reduced calorie intake. Our results suggest that whilst LC diets are effective in a large proportion of the population, for a subset of individuals, calorie restricted diets may be more effective. There is evidence that LC ad libitum diets rely on appetite suppression to drive weight loss. Extending this hypothesis, we suggest that between-individual variability in protein appetite may drive the trends that we report. A priori identification of an individual's target intake for protein may help define the most effective dietary intervention to prescribe for weight loss. PMID:27491895
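    A sketch of the log variability-ratio (lnVR) effect size commonly used for such meta-analyses of variance; the small-sample correction shown is the standard one in this literature, and the study-level numbers are invented:

```python
import numpy as np

# lnVR compares the SDs of two arms; its small-sample bias correction and
# approximate sampling variance depend only on the group sizes.
studies = [   # (sd_LC, n_LC, sd_CR, n_CR), all values invented
    (5.1, 40, 3.9, 42),
    (6.3, 55, 4.4, 50),
    (4.8, 33, 4.1, 31),
]

for sd_t, n_t, sd_c, n_c in studies:
    lnvr = np.log(sd_t / sd_c) + 1 / (2 * (n_t - 1)) - 1 / (2 * (n_c - 1))
    var = 1 / (2 * (n_t - 1)) + 1 / (2 * (n_c - 1))   # sampling variance
    print(f"lnVR = {lnvr:+.3f}  (se = {np.sqrt(var):.3f})")
```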

  6. Variance associated with the use of relative velocity for force platform gait analysis in a heterogeneous population of clinically normal dogs.

    PubMed

    Volstad, Nicola; Nemke, Brett; Muir, Peter

    2016-01-01

    Factors that contribute to variance in ground reaction forces (GRFs) include dog morphology, velocity, and trial repetition. Narrow velocity ranges are recommended to minimize variance. In a heterogeneous population, it may be preferable to minimize data variance and efficiently perform force platform gait analysis by evaluating each individual dog at its preferred velocity, such that dogs are studied at a similar relative velocity (V*). Data from 27 normal dogs were obtained, including withers and shoulder height. Each dog was trotted across a force platform at its preferred velocity, with controlled acceleration (±0.5 m/s(2)). V* ranges were created for withers and shoulder height. Variance effects from 12 trotting velocity ranges and associated V* ranges were examined using repeated-measures analysis-of-covariance. Mean bodyweight was 24.4 ± 7.4 kg. Individual dog, velocity, and V* significantly influenced GRF (P <0.001). Trial number significantly influenced thoracic limb peak vertical force (PVF) (P <0.001). Limb effects were not significant. The magnitude of variance effects was greatest for the dog effect. Withers height V* was associated with small GRF variance. Narrow velocity ranges typically captured a smaller percentage of trials and were not consistently associated with lower variance. The withers height V* range of 0.6-1.05 captured the largest proportion of trials (95.9 ± 5.9%) with no significant effects on PVF and vertical impulse. The use of individual velocity ranges derived from a withers height V* range of 0.6-1.05 will account for population heterogeneity while efficiently capturing valid trials, thereby minimizing the exacerbation of lameness in clinical trials studying lame dogs.

  7. Evaluation of single-cell gel electrophoresis data: combination of variance analysis with sum of ranking differences.

    PubMed

    Héberger, Károly; Kolarević, Stoimir; Kračun-Kolarević, Margareta; Sunjog, Karolina; Gačić, Zoran; Kljajić, Zoran; Mitrić, Milena; Vuković-Gačić, Branka

    2014-09-01

    Specimens of the mussel Mytilus galloprovincialis were collected from five sites in the Boka Kotorska Bay (Adriatic Sea, Montenegro) during the period summer 2011-autumn 2012. Three types of tissue (gills, haemolymph and digestive gland) were used for assessment of DNA damage. Images of randomly selected cells were analyzed with a fluorescence microscope and the Comet Assay IV image-analysis system. Three parameters, viz. tail length, tail intensity and Olive tail moment, were analyzed on 4200 nuclei per cell type. We observed variations in the level of DNA damage in mussels collected at different sites, as well as seasonal variations in response. Sum of ranking differences (SRD) was implemented to compare the use of different types of cell and different measures of comet tail per nucleus. Numerical scales were transformed into ranks; range scaling between 0 and 1, standardization and normalization were carried out. SRD selected the best (and worst) combinations: tail moment is the best for all data treatments and for all organs; second best is tail length, and intensity ranks third (except for the digestive gland). The differences were significant at the 5% level. Whereas gill and haemolymph cells do not differ significantly, cells of the digestive gland are much more suitable for estimating genotoxicity. Variance analysis decomposed the effect of different factors on the SRD values. This unique combination has provided not only the relative importance of factors, but also an overall evaluation: the best evaluation method, the best data pre-treatment, etc., were chosen even for partially contradictory data. The rank transformation is superior to any other way of scaling, which is proven by ordering the SRD values by SRD again, and by cross-validation.
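    A minimal sketch of the SRD computation itself, using the row-average as the reference ranking (a common choice); data are toy values and rank ties are not handled:

```python
import numpy as np

# Sum of ranking differences: each method's ranking of the objects is
# compared with a reference ranking; smaller SRD = closer to the reference.
rng = np.random.default_rng(6)
objects, methods = 12, 4
X = rng.normal(size=(objects, methods)) + rng.normal(size=(objects, 1))

def ranks(v):
    return np.argsort(np.argsort(v))       # 0..n-1 ranks, no tie handling

ref = ranks(X.mean(axis=1))                # reference = row averages
for j in range(methods):
    srd = np.abs(ranks(X[:, j]) - ref).sum()
    print(f"method {j}: SRD = {srd}")
```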

  8. Selecting a linear mixed model for longitudinal data: repeated measures analysis of variance, covariance pattern model, and growth curve approaches.

    PubMed

    Liu, Siwei; Rovine, Michael J; Molenaar, Peter C M

    2012-03-01

    With increasing popularity, growth curve modeling is more and more often considered as the 1st choice for analyzing longitudinal data. Although the growth curve approach is often a good choice, other modeling strategies may more directly answer questions of interest. It is common to see researchers fit growth curve models without considering alternative modeling strategies. In this article we compare 3 approaches for analyzing longitudinal data: repeated measures analysis of variance, covariance pattern models, and growth curve models. As all are members of the general linear mixed model family, they represent somewhat different assumptions about the way individuals change. These assumptions result in different patterns of covariation among the residuals around the fixed effects. In this article, we first indicate the kinds of data that are appropriately modeled by each and use real data examples to demonstrate possible problems associated with the blanket selection of the growth curve model. We then present a simulation that indicates the utility of Akaike information criterion and Bayesian information criterion in the selection of a proper residual covariance structure. The results cast doubt on the popular practice of automatically using growth curve modeling for longitudinal data without comparing the fit of different models. Finally, we provide some practical advice for assessing mean changes in the presence of correlated data.
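    A sketch of such a model comparison using statsmodels on toy longitudinal data, with AIC computed by hand from the ML log-likelihood; the parameter counts k below are part of the assumption:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Fit a random-intercept and a random-slope mixed model, then compare
# AIC = 2k - 2*llf. Data are simulated with random intercepts only, so the
# simpler structure should win.
rng = np.random.default_rng(7)
n, t = 60, 5
df = pd.DataFrame({"id": np.repeat(np.arange(n), t),
                   "time": np.tile(np.arange(t), n)})
u = rng.normal(0, 1.0, n)                      # subject random intercepts
df["y"] = 2 + 0.5 * df["time"] + u[df["id"]] + rng.normal(0, 1.0, n * t)

m1 = smf.mixedlm("y ~ time", df, groups=df["id"]).fit(reml=False)
m2 = smf.mixedlm("y ~ time", df, groups=df["id"], re_formula="~time").fit(reml=False)

# k: fixed effects + variance components (assumed counts: 4 and 6)
for label, m, k in [("random intercept", m1, 4), ("random slope", m2, 6)]:
    print(label, "AIC =", round(2 * k - 2 * m.llf, 1))
```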

  9. Variance associated with subject velocity and trial repetition during force platform gait analysis in a heterogeneous population of clinically normal dogs.

    PubMed

    Hans, Eric C; Zwarthoed, Berdien; Seliski, Joseph; Nemke, Brett; Muir, Peter

    2014-12-01

    Factors that contribute to variance in ground reaction forces (GRF) include dog morphology, velocity, and trial repetition. Narrow velocity ranges are recommended to minimize variance. In a heterogeneous population of clinically normal dogs, it was hypothesized that the dog subject effect would account for the majority of variance in peak vertical force (PVF) and vertical impulse (VI) at a trotting gait, and that narrow velocity ranges would be associated with less variance. Data from 20 normal dogs were obtained. Each dog was trotted across a force platform at its habitual velocity, with controlled acceleration (±0.5 m/s(2)). Variance effects from 12 trotting velocity ranges were examined using repeated-measures analysis-of-covariance. Significance was set at P <0.05. Mean dog bodyweight was 28.4 ± 7.4 kg. Individual dog and velocity significantly affected PVF and VI for thoracic and pelvic limbs (P <0.001). Trial number significantly affected thoracic limb PVF (P <0.001). Limb (left or right) significantly affected thoracic limb VI (P = 0.02). The magnitude of variance effects from largest to smallest was dog, velocity, trial repetition, and limb. Velocity ranges of 1.5-2.0 m/s, 1.8-2.2 m/s, and 1.9-2.2 m/s were associated with low variance and no significant effects on thoracic or pelvic limb PVF and VI. A combination of these ranges, 1.5-2.2 m/s, captured a large percentage of trials per dog (84.2 ± 21.4%) with no significant effects on thoracic or pelvic limb PVF or VI. It was concluded that wider velocity ranges facilitate capture of valid trials with little to no effect on GRF in normal trotting dogs. This concept is important for clinical trial design.

  11. Low-energy positron diffraction from CdTe(110): A minimum-variance R-factor analysis

    NASA Astrophysics Data System (ADS)

    Duke, C. B.; Paton, A.; Lazarides, A.; Vasumathi, D.; Canter, K. F.

    1997-03-01

    The atomic geometry of the (110) surface of CdTe has been determined by low-energy positron diffraction (LEPD). Diffracted intensities of 13 inequivalent beams were measured at sample temperatures of 110 K over an energy range 20 eV ≤ E ≤ 140 eV. These intensity-energy profiles were analyzed using a multiple-scattering dynamical theory. The surface structural parameters were determined via a comparison of the calculated and experimentally measured profiles. An uncertainty analysis scheme, expanded from the analogous one proposed for analyses of low-energy electron diffraction intensities, was used to estimate the uncertainties in the structural parameters so as to reflect accurately uncertainties in the measured data. This analysis is based on a minimum-variance least-squares R factor, R_MV, defined and applied to the LEPD data from CdTe(110). It yields the top-layer rotation angle ω1 = 30.0 ± 0.5°, the second-layer rotation angle ω2 = −6.9 ± 0.2°, and bond lengths d(c2−a1) = 2.84 ± 0.02 Å, d(c1−a1) = 2.74 ± 0.01 Å, and d(c1−a2) = 2.65 ± 0.02 Å. The uncertainty intervals quoted are the 95% confidence limits (±2σ, where σ is the rms standard deviation) associated with an analysis of the uncertainties in the measured LEPD intensities. Uncertainties in the structural parameters associated with those in the construction of the model of the diffraction process could not be estimated quantitatively. These results agree well with prior structure determinations based on low-energy electron diffraction intensity analysis and x-ray standing waves. They confirm that when measured in units of the bulk lattice constant, the atomic geometry of highly ionic CdTe(110) is comparable to that of the (110) surfaces of other III-V and II-VI semiconductors rather than collapsing to a nearly unrelaxed bulk structure as predicted by an analysis of the role of ionicity on the atomic geometries of the (110) surfaces of zinc-blende structure binary compound semiconductors.

  12. MCNP variance reduction overview

    SciTech Connect

    Hendricks, J.S.; Booth, T.E.

    1985-01-01

    The MCNP code is rich in variance reduction features. Standard variance reduction methods found in most Monte Carlo codes are available as well as a number of methods unique to MCNP. We discuss the variance reduction features presently in MCNP as well as new ones under study for possible inclusion in future versions of the code.

  13. Estimation of Variance Components Using Computer Packages.

    ERIC Educational Resources Information Center

    Chastain, Robert L.; Willson, Victor L.

    Generalizability theory is based upon analysis of variance (ANOVA) and requires estimation of variance components for the ANOVA design under consideration in order to compute either G (Generalizability) or D (Decision) coefficients. Estimation of variance components has a number of alternative methods available using SAS, BMDP, and ad hoc…

  14. Aspects of First Year Statistics Students' Reasoning When Performing Intuitive Analysis of Variance: Effects of Within- and Between-Group Variability

    ERIC Educational Resources Information Center

    Trumpower, David L.

    2015-01-01

    Making inferences about population differences based on samples of data, that is, performing intuitive analysis of variance (IANOVA), is common in everyday life. However, the intuitive reasoning of individuals when making such inferences (even following statistics instruction), often differs from the normative logic of formal statistics. The…

  15. Exposure and terrestrial ages of four Allan Hills Antarctic meteorites

    NASA Technical Reports Server (NTRS)

    Kirsten, T.; Ries, D.; Fireman, E. L.

    1978-01-01

    Terrestrial ages of meteorites are based on the amount of cosmic-ray-produced radioactivity in the sample and the number of observed falls that have similar cosmic-ray exposure histories. The cosmic-ray exposures are obtained from the stable noble gas isotopes. Noble gas isotopes are measured by high-sensitivity mass spectrometry. In the present study, the noble gas contents were measured in four Allan Hills meteorites (No. 5, No. 6, No. 7, and No. 8), whose C-14, Al-26, and Mn-53 radioactivities are known. These meteorites are of particular interest because they belong to a large assemblage of distinct meteorites that lie exposed on a small (110 sq km) area of ice near the Allan Hills.

  16. Carbon-14 ages of Allan Hills meteorites and ice

    NASA Technical Reports Server (NTRS)

    Fireman, E. L.; Norris, T.

    1982-01-01

    Allan Hills is a blue ice region of approximately 100 sq km area in Antarctica where many meteorites have been found exposed on the ice. The terrestrial ages of the Allan Hills meteorites, which are obtained from their cosmogenic nuclide abundances are important time markers which can reflect the history of ice movement to the site. The principal purpose in studying the terrestrial ages of ALHA meteorites is to locate samples of ancient ice and analyze their trapped gas contents. Attention is given to the C-14 and Ar-39 terrestrial ages of ALHA meteorites, and C-14 ages and trapped gas compositions in ice samples. On the basis of the obtained C-14 terrestrial ages, and Cl-36 and Al-26 results reported by others, it is concluded that most ALHA meteorites fell between 20,000 and 200,000 years ago.
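    The terrestrial age follows from simple radioactive decay; a sketch with illustrative activity values (not the paper's measurements):

```python
import numpy as np

# Radiocarbon terrestrial age: t = (T_half / ln 2) * ln(A0 / A), where A0 is
# the saturated C-14 activity at fall and A the measured activity today.
T_HALF = 5730.0                      # C-14 half-life, years

def terrestrial_age(a0, a):
    """Age in years from initial and measured specific activities."""
    return T_HALF / np.log(2) * np.log(a0 / a)

# Illustrative numbers only: a meteorite at ~8% of its saturation activity
print(f"{terrestrial_age(51.0, 4.0):.0f} years")   # ~21,000 years
```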

  17. On the measurement of frequency and of its sample variance with high-resolution counters

    SciTech Connect

    Rubiola, Enrico

    2005-05-15

    A frequency counter measures the input frequency ν averaged over a suitable time τ, versus the reference clock. High resolution is achieved by interpolating the clock signal. Further increased resolution is obtained by averaging multiple, highly overlapped frequency measurements. In the presence of additive white noise or white phase noise, the squared uncertainty improves from σν² ∝ 1/τ² to σν² ∝ 1/τ³. Surprisingly, when a file of contiguous data is fed into the formula of the two-sample (Allan) variance σy²(τ) = E{(1/2)(y_{k+1} − y_k)²} of the fractional frequency fluctuation y, the result is the modified Allan variance mod σy²(τ). But if a sufficient number of contiguous measures are averaged in order to get a longer τ and the data are fed into the same formula, the result is the (nonmodified) Allan variance. Of course, interpretation mistakes are around the corner if the counter's internal process is not well understood. The typical domain of interest is the short-term stability measurement of oscillators.
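    A sketch contrasting the two estimators named above, computed from fractional-frequency data; the formulas follow the standard definitions of the Allan and modified Allan variances:

```python
import numpy as np

# Allan variance vs. modified Allan variance at tau = m*tau0; the extra
# phase averaging in mvar() is what a highly overlapped counter implicitly
# performs, which is why it returns mod sigma_y^2 rather than sigma_y^2.
def avar(y, m):
    ybar = np.convolve(y, np.ones(m) / m, mode="valid")   # m-sample averages
    d = ybar[m:] - ybar[:-m]                              # adjacent pairs
    return 0.5 * np.mean(d ** 2)

def mvar(y, m, tau0=1.0):
    x = np.concatenate([[0.0], np.cumsum(y)]) * tau0      # phase samples
    s = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]              # second differences
    a = np.convolve(s, np.ones(m) / m, mode="valid")      # extra phase average
    return np.mean(a ** 2) / (2 * (m * tau0) ** 2)

rng = np.random.default_rng(8)
y = rng.normal(size=8192)                                 # white FM toy data
for m in (1, 4, 16):
    print(m, avar(y, m), mvar(y, m))                      # equal at m = 1
```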

  18. Stratospheric Assimilation of Chemical Tracer Observations Using a Kalman Filter. Pt. 2; Chi-Square Validated Results and Analysis of Variance and Correlation Dynamics

    NASA Technical Reports Server (NTRS)

    Menard, Richard; Chang, Lang-Ping

    1998-01-01

    A Kalman filter system designed for the assimilation of limb-sounding observations of stratospheric chemical tracers, which has four tunable covariance parameters, was developed in Part I (Menard et al. 1998). The assimilation results of CH4 observations from the Cryogenic Limb Array Etalon Spectrometer (CLAES) and the Halogen Occultation Experiment (HALOE) instruments on board the Upper Atmosphere Research Satellite are described in this paper. A robust χ² criterion, which provides a statistical validation of the forecast and observational error covariances, was used to estimate the tunable variance parameters of the system. In particular, an estimate of the model error variance was obtained. The effect of model error on the forecast error variance became critical after only three days of assimilation of CLAES observations, although it took 14 days of forecast to double the initial error variance. We further found that the model error due to numerical discretization, as arising in the standard Kalman filter algorithm, is comparable in size to the physical model error due to wind and transport modeling errors together. Separate assimilations of CLAES and HALOE observations were compared to validate the state estimate away from the observed locations. A wave-breaking event that took place several thousands of kilometers away from the HALOE observation locations was well captured by the Kalman filter due to highly anisotropic forecast error correlations. The forecast error correlation in the assimilation of the CLAES observations was found to have a structure similar to that in pure forecast mode except for smaller length scales. Finally, we have conducted an analysis of the variance and correlation dynamics to determine their relative importance in chemical tracer assimilation problems. Results show that the optimality of a tracer assimilation system depends, for the most part, on having flow-dependent error correlation rather than on evolving the
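    The χ² criterion itself is a standard innovation diagnostic; a minimal sketch on a toy linear-Gaussian setup (dimensions and covariances are invented):

```python
import numpy as np

# Innovation chi-square check: chi2 = v' S^-1 v with v = y - H x_f and
# S = H P H' + R; when the assumed covariances are consistent with the
# actual errors, E[chi2] equals the number of observations.
rng = np.random.default_rng(9)
nobs, steps = 20, 200
H = np.eye(nobs)
P = 1.0 * np.eye(nobs)          # assumed forecast error covariance
R = 0.5 * np.eye(nobs)          # assumed observation error covariance
S = H @ P @ H.T + R

chi2 = []
for _ in range(steps):
    truth = rng.normal(0.0, 1.0, nobs)
    x_f = truth + rng.normal(0.0, 1.0, nobs)            # errors match P
    y = truth + rng.normal(0.0, np.sqrt(0.5), nobs)     # errors match R
    v = y - H @ x_f                                     # innovation
    chi2.append(v @ np.linalg.solve(S, v))

print(f"mean chi2 = {np.mean(chi2):.1f} (expected ~ {nobs})")
```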

  19. Approximate Confidence Intervals for Moment-Based Estimators of the Between-Study Variance in Random Effects Meta-Analysis

    ERIC Educational Resources Information Center

    Jackson, Dan; Bowden, Jack; Baker, Rose

    2015-01-01

    Moment-based estimators of the between-study variance are very popular when performing random effects meta-analyses. This type of estimation has many advantages including computational and conceptual simplicity. Furthermore, by using these estimators in large samples, valid meta-analyses can be performed without the assumption that the treatment…
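    The DerSimonian-Laird estimator is the best-known moment-based estimator of this kind; a sketch with invented study data:

```python
import numpy as np

# DerSimonian-Laird moment estimator of the between-study variance tau^2:
# tau2 = max(0, (Q - (k-1)) / (sum(w) - sum(w^2)/sum(w))), with w = 1/v.
y = np.array([0.30, 0.10, 0.45, 0.22, -0.05])   # study effect estimates (toy)
v = np.array([0.020, 0.015, 0.050, 0.010, 0.030])  # within-study variances

w = 1.0 / v                                     # fixed-effect weights
ybar = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - ybar) ** 2)                 # Cochran's Q
k = len(y)
tau2 = max(0.0, (Q - (k - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))
print(f"Q = {Q:.2f}, tau^2 = {tau2:.4f}")
```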

  20. Analysis of variance in determinations of equivalence volume and of the ionic product of water in potentiometric titrations.

    PubMed

    Braibanti, A; Bruschi, C; Fisicaro, E; Pasquali, M

    1986-06-01

    Homogeneous sets of data from strong acid-strong base potentiometric titrations in aqueous solution at various constant ionic strengths have been analysed by statistical criteria. The aim is to see whether the error distribution matches that for the equilibrium constants determined by competitive potentiometric methods using the glass electrode. The titration curve can be defined when the estimated equivalence volume V_EM, with standard deviation (s.d.) σ(V_EM), the standard potential E0, with s.d. σ(E0), and the operational ionic product of water K*_w (or E*_w in mV), with s.d. σ(K*_w) [or σ(E*_w)], are known. A special computer program, BEATRIX, has been written which optimizes the values of V_EM, E0 and K*_w by linearization of the titration curve as a Gran plot. Analysis of variance applied to a set of 11 titrations in 1.0M sodium chloride medium at 298 K has demonstrated that the values of V_EM belong to a normal population of points corresponding to individual potential/volume data-pairs (E_i; v_i) of any titration, whereas the values of pK*_w (or of E*_w) belong to a normal population with members corresponding to individual titrations, which is also the case for the equilibrium constants. The intertitration variation is attributable to the electrochemical component of the system and appears as signal noise distributed over the titrations. The correction for junction potentials, introduced in a further stage of the program by optimization in a Nernst equation, increases the noise, i.e., σ(pK*_w). This correction should therefore be avoided whenever it causes an increase of σ(pK*_w). The influence of the ionic medium has been examined by processing data from acid-base titrations in 0.1M potassium chloride and 0.5M potassium nitrate media. The titrations in potassium chloride medium showed the same behaviour as those in sodium chloride medium, but with an s.d. for pK*_w that was smaller and close to the
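    A sketch of the Gran-plot linearization for a strong acid titrated with a strong base (the titration data are synthetic, and the real program also optimizes E0 and K*_w):

```python
import numpy as np

# Before equivalence the Gran function G(v) = (V0 + v) * 10**(-pH) falls
# linearly to zero at v = V_EM, so a linear fit recovers the equivalence
# volume. Concentrations and noise level below are hypothetical.
V0, c_acid, c_base = 50.0, 0.010, 0.020        # mL, mol/L
v = np.arange(2.0, 20.0, 2.0)                  # titrant volumes, mL
h = (V0 * c_acid - v * c_base) / (V0 + v)      # [H+] before equivalence
pH = -np.log10(h) + np.random.default_rng(10).normal(0, 0.005, v.size)

G = (V0 + v) * 10 ** (-pH)                     # Gran function
slope, intercept = np.polyfit(v, G, 1)         # pre-equivalence points only
print(f"V_EM = {-intercept / slope:.2f} mL (true 25.00 mL)")
```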

  1. Reporting explained variance

    NASA Astrophysics Data System (ADS)

    Good, Ron; Fletcher, Harold J.

    The importance of reporting explained variance (sometimes referred to as magnitude of effects) in ANOVA designs is discussed in this paper. Explained variance is an estimate of the strength of the relationship between treatment (or other factors such as sex, grade level, etc.) and dependent variables of interest to the researcher(s). Three methods that can be used to obtain estimates of explained variance in ANOVA designs are described and applied to 16 studies that were reported in recent volumes of this journal. The results show that, while in most studies the treatment accounts for a relatively small proportion of the variance in dependent variable scores, in some studies the magnitude of the treatment effect is respectable. The authors recommend that researchers in science education report explained variance in addition to the commonly reported tests of significance, since the latter are inadequate as the sole basis for making decisions about the practical importance of factors of interest to science education researchers.
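    The simplest such estimate is eta-squared from a one-way ANOVA; a sketch with toy group data (omega-squared is a less biased alternative):

```python
import numpy as np

# eta^2 = SS_between / SS_total: the proportion of total variance in the
# dependent variable accounted for by the treatment factor. Toy data.
groups = [np.array([5.1, 4.8, 5.5, 6.0]),
          np.array([6.2, 6.8, 5.9, 7.1]),
          np.array([4.2, 4.9, 4.4, 5.0])]

allvals = np.concatenate(groups)
grand = allvals.mean()
ss_total = ((allvals - grand) ** 2).sum()
ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)

eta2 = ss_between / ss_total
print(f"eta^2 = {eta2:.2f}")   # variance explained by treatment
```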

  2. Pragmatics: The State of the Art: An Online Interview with Keith Allan

    ERIC Educational Resources Information Center

    Allan, Keith; Salmani Nodoushan, Mohammad Ali

    2015-01-01

    This interview was conducted with Professor Keith Allan with the aim of providing a brief but informative summary of the state of the art of pragmatics. In providing answers to the interview questions, Professor Allan begins with a definition of pragmatics as it is practiced today, i.e., the study of the meanings of utterances with attention to…

  3. Fast variance reduction for steady-state simulation and sensitivity analysis of stochastic chemical systems using shadow function estimators

    SciTech Connect

    Milias-Argeitis, Andreas; Khammash, Mustafa; Lygeros, John

    2014-07-14

    We address the problem of estimating steady-state quantities associated to systems of stochastic chemical kinetics. In most cases of interest, these systems are analytically intractable, and one has to resort to computational methods to estimate stationary values of cost functions. In this work, we introduce a novel variance reduction algorithm for stochastic chemical kinetics, inspired by related methods in queueing theory, in particular the use of shadow functions. Using two numerical examples, we demonstrate the efficiency of the method for the calculation of steady-state parametric sensitivities and evaluate its performance in comparison to other estimation methods.

  4. Interpretation of analysis of variance models using principal component analysis to assess the effect of a maternal anticancer treatment on the mineralization of rat bones.

    PubMed

    Stanimirova, I; Michalik, K; Drzazga, Z; Trzeciak, H; Wentzell, P D; Walczak, B

    2011-03-01

    The goal of the present study is to assess the effects of anticancer treatment with cyclophosphamide and cytarabine during pregnancy on the mineralization of mandible bones in 7-, 14- and 28-day-old rats. Each bone sample was described by its X-ray fluorescence spectrum characterizing the mineral composition. The data collected are multivariate in nature and their structure is difficult to visualize and interpret directly. Therefore, methods like analysis of variance-principal component analysis (ANOVA-PCA) and ANOVA-simultaneous component analysis (ASCA), which are suitable for the analysis of highly correlated spectral data and are able to incorporate information about the underlying experimental design, are greatly valued. In this study, the ASCA methodology adapted for unbalanced data was used to investigate the impact of the anticancer drug treatment during pregnancy on the mineralization of the mandible bones of newborn rats and to examine any changes in the mineralization of the bones over time. The results showed that treatment with cyclophosphamide and cytarabine during pregnancy induces a decrease in the K and Zn levels in the mandible bones of newborns. This suppresses the development of mandible bones in rats in the early stages (up to 14 days) of formation. An interesting observation was that the levels of essential minerals like K, Mg, Na and Ca vary considerably in the different regions of the mandible bones.
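    A minimal sketch of the core ASCA idea: partition a centered data matrix into an effect matrix of factor-level means and run PCA on it via SVD (one factor and toy data; the paper's unbalanced-data adaptation is not reproduced):

```python
import numpy as np

# ASCA in miniature: X (samples x variables) is centered, split into an
# effect matrix of factor-level means plus residuals, and the effect matrix
# is analyzed by PCA (SVD). Factor layout and data are toy stand-ins.
rng = np.random.default_rng(11)
levels = np.repeat([0, 1, 2], 10)              # one factor, 3 levels
X = rng.normal(size=(30, 50))
X[levels == 1] += 0.8                          # induce a treatment effect

Xc = X - X.mean(axis=0)                        # overall centering
effect = np.vstack([Xc[levels == l].mean(axis=0) for l in levels])
residual = Xc - effect

U, s, Vt = np.linalg.svd(effect, full_matrices=False)   # PCA of the effect
scores = U[:, :2] * s[:2]                      # sample scores on PC1/PC2
print("effect variance on PC1:", s[0] ** 2 / (s ** 2).sum())
```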

  5. Heritabilities of ego strength (factor C), super ego strength (factor G), and self-sentiment (factor Q3) by multiple abstract variance analysis.

    PubMed

    Cattell, R B; Schuerger, J M; Klein, T W

    1982-10-01

    Tested over 3,000 boys (identical and fraternal twins, ordinary sibs, general population) aged 12-18 on Ego Strength, Super Ego Strength, and Self Sentiment. The Multiple Abstract Variance Analysis (MAVA) method was used to obtain estimates of abstract (hereditary, environmental) variances and covariances that contribute to total variation in the three traits. Within-family heritabilities for these traits were about .30, .05, and .65. Between-family heritabilities were .60, .08, and .45. Within-family correlations of genetic and environmental deviations were trivial, unusually so among personality variables, but between-family values showed the usual high negative values, consistent with the law of coercion to the biosocial mean.

  6. Variance Component Analysis of a Multi-Site Study for the Reproducibility of Multiple Reaction Monitoring Measurements of Peptides in Human Plasma

    PubMed Central

    Xia, Jessie Q.; Sedransk, Nell; Feng, Xingdong

    2011-01-01

    Background In the Addona et al. paper (Nature Biotechnology 2009), a large-scale multi-site study was performed to quantify Multiple Reaction Monitoring (MRM) measurements of proteins spiked in human plasma. The unlabeled signature peptides derived from the seven target proteins were measured at nine different concentration levels, and their isotopic counterparts served as the internal standards. Methodology/Principal Findings In this paper, the sources of variation are analyzed by decomposing the variance into parts attributable to specific experimental factors: technical replicates, sites, peptides, transitions within each peptide, and higher-order interaction terms, based on carefully built mixed effects models. The factors of peptides and transitions are shown to be major contributors to the variance of the measurements considering heavy (isotopic) peptides alone. For the light (12C) peptides alone, in addition to these factors, the factor of study*peptide also contributes significantly to the variance of the measurements. Heterogeneous peptide component models as well as influence analysis identify the outlier peptides in the study, which are then excluded from the analysis. Using a log-log scale transformation and subtracting the heavy (isotopic) peptide [internal standard] measurement from the light peptide measurement (i.e., taking the logarithm of the peak area ratio in the original scale), it is established that the MRM measurements are overall consistent across laboratories following the same standard operating procedures, and that the variance components related to sites, transitions and higher-order interaction terms involving sites have a greatly reduced impact. Thus the heavy peptides have been effective in reducing apparent inter-site variability. In addition, the estimates of intercepts and slopes of the calibration curves are calculated for the sub-studies. Conclusions/Significance The MRM measurements are overall consistent across laboratories following the same

  7. Novel SLC16A2 mutations in patients with Allan-Herndon-Dudley syndrome.

    PubMed

    Shimojima, Keiko; Maruyama, Koichi; Kikuchi, Masahiro; Imai, Ayako; Inoue, Ken; Yamamoto, Toshiyuki

    2016-08-01

    Allan-Herndon-Dudley syndrome (AHDS) is an X-linked disorder caused by an impaired thyroid hormone transporter. Patients with AHDS usually exhibit severe motor developmental delay, delayed myelination of the brain white matter, and elevated T3 levels in thyroid tests. Neurological examination of two patients with neurodevelopmental delay revealed generalized hypotonia, and not paresis, as the main neurological finding. Nystagmus and dyskinesia were not observed. Brain magnetic resonance imaging demonstrated delayed myelination in early childhood in both patients. Nevertheless, matured myelination was observed at 6 years of age in one patient. Although the key finding for AHDS is elevated free T3, one of the patients showed a normal T3 level in childhood, obscuring the diagnosis of AHDS. Genetic analysis revealed two novel SLC16A2 mutations, p.(Gly122Val) and p.(Gly221Ser), confirming the AHDS diagnosis. These results indicate that AHDS diagnosis is sometimes challenging owing to clinical variability among patients. PMID:27672545

  8. Element distribution and noble gas isotopic abundances in lunar meteorite Allan Hills A81005

    NASA Technical Reports Server (NTRS)

    Kraehenbuehl, U.; Eugster, O.; Niedermann, S.

    1986-01-01

    Antarctic meteorite ALLAN HILLS A81005, an anorthositic breccia, is recognized to be of lunar origin. The noble gases in this meteorite were analyzed and found to be solar-wind implanted gases, whose absolute and relative concentrations are quite similar to those in lunar regolith samples. A sample of this meteorite was obtained for the analysis of the noble gas isotopes, including Kr-81, and for the determination of the elemental abundances. In order to better resolve the surface-correlated gas component, grain size fractions were prepared. The results of the instrumental measurements of the gamma radiation are listed. From the amounts of cosmic-ray-produced noble gases and the respective production rates, the lunar surface residence times were calculated. It was concluded that the lunar surface residence time is about half a billion years.

  9. What do differences between multi-voxel and univariate analysis mean? How subject-, voxel-, and trial-level variance impact fMRI analysis.

    PubMed

    Davis, Tyler; LaRocque, Karen F; Mumford, Jeanette A; Norman, Kenneth A; Wagner, Anthony D; Poldrack, Russell A

    2014-08-15

    Multi-voxel pattern analysis (MVPA) has led to major changes in how fMRI data are analyzed and interpreted. Many studies now report both MVPA results and results from standard univariate voxel-wise analysis, often with the goal of drawing different conclusions from each. Because MVPA results can be sensitive to latent multidimensional representations and processes whereas univariate voxel-wise analysis cannot, one conclusion that is often drawn when MVPA and univariate results differ is that the activation patterns underlying MVPA results contain a multidimensional code. In the current study, we conducted simulations to formally test this assumption. Our findings reveal that MVPA tests are sensitive to the magnitude of voxel-level variability in the effect of a condition within subjects, even when the same linear relationship is coded in all voxels. We also find that MVPA is insensitive to subject-level variability in mean activation across an ROI, which is the primary variance component of interest in many standard univariate tests. Together, these results illustrate that differences between MVPA and univariate tests do not afford conclusions about the nature or dimensionality of the neural code. Instead, targeted tests of the informational content and/or dimensionality of activation patterns are critical for drawing strong conclusions about the representational codes that are indicated by significant MVPA results. PMID:24768930

  11. A COSMIC VARIANCE COOKBOOK

    SciTech Connect

    Moster, Benjamin P.; Rix, Hans-Walter; Somerville, Rachel S.; Newman, Jeffrey A. E-mail: rix@mpia.de E-mail: janewman@pitt.edu

    2011-04-20

    Deep pencil beam surveys (<1 deg²) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by 'cosmic variance'. This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10^11 M_sun is ~38%, while it is ~27% for GEMS and ~12% for COSMOS. For galaxies of m* ~ 10^10 M_sun, the relative cosmic variance is ~19% for GOODS, ~13% for GEMS, and ~6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z̄ = 2 for small fields and massive galaxies, while for larger fields and intermediate mass galaxies, cosmic variance is

  11. A Cosmic Variance Cookbook

    NASA Astrophysics Data System (ADS)

    Moster, Benjamin P.; Somerville, Rachel S.; Newman, Jeffrey A.; Rix, Hans-Walter

    2011-04-01

    Deep pencil beam surveys (<1 deg²) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by "cosmic variance." This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10^11 M_sun is ~38%, while it is ~27% for GEMS and ~12% for COSMOS. For galaxies of m* ~ 10^10 M_sun, the relative cosmic variance is ~19% for GOODS, ~13% for GEMS, and ~6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z̄ = 2 for small fields and massive galaxies, while for larger fields and intermediate mass galaxies, cosmic variance is
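
    The paper's fitting function and tables are not reproduced here; the sketch below encodes only the linear-regime relation stated in the abstract - the relative cosmic variance of a galaxy sample is the galaxy bias times the dark matter cosmic variance - with hypothetical input values.

    def relative_cosmic_variance(sigma_dm, galaxy_bias):
        """Linear-regime relation from the abstract: sigma_gal = b * sigma_dm."""
        return galaxy_bias * sigma_dm

    # Illustrative numbers only (the paper tabulates sigma_dm per survey geometry,
    # mean redshift and bin size, and the bias per stellar-mass range):
    sigma_dm_goods = 0.10  # hypothetical dark matter cosmic variance, GOODS-like field
    bias_massive = 3.8     # hypothetical bias of m* > 10^11 M_sun galaxies at z ~ 2
    print(relative_cosmic_variance(sigma_dm_goods, bias_massive))  # 0.38, i.e. ~38%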

  12. Adjustment of measurements with multiplicative errors: error analysis, estimates of the variance of unit weight, and effect on volume estimation from LiDAR-type digital elevation models.

    PubMed

    Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

    2014-01-10

    Modern observation technology has verified that measurement errors can be proportional to the true values of the measurements, as in GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative measurement errors on DEM construction and on the estimate of landslide mass volume from the constructed DEM.
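
    A small sketch of the multiplicative error model itself, with assumed numbers (not from the paper): the residual spread grows with the true value, which is exactly what an additive-error treatment ignores.

    import numpy as np

    rng = np.random.default_rng(1)
    true_heights = rng.uniform(5.0, 50.0, 10_000)  # hypothetical terrain heights (m)

    # Multiplicative error model: y = x * (1 + eps), error proportional to the true value.
    eps = rng.normal(0.0, 0.02, true_heights.size)  # assumed 2% proportional noise
    measured = true_heights * (1.0 + eps)

    residuals = measured - true_heights
    # Under an additive model the residual spread would be constant; here it scales with x.
    print("residual sd, low terrain: ", residuals[true_heights < 15.0].std())  # ~0.2 m
    print("residual sd, high terrain:", residuals[true_heights > 40.0].std())  # ~0.9 m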

  13. Adjustment of Measurements with Multiplicative Errors: Error Analysis, Estimates of the Variance of Unit Weight, and Effect on Volume Estimation from LiDAR-Type Digital Elevation Models

    PubMed Central

    Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

    2014-01-01

    Modern observation technology has verified that measurement errors can be proportional to the true values of the measurements, as in GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative measurement errors on DEM construction and on the estimate of landslide mass volume from the constructed DEM. PMID:24434880

  14. Longitudinal analysis of residual feed intake and BW in mink using random regression with heterogeneous residual variance.

    PubMed

    Shirali, M; Nielsen, V H; Møller, S H; Jensen, J

    2015-10-01

    The aim of this study was to determine the genetic background of longitudinal residual feed intake (RFI) and BW gain in farmed mink using random regression methods considering heterogeneous residual variances. The individual BW was measured every 3 weeks from 63 to 210 days of age for 2139 male-female pairs of juvenile mink during the growing-furring period. Cumulative feed intake was calculated six times at 3-week intervals based on daily feed consumption between weighings from 105 to 210 days of age. Genetic parameters for RFI and BW gain in males and females were obtained using univariate random regression with Legendre polynomials containing an animal genetic effect and a permanent environmental effect of litter, along with heterogeneous residual variances. Heritability estimates for RFI increased with age from 0.18 (0.03, posterior standard deviation (PSD)) at 105 days of age to 0.49 (0.03, PSD) and 0.46 (0.03, PSD) at 210 days of age in male and female mink, respectively. The heritability estimates for BW gain increased with age and were moderate to high in males (0.33 (0.02, PSD) to 0.84 (0.02, PSD)) and females (0.35 (0.03, PSD) to 0.85 (0.02, PSD)). RFI estimates during the growing period (105 to 126 days of age) showed high positive genetic correlations with the pelting RFI (210 days of age) in males (0.86 to 0.97) and females (0.92 to 0.98). However, phenotypic correlations were lower, from 0.47 to 0.76 in males and 0.61 to 0.75 in females. Furthermore, BW records in the growing period (63 to 126 days of age) had moderate (males: 0.39, females: 0.53) to high (males: 0.87, females: 0.94) genetic correlations with pelting BW (210 days of age). The results of the current study showed that RFI and BW in mink are highly heritable, especially in the late furring period, suggesting potential for large genetic gains for these traits. The genetic correlations suggested that substantial genetic gain can be obtained by only considering the RFI estimate and BW at pelting

  15. Measurement and modeling of acid dissociation constants of tri-peptides containing Glu, Gly, and His using potentiometry and generalized multiplicative analysis of variance.

    PubMed

    Khoury, Rima Raffoul; Sutton, Gordon J; Hibbert, D Brynn; Ebrahimi, Diako

    2013-02-28

    We report pKa values with measurement uncertainties for all labile protons of the 27 tri-peptides prepared from the amino acids glutamic acid (E), glycine (G) and histidine (H). Each tri-peptide (GGG, GGE, GGH, …, HHH) was subjected to alkali titration, and pKa values were calculated from triplicate potentiometric titration data using the HyperQuad 2008 software. A generalized multiplicative analysis of variance (GEMANOVA) of the pKa values for the most acidic proton gave the optimum model as having two terms: an interaction between the end amino acids plus an isolated main effect of the central amino acid.

  16. Technical note: An improved estimate of uncertainty for source contribution from effective variance Chemical Mass Balance (EV-CMB) analysis

    NASA Astrophysics Data System (ADS)

    Shi, Guo-Liang; Zhou, Xiao-Yu; Feng, Yin-Chang; Tian, Ying-Ze; Liu, Gui-Rong; Zheng, Mei; Zhou, Yang; Zhang, Yuan-Hang

    2015-01-01

    The CMB (Chemical Mass Balance) 8.2 model released by the USEPA is a commonly used receptor model that can determine estimated source contributions and their uncertainties (called the default uncertainty). In this study, we propose an improved CMB uncertainty for the modeled contributions (called the EV-LS uncertainty) by adding the difference between the modeled and measured values of the ambient species concentrations to the default CMB uncertainty, based on the effective variance least squares (EV-LS) solution. This correction reconciles the uncertainty estimates for EV and OLS regression. To verify the formula for the EV-LS CMB uncertainty, the same ambient datasets were analyzed using the equation we developed for the EV-LS CMB uncertainty and a standard statistical package, SPSS 16.0. The same results were obtained both ways, indicating that the equation for the EV-LS CMB uncertainty proposed here is acceptable. In addition, four ambient datasets were studied with CMB 8.2, and the source contributions as well as the associated uncertainties were obtained accordingly.
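
    A sketch of the effective-variance least squares iteration that the proposed correction builds on (the paper's exact EV-LS uncertainty correction term is not reproduced; all input names are assumptions).

    import numpy as np

    def ev_ls(C, sigma_C, F, sigma_F, n_iter=20):
        """Effective-variance LS for receptor modeling. C: measured species
        concentrations (n,); F: source profiles (n, p); sigmas: their uncertainties.
        Returns source contributions s and their default covariance (F' W F)^-1."""
        s = np.linalg.lstsq(F, C, rcond=None)[0]        # ordinary LS start
        for _ in range(n_iter):
            v_eff = sigma_C**2 + (sigma_F**2) @ (s**2)  # effective variance per species
            W = np.diag(1.0 / v_eff)
            cov = np.linalg.inv(F.T @ W @ F)
            s = cov @ F.T @ W @ C
        # The paper's improved (EV-LS) uncertainty additionally folds the
        # modeled-minus-measured residuals F @ s - C into the default uncertainty.
        return s, cov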

  17. The Effects of Single and Compound Violations of Data Set Assumptions when Using the Oneway, Fixed Effects Analysis of Variance and the One Concomitant Analysis of Covariance Statistical Models.

    ERIC Educational Resources Information Center

    Johnson, Colleen Cook

    This study integrates into one comprehensive Monte Carlo simulation a vast array of previously defined and substantively interrelated research studies of the robustness of analysis of variance (ANOVA) and analysis of covariance (ANCOVA) statistical procedures. Three sets of balanced ANOVA and ANCOVA designs (group sizes of 15, 30, and 45) and one…
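
    A compact example of the kind of Monte Carlo robustness check described (settings are illustrative, not the study's design): the empirical Type I error of the one-way fixed-effects ANOVA under unequal group variances.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n_rep, n_per_group = 5_000, 15  # group size 15, one of the sizes in the study
    sds = (1.0, 2.0, 4.0)           # heterogeneous group variances, equal means

    rejections = 0
    for _ in range(n_rep):
        groups = [rng.normal(0.0, sd, n_per_group) for sd in sds]
        _, p = stats.f_oneway(*groups)
        rejections += p < 0.05

    print(f"empirical Type I error: {rejections / n_rep:.3f} (nominal 0.05)")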

  1. Seizures in the life and works of Edgar Allan Poe.

    PubMed

    Bazil, C W

    1999-06-01

    Edgar Allan Poe, one of the most celebrated of American storytellers, lived through and wrote descriptions of episodic unconsciousness, confusion, and paranoia. These symptoms have been attributed to alcohol or drug abuse but also could represent complex partial seizures, prolonged postictal states, or postictal psychosis. Complex partial seizures were not well described in Poe's time, which could explain a misdiagnosis. Alternatively, he may have suffered from complex partial epilepsy that was complicated or caused by substance abuse. Even today, persons who have epilepsy are mistaken for substance abusers and occasionally are arrested during postictal confusional states. Poe was able to use creative genius and experiences from illness to create memorable tales and poignant poems.

  2. Petrogenetic relationship between Allan Hills 77005 and other achondrites

    NASA Technical Reports Server (NTRS)

    Mcsween, H. Y., Jr.; Taylor, L. A.; Stolper, E. M.; Muntean, R. A.; Okelley, G. D.; Eldridge, J. S.; Biswas, S.; Ngo, H. T.; Lipschutz, M. E.

    1979-01-01

    The paper presents chemical and petrologic data on the Allan Hills (ALHA) 77005 achondrite from Antarctica and explores its petrogenetic relationship with the shergottites. Petrologic similarities with the latter in terms of mineralogy, oxidation state, inferred source region composition, and shock ages suggest a genetic relationship, which is also indicated by volatile-to-involatile element ratios and the abundances of other trace elements. ALHA 77005 may be a cumulate crystallized from a liquid parental to the materials from which the shergottites crystallized, or a sample of the peridotite from which the shergottite parent liquids were derived. Chemical similarities with terrestrial ultramafic rocks suggest that it provides an additional sample of the only other solar system body with basalt source regions chemically similar to the earth's upper mantle.

  3. A new kind of primitive chondrite, Allan Hills 85085

    NASA Technical Reports Server (NTRS)

    Scott, Edward R. D.

    1988-01-01

    Allan Hills (ALH) 85085, a chemically and mineralogically unique chondrite whose components have suffered little metamorphism or alteration, is discussed. It is found that ALH 85085 has 4 wt pct chondrules (mean diameter 16 microns), 36 wt pct Fe,Ni metal, 56 wt pct lithic and mineral silicate fragments, and 2 wt pct troilite. It is suggested that, with the exception of the matrix lumps, the components of ALH 85085 formed and accreted in the solar nebula. It is shown that ALH 85085 does not belong to any of the nine chondrite groups and is very different from Kakangari. Similarities between ALH 85085 and Bencubbin and Weatherford suggest that the latter two primitive meteorites may be chondrites with high metal abundances and very large, partly fragmented chondrules.

  4. Evaluation of the oscillatory interference model of grid cell firing through analysis and measured period variance of some biological oscillators.

    PubMed

    Zilli, Eric A; Yoshida, Motoharu; Tahvildari, Babak; Giocomo, Lisa M; Hasselmo, Michael E

    2009-11-01

    Models of the hexagonally arrayed spatial activity pattern of grid cell firing in the literature generally fall into two main categories: continuous attractor models or oscillatory interference models. Burak and Fiete (2009, PLoS Comput Biol) recently examined noise in two continuous attractor models, but did not consider oscillatory interference models in detail. Here we analyze an oscillatory interference model to examine the effects of noise on its stability and spatial firing properties. We show analytically that the square of the drift in encoded position due to noise is proportional to time and inversely proportional to the number of oscillators. We also show there is a relatively fixed breakdown point, independent of many parameters of the model, past which noise overwhelms the spatial signal. Based on this result, we show that a pair of oscillators are expected to maintain a stable grid for approximately t = 5μ³/(4πσ)² seconds, where μ is the mean period of an oscillator in seconds and σ² its variance in seconds². We apply this criterion to recordings of individual persistent spiking neurons in postsubiculum (dorsal presubiculum) and layers III and V of entorhinal cortex, to subthreshold membrane potential oscillation recordings in layer II stellate cells of medial entorhinal cortex and to values from the literature regarding medial septum theta bursting cells. All oscillators examined have expected stability times far below those seen in experimental recordings of grid cells, suggesting the examined biological oscillators are unfit as a substrate for current implementations of oscillatory interference models. However, oscillatory interference models can tolerate small amounts of noise, suggesting the utility of circuit level effects which might reduce oscillator variability. Further implications for grid cell models are discussed.
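
    The stability criterion quoted above is easy to evaluate directly; the oscillator numbers below are hypothetical, not taken from the recordings.

    import numpy as np

    def expected_stability_time(mu, sigma):
        """t = 5*mu**3 / (4*pi*sigma)**2 from the abstract: mu is the mean oscillator
        period (s) and sigma the standard deviation of the period (s)."""
        return 5.0 * mu**3 / (4.0 * np.pi * sigma)**2

    # Hypothetical theta-band oscillator: 8 Hz mean frequency, 5% period jitter.
    mu = 1.0 / 8.0
    print(f"{expected_stability_time(mu, 0.05 * mu):.2f} s")  # ~1.6 s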

  5. The history of Allan Hills 84001 revised: multiple shock events.

    PubMed

    Treiman, A H

    1998-07-01

    The geologic history of Martian meteorite Allan Hills (ALH) 84001 is more complex than previously recognized, with evidence for four or five crater-forming impacts onto Mars. This history of repeated deformation and shock metamorphism appears to weaken some arguments that have been offered for and against the hypothesis of ancient Martian life in ALH 84001. Allan Hills 84001 formed originally from basaltic magma. Its first impact event (I1) is inferred from the deformation (D1) that produced the granular-textured bands ("crush zones") that transect the original igneous fabric. Deformation D1 is characterized by intense shear and may represent excavation or rebound flow of rock beneath a large impact crater. An intense thermal metamorphism followed D1 and may be related to it. The next impact (I2) produced fractures, (Fr2) in which carbonate "pancakes" were deposited and produced feldspathic glass from some of the igneous feldspars and silica. After I2, carbonate pancakes and globules were deposited in Fr2 fractures and replaced feldspathic glass and possibly crystalline silicates. Next, feldspars, feldspathic glass, and possibly some carbonates were mobilized and melted in the third impact (I3). Microfaulting, intense fracturing, and shear are also associated with I3. In the fourth impact (I4), the rock was fractured and deformed without significant heating, which permitted remnant magnetization directions to vary across fracture surfaces. Finally, ALH 84001 was ejected from Mars in event I5, which could be identical to I4. This history of multiple impacts is consistent with the photogeology of the Martian highlands and may help resolve some apparent contradictions among recent results on ALH 84001. For example, the submicron rounded magnetite grains in the carbonate globules could be contemporaneous with carbonate deposition, whereas the elongate magnetite grains, epitaxial on carbonates, could be ascribed to vapor-phase deposition during I3. PMID:11543074

  6. Variance Anisotropy in Kinetic Plasmas

    NASA Astrophysics Data System (ADS)

    Parashar, Tulasi N.; Oughton, Sean; Matthaeus, William H.; Wan, Minping

    2016-06-01

    Solar wind fluctuations admit well-documented anisotropies of the variance matrix, or polarization, related to the mean magnetic field direction. Typically, one finds a ratio of perpendicular variance to parallel variance of the order of 9:1 for the magnetic field. Here we study the question of whether a kinetic plasma spontaneously generates and sustains parallel variances when initiated with only perpendicular variance. We find that parallel variance grows and saturates at about 5% of the perpendicular variance in a few nonlinear times irrespective of the Reynolds number. For sufficiently large systems (Reynolds numbers) the variance approaches values consistent with the solar wind observations.

  7. Conversations across Meaning Variance

    ERIC Educational Resources Information Center

    Cordero, Alberto

    2013-01-01

    Progressive interpretations of scientific theories have long been denounced as naive, because of the inescapability of meaning variance. The charge reportedly applies to recent realist moves that focus on theory-parts rather than whole theories. This paper considers the question of what "theory-parts" of epistemic significance (if any) relevantly…

  8. Minimum variance geographic sampling

    NASA Technical Reports Server (NTRS)

    Terrell, G. R. (Principal Investigator)

    1980-01-01

    Resource inventories require samples with geographical scatter, sometimes not as widely spaced as would be hoped. A simple model of correlation over distance is used to construct a minimum variance unbiased estimate of population means. The fitting procedure is illustrated with data used to estimate Missouri corn acreage.
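
    A sketch of the underlying idea with an assumed exponential distance-correlation model: generalized least squares weights give a minimum variance unbiased estimate of the mean (all data are illustrative, not the Missouri corn figures).

    import numpy as np

    rng = np.random.default_rng(3)
    n = 40
    coords = rng.uniform(0.0, 100.0, (n, 2))  # hypothetical sample site locations (km)

    # Assumed distance-correlation model: corr(d) = exp(-d / range_km).
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    Sigma = np.exp(-d / 30.0)

    # Minimum variance unbiased (GLS/BLUE) weights for the population mean.
    ones = np.ones(n)
    w = np.linalg.solve(Sigma, ones)
    w /= ones @ w

    y = rng.normal(10.0, 1.0, n)  # illustrative observations at the sample sites
    print("weighted mean estimate:", w @ y)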

  9. The emergence of modern statistics in agricultural science: analysis of variance, experimental design and the reshaping of research at Rothamsted Experimental Station, 1919-1933.

    PubMed

    Parolini, Giuditta

    2015-01-01

    During the twentieth century statistical methods have transformed research in the experimental and social sciences. Qualitative evidence has largely been replaced by quantitative results and the tools of statistical inference have helped foster a new ideal of objectivity in scientific knowledge. The paper will investigate this transformation by considering the genesis of analysis of variance and experimental design, statistical methods nowadays taught in every elementary course of statistics for the experimental and social sciences. These methods were developed by the mathematician and geneticist R. A. Fisher during the 1920s, while he was working at Rothamsted Experimental Station, where agricultural research was in turn reshaped by Fisher's methods. Analysis of variance and experimental design required new practices and instruments in field and laboratory research, and imposed a redistribution of expertise among statisticians, experimental scientists and the farm staff. On the other hand the use of statistical methods in agricultural science called for a systematization of information management and made computing an activity integral to the experimental research done at Rothamsted, permanently integrating the statisticians' tools and expertise into the station research programme. Fisher's statistical methods did not remain confined within agricultural research and by the end of the 1950s they had come to stay in psychology, sociology, education, chemistry, medicine, engineering, economics, quality control, just to mention a few of the disciplines which adopted them.

  10. Allan-Herndon-Dudley syndrome with unusual profound sensorineural hearing loss.

    PubMed

    Gagliardi, Lucia; Nataren, Nathalie; Feng, Jinghua; Schreiber, Andreas W; Hahn, Christopher N; Conwell, Louise S; Coman, David; Scott, Hamish S

    2015-08-01

    The Allan-Herndon-Dudley syndrome is caused by mutations in the thyroid hormone transporter, Monocarboxylate transporter 8 (MCT8). It is characterized by profound intellectual disability and abnormal thyroid function. We report on a patient with Allan-Herndon-Dudley syndrome (AHDS) with profound sensorineural hearing loss which is not usually a feature of AHDS and which may have been due to a coexisting nonsense mutation in Microphthalmia-associated transcription factor (MITF).

  11. High-dimensional nested analysis of variance to assess the effect of production season, quality grade and steam pasteurization on the phenolic composition of fermented rooibos herbal tea.

    PubMed

    Stanimirova, I; Kazura, M; de Beer, D; Joubert, E; Schulze, A E; Beelders, T; de Villiers, A; Walczak, B

    2013-10-15

    A nested analysis of variance combined with simultaneous component analysis, ASCA, was proposed to model high-dimensional chromatographic data. The data were obtained from an experiment designed to investigate the effect of production season, quality grade and post-production processing (steam pasteurization) on the phenolic content of the infusion of the popular herbal tea, rooibos, at 'cup-of-tea' strength. Specifically, a four-way analysis of variance where the experimental design involves nesting in two of the three crossed factors was considered. For the purpose of the study, batches of fermented rooibos plant material were sampled from each of four quality grades during three production seasons (2009, 2010 and 2011) and a sub-sample of each batch was steam-pasteurized. The phenolic content of each rooibos infusion was characterized by high performance liquid chromatography (HPLC)-diode array detection (DAD). In contrast to previous studies, the complete HPLC-DAD signals were used in the chemometric analysis in order to take into account the entire phenolic profile. All factors had a significant effect on the phenolic content of a 'cup-of-tea' strength rooibos infusion. In particular, infusions prepared from the grade A (highest quality) samples contained a higher content of almost all phenolic compounds than the lower quality plant material. The variations of the content of isoorientin and orientin in the different quality grade infusions over production seasons are larger than the variations in the content of aspalathin and quercetin-3-O-robinobioside. Ferulic acid can be used as an indicator of the quality of rooibos tea as its content generally decreases with increasing tea quality. Steam pasteurization decreased the content of the majority of phenolic compounds in a 'cup-of-tea' strength rooibos infusion.

  12. Source apportionment of sediment PAHs in the Pearl River Delta region (China) using nonnegative matrix factorization analysis with effective weighted variance solution.

    PubMed

    Chen, Hai-Yang; Teng, Yan-Guo; Wang, Jin-Sheng; Song, Liu-Ting; Zuo, Rui

    2013-02-01

    Considering the advantages and limitations of a single receptor model, in this study a combined technique of nonnegative matrix factorization analysis with an effective weighted variance solution (NMF-EWV) is proposed for source apportionment. Using NMF, major linearly independent factor loadings with nonnegative elements were extracted to identify potential pollution sources. These physically reasonable factor loadings were then regarded as source profiles, and contributions were apportioned using the effective weighted variance solution. Evaluation results indicated that the NMF-EWV method reproduced the source profiles well and gave reasonable apportionment results for the synthetic dataset. The NMF-EWV methodology was also applied to recognize the sources, and apportion the contributions, of polycyclic aromatic hydrocarbons (PAHs) collected from freshwater and marine sediments in the Pearl River Delta (PRD) region, one of the most industrialized and economically significant regions of China. Apportionment results showed that the traffic tunnel source made the largest contribution (46.49%) to the freshwater PAH sediments in the PRD, followed by the residential coal source (29.61%), power plants (13.45%) and gasoline engines (10.45%). For the marine sediments, the traffic tunnel was also apportioned as the largest source (57.61%), followed by power plants (22.86%), gasoline engines (17.71%) and the residential coal source (1.82%). Traffic-related sources were the predominant cause of PAH pollution in that region.
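
    A sketch of the NMF step using scikit-learn on a synthetic samples-by-species matrix; the effective-weighted-variance contribution step is only summarized in a comment, and all inputs are hypothetical.

    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(4)
    X = rng.gamma(2.0, 1.0, (60, 16))  # hypothetical samples x PAH-species matrix

    # Extract nonnegative, physically interpretable factor loadings (candidate sources).
    model = NMF(n_components=4, init="nndsvda", max_iter=1000, random_state=0)
    W = model.fit_transform(X)  # sample-wise source strengths
    H = model.components_       # candidate source profiles

    # Crude contribution shares per source; the paper instead refines this step
    # with its effective weighted variance solution (not reproduced here).
    share = (W * H.sum(axis=1)).sum(axis=0)
    print(share / share.sum())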

  13. Identification of Analytical Factors Affecting Complex Proteomics Profiles Acquired in a Factorial Design Study with Analysis of Variance: Simultaneous Component Analysis.

    PubMed

    Mitra, Vikram; Govorukhina, Natalia; Zwanenburg, Gooitzen; Hoefsloot, Huub; Westra, Inge; Smilde, Age; Reijmers, Theo; van der Zee, Ate G J; Suits, Frank; Bischoff, Rainer; Horvatovich, Péter

    2016-04-19

    Complex shotgun proteomics peptide profiles obtained in quantitative differential protein expression studies, such as in biomarker discovery, may be affected by multiple experimental factors. These preanalytical factors may affect the measured protein abundances which in turn influence the outcome of the associated statistical analysis and validation. It is therefore important to determine which factors influence the abundance of peptides in a complex proteomics experiment and to identify those peptides that are most influenced by these factors. In the current study we analyzed depleted human serum samples to evaluate experimental factors that may influence the resulting peptide profile such as the residence time in the autosampler at 4 °C, stopping or not stopping the trypsin digestion with acid, the type of blood collection tube, different hemolysis levels, differences in clotting times, the number of freeze-thaw cycles, and different trypsin/protein ratios. To this end we used a two-level fractional factorial design of resolution IV (2_IV^(7-3)). The design required analysis of 16 samples in which the main effects were not confounded by two-factor interactions. Data preprocessing using the Threshold Avoiding Proteomics Pipeline (Suits, F.; Hoekman, B.; Rosenling, T.; Bischoff, R.; Horvatovich, P. Anal. Chem. 2011, 83, 7786-7794, ref 1) produced a data-matrix containing quantitative information on 2,559 peaks. The intensity of the peaks was log-transformed, and peaks having intensities of a low t-test significance (p-value > 0.05) and a low absolute fold ratio (<2) between the two levels of each factor were removed. The remaining peaks were subjected to analysis of variance (ANOVA)-simultaneous component analysis (ASCA). Permutation tests were used to identify which of the preanalytical factors influenced the abundance of the measured peptides most significantly. The most important preanalytical factors affecting peptide intensity were (1) the hemolysis level
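
    A sketch of the peak-filtering rule described above: a peak was removed only if it failed both criteria (p > 0.05 and |fold ratio| < 2), so a peak is kept if it passes either one. Array names are assumptions.

    import numpy as np
    from scipy import stats

    def keep_peaks(intensities, level, p_max=0.05, min_fold=2.0):
        """intensities: (n_samples, n_peaks); level: boolean (n_samples,) giving the
        two levels of one design factor. Returns a boolean keep-mask over peaks."""
        logx = np.log2(intensities)
        a, b = logx[level], logx[~level]
        _, p = stats.ttest_ind(a, b, axis=0)
        fold = np.abs(a.mean(axis=0) - b.mean(axis=0))  # |log2 fold ratio|
        return (p <= p_max) | (fold >= np.log2(min_fold))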

  14. Variance-based Sensitivity Analysis of Large-scale Hydrological Model to Prepare an Ensemble-based SWOT-like Data Assimilation Experiments

    NASA Astrophysics Data System (ADS)

    Emery, C. M.; Biancamaria, S.; Boone, A. A.; Ricci, S. M.; Garambois, P. A.; Decharme, B.; Rochoux, M. C.

    2015-12-01

    Land Surface Models (LSM) coupled with River Routing schemes (RRM) are used in Global Climate Models (GCM) to simulate the continental part of the water cycle. They are a key component of GCM as they provide boundary conditions to the atmospheric and oceanic models. However, at global scale, errors arise mainly from simplified physics, atmospheric forcing, and input parameters. More particularly, those used in RRM, such as river width, depth and friction coefficients, are difficult to calibrate and are mostly derived from geomorphologic relationships, which may not always be realistic. In situ measurements are then used to calibrate these relationships and validate the model, but global in situ data are very sparse. Additionally, due to the lack of an existing global river geomorphology database and of accurate forcing, models are run at coarse resolution. This is typically the case of the ISBA-TRIP model used in this study. A complementary alternative to in situ data are satellite observations. In this regard, the Surface Water and Ocean Topography (SWOT) satellite mission, jointly developed by NASA/CNES/CSA/UKSA and scheduled for launch around 2020, should be very valuable for calibrating RRM parameters. It will provide maps of water surface elevation for rivers wider than 100 meters over continental surfaces between 78°S and 78°N, as well as direct observations of river geomorphological parameters such as width and slope. Yet, before assimilating such data, the temporal sensitivity of the RRM to its time-constant parameters must be analyzed. This study presents such an analysis over large river basins for the TRIP RRM. Model output uncertainty, represented by the unconditional variance, is decomposed into ordered contributions from each parameter. A time-dependent analysis then makes it possible to identify the parameters to which the modeled water level and discharge are most sensitive over a hydrological year. The results show that local parameters directly impact water levels, while
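
    A crude illustration of the variance-based decomposition described: a binned estimator of the first-order Sobol index S_i = Var(E[Y|X_i]) / Var(Y), applied to a toy model standing in for TRIP (parameter names and numbers are invented).

    import numpy as np

    def first_order_sobol(x, y, n_bins=40):
        """Estimate S_i = Var(E[Y|X_i]) / Var(Y) by binning x into equal-count bins."""
        edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1))
        idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
        cond_means = np.array([y[idx == b].mean() for b in range(n_bins)])
        counts = np.bincount(idx, minlength=n_bins)
        var_cond = np.average((cond_means - y.mean())**2, weights=counts)
        return var_cond / y.var()

    rng = np.random.default_rng(5)
    x1, x2 = rng.uniform(size=100_000), rng.uniform(size=100_000)
    y = 5.0 * x1 + 0.5 * x2 + rng.normal(0.0, 0.1, x1.size)  # toy "river model"
    print(first_order_sobol(x1, y), first_order_sobol(x2, y))  # ~0.98 vs ~0.01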

  15. Cultural variances in composition of biological and supernatural concepts of death: a content analysis of children's literature.

    PubMed

    Lee, Ji Seong; Kim, Eun Young; Choi, Younyoung; Koo, Ja Hyouk

    2014-01-01

    Children's reasoning about the afterlife emerges naturally as a developmental regularity. Although a biological understanding of death increases in accordance with cognitive development, biological and supernatural explanations of death may coexist in a complementary manner, being deeply imbedded in cultural contexts. This study conducted a content analysis of 40 children's death-themed picture books in Western Europe and East Asia. It can be inferred that causality and non-functionality are highly integrated with the naturalistic and supernatural understanding of death in Western Europe, whereas the literature in East Asia seems to rely on naturalistic aspects of death and focuses on causal explanations. PMID:24738761

  16. Dimension reduction in heterogeneous neural networks: Generalized Polynomial Chaos (gPC) and ANalysis-Of-VAriance (ANOVA)

    NASA Astrophysics Data System (ADS)

    Choi, M.; Bertalan, T.; Laing, C. R.; Kevrekidis, I. G.

    2016-09-01

    We propose, and illustrate via a neural network example, two different approaches to coarse-graining large heterogeneous networks. Both approaches are inspired by, and use tools developed in, methods for uncertainty quantification (UQ) in systems with multiple uncertain parameters - in our case, the parameters are heterogeneously distributed on the network nodes. The approach shows promise in accelerating large-scale network simulations as well as coarse-grained fixed point and periodic solution computation and stability analysis. We also demonstrate that the approach can successfully deal with structural as well as intrinsic heterogeneities.

  17. Three-dimensional lung nodule segmentation and shape variance analysis to detect lung cancer with reduced false positives.

    PubMed

    Krishnamurthy, Senthilkumar; Narasimhan, Ganesh; Rengasamy, Umamaheswari

    2016-01-01

    A three-dimensional analysis of lung computed tomography scans was carried out in this study to detect malignant lung nodules. The automatic three-dimensional segmentation algorithm proposed here efficiently segmented the tissue clusters (nodules) inside the lung. However, an automatic morphological region-grow segmentation algorithm implemented to segment the well-circumscribed nodules inside the lung did not segment the juxta-pleural nodules present on the inner surface of the lung wall. A novel edge bridge and fill technique is proposed in this article to segment the juxta-pleural and pleural-tail nodules accurately. The centroid shift of each candidate nodule was computed; nodules with a large centroid shift across consecutive slices were eliminated, since a malignant nodule's position does not usually deviate. Three-dimensional shape-variation and edge-sharpness analyses were performed to reduce the false positives and to classify the malignant nodules. The change in area and equivalent diameter across consecutive slices was greater for malignant nodules, and the malignant nodules showed a sharp edge. Segmentation was followed by the three-dimensional centroid, shape and edge analysis, carried out on a lung computed tomography database of 20 patients with 25 malignant nodules. The algorithms proposed in this article correctly detected 22 malignant nodules and failed to detect 3, for a sensitivity of 88%. Furthermore, the algorithm correctly eliminated 216 tissue clusters that were initially segmented as nodules; however, 41 non-malignant tissue clusters were detected as malignant nodules. The false-positive rate of the algorithm was therefore 2.05 per patient.
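
    A sketch of the centroid-shift criterion (assumed mask layout; not the authors' implementation): per-slice centroids of one segmented candidate, then the shift between consecutive slices; candidates whose shift exceeds a threshold are eliminated.

    import numpy as np

    def centroid_shifts(mask_3d):
        """mask_3d: boolean array (slices, rows, cols) for one candidate nodule.
        Returns the Euclidean centroid shift between consecutive occupied slices."""
        centroids = []
        for sl in mask_3d:
            rr, cc = np.nonzero(sl)
            if rr.size:
                centroids.append((rr.mean(), cc.mean()))
        c = np.asarray(centroids)
        return np.linalg.norm(np.diff(c, axis=0), axis=1)

    # Usage (hypothetical threshold): reject the candidate if any shift is large.
    # keep = centroid_shifts(mask).max() < shift_threshold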

  1. THE DEAD-LIVING-MOTHER: MARIE BONAPARTE'S INTERPRETATION OF EDGAR ALLAN POE'S SHORT STORIES.

    PubMed

    Obaid, Francisco Pizarro

    2016-06-01

    Princess Marie Bonaparte is an important figure in the history of psychoanalysis, remembered for her crucial role in arranging Freud's escape to safety in London from Nazi Vienna, in 1938. This paper connects us to Bonaparte's work on Poe's short stories. Founded on concepts of Freudian theory and an exhaustive review of the biographical facts, Marie Bonaparte concluded that the works of Edgar Allan Poe drew their most powerful inspirational force from the psychological consequences of the early death of the poet's mother. In Bonaparte's approach, which was powerfully influenced by her recognition of the impact of the death of her own mother when she was born-an understanding she gained in her analysis with Freud-the thesis of the dead-living-mother achieved the status of a paradigmatic key to analyze and understand Poe's literary legacy. This paper explores the background and support of this hypothesis and reviews Bonaparte's interpretation of Poe's most notable short stories, in which extraordinary female figures feature in the narrative.

  2. Novel SLC16A2 mutations in patients with Allan-Herndon-Dudley syndrome

    PubMed Central

    Shimojima, Keiko; Maruyama, Koichi; Kikuchi, Masahiro; Imai, Ayako; Inoue, Ken; Yamamoto, Toshiyuki

    2016-01-01

    Allan-Herndon-Dudley syndrome (AHDS) is an X-linked disorder caused by an impaired thyroid hormone transporter. Patients with AHDS usually exhibit severe motor developmental delay, delayed myelination of the brain white matter, and elevated T3 levels on thyroid tests. Neurological examination of two patients with neurodevelopmental delay revealed generalized hypotonia, and not paresis, as the main neurological finding. Nystagmus and dyskinesia were not observed. Brain magnetic resonance imaging demonstrated delayed myelination in early childhood in both patients. Nevertheless, mature myelination was observed at 6 years of age in one patient. Although the key finding for AHDS is an elevated free T3 level, one of the patients showed a normal T3 level in childhood, complicating the diagnosis of AHDS. Genetic analysis revealed two novel SLC16A2 mutations, p.(Gly122Val) and p.(Gly221Ser), confirming the AHDS diagnosis. These results indicate that the diagnosis of AHDS is sometimes challenging owing to clinical variability among patients. PMID:27672545

  3. Cosmology without cosmic variance

    DOE PAGES

    Bernstein, Gary M.; Cai, Yan -Chuan

    2011-10-01

    The growth of structures in the Universe is described by a function G that is predicted by the combination of the expansion history of the Universe and the laws of gravity within it. We examine the improvements in constraints on G that are available from the combination of a large-scale galaxy redshift survey with a weak gravitational lensing survey of background sources. We describe a new combination of such observations that in principle yields a measure of the growth rate that is free of sample variance, i.e. the uncertainty in G can be reduced without bound by increasing the number of redshifts obtained within a finite survey volume. The addition of background weak lensing data to a redshift survey increases information on G by an amount equivalent to a 10-fold increase in the volume of a standard redshift-space distortion measurement - if the lensing signal can be measured to sub-per cent accuracy. This argues that a combined lensing and redshift survey over a common low-redshift volume of the Universe is a more powerful test of general relativity than an isolated redshift survey over larger volume at high redshift, especially as surveys begin to cover most of the available sky.

  4. Cosmology without cosmic variance

    SciTech Connect

    Bernstein, Gary M.; Cai, Yan -Chuan

    2011-10-01

    The growth of structures in the Universe is described by a function G that is predicted by the combination of the expansion history of the Universe and the laws of gravity within it. We examine the improvements in constraints on G that are available from the combination of a large-scale galaxy redshift survey with a weak gravitational lensing survey of background sources. We describe a new combination of such observations that in principle yields a measure of the growth rate that is free of sample variance, i.e. the uncertainty in G can be reduced without bound by increasing the number of redshifts obtained within a finite survey volume. The addition of background weak lensing data to a redshift survey increases information on G by an amount equivalent to a 10-fold increase in the volume of a standard redshift-space distortion measurement - if the lensing signal can be measured to sub-per cent accuracy. This argues that a combined lensing and redshift survey over a common low-redshift volume of the Universe is a more powerful test of general relativity than an isolated redshift survey over larger volume at high redshift, especially as surveys begin to cover most of the available sky.

  5. Method of median semi-variance for the analysis of left-censored data: comparison with other techniques using environmental data.

    PubMed

    Zoffoli, Hugo José Oliveira; Varella, Carlos Alberto Alves; do Amaral-Sobrinho, Nelson Moura Brasil; Zonta, Everaldo; Tolón-Becerra, Alfredo

    2013-11-01

    In environmental monitoring, variables with analytically non-detected values are commonly encountered. For the statistical evaluation of such data, most of the methods that deliver a less biased performance require specific computer programs. In this paper, a statistical method based on the median semi-variance (SemiV) is proposed to estimate the position and spread statistics in a dataset with single left-censoring. The performances of the SemiV method and 12 other statistical methods were evaluated using real and complete datasets. The performances of all the methods are influenced by the percentage of censored data. In general, the simple substitution and deletion methods showed biased performance, with the exception of the L/2, Inter and L/√2 methods, which can be used with caution under specific conditions. In general, the SemiV method and the other parametric methods showed similar performances and were less biased than the other methods. The SemiV method is a simple and accurate procedure that can be used in the analysis of datasets with less than 50% left-censored data. PMID:23830887
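
    For reference, the simple substitution baselines mentioned above (L/2 and L/√2) are one-liners; the proposed median semi-variance method itself is not reproduced here.

    import numpy as np

    def substitute_censored(values, detection_limit, method="L/2"):
        """Replace values below the detection limit L with L/2 or L/sqrt(2),
        then return the usual position and spread statistics."""
        sub = {"L/2": detection_limit / 2.0,
               "L/sqrt2": detection_limit / np.sqrt(2.0)}[method]
        v = np.asarray(values, dtype=float).copy()
        v[v < detection_limit] = sub
        return v.mean(), v.std(ddof=1)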

  6. Effect of the viral protease on the dynamics of bacteriophage HK97 maturation intermediates characterized by variance analysis of cryo EM particle ensembles.

    PubMed

    Gong, Yunye; Veesler, David; Doerschuk, Peter C; Johnson, John E

    2016-03-01

    Cryo EM structures of maturation-intermediate Prohead I of bacteriophage HK97 with (PhI(Pro+)) and without (PhI(Pro-)) the viral protease packaged have been reported (Veesler et al., 2014). In spite of PhI(Pro+) containing an additional ∼ 100 × 24 kD of protein, the two structures appeared identical although the two particles have substantially different biochemical properties, e.g., PhI(Pro-) is less stable to disassembly conditions such as urea. Here the same cryo EM images are used to characterize the spatial heterogeneity of the particles at 17Å resolution by variance analysis and show that PhI(Pro-) has roughly twice the standard deviation of PhI(Pro+). Furthermore, the greatest differences in standard deviation are present in the region where the δ-domain, not seen in X-ray crystallographic structures or fully seen in cryo EM, is expected to be located. Thus presence of the protease appears to stabilize the δ-domain which the protease will eventually digest.

  7. Variational Bayesian method of estimating variance components.

    PubMed

    Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi

    2016-07-01

    We developed a Bayesian analysis approach using a variational inference method, a so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed a strong bias toward overestimation of the genetic variance for the variational Bayesian method in the case of low heritability and small population size, and less bias was detected with larger population sizes in both methods examined. No differences in the estimates of the variance components between the variational Bayesian method and Gibbs sampling were found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances from the variational Bayesian method were lower than those from the method using Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with the method using Gibbs sampling.

  8. Variance Components in Discrete Force Production Tasks

    PubMed Central

    SKM, Varadhan; Zatsiorsky, Vladimir M.; Latash, Mark L.

    2010-01-01

    The study addresses the relationships between task parameters and two components of variance, “good” and “bad”, during multi-finger accurate force production. The variance components are defined in the space of commands to the fingers (finger modes) and refer to variance that does (“bad”) and does not (“good”) affect total force. Based on an earlier study of cyclic force production, we hypothesized that speeding-up an accurate force production task would be accompanied by a drop in the regression coefficient linking the “bad” variance and force rate such that variance of the total force remains largely unaffected. We also explored changes in parameters of anticipatory synergy adjustments with speeding-up the task. The subjects produced accurate ramps of total force over different times and in different directions (force-up and force-down) while pressing with the four fingers of the right hand on individual force sensors. The two variance components were quantified, and their normalized difference was used as an index of a total force stabilizing synergy. “Good” variance scaled linearly with force magnitude and did not depend on force rate. “Bad” variance scaled linearly with force rate within each task, and the scaling coefficient did not change across tasks with different ramp times. As a result, a drop in force ramp time was associated with an increase in total force variance, unlike the results of the study of cyclic tasks. The synergy index dropped 100-200 ms prior to the first visible signs of force change. The timing and magnitude of these anticipatory synergy adjustments did not depend on the ramp time. Analysis of the data within an earlier model has shown adjustments in the variance of a timing parameter, although these adjustments were not as pronounced as in the earlier study of cyclic force production. Overall, we observed qualitative differences between the discrete and cyclic force production tasks: Speeding-up the cyclic
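
    A schematic decomposition in the spirit of the analysis described (assumed four-finger data; per-dimension normalization follows the usual convention in this literature): variance orthogonal to the total-force direction is "good", variance along it is "bad".

    import numpy as np

    def good_bad_variance(modes):
        """modes: (n_trials, 4) finger-mode magnitudes for one task condition.
        Splits trial-to-trial variance into a component that leaves total force
        unchanged ('good') and a component that changes it ('bad')."""
        dev = modes - modes.mean(axis=0)
        n = np.ones(4) / 2.0                 # unit vector along equal finger increase
        bad = dev @ n                        # projection that changes total force
        good = dev - np.outer(bad, n)        # remainder, orthogonal to n
        return (good**2).sum(axis=1).mean() / 3.0, (bad**2).mean()

    # A synergy index can then be formed from the normalized difference of the two.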

  10. The final days of Edgar Allan Poe: clues to an old mystery using 21st century medical science.

    PubMed

    Francis, Roger A

    This study examines all documented information regarding the final days and death of Edgar Allan Poe (1809-1849), in an attempt to determine the most likely cause of death of the American poet, short story writer, and literary critic. Information was gathered from letters, newspaper accounts, and magazine articles written during the period after Poe's death, and also from biographies and medical journal articles written up until the present. A chronology of Poe's final days was constructed, and this was used to form a differential diagnosis of possible causes of death. Death theories over the last 160 years were analyzed using this information. This analysis, along with a review of Poe's past medical history, would seem to support an alcohol-related cause of death.

  11. An Efficient and Configurable Preprocessing Algorithm to Improve Stability Analysis.

    PubMed

    Sesia, Ilaria; Cantoni, Elena; Cernigliaro, Alice; Signorile, Giovanna; Fantino, Gianluca; Tavella, Patrizia

    2016-04-01

    The Allan variance (AVAR) is widely used to measure the stability of experimental time series. Specifically, AVAR is commonly used in space applications such as monitoring the clocks of the global navigation satellite systems (GNSSs). In these applications, the experimental data present some peculiar aspects which are not generally encountered when the measurements are carried out in a laboratory. Space clocks' data can in fact present outliers, jumps, and missing values, which corrupt the clock characterization. Therefore, an efficient preprocessing is fundamental to ensure a proper data analysis and improve the stability estimation performed with the AVAR or other similar variances. In this work, we propose a preprocessing algorithm and its implementation in a robust software code (in MATLAB language) able to deal with time series of experimental data affected by nonstationarities and missing data; our method is properly detecting and removing anomalous behaviors, hence making the subsequent stability analysis more reliable. PMID:26540679
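
    The authors' code is in MATLAB and is not reproduced here; the sketch below illustrates the same two-stage idea in Python under simple assumptions: a median/MAD outlier mask (jump detection and imputation are omitted), followed by an overlapping Allan variance that uses only triples of phase samples untouched by gaps.

      import numpy as np

      def mask_outliers(x, nsigma=5.0):
          """Flag gross outliers against a robust centre (median/MAD) as NaN."""
          x = np.asarray(x, float).copy()
          med = np.nanmedian(x)
          mad = 1.4826 * np.nanmedian(np.abs(x - med))
          x[np.abs(x - med) > nsigma * mad] = np.nan
          return x

      def avar_with_gaps(phase, tau0, m):
          """Overlapping Allan variance at tau = m*tau0 from phase data with NaN
          gaps: AVAR = <(x[i+2m] - 2 x[i+m] + x[i])^2> / (2 tau^2), using only
          triples in which all three samples are present."""
          x = np.asarray(phase, float)
          d2 = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]
          d2 = d2[np.isfinite(d2)]               # drop triples touched by gaps
          return (d2 @ d2) / (2.0 * d2.size * (m * tau0) ** 2)

      # usage: clean first, then estimate stability at a few averaging times
      x = mask_outliers(np.cumsum(np.random.default_rng(2).normal(0, 1e-9, 100000)))
      for m in (1, 10, 100):
          print(m, np.sqrt(avar_with_gaps(x, tau0=1.0, m=m)))   # Allan deviation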

  13. Allan Bloom, Mike Rose, and Paul Goodman: In Search of a Lost Pedagogical Synthesis.

    ERIC Educational Resources Information Center

    Smith, Jeff

    1993-01-01

    Discusses and compares two recent books on American higher education: "The Closing of the American Mind" by Allan Bloom, and "Lives on the Boundary" by Mike Rose. Develops a view which synthesizes those of Bloom and Rose. Considers this view as comparable to that of Paul Goodman. (HB)

  14. Observation, Inference, and Imagination: Elements of Edgar Allan Poe's Philosophy of Science

    ERIC Educational Resources Information Center

    Gelfert, Axel

    2014-01-01

    Edgar Allan Poe's standing as a literary figure, who drew on (and sometimes dabbled in) the scientific debates of his time, makes him an intriguing character for any exploration of the historical interrelationship between science, literature and philosophy. His sprawling "prose-poem" "Eureka" (1848), in particular, has…

  15. Where Were the Whistleblowers? The Case of Allan McDonald and Roger Boisjoly.

    ERIC Educational Resources Information Center

    Stewart, Lea P.

    Employees who "blow the whistle" on their company because they believe it is engaged in practices that are illegal, immoral, or harmful to the public, often face grave consequences for their actions, including demotion, harassment, forced resignation, or termination. The case of Allan McDonald and Roger Boisjoly, engineers who blew the whistle on…

  16. Horror from the Soul--Gothic Style in Allan Poe's Horror Fictions

    ERIC Educational Resources Information Center

    Sun, Chunyan

    2015-01-01

    Edgar Allan Poe made a tremendous contribution to horror fiction. Poe's inheritance of gothic fiction and American literature tradition combined with his living experience forms the background of his horror fictions. He inherited the tradition of the gothic fictions and made innovations on it, so as to penetrate to subconsciousness. Poe's horror…

  17. European Studies as Answer to Allan Bloom's "The Closing of the American Mind."

    ERIC Educational Resources Information Center

    Macdonald, Michael H.

    European studies can provide a solution to several of the issues raised in Allan Bloom's "The Closing of the American Mind." European studies pursue the academic quest for what is truth, what is goodness, and what is beauty. In seeking to answer these questions, the Greeks were among the first to explore many of humanity's problems and their…

  18. Allan M. Freedman, LLB: a lawyer’s gift to Canadian chiropractors

    PubMed Central

    Brown, Douglas M.

    2007-01-01

    This paper reviews the leadership role, contributions, accolades, and impact of Professor Allan Freedman through a 30 year history of service to CMCC and the chiropractic profession in Canada. Professor Freedman has served as an educator, philanthropist and also as legal counsel. His influence on chiropractic organizations and chiropractors during this significant period in the profession is discussed. PMID:18060008

  19. The Art of George Morrison and Allan Houser: The Development and Impact of Native Modernism

    ERIC Educational Resources Information Center

    Montiel, Anya

    2005-01-01

    The idea for a retrospective on George Morrison and Allan Houser as one of the inaugural exhibitions at the National Museum of the American Indian (NMAI) came from the NMAI curator of contemporary art, Truman Lowe. An artist and sculptor himself, Lowe knew both artists personally and saw them as mentors and visionaries. Lowe advised an exhibition…

  20. An Interview with Allan Wigfield: A Giant on Research on Expectancy Value, Motivation, and Reading Achievement

    ERIC Educational Resources Information Center

    Bembenutty, Hefer

    2012-01-01

    This article presents an interview with Allan Wigfield, professor and chair of the Department of Human Development and distinguished scholar-teacher at the University of Maryland. He has authored more than 100 peer-reviewed journal articles and book chapters on children's motivation and other topics. He is a fellow of Division 15 (Educational…

  1. Estimation of velocity uncertainties from GPS time series: Examples from the analysis of the South African TrigNet network

    NASA Astrophysics Data System (ADS)

    Hackl, M.; Malservisi, R.; Hugentobler, U.; Wonnacott, R.

    2011-11-01

    We present a method to derive velocity uncertainties from GPS position time series that are affected by time-correlated noise. This method is based on the Allan variance, which is widely used in the estimation of oscillator stability and requires neither spectral analysis nor maximum likelihood estimation (MLE). The Allan variance of the rate (AVR) is calculated in the time domain and hence is not too sensitive to gaps in the time series. We derived analytical expressions of the AVR for different kinds of noise, such as power-law noise, white noise, flicker noise, and random walk, and found an expression for the variance produced by an annual signal. These functional relations form the basis of error models that have to be fitted to the AVR in order to estimate the velocity uncertainty. Finally, we applied the method to the South African GPS network TrigNet. Most time series show noise characteristics that can be modeled by a power law noise plus an annual signal. The method is computationally very cheap, and the results are in good agreement with the ones obtained by methods based on MLE.
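
    A minimal sketch of the Allan variance of the rate (AVR) computed in the time domain, assuming evenly spaced positions and endpoint-difference rate estimates over adjacent windows; fitting the power-law and annual-signal error models to the resulting curve, as the paper does, is omitted.

      import numpy as np

      def avr(pos, dt, m):
          """Allan variance of the rate at tau = m*dt: rates are estimated over
          adjacent non-overlapping windows of m samples and differenced pairwise."""
          x = np.asarray(pos, float)
          n = (x.size - 1) // m
          ends = x[: n * m + 1 : m]              # window endpoints
          rates = np.diff(ends) / (m * dt)       # one rate per window
          dr = np.diff(rates)
          return 0.5 * np.mean(dr * dr)

      # usage on a synthetic daily position series (placeholder, not TrigNet data)
      rng = np.random.default_rng(3)
      days = np.arange(3650)
      series = 0.01 * days + rng.normal(0, 2.0, days.size)  # mm: trend + white noise
      for m in (10, 30, 100, 365):
          print(m, avr(series, dt=1.0, m=m))     # mm^2/day^2 vs averaging time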

  2. [Cointegration test and variance decomposition for the relationship between economy and environment based on material flow analysis in Tangshan City Hebei China].

    PubMed

    2015-12-01

    The material flow account of Tangshan City was established by material flow analysis (MFA) method to analyze the periodical characteristics of material input and output in the operation of economy-environment system, and the impact of material input and output intensities on economic development. Using econometric model, the long-term interaction mechanism and relationship among the indexes of gross domestic product (GDP), direct material input (DMI), domestic processed output (DPO) were investigated after unit root hypothesis test, Johansen cointegration test, vector error correction model, impulse response function and variance decomposition. The results showed that during 1992-2011, DMI and DPO both increased, and the growth rate of DMI was higher than that of DPO. The input intensity of DMI increased, while the intensity of DPO fell in volatility. Long-term stable cointegration relationship existed between GDP, DMI and DPO. Their interaction relationship showed a trend from fluctuation to gradual steadiness. DMI and DPO had strong, positive impacts on economic development in short-term, but the economy-environment system gradually weakened these effects by short-term dynamically adjusting indicators inside and outside of the system. Ultimately, the system showed a long-term equilibrium relationship. The effect of economic scale on economy was gradually increasing. After decomposing the contribution of each index to GDP, it was found that DMI's contribution grew, GDP's contribution declined, DPO's contribution changed little. On the whole, the economic development of Tangshan City has followed the traditional production path of resource-based city, mostly depending on the material input which caused high energy consumption and serious environmental pollution.
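
    The Johansen/VECM machinery used above is involved; as a lighter illustration of the same cointegration logic, here is a two-step Engle-Granger check in Python on synthetic placeholder series. statsmodels' adfuller supplies the unit-root test; strictly, Engle-Granger critical values should be used for the residual test.

      import numpy as np
      from statsmodels.tsa.stattools import adfuller

      rng = np.random.default_rng(4)
      n = 200
      dmi = np.cumsum(rng.normal(0.5, 1.0, n))   # I(1) "material input" stand-in
      gdp = 1.8 * dmi + rng.normal(0, 1.0, n)    # shares DMI's stochastic trend

      # step 1: levels regression of GDP on DMI
      slope, intercept = np.polyfit(dmi, gdp, 1)
      resid = gdp - (intercept + slope * dmi)

      # step 2: unit-root test on the residuals; stationary residuals
      # (small p-value) indicate a long-run equilibrium between the series
      print(f"ADF p-value, GDP levels: {adfuller(gdp)[1]:.3f}")   # non-stationary
      print(f"ADF p-value, residuals:  {adfuller(resid)[1]:.3f}") # cointegrated if small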

  4. Formation constants of copper(II) complexes with tripeptides containing Glu, Gly, and His: potentiometric measurements and modeling by generalized multiplicative analysis of variance.

    PubMed

    Khoury, Rima Raffoul; Sutton, Gordon J; Ebrahimi, Diako; Hibbert, D Brynn

    2014-02-01

    We report a systematic study of the effects of types and positions of amino acid residues of tripeptides on the formation constants logβ, acid dissociation constants pKa, and the copper coordination modes of the copper(II) complexes with 27 tripeptides formed from the amino acids glutamic acid, glycine, and histidine. logβ values were calculated from pH titrations with 1 mmol L(-1):1 mmol L(-1) solutions of the metal and ligand and previously reported ligand pKa values. Generalized multiplicative analysis of variance (GEMANOVA) was used to model the logβ values of the saturated, most protonated, monoprotonated, logβ(CuL) - logβ(HL), and pKa of the amide group. The resulting model of the saturated copper species has two terms: an interaction between the central and the C-terminal residues plus a smaller main effect of the N-terminal residue. The model supports the conclusion that two copper coordination modes exist depending on the absence or presence of His at the central position, giving species in which copper is coordinated via two or three fused chelate rings, respectively. The GEMANOVA model for pKamide, which is the same as that for the saturated complex, showed that Gly-Gly-His has the lowest pKamide values among the 27 tripeptides. Visible spectroscopy indicated the formation of metal-ligand dimers for tripeptides His-His-Gly and His-His-Glu, but not for His-His-His, and the formation of multiple ligand bis complexes CuL2 and Cu(HL)2 for tripeptides (Glu/Gly)-His-(Glu/Gly) and His-(Glu/Gly)-(Glu/Gly), respectively.

  5. Sampling Errors of Variance Components.

    ERIC Educational Resources Information Center

    Sanders, Piet F.

    A study on sampling errors of variance components was conducted within the framework of generalizability theory by P. L. Smith (1978). The study used an intuitive approach for solving the problem of how to allocate the number of conditions to different facets in order to produce the most stable estimate of the universe score variance. Optimization…

  6. Least-Squares Analysis of Phosphorus Soil Sorption Data with Weighting from Variance Function Estimation: A Statistical Case for the Freundlich Isotherm

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Phosphorus sorption data for soil of the Pembroke classification are recorded at high replication — 10 experiments at each of 7 initial concentrations — for characterizing the data error structure through variance function estimation. The results permit the assignment of reliable weights for the su...

  7. Characterizing nonconstant instrumental variance in emerging miniaturized analytical techniques.

    PubMed

    Noblitt, Scott D; Berg, Kathleen E; Cate, David M; Henry, Charles S

    2016-04-01

    Measurement variance is a crucial aspect of quantitative chemical analysis. Variance directly affects important analytical figures of merit, including detection limit, quantitation limit, and confidence intervals. Most reported analyses for emerging analytical techniques implicitly assume constant variance (homoskedasticity) by using unweighted regression calibrations. Despite the assumption of constant variance, it is known that most instruments exhibit heteroskedasticity, where variance changes with signal intensity. Ignoring nonconstant variance results in suboptimal calibrations, invalid uncertainty estimates, and incorrect detection limits. Three techniques where homoskedasticity is often assumed were covered in this work to evaluate if heteroskedasticity had a significant quantitative impact: naked-eye, distance-based detection using paper-based analytical devices (PADs), cathodic stripping voltammetry (CSV) with disposable carbon-ink electrode devices, and microchip electrophoresis (MCE) with conductivity detection. Despite these techniques representing a wide range of chemistries and precision, heteroskedastic behavior was confirmed for each. The general variance forms were analyzed, and recommendations for accounting for nonconstant variance discussed. Monte Carlo simulations of instrument responses were performed to quantify the benefits of weighted regression, and the sensitivity to uncertainty in the variance function was tested. Results show that heteroskedasticity should be considered during development of new techniques; even moderate uncertainty (30%) in the variance function still results in weighted regression outperforming unweighted regressions. We recommend utilizing the power model of variance because it is easy to apply, requires little additional experimentation, and produces higher-precision results and more reliable uncertainty estimates than assuming homoskedasticity. PMID:26995641
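
    A minimal sketch of the recommended workflow, assuming replicate responses at each calibration level: estimate the power model var = a*mean^b on log-log axes, then use 1/var-hat weights in the calibration regression. The data and instrument response model below are synthetic placeholders.

      import numpy as np

      def wls_line(x, y, w):
          """Weighted least squares for y = b0 + b1*x with weights w (1/variance)."""
          X = np.column_stack([np.ones_like(x), x])
          W = np.diag(w)
          return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # [intercept, slope]

      # replicate responses at each standard concentration (synthetic)
      rng = np.random.default_rng(5)
      conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
      reps = np.array([rng.normal(3.0 * c, 0.05 * (3.0 * c) ** 0.9, 10) for c in conc])
      means, variances = reps.mean(axis=1), reps.var(axis=1, ddof=1)

      # power model of variance: log(var) = log(a) + b*log(mean)
      b_pow, log_a = np.polyfit(np.log(means), np.log(variances), 1)
      var_hat = np.exp(log_a) * means ** b_pow

      beta = wls_line(conc, means, 1.0 / var_hat)   # weighted calibration
      print("variance exponent b ~ %.2f, calibration slope ~ %.3f" % (b_pow, beta[1]))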

  9. Mutations in MCT8 in patients with Allan-Herndon-Dudley-syndrome affecting its cellular distribution.

    PubMed

    Kersseboom, Simone; Kremers, Gert-Jan; Friesema, Edith C H; Visser, W Edward; Klootwijk, Wim; Peeters, Robin P; Visser, Theo J

    2013-05-01

    Monocarboxylate transporter 8 (MCT8) is a thyroid hormone (TH)-specific transporter. Mutations in the MCT8 gene are associated with Allan-Herndon-Dudley Syndrome (AHDS), consisting of severe psychomotor retardation and disturbed TH parameters. To study the functional consequences of different MCT8 mutations in detail, we combined functional analysis in different cell types with live-cell imaging of the cellular distribution of seven mutations that we identified in patients with AHDS. We used two cell models to study the mutations in vitro: 1) transiently transfected COS1 and JEG3 cells, and 2) stably transfected Flp-in 293 cells expressing a MCT8-cyan fluorescent protein construct. All seven mutants were expressed at the protein level and showed a defect in T3 and T4 transport in uptake and metabolism studies. Three mutants (G282C, P537L, and G558D) had residual uptake activity in Flp-in 293 and COS1 cells, but not in JEG3 cells. Four mutants (G221R, P321L, D453V, P537L) were expressed at the plasma membrane. The mobility in the plasma membrane of P537L was similar to WT, but the mobility of P321L was altered. The other mutants studied (insV236, G282C, G558D) were predominantly localized in the endoplasmic reticulum. In essence, loss of function by MCT8 mutations can be divided into two groups: mutations that result in partial or complete loss of transport activity (G221R, P321L, D453V, P537L) and mutations that mainly disturb protein expression and trafficking (insV236, G282C, G558D). The cell type-dependent results suggest that MCT8 mutations in AHDS patients may have tissue-specific effects on TH transport probably caused by tissue-specific expression of yet unknown MCT8-interacting proteins. PMID:23550058

  10. Analysis of speech-related variance in rapid event-related fMRI using a time-aware acquisition system.

    PubMed

    Mehta, S; Grabowski, T J; Razavi, M; Eaton, B; Bolinger, L

    2006-02-15

    Speech production introduces signal changes in fMRI data that can mimic or mask the task-induced BOLD response. Rapid event-related designs with variable ISIs address these concerns by minimizing the correlation of task and speech-related signal changes without sacrificing efficiency; however, the increase in residual variance due to speech still decreases statistical power and must be explicitly addressed primarily through post-processing techniques. We investigated the timing, magnitude, and location of speech-related variance in an overt picture naming fMRI study with a rapid event-related design, using a data acquisition system that time-stamped image acquisitions, speech, and a pneumatic belt signal on the same clock. Using a spectral subtraction algorithm to remove scanner gradient noise from recorded speech, we related the timing of speech, stimulus presentation, chest wall movement, and image acquisition. We explored the relationship of an extended speech event time course and respiration on signal variance by performing a series of voxelwise regression analyses. Our results demonstrate that these effects are spatially heterogeneous, but their anatomic locations converge across subjects. Affected locations included basal areas (orbitofrontal, mesial temporal, brainstem), areas adjacent to CSF spaces, and lateral frontal areas. If left unmodeled, speech-related variance can result in regional detection bias that affects some areas critically implicated in language function. The results establish the feasibility of detecting and mitigating speech-related variance in rapid event-related fMRI experiments with single word utterances. They further demonstrate the utility of precise timing information about speech and respiration for this purpose. PMID:16412665
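
    A minimal sketch of a voxelwise nuisance-variance analysis in the spirit described above: compare each voxel's R² with and without speech/respiration regressors and attribute the difference to speech-related variance. The regressors, dimensions, and data are synthetic placeholders, not the study's design.

      import numpy as np

      def r2(X, y):
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          resid = y - X @ beta
          return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

      rng = np.random.default_rng(12)
      n_t, n_vox = 240, 1000
      task = rng.normal(size=(n_t, 2))           # task regressors (placeholder)
      speech = rng.normal(size=(n_t, 3))         # speech/respiration regressors
      ones = np.ones((n_t, 1))
      # only ~20% of voxels carry speech-related variance in this toy data
      Y = (task @ rng.normal(size=(2, n_vox))
           + speech @ (rng.normal(size=(3, n_vox)) * (rng.random(n_vox) < 0.2))
           + rng.normal(size=(n_t, n_vox)))

      X0 = np.hstack([ones, task])               # task-only model
      X1 = np.hstack([ones, task, speech])       # task + nuisance model
      extra = np.array([r2(X1, Y[:, v]) - r2(X0, Y[:, v]) for v in range(n_vox)])
      print("voxels with >5% speech-related variance:", (extra > 0.05).sum())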

  11. Further Insights into the Allan-Herndon-Dudley Syndrome: Clinical and Functional Characterization of a Novel MCT8 Mutation

    PubMed Central

    Yoon, Grace; Visser, Theo J.

    2015-01-01

    Background Mutations in the thyroid hormone (TH) transporter MCT8 have been identified as the cause for Allan-Herndon-Dudley Syndrome (AHDS), characterized by severe psychomotor retardation and altered TH serum levels. Here we report a novel MCT8 mutation identified in 4 generations of one family, and its functional characterization. Methods Proband and family members were screened for 60 genes involved in X-linked cognitive impairment and the MCT8 mutation was confirmed. Functional consequences of MCT8 mutations were studied by analysis of [125I]TH transport in fibroblasts and transiently transfected JEG3 and COS1 cells, and by subcellular localization of the transporter. Results The proband and a male cousin demonstrated clinical findings characteristic of AHDS. Serum analysis showed high T3, low rT3, and normal T4 and TSH levels in the proband. A MCT8 mutation (c.869C>T; p.S290F) was identified in the proband, his cousin, and several female carriers. Functional analysis of the S290F mutant showed decreased TH transport, metabolism and protein expression in the three cell types, whereas the S290A mutation had no effect. Interestingly, both uptake and efflux of T3 and T4 was impaired in fibroblasts of the proband, compared to his healthy brother. However, no effect of the S290F mutation was observed on TH efflux from COS1 and JEG3 cells. Immunocytochemistry showed plasma membrane localization of wild-type MCT8 and the S290A and S290F mutants in JEG3 cells. Conclusions We describe a novel MCT8 mutation (S290F) in 4 generations of a family with Allan-Herndon-Dudley Syndrome. Functional analysis demonstrates loss-of-function of the MCT8 transporter. Furthermore, our results indicate that the function of the S290F mutant is dependent on cell context. Comparison of the S290F and S290A mutants indicates that it is not the loss of Ser but its substitution with Phe, which leads to S290F dysfunction. PMID:26426690

  12. Assessment of analysis-of-variance-based methods to quantify the random variations of observers in medical imaging measurements: guidelines to the investigator.

    PubMed

    Zeggelink, William F A Klein; Hart, Augustinus A M; Gilhuijs, Kenneth G A

    2004-07-01

    The random variations of observers in medical imaging measurements negatively affect the outcome of cancer treatment, and should be taken into account during treatment by the application of safety margins that are derived from estimates of the random variations. Analysis-of-variance- (ANOVA-) based methods are the most preferable techniques to assess the true individual random variations of observers, but the number of observers and the number of cases must be taken into account to achieve meaningful results. Our aim in this study is twofold. First, to evaluate three representative ANOVA-based methods for typical numbers of observers and typical numbers of cases. Second, to establish guidelines to the investigator to determine which method, how many observers, and which number of cases are required to obtain the a priori chosen performance. The ANOVA-based methods evaluated in this study are an established technique (pairwise differences method: PWD), a new approach providing additional statistics (residuals method: RES), and a generic technique that uses restricted maximum likelihood (REML) estimation. Monte Carlo simulations were performed to assess the performance of the ANOVA-based methods, which is expressed by their accuracy (closeness of the estimates to the truth), their precision (standard error of the estimates), and the reliability of their statistical test for the significance of a difference in the random variation of an observer between two groups of cases. The highest accuracy is achieved using REML estimation, but for datasets of at least 50 cases or arrangements with 6 or more observers, the differences between the methods are negligible, with deviations from the truth well below +/-3%. For datasets up to 100 cases, it is most beneficial to increase the number of cases to improve the precision of the estimated random variations, whereas for datasets over 100 cases, an improvement in precision is most efficiently achieved by increasing the number of observers.
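
    A minimal sketch of the pairwise-differences (PWD) idea for three observers scoring the same cases: the variance of each pair's differences equals the sum of the two observers' individual random variances (assuming independent errors), so the three pairwise variances determine the three individual components. Data are synthetic.

      import numpy as np

      def observer_variances(measure):
          """measure: (n_cases, 3) array, same cases scored by observers A, B, C.
          Var(A-B) = sA^2 + sB^2, and likewise for the other pairs; solve the
          three equations for the individual variance components."""
          d_ab = measure[:, 0] - measure[:, 1]
          d_ac = measure[:, 0] - measure[:, 2]
          d_bc = measure[:, 1] - measure[:, 2]
          v_ab, v_ac, v_bc = (d.var(ddof=1) for d in (d_ab, d_ac, d_bc))
          sA2 = 0.5 * (v_ab + v_ac - v_bc)
          sB2 = 0.5 * (v_ab + v_bc - v_ac)
          sC2 = 0.5 * (v_ac + v_bc - v_ab)
          return sA2, sB2, sC2

      # synthetic check: true observer SDs 1.0, 2.0, 0.5 around shared case values
      rng = np.random.default_rng(6)
      truth = rng.normal(50, 10, size=(500, 1))
      obs = truth + rng.normal(0, [1.0, 2.0, 0.5], size=(500, 3))
      print(observer_variances(obs))   # approx (1.0, 4.0, 0.25)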

  13. Cosmic-ray-produced Cl-36 and Mn-53 in Allan Hills-77 meteorites

    NASA Technical Reports Server (NTRS)

    Nishiizumi, K.; Murrell, M. T.; Arnold, J. R.; Finkel, R. C.; Elmore, D.; Ferraro, R. D.; Gove, H. E.

    1981-01-01

    Cosmic-ray-produced Mn-53 has been determined by neutron activation in nine Allan Hills-77 meteorites. Additionally, Cl-36 has been measured in seven of these objects using tandem accelerator mass spectrometry. These results, along with C-14 and Al-26 concentrations determined elsewhere, yield terrestrial ages ranging from 10,000 to 700,000 years. Weathering was not found to result in Mn-53 loss.

  14. Measurements of Ultra-Stable Oscillator (USO) Allan Deviations in Space

    NASA Technical Reports Server (NTRS)

    Enzer, Daphna G.; Klipstein, William M.; Wang, Rabi T.; Dunn, Charles E.

    2013-01-01

    Researchers have used data from the GRAIL mission to the Moon to make the first in-flight verification of ultra-stable oscillators (USOs) with Allan deviation below 10(exp -13) for 1-to-100-second averaging times. USOs are flown in space to provide stable timing and/or navigation signals for a variety of different science and programmatic missions. The Gravity Recovery and Interior Laboratory (GRAIL) mission is flying twin spacecraft, each with its own USO and with a Ka-band crosslink used to measure range fluctuations. Data from this crosslink can be combined in such a way as to give the relative time offsets of the two spacecraft's USOs and to calculate the Allan deviation to describe the USOs' combined performance while orbiting the Moon. Researchers find the first direct in-space Allan deviations below 10(exp -13) for 1-to-100-second averaging times, comparable to pre-launch data, and better than measurements from ground tracking of an X-band carrier coherent with the USO. Fluctuations in Earth's atmosphere limit measurement performance in direct-to-Earth links. In-flight USO performance verification was also performed for GRAIL's parent mission, the Gravity Recovery and Climate Experiment (GRACE), using both K-band and Ka-band crosslinks.

  15. Systems Engineering Programmatic Estimation Using Technology Variance

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.

    2000-01-01

    Unique and innovative system programmatic estimation is conducted using the variance of the packaged technologies. Covariance analysis is performed on the subsystems and components comprising the system of interest. Technological "return" and "variation" parameters are estimated. These parameters are combined with the model error to arrive at a measure of system development stability. The resulting estimates provide valuable information concerning the potential cost growth of the system under development.

  17. Hypothesis exploration with visualization of variance

    PubMed Central

    2014-01-01

    Background The Consortium for Neuropsychiatric Phenomics (CNP) at UCLA was an investigation into the biological bases of traits such as memory and response-inhibition phenotypes, exploring whether they are linked to syndromes including ADHD, bipolar disorder, and schizophrenia. An aim of the consortium was to move from traditional categorical approaches for psychiatric syndromes towards more quantitative approaches based on large-scale analysis of the space of human variation. It represented an application of phenomics—wide-scale, systematic study of phenotypes—to neuropsychiatry research. Results This paper reports on a system for exploration of hypotheses in data obtained from the LA2K, LA3C, and LA5C studies in CNP. ViVA is a system for exploratory data analysis using novel mathematical models and methods for visualization of variance. An example of these methods is called VISOVA, a combination of visualization and analysis of variance, with the flavor of exploration associated with ANOVA in biomedical hypothesis generation. It permits visual identification of phenotype profiles—patterns of values across phenotypes—that characterize groups. Visualization enables screening and refinement of hypotheses about variance structure of sets of phenotypes. Conclusions The ViVA system was designed for exploration of neuropsychiatric hypotheses by interdisciplinary teams. Automated visualization in ViVA supports ‘natural selection’ on a pool of hypotheses, and permits deeper understanding of the statistical architecture of the data. Large-scale perspective of this kind could lead to better neuropsychiatric diagnostics. PMID:25097666

  18. Network Structure and Biased Variance Estimation in Respondent Driven Sampling

    PubMed Central

    Verdery, Ashton M.; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J.

    2015-01-01

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network. PMID:26679927

  1. The Variance Reaction Time Model

    ERIC Educational Resources Information Center

    Sikstrom, Sverker

    2004-01-01

    The variance reaction time model (VRTM) is proposed to account for various recognition data on reaction time, the mirror effect, receiver-operating-characteristic (ROC) curves, etc. The model is based on simple and plausible assumptions within a neural network: VRTM is a two-layer neural network where one layer represents items and one layer…

  2. Myelination Delay and Allan-Herndon-Dudley Syndrome Caused by a Novel Mutation in the SLC16A2 Gene.

    PubMed

    La Piana, Roberta; Vanasse, Michel; Brais, Bernard; Bernard, Genevieve

    2015-09-01

    Allan-Herndon-Dudley syndrome is an X-linked disease caused by mutations in the solute carrier family 16 member 2 (SLC16A2) gene. As SLC16A2 encodes the monocarboxylate transporter 8 (MCT8), a thyroid hormone transporter, patients with Allan-Herndon-Dudley syndrome present a characteristically altered thyroid hormone profile. Allan-Herndon-Dudley syndrome has been associated with myelination delay on the brain magnetic resonance imaging (MRI) of affected subjects. We report a patient with Allan-Herndon-Dudley syndrome characterized by developmental delay, hypotonia, and delayed myelination caused by a novel SLC16A2 mutation (p.L291R). The thyroid hormone profile in our patient was atypical for Allan-Herndon-Dudley syndrome. The follow-up examinations showed that the progression of the myelination was not accompanied by a clinical improvement. Our paper suggests that SLC16A2 mutations should be investigated in patients with myelination delay even when thyroid function is not conclusively altered.

  3. Variance decomposition in stochastic simulators

    NASA Astrophysics Data System (ADS)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
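
    A minimal sketch of the pick-and-freeze idea under this reformulation, for a birth-death process: each reaction channel consumes its own unit-rate exponential stream (a modified next-reaction scheme), so freezing one stream while resampling the other isolates that channel's first-order variance contribution. Rates, horizon, and sample size are illustrative, and the small sample keeps estimates noisy.

      import numpy as np

      def birth_death(seed_birth, seed_death, x0=10, c1=2.0, c2=0.1, t_end=10.0):
          """Birth-death SSA where channel k draws only from its own RNG stream,
          enabling per-channel control of the randomness."""
          rngs = [np.random.default_rng(seed_birth), np.random.default_rng(seed_death)]
          T = np.zeros(2)                              # internal (unit-rate) clocks
          P = np.array([rngs[0].exponential(), rngs[1].exponential()])
          x, t = x0, 0.0
          while True:
              a = np.array([c1, c2 * x])               # propensities
              dt = np.full(2, np.inf)
              ok = a > 0.0
              dt[ok] = (P[ok] - T[ok]) / a[ok]
              k = int(np.argmin(dt))
              if t + dt[k] > t_end:
                  return x                             # state at the horizon
              t += dt[k]
              T += a * dt[k]                           # advance internal clocks
              P[k] += rngs[k].exponential()            # next firing of channel k
              x += 1 if k == 0 else -1

      # pick-and-freeze estimate of each channel's first-order variance share
      rng = np.random.default_rng(7)
      N = 400
      seeds = rng.integers(0, 2**31, size=(N, 4))      # independent stream seeds
      y   = np.array([birth_death(s[0], s[1]) for s in seeds], float)
      y_b = np.array([birth_death(s[0], s[2]) for s in seeds], float)  # death resampled
      y_d = np.array([birth_death(s[3], s[1]) for s in seeds], float)  # birth resampled
      var = y.var()
      S_birth = (np.mean(y * y_b) - y.mean() * y_b.mean()) / var
      S_death = (np.mean(y * y_d) - y.mean() * y_d.mean()) / var
      print(S_birth, S_death)   # first-order shares; the remainder is interaction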

  6. Neutrino mass without cosmic variance

    NASA Astrophysics Data System (ADS)

    LoVerde, Marilena

    2016-05-01

    Measuring the absolute scale of the neutrino masses is one of the most exciting opportunities available with near-term cosmological data sets. Two quantities that are sensitive to neutrino mass, scale-dependent halo bias b(k) and the linear growth parameter f(k) inferred from redshift-space distortions, can be measured without cosmic variance. Unlike the amplitude of the matter power spectrum, which always has a finite error, the error on b(k) and f(k) continues to decrease as the number density of tracers increases. This paper presents forecasts for statistics of galaxy and lensing fields that are sensitive to neutrino mass via b(k) and f(k). The constraints on neutrino mass from the auto- and cross-power spectra of spectroscopic and photometric galaxy samples are weakened by scale-dependent bias unless a very high density of tracers is available. In the high-density limit, using multiple tracers allows cosmic variance to be beaten, and the forecasted errors on neutrino mass shrink dramatically. In practice, beating the cosmic-variance errors on neutrino mass with b(k) will be a challenge, but this signal is nevertheless a new probe of neutrino effects on structure formation that is interesting in its own right.

  7. Robust Techniques for Testing Heterogeneity of Variance Effects in Factorial Designs.

    ERIC Educational Resources Information Center

    O'Brien, Ralph G.

    1978-01-01

    Several ways of using traditional analysis of variance to test the homogeneity of variance in factorial designs with equal or unequal cell sizes are compared using theoretical and Monte Carlo results. (Author/JKS)
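
    The paper's specific procedures are not reproduced here; as one widely used test in the same family, the sketch below computes a from-scratch Brown-Forsythe (median-centered Levene) statistic for a one-way layout with unequal cell sizes, which is simply a standard ANOVA on absolute deviations from group medians.

      import numpy as np
      from scipy.stats import f as f_dist

      def brown_forsythe(*groups):
          """Levene-type test on absolute deviations from each group's median;
          returns the one-way ANOVA F statistic and p-value on those deviations."""
          z = [np.abs(np.asarray(g, float) - np.median(g)) for g in groups]
          k = len(z)
          n = sum(len(g) for g in z)
          grand = np.concatenate(z).mean()
          ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in z)
          ss_within = sum(((g - g.mean()) ** 2).sum() for g in z)
          F = (ss_between / (k - 1)) / (ss_within / (n - k))
          return F, f_dist.sf(F, k - 1, n - k)

      # toy usage: third group has inflated variance; cell sizes are unequal
      rng = np.random.default_rng(8)
      a, b, c = rng.normal(0, 1, 30), rng.normal(0, 1, 25), rng.normal(0, 3, 20)
      print(brown_forsythe(a, b, c))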

  8. Matrix Differencing as a Concise Expression of Test Variance: A Computer Implementation.

    ERIC Educational Resources Information Center

    Krus, David J.; Wilkinson, Sue Marie

    1986-01-01

    Matrix differencing of data vectors is introduced as a method for computing test variance and is compared to traditional analysis of variance. Applications for computer assisted instruction, provided by supplemental computer software, are also described. (Author/GDC)
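
    The identity behind matrix differencing can be checked in a few lines: the average squared pairwise difference equals twice the population variance, so the variance falls out of a difference matrix without first computing a mean. A minimal numpy check:

      import numpy as np

      x = np.random.default_rng(9).normal(10.0, 2.0, 500)
      D = x[:, None] - x[None, :]                       # all pairwise differences
      var_from_D = (D ** 2).sum() / (2 * x.size ** 2)   # equals population variance
      print(var_from_D, x.var())                        # the two values agree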

  9. Noble gases in twenty Yamato H-chondrites: Comparison with Allan Hills chondrites and modern falls

    NASA Technical Reports Server (NTRS)

    Loeken, TH.; Scherer, P.; Schultz, L.

    1993-01-01

    Concentration and isotopic composition of noble gases have been measured in 20 H-chondrites found on the Yamato Mountains ice fields in Antarctica. The distribution of exposure ages as well as of radiogenic He-4 contents is similar to that of H-chondrites collected at the Allan Hills site. Furthermore, a comparison of the noble gas record of Antarctic H-chondrites and finds or falls from non-Antarctic areas gives no support to the suggestion that Antarctic H-chondrites and modern falls derive from differing interplanetary meteorite populations.

  10. Allan C. Gotlib, DC, CM: A worthy Member of the Order of Canada

    PubMed Central

    Brown, Douglas M.

    2016-01-01

    On June 29, 2012, His Excellency the Right Honourable David Johnston, Governor General of Canada, announced 70 new appointments to the Order of Canada. Among them was Dr. Allan Gotlib, who was subsequently installed as a Member of the Order of Canada, in recognition of his contributions to advancing research in the chiropractic profession and its inter-professional integration. This paper attempts an objective view of his career, to substantiate the accomplishments that led to Dr. Gotlib receiving Canada’s highest civilian honour. PMID:27069273

  11. Mesoscale Gravity Wave Variances from AMSU-A Radiances

    NASA Technical Reports Server (NTRS)

    Wu, Dong L.

    2004-01-01

    A variance analysis technique is developed here to extract gravity wave (GW) induced temperature fluctuations from NOAA AMSU-A (Advanced Microwave Sounding Unit-A) radiance measurements. By carefully removing the instrument/measurement noise, the algorithm can produce reliable GW variances with the minimum detectable value as small as 0.1 K². Preliminary analyses with AMSU-A data show GW variance maps in the stratosphere have very similar distributions to those found with the UARS MLS (Upper Atmosphere Research Satellite Microwave Limb Sounder). However, the AMSU-A offers better horizontal and temporal resolution for observing regional GW variability, such as activity over sub-Antarctic islands.

  12. The natural thermoluminescence of meteorites. V - Ordinary chondrites at the Allan Hills ice fields

    NASA Technical Reports Server (NTRS)

    Benoit, Paul H.; Sears, Hazel; Sears, Derek W. G.

    1993-01-01

    Natural thermoluminescence (TL) data have been obtained for 167 ordinary chondrites from the ice fields in the vicinity of the Allan Hills in Victoria Land, Antarctica, in order to investigate their thermal and radiation history, pairing, terrestrial age, and concentration mechanisms. Natural TL values for meteorites from the Main ice field are fairly low, while the Farwestern field shows a spread with many values 30-80 krad, suggestive of less than 150-ka terrestrial ages. There appear to be trends in TL levels within individual ice fields which are suggestive of directions of ice movement at these sites during the period of meteorite concentration. These directions seem to be confirmed by the orientations of elongation preserved in meteorite pairing groups. The proportion of meteorites with very low natural TL levels at each field is comparable to that observed at the Lewis Cliff site and for modern non-Antarctic falls and is also similar to the fraction of small perihelia orbits calculated from fireball and fall observations. Induced TL data for meteorites from the Allan Hills confirm trends which show that a select group of H chondrites from the Antarctic experienced a different extraterrestrial thermal history to that of non-Antarctic H chondrites.

  13. 29 CFR 1920.2 - Variances.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    Title 29, Labor: Regulations Relating to Labor (Continued), OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR (CONTINUED...' COMPENSATION ACT, § 1920.2 Variances. (a) Variances from standards in parts 1915 through 1918 of this...

  14. 10 CFR 851.31 - Variance process.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    Title 10, Energy: DEPARTMENT OF ENERGY WORKER SAFETY AND HEALTH PROGRAM, Variances, § 851.31 Variance process. (a) Application..., practices, means, methods, operations, or processes used or proposed to be used by the contractor; and...

  15. Using "Excel" for White's Test--An Important Technique for Evaluating the Equality of Variance Assumption and Model Specification in a Regression Analysis

    ERIC Educational Resources Information Center

    Berenson, Mark L.

    2013-01-01

    There is consensus in the statistical literature that severe departures from its assumptions invalidate the use of regression modeling for purposes of inference. The assumptions of regression modeling are usually evaluated subjectively through visual, graphic displays in a residual analysis but such an approach, taken alone, may be insufficient…
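
    For readers working outside Excel, a from-scratch sketch of White's test for the single-regressor case: regress the squared OLS residuals on the regressor and its square, and refer LM = n·R² to a chi-square distribution. The data are synthetic, with error variance deliberately growing with x.

      import numpy as np
      from scipy.stats import chi2

      def white_test(x, y):
          """White's test for y = b0 + b1*x + e: regress squared OLS residuals
          on [1, x, x^2]; LM = n*R^2 is chi-square with 2 df under homoskedasticity."""
          X = np.column_stack([np.ones_like(x), x])
          e = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
          Z = np.column_stack([np.ones_like(x), x, x ** 2])
          e2 = e ** 2
          fit = Z @ np.linalg.lstsq(Z, e2, rcond=None)[0]
          r2 = 1.0 - ((e2 - fit) ** 2).sum() / ((e2 - e2.mean()) ** 2).sum()
          lm = len(x) * r2
          return lm, chi2.sf(lm, Z.shape[1] - 1)

      rng = np.random.default_rng(10)
      x = rng.uniform(1, 10, 200)
      y = 2.0 + 0.5 * x + rng.normal(0, 0.3 * x)   # error SD grows with x
      print(white_test(x, y))    # small p-value flags heteroskedasticity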

  16. Increasing selection response by Bayesian modeling of heterogeneous environmental variances

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Heterogeneity of environmental variance among genotypes reduces selection response because genotypes with higher variance are more likely to be selected than low-variance genotypes. Modeling heterogeneous variances to obtain weighted means corrected for heterogeneous variances is difficult in likel...

  17. Identification and quantification of peptides and proteins secreted from prostate epithelial cells by unbiased liquid chromatography tandem mass spectrometry using goodness of fit and analysis of variance.

    PubMed

    Florentinus, Angelica K; Bowden, Peter; Sardana, Girish; Diamandis, Eleftherios P; Marshall, John G

    2012-02-01

    The proteins secreted by prostate cancer cells (PC3(AR)6) were separated by strong anion exchange chromatography, digested with trypsin and analyzed by unbiased liquid chromatography tandem mass spectrometry with an ion trap. The spectra were matched to peptides within proteins using a goodness of fit algorithm that showed a low false positive rate. The parent ions for MS/MS were randomly and independently sampled from a log-normal population and therefore could be analyzed by ANOVA. Normal distribution analysis confirmed that the parent and fragment ion intensity distributions were sampled over 99.9% of their range that was above the background noise. Arranging the ion intensity data with the identified peptide and protein sequences in structured query language (SQL) permitted the quantification of ion intensity across treatments, proteins and peptides. The intensity of 101,905 fragment ions from 1421 peptide precursors of 583 peptides from 233 proteins separated over 11 sample treatments were computed together in one ANOVA model using the statistical analysis system (SAS) prior to Tukey-Kramer honestly significant difference (HSD) testing. Thus complex mixtures of proteins were identified and quantified with a high degree of confidence using an ion trap without isotopic labels, multivariate analysis or comparing chromatographic retention times.
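
    A minimal sketch of the analysis pattern described above in Python rather than SQL/SAS: log-transform the approximately log-normal intensities so ANOVA assumptions are reasonable, then apply Tukey's HSD across treatments. statsmodels supplies the HSD and the data are synthetic placeholders.

      import numpy as np
      from statsmodels.stats.multicomp import pairwise_tukeyhsd

      rng = np.random.default_rng(11)
      treatments = np.repeat(["ctrl", "saltA", "saltB"], 60)
      # log-normal ion intensities with one shifted treatment (synthetic)
      log_mu = {"ctrl": 10.0, "saltA": 10.4, "saltB": 10.0}
      intensity = np.exp([rng.normal(log_mu[t], 0.5) for t in treatments])

      # pairwise comparison on the log scale, where the data are ~normal
      result = pairwise_tukeyhsd(np.log(intensity), treatments, alpha=0.05)
      print(result)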

  18. Cation diffusion in calcite: determining closure temperatures and the thermal history for the Allan Hills 84001 meteorite.

    PubMed

    Fisler, D K; Cygan, R T

    1998-07-01

    The presence of zoned Fe, Mg, Ca, and Mn in the carbonate phases associated with the cracks and inclusions of the Allan Hills (ALH) 84001 meteorite provides evidence for constraining the thermal history of the meteorite. Using self- and tracer-diffusion coefficients obtained from laboratory experiments on natural calcite, cooling rates are calculated for various temperatures and diffusion distances to assist in the evaluation of the compositional zoning associated with the carbonate phases in ALH 84001. The closure temperature model provides the average temperature below which compositional zoning will be preserved for a given cooling rate, that is, the temperature at which diffusion will be ineffective in homogenizing the phase. The validity of various theories for the formation of the carbonate globules may be examined, therefore, in view of the diffusion-limited kinetic constraints. Experiments using a thin film-mineral diffusion couple and ion microprobe for depth profiling analysis were performed for the temperature range of 550-800 degrees C to determine self- and tracer-diffusion coefficients for Ca and Mg in calcite. The resulting activation energies for Ca (Ea(Ca) = 271 +/- 80 kJ/mol) and for Mg (Ea(Mg) = 284 +/- 74 kJ/mol) were then used to calculate a series of cooling rate, grain size, and closure temperature curves. The data indicate, for example, that by the diffusion of Mg in calcite, a 10-micrometer compositional zone would be completely homogenized at a temperature of 300 degrees C for cooling rates <100 K/Ma. These data provide no constraint on formation models that propose a low-temperature fluid precipitation mechanism; however, they indicate that the carbonate globules were not exposed to a high-temperature environment for long time scales following formation. PMID:11543076
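
    The kinetic argument can be made concrete with the Arrhenius relation D = D0·exp(-Ea/RT) and the characteristic homogenization time t ~ x²/D. The activation energy below is the paper's Mg-in-calcite value, but the pre-exponential factor D0 is a hypothetical placeholder, so the printed times are illustrative only.

      import numpy as np

      R = 8.314            # J/(mol K)
      Ea = 284e3           # J/mol, Mg in calcite (value from the abstract)
      D0 = 3e-8            # m^2/s -- HYPOTHETICAL pre-exponential, for illustration
      x = 10e-6            # m, width of the compositional zone

      for T_C in (100, 300, 500):
          T = T_C + 273.15
          D = D0 * np.exp(-Ea / (R * T))        # Arrhenius diffusivity
          t_sec = x ** 2 / D                    # characteristic homogenization time
          print(f"{T_C} C: t ~ {t_sec / 3.15e7:.2e} yr")   # seconds -> years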

  20. Restricted sample variance reduces generalizability.

    PubMed

    Lakes, Kimberley D

    2013-06-01

    One factor that affects the reliability of observed scores is restriction of range on the construct measured for a particular group of study participants. This study illustrates how researchers can use generalizability theory to evaluate the impact of restriction of range in particular sample characteristics on the generalizability of test scores and to estimate how changes in measurement design could improve the generalizability of the test scores. An observer-rated measure of child self-regulation (Response to Challenge Scale; Lakes, 2011) is used to examine scores for 198 children (Grades K through 5) within the generalizability theory (GT) framework. The generalizability of ratings within relatively developmentally homogeneous samples is examined and illustrates the effect of reduced variance among ratees on generalizability. Forecasts for g coefficients of various D study designs demonstrate how higher generalizability could be achieved by increasing the number of raters or items. In summary, the research presented illustrates the importance of and procedures for evaluating the generalizability of a set of scores in a particular research context. PMID:23205627
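
    The forecasting step rests on a simple variance ratio: in a persons x raters D study, the relative g coefficient is the universe-score variance divided by itself plus the rater-error variance averaged over the number of raters. A minimal sketch with invented variance components (not the study's estimates):

        def g_coefficient(v_person, v_person_rater, n_raters):
            """Relative g coefficient for a persons x raters D study:
            universe-score variance over itself plus averaged rater error."""
            return v_person / (v_person + v_person_rater / n_raters)

        # adding raters shrinks the averaged error and raises g toward 1
        for n in (1, 2, 4, 8):
            print(n, round(g_coefficient(0.40, 0.60, n), 3))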

  1. Investigations into an unknown organism on the martian meteorite Allan Hills 84001

    NASA Technical Reports Server (NTRS)

    Steele, A.; Goddard, D. T.; Stapleton, D.; Toporski, J. K.; Peters, V.; Bassinger, V.; Sharples, G.; Wynn-Williams, D. D.; McKay, D. S.

    2000-01-01

    Examination of fracture surfaces near the fusion crust of the martian meteorite Allan Hills (ALH) 84001 has been conducted using scanning electron microscopy (SEM) and atomic force microscopy (AFM) and has revealed structures strongly resembling mycelium. These structures were compared with similar structures found in Antarctic cryptoendolithic communities. On morphology alone, we conclude that these features are not only terrestrial in origin but probably belong to a member of the Actinomycetales, which we consider was introduced during the Antarctic residency of this meteorite. If true, this is the first documented account of terrestrial microbial activity within a meteorite from the Antarctic blue ice fields. These structures, however, do not bear any resemblance to those postulated to be martian biota, although they are a probable source of the organic contaminants previously reported in this meteorite.

  2. The evolution and consequences of sex-specific reproductive variance.

    PubMed

    Mullon, Charles; Reuter, Max; Lehmann, Laurent

    2014-01-01

    Natural selection favors alleles that increase the number of offspring produced by their carriers. But in a world that is inherently uncertain within generations, selection also favors alleles that reduce the variance in the number of offspring produced. While previous studies have established this principle, they have largely ignored fundamental aspects of sexual reproduction and therefore how selection on sex-specific reproductive variance operates. To study the evolution and consequences of sex-specific reproductive variance, we present a population-genetic model of phenotypic evolution in a dioecious population that incorporates previously neglected components of reproductive variance. First, we derive the probability of fixation for mutations that affect male and/or female reproductive phenotypes under sex-specific selection. We find that even in the simplest scenarios, the direction of selection is altered when reproductive variance is taken into account. In particular, previously unaccounted-for covariances between the reproductive outputs of different individuals are expected to play a significant role in determining the direction of selection. Then, the probability of fixation is used to develop a stochastic model of joint male and female phenotypic evolution. We find that sex-specific reproductive variance can be responsible for changes in the course of long-term evolution. Finally, the model is applied to an example of parental-care evolution. Overall, our model allows for the evolutionary analysis of social traits in finite and dioecious populations, where interactions can occur within and between sexes under a realistic scenario of reproduction.
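
    The qualitative effect of offspring-number variance on fixation can be seen in a bare-bones Monte Carlo experiment: a single mutant whose offspring output has the same mean as the wild type but a lower variance. The gamma offspring-output distribution and all parameter values below are illustrative assumptions, not the paper's model, and the variance effect is weak, so many runs are needed.

        import numpy as np

        rng = np.random.default_rng(1)

        def fixation_prob(N=100, mean=2.0, var_wild=2.0, var_mut=1.0, runs=4000):
            """Fixation probability of one mutant allele when individual
            offspring outputs are gamma-distributed with the given mean/variance."""
            fixed = 0
            for _ in range(runs):
                m = 1                                   # mutant copies
                while 0 < m < N:
                    w_m = rng.gamma(mean**2 / var_mut, var_mut / mean, m).sum()
                    w_w = rng.gamma(mean**2 / var_wild, var_wild / mean, N - m).sum()
                    # resample N adults in proportion to realized outputs
                    m = rng.binomial(N, w_m / (w_m + w_w))
                fixed += (m == N)
            return fixed / runs

        print(fixation_prob())   # tends to exceed the neutral value 1/N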

  3. Allan Hills 76005 Polymict Eucrite Pairing Group: Curatorial and Scientific Update on a Jointly Curated Meteorite

    NASA Technical Reports Server (NTRS)

    Righter, K.

    2011-01-01

    Allan Hills 76005 (or 765) was collected by the joint US-Japan field search for meteorites in 1976-77. It was described in detail as "pale gray in color and consists of finely divided macrocrystalline pyroxene-rich matrix that contains abundant clastic fragments: (1) Clasts of white, plagioclase-rich rocks. (2) Medium-gray, partly devitrified, cryptocrystalline. (3) Monomineralic fragments and grains of pyroxene, plagioclases, oxide minerals, sulfides, and metal. In overall appearance it is very similar to some lunar breccias." Subsequent studies found a great diversity of basaltic clast textures and compositions, and therefore it is best classified as a polymict eucrite. Samples from the 1976-77, 77-78, and 78-79 field seasons (76, 77, and 78 prefixes) were split between the US and Japan (NIPR). The US specimens are currently at NASA-JSC, the Smithsonian Institution, or the Field Museum in Chicago. After this initial find of ALH 76005, the next year's team recovered one additional mass, ALH 77302, and four additional masses were found during the third season: ALH 78040, 78132, 78158, and 78165. The joint US-Japan collection effort ended after three years, and the US began collecting in the Trans-Antarctic Mountains with the 1979-80 and subsequent field seasons. ALH 79017 and ALH 80102 were recovered in these first two years, and then in the 1981-82 field season six additional masses were recovered from the Allan Hills. It took some time to establish the pairing of all of these specimens, but altogether the samples comprise 4292.4 g of material. Here we summarize the scientific findings as well as some curatorial details of how the specimens have been subdivided and allocated for study. A detailed summary is also presented in the HED meteorite compendium on the NASA-JSC curation webpage.

  4. The Natural Thermoluminescence of Meteorites. Part 5; Ordinary Chondrites at the Allan Hills Ice Fields

    NASA Technical Reports Server (NTRS)

    Benoit, Paul H.; Sears, Hazel; Sears, Derek W. G.

    1993-01-01

    Natural thermoluminescence (TL) data have been obtained for 167 ordinary chondrites from the ice fields in the vicinity of the Allan Hills in Victoria Land, Antarctica, in order to investigate their thermal and radiation history, pairing, terrestrial age, and concentration mechanisms. Using fairly conservative criteria (including natural and induced TL, find location, and petrographic data), the 167 meteorite fragments are thought to represent a maximum of 129 separate meteorites. Natural TL values for meteorites from the Main ice field are fairly low (typically 5-30 krad, indicative of terrestrial ages of approx. 400 ka), while the Far western field shows a spread with many values 30-80 krad, suggestive of less than 150-ka terrestrial ages. There appear to be trends in TL levels within individual ice fields that are suggestive of directions of ice movement at these sites during the period of meteorite concentration. These directions seem to be confirmed by the orientations of elongation preserved in meteorite pairing groups. The proportion of meteorites with very low natural TL levels (less than 5 krad) at each field is comparable to that observed at the Lewis Cliff site and for modern non-Antarctic falls, and is also similar to the fraction of small-perihelion (less than 0.85 AU) orbits calculated from fireball and fall observations. Induced TL data for meteorites from the Allan Hills confirm trends observed for meteorites collected during the 1977/1978 and 1978/1979 field seasons, which show that a select group of H chondrites from the Antarctic experienced a different extraterrestrial thermal history from that of non-Antarctic H chondrites.

  5. Impact of Damping Uncertainty on SEA Model Response Variance

    NASA Technical Reports Server (NTRS)

    Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand

    2010-01-01

    Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However, these techniques do not account for uncertainties in the system properties. In the present paper, uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.
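
    For a single subsystem, the SEA mean energy response scales as E = P_in/(omega*eta), so spread in the measured loss factor eta maps directly onto spread in the response. The sketch below layers an assumed lognormal damping uncertainty on top of a lognormal ensemble spread; all numbers are placeholders rather than values measured on the test article.

        import numpy as np

        rng = np.random.default_rng(0)

        P_in, omega = 1.0, 2 * np.pi * 500.0   # input power [W], band centre [rad/s]
        eta_mean, eta_cov = 0.02, 0.3          # loss factor mean and coeff. of variation (assumed)
        ens_cov = 0.2                          # SEA ensemble coeff. of variation (assumed)

        # lognormal damping samples with the requested mean and spread
        s2 = np.log(1 + eta_cov**2)
        eta = rng.lognormal(np.log(eta_mean) - 0.5 * s2, np.sqrt(s2), 100_000)
        energy = P_in / (omega * eta)          # SEA mean response per damping sample

        # multiply by a unit-mean lognormal factor for the ensemble variance
        e2 = np.log(1 + ens_cov**2)
        response = energy * rng.lognormal(-0.5 * e2, np.sqrt(e2), energy.size)

        lo, hi = np.percentile(response, [2.5, 97.5])
        print(f"mean {response.mean():.3e} J, 95% bounds [{lo:.3e}, {hi:.3e}] J")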

  6. Simulation testing of unbiasedness of variance estimators

    USGS Publications Warehouse

    Link, W.A.

    1993-01-01

    In this article I address the evaluation of estimators of variance for parameter estimates. Given an unbiased estimator X of a parameter theta, and an estimator V of the variance of X, how does one test (via simulation) whether V is an unbiased estimator of the variance of X? The derivation of the test statistic illustrates the need for care in substituting consistent estimators for unknown parameters.
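
    A direct simulation check of this kind compares the average of V across replicates with the empirical variance of X across the same replicates. One convenient way to attach a standard error to that comparison is the per-replicate contrast below; treating the contrasts as independent is itself an approximation, and the estimator pair shown (a sample mean and its textbook variance estimate) is only an illustration, not the article's test statistic.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)

        def check_variance_estimator(R=5000, n=30):
            """Simulation check that V (here s^2/n for a sample mean) is an
            unbiased estimator of Var(X); one (X_r, V_r) pair per replicate."""
            X = np.empty(R)
            V = np.empty(R)
            for r in range(R):
                sample = rng.normal(0.0, 2.0, n)
                X[r] = sample.mean()              # unbiased estimator of 0
                V[r] = sample.var(ddof=1) / n     # its variance estimator
            # contrast whose mean equals  mean(V) - empirical Var(X)
            U = V - (X - X.mean())**2 * R / (R - 1)
            t, p = stats.ttest_1samp(U, 0.0)
            print(f"mean(V)={V.mean():.4g}  var(X)={X.var(ddof=1):.4g}  p={p:.3f}")

        check_variance_estimator()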

  7. 40 CFR 59.106 - Variance.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...

  8. 40 CFR 59.106 - Variance.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...

  9. 40 CFR 59.106 - Variance.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...

  10. 40 CFR 59.106 - Variance.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...

  11. 40 CFR 59.106 - Variance.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...

  12. Understanding the influence of watershed storage caused by human interferences on ET variance

    NASA Astrophysics Data System (ADS)

    Zeng, R.; Cai, X.

    2014-12-01

    Understanding the temporal variance of evapotranspiration (ET) at the watershed scale remains a challenging task, because it is affected by complex climate conditions, soil properties, vegetation, groundwater, and human activities. In a changing environment with extensive and intensive human interferences, understanding ET variance and its factors is important for sustainable water resources management. This study presents an analysis of the effect of storage change caused by human activities on ET variance. Irrigation usually filters ET variance through the use of surface water and groundwater; however, excessive irrigation may deplete watershed storage, which changes the coincidence of water availability and energy supply for ET. This study develops a framework by incorporating the water balance and the Budyko hypothesis. It decomposes the ET variance into the variances of precipitation, potential ET, and catchment storage change, and their covariances. The contributions to ET variance from the various components are scaled by weighting functions expressed in terms of long-term climate conditions and catchment properties. ET variance is assessed using records from 32 major river basins across the world. It is found that ET variance is dominated by precipitation variance under hot-dry conditions and by evaporative demand variance under cool-wet conditions, while the coincidence of water and energy supply controls ET variance under moderate climate conditions. Watershed storage change plays an increasingly important role in determining ET variance at relatively short time scales. By incorporating storage change caused by human interferences, this framework corrects the over-estimation of ET variance in hot-dry climates and the under-estimation of ET variance in cool-wet climates. Furthermore, classification of the dominant factors of ET variance shows patterns similar to geographic zonation.
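
    The algebra behind such a decomposition is the variance identity of the water balance ET = P - Q - dS: the ET variance splits into the component variances plus covariance terms (the study additionally weights these with Budyko-type functions). A toy check on synthetic annual series:

        import numpy as np

        rng = np.random.default_rng(7)
        n = 50                                            # years of synthetic data

        P = rng.normal(1000, 150, n)                      # precipitation, mm/yr
        dS = 0.3 * (P - P.mean()) + rng.normal(0, 60, n)  # storage change tracks wet years
        Q = 0.35 * P + rng.normal(0, 40, n)               # runoff
        ET = P - Q - dS                                   # water balance

        c = np.cov(np.vstack([P, Q, dS]))                 # 3x3 covariance matrix
        var_et = (c[0, 0] + c[1, 1] + c[2, 2]
                  - 2 * c[0, 1] - 2 * c[0, 2] + 2 * c[1, 2])
        print(np.isclose(var_et, ET.var(ddof=1)))         # True: identity holds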

  13. 21 CFR 1010.4 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Variances. 1010.4 Section 1010.4 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) RADIOLOGICAL HEALTH... and Radiological Health, Food and Drug Administration, may grant a variance from one or...

  14. Variance Design and Air Pollution Control

    ERIC Educational Resources Information Center

    Ferrar, Terry A.; Brownstein, Alan B.

    1975-01-01

    Air pollution control authorities were forced to relax air quality standards during the winter of 1972 by granting variances. This paper examines the institutional characteristics of these variance policies from an economic incentive standpoint, sets up desirable structural criteria for institutional design and arrives at policy guidelines for…

  15. On Some Representations of Sample Variance

    ERIC Educational Resources Information Center

    Joarder, Anwar H.

    2002-01-01

    The usual formula for variance depending on rounding off the sample mean lacks precision, especially when computer programs are used for the calculation. The well-known simplification of the total sums of squares does not always give benefit. Since the variance of two observations is easily calculated without the use of a sample mean, and the…

  16. Save money by understanding variance and tolerancing.

    PubMed

    Stuart, K

    2007-01-01

    Manufacturing processes are inherently variable, which results in component and assembly variance. Unless process capability, variance and tolerancing are fully understood, incorrect design tolerances may be applied, which will lead to more expensive tooling, inflated production costs, high reject rates, product recalls and excessive warranty costs. A methodology is described for correctly allocating tolerances and performing appropriate analyses.
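
    A core piece of such a methodology is the choice between worst-case and statistical (root-sum-square) tolerance stack-up: when component deviations are independent and centred, their variances add, so tolerances combine in quadrature and the statistical stack is much tighter than the worst case. A minimal sketch with invented tolerances:

        import math

        tolerances = [0.05, 0.02, 0.10, 0.03]   # +/- tolerance of each component, mm

        worst_case = sum(tolerances)                     # tolerances add linearly
        rss = math.sqrt(sum(t**2 for t in tolerances))   # variances add

        print(f"worst case: +/-{worst_case:.3f} mm")
        print(f"RSS:        +/-{rss:.3f} mm")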

  17. Nonlinear Epigenetic Variance: Review and Simulations

    ERIC Educational Resources Information Center

    Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.

    2010-01-01

    We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…

  18. 10 CFR 851.31 - Variance process.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application... that the contractor has taken to inform the affected workers of the application, which must include... application and specifying where a copy may be examined at the place or places where notices to workers...

  19. 10 CFR 851.31 - Variance process.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application... that the contractor has taken to inform the affected workers of the application, which must include... application and specifying where a copy may be examined at the place or places where notices to workers...

  20. 10 CFR 851.31 - Variance process.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application... that the contractor has taken to inform the affected workers of the application, which must include... application and specifying where a copy may be examined at the place or places where notices to workers...

  1. Portfolio optimization with mean-variance model

    NASA Astrophysics Data System (ADS)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

    Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it aims to minimize the portfolio risk, measured as the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the optimal weights differ across the component stocks. Moreover, investors can obtain the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
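
    The unconstrained core of the mean-variance model has a closed form: the global minimum-variance weights are w = Sigma^-1 1 / (1' Sigma^-1 1). The sketch below applies this to synthetic weekly returns for five hypothetical stocks (the study used 20 FBMKLCI components); adding a target-return constraint or a no-short-selling constraint would require a quadratic-programming solver instead.

        import numpy as np

        rng = np.random.default_rng(3)
        returns = rng.normal(0.002, 0.03, size=(260, 5))   # weeks x stocks, synthetic

        mu = returns.mean(axis=0)                  # mean weekly returns
        Sigma = np.cov(returns, rowvar=False)      # covariance (risk) matrix
        ones = np.ones(len(mu))

        w = np.linalg.solve(Sigma, ones)           # Sigma^-1 1
        w /= w @ ones                              # normalise to sum to one

        print("weights:", np.round(w, 3))
        print("portfolio variance:", w @ Sigma @ w)
        print("portfolio mean return:", w @ mu)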

  2. 42 CFR 456.525 - Request for renewal of variance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time... variance to the Administrator at least 30 days before the variance expires. (b) The renewal request...

  3. Variance Estimation of Imputed Survey Data. Working Paper Series.

    ERIC Educational Resources Information Center

    Zhang, Fan; Brick, Mike; Kaufman, Steven; Walter, Elizabeth

    Missing data is a common problem in virtually all surveys. This study focuses on variance estimation and its consequences for analysis of survey data from the National Center for Education Statistics (NCES). Methods suggested by C. Sarndal (1992), S. Kaufman (1996), and S. Shao and R. Sitter (1996) are reviewed in detail. In section 3, the…

  4. Entropy, Fisher Information and Variance with Frost-Musulin Potential

    NASA Astrophysics Data System (ADS)

    Idiodi, J. O. A.; Onate, C. A.

    2016-09-01

    This study presents the Shannon and Renyi information entropies for both position and momentum space and the Fisher information for the position-dependent mass Schrödinger equation with the Frost-Musulin potential. The analysis of the quantum mechanical probability has been obtained via the Fisher information. The variance for this potential is also computed. This controls both the chemical and physical properties of some molecular systems. We have observed the behaviour of the Shannon entropy, Renyi entropy, Fisher information, and variance with respect to the quantum number n.

  5. Turbulence Variance Characteristics in the Unstable Atmospheric Boundary Layer above Flat Pine Forest

    NASA Astrophysics Data System (ADS)

    Asanuma, Jun

    Variances of the velocity components and scalars are important as indicators of turbulence intensity. They can also be utilized to estimate surface fluxes in several types of "variance methods", and the estimated fluxes can be regional values if the variances from which they are calculated are regionally representative measurements. With these motivations, variances measured by an aircraft in the unstable ABL over a flat pine forest during HAPEX-Mobilhy were analyzed within the context of similarity scaling arguments. The variances of temperature and vertical velocity within the atmospheric surface layer were found to follow the Monin-Obukhov similarity theory closely and to yield reasonable estimates of the surface sensible heat fluxes when used in variance methods. This validates the variance methods with aircraft measurements. On the other hand, the specific humidity variances were influenced by surface heterogeneity and clearly fail to obey MOS. A simple analysis based on the similarity law for free convection produced a comprehensible and quantitative picture of the effect of surface flux heterogeneity on the statistical moments, and revealed that variances of the active and passive scalars become dissimilar because of their different roles in turbulence. The analysis also indicated that the mean quantities are affected by the heterogeneity as well, but to a lesser extent than the variances. The temperature variances in the mixed layer (ML) were examined using a generalized top-down bottom-up diffusion model with several combinations of velocity scales and inversion flux models. The results showed that the surface shear stress exerts considerable influence on the lower ML. ML variance methods were also tested with the temperature and vertical velocity variances, and their feasibility was investigated. Finally, the variances in the ML were analyzed in terms of the local similarity concept; the results confirmed the original

  6. Portfolio optimization using median-variance approach

    NASA Astrophysics Data System (ADS)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of the approaches assume that the distribution of data is normal, and this is not generally true. As an alternative, in this paper, we employ the median-variance approach to improve the portfolio optimization. This approach can cater for both normal and non-normal distributions of data. With this representation, we analyze and compare the rate of return and risk between the mean-variance and the median-variance based portfolios, which consist of 30 stocks from Bursa Malaysia. The results in this study show that the median-variance approach is capable of producing a lower risk for each return earned as compared to the mean-variance approach.

  7. Allan Hills 88019: an Antarctic H-chondrite with a very long terrestrial age.

    NASA Astrophysics Data System (ADS)

    Scherer, P.; Schultz, L.; Neupert, U.; Knauer, M.; Neumann, S.; Leya, I.; Michel, R.; Mokos, J.; Lipschutz, M. E.; Metzler, K.; Suter, M.; Kubik, P. W.

    1997-11-01

    We have measured the concentrations of the cosmogenic radionuclides 10Be, 26Al and 36Cl (half-lives 1.51 Ma, 716 ka, 300 ka, respectively) in two different laboratories by AMS techniques, as well as concentrations and isotopic compositions of stable helium, neon and argon in the Antarctic H-chondrite Allan Hills 88019. In addition, nuclear track densities were measured. From these results it is concluded that the meteoroid ALH 88019 had a pre-atmospheric radius of (20 +/- 5) cm and a shielding depth for the analyzed samples of between 4 and 8 cm. Using calculated and experimentally determined production rates of cosmogenic nuclides, an exposure age of about 40 Ma is obtained from cosmogenic 21Ne and 38Ar. The extremely low concentrations of radionuclides are explained by a very long terrestrial age for this meteorite of (2.2 +/- 0.4) Ma. A similarly long terrestrial age was found so far only for the Antarctic L-chondrite Lewis Cliff 86360. Such long ages establish one boundary condition for the history of meteorites in Antarctica.

  8. Observation, Inference, and Imagination: Elements of Edgar Allan Poe's Philosophy of Science

    NASA Astrophysics Data System (ADS)

    Gelfert, Axel

    2014-03-01

    Edgar Allan Poe's standing as a literary figure, who drew on (and sometimes dabbled in) the scientific debates of his time, makes him an intriguing character for any exploration of the historical interrelationship between science, literature and philosophy. His sprawling `prose-poem' Eureka (1848), in particular, has sometimes been scrutinized for anticipations of later scientific developments. By contrast, the present paper argues that it should be understood as a contribution to the raging debates about scientific methodology at the time. This methodological interest, which is echoed in Poe's `tales of ratiocination', gives rise to a proposed new mode of—broadly abductive—inference, which Poe attributes to the hybrid figure of the `poet-mathematician'. Without creative imagination and intuition, Science would necessarily remain incomplete, even by its own standards. This concern with imaginative (abductive) inference ties in nicely with his coherentism, which grants pride of place to the twin virtues of Simplicity and Consistency, which must constrain imagination lest it degenerate into mere fancy.

  9. 40 CFR 142.41 - Variance request.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... and evidence of the best available treatment technology and techniques. (2) Economic and legal factors... water in the case of an excessive rise in the contaminant level for which the variance is requested....

  10. 40 CFR 59.206 - Variances.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...

  11. 40 CFR 59.206 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...

  12. 7 CFR 205.290 - Temporary variances.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) ORGANIC FOODS PRODUCTION ACT... notify each production or handling operation it certifies to which the temporary variance applies....

  13. Reducing variance in batch partitioning measurements

    SciTech Connect

    Mariner, Paul E.

    2010-08-11

    The partitioning experiment is commonly performed with little or no attention to reducing measurement variance. Batch test procedures such as those used to measure Kd values (e.g., ASTM D 4646 and EPA 402-R-99-004A) do not explain how to evaluate measurement uncertainty nor how to minimize measurement variance. In fact, ASTM D 4646 prescribes a sorbent:water ratio that prevents variance minimization. Consequently, the variance of a set of partitioning measurements can be extreme and even absurd. Such data sets, which are commonplace, hamper probabilistic modeling efforts. An error-savvy design requires adjustment of the solution:sorbent ratio so that approximately half of the sorbate partitions to the sorbent. Results of Monte Carlo simulations indicate that this simple step can markedly improve the precision and statistical characterization of partitioning uncertainty.
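
    The half-partitioning rule follows from the batch mass balance: the fraction sorbed is f = Kd(m/V)/(1 + Kd(m/V)), so choosing a solid:liquid ratio of m/V = 1/Kd (using an a priori Kd estimate) puts f at 0.5 and keeps both phase concentrations well away from their detection limits. A small sketch, with an invented Kd:

        def fraction_sorbed(Kd, solid_liquid_ratio):
            """Equilibrium fraction of sorbate on the solid.
            Kd in L/kg, solid:liquid ratio in kg/L."""
            x = Kd * solid_liquid_ratio
            return x / (1.0 + x)

        Kd_expected = 50.0                # L/kg, a priori estimate (illustrative)
        ratio = 1.0 / Kd_expected         # m/V chosen so ~half the sorbate sorbs
        print(fraction_sorbed(Kd_expected, ratio))   # 0.5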

  14. 13 CFR 307.22 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Variances. 307.22 Section 307.22 Business Credit and Assistance ECONOMIC DEVELOPMENT ADMINISTRATION, DEPARTMENT OF COMMERCE ECONOMIC... Federal, State and local law....

  15. 40 CFR 59.206 - Variances.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...

  16. 40 CFR 59.206 - Variances.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...

  17. 40 CFR 59.206 - Variances.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...

  18. [ECoG classification based on wavelet variance].

    PubMed

    Yan, Shiyu; Liu, Chong; Wang, Hong; Zhao, Haibin

    2013-06-01

    For a typical electrocorticogram (ECoG)-based brain-computer interface (BCI) system in which the subject's task is to imagine movements of either the left small finger or the tongue, we proposed a feature extraction algorithm using wavelet variance. First, the definition and significance of the wavelet variance were introduced, and it was adopted as a feature on the basis of the wavelet transform. Six channels with the most distinctive features were selected from 64 channels for analysis. The EEG data were then decomposed using the db4 wavelet. The wavelet coefficient variances containing the Mu rhythm and Beta rhythm were taken as features based on the ERD/ERS phenomenon. The features were classified linearly with a cross-validation algorithm. The results of off-line analysis showed that high classification accuracies of 90.24% and 93.77% were achieved for the training and test data sets; the wavelet variance is simple and effective and is suitable for feature extraction in BCI research. PMID:23865300
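
    A bare-bones version of the feature step is sketched below with PyWavelets (an assumed dependency): decompose each trial with the db4 wavelet and keep the variance of the detail coefficients at each level as one feature. Channel selection and matching of levels to the Mu/Beta bands are omitted.

        import numpy as np
        import pywt  # PyWavelets

        def wavelet_variance_features(trial, wavelet="db4", level=5):
            """Variance of the detail coefficients at each level of a
            discrete wavelet decomposition (one feature per level)."""
            coeffs = pywt.wavedec(trial, wavelet, level=level)
            return np.array([c.var() for c in coeffs[1:]])  # skip approximation

        rng = np.random.default_rng(0)
        trial = rng.normal(size=1000)      # stand-in for one channel, one trial
        print(wavelet_variance_features(trial))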

  19. Is the ANOVA F-Test Robust to Variance Heterogeneity When Sample Sizes are Equal?: An Investigation via a Coefficient of Variation

    ERIC Educational Resources Information Center

    Rogan, Joanne C.; Keselman, H. J.

    1977-01-01

    The effects of variance heterogeneity on the empirical probability of a Type I error for the analysis of variance (ANOVA) F-test are examined. The rate of Type I error varies as a function of the degree of variance heterogeneity, and the ANOVA F-test is not always robust to variance heterogeneity when sample sizes are equal. (Author/JAC)
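
    The kind of investigation described here is easy to reproduce in outline: simulate many one-way layouts with equal means and equal group sizes but unequal standard deviations, and count rejections at the nominal alpha. A sketch using scipy (group sizes and SD ratios are arbitrary choices):

        import numpy as np
        from scipy.stats import f_oneway

        rng = np.random.default_rng(1)

        def type1_rate(sds, n=10, reps=20_000, alpha=0.05):
            """Empirical Type I error of the one-way ANOVA F-test with
            equal means, equal n, and group SDs given by sds."""
            hits = 0
            for _ in range(reps):
                groups = [rng.normal(0.0, sd, n) for sd in sds]
                hits += f_oneway(*groups).pvalue < alpha
            return hits / reps

        print(type1_rate([1, 1, 1]))   # close to the nominal 0.05
        print(type1_rate([1, 2, 4]))   # departs from 0.05 despite equal n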

  20. Carbonates in fractures of Martian meteorite Allan Hills 84001: petrologic evidence for impact origin

    NASA Technical Reports Server (NTRS)

    Scott, E. R.; Krot, A. N.; Yamaguchi, A.

    1998-01-01

    Carbonates in Martian meteorite Allan Hills 84001 occur as grains on pyroxene grain boundaries, in crushed zones, and as disks, veins, and irregularly shaped grains in healed pyroxene fractures. Some carbonate disks have tapered Mg-rich edges and are accompanied by smaller, thinner and relatively homogeneous, magnesite microdisks. Except for the microdisks, all types of carbonate grains show the same unique chemical zoning pattern on MgCO3-FeCO3-CaCO3 plots. This chemical characteristic and the close spatial association of diverse carbonate types show that all carbonates formed by a similar process. The heterogeneous distribution of carbonates in fractures, tapered shapes of some disks, and the localized occurrence of Mg-rich microdisks appear to be incompatible with growth from an externally derived CO2-rich fluid that changed in composition over time. These features suggest instead that the fractures were closed as carbonates grew from an internally derived fluid and that the microdisks formed from a residual Mg-rich fluid that was squeezed along fractures. Carbonate in pyroxene fractures is most abundant near grains of plagioclase glass that are located on pyroxene grain boundaries and commonly contain major or minor amounts of carbonate. We infer that carbonates in fractures formed from grain boundary carbonates associated with plagioclase that were melted by impact and dispersed into the surrounding fractured pyroxene. Carbonates in fractures, which include those studied by McKay et al. (1996), could not have formed at low temperatures and preserved mineralogical evidence for Martian organisms.

  1. Variance estimation for nucleotide substitution models.

    PubMed

    Chen, Weishan; Wang, Hsiuying

    2015-09-01

    The current variance estimators for most evolutionary models were derived when a nucleotide substitution number estimator was approximated with a simple first-order Taylor expansion. In this study, we derive three variance estimators for the F81, F84, HKY85 and TN93 nucleotide substitution models. They are obtained using the second-order Taylor expansion of the substitution number estimator, the first-order Taylor expansion of a squared deviation, and the second-order Taylor expansion of a squared deviation, respectively. These variance estimators are compared with the existing variance estimator in a simulation study, which shows that the variance estimator derived using the second-order Taylor expansion of a squared deviation is more accurate than the other three estimators. In addition, we compare these estimators with an estimator derived by the bootstrap method. The simulation shows that the performance of this bootstrap estimator is similar to that of the estimator derived by the second-order Taylor expansion of a squared deviation. Since the latter has an explicit form, it is more efficient than the bootstrap estimator.
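
    The underlying delta-method recipe is easiest to see for the simpler Jukes-Cantor (JC69) model, where the distance is an explicit function of the proportion p of differing sites: the first-order variance is Var(p) scaled by the squared derivative of that function. JC69 is used here only for brevity; the paper works with F81/F84/HKY85/TN93 and higher-order expansions.

        import numpy as np

        def jc69_distance_and_var(p, n):
            """JC69 distance d = -(3/4) ln(1 - 4p/3) and its first-order
            (delta method) variance; p = proportion of differing sites
            out of n compared sites."""
            d = -0.75 * np.log1p(-4.0 * p / 3.0)
            deriv = 1.0 / (1.0 - 4.0 * p / 3.0)    # dd/dp
            var = p * (1.0 - p) / n * deriv**2     # Var(p) * (dd/dp)^2
            return d, var

        d, v = jc69_distance_and_var(p=0.20, n=500)
        print(f"d = {d:.4f}, SE = {v**0.5:.4f}")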

  2. A noise variance estimation approach for CT

    NASA Astrophysics Data System (ADS)

    Shen, Le; Jin, Xin; Xing, Yuxiang

    2012-10-01

    The Poisson-like noise model has been widely used for noise suppression and image reconstruction in low-dose computed tomography. Various noise estimation and suppression approaches have been developed and studied to enhance the image quality. Among them, the recently proposed generalized Anscombe transform (GAT) has been utilized to stabilize the variance of Poisson-Gaussian noise. In this paper, we present a variance estimation approach using the GAT. After the transform, the projection data are denoised conventionally under the assumption that the noise variance is uniformly equal to 1. The difference between the original and the denoised projections is treated as pure noise, and the global variance σ2 is estimated from this residual. The final denoising step is then performed with the estimated σ2. The proposed approach is verified on a cone-beam CT system and is shown to give a more accurate estimate of the actual parameter. We also examine the FBP algorithm with the two-step noise suppression in the projection domain using the estimated noise variance. Reconstruction results with simulated and practical projection data suggest that the presented approach could be effective in practical imaging applications.
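
    A toy version of the two-step idea, with a simple Gaussian smoother standing in for the paper's projection-domain denoiser; the unit gain and the electronic-noise level are invented values.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(0)

        # synthetic projection: Poisson counts plus Gaussian electronic noise
        profile = 50 + 40 * np.sin(np.linspace(0, 3 * np.pi, 256)) ** 2
        clean = np.tile(profile, (256, 1))
        proj = rng.poisson(clean) + rng.normal(0.0, 2.0, clean.shape)

        sigma_e = 2.0   # electronic noise std (assumed known, unit gain)
        gat = 2.0 * np.sqrt(np.maximum(proj + 0.375 + sigma_e**2, 0.0))

        smooth = gaussian_filter(gat, 2.0)   # stand-in conventional denoiser
        sigma2 = (gat - smooth).var()        # global residual variance
        print(f"residual variance {sigma2:.3f} (ideally ~1 after the GAT)")
        # the final denoising pass would now use sigma2 instead of assuming 1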

  3. Bulk and stable isotopic compositions of carbonate minerals in Martian meteorite Allan Hills 84001: no proof of high formation temperature.

    PubMed

    Treiman, A H; Romanek, C S

    1998-07-01

    Understanding the origin of carbonate minerals in the Martian meteorite Allan Hills (ALH) 84001 is crucial to evaluating the hypothesis that they contain traces of ancient Martian life. Using arguments based on chemical equilibria among carbonates and fluids, an origin at >650 degrees C (inimical to life) has been proposed. However, the bulk and stable isotopic compositions of the carbonate minerals are open to multiple interpretations and so lend no particular support to a high-temperature origin. Other methods (possibly less direct) will have to be used to determine the formation temperature of the carbonates in ALH84001. PMID:11543073

  4. Bulk and Stable Isotopic Compositions of Carbonate Minerals in Martian Meteorite Allan Hills 84001: No Proof of High Formation Temperature

    NASA Technical Reports Server (NTRS)

    Treiman, Allan H.; Romanek, Christopher S.

    1998-01-01

    Understanding the origin of carbonate minerals in the Martian meteorite Allan Hills (ALH) 84001 is crucial to evaluating the hypothesis that they contain traces of ancient Martian life. Using arguments based on chemical equilibria among carbonates and fluids, an origin at greater than 650 C (inimical to life) has been proposed. However, the bulk and stable isotopic compositions of the carbonate minerals are open to multiple interpretations and so lend no particular support to a high-temperature origin. Other methods (possibly less direct) will have to be used to determine the formation temperature of the carbonates in ALH 84001.

  5. Integrating Variances into an Analytical Database

    NASA Technical Reports Server (NTRS)

    Sanchez, Carlos

    2010-01-01

    For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make them easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance had been bypassed many times already, and so the requirement may not really be needed but rather should be changed to allow the variance's conditions permanently. This project was not restricted to the design and development of the database system; I also worked on exporting the data from the database to a different format (e.g., Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part of what contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.

  6. Encoding of natural sounds by variance of the cortical local field potential.

    PubMed

    Ding, Nai; Simon, Jonathan Z; Shamma, Shihab A; David, Stephen V

    2016-06-01

    Neural encoding of sensory stimuli is typically studied by averaging neural signals across repetitions of the same stimulus. However, recent work has suggested that the variance of neural activity across repeated trials can also depend on sensory inputs. Here we characterize how intertrial variance of the local field potential (LFP) in primary auditory cortex of awake ferrets is affected by continuous natural sound stimuli. We find that natural sounds often suppress the intertrial variance of low-frequency LFP (<16 Hz). However, the amount of the variance reduction is not significantly correlated with the amplitude of the mean response at the same recording site. Moreover, the variance changes occur with longer latency than the mean response. Although the dynamics of the mean response and intertrial variance differ, spectro-temporal receptive field analysis reveals that changes in LFP variance have frequency tuning similar to multiunit activity at the same recording site, suggesting a local origin for changes in LFP variance. In summary, the spectral tuning of LFP intertrial variance and the absence of a correlation with the amplitude of the mean evoked LFP suggest substantial heterogeneity in the interaction between spontaneous and stimulus-driven activity across local neural populations in auditory cortex. PMID:26912594

  7. 10 CFR 1022.16 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... such an action) DOE shall document the emergency actions in accordance with NEPA procedures at 10 CFR... ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...

  8. 10 CFR 1022.16 - Variances.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... such an action) DOE shall document the emergency actions in accordance with NEPA procedures at 10 CFR... ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...

  9. 10 CFR 1022.16 - Variances.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... such an action) DOE shall document the emergency actions in accordance with NEPA procedures at 10 CFR... ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...

  10. 10 CFR 1022.16 - Variances.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... such an action) DOE shall document the emergency actions in accordance with NEPA procedures at 10 CFR... ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...

  11. 10 CFR 1022.16 - Variances.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... such an action) DOE shall document the emergency actions in accordance with NEPA procedures at 10 CFR... ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...

  12. Regression Calibration with Heteroscedastic Error Variance

    PubMed Central

    Spiegelman, Donna; Logan, Roger; Grove, Douglas

    2011-01-01

    The problem of covariate measurement error with heteroscedastic measurement error variance is considered. Standard regression calibration assumes that the measurement error has a homoscedastic measurement error variance. An estimator is proposed to correct regression coefficients for covariate measurement error with heteroscedastic variance. Point and interval estimates are derived. Validation data containing the gold standard must be available. This estimator is a closed-form correction of the uncorrected primary regression coefficients, which may be of logistic or Cox proportional hazards model form, and is closely related to the version of regression calibration developed by Rosner et al. (1990). The primary regression model can include multiple covariates measured without error. The use of these estimators is illustrated in two data sets, one taken from occupational epidemiology (the ACE study) and one taken from nutritional epidemiology (the Nurses’ Health Study). In both cases, although there was evidence of moderate heteroscedasticity, there was little difference in estimation or inference using this new procedure compared to standard regression calibration. It is shown theoretically that unless the relative risk is large or measurement error severe, standard regression calibration approximations will typically be adequate, even with moderate heteroscedasticity in the measurement error model variance. In a detailed simulation study, standard regression calibration performed either as well as or better than the new estimator. When the disease is rare and the errors normally distributed, or when measurement error is moderate, standard regression calibration remains the method of choice. PMID:22848187
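
    The core regression-calibration move can be shown in its simplest homoscedastic, linear-model form (the paper's contribution is the heteroscedastic extension, and its primary models are logistic or Cox): fit E[X|W] on a validation subset containing the gold standard, impute the exposure for everyone, and rerun the outcome regression. All data below are synthetic.

        import numpy as np

        rng = np.random.default_rng(5)
        n, beta = 2000, 0.7

        x = rng.normal(0.0, 1.0, n)              # true exposure (gold standard)
        w = x + rng.normal(0.0, 0.8, n)          # error-prone surrogate
        y = beta * x + rng.normal(0.0, 1.0, n)   # outcome

        val = rng.choice(n, 300, replace=False)  # validation subset with x observed

        b = np.polyfit(w[val], x[val], 1)        # calibration model E[X|W]
        x_hat = np.polyval(b, w)                 # imputed exposure for everyone

        naive = np.polyfit(w, y, 1)[0]           # attenuated slope
        calibrated = np.polyfit(x_hat, y, 1)[0]  # corrected slope
        print(f"true {beta}, naive {naive:.3f}, calibrated {calibrated:.3f}")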

  13. 18 CFR 1304.408 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 2 2010-04-01 2010-04-01 false Variances. 1304.408 Section 1304.408 Conservation of Power and Water Resources TENNESSEE VALLEY AUTHORITY APPROVAL OF... whether a proposed structure or other regulated activity would adversely impact navigation, flood...

  14. 18 CFR 1304.408 - Variances.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 18 Conservation of Power and Water Resources 2 2011-04-01 2011-04-01 false Variances. 1304.408 Section 1304.408 Conservation of Power and Water Resources TENNESSEE VALLEY AUTHORITY APPROVAL OF... whether a proposed structure or other regulated activity would adversely impact navigation, flood...

  15. Multiple Comparison Procedures when Population Variances Differ.

    ERIC Educational Resources Information Center

    Olejnik, Stephen; Lee, JaeShin

    A review of the literature on multiple comparison procedures suggests several alternative approaches for comparing means when population variances differ. These include: (1) the approach of P. A. Games and J. F. Howell (1976); (2) C. W. Dunnett's C confidence interval (1980); and (3) Dunnett's T3 solution (1980). These procedures control the…

  16. Understanding gender variance in children and adolescents.

    PubMed

    Simons, Lisa K; Leibowitz, Scott F; Hidalgo, Marco A

    2014-06-01

    Gender variance is an umbrella term used to describe gender identity, expression, or behavior that falls outside of culturally defined norms associated with a specific gender. In recent years, growing media coverage has heightened public awareness about gender variance in childhood and adolescence, and an increasing number of referrals to clinics specializing in care for gender-variant youth have been reported in the United States. Gender-variant expression, behavior, and identity may present in childhood and adolescence in a number of ways, and youth with gender variance have unique health needs. For those experiencing gender dysphoria, or distress encountered by the discordance between biological sex and gender identity, puberty is often an exceptionally challenging time. Pediatric primary care providers may be families' first resource for education and support, and they play a critical role in supporting the health of youth with gender variance by screening for psychosocial problems and health risks, referring for gender-specific mental health and medical care, and providing ongoing advocacy and support. PMID:24972420

  18. Videotape Project in Child Variance. Final Report.

    ERIC Educational Resources Information Center

    Morse, William C.; Smith, Judith M.

    The design, production, dissemination, and evaluation of a series of videotaped training packages designed to enable teachers, parents, and paraprofessionals to interpret child variance in light of personal and alternative perspectives of behavior are discussed. The goal of each package is to highlight unique contributions of different theoretical…

  19. Variance Anisotropy of Solar Wind fluctuations

    NASA Astrophysics Data System (ADS)

    Oughton, S.; Matthaeus, W. H.; Wan, M.; Osman, K.

    2013-12-01

    Solar wind observations at MHD scales indicate that the energy associated with velocity and magnetic field fluctuations transverse to the mean magnetic field is typically much larger than that associated with parallel fluctuations [e.g., 1]. This is often referred to as variance anisotropy. Various explanations for it have been suggested, including that the fluctuations are predominantly shear Alfven waves [1] and that turbulent dynamics leads to such states [e.g., 2]. Here we investigate the origin and strength of such variance anisotropies, using spectral method simulations of the compressible (polytropic) 3D MHD equations. We report on results from runs with initial conditions that are either (i) broadband turbulence or (ii) fluctuations polarized in the same sense as shear Alfven waves. The dependence of the variance anisotropy on the plasma beta and Mach number is examined [3], along with the timescale for any variance anisotropy to develop. Implications for solar wind fluctuations will be discussed. References: [1] Belcher, J. W. and Davis Jr., L. (1971), J. Geophys. Res., 76, 3534. [2] Matthaeus, W. H., Ghosh, S., Oughton, S. and Roberts, D. A. (1996), J. Geophys. Res., 101, 7619. [3] Smith, C. W., B. J. Vasquez and K. Hamilton (2006), J. Geophys. Res., 111, A09111.

  20. 29 CFR 1920.2 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) or 6(d) of the Williams-Steiger Occupational Safety and Health Act of 1970 (29 U.S.C. 655). The... under the Williams-Steiger Occupational Safety and Health Act of 1970, and any variance from §§ 1910.13... from the standard under both the Longshoremen's and Harbor Workers' Compensation Act and the...

  1. 7 CFR 205.290 - Temporary variances.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Temporary variances. 205.290 Section 205.290 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) ORGANIC FOODS PRODUCTION...

  2. Number variance for arithmetic hyperbolic surfaces

    NASA Astrophysics Data System (ADS)

    Luo, W.; Sarnak, P.

    1994-03-01

    We prove that the number variance for the spectrum of an arithmetic surface is highly nonrigid in part of the universal range. In fact it is close to having a Poisson behavior. This fact was discovered numerically by Schmit, Bogomolny, Georgeot and Giannoni. It has its origin in the high degeneracy of the length spectrum, first observed by Selberg.

  3. 7 CFR 205.290 - Temporary variances.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 3 2012-01-01 2012-01-01 false Temporary variances. 205.290 Section 205.290 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) ORGANIC FOODS PRODUCTION ACT PROVISIONS NATIONAL ORGANIC PROGRAM...

  4. 20 CFR 654.402 - Variances.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR SPECIAL RESPONSIBILITIES OF THE EMPLOYMENT SERVICE SYSTEM Housing for Agricultural Workers Purpose and Applicability § 654.402 Variances. (a... employment service complaint procedures set forth at §§ 658.421 (i) and (j), 658.422 and 658.423 of...

  5. 78 FR 14122 - Revocation of Permanent Variances

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-04

    ... OSHA's scaffolds standards for construction (77 FR 46948). Today's notice revoking the variances takes..., construction, and use of scaffolds (61 FR 46026). In the preamble to the final rule, OSHA stated that it was... for tank scaffolds under the general provisions of the final rule (see 61 FR 46033). In this...

  6. No evidence for anomalously low variance circles on the sky

    SciTech Connect

    Moss, Adam; Scott, Douglas; Zibin, James P. E-mail: dscott@phas.ubc.ca

    2011-04-01

    In a recent paper, Gurzadyan and Penrose claim to have found directions on the sky centred on which are circles of anomalously low variance in the cosmic microwave background (CMB). These features are presented as evidence for a particular picture of the very early Universe. We attempted to repeat the analysis of these authors, and we can indeed confirm that such variations do exist in the temperature variance for annuli around points in the data. However, we find that this variation is entirely expected in a sky which contains the usual CMB anisotropies. In other words, properly simulated Gaussian CMB data contain just the sorts of variations claimed. Gurzadyan and Penrose have not found evidence for pre-Big Bang phenomena, but have simply re-discovered that the CMB contains structure.

  7. A surface layer variance heat budget for ENSO

    NASA Astrophysics Data System (ADS)

    Boucharel, Julien; Timmermann, Axel; Santoso, Agus; England, Matthew H.; Jin, Fei-Fei; Balmaseda, Magdalena A.

    2015-05-01

    Characteristics of the El Niño-Southern Oscillation (ENSO), such as frequency, propagation, spatial extent, and amplitude, strongly depend on the climatological background state of the tropical Pacific. Multidecadal changes in the ocean mean state are hence likely to modulate ENSO properties. To better link background state variations with low-frequency amplitude changes of ENSO, we develop a diagnostic framework that determines locally the contributions of different physical feedback terms on the ocean surface temperature variance. Our analysis shows that multidecadal changes of ENSO variance originate from the delicate balance between the background-state-dependent positive thermocline feedback and the atmospheric damping of sea surface temperatures anomalies. The role of higher-order processes and atmospheric and oceanic nonlinearities is also discussed. The diagnostic tool developed here can be easily applied to other tropical ocean areas and climate phenomena.

  8. Analyses of joint variance related to voluntary whole-body movements performed in standing

    PubMed Central

    Freitas, Sandra M. S. F.; Scholz, John P.; Latash, Mark L.

    2010-01-01

    This article investigates two methodological issues arising from a recent study of center of mass (COM) positional stability during performance of whole-body targeting tasks (Freitas et al., 2006): (1) Can identical results be obtained with uncontrolled manifold (UCM) variance analysis when it is based on estimating the Jacobian using multiple linear regression (MLR) compared to using a typical formal geometric model? (2) Are kinematic synergies more related to stabilization of the instantaneous anterior-posterior position of the center of mass (COMAP) or of the center of pressure (COPAP)? UCM analysis was used to partition the variance of the joint configuration into a 'bad' variance, leading to COMAP or COPAP variability, and a 'good' variance, reflecting the use of motor abundance. Results indicated (1) nearly identical UCM results for both methods of Jacobian estimation; and (2) more 'good' and less 'bad' joint variance related to stability of the COPAP than of the COMAP position. The first result requires further investigation with more degrees of freedom, but suggests that when a formal geometric model is unavailable or overly complex, UCM analysis may be possible by estimating the Jacobian using MLR. Correct interpretation of the second result requires analysis of the singular values of the Jacobian for different performance variables, which indicate how a given amount of joint angle variance affects each performance variable. Thus, caution is required when interpreting differences in joint variance structure among various performance variables obtained by UCM analysis without first investigating how the different relationships captured by the Jacobian translate those variances into performance-level variance. PMID:20105441
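
    The MLR route can be sketched in a few lines: regress the performance variable on the joint angles to estimate the Jacobian, split joint-space deviations into the Jacobian's null space (the UCM) and its complement, and compare variance per degree of freedom in the two subspaces. Everything below is synthetic; with random angles the two variances are comparable, whereas a real synergy would show up as 'good' exceeding 'bad'.

        import numpy as np

        rng = np.random.default_rng(2)
        trials, k = 200, 7                           # trials x joint angles
        theta = rng.normal(0.0, 1.0, (trials, k))
        J_true = rng.normal(0.0, 1.0, (1, k))        # stand-in forward model
        com = theta @ J_true.T + rng.normal(0.0, 0.05, (trials, 1))

        # estimate the Jacobian by multiple linear regression
        dev = theta - theta.mean(0)
        J_hat, *_ = np.linalg.lstsq(dev, com - com.mean(0), rcond=None)
        J_hat = J_hat.T                              # 1 x k row vector

        # null space of J_hat spans the UCM; its complement is the 'bad' space
        _, _, Vt = np.linalg.svd(J_hat)
        v_ucm = ((dev @ Vt[1:].T) ** 2).sum() / ((k - 1) * trials)
        v_ort = ((dev @ Vt[:1].T) ** 2).sum() / (1 * trials)
        print(f"'good' (UCM) {v_ucm:.3f} vs 'bad' {v_ort:.3f} per DOF")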

  9. 42 CFR 456.521 - Conditions for granting variance requests.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time... submits concurrently— (1) A request for the variance that documents to his satisfaction that the facility is unable to meet the time requirements for which the variance is requested; and (2) A revised...

  10. Variance Reduction Factor of Nuclear Data for Integral Neutronics Parameters

    SciTech Connect

    Chiba, G. Tsuji, M.; Narabayashi, T.

    2015-01-15

    We propose a new quantity, the variance reduction factor, to identify nuclear data for which further improvements are required to reduce the uncertainties of target integral neutronics parameters. Important energy ranges can also be identified with this variance reduction factor. Variance reduction factors are calculated for several integral neutronics parameters, and their usefulness is demonstrated.

  11. Variant evolutionary trees under phenotypic variance.

    PubMed

    Nishimura, Kinya; Isoda, Yutaka

    2004-01-01

    Evolutionary branching, a coevolutionary phenomenon in which two or more distinctive traits develop from a single trait in a population, has been a focus of recent studies on adaptive dynamics. Previous studies revealed that trait variance is a minimum requirement for evolutionary branching but does not play an important role in shaping the resulting pattern of branching. Here we demonstrate that trait evolution can follow various branching paths, starting from an identical initial trait and arriving at different terminal traits, determined solely by changing the assumed trait variance. The key feature of this phenomenon is the topological configuration of the equilibria and the initial point in the manifold of dimorphism from which dimorphic branches develop. This suggests that the existing monomorphic or polymorphic set in a population is not a unique, inevitable consequence of an identical initial phenotype.

  12. PHD filtering with localised target number variance

    NASA Astrophysics Data System (ADS)

    Delande, Emmanuel; Houssineau, Jérémie; Clark, Daniel

    2013-05-01

    Mahler's Probability Hypothesis Density (PHD) filter, proposed in 2000, addresses the challenges of the multiple-target detection and tracking problem by propagating a mean density of the targets in any region of the state space. However, when retrieving some local evidence on the target presence becomes a critical component of a larger process - e.g. for sensor management purposes - the local target number is insufficient unless some confidence in the estimated number of targets can be provided as well. In this paper, we propose a first implementation of a PHD filter that also includes an estimate of the localised variance in target number following each update step; we then illustrate the advantage of the PHD filter with variance on simulated data from a multiple-target scenario.

  13. The Regents of the University of California, Petitioner, vs. Allan Bakke, Respondent. On Writ of Certiorari to the Supreme Court of California.

    ERIC Educational Resources Information Center

    Supreme Court of the U. S., Washington, DC.

    The main question of this case is whether Allan Bakke was denied the equal protection of the laws in contravention of the 14th Amendment, solely because of his race, as the result of a racial quota admission policy. A statement of the case which reviews pertinent data such as the admission procedure of the medical school, Bakke's interview and…

  14. Variance of sensory threshold measurements: discrimination of feigners from trustworthy performers.

    PubMed

    Yarnitsky, D; Sprecher, E; Tamir, A; Zaslansky, R; Hemli, J A

    1994-09-01

    Sensory threshold measurements are criticized as subjective and therefore not to be relied upon in clinical diagnostic practice, particularly when deliberate deception by the patient is suspected. In an attempt to devise a method which permits dependable sensory threshold interpretation, individual variability of thresholds was examined in normal and neuropathic subjects. Normals were also instructed to feign sensory impairment resulting from hypothetical injury. For each subject, a number of threshold readings were averaged, yielding individual means and variances. Feigning normal subjects evidenced a larger variance compared to trustworthy normal and neuropathic subjects. Thus, alertness to variance reinforces the psychophysical analysis: small variance values suggest trustworthy normal or pathological results, whereas large variance calls the interpreter's attention to feigned results or inattentive test performance. PMID:7807165
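
    The screening idea above fits in a few lines; the data layout and the variance cutoff below are assumptions for illustration, not the paper's normative values:

      import numpy as np

      readings = {
          "subj_A": [2.1, 2.0, 2.2, 2.1, 2.0],   # consistent performer
          "subj_B": [1.0, 4.5, 2.2, 6.0, 0.8],   # erratic -> suspicious
      }

      CUTOFF = 1.0  # in practice derived from normative data
      for subj, vals in readings.items():
          v = np.var(vals, ddof=1)
          flag = "check (large variance)" if v > CUTOFF else "plausible"
          print(f"{subj}: mean={np.mean(vals):.2f}, var={v:.2f} -> {flag}")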

  15. Genetic variance of tolerance and the toxicant threshold model.

    PubMed

    Tanaka, Yoshinari; Mano, Hiroyuki; Tatsuta, Haruki

    2012-04-01

    A statistical genetics method is presented for estimating the genetic variance (heritability) of tolerance to pollutants on the basis of a standard acute toxicity test conducted on several isofemale lines of a cladoceran species. To analyze the genetic variance of tolerance when the response is measured as a few discrete states (quantal endpoints), the authors applied the threshold character model of quantitative genetics to the threshold model separately developed in ecotoxicology. The integrated threshold model (toxicant threshold model) assumes that the response of a particular individual occurs at a threshold toxicant concentration and that the individual tolerance, characterized by the individual's threshold value, is determined by genetic and environmental factors. As a case study, the heritability of tolerance to p-nonylphenol in the cladoceran species Daphnia galeata was estimated using the maximum likelihood method and nested analysis of variance (ANOVA). Broad-sense heritability was estimated to be 0.199 ± 0.112 by the maximum likelihood method and 0.184 ± 0.089 by ANOVA; both results implied that the species examined has the potential to acquire tolerance to this substance through evolutionary change.
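
    The ANOVA route can be sketched for a continuous tolerance measure; this simplified example (synthetic data, one-way layout) illustrates the variance-component logic, not the paper's threshold-model likelihood for quantal endpoints:

      import numpy as np

      rng = np.random.default_rng(1)
      k_lines, n_per = 10, 12                      # isofemale lines, replicates
      line_eff = rng.normal(0.0, 0.5, k_lines)     # genetic (line) effects
      tol = line_eff[:, None] + rng.normal(0.0, 1.0, (k_lines, n_per))

      grand = tol.mean()
      ms_b = n_per * ((tol.mean(axis=1) - grand) ** 2).sum() / (k_lines - 1)
      ms_w = ((tol - tol.mean(axis=1, keepdims=True)) ** 2).sum() \
             / (k_lines * (n_per - 1))

      v_g = max((ms_b - ms_w) / n_per, 0.0)        # among-line component
      v_e = ms_w                                   # within-line component
      print(f"broad-sense H^2 = {v_g / (v_g + v_e):.3f}")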

  16. Variance of gene expression identifies altered network constraints in neurological disease.

    PubMed

    Mar, Jessica C; Matigian, Nicholas A; Mackay-Sim, Alan; Mellick, George D; Sue, Carolyn M; Silburn, Peter A; McGrath, John J; Quackenbush, John; Wells, Christine A

    2011-08-01

    Gene expression analysis has become a ubiquitous tool for studying a wide range of human diseases. In a typical analysis we compare distinct phenotypic groups and attempt to identify genes that are, on average, significantly different between them. Here we describe an innovative approach to the analysis of gene expression data, one that identifies differences in expression variance between groups as an informative metric of the group phenotype. We find that genes with different expression variance profiles are not randomly distributed across cell signaling networks. Genes with low expression variance, or higher constraint, are significantly more connected to other network members and tend to function as core members of signal transduction pathways. Genes with higher expression variance have fewer network connections and also tend to sit on the periphery of the cell. Using neural stem cells derived from patients suffering from schizophrenia (SZ) or Parkinson's disease (PD) and from a healthy control group, we find marked differences in expression variance in cell signaling pathways that shed new light on potential mechanisms associated with these diverse neurological disorders. In particular, we find that expression variance of core networks in the SZ patient group was considerably constrained, while in contrast the PD patient group demonstrated much greater variance than expected. One hypothesis is that diminished variance in SZ patients corresponds to an increased degree of constraint in these pathways and a corresponding reduction in robustness of the stem cell networks. These results underscore the role that variation plays in biological systems and suggest that analysis of expression variance is far more important in disease than previously recognized. Furthermore, modeling patterns of variability in gene expression could fundamentally alter the way in which we think about how cellular networks are affected by disease processes.

  17. A note on variance estimation in random effects meta-regression.

    PubMed

    Sidik, Kurex; Jonkman, Jeffrey N

    2005-01-01

    We discuss variance estimation for the parameter estimates in random-effects meta-regression. Because estimated weights are used in practice, the assumed or estimated covariance matrix used in meta-regression is not strictly correct, owing to possible errors in estimating the weights. This note therefore investigates a robust variance estimation approach for obtaining the variances of the parameter estimates in random-effects meta-regression, treating the assumed covariance matrix of the effect-measure variables as a working covariance matrix. Using an example of meta-analysis data from clinical trials of a vaccine, the robust variance estimation approach is illustrated in comparison with two other methods of variance estimation. A simulation study comparing the three methods in terms of bias and coverage probability is presented. We find that, despite the seeming suitability of the robust estimator for random-effects meta-regression, the improved variance estimator of Knapp and Hartung (2003) yields the best performance among the three estimators, and thus may provide the best protection against errors in the estimated weights.
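
    A hedged sketch of a sandwich-type robust variance estimator with inverse-variance working weights; it illustrates the general construction discussed above, not the exact estimator of this note or of Knapp and Hartung (2003):

      import numpy as np

      def robust_meta_regression(y, X, v):
          """y: effect sizes, X: (n, p) design, v: sampling variances."""
          W = np.diag(1.0 / v)                      # working weights
          bread = np.linalg.inv(X.T @ W @ X)
          beta = bread @ X.T @ W @ y
          e = y - X @ beta
          meat = X.T @ W @ np.diag(e ** 2) @ W @ X  # empirical middle term
          return beta, bread, bread @ meat @ bread  # model cov, robust cov

      rng = np.random.default_rng(2)
      n = 25
      x = rng.uniform(0, 1, n)
      X = np.column_stack([np.ones(n), x])
      v = rng.uniform(0.05, 0.3, n)
      y = 0.4 + 0.8 * x + rng.normal(0, np.sqrt(v + 0.1))  # tau^2 = 0.1
      beta, cov_model, cov_robust = robust_meta_regression(y, X, v)
      print("beta:", beta)
      print("model SEs: ", np.sqrt(np.diag(cov_model)))
      print("robust SEs:", np.sqrt(np.diag(cov_robust)))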

  18. Isotopic evidence for a terrestrial source of organic compounds found in martian meteorites Allan Hills 84001 and Elephant Moraine 79001.

    PubMed

    Jull, A J; Courtney, C; Jeffrey, D A; Beck, J W

    1998-01-16

    Stepped-heating experiments on martian meteorites Allan Hills 84001 (ALH84001) and Elephant Moraine 79001 (EETA79001) revealed low-temperature (200 to 430 degrees Celsius) fractions with a carbon isotopic composition delta13C between -22 and -33 per mil and a carbon-14 content that is 40 to 60 percent of that of modern terrestrial carbon, consistent with a terrestrial origin for most of the organic material. Intermediate-temperature (400 to 600 degrees Celsius) carbonate-rich fractions of ALH84001 have delta13C of +32 to +40 per mil with a low carbon-14 content, consistent with an extraterrestrial origin, whereas some of the carbonate fraction of EETA79001 is terrestrial. In addition, ALH84001 contains a small preterrestrial carbon component of unknown origin that combusts at intermediate temperatures. This component is likely a residual acid-insoluble carbonate or a more refractory organic phase. PMID:9430584

  19. Visual SLAM Using Variance Grid Maps

    NASA Technical Reports Server (NTRS)

    Howard, Andrew B.; Marks, Tim K.

    2011-01-01

    An algorithm denoted Gamma-SLAM performs further processing, in real time, of preprocessed digitized images acquired by a stereoscopic pair of electronic cameras aboard an off-road robotic ground vehicle to build accurate maps of the terrain and determine the location of the vehicle with respect to the maps. Part of the name of the algorithm reflects the fact that the process of building the maps and determining the location with respect to them is denoted simultaneous localization and mapping (SLAM). Most prior real-time SLAM algorithms have been limited in applicability to (1) systems equipped with scanning laser range finders as the primary sensors and (2) indoor environments (or relatively simply structured outdoor environments). The few prior vision-based SLAM algorithms have been feature-based, not suitable for real-time applications, and hence not suitable for autonomous navigation on irregularly structured terrain. The Gamma-SLAM algorithm incorporates two key innovations: visual odometry (in contradistinction to wheel odometry) is used to estimate the motion of the vehicle, and an elevation variance map (in contradistinction to an occupancy or an elevation map) is used to represent the terrain. The Gamma-SLAM algorithm makes use of a Rao-Blackwellized particle filter (RBPF) from Bayesian estimation theory for maintaining a distribution over poses and maps. The core idea of the RBPF approach is that the SLAM problem can be factored into two parts: (1) finding the distribution over robot trajectories, and (2) finding the map conditioned on any given trajectory. The factorization involves the use of a particle filter in which each particle encodes both a possible trajectory and a map conditioned on that trajectory. The base estimate of the trajectory is derived from visual odometry, and the map conditioned on that trajectory is a Cartesian grid of elevation variances. In comparison with traditional occupancy or elevation grid maps, the grid elevation variance
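
    The elevation-variance representation can be illustrated with a per-cell running variance maintained by Welford's algorithm; the class below is a sketch under assumed data, not the Gamma-SLAM implementation:

      import numpy as np

      class VarianceCell:
          """Running mean/variance of elevation samples in one grid cell."""
          def __init__(self):
              self.n, self.mean, self.m2 = 0, 0.0, 0.0

          def add(self, z):
              self.n += 1
              delta = z - self.mean
              self.mean += delta / self.n
              self.m2 += delta * (z - self.mean)

          @property
          def variance(self):
              return self.m2 / (self.n - 1) if self.n > 1 else float("inf")

      cell = VarianceCell()
      for z in np.random.default_rng(3).normal(1.5, 0.2, 50):  # stereo hits
          cell.add(z)
      print(f"n={cell.n}, mean={cell.mean:.3f}, var={cell.variance:.4f}")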

  20. AVATAR -- Automatic variance reduction in Monte Carlo calculations

    SciTech Connect

    Van Riper, K.A.; Urbatsch, T.J.; Soran, P.D.

    1997-05-01

    AVATAR (Automatic Variance And Time of Analysis Reduction), accessed through the graphical user interface application Justine, is a superset of MCNP that automatically invokes THREEDANT for a three-dimensional deterministic adjoint calculation on a mesh independent of the Monte Carlo geometry, calculates weight windows, and runs MCNP. Computational efficiency increases by a factor of 2 to 5 for a three-detector oil well logging tool model. Human efficiency increases dramatically, since AVATAR eliminates the need for deep intuition and hours of tedious handwork.

  1. Robust Variance Estimation with Dependent Effect Sizes: Practical Considerations Including a Software Tutorial in Stata and SPSS

    ERIC Educational Resources Information Center

    Tanner-Smith, Emily E.; Tipton, Elizabeth

    2014-01-01

    Methodologists have recently proposed robust variance estimation as one way to handle dependent effect sizes in meta-analysis. Software macros for robust variance estimation in meta-analysis are currently available for Stata (StataCorp LP, College Station, TX, USA) and SPSS (IBM, Armonk, NY, USA), yet there is little guidance for authors regarding…

  2. Analyzing the Effect of JPEG Compression on Local Variance of Image Intensity.

    PubMed

    Yang, Jianquan; Zhu, Guopu; Shi, Yun-Qing

    2016-06-01

    The local variance of image intensity is a typical measure of image smoothness. It has been extensively used, for example, to measure the visual saliency or to adjust the filtering strength in image processing and analysis. However, to the best of our knowledge, no analytical work has been reported about the effect of JPEG compression on image local variance. In this paper, a theoretical analysis on the variation of local variance caused by JPEG compression is presented. First, the expectation of intensity variance of 8×8 non-overlapping blocks in a JPEG image is derived. The expectation is determined by the Laplacian parameters of the discrete cosine transform coefficient distributions of the original image and the quantization step sizes used in the JPEG compression. Second, some interesting properties that describe the behavior of the local variance under different degrees of JPEG compression are discussed. Finally, both the simulation and the experiments are performed to verify our derivation and discussion. The theoretical analysis presented in this paper provides some new insights into the behavior of local variance under JPEG compression. Moreover, it has the potential to be used in some areas of image processing and analysis, such as image enhancement, image quality assessment, and image filtering. PMID:27093626

  4. Fringe biasing: A variance reduction technique for optically thick meshes

    SciTech Connect

    Smedley-Stevenson, R. P.

    2013-07-01

    Fringe biasing is a stratified sampling scheme applicable to Monte Carlo thermal radiation transport codes. The thermal emission source in optically thick cells is partitioned into separate contributions from the cell interiors (where the likelihood of the particles escaping the cells is virtually zero) and the 'fringe' regions close to the cell boundaries. Thermal emission in the cell interiors can now be modelled with fewer particles, the remaining particles being concentrated in the fringes so that they are more likely to contribute to the energy exchange between cells. Unlike other techniques for improving the efficiency in optically thick regions (such as random walk and discrete diffusion treatments), fringe biasing has the benefit of simplicity, as the associated changes are restricted to the sourcing routines, with the particle tracking routines unaffected. This paper presents an analysis of the potential for variance reduction achieved by employing the fringe biasing technique. The aim of this analysis is to guide the implementation of the technique in Monte Carlo thermal radiation codes, specifically to aid the choice of the fringe width and the proportion of particles allocated to the fringe (which are interrelated) in multi-dimensional simulations, and to confirm that the significant levels of variance reduction achieved in simulations can be understood by studying the behaviour for simple test cases. The variance reduction properties are studied for a single cell in a slab-geometry, purely absorbing medium, investigating the accuracy of the scalar flux and current tallies on one of the interfaces with the surrounding medium. (authors)

  5. Assessment of the genetic variance of late-onset Alzheimer's disease.

    PubMed

    Ridge, Perry G; Hoyt, Kaitlyn B; Boehme, Kevin; Mukherjee, Shubhabrata; Crane, Paul K; Haines, Jonathan L; Mayeux, Richard; Farrer, Lindsay A; Pericak-Vance, Margaret A; Schellenberg, Gerard D; Kauwe, John S K

    2016-05-01

    Alzheimer's disease (AD) is a complex genetic disorder with no effective treatments. More than 20 common markers associated with AD have been identified. Recently, several rare variants have been identified in Amyloid Precursor Protein (APP), Triggering Receptor Expressed On Myeloid Cells 2 (TREM2) and Unc-5 Netrin Receptor C (UNC5C) that affect risk for AD. Despite the many successes, the genetic architecture of AD remains unsolved. We used Genome-wide Complex Trait Analysis to (1) estimate the phenotypic variance explained by genetics; (2) calculate the genetic variance explained by known AD single nucleotide polymorphisms (SNPs); and (3) identify the genomic locations of variation that explain the remaining genetic variance. In total, 53.24% of phenotypic variance is explained by genetics, but known AD SNPs explain only 30.62% of the genetic variance. Of the unexplained genetic variance, approximately 41% is explained by unknown SNPs in regions adjacent to known AD SNPs, and the remainder lies outside these regions. PMID:27036079

  7. Discordance of DNA methylation variance between two accessible human tissues.

    PubMed

    Jiang, Ruiwei; Jones, Meaghan J; Chen, Edith; Neumann, Sarah M; Fraser, Hunter B; Miller, Gregory E; Kobor, Michael S

    2015-01-01

    Population epigenetic studies have been seeking to identify differences in DNA methylation between specific exposures, demographic factors, or diseases in accessible tissues, but relatively little is known about how inter-individual variability differs between these tissues. This study presents an analysis of DNA methylation differences between matched peripheral blood mononuclear cells (PBMCs) and buccal epithelial cells (BECs), the two most accessible tissues for population studies, in 998 promoter-located CpG sites. Specifically, we compared probe-wise DNA methylation variance and how this variance related to demographic factors across the two tissues. PBMCs had overall higher DNA methylation than BECs, and the two tissues tended to differ most at genomic regions of low CpG density. Furthermore, although both tissues showed appreciable probe-wise variability, the specific regions and magnitude of variability differed strongly between tissues. Lastly, through exploratory association analysis, we found indications of differential association of BECs and PBMCs with demographic variables. The work presented here offers insight into the variability of DNA methylation between individuals and across tissues and helps guide decisions on the suitability of buccal epithelial or peripheral blood mononuclear cells for the biological questions explored by epigenetic studies in human populations.

  8. Calculating bone-lead measurement variance.

    PubMed Central

    Todd, A C

    2000-01-01

    The technique of (109)Cd-based X-ray fluorescence (XRF) measurements of lead in bone is well established. A paper by some XRF researchers [Gordon CL, et al. The Reproducibility of (109)Cd-based X-ray Fluorescence Measurements of Bone Lead. Environ Health Perspect 102:690-694 (1994)] presented the currently practiced method for calculating the variance of an in vivo measurement once a calibration line has been established. This paper corrects typographical errors in the method published by those authors; presents a crude estimate of the measurement error that can be acquired without computational peak fitting programs; and draws attention to the measurement error attributable to covariance, an important feature in the construct of the currently accepted method that is flawed under certain circumstances. PMID:10811562
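
    The covariance contribution the paper highlights appears directly in first-order error propagation for a concentration read off a calibration line y = m*x + b; the sketch below uses illustrative numbers, not XRF calibration data:

      import numpy as np

      def invert_calibration(y0, var_y0, m, b, var_m, var_b, cov_mb):
          """x0 = (y0 - b)/m and its first-order (delta-method) variance."""
          x0 = (y0 - b) / m
          # partials: dx/dy0 = 1/m, dx/db = -1/m, dx/dm = -x0/m
          var_x0 = (var_y0 + var_b + x0**2 * var_m + 2.0 * x0 * cov_mb) / m**2
          return x0, var_x0

      x0, v = invert_calibration(y0=12.0, var_y0=0.4, m=2.0, b=1.0,
                                 var_m=0.01, var_b=0.09, cov_mb=-0.02)
      print(f"x0 = {x0:.2f} +/- {np.sqrt(v):.2f}")
      # Omitting cov_mb (often negative for a fitted line) would overstate
      # the variance here; under other calibrations it can understate it.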

  9. 42 CFR 456.522 - Content of request for variance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... travel time between the remote facility and each facility listed in paragraph (e) of this section; (f..., and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time... perform UR within the time requirements for which the variance is requested and its good faith efforts...

  10. Dynamics of mean-variance-skewness of cumulative crop yield impact temporal yield variance

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Production risk associated with cropping systems influences farmers’ decisions to adopt a new management practice or a production system. Cumulative yield (CY), temporal yield variance (TYV) and coefficient of variation (CV) were used to assess the risk associated with adopting combinations of new m...

  11. Regression between earthquake magnitudes having errors with known variances

    NASA Astrophysics Data System (ADS)

    Pujol, Jose

    2016-07-01

    Recent publications on the regression between earthquake magnitudes assume that both magnitudes are affected by error and that only the ratio of error variances is known. If X and Y represent observed magnitudes, and x and y represent the corresponding theoretical values, the problem is to find the a and b of the best-fit line y = ax + b. This problem has a closed solution only for homoscedastic errors (their variances are all equal for each of the two variables). The published solution was derived using a method that cannot provide a sum of squares of residuals, so it is not possible to compare the goodness of fit for different pairs of magnitudes; furthermore, that method does not provide expressions for the x and y. The least-squares method introduced here does not have these drawbacks. The two methods of solution result in the same equations for a and b. General properties of a that have been discussed in the literature but not proved, or proved only for particular cases, are derived here. A comparison of different expressions for the variances of a and b is provided. The paper also considers the statistical aspects of the ongoing debate regarding the prediction of y given X. Analysis of actual data from the literature shows that a new approach produces an average improvement of less than 0.1 magnitude units over the standard approach when applied to Mw vs. mb and Mw vs. MS regressions. This improvement is minor, within the typical error of Mw. Moreover, a test subset of 100 predicted magnitudes shows that the new approach results in magnitudes closer to the theoretically true magnitudes for only 65% of them; for the remaining 35%, the standard approach produces closer values. Therefore, the new approach does not always give the most accurate magnitude estimates.
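
    For the homoscedastic case, where the ratio delta = var(Y errors)/var(X errors) is known, the closed-form errors-in-variables (Deming) slope can be sketched as follows; the general weighted case treated in the paper requires iteration:

      import numpy as np

      def deming(X, Y, delta=1.0):
          xbar, ybar = X.mean(), Y.mean()
          sxx = ((X - xbar) ** 2).mean()
          syy = ((Y - ybar) ** 2).mean()
          sxy = ((X - xbar) * (Y - ybar)).mean()
          a = (syy - delta * sxx
               + np.sqrt((syy - delta * sxx) ** 2 + 4.0 * delta * sxy ** 2)) \
              / (2.0 * sxy)
          return a, ybar - a * xbar

      rng = np.random.default_rng(4)
      x_true = rng.uniform(4.0, 8.0, 100)             # "true" magnitudes
      X = x_true + rng.normal(0, 0.1, 100)            # both observed with error
      Y = 1.1 * x_true - 0.5 + rng.normal(0, 0.1, 100)
      print(deming(X, Y, delta=1.0))                  # approx (1.1, -0.5)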

  12. Variance of indoor radon concentration: Major influencing factors.

    PubMed

    Yarmoshenko, I; Vasilyev, A; Malinovsky, G; Bossew, P; Žunić, Z S; Onischenko, A; Zhukovsky, M

    2016-01-15

    Variance of radon concentration in the dwelling atmosphere is analysed with regard to geogenic and anthropogenic influencing factors. The analysis includes a review of 81 national and regional indoor radon surveys with varying sampling pattern, sample size, and duration of measurements, and detailed consideration of two regional surveys (Sverdlovsk oblast, Russia, and Niška Banja, Serbia). The analysis of the geometric standard deviation (GSD) revealed that the main factors influencing the dispersion of indoor radon concentration over a territory are as follows: area of the territory, sample size, characteristics of the measurement technique, the radon geogenic potential, building construction characteristics, and living habits. As shown for Sverdlovsk oblast and the town of Niška Banja, the dispersion as quantified by the GSD is reduced by restricting to certain levels of these control factors. Application of the developed approach to characterization of the world population's radon exposure is discussed. PMID:26409145

  13. Low variance at large scales of WMAP 9 year data

    SciTech Connect

    Gruppuso, A.; Finelli, F.; Rosa, A. De; Mandolesi, N.; Natoli, P.; Paci, F.; Molinari, D. E-mail: natoli@fe.infn.it E-mail: finelli@iasfbo.inaf.it E-mail: derosa@iasfbo.inaf.it

    2013-07-01

    We use an optimal estimator to study the variance of the WMAP 9 CMB field at low resolution, in both temperature and polarization. Employing realistic Monte Carlo simulations, we find statistically significant deviations from the ΛCDM model in several sky cuts for the temperature field. For the masks considered in this analysis, which cover at least 54% of the sky, the WMAP 9 CMB sky and ΛCDM are incompatible at ≥ 99.94% C.L. at large angles (> 5°). We find instead no anomaly in polarization. As a byproduct of our analysis, we present new, optimal estimates of the WMAP 9 CMB angular power spectra from the WMAP 9 year data at low resolution.

  14. From means and variances to persons and patterns

    PubMed Central

    Grice, James W.

    2015-01-01

    A novel approach for conceptualizing and analyzing data from psychological studies is presented and discussed. This approach is centered on model building in an effort to explicate the structures and processes believed to generate a set of observations. These models therefore go beyond the variable-based path models in use today, which are limiting with regard to the types of inferences psychologists can draw from their research. In terms of analysis, the newer approach replaces traditional aggregate statistics such as means, variances, and covariances with methods of pattern detection and analysis. While these methods are person-centered and do not require parametric assumptions, they are both demanding and rigorous. They also provide psychologists with the information needed to draw the primary inference they often wish to make from their research; namely, the inference to the best explanation. PMID:26257672

  16. Heritable Environmental Variance Causes Nonlinear Relationships Between Traits: Application to Birth Weight and Stillbirth of Pigs

    PubMed Central

    Mulder, Herman A.; Hill, William G.; Knol, Egbert F.

    2015-01-01

    There is recent evidence from laboratory experiments and analysis of livestock populations that not only the phenotype itself, but also its environmental variance, is under genetic control. Little is known about the relationships between the environmental variance of one trait and mean levels of other traits, however. A genetic covariance between these is expected to lead to nonlinearity between them, for example between birth weight and survival of piglets, where animals of extreme weights have lower survival. The objectives were to derive this nonlinear relationship analytically using multiple regression and apply it to data on piglet birth weight and survival. This study provides a framework to study such nonlinear relationships caused by genetic covariance of environmental variance of one trait and the mean of the other. It is shown that positions of phenotypic and genetic optima may differ and that genetic relationships are likely to be more curvilinear than phenotypic relationships, dependent mainly on the environmental correlation between these traits. Genetic correlations may change if the population means change relative to the optimal phenotypes. Data of piglet birth weight and survival show that the presence of nonlinearity can be partly explained by the genetic covariance between environmental variance of birth weight and survival. The framework developed can be used to assess effects of artificial and natural selection on means and variances of traits and the statistical method presented can be used to estimate trade-offs between environmental variance of one trait and mean levels of others. PMID:25631318

  17. A Comparative Study of Tests for Homogeneity of Variances with Application to DNA Methylation Data

    PubMed Central

    Li, Xuan; Qiu, Weiliang; Morrow, Jarrett; DeMeo, Dawn L.; Weiss, Scott T.; Fu, Yuejiao; Wang, Xiaogang

    2015-01-01

    Variable DNA methylation has been associated with cancers and complex diseases. Researchers have identified many DNA methylation markers that have different mean methylation levels between diseased subjects and normal subjects. Recently, researchers found that DNA methylation markers with different variabilities between subject groups could also have biological meaning. In this article, we aimed to help researchers choose the right test of equal variance in DNA methylation data analysis. We performed systematic simulation studies and a real data analysis to compare the performances of 7 equal-variance tests, including 2 tests recently proposed in the DNA methylation analysis literature. Our results showed that the Brown-Forsythe test and trimmed-mean-based Levene's test had good performance in testing for equality of variance in our simulation studies and real data analyses. Our results also showed that outlier profiles could be biologically very important. PMID:26683022
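
    Both recommended tests are available in SciPy; a small sketch with synthetic beta-distributed values standing in for methylation proportions (equal means, unequal spreads):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      group_a = rng.beta(5.0, 5.0, 80)   # mean 0.5, smaller variance
      group_b = rng.beta(2.0, 2.0, 80)   # mean 0.5, larger variance

      w_bf, p_bf = stats.levene(group_a, group_b, center='median')
      w_tr, p_tr = stats.levene(group_a, group_b, center='trimmed',
                                proportiontocut=0.1)
      print(f"Brown-Forsythe: W={w_bf:.2f}, p={p_bf:.4f}")
      print(f"Trimmed Levene: W={w_tr:.2f}, p={p_tr:.4f}")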

  18. Predicting Risk Sensitivity in Humans and Lower Animals: Risk as Variance or Coefficient of Variation

    ERIC Educational Resources Information Center

    Weber, Elke U.; Shafir, Sharoni; Blais, Ann-Renee

    2004-01-01

    This article examines the statistical determinants of risk preference. In a meta-analysis of animal risk preference (foraging birds and insects), the coefficient of variation (CV), a measure of risk per unit of return, predicts choices far better than outcome variance, the risk measure of normative models. In a meta-analysis of human risk…
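
    The contrast between the two risk indices fits in a few lines (the reward values are illustrative):

      import numpy as np

      option_a = np.array([8.0, 12.0])    # mean 10
      option_b = np.array([80.0, 120.0])  # same gamble scaled by 10, mean 100

      for name, r in [("A", option_a), ("B", option_b)]:
          mean, sd = r.mean(), r.std()
          print(f"{name}: variance={sd**2:8.1f}  CV={sd / mean:.2f}")
      # Variance calls B far riskier; the CV rates the two gambles equally
      # risky, which is what the meta-analyses find better predicts choice.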

  19. Allan Brooks, naturalist and artist (1869-1946): the travails of an early twentieth century wildlife illustrator in North America.

    PubMed

    Winearls, Joan

    2008-01-01

    British by birth, Allan Cyril Brooks (1869-1946) emigrated to Canada in the 1880s and became one of the most important North American bird illustrators of the first half of the twentieth century. Brooks was one of the leading ornithologists and wildlife collectors of the time; he corresponded extensively with other ornithologists and supplied specimens to many major North American museums. From the 1890s on he hoped to support himself by painting birds and mammals, but this was not possible in Canada at that time, and he was forced to turn to American sources for illustration commissions. His work can be compared with that of his contemporary, the leading American bird painter Louis Agassiz Fuertes (1874-1927); there are striking similarities and differences in their careers. This paper discusses the work of a talented, self-taught wildlife artist working in a North American milieu, his difficulties and successes in a newly developing field, and his quest for Canadian recognition.

  20. Fine-Grained Rims in the Allan Hills 81002 and Lewis Cliff 90500 CM2 Meteorites: Their Origin and Modification

    NASA Technical Reports Server (NTRS)

    Hua, X.; Wang, J.; Buseck, P. R.

    2002-01-01

    Antarctic CM meteorites Allan Hills (ALH) 81002 and Lewis Cliff (LEW) 90500 contain abundant fine-grained rims (FGRs) that surround a variety of coarse-grained objects. FGRs from both meteorites have similar compositions and petrographic features, independent of their enclosed objects. The FGRs are chemically homogeneous at the 10 μm scale for major and minor elements and at the 25 μm scale for trace elements. They display accretionary features and contain large amounts of volatiles, presumably water. They are depleted in Ca, Mn, and S but enriched in P. All FGRs show a slightly fractionated rare earth element (REE) pattern, with enrichments of Gd and Yb and depletion of Er; Gd is twice as abundant as Er. Our results indicate that these FGRs are not genetically related to their enclosed cores. They were sampled from a reservoir of homogeneously mixed dust prior to accretion to their parent body, and the rim materials subsequently experienced aqueous alteration under identical conditions. Based on their mineral, textural, and especially chemical similarities, we conclude that ALH 81002 and LEW 90500 likely have a similar or identical source.

  1. Linear minimum variance filters applied to carrier tracking

    NASA Technical Reports Server (NTRS)

    Gustafson, D. E.; Speyer, J. L.

    1976-01-01

    A new approach is taken to the problem of tracking a fixed amplitude signal with a Brownian-motion phase process. Classically, a first-order phase-lock loop (PLL) is used; here, the problem is treated via estimation of the quadrature signal components. In this space, the state dynamics are linear with white multiplicative noise. Therefore, linear minimum-variance filters, which have a particularly simple mechanization, are suggested. The resulting error dynamics are linear at any signal/noise ratio, unlike the classical PLL. During synchronization, and above threshold, this filter with constant gains degrades by 3 per cent in output rms phase error with respect to the classical loop. However, up to 80 per cent of the maximum possible noise improvement is obtained below threshold, where the classical loop is nonoptimum, as demonstrated by a Monte Carlo analysis. Filter mechanizations are presented for both carrier and baseband operation.
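
    A rough sketch of the idea under assumed noise levels and a constant gain: track the quadrature components with a linear update and read the phase as atan2(Q, I). This illustrates the linear error dynamics, not the paper's exact filter design:

      import numpy as np

      rng = np.random.default_rng(6)
      n, gain = 2000, 0.05
      phase = np.cumsum(rng.normal(0.0, 0.01, n))        # Brownian phase
      z = (np.column_stack([np.cos(phase), np.sin(phase)])
           + rng.normal(0.0, 0.5, (n, 2)))               # noisy I/Q samples

      est = np.zeros(2)
      err = np.empty(n)
      for k in range(n):
          est = (1.0 - gain) * est + gain * z[k]         # constant-gain update
          diff = np.arctan2(est[1], est[0]) - phase[k]
          err[k] = np.angle(np.exp(1j * diff))           # wrap to [-pi, pi]

      print(f"rms phase error: {np.sqrt(np.mean(err[500:] ** 2)):.3f} rad")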

  2. Estimating the encounter rate variance in distance sampling

    USGS Publications Warehouse

    Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.

    2009-01-01

    The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. ?? 2008, The International Biometric Society.
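
    The systematic-design issue can be illustrated by comparing a naive simple-random-sample variance with a successive-difference estimator that, in the spirit of poststratification, pairs neighbouring lines; this sketches the idea rather than the estimators of the paper:

      import numpy as np

      rng = np.random.default_rng(7)
      k = 40                                   # systematic transects
      trend = np.linspace(2.0, 10.0, k)        # strong spatial trend
      r = rng.poisson(trend) / 1.0             # encounter rate, unit lengths

      var_srs = r.var(ddof=1) / k              # treats lines as an SRS
      d = np.diff(r)                           # neighbours share the trend
      var_sd = (d ** 2).sum() / (2.0 * (k - 1) * k)

      print(f"SRS-style var(mean rate):        {var_srs:.4f}")
      print(f"successive-difference estimate:  {var_sd:.4f}  (trend removed)")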

  3. Multiperiod Mean-Variance Portfolio Optimization via Market Cloning

    SciTech Connect

    Ankirchner, Stefan; Dermoune, Azzouz

    2011-08-15

    The problem of finding the mean-variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. To find a solution, we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity, we are able to solve the original mean-variance problem.

  4. 40 CFR 190.11 - Variances for unusual operations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... PROTECTION PROGRAMS ENVIRONMENTAL RADIATION PROTECTION STANDARDS FOR NUCLEAR POWER OPERATIONS Environmental Standards for the Uranium Fuel Cycle § 190.11 Variances for unusual operations. The standards specified...

  5. Estimation of Variance Components of Quantitative Traits in Inbred Populations

    PubMed Central

    Abney, Mark; McPeek, Mary Sara; Ober, Carole

    2000-01-01

    Summary Use of variance-component estimation for mapping of quantitative-trait loci in humans is a subject of great current interest. When only trait values, not genotypic information, are considered, variance-component estimation can also be used to estimate heritability of a quantitative trait. Inbred pedigrees present special challenges for variance-component estimation. First, there are more variance components to be estimated in the inbred case, even for a relatively simple model including additive, dominance, and environmental effects. Second, more identity coefficients need to be calculated from an inbred pedigree in order to perform the estimation, and these are computationally more difficult to obtain in the inbred than in the outbred case. As a result, inbreeding effects have generally been ignored in practice. We describe here the calculation of identity coefficients and estimation of variance components of quantitative traits in large inbred pedigrees, using the example of HDL in the Hutterites. We use a multivariate normal model for the genetic effects, extending the central-limit theorem of Lange to allow for both inbreeding and dominance under the assumptions of our variance-component model. We use simulated examples to give an indication of under what conditions one has the power to detect the additional variance components and to examine their impact on variance-component estimation. We discuss the implications for mapping and heritability estimation by use of variance components in inbred populations. PMID:10677322

  6. The phenotypic variance gradient – a novel concept

    PubMed Central

    Pertoldi, Cino; Bundgaard, Jørgen; Loeschcke, Volker; Barker, James Stuart Flinton

    2014-01-01

    Evolutionary ecologists commonly use reaction norms, which show the range of phenotypes produced by a set of genotypes exposed to different environments, to quantify the degree of phenotypic variance and the magnitude of plasticity of morphometric and life-history traits. Significant differences among the values of the slopes of the reaction norms are interpreted as significant differences in phenotypic plasticity, whereas significant differences among phenotypic variances (variance or coefficient of variation) are interpreted as differences in the degree of developmental instability or canalization. We highlight some potential problems with this approach to quantifying phenotypic variance and suggest a novel and more informative way to plot reaction norms: namely “a plot of log (variance) on the y-axis versus log (mean) on the x-axis, with a reference line added”. This approach gives an immediate impression of how the degree of phenotypic variance varies across an environmental gradient, taking into account the consequences of the scaling effect of the variance with the mean. The evolutionary implications of the variation in the degree of phenotypic variance, which we call a “phenotypic variance gradient”, are discussed together with its potential interactions with variation in the degree of phenotypic plasticity and canalization. PMID:25540685
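
    The proposed plot reduces to a log-log regression against a reference slope; a synthetic sketch (a slope of 2 corresponds to a constant coefficient of variation, i.e. pure scaling of variance with the mean):

      import numpy as np

      rng = np.random.default_rng(8)
      env_means = np.array([5.0, 8.0, 12.0, 18.0, 27.0])  # trait mean per env
      samples = [rng.normal(m, 0.15 * m, 100) for m in env_means]  # fixed CV

      log_mean = np.log([s.mean() for s in samples])
      log_var = np.log([s.var(ddof=1) for s in samples])
      slope, _ = np.polyfit(log_mean, log_var, 1)
      # Departures from the reference slope would signal real changes in
      # developmental instability rather than mean-variance scaling.
      print(f"fitted slope: {slope:.2f} (reference: 2 for constant CV)")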

  7. Water vapor variance measurements using a Raman lidar

    NASA Technical Reports Server (NTRS)

    Evans, K.; Melfi, S. H.; Ferrare, R.; Whiteman, D.

    1992-01-01

    Because of the importance of atmospheric water vapor variance, we have analyzed data from the NASA/Goddard Raman lidar to obtain temporal scales of water vapor mixing ratio as a function of altitude over observation periods extending to 12 hours. The ground-based lidar measures water vapor mixing ratio from near the earth's surface to an altitude of 9-10 km. Moisture profiles are acquired once every minute with 75 m vertical resolution. Data at each 75 m altitude level can be displayed as a function of time from the beginning to the end of an observation period. These time sequences have been spectrally analyzed using a fast Fourier transform technique. An example of such a temporal spectrum, obtained between 00:22 and 10:29 UT on December 6, 1991, is shown in the figure. The curve shown in the figure represents the spectral average of data from 11 height levels centered on an altitude of 1 km (1 ± 0.375 km). The spectrum shows a decrease in energy density with frequency which generally follows a -5/3 power law over the spectral interval 3×10^-5 to 4×10^-3 Hz. The flattening of the spectrum for frequencies greater than 6×10^-3 Hz is most likely a measure of instrumental noise. Spectra like that shown in the figure have been calculated for other altitudes and show changes in spectral features with height. Spectral analyses versus height have been performed for several observation periods, demonstrating changes in water vapor mixing ratio spectral character from one observation period to the next. The combination of these temporal spectra with independent measurements of winds aloft provides an opportunity to infer spatial scales of moisture variance.
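
    The spectral analysis described can be sketched with a periodogram and a log-log slope fit; synthetic minute-resolution data stand in for the lidar mixing-ratio series:

      import numpy as np

      rng = np.random.default_rng(9)
      n, dt = 600, 60.0                        # one profile per minute, 10 h
      freqs = np.fft.rfftfreq(n, d=dt)

      # Build a series with an approximate -5/3 power spectrum plus noise.
      amp = np.zeros_like(freqs)
      amp[1:] = freqs[1:] ** (-5.0 / 6.0)      # power ~ amp^2 ~ f^(-5/3)
      phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
      series = np.fft.irfft(amp * np.exp(1j * phases), n)
      series += 0.01 * rng.normal(size=n)

      power = np.abs(np.fft.rfft(series)) ** 2
      sel = slice(1, n // 4)                   # skip DC and the noise floor
      slope, _ = np.polyfit(np.log(freqs[sel]), np.log(power[sel]), 1)
      print(f"fitted spectral slope: {slope:.2f} (inertial range: -5/3)")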

  8. 29 CFR 1904.38 - Variances from the recordkeeping rule.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... process your variance petition. (i) The Assistant Secretary will offer your employees and their authorized... the facts or conduct that may warrant revocation of your variance; and (ii) Provide you, your employees, and authorized employee representatives with an opportunity to participate in the...

  9. Characterizing the evolution of genetic variance using genetic covariance tensors.

    PubMed

    Hine, Emma; Chenoweth, Stephen F; Rundle, Howard D; Blows, Mark W

    2009-06-12

    Determining how genetic variance changes under selection in natural populations has proved to be a very resilient problem in evolutionary genetics. In the same way that understanding the availability of genetic variance within populations requires the simultaneous consideration of genetic variance in sets of functionally related traits, determining how genetic variance changes under selection in natural populations will require ascertaining how genetic variance-covariance (G) matrices evolve. Here, we develop a geometric framework using higher-order tensors, which enables the empirical characterization of how G matrices have diverged among populations. We then show how divergence among populations in genetic covariance structure can then be associated with divergence in selection acting on those traits using key equations from evolutionary theory. Using estimates of G matrices of eight male sexually selected traits from nine geographical populations of Drosophila serrata, we show that much of the divergence in genetic variance occurred in a single trait combination, a conclusion that could not have been reached by examining variation among the individual elements of the nine G matrices. Divergence in G was primarily in the direction of the major axes of genetic variance within populations, suggesting that genetic drift may be a major cause of divergence in genetic variance among these populations.

  10. A Study of Variance Estimation Methods. Working Paper Series.

    ERIC Educational Resources Information Center

    Zhang, Fan; Weng, Stanley; Salvucci, Sameena; Hu, Ming-xiu

    This working paper contains reports of five studies of variance estimation methods. The first, An Empirical Study of Poststratified Estimator, by Fan Zhang uses data from the National Household Education Survey to illustrate use of poststratified estimation. The second paper, BRR Variance Estimation Using BPLX Hadamard Procedure, by Stanley Weng…

  11. Conceptual Complexity and the Bias/Variance Tradeoff

    ERIC Educational Resources Information Center

    Briscoe, Erica; Feldman, Jacob

    2011-01-01

    In this paper we propose that the conventional dichotomy between exemplar-based and prototype-based models of concept learning is helpfully viewed as an instance of what is known in the statistical learning literature as the "bias/variance tradeoff". The bias/variance tradeoff can be thought of as a sliding scale that modulates how closely any…
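
    The tradeoff itself is easy to reproduce numerically; a sketch with polynomial fits of increasing flexibility (degrees and noise level are arbitrary choices):

      import numpy as np

      rng = np.random.default_rng(10)
      x = np.linspace(0.0, 1.0, 30)
      y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, x.size)
      x_test = np.linspace(0.0, 1.0, 200)
      y_true = np.sin(2 * np.pi * x_test)

      for degree in (1, 4, 12):   # underfit (bias) .. overfit (variance)
          coeffs = np.polyfit(x, y, degree)
          mse = np.mean((np.polyval(coeffs, x_test) - y_true) ** 2)
          print(f"degree {degree:2d}: test MSE = {mse:.3f}")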

  12. 29 CFR 1905.5 - Effect of variances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 5 2010-07-01 2010-07-01 false Effect of variances. 1905.5 Section 1905.5 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR RULES OF PRACTICE FOR VARIANCES, LIMITATIONS, VARIATIONS, TOLERANCES, AND EXEMPTIONS UNDER THE...

  13. 41 CFR 50-204.1a - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... same circumstances in which variances may be granted under sections 6(b)(6)(A) or 6(d) of the Williams... the Williams-Steiger Occupational Safety and Health Act of 1970, and any variance from a standard... the Williams-Steiger Occupational Safety and Health Act of 1970. In accordance with the...

  14. 36 CFR 27.4 - Variances and exceptions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 36 Parks, Forests, and Public Property 1 2012-07-01 2012-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...

  15. 36 CFR 27.4 - Variances and exceptions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 36 Parks, Forests, and Public Property 1 2013-07-01 2013-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...

  16. 36 CFR 27.4 - Variances and exceptions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 36 Parks, Forests, and Public Property 1 2011-07-01 2011-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...

  17. 36 CFR 27.4 - Variances and exceptions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 1 2010-07-01 2010-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...

  18. 36 CFR 27.4 - Variances and exceptions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 36 Parks, Forests, and Public Property 1 2014-07-01 2014-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...

  19. Evaluation of Mean and Variance Integrals without Integration

    ERIC Educational Resources Information Center

    Joarder, A. H.; Omar, M. H.

    2007-01-01

    The mean and variance of some continuous distributions, in particular the exponentially decreasing probability distribution and the normal distribution, are considered. Since they involve integration by parts, many students do not feel comfortable. In this note, a technique is demonstrated for deriving mean and variance through differential…
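
    The record is truncated, but a standard instance of the trick, differentiating a normalization integral with respect to its parameter instead of integrating by parts, runs as follows for the exponential density (an assumed example):

      \[
      \int_0^\infty e^{-\lambda x}\,dx = \frac{1}{\lambda}
      \quad\Longrightarrow\quad
      \int_0^\infty x\,e^{-\lambda x}\,dx = \frac{1}{\lambda^{2}},
      \qquad
      \int_0^\infty x^{2}e^{-\lambda x}\,dx = \frac{2}{\lambda^{3}},
      \]
      by differentiating once and twice with respect to \(\lambda\). Hence
      \[
      E[X] = \lambda\cdot\frac{1}{\lambda^{2}} = \frac{1}{\lambda},
      \qquad
      E[X^{2}] = \lambda\cdot\frac{2}{\lambda^{3}} = \frac{2}{\lambda^{2}},
      \qquad
      \operatorname{Var}(X) = \frac{2}{\lambda^{2}} - \frac{1}{\lambda^{2}}
                            = \frac{1}{\lambda^{2}}.
      \]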

  20. Productive Failure in Learning the Concept of Variance

    ERIC Educational Resources Information Center

    Kapur, Manu

    2012-01-01

    In a study with ninth-grade mathematics students on learning the concept of variance, students experienced either direct instruction (DI) or productive failure (PF), wherein they were first asked to generate a quantitative index for variance without any guidance before receiving DI on the concept. Whereas DI students relied only on the canonical…

  1. 40 CFR 141.4 - Variances and exemptions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 24 2012-07-01 2012-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions....

  2. 40 CFR 141.4 - Variances and exemptions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 23 2014-07-01 2014-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions....

  3. 40 CFR 141.4 - Variances and exemptions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 24 2013-07-01 2013-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions....

  4. A Variance Explanation Paradox: When a Little Is a Lot.

    ERIC Educational Resources Information Center

    Abelson, Robert P.

    1985-01-01

    Argues that percent variance explanation is a misleading index of the influence of systematic factors in cases where there are processes by which individually tiny influences cumulate to produce meaningful outcomes. An example is the computation of percentage of variance in batting performance among major league baseball players. (Author/CB)
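
    The cumulation argument can be made concrete with assumed batting numbers:

      # Skill explains a tiny share of the variance of one at-bat, yet
      # cumulates to a meaningful gap over a season (illustrative figures).
      p_hi, p_lo = 0.280, 0.270
      p_bar = (p_hi + p_lo) / 2.0

      var_between = ((p_hi - p_bar) ** 2 + (p_lo - p_bar) ** 2) / 2.0
      var_total = p_bar * (1.0 - p_bar)        # Bernoulli outcome variance
      print(f"variance explained per at-bat: {var_between / var_total:.5f}")

      print(f"expected extra hits over 500 at-bats: {500 * (p_hi - p_lo):.0f}")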

  5. On the Endogeneity of the Mean-Variance Efficient Frontier.

    ERIC Educational Resources Information Center

    Somerville, R. A.; O'Connell, Paul G. J.

    2002-01-01

    Explains that the endogeneity of the efficient frontier in the mean-variance model of portfolio selection is commonly obscured in portfolio selection literature and in widely used textbooks. Demonstrates endogeneity and discusses the impact of parameter changes on the mean-variance efficient frontier and on the beta coefficients of individual…

  6. Olivine in Martian Meteorite Allan Hills 84001: Evidence for a High-Temperature Origin and Implications for Signs of Life

    NASA Technical Reports Server (NTRS)

    Shearer, C. K.; Leshin, L. A.; Adcock, C. T.

    1999-01-01

    Olivine from Martian meteorite Allan Hills (ALH) 84001 occurs as clusters within orthopyroxene adjacent to fractures containing disrupted carbonate globules and feldspathic shock glass. The inclusions are irregular in shape and range in size from approximately 40 μm to submicrometer. Some of the inclusions are elongate and boudinage-like. The olivine grains are in sharp contact with the enclosing orthopyroxene and often contain small inclusions of chromite. The olivine exhibits a very limited range of composition, from Fo65 to Fo66 (n = 25). The δ18O values of the olivine and orthopyroxene analyzed by ion microprobe range from +4.3 to +5.3‰ and are indistinguishable from each other within analytical uncertainty. The mineral chemistries, O-isotopic data, and textural relationships indicate that the olivine inclusions were produced at a temperature greater than 800 °C. It is unlikely that the olivines formed during the same event that gave rise to the carbonates in ALH 84001, which have more elevated and variable δ18O values and were probably formed from fluids that were not in isotopic equilibrium with the orthopyroxene or olivine. The reactions most likely instrumental in the formation of olivine are either the dehydration of hydrous silicates that formed during carbonate precipitation or the reduction of orthopyroxene and spinel. If the olivine was formed by either reaction during a post-carbonate heating event, the implications are profound with regard to the interpretations of McKay et al. Owing to the low diffusion rates in carbonates, this rapid, high-temperature event would have resulted in the preservation of the fine-scale carbonate zoning while partially devolatilizing select carbonate compositions on a submicrometer scale. This may have resulted in the formation of the minute magnetite grains that McKay et al. attributed to biogenic activity.

  7. Magnesian anorthositic granulites in lunar meteorites Allan Hills A81005 and Dhofar 309: Geochemistry and global significance

    NASA Astrophysics Data System (ADS)

    Treiman, Allan H.; Maloy, Amy K.; Shearer, Charles K.; Gross, Juliane

    2010-02-01

    Fragments of magnesian anorthositic granulite are found in the lunar highlands meteorites Allan Hills (ALH) A81005 and Dhofar (Dho) 309. Five analyzed clasts of meteoritic magnesian anorthositic granulite have Mg' [molar Mg/(Mg+Fe)] = 81-87; FeO ~ 5 wt%; Al2O3 ~ 22 wt%; rare earth element abundances ~ 0.5-2×CI (except Eu ~ 10×CI); and low Ni and Co in a non-chondritic ratio. The clasts have nearly identical chemical compositions, even though their host meteorites formed at different places on the Moon. These magnesian anorthositic granulites are distinct from other highlands materials in their unique combination of mineral proportions, Mg', REE abundances and patterns, Ti/Sm ratio, and Sc/Sm ratio. Their Mg' is too high for a close relationship to ferroan anorthosites, or for them to have formed as flotation cumulates from the lunar magma ocean. Compositions of these magnesian anorthositic granulites cannot be modeled as mixtures of, or fractionates from, known lunar rocks. However, compositions of lunar highlands meteorites can be represented as mixtures of magnesian anorthositic granulite, ferroan anorthosite, mare basalt, and KREEP. Meteoritic magnesian anorthositic granulite is a good candidate for the magnesian highlands component inferred from Apollo highland impactites: magnesian, feldspathic, and REE-poor. Bulk compositions of meteoritic magnesian anorthositic granulites are comparable to those inferred for parts of the lunar farside (the Feldspathic Highlands Terrane): ~4.5 wt% FeO; ~28 wt% Al2O3; and Th < 1 ppm. Thus, magnesian anorthositic granulite may be a widespread and abundant component of the lunar highlands.

  8. Utility functions predict variance and skewness risk preferences in monkeys.

    PubMed

    Genest, Wilfried; Stauffer, William R; Schultz, Wolfram

    2016-07-26

    Utility is the fundamental variable thought to underlie economic choices. In particular, utility functions are believed to reflect preferences toward risk, a key decision variable in many real-life situations. To assess the validity of utility representations, it is therefore important to examine risk preferences. In turn, this approach requires formal definitions of risk. A standard approach is to focus on the variance of reward distributions (variance-risk). In this study, we also examined a form of risk related to the skewness of reward distributions (skewness-risk). Thus, we tested the extent to which empirically derived utility functions predicted preferences for variance-risk and skewness-risk in macaques. The expected utilities calculated for various symmetrical and skewed gambles served to define formally the direction of stochastic dominance between gambles. In direct choices, the animals' preferences followed both second-order (variance) and third-order (skewness) stochastic dominance. Specifically, for gambles with different variance but identical expected values (EVs), the monkeys preferred high-variance gambles at low EVs and low-variance gambles at high EVs; in gambles with different skewness but identical EVs and variances, the animals preferred positively over symmetrical and negatively skewed gambles in a strongly transitive fashion. Thus, the utility functions predicted the animals' preferences for variance-risk and skewness-risk. Using these well-defined forms of risk, this study shows that monkeys' choices conform to the internal reward valuations suggested by their utility functions. This result implies a representation of utility in monkeys that accounts for both variance-risk and skewness-risk preferences. PMID:27402743

  9. Variance After-Effects Distort Risk Perception in Humans.

    PubMed

    Payzan-LeNestour, Elise; Balleine, Bernard W; Berrada, Tony; Pearson, Joel

    2016-06-01

    In many contexts, decision-making requires an accurate representation of outcome variance, otherwise known as "risk" in economics. Conventional economic theory assumes this representation to be perfect, thereby focusing on risk preferences rather than risk perception per se [1-3] (but see [4]). However, humans often misrepresent their physical environment. Perhaps the most striking of such misrepresentations are the many well-known sensory after-effects, which most commonly involve visual properties, such as color, contrast, size, and motion. For example, viewing downward motion of a waterfall induces the anomalous biased experience of upward motion during subsequent viewing of static rocks to the side [5]. Given that after-effects are pervasive, occurring across a wide range of time horizons [6] and stimulus dimensions (including properties such as face perception [7, 8], gender [9], and numerosity [10]), and that some evidence exists that neurons show adaptation to variance in the sole visual feature of motion [11], we were interested in assessing whether after-effects distort variance perception in humans. We found that perceived variance is decreased after prolonged exposure to high variance and increased after exposure to low variance within a number of different visual representations of variance. We demonstrate that these after-effects occur across very different visual representations of variance, suggesting that these effects are not sensory, but operate at a high (cognitive) level of information processing. These results suggest, therefore, that variance constitutes an independent cognitive property and that prolonged exposure to extreme variance distorts risk perception, a fundamental challenge for economic theory and practice. PMID:27161500

  10. Variance in the chemical composition of dry beans determined from UV spectral fingerprints

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Nine varieties of dry beans representing 5 market classes were grown in 3 states (Maryland, Michigan, and Nebraska) and sub-samples were collected for each variety (row composites from each plot). Aqueous methanol extracts were analyzed in triplicate by UV spectrophotometry. Analysis of variance-p...

  11. A Nonparametric Test for Homogeneity of Variances: Application to GPAs of Students across Academic Majors

    ERIC Educational Resources Information Center

    Bakir, Saad T.

    2010-01-01

    We propose a nonparametric (or distribution-free) procedure for testing the equality of several population variances (or scale parameters). The proposed test is a modification of Bakir's (1989, Commun. Statist., Simul-Comp., 18, 757-775) analysis of means by ranks (ANOMR) procedure for testing the equality of several population means. A proof is…

  12. Budgeting and controllable cost variances. The case of multiple diagnoses, multiple services, and multiple resources.

    PubMed

    Broyles, R W; Lay, C M

    1982-12-01

    This paper examines an unfavorable cost variance in an institution which employs multiple resources to provide stay specific and ancillary services to patients presenting multiple diagnoses. It partitions the difference between actual and expected costs into components that are the responsibility of an identifiable individual or group of individuals. The analysis demonstrates that the components comprising an unfavorable cost variance are attributable to factor prices, the use of real resources, the mix of patients, and the composition of care provided by the institution. In addition, the interactive effects of these factors are also identified. PMID:7183731

  13. Continuous-Time Mean-Variance Portfolio Selection with Random Horizon

    SciTech Connect

    Yu, Zhiyong

    2013-12-15

    This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right.

  14. Variances of the components and magnitude of the polar heliospheric magnetic field

    NASA Technical Reports Server (NTRS)

    Balogh, A.; Horbury, T. S.; Forsyth, R. J.; Smith, E. J.

    1995-01-01

    The heliolatitude dependences of the variances in the components and the magnitude of the heliospheric magnetic field have been analysed, using the Ulysses magnetic field observations from close to the ecliptic plane to 80° southern solar latitude. The normalized variances in the components of the field increased significantly (by a factor of about 5) as Ulysses entered the purely polar flows from the southern coronal hole. At the same time, there was at most a small increase in the variance of the field magnitude. The analysis of the different components indicates that the power in the fluctuations is not isotropically distributed: most of the power is in the components of the field transverse to the radial direction. Examining the variances calculated over different time scales from minutes to hours shows that the anisotropy of the field variances is different on different scales, indicating the influence of the two distinct populations of fluctuations in the polar solar wind which have been previously identified. We discuss these results in terms of evolutionary, dynamic processes as a function of heliocentric distance and as a function of the large-scale geometry of the magnetic field associated with the polar coronal hole.

  15. Combining metabolomic non-targeted GC×GC-ToF-MS analysis and chemometric ASCA-based study of variances to assess dietary influence on type 2 diabetes development in a mouse model.

    PubMed

    Ly-Verdú, Saray; Gröger, Thomas Maximilian; Arteaga-Salas, Jose Manuel; Brandmaier, Stefan; Kahle, Melanie; Neschen, Susanne; Hrabě de Angelis, Martin; Zimmermann, Ralf

    2015-01-01

    Insulin resistance (IR) lies at the origin of type 2 diabetes. It induces an initial compensatory insulin secretion until insulin exhaustion and subsequent excessive levels of glucose (hyperglycemia). A high-calorie diet is a major risk factor contributing to the development of this metabolic disease. For this study, a time-course experiment was designed that consisted of two groups of mice. The aim of this design was to reproduce the dietary conditions that parallel the progress of IR over time. The first group was fed a high-fatty-acid diet for several weeks, followed by 1 week of low-fatty-acid intake, while the second group was fed a low-fatty-acid diet during the entire experiment. The metabolomic fingerprint of C3HeB/FeJ mice liver tissue extracts was determined by means of two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-ToF-MS). This article addresses the application of ANOVA-simultaneous component analysis (ASCA) to the resulting metabolomic profile. Combining hyphenated high-throughput analytical techniques with multivariate chemometric methodology in metabolomic analysis makes it possible to investigate the sources of variability in the data related to each experimental factor of the study design (defined as time, diet and individual). The contribution of the diet factor to the dissimilarities between the samples appeared to be predominant over the time factor contribution. Nevertheless, there is a significant contribution of the time-diet interaction factor. Thus, evaluating the influences of the factors separately, as is done in classical statistical methods, may lead to inaccurate interpretation of the data, preventing achievement of consistent biological conclusions.

  16. Robust variance estimation with dependent effect sizes: practical considerations including a software tutorial in Stata and SPSS.

    PubMed

    Tanner-Smith, Emily E; Tipton, Elizabeth

    2014-03-01

    Methodologists have recently proposed robust variance estimation as one way to handle dependent effect sizes in meta-analysis. Software macros for robust variance estimation in meta-analysis are currently available for Stata (StataCorp LP, College Station, TX, USA) and SPSS (IBM, Armonk, NY, USA), yet there is little guidance for authors regarding the practical application and implementation of those macros. This paper provides a brief tutorial on the implementation of the Stata and SPSS macros and discusses practical issues meta-analysts should consider when estimating meta-regression models with robust variance estimates. Two example databases are used in the tutorial to illustrate the use of meta-analysis with robust variance estimates.
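
    The estimator at the heart of such macros is the cluster-robust "sandwich" form; written generically (standard robust-variance-estimation notation, not copied from the tutorial), with studies j, design matrices X_j, weight matrices W_j, and residual vectors e_j:

        V_R = \Big(\textstyle\sum_j X_j^{\top} W_j X_j\Big)^{-1}
              \Big(\textstyle\sum_j X_j^{\top} W_j e_j e_j^{\top} W_j X_j\Big)
              \Big(\textstyle\sum_j X_j^{\top} W_j X_j\Big)^{-1}.

    Because e_j e_j^{\top} is estimated within each study, the resulting standard errors remain valid under arbitrary dependence of effect sizes within studies.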

  17. Employing components-of-variance to evaluate forensic breath test instruments.

    PubMed

    Gullberg, Rod G

    2008-03-01

    The evaluation of breath alcohol instruments for forensic suitability generally includes the assessment of accuracy, precision, linearity, blood/breath comparisons, etc. Although relevant and important, these methods fail to evaluate other important analytical and biological components related to measurement variability. An experimental design comparing different instruments measuring replicate breath samples from several subjects is presented here. Three volunteers provided n = 10 breath samples into each of six different instruments within an 18-minute period. Two-way analysis of variance was employed, which quantified the between-instrument effect and the subject/instrument interaction. Variance contributions were also determined for the analytical and biological components. Significant between-instrument effects and subject/instrument interactions were observed. The biological component of total variance ranged from 56% to 98% among all subject-instrument combinations. Such a design can help quantify the influence of breath sampling parameters and optimize those parameters to reduce total measurement variability and enhance overall forensic confidence.
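
    A minimal sketch of the components-of-variance computation described above, assuming a balanced subjects × instruments design with replicate measurements; the array shapes and the simulated numbers are illustrative, not the paper's data:

        import numpy as np

        def variance_components(y):
            """Method-of-moments variance components for a balanced two-way
            random-effects ANOVA with replication; y has shape (a, b, n):
            a subjects x b instruments x n replicate breath samples."""
            a, b, n = y.shape
            grand = y.mean()
            mean_a = y.mean(axis=(1, 2))    # subject means
            mean_b = y.mean(axis=(0, 2))    # instrument means
            mean_ab = y.mean(axis=2)        # cell means
            ms_a = b * n * np.sum((mean_a - grand) ** 2) / (a - 1)
            ms_b = a * n * np.sum((mean_b - grand) ** 2) / (b - 1)
            ms_ab = n * np.sum((mean_ab - mean_a[:, None]
                                - mean_b[None, :] + grand) ** 2) / ((a - 1) * (b - 1))
            ms_e = np.sum((y - mean_ab[:, :, None]) ** 2) / (a * b * (n - 1))
            # Solve the expected-mean-square equations (clipped at zero)
            var_e = ms_e                                   # analytical (within-cell) variance
            var_ab = max((ms_ab - ms_e) / n, 0.0)          # subject/instrument interaction
            var_b = max((ms_b - ms_ab) / (a * n), 0.0)     # between-instrument effect
            var_a = max((ms_a - ms_ab) / (b * n), 0.0)     # biological (subject) variance
            return var_a, var_b, var_ab, var_e

        # Hypothetical data: 3 subjects x 6 instruments x 10 replicates
        rng = np.random.default_rng(0)
        y = (0.08 + rng.normal(0, 0.004, (3, 1, 1)) + rng.normal(0, 0.002, (1, 6, 1))
             + rng.normal(0, 0.001, (3, 6, 1)) + rng.normal(0, 0.003, (3, 6, 10)))
        print(variance_components(y))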

  18. Reduction of variance in measurements of average metabolite concentration in anatomically-defined brain regions

    NASA Astrophysics Data System (ADS)

    Larsen, Ryan J.; Newman, Michael; Nikolaidis, Aki

    2016-11-01

    Multiple methods have been proposed for using Magnetic Resonance Spectroscopy Imaging (MRSI) to measure representative metabolite concentrations of anatomically-defined brain regions. Generally these methods require spectral analysis, quantitation of the signal, and reconciliation with anatomical brain regions. However, to simplify processing pipelines, it is practical to only include those corrections that significantly improve data quality. Of particular importance for cross-sectional studies is knowledge about how much each correction lowers the inter-subject variance of the measurement, thereby increasing statistical power. Here we use a data set of 72 subjects to calculate the reduction in inter-subject variance produced by several corrections that are commonly used to process MRSI data. Our results demonstrate that significant reductions of variance can be achieved by performing water scaling, accounting for tissue type, and integrating MRSI data over anatomical regions rather than simply assigning MRSI voxels with anatomical region labels.

  1. 40 CFR 260.33 - Procedures for variances from classification as a solid waste, for variances to be classified as...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260.33 Section 260.33 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES... from classification as a solid waste, for variances to be classified as a boiler, or for...

  2. 40 CFR 260.33 - Procedures for variances from classification as a solid waste, for variances to be classified as...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260.33 Section 260.33 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES... from classification as a solid waste, for variances to be classified as a boiler, or for...

  3. 40 CFR 260.33 - Procedures for variances from classification as a solid waste, for variances to be classified as...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...

  4. 40 CFR 260.33 - Procedures for variances from classification as a solid waste, for variances to be classified as...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...

  5. 40 CFR 260.33 - Procedures for variances from classification as a solid waste, for variances to be classified as...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...

  6. Global Gravity Wave Variances from Aura MLS: Characteristics and Interpretation

    NASA Technical Reports Server (NTRS)

    Wu, Dong L.; Eckermann, Stephen D.

    2008-01-01

    The gravity wave (GW)-resolving capabilities of 118-GHz saturated thermal radiances acquired throughout the stratosphere by the Microwave Limb Sounder (MLS) on the Aura satellite are investigated and initial results presented. Because the saturated (optically thick) radiances resolve GW perturbations from a given altitude at different horizontal locations, variances are evaluated at 12 pressure altitudes between 21 and 51 km using the 40 saturated radiances found at the bottom of each limb scan. Forward modeling simulations show that these variances are controlled mostly by GWs with vertical wavelengths λz ≳ 5 km and horizontal along-track wavelengths λy ~ 100-200 km. The tilted cigar-shaped three-dimensional weighting functions yield highly selective responses to GWs of high intrinsic frequency that propagate toward the instrument. The latter property is used to infer the net meridional component of GW propagation by differencing the variances acquired from ascending (A) and descending (D) orbits. Because of improved vertical resolution and sensitivity, Aura MLS GW variances are 5-8 times larger than those from the Upper Atmosphere Research Satellite (UARS) MLS. Like UARS MLS variances, monthly-mean Aura MLS variances in January and July 2005 are enhanced when local background wind speeds are large, due largely to GW visibility effects. Zonal asymmetries in variance maps reveal enhanced GW activity at high latitudes due to forcing by flow over major mountain ranges and at tropical and subtropical latitudes due to enhanced deep convective generation as inferred from contemporaneous MLS cloud-ice data. At 21-28-km altitude (heights not measured by the UARS MLS), GW variance in the tropics is systematically enhanced and shows clear variations with the phase of the quasi-biennial oscillation, in general agreement with GW temperature variances derived from radiosonde, rocketsonde, and limb-scan vertical profiles.

  7. Comparison of multiplicative heterogeneous variance adjustment models for genetic evaluations.

    PubMed

    Márkus, Sz; Mäntysaari, E A; Strandén, I; Eriksson, J-Å; Lidauer, M H

    2014-06-01

    Two heterogeneous variance adjustment methods and two variance models were compared in a simulation study. The method used for heterogeneous variance adjustment in the Nordic test-day model, which is a multiplicative method based on Meuwissen (J. Dairy Sci., 79, 1996, 310), was compared with a restricted multiplicative method where the fixed effects were not scaled. Both methods were tested with two different variance models, one with a herd-year and the other with a herd-year-month random effect. The simulation study was built on two field data sets from Swedish Red dairy cattle herds. For both data sets, 200 herds with test-day observations over a 12-year period were sampled. For one data set, herds were sampled randomly, while for the other, each herd was required to have at least 10 first-calving cows per year. The simulations supported the applicability of both methods and models, but the multiplicative mixed model was more sensitive in the case of small strata sizes. Estimation of variance components for the variance models resulted in different parameter estimates, depending on the applied heterogeneous variance adjustment method and variance model combination. Our analyses showed that the assumption of a first-order autoregressive correlation structure between random-effect levels is reasonable when within-herd heterogeneity is modelled by year classes, but less appropriate for within-herd heterogeneity by month classes. Of the studied alternatives, the multiplicative method and a variance model with a random herd-year effect were found most suitable for the Nordic test-day model for dairy cattle evaluation.

  8. Video-based and interference-free axial force detection and analysis for optical tweezers

    NASA Astrophysics Data System (ADS)

    Knust, Sebastian; Spiering, Andre; Vieker, Henning; Beyer, André; Gölzhäuser, Armin; Tönsing, Katja; Sischka, Andy; Anselmetti, Dario

    2012-10-01

    For measuring the minute forces exerted on single molecules during controlled translocation through nanopores with sub-piconewton precision, we have developed a video-based axial force detection and analysis system for optical tweezers. Since our detection system is equipped with a standard and versatile CCD video camera with a limited bandwidth offering operation at moderate light illumination with minimal sample heating, we integrated Allan variance analysis for trap stiffness calibration. Upon manipulating a microbead in the vicinity of a weakly reflecting surface with simultaneous axial force detection, interference effects have to be considered and minimized. We measured and analyzed the backscattering light properties of polystyrene and silica microbeads with different diameters and propose distinct and optimized experimental configurations (microbead material and diameter) for minimal light backscattering and virtually interference-free microbead position detection. As a proof of principle, we investigated the nanopore threading forces of a single dsDNA strand attached to a microbead with an overall force resolution of ±0.5 pN at a sample rate of 123 Hz.
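
    Since this record relies on Allan variance analysis for trap stiffness calibration, a generic overlapping Allan deviation routine is sketched below (not the authors' code; the function name, the position array, and the reuse of the record's 123 Hz sample rate are illustrative). In tweezers calibration the averaging time at which the deviation is minimal is the usual choice of optimal measurement time:

        import numpy as np

        def allan_deviation(x, fs):
            """Overlapping Allan deviation of a uniformly sampled signal x
            (e.g., bead position) acquired at sample rate fs in Hz.
            Returns octave-spaced averaging times tau and sigma(tau)."""
            x = np.asarray(x, dtype=float)
            n = x.size
            csum = np.concatenate(([0.0], np.cumsum(x)))
            taus, adevs = [], []
            m = 1
            while 2 * m <= n:
                means = (csum[m:] - csum[:-m]) / m   # overlapping block means
                d = means[m:] - means[:-m]           # adjacent-block differences
                taus.append(m / fs)
                adevs.append(np.sqrt(0.5 * np.mean(d ** 2)))
                m *= 2
            return np.array(taus), np.array(adevs)

        # position = ... (bead trajectory from the camera)
        # taus, adev = allan_deviation(position, fs=123.0)
        # tau_opt = taus[np.argmin(adev)]   # averaging time with minimal deviation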

  9. Variance Function Partially Linear Single-Index Models

    PubMed Central

    LIAN, HENG; LIANG, HUA; CARROLL, RAYMOND J.

    2014-01-01

    We consider heteroscedastic regression models where the mean function is a partially linear single index model and the variance function depends upon a generalized partially linear single index model. We do not insist that the variance function depend only upon the mean function, as happens in the classical generalized partially linear single index model. We develop efficient and practical estimation methods for the variance function and for the mean function. Asymptotic theory for the parametric and nonparametric parts of the model is developed. Simulations illustrate the results. An empirical example involving ozone levels is used to further illustrate the results, and is shown to be a case where the variance function does not depend upon the mean function. PMID:25642139

  10. 40 CFR 190.11 - Variances for unusual operations.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Standards for the Uranium Fuel Cycle § 190.11 Variances for unusual operations. The standards specified in... interest, and (b) Information is promptly made a matter of public record delineating the nature of...

  11. 40 CFR 59.509 - Can I get a variance?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) NATIONAL VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a)...

  12. 40 CFR 59.509 - Can I get a variance?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) NATIONAL VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a)...

  13. 40 CFR 59.509 - Can I get a variance?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) NATIONAL VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a)...

  14. 40 CFR 59.509 - Can I get a variance?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) NATIONAL VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a)...

  15. 40 CFR 59.509 - Can I get a variance?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) NATIONAL VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a)...

  16. Thermoluminescence survey of 12 meteorites collected by the European 1988 Antarctic meteorite expedition to Allan Hills and the importance of acid washing for thermoluminescence sensitivity measurements

    NASA Technical Reports Server (NTRS)

    Benoit, P. H.; Sears, H.; Sears, D. W. G.

    1991-01-01

    Natural and induced thermoluminescence (TL) data are reported for 12 meteorites recovered from the Allan Hills region of Antarctica by the European field party during the 1988/1989 field season. The samples include one with extremely high natural TL, ALH88035, suggestive of exposure to unusually high radiation doses (i.e., low degrees of shielding), and one, ALH88034, whose low natural TL suggests reheating within the last 100,000 years. The remainder have natural TL values suggestive of terrestrial ages similar to those of other meteorites from Allan Hills. ALH88015 (L6) has induced TL data suggestive of intense shock. TL sensitivities of these meteorites are generally lower than observed falls of their petrologic types, as is also observed for Antarctic meteorites in general. Acid-washing experiments indicate that this is solely the result of terrestrial weathering rather than a nonterrestrial Antarctic-non-Antarctic difference. However, other TL parameters, such as natural TL and induced peak temperature-width, are unchanged by acid washing and are sensitive indicators of a meteorite's metamorphic and recent radiation history.

  17. Variance Based Measure for Optimization of Parametric Realignment Algorithms

    PubMed Central

    Mehring, Carsten

    2016-01-01

    Neuronal responses to sensory stimuli or neuronal responses related to behaviour are often extracted by averaging neuronal activity over a large number of experimental trials. Such trial-averaging is carried out to reduce noise and to diminish the influence of other signals unrelated to the corresponding stimulus or behaviour. However, if the recorded neuronal responses are jittered in time with respect to the corresponding stimulus or behaviour, averaging over trials may distort the estimation of the underlying neuronal response. Temporal jitter between single-trial neural responses can be partially or completely removed using realignment algorithms. Here, we present a measure, named difference of time-averaged variance (dTAV), which can be used to evaluate the performance of a realignment algorithm without knowing the internal triggers of neural responses. Using simulated data, we show that using dTAV to optimize the parameter values for an established parametric realignment algorithm improved its efficacy and, therefore, reduced the jitter of neuronal responses. By removing the jitter more effectively and, therefore, enabling more accurate estimation of neuronal responses, dTAV can improve analysis and interpretation of the neural responses. PMID:27159490

  18. A nitrogen and argon stable isotope study of Allan Hills 84001: implications for the evolution of the Martian atmosphere.

    PubMed

    Grady, M M; Wright, I P; Pillinger, C T

    1998-07-01

    The abundances and isotopic compositions of N and Ar have been measured by stepped combustion of the Allan Hills 84001 (ALH 84001) Martian orthopyroxenite. Material described as shocked is N-poor ([N] approximately 0.34 ppm; delta 15N approximately +23 per mil), although during stepped combustion 15N-enriched N (delta 15N approximately +143 per mil) is released in a narrow temperature interval between 700 degrees C and 800 degrees C (along with 13C-enriched C (delta 13C approximately +19 per mil) and 40Ar). Cosmogenic species are found to be negligible at this temperature; thus, the isotopically heavy component is identified, in part, as Martian atmospheric gas trapped relatively recently in the history of ALH 84001. The N and Ar data show that ALH 84001 contains species from the Martian lithosphere, a component interpreted as ancient trapped atmosphere (in addition to the modern atmospheric species), and excess 40Ar from K decay. Deconvolution of radiogenic 40Ar from other Ar components, on the basis of end-member 36Ar/14N and 40Ar/36Ar ratios, has enabled calculation of a K-Ar age for ALH 84001 of 3.5-4.6 Ga, depending on assumed K abundance. If the component believed to be Martian palaeoatmosphere was introduced to ALH 84001 at the time the K-Ar age was set, then the composition of the atmosphere at this time is constrained to: delta 15N > or = +200 per mil, 40Ar/36Ar < or = 3000 and 36Ar/14N > or = 17 x 10(-5). In terms of the petrogenetic history of the meteorite, ALH 84001 crystallised soon after differentiation of the planet, may have been shocked and thermally metamorphosed in an early period of bombardment, and was then subjected to a second event. This later process did not reset the K-Ar system but perhaps was responsible for introducing (recent) atmospheric gases into ALH 84001; and it might mark the time at which ALH 84001 suffered fluid alteration resulting in the formation of the plagioclase and carbonate mineral assemblages. PMID:11543078

  19. Extreme metamorphism in a firn core from the Allan Hills, Antarctica, as an analogue for glacial conditions

    NASA Astrophysics Data System (ADS)

    Dadic, Ruzica; Schneebeli, Martin; Bertler, Nancy; Schwikowski, Margit; Matzl, Margret

    2015-04-01

    Understanding processes in near-zero accumulation areas can help to better understand the ranges of isotopic composition in ice cores, particularly during ice ages, when accumulation rates were lower than today. Snow metamorphism is a primary driver of the transition from snow to ice and can be accompanied by altered isotopic compositions and chemical species concentrations. High-degree snow metamorphism, which results in major structural changes, is little studied but has been identified in certain places in Antarctica. Here we report on a 5-m firn core collected adjacent to a blue-ice field in the Allan Hills, Antarctica. We determined the physical properties of the snow using computed tomography (micro-CT) and measured the isotopic composition of δD and δ18O, as well as 210Pb activity. The core shows a high degree of snow metamorphism and an exponential decrease in specific surface area (SSA) with depth, but no clear densification. The micro-CT measurements show a homogeneous and stable structure throughout the entire core, with obvious erosion features near the surface, where high-resolution data are available. The observed firn structure is likely caused by a combination of unique depositional and post-depositional processes. The defining depositional process is impact deposition under high winds with a high initial density. The defining post-depositional processes are a) increased moisture transport due to forced ventilation and high winds and b) decades of temperature-gradient-driven metamorphic growth in the near surface due to prolonged exposure to seasonal temperature cycling. Both post-depositional processes are enhanced in low-accumulation regions, where snow stays close to the surface for a long time. We observe an irregular signal in δD and δ18O that does not follow the stratigraphic sequence. The isotopic signal is likely caused by the same post-depositional processes that are responsible for the firn structure and that are driven by local climate.

  1. A multicomb variance reduction scheme for Monte Carlo semiconductor simulators

    SciTech Connect

    Gray, M.G.; Booth, T.E.; Kwan, T.J.T.; Snell, C.M.

    1998-04-01

    The authors adapt a multicomb variance reduction technique used in neutral particle transport to Monte Carlo microelectronic device modeling. They implement the method in a two-dimensional (2-D) MOSFET device simulator and demonstrate its effectiveness in the study of hot electron effects. The simulations show that the statistical variance of hot electrons is significantly reduced with minimal computational cost. The method is efficient, versatile, and easy to implement in existing device simulators.
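
    The comb idea can be sketched in a few lines: weighted particles are resampled into equal-weight copies by laying equally spaced "teeth" over the cumulative weight with a single random offset. This is a single-comb sketch under simplified assumptions; the paper's multicomb scheme and its transport-specific details are not reproduced here, and all names are illustrative:

        import numpy as np

        def comb_resample(weights, m, rng):
            """Resample weighted particles into m equal-weight copies with one
            stochastic comb. Returns the index of the parent particle for each
            of the m survivors; each survivor then carries weight sum(w)/m.
            The expected copy count of particle i is m*w[i]/sum(w), so the
            estimator stays unbiased."""
            w = np.asarray(weights, dtype=float)
            spacing = w.sum() / m
            teeth = rng.uniform(0.0, spacing) + spacing * np.arange(m)
            return np.searchsorted(np.cumsum(w), teeth)

        rng = np.random.default_rng(1)
        parents = comb_resample([0.05, 1.2, 0.6, 3.1], m=8, rng=rng)
        print(parents)   # heavy particles appear several times, light ones may vanish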

  2. Variance estimation for systematic designs in spatial surveys.

    PubMed

    Fewster, R M

    2011-12-01

    In spatial surveys for estimating the density of objects in a survey region, systematic designs will generally yield lower variance than random designs. However, estimating the systematic variance is well known to be a difficult problem. Existing methods tend to overestimate the variance, so although the variance is genuinely reduced, it is over-reported, and the gain from the more efficient design is lost. The current approaches to estimating a systematic variance for spatial surveys are to approximate the systematic design by a random design, or approximate it by a stratified design. Previous work has shown that approximation by a random design can perform very poorly, while approximation by a stratified design is an improvement but can still be severely biased in some situations. We develop a new estimator based on modeling the encounter process over space. The new "striplet" estimator has negligible bias and excellent precision in a wide range of simulation scenarios, including strip-sampling, distance-sampling, and quadrat-sampling surveys, and including populations that are highly trended or have strong aggregation of objects. We apply the new estimator to survey data for the spotted hyena (Crocuta crocuta) in the Serengeti National Park, Tanzania, and find that the reported coefficient of variation for estimated density is 20% using approximation by a random design, 17% using approximation by a stratified design, and 11% using the new striplet estimator. This large reduction in reported variance is verified by simulation. PMID:21534940

  3. Variance and covariance estimates for weaning weight of Senepol cattle.

    PubMed

    Wright, D W; Johnson, Z B; Brown, C J; Wildeus, S

    1991-10-01

    Variance and covariance components were estimated for weaning weight from Senepol field data for use in the reduced animal model for a maternally influenced trait. The 4,634 weaning records were used to evaluate 113 sires and 1,406 dams on the island of St. Croix. Estimates of direct additive genetic variance (sigma 2A), maternal additive genetic variance (sigma 2M), covariance between direct and maternal additive genetic effects (sigma AM), permanent maternal environmental variance (sigma 2PE), and residual variance (sigma 2 epsilon) were calculated by equating variances estimated from a sire-dam model and a sire-maternal grandsire model, with and without the inverse of the numerator relationship matrix (A-1), to their expectations. Estimates were sigma 2A, 139.05 and 138.14 kg2; sigma 2M, 307.04 and 288.90 kg2; sigma AM, -117.57 and -103.76 kg2; sigma 2PE, -258.35 and -243.40 kg2; and sigma 2 epsilon, 588.18 and 577.72 kg2 with and without A-1, respectively. Heritability estimates for direct additive (h2A) were .211 and .210 with and without A-1, respectively. Heritability estimates for maternal additive (h2M) were .47 and .44 with and without A-1, respectively. Correlations between direct and maternal (rAM) effects were -.57 and -.52 with and without A-1, respectively. PMID:1778806

  4. Analytic variance estimates of Swank and Fano factors

    SciTech Connect

    Gutierrez, Benjamin; Badano, Aldo; Samuelson, Frank

    2014-07-15

    Purpose: Variance estimates for detector energy resolution metrics can be used as stopping criteria in Monte Carlo simulations for the purpose of ensuring a small uncertainty of those metrics and for the design of variance reduction techniques. Methods: The authors derive an estimate for the variance of two energy resolution metrics, the Swank factor and the Fano factor, in terms of statistical moments that can be accumulated without significant computational overhead. The authors examine the accuracy of these two estimators and demonstrate how the estimates of the coefficient of variation of the Swank and Fano factors behave with data from a Monte Carlo simulation of an indirect x-ray imaging detector. Results: The authors' analyses suggest that the accuracy of their variance estimators is appropriate for estimating the actual variances of the Swank and Fano factors for a variety of distributions of detector outputs. Conclusions: The variance estimators derived in this work provide a computationally convenient way to estimate the error or coefficient of variation of the Swank and Fano factors during Monte Carlo simulations of radiation imaging systems.
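
    In the usual definitions, the Swank factor is built from the raw moments of the normalized pulse-height distribution and the Fano factor is the variance-to-mean ratio, so sample estimates are one-liners. The bootstrap below is a generic stand-in for the paper's closed-form variance estimators, which are not reproduced here; all function names and the Poisson toy data are illustrative:

        import numpy as np

        def swank_fano(samples):
            """Sample Swank factor I = M1^2/M2 and Fano factor F = Var/Mean
            for detector outputs (e.g., quanta per absorbed x ray)."""
            x = np.asarray(samples, dtype=float)
            m1, m2 = x.mean(), np.mean(x ** 2)
            return m1 ** 2 / m2, x.var(ddof=1) / m1

        def bootstrap_cv(samples, stat, n_boot=2000, seed=0):
            """Coefficient of variation of a statistic by resampling; usable as
            a stopping criterion in a Monte Carlo detector simulation."""
            rng = np.random.default_rng(seed)
            x = np.asarray(samples, dtype=float)
            vals = np.array([stat(rng.choice(x, size=x.size, replace=True))
                             for _ in range(n_boot)])
            return vals.std(ddof=1) / abs(vals.mean())

        x = np.random.default_rng(2).poisson(900, size=5000)  # toy pulse heights
        print(swank_fano(x))                                  # ~ (0.999, 1.0) here
        print(bootstrap_cv(x, lambda s: swank_fano(s)[0]))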

  5. Analysis of a magnetically trapped atom clock

    SciTech Connect

    Kadio, D.; Band, Y. B.

    2006-11-15

    We consider optimization of a rubidium atom clock that uses magnetically trapped Bose condensed atoms in a highly elongated trap, and determine the optimal conditions for minimum Allan variance of the clock using microwave Ramsey fringe spectroscopy. Elimination of magnetic field shifts and collisional shifts are considered. The effects of spin-dipolar relaxation are addressed in the optimization of the clock. We find that for the interstate interaction strength equal to or larger than the intrastate interaction strengths, a modulational instability results in phase separation and symmetry breaking of the two-component condensate composed of the ground and excited hyperfine clock levels, and this mechanism limits the clock accuracy.
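
    The figure of merit being minimized is the Allan deviation at the projection-noise limit; in the standard textbook form for Ramsey interrogation of N uncorrelated atoms (a generic expression, with ν₀ the clock frequency, T_R the free-evolution time, and T_c the cycle time; the paper's condensate- and entanglement-specific corrections are not captured here):

        \sigma_y(\tau) \simeq \frac{1}{2\pi\,\nu_0\,T_R}\,\sqrt{\frac{T_c}{N\,\tau}}.

    Collisional and magnetic-field shifts enter as systematic corrections on top of this statistical floor, which is why the optimization must trade interrogation time and atom number against density-dependent shifts and spin-dipolar losses.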

  6. On-orbit frequency stability analysis of the GPS NAVSTAR-1 quartz clock and the NAVSTARs-6 and -8 rubidium clocks

    NASA Technical Reports Server (NTRS)

    Mccaskill, T. B.; Buisson, J. A.; Reid, W. G.

    1984-01-01

    An on-orbit frequency stability performance analysis of the GPS NAVSTAR-1 quartz clock and the NAVSTARs-6 and -8 rubidium clocks is presented. The clock offsets were obtained from measurements taken at the GPS monitor stations which use high performance cesium standards as a reference. Clock performance is characterized through the use of the Allan variance, which is evaluated for sample times of 15 minutes to two hours, and from one day to 10 days. The quartz and rubidium clocks' offsets were corrected for aging rate before computing the frequency stability. The effect of small errors in aging rate is presented for the NAVSTAR-8 rubidium clock's stability analysis. The analysis includes presentation of time and frequency residuals with respect to linear and quadratic models, which aid in obtaining aging rate values and identifying systematic and random effects. The frequency stability values were further processed with a time domain noise process analysis, which is used to classify random noise process and modulation type.
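
    The statistic underlying this analysis is standard: the two-sample (Allan) variance of adjacent fractional-frequency averages ȳ_k taken over sample time τ,

        \sigma_y^2(\tau) = \tfrac{1}{2}\,\big\langle (\bar{y}_{k+1} - \bar{y}_k)^2 \big\rangle,

    and the usual power-law mapping used to classify the noise process from the slope of σ_y²(τ) on a log-log plot: roughly τ⁻² for white or flicker phase modulation, τ⁻¹ for white frequency modulation, τ⁰ for flicker frequency modulation, and τ⁺¹ for random-walk frequency modulation.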

  7. Forecast Variance Estimates Using DART Inversion

    NASA Astrophysics Data System (ADS)

    Gica, E.

    2014-12-01

    The tsunami forecast tool developed by the NOAA Center for Tsunami Research (NCTR) provides real-time tsunami forecasts and is composed of the following major components: a pre-computed tsunami propagation database, an inversion algorithm that utilizes real-time tsunami data recorded at DART stations to define the tsunami source, and inundation models that predict tsunami wave characteristics at specific coastal locations. The propagation database is a collection of basin-wide tsunami model runs generated from 50x100 km "unit sources" with a slip of 1 meter. Linear combination and scaling of unit sources is possible since the nonlinearity in the deep ocean is negligible. To define the tsunami source using the unit sources, real-time DART data is ingested into an inversion algorithm. Based on the selected DART stations and the length of the tsunami time series, the inversion algorithm will select the combination of unit sources and scaling factors that best fits the observed data at the selected locations. This combined source then serves as the boundary condition for the inundation models. Different combinations of DARTs and lengths of tsunami time series used in the inversion algorithm will result in different selections of unit sources and scaling factors. Since the combined unit sources are used as the boundary condition for inundation modeling, different sources will produce variations in the tsunami wave characteristics. As part of the testing procedures for the tsunami forecast tool, staff at NCTR and at both the National and Pacific Tsunami Warning Centers performed post-event forecasts for several historical tsunamis. The extent of variation due to different source definitions obtained from the testing is analyzed by comparing the simulated maximum tsunami wave amplitude with recorded data at tide gauge locations. Results of the analysis will provide an error estimate defining the possible range of the simulated maximum tsunami wave amplitude for each specific inundation model.
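
    The inversion step admits a compact least-squares statement. A sketch under simplifying assumptions (a linear stacked-waveform layout and a nonnegativity constraint on slip; the operational NCTR algorithm, with its DART- and window-selection logic, is not reproduced, and the array names are hypothetical):

        import numpy as np
        from scipy.optimize import nnls

        def invert_source(G, d):
            """G: (n_obs, n_units) array whose columns hold each unit source's
            modeled waveform stacked over the selected DART stations and time
            window; d: (n_obs,) stacked observed tsunami records. Returns the
            nonnegative scaling factors alpha minimizing ||G @ alpha - d||."""
            alpha, misfit = nnls(G, d)
            return alpha, misfit

        # The forecast boundary condition is then sum_k alpha[k] * unit_source_k;
        # different DART subsets or window lengths give different alpha, which is
        # the source of the forecast variance examined in this record.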

  8. University student understanding of cancer: analysis of ethnic group variances.

    PubMed

    Estaville, Lawrence; Trad, Megan; Martinez, Gloria

    2012-06-01

    Traditional university and college students ages 18-24 are traversing an important period in their lives in which behavioral intervention is critical in reducing their risk of cancer in later years. The study's purpose was to determine the perceptions and level of knowledge about cancer of white, Hispanic, and black university students (n=958). Sources of student information about cancer were also identified. The survey results showed that all student groups know very little about cancer and hold largely negative perceptions of it, with many students thinking that cancer and death are synonymous. We also discovered that university students seldom discuss cancer in their classrooms or with their family or friends. Moreover, university students are unlikely to perform monthly or even yearly self-examinations for breast or testicular cancers; black students have the lowest rate of self-examinations. PMID:22477236

  9. Gender Variance on Campus: A Critical Analysis of Transgender Voices

    ERIC Educational Resources Information Center

    Mintz, Lee M.

    2011-01-01

    Transgender college students face discrimination, harassment, and oppression on college and university campuses; consequently leading to limited academic and social success. Current literature is focused on describing the experiences of transgender students and the practical implications associated with attempting to meet their needs (Beemyn,…

  10. Variance Analysis of Unevenly Spaced Time Series Data

    NASA Technical Reports Server (NTRS)

    Hackman, Christine; Parker, Thomas E.

    1996-01-01

    We have investigated the effect of uneven data spacing on the computation of sigma(sub x)(tau). Evenly spaced simulated data sets were generated for noise processes ranging from white phase modulation (PM) to random walk frequency modulation (FM). Sigma(sub x)(tau) was then calculated for each noise type. Data were subsequently removed from each simulated data set using typical two-way satellite time and frequency transfer (TWSTFT) data patterns to create two unevenly spaced sets with average intervals of 2.8 and 3.6 days. Sigma(sub x)(tau) was then calculated for each sparse data set using two different approaches. First, the missing data points were replaced by linear interpolation and sigma(sub x)(tau) was calculated from this now-full data set. The second approach ignored the fact that the data were unevenly spaced and calculated sigma(sub x)(tau) as if the data were equally spaced with an average spacing of 2.8 or 3.6 days. Both approaches have advantages and disadvantages, and techniques are presented for correcting errors caused by uneven data spacing in typical TWSTFT data sets.

  11. Statistical error in isothermal titration calorimetry: variance function estimation from generalized least squares.

    PubMed

    Tellinghuisen, Joel

    2005-08-01

    The method of generalized least squares (GLS) is used to assess the variance function for isothermal titration calorimetry (ITC) data collected for the 1:1 complexation of Ba(2+) with 18-crown-6 ether. In the GLS method, the least squares (LS) residuals from the data fit are themselves fitted to a variance function, with iterative adjustment of the weighting function in the data analysis to produce consistency. The data are treated in a pooled fashion, providing 321 fitted residuals from 35 data sets in the final analysis. Heteroscedasticity (nonconstant variance) is clearly indicated. Data error terms proportional to q(i) and q(i)/v are well defined statistically, where q(i) is the heat from the ith injection of titrant and v is the injected volume. The statistical significance of the variance function parameters is confirmed through Monte Carlo calculations that mimic the actual data set. For the data in question, which fall mostly in the range of q(i)=100-2000 microcal, the contributions to the data variance from the terms in q(i)(2) typically exceed the background constant term for q(i)>300 microcal and v<10 microl. Conversely, this means that in reactions with q(i) much less than this, heteroscedasticity is not a significant problem. Accordingly, in such cases the standard unweighted fitting procedures provide reliable results for the key parameters, K and DeltaH(degrees) and their statistical errors. These results also support an important earlier finding: in most ITC work on 1:1 binding processes, the optimal number of injections is 7-10, which is a factor of 3 smaller than the current norm. For high-q reactions, where weighting is needed for optimal LS analysis, tips are given for using the weighting option in the commercial software commonly employed to process ITC data. PMID:15936713
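
    The GLS loop described above is easy to sketch for a generic linear model: fit with the current weights, regress the squared residuals on the variance function, update the weights, and repeat until self-consistent. This is illustrative code only; the paper's ITC model is nonlinear in K and DeltaH(degrees), and its variance function also carries a q/v term omitted here:

        import numpy as np

        def gls_fit(X, y, q, n_iter=8):
            """Iterated weighted least squares for y ~ X with variance function
            Var(e_i) = a + b * q_i**2 estimated from the pooled residuals."""
            w = np.ones_like(y)
            for _ in range(n_iter):
                sw = np.sqrt(w)
                beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
                e2 = (y - X @ beta) ** 2                       # squared residuals
                V = np.column_stack([np.ones_like(q), q ** 2])
                a, b = np.linalg.lstsq(V, e2, rcond=None)[0]   # variance function
                w = 1.0 / np.clip(a + b * q ** 2, 1e-12, None)
            return beta, (a, b)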

  12. Modern diet and metabolic variance – a recipe for disaster?

    PubMed Central

    2014-01-01

    Objective Recently, a positive correlation between alanine transaminase activity and body mass was established among healthy young individuals of normal weight. Here we explore this relationship further and propose a physiological rationale for this link. Design Cross-sectional statistical analysis of adiposity across large samples of adults differing by age, diet and lifestyle. Subjects 46,684 Swiss male conscripts aged 19-20 years and published data on 1000 Eskimos, 518 Toronto residents and 97,000 North American Adventists. Measurements Serum concentrations of alanine transaminase, post-prandial glucose levels, cholesterol, body height and weight, blood pressure and routine blood analysis (thrombocytes and leukocytes) for Swiss conscripts. Adiposity measures and dietary information for other groups were also obtained. Results Stepwise multiple regression after correction for random errors of physiological tests showed that 28% of the total variance in body mass is associated with ALT concentrations. This relationship remained significant when only metabolically healthy (as defined by the American Heart Association) Swiss conscripts were selected. The data indicated that high-protein-only or high-carbohydrate-only diets are associated with lower levels of obesity than a diet combining proteins and carbohydrates. Conclusion Elevated levels of alanine transaminase, and likely other transaminases, may result in overactivity of the alanine cycle that produces pyruvate from protein. When a mixed meal of protein, carbohydrate and fat is consumed, carbohydrates and fats are digested faster and metabolised to satisfy the body's energetic needs, while the more slowly digested protein is ultimately converted to malonyl-CoA and stored as fat. Chronicity of this sequence is proposed to cause accumulation of somatic fat stores and thus obesity. PMID:24502225

  14. The Neuro/PsyGRID calibration experiment: identifying sources of variance and bias in multicenter MRI studies.

    PubMed

    Suckling, John; Barnes, Anna; Job, Dominic; Brennan, David; Lymer, Katherine; Dazzan, Paola; Marques, Tiago Reis; MacKay, Clare; McKie, Shane; Williams, Steve R; Williams, Steven C R; Deakin, Bill; Lawrie, Stephen

    2012-02-01

    Calibration experiments precede multicenter trials to identify potential sources of variance and bias. In support of future imaging studies of mental health disorders and their treatment, the Neuro/PsyGRID consortium commissioned a calibration experiment to acquire functional and structural MRI from twelve healthy volunteers attending five centers on two occasions. Measures were derived of task activation from a working memory paradigm, fractal scaling (Hurst exponent) from resting fMRI, and grey matter distributions from T1-weighted sequences. At each intracerebral voxel a fixed-effects analysis of variance estimated components of variance corresponding to factors of center, subject, occasion, and within-occasion order, and interactions of center-by-occasion, subject-by-occasion, and center-by-subject, the latter (since there is no intervention) serving as a surrogate for the expected variance of the treatment-effect standard error across centers. A rank order test of between-center differences was indicative of crossover or noncrossover subject-by-center interactions. In general, the factors of center, subject, and error variance constituted >90% of the total variance, whereas occasion, order, and all interactions were generally <5%. Subject was the primary source of variance (70%-80%) for grey matter, with error variance the dominant component for the fMRI-derived measures. Spatially, variance was broadly homogeneous, with the exception of the fractal scaling measures, which delineated white matter, an effect related to the flip angle of the EPI sequence. Maps of P values for the associated F-tests were also derived. Rank tests were highly significant, indicating that the order of measures across centers was preserved. In summary, center effects should be modeled at the voxel level using existing and long-standing statistical recommendations.
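
    As a rough illustration of this kind of per-voxel variance decomposition, the sketch below simulates a center x subject x occasion layout and reports each factor's share of the total sum of squares. The layout sizes, effect magnitudes, and the lumping of all interactions into the error term are assumptions made for demonstration, not the study's actual model.

      import numpy as np

      rng = np.random.default_rng(1)
      n_center, n_subject, n_occasion = 5, 12, 2

      # One voxel's simulated measurements: center and subject offsets plus noise.
      center_fx = rng.normal(0.0, 0.5, n_center)[:, None, None]
      subject_fx = rng.normal(0.0, 2.0, n_subject)[None, :, None]
      y = center_fx + subject_fx + rng.normal(0.0, 1.0, (n_center, n_subject, n_occasion))

      grand = y.mean()
      ss_total = ((y - grand) ** 2).sum()
      ss_center = n_subject * n_occasion * ((y.mean(axis=(1, 2)) - grand) ** 2).sum()
      ss_subject = n_center * n_occasion * ((y.mean(axis=(0, 2)) - grand) ** 2).sum()
      ss_error = ss_total - ss_center - ss_subject  # interactions lumped into error here

      for name, ss in (("center", ss_center), ("subject", ss_subject), ("error", ss_error)):
          print(f"{name:8s} {100.0 * ss / ss_total:5.1f}% of total sum of squares")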

  15. Proposed Construction of the Lompoc Valley Center in the Allan Hancock Joint Community College District. A Report to the Governor and Legislature in Response to a Request from the Chancellor's Office of the California Community Colleges.

    ERIC Educational Resources Information Center

    California State Postsecondary Education Commission, Sacramento.

    The Allan Hancock Joint Community College District proposes establishing a permanent educational center in the Lompoc area of Santa Barbara County, primarily to consolidate its current outreach operations there but also to accommodate anticipated enrollment growth. Donated by the United States Army, the 155-acre site will be…

  16. Previous estimates of mitochondrial DNA mutation level variance did not account for sampling error: comparing the mtDNA genetic bottleneck in mice and humans.

    PubMed

    Wonnapinij, Passorn; Chinnery, Patrick F; Samuels, David C

    2010-04-01

    In cases of inherited pathogenic mitochondrial DNA (mtDNA) mutations, a mother and her offspring generally have large and seemingly random differences in the amount of mutated mtDNA that they carry. Comparisons of measured mtDNA mutation level variance values have become an important issue in determining the mechanisms that cause these large random shifts in mutation level. These variance measurements have been made with samples of quite modest size, which should be a source of concern because higher-order statistics, such as variance, are poorly estimated from small sample sizes. We have developed an analysis of the standard error of variance from a sample of size n, and we have defined error bars for variance measurements based on this standard error. We calculate variance error bars for several published sets of measurements of mtDNA mutation level variance and show how the addition of the error bars alters the interpretation of these experimental results. We compare variance measurements from human clinical data and from mouse models and show that the mutation level variance is clearly higher in the human data than in the mouse models at both the primary oocyte and offspring stages of inheritance. We discuss how the standard error of variance can be used in the design of experiments measuring mtDNA mutation level variance. Our results show that variance measurements based on fewer than 20 measurements are generally unreliable, and that ideally more than 50 measurements are required to reliably compare variances that differ by less than 2-fold.
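
    To see why small samples give unreliable variance estimates, the sketch below uses the normal-theory standard error of a sample variance, SE(s²) = s²·sqrt(2/(n−1)). This textbook formula is shown only to illustrate the dependence on n; it is not necessarily the exact expression derived in the paper, and mutation levels (being proportions) need not be normally distributed.

      import numpy as np

      def variance_with_error_bar(x):
          # Sample variance plus its normal-theory standard error.
          x = np.asarray(x, dtype=float)
          n = x.size
          s2 = x.var(ddof=1)
          se = s2 * np.sqrt(2.0 / (n - 1))
          return s2, se

      rng = np.random.default_rng(2)
      # Error bars shrink slowly: roughly as 1/sqrt(n).
      for n in (10, 20, 50, 200):
          s2, se = variance_with_error_bar(rng.normal(0.0, 1.0, n))
          print(f"n={n:4d}  s^2={s2:5.2f} +/- {se:4.2f}")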

  17. Thermodynamic and dynamic contributions to future changes in regional precipitation variance: focus on the Southeastern United States

    NASA Astrophysics Data System (ADS)

    Li, Laifang; Li, Wenhong

    2015-07-01

    The frequency and severity of extreme events are tightly associated with the variance of precipitation. As the climate warms, the acceleration of the hydrological cycle is likely to enhance the variance of precipitation across the globe. However, due to the lack of an effective analysis method, the mechanisms responsible for changes in precipitation variance are poorly understood, especially on regional scales. Our study fills this gap by formulating a variance partition algorithm, which explicitly quantifies the contributions of atmospheric thermodynamics (specific humidity) and dynamics (wind) to the changes in regional-scale precipitation variance. Taking Southeastern (SE) United States (US) summer precipitation as an example, the algorithm is applied to simulations of current and future climate from phase 5 of the Coupled Model Intercomparison Project (CMIP5). The analysis suggests that, compared to observations, most CMIP5 models (~60 %) tend to underestimate the summer precipitation variance over the SE US during 1950-1999, primarily due to errors in the modeled dynamic processes (i.e., large-scale circulation). Among the 18 CMIP5 models analyzed in this study, six reasonably simulate SE US summer precipitation variance in the twentieth century and the underlying physical processes; these models are thus applied to a mechanistic study of future changes in SE US summer precipitation variance. In the future, the six models collectively project an intensification of SE US summer precipitation variance, resulting from the combined effects of atmospheric thermodynamics and dynamics, of which the latter plays the more important role. Specifically, thermodynamics results in more frequent and intensified wet summers, but does not contribute to the projected increase in the frequency and intensity of dry summers. In contrast, atmospheric dynamics explains the projected enhancement in both wet and dry summers, indicating its importance in understanding…
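
    The flavor of such a thermodynamic/dynamic partition can be conveyed with a generic anomaly decomposition of a moisture flux q·v into a humidity-driven term and a wind-driven term. This is a textbook-style decomposition on synthetic numbers, chosen purely for illustration; the paper's actual algorithm, variables, and data differ.

      import numpy as np

      rng = np.random.default_rng(3)
      q = 10.0 + rng.normal(0.0, 0.8, 500)  # specific humidity (toy units)
      v = 5.0 + rng.normal(0.0, 1.5, 500)   # wind (toy units)

      qbar, vbar = q.mean(), v.mean()
      qp, vp = q - qbar, v - vbar           # anomalies

      flux = q * v                          # moisture flux
      thermo = vbar * qp                    # humidity-driven (thermodynamic) term
      dynamic = qbar * vp                   # wind-driven (dynamic) term

      total_var = flux.var()
      print(f"thermodynamic share: {thermo.var() / total_var:.2f}")
      print(f"dynamic share:       {dynamic.var() / total_var:.2f}")
      # Shares need not sum to one: covariance and the eddy (q'v') term
      # absorb the remainder.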

  18. Election 84: Search for a New Coalition. Proceedings of the Allan Shivers Election Analysis Conference (Austin, Texas, November 17, 1984).

    ERIC Educational Resources Information Center

    Jeffrey, Robert C., Ed.

    This booklet contains the proceedings of a conference that focused on the psychological and fiscal impact of the electronic media in the 1984 election campaign. Comments are made by Robert Teeter, the principal pollster for the national Republican party, and Peter Hart, the principal pollster for the national Democratic party. Both describe their…

  19. A comparison of two methods for detecting abrupt changes in the variance of climatic time series

    NASA Astrophysics Data System (ADS)

    Rodionov, Sergei N.

    2016-06-01

    Two methods for detecting abrupt shifts in variance - Integrated Cumulative Sum of Squares (ICSS) and Sequential Regime Shift Detector (SRSD) - have been compared on both synthetic and observed time series. In Monte Carlo experiments, SRSD outperformed ICSS in the overwhelming majority of the modeled scenarios with different sequences of variance regimes. The SRSD advantage was particularly apparent when the series contained outliers. On the other hand, SRSD has more parameters to adjust than ICSS, so more experience is required from the user to select them properly. ICSS can therefore serve as a good starting point for a regime shift analysis. When tested on climatic time series, both methods detected the same change points in most of the longer series (252-787 monthly values). The only exception was the Arctic Ocean sea surface temperature (SST) series, for which ICSS found one extra change point that appeared to be spurious. As for the shorter time series (66-136 yearly values), ICSS failed to detect any change points even when the variance doubled or tripled from one regime to another; for these series, SRSD is recommended. Interestingly, all the climatic time series tested, from the Arctic to the tropics, had one thing in common: the last shift detected in each series was toward a high-variance regime. This is consistent with other findings of increased climate variability in recent decades.
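
    The core statistic behind ICSS (due to Inclan and Tiao) is easy to state: with C_k the running sum of squared, centered values, compute D_k = C_k/C_n − k/n and look for a large |D_k|. The sketch below computes this single-pass statistic on synthetic data; the full ICSS algorithm iterates the test over sub-segments, and the 1.358 critical value rests on the usual asymptotic approximation.

      import numpy as np

      def icss_statistic(x):
          x = np.asarray(x, dtype=float) - np.mean(x)
          c = np.cumsum(x ** 2)          # C_k: running sum of squares
          k = np.arange(1, x.size + 1)
          d = c / c[-1] - k / x.size     # centered cumulative sum of squares
          k_star = int(np.argmax(np.abs(d)))
          stat = np.sqrt(x.size / 2.0) * np.abs(d[k_star])
          return k_star + 1, stat        # candidate change point, test statistic

      rng = np.random.default_rng(4)
      # Variance doubles halfway through the series.
      x = np.concatenate([rng.normal(0.0, 1.0, 150), rng.normal(0.0, 2.0, 150)])
      cp, stat = icss_statistic(x)
      print(f"candidate change point at t={cp}, statistic={stat:.2f}")
      print("reject 'no variance change' at the 5% level if the statistic exceeds 1.358")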

  20. Neurobiological studies of risk assessment: a comparison of expected utility and mean-variance approaches.

    PubMed

    D'Acremont, Mathieu; Bossaerts, Peter

    2008-12-01

    When modeling valuation under uncertainty, economists generally prefer expected utility because it has an axiomatic foundation, meaning that the resulting choices will satisfy a number of rationality requirements. In expected utility theory, values are computed by multiplying the probability of each possible state of nature by the payoff in that state and summing the results. The drawback of this approach is that all state probabilities need to be dealt with separately, which becomes extremely cumbersome when it comes to learning. Finance academics and professionals, however, prefer to value risky prospects in terms of a trade-off between expected reward and risk, where the latter is usually measured as reward variance. This mean-variance approach is fast and simple and greatly facilitates learning, but it impedes assigning values to new gambles on the basis of the values of known ones. To date, it is unclear whether the human brain computes values in accordance with expected utility theory or with mean-variance analysis. In this article, we discuss the theoretical and empirical arguments that favor one or the other theory. We also propose a new experimental paradigm that could determine whether the human brain follows the expected utility or the mean-variance approach. Behavioral results from an implementation of the paradigm are discussed.
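
    A toy example makes the contrast concrete: expected utility sums u(payoff) over states weighted by probability, while the mean-variance rule trades expected reward against reward variance. The utility function and the risk-aversion coefficient below are arbitrary illustrative choices.

      import numpy as np

      payoffs = np.array([10.0, 0.0, -5.0])  # possible outcomes of a gamble
      probs = np.array([0.3, 0.5, 0.2])      # state probabilities

      # Expected utility with a concave (risk-averse) utility u(x) = 1 - exp(-x/10).
      def u(x):
          return 1.0 - np.exp(-x / 10.0)

      eu = np.sum(probs * u(payoffs))

      # Mean-variance value with an assumed risk-aversion weight of 0.05.
      mean = np.sum(probs * payoffs)
      var = np.sum(probs * (payoffs - mean) ** 2)
      mv = mean - 0.05 * var

      print(f"expected utility: {eu:.3f}   mean-variance value: {mv:.3f}")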