Science.gov

Sample records for allan variance analysis

  1. The quantum Allan variance

    NASA Astrophysics Data System (ADS)

    Chabuda, Krzysztof; Leroux, Ian D.; Demkowicz-Dobrzański, Rafał

    2016-08-01

    The instability of an atomic clock is characterized by the Allan variance, a measure widely used to describe the noise of frequency standards. We provide an explicit method to find the ultimate bound on the Allan variance of an atomic clock in the most general scenario where N atoms are prepared in an arbitrarily entangled state and arbitrary measurement and feedback are allowed, including those exploiting coherences between succeeding interrogation steps. While the method is rigorous and general, it becomes numerically challenging for large N and long averaging times.

  2. Spectral Ambiguity of Allan Variance

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    1996-01-01

    We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.

  3. Estimating the Modified Allan Variance

    NASA Technical Reports Server (NTRS)

    Greenhall, Charles

    1995-01-01

    The third-difference approach to modified Allan variance (MVAR) leads to a tractable formula for a measure of MVAR estimator confidence, the equivalent degrees of freedom (edf), in the presence of power-law phase noise. The effect of estimation stride on edf is tabulated. A simple approximation for edf is given, and its errors are tabulated. A theorem allowing conservative estimates of edf in the presence of compound noise processes is given.

  4. A Wavelet Perspective on the Allan Variance.

    PubMed

    Percival, Donald B

    2016-04-01

    The origins of the Allan variance trace back 50 years to two seminal papers, one by Allan (1966) and the other by Barnes (1966). Since then, the Allan variance has played a leading role in the characterization of high-performance time and frequency standards. Wavelets first arose in the early 1980s in the geophysical literature, and the discrete wavelet transform (DWT) became prominent in the late 1980s in the signal processing literature. Flandrin (1992) briefly documented a connection between the Allan variance and a wavelet transform based upon the Haar wavelet. Percival and Guttorp (1994) noted that one popular estimator of the Allan variance, the maximal overlap estimator, can be interpreted in terms of a version of the DWT now widely referred to as the maximal overlap DWT (MODWT). In particular, when the MODWT is based on the Haar wavelet, the variance of the resulting wavelet coefficients (the wavelet variance) is identical to the Allan variance when the latter is multiplied by one-half. The theory behind the wavelet variance can thus deepen our understanding of the Allan variance. In this paper, we review basic wavelet variance theory with an emphasis on the Haar-based wavelet variance and its connection to the Allan variance. We then note that estimation theory for the wavelet variance offers a means of constructing asymptotically correct confidence intervals (CIs) for the Allan variance without reverting to the common practice of specifying a power-law noise type a priori. We also review recent work on specialized estimators of the wavelet variance that are of interest when some observations are missing (gappy data) or in the presence of contamination (rogue observations or outliers). It is a simple matter to adapt these estimators to become estimators of the Allan variance. Finally we note that wavelet variances based upon wavelets other than the Haar offer interesting generalizations of the Allan variance.
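
    The factor-of-one-half relation between the Haar-based wavelet variance and the Allan variance is easy to verify numerically. The sketch below (Python with NumPy, using plain block-average estimators; the boundary conventions of the MODWT literature are not reproduced) compares the two scale by scale:

```python
import numpy as np

def avar_scale(y, m):
    # overlapping Allan variance of frequency data y at averaging factor m
    ybar = np.convolve(y, np.ones(m) / m, mode="valid")
    d = ybar[m:] - ybar[:-m]
    return 0.5 * np.mean(d**2)

def haar_wvar(y, m):
    # Haar MODWT-style wavelet coefficient at scale m: half the
    # difference of adjacent length-m block averages
    ybar = np.convolve(y, np.ones(m) / m, mode="valid")
    w = 0.5 * (ybar[m:] - ybar[:-m])
    return np.mean(w**2)

rng = np.random.default_rng(1)
y = rng.standard_normal(100_000)
# wavelet variance = one-half of the Allan variance, scale by scale
ratios = [haar_wvar(y, m) / avar_scale(y, m) for m in (1, 2, 4, 8)]
```

    The one-half factor here is algebraic, not statistical: the Haar coefficient is exactly half the adjacent-average difference that enters the Allan variance.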

  5. Estimating the Modified Allan Variance

    NASA Technical Reports Server (NTRS)

    Greenhall, Charles

    1995-01-01

    A paper at the 1992 FCS showed how to express the modified Allan variance (mvar) in terms of the third difference of the cumulative sum of time residuals. Although this reformulated definition was presented merely as a computational trick for simplifying the calculation of mvar estimates, it has since turned out to be a powerful theoretical tool for deriving the statistical quality of those estimates in terms of their equivalent degrees of freedom (edf), defined for an estimator V by edf(V) = 2(EV)²/var(V). Confidence intervals for mvar can then be constructed from levels of the appropriate χ² distribution.
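
    The edf definition can be checked by Monte Carlo: for an estimator distributed as χ² with d degrees of freedom, edf(V) = 2(EV)²/var(V) recovers d. A minimal illustration in Python (the χ² variable here is a stand-in for an estimator, not an actual mvar estimate):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 10                              # true degrees of freedom
V = rng.chisquare(d, size=200_000)  # many realizations of a chi-square estimator
edf = 2 * V.mean()**2 / V.var()     # edf(V) = 2 (E V)^2 / var(V)
```
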

  6. Application of the Allan Variance to Time Series Analysis in Astrometry and Geodesy: A Review.

    PubMed

    Malkin, Zinovy

    2016-04-01

    The Allan variance (AVAR) was introduced 50 years ago as a statistical tool for assessing the deviations of frequency standards. For the past decades, AVAR has increasingly been used in geodesy and astrometry to assess the noise characteristics in geodetic and astrometric time series. A specific feature of astrometric and geodetic measurements, as compared with clock measurements, is that they are generally associated with uncertainties; thus, an appropriate weighting should be applied during data analysis. In addition, some physically connected scalar time series naturally form series of multidimensional vectors. For example, the three station coordinate time series X, Y, and Z can be combined to analyze 3-D station position variations. The classical AVAR is not intended for processing unevenly weighted and/or multidimensional data. Therefore, AVAR modifications, namely weighted AVAR (WAVAR), multidimensional AVAR (MAVAR), and weighted multidimensional AVAR (WMAVAR), were introduced to overcome these deficiencies. In this paper, a brief review is given of the experience of using AVAR and its modifications in processing astrogeodetic time series.

  7. A Simple Algorithm for Approximating Confidence on the Modified Allan Variance and the Time Variance

    NASA Technical Reports Server (NTRS)

    Weiss, Marc A.; Greenhall, Charles A.

    1996-01-01

    An approximating algorithm for computing equivalent degrees of freedom of the Modified Allan Variance and its square root, the Modified Allan Deviation (MVAR and MDEV), and the Time Variance and Time Deviation (TVAR and TDEV) is presented, along with an algorithm for approximating the inverse chi-square distribution.

  8. On the application of Allan variance method for Ring Laser Gyro performance characterization

    SciTech Connect

    Ng, L.C.

    1993-10-15

    This report describes the method of Allan variance and its application to the characterization of a Ring Laser Gyro's (RLG) performance. Allan variance, a time domain analysis technique, is an accepted IEEE standard for gyro specifications. The method was initially developed by David Allan of the National Bureau of Standards to quantify the error statistics of a Cesium beam frequency standard employed as the US Frequency Standard in the 1960s. The method can, in general, be applied to analyze the error characteristics of any precision measurement instrument. The key attribute of the method is that it allows for a finer, easier characterization and identification of error sources and their contribution to the overall noise statistics. This report presents an overview of the method, explains the relationship between Allan variance and the power spectral density distribution of underlying noise sources, describes the batch and recursive implementation approaches, validates the Allan variance computation with a simulation model, and illustrates the Allan variance method using data collected from several Honeywell LIMU units.
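
    A batch implementation of the kind described is short in Python with NumPy. This is a sketch of the overlapping Allan variance for fractional-frequency data (the report's recursive form and instrument specifics are not reproduced):

```python
import numpy as np

def allan_variance(y, tau0=1.0, m_list=(1, 2, 4, 8, 16, 32, 64)):
    """Overlapping (batch) Allan variance of fractional-frequency data y
    sampled at interval tau0, for the given averaging factors."""
    y = np.asarray(y, dtype=float)
    taus, avars = [], []
    for m in m_list:
        ybar = np.convolve(y, np.ones(m) / m, mode="valid")  # length-m averages
        d = ybar[m:] - ybar[:-m]                 # adjacent-average differences
        taus.append(m * tau0)
        avars.append(0.5 * np.mean(d**2))
    return np.array(taus), np.array(avars)

# White frequency noise of unit variance: AVAR(tau) falls as 1/tau.
rng = np.random.default_rng(0)
taus, av = allan_variance(rng.standard_normal(200_000))
```

    Plotting sqrt(av) against taus on log-log axes gives the usual Allan deviation slope plot used to identify noise types.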

  9. Allan Variance Calculation for Nonuniformly Spaced Input Data

    DTIC Science & Technology

    2015-01-01

    Approved for public release; distribution is unlimited. The Allan Variance (AV) characterizes the ... temporal randomness in sensor output data streams at various time scales. The conventional formula for calculating the AV assumes that the data ... presents a modified approach to AV calculation, which accommodates nonuniformly spaced time samples. The basic concept of the modified approach is ...

  10. Numbers Of Degrees Of Freedom Of Allan-Variance Estimators

    NASA Technical Reports Server (NTRS)

    Greenhall, Charles A.

    1992-01-01

    Report discusses formulas for estimation of Allan variances. Presents algorithms for closed-form approximations of numbers of degrees of freedom characterizing results obtained when various estimators applied to five power-law components of classical mathematical model of clock noise.

  11. Allan variance of time series models for measurement data

    NASA Astrophysics Data System (ADS)

    Zhang, Nien Fan

    2008-10-01

    The uncertainty of the mean of autocorrelated measurements from a stationary process has been discussed in the literature. However, when the measurements are from a non-stationary process, how to assess their uncertainty remains unresolved. Allan variance or two-sample variance has been used in time and frequency metrology for more than three decades as a substitute for the classical variance to characterize the stability of clocks or frequency standards when the underlying process is a 1/f noise process. However, its applications are related only to the noise models characterized by the power law of the spectral density. In this paper, from the viewpoint of the time domain, we provide a statistical underpinning of the Allan variance for discrete stationary processes, random walk and long-memory processes such as the fractional difference processes including the noise models usually considered in time and frequency metrology. Results show that the Allan variance is a better measure of the process variation than the classical variance for the random walk and the non-stationary fractional difference processes, including the 1/f noise.
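
    The point about random walks can be seen directly in simulation: the classical sample variance of a random walk is enormous and record-length dependent, while the Allan variance at a fixed averaging time stays near the step variance. A Python sketch:

```python
import numpy as np

def avar(y, m=1):
    # overlapping Allan variance of the series y at averaging factor m
    ybar = np.convolve(y, np.ones(m) / m, mode="valid")
    d = ybar[m:] - ybar[:-m]
    return 0.5 * np.mean(d**2)

rng = np.random.default_rng(3)
walk = np.cumsum(rng.standard_normal(400_000))  # random walk (non-stationary)

# Classical variance is huge and depends on the record length...
v_short, v_long = np.var(walk[:100_000]), np.var(walk)
# ...while the Allan variance at a fixed tau stays near the step variance 0.5.
a_short, a_long = avar(walk[:100_000]), avar(walk)
```
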

  12. Online estimation of Allan variance coefficients based on a neural-extended Kalman filter.

    PubMed

    Miao, Zhiyong; Shen, Feng; Xu, Dingjie; He, Kunpeng; Tian, Chunmiao

    2015-01-23

    As a noise analysis method for inertial sensors, the traditional Allan variance method requires the storage of a large amount of data and manual analysis of an Allan variance graph. Although the existing online estimation methods avoid the storage of data and the painful procedure of drawing slope lines for estimation, they require complex transformations and can even cause errors during the modeling of the dynamic Allan variance. To solve these problems, first, a new nonlinear state-space model that directly models the stochastic errors was established for inertial sensors. Then, a neural-extended Kalman filter algorithm was used to estimate the Allan variance coefficients. The real noises of an ADIS16405 IMU and fiber optic gyro-sensors were analyzed by the proposed method and traditional methods. The experimental results show that the proposed method is more suitable to estimate the Allan variance coefficients than the traditional methods. Moreover, the proposed method effectively avoids the storage of data and can be easily implemented using an online processor.

  13. The Third-Difference Approach to Modified Allan Variance

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    1995-01-01

    This study gives strategies for estimating the modified Allan variance (mvar) and formulas for computing the equivalent degrees of freedom (edf) of the estimators. A third-difference formulation of mvar leads to a tractable formula for edf in the presence of power-law phase noise. The effect of estimation stride on edf is tabulated. First-degree rational-function approximations for edf are derived.
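
    The third-difference formulation makes the mvar estimator a few lines of NumPy. A sketch (phase data x at sampling interval tau0; normalization follows the standard mvar definition):

```python
import numpy as np

def mvar(x, m, tau0=1.0):
    """Modified Allan variance of phase data x, computed from the third
    difference of the cumulative sum of the time residuals."""
    S = np.concatenate(([0.0], np.cumsum(x)))            # S[k] = sum of x[:k]
    d3 = S[3*m:] - 3*S[2*m:-m] + 3*S[m:-2*m] - S[:-3*m]  # 3rd difference, lag m
    return np.mean(d3**2) / (2 * m**4 * tau0**2)

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)  # white phase noise, sigma_x = 1
mv = mvar(x, 1)                   # coincides with the ordinary AVAR at m = 1
```
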

  14. Relationship between Allan variances and Kalman Filter parameters

    NASA Technical Reports Server (NTRS)

    Vandierendonck, A. J.; Mcgraw, J. B.; Brown, R. G.

    1984-01-01

    A relationship was constructed between the Allan variance parameters (h₂, h₁, h₀, h₋₁ and h₋₂) and a Kalman Filter model that would be used to estimate and predict clock phase, frequency and frequency drift. To start with, the meaning of those Allan variance parameters, and how they are arrived at for a given frequency source, is reviewed. Although a subset of these parameters is arrived at by measuring phase as a function of time rather than as a spectral density, they all represent phase noise spectral density coefficients, though not necessarily that of a rational spectral density. The phase noise spectral density is then transformed into a time domain covariance model which can then be used to derive the Kalman Filter model parameters. Simulation results of that covariance model are presented and compared to clock uncertainties predicted by Allan variance parameters. A two-state Kalman Filter model is then derived and the significance of each state is explained.
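
    The standard two-state (phase, frequency) clock model behind such a Kalman Filter can be written down compactly. In the sketch below (Python; q1 and q2 are the usual textbook white-FM and random-walk-FM noise levels, an assumed parameterization rather than the paper's exact h-coefficients), the model's predicted Allan variance σ_y²(τ) = q1/τ + q2·τ/3 is checked by Monte Carlo for the white-FM term:

```python
import numpy as np

def clock_fq(tau, q1, q2):
    """Two-state (phase, frequency) clock model: transition matrix F and
    process-noise covariance Q over an update interval tau. q1 is the
    white-FM level, q2 the random-walk-FM level; the predicted Allan
    variance is sigma_y^2(tau) = q1/tau + q2*tau/3."""
    F = np.array([[1.0, tau],
                  [0.0, 1.0]])
    Q = np.array([[q1 * tau + q2 * tau**3 / 3.0, q2 * tau**2 / 2.0],
                  [q2 * tau**2 / 2.0,            q2 * tau]])
    return F, Q

F, Q = clock_fq(1.0, 1.0, 0.0)

# Monte Carlo check of the white-FM term: with q1 = 1, q2 = 0 and unit
# time steps, the Allan variance at tau = 10 should be q1/tau = 0.1.
rng = np.random.default_rng(4)
x = np.cumsum(rng.standard_normal(500_000))  # phase = integral of white FM
m = 10
d2 = x[2*m:] - 2*x[m:-m] + x[:-2*m]          # second difference of phase
avar_m = np.mean(d2**2) / (2 * m**2)
```
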

  15. The dynamic Allan Variance IV: characterization of atomic clock anomalies.

    PubMed

    Galleani, Lorenzo; Tavella, Patrizia

    2015-05-01

    The number of applications where precise clocks play a key role is steadily increasing, satellite navigation being the main example. Precise clock anomalies are hence critical events, and their characterization is a fundamental problem. When an anomaly occurs, the clock stability changes with time, and this variation can be characterized with the dynamic Allan variance (DAVAR). We obtain the DAVAR for a series of common clock anomalies, namely, a sinusoidal term, a phase jump, a frequency jump, and a sudden change in the clock noise variance. These anomalies are particularly common in space clocks. Our analytic results clarify how the clock stability changes during these anomalies.
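
    A minimal numerical illustration: the DAVAR is, in essence, the Allan variance evaluated over a sliding analysis window, so a sudden change in the clock noise variance appears as a step in the DAVAR track. A Python sketch (rectangular window on frequency data; not the paper's analytic treatment):

```python
import numpy as np

def avar(y, m=1):
    ybar = np.convolve(y, np.ones(m) / m, mode="valid")
    d = ybar[m:] - ybar[:-m]
    return 0.5 * np.mean(d**2)

def davar(y, window, step, m=1):
    """Dynamic Allan variance: the Allan variance computed over a
    sliding window of the frequency data (a minimal sketch)."""
    return np.array([avar(y[s:s + window], m)
                     for s in range(0, len(y) - window + 1, step)])

# A sudden change in the noise variance shows up as a step in the DAVAR.
rng = np.random.default_rng(5)
y = np.concatenate([rng.standard_normal(50_000),
                    3.0 * rng.standard_normal(50_000)])
track = davar(y, window=20_000, step=10_000)
```
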

  16. Measurement of Allan variance and phase noise at fractions of a millihertz

    NASA Technical Reports Server (NTRS)

    Conroy, Bruce L.; Le, Duc

    1990-01-01

    Although the measurement of Allan variance of oscillators is well documented, there is a need for a simplified system for finding the degradation of phase noise and Allan variance step-by-step through a system. This article describes an instrumentation system for simultaneous measurement of additive phase noise and degradation in Allan variance through a transmitter system. Also included are measurements of a 20-kW X-band transmitter showing the effect of adding a pass tube regulator.

  17. Allan Variance Computed in Space Domain: Definition and Application to InSAR Data to Characterize Noise and Geophysical Signal.

    PubMed

    Cavalié, Olivier; Vernotte, François

    2016-04-01

    The Allan variance was introduced 50 years ago for analyzing the stability of frequency standards. In addition to its metrological interest, it may be also considered as an estimator of the large trends of the power spectral density (PSD) of frequency deviation. For instance, the Allan variance is able to discriminate different types of noise characterized by different power laws in the PSD. The Allan variance has also been used in fields other than time and frequency metrology: for more than 20 years, it has been used in accelerometry, geophysics, geodesy, astrophysics, and even finance. However, it seems that up to now, it has been exclusively applied to time series analysis. We propose here to use the Allan variance on spatial data. Interferometric synthetic aperture radar (InSAR) is used in geophysics to image ground displacements in space [over the synthetic aperture radar (SAR) image spatial coverage] and in time thanks to the regular SAR image acquisitions by dedicated satellites. The main limitation of the technique is the atmospheric disturbances that affect the radar signal while traveling from the sensor to the ground and back. In this paper, we propose to use the Allan variance for analyzing spatial data from InSAR measurements. The Allan variance was computed in XY mode as well as in radial mode for detecting different types of behavior at different space scales, in the same way as the different types of noise versus the integration time in the classical time and frequency application. We found that the radial Allan variance is the most appropriate way to obtain an estimator insensitive to the spatial axis, and we applied it to SAR data acquired over eastern Turkey for the period 2003-2011. The spatial Allan variance allowed us to characterize well the noise features classically found in InSAR, such as phase decorrelation producing white noise, or atmospheric delays behaving like a random-walk signal. We finally applied the spatial Allan variance to an InSAR time series.

  18. The Allan variance in the presence of a compound Poisson process modelling clock frequency jumps

    NASA Astrophysics Data System (ADS)

    Formichella, Valerio

    2016-12-01

    Atomic clocks can be affected by frequency jumps occurring at random times and with a random amplitude. The frequency jumps degrade the clock stability and this is captured by the Allan variance. In this work we assume that the random jumps can be modelled by a compound Poisson process, independent of the other stochastic and deterministic processes affecting the clock stability. Then, we derive the analytical expression of the Allan variance of a jumping clock. We find that the analytical Allan variance does not depend on the actual shape of the jumps amplitude distribution, but only on its first and second moments, and its final form is the same as for a clock with a random walk of frequency and a frequency drift. We conclude that the Allan variance cannot distinguish between a compound Poisson process and a Wiener process, hence it may not be sufficient to correctly identify the fundamental noise processes affecting a clock. The result is general and applicable to any oscillator whose frequency is affected by a jump process with the described statistics.

  19. Twenty-Five Years of Applications of the Modified Allan Variance in Telecommunications.

    PubMed

    Bregni, Stefano

    2016-04-01

    The Modified Allan Variance (MAVAR) was originally defined in 1981 for measuring frequency stability in precision oscillators. Due to its outstanding accuracy in discriminating power-law noise, it has attracted significant interest among telecommunications engineers since the early 1990s, when it was approved as a standard measure in international standards, recast as the Time Variance (TVAR), for specifying the time stability of network synchronization signals and of equipment clocks. A dozen years later, the usage of MAVAR was also introduced for Internet traffic analysis to estimate self-similarity and long-range dependence. In this field it has demonstrated accuracy and sensitivity superior to most popular tools already in use. This paper surveys the last 25 years of progress in extending the field of application of the MAVAR in telecommunications. First, the rationale and principles of the MAVAR are briefly summarized. Its adaptation as TVAR for specification of timing stability is presented. The usage of MAVAR/TVAR in telecommunications standards is reviewed. Examples of measurements on real telecommunications equipment clocks are presented, providing an overview of their actual performance in terms of MAVAR. Moreover, applications of MAVAR to network traffic analysis are surveyed. The superior accuracy of MAVAR in estimating long-range dependence is emphasized by highlighting some remarkable practical examples of real network traffic analysis.
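
    The TVAR recast of MVAR is the standard scaling TVAR(τ) = (τ²/3)·MVAR(τ), so that the Time Deviation reads in time units. A Python sketch of the pair, using the third-difference mvar estimator (tau0 is assumed to be the sampling interval):

```python
import numpy as np

def mvar(x, m, tau0=1.0):
    # modified Allan variance via third differences of the cumsum of phase x
    S = np.concatenate(([0.0], np.cumsum(x)))
    d3 = S[3*m:] - 3*S[2*m:-m] + 3*S[m:-2*m] - S[:-3*m]
    return np.mean(d3**2) / (2 * m**4 * tau0**2)

def tvar(x, m, tau0=1.0):
    # Time Variance: TVAR(tau) = (tau**2 / 3) * MVAR(tau)
    tau = m * tau0
    return tau**2 / 3.0 * mvar(x, m, tau0)

rng = np.random.default_rng(6)
x = rng.standard_normal(60_000)  # white phase noise, sigma_x = 1 time unit
t1 = tvar(x, 1)                  # for white PM, TVAR(tau0) is about sigma_x^2
```
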

  20. Estimating the Allan variance in the presence of long periods of missing data and outliers

    NASA Astrophysics Data System (ADS)

    Sesia, Ilaria; Tavella, Patrizia

    2008-12-01

    The ability of the Allan variance (AVAR) to identify and estimate the typical clock noise is widely accepted, and its use is recommended by international standards. Recently, a time-varying version called the Dynamic Allan variance (DAVAR) was suggested and exploited. Currently, the AVAR is commonly used in applications to space and satellite systems, in particular in monitoring the clocks of the Global Positioning System, and also in the framework of the European project Galileo. In these applications stability estimation, either AVAR or DAVAR (or other similar variances), presents some peculiar aspects which are not commonly encountered when the clock data are measured in a laboratory. In particular, data from space clocks may typically present outliers and missing values. Hence, special attention has to be paid when dealing with such experimental measurements. In this work we propose an estimation algorithm and its implementation in a robust software code (in MATLAB® language) able to estimate the AVAR in the case of missing data, unequally spaced data, outliers, and long periods of missing observations, so that the Allan variance estimates turn out to be unbiased and make the maximum use of all the available data.
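
    The core idea of gap tolerance can be sketched in a few lines: mark missing samples as NaN and discard only the differences they touch, instead of truncating the record. This is a minimal Python illustration, not the authors' MATLAB implementation:

```python
import numpy as np

def avar_gaps(y, m=1):
    """Overlapping Allan variance of frequency data y at averaging factor m,
    discarding every difference touched by a NaN (missing sample)."""
    y = np.asarray(y, dtype=float)
    ybar = np.convolve(y, np.ones(m) / m, mode="valid")  # NaNs poison averages
    d = ybar[m:] - ybar[:-m]
    d = d[~np.isnan(d)]                # keep only complete pairs of averages
    return 0.5 * np.mean(d**2)

rng = np.random.default_rng(7)
y = rng.standard_normal(100_000)
y[20_000:25_000] = np.nan              # a long period of missing observations
a1, a4 = avar_gaps(y, 1), avar_gaps(y, 4)
```

    For unit white frequency noise the estimates stay near 1/m despite the gap, whereas a naive mean over the raw record would simply return NaN.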

  1. On the Design of Attitude-Heading Reference Systems Using the Allan Variance.

    PubMed

    Hidalgo-Carrió, Javier; Arnold, Sascha; Poulakis, Pantelis

    2016-04-01

    The Allan variance is a method to characterize stochastic random processes. The technique was originally developed to characterize the stability of atomic clocks and has also been successfully applied to the characterization of inertial sensors. Inertial navigation systems (INS) can provide accurate results in a short time, which tend to rapidly degrade in longer time intervals. During the last decade, the performance of inertial sensors has significantly improved, particularly in terms of signal stability, mechanical robustness, and power consumption. The mass and volume of inertial sensors have also been significantly reduced, offering system-level design and accommodation advantages. This paper presents a complete methodology for the characterization and modeling of inertial sensors using the Allan variance, with direct application to navigation systems. Although the concept of sensor fusion is relatively straightforward, accurate characterization and sensor-information filtering is not a trivial task, yet they are essential for good performance. A complete and reproducible methodology utilizing the Allan variance, including all the intermediate steps, is described. An end-to-end (E2E) process for sensor-error characterization and modeling up to the final integration in the sensor-fusion scheme is explained in detail. The strength of this approach is demonstrated with representative tests on novel, high-grade inertial sensors. Experimental navigation results are presented from two distinct robotic applications: a planetary exploration rover prototype and an autonomous underwater vehicle (AUV).
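
    A small piece of such a characterization pipeline can be sketched numerically: for a gyro dominated by angle random walk (white rate noise), the Allan deviation follows σ(τ) = N/√τ, so the ARW coefficient N is read off at τ = 1 s. In the Python sketch below, the sample rate and N are made-up values, not figures from the paper:

```python
import numpy as np

def adev(omega, m):
    # overlapping Allan deviation of rate samples omega at averaging factor m
    ybar = np.convolve(omega, np.ones(m) / m, mode="valid")
    d = ybar[m:] - ybar[:-m]
    return np.sqrt(0.5 * np.mean(d**2))

# Simulated gyro with pure angle random walk (white rate noise).
fs = 100.0                 # sample rate in Hz (assumed)
N_true = 0.01              # ARW coefficient in deg/sqrt(s) (assumed)
rng = np.random.default_rng(8)
omega = N_true * np.sqrt(fs) * rng.standard_normal(500_000)

# On the -1/2 slope, sigma(tau) = N / sqrt(tau): read N at tau = 1 s.
m1 = int(fs)               # averaging factor giving tau = 1 s
N_est = adev(omega, m1)
```
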

  2. Investigation of Allan variance for determining noise spectral forms with application to microwave radiometry

    NASA Technical Reports Server (NTRS)

    Stanley, William D.

    1994-01-01

    An investigation of the Allan variance method as a possible means for characterizing fluctuations in radiometric noise diodes has been performed. The goal is to separate fluctuation components into white noise, flicker noise, and random-walk noise. The primary means is by discrete-time processing, and the study focused primarily on the digital processes involved. Noise satisfying the requirements was generated by direct convolution, fast Fourier transformation (FFT) processing in the time domain, and FFT processing in the frequency domain. Some of the numerous results obtained are presented along with the programs used in the study.
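
    Frequency-domain generation of such test noises can be sketched briefly: shape a white Gaussian spectrum by f^(-α/2) and inverse-transform. A Python sketch of the FFT method (a simple recipe under assumed unit sampling, not the study's exact programs):

```python
import numpy as np

def powerlaw_noise(n, alpha, rng):
    """Generate noise with power spectral density S(f) ~ f**(-alpha) by
    shaping a white Gaussian spectrum in the frequency domain:
    alpha = 0 gives white, 1 flicker, 2 random-walk noise."""
    freqs = np.fft.rfftfreq(n, d=1.0)
    spec = rng.standard_normal(len(freqs)) + 1j * rng.standard_normal(len(freqs))
    spec[1:] *= freqs[1:] ** (-alpha / 2.0)  # impose the f^-alpha power law
    spec[0] = 0.0                            # drop the DC component
    return np.fft.irfft(spec, n)

rng = np.random.default_rng(9)
x = powerlaw_noise(65_536, 1.0, rng)         # flicker (1/f) noise
```
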

  3. Application of Fast Dynamic Allan Variance for the Characterization of FOGs-Based Measurement While Drilling.

    PubMed

    Wang, Lu; Zhang, Chunxi; Gao, Shuang; Wang, Tao; Lin, Tie; Li, Xianmu

    2016-12-07

    The stability of a fiber optic gyroscope (FOG) in measurement while drilling (MWD) could vary with time because of changing temperature, high vibration, and sudden power failure. The dynamic Allan variance (DAVAR) is a sliding version of the Allan variance. It is a practical tool that could represent the non-stationary behavior of the gyroscope signal. Since the normal DAVAR takes too long to deal with long time series, a fast DAVAR algorithm has been developed to accelerate the computation speed. However, both the normal DAVAR algorithm and the fast algorithm become invalid for discontinuous time series. What is worse, the FOG-based MWD underground often keeps working for several days; retrieving the gyro data aboveground is not only very time-consuming, but the data are also sometimes discontinuous in the timeline. In this article, on the basis of the fast algorithm for DAVAR, we make a further advance in the fast algorithm (improved fast DAVAR) to extend the fast DAVAR to discontinuous time series. The improved fast DAVAR and the normal DAVAR are used respectively to characterize two sets of simulation data. The simulation results show that when the length of the time series is short, the improved fast DAVAR saves 78.93% of calculation time. When the length of the time series is long (6 × 10⁵ samples), the improved fast DAVAR reduces calculation time by 97.09%. Another set of simulation data with missing data is characterized by the improved fast DAVAR. Its simulation results prove that the improved fast DAVAR could successfully deal with discontinuous data. In the end, a vibration experiment with FOGs-based MWD has been implemented to validate the good performance of the improved fast DAVAR. The results of the experiment testify that the improved fast DAVAR not only shortens computation time, but could also analyze discontinuous time series.

  4. Application of Fast Dynamic Allan Variance for the Characterization of FOGs-Based Measurement While Drilling

    PubMed Central

    Wang, Lu; Zhang, Chunxi; Gao, Shuang; Wang, Tao; Lin, Tie; Li, Xianmu

    2016-01-01

    The stability of a fiber optic gyroscope (FOG) in measurement while drilling (MWD) could vary with time because of changing temperature, high vibration, and sudden power failure. The dynamic Allan variance (DAVAR) is a sliding version of the Allan variance. It is a practical tool that could represent the non-stationary behavior of the gyroscope signal. Since the normal DAVAR takes too long to deal with long time series, a fast DAVAR algorithm has been developed to accelerate the computation speed. However, both the normal DAVAR algorithm and the fast algorithm become invalid for discontinuous time series. What is worse, the FOG-based MWD underground often keeps working for several days; retrieving the gyro data aboveground is not only very time-consuming, but the data are also sometimes discontinuous in the timeline. In this article, on the basis of the fast algorithm for DAVAR, we make a further advance in the fast algorithm (improved fast DAVAR) to extend the fast DAVAR to discontinuous time series. The improved fast DAVAR and the normal DAVAR are used respectively to characterize two sets of simulation data. The simulation results show that when the length of the time series is short, the improved fast DAVAR saves 78.93% of calculation time. When the length of the time series is long (6 × 10⁵ samples), the improved fast DAVAR reduces calculation time by 97.09%. Another set of simulation data with missing data is characterized by the improved fast DAVAR. Its simulation results prove that the improved fast DAVAR could successfully deal with discontinuous data. In the end, a vibration experiment with FOGs-based MWD has been implemented to validate the good performance of the improved fast DAVAR. The results of the experiment testify that the improved fast DAVAR not only shortens computation time, but could also analyze discontinuous time series. PMID:27941600

  5. The periodogram and Allan variance reveal fractal exponents greater than unity in auditory-nerve spike trains.

    PubMed

    Lowen, S B; Teich, M C

    1996-06-01

    Auditory-nerve spike trains exhibit fractal behavior, and therefore traditional renewal-point-process models fail to describe them adequately. Previous measures of the fractal exponent of these spike trains are based on the Fano factor and consequently cannot exceed unity. Two estimates of the fractal exponent are considered which do not suffer from this limit: one derived from the Allan variance, which was developed by the authors, and one based on the periodogram. These measures indicate that fractal exponents do indeed exceed unity for some nerve-spike recordings from stimulated primary afferent cat auditory-nerve fibers.

  6. Nominal analysis of "variance".

    PubMed

    Weiss, David J

    2009-08-01

    Nominal responses are the natural way for people to report actions or opinions. Because nominal responses do not generate numerical data, they have been underutilized in behavioral research. On those occasions in which nominal responses are elicited, the responses are customarily aggregated over people or trials so that large-sample statistics can be employed. A new analysis is proposed that directly associates differences among responses with particular sources in factorial designs. A pair of nominal responses either matches or does not; when responses do not match, they vary. That analogue to variance is incorporated in the nominal analysis of "variance" (NANOVA) procedure, wherein the proportions of matches associated with sources play the same role as do sums of squares in an ANOVA. The NANOVA table is structured like an ANOVA table. The significance levels of the N ratios formed by comparing proportions are determined by resampling. Fictitious behavioral examples featuring independent groups and repeated measures designs are presented. A Windows program for the analysis is available.

  7. A Historical Perspective on the Development of the Allan Variances and Their Strengths and Weaknesses.

    PubMed

    Allan, David W; Levine, Judah

    2016-04-01

    Over the past 50 years, variances have been developed for characterizing the instabilities of precision clocks and oscillators. These instabilities are often modeled as nonstationary processes, and the variances have been shown to be well-behaved and to be unbiased, efficient descriptors of these types of processes. This paper presents a historical overview of the development of these variances. The time-domain and frequency-domain formulations are presented and their development is described. The strengths and weaknesses of these characterization metrics are discussed. These variances are also shown to be useful in other applications, such as in telecommunication.

  8. Analysis of Variance: Variably Complex

    ERIC Educational Resources Information Center

    Drummond, Gordon B.; Vowler, Sarah L.

    2012-01-01

    These authors have previously described how to use the "t" test to compare two groups. In this article, they describe the use of a different test, analysis of variance (ANOVA) to compare more than two groups. ANOVA is a test of group differences: do at least two of the means differ from each other? ANOVA assumes (1) normal distribution…
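
    The F statistic that ANOVA computes (between-group variance over within-group variance) takes only a few lines of NumPy. A toy sketch with made-up numbers, not the authors' examples:

```python
import numpy as np

def anova_f(*groups):
    """One-way ANOVA F statistic: between-group mean square divided by
    within-group mean square."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand)**2 for g in groups)
    ss_within = sum(((g - g.mean())**2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

F = anova_f([5.1, 4.9, 5.3], [5.0, 5.2, 4.8], [6.1, 6.3, 5.9])
```

    A large F (here the third group clearly sits apart) is then compared against the F distribution with (k-1, n-k) degrees of freedom for significance.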

  9. Warped functional analysis of variance.

    PubMed

    Gervini, Daniel; Carter, Patrick A

    2014-09-01

    This article presents an Analysis of Variance model for functional data that explicitly incorporates phase variability through a time-warping component, allowing for a unified approach to estimation and inference in the presence of amplitude and time variability. The focus is on single-random-factor models but the approach can be easily generalized to more complex ANOVA models. The behavior of the estimators is studied by simulation, and an application to the analysis of growth curves of flour beetles is presented. Although the model assumes a smooth latent process behind the observed trajectories, smoothness of the observed data is not required; the method can be applied to irregular time grids, which are common in longitudinal studies.

  10. Generalized analysis of molecular variance.

    PubMed

    Nievergelt, Caroline M; Libiger, Ondrej; Schork, Nicholas J

    2007-04-06

    Many studies in the fields of genetic epidemiology and applied population genetics are predicated on, or require, an assessment of the genetic background diversity of the individuals chosen for study. A number of strategies have been developed for assessing genetic background diversity. These strategies typically focus on genotype data collected on the individuals in the study, based on a panel of DNA markers. However, many of these strategies are either rooted in cluster analysis techniques, and hence suffer from problems inherent to the assignment of the biological and statistical meaning to resulting clusters, or have formulations that do not permit easy and intuitive extensions. We describe a very general approach to the problem of assessing genetic background diversity that extends the analysis of molecular variance (AMOVA) strategy introduced by Excoffier and colleagues some time ago. As in the original AMOVA strategy, the proposed approach, termed generalized AMOVA (GAMOVA), requires a genetic similarity matrix constructed from the allelic profiles of individuals under study and/or allele frequency summaries of the populations from which the individuals have been sampled. The proposed strategy can be used to either estimate the fraction of genetic variation explained by grouping factors such as country of origin, race, or ethnicity, or to quantify the strength of the relationship of the observed genetic background variation to quantitative measures collected on the subjects, such as blood pressure levels or anthropometric measures. Since the formulation of our test statistic is rooted in multivariate linear models, sets of variables can be related to genetic background in multiple regression-like contexts. GAMOVA can also be used to complement graphical representations of genetic diversity such as tree diagrams (dendrograms) or heatmaps. 
We examine features, advantages, and power of the proposed procedure and showcase its flexibility by using it to analyze a

  11. Formative Use of Intuitive Analysis of Variance

    ERIC Educational Resources Information Center

    Trumpower, David L.

    2013-01-01

    Students' informal inferential reasoning (IIR) is often inconsistent with the normative logic underlying formal statistical methods such as Analysis of Variance (ANOVA), even after instruction. In two experiments reported here, students' IIR was assessed using an intuitive ANOVA task at the beginning and end of a statistics course. In both…

  12. Directional variance analysis of annual rings

    NASA Astrophysics Data System (ADS)

    Kumpulainen, P.; Marjanen, K.

    2010-07-01

    Wood quality measurement methods are of increasing importance in the wood industry. The goal is to produce more high-quality products with higher market value than is produced today. One of the key factors for increasing the market value is to provide better measurements, yielding more information to support the decisions made later in the product chain. Strength and stiffness are important properties of wood. They are related to the mean annual ring width and its deviation. These indicators can be estimated from images taken from the log ends by two-dimensional power spectrum analysis. The spectrum analysis has been used successfully for images of pine. However, the annual rings in birch, for example, are less distinguishable, and the basic spectrum analysis method does not give reliable results. A novel method for local log end variance analysis based on the Radon transform is proposed. The directions and the positions of the annual rings can be estimated from local minimum and maximum variance estimates. Applying the spectrum analysis to the maximum local variance estimate instead of the original image produces a more reliable estimate of the annual ring width. The proposed method is not limited to log end analysis; it is usable in other two-dimensional random signal and texture analysis tasks.

  13. Analysis of Variance of Multiply Imputed Data.

    PubMed

    van Ginkel, Joost R; Kroonenberg, Pieter M

    2014-01-01

    As a procedure for handling missing data, multiple imputation consists of estimating the missing data multiple times to create several complete versions of an incomplete data set. All these data sets are analyzed by the same statistical procedure, and the results are pooled for interpretation. So far, no explicit rules for pooling F-tests of (repeated-measures) analysis of variance have been defined. In this paper we outline the appropriate procedure for the results of analysis of variance for multiply imputed data sets. It involves reformulating the ANOVA model as a regression model using effect coding of the predictors and applying already-existing combination rules for regression models. The proposed procedure is illustrated using three example data sets. The pooled results of these three examples provide plausible F- and p-values.

  14. Analysis of variance of microarray data.

    PubMed

    Ayroles, Julien F; Gibson, Greg

    2006-01-01

    Analysis of variance (ANOVA) is an approach used to identify differentially expressed genes in complex experimental designs. It is based on testing for the significance of the magnitude of effect of two or more treatments taking into account the variance within and between treatment classes. ANOVA is a highly flexible analytical approach that allows investigators to simultaneously assess the contributions of multiple factors to gene expression variation, including technical (dye, batch) effects and biological (sex, genotype, drug, time) ones, as well as interactions between factors. This chapter provides an overview of the theory of linear mixed modeling and the sequence of steps involved in fitting gene-specific models and discusses essential features of experimental design. Commercial and open-source software for performing ANOVA is widely available.
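    The between/within variance ratio described in this abstract can be sketched in a few lines. This is a generic one-way fixed-effects F statistic for illustration, not code from the chapter:

    ```python
    from statistics import fmean

    def one_way_anova_f(groups):
        """One-way fixed-effects ANOVA F statistic: between-group mean
        square divided by within-group mean square."""
        k = len(groups)
        n_total = sum(len(g) for g in groups)
        grand_mean = fmean(x for g in groups for x in g)
        means = [fmean(g) for g in groups]
        # Variance explained by the treatment classes
        ss_between = sum(len(g) * (m - grand_mean) ** 2
                         for g, m in zip(groups, means))
        # Residual variance within classes
        ss_within = sum((x - m) ** 2
                        for g, m in zip(groups, means) for x in g)
        return (ss_between / (k - 1)) / (ss_within / (n_total - k))
    ```

    Large F values indicate that the treatment means differ by more than the within-class noise would suggest; in practice one would compare F against the F distribution with (k − 1, n − k) degrees of freedom.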

  15. RISK ANALYSIS, ANALYSIS OF VARIANCE: GETTING MORE FROM OUR DATA

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Analysis of variance (ANOVA) and regression are common statistical techniques used to analyze agronomic experimental data and determine significant differences among yields due to treatments or other experimental factors. Risk analysis provides an alternate and complementary examination of the same...

  16. Allan Deviation Plot as a Tool for Quartz-Enhanced Photoacoustic Sensors Noise Analysis.

    PubMed

    Giglio, Marilena; Patimisco, Pietro; Sampaolo, Angelo; Scamarcio, Gaetano; Tittel, Frank K; Spagnolo, Vincenzo

    2016-04-01

    We report here on the use of the Allan deviation plot to analyze the long-term stability of a quartz-enhanced photoacoustic (QEPAS) gas sensor. The Allan plot provides information about the optimum averaging time for the QEPAS signal and allows the prediction of its ultimate detection limit. The Allan deviation can also be used to determine the main sources of noise coming from the individual components of the sensor. Quartz tuning fork thermal noise dominates for integration times up to 275 s, whereas at longer averaging times, the main contribution to the sensor noise originates from laser power instabilities.
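    As background for the Allan-plot analysis this abstract relies on, the Allan deviation can be computed directly from a sampled signal. Below is a minimal non-overlapping sketch of the two-sample statistic, not the sensor code used in the paper:

    ```python
    def allan_deviation(y, m):
        """Non-overlapping Allan deviation at averaging factor m:
        average y in blocks of m samples, then apply the two-sample
        variance sigma^2(tau) = 0.5 * mean((ybar[k+1] - ybar[k])**2)."""
        n_blocks = len(y) // m
        if n_blocks < 2:
            raise ValueError("need at least two averaging blocks")
        ybar = [sum(y[k * m:(k + 1) * m]) / m for k in range(n_blocks)]
        diffs = [(ybar[k + 1] - ybar[k]) ** 2 for k in range(n_blocks - 1)]
        return (0.5 * sum(diffs) / len(diffs)) ** 0.5
    ```

    Evaluating `allan_deviation` over a range of averaging factors and plotting the result on log-log axes gives the Allan plot; the minimum of the curve marks the optimum averaging time discussed above.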

  17. Correcting an analysis of variance for clustering.

    PubMed

    Hedges, Larry V; Rhoads, Christopher H

    2011-02-01

    A great deal of educational and social data arises from cluster sampling designs where clusters involve schools, classrooms, or communities. A mistake that is sometimes encountered in the analysis of such data is to ignore the effect of clustering and analyse the data as if it were based on a simple random sample. This typically leads to an overstatement of the precision of results and too liberal conclusions about precision and statistical significance of mean differences. This paper gives simple corrections to the test statistics that would be computed in an analysis of variance if clustering were (incorrectly) ignored. The corrections are multiplicative factors depending on the total sample size, the cluster size, and the intraclass correlation structure. For example, the corrected F statistic has Fisher's F distribution with reduced degrees of freedom. The corrected statistic reduces to the F statistic computed by ignoring clustering when the intraclass correlations are zero. It reduces to the F statistic computed using cluster means when the intraclass correlations are unity, and it is in between otherwise. A similar adjustment to the usual statistic for testing a linear contrast among group means is described.
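    The multiplicative nature of such corrections can be illustrated with the familiar design effect for balanced clusters. Note that this simplified deflation is an assumption for illustration only; the exact correction derived in the paper also adjusts the degrees of freedom of the F distribution:

    ```python
    def design_effect(cluster_size, icc):
        """Factor by which cluster sampling inflates the variance of a
        mean, for balanced clusters with intraclass correlation icc."""
        return 1.0 + (cluster_size - 1) * icc

    def corrected_f(f_naive, cluster_size, icc):
        """Hypothetical simplified correction: deflate an F statistic
        computed under a (wrong) simple-random-sample assumption by
        the design effect. Illustration only, not the paper's exact
        multiplicative factor."""
        return f_naive / design_effect(cluster_size, icc)
    ```

    With zero intraclass correlation the naive F is unchanged, matching the limiting behavior described in the abstract; as the intraclass correlation grows, the naive F is increasingly overstated.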

  18. Applications of non-parametric statistics and analysis of variance on sample variances

    NASA Technical Reports Server (NTRS)

    Myers, R. H.

    1981-01-01

    Nonparametric methods that are available for NASA-type applications are discussed. An attempt is made here to survey what can be used, to offer recommendations as to when each method would be applicable, and to compare the methods, when possible, with the usual normal-theory procedures available for the Gaussian analog. It is important here to point out the hypotheses being tested, the assumptions being made, and the limitations of the nonparametric procedures. The appropriateness of performing analysis of variance on sample variances is also discussed and studied. This procedure is followed in several NASA simulation projects. On the surface this would appear to be a reasonably sound procedure. However, the difficulties involved center around the normality problem and the basic homogeneous-variance assumption that is made in the usual analysis of variance problems. These difficulties are discussed and guidelines are given for using the methods.

  19. Functional analysis of variance for association studies.

    PubMed

    Vsevolozhskaya, Olga A; Zaykin, Dmitri V; Greenwood, Mark C; Wei, Changshuai; Lu, Qing

    2014-01-01

    While progress has been made in identifying common genetic variants associated with human diseases, for most common complex diseases the identified genetic variants account for only a small proportion of heritability. Challenges remain in finding additional unknown genetic variants predisposing to complex diseases. With the advance in next-generation sequencing technologies, sequencing studies have become commonplace in genetic research. The ongoing exome-sequencing and whole-genome-sequencing studies generate a massive amount of sequencing variants and allow researchers to comprehensively investigate their role in human diseases. The discovery of new disease-associated variants can be enhanced by utilizing powerful and computationally efficient statistical methods. In this paper, we propose a functional analysis of variance (FANOVA) method for testing an association of sequence variants in a genomic region with a qualitative trait. The FANOVA has a number of advantages: (1) it tests for a joint effect of gene variants, including both common and rare; (2) it fully utilizes linkage disequilibrium and genetic position information; and (3) it allows for either protective or risk-increasing causal variants. Through simulations, we show that FANOVA outperforms two popular methods, SKAT and a previously proposed method based on functional linear models (FLM), especially when the sample size of a study is small and/or sequence variants have low to moderate effects. We conduct an empirical study by applying the three methods (FANOVA, SKAT, and FLM) to sequencing data from the Dallas Heart Study. While SKAT and FLM respectively detected ANGPTL4 and ANGPTL3 as associated with obesity, FANOVA was able to identify both genes as associated with obesity.

  20. Analysis of variance of designed chromatographic data sets: The analysis of variance-target projection approach.

    PubMed

    Marini, Federico; de Beer, Dalene; Joubert, Elizabeth; Walczak, Beata

    2015-07-31

    Direct application of popular approaches, e.g., Principal Component Analysis (PCA) or Partial Least Squares (PLS) to chromatographic data originating from a well-designed experimental study including more than one factor is not recommended. In the case of a well-designed experiment involving two or more factors (crossed or nested), data are usually decomposed into the contributions associated with the studied factors (and with their interactions), and the individual effect matrices are then analyzed using, e.g., PCA, as in the case of ASCA (analysis of variance combined with simultaneous component analysis). As an alternative to the ASCA method, we propose the application of PLS followed by target projection (TP), which allows a one-factor representation of the model for each column in the design dummy matrix. PLS application follows after proper deflation of the experimental matrix, i.e., to what are called the residuals under the reduced ANOVA model. The proposed approach (ANOVA-TP) is well suited for the study of designed chromatographic data of complex samples. It allows testing of statistical significance of the studied effects, 'biomarker' identification, and enables straightforward visualization and accurate estimation of between- and within-class variance. The proposed approach has been successfully applied to a case study aimed at evaluating the effect of pasteurization on the concentrations of various phenolic constituents of rooibos tea of different quality grades and its outcomes have been compared to those of ASCA.

  1. Determining Sample Sizes for Precise Contrast Analysis with Heterogeneous Variances

    ERIC Educational Resources Information Center

    Jan, Show-Li; Shieh, Gwowen

    2014-01-01

    The analysis of variance (ANOVA) is one of the most frequently used statistical analyses in practical applications. Accordingly, the single and multiple comparison procedures are frequently applied to assess the differences among mean effects. However, the underlying assumption of homogeneous variances may not always be tenable. This study…

  2. Cyclostationary analysis with logarithmic variance stabilisation

    NASA Astrophysics Data System (ADS)

    Borghesani, Pietro; Shahriar, Md Rifat

    2016-03-01

    Second order cyclostationary (CS2) components in vibration or acoustic emission signals are typical symptoms of a wide variety of faults in rotating and alternating mechanical systems. The square envelope spectrum (SES), obtained via Hilbert transform of the original signal, is at the basis of the most common indicators used for detection of CS2 components. It has been shown that the SES is equivalent to an autocorrelation of the signal's discrete Fourier transform, and that CS2 components are a cause of high correlations in the frequency domain of the signal, thus resulting in peaks in the SES. Statistical tests have been proposed to determine if peaks in the SES are likely to belong to a normal variability in the signal or if they are proper symptoms of CS2 components. Despite the need for automated fault recognition and the theoretical soundness of these tests, this approach to machine diagnostics has been mostly neglected in industrial applications. In fact, in a series of experimental applications, even with proper pre-whitening steps, it has been found that healthy machines might produce high spectral correlations and therefore result in a highly biased SES distribution which might cause a series of false positives. In this paper a new envelope spectrum is defined, with the theoretical intent of rendering the hypothesis test variance-free. This newly proposed indicator will prove unbiased in case of multiple CS2 sources of spectral correlation, thus reducing the risk of false alarms.

  3. Relating the Hadamard Variance to MCS Kalman Filter Clock Estimation

    NASA Technical Reports Server (NTRS)

    Hutsell, Steven T.

    1996-01-01

    The Global Positioning System (GPS) Master Control Station (MCS) currently makes significant use of the Allan Variance. This two-sample variance equation has proven excellent as a handy, understandable tool, both for time-domain analysis of GPS cesium frequency standards and for fine-tuning the MCS's state estimation of these atomic clocks. The Allan Variance does not explicitly converge for the noise types of alpha less than or equal to minus 3 and can be greatly affected by frequency drift. Because GPS rubidium frequency standards exhibit non-trivial aging and aging-noise characteristics, the basic Allan Variance analysis must be augmented in order to (a) compensate for a dynamic frequency drift, and (b) characterize two additional noise types, specifically alpha = minus 3 and alpha = minus 4. As the GPS program progresses, we will utilize a larger percentage of rubidium frequency standards than ever before. Hence, GPS rubidium clock characterization will require more attention than ever before. The three-sample variance, commonly referred to as a renormalized Hadamard Variance, is unaffected by linear frequency drift, converges for alpha greater than minus 5, and thus has utility for modeling noise in GPS rubidium frequency standards. This paper demonstrates the potential of Hadamard Variance analysis in GPS operations, and presents an equation that relates the Hadamard Variance to the MCS's Kalman filter process noises.
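    The drift immunity claimed in this abstract follows directly from the second difference in the three-sample statistic. A minimal non-overlapping sketch, under the usual 1/6 normalization, is:

    ```python
    def hadamard_variance(y, m=1):
        """Non-overlapping Hadamard variance at averaging factor m:
        (1/6) * mean((ybar[k+2] - 2*ybar[k+1] + ybar[k])**2).
        The second difference cancels any linear frequency drift."""
        n_blocks = len(y) // m
        ybar = [sum(y[k * m:(k + 1) * m]) / m for k in range(n_blocks)]
        sd = [(ybar[k + 2] - 2 * ybar[k + 1] + ybar[k]) ** 2
              for k in range(n_blocks - 2)]
        return sum(sd) / (6 * len(sd))
    ```

    A pure linear ramp (constant frequency drift) yields exactly zero, whereas the two-sample Allan variance of the same data would not, which is why the Hadamard form suits drifting rubidium clocks.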

  4. Analysis of Variance: What Is Your Statistical Software Actually Doing?

    ERIC Educational Resources Information Center

    Li, Jian; Lomax, Richard G.

    2011-01-01

    Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs, mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…

  5. Exploratory Multivariate Analysis of Variance: Contrasts and Variables.

    ERIC Educational Resources Information Center

    Barcikowski, Robert S.; Elliott, Ronald S.

    The contribution of individual variables to overall multivariate significance in a multivariate analysis of variance (MANOVA) is investigated using a combination of canonical discriminant analysis and Roy-Bose simultaneous confidence intervals. Difficulties with this procedure are discussed, and its advantages are illustrated using examples based…

  6. Analysis of Variance Components for Genetic Markers with Unphased Genotypes.

    PubMed

    Wang, Tao

    2016-01-01

    An ANOVA type general multi-allele (GMA) model was proposed in Wang (2014) on analysis of variance components for quantitative trait loci or genetic markers with phased or unphased genotypes. In this study, by applying the GMA model, we further examine estimation of the genetic variance components for genetic markers with unphased genotypes based on a random sample from a study population. In one locus and two loci cases, we first derive the least square estimates (LSE) of model parameters in fitting the GMA model. Then we construct estimators of the genetic variance components for one marker locus in a Hardy-Weinberg disequilibrium population and two marker loci in an equilibrium population. Meanwhile, we explore the difference between the classical general linear model (GLM) and GMA based approaches in association analysis of genetic markers with quantitative traits. We show that the GMA model can retain the same partition on the genetic variance components as the traditional Fisher's ANOVA model, while the GLM cannot. We clarify that the standard F-statistics based on the partial reductions in sums of squares from GLM for testing the fixed allelic effects could be inadequate for testing the existence of the variance component when allelic interactions are present. We point out that the GMA model can reduce the confounding between the allelic effects and allelic interactions at least for independent alleles. As a result, the GMA model could be more beneficial than GLM for detecting allelic interactions.

  7. Intuitive Analysis of Variance-- A Formative Assessment Approach

    ERIC Educational Resources Information Center

    Trumpower, David

    2013-01-01

    This article describes an assessment activity that can show students how much they intuitively understand about statistics, but also alert them to common misunderstandings. How the activity can be used formatively to help improve students' conceptual understanding of analysis of variance is discussed. (Contains 1 figure and 1 table.)

  8. Analysis of variance in spectroscopic imaging data from human tissues.

    PubMed

    Kwak, Jin Tae; Reddy, Rohith; Sinha, Saurabh; Bhargava, Rohit

    2012-01-17

    The analysis of cell types and disease using Fourier transform infrared (FT-IR) spectroscopic imaging is promising. The approach lacks an appreciation of the limits of performance for the technology, however, which limits both researcher efforts in improving the approach and acceptance by practitioners. One factor limiting performance is the variance in data arising from biological diversity, measurement noise or other sources. Here we identify the sources of variation by first employing a high-throughput sampling platform of tissue microarrays (TMAs) to record a sufficiently large and diverse set of data. Next, a comprehensive set of analysis of variance (ANOVA) models is employed to analyze the data. By estimating the portions of explained variation, we quantify the primary sources of variation, find the most discriminating spectral metrics, and identify the aspects of the technology to improve. The study provides a framework for the development of protocols for clinical translation and provides guidelines for designing statistically valid studies in the spectroscopic analysis of tissue.

  9. Analysis of Variance in the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    Deloach, Richard

    2010-01-01

    This paper is a tutorial introduction to the analysis of variance (ANOVA), intended as a reference for aerospace researchers who are being introduced to the analytical methods of the Modern Design of Experiments (MDOE), or who may have other opportunities to apply this method. One-way and two-way fixed-effects ANOVA, as well as random effects ANOVA, are illustrated in practical terms that will be familiar to most practicing aerospace researchers.

  10. Analysis of micrometeorological data using a two sample variance

    NASA Astrophysics Data System (ADS)

    Werle, Peter; Falge, Eva

    2010-05-01

    In ecosystem research, infrared gas analyzers are increasingly used to measure fluxes of carbon dioxide, water vapour, methane, nitrous oxide and even stable carbon isotopes. As these complex measurement devices cannot be considered absolutely stable under field conditions, drift characterisation is an issue in distinguishing between atmospheric data and sensor drift. In this paper the concept of the two-sample variance is utilized, in analogy to previous stability investigations, to characterize the stationarity of both spectroscopic measurements of concentration time series and micrometeorological data in the time domain, which is a prerequisite for covariance calculations. As an example, the method is applied to assess the time constant for detrending of time series data and the optimum trace gas flux integration time. The method described here provides information similar to existing characterizations such as the ogive analysis, the normalized error variance of the second-order moment, and the spectral characteristics of turbulence in the inertial subrange. The method is easy to implement and is therefore well suited as a routine data-quality check for both new practitioners and experts in the field. Werle, P., Time domain characterization of micrometeorological data based on a two sample variance. Agric. Forest Meteorol. (2010), doi:10.1016/j.agrformet.2009.12.007

  11. FMRI group analysis combining effect estimates and their variances

    PubMed Central

    Chen, Gang; Saad, Ziad S.; Nath, Audrey R.; Beauchamp, Michael S.; Cox, Robert W.

    2012-01-01

    Conventional functional magnetic resonance imaging (FMRI) group analysis makes two key assumptions that are not always justified. First, the data from each subject is condensed into a single number per voxel, under the assumption that within-subject variance for the effect of interest is the same across all subjects or is negligible relative to the cross-subject variance. Second, it is assumed that all data values are drawn from the same Gaussian distribution with no outliers. We propose an approach that does not make such strong assumptions, and present a computationally efficient frequentist approach to FMRI group analysis, which we term mixed-effects multilevel analysis (MEMA), that incorporates both the variability across subjects and the precision estimate of each effect of interest from individual subject analyses. On average, the more accurate tests result in higher statistical power, especially when conventional variance assumptions do not hold, or in the presence of outliers. In addition, various heterogeneity measures are available with MEMA that may assist the investigator in further improving the modeling. Our method allows group effect t-tests and comparisons among conditions and among groups. In addition, it has the capability to incorporate subject-specific covariates such as age, IQ, or behavioral data. Simulations were performed to illustrate power comparisons and the capability of controlling type I errors among various significance testing methods, and the results indicated that the testing statistic we adopted struck a good balance between power gain and type I error control. Our approach is instantiated in an open-source, freely distributed program that may be used on any dataset stored in the universal neuroimaging file transfer (NIfTI) format. To date, the main impediment for more accurate testing that incorporates both within- and cross-subject variability has been the high computational cost. 
Our efficient implementation makes this approach

  12. Correct use of repeated measures analysis of variance.

    PubMed

    Park, Eunsik; Cho, Meehye; Ki, Chang-Seok

    2009-02-01

    In biomedical research, researchers frequently use statistical procedures such as the t-test, standard analysis of variance (ANOVA), or the repeated measures ANOVA to compare means between the groups of interest. There are frequently some misuses in applying these procedures since the conditions of the experiments or statistical assumptions necessary to apply these procedures are not fully taken into consideration. In this paper, we demonstrate the correct use of repeated measures ANOVA to prevent or minimize ethical or scientific problems due to its misuse. We also describe the appropriate use of multiple comparison tests for follow-up analysis in repeated measures ANOVA. Finally, we demonstrate the use of repeated measures ANOVA by using real data and the statistical software package SPSS (SPSS Inc., USA).

  13. Analysis of variance of an underdetermined geodetic displacement problem

    SciTech Connect

    Darby, D.

    1982-06-01

    It has been suggested recently that point displacements in a free geodetic network traversing a strike-slip fault may be estimated from repeated surveys by minimizing only those displacement components normal to the strike. It is desirable to justify this procedure. We construct, from estimable quantities, a deformation parameter which is an F-statistic of the type occurring in the analysis of variance of linear models not of full rank. A test of its significance provides the criterion to justify the displacement solution. It is also interesting to study its behaviour as one varies the supposed strike of the fault. Justification of a displacement solution using data from a strike-slip fault is found, but not for data from a rift valley. The technique can be generalized to more complex patterns of deformation such as those expected near the end-zone of a fault in a dislocation model.

  14. Objective Bayesian Comparison of Constrained Analysis of Variance Models.

    PubMed

    Consonni, Guido; Paroli, Roberta

    2016-10-04

    In the social sciences we are often interested in comparing models specified by parametric equality or inequality constraints. For instance, when examining three group means [Formula: see text] through an analysis of variance (ANOVA), a model may specify that [Formula: see text], while another one may state that [Formula: see text], and finally a third model may instead suggest that all means are unrestricted. This is a challenging problem, because it involves a combination of nonnested models, as well as nested models having the same dimension. We adopt an objective Bayesian approach, requiring no prior specification from the user, and derive the posterior probability of each model under consideration. Our method is based on the intrinsic prior methodology, suitably modified to accommodate equality and inequality constraints. Focussing on normal ANOVA models, a comparative assessment is carried out through simulation studies. We also present an application to real data collected in a psychological experiment.

  15. The use of analysis of variance procedures in biological studies

    USGS Publications Warehouse

    Williams, B.K.

    1987-01-01

    The analysis of variance (ANOVA) is widely used in biological studies, yet there remains considerable confusion among researchers about the interpretation of hypotheses being tested. Ambiguities arise when statistical designs are unbalanced, and in particular when not all combinations of design factors are represented in the data. This paper clarifies the relationship among hypothesis testing, statistical modelling and computing procedures in ANOVA for unbalanced data. A simple two-factor fixed effects design is used to illustrate three common parametrizations for ANOVA models, and some associations among these parametrizations are developed. Biologically meaningful hypotheses for main effects and interactions are given in terms of each parametrization, and procedures for testing the hypotheses are described. The standard statistical computing procedures in ANOVA are given along with their corresponding hypotheses. Throughout the development unbalanced designs are assumed and attention is given to problems that arise with missing cells.

  16. Beyond the GUM: variance-based sensitivity analysis in metrology

    NASA Astrophysics Data System (ADS)

    Lira, I.

    2016-07-01

    Variance-based sensitivity analysis is a well established tool for evaluating the contribution of the uncertainties in the inputs to the uncertainty in the output of a general mathematical model. While the literature on this subject is quite extensive, it has not found widespread use in metrological applications. In this article we present a succinct review of the fundamentals of sensitivity analysis, in a form that should be useful to most people familiar with the Guide to the Expression of Uncertainty in Measurement (GUM). Through two examples, it is shown that in linear measurement models, no new knowledge is gained by using sensitivity analysis that is not already available after the terms in the so-called ‘law of propagation of uncertainties’ have been computed. However, if the model behaves non-linearly in the neighbourhood of the best estimates of the input quantities—and if these quantities are assumed to be statistically independent—sensitivity analysis is definitely advantageous for gaining insight into how they can be ranked according to their importance in establishing the uncertainty of the measurand.
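    The first-order (main-effect) index at the heart of variance-based sensitivity analysis, S_i = Var(E[Y | X_i]) / Var(Y), can be estimated by brute-force Monte Carlo when the inputs are independent. This is a generic sketch, not code from the article, and the function names are ours:

    ```python
    import random
    from statistics import fmean, pvariance

    def first_order_index(model, sample_xi, sample_rest,
                          n_outer=200, n_inner=200):
        """Estimate S_i = Var(E[Y | X_i]) / Var(Y) by nested sampling:
        draw X_i, average the model over the remaining inputs, and
        take the variance of those conditional means."""
        cond_means, all_y = [], []
        for _ in range(n_outer):
            xi = sample_xi()
            ys = [model(xi, sample_rest()) for _ in range(n_inner)]
            cond_means.append(fmean(ys))
            all_y.extend(ys)
        return pvariance(cond_means) / pvariance(all_y)
    ```

    For the linear model Y = X1 + X2 with X1, X2 ~ U(0, 1), both first-order indices are 0.5, consistent with the abstract's point that for linear models sensitivity analysis reproduces what the law of propagation of uncertainties already gives.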

  17. UV Spectral Fingerprinting and Analysis of Variance-Principal Component Analysis: A Tool for Characterizing Sources of Variance in Plant Materials

    Technology Transfer Automated Retrieval System (TEKTRAN)

    UV spectral fingerprints, in combination with analysis of variance-principal components analysis (ANOVA-PCA), were used to identify sources of variance in 7 broccoli samples composed of two cultivars and seven different growing conditions (four levels of Se irrigation, organic farming, and convention...

  18. Developing the Noncentrality Parameter for Calculating Group Sample Sizes in Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2011-01-01

    Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied to heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…

  19. Local variance for multi-scale analysis in geomorphometry

    PubMed Central

    Drăguţ, Lucian; Eisank, Clemens; Strasser, Thomas

    2011-01-01

    Increasing availability of high resolution Digital Elevation Models (DEMs) is leading to a paradigm shift regarding scale issues in geomorphometry, prompting new solutions to cope with multi-scale analysis and detection of characteristic scales. We tested the suitability of the local variance (LV) method, originally developed for image analysis, for multi-scale analysis in geomorphometry. The method consists of: 1) up-scaling land-surface parameters derived from a DEM; 2) calculating LV as the average standard deviation (SD) within a 3 × 3 moving window for each scale level; 3) calculating the rate of change of LV (ROC-LV) from one level to another, and 4) plotting values so obtained against scale levels. We interpreted peaks in the ROC-LV graphs as markers of scale levels where cells or segments match types of pattern elements characterized by (relatively) equal degrees of homogeneity. The proposed method has been applied to LiDAR DEMs in two test areas different in terms of roughness: low relief and mountainous, respectively. For each test area, scale levels for slope gradient, plan, and profile curvatures were produced at constant increments with either resampling (cell-based) or image segmentation (object-based). Visual assessment revealed homogeneous areas that convincingly associate into patterns of land-surface parameters well differentiated across scales. We found that the LV method performed better on scale levels generated through segmentation as compared to up-scaling through resampling. The results indicate that coupling multi-scale pattern analysis with delineation of morphometric primitives is possible. This approach could be further used for developing hierarchical classifications of landform elements. PMID:21779138
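The four-step procedure above can be sketched with NumPy. This is a simplified illustration under our own assumptions (in particular, the exact ROC-LV formula, taken here as the percent change between successive scale levels), not the authors' implementation:

```python
import numpy as np

def local_variance(surface):
    """LV at one scale level: mean standard deviation within a 3x3
    moving window, evaluated over the interior cells of the grid."""
    s = np.asarray(surface, dtype=float)
    h, w = s.shape
    # Stack the nine shifted copies that make up each 3x3 neighbourhood,
    # then take the per-cell SD across them and average over the grid.
    windows = np.stack([s[i:i + h - 2, j:j + w - 2]
                        for i in range(3) for j in range(3)])
    return windows.std(axis=0).mean()

def roc_lv(lv_values):
    """Rate of change of LV from one scale level to the next, in percent;
    peaks in this curve mark candidate characteristic scales."""
    lv = np.asarray(lv_values, dtype=float)
    return 100.0 * (lv[1:] - lv[:-1]) / lv[:-1]
```

In the workflow described above, `local_variance` would be applied to a land-surface parameter (e.g. slope gradient) at each scale level, and the resulting ROC-LV values plotted against scale level.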

  20. Analysis of variance (ANOVA) models in lower extremity wounds.

    PubMed

    Reed, James F

    2003-06-01

    Consider a study in which 2 new treatments are being compared with a control group. One way to compare outcomes would simply be to compare the 2 treatments with the control and the 2 treatments against each other using 3 Student t tests (t test). If we were to compare 4 treatment groups, then we would need 6 t tests. The difficulty with using multiple t tests is that as the number of groups increases, so does the likelihood of finding a difference between some pair of groups simply by chance when no real difference exists (by definition, a Type I error). If we were to perform 3 separate t tests, each at alpha = .05, the experiment-wise error rate increases to .14, and as the number of t tests grows it rises rather rapidly. The solution to this problem is to use analysis of variance (ANOVA) methods. Three basic ANOVA designs are reviewed, with hypothetical examples drawn from the literature to illustrate single-factor ANOVA, repeated measures ANOVA, and randomized block ANOVA. "No frills" SPSS or SAS code for each of these designs and the examples used are available from the author on request.
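The experiment-wise error-rate arithmetic above, and the single omnibus F test that avoids it, can both be sketched in a few lines of Python (a from-scratch illustration; the function names are ours, and the paper's own examples use SPSS or SAS):

```python
def familywise_error_rate(k, alpha=0.05):
    """P(at least one false positive) across k independent tests,
    each performed at significance level alpha: 1 - (1 - alpha)**k."""
    return 1.0 - (1.0 - alpha) ** k

def one_way_anova_f(*groups):
    """F statistic for a single-factor ANOVA, computed from first principles."""
    all_obs = [x for g in groups for x in g]
    grand_mean = sum(all_obs) / len(all_obs)
    # Between-groups mean square: group-mean deviations, weighted by group size
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ms_between = ss_between / (len(groups) - 1)
    # Within-groups mean square: deviations of observations from their group mean
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    ms_within = ss_within / (len(all_obs) - len(groups))
    return ms_between / ms_within

# Three pairwise t tests at alpha = .05 inflate the experiment-wise error
# rate to 1 - 0.95**3 = 0.142625, i.e. the ".14" quoted in the abstract.
rate = familywise_error_rate(3)
```

The omnibus F test compares all groups in a single test at the nominal alpha, which is exactly how ANOVA sidesteps the error-rate inflation described above.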

  1. A model selection approach to analysis of variance and covariance.

    PubMed

    Alber, Susan A; Weiss, Robert E

    2009-06-15

    An alternative to analysis of variance is a model selection approach where every partition of the treatment means into clusters with equal value is treated as a separate model. The null hypothesis that all treatments are equal corresponds to the partition with all means in a single cluster. The alternative hypothesis corresponds to the set of all other partitions of treatment means. A model selection approach can also be used for a treatment by covariate interaction, where the null hypothesis and each alternative correspond to a partition of treatments into clusters with equal covariate effects. We extend the partition-as-model approach to simultaneous inference for both treatment main effect and treatment interaction with a continuous covariate, with separate partitions for the intercepts and treatment-specific slopes. The model space is the Cartesian product of the intercept partition and the slope partition, and we develop five joint priors for this model space. In four of these priors the intercept and slope partitions are dependent. We advise on setting priors over models, and we use the model to analyze an orthodontic data set that compares the frictional resistance created by orthodontic fixtures.

  2. Analysis of variance in neuroreceptor ligand imaging studies.

    PubMed

    Ko, Ji Hyun; Reilhac, Anthonin; Ray, Nicola; Rusjan, Pablo; Bloomfield, Peter; Pellecchia, Giovanna; Houle, Sylvain; Strafella, Antonio P

    2011-01-01

    Radioligand positron emission tomography (PET) with dual scan paradigms can provide valuable insight into changes in synaptic neurotransmitter concentration due to experimental manipulation. The residual t-test has been utilized to improve the sensitivity of the t-test in PET studies. However, no further development of statistical tests using residuals has been proposed so far to be applied in cases when there are more than two conditions. Here, we propose the residual f-test, a one-way analysis of variance (ANOVA), and examine its feasibility using simulated [(11)C]raclopride PET data. We also revisit data from our previously published [(11)C]raclopride PET study, in which 10 individuals underwent three PET scans under different conditions. We found that the residual f-test is more sensitive than the conventional f-test while still controlling the type I error rate. The test will therefore allow us to reliably test hypotheses with the smaller sample sizes often used in exploratory PET studies.

  3. A fast minimum variance beamforming method using principal component analysis.

    PubMed

    Kim, Kyuhong; Park, Suhyun; Kim, Jungho; Park, Sung-Bae; Bae, MooHo

    2014-06-01

    Minimum variance (MV) beamforming has been studied for improving the performance of a diagnostic ultrasound imaging system. However, it is not easy for the MV beamforming to be implemented in a real-time ultrasound imaging system because of the enormous amount of computation time associated with the covariance matrix inversion. In this paper, to address this problem, we propose a new fast MV beamforming method that almost optimally approximates the MV beamforming while reducing the computational complexity greatly through dimensionality reduction using principal component analysis (PCA). The principal components are estimated offline from pre-calculated conventional MV weights. Thus, the proposed method does not directly calculate the MV weights but approximates them by a linear combination of a few selected dominant principal components. The combinational weights are calculated in almost the same way as in MV beamforming, but in the transformed domain of beamformer input signal by the PCA, where the dimension of the transformed covariance matrix is identical to the number of some selected principal component vectors. Both computer simulation and experiment were carried out to verify the effectiveness of the proposed method with echo signals from simulation as well as phantom and in vivo experiments. It is confirmed that our method can reduce the dimension of the covariance matrix down to as low as 2 × 2 while maintaining the good image quality of MV beamforming.
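The central idea, replacing each MV weight vector with its projection onto a few dominant principal components estimated offline, can be sketched with a plain SVD. This is a hypothetical illustration of the dimensionality-reduction step only (the centring choice and function names are our assumptions), not the paper's full beamformer:

```python
import numpy as np

def pca_basis(weight_set, k):
    """Offline step: mean and top-k principal components of a matrix
    whose rows are pre-computed conventional MV weight vectors."""
    W = np.asarray(weight_set, dtype=float)
    mean = W.mean(axis=0)
    # Rows of vt are the orthonormal principal directions of the centred data.
    _, _, vt = np.linalg.svd(W - mean, full_matrices=False)
    return mean, vt[:k]

def approximate(weights, mean, components):
    """Approximate one weight vector by its projection onto the reduced
    basis, i.e. a linear combination of the dominant components."""
    coeffs = components @ (weights - mean)
    return mean + coeffs @ components
```

If the weight vectors truly lie near a k-dimensional subspace, the projection error is small, which is the sense in which a small transformed covariance matrix (e.g. 2 × 2) can stand in for the full one.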

  4. Variance estimation in the analysis of microarray data.

    PubMed

    Wang, Yuedong; Ma, Yanyuan; Carroll, Raymond J

    2009-04-01

    Microarrays are one of the most widely used high throughput technologies. One of the main problems in the area is that conventional estimates of the variances that are required in the t-statistic and other statistics are unreliable owing to the small number of replications. Various methods have been proposed in the literature to overcome this lack of degrees of freedom problem. In this context, it is commonly observed that the variance increases proportionally with the intensity level, which has led many researchers to assume that the variance is a function of the mean. Here we concentrate on estimation of the variance as a function of an unknown mean in two models: the constant coefficient of variation model and the quadratic variance-mean model. Because the means are unknown and estimated with few degrees of freedom, naive methods that use the sample mean in place of the true mean are generally biased because of the errors-in-variables phenomenon. We propose three methods for overcoming this bias. The first two are variations on the theme of the so-called heteroscedastic simulation-extrapolation estimator, modified to estimate the variance function consistently. The third class of estimators is entirely different, being based on semiparametric information calculations. Simulations show the power of our methods and their lack of bias compared with the naive method that ignores the measurement error. The methodology is illustrated by using microarray data from leukaemia patients.

  5. Partitioning Predicted Variance into Constituent Parts: A Primer on Regression Commonality Analysis.

    ERIC Educational Resources Information Center

    Amado, Alfred J.

    Commonality analysis is a method of decomposing the R squared in a multiple regression analysis into the proportion of explained variance of the dependent variable associated with each independent variable uniquely and the proportion of explained variance associated with the common effects of one or more independent variables in various…

  6. An Analysis of Variance Approach for the Estimation of Response Time Distributions in Tests

    ERIC Educational Resources Information Center

    Attali, Yigal

    2010-01-01

    Generalizability theory and analysis of variance methods are employed, together with the concept of objective time pressure, to estimate response time distributions and the degree of time pressure in timed tests. By estimating response time variance components due to person, item, and their interaction, and fixed effects due to item types and…

  7. Meta-analysis with missing study-level sample variance data.

    PubMed

    Chowdhry, Amit K; Dworkin, Robert H; McDermott, Michael P

    2016-07-30

    We consider a study-level meta-analysis with a normally distributed outcome variable and possibly unequal study-level variances, where the object of inference is the difference in means between a treatment and control group. A common complication in such an analysis is missing sample variances for some studies. A frequently used approach is to impute the weighted (by sample size) mean of the observed variances (mean imputation). Another approach is to include only those studies with variances reported (complete case analysis). Both mean imputation and complete case analysis are only valid under the missing-completely-at-random assumption, and even then the inverse variance weights produced are not necessarily optimal. We propose a multiple imputation method employing gamma meta-regression to impute the missing sample variances. Our method takes advantage of study-level covariates that may be used to provide information about the missing data. Through simulation studies, we show that multiple imputation, when the imputation model is correctly specified, is superior to competing methods in terms of confidence interval coverage probability and type I error probability when testing a specified group difference. Finally, we describe a similar approach to handling missing variances in cross-over studies. Copyright © 2016 John Wiley & Sons, Ltd.
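As background to why missing variances matter: the standard fixed-effect pooled estimate weights each study by the inverse of its variance, so a study without a reported variance contributes no weight at all. A minimal sketch (illustrative only, not the multiple-imputation method proposed in the paper):

```python
def inverse_variance_pooled(effects, variances):
    """Fixed-effect meta-analytic estimate: weight each study's effect
    by 1/variance; the variance of the pooled estimate is 1/(sum of weights).
    A study with a missing variance has no weight, which is why the paper
    imputes missing variances rather than discarding those studies."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return pooled, 1.0 / sum(weights)

# Two equally precise studies: the pooled effect is their simple average.
est, var = inverse_variance_pooled([1.0, 3.0], [1.0, 1.0])   # est 2.0, var 0.5
```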

  8. Allan Cox 1926–1987

    NASA Astrophysics Data System (ADS)

    Coe, Rob; Dalrymple, Brent

    More than 1000 friends, students, and colleagues from all over the country filled Stanford Memorial Chapel (Stanford, Calif.) on February 3, 1987, to join in “A Celebration of the Life of Allan Cox.” Allan died early on the morning of January 27 while bicycling, the sport he had come to love the most. Between pieces of his favorite music by Bach and Mozart, Stanford administrators and colleagues spoke in tribute of Allan's unique qualities as friend, scientist, teacher, and dean of the School of Earth Sciences. James Rosse, Vice President and Provost of Stanford University, struck a particularly resonant chord with his personal remarks: "Allan reached out to each person he knew with the warmth and attention that can only come from deep respect and affection for others. I never heard him speak ill of others, and I do not believe he was capable of doing anything that would harm another being. He cared too much to intrude where he was not wanted, but his curiosity about people and the loving care with which he approached them broke down reserve to create remarkable friendships. His enthusiasm and good humor made him a welcome guest in the hearts of the hundreds of students and colleagues who shared the opportunity of knowing Allan Cox as a person."

  9. Inheritance of dermatoglyphic traits in twins: univariate and bivariate variance decomposition analysis.

    PubMed

    Karmakar, Bibha; Malkin, Ida; Kobyliansky, Eugene

    2012-01-01

    Dermatoglyphic traits in a sample of twins were analyzed to estimate the resemblance between MZ and DZ twins and to evaluate the mode of inheritance using maximum likelihood-based variance decomposition analysis. The additive genetic variance component was significant in both sexes for four traits: PII, AB_RC, RC_HB, and ATD_L. AB_RC and RC_HB had significant sex differences in means, whereas PII and ATD_L did not. The bivariate variance decomposition analysis revealed that PII and RC_HB have a significant correlation in both the genetic and residual components, and a significant correlation in the additive genetic variance was observed between AB_RC and ATD_L. The same analysis on the female subsample for the three traits RBL, RBR, and AB_DIS showed that the additive genetic component for RBR was significant, the sibling component for AB_DIS was not, and the other components could not be constrained to zero. All three components (additive, sibling, and residual) were significantly correlated between each pair of traits in the bivariate variance decomposition analysis.

  10. Variance Analysis and Comparison in Computer-Aided Design

    NASA Astrophysics Data System (ADS)

    Ullrich, T.; Schiffer, T.; Schinko, C.; Fellner, D. W.

    2011-09-01

    The need to analyze and visualize differences between very similar objects arises in many research areas: mesh compression, scan alignment, nominal/actual value comparison, quality management, and surface reconstruction, to name a few. In computer graphics, for example, differences of surfaces are used for analyzing mesh processing algorithms such as mesh compression. They are also used to validate reconstruction and fitting results of laser-scanned surfaces. As laser scanning has become very important for the acquisition and preservation of artifacts, scanned representations are used for documentation as well as analysis of ancient objects. Detailed mesh comparisons can reveal even the smallest changes and damage. These analysis and documentation tasks are needed not only in the context of cultural heritage but also in engineering and manufacturing, where differences of surfaces are analyzed to check the quality of production. Our contribution to this problem is a workflow that compares a reference (nominal) surface with an actual, laser-scanned data set. The reference surface is a procedural model whose accuracy and systematics describe the semantic properties of an object, whereas the laser-scanned object is a real-world data set without any additional semantic information.

  11. Comparison of Variance Component Estimators in Geodetics Science through Noise Analysis

    NASA Astrophysics Data System (ADS)

    Rasi, J.; Razeghi, S. M.

    2012-04-01

    In this article, we examine and compare existing variance component estimation methods in geodetic science. First, the statistical and functional models are defined and the effects of errors on these (co)variance models are assessed. We arrive at five (co)variance estimation methods for geodetic problems: the Helmert method, the Best Invariant Quadratic Unbiased Estimator (BIQUE), the Minimum Norm Quadratic Unbiased Estimator (MINQUE), Least Squares (LS), and Restricted Maximum Likelihood (REML). These methods involve different statistical and functional models; the first three are similar to one another, as are the last two. In examining the statistical models, we found that BIQUE and REML depend on the distribution function, so the observation distribution must be specified for them. A numerical analysis was required to compare the estimators, so noise assessment of GPS time series was selected and the results of the five (co)variance estimation methods were compared. The results showed that when the observations are normally distributed, all five methods give the same results; the main differences appear after the (co)variances are estimated. For example, when a variance estimate is negative (which is statistically anomalous), only the LS method has a solution and enough flexibility for such problems. Compared with the other methods, LS and REML also provide better precision in some components. It can be concluded that for noise assessment of GPS time series, the LSVCE and REMLVCE methods are preferred.

  12. A Bayesian Solution for Two-Way Analysis of Variance. ACT Technical Bulletin No. 8.

    ERIC Educational Resources Information Center

    Lindley, Dennis V.

    The standard statistical analysis of data classified in two ways (say into rows and columns) is through an analysis of variance that splits the total variation of the data into the main effect of rows, the main effect of columns, and the interaction between rows and columns. This paper presents an alternative Bayesian analysis of the same…

  13. On the Relations among Regular, Equal Unique Variances, and Image Factor Analysis Models.

    ERIC Educational Resources Information Center

    Hayashi, Kentaro; Bentler, Peter M.

    2000-01-01

    Investigated the conditions under which the matrix of factor loadings from the factor analysis model with equal unique variances will give a good approximation to the matrix of factor loadings from the regular factor analysis model. Extends the results to the image factor analysis model. Discusses implications for practice. (SLD)

  14. UV spectral fingerprinting and analysis of variance-principal component analysis: a useful tool for characterizing sources of variance in plant materials.

    PubMed

    Luthria, Devanand L; Mukhopadhyay, Sudarsan; Robbins, Rebecca J; Finley, John W; Banuelos, Gary S; Harnly, James M

    2008-07-23

    UV spectral fingerprints, in combination with analysis of variance-principal components analysis (ANOVA-PCA), can differentiate between cultivars and growing conditions (or treatments) and can be used to identify sources of variance. Broccoli samples, composed of two cultivars, were grown under seven different conditions or treatments (four levels of Se-enriched irrigation waters, organic farming, and conventional farming with 100 and 80% irrigation based on crop evaporation and transpiration rate). Freeze-dried powdered samples were extracted with methanol-water (60:40, v/v) and analyzed with no prior separation. Spectral fingerprints were acquired for the UV region (220-380 nm) using a 50-fold dilution of the extract. ANOVA-PCA was used to construct subset matrices that permitted easy verification of the hypothesis that cultivar and treatment contributed to a difference in the chemical expression of the broccoli. The sums of squares of the same matrices were used to show that cultivar, treatment, and analytical repeatability contributed 30.5, 68.3, and 1.2% of the variance, respectively.
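The sums-of-squares bookkeeping behind a split like 30.5/68.3/1.2% can be sketched for a balanced two-factor layout with a single response. This is a simplified illustration under our own assumptions (the paper's ANOVA-PCA operates on full spectral matrices, and here any interaction is lumped into the residual term):

```python
import numpy as np

def variance_percentages(data):
    """Percent of the total (corrected) sum of squares attributable to each
    factor in a balanced two-factor layout data[cultivar, treatment, replicate].
    Interaction and repeatability are lumped together as the residual."""
    grand = data.mean()
    # Main effects: deviation of each factor-level mean from the grand mean.
    cult_eff = data.mean(axis=(1, 2), keepdims=True) - grand
    trt_eff = data.mean(axis=(0, 2), keepdims=True) - grand
    ss_total = np.sum((data - grand) ** 2)
    # Broadcasting the effects to the full array counts each cell once.
    ss_cult = np.sum(np.broadcast_to(cult_eff, data.shape) ** 2)
    ss_trt = np.sum(np.broadcast_to(trt_eff, data.shape) ** 2)
    ss_resid = ss_total - ss_cult - ss_trt
    return {name: 100.0 * ss / ss_total
            for name, ss in [('cultivar', ss_cult), ('treatment', ss_trt),
                             ('residual', ss_resid)]}
```

On noise-free data built from pure main effects, the residual share is zero and the two factor shares are proportional to their effect sizes, mirroring how the paper attributes variance to cultivar, treatment, and repeatability.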

  15. Edgar Allan Poe and neurology.

    PubMed

    Teive, Hélio Afonso Ghizoni; Paola, Luciano de; Munhoz, Renato Puppi

    2014-06-01

    Edgar Allan Poe was one of the most celebrated writers of all time. He published several masterpieces, some of which include references to neurological diseases. Poe suffered from recurrent depression, suggesting a bipolar disorder, as well as alcohol and drug abuse, which in fact led to his death from complications related to alcoholism. Various hypotheses regarding the cause of his death have been put forward, including Wernicke's encephalopathy.

  16. Allan Bloom, America, and Education.

    ERIC Educational Resources Information Center

    West, Thomas

    2000-01-01

    Refutes the claims of Allan Bloom that the source of the problem with today's universities is modern philosophy, that the writings and ideas of Hobbes and Locke planted the seeds of relativism in American culture, and that the cure is Great Books education. Suggests instead that America's founding principles are the only solution to the failure of…

  17. Allan Bloom's Quarrel with History.

    ERIC Educational Resources Information Center

    Thompson, James

    1988-01-01

    Responds to Allan Bloom's "The Closing of the American Mind." Concludes that despite cranky comments about bourgeois culture, the focus of Bloom's attack is on historicism, which undercuts his nostalgic vision of a prosperous and just America. Condemns Bloom's exclusion of Blacks, Hispanics, and women from America's cultural heritage.…

  18. A Monte Carlo Investigation of the Analysis of Variance Applied to Non-Independent Bernoulli Variates.

    ERIC Educational Resources Information Center

    Draper, John F., Jr.

    The applicability of the Analysis of Variance, ANOVA, procedures to the analysis of dichotomous repeated measure data is described. The design models for which data were simulated in this investigation were chosen to represent simple cases of two experimental situations: situation one, in which subjects' responses to a single randomly selected set…

  19. Partitioning Predicted Variance into Constituent Parts: How To Conduct Commonality Analysis.

    ERIC Educational Resources Information Center

    Rowell, R. Kevin

    This paper explains how commonality analysis (CA) can be conducted using a specific Statistical Analysis System (SAS) procedure and some simple computations. CA is used in educational and social science research to partition the variance of a dependent variable into its constituent predicted parts. CA determines the proportion of explained…

  20. Analysis of Variance with Summary Statistics in Microsoft® Excel®

    ERIC Educational Resources Information Center

    Larson, David A.; Hsu, Ko-Cheng

    2010-01-01

    Students regularly are asked to solve Single Factor Analysis of Variance problems given only the sample summary statistics (number of observations per category, category means, and corresponding category standard deviations). Most undergraduate students today use Excel for data analysis of this type. However, Excel, like all other statistical…

  1. Modal comparison of Yamato and Allan Hills polymict eucrites

    NASA Technical Reports Server (NTRS)

    Delaney, J. S.; Prinz, M.; Takeda, H.

    1983-01-01

    Seven Yamato and six Allan Hills polymict eucrite specimens were compared by modal analysis. The analyses reveal differences in plagioclase and pyroxene content between the two groups. The Yamato suite has more 'pigeonitic' pyroxene and less plagioclase and low-calcium pyroxene than the Allan Hills suite. Variations within each suite are small, and three sections of Allan Hills A78040 are more variable than the Allan Hills suite considered as a group. Modal data provide a basis for pairing polymict eucrite specimens when used together with mineralogical and petrographic criteria; they also confirm the presence of several rock types previously identified using pyroxene crystallography and hint at the presence of an augite-rich component.

  2. [Wavelength selection of the oximetry based on test analysis of variance].

    PubMed

    Lin, Ling; Li, Wei; Zeng, Rui-Li; Liu, Rui-An; Li, Gang; Wu, Xiao-Rong

    2014-07-01

    In order to improve the precision and reliability of spectral measurements of blood oxygen saturation, and to enhance the validity of the measurement, a test-based analysis of variance was employed. A preferred wavelength combination was selected by analyzing the distribution of the oximetry coefficient across different wavelength combinations, making rational use of statistical theory. Using clinical data collected at different oxygen saturation levels, a single-factor analysis of variance model of the oxygen saturation coefficient was established for three wavelength combinations (660 and 940 nm, 660 and 805 nm, and 805 and 940 nm). The preferable wavelength combination was then selected by comparative analysis of the combinations applied to the photoplethysmographic signal, providing reliable intermediate data for further modeling. The experimental results showed that the 660 and 805 nm combination responded more significantly to changes in blood oxygen saturation, and that its introduced noise and method error were smaller than those of the other combinations, which can improve the measurement accuracy of oximetry. The study applied test-based analysis of variance to the selection of wavelength combinations for blood oxygen measurement, with significant results, and offers a new approach for blood oxygen measurement and other quantitative spectroscopic analyses. The method of test-based analysis of variance can help extract the valid information that represents the measured values from the spectrum.

  3. Decomposing genomic variance using information from GWA, GWE and eQTL analysis.

    PubMed

    Ehsani, A; Janss, L; Pomp, D; Sørensen, P

    2016-04-01

    A commonly used procedure in genome-wide association (GWA), genome-wide expression (GWE) and expression quantitative trait locus (eQTL) analyses is based on a bottom-up experimental approach that attempts to individually associate molecular variants with complex traits. Top-down modeling of the entire set of genomic data and partitioning of the overall variance into subcomponents may provide further insight into the genetic basis of complex traits. To test this approach, we performed a whole-genome variance components analysis and partitioned the genomic variance using information from GWA, GWE and eQTL analyses of growth-related traits in a mouse F2 population. We characterized the mouse trait genetic architecture by ordering single nucleotide polymorphisms (SNPs) based on their P-values and studying the areas under the curve (AUCs). The observed traits were found to have a genomic variance profile that differed significantly from that expected of a trait under an infinitesimal model. This was particularly true for body weight and body fat, whose AUCs were much higher than that of glucose. In addition, SNPs with a high degree of trait-specific regulatory potential (SNPs associated with a subset of transcripts significantly associated with a specific trait) explained a larger proportion of the genomic variance than did SNPs with high overall regulatory potential (SNPs associated with transcripts using traditional eQTL analysis). We introduced AUC measures of genomic variance profiles that can be used to quantify the relative importance of SNPs as well as the degree of deviation of a trait's inheritance from an infinitesimal model. The shape of the curve aids global understanding of traits: the steeper the left-hand side of the curve, the fewer the number of SNPs controlling most of the phenotypic variance.

  4. Teaching Principles of One-Way Analysis of Variance Using M&M's Candy

    ERIC Educational Resources Information Center

    Schwartz, Todd A.

    2013-01-01

    I present an active learning classroom exercise illustrating essential principles of one-way analysis of variance (ANOVA) methods. The exercise is easily conducted by the instructor and is instructive (as well as enjoyable) for the students. This is conducive for demonstrating many theoretical and practical issues related to ANOVA and lends itself…

  5. Weighting by Inverse Variance or by Sample Size in Random-Effects Meta-Analysis

    ERIC Educational Resources Information Center

    Marin-Martinez, Fulgencio; Sanchez-Meca, Julio

    2010-01-01

    Most of the statistical procedures in meta-analysis are based on the estimation of average effect sizes from a set of primary studies. The optimal weight for averaging a set of independent effect sizes is the inverse variance of each effect size, but in practice these weights have to be estimated, being affected by sampling error. When assuming a…

  6. Estimating an Effect Size in One-Way Multivariate Analysis of Variance (MANOVA)

    ERIC Educational Resources Information Center

    Steyn, H. S., Jr.; Ellis, S. M.

    2009-01-01

    When two or more univariate population means are compared, the proportion of variation in the dependent variable accounted for by population group membership is eta-squared. This effect size can be generalized by using multivariate measures of association, based on the multivariate analysis of variance (MANOVA) statistics, to establish whether…

  7. A Visualization Tool for One- and Two-Way Analysis of Variance

    ERIC Educational Resources Information Center

    Sturm-Beiss, Rachel

    2005-01-01

    Analysis of variance (ANOVA), a technique included in many introductory statistics courses, analyzes the relationship between a quantitative dependent variable and one or more independent qualitative variables. The nature of the relationship is expressed in a model with unknown parameters. Many textbooks emphasize the mechanics of the technique…

  8. A Note on Noncentrality Parameters for Contrast Tests in a One-Way Analysis of Variance

    ERIC Educational Resources Information Center

    Liu, Xiaofeng Steven

    2010-01-01

    The noncentrality parameter for a contrast test in a one-way analysis of variance is based on the dot product of 2 vectors whose geometric meaning in a Euclidian space offers mnemonic hints about its constituents. Additionally, the noncentrality parameters for a set of orthogonal contrasts sum up to the noncentrality parameter for the omnibus…

  9. A Demonstration of the Analysis of Variance Using Physical Movement and Space

    ERIC Educational Resources Information Center

    Owen, William J.; Siakaluk, Paul D.

    2011-01-01

    Classroom demonstrations help students better understand challenging concepts. This article introduces an activity that demonstrates the basic concepts involved in analysis of variance (ANOVA). Students who physically participated in the activity had a better understanding of ANOVA concepts (i.e., higher scores on an exam question answered 2…

  10. Missing Data and Multiple Imputation in the Context of Multivariate Analysis of Variance

    ERIC Educational Resources Information Center

    Finch, W. Holmes

    2016-01-01

    Multivariate analysis of variance (MANOVA) is widely used in educational research to compare means on multiple dependent variables across groups. Researchers faced with the problem of missing data often use multiple imputation of values in place of the missing observations. This study compares the performance of 2 methods for combining p values in…

  11. A Primer on Multivariate Analysis of Variance (MANOVA) for Behavioral Scientists

    ERIC Educational Resources Information Center

    Warne, Russell T.

    2014-01-01

    Reviews of statistical procedures (e.g., Bangert & Baumberger, 2005; Kieffer, Reese, & Thompson, 2001; Warne, Lazo, Ramos, & Ritter, 2012) show that one of the most common multivariate statistical methods in psychological research is multivariate analysis of variance (MANOVA). However, MANOVA and its associated procedures are often not…

  12. Pairwise Comparison Procedures for One-Way Analysis of Variance Designs. Research Report.

    ERIC Educational Resources Information Center

    Zwick, Rebecca

    Research in the behavioral and health sciences frequently involves the application of one-factor analysis of variance models. The goal may be to compare several independent groups of subjects on a quantitative dependent variable or to compare measurements made on a single group of subjects on different occasions or under different conditions. In…

  13. [Variance estimation considering multistage sampling design in multistage complex sample analysis].

    PubMed

    Li, Yichong; Zhao, Yinjun; Wang, Limin; Zhang, Mei; Zhou, Maigeng

    2016-03-01

Multistage sampling is a frequently used method in random sampling surveys in public health. Samples generated by multistage sampling, often called complex samples, typically exhibit clustering, i.e. non-independence between observations. Sampling error may be underestimated and the probability of type I error increased if the multistage sample design is not taken into consideration in the analysis. Because the variance (error) estimator for a complex sample is often complicated, statistical software usually adopts the ultimate cluster variance estimate (UCVE) to approximate it, which simply assumes that the sample comes from one-stage sampling. However, as the sampling fraction of primary sampling units increases, the contribution from subsequent sampling stages is no longer trivial, and the ultimate cluster variance estimate may therefore lead to invalid variance estimation. This paper summarizes a method of variance estimation that takes the multistage sampling design into account. Its performance is compared with that of UCVE by simulating random sampling under different sampling schemes using real-world data. Simulation showed that as the primary sampling unit (PSU) sampling fraction increased, UCVE tended to generate increasingly biased estimates, whereas accurate estimates were obtained by the method considering the multistage sampling design.
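The contrast between UCVE and a design-aware estimator can be illustrated with the textbook two-stage simple-random-sampling variance formula (equal cluster sizes assumed; the data and design below are hypothetical and not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 10, 5      # primary sampling units (PSUs): population / sampled
M, m = 8, 4       # elements per PSU: population / sampled
f1, f2 = n / N, m / M   # first- and second-stage sampling fractions

# Hypothetical two-stage SRS sample: m measurements from each of n PSUs.
psu_effects = np.array([10.0, 12.0, 9.0, 15.0, 11.0])
sample = psu_effects[:, None] + rng.normal(0, 1.0, size=(n, m))

psu_means = sample.mean(axis=1)
s_b2 = psu_means.var(ddof=1)                  # between-PSU variance
s_w2 = sample.var(axis=1, ddof=1).mean()      # average within-PSU variance

# Ultimate cluster variance estimate: pretend one-stage sampling of PSUs,
# ignoring the finite population correction and the second stage.
ucve = s_b2 / n

# Textbook estimator accounting for both stages (two-stage SRS, equal sizes):
var_two_stage = (1 - f1) * s_b2 / n + f1 * (1 - f2) * s_w2 / (n * m)

print(ucve, var_two_stage)  # with f1 = 0.5 the two estimates differ clearly
```

With a small PSU sampling fraction the two estimates nearly coincide, which is why UCVE is a common software default; the abstract's point is that the approximation degrades as f1 grows.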

  14. MCT8 mutation analysis and identification of the first female with Allan-Herndon-Dudley syndrome due to loss of MCT8 expression.

    PubMed

    Frints, Suzanna Gerarda Maria; Lenzner, Steffen; Bauters, Mareike; Jensen, Lars Riff; Van Esch, Hilde; des Portes, Vincent; Moog, Ute; Macville, Merryn Victor Erik; van Roozendaal, Kees; Schrander-Stumpel, Constance Theresia Rimbertha Maria; Tzschach, Andreas; Marynen, Peter; Fryns, Jean-Pierre; Hamel, Ben; van Bokhoven, Hans; Chelly, Jamel; Beldjord, Chérif; Turner, Gillian; Gecz, Jozef; Moraine, Claude; Raynaud, Martine; Ropers, Hans Hilger; Froyen, Guy; Kuss, Andreas Walter

    2008-09-01

Mutations in the thyroid monocarboxylate transporter 8 gene (MCT8/SLC16A2) have been reported to result in X-linked mental retardation (XLMR) in patients with clinical features of the Allan-Herndon-Dudley syndrome (AHDS). We performed MCT8 mutation analysis including 13 XLMR families with LOD scores >2.0, 401 male MR sibships and 47 sporadic male patients with AHDS-like clinical features. One nonsense mutation (c.629insA) and two missense changes (c.1A>T and c.1673G>A) were identified. Consistent with previous reports on MCT8 missense changes, the patient with c.1673G>A showed an elevated serum T3 level. The c.1A>T change in another patient affects a putative translation start codon, but the same change was present in his healthy brother. In addition, normal serum T3 levels were present, suggesting that the c.1A>T (NM_006517) variation is not responsible for the MR phenotype but indicating that MCT8 translation likely starts with a methionine at position p.75. Moreover, we characterized a de novo translocation t(X;9)(q13.2;p24) in a female patient with full-blown AHDS clinical features including elevated serum T3 levels. The MCT8 gene was disrupted at the X-breakpoint. A complete loss of MCT8 expression was observed in a fibroblast cell line derived from this patient because of unfavorable nonrandom X-inactivation. Taken together, these data indicate that MCT8 mutations are not common in non-AHDS MR patients, yet they support that elevated serum T3 levels can be indicative of AHDS and that AHDS clinical features can be present in female MCT8 mutation carriers whenever there is unfavorable nonrandom X-inactivation.

  15. Sensitivity analysis of a two-dimensional probabilistic risk assessment model using analysis of variance.

    PubMed

    Mokhtari, Amirhossein; Frey, H Christopher

    2005-12-01

    This article demonstrates application of sensitivity analysis to risk assessment models with two-dimensional probabilistic frameworks that distinguish between variability and uncertainty. A microbial food safety process risk (MFSPR) model is used as a test bed. The process of identifying key controllable inputs and key sources of uncertainty using sensitivity analysis is challenged by typical characteristics of MFSPR models such as nonlinearity, thresholds, interactions, and categorical inputs. Among many available sensitivity analysis methods, analysis of variance (ANOVA) is evaluated in comparison to commonly used methods based on correlation coefficients. In a two-dimensional risk model, the identification of key controllable inputs that can be prioritized with respect to risk management is confounded by uncertainty. However, as shown here, ANOVA provided robust insights regarding controllable inputs most likely to lead to effective risk reduction despite uncertainty. ANOVA appropriately selected the top six important inputs, while correlation-based methods provided misleading insights. Bootstrap simulation is used to quantify uncertainty in ranks of inputs due to sampling error. For the selected sample size, differences in F values of 60% or more were associated with clear differences in rank order between inputs. Sensitivity analysis results identified inputs related to the storage of ground beef servings at home as the most important. Risk management recommendations are suggested in the form of a consumer advisory for better handling and storage practices.
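The ANOVA-based screening described above can be approximated by binning each input into quantile groups and ranking inputs by the F statistic of the output across bins; unlike correlation coefficients, this picks up nonlinear and threshold effects. A rough sketch with synthetic inputs (not the MFSPR model; the threshold response is fabricated for illustration):

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
n = 400
x1 = rng.uniform(0, 1, n)   # influential input with a threshold effect
x2 = rng.uniform(0, 1, n)   # non-influential input
y = np.where(x1 > 0.5, 5.0, 0.0) + rng.normal(0, 1.0, n)

def anova_f(x, y, bins=4):
    """F statistic of y across quantile bins of input x."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    groups = np.digitize(x, edges[1:-1])          # bin index 0..bins-1
    return f_oneway(*[y[groups == g] for g in range(bins)]).statistic

f_x1, f_x2 = anova_f(x1, y), anova_f(x2, y)
print(f_x1 > f_x2)   # the threshold input dominates the variance of y
```

A linear correlation coefficient between x1 and y would understate this step-like dependence, which is the kind of misleading insight the abstract attributes to correlation-based methods.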

  16. The application of analysis of variance (ANOVA) to different experimental designs in optometry.

    PubMed

    Armstrong, R A; Eperjesi, F; Gilmartin, B

    2002-05-01

    Analysis of variance (ANOVA) is the most efficient method available for the analysis of experimental data. Analysis of variance is a method of considerable complexity and subtlety, with many different variations, each of which applies in a particular experimental context. Hence, it is possible to apply the wrong type of ANOVA to data and, therefore, to draw an erroneous conclusion from an experiment. This article reviews the types of ANOVA most likely to arise in clinical experiments in optometry including the one-way ANOVA ('fixed' and 'random effect' models), two-way ANOVA in randomised blocks, three-way ANOVA, and factorial experimental designs (including the varieties known as 'split-plot' and 'repeated measures'). For each ANOVA, the appropriate experimental design is described, a statistical model is formulated, and the advantages and limitations of each type of design discussed. In addition, the problems of non-conformity to the statistical model and determination of the number of replications are considered.

  17. Hierarchical linear model: thinking outside the traditional repeated-measures analysis-of-variance box.

    PubMed

    Lininger, Monica; Spybrook, Jessaca; Cheatham, Christopher C

    2015-04-01

    Longitudinal designs are common in the field of athletic training. For example, in the Journal of Athletic Training from 2005 through 2010, authors of 52 of the 218 original research articles used longitudinal designs. In 50 of the 52 studies, a repeated-measures analysis of variance was used to analyze the data. A possible alternative to this approach is the hierarchical linear model, which has been readily accepted in other medical fields. In this short report, we demonstrate the use of the hierarchical linear model for analyzing data from a longitudinal study in athletic training. We discuss the relevant hypotheses, model assumptions, analysis procedures, and output from the HLM 7.0 software. We also examine the advantages and disadvantages of using the hierarchical linear model with repeated measures and repeated-measures analysis of variance for longitudinal data.

  18. A new concept for variance analysis of hyphenated chromatographic data avoiding signal warping.

    PubMed

    Zerzucha, Piotr; Kazura, Małgorzata; de Beer, Dalene; Joubert, Elizabeth; Schulze, Alexandra E; Beelders, Theresa; de Villiers, André; Walczak, Beata

    2013-05-24

Analysis of variance of chromatographic data is usually performed on the peak table or on entire chromatograms. These two data forms require signal pretreatment: the peak table requires peak detection, standardization, and quantification, while the second form of data organization requires warping of the studied chromatograms to eliminate observed peak shifts, which occur due to minor variations in chromatographic conditions. In our study, a new form of data representation, well suited for chromatographic data originating from multi-channel detection, is proposed. It requires neither warping of chromatograms nor peak detection. Its principles and performance are demonstrated on a real data set (part of a larger research project initiated to characterize the infusion of fermented rooibos herbal tea in terms of phenolic composition and antioxidant activity). The Multiple Analysis of Variance applied to the pairwise data representation was chosen as the method for analyzing the data variation.

  19. Combining multivariate statistics and analysis of variance to redesign a water quality monitoring network.

    PubMed

    Guigues, Nathalie; Desenfant, Michèle; Hance, Emmanuel

    2013-09-01

The objective of this paper was to demonstrate how multivariate statistics combined with the analysis of variance could support decision-making during the process of redesigning a water quality monitoring network with highly heterogeneous datasets in terms of time and space. Principal Component Analysis (PCA) and Hierarchical Cluster Analysis (HCA) were selected to optimise the selection of water quality parameters to be monitored as well as the number and location of monitoring stations. Sampling frequency was specifically investigated through the analysis of variance. The data used were obtained between 2007 and 2010 at the Long-term Environmental Research Monitoring and Testing System (OPE) located in the north-eastern part of France, in connection with a geological disposal of radioactive waste project. PCA results showed that no substantial reduction among the parameters was possible, as strong correlations exist only between electrical conductivity, calcium and bicarbonates. HCA results were geospatially represented for each field campaign and compared to one another in terms of similarities and differences, allowing us to group the monitoring stations into 12 categories. This approach enabled us to take into account not only the spatial variability of water quality but also its temporal variability. Finally, the analysis of variance showed that three very different behaviours occurred: parameters with high temporal variability and low spatial variability (e.g. suspended matter), parameters with high spatial variability and average temporal variability (e.g. calcium) and finally parameters with both high temporal and spatial variability (e.g. nitrate).

  20. Sensitivity analysis of simulated SOA loadings using a variance-based statistical approach

    NASA Astrophysics Data System (ADS)

    Shrivastava, Manish; Zhao, Chun; Easter, Richard C.; Qian, Yun; Zelenyuk, Alla; Fast, Jerome D.; Liu, Ying; Zhang, Qi; Guenther, Alex

    2016-06-01

We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to seven selected model parameters using a modified volatility basis-set (VBS) approach: four involving emissions of anthropogenic and biogenic volatile organic compounds, anthropogenic semivolatile and intermediate volatility organics (SIVOCs), and NOx; two involving dry deposition of SOA precursor gases, and one involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250 member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the model parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether or not SOA that starts as semivolatile is rapidly transformed to nonvolatile SOA by particle-phase processes such as oligomerization and/or accretion, is the dominant contributor to variance of simulated surface-level daytime SOA (65% domain average contribution). We also split the simulations into two subsets of 125 each, depending on whether the volatility transformation is turned on/off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to nonvolatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to dominance of intermediate to high NOx conditions throughout the simulated domain. However…

  1. Publishing nutrition research: a review of multivariate techniques--part 2: analysis of variance.

    PubMed

    Harris, Jeffrey E; Sheean, Patricia M; Gleason, Philip M; Bruemmer, Barbara; Boushey, Carol

    2012-01-01

    This article is the eighth in a series exploring the importance of research design, statistical analysis, and epidemiology in nutrition and dietetics research, and the second in a series focused on multivariate statistical analytical techniques. The purpose of this review is to examine the statistical technique, analysis of variance (ANOVA), from its simplest to multivariate applications. Many dietetics practitioners are familiar with basic ANOVA, but less informed of the multivariate applications such as multiway ANOVA, repeated-measures ANOVA, analysis of covariance, multiple ANOVA, and multiple analysis of covariance. The article addresses all these applications and includes hypothetical and real examples from the field of dietetics.

  2. Study on Analysis of Variance on the indigenous wild and cultivated rice species of Manipur Valley

    NASA Astrophysics Data System (ADS)

    Medhabati, K.; Rohinikumar, M.; Rajiv Das, K.; Henary, Ch.; Dikash, Th.

    2012-10-01

The analysis of variance revealed considerable variation among the cultivars and the wild species for yield and other quantitative characters in both years of investigation. The highly significant differences among the cultivars in the year-wise and pooled analyses of variance for all 12 characters reveal that there is enough genetic variability for all the characters studied. The existence of genetic variability is of paramount importance for starting a judicious plant breeding programme. Since introduced high-yielding rice cultivars usually do not perform well, improvement of indigenous cultivars is a clear choice for increasing rice production. The genetic variability of 37 rice germplasms in 12 agronomic characters estimated in the present study can be used in breeding programmes.

  3. A new periodogram using one-way analysis of variance for circadian rhythms.

    PubMed

    Shono, M; Shono, H; Ito, Y; Muro, M; Maeda, Y; Sugimori, H

    2000-06-01

A new periodogram, termed the ANOVA periodogram, was proposed using one-way analysis of variance (ANOVA) in order to reveal precise significant periodicities. Thirty 3-day complex computer-simulated time series with a known periodicity (24 h) and three 2-h gaps of missing data occurring periodically (every 23 h 20 min) were used to compare the ANOVA periodogram with Enright's periodogram. The ANOVA periodogram was superior to Enright's periodogram in the accuracy of assessing the major periodicity.
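The idea of an ANOVA periodogram can be sketched as follows: fold the series at each trial period into phase bins and take the one-way ANOVA F statistic across bins; F peaks at the true period, since only a correct fold aligns the rhythm's phases within bins. This is a generic sketch on synthetic data, not the authors' implementation:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(2)
t = np.arange(0.0, 72.0, 1.0)                 # hourly samples over 3 days
y = np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.3, t.size)

def anova_periodogram(t, y, periods, n_bins=8):
    """For each trial period, fold the series into phase bins and take the
    one-way ANOVA F statistic across bins."""
    f_stats = []
    for p in periods:
        phase_bin = ((t % p) / p * n_bins).astype(int) % n_bins
        groups = [y[phase_bin == b] for b in range(n_bins)]
        f_stats.append(f_oneway(*[g for g in groups if g.size > 1]).statistic)
    return np.array(f_stats)

periods = np.arange(6.0, 37.0, 1.0)
F = anova_periodogram(t, y, periods)
print(periods[np.argmax(F)])                  # F peaks at the 24 h period
```

Missing-data gaps, as in the simulated series of the study, are handled naturally: absent samples simply leave their phase bins with fewer observations.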

  4. Structure analysis of simulated molecular clouds with the Δ-variance

    DOE PAGES

    Bertram, Erik; Klessen, Ralf S.; Glover, Simon C. O.

    2015-05-27

Here, we employ the Δ-variance analysis and study the turbulent gas dynamics of simulated molecular clouds (MCs). Our models account for a simplified treatment of time-dependent chemistry and the non-isothermal nature of the gas. We investigate simulations using three different initial mean number densities of n0 = 30, 100 and 300 cm-3 that span the range of values typical for MCs in the solar neighbourhood. Furthermore, we model the CO line emission in a post-processing step using a radiative transfer code. We evaluate Δ-variance spectra for centroid velocity (CV) maps as well as for integrated intensity and column density maps for various chemical components: the total, H2 and 12CO number density and the integrated intensity of both the 12CO and 13CO (J = 1 → 0) lines. The spectral slopes of the Δ-variance computed on the CV maps for the total and H2 number density are significantly steeper compared to the different CO tracers. We find slopes for the linewidth–size relation ranging from 0.4 to 0.7 for the total and H2 density models, while the slopes for the various CO tracers range from 0.2 to 0.4 and underestimate the values for the total and H2 density by a factor of 1.5–3.0. We demonstrate that optical depth effects can significantly alter the Δ-variance spectra. Furthermore, we report a critical density threshold of 100 cm-3 at which the Δ-variance slopes of the various CO tracers change sign. We thus conclude that carbon monoxide traces the total cloud structure well only if the average cloud density lies above this limit.

  5. Structure analysis of simulated molecular clouds with the Δ-variance

    SciTech Connect

    Bertram, Erik; Klessen, Ralf S.; Glover, Simon C. O.

    2015-05-27

    Here, we employ the Δ-variance analysis and study the turbulent gas dynamics of simulated molecular clouds (MCs). Our models account for a simplified treatment of time-dependent chemistry and the non-isothermal nature of the gas. We investigate simulations using three different initial mean number densities of n0 = 30, 100 and 300 cm-3 that span the range of values typical for MCs in the solar neighbourhood. Furthermore, we model the CO line emission in a post-processing step using a radiative transfer code. We evaluate Δ-variance spectra for centroid velocity (CV) maps as well as for integrated intensity and column density maps for various chemical components: the total, H2 and 12CO number density and the integrated intensity of both the 12CO and 13CO (J = 1 → 0) lines. The spectral slopes of the Δ-variance computed on the CV maps for the total and H2 number density are significantly steeper compared to the different CO tracers. We find slopes for the linewidth–size relation ranging from 0.4 to 0.7 for the total and H2 density models, while the slopes for the various CO tracers range from 0.2 to 0.4 and underestimate the values for the total and H2 density by a factor of 1.5–3.0. We demonstrate that optical depth effects can significantly alter the Δ-variance spectra. Furthermore, we report a critical density threshold of 100 cm-3 at which the Δ-variance slopes of the various CO tracers change sign. We thus conclude that carbon monoxide traces the total cloud structure well only if the average cloud density lies above this limit.

  6. Non-destructive X-ray Computed Tomography (XCT) Analysis of Sediment Variance in Marine Cores

    NASA Astrophysics Data System (ADS)

    Oti, E.; Polyak, L. V.; Dipre, G.; Sawyer, D.; Cook, A.

    2015-12-01

Benthic activity within marine sediments can alter the physical properties of the sediment as well as indicate nutrient flux and ocean temperatures. We examine burrowing features in sediment cores from the western Arctic Ocean collected during the 2005 Healy-Oden TransArctic Expedition (HOTRAX) and from the Gulf of Mexico Integrated Ocean Drilling Program (IODP) Expedition 308. While traditional methods for studying bioturbation require physical dissection of the cores, we assess burrowing using an X-ray computed tomography (XCT) scanner. XCT noninvasively images the sediment cores in three dimensions and produces density sensitive images suitable for quantitative analysis. XCT units are recorded as Hounsfield Units (HU), where -999 is air, 0 is water, and 4000-5000 would be a higher density mineral, such as pyrite. We rely on the fundamental assumption that sediments are deposited horizontally, and we analyze the variance over each flat-lying slice. The variance describes the spread of pixel values over a slice. When sediments are reworked, drawing higher and lower density matrix into a layer, the variance increases. Examples of this can be seen in two slices in core 19H-3A from Site U1324 of IODP Expedition 308. The first slice, located 165.6 meters below sea floor consists of relatively undisturbed sediment. Because of this, the majority of the sediment values fall between 1406 and 1497 HU, thus giving the slice a comparatively small variance of 819.7. The second slice, located 166.1 meters below sea floor, features a lower density sediment matrix disturbed by burrow tubes and the inclusion of a high density mineral. As a result, the Hounsfield Units have a larger variance of 1,197.5, which is a result of sediment matrix values that range from 1220 to 1260 HU, the high-density mineral value of 1920 HU and the burrow tubes that range from 1300 to 1410 HU. Analyzing this variance allows us to observe changes in the sediment matrix and more specifically capture…
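The slice-variance measure described above reduces to computing the variance of Hounsfield Unit values over each horizontal slice of the scanned volume. A toy illustration with fabricated HU values in the ranges quoted in the abstract (not the HOTRAX/IODP data):

```python
import numpy as np

def slice_variances(volume):
    """Variance of Hounsfield Units over each flat-lying slice; reworked
    (bioturbated) slices mix densities and so show higher variance."""
    return volume.reshape(volume.shape[0], -1).var(axis=1)

# Toy 2-slice "core": an undisturbed layered slice vs a disturbed one.
undisturbed = np.full((16, 16), 1450.0)
undisturbed[::2, :] = 1470.0                  # fine layering, narrow HU spread
disturbed = np.full((16, 16), 1240.0)
disturbed[4:8, 4:8] = 1920.0                  # high-density mineral inclusion
core = np.stack([undisturbed, disturbed])

v = slice_variances(core)
print(v)   # the disturbed slice has the much larger variance
```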

  7. Comparison of performance between rescaled range analysis and rescaled variance analysis in detecting abrupt dynamic change

    NASA Astrophysics Data System (ADS)

    He, Wen-Ping; Liu, Qun-Qun; Jiang, Yun-Di; Lu, Ying

    2015-04-01

    In the present paper, a comparison of the performance between moving cutting data-rescaled range analysis (MC-R/S) and moving cutting data-rescaled variance analysis (MC-V/S) is made. The results clearly indicate that the operating efficiency of the MC-R/S algorithm is higher than that of the MC-V/S algorithm. In our numerical test, the computer time consumed by MC-V/S is approximately 25 times that by MC-R/S for an identical window size in artificial data. Except for the difference in operating efficiency, there are no significant differences in performance between MC-R/S and MC-V/S for the abrupt dynamic change detection. MC-R/S and MC-V/S both display some degree of anti-noise ability. However, it is important to consider the influences of strong noise on the detection results of MC-R/S and MC-V/S in practical application processes. Project supported by the National Basic Research Program of China (Grant No. 2012CB955902) and the National Natural Science Foundation of China (Grant Nos. 41275074, 41475073, and 41175084).
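For reference, the rescaled range (R/S) statistic underlying the R/S family of methods can be sketched as follows. This is a generic R/S implementation with a log-log slope fit for the Hurst exponent, not the moving-cutting (MC-R/S) algorithm of the paper:

```python
import numpy as np

def rescaled_range(x):
    """R/S statistic: range of cumulative mean-adjusted sums over the
    standard deviation of the series."""
    x = np.asarray(x, dtype=float)
    z = np.cumsum(x - x.mean())
    return (z.max() - z.min()) / x.std()

def rs_curve(x, window_sizes):
    """Average R/S over non-overlapping windows of each size; the slope of
    log(R/S) vs log(n) estimates the Hurst exponent."""
    out = []
    for w in window_sizes:
        chunks = [x[i:i + w] for i in range(0, len(x) - w + 1, w)]
        out.append(np.mean([rescaled_range(c) for c in chunks]))
    return np.array(out)

rng = np.random.default_rng(3)
noise = rng.normal(size=4096)
sizes = np.array([16, 32, 64, 128, 256])
rs = rs_curve(noise, sizes)
hurst = np.polyfit(np.log(sizes), np.log(rs), 1)[0]
print(round(hurst, 2))   # near 0.5 for uncorrelated noise (biased slightly
                         # high in small samples)
```

The rescaled variance (V/S) variant replaces the range of the cumulative sums with their variance, which is the source of its higher computational cost noted above.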

  8. Smoothing spline analysis of variance models: A new tool for the analysis of cyclic biomechanical data.

    PubMed

    Helwig, Nathaniel E; Shorter, K Alex; Ma, Ping; Hsiao-Wecksler, Elizabeth T

    2016-10-03

Cyclic biomechanical data are commonplace in orthopedic, rehabilitation, and sports research, where the goal is to understand and compare biomechanical differences between experimental conditions and/or subject populations. A common approach to analyzing cyclic biomechanical data involves averaging the biomechanical signals across cycle replications, and then comparing mean differences at specific points of the cycle. This pointwise analysis approach ignores the functional nature of the data, which can hinder one's ability to find subtle differences between experimental conditions and/or subject populations. To overcome this limitation, we propose using mixed-effects smoothing spline analysis of variance (SSANOVA) to analyze differences in cyclic biomechanical data. The SSANOVA framework makes it possible to decompose the estimated function into the portion that is common across groups (i.e., the average cycle, AC) and the portion that differs across groups (i.e., the contrast cycle, CC). By partitioning the signal in such a manner, we can obtain estimates of the CC differences (CCDs), which are the functions directly describing group differences in the cyclic biomechanical data. Using both simulated and experimental data, we illustrate the benefits of using SSANOVA models to analyze differences in noisy biomechanical (gait) signals collected from multiple locations (joints) of subjects participating in different experimental conditions. Using Bayesian confidence intervals, the SSANOVA results can be used in clinical and research settings to reliably quantify biomechanical differences between experimental conditions and/or subject populations.

  9. The Efficiency of Split Panel Designs in an Analysis of Variance Model

    PubMed Central

    Wang, Wei-Guo; Liu, Hai-Jun

    2016-01-01

We consider split panel design efficiency in analysis of variance models, that is, the determination of the optimal proportion of cross-section series in all samples, to minimize the variances of best linear unbiased estimators of linear combinations of parameters. An orthogonal matrix is constructed to obtain a manageable expression of the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of interest and budget parameters. Additionally, the relative efficiency of an estimator based on the split panel to an estimator based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from the split panel can be quite substantial. We further consider the efficiency of the split panel design, given a budget, and transform it to a constrained nonlinear integer programming problem. Specifically, an efficient algorithm is designed to solve the constrained nonlinear integer programming. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm’s efficiency with an empirical example concerning monthly consumer expenditure on food in 1985, in the Netherlands, and the efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447

  10. The Efficiency of Split Panel Designs in an Analysis of Variance Model.

    PubMed

    Liu, Xin; Wang, Wei-Guo; Liu, Hai-Jun

    2016-01-01

We consider split panel design efficiency in analysis of variance models, that is, the determination of the optimal proportion of cross-section series in all samples, to minimize the variances of best linear unbiased estimators of linear combinations of parameters. An orthogonal matrix is constructed to obtain a manageable expression of the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of interest and budget parameters. Additionally, the relative efficiency of an estimator based on the split panel to an estimator based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from the split panel can be quite substantial. We further consider the efficiency of the split panel design, given a budget, and transform it to a constrained nonlinear integer programming problem. Specifically, an efficient algorithm is designed to solve the constrained nonlinear integer programming. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm's efficiency with an empirical example concerning monthly consumer expenditure on food in 1985, in the Netherlands, and the efficient ranges of the algorithm parameters are given to ensure a good solution.

  11. Applying the Generalized Waring model for investigating sources of variance in motor vehicle crash analysis.

    PubMed

    Peng, Yichuan; Lord, Dominique; Zou, Yajie

    2014-12-01

As one of the major analysis methods, statistical models play an important role in traffic safety analysis. They can be used for a wide variety of purposes, including establishing relationships between variables and understanding the characteristics of a system. The purpose of this paper is to document a new type of model that can help with the latter. This model is based on the Generalized Waring (GW) distribution. The GW model yields more information about the sources of the variance observed in datasets than other traditional models, such as the negative binomial (NB) model. In this regard, the GW model can separate the observed variability into three parts: (1) the randomness, which explains the model's uncertainty; (2) the proneness, which refers to the internal differences between entities or observations; and (3) the liability, which is defined as the variance caused by other external factors that are difficult to identify and have not been included as explanatory variables in the model. The study analyses were accomplished using two observed datasets to explore potential sources of variation. The results show that the GW model can provide meaningful information about sources of variance in crash data and also performs better than the NB model.

  12. Analysis of T-RFLP data using analysis of variance and ordination methods: a comparative study.

    PubMed

    Culman, S W; Gauch, H G; Blackwood, C B; Thies, J E

    2008-09-01

The analysis of T-RFLP data has developed considerably over the last decade, but there remains a lack of consensus about which statistical analyses offer the best means for finding trends in these data. In this study, we empirically tested and theoretically compared ten diverse T-RFLP datasets derived from soil microbial communities using the more common ordination methods in the literature: principal component analysis (PCA), nonmetric multidimensional scaling (NMS) with Sørensen, Jaccard and Euclidean distance measures, correspondence analysis (CA), detrended correspondence analysis (DCA) and a technique new to T-RFLP data analysis, the Additive Main Effects and Multiplicative Interaction (AMMI) model. Our objectives were i) to determine the distribution of variation in T-RFLP datasets using analysis of variance (ANOVA), ii) to determine the more robust and informative multivariate ordination methods for analyzing T-RFLP data, and iii) to compare the methods based on theoretical considerations. For the 10 datasets examined in this study, ANOVA revealed that the variation from Environment main effects was always small, variation from T-RFs main effects was large, and variation from T-RFxEnvironment (TxE) interactions was intermediate. Larger variation due to TxE indicated larger differences in microbial communities between environments/treatments and thus demonstrated the utility of ANOVA to provide an objective assessment of community dissimilarity. The comparison of statistical methods typically yielded similar empirical results. AMMI, T-RF-centered PCA, and DCA were the most robust methods in terms of producing ordinations that consistently reached a consensus with other methods. In datasets with high sample heterogeneity, NMS analyses with Sørensen and Jaccard distance were the most sensitive for recovery of complex gradients. The theoretical comparison showed that some methods hold distinct advantages for T-RFLP analysis, such as estimations of variation…

  13. The analysis of variance in anaesthetic research: statistics, biography and history.

    PubMed

    Pandit, J J

    2010-12-01

    Multiple t-tests (or their non-parametric equivalents) are often used erroneously to compare the means of three or more groups in anaesthetic research. Methods for correcting the p value regarded as significant can be applied to take account of multiple testing, but these are somewhat arbitrary and do not avoid several unwieldy calculations. The appropriate method for most such comparisons is the 'analysis of variance' that not only economises on the number of statistical procedures, but also indicates if underlying factors or sub-groups have contributed to any significant results. This article outlines the history, rationale and method of this analysis.
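The multiple-comparison pitfall described in this abstract can be illustrated with a short Python sketch (group values invented for demonstration): three pairwise t-tests at alpha = 0.05 inflate the family-wise error rate to roughly 1 - 0.95**3 ≈ 0.14, whereas a one-way ANOVA yields a single omnibus test.

```python
# Invented example data: three treatment groups, five measurements each.
from itertools import combinations
from scipy import stats

groups = {
    "drug_a": [5.1, 4.9, 5.4, 5.0, 5.2],
    "drug_b": [5.3, 5.5, 5.2, 5.6, 5.4],
    "drug_c": [6.1, 6.3, 5.9, 6.2, 6.0],
}

# One omnibus test: a single F statistic and p value for "any difference?".
f_stat, p_anova = stats.f_oneway(*groups.values())

# The erroneous alternative: three pairwise t-tests, each at alpha = 0.05,
# giving a family-wise error rate of roughly 1 - 0.95**3 when no correction
# (e.g. Bonferroni) is applied.
pairwise = {
    pair: stats.ttest_ind(groups[pair[0]], groups[pair[1]]).pvalue
    for pair in combinations(groups, 2)
}

print(f"ANOVA: F = {f_stat:.1f}, p = {p_anova:.2g}")
print({pair: round(p, 4) for pair, p in pairwise.items()})
```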

  14. Chasing change: repeated-measures analysis of variance is so yesterday!

    PubMed

    Dijkers, Marcel P

    2013-03-01

    Change and growth are the bread and butter of rehabilitation research, but to date, most researchers have used less than optimal statistical methods to quantify change, its nature, speed, and form. Hierarchical linear modeling (HLM) (random/mixed effects or latent growth or multilevel modeling, individual/latent growth curve analysis) generally is superior to analysis of (co)variance and other methods, but has been underused in rehabilitation research. Apropos of the publication of 2 didactic articles setting forth the basics of HLM, this commentary sketches some of the advantages of this technique.

  15. Discriminating between cultivars and treatments of broccoli using mass spectral fingerprinting and analysis of variance-principal component analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Metabolite fingerprints, obtained with direct injection mass spectrometry (MS) with both positive and negative ionization, were used with analysis of variance-principal components analysis (ANOVA-PCA) to discriminate between cultivars and growing treatments of broccoli. The sample set consisted of ...

  16. Full pedigree quantitative trait locus analysis in commercial pigs using variance components.

    PubMed

    de Koning, D J; Pong-Wong, R; Varona, L; Evans, G J; Giuffra, E; Sanchez, A; Plastow, G; Noguera, J L; Andersson, L; Haley, C S

    2003-09-01

    In commercial livestock populations, QTL detection methods often use existing half-sib family structures and ignore additional relationships within and between families. We reanalyzed the data from a large QTL confirmation experiment with 10 pig lines and 10 chromosome regions using identity-by-descent (IBD) scores and variance component analyses. The IBD scores were obtained using a Monte Carlo Markov Chain method, as implemented in the LOKI software, and were used to model a putative QTL in a mixed animal model. The analyses revealed 61 QTL at a nominal 5% level (out of 650 tests). Twenty-seven QTL mapped to areas where QTL have been reported, and eight of these exceeded the threshold to claim confirmed linkage (P < 0.01). Forty-two of the putative QTL were detected previously using half-sib analyses, whereas 46 QTL previously identified by half-sib analyses could not be confirmed using the variance component approach. Some of the differences could be traced back to the underlying assumptions between the two methods. Using a deterministic approach to estimate IBD scores on a subset of the data gave very similar results to LOKI. We have demonstrated the feasibility of applying variance component QTL analysis to a large amount of data, equivalent to a genome scan. In many situations, the deterministic IBD approach offers a fast alternative to LOKI.

  17. Simulations of the Hadamard Variance: Probability Distributions and Confidence Intervals.

    PubMed

    Ashby, Neil; Patla, Bijunath

    2016-04-01

Power-law noise in clocks and oscillators can be simulated by Fourier transforming a modified spectrum of white phase noise. This approach has been applied successfully to simulation of the Allan variance and the modified Allan variance in both overlapping and nonoverlapping forms. When significant frequency drift is present in an oscillator, at large sampling times the Allan variance overestimates the intrinsic noise, while the Hadamard variance is insensitive to frequency drift. The simulation method is extended in this paper to predict the Hadamard variance for the common types of power-law noise. Symmetric real matrices are introduced whose traces (the sums of their eigenvalues) are equal to the Hadamard variances, in overlapping or nonoverlapping forms, as well as for the corresponding forms of the modified Hadamard variance. We show that the standard relations between spectral densities and Hadamard variance are obtained with this method. The matrix eigenvalues determine probability distributions for observing a variance at an arbitrary value of the sampling interval τ, and hence for estimating confidence in the measurements.
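A minimal sketch of the time-domain definitions behind this abstract (not the authors' simulation code): the overlapping Allan and Hadamard variances computed as second and third differences of synthetic phase data. The noise level and drift rate below are invented; with linear frequency drift present, the Allan variance exceeds the drift-insensitive Hadamard variance at large averaging factors.

```python
# Illustrative time-domain check (invented noise levels, not the paper's code).
import numpy as np

def allan_variance(x, m, tau0=1.0):
    """Overlapping Allan variance from phase data x, averaging factor m."""
    d2 = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]        # second differences
    return np.mean(d2 ** 2) / (2.0 * (m * tau0) ** 2)

def hadamard_variance(x, m, tau0=1.0):
    """Overlapping Hadamard variance; insensitive to linear frequency drift."""
    d3 = x[3 * m:] - 3 * x[2 * m:-m] + 3 * x[m:-2 * m] - x[:-3 * m]
    return np.mean(d3 ** 2) / (6.0 * (m * tau0) ** 2)

rng = np.random.default_rng(42)
n = 100_000
# White frequency noise plus a linear frequency drift, integrated into phase.
y = rng.normal(0.0, 1e-11, n) + 1e-13 * np.arange(n)
x = np.cumsum(y)

m = 100
avar = allan_variance(x, m)
hvar = hadamard_variance(x, m)
# At this averaging factor the drift term dominates the Allan variance,
# while the Hadamard variance stays near the white-noise level.
print(avar, hvar)
```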

  18. Non-parametric multivariate analysis of variance in the proteomic response of potato to drought stress.

    PubMed

    Zerzucha, Piotr; Boguszewska, Dominika; Zagdańska, Barbara; Walczak, Beata

    2012-03-16

    Spot detection is a mandatory step in all available software packages dedicated to the analysis of 2D gel images. As the majority of spots do not represent individual proteins, spot detection can obscure the results of data analysis significantly. This problem can be overcome by a pixel-level analysis of 2D images. Differences between the spot and the pixel-level approaches are demonstrated by variance analysis for real data sets (part of a larger research project initiated to investigate the molecular mechanism of the response of the potato to drought stress). As the method of choice for the analysis of data variation, the non-parametric MANOVA was chosen. NP-MANOVA is recommended as a flexible and very fast tool for the evaluation of the statistical significance of the factor(s) studied.

  19. Discriminating between cultivars and treatments of broccoli using mass spectral fingerprinting and analysis of variance-principal component analysis.

    PubMed

    Luthria, Devanand L; Lin, Long-Ze; Robbins, Rebecca J; Finley, John W; Banuelos, Gary S; Harnly, James M

    2008-11-12

    Metabolite fingerprints, obtained with direct injection mass spectrometry (MS) with both positive and negative ionization, were used with analysis of variance-principal components analysis (ANOVA-PCA) to discriminate between cultivars and growing treatments of broccoli. The sample set consisted of two cultivars of broccoli, Majestic and Legacy, the first grown with four different levels of Se and the second grown organically and conventionally with two rates of irrigation. Chemical composition differences in the two cultivars and seven treatments produced patterns that were visually and statistically distinguishable using ANOVA-PCA. PCA loadings allowed identification of the molecular and fragment ions that provided the most significant chemical differences. A standardized profiling method for phenolic compounds showed that important discriminating ions were not phenolic compounds. The elution times of the discriminating ions and previous results suggest that they were common sugars and organic acids. ANOVA calculations of the positive and negative ionization MS fingerprints showed that 33% of the variance came from the cultivar, 59% from the growing treatment, and 8% from analytical uncertainty. Although the positive and negative ionization fingerprints differed significantly, there was no difference in the distribution of variance. High variance of individual masses with cultivars or growing treatment was correlated with high PCA loadings. The ANOVA data suggest that only variables with high variance for analytical uncertainty should be deleted. All other variables represent discriminating masses that allow separation of the samples with respect to cultivar and treatment.

  20. Analysis of variance: is there a difference in means and what does it mean?

    PubMed

    Kao, Lillian S; Green, Charles E

    2008-01-01

    To critically evaluate the literature and to design valid studies, surgeons require an understanding of basic statistics. Despite the increasing complexity of reported statistical analyses in surgical journals and the decreasing use of inappropriate statistical methods, errors such as in the comparison of multiple groups still persist. This review introduces the statistical issues relating to multiple comparisons, describes the theoretical basis behind analysis of variance (ANOVA), discusses the essential differences between ANOVA and multiple t-tests, and provides an example of the computations and computer programming used in performing ANOVA.

  1. Edgar Allan Poe's Physical Cosmology

    NASA Astrophysics Data System (ADS)

    Cappi, Alberto

    1994-06-01

In this paper I describe the scientific content of Eureka, the prose poem written by Edgar Allan Poe in 1848. In that work, starting from metaphysical assumptions, Poe claims that the Universe is finite in an infinite Space, and that it originated from a primordial Particle, whose fragmentation under the action of a repulsive force caused a diffusion of atoms in space. I will show that his subsequently collapsing universe represents a scientifically acceptable Newtonian model. In the framework of his evolving universe, Poe makes use of contemporary astronomical knowledge, deriving modern concepts such as a primordial atomic state of the universe and a common epoch of galaxy formation. Harrison found in Eureka the first, qualitative solution of Olbers' paradox; I show that Poe also applies in a modern way the anthropic principle, trying to explain why the Universe is so large.

  2. Allan deviation computations of a linear frequency synthesizer system using frequency domain techniques

    NASA Technical Reports Server (NTRS)

    Wu, Andy

    1995-01-01

Allan Deviation computations of linear frequency synthesizer systems have been reported previously using real-time simulations. Even though it takes less time compared with the actual measurement, it is still very time consuming to compute the Allan Deviation for long sample times with the desired confidence level. Moreover, noise types such as flicker phase noise and flicker frequency noise cannot be simulated precisely. The use of frequency domain techniques can overcome these drawbacks. In this paper the system error model of a fictitious linear frequency synthesizer is developed and its performance using a Cesium (Cs) atomic frequency standard (AFS) as a reference is evaluated using frequency domain techniques. For a linear timing system, the power spectral density at the system output can be computed with known system transfer functions and known power spectral densities from the input noise sources. The resulting power spectral density can then be used to compute the Allan Variance at the system output. Sensitivities of the Allan Variance at the system output to each of its independent input noises are obtained, and they are valuable for design trade-offs and troubleshooting.
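The frequency-domain route sketched in this abstract can be illustrated numerically (a hedged sketch with invented values, not the paper's model): the Allan variance is obtained by integrating a one-sided power spectral density of fractional frequency, S_y(f), against the Allan transfer function. For white frequency noise S_y(f) = h0, theory gives sigma_y^2(tau) = h0/(2 tau), which the integral reproduces.

```python
# Numerical check of the frequency-domain relation (illustrative values).
import numpy as np

def allan_variance_from_psd(freqs, s_y, tau):
    """sigma_y^2(tau) = 2 * integral of S_y(f) sin^4(pi f tau)/(pi f tau)^2 df,
    evaluated with the trapezoid rule on a one-sided PSD."""
    h = np.sin(np.pi * freqs * tau) ** 4 / (np.pi * freqs * tau) ** 2
    integrand = s_y * h
    return 2.0 * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(freqs))

# White frequency noise S_y(f) = h0, for which theory gives h0 / (2 tau).
h0, tau = 1e-22, 1.0
freqs = np.linspace(1e-4, 1e4, 2_000_000)
avar = allan_variance_from_psd(freqs, np.full_like(freqs, h0), tau)
expected = h0 / (2.0 * tau)
print(avar, expected)
```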

  3. The apportionment of total genetic variation by categorical analysis of variance.

    PubMed

    Khang, Tsung Fei; Yap, Von Bing

    2010-01-01

    We wish to suggest the categorical analysis of variance as a means of quantifying the proportion of total genetic variation attributed to different sources of variation. This method potentially challenges researchers to rethink conclusions derived from a well-known method known as the analysis of molecular variance (AMOVA). The CATANOVA framework allows explicit definition, and estimation, of two measures of genetic differentiation. These parameters form the subject of interest in many research programmes, but are often confused with the correlation measures defined in AMOVA, which cannot be interpreted as relative contributions of particular sources of variation. Through a simulation approach, we show that under certain conditions, researchers who use AMOVA to estimate these measures of genetic differentiation may attribute more than justified amounts of total variation to population labels. Moreover, the two measures can also lead to incongruent conclusions regarding the genetic structure of the populations of interest. Fortunately, one of the two measures seems robust to variations in relative sample sizes used. Its merits are illustrated in this paper using mitochondrial haplotype and amplified fragment length polymorphism (AFLP) data.

  4. Analysis of variance components reveals the contribution of sample processing to transcript variation.

    PubMed

    van der Veen, Douwe; Oliveira, José Miguel; van den Berg, Willy A M; de Graaff, Leo H

    2009-04-01

The proper design of DNA microarray experiments requires knowledge of biological and technical variation of the studied biological model. For the filamentous fungus Aspergillus niger, a fast, quantitative real-time PCR (qPCR)-based hierarchical experimental design was used to determine this variation. Analysis of variance components determined the contribution of each processing step to total variation: 68% is due to differences in day-to-day handling and processing, while the fermentor vessel, cDNA synthesis, and qPCR measurement each contributed equally to the remainder of variation. The global transcriptional response to D-xylose was analyzed using Affymetrix microarrays. Twenty-four statistically differentially expressed genes were identified. These encode enzymes required to degrade and metabolize D-xylose-containing polysaccharides, as well as complementary enzymes required to metabolize complex polymers likely present in the vicinity of D-xylose-containing substrates. These results confirm previous findings that the D-xylose signal is interpreted by the fungus as the availability of a multitude of complex polysaccharides. Measurement of a limited number of transcripts in a defined experimental setup followed by analysis of variance components is a fast and reliable method to determine biological and technical variation present in qPCR and microarray studies. This approach provides important parameters for the experimental design of batch-grown filamentous cultures and facilitates the evaluation and interpretation of microarray data.

  5. Comparative performance of heterogeneity variance estimators in meta-analysis: a review of simulation studies.

    PubMed

    Langan, Dean; Higgins, Julian P T; Simmonds, Mark

    2016-04-06

    Random-effects meta-analysis methods include an estimate of between-study heterogeneity variance. We present a systematic review of simulation studies comparing the performance of different estimation methods for this parameter. We summarise the performance of methods in relation to estimation of heterogeneity and of the overall effect estimate, and of confidence intervals for the latter. Among the twelve included simulation studies, the DerSimonian and Laird method was most commonly evaluated. This estimate is negatively biased when heterogeneity is moderate to high and therefore most studies recommended alternatives. The Paule-Mandel method was recommended by three studies: it is simple to implement, is less biased than DerSimonian and Laird and performs well in meta-analyses with dichotomous and continuous outcomes. In many of the included simulation studies, results were based on data that do not represent meta-analyses observed in practice, and only small selections of methods were compared. Furthermore, potential conflicts of interest were present when authors of novel methods interpreted their results. On the basis of current evidence, we provisionally recommend the Paule-Mandel method for estimating the heterogeneity variance, and using this estimate to calculate the mean effect and its 95% confidence interval. However, further simulation studies are required to draw firm conclusions. Copyright © 2016 John Wiley & Sons, Ltd.
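For illustration, minimal Python implementations of the two estimators named above (study effects and variances invented; the Paule-Mandel root is found by bisection, one of several equivalent formulations):

```python
# Invented example: five study effect estimates with within-study variances.
import numpy as np

y = np.array([0.30, 0.10, 0.50, 0.70, 0.20])   # study effect estimates
v = np.array([0.04, 0.05, 0.03, 0.06, 0.04])   # within-study variances

def dersimonian_laird(y, v):
    """Moment-based tau^2 from Cochran's Q, truncated at zero."""
    w = 1.0 / v
    mu = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (len(y) - 1)) / c)

def paule_mandel(y, v, hi=10.0, iters=200):
    """Solve sum_i w_i(tau^2) (y_i - mu)^2 = k - 1 by bisection."""
    k = len(y)
    def excess(tau2):
        w = 1.0 / (v + tau2)
        mu = np.sum(w * y) / np.sum(w)
        return np.sum(w * (y - mu) ** 2) - (k - 1)
    lo = 0.0
    if excess(lo) <= 0:            # no excess heterogeneity detected
        return 0.0
    for _ in range(iters):         # excess() is decreasing in tau^2
        mid = 0.5 * (lo + hi)
        if excess(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

tau2_dl = dersimonian_laird(y, v)
tau2_pm = paule_mandel(y, v)
print(tau2_dl, tau2_pm)
```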

  6. Differential Variance Analysis: a direct method to quantify and visualize dynamic heterogeneities.

    PubMed

    Pastore, Raffaele; Pesce, Giuseppe; Caggioni, Marco

    2017-03-14

Many amorphous materials show spatially heterogeneous dynamics, as different regions of the same system relax at different rates. Such a signature, known as Dynamic Heterogeneity, has been crucial to understand the nature of the jamming transition in simple model systems and is currently considered very promising to characterize more complex fluids of industrial and biological relevance. Unfortunately, measurements of dynamic heterogeneities typically require sophisticated experimental set-ups and are performed by few specialized groups. It is now possible to quantitatively characterize the relaxation process and the emergence of dynamic heterogeneities using a straightforward method, here validated on video microscopy data of hard-sphere colloidal glasses. We call this method Differential Variance Analysis (DVA), since it focuses on the variance of the differential frames, obtained subtracting images at different time-lags. Moreover, direct visualization of dynamic heterogeneities naturally appears in the differential frames, when the time-lag is set to the one corresponding to the maximum dynamic susceptibility. This approach opens the way to effectively characterize and tailor a wide variety of soft materials, from complex formulated products to biological tissues.
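The core idea can be sketched in a few lines (synthetic AR(1) "frames" stand in for video-microscopy data; this illustrates the differential-variance signal only, not the authors' full method):

```python
# Synthetic demonstration: each pixel follows AR(1) dynamics, so frames
# decorrelate with lag and the differential variance grows accordingly.
import numpy as np

def dva(frames, lags):
    """Mean variance of differential frames for each time-lag."""
    return np.array([
        np.mean(np.var(frames[lag:] - frames[:-lag], axis=(1, 2)))
        for lag in lags
    ])

rng = np.random.default_rng(0)
n_frames, h, w, rho = 200, 32, 32, 0.9
frames = np.empty((n_frames, h, w))
frames[0] = rng.normal(size=(h, w))
for t in range(1, n_frames):
    frames[t] = rho * frames[t - 1] + np.sqrt(1 - rho ** 2) * rng.normal(size=(h, w))

lags = [1, 5, 20]
signal = dva(frames, lags)
# The differential variance rises with lag (about 2 * (1 - rho**lag) here)
# and plateaus once the frames are fully decorrelated.
print(dict(zip(lags, signal.round(3))))
```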

  7. Analysis of open-loop conical scan pointing error and variance estimators

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1993-01-01

    General pointing error and variance estimators for an open-loop conical scan (conscan) system are derived and analyzed. The conscan algorithm is modeled as a weighted least-squares estimator whose inputs are samples of receiver carrier power and its associated measurement uncertainty. When the assumptions of constant measurement noise and zero pointing error estimation are applied, the variance equation is then strictly a function of the carrier power to uncertainty ratio and the operator selectable radius and period input to the algorithm. The performance equation is applied to a 34-m mirror-based beam-waveguide conscan system interfaced with the Block V Receiver Subsystem tracking a Ka-band (32-GHz) downlink. It is shown that for a carrier-to-noise power ratio greater than or equal to 30 dB-Hz, the conscan period for Ka-band operation may be chosen well below the current DSN minimum of 32 sec. The analysis presented forms the basis of future conscan work in both research and development as well as for the upcoming DSN antenna controller upgrade for the new DSS-24 34-m beam-waveguide antenna.

  8. Differential Variance Analysis: a direct method to quantify and visualize dynamic heterogeneities

    NASA Astrophysics Data System (ADS)

    Pastore, Raffaele; Pesce, Giuseppe; Caggioni, Marco

    2017-03-01

Many amorphous materials show spatially heterogeneous dynamics, as different regions of the same system relax at different rates. Such a signature, known as Dynamic Heterogeneity, has been crucial to understand the nature of the jamming transition in simple model systems and is currently considered very promising to characterize more complex fluids of industrial and biological relevance. Unfortunately, measurements of dynamic heterogeneities typically require sophisticated experimental set-ups and are performed by few specialized groups. It is now possible to quantitatively characterize the relaxation process and the emergence of dynamic heterogeneities using a straightforward method, here validated on video microscopy data of hard-sphere colloidal glasses. We call this method Differential Variance Analysis (DVA), since it focuses on the variance of the differential frames, obtained subtracting images at different time-lags. Moreover, direct visualization of dynamic heterogeneities naturally appears in the differential frames, when the time-lag is set to the one corresponding to the maximum dynamic susceptibility. This approach opens the way to effectively characterize and tailor a wide variety of soft materials, from complex formulated products to biological tissues.

  9. Differential Variance Analysis: a direct method to quantify and visualize dynamic heterogeneities

    PubMed Central

    Pastore, Raffaele; Pesce, Giuseppe; Caggioni, Marco

    2017-01-01

Many amorphous materials show spatially heterogeneous dynamics, as different regions of the same system relax at different rates. Such a signature, known as Dynamic Heterogeneity, has been crucial to understand the nature of the jamming transition in simple model systems and is currently considered very promising to characterize more complex fluids of industrial and biological relevance. Unfortunately, measurements of dynamic heterogeneities typically require sophisticated experimental set-ups and are performed by few specialized groups. It is now possible to quantitatively characterize the relaxation process and the emergence of dynamic heterogeneities using a straightforward method, here validated on video microscopy data of hard-sphere colloidal glasses. We call this method Differential Variance Analysis (DVA), since it focuses on the variance of the differential frames, obtained subtracting images at different time-lags. Moreover, direct visualization of dynamic heterogeneities naturally appears in the differential frames, when the time-lag is set to the one corresponding to the maximum dynamic susceptibility. This approach opens the way to effectively characterize and tailor a wide variety of soft materials, from complex formulated products to biological tissues. PMID:28290540

  10. Meta-analysis of binary data: which within study variance estimate to use?

    PubMed

    Chang, B H; Waternaux, C; Lipsitz, S

    2001-07-15

We applied a mixed effects model to investigate between- and within-study variation in improvement rates of 180 schizophrenia outcome studies. The between-study variation was explained by the fixed study characteristics and an additional random study effect. Both rate difference and logit models were used. For a binary proportion outcome pᵢ with sample size nᵢ in the ith study, [p̂ᵢ(1 − p̂ᵢ)nᵢ]⁻¹ is the usual estimate of the within-study variance σᵢ² in the logit model, where p̂ᵢ is the sample mean of the binary outcome for subjects in study i. This estimate can be highly correlated with logit(p̂ᵢ). We used [p̄(1 − p̄)nᵢ]⁻¹ as an alternative estimate of σᵢ², where p̄ is the weighted mean of the p̂ᵢ's. We estimated regression coefficients (β) of the fixed effects and the variance (τ²) of the random study effect using a quasi-likelihood estimating equations approach. Using the schizophrenia meta-analysis data, we demonstrated how the choice of the estimate of σᵢ² affects the resulting estimates of β and τ². We also conducted a simulation study to evaluate the performance of the two estimates of σᵢ² in different conditions, where the conditions vary by number of studies and study size. Using the schizophrenia meta-analysis data, the estimates of β and τ² were quite different when different estimates of σᵢ² were used in the logit model. The simulation study showed that the estimates of β and τ² were less biased, and the 95 per cent CI coverage was closer to 95 per cent, when the estimate of σᵢ² was [p̄(1 − p̄)nᵢ]⁻¹ rather than [p̂ᵢ(1 − p̂ᵢ)nᵢ]⁻¹. Finally, we showed that a simple regression analysis is not appropriate unless τ² is much larger than σᵢ², or a robust variance is used.
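The two within-study variance estimates contrasted in this abstract, for the logit of a binary proportion, can be written out in a short sketch (study rates and sizes invented):

```python
# Invented study rates and sizes; variances are for logit(p).
import numpy as np

p_hat = np.array([0.30, 0.45, 0.60, 0.25, 0.50])   # per-study improvement rates
n = np.array([40, 60, 55, 35, 80])                 # per-study sample sizes

# Usual estimate: uses each study's own rate, hence correlated with logit(p_hat).
var_usual = 1.0 / (p_hat * (1.0 - p_hat) * n)

# Alternative estimate: plug the weighted mean rate p_bar into every study.
p_bar = np.sum(n * p_hat) / np.sum(n)
var_alt = 1.0 / (p_bar * (1.0 - p_bar) * n)

print(var_usual.round(4))
print(var_alt.round(4))
```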

  11. Sensitivity analysis of simulated SOA loadings using a variance-based statistical approach: SENSITIVITY ANALYSIS OF SOA

    SciTech Connect

    Shrivastava, Manish; Zhao, Chun; Easter, Richard C.; Qian, Yun; Zelenyuk, Alla; Fast, Jerome D.; Liu, Ying; Zhang, Qi; Guenther, Alex

    2016-04-08

We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to 7 selected tunable model parameters: 4 involving emissions of anthropogenic and biogenic volatile organic compounds, anthropogenic semi-volatile and intermediate volatility organics (SIVOCs), and NOx, 2 involving dry deposition of SOA precursor gases, and 1 involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250 member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the tunable parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether particle-phase transformation of SOA from semi-volatile SOA to non-volatile is on or off, is the dominant contributor to variance of simulated surface-level daytime SOA (65% domain average contribution). We also split the simulations into 2 subsets of 125 each, depending on whether the volatility transformation is turned on/off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to non-volatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to dominance of intermediate to high NOx conditions throughout the simulated domain. The two parameters related to dry deposition of SOA precursor gases also have very low contributions to SOA variance.

  12. Structural damage detection in an aeronautical panel using analysis of variance

    NASA Astrophysics Data System (ADS)

    Gonsalez, Camila Gianini; da Silva, Samuel; Brennan, Michael J.; Lopes Junior, Vicente

    2015-02-01

This paper describes a procedure for structural health assessment based on one-way analysis of variance (ANOVA) together with Tukey's multiple comparison test, to determine whether the results are statistically significant. The feature indices are obtained from electromechanical impedance measurements using piezoceramic sensor/actuator patches bonded to the structure. Compared to the classical approach based on a simple change of the observed signals, using for example root mean square responses, the decision procedure in this paper involves a rigorous statistical test. Experimental tests were carried out on an aeronautical panel in the laboratory to validate the approach. In order to include uncontrolled variability in the dynamic responses, the measurements were taken over several days in different environmental conditions using all eight sensor/actuator patches. The damage was simulated by controlling the tightness and looseness of the bolts and was correctly diagnosed. The paper discusses the strengths and weaknesses of the approach in light of the experimental results.
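The decision procedure described here, one-way ANOVA followed by Tukey's multiple comparison test, can be sketched with scipy (feature values invented; scipy.stats.tukey_hsd assumed available, SciPy >= 1.8):

```python
# Invented feature-index values for three structural conditions.
from scipy import stats

baseline = [1.02, 0.98, 1.01, 0.99, 1.03]   # all bolts tight
loose_1  = [1.20, 1.25, 1.18, 1.22, 1.24]   # one bolt loosened
loose_2  = [1.40, 1.38, 1.43, 1.41, 1.39]   # two bolts loosened

# Omnibus one-way ANOVA: is there any difference among conditions?
f_stat, p_value = stats.f_oneway(baseline, loose_1, loose_2)

# If the omnibus test is significant, Tukey's test shows which pairs differ.
res = stats.tukey_hsd(baseline, loose_1, loose_2)
print(f"F = {f_stat:.1f}, p = {p_value:.2g}")
print(res.pvalue)   # 3 x 3 matrix of pairwise p values
```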

  13. A VLBI variance-covariance analysis interactive computer program. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Bock, Y.

    1980-01-01

An interactive computer program (in FORTRAN) for the variance covariance analysis of VLBI experiments is presented for use in experiment planning, simulation studies and optimal design problems. The interactive mode is especially suited to these types of analyses providing ease of operation as well as savings in time and cost. The geodetic parameters include baseline vector parameters and variations in polar motion and Earth rotation. A discussion of the theory on which the program is based provides an overview of the VLBI process emphasizing the areas of interest to geodesy. Special emphasis is placed on the problem of determining correlations between simultaneous observations from a network of stations. A model suitable for covariance analyses is presented. Suggestions towards developing optimal observation schedules are included.

  14. Identification of mitochondrial proteins of malaria parasite using analysis of variance.

    PubMed

    Ding, Hui; Li, Dongmei

    2015-02-01

As a parasitic protozoan, Plasmodium falciparum (P. falciparum) can cause malaria. The mitochondrial proteins of malaria parasite play important roles in the discovery of anti-malarial drug targets. Thus, accurate identification of mitochondrial proteins of malaria parasite is a key step for understanding their functions and finding potential drug targets. In this work, we developed a sequence-based method to identify the mitochondrial proteins of malaria parasite. First, we extended adjoining dipeptide composition to g-gap dipeptide composition for discretely formulating the protein sequences. Subsequently, the analysis of variance (ANOVA) combined with incremental feature selection (IFS) was used to pick out the optimal features. Finally, the jackknife cross-validation was used to evaluate the performance of the proposed model. Evaluation results showed that the maximum accuracy of 97.1% could be achieved by using 101 optimal 5-gap dipeptides. The comparison with previous methods demonstrated that our method was accurate and efficient.
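A hypothetical sketch of the g-gap dipeptide composition used as the feature vector above: the frequency of each ordered amino-acid pair separated by exactly g intervening residues (the helper name and example sequence are invented):

```python
# Hypothetical helper; the example sequence is invented.
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def g_gap_dipeptide_composition(seq, g):
    """Frequency of each ordered residue pair at positions (i, i + g + 1),
    i.e. separated by exactly g residues; a 400-dimensional feature vector."""
    pairs = ["".join(p) for p in product(AMINO_ACIDS, repeat=2)]
    counts = dict.fromkeys(pairs, 0)
    total = len(seq) - g - 1          # number of valid (i, i + g + 1) pairs
    for i in range(total):
        counts[seq[i] + seq[i + g + 1]] += 1
    return {k: v / total for k, v in counts.items()}

feats = g_gap_dipeptide_composition("MKVLAAGLLALACS", g=2)
print(len(feats), round(sum(feats.values()), 6))
```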

  15. Analysis of variance on thickness and electrical conductivity measurements of carbon nanotube thin films

    NASA Astrophysics Data System (ADS)

    Li, Min-Yang; Yang, Mingchia; Vargas, Emily; Neff, Kyle; Vanli, Arda; Liang, Richard

    2016-09-01

One of the major challenges towards controlling the transfer of electrical and mechanical properties of nanotubes into nanocomposites is the lack of adequate measurement systems to quantify the variations in bulk properties while the nanotubes were used as the reinforcement material. In this study, we conducted one-way analysis of variance (ANOVA) on thickness and conductivity measurements. By analyzing data collected from both experienced and inexperienced operators, we identified operational details that users might overlook and that introduce variation, since conductivity measurements of CNT thin films are very sensitive to the thickness measurements. In addition, we demonstrated how measurement issues damaged samples and limited the number of replications, resulting in large variations in the electrical conductivity results. Based on this study, we propose a faster, more reliable approach to measuring the thickness of CNT thin films that operators can follow to make these measurement processes less dependent on operator skills.

  16. Minimum variance imaging based on correlation analysis of Lamb wave signals.

    PubMed

    Hua, Jiadong; Lin, Jing; Zeng, Liang; Luo, Zhi

    2016-08-01

    In Lamb wave imaging, MVDR (minimum variance distortionless response) is a promising approach for the detection and monitoring of large areas with sparse transducer network. Previous studies in MVDR use signal amplitude as the input damage feature, and the imaging performance is closely related to the evaluation accuracy of the scattering characteristic. However, scattering characteristic is highly dependent on damage parameters (e.g. type, orientation and size), which are unknown beforehand. The evaluation error can degrade imaging performance severely. In this study, a more reliable damage feature, LSCC (local signal correlation coefficient), is established to replace signal amplitude. In comparison with signal amplitude, one attractive feature of LSCC is its independence of damage parameters. Therefore, LSCC model in the transducer network could be accurately evaluated, the imaging performance is improved subsequently. Both theoretical analysis and experimental investigation are given to validate the effectiveness of the LSCC-based MVDR algorithm in improving imaging performance.

  17. fullfact: an R package for the analysis of genetic and maternal variance components from full factorial mating designs.

    PubMed

    Houde, Aimee Lee S; Pitcher, Trevor E

    2016-03-01

    Full factorial breeding designs are useful for quantifying the amount of additive genetic, nonadditive genetic, and maternal variance that explain phenotypic traits. Such variance estimates are important for examining evolutionary potential. Traditionally, full factorial mating designs have been analyzed using a two-way analysis of variance, which may produce negative variance values and is not suited for unbalanced designs. Mixed-effects models do not produce negative variance values and are suited for unbalanced designs. However, extracting the variance components, calculating significance values, and estimating confidence intervals and/or power values for the components are not straightforward using traditional analytic methods. We introduce fullfact - an R package that addresses these issues and facilitates the analysis of full factorial mating designs with mixed-effects models. Here, we summarize the functions of the fullfact package. The observed data functions extract the variance explained by random and fixed effects and provide their significance. We then calculate the additive genetic, nonadditive genetic, and maternal variance components explaining the phenotype. In particular, we integrate nonnormal error structures for estimating these components for nonnormal data types. The resampled data functions are used to produce bootstrap-t confidence intervals, which can then be plotted using a simple function. We explore the fullfact package through a worked example. This package will facilitate the analyses of full factorial mating designs in R, especially for the analysis of binary, proportion, and/or count data types and for the ability to incorporate additional random and fixed effects and power analyses.
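    The translation of sire, dam, and sire-by-dam variance components into genetic terms follows standard quantitative-genetics conventions for a full factorial (North Carolina II) design. The sketch below is a minimal illustration of that bookkeeping only; it is not code from the fullfact package, which estimates the components themselves with mixed-effects models in R:

```python
# Hypothetical variance-component bookkeeping for a full factorial design.
# The conversion rules below are the standard textbook conventions; the
# input component values are invented for illustration.
def genetic_components(var_sire, var_dam, var_interaction):
    additive = 4.0 * var_sire            # V_A = 4 * sigma^2_sire
    nonadditive = 4.0 * var_interaction  # V_NA = 4 * sigma^2_(sire x dam)
    maternal = var_dam - var_sire        # maternal = sigma^2_dam - sigma^2_sire
    return additive, nonadditive, maternal

va, vna, vm = genetic_components(0.10, 0.25, 0.05)
print(va, vna, vm)
```

    A mixed model keeps the estimated sire, dam, and interaction components non-negative, so the derived maternal variance here stays interpretable, unlike the two-way ANOVA route criticized above.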

  18. Analysis of variance with unbalanced data: an update for ecology & evolution.

    PubMed

    Hector, Andy; von Felten, Stefanie; Schmid, Bernhard

    2010-03-01

    1. Factorial analysis of variance (anova) with unbalanced (non-orthogonal) data is a commonplace but controversial and poorly understood topic in applied statistics. 2. We explain that anova calculates the sum of squares for each term in the model formula sequentially (type I sums of squares) and show how anova tables of adjusted sums of squares are composite tables assembled from multiple sequential analyses. A different anova is performed for each explanatory variable or interaction so that each term is placed last in the model formula in turn and adjusted for the others. 3. The sum of squares for each term in the analysis can be calculated after adjusting only for the main effects of other explanatory variables (type II sums of squares) or, controversially, for both main effects and interactions (type III sums of squares). 4. We summarize the main recent developments and emphasize the shift away from the search for the 'right' anova table in favour of presenting one or more models that best suit the objectives of the analysis.

  19. Longitudinal variance-components analysis of the Framingham Heart Study data

    PubMed Central

    Macgregor, Stuart; Knott, Sara A; White, Ian; Visscher, Peter M

    2003-01-01

    The Framingham Heart Study offspring cohort, a complex data set with irregularly spaced longitudinal phenotype data, was made available as part of Genetic Analysis Workshop 13. To allow an analysis of all of the data simultaneously, a mixed-model- based random-regression (RR) approach was used. The RR accounted for the variation in genetic effects (including marker-specific quantitative trait locus (QTL) effects) across time by fitting polynomials of age. The use of a mixed model allowed both fixed (such as sex) and random (such as familial environment) effects to be accounted for appropriately. Using this method we performed a QTL analysis of all of the available adult phenotype data (26,106 phenotypic records). In addition to RR, conventional univariate variance component techniques were applied. The traits of interest were BMI, HDLC, total cholesterol, and height. The longitudinal method allowed the characterization of the change in QTL effects with aging. A QTL affecting BMI was shown to act mainly at early ages. PMID:14975090

  20. Longitudinal variance-components analysis of the Framingham Heart Study data.

    PubMed

    Macgregor, Stuart; Knott, Sara A; White, Ian; Visscher, Peter M

    2003-12-31

    The Framingham Heart Study offspring cohort, a complex data set with irregularly spaced longitudinal phenotype data, was made available as part of Genetic Analysis Workshop 13. To allow an analysis of all of the data simultaneously, a mixed-model- based random-regression (RR) approach was used. The RR accounted for the variation in genetic effects (including marker-specific quantitative trait locus (QTL) effects) across time by fitting polynomials of age. The use of a mixed model allowed both fixed (such as sex) and random (such as familial environment) effects to be accounted for appropriately. Using this method we performed a QTL analysis of all of the available adult phenotype data (26,106 phenotypic records). In addition to RR, conventional univariate variance component techniques were applied. The traits of interest were BMI, HDLC, total cholesterol, and height. The longitudinal method allowed the characterization of the change in QTL effects with aging. A QTL affecting BMI was shown to act mainly at early ages.

  1. Methods to Estimate the Between-Study Variance and Its Uncertainty in Meta-Analysis

    ERIC Educational Resources Information Center

    Veroniki, Areti Angeliki; Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian P. T.; Langan, Dean; Salanti, Georgia

    2016-01-01

    Meta-analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between-study variability, which is typically modelled using a between-study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between-study variance,…

  2. Variance Analysis of Wind and Natural Gas Generation under Different Market Structures: Some Observations

    SciTech Connect

    Bush, B.; Jenkin, T.; Lipowicz, D.; Arent, D. J.; Cooke, R.

    2012-01-01

    Does large scale penetration of renewable generation such as wind and solar power pose economic and operational burdens on the electricity system? A number of studies have pointed to the potential benefits of renewable generation as a hedge against the volatility and potential escalation of fossil fuel prices. Research also suggests that the lack of correlation of renewable energy costs with fossil fuel prices means that adding large amounts of wind or solar generation may also reduce the volatility of system-wide electricity costs. Such variance reduction of system costs may be of significant value to consumers due to risk aversion. The analysis in this report recognizes that the potential value of risk mitigation associated with wind generation and natural gas generation may depend on whether one considers the consumer's perspective or the investor's perspective and whether the market is regulated or deregulated. We analyze the risk and return trade-offs for wind and natural gas generation for deregulated markets based on hourly prices and load over a 10-year period using historical data in the PJM Interconnection (PJM) from 1999 to 2008. Similar analysis is then simulated and evaluated for regulated markets under certain assumptions.
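    The variance-reduction argument can be illustrated with a minimal portfolio-style calculation (all numbers invented, not from the report): if wind's cost is uncorrelated with the fuel-linked cost of gas generation, blending the two lowers the standard deviation of total system cost.

```python
# Minimal sketch: the variance of a cost mix falls as an uncorrelated,
# low-volatility source is blended in. Costs and volatilities are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
gas_cost = rng.normal(60.0, 15.0, n)   # $/MWh, volatile fuel-linked cost
wind_cost = rng.normal(60.0, 5.0, n)   # $/MWh, uncorrelated with gas

for w_wind in (0.0, 0.2, 0.4):
    total = (1 - w_wind) * gas_cost + w_wind * wind_cost
    print(f"wind share {w_wind:.0%}: cost std = {total.std():5.2f} $/MWh")
```

    With zero correlation, the mix variance is (1-w)^2 * var_gas + w^2 * var_wind, so the standard deviation drops monotonically over this range of wind shares; a risk-averse consumer values that reduction even at equal expected cost.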

  3. Multivariate analysis of variance of designed chromatographic data. A case study involving fermentation of rooibos tea.

    PubMed

    Marini, Federico; de Beer, Dalene; Walters, Nico A; de Villiers, André; Joubert, Elizabeth; Walczak, Beata

    2017-03-17

    An ultimate goal of investigations of rooibos plant material subjected to different stages of fermentation is to identify the chemical changes taking place in the phenolic composition, using an untargeted approach and chromatographic fingerprints. Realization of this goal requires, among others, identification of the main components of the plant material involved in chemical reactions during the fermentation process. Quantitative chromatographic data for the compounds for extracts of green, semi-fermented and fermented rooibos form the basis of a preliminary study following a targeted approach. The aim is to estimate whether treatment has a significant effect based on all quantified compounds and to identify the compounds that contribute significantly to it. Analysis of variance is performed using modern multivariate methods such as ANOVA-Simultaneous Component Analysis, ANOVA-Target Projection and regularized MANOVA. This study is the first in which all three approaches are compared and evaluated. For the data studied, all three methods reveal the same significance of the fermentation effect on the extract compositions, but they lead to different interpretations of it.

  4. Variance components models for gene-environment interaction in twin analysis.

    PubMed

    Purcell, Shaun

    2002-12-01

    Gene-environment interaction is likely to be a common and important source of variation for complex behavioral traits. Often conceptualized as the genetic control of sensitivity to the environment, it can be incorporated in variance components twin analyses by partitioning genetic effects into a mean part, which is independent of the environment, and a part that is a linear function of the environment. The model allows for one or more environmental moderator variables (that possibly interact with each other) that may (i) be continuous or binary; (ii) differ between twins within a pair; (iii) interact with residual environmental as well as genetic effects; (iv) have nonlinear moderating properties; (v) show scalar (different magnitudes) or qualitative (different genes) interactions; (vi) be correlated with genetic effects acting upon the trait, allowing a test of gene-environment interaction in the presence of gene-environment correlation. Aspects and applications of a class of models are explored by simulation, in the context of both individual differences twin analysis and, in a companion paper (Purcell & Sham, 2002), sib-pair quantitative trait locus linkage analysis. As well as elucidating environmental pathways, consideration of gene-environment interaction in quantitative and molecular studies will potentially direct and enhance gene-mapping efforts.

  5. NASTRAN variance analysis and plotting of HBDY elements. [analysis of uncertainties of the computer results as a function of uncertainties in the input data

    NASA Technical Reports Server (NTRS)

    Harder, R. L.

    1974-01-01

    The NASTRAN Thermal Analyzer has been extended to perform variance analysis and to plot thermal boundary (HBDY) elements. The objective of the variance analysis addition is to assess the sensitivity of temperature variances resulting from uncertainties inherent in input parameters for heat conduction analysis. The plotting capability provides the ability to check the geometry (location, size, and orientation) of the boundary elements of a model in relation to the conduction elements. Variance analysis is the study of uncertainties in the computed results as a function of uncertainties in the input data. To study this problem using NASTRAN, a solution is made for the expected values of all inputs, plus an additional solution for each uncertain variable. A variance analysis module subtracts the results to form derivatives and then determines the expected deviations of the output quantities.

  6. Effects of Violations of Data Set Assumptions When Using the Analysis of Variance and Covariance with Unequal Group Sizes.

    ERIC Educational Resources Information Center

    Johnson, Colleen Cook; Rakow, Ernest A.

    This research explored the degree to which group sizes can differ before the robustness of analysis of variance (ANOVA) and analysis of covariance (ANCOVA) are jeopardized. Monte Carlo methodology was used, allowing for the experimental investigation of potential threats to robustness under conditions common to researchers in education. The…

  7. The Genetic Architecture of Quantitative Traits Cannot Be Inferred from Variance Component Analysis

    PubMed Central

    Huang, Wen; Mackay, Trudy F. C.

    2016-01-01

    Classical quantitative genetic analyses estimate additive and non-additive genetic and environmental components of variance from phenotypes of related individuals without knowing the identities of quantitative trait loci (QTLs). Many studies have found that a large proportion of quantitative trait variation can be attributed to the additive genetic variance (VA), providing the basis for claims that non-additive gene actions are unimportant. In this study, we show that arbitrarily defined parameterizations of genetic effects seemingly consistent with non-additive gene actions can also capture the majority of genetic variation. This reveals a logical flaw in using the relative magnitudes of variance components to indicate the relative importance of additive and non-additive gene actions. We discuss the implications and propose that variance component analyses should not be used to infer the genetic architecture of quantitative traits. PMID:27812106

  8. Analysis of variance of communication latencies in anesthesia: comparing means of multiple log-normal distributions.

    PubMed

    Ledolter, Johannes; Dexter, Franklin; Epstein, Richard H

    2011-10-01

    Anesthesiologists rely on communication over periods of minutes. The analysis of latencies between when messages are sent and responses obtained is an essential component of practical and regulatory assessment of clinical and managerial decision-support systems. Latency data, including times for anesthesia providers to respond to messages, have moderate (> n = 20) sample sizes, large coefficients of variation (e.g., 0.60 to 2.50), and heterogeneous coefficients of variation among groups. Highly inaccurate results are obtained either by performing analysis of variance (ANOVA) in the time scale or by performing it in the log scale and then taking the exponential of the result. To overcome these difficulties, one can calculate P values and confidence intervals for mean latencies based on log-normal distributions using generalized pivotal methods. In addition, fixed-effects 2-way ANOVAs can be extended to the comparison of means of log-normal distributions. Pivotal inference does not assume that the coefficients of variation of the studied log-normal distributions are the same, and can be used to assess the proportional effects of 2 factors and their interaction. Latency data can also include a human behavioral component (e.g., complete other activity first), resulting in a bimodal distribution in the log-domain (i.e., a mixture of distributions). An ANOVA can be performed on a homogeneous segment of the data, followed by a single group analysis applied to all or portions of the data using a robust method, insensitive to the probability distribution.
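    A sketch of the generalized pivotal approach for a single log-normal mean is given below. It follows the standard GPQ construction for log-normal parameters and is not the authors' code; the simulated latencies are purely illustrative.

```python
# Sketch of a generalized pivotal quantity (GPQ) confidence interval for
# the mean of a log-normal latency distribution (standard construction;
# not code from the paper). Latencies are simulated for illustration.
import numpy as np

def lognormal_mean_gpq_ci(latencies, alpha=0.05, n_sim=100_000, seed=1):
    rng = np.random.default_rng(seed)
    y = np.log(latencies)                 # work in the log scale
    n, ybar, s2 = len(y), y.mean(), y.var(ddof=1)
    z = rng.standard_normal(n_sim)
    u2 = rng.chisquare(n - 1, n_sim)
    # pivots for mu and sigma^2, combined into a pivot for E[latency]
    mu_pivot = ybar - z * np.sqrt(s2 * (n - 1) / (u2 * n))
    s2_pivot = s2 * (n - 1) / u2
    mean_pivot = np.exp(mu_pivot + s2_pivot / 2)
    return np.quantile(mean_pivot, [alpha / 2, 1 - alpha / 2])

lat = np.exp(np.random.default_rng(7).normal(3.0, 0.8, 30))  # simulated latencies
lo, hi = lognormal_mean_gpq_ci(lat)
print(f"95% CI for mean latency: ({lo:.1f}, {hi:.1f})")
```

    Because the pivot is built from the sample's own log-mean and log-variance, no common coefficient of variation across groups is assumed, which is the property the abstract highlights.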

  9. Adjusting stream-sediment geochemical maps in the Austrian Bohemian Massif by analysis of variance

    USGS Publications Warehouse

    Davis, J.C.; Hausberger, G.; Schermann, O.; Bohling, G.

    1995-01-01

    The Austrian portion of the Bohemian Massif is a Precambrian terrane composed mostly of highly metamorphosed rocks intruded by a series of granitoids that are petrographically similar. Rocks are exposed poorly and the subtle variations in rock type are difficult to map in the field. A detailed geochemical survey of stream sediments in this region has been conducted and included as part of the Geochemischer Atlas der Republik Österreich, and the variations in stream sediment composition may help refine the geological interpretation. In an earlier study, multivariate analysis of variance (MANOVA) was applied to the stream-sediment data in order to minimize unwanted sampling variation and emphasize relationships between stream sediments and rock types in sample catchment areas. The estimated coefficients were used successfully to correct for the sampling effects throughout most of the region, but also introduced an overcorrection in some areas that seems to result from consistent but subtle differences in composition of specific rock types. By expanding the model to include an additional factor reflecting the presence of a major tectonic unit, the Rohrbach block, the overcorrection is removed. This iterative process simultaneously refines both the geochemical map by removing extraneous variation and the geological map by suggesting a more detailed classification of rock types. © 1995 International Association for Mathematical Geology.

  10. Analysis of latitudinal distribution of Pi2 geomagnetic pulsations using the generalized variance method

    NASA Astrophysics Data System (ADS)

    Kleimenova, N. G.; Zelinsky, N. R.; Kotikov, A. L.

    2014-05-01

    The spatial dynamics of bursts of geomagnetic Pi2-type pulsations during a typical poleward-drifting magnetospheric substorm (April 13, 2010) was investigated using the method of generalized variance, which characterizes the integral time increment of the total horizontal amplitude of the wave at a given point in the selected time interval. The digital data of Scandinavian profile observations from IMAGE magnetometers with 10-second sampling and data of the INTERMAGNET project observations at equatorial, middle and subauroral latitudes with 1-second sampling were used in the analysis. It was shown that Pi2 pulsation bursts in a frequency band of 8-20 mHz appear simultaneously on a global scale, from polar to equatorial latitudes, with maximum amplitudes at the latitudes of maximum auroral electrojet intensity, where the amplitude of Pi3 geomagnetic pulsations in a band of 1.5-6 mHz is also largest. The first (left-polarized) intensive Pi2 burst appeared at auroral latitudes several minutes after breakup, while the second (right-polarized) burst occurred 15 min after breakup but at higher (polar) latitudes, to which the substorm had moved by that time. The direction of wave-polarization vector rotation was opposite for auroral and subauroral latitudes, but it was identical at the equator and in the subauroral zone. The pulsation amplitude at the equator was maximal in the night sector.

  11. Spatial Variance in Resting fMRI Networks of Schizophrenia Patients: An Independent Vector Analysis.

    PubMed

    Gopal, Shruti; Miller, Robyn L; Michael, Andrew; Adali, Tulay; Cetin, Mustafa; Rachakonda, Srinivas; Bustillo, Juan R; Cahill, Nathan; Baum, Stefi A; Calhoun, Vince D

    2016-01-01

    Spatial variability in resting functional MRI (fMRI) brain networks has not been well studied in schizophrenia, a disease known for both neurodevelopmental and widespread anatomic changes. Motivated by abundant evidence of neuroanatomical variability from previous studies of schizophrenia, we draw upon a relatively new approach called independent vector analysis (IVA) to assess this variability in resting fMRI networks. IVA is a blind-source separation algorithm, which segregates fMRI data into temporally coherent but spatially independent networks and has been shown to be especially good at capturing spatial variability among subjects in the extracted networks. We introduce several new ways to quantify differences in variability of IVA-derived networks between schizophrenia patients (SZs = 82) and healthy controls (HCs = 89). Voxelwise amplitude analyses showed significant group differences in the spatial maps of auditory cortex, the basal ganglia, the sensorimotor network, and visual cortex. Tests for differences (HC-SZ) in the spatial variability maps suggest that, at rest, SZs exhibit more activity within externally focused sensory and integrative networks and less activity in the default mode network, which is thought to be related to internal reflection. Additionally, tests for difference of variance between groups further emphasize that SZs exhibit greater network variability. These results, consistent with our prediction of increased spatial variability within SZs, enhance our understanding of the disease and suggest that it is not just the amplitude of connectivity that is different in schizophrenia, but also the consistency in spatial connectivity patterns across subjects.

  12. Odor measurements according to EN 13725: A statistical analysis of variance components

    NASA Astrophysics Data System (ADS)

    Klarenbeek, Johannes V.; Ogink, Nico W. M.; van der Voet, Hilko

    2014-04-01

    In Europe, dynamic olfactometry, as described by the European standard EN 13725, has become the preferred method for evaluating odor emissions emanating from industrial and agricultural sources. Key elements of this standard are the quality criteria for trueness and precision (repeatability). Both are linked to standard values of n-butanol in nitrogen. It is assumed in this standard that whenever a laboratory complies with the overall sensory quality criteria for n-butanol, the quality level is transferable to other, environmental, odors. Although olfactometry is well established, little has been done to investigate inter laboratory variance (reproducibility). Therefore, the objective of this study was to estimate the reproducibility of odor laboratories complying with EN 13725 as well as to investigate the transferability of n-butanol quality criteria to other odorants. Based upon the statistical analysis of 412 odor measurements on 33 sources, distributed in 10 proficiency tests, it was established that laboratory, panel and panel session are components of variance that significantly differ between n-butanol and other odorants (α = 0.05). This finding does not support the transferability of the quality criteria, as determined on n-butanol, to other odorants and as such is a cause for reconsideration of the present single reference odorant as laid down in EN 13725. In case of non-butanol odorants, repeatability standard deviation (sr) and reproducibility standard deviation (sR) were calculated to be 0.108 and 0.282 respectively (log base-10). The latter implies that the difference between two consecutive single measurements, performed on the same testing material by two or more laboratories under reproducibility conditions, will not be larger than a factor 6.3 in 95% of cases. As far as n-butanol odorants are concerned, it was found that the present repeatability standard deviation (sr = 0.108) compares favorably to that of EN 13725 (sr = 0.172). It is therefore…

  13. Application of Allan Deviation to Assessing Uncertainties of Continuous-measurement Instruments, and Optimizing Calibration Schemes

    NASA Astrophysics Data System (ADS)

    Jacobson, Gloria; Rella, Chris; Farinas, Alejandro

    2014-05-01

    Technological advancement of instrumentation in atmospheric and other geoscience disciplines over the past decade has led to a shift from discrete sample analysis to continuous, in-situ monitoring. Standard error analysis used for discrete measurements is not sufficient to assess and compare the error contribution of noise and drift from continuous-measurement instruments, and a different statistical analysis approach should be applied. The Allan standard deviation analysis technique developed for atomic clock stability assessment by David W. Allan [1] can be effectively and gainfully applied to continuous measurement instruments. As an example, P. Werle et al. have applied these techniques to signal averaging for atmospheric monitoring by Tunable Diode-Laser Absorption Spectroscopy (TDLAS) [2]. This presentation will build on and translate prior foundational publications to provide contextual definitions and guidelines for the practical application of this analysis technique to continuous scientific measurements. The specific example of a Picarro G2401 Cavity Ringdown Spectroscopy (CRDS) analyzer used for continuous, atmospheric monitoring of CO2, CH4 and CO will be used to define the basic features of the Allan deviation, assess factors affecting the analysis, and explore the time-series to Allan deviation plot translation for different types of instrument noise (white noise, linear drift, and interpolated data). In addition, the use of the Allan deviation to optimize and predict the performance of different calibration schemes will be presented. Even though this presentation will use the specific example of the Picarro G2401 CRDS Analyzer for atmospheric monitoring, the objective is to present the information such that it can be successfully applied to other instrument sets and disciplines. [1] D.W. Allan, "Statistics of Atomic Frequency Standards," Proc. IEEE, vol. 54, pp. 221-230, Feb 1966 [2] P. Werle, R. Mücke, F. Slemr, "The Limits…
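    The basic Allan deviation computation described above is simple to sketch. The following is a minimal, non-overlapping estimator applied to simulated white noise (assumptions: evenly spaced samples; this is a generic illustration, not Picarro's implementation):

```python
# Minimal non-overlapping Allan deviation estimator, applied to simulated
# white noise. Evenly spaced samples are assumed.
import numpy as np

def allan_deviation(y, m):
    """Allan deviation of series y for an averaging window of m samples."""
    n_blocks = len(y) // m
    block_means = y[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
    diffs = np.diff(block_means)
    return np.sqrt(0.5 * np.mean(diffs ** 2))

rng = np.random.default_rng(42)
white = rng.normal(0.0, 1.0, 100_000)   # pure white noise, unit std
for m in (1, 10, 100, 1000):
    print(f"m = {m:4d}: ADEV = {allan_deviation(white, m):.4f}")
```

    For white noise the Allan deviation falls as 1/sqrt(m); a flattening or upturn at long averaging times instead signals drift, which is what makes the Allan plot useful for choosing averaging times and calibration intervals.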

  14. Principal component analysis and analysis of variance on the effects of Entellan New on the Raman spectra of fibers.

    PubMed

    Yu, Marcia M L; Sandercock, P Mark L

    2012-01-01

    During the forensic examination of textile fibers, fibers are usually mounted on glass slides for visual inspection and identification under the microscope. One method that has the capability to accurately identify single textile fibers without subsequent demounting is Raman microspectroscopy. The effect of the mountant Entellan New on the Raman spectra of fibers was investigated to determine if it is suitable for fiber analysis. Raman spectra of synthetic fibers mounted in three different ways were collected and subjected to multivariate analysis. Principal component analysis score plots revealed that while spectra from different fiber classes formed distinct groups, fibers of the same class formed a single group regardless of the mounting method. The spectra of bare fibers and those mounted in Entellan New were found to be statistically indistinguishable by analysis of variance calculations. These results demonstrate that fibers mounted in Entellan New may be identified directly by Raman microspectroscopy without further sample preparation.
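    The PCA grouping result can be mimicked on synthetic data: two "fiber classes" with distinct peaks and a small additive mounting perturbation. The spectra below are simulated, not real Raman data, and sklearn's PCA stands in for whatever chemometrics software the authors used.

```python
# Illustrative PCA on simulated "spectra": two fiber classes with a small
# mounting perturbation. Class differences dominate the first principal
# component, mirroring the qualitative finding of the paper.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 200)
class_a = np.exp(-((x - 0.3) ** 2) / 0.002)   # peak at 0.3
class_b = np.exp(-((x - 0.7) ** 2) / 0.002)   # peak at 0.7

spectra, labels = [], []
for base, label in ((class_a, 0), (class_b, 1)):
    for mount_offset in (0.0, 0.02):          # tiny "mountant" baseline shift
        for _ in range(10):
            spectra.append(base + mount_offset + rng.normal(0, 0.01, 200))
            labels.append(label)

scores = PCA(n_components=2).fit_transform(np.array(spectra))
print(scores[:3].round(2))   # PC1 separates fiber classes
```

    Because the between-class spectral difference is far larger than the mounting shift, the score plot clusters by fiber class regardless of mounting, the same pattern the paper reports for Entellan New.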

  15. Regional flood frequency analysis based on a Weibull model: Part 1. Estimation and asymptotic variances

    NASA Astrophysics Data System (ADS)

    Heo, Jun-Haeng; Boes, D. C.; Salas, J. D.

    2001-02-01

    Parameter estimation in a regional flood frequency setting, based on a Weibull model, is revisited. A two-parameter Weibull distribution at each site is assumed, with a common shape parameter across sites (rationalized by a flood-index assumption) and with independence in space and time. The estimation techniques of method of moments and method of probability-weighted moments are studied by proposing a family of estimators for each technique and deriving the asymptotic variance of each estimator. A single estimator and its asymptotic variance for each technique are then obtained by seeking to minimize the asymptotic variance over the family of estimators. These asymptotic variances are compared to the Cramér-Rao lower bound, which is known to be the asymptotic variance of the maximum likelihood estimator. A companion paper considers the application of this model and these estimation techniques to a real data set; it includes a simulation study designed to indicate the sample size required for the asymptotic results to be compatible with fixed sample sizes.
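    For a single site, method-of-moments fitting of the two-parameter Weibull reduces to solving for the shape from the coefficient of variation. The sketch below illustrates just that step on synthetic data; the paper's estimator families and asymptotic variances are more elaborate than this minimal version.

```python
# Sketch of two-parameter Weibull method-of-moments fitting for one site.
# The synthetic "flood peaks" and scale are invented for illustration.
import numpy as np
from math import gamma
from scipy.optimize import brentq

def weibull_mom(x):
    """Method-of-moments estimates of Weibull shape k and scale eta."""
    mean, var = np.mean(x), np.var(x, ddof=1)
    cv2 = var / mean ** 2
    # solve CV^2 = Gamma(1 + 2/k) / Gamma(1 + 1/k)^2 - 1 for the shape k
    f = lambda k: gamma(1 + 2 / k) / gamma(1 + 1 / k) ** 2 - 1 - cv2
    k = brentq(f, 0.1, 50.0)
    eta = mean / gamma(1 + 1 / k)
    return k, eta

rng = np.random.default_rng(0)
floods = 1000.0 * rng.weibull(2.5, 5000)   # synthetic annual peaks, true k = 2.5
k_hat, eta_hat = weibull_mom(floods)
print(f"shape ~ {k_hat:.2f}, scale ~ {eta_hat:.0f}")
```

    With the flood-index assumption, the shape equation would be solved once from pooled (scaled) data across sites rather than per site as here.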

  16. Borrowing information across genes and experiments for improved error variance estimation in microarray data analysis.

    PubMed

    Ji, Tieming; Liu, Peng; Nettleton, Dan

    2012-01-01

    Statistical inference for microarray experiments usually involves the estimation of error variance for each gene. Because the sample size available for each gene is often low, the usual unbiased estimator of the error variance can be unreliable. Shrinkage methods, including empirical Bayes approaches that borrow information across genes to produce more stable estimates, have been developed in recent years. Because the same microarray platform is often used for at least several experiments to study similar biological systems, there is an opportunity to improve variance estimation further by borrowing information not only across genes but also across experiments. We propose a lognormal model for error variances that involves random gene effects and random experiment effects. Based on the model, we develop an empirical Bayes estimator of the error variance for each combination of gene and experiment and call this estimator BAGE because information is Borrowed Across Genes and Experiments. A permutation strategy is used to make inference about the differential expression status of each gene. Simulation studies with data generated from different probability models and real microarray data show that our method outperforms existing approaches.
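    The benefit of shrinkage can be seen with a toy version that only borrows across genes; BAGE additionally borrows across experiments through random experiment effects, and estimates its weights empirically rather than fixing them as done here.

```python
# Toy shrinkage of noisy per-gene error variances toward the across-gene
# mean on the log scale. The fixed weight and simulated data are
# illustrative; they are not the paper's empirical Bayes estimator.
import numpy as np

def shrink_variances(s2, weight=0.5):
    """Blend each log-variance with the across-gene mean log-variance."""
    log_s2 = np.log(s2)
    return np.exp((1 - weight) * log_s2 + weight * log_s2.mean())

rng = np.random.default_rng(5)
true_var = 1.0
# noisy per-gene variance estimates with only 4 residual degrees of freedom
raw = true_var * rng.chisquare(4, 2000) / 4
shrunk = shrink_variances(raw)

mse_raw = np.mean((raw - true_var) ** 2)
mse_shrunk = np.mean((shrunk - true_var) ** 2)
print(f"MSE raw: {mse_raw:.3f}, MSE shrunk: {mse_shrunk:.3f}")
```

    Even this crude fixed-weight blend cuts the mean squared error substantially when per-gene degrees of freedom are low, which is the situation motivating the paper.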

  17. John A. Scigliano Interviews Allan B. Ellis.

    ERIC Educational Resources Information Center

    Scigliano, John A.

    2000-01-01

    This interview with Allan Ellis focuses on a history of computer applications in education. Highlights include work at the Harvard Graduate School of Education; the New England Education Data System; and efforts to create a computer-based distance learning and development program called ISVD (Information System for Vocational Decisions). (LRW)

  18. Heteroscedastic Tests Statistics for One-Way Analysis of Variance: The Trimmed Means and Hall's Transformation Conjunction

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2005-01-01

    To deal with nonnormal and heterogeneous data for the one-way fixed effect analysis of variance model, the authors adopted a trimmed means method in conjunction with Hall's invertible transformation into a heteroscedastic test statistic (Alexander-Govern test or Welch test). The results of simulation experiments showed that the proposed technique…

  19. Allocating Sample Sizes to Reduce Budget for Fixed-Effect 2×2 Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2016-01-01

    This article discusses the sample size requirements for the interaction, row, and column effects, respectively, by forming a linear contrast for a 2×2 factorial design for fixed-effects heterogeneous analysis of variance. The proposed method uses the Welch t test and its corresponding degrees of freedom to calculate the final sample size in a…

  20. Nonparametric One-Way Multivariate Analysis of Variance: A Computational Approach Based on the Pillai-Bartlett Trace.

    ERIC Educational Resources Information Center

    Zwick, Rebecca

    1985-01-01

    Describes how the test statistic for nonparametric one-way multivariate analysis of variance can be obtained by submitting the data to a packaged computer program. Monte Carlo evidence indicates that the nonparametric approach is advantageous under certain violations of the assumptions of multinormality and homogeneity of covariance matrices.…

  1. Bias and Precision of Measures of Association for a Fixed-Effect Multivariate Analysis of Variance Model

    ERIC Educational Resources Information Center

    Kim, Soyoung; Olejnik, Stephen

    2005-01-01

    The sampling distributions of five popular measures of association with and without two bias adjusting methods were examined for the single factor fixed-effects multivariate analysis of variance model. The number of groups, sample sizes, number of outcomes, and the strength of association were manipulated. The results indicate that all five…

  2. Approximate confidence intervals for moment-based estimators of the between-study variance in random effects meta-analysis.

    PubMed

    Jackson, Dan; Bowden, Jack; Baker, Rose

    2015-12-01

    Moment-based estimators of the between-study variance are very popular when performing random effects meta-analyses. This type of estimation has many advantages including computational and conceptual simplicity. Furthermore, by using these estimators in large samples, valid meta-analyses can be performed without the assumption that the treatment effects follow a normal distribution. Recently proposed moment-based confidence intervals for the between-study variance are exact under the random effects model but are quite elaborate. Here, we present a much simpler method for calculating approximate confidence intervals of this type. This method uses variance-stabilising transformations as its basis and can be used for a very wide variety of moment-based estimators in both the random effects meta-analysis and meta-regression models.
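The moment-based approach referred to above can be illustrated with the classical DerSimonian-Laird estimator of the between-study variance; the sketch below (with made-up study data) is a generic illustration, not the authors' confidence-interval method.

```python
import numpy as np

def dersimonian_laird_tau2(y, v):
    """Moment-based (DerSimonian-Laird) estimate of the between-study
    variance tau^2, from study effects y and within-study variances v."""
    y, w = np.asarray(y, dtype=float), 1.0 / np.asarray(v, dtype=float)
    ybar = np.sum(w * y) / np.sum(w)         # fixed-effect pooled mean
    Q = np.sum(w * (y - ybar) ** 2)          # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (Q - (len(y) - 1)) / c)  # truncated at zero

# made-up example: five studies
tau2 = dersimonian_laird_tau2([0.10, 0.30, 0.35, 0.65, 0.45],
                              [0.04, 0.03, 0.05, 0.04, 0.02])
print(f"tau^2 = {tau2:.4f}")
```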

  3. Sodium channel subconductance levels measured with a new variance-mean analysis

    PubMed Central

    1988-01-01

    The currents through single Na+ channels were recorded from dissociated cells of the flexor digitorum brevis muscle of the mouse. At 15 degrees C the prolonged bursts of Na+ channel openings produced by application of the drug DPI 201-106 had brief sojourns to subconductance levels. The subconductance events were relatively rare and brief, but could be identified using a new technique that sorts amplitude estimates based on their variance. The resulting "levels histogram" had a resolution of the conductance levels during channel activity that was superior to that of standard amplitude histograms. Cooling the preparation to 0 degrees C prolonged the subconductance events, and permitted further quantitative analysis of their amplitudes, as well as clear observations of single-channel subconductance events from untreated Na+ channels. In all cases the results were similar: a subconductance level, with an amplitude of roughly 35% of the fully open conductance and similar reversal potential, was present in both drug-treated and normal Na+ channels. Drug-treated channels spent approximately 3-6% of their total open time in the subconductance state over a range of potentials that caused the open probability to vary between 0.1 and 0.9. The summed levels histograms from many channels had a distinctive form, with broader, asymmetrical open and substate distributions compared with those of the closed state. Individual subconductance events to levels other than the most common 35% were also observed. I conclude that subconductance events are a normal subset of the open state of Na+ channels, whether or not they are drug treated. The subconductance events may represent a conformational alteration of the channel that occurs when it conducts ions. PMID:2849627
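As a rough illustration of sorting amplitude estimates by their variance, here is a minimal sketch on a synthetic single-channel record; the conductance levels, dwell times, and variance threshold are invented for the example and are not taken from the recordings.

```python
import numpy as np

rng = np.random.default_rng(9)

# synthetic record: closed (0 pA), substate (-0.5 pA), open (-1.4 pA)
levels = rng.choice([0.0, -0.5, -1.4], size=200, p=[0.5, 0.05, 0.45])
dwells = rng.integers(15, 40, size=200)          # samples spent at each level
trace = np.repeat(levels, dwells) + rng.normal(0.0, 0.08, size=dwells.sum())

# variance-mean sorting: mean and variance over short windows; windows
# containing a level transition have inflated variance and are discarded
n_win = trace.size // 10
win = trace[:n_win * 10].reshape(n_win, 10)
means, variances = win.mean(axis=1), win.var(axis=1)
quiet = means[variances < np.percentile(variances, 60)]

# "levels histogram": the retained window means cluster at the true levels,
# resolving the rare substate that a raw amplitude histogram would smear out
hist, edges = np.histogram(quiet, bins=60, range=(-1.8, 0.4))
print("windows kept:", quiet.size)
```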

  4. Variability of indoor and outdoor VOC measurements: An analysis using variance components

    PubMed Central

    Jia, Chunrong; Batterman, Stuart A.; Relyea, George E.

    2014-01-01

    This study examines concentrations of volatile organic compounds (VOCs) measured inside and outside of 162 residences in southeast Michigan, U.S.A. Nested analyses apportioned four sources of variation: city, residence, season, and measurement uncertainty. Indoor measurements were dominated by seasonal and residence effects, accounting for 50 and 31%, respectively, of the total variance. Contributions from measurement uncertainty (<20%) and city effects (<10%) were small. For outdoor measurements, season, city and measurement variation accounted for 43, 29 and 27% of variance, respectively, while residence location had negligible impact (<2%). These results show that, to obtain representative estimates of indoor concentrations, measurements in multiple seasons are required. In contrast, outdoor VOC concentrations can be characterized using multi-seasonal measurements at centralized locations. Error models showed that uncertainties at low concentrations might obscure effects of other factors. Variance component analyses can be used to interpret existing measurements, design effective exposure studies, and determine whether the instrumentation and protocols are satisfactory. PMID:21995872
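A variance-component apportionment of this kind can be sketched with a one-way random-effects ANOVA on simulated data; the layout below (residences by seasons) and all numbers are illustrative assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# simulate 40 residences x 4 seasonal measurements of a log-concentration
n_res, n_seas = 40, 4
residence = rng.normal(0.0, 1.0, size=(n_res, 1))            # between-residence sd 1.0
x = residence + rng.normal(0.0, 0.5, size=(n_res, n_seas))   # within-residence sd 0.5

# one-way random-effects ANOVA, method-of-moments variance components
grand = x.mean()
msb = n_seas * np.sum((x.mean(axis=1) - grand) ** 2) / (n_res - 1)
msw = np.sum((x - x.mean(axis=1, keepdims=True)) ** 2) / (n_res * (n_seas - 1))
var_within = msw                                   # within-residence component
var_between = max(0.0, (msb - msw) / n_seas)       # between-residence component
total = var_within + var_between
print(f"between-residence share: {var_between / total:.2f}")
print(f"within-residence share:  {var_within / total:.2f}")
```

The same method-of-moments step extends to nested designs (city within region, residence within city) by subtracting the appropriate mean squares level by level.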

  5. A Budget Analysis of the Variances of Temperature and Moisture in Precipitating Shallow Cumulus Convection

    NASA Astrophysics Data System (ADS)

    Schemann, Vera; Seifert, Axel

    2017-01-01

    Large-eddy simulations of an evolving cloud field are used to investigate the contribution of microphysical processes to the evolution of the variance of total water and liquid water potential temperature in the boundary layer. While the first hours of such simulations show a transient behaviour and have to be analyzed with caution, the final portion of the simulation provides a quasi-equilibrium situation. This allows investigation of the budgets of the variances of total water and liquid water potential temperature and quantification of the contribution of several source and sink terms. Accretion is found to act as a strong sink for the variances, while the contributions from the processes of evaporation and autoconversion are small. A simple parametrization for the sink term connected to accretion is suggested and tested with a different set of simulations.

  6. Variance estimation when using inverse probability of treatment weighting (IPTW) with survival analysis.

    PubMed

    Austin, Peter C

    2016-12-30

    Propensity score methods are used to reduce the effects of observed confounding when using observational data to estimate the effects of treatments or exposures. A popular method of using the propensity score is inverse probability of treatment weighting (IPTW). When using this method, a weight is calculated for each subject that is equal to the inverse of the probability of receiving the treatment that was actually received. These weights are then incorporated into the analyses to minimize the effects of observed confounding. Previous research has found that these methods result in unbiased estimation when estimating the effect of treatment on survival outcomes. However, conventional methods of variance estimation were shown to result in biased estimates of standard error. In this study, we conducted an extensive set of Monte Carlo simulations to examine different methods of variance estimation when using a weighted Cox proportional hazards model to estimate the effect of treatment. We considered three variance estimation methods: (i) a naïve model-based variance estimator; (ii) a robust sandwich-type variance estimator; and (iii) a bootstrap variance estimator. We considered estimation of both the average treatment effect and the average treatment effect in the treated. We found that the use of a bootstrap estimator resulted in approximately correct estimates of standard errors and confidence intervals with the correct coverage rates. The other estimators resulted in biased estimates of standard errors and confidence intervals with incorrect coverage rates. Our simulations were informed by a case study examining the effect of statin prescribing on mortality. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
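A minimal sketch of IPTW estimation with a bootstrap standard error follows; it is simplified to a difference in weighted means (not a weighted Cox model), and for brevity uses a known propensity score rather than fitting one, so it illustrates the weighting-plus-bootstrap idea only.

```python
import numpy as np

rng = np.random.default_rng(1)

# simulated observational data: x confounds treatment z and outcome y
n = 2000
x = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-0.5 * x))          # true propensity score (known here for brevity)
z = rng.binomial(1, p)
y = 1.0 * z + 0.8 * x + rng.normal(size=n)  # true average treatment effect = 1.0

def iptw_ate(y, z, p):
    w = np.where(z == 1, 1.0 / p, 1.0 / (1.0 - p))   # inverse-probability weights
    return (np.average(y[z == 1], weights=w[z == 1])
            - np.average(y[z == 0], weights=w[z == 0]))

est = iptw_ate(y, z, p)

# bootstrap the whole weighting procedure to get a standard error
boot = [iptw_ate(y[i], z[i], p[i])
        for i in (rng.integers(0, n, size=n) for _ in range(500))]
se = np.std(boot, ddof=1)
print(f"ATE estimate {est:.2f}, bootstrap SE {se:.2f}")
```

In a full analysis the propensity model would be re-estimated inside each bootstrap replicate, which is what makes the bootstrap capture the estimation uncertainty that naive variance formulas miss.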

  7. X-RAY EMISSION ANALYSIS OF PORTLAND CEMENT. REPORT 1. VARIANCES IN ANALYSIS,

    DTIC Science & Technology

    iron (Fe), aluminum (Al), magnesium (Mg), and sulfur (S) in portland cement. The factors evaluated were: instrument conditions (optimized versus...direct effect on the analysis of portland cement. Binders were found to have a direct effect on the elemental analyses, and the effect was determined

  8. Analysis of variance is easily misapplied in the analysis of randomized trials: a critique and discussion of alternative statistical approaches.

    PubMed

    Vickers, Andrew J

    2005-01-01

    Analysis of variance (ANOVA) is a statistical method that is widely used in the psychosomatic literature to analyze the results of randomized trials, yet ANOVA does not provide an estimate for the difference between groups, the key variable of interest in a randomized trial. Although the use of ANOVA is frequently justified on the grounds that a trial incorporates more than two groups, the hypothesis tested by ANOVA for these trials--"Are all groups equivalent?"--is often scientifically uninteresting. Regression methods are not only applicable to trials with many groups, but can be designed to address specific questions arising from the study design. ANOVA is also frequently used for trials with repeated measures, but the consequent reporting of "group effects," "time effects," and "time-by-group interactions" is a distraction from statistics of clinical and scientific value. Given that ANOVA is easily misapplied in the analysis of randomized trials, alternative approaches such as regression methods should be considered in preference.
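The regression alternative can be sketched in a few lines: coding treatment as a dummy variable makes the OLS slope exactly the between-group difference, the quantity ANOVA never reports. The data below are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
control = rng.normal(10.0, 2.0, size=50)
treated = rng.normal(11.5, 2.0, size=50)         # true group difference = 1.5

y = np.concatenate([control, treated])
g = np.concatenate([np.zeros(50), np.ones(50)])  # treatment dummy

# OLS of y on [1, g]: the slope IS the between-group difference
X = np.column_stack([np.ones_like(g), g])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
s2 = resid @ resid / (len(y) - 2)                # residual variance
se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
print(f"difference {beta[1]:.2f} +/- {1.96 * se:.2f}")
```

Adding baseline covariates as further columns of X turns this into the ANCOVA-style adjusted analysis the article recommends for randomized trials.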

  9. Using Robust Variance Estimation to Combine Multiple Regression Estimates with Meta-Analysis

    ERIC Educational Resources Information Center

    Williams, Ryan

    2013-01-01

    The purpose of this study was to explore the use of robust variance estimation for combining commonly specified multiple regression models and for combining sample-dependent focal slope estimates from diversely specified models. The proposed estimator obviates traditionally required information about the covariance structure of the dependent…

  10. A Bias and Variance Analysis for Multistep-Ahead Time Series Forecasting.

    PubMed

    Ben Taieb, Souhaib; Atiya, Amir F

    2016-01-01

    Multistep-ahead forecasts can either be produced recursively by iterating a one-step-ahead time series model or directly by estimating a separate model for each forecast horizon. In addition, there are other strategies; some of them combine aspects of both aforementioned concepts. In this paper, we present a comprehensive investigation into the bias and variance behavior of multistep-ahead forecasting strategies. We provide a detailed review of the different multistep-ahead strategies. Subsequently, we perform a theoretical study that derives the bias and variance for a number of forecasting strategies. Finally, we conduct a Monte Carlo experimental study that compares and evaluates the bias and variance performance of the different strategies. From the theoretical and the simulation studies, we analyze the effect of different factors, such as the forecast horizon and the time series length, on the bias and variance components, and on the different multistep-ahead strategies. Several lessons are learned, and recommendations are given concerning the advantages, disadvantages, and best conditions of use of each strategy.
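The two basic strategies can be contrasted on a simulated AR(1) series; intercepts are dropped because the series is zero-mean, and the whole setup is an illustrative assumption rather than the paper's experimental design.

```python
import numpy as np

rng = np.random.default_rng(3)

# simulate an AR(1) series: y_t = 0.8 y_{t-1} + noise
n = 500
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * y[t - 1] + rng.normal(scale=0.5)

h = 3  # forecast horizon

# recursive strategy: fit a one-step model, iterate it h times
a1 = np.polyfit(y[:-1], y[1:], 1)[0]   # slope of y_t on y_{t-1}
rec = y[-1] * a1 ** h

# direct strategy: regress y_{t+h} on y_t, forecast in one shot
ah = np.polyfit(y[:-h], y[h:], 1)[0]   # slope of y_{t+h} on y_t
direct = y[-1] * ah

print(f"recursive {rec:.3f}, direct {direct:.3f}")
```

Here the recursive forecast compounds the one-step estimation error (bias) through a1**h, while the direct regression trades that bias for higher estimation variance, which is the trade-off the paper analyzes.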

  11. Analysis of Quantitative Traits in Two Long-Term Randomly Mated Soybean Populations I. Genetic Variances

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The genetic effects of long term random mating and natural selection aided by genetic male sterility were evaluated in two soybean [Glycine max (L.) Merr.] populations: RSII and RSIII. Population means, variances, and heritabilities were estimated to determine the effects of 26 generations of random...

  12. Quantitative Genetic Analysis of Temperature Regulation in MUS MUSCULUS. I. Partitioning of Variance

    PubMed Central

    Lacy, Robert C.; Lynch, Carol Becker

    1979-01-01

    Heritabilities (from parent-offspring regression) and intraclass correlations of full sibs for a variety of traits were estimated from 225 litters of a heterogeneous stock (HS/Ibg) of laboratory mice. Initial variance partitioning suggested different adaptive functions for physiological, morphological and behavioral adjustments with respect to their thermoregulatory significance. Metabolic heat-production mechanisms appear to have reached their genetic limits, with little additive genetic variance remaining. This study provided no genetic evidence that body size has a close directional association with fitness in cold environments, since heritability estimates for weight gain and adult weight were similar and high, whether or not the animals were exposed to cold. Behavioral heat conservation mechanisms also displayed considerable amounts of genetic variability. However, due to strong evidence from numerous other studies that behavior serves an important adaptive role for temperature regulation in small mammals, we suggest that fluctuating selection pressures may have acted to maintain heritable variation in these traits. PMID:17248909

  13. Variance analysis by use of a low cost desk top calculator.

    PubMed

    González Revaldería, J; Villafruela, J J; Sabater, J; Lamas, S; Ortuño, J

    1986-01-01

    A simple program for an HP-97 desk top calculator, which can be adapted to an HP-67, is presented. This program detects the presence of an added component of variance in any series classified with a unique criterion. Each series can be formed by any number of data. The program supplies additional information about this component. A brief theoretical description and a practical example are also included.
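The underlying test for an added variance component is a one-way random-effects F ratio, which fits on a programmable calculator precisely because it needs only sums and sums of squares. A dependency-free sketch (synthetic series, illustrative values) follows:

```python
import random

random.seed(4)

def oneway_f(groups):
    """One-way ANOVA F ratio for detecting an added between-series
    variance component; series may have unequal lengths."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))

# series centred on different true means -> added component present
shifted = [[random.gauss(mu, 1.0) for _ in range(12)] for mu in (0.0, 2.0, 4.0)]
# series centred on a common mean -> no added component
flat = [[random.gauss(0.0, 1.0) for _ in range(12)] for _ in range(3)]

print(f"F with added component: {oneway_f(shifted):.1f}")
print(f"F without:              {oneway_f(flat):.1f}")
```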

  14. [The medical history of Edgar Allan Poe].

    PubMed

    Miranda C, Marcelo

    2007-09-01

    Edgar Allan Poe, one of the best American storytellers and poets, suffered an episodic behaviour disorder partially triggered by alcohol and opiate use. Much confusion still exists about the last days of his turbulent life and the cause of his death at an early age. Different etiologies have been proposed to explain his main medical problem; however, complex partial seizures triggered by alcohol, poorly recognized in the era when Poe lived, seem to be one of the most plausible hypotheses among those discussed.

  15. The Cosmology of Edgar Allan Poe

    NASA Astrophysics Data System (ADS)

    Cappi, Alberto

    2011-06-01

    Eureka is a "prose poem" published in 1848, in which Edgar Allan Poe presents his original cosmology. While starting from metaphysical assumptions, Poe develops an evolving Newtonian model of the Universe that has many, and not coincidental, analogies with modern cosmology. Poe was well informed about astronomical and physical discoveries, and he was influenced by both contemporary science and ancient ideas. For these reasons, Eureka is a unique synthesis of metaphysics, art and science.

  16. Exploring Hydrological Flow Paths in Conceptual Catchment Models using Variance-based Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Mockler, E. M.; O'Loughlin, F.; Bruen, M. P.

    2013-12-01

    Conceptual rainfall runoff (CRR) models aim to capture the dominant hydrological processes in a catchment in order to predict the flows in a river. Most flood forecasting models focus on predicting total outflows from a catchment and often perform well without the correct distribution between individual pathways. However, modelling of water flow paths within a catchment, rather than its overall response, is specifically needed to investigate the physical and chemical transport of matter through the various elements of the hydrological cycle. Focus is increasingly turning to accurately quantifying the internal movement of water within these models to investigate whether the simulated processes contributing to the total flows are realistic, in the expectation of generating more robust models. Parameter regionalisation is required if such models are to be widely used, particularly in ungauged catchments. However, most regionalisation studies to date have typically consisted of calibrations and correlations of parameters with catchment characteristics, or some variations of this. In order for a priori parameter estimation in this manner to be possible, a model must be parametrically parsimonious while still capturing the dominant processes of the catchment. The presence of parameter interactions within most CRR model structures can make parameter prediction in ungauged basins very difficult, as the functional role of the parameter within the model may not be uniquely identifiable. We use a variance-based sensitivity analysis method to investigate parameter sensitivities and interactions in the global parameter space of three CRR models, simulating a set of 30 Irish catchments within a variety of hydrological settings over a 16-year period. The exploration of sensitivities of internal flow path partitioning was a specific focus, and correlations between catchment characteristics and parameter sensitivities were also investigated to assist in evaluating model performance.
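A variance-based sensitivity analysis can be sketched with the Saltelli-style first-order estimator on a toy additive model; the model, coefficients, and sample sizes below are assumptions for illustration, not the CRR models used in the study.

```python
import numpy as np

rng = np.random.default_rng(5)

def model(x):
    # toy additive model with unequal coefficients -> unequal sensitivities
    return 4.0 * x[:, 0] + 2.0 * x[:, 1] + 1.0 * x[:, 2]

n, d = 20000, 3
A = rng.normal(size=(n, d))      # two independent sample matrices
B = rng.normal(size=(n, d))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]          # column i taken from B, the rest from A
    # Saltelli (2010) estimator of the first-order index S_i
    S.append(np.mean(yB * (model(ABi) - yA)) / var_y)

print("first-order Sobol indices:", np.round(S, 2))
```

For this linear model the exact indices are a_i^2 / sum(a_j^2), i.e. roughly 0.76, 0.19, and 0.05, so the estimates can be checked against theory; interactions would show up as total-order indices exceeding these first-order values.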

  17. Princess Marie Bonaparte, Edgar Allan Poe, and psychobiography.

    PubMed

    Warner, S L

    1991-01-01

    Princess Marie Bonaparte was a colorful yet mysterious member of Freud's inner circle of psychoanalysis. In analysis with Freud beginning in 1925 (she was then 45 years old), she became a lay analyst and writer of many papers and books. Her most ambitious task was a 700-page psychobiography of Edgar Allan Poe that was first published in French in 1933. She was fascinated by Poe's gothic stories--with the return to life of dead persons and the eerie, unexpected turns of events. Her fascination with Poe can be traced to the similarity of their early traumatic life experiences. Bonaparte had lost her mother a month after her birth. Poe's father deserted the family when Edgar was two years old, and his mother died of tuberculosis when he was three. Poe's stories helped him to accommodate to these early traumatic losses. Bonaparte vicariously shared in Poe's loss and the fantasies of the return of the deceased parent in his stories. She was sensitive and empathetic to Poe's inner world because her inner world was similar. The result of this psychological fit between Poe and Bonaparte was her psychobiography, The Life and Works of Edgar Allan Poe. It was a milestone in psychobiography but limited in its psychological scope by its strong emphasis on early childhood trauma. Nevertheless it proved Bonaparte a bona fide creative psychoanalyst and not a dilettante propped up by her friendship with Freud.

  18. Patient population management: taking the leap from variance analysis to outcomes measurement.

    PubMed

    Allen, K M

    1998-01-01

    Case managers today at BCHS have a somewhat different role than at the onset of the Collaborative Practice Model. They are seen throughout the organization as leaders and participants on cross-functional teams, systems change agents, partners integrating with quality services and utilization management, and outcomes managers. One of the major cross-functional teams is in the process of designing a Care Coordinator role. These individuals will, as one of their functions, assume responsibility for daily patient care management activities. A variance tracking program has come into the Utilization Management (UM) department as part of a software package purchased to automate UM work activities. This variance program could potentially be used by the new care coordinators as the role develops. The case managers are beginning to use Decision Support software (Transition Systems Inc.) to collect data that is based on a cost accounting system and linked to clinical events. Other clinical outcomes databases are now being used by the case managers to help with the collection and measurement of outcomes information. Hoshin planning will continue to be a framework for defining and setting the targets for clinical and financial improvements throughout the organization. Case managers will continue to be involved in many of these system-wide initiatives. In the words of Galileo, 1579: "You need to count what's countable, measure what's measurable, and what's not measurable, make measurable."

  19. Assessing land cover performance in Senegal, West Africa using 1-km integrated NDVI and local variance analysis

    USGS Publications Warehouse

    Budde, M.E.; Tappan, G.; Rowland, J.; Lewis, J.; Tieszen, L.L.

    2004-01-01

    The researchers calculated seasonal integrated normalized difference vegetation index (NDVI) for each of 7 years using a time-series of 1-km data from the Advanced Very High Resolution Radiometer (AVHRR) (1992-93, 1995) and SPOT Vegetation (1998-2001) sensors. We used a local variance technique to identify each pixel as normal or either positively or negatively anomalous when compared to its surroundings. We then summarized the number of years that a given pixel was identified as an anomaly. The resulting anomaly maps were analysed using Landsat TM imagery and extensive ground knowledge to assess the results. This technique identified anomalies that can be linked to numerous anthropogenic impacts including agricultural and urban expansion, maintenance of protected areas and increased fallow. Local variance analysis is a reliable method for assessing vegetation degradation resulting from human pressures or increased land productivity from natural resource management practices. © 2004 Published by Elsevier Ltd.
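A local variance technique of this flavour can be approximated by scoring each pixel against its neighbourhood; the sketch below uses a synthetic image and a z-score threshold, both illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(6)

# synthetic "integrated NDVI" image with one bright and one dark anomaly
img = rng.normal(0.5, 0.02, size=(60, 60))
img[30, 30] += 0.2        # positive anomaly (e.g. well-managed plot)
img[10, 45] -= 0.2        # negative anomaly (e.g. degraded patch)

def local_zscore(a, r=2):
    """Score each pixel against its (2r+1) x (2r+1) neighbourhood."""
    h, w = a.shape
    z = np.zeros_like(a)
    for i in range(h):
        for j in range(w):
            win = a[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            z[i, j] = (a[i, j] - win.mean()) / win.std()
    return z

z = local_zscore(img)
anomalous = np.abs(z) > 3.0   # positively or negatively anomalous pixels
print("pixels flagged:", int(anomalous.sum()))
```

Repeating this per year and counting how often each pixel is flagged reproduces the multi-year anomaly summary described in the abstract.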

  20. Bayesian hierarchical analysis of within-units variances in repeated measures experiments.

    PubMed

    Ten Have, T R; Chinchilli, V M

    1994-09-30

    We develop hierarchical Bayesian models for biomedical data that consist of multiple measurements on each individual under each of several conditions. The focus is on investigating differences in within-subject variation between conditions. We present both population-level and individual-level comparisons. We extend the partial likelihood models of Chinchilli et al. with a unique Bayesian hierarchical framework for variance components and associated degrees of freedom. We use the Gibbs sampler to estimate posterior marginal distributions for the parameters of the Bayesian hierarchical models. The application involves a comparison of two cholesterol analysers each applied repeatedly to a sample of subjects. Both the partial likelihood and Bayesian approaches yield similar results, although confidence limits tend to be wider under the Bayesian models.
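The Gibbs-sampling machinery can be illustrated on the simplest case, a normal model with unknown mean and variance under a reference prior; this is a toy sketch, not the hierarchical variance-components model of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
y = rng.normal(5.0, 2.0, size=200)   # data: true mean 5, true variance 4
n = len(y)

# two-block Gibbs sampler for (mu, sigma^2) under the reference prior 1/sigma^2
mu, sig2 = 0.0, 1.0
draws = []
for it in range(3000):
    # mu | sigma^2, y  ~  Normal(ybar, sigma^2 / n)
    mu = rng.normal(y.mean(), np.sqrt(sig2 / n))
    # sigma^2 | mu, y  ~  Inverse-Gamma(n/2, sum((y - mu)^2) / 2)
    sig2 = 1.0 / rng.gamma(n / 2.0, 2.0 / np.sum((y - mu) ** 2))
    if it >= 500:                     # discard burn-in
        draws.append((mu, sig2))

mus, sig2s = np.array(draws).T
print(f"posterior mean of sigma^2: {sig2s.mean():.2f}")
```

In the hierarchical setting each subject's variance gets its own full conditional of the same inverse-gamma form, with the population-level parameters sampled in additional Gibbs blocks.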

  1. View-angle-dependent AIRS Cloudiness and Radiance Variance: Analysis and Interpretation

    NASA Technical Reports Server (NTRS)

    Gong, Jie; Wu, Dong L.

    2013-01-01

    Upper tropospheric clouds play an important role in the global energy budget and hydrological cycle. Significant view-angle asymmetry has been observed in upper-level tropical clouds derived from eight years of Atmospheric Infrared Sounder (AIRS) 15 um radiances. Here, we find that the asymmetry also exists in the extra-tropics. It is larger during the day than at night, more prominent near elevated terrain, and closely associated with deep convection and wind shear. The cloud radiance variance, a proxy for cloud inhomogeneity, shows asymmetry characteristics consistent with those of the AIRS cloudiness. The leading causes of the view-dependent cloudiness asymmetry are the local time difference and small-scale organized cloud structures. The local time difference (1-1.5 hr) of upper-level (UL) clouds between the two AIRS outermost views can create part of the observed asymmetry. On the other hand, small-scale tilted and banded structures of the UL clouds can induce about half of the observed view-angle-dependent differences in the AIRS cloud radiances and their variances. This estimate is inferred from an analogous study using Microwave Humidity Sounder (MHS) radiances observed during the period when there were simultaneous measurements at two different view angles from the NOAA-18 and -19 satellites. The existence of tilted cloud structures and asymmetric 15 um and 6.7 um cloud radiances implies that cloud statistics are view-angle dependent and should be taken into account in radiative transfer calculations, measurement uncertainty evaluations, and cloud climatology investigations. In addition, the momentum forcing in the upper troposphere from tilted clouds is also likely asymmetric, which can affect atmospheric circulation anisotropically.

  2. The Effects of Violations of Data Set Assumptions When Using the Oneway, Fixed-Effects Analysis of Variance and the One Concomitant Analysis of Covariance.

    ERIC Educational Resources Information Center

    Johnson, Colleen Cook; Rakow, Ernest A.

    1994-01-01

    This research is an empirical study, through Monte Carlo simulation, of the effects of violations of the assumptions for the oneway fixed-effects analysis of variance (ANOVA) and analysis of covariance (ANCOVA). Research reaffirms findings of previous studies that suggest that ANOVA and ANCOVA be avoided when group sizes are not equal. (SLD)

  3. Comments on the statistical analysis of excess variance in the COBE differential microwave radiometer maps

    NASA Technical Reports Server (NTRS)

    Wright, E. L.; Smoot, G. F.; Kogut, A.; Hinshaw, G.; Tenorio, L.; Lineweaver, C.; Bennett, C. L.; Lubin, P. M.

    1994-01-01

    Cosmic anisotropy produces an excess variance σ²_sky in the ΔT maps produced by the Differential Microwave Radiometer (DMR) on the Cosmic Background Explorer (COBE) that is over and above the instrument noise. After smoothing to an effective resolution of 10 deg, this excess, σ_sky(10°), provides an estimate for the amplitude of the primordial density perturbation power spectrum with a cosmic uncertainty of only 12%. We employ detailed Monte Carlo techniques to express the amplitude derived from this statistic in terms of the universal root mean square (rms) quadrupole amplitude, ⟨Q²⟩^(1/2). The effects of monopole and dipole subtraction and the non-Gaussian shape of the DMR beam cause the derived ⟨Q²⟩^(1/2) to be 5%-10% larger than would be derived using simplified analytic approximations. We also investigate the properties of two other map statistics: the actual quadrupole and the Boughn-Cottingham statistic. Both the σ_sky(10°) statistic and the Boughn-Cottingham statistic are consistent with the ⟨Q²⟩^(1/2) = 17 ± 5 μK reported by Smoot et al. (1992) and Wright et al. (1992).

  4. Directional Variance Adjustment: Bias Reduction in Covariance Matrices Based on Factor Analysis with an Application to Portfolio Optimization

    PubMed Central

    Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W.; Müller, Klaus-Robert; Lemm, Steven

    2013-01-01

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation. PMID:23844016

  5. Commonality Analysis: A Method of Analyzing Unique and Common Variance Proportions.

    ERIC Educational Resources Information Center

    Kroff, Michael W.

    This paper considers the use of commonality analysis as an effective tool for analyzing relationships between variables in multiple regression or canonical correlation analysis (CCA). The merits of commonality analysis are discussed and the procedure for running commonality analysis is summarized as a four-step process. A heuristic example is…

  6. Variance Component Quantitative Trait Locus Analysis for Body Weight Traits in Purebred Korean Native Chicken

    PubMed Central

    Cahyadi, Muhammad; Park, Hee-Bok; Seo, Dong-Won; Jin, Shil; Choi, Nuri; Heo, Kang-Nyeong; Kang, Bo-Seok; Jo, Cheorun; Lee, Jun-Heon

    2016-01-01

    Quantitative trait locus (QTL) is a particular region of the genome containing one or more genes associated with economically important quantitative traits. This study was conducted to identify QTL regions for body weight and growth traits in purebred Korean native chicken (KNC). F1 samples (n = 595) were genotyped using 127 microsatellite markers and 8 single nucleotide polymorphisms that covered 2,616.1 centimorgans (cM) of map length for 26 autosomal linkage groups. Body weight traits were measured every 2 weeks from hatch to 20 weeks of age. Weight of half carcass was also collected together with growth rate. A multipoint variance component linkage approach was used to identify QTLs for the body weight traits. Two significant QTLs for growth were identified on chicken chromosome 3 (GGA3) for growth from 16 to 18 weeks (logarithm of the odds [LOD] = 3.24, nominal p value = 0.0001) and GGA4 for growth from 6 to 8 weeks (LOD = 2.88, nominal p value = 0.0003). Additionally, one significant QTL and three suggestive QTLs were detected for body weight traits in KNC; a significant QTL for body weight at 4 weeks (LOD = 2.52, nominal p value = 0.0007) and a suggestive QTL for 8 weeks (LOD = 1.96, nominal p value = 0.0027) were detected on GGA4; QTLs were also detected for two different body weight traits: body weight at 16 weeks on GGA3 and body weight at 18 weeks on GGA19. Additionally, two suggestive QTLs for carcass weight were detected at 0 and 70 cM on GGA19. In conclusion, the current study identified several significant and suggestive QTLs that affect growth related traits in a unique resource pedigree in purebred KNC. This information will contribute to improving the body weight traits in native chicken breeds, especially for the Asian native chicken breeds. PMID:26732327

  7. SU-E-T-41: Analysis of GI Dose Variability Due to Intrafraction Setup Variance

    SciTech Connect

    Phillips, J; Wolfgang, J

    2014-06-01

    Purpose: Proton SBRT (stereotactic body radiation therapy) can be an effective modality for treatment of gastrointestinal tumors, but is limited in practice due to sensitivity with respect to variation in the RPL (radiological path length). Small, intrafractional shifts in patient anatomy can lead to significant changes in the dose distribution. This study describes a tool designed to visualize uncertainties in radiological depth in patient CTs and aid in treatment plan design. Methods: This project utilizes the Shadie toolkit, a GPU-based framework that allows for real-time interactive calculations for volume visualization. Current SBRT simulation practice consists of a serial CT acquisition for the assessment of inter- and intra-fractional motion utilizing patient-specific immobilization systems. Shadie was used to visualize potential uncertainties, including RPL variance and changes in gastric content. Input for this procedure consisted of two patient CT sets, contours of the desired organ, and a pre-calculated dose. In this study, we performed rigid registrations between sets of 4DCTs obtained from a patient with varying setup conditions. Custom visualizations are written by the user in Shadie, permitting one to create color-coded displays derived from a calculation along each ray. Results: Serial CT data acquired on subsequent days were analyzed for variation in RPL and gastric content. Specific shaders were created to visualize clinically relevant features, including the RPL integrated up to organs of interest. Using pre-calculated dose distributions and utilizing segmentation masks as additional input allowed us to further refine the display output from Shadie and create tools suitable for clinical usage. Conclusion: We have demonstrated a method to visualize potential uncertainty for intrafractional proton radiotherapy. We believe this software could prove a useful tool to guide those looking to design treatment plans that are least sensitive to intrafractional setup variation.

  8. An introduction to analysis of variance (ANOVA) with special reference to data from clinical experiments in optometry.

    PubMed

    Armstrong, R A; Slade, S V; Eperjesi, F

    2000-05-01

    This article is aimed primarily at eye care practitioners who are undertaking advanced clinical research, and who wish to apply analysis of variance (ANOVA) to their data. ANOVA is a data analysis method of great utility and flexibility. This article describes why and how ANOVA was developed, the basic logic which underlies the method and the assumptions that the method makes for it to be validly applied to data from clinical experiments in optometry. The application of the method to the analysis of a simple data set is then described. In addition, the methods available for making planned comparisons between treatment means and for making post hoc tests are evaluated. The problem of determining the number of replicates or patients required in a given experimental situation is also discussed.
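The logic the article describes — partitioning total variation into between-treatment and within-treatment components and comparing the two — can be sketched in a few lines. The group data below are hypothetical, not from any optometry experiment:

```python
# A minimal one-way ANOVA computed from first principles.
def one_way_anova(*groups):
    k = len(groups)                          # number of treatment groups
    n = sum(len(g) for g in groups)          # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # between-group and within-group sums of squares
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    msb, msw = ssb / (k - 1), ssw / (n - k)  # mean squares
    return msb / msw                         # F statistic with (k-1, n-k) df

F = one_way_anova([1.2, 1.4, 1.1, 1.3],
                  [1.6, 1.8, 1.7, 1.9],
                  [1.1, 1.0, 1.2, 1.1])
print(round(F, 2))   # → 34.75
```

A large F relative to the F distribution with (k-1, n-k) degrees of freedom leads to rejecting the hypothesis of equal treatment means.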

  9. Monte Carlo probabilistic sensitivity analysis for patient level simulation models: efficient estimation of mean and variance using ANOVA.

    PubMed

    O'Hagan, Anthony; Stevenson, Matt; Madan, Jason

    2007-10-01

    Probabilistic sensitivity analysis (PSA) is required to account for uncertainty in cost-effectiveness calculations arising from health economic models. The simplest way to perform PSA in practice is by Monte Carlo methods, which involves running the model many times using randomly sampled values of the model inputs. However, this can be impractical when the economic model takes appreciable amounts of time to run. This situation arises, in particular, for patient-level simulation models (also known as micro-simulation or individual-level simulation models), where a single run of the model simulates the health care of many thousands of individual patients. The large number of patients required in each run to achieve accurate estimation of cost-effectiveness means that only a relatively small number of runs is possible. For this reason, it is often said that PSA is not practical for patient-level models. We develop a way to reduce the computational burden of Monte Carlo PSA for patient-level models, based on the algebra of analysis of variance. Methods are presented to estimate the mean and variance of the model output, with formulae for determining optimal sample sizes. The methods are simple to apply and will typically reduce the computational demand very substantially.
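The nested structure described above — an outer Monte Carlo loop over sampled model inputs and an inner loop simulating many patients — can be sketched with one-way random-effects ANOVA algebra. The toy cost model, parameter values and sample sizes are illustrative assumptions, not from the article:

```python
import random
from statistics import mean, pvariance

random.seed(1)
N_RUNS, N_PATIENTS = 50, 200    # outer input draws, inner simulated patients

run_means, within_vars = [], []
for _ in range(N_RUNS):
    theta = random.gauss(10.0, 2.0)   # sampled model input (toy cost driver)
    # toy patient-level simulation: cost per patient scatters around theta
    costs = [random.gauss(theta, 5.0) for _ in range(N_PATIENTS)]
    run_means.append(mean(costs))
    within_vars.append(pvariance(costs))

between = pvariance(run_means)    # ≈ Var(theta) + within / N_PATIENTS
within = mean(within_vars)        # patient-level simulation noise
# estimated variance of mean cost attributable to input uncertainty alone
input_variance = between - within / N_PATIENTS
print(round(input_variance, 2))
```

Subtracting the within-run Monte Carlo noise from the between-run variance is what lets a modest number of runs isolate the input-uncertainty component.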

  10. Global sensitivity analysis of a SWAT model: comparison of the variance-based and moment-independent approaches

    NASA Astrophysics Data System (ADS)

    Khorashadi Zadeh, Farkhondeh; Sarrazin, Fanny; Nossent, Jiri; Pianosi, Francesca; van Griensven, Ann; Wagener, Thorsten; Bauwens, Willy

    2015-04-01

    Uncertainty in parameters is a well-known source of model output uncertainty which undermines model reliability and restricts model application. A large number of parameters, in addition to the lack of data, limits calibration efficiency and also leads to higher parameter uncertainty. Global Sensitivity Analysis (GSA) is a set of mathematical techniques that provides quantitative information about the contribution of different sources of uncertainties (e.g. model parameters) to the model output uncertainty. Therefore, identifying influential and non-influential parameters using GSA can improve model calibration efficiency and consequently reduce model uncertainty. In this paper, moment-independent density-based GSA methods that consider the entire model output distribution - i.e. Probability Density Function (PDF) or Cumulative Distribution Function (CDF) - are compared with the widely-used variance-based method and their differences are discussed. Moreover, the effect of model output definition on parameter ranking results is investigated using Nash-Sutcliffe Efficiency (NSE) and model bias as example outputs. To this end, 26 flow parameters of a SWAT model of the River Zenne (Belgium) are analysed. In order to assess the robustness of the sensitivity indices, bootstrapping is applied and 95% confidence intervals are estimated. The results show that, although the variance-based method is easy to implement and interpret, it provides wider confidence intervals, especially for non-influential parameters, compared to the density-based methods. Therefore, density-based methods may be a useful complement to variance-based methods for identifying non-influential parameters.
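As a minimal illustration of the variance-based approach compared in the paper, the first-order index S_i = Var(E[Y|X_i]) / Var(Y) can be estimated by conditioning on bins of each input. The two-parameter linear toy model below stands in for the 26-parameter SWAT model; coefficients, sample size and bin count are illustrative:

```python
import random

random.seed(0)
N, BINS = 50_000, 50

def var(v):
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / len(v)

x1 = [random.random() for _ in range(N)]
x2 = [random.random() for _ in range(N)]
y = [4 * a + b for a, b in zip(x1, x2)]      # x1 contributes most variance

def first_order(x, y, bins=BINS):
    # S_i = Var(E[Y | X_i]) / Var(Y), with the conditional mean
    # approximated by averaging y within equal-width bins of x
    groups = [[] for _ in range(bins)]
    for xi, yi in zip(x, y):
        groups[min(int(xi * bins), bins - 1)].append(yi)
    cond_means = [sum(g) / len(g) for g in groups if g]
    return var(cond_means) / var(y)

s1, s2 = first_order(x1, y), first_order(x2, y)
print(round(s1, 2), round(s2, 2))
```

For this model the analytic indices are 16/17 ≈ 0.94 and 1/17 ≈ 0.06, so x2 would be flagged as non-influential.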

  11. How to detect Edgar Allan Poe's 'purloined letter,' or cross-correlation algorithms in digitized video images for object identification, movement evaluation, and deformation analysis

    NASA Astrophysics Data System (ADS)

    Dost, Michael; Vogel, Dietmar; Winkler, Thomas; Vogel, Juergen; Erb, Rolf; Kieselstein, Eva; Michel, Bernd

    2003-07-01

    Cross correlation analysis of digitised grey scale patterns is based on at least two images which are compared to each other. Comparison is performed by means of a two-dimensional cross correlation algorithm applied to a set of local intensity submatrices taken from the pattern matrices of the reference and comparison images in the surrounding of predefined points of interest. Established as an outstanding NDE tool for 2D and 3D deformation field analysis with a focus on micro- and nanoscale applications (microDAC and nanoDAC), the method exhibits additional potential for far wider applications, including some that could advance homeland security. Because the cross correlation algorithm seems, in some respects, to imitate some of the "smart" properties of human vision, this "field-of-surface-related" method can provide alternative solutions to some object and process recognition problems that are difficult to solve with more classic "object-related" image processing methods. Detecting differences between two or more images using cross correlation techniques can open new and unusual applications in the identification and detection of hidden objects or objects of unknown origin, in movement or displacement field analysis, and in some aspects of biometric analysis that could be of special interest for homeland security.
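The submatrix-matching step described above can be sketched as a search for the displacement that maximises the normalized cross correlation between a reference patch and candidate patches in the comparison image. The toy images, patch size and shift are illustrative, not the microDAC/nanoDAC implementation:

```python
import random

random.seed(3)
H = W = 12
ref = [[random.random() for _ in range(W)] for _ in range(H)]
DY, DX = 2, 1                       # the known shift we try to recover
cmp_img = [[ref[(r - DY) % H][(c - DX) % W] for c in range(W)] for r in range(H)]

def ncc(a, b):
    # normalized cross correlation of two equal-size patches
    xs = [v for row in a for v in row]
    ys = [v for row in b for v in row]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

def patch(img, r, c, size):
    return [row[c:c + size] for row in img[r:r + size]]

SIZE = 6                            # submatrix around the point of interest
template = patch(ref, 0, 0, SIZE)
best = max(((ncc(template, patch(cmp_img, r, c, SIZE)), (r, c))
            for r in range(H - SIZE) for c in range(W - SIZE)),
           key=lambda t: t[0])
print(best[1])                      # → (2, 1)
```

Repeating this search around many points of interest yields the displacement field used for deformation analysis.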

  12. Radio-Echo Sounding in the Allan Hills, Antarctica, in Support of the Meteorite Field Program.

    DTIC Science & Technology

    1980-05-01

    ice. The results also revealed internal layering within the snow on Ross Island and in the snow filling an ice depression west of Allan Nunatak ... Radio-echo sounding also gave the depth to bedrock near the west side of Allan Nunatak. The greatest ice depth measured was 310 m ... the Institute of Polar Research had installed across the blue ice surface extending westward from Allan Nunatak. Allan Nunatak is located in the Allan

  13. Analysis of Molecular Diffusion by First-Passage Time Variance Identifies the Size of Confinement Zones

    PubMed Central

    Rajani, Vishaal; Carrero, Gustavo; Golan, David E.; de Vries, Gerda; Cairo, Christopher W.

    2011-01-01

    The diffusion of receptors within the two-dimensional environment of the plasma membrane is a complex process. Although certain components diffuse according to a random walk model (Brownian diffusion), an overwhelming body of work has found that membrane diffusion is nonideal (anomalous diffusion). One of the most powerful methods for studying membrane diffusion is single particle tracking (SPT), which records the trajectory of a label attached to a membrane component of interest. One of the outstanding problems in SPT is the analysis of data to identify the presence of heterogeneity. We have adapted a first-passage time (FPT) algorithm, originally developed for the interpretation of animal movement, for the analysis of SPT data. We discuss the general application of the FPT analysis to molecular diffusion, and use simulations to test the method against data containing known regions of confinement. We conclude that FPT can be used to identify the presence and size of confinement within trajectories of the receptor LFA-1, and these results are consistent with previous reports on the size of LFA-1 clusters. The analysis of trajectory data for cell surface receptors by FPT provides a robust method to determine the presence and size of confined regions of diffusion. PMID:21402028
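A minimal sketch of the first-passage time statistic, assuming the common definition (the number of steps until the trajectory first leaves a circle of radius r centred on each starting point); the trajectory here is a simulated random walk, not SPT data:

```python
import math
import random
from statistics import mean, variance

random.seed(7)
# simulated 2D trajectory: 500 Gaussian steps from the origin
xs, ys = [0.0], [0.0]
for _ in range(500):
    xs.append(xs[-1] + random.gauss(0, 1))
    ys.append(ys[-1] + random.gauss(0, 1))

def first_passage_times(xs, ys, r):
    # for each start index i, count steps until the path first exits
    # the circle of radius r centred at point i
    fpts = []
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            if math.hypot(xs[j] - xs[i], ys[j] - ys[i]) > r:
                fpts.append(j - i)
                break
    return fpts

fpts = first_passage_times(xs, ys, r=3.0)
# a peak in FPT variance as r is scanned signals confined stretches
print(len(fpts), round(mean(fpts), 1), round(variance(fpts), 1))
```

Scanning r and locating the radius at which the variance of the FPTs peaks is the step the authors use to estimate the size of confinement zones.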

  14. Biomarker profiling and reproducibility study of MALDI-MS measurements of Escherichia coli by analysis of variance-principal component analysis.

    PubMed

    Chen, Ping; Lu, Yao; Harrington, Peter B

    2008-03-01

    Matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) has proved useful for the characterization of bacteria and the detection of biomarkers. A key challenge for MALDI-MS measurements of bacteria is overcoming the relatively large variability in peak intensities. A soft tool, combining analysis of variance and principal component analysis (ANOVA-PCA) (Harrington, P. D.; Vieira, N. E.; Chen, P.; Espinoza, J.; Nien, J. K.; Romero, R.; Yergey, A. L. Chemom. Intell. Lab. Syst. 2006, 82, 283-293. Harrington, P. D.; Vieira, N. E.; Espinoza, J.; Nien, J. K.; Romero, R.; Yergey, A. L. Anal. Chim. Acta. 2005, 544, 118-127), was applied to investigate the effects of the experimental factors associated with MALDI-MS studies of microorganisms. The variance of the measurements was partitioned with ANOVA, and the variance of target factors combined with the residual error was subjected to PCA to provide an easy-to-understand statistical test. The statistical significance of these factors can be visualized with 95% Hotelling T2 confidence intervals. ANOVA-PCA facilitates the detection of biomarkers in that it can remove from the measurements the variance corresponding to other experimental factors that might be mistaken for a biomarker. Four strains of Escherichia coli at four different growth ages were used for the study of reproducibility of MALDI-MS measurements. ANOVA-PCA was used to disclose potential biomarker proteins associated with different growth stages.

  15. Uranium series dating of Allan Hills ice

    NASA Technical Reports Server (NTRS)

    Fireman, E. L.

    1986-01-01

    Uranium-238 decay series nuclides dissolved in Antarctic ice samples were measured in areas of both high and low concentrations of volcanic glass shards. Ice from the Allan Hills site (high shard content) had high Ra-226, Th-230 and U-234 activities but similarly low U-238 activities in comparison with Antarctic ice samples without shards. The Ra-226, Th-230 and U-234 excesses were found to be proportional to the shard content, while the U-238 decay series results were consistent with the assumption that alpha decay products recoiled into the ice from the shards. Through this method of uranium series dating, it was learned that the Allan Hills Cul de Sac ice is approximately 325,000 years old.

  16. Variance components, heritability and correlation analysis of anther and ovary size during the floral development of bread wheat.

    PubMed

    Guo, Zifeng; Chen, Dijun; Schnurbusch, Thorsten

    2015-06-01

    Anther and ovary development play an important role in grain setting, a crucial factor determining wheat (Triticum aestivum L.) yield. One aim of this study was to determine the heritability of anther and ovary size at different positions within a spikelet at seven floral developmental stages and conduct a variance components analysis. Relationships between anther and ovary size and other traits were also assessed. The thirty central European winter wheat genotypes used in this study were based on reduced height (Rht) and photoperiod sensitivity (Ppd) genes with variable genetic backgrounds. Identical experimental designs were conducted in a greenhouse and field simultaneously. Heritability of anther and ovary size indicated strong genetic control. Variance components analysis revealed that anther and ovary sizes of floret 3 (i.e. F3, the third floret from the spikelet base) and floret 4 (F4) were more sensitive to the environment compared with those in floret 1 (F1). Good correlations were found between spike dry weight and anther and ovary size in both greenhouse and field, suggesting that anther and ovary size are good predictors of each other, as well as spike dry weight in both conditions. Relationships between spike dry weight and anther and ovary size at F3/4 positions were stronger than at F1, suggesting that F3/4 anther and ovary size are better predictors of spike dry weight. Generally, ovary size showed a closer relationship with spike dry weight than anther size, suggesting that ovary size is a more reliable predictor of spike dry weight.
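As a sketch of how a variance-components analysis connects to heritability, broad-sense heritability can be estimated from a one-way genotype ANOVA as H² = V_G / (V_G + V_E), where V_G is derived from the between-genotype mean square. The replicate data below are illustrative, not the wheat measurements:

```python
from statistics import mean

# replicate measurements per genotype (e.g. anther length in mm; toy data)
genotypes = [
    [1.9, 2.1, 2.0],
    [2.6, 2.4, 2.5],
    [3.1, 2.9, 3.0],
]
k = len(genotypes)                 # number of genotypes
r = len(genotypes[0])              # replicates per genotype
grand = mean(v for g in genotypes for v in g)

msb = r * sum((mean(g) - grand) ** 2 for g in genotypes) / (k - 1)
msw = sum((v - mean(g)) ** 2 for g in genotypes for v in g) / (k * (r - 1))

v_g = (msb - msw) / r              # genotypic variance component
v_e = msw                          # residual (environmental) variance
h2 = v_g / (v_g + v_e)             # broad-sense heritability
print(round(h2, 3))   # → 0.961
```

A value near 1 indicates that most of the observed variation is genetic, i.e. the trait is under strong genetic control.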

  17. Analysis of NDVI variance across landscapes and seasons allows assessment of degradation and resilience to shocks in Mediterranean dry ecosystems

    NASA Astrophysics Data System (ADS)

    liniger, hanspeter; jucker riva, matteo; schwilch, gudrun

    2016-04-01

    Mapping and assessment of desertification is a primary basis for effective management of dryland ecosystems. Vegetation cover and biomass density are key elements for the ecological functioning of dry ecosystems, and at the same time an effective indicator of desertification, land degradation and sustainable land management. The Normalized Difference Vegetation Index (NDVI) is widely used to estimate vegetation density and cover. However, the reflectance of vegetation, and thus the NDVI values, is influenced by several factors such as type of canopy, type of land use and seasonality. For example, low NDVI values could be associated with a degraded forest, a healthy forest under dry climatic conditions, an area used as pasture, or an area managed to reduce the fuel load. We propose a simple method to analyse the variance of the NDVI signal considering the main factors that shape the vegetation. This variance analysis enables us to detect and categorize degradation much more precisely than simple NDVI analysis. The methodology comprises identifying homogeneous landscape areas in terms of aspect, slope, land use and disturbance regime (if relevant). Secondly, the NDVI is calculated from Landsat multispectral images and the vegetation potential for each landscape is determined from the 90th percentile (the highest 10% of values). Thirdly, the difference between the NDVI value of each pixel and the potential is used to establish degradation categories. Through this methodology, we are able to identify realistic objectives for restoration, allowing a targeted choice of management options for degraded areas. For example, afforestation would only be done in areas that show potential for forest growth. Moreover, we can measure the effectiveness of management practices in terms of vegetation growth across different landscapes and conditions. 
Additionally, the same methodology can be applied to a time series of multispectral images, allowing detection and quantification of vegetation change over time.
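Steps two and three of the methodology above can be sketched as follows; the NDVI values, the simple nearest-rank percentile estimator and the category thresholds are illustrative assumptions:

```python
def percentile(values, q):
    # simple nearest-rank percentile (an illustrative estimator)
    s = sorted(values)
    return s[min(int(q / 100 * len(s)), len(s) - 1)]

def degradation_categories(ndvi_pixels, thresholds=(0.1, 0.25)):
    potential = percentile(ndvi_pixels, 90)   # proxy for the "highest 10%"
    cats = []
    for v in ndvi_pixels:
        gap = potential - v                   # shortfall from the potential
        cats.append(0 if gap <= thresholds[0] else
                    1 if gap <= thresholds[1] else 2)
    return potential, cats

# NDVI pixels of one homogeneous landscape unit (illustrative values)
ndvi = [0.72, 0.70, 0.65, 0.55, 0.40, 0.68, 0.30, 0.71]
pot, cats = degradation_categories(ndvi)
print(pot, cats)   # → 0.72 [0, 0, 0, 1, 2, 0, 2, 0]
```

Pixels far below the landscape's own potential (category 2 here) are the candidates for restoration, while the potential itself sets a realistic target.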

  18. Mean-variance portfolio analysis data for optimizing community-based photovoltaic investment.

    PubMed

    Shakouri, Mahmoud; Lee, Hyun Woo

    2016-03-01

    The amount of electricity generated by Photovoltaic (PV) systems is affected by factors such as shading, building orientation and roof slope. To increase electricity generation and reduce volatility in the generation of PV systems, a portfolio of PV systems can be made which takes advantage of the potential synergy among neighboring buildings. This paper contains data supporting the research article entitled: PACPIM: new decision-support model of optimized portfolio analysis for community-based photovoltaic investment [1]. We present a set of data relating to physical properties of 24 houses in Oregon, USA, along with simulated hourly electricity data for the installed PV systems. The developed Matlab code to construct optimized portfolios is also provided in Supplementary materials. The application of these files can be generalized to a variety of communities interested in investing in PV systems.
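The mean-variance quantities underlying such a portfolio analysis reduce to a weighted mean and a weighted covariance sum; the generation data and weights below are illustrative, not from the 24-house dataset:

```python
from statistics import mean

# hourly generation (kWh) of three hypothetical PV systems
gen = [
    [1.0, 2.0, 3.0, 2.0],
    [1.2, 1.8, 2.9, 2.1],
    [0.5, 1.0, 1.4, 1.1],
]
w = [0.5, 0.3, 0.2]               # portfolio weights, summing to 1

def cov(a, b):
    # population covariance of two generation series
    ma, mb = mean(a), mean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

port_mean = sum(wi * mean(g) for wi, g in zip(w, gen))
port_var = sum(wi * wj * cov(gi, gj)
               for wi, gi in zip(w, gen)
               for wj, gj in zip(w, gen))
print(round(port_mean, 3), round(port_var, 4))
```

Because imperfectly correlated systems partially cancel each other's fluctuations, the portfolio variance is what an optimizer would minimize for a given expected generation.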

  19. Mean-variance portfolio analysis data for optimizing community-based photovoltaic investment

    PubMed Central

    Shakouri, Mahmoud; Lee, Hyun Woo

    2016-01-01

    The amount of electricity generated by Photovoltaic (PV) systems is affected by factors such as shading, building orientation and roof slope. To increase electricity generation and reduce volatility in the generation of PV systems, a portfolio of PV systems can be made which takes advantage of the potential synergy among neighboring buildings. This paper contains data supporting the research article entitled: PACPIM: new decision-support model of optimized portfolio analysis for community-based photovoltaic investment [1]. We present a set of data relating to physical properties of 24 houses in Oregon, USA, along with simulated hourly electricity data for the installed PV systems. The developed Matlab code to construct optimized portfolios is also provided in Supplementary materials. The application of these files can be generalized to a variety of communities interested in investing in PV systems. PMID:26937458

  20. Exploring Omics data from designed experiments using analysis of variance multiblock Orthogonal Partial Least Squares.

    PubMed

    Boccard, Julien; Rudaz, Serge

    2016-05-12

    Many experimental factors may have an impact on chemical or biological systems. A thorough investigation of the potential effects and interactions between the factors is made possible by rationally planning the trials using systematic procedures, i.e. design of experiments. However, assessing factors' influences often remains a challenging task when dealing with hundreds to thousands of correlated variables, whereas only a limited number of samples is available. In that context, most of the existing strategies involve the ANOVA-based partitioning of sources of variation and the separate analysis of ANOVA submatrices using multivariate methods, to account for both the intrinsic characteristics of the data and the study design. However, these approaches lack the ability to summarise the data using a single model and remain somewhat limited for detecting and interpreting subtle perturbations hidden in complex Omics datasets. In the present work, a supervised multiblock algorithm based on the Orthogonal Partial Least Squares (OPLS) framework is proposed for the joint analysis of ANOVA submatrices. This strategy has several advantages: (i) the evaluation of a unique multiblock model accounting for all sources of variation; (ii) the computation of a robust estimator (goodness of fit) for assessing the ANOVA decomposition reliability; (iii) the investigation of an effect-to-residuals ratio to quickly evaluate the relative importance of each effect and (iv) an easy interpretation of the model with appropriate outputs. Case studies from metabolomics and transcriptomics, highlighting the ability of the method to handle Omics data obtained from fixed-effects full factorial designs, are proposed for illustration purposes. Signal variations are easily related to main effects or interaction terms, while relevant biochemical information can be derived from the models.

  1. Comparing tongue shapes from ultrasound imaging using smoothing spline analysis of variance.

    PubMed

    Davidson, Lisa

    2006-07-01

    Ultrasound imaging of the tongue is increasingly common in speech production research. However, there has been little standardization regarding the quantification and statistical analysis of ultrasound data. In linguistic studies, researchers may want to determine whether the tongue shape for an articulation under two different conditions (e.g., consonants in word-final versus word-medial position) is the same or different. This paper demonstrates how the smoothing spline ANOVA (SS ANOVA) can be applied to the comparison of tongue curves [Gu, Smoothing Spline ANOVA Models (Springer, New York, 2002)]. The SS ANOVA is a technique for determining whether or not there are significant differences between the smoothing splines that are the best fits for two data sets being compared. If the interaction term of the SS ANOVA model is statistically significant, then the groups have different shapes. Since the interaction may be significant even if only a small section of the curves is different (i.e., the tongue root is the same, but the tip of one group is raised), Bayesian confidence intervals are used to determine which sections of the curves are statistically different. SS ANOVAs are illustrated with some data comparing obstruents produced in word-final and word-medial coda position.

  2. 32. SCIENTISTS ALLAN COX (SEATED), RICHARD DOELL, AND BRENT DALRYMPLE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    32. SCIENTISTS ALLAN COX (SEATED), RICHARD DOELL, AND BRENT DALRYMPLE AT CONTROL PANEL, ABOUT 1965. - U.S. Geological Survey, Rock Magnetics Laboratory, 345 Middlefield Road, Menlo Park, San Mateo County, CA

  3. A Variance Decomposition Approach to Uncertainty Quantification and Sensitivity Analysis of the J&E Model

    PubMed Central

    Moradi, Ali; Tootkaboni, Mazdak; Pennell, Kelly G.

    2015-01-01

    The Johnson and Ettinger (J&E) model is the most widely used vapor intrusion model in the United States. It is routinely used as part of hazardous waste site assessments to evaluate the potential for vapor intrusion exposure risks. This study incorporates mathematical approaches that allow sensitivity and uncertainty of the J&E model to be evaluated. In addition to performing Monte Carlo simulations to examine the uncertainty in the J&E model output, a powerful global sensitivity analysis technique based on Sobol indices is used to evaluate J&E model sensitivity to variations in the input parameters. The results suggest that the J&E model is most sensitive to the building air exchange rate, regardless of soil type and source depth. Building air exchange rate is not routinely measured during vapor intrusion investigations, but clearly improved estimates and/or measurements of the air exchange rate would lead to improved model predictions. It is also found that the J&E model is more sensitive to effective diffusivity than to effective permeability. Field measurements of effective diffusivity are not commonly collected during vapor intrusion investigations; however, consideration of this parameter warrants additional attention. Finally, the effects of input uncertainties on model predictions for different scenarios (e.g. sandy soil as compared to clayey soil, and “shallow” sources as compared to “deep” sources) are evaluated. Our results not only identify the range of variability to be expected depending on the scenario at hand, but also mark the important cases where special care is needed when estimating the input parameters to which the J&E model is most sensitive. PMID:25947051

  4. WE-D-BRE-07: Variance-Based Sensitivity Analysis to Quantify the Impact of Biological Uncertainties in Particle Therapy

    SciTech Connect

    Kamp, F.; Brueningk, S.C.; Wilkens, J.J.

    2014-06-15

    Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation and on the dose per fraction. The needed biological parameters as well as their dependency on ion species and ion energy typically are subject to large (relative) uncertainties of up to 20–40% or even more. Therefore it is necessary to estimate the resulting uncertainties in e.g. RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10^4 to 10^6 times, each run with a different set of input parameters, randomly varied according to their assigned distribution. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, the only influential part) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps, showing dependencies of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations which result in extreme deviations of the result and the input parameter for which an uncertainty reduction is the most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact. The method is very flexible, model independent, and enables a broad assessment of uncertainties.

  5. Analysis of degree of nonlinearity and stochastic nature of HRV signal during meditation using delay vector variance method.

    PubMed

    Reddy, L Ram Gopal; Kuntamalla, Srinivas

    2011-01-01

    Heart rate variability analysis is fast gaining acceptance as a potential non-invasive means of autonomic nervous system assessment in research as well as clinical domains. In this study, a new nonlinear analysis method is used to detect the degree of nonlinearity and stochastic nature of heart rate variability signals during two forms of meditation (Chi and Kundalini). The data obtained from an online and widely used public database (i.e., the MIT/BIH physionet database) is used in this study. The method used is the delay vector variance (DVV) method, which is a unified method for detecting the presence of determinism and nonlinearity in a time series and is based upon the examination of local predictability of a signal. From the results it is clear that there is a significant change in the nonlinearity and stochastic nature of the signal before and during the meditation (p < 0.01). During Chi meditation there is an increase in the stochastic nature and a decrease in the nonlinear nature of the signal. There is a significant decrease in the degree of nonlinearity and stochastic nature during Kundalini meditation.

  6. Comparison of Two Meta-Analysis Methods: Inverse-Variance-Weighted Average and Weighted Sum of Z-Scores

    PubMed Central

    Lee, Cue Hyunkyu; Cook, Seungho; Lee, Ji Sung

    2016-01-01

    The meta-analysis has become a widely used tool for many applications in bioinformatics, including genome-wide association studies. A commonly used approach for meta-analysis is the fixed effects model approach, for which there are two popular methods: the inverse variance-weighted average method and the weighted sum of z-scores method. Although previous studies have shown that the two methods perform similarly, their characteristics and their relationship have not been thoroughly investigated. In this paper, we investigate the optimal characteristics of the two methods and show the connection between the two methods. We demonstrate that each method is optimized for a distinct goal, which gives us insight into the optimal weights for the weighted sum of z-scores method. We examine the connection between the two methods both analytically and empirically and show that their resulting statistics become equivalent under certain assumptions. Finally, we apply both methods to the Wellcome Trust Case Control Consortium data and demonstrate that the two methods can give distinct results in certain study designs. PMID:28154508
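The two fixed-effects combiners compared in the paper can be written in a few lines. The study effects, standard errors and sample sizes below are illustrative, and sqrt(n) weights are one common choice for the z-score method:

```python
import math

effects = [0.30, 0.45, 0.25]    # per-study effect estimates (hypothetical)
ses     = [0.10, 0.20, 0.15]    # their standard errors
ns      = [400, 100, 180]       # study sample sizes

# inverse-variance-weighted average: weights are 1 / SE^2
w = [1 / se ** 2 for se in ses]
beta = sum(wi * b for wi, b in zip(w, effects)) / sum(w)
z_iv = beta / math.sqrt(1 / sum(w))

# weighted sum of z-scores, here with sqrt(n) weights
zs = [b / se for b, se in zip(effects, ses)]
wz = [math.sqrt(n) for n in ns]
z_ws = sum(wi * zi for wi, zi in zip(wz, zs)) / math.sqrt(sum(wi ** 2 for wi in wz))

print(round(z_iv, 2), round(z_ws, 2))   # → 4.02 4.02
```

When standard errors scale as 1/sqrt(n), the two weighting schemes coincide, which is one route to the equivalence the authors discuss; with other designs the two statistics can diverge.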

  7. Meta-analysis of variance: an illustration comparing the effects of two dietary interventions on variability in weight.

    PubMed

    Senior, Alistair M; Gosby, Alison K; Lu, Jing; Simpson, Stephen J; Raubenheimer, David

    2016-01-01

    Meta-analysis, which drives evidence-based practice, typically focuses on the average response of subjects to a treatment. For instance in nutritional research the difference in average weight of participants on different diets is typically used to draw conclusions about the relative efficacy of interventions. As a result of their focus on the mean, meta-analyses largely overlook the effects of treatments on inter-subject variability. Recent tools from the study of biological evolution, where inter-individual variability is one of the key ingredients for evolution by natural selection, now allow us to study inter-subject variability using established meta-analytic models. Here we use meta-analysis to study how low carbohydrate (LC) ad libitum diets and calorie restricted diets affect variance in mass. We find that LC ad libitum diets may have a more variable outcome than diets that prescribe a reduced calorie intake. Our results suggest that whilst LC diets are effective in a large proportion of the population, for a subset of individuals, calorie restricted diets may be more effective. There is evidence that LC ad libitum diets rely on appetite suppression to drive weight loss. Extending this hypothesis, we suggest that between-individual variability in protein appetite may drive the trends that we report. A priori identification of an individual's target intake for protein may help define the most effective dietary intervention to prescribe for weight loss.

  8. Meta-analysis of variance: an illustration comparing the effects of two dietary interventions on variability in weight

    PubMed Central

    Senior, Alistair M.; Gosby, Alison K.; Lu, Jing; Simpson, Stephen J.; Raubenheimer, David

    2016-01-01

    Meta-analysis, which drives evidence-based practice, typically focuses on the average response of subjects to a treatment. For instance in nutritional research the difference in average weight of participants on different diets is typically used to draw conclusions about the relative efficacy of interventions. As a result of their focus on the mean, meta-analyses largely overlook the effects of treatments on inter-subject variability. Recent tools from the study of biological evolution, where inter-individual variability is one of the key ingredients for evolution by natural selection, now allow us to study inter-subject variability using established meta-analytic models. Here we use meta-analysis to study how low carbohydrate (LC) ad libitum diets and calorie restricted diets affect variance in mass. We find that LC ad libitum diets may have a more variable outcome than diets that prescribe a reduced calorie intake. Our results suggest that whilst LC diets are effective in a large proportion of the population, for a subset of individuals, calorie restricted diets may be more effective. There is evidence that LC ad libitum diets rely on appetite suppression to drive weight loss. Extending this hypothesis, we suggest that between-individual variability in protein appetite may drive the trends that we report. A priori identification of an individual’s target intake for protein may help define the most effective dietary intervention to prescribe for weight loss. PMID:27491895

  9. An Empirical Investigation of the Effect of Heteroscedasticity and Heterogeneity of Variance on the Analysis of Covariance and the Johnson-Neyman Technique.

    ERIC Educational Resources Information Center

    Shields, Joyce Lee

    The robustness of the Johnson-Neyman technique and analysis of covariance (ANCOVA) to violations of assumptions of homoscedasticity and homogeneity of variance was tested through the use of Monte Carlo computer procedures. The study simulated a one-way, fixed-effects analysis with two treatment groups, one criterion, Y, and one covariate, X. Five…

  10. Evaluation of single-cell gel electrophoresis data: combination of variance analysis with sum of ranking differences.

    PubMed

    Héberger, Károly; Kolarević, Stoimir; Kračun-Kolarević, Margareta; Sunjog, Karolina; Gačić, Zoran; Kljajić, Zoran; Mitrić, Milena; Vuković-Gačić, Branka

    2014-09-01

    Specimens of the mussel Mytilus galloprovincialis were collected from five sites in the Boka Kotorska Bay (Adriatic Sea, Montenegro) during the period summer 2011-autumn 2012. Three types of tissue (gills, haemolymph, and digestive gland) were used for assessment of DNA damage. Images of randomly selected cells were analyzed with a fluorescence microscope and the Comet Assay IV image-analysis system. Three parameters, viz. tail length, tail intensity and Olive tail moment, were analyzed on 4200 nuclei per cell type. We observed variations in the level of DNA damage in mussels collected at different sites, as well as seasonal variations in response. Sum of ranking differences (SRD) was implemented to compare the use of different cell types and different measures of comet tail per nucleus. Numerical scales were transformed into ranks; range scaling between 0 and 1, standardization, and normalization were carried out. SRD selected the best (and worst) combinations: Olive tail moment is the best for all data treatments and for all organs; second best is tail length, and intensity ranks third (except for the digestive gland). The differences were significant at the 5% level. Whereas gills and haemolymph cells do not differ significantly, cells of the digestive gland are much more suitable for estimating genotoxicity. Variance analysis decomposed the effect of different factors on the SRD values. This unique combination provided not only the relative importance of the factors, but also an overall evaluation: the best evaluation method, the best data pre-treatment, etc., were chosen even for partially contradictory data. The rank transformation is superior to any other way of scaling, which is proven by ordering the SRD values by SRD again, and by cross-validation.

  11. Association analysis using next-generation sequence data from publicly available control groups: the robust variance score statistic

    PubMed Central

    Derkach, Andriy; Chiang, Theodore; Gong, Jiafen; Addis, Laura; Dobbins, Sara; Tomlinson, Ian; Houlston, Richard; Pal, Deb K.; Strug, Lisa J.

    2014-01-01

    Motivation: Sufficiently powered case–control studies with next-generation sequence (NGS) data remain prohibitively expensive for many investigators. If feasible, a more efficient strategy would be to include publicly available sequenced controls. However, these studies can be confounded by differences in sequencing platform; alignment, single nucleotide polymorphism and variant calling algorithms; read depth; and selection thresholds. Assuming one can match cases and controls on the basis of ethnicity and other potential confounding factors, and one has access to the aligned reads in both groups, we investigate the effect of systematic differences in read depth and selection threshold when comparing allele frequencies between cases and controls. We propose a novel likelihood-based method, the robust variance score (RVS), that substitutes genotype calls by their expected values given observed sequence data. Results: We show theoretically that the RVS eliminates read depth bias in the estimation of minor allele frequency. We also demonstrate that, using simulated and real NGS data, the RVS method controls Type I error and has comparable power to the ‘gold standard’ analysis with the true underlying genotypes for both common and rare variants. Availability and implementation: An RVS R script and instructions can be found at strug.research.sickkids.ca, and at https://github.com/strug-lab/RVS. Contact: lisa.strug@utoronto.ca Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24733292
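The core substitution the abstract describes (replacing hard genotype calls by their expected values given the observed reads) can be sketched for a single biallelic site. This is a hypothetical illustration (function name, Hardy-Weinberg prior, and interface are my assumptions), not the authors' RVS implementation:

```python
def expected_dosage(genotype_likelihoods, maf):
    """Posterior mean minor-allele count E[G | reads] at a biallelic site.

    genotype_likelihoods: [P(reads | G=0), P(reads | G=1), P(reads | G=2)]
    maf: minor allele frequency used to form a Hardy-Weinberg prior.
    """
    priors = [(1 - maf) ** 2, 2 * maf * (1 - maf), maf ** 2]
    post = [lik * p for lik, p in zip(genotype_likelihoods, priors)]
    z = sum(post)
    return sum(g * w for g, w in zip((0, 1, 2), post)) / z
```

Averaging these dosages over individuals estimates allele frequency without committing to a genotype call, which is how read-depth bias is avoided.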

  12. Selecting a linear mixed model for longitudinal data: repeated measures analysis of variance, covariance pattern model, and growth curve approaches.

    PubMed

    Liu, Siwei; Rovine, Michael J; Molenaar, Peter C M

    2012-03-01

    With increasing popularity, growth curve modeling is more and more often considered as the 1st choice for analyzing longitudinal data. Although the growth curve approach is often a good choice, other modeling strategies may more directly answer questions of interest. It is common to see researchers fit growth curve models without considering alterative modeling strategies. In this article we compare 3 approaches for analyzing longitudinal data: repeated measures analysis of variance, covariance pattern models, and growth curve models. As all are members of the general linear mixed model family, they represent somewhat different assumptions about the way individuals change. These assumptions result in different patterns of covariation among the residuals around the fixed effects. In this article, we first indicate the kinds of data that are appropriately modeled by each and use real data examples to demonstrate possible problems associated with the blanket selection of the growth curve model. We then present a simulation that indicates the utility of Akaike information criterion and Bayesian information criterion in the selection of a proper residual covariance structure. The results cast doubt on the popular practice of automatically using growth curve modeling for longitudinal data without comparing the fit of different models. Finally, we provide some practical advice for assessing mean changes in the presence of correlated data.

  13. Variance component score test for time-course gene set analysis of longitudinal RNA-seq data.

    PubMed

    Agniel, Denis; Hejblum, Boris P

    2017-03-10

    As gene expression measurement technology is shifting from microarrays to sequencing, the statistical tools available for their analysis must be adapted since RNA-seq data are measured as counts. It has been proposed to model RNA-seq counts as continuous variables using nonparametric regression to account for their inherent heteroscedasticity. In this vein, we propose tcgsaseq, a principled, model-free, and efficient method for detecting longitudinal changes in RNA-seq gene sets defined a priori. The method identifies those gene sets whose expression varies over time, based on an original variance component score test accounting for both covariates and heteroscedasticity without assuming any specific parametric distribution for the (transformed) counts. We demonstrate that despite the presence of a nonparametric component, our test statistic has a simple form and limiting distribution, and both may be computed quickly. A permutation version of the test is additionally proposed for very small sample sizes. Applied to both simulated data and two real datasets, tcgsaseq is shown to exhibit very good statistical properties, with an increase in stability and power when compared to state-of-the-art methods ROAST (rotation gene set testing), edgeR, and DESeq2, which can fail to control the type I error under certain realistic settings. We have made the method available for the community in the R package tcgsaseq.

  14. Variance associated with the use of relative velocity for force platform gait analysis in a heterogeneous population of clinically normal dogs.

    PubMed

    Volstad, Nicola; Nemke, Brett; Muir, Peter

    2016-01-01

    Factors that contribute to variance in ground reaction forces (GRFs) include dog morphology, velocity, and trial repetition. Narrow velocity ranges are recommended to minimize variance. In a heterogeneous population, it may be preferable to minimize data variance and efficiently perform force platform gait analysis by evaluation of each individual dog at its preferred velocity, such that dogs are studied at a similar relative velocity (V*). Data from 27 normal dogs were obtained including withers and shoulder height. Each dog was trotted across a force platform at its preferred velocity, with controlled acceleration (±0.5 m/s²). V* ranges were created for withers and shoulder height. Variance effects from 12 trotting velocity ranges and associated V* ranges were examined using repeated-measures analysis-of-covariance. Mean bodyweight was 24.4 ± 7.4 kg. Individual dog, velocity, and V* significantly influenced GRF (P <0.001). Trial number significantly influenced thoracic limb peak vertical force (PVF) (P <0.001). Limb effects were not significant. The magnitude of variance effects was greatest for the dog effect. Withers height V* was associated with small GRF variance. Narrow velocity ranges typically captured a smaller percentage of trials and were not consistently associated with lower variance. The withers height V* range of 0.6-1.05 captured the largest proportion of trials (95.9 ± 5.9%) with no significant effects on PVF and vertical impulse. The use of individual velocity ranges derived from a withers height V* range of 0.6-1.05 will account for population heterogeneity while minimizing exacerbation of lameness in clinical trials studying lame dogs by efficient capture of valid trials.
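For intuition, trial filtering by relative velocity can be sketched as follows. The Froude-style normalization V* = v/√(g·h) is my assumption, chosen because it places typical trotting velocities over typical withers heights inside the reported 0.6-1.05 range; it is not a formula quoted from the paper:

```python
from math import sqrt

G = 9.81  # gravitational acceleration, m/s^2

def relative_velocity(v, withers_height):
    """Dimensionless relative velocity V* = v / sqrt(g * h)
    (assumed normalization; see lead-in)."""
    return v / sqrt(G * withers_height)

def trial_is_valid(v, withers_height, lo=0.6, hi=1.05):
    """True if a trial's V* falls in the accepted window."""
    return lo <= relative_velocity(v, withers_height) <= hi
```

For a 0.6 m withers height, the 0.6-1.05 window under this normalization corresponds to roughly 1.5-2.5 m/s, consistent with the trotting velocities in the companion study.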

  15. Simulation Study Using a New Type of Sample Variance

    NASA Technical Reports Server (NTRS)

    Howe, D. A.; Lainson, K. J.

    1996-01-01

    We evaluate with simulated data a new type of sample variance for the characterization of frequency stability. The new statistic (referred to as TOTALVAR and its square root TOTALDEV) is a better predictor of long-term frequency variations than the present sample Allan deviation. The statistical model uses the assumption that a time series of phase or frequency differences is wrapped (periodic) with overall frequency difference removed. We find that the variability at long averaging times is reduced considerably for the five models of power-law noise commonly encountered with frequency standards and oscillators.
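The "wrapped (periodic) with overall frequency difference removed" model can be sketched numerically. This is a rough illustration of the statistical idea stated in the abstract, with a hypothetical function name, and is not the exact TOTALVAR estimator:

```python
import numpy as np

def totvar_sketch(x, m, tau0=1.0):
    """Allan-variance-style second differences computed around a ring:
    detrend the phase so the net frequency offset is zero, treat the
    series as periodic, and average over all N positions."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    # remove the overall frequency difference (linear phase between endpoints)
    trend = x[0] + (x[-1] - x[0]) * np.arange(N) / (N - 1)
    xp = x - trend
    idx = np.arange(N)
    d2 = xp[(idx + m) % N] - 2.0 * xp[idx] + xp[(idx - m) % N]
    return np.mean(d2 ** 2) / (2.0 * (m * tau0) ** 2)
```

Because every averaging time uses all N second differences of the wrapped series, long-τ estimates are built from many more samples than the conventional Allan estimator, which is the source of the reduced variability reported above.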

  16. Noise and drift analysis of non-equally spaced timing data

    NASA Technical Reports Server (NTRS)

    Vernotte, F.; Zalamansky, G.; Lantz, E.

    1994-01-01

    Generally, it is possible to obtain equally spaced timing data from oscillators. The measurement of the drifts and noises affecting oscillators is then performed by using a variance (Allan variance, modified Allan variance, or time variance) or a system of several variances (multivariance method). However, in some cases, several samples, or even several sets of samples, are missing. In the case of millisecond pulsar timing data, for instance, observations are quite irregularly spaced in time. Nevertheless, since some observations are very close together (one minute) and since the timing data sequence is very long (more than ten years), information on both short-term and long-term stability is available. Unfortunately, a direct variance analysis is not possible without interpolating missing data. Different interpolation algorithms (linear interpolation, cubic spline) are used to calculate variances in order to verify that they neither lose information nor add erroneous information. A comparison of the results of the different algorithms is given. Finally, the multivariance method was adapted to the measurement sequence of the millisecond pulsar timing data: the responses of each variance of the system are calculated for each type of noise and drift, with the same missing samples as in the pulsar timing sequence. An estimation of precision, dynamics, and separability of this method is given.
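A minimal sketch of the interpolation step discussed above, using linear interpolation onto an even grid (a cubic spline, e.g. scipy.interpolate.CubicSpline, could be substituted); the function name is hypothetical:

```python
import numpy as np

def resample_phase(t_obs, x_obs, tau0):
    """Linearly interpolate irregularly spaced timing data x_obs(t_obs)
    onto an equally spaced grid with spacing tau0, so that
    finite-difference variances (Allan, modified Allan, ...) apply."""
    t_obs = np.asarray(t_obs, dtype=float)
    x_obs = np.asarray(x_obs, dtype=float)
    t_grid = np.arange(t_obs[0], t_obs[-1], tau0)
    return t_grid, np.interp(t_grid, t_obs, x_obs)
```

As the abstract cautions, any such interpolation must be checked against the known variance responses for each noise type, since it can both smooth away genuine short-term noise and inject spurious low-frequency power.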

  17. Variance associated with subject velocity and trial repetition during force platform gait analysis in a heterogeneous population of clinically normal dogs.

    PubMed

    Hans, Eric C; Zwarthoed, Berdien; Seliski, Joseph; Nemke, Brett; Muir, Peter

    2014-12-01

    Factors that contribute to variance in ground reaction forces (GRF) include dog morphology, velocity, and trial repetition. Narrow velocity ranges are recommended to minimize variance. In a heterogeneous population of clinically normal dogs, it was hypothesized that the dog subject effect would account for the majority of variance in peak vertical force (PVF) and vertical impulse (VI) at a trotting gait, and that narrow velocity ranges would be associated with less variance. Data from 20 normal dogs were obtained. Each dog was trotted across a force platform at its habitual velocity, with controlled acceleration (±0.5 m/s²). Variance effects from 12 trotting velocity ranges were examined using repeated-measures analysis-of-covariance. Significance was set at P <0.05. Mean dog bodyweight was 28.4 ± 7.4 kg. Individual dog and velocity significantly affected PVF and VI for thoracic and pelvic limbs (P <0.001). Trial number significantly affected thoracic limb PVF (P <0.001). Limb (left or right) significantly affected thoracic limb VI (P = 0.02). The magnitude of variance effects from largest to smallest was dog, velocity, trial repetition, and limb. Velocity ranges of 1.5-2.0 m/s, 1.8-2.2 m/s, and 1.9-2.2 m/s were associated with low variance and no significant effects on thoracic or pelvic limb PVF and VI. A combination of these ranges, 1.5-2.2 m/s, captured a large percentage of trials per dog (84.2 ± 21.4%) with no significant effects on thoracic or pelvic limb PVF or VI. It was concluded that wider velocity ranges facilitate capture of valid trials with little to no effect on GRF in normal trotting dogs. This concept is important for clinical trial design.

  18. Radial forcing and Edgar Allan Poe's lengthening pendulum

    NASA Astrophysics Data System (ADS)

    McMillan, Matthew; Blasing, David; Whitney, Heather M.

    2013-09-01

    Inspired by Edgar Allan Poe's The Pit and the Pendulum, we investigate a radially driven, lengthening pendulum. We first show that increasing the length of an undriven pendulum at a uniform rate does not amplify the oscillations in a manner consistent with the behavior of the scythe in Poe's story. We discuss parametric amplification and the transfer of energy (through the parameter of the pendulum's length) to the oscillating part of the system. In this manner, radial driving can easily and intuitively be understood, and the fundamental concept applied in many other areas. We propose and show by a numerical model that appropriately timed radial forcing can increase the oscillation amplitude in a manner consistent with Poe's story. Our analysis contributes a computational exploration of the complex harmonic motion that can result from radially driving a pendulum and sheds light on a mechanism by which oscillations can be amplified parametrically. These insights should prove especially valuable in the undergraduate physics classroom, where investigations into pendulums and oscillations are commonplace.
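The first claim above (uniform lengthening alone does not amplify the swing) can be checked with a short numerical model of the variable-length pendulum, θ'' + (2l'/l)θ' + (g/l)·sin θ = 0; the integration scheme and parameter values are illustrative, not those of the paper:

```python
import numpy as np

def lengthening_pendulum(theta0, l0, rate, g=9.81, dt=1e-4, t_end=20.0):
    """Semi-implicit Euler integration of a pendulum whose length grows
    uniformly, l(t) = l0 + rate * t, released from rest at angle theta0.
    Returns the angle time series."""
    n = int(t_end / dt)
    theta, omega = theta0, 0.0
    out = np.empty(n)
    for k in range(n):
        l = l0 + rate * k * dt
        # theta'' = -(2 l'/l) theta' - (g/l) sin(theta)
        alpha = -(2.0 * rate / l) * omega - (g / l) * np.sin(theta)
        omega += alpha * dt
        theta += omega * dt
        out[k] = theta
    return out
```

Consistent with the adiabatic-invariant argument (angular amplitude scaling roughly as l^(-3/4)), the swing angle decays as the pendulum lengthens; amplification requires the timed radial forcing the authors go on to propose.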

  19. Late ball variance with the Model 1000 Starr-Edwards aortic valve prosthesis. Risk analysis and strategy of operative management.

    PubMed

    Grunkemeier, G L; Starr, A

    1986-06-01

    The first generation of aortic ball-valve prostheses, used until 1965, was associated with poppet damage owing to fatty infiltration of the silicone rubber ball, a phenomenon termed ball variance. For the Model 1000 Starr-Edwards valves, almost all cases were discovered before 8 years. However, a review of our patients still at risk with the original valve and poppet, prompted by other recent reports of late ball variance, has shown that severe variance can exist up to 20 years after implantation. There is a relationship between the year of valve implantation and the timing and severity of ball variance for the overall series of patients surviving operation, but for the subgroup currently at risk the sample sizes are too small to detect any difference, if one still exists. Only three of 12 patients in the current subset were found to have severe variance. Simple ball change has been the operation of choice. Prophylactic reoperation is not indicated in the current subset, but patients require careful follow-up and should be considered for reoperation should symptoms develop.

  20. Aspects of First Year Statistics Students' Reasoning When Performing Intuitive Analysis of Variance: Effects of Within- and Between-Group Variability

    ERIC Educational Resources Information Center

    Trumpower, David L.

    2015-01-01

    Making inferences about population differences based on samples of data, that is, performing intuitive analysis of variance (IANOVA), is common in everyday life. However, the intuitive reasoning of individuals when making such inferences (even following statistics instruction), often differs from the normative logic of formal statistics. The…

  1. The median hazard ratio: a useful measure of variance and general contextual effects in multilevel survival analysis

    PubMed Central

    Wagner, Philippe; Merlo, Juan

    2016-01-01

    Multilevel data occurs frequently in many research areas like health services research and epidemiology. A suitable way to analyze such data is through the use of multilevel regression models (MLRM). MLRM incorporate cluster‐specific random effects which allow one to partition the total individual variance into between‐cluster variation and between‐individual variation. Statistically, MLRM account for the dependency of the data within clusters and provide correct estimates of uncertainty around regression coefficients. Substantively, the magnitude of the effect of clustering provides a measure of the General Contextual Effect (GCE). When outcomes are binary, the GCE can also be quantified by measures of heterogeneity like the Median Odds Ratio (MOR) calculated from a multilevel logistic regression model. Time‐to‐event outcomes within a multilevel structure occur commonly in epidemiological and medical research. However, the Median Hazard Ratio (MHR) that corresponds to the MOR in multilevel (i.e., ‘frailty’) Cox proportional hazards regression is rarely used. Analogously to the MOR, the MHR is the median relative change in the hazard of the occurrence of the outcome when comparing identical subjects from two randomly selected different clusters that are ordered by risk. We illustrate the application and interpretation of the MHR in a case study analyzing the hazard of mortality in patients hospitalized for acute myocardial infarction at hospitals in Ontario, Canada. We provide R code for computing the MHR. The MHR is a useful and intuitive measure for expressing cluster heterogeneity in the outcome and, thereby, estimating general contextual effects in multilevel survival analysis. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:27885709

  2. The median hazard ratio: a useful measure of variance and general contextual effects in multilevel survival analysis.

    PubMed

    Austin, Peter C; Wagner, Philippe; Merlo, Juan

    2017-03-15

    Multilevel data occurs frequently in many research areas like health services research and epidemiology. A suitable way to analyze such data is through the use of multilevel regression models (MLRM). MLRM incorporate cluster-specific random effects which allow one to partition the total individual variance into between-cluster variation and between-individual variation. Statistically, MLRM account for the dependency of the data within clusters and provide correct estimates of uncertainty around regression coefficients. Substantively, the magnitude of the effect of clustering provides a measure of the General Contextual Effect (GCE). When outcomes are binary, the GCE can also be quantified by measures of heterogeneity like the Median Odds Ratio (MOR) calculated from a multilevel logistic regression model. Time-to-event outcomes within a multilevel structure occur commonly in epidemiological and medical research. However, the Median Hazard Ratio (MHR) that corresponds to the MOR in multilevel (i.e., 'frailty') Cox proportional hazards regression is rarely used. Analogously to the MOR, the MHR is the median relative change in the hazard of the occurrence of the outcome when comparing identical subjects from two randomly selected different clusters that are ordered by risk. We illustrate the application and interpretation of the MHR in a case study analyzing the hazard of mortality in patients hospitalized for acute myocardial infarction at hospitals in Ontario, Canada. We provide R code for computing the MHR. The MHR is a useful and intuitive measure for expressing cluster heterogeneity in the outcome and, thereby, estimating general contextual effects in multilevel survival analysis. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
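Under the usual log-normal frailty assumption, the MHR has a closed form analogous to the MOR: MHR = exp(√(2σ²) · Φ⁻¹(0.75)), where σ² is the variance of the log frailty. A minimal sketch follows; the closed form is standard for this model, though the paper's exact computation may differ:

```python
from math import exp, sqrt
from statistics import NormalDist

def median_hazard_ratio(frailty_variance):
    """Median hazard ratio from the between-cluster (log-frailty) variance
    of a multilevel Cox model: MHR = exp(sqrt(2 * var) * Phi^-1(0.75))."""
    z75 = NormalDist().inv_cdf(0.75)  # 75th percentile of standard normal, ~0.6745
    return exp(sqrt(2.0 * frailty_variance) * z75)
```

A frailty variance of zero gives MHR = 1 (no cluster heterogeneity); larger variances give MHR > 1, read as the median hazard ratio between a higher-risk and a lower-risk cluster for otherwise identical patients.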

  3. Effects of mRNA amplification on gene expression ratios in cDNA experiments estimated by analysis of variance

    PubMed Central

    Nygaard, Vigdis; Løland, Anders; Holden, Marit; Langaas, Mette; Rue, Håvard; Liu, Fang; Myklebost, Ola; Fodstad, Øystein; Hovig, Eivind; Smith-Sørensen, Birgitte

    2003-01-01

    Background A limiting factor of cDNA microarray technology is the need for a substantial amount of RNA per labeling reaction. Thus, 20–200 µg total RNA or 0.5–2 µg poly(A) RNA is typically required for monitoring gene expression. In addition, gene expression profiles from large, heterogeneous cell populations provide complex patterns from which biological data for the target cells may be difficult to extract. In this study, we chose to investigate a widely used mRNA amplification protocol that allows gene expression studies to be performed on samples with limited starting material. We present a quantitative study of the variation and noise present in our data set obtained from experiments with either amplified or non-amplified material. Results Using analysis of variance (ANOVA) and multiple hypothesis testing, we estimated the impact of amplification on the preservation of gene expression ratios. Both methods showed that the gene expression ratios were not completely preserved between amplified and non-amplified material. We also compared the expression ratios between the two cell lines for the amplified material with expression ratios between the two cell lines for the non-amplified material for each gene. With the aid of multiple t-testing with a false discovery rate of 5%, we found that 10% of the genes investigated showed significantly different expression ratios. Conclusion Although the ratios were not fully preserved, amplification may prove to be extremely useful with respect to characterizing low expressing genes. PMID:12659661
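Multiple t-testing at a 5% false discovery rate, as described above, is typically done with the Benjamini-Hochberg step-up procedure; a minimal sketch (the assumption that BH was the specific procedure used here is mine):

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure controlling the FDR at level q.
    Returns a boolean list: True where the null hypothesis is rejected."""
    m = len(pvals)
    order = sorted(range(m), key=lambda k: pvals[k])
    # largest rank k with p_(k) <= q * k / m
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if pvals[idx] <= q * rank / m:
            k_max = rank
    rejected = [False] * m
    for rank, idx in enumerate(order, start=1):
        if rank <= k_max:
            rejected[idx] = True
    return rejected
```

At q = 0.05, roughly 5% of the genes flagged as differentially expressed are expected to be false discoveries, which is the guarantee invoked in the abstract.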

  4. An analysis of the influences of biological variance, measurement error, and uncertainty on retinal photothermal damage threshold studies

    NASA Astrophysics Data System (ADS)

    Wooddell, David A., Jr.; Schubert-Kabban, Christine M.; Hill, Raymond R.

    2012-03-01

    Safe exposure limits for directed energy sources are derived from a compilation of known injury thresholds taken primarily from animal models and simulation data. The summary statistics for these experiments are given as exposure levels representing a 50% probability of injury, or ED50, and associated variance. We examine biological variance in focal geometries and thermal properties and the influence each has in single-pulse ED50 threshold studies for 514-, 694-, and 1064-nanometer laser exposures in the thermal damage time domain. Damage threshold is defined to be the amount of energy required for a retinal burn on at least one retinal pigment epithelium (RPE) cell measuring approximately 10 microns in diameter. Better understanding of experimental variance will allow for more accurate safety buffers for exposure limits and improve directed energy research methodology.

  5. Obituary: Allan R. Sandage (1926-2010)

    NASA Astrophysics Data System (ADS)

    Devorkin, David

    2011-12-01

    Allan Rex Sandage died of pancreatic cancer at his home in San Gabriel, California, in the shadow of Mount Wilson, on November 13, 2010. Born in Iowa City, Iowa, on June 18, 1926, he was 84 years old at his death, leaving his wife, former astronomer Mary Connelly Sandage, and two sons, David and John. He also left a legacy to the world of astronomical knowledge that has long been universally admired and appreciated, making his name synonymous with late 20th-Century observational cosmology. The only child of Charles Harold Sandage, a professor of advertising who helped establish that academic specialty after obtaining a PhD in business administration, and Dorothy Briggs Sandage, whose father was president of Graceland College in Iowa, Allan Sandage grew up in a thoroughly intellectual, university oriented atmosphere but also a peripatetic one taking him to Philadelphia and later to Illinois as his father rose in his career. During his 2 years in Philadelphia, at about age eleven, Allan developed a curiosity about astronomy stimulated by a friend's interest. His father bought him a telescope and he used it to systematically record sunspots, and later attempted to make a larger 6-inch reflector, a project left uncompleted. As a teenager Allan read widely, especially astronomy books of all kinds, recalling in particular The Glass Giant of Palomar as well as popular works by Eddington and Hubble (The Realm of the Nebulae) in the early 1940s. Although his family was Mormon, of the Reorganized Church, he was not practicing, though he later sporadically attended a Methodist church in Oxford, Iowa during his college years. Sandage knew by his high school years that he would engage in some form of intellectual life related to astronomy. He particularly recalls an influential science teacher at Miami University in Oxford, Ohio named Ray Edwards, who inspired him to think critically and "not settle for any hand-waving of any kind." [Interview of Allan Rex Sandage by Spencer

  6. The Allan Hills icefield and its relationship to meteorite concentration

    NASA Technical Reports Server (NTRS)

    Annexstad, J. O.

    1982-01-01

    The Allan Hills icefield is described as a limited icefield that has large concentrations of meteorites. The meteorites appear to be concentrated on the lower limb of an ice monocline with other finds scattered throughout the field. In an attempt to understand the mechanisms of meteorite concentration, a triangulation chain was established across the icefield. This chain is composed of 20 stations, two of which are on bedrock, and extends westward from the Allan Hills a distance of 15 kilometers. The triangulation chain and its relationship to the meteorite concentrations is shown.

  7. Allan Hills 77005 - A new meteorite type found in Antarctica

    NASA Technical Reports Server (NTRS)

    Mcsween, H. Y., Jr.; Taylor, L. A.; Stolper, E. M.

    1979-01-01

    A unique 482.5 g meteorite found in Antarctica appears to be related by igneous differentiation to shergottite achondrites, which have close similarities with terrestrial basaltic rocks. Zoned maskelynite with similar compositional ranges and plagioclase of such intermediate compositions as are unknown in other achondrites occur in both shergottites and the Allan Hills meteorite. The degree of silica saturation, however, strongly distinguishes the two meteorite types. It is suggested that the Allan Hills meteorite may represent a cumulate rock formed earlier than the shergottites from the same or a similar parent magma.

  8. Approximate Confidence Intervals for Moment-Based Estimators of the Between-Study Variance in Random Effects Meta-Analysis

    ERIC Educational Resources Information Center

    Jackson, Dan; Bowden, Jack; Baker, Rose

    2015-01-01

    Moment-based estimators of the between-study variance are very popular when performing random effects meta-analyses. This type of estimation has many advantages including computational and conceptual simplicity. Furthermore, by using these estimators in large samples, valid meta-analyses can be performed without the assumption that the treatment…

  9. An Analysis of Variance in Teacher Self-Efficacy Levels Dependent on Participation Time in Professional Learning Communities

    ERIC Educational Resources Information Center

    Marx, Megan D.

    2016-01-01

    The purpose of this study was to determine variance in mean levels of teacher self-efficacy (TSE) and its three factors--efficacy in student engagement (EIS), efficacy in instructional strategies (EIS), and efficacy in classroom management (ECM)--based on participation and time spent in professional learning communities (PLCs). In this…

  10. On the measurement of frequency and of its sample variance with high-resolution counters

    SciTech Connect

    Rubiola, Enrico

    2005-05-15

    A frequency counter measures the input frequency ν averaged over a suitable time τ, versus the reference clock. High resolution is achieved by interpolating the clock signal. Further increased resolution is obtained by averaging multiple, highly overlapped frequency measurements. In the presence of additive white noise or white phase noise, the square uncertainty improves from σ_ν² ∝ 1/τ² to σ_ν² ∝ 1/τ³. Surprisingly, when a file of contiguous data is fed into the formula of the two-sample (Allan) variance σ_y²(τ) = E{(1/2)(y_{k+1} − y_k)²} of the fractional frequency fluctuation y, the result is the modified Allan variance mod σ_y²(τ). But if a sufficient number of contiguous measures are averaged in order to get a longer τ and the data are fed into the same formula, the result is the (nonmodified) Allan variance. Of course, interpretation mistakes are around the corner if the counter's internal process is not well understood. The typical domain of interest is the short-term stability measurement of oscillators.
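The two statistics contrasted above can be sketched numerically. These are textbook non-overlapping estimators with illustrative function names, not a model of the counter's internal processing:

```python
import numpy as np

def allan_variance(y, m=1):
    """Non-overlapping two-sample (Allan) variance of fractional frequency
    data y at averaging factor m: average m contiguous samples, then take
    (1/2) E[(y_{k+1} - y_k)^2] over the averaged series."""
    y = np.asarray(y, dtype=float)
    n = len(y) // m
    yk = y[: n * m].reshape(n, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(yk) ** 2)

def modified_allan_variance(x, m, tau0=1.0):
    """Modified Allan variance from phase data x (seconds), tau = m * tau0:
    second differences of m-sample phase averages, per the standard
    definition Mod sigma_y^2(tau) = <S^2> / (2 m^2 tau^2), where
    S = sum_{i=j}^{j+m-1} (x_{i+2m} - 2 x_{i+m} + x_i)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    sums = [
        np.sum(x[j + 2 * m : j + 3 * m] - 2 * x[j + m : j + 2 * m] + x[j : j + m])
        for j in range(N - 3 * m + 1)
    ]
    return np.mean(np.square(sums)) / (2.0 * m ** 2 * (m * tau0) ** 2)
```

The distinction Rubiola draws is visible here: averaging phase over the window before differencing (the inner sum) is exactly what turns the plain Allan formula into the modified one.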

  11. Allan Sandage : L'architecte de l'expansion

    NASA Astrophysics Data System (ADS)

    Bonnet-Bidaud, J. M.

    1998-07-01

    He was one of the handful of pioneers who opened up the extragalactic world. For nearly 50 years, Allan Sandage has pursued the quest begun by the "master" Edwin Hubble: measuring the expansion rate of the Universe. An encounter with a living legend of cosmology...

  12. Biotechnology Symposium - In Memoriam, the Late Dr. Allan Zipf

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A one-day biotechnology symposium was held at Alabama A&M University (AAMU), Normal, AL on June 4, 2004 in memory of the late Dr. Allan Zipf (Sept 1953-Jan 2004). Dr. Zipf was a Research Associate Professor at the Department of Plant and Soil Sciences, AAMU, who collaborated extensively with ARS/MS...

  13. Analysis of variance components of testcross progenies in an autotetraploid species and consequences for recurrent selection with a tester.

    PubMed

    Gallais, A

    1992-01-01

    For autotetraploid species the development of the concept of test value (value in testcross) leads to a simple description of the variance among testcross progenies. When defining genetic effects directly at the level of the value of the progenies, there is no contribution of tri- and tetragenic interactions. To estimate additive and dominance variances it is only necessary to have the population of progenies structured in half-sib or full-sib families; it is then possible to determine the presence of epistasis using a two-way mating design. When the theory of recurrent selection is applied, dominance variance can be neglected for the prediction of genetic advance in one cycle, as well as for the development of combined selection when progenies are structured in families. The results are similar to those for diploids with two-locus epistasis. The most efficient scheme consists of the development of pair-crossing in off-season generations (for intercrossing) and simultaneous crossing of each plant to the tester. In comparison to the classical scheme, the relative efficiency of such a scheme is 41%. The use of combined selection will further increase this superiority.

  14. Nuclear entropy, angular second moment, variance and texture correlation of thymus cortical and medullar lymphocytes: grey level co-occurrence matrix analysis.

    PubMed

    Pantic, Igor; Pantic, Senka; Paunovic, Jovana; Perovic, Milan

    2013-09-01

    Grey level co-occurrence matrix (GLCM) analysis is a well-known mathematical method for quantifying cell and tissue textural properties such as homogeneity, complexity and level of disorder. Recently, it was demonstrated that this method is capable of evaluating fine changes in nuclear structure that are otherwise undetectable during standard microscopy analysis. In this article, we present results indicating that the entropy, angular second moment, variance, and texture correlation of lymphocyte nuclear structure determined by the GLCM method differ between thymus cortex and medulla. A total of 300 thymus lymphocyte nuclei from 10 one-month-old mice were analyzed: 150 nuclei from cortical and 150 nuclei from medullar regions of the thymus. Nuclear GLCM analysis was carried out using National Institutes of Health ImageJ software. For each nucleus, entropy, angular second moment, variance and texture correlation were determined. Cortical lymphocytes had significantly higher chromatin angular second moment (p < 0.001) and texture correlation (p < 0.05) than medullar lymphocytes, whereas their nuclear GLCM entropy and variance were significantly lower (p < 0.001). These results suggest that GLCM as a method has potential for detecting discrete changes in nuclear structure associated with lymphocyte migration and maturation in the thymus.
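    Two of the GLCM descriptors named above, angular second moment and entropy, can be computed directly from a normalized co-occurrence matrix. The sketch below is a minimal illustration only, not the ImageJ implementation used in the study; the test images and the single horizontal pixel offset are hypothetical:

```python
import math
from collections import Counter

def glcm_features(image, dx=1, dy=0):
    """Normalized grey-level co-occurrence matrix for one pixel offset,
    reduced to (angular second moment, entropy)."""
    rows, cols = len(image), len(image[0])
    pairs = Counter()
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                pairs[(image[r][c], image[r2][c2])] += 1
    total = sum(pairs.values())
    probs = [n / total for n in pairs.values()]
    asm = sum(p * p for p in probs)                  # high for homogeneous texture
    entropy = -sum(p * math.log(p) for p in probs)   # high for disordered texture
    return asm, entropy

# A perfectly uniform patch: maximal ASM, zero entropy.
print(glcm_features([[3, 3], [3, 3]]))
# A checkerboard patch: lower ASM, positive entropy.
print(glcm_features([[0, 1], [1, 0]]))
```

    In a full analysis one would average such features over several offsets and directions, as ImageJ texture plugins do.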

  15. Limited variance control in statistical low thrust guidance analysis. [stochastic algorithm for SEP comet Encke flyby mission

    NASA Technical Reports Server (NTRS)

    Jacobson, R. A.

    1975-01-01

    Difficulties arise in guiding a solar electric propulsion spacecraft due to nongravitational accelerations caused by random fluctuations in the magnitude and direction of the thrust vector. These difficulties may be handled by using a low thrust guidance law based on the linear-quadratic-Gaussian problem of stochastic control theory with a minimum terminal miss performance criterion. Explicit constraints are imposed on the variances of the control parameters, and an algorithm based on the Hilbert space extension of a parameter optimization method is presented for calculation of gains in the guidance law. The terminal navigation of a 1980 flyby mission to the comet Encke is used as an example.

  16. Fast variance reduction for steady-state simulation and sensitivity analysis of stochastic chemical systems using shadow function estimators

    NASA Astrophysics Data System (ADS)

    Milias-Argeitis, Andreas; Lygeros, John; Khammash, Mustafa

    2014-07-01

    We address the problem of estimating steady-state quantities associated to systems of stochastic chemical kinetics. In most cases of interest, these systems are analytically intractable, and one has to resort to computational methods to estimate stationary values of cost functions. In this work, we introduce a novel variance reduction algorithm for stochastic chemical kinetics, inspired by related methods in queueing theory, in particular the use of shadow functions. Using two numerical examples, we demonstrate the efficiency of the method for the calculation of steady-state parametric sensitivities and evaluate its performance in comparison to other estimation methods.

  17. Interpretation of analysis of variance models using principal component analysis to assess the effect of a maternal anticancer treatment on the mineralization of rat bones.

    PubMed

    Stanimirova, I; Michalik, K; Drzazga, Z; Trzeciak, H; Wentzell, P D; Walczak, B

    2011-03-09

    The goal of the present study is to assess the effects of anticancer treatment with cyclophosphamide and cytarabine during pregnancy on the mineralization of mandible bones in 7-, 14- and 28-day-old rats. Each bone sample was described by its X-ray fluorescence spectrum characterizing the mineral composition. The data collected are multivariate in nature and their structure is difficult to visualize and interpret directly. Therefore, methods like analysis of variance-principal component analysis (ANOVA-PCA) and ANOVA-simultaneous component analysis (ASCA), which are suitable for the analysis of highly correlated spectral data and are able to incorporate information about the underlying experimental design, are greatly valued. In this study, the ASCA methodology adapted for unbalanced data was used to investigate the impact of the anticancer drug treatment during pregnancy on the mineralization of the mandible bones of newborn rats and to examine any changes in the mineralization of the bones over time. The results showed that treatment with cyclophosphamide and cytarabine during pregnancy induces a decrease in the K and Zn levels in the mandible bones of newborns. This suppresses the development of mandible bones in rats in the early stages (up to 14 days) of formation. An interesting observation was that the levels of essential minerals like K, Mg, Na and Ca vary considerably in the different regions of the mandible bones.
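    The core of the ANOVA-PCA/ASCA approach is an ANOVA-style partition of the data matrix into effect matrices before any component analysis is applied. The sketch below is a schematic illustration under assumed data, not the authors' procedure; the spectra, channel count and group labels are invented, and the subsequent PCA/SCA step on each effect matrix is omitted:

```python
def effect_partition(X, factor_levels):
    """Split X (samples x variables) into grand-mean, factor-effect and
    residual matrices -- the ANOVA step of ASCA; a PCA of each effect
    matrix would follow."""
    n, m = len(X), len(X[0])
    grand = [sum(X[i][j] for i in range(n)) / n for j in range(m)]
    level_mean = {}
    for lv in set(factor_levels):
        rows = [i for i in range(n) if factor_levels[i] == lv]
        level_mean[lv] = [sum(X[i][j] for i in rows) / len(rows) for j in range(m)]
    # Effect of the factor: level mean minus grand mean, one row per sample.
    effect = [[level_mean[factor_levels[i]][j] - grand[j] for j in range(m)]
              for i in range(n)]
    # Whatever the factor does not explain is left in the residual matrix.
    resid = [[X[i][j] - grand[j] - effect[i][j] for j in range(m)]
             for i in range(n)]
    return grand, effect, resid

# Hypothetical 4 spectra x 3 channels, two treatment groups:
X = [[1.0, 2.0, 3.0], [1.2, 2.2, 3.2], [2.0, 3.0, 4.0], [2.2, 3.2, 4.2]]
grand, effect, resid = effect_partition(
    X, ["control", "control", "treated", "treated"])
print(effect[0], resid[0])
```

    For unbalanced data, as in the study above, the group means are computed over unequal row counts, which is why the adaptation mentioned in the abstract is needed.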

  18. Adjustment of Measurements with Multiplicative Errors: Error Analysis, Estimates of the Variance of Unit Weight, and Effect on Volume Estimation from LiDAR-Type Digital Elevation Models

    PubMed Central

    Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

    2014-01-01

    Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as in GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if the errors were additive. We simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
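    The defining feature of a multiplicative error model is that the dispersion of an observation scales with its true value. The simulation below is a minimal sketch under assumed values, not the paper's adjustment procedure; the relative error level, baseline lengths and sample size are hypothetical. It illustrates the proportional scaling and a simple estimate of the variance of unit weight from squared relative residuals:

```python
import random
import statistics

random.seed(42)
sigma = 0.01                         # relative error level (hypothetical)
truth = [100.0, 1000.0, 10000.0]     # true values, e.g. baseline lengths

# Multiplicative model: each observation is y = x * (1 + eps), so the
# absolute error grows in proportion to the measured quantity.
obs = [[x * (1.0 + random.gauss(0.0, sigma)) for _ in range(500)] for x in truth]

for x, ys in zip(truth, obs):
    abs_sd = statistics.stdev(ys)    # grows roughly tenfold with each x
    print(f"true={x:8.0f}  absolute SD={abs_sd:8.2f}  relative SD={abs_sd / x:.4f}")

# Variance of unit weight estimated from squared relative residuals:
resid2 = [((y - x) / x) ** 2 for x, ys in zip(truth, obs) for y in ys]
print("sigma0^2 estimate:", sum(resid2) / len(resid2))   # close to sigma**2
```

    Treating such data as if the errors were additive would weight the large and small measurements equally, which is exactly the DEM-construction issue the paper investigates.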

  20. Exposure and terrestrial ages of four Allan Hills Antarctic meteorites

    NASA Technical Reports Server (NTRS)

    Kirsten, T.; Ries, D.; Fireman, E. L.

    1978-01-01

    Terrestrial ages of meteorites are based on the amount of cosmic-ray-produced radioactivity in the sample and the number of observed falls that have similar cosmic-ray exposure histories. The cosmic-ray exposures are obtained from the stable noble gas isotopes. Noble gas isotopes are measured by high-sensitivity mass spectrometry. In the present study, the noble gas contents were measured in four Allan Hill meteorites (No. 5, No. 6, No. 7, and No. 8), whose C-14, Al-26, and Mn-53 radioactivities are known. These meteorites are of particular interest because they belong to a large assemblage of distinct meteorites that lie exposed on a small (110 sq km) area of ice near the Allan Hills.

  1. Carbon-14 ages of Allan Hills meteorites and ice

    NASA Technical Reports Server (NTRS)

    Fireman, E. L.; Norris, T.

    1982-01-01

    Allan Hills is a blue ice region of approximately 100 sq km in Antarctica where many meteorites have been found exposed on the ice. The terrestrial ages of the Allan Hills meteorites, which are obtained from their cosmogenic nuclide abundances, are important time markers that can reflect the history of ice movement to the site. The principal purpose in studying the terrestrial ages of ALHA meteorites is to locate samples of ancient ice and analyze their trapped gas contents. Attention is given to the C-14 and Ar-39 terrestrial ages of ALHA meteorites, and to C-14 ages and trapped gas compositions in ice samples. On the basis of the C-14 terrestrial ages obtained, and of Cl-36 and Al-26 results reported by others, it is concluded that most ALHA meteorites fell between 20,000 and 200,000 years ago.

  2. Conversations across Meaning Variance

    ERIC Educational Resources Information Center

    Cordero, Alberto

    2013-01-01

    Progressive interpretations of scientific theories have long been denounced as naive, because of the inescapability of meaning variance. The charge reportedly applies to recent realist moves that focus on theory-parts rather than whole theories. This paper considers the question of what "theory-parts" of epistemic significance (if any) relevantly…

  3. Joint analysis of beef growth and carcass quality traits through calculation of co-variance components and correlations.

    PubMed

    Mirzaei, H R; Verbyla, A P; Pitchford, W S

    2011-03-15

    A joint growth-carcass model using random regression was used to estimate the (co)variance components of beef cattle body weights and carcass quality traits and the correlations between them. During a four-year period (1994-1997) of the Australian "southern crossbreeding project", mature Hereford cows (N = 581) were mated to 97 sires of Jersey, Wagyu, Angus, Hereford, South Devon, Limousin, and Belgian Blue breeds, resulting in 1141 calves. Data included 13 (for steers) and 8 (for heifers) body weight measurements taken approximately every 50 days from birth until slaughter, and four carcass quality traits: hot standard carcass weight, rump fat depth, rib eye muscle area, and intramuscular fat content. The mixed model included fixed effects of sex, sire breed, age (linear, quadratic and cubic), and the interactions of sex and sire breed with age. Random effects were sire, dam, management (birth location, year, post-weaning groups), and permanent environmental effects, and their interactions with linear, quadratic and cubic growth, where possible. Phenotypic, sire and dam correlations between body weights and both hot standard carcass weight and rib eye muscle area were positive and moderate to high from birth to the feedlot period. Management variation accounted for the largest proportion of total variation in both growth and carcass traits. Management correlations between carcass traits were high, except between rump fat depth and intramuscular fat (r = 0.26). Management correlations between body weight and carcass traits during the pre-weaning period were positive, except for intramuscular fat. The correlations were low from birth to weaning, then increased dramatically and were high during the feedlot period.

  4. The Effects of Single and Compound Violations of Data Set Assumptions when Using the Oneway, Fixed Effects Analysis of Variance and the One Concomitant Analysis of Covariance Statistical Models.

    ERIC Educational Resources Information Center

    Johnson, Colleen Cook

    This study integrates into one comprehensive Monte Carlo simulation a vast array of previously defined and substantively interrelated research studies of the robustness of analysis of variance (ANOVA) and analysis of covariance (ANCOVA) statistical procedures. Three sets of balanced ANOVA and ANCOVA designs (group sizes of 15, 30, and 45) and one…

  5. A univariate analysis of variance design for multiple-choice feeding-preference experiments: A hypothetical example with fruit-eating birds

    NASA Astrophysics Data System (ADS)

    Larrinaga, Asier R.

    2010-01-01

    I consider statistical problems in the analysis of multiple-choice food-preference experiments, and propose a univariate analysis of variance design for experiments of this type. I present an example experimental design, for a hypothetical comparison of fruit colour preferences between two frugivorous bird species. In each fictitious trial, four trays each containing a known weight of artificial fruits (red, blue, black, or green) are introduced into the cage, while four equivalent trays are left outside the cage, to control for tray weight loss due to other factors (notably desiccation). The proposed univariate approach allows data from such designs to be analysed with adequate power and no major violations of statistical assumptions. Nevertheless, there is no single "best" approach for experiments of this type: the best analysis in each case will depend on the particular aims and nature of the experiments.

  6. The emergence of modern statistics in agricultural science: analysis of variance, experimental design and the reshaping of research at Rothamsted Experimental Station, 1919-1933.

    PubMed

    Parolini, Giuditta

    2015-01-01

    During the twentieth century statistical methods have transformed research in the experimental and social sciences. Qualitative evidence has largely been replaced by quantitative results and the tools of statistical inference have helped foster a new ideal of objectivity in scientific knowledge. The paper will investigate this transformation by considering the genesis of analysis of variance and experimental design, statistical methods nowadays taught in every elementary course of statistics for the experimental and social sciences. These methods were developed by the mathematician and geneticist R. A. Fisher during the 1920s, while he was working at Rothamsted Experimental Station, where agricultural research was in turn reshaped by Fisher's methods. Analysis of variance and experimental design required new practices and instruments in field and laboratory research, and imposed a redistribution of expertise among statisticians, experimental scientists and the farm staff. On the other hand the use of statistical methods in agricultural science called for a systematization of information management and made computing an activity integral to the experimental research done at Rothamsted, permanently integrating the statisticians' tools and expertise into the station research programme. Fisher's statistical methods did not remain confined within agricultural research and by the end of the 1950s they had come to stay in psychology, sociology, education, chemistry, medicine, engineering, economics, quality control, just to mention a few of the disciplines which adopted them.

  7. Pragmatics: The State of the Art: An Online Interview with Keith Allan

    ERIC Educational Resources Information Center

    Allan, Keith; Salmani Nodoushan, Mohammad Ali

    2015-01-01

    This interview was conducted with Professor Keith Allan with the aim of providing a brief but informative summary of the state of the art of pragmatics. In providing answers to the interview questions, Professor Allan begins with a definition of pragmatics as it is practiced today, i.e., the study of the meanings of utterances with attention to…

  8. High-dimensional nested analysis of variance to assess the effect of production season, quality grade and steam pasteurization on the phenolic composition of fermented rooibos herbal tea.

    PubMed

    Stanimirova, I; Kazura, M; de Beer, D; Joubert, E; Schulze, A E; Beelders, T; de Villiers, A; Walczak, B

    2013-10-15

    A nested analysis of variance combined with simultaneous component analysis, ASCA, was proposed to model high-dimensional chromatographic data. The data were obtained from an experiment designed to investigate the effect of production season, quality grade and post-production processing (steam pasteurization) on the phenolic content of the infusion of the popular herbal tea, rooibos, at 'cup-of-tea' strength. Specifically, a four-way analysis of variance where the experimental design involves nesting in two of the three crossed factors was considered. For the purpose of the study, batches of fermented rooibos plant material were sampled from each of four quality grades during three production seasons (2009, 2010 and 2011) and a sub-sample of each batch was steam-pasteurized. The phenolic content of each rooibos infusion was characterized by high performance liquid chromatography (HPLC)-diode array detection (DAD). In contrast to previous studies, the complete HPLC-DAD signals were used in the chemometric analysis in order to take into account the entire phenolic profile. All factors had a significant effect on the phenolic content of a 'cup-of-tea' strength rooibos infusion. In particular, infusions prepared from the grade A (highest quality) samples contained a higher content of almost all phenolic compounds than the lower quality plant material. The variations of the content of isoorientin and orientin in the different quality grade infusions over production seasons are larger than the variations in the content of aspalathin and quercetin-3-O-robinobioside. Ferulic acid can be used as an indicator of the quality of rooibos tea as its content generally decreases with increasing tea quality. Steam pasteurization decreased the content of the majority of phenolic compounds in a 'cup-of-tea' strength rooibos infusion.

  9. Confidence Intervals for the Between-Study Variance in Random Effects Meta-Analysis Using Generalised Cochran Heterogeneity Statistics

    ERIC Educational Resources Information Center

    Jackson, Dan

    2013-01-01

    Statistical inference is problematic in the common situation in meta-analysis where the random effects model is fitted to just a handful of studies. In particular, the asymptotic theory of maximum likelihood provides a poor approximation, and Bayesian methods are sensitive to the prior specification. Hence, less efficient, but easily computed and…

  10. A Confirmatory Analysis of Item Reliability Trends (CAIRT): Differentiating True Score and Error Variance in the Analysis of Item Context Effects

    ERIC Educational Resources Information Center

    Hartig, Johannes; Holzel, Britta; Moosbrugger, Helfried

    2007-01-01

    Numerous studies have shown increasing item reliabilities as an effect of the item position in personality scales. Traditionally, these context effects are analyzed based on item-total correlations. This approach neglects that trends in item reliabilities can be caused either by an increase in true score variance or by a decrease in error…

  11. An alternative method for noise analysis using pixel variance as part of quality control procedures on digital mammography systems

    NASA Astrophysics Data System (ADS)

    Bouwman, R.; Young, K.; Lazzari, B.; Ravaglia, V.; Broeders, M.; van Engen, R.

    2009-11-01

    According to the European Guidelines for quality assured breast cancer screening and diagnosis, noise analysis is one of the measurements that needs to be performed as part of quality control procedures on digital mammography systems. However, the method recommended in the European Guidelines does not discriminate sufficiently between systems with and without additional noise besides quantum noise. This paper presents an alternative and relatively simple method for noise analysis that separates noise into electronic noise, structured noise and quantum noise. Quantum noise needs to be the dominant noise source in clinical images for optimal performance of a digital mammography system, so the amount of electronic and structured noise should be minimal. For several digital mammography systems, the noise was separated into components based on the measured pixel value, the standard deviation (SD) of the image and the detector entrance dose. The results showed that differences between systems exist. Our findings confirm that the proposed method is able to discriminate systems based on their noise performance and to detect possible quality problems. We therefore suggest replacing the current method for noise analysis described in the European Guidelines with the alternative method described in this paper.
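    A common way to separate the three components is to model the pixel variance as a function of detector entrance dose D: electronic noise is dose-independent, quantum noise grows linearly with D, and structured noise grows with D squared. The sketch below is a schematic reconstruction of that idea, not the authors' procedure; the dose values and noise coefficients are hypothetical, and three exactly measured doses are used so the quadratic model can be solved directly (a real analysis would fit many dose levels by least squares):

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    x = [0.0, 0.0, 0.0]
    for i in range(2, -1, -1):
        x[i] = (M[i][3] - sum(M[i][c] * x[c] for c in range(i + 1, 3))) / M[i][i]
    return x

# Noise model: variance(D) = e + q*D + s*D**2
#   e: electronic (dose-independent), q*D: quantum, s*D**2: structured.
doses = [25.0, 100.0, 400.0]          # detector entrance doses (hypothetical units)
e, q, s = 4.0, 0.5, 0.001             # hypothetical noise coefficients
variances = [e + q * D + s * D * D for D in doses]

est = solve3([[1.0, D, D * D] for D in doses], variances)
print(est)   # recovers approximately [4.0, 0.5, 0.001]
```

    A system dominated by quantum noise shows a large linear term relative to the constant and quadratic terms.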

  12. Identification of Analytical Factors Affecting Complex Proteomics Profiles Acquired in a Factorial Design Study with Analysis of Variance: Simultaneous Component Analysis.

    PubMed

    Mitra, Vikram; Govorukhina, Natalia; Zwanenburg, Gooitzen; Hoefsloot, Huub; Westra, Inge; Smilde, Age; Reijmers, Theo; van der Zee, Ate G J; Suits, Frank; Bischoff, Rainer; Horvatovich, Péter

    2016-04-19

    Complex shotgun proteomics peptide profiles obtained in quantitative differential protein expression studies, such as in biomarker discovery, may be affected by multiple experimental factors. These preanalytical factors may affect the measured protein abundances which in turn influence the outcome of the associated statistical analysis and validation. It is therefore important to determine which factors influence the abundance of peptides in a complex proteomics experiment and to identify those peptides that are most influenced by these factors. In the current study we analyzed depleted human serum samples to evaluate experimental factors that may influence the resulting peptide profile such as the residence time in the autosampler at 4 °C, stopping or not stopping the trypsin digestion with acid, the type of blood collection tube, different hemolysis levels, differences in clotting times, the number of freeze-thaw cycles, and different trypsin/protein ratios. To this end we used a two-level fractional factorial design of resolution IV (a 2^(7-3) design). The design required analysis of 16 samples in which the main effects were not confounded by two-factor interactions. Data preprocessing using the Threshold Avoiding Proteomics Pipeline (Suits, F.; Hoekman, B.; Rosenling, T.; Bischoff, R.; Horvatovich, P. Anal. Chem. 2011, 83, 7786-7794, ref 1) produced a data-matrix containing quantitative information on 2,559 peaks. The intensity of the peaks was log-transformed, and peaks having intensities of a low t-test significance (p-value > 0.05) and a low absolute fold ratio (<2) between the two levels of each factor were removed. The remaining peaks were subjected to analysis of variance (ANOVA)-simultaneous component analysis (ASCA). Permutation tests were used to identify which of the preanalytical factors influenced the abundance of the measured peptides most significantly. The most important preanalytical factors affecting peptide intensity were (1) the hemolysis level

  13. A unique type 3 ordinary chondrite containing graphite-magnetite aggregates - Allan Hills A77011

    NASA Technical Reports Server (NTRS)

    Mckinley, S. G.; Scott, E. R. D.; Taylor, G. J.; Keil, K.

    1982-01-01

    ALHA 77011, which is the object of study in the present investigation, is a chondrite of the 1977 meteorite collection from Allan Hills, Antarctica. It contains an opaque and recrystallized silicate matrix (Huss matrix) and numerous aggregates consisting of micron- and submicron-sized graphite and magnetite. It is pointed out that no abundant graphite-magnetite aggregates could be observed in other type 3 ordinary chondrites, except for Sharps. Attention is given to the results of a modal analysis, relations between ALHA 77011 and other type 3 ordinary chondrites, and the association of graphite-magnetite and metallic Fe, Ni. The discovery of graphite-magnetite aggregates in type 3 ordinary chondrites is found to suggest that this material may have been an important component in the formation of ordinary chondrites.

  14. Element distribution and noble gas isotopic abundances in lunar meteorite Allan Hills A81005

    NASA Technical Reports Server (NTRS)

    Kraehenbuehl, U.; Eugster, O.; Niedermann, S.

    1986-01-01

    Antarctic meteorite Allan Hills A81005, an anorthositic breccia, is recognized to be of lunar origin. The noble gases in this meteorite were analyzed and found to be solar-wind-implanted gases whose absolute and relative concentrations are quite similar to those in lunar regolith samples. A sample of this meteorite was obtained for the analysis of the noble gas isotopes, including Kr-81, and for the determination of the elemental abundances. In order to better determine the contribution of the surface-correlated gases, grain size fractions were prepared. The results of the instrumental measurements of the gamma radiation are listed. From the amounts of cosmic-ray-produced noble gases and the respective production rates, the lunar surface residence times were calculated. It was concluded that the lunar surface residence time is about half a billion years.

  15. Dimension reduction in heterogeneous neural networks: Generalized Polynomial Chaos (gPC) and ANalysis-Of-VAriance (ANOVA)

    NASA Astrophysics Data System (ADS)

    Choi, M.; Bertalan, T.; Laing, C. R.; Kevrekidis, I. G.

    2016-09-01

    We propose, and illustrate via a neural network example, two different approaches to coarse-graining large heterogeneous networks. Both approaches are inspired from, and use tools developed in, methods for uncertainty quantification (UQ) in systems with multiple uncertain parameters - in our case, the parameters are heterogeneously distributed on the network nodes. The approach shows promise in accelerating large scale network simulations as well as coarse-grained fixed point, periodic solution computation and stability analysis. We also demonstrate that the approach can successfully deal with structural as well as intrinsic heterogeneities.

  16. Cultural variances in composition of biological and supernatural concepts of death: a content analysis of children's literature.

    PubMed

    Lee, Ji Seong; Kim, Eun Young; Choi, Younyoung; Koo, Ja Hyouk

    2014-01-01

    Children's reasoning about the afterlife emerges naturally as a developmental regularity. Although a biological understanding of death increases in accordance with cognitive development, biological and supernatural explanations of death may coexist in a complementary manner, being deeply imbedded in cultural contexts. This study conducted a content analysis of 40 children's death-themed picture books in Western Europe and East Asia. It can be inferred that causality and non-functionality are highly integrated with the naturalistic and supernatural understanding of death in Western Europe, whereas the literature in East Asia seems to rely on naturalistic aspects of death and focuses on causal explanations.

  17. Seeds of a Soldier: The True Story of Edgar Allan Poe - The Sergeant Major

    DTIC Science & Technology

    2003-01-01

    Army Space Journal, Fall 2003. By Michael L. Howard. Edgar Allan Poe wore U.S. Army sergeant major stripes, using the name Edgar A. Perry.

  18. Analysis of variance, normal quantile-quantile correlation and effective expression support of pooled expression ratio of reference genes for defining expression stability.

    PubMed

    Priyadarshi, Himanshu; Das, Rekha; Kumar, Shivendra; Kishore, Pankaj; Kumar, Sujit

    2017-01-01

    Identification of a reference gene unaffected by the experimental conditions is obligatory for accurate measurement of gene expression through relative quantification. Most existing methods directly analyze variability in crossing point (Cp) values of reference genes and fail to account for template-independent factors that affect Cp values in their estimates. We describe the use of three simple statistical methods, namely analysis of variance (ANOVA), normal quantile-quantile correlation (NQQC) and effective expression support (EES), on pooled expression ratios of the reference genes in a panel to overcome this issue. Pooling expression ratios across the genes in the panel nullifies the sample-specific effects that uniformly affect all genes and are falsely reflected as instability. Our methods also offer the flexibility to include sample-specific PCR efficiencies in the estimates, when available, for improved accuracy. Additionally, we describe a correction factor from the ANOVA method to correct the relative fold change of a target gene if no truly stable reference gene can be found in the analyzed panel. The analysis is described on a synthetic data set to simplify the explanation of the statistical treatment of the data.

  19. Regarding to the Variance Analysis of Regression Equation of the Surface Roughness obtained by End Milling process of 7136 Aluminium Alloy

    NASA Astrophysics Data System (ADS)

    POP, A. B.; ȚÎȚU, M. A.

    2016-11-01

    In the metal cutting process, surface quality is intrinsically related to the cutting parameters and to the cutting tool geometry, and metal cutting processes are closely related to machining costs. The purpose of this paper is to reduce manufacturing costs and processing time. A study was made based on mathematical modelling of the arithmetic mean roughness (Ra) resulting from the end milling of 7136 aluminium alloy, as a function of the cutting process parameters. The novel element of this paper is the choice of the 7136 aluminium alloy for the experiments, a material developed and patented by Universal Alloy Corporation. This aluminium alloy is used in the aircraft industry to make parts from extruded profiles, and it has not previously been studied in the proposed research direction. Based on this research, a mathematical model of the surface roughness Ra was established as a function of the cutting parameters over a set experimental field. A regression analysis was performed, which identified the quantitative relationships between the cutting parameters and the surface roughness. Using analysis of variance (ANOVA), the degree of confidence in the results given by the regression equation was determined, along with the suitability of this equation at every point of the experimental field.

  20. Chemical characterization of a unique chondrite - Allan Hills 85085

    NASA Technical Reports Server (NTRS)

    Gosselin, David C.; Laul, J. C.

    1990-01-01

    Allan Hills 85085 is a new and very important addition to the growing list of unique carbonaceous chondrites because of its distinctive chemical and mineralogical properties. This chemical study provides more precise data on the major, minor, and trace element characteristics of ALH85085. ALH85085 has compositional, petrological, and isotopic affinities to Al Rais and Renazzo, and to Bencubbin-Weatherford. The similarities to Al Rais and Renazzo suggest similar formation locations and thermal processing, possibly in the vicinity of CI chondrites. Petrologic, compositional and isotopic studies indicate that the components that control the abundance of the various refractory and volatile elements were not allowed to equilibrate with the nebula as conditions changed, explaining the inconsistencies in the classification of these meteorites using known taxonomic parameters.

  1. A new kind of primitive chondrite, Allan Hills 85085

    NASA Technical Reports Server (NTRS)

    Scott, Edward R. D.

    1988-01-01

    Allan Hills (ALH) 85085, a chemically and mineralogically unique chondrite whose components have suffered little metamorphism or alteration, is discussed. It is found that ALH 85085 has 4 wt pct chondrules (mean diameter 16 microns), 36 wt pct Fe,Ni metal, 56 wt pct lithic and mineral silicate fragments, and 2 wt pct troilite. It is suggested that, with the exception of matrix lumps, the components of ALH 85085 formed and accreted in the solar nebula. It is shown that ALH 85085 does not belong to any of the nine chondrite groups and is very different from Kakangari. Similarities between ALH 85085 and Bencubbin and Weatherford suggest that the latter two primitive meteorites may be chondrites with high metal abundances and very large, partly fragmented chondrules.

  2. Seizures in the life and works of Edgar Allan Poe.

    PubMed

    Bazil, C W

    1999-06-01

    Edgar Allan Poe, one of the most celebrated of American storytellers, lived through and wrote descriptions of episodic unconsciousness, confusion, and paranoia. These symptoms have been attributed to alcohol or drug abuse but also could represent complex partial seizures, prolonged postictal states, or postictal psychosis. Complex partial seizures were not well described in Poe's time, which could explain a misdiagnosis. Alternatively, he may have suffered from complex partial epilepsy that was complicated or caused by substance abuse. Even today, persons who have epilepsy are mistaken for substance abusers and occasionally are arrested during postictal confusional states. Poe was able to use creative genius and experiences from illness to create memorable tales and poignant poems.

  3. Petrogenetic relationship between Allan Hills 77005 and other achondrites

    NASA Technical Reports Server (NTRS)

    Mcsween, H. Y., Jr.; Taylor, L. A.; Stolper, E. M.; Muntean, R. A.; Okelley, G. D.; Eldridge, J. S.; Biswas, S.; Ngo, H. T.; Lipschutz, M. E.

    1979-01-01

    The paper presents chemical and petrologic data on the Allan Hills (ALHA) 77005 achondrite from Antarctica and explores its petrogenetic relationship with the shergottites. Petrologic similarities to the latter in mineralogy, oxidation state, inferred source region composition, and shock ages suggest a genetic relationship, also indicated by volatile to involatile element ratios and abundances of other trace elements. ALHA 77005 may be a cumulate crystallized from a liquid parental to the materials from which the shergottites crystallized, or a sample of the peridotite from which the shergottite parent liquids were derived. Chemical similarities with terrestrial ultramafic rocks suggest that it provides an additional sample of the only other solar system body with basalt source regions chemically similar to the Earth's upper mantle.

  4. Cosmology without cosmic variance

    SciTech Connect

    Bernstein, Gary M.; Cai, Yan -Chuan

    2011-10-01

    The growth of structures in the Universe is described by a function G that is predicted by the combination of the expansion history of the Universe and the laws of gravity within it. We examine the improvements in constraints on G that are available from the combination of a large-scale galaxy redshift survey with a weak gravitational lensing survey of background sources. We describe a new combination of such observations that in principle yields a measure of the growth rate that is free of sample variance, i.e. the uncertainty in G can be reduced without bound by increasing the number of redshifts obtained within a finite survey volume. The addition of background weak lensing data to a redshift survey increases information on G by an amount equivalent to a 10-fold increase in the volume of a standard redshift-space distortion measurement - if the lensing signal can be measured to sub-per cent accuracy. This argues that a combined lensing and redshift survey over a common low-redshift volume of the Universe is a more powerful test of general relativity than an isolated redshift survey over larger volume at high redshift, especially as surveys begin to cover most of the available sky.

  5. Cosmology without cosmic variance

    DOE PAGES

    Bernstein, Gary M.; Cai, Yan -Chuan

    2011-10-01

    The growth of structures in the Universe is described by a function G that is predicted by the combination of the expansion history of the Universe and the laws of gravity within it. We examine the improvements in constraints on G that are available from the combination of a large-scale galaxy redshift survey with a weak gravitational lensing survey of background sources. We describe a new combination of such observations that in principle yields a measure of the growth rate that is free of sample variance, i.e. the uncertainty in G can be reduced without bound by increasing the number of redshifts obtained within a finite survey volume. The addition of background weak lensing data to a redshift survey increases information on G by an amount equivalent to a 10-fold increase in the volume of a standard redshift-space distortion measurement - if the lensing signal can be measured to sub-per cent accuracy. This argues that a combined lensing and redshift survey over a common low-redshift volume of the Universe is a more powerful test of general relativity than an isolated redshift survey over larger volume at high redshift, especially as surveys begin to cover most of the available sky.

  6. The History of Allan Hills 84001 Revised: Multiple Shock Events

    NASA Technical Reports Server (NTRS)

    Treiman, Allan H.

    1998-01-01

    The geologic history of Martian meteorite Allan Hills (ALH) 84001 is more complex than previously recognized, with evidence for four or five crater-forming impacts onto Mars. This history of repeated deformation and shock metamorphism appears to weaken some arguments that have been offered for and against the hypothesis of ancient Martian life in ALH 84001. Allan Hills 84001 formed originally from basaltic magma. Its first impact event (I1) is inferred from the deformation (D1) that produced the granular-textured bands ("crush zones") that transect the original igneous fabric. Deformation D1 is characterized by intense shear and may represent excavation or rebound flow of rock beneath a large impact crater. An intense thermal metamorphism followed D1 and may be related to it. The next impact (I2) produced fractures (Fr2) in which carbonate "pancakes" were deposited and produced feldspathic glass from some of the igneous feldspars and silica. After I2, carbonate pancakes and globules were deposited in Fr2 fractures and replaced feldspathic glass and possibly crystalline silicates. Next, feldspars, feldspathic glass, and possibly some carbonates were mobilized and melted in the third impact (I3). Microfaulting, intense fracturing, and shear are also associated with I3. In the fourth impact (I4), the rock was fractured and deformed without significant heating, which permitted remnant magnetization directions to vary across fracture surfaces. Finally, ALH 84001 was ejected from Mars in event I5, which could be identical to I4. This history of multiple impacts is consistent with the photogeology of the Martian highlands and may help resolve some apparent contradictions among recent results on ALH 84001. For example, the submicron rounded magnetite grains in the carbonate globules could be contemporaneous with carbonate deposition, whereas the elongate magnetite grains, epitaxial on carbonates, could be ascribed to vapor-phase deposition during I3.

  7. Exploring Thermal Signatures in the Experimentally Heated CM Carbonaceous Chondrite Allan Hills 83100

    NASA Astrophysics Data System (ADS)

    Lindgren, P.; Andreassen, A. M.; Lee, M. R.; Sparkes, R.; Greenwood, R. C.; Franchi, I. A.

    2016-08-01

    We have examined the effect of experimental heating on water/hydroxyl content, carbon structure, O-isotope signature, and sulphide and carbonate microstructure in the highly aqueously altered meteorite Allan Hills 83100.

  8. Closed-loop stability and performance analysis of least-squares and minimum-variance control algorithms for multiconjugate adaptive optics.

    PubMed

    Gilles, Luc

    2005-02-20

    Recent progress has been made to compute efficiently the open-loop minimum-variance reconstructor (MVR) for multiconjugate adaptive optics systems by a combination of sparse matrix and iterative techniques. Using spectral analysis, I show that a closed-loop laser guide star multiconjugate adaptive optics control algorithm consisting of MVR cascaded with an integrator control law is unstable. To solve this problem, a computationally efficient pseudo-open-loop control (POLC) method was recently proposed. I give a theoretical proof of the stability of this method and demonstrate its superior performance and robustness against misregistration errors compared with conventional least-squares control. This can be accounted for by the fact that POLC incorporates turbulence statistics through its regularization term that can be interpreted as spatial filtering, yielding increased robustness to misregistration. For the Gemini-South 8-m telescope multiconjugate system and for median Cerro Pachon seeing, the performance of POLC in terms of rms wave-front error averaged over a 1-arc min field of view is approximately three times superior to that of a least-squares reconstructor. Performance degradation due to 30% translational misregistration on all three mirrors is approximately a 30% increased rms wave-front error, whereas a least-squares reconstructor is unstable at such a misregistration level.

  9. Testing Interaction Effects without Discarding Variance.

    ERIC Educational Resources Information Center

    Lopez, Kay A.

    Analysis of variance (ANOVA) and multiple regression are two of the most commonly used methods of data analysis in behavioral science research. Although ANOVA was intended for use with experimental designs, educational researchers have used ANOVA extensively in aptitude-treatment interaction (ATI) research. This practice tends to make researchers…

  10. Allan-Herndon-Dudley syndrome with unusual profound sensorineural hearing loss.

    PubMed

    Gagliardi, Lucia; Nataren, Nathalie; Feng, Jinghua; Schreiber, Andreas W; Hahn, Christopher N; Conwell, Louise S; Coman, David; Scott, Hamish S

    2015-08-01

    The Allan-Herndon-Dudley syndrome is caused by mutations in the thyroid hormone transporter, Monocarboxylate transporter 8 (MCT8). It is characterized by profound intellectual disability and abnormal thyroid function. We report on a patient with Allan-Herndon-Dudley syndrome (AHDS) with profound sensorineural hearing loss which is not usually a feature of AHDS and which may have been due to a coexisting nonsense mutation in Microphthalmia-associated transcription factor (MITF).

  11. Validating the unequal-variance assumption in recognition memory using response time distributions instead of ROC functions: A diffusion model analysis

    PubMed Central

    Starns, Jeffrey J.; Ratcliff, Roger

    2013-01-01

    Recognition memory z-transformed Receiver Operating Characteristic (zROC) functions have a slope less than 1. One way to accommodate this finding is to assume that memory evidence is more variable for studied (old) items than non-studied (new) items. This assumption has been implemented in signal detection models, but this approach cannot accommodate the time course of decision making. We tested the unequal-variance assumption by fitting the diffusion model to accuracy and response time (RT) distributions from nine old/new recognition data sets comprising previously-published data from 376 participants. The η parameter in the diffusion model measures between-trial variability in evidence based on accuracy and the RT distributions for correct and error responses. In fits to nine data sets, η estimates were higher for targets than lures in all cases, and fitting results rejected an equal-variance version of the model in favor of an unequal-variance version. Parameter recovery simulations showed that the variability differences were not produced by biased estimation of the η parameter. Estimates of the other model parameters were largely consistent between the equal- and unequal-variance versions of the model. Our results provide independent support for the unequal-variance assumption without using zROC data. PMID:24459327

  12. Sampling Errors of Variance Components.

    ERIC Educational Resources Information Center

    Sanders, Piet F.

    A study on sampling errors of variance components was conducted within the framework of generalizability theory by P. L. Smith (1978). The study used an intuitive approach for solving the problem of how to allocate the number of conditions to different facets in order to produce the most stable estimate of the universe score variance. Optimization…

  13. [Cointegration test and variance decomposition for the relationship between economy and environment based on material flow analysis in Tangshan City Hebei China].

    PubMed

    2015-12-01

    The material flow account of Tangshan City was established by the material flow analysis (MFA) method to analyze the periodical characteristics of material input and output in the operation of the economy-environment system, and the impact of material input and output intensities on economic development. Using an econometric model, the long-term interaction mechanism and relationship among the indexes of gross domestic product (GDP), direct material input (DMI), and domestic processed output (DPO) were investigated after a unit root hypothesis test, Johansen cointegration test, vector error correction model, impulse response function, and variance decomposition. The results showed that during 1992-2011, DMI and DPO both increased, and the growth rate of DMI was higher than that of DPO. The input intensity of DMI increased, while the intensity of DPO fell in volatility. A long-term stable cointegration relationship existed between GDP, DMI and DPO. Their interaction relationship showed a trend from fluctuation to gradual steadiness. DMI and DPO had strong, positive impacts on economic development in the short term, but the economy-environment system gradually weakened these effects by short-term dynamic adjustment of indicators inside and outside of the system. Ultimately, the system showed a long-term equilibrium relationship. The effect of economic scale on the economy was gradually increasing. After decomposing the contribution of each index to GDP, it was found that DMI's contribution grew, GDP's contribution declined, and DPO's contribution changed little. On the whole, the economic development of Tangshan City has followed the traditional production path of a resource-based city, mostly depending on material input, which caused high energy consumption and serious environmental pollution.
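    The first (Engle-Granger) step of such a cointegration test can be sketched on toy data; the two series below merely stand in for indicators like GDP and DMI, and a real analysis would compare the residual statistic against ADF critical values rather than eyeballing the AR(1) coefficient:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy series: x is a random walk (a unit-root process) and y tracks it with
# stationary noise, so the pair is cointegrated by construction.
T = 500
x = np.cumsum(rng.normal(size=T))
y = 2.0 * x + rng.normal(size=T)

# Step 1: OLS of y on x, then inspect the residuals.
X = np.column_stack([np.ones(T), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ b

# Step 2 (informal): fit resid_t = rho * resid_{t-1} + e_t.  A rho well
# below 1 means the residuals mean-revert, consistent with cointegration.
rho = (resid[:-1] @ resid[1:]) / (resid[:-1] @ resid[:-1])
print(f"slope = {b[1]:.2f}, residual AR(1) coefficient = {rho:.2f}")
```

The Johansen procedure used in the abstract generalizes this idea to systems of more than two variables.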

  14. Formation constants of copper(II) complexes with tripeptides containing Glu, Gly, and His: potentiometric measurements and modeling by generalized multiplicative analysis of variance.

    PubMed

    Khoury, Rima Raffoul; Sutton, Gordon J; Ebrahimi, Diako; Hibbert, D Brynn

    2014-02-03

    We report a systematic study of the effects of types and positions of amino acid residues of tripeptides on the formation constants logβ, acid dissociation constants pKa, and the copper coordination modes of the copper(II) complexes with 27 tripeptides formed from the amino acids glutamic acid, glycine, and histidine. logβ values were calculated from pH titrations with 1 mmol L(-1):1 mmol L(-1) solutions of the metal and ligand and previously reported ligand pKa values. Generalized multiplicative analysis of variance (GEMANOVA) was used to model the logβ values of the saturated, most protonated, monoprotonated, logβ(CuL) - logβ(HL), and pKa of the amide group. The resulting model of the saturated copper species has a two-term model describing an interaction between the central and the C-terminal residues plus a smaller, main effect of the N-terminal residue. The model supports the conclusion that two copper coordination modes exist depending on the absence or presence of His at the central position, giving species in which copper is coordinated via two or three fused chelate rings, respectively. The GEMANOVA model for pKamide, which is the same as that for the saturated complex, showed that Gly-Gly-His has the lowest pKamide value among the 27 tripeptides. Visible spectroscopy indicated the formation of metal-ligand dimers for tripeptides His-His-Gly and His-His-Glu, but not for His-His-His, and the formation of multiple ligand bis complexes CuL2 and Cu(HL)2 for tripeptides (Glu/Gly)-His-(Glu/Gly) and His-(Glu/Gly)-(Glu/Gly), respectively.

  15. Variance Decomposition Using an IRT Measurement Model

    PubMed Central

    Glas, Cees A. W.; Boomsma, Dorret I.

    2007-01-01

    Large scale research projects in behaviour genetics and genetic epidemiology are often based on questionnaire or interview data. Typically, a number of items is presented to a number of subjects, the subjects’ sum scores on the items are computed, and the variance of sum scores is decomposed into a number of variance components. This paper discusses several disadvantages of the approach of analysing sum scores, such as the attenuation of correlations amongst sum scores due to their unreliability. It is shown that the framework of Item Response Theory (IRT) offers a solution to most of these problems. We argue that an IRT approach in combination with Markov chain Monte Carlo (MCMC) estimation provides a flexible and efficient framework for modelling behavioural phenotypes. Next, we use data simulation to illustrate the potentially huge bias in estimating variance components on the basis of sum scores. We then apply the IRT approach with an analysis of attention problems in young adult twins where the variance decomposition model is extended with an IRT measurement model. We show that when estimating an IRT measurement model and a variance decomposition model simultaneously, the estimate for the heritability of attention problems increases from 40% (based on sum scores) to 73%. PMID:17534709
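    The attenuation effect that motivates the IRT approach can be illustrated numerically: with Spearman's classical correction, an observed correlation between unreliable sum scores understates the latent correlation by the reliability factor. The reliabilities, latent correlation, and sample size below are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two traits correlated 0.8 at the latent level; each is measured by an
# unreliable sum score with reliability 0.6.  All values are illustrative.
n = 20000
t1 = rng.normal(size=n)
t2 = 0.8 * t1 + np.sqrt(1 - 0.8**2) * rng.normal(size=n)
rel = 0.6
s1 = np.sqrt(rel) * t1 + np.sqrt(1 - rel) * rng.normal(size=n)
s2 = np.sqrt(rel) * t2 + np.sqrt(1 - rel) * rng.normal(size=n)

r_true = np.corrcoef(t1, t2)[0, 1]
r_obs = np.corrcoef(s1, s2)[0, 1]
# Spearman's correction for attenuation: r_obs / sqrt(rel1 * rel2)
r_corrected = r_obs / rel
print(f"latent r = {r_true:.2f}, sum-score r = {r_obs:.2f}, "
      f"corrected = {r_corrected:.2f}")
```

An IRT measurement model addresses the same bias within the variance decomposition itself rather than by a post-hoc correction.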

  16. Classification of the Allan Hills A77307 meteorite

    NASA Technical Reports Server (NTRS)

    Sears, D. W. G.; Ross, M.

    1983-01-01

    Thermoluminescence (TL) measurements on Allan Hills A77307 (AH), a carbonaceous chondrite found in Antarctica, are compared with those on other chondrites and applied to its classification. Two lithologically different 250-mg samples were ground, freed of magnetic material, and ground again to pass a 100-micron sieve. Aliquots of 4 mg were heated to 500 C, exposed to beta radiation from a Sr-90 source, and heated at a rate of 7.3 C/sec in N2. TL was measured with a photomultiplier tube fitted with thermal and blue filters. Glow curves for AH and for seven other, established CO-type chondrites are presented, all exhibiting two major peaks of TL sensitivity. The peaks for the seven CO-type chondrites are found at 91 + or - 7 C and at 203 + or - 11 C; those for AH at 170 + or - 17 C and at approximately 250 C. This difference is considered significant and not due to random fluctuation or atypical sampling. From this comparison and from consideration of the weathering, preterrestrial-alteration, petrological and compositional evidence on AH, it is concluded that AH is a unique chondrite, possessing both similarities to and differences from the CO class.

  17. Allan V. Cox: AGU President 1978–1980

    NASA Astrophysics Data System (ADS)

    Richman, Barbara T.

    When Allan V. Cox was presented AGU's John Adam Fleming Medal in 1969, John Verhoogen described Cox's work as “characterized by painstaking care, proper attention to and use of statistics, and great insight.” Those same thoughts were echoed on February 3, 1987, during the memorial service for Cox, who died in a bicycling accident on January 27. The Stanford Memorial Church was crowded with colleagues, students, and friends.The Fleming Medal was presented to Cox in recognition of his studies on the fluctuation of the geomagnetic field. These studies helped to confirm theories of continental drift and seafloor spreading. The medal is awarded annually by AGU for original research and technical leadership in geomagnetism, atmospheric electricity, aeronomy, and related sciences. In addition to the Fleming Medal, Cox received the Antarctic Service Medal in 1970, the Vetlesen Prize in 1971, and the Arthur L. Day Prize of the National Academy of Sciences in 1984. He was a Fellow of AGU and a member of the National Academy of Sciences.

  18. Analysis of latent variance reduction methods in phase space Monte Carlo calculations for 6, 10 and 18 MV photons by using MCNP code

    NASA Astrophysics Data System (ADS)

    Ezzati, A. O.; Sohrabpour, M.

    2013-02-01

    In this study, azimuthal particle redistribution (APR) and azimuthal particle rotational splitting (APRS) methods are implemented in the MCNPX2.4 source code. First, the efficiency of these methods was compared for two tallying methods. The APRS is more efficient than the APR method in track length estimator tallies; however, in the energy deposition tally, both methods have nearly the same efficiency. Latent variance reduction factors were obtained for 6, 10 and 18 MV photons as well. The APRS relative efficiency contours were obtained. These contours reveal that, as photon energy increases, the contour depth and the surrounding areas also increase. The relative efficiency contours indicated that the variance reduction factor is position and energy dependent. The out-of-field voxel relative efficiency contours showed that latent variance reduction methods increased the Monte Carlo (MC) simulation efficiency in the out-of-field voxels. The APR and APRS average variance reduction factors differed by less than 0.6% for a splitting number of 1000.

  19. Metal with anomalously low Ni and Ge concentrations in the Allan Hills A77081 winonaite

    NASA Technical Reports Server (NTRS)

    Kracher, Alfred

    1988-01-01

    The Ge content of metal in the Allan Hills A77081 winonaite was determined by high-sensitivity electron microprobe analysis. By optimizing analytical conditions for Ge determination, a detection limit of about 75 ppm could be achieved. In A77081 some small kamacite grains contain less Ni and Ge and more Co than coarse-grained metal. These small grains are always associated with sulfide, raising the possibility that anomalous metal is related to eutectic melting. However, when published partition coefficients for Ni and Ge in the Fe-Ni-S system are used to model fractionation of these elements during eutectic melting, one finds that secondary metal should be enriched in Ni and depleted in Ge. Thus, the positive Ni-Ge correlation found in this study is the opposite of the expected trend. No explanation for this discrepancy has yet been found. Nonetheless, the existence of anomalous metal is an indication that A77081, and probably other winonaites as well, have undergone some fractionation. This supports the notion that the high-temperature history of winonaites is related to the formation of IAB iron meteorites, whose silicate inclusions are very similar to winonaites.

  20. THE DEAD-LIVING-MOTHER: MARIE BONAPARTE'S INTERPRETATION OF EDGAR ALLAN POE'S SHORT STORIES.

    PubMed

    Obaid, Francisco Pizarro

    2016-06-01

    Princess Marie Bonaparte is an important figure in the history of psychoanalysis, remembered for her crucial role in arranging Freud's escape to safety in London from Nazi Vienna, in 1938. This paper connects us to Bonaparte's work on Poe's short stories. Founded on concepts of Freudian theory and an exhaustive review of the biographical facts, Marie Bonaparte concluded that the works of Edgar Allan Poe drew their most powerful inspirational force from the psychological consequences of the early death of the poet's mother. In Bonaparte's approach, which was powerfully influenced by her recognition of the impact of the death of her own mother when she was born-an understanding she gained in her analysis with Freud-the thesis of the dead-living-mother achieved the status of a paradigmatic key to analyze and understand Poe's literary legacy. This paper explores the background and support of this hypothesis and reviews Bonaparte's interpretation of Poe's most notable short stories, in which extraordinary female figures feature in the narrative.

  1. Novel SLC16A2 mutations in patients with Allan-Herndon-Dudley syndrome

    PubMed Central

    Shimojima, Keiko; Maruyama, Koichi; Kikuchi, Masahiro; Imai, Ayako; Inoue, Ken; Yamamoto, Toshiyuki

    2016-01-01

    Allan-Herndon-Dudley syndrome (AHDS) is an X-linked disorder caused by an impaired thyroid hormone transporter. Patients with AHDS usually exhibit severe motor developmental delay, delayed myelination of the brain white matter, and elevated T3 levels in thyroid tests. Neurological examination of two patients with neurodevelopmental delay revealed generalized hypotonia, and not paresis, as the main neurological finding. Nystagmus and dyskinesia were not observed. Brain magnetic resonance imaging demonstrated delayed myelination in early childhood in both patients. Nevertheless, matured myelination was observed at 6 years of age in one patient. Although the key finding for AHDS is elevated free T3, one of the patients showed a normal T3 level in childhood, misleading the diagnosis of AHDS. Genetic analysis revealed two novel SLC16A2 mutations, p.(Gly122Val) and p.(Gly221Ser), confirming the AHDS diagnosis. These results indicate that AHDS diagnosis is sometimes challenging owing to clinical variability among patients. PMID:27672545

  2. Noise variance analysis using a flat panel x-ray detector: A method for additive noise assessment with application to breast CT applications

    PubMed Central

    Yang, Kai; Huang, Shih-Ying; Packard, Nathan J.; Boone, John M.

    2010-01-01

    Purpose: A simplified linear model approach was proposed to accurately model the response of a flat panel detector used for breast CT (bCT). Methods: Individual detector pixel mean and variance were measured from bCT projection images acquired both in air and with a polyethylene cylinder, with the detector operating in both fixed low gain and dynamic gain mode. Once the coefficients of the linear model are determined, the fractional additive noise can be used as a quantitative metric to evaluate the system's efficiency in utilizing x-ray photons, including the performance of different gain modes of the detector. Results: Fractional additive noise increases as the object thickness increases or as the radiation dose to the detector decreases. For bCT scan techniques on the UC Davis prototype scanner (80 kVp, 500 views total, 30 frames/s), in the low gain mode, additive noise contributes 21% of the total pixel noise variance for a 10 cm object and 44% for a 17 cm object. With the dynamic gain mode, additive noise only represents approximately 2.6% of the total pixel noise variance for a 10 cm object and 7.3% for a 17 cm object. Conclusions: The existence of the signal-independent additive noise is the primary cause for a quadratic relationship between bCT noise variance and the inverse of radiation dose at the detector. With the knowledge of the additive noise contribution to experimentally acquired images, system modifications can be made to reduce the impact of additive noise and improve the quantum noise efficiency of the bCT system. PMID:20831059
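    The simplified linear model can be sketched as a fit of pixel variance against mean signal; the additive-variance and gain values below are invented for illustration, not the detector's actual calibration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated calibration: at each exposure level the pixel variance is an
# additive (signal-independent) term plus a quantum term proportional to
# the mean signal.  sigma2_add and k are invented values.
sigma2_add, k = 50.0, 0.8
mean_signal = np.linspace(100, 4000, 12)
var_meas = sigma2_add + k * mean_signal + rng.normal(0, 5, mean_signal.size)

# Fit the linear model var = a + b * mean; 'a' estimates the additive noise.
b, a = np.polyfit(mean_signal, var_meas, 1)

# Fractional additive noise at each signal level: largest at low dose.
frac = a / (a + b * mean_signal)
print(f"additive variance ~ {a:.1f}, fraction at lowest signal = {frac[0]:.2f}")
```

The fitted intercept plays the role of the signal-independent additive noise discussed above, and the declining `frac` curve reproduces the observation that additive noise matters most at low dose to the detector.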

  3. Noise variance analysis using a flat panel x-ray detector: A method for additive noise assessment with application to breast CT applications

    SciTech Connect

    Yang Kai; Huang, Shih-Ying; Packard, Nathan J.; Boone, John M.

    2010-07-15

    Purpose: A simplified linear model approach was proposed to accurately model the response of a flat panel detector used for breast CT (bCT). Methods: Individual detector pixel mean and variance were measured from bCT projection images acquired both in air and with a polyethylene cylinder, with the detector operating in both fixed low gain and dynamic gain mode. Once the coefficients of the linear model are determined, the fractional additive noise can be used as a quantitative metric to evaluate the system's efficiency in utilizing x-ray photons, including the performance of different gain modes of the detector. Results: Fractional additive noise increases as the object thickness increases or as the radiation dose to the detector decreases. For bCT scan techniques on the UC Davis prototype scanner (80 kVp, 500 views total, 30 frames/s), in the low gain mode, additive noise contributes 21% of the total pixel noise variance for a 10 cm object and 44% for a 17 cm object. With the dynamic gain mode, additive noise only represents approximately 2.6% of the total pixel noise variance for a 10 cm object and 7.3% for a 17 cm object. Conclusions: The existence of the signal-independent additive noise is the primary cause for a quadratic relationship between bCT noise variance and the inverse of radiation dose at the detector. With the knowledge of the additive noise contribution to experimentally acquired images, system modifications can be made to reduce the impact of additive noise and improve the quantum noise efficiency of the bCT system.

  4. Assessment of analysis-of-variance-based methods to quantify the random variations of observers in medical imaging measurements: guidelines to the investigator.

    PubMed

    Zeggelink, William F A Klein; Hart, Augustinus A M; Gilhuijs, Kenneth G A

    2004-07-01

    The random variations of observers in medical imaging measurements negatively affect the outcome of cancer treatment, and should be taken into account during treatment by the application of safety margins that are derived from estimates of the random variations. Analysis-of-variance- (ANOVA-) based methods are the most preferable techniques to assess the true individual random variations of observers, but the number of observers and the number of cases must be taken into account to achieve meaningful results. Our aim in this study is twofold. First, to evaluate three representative ANOVA-based methods for typical numbers of observers and typical numbers of cases. Second, to establish guidelines to the investigator to determine which method, how many observers, and which number of cases are required to obtain the a priori chosen performance. The ANOVA-based methods evaluated in this study are an established technique (pairwise differences method: PWD), a new approach providing additional statistics (residuals method: RES), and a generic technique that uses restricted maximum likelihood (REML) estimation. Monte Carlo simulations were performed to assess the performance of the ANOVA-based methods, which is expressed by their accuracy (closeness of the estimates to the truth), their precision (standard error of the estimates), and the reliability of their statistical test for the significance of a difference in the random variation of an observer between two groups of cases. The highest accuracy is achieved using REML estimation, but for datasets of at least 50 cases or arrangements with 6 or more observers, the differences between the methods are negligible, with deviations from the truth well below +/-3%. 
For datasets up to 100 cases, it is most beneficial to increase the number of cases to improve the precision of the estimated random variations, whereas for datasets over 100 cases, an improvement in precision is most efficiently achieved by increasing the number of
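The pairwise-differences (PWD) idea referred to above can be sketched for the simplest repeated-measurement design. The function name and the two-session setup are illustrative assumptions, not the paper's implementation: subtracting two measurement sessions by the same observer on the same cases cancels the case effect, leaving twice the observer's random variance.

```python
import numpy as np

def pwd_random_variation_sd(session1, session2):
    """Pairwise-differences (PWD) estimate of one observer's random variation.

    Each session holds one measurement per case by the same observer.
    The per-case difference cancels the case effect, so
    Var(diff) = 2 * sigma_random**2 and sigma_random = SD(diff) / sqrt(2).
    """
    d = np.asarray(session1, dtype=float) - np.asarray(session2, dtype=float)
    return float(np.std(d, ddof=1) / np.sqrt(2.0))
```

On simulated data with a known random-variation SD, the estimate converges to the truth as the number of cases grows, which is the precision behavior the study quantifies.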

  5. Estimation of velocity uncertainties from GPS time series: Examples from the analysis of the South African TrigNet network

    NASA Astrophysics Data System (ADS)

    Hackl, M.; Malservisi, R.; Hugentobler, U.; Wonnacott, R.

    2011-11-01

    We present a method to derive velocity uncertainties from GPS position time series that are affected by time-correlated noise. This method is based on the Allan variance, which is widely used in the estimation of oscillator stability and requires neither spectral analysis nor maximum likelihood estimation (MLE). The Allan variance of the rate (AVR) is calculated in the time domain and hence is not too sensitive to gaps in the time series. We derived analytical expressions of the AVR for different kinds of noise, including power-law noise (such as white noise, flicker noise, and random walk), and found an expression for the variance produced by an annual signal. These functional relations form the basis of error models that have to be fitted to the AVR in order to estimate the velocity uncertainty. Finally, we applied the method to the South African GPS network TrigNet. Most time series show noise characteristics that can be modeled by power-law noise plus an annual signal. The method is computationally very cheap, and the results are in good agreement with the ones obtained by methods based on MLE.
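As a rough illustration of the idea (not the authors' code), the Allan variance of the rate at one averaging time tau = m * dt can be computed directly in the time domain. The sketch below assumes a regularly sampled, gap-free position series:

```python
import numpy as np

def allan_variance_of_rate(x, dt, m):
    """Allan variance of the rate (AVR) at averaging time tau = m * dt.

    x  : regularly sampled position time series
    dt : sampling interval
    m  : samples per averaging window
    """
    n = len(x) // m
    xbar = x[: n * m].reshape(n, m).mean(axis=1)  # mean position per window
    rates = np.diff(xbar) / (m * dt)              # mean rate between windows
    # Allan variance: half the mean squared difference of adjacent rates
    return 0.5 * np.mean(np.diff(rates) ** 2)
```

A purely linear trend (constant velocity) gives zero AVR, so the statistic isolates rate fluctuations from the deterministic velocity being estimated.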

  6. The final days of Edgar Allan Poe: clues to an old mystery using 21st century medical science.

    PubMed

    Francis, Roger A

    This study examines all documented information regarding the final days and death of Edgar Allan Poe (1809-1849), in an attempt to determine the most likely cause of death of the American poet, short story writer, and literary critic. Information was gathered from letters, newspaper accounts, and magazine articles written during the period after Poe's death, and also from biographies and medical journal articles written up until the present. A chronology of Poe's final days was constructed, and this was used to form a differential diagnosis of possible causes of death. Death theories over the last 160 years were analyzed using this information. This analysis, along with a review of Poe's past medical history, would seem to support an alcohol-related cause of death.

  7. Systems Engineering Programmatic Estimation Using Technology Variance

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.

    2000-01-01

    Unique and innovative system programmatic estimation is conducted using the variance of the packaged technologies. Covariance analysis is performed on the subsystems and components comprising the system of interest. Technological "return" and "variation" parameters are estimated. These parameters are combined with the model error to arrive at a measure of system development stability. The resulting estimates provide valuable information concerning the potential cost growth of the system under development.

  9. Hypothesis exploration with visualization of variance

    PubMed Central

    2014-01-01

    Background The Consortium for Neuropsychiatric Phenomics (CNP) at UCLA was an investigation into the biological bases of traits such as memory and response inhibition phenotypes—to explore whether they are linked to syndromes including ADHD, Bipolar disorder, and Schizophrenia. An aim of the consortium was in moving from traditional categorical approaches for psychiatric syndromes towards more quantitative approaches based on large-scale analysis of the space of human variation. It represented an application of phenomics—wide-scale, systematic study of phenotypes—to neuropsychiatry research. Results This paper reports on a system for exploration of hypotheses in data obtained from the LA2K, LA3C, and LA5C studies in CNP. ViVA is a system for exploratory data analysis using novel mathematical models and methods for visualization of variance. An example of these methods is called VISOVA, a combination of visualization and analysis of variance, with the flavor of exploration associated with ANOVA in biomedical hypothesis generation. It permits visual identification of phenotype profiles—patterns of values across phenotypes—that characterize groups. Visualization enables screening and refinement of hypotheses about variance structure of sets of phenotypes. Conclusions The ViVA system was designed for exploration of neuropsychiatric hypotheses by interdisciplinary teams. Automated visualization in ViVA supports ‘natural selection’ on a pool of hypotheses, and permits deeper understanding of the statistical architecture of the data. Large-scale perspective of this kind could lead to better neuropsychiatric diagnostics. PMID:25097666

  10. VPSim: Variance propagation by simulation

    SciTech Connect

    Burr, T.; Coulter, C.A.; Prommel, J.

    1997-12-01

    One of the fundamental concepts in a materials control and accountability system for nuclear safeguards is the materials balance (MB). All transfers into and out of a material balance area are measured, as are the beginning and ending inventories. The resulting MB measures the material loss, MB = T_in + I_B − T_out − I_E. To interpret the MB, the authors must estimate its measurement error standard deviation, σ_MB. When feasible, they use a method usually known as propagation of variance (POV) to estimate σ_MB. The application of POV for estimating the measurement error variance of an MB is straightforward but tedious. By applying POV to individual measurement error standard deviations they can estimate σ_MB (or more generally, they can estimate the variance-covariance matrix, Σ, of a sequence of MBs). This report describes a new computer program (VPSim) that uses simulation to estimate the Σ matrix of a sequence of MBs. Given the proper input data, VPSim calculates the MB and σ_MB, or calculates a sequence of n MBs and the associated n-by-n covariance matrix, Σ. The covariance matrix, Σ, contains the variance of each MB in the diagonal entries and the covariance between pairs of MBs in the off-diagonal entries.
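The simulation approach can be sketched in a few lines: draw each measured term from its error distribution, form the MB, and take the empirical standard deviation. The numbers below are illustrative assumptions, not safeguards data; for independent normal errors the simulated result should match the analytic propagation-of-variance answer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative true values and measurement-error standard deviations
true = {"T_in": 100.0, "I_B": 50.0, "T_out": 98.0, "I_E": 52.0}
sig  = {"T_in": 0.5,   "I_B": 0.3,  "T_out": 0.5,  "I_E": 0.3}

n = 100_000  # simulated accounting periods
mb = (rng.normal(true["T_in"], sig["T_in"], n)
      + rng.normal(true["I_B"], sig["I_B"], n)
      - rng.normal(true["T_out"], sig["T_out"], n)
      - rng.normal(true["I_E"], sig["I_E"], n))

sigma_mb_sim = mb.std(ddof=1)                              # by simulation
sigma_mb_pov = np.sqrt(sum(s ** 2 for s in sig.values()))  # analytic POV
```

Simulation becomes the more attractive route once the error model includes correlated or non-normal terms, where the analytic POV bookkeeping gets tedious.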

  11. Network Structure and Biased Variance Estimation in Respondent Driven Sampling.

    PubMed

    Verdery, Ashton M; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J

    2015-01-01

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.

  13. Where Were the Whistleblowers? The Case of Allan McDonald and Roger Boisjoly.

    ERIC Educational Resources Information Center

    Stewart, Lea P.

    Employees who "blow the whistle" on their company because they believe it is engaged in practices that are illegal, immoral, or harmful to the public often face grave consequences for their actions, including demotion, harassment, forced resignation, or termination. The case of Allan McDonald and Roger Boisjoly, engineers who blew the…

  14. European Studies as Answer to Allan Bloom's "The Closing of the American Mind."

    ERIC Educational Resources Information Center

    Macdonald, Michael H.

    European studies can provide a solution to several of the issues raised in Allan Bloom's "The Closing of the American Mind." European studies pursue the academic quest for what is truth, what is goodness, and what is beauty. In seeking to answer these questions, the Greeks were among the first to explore many of humanity's problems and…

  15. Horror from the Soul--Gothic Style in Allan Poe's Horror Fictions

    ERIC Educational Resources Information Center

    Sun, Chunyan

    2015-01-01

    Edgar Allan Poe made tremendous contribution to horror fiction. Poe's inheritance of gothic fiction and American literature tradition combined with his living experience forms the background of his horror fictions. He inherited the tradition of the gothic fictions and made innovations on it, so as to penetrate to subconsciousness. Poe's horror…

  16. Observation, Inference, and Imagination: Elements of Edgar Allan Poe's Philosophy of Science

    ERIC Educational Resources Information Center

    Gelfert, Axel

    2014-01-01

    Edgar Allan Poe's standing as a literary figure, who drew on (and sometimes dabbled in) the scientific debates of his time, makes him an intriguing character for any exploration of the historical interrelationship between science, literature and philosophy. His sprawling "prose-poem" "Eureka" (1848), in particular, has…

  17. The Art of George Morrison and Allan Houser: The Development and Impact of Native Modernism

    ERIC Educational Resources Information Center

    Montiel, Anya

    2005-01-01

    The idea for a retrospective on George Morrison and Allan Houser as one of the inaugural exhibitions at the National Museum of the American Indian (NMAI) came from the NMAI curator of contemporary art, Truman Lowe. An artist and sculptor himself, Lowe knew both artists personally and saw them as mentors and visionaries. Lowe advised an exhibition…

  18. An Interview with Allan Wigfield: A Giant on Research on Expectancy Value, Motivation, and Reading Achievement

    ERIC Educational Resources Information Center

    Bembenutty, Hefer

    2012-01-01

    This article presents an interview with Allan Wigfield, professor and chair of the Department of Human Development and distinguished scholar-teacher at the University of Maryland. He has authored more than 100 peer-reviewed journal articles and book chapters on children's motivation and other topics. He is a fellow of Division 15 (Educational…

  19. Allan M. Freedman, LLB: a lawyer’s gift to Canadian chiropractors

    PubMed Central

    Brown, Douglas M.

    2007-01-01

    This paper reviews the leadership role, contributions, accolades, and impact of Professor Allan Freedman through a 30 year history of service to CMCC and the chiropractic profession in Canada. Professor Freedman has served as an educator, philanthropist and also as legal counsel. His influence on chiropractic organizations and chiropractors during this significant period in the profession is discussed. PMID:18060008

  20. Simultaneous analysis of large INTEGRAL/SPI1 datasets: Optimizing the computation of the solution and its variance using sparse matrix algorithms

    NASA Astrophysics Data System (ADS)

    Bouchet, L.; Amestoy, P.; Buttari, A.; Rouet, F.-H.; Chauvin, M.

    2013-02-01

    Nowadays, analyzing and reducing the ever larger astronomical datasets is becoming a crucial challenge, especially for long cumulated observation times. The INTEGRAL/SPI X/γ-ray spectrometer is an instrument for which it is essential to process many exposures at the same time in order to increase the low signal-to-noise ratio of the weakest sources. In this context, the conventional methods for data reduction are inefficient and sometimes not feasible at all. Processing several years of data simultaneously requires computing not only the solution of a large system of equations, but also the associated uncertainties. We aim at reducing the computation time and the memory usage. Since the SPI transfer function is sparse, we have used some popular methods for the solution of large sparse linear systems; we briefly review these methods. We use the Multifrontal Massively Parallel Solver (MUMPS) to compute the solution of the system of equations. We also need to compute the variance of the solution, which amounts to computing selected entries of the inverse of the sparse matrix corresponding to our linear system. This can be achieved through one of the latest features of the MUMPS software that has been partly motivated by this work. In this paper we provide a brief presentation of this feature and evaluate its effectiveness on astrophysical problems requiring the processing of large datasets simultaneously, such as the study of the entire emission of the Galaxy. We used these algorithms to solve the large sparse systems arising from SPI data processing and to obtain both their solutions and the associated variances. In conclusion, thanks to these newly developed tools, processing large datasets arising from SPI is now feasible with both a reasonable execution time and a low memory usage.
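The "selected entries of the inverse" idea can be illustrated with a dense stand-in: rather than forming the full inverse, solve one linear system per requested variance and read off a single entry. MUMPS does this far more efficiently on the sparse factors; the toy below uses a dense NumPy solve purely for illustration.

```python
import numpy as np

def selected_inverse_diagonal(a, indices):
    """Return {i: inv(a)[i, i]} for the requested indices only.

    Solving a @ z = e_i and reading z[i] avoids forming the full inverse;
    with a sparse factorization this is where the real savings come from.
    """
    out = {}
    for i in indices:
        e = np.zeros(a.shape[0])
        e[i] = 1.0
        out[i] = float(np.linalg.solve(a, e)[i])
    return out
```

For a covariance-style system, these diagonal entries are exactly the variances of the corresponding solution components.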

  1. Statistical test of reproducibility and operator variance in thin-section modal analysis of textures and phenocrysts in the Topopah Spring member, drill hole USW VH-2, Crater Flat, Nye County, Nevada

    SciTech Connect

    Moore, L.M.; Byers, F.M. Jr.; Broxton, D.E.

    1989-06-01

    A thin-section operator-variance test was given to the 2 junior authors, petrographers, by the senior author, a statistician, using 16 thin sections cut from core plugs drilled by the US Geological Survey from drill hole USW VH-2 standard (HCQ) drill core. The thin sections are samples of Topopah Spring devitrified rhyolite tuff from four textural zones, in ascending order: (1) lower nonlithophysal, (2) lower lithophysal, (3) middle nonlithophysal, and (4) upper lithophysal. Drill hole USW-VH-2 is near the center of Crater Flat, about 6 miles WSW of the Yucca Mountain exploration block. The original thin-section labels were opaqued out with removable enamel and renumbered with alpha-numeric labels. The slides were then given to the petrographer operators for quantitative thin-section modal (point-count) analysis of cryptocrystalline, spherulitic, granophyric, and void textures, as well as phenocryst minerals. Between-operator variance was tested by giving the two petrographers the same slide, and within-operator variance was tested by giving the same operator the same slide to count in a second test set, administered at least three months after the first set. Both operators were unaware that they were receiving the same slide to recount. 14 figs., 6 tabs.

  2. Variance decomposition in stochastic simulators

    SciTech Connect

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
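A generic pick-and-freeze Sobol estimate (illustrative only, not the paper's Poisson-process construction) shows what a first-order variance decomposition looks like on a toy model whose exact indices are known in closed form:

```python
import numpy as np

rng = np.random.default_rng(3)

def model(z1, z2):
    # Toy model: Var = 4 + 1, so the exact indices are S1 = 0.8, S2 = 0.2
    return 2.0 * z1 + z2

n = 200_000
a1, a2 = rng.normal(size=n), rng.normal(size=n)
b1, b2 = rng.normal(size=n), rng.normal(size=n)

f_a = model(a1, a2)
var_f = f_a.var()

# First-order index of z_i: correlate runs that share z_i but resample the rest
s1 = np.cov(f_a, model(a1, b2))[0, 1] / var_f
s2 = np.cov(f_a, model(b1, a2))[0, 1] / var_f
```

The same decomposition applied per reaction channel, as in the paper, attributes shares of the output variance to individual channels and their interactions.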

  3. The defect variance of random spherical harmonics

    NASA Astrophysics Data System (ADS)

    Marinucci, Domenico; Wigman, Igor

    2011-09-01

    The defect of a function f: M → ℝ is defined as the difference between the measure of the positive and negative regions. In this paper, we begin the analysis of the distribution of defect of random Gaussian spherical harmonics. By an easy argument, the defect is non-trivial only for even degree and the expected value always vanishes. Our principal result is evaluating the defect variance, asymptotically in the high-frequency limit. As other geometric functionals of random eigenfunctions, the defect may be used as a tool to probe the statistical properties of spherical random fields, a topic of great interest for modern cosmological data analysis.

  4. Mutations in MCT8 in patients with Allan-Herndon-Dudley-syndrome affecting its cellular distribution.

    PubMed

    Kersseboom, Simone; Kremers, Gert-Jan; Friesema, Edith C H; Visser, W Edward; Klootwijk, Wim; Peeters, Robin P; Visser, Theo J

    2013-05-01

    Monocarboxylate transporter 8 (MCT8) is a thyroid hormone (TH)-specific transporter. Mutations in the MCT8 gene are associated with Allan-Herndon-Dudley Syndrome (AHDS), consisting of severe psychomotor retardation and disturbed TH parameters. To study the functional consequences of different MCT8 mutations in detail, we combined functional analysis in different cell types with live-cell imaging of the cellular distribution of seven mutations that we identified in patients with AHDS. We used two cell models to study the mutations in vitro: 1) transiently transfected COS1 and JEG3 cells, and 2) stably transfected Flp-in 293 cells expressing a MCT8-cyan fluorescent protein construct. All seven mutants were expressed at the protein level and showed a defect in T3 and T4 transport in uptake and metabolism studies. Three mutants (G282C, P537L, and G558D) had residual uptake activity in Flp-in 293 and COS1 cells, but not in JEG3 cells. Four mutants (G221R, P321L, D453V, P537L) were expressed at the plasma membrane. The mobility in the plasma membrane of P537L was similar to WT, but the mobility of P321L was altered. The other mutants studied (insV236, G282C, G558D) were predominantly localized in the endoplasmic reticulum. In essence, loss of function by MCT8 mutations can be divided in two groups: mutations that result in partial or complete loss of transport activity (G221R, P321L, D453V, P537L) and mutations that mainly disturb protein expression and trafficking (insV236, G282C, G558D). The cell type-dependent results suggest that MCT8 mutations in AHDS patients may have tissue-specific effects on TH transport probably caused by tissue-specific expression of yet unknown MCT8-interacting proteins.

  5. A quantitative method to track protein translocation between intracellular compartments in real-time in live cells using weighted local variance image analysis.

    PubMed

    Calmettes, Guillaume; Weiss, James N

    2013-01-01

    The genetic expression of cloned fluorescent proteins coupled to time-lapse fluorescence microscopy has opened the door to the direct visualization of a wide range of molecular interactions in living cells. In particular, the dynamic translocation of proteins can now be explored in real time at the single-cell level. Here we propose a reliable, easy-to-implement, quantitative image processing method to assess protein translocation in living cells based on the computation of spatial variance maps of time-lapse images. The method is first illustrated and validated on simulated images of a fluorescently-labeled protein translocating from mitochondria to cytoplasm, and then applied to experimental data obtained with fluorescently-labeled hexokinase 2 in different cell types imaged by regular or confocal microscopy. The method was found to be robust with respect to cell morphology changes and mitochondrial dynamics (fusion, fission, movement) during the time-lapse imaging. Its ease of implementation should facilitate its application to a broad spectrum of time-lapse imaging studies.
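A bare-bones spatial variance map (the weighting and time-lapse tracking of the published method are omitted) can be computed with a sliding window; the function below is a minimal sketch:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_variance_map(img, k=3):
    """Variance of every k-by-k neighborhood of a 2-D image (valid region).

    Punctate (e.g. mitochondrial) fluorescence yields high local variance,
    while diffuse cytosolic fluorescence yields low local variance, so the
    map's mean drops as a labeled protein translocates out of the organelle.
    """
    win = sliding_window_view(img, (k, k))  # shape (H-k+1, W-k+1, k, k)
    return win.var(axis=(-2, -1))
```

Tracking a summary statistic of this map frame by frame is one simple way to turn the variance idea into a real-time translocation readout.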

  6. Mitral disc-valve variance

    PubMed Central

    Berroya, Renato B.; Escano, Fernando B.

    1972-01-01

    This report deals with a rare complication of disc-valve prosthesis in the mitral area. Significant destruction of the disc poppet and struts of mitral Beall valve prostheses occurred 20 and 17 months after implantation. The resulting valve incompetence in the first case contributed to the death of the patient. The durability of Teflon prosthetic valves appears to be in question, and this type of valve will probably become unacceptable if cases of disc-valve variance continue to increase. PMID:5017573

  7. A new LL3 chondrite, Allan Hills A79003, and observations on matrices in ordinary chondrites

    NASA Technical Reports Server (NTRS)

    Scott, E. R. D.; Taylor, G. J.; Maggiore, P.

    1982-01-01

    Allan Hills A79003 is an LL3 chondrite with a petrologic subtype of 3.4 + or - 0.2. Contrary to previous suggestions, it is not paired with other Allan Hills specimens. It contains haxonite, (Fe,Ni)23C6; shock-melted, 'fizzed' metal-troilite intergrowths; and translucent, glassy-looking Huss matrix (fine-grained, Fe-rich silicate matrix), in addition to the normal opaque and recrystallized varieties of Huss matrix. Some chondrules are partly coated with opaque matrix, others with translucent matrix. Translucent matrix is more uniform in composition and contains less S, CaO and FeO and more MgO than the opaque variety. Both kinds of matrix rimmed chondrules before consolidation of the meteorite.

  8. Further Insights into the Allan-Herndon-Dudley Syndrome: Clinical and Functional Characterization of a Novel MCT8 Mutation

    PubMed Central

    Yoon, Grace; Visser, Theo J.

    2015-01-01

    Background Mutations in the thyroid hormone (TH) transporter MCT8 have been identified as the cause for Allan-Herndon-Dudley Syndrome (AHDS), characterized by severe psychomotor retardation and altered TH serum levels. Here we report a novel MCT8 mutation identified in 4 generations of one family, and its functional characterization. Methods Proband and family members were screened for 60 genes involved in X-linked cognitive impairment and the MCT8 mutation was confirmed. Functional consequences of MCT8 mutations were studied by analysis of [125I]TH transport in fibroblasts and transiently transfected JEG3 and COS1 cells, and by subcellular localization of the transporter. Results The proband and a male cousin demonstrated clinical findings characteristic of AHDS. Serum analysis showed high T3, low rT3, and normal T4 and TSH levels in the proband. A MCT8 mutation (c.869C>T; p.S290F) was identified in the proband, his cousin, and several female carriers. Functional analysis of the S290F mutant showed decreased TH transport, metabolism and protein expression in the three cell types, whereas the S290A mutation had no effect. Interestingly, both uptake and efflux of T3 and T4 was impaired in fibroblasts of the proband, compared to his healthy brother. However, no effect of the S290F mutation was observed on TH efflux from COS1 and JEG3 cells. Immunocytochemistry showed plasma membrane localization of wild-type MCT8 and the S290A and S290F mutants in JEG3 cells. Conclusions We describe a novel MCT8 mutation (S290F) in 4 generations of a family with Allan-Herndon-Dudley Syndrome. Functional analysis demonstrates loss-of-function of the MCT8 transporter. Furthermore, our results indicate that the function of the S290F mutant is dependent on cell context. Comparison of the S290F and S290A mutants indicates that it is not the loss of Ser but its substitution with Phe, which leads to S290F dysfunction. PMID:26426690

  9. Allan C. Gotlib, DC, CM: A worthy Member of the Order of Canada.

    PubMed

    Brown, Douglas M

    2016-03-01

    On June 29, 2012, His Excellency the Right Honourable David Johnston, Governor General of Canada, announced 70 new appointments to the Order of Canada. Among them was Dr. Allan Gotlib, who was subsequently installed as a Member of the Order of Canada, in recognition of his contributions to advancing research in the chiropractic profession and its inter-professional integration. This paper attempts an objective view of his career, to substantiate the accomplishments that led to Dr. Gotlib receiving Canada's highest civilian honour.

  10. Cosmic-ray-produced Cl-36 and Mn-53 in Allan Hills-77 meteorites

    NASA Technical Reports Server (NTRS)

    Nishiizumi, K.; Murrell, M. T.; Arnold, J. R.; Finkel, R. C.; Elmore, D.; Ferraro, R. D.; Gove, H. E.

    1981-01-01

    Cosmic-ray-produced Mn-53 has been determined by neutron activation in nine Allan Hills-77 meteorites. Additionally, Cl-36 has been measured in seven of these objects using tandem accelerator mass spectrometry. These results, along with C-14 and Al-26 concentrations determined elsewhere, yield terrestrial ages ranging from 10,000 to 700,000 years. Weathering was not found to result in Mn-53 loss.

  11. Applications of Variance Fractal Dimension: a Survey

    NASA Astrophysics Data System (ADS)

    Phinyomark, Angkoon; Phukpattaranont, Pornchai; Limsakul, Chusak

    2012-04-01

    Chaotic dynamical systems are pervasive in nature and can be shown to be deterministic through fractal analysis. There are numerous methods that can be used to estimate the fractal dimension. Among the usual fractal estimation methods, variance fractal dimension (VFD) is one of the most significant fractal analysis methods that can be implemented for real-time systems. The basic concept and theory of VFD are presented. Recent research and the development of several applications based on VFD are reviewed and explained in detail, such as biomedical signal processing and pattern recognition, speech communication, geophysical signal analysis, power systems and communication systems. The important parameters that need to be considered in computing the VFD are discussed, including the window size and the window increment of the feature, and the step size of the VFD. Directions for future research of VFD are also briefly outlined.
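A minimal VFD estimate for a 1-D signal (leaving out the windowing and real-time buffering parameters the survey discusses) fits the slope of log increment variance against log lag:

```python
import numpy as np

def variance_fractal_dimension(x, lags=(1, 2, 4, 8, 16)):
    """Variance fractal dimension D = E + 1 - H with embedding dimension E = 1.

    The Hurst exponent H is half the slope of
    log Var[x(t + tau) - x(t)] versus log tau.
    """
    x = np.asarray(x, dtype=float)
    log_tau = np.log(lags)
    log_var = np.log([np.var(x[t:] - x[:-t]) for t in lags])
    slope = np.polyfit(log_tau, log_var, 1)[0]
    return 2.0 - slope / 2.0
```

For ordinary Brownian motion the increment variance grows linearly with lag, so H = 0.5 and the estimate should come out near D = 1.5.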

  12. Measurements of Ultra-Stable Oscillator (USO) Allan Deviations in Space

    NASA Technical Reports Server (NTRS)

    Enzer, Daphna G.; Klipstein, William M.; Wang, Rabi T.; Dunn, Charles E.

    2013-01-01

    Researchers have used data from the GRAIL mission to the Moon to make the first in-flight verification of ultra-stable oscillators (USOs) with Allan deviation below 10^-13 for 1-to-100-second averaging times. USOs are flown in space to provide stable timing and/or navigation signals for a variety of different science and programmatic missions. The Gravity Recovery and Interior Laboratory (GRAIL) mission is flying twin spacecraft, each with its own USO and with a Ka-band crosslink used to measure range fluctuations. Data from this crosslink can be combined in such a way as to give the relative time offsets of the two spacecraft's USOs and to calculate the Allan deviation to describe the USOs' combined performance while orbiting the Moon. Researchers find the first direct in-space Allan deviations below 10^-13 for 1-to-100-second averaging times, comparable to pre-launch data, and better than measurements from ground tracking of an X-band carrier coherent with the USO. Fluctuations in Earth's atmosphere limit measurement performance in direct-to-Earth links. In-flight USO performance verification was also performed for GRAIL's parent mission, the Gravity Recovery and Climate Experiment (GRACE), using both K-band and Ka-band crosslinks.
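The statistic itself is straightforward to compute from fractional-frequency residuals; a minimal overlapping estimator (a generic sketch, not the GRAIL processing chain) looks like:

```python
import numpy as np

def overlapping_allan_deviation(y, m):
    """Overlapping Allan deviation of fractional-frequency data y
    at averaging time tau = m * tau0 (tau0 = sample spacing of y)."""
    ybar = np.convolve(y, np.ones(m) / m, mode="valid")  # all m-sample means
    d = ybar[m:] - ybar[:-m]       # differences of adjacent averaging windows
    return np.sqrt(0.5 * np.mean(d ** 2))
```

For white frequency noise of standard deviation sigma per sample, the expected result is sigma / sqrt(m), i.e. the 1/sqrt(tau) slope against which flight data such as GRAIL's are judged.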

  13. Speed Variance and Its Influence on Accidents.

    ERIC Educational Resources Information Center

    Garber, Nicholas J.; Gadirau, Ravi

    A study was conducted to investigate the traffic engineering factors that influence speed variance and to determine to what extent speed variance affects accident rates. Detailed analyses were carried out to relate speed variance with posted speed limit, design speeds, and other traffic variables. The major factor identified was the difference…

  14. Using "Excel" for White's Test--An Important Technique for Evaluating the Equality of Variance Assumption and Model Specification in a Regression Analysis

    ERIC Educational Resources Information Center

    Berenson, Mark L.

    2013-01-01

    There is consensus in the statistical literature that severe departures from its assumptions invalidate the use of regression modeling for purposes of inference. The assumptions of regression modeling are usually evaluated subjectively through visual, graphic displays in a residual analysis but such an approach, taken alone, may be insufficient…

  15. Increasing selection response by Bayesian modeling of heterogeneous environmental variances

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Heterogeneity of environmental variance among genotypes reduces selection response because genotypes with higher variance are more likely to be selected than low-variance genotypes. Modeling heterogeneous variances to obtain weighted means corrected for heterogeneous variances is difficult in likel...

  16. Identification and quantification of peptides and proteins secreted from prostate epithelial cells by unbiased liquid chromatography tandem mass spectrometry using goodness of fit and analysis of variance.

    PubMed

    Florentinus, Angelica K; Bowden, Peter; Sardana, Girish; Diamandis, Eleftherios P; Marshall, John G

    2012-02-02

    The proteins secreted by prostate cancer cells (PC3(AR)6) were separated by strong anion exchange chromatography, digested with trypsin and analyzed by unbiased liquid chromatography tandem mass spectrometry with an ion trap. The spectra were matched to peptides within proteins using a goodness of fit algorithm that showed a low false positive rate. The parent ions for MS/MS were randomly and independently sampled from a log-normal population and therefore could be analyzed by ANOVA. Normal distribution analysis confirmed that the parent and fragment ion intensity distributions were sampled over 99.9% of the range that was above the background noise. Arranging the ion intensity data with the identified peptide and protein sequences in structured query language (SQL) permitted the quantification of ion intensity across treatments, proteins and peptides. The intensity of 101,905 fragment ions from 1421 peptide precursors of 583 peptides from 233 proteins separated over 11 sample treatments was computed in one ANOVA model using the statistical analysis system (SAS) prior to Tukey-Kramer honestly significant difference (HSD) testing. Thus, complex mixtures of proteins were identified and quantified with a high degree of confidence using an ion trap without isotopic labels, multivariate analysis or comparing chromatographic retention times.
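
    The analysis pattern described (ANOVA on log-normal intensities followed by Tukey HSD) can be sketched in a few lines. This is an illustrative reconstruction with simulated intensities, not the paper's SAS model; the group labels and sample sizes are assumptions.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
# log-normal ion intensities for three hypothetical treatments
a = rng.lognormal(mean=10.0, sigma=0.5, size=200)
b = rng.lognormal(mean=10.4, sigma=0.5, size=200)  # shifted treatment
c = rng.lognormal(mean=10.0, sigma=0.5, size=200)

# ANOVA on the log scale, where log-normal data become normal
la, lb, lc = np.log(a), np.log(b), np.log(c)
f_stat, p_value = f_oneway(la, lb, lc)
print(f"F = {f_stat:.2f}, p = {p_value:.3g}")

# Tukey HSD post hoc comparisons between treatments
values = np.concatenate([la, lb, lc])
groups = np.repeat(["A", "B", "C"], 200)
print(pairwise_tukeyhsd(values, groups))
```

    Working on the log scale is what makes the ANOVA assumptions defensible for log-normally sampled ion intensities.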

  17. Multivariate Granger causality and generalized variance

    NASA Astrophysics Data System (ADS)

    Barrett, Adam B.; Barnett, Lionel; Seth, Anil K.

    2010-04-01

    Granger causality analysis is a popular method for inference on directed interactions in complex systems of many variables. A shortcoming of the standard framework for Granger causality is that it only allows for examination of interactions between single (univariate) variables within a system, perhaps conditioned on other variables. However, interactions do not necessarily take place between single variables but may occur among groups or “ensembles” of variables. In this study we establish a principled framework for Granger causality in the context of causal interactions among two or more multivariate sets of variables. Building on Geweke’s seminal 1982 work, we offer additional justifications for one particular form of multivariate Granger causality based on the generalized variances of residual errors. Taken together, our results support a comprehensive and theoretically consistent extension of Granger causality to the multivariate case. Treated individually, they highlight several specific advantages of the generalized variance measure, which we illustrate using applications in neuroscience as an example. We further show how the measure can be used to define “partial” Granger causality in the multivariate context and we also motivate reformulations of “causal density” and “Granger autonomy.” Our results are directly applicable to experimental data and promise to reveal new types of functional relations in complex systems, neural and otherwise.

  18. Multivariate Granger causality and generalized variance.

    PubMed

    Barrett, Adam B; Barnett, Lionel; Seth, Anil K

    2010-04-01

    Granger causality analysis is a popular method for inference on directed interactions in complex systems of many variables. A shortcoming of the standard framework for Granger causality is that it only allows for examination of interactions between single (univariate) variables within a system, perhaps conditioned on other variables. However, interactions do not necessarily take place between single variables but may occur among groups or "ensembles" of variables. In this study we establish a principled framework for Granger causality in the context of causal interactions among two or more multivariate sets of variables. Building on Geweke's seminal 1982 work, we offer additional justifications for one particular form of multivariate Granger causality based on the generalized variances of residual errors. Taken together, our results support a comprehensive and theoretically consistent extension of Granger causality to the multivariate case. Treated individually, they highlight several specific advantages of the generalized variance measure, which we illustrate using applications in neuroscience as an example. We further show how the measure can be used to define "partial" Granger causality in the multivariate context and we also motivate reformulations of "causal density" and "Granger autonomy." Our results are directly applicable to experimental data and promise to reveal new types of functional relations in complex systems, neural and otherwise.
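
    The generalized-variance form of multivariate Granger causality compares the determinant of the residual covariance of a full VAR model (past of both variable groups) against that of a reduced model (past of the target group only). The sketch below is a minimal OLS-based illustration under assumed simulated data, not the authors' implementation; the function name and VAR order are illustrative.

```python
import numpy as np

def gc_generalized_variance(Y, X, p=1):
    """Granger causality X -> Y as ln det(Sigma_reduced) - ln det(Sigma_full),
    using order-p VAR fits by ordinary least squares.
    Y, X: arrays of shape (T, dY) and (T, dX)."""
    T = Y.shape[0]

    def lags(Z):  # stack lags 1..p aligned with Y[p:]
        return np.hstack([Z[p - k - 1:T - k - 1] for k in range(p)])

    Yt = Y[p:]
    full = np.hstack([lags(Y), lags(X), np.ones((T - p, 1))])
    red = np.hstack([lags(Y), np.ones((T - p, 1))])

    def resid_cov(R):
        beta, *_ = np.linalg.lstsq(R, Yt, rcond=None)
        E = Yt - R @ beta
        return E.T @ E / (T - p)

    _, logdet_full = np.linalg.slogdet(resid_cov(full))
    _, logdet_red = np.linalg.slogdet(resid_cov(red))
    return logdet_red - logdet_full

# X drives Y with a one-step lag, so GC(X -> Y) should be clearly positive
rng = np.random.default_rng(3)
T = 5000
X = rng.standard_normal((T, 2))
Y = np.zeros((T, 2))
for t in range(1, T):
    Y[t] = 0.4 * Y[t - 1] + 0.5 * X[t - 1] + 0.1 * rng.standard_normal(2)
print(gc_generalized_variance(Y, X))  # large
print(gc_generalized_variance(X, Y))  # near zero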

  19. Restricted sample variance reduces generalizability.

    PubMed

    Lakes, Kimberley D

    2013-06-01

    One factor that affects the reliability of observed scores is restriction of range on the construct measured for a particular group of study participants. This study illustrates how researchers can use generalizability theory to evaluate the impact of restriction of range in particular sample characteristics on the generalizability of test scores and to estimate how changes in measurement design could improve the generalizability of the test scores. An observer-rated measure of child self-regulation (Response to Challenge Scale; Lakes, 2011) is used to examine scores for 198 children (Grades K through 5) within the generalizability theory (GT) framework. The generalizability of ratings within relatively developmentally homogeneous samples is examined and illustrates the effect of reduced variance among ratees on generalizability. Forecasts for g coefficients of various D study designs demonstrate how higher generalizability could be achieved by increasing the number of raters or items. In summary, the research presented illustrates the importance of and procedures for evaluating the generalizability of a set of scores in a particular research context.

  20. Myelination Delay and Allan-Herndon-Dudley Syndrome Caused by a Novel Mutation in the SLC16A2 Gene.

    PubMed

    La Piana, Roberta; Vanasse, Michel; Brais, Bernard; Bernard, Genevieve

    2015-09-01

    Allan-Herndon-Dudley syndrome is an X-linked disease caused by mutations in the solute carrier family 16 member 2 (SLC16A2) gene. As SLC16A2 encodes the monocarboxylate transporter 8 (MCT8), a thyroid hormone transporter, patients with Allan-Herndon-Dudley syndrome present with a specific, altered thyroid hormone profile. Allan-Herndon-Dudley syndrome has been associated with myelination delay on the brain magnetic resonance imaging (MRI) of affected subjects. We report a patient with Allan-Herndon-Dudley syndrome characterized by developmental delay, hypotonia, and delayed myelination caused by a novel SLC16A2 mutation (p.L291R). The thyroid hormone profile in our patient was atypical for Allan-Herndon-Dudley syndrome. The follow-up examinations showed that the progression of the myelination was not accompanied by a clinical improvement. Our paper suggests that SLC16A2 mutations should be investigated in patients with myelination delay even when thyroid function is not conclusively altered.

  1. Analysis of enterococci using portable testing equipment for developing countries--variance of Azide NutriDisk medium under variable time and temperature.

    PubMed

    Godfrey, S; Watkins, J; Toop, K; Francis, C

    2006-01-01

    This report compares the enterococci count on samples obtained with Azide NutriDisk (AND) (sterile, dehydrated culture medium) and Slanetz and Bartley (SB) medium when exposed to variable incubation time and temperature. Three experiments were performed to examine the recovery of enterococci on AND and SB media using membrane filtration with respect to: (a) incubation time; (b) incubation temperature; and (c) a combination of the two. Presumptive counts were observed at 37, 41, 46 and 47 degrees C and at 20, 24, 28 and 48 h. These were compared to AWWA standard method 9230 C (44 degrees C, 44 h). Samples were confirmed using Kanamycin Aesculin Azide (KAA) agar. Friedman's ANOVA and Student's t-test analysis indicated higher enumeration of enterococci when grown on AND (p = 0.45) than SB (p < 0.001) at all temperatures, with a survival threshold at 47 degrees C. Significant results for AND medium were noted at 20 h (p = 0.021), 24 h (p = 0.278) and 28 h (p = 0.543). The study concluded that the accuracy of the AND medium at a greater time and temperature range provided flexibility in incubator technology, making it an appropriate alternative to SB medium for monitoring drinking water using field testing kits in developing countries.

  2. The evolution and consequences of sex-specific reproductive variance.

    PubMed

    Mullon, Charles; Reuter, Max; Lehmann, Laurent

    2014-01-01

    Natural selection favors alleles that increase the number of offspring produced by their carriers. But in a world that is inherently uncertain within generations, selection also favors alleles that reduce the variance in the number of offspring produced. If previous studies have established this principle, they have largely ignored fundamental aspects of sexual reproduction and therefore how selection on sex-specific reproductive variance operates. To study the evolution and consequences of sex-specific reproductive variance, we present a population-genetic model of phenotypic evolution in a dioecious population that incorporates previously neglected components of reproductive variance. First, we derive the probability of fixation for mutations that affect male and/or female reproductive phenotypes under sex-specific selection. We find that even in the simplest scenarios, the direction of selection is altered when reproductive variance is taken into account. In particular, previously unaccounted for covariances between the reproductive outputs of different individuals are expected to play a significant role in determining the direction of selection. Then, the probability of fixation is used to develop a stochastic model of joint male and female phenotypic evolution. We find that sex-specific reproductive variance can be responsible for changes in the course of long-term evolution. Finally, the model is applied to an example of parental-care evolution. Overall, our model allows for the evolutionary analysis of social traits in finite and dioecious populations, where interactions can occur within and between sexes under a realistic scenario of reproduction.

  3. The Evolution and Consequences of Sex-Specific Reproductive Variance

    PubMed Central

    Mullon, Charles; Reuter, Max; Lehmann, Laurent

    2014-01-01

    Natural selection favors alleles that increase the number of offspring produced by their carriers. But in a world that is inherently uncertain within generations, selection also favors alleles that reduce the variance in the number of offspring produced. If previous studies have established this principle, they have largely ignored fundamental aspects of sexual reproduction and therefore how selection on sex-specific reproductive variance operates. To study the evolution and consequences of sex-specific reproductive variance, we present a population-genetic model of phenotypic evolution in a dioecious population that incorporates previously neglected components of reproductive variance. First, we derive the probability of fixation for mutations that affect male and/or female reproductive phenotypes under sex-specific selection. We find that even in the simplest scenarios, the direction of selection is altered when reproductive variance is taken into account. In particular, previously unaccounted for covariances between the reproductive outputs of different individuals are expected to play a significant role in determining the direction of selection. Then, the probability of fixation is used to develop a stochastic model of joint male and female phenotypic evolution. We find that sex-specific reproductive variance can be responsible for changes in the course of long-term evolution. Finally, the model is applied to an example of parental-care evolution. Overall, our model allows for the evolutionary analysis of social traits in finite and dioecious populations, where interactions can occur within and between sexes under a realistic scenario of reproduction. PMID:24172130

  4. Impact of Damping Uncertainty on SEA Model Response Variance

    NASA Technical Reports Server (NTRS)

    Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand

    2010-01-01

    Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However these techniques do not account for uncertainties in the system properties. In the present paper uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.

  5. Empirical testing of 16S rRNA gene PCR primer pairs reveals variance in target specificity and efficacy not suggested by in silico analysis.

    PubMed

    Morales, Sergio E; Holben, William E

    2009-05-01

    Phylogenetic and "fingerprinting" analyses of the 16S rRNA genes of prokaryotes have been a mainstay of microbial ecology during the last two decades. However, many methods and results from studies that rely on the 16S rRNA gene for detection and quantification of specific microbial taxa have seemingly received only cursory or even no validation. To directly examine the efficacy and specificity of 16S rRNA gene-based primers for phylum-, class-, and operational taxonomic unit-specific target amplification in quantitative PCR, we created a collection of primers based solely on an extensive soil bacterial 16S rRNA gene clone library containing approximately 5,000 sequences from a single soil sample (i.e., a closed site-specific library was used to create PCR primers for use at this site). These primers were initially tested in silico prior to empirical testing by PCR amplification of known target sequences and of controls based on disparate phylogenetic groups. Although all primers were highly specific according to the in silico analysis, the empirical analyses clearly exhibited a high degree of nonspecificity for many of the phyla or classes, while other primers proved to be highly specific. These findings suggest that significant care must be taken when interpreting studies whose results were obtained with target specific primers that were not adequately validated, especially where population densities or dynamics have been inferred from the data. Further, we suggest that the reliability of quantification of specific target abundance using 16S rRNA-based quantitative PCR is case specific and must be determined through rigorous empirical testing rather than solely in silico.

  6. Enhancing melting curve analysis for the discrimination of loop-mediated isothermal amplification products from four pathogenic molds: Use of inorganic pyrophosphatase and its effect in reducing the variance in melting temperature values.

    PubMed

    Tone, Kazuya; Fujisaki, Ryuichi; Yamazaki, Takashi; Makimura, Koichi

    2017-01-01

    Loop-mediated isothermal amplification (LAMP) is widely used for differentiating causative agents in infectious diseases. Melting curve analysis (MCA) in conjunction with the LAMP method reduces both the labor required to conduct an assay and contamination of the products. However, two factors influence the melting temperature (Tm) of LAMP products: an inconsistent concentration of Mg(2+) ion due to the precipitation of Mg2P2O7, and the guanine-cytosine (GC) content of the starting dumbbell-like structure. In this study, we investigated the influence of inorganic pyrophosphatase (PPase), an enzyme that inhibits the production of Mg2P2O7, on the Tm of LAMP products, and examined the correlation between the above factors and the Tm value using MCA. A set of LAMP primers that amplify the ribosomal DNA of the large subunit of Aspergillus fumigatus, Penicillium expansum, Penicillium marneffei, and Histoplasma capsulatum was designed, and the LAMP reaction was performed using serial concentrations of these fungal genomic DNAs as templates in the presence and absence of PPase. We compared the Tm values obtained from the PPase-free group and the PPase-containing group, and the relationship between the GC content of the theoretical starting dumbbell-like structure and the Tm values of the LAMP product from each fungus was analyzed. The range of Tm values obtained for several fungi overlapped in the PPase-free group. In contrast, in the PPase-containing group, the variance in Tm values was smaller and there was no overlap in the Tm values obtained for all fungi tested: the LAMP product of each fungus had a specific Tm value, and the average Tm value increased as the GC% of the starting dumbbell-like structure increased. The use of PPase therefore reduced the variance in the Tm value and allowed the differentiation of these pathogenic fungi using the MCA method.

  7. 40 CFR 59.106 - Variance.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...

  8. 40 CFR 59.106 - Variance.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...

  9. 40 CFR 59.106 - Variance.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...

  10. 40 CFR 59.106 - Variance.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...

  11. 40 CFR 59.106 - Variance.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...

  12. Allan C. Gotlib, DC, CM: A worthy Member of the Order of Canada

    PubMed Central

    Brown, Douglas M.

    2016-01-01

    On June 29, 2012, His Excellency the Right Honourable David Johnston, Governor General of Canada, announced 70 new appointments to the Order of Canada. Among them was Dr. Allan Gotlib, who was subsequently installed as a Member of the Order of Canada, in recognition of his contributions to advancing research in the chiropractic profession and its inter-professional integration. This paper attempts an objective view of his career, to substantiate the accomplishments that led to Dr. Gotlib receiving Canada’s highest civilian honour. PMID:27069273

  13. Noble gases in twenty Yamato H-chondrites: Comparison with Allan Hills chondrites and modern falls

    NASA Technical Reports Server (NTRS)

    Loeken, TH.; Scherer, P.; Schultz, L.

    1993-01-01

    Concentration and isotopic composition of noble gases have been measured in 20 H-chondrites found on the Yamato Mountains ice fields in Antarctica. The distribution of exposure ages as well as of radiogenic He-4 contents is similar to that of H-chondrites collected at the Allan Hills site. Furthermore, a comparison of the noble gas record of Antarctic H-chondrites and finds or falls from non-Antarctic areas gives no support to the suggestion that Antarctic H-chondrites and modern falls derive from differing interplanetary meteorite populations.

  14. The discovery and initial characterization of Allan Hills 81005 - The first lunar meteorite

    NASA Technical Reports Server (NTRS)

    Marvin, U. B.

    1983-01-01

    Antarctic meteorite ALHA81005, discovered in the Allan Hills region of Victoria Land, is a polymict anorthositic breccia which differs from other meteorites in mineralogical and chemical composition but is strikingly similar to lunar highlands soil breccias. The petrologic character and several independent lines of evidence identify ALHA81005 as a meteorite from the moon. Two small clasts of probable mare basalt occur among the highlands lithologies in Thin Section 81005,22. This lunar specimen, which shows relatively minor shock effects, has generated new ideas on the types of planetary samples found on the earth.

  15. Bias/Variance Analysis for Relational Domains

    DTIC Science & Technology

    2007-08-15

    E_{Dtr,Dte,t}[L(t, y)] = E_{Dtr,Dte,t}[(t − y)²]
      = E_t[(t − E[t])²] + E_{Dtr,Dte}[(y − E[t])²]
      = N_T(x) + E_{Dtr,Dte}[(y − E_{Dtr,Dte}[y] + E_{Dtr,Dte}[y] − E[t])²]
      = N_T(x) + E_{Dtr,Dte}[(y − E_{Dtr,Dte}[y])²] + (E_{Dtr,Dte}[y] − E[t])² + 2 E_{Dtr,Dte}[y − E_{Dtr,Dte}[y]] (E_{Dtr,Dte}[y] − E[t])
      = N_T(x) + V_T(x) + B_T(x)

    In this decomposition the noise N_T(x) = E_t[(t − E[t])²], the variance V_T(x) = E_{Dtr,Dte}[(y − E_{Dtr,Dte}[y])²], and the squared bias B_T(x) = (E_{Dtr,Dte}[y] − E[t])² account for the expected squared error; the cross term vanishes because y − E_{Dtr,Dte}[y] has zero mean over training sets Dtr and test sets Dte.
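
    The three terms of this decomposition can be estimated by Monte Carlo: refit a deliberately underparameterized model on many independent training sets and examine the spread and offset of its predictions at a test point. This is a generic illustration, not the report's relational-domain analysis; the sine target and linear fit are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def true_f(x):
    return np.sin(x)

def fit_and_predict(x0, n_train=30, degree=1):
    """Fit a straight line to one noisy training set and predict at x0.
    A degree-1 fit to a sine is misspecified, so it carries bias."""
    xs = rng.uniform(0, np.pi, n_train)
    ts = true_f(xs) + 0.1 * rng.standard_normal(n_train)
    coef = np.polyfit(xs, ts, degree)
    return np.polyval(coef, x0)

x0 = 2.0
preds = np.array([fit_and_predict(x0) for _ in range(2000)])
noise = 0.1 ** 2                            # N_T(x): E[(t - E[t])^2]
variance = preds.var()                      # V_T(x): E[(y - E[y])^2]
bias_sq = (preds.mean() - true_f(x0)) ** 2  # B_T(x): (E[y] - E[t])^2
print(bias_sq, variance, noise, bias_sq + variance + noise)
```

    For this misspecified linear model the squared bias dominates the variance, which is the classic signature of underfitting.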

  16. Variance Design and Air Pollution Control

    ERIC Educational Resources Information Center

    Ferrar, Terry A.; Brownstein, Alan B.

    1975-01-01

    Air pollution control authorities were forced to relax air quality standards during the winter of 1972 by granting variances. This paper examines the institutional characteristics of these variance policies from an economic incentive standpoint, sets up desirable structural criteria for institutional design and arrives at policy guidelines for…

  17. 10 CFR 1022.16 - Variances.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 4 2012-01-01 2012-01-01 false Variances. 1022.16 Section 1022.16 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...

  18. 10 CFR 1022.16 - Variances.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 4 2014-01-01 2014-01-01 false Variances. 1022.16 Section 1022.16 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...

  19. 10 CFR 1022.16 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Variances. 1022.16 Section 1022.16 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...

  20. 10 CFR 1022.16 - Variances.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 4 2011-01-01 2011-01-01 false Variances. 1022.16 Section 1022.16 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...

  1. 10 CFR 1022.16 - Variances.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 4 2013-01-01 2013-01-01 false Variances. 1022.16 Section 1022.16 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...

  2. 40 CFR 142.41 - Variance request.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ....41 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.41 Variance request. A supplier of water may request the granting of...

  3. 40 CFR 142.41 - Variance request.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ....41 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.41 Variance request. A supplier of water may request the granting of...

  4. 40 CFR 142.41 - Variance request.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ....41 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.41 Variance request. A supplier of water may request the granting of...

  5. Nonlinear Epigenetic Variance: Review and Simulations

    ERIC Educational Resources Information Center

    Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.

    2010-01-01

    We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…

  6. Portfolio optimization with mean-variance model

    NASA Astrophysics Data System (ADS)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

    Investors wish to achieve the target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize the portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization. The mean-variance model is an optimization model that aims to minimize the portfolio risk, which is the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results of this study show that the optimal portfolio composition differs across the stocks. Moreover, investors can obtain the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
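
    The mean-variance construction described (minimize portfolio variance subject to a target return and full investment) can be sketched as a small constrained optimization. This is an illustrative long-only example with simulated returns for five assets, not the study's 20-stock FBMKLCI data.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
# hypothetical weekly returns for 5 assets (rows: weeks)
R = 0.002 + 0.02 * rng.standard_normal((260, 5))
mu = R.mean(axis=0)                 # expected returns
Sigma = np.cov(R, rowvar=False)     # return covariance

target = mu.mean()                  # required portfolio return

def risk(w):                        # portfolio variance, the objective
    return w @ Sigma @ w

cons = [
    {"type": "eq", "fun": lambda w: w.sum() - 1.0},    # fully invested
    {"type": "eq", "fun": lambda w: w @ mu - target},  # hit target return
]
bounds = [(0.0, 1.0)] * 5           # long-only weights
w0 = np.full(5, 0.2)
res = minimize(risk, w0, method="SLSQP", bounds=bounds, constraints=cons)
print(res.x.round(3), risk(res.x))
```

    Sweeping the target return and re-solving traces out the efficient frontier that the mean-variance model is built around.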

  7. Allan Hills A77219 - The first Antarctic mesosiderite

    NASA Technical Reports Server (NTRS)

    Agosto, W. N.; Hewins, R. H.; Clarke, R. S., Jr.

    1980-01-01

    The abundance of orthopyroxene, inverted pigeonite, plagioclase, tridymite, kamacite, and tetrataenite, plus the whole rock analysis, indicates that ALHA 77219 is a mesosiderite. The presence of inverted pigeonite rims on orthopyroxene clasts plus the range of Fe/Mg and Fe/Mn ratios for pyroxene and olivine are characteristic of mesosiderites. All the petrographic and chemical data are consistent with classification of ALHA 77219 as a mesosiderite and, because the matrix is fine-grained and little recrystallized, as a subgroup I mesosiderite. The Fe/Mn trends in pyroxenes of mesosiderites such as ALHA 77219 can be explained by igneous fractionation of pyroxene along with metal and subsequent subsolidus reduction in the breccia.

  8. Variance Assistance Document: Land Disposal Restrictions Treatability Variances and Determinations of Equivalent Treatment

    EPA Pesticide Factsheets

    This document provides assistance to those seeking to submit a variance request for LDR treatability variances and determinations of equivalent treatment regarding the hazardous waste land disposal restrictions program.

  9. The natural thermoluminescence of meteorites. V - Ordinary chondrites at the Allan Hills ice fields

    NASA Technical Reports Server (NTRS)

    Benoit, Paul H.; Sears, Hazel; Sears, Derek W. G.

    1993-01-01

    Natural thermoluminescence (TL) data have been obtained for 167 ordinary chondrites from the ice fields in the vicinity of the Allan Hills in Victoria Land, Antarctica, in order to investigate their thermal and radiation history, pairing, terrestrial age, and concentration mechanisms. Natural TL values for meteorites from the Main ice field are fairly low, while the Farwestern field shows a spread with many values of 30-80 krad, suggestive of less than 150-ka terrestrial ages. There appear to be trends in TL levels within individual ice fields which are suggestive of directions of ice movement at these sites during the period of meteorite concentration. These directions seem to be confirmed by the orientations of elongation preserved in meteorite pairing groups. The proportion of meteorites with very low natural TL levels at each field is comparable to that observed at the Lewis Cliff site and for modern non-Antarctic falls, and is also similar to the fraction of small-perihelion orbits calculated from fireball and fall observations. Induced TL data for meteorites from the Allan Hills confirm trends which show that a select group of H chondrites from the Antarctic experienced a different extraterrestrial thermal history from that of non-Antarctic H chondrites.

  10. Temporal Relation Extraction in Outcome Variances of Clinical Pathways.

    PubMed

    Yamashita, Takanori; Wakata, Yoshifumi; Hamai, Satoshi; Nakashima, Yasuharu; Iwamoto, Yukihide; Franagan, Brendan; Nakashima, Naoki; Hirokawa, Sachio

    2015-01-01

    Recently, clinical pathways have progressed with digitization and the analysis of activity. There are many previous studies on clinical pathways, but few feed directly into medical practice. We constructed a mind map system that applies a spanning tree. This system can visualize temporal relations in outcome variances and indicate outcomes that affect long-term hospitalization.

  11. Entropy, Fisher Information and Variance with Frost-Musulin Potential

    NASA Astrophysics Data System (ADS)

    Idiodi, J. O. A.; Onate, C. A.

    2016-09-01

    This study presents the Shannon and Renyi information entropy for both position and momentum space and the Fisher information for the position-dependent mass Schrödinger equation with the Frost-Musulin potential. The analysis of the quantum mechanical probability has been obtained via the Fisher information. The variance information of this potential is also computed; this controls both the chemical and physical properties of some molecular systems. We have observed the behaviour of the Shannon entropy, Renyi entropy, Fisher information and variance with the quantum number n, respectively.

  12. Estimating Variances of Horizontal Wind Fluctuations in Stable Conditions

    NASA Astrophysics Data System (ADS)

    Luhar, Ashok K.

    2010-05-01

    Information concerning the average wind speed and the variances of lateral and longitudinal wind velocity fluctuations is required by dispersion models to characterise turbulence in the atmospheric boundary layer. When the winds are weak, the scalar average wind speed and the vector average wind speed need to be clearly distinguished and both lateral and longitudinal wind velocity fluctuations assume equal importance in dispersion calculations. We examine commonly-used methods of estimating these variances from wind-speed and wind-direction statistics measured separately, for example, by a cup anemometer and a wind vane, and evaluate the implied relationship between the scalar and vector wind speeds, using measurements taken under low-wind stable conditions. We highlight several inconsistencies inherent in the existing formulations and show that the widely-used assumption that the lateral velocity variance is equal to the longitudinal velocity variance is not necessarily true. We derive improved relations for the two variances, and although data under stable stratification are considered for comparison, our analysis is applicable more generally.
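
    As a rough illustration of the distinction drawn above, the following sketch (synthetic wind data with made-up variance values, not the paper's measurements or formulations) contrasts the scalar and vector mean wind speeds and computes the longitudinal and lateral velocity variances by rotating into mean-wind coordinates:

```python
import numpy as np

# Hypothetical horizontal wind components (m/s); in practice these would be
# u, v time series from an anemometer. Variances are chosen unequal on purpose.
rng = np.random.default_rng(0)
u = 1.0 + 0.8 * rng.standard_normal(5000)   # roughly along the mean wind
v = 0.5 * rng.standard_normal(5000)         # roughly across the mean wind

speed = np.hypot(u, v)
scalar_mean = float(speed.mean())                   # cup-anemometer style average
vector_mean = float(np.hypot(u.mean(), v.mean()))   # magnitude of the mean vector

# Rotate into mean-wind coordinates to separate longitudinal / lateral variance.
theta = np.arctan2(v.mean(), u.mean())
u_rot = u * np.cos(theta) + v * np.sin(theta)
v_rot = -u * np.sin(theta) + v * np.cos(theta)
sigma_u2 = float(u_rot.var())   # longitudinal velocity variance
sigma_v2 = float(v_rot.var())   # lateral velocity variance
```

    With these synthetic draws the scalar mean exceeds the vector mean, and the two horizontal variances differ, illustrating why the equal-variance assumption can fail.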

  13. Portfolio optimization using median-variance approach

    NASA Astrophysics Data System (ADS)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems, particularly in portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of the approaches assume that the distribution of data is normal, and this is not generally true. As an alternative, in this paper, we employ the median-variance approach to improve the portfolio optimization. This approach caters for both normal and non-normal distributions of data. With this representation, we analyze and compare the rate of return and risk between mean-variance and median-variance based portfolios consisting of 30 stocks from Bursa Malaysia. The results in this study show that the median-variance approach is capable of producing a lower risk for each return earned as compared to the mean-variance approach.
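
    As a concrete sketch of the quantities being compared, the following fragment (simulated returns, not Bursa Malaysia data; all numbers are illustrative) computes mean- and median-based location estimates and the closed-form, fully invested minimum-variance weights w proportional to the inverse covariance times the ones vector:

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated daily returns for three assets; the third is heavy-tailed
# (Student-t), i.e. deliberately non-normal.
returns = np.column_stack([
    0.0005 + 0.010 * rng.standard_normal(750),
    0.0004 + 0.012 * rng.standard_normal(750),
    0.0003 + 0.010 * rng.standard_t(3, 750),
])

mean_loc = returns.mean(axis=0)          # mean-variance location estimate
median_loc = np.median(returns, axis=0)  # median-variance location estimate
cov = np.cov(returns, rowvar=False)

# Closed-form fully invested minimum-variance weights (shorts allowed).
w_min = np.linalg.solve(cov, np.ones(3))
w_min /= w_min.sum()
w_eq = np.ones(3) / 3                    # equal-weight benchmark

risk_min = float(np.sqrt(w_min @ cov @ w_min))
risk_eq = float(np.sqrt(w_eq @ cov @ w_eq))
```

    By construction the minimum-variance portfolio's risk never exceeds the equal-weight portfolio's; swapping `mean_loc` for `median_loc` in the return estimate is the spirit of the median-variance comparison.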

  14. Neural field theory with variance dynamics.

    PubMed

    Robinson, P A

    2013-06-01

    Previous neural field models have mostly been concerned with prediction of mean neural activity and with second order quantities such as its variance, but without feedback of second order quantities on the dynamics. Here the effects of feedback of the variance on the steady states and adiabatic dynamics of neural systems are calculated using linear neural field theory to estimate the neural voltage variance, then including this quantity in the total variance parameter of the nonlinear firing rate-voltage response function, and thus into determination of the fixed points and the variance itself. The general results further clarify the limits of validity of approaches with and without inclusion of variance dynamics. Specific applications show that stability against a saddle-node bifurcation is reduced in a purely cortical system, but can be either increased or decreased in the corticothalamic case, depending on the initial state. Estimates of critical variance scalings near saddle-node bifurcation are also found, including physiologically based normalizations and new scalings for mean firing rate and the position of the bifurcation.

  15. Variance estimation for stratified propensity score estimators.

    PubMed

    Williamson, E J; Morley, R; Lucas, A; Carpenter, J R

    2012-07-10

    Propensity score methods are increasingly used to estimate the effect of a treatment or exposure on an outcome in non-randomised studies. We focus on one such method, stratification on the propensity score, comparing it with the method of inverse-probability weighting by the propensity score. The propensity score--the conditional probability of receiving the treatment given observed covariates--is usually an unknown probability estimated from the data. Estimators for the variance of treatment effect estimates typically used in practice, however, do not take into account that the propensity score itself has been estimated from the data. By deriving the asymptotic marginal variance of the stratified estimate of treatment effect, correctly taking into account the estimation of the propensity score, we show that routinely used variance estimators are likely to produce confidence intervals that are too conservative when the propensity score model includes variables that predict (cause) the outcome, but only weakly predict the treatment. In contrast, a comparison with the analogous marginal variance for the inverse probability weighted (IPW) estimator shows that routinely used variance estimators for the IPW estimator are likely to produce confidence intervals that are almost always too conservative. Because exact calculation of the asymptotic marginal variance is likely to be complex, particularly for the stratified estimator, we suggest that bootstrap estimates of variance should be used in practice.
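
    A minimal sketch of the suggested bootstrap approach, on simulated data (the stratification score, sample sizes and effect size are all hypothetical; in a real analysis the propensity score would be fitted, e.g. by logistic regression, inside each bootstrap replicate):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
x = rng.standard_normal(n)                      # confounder
t = rng.random(n) < 1 / (1 + np.exp(-0.5 * x))  # treatment depends on x
y = 1.0 * t + x + rng.standard_normal(n)        # outcome; true effect = 1

def stratified_effect(x, t, y, k=5):
    """Treatment-effect estimate from stratification on a score.
    For simplicity the score is x itself, standing in for an estimated
    propensity score."""
    edges = np.quantile(x, np.linspace(0, 1, k + 1))
    strata = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, k - 1)
    diffs = [y[(strata == s) & t].mean() - y[(strata == s) & ~t].mean()
             for s in range(k)]
    return float(np.mean(diffs))

est = stratified_effect(x, t, y)

# Bootstrap the whole procedure, so the re-estimation of the strata is
# reflected in the variance estimate.
boot = [stratified_effect(x[i], t[i], y[i])
        for i in (rng.integers(0, n, n) for _ in range(200))]
se = float(np.std(boot))
```
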

  16. Studying impacts of strategy choices concerning the Celestial Reference Frame on the estimates of nutation time series during geodetic VLBI analysis

    NASA Astrophysics Data System (ADS)

    Gattano, César; Lambert, Sébastien; Bizouard, Christian; Souchay, Jean

    2015-08-01

    Very Long Baseline Interferometry (VLBI) is the only technique that permits determination of Earth's precession-nutation at submilliarcsecond accuracy. With its 35 years of observations, at a rate of two observing sessions per week during the last decade, it allows nutation to be estimated over periods from 14 days to 20 years. But VLBI data analysis is of such complexity that there are as many different nutation time series as there are analysis centres working on it. It is therefore worthwhile to investigate the nature of these differences in relation to the choices made in the analysis strategy. Differences between the operational nutation time series are considered as composed of a signal and a noise, determined by means of wavelet and Allan variance analyses. We try to explain them by the choices made on the Celestial Reference Frame. In particular, the ICRF2 catalog is perturbed by introducing random shifts on all 3414 sources, and we investigate the consequences on nutation.

  17. Onomastic Mirroring: "The Closing of the American Mind" by Allan Bloom and "Lives on the Boundary" by Mike Rose.

    ERIC Educational Resources Information Center

    Heit, Karl

    Although Allan Bloom in "The Closing of the American Mind" and Mike Rose in "Lives on the Boundary" reveal an almost endless list of obvious differences of perspective on literacy and higher education in America, both take divergent yet similar routes to create a permanent place for liberal education. Both Bloom and Rose use…

  18. Reducing variance in batch partitioning measurements

    SciTech Connect

    Mariner, Paul E.

    2010-08-11

    The partitioning experiment is commonly performed with little or no attention to reducing measurement variance. Batch test procedures such as those used to measure K_d values (e.g., ASTM D 4646 and EPA 402-R-99-004A) do not explain how to evaluate measurement uncertainty or how to minimize measurement variance. In fact, ASTM D 4646 prescribes a sorbent:water ratio that prevents variance minimization. Consequently, the variance of a set of partitioning measurements can be extreme and even absurd. Such data sets, which are commonplace, hamper probabilistic modeling efforts. An error-savvy design requires adjustment of the solution:sorbent ratio so that approximately half of the sorbate partitions to the sorbent. Results of Monte Carlo simulations indicate that this simple step can markedly improve the precision and statistical characterization of partitioning uncertainty.
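
    The point about the solution:sorbent ratio can be illustrated with a small Monte Carlo sketch (the fixed absolute measurement error and unit concentrations are hypothetical, not taken from the ASTM or EPA procedures):

```python
import numpy as np

rng = np.random.default_rng(4)

def kd_rel_error(f, sigma=0.01, n=20000):
    """Relative error of a batch partition-coefficient measurement when a
    fraction f of the sorbate partitions to the sorbent, assuming the same
    absolute error sigma on the initial and final solution concentrations."""
    c0_true, cw_true = 1.0, 1.0 - f
    c0 = c0_true + sigma * rng.standard_normal(n)
    cw = cw_true + sigma * rng.standard_normal(n)
    kd = (c0 - cw) / cw        # sorbed amount computed by difference
    return float(np.std(kd) / np.mean(kd))

errors = {f: kd_rel_error(f) for f in (0.1, 0.5, 0.9)}
```

    With a fixed absolute error on both concentration measurements, the relative error of the partition coefficient is smallest when roughly half of the sorbate partitions to the sorbent, in line with the error-savvy design described above.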

  19. 78 FR 14122 - Revocation of Permanent Variances

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-04

    ... Occupational Safety and Health Administration Revocation of Permanent Variances AGENCY: Occupational Safety and Health Administration (OSHA), Labor. ACTION: Notice of revocation. SUMMARY: With this notice, OSHA is... into consideration these newly corrected cross references. DATES: The effective date of the...

  20. 40 CFR 59.206 - Variances.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...

  1. 40 CFR 59.206 - Variances.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...

  2. 40 CFR 59.206 - Variances.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...

  3. 40 CFR 59.206 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...

  4. 40 CFR 59.206 - Variances.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...

  5. Phonocardiographic diagnosis of aortic ball variance.

    PubMed

    Hylen, J C; Kloster, F E; Herr, R H; Hull, P Q; Ames, A W; Starr, A; Griswold, H E

    1968-07-01

    Fatty infiltration causing changes in the silastic poppet of the Model 1000 series Starr-Edwards aortic valve prostheses (ball variance) has been detected with increasing frequency and can result in sudden death. Phonocardiograms were recorded on 12 patients with ball variance confirmed by operation and on 31 controls. Ten of the 12 patients with ball variance were distinguished from the controls by an aortic opening sound (AO) less than half as intense as the aortic closure sound (AC) at the second right intercostal space (AO/AC ratio less than 0.5). Both AO and AC were decreased in two patients with ball variance, with the loss of the characteristic high frequency and amplitude of these sounds. The only patient having a diminished AO/AC ratio (0.42) without ball variance at reoperation had a clot extending over the aortic valve struts. The phonocardiographic findings have been the most reliable objective evidence of ball variance in patients with Starr-Edwards aortic prostheses of the Model 1000 series.

  6. Composition of bulk samples and a possible pristine clast from Allan Hills A81005

    NASA Technical Reports Server (NTRS)

    Boynton, W. V.; Hill, D. H.

    1983-01-01

    Abundances of thirty-five elements were determined in two bulk samples and a white clast in the Allan Hills A81005 meteorite. High siderophile element content indicates that the sample is a regolith breccia. An Fe/Mn ratio of 77 in this meteorite eliminates parent bodies of known differentiated meteorites as the source of ALHA 81005. The incompatible elements are very similar to those found in most lunar highlands rocks, and provide very strong evidence that the sample is lunar in origin. The clast sample has the trace element pattern of a lunar anorthosite and is very low in KREEP and siderophile elements. It may be a fragment of a pristine lunar rock.

  7. Investigations into an unknown organism on the martian meteorite Allan Hills 84001.

    PubMed

    Steele, A; Goddard, D T; Stapleton, D; Toporski, J K; Peters, V; Bassinger, V; Sharples, G; Wynn-Williams, D D; McKay, D S

    2000-03-01

    Examination of fracture surfaces near the fusion crust of the martian meteorite Allan Hills (ALH) 84001 has been conducted using scanning electron microscopy (SEM) and atomic force microscopy (AFM) and has revealed structures strongly resembling mycelium. These structures were compared with similar structures found in Antarctic cryptoendolithic communities. On morphology alone, we conclude that these features are not only terrestrial in origin but probably belong to a member of the Actinomycetales, which we consider was introduced during the Antarctic residency of this meteorite. If true, this is the first documented account of terrestrial microbial activity within a meteorite from the Antarctic blue ice fields. These structures, however, do not bear any resemblance to those postulated to be martian biota, although they are a probable source of the organic contaminants previously reported in this meteorite.

  8. Investigations into an unknown organism on the martian meteorite Allan Hills 84001

    NASA Technical Reports Server (NTRS)

    Steele, A.; Goddard, D. T.; Stapleton, D.; Toporski, J. K.; Peters, V.; Bassinger, V.; Sharples, G.; Wynn-Williams, D. D.; McKay, D. S.

    2000-01-01

    Examination of fracture surfaces near the fusion crust of the martian meteorite Allan Hills (ALH) 84001 has been conducted using scanning electron microscopy (SEM) and atomic force microscopy (AFM) and has revealed structures strongly resembling mycelium. These structures were compared with similar structures found in Antarctic cryptoendolithic communities. On morphology alone, we conclude that these features are not only terrestrial in origin but probably belong to a member of the Actinomycetales, which we consider was introduced during the Antarctic residency of this meteorite. If true, this is the first documented account of terrestrial microbial activity within a meteorite from the Antarctic blue ice fields. These structures, however, do not bear any resemblance to those postulated to be martian biota, although they are a probable source of the organic contaminants previously reported in this meteorite.

  9. The MCT8 thyroid hormone transporter and Allan--Herndon--Dudley syndrome

    PubMed Central

    Schwartz, Charles E.; Stevenson, Roger E.

    2007-01-01

    Thyroid hormone is essential for the proper development and function of the brain. The active form of thyroid hormone is T3, which binds to nuclear receptors. Recently, a transporter specific for T3, MCT8 (monocarboxylate transporter 8) was identified. MCT8 is highly expressed in liver and brain. The gene is located in Xq13 and mutations in MCT8 are responsible for an X-linked condition, Allan--Herndon--Dudley syndrome (AHDS). This syndrome is characterized by congenital hypotonia that progresses to spasticity with severe psychomotor delays. Affected males also present with muscle hypoplasia, generalized muscle weakness, and limited speech. Importantly, these patients have elevated serum levels of free T3, low to below normal serum levels of free T4, and levels of thyroid stimulating hormone that are within the normal range. This constellation of measurements of thyroid function enables quick screening for AHDS in males presenting with mental retardation, congenital hypotonia, and generalized muscle weakness. PMID:17574010

  10. The MCT8 thyroid hormone transporter and Allan-Herndon-Dudley syndrome.

    PubMed

    Schwartz, Charles E; Stevenson, Roger E

    2007-06-01

    Thyroid hormone is essential for the proper development and function of the brain. The active form of thyroid hormone is T(3), which binds to nuclear receptors. Recently, a transporter specific for T(3), MCT8 (monocarboxylate transporter 8) was identified. MCT8 is highly expressed in liver and brain. The gene is located in Xq13 and mutations in MCT8 are responsible for an X-linked condition, Allan-Herndon-Dudley syndrome (AHDS). This syndrome is characterized by congenital hypotonia that progresses to spasticity with severe psychomotor delays. Affected males also present with muscle hypoplasia, generalized muscle weakness, and limited speech. Importantly, these patients have elevated serum levels of free T(3), low to below normal serum levels of free T(4), and levels of thyroid stimulating hormone that are within the normal range. This constellation of measurements of thyroid function enables quick screening for AHDS in males presenting with cognitive impairment, congenital hypotonia, and generalized muscle weakness.

  11. Allan Hills 76005 Polymict Eucrite Pairing Group: Curatorial and Scientific Update on a Jointly Curated Meteorite

    NASA Technical Reports Server (NTRS)

    Righter, K.

    2011-01-01

    Allan Hills 76005 (or 765) was collected by the joint US-Japan field search for meteorites in 1976-77. It was described in detail as "pale gray in color and consists of finely divided macrocrystalline pyroxene-rich matrix that contains abundant clastic fragments: (1) Clasts of white, plagioclase-rich rocks. (2) Medium-gray, partly devitrified, cryptocrystalline. (3) Monomineralic fragments and grains of pyroxene, plagioclases, oxide minerals, sulfides, and metal. In overall appearance it is very similar to some lunar breccias." Subsequent studies found a great diversity of basaltic clast textures and compositions, and therefore it is best classified as a polymict eucrite. Samples from the 1976-77, 77-78, and 78-79 field seasons (76, 77, and 78 prefixes) were split between the US and Japan (NIPR). The US specimens are currently at NASA-JSC, the Smithsonian Institution, or the Field Museum in Chicago. After this initial find of ALH 76005, the next year's team recovered one additional mass, ALH 77302, and four additional masses were found during the third season: ALH 78040, 78132, 78158 and 78165. The joint US-Japan collection effort ended after three years, and the US began collecting in the Trans-Antarctic Mountains with the 1979-80 and subsequent field seasons. ALH 79017 and ALH 80102 were recovered in these first two years, and in the 1981-82 field season six additional masses were recovered from the Allan Hills. It took some time to establish the pairing of all of these specimens, but altogether the samples comprise 4292.4 g of material. Here we summarize the scientific findings as well as some curatorial details of how specimens have been subdivided and allocated for study. A detailed summary is also presented in the HED meteorite compendium on the NASA-JSC curation webpage.

  12. Allan-Herndon-Dudley Syndrome and the Monocarboxylate Transporter 8 (MCT8) Gene

    PubMed Central

    Schwartz, Charles E.; May, Melanie M.; Carpenter, Nancy J.; Rogers, R. Curtis; Martin, Judith; Bialer, Martin G.; Ward, Jewell; Sanabria, Javier; Marsa, Silvana; Lewis, James A.; Echeverri, Roberto; Lubs, Herbert A.; Voeller, Kytja; Simensen, Richard J.; Stevenson, Roger E.

    2005-01-01

    Allan-Herndon-Dudley syndrome was among the first of the X-linked mental retardation syndromes to be described (in 1944) and among the first to be regionally mapped on the X chromosome (in 1990). Six large families with the syndrome have been identified, and linkage studies have placed the gene locus in Xq13.2. Mutations in the monocarboxylate transporter 8 gene (MCT8) have been found in each of the six families. One essential function of the protein encoded by this gene appears to be the transport of triiodothyronine into neurons. Abnormal transporter function is reflected in elevated free triiodothyronine and lowered free thyroxine levels in the blood. Infancy and childhood in the Allan-Herndon-Dudley syndrome are marked by hypotonia, weakness, reduced muscle mass, and delay of developmental milestones. Facial manifestations are not distinctive, but the face tends to be elongated with bifrontal narrowing, and the ears are often simply formed or cupped. Some patients have myopathic facies. Generalized weakness is manifested by excessive drooling, forward positioning of the head and neck, failure to ambulate independently, or ataxia in those who do ambulate. Speech is dysarthric or absent altogether. Hypotonia gives way in adult life to spasticity. The hands exhibit dystonic and athetoid posturing and fisting. Cognitive development is severely impaired. No major malformations occur, intrauterine growth is not impaired, and head circumference and genital development are usually normal. Behavior tends to be passive, with little evidence of aggressive or disruptive behavior. Although clinical signs of thyroid dysfunction are usually absent in affected males, the disturbances in blood levels of thyroid hormones suggest the possibility of systematic detection through screening of high-risk populations. PMID:15889350

  13. Allan-Herndon-Dudley syndrome and the monocarboxylate transporter 8 (MCT8) gene.

    PubMed

    Schwartz, Charles E; May, Melanie M; Carpenter, Nancy J; Rogers, R Curtis; Martin, Judith; Bialer, Martin G; Ward, Jewell; Sanabria, Javier; Marsa, Silvana; Lewis, James A; Echeverri, Roberto; Lubs, Herbert A; Voeller, Kytja; Simensen, Richard J; Stevenson, Roger E

    2005-07-01

    Allan-Herndon-Dudley syndrome was among the first of the X-linked mental retardation syndromes to be described (in 1944) and among the first to be regionally mapped on the X chromosome (in 1990). Six large families with the syndrome have been identified, and linkage studies have placed the gene locus in Xq13.2. Mutations in the monocarboxylate transporter 8 gene (MCT8) have been found in each of the six families. One essential function of the protein encoded by this gene appears to be the transport of triiodothyronine into neurons. Abnormal transporter function is reflected in elevated free triiodothyronine and lowered free thyroxine levels in the blood. Infancy and childhood in the Allan-Herndon-Dudley syndrome are marked by hypotonia, weakness, reduced muscle mass, and delay of developmental milestones. Facial manifestations are not distinctive, but the face tends to be elongated with bifrontal narrowing, and the ears are often simply formed or cupped. Some patients have myopathic facies. Generalized weakness is manifested by excessive drooling, forward positioning of the head and neck, failure to ambulate independently, or ataxia in those who do ambulate. Speech is dysarthric or absent altogether. Hypotonia gives way in adult life to spasticity. The hands exhibit dystonic and athetoid posturing and fisting. Cognitive development is severely impaired. No major malformations occur, intrauterine growth is not impaired, and head circumference and genital development are usually normal. Behavior tends to be passive, with little evidence of aggressive or disruptive behavior. Although clinical signs of thyroid dysfunction are usually absent in affected males, the disturbances in blood levels of thyroid hormones suggest the possibility of systematic detection through screening of high-risk populations.

  14. The Natural Thermoluminescence of Meteorites. Part 5; Ordinary Chondrites at the Allan Hills Ice Fields

    NASA Technical Reports Server (NTRS)

    Benoit, Paul H.; Sears, Hazel; Sears, Derek W. G.

    1993-01-01

    Natural thermoluminescence (TL) data have been obtained for 167 ordinary chondrites from the ice fields in the vicinity of the Allan Hills in Victoria Land, Antarctica, in order to investigate their thermal and radiation history, pairing, terrestrial age, and concentration mechanisms. Using fairly conservative criteria (including natural and induced TL, find location, and petrographic data), the 167 meteorite fragments are thought to represent a maximum of 129 separate meteorites. Natural TL values for meteorites from the Main ice field are fairly low (typically 5-30 krad, indicative of terrestrial ages of approx. 400 ka), while the Far western field shows a spread with many values 30-80 krad, suggestive of less than 150-ka terrestrial ages. There appear to be trends in TL levels within individual ice fields which are suggestive of directions of ice movement at these sites during the period of meteorite concentration. These directions seem to be confirmed by the orientations of elongation preserved in meteorite pairing groups. The proportion of meteorites with very low natural TL levels (less than 5 krad) at each field is comparable to that observed at the Lewis Cliff site and for modern non-Antarctic falls and is also similar to the fraction of small perihelia (less than 0.85 AU) orbits calculated from fireball and fall observations. Induced TL data for meteorites from the Allan Hills confirm trends observed for meteorites collected during the 1977/1978 and 1978/1979 field seasons which show that a select group of H chondrites from the Antarctic experienced a different extraterrestrial thermal history from that of non-Antarctic H chondrites.

  15. Discrimination of frequency variance for tonal sequences.

    PubMed

    Byrne, Andrew J; Viemeister, Neal F; Stellmack, Mark A

    2014-12-01

    Real-world auditory stimuli are highly variable across occurrences and sources. The present study examined the sensitivity of human listeners to differences in global stimulus variability. In a two-interval, forced-choice task, variance discrimination was measured using sequences of five 100-ms tone pulses. The frequency of each pulse was sampled randomly from a distribution that was Gaussian in logarithmic frequency. In the non-signal interval, the sampled distribution had a variance of σ²_STAN, while in the signal interval, the variance of the sequence was σ²_SIG (with σ²_SIG > σ²_STAN). The listener's task was to choose the interval with the larger variance. To constrain possible decision strategies, the mean frequency of the sampling distribution of each interval was randomly chosen for each presentation. Psychometric functions were measured for various values of σ²_STAN. Although performance was remarkably similar across listeners, overall performance was poorer than that of an ideal observer (IO) which perfectly compares interval variances. However, like the IO, Weber's Law behavior was observed, with a constant ratio of (σ²_SIG - σ²_STAN) to σ²_STAN yielding similar performance. A model which degraded the IO with a frequency-resolution noise and a computational noise provided a reasonable fit to the real data.
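
    A quick Monte Carlo sketch of the ideal-observer benchmark (a simplification: this IO simply compares the sample variances of the two five-tone sequences; trial counts and variances are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def pc_ideal(var_stan, var_sig, n_tones=5, trials=20000):
    """Proportion correct for a simplified ideal observer that picks the
    interval whose n_tones-sample sequence has the larger sample variance."""
    stan = rng.standard_normal((trials, n_tones)) * np.sqrt(var_stan)
    sig = rng.standard_normal((trials, n_tones)) * np.sqrt(var_sig)
    correct = sig.var(axis=1, ddof=1) > stan.var(axis=1, ddof=1)
    return float(correct.mean())

# Weber-like behavior: performance depends only on the variance *ratio*.
p1 = pc_ideal(1.0, 3.0)
p2 = pc_ideal(10.0, 30.0)
```

    Because the comparison is scale invariant, equal variance ratios give essentially identical proportions correct, consistent with the Weber's Law behavior noted above.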

  16. Variance estimation for nucleotide substitution models.

    PubMed

    Chen, Weishan; Wang, Hsiuying

    2015-09-01

    The current variance estimators for most evolutionary models were derived when a nucleotide substitution number estimator was approximated with a simple first order Taylor expansion. In this study, we derive three variance estimators for the F81, F84, HKY85 and TN93 nucleotide substitution models. They are obtained using the second order Taylor expansion of the substitution number estimator, the first order Taylor expansion of a squared deviation and the second order Taylor expansion of a squared deviation, respectively. These variance estimators are compared with the existing variance estimator in a simulation study. The simulation shows that the variance estimator derived using the second order Taylor expansion of a squared deviation is more accurate than the other three estimators. In addition, we also compare these estimators with an estimator derived by the bootstrap method. The simulation shows that the performance of this bootstrap estimator is similar to that of the estimator derived by the second order Taylor expansion of a squared deviation. Since the latter has an explicit form, it is more efficient than the bootstrap estimator.
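
    As a deliberately simplified illustration of the bootstrap comparison, this sketch bootstraps the substitution-number estimator for the one-parameter JC69 model (not one of the four models treated in the paper) and compares it with the first-order (delta-method) variance:

```python
import numpy as np

rng = np.random.default_rng(6)

def jc69_distance(diff_fraction):
    """JC69 substitution-number estimate from the fraction of differing sites."""
    return -0.75 * np.log(1 - 4.0 * diff_fraction / 3.0)

# Simulated pairwise alignment: each of L sites differs with probability p.
L, p = 1000, 0.10
diffs = rng.random(L) < p

d_hat = jc69_distance(diffs.mean())

# Bootstrap over sites to estimate the variance of d_hat.
boot = [jc69_distance(diffs[rng.integers(0, L, L)].mean())
        for _ in range(500)]
var_boot = float(np.var(boot))

# First-order Taylor (delta-method) variance for comparison.
p_hat = diffs.mean()
var_delta = p_hat * (1 - p_hat) / (L * (1 - 4 * p_hat / 3) ** 2)
```
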

  17. Incorporating Love- and Rayleigh-Wave Magnitudes, Unequal Earthquake and Explosion Variance Assumptions, and Intrastation Complexity for Improved Event Screening

    DTIC Science & Technology

    2009-09-30

    differences in complexities and magnitude variances for earthquake- and explosion-generated surface waves. We have applied the Ms (VMAX) analysis (Bonner et al...) for Rayleigh waves and (2) quantifying differences in complexities and magnitude variances for earthquake- and explosion-generated surface waves. RESEARCH ACCOMPLISHED Love

  18. Integrating Variances into an Analytical Database

    NASA Technical Reports Server (NTRS)

    Sanchez, Carlos

    2010-01-01

    For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make it easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance has been bypassed many times already, and so the requirement may not really be needed, but rather should be changed to allow the variance's conditions permanently. The project was not restricted to the design and development of the database system; it also involved exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part that contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.

  19. Variance in binary stellar population synthesis

    NASA Astrophysics Data System (ADS)

    Breivik, Katelyn; Larson, Shane L.

    2016-03-01

    In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations in less than a week, thus allowing a full exploration of the variance associated with a binary stellar evolution model.

  20. Noise characteristics in DORIS station positions time series derived from IGN-JPL, INASAN and CNES-CLS analysis centres

    NASA Astrophysics Data System (ADS)

    Khelifa, S.

    2014-12-01

    Using the wavelet transform and the Allan variance, we have analysed weekly position-residual solutions of nine high-latitude DORIS stations in STCD (STation Coordinate Difference) format provided by three Analysis Centres: IGN-JPL (solution ign11wd01), INASAN (solution ina10wd01) and CNES-CLS (solution lca11wd02), in order to compare the spectral characteristics of their residual noise. The temporal correlations between the three solutions, two by two and station by station, for each component (North, East and Vertical) reveal a high correlation in the horizontal components (North and East). For the North component, the correlation average is about 0.88, 0.81 and 0.79 between the IGN-INA, IGN-LCA and INA-LCA solutions, respectively; for the East component it is about 0.84, 0.82 and 0.76, respectively. The correlations for the Vertical component are moderate, with averages of 0.64, 0.57 and 0.58 for the IGN-INA, IGN-LCA and INA-LCA solutions, respectively. After removing the trends and seasonal components from the analysed time series, the Allan variance analysis shows that the three solutions are dominated by white noise in all three components (North, East and Vertical). The wavelet transform analysis, using the VisuShrink method with soft thresholding, reveals that the noise level in the LCA solution is lower than in the IGN and INA solutions. Indeed, the standard deviation of the noise for the three components is in the range of 5-11, 5-12 and 4-9 mm in the IGN, INA and LCA solutions, respectively.
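The noise classification behind this kind of analysis can be sketched as follows (a simplified illustration, not the study's code: residuals are synthetic, and a non-overlapping block estimator is used). Treating the residual series like frequency-type data, the log-log slope of the Allan variance versus averaging time is about -1 for white noise, 0 for flicker noise, and +1 for random walk:

```python
import numpy as np

def allan_variance(y, m):
    """Allan variance of series y at averaging factor m (non-overlapping blocks)."""
    n = len(y) // m
    block_means = y[: n * m].reshape(n, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(block_means) ** 2)

# Synthetic weekly coordinate residuals: pure white noise.
rng = np.random.default_rng(1)
residuals = rng.standard_normal(16384)

ms = np.array([1, 2, 4, 8, 16, 32, 64])
avar = np.array([allan_variance(residuals, m) for m in ms])

# Log-log slope classifies the noise type: ~ -1 here confirms white noise.
slope = np.polyfit(np.log(ms), np.log(avar), 1)[0]
```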

  1. 29 CFR 1920.2 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 7 2010-07-01 2010-07-01 false Variances. 1920.2 Section 1920.2 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR (CONTINUED) PROCEDURE FOR VARIATIONS FROM SAFETY AND HEALTH REGULATIONS UNDER THE LONGSHOREMEN'S AND HARBOR...

  2. 21 CFR 1010.4 - Variances.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Dockets Management, except for information regarded as confidential under section 537(e) of the act. (d... Management (HFA-305), Food and Drug Administration, 5630 Fishers Lane, rm. 1061, Rockville, MD 20852. (1) The application for variance shall include the following information: (i) A description of the product and...

  3. Understanding gender variance in children and adolescents.

    PubMed

    Simons, Lisa K; Leibowitz, Scott F; Hidalgo, Marco A

    2014-06-01

    Gender variance is an umbrella term used to describe gender identity, expression, or behavior that falls outside of culturally defined norms associated with a specific gender. In recent years, growing media coverage has heightened public awareness about gender variance in childhood and adolescence, and an increasing number of referrals to clinics specializing in care for gender-variant youth have been reported in the United States. Gender-variant expression, behavior, and identity may present in childhood and adolescence in a number of ways, and youth with gender variance have unique health needs. For those experiencing gender dysphoria, or distress caused by the discordance between biological sex and gender identity, puberty is often an exceptionally challenging time. Pediatric primary care providers may be families' first resource for education and support, and they play a critical role in supporting the health of youth with gender variance by screening for psychosocial problems and health risks, referring for gender-specific mental health and medical care, and providing ongoing advocacy and support.

  4. 10 CFR 1021.343 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Variances. 1021.343 Section 1021.343 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) NATIONAL ENVIRONMENTAL POLICY ACT IMPLEMENTING PROCEDURES Implementing... arrangements for emergency actions having significant environmental impacts. DOE shall document,...

  5. 21 CFR 1010.4 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... the study was conducted in compliance with the good laboratory practice regulations set forth in part... application for variance shall include the following information: (i) A description of the product and its... equipment, the proposed location of each unit. (viii) Such other information required by regulation or...

  6. Parameterization of Incident and Infragravity Swash Variance

    NASA Astrophysics Data System (ADS)

    Stockdon, H. F.; Holman, R. A.; Sallenger, A. H.

    2002-12-01

    By clearly defining the forcing and morphologic controls of swash variance in both the incident and infragravity frequency bands, we are able to derive a more complete parameterization for extreme runup that may be applicable to a wide range of beach and wave conditions. It is expected that the dynamics of the incident and infragravity bands will have different dependencies on offshore wave conditions and local beach slopes. For example, previous studies have shown that swash variance in the incident band depends on foreshore beach slope while the infragravity variance depends more on a weighted mean slope across the surf zone. Because the physics of each band is parameterized differently, the amount that each frequency band contributes to the total swash variance will vary from site to site and, often, at a single site as the profile configuration changes over time. Using water level time series (measured at the shoreline) collected during nine dynamically different field experiments, we test the expected behavior of both incident and infragravity swash and the contribution each makes to total variance. At the dissipative sites (Iribarren number, \\xi0, <0.3) located in Oregon and the Netherlands, the incident band swash is saturated with respect to offshore wave height. Conversely, on the intermediate and reflective beaches, the amplitudes of both incident and infragravity swash variance grow with increasing offshore wave height. While infragravity band swash at all sites appears to increase linearly with offshore wave height, the magnitudes of the response are somewhat greater on reflective beaches than on dissipative beaches. This means that for the same offshore wave conditions the swash on a steeper foreshore will be larger than that on a more gently sloping foreshore. 
The potential control of the surf zone slope on infragravity band swash is examined at Duck, North Carolina, (0.3 < \\xi0 < 4.0), where significant differences in the relationship between swash
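The dissipative/reflective classification used above rests on the deep-water Iribarren number. A small sketch using the standard definition (the slope, wave height, and period values below are illustrative, not data from the field experiments):

```python
import numpy as np

def iribarren_number(beach_slope, wave_height, wave_period):
    """Deep-water Iribarren number xi_0 = tan(beta) / sqrt(H0 / L0),
    with deep-water wavelength L0 = g T^2 / (2 pi). Standard definition;
    the inputs used below are illustrative values only."""
    g = 9.81
    wavelength = g * wave_period ** 2 / (2.0 * np.pi)
    return beach_slope / np.sqrt(wave_height / wavelength)

# Dissipative (xi_0 < 0.3) versus reflective conditions, per the threshold above.
xi_dissipative = iribarren_number(0.02, 2.0, 10.0)   # gentle slope, large waves
xi_reflective = iribarren_number(0.10, 1.0, 12.0)    # steep foreshore
```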

  7. 42 CFR 456.525 - Request for renewal of variance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from...

  8. 42 CFR 456.525 - Request for renewal of variance.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from...

  9. 42 CFR 456.521 - Conditions for granting variance requests.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from...

  10. 42 CFR 456.521 - Conditions for granting variance requests.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from...

  11. Evaluation of climate modeling factors impacting the variance of streamflow

    NASA Astrophysics Data System (ADS)

    Al Aamery, N.; Fox, J. F.; Snyder, M.

    2016-11-01

    The present contribution quantifies the relative importance of climate modeling factors and chosen response variables upon controlling the variance of streamflow forecasted with global climate model (GCM) projections, which has not been attempted in previous literature to our knowledge. We designed an experiment that varied climate modeling factors, including GCM type, project phase, emission scenario, downscaling method, and bias correction. The streamflow response variable was also varied and included forecasted streamflow and the difference between forecast and hindcast streamflow predictions. GCM results and the Soil Water Assessment Tool (SWAT) were used to predict streamflow for a wet, temperate watershed in central Kentucky, USA. After calibrating the streamflow model, 112 climate realizations were simulated within the streamflow model and then analyzed on a monthly basis using analysis of variance. Analysis of variance results indicate that the difference between forecast and hindcast streamflow predictions is a function of GCM type, climate model project phase, and downscaling approach. The prediction of forecasted streamflow is a function of GCM type, project phase, downscaling method, emission scenario, and bias correction method. The results indicate the relative importance of the five climate modeling factors when designing streamflow prediction ensembles and quantify the reduction in uncertainty associated with coupling the climate results with the hydrologic model when subtracting the hindcast simulations. Thereafter, analysis of streamflow prediction ensembles with different numbers of realizations shows that use of all available realizations is unnecessary for the study system, so long as the ensemble design is well balanced. After accounting for the factors controlling streamflow variance, results show that predicted average monthly change in streamflow tends to follow precipitation changes and result in a net increase in the average annual precipitation and
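The sum-of-squares bookkeeping behind an analysis of variance like this can be sketched for a single factor (a simplified illustration: the "GCM type" groups and values below are hypothetical, not SWAT output):

```python
import numpy as np

# One-way analysis-of-variance decomposition for attributing streamflow
# variance to a single factor (hypothetical "GCM type" with three levels).
rng = np.random.default_rng(7)
groups = [rng.normal(mu, 1.0, size=40) for mu in (10.0, 12.0, 15.0)]

grand_mean = np.concatenate(groups).mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
ss_total = ((np.concatenate(groups) - grand_mean) ** 2).sum()

# F statistic: between-group mean square over within-group mean square;
# a large value means the factor explains much of the variance.
df_between = len(groups) - 1
df_within = sum(len(g) for g in groups) - len(groups)
f_stat = (ss_between / df_between) / (ss_within / df_within)
```

The identity ss_total = ss_between + ss_within is exactly the variance partition that lets each factor's contribution be quantified.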

  12. A surface layer variance heat budget for ENSO

    NASA Astrophysics Data System (ADS)

    Boucharel, Julien; Timmermann, Axel; Santoso, Agus; England, Matthew H.; Jin, Fei-Fei; Balmaseda, Magdalena A.

    2015-05-01

    Characteristics of the El Niño-Southern Oscillation (ENSO), such as frequency, propagation, spatial extent, and amplitude, strongly depend on the climatological background state of the tropical Pacific. Multidecadal changes in the ocean mean state are hence likely to modulate ENSO properties. To better link background state variations with low-frequency amplitude changes of ENSO, we develop a diagnostic framework that determines locally the contributions of different physical feedback terms on the ocean surface temperature variance. Our analysis shows that multidecadal changes of ENSO variance originate from the delicate balance between the background-state-dependent positive thermocline feedback and the atmospheric damping of sea surface temperature anomalies. The role of higher-order processes and atmospheric and oceanic nonlinearities is also discussed. The diagnostic tool developed here can be easily applied to other tropical ocean areas and climate phenomena.

  13. Variance and skewness in the FIRST survey

    NASA Astrophysics Data System (ADS)

    Magliocchetti, M.; Maddox, S. J.; Lahav, O.; Wall, J. V.

    1998-10-01

    We investigate the large-scale clustering of radio sources in the FIRST 1.4-GHz survey by analysing the distribution function (counts in cells). We select a reliable sample from the FIRST catalogue, paying particular attention to the problem of how to define single radio sources from the multiple components listed. We also consider the incompleteness of the catalogue. We estimate the angular two-point correlation function w(theta), the variance Psi_2 and skewness Psi_3 of the distribution for the various subsamples chosen on different criteria. Both w(theta) and Psi_2 show power-law behaviour with an amplitude corresponding to a spatial correlation length of r_0~10h^-1Mpc. We detect significant skewness in the distribution, the first such detection in radio surveys. This skewness is found to be related to the variance through Psi_3=S_3(Psi_2)^alpha, with alpha=1.9+/-0.1, consistent with the non-linear gravitational growth of perturbations from primordial Gaussian initial conditions. We show that the amplitude of variance and the skewness are consistent with realistic models of galaxy clustering.
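The counts-in-cells estimators of Psi_2 and Psi_3 can be sketched as follows (a simplified illustration, not the survey pipeline: it assumes Poisson sampling of an underlying density field and demonstrates the shot-noise correction on an unclustered sample, where both statistics vanish):

```python
import numpy as np

def counts_in_cells_moments(counts):
    """Shot-noise-corrected variance (Psi_2) and skewness (Psi_3) estimators
    for counts in cells, assuming Poisson sampling of an underlying field."""
    nbar = counts.mean()
    m2 = np.mean((counts - nbar) ** 2)
    m3 = np.mean((counts - nbar) ** 3)
    psi2 = (m2 - nbar) / nbar**2               # subtract Poisson shot noise
    psi3 = (m3 - 3 * m2 + 2 * nbar) / nbar**3  # shot-noise-corrected skewness
    return psi2, psi3

# For a pure Poisson (unclustered) sample both estimators are consistent with
# zero; a clustered sample would instead show Psi_3 ~ S_3 Psi_2^alpha.
rng = np.random.default_rng(3)
counts = rng.poisson(lam=10.0, size=100_000)
psi2, psi3 = counts_in_cells_moments(counts)
```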

  14. Variance Reduction Factor of Nuclear Data for Integral Neutronics Parameters

    SciTech Connect

    Chiba, G. Tsuji, M.; Narabayashi, T.

    2015-01-15

    We propose a new quantity, a variance reduction factor, to identify nuclear data for which further improvements are required to reduce uncertainties of target integral neutronics parameters. Important energy ranges can be also identified with this variance reduction factor. Variance reduction factors are calculated for several integral neutronics parameters. The usefulness of the variance reduction factors is demonstrated.

  15. Atmospheric composition 1 million years ago from blue ice in the Allan Hills, Antarctica.

    PubMed

    Higgins, John A; Kurbatov, Andrei V; Spaulding, Nicole E; Brook, Ed; Introne, Douglas S; Chimiak, Laura M; Yan, Yuzhen; Mayewski, Paul A; Bender, Michael L

    2015-06-02

    Here, we present direct measurements of atmospheric composition and Antarctic climate from the mid-Pleistocene (∼1 Ma) from ice cores drilled in the Allan Hills blue ice area, Antarctica. The 1-Ma ice is dated from the deficit in (40)Ar relative to the modern atmosphere and is present as a stratigraphically disturbed 12-m section at the base of a 126-m ice core. The 1-Ma ice appears to represent most of the amplitude of contemporaneous climate cycles and CO2 and CH4 concentrations in the ice range from 221 to 277 ppm and 411 to 569 parts per billion (ppb), respectively. These concentrations, together with measured δD of the ice, are at the warm end of the field for glacial-interglacial cycles of the last 800 ky and span only about one-half of the range. The highest CO2 values in the 1-Ma ice fall within the range of interglacial values of the last 400 ka but are up to 7 ppm higher than any interglacial values between 450 and 800 ka. The lowest CO2 values are 30 ppm higher than during any glacial period between 450 and 800 ka. This study shows that the coupling of Antarctic temperature and atmospheric CO2 extended into the mid-Pleistocene and demonstrates the feasibility of discontinuously extending the current ice core record beyond 800 ka by shallow coring in Antarctic blue ice areas.

  16. Atmospheric composition 1 million years ago from blue ice in the Allan Hills, Antarctica

    PubMed Central

    Higgins, John A.; Kurbatov, Andrei V.; Spaulding, Nicole E.; Brook, Ed; Introne, Douglas S.; Chimiak, Laura M.; Yan, Yuzhen; Mayewski, Paul A.; Bender, Michael L.

    2015-01-01

    Here, we present direct measurements of atmospheric composition and Antarctic climate from the mid-Pleistocene (∼1 Ma) from ice cores drilled in the Allan Hills blue ice area, Antarctica. The 1-Ma ice is dated from the deficit in 40Ar relative to the modern atmosphere and is present as a stratigraphically disturbed 12-m section at the base of a 126-m ice core. The 1-Ma ice appears to represent most of the amplitude of contemporaneous climate cycles and CO2 and CH4 concentrations in the ice range from 221 to 277 ppm and 411 to 569 parts per billion (ppb), respectively. These concentrations, together with measured δD of the ice, are at the warm end of the field for glacial–interglacial cycles of the last 800 ky and span only about one-half of the range. The highest CO2 values in the 1-Ma ice fall within the range of interglacial values of the last 400 ka but are up to 7 ppm higher than any interglacial values between 450 and 800 ka. The lowest CO2 values are 30 ppm higher than during any glacial period between 450 and 800 ka. This study shows that the coupling of Antarctic temperature and atmospheric CO2 extended into the mid-Pleistocene and demonstrates the feasibility of discontinuously extending the current ice core record beyond 800 ka by shallow coring in Antarctic blue ice areas. PMID:25964367

  17. Observation, Inference, and Imagination: Elements of Edgar Allan Poe's Philosophy of Science

    NASA Astrophysics Data System (ADS)

    Gelfert, Axel

    2014-03-01

    Edgar Allan Poe's standing as a literary figure, who drew on (and sometimes dabbled in) the scientific debates of his time, makes him an intriguing character for any exploration of the historical interrelationship between science, literature and philosophy. His sprawling `prose-poem' Eureka (1848), in particular, has sometimes been scrutinized for anticipations of later scientific developments. By contrast, the present paper argues that it should be understood as a contribution to the raging debates about scientific methodology at the time. This methodological interest, which is echoed in Poe's `tales of ratiocination', gives rise to a proposed new mode of—broadly abductive—inference, which Poe attributes to the hybrid figure of the `poet-mathematician'. Without creative imagination and intuition, Science would necessarily remain incomplete, even by its own standards. This concern with imaginative (abductive) inference ties in nicely with his coherentism, which grants pride of place to the twin virtues of Simplicity and Consistency, which must constrain imagination lest it degenerate into mere fancy.

  18. A legacy in 20th-century medicine: Robert Allan Phillips and the taming of cholera.

    PubMed

    Savarino, Stephen J

    2002-09-15

    The legacy of Captain Robert Allan Phillips (1906-1976) was to establish effective, evidence-based rehydration methods for the treatment of cholera. As a Navy Lieutenant at the Rockefeller Institute for Medical Research (New York, New York) during World War II, Phillips developed a field method for the rapid assessment of fluid loss in wounded servicemen. After the war, he championed the establishment of United States Naval Medical Research Unit (NAMRU)-3 (Cairo; 1946) and NAMRU-2 (Taipei; 1955), serving at the helm of both units. Phillips embarked on cholera studies during the 1947 Egyptian cholera epidemic and brought them to maturity at NAMRU-2 (1958-1965), elucidating the pathophysiologic derangements induced by cholera and developing highly efficacious methods of intravenous rehydration. His conception of a simpler cholera treatment was realized in the late 1960s with the development of glucose-based oral rehydration therapy, a monumental breakthrough to which many other investigators made vital contributions. Today, these simple advances have been integrated into everyday medical practice across the globe, saving millions of lives annually.

  19. Minimum variance and variance of outgoing quality limit MDS-1(c1, c2) plans

    NASA Astrophysics Data System (ADS)

    Raju, C.; Vidya, R.

    2016-06-01

    In this article, the outgoing quality (OQ) and total inspection (TI) of multiple deferred state sampling plans MDS-1(c1,c2) are studied. Inspection is assumed to follow a rejection-rectification scheme. Procedures for designing MDS-1(c1,c2) sampling plans with minimum variance of OQ and TI are developed. A procedure for obtaining a plan for a designated upper limit for the variance of the OQ (VOQL) is outlined.

  20. Proposed Ice Flow, Given 200m and 400m Additional Ice in the Allan Hills Region, Antarctica: Implications for Meteorite Concentration

    NASA Astrophysics Data System (ADS)

    Traub-Metlay, S.; Cassidy, W. A.

    1992-07-01

    The Allan Hills-David Glacier region contains some of the most highly populated meteorite stranding surfaces in Antarctica. Nearly 2000 meteorites have to date been collected from the icefields associated with the Allan Hills, and nearly 1500 from areas around Elephant Moraine. While much attention has been focused on the current geological and glaciological conditions of these stranding surfaces, less work has been done concerning what they may have looked like in the past, when ice thicknesses may have been greater. In this study, conjectural maps of the current Allan Hills area with 200 meters and 400 meters of additional ice cover each are analyzed for probable regional and local ice flow patterns. A dramatic decrease in ice thickness over a relatively brief period of time could result either from climatic change or a geologically rapid regional uplift. Delisle and Sievers (1991) noted that the valley between the Allan Hills Main Icefield and the Allan Hills resembles a half-graben resulting from east-west extensional tectonics, and that the mesa-like bedrock features associated with the Near Western and Mid Western Icefields resemble fault blocks. They concluded that the Allan Hills area icefields may have become active stranding surfaces as a result of a regional uplift within the past 1-2 million years, assuming a current rate of uplift in the Allan Hills region of ~100 meters/million years. Whether the cause was climatic or tectonic, generalized maps of current ice contours plus 400 and 200 meters ice may provide views of what the Allan Hills region looked like just before activation of the modern meteorite stranding surfaces (Figs. 1 and 2). At an ice thickness greater by 400 meters, ice could flow smoothly over the Allan Hills and would drain down to the Mawson Glacier via the Odell Glacier, east of the Allan Hills; down the Manhaul Bay depression between the east and west arms of Allan Hills; and down the half-graben discovered by Delisle and Sievers

  1. Modality-Driven Classification and Visualization of Ensemble Variance

    SciTech Connect

    Bensema, Kevin; Gosink, Luke; Obermaier, Harald; Joy, Kenneth I.

    2016-10-01

    Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.
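One simple stand-in for per-location modality classification (not the paper's classifier) is Sarle's bimodality coefficient, (skewness^2 + 1) / kurtosis: values above the uniform-distribution benchmark of 5/9 flag possible bi- or multimodality in the distribution of ensemble predictions at a location:

```python
import numpy as np

def bimodality_coefficient(x):
    """Sarle's bimodality coefficient: (skewness^2 + 1) / kurtosis.
    Values above 5/9 (the uniform-distribution benchmark) suggest bi- or
    multimodality. A crude stand-in for a full modality classifier."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    skew = np.mean(z ** 3)
    kurt = np.mean(z ** 4)          # Pearson (non-excess) kurtosis
    return (skew ** 2 + 1.0) / kurt

rng = np.random.default_rng(5)
unimodal = rng.normal(0.0, 1.0, 10_000)               # ensemble members agree
bimodal = np.concatenate([rng.normal(-5, 1, 5_000),
                          rng.normal(+5, 1, 5_000)])  # two divergent trends

b_uni = bimodality_coefficient(unimodal)   # near 1/3 for a Gaussian
b_bi = bimodality_coefficient(bimodal)     # well above 5/9 for two modes
```

A high-variance location whose ensemble distribution scores above the threshold reflects divergent trends rather than a single noisy tendency, which is the distinction summary statistics alone cannot make.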

  2. Dynamic Programming Using Polar Variance for Image Segmentation.

    PubMed

    Rosado-Toro, Jose A; Altbach, Maria I; Rodriguez, Jeffrey J

    2016-10-06

    When using polar dynamic programming (PDP) for image segmentation, the object size is one of the main features used. This is because if size is left unconstrained the final segmentation may include high-gradient regions that are not associated with the object. In this paper, we propose a new feature, polar variance, which allows the algorithm to segment objects of different sizes without the need for training data. The polar variance is the variance in a polar region between a user-selected origin and a pixel we want to analyze. We also incorporate a new technique that allows PDP to segment complex shapes by finding low-gradient regions and growing them. The experimental analysis consisted of comparing our technique with different active contour segmentation techniques in a series of tests. The tests covered robustness to additive Gaussian noise, segmentation accuracy with different grayscale images and, finally, robustness to algorithm-specific parameters. Experimental results show that our technique performs favorably when compared to other segmentation techniques.
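The intuition can be sketched by computing a variance along the radial segment from the origin to a candidate pixel (a simplified reading of the "polar region"; the paper's exact region definition may differ). Inside a homogeneous object the statistic stays near zero no matter how far the pixel is from the origin, which is why no size constraint is needed:

```python
import numpy as np

def polar_variance(img, origin, pixel, n_samples=64):
    """Variance of intensities sampled along the radial segment from a
    user-chosen origin to a candidate pixel (simplified illustration of the
    polar-variance feature; not the paper's exact region definition)."""
    r0, c0 = origin
    r1, c1 = pixel
    t = np.linspace(0.0, 1.0, n_samples)
    rows = np.rint(r0 + t * (r1 - r0)).astype(int)
    cols = np.rint(c0 + t * (c1 - c0)).astype(int)
    return float(np.var(img[rows, cols]))

# A uniform object (left half 0) with a boundary at column 16.
img = np.zeros((32, 32))
img[:, 16:] = 1.0
v_inside = polar_variance(img, (16, 0), (16, 12))   # path stays in the object
v_across = polar_variance(img, (16, 0), (16, 30))   # path crosses the edge
```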

  3. Is fMRI "noise" really noise? Resting state nuisance regressors remove variance with network structure.

    PubMed

    Bright, Molly G; Murphy, Kevin

    2015-07-01

    Noise correction is a critical step towards accurate mapping of resting state BOLD fMRI connectivity. Noise sources related to head motion or physiology are typically modelled by nuisance regressors, and a generalised linear model is applied to regress out the associated signal variance. In this study, we use independent component analysis (ICA) to characterise the data variance typically discarded in this pre-processing stage in a cohort of 12 healthy volunteers. The signal variance removed by 24, 12, 6, or only 3 head motion parameters demonstrated network structure typically associated with functional connectivity, and certain networks were discernable in the variance extracted by as few as 2 physiologic regressors. Simulated nuisance regressors, unrelated to the true data noise, also removed variance with network structure, indicating that any group of regressors that randomly sample variance may remove highly structured "signal" as well as "noise." Furthermore, to support this we demonstrate that random sampling of the original data variance continues to exhibit robust network structure, even when as few as 10% of the original volumes are considered. Finally, we examine the diminishing returns of increasing the number of nuisance regressors used in pre-processing, showing that excessive use of motion regressors may do little better than chance in removing variance within a functional network. It remains an open challenge to understand the balance between the benefits and confounds of noise correction using nuisance regressors.
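The nuisance-regression step described above amounts to an ordinary-least-squares projection. A minimal sketch (synthetic data, one hypothetical "head motion" regressor, not the study's pipeline):

```python
import numpy as np

def regress_out(data, regressors):
    """Remove the variance explained by nuisance regressors from a time
    series via ordinary least squares (the generalised-linear-model step)."""
    X = np.column_stack([np.ones(len(data)), *regressors])  # add an intercept
    beta, *_ = np.linalg.lstsq(X, data, rcond=None)
    return data - X @ beta

rng = np.random.default_rng(11)
n_t = 300
motion = rng.standard_normal(n_t)                 # one "head motion" regressor
signal = 0.8 * motion + rng.standard_normal(n_t)  # voxel partly tracks motion

cleaned = regress_out(signal, [motion])
removed = signal.var() - cleaned.var()            # variance removed by the GLM
```

The paper's caution applies here: whatever variance the regressors happen to span is removed, whether it is noise or structured signal.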

  4. Spatial variances of wind fields and their relation to second-order structure functions and spectra

    NASA Astrophysics Data System (ADS)

    Vogelzang, Jur; King, Gregory P.; Stoffelen, Ad

    2015-02-01

    Kinetic energy variance as a function of spatial scale for wind fields is commonly estimated either using second-order structure functions (in the spatial domain) or by spectral analysis (in the frequency domain). Both techniques give an order-of-magnitude estimate. More accurate estimates are given by a statistic called spatial variance. Spatial variances have a clear interpretation and are tolerant of missing data. They can be related to second-order structure functions, both for discrete and continuous data. Spatial variances can also be Fourier transformed to yield a relation with spectra. The flexibility of spatial variances is used to study various sampling strategies, and to compare them with second-order structure functions and spectral variances. It is shown that the spectral sampling strategy is not seriously biased to calm conditions for scatterometer ocean surface vector winds. When the second-order structure function behaves like r^p, its ratio with the spatial variance equals (p+1)(p+2). Ocean surface winds in the tropics have p between 2/3 and 1, so one-sixth to one-fifth of the second-order structure function value is a good proxy for the cumulative variance.
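Estimating a second-order structure function and its power-law exponent p can be sketched on a synthetic 1-D series (an illustration, not the paper's wind data: a random walk is used because its structure function grows linearly, i.e. p = 1, the upper end of the tropical range quoted above):

```python
import numpy as np

def structure_function(u, lags):
    """Second-order structure function D2(r) = <(u(x + r) - u(x))^2>."""
    return np.array([np.mean((u[r:] - u[:-r]) ** 2) for r in lags])

# Synthetic series: a random walk, for which D2(r) is proportional to r (p = 1).
rng = np.random.default_rng(2)
u = np.cumsum(rng.standard_normal(100_000))

lags = np.array([1, 2, 4, 8, 16, 32, 64])
d2 = structure_function(u, lags)
p = np.polyfit(np.log(lags), np.log(d2), 1)[0]   # fitted power-law exponent
```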

  5. Visual SLAM Using Variance Grid Maps

    NASA Technical Reports Server (NTRS)

    Howard, Andrew B.; Marks, Tim K.

    2011-01-01

    An algorithm denoted Gamma-SLAM performs further processing, in real time, of preprocessed digitized images acquired by a stereoscopic pair of electronic cameras aboard an off-road robotic ground vehicle to build accurate maps of the terrain and determine the location of the vehicle with respect to the maps. Part of the name of the algorithm reflects the fact that the process of building the maps and determining the location with respect to them is denoted simultaneous localization and mapping (SLAM). Most prior real-time SLAM algorithms have been limited in applicability to (1) systems equipped with scanning laser range finders as the primary sensors in (2) indoor environments (or relatively simply structured outdoor environments). The few prior vision-based SLAM algorithms have been feature-based and not suitable for real-time applications and, hence, not suitable for autonomous navigation on irregularly structured terrain. The Gamma-SLAM algorithm incorporates two key innovations: Visual odometry (in contradistinction to wheel odometry) is used to estimate the motion of the vehicle. An elevation variance map (in contradistinction to an occupancy or an elevation map) is used to represent the terrain. The Gamma-SLAM algorithm makes use of a Rao-Blackwellized particle filter (RBPF) from Bayesian estimation theory for maintaining a distribution over poses and maps. The core idea of the RBPF approach is that the SLAM problem can be factored into two parts: (1) finding the distribution over robot trajectories, and (2) finding the map conditioned on any given trajectory. The factorization involves the use of a particle filter in which each particle encodes both a possible trajectory and a map conditioned on that trajectory. The base estimate of the trajectory is derived from visual odometry, and the map conditioned on that trajectory is a Cartesian grid of elevation variances. 
In comparison with traditional occupancy or elevation grid maps, the grid elevation variance
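The elevation-variance map representation can be sketched by binning 3-D points into a Cartesian grid and storing the per-cell variance of elevation (a simplified illustration of the data structure, not the Gamma-SLAM code; the cell size, grid shape, and point clouds below are arbitrary):

```python
import numpy as np

def elevation_variance_grid(points, cell_size, shape):
    """Bin 3-D points (x, y, z) into a Cartesian grid and store the variance
    of elevation (z) in each cell -- a simplified version of the elevation
    variance map described above. Cells with no points hold NaN."""
    grid_sum = np.zeros(shape)
    grid_sq = np.zeros(shape)
    grid_n = np.zeros(shape)
    ix = (points[:, 0] // cell_size).astype(int)
    iy = (points[:, 1] // cell_size).astype(int)
    np.add.at(grid_sum, (ix, iy), points[:, 2])
    np.add.at(grid_sq, (ix, iy), points[:, 2] ** 2)
    np.add.at(grid_n, (ix, iy), 1.0)
    with np.errstate(invalid="ignore"):
        return grid_sq / grid_n - (grid_sum / grid_n) ** 2

# Flat ground in cell (0, 0); a rough obstacle in cell (1, 1).
rng = np.random.default_rng(9)
flat = np.column_stack([rng.uniform(0, 1, 200), rng.uniform(0, 1, 200),
                        np.full(200, 0.1)])
rough = np.column_stack([rng.uniform(1, 2, 200), rng.uniform(1, 2, 200),
                         rng.uniform(0.0, 2.0, 200)])
grid = elevation_variance_grid(np.vstack([flat, rough]), 1.0, (2, 2))
```

Low variance marks traversable ground; high variance marks rough or vertical structure, which is the information an occupancy grid cannot carry.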

  6. Clinical and Molecular Characteristics of SLC16A2 (MCT8) Mutations in Three Families with the Allan-Herndon-Dudley Syndrome.

    PubMed

    Novara, Francesca; Groeneweg, Stefan; Freri, Elena; Estienne, Margherita; Reho, Paolo; Matricardi, Sara; Castellotti, Barbara; Visser, W Edward; Zuffardi, Orsetta; Visser, Theo J

    2017-03-01

    Mutations in the thyroid hormone transporter SLC16A2 (MCT8) cause the Allan-Herndon-Dudley Syndrome (AHDS), characterized by severe psychomotor retardation and peripheral thyrotoxicosis. Here, we report three newly identified AHDS patients. Previously documented mutations were identified in probands 1 (p.R271H) and 2 (p.G564R), resulting in a severe clinical phenotype. A novel mutation (p.G564E) was identified in proband 3, affecting the same Gly564 residue, but resulting in a relatively mild clinical phenotype. Functional analysis in transiently transfected COS-1 and JEG-3 cells showed a near-complete inactivation of TH transport for p.G564R, whereas considerable cell-type-dependent residual transport activity was observed for p.G564E. Both mutants showed a strong decrease in protein expression levels, but differentially affected Vmax and Km values of T3 transport. Our findings illustrate that different mutations affecting the same residue may have a differential impact on SLC16A2 transporter function, which translates into differences in severity of the clinical phenotype.

  7. River meanders - Theory of minimum variance

    USGS Publications Warehouse

    Langbein, Walter Basil; Leopold, Luna Bergere

    1966-01-01

    Meanders are the result of erosion-deposition processes tending toward the most stable form in which the variability of certain essential properties is minimized. This minimization involves the adjustment of the planimetric geometry and the hydraulic factors of depth, velocity, and local slope. The planimetric geometry of a meander is that of a random walk whose most frequent form minimizes the sum of the squares of the changes in direction in each successive unit length. The direction angles are then sine functions of channel distance. This yields a meander shape typically present in meandering rivers and has the characteristic that the ratio of meander length to average radius of curvature in the bend is 4.7. Depth, velocity, and slope are shown by field observations to be adjusted so as to decrease the variance of shear and the friction factor in a meander curve over that in an otherwise comparable straight reach of the same river. Since theory and observation indicate meanders achieve the minimum variance postulated, it follows that for channels in which alternating pools and riffles occur, meandering is the most probable form of channel geometry and thus is more stable geometry than a straight or nonmeandering alinement.
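The "direction angles are sine functions of channel distance" claim describes the sine-generated curve, which can be checked numerically (an illustrative sketch; the maximum angle and wavelength chosen below are arbitrary, not values from the paper):

```python
import numpy as np

# Sine-generated curve: the channel's direction angle is a sine function of
# distance s along the channel, theta(s) = omega * sin(2*pi*s / M). Over one
# meander wavelength the lateral displacement integrates to zero, so the
# channel winds symmetrically about its down-valley axis.
omega = np.deg2rad(110.0)   # maximum angle with the down-valley axis (arbitrary)
M = 100.0                   # channel distance per meander wavelength (arbitrary)
s = np.linspace(0.0, M, 2001)
ds = s[1] - s[0]
theta = omega * np.sin(2.0 * np.pi * s / M)

dy = np.sum(np.sin(theta)) * ds   # net lateral drift over one wavelength
dx = np.sum(np.cos(theta)) * ds   # net down-valley advance
sinuosity = M / dx                # channel length over straight-line distance
```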

  8. Variance and Skewness in the FIRST Survey

    NASA Astrophysics Data System (ADS)

    Magliocchetti, M.; Maddox, S. J.; Lahav, O.; Wall, J. V.

    We investigate the large-scale clustering of radio sources by analysing the distribution function of the FIRST 1.4-GHz survey. We select a reliable galaxy sample from the FIRST catalogue, paying particular attention to the definition of single radio sources from the multiple components listed in the catalogue. We estimate the variance, Ψ2, and skewness, Ψ3, of the distribution function for the best galaxy subsample. Ψ2 shows power-law behaviour as a function of cell size, with an amplitude corresponding to a spatial correlation length of r0 ~10 h-1 Mpc. We detect significant skewness in the distribution, and find that it is related to the variance through the relation Ψ3 = S3 (Ψ2)α with α = 1.9 +/- 0.1, consistent with the non-linear growth of perturbations from primordial Gaussian initial conditions. We show that this amplitude of clustering and the skewness are consistent with realistic models of galaxy clustering.

  9. Hybrid biasing approaches for global variance reduction.

    PubMed

    Wu, Zeyun; Abdel-Khalik, Hany S

    2013-02-01

    A new variant of a Monte Carlo-deterministic (DT) hybrid variance reduction approach based on Gaussian process theory is presented for accelerating the convergence of Monte Carlo simulation, and is compared with the Forward-Weighted Consistent Adjoint Driven Importance Sampling (FW-CADIS) approach implemented in the SCALE package from Oak Ridge National Laboratory. The new approach, denoted the Gaussian process approach, treats the responses of interest as normally distributed random processes. The Gaussian process approach improves the selection of the weight windows of simulated particles by identifying a subspace that captures the dominant sources of statistical response variations. Like the FW-CADIS approach, the Gaussian process approach utilizes particle importance maps obtained from deterministic adjoint models to derive weight-window biasing. In contrast to the FW-CADIS approach, the Gaussian process approach identifies the response correlations (via a covariance matrix) and employs them to reduce the computational overhead required for global variance reduction (GVR) purposes. The effective rank of the covariance matrix identifies the minimum number of uncorrelated pseudo responses, which are employed to bias simulated particles. Numerical experiments, serving as a proof of principle, are presented to compare the Gaussian process and FW-CADIS approaches in terms of the global reduction in the standard deviation of the estimated responses.

  10. Carbonates in fractures of Martian meteorite Allan Hills 84001: petrologic evidence for impact origin

    NASA Technical Reports Server (NTRS)

    Scott, E. R.; Krot, A. N.; Yamaguchi, A.

    1998-01-01

    Carbonates in Martian meteorite Allan Hills 84001 occur as grains on pyroxene grain boundaries, in crushed zones, and as disks, veins, and irregularly shaped grains in healed pyroxene fractures. Some carbonate disks have tapered Mg-rich edges and are accompanied by smaller, thinner, and relatively homogeneous magnesite microdisks. Except for the microdisks, all types of carbonate grains show the same unique chemical zoning pattern on MgCO3-FeCO3-CaCO3 plots. This chemical characteristic and the close spatial association of diverse carbonate types show that all carbonates formed by a similar process. The heterogeneous distribution of carbonates in fractures, tapered shapes of some disks, and the localized occurrence of Mg-rich microdisks appear to be incompatible with growth from an externally derived CO2-rich fluid that changed in composition over time. These features suggest instead that the fractures were closed as carbonates grew from an internally derived fluid and that the microdisks formed from a residual Mg-rich fluid that was squeezed along fractures. Carbonate in pyroxene fractures is most abundant near grains of plagioclase glass that are located on pyroxene grain boundaries and commonly contain major or minor amounts of carbonate. We infer that carbonates in fractures formed from grain boundary carbonates associated with plagioclase that were melted by impact and dispersed into the surrounding fractured pyroxene. Carbonates in fractures, which include those studied by McKay et al. (1996), could not have formed at low temperatures and preserved mineralogical evidence for Martian organisms.

  11. Variance of wind estimates using spaced antenna techniques with the MU radar

    NASA Astrophysics Data System (ADS)

    Hassenpflug, G.; Yamamoto, M.; Fukao, S.

    2004-11-01

    Variance of horizontal wind estimates in conditions of anisotropic scattering is obtained for the Spaced Antenna (SA) Full Correlation Analysis (FCA) method of Holloway et al. (1997b) and Doviak et al. (1996); the results are equally applicable to the Briggs method of FCA. Variance and covariance of cross-correlation magnitudes are theoretically estimated, and the standard theory of error propagation is used to estimate the variance of the wind components for the infinite-SNR case. The effect of baseline orientation is investigated, and experimental data from the MU radar in Japan are presented.

  12. Fringe biasing: A variance reduction technique for optically thick meshes

    SciTech Connect

    Smedley-Stevenson, R. P.

    2013-07-01

    Fringe biasing is a stratified sampling scheme applicable to Monte Carlo thermal radiation transport codes. The thermal emission source in optically thick cells is partitioned into separate contributions from the cell interiors (where the likelihood of the particles escaping the cells is virtually zero) and the 'fringe' regions close to the cell boundaries. Thermal emission in the cell interiors can now be modelled with fewer particles, the remaining particles being concentrated in the fringes so that they are more likely to contribute to the energy exchange between cells. Unlike other techniques for improving the efficiency in optically thick regions (such as random walk and discrete diffusion treatments), fringe biasing has the benefit of simplicity, as the associated changes are restricted to the sourcing routines with the particle tracking routines being unaffected. This paper presents an analysis of the potential for variance reduction achieved from employing the fringe biasing technique. The aim of this analysis is to guide the implementation of this technique in Monte Carlo thermal radiation codes, specifically in order to aid the choice of the fringe width and the proportion of particles allocated to the fringe (which are interrelated) in multi-dimensional simulations, and to confirm that the significant levels of variance reduction achieved in simulations can be understood by studying the behaviour for simple test cases. The variance reduction properties are studied for a single cell in a slab geometry purely absorbing medium, investigating the accuracy of the scalar flux and current tallies on one of the interfaces with the surrounding medium. (authors)
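
The stratified idea can be illustrated with a toy one-dimensional analogue (a sketch under assumed values: a purely absorbing cell of optical thickness 5, a fringe width of 0.4 and 80% of particles allocated to the fringe; none of these numbers come from the paper). Each stratum's sample mean is weighted by that stratum's share of the emission volume, so the estimate stays unbiased while most particles are spent where escape is likely:

```python
import math
import random

random.seed(42)

def escape_prob(x, sigma=5.0, L=1.0):
    """Probability that a particle emitted at position x reaches the
    right face of a purely absorbing slab of width L (attenuation sigma)."""
    return math.exp(-sigma * (L - x))

def stratified_estimate(n, fringe=0.4, frac_fringe=0.8, sigma=5.0, L=1.0):
    """Fringe-biased (stratified) estimate of the cell-average escape
    probability: few samples in the interior, most in the fringe."""
    n_f = int(n * frac_fringe)
    n_i = n - n_f
    interior_avg = sum(escape_prob(random.uniform(0.0, L - fringe), sigma, L)
                       for _ in range(n_i)) / n_i
    fringe_avg = sum(escape_prob(random.uniform(L - fringe, L), sigma, L)
                     for _ in range(n_f)) / n_f
    # weight each stratum mean by its share of the emission volume
    return interior_avg * (L - fringe) / L + fringe_avg * fringe / L

exact = (1.0 - math.exp(-5.0)) / 5.0   # analytic cell-average escape probability
est = stratified_estimate(20000)
```

Because the interior contributes almost nothing to the escape tally, concentrating particles in the fringe reduces the variance of the estimate without biasing it.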

  13. Robust Variance Estimation with Dependent Effect Sizes: Practical Considerations Including a Software Tutorial in Stata and SPSS

    ERIC Educational Resources Information Center

    Tanner-Smith, Emily E.; Tipton, Elizabeth

    2014-01-01

    Methodologists have recently proposed robust variance estimation as one way to handle dependent effect sizes in meta-analysis. Software macros for robust variance estimation in meta-analysis are currently available for Stata (StataCorp LP, College Station, TX, USA) and SPSS (IBM, Armonk, NY, USA), yet there is little guidance for authors regarding…

  14. Bulk and Stable Isotopic Compositions of Carbonate Minerals in Martian Meteorite Allan Hills 84001: No Proof of High Formation Temperature

    NASA Technical Reports Server (NTRS)

    Treiman, Allan H.; Romanek, Christopher S.

    1998-01-01

    Understanding the origin of carbonate minerals in the Martian meteorite Allan Hills (ALH) 84001 is crucial to evaluating the hypothesis that they contain traces of ancient Martian life. Using arguments based on chemical equilibria among carbonates and fluids, an origin at greater than 650 °C (inimical to life) has been proposed. However, the bulk and stable isotopic compositions of the carbonate minerals are open to multiple interpretations and so lend no particular support to a high-temperature origin. Other methods (possibly less direct) will have to be used to determine the formation temperature of the carbonates in ALH 84001.

  15. 40 CFR 59.509 - Can I get a variance?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... its application is complete. (d) The Administrator will issue a variance if the criteria specified in... entity will achieve compliance with this subpart. (f) A variance will cease to be effective upon...

  16. 40 CFR 59.509 - Can I get a variance?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... its application is complete. (d) The Administrator will issue a variance if the criteria specified in... entity will achieve compliance with this subpart. (f) A variance will cease to be effective upon...

  17. Discordance of DNA Methylation Variance Between two Accessible Human Tissues

    PubMed Central

    Jiang, Ruiwei; Jones, Meaghan J.; Chen, Edith; Neumann, Sarah M.; Fraser, Hunter B.; Miller, Gregory E.; Kobor, Michael S.

    2015-01-01

    Population epigenetic studies have been seeking to identify differences in DNA methylation between specific exposures, demographic factors, or diseases in accessible tissues, but relatively little is known about how inter-individual variability differs between these tissues. This study presents an analysis of DNA methylation differences between matched peripheral blood mononuclear cells (PBMCs) and buccal epithelial cells (BECs), the two most accessible tissues for population studies, in 998 promoter-located CpG sites. Specifically we compared probe-wise DNA methylation variance, and how this variance related to demographic factors across the two tissues. PBMCs had overall higher DNA methylation than BECs, and the two tissues tended to differ most at genomic regions of low CpG density. Furthermore, although both tissues showed appreciable probe-wise variability, the specific regions and magnitude of variability differed strongly between tissues. Lastly, through exploratory association analysis, we found indications of differential association of BEC and PBMC methylation with demographic variables. The work presented here offers insight into variability of DNA methylation between individuals and across tissues and helps guide decisions on the suitability of buccal epithelial or peripheral mononuclear cells for the biological questions explored by epigenetic studies in human populations. PMID:25660083

  18. Assessment of the genetic variance of late-onset Alzheimer's disease.

    PubMed

    Ridge, Perry G; Hoyt, Kaitlyn B; Boehme, Kevin; Mukherjee, Shubhabrata; Crane, Paul K; Haines, Jonathan L; Mayeux, Richard; Farrer, Lindsay A; Pericak-Vance, Margaret A; Schellenberg, Gerard D; Kauwe, John S K

    2016-05-01

    Alzheimer's disease (AD) is a complex genetic disorder with no effective treatments. More than 20 common markers associated with AD have been identified. Recently, several rare variants that affect risk for AD have been identified in Amyloid Precursor Protein (APP), Triggering Receptor Expressed On Myeloid Cells 2 (TREM2) and Unc-5 Netrin Receptor C (UNC5C). Despite the many successes, the genetic architecture of AD remains unsolved. We used Genome-wide Complex Trait Analysis to (1) estimate the phenotypic variance explained by genetics; (2) calculate the genetic variance explained by known AD single nucleotide polymorphisms (SNPs); and (3) identify the genomic locations of variation that explain the remaining genetic variance. In total, 53.24% of phenotypic variance is explained by genetics, but known AD SNPs explain only 30.62% of the genetic variance. Of the unexplained genetic variance, approximately 41% is explained by unknown SNPs in regions adjacent to known AD SNPs, and the remainder lies outside these regions.
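
The reported percentages compose multiplicatively. As a worked arithmetic check (Python, using the abstract's figures with total phenotypic variance normalized to 1):

```python
# arithmetic check of the variance components reported in the abstract
phenotypic = 1.0                      # total phenotypic variance (normalized)
genetic = 0.5324 * phenotypic         # 53.24% explained by genetics
known_snps = 0.3062 * genetic         # known AD SNPs: 30.62% of the genetic part
unexplained = genetic - known_snps    # genetic variance not yet explained
adjacent = 0.41 * unexplained         # ~41% lies adjacent to known AD SNPs
elsewhere = unexplained - adjacent    # the rest lies outside those regions
```

So on this normalization, known SNPs account for about 0.163 of phenotypic variance, leaving roughly 0.369 of genetic variance unexplained, of which about 0.151 sits near known loci.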

  19. Considering Oil Production Variance as an Indicator of Peak Production

    DTIC Science & Technology

    2010-06-07

    …Acquisition Cost (IRAC) oil prices. Source: data used to construct the graph acquired from the EIA (http://tonto.eia.doe.gov/country/timeline/oil_chronology.cfm). Production vs. Price – Variance Comparison: oil production variance and oil price variance have never been so far…

  20. A New Nonparametric Levene Test for Equal Variances

    ERIC Educational Resources Information Center

    Nordstokke, David W.; Zumbo, Bruno D.

    2010-01-01

    Tests of the equality of variances are sometimes used on their own to compare variability across groups of experimental or non-experimental conditions but they are most often used alongside other methods to support assumptions made about variances. A new nonparametric test of equality of variances is described and compared to current "gold…

  1. Minimum Variance Approaches to Ultrasound Pixel-Based Beamforming.

    PubMed

    Nguyen, Nghia Q; Prager, Richard W

    2017-02-01

    We analyze the principles underlying minimum variance distortionless response (MVDR) beamforming in order to integrate it into a pixel-based algorithm. There is a challenge posed by the low echo signal-to-noise ratio (eSNR) when calculating beamformer contributions at pixels far away from the beam centreline. Together with the well-known scarcity of samples for covariance matrix estimation, this reduces the beamformer performance and degrades the image quality. To address this challenge, we implement the MVDR algorithm in two different ways. First, we develop the conventional minimum variance pixel-based (MVPB) beamformer that performs the MVDR after the pixel-based superposition step. This involves a combination of methods in the literature, extended over multiple transmits to increase the eSNR. Then we propose the coherent MVPB beamformer, where the MVDR is applied to data within individual transmits. Based on pressure field analysis, we develop new algorithms to improve the data alignment and matrix estimation, and hence overcome the low-eSNR issue. The methods are demonstrated on data acquired with an ultrasound open platform. The results show that the coherent MVPB beamformer substantially outperforms the conventional MVPB in a series of experiments, including phantom and in vivo studies. Compared to the unified pixel-based beamformer, the newest delay-and-sum algorithm in [1], the coherent MVPB performs well on regions that conform to the diffuse-scattering assumptions on which the minimum variance principles are based. It produces poorer results for parts of the image that are dominated by specular reflections.
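
The distortionless constraint at the heart of MVDR, w = R⁻¹a / (aᴴR⁻¹a), can be sketched for the smallest nontrivial case. The following Python sketch uses an assumed 2×2 covariance matrix and steering vector (illustrative values, not from the paper) and checks that the resulting weights pass the steered signal with unit gain:

```python
def mvdr_weights_2x2(R, a):
    """MVDR weights w = R^{-1} a / (a^H R^{-1} a) for a 2-element array.
    R is a 2x2 Hermitian covariance (nested lists of complex numbers),
    a is the steering vector."""
    (r11, r12), (r21, r22) = R
    det = r11 * r22 - r12 * r21
    # explicit 2x2 inverse avoids pulling in a linear-algebra library
    Rinv = [[ r22 / det, -r12 / det],
            [-r21 / det,  r11 / det]]
    Ra = [Rinv[0][0] * a[0] + Rinv[0][1] * a[1],
          Rinv[1][0] * a[0] + Rinv[1][1] * a[1]]
    denom = a[0].conjugate() * Ra[0] + a[1].conjugate() * Ra[1]
    return [Ra[0] / denom, Ra[1] / denom]

# assumed covariance (Hermitian, e.g. diagonally loaded) and broadside steering
R = [[2.0 + 0j, 0.5 + 0.1j],
     [0.5 - 0.1j, 1.5 + 0j]]
a = [1.0 + 0j, 1.0 + 0j]
w = mvdr_weights_2x2(R, a)

# distortionless constraint: w^H a must equal 1
response = w[0].conjugate() * a[0] + w[1].conjugate() * a[1]
```

Among all weight vectors satisfying wᴴa = 1, these weights minimize the output power wᴴRw, which is what suppresses off-axis clutter relative to plain delay-and-sum.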

  2. Regression between earthquake magnitudes having errors with known variances

    NASA Astrophysics Data System (ADS)

    Pujol, Jose

    2016-07-01

    Recent publications on the regression between earthquake magnitudes assume that both magnitudes are affected by error and that only the ratio of error variances is known. If X and Y represent observed magnitudes, and x and y represent the corresponding theoretical values, the problem is to find the a and b of the best-fit line y = a x + b. This problem has a closed solution only for homoscedastic errors (their variances are all equal for each of the two variables). The published solution was derived using a method that cannot provide a sum of squares of residuals. Therefore, it is not possible to compare the goodness of fit for different pairs of magnitudes. Furthermore, the method does not provide expressions for x and y. The least-squares method introduced here does not have these drawbacks. The two methods of solution result in the same equations for a and b. General properties of a that have been discussed in the literature but not proved, or proved only for particular cases, are derived here. A comparison of different expressions for the variances of a and b is provided. The paper also considers the statistical aspects of the ongoing debate regarding the prediction of y given X. Analysis of actual data from the literature shows that the new approach produces an average improvement of less than 0.1 magnitude units over the standard approach when applied to Mw vs. mb and Mw vs. MS regressions. This improvement is minor, within the typical error of Mw. Moreover, a test subset of 100 predicted magnitudes shows that the new approach results in magnitudes closer to the theoretically true magnitudes for only 65% of them. For the remaining 35%, the standard approach produces closer values. Therefore, the new approach does not always give the most accurate magnitude estimates.

  3. Variance of indoor radon concentration: Major influencing factors.

    PubMed

    Yarmoshenko, I; Vasilyev, A; Malinovsky, G; Bossew, P; Žunić, Z S; Onischenko, A; Zhukovsky, M

    2016-01-15

    Variance of radon concentration in the dwelling atmosphere is analysed with regard to geogenic and anthropogenic influencing factors. The analysis includes a review of 81 national and regional indoor radon surveys with varying sampling pattern, sample size and duration of measurements, and a detailed consideration of two regional surveys (Sverdlovsk oblast, Russia and Niška Banja, Serbia). The analysis of the geometric standard deviation (GSD) revealed that the main factors influencing the dispersion of indoor radon concentration over a territory are as follows: area of the territory, sample size, characteristics of the measurement technique, radon geogenic potential, building construction characteristics and living habits. As shown for Sverdlovsk oblast and the town of Niška Banja, the dispersion as quantified by the GSD is reduced by restricting to certain levels of these controlling factors. Application of the developed approach to characterization of the radon exposure of the world population is discussed.

  4. From means and variances to persons and patterns

    PubMed Central

    Grice, James W.

    2015-01-01

    A novel approach for conceptualizing and analyzing data from psychological studies is presented and discussed. This approach is centered on model building in an effort to explicate the structures and processes believed to generate a set of observations. These models therefore go beyond the variable-based, path models in use today which are limiting with regard to the types of inferences psychologists can draw from their research. In terms of analysis, the newer approach replaces traditional aggregate statistics such as means, variances, and covariances with methods of pattern detection and analysis. While these methods are person-centered and do not require parametric assumptions, they are both demanding and rigorous. They also provide psychologists with the information needed to draw the primary inference they often wish to make from their research; namely, the inference to best explanation. PMID:26257672

  5. Speckle variance OCT imaging of the vasculature in live mammalian embryos

    NASA Astrophysics Data System (ADS)

    Sudheendran, N.; Syed, S. H.; Dickinson, M. E.; Larina, I. V.; Larin, K. V.

    2011-03-01

    Live imaging of normal and abnormal vascular development in mammalian embryos is an important tool in embryonic research, which can potentially contribute to the understanding, prevention and treatment of cardiovascular birth defects. Here, we used speckle variance analysis of swept-source optical coherence tomography (OCT) data sets acquired from live mouse embryos to reconstruct the 3-D structure of the embryonic vasculature. Both Doppler OCT and speckle variance algorithms were used to reconstruct the vascular structure. The results demonstrate that speckle variance imaging provides a more accurate representation of the vascular structure, as it is not sensitive to the blood flow direction, whereas Doppler OCT imaging misses the blood flow component perpendicular to the beam direction. These studies suggest that speckle variance imaging is a promising tool to study vascular development in cultured mouse embryos.
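
Speckle variance itself is simple: the per-pixel intensity variance across repeated frames of the same cross-section, which is high where blood flows and low in static tissue. A minimal sketch (Python; the toy frames are invented for illustration):

```python
def speckle_variance(frames):
    """Per-pixel intensity variance across repeated OCT frames.
    Static tissue gives low variance; flowing blood decorrelates the
    speckle pattern and gives high variance, regardless of flow direction."""
    n = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    sv = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = [f[r][c] for f in frames]
            mean = sum(vals) / n
            sv[r][c] = sum((v - mean) ** 2 for v in vals) / n
    return sv

# toy 1x2 frames: pixel (0,0) is static tissue, pixel (0,1) fluctuates ("flow")
frames = [[[1.0, 0.2]],
          [[1.0, 0.9]],
          [[1.0, 0.4]]]
sv = speckle_variance(frames)
```

Thresholding the resulting variance map is what yields the vessel mask; note the measure is direction-insensitive, unlike the Doppler phase shift.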

  6. Hidden temporal order unveiled in stock market volatility variance

    NASA Astrophysics Data System (ADS)

    Shapira, Y.; Kenett, D. Y.; Raviv, Ohad; Ben-Jacob, E.

    2011-06-01

    When analyzed by standard statistical methods, the time series of the daily returns of financial indices appear to behave as Markov random series with no apparent temporal order or memory. This empirical result seems counterintuitive, since investors are influenced by both short- and long-term past market behavior. Consequently, much effort has been devoted to unveiling hidden temporal order in market dynamics. Here we show that temporal order is hidden in the series of the variance of the stocks' volatility. First we show that the correlation between the variances and the means of segments of these daily-return time series is very large, and thus cannot be the output of a random series unless it has some temporal order in it. Next we show that the temporal order does not appear in the series of the daily returns itself, but rather in the variation of the corresponding volatility series. More specifically, we found that the behavior of the shuffled time series is equivalent to that of a random time series, while the original time series shows large deviations from the expected random behavior, which is the result of temporal structure. We found the same generic behavior in 10 different stock markets from 7 different countries. We also present an analysis of specially constructed sequences in order to better understand the origin of the observed temporal order in the market sequences. Each sequence was constructed from segments with an equal number of elements, taken from algebraic distributions with three different slopes.
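
The shuffling diagnostic described above can be sketched in a few lines: build a return series whose volatility varies slowly, compute the variance of successive segments, and compare the lag-1 autocorrelation of that variance series before and after shuffling (a toy construction with assumed regime lengths and volatilities, not the authors' data):

```python
import random

random.seed(1)

def segment_variances(series, seg_len):
    """Population variance of consecutive, non-overlapping segments."""
    out = []
    for i in range(0, len(series) - seg_len + 1, seg_len):
        seg = series[i:i + seg_len]
        m = sum(seg) / seg_len
        out.append(sum((x - m) ** 2 for x in seg) / seg_len)
    return out

def lag1_autocorr(xs):
    n = len(xs)
    m = sum(xs) / n
    den = sum((x - m) ** 2 for x in xs)
    num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(n - 1))
    return num / den

# toy return series: volatility alternates slowly between two regimes
series = [random.gauss(0.0, 0.5 if (t // 200) % 2 == 0 else 2.0)
          for t in range(4000)]

original = lag1_autocorr(segment_variances(series, 50))
shuffled_series = list(series)
random.shuffle(shuffled_series)      # shuffling destroys the temporal order
shuffled = lag1_autocorr(segment_variances(shuffled_series, 50))
```

The segment-variance series of the original data is strongly autocorrelated, while the shuffled series behaves like a random one, mirroring the paper's diagnostic.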

  7. PET image reconstruction: mean, variance, and optimal minimax criterion

    NASA Astrophysics Data System (ADS)

    Liu, Huafeng; Gao, Fei; Guo, Min; Xue, Liying; Nie, Jing; Shi, Pengcheng

    2015-04-01

    Given the noisy nature of positron emission tomography (PET) measurements, it is critical to know the image quality and reliability as well as the expected radioactivity map (mean image) for both qualitative interpretation and quantitative analysis. While existing efforts have often been devoted to providing only the reconstructed mean image, we present a unified framework for joint estimation of the mean and corresponding variance of the radioactivity map based on an efficient optimal minimax criterion. The proposed framework formulates the PET image reconstruction problem as a transformation from system uncertainties to estimation errors, where the minimax criterion is adopted to minimize the estimation errors under possibly maximized system uncertainties. The estimation errors, in the form of a covariance matrix, express the measurement uncertainties in a complete way. The framework is then optimized by H∞-norm optimization and solved with the corresponding H∞ filter. Unlike conventional statistical reconstruction algorithms that rely on statistical modeling of the measurement data or noise, the proposed joint estimation stands from the point of view of signal energies and can handle anything from imperfect statistical assumptions to no a priori statistical assumptions at all. The performance and accuracy of the reconstructed mean and variance images are validated using Monte Carlo simulations. Experiments on phantom scans with a small-animal PET scanner and real patient scans are also conducted to assess clinical potential.

  8. A nonparametric mean-variance smoothing method to assess Arabidopsis cold stress transcriptional regulator CBF2 overexpression microarray data.

    PubMed

    Hu, Pingsha; Maiti, Tapabrata

    2011-01-01

    Microarray is a powerful tool for genome-wide gene expression analysis. In microarray expression data, the mean and variance often have a systematic relationship. We present a non-parametric mean-variance smoothing method (NPMVS) to analyze differentially expressed genes. In this method, a nonlinear smoothing curve is fitted to estimate the relationship between mean and variance. Inference is then made upon shrinkage estimation of posterior means, assuming variances are known. Different methods have been applied to simulated datasets, in which a variety of mean-variance relationships were imposed. The simulation study showed that NPMVS outperformed the other two popular shrinkage estimation methods in some mean-variance relationships and was competitive with the two methods in others. A real biological dataset, in which a cold stress transcription factor gene, CBF2, was overexpressed, has also been analyzed with the three methods. Gene ontology and cis-element analysis showed that NPMVS identified more cold- and stress-responsive genes than the other two methods did. The good performance of NPMVS is mainly due to its shrinkage estimation for both means and variances. In addition, NPMVS exploits a non-parametric regression between mean and variance, instead of assuming a specific parametric relationship between them. The source code, written in R, is available from the authors on request.

  9. Utilizing "The William Allan Kritsonis Balanced Teeter-Totter Model" as a Means to Cultivate a Legacy of Transformational Leaders in Schools: A National Focus

    ERIC Educational Resources Information Center

    Jacobs, Karen Dupre

    2007-01-01

    The Kritsonis Teeter-Totter Model, developed by Dr. William Allan Kritsonis, is utilized to cultivate a legacy of transformational leaders in schools throughout the United States. In a time when change in schools is inevitable, the model aids school leaders in better defining their individual role in schools and that of their stakeholders in…

  10. Adjusting for Unequal Variances when Comparing Means in One-Way and Two-Way Fixed Effects ANOVA Models.

    ERIC Educational Resources Information Center

    Wilcox, Rand R.

    1989-01-01

    Two methods of handling unequal variances in the two-way fixed effects analysis of variance (ANOVA) model are described. One is based on an improved Wilcox (1988) method for the one-way model, and the other is an extension of G. S. James' (1951) second order method. (TJH)

  11. On Stabilizing the Variance of Dynamic Functional Brain Connectivity Time Series

    PubMed Central

    Fransson, Peter

    2016-01-01

    Assessment of dynamic functional brain connectivity based on functional magnetic resonance imaging (fMRI) data is an increasingly popular strategy to investigate temporal dynamics of the brain's large-scale network architecture. Current practice when deriving connectivity estimates over time is to use the Fisher transformation, which aims to stabilize the variance of correlation values that fluctuate around varying true correlation values. It is, however, unclear how well the stabilization of signal variance performed by the Fisher transformation works for each connectivity time series when the true correlation is assumed to be fluctuating. This is of importance because many subsequent analyses either assume or perform better when the time series have stable variance or adhere to an approximate Gaussian distribution. In this article, using simulations and analysis of resting-state fMRI data, we analyze the effect of applying different variance stabilization strategies on connectivity time series. We focus our investigation on the Fisher transformation, the Box–Cox (BC) transformation and an approach that combines both transformations. Our results show that, if the intention of stabilizing the variance is to use metrics on the time series where stable variance or a Gaussian distribution is desired (e.g., clustering), the Fisher transformation is not optimal and may even skew connectivity time series away from being Gaussian. Furthermore, we show that the suboptimal performance of the Fisher transformation can be substantially improved by including an additional BC transformation after the dynamic functional connectivity time series has been Fisher transformed. PMID:27784176

  12. On Stabilizing the Variance of Dynamic Functional Brain Connectivity Time Series.

    PubMed

    Thompson, William Hedley; Fransson, Peter

    2016-12-01

    Assessment of dynamic functional brain connectivity based on functional magnetic resonance imaging (fMRI) data is an increasingly popular strategy to investigate temporal dynamics of the brain's large-scale network architecture. Current practice when deriving connectivity estimates over time is to use the Fisher transformation, which aims to stabilize the variance of correlation values that fluctuate around varying true correlation values. It is, however, unclear how well the stabilization of signal variance performed by the Fisher transformation works for each connectivity time series when the true correlation is assumed to be fluctuating. This is of importance because many subsequent analyses either assume or perform better when the time series have stable variance or adhere to an approximate Gaussian distribution. In this article, using simulations and analysis of resting-state fMRI data, we analyze the effect of applying different variance stabilization strategies on connectivity time series. We focus our investigation on the Fisher transformation, the Box-Cox (BC) transformation and an approach that combines both transformations. Our results show that, if the intention of stabilizing the variance is to use metrics on the time series where stable variance or a Gaussian distribution is desired (e.g., clustering), the Fisher transformation is not optimal and may even skew connectivity time series away from being Gaussian. Furthermore, we show that the suboptimal performance of the Fisher transformation can be substantially improved by including an additional BC transformation after the dynamic functional connectivity time series has been Fisher transformed.
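
The Fisher transformation itself is just z = arctanh(r); for a fixed true correlation estimated from windows of n samples, the transformed values have near-constant variance of roughly 1/(n − 3). A minimal sketch (Python; the correlation values are invented):

```python
import math

def fisher_z(r):
    """Fisher transformation: z = arctanh(r) = 0.5 * ln((1 + r) / (1 - r))."""
    return math.atanh(r)

# toy sliding-window correlation series from a dynamic connectivity analysis
r_series = [0.10, 0.45, 0.30, 0.60, 0.20]
z_series = [fisher_z(r) for r in r_series]
```

The article's point is precisely that this variance stabilization is derived for a fixed true correlation; when the true correlation itself fluctuates over windows, the transformed series need not have stable variance or be Gaussian, motivating the additional Box-Cox step.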

  13. A Comparative Study of Tests for Homogeneity of Variances with Application to DNA Methylation Data

    PubMed Central

    Li, Xuan; Qiu, Weiliang; Morrow, Jarrett; DeMeo, Dawn L.; Weiss, Scott T.; Fu, Yuejiao; Wang, Xiaogang

    2015-01-01

    Variable DNA methylation has been associated with cancers and complex diseases. Researchers have identified many DNA methylation markers that have different mean methylation levels between diseased subjects and normal subjects. Recently, researchers found that DNA methylation markers with different variabilities between subject groups could also have biological meaning. In this article, we aimed to help researchers choose the right test of equal variance in DNA methylation data analysis. We performed systematic simulation studies and a real data analysis to compare the performances of 7 equal-variance tests, including 2 tests recently proposed in the DNA methylation analysis literature. Our results showed that the Brown-Forsythe test and trimmed-mean-based Levene's test had good performance in testing for equality of variance in our simulation studies and real data analyses. Our results also showed that outlier profiles could be biologically very important. PMID:26683022
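
The Brown-Forsythe test highlighted above is Levene's test computed on absolute deviations from each group's median rather than its mean, which makes it robust to outlying methylation values. A self-contained sketch (Python; the two toy samples are invented for illustration):

```python
def _median(xs):
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else 0.5 * (s[mid - 1] + s[mid])

def brown_forsythe(groups):
    """Brown-Forsythe statistic: one-way ANOVA F computed on absolute
    deviations from each group's median (Levene's test with median center)."""
    z = [[abs(x - _median(g)) for x in g] for g in groups]
    k = len(z)
    N = sum(len(g) for g in z)
    grand = sum(sum(g) for g in z) / N
    means = [sum(g) / len(g) for g in z]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(z, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(z, means))
    return (ss_between / (k - 1)) / (ss_within / (N - k))

low_var = [4.9, 5.0, 5.1, 5.0, 4.95, 5.05]    # tightly clustered group
high_var = [3.0, 7.0, 5.0, 1.0, 9.0, 5.5]     # widely spread group
F = brown_forsythe([low_var, high_var])
```

With k groups and N observations, the statistic is referred to an F(k − 1, N − k) distribution; groups with equal spread give F near zero, while the toy groups above give a large F.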

  14. On computations of variance, covariance and correlation for interval data

    NASA Astrophysics Data System (ADS)

    Kishida, Masako

    2017-02-01

    In many practical situations, the data on which statistical analysis is to be performed is only known with interval uncertainty. Different combinations of values from the interval data usually lead to different values of variance, covariance, and correlation. Hence, it is desirable to compute the endpoints of possible values of these statistics. This problem is, however, NP-hard in general. This paper shows that the problem of computing the endpoints of possible values of these statistics can be rewritten as the problem of computing skewed structured singular values ν, for which there exist feasible (polynomial-time) algorithms that compute reasonably tight bounds in most practical cases. This allows one to find tight intervals of the aforementioned statistics for interval data.
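
A brute-force illustration of the endpoint problem (Python; the three intervals are invented): because variance is a convex function of the data vector, its maximum over the interval box is attained at a vertex, so enumerating endpoint combinations recovers the exact upper endpoint; the vertex minimum, by contrast, is only an upper bound on the true minimum, which may lie in the interior. This enumeration is exponential in the number of intervals, which is why the polynomial-time ν-based bounds of the paper matter.

```python
from itertools import product

def variance(xs):
    """Population variance of a sequence of numbers."""
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / n

def variance_range_brute(intervals):
    """Enumerate every combination of interval endpoints (2^n vertices)
    and return the smallest and largest variance found among them."""
    vals = [variance(point) for point in product(*intervals)]
    return min(vals), max(vals)

# toy interval data: each measurement is only known to lie in a range
intervals = [(1.0, 2.0), (1.5, 2.5), (3.0, 4.0)]
v_lo, v_hi = variance_range_brute(intervals)
```

Here the variance is maximized by pushing the first two values low and the third high, exactly the kind of endpoint configuration the exact algorithms must search for efficiently.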

  15. Robust Variance Estimation in Meta-Regression with Binary Dependent Effects

    ERIC Educational Resources Information Center

    Tipton, Elizabeth

    2013-01-01

    Dependent effect size estimates are a common problem in meta-analysis. Recently, a robust variance estimation method was introduced that can be used whenever effect sizes in a meta-analysis are not independent. This problem arises, for example, when effect sizes are nested or when multiple measures are collected on the same individuals. In this…

  16. Violation of Homogeneity of Variance Assumption in the Integrated Moving Averages Time Series Model.

    ERIC Educational Resources Information Center

    Gullickson, Arlen R.; And Others

This study is an analysis of the robustness of the Box-Tiao integrated moving averages model for analysis of time series quasi experiments. One of the assumptions underlying the Box-Tiao model is that all N values of α_t come from the same population, which has a variance σ². The robustness was studied only in terms of…

  17. Predicting Risk Sensitivity in Humans and Lower Animals: Risk as Variance or Coefficient of Variation

    ERIC Educational Resources Information Center

    Weber, Elke U.; Shafir, Sharoni; Blais, Ann-Renee

    2004-01-01

    This article examines the statistical determinants of risk preference. In a meta-analysis of animal risk preference (foraging birds and insects), the coefficient of variation (CV), a measure of risk per unit of return, predicts choices far better than outcome variance, the risk measure of normative models. In a meta-analysis of human risk…

  18. Inference of equivalence for the ratio of two normal means with unspecified variances.

    PubMed

    Xu, Siyan; Hua, Steven Ye; Menton, Ronald; Barker, Kerry; Menon, Sandeep; D'Agostino, Ralph B

    2014-01-01

Equivalence trials aim to demonstrate that new and standard treatments are equivalent within predefined clinically relevant limits. We focus on when inference of equivalence is made in terms of the ratio of two normal means. In the presence of unspecified variances, methods such as the likelihood-ratio test use sample estimates for those variances; Bayesian models integrate them out in the posterior distribution. These methods limit the knowledge on the extent to which equivalence is affected by variability of the parameter of interest. In this article, we propose a likelihood approach that retains the unspecified variances in the model and partitions the likelihood function into two components: an F-statistic function for the variances, and a t-statistic function for the ratio of two means. By incorporating unspecified variances, the proposed method can help identify a numeric range of variances where equivalence is more likely to be achieved, which cannot be accomplished by current analysis methods. By partitioning the likelihood function into two components, the proposed method provides more inference information than a method that relies solely on one component. Using a published set of real example data, we show that the proposed method produces the same results as the likelihood-ratio test and is comparable to Bayesian analysis in the general case. In a special case where the ratio of two variances is directly proportional to the ratio of two means, the proposed method yields better results in inference about equivalence than either the likelihood-ratio test or the Bayesian method. Overall, the proposed likelihood method is a better alternative to current analysis methods for equivalence inference.

  19. Estimating the encounter rate variance in distance sampling

    USGS Publications Warehouse

    Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.

    2009-01-01

The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.

  20. Multiperiod Mean-Variance Portfolio Optimization via Market Cloning

    SciTech Connect

    Ankirchner, Stefan; Dermoune, Azzouz

    2011-08-15

The problem of finding the mean-variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity we are able to solve the original mean-variance problem.
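A single-period sketch of the empirical-counterpart idea: simulate independent "clones" of the market return and pick the weight maximizing empirical mean minus a risk penalty times empirical variance. The distribution, penalty `lam`, and closed form are our illustrative assumptions; this is not the paper's multiperiod dynamic program:

```python
import numpy as np

def empirical_mv_weight(returns, lam=1.0):
    """Weight theta on a risky asset maximizing
    (empirical mean of theta*R) - lam * (empirical variance of theta*R),
    i.e. theta* = m / (2 * lam * v) in closed form."""
    m, v = np.mean(returns), np.var(returns)
    return m / (2.0 * lam * v)

rng = np.random.default_rng(3)
true_mean, true_std = 0.05, 0.2
for n_clones in (100, 100000):
    clones = rng.normal(true_mean, true_std, size=n_clones)
    print(n_clones, round(empirical_mv_weight(clones), 3))
# As the number of clones grows, the weight approaches
# true_mean / (2 * true_std**2) = 0.625, mirroring the paper's
# convergence argument as the clone count tends to infinity.
```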

  1. RR-Interval variance of electrocardiogram for atrial fibrillation detection

    NASA Astrophysics Data System (ADS)

    Nuryani, N.; Solikhah, M.; Nugoho, A. S.; Afdala, A.; Anzihory, E.

    2016-11-01

Atrial fibrillation is a serious heart problem originating in the upper chambers of the heart. The common indication of atrial fibrillation is irregularity of the R-peak-to-R-peak time interval, known as the RR interval. The irregularity can be represented by the variance, or spread, of the RR intervals. This article presents a system to detect atrial fibrillation using variances. Using clinical data from patients with atrial fibrillation attacks, it is shown that the variance of the electrocardiographic RR intervals is higher during atrial fibrillation than during normal rhythm. Utilizing a simple detection technique and the variances of RR intervals, we find good atrial fibrillation detection performance.
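A hedged sketch of the detection idea described above: compute the variance of RR intervals over fixed windows and flag windows whose variance exceeds a threshold. The window length, threshold, and synthetic data are illustrative assumptions, not values from the paper:

```python
import numpy as np

def af_flags(rr_intervals, window=10, threshold=0.002):
    """Flag non-overlapping windows of RR intervals (in seconds) whose
    variance exceeds a threshold; high variance indicates irregularity."""
    rr = np.asarray(rr_intervals, dtype=float)
    return [np.var(rr[i:i + window]) > threshold
            for i in range(0, len(rr) - window + 1, window)]

rng = np.random.default_rng(1)
normal_rr = rng.normal(0.80, 0.005, size=30)   # regular sinus rhythm
af_rr = rng.normal(0.80, 0.30, size=30)        # highly irregular rhythm
print(af_flags(normal_rr), af_flags(af_rr))    # regular: all False; irregular: all True
```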

  2. Regolith breccia Allan Hills A81005 - Evidence of lunar origin, and petrography of pristine and nonpristine clasts

    NASA Technical Reports Server (NTRS)

    Warren, P. H.; Taylor, G. J.; Keil, K.

    1983-01-01

    It is shown that the ratios of MnO/FeO in pyroxene, texture (abundant brown and swirly glass, which are typical of lunar regolith breccias) and overall composition (approximately 75 percent plagioclase) indicate a lunar origin for the regolith breccia Allan Hills A81005, presumably from an unsampled region of the moon. The rock is found to differ in detail from other regolith samples; for example, it has exceptionally low contents of Na and KREEP. In addition, a pristine clast is found to contain exceptionally coarse augite in comparison with similar Apollo samples. It is found that ALHA81005 is not perceptibly more shocked than typical Apollo regolith breccias. It is concluded that the discovery of this rock on earth strengthens the suggestion that SNC achondrites were derived by impact ejection from Mars.

  3. A 7-month-old male with Allan-Herndon-Dudley syndrome and the power of T3.

    PubMed

    Langley, Katherine G; Trau, Steven; Bean, Lora J H; Narravula, Alekhya; Schrier Vergano, Samantha A

    2015-05-01

Allan-Herndon-Dudley syndrome (AHDS, MIM 300523) is an X-linked neurodegenerative disorder characterized by intellectual disability, severe hypotonia, diminished muscle mass, and progressive spastic paraplegia. All affected males have pathognomonic thyroid profiles with an elevated T3, low-normal free T4, and normal TSH. Mutations in the monocarboxylate transporter 8 (MCT8) gene, SLC16A2, have been found to be causative. Here, we describe a proband whose extensive evaluation and ultimate diagnosis of AHDS unmasked three previously undiagnosed generations of affected individuals in one family. This case illustrates the need for clinicians to consider obtaining full thyroid studies on individuals with the non-specific findings of severe hypotonia, failure to thrive, and gross motor delay.

  4. VIVA (from virus variance), a library to reconstruct icosahedral viruses based on the variance of structural models.

    PubMed

    Cantele, Francesca; Lanzavecchia, Salvatore; Bellon, Pier Luigi

    2004-11-01

VIVA is a software library that obtains low-resolution models of icosahedral viruses from projections observed at the electron microscope. VIVA works in a fully automatic way without any initial model. This feature eliminates the possibility of bias that could originate from the alignment of the projections to an external preliminary model. VIVA determines the viewing direction of the virus images by computation of sets of single particle reconstructions (SPR) followed by a variance analysis and classification of the 3D models. All structures are reduced in size to speed up computation, which limits the resolution of a VIVA reconstruction. The models obtained can subsequently be refined with standard libraries. To date, VIVA has successfully solved the structure of all viruses tested, some of which were considered refractory particles. The VIVA library is written in the 'C' language and is designed to run on widespread Linux computers.

  5. Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction.

    PubMed

    Huang, Ling; Zhang, Hongping; Xu, Peiliang; Geng, Jianghui; Wang, Cheng; Liu, Jingnan

    2017-02-27

Ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have been recently introduced to model the spatial correlation and variability of ionosphere, which intrinsically assume that the ionosphere field is stochastically stationary but does not take the random observational errors into account. In this paper, by treating the spatial statistical information on ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the signals of ionosphere and TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with the ordinary Kriging and polynomial interpolations with spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach have good agreement with the other methods, ranging from 10 to 80 TEC Unit (TECU, 1 TECU = 1 × 10^16 electrons/m²) with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as the ordinary Kriging but with a smaller standard deviation around 3 TECU than others. The residual results show that the interpolation precision of the new proposed

  6. Kriging with Unknown Variance Components for Regional Ionospheric Reconstruction

    PubMed Central

    Huang, Ling; Zhang, Hongping; Xu, Peiliang; Geng, Jianghui; Wang, Cheng; Liu, Jingnan

    2017-01-01

Ionospheric delay effect is a critical issue that limits the accuracy of precise Global Navigation Satellite System (GNSS) positioning and navigation for single-frequency users, especially in mid- and low-latitude regions where variations in the ionosphere are larger. Kriging spatial interpolation techniques have been recently introduced to model the spatial correlation and variability of ionosphere, which intrinsically assume that the ionosphere field is stochastically stationary but does not take the random observational errors into account. In this paper, by treating the spatial statistical information on ionosphere as prior knowledge and based on Total Electron Content (TEC) semivariogram analysis, we use Kriging techniques to spatially interpolate TEC values. By assuming that the stochastic models of both the ionospheric signals and measurement errors are only known up to some unknown factors, we propose a new Kriging spatial interpolation method with unknown variance components for both the signals of ionosphere and TEC measurements. Variance component estimation has been integrated with Kriging to reconstruct regional ionospheric delays. The method has been applied to data from the Crustal Movement Observation Network of China (CMONOC) and compared with the ordinary Kriging and polynomial interpolations with spherical cap harmonic functions, polynomial functions and low-degree spherical harmonic functions. The statistics of results indicate that the daily ionospheric variations during the experimental period characterized by the proposed approach have good agreement with the other methods, ranging from 10 to 80 TEC Unit (TECU, 1 TECU = 1 × 10^16 electrons/m²) with an overall mean of 28.2 TECU. The proposed method can produce more appropriate estimations whose general TEC level is as smooth as the ordinary Kriging but with a smaller standard deviation around 3 TECU than others. The residual results show that the interpolation precision of the new proposed

  7. Water vapor variance measurements using a Raman lidar

    NASA Technical Reports Server (NTRS)

    Evans, K.; Melfi, S. H.; Ferrare, R.; Whiteman, D.

    1992-01-01

Because of the importance of atmospheric water vapor variance, we have analyzed data from the NASA/Goddard Raman lidar to obtain temporal scales of water vapor mixing ratio as a function of altitude over observation periods extending to 12 hours. The ground-based lidar measures water vapor mixing ratio from near the earth's surface to an altitude of 9-10 km. Moisture profiles are acquired once every minute with 75 m vertical resolution. Data at each 75-meter altitude level can be displayed as a function of time from the beginning to the end of an observation period. These time sequences have been spectrally analyzed using a fast Fourier transform technique. An example of such a temporal spectrum obtained between 00:22 and 10:29 UT on December 6, 1991 is shown in the figure. The curve shown on the figure represents the spectral average of data from 11 height levels centered on an altitude of 1 km (1 ± 0.375 km). The spectrum shows a decrease in energy density with frequency which generally follows a -5/3 power law over the spectral interval 3x10^-5 to 4x10^-3 Hz. The flattening of the spectrum for frequencies greater than 6x10^-3 Hz is most likely a measure of instrumental noise. Spectra like that shown in the figure have been calculated for other altitudes and show changes in spectral features with height. Spectral analyses versus height have been performed for several observation periods, demonstrating changes in water vapor mixing ratio spectral character from one observation period to the next. The combination of these temporal spectra with independent measurements of winds aloft provides an opportunity to infer spatial scales of moisture variance.
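The -5/3 spectral-slope analysis can be sketched as follows, assuming one-minute sampling as in the abstract; the series is synthesized with a known f^(-5/3) power spectrum so the fitted slope is checkable:

```python
import numpy as np

def spectral_slope(series, dt=60.0):
    """Least-squares slope of the log-log power spectrum
    (one-minute sampling by default, matching the lidar cadence)."""
    series = np.asarray(series, dtype=float)
    power = np.abs(np.fft.rfft(series - series.mean()))**2
    freqs = np.fft.rfftfreq(len(series), d=dt)
    keep = freqs > 0                     # drop the DC bin before fitting
    return np.polyfit(np.log(freqs[keep]), np.log(power[keep]), 1)[0]

# Synthesize a series whose power spectrum follows k^(-5/3) exactly:
n = 1024
k = np.arange(1, n // 2 + 1, dtype=float)
amps = k ** (-5.0 / 6.0)                 # |X_k|^2 ~ k^(-5/3)
rng = np.random.default_rng(2)
phases = np.exp(2j * np.pi * rng.random(n // 2 - 1))
# Keep DC and Nyquist bins real so irfft/rfft round-trip exactly.
spectrum = np.concatenate(([0.0], amps[:-1] * phases, [amps[-1]]))
series = np.fft.irfft(spectrum, n=n)
print(round(spectral_slope(series), 2))  # -1.67, i.e. the -5/3 law
```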

  8. A general approach to mixed effects modeling of residual variances in generalized linear mixed models

    PubMed Central

    Kizilkaya, Kadir; Tempelman, Robert J

    2005-01-01

    We propose a general Bayesian approach to heteroskedastic error modeling for generalized linear mixed models (GLMM) in which linked functions of conditional means and residual variances are specified as separate linear combinations of fixed and random effects. We focus on the linear mixed model (LMM) analysis of birth weight (BW) and the cumulative probit mixed model (CPMM) analysis of calving ease (CE). The deviance information criterion (DIC) was demonstrated to be useful in correctly choosing between homoskedastic and heteroskedastic error GLMM for both traits when data was generated according to a mixed model specification for both location parameters and residual variances. Heteroskedastic error LMM and CPMM were fitted, respectively, to BW and CE data on 8847 Italian Piemontese first parity dams in which residual variances were modeled as functions of fixed calf sex and random herd effects. The posterior mean residual variance for male calves was over 40% greater than that for female calves for both traits. Also, the posterior means of the standard deviation of the herd-specific variance ratios (relative to a unitary baseline) were estimated to be 0.60 ± 0.09 for BW and 0.74 ± 0.14 for CE. For both traits, the heteroskedastic error LMM and CPMM were chosen over their homoskedastic error counterparts based on DIC values. PMID:15588567

  9. 29 CFR 1905.5 - Effect of variances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 5 2010-07-01 2010-07-01 false Effect of variances. 1905.5 Section 1905.5 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR RULES OF PRACTICE FOR VARIANCES, LIMITATIONS, VARIATIONS, TOLERANCES, AND EXEMPTIONS UNDER THE WILLIAMS-STEIGER OCCUPATIONAL SAFETY AND HEALTH ACT...

  10. 36 CFR 27.4 - Variances and exceptions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 36 Parks, Forests, and Public Property 1 2013-07-01 2013-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...

  11. 36 CFR 27.4 - Variances and exceptions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 1 2010-07-01 2010-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...

  12. 36 CFR 27.4 - Variances and exceptions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 36 Parks, Forests, and Public Property 1 2012-07-01 2012-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...

  13. 36 CFR 27.4 - Variances and exceptions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 36 Parks, Forests, and Public Property 1 2014-07-01 2014-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...

  14. 36 CFR 27.4 - Variances and exceptions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 36 Parks, Forests, and Public Property 1 2011-07-01 2011-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...

  15. 40 CFR 141.4 - Variances and exemptions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 24 2013-07-01 2013-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions....

  16. 40 CFR 141.4 - Variances and exemptions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 23 2014-07-01 2014-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions....

  17. 40 CFR 141.4 - Variances and exemptions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 23 2011-07-01 2011-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions....

  18. 40 CFR 141.4 - Variances and exemptions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 24 2012-07-01 2012-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions....

  19. 40 CFR 141.4 - Variances and exemptions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions....

  20. 76 FR 78698 - Proposed Revocation of Permanent Variances

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-19

    ... Occupational Safety and Health Administration Proposed Revocation of Permanent Variances AGENCY: Occupational... short and plain statement detailing (1) how the proposed revocation would affect the requesting party..., subpart L. The following table provides information about the variances proposed for revocation by...

  1. Gender Variance and Educational Psychology: Implications for Practice

    ERIC Educational Resources Information Center

    Yavuz, Carrie

    2016-01-01

Gender variance appears to be increasingly visible in both the media and everyday life, yet within the educational psychology literature gender variance remains underrepresented. The positioning of educational psychologists working across the three levels of child and family, school or establishment and education authority/council, means that they are…

  2. 42 CFR 456.522 - Content of request for variance.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS UTILIZATION CONTROL Utilization Review Plans: FFP, Waivers, and Variances for Hospitals and Mental Hospitals Ur Plan: Remote Facility Variances from Time..., mental hospital, and ICF located within a 50-mile radius of the facility; (e) The distance and...

  3. A Study of Variance Estimation Methods. Working Paper Series.

    ERIC Educational Resources Information Center

    Zhang, Fan; Weng, Stanley; Salvucci, Sameena; Hu, Ming-xiu

    This working paper contains reports of five studies of variance estimation methods. The first, An Empirical Study of Poststratified Estimator, by Fan Zhang uses data from the National Household Education Survey to illustrate use of poststratified estimation. The second paper, BRR Variance Estimation Using BPLX Hadamard Procedure, by Stanley Weng…

  4. Genotypic-specific variance in Caenorhabditis elegans lifetime fecundity

    PubMed Central

    Diaz, S Anaid; Viney, Mark

    2014-01-01

Organisms live in heterogeneous environments, so strategies that maximize fitness in such environments will evolve. Variation in traits is important because it is the raw material on which natural selection acts during evolution. Phenotypic variation is usually thought to be due to genetic variation and/or environmentally induced effects. Therefore, genetically identical individuals in a constant environment should have invariant traits. Clearly, genetically identical individuals do differ phenotypically, usually thought to be due to stochastic processes. It is now becoming clear, especially from studies of unicellular species, that phenotypic variance among genetically identical individuals in a constant environment can be genetically controlled and that therefore, in principle, this can be subject to selection. However, there has been little investigation of these phenomena in multicellular species. Here, we have studied the mean lifetime fecundity (thus a trait likely to be relevant to reproductive success), and variance in lifetime fecundity, in recent wild isolates of the model nematode Caenorhabditis elegans. We found that these genotypes differed in their variance in lifetime fecundity: some had high variance in fecundity, others very low variance. We find that this variance in lifetime fecundity was negatively related to the mean lifetime fecundity of the lines, and that the variance of the lines was positively correlated between environments. We suggest that the variance in lifetime fecundity may be a bet-hedging strategy used by this species. PMID:25360248

  5. Conceptual Complexity and the Bias/Variance Tradeoff

    ERIC Educational Resources Information Center

    Briscoe, Erica; Feldman, Jacob

    2011-01-01

    In this paper we propose that the conventional dichotomy between exemplar-based and prototype-based models of concept learning is helpfully viewed as an instance of what is known in the statistical learning literature as the "bias/variance tradeoff". The bias/variance tradeoff can be thought of as a sliding scale that modulates how closely any…

  6. Variances and Covariances of Kendall's Tau and Their Estimation.

    ERIC Educational Resources Information Center

    Cliff, Norman; Charlin, Ventura

    1991-01-01

    Variance formulas of H. E. Daniels and M. G. Kendall (1947) are generalized to allow for the presence of ties and variance of the sample tau correlation. Applications of these generalized formulas are discussed and illustrated using data from a 1965 study of contraceptive use in 15 developing countries. (SLD)
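As a numerical stand-in for the closed-form variance formulas discussed above, a delete-one jackknife estimate of Var(tau) can be sketched with SciPy; the data and sample size are illustrative, and the jackknife is a generic alternative rather than the Daniels-Kendall expressions themselves:

```python
import numpy as np
from scipy.stats import kendalltau

def kendall_tau_jackknife_var(x, y):
    """Delete-one jackknife estimate of the sampling variance of tau-b
    (ties are handled automatically by kendalltau's tau-b statistic)."""
    x, y = np.asarray(x), np.asarray(y)
    n = len(x)
    taus = np.array([kendalltau(np.delete(x, i), np.delete(y, i))[0]
                     for i in range(n)])
    return (n - 1) / n * np.sum((taus - taus.mean())**2)

rng = np.random.default_rng(4)
x = rng.normal(size=40)
y = x + rng.normal(scale=0.5, size=40)   # correlated with noise
tau = kendalltau(x, y)[0]
print(round(tau, 2), kendall_tau_jackknife_var(x, y) > 0)
```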

  7. Trait and state variance in oppositional defiant disorder symptoms: A multi-source investigation with Spanish children.

    PubMed

    Preszler, Jonathan; Burns, G Leonard; Litson, Kaylee; Geiser, Christian; Servera, Mateu

    2017-02-01

The objective was to determine and compare the trait and state components of oppositional defiant disorder (ODD) symptom reports across multiple informants. Mothers, fathers, primary teachers, and secondary teachers rated the occurrence of the ODD symptoms in 810 Spanish children (55% boys) on 2 occasions (end first and second grades). Single source latent state-trait (LST) analyses revealed that ODD symptom ratings from all 4 sources showed more trait (M = 63%) than state residual (M = 37%) variance. A multiple source LST analysis revealed substantial convergent validity of mothers' and fathers' trait variance components (M = 68%) and modest convergent validity of state residual variance components (M = 35%). In contrast, primary and secondary teachers showed low convergent validity relative to mothers for trait variance (Ms = 31%, 32%, respectively) and essentially zero convergent validity relative to mothers for state residual variance (Ms = 1%, 3%, respectively). Although ODD symptom ratings reflected slightly more trait- than state-like constructs within each of the 4 sources separately across occasions, strong convergent validity for the trait variance only occurred within settings (i.e., mothers with fathers; primary with secondary teachers) with the convergent validity of the trait and state residual variance components being low to nonexistent across settings. These results suggest that ODD symptom reports are trait-like across time for individual sources with this trait variance, however, only having convergent validity within settings. Implications for assessment of ODD are discussed.

  8. 40 CFR 260.33 - Procedures for variances from classification as a solid waste, for variances to be classified as...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...

  9. 40 CFR 260.33 - Procedures for variances from classification as a solid waste, for variances to be classified as...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...

  10. 40 CFR 260.33 - Procedures for variances from classification as a solid waste, for variances to be classified as...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...

  11. 40 CFR 260.33 - Procedures for variances from classification as a solid waste, for variances to be classified as...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...

  12. 40 CFR 260.33 - Procedures for variances from classification as a solid waste, for variances to be classified as...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...

  13. Empirical data and the variance-covariance matrix for the 1969 Smithsonian Standard Earth (2)

    NASA Technical Reports Server (NTRS)

    Gaposchkin, E. M.

    1972-01-01

    The empirical data used in the 1969 Smithsonian Standard Earth (2) are presented. The variance-covariance matrix, or the normal equations, used for correlation analysis, are considered. The format and contents of the matrix, available on magnetic tape, are described and a sample printout is given.

  14. Variance in the chemical composition of dry beans determined from UV spectral fingerprints

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Nine varieties of dry beans representing 5 market classes were grown in 3 states (Maryland, Michigan, and Nebraska) and sub-samples were collected for each variety (row composites from each plot). Aqueous methanol extracts were analyzed in triplicate by UV spectrophotometry. Analysis of variance-p...

  15. Variances of the components and magnitude of the polar heliospheric magnetic field

    NASA Technical Reports Server (NTRS)

    Balogh, A.; Horbury, T. S.; Forsyth, R. J.; Smith, E. J.

    1995-01-01

    The heliolatitude dependences of the variances in the components and the magnitude of the heliospheric magnetic field have been analysed, using the Ulysses magnetic field observations from close to the ecliptic plane to 80° southern solar latitude. The normalized variances in the components of the field increased significantly (by a factor of about 5) as Ulysses entered the purely polar flows from the southern coronal hole. At the same time, there was at most a small increase in the variance of the field magnitude. The analysis of the different components indicates that the power in the fluctuations is not isotropically distributed: most of the power is in the components of the field transverse to the radial direction. Examining the variances calculated over different time scales from minutes to hours shows that the anisotropy of the field variances is different on different scales, indicating the influence of the two distinct populations of fluctuations in the polar solar wind which have been previously identified. We discuss these results in terms of evolutionary, dynamic processes as a function of heliocentric distance and as a function of the large-scale geometry of the magnetic field associated with the polar coronal hole.

  16. Anatomically constrained minimum variance beamforming applied to EEG.

    PubMed

    Murzin, Vyacheslav; Fuchs, Armin; Kelso, J A Scott

    2011-10-01

    Neural activity as measured non-invasively using electroencephalography (EEG) or magnetoencephalography (MEG) originates in the cortical gray matter. In the cortex, pyramidal cells are organized in columns and activated coherently, leading to current flow perpendicular to the cortical surface. In recent years, beamforming algorithms have been developed, which use this property as an anatomical constraint for the locations and directions of potential sources in MEG data analysis. Here, we extend this work to EEG recordings, which require a more sophisticated forward model due to the blurring of the electric current at tissue boundaries where the conductivity changes. Using CT scans, we create a realistic three-layer head model consisting of tessellated surfaces that represent the cerebrospinal fluid-skull, skull-scalp, and scalp-air boundaries. The cortical gray matter surface, the anatomical constraint for the source dipoles, is extracted from MRI scans. EEG beamforming is implemented on simulated sets of EEG data for three different head models: single spherical, multi-shell spherical, and multi-shell realistic. Using the same conditions for simulated EEG and MEG data, it is shown (and quantified by receiver operating characteristic analysis) that EEG beamforming detects radially oriented sources, to which MEG lacks sensitivity. By merging several techniques, such as linearly constrained minimum variance beamforming, realistic geometry forward solutions, and cortical constraints, we demonstrate it is possible to localize and estimate the dynamics of dipolar and spatially extended (distributed) sources of neural activity.

  17. Reduction of variance in measurements of average metabolite concentration in anatomically-defined brain regions

    NASA Astrophysics Data System (ADS)

    Larsen, Ryan J.; Newman, Michael; Nikolaidis, Aki

    2016-11-01

    Multiple methods have been proposed for using Magnetic Resonance Spectroscopy Imaging (MRSI) to measure representative metabolite concentrations of anatomically-defined brain regions. Generally these methods require spectral analysis, quantitation of the signal, and reconciliation with anatomical brain regions. However, to simplify processing pipelines, it is practical to only include those corrections that significantly improve data quality. Of particular importance for cross-sectional studies is knowledge about how much each correction lowers the inter-subject variance of the measurement, thereby increasing statistical power. Here we use a data set of 72 subjects to calculate the reduction in inter-subject variance produced by several corrections that are commonly used to process MRSI data. Our results demonstrate that significant reductions of variance can be achieved by performing water scaling, accounting for tissue type, and integrating MRSI data over anatomical regions rather than simply assigning MRSI voxels with anatomical region labels.

  18. Robust variance estimation with dependent effect sizes: practical considerations including a software tutorial in Stata and spss.

    PubMed

    Tanner-Smith, Emily E; Tipton, Elizabeth

    2014-03-01

    Methodologists have recently proposed robust variance estimation as one way to handle dependent effect sizes in meta-analysis. Software macros for robust variance estimation in meta-analysis are currently available for Stata (StataCorp LP, College Station, TX, USA) and spss (IBM, Armonk, NY, USA), yet there is little guidance for authors regarding the practical application and implementation of those macros. This paper provides a brief tutorial on the implementation of the Stata and spss macros and discusses practical issues meta-analysts should consider when estimating meta-regression models with robust variance estimates. Two example databases are used in the tutorial to illustrate the use of meta-analysis with robust variance estimates.

  19. Comparing estimates of genetic variance across different relationship models.

    PubMed

    Legarra, Andres

    2016-02-01

    Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities".
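
    The Dk statistic described in this abstract is straightforward to compute from any relationship matrix. A minimal sketch in Python (function names are ours, not the paper's):

```python
def dk_statistic(K):
    """Dk = average self-relationship minus the average of all
    (self- and across-) relationships in the square matrix K."""
    n = len(K)
    avg_self = sum(K[i][i] for i in range(n)) / n
    avg_all = sum(sum(row) for row in K) / (n * n)
    return avg_self - avg_all

def rescale_genetic_variance(sigma2_hat, K):
    """Refer an estimated variance component to the chosen reference
    population: expected genetic variance = sigma2_hat * Dk."""
    return sigma2_hat * dk_statistic(K)
```

    For a classical numerator relationship matrix of n unrelated, non-inbred individuals (the identity matrix), Dk = 1 - 1/n, which approaches 1 as n grows, consistent with the abstract's remark that Dk is close to 1 for most typical relationship models.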

  20. Filtered kriging for spatial data with heterogeneous measurement error variances.

    PubMed

    Christensen, William F

    2011-09-01

    When predicting values for the measurement-error-free component of an observed spatial process, it is generally assumed that the process has a common measurement error variance. However, it is often the case that each measurement in a spatial data set has a known, site-specific measurement error variance, rendering the observed process nonstationary. We present a simple approach for estimating the semivariogram of the unobservable measurement-error-free process using a bias adjustment of the classical semivariogram formula. We then develop a new kriging predictor that filters the measurement errors. For scenarios where each site's measurement error variance is a function of the process of interest, we recommend an approach that also uses a variance-stabilizing transformation. The properties of the heterogeneous variance measurement-error-filtered kriging (HFK) predictor and variance-stabilized HFK predictor, and the improvement of these approaches over standard measurement-error-filtered kriging are demonstrated using simulation. The approach is illustrated with climate model output from the Hudson Strait area in northern Canada. In the illustration, locations with high or low measurement error variances are appropriately down- or upweighted in the prediction of the underlying process, yielding a realistically smooth picture of the phenomenon of interest.
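
    The bias adjustment of the classical semivariogram formula mentioned above can be sketched as follows. This is our reading of the general idea (subtract the two sites' known error variances from each squared difference, since E[(Z_i - Z_j)^2] = 2*gamma_Y(h) + var_i + var_j), not the paper's exact estimator:

```python
def adjusted_semivariogram(pairs):
    """Estimate the semivariogram of the measurement-error-free process
    at one lag.

    pairs : list of (z_i, z_j, var_i, var_j) tuples for site pairs
            separated by (approximately) the same lag distance, where
            var_i and var_j are the known site-specific measurement
            error variances.
    """
    total = 0.0
    for z_i, z_j, var_i, var_j in pairs:
        # Debias each squared difference by the two error variances.
        total += (z_i - z_j) ** 2 - var_i - var_j
    return total / (2 * len(pairs))
```

    With all error variances set to zero this reduces to the classical method-of-moments semivariogram estimator.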

  1. Global Gravity Wave Variances from Aura MLS: Characteristics and Interpretation

    NASA Technical Reports Server (NTRS)

    Wu, Dong L.; Eckermann, Stephen D.

    2008-01-01

    The gravity wave (GW)-resolving capabilities of 118-GHz saturated thermal radiances acquired throughout the stratosphere by the Microwave Limb Sounder (MLS) on the Aura satellite are investigated and initial results presented. Because the saturated (optically thick) radiances resolve GW perturbations from a given altitude at different horizontal locations, variances are evaluated at 12 pressure altitudes between 21 and 51 km using the 40 saturated radiances found at the bottom of each limb scan. Forward modeling simulations show that these variances are controlled mostly by GWs with vertical wavelengths λz ≳ 5 km and horizontal along-track wavelengths of λy ~ 100-200 km. The tilted cigar-shaped three-dimensional weighting functions yield highly selective responses to GWs of high intrinsic frequency that propagate toward the instrument. The latter property is used to infer the net meridional component of GW propagation by differencing the variances acquired from ascending (A) and descending (D) orbits. Because of improved vertical resolution and sensitivity, Aura MLS GW variances are 5-8 times larger than those from the Upper Atmosphere Research Satellite (UARS) MLS. Like UARS MLS variances, monthly-mean Aura MLS variances in January and July 2005 are enhanced when local background wind speeds are large, due largely to GW visibility effects. Zonal asymmetries in variance maps reveal enhanced GW activity at high latitudes due to forcing by flow over major mountain ranges and at tropical and subtropical latitudes due to enhanced deep convective generation as inferred from contemporaneous MLS cloud-ice data. At 21-28-km altitude (heights not measured by the UARS MLS), GW variance in the tropics is systematically enhanced and shows clear variations with the phase of the quasi-biennial oscillation, in general agreement with GW temperature variances derived from radiosonde, rocketsonde, and limb-scan vertical profiles.

  2. Geological evolution of the Coombs Allan Hills area, Ferrar large igneous province, Antarctica: Debris avalanches, mafic pyroclastic density currents, phreatocauldrons

    NASA Astrophysics Data System (ADS)

    Ross, Pierre-Simon; White, James D. L.; McClintock, Murray

    2008-05-01

    The Jurassic Ferrar large igneous province of Antarctica comprises igneous intrusions, flood lavas, and mafic volcaniclastic deposits (now lithified). The latter rocks are particularly diverse and well-exposed in the Coombs-Allan Hills area of South Victoria Land, where they are assigned to the Mawson Formation. In this paper we use these rocks in conjunction with the pre-Ferrar sedimentary rocks (Beacon Supergroup) and the lavas themselves (Kirkpatrick Basalt) to reconstruct the geomorphological and geological evolution of the landscape. In the Early Jurassic, the surface of the region was an alluvial plain, with perhaps 1 km of mostly continental siliciclastic sediments underlying it. After the fall of silicic ash from an unknown but probably distal source, mafic magmatism of the Ferrar province began. The oldest record of this event at Allan Hills is a ≤ 180 m-thick debris-avalanche deposit (member m1 of the Mawson Formation) which contains globular domains of mafic igneous rock. These domains are inferred to represent dismembered Ferrar intrusions emplaced in the source area of the debris avalanche; shallow emplacement of Ferrar magmas caused a slope failure that mobilized the uppermost Beacon Supergroup, and the silicic ash deposits, into a pre-existing valley or basin. The period which followed ('Mawson time') was the main stage for explosive eruptions in the Ferrar province, and several cubic kilometres of both new magma and sedimentary rock were fragmented over many years. Phreatomagmatic explosions were the dominant fragmentation mechanism, with magma-water interaction taking place in both sedimentary aquifers and existing vents filled by volcaniclastic debris. At Coombs Hills, a vent complex or 'phreatocauldron' was formed by coalescence of diatreme-like structures; at Allan Hills, member m2 of the Mawson Formation consists mostly of thick, coarse-grained, poorly sorted layers inferred to represent the lithified deposits of pyroclastic density currents

  3. Estimation of prediction error variances via Monte Carlo sampling methods using different formulations of the prediction error variance.

    PubMed

    Hickey, John M; Veerkamp, Roel F; Calus, Mario P L; Mulder, Han A; Thompson, Robin

    2009-02-09

    Calculation of the exact prediction error variance-covariance matrix is often computationally too demanding, which limits its application in REML algorithms, the calculation of accuracies of estimated breeding values, and the control of variance of response to selection. Alternatively, Monte Carlo sampling can be used to calculate approximations of the prediction error variance, which converge to the true values if enough samples are used. However, in practical situations the number of samples that is computationally feasible is limited. The objective of this study was to compare the convergence rates of different formulations of the prediction error variance calculated using Monte Carlo sampling. Four of these formulations were published, four were corresponding alternative versions, and two were derived as part of this study. The different formulations had different convergence rates, which were shown to depend on the number of samples and on the level of prediction error variance. Four formulations were competitive; these made use of information either on the variance of the estimated breeding value together with the variance of the true breeding value minus the estimated breeding value, or on the covariance between the true and estimated breeding values.

  4. Blinded sample size re-estimation in superiority and noninferiority trials: bias versus variance in variance estimation.

    PubMed

    Friede, Tim; Kieser, Meinhard

    2013-01-01

    The internal pilot study design allows for modifying the sample size during an ongoing study based on a blinded estimate of the variance thus maintaining the trial integrity. Various blinded sample size re-estimation procedures have been proposed in the literature. We compare the blinded sample size re-estimation procedures based on the one-sample variance of the pooled data with a blinded procedure using the randomization block information with respect to bias and variance of the variance estimators, and the distribution of the resulting sample sizes, power, and actual type I error rate. For reference, sample size re-estimation based on the unblinded variance is also included in the comparison. It is shown that using an unbiased variance estimator (such as the one using the randomization block information) for sample size re-estimation does not guarantee that the desired power is achieved. Moreover, in situations that are common in clinical trials, the variance estimator that employs the randomization block length shows a higher variability than the simple one-sample estimator and in turn the sample size resulting from the related re-estimation procedure. This higher variability can lead to a lower power as was demonstrated in the setting of noninferiority trials. In summary, the one-sample estimator obtained from the pooled data is extremely simple to apply, shows good performance, and is therefore recommended for application.
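
    The recommended one-sample variance estimator from the blinded, pooled interim data, together with the re-estimation step it feeds, can be sketched as follows. A standard normal-approximation sample-size formula is assumed for the second function, and both helper names are ours:

```python
import math

def blinded_one_sample_variance(pooled):
    """One-sample variance of the pooled interim data, ignoring
    treatment labels (blinded). Under a nonzero treatment effect this
    simple estimator is slightly biased upward."""
    n = len(pooled)
    m = sum(pooled) / n
    return sum((x - m) ** 2 for x in pooled) / (n - 1)

def reestimated_n_per_group(pooled, delta, z_alpha=1.96, z_beta=0.84):
    """Re-estimated per-group sample size for a two-arm superiority
    trial (normal approximation; the defaults correspond to two-sided
    alpha = 0.05 and power = 0.80). delta is the relevant difference."""
    s2 = blinded_one_sample_variance(pooled)
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * s2 / delta ** 2)
```

    The appeal noted in the abstract is visible here: the blinded estimator needs nothing beyond the pooled observations, whereas the randomization-block estimator also requires the block structure.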

  5. 40 CFR 59.509 - Can I get a variance?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) NATIONAL VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a)...

  6. 40 CFR 59.509 - Can I get a variance?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) NATIONAL VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a)...

  7. Some variance reduction methods for numerical stochastic homogenization.

    PubMed

    Blanc, X; Le Bris, C; Legoll, F

    2016-04-28

    We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here.
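
    As a generic illustration of the kind of variance reduction technique being borrowed from other contexts, here is a minimal antithetic-variates sketch for estimating E[f(U)] with U uniform on (0, 1); it is not specific to the corrector problems of stochastic homogenization:

```python
import random

def mc_mean(f, n, seed=0):
    """Plain Monte Carlo estimate of E[f(U)], U ~ Uniform(0, 1)."""
    rng = random.Random(seed)
    return sum(f(rng.random()) for _ in range(n)) / n

def mc_mean_antithetic(f, n, seed=0):
    """Antithetic variates: pair each draw u with 1 - u. For monotone f
    the two evaluations are negatively correlated, which lowers the
    variance of the average at the same sampling cost."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n // 2):
        u = rng.random()
        total += f(u) + f(1.0 - u)
    return total / (2 * (n // 2))
```

    For f(u) = u each antithetic pair averages exactly 1/2, so the estimator's variance collapses to zero; for general monotone integrands the reduction is partial but often substantial.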

  8. Effect of window shape on the detection of hyperuniformity via the local number variance

    NASA Astrophysics Data System (ADS)

    Kim, Jaeuk; Torquato, Salvatore

    2017-01-01

    Hyperuniform many-particle systems in d-dimensional space ℝ^d, which include crystals, quasicrystals, and some exotic disordered systems, are characterized by an anomalous suppression of density fluctuations at large length scales such that the local number variance within a ‘spherical’ observation window grows slower than the window volume. In usual circumstances, this direct-space condition is equivalent to the Fourier-space hyperuniformity condition that the structure factor vanishes as the wavenumber goes to zero. In this paper, we comprehensively study the effect of aspherical window shapes with characteristic size L on the direct-space condition for hyperuniform systems. For lattices, we demonstrate that the variance growth rate can depend on the shape as well as the orientation of the windows, and in some cases the growth rate can be faster than the window volume (i.e. L^d), which may lead one to falsely conclude that the system is non-hyperuniform solely according to the direct-space condition. We begin by numerically investigating the variance of two-dimensional lattices using ‘superdisk’ windows, whose convex shapes continuously interpolate between circles (p = 1) and squares (p → ∞), as prescribed by a deformation parameter p, when the superdisk symmetry axis is aligned with the lattice. Subsequently, we analyze the variance for lattices as a function of the window orientation, especially for two-dimensional lattices using square windows (superdisks with p → ∞). Based on this analysis, we explain why the variance for d = 2 can grow faster than the window area or even slower than the window perimeter (e.g. like ln(L)). We then generalize the condition on window orientation, under which the variance can grow as fast as or faster than L^d (the window volume), to the case of Bravais lattices and parallelepiped windows in ℝ^d. In the case of isotropic disordered hyperuniform systems, we
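
    The direct-space condition is easy to probe numerically. A one-dimensional sketch (ours, far simpler than the paper's setting) that estimates the local number variance over randomly placed windows:

```python
import bisect
import random

def number_variance_1d(points, L, trials=20000, seed=0):
    """Monte Carlo estimate of the variance of the number of points
    falling in a randomly placed one-dimensional window [x, x + L)."""
    rng = random.Random(seed)
    pts = sorted(points)
    lo, hi = pts[0], pts[-1]
    counts = []
    for _ in range(trials):
        x = rng.uniform(lo, hi - L)
        # Count points in [x, x + L) by binary search.
        counts.append(bisect.bisect_left(pts, x + L) - bisect.bisect_left(pts, x))
    m = sum(counts) / len(counts)
    return sum((c - m) ** 2 for c in counts) / len(counts)
```

    For the integer lattice, a window of integer length always contains the same number of points (variance near zero), while L = 10.5 yields counts of 10 or 11 (variance near 1/4); in both cases the variance stays bounded as L grows rather than scaling with the window size, the hallmark of hyperuniformity.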

  9. Prediction of membrane protein types using maximum variance projection

    NASA Astrophysics Data System (ADS)

    Wang, Tong; Yang, Jie

    2011-05-01

    Predicting membrane protein types has a positive influence on further biological function analysis. To quickly and efficiently annotate the type of an uncharacterized membrane protein is a challenge. In this work, a system based on maximum variance projection (MVP) is proposed to improve the prediction performance of membrane protein types. The feature extraction step is based on a hybridization representation approach by fusing Position-Specific Score Matrix composition. The protein sequences are quantized in a high-dimensional space using this representation strategy. Some problems will be brought when analysing these high-dimensional feature vectors such as high computing time and high classifier complexity. To solve this issue, MVP, a novel dimensionality reduction algorithm is introduced by extracting the essential features from the high-dimensional feature space. Then, a K-nearest neighbour classifier is employed to identify the types of membrane proteins based on their reduced low-dimensional features. As a result, the jackknife and independent dataset test success rates of this model reach 86.1 and 88.4%, respectively, and suggest that the proposed approach is very promising for predicting membrane proteins types.

  10. Fast Minimum Variance Beamforming Based on Legendre Polynomials.

    PubMed

    Bae, MooHo; Park, Sung Bae; Kwon, Sung Jae

    2016-09-01

    Currently, minimum variance beamforming (MV) is actively investigated as a method that can improve the performance of an ultrasound beamformer, in terms of the lateral and contrast resolution. However, this method has the disadvantage of excessive computational complexity since the inverse spatial covariance matrix must be calculated. Some noteworthy methods among various attempts to solve this problem include beam space adaptive beamforming methods and the fast MV method based on principal component analysis, which are similar in that the original signal in the element space is transformed to another domain using an orthonormal basis matrix and the dimension of the covariance matrix is reduced by approximating the matrix only with important components of the matrix, hence making the inversion of the matrix very simple. Recently, we proposed a new method with further reduced computational demand that uses Legendre polynomials as the basis matrix for such a transformation. In this paper, we verify the efficacy of the proposed method through Field II simulations as well as in vitro and in vivo experiments. The results show that the approximation error of this method is less than or similar to those of the above-mentioned methods and that the lateral response of point targets and the contrast-to-speckle noise in anechoic cysts are also better than or similar to those methods when the dimensionality of the covariance matrices is reduced to the same dimension.

  11. Forecast Variance Estimates Using Dart Inversion

    NASA Astrophysics Data System (ADS)

    Gica, E.

    2014-12-01

    The tsunami forecast tool developed by the NOAA Center for Tsunami Research (NCTR) provides real-time tsunami forecasts and is composed of the following major components: a pre-computed tsunami propagation database, an inversion algorithm that utilizes real-time tsunami data recorded at DART stations to define the tsunami source, and inundation models that predict tsunami wave characteristics at specific coastal locations. The propagation database is a collection of basin-wide tsunami model runs generated from 50×100 km "unit sources" with a slip of 1 meter. Linear combination and scaling of unit sources is possible since the nonlinearity in the deep ocean is negligible. To define the tsunami source using the unit sources, real-time DART data are ingested into an inversion algorithm. Based on the selected DARTs and the length of the tsunami time series, the inversion algorithm selects the combination of unit sources and scaling factors that best fits the observed data at the selected locations. This combined source then serves as the boundary condition for the inundation models. Different combinations of DARTs and lengths of tsunami time series used in the inversion algorithm will result in different selections of unit sources and scaling factors. Since the combined unit sources are used as the boundary condition for inundation modeling, different sources will produce variations in the tsunami wave characteristics. As part of the testing procedures for the tsunami forecast tool, staff at NCTR and at both the National and Pacific Tsunami Warning Centers performed post-event forecasts for several historical tsunamis. The extent of the variation due to different source definitions obtained from the testing is analyzed by comparing the simulated maximum tsunami wave amplitude with recorded data at tide gauge locations. Results of the analysis will provide an error estimate defining the possible range of the simulated maximum tsunami wave amplitude for each specific inundation model.
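
    The inversion step amounts to a least-squares fit of scaled unit-source time series to the observed DART records. A minimal sketch for just two unit sources, solving the 2×2 normal equations directly (the function name and the restriction to two sources are ours):

```python
def fit_two_unit_sources(g1, g2, obs):
    """Least-squares scaling factors (a1, a2) such that a1*g1 + a2*g2
    best fits the observed time series, via the 2x2 normal equations."""
    s11 = sum(x * x for x in g1)
    s22 = sum(x * x for x in g2)
    s12 = sum(x * y for x, y in zip(g1, g2))
    b1 = sum(x * y for x, y in zip(g1, obs))
    b2 = sum(x * y for x, y in zip(g2, obs))
    det = s11 * s22 - s12 * s12
    return ((b1 * s22 - b2 * s12) / det, (b2 * s11 - b1 * s12) / det)
```

    Because deep-ocean propagation is treated as linear, the fitted combination a1*g1 + a2*g2 can be passed directly to the inundation models as a boundary condition; the operational algorithm additionally selects which unit sources to include.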

  12. Hidden item variance in multiple mini-interview scores.

    PubMed

    Zaidi, Nikki L Bibler; Swoboda, Christopher M; Kelcey, Benjamin M; Manuel, R Stephen

    2017-05-01

    The extant literature has largely ignored a potentially significant source of variance in multiple mini-interview (MMI) scores by "hiding" the variance attributable to the sample of attributes used on an evaluation form. This potential source of hidden variance can be defined as rating items, which typically comprise an MMI evaluation form. Due to its multi-faceted, repeated measures format, reliability for the MMI has been primarily evaluated using generalizability (G) theory. A key assumption of G theory is that G studies model the most important sources of variance to which a researcher plans to generalize. Because G studies can only attribute variance to the facets that are modeled in a G study, failure to model potentially substantial sources of variation in MMI scores can result in biased estimates of variance components. This study demonstrates the implications of hiding the item facet in MMI studies when true item-level effects exist. An extensive Monte Carlo simulation study was conducted to examine whether a commonly used hidden item, person-by-station (p × s|i) G study design results in biased estimated variance components. Estimates from this hidden item model were compared with estimates from a more complete person-by-station-by-item (p × s × i) model. Results suggest that when true item-level effects exist, the hidden item model (p × s|i) will result in biased variance components which can bias reliability estimates; therefore, researchers should consider using the more complete person-by-station-by-item model (p × s × i) when evaluating generalizability of MMI scores.

  13. Variance estimation for systematic designs in spatial surveys.

    PubMed

    Fewster, R M

    2011-12-01

    In spatial surveys for estimating the density of objects in a survey region, systematic designs will generally yield lower variance than random designs. However, estimating the systematic variance is well known to be a difficult problem. Existing methods tend to overestimate the variance, so although the variance is genuinely reduced, it is over-reported, and the gain from the more efficient design is lost. The current approaches to estimating a systematic variance for spatial surveys are to approximate the systematic design by a random design, or approximate it by a stratified design. Previous work has shown that approximation by a random design can perform very poorly, while approximation by a stratified design is an improvement but can still be severely biased in some situations. We develop a new estimator based on modeling the encounter process over space. The new "striplet" estimator has negligible bias and excellent precision in a wide range of simulation scenarios, including strip-sampling, distance-sampling, and quadrat-sampling surveys, and including populations that are highly trended or have strong aggregation of objects. We apply the new estimator to survey data for the spotted hyena (Crocuta crocuta) in the Serengeti National Park, Tanzania, and find that the reported coefficient of variation for estimated density is 20% using approximation by a random design, 17% using approximation by a stratified design, and 11% using the new striplet estimator. This large reduction in reported variance is verified by simulation.

  14. Variance Analysis of Unevenly Spaced Time Series Data

    NASA Technical Reports Server (NTRS)

    Hackman, Christine; Parker, Thomas E.

    1996-01-01

    We have investigated the effect of uneven data spacing on the computation of σx(τ). Evenly spaced simulated data sets were generated for noise processes ranging from white phase modulation (PM) to random walk frequency modulation (FM). σx(τ) was then calculated for each noise type. Data were subsequently removed from each simulated data set using typical two-way satellite time and frequency transfer (TWSTFT) data patterns to create two unevenly spaced sets with average intervals of 2.8 and 3.6 days. σx(τ) was then calculated for each sparse data set using two different approaches. First, the missing data points were replaced by linear interpolation and σx(τ) was calculated from this now-full data set. The second approach ignored the fact that the data were unevenly spaced and calculated σx(τ) as if the data were equally spaced with an average spacing of 2.8 or 3.6 days. Both approaches have advantages and disadvantages, and techniques are presented for correcting errors caused by uneven data spacing in typical TWSTFT data sets.
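
    A minimal sketch of the two ingredients described above, using the closely related overlapping Allan variance on phase data rather than the paper's exact statistic: the variance itself, and the first approach (rebuilding an evenly spaced series by linear interpolation before computing it). Both helper names are ours:

```python
def allan_variance(phase, tau0, m=1):
    """Overlapping Allan variance from evenly spaced phase (time error)
    data x_i with sampling interval tau0, at averaging time tau = m*tau0:

        AVAR(tau) = < (x_{i+2m} - 2 x_{i+m} + x_i)^2 > / (2 tau^2)
    """
    tau = m * tau0
    d2 = [phase[i + 2 * m] - 2 * phase[i + m] + phase[i]
          for i in range(len(phase) - 2 * m)]
    return sum(d ** 2 for d in d2) / (2 * tau ** 2 * len(d2))

def fill_gaps(times, values):
    """Rebuild an evenly spaced series by linear interpolation at the
    nominal sample times (the first approach above). 'times' must be
    sorted; the grid step is inferred as the minimum spacing."""
    step = min(b - a for a, b in zip(times, times[1:]))
    n = round((times[-1] - times[0]) / step) + 1
    grid = [times[0] + k * step for k in range(n)]
    out, j = [], 0
    for t in grid:
        while j + 1 < len(times) and times[j + 1] <= t:
            j += 1
        if j + 1 == len(times):
            out.append(values[-1])
        else:
            w = (t - times[j]) / (times[j + 1] - times[j])
            out.append(values[j] * (1 - w) + values[j + 1] * w)
    return out
```

    A constant frequency offset (linear phase ramp) gives zero Allan variance, as it should, since the second differences vanish; interpolation across gaps, as the abstract notes, biases the result for divergent noise types, which is why correction techniques are needed.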

  15. Gender Variance on Campus: A Critical Analysis of Transgender Voices

    ERIC Educational Resources Information Center

    Mintz, Lee M.

    2011-01-01

    Transgender college students face discrimination, harassment, and oppression on college and university campuses; consequently leading to limited academic and social success. Current literature is focused on describing the experiences of transgender students and the practical implications associated with attempting to meet their needs (Beemyn,…

  16. Quantification of spurious dissipation and mixing - Discrete variance decay in a Finite-Volume framework

    NASA Astrophysics Data System (ADS)

    Klingbeil, Knut; Mohammadi-Aragh, Mahdi; Gräwe, Ulf; Burchard, Hans

    2014-09-01

    It is well known that in numerical models the advective transport relative to fixed or moving grids needs to be discretised with sufficient accuracy to minimise the spurious decay of tracer variance (spurious mixing). In this paper a general analysis of discrete variance decay (DVD) caused by advective and diffusive fluxes is established. Lacking a general closed derivation for the local DVD rate, two non-invasive methods to estimate local DVD during model runtime are discussed. Whereas the first was presented recently by Burchard and Rennau (2008), the second is a newly proposed alternative. This alternative analysis method is argued to have a more consistent foundation. In particular, it recovers a physically sound definition of discrete variance in a Finite-Volume cell. The diagnosed DVD can be separated into physical and numerical (spurious) contributions, with the latter originating from discretisation errors. Based on the DVD analysis, a 3D dissipation analysis is developed to quantify the physically and numerically induced loss of kinetic energy. This dissipation analysis provides a missing piece of information to assess the discrete energy conservation of an ocean model. Analyses are performed and evaluated for three test cases, with complexities ranging from idealised 1D advection to a realistic ocean modelling application to the Western Baltic Sea. In all test cases the proposed alternative DVD analysis method is demonstrated to provide a reliable diagnostic tool for the local quantification of physically and numerically induced dissipation and mixing.

  17. Fine-Grained Rims in the Allan Hills 81002 and Lewis Cliff 90500 CM2 Meteorites: Their Origin and Modification

    NASA Technical Reports Server (NTRS)

    Hua, X.; Wang, J.; Buseck, P. R.

    2002-01-01

    Antarctic CM meteorites Allan Hills (ALH) 81002 and Lewis Cliff (LEW) 90500 contain abundant fine-grained rims (FGRs) that surround a variety of coarse-grained objects. FGRs from both meteorites have similar compositions and petrographic features, independent of their enclosed objects. The FGRs are chemically homogeneous at the 10 μm scale for major and minor elements and at the 25 μm scale for trace elements. They display accretionary features and contain large amounts of volatiles, presumably water. They are depleted in Ca, Mn, and S but enriched in P. All FGRs show a slightly fractionated rare earth element (REE) pattern, with enrichments of Gd and Yb and depletion of Er. Gd is twice as abundant as Er. Our results indicate that those FGRs are not genetically related to their enclosed cores. They were sampled from a reservoir of homogeneously mixed dust, prior to accretion to their parent body. The rim materials subsequently experienced aqueous alteration under identical conditions. Based on their mineral, textural, and especially chemical similarities, we conclude that ALH 81002 and LEW 90500 likely have a similar or identical source.

  18. Allan Brooks, naturalist and artist (1869-1946): the travails of an early twentieth century wildlife illustrator in North America.

    PubMed

    Winearls, Joan

    2008-01-01

    British by birth, Allan Cyril Brooks (1869-1946) emigrated to Canada in the 1880s and became one of the most important North American bird illustrators of the first half of the twentieth century. Brooks was one of the leading ornithologists and wildlife collectors of his time; he corresponded extensively with other ornithologists and supplied specimens to many major North American museums. From the 1890s on, he hoped to support himself by painting birds and mammals, but this was not possible in Canada at that time, and he was forced to turn to American sources for illustration commissions. His work can be compared with that of his contemporary, the leading American bird painter Louis Agassiz Fuertes (1874-1927), and there are striking similarities and differences in their careers. This paper discusses the work of a talented, self-taught wildlife artist working in a North American milieu, his difficulties and successes in a newly developing field, and his quest for Canadian recognition.

  19. Estimation of Model Error Variances During Data Assimilation

    NASA Technical Reports Server (NTRS)

    Dee, Dick

    2003-01-01

    Data assimilation is all about understanding the error characteristics of the data and models that are used in the assimilation process. Reliable error estimates are needed to implement observational quality control, bias correction of observations and model fields, and intelligent data selection. Meaningful covariance specifications are obviously required for the analysis as well, since the impact of any single observation strongly depends on the assumed structure of the background errors. Operational atmospheric data assimilation systems still rely primarily on climatological background error covariances. To obtain error estimates that reflect both the character of the flow and the current state of the observing system, it is necessary to solve three problems: (1) how to account for the short-term evolution of errors in the initial conditions; (2) how to estimate the additional component of error caused by model defects; and (3) how to compute the error reduction in the analysis due to observational information. Various approaches are now available that provide approximate solutions to the first and third of these problems. However, the useful accuracy of these solutions very much depends on the size and character of the model errors and the ability to account for them. Model errors represent the real-world forcing of the error evolution in a data assimilation system. Clearly, meaningful model error estimates and/or statistics must be based on information external to the model itself. The most obvious information source is observational, and since the volume of available geophysical data is growing rapidly, there is some hope that a purely statistical approach to model error estimation can be viable. This requires that the observation errors themselves are well understood and quantifiable. We will discuss some of these challenges and present a new sequential scheme for estimating model error variances from observations in the context of an atmospheric data
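The purely statistical route mentioned above can be sketched in a toy scalar setting (an assumed model, not the scheme presented in the talk): for observations y = x + ε_o of a truth x and a forecast x_b = x + ε_b with uncorrelated errors, the innovation d = y − x_b satisfies var(d) = σ_b² + σ_o², so with a known observation-error variance σ_o² the background/model error variance can be estimated from a sample of innovations.

```python
import numpy as np

# Toy innovation-statistics estimate of background error variance.
# Assumes uncorrelated observation and background errors with known
# observation-error variance sigma_o**2.
rng = np.random.default_rng(0)
sigma_b, sigma_o, n = 1.5, 0.8, 100_000

truth = rng.normal(0.0, 2.0, n)
forecast = truth + rng.normal(0.0, sigma_b, n)   # background with model error
obs = truth + rng.normal(0.0, sigma_o, n)        # imperfect observations

innovations = obs - forecast
sigma_b2_hat = np.var(innovations) - sigma_o**2  # var(d) = sigma_b^2 + sigma_o^2
print(f"estimated sigma_b^2 = {sigma_b2_hat:.3f} (true {sigma_b**2:.3f})")
```

In a real system the same identity holds with covariance matrices, E[dd⊤] = HBH⊤ + R, which is why well-characterized observation errors are a prerequisite for this approach.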

  20. Practice reduces task relevant variance modulation and forms nominal trajectory

    NASA Astrophysics Data System (ADS)

    Osu, Rieko; Morishige, Ken-Ichi; Nakanishi, Jun; Miyamoto, Hiroyuki; Kawato, Mitsuo

    2015-12-01

    Humans are capable of achieving complex tasks with redundant degrees of freedom. Much attention has been paid to task-relevant variance modulation as an indication of online feedback control strategies to cope with motor variability. Meanwhile, it has been argued that the brain learns internal models of environments to realize feedforward control with nominal trajectories. Here we examined trajectory variance in both spatial and temporal domains to elucidate the relative contribution of these control schemes. We asked subjects to learn reaching movements with multiple via-points, and found that hand trajectories converged to stereotyped trajectories, with a reduction of task-relevant variance modulation as learning proceeded. Furthermore, variance reduction was not always associated with task constraints but was highly correlated with the velocity profile. A model assuming noise on both the nominal trajectory and the motor command was able to reproduce the observed variance modulation, supporting an expression of nominal trajectories in the brain. The learning-related decrease in task-relevant modulation revealed a reduction in the influence of optimal feedback around the task constraints. After practice, the major part of computation seems to be taken over by the feedforward controller around the nominal trajectory, with feedback added only when it becomes necessary.
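The reported link between trajectory variance and the velocity profile follows naturally under signal-dependent motor noise. The sketch below (an illustrative assumed model, not the authors' code) simulates trials of a minimum-jerk reach with noise proportional to speed and shows that across-trial positional variance tracks the squared velocity.

```python
import numpy as np

# Illustrative model: signal-dependent noise on a minimum-jerk reach.
# Across-trial positional variance should correlate with squared speed.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
pos = 10 * t**3 - 15 * t**4 + 6 * t**5   # minimum-jerk position, unit reach
vel = np.gradient(pos, t)

# simulate 500 trials: noise amplitude proportional to instantaneous speed
trials = pos + 0.02 * np.abs(vel) * rng.normal(size=(500, t.size))
across_trial_var = trials.var(axis=0)

r = np.corrcoef(across_trial_var, vel**2)[0, 1]
print(f"correlation between positional variance and squared speed: r = {r:.3f}")
```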