Science.gov

Sample records for allan variance analysis

  1. Spectral Ambiguity of Allan Variance

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    1996-01-01

    We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.

  2. Estimating the Modified Allan Variance

    NASA Technical Reports Server (NTRS)

    Greenhall, Charles

    1995-01-01

    The third-difference approach to modified Allan variance (MVAR) leads to a tractable formula for a measure of MVAR estimator confidence, the equivalent degrees of freedom (edf), in the presence of power-law phase noise. The effect of estimation stride on edf is tabulated. A simple approximation for edf is given, and its errors are tabulated. A theorem allowing conservative estimates of edf in the presence of compound noise processes is given.

  3. A Wavelet Perspective on the Allan Variance.

    PubMed

    Percival, Donald B

    2016-04-01

    The origins of the Allan variance trace back 50 years to two seminal papers, one by Allan (1966) and the other by Barnes (1966). Since then, the Allan variance has played a leading role in the characterization of high-performance time and frequency standards. Wavelets first arose in the early 1980s in the geophysical literature, and the discrete wavelet transform (DWT) became prominent in the late 1980s in the signal processing literature. Flandrin (1992) briefly documented a connection between the Allan variance and a wavelet transform based upon the Haar wavelet. Percival and Guttorp (1994) noted that one popular estimator of the Allan variance, the maximal overlap estimator, can be interpreted in terms of a version of the DWT now widely referred to as the maximal overlap DWT (MODWT). In particular, when the MODWT is based on the Haar wavelet, the variance of the resulting wavelet coefficients, the wavelet variance, is identical to the Allan variance when the latter is multiplied by one-half. The theory behind the wavelet variance can thus deepen our understanding of the Allan variance. In this paper, we review basic wavelet variance theory with an emphasis on the Haar-based wavelet variance and its connection to the Allan variance. We then note that estimation theory for the wavelet variance offers a means of constructing asymptotically correct confidence intervals (CIs) for the Allan variance without reverting to the common practice of specifying a power-law noise type a priori. We also review recent work on specialized estimators of the wavelet variance that are of interest when some observations are missing (gappy data) or in the presence of contamination (rogue observations or outliers). It is a simple matter to adapt these estimators to become estimators of the Allan variance. Finally, we note that wavelet variances based upon wavelets other than the Haar offer interesting generalizations of the Allan variance. PMID:26529757
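
    The factor-of-one-half identity between the Haar wavelet variance and the Allan variance can be checked numerically. The sketch below (function names are illustrative; it assumes unit-scale coefficients and fractional-frequency data y at unit averaging time) compares the two estimators on the same series:

```python
import numpy as np

def allan_variance(y):
    """Allan variance at unit averaging time:
    one-half the mean of squared first differences of y."""
    d = np.diff(y)
    return 0.5 * np.mean(d ** 2)

def haar_wavelet_variance(y):
    """Unit-scale Haar wavelet variance: mean square of the
    maximal-overlap Haar coefficients (y[t] - y[t-1]) / 2."""
    w = np.diff(y) / 2.0
    return np.mean(w ** 2)

rng = np.random.default_rng(0)
y = rng.standard_normal(10_000)   # synthetic white frequency noise

# The identity noted in the abstract: wavelet variance = Allan variance / 2.
print(haar_wavelet_variance(y), allan_variance(y) / 2)
```

    Because both estimators are built from the same first differences, the identity holds exactly here, not just in expectation.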

  4. Estimating the Modified Allan Variance

    NASA Technical Reports Server (NTRS)

    Greenhall, Charles

    1995-01-01

    A paper at the 1992 FCS showed how to express the modified Allan variance (mvar) in terms of the third difference of the cumulative sum of time residuals. Although this reformulated definition was presented merely as a computational trick for simplifying the calculation of mvar estimates, it has since turned out to be a powerful theoretical tool for deriving the statistical quality of those estimates in terms of their equivalent degrees of freedom (edf), defined for an estimator V by edf V = 2(EV)^2 / var V. Confidence intervals for mvar can then be constructed from levels of the appropriate chi-square distribution.
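
    The third-difference reformulation described above can be verified numerically: the inner second-difference-of-averages sum in the standard mvar definition collapses to a third difference of the cumulative sum of the phase residuals. The sketch below assumes phase data x sampled at interval tau0; the function names and normalization convention are illustrative, not the paper's notation:

```python
import numpy as np

def mvar_direct(x, m, tau0=1.0):
    """Modified Allan variance from phase data x at averaging factor m,
    via the standard second-difference-of-m-point-averages form."""
    N = len(x)
    tau = m * tau0
    n = N - 3 * m + 1                     # number of usable windows
    acc = np.empty(n)
    for j in range(n):
        i = np.arange(j, j + m)
        acc[j] = np.sum(x[i + 2 * m] - 2 * x[i + m] + x[i]) / m
    return np.mean(acc ** 2) / (2 * tau ** 2)

def mvar_third_difference(x, m, tau0=1.0):
    """Same quantity via the third difference of the cumulative sum
    of the phase residuals (the reformulation the abstract describes)."""
    tau = m * tau0
    S = np.concatenate(([0.0], np.cumsum(x)))   # S[k] = x[0] + ... + x[k-1]
    j = np.arange(len(x) - 3 * m + 1)
    d3 = (S[j + 3 * m] - 3 * S[j + 2 * m] + 3 * S[j + m] - S[j]) / m
    return np.mean(d3 ** 2) / (2 * tau ** 2)

rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(200))   # synthetic phase residuals
print(mvar_direct(x, 2), mvar_third_difference(x, 2))
```

    The two routes agree to machine precision, since each m-point block sum of a second difference telescopes into a third difference of the cumulative sum.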

  5. Avoiding Aliasing in Allan Variance: An Application to Fiber Link Data Analysis.

    PubMed

    Calosso, Claudio E; Clivati, Cecilia; Micalizio, Salvatore

    2016-04-01

    Optical fiber links are known as the best-performing tools for transferring ultrastable frequency reference signals. However, these signals are affected by phase noise up to bandwidths of several kilohertz, and a careful data-processing strategy is required to properly estimate the uncertainty. This aspect is often overlooked, and a number of approaches have been proposed to deal with it implicitly. Here, we frame this issue in terms of aliasing and show how typical tools of signal analysis can be adapted to the evaluation of optical fiber link performance. In this way, it is possible to use the Allan variance (AVAR) as an estimator of stability, and there is no need to introduce other estimators. The general rules we derive can be extended to all optical links. As an example, we apply this method to the experimental data we obtained on a 1284-km coherent optical link for frequency dissemination, which we realized in Italy. PMID:26800534

  6. Application of the Allan Variance to Time Series Analysis in Astrometry and Geodesy: A Review.

    PubMed

    Malkin, Zinovy

    2016-04-01

    The Allan variance (AVAR) was introduced 50 years ago as a statistical tool for assessing the frequency standards deviations. For the past decades, AVAR has increasingly been used in geodesy and astrometry to assess the noise characteristics in geodetic and astrometric time series. A specific feature of astrometric and geodetic measurements, as compared with clock measurements, is that they are generally associated with uncertainties; thus, an appropriate weighting should be applied during data analysis. In addition, some physically connected scalar time series naturally form series of multidimensional vectors. For example, three station coordinates time series X, Y, and Z can be combined to analyze 3-D station position variations. The classical AVAR is not intended for processing unevenly weighted and/or multidimensional data. Therefore, AVAR modifications, namely weighted AVAR (WAVAR), multidimensional AVAR (MAVAR), and weighted multidimensional AVAR (WMAVAR), were introduced to overcome these deficiencies. In this paper, a brief review is given of the experience of using AVAR and its modifications in processing astrogeodetic time series. PMID:26540681

  7. A Simple Algorithm for Approximating Confidence on the Modified Allan Variance and the Time Variance

    NASA Technical Reports Server (NTRS)

    Weiss, Marc A.; Greenhall, Charles A.

    1996-01-01

    An approximating algorithm for computing equivalent degrees of freedom of the Modified Allan Variance and its square root, the Modified Allan Deviation (MVAR and MDEV), and the Time Variance and Time Deviation (TVAR and TDEV) is presented, along with an algorithm for approximating the inverse chi-square distribution.

  8. On the application of Allan variance method for Ring Laser Gyro performance characterization

    SciTech Connect

    Ng, L.C.

    1993-10-15

    This report describes the method of Allan variance and its application to the characterization of a Ring Laser Gyro's (RLG) performance. Allan variance, a time domain analysis technique, is an accepted IEEE standard for gyro specifications. The method was initially developed by David Allan of the National Bureau of Standards to quantify the error statistics of a Cesium beam frequency standard employed as the US frequency standard in the 1960s. The method can, in general, be applied to analyze the error characteristics of any precision measurement instrument. The key attribute of the method is that it allows for a finer, easier characterization and identification of error sources and their contribution to the overall noise statistics. This report presents an overview of the method, explains the relationship between Allan variance and the power spectral density distribution of underlying noise sources, describes the batch and recursive implementation approaches, validates the Allan variance computation with a simulation model, and illustrates the Allan variance method using data collected from several Honeywell LIMU units.
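
    A batch implementation of the kind the report describes can be sketched as follows. This is a minimal overlapping Allan variance estimator for rate samples (e.g. gyro output); the cumulative-sum formulation and function name are illustrative choices, not the report's code:

```python
import numpy as np

def overlapping_avar(y, m, tau0=1.0):
    """Overlapping Allan variance of rate samples y at averaging
    time m * tau0, computed via cumulative sums: each term is the
    difference of two adjacent m-sample averages."""
    S = np.concatenate(([0.0], np.cumsum(y)))
    j = np.arange(len(y) - 2 * m + 1)
    d = (S[j + 2 * m] - 2 * S[j + m] + S[j]) / m   # avg(next m) - avg(this m)
    return 0.5 * np.mean(d ** 2)

# Toy check at m = 1: differences of [0, 1, 0, 1] are +/-1,
# so the Allan variance is 0.5 * mean(1) = 0.5.
y = np.array([0.0, 1.0, 0.0, 1.0])
print(overlapping_avar(y, 1))   # 0.5
```

    Sweeping m and plotting sqrt(AVAR) against m * tau0 on log-log axes gives the familiar Allan deviation plot whose slopes identify angle random walk, bias instability, and rate random walk regimes.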

  9. Numbers Of Degrees Of Freedom Of Allan-Variance Estimators

    NASA Technical Reports Server (NTRS)

    Greenhall, Charles A.

    1992-01-01

    Report discusses formulas for estimation of Allan variances. Presents algorithms for closed-form approximations of numbers of degrees of freedom characterizing results obtained when various estimators applied to five power-law components of classical mathematical model of clock noise.

  10. Online Estimation of Allan Variance Coefficients Based on a Neural-Extended Kalman Filter

    PubMed Central

    Miao, Zhiyong; Shen, Feng; Xu, Dingjie; He, Kunpeng; Tian, Chunmiao

    2015-01-01

    As a noise analysis method for inertial sensors, the traditional Allan variance method requires the storage of a large amount of data and manual analysis of an Allan variance graph. Although the existing online estimation methods avoid the storage of data and the painful procedure of drawing slope lines for estimation, they require complex transformations and even cause errors during the modeling of dynamic Allan variance. To solve these problems, first, a nonlinear state-space model that directly models the stochastic errors of inertial sensors was established. Then, a neural-extended Kalman filter algorithm was used to estimate the Allan variance coefficients. The real noises of an ADIS16405 IMU and fiber optic gyro-sensors were analyzed by the proposed method and traditional methods. The experimental results show that the proposed method is more suitable for estimating the Allan variance coefficients than the traditional methods. Moreover, the proposed method effectively avoids the storage of data and can be easily implemented using an online processor. PMID:25625903

  12. The Third-Difference Approach to Modified Allan Variance

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    1995-01-01

    This study gives strategies for estimating the modified Allan variance (mvar) and formulas for computing the equivalent degrees of freedom (edf) of the estimators. A third-difference formulation of mvar leads to a tractable formula for edf in the presence of power-law phase noise. The effect of estimation stride on edf is tabulated. First-degree rational-function approximations for edf are derived.

  13. Relationship between Allan variances and Kalman Filter parameters

    NASA Technical Reports Server (NTRS)

    Vandierendonck, A. J.; Mcgraw, J. B.; Brown, R. G.

    1984-01-01

    A relationship was constructed between the Allan variance parameters (H sub 2, H sub 1, H sub 0, H sub -1 and H sub -2) and a Kalman Filter model that would be used to estimate and predict clock phase, frequency and frequency drift. To start, the meaning of those Allan variance parameters and how they are arrived at for a given frequency source is reviewed. Although a subset of these parameters is arrived at by measuring phase as a function of time rather than as a spectral density, they all represent phase noise spectral density coefficients, though not necessarily those of a rational spectral density. The phase noise spectral density is then transformed into a time domain covariance model, which can then be used to derive the Kalman Filter model parameters. Simulation results of that covariance model are presented and compared to clock uncertainties predicted by Allan variance parameters. A two-state Kalman Filter model is then derived and the significance of each state is explained.
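
    The kind of two-state clock model the abstract refers to can be sketched as follows. This is the standard phase/frequency clock model with white-FM spectral density q1 and random-walk-FM spectral density q2 (under that model, roughly, AVAR(tau) = q1/tau + q2*tau/3); the symbols q1, q2 and the function names are assumptions of this sketch, not the paper's notation:

```python
import numpy as np

def clock_phi_q(dt, q1, q2):
    """State transition Phi and process-noise covariance Q over a step dt
    for a two-state clock model (states: phase x, frequency y),
    driven by white FM (q1) and random-walk FM (q2)."""
    phi = np.array([[1.0, dt],
                    [0.0, 1.0]])
    Q = np.array([[q1 * dt + q2 * dt**3 / 3.0, q2 * dt**2 / 2.0],
                  [q2 * dt**2 / 2.0,           q2 * dt]])
    return phi, Q

def predicted_phase_variance(t, q1, q2, P0=None):
    """Propagate the state covariance from P0 over time t in one step
    and return the phase variance; with P0 = 0 this is the classic
    q1*t + q2*t**3/3 growth of clock phase uncertainty."""
    phi, Q = clock_phi_q(t, q1, q2)
    P0 = np.zeros((2, 2)) if P0 is None else P0
    P = phi @ P0 @ phi.T + Q
    return P[0, 0]

print(predicted_phase_variance(2.0, 0.5, 0.3))   # 0.5*2 + 0.3*8/3 = 1.8
```

    The covariance propagation step P -> Phi P Phi^T + Q is exactly the prediction step of the Kalman filter the abstract derives, so the q's estimated from an Allan variance plot plug directly into the filter.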

  14. The dynamic Allan Variance IV: characterization of atomic clock anomalies.

    PubMed

    Galleani, Lorenzo; Tavella, Patrizia

    2015-05-01

    The number of applications where precise clocks play a key role is steadily increasing, satellite navigation being the main example. Precise clock anomalies are hence critical events, and their characterization is a fundamental problem. When an anomaly occurs, the clock stability changes with time, and this variation can be characterized with the dynamic Allan variance (DAVAR). We obtain the DAVAR for a series of common clock anomalies, namely, a sinusoidal term, a phase jump, a frequency jump, and a sudden change in the clock noise variance. These anomalies are particularly common in space clocks. Our analytic results clarify how the clock stability changes during these anomalies. PMID:25965674

  15. Measurement of Allan variance and phase noise at fractions of a millihertz

    NASA Technical Reports Server (NTRS)

    Conroy, Bruce L.; Le, Duc

    1990-01-01

    Although the measurement of Allan variance of oscillators is well documented, there is a need for a simplified system for finding the degradation of phase noise and Allan variance step-by-step through a system. This article describes an instrumentation system for simultaneous measurement of additive phase noise and degradation in Allan variance through a transmitter system. Also included are measurements of a 20-kW X-band transmitter showing the effect of adding a pass tube regulator.

  16. Allan Variance Computed in Space Domain: Definition and Application to InSAR Data to Characterize Noise and Geophysical Signal.

    PubMed

    Cavalié, Olivier; Vernotte, François

    2016-04-01

    The Allan variance was introduced 50 years ago for analyzing the stability of frequency standards. In addition to its metrological interest, it may be also considered as an estimator of the large trends of the power spectral density (PSD) of frequency deviation. For instance, the Allan variance is able to discriminate different types of noise characterized by different power laws in the PSD. The Allan variance was also used in other fields than time and frequency metrology: for more than 20 years, it has been used in accelerometry, geophysics, geodesy, astrophysics, and even finances. However, it seems that up to now, it has been exclusively applied for time series analysis. We propose here to use the Allan variance on spatial data. Interferometric synthetic aperture radar (InSAR) is used in geophysics to image ground displacements in space [over the synthetic aperture radar (SAR) image spatial coverage] and in time thanks to the regular SAR image acquisitions by dedicated satellites. The main limitation of the technique is the atmospheric disturbances that affect the radar signal while traveling from the sensor to the ground and back. In this paper, we propose to use the Allan variance for analyzing spatial data from InSAR measurements. The Allan variance was computed in XY mode as well as in radial mode for detecting different types of behavior for different space-scales, in the same way as the different types of noise versus the integration time in the classical time and frequency application. We found that radial Allan variance is the more appropriate way to have an estimator insensitive to the spatial axis and we applied it on SAR data acquired over eastern Turkey for the period 2003-2011. Spatial Allan variance allowed us to well characterize noise features, classically found in InSAR such as phase decorrelation producing white noise or atmospheric delays, behaving like a random walk signal. We finally applied the spatial Allan variance to an InSAR time

  17. Twenty-Five Years of Applications of the Modified Allan Variance in Telecommunications.

    PubMed

    Bregni, Stefano

    2016-04-01

    The Modified Allan Variance (MAVAR) was originally defined in 1981 for measuring frequency stability in precision oscillators. Due to its outstanding accuracy in discriminating power-law noise, it attracted significant interest among telecommunications engineers since the early 1990s, when it was approved as a standard measure in international standards, redressed as Time Variance (TVAR), for specifying the time stability of network synchronization signals and of equipment clocks. A dozen years later, the usage of MAVAR was also introduced for Internet traffic analysis to estimate self-similarity and long-range dependence. Further, in this field, it demonstrated superior accuracy and sensitivity, better than most popular tools already in use. This paper surveys the last 25 years of progress in extending the field of application of the MAVAR in telecommunications. First, the rationale and principles of the MAVAR are briefly summarized. Its adaptation as TVAR for specification of timing stability is presented. The usage of MAVAR/TVAR in telecommunications standards is reviewed. Examples of measurements on real telecommunications equipment clocks are presented, providing an overview on their actual performance in terms of MAVAR. Moreover, applications of MAVAR to network traffic analysis are surveyed. The superior accuracy of MAVAR in estimating long-range dependence is emphasized by highlighting some remarkable practical examples of real network traffic analysis. PMID:26529754

  18. On the Design of Attitude-Heading Reference Systems Using the Allan Variance.

    PubMed

    Hidalgo-Carrió, Javier; Arnold, Sascha; Poulakis, Pantelis

    2016-04-01

    The Allan variance is a method to characterize stochastic random processes. The technique was originally developed to characterize the stability of atomic clocks and has also been successfully applied to the characterization of inertial sensors. Inertial navigation systems (INS) can provide accurate results in a short time, which tend to rapidly degrade in longer time intervals. During the last decade, the performance of inertial sensors has significantly improved, particularly in terms of signal stability, mechanical robustness, and power consumption. The mass and volume of inertial sensors have also been significantly reduced, offering system-level design and accommodation advantages. This paper presents a complete methodology for the characterization and modeling of inertial sensors using the Allan variance, with direct application to navigation systems. Although the concept of sensor fusion is relatively straightforward, accurate characterization and sensor-information filtering is not a trivial task, yet they are essential for good performance. A complete and reproducible methodology utilizing the Allan variance, including all the intermediate steps, is described. An end-to-end (E2E) process for sensor-error characterization and modeling up to the final integration in the sensor-fusion scheme is explained in detail. The strength of this approach is demonstrated with representative tests on novel, high-grade inertial sensors. Experimental navigation results are presented from two distinct robotic applications: a planetary exploration rover prototype and an autonomous underwater vehicle (AUV). PMID:26800535

  19. Investigation of Allan variance for determining noise spectral forms with application to microwave radiometry

    NASA Technical Reports Server (NTRS)

    Stanley, William D.

    1994-01-01

    An investigation of the Allan variance method as a possible means for characterizing fluctuations in radiometric noise diodes has been performed. The goal is to separate fluctuation components into white noise, flicker noise, and random-walk noise. The primary means is by discrete-time processing, and the study focused primarily on the digital processes involved. Noise satisfying the requirements was generated by direct convolution, fast Fourier transformation (FFT) processing in the time domain, and FFT processing in the frequency domain. Some of the numerous results obtained are presented along with the programs used in the study.

  20. Allan deviation analysis of financial return series

    NASA Astrophysics Data System (ADS)

    Hernández-Pérez, R.

    2012-05-01

    We perform a scaling analysis for the return series of different financial assets applying the Allan deviation (ADEV), which is used in time and frequency metrology to quantitatively characterize the stability of frequency standards, since it has been shown to be a robust quantity for analyzing fluctuations of non-stationary time series over different observation intervals. The data used are opening-price daily series for assets from different markets over a span of around ten years. We found that the ADEV results for the return series at short scales resemble those expected for an uncorrelated series, consistent with the efficient market hypothesis. On the other hand, the ADEV results for the absolute return series at short scales (the first one or two decades) decrease following an approximate scaling relation up to a point that differs for almost each asset, after which the ADEV deviates from scaling; this suggests that clustering, long-range dependence, and non-stationarity signatures in the series drive the results for large observation intervals.

  1. Naive Analysis of Variance

    ERIC Educational Resources Information Center

    Braun, W. John

    2012-01-01

    The Analysis of Variance is often taught in introductory statistics courses, but it is not clear that students really understand the method. This is because the derivation of the test statistic and p-value requires a relatively sophisticated mathematical background which may not be well-remembered or understood. Thus, the essential concept behind…

  2. Budget variance analysis using RVUs.

    PubMed

    Berlin, M F; Budzynski, M R

    1998-01-01

    This article details the use of variance analysis as a management tool to evaluate the financial health of the practice. A common financial tool for administrators has been a simple calculation measuring the difference between actual and budgeted financials. Standard cost accounting provides a methodology known as variance analysis to better understand actual vs. budgeted financial streams. The standard variance analysis has been modified by applying relative value units (RVUs) as standards for the practice. PMID:10387247

  3. Analysis of Variance: Variably Complex

    ERIC Educational Resources Information Center

    Drummond, Gordon B.; Vowler, Sarah L.

    2012-01-01

    These authors have previously described how to use the "t" test to compare two groups. In this article, they describe the use of a different test, analysis of variance (ANOVA) to compare more than two groups. ANOVA is a test of group differences: do at least two of the means differ from each other? ANOVA assumes (1) normal distribution of…
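
    The group-comparison logic described above (do at least two of the group means differ?) reduces to comparing between-group and within-group mean squares. A minimal hand-rolled sketch of the one-way ANOVA F statistic (real analyses would use a statistics package; the function name is illustrative):

```python
import numpy as np

def one_way_anova_F(*groups):
    """One-way ANOVA F statistic: between-group mean square
    divided by within-group mean square."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    k = len(groups)                         # number of groups
    n = sum(len(g) for g in groups)         # total observations
    grand = np.concatenate(groups).mean()
    ssb = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))

# Three small groups with means 2, 3, 4: by hand, SSB = 6 (df 2)
# and SSW = 6 (df 6), so F = 3/1 = 3.
F = one_way_anova_F([1, 2, 3], [2, 3, 4], [3, 4, 5])
print(F)   # 3.0
```

    A p-value then follows from the F distribution with (k - 1, n - k) degrees of freedom.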

  4. Variance analysis. Part I, Extending flexible budget variance analysis to acuity.

    PubMed

    Finkler, S A

    1991-01-01

    The author reviews the concepts of flexible budget variance analysis, including the price, quantity, and volume variances generated by that technique. He also introduces the concept of acuity variance and provides direction on how such a variance measure can be calculated. Part II in this two-part series on variance analysis will look at how personal computers can be useful in the variance analysis process. PMID:1870002

  5. Another Line for the Analysis of Variance

    ERIC Educational Resources Information Center

    Brown, Bruce L.; Harshbarger, Thad R.

    1976-01-01

    A test is developed for hypotheses about the grand mean in the analysis of variance, using the known relationship between the t distribution and the F distribution with 1 df (degree of freedom) for the numerator. (Author/RC)

  6. Nonorthogonal Analysis of Variance Programs: An Evaluation.

    ERIC Educational Resources Information Center

    Hosking, James D.; Hamer, Robert M.

    1979-01-01

    Six computer programs for four methods of nonorthogonal analysis of variance are compared for capabilities, accuracy, cost, transportability, quality of documentation, associated computational capabilities, and ease of use: OSIRIS; SAS; SPSS; MANOVA; BMDP2V; and MULTIVARIANCE. (CTM)

  7. Allan Deviation Plot as a Tool for Quartz-Enhanced Photoacoustic Sensors Noise Analysis.

    PubMed

    Giglio, Marilena; Patimisco, Pietro; Sampaolo, Angelo; Scamarcio, Gaetano; Tittel, Frank K; Spagnolo, Vincenzo

    2016-04-01

    We report here on the use of the Allan deviation plot to analyze the long-term stability of a quartz-enhanced photoacoustic (QEPAS) gas sensor. The Allan plot provides information about the optimum averaging time for the QEPAS signal and allows the prediction of its ultimate detection limit. The Allan deviation can also be used to determine the main sources of noise coming from the individual components of the sensor. Quartz tuning fork thermal noise dominates for integration times up to 275 s, whereas at longer averaging times, the main contribution to the sensor noise originates from laser power instabilities. PMID:26529758

  8. Formative Use of Intuitive Analysis of Variance

    ERIC Educational Resources Information Center

    Trumpower, David L.

    2013-01-01

    Students' informal inferential reasoning (IIR) is often inconsistent with the normative logic underlying formal statistical methods such as Analysis of Variance (ANOVA), even after instruction. In two experiments reported here, student's IIR was assessed using an intuitive ANOVA task at the beginning and end of a statistics course. In…

  9. Directional variance analysis of annual rings

    NASA Astrophysics Data System (ADS)

    Kumpulainen, P.; Marjanen, K.

    2010-07-01

    Wood quality measurement methods are of increasing importance in the wood industry. The goal is to produce more high-quality products with higher market value than is produced today. One of the key factors for increasing the market value is to provide better measurements, giving more information to support the decisions made later in the product chain. Strength and stiffness are important properties of wood. They are related to mean annual ring width and its deviation. These indicators can be estimated from images taken from the log ends by two-dimensional power spectrum analysis. The spectrum analysis has been used successfully for images of pine. However, the annual rings in birch, for example, are less distinguishable, and the basic spectrum analysis method does not give reliable results. A novel method for local log-end variance analysis based on the Radon transform is proposed. The directions and positions of the annual rings can be estimated from local minimum and maximum variance estimates. Applying the spectrum analysis to the maximum local variance estimate instead of the original image produces a more reliable estimate of the annual ring width. The proposed method is not limited to log-end analysis only; it is usable in other two-dimensional random signal and texture analysis tasks.

  10. Analysis of variance based on fuzzy observations

    NASA Astrophysics Data System (ADS)

    Nourbakhsh, M.; Mashinchi, M.; Parchami, A.

    2013-04-01

    Analysis of variance (ANOVA) is an important method in exploratory and confirmatory data analysis. The simplest type of ANOVA is one-way ANOVA for comparison among means of several populations. In this article, we extend one-way ANOVA to a case where observed data are fuzzy observations rather than real numbers. Two real-data examples are given to show the performance of this method.

  11. Analysis of variance of microarray data.

    PubMed

    Ayroles, Julien F; Gibson, Greg

    2006-01-01

    Analysis of variance (ANOVA) is an approach used to identify differentially expressed genes in complex experimental designs. It is based on testing for the significance of the magnitude of effect of two or more treatments taking into account the variance within and between treatment classes. ANOVA is a highly flexible analytical approach that allows investigators to simultaneously assess the contributions of multiple factors to gene expression variation, including technical (dye, batch) effects and biological (sex, genotype, drug, time) ones, as well as interactions between factors. This chapter provides an overview of the theory of linear mixture modeling and the sequence of steps involved in fitting gene-specific models and discusses essential features of experimental design. Commercial and open-source software for performing ANOVA is widely available. PMID:16939792

  12. RISK ANALYSIS, ANALYSIS OF VARIANCE: GETTING MORE FROM OUR DATA

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Analysis of variance (ANOVA) and regression are common statistical techniques used to analyze agronomic experimental data and determine significant differences among yields due to treatments or other experimental factors. Risk analysis provides an alternate and complementary examination of the same...

  13. Automatic variance analysis of multistage care pathways.

    PubMed

    Li, Xiang; Liu, Haifeng; Zhang, Shilei; Mei, Jing; Xie, Guotong; Yu, Yiqin; Li, Jing; Lakshmanan, Geetika T

    2014-01-01

    A care pathway (CP) is a standardized process that consists of multiple care stages, clinical activities and their relations, aimed at ensuring and enhancing the quality of care. However, actual care may deviate from the planned CP, and analysis of these deviations can help clinicians refine the CP and reduce medical errors. In this paper, we propose a CP variance analysis method to automatically identify the deviations between actual patient traces in electronic medical records (EMR) and a multistage CP. As the care stage information is usually unavailable in EMR, we first align every trace with the CP using a hidden Markov model. From the aligned traces, we report three types of deviations for every care stage: additional activities, absent activities and violated constraints, which are identified by using the techniques of temporal logic and binomial tests. The method has been applied to a CP for the management of congestive heart failure and real world EMR, providing meaningful evidence for the further improvement of care quality. PMID:25160280

  14. Applications of non-parametric statistics and analysis of variance on sample variances

    NASA Technical Reports Server (NTRS)

    Myers, R. H.

    1981-01-01

    Nonparametric methods that are available for NASA-type applications are discussed. An attempt is made here to survey what can be used, to offer recommendations as to when each would be applicable, and to compare the methods, when possible, with the usual normal-theory procedures that are available for the Gaussian analog. It is important here to point out the hypotheses that are being tested, the assumptions that are being made, and the limitations of the nonparametric procedures. The appropriateness of doing analysis of variance on sample variances is also discussed and studied. This procedure is followed in several NASA simulation projects. On the surface this would appear to be a reasonably sound procedure. However, the difficulties involved center around the normality problem and the basic homogeneous variance assumption that is made in usual analysis of variance problems. These difficulties are discussed and guidelines given for using the methods.

  15. Functional Analysis of Variance for Association Studies

    PubMed Central

    Vsevolozhskaya, Olga A.; Zaykin, Dmitri V.; Greenwood, Mark C.; Wei, Changshuai; Lu, Qing

    2014-01-01

    While progress has been made in identifying common genetic variants associated with human diseases, for most common complex diseases the identified genetic variants account for only a small proportion of heritability. Challenges remain in finding additional unknown genetic variants predisposing to complex diseases. With the advance of next-generation sequencing technologies, sequencing studies have become commonplace in genetic research. The ongoing exome-sequencing and whole-genome-sequencing studies generate a massive amount of sequencing variants and allow researchers to comprehensively investigate their role in human diseases. The discovery of new disease-associated variants can be enhanced by utilizing powerful and computationally efficient statistical methods. In this paper, we propose a functional analysis of variance (FANOVA) method for testing an association of sequence variants in a genomic region with a qualitative trait. The FANOVA has a number of advantages: (1) it tests for a joint effect of gene variants, including both common and rare; (2) it fully utilizes linkage disequilibrium and genetic position information; and (3) it allows for either protective or risk-increasing causal variants. Through simulations, we show that FANOVA outperforms two popularly used methods, SKAT and a previously proposed method based on functional linear models (FLM), especially if the sample size of a study is small and/or the sequence variants have low to moderate effects. We conduct an empirical study by applying the three methods (FANOVA, SKAT and FLM) to sequencing data from the Dallas Heart Study. While SKAT and FLM respectively detected ANGPTL4 and ANGPTL3 as associated with obesity, FANOVA was able to identify both genes as associated with obesity. PMID:25244256

  16. Wave propagation analysis using the variance matrix.

    PubMed

    Sharma, Richa; Ivan, J Solomon; Narayanamurthy, C S

    2014-10-01

    The propagation of a coherent laser wave-field through a pseudo-random phase plate is studied using the variance matrix estimated from Shack-Hartmann wavefront sensor data. The uncertainty principle is used as a tool in discriminating the data obtained from the Shack-Hartmann wavefront sensor. Quantities of physical interest such as the twist parameter, and the symplectic eigenvalues, are estimated from the wavefront sensor measurements. A distance measure between two variance matrices is introduced and used to estimate the spatial asymmetry of a wave-field in the experiment. The estimated quantities are then used to compare a distorted wave-field with its undistorted counterpart. PMID:25401243

  17. Relating the Hadamard Variance to MCS Kalman Filter Clock Estimation

    NASA Technical Reports Server (NTRS)

    Hutsell, Steven T.

    1996-01-01

    The Global Positioning System (GPS) Master Control Station (MCS) currently makes significant use of the Allan Variance. This two-sample variance equation has proven excellent as a handy, understandable tool, both for time domain analysis of GPS cesium frequency standards, and for fine tuning the MCS's state estimation of these atomic clocks. The Allan Variance does not explicitly converge for noise types with alpha less than or equal to minus 3, and can be greatly affected by frequency drift. Because GPS rubidium frequency standards exhibit non-trivial aging and aging noise characteristics, the basic Allan Variance analysis must be augmented in order to (a) compensate for a dynamic frequency drift, and (b) characterize two additional noise types, specifically alpha = minus 3 and alpha = minus 4. As the GPS program progresses, we will utilize a larger percentage of rubidium frequency standards than ever before. Hence, GPS rubidium clock characterization will require more attention than ever before. The three-sample variance, commonly referred to as a renormalized Hadamard Variance, is unaffected by linear frequency drift, converges for alpha greater than minus 5, and thus has utility for modeling noise in GPS rubidium frequency standards. This paper demonstrates the potential of Hadamard Variance analysis in GPS operations, and presents an equation that relates the Hadamard Variance to the MCS's Kalman filter process noises.
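The drift insensitivity that motivates the Hadamard variance is easy to demonstrate. Below is a minimal NumPy sketch (not the MCS implementation) of the non-overlapping two-sample (Allan) and three-sample (Hadamard) variances for fractional-frequency data, using the standard normalizations:

```python
import numpy as np

def allan_variance(y, tau=1):
    """Non-overlapping two-sample (Allan) variance of fractional
    frequency data y, averaged over windows of tau samples."""
    m = len(y) // tau
    yb = y[:m * tau].reshape(m, tau).mean(axis=1)  # tau-averaged frequencies
    return 0.5 * np.mean(np.diff(yb) ** 2)         # half mean squared 1st difference

def hadamard_variance(y, tau=1):
    """Three-sample (Hadamard) variance: second differences of the
    tau-averaged frequencies, insensitive to linear frequency drift."""
    m = len(y) // tau
    yb = y[:m * tau].reshape(m, tau).mean(axis=1)
    return np.mean(np.diff(yb, n=2) ** 2) / 6.0    # mean squared 2nd difference / 6

# A pure linear frequency drift inflates the Allan variance, but it
# cancels in the second differences used by the Hadamard variance.
drift = 1e-3 * np.arange(1000)
print(allan_variance(drift), hadamard_variance(drift))
```

Running both estimators on the pure-drift series makes the contrast concrete: the Allan variance reports the drift as noise, while the Hadamard variance is essentially zero.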

  18. A Computer Program to Determine Reliability Using Analysis of Variance

    ERIC Educational Resources Information Center

    Burns, Edward

    1976-01-01

    A computer program, written in Fortran IV, is described which assesses reliability by using analysis of variance. It produces a complete analysis of variance table in addition to reliability coefficients for unadjusted and adjusted data as well as the intraclass correlation for m subjects and n items. (Author)
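The quantities such a program reports can be reproduced in a few lines. The sketch below (in Python rather than the original Fortran IV, and with hypothetical scores) derives a Hoyt-type reliability coefficient and the single-rater intraclass correlation from the usual two-way ANOVA mean squares:

```python
import numpy as np

def anova_reliability(x):
    """Hoyt-type reliability and single-rater ICC from an m-subjects by
    n-items score matrix, via two-way ANOVA mean squares (a sketch of
    the idea; not the original Fortran program)."""
    m, n = x.shape
    grand = x.mean()
    ss_subj = n * ((x.mean(axis=1) - grand) ** 2).sum()   # between-subjects SS
    ss_item = m * ((x.mean(axis=0) - grand) ** 2).sum()   # between-items SS
    ss_err = ((x - grand) ** 2).sum() - ss_subj - ss_item # residual SS
    ms_subj = ss_subj / (m - 1)
    ms_err = ss_err / ((m - 1) * (n - 1))
    reliability = (ms_subj - ms_err) / ms_subj            # Hoyt reliability
    icc = (ms_subj - ms_err) / (ms_subj + (n - 1) * ms_err)  # single-rater ICC
    return reliability, icc

# Hypothetical scores: 4 subjects rated on 3 items
scores = np.array([[4, 5, 4], [2, 2, 3], [5, 5, 5], [3, 4, 3]], float)
rel, icc = anova_reliability(scores)
print(round(rel, 3), round(icc, 3))
```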

  19. Variance analysis. Part II, The use of computers.

    PubMed

    Finkler, S A

    1991-09-01

    This is the second in a two-part series on variance analysis. In the first article (JONA, July/August 1991), the author discussed flexible budgeting, including the calculation of price, quantity, volume, and acuity variances. In this second article, the author focuses on the use of computers by nurse managers to aid in the process of calculating, understanding, and justifying variances. PMID:1919788

  20. Determining Sample Sizes for Precise Contrast Analysis with Heterogeneous Variances

    ERIC Educational Resources Information Center

    Jan, Show-Li; Shieh, Gwowen

    2014-01-01

    The analysis of variance (ANOVA) is one of the most frequently used statistical analyses in practical applications. Accordingly, the single and multiple comparison procedures are frequently applied to assess the differences among mean effects. However, the underlying assumption of homogeneous variances may not always be tenable. This study…

  1. Cyclostationary analysis with logarithmic variance stabilisation

    NASA Astrophysics Data System (ADS)

    Borghesani, Pietro; Shahriar, Md Rifat

    2016-03-01

    Second order cyclostationary (CS2) components in vibration or acoustic emission signals are typical symptoms of a wide variety of faults in rotating and alternating mechanical systems. The square envelope spectrum (SES), obtained via Hilbert transform of the original signal, is at the basis of the most common indicators used for detection of CS2 components. It has been shown that the SES is equivalent to an autocorrelation of the signal's discrete Fourier transform, and that CS2 components are a cause of high correlations in the frequency domain of the signal, thus resulting in peaks in the SES. Statistical tests have been proposed to determine if peaks in the SES are likely to belong to a normal variability in the signal or if they are proper symptoms of CS2 components. Despite the need for automated fault recognition and the theoretical soundness of these tests, this approach to machine diagnostics has been mostly neglected in industrial applications. In fact, in a series of experimental applications, even with proper pre-whitening steps, it has been found that healthy machines might produce high spectral correlations and therefore result in a highly biased SES distribution which might cause a series of false positives. In this paper a new envelope spectrum is defined, with the theoretical intent of rendering the hypothesis test variance-free. This newly proposed indicator will prove unbiased in case of multiple CS2 sources of spectral correlation, thus reducing the risk of false alarms.
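The SES construction that such indicators build on can be sketched as follows. This is the standard Hilbert-transform route to the squared envelope spectrum, not the paper's variance-stabilised indicator, and the 10 Hz amplitude-modulated test signal is invented for illustration:

```python
import numpy as np

def squared_envelope_spectrum(x, fs):
    """Squared envelope spectrum of a real signal x sampled at fs,
    via the frequency-domain analytic-signal (Hilbert) construction."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)                 # multiplier that zeroes negative frequencies
    h[0] = 1
    h[1:(n + 1) // 2] = 2
    if n % 2 == 0:
        h[n // 2] = 1
    analytic = np.fft.ifft(X * h)
    se = np.abs(analytic) ** 2                      # squared envelope
    ses = np.abs(np.fft.rfft(se - se.mean())) / n   # spectrum, DC removed
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    return freqs, ses

# An amplitude-modulated carrier: the SES should peak at the 10 Hz
# modulation frequency, not at the 200 Hz carrier.
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = (1 + 0.8 * np.cos(2 * np.pi * 10 * t)) * np.cos(2 * np.pi * 200 * t)
freqs, ses = squared_envelope_spectrum(x, fs)
print(freqs[np.argmax(ses)])
```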

  2. Meta-analysis of ratios of sample variances.

    PubMed

    Prendergast, Luke A; Staudte, Robert G

    2016-05-20

    When conducting a meta-analysis of standardized mean differences (SMDs), it is common to use Cohen's d, or its variants, that require equal variances in the two arms of each study. While interpretation of these SMDs is simple, this alone should not be used as a justification for assuming equal variances. Until now, researchers have either used an F-test for each individual study or perhaps even conveniently ignored such tools altogether. In this paper, we propose a meta-analysis of ratios of sample variances to assess whether the equal-variances assumption is justified prior to a meta-analysis of SMDs. Quantile-quantile plots, an omnibus test for equal variances or an overall meta-estimate of the ratio of variances can all be used to formally justify the use of less common methods when evidence of unequal variances is found. The methods in this paper are simple to implement and the validity of the approaches is reinforced by simulation studies and an application to a real data set. Copyright © 2016 John Wiley & Sons, Ltd. PMID:27062644
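An overall meta-estimate of the ratio of variances can be sketched with a standard inverse-variance combination of per-study log variance ratios. The 2/(n-1) large-sample variance of a log sample variance and the study numbers below are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def meta_log_variance_ratio(s1_sq, n1, s2_sq, n2):
    """Inverse-variance weighted pooled estimate of the log ratio of
    sample variances across studies, with an approximate 95% CI."""
    s1_sq, n1 = np.asarray(s1_sq, float), np.asarray(n1, float)
    s2_sq, n2 = np.asarray(s2_sq, float), np.asarray(n2, float)
    log_ratio = np.log(s1_sq / s2_sq)         # per-study effect
    var = 2.0 / (n1 - 1) + 2.0 / (n2 - 1)     # large-sample variance of each effect
    w = 1.0 / var
    est = np.sum(w * log_ratio) / np.sum(w)   # pooled log ratio
    se = np.sqrt(1.0 / np.sum(w))
    return est, (est - 1.96 * se, est + 1.96 * se)

# Hypothetical three-study example: arm variances and sample sizes
est, ci = meta_log_variance_ratio(
    s1_sq=[1.1, 0.9, 1.3], n1=[40, 60, 50],
    s2_sq=[1.0, 1.0, 1.2], n2=[42, 55, 48])
print(round(est, 3), ci)
```

A confidence interval containing zero (as here) offers no evidence against the equal-variances assumption; an interval excluding zero would justify switching to unequal-variance SMD methods.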

  3. Analysis of Variance: What Is Your Statistical Software Actually Doing?

    ERIC Educational Resources Information Center

    Li, Jian; Lomax, Richard G.

    2011-01-01

    Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs: mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…

  4. On variance estimate for covariate adjustment by propensity score analysis.

    PubMed

    Zou, Baiming; Zou, Fei; Shuster, Jonathan J; Tighe, Patrick J; Koch, Gary G; Zhou, Haibo

    2016-09-10

    Propensity score (PS) methods have been used extensively to adjust for confounding factors in the statistical analysis of observational data in comparative effectiveness research. There are four major PS-based adjustment approaches: PS matching, PS stratification, covariate adjustment by PS, and PS-based inverse probability weighting. Though covariate adjustment by PS is one of the most frequently used PS-based methods in clinical research, the conventional variance estimation of the treatment effects estimate under covariate adjustment by PS is biased. As Stampf et al. have shown, this bias in variance estimation is likely to lead to invalid statistical inference and could result in erroneous public health conclusions (e.g., food and drug safety and adverse events surveillance). To address this issue, we propose a two-stage analytic procedure to develop a valid variance estimator for the covariate adjustment by PS analysis strategy. We also carry out a simple empirical bootstrap resampling scheme. Both proposed procedures are implemented in an R function for public use. Extensive simulation results demonstrate the bias in the conventional variance estimator and show that both proposed variance estimators offer valid estimates for the true variance, and they are robust to complex confounding structures. The proposed methods are illustrated for a post-surgery pain study. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26999553

  5. Analysis of Variance Components for Genetic Markers with Unphased Genotypes

    PubMed Central

    Wang, Tao

    2016-01-01

    An ANOVA type general multi-allele (GMA) model was proposed in Wang (2014) on analysis of variance components for quantitative trait loci or genetic markers with phased or unphased genotypes. In this study, by applying the GMA model, we further examine estimation of the genetic variance components for genetic markers with unphased genotypes based on a random sample from a study population. In one locus and two loci cases, we first derive the least square estimates (LSE) of model parameters in fitting the GMA model. Then we construct estimators of the genetic variance components for one marker locus in a Hardy-Weinberg disequilibrium population and two marker loci in an equilibrium population. Meanwhile, we explore the difference between the classical general linear model (GLM) and GMA based approaches in association analysis of genetic markers with quantitative traits. We show that the GMA model can retain the same partition on the genetic variance components as the traditional Fisher's ANOVA model, while the GLM cannot. We clarify that the standard F-statistics based on the partial reductions in sums of squares from GLM for testing the fixed allelic effects could be inadequate for testing the existence of the variance component when allelic interactions are present. We point out that the GMA model can reduce the confounding between the allelic effects and allelic interactions at least for independent alleles. As a result, the GMA model could be more beneficial than GLM for detecting allelic interactions. PMID:27468297

  6. Intuitive Analysis of Variance-- A Formative Assessment Approach

    ERIC Educational Resources Information Center

    Trumpower, David

    2013-01-01

    This article describes an assessment activity that can show students how much they intuitively understand about statistics, but can also alert them to common misunderstandings. How the activity can be used formatively to help improve students' conceptual understanding of analysis of variance is discussed. (Contains 1 figure and 1 table.)

  7. Strength of Relationship in Multivariate Analysis of Variance.

    ERIC Educational Resources Information Center

    Smith, I. Leon

    Methods for the calculation of eta coefficient, or correlation ratio, squared have recently been presented for examining the strength of relationship in univariate analysis of variance. This paper extends them to the multivariate case in which the effects of independent variables may be examined in relation to two or more dependent variables, and…

  8. Variance estimation for radiation analysis and multi-sensor fusion.

    SciTech Connect

    Mitchell, Dean James

    2010-09-01

    Variance estimates that are used in the analysis of radiation measurements must represent all of the measurement and computational uncertainties in order to obtain accurate parameter and uncertainty estimates. This report describes an approach for estimating components of the variance associated with both statistical and computational uncertainties. A multi-sensor fusion method is presented that renders parameter estimates for one-dimensional source models based on input from different types of sensors. Data obtained with multiple types of sensors improve the accuracy of the parameter estimates, and inconsistencies in measurements are also reflected in the uncertainties for the estimated parameter. Specific analysis examples are presented that incorporate a single gross neutron measurement with gamma-ray spectra that contain thousands of channels. The parameter estimation approach is tolerant of computational errors associated with detector response functions and source model approximations.

  9. Analysis of Variance in the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    Deloach, Richard

    2010-01-01

    This paper is a tutorial introduction to the analysis of variance (ANOVA), intended as a reference for aerospace researchers who are being introduced to the analytical methods of the Modern Design of Experiments (MDOE), or who may have other opportunities to apply this method. One-way and two-way fixed-effects ANOVA, as well as random effects ANOVA, are illustrated in practical terms that will be familiar to most practicing aerospace researchers.
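The one-way fixed-effects partition that such a tutorial covers can be written out directly. A minimal sketch with made-up data, computing the between/within sums of squares and the F ratio:

```python
import numpy as np

def one_way_anova(groups):
    """One-way fixed-effects ANOVA: partition total variation into
    between-group and within-group sums of squares and form F."""
    all_data = np.concatenate([np.asarray(g, float) for g in groups])
    grand = all_data.mean()
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g, float) - np.mean(g)) ** 2).sum()
                    for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_data) - len(groups)
    F = (ss_between / df_between) / (ss_within / df_within)  # ratio of mean squares
    return F, df_between, df_within

# Three hypothetical treatment groups of three measurements each
g1, g2, g3 = [5.1, 4.9, 5.3], [6.2, 6.0, 6.4], [5.0, 5.2, 4.8]
F, df1, df2 = one_way_anova([g1, g2, g3])
print(round(F, 2), df1, df2)
```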

  10. Variance reduction in Monte Carlo analysis of rarefied gas diffusion

    NASA Technical Reports Server (NTRS)

    Perlmutter, M.

    1972-01-01

    The present analysis uses the Monte Carlo method to solve the problem of rarefied diffusion between parallel walls. The diffusing molecules are evaporated or emitted from one of two parallel walls and diffused through another molecular species. The analysis treats the diffusing molecule as undergoing a Markov random walk and the local macroscopic properties are found as the expected value of the random variable, the random walk payoff. By biasing the transition probabilities and changing the collision payoffs the expected Markov walk payoff is retained but its variance is reduced so that the M. C. result has a much smaller error.
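The variance-reduction principle described here, keeping the expected payoff unchanged while shrinking its variance by biasing the sampling distribution and re-weighting the payoff, can be shown in miniature with importance sampling on a tail probability. This is a stand-in for the random-walk payoff in the paper, which is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

def crude_mc(n):
    """Direct estimate of P(X > 5) for X ~ Exp(1): sample the true
    density p and score the indicator payoff."""
    x = rng.exponential(1.0, n)
    return (x > 5.0).astype(float)

def importance_mc(n):
    """Same expectation, but sample a biased density q = Exp(mean 5)
    and re-weight each payoff by the likelihood ratio p(x)/q(x)."""
    x = rng.exponential(5.0, n)
    w = 5.0 * np.exp(-x + x / 5.0)   # p(x)/q(x) = e^{-x} / ((1/5) e^{-x/5})
    return (x > 5.0) * w

n = 100_000
a, b = crude_mc(n), importance_mc(n)
print(a.mean(), b.mean())            # both estimate exp(-5)
print(a.var(), b.var())              # biased sampling cuts the variance
```

Both estimators target the same expectation, but the biased sampler places most of its samples where the payoff is nonzero, so its payoff variance is roughly an order of magnitude smaller.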

  11. Two-dimensional finite-element temperature variance analysis

    NASA Technical Reports Server (NTRS)

    Heuser, J. S.

    1972-01-01

    The finite element method is extended to thermal analysis by forming a variance analysis of temperature results so that the sensitivity of predicted temperatures to uncertainties in input variables is determined. The temperature fields within a finite number of elements are described in terms of the temperatures of vertices and the variational principle is used to minimize the integral equation describing thermal potential energy. A computer calculation yields the desired solution matrix of predicted temperatures and provides information about initial thermal parameters and their associated errors. Sample calculations show that all predicted temperatures are most affected by temperature values along fixed boundaries; more accurate specifications of these temperatures reduce errors in thermal calculations.

  12. FMRI group analysis combining effect estimates and their variances.

    PubMed

    Chen, Gang; Saad, Ziad S; Nath, Audrey R; Beauchamp, Michael S; Cox, Robert W

    2012-03-01

    Conventional functional magnetic resonance imaging (FMRI) group analysis makes two key assumptions that are not always justified. First, the data from each subject is condensed into a single number per voxel, under the assumption that within-subject variance for the effect of interest is the same across all subjects or is negligible relative to the cross-subject variance. Second, it is assumed that all data values are drawn from the same Gaussian distribution with no outliers. We propose an approach that does not make such strong assumptions, and present a computationally efficient frequentist approach to FMRI group analysis, which we term mixed-effects multilevel analysis (MEMA), that incorporates both the variability across subjects and the precision estimate of each effect of interest from individual subject analyses. On average, the more accurate tests result in higher statistical power, especially when conventional variance assumptions do not hold, or in the presence of outliers. In addition, various heterogeneity measures are available with MEMA that may assist the investigator in further improving the modeling. Our method allows group effect t-tests and comparisons among conditions and among groups. In addition, it has the capability to incorporate subject-specific covariates such as age, IQ, or behavioral data. Simulations were performed to illustrate power comparisons and the capability of controlling type I errors among various significance testing methods, and the results indicated that the testing statistic we adopted struck a good balance between power gain and type I error control. Our approach is instantiated in an open-source, freely distributed program that may be used on any dataset stored in the universal neuroimaging file transfer (NIfTI) format. To date, the main impediment for more accurate testing that incorporates both within- and cross-subject variability has been the high computational cost. Our efficient implementation makes this approach
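The core idea of weighting each subject by the precision of its own effect estimate can be sketched in a few lines. This fixed-effects, precision-weighted combination is only a conceptual sketch (it ignores the cross-subject variance component that MEMA also models, and the numbers are hypothetical):

```python
import numpy as np

def precision_weighted_group_effect(beta, var_within):
    """Combine per-subject effect estimates beta, each with its own
    within-subject variance, by inverse-variance weighting instead of
    treating all subjects as equally precise."""
    beta = np.asarray(beta, float)
    var_within = np.asarray(var_within, float)
    w = 1.0 / var_within
    est = np.sum(w * beta) / np.sum(w)   # precision-weighted group estimate
    se = np.sqrt(1.0 / np.sum(w))        # its standard error
    return est, est / se                 # estimate and t-like statistic

beta = [0.8, 1.2, 1.0, 0.9]              # hypothetical per-subject effects
var_within = [0.04, 0.25, 0.04, 0.09]    # their within-subject variances
est, t = precision_weighted_group_effect(beta, var_within)
print(round(est, 3), round(t, 2))
```

Note how the noisy second subject (variance 0.25) is down-weighted, pulling the group estimate toward the more precisely measured subjects.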

  13. Analysis of variance tables based on experimental structure.

    PubMed

    Brien, C J

    1983-03-01

    A stepwise procedure for obtaining the experimental structure for a particular experiment is presented together with rules for deriving the analysis-of-variance table from that structure. The procedure involves the division of the factors into groups and is essentially a generalization of the method of Nelder (1965, Proceedings of the Royal Society, Series A 283, 147-162; 1965, Proceedings of the Royal Society, Series A 283, 163-178), to what are termed 'multi-tiered' experiments. The proposed method is illustrated for a wine-tasting experiment. PMID:6871362

  14. Analysis of variance of thematic mapping experiment data.

    USGS Publications Warehouse

    Rosenfield, G.H.

    1981-01-01

    As an example of the methodology, data from an experiment using three scales of land-use and land-cover mapping have been analyzed. The binomial proportions of correct interpretations have been analyzed untransformed and transformed by both the arcsine and the logit transformations. A weighted analysis of variance adjustment has been used. There is evidence of a significant difference among the three scales of mapping (1:24 000, 1:100 000 and 1:250 000) using the transformed data. Multiple range tests showed that all three scales are different for the arcsine transformed data. - from Author
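The arcsine (angular) transformation used here stabilises the variance of binomial proportions at approximately 1/(4n), independent of the proportion itself, which is what makes a weighted ANOVA on the transformed values reasonable. A small sketch with hypothetical counts of correct interpretations:

```python
import numpy as np

def arcsine_transform(correct, total):
    """Angular transform y = arcsin(sqrt(p)) of binomial proportions,
    with the standard approximate variance 1/(4n) per cell."""
    total = np.asarray(total, float)
    p = np.asarray(correct, float) / total
    y = np.arcsin(np.sqrt(p))        # transformed proportion (radians)
    var = 1.0 / (4.0 * total)        # approx. variance, free of p
    return y, var

# Hypothetical correct-interpretation counts at three map scales
y, var = arcsine_transform(correct=[172, 151, 130], total=[200, 200, 200])
print(np.round(y, 3), var)
```

The inverses of `var` would serve as the weights in the weighted analysis of variance; with equal cell sizes, as here, the weights are equal.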

  15. Variance reduction in Monte Carlo analysis of rarefied gas diffusion.

    NASA Technical Reports Server (NTRS)

    Perlmutter, M.

    1972-01-01

    The problem of rarefied diffusion between parallel walls is solved using the Monte Carlo method. The diffusing molecules are evaporated or emitted from one of the two parallel walls and diffuse through another molecular species. The Monte Carlo analysis treats the diffusing molecule as undergoing a Markov random walk, and the local macroscopic properties are found as the expected value of the random variable, the random walk payoff. By biasing the transition probabilities and changing the collision payoffs, the expected Markov walk payoff is retained but its variance is reduced so that the Monte Carlo result has a much smaller error.

  16. Analysis of variance of an underdetermined geodetic displacement problem

    SciTech Connect

    Darby, D.

    1982-06-01

    It has been suggested recently that point displacements in a free geodetic network traversing a strike-slip fault may be estimated from repeated surveys by minimizing only those displacement components normal to the strike. It is desirable to justify this procedure. We construct, from estimable quantities, a deformation parameter which is an F-statistic of the type occurring in the analysis of variance of linear models not of full rank. A test of its significance provides the criterion to justify the displacement solution. It is also interesting to study its behaviour as one varies the supposed strike of the fault. Justification of a displacement solution using data from a strike-slip fault is found, but not for data from a rift valley. The technique can be generalized to more complex patterns of deformation such as those expected near the end-zone of a fault in a dislocation model.

  17. The use of analysis of variance procedures in biological studies

    USGS Publications Warehouse

    Williams, B.K.

    1987-01-01

    The analysis of variance (ANOVA) is widely used in biological studies, yet there remains considerable confusion among researchers about the interpretation of hypotheses being tested. Ambiguities arise when statistical designs are unbalanced, and in particular when not all combinations of design factors are represented in the data. This paper clarifies the relationship among hypothesis testing, statistical modelling and computing procedures in ANOVA for unbalanced data. A simple two-factor fixed effects design is used to illustrate three common parametrizations for ANOVA models, and some associations among these parametrizations are developed. Biologically meaningful hypotheses for main effects and interactions are given in terms of each parametrization, and procedures for testing the hypotheses are described. The standard statistical computing procedures in ANOVA are given along with their corresponding hypotheses. Throughout the development unbalanced designs are assumed and attention is given to problems that arise with missing cells.

  18. UV Spectral Fingerprinting and Analysis of Variance-Principal Component Analysis: A Tool for Characterizing Sources of Variance in Plant Materials

    Technology Transfer Automated Retrieval System (TEKTRAN)

    UV spectral fingerprints, in combination with analysis of variance-principal components analysis (ANOVA-PCA), were used to identify sources of variance in 7 broccoli samples composed of two cultivars and seven different growing conditions (four levels of Se irrigation, organic farming, and convention...

  19. Developing the Noncentrality Parameter for Calculating Group Sample Sizes in Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2011-01-01

    Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…

  20. Beyond the GUM: variance-based sensitivity analysis in metrology

    NASA Astrophysics Data System (ADS)

    Lira, I.

    2016-07-01

    Variance-based sensitivity analysis is a well established tool for evaluating the contribution of the uncertainties in the inputs to the uncertainty in the output of a general mathematical model. While the literature on this subject is quite extensive, it has not found widespread use in metrological applications. In this article we present a succinct review of the fundamentals of sensitivity analysis, in a form that should be useful to most people familiar with the Guide to the Expression of Uncertainty in Measurement (GUM). Through two examples, it is shown that in linear measurement models, no new knowledge is gained by using sensitivity analysis that is not already available after the terms in the so-called 'law of propagation of uncertainties' have been computed. However, if the model behaves non-linearly in the neighbourhood of the best estimates of the input quantities, and if these quantities are assumed to be statistically independent, sensitivity analysis is definitely advantageous for gaining insight into how they can be ranked according to their importance in establishing the uncertainty of the measurand.
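First-order Sobol indices, the basic quantity of variance-based sensitivity analysis, can be estimated with the pick-and-freeze (Saltelli-type) trick. The sketch below uses a simple additive model with uniform inputs, for which the true indices are a_i^2 / sum(a_j^2); it is an illustration of the general technique, not the article's examples:

```python
import numpy as np

rng = np.random.default_rng(1)

def sobol_first_order(f, d, n=100_000):
    """Monte Carlo estimate of first-order Sobol indices for a model f
    with d independent U(0,1) inputs (pick-and-freeze estimator)."""
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = fA.var()
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]   # x_i taken from B, all other inputs from A
        # f(B) and f(ABi) share only x_i, so this covariance isolates V_i
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# Additive model: true first-order indices are a_i^2 / sum(a_j^2)
a = np.array([4.0, 2.0, 1.0])
model = lambda X: X @ a
S = sobol_first_order(model, d=3)
print(np.round(S, 2))
```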

  1. Allan Cox 1926–1987

    NASA Astrophysics Data System (ADS)

    Coe, Rob; Dalrymple, Brent

    More than 1000 friends, students, and colleagues from all over the country filled Stanford Memorial Chapel (Stanford, Calif.) on February 3, 1987, to join in “A Celebration of the Life of Allan Cox.” Allan died early on the morning of January 27 while bicycling, the sport he had come to love the most. Between pieces of his favorite music by Bach and Mozart, Stanford administrators and colleagues spoke in tribute of Allan's unique qualities as friend, scientist, teacher, and dean of the School of Earth Sciences. James Rosse, Vice President and Provost of Stanford University, struck a particularly resonant chord with his personal remarks: "Allan reached out to each person he knew with the warmth and attention that can only come from deep respect and affection for others. I never heard him speak ill of others, and I do not believe he was capable of doing anything that would harm another being. He cared too much to intrude where he was not wanted, but his curiosity about people and the loving care with which he approached them broke down reserve to create remarkable friendships. His enthusiasm and good humor made him a welcome guest in the hearts of the hundreds of students and colleagues who shared the opportunity of knowing Allan Cox as a person."

  2. Local variance for multi-scale analysis in geomorphometry

    PubMed Central

    Drăguţ, Lucian; Eisank, Clemens; Strasser, Thomas

    2011-01-01

    Increasing availability of high resolution Digital Elevation Models (DEMs) is leading to a paradigm shift regarding scale issues in geomorphometry, prompting new solutions to cope with multi-scale analysis and detection of characteristic scales. We tested the suitability of the local variance (LV) method, originally developed for image analysis, for multi-scale analysis in geomorphometry. The method consists of: 1) up-scaling land-surface parameters derived from a DEM; 2) calculating LV as the average standard deviation (SD) within a 3 × 3 moving window for each scale level; 3) calculating the rate of change of LV (ROC-LV) from one level to another, and 4) plotting values so obtained against scale levels. We interpreted peaks in the ROC-LV graphs as markers of scale levels where cells or segments match types of pattern elements characterized by (relatively) equal degrees of homogeneity. The proposed method has been applied to LiDAR DEMs in two test areas different in terms of roughness: low relief and mountainous, respectively. For each test area, scale levels for slope gradient, plan, and profile curvatures were produced at constant increments with either resampling (cell-based) or image segmentation (object-based). Visual assessment revealed homogeneous areas that convincingly associate into patterns of land-surface parameters well differentiated across scales. We found that the LV method performed better on scale levels generated through segmentation as compared to up-scaling through resampling. The results indicate that coupling multi-scale pattern analysis with delineation of morphometric primitives is possible. This approach could be further used for developing hierarchical classifications of landform elements. PMID:21779138
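Steps 2 and 3 of the method, LV as the average 3 × 3 standard deviation and its rate of change across scale levels, can be sketched in plain NumPy. The grids below are synthetic stand-ins for DEM-derived land-surface parameters:

```python
import numpy as np

def local_variance(grid):
    """LV of a 2-D grid: mean of the standard deviations inside every
    3 x 3 moving window (edge cells ignored in this simple sketch)."""
    r, c = grid.shape
    sds = np.array([[grid[i - 1:i + 2, j - 1:j + 2].std()
                     for j in range(1, c - 1)]
                    for i in range(1, r - 1)])
    return sds.mean()

def roc_lv(lv_values):
    """Percent rate of change of LV from one scale level to the next;
    peaks in this curve mark characteristic scales."""
    lv = np.asarray(lv_values, float)
    return 100.0 * (lv[1:] - lv[:-1]) / lv[:-1]

rng = np.random.default_rng(2)
rough = rng.random((20, 20))                              # high local relief
smooth = 0.01 * np.add.outer(np.arange(20.0), np.arange(20.0))  # gentle ramp
print(local_variance(rough) > local_variance(smooth))
print(roc_lv([1.0, 2.0, 3.0]))
```

In the full method, `local_variance` would be evaluated on each up-scaled (resampled or segmented) level of a land-surface parameter and `roc_lv` plotted against scale level.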

  3. A multi-variance analysis in the time domain

    NASA Technical Reports Server (NTRS)

    Walter, Todd

    1993-01-01

    Recently a new technique for characterizing the noise processes affecting oscillators was introduced. This technique minimizes the difference between the estimates of several different variances and their values as predicted by the standard power law model of noise. The method outlined makes two significant advancements: it uses exclusively time domain variances so that deterministic parameters such as linear frequency drift may be estimated, and it correctly fits the estimates using the chi-square distribution. These changes permit a more accurate fitting at long time intervals where there is the least information. This technique was applied to both simulated and real data with excellent results.

  4. Edgar Allan Poe and neurology.

    PubMed

    Teive, Hélio Afonso Ghizoni; Paola, Luciano de; Munhoz, Renato Puppi

    2014-06-01

    Edgar Allan Poe was one of the most celebrated writers of all time. He published several masterpieces, some of which include references to neurological diseases. Poe suffered from recurrent depression, suggesting a bipolar disorder, as well as alcohol and drug abuse, which in fact led to his death from complications related to alcoholism. Various hypotheses were put forward, including Wernicke's encephalopathy. PMID:24964115

  5. Allan Bloom's Quarrel with History.

    ERIC Educational Resources Information Center

    Thompson, James

    1988-01-01

    Responds to Allan Bloom's "The Closing of the American Mind." Concludes that despite cranky comments about bourgeois culture, the focus of Bloom's attack is on historicism, which undercuts his nostalgic vision of a prosperous and just America. Condemns Bloom's exclusion of Blacks, Hispanics, and women from America's cultural heritage. (DMM)

  6. Analysis and application of minimum variance discrete time system identification

    NASA Technical Reports Server (NTRS)

    Kaufman, H.; Kotob, S.

    1975-01-01

    An on-line minimum variance parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.

  7. A new variance-based global sensitivity analysis technique

    NASA Astrophysics Data System (ADS)

    Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen

    2013-11-01

    A new set of variance-based sensitivity indices, called W-indices, is proposed. Similar to Sobol's indices, both main and total effect indices are defined. The W-main effect indices measure the average reduction of model output variance when the ranges of a set of inputs are reduced, and the total effect indices quantify the average residual variance when the ranges of the remaining inputs are reduced. Geometrical interpretations show that the W-indices gather the full information of the variance ratio function, whereas Sobol's indices reflect only the marginal information. Then the double-loop-repeated-set Monte Carlo (MC) (denoted as DLRS MC) procedure, the double-loop-single-set MC (denoted as DLSS MC) procedure and the model emulation procedure are introduced for estimating the W-indices. It is shown that the DLRS MC procedure is suitable for computing all the W-indices despite its high computational cost. The DLSS MC procedure is computationally efficient; however, it is applicable only for computing low-order indices. The model emulation procedure can estimate all the W-indices with low computational cost as long as the model behavior is correctly captured by the emulator. The Ishigami function, a modified Sobol's function and two engineering models are utilized for comparing the W- and Sobol's indices and verifying the efficiency and convergence of the three numerical methods. Results show that, even for an additive model, the W-total effect index of one input may be significantly larger than its W-main effect index. This indicates that there may exist interaction effects among the inputs of an additive model when their distribution ranges are reduced.
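    For context on the benchmark used above, the classical Sobol' first-order index against which the W-indices are compared can be estimated with a brute-force double-loop Monte Carlo on the Ishigami function. This is a sketch under stated assumptions: the function and sample sizes are the usual textbook choices, and the helper names are hypothetical, not the paper's code.

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    # standard three-input sensitivity-analysis test function
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
            + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

def sobol_main_effect(i, n_outer=2000, n_inner=2000, seed=0):
    """Double-loop MC estimate of S_i = Var(E[Y | X_i]) / Var(Y)."""
    rng = np.random.default_rng(seed)
    lo, hi = -np.pi, np.pi
    cond_means = np.empty(n_outer)
    for k, v in enumerate(rng.uniform(lo, hi, n_outer)):
        x = rng.uniform(lo, hi, (n_inner, 3))
        x[:, i] = v                       # fix X_i, average over the rest
        cond_means[k] = ishigami(x).mean()
    total_var = ishigami(rng.uniform(lo, hi, (100000, 3))).var()
    return cond_means.var() / total_var
```

    For the default Ishigami parameters the analytical first-order indices are roughly 0.31, 0.44 and 0 for the three inputs, so the double-loop estimate can be checked against known values before trusting it on an unknown model.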

  8. Partitioning Predicted Variance into Constituent Parts: A Primer on Regression Commonality Analysis.

    ERIC Educational Resources Information Center

    Amado, Alfred J.

    Commonality analysis is a method of decomposing the R squared in a multiple regression analysis into the proportion of explained variance of the dependent variable associated with each independent variable uniquely and the proportion of explained variance associated with the common effects of one or more independent variables in various…

  9. Analysis and application of minimum variance discrete time system identification

    NASA Technical Reports Server (NTRS)

    Kotob, S.; Kaufman, H.

    1976-01-01

    An on-line minimum variance parameter identifier was developed which embodies both accuracy and computational efficiency. The new formulation resulted in a linear estimation problem with both additive and multiplicative noise. The resulting filter is shown to utilize both the covariance of the parameter vector itself and the covariance of the error in identification. It is proven that the identification filter is mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.

  10. Analysis and application of minimum variance discrete linear system identification

    NASA Technical Reports Server (NTRS)

    Kotob, S.; Kaufman, H.

    1977-01-01

    An on-line minimum variance (MV) parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise (AMN). The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean-square convergent and mean-square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.

  11. The Queensland "New Basics": An Interview with Allan Luke.

    ERIC Educational Resources Information Center

    Hunter, Lisa

    2001-01-01

    Presents an interview with Allan Luke, current editor of "The Journal of Adolescent and Adult Literacy," and Deputy Director General of Education for Queensland. Discusses several reform projects--Education 2010 (a futures-oriented analysis and philosophy for Queensland Schools); The New Basics (a new curriculum/pedagogy/assessment framework);…

  12. An Analysis of Variance Approach for the Estimation of Response Time Distributions in Tests

    ERIC Educational Resources Information Center

    Attali, Yigal

    2010-01-01

    Generalizability theory and analysis of variance methods are employed, together with the concept of objective time pressure, to estimate response time distributions and the degree of time pressure in timed tests. By estimating response time variance components due to person, item, and their interaction, and fixed effects due to item types and…

  13. Meta-analysis with missing study-level sample variance data.

    PubMed

    Chowdhry, Amit K; Dworkin, Robert H; McDermott, Michael P

    2016-07-30

    We consider a study-level meta-analysis with a normally distributed outcome variable and possibly unequal study-level variances, where the object of inference is the difference in means between a treatment and control group. A common complication in such an analysis is missing sample variances for some studies. A frequently used approach is to impute the weighted (by sample size) mean of the observed variances (mean imputation). Another approach is to include only those studies with variances reported (complete case analysis). Both mean imputation and complete case analysis are only valid under the missing-completely-at-random assumption, and even then the inverse variance weights produced are not necessarily optimal. We propose a multiple imputation method employing gamma meta-regression to impute the missing sample variances. Our method takes advantage of study-level covariates that may be used to provide information about the missing data. Through simulation studies, we show that multiple imputation, when the imputation model is correctly specified, is superior to competing methods in terms of confidence interval coverage probability and type I error probability when testing a specified group difference. Finally, we describe a similar approach to handling missing variances in cross-over studies. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26888093

  14. On the Relations among Regular, Equal Unique Variances, and Image Factor Analysis Models.

    ERIC Educational Resources Information Center

    Hayashi, Kentaro; Bentler, Peter M.

    2000-01-01

    Investigated the conditions under which the matrix of factor loadings from the factor analysis model with equal unique variances will give a good approximation to the matrix of factor loadings from the regular factor analysis model. Extends the results to the image factor analysis model. Discusses implications for practice. (SLD)

  15. A Bayesian Solution for Two-Way Analysis of Variance. ACT Technical Bulletin No. 8.

    ERIC Educational Resources Information Center

    Lindley, Dennis V.

    The standard statistical analysis of data classified in two ways (say into rows and columns) is through an analysis of variance that splits the total variation of the data into the main effect of rows, the main effect of columns, and the interaction between rows and columns. This paper presents an alternative Bayesian analysis of the same…

  16. The Parabolic Variance (PVAR): A Wavelet Variance Based on the Least-Square Fit.

    PubMed

    Vernotte, Francois; Lenczner, Michel; Bourgeois, Pierre-Yves; Rubiola, Enrico

    2016-04-01

    This paper introduces the parabolic variance (PVAR), a wavelet variance similar to the Allan variance (AVAR), based on the linear regression (LR) of phase data. The companion article arXiv:1506.05009 [physics.ins-det] details the Ω frequency counter, which implements the LR estimate. The PVAR combines the advantages of AVAR and modified AVAR (MVAR). PVAR is good for long-term analysis because the wavelet spans over 2τ, the same as the AVAR wavelet, and good for short-term analysis because the response to white and flicker PM is 1/τ³ and 1/τ², the same as the MVAR. After setting the theoretical framework, we study the degrees of freedom and the confidence interval for the most common noise types. Then, we focus on the detection of a weak noise process at the transition-or corner-where a faster process rolls off. This new perspective raises the question of which variance detects the weak process with the shortest data record. Our simulations show that PVAR is a fortunate tradeoff. PVAR is superior to MVAR in all cases, exhibits the best ability to divide between fast noise phenomena (up to flicker FM), and is almost as good as AVAR for the detection of random walk and drift. PMID:26571523

  17. Resampling analysis of participant variance to improve the efficiency of sensor modeling perception experiments

    NASA Astrophysics Data System (ADS)

    O'Connor, John D.; Hixson, Jonathan; McKnight, Patrick; Peterson, Matthew S.; Parasuraman, Raja

    2010-04-01

    Night Vision and Electronic Sensors Directorate (NVESD) Modeling and Simulation Division (MSD) sensor models, such as NV Therm IP, are developed through perception experiments that investigate phenomena associated with sensor performance (e.g. sampling, noise, sensitivity). A standardized laboratory perception testing method developed in the mid-1990s has been responsible for advances in sensor modeling that are supported by field sensor performance experiments. The number of participants required to yield dependable results for these experiments could not be estimated because the variance in performance due to participant differences was not known. NVESD and George Mason University (GMU) scientists measured the contribution of participant variance within the overall experimental variance for 22 individuals each exposed to 1008 stimuli. Results of the analysis indicate that the total participant contribution to overall experimental variance was between 1% and 2%.

  18. Edgar Allan Poe's Physical Cosmology

    NASA Astrophysics Data System (ADS)

    Cappi, Alberto

    1994-06-01

    In this paper I describe the scientific content of Eureka, the prose poem written by Edgar Allan Poe in 1848. In that work, starting from metaphysical assumptions, Poe claims that the Universe is finite in an infinite Space, and that it originated from a primordial Particle, whose fragmentation under the action of a repulsive force caused a diffusion of atoms in space. I will show that his subsequently collapsing universe represents a scientifically acceptable Newtonian model. In the framework of his evolving universe, Poe makes use of contemporary astronomical knowledge, deriving modern concepts such as a primordial atomic state of the universe and a common epoch of galaxy formation. Harrison found in Eureka the first, qualitative solution of Olbers' paradox; I show that Poe also applies in a modern way the anthropic principle, trying to explain why the Universe is so large.

  19. UV Spectral Fingerprinting and Analysis of Variance-Principal Component Analysis: a Useful Tool for Characterizing Sources of Variance in Plant Materials

    PubMed Central

    Luthria, Devanand L.; Mukhopadhyay, Sudarsan; Robbins, Rebecca J.; Finley, John W.; Banuelos, Gary S.; Harnly, James M.

    2013-01-01

    UV spectral fingerprints, in combination with analysis of variance-principal components analysis (ANOVA-PCA), can differentiate between cultivars and growing conditions (or treatments) and can be used to identify sources of variance. Broccoli samples, composed of two cultivars, were grown under seven different conditions or treatments (four levels of Se-enriched irrigation waters, organic farming, and conventional farming with 100 and 80% irrigation based on crop evaporation and transpiration rate). Freeze-dried powdered samples were extracted with methanol–water (60:40, v/v) and analyzed with no prior separation. Spectral fingerprints were acquired for the UV region (220–380 nm) using a 50-fold dilution of the extract. ANOVA-PCA was used to construct subset matrices that permitted easy verification of the hypothesis that cultivar and treatment contributed to a difference in the chemical expression of the broccoli. The sums of the squares of the same matrices were used to show that cultivar, treatment, and analytical repeatability contributed 30.5, 68.3, and 1.2% of the variance, respectively. PMID:18572954

  20. A Monte Carlo Investigation of the Analysis of Variance Applied to Non-Independent Bernoulli Variates.

    ERIC Educational Resources Information Center

    Draper, John F., Jr.

    The applicability of the Analysis of Variance, ANOVA, procedures to the analysis of dichotomous repeated measure data is described. The design models for which data were simulated in this investigation were chosen to represent simple cases of two experimental situations: situation one, in which subjects' responses to a single randomly selected set…

  1. Analysis of Variance with Summary Statistics in Microsoft® Excel®

    ERIC Educational Resources Information Center

    Larson, David A.; Hsu, Ko-Cheng

    2010-01-01

    Students regularly are asked to solve Single Factor Analysis of Variance problems given only the sample summary statistics (number of observations per category, category means, and corresponding category standard deviations). Most undergraduate students today use Excel for data analysis of this type. However, Excel, like all other statistical…
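    The same computation is straightforward outside a spreadsheet. A minimal sketch (hypothetical helper name) of the one-way ANOVA F statistic from summary statistics alone, i.e. per-group sizes, means, and standard deviations, is:

```python
def anova_from_summary(ns, means, sds):
    """One-way ANOVA F statistic from per-group n, mean, and standard deviation."""
    k, N = len(ns), sum(ns)
    grand = sum(n * m for n, m in zip(ns, means)) / N
    ss_between = sum(n * (m - grand) ** 2 for n, m in zip(ns, means))
    ss_within = sum((n - 1) * s ** 2 for n, s in zip(ns, sds))
    df1, df2 = k - 1, N - k
    return (ss_between / df1) / (ss_within / df2), df1, df2
```

    With three groups of 5 observations, means 2, 4, 6 and unit standard deviations, this gives F = 20 on (2, 12) degrees of freedom; the p-value would then come from the F distribution with those degrees of freedom.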

  2. Decomposing genomic variance using information from GWA, GWE and eQTL analysis.

    PubMed

    Ehsani, A; Janss, L; Pomp, D; Sørensen, P

    2016-04-01

    A commonly used procedure in genome-wide association (GWA), genome-wide expression (GWE) and expression quantitative trait locus (eQTL) analyses is based on a bottom-up experimental approach that attempts to individually associate molecular variants with complex traits. Top-down modeling of the entire set of genomic data and partitioning of the overall variance into subcomponents may provide further insight into the genetic basis of complex traits. To test this approach, we performed a whole-genome variance components analysis and partitioned the genomic variance using information from GWA, GWE and eQTL analyses of growth-related traits in a mouse F2 population. We characterized the mouse trait genetic architecture by ordering single nucleotide polymorphisms (SNPs) based on their P-values and studying the areas under the curve (AUCs). The observed traits were found to have a genomic variance profile that differed significantly from that expected of a trait under an infinitesimal model. This situation was particularly true for both body weight and body fat, for which the AUCs were much higher compared with that of glucose. In addition, SNPs with a high degree of trait-specific regulatory potential (SNPs associated with subset of transcripts that significantly associated with a specific trait) explained a larger proportion of the genomic variance than did SNPs with high overall regulatory potential (SNPs associated with transcripts using traditional eQTL analysis). We introduced AUC measures of genomic variance profiles that can be used to quantify relative importance of SNPs as well as degree of deviation of a trait's inheritance from an infinitesimal model. The shape of the curve aids global understanding of traits: The steeper the left-hand side of the curve, the fewer the number of SNPs controlling most of the phenotypic variance. PMID:26678352

  3. Allan deviation computations of a linear frequency synthesizer system using frequency domain techniques

    NASA Technical Reports Server (NTRS)

    Wu, Andy

    1995-01-01

    Allan Deviation computations of linear frequency synthesizer systems have been reported previously using real-time simulations. Even though it takes less time compared with the actual measurement, it is still very time consuming to compute the Allan Deviation for long sample times with the desired confidence level. Moreover, noise processes such as flicker phase noise and flicker frequency noise cannot be simulated precisely. The use of frequency domain techniques can overcome these drawbacks. In this paper the system error model of a fictitious linear frequency synthesizer is developed and its performance using a Cesium (Cs) atomic frequency standard (AFS) as a reference is evaluated using frequency domain techniques. For a linear timing system, the power spectral density at the system output can be computed with known system transfer functions and known power spectral densities from the input noise sources. The resulting power spectral density can then be used to compute the Allan Variance at the system output. Sensitivities of the Allan Variance at the system output to each of its independent input noises are obtained, and they are valuable for design trade-off and troubleshooting.
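    The final step described, converting an output power spectral density into an Allan variance, can be sketched numerically. Assuming a one-sided PSD of fractional frequency S_y(f), the standard transfer-function weighting is 2·sin⁴(πfτ)/(πfτ)²; the helper below (hypothetical, not the paper's code) integrates it by a midpoint rule:

```python
import numpy as np

def avar_from_psd(S_y, tau, f_max, n=200000):
    """Allan variance at averaging time tau from a one-sided PSD S_y(f)
    of fractional frequency, by numerical integration (midpoint rule)."""
    f = (np.arange(n) + 0.5) * (f_max / n)       # midpoints, avoiding f = 0
    weight = 2.0 * np.sin(np.pi * f * tau) ** 4 / (np.pi * f * tau) ** 2
    return np.sum(S_y(f) * weight) * (f_max / n)
```

    For white FM noise, S_y(f) = h0, this reproduces the textbook result σ²_y(τ) = h0/(2τ) once f_max·τ ≫ 1, which makes a convenient sanity check before applying it to a synthesized system PSD.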

  4. Cost-variance analysis by DRGs; a technique for clinical budget analysis.

    PubMed

    Voss, G B; Limpens, P G; Brans-Brabant, L J; van Ooij, A

    1997-02-01

    In this article it is shown how a cost accounting system based on DRGs can be valuable in determining changes in clinical practice and explaining alterations in expenditure patterns from one period to another. A cost-variance analysis is performed using data from the orthopedic department from the fiscal years 1993 and 1994. Differences between predicted and observed cost for medical care, such as diagnostic procedures, therapeutic procedures and nursing care are analyzed into different components: changes in patient volume, case-mix differences, changes in resource use and variations in cost per procedure. Using a DRG cost accounting system proved to be a useful technique for clinical budget analysis. Results may stimulate discussions between hospital managers and medical professionals to explain cost variations integrating medical and economic aspects of clinical health care. PMID:10165044

  5. Teaching Principles of One-Way Analysis of Variance Using M&M's Candy

    ERIC Educational Resources Information Center

    Schwartz, Todd A.

    2013-01-01

    I present an active learning classroom exercise illustrating essential principles of one-way analysis of variance (ANOVA) methods. The exercise is easily conducted by the instructor and is instructive (as well as enjoyable) for the students. This is conducive for demonstrating many theoretical and practical issues related to ANOVA and lends itself…

  6. A Primer on Multivariate Analysis of Variance (MANOVA) for Behavioral Scientists

    ERIC Educational Resources Information Center

    Warne, Russell T.

    2014-01-01

    Reviews of statistical procedures (e.g., Bangert & Baumberger, 2005; Kieffer, Reese, & Thompson, 2001; Warne, Lazo, Ramos, & Ritter, 2012) show that one of the most common multivariate statistical methods in psychological research is multivariate analysis of variance (MANOVA). However, MANOVA and its associated procedures are often not…

  7. Weighting by Inverse Variance or by Sample Size in Random-Effects Meta-Analysis

    ERIC Educational Resources Information Center

    Marin-Martinez, Fulgencio; Sanchez-Meca, Julio

    2010-01-01

    Most of the statistical procedures in meta-analysis are based on the estimation of average effect sizes from a set of primary studies. The optimal weight for averaging a set of independent effect sizes is the inverse variance of each effect size, but in practice these weights have to be estimated, being affected by sampling error. When assuming a…
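    As a baseline for the weighting comparison above, the inverse-variance pooled estimate can be sketched as follows (fixed-effect form; a random-effects version would add an estimated between-study variance τ² to each within-study variance before weighting). The function name is a hypothetical helper:

```python
import math

def inverse_variance_pool(effects, variances):
    """Inverse-variance weighted pooled effect size and its standard error."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se
```

    Weighting by sample size instead would simply replace `weights` with the per-study n's; the abstract's point is that the two choices behave differently once the variances themselves are estimated with error.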

  8. Estimating an Effect Size in One-Way Multivariate Analysis of Variance (MANOVA)

    ERIC Educational Resources Information Center

    Steyn, H. S., Jr.; Ellis, S. M.

    2009-01-01

    When two or more univariate population means are compared, the proportion of variation in the dependent variable accounted for by population group membership is eta-squared. This effect size can be generalized by using multivariate measures of association, based on the multivariate analysis of variance (MANOVA) statistics, to establish whether…

  9. A Note on Noncentrality Parameters for Contrast Tests in a One-Way Analysis of Variance

    ERIC Educational Resources Information Center

    Liu, Xiaofeng Steven

    2010-01-01

    The noncentrality parameter for a contrast test in a one-way analysis of variance is based on the dot product of 2 vectors whose geometric meaning in a Euclidian space offers mnemonic hints about its constituents. Additionally, the noncentrality parameters for a set of orthogonal contrasts sum up to the noncentrality parameter for the omnibus "F"…

  10. Missing Data and Multiple Imputation in the Context of Multivariate Analysis of Variance

    ERIC Educational Resources Information Center

    Finch, W. Holmes

    2016-01-01

    Multivariate analysis of variance (MANOVA) is widely used in educational research to compare means on multiple dependent variables across groups. Researchers faced with the problem of missing data often use multiple imputation of values in place of the missing observations. This study compares the performance of 2 methods for combining p values in…

  11. A Demonstration of the Analysis of Variance Using Physical Movement and Space

    ERIC Educational Resources Information Center

    Owen, William J.; Siakaluk, Paul D.

    2011-01-01

    Classroom demonstrations help students better understand challenging concepts. This article introduces an activity that demonstrates the basic concepts involved in analysis of variance (ANOVA). Students who physically participated in the activity had a better understanding of ANOVA concepts (i.e., higher scores on an exam question answered 2…

  12. Uncertainty analysis for 3D geological modeling using the Kriging variance

    NASA Astrophysics Data System (ADS)

    Choi, Yosoon; Choi, Younjung; Park, Sebeom; Um, Jeong-Gi

    2014-05-01

    The credible estimation of geological properties is critical in many geoscience fields, including geotechnical engineering, environmental engineering, mining engineering and petroleum engineering. Many interpolation techniques have been developed to estimate geological properties from limited sampling data such as borehole logs. Kriging is an interpolation technique that gives the best linear unbiased prediction of intermediate values. It also provides the Kriging variance, which quantifies the uncertainty of the Kriging estimates. This study provides a new method to analyze the uncertainty in 3D geological modeling using the Kriging variance. The cut-off values determined by the Kriging variance were used to effectively visualize the 3D geological models with different confidence levels. This presentation describes the method for uncertainty analysis and a case study which evaluates the amount of recoverable resources by considering the uncertainty.
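    A minimal 1D sketch shows where the Kriging variance comes from. Assuming simple kriging (known zero mean) with an exponential covariance model, and with all names and parameters hypothetical illustrations rather than the study's actual model:

```python
import numpy as np

def simple_kriging(x_obs, z_obs, x0, sill=1.0, corr_len=10.0):
    """Simple-kriging estimate and kriging variance at x0 (1D, zero mean
    assumed, exponential covariance C(h) = sill * exp(-|h| / corr_len))."""
    cov = lambda h: sill * np.exp(-np.abs(h) / corr_len)
    C = cov(x_obs[:, None] - x_obs[None, :])   # data-data covariance matrix
    c0 = cov(x_obs - x0)                       # data-target covariances
    lam = np.linalg.solve(C, c0)               # kriging weights
    estimate = float(lam @ z_obs)
    variance = float(sill - lam @ c0)          # sampling geometry only
    return estimate, variance
```

    Note that the variance term depends only on where the data are, not on the observed values; that is what makes it usable as a confidence cut-off when visualizing a 3D model.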

  13. Toward a more robust variance-based global sensitivity analysis of model outputs

    SciTech Connect

    Tong, C

    2007-10-15

    Global sensitivity analysis (GSA) measures the variation of a model output as a function of the variations of the model inputs given their ranges. In this paper we consider variance-based GSA methods that do not rely on certain assumptions about the model structure such as linearity or monotonicity. These variance-based methods decompose the output variance into terms of increasing dimensionality called 'sensitivity indices', first introduced by Sobol' [25]. Sobol' developed a method of estimating these sensitivity indices using Monte Carlo simulations. McKay [13] proposed an efficient method using replicated Latin hypercube sampling to compute the 'correlation ratios' or 'main effects', which have been shown to be equivalent to Sobol's first-order sensitivity indices. Practical issues with using these variance estimators are how to choose adequate sample sizes and how to assess the accuracy of the results. This paper proposes a modified McKay main effect method featuring an adaptive procedure for accuracy assessment and improvement. We also extend our adaptive technique to the computation of second-order sensitivity indices. Details of the proposed adaptive procedure as well as numerical results are included in this paper.

  14. The Importance of Variance in Statistical Analysis: Don't Throw Out the Baby with the Bathwater.

    ERIC Educational Resources Information Center

    Peet, Martha W.

    This paper analyzes what happens to the effect size of a given dataset when the variance is removed by categorization for the purpose of applying "OVA" methods (analysis of variance, analysis of covariance). The dataset is from a classic study by Holzinger and Swineford (1939) in which more than 20 ability tests were administered to 301 middle…

  15. Analysis of variance and Westlake's test of bioavailability data using a programmable minicalculator.

    PubMed

    Gouyette, A

    1984-01-01

    A program for the HP-41 CV calculator with adapted printer is described for the analysis of variance of bioavailability data based upon the areas under the curve measured during a two-way cross-over pharmacokinetic study of two different drug formulations. The program can also perform the test of Westlake to compute the 95% confidence interval and determine if both formulations are bioequivalent. PMID:6735510

  16. Non-destructive X-ray Computed Tomography (XCT) Analysis of Sediment Variance in Marine Cores

    NASA Astrophysics Data System (ADS)

    Oti, E.; Polyak, L. V.; Dipre, G.; Sawyer, D.; Cook, A.

    2015-12-01

    Benthic activity within marine sediments can alter the physical properties of the sediment as well as indicate nutrient flux and ocean temperatures. We examine burrowing features in sediment cores from the western Arctic Ocean collected during the 2005 Healy-Oden TransArctic Expedition (HOTRAX) and from the Gulf of Mexico Integrated Ocean Drilling Program (IODP) Expedition 308. While traditional methods for studying bioturbation require physical dissection of the cores, we assess burrowing using an X-ray computed tomography (XCT) scanner. XCT noninvasively images the sediment cores in three dimensions and produces density sensitive images suitable for quantitative analysis. XCT units are recorded as Hounsfield Units (HU), where -999 is air, 0 is water, and 4000-5000 would be a higher density mineral, such as pyrite. We rely on the fundamental assumption that sediments are deposited horizontally, and we analyze the variance over each flat-lying slice. The variance describes the spread of pixel values over a slice. When sediments are reworked, drawing higher and lower density matrix into a layer, the variance increases. Examples of this can be seen in two slices in core 19H-3A from Site U1324 of IODP Expedition 308. The first slice, located 165.6 meters below sea floor, consists of relatively undisturbed sediment. Because of this, the majority of the sediment values fall between 1406 and 1497 HU, thus giving the slice a comparatively small variance of 819.7. The second slice, located 166.1 meters below sea floor, features a lower density sediment matrix disturbed by burrow tubes and the inclusion of a high density mineral. As a result, the Hounsfield Units have a larger variance of 1,197.5, reflecting sediment matrix values that range from 1220 to 1260 HU, the high-density mineral value of 1920 HU and the burrow tubes that range from 1300 to 1410 HU. Analyzing this variance allows us to observe changes in the sediment matrix and more specifically capture
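    The per-slice variance computation described is a one-liner with NumPy. In this sketch the depth axis is assumed to be axis 0 of the volume array (an assumption about data layout, not stated in the abstract):

```python
import numpy as np

def slice_variance(volume):
    """Variance of voxel intensities (e.g., Hounsfield Units) within each
    flat-lying slice of a 3D volume; axis 0 is assumed to be depth.
    Reworked (bioturbated) layers show up as high-variance slices."""
    return volume.reshape(volume.shape[0], -1).var(axis=1)
```

    Plotting this profile against depth then highlights reworked intervals without dissecting the core.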

  17. The Efficiency of Split Panel Designs in an Analysis of Variance Model

    PubMed Central

    Wang, Wei-Guo; Liu, Hai-Jun

    2016-01-01

    We consider split panel design efficiency in analysis of variance models, that is, the determination of the optimal proportion of cross-section series in the full sample, to minimize the variances of parametric best linear unbiased estimators of linear combinations. An orthogonal matrix is constructed to obtain a manageable expression of the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of interest and budget parameters. Additionally, the relative efficiency of an estimator based on the split panel to an estimator based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from split panel can be quite substantial. We further consider the efficiency of split panel design, given a budget, and transform it to a constrained nonlinear integer programming. Specifically, an efficient algorithm is designed to solve the constrained nonlinear integer programming. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm’s efficiency with an empirical example concerning monthly consumer expenditure on food in 1985, in the Netherlands, and the efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447

  18. The Efficiency of Split Panel Designs in an Analysis of Variance Model.

    PubMed

    Liu, Xin; Wang, Wei-Guo; Liu, Hai-Jun

    2016-01-01

    We consider split panel design efficiency in analysis of variance models, that is, the determination of the optimal proportion of cross-section series in the full sample, to minimize the variances of parametric best linear unbiased estimators of linear combinations. An orthogonal matrix is constructed to obtain a manageable expression of the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of interest and budget parameters. Additionally, the relative efficiency of an estimator based on the split panel to an estimator based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from split panel can be quite substantial. We further consider the efficiency of split panel design, given a budget, and transform it to a constrained nonlinear integer programming. Specifically, an efficient algorithm is designed to solve the constrained nonlinear integer programming. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm's efficiency with an empirical example concerning monthly consumer expenditure on food in 1985, in the Netherlands, and the efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447

  19. Applying the Generalized Waring model for investigating sources of variance in motor vehicle crash analysis.

    PubMed

    Peng, Yichuan; Lord, Dominique; Zou, Yajie

    2014-12-01

    As one of the major analysis methods, statistical models play an important role in traffic safety analysis. They can be used for a wide variety of purposes, including establishing relationships between variables and understanding the characteristics of a system. The purpose of this paper is to document a new type of model that can help with the latter. This model is based on the Generalized Waring (GW) distribution. The GW model yields more information about the sources of the variance observed in datasets than other traditional models, such as the negative binomial (NB) model. In this regard, the GW model can separate the observed variability into three parts: (1) the randomness, which explains the model's uncertainty; (2) the proneness, which refers to the internal differences between entities or observations; and (3) the liability, which is defined as the variance caused by other external factors that are difficult to identify and have not been included as explanatory variables in the model. The study analyses were accomplished using two observed datasets to explore potential sources of variation. The results show that the GW model can provide meaningful information about sources of variance in crash data and also performs better than the NB model. PMID:25173723

  20. Discriminating between cultivars and treatments of broccoli using mass spectral fingerprinting and analysis of variance-principal component analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Metabolite fingerprints, obtained with direct injection mass spectrometry (MS) with both positive and negative ionization, were used with analysis of variance-principal components analysis (ANOVA-PCA) to discriminate between cultivars and growing treatments of broccoli. The sample set consisted of ...

  1. Application of Allan Deviation to Assessing Uncertainties of Continuous-measurement Instruments, and Optimizing Calibration Schemes

    NASA Astrophysics Data System (ADS)

    Jacobson, Gloria; Rella, Chris; Farinas, Alejandro

    2014-05-01

    Technological advancement of instrumentation in atmospheric and other geoscience disciplines over the past decade has led to a shift from discrete sample analysis to continuous, in-situ monitoring. Standard error analysis used for discrete measurements is not sufficient to assess and compare the error contribution of noise and drift from continuous-measurement instruments, and a different statistical analysis approach should be applied. The Allan standard deviation analysis technique developed for atomic clock stability assessment by David W. Allan [1] can be effectively and gainfully applied to continuous-measurement instruments. As an example, P. Werle et al. have applied these techniques to signal averaging for atmospheric monitoring by Tunable Diode-Laser Absorption Spectroscopy (TDLAS) [2]. This presentation will build on, and translate, prior foundational publications to provide contextual definitions and guidelines for the practical application of this analysis technique to continuous scientific measurements. The specific example of a Picarro G2401 Cavity Ringdown Spectroscopy (CRDS) analyzer used for continuous atmospheric monitoring of CO2, CH4 and CO will be used to define the basic features of the Allan deviation, assess factors affecting the analysis, and explore the time-series to Allan deviation plot translation for different types of instrument noise (white noise, linear drift, and interpolated data). In addition, the use of the Allan deviation to optimize and predict the performance of different calibration schemes will be presented. Even though this presentation uses the specific example of the Picarro G2401 CRDS Analyzer for atmospheric monitoring, the objective is to present the information such that it can be successfully applied to other instrument sets and disciplines. [1] D.W. Allan, "Statistics of Atomic Frequency Standards," Proc. IEEE, vol. 54, pp. 221-230, Feb 1966 [2] P. Werle, R. Mücke, F. Slemr, "The Limits
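    The overlapping Allan deviation described above is straightforward to compute directly from a uniformly sampled time series. The sketch below is a minimal NumPy implementation, not the analysis code used for the Picarro G2401; the function name, sampling rate, and averaging times are illustrative.

```python
import numpy as np

def allan_deviation(y, rate, taus):
    """Overlapping Allan deviation of a time series `y` sampled at `rate` Hz.

    For each averaging time tau (an integer multiple m of the sample period),
    sigma^2(tau) = 0.5 * <(ybar_{i+m} - ybar_i)^2>, where ybar are
    contiguous m-sample averages.
    """
    y = np.asarray(y, dtype=float)
    out = []
    for tau in taus:
        m = int(round(tau * rate))          # samples per averaging bin
        if m < 1 or 2 * m >= y.size:
            out.append(np.nan)
            continue
        # all m-sample running means via a cumulative sum (overlapping estimator)
        c = np.concatenate(([0.0], np.cumsum(y)))
        ybar = (c[m:] - c[:-m]) / m
        d = ybar[m:] - ybar[:-m]            # differences of adjacent bin means
        out.append(np.sqrt(0.5 * np.mean(d ** 2)))
    return np.array(out)

# White measurement noise averages down as sigma(tau) ~ tau^(-1/2)
rng = np.random.default_rng(0)
y = rng.normal(size=100_000)
taus = [1.0, 4.0, 16.0]
adev = allan_deviation(y, rate=1.0, taus=taus)
```

    For pure white noise the computed deviation falls off as tau^(-1/2); the point on an Allan deviation plot where this decline flattens or reverses is what signals that drift, rather than noise, limits further averaging.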

  2. A further analysis for the minimum-variance deconvolution filter performance

    NASA Technical Reports Server (NTRS)

    Chi, Chong-Yung

    1987-01-01

    Chi and Mendel (1984) analyzed the performance of minimum-variance deconvolution (MVD). In this correspondence, a further analysis of the performance of the MVD filter is presented. It is shown that the MVD filter performs like an inverse filter and a whitening filter as SNR goes to infinity, and like a matched filter as SNR goes to zero. The estimation error of the MVD filter is colored noise, but it becomes white when SNR goes to zero. This analysis also connects the error power-spectral density of the MVD filter with the spectrum of the causal-prediction error filter.

  3. A further analysis for the minimum-variance deconvolution filter performance

    NASA Astrophysics Data System (ADS)

    Chi, Chong-Yung

    1987-06-01

    Chi and Mendel (1984) analyzed the performance of minimum-variance deconvolution (MVD). In this correspondence, a further analysis of the performance of the MVD filter is presented. It is shown that the MVD filter performs like an inverse filter and a whitening filter as SNR goes to infinity, and like a matched filter as SNR goes to zero. The estimation error of the MVD filter is colored noise, but it becomes white when SNR goes to zero. This analysis also connects the error power-spectral density of the MVD filter with the spectrum of the causal-prediction error filter.
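    The limiting behavior described in the abstract can be illustrated in the frequency domain. Under the simplifying assumption of white source and noise spectra (the example channel, spectral levels, and function name below are illustrative, not Chi's formulation), a minimum-variance (Wiener-style) deconvolution filter tends to the inverse filter at high SNR and to a scaled matched filter at low SNR:

```python
import numpy as np

# Frequency response of a minimum-variance (Wiener-style) deconvolution
# filter for a known channel H(f), white source level Sx and noise level Sn:
#     G(f) = conj(H) * Sx / (|H|^2 * Sx + Sn)
# As Sn -> 0 (SNR -> inf): G -> 1/H        (inverse / whitening filter)
# As Sx -> 0 (SNR -> 0):   G -> (Sx/Sn) * conj(H)  (scaled matched filter)

h = np.array([1.0, 0.5, 0.25])            # example channel impulse response
H = np.fft.rfft(h, 256)                   # its frequency response

def mvd_response(H, snr):
    Sx, Sn = snr, 1.0                     # source and noise spectral levels
    return np.conj(H) * Sx / (np.abs(H) ** 2 * Sx + Sn)

G_hi = mvd_response(H, snr=1e8)           # high SNR: ~ inverse filter
G_lo = mvd_response(H, snr=1e-8)          # low SNR: ~ scaled matched filter
```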

  4. John A. Scigliano Interviews Allan B. Ellis.

    ERIC Educational Resources Information Center

    Scigliano, John A.

    2000-01-01

    This interview with Allan Ellis focuses on a history of computer applications in education. Highlights include work at the Harvard Graduate School of Education; the New England Education Data System; and efforts to create a computer-based distance learning and development program called ISVD (Information System for Vocational Decisions). (LRW)

  5. The Curious Mind of Allan Bloom.

    ERIC Educational Resources Information Center

    Gardner, Martin

    1988-01-01

    This article reviews Allan Bloom's 1987 book, THE CLOSING OF THE AMERICAN MIND: HOW HIGHER EDUCATION HAS FAILED DEMOCRACY AND IMPOVERISHED THE SOULS OF TODAY'S CHILDREN. Compares Bloom's book with THE HIGHER LEARNING IN AMERICA, a 1930s book by Mortimer Adler and Robert Hutchins. (JDH)

  6. The use of repeated measures analysis of variance for plaque and gingival indices.

    PubMed

    Gunsolley, J C; Chinchilli, V M; Koertge, T E; Palcanis, K G; Sarbin, A G; Brooks, C N

    1989-03-01

    Clinical trials for anti-gingivitis and anti-plaque agents commonly use the mean of Silness and Löe plaque indices and Löe and Silness gingival indices as response variables. The aim of this report is to determine if data from anti-plaque and anti-gingivitis clinical trials using Silness and Löe plaque indices and Löe and Silness gingival indices satisfy conditions necessary for the use of the univariate or multivariate approach to repeated measures. These conditions are multivariate normality, homogeneity of variance-covariance matrices, and for the univariate approach, a type-H variance-covariance matrix. Data from 5 separate clinical trials representing a wide range in sample size, pretreatment mean gingival and plaque indices and treatment effects were used to test these conditions. Either the univariate or the multivariate approach to repeated measures was found to be appropriate for both responses in the 5 clinical trials. Thus, means of Silness and Löe plaque indices and Löe and Silness gingival indices meet the necessary conditions for use of either the univariate or the multivariate approach to repeated measures. However, significant time-treatment interactions are a common occurrence in these types of clinical trials and must be evaluated carefully. The analyses in this study were carried out using SAS. Other mainframe statistical software packages and many micro-computer statistical software packages have routines to analyze repeated measures experiments with analysis of variance methods. However, some of the packages may omit the multivariate approach to repeated measures or may not include interactions between within-subject and between-subject effects. These packages should be used with caution. PMID:2723097
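    The univariate approach to repeated measures that the trials rely on can be sketched in a few lines. The following is a NumPy illustration of a one-way repeated-measures ANOVA under sphericity (a type-H covariance matrix), with invented plaque-index data; it is not the SAS analysis used in the study.

```python
import numpy as np

def repeated_measures_anova(x):
    """Univariate one-way repeated-measures ANOVA.

    `x` is an (n_subjects, k_timepoints) array of index scores; returns the
    F statistic for the time effect and its degrees of freedom, assuming
    sphericity of the within-subject covariance matrix.
    """
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_time = n * np.sum((x.mean(axis=0) - grand) ** 2)
    ss_subj = k * np.sum((x.mean(axis=1) - grand) ** 2)
    ss_error = np.sum((x - grand) ** 2) - ss_time - ss_subj  # subject-by-time residual
    df_time, df_error = k - 1, (n - 1) * (k - 1)
    f = (ss_time / df_time) / (ss_error / df_error)
    return f, df_time, df_error

# Hypothetical plaque-index scores for 6 subjects at 3 visits
rng = np.random.default_rng(1)
subj = rng.normal(0.0, 0.3, size=(6, 1))          # subject baselines
time_effect = np.array([1.8, 1.4, 1.0])           # mean decline over visits
x = time_effect + subj + rng.normal(0.0, 0.1, size=(6, 3))
f, df1, df2 = repeated_measures_anova(x)
```

    Removing the subject sum of squares from the error term is what distinguishes this from an ordinary one-way ANOVA: between-subject variation does not inflate the denominator of F.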

  7. Allan-Herndon syndrome. I. Clinical studies.

    PubMed Central

    Stevenson, R E; Goodman, H O; Schwartz, C E; Simensen, R J; McLean, W T; Herndon, C N

    1990-01-01

    A large family with X-linked mental retardation, originally reported in 1944 by Allan, Herndon, and Dudley, has been reinvestigated. Twenty-nine males have been affected in seven generations. Clinical features include severe mental retardation, dysarthria, ataxia, athetoid movements, muscle hypoplasia, and spastic paraplegia with hyperreflexia, clonus, and Babinski reflexes. The facies appear elongated with normal head circumference, bitemporal narrowing, and large, simple ears. Contractures develop at both small and large joints. Statural growth is normal and macroorchidism does not occur. Longevity is not impaired. High-resolution chromosomes, serum creatine kinase, and amino acids are normal. This condition, termed the Allan-Herndon syndrome, appears distinct from other X-linked disorders having mental retardation, muscle hypoplasia, and spastic paraplegia. PMID:2393019

  8. The Cosmology of Edgar Allan Poe

    NASA Astrophysics Data System (ADS)

    Cappi, Alberto

    2011-06-01

    Eureka is a "prose poem" published in 1848, in which Edgar Allan Poe presents his original cosmology. While starting from metaphysical assumptions, Poe develops an evolving Newtonian model of the Universe which has many, and not accidental, analogies with modern cosmology. Poe was well informed about astronomical and physical discoveries, and he was influenced by both contemporary science and ancient ideas. For these reasons, Eureka is a unique synthesis of metaphysics, art and science.

  9. Analysis of variances of quasirapidities in collisions of gold nuclei with track-emulsion nuclei

    SciTech Connect

    Gulamov, K. G.; Zhokhova, S. I.; Lugovoi, V. V. Navotny, V. S. Saidkhanov, N. S.; Chudakov, V. M.

    2012-08-15

    A new method of analysis of variances was developed for studying n-particle correlations of quasirapidities in nucleus-nucleus collisions for a large constant number n of particles. Formulas that generalize the results of the respective analysis to various values of n were derived. Calculations on the basis of simple models indicate that the method is applicable at least for n >= 100. Quasirapidity correlations statistically significant at a level of 36 standard deviations were discovered in collisions between gold nuclei and track-emulsion nuclei at an energy of 10.6 GeV per nucleon. The experimental data obtained in our present study are contrasted against the theory of nucleus-nucleus collisions.

  10. Analysis of open-loop conical scan pointing error and variance estimators

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1993-01-01

    General pointing error and variance estimators for an open-loop conical scan (conscan) system are derived and analyzed. The conscan algorithm is modeled as a weighted least-squares estimator whose inputs are samples of receiver carrier power and its associated measurement uncertainty. When the assumptions of constant measurement noise and zero pointing error estimation are applied, the variance equation is then strictly a function of the carrier power to uncertainty ratio and the operator selectable radius and period input to the algorithm. The performance equation is applied to a 34-m mirror-based beam-waveguide conscan system interfaced with the Block V Receiver Subsystem tracking a Ka-band (32-GHz) downlink. It is shown that for a carrier-to-noise power ratio greater than or equal to 30 dB-Hz, the conscan period for Ka-band operation may be chosen well below the current DSN minimum of 32 sec. The analysis presented forms the basis of future conscan work in both research and development as well as for the upcoming DSN antenna controller upgrade for the new DSS-24 34-m beam-waveguide antenna.

  11. Princess Marie Bonaparte, Edgar Allan Poe, and psychobiography.

    PubMed

    Warner, S L

    1991-01-01

    Princess Marie Bonaparte was a colorful yet mysterious member of Freud's inner circle of psychoanalysis. In analysis with Freud beginning in 1925 (she was then 45 years old), she became a lay analyst and writer of many papers and books. Her most ambitious task was a 700-page psychobiography of Edgar Allan Poe that was first published in French in 1933. She was fascinated by Poe's gothic stories--with the return to life of dead persons and the eerie, unexpected turns of events. Her fascination with Poe can be traced to the similarity of their early traumatic life experiences. Bonaparte had lost her mother a month after her birth. Poe's father deserted the family when Edgar was two years old, and his mother died of tuberculosis when he was three. Poe's stories helped him to accommodate to these early traumatic losses. Bonaparte vicariously shared in Poe's loss and the fantasies of the return of the deceased parent in his stories. She was sensitive and empathetic to Poe's inner world because her inner world was similar. The result of this psychological fit between Poe and Bonaparte was her psychobiography, The Life and Works of Edgar Allan Poe. It was a milestone in psychobiography but limited in its psychological scope by its strong emphasis on early childhood trauma. Nevertheless it proved Bonaparte a bona fide creative psychoanalyst and not a dilettante propped up by her friendship with Freud. PMID:1744021

  12. Semiautomated analysis of embryoscope images: Using localized variance of image intensity to detect embryo developmental stages.

    PubMed

    Mölder, Anna; Drury, Sarah; Costen, Nicholas; Hartshorne, Geraldine M; Czanner, Silvester

    2015-02-01

    Embryo selection in in vitro fertilization (IVF) treatment has traditionally been done manually using microscopy at intermittent time points during embryo development. Novel techniques have made it possible to monitor embryos using time lapse for long periods, and together with the reduced cost of data storage, this has opened the door to long-term time-lapse monitoring; large amounts of image material are now routinely gathered. However, the analysis is still to a large extent performed manually, and images are mostly used as a qualitative reference. To make full use of the increased amount of microscopic image material, (semi)automated computer-aided tools are needed. An additional benefit of automation is the establishment of standardization tools for embryo selection and transfer, making decisions more transparent and less subjective. Another is the possibility to gather and analyze data in a high-throughput manner, gathering data from multiple clinics and increasing our knowledge of early human embryo development. In this study, the extraction of data to automatically select and track spatio-temporal events and features from sets of embryo images has been achieved using localized variance based on the distribution of image grey scale levels. A retrospective cohort study was performed using time-lapse imaging data derived from 39 human embryos from seven couples, covering the time from fertilization up to 6.3 days. The profile of localized variance has been used to characterize syngamy, mitotic division, and stages of cleavage, compaction, and blastocoel formation. Prior to analysis, focal plane and embryo location were automatically detected, limiting precomputational user interaction to a calibration step and making the method usable for automatic detection of a region of interest (ROI) regardless of the method of analysis. The results were validated against the opinion of clinical experts. © 2015 International Society for Advancement of Cytometry. PMID:25614363
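    The localized-variance feature at the heart of this approach can be sketched with summed-area tables, using the identity Var = E[I^2] - (E[I])^2 over a sliding window. The window size and synthetic test image below are illustrative, not the study's parameters.

```python
import numpy as np

def local_variance(img, w):
    """Variance of grey levels in a (w x w) sliding window, computed from
    window means of I and I^2 via summed-area tables."""
    img = np.asarray(img, dtype=float)

    def box_mean(a):
        # summed-area table with a zero border, then window sums
        s = np.pad(a, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
        win = s[w:, w:] - s[:-w, w:] - s[w:, :-w] + s[:-w, :-w]
        return win / (w * w)

    m1 = box_mean(img)
    m2 = box_mean(img ** 2)
    return np.maximum(m2 - m1 ** 2, 0.0)   # clip tiny negative round-off

# A flat background with one textured patch: the variance map is near zero
# on the background and large inside the patch.
rng = np.random.default_rng(2)
img = np.zeros((64, 64))
img[20:40, 20:40] = rng.normal(0.0, 1.0, size=(20, 20))
v = local_variance(img, w=5)
```

    The summed-area-table trick makes the cost independent of window size, which matters when scanning long time-lapse sequences frame by frame.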

  13. Minimum variance imaging based on correlation analysis of Lamb wave signals.

    PubMed

    Hua, Jiadong; Lin, Jing; Zeng, Liang; Luo, Zhi

    2016-08-01

    In Lamb wave imaging, MVDR (minimum variance distortionless response) is a promising approach for the detection and monitoring of large areas with a sparse transducer network. Previous studies of MVDR use signal amplitude as the input damage feature, and the imaging performance is closely related to the evaluation accuracy of the scattering characteristic. However, the scattering characteristic is highly dependent on damage parameters (e.g. type, orientation and size), which are unknown beforehand. The evaluation error can degrade imaging performance severely. In this study, a more reliable damage feature, the LSCC (local signal correlation coefficient), is established to replace signal amplitude. In comparison with signal amplitude, one attractive feature of the LSCC is its independence of damage parameters. Therefore, the LSCC model in the transducer network can be accurately evaluated, and the imaging performance is improved accordingly. Both theoretical analysis and experimental investigation are given to validate the effectiveness of the LSCC-based MVDR algorithm in improving imaging performance. PMID:27155349
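    A local signal correlation coefficient of this general kind can be sketched as a windowed Pearson correlation between a baseline and a current signal: where a scattered wave packet appears only in the current signal, the local correlation drops. The synthetic tone bursts and window length below are illustrative, not the paper's experimental configuration.

```python
import numpy as np

def local_correlation(a, b, w):
    """Pearson correlation of signals `a` and `b` inside each length-`w`
    sliding window (one value per window start)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    out = np.empty(a.size - w + 1)
    for i in range(out.size):
        xa = a[i:i + w] - a[i:i + w].mean()
        xb = b[i:i + w] - b[i:i + w].mean()
        denom = np.sqrt((xa ** 2).sum() * (xb ** 2).sum())
        out[i] = (xa * xb).sum() / denom if denom > 0 else 0.0
    return out

# Baseline tone burst vs. a copy with an extra scattered wave packet
t = np.linspace(0.0, 1.0, 2000)
baseline = np.sin(2 * np.pi * 50 * t) * np.exp(-((t - 0.2) / 0.05) ** 2)
scatter = 0.8 * np.sin(2 * np.pi * 50 * t + 1.0) * np.exp(-((t - 0.6) / 0.05) ** 2)
current = baseline + scatter
lscc = local_correlation(baseline, current, w=100)
```

    Near the direct arrival (t around 0.2) the two signals agree and the local correlation stays close to 1; around the scattered packet (t around 0.6) it falls, independently of the packet's amplitude, which is the scale-invariance property the abstract highlights.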

  14. Structural damage detection in an aeronautical panel using analysis of variance

    NASA Astrophysics Data System (ADS)

    Gonsalez, Camila Gianini; da Silva, Samuel; Brennan, Michael J.; Lopes Junior, Vicente

    2015-02-01

    This paper describes a procedure for structural health assessment based on one-way analysis of variance (ANOVA) together with Tukey's multiple comparison test, to determine whether the results are statistically significant. The feature indices are obtained from electromechanical impedance measurements using piezoceramic sensor/actuator patches bonded to the structure. Compared to the classical approach based on a simple change of the observed signals, using for example root mean square responses, the decision procedure in this paper involves a rigorous statistical test. Experimental tests were carried out on an aeronautical panel in the laboratory to validate the approach. In order to include uncontrolled variability in the dynamic responses, the measurements were taken over several days in different environmental conditions using all eight sensor/actuator patches. The damage was simulated by controlling the tightness and looseness of the bolts and was correctly diagnosed. The paper discusses the strengths and weaknesses of the approach in light of the experimental results.

  15. A VLBI variance-covariance analysis interactive computer program. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Bock, Y.

    1980-01-01

    An interactive computer program (in FORTRAN) for the variance-covariance analysis of VLBI experiments is presented for use in experiment planning, simulation studies and optimal design problems. The interactive mode is especially suited to these types of analyses, providing ease of operation as well as savings in time and cost. The geodetic parameters include baseline vector parameters and variations in polar motion and Earth rotation. A discussion of the theory on which the program is based provides an overview of the VLBI process, emphasizing the areas of interest to geodesy. Special emphasis is placed on the problem of determining correlations between simultaneous observations from a network of stations. A model suitable for covariance analyses is presented. Suggestions towards developing optimal observation schedules are included.

  16. Analysis of variance on thickness and electrical conductivity measurements of carbon nanotube thin films

    NASA Astrophysics Data System (ADS)

    Li, Min-Yang; Yang, Mingchia; Vargas, Emily; Neff, Kyle; Vanli, Arda; Liang, Richard

    2016-09-01

    One of the major challenges towards controlling the transfer of electrical and mechanical properties of nanotubes into nanocomposites is the lack of adequate measurement systems to quantify the variations in bulk properties when the nanotubes are used as the reinforcement material. In this study, we conducted one-way analysis of variance (ANOVA) on thickness and conductivity measurements. By analyzing the data collected from both experienced and inexperienced operators, we found several operational details that users might overlook and that resulted in variations, since conductivity measurements of CNT thin films are very sensitive to thickness measurements. In addition, we demonstrated how issues in measurement damaged samples and limited the number of replications, resulting in large variations in the electrical conductivity measurement results. Based on this study, we proposed a faster, more reliable approach to measuring the thickness of CNT thin films that operators can follow to make these measurement processes less dependent on operator skill.

  17. [Discussion of errors and measuring strategies in morphometry using analysis of variance].

    PubMed

    Rother, P; Jahn, W; Fitzl, G; Wallmann, T; Walter, U

    1986-01-01

    Statistical techniques known as the analysis of variance make it possible for the morphologist to plan work in such a way as to get quantitative data with the greatest possible economy of effort. This paper explains how to decide how many measurements to make per micrograph, how many micrographs per tissue block or organ, and how many organs or individuals are necessary to achieve sufficient precision in the results. The examples furnished have been taken from measuring volume densities of mitochondria in heart muscle cells and from cell counting in lymph nodes. Finally, we show how to determine sample sizes when the aim is to demonstrate significant differences between mean values. PMID:3569811
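    The nested analysis of variance underlying such measuring strategies can be sketched for a balanced two-stage design (measurements within blocks within organs), where the expected mean squares yield the per-level variance components that drive sample-size decisions. The data and component sizes below are invented for illustration.

```python
import numpy as np

def nested_variance_components(x):
    """Variance components for a balanced two-stage nested design.

    `x` has shape (n_organs, n_blocks, n_meas): measurements nested in
    blocks nested in organs. Returns (organ, block, measurement) variance
    component estimates from the nested-ANOVA expected mean squares:
        E[MS_meas]  = s2_meas
        E[MS_block] = s2_meas + n * s2_block
        E[MS_organ] = s2_meas + n * s2_block + b * n * s2_organ
    """
    a, b, n = x.shape
    grand = x.mean()
    organ_m = x.mean(axis=(1, 2))
    block_m = x.mean(axis=2)
    ms_organ = b * n * np.sum((organ_m - grand) ** 2) / (a - 1)
    ms_block = n * np.sum((block_m - organ_m[:, None]) ** 2) / (a * (b - 1))
    ms_meas = np.sum((x - block_m[..., None]) ** 2) / (a * b * (n - 1))
    var_meas = ms_meas
    var_block = max((ms_block - ms_meas) / n, 0.0)
    var_organ = max((ms_organ - ms_block) / (b * n), 0.0)
    return var_organ, var_block, var_meas

# Simulated design: 8 organs, 5 blocks each, 10 measurements per block,
# with true components 4.0 (organ), 1.0 (block), 0.25 (measurement)
rng = np.random.default_rng(5)
a, b, n = 8, 5, 10
x = (rng.normal(0, 2.0, (a, 1, 1))
     + rng.normal(0, 1.0, (a, b, 1))
     + rng.normal(0, 0.5, (a, b, n)))
vo, vb, vm = nested_variance_components(x)
```

    Whichever component dominates tells the investigator where additional sampling pays off: a large organ component argues for more individuals, not for more measurements per micrograph.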

  18. Self-validated Variance-based Methods for Sensitivity Analysis of Model Outputs

    SciTech Connect

    Tong, C

    2009-04-20

    Global sensitivity analysis (GSA) has the advantage over local sensitivity analysis in that GSA does not require strong model assumptions such as linearity or monotonicity. As a result, GSA methods such as those based on variance decomposition are well-suited to multi-physics models, which are often plagued by large nonlinearities. However, as with many other sampling-based methods, an inadequate sample size can badly degrade the accuracy of the results. A natural remedy is to adaptively increase the sample size until sufficient accuracy is obtained. This paper proposes an iterative methodology comprising mechanisms for guiding sample size selection and self-assessing result accuracy. The key features of the proposed methodology are the adaptive refinement strategies for stratified designs. We first apply this iterative methodology to the design of a self-validated first-order sensitivity analysis algorithm. We also extend this methodology to design a self-validated second-order sensitivity analysis algorithm based on refining replicated orthogonal array designs. Several numerical experiments are given to demonstrate the effectiveness of these methods.
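    The variance-decomposition indices that such methods estimate can be sketched with a plain, non-adaptive Saltelli-style estimator; the paper's adaptive refinement of stratified designs is not reproduced here. The Ishigami function is a standard test case whose first-order indices are known analytically (approximately 0.31, 0.44 and 0 for the parameters used below).

```python
import numpy as np

def first_order_sobol(f, d, n, rng):
    """Saltelli-style Monte Carlo estimate of the first-order indices
    S_i = Var(E[Y|X_i]) / Var(Y) for a model f on [-pi, pi]^d."""
    A = rng.uniform(-np.pi, np.pi, size=(n, d))
    B = rng.uniform(-np.pi, np.pi, size=(n, d))
    fA, fB = f(A), f(B)
    var_y = np.var(np.concatenate([fA, fB]))
    s = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                 # resample only coordinate i
        s[i] = np.mean(fB * (f(ABi) - fA)) / var_y
    return s

def ishigami(x, a=7.0, b=0.1):
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
            + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

rng = np.random.default_rng(3)
S = first_order_sobol(ishigami, d=3, n=200_000, rng=rng)
# Analytic values for a=7, b=0.1: S1 ~ 0.314, S2 ~ 0.442, S3 = 0
```

    The cost of this brute-force estimator, n*(d + 2) model runs, is exactly what motivates both the paper's adaptive sample-size control and surrogate-based alternatives such as sparse grid collocation.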

  19. NASTRAN variance analysis and plotting of HBDY elements. [analysis of uncertainties of the computer results as a function of uncertainties in the input data

    NASA Technical Reports Server (NTRS)

    Harder, R. L.

    1974-01-01

    The NASTRAN Thermal Analyzer has been extended to perform variance analysis and to plot the thermal boundary (HBDY) elements. The objective of the variance analysis addition is to assess the sensitivity of temperature variances resulting from uncertainties inherent in input parameters for heat conduction analysis. The plotting capability provides the ability to check the geometry (location, size and orientation) of the boundary elements of a model in relation to the conduction elements. Variance analysis is the study of uncertainties in the computed results as a function of uncertainties in the input data. To study this problem using NASTRAN, a solution is made for the expected values of all inputs, plus another solution for each uncertain variable. A variance analysis module subtracts the results to form derivatives, and then determines the expected deviations of the output quantities.

  20. [The medical history of Edgar Allan Poe].

    PubMed

    Miranda C, Marcelo

    2007-09-01

    Edgar Allan Poe, one of the best American storytellers and poets, suffered an episodic behaviour disorder partially triggered by alcohol and opiate use. Much confusion still exists about the last days of his turbulent life and the cause of his death at an early age. Different etiologies have been proposed to explain his main medical problem, however, complex partial seizures triggered by alcohol, poorly recognized at the time when Poe lived, seems to be one of the most acceptable hypothesis, among others discussed. PMID:18064380

  1. Effects of Violations of Data Set Assumptions When Using the Analysis of Variance and Covariance with Unequal Group Sizes.

    ERIC Educational Resources Information Center

    Johnson, Colleen Cook; Rakow, Ernest A.

    This research explored the degree to which group sizes can differ before the robustness of analysis of variance (ANOVA) and analysis of covariance (ANCOVA) is jeopardized. Monte Carlo methodology was used, allowing for the experimental investigation of potential threats to robustness under conditions common to researchers in education. The…

  2. FORTRAN IV Program for One-Way Analysis of Variance with A Priori or A Posteriori Mean Comparisons

    ERIC Educational Resources Information Center

    Fordyce, Michael W.

    1977-01-01

    A flexible Fortran program for computing a one-way analysis of variance is described. Requiring minimal core space, the program provides a variety of useful group statistics, all summary statistics for the analysis, and all mean comparisons for a priori or a posteriori testing. (Author/JKS)
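    A modern equivalent of such a program is a few lines of SciPy: an omnibus one-way ANOVA followed by a posteriori (Tukey HSD) mean comparisons. The groups below are invented, and `scipy.stats.tukey_hsd` is assumed to be available (it was added in a relatively recent SciPy release).

```python
import numpy as np
from scipy.stats import f_oneway, tukey_hsd

# Three hypothetical treatment groups; the third is shifted upward
rng = np.random.default_rng(4)
g1 = rng.normal(10.0, 1.0, size=30)
g2 = rng.normal(10.2, 1.0, size=30)
g3 = rng.normal(12.0, 1.0, size=30)

F, p = f_oneway(g1, g2, g3)          # omnibus one-way ANOVA
post = tukey_hsd(g1, g2, g3)         # a posteriori pairwise comparisons

# post.pvalue[i, j] is the Tukey-adjusted p-value for groups i and j;
# only pairs involving g3 should come out clearly significant here.
```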

  3. Variance Analysis of Wind and Natural Gas Generation under Different Market Structures: Some Observations

    SciTech Connect

    Bush, B.; Jenkin, T.; Lipowicz, D.; Arent, D. J.; Cooke, R.

    2012-01-01

    Does large scale penetration of renewable generation such as wind and solar power pose economic and operational burdens on the electricity system? A number of studies have pointed to the potential benefits of renewable generation as a hedge against the volatility and potential escalation of fossil fuel prices. Research also suggests that the lack of correlation of renewable energy costs with fossil fuel prices means that adding large amounts of wind or solar generation may also reduce the volatility of system-wide electricity costs. Such variance reduction of system costs may be of significant value to consumers due to risk aversion. The analysis in this report recognizes that the potential value of risk mitigation associated with wind generation and natural gas generation may depend on whether one considers the consumer's perspective or the investor's perspective and whether the market is regulated or deregulated. We analyze the risk and return trade-offs for wind and natural gas generation for deregulated markets based on hourly prices and load over a 10-year period using historical data in the PJM Interconnection (PJM) from 1999 to 2008. Similar analysis is then simulated and evaluated for regulated markets under certain assumptions.

  4. Variance-based global sensitivity analysis for multiple scenarios and models with implementation using sparse grid collocation

    NASA Astrophysics Data System (ADS)

    Dai, Heng; Ye, Ming

    2015-09-01

    Sensitivity analysis is a vital tool in hydrological modeling to identify influential parameters for inverse modeling and uncertainty analysis, and variance-based global sensitivity analysis has gained popularity. However, the conventional global sensitivity indices are defined with consideration of only parametric uncertainty. Based on a hierarchical structure of parameter, model, and scenario uncertainties and on recently developed techniques of model- and scenario-averaging, this study derives new global sensitivity indices for multiple models and multiple scenarios. To reduce the computational cost of variance-based global sensitivity analysis, the sparse grid collocation method is used to evaluate the mean and variance terms involved. In a simple synthetic case of groundwater flow and reactive transport, it is demonstrated that the global sensitivity indices vary substantially between the four models and three scenarios. Not considering the model and scenario uncertainties might result in biased identification of important model parameters. This problem is resolved by using the new indices defined for multiple models and/or multiple scenarios. This is particularly true when the sensitivity indices and model/scenario probabilities vary substantially. The sparse grid collocation method dramatically reduces the computational cost in comparison with the popular quasi-random sampling method. The new framework of global sensitivity analysis is mathematically general, and can be applied to a wide range of hydrologic and environmental problems.

  5. Odor measurements according to EN 13725: A statistical analysis of variance components

    NASA Astrophysics Data System (ADS)

    Klarenbeek, Johannes V.; Ogink, Nico W. M.; van der Voet, Hilko

    2014-04-01

    In Europe, dynamic olfactometry, as described by the European standard EN 13725, has become the preferred method for evaluating odor emissions emanating from industrial and agricultural sources. Key elements of this standard are the quality criteria for trueness and precision (repeatability). Both are linked to standard values of n-butanol in nitrogen. It is assumed in this standard that whenever a laboratory complies with the overall sensory quality criteria for n-butanol, the quality level is transferable to other, environmental, odors. Although olfactometry is well established, little has been done to investigate inter laboratory variance (reproducibility). Therefore, the objective of this study was to estimate the reproducibility of odor laboratories complying with EN 13725 as well as to investigate the transferability of n-butanol quality criteria to other odorants. Based upon the statistical analysis of 412 odor measurements on 33 sources, distributed in 10 proficiency tests, it was established that laboratory, panel and panel session are components of variance that significantly differ between n-butanol and other odorants (α = 0.05). This finding does not support the transferability of the quality criteria, as determined on n-butanol, to other odorants and as such is a cause for reconsideration of the present single reference odorant as laid down in EN 13725. In case of non-butanol odorants, repeatability standard deviation (sr) and reproducibility standard deviation (sR) were calculated to be 0.108 and 0.282 respectively (log base-10). The latter implies that the difference between two consecutive single measurements, performed on the same testing material by two or more laboratories under reproducibility conditions, will not be larger than a factor 6.3 in 95% of cases. As far as n-butanol odorants are concerned, it was found that the present repeatability standard deviation (sr = 0.108) compares favorably to that of EN 13725 (sr = 0.172). 
It is therefore

  6. Contrasting genetic architectures of schizophrenia and other complex diseases using fast variance-components analysis.

    PubMed

    Loh, Po-Ru; Bhatia, Gaurav; Gusev, Alexander; Finucane, Hilary K; Bulik-Sullivan, Brendan K; Pollack, Samuela J; de Candia, Teresa R; Lee, Sang Hong; Wray, Naomi R; Kendler, Kenneth S; O'Donovan, Michael C; Neale, Benjamin M; Patterson, Nick; Price, Alkes L

    2015-12-01

    Heritability analyses of genome-wide association study (GWAS) cohorts have yielded important insights into complex disease architecture, and increasing sample sizes hold the promise of further discoveries. Here we analyze the genetic architectures of schizophrenia in 49,806 samples from the PGC and nine complex diseases in 54,734 samples from the GERA cohort. For schizophrenia, we infer an overwhelmingly polygenic disease architecture in which ≥71% of 1-Mb genomic regions harbor ≥1 variant influencing schizophrenia risk. We also observe significant enrichment of heritability in GC-rich regions and in higher-frequency SNPs for both schizophrenia and GERA diseases. In bivariate analyses, we observe significant genetic correlations (ranging from 0.18 to 0.85) for several pairs of GERA diseases; genetic correlations were on average 1.3 times stronger than the correlations of overall disease liabilities. To accomplish these analyses, we developed a fast algorithm for multicomponent, multi-trait variance-components analysis that overcomes prior computational barriers that made such analyses intractable at this scale. PMID:26523775

  7. Spatial Variance in Resting fMRI Networks of Schizophrenia Patients: An Independent Vector Analysis.

    PubMed

    Gopal, Shruti; Miller, Robyn L; Michael, Andrew; Adali, Tulay; Cetin, Mustafa; Rachakonda, Srinivas; Bustillo, Juan R; Cahill, Nathan; Baum, Stefi A; Calhoun, Vince D

    2016-01-01

    Spatial variability in resting functional MRI (fMRI) brain networks has not been well studied in schizophrenia, a disease known for both neurodevelopmental and widespread anatomic changes. Motivated by abundant evidence of neuroanatomical variability from previous studies of schizophrenia, we draw upon a relatively new approach called independent vector analysis (IVA) to assess this variability in resting fMRI networks. IVA is a blind-source separation algorithm, which segregates fMRI data into temporally coherent but spatially independent networks and has been shown to be especially good at capturing spatial variability among subjects in the extracted networks. We introduce several new ways to quantify differences in variability of IVA-derived networks between schizophrenia patients (SZs = 82) and healthy controls (HCs = 89). Voxelwise amplitude analyses showed significant group differences in the spatial maps of auditory cortex, the basal ganglia, the sensorimotor network, and visual cortex. Tests for differences (HC-SZ) in the spatial variability maps suggest that, at rest, SZs exhibit more activity within externally focused sensory and integrative networks and less activity in the default mode network, which is thought to be related to internal reflection. Additionally, tests for differences of variance between groups further emphasize that SZs exhibit greater network variability. These results, consistent with our prediction of increased spatial variability within SZs, enhance our understanding of the disease and suggest that it is not just the amplitude of connectivity that is different in schizophrenia, but also the consistency in spatial connectivity patterns across subjects. PMID:26106217

  8. Adjusting stream-sediment geochemical maps in the Austrian Bohemian Massif by analysis of variance

    USGS Publications Warehouse

    Davis, J.C.; Hausberger, G.; Schermann, O.; Bohling, G.

    1995-01-01

    The Austrian portion of the Bohemian Massif is a Precambrian terrane composed mostly of highly metamorphosed rocks intruded by a series of granitoids that are petrographically similar. Rocks are exposed poorly and the subtle variations in rock type are difficult to map in the field. A detailed geochemical survey of stream sediments in this region has been conducted and included as part of the Geochemischer Atlas der Republik Österreich, and the variations in stream sediment composition may help refine the geological interpretation. In an earlier study, multivariate analysis of variance (MANOVA) was applied to the stream-sediment data in order to minimize unwanted sampling variation and emphasize relationships between stream sediments and rock types in sample catchment areas. The estimated coefficients were used successfully to correct for the sampling effects throughout most of the region, but also introduced an overcorrection in some areas that seems to result from consistent but subtle differences in composition of specific rock types. By expanding the model to include an additional factor reflecting the presence of a major tectonic unit, the Rohrbach block, the overcorrection is removed. This iterative process simultaneously refines both the geochemical map by removing extraneous variation and the geological map by suggesting a more detailed classification of rock types. © 1995 International Association for Mathematical Geology.

  9. Spectral analysis comparisons of Fourier-theory-based methods and minimum variance (Capon) methods

    NASA Astrophysics Data System (ADS)

    Garbanzo-Salas, Marcial; Hocking, Wayne. K.

    2015-09-01

    In recent years, adaptive (data dependent) methods have been introduced into many areas where Fourier spectral analysis has traditionally been used. Although the data-dependent methods are often advanced as being superior to Fourier methods, they do require some finesse in choosing the order of the relevant filters. In performing comparisons, we have found some concerns about the mappings, particularly when related to cases involving many spectral lines or even continuous spectral signals. Using numerical simulations, several comparisons between Fourier transform procedures and the minimum variance method (MVM) have been performed. For multiple frequency signals, the MVM resolves most of the frequency content only for filters that have more degrees of freedom than the number of distinct spectral lines in the signal. In the case of Gaussian spectral approximation, MVM will always underestimate the width, and can misplace the location of spectral lines in some circumstances. Large filters can be used to improve results with multiple frequency signals, but are computationally inefficient. Significant biases can occur when using MVM to study spectral information or echo power from the atmosphere. Artifacts and artificial narrowing of turbulent layers are examples of such impacts.
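For readers unfamiliar with the minimum variance method, the core of a Capon-style spectral estimator can be sketched as follows (the filter order, diagonal loading, and test signal are illustrative choices, not values from the study):

```python
import numpy as np

def capon_spectrum(x, order, freqs):
    """Minimum variance (Capon) spectral estimate at normalized frequencies."""
    N = len(x)
    # Sample covariance matrix from overlapping length-`order` snapshots
    X = np.array([x[i:i + order] for i in range(N - order + 1)])
    R = X.T @ X / X.shape[0]
    R += 1e-6 * np.trace(R) / order * np.eye(order)  # diagonal loading
    Rinv = np.linalg.inv(R)
    n = np.arange(order)
    P = np.empty(len(freqs))
    for k, f in enumerate(freqs):
        a = np.exp(2j * np.pi * f * n)           # steering vector
        P[k] = order / np.real(a.conj() @ Rinv @ a)
    return P

rng = np.random.default_rng(0)
n = np.arange(512)
x = np.cos(2 * np.pi * 0.1 * n) + 0.1 * rng.standard_normal(512)
freqs = np.linspace(0.0, 0.5, 501)
P = capon_spectrum(x, order=20, freqs=freqs)
print(freqs[np.argmax(P)])  # peak near the true frequency 0.1
```

Note how `order` plays the role of the filter degrees of freedom discussed in the abstract: with a single spectral line, order 20 is ample, but resolving many closely spaced lines would require a larger (and costlier) filter.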

  10. Contrasting genetic architectures of schizophrenia and other complex diseases using fast variance components analysis

    PubMed Central

    Bhatia, Gaurav; Gusev, Alexander; Finucane, Hilary K; Bulik-Sullivan, Brendan K; Pollack, Samuela J; de Candia, Teresa R; Lee, Sang Hong; Wray, Naomi R; Kendler, Kenneth S; O’Donovan, Michael C; Neale, Benjamin M; Patterson, Nick

    2015-01-01

    Heritability analyses of GWAS cohorts have yielded important insights into complex disease architecture, and increasing sample sizes hold the promise of further discoveries. Here, we analyze the genetic architecture of schizophrenia in 49,806 samples from the PGC, and nine complex diseases in 54,734 samples from the GERA cohort. For schizophrenia, we infer an overwhelmingly polygenic disease architecture in which ≥71% of 1Mb genomic regions harbor ≥1 variant influencing schizophrenia risk. We also observe significant enrichment of heritability in GC-rich regions and in higher-frequency SNPs for both schizophrenia and GERA diseases. In bivariate analyses, we observe significant genetic correlations (ranging from 0.18 to 0.85) among several pairs of GERA diseases; genetic correlations were on average 1.3x stronger than correlations of overall disease liabilities. To accomplish these analyses, we developed a fast algorithm for multi-component, multi-trait variance components analysis that overcomes prior computational barriers that made such analyses intractable at this scale. PMID:26523775

  11. New Variance-Reducing Methods for the PSD Analysis of Large Optical Surfaces

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2010-01-01

    Edge data of a measured surface map of a circular optic result in large variance or "spectral leakage" behavior in the corresponding Power Spectral Density (PSD) data. In this paper we present two new, alternative methods for reducing such variance in the PSD data by replacing the zeros outside the circular area of a surface map by non-zero values either obtained from a PSD fit (method 1) or taken from the inside of the circular area (method 2).

  12. Methods to estimate the between-study variance and its uncertainty in meta-analysis.

    PubMed

    Veroniki, Areti Angeliki; Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian P T; Langan, Dean; Salanti, Georgia

    2016-03-01

    Meta-analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between-study variability, which is typically modelled using a between-study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between-study variance, has long been challenged. Our aim is to identify known methods for estimation of the between-study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between-study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel and for continuous data the restricted maximum likelihood estimator are better alternatives to estimate the between-study variance. Based on the scenarios and results presented in the published studies, we recommend the Q-profile method and the alternative approach based on a 'generalised Cochran between-study variance statistic' to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence-based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. PMID:26332144
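As a point of reference, the DerSimonian and Laird moment estimator discussed above can be sketched in a few lines (toy data for illustration only):

```python
import numpy as np

def dersimonian_laird_tau2(y, v):
    """DerSimonian-Laird moment estimator of between-study variance tau^2.

    y : study effect estimates; v : within-study variances.
    """
    y = np.asarray(y, dtype=float)
    w = 1.0 / np.asarray(v, dtype=float)
    ybar = np.sum(w * y) / np.sum(w)            # fixed-effect pooled mean
    Q = np.sum(w * (y - ybar) ** 2)             # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (Q - (len(y) - 1)) / c)     # truncated at zero

# Identical studies imply no between-study heterogeneity
print(dersimonian_laird_tau2([0.3, 0.3, 0.3], [0.01, 0.01, 0.01]))  # 0.0

# Heterogeneous effects yield a positive estimate
print(dersimonian_laird_tau2([0.1, 0.5, 0.9], [0.01, 0.01, 0.01]))  # ≈ 0.15
```

The truncation at zero in the last line is one source of the estimator's known bias in small meta-analyses, which is part of what motivates the alternatives (Paule-Mandel, REML) recommended in the abstract.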

  13. How to detect Edgar Allan Poe's 'purloined letter,' or cross-correlation algorithms in digitized video images for object identification, movement evaluation, and deformation analysis

    NASA Astrophysics Data System (ADS)

    Dost, Michael; Vogel, Dietmar; Winkler, Thomas; Vogel, Juergen; Erb, Rolf; Kieselstein, Eva; Michel, Bernd

    2003-07-01

    Cross-correlation analysis of digitized grey scale patterns is based on at least two images which are compared to each other. Comparison is performed by means of a two-dimensional cross-correlation algorithm applied to a set of local intensity submatrices taken from the pattern matrices of the reference and the comparison images in the surrounding of predefined points of interest. Established as an outstanding NDE tool for 2D and 3D deformation field analysis with a focus on micro- and nanoscale applications (microDAC and nanoDAC), the method exhibits an additional potential for far wider applications that could be used for advancing homeland security. Because the cross-correlation algorithm in some ways seems to imitate some of the "smart" properties of human vision, this "field-of-surface-related" method can provide alternative solutions to some object and process recognition problems that are difficult to solve with more classic "object-related" image processing methods. Detecting differences between two or more images using cross-correlation techniques can open new and unusual applications in identification and detection of hidden objects or objects of unknown origin, in movement or displacement field analysis, and in some aspects of biometric analysis, all of which could be of special interest for homeland security.
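The core of such correlation-based matching can be illustrated with a brute-force normalized cross-correlation search over local intensity submatrices (a schematic sketch, not the microDAC/nanoDAC implementation):

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt(np.sum(a * a) * np.sum(b * b))
    return float(np.sum(a * b) / denom) if denom > 0 else 0.0

def match_template(image, template):
    """Return the (row, col) offset where the template correlates best."""
    H, W = image.shape
    h, w = template.shape
    best_score, best_pos = -2.0, (0, 0)
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            score = ncc(image[i:i + h, j:j + w], template)
            if score > best_score:
                best_score, best_pos = score, (i, j)
    return best_pos, best_score

rng = np.random.default_rng(1)
image = rng.random((40, 40))
template = image[12:20, 5:13].copy()   # patch cut from a known location
pos, score = match_template(image, template)
print(pos, round(score, 3))  # (12, 5) 1.0
```

Repeating this search around many predefined points of interest, and comparing the recovered offsets between a reference and a deformed image, yields the displacement field that the deformation analysis builds on.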

  14. Heteroscedastic Test Statistics for One-Way Analysis of Variance: The Trimmed Means and Hall's Transformation Conjunction

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2005-01-01

    To deal with nonnormal and heterogeneous data for the one-way fixed effect analysis of variance model, the authors adopted a trimmed means method in conjunction with Hall's invertible transformation into a heteroscedastic test statistic (Alexander-Govern test or Welch test). The results of simulation experiments showed that the proposed technique…

  15. Allocating Sample Sizes to Reduce Budget for Fixed-Effect 2×2 Heterogeneous Analysis of Variance

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2016-01-01

    This article discusses the sample size requirements for the interaction, row, and column effects, respectively, by forming a linear contrast for a 2×2 factorial design for fixed-effects heterogeneous analysis of variance. The proposed method uses the Welch t test and its corresponding degrees of freedom to calculate the final sample size in a…

  16. Obituary: Allan R. Sandage (1926-2010)

    NASA Astrophysics Data System (ADS)

    Devorkin, David

    2011-12-01

    Allan Rex Sandage died of pancreatic cancer at his home in San Gabriel, California, in the shadow of Mount Wilson, on November 13, 2010. Born in Iowa City, Iowa, on June 18, 1926, he was 84 years old at his death, leaving his wife, former astronomer Mary Connelly Sandage, and two sons, David and John. He also left a legacy to the world of astronomical knowledge that has long been universally admired and appreciated, making his name synonymous with late 20th-Century observational cosmology. The only child of Charles Harold Sandage, a professor of advertising who helped establish that academic specialty after obtaining a PhD in business administration, and Dorothy Briggs Sandage, whose father was president of Graceland College in Iowa, Allan Sandage grew up in a thoroughly intellectual, university oriented atmosphere but also a peripatetic one taking him to Philadelphia and later to Illinois as his father rose in his career. During his 2 years in Philadelphia, at about age eleven, Allan developed a curiosity about astronomy stimulated by a friend's interest. His father bought him a telescope and he used it to systematically record sunspots, and later attempted to make a larger 6-inch reflector, a project left uncompleted. As a teenager Allan read widely, especially astronomy books of all kinds, recalling in particular The Glass Giant of Palomar as well as popular works by Eddington and Hubble (The Realm of the Nebulae) in the early 1940s. Although his family was Mormon, of the Reorganized Church, he was not practicing, though he later sporadically attended a Methodist church in Oxford, Iowa during his college years. Sandage knew by his high school years that he would engage in some form of intellectual life related to astronomy. He particularly recalls an influential science teacher at Miami University in Oxford, Ohio named Ray Edwards, who inspired him to think critically and "not settle for any hand-waving of any kind." [Interview of Allan Rex Sandage by Spencer

  17. Analysis of the anomalous scale-dependent behavior of dispersivity using straightforward analytical equations: Flow variance vs. dispersion

    SciTech Connect

    Looney, B.B.; Scott, M.T.

    1988-12-31

    Recent field and laboratory data have confirmed that apparent dispersivity is a function of the flow distance of the measurement. This scale effect is not consistent with classical advection dispersion modeling often used to describe the transport of solutes in saturated porous media. Many investigators attribute this anomalous behavior to the fact that the spreading of solute is actually the result of the heterogeneity of subsurface materials and the wide distribution of flow paths and velocities available in such systems. An analysis using straightforward analytical equations confirms this hypothesis. An analytical equation based on a flow variance approach matches available field data when a variance description of approximately 0.4 is employed. Also, current field data provide a basis for statistical selection of the variance parameter based on the level of concern related to the resulting calculated concentration. While the advection dispersion approach often yielded reasonable predictions, continued development of statistical and stochastic techniques will provide more defensible and mechanistically descriptive models.

  18. Flood damage maps: ranking sources of uncertainty with variance-based sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Saint-Geours, N.; Grelot, F.; Bailly, J.-S.; Lavergne, C.

    2012-04-01

    In order to increase the reliability of flood damage assessment, we need to question the uncertainty associated with the whole flood risk modeling chain. Using a case study on the basin of the Orb River, France, we demonstrate how variance-based sensitivity analysis can be used to quantify uncertainty in flood damage maps at different spatial scales and to identify the sources of uncertainty which should be reduced first. Flood risk mapping is recognized as an effective tool in flood risk management and the elaboration of flood risk maps is now required for all major river basins in the European Union (European directive 2007/60/EC). Flood risk maps can be based on the computation of the Mean Annual Damages indicator (MAD). In this approach, potential damages due to different flood events are estimated for each individual stake over the study area, then averaged over time - using the return period of each flood event - and finally mapped. The issue of uncertainty associated with these flood damage maps should be carefully scrutinized, as they are used to inform the relevant stakeholders or to design flood mitigation measures. Maps of the MAD indicator are based on the combination of hydrological, hydraulic, geographic and economic modeling efforts: as a result, numerous sources of uncertainty arise in their elaboration. Many recent studies describe these various sources of uncertainty (Koivumäki 2010, Bales 2009). Some authors propagate these uncertainties through the flood risk modeling chain and estimate confidence bounds around the resulting flood damage estimates (de Moel 2010). It would now be of great interest to go a step further and to identify which sources of uncertainty account for most of the variability in Mean Annual Damages estimates. We demonstrate the use of variance-based sensitivity analysis to rank sources of uncertainty in flood damage mapping and to quantify their influence on the accuracy of flood damage estimates. We use a quasi
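The idea of ranking uncertainty sources by their variance contribution can be illustrated with a first-order Sobol index estimator on a toy model (a generic sketch under simple uniform inputs; the study's actual model chain is hydrological and economic):

```python
import numpy as np

def sobol_first_order(model, d, n, rng):
    """Monte Carlo estimate of first-order Sobol indices (pick-freeze scheme)."""
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = model(A), model(B)
    var_y = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = B.copy()
        ABi[:, i] = A[:, i]                      # B with column i taken from A
        S[i] = np.mean(fA * (model(ABi) - fB)) / var_y
    return S

# Toy additive model: y = 4*x0 + 2*x1 + x2, inputs uniform on [0, 1].
# Analytic first-order indices are 16/21, 4/21 and 1/21.
model = lambda X: 4 * X[:, 0] + 2 * X[:, 1] + X[:, 2]
S = sobol_first_order(model, d=3, n=200_000, rng=np.random.default_rng(0))
print(np.round(S, 2))
```

The input whose index dominates (here x0) is the one whose uncertainty should be reduced first, which is exactly the ranking question posed for the flood damage maps.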

  19. Performance Analysis of the Blind Minimum Output Variance Estimator for Carrier Frequency Offset in OFDM Systems

    NASA Astrophysics Data System (ADS)

    Yang, Feng; Li, Kwok H.; Teh, Kah C.

    2006-12-01

    Carrier frequency offset (CFO) is a serious drawback in orthogonal frequency division multiplexing (OFDM) systems. It must be estimated and compensated before demodulation to guarantee the system performance. In this paper, we examine the performance of a blind minimum output variance (MOV) estimator. Based on the derived probability density function (PDF) of the output magnitude, its mean and variance are obtained and it is observed that the variance reaches the minimum when there is no frequency offset. This observation motivates the development of the proposed MOV estimator. The theoretical mean-square error (MSE) of the MOV estimator over an AWGN channel is obtained. The analytical results are in good agreement with the simulation results. The performance evaluation of the MOV estimator is extended to a frequency-selective fading channel and the maximal-ratio combining (MRC) technique is applied to enhance the MOV estimator's performance. Simulation results show that the MRC technique significantly improves the accuracy of the MOV estimator.

  20. Uranium series dating of Allan Hills ice

    NASA Technical Reports Server (NTRS)

    Fireman, E. L.

    1986-01-01

    Uranium-238 decay series nuclides dissolved in Antarctic ice samples were measured in areas of both high and low concentrations of volcanic glass shards. Ice from the Allan Hills site (high shard content) had high Ra-226, Th-230 and U-234 activities but similarly low U-238 activities in comparison with Antarctic ice samples without shards. The Ra-226, Th-230 and U-234 excesses were found to be proportional to the shard content, while the U-238 decay series results were consistent with the assumption that alpha decay products recoiled into the ice from the shards. Through this method of uranium series dating, it was learned that the Allan Hills Cul de Sac ice is approximately 325,000 years old.

  1. Using Robust Variance Estimation to Combine Multiple Regression Estimates with Meta-Analysis

    ERIC Educational Resources Information Center

    Williams, Ryan

    2013-01-01

    The purpose of this study was to explore the use of robust variance estimation for combining commonly specified multiple regression models and for combining sample-dependent focal slope estimates from diversely specified models. The proposed estimator obviates traditionally required information about the covariance structure of the dependent…

  2. Power Analysis of Selected Parametric and Nonparametric Tests for Heterogeneous Variances in Non-Normal Distributions.

    ERIC Educational Resources Information Center

    Olejnik, Stephen F.; Algina, James

    The present investigation developed power curves for two parametric and two nonparametric procedures for testing the equality of population variances. Both normal and non-normal distributions were considered for the two group design with equal and unequal sample frequencies. The results indicated that when population distributions differed only in…

  3. Analysis of Quantitative Traits in Two Long-Term Randomly Mated Soybean Populations I. Genetic Variances

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The genetic effects of long term random mating and natural selection aided by genetic male sterility were evaluated in two soybean [Glycine max (L.) Merr.] populations: RSII and RSIII. Population means, variances, and heritabilities were estimated to determine the effects of 26 generations of random...

  4. 32. SCIENTISTS ALLAN COX (SEATED), RICHARD DOELL, AND BRENT DALRYMPLE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    32. SCIENTISTS ALLAN COX (SEATED), RICHARD DOELL, AND BRENT DALRYMPLE AT CONTROL PANEL, ABOUT 1965. - U.S. Geological Survey, Rock Magnetics Laboratory, 345 Middlefield Road, Menlo Park, San Mateo County, CA

  5. Spectral variance of aeroacoustic data

    NASA Technical Reports Server (NTRS)

    Rao, K. V.; Preisser, J. S.

    1981-01-01

    An asymptotic technique for estimating the variance of power spectra is applied to aircraft flyover noise data. The results are compared with directly estimated variances and they are in reasonable agreement. The basic time series need not be Gaussian for asymptotic theory to apply. The asymptotic variance formulae can be useful tools both in the design and analysis phase of experiments of this type.

  6. Quantitative Genetic Analysis of Temperature Regulation in MUS MUSCULUS. I. Partitioning of Variance

    PubMed Central

    Lacy, Robert C.; Lynch, Carol Becker

    1979-01-01

    Heritabilities (from parent-offspring regression) and intraclass correlations of full sibs for a variety of traits were estimated from 225 litters of a heterogeneous stock (HS/Ibg) of laboratory mice. Initial variance partitioning suggested different adaptive functions for physiological, morphological and behavioral adjustments with respect to their thermoregulatory significance. Metabolic heat-production mechanisms appear to have reached their genetic limits, with little additive genetic variance remaining. This study provided no genetic evidence that body size has a close directional association with fitness in cold environments, since heritability estimates for weight gain and adult weight were similar and high, whether or not the animals were exposed to cold. Behavioral heat conservation mechanisms also displayed considerable amounts of genetic variability. However, due to strong evidence from numerous other studies that behavior serves an important adaptive role for temperature regulation in small mammals, we suggest that fluctuating selection pressures may have acted to maintain heritable variation in these traits. PMID:17248909

  7. Analysis and application of minimum variance discrete time system identification. [for adaptive control system design

    NASA Technical Reports Server (NTRS)

    Kotob, S.; Kaufman, H.

    1976-01-01

    An on-line minimum variance parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.

  8. Statistical model to perform error analysis of curve fits of wind tunnel test data using the techniques of analysis of variance and regression analysis

    NASA Technical Reports Server (NTRS)

    Alston, D. W.

    1981-01-01

    The objective of this research was to design a statistical model capable of performing an error analysis of curve fits of wind tunnel test data using analysis-of-variance and regression analysis techniques. Four related subproblems were defined, and solving each of these yielded a solution to the general research problem. The capabilities of the resulting statistical model are considered. The least squares fit is used to determine the nature of the force, moment, and pressure data. The order of the curve fit is increased in order to remove the quadratic effect in the residuals. The analysis of variance is used to determine the magnitude and effect of the error factor associated with the experimental data.
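The residual-driven choice of curve-fit order can be sketched with a simple polynomial least-squares example (illustrative synthetic data, not the wind-tunnel measurements):

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(-1.0, 1.0, 50)
# Underlying quadratic relationship plus measurement noise
y = 1.0 + 2.0 * x + 3.0 * x**2 + 0.05 * rng.standard_normal(x.size)

def residual_variance(x, y, degree):
    """Least-squares polynomial fit; return the residual mean square."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    dof = x.size - (degree + 1)        # degrees of freedom for the residuals
    return np.sum(resid**2) / dof

rms1 = residual_variance(x, y, 1)  # a linear fit leaves the quadratic effect
rms2 = residual_variance(x, y, 2)  # a quadratic fit removes it
print(rms1 > 10 * rms2)  # True: raising the order deletes the quadratic trend
```

Comparing residual mean squares across fit orders in this way is the essence of using analysis of variance to judge whether an added term is explaining signal or merely noise.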

  9. Assessing land cover performance in Senegal, West Africa using 1-km integrated NDVI and local variance analysis

    USGS Publications Warehouse

    Budde, M.E.; Tappan, G.; Rowland, J.; Lewis, J.; Tieszen, L.L.

    2004-01-01

    The researchers calculated seasonal integrated normalized difference vegetation index (NDVI) for each of 7 years using a time-series of 1-km data from the Advanced Very High Resolution Radiometer (AVHRR) (1992-93, 1995) and SPOT Vegetation (1998-2001) sensors. We used a local variance technique to identify each pixel as normal or either positively or negatively anomalous when compared to its surroundings. We then summarized the number of years that a given pixel was identified as an anomaly. The resulting anomaly maps were analysed using Landsat TM imagery and extensive ground knowledge to assess the results. This technique identified anomalies that can be linked to numerous anthropogenic impacts including agricultural and urban expansion, maintenance of protected areas and increased fallow. Local variance analysis is a reliable method for assessing vegetation degradation resulting from human pressures or increased land productivity from natural resource management practices. © 2004 Published by Elsevier Ltd.

  10. Allan Sandage and the distance scale

    NASA Astrophysics Data System (ADS)

    Tammann, G. A.; Reindl, B.

    2013-02-01

    Allan Sandage returned to the distance scale and the calibration of the Hubble constant again and again during his active life, experimenting with different distance indicators. In 1952 his proof of the high luminosity of Cepheids confirmed Baade's revision of the distance scale (H0 ~ 250 km s-1 Mpc-1). During the next 25 years, he lowered the value to 75 and 55. Upon the arrival of the Hubble Space Telescope, he observed Cepheids to calibrate the mean luminosity of nearby Type Ia supernovae (SNe Ia) which, used as standard candles, led to the cosmic value of H0 = 62.3 ± 1.3 ± 5.0 km s-1 Mpc-1. Eventually he turned to the tip of the red giant branch (TRGB) as a very powerful distance indicator. A compilation of 176 TRGB distances yielded a mean, very local value of H0 = 62.9 ± 1.6 km s-1 Mpc-1 and shed light on the streaming velocities in the Local Supercluster. Moreover, TRGB distances are now available for six SNe Ia; if their mean luminosity is applied to distant SNe Ia, one obtains H0 = 64.6 ± 1.6 ± 2.0 km s-1 Mpc-1. The weighted mean of the two independent large-scale calibrations yields H0 = 64.1 km s-1 Mpc-1 within 3.6%.

  11. Patient population management: taking the leap from variance analysis to outcomes measurement.

    PubMed

    Allen, K M

    1998-01-01

    Case managers today at BCHS have a somewhat different role than at the onset of the Collaborative Practice Model. They are seen throughout the organization as leaders/participants on cross-functional teams, systems change agents, integrators with quality services and utilization management, and outcomes managers. One of the major cross-functional teams is in the process of designing a Care Coordinator role. These individuals will, as one of their functions, assume responsibility for daily patient care management activities. A variance tracking program has been introduced into the Utilization Management (UM) department as part of a software package purchased to automate UM work activities. This variance program could potentially be used by the new care coordinators as the role develops. The case managers are beginning to use Decision Support software (Transition Systems Inc.) in the collection of data that is based on a cost accounting system and linked to clinical events. Other clinical outcomes databases are now being used by the case managers to help with the collection and measurement of outcomes information. Hoshin planning will continue to be a framework for defining and setting the targets for clinical and financial improvements throughout the organization. Case managers will continue to be involved in many of these system-wide initiatives. In the words of Galileo, 1579, "You need to count what's countable, measure what's measurable, and what's not measurable, make measurable." PMID:9601411

  12. The Effects of Violations of Data Set Assumptions When Using the Oneway, Fixed-Effects Analysis of Variance and the One Concomitant Analysis of Covariance.

    ERIC Educational Resources Information Center

    Johnson, Colleen Cook; Rakow, Ernest A.

    1994-01-01

    This research is an empirical study, through Monte Carlo simulation, of the effects of violations of the assumptions for the oneway fixed-effects analysis of variance (ANOVA) and analysis of covariance (ANCOVA). Research reaffirms findings of previous studies that suggest that ANOVA and ANCOVA be avoided when group sizes are not equal. (SLD)

  13. View-angle-dependent AIRS Cloudiness and Radiance Variance: Analysis and Interpretation

    NASA Technical Reports Server (NTRS)

    Gong, Jie; Wu, Dong L.

    2013-01-01

    Upper tropospheric clouds play an important role in the global energy budget and hydrological cycle. Significant view-angle asymmetry has been observed in upper-level tropical clouds derived from eight years of Atmospheric Infrared Sounder (AIRS) 15 um radiances. Here, we find that the asymmetry also exists in the extra-tropics. It is larger during day than during night, more prominent near elevated terrain, and closely associated with deep convection and wind shear. The cloud radiance variance, a proxy for cloud inhomogeneity, shows asymmetry characteristics consistent with those in the AIRS cloudiness. The leading causes of the view-dependent cloudiness asymmetry are the local time difference and small-scale organized cloud structures. The local time difference (1-1.5 hr) of upper-level (UL) clouds between the two AIRS outermost views can create part of the observed asymmetry. On the other hand, small-scale tilted and banded structures of the UL clouds can induce about half of the observed view-angle dependent differences in the AIRS cloud radiances and their variances. This estimate is inferred from an analogous study using Microwave Humidity Sounder (MHS) radiances observed during the period of time when there were simultaneous measurements at two different view-angles from the NOAA-18 and -19 satellites. The existence of tilted cloud structures and asymmetric 15 um and 6.7 um cloud radiances implies that cloud statistics would be view-angle dependent, and should be taken into account in radiative transfer calculations, measurement uncertainty evaluations and cloud climatology investigations. In addition, the momentum forcing in the upper troposphere from tilted clouds is also likely asymmetric, which can affect atmospheric circulation anisotropically.

  14. Radial forcing and Edgar Allan Poe's lengthening pendulum

    NASA Astrophysics Data System (ADS)

    McMillan, Matthew; Blasing, David; Whitney, Heather M.

    2013-09-01

    Inspired by Edgar Allan Poe's The Pit and the Pendulum, we investigate a radially driven, lengthening pendulum. We first show that increasing the length of an undriven pendulum at a uniform rate does not amplify the oscillations in a manner consistent with the behavior of the scythe in Poe's story. We discuss parametric amplification and the transfer of energy (through the parameter of the pendulum's length) to the oscillating part of the system. In this manner, radial driving can easily and intuitively be understood, and the fundamental concept can be applied in many other areas. We propose and show by a numerical model that appropriately timed radial forcing can increase the oscillation amplitude in a manner consistent with Poe's story. Our analysis contributes a computational exploration of the complex harmonic motion that can result from radially driving a pendulum and sheds light on a mechanism by which oscillations can be amplified parametrically. These insights should prove especially valuable in the undergraduate physics classroom, where investigations into pendulums and oscillations are commonplace.

  15. Directional Variance Adjustment: Bias Reduction in Covariance Matrices Based on Factor Analysis with an Application to Portfolio Optimization

    PubMed Central

    Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W.; Müller, Klaus-Robert; Lemm, Steven

    2013-01-01

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation. PMID:23844016

  16. Directional variance adjustment: bias reduction in covariance matrices based on factor analysis with an application to portfolio optimization.

    PubMed

    Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W; Müller, Klaus-Robert; Lemm, Steven

    2013-01-01

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation. PMID:23844016

  17. Obituary: Allan R. Sandage (1926-2010)

    NASA Astrophysics Data System (ADS)

    Devorkin, David

    2011-12-01

    Allan Rex Sandage died of pancreatic cancer at his home in San Gabriel, California, in the shadow of Mount Wilson, on November 13, 2010. Born in Iowa City, Iowa, on June 18, 1926, he was 84 years old at his death, leaving his wife, former astronomer Mary Connelly Sandage, and two sons, David and John. He also left a legacy to the world of astronomical knowledge that has long been universally admired and appreciated, making his name synonymous with late 20th-century observational cosmology. The only child of Charles Harold Sandage, a professor of advertising who helped establish that academic specialty after obtaining a PhD in business administration, and Dorothy Briggs Sandage, whose father was president of Graceland College in Iowa, Allan Sandage grew up in a thoroughly intellectual, university-oriented atmosphere, but also a peripatetic one, taking him to Philadelphia and later to Illinois as his father rose in his career. During his 2 years in Philadelphia, at about age eleven, Allan developed a curiosity about astronomy stimulated by a friend's interest. His father bought him a telescope and he used it to systematically record sunspots, and later attempted to make a larger 6-inch reflector, a project left uncompleted. As a teenager Allan read widely, especially astronomy books of all kinds, recalling in particular The Glass Giant of Palomar as well as popular works by Eddington and Hubble (The Realm of the Nebulae) in the early 1940s. Although his family was Mormon, of the Reorganized Church, he was not practicing, though he later sporadically attended a Methodist church in Oxford, Iowa during his college years. Sandage knew by his high school years that he would engage in some form of intellectual life related to astronomy. He particularly recalls an influential science teacher at Miami University in Oxford, Ohio named Ray Edwards, who inspired him to think critically and "not settle for any hand-waving of any kind." [Interview of Allan Rex Sandage by Spencer

  18. Comments on the statistical analysis of excess variance in the COBE differential microwave radiometer maps

    NASA Technical Reports Server (NTRS)

    Wright, E. L.; Smoot, G. F.; Kogut, A.; Hinshaw, G.; Tenorio, L.; Lineweaver, C.; Bennett, C. L.; Lubin, P. M.

    1994-01-01

    Cosmic anisotropy produces an excess variance σ²_sky in the ΔT maps produced by the Differential Microwave Radiometer (DMR) on the Cosmic Background Explorer (COBE) that is over and above the instrument noise. After smoothing to an effective resolution of 10 deg, this excess, σ_sky(10°), provides an estimate for the amplitude of the primordial density perturbation power spectrum with a cosmic uncertainty of only 12%. We employ detailed Monte Carlo techniques to express the amplitude derived from this statistic in terms of the universal root mean square (rms) quadrupole amplitude, ⟨Q²⟩^(1/2)_rms. The effects of monopole and dipole subtraction and the non-Gaussian shape of the DMR beam cause the derived ⟨Q²⟩^(1/2)_rms to be 5%-10% larger than would be derived using simplified analytic approximations. We also investigate the properties of two other map statistics: the actual quadrupole and the Boughn-Cottingham statistic. Both the σ_sky(10°) statistic and the Boughn-Cottingham statistic are consistent with the ⟨Q²⟩^(1/2)_rms = 17 ± 5 μK reported by Smoot et al. (1992) and Wright et al. (1992).

  19. Variance Component Quantitative Trait Locus Analysis for Body Weight Traits in Purebred Korean Native Chicken

    PubMed Central

    Cahyadi, Muhammad; Park, Hee-Bok; Seo, Dong-Won; Jin, Shil; Choi, Nuri; Heo, Kang-Nyeong; Kang, Bo-Seok; Jo, Cheorun; Lee, Jun-Heon

    2016-01-01

    Quantitative trait locus (QTL) is a particular region of the genome containing one or more genes associated with economically important quantitative traits. This study was conducted to identify QTL regions for body weight and growth traits in purebred Korean native chicken (KNC). F1 samples (n = 595) were genotyped using 127 microsatellite markers and 8 single nucleotide polymorphisms that covered 2,616.1 centimorgans (cM) of map length for 26 autosomal linkage groups. Body weight traits were measured every 2 weeks from hatch to 20 weeks of age. Weight of half carcass was also collected together with growth rate. A multipoint variance component linkage approach was used to identify QTLs for the body weight traits. Two significant QTLs for growth were identified on chicken chromosome 3 (GGA3) for growth 16 to 18 weeks (logarithm of the odds [LOD] = 3.24, nominal p-value = 0.0001) and GGA4 for growth 6 to 8 weeks (LOD = 2.88, nominal p-value = 0.0003). Additionally, one significant QTL and three suggestive QTLs were detected for body weight traits in KNC; a significant QTL for body weight at 4 weeks (LOD = 2.52, nominal p-value = 0.0007) and a suggestive QTL for 8 weeks (LOD = 1.96, nominal p-value = 0.0027) were detected on GGA4; QTLs were also detected for two different body weight traits: body weight at 16 weeks on GGA3 and body weight at 18 weeks on GGA19. Additionally, two suggestive QTLs for carcass weight were detected at 0 and 70 cM on GGA19. In conclusion, the current study identified several significant and suggestive QTLs that affect growth related traits in a unique resource pedigree in purebred KNC. This information will contribute to improving the body weight traits in native chicken breeds, especially for the Asian native chicken breeds. PMID:26732327

  20. SU-E-T-41: Analysis of GI Dose Variability Due to Intrafraction Setup Variance

    SciTech Connect

    Phillips, J; Wolfgang, J

    2014-06-01

    Purpose: Proton SBRT (stereotactic body radiation therapy) can be an effective modality for treatment of gastrointestinal tumors, but is limited in practice due to sensitivity with respect to variation in the RPL (radiological path length). Small, intrafractional shifts in patient anatomy can lead to significant changes in the dose distribution. This study describes a tool designed to visualize uncertainties in radiological depth in patient CTs and aid in treatment plan design. Methods: This project utilizes the Shadie toolkit, a GPU-based framework that allows real-time interactive calculations for volume visualization. Current SBRT simulation practice consists of a serial CT acquisition for the assessment of inter- and intra-fractional motion utilizing patient-specific immobilization systems. Shadie was used to visualize potential uncertainties, including RPL variance and changes in gastric content. Input for this procedure consisted of two patient CT sets, contours of the desired organ, and a pre-calculated dose. In this study, we performed rigid registrations between sets of 4DCTs obtained from a patient with varying setup conditions. Custom visualizations are written by the user in Shadie, permitting one to create color-coded displays derived from a calculation along each ray. Results: Serial CT data acquired on subsequent days were analyzed for variation in RPL and gastric content. Specific shaders were created to visualize clinically relevant features, including the RPL integrated up to organs of interest. Using pre-calculated dose distributions and utilizing segmentation masks as additional input allowed us to further refine the display output from Shadie and create tools suitable for clinical usage. Conclusion: We have demonstrated a method to visualize potential uncertainty for intrafractional proton radiotherapy. We believe this software could prove a useful tool to guide those looking to design treatment plans least sensitive to intrafractional setup variation.

  1. Global sensitivity analysis of a SWAT model: comparison of the variance-based and moment-independent approaches

    NASA Astrophysics Data System (ADS)

    Khorashadi Zadeh, Farkhondeh; Sarrazin, Fanny; Nossent, Jiri; Pianosi, Francesca; van Griensven, Ann; Wagener, Thorsten; Bauwens, Willy

    2015-04-01

    Uncertainty in parameters is a well-known source of model output uncertainty which undermines model reliability and restricts model application. A large number of parameters, in addition to the lack of data, limits calibration efficiency and also leads to higher parameter uncertainty. Global Sensitivity Analysis (GSA) is a set of mathematical techniques that provides quantitative information about the contribution of different sources of uncertainties (e.g. model parameters) to the model output uncertainty. Therefore, identifying influential and non-influential parameters using GSA can improve model calibration efficiency and consequently reduce model uncertainty. In this paper, moment-independent density-based GSA methods that consider the entire model output distribution - i.e. Probability Density Function (PDF) or Cumulative Distribution Function (CDF) - are compared with the widely-used variance-based method and their differences are discussed. Moreover, the effect of model output definition on parameter ranking results is investigated using Nash-Sutcliffe Efficiency (NSE) and model bias as example outputs. To this end, 26 flow parameters of a SWAT model of the River Zenne (Belgium) are analysed. In order to assess the robustness of the sensitivity indices, bootstrapping is applied and 95% confidence intervals are estimated. The results show that, although the variance-based method is easy to implement and interpret, it provides wider confidence intervals, especially for non-influential parameters, compared to the density-based methods. Therefore, density-based methods may be a useful complement to variance-based methods for identifying non-influential parameters.
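
    The variance-based method compared here attributes to each parameter the share of output variance it explains. A minimal sketch of first-order (Sobol') indices estimated by pick-freeze Monte Carlo, using the standard Ishigami test function as a hypothetical stand-in for the SWAT model (the function, sample size and input ranges are illustrative assumptions, not from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Ishigami test function, a hypothetical stand-in for an expensive simulator.
    return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2
            + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

n, d = 100_000, 3
a = rng.uniform(-np.pi, np.pi, (n, d))   # two independent input samples
b = rng.uniform(-np.pi, np.pi, (n, d))
ya, yb = model(a), model(b)
var_y = ya.var()

first_order = []
for i in range(d):
    ab = b.copy()
    ab[:, i] = a[:, i]   # pick-freeze: column i from a, the rest from b
    first_order.append(np.mean(ya * (model(ab) - yb)) / var_y)

print([round(s, 3) for s in first_order])
```

    With enough samples the estimates approach the known analytic first-order indices for this function (about 0.31, 0.44 and 0).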

  2. Noise and drift analysis of non-equally spaced timing data

    NASA Technical Reports Server (NTRS)

    Vernotte, F.; Zalamansky, G.; Lantz, E.

    1994-01-01

    Generally, it is possible to obtain equally spaced timing data from oscillators. The measurement of the drifts and noises affecting oscillators is then performed by using a variance (Allan variance, modified Allan variance, or time variance) or a system of several variances (multivariance method). However, in some cases, several samples, or even several sets of samples, are missing. In the case of millisecond pulsar timing data, for instance, observations are quite irregularly spaced in time. Nevertheless, since some observations are very close together (one minute) and since the timing data sequence is very long (more than ten years), information on both short-term and long-term stability is available. Unfortunately, a direct variance analysis is not possible without interpolating missing data. Different interpolation algorithms (linear interpolation, cubic spline) are used to calculate variances in order to verify that they neither lose information nor add erroneous information. A comparison of the results of the different algorithms is given. Finally, the multivariance method was adapted to the measurement sequence of the millisecond pulsar timing data: the responses of each variance of the system are calculated for each type of noise and drift, with the same missing samples as in the pulsar timing sequence. An estimation of precision, dynamics, and separability of this method is given.
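
    The interpolate-then-analyse step described above can be sketched as follows, assuming synthetic white-phase-noise residuals at irregular epochs in place of the actual pulsar timing data; the estimator is the standard overlapping Allan deviation, not the authors' multivariance system:

```python
import numpy as np

def allan_deviation(phase, tau0, m):
    """Overlapping Allan deviation at tau = m*tau0 from equally spaced phase data."""
    d2 = phase[2 * m:] - 2.0 * phase[m:-m] + phase[:-2 * m]   # second differences
    return np.sqrt(np.mean(d2 ** 2) / (2.0 * (m * tau0) ** 2))

rng = np.random.default_rng(1)
# Hypothetical irregularly spaced timing residuals (white phase noise):
# 800 epochs scattered over ~1000 s.
t_irr = np.sort(rng.uniform(0.0, 1000.0, 800))
x_irr = 1e-9 * rng.standard_normal(800)

# Resample onto a uniform grid by linear interpolation before the variance analysis.
tau0 = 1.0
t_grid = np.arange(t_irr[0], t_irr[-1], tau0)
x_grid = np.interp(t_grid, t_irr, x_irr)

for m in (1, 4, 16):
    print(f"tau = {m * tau0:4.0f} s   ADEV ~ {allan_deviation(x_grid, tau0, m):.2e}")
```

    As the abstract cautions, the interpolation itself can bias the result, which is why the authors cross-check linear and cubic-spline interpolation against each other.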

  3. Regression Computer Programs for Setwise Regression and Three Related Analysis of Variance Techniques.

    ERIC Educational Resources Information Center

    Williams, John D.; Lindem, Alfred C.

    Four computer programs using the general purpose multiple linear regression program have been developed. Setwise regression analysis is a stepwise procedure for sets of variables; there will be as many steps as there are sets. Covarmlt allows a solution to the analysis of covariance design with multiple covariates. A third program has three…

  4. Allan Houser (Haozous) Santa Fe Compound and Sculpture Garden.

    ERIC Educational Resources Information Center

    Herberholz, Barbara

    1999-01-01

    Summarizes the life of artist Allan Houser, focusing on his childhood and family life, the development of his artistic endeavors, and his career as an artist. Comments on the Allan Houser Compound, a 104-acre compound and sculpture garden that houses over 30 of his sculptures. (CMK)

  5. Biotechnology Symposium - In Memoriam, the Late Dr. Allan Zipf

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A one-day biotechnology symposium was held at Alabama A&M University (AAMU), Normal, AL on June 4, 2004 in memory of the late Dr. Allan Zipf (Sept 1953-Jan 2004). Dr. Zipf was a Research Associate Professor at the Department of Plant and Soil Sciences, AAMU, who collaborated extensively with ARS/MS...

  6. The Variance Analysis for Seismic Attributes in Oil-and-Gas Detection in the Middle of the Tarim Basin

    NASA Astrophysics Data System (ADS)

    Yu, C.; Jingyi, F.

    2011-12-01

    Analysis of the seismic attributes of reflection data is important for oil-and-gas detection. It has become a new technique for identifying oil and/or gas reservoirs from the information carried by seismic waves. Local changes and variances can be detected in the seismic waves at and around areas bearing oil and/or gas, which can be used to locate oil and gas reservoirs and provide a basis for the geophysical evaluation of hydrocarbon traps. The method has been tested in the middle of the Tarim Basin, and the analysis of seismic attributes and their relationship with oil and gas reservoirs appears to cast new light on detecting hydrocarbon traps. Before exploring and drilling for oil and gas, data access and data mining, especially of seismic attribute data, are recommended for hydrocarbon detection and trap evaluation, in order to reduce risk and improve efficiency in oil-and-gas exploration.

  7. Analysis of the improvement in sky coverage for multiconjugate adaptive optics systems obtained using minimum variance split tomography.

    PubMed

    Wang, Lianqi; Gilles, Luc; Ellerbroek, Brent

    2011-06-20

    The scientific utility of laser-guide-star-based multiconjugate adaptive optics systems depends upon high sky coverage. Previously we reported a high-fidelity sky coverage analysis of an ad hoc split tomography control algorithm and a postprocessing simulation technique. In this paper, we present the performance of a newer minimum variance split tomography algorithm, and we show that it brings a median improvement at zenith of 21 nm rms optical path difference error over the ad hoc split tomography control algorithm for our system, the Narrow Field Infrared Adaptive Optics System for the Thirty Meter Telescope. In order to make the comparison, we also validated our previously developed sky coverage postprocessing software using an integrated simulation of both high- (laser guide star) and low-order (natural guide star) loops. A new term in the noise model is also identified that improves the performance of both algorithms by more properly regularizing the reconstructor. PMID:21691367

  8. Variance components, heritability and correlation analysis of anther and ovary size during the floral development of bread wheat.

    PubMed

    Guo, Zifeng; Chen, Dijun; Schnurbusch, Thorsten

    2015-06-01

    Anther and ovary development play an important role in grain setting, a crucial factor determining wheat (Triticum aestivum L.) yield. One aim of this study was to determine the heritability of anther and ovary size at different positions within a spikelet at seven floral developmental stages and conduct a variance components analysis. Relationships between anther and ovary size and other traits were also assessed. The thirty central European winter wheat genotypes used in this study were based on reduced height (Rht) and photoperiod sensitivity (Ppd) genes with variable genetic backgrounds. Identical experimental designs were conducted in a greenhouse and field simultaneously. Heritability of anther and ovary size indicated strong genetic control. Variance components analysis revealed that anther and ovary sizes of floret 3 (i.e. F3, the third floret from the spikelet base) and floret 4 (F4) were more sensitive to the environment compared with those in floret 1 (F1). Good correlations were found between spike dry weight and anther and ovary size in both greenhouse and field, suggesting that anther and ovary size are good predictors of each other, as well as spike dry weight in both conditions. Relationships between spike dry weight and anther and ovary size at F3/4 positions were stronger than at F1, suggesting that F3/4 anther and ovary size are better predictors of spike dry weight. Generally, ovary size showed a closer relationship with spike dry weight than anther size, suggesting that ovary size is a more reliable predictor of spike dry weight. PMID:25821074

  9. Analysis of NDVI variance across landscapes and seasons allows assessment of degradation and resilience to shocks in Mediterranean dry ecosystems

    NASA Astrophysics Data System (ADS)

    liniger, hanspeter; jucker riva, matteo; schwilch, gudrun

    2016-04-01

    Mapping and assessment of desertification is a primary basis for effective management of dryland ecosystems. Vegetation cover and biomass density are key elements for the ecological functioning of dry ecosystems, and at the same time an effective indicator of desertification, land degradation and sustainable land management. The Normalized Difference Vegetation Index (NDVI) is widely used to estimate vegetation density and cover. However, the reflectance of vegetation, and thus the NDVI values, are influenced by several factors such as type of canopy, type of land use and seasonality. For example, low NDVI values could be associated with a degraded forest, with a healthy forest under dry climatic conditions, with an area used as pasture, or with an area managed to reduce the fuel load. We propose a simple method to analyse the variance of the NDVI signal considering the main factors that shape the vegetation. This variance analysis enables us to detect and categorize degradation much more precisely than simple NDVI analysis. The methodology comprises identifying homogeneous landscape areas in terms of aspect, slope, land use and disturbance regime (if relevant). Secondly, the NDVI is calculated from Landsat multispectral images and the vegetation potential for each landscape is determined from the 90th percentile (the highest 10% of values). Thirdly, the difference between the NDVI value of each pixel and the potential is used to establish degradation categories. Through this methodology, we are able to identify realistic objectives for restoration, allowing a targeted choice of management options for degraded areas. For example, afforestation would only be done in areas that show potential for forest growth. Moreover, we can measure the effectiveness of management practices in terms of vegetation growth across different landscapes and conditions.
Additionally, the same methodology can be applied to a time series of multispectral images, allowing detection and quantification of
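
    The three-step methodology (homogeneous landscape units, per-unit potential from the highest 10% of NDVI values, deficit-based categories) can be sketched as follows, with synthetic NDVI values and illustrative category thresholds standing in for the Landsat data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical inputs standing in for Landsat-derived rasters: an NDVI value per
# pixel and a label assigning each pixel to a homogeneous landscape unit
# (same aspect, slope, land use), both flattened to 1-D for simplicity.
ndvi = np.clip(rng.normal(0.5, 0.15, 10_000), -1.0, 1.0)
landscape = rng.integers(0, 4, 10_000)

deficit = np.empty_like(ndvi)
for unit in np.unique(landscape):
    mask = landscape == unit
    potential = np.percentile(ndvi[mask], 90)   # highest-10% value = unit potential
    deficit[mask] = potential - ndvi[mask]      # distance of each pixel from potential

# Degradation categories from the NDVI deficit (thresholds are illustrative only):
# 0 = near potential, 1 = light, 2 = moderate, 3 = severe degradation.
categories = np.digitize(deficit, [0.1, 0.25, 0.4])
print(np.bincount(categories, minlength=4))
```

    Computing the deficit per landscape unit, rather than globally, is what keeps a naturally sparse unit (e.g. dry pasture) from being misread as degraded forest.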

  10. WE-D-BRE-07: Variance-Based Sensitivity Analysis to Quantify the Impact of Biological Uncertainties in Particle Therapy

    SciTech Connect

    Kamp, F.; Brueningk, S.C.; Wilkens, J.J.

    2014-06-15

    Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation and on the dose per fraction. The needed biological parameters as well as their dependency on ion species and ion energy typically are subject to large (relative) uncertainties of up to 20–40% or even more. Therefore it is necessary to estimate the resulting uncertainties in e.g. RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10^4 to 10^6 times, each run with a different set of input parameters, randomly varied according to their assigned distribution. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, only influential part) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps, showing dependencies of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations which result in extreme deviations of the result and the input parameter for which an uncertainty reduction is the most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact. The method is very flexible, model independent, and enables a broad assessment

  11. Mean-variance portfolio analysis data for optimizing community-based photovoltaic investment.

    PubMed

    Shakouri, Mahmoud; Lee, Hyun Woo

    2016-03-01

    The amount of electricity generated by Photovoltaic (PV) systems is affected by factors such as shading, building orientation and roof slope. To increase electricity generation and reduce volatility in the generation of PV systems, a portfolio of PV systems can be constructed that takes advantage of the potential synergy among neighboring buildings. This paper contains data supporting the research article entitled: PACPIM: new decision-support model of optimized portfolio analysis for community-based photovoltaic investment [1]. We present a set of data relating to physical properties of 24 houses in Oregon, USA, along with simulated hourly electricity data for the installed PV systems. The developed Matlab code to construct optimized portfolios is also provided in . The application of these files can be generalized to a variety of communities interested in investing in PV systems. PMID:26937458
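
    The portfolio idea behind such data can be illustrated with a global minimum-variance allocation, one standard mean-variance construction; the generation series below is synthetic and the closed-form weights are an illustrative choice, not necessarily the PACPIM model itself:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical hourly generation (kWh) for five rooftop PV systems over one year;
# the gamma draws and per-system scale factors stand in for the simulated data set.
gen = rng.gamma(2.0, 1.0, size=(8760, 5)) * np.array([1.0, 0.9, 1.1, 0.8, 1.2])

cov = np.cov(gen, rowvar=False)   # covariance of the five generation series

# Global minimum-variance portfolio: w = C^-1 1 / (1^T C^-1 1), the weights that
# minimize portfolio variance subject to summing to one.
ones = np.ones(cov.shape[0])
w = np.linalg.solve(cov, ones)
w /= w.sum()

equal = np.full(5, 0.2)
print("weights:", np.round(w, 3))
print("min-variance std:", np.sqrt(w @ cov @ w))
print("equal-weight std:", np.sqrt(equal @ cov @ equal))
```

    By construction the minimum-variance weights can never produce more volatile combined generation than a naive equal split across the systems.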

  12. Exploring Omics data from designed experiments using analysis of variance multiblock Orthogonal Partial Least Squares.

    PubMed

    Boccard, Julien; Rudaz, Serge

    2016-05-12

    Many experimental factors may have an impact on chemical or biological systems. A thorough investigation of the potential effects and interactions between the factors is made possible by rationally planning the trials using systematic procedures, i.e. design of experiments. However, assessing factors' influences remains often a challenging task when dealing with hundreds to thousands of correlated variables, whereas only a limited number of samples is available. In that context, most of the existing strategies involve the ANOVA-based partitioning of sources of variation and the separate analysis of ANOVA submatrices using multivariate methods, to account for both the intrinsic characteristics of the data and the study design. However, these approaches lack the ability to summarise the data using a single model and remain somewhat limited for detecting and interpreting subtle perturbations hidden in complex Omics datasets. In the present work, a supervised multiblock algorithm based on the Orthogonal Partial Least Squares (OPLS) framework, is proposed for the joint analysis of ANOVA submatrices. This strategy has several advantages: (i) the evaluation of a unique multiblock model accounting for all sources of variation; (ii) the computation of a robust estimator (goodness of fit) for assessing the ANOVA decomposition reliability; (iii) the investigation of an effect-to-residuals ratio to quickly evaluate the relative importance of each effect and (iv) an easy interpretation of the model with appropriate outputs. Case studies from metabolomics and transcriptomics, highlighting the ability of the method to handle Omics data obtained from fixed-effects full factorial designs, are proposed for illustration purposes. Signal variations are easily related to main effects or interaction terms, while relevant biochemical information can be derived from the models. PMID:27114219

  13. A Variance Decomposition Approach to Uncertainty Quantification and Sensitivity Analysis of the J&E Model

    PubMed Central

    Moradi, Ali; Tootkaboni, Mazdak; Pennell, Kelly G.

    2015-01-01

    The Johnson and Ettinger (J&E) model is the most widely used vapor intrusion model in the United States. It is routinely used as part of hazardous waste site assessments to evaluate the potential for vapor intrusion exposure risks. This study incorporates mathematical approaches that allow sensitivity and uncertainty of the J&E model to be evaluated. In addition to performing Monte Carlo simulations to examine the uncertainty in the J&E model output, a powerful global sensitivity analysis technique based on Sobol indices is used to evaluate J&E model sensitivity to variations in the input parameters. The results suggest that the J&E model is most sensitive to the building air exchange rate, regardless of soil type and source depth. Building air exchange rate is not routinely measured during vapor intrusion investigations, but clearly improved estimates and/or measurements of the air exchange rate would lead to improved model predictions. It is also found that the J&E model is more sensitive to effective diffusivity than to effective permeability. Field measurements of effective diffusivity are not commonly collected during vapor intrusion investigations; however, consideration of this parameter warrants additional attention. Finally, the effects of input uncertainties on model predictions for different scenarios (e.g. sandy soil as compared to clayey soil, and “shallow” sources as compared to “deep” sources) are evaluated. Our results not only identify the range of variability to be expected depending on the scenario at hand, but also mark the important cases where special care is needed when estimating the input parameters to which the J&E model is most sensitive. PMID:25947051

  14. A variance decomposition approach to uncertainty quantification and sensitivity analysis of the Johnson and Ettinger model.

    PubMed

    Moradi, Ali; Tootkaboni, Mazdak; Pennell, Kelly G

    2015-02-01

    The Johnson and Ettinger (J&E) model is the most widely used vapor intrusion model in the United States. It is routinely used as part of hazardous waste site assessments to evaluate the potential for vapor intrusion exposure risks. This study incorporates mathematical approaches that allow sensitivity and uncertainty of the J&E model to be evaluated. In addition to performing Monte Carlo simulations to examine the uncertainty in the J&E model output, a powerful global sensitivity analysis technique based on Sobol indices is used to evaluate J&E model sensitivity to variations in the input parameters. The results suggest that the J&E model is most sensitive to the building air exchange rate, regardless of soil type and source depth. Building air exchange rate is not routinely measured during vapor intrusion investigations, but clearly improved estimates and/or measurements of the air exchange rate would lead to improved model predictions. It is also found that the J&E model is more sensitive to effective diffusivity than to effective permeability. Field measurements of effective diffusivity are not commonly collected during vapor intrusion investigations; however, consideration of this parameter warrants additional attention. Finally, the effects of input uncertainties on model predictions for different scenarios (e.g., sandy soil as compared to clayey soil, and "shallow" sources as compared to "deep" sources) are evaluated. Our results not only identify the range of variability to be expected depending on the scenario at hand, but also mark the important cases where special care is needed when estimating the input parameters to which the J&E model is most sensitive. PMID:25947051

  15. Simulation Study Using a New Type of Sample Variance

    NASA Technical Reports Server (NTRS)

    Howe, D. A.; Lainson, K. J.

    1996-01-01

    We evaluate with simulated data a new type of sample variance for the characterization of frequency stability. The new statistic (referred to as TOTALVAR and its square root TOTALDEV) is a better predictor of long-term frequency variations than the present sample Allan deviation. The statistical model uses the assumption that a time series of phase or frequency differences is wrapped (periodic) with overall frequency difference removed. We find that the variability at long averaging times is reduced considerably for the five models of power-law noise commonly encountered with frequency standards and oscillators.
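
    The wrapped-data idea can be sketched as follows; the estimator is a standard overlapping Allan deviation, and the periodic extension below is a simplified illustration of the TOTALVAR construction described in the abstract, not its exact definition:

```python
import numpy as np

def adev_freq(y, m):
    """Overlapping Allan deviation at tau = m*tau0 from fractional-frequency data."""
    ybar = np.convolve(y, np.ones(m) / m, mode="valid")   # moving m-sample averages
    diffs = ybar[m:] - ybar[:-m]                          # adjacent tau-averages
    return np.sqrt(0.5 * np.mean(diffs ** 2))

rng = np.random.default_rng(4)
y = rng.standard_normal(4096)   # white FM noise: ADEV should fall as tau**-0.5

# Simplified sketch of the wrapped (periodic) treatment: remove the overall
# frequency offset, then extend the series circularly so that long-tau averages
# are not starved of samples near the ends of the record.
y0 = y - y.mean()
y_total = np.concatenate([y0, y0])

for m in (1, 16, 256):
    print(f"m={m:4d}  plain={adev_freq(y, m):.3f}  wrapped={adev_freq(y_total, m):.3f}")
```

    The payoff of the extension is at long averaging times, where the plain estimator has very few difference pairs and hence high variability.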

  16. Carbon-14 ages of Allan Hills meteorites and ice

    NASA Technical Reports Server (NTRS)

    Fireman, E. L.; Norris, T.

    1982-01-01

    Allan Hills is a blue ice region of approximately 100 sq km in Antarctica where many meteorites have been found exposed on the ice. The terrestrial ages of the Allan Hills meteorites, which are obtained from their cosmogenic nuclide abundances, are important time markers that can reflect the history of ice movement to the site. The principal purpose of studying the terrestrial ages of ALHA meteorites is to locate samples of ancient ice and analyze their trapped gas contents. Attention is given to the C-14 and Ar-39 terrestrial ages of ALHA meteorites, and to C-14 ages and trapped gas compositions in ice samples. On the basis of the obtained C-14 terrestrial ages, and the Cl-36 and Al-26 results reported by others, it is concluded that most ALHA meteorites fell between 20,000 and 200,000 years ago.

  17. Exposure and terrestrial ages of four Allan Hills Antarctic meteorites

    NASA Technical Reports Server (NTRS)

    Kirsten, T.; Ries, D.; Fireman, E. L.

    1978-01-01

    Terrestrial ages of meteorites are based on the amount of cosmic-ray-produced radioactivity in the sample and the number of observed falls that have similar cosmic-ray exposure histories. The cosmic-ray exposures are obtained from the stable noble gas isotopes. Noble gas isotopes are measured by high-sensitivity mass spectrometry. In the present study, the noble gas contents were measured in four Allan Hill meteorites (No. 5, No. 6, No. 7, and No. 8), whose C-14, Al-26, and Mn-53 radioactivities are known. These meteorites are of particular interest because they belong to a large assemblage of distinct meteorites that lie exposed on a small (110 sq km) area of ice near the Allan Hills.

  18. Meta-analysis of variance: an illustration comparing the effects of two dietary interventions on variability in weight.

    PubMed

    Senior, Alistair M; Gosby, Alison K; Lu, Jing; Simpson, Stephen J; Raubenheimer, David

    2016-01-01

    Meta-analysis, which drives evidence-based practice, typically focuses on the average response of subjects to a treatment. For instance in nutritional research the difference in average weight of participants on different diets is typically used to draw conclusions about the relative efficacy of interventions. As a result of their focus on the mean, meta-analyses largely overlook the effects of treatments on inter-subject variability. Recent tools from the study of biological evolution, where inter-individual variability is one of the key ingredients for evolution by natural selection, now allow us to study inter-subject variability using established meta-analytic models. Here we use meta-analysis to study how low carbohydrate (LC) ad libitum diets and calorie restricted diets affect variance in mass. We find that LC ad libitum diets may have a more variable outcome than diets that prescribe a reduced calorie intake. Our results suggest that whilst LC diets are effective in a large proportion of the population, for a subset of individuals, calorie restricted diets may be more effective. There is evidence that LC ad libitum diets rely on appetite suppression to drive weight loss. Extending this hypothesis, we suggest that between-individual variability in protein appetite may drive the trends that we report. A priori identification of an individual's target intake for protein may help define the most effective dietary intervention to prescribe for weight loss. PMID:27491895
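The "recent tools" referred to are effect sizes for variability, such as the log coefficient-of-variation ratio (lnCVR). The sketch below computes a basic lnCVR and an approximate sampling variance under simplifying assumptions stated in the comments (no small-sample bias correction, no mean-SD correlation term); the example numbers are hypothetical and are not taken from this meta-analysis.

```python
import math

def ln_cvr(mean_e, sd_e, n_e, mean_c, sd_c, n_c):
    """Log coefficient-of-variation ratio comparing an experimental
    group (e) to a control group (c), with an approximate sampling
    variance. Simplifications: small-sample correction terms and the
    mean-SD correlation adjustment of published formulas are omitted."""
    effect = math.log(sd_e / mean_e) - math.log(sd_c / mean_c)
    var = (sd_e ** 2 / (n_e * mean_e ** 2) + 1.0 / (2 * (n_e - 1))
           + sd_c ** 2 / (n_c * mean_c ** 2) + 1.0 / (2 * (n_c - 1)))
    return effect, var

# Hypothetical arms: treatment weights are more variable relative to
# their mean than control weights, giving a positive lnCVR.
eff, v = ln_cvr(mean_e=80.0, sd_e=12.0, n_e=50,
                mean_c=82.0, sd_c=8.0, n_c=50)
```

A positive lnCVR indicates greater relative variability in the treatment arm, which is the kind of contrast the paper draws between LC ad libitum and calorie-restricted diets.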

  19. Variance-component analysis of obesity in type 2 diabetes confirms loci on chromosomes 1q and 11q.

    PubMed

    van Tilburg, Jonathan H O; Sandkuijl, Lodewijk A; Strengman, Eric; Pearson, Peter L; van Haeften, Timon W; Wijmenga, Cisca

    2003-11-01

    To study genetic loci influencing obesity in nuclear families with type 2 diabetes, we performed a genome-wide screen with 325 microsatellite markers that had an average spacing of 11 cM and a mean heterozygosity of approximately 75% covering all 22 autosomes. Genotype data were obtained from 562 individuals from 178 families from the Breda Study Cohort. These families were determined to have at least two members with type 2 diabetes. As a measure of obesity, the BMI of each diabetes patient was determined. The genotypes were analyzed using variance components (VCs) analysis implemented in GENEHUNTER 2 to determine quantitative trait loci influencing BMI. The VC analysis revealed two genomic regions showing VC logarithm of odds (LOD) scores ≥1.0 on chromosome 1 and chromosome 11. The regions of interest on both chromosomes were further investigated by fine-mapping with additional markers, resulting in a VC LOD score of 1.5 on chromosome 1q and a VC LOD of 2.4 on chromosome 11q. The locus on chromosome 1 has been implicated previously in diabetes. The locus on chromosome 11 has been implicated previously in diabetes and obesity. Our study to determine linkage for BMI confirms the presence of quantitative trait loci influencing obesity in subjects with type 2 diabetes on chromosomes 1q31-q42 and 11q14-q24. PMID:14627748

  20. Meta-analysis of variance: an illustration comparing the effects of two dietary interventions on variability in weight

    PubMed Central

    Senior, Alistair M.; Gosby, Alison K.; Lu, Jing; Simpson, Stephen J.; Raubenheimer, David

    2016-01-01

    Meta-analysis, which drives evidence-based practice, typically focuses on the average response of subjects to a treatment. For instance in nutritional research the difference in average weight of participants on different diets is typically used to draw conclusions about the relative efficacy of interventions. As a result of their focus on the mean, meta-analyses largely overlook the effects of treatments on inter-subject variability. Recent tools from the study of biological evolution, where inter-individual variability is one of the key ingredients for evolution by natural selection, now allow us to study inter-subject variability using established meta-analytic models. Here we use meta-analysis to study how low carbohydrate (LC) ad libitum diets and calorie restricted diets affect variance in mass. We find that LC ad libitum diets may have a more variable outcome than diets that prescribe a reduced calorie intake. Our results suggest that whilst LC diets are effective in a large proportion of the population, for a subset of individuals, calorie restricted diets may be more effective. There is evidence that LC ad libitum diets rely on appetite suppression to drive weight loss. Extending this hypothesis, we suggest that between-individual variability in protein appetite may drive the trends that we report. A priori identification of an individual’s target intake for protein may help define the most effective dietary intervention to prescribe for weight loss. PMID:27491895

  1. Nuclear Material Variance Calculation

    1995-01-01

    MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet that significantly reduces the effort required to make the variance and covariance calculations needed to determine the detection sensitivity of a materials accounting system to loss of special nuclear material (SNM). The user is required to enter information into one of four data tables, depending on the type of term in the materials balance (MB) equation. The four data tables correspond to input transfers, output transfers, and two types of inventory terms: one for nondestructive assay (NDA) measurements and one for measurements made by chemical analysis. Each data entry must contain an identification number and a short description, as well as values for the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements during an accounting period. The user must also specify the type of error model (additive or multiplicative) associated with each measurement, and possible correlations between transfer terms. Predefined spreadsheet macros are used to perform the variance and covariance calculations for each term based on the corresponding set of entries. MAVARIC has been used for sensitivity studies of chemical separation facilities, fuel processing and fabrication facilities, and gas centrifuge and laser isotope enrichment facilities.

  2. Biclustering with heterogeneous variance.

    PubMed

    Chen, Guanhua; Sullivan, Patrick F; Kosorok, Michael R

    2013-07-23

    In cancer research, as in all of medicine, it is important to classify patients into etiologically and therapeutically relevant subtypes to improve diagnosis and treatment. One way to do this is to use clustering methods to find subgroups of homogeneous individuals based on genetic profiles together with heuristic clinical analysis. A notable drawback of existing clustering methods is that they ignore the possibility that the variance of gene expression profile measurements can be heterogeneous across subgroups, and methods that do not consider heterogeneity of variance can lead to inaccurate subgroup prediction. Research has shown that hypervariability is a common feature among cancer subtypes. In this paper, we present a statistical approach that can capture both mean and variance structure in genetic data. We demonstrate the strength of our method in both synthetic data and in two cancer data sets. In particular, our method confirms the hypervariability of methylation level in cancer patients, and it detects clearer subgroup patterns in lung cancer data. PMID:23836637

  3. Variance associated with the use of relative velocity for force platform gait analysis in a heterogeneous population of clinically normal dogs.

    PubMed

    Volstad, Nicola; Nemke, Brett; Muir, Peter

    2016-01-01

    Factors that contribute to variance in ground reaction forces (GRFs) include dog morphology, velocity, and trial repetition. Narrow velocity ranges are recommended to minimize variance. In a heterogeneous population, it may be preferable to minimize data variance and efficiently perform force platform gait analysis by evaluation of each individual dog at its preferred velocity, such that dogs are studied at a similar relative velocity (V*). Data from 27 normal dogs were obtained including withers and shoulder height. Each dog was trotted across a force platform at its preferred velocity, with controlled acceleration (±0.5 m/s²). V* ranges were created for withers and shoulder height. Variance effects from 12 trotting velocity ranges and associated V* ranges were examined using repeated-measures analysis-of-covariance. Mean bodyweight was 24.4 ± 7.4 kg. Individual dog, velocity, and V* significantly influenced GRF (P <0.001). Trial number significantly influenced thoracic limb peak vertical force (PVF) (P <0.001). Limb effects were not significant. The magnitude of variance effects was greatest for the dog effect. Withers height V* was associated with small GRF variance. Narrow velocity ranges typically captured a smaller percentage of trials and were not consistently associated with lower variance. The withers height V* range of 0.6-1.05 captured the largest proportion of trials (95.9 ± 5.9%) with no significant effects on PVF and vertical impulse. The use of individual velocity ranges derived from a withers height V* range of 0.6-1.05 will account for population heterogeneity while minimizing exacerbation of lameness in clinical trials studying lame dogs by efficient capture of valid trials. PMID:26631945

  4. Variance associated with subject velocity and trial repetition during force platform gait analysis in a heterogeneous population of clinically normal dogs

    PubMed Central

    Hans, Eric C.; Zwarthoed, Berdien; Seliski, Joseph; Nemke, Brett; Muir, Peter

    2016-01-01

    Factors that contribute to variance in ground reaction forces (GRF) include: dog morphology, velocity, and trial repetition. Narrow velocity ranges are recommended to minimize variance. In a heterogeneous population of clinically normal dogs, we hypothesized that the dog subject effect would account for the majority of variance in peak vertical force (PVF) and vertical impulse (VI) at a trotting gait, and that narrow velocity ranges would be associated with less variance. Data from twenty normal dogs were obtained. Each dog was trotted across a force platform at its habitual velocity, with controlled acceleration (±0.5 m/s²). Variance effects from twelve trotting velocity ranges were examined using repeated-measures analysis-of-covariance. Significance was set at P<0.05. Mean dog body weight was 28.4 ± 7.4 kg. Individual dog and velocity significantly affected PVF and VI for thoracic and pelvic limbs (P<0.001). Trial number significantly affected thoracic limb PVF (P<0.001). Limb (left or right) significantly affected thoracic limb VI (P=0.02). The magnitude of variance effects from largest to smallest was dog, velocity, trial repetition, and limb. Velocity ranges of 1.5–2.0 m/s, 1.8–2.2 m/s, and 1.9–2.2 m/s were associated with low variance and no significant effects on thoracic or pelvic limb PVF and VI. A combination of these ranges, 1.5–2.2 m/s, captured a large percentage of trials per dog (84.2±21.4%) with no significant effects on thoracic or pelvic limb PVF or VI. We conclude wider velocity ranges facilitate capture of valid trials with little to no effect on GRF in normal trotting dogs. This concept is important for clinical trial design. PMID:25457264

  5. Electrocardiogram signal variance analysis in the diagnosis of coronary artery disease--a comparison with exercise stress test in an angiographically documented high prevalence population.

    PubMed

    Nowak, J; Hagerman, I; Ylén, M; Nyquist, O; Sylvén, C

    1993-09-01

    Variance electrocardiography (variance ECG) is a new resting procedure for detection of coronary artery disease (CAD). The method measures variability in the electrical expression of the depolarization phase induced by this disease. The time-domain analysis is performed on 220 cardiac cycles using high-fidelity ECG signals from 24 leads, and the phase-locked temporal electrical heterogeneity is expressed as a nondimensional CAD index (CAD-I) with the values of 0-150. This study compares the diagnostic efficiency of variance ECG and exercise stress test in a high prevalence population. A total of 199 symptomatic patients evaluated with coronary angiography was subjected to variance ECG and exercise test on a bicycle ergometer as a continuous ramp. The discriminant accuracy of the two methods was assessed employing the receiver operating characteristic curves constructed by successive consideration of several CAD-I cutpoint values and various threshold criteria based on ST-segment depression exclusively or in combination with exertional chest pain. Of these patients, 175 with CAD (≥50% luminal stenosis in one or more major epicardial arteries) presented a mean CAD-I of 88 ± 22, compared with 70 ± 21 in 24 nonaffected patients (p < 0.01). Variance ECG provided a stochastically significant discrimination (p < 0.01) which was matched by exercise test only when the chest pain variable was added to ST-segment depression as a discriminating criterion. Even then, the exercise test diagnosed single-vessel disease with a significantly lower sensitivity. At a cutpoint of CAD-I ≥70, compared with ST-segment depression ≥1 mm combined with exertional chest pain, the overall sensitivity of variance ECG was significantly higher (p < 0.01) than that of exercise test (79 vs. 48%). When combined, the two methods identified 93% of coronary angiography positive cases. Variance ECG is an efficient diagnostic method which compares favorably with exercise test for detection of CAD.

  6. Variance associated with subject velocity and trial repetition during force platform gait analysis in a heterogeneous population of clinically normal dogs.

    PubMed

    Hans, Eric C; Zwarthoed, Berdien; Seliski, Joseph; Nemke, Brett; Muir, Peter

    2014-12-01

    Factors that contribute to variance in ground reaction forces (GRF) include dog morphology, velocity, and trial repetition. Narrow velocity ranges are recommended to minimize variance. In a heterogeneous population of clinically normal dogs, it was hypothesized that the dog subject effect would account for the majority of variance in peak vertical force (PVF) and vertical impulse (VI) at a trotting gait, and that narrow velocity ranges would be associated with less variance. Data from 20 normal dogs were obtained. Each dog was trotted across a force platform at its habitual velocity, with controlled acceleration (±0.5 m/s²). Variance effects from 12 trotting velocity ranges were examined using repeated-measures analysis-of-covariance. Significance was set at P <0.05. Mean dog bodyweight was 28.4 ± 7.4 kg. Individual dog and velocity significantly affected PVF and VI for thoracic and pelvic limbs (P <0.001). Trial number significantly affected thoracic limb PVF (P <0.001). Limb (left or right) significantly affected thoracic limb VI (P = 0.02). The magnitude of variance effects from largest to smallest was dog, velocity, trial repetition, and limb. Velocity ranges of 1.5-2.0 m/s, 1.8-2.2 m/s, and 1.9-2.2 m/s were associated with low variance and no significant effects on thoracic or pelvic limb PVF and VI. A combination of these ranges, 1.5-2.2 m/s, captured a large percentage of trials per dog (84.2 ± 21.4%) with no significant effects on thoracic or pelvic limb PVF or VI. It was concluded that wider velocity ranges facilitate capture of valid trials with little to no effect on GRF in normal trotting dogs. This concept is important for clinical trial design. PMID:25457264

  7. Aspects of First Year Statistics Students' Reasoning When Performing Intuitive Analysis of Variance: Effects of Within- and Between-Group Variability

    ERIC Educational Resources Information Center

    Trumpower, David L.

    2015-01-01

    Making inferences about population differences based on samples of data, that is, performing intuitive analysis of variance (IANOVA), is common in everyday life. However, the intuitive reasoning of individuals when making such inferences (even following statistics instruction) often differs from the normative logic of formal statistics. The…

  8. On the measurement of frequency and of its sample variance with high-resolution counters

    SciTech Connect

    Rubiola, Enrico

    2005-05-15

    A frequency counter measures the input frequency ν averaged over a suitable time τ, versus the reference clock. High resolution is achieved by interpolating the clock signal. Further increased resolution is obtained by averaging multiple, highly overlapped frequency measurements. In the presence of additive white noise or white phase noise, the squared uncertainty improves from σ_ν² ∝ 1/τ² to σ_ν² ∝ 1/τ³. Surprisingly, when a file of contiguous data is fed into the formula of the two-sample (Allan) variance σ_y²(τ) = E{(1/2)(y_{k+1} − y_k)²} of the fractional frequency fluctuation y, the result is the modified Allan variance mod σ_y²(τ). But if a sufficient number of contiguous measures are averaged in order to get a longer τ and the data are fed into the same formula, the result is the (nonmodified) Allan variance. Of course, interpretation mistakes are around the corner if the counter's internal process is not well understood. The typical domain of interest is the short-term stability measurement of oscillators.
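The distinction the abstract draws can be made concrete with the textbook estimators. The sketch below, assuming evenly spaced phase data x with sampling interval tau0 (these are the standard formulas, not code from the paper), shows that the modified Allan variance applies the same second difference as the Allan variance but averages it over m adjacent phase samples before squaring, which is exactly the internal averaging a high-resolution counter performs.

```python
def avar(x, m, tau0=1.0):
    """Overlapping Allan variance at tau = m*tau0 from phase data x."""
    tau = m * tau0
    n = len(x)
    d2 = [x[i + 2 * m] - 2 * x[i + m] + x[i] for i in range(n - 2 * m)]
    return sum(d ** 2 for d in d2) / (2 * tau ** 2 * len(d2))

def mvar(x, m, tau0=1.0):
    """Modified Allan variance: the same second difference, averaged
    over m adjacent phase samples before squaring."""
    tau = m * tau0
    n = len(x)
    terms = []
    for j in range(n - 3 * m + 1):
        s = sum(x[i + 2 * m] - 2 * x[i + m] + x[i] for i in range(j, j + m))
        terms.append(s ** 2)
    return sum(terms) / (2 * tau ** 2 * m ** 2 * len(terms))
```

At m = 1 the inner average is over a single sample, so the two estimators coincide; they diverge for m > 1, which is the trap the abstract warns about when counter data are fed blindly into the two-sample formula.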

  9. Efficient Markov Chain Monte Carlo Implementation of Bayesian Analysis of Additive and Dominance Genetic Variances in Noninbred Pedigrees

    PubMed Central

    Waldmann, Patrik; Hallander, Jon; Hoti, Fabian; Sillanpää, Mikko J.

    2008-01-01

    Accurate and fast computation of quantitative genetic variance parameters is of great importance in both natural and breeding populations. For experimental designs with complex relationship structures it can be important to include both additive and dominance variance components in the statistical model. In this study, we introduce a Bayesian Gibbs sampling approach for estimation of additive and dominance genetic variances in the traditional infinitesimal model. The method can handle general pedigrees without inbreeding. To optimize between computational time and good mixing of the Markov chain Monte Carlo (MCMC) chains, we used a hybrid Gibbs sampler that combines a single site and a blocked Gibbs sampler. The speed of the hybrid sampler and the mixing of the single-site sampler were further improved by the use of pretransformed variables. Two traits (height and trunk diameter) from a previously published diallel progeny test of Scots pine (Pinus sylvestris L.) and two large simulated data sets with different levels of dominance variance were analyzed. We also performed Bayesian model comparison on the basis of the posterior predictive loss approach. Results showed that models with both additive and dominance components had the best fit for both height and diameter and for the simulated data with high dominance. For the simulated data with low dominance, we needed an informative prior to avoid the dominance variance component becoming overestimated. The narrow-sense heritability estimates in the Scots pine data were lower compared to the earlier results, which is not surprising because the level of dominance variance was rather high, especially for diameter. In general, the hybrid sampler was considerably faster than the blocked sampler and displayed better mixing properties than the single-site sampler. PMID:18558655
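As a toy illustration of the Gibbs-sampling machinery (not the pedigree-based additive/dominance model of the paper, which requires relationship matrices), the sketch below alternates draws from the two full conditionals of a normal model with unknown mean and variance under a Jeffreys-style prior; all names and data are invented here.

```python
import math
import random

def gibbs_normal(y, n_iter=3000, seed=7):
    """Single-site Gibbs sampler for (mu, sigma^2) of i.i.d. normal
    data under a Jeffreys-style prior p(mu, sigma^2) ~ 1/sigma^2."""
    rng = random.Random(seed)
    n = len(y)
    ybar = sum(y) / n
    mu, sigma2 = ybar, 1.0
    draws = []
    for _ in range(n_iter):
        # mu | sigma^2, y  ~  Normal(ybar, sigma^2 / n)
        mu = rng.gauss(ybar, math.sqrt(sigma2 / n))
        # sigma^2 | mu, y  ~  Inverse-Gamma(n/2, sum((y - mu)^2) / 2),
        # drawn as b / Gamma(n/2, scale=1)
        b = sum((v - mu) ** 2 for v in y) / 2.0
        sigma2 = b / rng.gammavariate(n / 2.0, 1.0)
        draws.append((mu, sigma2))
    return draws

data = [4.0, 5.0, 6.0] * 20          # sample mean 5, variance 2/3
samples = gibbs_normal(data)
post_mu = sum(m for m, _ in samples) / len(samples)
post_sigma2 = sum(s for _, s in samples) / len(samples)
```

The hybrid sampler of the paper layers blocked updates on top of exactly this alternation of full-conditional draws to improve mixing.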

  10. Efficient Markov chain Monte Carlo implementation of Bayesian analysis of additive and dominance genetic variances in noninbred pedigrees.

    PubMed

    Waldmann, Patrik; Hallander, Jon; Hoti, Fabian; Sillanpää, Mikko J

    2008-06-01

    Accurate and fast computation of quantitative genetic variance parameters is of great importance in both natural and breeding populations. For experimental designs with complex relationship structures it can be important to include both additive and dominance variance components in the statistical model. In this study, we introduce a Bayesian Gibbs sampling approach for estimation of additive and dominance genetic variances in the traditional infinitesimal model. The method can handle general pedigrees without inbreeding. To optimize between computational time and good mixing of the Markov chain Monte Carlo (MCMC) chains, we used a hybrid Gibbs sampler that combines a single site and a blocked Gibbs sampler. The speed of the hybrid sampler and the mixing of the single-site sampler were further improved by the use of pretransformed variables. Two traits (height and trunk diameter) from a previously published diallel progeny test of Scots pine (Pinus sylvestris L.) and two large simulated data sets with different levels of dominance variance were analyzed. We also performed Bayesian model comparison on the basis of the posterior predictive loss approach. Results showed that models with both additive and dominance components had the best fit for both height and diameter and for the simulated data with high dominance. For the simulated data with low dominance, we needed an informative prior to avoid the dominance variance component becoming overestimated. The narrow-sense heritability estimates in the Scots pine data were lower compared to the earlier results, which is not surprising because the level of dominance variance was rather high, especially for diameter. In general, the hybrid sampler was considerably faster than the blocked sampler and displayed better mixing properties than the single-site sampler. PMID:18558655

  11. Stratospheric Assimilation of Chemical Tracer Observations Using a Kalman Filter. Pt. 2; Chi-Square Validated Results and Analysis of Variance and Correlation Dynamics

    NASA Technical Reports Server (NTRS)

    Menard, Richard; Chang, Lang-Ping

    1998-01-01

    A Kalman filter system designed for the assimilation of limb-sounding observations of stratospheric chemical tracers, which has four tunable covariance parameters, was developed in Part I (Menard et al. 1998). The assimilation results of CH4 observations from the Cryogenic Limb Array Etalon Spectrometer (CLAES) and the Halogen Occultation Experiment (HALOE) instruments on board the Upper Atmosphere Research Satellite are described in this paper. A robust χ² criterion, which provides a statistical validation of the forecast and observational error covariances, was used to estimate the tunable variance parameters of the system. In particular, an estimate of the model error variance was obtained. The effect of model error on the forecast error variance became critical after only three days of assimilation of CLAES observations, although it took 14 days of forecast to double the initial error variance. We further found that the model error due to numerical discretization, as arising in the standard Kalman filter algorithm, is comparable in size to the physical model error due to wind and transport modeling errors together. Separate assimilations of CLAES and HALOE observations were compared to validate the state estimate away from the observed locations. A wave-breaking event that took place several thousands of kilometers away from the HALOE observation locations was well captured by the Kalman filter, owing to highly anisotropic forecast error correlations. The forecast error correlation in the assimilation of the CLAES observations was found to have a structure similar to that in pure forecast mode, except for smaller length scales. Finally, we have conducted an analysis of the variance and correlation dynamics to determine their relative importance in chemical tracer assimilation problems. Results show that the optimality of a tracer assimilation system depends, for the most part, on having flow-dependent error correlation rather than on evolving the

  12. Pragmatics: The State of the Art: An Online Interview with Keith Allan

    ERIC Educational Resources Information Center

    Allan, Keith; Salmani Nodoushan, Mohammad Ali

    2015-01-01

    This interview was conducted with Professor Keith Allan with the aim of providing a brief but informative summary of the state of the art of pragmatics. In providing answers to the interview questions, Professor Allan begins with a definition of pragmatics as it is practiced today, i.e., the study of the meanings of utterances with attention to…

  13. A unique type 3 ordinary chondrite containing graphite-magnetite aggregates - Allan Hills A77011

    NASA Technical Reports Server (NTRS)

    Mckinley, S. G.; Scott, E. R. D.; Taylor, G. J.; Keil, K.

    1982-01-01

    ALHA 77011, which is the object of study in the present investigation, is a chondrite of the 1977 meteorite collection from Allan Hills, Antarctica. It contains an opaque and recrystallized silicate matrix (Huss matrix) and numerous aggregates consisting of micron- and submicron-sized graphite and magnetite. It is pointed out that no abundant graphite-magnetite aggregates could be observed in other type 3 ordinary chondrites, except for Sharps. Attention is given to the results of a modal analysis, relations between ALHA 77011 and other type 3 ordinary chondrites, and the association of graphite-magnetite and metallic Fe, Ni. The discovery of graphite-magnetite aggregates in type 3 ordinary chondrites is found to suggest that this material may have been an important component in the formation of ordinary chondrites.

  14. Variance analysis of gamma-aminobutyric acid (GABA)-ergic inhibitory postsynaptic currents from melanotropes of Xenopus laevis.

    PubMed Central

    Borst, J G; Kits, K S; Bier, M

    1994-01-01

    We have studied the variance in the decay of large spontaneous gamma-aminobutyric acid (GABA)-ergic inhibitory postsynaptic currents (IPSCs) in melanotropes of Xenopus laevis to obtain information about the number of GABAA receptor channels that bind GABA during the IPSCs. The average decay of the IPSCs is well described by the sum of two exponential functions. This suggests that a three-state Markov model is sufficient to describe the decay phase, with one of the three states being an absorbing state, entered when GABA dissociates from the GABAA receptor. We have compared the variance in the decay of large spontaneous IPSCs with the variance calculated for two different three-state models: a model with one open state, one closed state, and one absorbing state (I), and a model with two open states and one absorbing state (II). The data were better described by the more efficient model II. This suggests that the efficacy of GABA at synaptic GABAA receptor channels is high and that only a small number of channels are involved in generating the GABA-ergic IPSCs. PMID:7918986

  15. Fast variance reduction for steady-state simulation and sensitivity analysis of stochastic chemical systems using shadow function estimators

    SciTech Connect

    Milias-Argeitis, Andreas; Khammash, Mustafa; Lygeros, John

    2014-07-14

    We address the problem of estimating steady-state quantities associated to systems of stochastic chemical kinetics. In most cases of interest, these systems are analytically intractable, and one has to resort to computational methods to estimate stationary values of cost functions. In this work, we introduce a novel variance reduction algorithm for stochastic chemical kinetics, inspired by related methods in queueing theory, in particular the use of shadow functions. Using two numerical examples, we demonstrate the efficiency of the method for the calculation of steady-state parametric sensitivities and evaluate its performance in comparison to other estimation methods.
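Shadow-function estimators are closely related to control variates: a correlated quantity with known expectation is subtracted from the raw estimator to cancel noise. The generic sketch below illustrates that underlying idea on a trivial uniform-sampling problem; it is not the shadow-function estimator of the paper, which applies the same principle to stationary averages of Markov chains via a solution of a Poisson equation.

```python
import random

def control_variate_mean(f, g, g_mean, draw, n=20000, seed=3):
    """Monte Carlo estimate of E[f(X)] using control variate g(X)
    with known mean g_mean. Returns the adjusted estimate plus the
    per-sample variances before and after adjustment."""
    rng = random.Random(seed)
    xs = [draw(rng) for _ in range(n)]
    fs = [f(x) for x in xs]
    gs = [g(x) for x in xs]
    mf = sum(fs) / n
    mg = sum(gs) / n
    cov = sum((a - mf) * (b - mg) for a, b in zip(fs, gs)) / n
    var_g = sum((b - mg) ** 2 for b in gs) / n
    c = cov / var_g  # optimal control-variate coefficient
    adj = [a - c * (b - g_mean) for a, b in zip(fs, gs)]
    est = sum(adj) / n
    var_f = sum((a - mf) ** 2 for a in fs) / n
    var_adj = sum((a - est) ** 2 for a in adj) / n
    return est, var_f, var_adj

# Toy target: E[X^2] = 1/3 for X ~ U(0, 1), with g(X) = X (mean 1/2)
est, var_f, var_adj = control_variate_mean(
    lambda x: x * x, lambda x: x, 0.5, lambda r: r.random())
```

Because X and X² are strongly correlated on [0, 1), the adjusted estimator has a much smaller per-sample variance than the plain Monte Carlo average, which is the same variance-reduction effect the shadow-function method seeks.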

  16. Limited variance control in statistical low thrust guidance analysis. [stochastic algorithm for SEP comet Encke flyby mission

    NASA Technical Reports Server (NTRS)

    Jacobson, R. A.

    1975-01-01

    Difficulties arise in guiding a solar electric propulsion spacecraft due to nongravitational accelerations caused by random fluctuations in the magnitude and direction of the thrust vector. These difficulties may be handled by using a low thrust guidance law based on the linear-quadratic-Gaussian problem of stochastic control theory with a minimum terminal miss performance criterion. Explicit constraints are imposed on the variances of the control parameters, and an algorithm based on the Hilbert space extension of a parameter optimization method is presented for calculation of gains in the guidance law. The terminal navigation of a 1980 flyby mission to the comet Encke is used as an example.

  17. Fast variance reduction for steady-state simulation and sensitivity analysis of stochastic chemical systems using shadow function estimators

    NASA Astrophysics Data System (ADS)

    Milias-Argeitis, Andreas; Lygeros, John; Khammash, Mustafa

    2014-07-01

    We address the problem of estimating steady-state quantities associated to systems of stochastic chemical kinetics. In most cases of interest, these systems are analytically intractable, and one has to resort to computational methods to estimate stationary values of cost functions. In this work, we introduce a novel variance reduction algorithm for stochastic chemical kinetics, inspired by related methods in queueing theory, in particular the use of shadow functions. Using two numerical examples, we demonstrate the efficiency of the method for the calculation of steady-state parametric sensitivities and evaluate its performance in comparison to other estimation methods.

  18. What do differences between multi-voxel and univariate analysis mean? How subject-, voxel-, and trial-level variance impact fMRI analysis.

    PubMed

    Davis, Tyler; LaRocque, Karen F; Mumford, Jeanette A; Norman, Kenneth A; Wagner, Anthony D; Poldrack, Russell A

    2014-08-15

    Multi-voxel pattern analysis (MVPA) has led to major changes in how fMRI data are analyzed and interpreted. Many studies now report both MVPA results and results from standard univariate voxel-wise analysis, often with the goal of drawing different conclusions from each. Because MVPA results can be sensitive to latent multidimensional representations and processes whereas univariate voxel-wise analysis cannot, one conclusion that is often drawn when MVPA and univariate results differ is that the activation patterns underlying MVPA results contain a multidimensional code. In the current study, we conducted simulations to formally test this assumption. Our findings reveal that MVPA tests are sensitive to the magnitude of voxel-level variability in the effect of a condition within subjects, even when the same linear relationship is coded in all voxels. We also find that MVPA is insensitive to subject-level variability in mean activation across an ROI, which is the primary variance component of interest in many standard univariate tests. Together, these results illustrate that differences between MVPA and univariate tests do not afford conclusions about the nature or dimensionality of the neural code. Instead, targeted tests of the informational content and/or dimensionality of activation patterns are critical for drawing strong conclusions about the representational codes that are indicated by significant MVPA results. PMID:24768930

  19. Seizures in the life and works of Edgar Allan Poe.

    PubMed

    Bazil, C W

    1999-06-01

    Edgar Allan Poe, one of the most celebrated of American storytellers, lived through and wrote descriptions of episodic unconsciousness, confusion, and paranoia. These symptoms have been attributed to alcohol or drug abuse but also could represent complex partial seizures, prolonged postictal states, or postictal psychosis. Complex partial seizures were not well described in Poe's time, which could explain a misdiagnosis. Alternatively, he may have suffered from complex partial epilepsy that was complicated or caused by substance abuse. Even today, persons who have epilepsy are mistaken for substance abusers and occasionally are arrested during postictal confusional states. Poe was able to use creative genius and experiences from illness to create memorable tales and poignant poems. PMID:10369317

  20. Nannobacterial alteration of pyroxenes in martian meteorite Allan Hills 84001

    NASA Astrophysics Data System (ADS)

    Folk, Robert L.; Taylor, Lawrence A.

    2002-08-01

    This scanning electron microscope study of martian meteorite Allan Hills (ALH) 84001 focused on the ferromagnesian minerals, which are extensively covered with nanometer-size bodies mainly 30-100 nm in diameter. These bodies range from spheres to ovoids to caterpillar shapes and resemble, both in size and shape, nannobacteria that attack weathered rocks on Earth and that can be cultured. Dense colonies alternate with clean, smooth cleavage surfaces, possibly formed later. Statistical study shows that the distribution of presumed nannobacteria is highly clustered. In addition to the small bodies, there are a few occurrences of ellipsoidal 200-400 nm objects that are within the lower size range of "normal" earthly bacteria. We conclude that the nanobodies so abundant in ALH 84001 are indeed nannobacteria, confirming the initial assertion of McKay et al. (1996). However, whether these bodies originated on Mars or are Antarctic contamination remains a valid question.

  1. Petrogenetic relationship between Allan Hills 77005 and other achondrites

    NASA Technical Reports Server (NTRS)

    Mcsween, H. Y., Jr.; Taylor, L. A.; Stolper, E. M.; Muntean, R. A.; Okelley, G. D.; Eldridge, J. S.; Biswas, S.; Ngo, H. T.; Lipschutz, M. E.

    1979-01-01

    The paper presents chemical and petrologic data on the Allan Hills (ALHA) 77005 achondrite from Antarctica and explores its petrogenetic relationship with the shergottites. Petrologic similarities with the latter in terms of mineralogy, oxidation state, inferred source region composition, and shock ages suggest a genetic relationship, also indicated by volatile to involatile element ratios and abundances of other trace elements. ALHA 77005 may be a cumulate crystallized from a liquid parental to the materials from which the shergottites crystallized, or a sample of the peridotite from which shergottite parent liquids were derived. Chemical similarities with terrestrial ultramafic rocks suggest that it provides an additional sample of the only other solar system body whose basalt source regions are chemically similar to the Earth's upper mantle.

  2. The History of Allan Hills 84001 Revised: Multiple Shock Events

    NASA Technical Reports Server (NTRS)

    Treiman, Allan H.

    1998-01-01

    The geologic history of Martian meteorite Allan Hills (ALH) 84001 is more complex than previously recognized, with evidence for four or five crater-forming impacts onto Mars. This history of repeated deformation and shock metamorphism appears to weaken some arguments that have been offered for and against the hypothesis of ancient Martian life in ALH 84001. Allan Hills 84001 formed originally from basaltic magma. Its first impact event (I1) is inferred from the deformation (D1) that produced the granular-textured bands ("crush zones") that transect the original igneous fabric. Deformation D1 is characterized by intense shear and may represent excavation or rebound flow of rock beneath a large impact crater. An intense thermal metamorphism followed D1 and may be related to it. The next impact (I2) produced fractures (Fr2), in which carbonate "pancakes" were deposited, and produced feldspathic glass from some of the igneous feldspars and silica. After I2, carbonate pancakes and globules were deposited in Fr2 fractures and replaced feldspathic glass and possibly crystalline silicates. Next, feldspars, feldspathic glass, and possibly some carbonates were mobilized and melted in the third impact (I3). Microfaulting, intense fracturing, and shear are also associated with I3. In the fourth impact (I4), the rock was fractured and deformed without significant heating, which permitted remnant magnetization directions to vary across fracture surfaces. Finally, ALH 84001 was ejected from Mars in event I5, which could be identical to I4. This history of multiple impacts is consistent with the photogeology of the Martian highlands and may help resolve some apparent contradictions among recent results on ALH 84001. For example, the submicron rounded magnetite grains in the carbonate globules could be contemporaneous with carbonate deposition, whereas the elongate magnetite grains, epitaxial on carbonates, could be ascribed to vapor-phase deposition during I3.

  3. Adjustment of Measurements with Multiplicative Errors: Error Analysis, Estimates of the Variance of Unit Weight, and Effect on Volume Estimation from LiDAR-Type Digital Elevation Models

    PubMed Central

    Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

    2014-01-01

    Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as for GPS and VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative measurement errors on DEM construction and on the estimate of landslide mass volume from the constructed DEM. PMID:24434880

  4. Adjustment of measurements with multiplicative errors: error analysis, estimates of the variance of unit weight, and effect on volume estimation from LiDAR-type digital elevation models.

    PubMed

    Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

    2013-01-01

    Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as for GPS and VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative measurement errors on DEM construction and on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
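    Under a multiplicative error model the noise variance grows with the true value, so ordinary least squares is consistent but inefficient, and weighting by the inverse squared measurement restores efficiency. A minimal numpy sketch of this contrast on a hypothetical straight-line fit (illustrative only; not the paper's three LS adjustments):

```python
import numpy as np

rng = np.random.default_rng(5)

# Multiplicative error model: y = mu * (1 + eps), eps ~ N(0, sigma^2),
# so the noise standard deviation grows with the true value mu
# (cf. LiDAR-type measurements).
sigma = 0.05
x = np.linspace(1.0, 10.0, 200)
mu = 3.0 * x                     # true line through the origin, slope 3
y = mu * (1.0 + sigma * rng.standard_normal(x.size))

# Ordinary LS treats all points equally; weighted LS downweights the
# large (noisier) measurements with weights 1 / y^2.
X = x[:, None]
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0][0]

w = 1.0 / y ** 2
beta_wls = (w * x * y).sum() / (w * x * x).sum()

print(beta_ols, beta_wls)  # both near 3.0; WLS is the efficient estimator
```

    Both estimators recover the slope, but repeating the simulation shows a visibly smaller spread for the weighted fit, which is the point the error analysis formalizes.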

  5. Comparison of a one-at-a-time and variance-based global sensitivity analysis applied to a parsimonious urban hydrological model

    NASA Astrophysics Data System (ADS)

    Coutu, S.

    2014-12-01

    A sensitivity analysis was conducted on an existing parsimonious model aiming to reproduce flow in engineered urban catchments and sewer networks. The model is characterized by its parsimonious structure and is limited to seven calibration parameters. The objective of this study is to demonstrate how different levels of sensitivity analysis can influence the interpretation of input parameter relevance in urban hydrology, even for light-structure models. In this perspective, we applied a one-at-a-time (OAT) sensitivity analysis (SA) as well as a variance-based, global and model-independent method: the calculation of Sobol indices. Sobol's first-order and total-effect indices were estimated using a Monte Carlo approach. We present evidence of the irrelevance of calculating Sobol's second-order indices when the uncertainty on index estimation is too high. Sobol's method showed that two parameters drive model performance: the subsurface discharge rate and the root zone drainage coefficient (Clapp exponent). Interestingly, the surface discharge rate, responsible for flow in impervious areas, has no significant relevance, contrary to what was expected from the one-at-a-time sensitivity analysis alone. This result is not straightforward. It highlights the utility of carrying out variance-based sensitivity analysis in the domain of urban hydrology, even when using a parsimonious model, in order to prevent misunderstanding of the system dynamics and consequent management mistakes.
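    Sobol first-order indices of the kind used above can be estimated by Monte Carlo with two independent sample matrices and "radial" recombinations of their columns. A numpy sketch on a toy additive model with known indices (the model and sample size are illustrative stand-ins, not the hydrological model):

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    # Toy additive model with analytic first-order Sobol indices
    # S1 = 0.2, S2 = 0.8, S3 = 0 for independent U(0, 1) inputs.
    return x[:, 0] + 2.0 * x[:, 1] + 0.0 * x[:, 2]

n, d = 200_000, 3
A = rng.uniform(size=(n, d))
B = rng.uniform(size=(n, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))  # total output variance

S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]          # A with only column i taken from B
    # Saltelli-type first-order estimator: S_i = E[f(B)(f(AB_i) - f(A))] / V
    S.append(np.mean(fB * (model(ABi) - fA)) / var)

print(np.round(S, 2))
```

    The same A/B matrices also yield total-effect indices via the Jansen estimator, which is how first-order and total effects are usually computed in one pass.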

  6. Monte Carlo variance reduction

    NASA Technical Reports Server (NTRS)

    Byrn, N. R.

    1980-01-01

    Computer program incorporates technique that reduces variance of forward Monte Carlo method for given amount of computer time in determining radiation environment in complex organic and inorganic systems exposed to significant amounts of radiation.
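    A standard way to reduce forward Monte Carlo variance for a fixed sample budget is importance sampling: draw from a distribution concentrated on the region that matters and reweight by the likelihood ratio. A minimal numpy sketch on a toy tail-probability problem (illustrative only; the program described here targeted radiation transport):

```python
import math
import numpy as np

rng = np.random.default_rng(2)

# Estimate the small tail probability p = P(Z > 3) for Z ~ N(0, 1).
n = 100_000

# Naive forward Monte Carlo: almost all samples miss the tail.
naive = (rng.standard_normal(n) > 3.0).mean()

# Importance sampling: draw from N(3, 1) so the tail is hit about half
# the time, and reweight by the likelihood ratio
# phi(x) / phi(x - 3) = exp(-3x + 4.5).
x = rng.normal(3.0, 1.0, size=n)
weights = np.exp(-3.0 * x + 4.5)
shifted = np.mean((x > 3.0) * weights)

exact = 0.5 * math.erfc(3.0 / math.sqrt(2.0))  # reference value ~1.35e-3
print(naive, shifted, exact)
```

    With the same number of samples, the shifted estimate lands within a fraction of a percent of the exact value, while the naive count is dominated by the handful of samples that happen to exceed 3.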

  7. A COSMIC VARIANCE COOKBOOK

    SciTech Connect

    Moster, Benjamin P.; Rix, Hans-Walter; Somerville, Rachel S.; Newman, Jeffrey A. E-mail: rix@mpia.de E-mail: janewman@pitt.edu

    2011-04-20

    Deep pencil beam surveys (<1 deg²) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by "cosmic variance". This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10^11 M_sun is ~38%, while it is ~27% for GEMS and ~12% for COSMOS. For galaxies of m* ~ 10^10 M_sun, the relative cosmic variance is ~19% for GOODS, ~13% for GEMS, and ~6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z̄ = 2 for small fields and massive galaxies.

  8. A Cosmic Variance Cookbook

    NASA Astrophysics Data System (ADS)

    Moster, Benjamin P.; Somerville, Rachel S.; Newman, Jeffrey A.; Rix, Hans-Walter

    2011-04-01

    Deep pencil beam surveys (<1 deg²) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by "cosmic variance." This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift z̄ and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, z̄, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at z̄ = 2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10^11 M_sun is ~38%, while it is ~27% for GEMS and ~12% for COSMOS. For galaxies of m* ~ 10^10 M_sun, the relative cosmic variance is ~19% for GOODS, ~13% for GEMS, and ~6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at z̄ = 2 for small fields and massive galaxies, while for larger fields and intermediate mass galaxies, cosmic variance is

  9. The Effects of Single and Compound Violations of Data Set Assumptions when Using the Oneway, Fixed Effects Analysis of Variance and the One Concomitant Analysis of Covariance Statistical Models.

    ERIC Educational Resources Information Center

    Johnson, Colleen Cook

    This study integrates into one comprehensive Monte Carlo simulation a vast array of previously defined and substantively interrelated research studies of the robustness of analysis of variance (ANOVA) and analysis of covariance (ANCOVA) statistical procedures. Three sets of balanced ANOVA and ANCOVA designs (group sizes of 15, 30, and 45) and one…
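    The one-way fixed-effects ANOVA F statistic at the center of such robustness studies is straightforward to compute directly from group sums of squares. A numpy sketch using one of the study's group sizes (n = 30 per group; the simulated one-standard-deviation shift in the third group is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)

# Three balanced groups of n = 30, with a real mean shift in the third.
groups = [rng.normal(0.0, 1.0, 30),
          rng.normal(0.0, 1.0, 30),
          rng.normal(1.0, 1.0, 30)]

k = len(groups)
n = sum(len(g) for g in groups)
grand = np.mean(np.concatenate(groups))

# Between-group and within-group sums of squares
ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

# F = (SSB / (k - 1)) / (SSW / (N - k)); compare to F(k-1, N-k)
f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(f_stat)
```

    Robustness studies of the kind described repeat exactly this computation many thousands of times under deliberate violations of normality or variance homogeneity and record how often F exceeds its nominal critical value.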

  10. Longitudinal analysis of residual feed intake and BW in mink using random regression with heterogeneous residual variance.

    PubMed

    Shirali, M; Nielsen, V H; Møller, S H; Jensen, J

    2015-10-01

    The aim of this study was to determine the genetic background of longitudinal residual feed intake (RFI) and BW gain in farmed mink using random regression methods considering heterogeneous residual variances. The individual BW was measured every 3 weeks from 63 to 210 days of age for 2139 male+female pairs of juvenile mink during the growing-furring period. Cumulative feed intake was calculated six times at 3-week intervals based on daily feed consumption between weighings from 105 to 210 days of age. Genetic parameters for RFI and BW gain in males and females were obtained using univariate random regression with Legendre polynomials containing an animal genetic effect and a permanent environmental effect of litter, along with heterogeneous residual variances. Heritability estimates for RFI increased with age from 0.18 (0.03, posterior standard deviation (PSD)) at 105 days of age to 0.49 (0.03, PSD) and 0.46 (0.03, PSD) at 210 days of age in male and female mink, respectively. The heritability estimates for BW gain increased with age and were moderate to high for males (0.33 (0.02, PSD) to 0.84 (0.02, PSD)) and females (0.35 (0.03, PSD) to 0.85 (0.02, PSD)). RFI estimates during the growing period (105 to 126 days of age) showed high positive genetic correlations with the pelting RFI (210 days of age) in males (0.86 to 0.97) and females (0.92 to 0.98). However, phenotypic correlations were lower, from 0.47 to 0.76 in males and 0.61 to 0.75 in females. Furthermore, BW records in the growing period (63 to 126 days of age) had moderate (males: 0.39, females: 0.53) to high (males: 0.87, females: 0.94) genetic correlations with pelting BW (210 days of age). The results of the current study showed that RFI and BW in mink are highly heritable, especially in the late furring period, suggesting potential for large genetic gains for these traits. The genetic correlations suggested that substantial genetic gain can be obtained by considering only the RFI estimate and BW at pelting

  11. Technical note: An improved estimate of uncertainty for source contribution from effective variance Chemical Mass Balance (EV-CMB) analysis

    NASA Astrophysics Data System (ADS)

    Shi, Guo-Liang; Zhou, Xiao-Yu; Feng, Yin-Chang; Tian, Ying-Ze; Liu, Gui-Rong; Zheng, Mei; Zhou, Yang; Zhang, Yuan-Hang

    2015-01-01

    The CMB (Chemical Mass Balance) 8.2 model released by the USEPA is a commonly used receptor model that can determine estimated source contributions and their uncertainties (called the default uncertainty). In this study, we propose an improved CMB uncertainty for the modeled contributions (called the EV-LS uncertainty) by adding the difference between the modeled and measured values of ambient species concentrations to the default CMB uncertainty, based on the effective variance least squares (EV-LS) solution. This correction reconciles the uncertainty estimates for EV and OLS regression. To verify the formula for the EV-LS CMB uncertainty, the same ambient datasets were analyzed using the equation we developed for the EV-LS CMB uncertainty and a standard statistical package, SPSS 16.0. The same results were obtained both ways, indicating that the equation for the EV-LS CMB uncertainty proposed here is acceptable. In addition, four ambient datasets were studied with CMB 8.2, and the source contributions as well as the associated uncertainties were obtained accordingly.

  12. Hidden-Markov methods for the analysis of single-molecule actomyosin displacement data: the variance-Hidden-Markov method.

    PubMed Central

    Smith, D A; Steffen, W; Simmons, R M; Sleep, J

    2001-01-01

    In single-molecule experiments on the interaction between myosin and actin, mechanical events are embedded in Brownian noise. Methods of detecting events have progressed from simple manual detection of shifts in the position record to threshold-based selection of intermittent periods of reduction in noise. However, none of these methods provides a "best fit" to the data. We have developed a Hidden-Markov algorithm that assumes a simple kinetic model for the actin-myosin interaction and provides automatic, threshold-free, maximum-likelihood detection of events. The method is developed for the case of a weakly trapped actin-bead dumbbell interacting with a stationary myosin molecule (Finer, J. T., R. M. Simmons, and J. A. Spudich. 1994. Nature. 368:113-119). The algorithm operates on the variance of bead position signals in a running window, and is tested using Monte Carlo simulations to formulate ways of determining the optimum window width. The working stroke is derived and corrected for actin-bead link compliance. With experimental data, we find that modulation of myosin binding by the helical structure of the actin filament complicates the determination of the working stroke; however, under conditions that produce a Gaussian distribution of bound levels (cf. Molloy, J. E., J. E. Burns, J. Kendrick-Jones, R. T. Tregear, and D. C. S. White. 1995. Nature. 378:209-212), four experiments gave working strokes in the range 5.4-6.3 nm for rabbit skeletal muscle myosin S1. PMID:11606292
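    The signal such algorithms operate on is the variance of the bead position in a running window, which drops when the myosin binds and stiffens the trap. The sketch below computes a running-window variance over a synthetic bead trace and flags the low-noise period with a naive threshold rather than the paper's Hidden-Markov step; all numbers (noise levels, window width, threshold) are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic bead-position trace: free Brownian noise (SD 1.0) with a
# bound "event" in the middle where stiffness rises and noise drops.
x = rng.normal(0.0, 1.0, 3000)
x[1000:2000] = rng.normal(0.0, 0.3, 1000)  # bound period: reduced variance

w = 100  # running-window width in samples (a tuning choice, cf. the paper)

# Running variance in one pass via cumulative sums:
# var = E[x^2] - (E[x])^2 over each window of length w.
c1 = np.cumsum(np.insert(x, 0, 0.0))
c2 = np.cumsum(np.insert(x ** 2, 0, 0.0))
mean = (c1[w:] - c1[:-w]) / w
var = (c2[w:] - c2[:-w]) / w - mean ** 2

bound = var < 0.5  # naive threshold; the paper replaces this with an HMM
print(round(var[0], 2), round(var[1500], 2))  # ~1 when free, ~0.09 when bound
```

    The Hidden-Markov method improves on the threshold by fitting a two-state model to this variance record, giving a maximum-likelihood segmentation with no free cutoff.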

  13. A univariate analysis of variance design for multiple-choice feeding-preference experiments: A hypothetical example with fruit-eating birds

    NASA Astrophysics Data System (ADS)

    Larrinaga, Asier R.

    2010-01-01

    I consider statistical problems in the analysis of multiple-choice food-preference experiments, and propose a univariate analysis of variance design for experiments of this type. I present an example experimental design for a hypothetical comparison of fruit colour preferences between two frugivorous bird species. In each fictitious trial, four trays, each containing a known weight of artificial fruits (red, blue, black, or green), are introduced into the cage, while four equivalent trays are left outside the cage to control for tray weight loss due to other factors (notably desiccation). The proposed univariate approach allows data from such designs to be analysed with adequate power and no major violations of statistical assumptions. Nevertheless, there is no single "best" approach for experiments of this type: the best analysis in each case will depend on the particular aims and nature of the experiments.

  14. Getting around cosmic variance

    SciTech Connect

    Kamionkowski, M.; Loeb, A.

    1997-10-01

    Cosmic microwave background (CMB) anisotropies probe the primordial density field at the edge of the observable Universe. There is a limiting precision ("cosmic variance") with which anisotropies can determine the amplitude of primordial mass fluctuations. This arises because the surface of last scatter (SLS) probes only a finite two-dimensional slice of the Universe. Probing other SLS's observed from different locations in the Universe would reduce the cosmic variance. In particular, the polarization of CMB photons scattered by the electron gas in a cluster of galaxies provides a measurement of the CMB quadrupole moment seen by the cluster. Therefore, CMB polarization measurements toward many clusters would probe the anisotropy on a variety of SLS's within the observable Universe, and hence reduce the cosmic-variance uncertainty. © 1997 The American Physical Society

  15. Joint analysis of beef growth and carcass quality traits through calculation of co-variance components and correlations.

    PubMed

    Mirzaei, H R; Verbyla, A P; Pitchford, W S

    2011-01-01

    A joint growth-carcass model using random regression was used to estimate the (co)variance components of beef cattle body weights and carcass quality traits and the correlations between them. During a four-year period (1994-1997) of the Australian "southern crossbreeding project", mature Hereford cows (N = 581) were mated to 97 sires of Jersey, Wagyu, Angus, Hereford, South Devon, Limousin, and Belgian Blue breeds, resulting in 1141 calves. Data included 13 (for steers) and 8 (for heifers) body weight measurements approximately every 50 days from birth until slaughter and four carcass quality traits: hot standard carcass weight, rump fat depth, rib eye muscle area, and intramuscular fat content. The mixed model included fixed effects of sex, sire breed, age (linear, quadratic and cubic), and the interactions of sex and sire breed with age. Random effects were sire, dam, management (birth location, year, post-weaning groups), and permanent environmental effects, and their interactions with linear, quadratic and cubic growth, when possible. Phenotypic, sire and dam correlations between body weights and hot standard carcass weight and rib eye muscle area were positive and moderate to high from birth to the feedlot period. Management variation accounted for the largest proportion of total variation in both growth and carcass traits. Management correlations between carcass traits were high, except between rump fat depth and intramuscular fat (r = 0.26). Management correlations between body weight and carcass traits during the pre-weaning period were positive except for intramuscular fat. The correlations were low from birth to weaning, then increased dramatically and were high during the feedlot period. PMID:21425094

  16. The emergence of modern statistics in agricultural science: analysis of variance, experimental design and the reshaping of research at Rothamsted Experimental Station, 1919-1933.

    PubMed

    Parolini, Giuditta

    2015-01-01

    During the twentieth century statistical methods have transformed research in the experimental and social sciences. Qualitative evidence has largely been replaced by quantitative results and the tools of statistical inference have helped foster a new ideal of objectivity in scientific knowledge. The paper will investigate this transformation by considering the genesis of analysis of variance and experimental design, statistical methods nowadays taught in every elementary course of statistics for the experimental and social sciences. These methods were developed by the mathematician and geneticist R. A. Fisher during the 1920s, while he was working at Rothamsted Experimental Station, where agricultural research was in turn reshaped by Fisher's methods. Analysis of variance and experimental design required new practices and instruments in field and laboratory research, and imposed a redistribution of expertise among statisticians, experimental scientists and the farm staff. On the other hand the use of statistical methods in agricultural science called for a systematization of information management and made computing an activity integral to the experimental research done at Rothamsted, permanently integrating the statisticians' tools and expertise into the station research programme. Fisher's statistical methods did not remain confined within agricultural research and by the end of the 1950s they had come to stay in psychology, sociology, education, chemistry, medicine, engineering, economics, quality control, just to mention a few of the disciplines which adopted them. PMID:25311906

  17. Variance Anisotropy in Kinetic Plasmas

    NASA Astrophysics Data System (ADS)

    Parashar, Tulasi N.; Oughton, Sean; Matthaeus, William H.; Wan, Minping

    2016-06-01

    Solar wind fluctuations admit well-documented anisotropies of the variance matrix, or polarization, related to the mean magnetic field direction. Typically, one finds a ratio of perpendicular variance to parallel variance of the order of 9:1 for the magnetic field. Here we study the question of whether a kinetic plasma spontaneously generates and sustains parallel variances when initiated with only perpendicular variance. We find that parallel variance grows and saturates at about 5% of the perpendicular variance in a few nonlinear times irrespective of the Reynolds number. For sufficiently large systems (Reynolds numbers) the variance approaches values consistent with the solar wind observations.
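    The perpendicular-to-parallel variance ratio discussed above is computed by projecting the fluctuating field onto the mean-field direction. A numpy sketch on synthetic data constructed to have the observed 9:1 anisotropy (the noise model and sample size are illustrative, not simulation output):

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic magnetic-field fluctuations about a mean field along z,
# built with perpendicular variance 9x the parallel variance (the 9:1
# ratio typical of solar wind observations cited in the abstract).
n = 10_000
b0 = np.array([0.0, 0.0, 1.0])               # unit mean-field direction
db = np.stack([rng.normal(0.0, 3.0, n),      # x component (perpendicular)
               rng.normal(0.0, 3.0, n),      # y component (perpendicular)
               rng.normal(0.0, np.sqrt(2.0), n)],  # z component (parallel)
              axis=1)

b_par = db @ b0                              # fluctuation along the mean field
b_perp_sq = (db ** 2).sum(axis=1) - b_par ** 2

ratio = b_perp_sq.mean() / b_par.var()
print(ratio)  # near 9 for this construction
```

    In practice the mean-field direction b0 is itself estimated from an average of the measured field over the interval before the projection is made.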

  18. Conversations across Meaning Variance

    ERIC Educational Resources Information Center

    Cordero, Alberto

    2013-01-01

    Progressive interpretations of scientific theories have long been denounced as naive, because of the inescapability of meaning variance. The charge reportedly applies to recent realist moves that focus on theory-parts rather than whole theories. This paper considers the question of what "theory-parts" of epistemic significance (if any) relevantly…

  19. Minimum variance geographic sampling

    NASA Technical Reports Server (NTRS)

    Terrell, G. R. (Principal Investigator)

    1980-01-01

    Resource inventories require samples with geographical scatter, sometimes not as widely spaced as would be hoped. A simple model of correlation over distance is used to create a minimum variance unbiased estimate of population means. The fitting procedure is illustrated with data used to estimate Missouri corn acreage.

  20. Using a variance-based sensitivity analysis for analyzing the relation between measurements and unknown parameters of a physical model

    NASA Astrophysics Data System (ADS)

    Zhao, J.; Tiede, C.

    2011-05-01

    An implementation of uncertainty analysis (UA) and quantitative global sensitivity analysis (SA) is applied to the non-linear inversion of gravity changes and three-dimensional displacement data measured in an active volcanic area. A didactic example is included to illustrate the computational procedure. The main emphasis is placed on the extended Fourier amplitude sensitivity test (E-FAST). This method produces the total sensitivity indices (TSIs), so that all interactions between the unknown input parameters are taken into account. The possible correlations between the output and the input parameters can be evaluated by uncertainty analysis. Uncertainty analysis results indicate the general fit between the physical model and the measurements. Results of the sensitivity analysis show quite different sensitivities for the measured changes as they relate to the unknown parameters of a physical model for an elastic-gravitational source. Assuming a fixed number of executions, thirty different seeds are used to determine the stability of this method.

  1. Identification of Analytical Factors Affecting Complex Proteomics Profiles Acquired in a Factorial Design Study with Analysis of Variance: Simultaneous Component Analysis.

    PubMed

    Mitra, Vikram; Govorukhina, Natalia; Zwanenburg, Gooitzen; Hoefsloot, Huub; Westra, Inge; Smilde, Age; Reijmers, Theo; van der Zee, Ate G J; Suits, Frank; Bischoff, Rainer; Horvatovich, Péter

    2016-04-19

    Complex shotgun proteomics peptide profiles obtained in quantitative differential protein expression studies, such as in biomarker discovery, may be affected by multiple experimental factors. These preanalytical factors may affect the measured protein abundances which in turn influence the outcome of the associated statistical analysis and validation. It is therefore important to determine which factors influence the abundance of peptides in a complex proteomics experiment and to identify those peptides that are most influenced by these factors. In the current study we analyzed depleted human serum samples to evaluate experimental factors that may influence the resulting peptide profile such as the residence time in the autosampler at 4 °C, stopping or not stopping the trypsin digestion with acid, the type of blood collection tube, different hemolysis levels, differences in clotting times, the number of freeze-thaw cycles, and different trypsin/protein ratios. To this end we used a two-level fractional factorial design of resolution IV (2^(7-3)). The design required analysis of 16 samples in which the main effects were not confounded by two-factor interactions. Data preprocessing using the Threshold Avoiding Proteomics Pipeline (Suits, F.; Hoekman, B.; Rosenling, T.; Bischoff, R.; Horvatovich, P. Anal. Chem. 2011, 83, 7786-7794, ref 1) produced a data matrix containing quantitative information on 2,559 peaks. The intensity of the peaks was log-transformed, and peaks with low t-test significance (p-value > 0.05) and a low absolute fold ratio (<2) between the two levels of each factor were removed. The remaining peaks were subjected to analysis of variance (ANOVA)-simultaneous component analysis (ASCA). Permutation tests were used to identify which of the preanalytical factors influenced the abundance of the measured peptides most significantly. The most important preanalytical factors affecting peptide intensity were (1) the hemolysis level

  2. Variance-based Sensitivity Analysis of Large-scale Hydrological Model to Prepare an Ensemble-based SWOT-like Data Assimilation Experiments

    NASA Astrophysics Data System (ADS)

    Emery, C. M.; Biancamaria, S.; Boone, A. A.; Ricci, S. M.; Garambois, P. A.; Decharme, B.; Rochoux, M. C.

    2015-12-01

    Land Surface Models (LSM) coupled with River Routing schemes (RRM) are used in Global Climate Models (GCM) to simulate the continental part of the water cycle. They are key components of GCMs as they provide boundary conditions to atmospheric and oceanic models. However, at global scale, errors arise mainly from simplified physics, atmospheric forcing, and input parameters. In particular, those used in RRMs, such as river width, depth and friction coefficients, are difficult to calibrate and are mostly derived from geomorphologic relationships, which may not always be realistic. In situ measurements are then used to calibrate these relationships and validate the model, but global in situ data are very sparse. Additionally, due to the lack of an existing global river geomorphology database and accurate forcing, models are run at coarse resolution. This is typically the case of the ISBA-TRIP model used in this study. A complementary alternative to in-situ data are satellite observations. In this regard, the Surface Water and Ocean Topography (SWOT) satellite mission, jointly developed by NASA/CNES/CSA/UKSA and scheduled for launch around 2020, should be very valuable to calibrate RRM parameters. It will provide maps of water surface elevation for rivers wider than 100 meters over continental surfaces between 78°S and 78°N, as well as direct observation of river geomorphological parameters such as width and slope. Yet, before assimilating such data, it is necessary to analyze the RRM's temporal sensitivity to its time-constant parameters. This study presents such an analysis over large river basins for the TRIP RRM. Model output uncertainty, represented by unconditional variance, is decomposed into ordered contributions from each parameter. A time-dependent analysis then identifies the parameters to which modeled water level and discharge are most sensitive over a hydrological year. The results show that local parameters directly impact water levels, while

  3. THE DEAD-LIVING-MOTHER: MARIE BONAPARTE'S INTERPRETATION OF EDGAR ALLAN POE'S SHORT STORIES.

    PubMed

    Obaid, Francisco Pizarro

    2016-06-01

    Princess Marie Bonaparte is an important figure in the history of psychoanalysis, remembered for her crucial role in arranging Freud's escape to safety in London from Nazi Vienna, in 1938. This paper connects us to Bonaparte's work on Poe's short stories. Founded on concepts of Freudian theory and an exhaustive review of the biographical facts, Marie Bonaparte concluded that the works of Edgar Allan Poe drew their most powerful inspirational force from the psychological consequences of the early death of the poet's mother. In Bonaparte's approach, which was powerfully influenced by her recognition of the impact of the death of her own mother when she was born-an understanding she gained in her analysis with Freud-the thesis of the dead-living-mother achieved the status of a paradigmatic key to analyze and understand Poe's literary legacy. This paper explores the background and support of this hypothesis and reviews Bonaparte's interpretation of Poe's most notable short stories, in which extraordinary female figures feature in the narrative. PMID:27194275

  4. Cultural variances in composition of biological and supernatural concepts of death: a content analysis of children's literature.

    PubMed

    Lee, Ji Seong; Kim, Eun Young; Choi, Younyoung; Koo, Ja Hyouk

    2014-01-01

    Children's reasoning about the afterlife emerges naturally as a developmental regularity. Although a biological understanding of death increases in accordance with cognitive development, biological and supernatural explanations of death may coexist in a complementary manner, being deeply embedded in cultural contexts. This study conducted a content analysis of 40 children's death-themed picture books in Western Europe and East Asia. It can be inferred that causality and non-functionality are highly integrated with the naturalistic and supernatural understanding of death in Western Europe, whereas the literature in East Asia seems to rely on naturalistic aspects of death and focuses on causal explanations. PMID:24738761

  5. The final days of Edgar Allan Poe: clues to an old mystery using 21st century medical science.

    PubMed

    Francis, Roger A

    This study examines all documented information regarding the final days and death of Edgar Allan Poe (1809-1849), in an attempt to determine the most likely cause of death of the American poet, short story writer, and literary critic. Information was gathered from letters, newspaper accounts, and magazine articles written during the period after Poe's death, and also from biographies and medical journal articles written up until the present. A chronology of Poe's final days was constructed, and this was used to form a differential diagnosis of possible causes of death. Death theories over the last 160 years were analyzed using this information. This analysis, along with a review of Poe's past medical history, would seem to support an alcohol-related cause of death. PMID:20222235

  6. An Efficient and Configurable Preprocessing Algorithm to Improve Stability Analysis.

    PubMed

    Sesia, Ilaria; Cantoni, Elena; Cernigliaro, Alice; Signorile, Giovanna; Fantino, Gianluca; Tavella, Patrizia

    2016-04-01

    The Allan variance (AVAR) is widely used to measure the stability of experimental time series. Specifically, AVAR is commonly used in space applications such as monitoring the clocks of the global navigation satellite systems (GNSSs). In these applications, the experimental data present some peculiar aspects which are not generally encountered when the measurements are carried out in a laboratory. Space clocks' data can in fact present outliers, jumps, and missing values, which corrupt the clock characterization. Therefore, an efficient preprocessing is fundamental to ensure a proper data analysis and improve the stability estimation performed with the AVAR or other similar variances. In this work, we propose a preprocessing algorithm and its implementation in a robust software code (in MATLAB language) able to deal with time series of experimental data affected by nonstationarities and missing data; our method properly detects and removes anomalous behaviors, making the subsequent stability analysis more reliable. PMID:26540679
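    The preprocess-then-estimate workflow described above can be sketched in a few lines. The snippet below is a minimal NumPy illustration, not the authors' MATLAB code: `overlapping_avar` is the standard overlapping Allan variance estimator, and `mad_filter` is a hypothetical stand-in for the outlier-detection step, flagging points far from the median (in robust MAD units) as missing.

```python
import numpy as np

def overlapping_avar(y, m):
    """Overlapping Allan variance of fractional-frequency data y at
    averaging factor m (i.e., averaging time tau = m * tau0)."""
    y = np.asarray(y, dtype=float)
    if 2 * m > len(y):
        raise ValueError("averaging factor too large for series length")
    avg = np.convolve(y, np.ones(m) / m, mode="valid")  # running m-point means
    d = avg[m:] - avg[:-m]                              # mean differences at lag m
    return 0.5 * np.mean(d ** 2)

def mad_filter(y, k=5.0):
    """Mark outliers as NaN (missing): points further than k robust
    standard deviations (1.4826 * MAD) from the median."""
    y = np.asarray(y, dtype=float).copy()
    med = np.median(y)
    mad = max(np.median(np.abs(y - med)), 1e-30)
    y[np.abs(y - med) > k * 1.4826 * mad] = np.nan
    return y

# demo: white frequency noise with sigma = 1, for which AVAR(m) ~ sigma^2 / m
rng = np.random.default_rng(42)
y = rng.standard_normal(200_000)
avar_1, avar_10 = overlapping_avar(y, 1), overlapping_avar(y, 10)
```

    The -1 slope of AVAR versus averaging time on a log-log plot is the signature of white frequency noise; robust preprocessing matters because a single uncorrected jump or outlier biases every averaging time at once.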

  7. Spectral analysis of the Earth's topographic potential via 2D-DFT: a new data-based degree variance model to degree 90,000

    NASA Astrophysics Data System (ADS)

    Rexer, Moritz; Hirt, Christian

    2015-09-01

    Classical degree variance models (such as Kaula's rule or the Tscherning-Rapp model) often rely on low-resolution gravity data and so are subject to extrapolation when used to describe the decay of the gravity field at short spatial scales. This paper presents a new degree variance model based on the recently published GGMplus near-global land areas 220 m resolution gravity maps (Geophys Res Lett 40(16):4279-4283, 2013). We investigate and use a 2D-DFT (discrete Fourier transform) approach to transform GGMplus gravity grids into degree variances. The method is described in detail and its approximation errors are studied using closed-loop experiments. Focus is placed on tiling, azimuth averaging, and windowing effects in the 2D-DFT method and on analytical fitting of degree variances. Approximation errors of the 2D-DFT procedure on the (spherical harmonic) degree variance are found to be at the 10-20 % level. The importance of the reference surface (sphere, ellipsoid or topography) of the gravity data for correct interpretation of degree variance spectra is highlighted. The effect of the underlying mass arrangement (spherical or ellipsoidal approximation) on the degree variances is found to be crucial at short spatial scales. A rule-of-thumb for transformation of spectra between spherical and ellipsoidal approximation is derived. Application of the 2D-DFT on GGMplus gravity maps yields a new degree variance model to degree 90,000. The model is supported by GRACE, GOCE, EGM2008 and forward-modelled gravity at 3 billion land points over all land areas within the SRTM data coverage and provides gravity signal variances at the surface of the topography. The model yields omission errors of 9 mGal for gravity (1.5 cm for geoid effects) at scales of 10 km, 4 mGal (1 mm) at 2-km scales, and 2 mGal (0.2 mm) at 1-km scales.

  8. The Art of George Morrison and Allan Houser: The Development and Impact of Native Modernism

    ERIC Educational Resources Information Center

    Montiel, Anya

    2005-01-01

    The idea for a retrospective on George Morrison and Allan Houser as one of the inaugural exhibitions at the National Museum of the American Indian (NMAI) came from the NMAI curator of contemporary art, Truman Lowe. An artist and sculptor himself, Lowe knew both artists personally and saw them as mentors and visionaries. Lowe advised an exhibition…

  9. Where Were the Whistleblowers? The Case of Allan McDonald and Roger Boisjoly.

    ERIC Educational Resources Information Center

    Stewart, Lea P.

    Employees who "blow the whistle" on their company because they believe it is engaged in practices that are illegal, immoral, or harmful to the public, often face grave consequences for their actions, including demotion, harassment, forced resignation, or termination. The case of Allan McDonald and Roger Boisjoly, engineers who blew the whistle on…

  10. Horror from the Soul--Gothic Style in Allan Poe's Horror Fictions

    ERIC Educational Resources Information Center

    Sun, Chunyan

    2015-01-01

    Edgar Allan Poe made a tremendous contribution to horror fiction. Poe's inheritance of the gothic fiction and American literary traditions, combined with his life experience, forms the background of his horror fiction. He inherited the gothic tradition and innovated on it, so as to penetrate the subconscious. Poe's horror…

  11. European Studies as Answer to Allan Bloom's "The Closing of the American Mind."

    ERIC Educational Resources Information Center

    Macdonald, Michael H.

    European studies can provide a solution to several of the issues raised in Allan Bloom's "The Closing of the American Mind." European studies pursue the academic quest for what is truth, what is goodness, and what is beauty. In seeking to answer these questions, the Greeks were among the first to explore many of humanity's problems and their…

  12. Allan M. Freedman, LLB: a lawyer’s gift to Canadian chiropractors

    PubMed Central

    Brown, Douglas M.

    2007-01-01

    This paper reviews the leadership role, contributions, accolades, and impact of Professor Allan Freedman through a 30 year history of service to CMCC and the chiropractic profession in Canada. Professor Freedman has served as an educator, philanthropist and also as legal counsel. His influence on chiropractic organizations and chiropractors during this significant period in the profession is discussed. PMID:18060008

  13. Observation, Inference, and Imagination: Elements of Edgar Allan Poe's Philosophy of Science

    ERIC Educational Resources Information Center

    Gelfert, Axel

    2014-01-01

    Edgar Allan Poe's standing as a literary figure, who drew on (and sometimes dabbled in) the scientific debates of his time, makes him an intriguing character for any exploration of the historical interrelationship between science, literature and philosophy. His sprawling "prose-poem" "Eureka" (1848), in particular, has…

  14. An Interview with Allan Wigfield: A Giant on Research on Expectancy Value, Motivation, and Reading Achievement

    ERIC Educational Resources Information Center

    Bembenutty, Hefer

    2012-01-01

    This article presents an interview with Allan Wigfield, professor and chair of the Department of Human Development and distinguished scholar-teacher at the University of Maryland. He has authored more than 100 peer-reviewed journal articles and book chapters on children's motivation and other topics. He is a fellow of Division 15 (Educational…

  15. Effect of the viral protease on the dynamics of bacteriophage HK97 maturation intermediates characterized by variance analysis of cryo EM particle ensembles.

    PubMed

    Gong, Yunye; Veesler, David; Doerschuk, Peter C; Johnson, John E

    2016-03-01

    Cryo EM structures of maturation-intermediate Prohead I of bacteriophage HK97 with (PhI(Pro+)) and without (PhI(Pro-)) the viral protease packaged have been reported (Veesler et al., 2014). In spite of PhI(Pro+) containing an additional ∼ 100 × 24 kD of protein, the two structures appeared identical although the two particles have substantially different biochemical properties, e.g., PhI(Pro-) is less stable to disassembly conditions such as urea. Here the same cryo EM images are used to characterize the spatial heterogeneity of the particles at 17 Å resolution by variance analysis and show that PhI(Pro-) has roughly twice the standard deviation of PhI(Pro+). Furthermore, the greatest differences in standard deviation are present in the region where the δ-domain, not seen in X-ray crystallographic structures or fully seen in cryo EM, is expected to be located. Thus the presence of the protease appears to stabilize the δ-domain which the protease will eventually digest. PMID:26724602

  16. Method of median semi-variance for the analysis of left-censored data: comparison with other techniques using environmental data.

    PubMed

    Zoffoli, Hugo José Oliveira; Varella, Carlos Alberto Alves; do Amaral-Sobrinho, Nelson Moura Brasil; Zonta, Everaldo; Tolón-Becerra, Alfredo

    2013-11-01

    In environmental monitoring, variables with analytically non-detected values are commonly encountered. For the statistical evaluation of these data, most of the methods that produce a less biased performance require specific computer programs. In this paper, a statistical method based on the median semi-variance (SemiV) is proposed to estimate the position and spread statistics in a dataset with single left-censoring. The performances of the SemiV method and 12 other statistical methods are evaluated using real and complete datasets. The performances of all the methods are influenced by the percentage of censored data. In general, the simple substitution and deletion methods showed biased performance, with exceptions for L/2, Inter and L/√2 methods that can be used with caution under specific conditions. In general, the SemiV method and other parametric methods showed similar performances and were less biased than other methods. The SemiV method is a simple and accurate procedure that can be used in the analysis of datasets with less than 50% of left-censored data. PMID:23830887

  17. Estimation of velocity uncertainties from GPS time series: Examples from the analysis of the South African TrigNet network

    NASA Astrophysics Data System (ADS)

    Hackl, M.; Malservisi, R.; Hugentobler, U.; Wonnacott, R.

    2011-11-01

    We present a method to derive velocity uncertainties from GPS position time series that are affected by time-correlated noise. This method is based on the Allan variance, which is widely used in the estimation of oscillator stability and requires neither spectral analysis nor maximum likelihood estimation (MLE). The Allan variance of the rate (AVR) is calculated in the time domain and hence is not too sensitive to gaps in the time series. We derived analytical expressions of the AVR for different kinds of noises like power law noise, white noise, flicker noise, and random walk and found an expression for the variance produced by an annual signal. These functional relations form the basis of error models that have to be fitted to the AVR in order to estimate the velocity uncertainty. Finally, we applied the method to the South Africa GPS network TrigNet. Most time series show noise characteristics that can be modeled by a power law noise plus an annual signal. The method is computationally very cheap, and the results are in good agreement with the ones obtained by methods based on MLE.
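    The Allan variance of the rate (AVR) used here can be computed from second differences of the position series: the rate over an interval of length tau is the endpoint difference divided by tau, and AVR is half the mean-squared difference of rates over adjacent intervals. The sketch below is one plausible NumPy reading of that estimator, not the authors' actual code.

```python
import numpy as np

def allan_variance_of_rate(x, dt, m):
    """AVR at tau = m*dt: 0.5 * <(r2 - r1)^2>, where r1 and r2 are rates
    over adjacent intervals of length tau. With endpoint rates this
    reduces to second differences of the position series x."""
    x = np.asarray(x, dtype=float)
    tau = m * dt
    d2 = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]   # second differences, lag m
    return 0.5 * np.mean((d2 / tau) ** 2)

# demo: white position noise with sigma = 1 (arbitrary units), for which
# the analytical value is AVR(tau) = 3 * sigma^2 / tau^2
rng = np.random.default_rng(7)
x = rng.standard_normal(200_000)
avr_5 = allan_variance_of_rate(x, dt=1.0, m=5)
```

    Fitting such analytical AVR expressions for power-law noise, flicker noise, random walk, and an annual term to the empirical curve, as the abstract describes, then yields the velocity uncertainty.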

  18. Variational bayesian method of estimating variance components.

    PubMed

    Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi

    2016-07-01

    We developed a Bayesian analysis approach by using a variational inference method, a so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed strong bias toward overestimation of genetic variance for the variational Bayesian method in the case of low heritability and low population size, and less bias was detected with larger population sizes in both methods examined. No differences in the estimates of variance components between the variational Bayesian method and Gibbs sampling were found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with the Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances of the variational Bayesian method were lower than those of the method using Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with the method using Gibbs sampling. PMID:26877207

  19. Mutations in MCT8 in patients with Allan-Herndon-Dudley-syndrome affecting its cellular distribution.

    PubMed

    Kersseboom, Simone; Kremers, Gert-Jan; Friesema, Edith C H; Visser, W Edward; Klootwijk, Wim; Peeters, Robin P; Visser, Theo J

    2013-05-01

    Monocarboxylate transporter 8 (MCT8) is a thyroid hormone (TH)-specific transporter. Mutations in the MCT8 gene are associated with Allan-Herndon-Dudley Syndrome (AHDS), consisting of severe psychomotor retardation and disturbed TH parameters. To study the functional consequences of different MCT8 mutations in detail, we combined functional analysis in different cell types with live-cell imaging of the cellular distribution of seven mutations that we identified in patients with AHDS. We used two cell models to study the mutations in vitro: 1) transiently transfected COS1 and JEG3 cells, and 2) stably transfected Flp-in 293 cells expressing a MCT8-cyan fluorescent protein construct. All seven mutants were expressed at the protein level and showed a defect in T3 and T4 transport in uptake and metabolism studies. Three mutants (G282C, P537L, and G558D) had residual uptake activity in Flp-in 293 and COS1 cells, but not in JEG3 cells. Four mutants (G221R, P321L, D453V, P537L) were expressed at the plasma membrane. The mobility in the plasma membrane of P537L was similar to WT, but the mobility of P321L was altered. The other mutants studied (insV236, G282C, G558D) were predominantly localized in the endoplasmic reticulum. In essence, loss of function by MCT8 mutations can be divided into two groups: mutations that result in partial or complete loss of transport activity (G221R, P321L, D453V, P537L) and mutations that mainly disturb protein expression and trafficking (insV236, G282C, G558D). The cell type-dependent results suggest that MCT8 mutations in AHDS patients may have tissue-specific effects on TH transport probably caused by tissue-specific expression of yet unknown MCT8-interacting proteins. PMID:23550058

  20. Cosmology without cosmic variance

    SciTech Connect

    Bernstein, Gary M.; Cai, Yan -Chuan

    2011-10-01

    The growth of structures in the Universe is described by a function G that is predicted by the combination of the expansion history of the Universe and the laws of gravity within it. We examine the improvements in constraints on G that are available from the combination of a large-scale galaxy redshift survey with a weak gravitational lensing survey of background sources. We describe a new combination of such observations that in principle yields a measure of the growth rate that is free of sample variance, i.e. the uncertainty in G can be reduced without bound by increasing the number of redshifts obtained within a finite survey volume. The addition of background weak lensing data to a redshift survey increases information on G by an amount equivalent to a 10-fold increase in the volume of a standard redshift-space distortion measurement - if the lensing signal can be measured to sub-per cent accuracy. This argues that a combined lensing and redshift survey over a common low-redshift volume of the Universe is a more powerful test of general relativity than an isolated redshift survey over larger volume at high redshift, especially as surveys begin to cover most of the available sky.

  2. Minimum variance beamformer weights revisited.

    PubMed

    Moiseev, Alexander; Doesburg, Sam M; Grunau, Ruth E; Ribary, Urs

    2015-10-15

    Adaptive minimum variance beamformers are widely used analysis tools in MEG and EEG. When the target brain activity presents in the form of spatially localized responses, the procedure usually involves two steps. First, positions and orientations of the sources of interest are determined. Second, the filter weights are calculated and source time courses reconstructed. This last step is the object of the current study. Despite different approaches utilized at the source localization stage, basic expressions for the weights have the same form, dictated by the minimum variance condition. These classic expressions involve covariance matrix of the measured field, which includes contributions from both the sources of interest and the noise background. We show analytically that the same weights can alternatively be obtained, if the full field covariance is replaced with that of the noise, provided the beamformer points to the true sources precisely. In practice, however, a certain mismatch is always inevitable. We show that such mismatch results in partial suppression of the true sources if the traditional weights are used. To avoid this effect, the "alternative" weights based on properly estimated noise covariance should be applied at the second, source time course reconstruction step. We demonstrate mathematically and using simulated and real data that in many situations the alternative weights provide significantly better time course reconstruction quality than the traditional ones. In particular, they a) improve source-level SNR and yield more accurately reconstructed waveforms; b) provide more accurate estimates of inter-source correlations; and c) reduce the adverse influence of the source correlations on the performance of single-source beamformers, which are used most often. Importantly, the alternative weights come at no additional computational cost, as the structure of the expressions remains the same. PMID:26143207
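    The two weight variants discussed here differ only in which covariance enters the classic unit-gain minimum-variance formula, w = C⁻¹l / (lᵀC⁻¹l). A minimal NumPy simulation (hypothetical lead field and noise, not the paper's data) contrasts the traditional weights, built from the full measurement covariance C, with the alternative weights built from the noise covariance N:

```python
import numpy as np

def lcmv_weights(cov, lead):
    """Unit-gain minimum-variance beamformer weights for a single source
    with lead-field vector `lead`."""
    ci = np.linalg.inv(cov)
    return ci @ lead / (lead @ ci @ lead)

rng = np.random.default_rng(0)
n_ch, n_t = 32, 5000
lead = rng.standard_normal(n_ch)                       # hypothetical lead field
s = np.sin(2 * np.pi * 7 * np.arange(n_t) / 1000.0)    # 7 Hz source time course
noise = rng.standard_normal((n_ch, n_t))
data = np.outer(lead, s) + noise                       # sensor-level recording

C = data @ data.T / n_t    # full measurement covariance (traditional weights)
N = noise @ noise.T / n_t  # noise-only covariance (alternative weights)

w_trad = lcmv_weights(C, lead)
w_alt = lcmv_weights(N, lead)
s_hat = w_alt @ data       # reconstructed source time course
```

    Both variants satisfy the unit-gain constraint w·l = 1 and coincide when the beamformer points exactly at the true source; the abstract's argument is that under the inevitable pointing mismatch the noise-based weights avoid partial suppression of the true source. In practice N must be estimated, e.g. from pre-stimulus baseline data.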

  3. Characterizing nonconstant instrumental variance in emerging miniaturized analytical techniques.

    PubMed

    Noblitt, Scott D; Berg, Kathleen E; Cate, David M; Henry, Charles S

    2016-04-01

    Measurement variance is a crucial aspect of quantitative chemical analysis. Variance directly affects important analytical figures of merit, including detection limit, quantitation limit, and confidence intervals. Most reported analyses for emerging analytical techniques implicitly assume constant variance (homoskedasticity) by using unweighted regression calibrations. Despite the assumption of constant variance, it is known that most instruments exhibit heteroskedasticity, where variance changes with signal intensity. Ignoring nonconstant variance results in suboptimal calibrations, invalid uncertainty estimates, and incorrect detection limits. Three techniques where homoskedasticity is often assumed were examined in this work to evaluate whether heteroskedasticity had a significant quantitative impact: naked-eye, distance-based detection using paper-based analytical devices (PADs), cathodic stripping voltammetry (CSV) with disposable carbon-ink electrode devices, and microchip electrophoresis (MCE) with conductivity detection. Despite these techniques representing a wide range of chemistries and precision, heteroskedastic behavior was confirmed for each. The general variance forms were analyzed, and recommendations for accounting for nonconstant variance were discussed. Monte Carlo simulations of instrument responses were performed to quantify the benefits of weighted regression, and the sensitivity to uncertainty in the variance function was tested. Results show that heteroskedasticity should be considered during development of new techniques; even moderate uncertainty (30%) in the variance function still results in weighted regression outperforming unweighted regressions. We recommend utilizing the power model of variance because it is easy to apply, requires little additional experimentation, and produces higher-precision results and more reliable uncertainty estimates than assuming homoskedasticity. PMID:26995641
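    The recommended power-model weighting can be illustrated with a short simulated calibration. In the sketch below (synthetic data with hypothetical parameter values, not the paper's measurements), var = a·signal^b is fitted to replicate variances on a log-log scale, and the resulting per-point standard deviations supply the weights for a weighted linear fit:

```python
import numpy as np

rng = np.random.default_rng(1)
conc = np.repeat(np.linspace(1.0, 100.0, 8), 10)   # 8 standards, 10 replicates
signal_true = 3.0 * conc + 5.0
sd_true = 0.05 * signal_true                       # sd grows with signal
y = signal_true + rng.normal(scale=sd_true)        # heteroskedastic responses

# power model of variance: var = a * signal^b, fitted to replicate statistics
levels = np.unique(conc)
mean_sig = np.array([y[conc == c].mean() for c in levels])
var_rep = np.array([y[conc == c].var(ddof=1) for c in levels])
b, log_a = np.polyfit(np.log(mean_sig), np.log(var_rep), 1)

# weighted regression; np.polyfit expects w = 1/sigma (not 1/sigma**2)
sd_hat = np.sqrt(np.exp(log_a) * np.abs(y) ** b)
slope_w, icept_w = np.polyfit(conc, y, 1, w=1.0 / sd_hat)
slope_u, icept_u = np.polyfit(conc, y, 1)          # unweighted, for contrast
```

    Because sd here is proportional to the signal, the fitted exponent b should land near 2. Both fits are unbiased, but the weighted one gives tighter, more reliable behavior near the detection limit, where the low-signal points would otherwise be drowned out by the noisy high-signal ones.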

  4. Least-Squares Analysis of Phosphorus Soil Sorption Data with Weighting from Variance Function Estimation: A Statistical Case for the Freundlich Isotherm

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Phosphorus sorption data for soil of the Pembroke classification are recorded at high replication — 10 experiments at each of 7 initial concentrations — for characterizing the data error structure through variance function estimation. The results permit the assignment of reliable weights for the su...

  5. VARIANCE OF GEOSTATISTICIANS

    EPA Science Inventory

    Different individuals will take different approaches to the analysis and interpretation of data. This study attempted to quantify the effect of such individual differences on the quality of geostatistical spatial estimates. Identical spatial data sets were sent to 12 investigators,...

  6. Cosmic-ray-produced Cl-36 and Mn-53 in Allan Hills-77 meteorites

    NASA Astrophysics Data System (ADS)

    Nishiizumi, K.; Murrell, M. T.; Arnold, J. R.; Elmore, D.; Ferraro, R. D.; Gove, H. E.; Finkel, R. C.

    1981-01-01

    Cosmic-ray-produced Mn-53 has been determined by neutron activation in nine Allan Hills-77 meteorites. Additionally, Cl-36 has been measured in seven of these objects using tandem accelerator mass spectrometry. These results, along with C-14 and Al-26 concentrations determined elsewhere, yield terrestrial ages ranging from 10,000 to 700,000 years. Weathering was not found to result in Mn-53 loss.

  7. Analysis of speech-related variance in rapid event-related fMRI using a time-aware acquisition system.

    PubMed

    Mehta, S; Grabowski, T J; Razavi, M; Eaton, B; Bolinger, L

    2006-02-15

    Speech production introduces signal changes in fMRI data that can mimic or mask the task-induced BOLD response. Rapid event-related designs with variable ISIs address these concerns by minimizing the correlation of task and speech-related signal changes without sacrificing efficiency; however, the increase in residual variance due to speech still decreases statistical power and must be explicitly addressed primarily through post-processing techniques. We investigated the timing, magnitude, and location of speech-related variance in an overt picture naming fMRI study with a rapid event-related design, using a data acquisition system that time-stamped image acquisitions, speech, and a pneumatic belt signal on the same clock. Using a spectral subtraction algorithm to remove scanner gradient noise from recorded speech, we related the timing of speech, stimulus presentation, chest wall movement, and image acquisition. We explored the relationship of an extended speech event time course and respiration on signal variance by performing a series of voxelwise regression analyses. Our results demonstrate that these effects are spatially heterogeneous, but their anatomic locations converge across subjects. Affected locations included basal areas (orbitofrontal, mesial temporal, brainstem), areas adjacent to CSF spaces, and lateral frontal areas. If left unmodeled, speech-related variance can result in regional detection bias that affects some areas critically implicated in language function. The results establish the feasibility of detecting and mitigating speech-related variance in rapid event-related fMRI experiments with single word utterances. They further demonstrate the utility of precise timing information about speech and respiration for this purpose. PMID:16412665

  8. Further Insights into the Allan-Herndon-Dudley Syndrome: Clinical and Functional Characterization of a Novel MCT8 Mutation

    PubMed Central

    Yoon, Grace; Visser, Theo J.

    2015-01-01

    Background Mutations in the thyroid hormone (TH) transporter MCT8 have been identified as the cause for Allan-Herndon-Dudley Syndrome (AHDS), characterized by severe psychomotor retardation and altered TH serum levels. Here we report a novel MCT8 mutation identified in 4 generations of one family, and its functional characterization. Methods Proband and family members were screened for 60 genes involved in X-linked cognitive impairment and the MCT8 mutation was confirmed. Functional consequences of MCT8 mutations were studied by analysis of [125I]TH transport in fibroblasts and transiently transfected JEG3 and COS1 cells, and by subcellular localization of the transporter. Results The proband and a male cousin demonstrated clinical findings characteristic of AHDS. Serum analysis showed high T3, low rT3, and normal T4 and TSH levels in the proband. An MCT8 mutation (c.869C>T; p.S290F) was identified in the proband, his cousin, and several female carriers. Functional analysis of the S290F mutant showed decreased TH transport, metabolism and protein expression in the three cell types, whereas the S290A mutation had no effect. Interestingly, both uptake and efflux of T3 and T4 were impaired in fibroblasts of the proband, compared to his healthy brother. However, no effect of the S290F mutation was observed on TH efflux from COS1 and JEG3 cells. Immunocytochemistry showed plasma membrane localization of wild-type MCT8 and the S290A and S290F mutants in JEG3 cells. Conclusions We describe a novel MCT8 mutation (S290F) in 4 generations of a family with Allan-Herndon-Dudley Syndrome. Functional analysis demonstrates loss-of-function of the MCT8 transporter. Furthermore, our results indicate that the function of the S290F mutant is dependent on cell context. Comparison of the S290F and S290A mutants indicates that it is not the loss of Ser but its substitution with Phe that leads to S290F dysfunction. PMID:26426690

  9. Measurements of Ultra-Stable Oscillator (USO) Allan Deviations in Space

    NASA Technical Reports Server (NTRS)

    Enzer, Daphna G.; Klipstein, William M.; Wang, Rabi T.; Dunn, Charles E.

    2013-01-01

    Researchers have used data from the GRAIL mission to the Moon to make the first in-flight verification of ultra-stable oscillators (USOs) with Allan deviation below 10(exp -13) for 1-to-100-second averaging times. USOs are flown in space to provide stable timing and/or navigation signals for a variety of different science and programmatic missions. The Gravity Recovery and Interior Laboratory (GRAIL) mission is flying twin spacecraft, each with its own USO and with a Ka-band crosslink used to measure range fluctuations. Data from this crosslink can be combined in such a way as to give the relative time offsets of the two spacecraft's USOs and to calculate the Allan deviation to describe the USOs' combined performance while orbiting the Moon. Researchers find the first direct in-space Allan deviations below 10(exp -13) for 1-to-100-second averaging times comparable to pre-launch data, and better than measurements from ground tracking of an X-band carrier coherent with the USO. Fluctuations in Earth's atmosphere limit measurement performance in direct-to-Earth links. In-flight USO performance verification was also performed for GRAIL's parent mission, the Gravity Recovery and Climate Experiment (GRACE), using both K-band and Ka-band crosslinks.
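The Allan deviation reported above can be estimated from evenly spaced time-offset (phase) samples with the standard overlapping second-difference estimator. A minimal Python sketch for generic phase data, not GRAIL's actual crosslink processing:

```python
import numpy as np

def overlapping_allan_deviation(phase, tau0, m):
    """Overlapping Allan deviation at tau = m * tau0.

    phase : time-error samples x_i (seconds), evenly spaced tau0 apart
    tau0  : sampling interval (seconds)
    m     : averaging factor
    """
    x = np.asarray(phase, dtype=float)
    # Overlapping second differences of phase at stride m:
    # x_{i+2m} - 2*x_{i+m} + x_i
    d2 = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]
    # Allan variance, then deviation
    avar = np.mean(d2 ** 2) / (2.0 * (m * tau0) ** 2)
    return np.sqrt(avar)
```

A quick sanity check: a constant frequency offset (linear phase ramp) gives zero Allan deviation, while a linear frequency drift d gives d*tau/sqrt(2).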

  10. Noise variance analysis using a flat panel x-ray detector: A method for additive noise assessment with application to breast CT applications

    SciTech Connect

    Yang Kai; Huang, Shih-Ying; Packard, Nathan J.; Boone, John M.

    2010-07-15

    Purpose: A simplified linear model approach was proposed to accurately model the response of a flat panel detector used for breast CT (bCT). Methods: Individual detector pixel mean and variance were measured from bCT projection images acquired both in air and with a polyethylene cylinder, with the detector operating in both fixed low gain and dynamic gain mode. Once the coefficients of the linear model are determined, the fractional additive noise can be used as a quantitative metric to evaluate the system's efficiency in utilizing x-ray photons, including the performance of different gain modes of the detector. Results: Fractional additive noise increases as the object thickness increases or as the radiation dose to the detector decreases. For bCT scan techniques on the UC Davis prototype scanner (80 kVp, 500 views total, 30 frames/s), in the low gain mode, additive noise contributes 21% of the total pixel noise variance for a 10 cm object and 44% for a 17 cm object. With the dynamic gain mode, additive noise only represents approximately 2.6% of the total pixel noise variance for a 10 cm object and 7.3% for a 17 cm object. Conclusions: The existence of the signal-independent additive noise is the primary cause for a quadratic relationship between bCT noise variance and the inverse of radiation dose at the detector. With the knowledge of the additive noise contribution to experimentally acquired images, system modifications can be made to reduce the impact of additive noise and improve the quantum noise efficiency of the bCT system.
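The quadratic dose dependence described in the conclusions can be illustrated with a small synthetic fit. In this sketch (illustrative numbers, not the paper's measurements), quantum noise contributes a term linear in inverse dose and signal-independent additive noise contributes the quadratic term; a least-squares fit separates the two:

```python
import numpy as np

# Synthetic pixel noise variance as a function of inverse detector dose.
# Quantum noise gives a term linear in 1/dose; signal-independent additive
# electronic noise, referred through the gain, gives the quadratic term.
inv_dose = np.linspace(0.5, 5.0, 20)      # arbitrary units of 1/dose
a_true, b_true = 2.0, 0.3                 # assumed quantum / additive coefficients
variance = a_true * inv_dose + b_true * inv_dose ** 2

# Least-squares fit of the quadratic model (no constant term assumed here).
M = np.column_stack([inv_dose, inv_dose ** 2])
(a_fit, b_fit), *_ = np.linalg.lstsq(M, variance, rcond=None)

# Fractional additive-noise contribution at each operating point.
frac_additive = b_fit * inv_dose ** 2 / variance
```

The fitted quadratic coefficient gives the fractional additive-noise contribution at any dose level, analogous to the percentages quoted above for the 10 cm and 17 cm objects.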

  11. Systems Engineering Programmatic Estimation Using Technology Variance

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.

    2000-01-01

    Unique and innovative system programmatic estimation is conducted using the variance of the packaged technologies. Covariance analysis is performed on the subsystems and components comprising the system of interest. Technological "return" and "variation" parameters are estimated. These parameters are combined with the model error to arrive at a measure of system development stability. The resulting estimates provide valuable information concerning the potential cost growth of the system under development.

  12. Systems Engineering Programmatic Estimation Using Technology Variance

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.

    2000-01-01

    Unique and innovative system programmatic estimation is conducted using the variance of the packaged technologies. Covariance analysis is performed on the subsystems and components comprising the system of interest. Technological "return" and "variation" parameters are estimated. These parameters are combined with the model error to arrive at a measure of system development stability. The resulting estimates provide valuable information concerning the potential cost growth of the system under development.

  13. Network Structure and Biased Variance Estimation in Respondent Driven Sampling

    PubMed Central

    Verdery, Ashton M.; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J.

    2015-01-01

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network. PMID:26679927

  14. Network Structure and Biased Variance Estimation in Respondent Driven Sampling.

    PubMed

    Verdery, Ashton M; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J

    2015-01-01

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network. PMID:26679927

  15. Latitude dependence of eddy variances

    NASA Technical Reports Server (NTRS)

    Bowman, Kenneth P.; Bell, Thomas L.

    1987-01-01

    The eddy variance of a meteorological field must tend to zero at high latitudes due solely to the nature of spherical polar coordinates. The zonal averaging operator defines a length scale: the circumference of the latitude circle. When the circumference of the latitude circle is greater than the correlation length of the field, the eddy variance from transient eddies is the result of differences between statistically independent regions. When the circumference is less than the correlation length, the eddy variance is computed from points that are well correlated with each other, and so is reduced. The expansion of a field into zonal Fourier components is also influenced by the use of spherical coordinates. As is well known, a phenomenon of fixed wavelength will have different zonal wavenumbers at different latitudes. Simple analytical examples of these effects are presented along with an observational example from satellite ozone data. It is found that geometrical effects can be important even in middle latitudes.

  16. Hypothesis exploration with visualization of variance

    PubMed Central

    2014-01-01

    Background The Consortium for Neuropsychiatric Phenomics (CNP) at UCLA was an investigation into the biological bases of traits such as memory and response inhibition phenotypes, exploring whether they are linked to syndromes including ADHD, bipolar disorder, and schizophrenia. An aim of the consortium was to move from traditional categorical approaches for psychiatric syndromes towards more quantitative approaches based on large-scale analysis of the space of human variation. It represented an application of phenomics—wide-scale, systematic study of phenotypes—to neuropsychiatry research. Results This paper reports on a system for exploration of hypotheses in data obtained from the LA2K, LA3C, and LA5C studies in CNP. ViVA is a system for exploratory data analysis using novel mathematical models and methods for visualization of variance. An example of these methods is called VISOVA, a combination of visualization and analysis of variance, with the flavor of exploration associated with ANOVA in biomedical hypothesis generation. It permits visual identification of phenotype profiles—patterns of values across phenotypes—that characterize groups. Visualization enables screening and refinement of hypotheses about variance structure of sets of phenotypes. Conclusions The ViVA system was designed for exploration of neuropsychiatric hypotheses by interdisciplinary teams. Automated visualization in ViVA supports ‘natural selection’ on a pool of hypotheses, and permits deeper understanding of the statistical architecture of the data. Large-scale perspective of this kind could lead to better neuropsychiatric diagnostics. PMID:25097666

  17. Noble gases in twenty Yamato H-chondrites: Comparison with Allan Hills chondrites and modern falls

    NASA Technical Reports Server (NTRS)

    Loeken, TH.; Scherer, P.; Schultz, L.

    1993-01-01

    Concentration and isotopic composition of noble gases have been measured in 20 H-chondrites found on the Yamato Mountains ice fields in Antarctica. The distribution of exposure ages as well as of radiogenic He-4 contents is similar to that of H-chondrites collected at the Allan Hills site. Furthermore, a comparison of the noble gas record of Antarctic H-chondrites and finds or falls from non-Antarctic areas gives no support to the suggestion that Antarctic H-chondrites and modern falls derive from differing interplanetary meteorite populations.

  18. The discovery and initial characterization of Allan Hills 81005 - The first lunar meteorite

    NASA Technical Reports Server (NTRS)

    Marvin, U. B.

    1983-01-01

    Antarctic meteorite ALHA81005, discovered in the Allan Hills region of Victoria Land, is a polymict anorthositic breccia which differs from other meteorites in mineralogical and chemical composition but is strikingly similar to lunar highlands soil breccias. The petrologic character and several independent lines of evidence identify ALHA81005 as a meteorite from the moon. Two small clasts of probable mare basalt occur among the highlands lithologies in Thin Section 81005,22. This lunar specimen, which shows relatively minor shock effects, has generated new ideas on the types of planetary samples found on the earth.

  19. Allan C. Gotlib, DC, CM: A worthy Member of the Order of Canada

    PubMed Central

    Brown, Douglas M.

    2016-01-01

    On June 29, 2012, His Excellency the Right Honourable David Johnston, Governor General of Canada, announced 70 new appointments to the Order of Canada. Among them was Dr. Allan Gotlib, who was subsequently installed as a Member of the Order of Canada, in recognition of his contributions to advancing research in the chiropractic profession and its inter-professional integration. This paper attempts an objective view of his career, to substantiate the accomplishments that led to Dr. Gotlib receiving Canada’s highest civilian honour. PMID:27069273

  20. Allan C. Gotlib, DC, CM: A worthy Member of the Order of Canada.

    PubMed

    Brown, Douglas M

    2016-03-01

    On June 29, 2012, His Excellency the Right Honourable David Johnston, Governor General of Canada, announced 70 new appointments to the Order of Canada. Among them was Dr. Allan Gotlib, who was subsequently installed as a Member of the Order of Canada, in recognition of his contributions to advancing research in the chiropractic profession and its inter-professional integration. This paper attempts an objective view of his career, to substantiate the accomplishments that led to Dr. Gotlib receiving Canada's highest civilian honour. PMID:27069273

  1. Simultaneous analysis of large INTEGRAL/SPI datasets: Optimizing the computation of the solution and its variance using sparse matrix algorithms

    NASA Astrophysics Data System (ADS)

    Bouchet, L.; Amestoy, P.; Buttari, A.; Rouet, F.-H.; Chauvin, M.

    2013-02-01

    Nowadays, analyzing and reducing the ever larger astronomical datasets is becoming a crucial challenge, especially for long cumulated observation times. The INTEGRAL/SPI X/γ-ray spectrometer is an instrument for which it is essential to process many exposures at the same time in order to increase the low signal-to-noise ratio of the weakest sources. In this context, the conventional methods for data reduction are inefficient and sometimes not feasible at all. Processing several years of data simultaneously requires computing not only the solution of a large system of equations, but also the associated uncertainties. We aim at reducing the computation time and the memory usage. Since the SPI transfer function is sparse, we have used some popular methods for the solution of large sparse linear systems; we briefly review these methods. We use the Multifrontal Massively Parallel Solver (MUMPS) to compute the solution of the system of equations. We also need to compute the variance of the solution, which amounts to computing selected entries of the inverse of the sparse matrix corresponding to our linear system. This can be achieved through one of the latest features of the MUMPS software that has been partly motivated by this work. In this paper we provide a brief presentation of this feature and evaluate its effectiveness on astrophysical problems requiring the processing of large datasets simultaneously, such as the study of the entire emission of the Galaxy. We used these algorithms to solve the large sparse systems arising from SPI data processing and to obtain both their solutions and the associated variances. In conclusion, thanks to these newly developed tools, processing large datasets arising from SPI is now feasible with both a reasonable execution time and a low memory usage.
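The variance computation described here amounts to extracting selected entries (here, the diagonal) of the inverse of the sparse system matrix. The following scipy sketch illustrates the idea on a toy system; it is a stand-in for MUMPS, whose selected-inverse feature computes such entries far more efficiently at scale:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy sparse SPD system standing in for the SPI normal equations A x = b.
n = 50
A = sp.diags([np.full(n - 1, -1.0), np.full(n, 4.0), np.full(n - 1, -1.0)],
             offsets=[-1, 0, 1], format="csc")
b = np.ones(n)

lu = spla.splu(A)      # sparse factorization (MUMPS plays this role at scale)
x = lu.solve(b)        # the solution, e.g. source intensities

# Variances correspond to diagonal entries of A^{-1}: naively, one
# solve per requested entry (solve A z = e_i and keep z_i).
var = np.array([lu.solve(np.eye(n)[:, i])[i] for i in range(n)])
```

Solving one right-hand side per requested entry, as above, is the naive approach; sparse direct solvers can exploit the structure of the factorization to obtain many selected inverse entries at once without these dense solves.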

  2. The Variance Reaction Time Model

    ERIC Educational Resources Information Center

    Sikstrom, Sverker

    2004-01-01

    The variance reaction time model (VRTM) is proposed to account for various recognition data on reaction time, the mirror effect, receiver-operating-characteristic (ROC) curves, etc. The model is based on simple and plausible assumptions within a neural network: VRTM is a two layer neural network where one layer represents items and one layer…

  3. Variance of a Few Observations

    ERIC Educational Resources Information Center

    Joarder, Anwar H.

    2009-01-01

    This article demonstrates that the variance of three or four observations can be expressed in terms of the range and the first order differences of the observations. A more general result, which holds for any number of observations, is also stated.
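For three observations, one identity of the kind described above is compact: with range r and first-order differences d1, d2 of the sorted data, the sample variance (divisor n-1) equals (r^2 + d1^2 + d2^2)/6. A small numerical check (the particular form here is one way of writing such a result, not necessarily the article's notation):

```python
import numpy as np

def variance_via_differences(x):
    """Sample variance (divisor n-1) of exactly three observations,
    expressed through the range and the first-order differences."""
    a, b, c = sorted(x)
    r = c - a               # range
    d1, d2 = b - a, c - b   # first-order differences of the sorted data
    return (r ** 2 + d1 ** 2 + d2 ** 2) / 6.0

# For (0, 1, 3): r = 3, d1 = 1, d2 = 2, giving (9 + 1 + 4)/6 = 7/3,
# which matches np.var([0, 1, 3], ddof=1).
```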

  4. The natural thermoluminescence of meteorites. V - Ordinary chondrites at the Allan Hills ice fields

    NASA Technical Reports Server (NTRS)

    Benoit, Paul H.; Sears, Hazel; Sears, Derek W. G.

    1993-01-01

    Natural thermoluminescence (TL) data have been obtained for 167 ordinary chondrites from the ice fields in the vicinity of the Allan Hills in Victoria Land, Antarctica, in order to investigate their thermal and radiation history, pairing, terrestrial age, and concentration mechanisms. Natural TL values for meteorites from the Main ice field are fairly low, while the Farwestern field shows a spread with many values 30-80 krad, suggestive of less than 150-ka terrestrial ages. There appear to be trends in TL levels within individual ice fields which are suggestive of directions of ice movement at these sites during the period of meteorite concentration. These directions seem to be confirmed by the orientations of elongation preserved in meteorite pairing groups. The proportion of meteorites with very low natural TL levels at each field is comparable to that observed at the Lewis Cliff site and for modern non-Antarctic falls and is also similar to the fraction of small perihelia orbits calculated from fireball and fall observations. Induced TL data for meteorites from the Allan Hills confirm trends which show that a select group of H chondrites from the Antarctic experienced a different extraterrestrial thermal history to that of non-Antarctic H chondrites.

  5. 10 CFR 851.31 - Variance process.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Variance process. 851.31 Section 851.31 Energy DEPARTMENT OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application. Contractors desiring a variance from a safety and health standard, or portion thereof, may submit a...

  6. 13 CFR 307.22 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Variances. 307.22 Section 307.22....22 Variances. EDA may approve variances to the requirements contained in this subpart, provided such variances: (a) Are consistent with the goals of the Economic Adjustment Assistance program and with an...

  7. 29 CFR 1920.2 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 7 2010-07-01 2010-07-01 false Variances. 1920.2 Section 1920.2 Labor Regulations Relating...' COMPENSATION ACT § 1920.2 Variances. (a) Variances from standards in parts 1915 through 1918 of this chapter may be granted in the same circumstances in which variances may be granted under sections 6(b)...

  8. Variance decomposition in stochastic simulators

    SciTech Connect

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
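A toy version of the idea can be sketched for a birth-death network: each reaction channel is driven by its own independent stream of unit exponentials (the next-reaction form of the standardized-Poisson representation), and a pick-freeze estimator then attributes output variance to individual channels. This is an illustrative sketch under assumed parameters, not the authors' algorithm:

```python
import numpy as np

def birth_death(rng_birth, rng_death, t_end=5.0, lam=10.0, mu=1.0):
    """Birth-death count at t_end via the next-reaction method; each
    channel consumes its own independent stream of unit exponentials,
    so the channel randomness is cleanly separated."""
    x, t = 0, 0.0
    T = [0.0, 0.0]                                   # integrated propensities
    P = [rng_birth.exponential(), rng_death.exponential()]
    while True:
        a = [lam, mu * x]                            # channel propensities
        dt = [(P[k] - T[k]) / a[k] if a[k] > 0 else np.inf for k in (0, 1)]
        k = 0 if dt[0] <= dt[1] else 1
        if t + dt[k] > t_end:
            return x
        t += dt[k]
        T = [T[j] + a[j] * dt[k] for j in (0, 1)]
        x += 1 if k == 0 else -1
        P[k] += (rng_birth if k == 0 else rng_death).exponential()

def sample(seed_birth, seed_death):
    return birth_death(np.random.default_rng(seed_birth),
                       np.random.default_rng(seed_death))

# Pick-freeze estimate of the first-order variance contribution of the
# birth channel: correlate runs that share the birth stream but use
# independent death streams.
N = 1000
master = np.random.default_rng(0)
seeds = master.integers(0, 2**31 - 1, size=(N, 3))
fA = np.array([sample(s[0], s[1]) for s in seeds], dtype=float)
fB = np.array([sample(s[0], s[2]) for s in seeds], dtype=float)
S_birth = (np.mean(fA * fB) - fA.mean() * fB.mean()) / fA.var()
```

Freezing the death stream instead yields that channel's first-order index, and the shortfall of the indices' sum from one measures the importance of channel interactions.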

  9. Variance decomposition in stochastic simulators

    NASA Astrophysics Data System (ADS)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  10. Neutrino mass without cosmic variance

    NASA Astrophysics Data System (ADS)

    LoVerde, Marilena

    2016-05-01

    Measuring the absolute scale of the neutrino masses is one of the most exciting opportunities available with near-term cosmological data sets. Two quantities that are sensitive to neutrino mass, scale-dependent halo bias b(k) and the linear growth parameter f(k) inferred from redshift-space distortions, can be measured without cosmic variance. Unlike the amplitude of the matter power spectrum, which always has a finite error, the error on b(k) and f(k) continues to decrease as the number density of tracers increases. This paper presents forecasts for statistics of galaxy and lensing fields that are sensitive to neutrino mass via b(k) and f(k). The constraints on neutrino mass from the auto- and cross-power spectra of spectroscopic and photometric galaxy samples are weakened by scale-dependent bias unless a very high density of tracers is available. In the high-density limit, using multiple tracers allows cosmic variance to be beaten, and the forecasted errors on neutrino mass shrink dramatically. In practice, beating the cosmic-variance errors on neutrino mass with b(k) will be a challenge, but this signal is nevertheless a new probe of neutrino effects on structure formation that is interesting in its own right.

  11. Onomastic Mirroring: "The Closing of the American Mind" by Allan Bloom and "Lives on the Boundary" by Mike Rose.

    ERIC Educational Resources Information Center

    Heit, Karl

    Although Allan Bloom in "The Closing of the American Mind" and Mike Rose in "Lives on the Boundary" reveal an almost endless list of obvious differences of perspective on literacy and higher education in America, both take divergent yet similar routes to create a permanent place for liberal education. Both Bloom and Rose use the "Gothic Cathedral"…

  12. William Bennett, Allan Bloom, E. D. Hirsch, Jr.: "Great Nature Has Another Thing to Do to You and Me...."

    ERIC Educational Resources Information Center

    Standley, Fred

    1988-01-01

    Examines the views of William Bennett, Allan Bloom, and E. D. Hirsch, Jr. Challenges the relevance of these views in the current milieu, and suggests three more relevant considerations: (1) the myth of the canon; (2) the effects of literary theory; and (3) the effects of the newfound emphasis on rhetoric and composition. (MS)

  13. Mesoscale Gravity Wave Variances from AMSU-A Radiances

    NASA Technical Reports Server (NTRS)

    Wu, Dong L.

    2004-01-01

    A variance analysis technique is developed here to extract gravity wave (GW) induced temperature fluctuations from NOAA AMSU-A (Advanced Microwave Sounding Unit-A) radiance measurements. By carefully removing the instrument/measurement noise, the algorithm can produce reliable GW variances with a minimum detectable value as small as 0.1 K^2. Preliminary analyses with AMSU-A data show that GW variance maps in the stratosphere have very similar distributions to those found with the UARS MLS (Upper Atmosphere Research Satellite Microwave Limb Sounder). However, the AMSU-A offers better horizontal and temporal resolution for observing regional GW variability, such as activity over sub-Antarctic islands.
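The noise-removal step can be illustrated with synthetic along-track data: if the instrument noise is uncorrelated from sample to sample while the wave signal varies slowly, the noise variance can be estimated from first differences and subtracted from the total. This sketch uses assumed numbers, not AMSU-A instrument characteristics:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic along-track brightness temperatures: a slowly varying wave
# signal plus white instrument noise with an assumed NEDT-like sigma.
n = 1000
wave = 0.5 * np.sin(2 * np.pi * np.arange(n) / 40.0)   # GW fluctuation, K
noise_sigma = 0.3                                      # assumed noise, K
tb = wave + rng.normal(0.0, noise_sigma, n)

# For uncorrelated noise riding on a smooth signal, differencing doubles
# the noise variance while nearly cancelling the wave, so Var(diff)/2
# estimates the instrument noise variance.
noise_var_est = np.var(np.diff(tb)) / 2.0
gw_var_est = np.var(tb) - noise_var_est                # GW variance, K^2
```

Here the recovered GW variance is close to the true wave variance of 0.125 K^2, even though the noise variance (0.09 K^2) is of comparable size, which is the essence of pushing the detection floor below the raw noise level.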

  14. Investigations into an unknown organism on the martian meteorite Allan Hills 84001

    NASA Technical Reports Server (NTRS)

    Steele, A.; Goddard, D. T.; Stapleton, D.; Toporski, J. K.; Peters, V.; Bassinger, V.; Sharples, G.; Wynn-Williams, D. D.; McKay, D. S.

    2000-01-01

    Examinations of fracture surfaces near the fusion crust of the martian meteorite Allan Hills (ALH) 84001 have been conducted using scanning electron microscopy (SEM) and atomic force microscopy (AFM) and have revealed structures strongly resembling mycelium. These structures were compared with similar structures found in Antarctic cryptoendolithic communities. On morphology alone, we conclude that these features are not only terrestrial in origin but probably belong to a member of the Actinomycetales, which we consider was introduced during the Antarctic residency of this meteorite. If true, this is the first documented account of terrestrial microbial activity within a meteorite from the Antarctic blue ice fields. These structures, however, do not bear any resemblance to those postulated to be martian biota, although they are a probable source of the organic contaminants previously reported in this meteorite.

  15. Allan Hills 76005 Polymict Eucrite Pairing Group: Curatorial and Scientific Update on a Jointly Curated Meteorite

    NASA Technical Reports Server (NTRS)

    Righter, K.

    2011-01-01

    Allan Hills 76005 (or 765) was collected by the joint US-Japan field search for meteorites in 1976-77. It was described in detail as "pale gray in color and consists of finely divided macrocrystalline pyroxene-rich matrix that contains abundant clastic fragments: (1) Clasts of white, plagioclase-rich rocks. (2) Medium-gray, partly devitrified, cryptocrystalline. (3) Monomineralic fragments and grains of pyroxene, plagioclases, oxide minerals, sulfides, and metal. In overall appearance it is very similar to some lunar breccias." Subsequent studies found a great diversity of basaltic clast textures and compositions, and therefore it is best classified as a polymict eucrite. Samples from the 1976-77, 77-78, and 78-79 field seasons (76, 77, and 78 prefixes) were split between the US and Japan (NIPR). The US specimens are currently at NASA-JSC, the Smithsonian Institution, or the Field Museum in Chicago. After this initial finding of ALH 76005, the next year's team recovered one additional mass, ALH 77302, and four additional masses were found during the third season: ALH 78040, 78132, 78158, and 78165. The joint US-Japan collection effort ended after three years and the US began collecting in the Trans-Antarctic Mountains with the 1979-80 and subsequent field seasons. ALH 79017 and ALH 80102 were recovered in these first two years, and then in the 1981-82 field season, 6 additional masses were recovered from the Allan Hills. It took some time to establish pairing of all of these specimens, but altogether the samples comprise 4292.4 g of material. Here we summarize the scientific findings as well as some curatorial details of how specimens have been subdivided and allocated for study. A detailed summary is also presented in the HED meteorite compendium on the NASA-JSC curation webpage.

  16. Allan-Herndon-Dudley syndrome and the monocarboxylate transporter 8 (MCT8) gene.

    PubMed

    Schwartz, Charles E; May, Melanie M; Carpenter, Nancy J; Rogers, R Curtis; Martin, Judith; Bialer, Martin G; Ward, Jewell; Sanabria, Javier; Marsa, Silvana; Lewis, James A; Echeverri, Roberto; Lubs, Herbert A; Voeller, Kytja; Simensen, Richard J; Stevenson, Roger E

    2005-07-01

    Allan-Herndon-Dudley syndrome was among the first of the X-linked mental retardation syndromes to be described (in 1944) and among the first to be regionally mapped on the X chromosome (in 1990). Six large families with the syndrome have been identified, and linkage studies have placed the gene locus in Xq13.2. Mutations in the monocarboxylate transporter 8 gene (MCT8) have been found in each of the six families. One essential function of the protein encoded by this gene appears to be the transport of triiodothyronine into neurons. Abnormal transporter function is reflected in elevated free triiodothyronine and lowered free thyroxine levels in the blood. Infancy and childhood in the Allan-Herndon-Dudley syndrome are marked by hypotonia, weakness, reduced muscle mass, and delay of developmental milestones. Facial manifestations are not distinctive, but the face tends to be elongated with bifrontal narrowing, and the ears are often simply formed or cupped. Some patients have myopathic facies. Generalized weakness is manifested by excessive drooling, forward positioning of the head and neck, failure to ambulate independently, or ataxia in those who do ambulate. Speech is dysarthric or absent altogether. Hypotonia gives way in adult life to spasticity. The hands exhibit dystonic and athetoid posturing and fisting. Cognitive development is severely impaired. No major malformations occur, intrauterine growth is not impaired, and head circumference and genital development are usually normal. Behavior tends to be passive, with little evidence of aggressive or disruptive behavior. Although clinical signs of thyroid dysfunction are usually absent in affected males, the disturbances in blood levels of thyroid hormones suggest the possibility of systematic detection through screening of high-risk populations. PMID:15889350

  17. The Natural Thermoluminescence of Meteorites. Part 5; Ordinary Chondrites at the Allan Hills Ice Fields

    NASA Technical Reports Server (NTRS)

    Benoit, Paul H.; Sears, Hazel; Sears, Derek W. G.

    1993-01-01

Natural thermoluminescence (TL) data have been obtained for 167 ordinary chondrites from the ice fields in the vicinity of the Allan Hills in Victoria Land, Antarctica, in order to investigate their thermal and radiation history, pairing, terrestrial age, and concentration mechanisms. Using fairly conservative criteria (including natural and induced TL, find location, and petrographic data), the 167 meteorite fragments are thought to represent a maximum of 129 separate meteorites. Natural TL values for meteorites from the Main ice field are fairly low (typically 5-30 krad, indicative of terrestrial ages of approx. 400 ka), while the Far western field shows a spread with many values 30-80 krad, suggestive of terrestrial ages of less than 150 ka. There appear to be trends in TL levels within individual ice fields which are suggestive of directions of ice movement at these sites during the period of meteorite concentration. These directions seem to be confirmed by the orientations of elongation preserved in meteorite pairing groups. The proportion of meteorites with very low natural TL levels (less than 5 krad) at each field is comparable to that observed at the Lewis Cliff site and for modern non-Antarctic falls, and is also similar to the fraction of small-perihelion (less than 0.85 AU) orbits calculated from fireball and fall observations. Induced TL data for meteorites from the Allan Hills confirm trends observed for meteorites collected during the 1977/1978 and 1978/1979 field seasons, which show that a select group of H chondrites from the Antarctic experienced an extraterrestrial thermal history different from that of non-Antarctic H chondrites.

  18. Variance decomposition in stochastic simulators.

    PubMed

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models. PMID:26133418
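The Sobol-Hoeffding decomposition the abstract invokes can be illustrated on a toy problem. The sketch below estimates first-order variance-based sensitivities for a function of two independent noise sources using the standard pick-freeze estimator; it is a generic illustration under assumed standard-normal inputs, not the article's reaction-network algorithm, and the function name is ours.

```python
import numpy as np

def first_order_sobol(f, n=100_000, seed=0):
    """Pick-freeze estimates of the first-order Sobol indices S1, S2 of
    f(z1, z2) with independent standard-normal inputs."""
    rng = np.random.default_rng(seed)
    z1, z2 = rng.standard_normal(n), rng.standard_normal(n)
    z1b, z2b = rng.standard_normal(n), rng.standard_normal(n)
    y = f(z1, z2)
    mu, var = y.mean(), y.var()
    s1 = (np.mean(y * f(z1, z2b)) - mu**2) / var  # keep z1, resample z2
    s2 = (np.mean(y * f(z1b, z2)) - mu**2) / var  # keep z2, resample z1
    return s1, s2
```

For f(z1, z2) = z1 + 2*z2 the exact first-order indices are 0.2 and 0.8, which the estimates approach as n grows.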

  19. 40 CFR 52.2183 - Variance provision.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) South Dakota § 52.2183 Variance provision. The revisions to the variance provisions in Chapter 74:26:01:31.01 of the South Dakota Air...

  20. 40 CFR 52.2183 - Variance provision.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) South Dakota § 52.2183 Variance provision. The revisions to the variance provisions in Chapter 74:26:01:31.01 of the South Dakota Air...

  1. 40 CFR 52.2183 - Variance provision.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) South Dakota § 52.2183 Variance provision. The revisions to the variance provisions in Chapter 74:26:01:31.01 of the South Dakota Air...

  2. Speed Variance and Its Influence on Accidents.

    ERIC Educational Resources Information Center

    Garber, Nicholas J.; Gadirau, Ravi

    A study was conducted to investigate the traffic engineering factors that influence speed variance and to determine to what extent speed variance affects accident rates. Detailed analyses were carried out to relate speed variance with posted speed limit, design speeds, and other traffic variables. The major factor identified was the difference…

  3. 21 CFR 1010.4 - Variances.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Variances. 1010.4 Section 1010.4 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) RADIOLOGICAL HEALTH PERFORMANCE STANDARDS FOR ELECTRONIC PRODUCTS: GENERAL General Provisions § 1010.4 Variances. (a) Criteria for variances. (1) Upon application by...

  4. 40 CFR 52.2183 - Variance provision.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 4 2010-07-01 2010-07-01 false Variance provision. 52.2183 Section 52...) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) South Dakota § 52.2183 Variance provision. The revisions to the variance provisions in Chapter 74:26:01:31.01 of the South Dakota Air...

  5. 40 CFR 142.41 - Variance request.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Variance request. 142.41 Section 142...) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator Under Section 1415(a) of the Act § 142.41 Variance request. A supplier of water may request the granting of...

  6. 10 CFR 851.31 - Variance process.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 4 2014-01-01 2014-01-01 false Variance process. 851.31 Section 851.31 Energy DEPARTMENT OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application..., practices, means, methods, operations, or processes used or proposed to be used by the contractor; and...

  7. 10 CFR 851.31 - Variance process.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 4 2011-01-01 2011-01-01 false Variance process. 851.31 Section 851.31 Energy DEPARTMENT OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application..., practices, means, methods, operations, or processes used or proposed to be used by the contractor; and...

  8. 10 CFR 851.31 - Variance process.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 4 2013-01-01 2013-01-01 false Variance process. 851.31 Section 851.31 Energy DEPARTMENT OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application..., practices, means, methods, operations, or processes used or proposed to be used by the contractor; and...

  9. 10 CFR 851.31 - Variance process.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 4 2012-01-01 2012-01-01 false Variance process. 851.31 Section 851.31 Energy DEPARTMENT OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.31 Variance process. (a) Application..., practices, means, methods, operations, or processes used or proposed to be used by the contractor; and...

  10. 40 CFR 52.2183 - Variance provision.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 4 2011-07-01 2011-07-01 false Variance provision. 52.2183 Section 52.2183 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) South Dakota § 52.2183 Variance provision. The revisions to the variance provisions...

  11. Using "Excel" for White's Test--An Important Technique for Evaluating the Equality of Variance Assumption and Model Specification in a Regression Analysis

    ERIC Educational Resources Information Center

    Berenson, Mark L.

    2013-01-01

    There is consensus in the statistical literature that severe departures from its assumptions invalidate the use of regression modeling for purposes of inference. The assumptions of regression modeling are usually evaluated subjectively through visual, graphic displays in a residual analysis but such an approach, taken alone, may be insufficient…

  12. Assessing the temporal variance of evapotranspiration considering climate and catchment storage factors

    NASA Astrophysics Data System (ADS)

    Zeng, Ruijie; Cai, Ximing

    2015-05-01

Understanding the temporal variance of evapotranspiration (ET) at the catchment scale remains a challenging task, because ET variance results from the complex interactions among climate, soil, vegetation, groundwater and human activities. This study extends the framework for ET variance analysis of Koster and Suarez (1999) by incorporating the water balance and the Budyko hypothesis. ET variance is decomposed into the variance/covariance of precipitation, potential ET, and catchment storage change. The contributions to ET variance from those components are quantified by long-term climate conditions (i.e., precipitation and potential ET) and catchment properties through the Budyko equation. It is found that climate determines ET variance under cool-wet, hot-dry and hot-wet conditions, while catchment storage change and climate together control ET variance under cool-dry conditions. Thus the major factors of ET variance can be categorized based on the conditions of climate and catchment storage change. To demonstrate the analysis, both the inter- and intra-annual ET variances are assessed in the Murray-Darling Basin, and it is found that the framework corrects the over-estimation of ET variance in the arid basin. This study provides an extended theoretical framework to assess temporal ET variance under the impacts of both climate and storage change at the catchment scale.
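The decomposition described above rests on an elementary variance identity for the water balance. The snippet below is a minimal sketch under the simplification ET = P - Q - ΔS (the article's Budyko-based weighting of the terms is not reproduced, and the function name is ours); it returns the variance and covariance contributions, which sum exactly to Var(ET).

```python
import numpy as np

def et_variance_terms(P, Q, dS):
    """Decompose Var(ET) for the water balance ET = P - Q - dS into
    variance and covariance contributions.  A generic identity; the
    article additionally weights terms via the Budyko equation."""
    c = lambda a, b: np.cov(a, b, bias=True)[0, 1]  # population covariance
    return {
        "var_P": np.var(P),
        "var_Q": np.var(Q),
        "var_dS": np.var(dS),
        "-2cov(P,Q)": -2 * c(P, Q),
        "-2cov(P,dS)": -2 * c(P, dS),
        "+2cov(Q,dS)": 2 * c(Q, dS),
    }
```

Summing the dictionary values reproduces np.var(P - Q - dS) to floating-point precision, which makes the relative contribution of each term easy to read off.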

  13. Impact of Damping Uncertainty on SEA Model Response Variance

    NASA Technical Reports Server (NTRS)

    Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand

    2010-01-01

Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However, these techniques do not account for uncertainties in the system properties. In the present paper, uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.
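Propagating parameter uncertainty of this kind is easy to sketch with Monte Carlo sampling. In the illustration below, `response` stands in for an SEA solver evaluated at a sampled damping loss factor; the function name and the uniform sampling bounds are assumptions for the example, not values from the paper.

```python
import numpy as np

def propagate_damping(response, eta_low, eta_high, n=10_000, seed=0):
    """Monte Carlo propagation of damping-loss-factor uncertainty:
    sample eta uniformly within measured bounds and collect the
    statistics of the resulting response."""
    rng = np.random.default_rng(seed)
    etas = rng.uniform(eta_low, eta_high, n)
    r = np.array([response(eta) for eta in etas])
    return r.mean(), r.std()
```

As a sanity check, for a response proportional to 1/eta (vibrational energy falls with damping), the mean over eta ~ U(0.01, 0.02) is ln(2)/0.01, about 69.3.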

  14. Mawson Formation at Allan Hills, Antarctica: Evidence for a Large-scale Phreatomagmatic Caldera

    NASA Astrophysics Data System (ADS)

    Elliot, D. H.; Fortner, E. H.; Elliot, R. J.

    2003-12-01

    At Allan Hills, Transantarctic Mountains, Jurassic Mawson Formation pyroclastic rocks are more than 300 m thick. Previously described as unconformable on older Permian and Triassic Beacon strata, the Mawson is now known to be, at least in part, intrusive. Triassic Feather Formation country rocks at the contact display a zone of in situ brecciation. This is followed inward by a zone of megaclasts (10s of m long) derived from younger Triassic Lashly Formation strata, and then by a structureless, grey, sand-rich breccia which has increasing proportions of pyroclasts laterally and vertically. The grey breccia is overlain by units, up to 10s of m thick, of stratified tuff breccia and lapilli tuff, both of which consist of high proportions of Beacon and dolerite clasts set in a matrix of pyroclasts and sand-sized debris also derived from Beacon rocks. All but the brecciated country rocks are cut by basaltic diatremes, and by tuff-breccia and lapilli-tuff intrusive bodies. Megaclasts are mainly Lashly C strata which were a minimum of 120 m stratigraphically and topographically above any extant country rock, and demonstrate that the Mawson rocks are filling a collapse structure. The sequence of events is interpreted to be: 1) initial phreatic activity, on emplacement of a Ferrar Dolerite sill at depth, causing in situ brecciation; 2) withdrawal of magma and collapse of overlying strata to form a caldera containing megaclasts of Lashly strata; 3) renewed magma emplacement which initially caused phreatic activity but increasingly became phreatomagmatic, and formed the grey breccia by disaggregation of the collapsed Beacon strata; 4) full scale phreatomagmatism that erupted the stratified tuff breccia and lapilli tuff; 5) intrusion of basalt diatremes, bodies of tuff breccia and lapilli tuff, and dolerite plugs and dikes. Away from the mapped area, on the southern arm of Allan Hills, tuff breccia and lapilli tuff are crudely stratified and could be either outflow facies or

  15. Understanding the influence of watershed storage caused by human interferences on ET variance

    NASA Astrophysics Data System (ADS)

    Zeng, R.; Cai, X.

    2014-12-01

Understanding the temporal variance of evapotranspiration (ET) at the watershed scale remains a challenging task, because it is affected by complex climate conditions, soil properties, vegetation, groundwater and human activities. In a changing environment with extensive and intensive human interferences, understanding ET variance and its factors is important for sustainable water resources management. This study presents an analysis of the effect of storage change caused by human activities on ET variance. Irrigation usually dampens ET variance through the use of surface and groundwater; however, excessive irrigation may deplete watershed storage, which changes the coincidence of water availability and energy supply for ET. This study develops a framework by incorporating the water balance and the Budyko hypothesis. It decomposes ET variance into the variances of precipitation, potential ET, and catchment storage change, and their covariances. The contributions to ET variance from the various components are scaled by weighting functions expressed in terms of long-term climate conditions and catchment properties. ET variance is assessed using records from 32 major river basins across the world. It is found that ET variance is dominated by precipitation variance under hot-dry conditions and by evaporative-demand variance under cool-wet conditions, while the coincidence of water and energy supply controls ET variance under moderate climate conditions. Watershed storage change plays an increasingly important role in determining ET variance at relatively shorter time scales. By incorporating storage change caused by human interferences, this framework corrects the over-estimation of ET variance in hot-dry climates and the under-estimation of ET variance in cool-wet climates. Furthermore, classification of the dominant factors of ET variance shows patterns similar to geographic zonation.

  16. The variance of the adjusted Rand index.

    PubMed

    Steinley, Douglas; Brusco, Michael J; Hubert, Lawrence

    2016-06-01

For 30 years, the adjusted Rand index has been the preferred method for comparing two partitions (e.g., clusterings) of a set of observations. Although the index is widely used, little is known about its variability. Herein, the variance of the adjusted Rand index (Hubert & Arabie, 1985) is provided and its properties are explored. It is shown that a normal approximation is appropriate across a wide range of sample sizes and varying numbers of clusters. Further, it is shown that confidence intervals based on the normal distribution have desirable levels of coverage and accuracy. Finally, the first power analysis evaluating the ability to detect differences between two different adjusted Rand indices is provided. (PsycINFO Database Record) PMID:26881693
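For readers who want to experiment, the adjusted Rand index itself is compact to compute from the contingency table of two label vectors. This is a generic implementation of the Hubert and Arabie (1985) index, not the variance formula the article derives:

```python
from collections import Counter
from math import comb

def adjusted_rand_index(a, b):
    """Adjusted Rand index between two partitions given as label
    sequences of equal length (assumes non-degenerate partitions)."""
    n = len(a)
    cells = Counter(zip(a, b))                         # contingency table
    sum_cells = sum(comb(c, 2) for c in cells.values())
    sum_rows = sum(comb(c, 2) for c in Counter(a).values())
    sum_cols = sum(comb(c, 2) for c in Counter(b).values())
    expected = sum_rows * sum_cols / comb(n, 2)
    max_index = (sum_rows + sum_cols) / 2
    return (sum_cells - expected) / (max_index - expected)
```

Identical partitions score 1.0; maximally disagreeing ones can score below 0, which is the chance correction at work.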

  17. Simulation testing of unbiasedness of variance estimators

    USGS Publications Warehouse

    Link, W.A.

    1993-01-01

In this article I address the evaluation of estimators of variance for parameter estimates. Given an unbiased estimator X of a parameter θ, and an estimator V of the variance of X, how does one test (via simulation) whether V is an unbiased estimator of the variance of X? The derivation of the test statistic illustrates the need for care in substituting consistent estimators for unknown parameters.
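A minimal version of such a simulation check: draw many replicates of (X, V), compare the average of V with the empirical variance of X, and form a rough z statistic. The standard-error formula below is a crude normal-theory approximation, not the careful statistic the article derives; all names are illustrative.

```python
import numpy as np

def check_variance_estimator(simulate, n_reps=20_000, seed=0):
    """Monte Carlo check that E[V] matches Var(X).  `simulate(rng)`
    returns one draw (x, v) of an estimator and its variance estimate;
    |z| much larger than 2 suggests V is a biased variance estimator."""
    rng = np.random.default_rng(seed)
    xs, vs = map(np.array, zip(*(simulate(rng) for _ in range(n_reps))))
    diff = vs.mean() - xs.var(ddof=1)
    # crude SE, assuming approximate normality of X
    se = np.sqrt(vs.var(ddof=1) / n_reps
                 + 2 * xs.var(ddof=1) ** 2 / (n_reps - 1))
    return diff / se
```

For the sample mean of 10 standard normals with the textbook estimate V = s²/10 (unbiased), z stays near zero.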

  18. Observation, Inference, and Imagination: Elements of Edgar Allan Poe's Philosophy of Science

    NASA Astrophysics Data System (ADS)

    Gelfert, Axel

    2014-03-01

    Edgar Allan Poe's standing as a literary figure, who drew on (and sometimes dabbled in) the scientific debates of his time, makes him an intriguing character for any exploration of the historical interrelationship between science, literature and philosophy. His sprawling `prose-poem' Eureka (1848), in particular, has sometimes been scrutinized for anticipations of later scientific developments. By contrast, the present paper argues that it should be understood as a contribution to the raging debates about scientific methodology at the time. This methodological interest, which is echoed in Poe's `tales of ratiocination', gives rise to a proposed new mode of—broadly abductive—inference, which Poe attributes to the hybrid figure of the `poet-mathematician'. Without creative imagination and intuition, Science would necessarily remain incomplete, even by its own standards. This concern with imaginative (abductive) inference ties in nicely with his coherentism, which grants pride of place to the twin virtues of Simplicity and Consistency, which must constrain imagination lest it degenerate into mere fancy.

  19. Atmospheric composition 1 million years ago from blue ice in the Allan Hills, Antarctica

    PubMed Central

    Higgins, John A.; Kurbatov, Andrei V.; Spaulding, Nicole E.; Brook, Ed; Introne, Douglas S.; Chimiak, Laura M.; Yan, Yuzhen; Mayewski, Paul A.; Bender, Michael L.

    2015-01-01

    Here, we present direct measurements of atmospheric composition and Antarctic climate from the mid-Pleistocene (∼1 Ma) from ice cores drilled in the Allan Hills blue ice area, Antarctica. The 1-Ma ice is dated from the deficit in 40Ar relative to the modern atmosphere and is present as a stratigraphically disturbed 12-m section at the base of a 126-m ice core. The 1-Ma ice appears to represent most of the amplitude of contemporaneous climate cycles and CO2 and CH4 concentrations in the ice range from 221 to 277 ppm and 411 to 569 parts per billion (ppb), respectively. These concentrations, together with measured δD of the ice, are at the warm end of the field for glacial–interglacial cycles of the last 800 ky and span only about one-half of the range. The highest CO2 values in the 1-Ma ice fall within the range of interglacial values of the last 400 ka but are up to 7 ppm higher than any interglacial values between 450 and 800 ka. The lowest CO2 values are 30 ppm higher than during any glacial period between 450 and 800 ka. This study shows that the coupling of Antarctic temperature and atmospheric CO2 extended into the mid-Pleistocene and demonstrates the feasibility of discontinuously extending the current ice core record beyond 800 ka by shallow coring in Antarctic blue ice areas. PMID:25964367

  20. 40 CFR 59.106 - Variance.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...

  1. 40 CFR 59.106 - Variance.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...

  2. 40 CFR 59.106 - Variance.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...

  3. 40 CFR 59.106 - Variance.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...

  4. 40 CFR 59.106 - Variance.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Automobile Refinish Coatings § 59.106 Variance. (a) Any regulated...

  5. Noise characteristics in DORIS station positions time series derived from IGN-JPL, INASAN and CNES-CLS analysis centres

    NASA Astrophysics Data System (ADS)

    Khelifa, S.

    2014-12-01

Using wavelet transform and Allan variance, we have analysed the weekly position residuals of nine high-latitude DORIS stations in STCD (STation Coordinate Difference) format provided by three analysis centres: IGN-JPL (solution ign11wd01), INASAN (solution ina10wd01) and CNES-CLS (solution lca11wd02), in order to compare the spectral characteristics of their residual noise. The temporal correlations between the three solutions, two by two and station by station, for each component (North, East and Vertical) reveal a high correlation in the horizontal components (North and East). For the North component, the correlation average is about 0.88, 0.81 and 0.79 between, respectively, the IGN-INA, IGN-LCA and INA-LCA solutions; for the East component it is about 0.84, 0.82 and 0.76, respectively. However, the correlations for the Vertical component are moderate, with averages of 0.64, 0.57 and 0.58 for, respectively, the IGN-INA, IGN-LCA and INA-LCA solutions. After removing the trends and seasonal components from the analysed time series, the Allan variance analysis shows that all three solutions are dominated by white noise in all three components (North, East and Vertical). The wavelet transform analysis, using the VisuShrink method with soft thresholding, reveals that the noise level in the LCA solution is lower than in the IGN and INA solutions. Indeed, the standard deviation of the noise across the three components is in the range of 5-11, 5-12 and 4-9 mm for the IGN, INA, and LCA solutions, respectively.
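For reference, the overlapping Allan variance used in this kind of noise characterization can be computed in a few lines. This is a generic sketch (function name and interface ours), not the authors' processing chain:

```python
import numpy as np

def allan_variance(y, m=1):
    """Overlapping Allan variance of equally spaced data y at averaging
    factor m: half the mean squared difference of adjacent m-sample
    averages."""
    y = np.asarray(y, dtype=float)
    if len(y) < 2 * m:
        raise ValueError("series too short for this averaging factor")
    csum = np.cumsum(np.insert(y, 0, 0.0))
    avg = (csum[m:] - csum[:-m]) / m   # running m-sample means
    d = avg[m:] - avg[:-m]             # differences of adjacent means
    return 0.5 * np.mean(d**2)
```

A quick sanity check: for white noise of variance σ², the Allan variance at averaging factor m is σ²/m, so doubling m halves the estimate.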

  6. 20 CFR 654.402 - Variances.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false Variances. 654.402 Section 654.402 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR SPECIAL RESPONSIBILITIES OF THE EMPLOYMENT SERVICE SYSTEM Housing for Agricultural Workers Purpose and Applicability § 654.402 Variances. (a) An employer may apply for a...

  7. 20 CFR 654.402 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Variances. 654.402 Section 654.402 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR SPECIAL RESPONSIBILITIES OF THE EMPLOYMENT SERVICE SYSTEM Housing for Agricultural Workers Purpose and Applicability § 654.402 Variances....

  8. 40 CFR 142.41 - Variance request.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... nature and duration of variance requested. (b) Relevant analytical results of water quality sampling of... relevant to ability to comply. (3) Analytical results of raw water quality relevant to the variance request... request made under § 142.40(b), a statement that the system will perform monitoring and other...

  9. 40 CFR 142.41 - Variance request.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... nature and duration of variance requested. (b) Relevant analytical results of water quality sampling of... relevant to ability to comply. (3) Analytical results of raw water quality relevant to the variance request... request made under § 142.40(b), a statement that the system will perform monitoring and other...

  10. 40 CFR 142.41 - Variance request.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... nature and duration of variance requested. (b) Relevant analytical results of water quality sampling of... relevant to ability to comply. (3) Analytical results of raw water quality relevant to the variance request... request made under § 142.40(b), a statement that the system will perform monitoring and other...

  11. 21 CFR 1010.4 - Variances.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Variances. 1010.4 Section 1010.4 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) RADIOLOGICAL HEALTH... and Radiological Health, Food and Drug Administration, may grant a variance from one or...

  12. 21 CFR 1010.4 - Variances.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Variances. 1010.4 Section 1010.4 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) RADIOLOGICAL HEALTH... and Radiological Health, Food and Drug Administration, may grant a variance from one or...

  13. 21 CFR 1010.4 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Variances. 1010.4 Section 1010.4 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) RADIOLOGICAL HEALTH... and Radiological Health, Food and Drug Administration, may grant a variance from one or...

  14. On Some Representations of Sample Variance

    ERIC Educational Resources Information Center

    Joarder, Anwar H.

    2002-01-01

The usual formula for variance, which depends on the rounded-off sample mean, lacks precision, especially when computer programs are used for the calculation. The well-known simplification of the total sum of squares is not always beneficial. Since the variance of two observations is easily calculated without the use of a sample mean, and the…
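The two-observation idea extends to any n: the unbiased sample variance equals the sum of squared pairwise differences divided by n(n-1), with no sample mean required. A short sketch (function name ours):

```python
from itertools import combinations

def pairwise_variance(x):
    """Unbiased sample variance without computing the sample mean:
    s^2 = sum over pairs i<j of (x_i - x_j)^2, divided by n*(n-1)."""
    n = len(x)
    return sum((a - b) ** 2 for a, b in combinations(x, 2)) / (n * (n - 1))
```

For n = 2 this reduces to (x1 - x2)^2 / 2, the case mentioned in the abstract, and for larger n it agrees with the usual mean-centered formula.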

  15. 10 CFR 1021.343 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Procedures § 1021.343 Variances. (a) Emergency actions. DOE may take an action without observing all provisions of this part or the CEQ Regulations, in accordance with 40 CFR 1506.11, in emergency situations... 10 Energy 4 2010-01-01 2010-01-01 false Variances. 1021.343 Section 1021.343 Energy DEPARTMENT...

  16. 18 CFR 1304.408 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 2 2010-04-01 2010-04-01 false Variances. 1304.408 Section 1304.408 Conservation of Power and Water Resources TENNESSEE VALLEY AUTHORITY APPROVAL OF... § 1304.408 Variances. The Vice President or the designee thereof is authorized, following...

  17. Nonlinear Epigenetic Variance: Review and Simulations

    ERIC Educational Resources Information Center

    Kan, Kees-Jan; Ploeger, Annemie; Raijmakers, Maartje E. J.; Dolan, Conor V.; van Der Maas, Han L. J.

    2010-01-01

    We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies…

  18. Variance in trace constituents following the final stratospheric warming

    NASA Technical Reports Server (NTRS)

    Hess, Peter

    1990-01-01

    Concentration variations with time in trace stratospheric constituents N2O, CF2Cl2, CFCl3, and CH4 were investigated using samples collected aboard balloons flown over southern France during the summer months of 1977-1979. Data are analyzed using a tracer transport model, and the mechanisms behind the modeled tracer variance are examined. An analysis of the N2O profiles for the month of June showed that a large fraction of the variance reported by Ehhalt et al. (1983) is on an interannual time scale.

  19. Temporal Relation Extraction in Outcome Variances of Clinical Pathways.

    PubMed

    Yamashita, Takanori; Wakata, Yoshifumi; Hamai, Satoshi; Nakashima, Yasuharu; Iwamoto, Yukihide; Franagan, Brendan; Nakashima, Naoki; Hirokawa, Sachio

    2015-01-01

Recently the clinical pathway has advanced with digitization and activity analysis. There are many previous studies on the clinical pathway, but few feed directly into medical practice. We constructed a mind map system that applies a spanning tree. This system can visualize temporal relations in outcome variances and indicate outcomes that affect long-term hospitalization. PMID:26262376

  20. Portfolio optimization with mean-variance model

    NASA Astrophysics Data System (ADS)

    Hoe, Lam Weng; Siew, Lam Weng

    2016-06-01

Investors wish to achieve a target rate of return at the minimum level of risk in their investment. Portfolio optimization is an investment strategy that can be used to minimize portfolio risk while achieving the target rate of return. The mean-variance model has been proposed for portfolio optimization; it is an optimization model that aims to minimize the portfolio risk, measured by the portfolio variance. The objective of this study is to construct the optimal portfolio using the mean-variance model. The data of this study consist of weekly returns of 20 component stocks of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI). The results show that the optimal portfolio composition differs across the stocks. Moreover, investors can obtain the target return at the minimum level of risk with the constructed optimal mean-variance portfolio.
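In the mean-variance model, the global minimum-variance portfolio has the closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1), where Σ is the covariance matrix of asset returns. The sketch below (names ours) computes these weights; it ignores the target-return constraint and allows short positions, so it is an illustration rather than the study's full model.

```python
import numpy as np

def min_variance_weights(returns):
    """Global minimum-variance weights w = S^-1 1 / (1' S^-1 1), with S
    the sample covariance of returns (rows = periods, columns = assets).
    Weights sum to 1; short positions are allowed."""
    cov = np.cov(returns, rowvar=False)
    w = np.linalg.solve(cov, np.ones(cov.shape[0]))
    return w / w.sum()
```

By construction the resulting portfolio variance w'Σw is no larger than that of any other fully invested portfolio, such as equal weighting.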

  1. Carbonates in fractures of Martian meteorite Allan Hills 84001: petrologic evidence for impact origin

    NASA Technical Reports Server (NTRS)

    Scott, E. R.; Krot, A. N.; Yamaguchi, A.

    1998-01-01

Carbonates in Martian meteorite Allan Hills 84001 occur as grains on pyroxene grain boundaries, in crushed zones, and as disks, veins, and irregularly shaped grains in healed pyroxene fractures. Some carbonate disks have tapered Mg-rich edges and are accompanied by smaller, thinner and relatively homogeneous magnesite microdisks. Except for the microdisks, all types of carbonate grains show the same unique chemical zoning pattern on MgCO3-FeCO3-CaCO3 plots. This chemical characteristic and the close spatial association of diverse carbonate types show that all carbonates formed by a similar process. The heterogeneous distribution of carbonates in fractures, the tapered shapes of some disks, and the localized occurrence of Mg-rich microdisks appear to be incompatible with growth from an externally derived CO2-rich fluid that changed in composition over time. These features suggest instead that the fractures were closed as carbonates grew from an internally derived fluid and that the microdisks formed from a residual Mg-rich fluid that was squeezed along fractures. Carbonate in pyroxene fractures is most abundant near grains of plagioclase glass that are located on pyroxene grain boundaries and commonly contain major or minor amounts of carbonate. We infer that carbonates in fractures formed from grain-boundary carbonates associated with plagioclase that were melted by impact and dispersed into the surrounding fractured pyroxene. Carbonates in fractures, which include those studied by McKay et al. (1996), could not have formed at low temperatures and preserved mineralogical evidence for Martian organisms.

  2. Portfolio optimization using median-variance approach

    NASA Astrophysics Data System (ADS)

    Wan Mohd, Wan Rosanisah; Mohamad, Daud; Mohamed, Zulkifli

    2013-04-01

    Optimization models have been applied in many decision-making problems, particularly portfolio selection. Since the introduction of Markowitz's theory of portfolio selection, various approaches based on mathematical programming have been introduced, such as mean-variance, mean-absolute deviation, mean-variance-skewness and conditional value-at-risk (CVaR), mainly to maximize return and minimize risk. However, most of these approaches assume that the data are normally distributed, which is not generally true. As an alternative, in this paper we employ the median-variance approach to improve the portfolio optimization; this approach accommodates both normal and non-normal data. With this representation, we analyze and compare the rate of return and risk of the mean-variance and median-variance portfolios, each constructed from 30 stocks listed on Bursa Malaysia. The results of this study show that the median-variance approach produces a lower risk for each level of return earned than the mean-variance approach.

  3. Bulk and stable isotopic compositions of carbonate minerals in Martian meteorite Allan Hills 84001: no proof of high formation temperature.

    PubMed

    Treiman, A H; Romanek, C S

    1998-07-01

    Understanding the origin of carbonate minerals in the Martian meteorite Allan Hills (ALH) 84001 is crucial to evaluating the hypothesis that they contain traces of ancient Martian life. Using arguments based on chemical equilibria among carbonates and fluids, an origin at >650 degrees C (inimical to life) has been proposed. However, the bulk and stable isotopic compositions of the carbonate minerals are open to multiple interpretations and so lend no particular support to a high-temperature origin. Other methods (possibly less direct) will have to be used to determine the formation temperature of the carbonates in ALH84001. PMID:11543073

  4. Bulk and Stable Isotopic Compositions of Carbonate Minerals in Martian Meteorite Allan Hills 84001: No Proof of High Formation Temperature

    NASA Technical Reports Server (NTRS)

    Treiman, Allan H.; Romanek, Christopher S.

    1998-01-01

    Understanding the origin of carbonate minerals in the Martian meteorite Allan Hills (ALH) 84001 is crucial to evaluating the hypothesis that they contain traces of ancient Martian life. Using arguments based on chemical equilibria among carbonates and fluids, an origin at greater than 650 C (inimical to life) has been proposed. However, the bulk and stable isotopic compositions of the carbonate minerals are open to multiple interpretations and so lend no particular support to a high-temperature origin. Other methods (possibly less direct) will have to be used to determine the formation temperature of the carbonates in ALH 84001.

  5. [ECoG classification based on wavelet variance].

    PubMed

    Yan, Shiyu; Liu, Chong; Wang, Hong; Zhao, Haibin

    2013-06-01

    For a typical electrocorticogram (ECoG)-based brain-computer interface (BCI) system, in which the subject's task is to imagine movements of either the left small finger or the tongue, we proposed a feature extraction algorithm using wavelet variance. First, the definition and significance of the wavelet variance were developed from a discussion of the wavelet transform, and the wavelet variance was adopted as the feature. Six channels with the most distinctive features were selected from 64 channels for analysis, and the ECoG data were decomposed using the db4 wavelet. Based on the ERD/ERS phenomenon, the variances of the wavelet coefficients covering the Mu and Beta rhythms were taken as features. The features were classified linearly with cross validation. Off-line analysis showed high classification accuracies of 90.24% and 93.77% for the training and test data sets, respectively; the wavelet variance is simple and effective and is suitable for feature extraction in BCI research. PMID:23865300
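    The feature itself, the variance of wavelet detail coefficients in selected bands, can be sketched in a few lines of pure Python. This sketch substitutes the Haar wavelet for the paper's db4 (to stay dependency-free), and the function names are ours, not from the paper:

```python
import statistics

def haar_dwt_level(x):
    """One level of the orthonormal Haar DWT: returns (approx, detail)."""
    s = 2 ** -0.5
    approx = [s * (x[i] + x[i + 1]) for i in range(0, len(x) - 1, 2)]
    detail = [s * (x[i] - x[i + 1]) for i in range(0, len(x) - 1, 2)]
    return approx, detail

def wavelet_variance_features(x, levels):
    """Variance of the detail coefficients at each decomposition level:
    the kind of per-band energy feature the abstract describes."""
    feats = []
    for _ in range(levels):
        x, detail = haar_dwt_level(x)
        feats.append(statistics.pvariance(detail))
    return feats
```

    With a wavelet library one would swap haar_dwt_level for a db4 decomposition and keep only the levels covering the Mu and Beta bands.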

  6. 40 CFR 59.206 - Variances.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...

  7. 40 CFR 59.206 - Variances.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...

  8. 40 CFR 59.206 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...

  9. 40 CFR 59.206 - Variances.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...

  10. 40 CFR 59.206 - Variances.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... VOLATILE ORGANIC COMPOUND EMISSION STANDARDS FOR CONSUMER AND COMMERCIAL PRODUCTS National Volatile Organic Compound Emission Standards for Consumer Products § 59.206 Variances. (a) Any regulated entity who...

  11. Reducing variance in batch partitioning measurements

    SciTech Connect

    Mariner, Paul E.

    2010-08-11

    The partitioning experiment is commonly performed with little or no attention to reducing measurement variance. Batch test procedures such as those used to measure Kd values (e.g., ASTM D 4646 and EPA 402-R-99-004A) explain neither how to evaluate measurement uncertainty nor how to minimize measurement variance. In fact, ASTM D 4646 prescribes a sorbent:water ratio that prevents variance minimization. Consequently, the variance of a set of partitioning measurements can be extreme and even absurd. Such data sets, which are commonplace, hamper probabilistic modeling efforts. An error-savvy design requires adjustment of the solution:sorbent ratio so that approximately half of the sorbate partitions to the sorbent. Results of Monte Carlo simulations indicate that this simple step can markedly improve the precision and statistical characterization of partitioning uncertainty.
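    The point about the solution:sorbent ratio can be checked with a small Monte Carlo. This sketch assumes a fixed absolute analytical uncertainty on the measured aqueous concentration (the regime in which the half-sorbed design is optimal); the constants and function name are hypothetical, not taken from ASTM D 4646:

```python
import random
import statistics

def kd_relative_scatter(frac_sorbed, abs_noise=0.005, n=4000, seed=1):
    """Monte Carlo relative scatter of a batch-test Kd estimate.

    Total sorbate C0 = 1; the true aqueous concentration is
    1 - frac_sorbed, and Kd is inferred (up to the fixed
    solution:sorbent factor) as (C0 - Cw) / Cw from a noisy
    measurement of Cw."""
    rng = random.Random(seed)
    c0 = 1.0
    cw_true = 1.0 - frac_sorbed
    kds = []
    for _ in range(n):
        cw = cw_true + abs_noise * rng.gauss(0.0, 1.0)
        kds.append((c0 - cw) / cw)
    return statistics.stdev(kds) / statistics.mean(kds)
```

    Error propagation gives a relative scatter of roughly abs_noise / (f * (1 - f)) at sorbed fraction f, which is smallest at f = 0.5, the half-partitioned design the abstract recommends.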

  12. Variance anisotropy in compressible 3-D MHD

    NASA Astrophysics Data System (ADS)

    Oughton, S.; Matthaeus, W. H.; Wan, Minping; Parashar, Tulasi

    2016-06-01

    We employ spectral method numerical simulations to examine the dynamical development of anisotropy of the variance, or polarization, of the magnetic and velocity field in compressible magnetohydrodynamic (MHD) turbulence. Both variance anisotropy and spectral anisotropy emerge under influence of a large-scale mean magnetic field B0; these are distinct effects, although sometimes related. Here we examine the appearance of variance parallel to B0, when starting from a highly anisotropic state. The discussion is based on a turbulence theoretic approach rather than a wave perspective. We find that parallel variance emerges over several characteristic nonlinear times, often attaining a quasi-steady level that depends on plasma beta. Consistency with solar wind observations seems to occur when the initial state is dominated by quasi-two-dimensional fluctuations.

  13. Discrimination of frequency variance for tonal sequences

    PubMed Central

    Byrne, Andrew J.; Viemeister, Neal F.; Stellmack, Mark A.

    2014-01-01

    Real-world auditory stimuli are highly variable across occurrences and sources. The present study examined the sensitivity of human listeners to differences in global stimulus variability. In a two-interval, forced-choice task, variance discrimination was measured using sequences of five 100-ms tone pulses. The frequency of each pulse was sampled randomly from a distribution that was Gaussian in logarithmic frequency. In the non-signal interval, the sampled distribution had a variance of σ²_STAN, while in the signal interval, the variance of the sequence was σ²_SIG (with σ²_SIG > σ²_STAN). The listener's task was to choose the interval with the larger variance. To constrain possible decision strategies, the mean frequency of the sampling distribution of each interval was chosen randomly for each presentation. Psychometric functions were measured for various values of σ²_STAN. Although performance was remarkably similar across listeners, overall performance was poorer than that of an ideal observer (IO) which perfectly compares interval variances. However, like the IO, Weber's law behavior was observed, with a constant ratio of (σ²_SIG - σ²_STAN) to σ²_STAN yielding similar performance. A model which degraded the IO with a frequency-resolution noise and a computational noise provided a reasonable fit to the data. PMID:25480064
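    The ideal-observer benchmark can be approximated with a short simulation in which the "observer" picks the interval whose five-pulse sample has the larger sample variance (a stand-in for the paper's IO, which compares interval variances perfectly; all parameters below are illustrative):

```python
import random
import statistics

def pct_correct(var_stan, var_sig, pulses=5, trials=4000, seed=7):
    """Two-interval variance-discrimination task: the simulated observer
    chooses the interval whose pulse sequence has the larger sample
    variance.  Roving the mean frequency would change nothing here,
    because the sample variance ignores the mean."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        stan = [rng.gauss(0.0, var_stan ** 0.5) for _ in range(pulses)]
        sig = [rng.gauss(0.0, var_sig ** 0.5) for _ in range(pulses)]
        if statistics.variance(sig) > statistics.variance(stan):
            hits += 1
    return hits / trials
```

    Because the decision depends only on the ratio of the two variances, scaling both together leaves performance essentially unchanged, which is the Weber's-law pattern the paper reports.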

  14. Encoding of natural sounds by variance of the cortical local field potential.

    PubMed

    Ding, Nai; Simon, Jonathan Z; Shamma, Shihab A; David, Stephen V

    2016-06-01

    Neural encoding of sensory stimuli is typically studied by averaging neural signals across repetitions of the same stimulus. However, recent work has suggested that the variance of neural activity across repeated trials can also depend on sensory inputs. Here we characterize how intertrial variance of the local field potential (LFP) in primary auditory cortex of awake ferrets is affected by continuous natural sound stimuli. We find that natural sounds often suppress the intertrial variance of low-frequency LFP (<16 Hz). However, the amount of the variance reduction is not significantly correlated with the amplitude of the mean response at the same recording site. Moreover, the variance changes occur with longer latency than the mean response. Although the dynamics of the mean response and intertrial variance differ, spectro-temporal receptive field analysis reveals that changes in LFP variance have frequency tuning similar to multiunit activity at the same recording site, suggesting a local origin for changes in LFP variance. In summary, the spectral tuning of LFP intertrial variance and the absence of a correlation with the amplitude of the mean evoked LFP suggest substantial heterogeneity in the interaction between spontaneous and stimulus-driven activity across local neural populations in auditory cortex. PMID:26912594

  15. The Regents of the University of California, Petitioner, vs. Allan Bakke, Respondent. On Writ of Certiorari to the Supreme Court of California.

    ERIC Educational Resources Information Center

    Supreme Court of the U. S., Washington, DC.

    The main question of this case is whether Allan Bakke was denied the equal protection of the laws in contravention of the 14th Amendment, solely because of his race, as the result of a racial quota admission policy. A statement of the case which reviews pertinent data such as the admission procedure of the medical school, Bakke's interview and…

  16. Cross-bispectrum computation and variance estimation

    NASA Technical Reports Server (NTRS)

    Lii, K. S.; Helland, K. N.

    1981-01-01

    A method for the estimation of cross-bispectra of discrete real time series is developed. The asymptotic variance properties of the bispectrum are reviewed, and a method for the direct estimation of bispectral variance is given. The symmetry properties are described which minimize the computations necessary to obtain a complete estimate of the cross-bispectrum in the right-half-plane. A procedure is given for computing the cross-bispectrum by subdividing the domain into rectangular averaging regions which help reduce the variance of the estimates and allow easy application of the symmetry relationships to minimize the computational effort. As an example of the procedure, the cross-bispectrum of a numerically generated, exponentially distributed time series is computed and compared with theory.

  17. Inhomogeneity-induced variance of cosmological parameters

    NASA Astrophysics Data System (ADS)

    Wiegand, A.; Schwarz, D. J.

    2012-02-01

    Context. Modern cosmology relies on the assumption of large-scale isotropy and homogeneity of the Universe. However, locally the Universe is inhomogeneous and anisotropic. This raises the question of how local measurements (at the ~10² Mpc scale) can be used to determine the global cosmological parameters (defined at the ~10⁴ Mpc scale). Aims: We connect the questions of cosmological backreaction, cosmic averaging and the estimation of cosmological parameters and show how they relate to the problem of cosmic variance. Methods: We used Buchert's averaging formalism and determined a set of locally averaged cosmological parameters in the context of the flat Λ cold dark matter model. We calculated their ensemble means (i.e. their global value) and variances (i.e. their cosmic variance). We applied our results to typical survey geometries and focused on the study of the effects of local fluctuations of the curvature parameter. Results: We show that in the context of standard cosmology at large scales (larger than the homogeneity scale and in the linear regime), the question of cosmological backreaction and averaging can be reformulated as the question of cosmic variance. The cosmic variance is found to be highest in the curvature parameter. We propose to use the observed variance of cosmological parameters to measure the growth factor. Conclusions: Cosmological backreaction and averaging are real effects that have been measured already for a long time, e.g. by the fluctuations of the matter density contrast averaged over spheres of a certain radius. Backreaction and averaging effects from scales in the linear regime, as considered in this work, are shown to be important for the precise measurement of cosmological parameters.

  18. Integrating Variances into an Analytical Database

    NASA Technical Reports Server (NTRS)

    Sanchez, Carlos

    2010-01-01

    For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make them easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry was repeated several times in the database, that would mean the rule or requirement targeted by that variance had been bypassed many times already, so the requirement may not really be needed and should instead be changed to allow the variance's conditions permanently. This project was not restricted to the design and development of the database system; it also involved exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part that contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.

  19. Variance in binary stellar population synthesis

    NASA Astrophysics Data System (ADS)

    Breivik, Katelyn; Larson, Shane L.

    2016-03-01

    In the years preceding LISA, Milky Way compact binary population simulations can be used to inform the science capabilities of the mission. Galactic population simulation efforts generally focus on high fidelity models that require extensive computational power to produce a single simulated population for each model. Each simulated population represents an incomplete sample of the functions governing compact binary evolution, thus introducing variance from one simulation to another. We present a rapid Monte Carlo population simulation technique that can simulate thousands of populations in less than a week, thus allowing a full exploration of the variance associated with a binary stellar evolution model.

  20. Decomposition of Variance for Spatial Cox Processes

    PubMed Central

    Jalilian, Abdollah; Guan, Yongtao; Waagepetersen, Rasmus

    2012-01-01

    Spatial Cox point processes provide a natural framework for quantifying the various sources of variation governing the spatial distribution of rain forest trees. We introduce a general criterion for variance decomposition for spatial Cox processes and apply it to specific Cox process models with additive or log linear random intensity functions. We moreover consider a new and flexible class of pair correlation function models given in terms of normal variance mixture covariance functions. The proposed methodology is applied to point pattern data sets of locations of tropical rain forest trees. PMID:23599558

  1. Variance Reduction Using Nonreversible Langevin Samplers

    NASA Astrophysics Data System (ADS)

    Duncan, A. B.; Lelièvre, T.; Pavliotis, G. A.

    2016-05-01

    A standard approach to computing expectations with respect to a given target measure is to introduce an overdamped Langevin equation which is reversible with respect to the target distribution, and to approximate the expectation by a time-averaging estimator. As has been noted in recent papers [30, 37, 61, 72], introducing an appropriately chosen nonreversible component to the dynamics is beneficial, both in terms of reducing the asymptotic variance and of speeding up convergence to the target distribution. In this paper we present a detailed study of the dependence of the asymptotic variance on the deviation from reversibility. Our theoretical findings are supported by numerical simulations.
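    The construction can be sketched for a simple Gaussian target: adding a skew-symmetric perturbation gamma*J*grad U to the overdamped drift -grad U leaves the target invariant for every gamma while breaking reversibility. A minimal Euler-Maruyama sketch (step size and run length are arbitrary choices; this illustrates the dynamics, not the paper's asymptotic-variance analysis):

```python
import random

def nonreversible_langevin(gamma, steps=20000, dt=0.01, seed=3):
    """Euler-Maruyama discretization of
        dX = -(I + gamma * J) grad U(X) dt + sqrt(2) dW,
    with U(x) = |x|^2 / 2 and J = [[0, 1], [-1, 0]] skew-symmetric,
    so the standard 2-D Gaussian is invariant for every gamma.
    Returns the sampled first coordinates."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    noise = (2.0 * dt) ** 0.5
    xs = []
    for _ in range(steps):
        dx = -(x + gamma * y) * dt + noise * rng.gauss(0.0, 1.0)
        dy = -(y - gamma * x) * dt + noise * rng.gauss(0.0, 1.0)
        x, y = x + dx, y + dy
        xs.append(x)
    return xs
```

    Time averages of f(X) over such a trajectory estimate the target expectation; the paper's result is that a well-chosen nonreversible component can reduce the asymptotic variance of that estimator.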

  2. Spatial Variances of Wind Fields and Their Relation to Second-Order Structure Functions and Spectra

    NASA Astrophysics Data System (ADS)

    King, G. P.; Vogelzang, J.; Stoffelen, A.; Portabella, M.

    2014-12-01

    Kinetic energy variance as a function of spatial scale for wind fields is commonly estimated either using second-order structure functions (in the spatial domain) or by spectral analysis (in the frequency domain). It will be demonstrated that neither spectra nor second-order structure functions offer a good representation of the variance as a function of scale. These difficulties can be circumvented by using a statistical quantity called spatial variance. It combines the advantages of spectral analysis and spatial statistics. In particular, when applied to observations, spatial variances have a clear interpretation and are tolerant of missing data. They can be related to second-order structure functions, both for discrete and continuous data. For data sets without missing points the relation is statistically exact. Spatial variances can also be Fourier transformed to yield a relation with spectra. The flexibility of spatial variances is used to study various sampling strategies, and to compare them with second-order structure functions and spectral variances. It is shown that the spectral sampling strategy is not seriously biased to calm conditions for scatterometer ocean surface vector winds, and that one-fifth of the second-order structure function value is a good proxy for the cumulative variance.
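    For comparison, a second-order structure function estimator for a 1-D series is only a few lines (a sketch with a hypothetical function name; the gap tolerance credited to spatial variances applies here too if pairs containing a missing value are simply skipped):

```python
def structure_function(u, r):
    """Second-order structure function D2(r) = <(u[i + r] - u[i])^2>,
    averaged over all pairs separated by lag r in a 1-D series."""
    diffs = [(u[i + r] - u[i]) ** 2 for i in range(len(u) - r)]
    return sum(diffs) / len(diffs)
```

    Against a cumulative-variance estimate at the same scale, one-fifth of D2(r) is the proxy value quoted in the abstract.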

  3. Videotape Project in Child Variance. Final Report.

    ERIC Educational Resources Information Center

    Morse, William C.; Smith, Judith M.

    The design, production, dissemination, and evaluation of a series of videotaped training packages designed to enable teachers, parents, and paraprofessionals to interpret child variance in light of personal and alternative perspectives of behavior are discussed. The goal of each package is to highlight unique contributions of different theoretical…

  4. Testing Variances in Psychological and Educational Research.

    ERIC Educational Resources Information Center

    Ramsey, Philip H.

    1994-01-01

    A review of the literature indicates that the two best procedures for testing variances are one that was proposed by O'Brien (1981) and another that was proposed by Brown and Forsythe (1974). An examination of these procedures for a variety of populations confirms their robustness and indicates how optimal power can usually be obtained. (SLD)

  5. 10 CFR 1022.16 - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... such an action) DOE shall document the emergency actions in accordance with NEPA procedures at 10 CFR... ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...

  6. 10 CFR 1022.16 - Variances.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... such an action) DOE shall document the emergency actions in accordance with NEPA procedures at 10 CFR... ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...

  7. 10 CFR 1022.16 - Variances.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... such an action) DOE shall document the emergency actions in accordance with NEPA procedures at 10 CFR... ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...

  8. 10 CFR 1022.16 - Variances.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... such an action) DOE shall document the emergency actions in accordance with NEPA procedures at 10 CFR... ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...

  9. 10 CFR 1022.16 - Variances.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... such an action) DOE shall document the emergency actions in accordance with NEPA procedures at 10 CFR... ENERGY (GENERAL PROVISIONS) COMPLIANCE WITH FLOODPLAIN AND WETLAND ENVIRONMENTAL REVIEW REQUIREMENTS Procedures for Floodplain and Wetland Reviews § 1022.16 Variances. (a) Emergency actions. DOE may...

  10. Variance Reduction for a Discrete Velocity Gas

    NASA Astrophysics Data System (ADS)

    Morris, A. B.; Varghese, P. L.; Goldstein, D. B.

    2011-05-01

    We extend a variance reduction technique developed by Baker and Hadjiconstantinou [1] to a discrete velocity gas. In our previous work, the collision integral was evaluated by importance sampling of collision partners [2]. Significant computational effort may be wasted by evaluating the collision integral in regions where the flow is in equilibrium. In the current approach, substantial computational savings are obtained by only solving for the deviations from equilibrium. In the near continuum regime, the deviations from equilibrium are small and low noise evaluation of the collision integral can be achieved with very coarse statistical sampling. Spatially homogeneous relaxation of the Bobylev-Krook-Wu distribution [3,4] was used as a test case to verify that the method predicts the correct evolution of a highly non-equilibrium distribution to equilibrium. When variance reduction is not used, the noise causes the entropy to undershoot, but the method with variance reduction matches the analytic curve for the same number of collisions. We then extend the work to travelling shock waves and compare the accuracy and computational savings of the variance reduction method to DSMC over Mach numbers ranging from 1.2 to 10.

  11. Multiple Comparison Procedures when Population Variances Differ.

    ERIC Educational Resources Information Center

    Olejnik, Stephen; Lee, JaeShin

    A review of the literature on multiple comparison procedures suggests several alternative approaches for comparing means when population variances differ. These include: (1) the approach of P. A. Games and J. F. Howell (1976); (2) C. W. Dunnett's C confidence interval (1980); and (3) Dunnett's T3 solution (1980). These procedures control the…

  12. Variance Anisotropy of Solar Wind fluctuations

    NASA Astrophysics Data System (ADS)

    Oughton, S.; Matthaeus, W. H.; Wan, M.; Osman, K.

    2013-12-01

    Solar wind observations at MHD scales indicate that the energy associated with velocity and magnetic field fluctuations transverse to the mean magnetic field is typically much larger than that associated with parallel fluctuations [eg, 1]. This is often referred to as variance anisotropy. Various explanations for it have been suggested, including that the fluctuations are predominantly shear Alfven waves [1] and that turbulent dynamics leads to such states [eg, 2]. Here we investigate the origin and strength of such variance anisotropies, using spectral method simulations of the compressible (polytropic) 3D MHD equations. We report on results from runs with initial conditions that are either (i) broadband turbulence or (ii) fluctuations polarized in the same sense as shear Alfven waves. The dependence of the variance anisotropy on the plasma beta and Mach number is examined [3], along with the timescale for any variance anisotropy to develop. Implications for solar wind fluctuations will be discussed. References: [1] Belcher, J. W. and Davis Jr., L. (1971), J. Geophys. Res., 76, 3534. [2] Matthaeus, W. H., Ghosh, S., Oughton, S. and Roberts, D. A. (1996), J. Geophys. Res., 101, 7619. [3] Smith, C. W., B. J. Vasquez and K. Hamilton (2006), J. Geophys. Res., 111, A09111.

  13. Comparing the Variances of Two Dependent Groups.

    ERIC Educational Resources Information Center

    Wilcox, Rand R.

    1990-01-01

    Recently, C. E. McCulloch (1987) suggested a modification of the Morgan-Pitman test for comparing the variances of two dependent groups. This paper demonstrates that there are situations where the procedure is not robust. A subsample approach, similar to the Box-Scheffe test, and the Sandvik-Olsson procedure are also assessed. (TJH)

  14. 78 FR 14122 - Revocation of Permanent Variances

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-04

    ... OSHA's scaffolds standards for construction (77 FR 46948). Today's notice revoking the variances takes... Safety and Health Act of 1970 (OSH Act; 29 U.S.C. 651, 655) in 1971 (see 36 FR 7340). Paragraphs (a)(4..., construction, and use of scaffolds (61 FR 46026). In the preamble to the final rule, OSHA stated that it...

  15. 7 CFR 205.290 - Temporary variances.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Temporary variances. 205.290 Section 205.290 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) ORGANIC FOODS PRODUCTION ACT PROVISIONS NATIONAL ORGANIC PROGRAM...

  16. 18 CFR 1304.408 - Variances.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 18 Conservation of Power and Water Resources 2 2012-04-01 2012-04-01 false Variances. 1304.408 Section 1304.408 Conservation of Power and Water Resources TENNESSEE VALLEY AUTHORITY APPROVAL OF CONSTRUCTION IN THE TENNESSEE RIVER SYSTEM AND REGULATION OF STRUCTURES AND OTHER ALTERATIONS...

  17. 18 CFR 1304.408 - Variances.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 18 Conservation of Power and Water Resources 2 2011-04-01 2011-04-01 false Variances. 1304.408 Section 1304.408 Conservation of Power and Water Resources TENNESSEE VALLEY AUTHORITY APPROVAL OF CONSTRUCTION IN THE TENNESSEE RIVER SYSTEM AND REGULATION OF STRUCTURES AND OTHER ALTERATIONS...

  18. A surface layer variance heat budget for ENSO

    NASA Astrophysics Data System (ADS)

    Boucharel, Julien; Timmermann, Axel; Santoso, Agus; England, Matthew H.; Jin, Fei-Fei; Balmaseda, Magdalena A.

    2015-05-01

    Characteristics of the El Niño-Southern Oscillation (ENSO), such as frequency, propagation, spatial extent, and amplitude, strongly depend on the climatological background state of the tropical Pacific. Multidecadal changes in the ocean mean state are hence likely to modulate ENSO properties. To better link background state variations with low-frequency amplitude changes of ENSO, we develop a diagnostic framework that determines locally the contributions of different physical feedback terms on the ocean surface temperature variance. Our analysis shows that multidecadal changes of ENSO variance originate from the delicate balance between the background-state-dependent positive thermocline feedback and the atmospheric damping of sea surface temperature anomalies. The role of higher-order processes and atmospheric and oceanic nonlinearities is also discussed. The diagnostic tool developed here can be easily applied to other tropical ocean areas and climate phenomena.

  19. No evidence for anomalously low variance circles on the sky

    SciTech Connect

    Moss, Adam; Scott, Douglas; Zibin, James P. E-mail: dscott@phas.ubc.ca

    2011-04-01

    In a recent paper, Gurzadyan and Penrose claim to have found directions on the sky centred on which are circles of anomalously low variance in the cosmic microwave background (CMB). These features are presented as evidence for a particular picture of the very early Universe. We attempted to repeat the analysis of these authors, and we can indeed confirm that such variations do exist in the temperature variance for annuli around points in the data. However, we find that this variation is entirely expected in a sky which contains the usual CMB anisotropies. In other words, properly simulated Gaussian CMB data contain just the sorts of variations claimed. Gurzadyan and Penrose have not found evidence for pre-Big Bang phenomena, but have simply re-discovered that the CMB contains structure.

  20. Isotopic evidence for a terrestrial source of organic compounds found in martian meteorites Allan Hills 84001 and Elephant Moraine 79001.

    PubMed

    Jull, A J; Courtney, C; Jeffrey, D A; Beck, J W

    1998-01-16

    Stepped-heating experiments on martian meteorites Allan Hills 84001 (ALH84001) and Elephant Moraine 79001 (EETA79001) revealed low-temperature (200 to 430 degrees Celsius) fractions with a carbon isotopic composition δ13C between -22 and -33 per mil and a carbon-14 content that is 40 to 60 percent of that of modern terrestrial carbon, consistent with a terrestrial origin for most of the organic material. Intermediate-temperature (400 to 600 degrees Celsius) carbonate-rich fractions of ALH84001 have δ13C of +32 to +40 per mil with a low carbon-14 content, consistent with an extraterrestrial origin, whereas some of the carbonate fraction of EETA79001 is terrestrial. In addition, ALH84001 contains a small preterrestrial carbon component of unknown origin that combusts at intermediate temperatures. This component is likely a residual acid-insoluble carbonate or a more refractory organic phase. PMID:9430584

  1. R package MVR for Joint Adaptive Mean-Variance Regularization and Variance Stabilization

    PubMed Central

    Dazard, Jean-Eudes; Xu, Hua; Rao, J. Sunil

    2015-01-01

We present an implementation in the R language for statistical computing of our recent non-parametric joint adaptive mean-variance regularization and variance stabilization procedure. The method is specifically suited for handling difficult problems posed by high-dimensional multivariate datasets (p ≫ n paradigm), such as in ‘omics’-type data, among which are that the variance is often a function of the mean, variable-specific estimators of variances are not reliable, and test statistics have low power due to a lack of degrees of freedom. The implementation offers a complete set of features including: (i) normalization and/or variance stabilization function, (ii) computation of mean-variance-regularized t and F statistics, (iii) generation of diverse diagnostic plots, (iv) synthetic and real ‘omics’ test datasets, (v) computationally efficient implementation, using C interfacing, and an option for parallel computing, (vi) manual and documentation on how to set up a cluster. To make each feature as user-friendly as possible, only one subroutine per functionality is to be handled by the end-user. It is available as an R package, called MVR (‘Mean-Variance Regularization’), downloadable from CRAN. PMID:26819572

  2. Variance of sensory threshold measurements: discrimination of feigners from trustworthy performers.

    PubMed

    Yarnitsky, D; Sprecher, E; Tamir, A; Zaslansky, R; Hemli, J A

    1994-09-01

    Sensory threshold measurements are criticized as subjective and therefore not to be relied upon in clinical diagnostic practice, particularly when deliberate deception by the patient is suspected. In an attempt to devise a method which permits dependable sensory threshold interpretation, individual variability of thresholds was examined in normal and neuropathic subjects. Normals were also instructed to feign sensory impairment resulting from hypothetical injury. For each subject, a number of threshold readings were averaged, yielding individual means and variances. Feigning normal subjects evidenced a larger variance compared to trustworthy normal and neuropathic subjects. Thus, alertness to variance reinforces the psychophysical analysis: small variance values suggest trustworthy normal or pathological results, whereas large variance calls the interpreter's attention to feigned results or inattentive test performance. PMID:7807165
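The decision rule described above can be sketched in a few lines. The readings and the variance cutoff below are invented for illustration; the paper does not publish numeric thresholds:

```python
import statistics

# Hypothetical repeated threshold readings (arbitrary units) per subject.
readings = {
    "trustworthy normal": [2.1, 2.3, 2.0, 2.2, 2.1, 2.2],
    "neuropathic":        [5.8, 6.1, 5.9, 6.0, 5.8, 6.2],
    "feigning":           [2.0, 6.5, 3.1, 7.8, 2.4, 5.9],  # erratic spread
}

VARIANCE_CUTOFF = 1.0  # illustrative only

flags = {}
for subject, vals in readings.items():
    var = statistics.variance(vals)  # sample variance of the repeated readings
    flags[subject] = "suspect" if var > VARIANCE_CUTOFF else "plausible"
```

Note that both the normal and neuropathic subjects pass: it is the spread of the readings, not their mean level, that separates feigned from trustworthy performance.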

  3. 42 CFR 456.525 - Request for renewal of variance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

... 42 Public Health 4 2010-10-01 2010-10-01 false Request for renewal of variance. 456.525 Section..., and Variances for Hospitals and Mental Hospitals UR Plan: Remote Facility Variances from Time Requirements § 456.525 Request for renewal of variance. (a) The agency must submit a request for renewal of...

  4. 10 CFR 851.32 - Action on variance requests.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Action on variance requests. 851.32 Section 851.32 Energy DEPARTMENT OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.32 Action on variance requests. (a... approval of a variance application, the Chief Health, Safety and Security Officer must forward to the...

  5. 41 CFR 50-204.1a - Variances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 41 Public Contracts and Property Management 1 2010-07-01 2010-07-01 true Variances. 50-204.1a... and Application § 50-204.1a Variances. (a) Variances from standards in this part may be granted in the same circumstances in which variances may be granted under sections 6(b)(6)(A) or 6(d) of the...

  6. 21 CFR 898.14 - Exemptions and variances.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Exemptions and variances. 898.14 Section 898.14... variances. (a) A request for an exemption or variance shall be submitted in the form of a petition under... with the device; and (4) Other information justifying the exemption or variance. (b) An exemption...

  7. 10 CFR 851.30 - Consideration of variances.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 4 2010-01-01 2010-01-01 false Consideration of variances. 851.30 Section 851.30 Energy DEPARTMENT OF ENERGY WORKER SAFETY AND HEALTH PROGRAM Variances § 851.30 Consideration of variances. (a) Variances shall be granted by the Under Secretary after considering the recommendation of the Chief...

  8. 42 CFR 456.521 - Conditions for granting variance requests.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

... 42 Public Health 4 2010-10-01 2010-10-01 false Conditions for granting variance requests. 456.521..., and Variances for Hospitals and Mental Hospitals UR Plan: Remote Facility Variances from Time Requirements § 456.521 Conditions for granting variance requests. (a) Except as described under paragraph...

  9. PHD filtering with localised target number variance

    NASA Astrophysics Data System (ADS)

    Delande, Emmanuel; Houssineau, Jérémie; Clark, Daniel

    2013-05-01

Mahler's Probability Hypothesis Density (PHD) filter, proposed in 2000, addresses the challenges of the multiple-target detection and tracking problem by propagating a mean density of the targets in any region of the state space. However, when retrieving some local evidence on the target presence becomes a critical component of a larger process - e.g. for sensor management purposes - the local target number estimate is insufficient unless some confidence in the estimated number of targets can be provided as well. In this paper, we propose a first implementation of a PHD filter that also includes an estimate of the localised variance in target number following each update step; we then illustrate the advantage of the PHD filter + variance on simulated data from a multiple-target scenario.

  10. The Theory of Variances in Equilibrium Reconstruction

    SciTech Connect

    Zakharov, Leonid E.; Lewandowski, Jerome; Foley, Elizabeth L.; Levinton, Fred M.; Yuh, Howard Y.; Drozdov, Vladimir; McDonald, Darren

    2008-01-14

The theory of variances of equilibrium reconstruction is presented. It complements existing practices with information regarding what kind of plasma profiles can be reconstructed, how accurately, and what remains beyond the abilities of diagnostic systems. The σ-curves, introduced by the present theory, give a quantitative assessment of the effectiveness of diagnostic systems in constraining equilibrium reconstructions. The theory also suggests a method for aligning the accuracy of measurements of different physical natures.

  11. Minimum variance and variance of outgoing quality limit MDS-1(c1, c2) plans

    NASA Astrophysics Data System (ADS)

    Raju, C.; Vidya, R.

    2016-06-01

In this article, the outgoing quality (OQ) and total inspection (TI) of multiple deferred state sampling plans MDS-1(c1,c2) are studied. Inspection is assumed to follow a rejection-rectification scheme. Procedures for designing MDS-1(c1,c2) sampling plans with minimum variance of OQ and TI are developed. A procedure for obtaining a plan that meets a designated upper limit for the variance of the OQ (VOQL) is outlined.

  12. A note on variance estimation in random effects meta-regression.

    PubMed

    Sidik, Kurex; Jonkman, Jeffrey N

    2005-01-01

    For random effects meta-regression inference, variance estimation for the parameter estimates is discussed. Because estimated weights are used for meta-regression analysis in practice, the assumed or estimated covariance matrix used in meta-regression is not strictly correct, due to possible errors in estimating the weights. Therefore, this note investigates the use of a robust variance estimation approach for obtaining variances of the parameter estimates in random effects meta-regression inference. This method treats the assumed covariance matrix of the effect measure variables as a working covariance matrix. Using an example of meta-analysis data from clinical trials of a vaccine, the robust variance estimation approach is illustrated in comparison with two other methods of variance estimation. A simulation study is presented, comparing the three methods of variance estimation in terms of bias and coverage probability. We find that, despite the seeming suitability of the robust estimator for random effects meta-regression, the improved variance estimator of Knapp and Hartung (2003) yields the best performance among the three estimators, and thus may provide the best protection against errors in the estimated weights. PMID:16078388
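A generic sandwich-type robust variance for weighted least squares can be sketched as follows. This is an illustration of the "working covariance" idea only — the data are simulated, and it is not the Knapp and Hartung estimator or the authors' exact formulation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical meta-regression data: k studies, one moderator x, and
# assumed within-study variances v that understate the true noise.
k = 40
x = rng.uniform(0.0, 1.0, k)
v = rng.uniform(0.05, 0.3, k)
y = 0.2 + 0.5 * x + rng.normal(0.0, np.sqrt(v + 0.1))

X = np.column_stack([np.ones(k), x])
W = np.diag(1.0 / v)              # working weights from the assumed variances

bread = np.linalg.inv(X.T @ W @ X)
beta = bread @ (X.T @ W @ y)      # weighted least-squares estimates
resid = y - X @ beta

# Model-based variance trusts the working weights outright ...
var_model = bread
# ... while the robust (sandwich) variance uses the observed residuals.
meat = X.T @ W @ np.diag(resid**2) @ W @ X
var_robust = bread @ meat @ bread
```

The sandwich form protects against misestimated weights at the cost of extra variability in small meta-analyses, which is one reason the comparison in the note is non-trivial.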

  13. AVATAR -- Automatic variance reduction in Monte Carlo calculations

    SciTech Connect

    Van Riper, K.A.; Urbatsch, T.J.; Soran, P.D.

    1997-05-01

    AVATAR{trademark} (Automatic Variance And Time of Analysis Reduction), accessed through the graphical user interface application, Justine{trademark}, is a superset of MCNP{trademark} that automatically invokes THREEDANT{trademark} for a three-dimensional deterministic adjoint calculation on a mesh independent of the Monte Carlo geometry, calculates weight windows, and runs MCNP. Computational efficiency increases by a factor of 2 to 5 for a three-detector oil well logging tool model. Human efficiency increases dramatically, since AVATAR eliminates the need for deep intuition and hours of tedious handwork.

  14. Analyzing the Effect of JPEG Compression on Local Variance of Image Intensity.

    PubMed

    Yang, Jianquan; Zhu, Guopu; Shi, Yun-Qing

    2016-06-01

    The local variance of image intensity is a typical measure of image smoothness. It has been extensively used, for example, to measure the visual saliency or to adjust the filtering strength in image processing and analysis. However, to the best of our knowledge, no analytical work has been reported about the effect of JPEG compression on image local variance. In this paper, a theoretical analysis on the variation of local variance caused by JPEG compression is presented. First, the expectation of intensity variance of 8×8 non-overlapping blocks in a JPEG image is derived. The expectation is determined by the Laplacian parameters of the discrete cosine transform coefficient distributions of the original image and the quantization step sizes used in the JPEG compression. Second, some interesting properties that describe the behavior of the local variance under different degrees of JPEG compression are discussed. Finally, both the simulation and the experiments are performed to verify our derivation and discussion. The theoretical analysis presented in this paper provides some new insights into the behavior of local variance under JPEG compression. Moreover, it has the potential to be used in some areas of image processing and analysis, such as image enhancement, image quality assessment, and image filtering. PMID:27093626
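Computing the per-block variances that the analysis reasons about is a simple reshaping exercise. The sketch below uses coarse pixel rounding as a stand-in for compression; real JPEG quantizes DCT coefficients, not pixel values, so this is only a proxy for the effect studied in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 8-bit grayscale patch; JPEG works on 8x8 non-overlapping blocks.
img = rng.integers(0, 256, size=(64, 64)).astype(float)

def block_variances(image, b=8):
    """Intensity variance inside each non-overlapping b-by-b block."""
    h, w = image.shape
    blocks = image.reshape(h // b, b, w // b, b).swapaxes(1, 2).reshape(-1, b * b)
    return blocks.var(axis=1)

var_before = block_variances(img)

# Crude compression proxy: coarse rounding perturbs the per-block variance,
# qualitatively mimicking the quantization effect analysed in the paper.
img_coarse = np.round(img / 32.0) * 32.0
var_after = block_variances(img_coarse)
```

Comparing `var_before` and `var_after` block by block gives an empirical handle on how quantization shifts local variance.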

  15. Estimation of variance distribution in three-dimensional reconstruction. II. Applications

    NASA Astrophysics Data System (ADS)

    Liu, Weiping; Boisset, Nicolas; Frank, Joachim

    1995-12-01

A previously developed theory of three-dimensional (3-D) variance estimation [J. Opt. Soc. Am. 12, XXXX (1995)] is applied to the structural study of a hemocyanin-Fab complex with the electron microscope. The precise locations of structurally variable regions of the macromolecule are determined from the 3-D variance maps. The structural differences among different classes of the macromolecular complex are assessed by the use of the statistical t-test, and the 3-D antibody binding sites are revealed. From a model analysis, a rule is demonstrated for visually identifying a 3-D conformational change by the inspection of the 3-D variance map. Our analysis lays the foundation for numerous practical applications of variance estimation in the 3-D imaging of macromolecules. Copyright (c) 1995 Optical Society of America

  16. Geological evolution of the Coombs Allan Hills area, Ferrar large igneous province, Antarctica: Debris avalanches, mafic pyroclastic density currents, phreatocauldrons

    NASA Astrophysics Data System (ADS)

    Ross, Pierre-Simon; White, James D. L.; McClintock, Murray

    2008-05-01

    The Jurassic Ferrar large igneous province of Antarctica comprises igneous intrusions, flood lavas, and mafic volcaniclastic deposits (now lithified). The latter rocks are particularly diverse and well-exposed in the Coombs-Allan Hills area of South Victoria Land, where they are assigned to the Mawson Formation. In this paper we use these rocks in conjunction with the pre-Ferrar sedimentary rocks (Beacon Supergroup) and the lavas themselves (Kirkpatrick Basalt) to reconstruct the geomorphological and geological evolution of the landscape. In the Early Jurassic, the surface of the region was an alluvial plain, with perhaps 1 km of mostly continental siliciclastic sediments underlying it. After the fall of silicic ash from an unknown but probably distal source, mafic magmatism of the Ferrar province began. The oldest record of this event at Allan Hills is a ≤ 180 m-thick debris-avalanche deposit (member m1 of the Mawson Formation) which contains globular domains of mafic igneous rock. These domains are inferred to represent dismembered Ferrar intrusions emplaced in the source area of the debris avalanche; shallow emplacement of Ferrar magmas caused a slope failure that mobilized the uppermost Beacon Supergroup, and the silicic ash deposits, into a pre-existing valley or basin. The period which followed ('Mawson time') was the main stage for explosive eruptions in the Ferrar province, and several cubic kilometres of both new magma and sedimentary rock were fragmented over many years. Phreatomagmatic explosions were the dominant fragmentation mechanism, with magma-water interaction taking place in both sedimentary aquifers and existing vents filled by volcaniclastic debris. At Coombs Hills, a vent complex or 'phreatocauldron' was formed by coalescence of diatreme-like structures; at Allan Hills, member m2 of the Mawson Formation consists mostly of thick, coarse-grained, poorly sorted layers inferred to represent the lithified deposits of pyroclastic density currents

  17. Visual SLAM Using Variance Grid Maps

    NASA Technical Reports Server (NTRS)

    Howard, Andrew B.; Marks, Tim K.

    2011-01-01

    An algorithm denoted Gamma-SLAM performs further processing, in real time, of preprocessed digitized images acquired by a stereoscopic pair of electronic cameras aboard an off-road robotic ground vehicle to build accurate maps of the terrain and determine the location of the vehicle with respect to the maps. Part of the name of the algorithm reflects the fact that the process of building the maps and determining the location with respect to them is denoted simultaneous localization and mapping (SLAM). Most prior real-time SLAM algorithms have been limited in applicability to (1) systems equipped with scanning laser range finders as the primary sensors in (2) indoor environments (or relatively simply structured outdoor environments). The few prior vision-based SLAM algorithms have been feature-based and not suitable for real-time applications and, hence, not suitable for autonomous navigation on irregularly structured terrain. The Gamma-SLAM algorithm incorporates two key innovations: Visual odometry (in contradistinction to wheel odometry) is used to estimate the motion of the vehicle. An elevation variance map (in contradistinction to an occupancy or an elevation map) is used to represent the terrain. The Gamma-SLAM algorithm makes use of a Rao-Blackwellized particle filter (RBPF) from Bayesian estimation theory for maintaining a distribution over poses and maps. The core idea of the RBPF approach is that the SLAM problem can be factored into two parts: (1) finding the distribution over robot trajectories, and (2) finding the map conditioned on any given trajectory. The factorization involves the use of a particle filter in which each particle encodes both a possible trajectory and a map conditioned on that trajectory. The base estimate of the trajectory is derived from visual odometry, and the map conditioned on that trajectory is a Cartesian grid of elevation variances. 
In comparison with traditional occupancy or elevation grid maps, the grid elevation variance
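The elevation-variance representation can be sketched for a single map cell with Welford's online mean/variance update. This is a minimal illustration with invented elevation samples, not the Gamma-SLAM implementation:

```python
class VarianceGridCell:
    """Running mean and variance of elevation hits in one grid cell
    (Welford's online algorithm)."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations from the running mean

    def add(self, z):
        self.n += 1
        d = z - self.mean
        self.mean += d / self.n
        self.m2 += d * (z - self.mean)

    @property
    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

cell = VarianceGridCell()
for z in [1.0, 1.2, 0.9, 1.1]:  # hypothetical stereo elevation hits (metres)
    cell.add(z)
```

A full map is a 2-D grid of such cells; the particle filter then scores each particle's pose against the variances its trajectory's map predicts.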

  18. Fringe biasing: A variance reduction technique for optically thick meshes

    SciTech Connect

    Smedley-Stevenson, R. P.

    2013-07-01

    Fringe biasing is a stratified sampling scheme applicable to Monte Carlo thermal radiation transport codes. The thermal emission source in optically thick cells is partitioned into separate contributions from the cell interiors (where the likelihood of the particles escaping the cells is virtually zero) and the 'fringe' regions close to the cell boundaries. Thermal emission in the cell interiors can now be modelled with fewer particles, the remaining particles being concentrated in the fringes so that they are more likely to contribute to the energy exchange between cells. Unlike other techniques for improving the efficiency in optically thick regions (such as random walk and discrete diffusion treatments), fringe biasing has the benefit of simplicity, as the associated changes are restricted to the sourcing routines with the particle tracking routines being unaffected. This paper presents an analysis of the potential for variance reduction achieved from employing the fringe biasing technique. The aim of this analysis is to guide the implementation of this technique in Monte Carlo thermal radiation codes, specifically in order to aid the choice of the fringe width and the proportion of particles allocated to the fringe (which are interrelated) in multi-dimensional simulations, and to confirm that the significant levels of variance reduction achieved in simulations can be understood by studying the behaviour for simple test cases. The variance reduction properties are studied for a single cell in a slab geometry purely absorbing medium, investigating the accuracy of the scalar flux and current tallies on one of the interfaces with the surrounding medium. (authors)
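The variance-reduction mechanism can be demonstrated on the kind of simple test case the paper analyses: one purely absorbing slab cell, where a particle born at optical depth t from the escape face leaks out with probability exp(-t). All parameters below (cell thickness, fringe width, particle split) are illustrative choices, not values from the paper:

```python
import math
import random

random.seed(0)

# Slab cell of optical thickness TAU with a uniform emission source.
# Exact mean escape probability: (1 - exp(-TAU)) / TAU.
TAU, N = 10.0, 20_000
FRINGE, FRAC_FRINGE = 2.0, 0.9   # fringe width and its share of the particles

def estimate(samples):
    """Unbiased leakage estimate from (birth depth, statistical weight) pairs."""
    return sum(w * math.exp(-t) for t, w in samples) / len(samples)

# Analogue sampling: birth sites uniform over the whole cell, unit weights.
analogue = estimate([(random.uniform(0.0, TAU), 1.0) for _ in range(N)])

# Fringe biasing: concentrate particles in the fringe near the escape face,
# rescaling the statistical weights so the estimator stays unbiased.
n_f = int(N * FRAC_FRINGE)
w_f = (FRINGE / TAU) / FRAC_FRINGE
w_i = (1.0 - FRINGE / TAU) / (1.0 - FRAC_FRINGE)
samples = [(random.uniform(0.0, FRINGE), w_f) for _ in range(n_f)]
samples += [(random.uniform(FRINGE, TAU), w_i) for _ in range(N - n_f)]
biased = estimate(samples)
```

Both estimators converge to the analytic answer, but the biased one does so with noticeably smaller scatter because few particles are wasted deep in the cell where the escape probability is negligible.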

  19. Assessment of the genetic variance of late-onset Alzheimer's disease.

    PubMed

    Ridge, Perry G; Hoyt, Kaitlyn B; Boehme, Kevin; Mukherjee, Shubhabrata; Crane, Paul K; Haines, Jonathan L; Mayeux, Richard; Farrer, Lindsay A; Pericak-Vance, Margaret A; Schellenberg, Gerard D; Kauwe, John S K

    2016-05-01

Alzheimer's disease (AD) is a complex genetic disorder with no effective treatments. More than 20 common markers associated with AD have been identified. Recently, several rare variants have been identified in Amyloid Precursor Protein (APP), Triggering Receptor Expressed On Myeloid Cells 2 (TREM2) and Unc-5 Netrin Receptor C (UNC5C) that affect risk for AD. Despite the many successes, the genetic architecture of AD remains unsolved. We used Genome-wide Complex Trait Analysis to (1) estimate phenotypic variance explained by genetics; (2) calculate genetic variance explained by known AD single nucleotide polymorphisms (SNPs); and (3) identify the genomic locations of variation that explain the remaining unexplained genetic variance. In total, 53.24% of phenotypic variance is explained by genetics, but known AD SNPs only explain 30.62% of the genetic variance. Of the unexplained genetic variance, approximately 41% is explained by unknown SNPs in regions adjacent to known AD SNPs, and the remainder lies outside these regions. PMID:27036079
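The quoted percentages nest (SNP share is a fraction of the genetic share, which is a fraction of the phenotypic total), so it helps to make the partition explicit with phenotypic variance set to 1:

```python
# Back-of-the-envelope partition of the variance figures quoted above.
phenotypic = 1.0
genetic = 0.5324 * phenotypic    # 53.24% of phenotypic variance is genetic
known_snps = 0.3062 * genetic    # known AD SNPs explain 30.62% of the genetic part
unexplained = genetic - known_snps
near_known = 0.41 * unexplained  # ~41% sits adjacent to known AD SNPs
elsewhere = unexplained - near_known
# Known SNPs thus account for roughly 16.3% of total phenotypic variance.
```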

  20. Estimators for variance components in structured stair nesting models

    NASA Astrophysics Data System (ADS)

    Monteiro, Sandra; Fonseca, Miguel; Carvalho, Francisco

    2016-06-01

    The purpose of this paper is to present the estimation of the components of variance in structured stair nesting models. The relationship between the canonical variance components and the original ones, will be very important in obtaining that estimators.

  1. Fine-Grained Rims in the Allan Hills 81002 and Lewis Cliff 90500 CM2 Meteorites: Their Origin and Modification

    NASA Technical Reports Server (NTRS)

    Hua, X.; Wang, J.; Buseck, P. R.

    2002-01-01

Antarctic CM meteorites Allan Hills (ALH) 81002 and Lewis Cliff (LEW) 90500 contain abundant fine-grained rims (FGRs) that surround a variety of coarse-grained objects. FGRs from both meteorites have similar compositions and petrographic features, independent of their enclosed objects. The FGRs are chemically homogeneous at the 10 μm scale for major and minor elements and at the 25 μm scale for trace elements. They display accretionary features and contain large amounts of volatiles, presumably water. They are depleted in Ca, Mn, and S but enriched in P. All FGRs show a slightly fractionated rare earth element (REE) pattern, with enrichments of Gd and Yb and depletion of Er. Gd is twice as abundant as Er. Our results indicate that those FGRs are not genetically related to their enclosed cores. They were sampled from a reservoir of homogeneously mixed dust, prior to accretion to their parent body. The rim materials subsequently experienced aqueous alteration under identical conditions. Based on their mineral, textural, and especially chemical similarities, we conclude that ALH 81002 and LEW 90500 likely have a similar or identical source.

  2. Allan Brooks, naturalist and artist (1869-1946): the travails of an early twentieth century wildlife illustrator in North America.

    PubMed

    Winearls, Joan

    2008-01-01

    British by birth Allan Cyril Brooks (1869-1946) emigrated to Canada in the 1880s, and became one of the most important North American bird illustrators during the first half of the twentieth century. Brooks was one of the leading ornithologists and wildlife collectors of the time; he corresponded extensively with other ornithologists and supplied specimens to many major North American museums. From the 1890s on he hoped to support himself by painting birds and mammals, but this was not possible in Canada at that time and he was forced to turn to American sources for illustration commissions. His work can be compared with that of his contemporary, the leading American bird painter Louis Agassiz Fuertes (1874-1927), and there are striking similarities and differences in their careers. This paper discusses the work of a talented, self-taught wildlife artist working in a North American milieu, his difficulties and successes in a newly developing field, and his quest for Canadian recognition. PMID:19569391

  3. 40 CFR 124.62 - Decision on variances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 21 2010-07-01 2010-07-01 false Decision on variances. 124.62 Section... FOR DECISIONMAKING Specific Procedures Applicable to NPDES Permits § 124.62 Decision on variances... following variances (subject to EPA objection under § 123.44 for State permits): (1) Extensions under...

  4. 40 CFR 59.509 - Can I get a variance?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Can I get a variance? 59.509 Section 59... Volatile Organic Compound Emission Standards for Aerosol Coatings § 59.509 Can I get a variance? (a) Any... its reasonable control may apply in writing to the Administrator for a temporary variance....

  5. 36 CFR 27.4 - Variances and exceptions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 1 2010-07-01 2010-07-01 false Variances and exceptions. 27... CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws may provide for variances and exceptions. (b) Bylaws adopted pursuant to these standards shall...

  6. 20 CFR 901.40 - Proof; variance; amendment of pleadings.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Proof; variance; amendment of pleadings. 901... Suspension or Termination of Enrollment § 901.40 Proof; variance; amendment of pleadings. In the case of a variance between the allegations in a pleading and the evidence adduced in support of the pleading,...

  7. 31 CFR 10.67 - Proof; variance; amendment of pleadings.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Proof; variance; amendment of... BEFORE THE INTERNAL REVENUE SERVICE Rules Applicable to Disciplinary Proceedings § 10.67 Proof; variance; amendment of pleadings. In the case of a variance between the allegations in pleadings and the...

  8. 7 CFR 718.105 - Tolerances, variances, and adjustments.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Tolerances, variances, and adjustments. 718.105... APPLICABLE TO MULTIPLE PROGRAMS Determination of Acreage and Compliance § 718.105 Tolerances, variances, and... marketing quota crop allotment. (d) An administrative variance is applicable to all allotment crop...

  9. 40 CFR 52.1390 - Missoula variance provision.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 4 2010-07-01 2010-07-01 false Missoula variance provision. 52.1390... (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) Montana § 52.1390 Missoula variance provision. The Missoula City-County Air Pollution Control Program's Chapter X, Variances, which was...

  10. 29 CFR 1905.5 - Effect of variances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 5 2010-07-01 2010-07-01 false Effect of variances. 1905.5 Section 1905.5 Labor... RULES OF PRACTICE FOR VARIANCES, LIMITATIONS, VARIATIONS, TOLERANCES, AND EXEMPTIONS UNDER THE WILLIAMS-STEIGER OCCUPATIONAL SAFETY AND HEALTH ACT OF 1970 General § 1905.5 Effect of variances. All...

  11. 40 CFR 190.11 - Variances for unusual operations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Variances for unusual operations. 190... Standards for the Uranium Fuel Cycle § 190.11 Variances for unusual operations. The standards specified in § 190.10 may be exceeded if: (a) The regulatory agency has granted a variance based upon...

  12. 40 CFR 124.64 - Appeals of variances.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 21 2010-07-01 2010-07-01 false Appeals of variances. 124.64 Section... FOR DECISIONMAKING Specific Procedures Applicable to NPDES Permits § 124.64 Appeals of variances. (a) When a State issues a permit on which EPA has made a variance decision, separate appeals of the...

  13. 31 CFR 8.59 - Proof; variance; amendment of pleadings.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Proof; variance; amendment of... BEFORE THE BUREAU OF ALCOHOL, TOBACCO AND FIREARMS Disciplinary Proceedings § 8.59 Proof; variance; amendment of pleadings. In the case of a variance between the allegations in a pleading, the...

  14. 36 CFR 30.5 - Variances, exceptions, and use permits.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 1 2010-07-01 2010-07-01 false Variances, exceptions, and... UNIT § 30.5 Variances, exceptions, and use permits. (a) Zoning ordinances or amendments thereto, for... Recreation Area may provide for the granting of variances and exceptions. (b) Zoning ordinances or...

  15. 40 CFR 141.4 - Variances and exemptions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 23 2014-07-01 2014-07-01 false Variances and exemptions. 141.4 Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS General § 141.4 Variances and exemptions. (a) Variances or exemptions from certain provisions...

  16. 29 CFR 1905.5 - Effect of variances.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 5 2014-07-01 2014-07-01 false Effect of variances. 1905.5 Section 1905.5 Labor...-STEIGER OCCUPATIONAL SAFETY AND HEALTH ACT OF 1970 General § 1905.5 Effect of variances. All variances granted pursuant to this part shall have only future effect. In his discretion, the Assistant...

  17. Regression between earthquake magnitudes having errors with known variances

    NASA Astrophysics Data System (ADS)

    Pujol, Jose

    2016-06-01

    Recent publications on the regression between earthquake magnitudes assume that both magnitudes are affected by error and that only the ratio of error variances is known. If X and Y represent observed magnitudes, and x and y represent the corresponding theoretical values, the problem is to find the a and b of the best-fit line y = a x + b. This problem has a closed solution only for homoscedastic errors (their variances are all equal for each of the two variables). The published solution was derived using a method that cannot provide a sum of squares of residuals. Therefore, it is not possible to compare the goodness of fit for different pairs of magnitudes. Furthermore, the method does not provide expressions for the x and y. The least-squares method introduced here does not have these drawbacks. The two methods of solution result in the same equations for a and b. General properties of a discussed in the literature but not proved, or proved for particular cases, are derived here. A comparison of different expressions for the variances of a and b is provided. The paper also considers the statistical aspects of the ongoing debate regarding the prediction of y given X. Analysis of actual data from the literature shows that a new approach produces an average improvement of less than 0.1 magnitude units over the standard approach when applied to Mw vs. mb and Mw vs. MS regressions. This improvement is minor, within the typical error of Mw. Moreover, a test subset of 100 predicted magnitudes shows that the new approach results in magnitudes closer to the theoretically true magnitudes for only 65 % of them. For the remaining 35 %, the standard approach produces closer values. Therefore, the new approach does not always give the most accurate magnitude estimates.
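For the homoscedastic case the abstract mentions, the errors-in-variables fit with a known error-variance ratio has the classical closed form (Deming regression). The sketch below applies it to simulated magnitude pairs; the data, noise levels, and notation y = a x + b follow the abstract, but this is the textbook formula, not the paper's new least-squares derivation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical magnitude pairs: true values on a line, both coordinates noisy.
n = 500
x_true = rng.uniform(4.0, 7.0, n)
a_true, b_true = 0.9, 0.4            # y = a*x + b
ratio = 1.0                          # known error-variance ratio var(Y)/var(X)
X = x_true + rng.normal(0.0, 0.15, n)
Y = a_true * x_true + b_true + rng.normal(0.0, 0.15, n)

# Closed-form errors-in-variables (Deming) fit for a known variance ratio.
sxx = np.sum((X - X.mean()) ** 2)
syy = np.sum((Y - Y.mean()) ** 2)
sxy = np.sum((X - X.mean()) * (Y - Y.mean()))
a_hat = (syy - ratio * sxx
         + np.sqrt((syy - ratio * sxx) ** 2 + 4.0 * ratio * sxy ** 2)) / (2.0 * sxy)
b_hat = Y.mean() - a_hat * X.mean()
```

Unlike ordinary least squares of Y on X, this estimator is not attenuated by the noise in X, which is exactly why the choice of regression method matters for magnitude conversions.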

  18. MC Estimator Variance Reduction with Antithetic and Common Random Fields

    NASA Astrophysics Data System (ADS)

    Guthke, P.; Bardossy, A.

    2011-12-01

    Monte Carlo methods are widely used to estimate the outcome of complex physical models. For physical models with spatial parameter uncertainty, it is common to apply spatial random functions to the uncertain variables, which can then be used to interpolate between known values or to simulate a number of equally likely realizations. The price that has to be paid for such a stochastic approach is many simulations of the physical model instead of a single run with one 'best' input parameter set. The number of simulations is often limited by computational constraints, so a modeller has to compromise between the benefit of increased accuracy of the results and the cost of massively increased computational time. Our objective is to reduce the estimator variance of dependent variables in Monte Carlo frameworks. We therefore adapt two variance reduction techniques (antithetic variates and common random numbers) to a sequential random field simulation scheme that uses copulas as spatial dependence functions. The proposed methodology leads to pairs of spatial random fields with special structural properties that are advantageous in MC frameworks. Antithetic random fields (ARF) exhibit a reversed structure on the large scale, while the dependence on the local scale is preserved. Common random fields (CRF) show the same large-scale structures, but different spatial dependence on the local scale. The performance of the proposed methods is examined with two typical applications of stochastic hydrogeology. It is shown that ARF massively reduce the number of simulation runs required for convergence in Monte Carlo frameworks while keeping the same accuracy in terms of estimator variance. Furthermore, in multi-model frameworks such as sensitivity analysis of the spatial structure, where more than one spatial dependence model is used, the influence of the different dependence structures becomes obvious.
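    The antithetic idea can be illustrated in one dimension. The sketch below is a minimal, hypothetical example (not the paper's spatial random field scheme): for a monotone response f, averaging f(u) with its antithetic partner f(1 - u) cancels much of the sampling variance, so fewer model runs are needed for the same estimator accuracy.

```python
import random
import statistics

def antithetic_mc(f, n_pairs, seed=0):
    """Estimate E[f(U)], U ~ Uniform(0,1), using antithetic pairs (u, 1-u).

    Each pair contributes the average of f at the two negatively
    correlated inputs; for monotone f this cancels much of the variance.
    """
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_pairs):
        u = rng.random()
        estimates.append(0.5 * (f(u) + f(1.0 - u)))
    return statistics.mean(estimates), statistics.stdev(estimates)

# Toy "physical model": a monotone response curve with E[f(U)] = 1/3.
mean_est, spread = antithetic_mc(lambda u: u ** 2, 10000)
```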

  19. Regression between earthquake magnitudes having errors with known variances

    NASA Astrophysics Data System (ADS)

    Pujol, Jose

    2016-07-01

    Recent publications on the regression between earthquake magnitudes assume that both magnitudes are affected by error and that only the ratio of error variances is known. If X and Y represent observed magnitudes, and x and y represent the corresponding theoretical values, the problem is to find the a and b of the best-fit line y = ax + b. This problem has a closed-form solution only for homoscedastic errors (their variances are all equal for each of the two variables). The published solution was derived using a method that cannot provide a sum of squares of residuals, so it is not possible to compare the goodness of fit for different pairs of magnitudes. Furthermore, the method does not provide expressions for the theoretical values x and y. The least-squares method introduced here does not have these drawbacks. The two methods of solution result in the same equations for a and b. General properties of a that have been discussed in the literature but not proved, or proved only for particular cases, are derived here. A comparison of different expressions for the variances of a and b is provided. The paper also considers the statistical aspects of the ongoing debate regarding the prediction of y given X. Analysis of actual data from the literature shows that a new approach produces an average improvement of less than 0.1 magnitude units over the standard approach when applied to Mw vs. mb and Mw vs. MS regressions. This improvement is minor, within the typical error of Mw. Moreover, a test subset of 100 predicted magnitudes shows that the new approach results in magnitudes closer to the theoretically true magnitudes for only 65% of them. For the remaining 35%, the standard approach produces closer values. Therefore, the new approach does not always give the most accurate magnitude estimates.
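    The homoscedastic closed-form case mentioned above corresponds to what is commonly called Deming (general orthogonal) regression. A minimal sketch, assuming the standard textbook slope formula with a known error-variance ratio lam = var(err_y)/var(err_x); this is illustrative, not the paper's derivation:

```python
import math

def deming(x, y, lam=1.0):
    """Slope/intercept of y = a*x + b when both variables carry error
    and lam = var(err_y)/var(err_x) is known. A closed form exists
    only in this homoscedastic setting."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    a = (syy - lam * sxx
         + math.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
    b = my - a * mx
    return a, b

# Points on an exact line recover slope 2, intercept 0.
a, b = deming([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0], lam=1.0)
```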

  20. Speckle variance OCT imaging of the vasculature in live mammalian embryos

    NASA Astrophysics Data System (ADS)

    Sudheendran, N.; Syed, S. H.; Dickinson, M. E.; Larina, I. V.; Larin, K. V.

    2011-03-01

    Live imaging of normal and abnormal vascular development in mammalian embryos is an important tool in embryonic research, which can potentially contribute to the understanding, prevention and treatment of cardiovascular birth defects. Here, we used speckle variance analysis of swept source optical coherence tomography (OCT) data sets acquired from live mouse embryos to reconstruct the 3-D structure of the embryonic vasculature. Both Doppler OCT and speckle variance algorithms were used to reconstruct the vascular structure. The results demonstrate that speckle variance imaging provides a more accurate representation of the vascular structure, as it is not sensitive to the blood flow direction, whereas Doppler OCT imaging misses the blood flow component perpendicular to the beam direction. These studies suggest that speckle variance imaging is a promising tool to study vascular development in cultured mouse embryos.
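    At its core, speckle variance imaging computes the per-pixel intensity variance over a stack of repeated frames: moving blood decorrelates the speckle pattern, so vessel pixels show high temporal variance regardless of flow direction. A toy sketch under that assumption (hypothetical 8-frame stack; real pipelines operate on OCT B-scan stacks):

```python
import statistics

def speckle_variance(frames):
    """Per-pixel temporal variance over repeated frames.
    frames: list of 2-D lists, all with the same shape."""
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[statistics.pvariance([f[r][c] for f in frames])
             for c in range(cols)] for r in range(rows)]

# Hypothetical stack: static background plus one fluctuating "vessel" pixel.
vessel = [0.9, 1.3, 0.7, 1.2, 1.1, 0.6, 1.4, 0.8]   # decorrelated speckle
stack = [[[1.0] * 4 for _ in range(4)] for _ in range(8)]
for i in range(8):
    stack[i][2][2] = vessel[i]

sv = speckle_variance(stack)
# sv is zero everywhere except the vessel pixel at (2, 2).
```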

  1. Probability of the residual wavefront variance of an adaptive optics system and its application.

    PubMed

    Huang, Jian; Liu, Chao; Deng, Ke; Yao, Zhousi; Xian, Hao; Li, Xinyang

    2016-02-01

    For performance evaluation of an adaptive optics (AO) system, the probability distribution of the system residual wavefront variance can provide more information than the wavefront variance average. By studying the Zernike coefficients of an AO system residual wavefront, we derived exact expressions for the probability density functions of the wavefront variance and the Strehl ratio, for instantaneous and long-term exposures, owing to the insufficient control loop bandwidth of the AO system. Our calculations agree with the residual wavefront data of a closed-loop AO system. Using these functions, we investigated the relationship between the AO system bandwidth and the distribution of the residual wavefront variance. Additionally, we analyzed the availability of an AO system for evaluating AO performance. These results will assist in the design and probabilistic analysis of AO systems. PMID:26906850
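    For orientation, the familiar extended Maréchal rule of thumb relates residual wavefront variance (in rad²) to the Strehl ratio. This is a common small-residual approximation, not the exact probability-density result derived in the paper:

```python
import math

def strehl_marechal(sigma2_rad2):
    """Extended Marechal approximation: S ~ exp(-sigma^2), with the
    residual wavefront variance sigma^2 in rad^2. Reasonable for
    small residuals; a rule of thumb, not an exact result."""
    return math.exp(-sigma2_rad2)

# A residual variance of 0.1 rad^2 leaves a Strehl ratio near 0.90.
s = strehl_marechal(0.1)
```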

  2. Dynamics of mean-variance-skewness of cumulative crop yield impact temporal yield variance

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Production risk associated with cropping systems influences farmers’ decisions to adopt a new management practice or a production system. Cumulative yield (CY), temporal yield variance (TYV) and coefficient of variation (CV) were used to assess the risk associated with adopting combinations of new m...

  3. Motion Detection Using Mean Normalized Temporal Variance

    SciTech Connect

    Chan, C W

    2003-08-04

    Scene-Based Wave Front Sensing uses the correlation between successive wavelets to determine the phase aberrations which cause the blurring of digital images. Adaptive Optics technology uses that information to control deformable mirrors to correct for the phase aberrations, making the image clearer. The correlation between temporal subimages gives tip-tilt information. If these images do not have identical image content, tip-tilt estimations may be incorrect. Motion detection is necessary to help avoid errors initiated by dynamic subimage content. With a very limited number of pixels per subaperture, most conventional motion detection algorithms fall apart on our subimages. Despite this fact, motion detection based on the normalized variance of individual pixels proved to be effective.
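    The statistic named in the title can be sketched directly: the temporal variance of each pixel, normalized by its temporal mean, thresholded to flag changing content. The threshold value below is illustrative, not taken from the report:

```python
import statistics

def mean_normalized_temporal_variance(pixel_series):
    """Temporal variance of one pixel divided by its temporal mean.
    Large values suggest the subimage content changed (motion)."""
    m = statistics.mean(pixel_series)
    return statistics.pvariance(pixel_series) / m if m else 0.0

def detect_motion(pixel_series, threshold=0.05):
    # threshold is a hypothetical tuning parameter
    return mean_normalized_temporal_variance(pixel_series) > threshold

static = [100, 101, 99, 100, 100]   # sensor noise only
moving = [100, 150, 90, 160, 80]    # content changing underneath
```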

  4. Calculating bone-lead measurement variance.

    PubMed Central

    Todd, A C

    2000-01-01

    The technique of (109)Cd-based X-ray fluorescence (XRF) measurements of lead in bone is well established. A paper by some XRF researchers [Gordon CL, et al. The Reproducibility of (109)Cd-based X-ray Fluorescence Measurements of Bone Lead. Environ Health Perspect 102:690-694 (1994)] presented the currently practiced method for calculating the variance of an in vivo measurement once a calibration line has been established. This paper corrects typographical errors in the method published by those authors; presents a crude estimate of the measurement error that can be acquired without computational peak fitting programs; and draws attention to the measurement error attributable to covariance, an important feature in the construct of the currently accepted method that is flawed under certain circumstances. PMID:10811562
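    The covariance the paper highlights enters through ordinary error propagation along a fitted calibration line y = a + b*x. A hedged sketch with made-up fit uncertainties (not the paper's corrected formulas): dropping the cross term 2*x0*cov(a, b) can noticeably misstate the measurement variance.

```python
def prediction_variance(x0, var_a, var_b, cov_ab, var_meas=0.0):
    """Variance of y = a + b*x0 propagated from the calibration-fit
    uncertainties var(a), var(b) and their covariance, plus any
    independent measurement variance."""
    return var_a + x0 ** 2 * var_b + 2.0 * x0 * cov_ab + var_meas

# Illustrative (made-up) fit uncertainties:
v = prediction_variance(x0=10.0, var_a=0.4, var_b=0.002, cov_ab=-0.02)
# 0.4 + 100*0.002 + 2*10*(-0.02) = 0.2 -- the covariance term matters.
```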

  5. Variance-based interaction index measuring heteroscedasticity

    NASA Astrophysics Data System (ADS)

    Ito, Keiichi; Couckuyt, Ivo; Poles, Silvia; Dhaene, Tom

    2016-06-01

    This work is motivated by the need to deal with models with high-dimensional input spaces of real variables. One way to tackle high-dimensional problems is to identify interaction or non-interaction among input parameters. We propose a new variance-based sensitivity interaction index that can detect and quantify interactions among the input variables of mathematical functions and computer simulations. The computation is very similar to that of first-order sensitivity indices by Sobol'. The proposed interaction index can quantify the relative importance of input variables in interaction. Furthermore, detection of non-interaction for screening can be done with as few as 4n + 2 function evaluations, where n is the number of input variables. Using the interaction indices based on heteroscedasticity, the original function may be decomposed into a set of lower dimensional functions which may then be analyzed separately.
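    A first-order (main-effect) sensitivity index of the Sobol' type can be illustrated by brute force on a grid: Var(E[f | one input]) / Var(f). This didactic version is far costlier than the 4n + 2 evaluations mentioned above and is not the authors' estimator:

```python
import statistics

def first_order_indices(f, n=64):
    """Brute-force first-order sensitivities of f(u, v), u, v in [0,1]:
    Var_u(E_v[f]) / Var(f) and Var_v(E_u[f]) / Var(f) on an n*n grid."""
    grid = [(i + 0.5) / n for i in range(n)]
    vals = [[f(u, v) for v in grid] for u in grid]
    var_total = statistics.pvariance([x for row in vals for x in row])
    s_u = statistics.pvariance([statistics.mean(row) for row in vals]) / var_total
    s_v = statistics.pvariance([statistics.mean(col) for col in zip(*vals)]) / var_total
    return s_u, s_v

# Additive (non-interacting) function: indices are 0.2 and 0.8, summing to 1.
s_u, s_v = first_order_indices(lambda u, v: u + 2.0 * v)
```

For a function with interaction, such as f(u, v) = u * v, the two first-order indices sum to less than 1; the shortfall is what interaction indices quantify.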

  6. Adjusting for Unequal Variances when Comparing Means in One-Way and Two-Way Fixed Effects ANOVA Models.

    ERIC Educational Resources Information Center

    Wilcox, Rand R.

    1989-01-01

    Two methods of handling unequal variances in the two-way fixed effects analysis of variance (ANOVA) model are described. One is based on an improved Wilcox (1988) method for the one-way model, and the other is an extension of G. S. James' (1951) second order method. (TJH)

  7. From means and variances to persons and patterns

    PubMed Central

    Grice, James W.

    2015-01-01

    A novel approach for conceptualizing and analyzing data from psychological studies is presented and discussed. This approach is centered on model building in an effort to explicate the structures and processes believed to generate a set of observations. These models therefore go beyond the variable-based, path models in use today which are limiting with regard to the types of inferences psychologists can draw from their research. In terms of analysis, the newer approach replaces traditional aggregate statistics such as means, variances, and covariances with methods of pattern detection and analysis. While these methods are person-centered and do not require parametric assumptions, they are both demanding and rigorous. They also provide psychologists with the information needed to draw the primary inference they often wish to make from their research; namely, the inference to best explanation. PMID:26257672

  8. Low variance at large scales of WMAP 9 year data

    SciTech Connect

    Gruppuso, A.; Finelli, F.; Rosa, A. De; Mandolesi, N.; Natoli, P.; Paci, F.; Molinari, D. E-mail: natoli@fe.infn.it E-mail: finelli@iasfbo.inaf.it E-mail: derosa@iasfbo.inaf.it

    2013-07-01

    We use an optimal estimator to study the variance of the WMAP 9 CMB field at low resolution, in both temperature and polarization. Employing realistic Monte Carlo simulations, we find statistically significant deviations from the ΛCDM model in several sky cuts for the temperature field. For the masks considered in this analysis, which cover at least 54% of the sky, the WMAP 9 CMB sky and ΛCDM are incompatible at ≥ 99.94% C.L. at large angles ( > 5°). We find instead no anomaly in polarization. As a byproduct of our analysis, we present new, optimal estimates of the WMAP 9 CMB angular power spectra from the WMAP 9 year data at low resolution.

  9. Variance of indoor radon concentration: Major influencing factors.

    PubMed

    Yarmoshenko, I; Vasilyev, A; Malinovsky, G; Bossew, P; Žunić, Z S; Onischenko, A; Zhukovsky, M

    2016-01-15

    Variance of radon concentration in dwelling atmosphere is analysed with regard to geogenic and anthropogenic influencing factors. Analysis includes review of 81 national and regional indoor radon surveys with varying sampling pattern, sample size and duration of measurements and detailed consideration of two regional surveys (Sverdlovsk oblast, Russia and Niška Banja, Serbia). The analysis of the geometric standard deviation revealed that main factors influencing the dispersion of indoor radon concentration over the territory are as follows: area of territory, sample size, characteristics of measurements technique, the radon geogenic potential, building construction characteristics and living habits. As shown for Sverdlovsk oblast and Niška Banja town the dispersion as quantified by GSD is reduced by restricting to certain levels of control factors. Application of the developed approach to characterization of the world population radon exposure is discussed. PMID:26409145
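    The dispersion measure used above, the GSD, is the geometric standard deviation: the exponential of the standard deviation of log-concentrations, the natural spread parameter for roughly lognormal indoor radon distributions. A minimal sketch with hypothetical readings:

```python
import math
import statistics

def geometric_sd(values):
    """Geometric standard deviation: exp of the (population) standard
    deviation of the log-values. Dimensionless; GSD = 1 means no spread."""
    logs = [math.log(v) for v in values]
    return math.exp(statistics.pstdev(logs))

radon = [20.0, 40.0, 80.0]   # hypothetical Bq/m^3 readings
gsd = geometric_sd(radon)
```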

  10. PET image reconstruction: mean, variance, and optimal minimax criterion

    NASA Astrophysics Data System (ADS)

    Liu, Huafeng; Gao, Fei; Guo, Min; Xue, Liying; Nie, Jing; Shi, Pengcheng

    2015-04-01

    Given the noise nature of positron emission tomography (PET) measurements, it is critical to know the image quality and reliability as well as the expected radioactivity map (mean image) for both qualitative interpretation and quantitative analysis. While existing efforts have often been devoted to providing only the reconstructed mean image, we present a unified framework for joint estimation of the mean and corresponding variance of the radioactivity map based on an efficient optimal min-max criterion. The proposed framework formulates the PET image reconstruction problem as a transformation from system uncertainties to estimation errors, where the minimax criterion is adopted to minimize the estimation errors under maximal system uncertainties. The estimation errors, in the form of a covariance matrix, express the measurement uncertainties in a complete way. The framework is then optimized by ∞-norm optimization and solved with the corresponding H∞ filter. Unlike conventional statistical reconstruction algorithms, which rely on statistical modeling of the measurement data or noise, the proposed joint estimation is formulated in terms of signal energies and can accommodate anything from imperfect statistical assumptions to no a priori statistical assumptions at all. The performance and accuracy of reconstructed mean and variance images are validated using Monte Carlo simulations. Experiments on phantom scans with a small animal PET scanner and on real patient scans are also conducted for assessment of clinical potential.

  11. Heritable environmental variance causes nonlinear relationships between traits: application to birth weight and stillbirth of pigs.

    PubMed

    Mulder, Herman A; Hill, William G; Knol, Egbert F

    2015-04-01

    There is recent evidence from laboratory experiments and analysis of livestock populations that not only the phenotype itself, but also its environmental variance, is under genetic control. Little is known about the relationships between the environmental variance of one trait and mean levels of other traits, however. A genetic covariance between these is expected to lead to nonlinearity between them, for example between birth weight and survival of piglets, where animals of extreme weights have lower survival. The objectives were to derive this nonlinear relationship analytically using multiple regression and apply it to data on piglet birth weight and survival. This study provides a framework to study such nonlinear relationships caused by genetic covariance of environmental variance of one trait and the mean of the other. It is shown that positions of phenotypic and genetic optima may differ and that genetic relationships are likely to be more curvilinear than phenotypic relationships, dependent mainly on the environmental correlation between these traits. Genetic correlations may change if the population means change relative to the optimal phenotypes. Data of piglet birth weight and survival show that the presence of nonlinearity can be partly explained by the genetic covariance between environmental variance of birth weight and survival. The framework developed can be used to assess effects of artificial and natural selection on means and variances of traits and the statistical method presented can be used to estimate trade-offs between environmental variance of one trait and mean levels of others. PMID:25631318

  12. Heritable Environmental Variance Causes Nonlinear Relationships Between Traits: Application to Birth Weight and Stillbirth of Pigs

    PubMed Central

    Mulder, Herman A.; Hill, William G.; Knol, Egbert F.

    2015-01-01

    There is recent evidence from laboratory experiments and analysis of livestock populations that not only the phenotype itself, but also its environmental variance, is under genetic control. Little is known about the relationships between the environmental variance of one trait and mean levels of other traits, however. A genetic covariance between these is expected to lead to nonlinearity between them, for example between birth weight and survival of piglets, where animals of extreme weights have lower survival. The objectives were to derive this nonlinear relationship analytically using multiple regression and apply it to data on piglet birth weight and survival. This study provides a framework to study such nonlinear relationships caused by genetic covariance of environmental variance of one trait and the mean of the other. It is shown that positions of phenotypic and genetic optima may differ and that genetic relationships are likely to be more curvilinear than phenotypic relationships, dependent mainly on the environmental correlation between these traits. Genetic correlations may change if the population means change relative to the optimal phenotypes. Data of piglet birth weight and survival show that the presence of nonlinearity can be partly explained by the genetic covariance between environmental variance of birth weight and survival. The framework developed can be used to assess effects of artificial and natural selection on means and variances of traits and the statistical method presented can be used to estimate trade-offs between environmental variance of one trait and mean levels of others. PMID:25631318

  13. Violation of Homogeneity of Variance Assumption in the Integrated Moving Averages Time Series Model.

    ERIC Educational Resources Information Center

    Gullickson, Arlen R.; And Others

    This study is an analysis of the robustness of the Box-Tiao integrated moving averages model for analysis of time series quasi experiments. One of the assumptions underlying the Box-Tiao model is that all N values of α_t come from the same population, which has variance σ². The robustness was studied only in terms of…

  14. An Introduction to Regression & Canonical Commonality Analyses: Partitioning Predicted Variance into Constituent Parts

    ERIC Educational Resources Information Center

    Yetkiner, Zeynep Ebrar

    2009-01-01

    Commonality analysis is a method of partitioning variance to determine the predictive ability unique to each predictor (or predictor set) and common to two or more of the predictors (or predictor sets). The purposes of the present paper are to (a) explain commonality analysis in a multiple regression context as an alternative for middle grades…

  15. Predicting Risk Sensitivity in Humans and Lower Animals: Risk as Variance or Coefficient of Variation

    ERIC Educational Resources Information Center

    Weber, Elke U.; Shafir, Sharoni; Blais, Ann-Renee

    2004-01-01

    This article examines the statistical determinants of risk preference. In a meta-analysis of animal risk preference (foraging birds and insects), the coefficient of variation (CV), a measure of risk per unit of return, predicts choices far better than outcome variance, the risk measure of normative models. In a meta-analysis of human risk…

  16. Explanatory Variance in Maximal Oxygen Uptake

    PubMed Central

    Robert McComb, Jacalyn J.; Roh, Daesung; Williams, James S.

    2006-01-01

    The purpose of this study was to develop a prediction equation that could be used to estimate maximal oxygen uptake (VO2max) from a submaximal water running protocol. Thirty-two volunteers (n = 19 males, n = 13 females), ages 18 - 24 years, underwent the following testing procedures: (a) a 7-site skin fold assessment; (b) a land VO2max running treadmill test; and (c) a 6 min water running test. For the water running submaximal protocol, the participants were fitted with an Aqua Jogger Classic Uni-Sex Belt and a Polar Heart Rate Monitor; the participants’ head, shoulders, hips and feet were vertically aligned, using a modified running/bicycle motion. A regression model was used to predict VO2max. The criterion variable, VO2max, was measured using open-circuit calorimetry utilizing the Bruce Treadmill Protocol. Predictor variables included in the model were percent body fat (% BF), height, weight, gender, and heart rate following a 6 min water running protocol. Percent body fat accounted for 76% (r = -0.87, SEE = 3.27) of the variance in VO2max. No other variables significantly contributed to the explained variance in VO2max. The equation for the estimation of VO2max is as follows: VO2max (ml·kg-1·min-1) = 56.14 - 0.92 (% BF). Key Points: Body fat is an important predictor of VO2max. Individuals with a low skill level in water running may shorten their stride length to avoid the onset of fatigue at higher workloads; therefore, the net oxygen cost of the exercise cannot be controlled in inexperienced individuals in water running at fatiguing workloads. Experiments using water running protocols to predict VO2max should use individuals trained in the mechanics of water running. A submaximal water running protocol is needed in the research literature for individuals trained in the mechanics of water running, given the popularity of water running rehabilitative exercise and training programs. PMID:24260003
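    The reported regression equation is simple to apply; the function below just encodes VO2max = 56.14 - 0.92 (% BF) from the abstract:

```python
def predict_vo2max(percent_body_fat):
    """Regression equation reported in the study:
    VO2max (ml/kg/min) = 56.14 - 0.92 * (% body fat)."""
    return 56.14 - 0.92 * percent_body_fat

# E.g. 15% body fat: 56.14 - 13.8 = 42.34 ml/kg/min.
v = predict_vo2max(15.0)
```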

  17. Olivine in Martian Meteorite Allan Hills 84001: Evidence for a High-Temperature Origin and Implications for Signs of Life

    NASA Technical Reports Server (NTRS)

    Shearer, C. K.; Leshin, L. A.; Adcock, C. T.

    1999-01-01

    Olivine from Martian meteorite Allan Hills (ALH) 84001 occurs as clusters within orthopyroxene adjacent to fractures containing disrupted carbonate globules and feldspathic shock glass. The inclusions are irregular in shape and range in size from approx. 40 microns to submicrometer. Some of the inclusions are elongate and boudinage-like. The olivine grains are in sharp contact with the enclosing orthopyroxene and often contain small inclusions of chromite. The olivine exhibits a very limited range of composition, from Fo65 to Fo66 (n = 25). The δ18O values of the olivine and orthopyroxene analyzed by ion microprobe range from +4.3 to +5.3‰ and are indistinguishable from each other within analytical uncertainty. The mineral chemistries, O-isotopic data, and textural relationships indicate that the olivine inclusions were produced at a temperature greater than 800 °C. It is unlikely that the olivines formed during the same event that gave rise to the carbonates in ALH 84001, which have more elevated and variable δ18O values and were probably formed from fluids that were not in isotopic equilibrium with the orthopyroxene or olivine. The reactions most likely instrumental in the formation of olivine could be either the dehydration of hydrous silicates that formed during carbonate precipitation or the reduction of orthopyroxene and spinel. If the olivine was formed by either reaction during a postcarbonate heating event, the implications are profound with regard to the interpretations of McKay et al. Due to the low diffusion rates in carbonates, this rapid, high-temperature event would have resulted in the preservation of the fine-scale carbonate zoning while partially devolatilizing select carbonate compositions on a submicrometer scale. This may have resulted in the formation of the minute magnetite grains that McKay et al. attributed to biogenic activity.

  18. Linear minimum variance filters applied to carrier tracking

    NASA Technical Reports Server (NTRS)

    Gustafson, D. E.; Speyer, J. L.

    1976-01-01

    A new approach is taken to the problem of tracking a fixed amplitude signal with a Brownian-motion phase process. Classically, a first-order phase-lock loop (PLL) is used; here, the problem is treated via estimation of the quadrature signal components. In this space, the state dynamics are linear with white multiplicative noise. Therefore, linear minimum-variance filters, which have a particularly simple mechanization, are suggested. The resulting error dynamics are linear at any signal/noise ratio, unlike the classical PLL. During synchronization, and above threshold, this filter with constant gains degrades by 3 per cent in output rms phase error with respect to the classical loop. However, up to 80 per cent of the maximum possible noise improvement is obtained below threshold, where the classical loop is nonoptimum, as demonstrated by a Monte Carlo analysis. Filter mechanizations are presented for both carrier and baseband operation.

  19. Estimating the encounter rate variance in distance sampling

    USGS Publications Warehouse

    Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.

    2009-01-01

    The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.
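    Treating the K transect lines as a simple random sample leads to a standard design-based estimator of the encounter-rate variance. The form below is one common length-weighted variant, sketched with a hypothetical survey; it is not necessarily the exact estimator compared in the paper:

```python
def encounter_rate_variance(counts, lengths):
    """Design-based variance of the encounter rate n/L across K lines,
    treating the lines as a simple random sample (the usual default,
    whose behaviour under systematic designs the paper examines).
    counts: detections per line; lengths: line lengths."""
    K = len(counts)
    L = sum(lengths)
    rate = sum(counts) / L
    return K / (L ** 2 * (K - 1)) * sum(
        l ** 2 * (c / l - rate) ** 2 for c, l in zip(counts, lengths))

# Hypothetical survey: four equal-length lines, varying detections.
v = encounter_rate_variance([4, 6, 2, 8], [10.0, 10.0, 10.0, 10.0])
```

With equal line lengths this reduces to the sample variance of the per-line encounter rates divided by K, a useful sanity check.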

  20. A digitally implemented phase-locked loop detection scheme for analysis of the phase and power stability of a calibration tone

    NASA Technical Reports Server (NTRS)

    Densmore, A. C.

    1988-01-01

    A digital phase-locked loop (PLL) scheme is described which detects the phase and power of a high SNR calibration tone. The digital PLL is implemented in software directly from the given description. It was used to evaluate the stability of the Goldstone Deep Space Station open loop receivers for Radio Science. Included is a derivation of the Allan variance sensitivity of the PLL imposed by additive white Gaussian noise; a lower limit is placed on the carrier frequency.
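    What a PLL ultimately extracts from a high-SNR calibration tone, its phase and power, can be computed open-loop by correlating against quadrature references at the known tone frequency. A didactic sketch, not the Goldstone receiver software:

```python
import math

def tone_phase_power(samples, f0, fs):
    """Phase (rad) and power (mean square) of a tone of known frequency
    f0 in a real-valued series sampled at fs, via quadrature correlation.
    This is the quantity a PLL tracks, computed here open-loop."""
    n = len(samples)
    i_sum = sum(s * math.cos(2.0 * math.pi * f0 * k / fs)
                for k, s in enumerate(samples))
    q_sum = sum(s * -math.sin(2.0 * math.pi * f0 * k / fs)
                for k, s in enumerate(samples))
    i_avg, q_avg = 2.0 * i_sum / n, 2.0 * q_sum / n
    phase = math.atan2(q_avg, i_avg)
    power = (i_avg ** 2 + q_avg ** 2) / 2.0
    return phase, power

# Unit-amplitude 50 Hz tone at fs = 1 kHz with 0.5 rad phase offset.
fs, f0 = 1000.0, 50.0
tone = [math.cos(2.0 * math.pi * f0 * k / fs + 0.5) for k in range(1000)]
ph, pw = tone_phase_power(tone, f0, fs)
# ph ~ 0.5 rad; pw ~ 0.5 (the mean square of a unit-amplitude tone).
```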

  1. Estimation of Variance Components of Quantitative Traits in Inbred Populations

    PubMed Central

    Abney, Mark; McPeek, Mary Sara; Ober, Carole

    2000-01-01

    Summary Use of variance-component estimation for mapping of quantitative-trait loci in humans is a subject of great current interest. When only trait values, not genotypic information, are considered, variance-component estimation can also be used to estimate heritability of a quantitative trait. Inbred pedigrees present special challenges for variance-component estimation. First, there are more variance components to be estimated in the inbred case, even for a relatively simple model including additive, dominance, and environmental effects. Second, more identity coefficients need to be calculated from an inbred pedigree in order to perform the estimation, and these are computationally more difficult to obtain in the inbred than in the outbred case. As a result, inbreeding effects have generally been ignored in practice. We describe here the calculation of identity coefficients and estimation of variance components of quantitative traits in large inbred pedigrees, using the example of HDL in the Hutterites. We use a multivariate normal model for the genetic effects, extending the central-limit theorem of Lange to allow for both inbreeding and dominance under the assumptions of our variance-component model. We use simulated examples to give an indication of under what conditions one has the power to detect the additional variance components and to examine their impact on variance-component estimation. We discuss the implications for mapping and heritability estimation by use of variance components in inbred populations. PMID:10677322

  2. Multiperiod Mean-Variance Portfolio Optimization via Market Cloning

    SciTech Connect

    Ankirchner, Stefan; Dermoune, Azzouz

    2011-08-15

    The problem of finding the mean-variance optimal portfolio in a multiperiod model cannot be solved directly by means of dynamic programming. In order to find a solution, we therefore first introduce independent market clones having the same distributional properties as the original market, and we replace the portfolio mean and variance by their empirical counterparts. We then use dynamic programming to derive portfolios maximizing a weighted sum of the empirical mean and variance. By letting the number of market clones converge to infinity we are able to solve the original mean-variance problem.
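    The clone idea can be caricatured in a single period: choose the portfolio weight maximizing the empirical mean minus a risk-aversion multiple of the empirical variance, both computed over clone outcomes. A toy sketch (single risky asset, hypothetical returns, zero risk-free rate), not the paper's dynamic program:

```python
import statistics

def best_weight(clone_returns, gamma=1.0, steps=100):
    """Grid-search the weight w in [0, 1] maximizing
    empirical_mean - gamma * empirical_variance of w * return
    over the clone outcomes."""
    best_w, best_score = 0.0, float("-inf")
    for i in range(steps + 1):
        w = i / steps
        port = [w * r for r in clone_returns]
        score = statistics.mean(port) - gamma * statistics.pvariance(port)
        if score > best_score:
            best_w, best_score = w, score
    return best_w

# Hypothetical clone returns (mean 5%, modest spread): with gamma = 10
# the unconstrained optimum exceeds 1, so the search saturates at w = 1.
w = best_weight([0.02, 0.05, 0.08, 0.05], gamma=10.0)
```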

  3. Cumberland Falls chondritic inclusions. III - Consortium study of relationship to inclusions in Allan Hills 78113 aubrite

    NASA Technical Reports Server (NTRS)

    Lipschutz, Michael E.; Verkouteren, R. Michael; Sears, Derek W. G.; Hasan, Fouad A.; Prinz, Martin

    1988-01-01

    The contents of Ag, Au, Bi, Cd, Co, Cs, Ga, In, Rb, Sb, Se, Te, Tl, U, and Zn in large chondritic clasts from the Cumberland Falls aubrite were determined by radiochemical neutron activation analysis, and the results, together with the results of a mineralogical investigation, were compared with respective data obtained for three primitive inclusions from the ALH A78113 aubrite. The results indicated that the clasts from both aubrite sources constitute a single chondritic suite. These analytical data, together with the thermoluminescence results for the Cumberland Falls chondritic inclusions and achondritic host, indicate that the inclusions in Cumberland Falls and in the ALH A78113 aubrite represent a primitive chondrite sample suite whose properties were established during primary nebular accretion and condensation over a broad redox range.

  4. Water vapor variance measurements using a Raman lidar

    NASA Technical Reports Server (NTRS)

    Evans, K.; Melfi, S. H.; Ferrare, R.; Whiteman, D.

    1992-01-01

    Because of the importance of atmospheric water vapor variance, we have analyzed data from the NASA/Goddard Raman lidar to obtain temporal scales of water vapor mixing ratio as a function of altitude over observation periods extending to 12 hours. The ground-based lidar measures water vapor mixing ratio from near the earth's surface to an altitude of 9-10 km. Moisture profiles are acquired once every minute with 75 m vertical resolution. Data at each 75 meter altitude level can be displayed as a function of time from the beginning to the end of an observation period. These time sequences have been spectrally analyzed using a fast Fourier transform technique. An example of such a temporal spectrum obtained between 00:22 and 10:29 UT on December 6, 1991 is shown in the figure. The curve shown in the figure represents the spectral average of data from 11 height levels centered on an altitude of 1 km (1 ± 0.375 km). The spectrum shows a decrease in energy density with frequency which generally follows a -5/3 power law over the spectral interval 3×10^-5 to 4×10^-3 Hz. The flattening of the spectrum for frequencies greater than 6×10^-3 Hz is most likely a measure of instrumental noise. Spectra like that shown in the figure have been calculated for other altitudes and show changes in spectral features with height. Spectral analyses versus height have been performed for several observation periods, which demonstrate changes in water vapor mixing ratio spectral character from one observation period to the next. The combination of these temporal spectra with independent measurements of winds aloft provides an opportunity to infer spatial scales of moisture variance.
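    The spectral analysis described, a Fourier transform of each altitude level's time sequence, amounts to computing a power spectrum. A plain-DFT sketch for clarity (an FFT is the fast equivalent), with an artificial sinusoidal series standing in for lidar data:

```python
import cmath
import math

def power_spectrum(x, dt):
    """Power spectrum of an evenly sampled series (sampling interval dt),
    computed with a plain mean-removed DFT over positive frequencies."""
    n = len(x)
    mean = sum(x) / n
    freqs, power = [], []
    for k in range(1, n // 2):
        c = sum((x[j] - mean) * cmath.exp(-2j * math.pi * k * j / n)
                for j in range(n))
        freqs.append(k / (n * dt))
        power.append(abs(c) ** 2 / n)
    return freqs, power

# A pure 2-cycle sinusoid concentrates its power in the k = 2 bin.
n, dt = 64, 60.0   # e.g. one sample per minute
x = [math.sin(2.0 * math.pi * 2 * j / n) for j in range(n)]
freqs, power = power_spectrum(x, dt)
```

On real mixing-ratio records one would plot log power against log frequency and look for the -5/3 slope reported above.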

  5. Neuroticism explains unwanted variance in Implicit Association Tests of personality: possible evidence for an affective valence confound

    PubMed Central

    Fleischhauer, Monika; Enge, Sören; Miller, Robert; Strobel, Alexander; Strobel, Anja

    2013-01-01

    Meta-analytic data highlight the value of the Implicit Association Test (IAT) as an indirect measure of personality. Based on evidence suggesting that confounding factors such as cognitive abilities contribute to the IAT effect, this study provides a first investigation of whether basic personality traits explain unwanted variance in the IAT. In a gender-balanced sample of 204 volunteers, the Big-Five dimensions were assessed via self-report, peer-report, and IAT. By means of structural equation modeling (SEM), latent Big-Five personality factors (based on self- and peer-report) were estimated and their predictive value for unwanted variance in the IAT was examined. In a first analysis, unwanted variance was defined in the sense of method-specific variance which may result from differences in task demands between the two IAT block conditions and which can be mirrored by the absolute size of the IAT effects. In a second analysis, unwanted variance was examined in a broader sense defined as those systematic variance components in the raw IAT scores that are not explained by the latent implicit personality factors. In contrast to the absolute IAT scores, this also considers biases associated with the direction of IAT effects (i.e., whether they are positive or negative in sign), biases that might result, for example, from the IAT's stimulus or category features. None of the explicit Big-Five factors was predictive for method-specific variance in the IATs (first analysis). However, when considering unwanted variance that goes beyond pure method-specific variance (second analysis), a substantial effect of neuroticism occurred that may have been driven by the affective valence of IAT attribute categories and the facilitated processing of negative stimuli, typically associated with neuroticism. The findings thus point to the necessity of using attribute category labels and stimuli of similar affective valence in personality IATs to avoid confounding due to recoding. 
PMID

  6. A nitrogen and argon stable isotope study of Allan Hills 84001: implications for the evolution of the Martian atmosphere.

    PubMed

    Grady, M M; Wright, I P; Pillinger, C T

    1998-07-01

    The abundances and isotopic compositions of N and Ar have been measured by stepped combustion of the Allan Hills 84001 (ALH 84001) Martian orthopyroxenite. Material described as shocked is N-poor ([N] approximately 0.34 ppm; δ15N approximately +23‰), although during stepped combustion 15N-enriched N (δ15N approximately +143‰) is released in a narrow temperature interval between 700 °C and 800 °C, along with 13C-enriched C (δ13C approximately +19‰) and 40Ar. Cosmogenic species are found to be negligible at this temperature; thus, the isotopically heavy component is identified, in part, as Martian atmospheric gas trapped relatively recently in the history of ALH 84001. The N and Ar data show that ALH 84001 contains species from the Martian lithosphere, a component interpreted as ancient trapped atmosphere (in addition to the modern atmospheric species), and excess 40Ar from K decay. Deconvolution of radiogenic 40Ar from other Ar components, on the basis of end-member 36Ar/14N and 40Ar/36Ar ratios, has enabled calculation of a K-Ar age for ALH 84001 of 3.5-4.6 Ga, depending on the assumed K abundance. If the component believed to be Martian palaeoatmosphere was introduced to ALH 84001 at the time the K-Ar age was set, then the composition of the atmosphere at that time is constrained to δ15N ≥ +200‰, 40Ar/36Ar ≤ 3000 and 36Ar/14N ≥ 17 × 10^-5. In terms of the petrogenetic history of the meteorite, ALH 84001 crystallised soon after differentiation of the planet, may have been shocked and thermally metamorphosed in an early period of bombardment, and was then subjected to a second event. This later process did not reset the K-Ar system but perhaps was responsible for introducing (recent) atmospheric gases into ALH 84001; it might also mark the time at which ALH 84001 suffered the fluid alteration that resulted in the formation of the plagioclase and carbonate mineral assemblages. PMID:11543078
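The K-Ar age quoted above follows from the standard K-Ar age equation once the radiogenic 40Ar* is deconvolved and a K abundance is assumed. As a rough illustration (not the paper's own calculation), the relation can be sketched using the conventional Steiger & Jäger (1977) decay constants; the function names are illustrative:

```python
import math

# Conventional 40K decay constants (Steiger & Jaeger, 1977)
LAMBDA_TOTAL = 5.543e-10  # total 40K decay constant, 1/yr
LAMBDA_EC = 0.581e-10     # electron-capture branch (40K -> 40Ar), 1/yr

def k_ar_age(ar40_k40_ratio):
    """K-Ar age in years from the radiogenic 40Ar*/40K atomic ratio."""
    return (1.0 / LAMBDA_TOTAL) * math.log(
        1.0 + (LAMBDA_TOTAL / LAMBDA_EC) * ar40_k40_ratio)

def ar40_k40_for_age(age_yr):
    """Inverse relation: 40Ar*/40K ratio accumulated over age_yr."""
    return (LAMBDA_EC / LAMBDA_TOTAL) * (math.exp(LAMBDA_TOTAL * age_yr) - 1.0)
```

The paper's 3.5-4.6 Ga spread arises because converting the measured 40Ar* into a 40Ar*/40K ratio requires an assumed K abundance.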

  7. Extreme metamorphism in a firn core from the Allan Hills, Antarctica, as an analogue for glacial conditions

    NASA Astrophysics Data System (ADS)

    Dadic, Ruzica; Schneebeli, Martin; Bertler, Nancy; Schwikowski, Margit; Matzl, Margret

    2015-04-01

    Understanding processes in near-zero accumulation areas can help to better understand the ranges of isotopic composition in ice cores, particularly during ice ages, when accumulation rates were lower than today. Snow metamorphism is a primary driver of the transition from snow to ice and can be accompanied by altered isotopic compositions and chemical species concentration. High-degree snow metamorphism, which results in major structural changes, is little-studied but has been identified in certain places in Antarctica. Here we report on a 5-m firn core collected adjacent to a blue-ice field in the Allan Hills, Antarctica. We determined the physical properties of the snow using micro-computed tomography (micro-CT) and measured the isotopic composition of δD and δ18O, as well as 210Pb activity. The core shows a high degree of snow metamorphism and an exponential decrease in specific surface area (SSA), but no clear densification, with depth. The micro-CT measurements show a homogenous and stable structure throughout the entire core, with obvious erosion features in the near-surface, where high-resolution data are available. The observed firn structure is likely caused by a combination of unique depositional and post-depositional processes. The defining depositional process is impact deposition under high winds and with a high initial density. The defining post-depositional processes are a) increased moisture transport due to forced ventilation and high winds and b) decades of temperature-gradient driven metamorphic growth in the near surface due to prolonged exposure to seasonal temperature cycling. Both post-depositional processes are enhanced in low accumulation regions where snow stays close to the surface for a long time. We observe an irregular signal in δD and δ18O that does not follow the stratigraphic sequence. The isotopic signal is likely caused by the same post-depositional processes that are responsible for the firn structure, and that are driven by local climate.

  8. Thermoluminescence survey of 12 meteorites collected by the European 1988 Antarctic meteorite expedition to Allan Hills and the importance of acid washing for thermoluminescence sensitivity measurements

    SciTech Connect

    Benoit, P.H.; Sears, H.; Sears, D.W.G.

    1991-06-01

    Natural and induced thermoluminescence (TL) data are reported for 12 meteorites recovered from the Allan Hills region of Antarctica by the European field party during the 1988/1989 field season. The samples include one with extremely high natural TL, ALH88035, suggestive of exposure to unusually high radiation doses (i.e., low degrees of shielding), and one, ALH88034, whose low natural TL suggests reheating within the last 100,000 years. The remainder have natural TL values suggestive of terrestrial ages similar to those of other meteorites from Allan Hills. ALH88015 (L6) has induced TL data suggestive of intense shock. TL sensitivities of these meteorites are generally lower than observed falls of their petrologic types, as is also observed for Antarctic meteorites in general. Acid-washing experiments indicate that this is solely the result of terrestrial weathering rather than a nonterrestrial Antarctic-non-Antarctic difference. However, other TL parameters, such as natural TL and induced peak temperature-width, are unchanged by acid washing and are sensitive indicators of a meteorite's metamorphic and recent radiation history. 16 refs.

  9. Thermoluminescence survey of 12 meteorites collected by the European 1988 Antarctic meteorite expedition to Allan Hills and the importance of acid washing for thermoluminescence sensitivity measurements

    NASA Technical Reports Server (NTRS)

    Benoit, P. H.; Sears, H.; Sears, D. W. G.

    1991-01-01

    Natural and induced thermoluminescence (TL) data are reported for 12 meteorites recovered from the Allan Hills region of Antarctica by the European field party during the 1988/1989 field season. The samples include one with extremely high natural TL, ALH88035, suggestive of exposure to unusually high radiation doses (i.e., low degrees of shielding), and one, ALH88034, whose low natural TL suggests reheating within the last 100,000 years. The remainder have natural TL values suggestive of terrestrial ages similar to those of other meteorites from Allan Hills. ALH88015 (L6) has induced TL data suggestive of intense shock. TL sensitivities of these meteorites are generally lower than observed falls of their petrologic types, as is also observed for Antarctic meteorites in general. Acid-washing experiments indicate that this is solely the result of terrestrial weathering rather than a nonterrestrial Antarctic-non-Antarctic difference. However, other TL parameters, such as natural TL and induced peak temperature-width, are unchanged by acid washing and are sensitive indicators of a meteorite's metamorphic and recent radiation history.

  10. Thermoluminescence survey of 12 meteorites collected by the European 1988 Antarctic meteorite expedition to Allan Hills and the importance of acid washing for thermoluminescence sensitivity measurements

    NASA Astrophysics Data System (ADS)

    Benoit, P. H.; Sears, H.; Sears, D. W. G.

    1991-06-01

    Natural and induced thermoluminescence (TL) data are reported for 12 meteorites recovered from the Allan Hills region of Antarctica by the European field party during the 1988/1989 field season. The samples include one with extremely high natural TL, ALH88035, suggestive of exposure to unusually high radiation doses (i.e., low degrees of shielding), and one, ALH88034, whose low natural TL suggests reheating within the last 100,000 years. The remainder have natural TL values suggestive of terrestrial ages similar to those of other meteorites from Allan Hills. ALH88015 (L6) has induced TL data suggestive of intense shock. TL sensitivities of these meteorites are generally lower than observed falls of their petrologic types, as is also observed for Antarctic meteorites in general. Acid-washing experiments indicate that this is solely the result of terrestrial weathering rather than a nonterrestrial Antarctic-non-Antarctic difference. However, other TL parameters, such as natural TL and induced peak temperature-width, are unchanged by acid washing and are sensitive indicators of a meteorite's metamorphic and recent radiation history.

  11. 7 CFR 718.105 - Tolerances, variances, and adjustments.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 7 2014-01-01 2014-01-01 false Tolerances, variances, and adjustments. 718.105 Section 718.105 Agriculture Regulations of the Department of Agriculture (Continued) FARM SERVICE AGENCY... APPLICABLE TO MULTIPLE PROGRAMS Determination of Acreage and Compliance § 718.105 Tolerances, variances,...

  12. 7 CFR 718.105 - Tolerances, variances, and adjustments.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 7 2012-01-01 2012-01-01 false Tolerances, variances, and adjustments. 718.105 Section 718.105 Agriculture Regulations of the Department of Agriculture (Continued) FARM SERVICE AGENCY... APPLICABLE TO MULTIPLE PROGRAMS Determination of Acreage and Compliance § 718.105 Tolerances, variances,...

  13. 36 CFR 27.4 - Variances and exceptions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 36 Parks, Forests, and Public Property 1 2013-07-01 2013-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...

  14. 36 CFR 27.4 - Variances and exceptions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 36 Parks, Forests, and Public Property 1 2014-07-01 2014-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...

  15. 36 CFR 27.4 - Variances and exceptions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 36 Parks, Forests, and Public Property 1 2011-07-01 2011-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...

  16. 36 CFR 27.4 - Variances and exceptions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 36 Parks, Forests, and Public Property 1 2012-07-01 2012-07-01 false Variances and exceptions. 27.4 Section 27.4 Parks, Forests, and Public Property NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR CAPE COD NATIONAL SEASHORE; ZONING STANDARDS § 27.4 Variances and exceptions. (a) Zoning bylaws...

  17. Variances and Covariances of Kendall's Tau and Their Estimation.

    ERIC Educational Resources Information Center

    Cliff, Norman; Charlin, Ventura

    1991-01-01

    Variance formulas of H. E. Daniels and M. G. Kendall (1947) are generalized to allow for the presence of ties and variance of the sample tau correlation. Applications of these generalized formulas are discussed and illustrated using data from a 1965 study of contraceptive use in 15 developing countries. (SLD)

  18. Characterizing the evolution of genetic variance using genetic covariance tensors.

    PubMed

    Hine, Emma; Chenoweth, Stephen F; Rundle, Howard D; Blows, Mark W

    2009-06-12

    Determining how genetic variance changes under selection in natural populations has proved to be a very resilient problem in evolutionary genetics. In the same way that understanding the availability of genetic variance within populations requires the simultaneous consideration of genetic variance in sets of functionally related traits, determining how genetic variance changes under selection in natural populations will require ascertaining how genetic variance-covariance (G) matrices evolve. Here, we develop a geometric framework using higher-order tensors, which enables the empirical characterization of how G matrices have diverged among populations. We then show how divergence among populations in genetic covariance structure can then be associated with divergence in selection acting on those traits using key equations from evolutionary theory. Using estimates of G matrices of eight male sexually selected traits from nine geographical populations of Drosophila serrata, we show that much of the divergence in genetic variance occurred in a single trait combination, a conclusion that could not have been reached by examining variation among the individual elements of the nine G matrices. Divergence in G was primarily in the direction of the major axes of genetic variance within populations, suggesting that genetic drift may be a major cause of divergence in genetic variance among these populations. PMID:19414471
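The core idea of locating divergence in genetic variance along a single trait combination can be illustrated, in a much simplified form, by eigen-decomposing the difference between two genetic covariance (G) matrices. This toy sketch uses hypothetical 3-trait matrices, not the Drosophila serrata estimates, and captures only the spirit of the full tensor machinery:

```python
import numpy as np

# Two hypothetical 3-trait genetic covariance (G) matrices from
# different populations (toy numbers, not the paper's estimates).
G1 = np.array([[1.0, 0.6, 0.2],
               [0.6, 0.8, 0.1],
               [0.2, 0.1, 0.5]])
G2 = np.array([[0.4, 0.2, 0.1],
               [0.2, 0.7, 0.1],
               [0.1, 0.1, 0.5]])

# The leading eigenvector of the difference matrix identifies the single
# trait combination along which genetic variance differs most between
# the two populations.
D = G1 - G2
eigvals, eigvecs = np.linalg.eigh(D)
k = np.argmax(np.abs(eigvals))
print("max-divergence axis:", eigvecs[:, k], "change in variance:", eigvals[k])
```

Examining individual matrix elements, by contrast, would scatter this divergence across several entries and obscure that it lies mostly along one direction.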

  19. 40 CFR 52.1390 - Missoula variance provision.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 4 2014-07-01 2014-07-01 false Missoula variance provision. 52.1390 Section 52.1390 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) APPROVAL AND PROMULGATION OF IMPLEMENTATION PLANS (CONTINUED) Montana § 52.1390 Missoula variance provision. The Missoula City-County...

  20. 29 CFR 1904.38 - Variances from the recordkeeping rule.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... process your variance petition. (i) The Assistant Secretary will offer your employees and their authorized... the facts or conduct that may warrant revocation of your variance; and (ii) Provide you, your employees, and authorized employee representatives with an opportunity to participate in the...

  1. Productive Failure in Learning the Concept of Variance

    ERIC Educational Resources Information Center

    Kapur, Manu

    2012-01-01

    In a study with ninth-grade mathematics students on learning the concept of variance, students experienced either direct instruction (DI) or productive failure (PF), wherein they were first asked to generate a quantitative index for variance without any guidance before receiving DI on the concept. Whereas DI students relied only on the canonical…

  2. 10 CFR 52.93 - Exemptions and variances.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... CFR 52.7, and that the special circumstances outweigh any decrease in safety that may result from the... 10 Energy 2 2010-01-01 2010-01-01 false Exemptions and variances. 52.93 Section 52.93 Energy... Combined Licenses § 52.93 Exemptions and variances. (a) Applicants for a combined license under...

  3. A Study of Variance Estimation Methods. Working Paper Series.

    ERIC Educational Resources Information Center

    Zhang, Fan; Weng, Stanley; Salvucci, Sameena; Hu, Ming-xiu

    This working paper contains reports of five studies of variance estimation methods. The first, An Empirical Study of Poststratified Estimator, by Fan Zhang uses data from the National Household Education Survey to illustrate use of poststratified estimation. The second paper, BRR Variance Estimation Using BPLX Hadamard Procedure, by Stanley Weng…

  4. 21 CFR 821.2 - Exemptions and variances.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Exemptions and variances. 821.2 Section 821.2 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES MEDICAL DEVICE TRACKING REQUIREMENTS General Provisions § 821.2 Exemptions and variances. (a) A manufacturer, importer, or distributor...

  5. 40 CFR 142.40 - Requirements for a variance.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Section 142.40 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the Administrator... one or more variances to any public water system within a State that does not have primary...

  6. 40 CFR 142.43 - Disposition of a variance request.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ....43 Section 142.43 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the... variance may be terminated at any time upon a finding that the nature of the raw water source is such...

  7. 40 CFR 142.43 - Disposition of a variance request.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ....43 Section 142.43 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the... variance may be terminated at any time upon a finding that the nature of the raw water source is such...

  8. 40 CFR 142.43 - Disposition of a variance request.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ....43 Section 142.43 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the... variance may be terminated at any time upon a finding that the nature of the raw water source is such...

  9. 40 CFR 142.43 - Disposition of a variance request.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ....43 Section 142.43 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS IMPLEMENTATION Variances Issued by the... variance may be terminated at any time upon a finding that the nature of the raw water source is such...

  10. Budgeting and controllable cost variances. The case of multiple diagnoses, multiple services, and multiple resources.

    PubMed

    Broyles, R W; Lay, C M

    1982-12-01

    This paper examines an unfavorable cost variance in an institution that employs multiple resources to provide stay-specific and ancillary services to patients presenting multiple diagnoses. It partitions the difference between actual and expected costs into components that are the responsibility of an identifiable individual or group of individuals. The analysis demonstrates that the components comprising an unfavorable cost variance are attributable to factor prices, the use of real resources, the mix of patients, and the composition of care provided by the institution. In addition, the interactive effects of these factors are also identified. PMID:7183731
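The simplest building block of such a partition is the textbook split of a single resource's cost variance into price, quantity, and joint (interaction) components. The sketch below shows only that single-resource split, with illustrative names; the paper's full model additionally partitions by patient mix and composition of care:

```python
def partition_cost_variance(p_actual, q_actual, p_std, q_std):
    """Split (actual cost - budgeted cost) for one resource into
    price, quantity, and joint (interaction) components."""
    price = (p_actual - p_std) * q_std          # paying more per unit
    quantity = (q_actual - q_std) * p_std       # using more units
    joint = (p_actual - p_std) * (q_actual - q_std)  # interaction of both
    return {"price": price, "quantity": quantity, "joint": joint,
            "total": p_actual * q_actual - p_std * q_std}
```

By construction the three components sum exactly to the total variance, so each dollar of deviation is assigned to one responsible factor.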

  11. Continuous-Time Mean-Variance Portfolio Selection with Random Horizon

    SciTech Connect

    Yu, Zhiyong

    2013-12-15

    This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right.
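For contrast with the random-horizon setting, the classical single-period baseline with deterministic parameters has a simple closed form. The sketch below computes the global minimum-variance portfolio w ∝ Σ⁻¹1 for hypothetical inputs (the numbers are illustrative, not from the paper):

```python
import numpy as np

# Single-period Markowitz sketch with deterministic parameters -- the
# classical baseline that the random-horizon problem generalises.
mu = np.array([0.08, 0.05, 0.03])      # hypothetical expected returns
Sigma = np.array([[0.10, 0.02, 0.01],  # hypothetical return covariance
                  [0.02, 0.06, 0.01],
                  [0.01, 0.01, 0.04]])

ones = np.ones(len(mu))
Sinv = np.linalg.inv(Sigma)

# Global minimum-variance portfolio: w proportional to Sigma^-1 * 1,
# normalised so the weights sum to 1.
w_min = Sinv @ ones / (ones @ Sinv @ ones)
print("min-variance weights:", w_min, "variance:", w_min @ Sigma @ w_min)
```

In the paper's setting both the market parameters and the exit time are random, so the efficient portfolios instead come from solving backward stochastic differential equations rather than a one-shot matrix inversion.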

  12. Smoothed Temporal Variance Spectrum: weak line profile variations and NRP diagnostics

    NASA Astrophysics Data System (ADS)

    Kholtygin, A. F.; Sudnik, N. P.

    2016-05-01

    We describe a version of the Temporal Variance Spectrum (TVS; Fullerton, Gies & Bolton) method with a pre-smoothed line profile (smoothed Temporal Variance Spectrum, smTVS). This method, introduced by Kholtygin et al., can be used to detect ultra-weak variations of the line profile even in very noisy stellar spectra. We also describe how to estimate the mode of non-radial pulsations (NRP) using the TVS and smTVS with different time spans. The influence of rotational modulation of the line profile on the TVS is considered, and the contributions of NRP and rotational modulation to the global TVS are analysed.
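In its unweighted form, the TVS is simply the pixel-by-pixel variance of N spectra of the same line, and the smTVS smooths each profile before taking that variance. A rough sketch (not the authors' code; the boxcar kernel and function names are illustrative):

```python
import numpy as np

def tvs(spectra):
    """Temporal Variance Spectrum: pixel-by-pixel variance of N spectra
    (rows = observations, columns = wavelength bins), in the unweighted
    spirit of Fullerton, Gies & Bolton."""
    return np.var(spectra, axis=0, ddof=1)

def smoothed_tvs(spectra, width):
    """smTVS: boxcar-smooth each line profile before computing the TVS,
    suppressing pixel noise so ultra-weak variability stands out."""
    kernel = np.ones(width) / width
    smoothed = np.array([np.convolve(s, kernel, mode="same") for s in spectra])
    return tvs(smoothed)
```

Pre-smoothing trades spectral resolution for noise suppression, which is why the smTVS can reveal variability that the plain TVS buries in photon noise.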

  13. Variance in the chemical composition of dry beans determined from UV spectral fingerprints

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Nine varieties of dry beans representing 5 market classes were grown in 3 states (Maryland, Michigan, and Nebraska) and sub-samples were collected for each variety (row composites from each plot). Aqueous methanol extracts were analyzed in triplicate by UV spectrophotometry. Analysis of variance-p...

  14. Empirical data and the variance-covariance matrix for the 1969 Smithsonian Standard Earth (2)

    NASA Technical Reports Server (NTRS)

    Gaposchkin, E. M.

    1972-01-01

    The empirical data used in the 1969 Smithsonian Standard Earth (2) are presented. The variance-covariance matrix, or the normal equations, used for correlation analysis, are considered. The format and contents of the matrix, available on magnetic tape, are described and a sample printout is given.

  15. A Nonparametric Test for Homogeneity of Variances: Application to GPAs of Students across Academic Majors

    ERIC Educational Resources Information Center

    Bakir, Saad T.

    2010-01-01

    We propose a nonparametric (or distribution-free) procedure for testing the equality of several population variances (or scale parameters). The proposed test is a modification of Bakir's (1989, Commun. Statist., Simul-Comp., 18, 757-775) analysis of means by ranks (ANOMR) procedure for testing the equality of several population means. A proof is…

  16. The Operating Characteristics of the Nonparametric Levene Test for Equal Variances with Assessment and Evaluation Data

    ERIC Educational Resources Information Center

    Nordstokke, David W.; Zumbo, Bruno D.; Cairns, Sharon L.; Saklofske, Donald H.

    2011-01-01

    Many assessment and evaluation studies use statistical hypothesis tests, such as the independent samples t test or analysis of variance, to test the equality of two or more means for gender, age groups, cultures or language group comparisons. In addition, some, but far fewer, studies compare variability across these same groups or research…

  17. Variances of the components and magnitude of the polar heliospheric magnetic field

    NASA Technical Reports Server (NTRS)

    Balogh, A.; Horbury, T. S.; Forsyth, R. J.; Smith, E. J.

    1995-01-01

    The heliolatitude dependences of the variances in the components and the magnitude of the heliospheric magnetic field have been analysed, using the Ulysses magnetic field observations from close to the ecliptic plane to 80° southern solar latitude. The normalized variances in the components of the field increased significantly (by a factor of about 5) as Ulysses entered the purely polar flows from the southern coronal hole. At the same time, there was at most a small increase in the variance of the field magnitude. The analysis of the different components indicates that the power in the fluctuations is not isotropically distributed: most of the power is in the components of the field transverse to the radial direction. Examining the variances calculated over different time scales from minutes to hours shows that the anisotropy of the field variances is different on different scales, indicating the influence of the two distinct populations of fluctuations in the polar solar wind which have been previously identified. We discuss these results in terms of evolutionary, dynamic processes as a function of heliocentric distance and as a function of the large scale geometry of the magnetic field associated with the polar coronal hole.

  18. An efficient method to evaluate energy variances for extrapolation methods

    NASA Astrophysics Data System (ADS)

    Puddu, G.

    2012-08-01

    The energy variance extrapolation method consists of relating the approximate energies in many-body calculations to the corresponding energy variances and inferring eigenvalues by extrapolating to zero variance. The method needs a fast evaluation of the energy variances. For many-body methods that expand the nuclear wavefunctions in terms of deformed Slater determinants, the best available method for the evaluation of energy variances scales with the sixth power of the number of single-particle states. We propose a new method which depends on the number of single-particle orbits and the number of particles rather than the number of single-particle states. We discuss as an example the case of 4He using the chiral N3LO interaction in a basis consisting of up to 184 single-particle states.
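The extrapolation idea itself can be shown on a toy two-level Hamiltonian (illustrative numbers, nothing from the paper): for a trial state with a small excited-state admixture, the energy is nearly linear in the energy variance ⟨H²⟩ - ⟨H⟩², so a linear fit extrapolated to zero variance recovers the exact eigenvalue:

```python
import numpy as np

# Toy two-level Hamiltonian (not from the paper).
H = np.array([[-2.0, 0.5],
              [0.5, 1.0]])
w, V = np.linalg.eigh(H)                  # eigenvalues ascending
E0, v0, v1 = w[0], V[:, 0], V[:, 1]

energies, variances = [], []
for theta in (0.05, 0.10, 0.15):          # small excited-state admixtures
    v = np.cos(theta) * v0 + np.sin(theta) * v1
    e = v @ H @ v                         # <H>
    var = v @ H @ (H @ v) - e ** 2        # <H^2> - <H>^2
    energies.append(e)
    variances.append(var)

# Linear fit E = slope*var + intercept; the intercept is the
# zero-variance estimate of the ground-state energy.
slope, intercept = np.polyfit(variances, energies, 1)
print("zero-variance extrapolation:", intercept, "exact E0:", E0)
```

The expensive step in realistic calculations is evaluating ⟨H²⟩ for many-body states, which is exactly the cost the paper's method reduces.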

  19. Variance Analysis of Immunoglobulin Alleles in Natural Populations of Rabbit (Oryctolagus Cuniculus): The Extensive Interallelic Divergence at the B Locus Could Be the Outcome of Overdominance-Type Selection

    PubMed Central

    van-der-Loo, W.

    1993-01-01

    Population genetic data are presented which should contribute to evaluation of the hypothesis that the extraordinary evolutionary patterns observed at the b locus of the rabbit immunoglobulin light chain constant region can be the outcome of overdominance-type selection. The analysis of allele correlations in natural populations revealed an excess of heterozygotes of about 10% at the b locus while heterozygote excess was not observed at loci determining the immunoglobulin heavy chain. Data from the published literature, where homozygote advantage was suggested, were reevaluated and found in agreement with data here presented. Gene diversity was evenly distributed among populations and showed similarities with patterns reported for histocompatibility loci. Analysis of genotypic disequilibria revealed strong digenic associations between the leading alleles of heavy and light chain constant region loci in conjunction with trigenic disequilibria corresponding to a preferential association of b locus heterozygosity with the predominant allele of the heavy chain e locus. It is argued that this may indicate compensatory or nonadditive aspects of a putative heterozygosity enhancing mechanism, implying that effects at the light chain might be more pronounced in populations fixed for the heavy chain polymorphism. PMID:8224818

  20. Utility functions predict variance and skewness risk preferences in monkeys

    PubMed Central

    Genest, Wilfried; Stauffer, William R.; Schultz, Wolfram

    2016-01-01

    Utility is the fundamental variable thought to underlie economic choices. In particular, utility functions are believed to reflect preferences toward risk, a key decision variable in many real-life situations. To assess the validity of utility representations, it is therefore important to examine risk preferences. In turn, this approach requires formal definitions of risk. A standard approach is to focus on the variance of reward distributions (variance-risk). In this study, we also examined a form of risk related to the skewness of reward distributions (skewness-risk). Thus, we tested the extent to which empirically derived utility functions predicted preferences for variance-risk and skewness-risk in macaques. The expected utilities calculated for various symmetrical and skewed gambles served to define formally the direction of stochastic dominance between gambles. In direct choices, the animals’ preferences followed both second-order (variance) and third-order (skewness) stochastic dominance. Specifically, for gambles with different variance but identical expected values (EVs), the monkeys preferred high-variance gambles at low EVs and low-variance gambles at high EVs; in gambles with different skewness but identical EVs and variances, the animals preferred positively over symmetrical and negatively skewed gambles in a strongly transitive fashion. Thus, the utility functions predicted the animals’ preferences for variance-risk and skewness-risk. Using these well-defined forms of risk, this study shows that monkeys’ choices conform to the internal reward valuations suggested by their utility functions. This result implies a representation of utility in monkeys that accounts for both variance-risk and skewness-risk preferences. PMID:27402743
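The moments that define variance-risk and skewness-risk for a discrete gamble, and the expected utility that predicts preference between gambles, are straightforward to compute. A minimal sketch with a toy concave (risk-averse) utility, not the utility functions estimated from the monkeys:

```python
import numpy as np

def moments(outcomes, probs):
    """Expected value, variance, and third central moment (skewness
    numerator) of a discrete gamble."""
    outcomes = np.asarray(outcomes, float)
    probs = np.asarray(probs, float)
    ev = probs @ outcomes
    var = probs @ (outcomes - ev) ** 2
    skew = probs @ (outcomes - ev) ** 3
    return ev, var, skew

def expected_utility(outcomes, probs, u=np.sqrt):
    """Expected utility under a toy concave utility (u = sqrt)."""
    return np.asarray(probs, float) @ u(np.asarray(outcomes, float))
```

For two gambles with identical expected value, a concave utility assigns higher expected utility to the lower-variance one, which is the sense in which a utility function predicts variance-risk preference.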

  1. Variance After-Effects Distort Risk Perception in Humans.

    PubMed

    Payzan-LeNestour, Elise; Balleine, Bernard W; Berrada, Tony; Pearson, Joel

    2016-06-01

    In many contexts, decision-making requires an accurate representation of outcome variance, otherwise known as "risk" in economics. Conventional economic theory assumes this representation to be perfect, thereby focusing on risk preferences rather than risk perception per se [1-3] (but see [4]). However, humans often misrepresent their physical environment. Perhaps the most striking of such misrepresentations are the many well-known sensory after-effects, which most commonly involve visual properties, such as color, contrast, size, and motion. For example, viewing downward motion of a waterfall induces the anomalous biased experience of upward motion during subsequent viewing of static rocks to the side [5]. Given that after-effects are pervasive, occurring across a wide range of time horizons [6] and stimulus dimensions (including properties such as face perception [7, 8], gender [9], and numerosity [10]), and that some evidence exists that neurons show adaptation to variance in the sole visual feature of motion [11], we were interested in assessing whether after-effects distort variance perception in humans. We found that perceived variance is decreased after prolonged exposure to high variance and increased after exposure to low variance within a number of different visual representations of variance. We demonstrate that these after-effects occur across very different visual representations of variance, suggesting that these effects are not sensory, but operate at a high (cognitive) level of information processing. These results suggest, therefore, that variance constitutes an independent cognitive property and that prolonged exposure to extreme variance distorts risk perception, a fundamental challenge for economic theory and practice. PMID:27161500

  2. Analysis of a magnetically trapped atom clock

    SciTech Connect

    Kadio, D.; Band, Y. B.

    2006-11-15

    We consider optimization of a rubidium atom clock that uses magnetically trapped Bose condensed atoms in a highly elongated trap, and determine the optimal conditions for minimum Allan variance of the clock using microwave Ramsey fringe spectroscopy. Elimination of magnetic field shifts and collisional shifts are considered. The effects of spin-dipolar relaxation are addressed in the optimization of the clock. We find that for the interstate interaction strength equal to or larger than the intrastate interaction strengths, a modulational instability results in phase separation and symmetry breaking of the two-component condensate composed of the ground and excited hyperfine clock levels, and this mechanism limits the clock accuracy.
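Since the figure of merit here is the clock's Allan variance, it is worth recalling the standard estimator from fractional-frequency data. This is the textbook non-overlapping form, σ²_y(τ) = ½⟨(ȳ_{k+1} - ȳ_k)²⟩, not the paper's specific analysis:

```python
import numpy as np

def allan_variance(y, m=1):
    """Non-overlapping Allan variance of fractional-frequency samples y
    at averaging factor m (tau = m * tau0)."""
    y = np.asarray(y, float)
    n = len(y) // m
    # Average the data into n consecutive bins of m samples each.
    ybar = y[:n * m].reshape(n, m).mean(axis=1)
    d = np.diff(ybar)
    # Half the mean-squared difference of adjacent bin averages.
    return 0.5 * np.mean(d * d)
```

For white frequency noise the Allan variance falls off as 1/τ, which is why plotting it against averaging time reveals the noise type limiting a clock's stability.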

  3. Evaluation of Meteorite Amino Acid Analysis Data Using Multivariate Techniques

    NASA Technical Reports Server (NTRS)

    McDonald, G.; Storrie-Lombardi, M.; Nealson, K.

    1999-01-01

    The amino acid distributions in the Murchison carbonaceous chondrite, Mars meteorite ALH84001, and ice from the Allan Hills region of Antarctica are shown, using a multivariate technique known as Principal Component Analysis (PCA), to be statistically distinct from the average amino acid composition of 101 terrestrial protein superfamilies.

  4. 40 CFR 260.33 - Procedures for variances from classification as a solid waste, for variances to be classified as...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...

  5. 40 CFR 260.33 - Procedures for variances from classification as a solid waste, for variances to be classified as...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...

  6. 40 CFR 260.33 - Procedures for variances from classification as a solid waste, for variances to be classified as...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...

  7. 40 CFR 260.33 - Procedures for variances from classification as a solid waste, for variances to be classified as...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...

  8. 40 CFR 260.33 - Procedures for variances from classification as a solid waste, for variances to be classified as...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... classification as a solid waste, for variances to be classified as a boiler, or for non-waste determinations. 260... from classification as a solid waste, for variances to be classified as a boiler, or for non-waste... as boilers, or applications for non-waste determinations. (a) The applicant must apply to...

  9. On-orbit frequency stability analysis of the GPS NAVSTAR-1 quartz clock and the NAVSTARs-6 and -8 rubidium clocks

    NASA Technical Reports Server (NTRS)

    Mccaskill, T. B.; Buisson, J. A.; Reid, W. G.

    1984-01-01

    An on-orbit frequency stability performance analysis of the GPS NAVSTAR-1 quartz clock and the NAVSTARs-6 and -8 rubidium clocks is presented. The clock offsets were obtained from measurements taken at the GPS monitor stations, which use high-performance cesium standards as a reference. Clock performance is characterized through the use of the Allan variance, which is evaluated for sample times of 15 minutes to two hours, and from one day to 10 days. The quartz and rubidium clocks' offsets were corrected for aging rate before computing the frequency stability. The effect of small errors in aging rate is presented for the NAVSTAR-8 rubidium clock's stability analysis. The analysis includes presentation of time and frequency residuals with respect to linear and quadratic models, which aid in obtaining aging rate values and identifying systematic and random effects. The frequency stability values were further processed with a time domain noise process analysis, which is used to classify the random noise process and modulation type.
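The Allan variance used throughout these stability analyses has a standard overlapping estimator; a minimal sketch from time-offset (phase) data, with a simulated white-FM clock as an illustrative input (not the NAVSTAR data):

```python
import numpy as np

def allan_variance(x, tau0, m):
    """Overlapping Allan variance from phase data x (seconds),
    sampled every tau0 seconds, at averaging factor m."""
    x = np.asarray(x, dtype=float)
    tau = m * tau0
    # second differences of the phase at stride m
    d2 = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]
    return np.sum(d2 ** 2) / (2.0 * d2.size * tau ** 2)

# white-FM test signal: the Allan variance should fall off as 1/tau
rng = np.random.default_rng(0)
y = rng.normal(size=100_000)       # fractional frequency (white FM)
x = np.cumsum(y)                   # phase, with tau0 = 1 s
print(allan_variance(x, 1.0, 1))   # close to 1.0
print(allan_variance(x, 1.0, 10))  # close to 0.1
```

Plotting this over a range of m on log-log axes gives the usual sigma-tau diagram, from which the noise type and modulation are classified by slope.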

  10. Modeling variance structure of body shape traits of Lipizzan horses.

    PubMed

    Kaps, M; Curik, I; Baban, M

    2010-09-01

    Heterogeneity of variance of growth traits over age is a common issue in estimating genetic parameters and is addressed in this study by selecting appropriate variance structure models for additive genetic and environmental variances. Modeling and partitioning those variances connected with analyzing small data sets were demonstrated on Lipizzan horses. The following traits were analyzed: withers height, chest girth, and cannon bone circumference. The measurements were taken at birth, and at approximately 6, 12, 24, and 36 mo of age of 660 Lipizzan horses born in Croatia between 1948 and 2000. The corresponding pedigree file consisted of 1,458 horses. Sex, age of dam, and stud-year-season interaction were considered fixed effects; additive genetic and permanent environment effects were defined as random. Linear adjustments of age at measuring were done within measuring groups. Maternal effects were included only for measurements taken at birth and at 6 mo. Additive genetic variance structures were modeled by using uniform structures or structures based on polynomial random regression. Environmental variance structures were modeled by using one of the following models: unstructured, exponential, Gaussian, or combinations of identity or diagonal with structures based on polynomial random regression. The parameters were estimated by using REML. Comparison and fits of the models were assessed by using Akaike and Bayesian information criteria, and by checking graphically the adequacy of the shape of the overall (phenotypic) and component (additive genetic and environmental) variance functions. The best overall fit was obtained from models with unstructured error variance. Compared with the model with uniform additive genetic variance, models with structures based on random regression only slightly improved overall fit. Exponential and Gaussian models were generally not suitable because they do not accommodate adequately heterogeneity of variance. 
Using the unstructured

  11. Election 84: Search for a New Coalition. Proceedings of the Allan Shivers Election Analysis Conference (Austin, Texas, November 17, 1984).

    ERIC Educational Resources Information Center

    Jeffrey, Robert C., Ed.

    This booklet contains the proceedings of a conference that focused on the psychological and fiscal impact of the electronic media in the 1984 election campaign. Comments are made by Robert Teeter, the principal pollster for the national Republican party, and Peter Hart, the principal pollster for the national Democratic party. Both describe their…

  12. A note on preliminary tests of equality of variances.

    PubMed

    Zimmerman, Donald W

    2004-05-01

    Preliminary tests of equality of variances used before a test of location are no longer widely recommended by statisticians, although they persist in some textbooks and software packages. The present study extends the findings of previous studies and provides further reasons for discontinuing the use of preliminary tests. The study found Type I error rates of a two-stage procedure, consisting of a preliminary Levene test on samples of different sizes with unequal variances, followed by either a Student pooled-variances t test or a Welch separate-variances t test. Simulations disclosed that the two-stage procedure fails to protect the significance level and usually makes the situation worse. Earlier studies have shown that preliminary tests often adversely affect the size of the test, and also that the Welch test is superior to the t test when variances are unequal. The present simulations reveal that changes in Type I error rates are greater when sample sizes are smaller, when the difference in variances is slight rather than extreme, and when the significance level is more stringent. Furthermore, the validity of the Welch test deteriorates if it is used only on those occasions where a preliminary test indicates it is needed. Optimum protection is assured by using a separate-variances test unconditionally whenever sample sizes are unequal. PMID:15171807
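The two-stage procedure under study (a preliminary Levene test choosing between the pooled and Welch t tests) can be sketched with SciPy; the sample sizes and standard deviations below are illustrative, not the paper's simulation settings:

```python
import numpy as np
from scipy import stats

def two_stage_t(a, b, alpha=0.05):
    """The two-stage procedure examined in the abstract: a preliminary
    Levene test decides between the pooled and Welch t tests."""
    _, p_lev = stats.levene(a, b)
    equal_var = p_lev >= alpha          # fail to reject equal variances
    return stats.ttest_ind(a, b, equal_var=equal_var)

# unequal n and unequal variances: the setting where the two-stage
# procedure misbehaves and unconditional Welch is the safe choice
rng = np.random.default_rng(1)
a = rng.normal(0, 1, 15)
b = rng.normal(0, 3, 60)
print(two_stage_t(a, b).pvalue)
print(stats.ttest_ind(a, b, equal_var=False).pvalue)  # unconditional Welch
```

The abstract's recommendation corresponds to skipping the preliminary test and calling `ttest_ind(..., equal_var=False)` unconditionally whenever sample sizes are unequal.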

  13. Penzias, Arno Allan (1933-)

    NASA Astrophysics Data System (ADS)

    Murdin, P.

    2000-11-01

    Radioscientist, born in Munich in Germany, Nobel prizewinner (1978) `for the discovery of cosmic microwave background radiation', a refugee from Germany at the age of 6, found his way to America and experience in microwave physics. Joined Bell Laboratories, Holmdel, New Jersey, searched for and investigated line emission from the interstellar OH molecule. Was able to gain the use of a large radio...

  14. Variance Estimation for Myocardial Blood Flow by Dynamic PET.

    PubMed

    Moody, Jonathan B; Murthy, Venkatesh L; Lee, Benjamin C; Corbett, James R; Ficaro, Edward P

    2015-11-01

    The estimation of myocardial blood flow (MBF) by (13)N-ammonia or (82)Rb dynamic PET typically relies on an empirically determined generalized Renkin-Crone equation to relate the kinetic parameter K1 to MBF. Because the Renkin-Crone equation defines MBF as an implicit function of K1, the MBF variance cannot be determined using standard error propagation techniques. To overcome this limitation, we derived novel analytical approximations that provide first- and second-order estimates of MBF variance in terms of the mean and variance of K1 and the Renkin-Crone parameters. The accuracy of the analytical expressions was validated by comparison with Monte Carlo simulations, and MBF variance was evaluated in clinical (82)Rb dynamic PET scans. For both (82)Rb and (13)N-ammonia, good agreement was observed between both (first- and second-order) analytical variance expressions and Monte Carlo simulations, with moderately better agreement for second-order estimates. The contribution of the Renkin-Crone relation to overall MBF uncertainty was found to be as high as 68% for (82)Rb and 35% for (13)N-ammonia. For clinical (82)Rb PET data, the conventional practice of neglecting the statistical uncertainty in the Renkin-Crone parameters resulted in underestimation of the coefficient of variation of global MBF and coronary flow reserve by 14-49%. Knowledge of MBF variance is essential for assessing the precision and reliability of MBF estimates. The form and statistical uncertainty in the empirical Renkin-Crone relation can make substantial contributions to the variance of MBF. The novel analytical variance expressions derived in this work enable direct estimation of MBF variance which includes this previously neglected contribution. PMID:25974932
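The first-order (delta-method) estimate discussed above can be illustrated for a generalized Renkin-Crone form K1 = MBF·(1 − a·exp(−b/MBF)); the parameter values a and b and the K1 statistics below are hypothetical, not clinically fitted tracer values, and this sketch deliberately treats a and b as exact, i.e., it omits the Renkin-Crone parameter uncertainty that the paper shows can dominate:

```python
import numpy as np

# Generalized Renkin-Crone form relating K1 to flow; a and b are
# illustrative placeholders, not clinically fitted tracer values.
a, b = 0.77, 0.63

def k1_of_mbf(mbf):
    return mbf * (1.0 - a * np.exp(-b / mbf))

def mbf_of_k1(k1, lo=1e-6, hi=20.0):
    # bisection: k1_of_mbf is monotone increasing on this range
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if k1_of_mbf(mid) < k1:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def mbf_variance_first_order(k1_mean, k1_var, eps=1e-5):
    """First-order (delta-method) variance of MBF from the mean and
    variance of K1: Var(MBF) ~ Var(K1) / (dK1/dMBF)^2, with the
    derivative taken at the inverted operating point."""
    mbf = mbf_of_k1(k1_mean)
    dk1_dmbf = (k1_of_mbf(mbf + eps) - k1_of_mbf(mbf - eps)) / (2.0 * eps)
    return k1_var / dk1_dmbf ** 2

print(mbf_variance_first_order(0.6, 0.01))
```

Because the slope dK1/dMBF flattens at high flow, the same K1 variance maps to a much larger MBF variance there, which is why the inversion step matters for precision.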

  15. Global Gravity Wave Variances from Aura MLS: Characteristics and Interpretation

    NASA Technical Reports Server (NTRS)

    Wu, Dong L.; Eckermann, Stephen D.

    2008-01-01

    The gravity wave (GW)-resolving capabilities of 118-GHz saturated thermal radiances acquired throughout the stratosphere by the Microwave Limb Sounder (MLS) on the Aura satellite are investigated and initial results presented. Because the saturated (optically thick) radiances resolve GW perturbations from a given altitude at different horizontal locations, variances are evaluated at 12 pressure altitudes between 21 and 51 km using the 40 saturated radiances found at the bottom of each limb scan. Forward modeling simulations show that these variances are controlled mostly by GWs with vertical wavelengths λz ≳ 5 km and horizontal along-track wavelengths λy ~ 100-200 km. The tilted cigar-shaped three-dimensional weighting functions yield highly selective responses to GWs of high intrinsic frequency that propagate toward the instrument. The latter property is used to infer the net meridional component of GW propagation by differencing the variances acquired from ascending (A) and descending (D) orbits. Because of improved vertical resolution and sensitivity, Aura MLS GW variances are 5-8 times larger than those from the Upper Atmosphere Research Satellite (UARS) MLS. Like UARS MLS variances, monthly-mean Aura MLS variances in January and July 2005 are enhanced when local background wind speeds are large, due largely to GW visibility effects. Zonal asymmetries in variance maps reveal enhanced GW activity at high latitudes due to forcing by flow over major mountain ranges and at tropical and subtropical latitudes due to enhanced deep convective generation as inferred from contemporaneous MLS cloud-ice data. At 21-28-km altitude (heights not measured by the UARS MLS), GW variance in the tropics is systematically enhanced and shows clear variations with the phase of the quasi-biennial oscillation, in general agreement with GW temperature variances derived from radiosonde, rocketsonde, and limb-scan vertical profiles.

  16. [Spatial variance characters of urban synthesis pattern indices at different scales].

    PubMed

    Yue, Wenze; Xu, Jianhua; Xu, Lihua; Tan, Wenqi; Mei, Anxin

    2005-11-01

    Scale holds the key to understanding pattern-process interactions and is one of the cornerstone concepts in landscape ecology. Geographic Information System and remote sensing techniques provide an effective tool to characterize spatial pattern and spatial heterogeneity at different scales. As an example, these techniques are applied to analyze the urban landscape diversity index, contagion index and fractal dimension on SPOT remote sensing images at four scales. This paper modeled the semivariogram of these three landscape indices at different scales, and the results indicated that the spatial variance characters of the diversity index, contagion index and fractal dimension were similar at different scales, all showing spatial dependence. Spatial dependence appeared at each scale: the smaller the scale, the stronger the spatial dependence. As the scale was reduced, more details of spatial variance were revealed. The contribution of spatial autocorrelation of these three indices to total spatial variance increased gradually, but when the scale was quite small, spatial variance analysis would destroy the interior structure of the landscape system. The semivariogram models of different landscape indices were very different at the same scale, indicating that these models were not comparable across scales. According to the above analyses, and based on the study of urban land use structure, a 1 km extent was the most suitable scale for studying the spatial variance of the urban landscape pattern in Shanghai. The spatial variance of landscape indices had the character of scale-dependence and was a function of scale. The results differed at the different scales chosen, and thus the influence of scale on pattern cannot be neglected in landscape ecology research. The changes of these three landscape indices displayed the regularity of urban spatial structure at different scales, i.e., they were complicated and showed no regularity at small scale, polycentric

  17. Measuring kinetics of complex single ion channel data using mean-variance histograms.

    PubMed Central

    Patlak, J B

    1993-01-01

    The measurement of single ion channel kinetics is difficult when those channels exhibit subconductance events. When the kinetics are fast, and when the current magnitudes are small, as is the case for Na+, Ca2+, and some K+ channels, these difficulties can lead to serious errors in the estimation of channel kinetics. I present here a method, based on the construction and analysis of mean-variance histograms, that can overcome these problems. A mean-variance histogram is constructed by calculating the mean current and the current variance within a brief "window" (a set of N consecutive data samples) superimposed on the digitized raw channel data. Systematic movement of this window over the data produces large numbers of mean-variance pairs which can be assembled into a two-dimensional histogram. Defined current levels (open, closed, or sublevel) appear in such plots as low variance regions. The total number of events in such low variance regions is estimated by curve fitting and plotted as a function of window width. This function decreases with the same time constants as the original dwell time probability distribution for each of the regions. The method can therefore be used: 1) to present a qualitative summary of the single channel data from which the signal-to-noise ratio, open channel noise, steadiness of the baseline, and number of conductance levels can be quickly determined; 2) to quantify the dwell time distribution in each of the levels exhibited. In this paper I present the analysis of a Na+ channel recording that had a number of complexities. The signal-to-noise ratio was only about 8 for the main open state, open channel noise, and fast flickers to other states were present, as were a substantial number of subconductance states. "Standard" half-amplitude threshold analysis of these data produce open and closed time histograms that were well fitted by the sum of two exponentials, but with apparently erroneous time constants, whereas the mean-variance
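The construction described above (a window of N consecutive samples slid over the digitized trace, yielding mean-variance pairs that are binned into a two-dimensional histogram) can be sketched as follows, using a toy two-level trace rather than real channel data:

```python
import numpy as np

def mean_variance_pairs(trace, window):
    """Slide a window of `window` samples over a digitized current trace
    and return the (mean, variance) pair at every position -- the raw
    material of a mean-variance histogram."""
    kernel = np.ones(window) / window
    mean = np.convolve(trace, kernel, mode="valid")
    mean_sq = np.convolve(trace ** 2, kernel, mode="valid")
    var = mean_sq - mean ** 2
    return mean, var

# toy two-level "channel": closed at 0 pA, open at -2 pA, plus noise
rng = np.random.default_rng(2)
trace = np.concatenate([np.zeros(500), -2.0 * np.ones(500)])
trace += rng.normal(0, 0.2, trace.size)

m, v = mean_variance_pairs(trace, 20)
hist, xedges, yedges = np.histogram2d(m, v, bins=50)
# defined current levels appear as low-variance clusters near 0 and -2 pA;
# repeating over a range of window widths gives the dwell-time decay
```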

  18. Some variance reduction methods for numerical stochastic homogenization.

    PubMed

    Blanc, X; Le Bris, C; Legoll, F

    2016-04-28

    We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. PMID:27002065
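As one concrete instance of the kind of technique surveyed (a generic illustration borrowed from elementary Monte Carlo, not necessarily a method from the paper), antithetic variates cut the empirical variance of a monotone quantity of interest:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000

# toy quantity of interest: a nonlinear functional of a uniform input
def qoi(u):                      # u ~ Uniform(0, 1)
    return 1.0 / (1.0 + u)       # monotone in u, so antithetics help

u = rng.random(n)
plain = qoi(u)                            # standard Monte Carlo samples
anti = 0.5 * (qoi(u) + qoi(1.0 - u))      # antithetic pairing

# both estimate the same mean, but the paired samples vary far less
print(plain.mean(), plain.var())
print(anti.mean(), anti.var())
```

The pairing works because qoi is monotone: large values on u are offset by small values on 1 − u, so each paired sample is closer to the mean than an independent one.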

  19. 40 CFR 142.42 - Consideration of a variance request.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... contaminant level required by the national primary drinking water regulations because of the nature of the raw... effectiveness of treatment methods for the contaminant for which the variance is requested. (2) Cost and...

  20. 40 CFR 141.4 - Variances and exemptions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... subparts H, P, S, T, W, and Y of this part. ... total coliforms and E. coli and variances from any of the treatment technique requirements of subpart H... Section 141.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER...

  1. Variance Based Measure for Optimization of Parametric Realignment Algorithms

    PubMed Central

    Mehring, Carsten

    2016-01-01

    Neuronal responses to sensory stimuli or neuronal responses related to behaviour are often extracted by averaging neuronal activity over a large number of experimental trials. Such trial-averaging is carried out to reduce noise and to diminish the influence of other signals unrelated to the corresponding stimulus or behaviour. However, if the recorded neuronal responses are jittered in time with respect to the corresponding stimulus or behaviour, averaging over trials may distort the estimation of the underlying neuronal response. Temporal jitter between single-trial neural responses can be partially or completely removed using realignment algorithms. Here, we present a measure, named difference of time-averaged variance (dTAV), which can be used to evaluate the performance of a realignment algorithm without knowing the internal triggers of neural responses. Using simulated data, we show that using dTAV to optimize the parameter values for an established parametric realignment algorithm improved its efficacy and, therefore, reduced the jitter of neuronal responses. By removing the jitter more effectively and, therefore, enabling more accurate estimation of neuronal responses, dTAV can improve analysis and interpretation of the neural responses. PMID:27159490

  2. A multicomb variance reduction scheme for Monte Carlo semiconductor simulators

    SciTech Connect

    Gray, M.G.; Booth, T.E.; Kwan, T.J.T.; Snell, C.M.

    1998-04-01

    The authors adapt a multicomb variance reduction technique used in neutral particle transport to Monte Carlo microelectronic device modeling. They implement the method in a two-dimensional (2-D) MOSFET device simulator and demonstrate its effectiveness in the study of hot electron effects. The simulations show that the statistical variance of hot electrons is significantly reduced with minimal computational cost. The method is efficient, versatile, and easy to implement in existing device simulators.

  3. Analytic variance estimates of Swank and Fano factors

    SciTech Connect

    Gutierrez, Benjamin; Badano, Aldo; Samuelson, Frank

    2014-07-15

    Purpose: Variance estimates for detector energy resolution metrics can be used as stopping criteria in Monte Carlo simulations for the purpose of ensuring a small uncertainty of those metrics and for the design of variance reduction techniques. Methods: The authors derive an estimate for the variance of two energy resolution metrics, the Swank factor and the Fano factor, in terms of statistical moments that can be accumulated without significant computational overhead. The authors examine the accuracy of these two estimators and demonstrate how the estimates of the coefficient of variation of the Swank and Fano factors behave with data from a Monte Carlo simulation of an indirect x-ray imaging detector. Results: The authors' analyses suggest that the accuracy of their variance estimators is appropriate for estimating the actual variances of the Swank and Fano factors for a variety of distributions of detector outputs. Conclusions: The variance estimators derived in this work provide a computationally convenient way to estimate the error or coefficient of variation of the Swank and Fano factors during Monte Carlo simulations of radiation imaging systems.
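Assuming the standard definitions (Swank factor I = M1²/(M0·M2) over moments of the pulse-height distribution, Fano factor = variance/mean), the inexpensive moment accumulation the authors exploit can be sketched as follows; the Poisson-distributed detector output is an illustrative stand-in:

```python
import numpy as np

def swank_and_fano(pulse_heights):
    """Swank and Fano factors from per-event detector outputs, using
    running moments of the pulse-height distribution (M0 normalized
    to 1, so I = M1^2 / M2)."""
    x = np.asarray(pulse_heights, dtype=float)
    m1 = x.mean()            # first moment
    m2 = (x ** 2).mean()     # second moment
    swank = m1 ** 2 / m2
    fano = (m2 - m1 ** 2) / m1   # variance / mean
    return swank, fano

# illustrative detector output: Poisson gain, so the Fano factor is ~1
rng = np.random.default_rng(5)
s, f = swank_and_fano(rng.poisson(100, 50_000))
print(s, f)
```

In a Monte Carlo simulation these moments can be updated event by event, so the stopping criterion described in the abstract adds essentially no computational overhead.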

  4. Variance estimation for systematic designs in spatial surveys.

    PubMed

    Fewster, R M

    2011-12-01

    In spatial surveys for estimating the density of objects in a survey region, systematic designs will generally yield lower variance than random designs. However, estimating the systematic variance is well known to be a difficult problem. Existing methods tend to overestimate the variance, so although the variance is genuinely reduced, it is over-reported, and the gain from the more efficient design is lost. The current approaches to estimating a systematic variance for spatial surveys are to approximate the systematic design by a random design, or approximate it by a stratified design. Previous work has shown that approximation by a random design can perform very poorly, while approximation by a stratified design is an improvement but can still be severely biased in some situations. We develop a new estimator based on modeling the encounter process over space. The new "striplet" estimator has negligible bias and excellent precision in a wide range of simulation scenarios, including strip-sampling, distance-sampling, and quadrat-sampling surveys, and including populations that are highly trended or have strong aggregation of objects. We apply the new estimator to survey data for the spotted hyena (Crocuta crocuta) in the Serengeti National Park, Tanzania, and find that the reported coefficient of variation for estimated density is 20% using approximation by a random design, 17% using approximation by a stratified design, and 11% using the new striplet estimator. This large reduction in reported variance is verified by simulation. PMID:21534940

  5. The Variance Normalization Method of Ridge Regression Analysis.

    ERIC Educational Resources Information Center

    Bulcock, J. W.; And Others

    The testing of contemporary sociological theory often calls for the application of structural-equation models to data which are inherently collinear. It is shown that simple ridge regression, which is commonly used for controlling the instability of ordinary least squares regression estimates in ill-conditioned data sets, is not a legitimate…

  6. Variance Analysis of Unevenly Spaced Time Series Data

    NASA Technical Reports Server (NTRS)

    Hackman, Christine; Parker, Thomas E.

    1996-01-01

    We have investigated the effect of uneven data spacing on the computation of σx(τ). Evenly spaced simulated data sets were generated for noise processes ranging from white phase modulation (PM) to random walk frequency modulation (FM). σx(τ) was then calculated for each noise type. Data were subsequently removed from each simulated data set using typical two-way satellite time and frequency transfer (TWSTFT) data patterns to create two unevenly spaced sets with average intervals of 2.8 and 3.6 days. σx(τ) was then calculated for each sparse data set using two different approaches. First, the missing data points were replaced by linear interpolation and σx(τ) calculated from this now-full data set. The second approach ignored the fact that the data were unevenly spaced and calculated σx(τ) as if the data were equally spaced with an average spacing of 2.8 or 3.6 days. Both approaches have advantages and disadvantages, and techniques are presented for correcting errors caused by uneven data spacing in typical TWSTFT data sets.
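The first approach, filling the missing points by linear interpolation so that standard equally spaced variance analysis applies, can be sketched as follows (the daily grid and random removal pattern are illustrative, not actual TWSTFT data):

```python
import numpy as np

rng = np.random.default_rng(4)
t_full = np.arange(0, 100.0)                        # nominal daily grid
x_full = np.cumsum(rng.normal(size=t_full.size))    # random-walk phase

# sparse, unevenly spaced observations, as left by typical gaps
keep = rng.random(t_full.size) < 0.35
t_obs, x_obs = t_full[keep], x_full[keep]

# back onto the even grid by linear interpolation
x_filled = np.interp(t_full, t_obs, x_obs)

# the price paid: interpolation error relative to the true dense series
print(np.mean((x_filled - x_full) ** 2))
```

The second approach in the abstract needs no resampling at all: the sparse series is simply treated as if it were evenly spaced at the average interval, trading interpolation bias for a distorted effective tau.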

  7. Gender Variance on Campus: A Critical Analysis of Transgender Voices

    ERIC Educational Resources Information Center

    Mintz, Lee M.

    2011-01-01

    Transgender college students face discrimination, harassment, and oppression on college and university campuses; consequently leading to limited academic and social success. Current literature is focused on describing the experiences of transgender students and the practical implications associated with attempting to meet their needs (Beemyn,…

  8. Balancing between sensitization and repression: the role of opium in the life and art of Edgar Allan Poe and Samuel Taylor Coleridge.

    PubMed

    Iszáj, Fruzsina; Demetrovics, Zsolt

    2011-01-01

    The creative process contains both conscious and unconscious work. Therefore, artists have to face their unconscious processes and work with emotional material that is difficult to keep under control in the course of artistic creation. Bringing these contents of consciousness to the surface needs special sensitivity and special control functions while working with them. Considering these mechanisms, psychoactive substance can serve a double function in the case of artists. On the one hand, chemical substances may enhance the artists' sensitivity. On the other hand, they can help moderate the hypersensitivity and repress extreme emotions and burdensome contents of consciousness. The authors posit how the use of opiates could have influenced the life and creative work of Edgar Allan Poe and Samuel Taylor Coleridge. PMID:21859403

  9. Thermodynamic and dynamic contributions to future changes in regional precipitation variance: focus on the Southeastern United States

    NASA Astrophysics Data System (ADS)

    Li, Laifang; Li, Wenhong

    2015-07-01

    The frequency and severity of extreme events are tightly associated with the variance of precipitation. As climate warms, the acceleration of the hydrological cycle is likely to enhance the variance of precipitation across the globe. However, due to the lack of an effective analysis method, the mechanisms responsible for the changes of precipitation variance are poorly understood, especially on regional scales. Our study fills this gap by formulating a variance partition algorithm, which explicitly quantifies the contributions of atmospheric thermodynamics (specific humidity) and dynamics (wind) to the changes in regional-scale precipitation variance. Taking Southeastern (SE) United States (US) summer precipitation as an example, the algorithm is applied to the simulations of current and future climate by phase 5 of the Coupled Model Intercomparison Project (CMIP5) models. The analysis suggests that compared to observations, most CMIP5 models (~60 %) tend to underestimate the summer precipitation variance over the SE US during 1950-1999, primarily due to errors in the modeled dynamic processes (i.e., large-scale circulation). Among the 18 CMIP5 models analyzed in this study, six reasonably simulate SE US summer precipitation variance in the twentieth century and the underlying physical processes; these models are thus applied for a mechanistic study of future changes in SE US summer precipitation variance. In the future, the six models collectively project an intensification of SE US summer precipitation variance, resulting from the combined effects of atmospheric thermodynamics and dynamics. Between them, the latter plays the more important role. Specifically, thermodynamics results in more frequent and intensified wet summers, but does not contribute to the projected increase in the frequency and intensity of dry summers. 
In contrast, atmospheric dynamics explains the projected enhancement in both wet and dry summers, indicating its importance in understanding

  10. Forecast Variance Estimates Using Dart Inversion

    NASA Astrophysics Data System (ADS)

    Gica, E.

    2014-12-01

    The tsunami forecast tool developed by the NOAA Center for Tsunami Research (NCTR) provides real-time tsunami forecasts and is composed of the following major components: a pre-computed tsunami propagation database, an inversion algorithm that utilizes real-time tsunami data recorded at DART stations to define the tsunami source, and inundation models that predict tsunami wave characteristics at specific coastal locations. The propagation database is a collection of basin-wide tsunami model runs generated from 50x100 km "unit sources" with a slip of 1 meter. Linear combination and scaling of unit sources is possible since the nonlinearity in the deep ocean is negligible. To define the tsunami source using the unit sources, real-time DART data is ingested into an inversion algorithm. Based on the selected DARTs and the length of the tsunami time series, the inversion algorithm will select the combination of unit sources and scaling factors that best fits the observed data at the selected locations. This combined source then serves as the boundary condition for the inundation models. Different combinations of DARTs and lengths of tsunami time series used in the inversion algorithm will result in different selections of unit sources and scaling factors. Since the combined unit sources are used as the boundary condition for inundation modeling, different sources will produce variations in the tsunami wave characteristics. As part of the testing procedures for the tsunami forecast tool, staff at NCTR and at both the National and Pacific Tsunami Warning Centers performed post-event forecasts for several historical tsunamis. The extent of variation due to different source definitions obtained from the testing is analyzed by comparing the simulated maximum tsunami wave amplitude with recorded data at tide gauge locations. Results of the analysis will provide an error estimate defining the possible range of the simulated maximum tsunami wave amplitude for each specific inundation model.

  11. Modern diet and metabolic variance – a recipe for disaster?

    PubMed Central

    2014-01-01

    Objective Recently, a positive correlation between alanine transaminase (ALT) activity and body mass was established among healthy young individuals of normal weight. Here we explore this relationship further and propose a physiological rationale for the link. Design Cross-sectional statistical analysis of adiposity across large samples of adults differing in age, diet and lifestyle. Subjects 46,684 Swiss male conscripts aged 19–20 years, plus published data on 1000 Eskimos, 518 Toronto residents and 97,000 North American Adventists. Measurements Serum ALT concentrations, post-prandial glucose levels, cholesterol, body height and weight, blood pressure and routine blood analysis (thrombocytes and leukocytes) for the Swiss conscripts; adiposity measures and dietary information for the other groups. Results Stepwise multiple regression, after correction for random errors of the physiological tests, showed that 28% of the total variance in body mass is associated with ALT concentrations. This relationship remained significant when only metabolically healthy (as defined by the American Heart Association) Swiss conscripts were selected. The data indicated that high-protein-only or high-carbohydrate-only diets are associated with lower levels of obesity than a diet combining proteins and carbohydrates. Conclusion Elevated levels of alanine transaminase, and likely of other transaminases, may result in overactivity of the alanine cycle that produces pyruvate from protein. When a mixed meal of protein, carbohydrate and fat is consumed, the carbohydrates and fats are digested faster and metabolised to satisfy the body's energetic needs, while the more slowly digested protein is ultimately converted to malonyl CoA and stored as fat. Chronic repetition of this sequence is proposed to cause accumulation of somatic fat stores and thus obesity. PMID:24502225

  12. Estimation of Model Error Variances During Data Assimilation

    NASA Technical Reports Server (NTRS)

    Dee, Dick

    2003-01-01

    Data assimilation is all about understanding the error characteristics of the data and models that are used in the assimilation process. Reliable error estimates are needed to implement observational quality control, bias correction of observations and model fields, and intelligent data selection. Meaningful covariance specifications are obviously required for the analysis as well, since the impact of any single observation strongly depends on the assumed structure of the background errors. Operational atmospheric data assimilation systems still rely primarily on climatological background error covariances. To obtain error estimates that reflect both the character of the flow and the current state of the observing system, it is necessary to solve three problems: (1) how to account for the short-term evolution of errors in the initial conditions; (2) how to estimate the additional component of error caused by model defects; and (3) how to compute the error reduction in the analysis due to observational information. Various approaches are now available that provide approximate solutions to the first and third of these problems. However, the useful accuracy of these solutions very much depends on the size and character of the model errors and the ability to account for them. Model errors represent the real-world forcing of the error evolution in a data assimilation system. Clearly, meaningful model error estimates and/or statistics must be based on information external to the model itself. The most obvious information source is observational, and since the volume of available geophysical data is growing rapidly, there is some hope that a purely statistical approach to model error estimation can be viable. This requires that the observation errors themselves are well understood and quantifiable. We will discuss some of these challenges and present a new sequential scheme for estimating model error variances from observations in the context of an atmospheric data
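The statistical idea behind estimating model error from observations can be illustrated with a toy moment estimate (this is not Dee's actual sequential scheme): for a scalar state with mutually uncorrelated errors, the innovation (observation-minus-forecast) variance is the sum of the background, model-error, and observation-error variances, so the model error variance can be read off from innovation statistics when the other two terms are known.

```python
import numpy as np

# Toy moment estimate, not the paper's sequential scheme: for a scalar state
# with mutually uncorrelated errors, innovation variance = P + Q + R, so the
# model error variance Q follows from innovation statistics given P and R.
rng = np.random.default_rng(1)

R = 0.5        # observation error variance (assumed known)
P = 1.0        # background error variance excluding model error
Q_true = 0.8   # model error variance to be recovered

# Simulated innovations d = y - H x_f with variance P + Q + R.
innov = rng.normal(0.0, np.sqrt(P + Q_true + R), size=200_000)

Q_hat = innov.var() - P - R     # moment estimate of the model error variance
print(round(Q_hat, 2))
```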

  13. Quantitative genetic variance and multivariate clines in the Ivyleaf morning glory, Ipomoea hederacea

    PubMed Central

    Stock, Amanda J.; Campitelli, Brandon E.; Stinchcombe, John R.

    2014-01-01

    Clinal variation is commonly interpreted as evidence of adaptive differentiation, although clines can also be produced by stochastic forces. Understanding whether clines are adaptive therefore requires comparing clinal variation to background patterns of genetic differentiation at presumably neutral markers. Although this approach has frequently been applied to single traits at a time, we have comparatively fewer examples of how multiple correlated traits vary clinally. Here, we characterize multivariate clines in the Ivyleaf morning glory, examining how suites of traits vary with latitude, with the goal of testing for divergence in trait means that would indicate past evolutionary responses. We couple this with analysis of genetic variance in clinally varying traits in 20 populations to test whether past evolutionary responses have depleted genetic variance, or whether genetic variance declines approaching the range margin. We find evidence of clinal differentiation in five quantitative traits, with little evidence of isolation by distance at neutral loci that would suggest non-adaptive or stochastic mechanisms. Within and across populations, the traits that contribute most to population differentiation and clinal trends in the multivariate phenotype are genetically variable as well, suggesting that a lack of genetic variance will not cause absolute evolutionary constraints. Our data are broadly consistent with theoretical predictions of polygenic clines in response to shallow environmental gradients. Ecologically, our results are consistent with past findings of natural selection on flowering phenology, presumably due to season-length variation across the range. PMID:25002704

  14. A comparison of two methods for detecting abrupt changes in the variance of climatic time series

    NASA Astrophysics Data System (ADS)

    Rodionov, Sergei N.

    2016-06-01

    Two methods for detecting abrupt shifts in the variance - Integrated Cumulative Sum of Squares (ICSS) and Sequential Regime Shift Detector (SRSD) - have been compared on both synthetic and observed time series. In Monte Carlo experiments, SRSD outperformed ICSS in the overwhelming majority of the modeled scenarios with different sequences of variance regimes. The SRSD advantage was particularly apparent in the case of outliers in the series. On the other hand, SRSD has more parameters to adjust than ICSS, which requires more experience from the user in order to select those parameters properly. Therefore, ICSS can serve as a good starting point for a regime shift analysis. When tested on climatic time series, in most cases both methods detected the same change points in the longer series (252-787 monthly values). The only exception was the Arctic Ocean sea surface temperature (SST) series, for which ICSS found one extra change point that appeared to be spurious. As for the shorter time series (66-136 yearly values), ICSS failed to detect any change points even when the variance doubled or tripled from one regime to another. For these time series, SRSD is recommended. Interestingly, all the climatic time series tested, from the Arctic to the tropics, had one thing in common: the last shift detected in each of these series was toward a high-variance regime. This is consistent with other findings of increased climate variability in recent decades.
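The core of the ICSS method is the centred cumulative-sum-of-squares statistic of Inclan and Tiao, which peaks in magnitude at a variance change point. A minimal sketch with simulated data (the operational ICSS and SRSD codes add iteration over segments and significance testing):

```python
import numpy as np

# Minimal sketch of the centred cumulative-sum-of-squares statistic behind
# ICSS (Inclan & Tiao); operational ICSS/SRSD implementations add iteration
# and significance testing. Data are simulated, not the climate series above.
rng = np.random.default_rng(2)

# Series with a variance shift (sd 1 -> sd 3) at t = 300.
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(0, 3, 300)])

C = np.cumsum(x**2)                  # cumulative sum of squares
k = np.arange(1, len(x) + 1)
D = C / C[-1] - k / len(x)           # centred statistic; extremal at the shift

cp = int(np.argmax(np.abs(D)) + 1)   # estimated change point
print(cp)
```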

  15. Changes in variance explained by top SNP windows over generations for three traits in broiler chicken

    PubMed Central

    Fragomeni, Breno de Oliveira; Misztal, Ignacy; Lourenco, Daniela Lino; Aguilar, Ignacio; Okimoto, Ronald; Muir, William M.

    2014-01-01

    The purpose of this study was to determine whether the set of genomic regions inferred as accounting for the majority of genetic variation in quantitative traits remains stable over multiple generations of selection. The data set contained phenotypes for five generations of broiler chicken for body weight, breast meat, and leg score. The population consisted of 294,632 animals over five generations and also included genotypes at 41,036 single nucleotide polymorphisms (SNPs) for 4,866 animals, after quality control. The SNP effects were calculated by a GWAS-type analysis using the single-step genomic BLUP approach for generations 1–3, 2–4, 3–5, and 1–5. Variances were calculated for windows of 20 SNPs. The top ten windows for each trait that explained the largest fraction of the genetic variance across generations were examined. Across generations, the top 10 windows explained more than 0.5% but less than 1% of the total variance. Moreover, the pattern of the windows was not consistent across generations: the windows that explained the greatest variance changed greatly among the combinations of generations, with a few exceptions. In many cases, a window identified as a top window for one combination explained less than 0.1% for the other combinations. We conclude that identification of top SNP windows for a population may have little predictive power for genetic selection in the following generations for the traits evaluated here. PMID:25324857
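The window-variance calculation itself is straightforward once per-SNP effects are available. A hedged sketch with simulated genotypes and effects (the study back-solved effects from single-step genomic BLUP; all values here are illustrative):

```python
import numpy as np

# Hedged sketch of the 20-SNP window-variance calculation with simulated
# genotypes and SNP effects (not the study's ssGBLUP estimates).
rng = np.random.default_rng(3)

n_animals, n_snp, win = 500, 200, 20
Z = rng.binomial(2, 0.5, size=(n_animals, n_snp)).astype(float)
effects = rng.normal(0.0, 0.05, n_snp)
effects[40:60] = rng.normal(0.0, 0.5, 20)    # plant one strong 20-SNP window

gv = Z @ effects                             # genomic values of all animals
# Fraction of the genomic variance attributable to each 20-SNP window.
shares = [np.var(Z[:, i:i + win] @ effects[i:i + win]) / np.var(gv)
          for i in range(0, n_snp, win)]
top = int(np.argmax(shares))
print(top)
```

Repeating this for effects estimated from different generation windows, and comparing the resulting top windows, mirrors the study's stability check.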

  16. Monochromaticity of orientation maps in v1 implies minimum variance for hypercolumn size.

    PubMed

    Afgoustidis, Alexandre

    2015-01-01

    In the primary visual cortex of many mammals, the processing of sensory information involves recognizing stimuli orientations. The repartition of preferred orientations of neurons in some areas is remarkable: a repetitive, non-periodic layout. This repetitive pattern is understood to be fundamental for basic non-local aspects of vision, like the perception of contours, but important questions remain about its development and function. We focus here on Gaussian Random Fields, which provide a good description of the initial stage of orientation map development and, in spite of shortcomings we will recall, a computable framework for discussing general principles underlying the geometry of mature maps. We discuss the relationship between the notion of column spacing and the structure of correlation spectra; we prove formulas for the mean value and variance of column spacing, and we use numerical analysis of exact analytic formulae to study the variance. Referring to studies by Wolf, Geisel, Kaschube, Schnabel, and coworkers, we also show that spectral thinness is not an essential ingredient to obtain a pinwheel density of π, whereas it appears as a signature of Euclidean symmetry. The minimum variance property associated with thin spectra could be useful for information processing, provide optimal modularity for V1 hypercolumns, and be a first step toward a mathematical definition of hypercolumns. A measurement of this property in real maps is in principle possible, and comparison with the results in our paper could help establish the role of our minimum variance hypothesis in the development process. PMID:25859421
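A "monochromatic" Gaussian random field of the kind discussed above can be sketched by superposing plane waves whose wavevectors all share one wavenumber and reading the preferred orientation off the field's phase. This is an illustrative toy, not the paper's analytic machinery; the grid size and wavenumber are arbitrary choices.

```python
import numpy as np

# Toy monochromatic Gaussian-random-field orientation map: a superposition of
# plane waves on a thin annulus of wavenumbers (illustrative parameters only).
rng = np.random.default_rng(5)

n, k0, n_waves = 64, 8.0, 30
y, x = np.mgrid[0:n, 0:n] / n            # unit-square grid

z = np.zeros((n, n), dtype=complex)
for theta in rng.uniform(0, 2 * np.pi, n_waves):
    kx, ky = k0 * np.cos(theta), k0 * np.sin(theta)   # |k| fixed at k0
    phase = rng.uniform(0, 2 * np.pi)
    z += np.exp(1j * (2 * np.pi * (kx * x + ky * y) + phase))

# Preferred orientation at each point, in (-pi/2, pi/2].
pref_orientation = np.angle(z) / 2
print(pref_orientation.shape)
```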

  17. Quantitative genetic variance and multivariate clines in the Ivyleaf morning glory, Ipomoea hederacea.

    PubMed

    Stock, Amanda J; Campitelli, Brandon E; Stinchcombe, John R

    2014-08-19

    Clinal variation is commonly interpreted as evidence of adaptive differentiation, although clines can also be produced by stochastic forces. Understanding whether clines are adaptive therefore requires comparing clinal variation to background patterns of genetic differentiation at presumably neutral markers. Although this approach has frequently been applied to single traits at a time, we have comparatively fewer examples of how multiple correlated traits vary clinally. Here, we characterize multivariate clines in the Ivyleaf morning glory, examining how suites of traits vary with latitude, with the goal of testing for divergence in trait means that would indicate past evolutionary responses. We couple this with analysis of genetic variance in clinally varying traits in 20 populations to test whether past evolutionary responses have depleted genetic variance, or whether genetic variance declines approaching the range margin. We find evidence of clinal differentiation in five quantitative traits, with little evidence of isolation by distance at neutral loci that would suggest non-adaptive or stochastic mechanisms. Within and across populations, the traits that contribute most to population differentiation and clinal trends in the multivariate phenotype are genetically variable as well, suggesting that a lack of genetic variance will not cause absolute evolutionary constraints. Our data are broadly consistent with theoretical predictions of polygenic clines in response to shallow environmental gradients. Ecologically, our results are consistent with past findings of natural selection on flowering phenology, presumably due to season-length variation across the range. PMID:25002704

  18. Spectral Fingerprinting and Analysis of Variance-Principal Component Analysis: A Tool for Classifying Variance in Plant Materials

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genetics and a variety of environmental factors (such as rainfall, pests, soil, irrigation levels, and fertilization) can lead to chemical differences in the same plant materials. A simple and inexpensive spectral fingerprinting (UV, IR, NIR, and Direct MS) method is described that allows classifica...
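The ANOVA-PCA approach named in the title can be sketched for a one-factor design: split the spectral matrix into factor-mean and residual parts, then decompose the sum by PCA. This is a hedged illustration with simulated spectra; the truncated record gives no implementation details, and the group labels and effect sizes below are invented.

```python
import numpy as np

# Hedged sketch of ANOVA-PCA for a single factor, with simulated spectra
# (illustrative only; the record above does not give implementation details).
rng = np.random.default_rng(6)

groups = np.repeat([0, 1, 2], 10)        # e.g. three growing environments
X = rng.normal(size=(30, 50))            # 30 spectra x 50 wavelengths
X[groups == 2] += 1.5                    # environment effect on the spectra

grand = X.mean(axis=0)
# Factor-mean matrix: each row holds its group's mean deviation from grand.
means = np.vstack([X[groups == g].mean(axis=0) - grand for g in groups])
resid = X - grand - means                # within-group residuals

# PCA (via SVD) of factor means + residuals; with several factors, each
# factor's mean matrix plus the residuals would be decomposed separately.
U, s, Vt = np.linalg.svd(means + resid, full_matrices=False)
scores = U * s                           # sample scores on the PCs
print(scores.shape)
```

Samples from the shifted group separate from the others along the first principal component, which is how the method classifies variance attributable to a factor.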

  19. Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats

    USGS Publications Warehouse

    Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.

    2012-01-01

    This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected using ADCPs from moving boats on three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements associated with different sampling times, assuming uncorrelated flow fields, is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.
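Under the conclusion above that successive samples are effectively uncorrelated, the variance of a time-averaged discharge estimate falls as sigma^2/N with the number of samples, and hence with exposure time. A hedged numerical sketch (the variance, sampling interval, and exposure times are illustrative, not values from the field data):

```python
import numpy as np

# Hedged sketch: random error of a time-averaged discharge estimate versus
# exposure time, assuming successive samples are uncorrelated (the paper's
# empirical finding for the sampled flow field). Values are illustrative.
rng = np.random.default_rng(4)

sigma2, dt = 4.0, 1.0     # sample variance (m^3/s)^2 and sampling interval (s)

results = {}
for T in (60, 240, 960):  # candidate exposure times in seconds
    n = int(T / dt)
    var_theory = sigma2 / n   # variance of the mean for uncorrelated samples
    # Monte Carlo check: variance across 2000 simulated transect averages
    sim = rng.normal(0.0, np.sqrt(sigma2), size=(2000, n)).mean(axis=1)
    results[T] = (var_theory, sim.var())
    print(T, round(var_theory, 4), round(results[T][1], 4))
```

Longer exposure times shrink the random error, which is the basis for choosing the exposure time needed to measure discharge to a target accuracy.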

  20. Practice reduces task relevant variance modulation and forms nominal trajectory

    NASA Astrophysics Data System (ADS)

    Osu, Rieko; Morishige, Ken-Ichi; Nakanishi, Jun; Miyamoto, Hiroyuki; Kawato, Mitsuo

    2015-12-01

    Humans are capable of achieving complex tasks with redundant degrees of freedom. Much attention has been paid to task-relevant variance modulation as an indication of online feedback control strategies for coping with motor variability. Meanwhile, it has been argued that the brain learns internal models of environments to realize feedforward control with nominal trajectories. Here we examined trajectory variance in both the spatial and temporal domains to elucidate the relative contributions of these control schemes. We asked subjects to learn reaching movements with multiple via-points, and found that hand trajectories converged to stereotyped trajectories with a reduction of task-relevant variance modulation as learning proceeded. Furthermore, variance reduction was not always associated with task constraints but was highly correlated with the velocity profile. A model assuming noise on both the nominal trajectory and the motor command was able to reproduce the observed variance modulation, supporting an expression of nominal trajectories in the brain. The learning-related decrease in task-relevant modulation revealed a reduction in the influence of optimal feedback around the task constraints. After practice, the major part of the computation appears to be taken over by the feedforward controller around the nominal trajectory, with feedback added only when it becomes necessary.