Spectral Ambiguity of Allan Variance
NASA Technical Reports Server (NTRS)
Greenhall, C. A.
1996-01-01
We study the extent to which knowledge of Allan variance and other finite-difference variances determines the spectrum of a random process. The variance of first differences is known to determine the spectrum. We show that, in general, the Allan variance does not. A complete description of the ambiguity is given.
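To make the finite-difference setting concrete, here is a minimal sketch of the overlapping Allan variance estimator (the two-sample variance of m-sample frequency averages). The function name and conventions are ours, not Greenhall's; for white frequency noise of unit variance the estimate should sit near 1/m.

```python
import random

def allan_variance(y, m=1):
    """Overlapping Allan variance of fractional-frequency samples y
    at averaging factor m (tau = m * tau0)."""
    n = len(y)
    # m-sample averages, overlapping by one sample:
    avgs = [sum(y[i:i + m]) / m for i in range(n - m + 1)]
    # Lag-m second differences of those averages:
    diffs = [avgs[i + m] - avgs[i] for i in range(len(avgs) - m)]
    return sum(d * d for d in diffs) / (2 * len(diffs))

# For white frequency noise of variance sigma^2, the Allan variance
# at m = 1 equals sigma^2 and falls off as 1/m at larger m.
rng = random.Random(1)
y = [rng.gauss(0.0, 1.0) for _ in range(200_000)]
```

For spectra other than white frequency noise the 1/m behavior changes, which is exactly the ambiguity question the paper studies.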
Estimating the Modified Allan Variance
NASA Technical Reports Server (NTRS)
Greenhall, Charles
1995-01-01
The third-difference approach to modified Allan variance (MVAR) leads to a tractable formula for a measure of MVAR estimator confidence, the equivalent degrees of freedom (edf), in the presence of power-law phase noise. The effect of estimation stride on edf is tabulated. A simple approximation for edf is given, and its errors are tabulated. A theorem allowing conservative estimates of edf in the presence of compound noise processes is given.
A Wavelet Perspective on the Allan Variance.
Percival, Donald B
2016-04-01
The origins of the Allan variance trace back 50 years to two seminal papers, one by Allan (1966) and the other by Barnes (1966). Since then, the Allan variance has played a leading role in the characterization of high-performance time and frequency standards. Wavelets first arose in the early 1980s in the geophysical literature, and the discrete wavelet transform (DWT) became prominent in the late 1980s in the signal processing literature. Flandrin (1992) briefly documented a connection between the Allan variance and a wavelet transform based upon the Haar wavelet. Percival and Guttorp (1994) noted that one popular estimator of the Allan variance, the maximal overlap estimator, can be interpreted in terms of a version of the DWT now widely referred to as the maximal overlap DWT (MODWT). In particular, when the MODWT is based on the Haar wavelet, the variance of the resulting wavelet coefficients, the wavelet variance, is identical to the Allan variance when the latter is multiplied by one-half. The theory behind the wavelet variance can thus deepen our understanding of the Allan variance. In this paper, we review basic wavelet variance theory with an emphasis on the Haar-based wavelet variance and its connection to the Allan variance. We then note that estimation theory for the wavelet variance offers a means of constructing asymptotically correct confidence intervals (CIs) for the Allan variance without reverting to the common practice of specifying a power-law noise type a priori. We also review recent work on specialized estimators of the wavelet variance that are of interest when some observations are missing (gappy data) or in the presence of contamination (rogue observations or outliers). It is a simple matter to adapt these estimators to become estimators of the Allan variance. Finally, we note that wavelet variances based upon wavelets other than the Haar offer interesting generalizations of the Allan variance. PMID:26529757
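The identity noted above (Haar MODWT wavelet variance equals one-half the Allan variance) is easy to verify numerically. The sketch below uses our own minimal implementations at unit scale; it is illustrative, not Percival's code.

```python
import random

def allan_variance_m1(y):
    # Two-sample (Allan) variance at unit averaging time.
    d = [y[i + 1] - y[i] for i in range(len(y) - 1)]
    return sum(v * v for v in d) / (2 * len(d))

def haar_modwt_variance_level1(y):
    # Level-1 MODWT Haar wavelet coefficients are (y[t+1] - y[t]) / 2;
    # the wavelet variance is the mean of their squares.
    w = [(y[i + 1] - y[i]) / 2 for i in range(len(y) - 1)]
    return sum(v * v for v in w) / len(w)

rng = random.Random(3)
y = [rng.gauss(0.0, 1.0) for _ in range(10_000)]
# The Haar wavelet variance is exactly half the Allan variance,
# term by term, so the two estimates agree to rounding error.
```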
Estimating the Modified Allan Variance
NASA Technical Reports Server (NTRS)
Greenhall, Charles
1995-01-01
A paper at the 1992 FCS showed how to express the modified Allan variance (mvar) in terms of the third difference of the cumulative sum of time residuals. Although this reformulated definition was presented merely as a computational trick for simplifying the calculation of mvar estimates, it has since turned out to be a powerful theoretical tool for deriving the statistical quality of those estimates in terms of their equivalent degrees of freedom (edf), defined for an estimator V by edf V = 2(EV)^2/var V. Confidence intervals for mvar can then be constructed from levels of the appropriate chi-square distribution.
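The edf definition above can be illustrated with a quick Monte Carlo sketch. The white-frequency-noise case below, with the lag-1 Allan variance standing in for mvar, is an assumption chosen for simplicity; the paper's results cover the actual mvar estimators. For M first differences of white FM noise the edf comes out near 2M/3.

```python
import random

def avar_m1(y):
    # Lag-1 Allan variance of fractional-frequency samples.
    d = [y[i + 1] - y[i] for i in range(len(y) - 1)]
    return sum(v * v for v in d) / (2 * len(d))

def empirical_edf(n_samples, n_trials, seed=0):
    """Monte Carlo illustration of edf V = 2*(E V)^2 / var(V) for the
    lag-1 Allan variance estimator under white frequency noise."""
    rng = random.Random(seed)
    vals = [avar_m1([rng.gauss(0.0, 1.0) for _ in range(n_samples)])
            for _ in range(n_trials)]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / (len(vals) - 1)
    return 2.0 * mean * mean / var
```

A chi-square confidence interval then follows as [edf*V/chi2(1-a/2, edf), edf*V/chi2(a/2, edf)].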
Avoiding Aliasing in Allan Variance: An Application to Fiber Link Data Analysis.
Calosso, Claudio E; Clivati, Cecilia; Micalizio, Salvatore
2016-04-01
Optical fiber links are known as the best-performing tools to transfer ultrastable frequency reference signals. However, these signals are affected by phase noise up to bandwidths of several kilohertz, and a careful data processing strategy is required to properly estimate the uncertainty. This aspect is often overlooked, and a number of approaches have been proposed to deal with it implicitly. Here, we face this issue in terms of aliasing and show how typical tools of signal analysis can be adapted to the evaluation of optical fiber link performance. In this way, it is possible to use the Allan variance (AVAR) as an estimator of stability, and there is no need to introduce other estimators. The general rules we derive can be extended to all optical links. As an example, we apply this method to the experimental data we obtained on a 1284-km coherent optical link for frequency dissemination, which we realized in Italy. PMID:26800534
Application of the Allan Variance to Time Series Analysis in Astrometry and Geodesy: A Review.
Malkin, Zinovy
2016-04-01
The Allan variance (AVAR) was introduced 50 years ago as a statistical tool for assessing the stability of frequency standards. Over the past decades, AVAR has increasingly been used in geodesy and astrometry to assess the noise characteristics of geodetic and astrometric time series. A specific feature of astrometric and geodetic measurements, as compared with clock measurements, is that they are generally associated with uncertainties; thus, appropriate weighting should be applied during data analysis. In addition, some physically connected scalar time series naturally form series of multidimensional vectors. For example, the three station coordinate time series X, Y, and Z can be combined to analyze 3-D station position variations. The classical AVAR is not intended for processing unevenly weighted and/or multidimensional data. Therefore, AVAR modifications, namely weighted AVAR (WAVAR), multidimensional AVAR (MAVAR), and weighted multidimensional AVAR (WMAVAR), were introduced to overcome these deficiencies. In this paper, a brief review is given of the experience of using AVAR and its modifications in processing astrogeodetic time series. PMID:26540681
A Simple Algorithm for Approximating Confidence on the Modified Allan Variance and the Time Variance
NASA Technical Reports Server (NTRS)
Weiss, Marc A.; Greenhall, Charles A.
1996-01-01
An approximating algorithm for computing the equivalent degrees of freedom of the Modified Allan Variance and its square root, the Modified Allan Deviation (MVAR and MDEV), and of the Time Variance and Time Deviation (TVAR and TDEV) is presented, along with an algorithm for approximating the inverse chi-square distribution.
On the application of Allan variance method for Ring Laser Gyro performance characterization
Ng, L.C.
1993-10-15
This report describes the method of Allan variance and its application to the characterization of a Ring Laser Gyro's (RLG) performance. Allan variance, a time domain analysis technique, is an accepted IEEE standard for gyro specifications. The method was initially developed by David Allan of the National Bureau of Standards to quantify the error statistics of a cesium beam frequency standard employed as the US frequency standard in the 1960s. The method can, in general, be applied to analyze the error characteristics of any precision measurement instrument. The key attribute of the method is that it allows for a finer, easier characterization and identification of error sources and their contributions to the overall noise statistics. This report presents an overview of the method, explains the relationship between Allan variance and the power spectral density distribution of underlying noise sources, describes the batch and recursive implementation approaches, validates the Allan variance computation with a simulation model, and illustrates the Allan variance method using data collected from several Honeywell LIMU units.
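The report mentions batch and recursive implementation approaches; a minimal streaming (recursive) accumulator for the lag-1 Allan variance might look like the following. This is our own sketch, not the report's code: each new sample updates a running sum of squared first differences, so the full record never needs to be stored.

```python
import random

class StreamingAllanVariance:
    """Recursive (online) lag-1 Allan variance accumulator."""
    def __init__(self):
        self._prev = None      # last sample seen
        self._count = 0        # number of first differences
        self._sumsq = 0.0      # running sum of squared differences

    def update(self, sample):
        if self._prev is not None:
            d = sample - self._prev
            self._sumsq += d * d
            self._count += 1
        self._prev = sample

    def value(self):
        return self._sumsq / (2 * self._count)

# Batch reference implementation for comparison:
def batch_allan(y):
    d = [y[i + 1] - y[i] for i in range(len(y) - 1)]
    return sum(v * v for v in d) / (2 * len(d))

rng = random.Random(5)
data = [rng.gauss(0.0, 1.0) for _ in range(1000)]
stream = StreamingAllanVariance()
for v in data:
    stream.update(v)
```

Extending this to multiple averaging factors in parallel gives a recursive estimator over the full tau range.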
Numbers Of Degrees Of Freedom Of Allan-Variance Estimators
NASA Technical Reports Server (NTRS)
Greenhall, Charles A.
1992-01-01
Report discusses formulas for estimation of Allan variances. Presents algorithms for closed-form approximations of numbers of degrees of freedom characterizing results obtained when various estimators applied to five power-law components of classical mathematical model of clock noise.
Online Estimation of Allan Variance Coefficients Based on a Neural-Extended Kalman Filter
Miao, Zhiyong; Shen, Feng; Xu, Dingjie; He, Kunpeng; Tian, Chunmiao
2015-01-01
As a noise analysis method for inertial sensors, the traditional Allan variance method requires the storage of a large amount of data and manual analysis of an Allan variance graph. Although existing online estimation methods avoid the storage of data and the painful procedure of drawing slope lines for estimation, they require complex transformations and can even introduce errors during the modeling of dynamic Allan variance. To solve these problems, first, a new nonlinear state-space model that directly models the stochastic errors was established for inertial sensors. Then, a neural-extended Kalman filter algorithm was used to estimate the Allan variance coefficients. The real noises of an ADIS16405 IMU and fiber-optic gyro sensors were analyzed by the proposed method and traditional methods. The experimental results show that the proposed method is more suitable than the traditional methods for estimating the Allan variance coefficients. Moreover, the proposed method effectively avoids the storage of data and can be easily implemented on an online processor. PMID:25625903
The Third-Difference Approach to Modified Allan Variance
NASA Technical Reports Server (NTRS)
Greenhall, C. A.
1995-01-01
This study gives strategies for estimating the modified Allan variance (mvar) and formulas for computing the equivalent degrees of freedom (edf) of the estimators. A third-difference formulation of mvar leads to a tractable formula for edf in the presence of power-law phase noise. The effect of estimation stride on edf is tabulated. First-degree rational-function approximations for edf are derived.
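The third-difference reformulation can be checked directly: summing the windowed second differences of phase over m samples equals a stride-m third difference of the cumulative sum of the phase data. Both routines below are our own sketches of the two formulations and should agree to rounding error.

```python
import random

def mvar_direct(x, m, tau0=1.0):
    """Modified Allan variance from phase data x, straight from the
    windowed second-difference definition (normalization per the
    standard phase-data formula)."""
    N = len(x)
    terms = [sum(x[i + 2*m] - 2*x[i + m] + x[i] for i in range(j, j + m)) ** 2
             for j in range(N - 3*m + 1)]
    return sum(terms) / (2 * m**4 * tau0**2 * len(terms))

def mvar_third_difference(x, m, tau0=1.0):
    """Same quantity via a stride-m third difference of the cumulative
    sum S of the phase data (S[0] = 0, S[k] = x[0] + ... + x[k-1])."""
    S = [0.0]
    for v in x:
        S.append(S[-1] + v)
    N = len(x)
    terms = [(S[j + 3*m] - 3*S[j + 2*m] + 3*S[j + m] - S[j]) ** 2
             for j in range(N - 3*m + 1)]
    return sum(terms) / (2 * m**4 * tau0**2 * len(terms))

rng = random.Random(11)
x = [rng.gauss(0.0, 1.0) for _ in range(60)]
```

The third-difference form replaces the O(m) inner sum with a constant-time difference of running sums, which is what makes the edf analysis tractable.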
Relationship between Allan variances and Kalman Filter parameters
NASA Technical Reports Server (NTRS)
Vandierendonck, A. J.; Mcgraw, J. B.; Brown, R. G.
1984-01-01
A relationship was constructed between the Allan variance parameters (h₂, h₁, h₀, h₋₁, and h₋₂) and a Kalman filter model that would be used to estimate and predict clock phase, frequency, and frequency drift. To start, the meaning of those Allan variance parameters and how they are arrived at for a given frequency source is reviewed. Although a subset of these parameters is arrived at by measuring phase as a function of time rather than as a spectral density, they all represent phase noise spectral density coefficients, though not necessarily those of a rational spectral density. The phase noise spectral density is then transformed into a time domain covariance model, which can then be used to derive the Kalman filter model parameters. Simulation results of that covariance model are presented and compared to clock uncertainties predicted by Allan variance parameters. A two-state Kalman filter model is then derived, and the significance of each state is explained.
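As a sketch of the kind of mapping the paper constructs, the widely used two-state (phase, frequency) clock model has a closed-form process-noise covariance over a step dt. The identifications q1 = h0/2 and q2 = 2*pi^2*h_-2 below follow the textbook treatment (e.g., Brown and Hwang) and are assumptions on our part; the paper itself handles the full five-parameter set.

```python
import math

def clock_process_noise(h0, h_minus2, dt):
    """Process-noise covariance Q over step dt for a two-state
    (phase, frequency) clock model driven by white FM and
    random-walk FM noise.  Mapping assumed: q1 = h0/2 (white FM),
    q2 = 2*pi^2*h_{-2} (random-walk FM)."""
    q1 = h0 / 2.0
    q2 = 2.0 * math.pi ** 2 * h_minus2
    return [[q1 * dt + q2 * dt ** 3 / 3.0, q2 * dt ** 2 / 2.0],
            [q2 * dt ** 2 / 2.0,           q2 * dt]]
```

A third state (drift) extends Q analogously when h-parameters alone cannot capture deterministic aging.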
The dynamic Allan Variance IV: characterization of atomic clock anomalies.
Galleani, Lorenzo; Tavella, Patrizia
2015-05-01
The number of applications where precise clocks play a key role is steadily increasing, satellite navigation being the main example. Precise clock anomalies are hence critical events, and their characterization is a fundamental problem. When an anomaly occurs, the clock stability changes with time, and this variation can be characterized with the dynamic Allan variance (DAVAR). We obtain the DAVAR for a series of common clock anomalies, namely, a sinusoidal term, a phase jump, a frequency jump, and a sudden change in the clock noise variance. These anomalies are particularly common in space clocks. Our analytic results clarify how the clock stability changes during these anomalies. PMID:25965674
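A DAVAR sketch is straightforward: slide a window along the frequency record and evaluate the Allan variance in each window. The minimal version below (our own, lag-1 only) resolves the last anomaly type discussed, a sudden change in the clock noise variance.

```python
import random

def avar_m1(y):
    d = [y[i + 1] - y[i] for i in range(len(y) - 1)]
    return sum(v * v for v in d) / (2 * len(d))

def davar(y, window, step):
    """Dynamic Allan variance sketch: the lag-1 Allan variance
    evaluated over a sliding window, giving stability vs. epoch."""
    return [avar_m1(y[t:t + window])
            for t in range(0, len(y) - window + 1, step)]

# White frequency noise whose standard deviation doubles halfway
# through the record, i.e. a sudden change in the clock noise variance:
rng = random.Random(7)
y = ([rng.gauss(0.0, 1.0) for _ in range(5000)] +
     [rng.gauss(0.0, 2.0) for _ in range(5000)])
track = davar(y, window=1000, step=500)
```

The tracked variance steps from about 1 to about 4, which is the signature the paper derives analytically for this anomaly.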
Measurement of Allan variance and phase noise at fractions of a millihertz
NASA Technical Reports Server (NTRS)
Conroy, Bruce L.; Le, Duc
1990-01-01
Although the measurement of Allan variance of oscillators is well documented, there is a need for a simplified system for finding the degradation of phase noise and Allan variance step-by-step through a system. This article describes an instrumentation system for simultaneous measurement of additive phase noise and degradation in Allan variance through a transmitter system. Also included are measurements of a 20-kW X-band transmitter showing the effect of adding a pass tube regulator.
Cavalié, Olivier; Vernotte, François
2016-04-01
The Allan variance was introduced 50 years ago for analyzing the stability of frequency standards. In addition to its metrological interest, it may also be considered an estimator of the large trends of the power spectral density (PSD) of frequency deviation. For instance, the Allan variance is able to discriminate different types of noise characterized by different power laws in the PSD. The Allan variance has also been used in fields other than time and frequency metrology: for more than 20 years, it has been used in accelerometry, geophysics, geodesy, astrophysics, and even finance. However, it seems that up to now it has been applied exclusively to time series analysis. We propose here to use the Allan variance on spatial data. Interferometric synthetic aperture radar (InSAR) is used in geophysics to image ground displacements in space [over the synthetic aperture radar (SAR) image spatial coverage] and in time, thanks to the regular SAR image acquisitions by dedicated satellites. The main limitation of the technique is the atmospheric disturbances that affect the radar signal while traveling from the sensor to the ground and back. In this paper, we propose to use the Allan variance for analyzing spatial data from InSAR measurements. The Allan variance was computed in XY mode as well as in radial mode for detecting different types of behavior at different spatial scales, in the same way as the different types of noise versus the integration time in the classical time and frequency application. We found that the radial Allan variance is the more appropriate way to obtain an estimator insensitive to the spatial axis, and we applied it to SAR data acquired over eastern Turkey for the period 2003-2011. The spatial Allan variance allowed us to characterize noise features classically found in InSAR, such as phase decorrelation producing white noise, or atmospheric delays behaving like a random walk signal. We finally applied the spatial Allan variance to an InSAR time
Twenty-Five Years of Applications of the Modified Allan Variance in Telecommunications.
Bregni, Stefano
2016-04-01
The Modified Allan Variance (MAVAR) was originally defined in 1981 for measuring frequency stability in precision oscillators. Due to its outstanding accuracy in discriminating power-law noise, it attracted significant interest among telecommunications engineers beginning in the early 1990s, when it was approved as a standard measure in international standards, recast as the Time Variance (TVAR), for specifying the time stability of network synchronization signals and of equipment clocks. A dozen years later, the usage of MAVAR was also introduced for Internet traffic analysis, to estimate self-similarity and long-range dependence. In this field, it demonstrated accuracy and sensitivity superior to the most popular tools already in use. This paper surveys the last 25 years of progress in extending the field of application of the MAVAR in telecommunications. First, the rationale and principles of the MAVAR are briefly summarized. Its adaptation as TVAR for the specification of timing stability is presented. The usage of MAVAR/TVAR in telecommunications standards is reviewed. Examples of measurements on real telecommunications equipment clocks are presented, providing an overview of their actual performance in terms of MAVAR. Moreover, applications of MAVAR to network traffic analysis are surveyed. The superior accuracy of MAVAR in estimating long-range dependence is emphasized by highlighting some remarkable practical examples of real network traffic analysis. PMID:26529754
On the Design of Attitude-Heading Reference Systems Using the Allan Variance.
Hidalgo-Carrió, Javier; Arnold, Sascha; Poulakis, Pantelis
2016-04-01
The Allan variance is a method to characterize stochastic random processes. The technique was originally developed to characterize the stability of atomic clocks and has also been successfully applied to the characterization of inertial sensors. Inertial navigation systems (INS) can provide accurate results in a short time, which tend to rapidly degrade in longer time intervals. During the last decade, the performance of inertial sensors has significantly improved, particularly in terms of signal stability, mechanical robustness, and power consumption. The mass and volume of inertial sensors have also been significantly reduced, offering system-level design and accommodation advantages. This paper presents a complete methodology for the characterization and modeling of inertial sensors using the Allan variance, with direct application to navigation systems. Although the concept of sensor fusion is relatively straightforward, accurate characterization and sensor-information filtering is not a trivial task, yet they are essential for good performance. A complete and reproducible methodology utilizing the Allan variance, including all the intermediate steps, is described. An end-to-end (E2E) process for sensor-error characterization and modeling up to the final integration in the sensor-fusion scheme is explained in detail. The strength of this approach is demonstrated with representative tests on novel, high-grade inertial sensors. Experimental navigation results are presented from two distinct robotic applications: a planetary exploration rover prototype and an autonomous underwater vehicle (AUV). PMID:26800535
NASA Technical Reports Server (NTRS)
Stanley, William D.
1994-01-01
An investigation of the Allan variance method as a possible means for characterizing fluctuations in radiometric noise diodes has been performed. The goal is to separate fluctuation components into white noise, flicker noise, and random-walk noise. The primary means is by discrete-time processing, and the study focused primarily on the digital processes involved. Noise satisfying the requirements was generated by direct convolution, fast Fourier transformation (FFT) processing in the time domain, and FFT processing in the frequency domain. Some of the numerous results obtained are presented along with the programs used in the study.
Allan deviation analysis of financial return series
NASA Astrophysics Data System (ADS)
Hernández-Pérez, R.
2012-05-01
We perform a scaling analysis for the return series of different financial assets, applying the Allan deviation (ADEV), which is used in time and frequency metrology to characterize quantitatively the stability of frequency standards, since it has been demonstrated to be a robust quantity for analyzing fluctuations of non-stationary time series over different observation intervals. The data used are daily opening-price series for assets from different markets over a time span of around ten years. We found that the ADEV results for the return series at short scales resemble those expected for an uncorrelated series, consistent with the efficient market hypothesis. On the other hand, the ADEV results for the absolute return series at short scales (the first one or two decades) decrease following an approximate scaling relation up to a point that differs for almost each asset, after which the ADEV deviates from scaling; this suggests that the presence of clustering, long-range dependence, and non-stationarity signatures in the series drives the results for large observation intervals.
ERIC Educational Resources Information Center
Braun, W. John
2012-01-01
The Analysis of Variance is often taught in introductory statistics courses, but it is not clear that students really understand the method. This is because the derivation of the test statistic and p-value requires a relatively sophisticated mathematical background which may not be well-remembered or understood. Thus, the essential concept behind…
Budget variance analysis using RVUs.
Berlin, M F; Budzynski, M R
1998-01-01
This article details the use of variance analysis as a management tool to evaluate the financial health of the practice. A common financial tool for administrators has been a simple calculation measuring the difference between actual vs. budgeted financials. Standard cost accounting provides a methodology known as variance analysis to better understand the actual vs. budgeted financial streams. The standard variance analysis has been modified by applying relative value units (RVUs) as standards for the practice. PMID:10387247
Analysis of Variance: Variably Complex
ERIC Educational Resources Information Center
Drummond, Gordon B.; Vowler, Sarah L.
2012-01-01
These authors have previously described how to use the "t" test to compare two groups. In this article, they describe the use of a different test, analysis of variance (ANOVA) to compare more than two groups. ANOVA is a test of group differences: do at least two of the means differ from each other? ANOVA assumes (1) normal distribution of…
Variance analysis. Part I, Extending flexible budget variance analysis to acuity.
Finkler, S A
1991-01-01
The author reviews the concepts of flexible budget variance analysis, including the price, quantity, and volume variances generated by that technique. He also introduces the concept of acuity variance and provides direction on how such a variance measure can be calculated. Part II in this two-part series on variance analysis will look at how personal computers can be useful in the variance analysis process. PMID:1870002
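As an arithmetic illustration of the price/quantity/volume decomposition reviewed above (conventions for which quantities anchor each component vary across texts, and the acuity variance is not modeled here), a flexible-budget sketch with hypothetical parameter names might read:

```python
def flexible_budget_variances(budget_volume, actual_volume,
                              budget_qty_per_unit, actual_qty_per_unit,
                              budget_price, actual_price):
    """Illustrative flexible-budget variance decomposition.
    volume variance:   cost change explained by workload alone
    quantity variance: change in resource use per unit of workload
    price variance:    change in price paid per resource unit
    The three components sum exactly to the total variance."""
    budgeted_cost = budget_volume * budget_qty_per_unit * budget_price
    actual_cost = actual_volume * actual_qty_per_unit * actual_price
    volume_var = (actual_volume - budget_volume) * budget_qty_per_unit * budget_price
    quantity_var = actual_volume * (actual_qty_per_unit - budget_qty_per_unit) * budget_price
    price_var = actual_volume * actual_qty_per_unit * (actual_price - budget_price)
    total = actual_cost - budgeted_cost
    return volume_var, quantity_var, price_var, total
```

An acuity variance would split the quantity component further, by attributing part of the per-unit resource change to patient-mix severity.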
Another Line for the Analysis of Variance
ERIC Educational Resources Information Center
Brown, Bruce L.; Harshbarger, Thad R.
1976-01-01
A test is developed for hypotheses about the grand mean in the analysis of variance, using the known relationship between the t distribution and the F distribution with 1 df (degree of freedom) for the numerator. (Author/RC)
Nonorthogonal Analysis of Variance Programs: An Evaluation.
ERIC Educational Resources Information Center
Hosking, James D.; Hamer, Robert M.
1979-01-01
Six computer programs for four methods of nonorthogonal analysis of variance are compared for capabilities, accuracy, cost, transportability, quality of documentation, associated computational capabilities, and ease of use: OSIRIS; SAS; SPSS; MANOVA; BMDP2V; and MULTIVARIANCE. (CTM)
Allan Deviation Plot as a Tool for Quartz-Enhanced Photoacoustic Sensors Noise Analysis.
Giglio, Marilena; Patimisco, Pietro; Sampaolo, Angelo; Scamarcio, Gaetano; Tittel, Frank K; Spagnolo, Vincenzo
2016-04-01
We report here on the use of the Allan deviation plot to analyze the long-term stability of a quartz-enhanced photoacoustic (QEPAS) gas sensor. The Allan plot provides information about the optimum averaging time for the QEPAS signal and allows the prediction of its ultimate detection limit. The Allan deviation can also be used to determine the main sources of noise coming from the individual components of the sensor. Quartz tuning fork thermal noise dominates for integration times up to 275 s, whereas at longer averaging times, the main contribution to the sensor noise originates from laser power instabilities. PMID:26529758
Formative Use of Intuitive Analysis of Variance
ERIC Educational Resources Information Center
Trumpower, David L.
2013-01-01
Students' informal inferential reasoning (IIR) is often inconsistent with the normative logic underlying formal statistical methods such as Analysis of Variance (ANOVA), even after instruction. In two experiments reported here, students' IIR was assessed using an intuitive ANOVA task at the beginning and end of a statistics course. In…
Directional variance analysis of annual rings
NASA Astrophysics Data System (ADS)
Kumpulainen, P.; Marjanen, K.
2010-07-01
Wood quality measurement methods are of increasing importance in the wood industry. The goal is to produce more high-quality products with higher market value than is produced today. One of the key factors for increasing market value is to provide better measurements and thus more information to support the decisions made later in the product chain. Strength and stiffness are important properties of wood. They are related to the mean annual ring width and its deviation. These indicators can be estimated from images taken of the log ends by two-dimensional power spectrum analysis. The spectrum analysis has been used successfully for images of pine. However, the annual rings in birch, for example, are less distinguishable, and the basic spectrum analysis method does not give reliable results. A novel method for local log-end variance analysis based on the Radon transform is proposed. The directions and positions of the annual rings can be estimated from local minimum and maximum variance estimates. Applying the spectrum analysis to the maximum local variance estimate instead of the original image produces a more reliable estimate of the annual ring width. The proposed method is not limited to log-end analysis only; it is usable in other two-dimensional random signal and texture analysis tasks.
Analysis of variance based on fuzzy observations
NASA Astrophysics Data System (ADS)
Nourbakhsh, M.; Mashinchi, M.; Parchami, A.
2013-04-01
Analysis of variance (ANOVA) is an important method in exploratory and confirmatory data analysis. The simplest type of ANOVA is one-way ANOVA for comparison among means of several populations. In this article, we extend one-way ANOVA to a case where observed data are fuzzy observations rather than real numbers. Two real-data examples are given to show the performance of this method.
Analysis of variance of microarray data.
Ayroles, Julien F; Gibson, Greg
2006-01-01
Analysis of variance (ANOVA) is an approach used to identify differentially expressed genes in complex experimental designs. It is based on testing for the significance of the magnitude of effect of two or more treatments taking into account the variance within and between treatment classes. ANOVA is a highly flexible analytical approach that allows investigators to simultaneously assess the contributions of multiple factors to gene expression variation, including technical (dye, batch) effects and biological (sex, genotype, drug, time) ones, as well as interactions between factors. This chapter provides an overview of the theory of linear mixture modeling and the sequence of steps involved in fitting gene-specific models and discusses essential features of experimental design. Commercial and open-source software for performing ANOVA is widely available. PMID:16939792
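For a minimal fixed-effects case of the approach described above, the one-way ANOVA F statistic is just the between-group mean square over the within-group mean square. The sketch below is generic, not microarray-specific, and the function name is our own.

```python
def one_way_anova_f(groups):
    """One-way fixed-effects ANOVA: F = MS_between / MS_within."""
    k = len(groups)                       # number of treatment classes
    n = sum(len(g) for g in groups)       # total observations
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Two small "treatment" groups; F = 1.5 by hand calculation.
f = one_way_anova_f([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0]])
```

In a gene-expression setting this computation would be repeated per gene, with additional factors handled by the mixed-model machinery the chapter describes.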
RISK ANALYSIS, ANALYSIS OF VARIANCE: GETTING MORE FROM OUR DATA
Technology Transfer Automated Retrieval System (TEKTRAN)
Analysis of variance (ANOVA) and regression are common statistical techniques used to analyze agronomic experimental data and determine significant differences among yields due to treatments or other experimental factors. Risk analysis provides an alternate and complementary examination of the same...
Automatic variance analysis of multistage care pathways.
Li, Xiang; Liu, Haifeng; Zhang, Shilei; Mei, Jing; Xie, Guotong; Yu, Yiqin; Li, Jing; Lakshmanan, Geetika T
2014-01-01
A care pathway (CP) is a standardized process that consists of multiple care stages, clinical activities and their relations, aimed at ensuring and enhancing the quality of care. However, actual care may deviate from the planned CP, and analysis of these deviations can help clinicians refine the CP and reduce medical errors. In this paper, we propose a CP variance analysis method to automatically identify the deviations between actual patient traces in electronic medical records (EMR) and a multistage CP. As the care stage information is usually unavailable in EMR, we first align every trace with the CP using a hidden Markov model. From the aligned traces, we report three types of deviations for every care stage: additional activities, absent activities and violated constraints, which are identified by using the techniques of temporal logic and binomial tests. The method has been applied to a CP for the management of congestive heart failure and real world EMR, providing meaningful evidence for the further improvement of care quality. PMID:25160280
Applications of non-parametric statistics and analysis of variance on sample variances
NASA Technical Reports Server (NTRS)
Myers, R. H.
1981-01-01
Nonparametric methods that are available for NASA-type applications are discussed. An attempt is made here to survey what can be used, to offer recommendations as to when each would be applicable, and to compare the methods, when possible, with the usual normal-theory procedures that are available for the Gaussian analog. It is important here to point out the hypotheses that are being tested, the assumptions that are being made, and the limitations of the nonparametric procedures. The appropriateness of performing analysis of variance on sample variances is also discussed and studied. This procedure is followed in several NASA simulation projects. On the surface this would appear to be a reasonably sound procedure. However, the difficulties involved center around the normality problem and the basic homogeneous-variance assumption that is made in usual analysis of variance problems. These difficulties are discussed, and guidelines are given for using the methods.
Functional Analysis of Variance for Association Studies
Vsevolozhskaya, Olga A.; Zaykin, Dmitri V.; Greenwood, Mark C.; Wei, Changshuai; Lu, Qing
2014-01-01
While progress has been made in identifying common genetic variants associated with human diseases, for most common complex diseases the identified genetic variants account for only a small proportion of heritability. Challenges remain in finding additional unknown genetic variants predisposing to complex diseases. With advances in next-generation sequencing technologies, sequencing studies have become commonplace in genetic research. The ongoing exome-sequencing and whole-genome-sequencing studies generate a massive amount of sequencing variants and allow researchers to comprehensively investigate their role in human diseases. The discovery of new disease-associated variants can be enhanced by utilizing powerful and computationally efficient statistical methods. In this paper, we propose a functional analysis of variance (FANOVA) method for testing an association of sequence variants in a genomic region with a qualitative trait. The FANOVA has a number of advantages: (1) it tests for a joint effect of gene variants, including both common and rare; (2) it fully utilizes linkage disequilibrium and genetic position information; and (3) it allows for either protective or risk-increasing causal variants. Through simulations, we show that FANOVA outperforms two popularly used methods, SKAT and a previously proposed method based on functional linear models (FLM), especially if the sample size of a study is small and/or sequence variants have low to moderate effects. We conduct an empirical study by applying the three methods (FANOVA, SKAT and FLM) to sequencing data from the Dallas Heart Study. While SKAT and FLM respectively detected ANGPTL4 and ANGPTL3 as associated with obesity, FANOVA was able to identify both genes as associated with obesity. PMID:25244256
Wave propagation analysis using the variance matrix.
Sharma, Richa; Ivan, J Solomon; Narayanamurthy, C S
2014-10-01
The propagation of a coherent laser wave-field through a pseudo-random phase plate is studied using the variance matrix estimated from Shack-Hartmann wavefront sensor data. The uncertainty principle is used as a tool in discriminating the data obtained from the Shack-Hartmann wavefront sensor. Quantities of physical interest such as the twist parameter, and the symplectic eigenvalues, are estimated from the wavefront sensor measurements. A distance measure between two variance matrices is introduced and used to estimate the spatial asymmetry of a wave-field in the experiment. The estimated quantities are then used to compare a distorted wave-field with its undistorted counterpart. PMID:25401243
Relating the Hadamard Variance to MCS Kalman Filter Clock Estimation
NASA Technical Reports Server (NTRS)
Hutsell, Steven T.
1996-01-01
The Global Positioning System (GPS) Master Control Station (MCS) currently makes significant use of the Allan Variance. This two-sample variance equation has proven excellent as a handy, understandable tool, both for time domain analysis of GPS cesium frequency standards, and for fine tuning the MCS's state estimation of these atomic clocks. The Allan Variance does not explicitly converge for the noise types with alpha less than or equal to minus 3 and can be greatly affected by frequency drift. Because GPS rubidium frequency standards exhibit non-trivial aging and aging noise characteristics, the basic Allan Variance analysis must be augmented in order to (a) compensate for a dynamic frequency drift, and (b) characterize two additional noise types, specifically alpha = minus 3 and alpha = minus 4. As the GPS program progresses, we will utilize a larger percentage of rubidium frequency standards than ever before. Hence, GPS rubidium clock characterization will require more attention than ever before. The three-sample variance, commonly referred to as a renormalized Hadamard Variance, is unaffected by linear frequency drift, converges for alpha greater than minus 5, and thus has utility for modeling noise in GPS rubidium frequency standards. This paper demonstrates the potential of Hadamard Variance analysis in GPS operations, and presents an equation that relates the Hadamard Variance to the MCS's Kalman filter process noises.
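The two- and three-sample statistics the abstract contrasts are easy to sketch. A minimal pure-Python illustration, assuming stride-1 averaged fractional-frequency data and using helper names of our own (not the MCS software's):

```python
# Minimal sketch of the two-sample (Allan) and three-sample (Hadamard)
# variances for stride-1 averaged fractional-frequency data y.
# Helper names are ours, not the MCS software's.
def allan_variance(y):
    # sigma_y^2(tau) = <(y[k+1] - y[k])^2> / 2
    d = [y[k + 1] - y[k] for k in range(len(y) - 1)]
    return sum(x * x for x in d) / (2 * len(d))

def hadamard_variance(y):
    # H sigma_y^2(tau) = <(y[k+2] - 2*y[k+1] + y[k])^2> / 6
    d = [y[k + 2] - 2 * y[k + 1] + y[k] for k in range(len(y) - 2)]
    return sum(x * x for x in d) / (6 * len(d))

# Pure linear frequency drift of d = 0.01 per step: the first difference is
# a nonzero constant, so the Allan variance sits at d^2/2, while the second
# difference vanishes and the Hadamard variance is (numerically) zero.
drift = [0.01 * k for k in range(100)]
avar = allan_variance(drift)     # ~ 5e-05 (= d^2 / 2)
hvar = hadamard_variance(drift)  # ~ 0.0
```

This drift insensitivity is exactly why the abstract argues for the Hadamard statistic when characterizing rubidium standards with non-trivial aging.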
A Computer Program to Determine Reliability Using Analysis of Variance
ERIC Educational Resources Information Center
Burns, Edward
1976-01-01
A computer program, written in Fortran IV, is described which assesses reliability by using analysis of variance. It produces a complete analysis of variance table in addition to reliability coefficients for unadjusted and adjusted data as well as the intraclass correlation for m subjects and n items. (Author)
Variance analysis. Part II, The use of computers.
Finkler, S A
1991-09-01
This is the second in a two-part series on variance analysis. In the first article (JONA, July/August 1991), the author discussed flexible budgeting, including the calculation of price, quantity, volume, and acuity variances. In this second article, the author focuses on the use of computers by nurse managers to aid in the process of calculating, understanding, and justifying variances. PMID:1919788
Determining Sample Sizes for Precise Contrast Analysis with Heterogeneous Variances
ERIC Educational Resources Information Center
Jan, Show-Li; Shieh, Gwowen
2014-01-01
The analysis of variance (ANOVA) is one of the most frequently used statistical analyses in practical applications. Accordingly, the single and multiple comparison procedures are frequently applied to assess the differences among mean effects. However, the underlying assumption of homogeneous variances may not always be tenable. This study…
Cyclostationary analysis with logarithmic variance stabilisation
NASA Astrophysics Data System (ADS)
Borghesani, Pietro; Shahriar, Md Rifat
2016-03-01
Second order cyclostationary (CS2) components in vibration or acoustic emission signals are typical symptoms of a wide variety of faults in rotating and alternating mechanical systems. The square envelope spectrum (SES), obtained via Hilbert transform of the original signal, is at the basis of the most common indicators used for detection of CS2 components. It has been shown that the SES is equivalent to an autocorrelation of the signal's discrete Fourier transform, and that CS2 components are a cause of high correlations in the frequency domain of the signal, thus resulting in peaks in the SES. Statistical tests have been proposed to determine if peaks in the SES are likely to belong to a normal variability in the signal or if they are proper symptoms of CS2 components. Despite the need for automated fault recognition and the theoretical soundness of these tests, this approach to machine diagnostics has been mostly neglected in industrial applications. In fact, in a series of experimental applications, even with proper pre-whitening steps, it has been found that healthy machines might produce high spectral correlations and therefore result in a highly biased SES distribution which might cause a series of false positives. In this paper a new envelope spectrum is defined, with the theoretical intent of rendering the hypothesis test variance-free. This newly proposed indicator will prove unbiased in case of multiple CS2 sources of spectral correlation, thus reducing the risk of false alarms.
Meta-analysis of ratios of sample variances.
Prendergast, Luke A; Staudte, Robert G
2016-05-20
When conducting a meta-analysis of standardized mean differences (SMDs), it is common to use Cohen's d, or its variants, which require equal variances in the two arms of each study. While interpretation of these SMDs is simple, this alone should not be used as a justification for assuming equal variances. Until now, researchers have either used an F-test for each individual study or conveniently ignored such tools altogether. In this paper, we propose a meta-analysis of ratios of sample variances to assess whether the equality-of-variances assumption is justified prior to a meta-analysis of SMDs. Quantile-quantile plots, an omnibus test for equal variances, or an overall meta-estimate of the ratio of variances can all be used to formally justify the use of less common methods when evidence of unequal variances is found. The methods in this paper are simple to implement, and the validity of the approaches is reinforced by simulation studies and an application to a real data set. Copyright © 2016 John Wiley & Sons, Ltd. PMID:27062644
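An overall meta-estimate of the variance ratio can be sketched in a few lines. This is a hedged illustration of the idea, not the paper's exact estimator: pool per-study log variance ratios by inverse-variance weighting, using the large-sample approximation Var(log(s1²/s2²)) ≈ 2/(n1−1) + 2/(n2−1).

```python
import math

# Hedged sketch (not the paper's exact machinery): combine per-study log
# variance ratios by inverse-variance weighting. The large-sample variance
# of log(s1^2 / s2^2) is approximately 2/(n1 - 1) + 2/(n2 - 1).
def pooled_log_variance_ratio(studies):
    # studies: iterable of (s1_sq, n1, s2_sq, n2) tuples
    num = den = 0.0
    for s1_sq, n1, s2_sq, n2 in studies:
        w = 1.0 / (2.0 / (n1 - 1) + 2.0 / (n2 - 1))
        num += w * math.log(s1_sq / s2_sq)
        den += w
    return num / den, math.sqrt(1.0 / den)  # pooled log-ratio and its SE

# A pooled estimate near 0 (ratio near 1) is consistent with equal variances.
est, se = pooled_log_variance_ratio([(4.1, 30, 3.9, 28), (5.0, 50, 5.2, 45)])
```

A pooled confidence interval `est ± 1.96 * se` that covers 0 would leave the equal-variances assumption unchallenged before proceeding to the SMD meta-analysis.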
Analysis of Variance: What Is Your Statistical Software Actually Doing?
ERIC Educational Resources Information Center
Li, Jian; Lomax, Richard G.
2011-01-01
Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs, mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…
On variance estimate for covariate adjustment by propensity score analysis.
Zou, Baiming; Zou, Fei; Shuster, Jonathan J; Tighe, Patrick J; Koch, Gary G; Zhou, Haibo
2016-09-10
Propensity score (PS) methods have been used extensively to adjust for confounding factors in the statistical analysis of observational data in comparative effectiveness research. There are four major PS-based adjustment approaches: PS matching, PS stratification, covariate adjustment by PS, and PS-based inverse probability weighting. Though covariate adjustment by PS is one of the most frequently used PS-based methods in clinical research, the conventional variance estimation of the treatment effects estimate under covariate adjustment by PS is biased. As Stampf et al. have shown, this bias in variance estimation is likely to lead to invalid statistical inference and could result in erroneous public health conclusions (e.g., food and drug safety and adverse events surveillance). To address this issue, we propose a two-stage analytic procedure to develop a valid variance estimator for the covariate adjustment by PS analysis strategy. We also carry out a simple empirical bootstrap resampling scheme. Both proposed procedures are implemented in an R function for public use. Extensive simulation results demonstrate the bias in the conventional variance estimator and show that both proposed variance estimators offer valid estimates for the true variance, and they are robust to complex confounding structures. The proposed methods are illustrated for a post-surgery pain study. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26999553
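The empirical bootstrap the abstract mentions follows the generic resampling recipe. A sketch of that recipe only, under the assumption that it is applied to whatever treatment-effect statistic the analysis produces (this is not the authors' R function):

```python
import random

# Generic nonparametric bootstrap sketch (the idea only, not the authors' R
# implementation): resample units with replacement and recompute the
# statistic to obtain an empirical variance estimate.
def bootstrap_variance(data, statistic, n_boot=2000, seed=42):
    rng = random.Random(seed)
    reps = [statistic([rng.choice(data) for _ in data]) for _ in range(n_boot)]
    mean = sum(reps) / n_boot
    return sum((r - mean) ** 2 for r in reps) / (n_boot - 1)

# Sanity check on a sample mean: the bootstrap variance should land near
# the plug-in value sigma_hat^2 / n (here about 0.66).
data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
var_hat = bootstrap_variance(data, lambda s: sum(s) / len(s))
```

For PS-adjusted effects, each bootstrap replicate would refit the propensity model on the resampled data before recomputing the effect, so that the uncertainty from PS estimation is propagated into the variance.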
Analysis of Variance Components for Genetic Markers with Unphased Genotypes
Wang, Tao
2016-01-01
An ANOVA type general multi-allele (GMA) model was proposed in Wang (2014) on analysis of variance components for quantitative trait loci or genetic markers with phased or unphased genotypes. In this study, by applying the GMA model, we further examine estimation of the genetic variance components for genetic markers with unphased genotypes based on a random sample from a study population. In one locus and two loci cases, we first derive the least square estimates (LSE) of model parameters in fitting the GMA model. Then we construct estimators of the genetic variance components for one marker locus in a Hardy-Weinberg disequilibrium population and two marker loci in an equilibrium population. Meanwhile, we explore the difference between the classical general linear model (GLM) and GMA based approaches in association analysis of genetic markers with quantitative traits. We show that the GMA model can retain the same partition on the genetic variance components as the traditional Fisher's ANOVA model, while the GLM cannot. We clarify that the standard F-statistics based on the partial reductions in sums of squares from GLM for testing the fixed allelic effects could be inadequate for testing the existence of the variance component when allelic interactions are present. We point out that the GMA model can reduce the confounding between the allelic effects and allelic interactions at least for independent alleles. As a result, the GMA model could be more beneficial than GLM for detecting allelic interactions. PMID:27468297
Intuitive Analysis of Variance-- A Formative Assessment Approach
ERIC Educational Resources Information Center
Trumpower, David
2013-01-01
This article describes an assessment activity that can show students how much they intuitively understand about statistics, but also alert them to common misunderstandings. How the activity can be used formatively to help improve students' conceptual understanding of analysis of variance is discussed. (Contains 1 figure and 1 table.)
Strength of Relationship in Multivariate Analysis of Variance.
ERIC Educational Resources Information Center
Smith, I. Leon
Methods for the calculation of eta coefficient, or correlation ratio, squared have recently been presented for examining the strength of relationship in univariate analysis of variance. This paper extends them to the multivariate case in which the effects of independent variables may be examined in relation to two or more dependent variables, and…
Variance estimation for radiation analysis and multi-sensor fusion.
Mitchell, Dean James
2010-09-01
Variance estimates that are used in the analysis of radiation measurements must represent all of the measurement and computational uncertainties in order to obtain accurate parameter and uncertainty estimates. This report describes an approach for estimating components of the variance associated with both statistical and computational uncertainties. A multi-sensor fusion method is presented that renders parameter estimates for one-dimensional source models based on input from different types of sensors. Data obtained with multiple types of sensors improve the accuracy of the parameter estimates, and inconsistencies in measurements are also reflected in the uncertainties for the estimated parameter. Specific analysis examples are presented that incorporate a single gross neutron measurement with gamma-ray spectra that contain thousands of channels. The parameter estimation approach is tolerant of computational errors associated with detector response functions and source model approximations.
Analysis of Variance in the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
Deloach, Richard
2010-01-01
This paper is a tutorial introduction to the analysis of variance (ANOVA), intended as a reference for aerospace researchers who are being introduced to the analytical methods of the Modern Design of Experiments (MDOE), or who may have other opportunities to apply this method. One-way and two-way fixed-effects ANOVA, as well as random effects ANOVA, are illustrated in practical terms that will be familiar to most practicing aerospace researchers.
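The one-way fixed-effects case the tutorial opens with reduces to a short computation. A minimal illustrative sketch (ours, not the paper's code):

```python
# Minimal one-way fixed-effects ANOVA sketch (illustrative, not the paper's
# code): partition total variation into between-group and within-group sums
# of squares and form the F ratio.
def one_way_anova(groups):
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    # F statistic with (k - 1, n - k) degrees of freedom
    return (ss_between / (k - 1)) / (ss_within / (n - k))

F = one_way_anova([[10.0, 12.0, 11.0], [14.0, 15.0, 16.0], [20.0, 19.0, 21.0]])
# F is about 61, far into the rejection region for F(2, 6)
```

A large F indicates that between-group variation dominates within-group (residual) variation, the core comparison behind every ANOVA design the paper surveys.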
Variance reduction in Monte Carlo analysis of rarefied gas diffusion
NASA Technical Reports Server (NTRS)
Perlmutter, M.
1972-01-01
The present analysis uses the Monte Carlo method to solve the problem of rarefied diffusion between parallel walls. The diffusing molecules are evaporated or emitted from one of two parallel walls and diffused through another molecular species. The analysis treats the diffusing molecule as undergoing a Markov random walk and the local macroscopic properties are found as the expected value of the random variable, the random walk payoff. By biasing the transition probabilities and changing the collision payoffs the expected Markov walk payoff is retained but its variance is reduced so that the M. C. result has a much smaller error.
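The biasing idea (change the sampling distribution, reweight the payoff so the expectation is preserved while the variance shrinks) can be shown with an importance-sampling toy example. This is only an analogue of the technique, not the paper's Markov random-walk model:

```python
import random

# Toy illustration of biased sampling with payoff adjustment (an
# importance-sampling analogue, not the paper's random-walk model): the
# expected value is unchanged but the estimator's variance is reduced.
def naive_estimate(rng, n):
    # Rare event P(U < 0.01) for U ~ Uniform(0, 1), scored 0/1 per sample
    return sum(1.0 for _ in range(n) if rng.random() < 0.01) / n

def biased_estimate(rng, n):
    # Sample from Uniform(0, 0.1) (density 10x higher near the event) and
    # divide each payoff by 10 so the estimator stays unbiased.
    hits = sum(1.0 for _ in range(n) if rng.random() * 0.1 < 0.01)
    return 0.1 * hits / n

rng = random.Random(0)
est = biased_estimate(rng, 10_000)  # close to 0.01, with markedly smaller variance
```

Per sample, the naive 0/1 payoff has variance ≈ 0.0099 while the biased payoff (0.1 with probability 0.1) has variance 0.0009, roughly an elevenfold reduction for the same expected value.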
Two-dimensional finite-element temperature variance analysis
NASA Technical Reports Server (NTRS)
Heuser, J. S.
1972-01-01
The finite element method is extended to thermal analysis by forming a variance analysis of temperature results so that the sensitivity of predicted temperatures to uncertainties in input variables is determined. The temperature fields within a finite number of elements are described in terms of the temperatures of vertices, and the variational principle is used to minimize the integral equation describing thermal potential energy. A computer calculation yields the desired solution matrix of predicted temperatures and provides information about initial thermal parameters and their associated errors. Sample calculations show that all predicted temperatures are most affected by temperature values along fixed boundaries; more accurate specifications of these temperatures reduce errors in thermal calculations.
FMRI group analysis combining effect estimates and their variances.
Chen, Gang; Saad, Ziad S; Nath, Audrey R; Beauchamp, Michael S; Cox, Robert W
2012-03-01
Conventional functional magnetic resonance imaging (FMRI) group analysis makes two key assumptions that are not always justified. First, the data from each subject are condensed into a single number per voxel, under the assumption that the within-subject variance for the effect of interest is the same across all subjects or is negligible relative to the cross-subject variance. Second, it is assumed that all data values are drawn from the same Gaussian distribution with no outliers. We propose an approach that does not make such strong assumptions, and present a computationally efficient frequentist approach to FMRI group analysis, which we term mixed-effects multilevel analysis (MEMA), that incorporates both the variability across subjects and the precision estimate of each effect of interest from individual subject analyses. On average, the more accurate tests result in higher statistical power, especially when conventional variance assumptions do not hold, or in the presence of outliers. In addition, various heterogeneity measures are available with MEMA that may assist the investigator in further improving the modeling. Our method allows group effect t-tests and comparisons among conditions and among groups. In addition, it has the capability to incorporate subject-specific covariates such as age, IQ, or behavioral data. Simulations were performed to illustrate power comparisons and the capability of controlling type I errors among various significance testing methods, and the results indicated that the testing statistic we adopted struck a good balance between power gain and type I error control. Our approach is instantiated in an open-source, freely distributed program that may be used on any dataset stored in the Neuroimaging Informatics Technology Initiative (NIfTI) format. To date, the main impediment to more accurate testing that incorporates both within- and cross-subject variability has been the high computational cost; our efficient implementation makes this approach practical.
Analysis of variance tables based on experimental structure.
Brien, C J
1983-03-01
A stepwise procedure for obtaining the experimental structure for a particular experiment is presented together with rules for deriving the analysis-of-variance table from that structure. The procedure involves the division of the factors into groups and is essentially a generalization of the method of Nelder (1965, Proceedings of the Royal Society, Series A 283, 147-162; 1965, Proceedings of the Royal Society, Series A 283, 163-178), to what are termed 'multi-tiered' experiments. The proposed method is illustrated for a wine-tasting experiment. PMID:6871362
Analysis of variance of thematic mapping experiment data.
Rosenfield, G.H.
1981-01-01
As an example of the methodology, data from an experiment using three scales of land-use and land-cover mapping have been analyzed. The binomial proportions of correct interpretations have been analyzed untransformed and transformed by both the arcsine and the logit transformations. A weighted analysis of variance adjustment has been used. There is evidence of a significant difference among the three scales of mapping (1:24 000, 1:100 000 and 1:250 000) using the transformed data. Multiple range tests showed that all three scales are different for the arcsine transformed data. - from Author
Variance reduction in Monte Carlo analysis of rarefied gas diffusion.
NASA Technical Reports Server (NTRS)
Perlmutter, M.
1972-01-01
The problem of rarefied diffusion between parallel walls is solved using the Monte Carlo method. The diffusing molecules are evaporated or emitted from one of the two parallel walls and diffuse through another molecular species. The Monte Carlo analysis treats the diffusing molecule as undergoing a Markov random walk, and the local macroscopic properties are found as the expected value of the random variable, the random walk payoff. By biasing the transition probabilities and changing the collision payoffs, the expected Markov walk payoff is retained but its variance is reduced so that the Monte Carlo result has a much smaller error.
Analysis of variance of an underdetermined geodetic displacement problem
Darby, D.
1982-06-01
It has been suggested recently that point displacements in a free geodetic network traversing a strike-slip fault may be estimated from repeated surveys by minimizing only those displacement components normal to the strike. It is desirable to justify this procedure. We construct, from estimable quantities, a deformation parameter which is an F-statistic of the type occurring in the analysis of variance of linear models not of full rank. A test of its significance provides the criterion to justify the displacement solution. It is also interesting to study its behaviour as one varies the supposed strike of the fault. Justification of a displacement solution using data from a strike-slip fault is found, but not for data from a rift valley. The technique can be generalized to more complex patterns of deformation such as those expected near the end-zone of a fault in a dislocation model.
The use of analysis of variance procedures in biological studies
Williams, B.K.
1987-01-01
The analysis of variance (ANOVA) is widely used in biological studies, yet there remains considerable confusion among researchers about the interpretation of hypotheses being tested. Ambiguities arise when statistical designs are unbalanced, and in particular when not all combinations of design factors are represented in the data. This paper clarifies the relationship among hypothesis testing, statistical modelling and computing procedures in ANOVA for unbalanced data. A simple two-factor fixed effects design is used to illustrate three common parametrizations for ANOVA models, and some associations among these parametrizations are developed. Biologically meaningful hypotheses for main effects and interactions are given in terms of each parametrization, and procedures for testing the hypotheses are described. The standard statistical computing procedures in ANOVA are given along with their corresponding hypotheses. Throughout the development unbalanced designs are assumed and attention is given to problems that arise with missing cells.
Technology Transfer Automated Retrieval System (TEKTRAN)
UV spectral fingerprints, in combination with analysis of variance-principal components analysis (ANOVA-PCA), were used to identify sources of variance in 7 broccoli samples composed of two cultivars and seven different growing conditions (four levels of Se irrigation, organic farming, and convention...
ERIC Educational Resources Information Center
Luh, Wei-Ming; Guo, Jiin-Huarng
2011-01-01
Sample size determination is an important issue in planning research. In the context of one-way fixed-effect analysis of variance, the conventional sample size formula cannot be applied for the heterogeneous variance cases. This study discusses the sample size requirement for the Welch test in the one-way fixed-effect analysis of variance with…
Beyond the GUM: variance-based sensitivity analysis in metrology
NASA Astrophysics Data System (ADS)
Lira, I.
2016-07-01
Variance-based sensitivity analysis is a well established tool for evaluating the contribution of the uncertainties in the inputs to the uncertainty in the output of a general mathematical model. While the literature on this subject is quite extensive, it has not found widespread use in metrological applications. In this article we present a succinct review of the fundamentals of sensitivity analysis, in a form that should be useful to most people familiarized with the Guide to the Expression of Uncertainty in Measurement (GUM). Through two examples, it is shown that in linear measurement models, no new knowledge is gained by using sensitivity analysis that is not already available after the terms in the so-called ‘law of propagation of uncertainties’ have been computed. However, if the model behaves non-linearly in the neighbourhood of the best estimates of the input quantities—and if these quantities are assumed to be statistically independent—sensitivity analysis is definitely advantageous for gaining insight into how they can be ranked according to their importance in establishing the uncertainty of the measurand.
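The article's point about linear models can be made concrete: for Y = a1·X1 + a2·X2 with independent inputs, the first-order variance-based indices are just the normalized terms of the GUM uncertainty budget, (∂Y/∂Xi)²·Var(Xi)/Var(Y). A tiny check (our own helper, assuming this linear, independent-inputs case):

```python
# For a linear model Y = a1*X1 + a2*X2 with independent inputs, the
# first-order variance-based indices coincide with the normalized GUM-style
# budget u_i^2 = (dY/dX_i)^2 * Var(X_i), as the article argues.
def first_order_indices_linear(coeffs, variances):
    contributions = [a * a * v for a, v in zip(coeffs, variances)]
    total = sum(contributions)
    return [c / total for c in contributions]

S = first_order_indices_linear([2.0, 1.0], [1.0, 4.0])
# contributions 4.0 and 4.0: each input explains half the output variance
```

Only when the model is non-linear near the best estimates do the sensitivity indices carry information beyond this budget, which is the situation the article identifies as the payoff case.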
NASA Astrophysics Data System (ADS)
Coe, Rob; Dalrymple, Brent
More than 1000 friends, students, and colleagues from all over the country filled Stanford Memorial Chapel (Stanford, Calif.) on February 3, 1987, to join in “A Celebration of the Life of Allan Cox.” Allan died early on the morning of January 27 while bicycling, the sport he had come to love the most. Between pieces of his favorite music by Bach and Mozart, Stanford administrators and colleagues spoke in tribute of Allan's unique qualities as friend, scientist, teacher, and dean of the School of Earth Sciences. James Rosse, Vice President and Provost of Stanford University, struck a particularly resonant chord with his personal remarks: "Allan reached out to each person he knew with the warmth and attention that can only come from deep respect and affection for others. I never heard him speak ill of others, and I do not believe he was capable of doing anything that would harm another being. He cared too much to intrude where he was not wanted, but his curiosity about people and the loving care with which he approached them broke down reserve to create remarkable friendships. His enthusiasm and good humor made him a welcome guest in the hearts of the hundreds of students and colleagues who shared the opportunity of knowing Allan Cox as a person."
Local variance for multi-scale analysis in geomorphometry
Drăguţ, Lucian; Eisank, Clemens; Strasser, Thomas
2011-01-01
Increasing availability of high resolution Digital Elevation Models (DEMs) is leading to a paradigm shift regarding scale issues in geomorphometry, prompting new solutions to cope with multi-scale analysis and detection of characteristic scales. We tested the suitability of the local variance (LV) method, originally developed for image analysis, for multi-scale analysis in geomorphometry. The method consists of: 1) up-scaling land-surface parameters derived from a DEM; 2) calculating LV as the average standard deviation (SD) within a 3 × 3 moving window for each scale level; 3) calculating the rate of change of LV (ROC-LV) from one level to another, and 4) plotting values so obtained against scale levels. We interpreted peaks in the ROC-LV graphs as markers of scale levels where cells or segments match types of pattern elements characterized by (relatively) equal degrees of homogeneity. The proposed method has been applied to LiDAR DEMs in two test areas different in terms of roughness: low relief and mountainous, respectively. For each test area, scale levels for slope gradient, plan, and profile curvatures were produced at constant increments with either resampling (cell-based) or image segmentation (object-based). Visual assessment revealed homogeneous areas that convincingly associate into patterns of land-surface parameters well differentiated across scales. We found that the LV method performed better on scale levels generated through segmentation as compared to up-scaling through resampling. The results indicate that coupling multi-scale pattern analysis with delineation of morphometric primitives is possible. This approach could be further used for developing hierarchical classifications of landform elements. PMID:21779138
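Steps 2 and 3 of the enumerated procedure can be sketched directly; the helper names below are ours, and the grid stands in for one up-scaled land-surface parameter layer:

```python
import statistics

# Sketch of steps 2 and 3 of the procedure (helper names are ours): local
# variance (LV) as the mean standard deviation within a 3 x 3 moving window,
# then its rate of change (ROC-LV) between successive scale levels.
def local_variance(grid):
    rows, cols = len(grid), len(grid[0])
    sds = []
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            window = [grid[i + di][j + dj]
                      for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            sds.append(statistics.pstdev(window))
    return sum(sds) / len(sds)

def roc_lv(lv_levels):
    # percentage change of LV from one scale level to the next
    return [100.0 * (b - a) / a for a, b in zip(lv_levels, lv_levels[1:])]

# A homogeneous surface gives LV = 0; peaks in roc_lv(...) across scale
# levels mark the characteristic scales described in the abstract.
```

Computing `local_variance` at each scale level and plotting `roc_lv` against level reproduces step 4, where the peaks are read off as characteristic scales.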
A multi-variance analysis in the time domain
NASA Technical Reports Server (NTRS)
Walter, Todd
1993-01-01
Recently a new technique for characterizing the noise processes affecting oscillators was introduced. This technique minimizes the difference between the estimates of several different variances and their values as predicted by the standard power law model of noise. The method outlined makes two significant advancements: it uses exclusively time domain variances so that deterministic parameters such as linear frequency drift may be estimated, and it correctly fits the estimates using the chi-square distribution. These changes permit a more accurate fitting at long time intervals where there is the least information. This technique was applied to both simulated and real data with excellent results.
Edgar Allan Poe and neurology.
Teive, Hélio Afonso Ghizoni; Paola, Luciano de; Munhoz, Renato Puppi
2014-06-01
Edgar Allan Poe was one of the most celebrated writers of all time. He published several masterpieces, some of which include references to neurological diseases. Poe suffered from recurrent depression, suggesting a bipolar disorder, as well as alcohol and drug abuse, which in fact led to his death from complications related to alcoholism. Various hypotheses were put forward, including Wernicke's encephalopathy. PMID:24964115
Allan Bloom's Quarrel with History.
ERIC Educational Resources Information Center
Thompson, James
1988-01-01
Responds to Allan Bloom's "The Closing of the American Mind." Concludes that despite cranky comments about bourgeois culture, the focus of Bloom's attack is on historicism, which undercuts his nostalgic vision of a prosperous and just America. Condemns Bloom's exclusion of Blacks, Hispanics, and women from America's cultural heritage. (DMM)
Analysis and application of minimum variance discrete time system identification
NASA Technical Reports Server (NTRS)
Kaufman, H.; Kotob, S.
1975-01-01
An on-line minimum variance parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.
A new variance-based global sensitivity analysis technique
NASA Astrophysics Data System (ADS)
Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen
2013-11-01
A new set of variance-based sensitivity indices, called W-indices, is proposed. Similar to Sobol's indices, both main and total effect indices are defined. The W-main effect indices measure the average reduction of model output variance when the ranges of a set of inputs are reduced, and the total effect indices quantify the average residual variance when the ranges of the remaining inputs are reduced. Geometrical interpretations show that the W-indices gather the full information of the variance ratio function, whereas Sobol's indices reflect only the marginal information. Then the double-loop-repeated-set Monte Carlo (MC) (denoted as DLRS MC) procedure, the double-loop-single-set MC (denoted as DLSS MC) procedure and the model emulation procedure are introduced for estimating the W-indices. It is shown that the DLRS MC procedure is suitable for computing all the W-indices despite its high computational cost. The DLSS MC procedure is computationally efficient, but it is applicable only for computing low-order indices. The model emulation is able to estimate all the W-indices with low computational cost as long as the model behavior is correctly captured by the emulator. The Ishigami function, a modified Sobol's function and two engineering models are utilized for comparing the W- and Sobol's indices and verifying the efficiency and convergence of the three numerical methods. Results show that, even for an additive model, the W-total effect index of one input may be significantly larger than its W-main effect index. This indicates that there may exist interaction effects among the inputs of an additive model when their distribution ranges are reduced.
Partitioning Predicted Variance into Constituent Parts: A Primer on Regression Commonality Analysis.
ERIC Educational Resources Information Center
Amado, Alfred J.
Commonality analysis is a method of decomposing the R squared in a multiple regression analysis into the proportion of explained variance of the dependent variable associated with each independent variable uniquely and the proportion of explained variance associated with the common effects of one or more independent variables in various…
Analysis and application of minimum variance discrete time system identification
NASA Technical Reports Server (NTRS)
Kotob, S.; Kaufman, H.
1976-01-01
An on-line minimum variance parameter identifier was developed which embodies both accuracy and computational efficiency. The new formulation resulted in a linear estimation problem with both additive and multiplicative noise. The resulting filter is shown to utilize both the covariance of the parameter vector itself and the covariance of the error in identification. It is proven that the identification filter is mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.
Analysis and application of minimum variance discrete linear system identification
NASA Technical Reports Server (NTRS)
Kotob, S.; Kaufman, H.
1977-01-01
An on-line minimum variance (MV) parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise (AMN). The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean-square convergent and mean-square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.
The Queensland "New Basics": An Interview with Allan Luke.
ERIC Educational Resources Information Center
Hunter, Lisa
2001-01-01
Presents an interview with Allan Luke, current editor of "The Journal of Adolescent and Adult Literacy," and Deputy Director General of Education for Queensland. Discusses several reform projects--Education 2010 (a futures-oriented analysis and philosophy for Queensland Schools); The New Basics (a new curriculum/pedagogy/assessment framework);…
An Analysis of Variance Approach for the Estimation of Response Time Distributions in Tests
ERIC Educational Resources Information Center
Attali, Yigal
2010-01-01
Generalizability theory and analysis of variance methods are employed, together with the concept of objective time pressure, to estimate response time distributions and the degree of time pressure in timed tests. By estimating response time variance components due to person, item, and their interaction, and fixed effects due to item types and…
Meta-analysis with missing study-level sample variance data.
Chowdhry, Amit K; Dworkin, Robert H; McDermott, Michael P
2016-07-30
We consider a study-level meta-analysis with a normally distributed outcome variable and possibly unequal study-level variances, where the object of inference is the difference in means between a treatment and control group. A common complication in such an analysis is missing sample variances for some studies. A frequently used approach is to impute the weighted (by sample size) mean of the observed variances (mean imputation). Another approach is to include only those studies with variances reported (complete case analysis). Both mean imputation and complete case analysis are only valid under the missing-completely-at-random assumption, and even then the inverse variance weights produced are not necessarily optimal. We propose a multiple imputation method employing gamma meta-regression to impute the missing sample variances. Our method takes advantage of study-level covariates that may be used to provide information about the missing data. Through simulation studies, we show that multiple imputation, when the imputation model is correctly specified, is superior to competing methods in terms of confidence interval coverage probability and type I error probability when testing a specified group difference. Finally, we describe a similar approach to handling missing variances in cross-over studies. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26888093
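For context, the mean-imputation baseline that the paper improves upon takes only a few lines; the study-level values below are hypothetical:

```python
import numpy as np

# Hypothetical study-level data: mean differences, sample sizes, and
# sample variances (NaN marks a study that did not report its variance).
effects   = np.array([0.30, 0.45, 0.10, 0.25])
n         = np.array([40, 60, 30, 50])
variances = np.array([0.8, np.nan, 1.1, np.nan])

# Mean imputation: replace missing variances with the sample-size-weighted
# mean of the observed ones (the simple approach criticized in the paper).
obs = ~np.isnan(variances)
imputed = variances.copy()
imputed[~obs] = np.average(variances[obs], weights=n[obs])

# Fixed-effect inverse-variance pooling; the variance of each study's mean
# difference is taken as (study variance)/n for illustration.
w = n / imputed
pooled = np.sum(w * effects) / np.sum(w)
```

The multiple-imputation approach of the paper replaces the single imputed value with draws from a gamma meta-regression, so the uncertainty in the missing variances propagates into the pooled estimate.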
On the Relations among Regular, Equal Unique Variances, and Image Factor Analysis Models.
ERIC Educational Resources Information Center
Hayashi, Kentaro; Bentler, Peter M.
2000-01-01
Investigated the conditions under which the matrix of factor loadings from the factor analysis model with equal unique variances will give a good approximation to the matrix of factor loadings from the regular factor analysis model. Extends the results to the image factor analysis model. Discusses implications for practice. (SLD)
A Bayesian Solution for Two-Way Analysis of Variance. ACT Technical Bulletin No. 8.
ERIC Educational Resources Information Center
Lindley, Dennis V.
The standard statistical analysis of data classified in two ways (say into rows and columns) is through an analysis of variance that splits the total variation of the data into the main effect of rows, the main effect of columns, and the interaction between rows and columns. This paper presents an alternative Bayesian analysis of the same…
The Parabolic Variance (PVAR): A Wavelet Variance Based on the Least-Square Fit.
Vernotte, Francois; Lenczner, Michel; Bourgeois, Pierre-Yves; Rubiola, Enrico
2016-04-01
This paper introduces the parabolic variance (PVAR), a wavelet variance similar to the Allan variance (AVAR), based on the linear regression (LR) of phase data. The companion article arXiv:1506.05009 [physics.ins-det] details the Ω frequency counter, which implements the LR estimate. The PVAR combines the advantages of AVAR and modified AVAR (MVAR). PVAR is good for long-term analysis because the wavelet spans over 2τ, the same as the AVAR wavelet, and good for short-term analysis because the response to white and flicker PM is 1/τ³ and 1/τ², the same as the MVAR. After setting the theoretical framework, we study the degrees of freedom and the confidence interval for the most common noise types. Then, we focus on the detection of a weak noise process at the transition-or corner-where a faster process rolls off. This new perspective raises the question of which variance detects the weak process with the shortest data record. Our simulations show that PVAR is a fortunate tradeoff. PVAR is superior to MVAR in all cases, exhibits the best ability to divide between fast noise phenomena (up to flicker FM), and is almost as good as AVAR for the detection of random walk and drift. PMID:26571523
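The τ-dependences quoted above can be checked numerically with the standard overlapping AVAR estimator for phase data (a textbook formula, not code from the paper); for white PM, the Allan deviation falls as 1/τ:

```python
import numpy as np

def avar_overlapping(x, m, tau0=1.0):
    """Overlapping Allan variance from phase data x (seconds), sampled at tau0,
    evaluated at averaging time tau = m * tau0."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    d2 = x[2*m:] - 2.0 * x[m:n-m] + x[:n-2*m]   # second differences at lag m
    return np.mean(d2**2) / (2.0 * (m * tau0)**2)

# White PM phase noise: AVAR(tau) = 3*sigma_x^2/tau^2, so ADEV ~ sqrt(3)*sigma_x/tau
rng = np.random.default_rng(1)
x = rng.normal(0.0, 1e-9, 100000)               # 1 ns RMS white phase noise
adev1  = np.sqrt(avar_overlapping(x, 1))        # ~ sqrt(3) * 1e-9
adev10 = np.sqrt(avar_overlapping(x, 10))       # ~ ten times smaller
```

MVAR and PVAR were introduced precisely because this 1/τ² AVAR response cannot separate white PM from flicker PM; their 1/τ³ white-PM response can.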
NASA Astrophysics Data System (ADS)
O'Connor, John D.; Hixson, Jonathan; McKnight, Patrick; Peterson, Matthew S.; Parasuraman, Raja
2010-04-01
Night Vision and Electronic Sensors Directorate (NVESD) Modeling and Simulation Division (MSD) sensor models, such as NV Therm IP, are developed through perception experiments that investigate phenomena associated with sensor performance (e.g. sampling, noise, sensitivity). A standardized laboratory perception testing method developed in the mid-1990s has been responsible for advances in sensor modeling that are supported by field sensor performance experiments [1]. The number of participants required to yield dependable results for these experiments could not be estimated because the variance in performance due to participant differences was not known. NVESD and George Mason University (GMU) scientists measured the contribution of participant variance within the overall experimental variance for 22 individuals each exposed to 1008 stimuli. Results of the analysis indicate that the total participant contribution to overall experimental variance was between 1% and 2%.
Edgar Allan Poe's Physical Cosmology
NASA Astrophysics Data System (ADS)
Cappi, Alberto
1994-06-01
In this paper I describe the scientific content of Eureka, the prose poem written by Edgar Allan Poe in 1848. In that work, starting from metaphysical assumptions, Poe claims that the Universe is finite in an infinite Space, and that it originated from a primordial Particle, whose fragmentation under the action of a repulsive force caused a diffusion of atoms in space. I will show that his subsequently collapsing universe represents a scientifically acceptable Newtonian model. In the framework of his evolving universe, Poe makes use of contemporary astronomical knowledge, deriving modern concepts such as a primordial atomic state of the universe and a common epoch of galaxy formation. Harrison found in Eureka the first, qualitative solution of Olbers' paradox; I show that Poe also applies in a modern way the anthropic principle, trying to explain why the Universe is so large.
Luthria, Devanand L.; Mukhopadhyay, Sudarsan; Robbins, Rebecca J.; Finley, John W.; Banuelos, Gary S.; Harnly, James M.
2013-01-01
UV spectral fingerprints, in combination with analysis of variance-principal components analysis (ANOVA-PCA), can differentiate between cultivars and growing conditions (or treatments) and can be used to identify sources of variance. Broccoli samples, composed of two cultivars, were grown under seven different conditions or treatments (four levels of Se-enriched irrigation waters, organic farming, and conventional farming with 100 and 80% irrigation based on crop evaporation and transpiration rate). Freeze-dried powdered samples were extracted with methanol–water (60:40, v/v) and analyzed with no prior separation. Spectral fingerprints were acquired for the UV region (220–380 nm) using a 50-fold dilution of the extract. ANOVA-PCA was used to construct subset matrices that permitted easy verification of the hypothesis that cultivar and treatment contributed to a difference in the chemical expression of the broccoli. The sums of the squares of the same matrices were used to show that cultivar, treatment, and analytical repeatability contributed 30.5, 68.3, and 1.2% of the variance, respectively. PMID:18572954
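The ANOVA step of ANOVA-PCA partitions total sums of squares by design factor before any PCA is applied. A toy two-factor partition with made-up numbers (the real analysis does this per wavelength across the full fingerprint):

```python
import numpy as np

# Synthetic stand-in for the paper's design: 2 cultivars x 7 treatments,
# 3 replicates, a single spectral intensity per sample (hypothetical values).
rng = np.random.default_rng(2)
cultivar_eff  = np.array([-1.0, 1.0])                    # 2 levels
treatment_eff = np.linspace(-2.0, 2.0, 7)                # 7 levels
y = (cultivar_eff[:, None, None] + treatment_eff[None, :, None]
     + rng.normal(0, 0.2, (2, 7, 3)))                    # + repeatability noise

grand = y.mean()
ss_total     = ((y - grand)**2).sum()
ss_cultivar  = 7 * 3 * ((y.mean(axis=(1, 2)) - grand)**2).sum()
ss_treatment = 2 * 3 * ((y.mean(axis=(0, 2)) - grand)**2).sum()
pct_cultivar  = 100 * ss_cultivar / ss_total
pct_treatment = 100 * ss_treatment / ss_total
```

Dividing each factor's sum of squares by the total gives percentage contributions of the same kind the abstract reports (30.5% cultivar, 68.3% treatment, 1.2% repeatability for the broccoli data).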
ERIC Educational Resources Information Center
Draper, John F., Jr.
The applicability of the Analysis of Variance, ANOVA, procedures to the analysis of dichotomous repeated measure data is described. The design models for which data were simulated in this investigation were chosen to represent simple cases of two experimental situations: situation one, in which subjects' responses to a single randomly selected set…
Analysis of Variance with Summary Statistics in Microsoft® Excel®
ERIC Educational Resources Information Center
Larson, David A.; Hsu, Ko-Cheng
2010-01-01
Students regularly are asked to solve Single Factor Analysis of Variance problems given only the sample summary statistics (number of observations per category, category means, and corresponding category standard deviations). Most undergraduate students today use Excel for data analysis of this type. However, Excel, like all other statistical…
Decomposing genomic variance using information from GWA, GWE and eQTL analysis.
Ehsani, A; Janss, L; Pomp, D; Sørensen, P
2016-04-01
A commonly used procedure in genome-wide association (GWA), genome-wide expression (GWE) and expression quantitative trait locus (eQTL) analyses is based on a bottom-up experimental approach that attempts to individually associate molecular variants with complex traits. Top-down modeling of the entire set of genomic data and partitioning of the overall variance into subcomponents may provide further insight into the genetic basis of complex traits. To test this approach, we performed a whole-genome variance components analysis and partitioned the genomic variance using information from GWA, GWE and eQTL analyses of growth-related traits in a mouse F2 population. We characterized the mouse trait genetic architecture by ordering single nucleotide polymorphisms (SNPs) based on their P-values and studying the areas under the curve (AUCs). The observed traits were found to have a genomic variance profile that differed significantly from that expected of a trait under an infinitesimal model. This was particularly true for both body weight and body fat, for which the AUCs were much higher than that for glucose. In addition, SNPs with a high degree of trait-specific regulatory potential (SNPs associated with a subset of transcripts significantly associated with a specific trait) explained a larger proportion of the genomic variance than did SNPs with high overall regulatory potential (SNPs associated with transcripts in a traditional eQTL analysis). We introduced AUC measures of genomic variance profiles that can be used to quantify the relative importance of SNPs as well as the degree of deviation of a trait's inheritance from an infinitesimal model. The shape of the curve aids global understanding of traits: the steeper the left-hand side of the curve, the fewer the SNPs controlling most of the phenotypic variance. PMID:26678352
NASA Technical Reports Server (NTRS)
Wu, Andy
1995-01-01
Allan Deviation computations of linear frequency synthesizer systems have been reported previously using real-time simulations. Even though this takes less time than actual measurement, it is still very time consuming to compute the Allan Deviation for long sample times with the desired confidence level. Moreover, noises such as flicker phase noise and flicker frequency noise cannot be simulated precisely. The use of frequency domain techniques can overcome these drawbacks. In this paper the system error model of a fictitious linear frequency synthesizer is developed and its performance using a Cesium (Cs) atomic frequency standard (AFS) as a reference is evaluated using frequency domain techniques. For a linear timing system, the power spectral density at the system output can be computed from the known system transfer functions and the known power spectral densities of the input noise sources. The resulting power spectral density can then be used to compute the Allan Variance at the system output. Sensitivities of the Allan Variance at the system output to each of its independent input noises are obtained; these are valuable for design trade-offs and troubleshooting.
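The PSD-to-Allan-variance step described above uses the standard transfer-function integral; a numerical sketch (the generic textbook formula, not the paper's synthesizer model), checked against the known white-FM result σ_y²(τ) = h0/(2τ):

```python
import numpy as np

def avar_from_psd(S_y, tau, f_max=200.0, n=200001):
    # sigma_y^2(tau) = 2 * integral_0^fh S_y(f) * sin^4(pi f tau)/(pi f tau)^2 df
    f = np.linspace(1e-6, f_max, n)
    x = np.pi * f * tau
    integrand = 2.0 * S_y(f) * np.sin(x)**4 / x**2
    # trapezoidal rule, written out for NumPy-version independence
    return 0.5 * np.sum((integrand[:-1] + integrand[1:]) * np.diff(f))

# White FM noise, S_y(f) = h0, has the textbook result sigma_y^2(tau) = h0/(2*tau)
h0 = 1e-22
sigma2 = avar_from_psd(lambda f: h0, tau=1.0)
```

For a linear system, S_y at the output is |H(f)|² times the input PSD, so the same integral yields the output Allan variance once the transfer functions are known.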
Cost-variance analysis by DRGs; a technique for clinical budget analysis.
Voss, G B; Limpens, P G; Brans-Brabant, L J; van Ooij, A
1997-02-01
In this article it is shown how a cost accounting system based on DRGs can be valuable in determining changes in clinical practice and explaining alterations in expenditure patterns from one period to another. A cost-variance analysis is performed using data from the orthopedic department for the fiscal years 1993 and 1994. Differences between predicted and observed costs for medical care, such as diagnostic procedures, therapeutic procedures and nursing care, are decomposed into different components: changes in patient volume, case-mix differences, changes in resource use and variations in cost per procedure. Using a DRG cost accounting system proved to be a useful technique for clinical budget analysis. The results may stimulate discussions between hospital managers and medical professionals to explain cost variations, integrating medical and economic aspects of clinical health care. PMID:10165044
Teaching Principles of One-Way Analysis of Variance Using M&M's Candy
ERIC Educational Resources Information Center
Schwartz, Todd A.
2013-01-01
I present an active learning classroom exercise illustrating essential principles of one-way analysis of variance (ANOVA) methods. The exercise is easily conducted by the instructor and is instructive (as well as enjoyable) for the students. This is conducive for demonstrating many theoretical and practical issues related to ANOVA and lends itself…
A Primer on Multivariate Analysis of Variance (MANOVA) for Behavioral Scientists
ERIC Educational Resources Information Center
Warne, Russell T.
2014-01-01
Reviews of statistical procedures (e.g., Bangert & Baumberger, 2005; Kieffer, Reese, & Thompson, 2001; Warne, Lazo, Ramos, & Ritter, 2012) show that one of the most common multivariate statistical methods in psychological research is multivariate analysis of variance (MANOVA). However, MANOVA and its associated procedures are often not…
Weighting by Inverse Variance or by Sample Size in Random-Effects Meta-Analysis
ERIC Educational Resources Information Center
Marin-Martinez, Fulgencio; Sanchez-Meca, Julio
2010-01-01
Most of the statistical procedures in meta-analysis are based on the estimation of average effect sizes from a set of primary studies. The optimal weight for averaging a set of independent effect sizes is the inverse variance of each effect size, but in practice these weights have to be estimated, being affected by sampling error. When assuming a…
Estimating an Effect Size in One-Way Multivariate Analysis of Variance (MANOVA)
ERIC Educational Resources Information Center
Steyn, H. S., Jr.; Ellis, S. M.
2009-01-01
When two or more univariate population means are compared, the proportion of variation in the dependent variable accounted for by population group membership is eta-squared. This effect size can be generalized by using multivariate measures of association, based on the multivariate analysis of variance (MANOVA) statistics, to establish whether…
A Note on Noncentrality Parameters for Contrast Tests in a One-Way Analysis of Variance
ERIC Educational Resources Information Center
Liu, Xiaofeng Steven
2010-01-01
The noncentrality parameter for a contrast test in a one-way analysis of variance is based on the dot product of 2 vectors whose geometric meaning in a Euclidian space offers mnemonic hints about its constituents. Additionally, the noncentrality parameters for a set of orthogonal contrasts sum up to the noncentrality parameter for the omnibus "F"…
Missing Data and Multiple Imputation in the Context of Multivariate Analysis of Variance
ERIC Educational Resources Information Center
Finch, W. Holmes
2016-01-01
Multivariate analysis of variance (MANOVA) is widely used in educational research to compare means on multiple dependent variables across groups. Researchers faced with the problem of missing data often use multiple imputation of values in place of the missing observations. This study compares the performance of 2 methods for combining p values in…
A Demonstration of the Analysis of Variance Using Physical Movement and Space
ERIC Educational Resources Information Center
Owen, William J.; Siakaluk, Paul D.
2011-01-01
Classroom demonstrations help students better understand challenging concepts. This article introduces an activity that demonstrates the basic concepts involved in analysis of variance (ANOVA). Students who physically participated in the activity had a better understanding of ANOVA concepts (i.e., higher scores on an exam question answered 2…
Uncertainty analysis for 3D geological modeling using the Kriging variance
NASA Astrophysics Data System (ADS)
Choi, Yosoon; Choi, Younjung; Park, Sebeom; Um, Jeong-Gi
2014-05-01
The credible estimation of geological properties is critical in many geoscience fields, including geotechnical engineering, environmental engineering, mining engineering and petroleum engineering. Many interpolation techniques have been developed to estimate geological properties from limited sampling data such as borehole logs. Kriging is an interpolation technique that gives the best linear unbiased prediction of intermediate values. It also provides the Kriging variance, which quantifies the uncertainty of the Kriging estimates. This study provides a new method to analyze the uncertainty in 3D geological modeling using the Kriging variance. Cut-off values determined by the Kriging variance were used to effectively visualize the 3D geological models with different confidence levels. This presentation describes the method for uncertainty analysis and a case study that evaluates the amount of recoverable resources by considering the uncertainty.
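The Kriging variance that drives such a cut-off procedure has the closed form σ²(x₀) = C(0) − cᵀC⁻¹c for simple kriging. A 1D sketch with a hypothetical exponential covariance model (the study's 3D workflow applies the same quantity per grid cell):

```python
import numpy as np

def kriging_variance(x_data, x0, sill=1.0, corr_range=10.0):
    """Simple-kriging variance at x0 given 1D data locations x_data."""
    cov = lambda h: sill * np.exp(-np.abs(h) / corr_range)
    C  = cov(x_data[:, None] - x_data[None, :])  # data-to-data covariances
    c0 = cov(x_data - x0)                        # data-to-target covariances
    w  = np.linalg.solve(C, c0)                  # simple-kriging weights
    return sill - w @ c0                         # C(0) - c^T C^-1 c

boreholes = np.array([0.0, 5.0, 12.0])           # hypothetical log locations
near = kriging_variance(boreholes, 4.0)          # close to data: low variance
far  = kriging_variance(boreholes, 30.0)         # far from data: near the sill
```

Thresholding this variance field at chosen cut-offs is what lets a model be displayed at different confidence levels.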
Toward a more robust variance-based global sensitivity analysis of model outputs
Tong, C
2007-10-15
Global sensitivity analysis (GSA) measures the variation of a model output as a function of the variations of the model inputs given their ranges. In this paper we consider variance-based GSA methods that do not rely on certain assumptions about the model structure such as linearity or monotonicity. These variance-based methods decompose the output variance into terms of increasing dimensionality called 'sensitivity indices', first introduced by Sobol' [25]. Sobol' developed a method of estimating these sensitivity indices using Monte Carlo simulations. McKay [13] proposed an efficient method using replicated Latin hypercube sampling to compute the 'correlation ratios' or 'main effects', which have been shown to be equivalent to Sobol's first-order sensitivity indices. Practical issues with using these variance estimators are how to choose adequate sample sizes and how to assess the accuracy of the results. This paper proposes a modified McKay main effect method featuring an adaptive procedure for accuracy assessment and improvement. We also extend our adaptive technique to the computation of second-order sensitivity indices. Details of the proposed adaptive procedure as well as numerical results are included in this paper.
The Importance of Variance in Statistical Analysis: Don't Throw Out the Baby with the Bathwater.
ERIC Educational Resources Information Center
Peet, Martha W.
This paper analyzes what happens to the effect size of a given dataset when the variance is removed by categorization for the purpose of applying "OVA" methods (analysis of variance, analysis of covariance). The dataset is from a classic study by Holzinger and Swineford (1939) in which more than 20 ability tests were administered to 301 middle…
Gouyette, A
1984-01-01
A program for the HP-41 CV calculator with adapted printer is described for the analysis of variance of bioavailability data based upon the areas under the curve measured during a two-way cross-over pharmacokinetic study of two different drug formulations. The program can also perform the test of Westlake to compute the 95% confidence interval and determine if both formulations are bioequivalent. PMID:6735510
Non-destructive X-ray Computed Tomography (XCT) Analysis of Sediment Variance in Marine Cores
NASA Astrophysics Data System (ADS)
Oti, E.; Polyak, L. V.; Dipre, G.; Sawyer, D.; Cook, A.
2015-12-01
Benthic activity within marine sediments can alter the physical properties of the sediment as well as indicate nutrient flux and ocean temperatures. We examine burrowing features in sediment cores from the western Arctic Ocean collected during the 2005 Healy-Oden TransArctic Expedition (HOTRAX) and from the Gulf of Mexico Integrated Ocean Drilling Program (IODP) Expedition 308. While traditional methods for studying bioturbation require physical dissection of the cores, we assess burrowing using an X-ray computed tomography (XCT) scanner. XCT noninvasively images the sediment cores in three dimensions and produces density sensitive images suitable for quantitative analysis. XCT units are recorded as Hounsfield Units (HU), where -999 is air, 0 is water, and 4000-5000 would be a higher density mineral, such as pyrite. We rely on the fundamental assumption that sediments are deposited horizontally, and we analyze the variance over each flat-lying slice. The variance describes the spread of pixel values over a slice. When sediments are reworked, drawing higher and lower density matrix into a layer, the variance increases. Examples of this can be seen in two slices in core 19H-3A from Site U1324 of IODP Expedition 308. The first slice, located 165.6 meters below sea floor consists of relatively undisturbed sediment. Because of this, the majority of the sediment values fall between 1406 and 1497 HU, thus giving the slice a comparatively small variance of 819.7. The second slice, located 166.1 meters below sea floor, features a lower density sediment matrix disturbed by burrow tubes and the inclusion of a high density mineral. As a result, the Hounsfield Units have a larger variance of 1,197.5, which is a result of sediment matrix values that range from 1220 to 1260 HU, the high-density mineral value of 1920 HU and the burrow tubes that range from 1300 to 1410 HU. Analyzing this variance allows us to observe changes in the sediment matrix and more specifically capture
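The per-slice variance measure described above is straightforward to compute from a reconstructed volume. A synthetic sketch with hypothetical Hounsfield-like values (not the HOTRAX/IODP data):

```python
import numpy as np

# Synthetic XCT volume: undisturbed layers around ~1450 HU, plus one slice
# "bioturbated" by mixing in higher-density inclusions (hypothetical values).
rng = np.random.default_rng(3)
volume = rng.normal(1450, 25, size=(50, 64, 64))         # (slice, y, x)
volume[30].flat[:500] = rng.normal(1920, 10, size=500)   # burrow/mineral fill

# One variance per flat-lying slice; reworked slices stand out as spikes.
slice_variance = volume.var(axis=(1, 2))
disturbed = int(slice_variance.argmax())
```

Because the method assumes horizontal deposition, any mixing of higher- and lower-density matrix into a layer inflates that slice's variance relative to its neighbors, which is exactly the signal used to flag burrowed intervals.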
The Efficiency of Split Panel Designs in an Analysis of Variance Model
Liu, Xin; Wang, Wei-Guo; Liu, Hai-Jun
2016-01-01
We consider split panel design efficiency in analysis of variance models, that is, the determination of the cross-sections series optimal proportion in all samples, to minimize parametric best linear unbiased estimators of linear combination variances. An orthogonal matrix is constructed to obtain manageable expression of variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of interest and budget parameters. Additionally, the relative efficiency of an estimator based on the split panel to an estimator based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from split panel can be quite substantial. We further consider the efficiency of split panel design, given a budget, and transform it to a constrained nonlinear integer programming. Specifically, an efficient algorithm is designed to solve the constrained nonlinear integer programming. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm's efficiency with an empirical example concerning monthly consumer expenditure on food in 1985, in the Netherlands, and the efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447
Peng, Yichuan; Lord, Dominique; Zou, Yajie
2014-12-01
As one of the major analysis methods, statistical models play an important role in traffic safety analysis. They can be used for a wide variety of purposes, including establishing relationships between variables and understanding the characteristics of a system. The purpose of this paper is to document a new type of model that can help with the latter. This model is based on the Generalized Waring (GW) distribution. The GW model yields more information about the sources of the variance observed in datasets than other traditional models, such as the negative binomial (NB) model. In this regard, the GW model can separate the observed variability into three parts: (1) the randomness, which explains the model's uncertainty; (2) the proneness, which refers to the internal differences between entities or observations; and (3) the liability, which is defined as the variance caused by other external factors that are difficult to identify and have not been included as explanatory variables in the model. The study analyses were accomplished using two observed datasets to explore potential sources of variation. The results show that the GW model can provide meaningful information about sources of variance in crash data and also performs better than the NB model. PMID:25173723
Technology Transfer Automated Retrieval System (TEKTRAN)
Metabolite fingerprints, obtained with direct injection mass spectrometry (MS) with both positive and negative ionization, were used with analysis of variance-principal components analysis (ANOVA-PCA) to discriminate between cultivars and growing treatments of broccoli. The sample set consisted of ...
NASA Astrophysics Data System (ADS)
Jacobson, Gloria; Rella, Chris; Farinas, Alejandro
2014-05-01
Technological advancement of instrumentation in atmospheric and other geoscience disciplines over the past decade has led to a shift from discrete sample analysis to continuous, in-situ monitoring. Standard error analysis used for discrete measurements is not sufficient to assess and compare the error contributions of noise and drift from continuous-measurement instruments, and a different statistical analysis approach should be applied. The Allan standard deviation analysis technique developed for atomic clock stability assessment by David W. Allan [1] can be effectively and gainfully applied to continuous measurement instruments. As an example, P. Werle et al. have applied these techniques to signal averaging for atmospheric monitoring by Tunable Diode-Laser Absorption Spectroscopy (TDLAS) [2]. This presentation will build on, and translate, prior foundational publications to provide contextual definitions and guidelines for the practical application of this analysis technique to continuous scientific measurements. The specific example of a Picarro G2401 Cavity Ringdown Spectroscopy (CRDS) analyzer used for continuous atmospheric monitoring of CO2, CH4 and CO will be used to define the basic features of the Allan deviation, assess factors affecting the analysis, and explore the time-series to Allan deviation plot translation for different types of instrument noise (white noise, linear drift, and interpolated data). In addition, the application of the Allan deviation to optimize and predict the performance of different calibration schemes will be presented. Even though this presentation uses the specific example of the Picarro G2401 CRDS analyzer for atmospheric monitoring, the objective is to present the information such that it can be successfully applied to other instrument sets and disciplines. [1] D.W. Allan, "Statistics of Atomic Frequency Standards," Proc. IEEE, vol. 54, pp. 221-230, Feb 1966 [2] P. Werle, R. Mücke, F. Slemr, "The Limits
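A minimal non-overlapping Allan deviation, applied to simulated data, reproduces the behaviors described (white noise averaging down as 1/√τ, drift causing the long-τ rise); the values are illustrative, not from the G2401:

```python
import numpy as np

def allan_dev(y, m):
    """Non-overlapping Allan deviation of a time series y at an averaging
    length of m samples: sqrt( mean( (ybar_{k+1} - ybar_k)^2 ) / 2 )."""
    n = (len(y) // m) * m
    ybar = y[:n].reshape(-1, m).mean(axis=1)   # consecutive m-sample means
    return np.sqrt(np.mean(np.diff(ybar)**2) / 2.0)

rng = np.random.default_rng(4)
white = rng.normal(0, 1.0, 100000)             # pure white measurement noise
drifting = white + 2e-4 * np.arange(white.size)  # same noise + linear drift

# White noise: deviation falls as 1/sqrt(m). Drift: long-tau deviation rises.
w10, w1000 = allan_dev(white, 10), allan_dev(white, 1000)
d1000 = allan_dev(drifting, 1000)
```

Plotting allan_dev against m on log-log axes gives the Allan deviation plot discussed above; the minimum of the drifting curve marks the optimal averaging time before drift dominates, which is also what makes the technique useful for scheduling calibrations.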
A further analysis for the minimum-variance deconvolution filter performance
NASA Technical Reports Server (NTRS)
Chi, Chong-Yung
1987-01-01
Chi and Mendel (1984) analyzed the performance of minimum-variance deconvolution (MVD). In this correspondence, a further analysis of the performance of the MVD filter is presented. It is shown that the MVD filter performs like an inverse filter and a whitening filter as SNR goes to infinity, and like a matched filter as SNR goes to zero. The estimation error of the MVD filter is colored noise, but it becomes white when SNR goes to zero. This analysis also connects the error power-spectral density of the MVD filter with the spectrum of the causal-prediction error filter.
John A. Scigliano Interviews Allan B. Ellis.
ERIC Educational Resources Information Center
Scigliano, John A.
2000-01-01
This interview with Allan Ellis focuses on a history of computer applications in education. Highlights include work at the Harvard Graduate School of Education; the New England Education Data System; and efforts to create a computer-based distance learning and development program called ISVD (Information System for Vocational Decisions). (LRW)
The Curious Mind of Allan Bloom.
ERIC Educational Resources Information Center
Gardner, Martin
1988-01-01
This article reviews Allan Bloom's 1987 book, THE CLOSING OF THE AMERICAN MIND: HOW HIGHER EDUCATION HAS FAILED DEMOCRACY AND IMPOVERISHED THE SOULS OF TODAY'S CHILDREN. Compares Bloom's book with THE HIGHER LEARNING IN AMERICA, a 1930s book by Mortimer Adler and Robert Hutchins. (JDH)
The use of repeated measures analysis of variance for plaque and gingival indices.
Gunsolley, J C; Chinchilli, V M; Koertge, T E; Palcanis, K G; Sarbin, A G; Brooks, C N
1989-03-01
Clinical trials for anti-gingivitis and anti-plaque agents commonly use the mean of Silness and Löe plaque indices and Löe and Silness gingival indices as response variables. The aim of this report is to determine if data from anti-plaque and anti-gingivitis clinical trials using Silness and Löe plaque indices and Löe and Silness gingival indices satisfy the conditions necessary for the use of the univariate or multivariate approach to repeated measures. These conditions are multivariate normality, homogeneity of variance-covariance matrices, and, for the univariate approach, a type-H variance-covariance matrix. Data from 5 separate clinical trials representing a wide range in sample size, pretreatment mean gingival and plaque indices and treatment effects were used to test these conditions. Either the univariate or multivariate approach to repeated measures was found to be appropriate for both responses of the 5 clinical trials. Thus, means of Silness and Löe plaque indices and Löe and Silness gingival indices meet the necessary conditions for use of either the univariate or the multivariate approach to repeated measures. However, significant time-treatment interactions are a common occurrence in these types of clinical trials and must be evaluated carefully. The analyses in this study were carried out using SAS. Other mainframe statistical software packages and many microcomputer statistical software packages have routines to analyze repeated measures experiments with analysis of variance methods. However, some of the packages may omit the multivariate approach to repeated measures or may not include interactions between within-subject and between-subject effects. These packages should be used with caution. PMID:2723097
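One of the conditions mentioned above can be sketched numerically: compound symmetry (equal variances on the diagonal, equal covariances off it) is a sufficient condition for a type-H covariance matrix, and a rough screen is to inspect the spread of the sample covariance entries across time points. The data below are simulated, not from the clinical trials.

```python
import numpy as np

def compound_symmetry_distance(scores):
    # Sample covariance across time points; under compound symmetry the
    # diagonal entries are all equal and the off-diagonal entries are all
    # equal, so report the larger of the two spreads (0 = perfect CS).
    s = np.cov(scores, rowvar=False)
    diag = np.diag(s)
    off = s[~np.eye(s.shape[0], dtype=bool)]
    return max(np.ptp(diag), np.ptp(off))

rng = np.random.default_rng(0)
# Subjects x time points: a shared subject effect plus white noise has a
# compound-symmetric covariance in expectation.
subject_effect = rng.normal(0, 1, size=(2000, 1))
data = subject_effect + rng.normal(0, 0.5, size=(2000, 4))
spread = compound_symmetry_distance(data)
print(spread)
```

A formal decision would use Mauchly's test rather than this descriptive spread; the sketch only shows what the condition means.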
Allan-Herndon syndrome. I. Clinical studies.
Stevenson, R E; Goodman, H O; Schwartz, C E; Simensen, R J; McLean, W T; Herndon, C N
1990-01-01
A large family with X-linked mental retardation, originally reported in 1944 by Allan, Herndon, and Dudley, has been reinvestigated. Twenty-nine males have been affected in seven generations. Clinical features include severe mental retardation, dysarthria, ataxia, athetoid movements, muscle hypoplasia, and spastic paraplegia with hyperreflexia, clonus, and Babinski reflexes. The facies appear elongated with normal head circumference, bitemporal narrowing, and large, simple ears. Contractures develop at both small and large joints. Statural growth is normal and macroorchidism does not occur. Longevity is not impaired. High-resolution chromosomes, serum creatine kinase, and amino acids are normal. This condition, termed the Allan-Herndon syndrome, appears distinct from other X-linked disorders having mental retardation, muscle hypoplasia, and spastic paraplegia. PMID:2393019
The Cosmology of Edgar Allan Poe
NASA Astrophysics Data System (ADS)
Cappi, Alberto
2011-06-01
Eureka is a "prose poem" published in 1848, in which Edgar Allan Poe presents his original cosmology. While starting from metaphysical assumptions, Poe develops an evolving Newtonian model of the Universe which shows many analogies, by no means coincidental, with modern cosmology. Poe was well informed about astronomical and physical discoveries, and he was influenced by both contemporary science and ancient ideas. For these reasons, Eureka is a unique synthesis of metaphysics, art and science.
Analysis of variances of quasirapidities in collisions of gold nuclei with track-emulsion nuclei
Gulamov, K. G.; Zhokhova, S. I.; Lugovoi, V. V.; Navotny, V. S.; Saidkhanov, N. S.; Chudakov, V. M.
2012-08-15
A new method of analysis of variances was developed for studying n-particle correlations of quasirapidities in nucleus-nucleus collisions for a large constant number n of particles. Formulas that generalize the results of the respective analysis to various values of n were derived. Calculations on the basis of simple models indicate that the method is applicable, at least for n ≥ 100. Quasirapidity correlations statistically significant at a level of 36 standard deviations were discovered in collisions between gold nuclei and track-emulsion nuclei at an energy of 10.6 GeV per nucleon. The experimental data obtained in our present study are contrasted against the theory of nucleus-nucleus collisions.
Analysis of open-loop conical scan pointing error and variance estimators
NASA Technical Reports Server (NTRS)
Alvarez, L. S.
1993-01-01
General pointing error and variance estimators for an open-loop conical scan (conscan) system are derived and analyzed. The conscan algorithm is modeled as a weighted least-squares estimator whose inputs are samples of receiver carrier power and its associated measurement uncertainty. When the assumptions of constant measurement noise and zero pointing error estimation are applied, the variance equation is then strictly a function of the carrier power to uncertainty ratio and the operator selectable radius and period input to the algorithm. The performance equation is applied to a 34-m mirror-based beam-waveguide conscan system interfaced with the Block V Receiver Subsystem tracking a Ka-band (32-GHz) downlink. It is shown that for a carrier-to-noise power ratio greater than or equal to 30 dB-Hz, the conscan period for Ka-band operation may be chosen well below the current DSN minimum of 32 sec. The analysis presented forms the basis of future conscan work in both research and development as well as for the upcoming DSN antenna controller upgrade for the new DSS-24 34-m beam-waveguide antenna.
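The least-squares core of such a conscan estimator can be sketched as follows: under the abstract's assumption of constant measurement noise, the weights drop out and the pointing offset follows from an ordinary least-squares fit of the power modulation over one scan period. The offset, modulation gain, and noise level below are invented for illustration, not DSN values.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)   # one conscan period
ex_true, ey_true, p0, gain = 0.08, -0.03, 100.0, 2.0    # invented values

# Received carrier power modulated by the (unknown) pointing offset.
power = p0 * (1 + gain * (ex_true * np.cos(theta) + ey_true * np.sin(theta)))
power += rng.normal(0, 0.1, theta.size)                 # measurement noise

# With constant measurement noise the weights are uniform: plain least squares.
A = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
c0, cx, cy = np.linalg.lstsq(A, power, rcond=None)[0]
ex_hat, ey_hat = cx / (c0 * gain), cy / (c0 * gain)
print(ex_hat, ey_hat)
```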
Princess Marie Bonaparte, Edgar Allan Poe, and psychobiography.
Warner, S L
1991-01-01
Princess Marie Bonaparte was a colorful yet mysterious member of Freud's inner circle of psychoanalysis. In analysis with Freud beginning in 1925 (she was then 45 years old), she became a lay analyst and writer of many papers and books. Her most ambitious task was a 700-page psychobiography of Edgar Allan Poe that was first published in French in 1933. She was fascinated by Poe's gothic stories--with the return to life of dead persons and the eerie, unexpected turns of events. Her fascination with Poe can be traced to the similarity of their early traumatic life experiences. Bonaparte had lost her mother a month after her birth. Poe's father deserted the family when Edgar was two years old, and his mother died of tuberculosis when he was three. Poe's stories helped him to accommodate to these early traumatic losses. Bonaparte vicariously shared in Poe's loss and the fantasies of the return of the deceased parent in his stories. She was sensitive and empathetic to Poe's inner world because her inner world was similar. The result of this psychological fit between Poe and Bonaparte was her psychobiography, The Life and Works of Edgar Allan Poe. It was a milestone in psychobiography but limited in its psychological scope by its strong emphasis on early childhood trauma. Nevertheless it proved Bonaparte a bona fide creative psychoanalyst and not a dilettante propped up by her friendship with Freud. PMID:1744021
Mölder, Anna; Drury, Sarah; Costen, Nicholas; Hartshorne, Geraldine M; Czanner, Silvester
2015-02-01
Embryo selection in in vitro fertilization (IVF) treatment has traditionally been done manually using microscopy at intermittent time points during embryo development. Novel techniques have made it possible to monitor embryos using time lapse for long periods of time, and together with the reduced cost of data storage, this has opened the door to long-term time-lapse monitoring; large amounts of image material are now routinely gathered. However, the analysis is still to a large extent performed manually, and images are mostly used as qualitative reference. To make full use of the increased amount of microscopic image material, (semi)automated computer-aided tools are needed. An additional benefit of automation is the establishment of standardization tools for embryo selection and transfer, making decisions more transparent and less subjective. Another is the possibility to gather and analyze data in a high-throughput manner, gathering data from multiple clinics and increasing our knowledge of early human embryo development. In this study, the extraction of data to automatically select and track spatio-temporal events and features from sets of embryo images has been achieved using localized variance based on the distribution of image grey scale levels. A retrospective cohort study was performed using time-lapse imaging data derived from 39 human embryos from seven couples, covering the time from fertilization up to 6.3 days. The profile of localized variance has been used to characterize syngamy, mitotic division and stages of cleavage, compaction, and blastocoel formation. Prior to analysis, focal plane and embryo location were automatically detected, limiting precomputational user interaction to a calibration step and making the approach usable for automatic detection of a region of interest (ROI) regardless of the method of analysis. The results were validated against the opinion of clinical experts. © 2015 International Society for Advancement of Cytometry. PMID:25614363
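The localized-variance feature can be sketched with the standard moving-window identity Var = E[x²] − E[x]²; the image below is synthetic, and `scipy.ndimage.uniform_filter` stands in for whatever smoothing the authors actually used.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def localized_variance(img, size):
    # Var = E[x^2] - E[x]^2 inside each size x size neighbourhood.
    m = uniform_filter(img, size)
    m2 = uniform_filter(img * img, size)
    return np.clip(m2 - m * m, 0.0, None)   # clip tiny negative round-off

rng = np.random.default_rng(6)
img = np.zeros((64, 64))
img[24:40, 24:40] = rng.random((16, 16))    # textured "embryo" region

v = localized_variance(img, 5)
flat = v[:10, :10].max()                    # featureless background
busy = v[28:36, 28:36].min()                # inside the textured region
print(flat, busy)
```

High localized variance flags textured, changing regions (cell boundaries, fragmentation), which is the kind of signal the study tracks over time.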
Minimum variance imaging based on correlation analysis of Lamb wave signals.
Hua, Jiadong; Lin, Jing; Zeng, Liang; Luo, Zhi
2016-08-01
In Lamb wave imaging, MVDR (minimum variance distortionless response) is a promising approach for the detection and monitoring of large areas with a sparse transducer network. Previous studies of MVDR use signal amplitude as the input damage feature, and the imaging performance is closely related to the evaluation accuracy of the scattering characteristic. However, the scattering characteristic is highly dependent on damage parameters (e.g. type, orientation and size), which are unknown beforehand. The evaluation error can degrade imaging performance severely. In this study, a more reliable damage feature, the LSCC (local signal correlation coefficient), is established to replace signal amplitude. In comparison with signal amplitude, one attractive feature of the LSCC is its independence of damage parameters. Therefore, the LSCC model in the transducer network can be accurately evaluated, and the imaging performance is improved subsequently. Both theoretical analysis and experimental investigation are given to validate the effectiveness of the LSCC-based MVDR algorithm in improving imaging performance. PMID:27155349
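A plausible reading of the LSCC feature is a sliding-window Pearson correlation between a baseline and a current Lamb-wave signal: it stays near 1 where the signals agree and drops where a scatterer adds new wave content. The definition and the synthetic signals below are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def lscc(ref, cur, win):
    # Pearson correlation of the two signals inside each sliding window.
    out = np.ones(len(ref) - win + 1)
    for i in range(len(out)):
        out[i] = np.corrcoef(ref[i:i + win], cur[i:i + win])[0, 1]
    return out

t = np.linspace(0, 1, 400)
# Baseline: a windowed tone burst; current: same burst plus a late "scattered"
# wave packet standing in for damage.
ref = np.sin(40 * np.pi * t) * np.exp(-((t - 0.5) ** 2) / 0.01)
cur = ref.copy()
cur[260:300] += 0.5 * np.sin(60 * np.pi * t[260:300])

c = lscc(ref, cur, win=40)
print(c[:100].min(), c[230:280].min())
```

Because correlation is normalized, the feature is insensitive to overall amplitude, which is the property the abstract exploits to avoid modeling the scattering amplitude.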
Structural damage detection in an aeronautical panel using analysis of variance
NASA Astrophysics Data System (ADS)
Gonsalez, Camila Gianini; da Silva, Samuel; Brennan, Michael J.; Lopes Junior, Vicente
2015-02-01
This paper describes a procedure for structural health assessment based on one-way analysis of variance (ANOVA) together with Tukey's multiple comparison test, to determine whether the results are statistically significant. The feature indices are obtained from electromechanical impedance measurements using piezoceramic sensor/actuator patches bonded to the structure. Compared to the classical approach based on a simple change of the observed signals, using for example root mean square responses, the decision procedure in this paper involves a rigorous statistical test. Experimental tests were carried out on an aeronautical panel in the laboratory to validate the approach. In order to include uncontrolled variability in the dynamic responses, the measurements were taken over several days in different environmental conditions using all eight sensor/actuator patches. The damage was simulated by controlling the tightness and looseness of the bolts and was correctly diagnosed. The paper discusses the strengths and weaknesses of the approach in light of the experimental results.
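The decision procedure described above (one-way ANOVA followed by Tukey's multiple comparison test) can be sketched with SciPy; the damage-feature values are simulated stand-ins, not the panel measurements.

```python
import numpy as np
from scipy.stats import f_oneway, tukey_hsd

rng = np.random.default_rng(2)
# Hypothetical impedance-based damage indices for two structural states.
healthy = rng.normal(1.00, 0.05, 20)
loosened = rng.normal(1.30, 0.05, 20)
retightened = rng.normal(1.02, 0.05, 20)

f_stat, p = f_oneway(healthy, loosened, retightened)   # omnibus ANOVA
hsd = tukey_hsd(healthy, loosened, retightened)        # pairwise follow-up
print(p, hsd.pvalue[0, 1])
```

The ANOVA only says that *some* state differs; Tukey's test identifies *which* pairs differ while controlling the family-wise error rate, which is why the paper pairs the two.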
A VLBI variance-covariance analysis interactive computer program. M.S. Thesis
NASA Technical Reports Server (NTRS)
Bock, Y.
1980-01-01
An interactive computer program (in FORTRAN) for the variance-covariance analysis of VLBI experiments is presented for use in experiment planning, simulation studies and optimal design problems. The interactive mode is especially suited to these types of analyses, providing ease of operation as well as savings in time and cost. The geodetic parameters include baseline vector parameters and variations in polar motion and Earth rotation. A discussion of the theory on which the program is based provides an overview of the VLBI process, emphasizing the areas of interest to geodesy. Special emphasis is placed on the problem of determining correlations between simultaneous observations from a network of stations. A model suitable for covariance analyses is presented. Suggestions towards developing optimal observation schedules are included.
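The essence of such a variance-covariance analysis is that, for a linearized model y = Ax + noise, the parameter covariance σ²(AᵀA)⁻¹ depends only on the observation geometry, so candidate schedules can be compared before any data exist. The two "schedules" below are toy two-parameter geometries, not VLBI configurations.

```python
import numpy as np

def param_covariance(A, sigma=1.0):
    # For y = A x + iid noise, Cov(x_hat) = sigma^2 (A^T A)^{-1}.
    return sigma ** 2 * np.linalg.inv(A.T @ A)

# Clustered observation geometry vs. well-spread geometry.
angles_poor = np.deg2rad([0.0, 5.0, 10.0, 15.0])
angles_good = np.deg2rad([0.0, 45.0, 90.0, 135.0])
A_poor = np.column_stack([np.cos(angles_poor), np.sin(angles_poor)])
A_good = np.column_stack([np.cos(angles_good), np.sin(angles_good)])

poor = np.trace(param_covariance(A_poor))   # total parameter variance
good = np.trace(param_covariance(A_good))
print(poor, good)
```

Minimizing such a trace (or another scalar of the covariance) over candidate schedules is the optimal-design problem the thesis addresses.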
NASA Astrophysics Data System (ADS)
Li, Min-Yang; Yang, Mingchia; Vargas, Emily; Neff, Kyle; Vanli, Arda; Liang, Richard
2016-09-01
One of the major challenges towards controlling the transfer of the electrical and mechanical properties of nanotubes into nanocomposites is the lack of adequate measurement systems to quantify the variations in bulk properties when nanotubes are used as the reinforcement material. In this study, we conducted a one-way analysis of variance (ANOVA) on thickness and conductivity measurements. By analyzing the data collected from both experienced and inexperienced operators, we identified operational details that users might overlook and that resulted in variations, since conductivity measurements of CNT thin films are very sensitive to thickness measurements. In addition, we demonstrated how issues in measurements damaged samples and limited the number of replications, resulting in large variations in the electrical conductivity measurement results. Based on this study, we proposed a faster, more reliable approach to measuring the thickness of CNT thin films that operators can follow to make these measurement processes less dependent on operator skill.
[Discussion of errors and measuring strategies in morphometry using analysis of variance].
Rother, P; Jahn, W; Fitzl, G; Wallmann, T; Walter, U
1986-01-01
Statistical techniques known as the analysis of variance make it possible for the morphologist to plan work in such a way as to get quantitative data with the greatest possible economy of effort. This paper explains how to decide how many measurements to make per micrograph, how many micrographs per tissue block or organ, and how many organs or individuals are necessary for obtaining results of sufficient exactness. The examples furnished have been taken from measuring volume densities of mitochondria in heart muscle cells and from cell counting in lymph nodes. Finally, we show how to determine sample sizes if we are interested in demonstrating significant differences between mean values. PMID:3569811
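The sample-allocation logic can be illustrated with the variance of the grand mean in a two-stage nested design: each level of replication only attacks its own variance component, so piling on measurements per micrograph quickly stops helping once the organ-to-organ component dominates. The variance components below are invented for illustration, not values from the paper.

```python
# Variance of the overall mean for a nested design (organs, micrographs
# within organ, measurements within micrograph); illustrative components.
def var_of_mean(n_organ, n_micro, n_meas,
                s2_organ=4.0, s2_micro=1.0, s2_meas=0.25):
    return (s2_organ / n_organ
            + s2_micro / (n_organ * n_micro)
            + s2_meas / (n_organ * n_micro * n_meas))

base = var_of_mean(5, 4, 2)
more_meas = var_of_mean(5, 4, 20)     # 10x measurements per micrograph
more_organs = var_of_mean(10, 4, 2)   # 2x organs instead
print(base, more_meas, more_organs)
```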
Self-validated Variance-based Methods for Sensitivity Analysis of Model Outputs
Tong, C
2009-04-20
Global sensitivity analysis (GSA) has the advantage over local sensitivity analysis in that GSA does not require strong model assumptions such as linearity or monotonicity. As a result, GSA methods such as those based on variance decomposition are well-suited to multi-physics models, which are often plagued by large nonlinearities. However, as with many other sampling-based methods, inadequate sample size can badly pollute the result accuracies. A natural remedy is to adaptively increase the sample size until sufficient accuracy is obtained. This paper proposes an iterative methodology comprising mechanisms for guiding sample size selection and self-assessing result accuracy. The elegant features in the proposed methodology are the adaptive refinement strategies for stratified designs. We first apply this iterative methodology to the design of a self-validated first-order sensitivity analysis algorithm. We also extend this methodology to design a self-validated second-order sensitivity analysis algorithm based on refining replicated orthogonal array designs. Several numerical experiments are given to demonstrate the effectiveness of these methods.
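A basic variance-based first-order index, estimated with a pick-freeze Monte Carlo scheme, is the kind of estimator such adaptive methodologies refine; the model and sample size below are illustrative, with a known analytic answer to check against.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

def model(x1, x2):
    # Simple additive model: Var(Y) = 9 + 1, so the analytic S1 = 9/10.
    return 3.0 * x1 + 1.0 * x2

x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
x2_redrawn = rng.normal(size=n)

y = model(x1, x2)
y_frozen = model(x1, x2_redrawn)   # X1 kept ("frozen"), X2 resampled

# Pick-freeze estimator of the first-order index S1 = Cov(Y, Y') / Var(Y).
s1 = np.cov(y, y_frozen)[0, 1] / np.var(y, ddof=1)
print(s1)
```

The accuracy of such an estimate depends directly on sample size, which is exactly the issue the paper's self-validating refinement targets.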
NASA Technical Reports Server (NTRS)
Harder, R. L.
1974-01-01
The NASTRAN Thermal Analyzer has been extended to perform variance analysis and to plot the thermal boundary elements. The objective of the variance analysis addition is to assess the sensitivity of temperature variances resulting from uncertainties inherent in input parameters for heat conduction analysis. The plotting capability provides the ability to check the geometry (location, size and orientation) of the boundary elements of a model in relation to the conduction elements. Variance analysis is the study of uncertainties in the computed results as a function of uncertainties in the input data. To study this problem using NASTRAN, a solution is made for the expected values of all inputs, plus another solution for each uncertain variable. A variance analysis module subtracts the results to form derivatives, and then can determine the expected deviations of output quantities.
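The scheme the abstract describes (one solution at the expected inputs, one additional solution per uncertain variable, subtraction to form derivatives, then first-order propagation) can be sketched outside NASTRAN with a stand-in thermal model; the model function and all numbers are hypothetical.

```python
def temperature(k, h, q):
    # Hypothetical stand-in for a thermal solve: conductance k, film
    # coefficient h, heat load q (all invented).
    return q / (k + h)

nominal = {"k": 2.0, "h": 0.5, "q": 100.0}   # expected input values
sigma = {"k": 0.1, "h": 0.05, "q": 5.0}      # input standard deviations

t0 = temperature(**nominal)                  # solution at expected inputs
var_t = 0.0
for name, s in sigma.items():
    bumped = dict(nominal)
    bumped[name] += s                        # one extra solution per input
    dt = temperature(**bumped) - t0          # subtract to form a derivative
    var_t += dt ** 2                         # first-order variance build-up

print(t0, var_t ** 0.5)
```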
[The medical history of Edgar Allan Poe].
Miranda C, Marcelo
2007-09-01
Edgar Allan Poe, one of the best American storytellers and poets, suffered an episodic behaviour disorder partially triggered by alcohol and opiate use. Much confusion still exists about the last days of his turbulent life and the cause of his death at an early age. Different etiologies have been proposed to explain his main medical problem, however, complex partial seizures triggered by alcohol, poorly recognized at the time when Poe lived, seems to be one of the most acceptable hypothesis, among others discussed. PMID:18064380
ERIC Educational Resources Information Center
Johnson, Colleen Cook; Rakow, Ernest A.
This research explored the degree to which group sizes can differ before the robustness of analysis of variance (ANOVA) and analysis of covariance (ANCOVA) are jeopardized. Monte Carlo methodology was used, allowing for the experimental investigation of potential threats to robustness under conditions common to researchers in education. The…
FORTRAN IV Program for One-Way Analysis of Variance with A Priori or A Posteriori Mean Comparisons
ERIC Educational Resources Information Center
Fordyce, Michael W.
1977-01-01
A flexible Fortran program for computing one way analysis of variance is described. Requiring minimal core space, the program provides a variety of useful group statistics, all summary statistics for the analysis, and all mean comparisons for a priori or a posteriori testing. (Author/JKS)
Bush, B.; Jenkin, T.; Lipowicz, D.; Arent, D. J.; Cooke, R.
2012-01-01
Does large scale penetration of renewable generation such as wind and solar power pose economic and operational burdens on the electricity system? A number of studies have pointed to the potential benefits of renewable generation as a hedge against the volatility and potential escalation of fossil fuel prices. Research also suggests that the lack of correlation of renewable energy costs with fossil fuel prices means that adding large amounts of wind or solar generation may also reduce the volatility of system-wide electricity costs. Such variance reduction of system costs may be of significant value to consumers due to risk aversion. The analysis in this report recognizes that the potential value of risk mitigation associated with wind generation and natural gas generation may depend on whether one considers the consumer's perspective or the investor's perspective and whether the market is regulated or deregulated. We analyze the risk and return trade-offs for wind and natural gas generation for deregulated markets based on hourly prices and load over a 10-year period using historical data in the PJM Interconnection (PJM) from 1999 to 2008. Similar analysis is then simulated and evaluated for regulated markets under certain assumptions.
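The variance-reduction argument above is standard portfolio arithmetic: blending in a cost stream that is uncorrelated with fossil fuel prices lowers the standard deviation of total system cost. The volatilities and mix below are invented for illustration, not values from the PJM analysis.

```python
import math

def mix_std(w_wind, s_gas, s_wind, rho):
    # Standard deviation of a two-asset cost mix with weight w_wind on wind.
    w_gas = 1.0 - w_wind
    var = (w_gas ** 2 * s_gas ** 2 + w_wind ** 2 * s_wind ** 2
           + 2.0 * w_gas * w_wind * rho * s_gas * s_wind)
    return math.sqrt(var)

# Invented cost volatilities: volatile gas-driven costs, steadier wind costs
# assumed uncorrelated with fuel prices (rho = 0).
all_gas = mix_std(0.0, 10.0, 4.0, 0.0)
blended = mix_std(0.3, 10.0, 4.0, 0.0)
print(all_gas, blended)
```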
NASA Astrophysics Data System (ADS)
Dai, Heng; Ye, Ming
2015-09-01
Sensitivity analysis is a vital tool in hydrological modeling to identify influential parameters for inverse modeling and uncertainty analysis, and variance-based global sensitivity analysis has gained popularity. However, the conventional global sensitivity indices are defined with consideration of only parametric uncertainty. Based on a hierarchical structure of parameter, model, and scenario uncertainties and on recently developed techniques of model- and scenario-averaging, this study derives new global sensitivity indices for multiple models and multiple scenarios. To reduce the computational cost of variance-based global sensitivity analysis, the sparse grid collocation method is used to evaluate the mean and variance terms involved. In a simple synthetic case of groundwater flow and reactive transport, it is demonstrated that the global sensitivity indices vary substantially between the four models and three scenarios. Not considering the model and scenario uncertainties might result in biased identification of important model parameters. This problem is resolved by using the new indices defined for multiple models and/or multiple scenarios; this is particularly true when the sensitivity indices and model/scenario probabilities vary substantially. The sparse grid collocation method dramatically reduces the computational cost in comparison with the popular quasi-random sampling method. The new framework of global sensitivity analysis is mathematically general and can be applied to a wide range of hydrologic and environmental problems.
Odor measurements according to EN 13725: A statistical analysis of variance components
NASA Astrophysics Data System (ADS)
Klarenbeek, Johannes V.; Ogink, Nico W. M.; van der Voet, Hilko
2014-04-01
In Europe, dynamic olfactometry, as described by the European standard EN 13725, has become the preferred method for evaluating odor emissions emanating from industrial and agricultural sources. Key elements of this standard are the quality criteria for trueness and precision (repeatability). Both are linked to standard values of n-butanol in nitrogen. It is assumed in this standard that whenever a laboratory complies with the overall sensory quality criteria for n-butanol, the quality level is transferable to other, environmental, odors. Although olfactometry is well established, little has been done to investigate inter-laboratory variance (reproducibility). Therefore, the objective of this study was to estimate the reproducibility of odor laboratories complying with EN 13725 as well as to investigate the transferability of n-butanol quality criteria to other odorants. Based upon the statistical analysis of 412 odor measurements on 33 sources, distributed in 10 proficiency tests, it was established that laboratory, panel and panel session are components of variance that significantly differ between n-butanol and other odorants (α = 0.05). This finding does not support the transferability of the quality criteria, as determined on n-butanol, to other odorants and as such is a cause for reconsideration of the present single reference odorant as laid down in EN 13725. In the case of non-butanol odorants, the repeatability standard deviation (sr) and reproducibility standard deviation (sR) were calculated to be 0.108 and 0.282 respectively (log base-10). The latter implies that the difference between two consecutive single measurements, performed on the same testing material by two or more laboratories under reproducibility conditions, will not be larger than a factor 6.3 in 95% of cases. As far as n-butanol odorants are concerned, it was found that the present repeatability standard deviation (sr = 0.108) compares favorably to that of EN 13725 (sr = 0.172). It is therefore
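The quoted factor 6.3 can be reproduced from the reported reproducibility standard deviation, assuming the conventional 2·√2 multiplier for the 95% limit on the difference of two single measurements on a log10 scale:

```python
import math

sR = 0.282                             # reproducibility std dev, log10 units
limit_log10 = 2 * math.sqrt(2) * sR    # 95% limit for a two-measurement difference
factor = 10 ** limit_log10             # back-transform from log10 to a ratio
print(round(factor, 1))
```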
Loh, Po-Ru; Bhatia, Gaurav; Gusev, Alexander; Finucane, Hilary K; Bulik-Sullivan, Brendan K; Pollack, Samuela J; de Candia, Teresa R; Lee, Sang Hong; Wray, Naomi R; Kendler, Kenneth S; O'Donovan, Michael C; Neale, Benjamin M; Patterson, Nick; Price, Alkes L
2015-12-01
Heritability analyses of genome-wide association study (GWAS) cohorts have yielded important insights into complex disease architecture, and increasing sample sizes hold the promise of further discoveries. Here we analyze the genetic architectures of schizophrenia in 49,806 samples from the PGC and nine complex diseases in 54,734 samples from the GERA cohort. For schizophrenia, we infer an overwhelmingly polygenic disease architecture in which ≥71% of 1-Mb genomic regions harbor ≥1 variant influencing schizophrenia risk. We also observe significant enrichment of heritability in GC-rich regions and in higher-frequency SNPs for both schizophrenia and GERA diseases. In bivariate analyses, we observe significant genetic correlations (ranging from 0.18 to 0.85) for several pairs of GERA diseases; genetic correlations were on average 1.3 times stronger than the correlations of overall disease liabilities. To accomplish these analyses, we developed a fast algorithm for multicomponent, multi-trait variance-components analysis that overcomes prior computational barriers that made such analyses intractable at this scale. PMID:26523775
Spatial Variance in Resting fMRI Networks of Schizophrenia Patients: An Independent Vector Analysis.
Gopal, Shruti; Miller, Robyn L; Michael, Andrew; Adali, Tulay; Cetin, Mustafa; Rachakonda, Srinivas; Bustillo, Juan R; Cahill, Nathan; Baum, Stefi A; Calhoun, Vince D
2016-01-01
Spatial variability in resting functional MRI (fMRI) brain networks has not been well studied in schizophrenia, a disease known for both neurodevelopmental and widespread anatomic changes. Motivated by abundant evidence of neuroanatomical variability from previous studies of schizophrenia, we draw upon a relatively new approach called independent vector analysis (IVA) to assess this variability in resting fMRI networks. IVA is a blind-source separation algorithm, which segregates fMRI data into temporally coherent but spatially independent networks and has been shown to be especially good at capturing spatial variability among subjects in the extracted networks. We introduce several new ways to quantify differences in variability of IVA-derived networks between schizophrenia patients (SZs = 82) and healthy controls (HCs = 89). Voxelwise amplitude analyses showed significant group differences in the spatial maps of auditory cortex, the basal ganglia, the sensorimotor network, and visual cortex. Tests for differences (HC-SZ) in the spatial variability maps suggest that, at rest, SZs exhibit more activity within externally focused sensory and integrative networks and less activity in the default mode network, which is thought to be related to internal reflection. Additionally, tests for differences of variance between groups further emphasize that SZs exhibit greater network variability. These results, consistent with our prediction of increased spatial variability within SZs, enhance our understanding of the disease and suggest that it is not just the amplitude of connectivity that is different in schizophrenia, but also the consistency in spatial connectivity patterns across subjects. PMID:26106217
Adjusting stream-sediment geochemical maps in the Austrian Bohemian Massif by analysis of variance
Davis, J.C.; Hausberger, G.; Schermann, O.; Bohling, G.
1995-01-01
The Austrian portion of the Bohemian Massif is a Precambrian terrane composed mostly of highly metamorphosed rocks intruded by a series of granitoids that are petrographically similar. Rocks are exposed poorly and the subtle variations in rock type are difficult to map in the field. A detailed geochemical survey of stream sediments in this region has been conducted and included as part of the Geochemischer Atlas der Republik Österreich, and the variations in stream sediment composition may help refine the geological interpretation. In an earlier study, multivariate analysis of variance (MANOVA) was applied to the stream-sediment data in order to minimize unwanted sampling variation and emphasize relationships between stream sediments and rock types in sample catchment areas. The estimated coefficients were used successfully to correct for the sampling effects throughout most of the region, but also introduced an overcorrection in some areas that seems to result from consistent but subtle differences in composition of specific rock types. By expanding the model to include an additional factor reflecting the presence of a major tectonic unit, the Rohrbach block, the overcorrection is removed. This iterative process simultaneously refines both the geochemical map by removing extraneous variation and the geological map by suggesting a more detailed classification of rock types. © 1995 International Association for Mathematical Geology.
Spectral analysis comparisons of Fourier-theory-based methods and minimum variance (Capon) methods
NASA Astrophysics Data System (ADS)
Garbanzo-Salas, Marcial; Hocking, Wayne. K.
2015-09-01
In recent years, adaptive (data-dependent) methods have been introduced into many areas where Fourier spectral analysis has traditionally been used. Although the data-dependent methods are often advanced as being superior to Fourier methods, they do require some finesse in choosing the order of the relevant filters. In performing comparisons, we have found some concerns about the mappings, particularly in cases involving many spectral lines or even continuous spectral signals. Using numerical simulations, several comparisons between Fourier transform procedures and the minimum variance method (MVM) have been performed. For multiple-frequency signals, the MVM resolves most of the frequency content only for filters that have more degrees of freedom than the number of distinct spectral lines in the signal. In the case of Gaussian spectral approximation, MVM will always underestimate the width, and can misplace the location of a spectral line in some circumstances. Large filters can be used to improve results with multiple-frequency signals, but are computationally inefficient. Significant biases can occur when using MVM to study spectral information or echo power from the atmosphere. Artifacts and artificial narrowing of turbulent layers are examples of such impacts.
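A minimal minimum-variance (Capon) spectral estimator of the kind compared above is P(f) = 1 / (eᴴR⁻¹e), with R the sample covariance of length-m data windows and e the steering vector; the filter order m plays the "degrees of freedom" role discussed in the abstract. The signal, order, and regularization below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)
n, f0 = 512, 0.2
t = np.arange(n)
x = np.cos(2 * np.pi * f0 * t) + 0.1 * rng.normal(size=n)

m = 16                                        # filter order
X = np.lib.stride_tricks.sliding_window_view(x, m)
R = X.T @ X / X.shape[0]                      # m x m sample covariance
Rinv = np.linalg.inv(R + 1e-6 * np.eye(m))    # small diagonal loading

freqs = np.linspace(0, 0.5, 201)
p = np.empty_like(freqs)
for i, f in enumerate(freqs):
    e = np.exp(2j * np.pi * f * np.arange(m))      # steering vector
    p[i] = 1.0 / np.real(np.conj(e) @ Rinv @ e)    # Capon spectrum

peak = freqs[np.argmax(p)]
print(peak)
```

With a single line well above the noise, the Capon spectrum peaks at the signal frequency; resolving several closely spaced lines would require m larger than the number of lines, as the abstract notes.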
New Variance-Reducing Methods for the PSD Analysis of Large Optical Surfaces
NASA Technical Reports Server (NTRS)
Sidick, Erkin
2010-01-01
Edge data of a measured surface map of a circular optic result in large variance or "spectral leakage" behavior in the corresponding Power Spectral Density (PSD) data. In this paper we present two new, alternative methods for reducing such variance in the PSD data by replacing the zeros outside the circular area of a surface map by non-zero values either obtained from a PSD fit (method 1) or taken from the inside of the circular area (method 2).
Methods to estimate the between-study variance and its uncertainty in meta-analysis.
Veroniki, Areti Angeliki; Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian P T; Langan, Dean; Salanti, Georgia
2016-03-01
Meta-analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between-study variability, which is typically modelled using a between-study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between-study variance, has long been challenged. Our aim is to identify known methods for estimation of the between-study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between-study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel and, for continuous data, the restricted maximum likelihood estimator are better alternatives for estimating the between-study variance. Based on the scenarios and results presented in the published studies, we recommend the Q-profile method and the alternative approach based on a 'generalised Cochran between-study variance statistic' to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence-based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. PMID:26332144
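The DerSimonian and Laird method mentioned in this abstract is a simple moment estimator of the between-study variance τ². A minimal textbook sketch (function name and inputs are illustrative):

```python
import numpy as np

def dersimonian_laird(y, v):
    """DerSimonian-Laird moment estimator of the between-study variance
    tau^2, given study effect estimates y and their within-study
    variances v (standard textbook form)."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                                    # inverse-variance weights
    mu_fixed = np.sum(w * y) / np.sum(w)           # fixed-effect mean
    q = np.sum(w * (y - mu_fixed) ** 2)            # Cochran's Q statistic
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - df) / c)                  # truncate at zero
```

Homogeneous studies (identical effects) yield τ̂² = 0, while widely scattered effects with small within-study variances yield a large τ̂² — exactly the behaviour the comparative studies above scrutinize.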
NASA Astrophysics Data System (ADS)
Dost, Michael; Vogel, Dietmar; Winkler, Thomas; Vogel, Juergen; Erb, Rolf; Kieselstein, Eva; Michel, Bernd
2003-07-01
Cross-correlation analysis of digitised grey-scale patterns is based on at least two images that are compared with each other. The comparison is performed by means of a two-dimensional cross-correlation algorithm applied to a set of local intensity submatrices taken from the pattern matrices of the reference and comparison images in the surroundings of predefined points of interest. Established as an outstanding NDE tool for 2D and 3D deformation field analysis, with a focus on micro- and nanoscale applications (microDAC and nanoDAC), the method exhibits additional potential for far wider applications that could be used for advancing homeland security. Because the cross-correlation algorithm seems, in some respects, to imitate some of the "smart" properties of human vision, this "field-of-surface-related" method can provide alternative solutions to some object and process recognition problems that are difficult to solve with more classic "object-related" image processing methods. Detecting differences between two or more images using cross-correlation techniques can open new and unusual applications in the identification and detection of hidden objects or objects of unknown origin, in movement or displacement field analysis, and in some aspects of biometric analysis that could be of special interest for homeland security.
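The core displacement-estimation step described above (locating the cross-correlation peak between a reference patch and a comparison image) can be sketched as follows. This is a minimal brute-force illustration of the idea, not the microDAC/nanoDAC implementation; the search range and normalization are our own choices:

```python
import numpy as np

def estimate_shift(ref, cmp_img, max_shift=5):
    """Estimate the integer-pixel displacement between two grey-scale
    patches by locating the peak of their normalized cross-correlation
    over a small window of trial shifts."""
    ref = ref - ref.mean()
    best, best_dxy = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(cmp_img, -dy, axis=0), -dx, axis=1)
            s = shifted - shifted.mean()
            score = np.sum(ref * s) / (np.linalg.norm(ref) * np.linalg.norm(s))
            if score > best:
                best, best_dxy = score, (dy, dx)
    return best_dxy
```

Subpixel refinement (e.g. fitting a paraboloid to the correlation peak) is what makes such methods useful at the micro- and nanoscale, but the integer-pixel search already captures the principle.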
ERIC Educational Resources Information Center
Luh, Wei-Ming; Guo, Jiin-Huarng
2005-01-01
To deal with nonnormal and heterogeneous data for the one-way fixed effect analysis of variance model, the authors adopted a trimmed means method in conjunction with Hall's invertible transformation into a heteroscedastic test statistic (Alexander-Govern test or Welch test). The results of simulation experiments showed that the proposed technique…
Allocating Sample Sizes to Reduce Budget for Fixed-Effect 2×2 Heterogeneous Analysis of Variance
ERIC Educational Resources Information Center
Luh, Wei-Ming; Guo, Jiin-Huarng
2016-01-01
This article discusses the sample size requirements for the interaction, row, and column effects, respectively, by forming a linear contrast for a 2×2 factorial design for fixed-effects heterogeneous analysis of variance. The proposed method uses the Welch t test and its corresponding degrees of freedom to calculate the final sample size in a…
Obituary: Allan R. Sandage (1926-2010)
NASA Astrophysics Data System (ADS)
Devorkin, David
2011-12-01
Allan Rex Sandage died of pancreatic cancer at his home in San Gabriel, California, in the shadow of Mount Wilson, on November 13, 2010. Born in Iowa City, Iowa, on June 18, 1926, he was 84 years old at his death, leaving his wife, former astronomer Mary Connelly Sandage, and two sons, David and John. He also left a legacy to the world of astronomical knowledge that has long been universally admired and appreciated, making his name synonymous with late 20th-Century observational cosmology. The only child of Charles Harold Sandage, a professor of advertising who helped establish that academic specialty after obtaining a PhD in business administration, and Dorothy Briggs Sandage, whose father was president of Graceland College in Iowa, Allan Sandage grew up in a thoroughly intellectual, university oriented atmosphere but also a peripatetic one taking him to Philadelphia and later to Illinois as his father rose in his career. During his 2 years in Philadelphia, at about age eleven, Allan developed a curiosity about astronomy stimulated by a friend's interest. His father bought him a telescope and he used it to systematically record sunspots, and later attempted to make a larger 6-inch reflector, a project left uncompleted. As a teenager Allan read widely, especially astronomy books of all kinds, recalling in particular The Glass Giant of Palomar as well as popular works by Eddington and Hubble (The Realm of the Nebulae) in the early 1940s. Although his family was Mormon, of the Reorganized Church, he was not practicing, though he later sporadically attended a Methodist church in Oxford, Iowa during his college years. Sandage knew by his high school years that he would engage in some form of intellectual life related to astronomy. He particularly recalls an influential science teacher at Miami University in Oxford, Ohio named Ray Edwards, who inspired him to think critically and "not settle for any hand-waving of any kind." [Interview of Allan Rex Sandage by Spencer
Looney, B.B.; Scott, M.T.
1988-12-31
Recent field and laboratory data have confirmed that apparent dispersivity is a function of the flow distance of the measurement. This scale effect is not consistent with the classical advection-dispersion modeling often used to describe the transport of solutes in saturated porous media. Many investigators attribute this anomalous behavior to the fact that the spreading of solute is actually the result of the heterogeneity of subsurface materials and the wide distribution of flow paths and velocities available in such systems. An analysis using straightforward analytical equations confirms this hypothesis. An analytical equation based on a flow variance approach matches available field data when a variance description of approximately 0.4 is employed. Also, current field data provide a basis for statistical selection of the variance parameter based on the level of concern related to the resulting calculated concentration. While the advection-dispersion approach often yielded reasonable predictions, continued development of statistical and stochastic techniques will provide more defensible and mechanistically descriptive models.
Flood damage maps: ranking sources of uncertainty with variance-based sensitivity analysis
NASA Astrophysics Data System (ADS)
Saint-Geours, N.; Grelot, F.; Bailly, J.-S.; Lavergne, C.
2012-04-01
In order to increase the reliability of flood damage assessment, we need to question the uncertainty associated with the whole flood risk modeling chain. Using a case study on the basin of the Orb River, France, we demonstrate how variance-based sensitivity analysis can be used to quantify uncertainty in flood damage maps at different spatial scales and to identify the sources of uncertainty which should be reduced first. Flood risk mapping is recognized as an effective tool in flood risk management and the elaboration of flood risk maps is now required for all major river basins in the European Union (European directive 2007/60/EC). Flood risk maps can be based on the computation of the Mean Annual Damages indicator (MAD). In this approach, potential damages due to different flood events are estimated for each individual stake over the study area, then averaged over time - using the return period of each flood event - and finally mapped. The issue of uncertainty associated with these flood damage maps should be carefully scrutinized, as they are used to inform the relevant stakeholders or to design flood mitigation measures. Maps of the MAD indicator are based on the combination of hydrological, hydraulic, geographic and economic modeling efforts: as a result, numerous sources of uncertainty arise in their elaboration. Many recent studies describe these various sources of uncertainty (Koivumäki 2010, Bales 2009). Some authors propagate these uncertainties through the flood risk modeling chain and estimate confidence bounds around the resulting flood damage estimates (de Moel 2010). It would now be of great interest to go a step further and to identify which sources of uncertainty account for most of the variability in Mean Annual Damages estimates. We demonstrate the use of variance-based sensitivity analysis to rank sources of uncertainty in flood damage mapping and to quantify their influence on the accuracy of flood damage estimates. We use a quasi
NASA Astrophysics Data System (ADS)
Yang, Feng; Li, Kwok H.; Teh, Kah C.
2006-12-01
Carrier frequency offset (CFO) is a serious drawback in orthogonal frequency division multiplexing (OFDM) systems. It must be estimated and compensated before demodulation to guarantee the system performance. In this paper, we examine the performance of a blind minimum output variance (MOV) estimator. Based on the derived probability density function (PDF) of the output magnitude, its mean and variance are obtained and it is observed that the variance reaches the minimum when there is no frequency offset. This observation motivates the development of the proposed MOV estimator. The theoretical mean-square error (MSE) of the MOV estimator over an AWGN channel is obtained. The analytical results are in good agreement with the simulation results. The performance evaluation of the MOV estimator is extended to a frequency-selective fading channel and the maximal-ratio combining (MRC) technique is applied to enhance the MOV estimator's performance. Simulation results show that the MRC technique significantly improves the accuracy of the MOV estimator.
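The MOV principle described above (output-magnitude variance is minimized when the trial offset matches the true CFO) can be demonstrated with a toy grid search. This sketch assumes a constant-modulus constellation such as QPSK and replaces the paper's analytical estimator with a simple search over trial offsets:

```python
import numpy as np

def mov_cfo_estimate(r, n_fft, trial_offsets):
    """Blind minimum-output-variance (MOV) CFO search: compensate the
    received block with each trial offset, demodulate with the FFT,
    and return the offset minimizing the variance of the output
    magnitudes (toy illustration, not the paper's closed-form method)."""
    n = np.arange(len(r))
    best_eps, best_var = None, np.inf
    for eps in trial_offsets:
        comp = r * np.exp(-2j * np.pi * eps * n / n_fft)  # undo trial CFO
        mags = np.abs(np.fft.fft(comp[:n_fft]))
        v = mags.var()
        if v < best_var:
            best_eps, best_var = eps, v
    return best_eps
```

With no residual offset, a constant-modulus OFDM symbol demodulates to subcarriers of equal magnitude (zero variance); any residual CFO introduces inter-carrier interference that inflates the magnitude variance, which is the observation motivating the estimator.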
Uranium series dating of Allan Hills ice
NASA Technical Reports Server (NTRS)
Fireman, E. L.
1986-01-01
Uranium-238 decay series nuclides dissolved in Antarctic ice samples were measured in areas of both high and low concentrations of volcanic glass shards. Ice from the Allan Hills site (high shard content) had high Ra-226, Th-230 and U-234 activities but similarly low U-238 activities in comparison with Antarctic ice samples without shards. The Ra-226, Th-230 and U-234 excesses were found to be proportional to the shard content, while the U-238 decay series results were consistent with the assumption that alpha decay products recoiled into the ice from the shards. Through this method of uranium series dating, it was learned that the Allan Hills Cul de Sac ice is approximately 325,000 years old.
Using Robust Variance Estimation to Combine Multiple Regression Estimates with Meta-Analysis
ERIC Educational Resources Information Center
Williams, Ryan
2013-01-01
The purpose of this study was to explore the use of robust variance estimation for combining commonly specified multiple regression models and for combining sample-dependent focal slope estimates from diversely specified models. The proposed estimator obviates traditionally required information about the covariance structure of the dependent…
ERIC Educational Resources Information Center
Olejnik, Stephen F.; Algina, James
The present investigation developed power curves for two parametric and two nonparametric procedures for testing the equality of population variances. Both normal and non-normal distributions were considered for the two group design with equal and unequal sample frequencies. The results indicated that when population distributions differed only in…
Technology Transfer Automated Retrieval System (TEKTRAN)
The genetic effects of long term random mating and natural selection aided by genetic male sterility were evaluated in two soybean [Glycine max (L.) Merr.] populations: RSII and RSIII. Population means, variances, and heritabilities were estimated to determine the effects of 26 generations of random...
32. SCIENTISTS ALLAN COX (SEATED), RICHARD DOELL, AND BRENT DALRYMPLE ...
32. SCIENTISTS ALLAN COX (SEATED), RICHARD DOELL, AND BRENT DALRYMPLE AT CONTROL PANEL, ABOUT 1965. - U.S. Geological Survey, Rock Magnetics Laboratory, 345 Middlefield Road, Menlo Park, San Mateo County, CA
Spectral variance of aeroacoustic data
NASA Technical Reports Server (NTRS)
Rao, K. V.; Preisser, J. S.
1981-01-01
An asymptotic technique for estimating the variance of power spectra is applied to aircraft flyover noise data. The results are compared with directly estimated variances and they are in reasonable agreement. The basic time series need not be Gaussian for asymptotic theory to apply. The asymptotic variance formulae can be useful tools both in the design and analysis phase of experiments of this type.
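The asymptotic behaviour underlying such variance estimates can be illustrated with segment-averaged periodograms: for a stationary series, the variance of an averaged spectral estimate falls roughly as S(f)²/K for K averaged segments. The following sketch (Bartlett averaging on white noise) is a generic illustration, not the flyover-noise analysis itself:

```python
import numpy as np

def averaged_periodogram(x, k):
    """Split x into k non-overlapping segments and average their
    periodograms (Bartlett's method). Each averaged bin has asymptotic
    variance ~ S(f)^2 / k for a stationary series."""
    segs = np.reshape(x[: k * (len(x) // k)], (k, -1))
    p = np.abs(np.fft.rfft(segs, axis=1)) ** 2 / segs.shape[1]
    return p.mean(axis=0)
```

For unit-variance white noise the true spectrum is flat at 1, so an interior bin of the K-segment average should have mean ≈ 1 and variance ≈ 1/K across realizations, matching the asymptotic formula.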
Quantitative Genetic Analysis of Temperature Regulation in MUS MUSCULUS. I. Partitioning of Variance
Lacy, Robert C.; Lynch, Carol Becker
1979-01-01
Heritabilities (from parent-offspring regression) and intraclass correlations of full sibs for a variety of traits were estimated from 225 litters of a heterogeneous stock (HS/Ibg) of laboratory mice. Initial variance partitioning suggested different adaptive functions for physiological, morphological and behavioral adjustments with respect to their thermoregulatory significance. Metabolic heat-production mechanisms appear to have reached their genetic limits, with little additive genetic variance remaining. This study provided no genetic evidence that body size has a close directional association with fitness in cold environments, since heritability estimates for weight gain and adult weight were similar and high, whether or not the animals were exposed to cold. Behavioral heat conservation mechanisms also displayed considerable amounts of genetic variability. However, due to strong evidence from numerous other studies that behavior serves an important adaptive role for temperature regulation in small mammals, we suggest that fluctuating selection pressures may have acted to maintain heritable variation in these traits. PMID:17248909
NASA Technical Reports Server (NTRS)
Kotob, S.; Kaufman, H.
1976-01-01
An on-line minimum variance parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.
NASA Technical Reports Server (NTRS)
Alston, D. W.
1981-01-01
The objective of this research was to design a statistical model capable of performing an error analysis of curve fits of wind tunnel test data using analysis of variance and regression analysis techniques. Four related subproblems were defined, and by solving each of these a solution to the general research problem was obtained. The capabilities of the resulting statistical model are considered. The least-squares fit is used to determine the nature of the force, moment, and pressure data. The order of the curve fit is increased in order to remove the quadratic effect in the residuals. The analysis of variance is used to determine the magnitude and effect of the error factor associated with the experimental data.
Budde, M.E.; Tappan, G.; Rowland, J.; Lewis, J.; Tieszen, L.L.
2004-01-01
The researchers calculated seasonal integrated normalized difference vegetation index (NDVI) for each of 7 years using a time-series of 1-km data from the Advanced Very High Resolution Radiometer (AVHRR) (1992-93, 1995) and SPOT Vegetation (1998-2001) sensors. We used a local variance technique to identify each pixel as normal or either positively or negatively anomalous when compared to its surroundings. We then summarized the number of years that a given pixel was identified as an anomaly. The resulting anomaly maps were analysed using Landsat TM imagery and extensive ground knowledge to assess the results. This technique identified anomalies that can be linked to numerous anthropogenic impacts including agricultural and urban expansion, maintenance of protected areas and increased fallow. Local variance analysis is a reliable method for assessing vegetation degradation resulting from human pressures or increased land productivity from natural resource management practices. © 2004 Published by Elsevier Ltd.
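A generic local-variance classification of the kind described above can be sketched as follows: each pixel is flagged as positively or negatively anomalous when it departs from its neighborhood mean by more than k local standard deviations. The window size, threshold, and function names here are our own illustrative choices; the paper does not specify its exact windowing:

```python
import numpy as np
from scipy import ndimage

def local_anomalies(ndvi, size=5, k=2.0):
    """Flag each pixel +1 / -1 / 0 according to whether its value lies
    more than k local standard deviations above/below the mean of its
    size-by-size neighborhood (generic local-variance sketch)."""
    img = ndvi.astype(float)
    mean = ndimage.uniform_filter(img, size=size)
    sq_mean = ndimage.uniform_filter(img ** 2, size=size)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
    z = np.where(std > 0, (img - mean) / np.where(std == 0, 1, std), 0.0)
    return np.sign(z) * (np.abs(z) > k)
```

Summing the per-year flag maps then gives the "number of anomalous years" composite the study analyses.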
Allan Sandage and the distance scale
NASA Astrophysics Data System (ADS)
Tammann, G. A.; Reindl, B.
2013-02-01
Allan Sandage returned to the distance scale and the calibration of the Hubble constant again and again during his active life, experimenting with different distance indicators. In 1952 his proof of the high luminosity of Cepheids confirmed Baade's revision of the distance scale (H0 ~ 250 km s-1 Mpc-1). During the next 25 years, he lowered the value to 75 and 55. Upon the arrival of the Hubble Space Telescope, he observed Cepheids to calibrate the mean luminosity of nearby Type Ia supernovae (SNe Ia) which, used as standard candles, led to the cosmic value of H0 = 62.3 +/- 1.3 +/- 5.0 km s-1 Mpc-1. Eventually he turned to the tip of the red giant branch (TRGB) as a very powerful distance indicator. A compilation of 176 TRGB distances yielded a mean, very local value of H0 = 62.9 +/- 1.6 km s-1 Mpc-1 and shed light on the streaming velocities in the Local Supercluster. Moreover, TRGB distances are now available for six SNe Ia; if their mean luminosity is applied to distant SNe Ia, one obtains H0 = 64.6 +/- 1.6 +/- 2.0 km s-1 Mpc-1. The weighted mean of the two independent large-scale calibrations yields H0 = 64.1 km s-1 Mpc-1 within 3.6%.
Patient population management: taking the leap from variance analysis to outcomes measurement.
Allen, K M
1998-01-01
Case managers today at BCHS have a somewhat different role than at the onset of the Collaborative Practice Model. They are seen throughout the organization as: leaders/participants on cross-functional teams; systems change agents; integrating/merging with quality services and utilization management; and outcomes managers. One of the major cross-functional teams is in the process of designing a Care Coordinator role. These individuals will, as one of their functions, assume responsibility for daily patient care management activities. A variance-tracking program has come into the Utilization Management (UM) department as part of a software package purchased to automate UM work activities. This variance program could potentially be used by the new care coordinators as the role develops. The case managers are beginning to use Decision Support software (Transition Systems Inc.) to collect data that is based on a cost-accounting system and linked to clinical events. Other clinical outcomes databases are now being used by the case managers to help with the collection and measurement of outcomes information. Hoshin planning will continue to be a framework for defining and setting the targets for clinical and financial improvements throughout the organization. Case managers will continue to be involved in many of these system-wide initiatives. In the words of Galileo (1579), "You need to count what's countable, measure what's measurable, and what's not measurable, make measurable." PMID:9601411
ERIC Educational Resources Information Center
Johnson, Colleen Cook; Rakow, Ernest A.
1994-01-01
This research is an empirical study, through Monte Carlo simulation, of the effects of violations of the assumptions for the oneway fixed-effects analysis of variance (ANOVA) and analysis of covariance (ANCOVA). Research reaffirms findings of previous studies that suggest that ANOVA and ANCOVA be avoided when group sizes are not equal. (SLD)
View-angle-dependent AIRS Cloudiness and Radiance Variance: Analysis and Interpretation
NASA Technical Reports Server (NTRS)
Gong, Jie; Wu, Dong L.
2013-01-01
Upper tropospheric clouds play an important role in the global energy budget and hydrological cycle. Significant view-angle asymmetry has been observed in upper-level tropical clouds derived from eight years of Atmospheric Infrared Sounder (AIRS) 15 um radiances. Here, we find that the asymmetry also exists in the extra-tropics. It is larger during day than during night, more prominent near elevated terrain, and closely associated with deep convection and wind shear. The cloud radiance variance, a proxy for cloud inhomogeneity, exhibits asymmetry characteristics consistent with those in the AIRS cloudiness. The leading causes of the view-dependent cloudiness asymmetry are the local time difference and small-scale organized cloud structures. The local time difference (1-1.5 hr) of upper-level (UL) clouds between the two AIRS outermost views can create part of the observed asymmetry. On the other hand, small-scale tilted and banded structures of the UL clouds can induce about half of the observed view-angle-dependent differences in the AIRS cloud radiances and their variances. This estimate is inferred from an analogous study using Microwave Humidity Sounder (MHS) radiances observed during the period when there were simultaneous measurements at two different view angles from the NOAA-18 and -19 satellites. The existence of tilted cloud structures and asymmetric 15 um and 6.7 um cloud radiances implies that cloud statistics would be view-angle dependent and should be taken into account in radiative transfer calculations, measurement uncertainty evaluations, and cloud climatology investigations. In addition, the momentum forcing in the upper troposphere from tilted clouds is also likely asymmetric, which can affect atmospheric circulation anisotropically.
Radial forcing and Edgar Allan Poe's lengthening pendulum
NASA Astrophysics Data System (ADS)
McMillan, Matthew; Blasing, David; Whitney, Heather M.
2013-09-01
Inspired by Edgar Allan Poe's The Pit and the Pendulum, we investigate a radially driven, lengthening pendulum. We first show that increasing the length of an undriven pendulum at a uniform rate does not amplify the oscillations in a manner consistent with the behavior of the scythe in Poe's story. We discuss parametric amplification and the transfer of energy (through the parameter of the pendulum's length) to the oscillating part of the system. In this manner, radial driving can easily and intuitively be understood, and the fundamental concept applied in many other areas. We propose and show by a numerical model that appropriately timed radial forcing can increase the oscillation amplitude in a manner consistent with Poe's story. Our analysis contributes a computational exploration of the complex harmonic motion that can result from radially driving a pendulum and sheds light on a mechanism by which oscillations can be amplified parametrically. These insights should prove especially valuable in the undergraduate physics classroom, where investigations into pendulums and oscillations are commonplace.
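The first claim above (uniform lengthening does not amplify the swing) follows from the standard equation of motion for a variable-length pendulum, θ'' + (2 l'/l) θ' + (g/l) sin θ = 0, whose slow-lengthening solutions have angular amplitude decaying roughly as l^(-3/4). A minimal numerical sketch (parameter values are illustrative, not the paper's):

```python
import numpy as np
from scipy.integrate import solve_ivp

G = 9.81  # gravitational acceleration, m/s^2

def lengthening_pendulum(t, state, l0=1.0, rate=0.05):
    """Pendulum with uniformly growing length l(t) = l0 + rate*t.
    Equation of motion: theta'' + (2 l'/l) theta' + (g/l) sin(theta) = 0."""
    theta, omega = state
    l = l0 + rate * t
    return [omega, -(2 * rate / l) * omega - (G / l) * np.sin(theta)]

# start at 0.1 rad, at rest; integrate 20 s while l grows from 1 m to 2 m
sol = solve_ivp(lengthening_pendulum, (0.0, 20.0), [0.1, 0.0], max_step=0.01)
```

Rather than growing, the angular amplitude shrinks as the length increases, consistent with the adiabatic l^(-3/4) scaling and with the paper's conclusion that Poe's scythe needs appropriately timed radial forcing, not mere lengthening, to gain amplitude.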
Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W.; Müller, Klaus-Robert; Lemm, Steven
2013-01-01
Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation. PMID:23844016
NASA Technical Reports Server (NTRS)
Wright, E. L.; Smoot, G. F.; Kogut, A.; Hinshaw, G.; Tenorio, L.; Lineweaver, C.; Bennett, C. L.; Lubin, P. M.
1994-01-01
Cosmic anisotropy produces an excess variance σ²_sky in the ΔT maps produced by the Differential Microwave Radiometer (DMR) on the Cosmic Background Explorer (COBE) that is over and above the instrument noise. After smoothing to an effective resolution of 10 deg, this excess, σ_sky(10°), provides an estimate for the amplitude of the primordial density perturbation power spectrum with a cosmic uncertainty of only 12%. We employ detailed Monte Carlo techniques to express the amplitude derived from this statistic in terms of the universal root mean square (rms) quadrupole amplitude, (Q²_rms)^1/2. The effects of monopole and dipole subtraction and the non-Gaussian shape of the DMR beam cause the derived (Q²_rms)^1/2 to be 5%-10% larger than would be derived using simplified analytic approximations. We also investigate the properties of two other map statistics: the actual quadrupole and the Boughn-Cottingham statistic. Both the σ_sky(10°) statistic and the Boughn-Cottingham statistic are consistent with the (Q²_rms)^1/2 = 17 +/- 5 μK reported by Smoot et al. (1992) and Wright et al. (1992).
Cahyadi, Muhammad; Park, Hee-Bok; Seo, Dong-Won; Jin, Shil; Choi, Nuri; Heo, Kang-Nyeong; Kang, Bo-Seok; Jo, Cheorun; Lee, Jun-Heon
2016-01-01
Quantitative trait locus (QTL) is a particular region of the genome containing one or more genes associated with economically important quantitative traits. This study was conducted to identify QTL regions for body weight and growth traits in purebred Korean native chicken (KNC). F1 samples (n = 595) were genotyped using 127 microsatellite markers and 8 single nucleotide polymorphisms that covered 2,616.1 centimorgans (cM) of map length across 26 autosomal linkage groups. Body weight traits were measured every 2 weeks from hatch to 20 weeks of age. The weight of the half carcass was also collected, together with growth rate. A multipoint variance-component linkage approach was used to identify QTLs for the body weight traits. Two significant QTLs for growth were identified on chicken chromosome 3 (GGA3) for growth from 16 to 18 weeks (logarithm of the odds [LOD] = 3.24, nominal p value = 0.0001) and on GGA4 for growth from 6 to 8 weeks (LOD = 2.88, nominal p value = 0.0003). Additionally, one significant QTL and three suggestive QTLs were detected for body weight traits in KNC; a significant QTL for body weight at 4 weeks (LOD = 2.52, nominal p value = 0.0007) and a suggestive QTL for body weight at 8 weeks (LOD = 1.96, nominal p value = 0.0027) were detected on GGA4; QTLs were also detected for two different body weight traits: body weight at 16 weeks on GGA3 and body weight at 18 weeks on GGA19. Additionally, two suggestive QTLs for carcass weight were detected at 0 and 70 cM on GGA19. In conclusion, the current study identified several significant and suggestive QTLs that affect growth-related traits in a unique resource pedigree in purebred KNC. This information will contribute to improving the body weight traits in native chicken breeds, especially Asian native chicken breeds. PMID:26732327
SU-E-T-41: Analysis of GI Dose Variability Due to Intrafraction Setup Variance
Phillips, J; Wolfgang, J
2014-06-01
Purpose: Proton SBRT (stereotactic body radiation therapy) can be an effective modality for the treatment of gastrointestinal tumors, but it is limited in practice by its sensitivity to variation in the RPL (radiological path length). Small intrafractional shifts in patient anatomy can lead to significant changes in the dose distribution. This study describes a tool designed to visualize uncertainties in radiological depth in patient CTs and to aid in treatment plan design. Methods: This project utilizes the Shadie toolkit, a GPU-based framework that allows real-time interactive calculations for volume visualization. Current SBRT simulation practice consists of a serial CT acquisition for the assessment of inter- and intrafractional motion utilizing patient-specific immobilization systems. Shadie was used to visualize potential uncertainties, including RPL variance and changes in gastric content. Input for this procedure consisted of two patient CT sets, contours of the desired organ, and a pre-calculated dose. In this study, we performed rigid registrations between sets of 4DCTs obtained from a patient under varying setup conditions. Custom visualizations are written by the user in Shadie, permitting one to create color-coded displays derived from a calculation along each ray. Results: Serial CT data acquired on subsequent days were analyzed for variation in RPL and gastric content. Specific shaders were created to visualize clinically relevant features, including the RPL integrated up to organs of interest. Using pre-calculated dose distributions and segmentation masks as additional input allowed us to further refine the display output from Shadie and create tools suitable for clinical usage. Conclusion: We have demonstrated a method to visualize potential uncertainty for intrafractional proton radiotherapy. We believe this software could prove a useful tool to guide those looking to design treatment plans that are least sensitive to intrafractional uncertainty.
NASA Astrophysics Data System (ADS)
Khorashadi Zadeh, Farkhondeh; Sarrazin, Fanny; Nossent, Jiri; Pianosi, Francesca; van Griensven, Ann; Wagener, Thorsten; Bauwens, Willy
2015-04-01
Uncertainty in parameters is a well-known source of model output uncertainty, which undermines model reliability and restricts model application. A large number of parameters, combined with a lack of data, limits calibration efficiency and leads to higher parameter uncertainty. Global Sensitivity Analysis (GSA) is a set of mathematical techniques that provides quantitative information about the contribution of different sources of uncertainty (e.g. model parameters) to the model output uncertainty. Therefore, identifying influential and non-influential parameters using GSA can improve model calibration efficiency and consequently reduce model uncertainty. In this paper, moment-independent density-based GSA methods that consider the entire model output distribution - i.e. the Probability Density Function (PDF) or Cumulative Distribution Function (CDF) - are compared with the widely-used variance-based method, and their differences are discussed. Moreover, the effect of model output definition on parameter ranking results is investigated using the Nash-Sutcliffe Efficiency (NSE) and model bias as example outputs. To this end, 26 flow parameters of a SWAT model of the River Zenne (Belgium) are analysed. In order to assess the robustness of the sensitivity indices, bootstrapping is applied and 95% confidence intervals are estimated. The results show that, although the variance-based method is easy to implement and interpret, it provides wider confidence intervals, especially for non-influential parameters, compared to the density-based methods. Therefore, density-based methods may be a useful complement to variance-based methods for identifying non-influential parameters.
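To make the variance-based side of the comparison concrete, here is a minimal sketch of how first-order variance-based (Sobol) indices are typically estimated with a pick-freeze Monte Carlo scheme. This is a generic illustration, not the authors' SWAT analysis; the toy model, sample size, and uniform inputs are assumptions.

```python
import numpy as np

def sobol_first_order(model, n_params, n_samples=10_000, seed=None):
    """First-order Sobol indices S_i = Var(E[Y|X_i]) / Var(Y), estimated
    with a pick-freeze (Saltelli-style) Monte Carlo scheme on uniform
    [0, 1) inputs. `model` maps an (n, n_params) array to n outputs."""
    rng = np.random.default_rng(seed)
    A = rng.random((n_samples, n_params))
    B = rng.random((n_samples, n_params))
    yA, yB = model(A), model(B)
    var_y = np.concatenate([yA, yB]).var()
    S = np.empty(n_params)
    for i in range(n_params):
        ABi = A.copy()
        ABi[:, i] = B[:, i]  # resample only parameter i
        S[i] = np.mean(yB * (model(ABi) - yA)) / var_y
    return S

# Toy additive model: the first input should dominate the output variance.
def toy(X):
    return X[:, 0] + 0.1 * X[:, 1]

S = sobol_first_order(toy, 2, n_samples=20_000, seed=1)
```

For this toy model the analytic indices are about 0.99 and 0.01, so a parameter with a near-zero index would be flagged as non-influential, which is exactly the screening use discussed in the abstract.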
Noise and drift analysis of non-equally spaced timing data
NASA Technical Reports Server (NTRS)
Vernotte, F.; Zalamansky, G.; Lantz, E.
1994-01-01
Generally, it is possible to obtain equally spaced timing data from oscillators. The measurement of the drifts and noises affecting oscillators is then performed by using a variance (Allan variance, modified Allan variance, or time variance) or a system of several variances (multivariance method). However, in some cases, several samples, or even several sets of samples, are missing. In the case of millisecond pulsar timing data, for instance, observations are quite irregularly spaced in time. Nevertheless, since some observations are very close together (one minute) and since the timing data sequence is very long (more than ten years), information on both short-term and long-term stability is available. Unfortunately, a direct variance analysis is not possible without interpolating missing data. Different interpolation algorithms (linear interpolation, cubic spline) are used to calculate variances in order to verify that they neither lose information nor add erroneous information. A comparison of the results of the different algorithms is given. Finally, the multivariance method was adapted to the measurement sequence of the millisecond pulsar timing data: the responses of each variance of the system are calculated for each type of noise and drift, with the same missing samples as in the pulsar timing sequence. An estimation of precision, dynamics, and separability of this method is given.
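As a minimal sketch of the variance analysis described above, the following computes the non-overlapping Allan variance from equally spaced phase data, plus a simple linear-interpolation step for putting gappy timing data on a regular grid. The function names and the grid scheme are illustrative assumptions, not the authors' multivariance code.

```python
import numpy as np

def allan_variance(phase, tau0, m=1):
    """Non-overlapping Allan variance from equally spaced phase data.

    phase : time-error samples x_k (seconds), spaced tau0 apart.
    m     : averaging factor, so the analysis time is tau = m * tau0.
    AVAR(tau) = <(x_{k+2m} - 2 x_{k+m} + x_k)^2> / (2 tau^2)
    """
    x = np.asarray(phase, dtype=float)
    tau = m * tau0
    d2 = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]  # second differences of phase
    return np.mean(d2 ** 2) / (2.0 * tau ** 2)

def regularize(times, phase, tau0):
    """Linearly interpolate irregularly spaced phase samples onto an
    equally spaced grid, one of the interpolation schemes compared above."""
    grid = np.arange(times[0], times[-1] + 0.5 * tau0, tau0)
    return np.interp(grid, times, phase)
```

A pure frequency offset (phase linear in time) gives zero Allan variance, while a quadratic phase (linear frequency drift) gives a constant second difference, which is why drift must be handled separately in a multivariance analysis.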
ERIC Educational Resources Information Center
Williams, John D.; Lindem, Alfred C.
Four computer programs using the general purpose multiple linear regression program have been developed. Setwise regression analysis is a stepwise procedure for sets of variables; there will be as many steps as there are sets. Covarmlt allows a solution to the analysis of covariance design with multiple covariates. A third program has three…
Allan Houser (Haozous) Santa Fe Compound and Sculpture Garden.
ERIC Educational Resources Information Center
Herberholz, Barbara
1999-01-01
Summarizes the life of artist Allan Houser, focusing on his childhood, his family life, the development of his artistic endeavors, and his career as an artist. Comments on the Allan Houser Compound, a 104-acre compound and sculpture garden that houses over 30 of his sculptures. (CMK)
Biotechnology Symposium - In Memoriam, the Late Dr. Allan Zipf
Technology Transfer Automated Retrieval System (TEKTRAN)
A one-day biotechnology symposium was held at Alabama A&M University (AAMU), Normal, AL on June 4, 2004 in memory of the late Dr. Allan Zipf (Sept 1953-Jan 2004). Dr. Zipf was a Research Associate Professor at the Department of Plant and Soil Sciences, AAMU, who collaborated extensively with ARS/MS...
The Variance Analysis for Seismic Attributes in Oil-and-Gas Detection at the Middle of Tarim Basin
NASA Astrophysics Data System (ADS)
Yu, C.; Jingyi, F.
2011-12-01
Seismic attributes of reflection data are important for oil-and-gas detection. Attribute analysis has become a new technique for identifying oil and/or gas reservoirs from the information carried by seismic waves. Local changes and variances can be detected in the seismic waves at and around areas containing oil and/or gas, which can be used to delineate oil and gas reservoirs and provide a basis for geophysical evaluation of hydrocarbon traps. The method has been tested in the middle of the Tarim Basin, and the analysis of seismic attributes and their relationship with oil and gas reservoirs appears to cast new light on detecting hydrocarbon traps. Before exploring and drilling for oil and gas, data access and data mining, especially for seismic attribute data, are suggested for hydrocarbon detection and trap evaluation, in order to reduce risk and improve efficiency in oil-and-gas exploration.
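The local-variance idea above can be illustrated on a single trace as a sliding-window sample variance, where high values flag local changes in reflection character. This is a hypothetical one-dimensional sketch (window length and trace are assumptions); production attribute software operates on full 3D volumes.

```python
import numpy as np

def variance_attribute(trace, window):
    """Sliding-window sample variance along a single trace; windows with
    high variance flag local changes in the reflection signal."""
    trace = np.asarray(trace, dtype=float)
    n = len(trace) - window + 1
    out = np.empty(n)
    for i in range(n):
        out[i] = trace[i:i + window].var()
    return out

# A step in amplitude (e.g. a change in reflectivity) peaks the attribute
# at the window that straddles the step.
attr = variance_attribute([0, 0, 0, 0, 1, 1, 1, 1], window=4)
```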
Wang, Lianqi; Gilles, Luc; Ellerbroek, Brent
2011-06-20
The scientific utility of laser-guide-star-based multiconjugate adaptive optics systems depends upon high sky coverage. Previously we reported a high-fidelity sky coverage analysis of an ad hoc split tomography control algorithm and a postprocessing simulation technique. In this paper, we present the performance of a newer minimum variance split tomography algorithm, and we show that it brings a median improvement at zenith of 21 nm rms optical path difference error over the ad hoc split tomography control algorithm for our system, the Narrow Field Infrared Adaptive Optics System for the Thirty Meter Telescope. In order to make the comparison, we also validated our previously developed sky coverage postprocessing software using an integrated simulation of both high- (laser guide star) and low-order (natural guide star) loops. A new term in the noise model is also identified that improves the performance of both algorithms by more properly regularizing the reconstructor. PMID:21691367
Guo, Zifeng; Chen, Dijun; Schnurbusch, Thorsten
2015-06-01
Anther and ovary development play an important role in grain setting, a crucial factor determining wheat (Triticum aestivum L.) yield. One aim of this study was to determine the heritability of anther and ovary size at different positions within a spikelet at seven floral developmental stages and conduct a variance components analysis. Relationships between anther and ovary size and other traits were also assessed. The thirty central European winter wheat genotypes used in this study were based on reduced height (Rht) and photoperiod sensitivity (Ppd) genes with variable genetic backgrounds. Identical experimental designs were conducted in a greenhouse and field simultaneously. Heritability of anther and ovary size indicated strong genetic control. Variance components analysis revealed that anther and ovary sizes of floret 3 (i.e. F3, the third floret from the spikelet base) and floret 4 (F4) were more sensitive to the environment compared with those in floret 1 (F1). Good correlations were found between spike dry weight and anther and ovary size in both greenhouse and field, suggesting that anther and ovary size are good predictors of each other, as well as spike dry weight in both conditions. Relationships between spike dry weight and anther and ovary size at F3/4 positions were stronger than at F1, suggesting that F3/4 anther and ovary size are better predictors of spike dry weight. Generally, ovary size showed a closer relationship with spike dry weight than anther size, suggesting that ovary size is a more reliable predictor of spike dry weight. PMID:25821074
NASA Astrophysics Data System (ADS)
Liniger, Hanspeter; Jucker Riva, Matteo; Schwilch, Gudrun
2016-04-01
Mapping and assessment of desertification is a primary basis for effective management of dryland ecosystems. Vegetation cover and biomass density are key elements for the ecological functioning of dry ecosystems and, at the same time, an effective indicator of desertification, land degradation and sustainable land management. The Normalized Difference Vegetation Index (NDVI) is widely used to estimate vegetation density and cover. However, the reflectance of vegetation, and thus the NDVI values, are influenced by several factors such as type of canopy, type of land use and seasonality. For example, low NDVI values could be associated with a degraded forest, with a healthy forest under dry climatic conditions, with an area used as pasture, or with an area managed to reduce the fuel load. We propose a simple method to analyse the variance of the NDVI signal considering the main factors that shape the vegetation. This variance analysis enables us to detect and categorize degradation much more precisely than simple NDVI analysis. The methodology comprises identifying homogeneous landscape areas in terms of aspect, slope, land use and disturbance regime (if relevant). Secondly, the NDVI is calculated from Landsat multispectral images and the vegetation potential for each landscape is determined from the 90th percentile (the highest 10% of values). Thirdly, the difference between the NDVI value of each pixel and the potential is used to establish degradation categories. Through this methodology, we are able to identify realistic objectives for restoration, allowing a targeted choice of management options for degraded areas. For example, afforestation would only be done in areas that show potential for forest growth. Moreover, we can measure the effectiveness of management practices in terms of vegetation growth across different landscapes and conditions. Additionally, the same methodology can be applied to a time series of multispectral images, allowing detection and quantification of
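The percentile-based step of the methodology can be sketched as follows for one homogeneous landscape unit. The category bin edges here are illustrative assumptions, not values from the study.

```python
import numpy as np

def degradation_categories(ndvi, bins=(0.25, 0.5, 0.75)):
    """Classify pixels of one homogeneous landscape unit by their NDVI
    deficit relative to the unit's potential, taken here as the 90th
    percentile (the 'highest 10%' value of the methodology above).

    Returns (potential, categories): category 0 is at or near potential;
    higher categories mean a larger relative deficit. The bin edges are
    illustrative assumptions."""
    ndvi = np.asarray(ndvi, dtype=float)
    potential = np.percentile(ndvi, 90)
    deficit = np.clip((potential - ndvi) / potential, 0.0, None)
    return potential, np.digitize(deficit, bins)

# Nine healthy pixels near the potential and one strongly degraded pixel.
pot, cats = degradation_categories([0.8] * 9 + [0.3])
```

In practice this would be run separately per landscape unit (aspect/slope/land-use class), so that a low NDVI is judged against the attainable potential of its own unit rather than a global threshold.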
Kamp, F.; Brueningk, S.C.; Wilkens, J.J.
2014-06-15
Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation, and on the dose per fraction. The needed biological parameters, as well as their dependency on ion species and ion energy, are typically subject to large (relative) uncertainties of up to 20-40% or even more. Therefore it is necessary to estimate the resulting uncertainties in e.g. RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10^4 to 10^6 times, each run with a different set of input parameters, randomly varied according to their assigned distribution. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, only influential part) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps, showing dependencies of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations which result in extreme deviations of the result, and the input parameters for which an uncertainty reduction is the most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact. The method is very flexible, model independent, and enables a broad assessment
Mean-variance portfolio analysis data for optimizing community-based photovoltaic investment.
Shakouri, Mahmoud; Lee, Hyun Woo
2016-03-01
The amount of electricity generated by photovoltaic (PV) systems is affected by factors such as shading, building orientation and roof slope. To increase electricity generation and reduce volatility in the generation of PV systems, a portfolio of PV systems can be made which takes advantage of the potential synergy among neighboring buildings. This paper contains data supporting the research article entitled: PACPIM: new decision-support model of optimized portfolio analysis for community-based photovoltaic investment [1]. We present a set of data relating to physical properties of 24 houses in Oregon, USA, along with simulated hourly electricity data for the installed PV systems. The developed Matlab code to construct optimized portfolios is also provided. The application of these files can be generalized to a variety of communities interested in investing in PV systems. PMID:26937458
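The portfolio idea above rests on the covariance among the buildings' generation profiles. As a minimal sketch (not the PACPIM model itself), the classical unconstrained minimum-variance weights from mean-variance analysis are w = C⁻¹1 / (1ᵀC⁻¹1), with C the sample covariance of the generation series:

```python
import numpy as np

def min_variance_weights(series):
    """Unconstrained minimum-variance portfolio weights (they sum to 1 but
    may be negative): w = C^-1 1 / (1' C^-1 1), where C is the sample
    covariance of the (time x asset) generation or return series."""
    C = np.cov(np.asarray(series, dtype=float), rowvar=False)
    ones = np.ones(C.shape[0])
    w = np.linalg.solve(C, ones)
    return w / w.sum()

# Two uncorrelated systems; the less volatile one gets the larger weight.
series = [[1.0, 2.0], [-1.0, -2.0], [1.0, -2.0], [-1.0, 2.0]]
w = min_variance_weights(series)
```

For uncorrelated systems the weights are inversely proportional to the individual variances, which is the sense in which pooling neighboring rooftops damps overall volatility.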
Boccard, Julien; Rudaz, Serge
2016-05-12
Many experimental factors may have an impact on chemical or biological systems. A thorough investigation of the potential effects and interactions between the factors is made possible by rationally planning the trials using systematic procedures, i.e. design of experiments. However, assessing factors' influences often remains a challenging task when dealing with hundreds to thousands of correlated variables, whereas only a limited number of samples is available. In that context, most of the existing strategies involve the ANOVA-based partitioning of sources of variation and the separate analysis of ANOVA submatrices using multivariate methods, to account for both the intrinsic characteristics of the data and the study design. However, these approaches lack the ability to summarise the data using a single model and remain somewhat limited for detecting and interpreting subtle perturbations hidden in complex Omics datasets. In the present work, a supervised multiblock algorithm based on the Orthogonal Partial Least Squares (OPLS) framework is proposed for the joint analysis of ANOVA submatrices. This strategy has several advantages: (i) the evaluation of a unique multiblock model accounting for all sources of variation; (ii) the computation of a robust estimator (goodness of fit) for assessing the ANOVA decomposition reliability; (iii) the investigation of an effect-to-residuals ratio to quickly evaluate the relative importance of each effect and (iv) an easy interpretation of the model with appropriate outputs. Case studies from metabolomics and transcriptomics, highlighting the ability of the method to handle Omics data obtained from fixed-effects full factorial designs, are proposed for illustration purposes. Signal variations are easily related to main effects or interaction terms, while relevant biochemical information can be derived from the models. PMID:27114219
Moradi, Ali; Tootkaboni, Mazdak; Pennell, Kelly G.
2015-01-01
The Johnson and Ettinger (J&E) model is the most widely used vapor intrusion model in the United States. It is routinely used as part of hazardous waste site assessments to evaluate the potential for vapor intrusion exposure risks. This study incorporates mathematical approaches that allow sensitivity and uncertainty of the J&E model to be evaluated. In addition to performing Monte Carlo simulations to examine the uncertainty in the J&E model output, a powerful global sensitivity analysis technique based on Sobol indices is used to evaluate J&E model sensitivity to variations in the input parameters. The results suggest that the J&E model is most sensitive to the building air exchange rate, regardless of soil type and source depth. Building air exchange rate is not routinely measured during vapor intrusion investigations, but clearly improved estimates and/or measurements of the air exchange rate would lead to improved model predictions. It is also found that the J&E model is more sensitive to effective diffusivity, than effective permeability. Field measurements of effective diffusivity are not commonly collected during vapor intrusion investigations; however, consideration of this parameter warrants additional attention. Finally, the effects of input uncertainties on model predictions for different scenarios (e.g. sandy soil as compared to clayey soil, and “shallow” sources as compared to “deep” sources) are evaluated. Our results, not only identify the range of variability to be expected depending on the scenario at hand, but also mark the important cases where special care is needed when estimating the input parameters to which the J&E model is most sensitive. PMID:25947051
Simulation Study Using a New Type of Sample Variance
NASA Technical Reports Server (NTRS)
Howe, D. A.; Lainson, K. J.
1996-01-01
We evaluate with simulated data a new type of sample variance for the characterization of frequency stability. The new statistic (referred to as TOTALVAR and its square root TOTALDEV) is a better predictor of long-term frequency variations than the present sample Allan deviation. The statistical model uses the assumption that a time series of phase or frequency differences is wrapped (periodic) with overall frequency difference removed. We find that the variability at long averaging times is reduced considerably for the five models of power-law noise commonly encountered with frequency standards and oscillators.
Carbon-14 ages of Allan Hills meteorites and ice
NASA Technical Reports Server (NTRS)
Fireman, E. L.; Norris, T.
1982-01-01
Allan Hills is a blue ice region of approximately 100 sq km area in Antarctica where many meteorites have been found exposed on the ice. The terrestrial ages of the Allan Hills meteorites, which are obtained from their cosmogenic nuclide abundances, are important time markers that can reflect the history of ice movement to the site. The principal purpose in studying the terrestrial ages of ALHA meteorites is to locate samples of ancient ice and analyze their trapped gas contents. Attention is given to the C-14 and Ar-39 terrestrial ages of ALHA meteorites, and to C-14 ages and trapped gas compositions in ice samples. On the basis of the obtained C-14 terrestrial ages, and the Cl-36 and Al-26 results reported by others, it is concluded that most ALHA meteorites fell between 20,000 and 200,000 years ago.
Exposure and terrestrial ages of four Allan Hills Antarctic meteorites
NASA Technical Reports Server (NTRS)
Kirsten, T.; Ries, D.; Fireman, E. L.
1978-01-01
Terrestrial ages of meteorites are based on the amount of cosmic-ray-produced radioactivity in the sample and the number of observed falls that have similar cosmic-ray exposure histories. The cosmic-ray exposures are obtained from the stable noble gas isotopes. Noble gas isotopes are measured by high-sensitivity mass spectrometry. In the present study, the noble gas contents were measured in four Allan Hill meteorites (No. 5, No. 6, No. 7, and No. 8), whose C-14, Al-26, and Mn-53 radioactivities are known. These meteorites are of particular interest because they belong to a large assemblage of distinct meteorites that lie exposed on a small (110 sq km) area of ice near the Allan Hills.
Senior, Alistair M; Gosby, Alison K; Lu, Jing; Simpson, Stephen J; Raubenheimer, David
2016-01-01
Meta-analysis, which drives evidence-based practice, typically focuses on the average response of subjects to a treatment. For instance in nutritional research the difference in average weight of participants on different diets is typically used to draw conclusions about the relative efficacy of interventions. As a result of their focus on the mean, meta-analyses largely overlook the effects of treatments on inter-subject variability. Recent tools from the study of biological evolution, where inter-individual variability is one of the key ingredients for evolution by natural selection, now allow us to study inter-subject variability using established meta-analytic models. Here we use meta-analysis to study how low carbohydrate (LC) ad libitum diets and calorie restricted diets affect variance in mass. We find that LC ad libitum diets may have a more variable outcome than diets that prescribe a reduced calorie intake. Our results suggest that whilst LC diets are effective in a large proportion of the population, for a subset of individuals, calorie restricted diets may be more effective. There is evidence that LC ad libitum diets rely on appetite suppression to drive weight loss. Extending this hypothesis, we suggest that between-individual variability in protein appetite may drive the trends that we report. A priori identification of an individual's target intake for protein may help define the most effective dietary intervention to prescribe for weight loss. PMID:27491895
Variance-component analysis of obesity in type 2 diabetes confirms loci on chromosomes 1q and 11q.
van Tilburg, Jonathan H O; Sandkuijl, Lodewijk A; Strengman, Eric; Pearson, Peter L; van Haeften, Timon W; Wijmenga, Cisca
2003-11-01
To study genetic loci influencing obesity in nuclear families with type 2 diabetes, we performed a genome-wide screen with 325 microsatellite markers that had an average spacing of 11 cM and a mean heterozygosity of approximately 75% covering all 22 autosomes. Genotype data were obtained from 562 individuals from 178 families from the Breda Study Cohort. These families were determined to have at least two members with type 2 diabetes. As a measure of obesity, the BMI of each diabetes patient was determined. The genotypes were analyzed using variance components (VCs) analysis implemented in GENEHUNTER 2 to determine quantitative trait loci influencing BMI. The VC analysis revealed two genomic regions showing VC logarithm of odds (LOD) scores ≥1.0 on chromosome 1 and chromosome 11. The regions of interest on both chromosomes were further investigated by fine-mapping with additional markers, resulting in a VC LOD score of 1.5 on chromosome 1q and a VC LOD of 2.4 on chromosome 11q. The locus on chromosome 1 has been implicated previously in diabetes. The locus on chromosome 11 has been implicated previously in diabetes and obesity. Our study to determine linkage for BMI confirms the presence of quantitative trait loci influencing obesity in subjects with type 2 diabetes on chromosomes 1q31-q42 and 11q14-q24. PMID:14627748
Nuclear Material Variance Calculation
Energy Science and Technology Software Center (ESTSC)
1995-01-01
MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet that significantly reduces the effort required to make the variance and covariance calculations needed to determine the detection sensitivity of a materials accounting system to loss of special nuclear material (SNM). The user is required to enter information into one of four data tables depending on the type of term in the materials balance (MB) equation. The four data tables correspond to input transfers, output transfers, and two types of inventory terms, one for nondestructive assay (NDA) measurements and one for measurements made by chemical analysis. Each data entry must contain an identification number and a short description, as well as values for the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements during an accounting period. The user must also specify the type of error model (additive or multiplicative) associated with each measurement, and possible correlations between transfer terms. Predefined spreadsheet macros are used to perform the variance and covariance calculations for each term based on the corresponding set of entries. MAVARIC has been used for sensitivity studies of chemical separation facilities, fuel processing and fabrication facilities, and gas centrifuge and laser isotope enrichment facilities.
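The per-term variance bookkeeping described above can be illustrated with a simplified propagation sketch. The tuple layout and model names are illustrative assumptions, and the correlations between transfer terms that the spreadsheet supports are omitted here (terms are treated as independent).

```python
def mb_variance(terms):
    """Variance of a materials balance (inputs - outputs + inventory change)
    under the simplifying assumption of independent terms.

    Each term is (mass, sigma, n, model): with the "multiplicative" error
    model the standard deviation scales with the bulk mass (mass * sigma,
    sigma relative); with the "additive" model sigma is absolute. The n
    measurements in an accounting period add in quadrature."""
    var = 0.0
    for mass, sigma, n, model in terms:
        per_meas = mass * sigma if model == "multiplicative" else sigma
        var += n * per_meas ** 2
    return var

# Hypothetical example: a 100 kg transfer measured to 1% relative error,
# plus 4 NDA inventory measurements with 0.5 kg absolute error each.
v = mb_variance([(100.0, 0.01, 1, "multiplicative"), (0.0, 0.5, 4, "additive")])
```

The square root of the balance variance sets the detection threshold: a loss of SNM is only detectable if it is large relative to that combined measurement standard deviation.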
Biclustering with heterogeneous variance.
Chen, Guanhua; Sullivan, Patrick F; Kosorok, Michael R
2013-07-23
In cancer research, as in all of medicine, it is important to classify patients into etiologically and therapeutically relevant subtypes to improve diagnosis and treatment. One way to do this is to use clustering methods to find subgroups of homogeneous individuals based on genetic profiles together with heuristic clinical analysis. A notable drawback of existing clustering methods is that they ignore the possibility that the variance of gene expression profile measurements can be heterogeneous across subgroups, and methods that do not consider heterogeneity of variance can lead to inaccurate subgroup prediction. Research has shown that hypervariability is a common feature among cancer subtypes. In this paper, we present a statistical approach that can capture both mean and variance structure in genetic data. We demonstrate the strength of our method in both synthetic data and in two cancer data sets. In particular, our method confirms the hypervariability of methylation level in cancer patients, and it detects clearer subgroup patterns in lung cancer data. PMID:23836637
Volstad, Nicola; Nemke, Brett; Muir, Peter
2016-01-01
Factors that contribute to variance in ground reaction forces (GRFs) include dog morphology, velocity, and trial repetition. Narrow velocity ranges are recommended to minimize variance. In a heterogeneous population, it may be preferable to minimize data variance and efficiently perform force platform gait analysis by evaluation of each individual dog at its preferred velocity, such that dogs are studied at a similar relative velocity (V*). Data from 27 normal dogs were obtained including withers and shoulder height. Each dog was trotted across a force platform at its preferred velocity, with controlled acceleration (±0.5 m/s²). V* ranges were created for withers and shoulder height. Variance effects from 12 trotting velocity ranges and associated V* ranges were examined using repeated-measures analysis-of-covariance. Mean bodyweight was 24.4 ± 7.4 kg. Individual dog, velocity, and V* significantly influenced GRF (P < 0.001). Trial number significantly influenced thoracic limb peak vertical force (PVF) (P < 0.001). Limb effects were not significant. The magnitude of variance effects was greatest for the dog effect. Withers height V* was associated with small GRF variance. Narrow velocity ranges typically captured a smaller percentage of trials and were not consistently associated with lower variance. The withers height V* range of 0.6-1.05 captured the largest proportion of trials (95.9 ± 5.9%) with no significant effects on PVF and vertical impulse. The use of individual velocity ranges derived from a withers height V* range of 0.6-1.05 will account for population heterogeneity while minimizing exacerbation of lameness in clinical trials studying lame dogs by efficient capture of valid trials. PMID:26631945
Hans, Eric C.; Zwarthoed, Berdien; Seliski, Joseph; Nemke, Brett; Muir, Peter
2016-01-01
Factors that contribute to variance in ground reaction forces (GRF) include: dog morphology, velocity, and trial repetition. Narrow velocity ranges are recommended to minimize variance. In a heterogeneous population of clinically normal dogs, we hypothesized that the dog subject effect would account for the majority of variance in peak vertical force (PVF) and vertical impulse (VI) at a trotting gait, and that narrow velocity ranges would be associated with less variance. Data from twenty normal dogs were obtained. Each dog was trotted across a force platform at its habitual velocity, with controlled acceleration (±0.5 m/s²). Variance effects from twelve trotting velocity ranges were examined using repeated-measures analysis-of-covariance. Significance was set at P<0.05. Mean dog body weight was 28.4 ± 7.4 kg. Individual dog and velocity significantly affected PVF and VI for thoracic and pelvic limbs (P<0.001). Trial number significantly affected thoracic limb PVF (P<0.001). Limb (left or right) significantly affected thoracic limb VI (P=0.02). The magnitude of variance effects from largest to smallest was dog, velocity, trial repetition, and limb. Velocity ranges of 1.5–2.0 m/s, 1.8–2.2 m/s, and 1.9–2.2 m/s were associated with low variance and no significant effects on thoracic or pelvic limb PVF and VI. A combination of these ranges, 1.5–2.2 m/s, captured a large percentage of trials per dog (84.2±21.4%) with no significant effects on thoracic or pelvic limb PVF or VI. We conclude wider velocity ranges facilitate capture of valid trials with little to no effect on GRF in normal trotting dogs. This concept is important for clinical trial design. PMID:25457264
Nowak, J; Hagerman, I; Ylén, M; Nyquist, O; Sylvén, C
1993-09-01
Variance electrocardiography (variance ECG) is a new resting procedure for detection of coronary artery disease (CAD). The method measures variability in the electrical expression of the depolarization phase induced by this disease. The time-domain analysis is performed on 220 cardiac cycles using high-fidelity ECG signals from 24 leads, and the phase-locked temporal electrical heterogeneity is expressed as a nondimensional CAD index (CAD-I) with values of 0-150. This study compares the diagnostic efficiency of variance ECG and the exercise stress test in a high-prevalence population. A total of 199 symptomatic patients evaluated with coronary angiography were subjected to variance ECG and an exercise test on a bicycle ergometer as a continuous ramp. The discriminant accuracy of the two methods was assessed employing receiver operating characteristic curves constructed by successive consideration of several CAD-I cutpoint values and various threshold criteria based on ST-segment depression exclusively or in combination with exertional chest pain. Of these patients, 175 with CAD (≥50% luminal stenosis in ≥1 major epicardial artery) presented a mean CAD-I of 88 ± 22, compared with 70 ± 21 in 24 nonaffected patients (p < 0.01). Variance ECG provided a statistically significant discrimination (p < 0.01), which was matched by the exercise test only when the chest pain variable was added to ST-segment depression as a discriminating criterion. Even then, the exercise test diagnosed single-vessel disease with a significantly lower sensitivity. At a cutpoint of CAD-I ≥70, compared with ST-segment depression ≥1 mm combined with exertional chest pain, the overall sensitivity of variance ECG was significantly higher (p < 0.01) than that of the exercise test (79 vs. 48%). When combined, the two methods identified 93% of coronary angiography-positive cases. Variance ECG is an efficient diagnostic method that compares favorably with the exercise test for detection of CAD.
Hans, Eric C; Zwarthoed, Berdien; Seliski, Joseph; Nemke, Brett; Muir, Peter
2014-12-01
Factors that contribute to variance in ground reaction forces (GRF) include dog morphology, velocity, and trial repetition. Narrow velocity ranges are recommended to minimize variance. In a heterogeneous population of clinically normal dogs, it was hypothesized that the dog subject effect would account for the majority of variance in peak vertical force (PVF) and vertical impulse (VI) at a trotting gait, and that narrow velocity ranges would be associated with less variance. Data from 20 normal dogs were obtained. Each dog was trotted across a force platform at its habitual velocity, with controlled acceleration (±0.5 m/s²). Variance effects from 12 trotting velocity ranges were examined using repeated-measures analysis-of-covariance. Significance was set at P < 0.05. Mean dog bodyweight was 28.4 ± 7.4 kg. Individual dog and velocity significantly affected PVF and VI for thoracic and pelvic limbs (P < 0.001). Trial number significantly affected thoracic limb PVF (P < 0.001). Limb (left or right) significantly affected thoracic limb VI (P = 0.02). The magnitude of variance effects from largest to smallest was dog, velocity, trial repetition, and limb. Velocity ranges of 1.5-2.0 m/s, 1.8-2.2 m/s, and 1.9-2.2 m/s were associated with low variance and no significant effects on thoracic or pelvic limb PVF and VI. A combination of these ranges, 1.5-2.2 m/s, captured a large percentage of trials per dog (84.2 ± 21.4%) with no significant effects on thoracic or pelvic limb PVF or VI. It was concluded that wider velocity ranges facilitate capture of valid trials with little to no effect on GRF in normal trotting dogs. This concept is important for clinical trial design. PMID:25457264
ERIC Educational Resources Information Center
Trumpower, David L.
2015-01-01
Making inferences about population differences based on samples of data, that is, performing intuitive analysis of variance (IANOVA), is common in everyday life. However, the intuitive reasoning of individuals when making such inferences (even following statistics instruction), often differs from the normative logic of formal statistics. The…
On the measurement of frequency and of its sample variance with high-resolution counters
Rubiola, Enrico
2005-05-15
A frequency counter measures the input frequency ν averaged over a suitable time τ, versus the reference clock. High resolution is achieved by interpolating the clock signal. Further increased resolution is obtained by averaging multiple, highly overlapped frequency measurements. In the presence of additive white noise or white phase noise, the squared uncertainty improves from σ_ν² ∝ 1/τ² to σ_ν² ∝ 1/τ³. Surprisingly, when a file of contiguous data is fed into the formula of the two-sample (Allan) variance σ_y²(τ) = E{(1/2)(y_{k+1} − y_k)²} of the fractional frequency fluctuation y, the result is the modified Allan variance mod σ_y²(τ). But if a sufficient number of contiguous measures are averaged in order to get a longer τ and the data are fed into the same formula, the result is the (nonmodified) Allan variance. Of course, interpretation mistakes are around the corner if the counter's internal process is not well understood. The typical domain of interest is the short-term stability measurement of oscillators.
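The two-sample variance formula above lends itself to a direct numerical sketch. The function below is illustrative only (it assumes dead-time-free, contiguous fractional-frequency averages, not any particular counter's internal algorithm); averaging m contiguous measures before differencing gives the estimate at τ = m·τ0:

```python
import numpy as np

def allan_variance(y, m=1):
    """Two-sample (Allan) variance at tau = m*tau0, computed from
    contiguous fractional-frequency averages y (no dead time assumed)."""
    n = len(y) // m
    # average m contiguous measures to form frequency estimates at longer tau
    ym = y[:n * m].reshape(n, m).mean(axis=1)
    # sigma_y^2(tau) = E{(1/2) (y_{k+1} - y_k)^2}
    return 0.5 * np.mean(np.diff(ym) ** 2)

rng = np.random.default_rng(0)
white = rng.standard_normal(10_000)   # white frequency noise, unit variance
print(allan_variance(white, 1))       # ~1.0
print(allan_variance(white, 10))      # ~0.1 (falls as 1/tau for white FM)
```

For white frequency noise the estimate falls roughly as 1/τ, as the printed values suggest; reproducing the modified/nonmodified distinction discussed in the abstract would additionally require modeling the counter's overlapped averaging.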
Waldmann, Patrik; Hallander, Jon; Hoti, Fabian; Sillanpää, Mikko J.
2008-01-01
Accurate and fast computation of quantitative genetic variance parameters is of great importance in both natural and breeding populations. For experimental designs with complex relationship structures it can be important to include both additive and dominance variance components in the statistical model. In this study, we introduce a Bayesian Gibbs sampling approach for estimation of additive and dominance genetic variances in the traditional infinitesimal model. The method can handle general pedigrees without inbreeding. To balance computational time against good mixing of the Markov chain Monte Carlo (MCMC) chains, we used a hybrid Gibbs sampler that combines a single-site and a blocked Gibbs sampler. The speed of the hybrid sampler and the mixing of the single-site sampler were further improved by the use of pretransformed variables. Two traits (height and trunk diameter) from a previously published diallel progeny test of Scots pine (Pinus sylvestris L.) and two large simulated data sets with different levels of dominance variance were analyzed. We also performed Bayesian model comparison on the basis of the posterior predictive loss approach. Results showed that models with both additive and dominance components had the best fit for both height and diameter and for the simulated data with high dominance. For the simulated data with low dominance, we needed an informative prior to avoid the dominance variance component becoming overestimated. The narrow-sense heritability estimates in the Scots pine data were lower than the earlier results, which is not surprising because the level of dominance variance was rather high, especially for diameter. In general, the hybrid sampler was considerably faster than the blocked sampler and displayed better mixing properties than the single-site sampler. PMID:18558655
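The Gibbs-sampling principle behind such variance estimation can be illustrated on a deliberately tiny conjugate model (a normal likelihood with unknown mean and variance under flat/Jeffreys priors). This is a hedged sketch of the mechanics only, not the pedigree-based infinitesimal model of the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(5.0, 2.0, size=200)   # toy data: true mean 5, true variance 4
n = len(y)

mu, sig2 = 0.0, 1.0                  # arbitrary starting values
mus, sig2s = [], []
for it in range(4000):
    # sample mu | sig2, y  (flat prior on mu): N(ybar, sig2/n)
    mu = rng.normal(y.mean(), np.sqrt(sig2 / n))
    # sample sig2 | mu, y  (Jeffreys prior ~ 1/sig2): scaled inverse chi-square
    sse = np.sum((y - mu) ** 2)
    sig2 = sse / rng.chisquare(n)
    if it >= 1000:                   # discard burn-in
        mus.append(mu)
        sig2s.append(sig2)

print(np.mean(mus), np.mean(sig2s))  # posterior means near 5 and 4
```

Each full-conditional draw is exact here, so the chain mixes immediately; the single-site versus blocked distinction in the paper concerns how many correlated genetic effects are updated per conditional draw.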
NASA Technical Reports Server (NTRS)
Menard, Richard; Chang, Lang-Ping
1998-01-01
A Kalman filter system designed for the assimilation of limb-sounding observations of stratospheric chemical tracers, which has four tunable covariance parameters, was developed in Part I (Menard et al. 1998). The assimilation results of CH4 observations from the Cryogenic Limb Array Etalon Sounder instrument (CLAES) and the Halogen Occultation Experiment instrument (HALOE) on board the Upper Atmosphere Research Satellite are described in this paper. A robust χ² criterion, which provides a statistical validation of the forecast and observational error covariances, was used to estimate the tunable variance parameters of the system. In particular, an estimate of the model error variance was obtained. The effect of model error on the forecast error variance became critical after only three days of assimilation of CLAES observations, although it took 14 days of forecast to double the initial error variance. We further found that the model error due to numerical discretization, as arising in the standard Kalman filter algorithm, is comparable in size to the physical model error due to wind and transport modeling errors together. Separate assimilations of CLAES and HALOE observations were compared to validate the state estimate away from the observed locations. A wave-breaking event that took place several thousand kilometers away from the HALOE observation locations was well captured by the Kalman filter, owing to highly anisotropic forecast error correlations. The forecast error correlation in the assimilation of the CLAES observations was found to have a structure similar to that in pure forecast mode except for smaller length scales. Finally, we have conducted an analysis of the variance and correlation dynamics to determine their relative importance in chemical tracer assimilation problems. Results show that the optimality of a tracer assimilation system depends, for the most part, on having flow-dependent error correlation rather than on evolving the
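The χ² covariance-validation idea can be sketched with a scalar Kalman filter on a hypothetical random-walk state (not the tracer-assimilation system itself): when the assumed process and measurement variances match reality, the normalized innovation squared averages to the observation dimension (here, 1):

```python
import numpy as np

rng = np.random.default_rng(4)
q, r = 0.1, 1.0              # true process / measurement noise variances
x, xhat, P = 0.0, 0.0, 1.0   # truth, estimate, estimate variance
nis = []
for _ in range(5000):
    x = x + rng.normal(0.0, np.sqrt(q))   # random-walk truth
    z = x + rng.normal(0.0, np.sqrt(r))   # noisy measurement
    Pp = P + q                            # predicted variance
    S = Pp + r                            # innovation variance
    nu = z - xhat                         # innovation
    nis.append(nu * nu / S)               # normalized innovation squared
    K = Pp / S                            # Kalman gain
    xhat = xhat + K * nu
    P = (1.0 - K) * Pp

print(np.mean(nis))   # ~1 when the assumed covariances are consistent
```

Feeding the filter an inflated or deflated q (or r) drives this statistic away from 1, which is exactly how a χ² criterion exposes mis-tuned covariance parameters.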
Pragmatics: The State of the Art: An Online Interview with Keith Allan
ERIC Educational Resources Information Center
Allan, Keith; Salmani Nodoushan, Mohammad Ali
2015-01-01
This interview was conducted with Professor Keith Allan with the aim of providing a brief but informative summary of the state of the art of pragmatics. In providing answers to the interview questions, Professor Allan begins with a definition of pragmatics as it is practiced today, i.e., the study of the meanings of utterances with attention to…
A unique type 3 ordinary chondrite containing graphite-magnetite aggregates - Allan Hills A77011
NASA Technical Reports Server (NTRS)
Mckinley, S. G.; Scott, E. R. D.; Taylor, G. J.; Keil, K.
1982-01-01
ALHA 77011, which is the object of study in the present investigation, is a chondrite of the 1977 meteorite collection from Allan Hills, Antarctica. It contains an opaque and recrystallized silicate matrix (Huss matrix) and numerous aggregates consisting of micron- and submicron-sized graphite and magnetite. It is pointed out that no abundant graphite-magnetite aggregates could be observed in other type 3 ordinary chondrites, except for Sharps. Attention is given to the results of a modal analysis, relations between ALHA 77011 and other type 3 ordinary chondrites, and the association of graphite-magnetite and metallic Fe, Ni. The discovery of graphite-magnetite aggregates in type 3 ordinary chondrites is found to suggest that this material may have been an important component in the formation of ordinary chondrites.
Borst, J G; Kits, K S; Bier, M
1994-01-01
We have studied the variance in the decay of large spontaneous gamma-aminobutyric acid (GABA)-ergic inhibitory postsynaptic currents (IPSCs) in melanotropes of Xenopus laevis to obtain information about the number of GABAA receptor channels that bind GABA during the IPSCs. The average decay of the IPSCs is well described by the sum of two exponential functions. This suggests that a three-state Markov model is sufficient to describe the decay phase, with one of the three states being an absorbing state, entered when GABA dissociates from the GABAA receptor. We have compared the variance in the decay of large spontaneous IPSCs with the variance calculated for two different three-state models: a model with one open state, one closed state, and one absorbing state (I), and a model with two open states and one absorbing state (II). The data were better described by the more efficient model II. This suggests that the efficacy of GABA at synaptic GABAA receptor channels is high and that only a small number of channels are involved in generating the GABA-ergic IPSCs. PMID:7918986
Milias-Argeitis, Andreas; Khammash, Mustafa; Lygeros, John
2014-07-14
We address the problem of estimating steady-state quantities associated with systems of stochastic chemical kinetics. In most cases of interest, these systems are analytically intractable, and one has to resort to computational methods to estimate stationary values of cost functions. In this work, we introduce a novel variance reduction algorithm for stochastic chemical kinetics, inspired by related methods in queueing theory, in particular the use of shadow functions. Using two numerical examples, we demonstrate the efficiency of the method for the calculation of steady-state parametric sensitivities and evaluate its performance in comparison to other estimation methods.
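The shadow-function construction itself is specific to the paper, but the variance-reduction principle it exploits is akin to a control variate: subtract a correlated quantity with known mean. A hedged toy sketch (estimating E[e^X] for X ~ N(0,1), using X itself, whose mean 0 is known, as the control; nothing here is the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
x = rng.standard_normal(n)
f = np.exp(x)                        # target: E[e^X] = e^{1/2}
g = x                                # control variate with known mean 0
beta = np.cov(f, g)[0, 1] / g.var()  # near-optimal control coefficient
cv = f - beta * (g - 0.0)            # controlled estimator, same mean as f

print(f.mean(), cv.mean())           # both ~1.6487 = e^{1/2}
print(f.var() / cv.var())            # variance reduction factor, ~2.4
```

The controlled estimator is unbiased for any beta; choosing beta proportional to the covariance minimizes its variance, which is the same lever the shadow-function method pulls in the steady-state setting.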
NASA Technical Reports Server (NTRS)
Jacobson, R. A.
1975-01-01
Difficulties arise in guiding a solar electric propulsion spacecraft due to nongravitational accelerations caused by random fluctuations in the magnitude and direction of the thrust vector. These difficulties may be handled by using a low thrust guidance law based on the linear-quadratic-Gaussian problem of stochastic control theory with a minimum terminal miss performance criterion. Explicit constraints are imposed on the variances of the control parameters, and an algorithm based on the Hilbert space extension of a parameter optimization method is presented for calculation of gains in the guidance law. The terminal navigation of a 1980 flyby mission to the comet Encke is used as an example.
Davis, Tyler; LaRocque, Karen F; Mumford, Jeanette A; Norman, Kenneth A; Wagner, Anthony D; Poldrack, Russell A
2014-08-15
Multi-voxel pattern analysis (MVPA) has led to major changes in how fMRI data are analyzed and interpreted. Many studies now report both MVPA results and results from standard univariate voxel-wise analysis, often with the goal of drawing different conclusions from each. Because MVPA results can be sensitive to latent multidimensional representations and processes whereas univariate voxel-wise analysis cannot, one conclusion that is often drawn when MVPA and univariate results differ is that the activation patterns underlying MVPA results contain a multidimensional code. In the current study, we conducted simulations to formally test this assumption. Our findings reveal that MVPA tests are sensitive to the magnitude of voxel-level variability in the effect of a condition within subjects, even when the same linear relationship is coded in all voxels. We also find that MVPA is insensitive to subject-level variability in mean activation across an ROI, which is the primary variance component of interest in many standard univariate tests. Together, these results illustrate that differences between MVPA and univariate tests do not afford conclusions about the nature or dimensionality of the neural code. Instead, targeted tests of the informational content and/or dimensionality of activation patterns are critical for drawing strong conclusions about the representational codes that are indicated by significant MVPA results. PMID:24768930
Seizures in the life and works of Edgar Allan Poe.
Bazil, C W
1999-06-01
Edgar Allan Poe, one of the most celebrated of American storytellers, lived through and wrote descriptions of episodic unconsciousness, confusion, and paranoia. These symptoms have been attributed to alcohol or drug abuse but also could represent complex partial seizures, prolonged postictal states, or postictal psychosis. Complex partial seizures were not well described in Poe's time, which could explain a misdiagnosis. Alternatively, he may have suffered from complex partial epilepsy that was complicated or caused by substance abuse. Even today, persons who have epilepsy are mistaken for substance abusers and occasionally are arrested during postictal confusional states. Poe was able to use creative genius and experiences from illness to create memorable tales and poignant poems. PMID:10369317
Nannobacterial alteration of pyroxenes in martian meteorite Allan Hills 84001
NASA Astrophysics Data System (ADS)
Folk, Robert L.; Taylor, Lawrence A.
2002-08-01
In martian meteorite Allan Hills (ALH) 84001, this scanning electron microscope study was focused on the ferromagnesian minerals, which are extensively covered with nanometer-size bodies mainly 30-100 nm in diameter. These bodies range from spheres to ovoids to caterpillar shapes and resemble, both in size and shape, nannobacteria that attack weathered rocks on Earth and that can be cultured. Dense colonies alternate with clean, smooth cleavage surfaces, possibly formed later. Statistical study shows that the distribution of presumed nannobacteria is very clustered. In addition to the small bodies, there are a few occurrences of ellipsoidal 200-400 nm objects, that are within the lower size range of "normal" earthly bacteria. We conclude that the nanobodies so abundant in ALH 84001 are indeed nannobacteria, confirming the initial assertion of McKay et al. (1996). However, whether these bodies originated on Mars or are Antarctic contamination remains a valid question.
Petrogenetic relationship between Allan Hills 77005 and other achondrites
NASA Technical Reports Server (NTRS)
Mcsween, H. Y., Jr.; Taylor, L. A.; Stolper, E. M.; Muntean, R. A.; Okelley, G. D.; Eldridge, J. S.; Biswas, S.; Ngo, H. T.; Lipschutz, M. E.
1979-01-01
The paper presents chemical and petrologic data on the Allan Hills (ALHA) 77005 achondrite from Antarctica and explores its petrogenetic relationship with the shergottites. Petrologic similarities with the latter in terms of mineralogy, oxidation state, inferred source region composition, and shock ages suggest a genetic relationship, also indicated by volatile-to-involatile element ratios and abundances of other trace elements. ALHA 77005 may be a cumulate crystallized from a liquid parental to the materials from which the shergottites crystallized, or a sample of the peridotite from which shergottite parent liquids were derived. Chemical similarities with terrestrial ultramafic rocks suggest that it provides an additional sample of the only other solar system body whose basalt source regions are chemically similar to the upper mantle of the Earth.
The History of Allan Hills 84001 Revised: Multiple Shock Events
NASA Technical Reports Server (NTRS)
Treiman, Allan H.
1998-01-01
The geologic history of Martian meteorite Allan Hills (ALH) 84001 is more complex than previously recognized, with evidence for four or five crater-forming impacts onto Mars. This history of repeated deformation and shock metamorphism appears to weaken some arguments that have been offered for and against the hypothesis of ancient Martian life in ALH 84001. Allan Hills 84001 formed originally from basaltic magma. Its first impact event (I1) is inferred from the deformation (D1) that produced the granular-textured bands ("crush zones") that transect the original igneous fabric. Deformation D1 is characterized by intense shear and may represent excavation or rebound flow of rock beneath a large impact crater. An intense thermal metamorphism followed D1 and may be related to it. The next impact (I2) produced fractures (Fr2) in which carbonate "pancakes" were deposited and produced feldspathic glass from some of the igneous feldspars and silica. After I2, carbonate pancakes and globules were deposited in Fr2 fractures and replaced feldspathic glass and possibly crystalline silicates. Next, feldspars, feldspathic glass, and possibly some carbonates were mobilized and melted in the third impact (I3). Microfaulting, intense fracturing, and shear are also associated with I3. In the fourth impact (I4), the rock was fractured and deformed without significant heating, which permitted remnant magnetization directions to vary across fracture surfaces. Finally, ALH 84001 was ejected from Mars in event I5, which could be identical to I4. This history of multiple impacts is consistent with the photogeology of the Martian highlands and may help resolve some apparent contradictions among recent results on ALH 84001. For example, the submicron rounded magnetite grains in the carbonate globules could be contemporaneous with carbonate deposition, whereas the elongate magnetite grains, epitaxial on carbonates, could be ascribed to vapor-phase deposition during I3.
Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan
2014-01-01
Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as is the case for GPS and VLBI baselines and for LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada (2000) on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements, and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and, in turn, on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
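The defining feature of a multiplicative error model can be simulated directly. This is a hedged toy sketch, not the paper's LS adjustments: the observation noise scales with the true value, so the raw residual spread is heteroscedastic while the relative residuals are homoscedastic and recover the error factor:

```python
import numpy as np

rng = np.random.default_rng(1)
truth = np.array([10.0, 100.0, 1000.0])   # hypothetical terrain heights (m)
sigma = 0.01                               # 1% multiplicative error
# L_i = y_i * (1 + e_i), e_i ~ N(0, sigma^2): error proportional to the signal
obs = truth * (1.0 + sigma * rng.standard_normal((5000, 3)))

print(obs.std(axis=0))        # grows with the signal: ~[0.1, 1.0, 10.0]
sigma_hat = (obs / truth - 1.0).std()
print(sigma_hat)              # relative residuals recover sigma, ~0.01
```

Treating such data as additive-error observations (constant variance) mis-weights the large values, which is the distortion the paper quantifies for LiDAR-based DEMs.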
NASA Astrophysics Data System (ADS)
Coutu, S.
2014-12-01
A sensitivity analysis was conducted on an existing parsimonious model that reproduces flow in engineered urban catchments and sewer networks. The model is characterized by its parsimonious structure and is limited to seven calibration parameters. The objective of this study is to demonstrate how different levels of sensitivity analysis can influence the interpretation of input parameter relevance in urban hydrology, even for light-structured models. In this perspective, we applied a one-at-a-time (OAT) sensitivity analysis (SA) as well as a variance-based, global, and model-independent method: the calculation of Sobol indices. Sobol's first-order and total-effect indices were estimated using a Monte Carlo approach. We present evidence of the irrelevance of calculating Sobol's second-order indices when the uncertainty on index estimation is too high. Sobol's method showed that two parameters drive model performance: the subsurface discharge rate and the root zone drainage coefficient (Clapp exponent). Interestingly, the surface discharge rate responsible for flow in impervious areas has no significant relevance, contrary to what was expected from the one-at-a-time sensitivity analysis alone. This last finding is clearly not straightforward. It highlights the utility of carrying out variance-based sensitivity analysis in the domain of urban hydrology, even when using a parsimonious model, in order to prevent misunderstandings of the system dynamics and consequent management mistakes.
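First-order Sobol indices can be estimated with a pick-freeze Monte Carlo scheme. The sketch below uses a hypothetical linear stand-in for the runoff model (not the study's actual model), for which the indices are known analytically to be 0.8 and 0.2:

```python
import numpy as np

rng = np.random.default_rng(3)

def f(x):                        # toy model standing in for the runoff model
    return 2.0 * x[:, 0] + 1.0 * x[:, 1]

n = 200_000
A = rng.standard_normal((n, 2))  # two independent input factors
B = rng.standard_normal((n, 2))
fA = f(A)
V = fA.var()                     # total output variance

S1 = []
for i in range(2):
    C = B.copy()
    C[:, i] = A[:, i]            # "freeze" factor i at the values used in A
    fC = f(C)
    # pick-freeze estimate: S_i = Cov(f(A), f(C_i)) / Var(f)
    S1.append((np.mean(fA * fC) - fA.mean() * fC.mean()) / V)

print(S1)                        # ~[0.8, 0.2]
```

Total-effect indices use the complementary freeze (all factors but i), and the Monte Carlo error visible even in this two-factor toy is the same error that, per the abstract, can make second-order indices uninterpretable.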
Monte Carlo variance reduction
NASA Technical Reports Server (NTRS)
Byrn, N. R.
1980-01-01
Computer program incorporates technique that reduces variance of forward Monte Carlo method for given amount of computer time in determining radiation environment in complex organic and inorganic systems exposed to significant amounts of radiation.
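A classic variance-reduction technique for forward Monte Carlo of the kind referred to above is importance sampling. A hedged toy sketch (a Gaussian tail probability standing in for a rare radiation-transport event; this is not the NTRS program itself):

```python
import math
import numpy as np

rng = np.random.default_rng(6)
n = 100_000
exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))   # P(X > 4), X ~ N(0,1)

# naive Monte Carlo: almost every sample misses the rare event
naive = np.mean(rng.standard_normal(n) > 4.0)

# importance sampling: draw from the shifted density N(4, 1) and reweight
x = rng.normal(4.0, 1.0, n)
w = np.exp(8.0 - 4.0 * x)        # density ratio phi(x) / phi(x - 4)
is_est = np.mean((x > 4.0) * w)

print(exact, naive, is_est)      # IS lands within a few percent of exact
```

For the same sample budget, the naive estimator sees only a handful of tail hits while the reweighted estimator concentrates all its samples where the event lives, which is the trade at the heart of forward Monte Carlo variance reduction.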
Moster, Benjamin P.; Rix, Hans-Walter; Somerville, Rachel S.; Newman, Jeffrey A.
2011-04-20
Deep pencil beam surveys (<1 deg²) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by 'cosmic variance'. This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift \bar{z} and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, \bar{z}, Δz, and stellar mass m*. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ_v/σ_v) is shown to be better than 20%. We find that for GOODS at \bar{z}=2 and with Δz = 0.5, the relative cosmic variance of galaxies with m* > 10^11 M_sun is ~38%, while it is ~27% for GEMS and ~12% for COSMOS. For galaxies of m* ~ 10^10 M_sun, the relative cosmic variance is ~19% for GOODS, ~13% for GEMS, and ~6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at \bar{z}
NASA Astrophysics Data System (ADS)
Moster, Benjamin P.; Somerville, Rachel S.; Newman, Jeffrey A.; Rix, Hans-Walter
2011-04-01
Deep pencil beam surveys (<1 deg2) are of fundamental importance for studying the high-redshift universe. However, inferences about galaxy population properties (e.g., the abundance of objects) are in practice limited by "cosmic variance." This is the uncertainty in observational estimates of the number density of galaxies arising from the underlying large-scale density fluctuations. This source of uncertainty can be significant, especially for surveys which cover only small areas and for massive high-redshift galaxies. Cosmic variance for a given galaxy population can be determined using predictions from cold dark matter theory and the galaxy bias. In this paper, we provide tools for experiment design and interpretation. For a given survey geometry, we present the cosmic variance of dark matter as a function of mean redshift \\bar{z} and redshift bin size Δz. Using a halo occupation model to predict galaxy clustering, we derive the galaxy bias as a function of mean redshift for galaxy samples of a given stellar mass range. In the linear regime, the cosmic variance of these galaxy samples is the product of the galaxy bias and the dark matter cosmic variance. We present a simple recipe using a fitting function to compute cosmic variance as a function of the angular dimensions of the field, \\bar{z}, Δz, and stellar mass m *. We also provide tabulated values and a software tool. The accuracy of the resulting cosmic variance estimates (δσ v /σ v ) is shown to be better than 20%. We find that for GOODS at \\bar{z}=2 and with Δz = 0.5, the relative cosmic variance of galaxies with m *>1011 M sun is ~38%, while it is ~27% for GEMS and ~12% for COSMOS. For galaxies of m * ~ 1010 M sun, the relative cosmic variance is ~19% for GOODS, ~13% for GEMS, and ~6% for COSMOS. This implies that cosmic variance is a significant source of uncertainty at \\bar{z}=2 for small fields and massive galaxies, while for larger fields and intermediate mass galaxies, cosmic variance is
ERIC Educational Resources Information Center
Johnson, Colleen Cook
This study integrates into one comprehensive Monte Carlo simulation a vast array of previously defined and substantively interrelated research studies of the robustness of analysis of variance (ANOVA) and analysis of covariance (ANCOVA) statistical procedures. Three sets of balanced ANOVA and ANCOVA designs (group sizes of 15, 30, and 45) and one…
Shirali, M; Nielsen, V H; Møller, S H; Jensen, J
2015-10-01
The aim of this study was to determine the genetic background of longitudinal residual feed intake (RFI) and BW gain in farmed mink using random regression methods considering heterogeneous residual variances. The individual BW was measured every 3 weeks from 63 to 210 days of age for 2139 male+female pairs of juvenile mink during the growing-furring period. Cumulative feed intake was calculated six times with 3-week intervals based on daily feed consumption between weighings from 105 to 210 days of age. Genetic parameters for RFI and BW gain in males and females were obtained using univariate random regression with Legendre polynomials containing an animal genetic effect and permanent environmental effect of litter along with heterogeneous residual variances. Heritability estimates for RFI increased with age from 0.18 (0.03, posterior standard deviation (PSD)) at 105 days of age to 0.49 (0.03, PSD) and 0.46 (0.03, PSD) at 210 days of age in male and female mink, respectively. The heritability estimates for BW gain increased with age and had moderate to high range for males (0.33 (0.02, PSD) to 0.84 (0.02, PSD)) and females (0.35 (0.03, PSD) to 0.85 (0.02, PSD)). RFI estimates during the growing period (105 to 126 days of age) showed high positive genetic correlations with the pelting RFI (210 days of age) in males (0.86 to 0.97) and females (0.92 to 0.98). However, phenotypic correlations were lower, from 0.47 to 0.76 in males and 0.61 to 0.75 in females. Furthermore, BW records in the growing period (63 to 126 days of age) had moderate (male: 0.39, female: 0.53) to high (male: 0.87, female: 0.94) genetic correlations with pelting BW (210 days of age). The results of the current study showed that RFI and BW in mink are highly heritable, especially at the late furring period, suggesting potential for large genetic gains for these traits. The genetic correlations suggested that substantial genetic gain can be obtained by only considering the RFI estimate and BW at pelting
NASA Astrophysics Data System (ADS)
Shi, Guo-Liang; Zhou, Xiao-Yu; Feng, Yin-Chang; Tian, Ying-Ze; Liu, Gui-Rong; Zheng, Mei; Zhou, Yang; Zhang, Yuan-Hang
2015-01-01
The CMB (Chemical Mass Balance) 8.2 model released by the USEPA is a commonly used receptor model that can determine estimated source contributions and their uncertainties (called default uncertainty). In this study, we propose an improved CMB uncertainty for the modeled contributions (called EV-LS uncertainty) by adding the difference between the modeled and measured values for ambient species concentrations to the default CMB uncertainty, based on the effective variance least squares (EV-LS) solution. This correction reconciles the uncertainty estimates for EV and OLS regression. To verify the formula for the EV-LS CMB uncertainty, the same ambient datasets were analyzed using the equation we developed for EV-LS CMB uncertainty and a standard statistical package, SPSS 16.0. The same results were obtained in both ways, indicating that the equation for EV-LS CMB uncertainty proposed here is acceptable. In addition, four ambient datasets were studied using CMB 8.2, and the source contributions as well as the associated uncertainties were obtained accordingly.
Smith, D A; Steffen, W; Simmons, R M; Sleep, J
2001-01-01
In single-molecule experiments on the interaction between myosin and actin, mechanical events are embedded in Brownian noise. Methods of detecting events have progressed from simple manual detection of shifts in the position record to threshold-based selection of intermittent periods of reduction in noise. However, none of these methods provides a "best fit" to the data. We have developed a Hidden-Markov algorithm that assumes a simple kinetic model for the actin-myosin interaction and provides automatic, threshold-free, maximum-likelihood detection of events. The method is developed for the case of a weakly trapped actin-bead dumbbell interacting with a stationary myosin molecule (Finer, J. T., R. M. Simmons, and J. A. Spudich. 1994. Nature. 368:113-119). The algorithm operates on the variance of bead position signals in a running window, and is tested using Monte Carlo simulations to formulate ways of determining the optimum window width. The working stroke is derived and corrected for actin-bead link compliance. With experimental data, we find that modulation of myosin binding by the helical structure of the actin filament complicates the determination of the working stroke; however, under conditions that produce a Gaussian distribution of bound levels (cf. Molloy, J. E., J. E. Burns, J. Kendrick-Jones, R. T. Tregear, and D. C. S. White. 1995. Nature. 378:209-212), four experiments gave working strokes in the range 5.4-6.3 nm for rabbit skeletal muscle myosin S1. PMID:11606292
NASA Astrophysics Data System (ADS)
Larrinaga, Asier R.
2010-01-01
I consider statistical problems in the analysis of multiple-choice food-preference experiments, and propose a univariate analysis of variance design for experiments of this type. I present an example experimental design, for a hypothetical comparison of fruit colour preferences between two frugivorous bird species. In each fictitious trial, four trays each containing a known weight of artificial fruits (red, blue, black, or green) are introduced into the cage, while four equivalent trays are left outside the cage, to control for tray weight loss due to other factors (notably desiccation). The proposed univariate approach allows data from such designs to be analysed with adequate power and no major violations of statistical assumptions. Nevertheless, there is no single "best" approach for experiments of this type: the best analysis in each case will depend on the particular aims and nature of the experiments.
Getting around cosmic variance
Kamionkowski, M.; Loeb, A.
1997-10-01
Cosmic microwave background (CMB) anisotropies probe the primordial density field at the edge of the observable Universe. There is a limiting precision ("cosmic variance") with which anisotropies can determine the amplitude of primordial mass fluctuations. This arises because the surface of last scatter (SLS) probes only a finite two-dimensional slice of the Universe. Probing other SLSs observed from different locations in the Universe would reduce the cosmic variance. In particular, the polarization of CMB photons scattered by the electron gas in a cluster of galaxies provides a measurement of the CMB quadrupole moment seen by the cluster. Therefore, CMB polarization measurements toward many clusters would probe the anisotropy on a variety of SLSs within the observable Universe, and hence reduce the cosmic-variance uncertainty. © 1997 The American Physical Society
Mirzaei, H R; Verbyla, A P; Pitchford, W S
2011-01-01
A joint growth-carcass model using random regression was used to estimate the (co)variance components of beef cattle body weights and carcass quality traits and correlations between them. During a four-year period (1994-1997) of the Australian "southern crossbreeding project", mature Hereford cows (N = 581) were mated to 97 sires of Jersey, Wagyu, Angus, Hereford, South Devon, Limousin, and Belgian Blue breeds, resulting in 1141 calves. Data included 13 (for steers) and 8 (for heifers) body weight measurements approximately every 50 days from birth until slaughter and four carcass quality traits: hot standard carcass weight, rump fat depth, rib eye muscle area, and intramuscular fat content. The mixed model included fixed effects of sex, sire breed, age (linear, quadratic, and cubic), and the interactions of sex and sire breed with age. Random effects were sire, dam, management (birth location, year, post-weaning groups), and permanent environmental effects, and their interactions with linear, quadratic and cubic growth, when possible. Phenotypic, sire and dam correlations between body weights and hot standard carcass weight and rib eye muscle area were positive and moderate to high from birth to feedlot period. Management variation accounted for the largest proportion of total variation in both growth and carcass traits. Management correlations between carcass traits were high, except between rump fat depth and intramuscular fat (r = 0.26). Management correlations between body weight and carcass traits during the pre-weaning period were positive except for intramuscular fat. The correlations were low from birth to weaning, then increased dramatically and were high during the feedlot period. PMID:21425094
Parolini, Giuditta
2015-01-01
During the twentieth century statistical methods have transformed research in the experimental and social sciences. Qualitative evidence has largely been replaced by quantitative results and the tools of statistical inference have helped foster a new ideal of objectivity in scientific knowledge. The paper will investigate this transformation by considering the genesis of analysis of variance and experimental design, statistical methods nowadays taught in every elementary course of statistics for the experimental and social sciences. These methods were developed by the mathematician and geneticist R. A. Fisher during the 1920s, while he was working at Rothamsted Experimental Station, where agricultural research was in turn reshaped by Fisher's methods. Analysis of variance and experimental design required new practices and instruments in field and laboratory research, and imposed a redistribution of expertise among statisticians, experimental scientists and the farm staff. On the other hand the use of statistical methods in agricultural science called for a systematization of information management and made computing an activity integral to the experimental research done at Rothamsted, permanently integrating the statisticians' tools and expertise into the station research programme. Fisher's statistical methods did not remain confined within agricultural research and by the end of the 1950s they had come to stay in psychology, sociology, education, chemistry, medicine, engineering, economics, quality control, just to mention a few of the disciplines which adopted them. PMID:25311906
Variance Anisotropy in Kinetic Plasmas
NASA Astrophysics Data System (ADS)
Parashar, Tulasi N.; Oughton, Sean; Matthaeus, William H.; Wan, Minping
2016-06-01
Solar wind fluctuations admit well-documented anisotropies of the variance matrix, or polarization, related to the mean magnetic field direction. Typically, one finds a ratio of perpendicular variance to parallel variance of the order of 9:1 for the magnetic field. Here we study the question of whether a kinetic plasma spontaneously generates and sustains parallel variances when initiated with only perpendicular variance. We find that parallel variance grows and saturates at about 5% of the perpendicular variance in a few nonlinear times irrespective of the Reynolds number. For sufficiently large systems (Reynolds numbers) the variance approaches values consistent with the solar wind observations.
Conversations across Meaning Variance
ERIC Educational Resources Information Center
Cordero, Alberto
2013-01-01
Progressive interpretations of scientific theories have long been denounced as naive, because of the inescapability of meaning variance. The charge reportedly applies to recent realist moves that focus on theory-parts rather than whole theories. This paper considers the question of what "theory-parts" of epistemic significance (if any) relevantly…
Minimum variance geographic sampling
NASA Technical Reports Server (NTRS)
Terrell, G. R. (Principal Investigator)
1980-01-01
Resource inventories require samples with geographical scatter, sometimes not as widely spaced as would be hoped. A simple model of correlation over distances is used to create a minimum variance unbiased estimate of population means. The fitting procedure is illustrated using data from an estimate of Missouri corn acreage.
NASA Astrophysics Data System (ADS)
Zhao, J.; Tiede, C.
2011-05-01
An implementation of uncertainty analysis (UA) and quantitative global sensitivity analysis (SA) is applied to the non-linear inversion of gravity changes and three-dimensional displacement data which were measured in an active volcanic area. A didactic example is included to illustrate the computational procedure. The main emphasis is placed on the extended Fourier amplitude sensitivity test (E-FAST). This method produces the total sensitivity indices (TSIs), so that all interactions between the unknown input parameters are taken into account. The possible correlations between the output and the input parameters can be evaluated by uncertainty analysis. Uncertainty analysis results indicate the general fit between the physical model and the measurements. Results of the sensitivity analysis show quite different sensitivities for the measured changes as they relate to the unknown parameters of a physical model for an elastic-gravitational source. Assuming a fixed number of executions, thirty different seeds are used to determine the stability of this method.
Mitra, Vikram; Govorukhina, Natalia; Zwanenburg, Gooitzen; Hoefsloot, Huub; Westra, Inge; Smilde, Age; Reijmers, Theo; van der Zee, Ate G J; Suits, Frank; Bischoff, Rainer; Horvatovich, Péter
2016-04-19
Complex shotgun proteomics peptide profiles obtained in quantitative differential protein expression studies, such as in biomarker discovery, may be affected by multiple experimental factors. These preanalytical factors may affect the measured protein abundances which in turn influence the outcome of the associated statistical analysis and validation. It is therefore important to determine which factors influence the abundance of peptides in a complex proteomics experiment and to identify those peptides that are most influenced by these factors. In the current study we analyzed depleted human serum samples to evaluate experimental factors that may influence the resulting peptide profile such as the residence time in the autosampler at 4 °C, stopping or not stopping the trypsin digestion with acid, the type of blood collection tube, different hemolysis levels, differences in clotting times, the number of freeze-thaw cycles, and different trypsin/protein ratios. To this end we used a two-level fractional factorial design of resolution IV (2_IV^(7-3)). The design required analysis of 16 samples in which the main effects were not confounded by two-factor interactions. Data preprocessing using the Threshold Avoiding Proteomics Pipeline (Suits, F.; Hoekman, B.; Rosenling, T.; Bischoff, R.; Horvatovich, P. Anal. Chem. 2011, 83, 7786-7794, ref 1) produced a data matrix containing quantitative information on 2,559 peaks. The intensity of the peaks was log-transformed, and peaks with low t-test significance (p-value > 0.05) and a low absolute fold ratio (<2) between the two levels of each factor were removed. The remaining peaks were subjected to analysis of variance (ANOVA)-simultaneous component analysis (ASCA). Permutation tests were used to identify which of the preanalytical factors influenced the abundance of the measured peptides most significantly. The most important preanalytical factors affecting peptide intensity were (1) the hemolysis level
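The peak-filtering step described above (log-transform, then remove peaks that fail both the significance and the fold-ratio criteria) can be sketched in a few lines; `filter_peaks` and its precomputed per-peak p-values are illustrative assumptions, not part of the authors' pipeline:

```python
import numpy as np

def filter_peaks(intensities, pvalues, group, alpha=0.05, min_fold=2.0):
    """Peaks with a non-significant t-test (p > alpha) AND a low
    absolute fold ratio (< min_fold) between the two factor levels
    are removed; everything else is kept. Per-peak p-values are
    assumed to be precomputed. intensities: peaks x samples."""
    logI = np.log2(intensities)                     # log-transform intensities
    mean0 = logI[:, group == 0].mean(axis=1)        # mean log-intensity, level 0
    mean1 = logI[:, group == 1].mean(axis=1)        # mean log-intensity, level 1
    fold = 2.0 ** np.abs(mean1 - mean0)             # absolute fold ratio
    return (pvalues <= alpha) | (fold >= min_fold)  # True = keep the peak
```

A peak survives if either criterion holds, mirroring the abstract's removal rule (remove only when both significance and fold ratio are low).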
NASA Astrophysics Data System (ADS)
Emery, C. M.; Biancamaria, S.; Boone, A. A.; Ricci, S. M.; Garambois, P. A.; Decharme, B.; Rochoux, M. C.
2015-12-01
Land Surface Models (LSM) coupled with River Routing schemes (RRM) are used in Global Climate Models (GCM) to simulate the continental part of the water cycle. They are key components of GCMs as they provide boundary conditions to atmospheric and oceanic models. However, at global scale, errors arise mainly from simplified physics, atmospheric forcing, and input parameters. More particularly, those used in RRMs, such as river width, depth and friction coefficients, are difficult to calibrate and are mostly derived from geomorphologic relationships, which may not always be realistic. In situ measurements are then used to calibrate these relationships and validate the model, but global in situ data are very sparse. Additionally, due to the lack of a global river geomorphology database and accurate forcing, models are run at coarse resolution. This is typically the case of the ISBA-TRIP model used in this study. A complementary alternative to in situ data is satellite observations. In this regard, the Surface Water and Ocean Topography (SWOT) satellite mission, jointly developed by NASA/CNES/CSA/UKSA and scheduled for launch around 2020, should be very valuable for calibrating RRM parameters. It will provide maps of water surface elevation for rivers wider than 100 meters over continental surfaces between 78°S and 78°N, and also direct observation of river geomorphological parameters such as width and slope. Yet, before assimilating such data, it is necessary to analyze the RRM's temporal sensitivity to time-constant parameters. This study presents such an analysis over large river basins for the TRIP RRM. Model output uncertainty, represented by unconditional variance, is decomposed into ordered contributions from each parameter. A time-dependent analysis then identifies the parameters to which modeled water level and discharge are most sensitive over a hydrological year. The results show that local parameters directly impact water levels, while
THE DEAD-LIVING-MOTHER: MARIE BONAPARTE'S INTERPRETATION OF EDGAR ALLAN POE'S SHORT STORIES.
Obaid, Francisco Pizarro
2016-06-01
Princess Marie Bonaparte is an important figure in the history of psychoanalysis, remembered for her crucial role in arranging Freud's escape to safety in London from Nazi Vienna, in 1938. This paper connects us to Bonaparte's work on Poe's short stories. Founded on concepts of Freudian theory and an exhaustive review of the biographical facts, Marie Bonaparte concluded that the works of Edgar Allan Poe drew their most powerful inspirational force from the psychological consequences of the early death of the poet's mother. In Bonaparte's approach, which was powerfully influenced by her recognition of the impact of the death of her own mother when she was born-an understanding she gained in her analysis with Freud-the thesis of the dead-living-mother achieved the status of a paradigmatic key to analyze and understand Poe's literary legacy. This paper explores the background and support of this hypothesis and reviews Bonaparte's interpretation of Poe's most notable short stories, in which extraordinary female figures feature in the narrative. PMID:27194275
Lee, Ji Seong; Kim, Eun Young; Choi, Younyoung; Koo, Ja Hyouk
2014-01-01
Children's reasoning about the afterlife emerges naturally as a developmental regularity. Although a biological understanding of death increases in accordance with cognitive development, biological and supernatural explanations of death may coexist in a complementary manner, being deeply imbedded in cultural contexts. This study conducted a content analysis of 40 children's death-themed picture books in Western Europe and East Asia. It can be inferred that causality and non-functionality are highly integrated with the naturalistic and supernatural understanding of death in Western Europe, whereas the literature in East Asia seems to rely on naturalistic aspects of death and focuses on causal explanations. PMID:24738761
The final days of Edgar Allan Poe: clues to an old mystery using 21st century medical science.
Francis, Roger A
This study examines all documented information regarding the final days and death of Edgar Allan Poe (1809-1849), in an attempt to determine the most likely cause of death of the American poet, short story writer, and literary critic. Information was gathered from letters, newspaper accounts, and magazine articles written during the period after Poe's death, and also from biographies and medical journal articles written up until the present. A chronology of Poe's final days was constructed, and this was used to form a differential diagnosis of possible causes of death. Death theories over the last 160 years were analyzed using this information. This analysis, along with a review of Poe's past medical history, would seem to support an alcohol-related cause of death. PMID:20222235
An Efficient and Configurable Preprocessing Algorithm to Improve Stability Analysis.
Sesia, Ilaria; Cantoni, Elena; Cernigliaro, Alice; Signorile, Giovanna; Fantino, Gianluca; Tavella, Patrizia
2016-04-01
The Allan variance (AVAR) is widely used to measure the stability of experimental time series. Specifically, AVAR is commonly used in space applications such as monitoring the clocks of the global navigation satellite systems (GNSSs). In these applications, the experimental data present some peculiar aspects which are not generally encountered when the measurements are carried out in a laboratory. Space clocks' data can in fact present outliers, jumps, and missing values, which corrupt the clock characterization. Therefore, an efficient preprocessing is fundamental to ensure a proper data analysis and improve the stability estimation performed with the AVAR or other similar variances. In this work, we propose a preprocessing algorithm and its implementation in a robust software code (in MATLAB language) able to deal with time series of experimental data affected by nonstationarities and missing data; our method properly detects and removes anomalous behaviors, hence making the subsequent stability analysis more reliable. PMID:26540679
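The preprocess-then-estimate pipeline described here can be approximated in a few lines. The sketch below (NumPy, not the authors' MATLAB code) pairs a simple MAD-based outlier screen, used here as a stand-in for the paper's full jump/outlier handling, with the standard overlapping Allan variance on phase data, tolerating missing values as NaNs:

```python
import numpy as np

def remove_outliers(x, nsig=5.0):
    """Flag points far from the median (MAD-based rule) and replace
    them with NaN. A minimal stand-in for the paper's preprocessing,
    which also handles jumps and other nonstationarities."""
    med = np.nanmedian(x)
    mad = np.nanmedian(np.abs(x - med))
    sigma = 1.4826 * mad  # MAD -> standard deviation for Gaussian data
    out = x.copy()
    out[np.abs(x - med) > nsig * sigma] = np.nan
    return out

def overlapping_avar(x, tau0, m):
    """Overlapping Allan variance of phase data x (sample period tau0)
    at averaging time m*tau0; nanmean skips gaps left by preprocessing."""
    d2 = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]  # second differences of phase
    return np.nanmean(d2 ** 2) / (2 * (m * tau0) ** 2)
```

For a clock with constant frequency offset (linear phase), the second differences vanish and the AVAR is zero, which is a quick sanity check on the estimator.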
NASA Astrophysics Data System (ADS)
Rexer, Moritz; Hirt, Christian
2015-09-01
Classical degree variance models (such as Kaula's rule or the Tscherning-Rapp model) often rely on low-resolution gravity data and so are subject to extrapolation when used to describe the decay of the gravity field at short spatial scales. This paper presents a new degree variance model based on the recently published GGMplus near-global land areas 220 m resolution gravity maps (Geophys Res Lett 40(16):4279-4283, 2013). We investigate and use a 2D-DFT (discrete Fourier transform) approach to transform GGMplus gravity grids into degree variances. The method is described in detail and its approximation errors are studied using closed-loop experiments. Focus is placed on tiling, azimuth averaging, and windowing effects in the 2D-DFT method and on analytical fitting of degree variances. Approximation errors of the 2D-DFT procedure on the (spherical harmonic) degree variance are found to be at the 10-20 % level. The importance of the reference surface (sphere, ellipsoid or topography) of the gravity data for correct interpretation of degree variance spectra is highlighted. The effect of the underlying mass arrangement (spherical or ellipsoidal approximation) on the degree variances is found to be crucial at short spatial scales. A rule-of-thumb for transformation of spectra between spherical and ellipsoidal approximation is derived. Application of the 2D-DFT on GGMplus gravity maps yields a new degree variance model to degree 90,000. The model is supported by GRACE, GOCE, EGM2008 and forward-modelled gravity at 3 billion land points over all land areas within the SRTM data coverage and provides gravity signal variances at the surface of the topography. The model yields omission errors of 9 mGal for gravity (1.5 cm for geoid effects) at scales of 10 km, 4 mGal (1 mm) at 2-km scales, and 2 mGal (0.2 mm) at 1-km scales.
The Art of George Morrison and Allan Houser: The Development and Impact of Native Modernism
ERIC Educational Resources Information Center
Montiel, Anya
2005-01-01
The idea for a retrospective on George Morrison and Allan Houser as one of the inaugural exhibitions at the National Museum of the American Indian (NMAI) came from the NMAI curator of contemporary art, Truman Lowe. An artist and sculptor himself, Lowe knew both artists personally and saw them as mentors and visionaries. Lowe advised an exhibition…
Where Were the Whistleblowers? The Case of Allan McDonald and Roger Boisjoly.
ERIC Educational Resources Information Center
Stewart, Lea P.
Employees who "blow the whistle" on their company because they believe it is engaged in practices that are illegal, immoral, or harmful to the public, often face grave consequences for their actions, including demotion, harassment, forced resignation, or termination. The case of Allan McDonald and Roger Boisjoly, engineers who blew the whistle on…
Horror from the Soul--Gothic Style in Allan Poe's Horror Fictions
ERIC Educational Resources Information Center
Sun, Chunyan
2015-01-01
Edgar Allan Poe made tremendous contribution to horror fiction. Poe's inheritance of gothic fiction and American literature tradition combined with his living experience forms the background of his horror fictions. He inherited the tradition of the gothic fictions and made innovations on it, so as to penetrate to subconsciousness. Poe's horror…
European Studies as Answer to Allan Bloom's "The Closing of the American Mind."
ERIC Educational Resources Information Center
Macdonald, Michael H.
European studies can provide a solution to several of the issues raised in Allan Bloom's "The Closing of the American Mind." European studies pursue the academic quest for what is truth, what is goodness, and what is beauty. In seeking to answer these questions, the Greeks were among the first to explore many of humanity's problems and their…
Allan M. Freedman, LLB: a lawyer’s gift to Canadian chiropractors
Brown, Douglas M.
2007-01-01
This paper reviews the leadership role, contributions, accolades, and impact of Professor Allan Freedman through a 30 year history of service to CMCC and the chiropractic profession in Canada. Professor Freedman has served as an educator, philanthropist and also as legal counsel. His influence on chiropractic organizations and chiropractors during this significant period in the profession is discussed. PMID:18060008
Observation, Inference, and Imagination: Elements of Edgar Allan Poe's Philosophy of Science
ERIC Educational Resources Information Center
Gelfert, Axel
2014-01-01
Edgar Allan Poe's standing as a literary figure, who drew on (and sometimes dabbled in) the scientific debates of his time, makes him an intriguing character for any exploration of the historical interrelationship between science, literature and philosophy. His sprawling "prose-poem" "Eureka" (1848), in particular, has…
ERIC Educational Resources Information Center
Bembenutty, Hefer
2012-01-01
This article presents an interview with Allan Wigfield, professor and chair of the Department of Human Development and distinguished scholar-teacher at the University of Maryland. He has authored more than 100 peer-reviewed journal articles and book chapters on children's motivation and other topics. He is a fellow of Division 15 (Educational…
Gong, Yunye; Veesler, David; Doerschuk, Peter C; Johnson, John E
2016-03-01
Cryo EM structures of maturation-intermediate Prohead I of bacteriophage HK97 with (PhI(Pro+)) and without (PhI(Pro-)) the viral protease packaged have been reported (Veesler et al., 2014). In spite of PhI(Pro+) containing an additional ∼100 × 24 kD of protein, the two structures appeared identical, although the two particles have substantially different biochemical properties, e.g., PhI(Pro-) is less stable to disassembly conditions such as urea. Here the same cryo EM images are used to characterize the spatial heterogeneity of the particles at 17 Å resolution by variance analysis, showing that PhI(Pro-) has roughly twice the standard deviation of PhI(Pro+). Furthermore, the greatest differences in standard deviation are present in the region where the δ-domain, not seen in X-ray crystallographic structures or fully seen in cryo EM, is expected to be located. Thus the presence of the protease appears to stabilize the δ-domain, which the protease will eventually digest. PMID:26724602
Zoffoli, Hugo José Oliveira; Varella, Carlos Alberto Alves; do Amaral-Sobrinho, Nelson Moura Brasil; Zonta, Everaldo; Tolón-Becerra, Alfredo
2013-11-01
In environmental monitoring, variables with analytically non-detected values are commonly encountered. For the statistical evaluation of these data, most of the methods that produce a less biased performance require specific computer programs. In this paper, a statistical method based on the median semi-variance (SemiV) is proposed to estimate the position and spread statistics in a dataset with single left-censoring. The performances of the SemiV method and 12 other statistical methods are evaluated using real and complete datasets. The performances of all the methods are influenced by the percentage of censored data. In general, the simple substitution and deletion methods showed biased performance, with exceptions for L/2, Inter and L/√2 methods that can be used with caution under specific conditions. In general, the SemiV method and other parametric methods showed similar performances and were less biased than other methods. The SemiV method is a simple and accurate procedure that can be used in the analysis of datasets with less than 50% of left-censored data. PMID:23830887
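The simple substitution baselines the paper compares against (replace each non-detect by L/2 or L/√2, where L is the detection limit) can be sketched as below; this is only the biased baseline, not the SemiV method itself, and the function name is an illustrative assumption:

```python
import numpy as np

def substitute_censored(values, censored, limit, method="L/2"):
    """Substitution estimators for single left-censoring: replace
    non-detects by L/2 or L/sqrt(2) before computing position and
    spread. These are the simple (often biased) baselines discussed
    in the abstract; SemiV itself is more involved."""
    sub = {"L/2": limit / 2.0, "L/sqrt2": limit / np.sqrt(2.0)}[method]
    x = np.where(censored, sub, values)  # fill censored slots only
    return x.mean(), x.std(ddof=1)       # position and spread statistics
```

As the abstract notes, such substitutions can be used with caution only under specific conditions, and their bias grows with the percentage of censored data.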
NASA Astrophysics Data System (ADS)
Hackl, M.; Malservisi, R.; Hugentobler, U.; Wonnacott, R.
2011-11-01
We present a method to derive velocity uncertainties from GPS position time series that are affected by time-correlated noise. This method is based on the Allan variance, which is widely used in the estimation of oscillator stability and requires neither spectral analysis nor maximum likelihood estimation (MLE). The Allan variance of the rate (AVR) is calculated in the time domain and hence is not too sensitive to gaps in the time series. We derived analytical expressions of the AVR for different kinds of noises like power law noise, white noise, flicker noise, and random walk and found an expression for the variance produced by an annual signal. These functional relations form the basis of error models that have to be fitted to the AVR in order to estimate the velocity uncertainty. Finally, we applied the method to the South Africa GPS network TrigNet. Most time series show noise characteristics that can be modeled by a power law noise plus an annual signal. The method is computationally very cheap, and the results are in good agreement with the ones obtained by methods based on MLE.
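The AVR idea, rate (slope) estimates over consecutive windows differenced as in the Allan variance, can be sketched as follows; the exact windowing and fitting details of the paper's estimator are assumptions here:

```python
import numpy as np

def allan_variance_of_rate(pos, dt, m):
    """Sketch of an Allan variance of the rate (AVR): split a position
    time series (sample interval dt) into consecutive windows of m
    samples, fit a rate in each window, and take half the mean squared
    difference of adjacent rates. Hackl et al.'s estimator may differ
    in detail; gaps would be handled by skipping incomplete windows."""
    n = len(pos) // m
    t = np.arange(m) * dt  # local time axis within a window
    rates = [np.polyfit(t, pos[i * m:(i + 1) * m], 1)[0] for i in range(n)]
    d = np.diff(rates)
    return 0.5 * np.mean(d ** 2)
```

For a station moving at constant velocity with no noise, adjacent window rates agree and the AVR is (numerically) zero; time-correlated noise makes the AVR decay more slowly with window length, which is what the fitted error models exploit.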
Variational bayesian method of estimating variance components.
Arakawa, Aisaku; Taniguchi, Masaaki; Hayashi, Takeshi; Mikawa, Satoshi
2016-07-01
We developed a Bayesian analysis approach by using a variational inference method, a so-called variational Bayesian method, to determine the posterior distributions of variance components. This variational Bayesian method and an alternative Bayesian method using Gibbs sampling were compared in estimating genetic and residual variance components from both simulated data and publicly available real pig data. In the simulated data set, we observed strong bias toward overestimation of genetic variance for the variational Bayesian method in the case of low heritability and low population size, and less bias was detected with larger population sizes in both methods examined. No differences in the estimates of variance components between the variational Bayesian and the Gibbs sampling methods were found in the real pig data. However, the posterior distributions of the variance components obtained with the variational Bayesian method had shorter tails than those obtained with the Gibbs sampling. Consequently, the posterior standard deviations of the genetic and residual variances of the variational Bayesian method were lower than those of the method using Gibbs sampling. The computing time required was much shorter with the variational Bayesian method than with the method using Gibbs sampling. PMID:26877207
Kersseboom, Simone; Kremers, Gert-Jan; Friesema, Edith C H; Visser, W Edward; Klootwijk, Wim; Peeters, Robin P; Visser, Theo J
2013-05-01
Monocarboxylate transporter 8 (MCT8) is a thyroid hormone (TH)-specific transporter. Mutations in the MCT8 gene are associated with Allan-Herndon-Dudley Syndrome (AHDS), consisting of severe psychomotor retardation and disturbed TH parameters. To study the functional consequences of different MCT8 mutations in detail, we combined functional analysis in different cell types with live-cell imaging of the cellular distribution of seven mutations that we identified in patients with AHDS. We used two cell models to study the mutations in vitro: 1) transiently transfected COS1 and JEG3 cells, and 2) stably transfected Flp-in 293 cells expressing a MCT8-cyan fluorescent protein construct. All seven mutants were expressed at the protein level and showed a defect in T3 and T4 transport in uptake and metabolism studies. Three mutants (G282C, P537L, and G558D) had residual uptake activity in Flp-in 293 and COS1 cells, but not in JEG3 cells. Four mutants (G221R, P321L, D453V, P537L) were expressed at the plasma membrane. The mobility in the plasma membrane of P537L was similar to WT, but the mobility of P321L was altered. The other mutants studied (insV236, G282C, G558D) were predominantly localized in the endoplasmic reticulum. In essence, loss of function by MCT8 mutations can be divided into two groups: mutations that result in partial or complete loss of transport activity (G221R, P321L, D453V, P537L) and mutations that mainly disturb protein expression and trafficking (insV236, G282C, G558D). The cell type-dependent results suggest that MCT8 mutations in AHDS patients may have tissue-specific effects on TH transport, probably caused by tissue-specific expression of as-yet-unknown MCT8-interacting proteins. PMID:23550058
Cosmology without cosmic variance
Bernstein, Gary M.; Cai, Yan -Chuan
2011-10-01
The growth of structures in the Universe is described by a function G that is predicted by the combination of the expansion history of the Universe and the laws of gravity within it. We examine the improvements in constraints on G that are available from the combination of a large-scale galaxy redshift survey with a weak gravitational lensing survey of background sources. We describe a new combination of such observations that in principle yields a measure of the growth rate that is free of sample variance, i.e. the uncertainty in G can be reduced without bound by increasing the number of redshifts obtained within a finite survey volume. The addition of background weak lensing data to a redshift survey increases information on G by an amount equivalent to a 10-fold increase in the volume of a standard redshift-space distortion measurement - if the lensing signal can be measured to sub-per cent accuracy. This argues that a combined lensing and redshift survey over a common low-redshift volume of the Universe is a more powerful test of general relativity than an isolated redshift survey over a larger volume at high redshift, especially as surveys begin to cover most of the available sky.
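The sample-variance cancellation at the heart of this argument can be illustrated with a toy calculation: if lensing measures the same density modes that the galaxy survey traces with some bias, comparing the two tracers mode by mode leaves only shot noise in the bias estimate. All numbers below are illustrative, not from the paper:

```python
import numpy as np
rng = np.random.default_rng(0)

n_modes = 5_000
delta = rng.normal(0.0, 1.0, n_modes)        # matter modes (one fixed realization)
b_true, shot = 1.7, 0.1                      # galaxy bias and shot-noise level
g = b_true*delta + rng.normal(0.0, shot, n_modes)  # galaxy tracer of the SAME modes

# Mode-by-mode comparison: the realization of delta cancels out, so the
# error on b_hat scales as shot/sqrt(sum(delta^2)) and shrinks without
# bound as more modes are measured within the same finite volume.
b_hat = np.sum(g*delta) / np.sum(delta*delta)
```

By contrast, an estimator built from the galaxy field alone would inherit the cosmic variance of the particular delta realization, which no amount of extra sampling in the same volume can remove.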
Minimum variance beamformer weights revisited.
Moiseev, Alexander; Doesburg, Sam M; Grunau, Ruth E; Ribary, Urs
2015-10-15
Adaptive minimum variance beamformers are widely used analysis tools in MEG and EEG. When the target brain activity presents in the form of spatially localized responses, the procedure usually involves two steps. First, positions and orientations of the sources of interest are determined. Second, the filter weights are calculated and source time courses reconstructed. This last step is the object of the current study. Despite different approaches utilized at the source localization stage, basic expressions for the weights have the same form, dictated by the minimum variance condition. These classic expressions involve the covariance matrix of the measured field, which includes contributions from both the sources of interest and the noise background. We show analytically that the same weights can alternatively be obtained if the full field covariance is replaced with that of the noise, provided the beamformer points to the true sources precisely. In practice, however, a certain mismatch is inevitable. We show that such mismatch results in partial suppression of the true sources if the traditional weights are used. To avoid this effect, the "alternative" weights based on properly estimated noise covariance should be applied at the second, source time course reconstruction step. We demonstrate mathematically and using simulated and real data that in many situations the alternative weights provide significantly better time course reconstruction quality than the traditional ones. In particular, they a) improve source-level SNR and yield more accurately reconstructed waveforms; b) provide more accurate estimates of inter-source correlations; and c) reduce the adverse influence of the source correlations on the performance of single-source beamformers, which are used most often. Importantly, the alternative weights come at no additional computational cost, as the structure of the expressions remains the same. PMID:26143207
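The analytic claim that data-covariance and noise-covariance weights coincide when the beamformer points exactly at the true source can be checked numerically; a minimal sketch for a single source with a diagonal noise covariance (all values illustrative):

```python
import numpy as np
rng = np.random.default_rng(1)

m = 12                                     # number of sensors
l = rng.normal(size=m)                     # lead field of the (assumed) source
Nn = np.diag(rng.uniform(0.5, 2.0, m))     # noise covariance
s2 = 4.0                                   # source power
C = s2*np.outer(l, l) + Nn                 # full data covariance

def lcmv_weights(cov, l):
    """Minimum variance weights with the unit-gain constraint w @ l = 1."""
    ci_l = np.linalg.solve(cov, l)
    return ci_l / (l @ ci_l)

w_data  = lcmv_weights(C, l)               # traditional: full data covariance
w_noise = lcmv_weights(Nn, l)              # alternative: noise covariance only
# With an exact lead field, Sherman-Morrison gives C^{-1} l proportional to
# Nn^{-1} l, so the two normalized weight vectors are identical.
```

The practical point of the paper is what happens when the lead field is slightly wrong: then the two sets of weights diverge, and the noise-covariance weights avoid the partial self-suppression of the true source.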
Characterizing nonconstant instrumental variance in emerging miniaturized analytical techniques.
Noblitt, Scott D; Berg, Kathleen E; Cate, David M; Henry, Charles S
2016-04-01
Measurement variance is a crucial aspect of quantitative chemical analysis. Variance directly affects important analytical figures of merit, including detection limit, quantitation limit, and confidence intervals. Most reported analyses for emerging analytical techniques implicitly assume constant variance (homoskedasticity) by using unweighted regression calibrations. Despite the assumption of constant variance, it is known that most instruments exhibit heteroskedasticity, where variance changes with signal intensity. Ignoring nonconstant variance results in suboptimal calibrations, invalid uncertainty estimates, and incorrect detection limits. Three techniques where homoskedasticity is often assumed were covered in this work to evaluate whether heteroskedasticity had a significant quantitative impact: naked-eye, distance-based detection using paper-based analytical devices (PADs); cathodic stripping voltammetry (CSV) with disposable carbon-ink electrode devices; and microchip electrophoresis (MCE) with conductivity detection. Despite these techniques representing a wide range of chemistries and precision, heteroskedastic behavior was confirmed for each. The general variance forms were analyzed, and recommendations for accounting for nonconstant variance are discussed. Monte Carlo simulations of instrument responses were performed to quantify the benefits of weighted regression, and the sensitivity to uncertainty in the variance function was tested. Results show that heteroskedasticity should be considered during development of new techniques; even moderate uncertainty (30%) in the variance function still results in weighted regression outperforming unweighted regression. We recommend utilizing the power model of variance because it is easy to apply, requires little additional experimentation, and produces higher-precision results and more reliable uncertainty estimates than assuming homoskedasticity. PMID:26995641
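A minimal sketch of why weighting matters under a power model of variance (the calibration slope, concentration levels, and variance-function coefficients below are illustrative, not from this work):

```python
import numpy as np
rng = np.random.default_rng(7)

# Calibration through the origin, y = b*x, with power-model noise sd(signal)
x = np.repeat(np.linspace(1.0, 100.0, 10), 20)   # 10 levels, 20 replicates each
b_true = 2.0
sd = 0.05*(b_true*x)**0.8 + 0.1                  # power model plus a noise floor
y = b_true*x + rng.normal(0.0, sd)

# Unweighted vs 1/variance-weighted least squares for the slope
b_ols = np.sum(x*y) / np.sum(x*x)
w = 1.0/sd**2
b_wls = np.sum(w*x*y) / np.sum(w*x*x)

# Analytic estimator variances: the weighted fit is never worse
var_ols = np.sum(x*x*sd*sd) / np.sum(x*x)**2
var_wls = 1.0 / np.sum(x*x/sd**2)
```

In practice the variance function would itself be estimated from replicates, which is where the paper's finding matters: even a 30% error in that function leaves the weighted fit ahead of the unweighted one.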
Technology Transfer Automated Retrieval System (TEKTRAN)
Phosphorus sorption data for soil of the Pembroke classification are recorded at high replication — 10 experiments at each of 7 initial concentrations — for characterizing the data error structure through variance function estimation. The results permit the assignment of reliable weights for the su...
Different individuals will take different approaches to the analysis and interpretation of data. This study attempted to quantify the effect of such individual differences on the quality of geostatistical spatial estimates. Identical spatial data sets were sent to 12 investigators,...
Cosmic-ray-produced Cl-36 and Mn-53 in Allan Hills-77 meteorites
NASA Astrophysics Data System (ADS)
Nishiizumi, K.; Murrell, M. T.; Arnold, J. R.; Elmore, D.; Ferraro, R. D.; Gove, H. E.; Finkel, R. C.
1981-01-01
Cosmic-ray-produced Mn-53 has been determined by neutron activation in nine Allan Hills-77 meteorites. Additionally, Cl-36 has been measured in seven of these objects using tandem accelerator mass spectrometry. These results, along with C-14 and Al-26 concentrations determined elsewhere, yield terrestrial ages ranging from 10,000 to 700,000 years. Weathering was not found to result in Mn-53 loss.
Mehta, S; Grabowski, T J; Razavi, M; Eaton, B; Bolinger, L
2006-02-15
Speech production introduces signal changes in fMRI data that can mimic or mask the task-induced BOLD response. Rapid event-related designs with variable ISIs address these concerns by minimizing the correlation of task and speech-related signal changes without sacrificing efficiency; however, the increase in residual variance due to speech still decreases statistical power and must be explicitly addressed primarily through post-processing techniques. We investigated the timing, magnitude, and location of speech-related variance in an overt picture naming fMRI study with a rapid event-related design, using a data acquisition system that time-stamped image acquisitions, speech, and a pneumatic belt signal on the same clock. Using a spectral subtraction algorithm to remove scanner gradient noise from recorded speech, we related the timing of speech, stimulus presentation, chest wall movement, and image acquisition. We explored the relationship of an extended speech event time course and respiration on signal variance by performing a series of voxelwise regression analyses. Our results demonstrate that these effects are spatially heterogeneous, but their anatomic locations converge across subjects. Affected locations included basal areas (orbitofrontal, mesial temporal, brainstem), areas adjacent to CSF spaces, and lateral frontal areas. If left unmodeled, speech-related variance can result in regional detection bias that affects some areas critically implicated in language function. The results establish the feasibility of detecting and mitigating speech-related variance in rapid event-related fMRI experiments with single word utterances. They further demonstrate the utility of precise timing information about speech and respiration for this purpose. PMID:16412665
Yoon, Grace; Visser, Theo J.
2015-01-01
Background Mutations in the thyroid hormone (TH) transporter MCT8 have been identified as the cause of Allan-Herndon-Dudley Syndrome (AHDS), characterized by severe psychomotor retardation and altered TH serum levels. Here we report a novel MCT8 mutation identified in 4 generations of one family, and its functional characterization. Methods Proband and family members were screened for 60 genes involved in X-linked cognitive impairment and the MCT8 mutation was confirmed. Functional consequences of MCT8 mutations were studied by analysis of [125I]TH transport in fibroblasts and transiently transfected JEG3 and COS1 cells, and by subcellular localization of the transporter. Results The proband and a male cousin demonstrated clinical findings characteristic of AHDS. Serum analysis showed high T3, low rT3, and normal T4 and TSH levels in the proband. A MCT8 mutation (c.869C>T; p.S290F) was identified in the proband, his cousin, and several female carriers. Functional analysis of the S290F mutant showed decreased TH transport, metabolism and protein expression in the three cell types, whereas the S290A mutation had no effect. Interestingly, both uptake and efflux of T3 and T4 were impaired in fibroblasts of the proband, compared to his healthy brother. However, no effect of the S290F mutation was observed on TH efflux from COS1 and JEG3 cells. Immunocytochemistry showed plasma membrane localization of wild-type MCT8 and the S290A and S290F mutants in JEG3 cells. Conclusions We describe a novel MCT8 mutation (S290F) in 4 generations of a family with Allan-Herndon-Dudley Syndrome. Functional analysis demonstrates loss of function of the MCT8 transporter. Furthermore, our results indicate that the function of the S290F mutant is dependent on cell context. Comparison of the S290F and S290A mutants indicates that it is not the loss of Ser but its substitution with Phe that leads to S290F dysfunction. PMID:26426690
Measurements of Ultra-Stable Oscillator (USO) Allan Deviations in Space
NASA Technical Reports Server (NTRS)
Enzer, Daphna G.; Klipstein, William M.; Wang, Rabi T.; Dunn, Charles E.
2013-01-01
Researchers have used data from the GRAIL mission to the Moon to make the first in-flight verification of ultra-stable oscillators (USOs) with Allan deviation below 10(exp -13) for 1-to-100-second averaging times. USOs are flown in space to provide stable timing and/or navigation signals for a variety of different science and programmatic missions. The Gravity Recovery and Interior Laboratory (GRAIL) mission is flying twin spacecraft, each with its own USO and with a Ka-band crosslink used to measure range fluctuations. Data from this crosslink can be combined in such a way as to give the relative time offsets of the two spacecraft's USOs and to calculate the Allan deviation describing the USOs' combined performance while orbiting the Moon. Researchers find the first direct in-space Allan deviations below 10(exp -13) for 1-to-100-second averaging times, comparable to pre-launch data, and better than measurements from ground tracking of an X-band carrier coherent with the USO. Fluctuations in Earth's atmosphere limit measurement performance in direct-to-Earth links. In-flight USO performance verification was also performed for GRAIL's parent mission, the Gravity Recovery and Climate Experiment (GRACE), using both K-band and Ka-band crosslinks.
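Given a series of relative time offsets such as those derived from the crosslink, the overlapping Allan deviation over a range of averaging times can be computed as below; the white-frequency-noise self-check is ours, not GRAIL data:

```python
import numpy as np

def allan_deviation(x, tau0, ms):
    """Overlapping Allan deviation of a time-offset series x (seconds),
    sampled every tau0 seconds, at averaging times m*tau0 for m in ms."""
    devs = []
    for m in ms:
        d = x[2*m:] - 2.0*x[m:-m] + x[:-2*m]
        devs.append(np.sqrt(np.mean(d*d) / 2.0) / (m*tau0))
    return np.array(devs)

# Self-check: for white frequency noise of level s, sigma_y(m*tau0) = s/sqrt(m)
rng = np.random.default_rng(0)
s, tau0 = 1.0, 1.0
xw = tau0*np.cumsum(rng.normal(0.0, s, 100_000))  # phase = integrated frequency
adev = allan_deviation(xw, tau0, [1, 4, 16])
```

Plotting such a curve over 1-to-100-second averaging times is how a claim like "Allan deviation below 10(exp -13)" is read off for a real oscillator pair.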
Yang Kai; Huang, Shih-Ying; Packard, Nathan J.; Boone, John M.
2010-07-15
Purpose: A simplified linear model approach was proposed to accurately model the response of a flat panel detector used for breast CT (bCT). Methods: Individual detector pixel mean and variance were measured from bCT projection images acquired both in air and with a polyethylene cylinder, with the detector operating in both fixed low gain and dynamic gain mode. Once the coefficients of the linear model are determined, the fractional additive noise can be used as a quantitative metric to evaluate the system's efficiency in utilizing x-ray photons, including the performance of different gain modes of the detector. Results: Fractional additive noise increases as the object thickness increases or as the radiation dose to the detector decreases. For bCT scan techniques on the UC Davis prototype scanner (80 kVp, 500 views total, 30 frames/s), in the low gain mode, additive noise contributes 21% of the total pixel noise variance for a 10 cm object and 44% for a 17 cm object. With the dynamic gain mode, additive noise only represents approximately 2.6% of the total pixel noise variance for a 10 cm object and 7.3% for a 17 cm object. Conclusions: The existence of the signal-independent additive noise is the primary cause for a quadratic relationship between bCT noise variance and the inverse of radiation dose at the detector. With the knowledge of the additive noise contribution to experimentally acquired images, system modifications can be made to reduce the impact of additive noise and improve the quantum noise efficiency of the bCT system.
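The quadratic relationship between pixel variance and inverse dose described in the conclusions can be captured with a two-term linear fit; a minimal sketch with illustrative dose levels and coefficients (not the UC Davis calibration values):

```python
import numpy as np

# Noise model: quantum noise scales as 1/D, additive noise as 1/D^2,
# where D is the radiation dose to the detector (arbitrary units).
D = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
a_q, b_add = 3.0, 0.8                      # illustrative coefficients
var = a_q/D + b_add/D**2                   # total pixel variance (noise-free demo)

# Linear least squares in the basis (1/D, 1/D^2) recovers both terms
A = np.column_stack([1.0/D, 1.0/D**2])
coef, *_ = np.linalg.lstsq(A, var, rcond=None)

# Fractional additive-noise contribution, largest at low dose
frac_additive = (b_add/D**2) / var
```

Once the additive coefficient is known, its fractional contribution at a given scan technique can be quoted directly, as the abstract does for the 10 cm and 17 cm objects in the two gain modes.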