On Teaching about the Coefficient of Variation in Introductory Statistics Courses
ERIC Educational Resources Information Center
Trafimow, David
2014-01-01
The standard deviation is related to the mean by virtue of the coefficient of variation. Teachers of statistics courses can make use of that fact to make the standard deviation more comprehensible for statistics students.
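As a concrete illustration of the relationship the article builds on, the coefficient of variation expresses the standard deviation as a fraction of the mean. The short Python sketch below (illustrative values only, not material from the article) computes both for a small sample.

```python
import statistics

scores = [62, 70, 75, 81, 88, 94]  # hypothetical exam scores

mean = statistics.mean(scores)
sd = statistics.stdev(scores)   # sample standard deviation
cv = sd / mean                  # coefficient of variation

# The CV re-expresses the spread relative to the mean, so students can judge
# whether a given standard deviation is "large" for the data at hand.
print(f"mean = {mean:.1f}, sd = {sd:.1f}, cv = {cv:.2%}")
```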
Statistics as Unbiased Estimators: Exploring the Teaching of Standard Deviation
ERIC Educational Resources Information Center
Wasserman, Nicholas H.; Casey, Stephanie; Champion, Joe; Huey, Maryann
2017-01-01
This manuscript presents findings from a study about the knowledge for and planned teaching of standard deviation. We investigate how understanding variance as an unbiased (inferential) estimator--not just a descriptive statistic for the variation (spread) in data--is related to teachers' instruction regarding standard deviation, particularly…
The Standard Deviation of Launch Vehicle Environments
NASA Technical Reports Server (NTRS)
Yunis, Isam
2005-01-01
Statistical analysis is used in the development of the launch vehicle environments of acoustics, vibrations, and shock. The standard deviation of these environments is critical to accurate statistical extrema. However, often very little data exists to define the standard deviation and it is better to use a typical standard deviation than one derived from a few measurements. This paper uses Space Shuttle and expendable launch vehicle flight data to define a typical standard deviation for acoustics and vibrations. The results suggest that 3dB is a conservative and reasonable standard deviation for the source environment and the payload environment.
Transport Coefficients from Large Deviation Functions
NASA Astrophysics Data System (ADS)
Gao, Chloe; Limmer, David
2017-10-01
We describe a method for computing transport coefficients from the direct evaluation of large deviation functions. This method is general, relying on only equilibrium fluctuations, and is statistically efficient, employing trajectory-based importance sampling. Equilibrium fluctuations of molecular currents are characterized by their large deviation functions, which are scaled cumulant generating functions analogous to the free energy. A diffusion Monte Carlo algorithm is used to evaluate the large deviation functions, from which arbitrary transport coefficients are derivable. We find significant statistical improvement over traditional Green-Kubo based calculations. The systematic and statistical errors of this method are analyzed in the context of specific transport coefficient calculations, including the shear viscosity, interfacial friction coefficient, and thermal conductivity.
From the Law of Large Numbers to Large Deviation Theory in Statistical Physics: An Introduction
NASA Astrophysics Data System (ADS)
Cecconi, Fabio; Cencini, Massimo; Puglisi, Andrea; Vergni, Davide; Vulpiani, Angelo
This contribution aims at introducing the topics of this book. We start with a brief historical excursion on the developments from the law of large numbers to the central limit theorem and large deviations theory. The same topics are then presented using the language of probability theory. Finally, some applications of large deviations theory in physics are briefly discussed through examples taken from statistical mechanics, dynamical and disordered systems.
Static Scene Statistical Non-Uniformity Correction
2015-03-01
Error, NUC Non-Uniformity Correction, RMSE Root Mean Squared Error, RSD Relative Standard Deviation, S3NUC Static Scene Statistical Non-Uniformity... Deviation (RSD), which normalizes the standard deviation, σ, to the mean estimated value, µ, using the equation RSD = (σ/µ) × 100. The RSD plot of the gain... estimates is shown in Figure 4.1(b). The RSD plot shows that after a sample size of approximately 10, the different photocount values and the inclusion
LD-SPatt: large deviations statistics for patterns on Markov chains.
Nuel, G
2004-01-01
Statistics on Markov chains are widely used for the study of patterns in biological sequences. Statistics on these models can be computed through several approaches. The central limit theorem (CLT), producing Gaussian approximations, is one of the most popular. Unfortunately, in order to find a pattern of interest, these methods have to deal with tail distribution events where the CLT is especially poor. In this paper, we propose a new approach based on large deviations theory to assess pattern statistics. We first recall theoretical results for empirical mean (level 1) as well as empirical distribution (level 2) large deviations on Markov chains. Then, we present the applications of these results, focusing on numerical issues. LD-SPatt is the name of GPL software implementing these algorithms. We compare this approach to several existing ones in terms of complexity and reliability and show that the large deviations are more reliable than the Gaussian approximations in absolute values as well as in terms of ranking, and are at least as reliable as compound Poisson approximations. We then finally discuss some further possible improvements and applications of this new method.
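The level-1 large deviation machinery referred to above can be illustrated numerically: for an additive functional of a finite, irreducible Markov chain, the scaled cumulant generating function is the logarithm of the Perron eigenvalue of a "tilted" transition matrix, and the rate function is its Legendre transform. The sketch below is a generic two-state illustration of that calculation, not the LD-SPatt implementation.

```python
import numpy as np

# Two-state Markov chain and an additive observable f (e.g., a pattern indicator).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
f = np.array([0.0, 1.0])   # count visits to state 1

def scgf(theta):
    """Scaled cumulant generating function: log of the Perron eigenvalue of
    the tilted matrix P_theta(x, y) = P(x, y) * exp(theta * f(y))."""
    tilted = P * np.exp(theta * f)[None, :]
    return np.log(np.max(np.real(np.linalg.eigvals(tilted))))

def rate(a, thetas=np.linspace(-20, 20, 2001)):
    """Legendre transform I(a) = sup_theta [theta * a - lambda(theta)]."""
    values = thetas * a - np.array([scgf(t) for t in thetas])
    return values.max()

# The probability of an empirical mean far from its stationary value decays
# roughly as exp(-n * I(a)); Gaussian (CLT) approximations misestimate this
# tail because they only match I(a) near its minimum.
print(rate(0.8))
```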
Lee, Hee Jin; Lee, Sungeun; Lee, Eun Joo; Song, In Ja; Kang, Byung-Cheol; Lee, Jae-Seo; Lim, Hoi-Jeong
2016-01-01
Purpose Facial asymmetry has been measured by the severity of deviation of the menton (Me) on posteroanterior (PA) cephalograms and three-dimensional (3D) computed tomography (CT). This study aimed to compare PA cephalograms and 3D CT regarding the severity of Me deviation and the direction of the Me. Materials and Methods PA cephalograms and 3D CT images of 35 patients who underwent orthognathic surgery (19 males and 16 females, with an average age of 22.1±3.3 years) were retrospectively reviewed in this study. By measuring the distance and direction of the Me from the midfacial reference line and the midsagittal plane in the cephalograms and 3D CT, respectively, the x-coordinates (x1 and x2) of the Me were obtained in each image. The difference between the x-coordinates was calculated and statistical analysis was performed to compare the severity of Me deviation and the direction of the Me in the two imaging modalities. Results A statistically significant difference in the severity of Me deviation was found between the two imaging modalities (Δx=2.45±2.03 mm, p<0.05) using the one-sample t-test. Statistically significant agreement was observed in the presence of deviation (k=0.64, p<0.05) and in the severity of Me deviation (k=0.27, p<0.05). A difference in the direction of the Me was detected in three patients (8.6%). The severity of the Me deviation was found to vary according to the imaging modality in 16 patients (45.7%). Conclusion The measurement of Me deviation may be different between PA cephalograms and 3D CT in some patients. PMID:27051637
Raico Gallardo, Yolanda Natali; da Silva-Olivio, Isabela Rodrigues Teixeira; Mukai, Eduardo; Morimoto, Susana; Sesma, Newton; Cordaro, Luca
2017-05-01
To systematically assess the current dental literature comparing the accuracy of computer-aided implant surgery when using different supporting tissues (tooth, mucosa, or bone). Two reviewers searched PubMed (1972 to January 2015) and the Cochrane Central Register of Controlled Trials (Central) (2002 to January 2015). For the assessment of accuracy, studies were included with the following outcome measures: (i) angle deviation, (ii) deviation at the entry point, and (iii) deviation at the apex. Eight clinical studies from the 1602 articles initially identified met the inclusion criteria for the qualitative analysis. Four studies (n = 599 implants) were evaluated using meta-analysis. The bone-supported guides showed a statistically significant greater deviation in angle (P < 0.001), entry point (P = 0.01), and the apex (P = 0.001) when compared to the tooth-supported guides. Conversely, when only retrospective studies were analyzed, no significant differences were revealed in the deviation at the entry point and apex. The mucosa-supported guides indicated a statistically significant greater reduction in angle deviation (P = 0.02), deviation at the entry point (P = 0.002), and deviation at the apex (P = 0.04) when compared to the bone-supported guides. Between the mucosa- and tooth-supported guides, there were no statistically significant differences for any of the outcome measures. It can be concluded that the tissue of the guide support influences the accuracy of computer-aided implant surgery. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Exploring Students' Conceptions of the Standard Deviation
ERIC Educational Resources Information Center
delMas, Robert; Liu, Yan
2005-01-01
This study investigated introductory statistics students' conceptual understanding of the standard deviation. A computer environment was designed to promote students' ability to coordinate characteristics of variation of values about the mean with the size of the standard deviation as a measure of that variation. Twelve students participated in an…
WASP (Write a Scientific Paper) using Excel -5: Quartiles and standard deviation.
Grech, Victor
2018-03-01
The almost inevitable descriptive statistics exercise that is undergone once data collection is complete, prior to inferential statistics, requires the acquisition of basic descriptors which may include standard deviation and quartiles. This paper provides pointers as to how to do this in Microsoft Excel™ and explains the relationship between the two. Copyright © 2018 Elsevier B.V. All rights reserved.
Ranking and validation of spallation models for isotopic production cross sections of heavy residua
NASA Astrophysics Data System (ADS)
Sharma, Sushil K.; Kamys, Bogusław; Goldenbaum, Frank; Filges, Detlef
2017-07-01
The production cross sections of isotopically identified residual nuclei of spallation reactions induced by 136Xe projectiles at 500A MeV on a hydrogen target were analyzed in a two-step model. The first stage of the reaction was described by the INCL4.6 model of an intranuclear cascade of nucleon-nucleon and pion-nucleon collisions, whereas the second stage was analyzed by means of four different models: ABLA07, GEM2, GEMINI++ and SMM. The quality of the data description was judged quantitatively using two statistical deviation factors: the H-factor and the M-factor. It was found that the present analysis leads to a different ranking of models as compared to that obtained from the qualitative inspection of the data reproduction. The disagreement was caused by sensitivity of the deviation factors to large statistical errors present in some of the data. A new deviation factor, the A-factor, was proposed that is not sensitive to the statistical errors of the cross sections. The quantitative ranking of models performed using the A-factor agreed well with the qualitative analysis of the data. It was concluded that using deviation factors weighted by statistical errors may lead to erroneous conclusions in the case when the data cover a large range of values. The quality of data reproduction by the theoretical models is discussed. Some systematic deviations of the theoretical predictions from the experimental results are observed.
Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains
NASA Astrophysics Data System (ADS)
Cofré, Rodrigo; Maldonado, Cesar
2018-01-01
We consider the maximum entropy Markov chain inference approach to characterize the collective statistics of neuronal spike trains, focusing on the statistical properties of the inferred model. We review large deviations techniques useful in this context to describe properties of accuracy and convergence in terms of sampling size. We use these results to study the statistical fluctuation of correlations, distinguishability and irreversibility of maximum entropy Markov chains. We illustrate these applications using simple examples where the large deviation rate function is explicitly obtained for maximum entropy models of relevance in this field.
allantools: Allan deviation calculation
NASA Astrophysics Data System (ADS)
Wallin, Anders E. E.; Price, Danny C.; Carson, Cantwell G.; Meynadier, Frédéric
2018-04-01
allantools calculates Allan deviation and related time & frequency statistics. The library is written in Python and has a GPL v3+ license. It takes as input evenly spaced observations of either fractional frequency or phase in seconds. Deviations are calculated for given tau values in seconds. Several noise generators for creating synthetic datasets are also included.
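For readers unfamiliar with the statistic itself, the (non-overlapping) Allan deviation at an averaging time tau is formed from successive tau-averages of fractional frequency. The numpy sketch below shows only that basic formula; it is an illustration of what the allantools library computes far more completely (its adev routine, to my understanding of the project, also handles phase input, overlapping estimators, and multiple tau values).

```python
import numpy as np

def allan_deviation(y, m):
    """Non-overlapping Allan deviation of fractional-frequency data y for an
    averaging factor m (in samples):
    sigma_y^2(tau) = 1/(2(M-1)) * sum_i (ybar_{i+1} - ybar_i)^2."""
    n_blocks = len(y) // m
    ybar = y[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
    diffs = np.diff(ybar)
    return np.sqrt(0.5 * np.mean(diffs ** 2))

# Synthetic white-frequency-noise data sampled at 1 Hz.
rng = np.random.default_rng(0)
y = rng.normal(0.0, 1e-11, size=100_000)

for m in (1, 10, 100, 1000):
    # For white frequency noise the Allan deviation falls off as tau^(-1/2).
    print(m, allan_deviation(y, m))
```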
Asymptotics of small deviations of the Bogoliubov processes with respect to a quadratic norm
NASA Astrophysics Data System (ADS)
Pusev, R. S.
2010-10-01
We obtain results on small deviations of Bogoliubov’s Gaussian measure occurring in the theory of the statistical equilibrium of quantum systems. For some random processes related to Bogoliubov processes, we find the exact asymptotic probability of their small deviations with respect to a Hilbert norm.
Linear maps preserving maximal deviation and the Jordan structure of quantum systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamhalter, Jan
2012-12-15
In the algebraic approach to quantum theory, a quantum observable is given by an element of a Jordan algebra and a state of the system is modelled by a normalized positive functional on the underlying algebra. Maximal deviation of a quantum observable is the largest statistical deviation one can obtain in a particular state of the system. The main result of the paper shows that each linear bijective transformation between JBW algebras preserving maximal deviations is formed by a Jordan isomorphism or a minus Jordan isomorphism perturbed by a linear functional multiple of an identity. It shows that only one numerical statistical characteristic has the power to determine the Jordan algebraic structure completely. As a consequence, we obtain that only very special maps can preserve the diameter of the spectra of elements. Nonlinear maps preserving the pseudometric given by maximal deviation are also described. The results generalize hitherto known theorems on preservers of maximal deviation in the case of self-adjoint parts of von Neumann algebras proved by Molnar.
Quantifying economic fluctuations by adapting methods of statistical physics
NASA Astrophysics Data System (ADS)
Plerou, Vasiliki
2001-09-01
The first focus of this thesis is the investigation of cross-correlations between the price fluctuations of different stocks using the conceptual framework of random matrix theory (RMT), developed in physics to describe the statistical properties of energy-level spectra of complex nuclei. RMT makes predictions for the statistical properties of matrices that are universal, i.e., do not depend on the interactions between the elements comprising the system. In physical systems, deviations from the predictions of RMT provide clues regarding the mechanisms controlling the dynamics of a given system so this framework is of potential value if applied to economic systems. This thesis compares the statistics of cross-correlation matrix
Tonkin, Matthew J.; Tiedeman, Claire; Ely, D. Matthew; Hill, Mary C.
2007-01-01
The OPR-PPR program calculates the Observation-Prediction (OPR) and Parameter-Prediction (PPR) statistics that can be used to evaluate the relative importance of various kinds of data to simulated predictions. The data considered fall into three categories: (1) existing observations, (2) potential observations, and (3) potential information about parameters. The first two are addressed by the OPR statistic; the third is addressed by the PPR statistic. The statistics are based on linear theory and measure the leverage of the data, which depends on the location, the type, and possibly the time of the data being considered. For example, in a ground-water system the type of data might be a head measurement at a particular location and time. As a measure of leverage, the statistics do not take into account the value of the measurement. As linear measures, the OPR and PPR statistics require minimal computational effort once sensitivities have been calculated. Sensitivities need to be calculated for only one set of parameter values; commonly these are the values estimated through model calibration. OPR-PPR can calculate the OPR and PPR statistics for any mathematical model that produces the necessary OPR-PPR input files. In this report, OPR-PPR capabilities are presented in the context of using the ground-water model MODFLOW-2000 and the universal inverse program UCODE_2005. The method used to calculate the OPR and PPR statistics is based on the linear equation for prediction standard deviation. Using sensitivities and other information, OPR-PPR calculates (a) the percent increase in the prediction standard deviation that results when one or more existing observations are omitted from the calibration data set; (b) the percent decrease in the prediction standard deviation that results when one or more potential observations are added to the calibration data set; or (c) the percent decrease in the prediction standard deviation that results when potential information on one or more parameters is added.
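The core quantity behind both statistics is the linear-theory prediction standard deviation, s_pred = sqrt(zᵀ(XᵀWX)⁻¹z) for a prediction sensitivity vector z, observation sensitivity matrix X, and weight matrix W; an OPR-style percent change then compares this value with and without a given observation. The following sketch is a generic illustration of that idea with made-up sensitivities, not the OPR-PPR code.

```python
import numpy as np

def prediction_sd(X, w, z, sigma2=1.0):
    """Linear-theory prediction standard deviation:
    sqrt(z^T (X^T W X)^{-1} z), with W = diag(w)."""
    cov = np.linalg.inv(X.T @ (w[:, None] * X)) * sigma2
    return float(np.sqrt(z @ cov @ z))

# Hypothetical sensitivities: 6 observations, 2 parameters, one prediction.
X = np.array([[1.0, 0.2], [1.0, 0.5], [1.0, 0.9],
              [1.0, 1.4], [1.0, 2.0], [1.0, 2.7]])
w = np.ones(len(X))          # observation weights
z = np.array([1.0, 3.0])     # sensitivity of the prediction to the parameters

base = prediction_sd(X, w, z)
for i in range(len(X)):
    reduced = prediction_sd(np.delete(X, i, axis=0), np.delete(w, i), z)
    percent_increase = 100.0 * (reduced - base) / base
    # Observations whose omission inflates the prediction SD the most carry
    # the greatest leverage for the prediction.
    print(f"omit obs {i}: prediction SD increases by {percent_increase:.1f}%")
```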
Not a Copernican observer: biased peculiar velocity statistics in the local Universe
NASA Astrophysics Data System (ADS)
Hellwing, Wojciech A.; Nusser, Adi; Feix, Martin; Bilicki, Maciej
2017-05-01
We assess the effect of the local large-scale structure on the estimation of two-point statistics of the observed radial peculiar velocities of galaxies. A large N-body simulation is used to examine these statistics from the perspective of random observers as well as 'Local Group-like' observers conditioned to reside in an environment resembling the observed Universe within 20 Mpc. The local environment systematically distorts the shape and amplitude of velocity statistics with respect to ensemble-averaged measurements made by a Copernican (random) observer. The Virgo cluster has the most significant impact, introducing large systematic deviations in all the statistics. For a simple 'top-hat' selection function, an idealized survey extending to ˜160 h-1 Mpc or deeper is needed to completely mitigate the effects of the local environment. Using shallower catalogues leads to systematic deviations of the order of 50-200 per cent depending on the scale considered. For a flat redshift distribution similar to the one of the CosmicFlows-3 survey, the deviations are even more prominent in both the shape and amplitude at all separations considered (≲100 h-1 Mpc). Conclusions based on statistics calculated without taking into account the impact of the local environment should be revisited.
Perception of midline deviations in smile esthetics by laypersons.
Ferreira, Jamille Barros; Silva, Licínio Esmeraldo da; Caetano, Márcia Tereza de Oliveira; Motta, Andrea Fonseca Jardim da; Cury-Saramago, Adriana de Alcantara; Mucha, José Nelson
2016-01-01
To evaluate the esthetic perception of upper dental midline deviation by laypersons and whether adjacent structures influence their judgment. An album with 12 randomly distributed frontal view photographs of the smile of a woman with the midline digitally deviated was evaluated by 95 laypersons. The frontal view smiling photograph was modified to create from 1 mm to 5 mm deviations in the upper midline to the left side. The photographs were cropped in two different manners and divided into two groups of six photographs each: group LCN included the lips, chin, and two-thirds of the nose, and group L included the lips only. The laypersons rated each smile using a visual analog scale (VAS). Wilcoxon test, Student's t-test and Mann-Whitney test were applied, adopting a 5% level of significance. Laypersons were able to perceive midline deviations starting at 1 mm. Statistically significant results (p< 0.05) were found for all multiple comparisons of the values in photographs of group LCN and for almost all comparisons in photographs of group L. Comparisons between the photographs of groups LCN and L showed statistically significant values (p< 0.05) when the deviation was 1 mm. Laypersons were able to perceive upper dental midline deviations of 1 mm and above when the adjacent structures of the smile were included, and deviations of 2 mm and above when only the lips were included. The visualization of structures adjacent to the smile demonstrated influence on the perception of midline deviation.
Hoeffding Type Inequalities and their Applications in Statistics and Operations Research
NASA Astrophysics Data System (ADS)
Daras, Tryfon
2007-09-01
Large Deviation theory is the branch of Probability theory that deals with rare events. Sometimes, these events can be described by the sum of random variables that deviates from its mean more than a "normal" amount. A precise calculation of the probabilities of such events turns out to be crucial in a variety of different contexts (e.g. in Probability Theory, Statistics, Operations Research, Statistical Physics, Financial Mathematics, etc.). Recent applications of the theory deal with random walks in random environments, interacting diffusions, heat conduction, polymer chains [1]. In this paper we prove an inequality of exponential type, namely theorem 2.1, which gives a large deviation upper bound for a specific sequence of r.v.s. Inequalities of this type have many applications in Combinatorics [2]. The inequality generalizes already proven results of this type, in the case of symmetric probability measures. We obtain as consequences of the inequality: (a) large deviations upper bounds for exchangeable Bernoulli sequences of random variables, generalizing results proven for independent and identically distributed Bernoulli sequences of r.v.s, and (b) a general form of Bernstein's inequality. We compare the inequality with large deviation results already proven by the author and try to see its advantages. Finally, using the inequality, we solve one of the basic problems of Operations Research (the bin packing problem) in the case of exchangeable r.v.s.
Kwon, Deukwoo; Reis, Isildinha M
2015-08-12
When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions. The corresponding average relative error (ARE) approaches zero as sample size increases. In data generated from the normal distribution, our ABC performs well. However, the Wan et al. method is best for estimating standard deviation under normal distribution. In the estimation of the mean, our ABC method is best regardless of assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can be applied using other reported summary statistics such as the posterior mean and 95 % credible interval when Bayesian analysis has been employed.
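A minimal version of the approach described above can be sketched as rejection ABC: propose (mu, sigma), simulate a sample of the reported size, compute the same summary statistics the publication reports (here median, minimum, and maximum), and keep proposals whose simulated summaries fall close to the reported ones. The code below is a simplified illustration under a normality assumption with made-up summaries, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Summary statistics reported by a hypothetical study.
n, s_median, s_min, s_max = 50, 15.0, 3.0, 27.0
observed = np.array([s_median, s_min, s_max])

def summaries(x):
    return np.array([np.median(x), x.min(), x.max()])

accepted = []
for _ in range(200_000):
    mu = rng.uniform(0.0, 30.0)      # flat priors over a plausible range
    sigma = rng.uniform(0.1, 15.0)
    sim = rng.normal(mu, sigma, size=n)
    # Accept the proposal if the simulated summaries are close to the reported ones.
    if np.max(np.abs(summaries(sim) - observed)) < 2.0:
        accepted.append((mu, sigma))

accepted = np.array(accepted)
print("posterior mean of mu:   ", accepted[:, 0].mean())
print("posterior mean of sigma:", accepted[:, 1].mean())
```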
Fish: A New Computer Program for Friendly Introductory Statistics Help
ERIC Educational Resources Information Center
Brooks, Gordon P.; Raffle, Holly
2005-01-01
All introductory statistics students must master certain basic descriptive statistics, including means, standard deviations and correlations. Students must also gain insight into such complex concepts as the central limit theorem and standard error. This article introduces and describes the Friendly Introductory Statistics Help (FISH) computer…
2015-01-01
The goal of this study was to analyse perceptually and acoustically the voices of patients with Unilateral Vocal Fold Paralysis (UVFP) and compare them to the voices of normal subjects. These voices were analysed perceptually with the GRBAS scale and acoustically using the following parameters: mean fundamental frequency (F0), standard deviation of F0, jitter (ppq5), shimmer (apq11), mean harmonics-to-noise ratio (HNR), mean first (F1) and second (F2) formant frequencies, and standard deviation of F1 and F2 frequencies. Statistically significant differences were found in all of the perceptual parameters. Also the jitter, shimmer, HNR, standard deviation of F0, and standard deviation of the frequency of F2 were statistically different between groups, for both genders. In the male data, differences were also found in F1 and F2 frequency values and in the standard deviation of the frequency of F1. This study allowed the documentation of the alterations resulting from UVFP and addressed the exploration of parameters with limited information for this pathology. PMID:26557690
Code of Federal Regulations, 2010 CFR
2010-01-01
... defined in section 1 of this appendix is as follows: (a) The standard deviation of lateral track errors shall be less than 6.3 NM (11.7 Km). Standard deviation is a statistical measure of data about a mean... standard deviation about the mean encompasses approximately 68 percent of the data and plus or minus 2...
Hurricane track forecast cones from fluctuations
Meuel, T.; Prado, G.; Seychelles, F.; Bessafi, M.; Kellay, H.
2012-01-01
Trajectories of tropical cyclones may show large deviations from predicted tracks leading to uncertainty as to their landfall location for example. Prediction schemes usually render this uncertainty by showing track forecast cones representing the most probable region for the location of a cyclone during a period of time. By using the statistical properties of these deviations, we propose a simple method to predict possible corridors for the future trajectory of a cyclone. Examples of this scheme are implemented for hurricane Ike and hurricane Jimena. The corridors include the future trajectory up to at least 50 h before landfall. The cones proposed here shed new light on known track forecast cones as they link them directly to the statistics of these deviations. PMID:22701776
Large Deviations for Stochastic Models of Two-Dimensional Second Grade Fluids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhai, Jianliang, E-mail: zhaijl@ustc.edu.cn; Zhang, Tusheng, E-mail: Tusheng.Zhang@manchester.ac.uk
2017-06-15
In this paper, we establish a large deviation principle for stochastic models of incompressible second grade fluids. The weak convergence method introduced by Budhiraja and Dupuis (Probab Math Statist 20:39–61, 2000) plays an important role.
Generic Feature Selection with Short Fat Data
Clarke, B.; Chu, J.-H.
2014-01-01
Consider a regression problem in which there are many more explanatory variables than data points, i.e., p ≫ n. Essentially, without reducing the number of variables inference is impossible. So, we group the p explanatory variables into blocks by clustering, evaluate statistics on the blocks and then regress the response on these statistics under a penalized error criterion to obtain estimates of the regression coefficients. We examine the performance of this approach for a variety of choices of n, p, classes of statistics, clustering algorithms, penalty terms, and data types. When n is not large, the discrimination over number of statistics is weak, but computations suggest regressing on approximately [n/K] statistics where K is the number of blocks formed by a clustering algorithm. Small deviations from this are observed when the blocks of variables are of very different sizes. Larger deviations are observed when the penalty term is an Lq norm with high enough q. PMID:25346546
NASA Astrophysics Data System (ADS)
Monaghan, Kari L.
The problem addressed was the concern for aircraft safety rates as they relate to the rate of maintenance outsourcing. Data gathered from 14 passenger airlines: AirTran, Alaska, America West, American, Continental, Delta, Frontier, Hawaiian, JetBlue, Midwest, Northwest, Southwest, United, and USAir covered the years 1996 through 2008. A quantitative correlational design, utilizing Pearson's correlation coefficient, and the coefficient of determination were used in the present study to measure the correlation between variables. Elements of passenger airline aircraft maintenance outsourcing and aircraft accidents, incidents, and pilot deviations within domestic passenger airline operations were analyzed, examined, and evaluated. Rates of maintenance outsourcing were analyzed to determine the association with accident, incident, and pilot deviation rates. Maintenance outsourcing rates used in the evaluation were the yearly dollar expenditure of passenger airlines for aircraft maintenance outsourcing as they relate to the total airline aircraft maintenance expenditures. Aircraft accident, incident, and pilot deviation rates used in the evaluation were the yearly number of accidents, incidents, and pilot deviations per miles flown. The Pearson r-values were calculated to measure the linear relationship strength between the variables. There were no statistically significant correlation findings for accidents, r(174)=0.065, p=0.393, and incidents, r(174)=0.020, p=0.793. However, there was a statistically significant correlation for pilot deviation rates, r(174)=0.204, p=0.007 thus indicating a statistically significant correlation between maintenance outsourcing rates and pilot deviation rates. The calculated R square value of 0.042 represents the variance that can be accounted for in aircraft pilot deviation rates by examining the variance in aircraft maintenance outsourcing rates; accordingly, 95.8% of the variance is unexplained. Suggestions for future research include replication of the present study with the inclusion of maintenance outsourcing rate data for all airlines differentiated between domestic and foreign repair station utilization. Replication of the present study every five years is also encouraged to continue evaluating the impact of maintenance outsourcing practices on passenger airline safety.
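The analysis described above reduces to computing Pearson's r between yearly outsourcing rates and event rates, and squaring it for the coefficient of determination. A hedged sketch with invented numbers is shown below; scipy.stats.pearsonr returns the correlation and its two-sided p-value.

```python
import numpy as np
from scipy import stats

# Hypothetical yearly rates for one airline (not the study's data).
outsourcing_rate = np.array([0.21, 0.25, 0.28, 0.33, 0.38, 0.41, 0.47, 0.52])
pilot_deviation_rate = np.array([0.8, 0.7, 1.1, 0.9, 1.3, 1.2, 1.5, 1.4])

r, p = stats.pearsonr(outsourcing_rate, pilot_deviation_rate)
r_squared = r ** 2   # share of variance in deviation rates accounted for

print(f"r = {r:.3f}, p = {p:.3f}, R^2 = {r_squared:.3f}")
```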
NASA Technical Reports Server (NTRS)
Bollman, W. E.; Chadwick, C.
1982-01-01
A number of interplanetary missions now being planned involve placing deterministic maneuvers along the flight path to alter the trajectory. Lee and Boain (1973) examined the statistics of trajectory correction maneuver (TCM) magnitude with no deterministic ('bias') component. The Delta v vector magnitude statistics were generated for several values of random Delta v standard deviations using expansions in terms of infinite hypergeometric series. The present investigation uses a different technique (Monte Carlo simulation) to generate Delta v magnitude statistics for a wider selection of random Delta v standard deviations and also extends the analysis to the case of nonzero deterministic Delta v's. These Delta v magnitude statistics are plotted parametrically. The plots are useful in assisting the analyst in quickly answering questions about the statistics of Delta v magnitude for single TCM's consisting of both a deterministic and a random component. The plots provide quick insight into the nature of the Delta v magnitude distribution for the TCM.
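The Monte Carlo idea is straightforward to reproduce in outline: draw the random Delta v components about a deterministic bias vector and summarize the magnitude distribution. The sketch below uses made-up values for the bias and the per-axis standard deviation and is only an illustration of the technique, not a reconstruction of the 1982 analysis.

```python
import numpy as np

rng = np.random.default_rng(42)

deterministic_dv = np.array([2.0, 0.0, 0.0])   # m/s, hypothetical bias maneuver
sigma = 0.5                                    # m/s, per-axis random execution error
n_trials = 100_000

# Each trial: deterministic component plus an isotropic Gaussian error.
dv = deterministic_dv + rng.normal(0.0, sigma, size=(n_trials, 3))
magnitude = np.linalg.norm(dv, axis=1)

print("mean |dv|      :", magnitude.mean())
print("std  |dv|      :", magnitude.std())
print("99th percentile:", np.percentile(magnitude, 99))
```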
Tunali, Ilke; Stringfield, Olya; Guvenis, Albert; Wang, Hua; Liu, Ying; Balagurunathan, Yoganand; Lambin, Philippe; Gillies, Robert J; Schabath, Matthew B
2017-11-10
The goal of this study was to extract features from radial deviation and radial gradient maps which were derived from thoracic CT scans of patients diagnosed with lung adenocarcinoma and assess whether these features are associated with overall survival. We used two independent cohorts from different institutions for training (n= 61) and test (n= 47) and focused our analyses on features that were non-redundant and highly reproducible. To reduce the number of features and covariates into a single parsimonious model, a backward elimination approach was applied. Out of 48 features that were extracted, 31 were eliminated because they were not reproducible or were redundant. We considered 17 features for statistical analysis and identified a final model containing the two most highly informative features that were associated with lung cancer survival. One of the two features, radial deviation outside-border separation standard deviation, was replicated in a test cohort exhibiting a statistically significant association with lung cancer survival (multivariable hazard ratio = 0.40; 95% confidence interval 0.17-0.97). Additionally, we explored the biological underpinnings of these features and found radial gradient and radial deviation image features were significantly associated with semantic radiological features.
ERIC Educational Resources Information Center
Cook, Samuel A.; Fukawa-Connelly, Timothy
2016-01-01
Studies have shown that at the end of an introductory statistics course, students struggle with building block concepts, such as mean and standard deviation, and rely on procedural understandings of the concepts. This study aims to investigate the understandings entering freshman of a department of mathematics and statistics (including mathematics…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardin, M; To, D; Giaddui, T
2016-06-15
Purpose: To investigate the significance of using pinpoint ionization chambers (IC) and RadCalc (RC) in determining the quality of lung SBRT VMAT plans with low dose deviation pass percentage (DDPP) as reported by ScandiDos Delta4 (D4). To quantify the relationship between DDPP and point dose deviations determined by IC (ICDD), RadCalc (RCDD), and median dose deviation reported by D4 (D4DD). Methods: Point dose deviations and D4 DDPP were compiled for 45 SBRT VMAT plans. Eighteen patients were treated on Varian Truebeam linear accelerators (linacs); the remaining 27 were treated on Elekta Synergy linacs with Agility collimators. A one-way analysis of variance (ANOVA) was performed to determine if there were any statistically significant differences between D4DD, ICDD, and RCDD. Tukey's test was used to determine which pair of means was statistically different from each other. Multiple regression analysis was performed to determine if D4DD, ICDD, or RCDD are statistically significant predictors of DDPP. Results: Median DDPP, D4DD, ICDD, and RCDD were 80.5% (47.6%-99.2%), -0.3% (-2.0%-1.6%), 0.2% (-7.5%-6.3%), and 2.9% (-4.0%-19.7%), respectively. The ANOVA showed a statistically significant difference between D4DD, ICDD, and RCDD for a 95% confidence interval (p < 0.001). Tukey's test revealed a statistically significant difference between two pairs of groups, RCDD-D4DD and RCDD-ICDD (p < 0.001), but no difference between ICDD-D4DD (p = 0.485). Multiple regression analysis revealed that ICDD (p = 0.04) and D4DD (p = 0.03) are statistically significant predictors of DDPP with an adjusted r² of 0.115. Conclusion: This study shows ICDD predicts trends in D4 DDPP; however, this trend is highly variable as shown by our low r². This work suggests that ICDD can be used as a method to verify DDPP in delivery of lung SBRT VMAT plans. RCDD may not validate low DDPP discovered in D4 QA for small field SBRT treatments.
Effect of multizone refractive multifocal contact lenses on standard automated perimetry.
Madrid-Costa, David; Ruiz-Alcocer, Javier; García-Lázaro, Santiago; Albarrán-Diego, César; Ferrer-Blasco, Teresa
2012-09-01
The aim of this study was to evaluate whether the creation of 2 foci (distance and near) provided by multizone refractive multifocal contact lenses (CLs) for presbyopia correction affects the measurements on Humphrey 24-2 Swedish interactive threshold algorithm (SITA) standard automated perimetry (SAP). In this crossover study, 30 subjects were fitted in random order with either a multifocal CL or a monofocal CL. After 1 month, a Humphrey 24-2 SITA standard strategy was performed. The visual field global indices (the mean deviation [MD] and pattern standard deviation [PSD]), reliability indices, test duration, and number of depressed points deviating at P<5%, P<2%, P<1%, and P<0.5% on pattern deviation probability plots were determined and compared between multifocal and monofocal CLs. Thirty eyes of 30 subjects were included in this study. There were no statistically significant differences in reliability indices or test duration. There was a statistically significant reduction in the MD with the multifocal CL compared with the monofocal CL (P=0.001). Differences were not found in PSD nor in the number of depressed points deviating at P<5%, P<2%, P<1%, and P<0.5% in the pattern deviation probability maps studied. The results of this study suggest that the multizone refractive lens produces a generalized depression in threshold sensitivity as measured by the Humphrey 24-2 SITA SAP.
Performance of digital RGB reflectance color extraction for plaque lesion
NASA Astrophysics Data System (ADS)
Hashim, Hadzli; Taib, Mohd Nasir; Jailani, Rozita; Sulaiman, Saadiah; Baba, Roshidah
2005-01-01
Several clinical psoriasis lesion groups were studied for digital RGB color feature extraction. Previous works used sample sizes that included all the outliers lying beyond given standard deviation distances from the histogram peaks. This paper describes the statistical performance of the RGB model with and without removing these outliers. Plaque lesions are examined against the other types of psoriasis. The statistical tests are compared with respect to three sample sizes: the original 90 samples, a first reduction obtained by removing outliers beyond 2 standard deviation distances (2SD), and a second reduction obtained by removing outliers beyond 1 standard deviation distance (1SD). Quantification of the images through the normal/direct and differential forms of the conventional reflectance method is considered. Performance is assessed by observing error plots with 95% confidence intervals and the findings of the inferential t-tests applied. The statistical test outcomes show that the B component of the conventional differential method can be used to distinctively classify plaque from the other psoriasis groups, consistent with the error plot findings, with an improvement in p-value greater than 0.5.
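The sample-reduction step described above amounts to discarding observations whose channel values lie more than k standard deviations from the central value before recomputing the group statistics. A generic sketch of that filtering, with illustrative values only, follows.

```python
import numpy as np

def trim_outliers(values, k):
    """Keep only observations within k standard deviations of the mean."""
    mean, sd = values.mean(), values.std(ddof=1)
    keep = np.abs(values - mean) <= k * sd
    return values[keep]

rng = np.random.default_rng(7)
blue_channel = rng.normal(120.0, 15.0, size=90)   # hypothetical B-component values

for k in (None, 2, 1):                            # full set, 2SD trim, 1SD trim
    sample = blue_channel if k is None else trim_outliers(blue_channel, k)
    print(f"k={k}: n={sample.size}, mean={sample.mean():.1f}, sd={sample.std(ddof=1):.1f}")
```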
NASA Astrophysics Data System (ADS)
Huang, Dong; Campos, Edwin; Liu, Yangang
2014-09-01
Statistical characteristics of cloud variability are examined for their dependence on averaging scales and best representation of probability density function with the decade-long retrieval products of cloud liquid water path (LWP) from the tropical western Pacific (TWP), Southern Great Plains (SGP), and North Slope of Alaska (NSA) sites of the Department of Energy's Atmospheric Radiation Measurement Program. The statistical moments of LWP show some seasonal variation at the SGP and NSA sites but not much at the TWP site. It is found that the standard deviation, relative dispersion (the ratio of the standard deviation to the mean), and skewness all quickly increase with the averaging window size when the window size is small and become more or less flat when the window size exceeds 12 h. On average, the cloud LWP at the TWP site has the largest values of standard deviation, relative dispersion, and skewness, whereas the NSA site exhibits the least. Correlation analysis shows that there is a positive correlation between the mean LWP and the standard deviation. The skewness is found to be closely related to the relative dispersion with a correlation coefficient of 0.6. The comparison further shows that the lognormal, Weibull, and gamma distributions reasonably explain the observed relationship between skewness and relative dispersion over a wide range of scales.
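The scale dependence reported above can be reproduced schematically: slice a time series into windows of increasing length and compute the standard deviation, relative dispersion (standard deviation divided by the mean), and skewness within each window. The sketch below uses a synthetic positive series in place of retrieved LWP.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
lwp = rng.lognormal(mean=4.0, sigma=0.6, size=20_000)   # synthetic, LWP-like series

def windowed_stats(x, window):
    """Mean relative dispersion and skewness over non-overlapping windows."""
    n = len(x) // window
    blocks = x[:n * window].reshape(n, window)
    dispersion = blocks.std(axis=1) / blocks.mean(axis=1)
    skewness = stats.skew(blocks, axis=1)
    return dispersion.mean(), skewness.mean()

for window in (10, 50, 200, 1000, 5000):
    d, s = windowed_stats(lwp, window)
    # Both statistics grow with window size before leveling off, mirroring
    # the behaviour described for the LWP retrievals.
    print(f"window={window:5d}: relative dispersion={d:.2f}, skewness={s:.2f}")
```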
Tests for qualitative treatment-by-centre interaction using a 'pushback' procedure.
Ciminera, J L; Heyse, J F; Nguyen, H H; Tukey, J W
1993-06-15
In multicentre clinical trials using a common protocol, the centres are usually regarded as being a fixed factor, thus allowing any treatment-by-centre interaction to be omitted from the error term for the effect of treatment. However, we feel it necessary to use the treatment-by-centre interaction as the error term if there is substantial evidence that the interaction with centres is qualitative instead of quantitative. To make allowance for the estimated uncertainties of the centre means, we propose choosing a reference value (for example, the median of the ordered array of centre means) and converting the individual centre results into standardized deviations from the reference value. The deviations are then reordered, and the results 'pushed back' by amounts appropriate for the corresponding order statistics in a sample from the relevant distribution. The pushed-back standardized deviations are then restored to the original scale. The appearance of opposite signs among the destandardized values for the various centres is then taken as 'substantial evidence' of qualitative interaction. Procedures are presented using, in any combination: (i) Gaussian, or Student's t-distribution; (ii) order-statistic medians or outward 90 per cent points of the corresponding order statistic distributions; (iii) pooling or grouping and pooling the internally estimated standard deviations of the centre means. The use of the least conservative combination--Student's t, outward 90 per cent points, grouping and pooling--is recommended.
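One plausible reading of the steps described in this abstract can be sketched in code: standardize the centre deviations from a reference value, shrink ("push back") the ordered values by normal order-statistic medians, restore the original scale, and look for opposite signs. The details below (Filliben approximation for the order-statistic medians, truncation of the pushback at zero) are assumptions for illustration, not the published procedure.

```python
import numpy as np
from scipy import stats

def pushback_qualitative_interaction(centre_means, centre_ses):
    """Sketch of a pushback check for qualitative treatment-by-centre
    interaction; the truncation at zero is an assumed rule."""
    centre_means = np.asarray(centre_means, dtype=float)
    centre_ses = np.asarray(centre_ses, dtype=float)
    reference = np.median(centre_means)
    d = (centre_means - reference) / centre_ses      # standardized deviations

    order = np.argsort(d)
    n = len(d)
    # Filliben approximation: uniform order-statistic medians -> normal medians.
    u = (np.arange(1, n + 1) - 0.3175) / (n + 0.365)
    u[0], u[-1] = 1 - 0.5 ** (1 / n), 0.5 ** (1 / n)
    m = stats.norm.ppf(u)

    pushed = np.empty_like(d)
    for rank, idx in enumerate(order):
        shrunk = d[idx] - m[rank]
        # Do not let the pushback carry a value past zero (assumption).
        pushed[idx] = shrunk if np.sign(shrunk) == np.sign(d[idx]) else 0.0

    destandardized = pushed * centre_ses + reference
    signs = np.sign(destandardized)
    return destandardized, bool(signs.max() > 0 and signs.min() < 0)

# Hypothetical treatment-minus-control differences at five centres.
effects, qualitative = pushback_qualitative_interaction(
    [2.1, 1.4, 0.8, -0.9, 1.7], [0.6, 0.5, 0.7, 0.6, 0.8])
print(effects, "qualitative interaction evidence:", qualitative)
```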
Standard deviation and standard error of the mean.
Lee, Dong Kyu; In, Junyong; Lee, Sangseok
2015-06-01
In most clinical and experimental studies, the standard deviation (SD) and the estimated standard error of the mean (SEM) are used to present the characteristics of sample data and to explain statistical analysis results. However, some authors occasionally muddle the distinctive usage between the SD and SEM in medical literature. Because the process of calculating the SD and SEM includes different statistical inferences, each of them has its own meaning. SD is the dispersion of data in a normal distribution. In other words, SD indicates how accurately the mean represents sample data. However the meaning of SEM includes statistical inference based on the sampling distribution. SEM is the SD of the theoretical distribution of the sample means (the sampling distribution). While either SD or SEM can be applied to describe data and statistical results, one should be aware of reasonable methods with which to use SD and SEM. We aim to elucidate the distinctions between SD and SEM and to provide proper usage guidelines for both, which summarize data and describe statistical results.
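The distinction summarized above is easy to demonstrate numerically: the SD of a sample stays roughly constant as n grows, whereas the SEM (SD divided by the square root of n) shrinks. A short illustration with simulated data:

```python
import numpy as np

rng = np.random.default_rng(0)

for n in (10, 100, 1000, 10_000):
    sample = rng.normal(loc=50.0, scale=10.0, size=n)
    sd = sample.std(ddof=1)          # describes the spread of the data
    sem = sd / np.sqrt(n)            # describes the uncertainty of the sample mean
    print(f"n={n:6d}  SD={sd:5.2f}  SEM={sem:5.3f}")
```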
Gambling as a teaching aid in the introductory physics laboratory
NASA Astrophysics Data System (ADS)
Horodynski-Matsushigue, L. B.; Pascholati, P. R.; Vanin, V. R.; Dias, J. F.; Yoneama, M.-L.; Siqueira, P. T. D.; Amaku, M.; Duarte, J. L. M.
1998-07-01
Dice throwing is used to illustrate relevant concepts of the statistical theory of uncertainties, in particular the meaning of a limiting distribution, the standard deviation, and the standard deviation of the mean. It is an important part in a sequence of especially programmed laboratory activities, developed for freshmen, at the Institute of Physics of the University of São Paulo. It is shown how this activity is employed within a constructive teaching approach, which aims at a growing understanding of the measuring processes and of the fundamentals of correct statistical handling of experimental data.
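A few lines of Python reproduce the essence of the exercise: simulate repeated dice throws, look at the distribution of the mean of several dice, and compare the standard deviation of single throws with the standard deviation of the mean. This sketch is merely an illustration of the classroom activity, not the authors' materials.

```python
import numpy as np

rng = np.random.default_rng(2024)

n_dice, n_throws = 5, 10_000
throws = rng.integers(1, 7, size=(n_throws, n_dice))   # fair six-sided dice
means = throws.mean(axis=1)

single_sd = throws.std()      # spread of individual dice (about 1.71)
sd_of_mean = means.std()      # spread of the mean of 5 dice
print("SD of single throws:", single_sd)
print("SD of the mean     :", sd_of_mean)
print("single SD / sqrt(5):", single_sd / np.sqrt(n_dice))   # should match
```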
A product Pearson-type VII density distribution
NASA Astrophysics Data System (ADS)
Nadarajah, Saralees; Kotz, Samuel
2008-01-01
The Pearson-type VII distributions (containing the Student's t distributions) are becoming increasingly prominent and are being considered as competitors to the normal distribution. Motivated by real examples in decision sciences, Bayesian statistics, probability theory and physics, a new Pearson-type VII distribution is introduced by taking the product of two Pearson-type VII pdfs. Various structural properties of this distribution are derived, including its cdf, moments, mean deviation about the mean, mean deviation about the median, entropy, asymptotic distribution of the extreme order statistics, maximum likelihood estimates and the Fisher information matrix. Finally, an application to a Bayesian testing problem is illustrated.
Note onset deviations as musical piece signatures.
Serrà, Joan; Özaslan, Tan Hakan; Arcos, Josep Lluis
2013-01-01
A competent interpretation of a musical composition presents several non-explicit departures from the written score. Timing variations are perhaps the most important ones: they are fundamental for expressive performance and a key ingredient for conferring a human-like quality to machine-based music renditions. However, the nature of such variations is still an open research question, with diverse theories that indicate a multi-dimensional phenomenon. In the present study, we consider event-shift timing variations and show that sequences of note onset deviations are robust and reliable predictors of the musical piece being played, irrespective of the performer. In fact, our results suggest that only a few consecutive onset deviations are already enough to identify a musical composition with statistically significant accuracy. We consider a mid-size collection of commercial recordings of classical guitar pieces and follow a quantitative approach based on the combination of standard statistical tools and machine learning techniques with the semi-automatic estimation of onset deviations. Besides the reported results, we believe that the considered materials and the methodology followed widen the testing ground for studying musical timing and could open new perspectives in related research fields.
Modeling failure in brittle porous ceramics
NASA Astrophysics Data System (ADS)
Keles, Ozgur
Brittle porous materials (BPMs) are used for battery, fuel cell, catalyst, membrane, filter, bone graft, and pharmacy applications due to the multi-functionality of their underlying porosity. However, in spite of these technological benefits, the effects of porosity on BPM fracture strength and Weibull statistics are not fully understood, limiting wider use. In this context, classical fracture mechanics was combined with two-dimensional finite element simulations not only to account for pore-pore stress interactions, but also to numerically quantify the relationship between the local pore volume fraction and fracture statistics. Simulations show that even microstructures with the same porosity level and size of pores differ substantially in fracture strength. The maximum reliability of BPMs was shown to be limited by the underlying pore-pore interactions. Fracture strength of BPMs decreases at a faster rate under biaxial loading than under uniaxial loading. Three different types of deviation from classic Weibull behavior are identified: P-type corresponding to a positive lower tail deviation, N-type corresponding to a negative lower tail deviation, and S-type corresponding to both positive upper and lower tail deviations. Pore-pore interactions result in either P-type or N-type deviation in the limit of low porosity, whereas S-type behavior occurs when clusters of low and high fracture strengths coexist in a fracture data set.
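The Weibull statistics referred to above are usually summarized by fitting a two-parameter Weibull distribution to a set of fracture strengths; a common quick estimate uses the linearized form ln(-ln(1-F)) = m ln(sigma) - m ln(sigma_0). The sketch below, with illustrative strengths rather than simulation output from the study, estimates the Weibull modulus m that way.

```python
import numpy as np

# Hypothetical fracture strengths (MPa) of nominally identical porous specimens.
strength = np.sort(np.array([182., 195., 203., 211., 218., 224.,
                             231., 239., 247., 260.]))
n = len(strength)

# Median-rank estimate of the failure probability for each specimen.
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)

x = np.log(strength)
y = np.log(-np.log(1.0 - F))

# Slope of the Weibull plot is the Weibull modulus m;
# the intercept gives the characteristic strength sigma_0.
m, intercept = np.polyfit(x, y, 1)
sigma0 = np.exp(-intercept / m)
print(f"Weibull modulus m = {m:.1f}, characteristic strength = {sigma0:.0f} MPa")
```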
Two Computer Programs for the Statistical Evaluation of a Weighted Linear Composite.
ERIC Educational Resources Information Center
Sands, William A.
1978-01-01
Two computer programs (one batch, one interactive) are designed to provide statistics for a weighted linear combination of several component variables. Both programs provide mean, variance, standard deviation, and a validity coefficient. (Author/JKS)
PERFORMANCE OF TRICKLING FILTER PLANTS: RELIABILITY, STABILITY, VARIABILITY
Effluent quality variability from trickling filters was examined in this study by statistically analyzing daily effluent BOD5 and suspended solids data from 11 treatment plants. Summary statistics (mean, standard deviation, etc.) were examined to determine the general characteris...
Specification of ISS Plasma Environment Variability
NASA Technical Reports Server (NTRS)
Minow, Joseph I.; Neergaard, Linda F.; Bui, Them H.; Mikatarian, Ronald R.; Barsamian, H.; Koontz, Steven L.
2004-01-01
Quantifying spacecraft charging risks and associated hazards for the International Space Station (ISS) requires a plasma environment specification for the natural variability of ionospheric temperature (Te) and density (Ne). Empirical ionospheric specification and forecast models such as the International Reference Ionosphere (IRI) model typically only provide long term (seasonal) mean Te and Ne values for the low Earth orbit environment. This paper describes a statistical analysis of historical ionospheric low Earth orbit plasma measurements from the AE-C, AE-D, and DE-2 satellites, used to derive a model of the deviations of observed Ne and Te values from IRI-2001 estimates at each data point; this provides a statistical basis for modeling departures of the plasma environment from the IRI model output. Application of the deviation model with the IRI-2001 output yields a method for estimating extreme environments for ISS spacecraft charging analysis.
Deviations from LTE in a stellar atmosphere
NASA Technical Reports Server (NTRS)
Kalkofen, W.; Klein, R. I.; Stein, R. F.
1979-01-01
Deviations from LTE are investigated in an atmosphere of hydrogen atoms with one bound level, satisfying the equations of radiative, hydrostatic, and statistical equilibrium. The departure coefficient and the kinetic temperature as functions of the frequency dependence of the radiative cross section are studied analytically and numerically. Near the outer boundary of the atmosphere, the departure coefficient is smaller than unity when the radiative cross section grows with frequency faster than with the square of frequency; it exceeds unity otherwise. Far from the boundary the departure coefficient tends to exceed unity for any frequency dependence of the radiative cross section. Overpopulation always implies that the kinetic temperature in the statistical-equilibrium atmosphere is higher than the temperature in the corresponding LTE atmosphere. Upper and lower bounds on the kinetic temperature are given for an atmosphere with deviations from LTE only in the optically shallow layers, when the emergent intensity can be described by a radiation temperature.
Comparison of Accuracy Between a Conventional and Two Digital Intraoral Impression Techniques.
Malik, Junaid; Rodriguez, Jose; Weisbloom, Michael; Petridis, Haralampos
To compare the accuracy (ie, precision and trueness) of full-arch impressions fabricated using either a conventional polyvinyl siloxane (PVS) material or one of two intraoral optical scanners. Full-arch impressions of a reference model were obtained using addition silicone impression material (Aquasil Ultra; Dentsply Caulk) and two optical scanners (Trios, 3Shape, and CEREC Omnicam, Sirona). Surface matching software (Geomagic Control, 3D Systems) was used to superimpose the scans within groups to determine the mean deviations in precision and trueness (μm) between the scans, which were calculated for each group and compared statistically using one-way analysis of variance with post hoc Bonferroni (trueness) and Games-Howell (precision) tests (IBM SPSS ver 24, IBM UK). Qualitative analysis was also carried out from three-dimensional maps of differences between scans. Means and standard deviations (SD) of deviations in precision for conventional, Trios, and Omnicam groups were 21.7 (± 5.4), 49.9 (± 18.3), and 36.5 (± 11.12) μm, respectively. Means and SDs for deviations in trueness were 24.3 (± 5.7), 87.1 (± 7.9), and 80.3 (± 12.1) μm, respectively. The conventional impression showed statistically significantly improved mean precision (P < .006) and mean trueness (P < .001) compared to both digital impression procedures. There were no statistically significant differences in precision (P = .153) or trueness (P = .757) between the digital impressions. The qualitative analysis revealed local deviations along the palatal surfaces of the molars and incisal edges of the anterior teeth of < 100 μm. Conventional full-arch PVS impressions exhibited improved mean accuracy compared to two direct optical scanners. No significant differences were found between the two digital impression methods.
Effects of special composite stretching on the swing of amateur golf players
Lee, Joong-chul; Lee, Sung-wan; Yeo, Yun-ghi; Park, Gi Duck
2015-01-01
[Purpose] The study investigated stretching for a safer golf swing, compared with present stretching methods for proper swings, in order to examine the effects of stretching exercises on golf swings. [Subjects] The subjects were 20 amateur golf club members who were divided into two groups: an experimental group which performed stretching, and a control group which did not. The subjects had no bone deformity, muscle weakness, muscle soreness, or neurological problems. [Methods] A swing analyzer and a ROM measuring instrument were used as the measuring tools. The swing analyzer was a GS400-golf hit ball analyzer (Korea) and the ROM measuring instrument was a goniometer (Korea). [Results] The experimental group showed a statistically significant improvement in driving distance. After the special stretching training for golf, a statistically significant difference in hit-ball direction deviation after swings was found between the groups. The experimental group showed statistically significant decreases in hit-ball direction deviation. After the special stretching training for golf, statistically significant differences in hit-ball speed were found between the groups. The experimental group showed significant increases in hit-ball speed. [Conclusion] To examine the effects of a special stretching program for golf on golf swing-related factors, 20 male amateur golf club members performed a 12-week stretching training program. After the golf stretching training, statistically significant differences were found between the groups in hit-ball driving distance, direction deviation, deflection distance, and speed. PMID:25995553
A Conway-Maxwell-Poisson (CMP) model to address data dispersion on positron emission tomography.
Santarelli, Maria Filomena; Della Latta, Daniele; Scipioni, Michele; Positano, Vincenzo; Landini, Luigi
2016-10-01
Positron emission tomography (PET) in medicine exploits the properties of positron-emitting unstable nuclei. The pairs of γ- rays emitted after annihilation are revealed by coincidence detectors and stored as projections in a sinogram. It is well known that radioactive decay follows a Poisson distribution; however, deviation from Poisson statistics occurs on PET projection data prior to reconstruction due to physical effects, measurement errors, correction of deadtime, scatter, and random coincidences. A model that describes the statistical behavior of measured and corrected PET data can aid in understanding the statistical nature of the data: it is a prerequisite to develop efficient reconstruction and processing methods and to reduce noise. The deviation from Poisson statistics in PET data could be described by the Conway-Maxwell-Poisson (CMP) distribution model, which is characterized by the centring parameter λ and the dispersion parameter ν, the latter quantifying the deviation from a Poisson distribution model. In particular, the parameter ν allows quantifying over-dispersion (ν<1) or under-dispersion (ν>1) of data. A simple and efficient method for λ and ν parameters estimation is introduced and assessed using Monte Carlo simulation for a wide range of activity values. The application of the method to simulated and experimental PET phantom data demonstrated that the CMP distribution parameters could detect deviation from the Poisson distribution both in raw and corrected PET data. It may be usefully implemented in image reconstruction algorithms and quantitative PET data analysis, especially in low counting emission data, as in dynamic PET data, where the method demonstrated the best accuracy. Copyright © 2016 Elsevier Ltd. All rights reserved.
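The CMP probability mass function has the form P(X = k) ∝ λ^k / (k!)^ν. The sketch below is an illustration rather than the estimation method of the paper: it evaluates the pmf by truncating the normalizing sum and checks the variance-to-mean ratio to flag over- or under-dispersion relative to a Poisson model.

```python
import numpy as np
from math import lgamma, log

def cmp_pmf(k_max, lam, nu):
    """Conway-Maxwell-Poisson pmf for k = 0..k_max.

    P(X = k) = lam**k / (k!)**nu / Z(lam, nu), with Z computed by a truncated
    sum; k_max should be large enough for the neglected tail to be negligible.
    """
    log_terms = np.array([k * log(lam) - nu * lgamma(k + 1) for k in range(k_max + 1)])
    log_terms -= log_terms.max()               # numerical stability before exponentiating
    weights = np.exp(log_terms)
    return weights / weights.sum()

pmf = cmp_pmf(200, lam=5.0, nu=0.8)            # nu < 1: over-dispersed relative to Poisson
k = np.arange(pmf.size)
mean = (k * pmf).sum()
var = ((k - mean) ** 2 * pmf).sum()
print(mean, var, var / mean)                   # variance/mean > 1 indicates over-dispersion
```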
NASA Astrophysics Data System (ADS)
Grabsch, Aurélien; Majumdar, Satya N.; Texier, Christophe
2017-06-01
Invariant ensembles of random matrices are characterized by the distribution of their eigenvalues $\{\lambda_1, \ldots, \lambda_N\}$. We study the distribution of truncated linear statistics of the form $\tilde{L} = \sum_{i=1}^{p} f(\lambda_i)$ with $p$…
Explorations in Statistics: Standard Deviations and Standard Errors
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2008-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This series in "Advances in Physiology Education" provides an opportunity to do just that: we will investigate basic concepts in statistics using the free software package R. Because this series uses R solely as a vehicle…
ERIC Educational Resources Information Center
Turegun, Mikhail
2011-01-01
Traditional curricular materials and pedagogical strategies have not been effective in developing conceptual understanding of statistics topics and statistical reasoning abilities of students. Much of the changes proposed by statistics education research and the reform movement over the past decade have supported efforts to transform teaching…
Comparing Simulated and Theoretical Sampling Distributions of the U3 Person-Fit Statistic.
ERIC Educational Resources Information Center
Emons, Wilco H. M.; Meijer, Rob R.; Sijtsma, Klaas
2002-01-01
Studied whether the theoretical sampling distribution of the U3 person-fit statistic is in agreement with the simulated sampling distribution under different item response theory models and varying item and test characteristics. Simulation results suggest that the use of standard normal deviates for the standardized version of the U3 statistic may…
Common inputs in subthreshold membrane potential: The role of quiescent states in neuronal activity
NASA Astrophysics Data System (ADS)
Montangie, Lisandro; Montani, Fernando
2018-06-01
Experiments in certain regions of the cerebral cortex suggest that the spiking activity of neuronal populations is regulated by common non-Gaussian inputs across neurons. We model these deviations from random-walk processes with q-Gaussian distributions into simple threshold neurons, and investigate the scaling properties in large neural populations. We show that deviations from the Gaussian statistics provide a natural framework to regulate population statistics such as sparsity, entropy, and specific heat. This type of description allows us to provide an adequate strategy to explain the information encoding in the case of low neuronal activity and its possible implications on information transmission.
Statistics of velocity fluctuations of Geldart A particles in a circulating fluidized bed riser
Vaidheeswaran, Avinash; Shaffer, Franklin; Gopalan, Balaji
2017-11-21
Here, the statistics of fluctuating velocity components are studied in the riser of a closed-loop circulating fluidized bed with fluid catalytic cracking catalyst particles. Our analysis shows distinct similarities as well as deviations compared to existing theories and bench-scale experiments. The study confirms anisotropic and non-Maxwellian distribution of fluctuating velocity components. The velocity distribution functions (VDFs) corresponding to transverse fluctuations exhibit symmetry, and follow a stretched-exponential behavior up to three standard deviations. The form of the transverse VDF is largely determined by interparticle interactions. The tails become more overpopulated with an increase in particle loading. The observed deviations from the Gaussian distribution are represented using the leading order term in the Sonine expansion, which is commonly used to approximate the VDFs in kinetic theory for granular flows. The vertical fluctuating VDFs are asymmetric and the skewness shifts as the wall is approached. In comparison to transverse fluctuations, the vertical VDF is determined by the local hydrodynamics. This is an observation of particle velocity fluctuations in a large-scale system and their quantitative comparison with the Maxwell-Boltzmann statistics.
Redmond, Tony; O'Leary, Neil; Hutchison, Donna M; Nicolela, Marcelo T; Artes, Paul H; Chauhan, Balwantray C
2013-12-01
A new analysis method called permutation of pointwise linear regression measures the significance of deterioration over time at each visual field location, combines the significance values into an overall statistic, and then determines the likelihood of change in the visual field. Because the outcome is a single P value, individualized to that specific visual field and independent of the scale of the original measurement, the method is well suited for comparing techniques with different stimuli and scales. To test the hypothesis that frequency-doubling matrix perimetry (FDT2) is more sensitive than standard automated perimetry (SAP) in identifying visual field progression in glaucoma. Patients with open-angle glaucoma and healthy controls were examined by FDT2 and SAP, both with the 24-2 test pattern, on the same day at 6-month intervals in a longitudinal prospective study conducted in a hospital-based setting. Only participants with at least 5 examinations were included. Data were analyzed with permutation of pointwise linear regression. Permutation of pointwise linear regression is individualized to each participant, in contrast to current analyses in which the statistical significance is inferred from population-based approaches. Analyses were performed with both total deviation and pattern deviation. Sixty-four patients and 36 controls were included in the study. The median age, SAP mean deviation, and follow-up period were 65 years, -2.6 dB, and 5.4 years, respectively, in patients and 62 years, +0.4 dB, and 5.2 years, respectively, in controls. Using total deviation analyses, statistically significant deterioration was identified in 17% of patients with FDT2, in 34% of patients with SAP, and in 14% of patients with both techniques; in controls these percentages were 8% with FDT2, 31% with SAP, and 8% with both. Using pattern deviation analyses, statistically significant deterioration was identified in 16% of patients with FDT2, in 17% of patients with SAP, and in 3% of patients with both techniques; in controls these values were 3% with FDT2 and none with SAP. No evidence was found that FDT2 is more sensitive than SAP in identifying visual field deterioration. In about one-third of healthy controls, age-related deterioration with SAP reached statistical significance.
Note Onset Deviations as Musical Piece Signatures
Serrà, Joan; Özaslan, Tan Hakan; Arcos, Josep Lluis
2013-01-01
A competent interpretation of a musical composition presents several non-explicit departures from the written score. Timing variations are perhaps the most important ones: they are fundamental for expressive performance and a key ingredient for conferring a human-like quality to machine-based music renditions. However, the nature of such variations is still an open research question, with diverse theories that indicate a multi-dimensional phenomenon. In the present study, we consider event-shift timing variations and show that sequences of note onset deviations are robust and reliable predictors of the musical piece being played, irrespective of the performer. In fact, our results suggest that only a few consecutive onset deviations are already enough to identify a musical composition with statistically significant accuracy. We consider a mid-size collection of commercial recordings of classical guitar pieces and follow a quantitative approach based on the combination of standard statistical tools and machine learning techniques with the semi-automatic estimation of onset deviations. Besides the reported results, we believe that the considered materials and the methodology followed widen the testing ground for studying musical timing and could open new perspectives in related research fields. PMID:23935971
Sanov and central limit theorems for output statistics of quantum Markov chains
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horssen, Merlijn van, E-mail: merlijn.vanhorssen@nottingham.ac.uk; Guţă, Mădălin, E-mail: madalin.guta@nottingham.ac.uk
2015-02-15
In this paper, we consider the statistics of repeated measurements on the output of a quantum Markov chain. We establish a large deviations result analogous to Sanov’s theorem for the multi-site empirical measure associated to finite sequences of consecutive outcomes of a classical stochastic process. Our result relies on the construction of an extended quantum transition operator (which keeps track of previous outcomes) in terms of which we compute moment generating functions, and whose spectral radius is related to the large deviations rate function. As a corollary to this, we obtain a central limit theorem for the empirical measure. Such higher level statistics may be used to uncover critical behaviour such as dynamical phase transitions, which are not captured by lower level statistics such as the sample mean. As a step in this direction, we give an example of a finite system whose level-1 (empirical mean) rate function is independent of a model parameter while the level-2 (empirical measure) rate is not.
Hansen, John P
2003-01-01
Healthcare quality improvement professionals need to understand and use inferential statistics to interpret sample data from their organizations. In quality improvement and healthcare research studies all the data from a population often are not available, so investigators take samples and make inferences about the population by using inferential statistics. This three-part series will give readers an understanding of the concepts of inferential statistics as well as the specific tools for calculating confidence intervals for samples of data. This article, Part 1, presents basic information about data including a classification system that describes the four major types of variables: continuous quantitative variable, discrete quantitative variable, ordinal categorical variable (including the binomial variable), and nominal categorical variable. A histogram is a graph that displays the frequency distribution for a continuous variable. The article also demonstrates how to calculate the mean, median, standard deviation, and variance for a continuous variable.
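A minimal example of the descriptive statistics discussed in the article (mean, median, variance, and standard deviation of a continuous variable), using Python's standard library; the data values are hypothetical.

```python
import statistics

values = [5.1, 4.8, 6.2, 5.5, 5.9, 4.7, 6.0, 5.3]   # hypothetical continuous variable

mean = statistics.mean(values)
median = statistics.median(values)
variance = statistics.variance(values)        # sample variance (n - 1 denominator)
std_dev = statistics.stdev(values)            # sample standard deviation

print(f"mean={mean:.2f} median={median:.2f} variance={variance:.2f} sd={std_dev:.2f}")
```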
Severity of Illness Scores May Misclassify Critically Ill Obese Patients.
Deliberato, Rodrigo Octávio; Ko, Stephanie; Komorowski, Matthieu; Armengol de La Hoz, M A; Frushicheva, Maria P; Raffa, Jesse D; Johnson, Alistair E W; Celi, Leo Anthony; Stone, David J
2018-03-01
Severity of illness scores rest on the assumption that patients have normal physiologic values at baseline and that patients with similar severity of illness scores have the same degree of deviation from their usual state. Prior studies have reported differences in baseline physiology, including laboratory markers, between obese and normal weight individuals, but these differences have not been analyzed in the ICU. We compared deviation from baseline of pertinent ICU laboratory test results between obese and normal weight patients, adjusted for the severity of illness. Retrospective cohort study in a large ICU database. Tertiary teaching hospital. Obese and normal weight patients who had laboratory results documented between 3 days and 1 year prior to hospital admission. None. Seven hundred sixty-nine normal weight patients were compared with 1,258 obese patients. After adjusting for the severity of illness score, age, comorbidity index, baseline laboratory result, and ICU type, the following deviations were found to be statistically significant: WBC 0.80 (95% CI, 0.27-1.33) × 10/L; p = 0.003; log (blood urea nitrogen) 0.01 (95% CI, 0.00-0.02); p = 0.014; log (creatinine) 0.03 (95% CI, 0.02-0.05), p < 0.001; with all deviations higher in obese patients. A logistic regression analysis suggested that after adjusting for age and severity of illness at least one of these deviations had a statistically significant effect on hospital mortality (p = 0.009). Among patients with the same severity of illness score, we detected clinically small but significant deviations in WBC, creatinine, and blood urea nitrogen from baseline in obese compared with normal weight patients. These small deviations are likely to be increasingly important as bigger data are analyzed in increasingly precise ways. Recognition of the extent to which all critically ill patients may deviate from their own baseline may improve the objectivity, precision, and generalizability of ICU mortality prediction and severity adjustment models.
Artmann, L; Larsen, H J; Sørensen, H B; Christensen, I J; Kjaer, I
2010-06-01
To analyze the interrelationship between incisor width, deviations in the dentition and available space in the dental arch in palatally and labially located maxillary ectopic canine cases. Size: On dental casts from 69 patients (mean age 13 years 6 months) the mesiodistal widths of each premolar, canine and incisor were measured and compared with normal standards. Dental deviations: Based on panoramic radiographs from the same patients the dentitions were grouped as follows: Group I: normal morphology; Group IIa: deviations in the dentition within the maxillary incisors only; Group IIb: deviations in the dentition in general. Descriptive statistics for the tooth sizes and dental deviations were presented by the mean and 95% confidence limits for the mean and the p-value for the T-statistic. Space: Space was expressed by subtracting the total tooth sizes of incisors, canines and premolars from the length of the arch segments. Size of lateral maxillary incisor: The widths of the lateral incisors were significantly different in groups I, IIa and IIb (p=0.016), and in cases with labially located ectopic canines they were on average 0.65 (95% CI: 0.25-1.05, p=0.0019) broader than lateral incisors in cases with palatally located ectopic canines. Space: Least available space was observed in cases with labially located canines. The linear model did show a difference between palatally and labially located ectopic canines (p=0.03). Space related to deviations in the dentition: When space in the dental arch was related to dental deviations (groups I, IIa and IIb), the cases in group IIb with palatally located canines had significantly more space compared with I and IIa. Two subgroups of palatally located ectopic maxillary canine cases, based on registration of space, incisor width and deviations in the morphology of the dentition, were identified.
Pinhas, Alexander; Linderman, Rachel; Mo, Shelley; Krawitz, Brian D; Geyman, Lawrence S; Carroll, Joseph; Rosen, Richard B; Chui, Toco Y
2018-01-01
To present a method for age-matched deviation mapping in the assessment of disease-related changes to the radial peripapillary capillaries (RPCs). We reviewed 4.5x4.5mm en face peripapillary OCT-A scans of 133 healthy control eyes (133 subjects, mean 41.5 yrs, range 11-82 yrs) and 4 eyes with distinct retinal pathologies, obtained using spectral-domain optical coherence tomography angiography. Statistical analysis was performed to evaluate the impact of age on RPC perfusion densities. RPC density group mean and standard deviation maps were generated for each decade of life. Deviation maps were created for the diseased eyes based on these maps. Large peripapillary vessel (LPV; noncapillary vessel) perfusion density was also studied for impact of age. Average healthy RPC density was 42.5±1.47%. ANOVA and pairwise Tukey-Kramer tests showed that RPC density in the ≥60yr group was significantly lower compared to RPC density in all younger decades of life (p<0.01). Average healthy LPV density was 21.5±3.07%. Linear regression models indicated that LPV density decreased with age, however ANOVA and pairwise Tukey-Kramer tests did not reach statistical significance. Deviation mapping enabled us to quantitatively and visually elucidate the significance of RPC density changes in disease. It is important to consider changes that occur with aging when analyzing RPC and LPV density changes in disease. RPC density, coupled with age-matched deviation mapping techniques, represents a potentially clinically useful method in detecting changes to peripapillary perfusion in disease.
Evolution of statistical properties for a nonlinearly propagating sinusoid.
Shepherd, Micah R; Gee, Kent L; Hanford, Amanda D
2011-07-01
The nonlinear propagation of a pure sinusoid is considered using time domain statistics. The probability density function, standard deviation, skewness, kurtosis, and crest factor are computed for both the amplitude and amplitude time derivatives as a function of distance. The amplitude statistics vary only in the postshock realm, while the amplitude derivative statistics vary rapidly in the preshock realm. The statistical analysis also suggests that the sawtooth onset distance can be considered to be earlier than previously realized. © 2011 Acoustical Society of America
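The same time-domain statistics can be computed directly from a sampled waveform. The sketch below is a generic illustration (not the authors' code): it evaluates standard deviation, skewness, kurtosis, and crest factor for a signal and for its finite-difference time derivative, assuming uniform sampling.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def waveform_statistics(x, fs):
    """Time-domain statistics of a signal and of its time derivative."""
    def stats(y):
        rms = np.sqrt(np.mean(y**2))
        return {
            "std": np.std(y),
            "skewness": skew(y),
            "kurtosis": kurtosis(y, fisher=False),   # equals 3.0 for a Gaussian
            "crest_factor": np.max(np.abs(y)) / rms,
        }
    dxdt = np.gradient(x, 1.0 / fs)                  # finite-difference derivative
    return {"amplitude": stats(x), "derivative": stats(dxdt)}

# Hypothetical pure sinusoid before any nonlinear steepening
fs = 10_000
t = np.arange(0, 0.1, 1 / fs)
x = np.sin(2 * np.pi * 500 * t)
print(waveform_statistics(x, fs))
```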
Explorations in statistics: the log transformation.
Curran-Everett, Douglas
2018-06-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This thirteenth installment of Explorations in Statistics explores the log transformation, an established technique that rescales the actual observations from an experiment so that the assumptions of some statistical analysis are better met. A general assumption in statistics is that the variability of some response Y is homogeneous across groups or across some predictor variable X. If the variability-the standard deviation-varies in rough proportion to the mean value of Y, a log transformation can equalize the standard deviations. Moreover, if the actual observations from an experiment conform to a skewed distribution, then a log transformation can make the theoretical distribution of the sample mean more consistent with a normal distribution. This is important: the results of a one-sample t test are meaningful only if the theoretical distribution of the sample mean is roughly normal. If we log-transform our observations, then we want to confirm the transformation was useful. We can do this if we use the Box-Cox method, if we bootstrap the sample mean and the statistic t itself, and if we assess the residual plots from the statistical model of the actual and transformed sample observations.
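A small illustration of the idea: when the standard deviation scales with the mean, logging the observations tends to equalize the group standard deviations, and a Box-Cox estimate of lambda near 0 supports the log transformation. The data below are simulated for illustration only and are not the examples used in the article.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical skewed data in which the SD grows roughly in proportion to the mean
groups = {name: rng.lognormal(mean=mu, sigma=0.5, size=30)
          for name, mu in [("A", 1.0), ("B", 2.0), ("C", 3.0)]}

for name, y in groups.items():
    print(name, "raw sd:", round(np.std(y, ddof=1), 2),
          "log sd:", round(np.std(np.log(y), ddof=1), 2))

# Box-Cox check: an estimated lambda near 0 supports the log transformation
y_all = np.concatenate(list(groups.values()))
_, lam = stats.boxcox(y_all)
print("Box-Cox lambda:", round(lam, 2))
```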
An Overview of Interrater Agreement on Likert Scales for Researchers and Practitioners
O'Neill, Thomas A.
2017-01-01
Applications of interrater agreement (IRA) statistics for Likert scales are plentiful in research and practice. IRA may be implicated in job analysis, performance appraisal, panel interviews, and any other approach to gathering systematic observations. Any rating system involving subject-matter experts can also benefit from IRA as a measure of consensus. Further, IRA is fundamental to aggregation in multilevel research, which is becoming increasingly common in order to address nesting. Although several technical descriptions of a few specific IRA statistics exist, this paper aims to provide a tractable orientation to common IRA indices to support application. The introductory overview is written with the intent of facilitating contrasts among IRA statistics by critically reviewing equations, interpretations, strengths, and weaknesses. Statistics considered include rwg, rwg*, r′wg, rwg(p), average deviation (AD), awg, standard deviation (Swg), and the coefficient of variation (CVwg). Equations support quick calculation and contrasting of different agreement indices. The article also includes a “quick reference” table and three figures in order to help readers identify how IRA statistics differ and how interpretations of IRA will depend strongly on the statistic employed. A brief consideration of recommended practices involving statistical and practical cutoff standards is presented, and conclusions are offered in light of the current literature. PMID:28553257
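Two of the simplest indices mentioned above, rwg and the average deviation (AD), can be computed as in the sketch below. This assumes a single item rated on an A-point Likert scale and the uniform (no-agreement) null distribution with variance (A^2 - 1)/12; the ratings are hypothetical.

```python
import numpy as np

def rwg(ratings, n_options):
    """James, Demaree, and Wolf's rwg for a single item.

    rwg = 1 - s2 / sigma2_EU, where sigma2_EU = (A**2 - 1) / 12 is the variance
    of a uniform (no-agreement) null distribution over A response options.
    """
    s2 = np.var(ratings, ddof=1)
    sigma2_eu = (n_options**2 - 1) / 12
    return 1 - s2 / sigma2_eu

def average_deviation(ratings):
    """AD index: mean absolute deviation of the ratings from their mean."""
    ratings = np.asarray(ratings, float)
    return np.mean(np.abs(ratings - ratings.mean()))

judges = [4, 5, 4, 4, 5, 4]            # hypothetical ratings on a 5-point scale
print(rwg(judges, n_options=5), average_deviation(judges))
```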
NASA Technical Reports Server (NTRS)
Smith, Wayne Farrior
1973-01-01
The effect of finite source size on the power statistics in a reverberant room for pure tone excitation was investigated. Theoretical results indicate that the standard deviation of low frequency, pure tone finite sources is always less than that predicted by point source theory and considerably less when the source dimension approaches one-half an acoustic wavelength or greater. A supporting experimental study was conducted utilizing an eight inch loudspeaker and a 30 inch loudspeaker at eleven source positions. The resulting standard deviation of sound power output of the smaller speaker is in excellent agreement with both the derived finite source theory and existing point source theory, if the theoretical data is adjusted to account for experimental incomplete spatial averaging. However, the standard deviation of sound power output of the larger speaker is measurably lower than point source theory indicates, but is in good agreement with the finite source theory.
A Wave Chaotic Study of Quantum Graphs with Microwave Networks
NASA Astrophysics Data System (ADS)
Fu, Ziyuan
Quantum graphs provide a setting to test the hypothesis that all ray-chaotic systems show universal wave chaotic properties. I study the quantum graphs with a wave chaotic approach. Here, an experimental setup consisting of a microwave coaxial cable network is used to simulate quantum graphs. Some basic features and the distributions of impedance statistics are analyzed from experimental data on an ensemble of tetrahedral networks. The random coupling model (RCM) is applied in an attempt to uncover the universal statistical properties of the system. Deviations from RCM predictions have been observed in that the statistics of diagonal and off-diagonal impedance elements are different. Waves trapped due to multiple reflections on bonds between nodes in the graph most likely cause the deviations from universal behavior in the finite-size realization of a quantum graph. In addition, I have done some investigations on the Random Coupling Model, which are useful for further research.
Some limit theorems for ratios of order statistics from uniform random variables.
Xu, Shou-Fang; Miao, Yu
2017-01-01
In this paper, we study the ratios of order statistics based on samples drawn from uniform distribution and establish some limit properties such as the almost sure central limit theorem, the large deviation principle, the Marcinkiewicz-Zygmund law of large numbers and complete convergence.
Min and Max Exponential Extreme Interval Values and Statistics
ERIC Educational Resources Information Center
Jance, Marsha; Thomopoulos, Nick
2009-01-01
The extreme interval values and statistics (expected value, median, mode, standard deviation, and coefficient of variation) for the smallest (min) and largest (max) values of exponentially distributed variables with parameter λ = 1 are examined for different observation (sample) sizes. An extreme interval value g_a is defined as a…
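The same min/max statistics can be approximated by simulation. The sketch below draws repeated samples of exponential (λ = 1) variables and summarizes the smallest and largest value for a chosen sample size; it is an illustration, not the tabulations of the article.

```python
import numpy as np

def min_max_statistics(n, reps=100_000, seed=0):
    """Monte Carlo statistics for the min and max of n Exp(1) variables."""
    rng = np.random.default_rng(seed)
    samples = rng.exponential(scale=1.0, size=(reps, n))
    out = {}
    for name, x in (("min", samples.min(axis=1)), ("max", samples.max(axis=1))):
        out[name] = {
            "mean": x.mean(),
            "median": np.median(x),
            "std": x.std(ddof=1),
            "cv": x.std(ddof=1) / x.mean(),     # coefficient of variation
        }
    return out

# For the minimum of n Exp(1) variables the exact mean is 1/n;
# for the maximum it is the n-th harmonic number.
print(min_max_statistics(n=10))
```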
Analysis of Variance with Summary Statistics in Microsoft® Excel®
ERIC Educational Resources Information Center
Larson, David A.; Hsu, Ko-Cheng
2010-01-01
Students regularly are asked to solve Single Factor Analysis of Variance problems given only the sample summary statistics (number of observations per category, category means, and corresponding category standard deviations). Most undergraduate students today use Excel for data analysis of this type. However, Excel, like all other statistical…
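A single-factor ANOVA can indeed be computed from the summary statistics alone. The sketch below is a generic illustration (not an Excel workbook): it reconstructs the between- and within-group sums of squares from the group sizes, means, and standard deviations, using hypothetical category summaries.

```python
import numpy as np
from scipy import stats

def anova_from_summary(ns, means, sds):
    """Single-factor ANOVA using only per-group n, mean, and standard deviation."""
    ns, means, sds = map(np.asarray, (ns, means, sds))
    k, N = len(ns), ns.sum()
    grand_mean = np.sum(ns * means) / N
    ss_between = np.sum(ns * (means - grand_mean) ** 2)
    ss_within = np.sum((ns - 1) * sds**2)
    df_between, df_within = k - 1, N - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    p_value = stats.f.sf(f_stat, df_between, df_within)
    return f_stat, p_value

# Hypothetical category summaries
print(anova_from_summary(ns=[12, 15, 10], means=[5.1, 6.0, 4.7], sds=[1.2, 1.4, 1.1]))
```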
Jiménez-Castellanos, Emilio; Orozco-Varo, Ana; Arroyo-Cruz, Gema; Iglesias-Linares, Alejandro
2016-06-01
Deviation from the facial midline and inclination of the dental midline or occlusal plane has been described as extremely influential in the layperson's perceptions of the overall esthetics of the smile. The purpose of this study was to determine the prevalence of deviation from the facial midline and inclination of the dental midline or occlusal plane in a selected sample. White participants from a European population (N=158; 93 women, 65 men) who met specific inclusion criteria were selected for the present study. Standardized 1:1 scale frontal photographs were made, and 3 variables of all participants were measured: midline deviation, midline inclination, and inclination of the occlusal plane. Software was used to measure midline deviation and inclination, taking the bipupillary line and the facial midline as references. The sample was tested for normality, and descriptive statistics (means ±SD) were calculated. The chi-square test was used to evaluate differences in midline deviation, midline inclination, and occlusal plane inclination (α=.05). Frequencies of midline deviation (>2 mm), midline inclination (>3.5 degrees), and occlusal plane inclination (>2 degrees) were 31.64% (mean 2.7±1.23 mm), 10.75% (mean 7.9 degrees ±3.57), and 25.9% (mean 9.07 degrees ±3.16), respectively. No statistically significant differences (P>.05) were found between sex and any of the esthetic smile values. The incidence of alterations with at least 1 altered parameter that affected smile esthetics was 51.9% in a population from southern Europe. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Black swans or dragon-kings? A simple test for deviations from the power law
NASA Astrophysics Data System (ADS)
Janczura, J.; Weron, R.
2012-05-01
We develop a simple test for deviations from power-law tails; in fact, the test applies to the tails of any distribution. We use this test, which is based on the asymptotic properties of the empirical distribution function, to answer the question of whether great natural disasters, financial crashes or electricity price spikes should be classified as dragon-kings or 'only' as black swans.
1982-02-15
Fragment: a quantity is plotted as a function of the doping density at 300 and 77 K for classical Boltzmann statistics or the depletion approximation (solid line) and for approximate Fermi-Dirac statistics (equation (19), dotted line). The comparison demonstrates that the deviation from Boltzmann statistics is quite noticeable; tunneling Schottky barriers cannot be obtained at these doping levels. The dotted lines are obtained when Boltzmann statistics are used in the Al Ga…
Goudriaan, Marije; Van den Hauwe, Marleen; Simon-Martinez, Cristina; Huenaerts, Catherine; Molenaers, Guy; Goemans, Nathalie; Desloovere, Kaat
2018-04-30
Prolonged ambulation is considered important in children with Duchenne muscular dystrophy (DMD). However, previous studies analyzing DMD gait were sensitive to false positive outcomes, caused by uncorrected multiple comparisons, regional focus bias, and inter-component covariance bias. Also, while muscle weakness is often suggested to be the main cause for the altered gait pattern in DMD, this was never verified. Our research question was twofold: 1) are we able to confirm the sagittal kinematic and kinetic gait alterations described in a previous review with statistical non-parametric mapping (SnPM)? And 2) are these gait deviations related to lower limb weakness? We compared gait kinematics and kinetics of 15 children with DMD and 15 typical developing (TD) children (5-17 years), with a two sample Hotelling's T 2 test and post-hoc two-tailed, two-sample t-test. We used canonical correlation analyses to study the relationship between weakness and altered gait parameters. For all analyses, α-level was corrected for multiple comparisons, resulting in α = 0.005. We only found one of the previously reported kinematic deviations: the children with DMD had an increased knee flexion angle during swing (p = 0.0006). Observed gait deviations that were not reported in the review were an increased hip flexion angle during stance (p = 0.0009) and swing (p = 0.0001), altered combined knee and ankle torques (p = 0.0002), and decreased power absorption during stance (p = 0.0001). No relationships between weakness and these gait deviations were found. We were not able to replicate the gait deviations in DMD previously reported in literature, thus DMD gait remains undefined. Further, weakness does not seem to be linearly related to altered gait features. The progressive nature of the disease requires larger study populations and longitudinal analyses to gain more insight into DMD gait and its underlying causes. Copyright © 2018 Elsevier B.V. All rights reserved.
Statistical analysis of the 70 meter antenna surface distortions
NASA Technical Reports Server (NTRS)
Kiedron, K.; Chian, C. T.; Chuang, K. L.
1987-01-01
Statistical analysis of surface distortions of the 70 meter NASA/JPL antenna, located at Goldstone, was performed. The purpose of this analysis is to verify whether deviations due to gravity loading can be treated as quasi-random variables with normal distribution. Histograms of the RF pathlength error distribution for several antenna elevation positions were generated. The results indicate that the deviations from the ideal antenna surface are not normally distributed. The observed density distribution for all antenna elevation angles is taller and narrower than the normal density, which results in large positive values of kurtosis and a significant amount of skewness. The skewness of the distribution changes from positive to negative as the antenna elevation changes from zenith to horizon.
What to use to express the variability of data: Standard deviation or standard error of mean?
Barde, Mohini P; Barde, Prajakt J
2012-07-01
Statistics plays a vital role in biomedical research. It helps present data precisely and draw meaningful conclusions. While presenting data, one should be aware of using adequate statistical measures. In biomedical journals, the Standard Error of the Mean (SEM) and the Standard Deviation (SD) are used interchangeably to express variability, though they measure different parameters. SEM quantifies uncertainty in the estimate of the mean, whereas SD indicates dispersion of the data from the mean. As readers are generally interested in knowing the variability within the sample, descriptive data should be summarized with the SD. Use of the SEM should be limited to computing confidence intervals, which measure the precision of the population estimate. Journals can avoid such errors by requiring authors to adhere to their guidelines.
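A short example of the distinction: the SD describes the spread of the observations, while the SEM = SD/√n describes the precision of the sample mean and feeds into its confidence interval. The data below are hypothetical.

```python
import numpy as np
from scipy import stats

def describe(sample, confidence=0.95):
    """Report SD (spread of the data) and SEM with a confidence interval for the mean."""
    sample = np.asarray(sample, float)
    n = sample.size
    mean = sample.mean()
    sd = sample.std(ddof=1)                   # variability of the observations
    sem = sd / np.sqrt(n)                     # precision of the estimated mean
    t_crit = stats.t.ppf(0.5 + confidence / 2, df=n - 1)
    ci = (mean - t_crit * sem, mean + t_crit * sem)
    return mean, sd, sem, ci

print(describe([98, 102, 95, 101, 99, 103, 97, 100]))
```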
McClure, Foster D; Lee, Jung K
2005-01-01
Sample size formulas are developed to estimate the repeatability and reproducibility standard deviations (Sr and SR) such that the actual error in Sr and SR relative to their respective true values, σr and σR, is at predefined levels. The statistical consequences associated with the sample size required by AOAC INTERNATIONAL to validate an analytical method are discussed. In addition, formulas to estimate the uncertainties of Sr and SR were derived and are provided as supporting documentation. A formula is also given for the number of replicates required for a specified margin of relative error in the estimate of the repeatability standard deviation.
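The specific formulas of the paper are not reproduced here, but a commonly used large-sample approximation conveys the idea: the relative standard error of a sample standard deviation is roughly 1/√(2(n − 1)), so the number of replicates needed for a target relative error e at about the 95% level is approximately 1 + (1.96/e)²/2. The sketch below implements only this generic approximation.

```python
from math import ceil

def n_for_sd_relative_error(rel_error, z=1.96):
    """Approximate replicates needed so the sample SD is within rel_error of the
    true SD at roughly the z level, using SE(s)/sigma ~ 1/sqrt(2*(n - 1))."""
    return ceil(1 + 0.5 * (z / rel_error) ** 2)

for e in (0.30, 0.20, 0.10):
    print(f"relative error {e:.0%}: n ~ {n_for_sd_relative_error(e)}")
```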
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, R; Bai, W
Purpose: Because of statistical noise in Monte Carlo dose calculations, effective point doses may not be accurate. Volume spheres are useful for evaluating dose in Monte Carlo plans, which carry an inherent statistical uncertainty. We use a user-defined sphere volume instead of a point, sampling a sphere around the effective point and computing dose statistics to reduce the stochastic errors. Methods: Direct dose measurements were made using a 0.125 cc Semiflex ionization chamber (IC) 31010 placed isocentrically at the center of a homogeneous cylindrical sliced RW3 phantom (PTW, Germany). In the scanned CT phantom series, the sensitive volume length of the IC (6.5 mm) was delineated and the isocenter was defined as the simulation effective point. All beams were simulated in Monaco in accordance with the measured model, using a 2 mm calculation grid spacing, dose-to-medium calculation, and a requested relative standard deviation of ≤0.5%. Three different IC override densities (air electron density (ED) of 0.01 g/cm3, the default CT-scanned ED, and an esophageal lumen ED of 0.21 g/cm3) were tested at different sampling sphere radii (2.5, 2, 1.5, and 1 mm), and the dose statistics were compared with the measured dose. Results: In the Monaco TPS, the IC with esophageal lumen ED 0.21 g/cm3 and a 1.5 mm sampling sphere radius gave the best agreement with the measured value, with an absolute average percentage deviation of 0.49%. When the IC was assigned the air ED of 0.01 g/cm3 or the default CT-scanned ED, the recommended statistical sampling sphere radius was 2.5 mm, with percentage deviations of 0.61% and 0.70%, respectively. Conclusion: In the Monaco treatment planning system, for the 31010 ionization chamber we recommend overriding the air cavity with ED 0.21 g/cm3 and sampling a 1.5 mm sphere volume instead of a point dose to reduce stochastic errors. Funding Support No. C201505006.
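A generic sketch of the sphere-sampling idea (not the Monaco implementation): average the voxel doses of a noisy Monte Carlo dose grid over a user-defined sphere centered on the effective point. The grid size, voxel spacing, and dose values below are hypothetical.

```python
import numpy as np

def mean_dose_in_sphere(dose, spacing_mm, center_mm, radius_mm):
    """Average a 3-D Monte Carlo dose grid over a sphere around a point.

    dose       : 3-D array of voxel doses
    spacing_mm : voxel size along each axis, e.g. (2.0, 2.0, 2.0)
    center_mm  : physical coordinates of the effective point
    radius_mm  : sampling-sphere radius
    """
    idx = np.indices(dose.shape).astype(float)
    coords = [idx[a] * spacing_mm[a] for a in range(3)]
    r2 = sum((coords[a] - center_mm[a]) ** 2 for a in range(3))
    mask = r2 <= radius_mm**2                  # voxels whose centers lie in the sphere
    return dose[mask].mean()

# Hypothetical 2 mm grid with 5% statistical noise around a uniform dose of 1.0
rng = np.random.default_rng(3)
dose = 1.0 + 0.05 * rng.standard_normal((50, 50, 50))
print(mean_dose_in_sphere(dose, (2.0, 2.0, 2.0), center_mm=(50, 50, 50), radius_mm=2.5))
```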
Analysis of statistical misconception in terms of statistical reasoning
NASA Astrophysics Data System (ADS)
Maryati, I.; Priatna, N.
2018-05-01
Reasoning skill is needed by everyone in the globalization era, because every person has to be able to manage and use information from all over the world, which can be obtained easily. Statistical reasoning skill is the ability to collect, group, process, and interpret information and to draw conclusions from it. This skill can be developed at various levels of education. However, the skill remains low because many people, students included, assume that statistics is just the ability to count and to use formulas, and students still have a negative attitude toward courses related to research. The purpose of this research is to analyze students’ misconceptions in a descriptive statistics course in relation to statistical reasoning skill. The observation was done by analyzing the results of a misconception test and a statistical reasoning skill test, and by observing the effect of students’ misconceptions on statistical reasoning skill. The sample consisted of 32 students of a mathematics education department who had taken the descriptive statistics course. The mean value of the misconception test was 49.7 with a standard deviation of 10.6, whereas the mean value of the statistical reasoning skill test was 51.8 with a standard deviation of 8.5. If 65 is taken as the minimal value for standard achievement of course competence, the students’ mean values are below the standard. The results of the misconception study indicate which subtopics should be given attention. Based on the assessment, students’ misconceptions occur in: 1) writing mathematical sentences and symbols correctly, 2) understanding basic definitions, and 3) determining the concept to be used in solving a problem. For statistical reasoning skill, the assessment measured reasoning about: 1) data, 2) representation, 3) statistical format, 4) probability, 5) samples, and 6) association.
Efficiency and large deviations in time-asymmetric stochastic heat engines
Gingrich, Todd R.; Rotskoff, Grant M.; Vaikuntanathan, Suriyanarayanan; ...
2014-10-24
In a stochastic heat engine driven by a cyclic non-equilibrium protocol, fluctuations in work and heat give rise to a fluctuating efficiency. Using computer simulations and tools from large deviation theory, we have examined these fluctuations in detail for a model two-state engine. We find in general that the form of efficiency probability distributions is similar to those described by Verley et al (2014 Nat. Commun. 5 4721), in particular featuring a local minimum in the long-time limit. In contrast to the time-symmetric engine protocols studied previously, however, this minimum need not occur at the value characteristic of a reversible Carnot engine. Furthermore, while the local minimum may reside at the global minimum of a large deviation rate function, it does not generally correspond to the least likely efficiency measured over finite time. Lastly, we introduce a general approximation for the finite-time efficiency distribution, P(η), based on large deviation statistics of work and heat, that remains very accurate even when P(η) deviates significantly from its large deviation form.
Kim, Kyung-Seon; Park, Soo-Byung; Kim, Seong-Sik; Kim, Yong-Il
2013-01-01
Objective In this study, we aimed to examine the relationship between chin deviation and the positional and morphological features of the mandible and to determine the factors that contributed to chin deviation in individuals with a unilateral cleft lip and palate (UCLP). Methods Cone-beam computed tomography (CBCT) images of 28 adults with UCLP were analyzed in this study. Segmented three-dimensional temporomandibular fossa and mandible images were reconstructed, and angular, linear, and volumetric parameters were measured. Results For all 28 individuals, the chin was found to deviate to the cleft side by 1.59 mm. Moreover, among these 28 individuals, only 7 showed distinct (more than 4 mm) chin deviation, which was toward the cleft side. Compared to the non-cleft side, the mandibular body length, frontal ramal inclination, and vertical position of the condyle were lower and inclination of the temporomandibular fossa was steeper on the cleft side. Furthermore, the differences in inclination of the temporomandibular fossa, mandibular body length, ramus length, and condylar volume ratio (non-deviated/deviated) were positively correlated with chin deviation. Conclusions UCLP individuals show mild chin deviation to the cleft side. Statistical differences were noted in the parameters that represented positional and morphological asymmetries of the mandible and temporomandibular fossa; however, these differences were too small to indicate clinical significance. PMID:24015386
A Spatio-Temporal Approach for Global Validation and Analysis of MODIS Aerosol Products
NASA Technical Reports Server (NTRS)
Ichoku, Charles; Chu, D. Allen; Mattoo, Shana; Kaufman, Yoram J.; Remer, Lorraine A.; Tanre, Didier; Slutsker, Ilya; Holben, Brent N.; Lau, William K. M. (Technical Monitor)
2001-01-01
With the launch of the MODIS sensor on the Terra spacecraft, new data sets of the global distribution and properties of aerosol are being retrieved, and need to be validated and analyzed. A system has been put in place to generate spatial statistics (mean, standard deviation, direction and rate of spatial variation, and spatial correlation coefficient) of the MODIS aerosol parameters over more than 100 validation sites spread around the globe. Corresponding statistics are also computed from temporal subsets of AERONET-derived aerosol data. The means and standard deviations of identical parameters from MODIS and AERONET are compared. Although their means compare favorably, their standard deviations reveal some influence of surface effects on the MODIS aerosol retrievals over land, especially at low aerosol loading. The direction and rate of spatial variation from MODIS are used to study the spatial distribution of aerosols at various locations either individually or comparatively. This paper introduces the methodology for generating and analyzing the data sets used by the two MODIS aerosol validation papers in this issue.
Aguiar, Carlos M; Câmara, Andréa C
2008-12-01
This study evaluated, by means of radiographic examination, the occurrence of deviations in the apical third of root canals shaped with hand and rotary instruments. Sixty mandibular human molars were divided into three groups. The root canals in group 1 were instrumented with ProTaper (Dentsply/Maillefer, Ballaigues, Switzerland) for hand use, group 2 with ProTaper and group 3 with RaCe. The images obtained by double superimposition of the pre- and postoperative radiographs were evaluated by two endodontists with the aid of a magnifier-viewer and a fivefold magnifier. Statistical analysis was performed using the Fisher-Freeman-Halton test. The instrumentation using the ProTaper for hand use showed 25% of the canals with a deviation in the apical third, as did the ProTaper, while the corresponding figure for the RaCe (FKG Dentaire, La-Chaux-de-Fonds, Switzerland) was 20%, but these results were not statistically significant. There was no correlation between the occurrence of deviations in the apical third and the systems used.
Residual symptoms after surgery for unilateral congenital superior oblique palsy.
Caca, Ihsan; Sahin, Alparslan; Cingu, Abdullah; Ari, Seyhmus; Akbas, Umut
2012-01-01
To establish the surgical results and residual symptoms in 48 cases with unilateral congenital superior oblique muscle palsy that had surgical intervention to the vertical muscles alone. Myectomy and concomitant disinsertion of the inferior oblique (IO) muscle was performed in 38 cases and myectomy and concomitant IO disinsertion and recession of the superior rectus muscle in the ipsilateral eye was performed in 10 cases. The preoperative and postoperative vertical deviation values and surgical results were compared. Of the patients who had myectomy and concomitant IO disinsertion, 74% achieved an "excellent" result, 21% a "good" result, and 5% a "poor" result postoperatively. The difference in deviation between preoperative and postoperative values was statistically significant (P < .001). Of the patients who had myectomy and concomitant inferior oblique disinsertion and ipsilateral superior rectus recession, 50% achieved an "excellent" result, 20% a "good" result, and 30% a "poor" result postoperatively. The difference in deviation between preoperative and postoperative values was statistically significant (P < .001). Both procedures are effective and successful in patients with superior oblique muscle palsy, but a secondary surgery may be required. Copyright 2012, SLACK Incorporated.
Gutman, Shawn; Kim, Daniel; Tarafder, Solaiman; Velez, Sergio; Jeong, Julia; Lee, Chang H
2018-02-01
To determine the regionally variant quality of collagen alignment in human TMJ discs and its statistical correlation with viscoelastic properties. For quantitative analysis of the quality of collagen alignment, horizontal sections of human TMJ discs with Picrosirius Red staining were imaged under circularly polarized microscopy. Mean angle and angular deviation of collagen fibers in each region were analyzed using a well-established automated image-processing method for angular gradient. Instantaneous and relaxation moduli of each disc region were measured with stress-relaxation tests in both tension and compression. Spearman correlation analysis was then performed between the angular deviation and the moduli. To understand the effect of glycosaminoglycans on the correlation, TMJ disc samples were treated with chondroitinase ABC (C-ABC). Our image-processing analysis showed the region-variant direction of collagen alignment, consistent with previous findings. Interestingly, the quality of collagen alignment, not only the direction, was significantly different between the regions. The angular deviation of fiber alignment in the anterior and intermediate regions was significantly smaller than in the posterior region. The medial and lateral regions showed significantly larger angular deviation than all the other regions. The regionally variant angular deviation values showed a statistically significant correlation with the tensile instantaneous modulus and the relaxation modulus, partially dependent on C-ABC treatment. Our findings suggest that the region-variant degree of collagen fiber alignment likely underlies the heterogeneous viscoelastic properties of the TMJ disc, which may have significant implications for the development of regenerative therapies for the TMJ disc. Copyright © 2017 Elsevier Ltd. All rights reserved.
Specifying the ISS Plasma Environment
NASA Technical Reports Server (NTRS)
Minow, Joseph I.; Diekmann, Anne; Neergaard, Linda; Bui, Them; Mikatarian, Ronald; Barsamian, Hagop; Koontz, Steven
2002-01-01
Quantifying the spacecraft charging risks and corresponding hazards for the International Space Station (ISS) requires a plasma environment specification describing the natural variability of ionospheric temperature (Te) and density (Ne). Empirical ionospheric specification and forecast models such as the International Reference Ionosphere (IRI) model typically only provide estimates of long term (seasonal) mean Te and Ne values for the low Earth orbit environment. Knowledge of the Te and Ne variability, as well as the likelihood of extreme deviations from the mean values, is required to estimate both the magnitude and frequency of occurrence of potentially hazardous spacecraft charging environments for a given ISS construction stage and flight configuration. This paper describes the statistical analysis of historical ionospheric low Earth orbit plasma measurements used to estimate Ne, Te variability in the ISS flight environment. The statistical variability analysis of Ne and Te enables calculation of the expected frequency of occurrence of any particular values of Ne and Te, especially those that correspond to possibly hazardous spacecraft charging environments. The database used in the original analysis included measurements from the AE-C, AE-D, and DE-2 satellites. Recent work has added additional satellites and ground-based incoherent scatter radar observations to the database. Deviations of the data values from the IRI estimated Ne, Te parameters for each data point provide a statistical basis for modeling the deviations of the plasma environment from the IRI model output.
Hung, Kuo-Feng; Wang, Feng; Wang, Hao-Wei; Zhou, Wen-Jie; Huang, Wei; Wu, Yi-Qun
2017-06-01
A real-time surgical navigation system potentially increases the accuracy when used for quad-zygomatic implant placement. To evaluate the accuracy of a real-time surgical navigation system when used for quad zygomatic implant placement. Patients with severely atrophic maxillae were prospectively recruited. Four trajectories for implants were planned, and zygomatic implants were placed using a real-time surgical navigation system. The planned-placed distance deviations at entry (entry deviation)points, exit (exit deviation) points, and angle deviation of axes (angle deviation) were measured on fused operation images. The differences of all the deviations between different groups, classified based on the lengths and locations of implants, were analysed. A P value of < 0.05 indicated statistical significance. Forty zygomatic implants were placed as planned in 10 patients. The entry deviation, exit deviation and angle deviation were 1.35 ± 0.75 mm, 2.15 mm ± 0.95 mm, and 2.05 ± 1.02 degrees, respectively. The differences of all deviations were not significant, irrespective of the lengths (P = .259, .158, and .914, respectively) or locations of the placed implants (P = .698, .072, and .602, respectively). A real-time surgical navigation system used for the placement of quad zygomatic implants demonstrated a high level of accuracy with only minimal planned-placed deviations, irrespective of the lengths or locations of the implants. © 2017 Wiley Periodicals, Inc.
The Adequacy of Different Robust Statistical Tests in Comparing Two Independent Groups
ERIC Educational Resources Information Center
Pero-Cebollero, Maribel; Guardia-Olmos, Joan
2013-01-01
In the current study, we evaluated various robust statistical methods for comparing two independent groups. Two scenarios for simulation were generated: one of equality and another of population mean differences. In each of the scenarios, 33 experimental conditions were used as a function of sample size, standard deviation and asymmetry. For each…
Chun, Bo Young; Kwon, Soon Jae; Chae, Sun Hwa; Kwon, Jung Yoon
2007-09-01
To evaluate changes in ocular alignment in partially accommodative esotropic children, aged 3 to 8 years, during occlusion therapy for amblyopia. Angle measurements of twenty-two partially accommodative esotropic patients with moderate amblyopia were evaluated before and at 2 years after occlusion therapy. Mean deviation angle with glasses at the start of occlusion treatment was 19.45+/-5.97 PD and decreased to 12.14+/-12.96 PD at 2 years after occlusion therapy (p<0.01). After occlusion therapy, 9 (41%) patients remained surgical candidates for residual deviation, whereas if surgery had been planned before occlusion treatment, 18 (82%) of the patients would have had surgery. There was a statistical relationship between the increase in visual acuity ratio and the decrease in deviation angle (r=-0.479, p=0.024). There was a significant reduction of the deviation angle of partially accommodative esotropic patients at 2 years after occlusion therapy. Our results suggest that occlusion therapy has an influence on ocular alignment in partially accommodative esotropic patients with amblyopia.
NASA Astrophysics Data System (ADS)
Beaudet, Robert A.
2013-06-01
NASA Planetary Protection Policy requires that Category IV missions, such as those going to the surface of Mars, include detailed assessment and documentation of the bioburden on the spacecraft at launch. In prior missions to Mars, the approaches used to estimate the bioburden could easily be conservative without penalizing the project because spacecraft elements such as the descent and landing stages had relatively small surface areas and volumes. With the advent of a large spacecraft such as the Mars Science Laboratory (MSL), it became necessary that a modified statistical treatment, still conservative but more pragmatic, be used to obtain the standard deviations and the bioburden densities at about the 99.9% confidence limits. This article describes both the Gaussian and Poisson statistics that were implemented to analyze the bioburden data from the MSL spacecraft prior to launch. The standard deviations were weighted by the areas sampled with each swab or wipe. Some typical cases are given and discussed.
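As a rough illustration of the two statistical treatments named above (and not the actual MSL planetary-protection procedure), the following Python sketch computes an area-weighted mean and standard deviation of bioburden density, a one-sided Gaussian 99.9% limit, and an exact Poisson 99.9% upper limit on the pooled count; the sampled areas and colony counts are hypothetical placeholders, and NumPy and SciPy are assumed to be available.

    import numpy as np
    from scipy.stats import chi2

    areas = np.array([0.0025, 0.0025, 0.01, 0.01])   # m^2 per swab/wipe (hypothetical)
    counts = np.array([0, 2, 1, 0])                  # colony counts per sample (hypothetical)

    dens = counts / areas                            # spores per m^2 for each sample
    w = areas / areas.sum()                          # weight each sample by area sampled
    mean = np.sum(w * dens)                          # area-weighted mean density
    std = np.sqrt(np.sum(w * (dens - mean) ** 2))    # area-weighted standard deviation
    gauss_999 = mean + 3.09 * std                    # one-sided Gaussian ~99.9% limit

    # Exact Poisson 99.9% upper limit on the pooled count, converted to a density.
    k = counts.sum()
    poisson_999 = 0.5 * chi2.ppf(0.999, 2 * (k + 1)) / areas.sum()
    print(mean, gauss_999, poisson_999)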
Assessment of variations in thermal cycle life data of thermal barrier coated rods
NASA Technical Reports Server (NTRS)
Hendricks, R. C.; Mcdonald, G.
1981-01-01
An analysis of thermal cycle life data for 22 thermal barrier coated (TBC) specimens was conducted. The ZrO2-8Y2O3/NiCrAlY plasma spray coated Rene 41 rods were tested in a Mach 0.3 Jet A/air burner flame. All specimens were subjected to the same coating and subsequent test procedures in an effort to control three parametric groups: material properties, geometry, and heat flux. Statistically, the data sample space had a mean of 1330 cycles with a standard deviation of 520 cycles. The data were described by normal or log-normal distributions, but other models could also apply; the sample size must be increased to clearly delineate a statistical failure model. The statistical methods were also applied to adhesive/cohesive strength data for 20 TBC discs of the same composition, with similar results. The sample space had a mean of 9 MPa with a standard deviation of 4.2 MPa.
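A short Python sketch of the distribution comparison mentioned above, using synthetic stand-in values rather than the actual 22 cycle-life measurements; it fits normal and log-normal models and compares them with Kolmogorov-Smirnov statistics (SciPy assumed).

    import numpy as np
    from scipy import stats

    cycles = np.array([650., 820, 900, 1100, 1200, 1300, 1350, 1400,
                       1500, 1600, 1750, 1900, 2100, 2300])   # synthetic stand-ins
    mu, sigma = cycles.mean(), cycles.std(ddof=1)
    print(f"mean = {mu:.0f} cycles, standard deviation = {sigma:.0f} cycles")

    ks_norm = stats.kstest(cycles, 'norm', args=(mu, sigma))
    shape, loc, scale = stats.lognorm.fit(cycles, floc=0)
    ks_logn = stats.kstest(cycles, 'lognorm', args=(shape, loc, scale))
    print("KS statistic, normal:", ks_norm.statistic, " log-normal:", ks_logn.statistic)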
Numerical solutions for patterns statistics on Markov chains.
Nuel, Gregory
2006-01-01
We propose here a review of the methods available to compute pattern statistics on text generated by a Markov source. Theoretical, but also numerical aspects are detailed for a wide range of techniques (exact, Gaussian, large deviations, binomial and compound Poisson). The SPatt package (Statistics for Pattern, free software available at http://stat.genopole.cnrs.fr/spatt) implementing all these methods is then used to compare all these approaches in terms of computational time and reliability in the most complete pattern statistics benchmark available at the present time.
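As an illustration of the simplest of the approaches listed above (and not the SPatt implementation itself), the following Python sketch estimates a first-order Markov model from a toy sequence, computes the expected count of a short pattern, and applies a Poisson approximation for the p-value of the observed count.

    import numpy as np
    from scipy.stats import poisson

    text, pattern = "abbababbabababbbabab", "abb"      # toy sequence and pattern
    alphabet = sorted(set(text))
    idx = {c: i for i, c in enumerate(alphabet)}

    # Estimate the transition matrix and its stationary distribution from the text.
    P = np.zeros((len(alphabet), len(alphabet)))
    for a, b in zip(text, text[1:]):
        P[idx[a], idx[b]] += 1
    P /= P.sum(axis=1, keepdims=True)
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    pi /= pi.sum()

    # Expected count: (n - m + 1) * pi(w1) * product of transition probabilities.
    prob = pi[idx[pattern[0]]]
    for a, b in zip(pattern, pattern[1:]):
        prob *= P[idx[a], idx[b]]
    expected = (len(text) - len(pattern) + 1) * prob

    observed = sum(text[i:i + len(pattern)] == pattern
                   for i in range(len(text) - len(pattern) + 1))
    print(expected, observed, poisson.sf(observed - 1, expected))   # P(N >= observed)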
Wind speed statistics for Goldstone, California, anemometer sites
NASA Technical Reports Server (NTRS)
Berg, M.; Levy, R.; Mcginness, H.; Strain, D.
1981-01-01
An exploratory wind survey at an antenna complex was summarized statistically for application to future windmill designs. Data were collected at six locations from a total of 10 anemometers. Statistics include means, standard deviations, cubes, pattern factors, correlation coefficients, and exponents for power law profile of wind speed. Curves presented include: mean monthly wind speeds, moving averages, and diurnal variation patterns. It is concluded that three of the locations have sufficiently strong winds to justify consideration for windmill sites.
Du, Yiping P; Jin, Zhaoyang
2009-10-01
To develop a robust algorithm for tissue-air segmentation in magnetic resonance imaging (MRI) using the statistics of phase and magnitude of the images. A multivariate measure based on the statistics of phase and magnitude was constructed for tissue-air volume segmentation. The standard deviation of first-order phase difference and the standard deviation of magnitude were calculated in a 3 x 3 x 3 kernel in the image domain. To improve differentiation accuracy, the uniformity of phase distribution in the kernel was also calculated and linear background phase introduced by field inhomogeneity was corrected. The effectiveness of the proposed volume segmentation technique was compared to a conventional approach that uses the magnitude data alone. The proposed algorithm was shown to be more effective and robust in volume segmentation in both synthetic phantom and susceptibility-weighted images of human brain. Using our proposed volume segmentation method, veins in the peripheral regions of the brain were well depicted in the minimum-intensity projection of the susceptibility-weighted images. Using the additional statistics of phase, tissue-air volume segmentation can be substantially improved compared to that using the statistics of magnitude data alone. (c) 2009 Wiley-Liss, Inc.
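A minimal Python sketch of the local-statistics idea described above (not the authors' algorithm): the standard deviations of magnitude and of the first-order phase difference are computed in a 3 x 3 x 3 neighbourhood, and a simple threshold on the phase statistic stands in for the multivariate measure; the background-phase correction and phase-uniformity term are omitted.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_std(vol, size=3):
        """Standard deviation in a size**3 neighbourhood around every voxel."""
        m = uniform_filter(vol, size)
        m2 = uniform_filter(vol * vol, size)
        return np.sqrt(np.maximum(m2 - m * m, 0.0))

    def tissue_mask(magnitude, phase, phase_std_thresh):
        dphase = np.diff(phase, axis=0, prepend=phase[:1])   # first-order phase difference
        s_phase = local_std(dphase)    # large in air (random phase), small in tissue
        s_mag = local_std(magnitude)   # second feature of the multivariate measure
        return s_phase < phase_std_thresh, s_mag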
Edjabou, Maklawe Essonanawe; Martín-Fernández, Josep Antoni; Scheutz, Charlotte; Astrup, Thomas Fruergaard
2017-11-01
Data for fractional solid waste composition provide relative magnitudes of individual waste fractions, the percentages of which always sum to 100, thereby connecting them intrinsically. Due to this sum constraint, waste composition data represent closed data, and their interpretation and analysis require statistical methods, other than classical statistics that are suitable only for non-constrained data such as absolute values. However, the closed characteristics of waste composition data are often ignored when analysed. The results of this study showed, for example, that unavoidable animal-derived food waste amounted to 2.21±3.12% with a confidence interval of (-4.03; 8.45), which highlights the problem of the biased negative proportions. A Pearson's correlation test, applied to waste fraction generation (kg mass), indicated a positive correlation between avoidable vegetable food waste and plastic packaging. However, correlation tests applied to waste fraction compositions (percentage values) showed a negative association in this regard, thus demonstrating that statistical analyses applied to compositional waste fraction data, without addressing the closed characteristics of these data, have the potential to generate spurious or misleading results. Therefore, compositional data should be transformed adequately prior to any statistical analysis, such as computing mean, standard deviation and correlation coefficients. Copyright © 2017 Elsevier Ltd. All rights reserved.
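One standard way to address the closure problem mentioned above is a log-ratio transformation; the Python sketch below applies a centred log-ratio (clr) transform to toy waste-fraction percentages before computing means and standard deviations (zero fractions would need to be treated first, which is not shown).

    import numpy as np

    def clr(compositions):
        x = np.asarray(compositions, dtype=float)
        x = x / x.sum(axis=1, keepdims=True)                 # close each row to proportions
        g = np.exp(np.log(x).mean(axis=1, keepdims=True))    # per-sample geometric mean
        return np.log(x / g)

    waste = np.array([[55.0, 25.0, 20.0],
                      [60.0, 22.0, 18.0],
                      [48.0, 30.0, 22.0]])                   # toy percentages of three fractions
    z = clr(waste)
    print(z.mean(axis=0), z.std(axis=0, ddof=1))             # statistics computed in clr space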
ERIC Educational Resources Information Center
Sharma, Kshitij; Chavez-Demoulin, Valérie; Dillenbourg, Pierre
2017-01-01
The statistics used in education research are based on central trends such as the mean or standard deviation, discarding outliers. This paper adopts another viewpoint that has emerged in statistics, called extreme value theory (EVT). EVT claims that the bulk of normal distribution is comprised mainly of uninteresting variations while the most…
Summary Report on NRL Participation in the Microwave Landing System Program.
1980-08-19
shifters were measured and statistically analyzed. Several research contracts for promising phased array techniques were awarded to industrial contractors...program was written for compiling statistical data on the measurements, which reads out insertion phase characteristics and standard deviation...GLOSSARY OF TERMS ALPA Airline Pilots’ Association ATA Air Transport Association AWA Australasian Wireless Amalgamated AWOP All-weather Operations
Özler, Gül Soylu
2016-01-01
The author conducted a prospective study of patients who underwent septoplasty for nasal obstruction secondary to a septal deviation to determine if the location of the deviation had any association with the degree of postoperative pain. Patients with an anteroposterior deviation were not included in this study, nor were patients with vasomotor rhinitis, allergic rhinitis, nasal polyposis, turbinate pathologies, or a systemic disease; also excluded were patients who were taking any medication and those who had undergone any previous nasal surgery. The final study population included 140 patients, who were divided into two groups on the basis of the location of their deviation. A total of 64 patients (35 men and 29 women; mean age: 29.8 yr) had an anterior deviation, and 76 patients (35 men and 41 women; mean age: 30.3 yr) had a posterior deviation; there were no statistically significant differences between the two groups in terms of sex (p = 0.309) or age (p = 0.848). During the postoperative period, pain intensity in both groups was self-evaluated on a visual analog scale on days 1, 3, and 7 and again at 3 and 6 months. The mean postoperative pain scores on days 1, 3, and 7 were significantly higher in the posterior deviation group than in the anterior group; scores in the two groups were similar at 3 and 6 months.
Son, Ji Y; Ramos, Priscilla; DeWolf, Melissa; Loftus, William; Stigler, James W
2018-01-01
In this article, we begin to lay out a framework and approach for studying how students come to understand complex concepts in rich domains. Grounded in theories of embodied cognition, we advance the view that understanding of complex concepts requires students to practice, over time, the coordination of multiple concepts, and the connection of this system of concepts to situations in the world. Specifically, we explore the role that a teacher's gesture might play in supporting students' coordination of two concepts central to understanding in the domain of statistics: mean and standard deviation. In Study 1 we show that university students who have just taken a statistics course nevertheless have difficulty taking both mean and standard deviation into account when thinking about a statistical scenario. In Study 2 we show that presenting the same scenario with an accompanying gesture to represent variation significantly impacts students' interpretation of the scenario. Finally, in Study 3 we present evidence that instructional videos on the internet fail to leverage gesture as a means of facilitating understanding of complex concepts. Taken together, these studies illustrate an approach to translating current theories of cognition into principles that can guide instructional design.
JAN transistor and diode characterization test program
NASA Technical Reports Server (NTRS)
Takeda, H.
1977-01-01
A statistical summary of electrical characterization was performed on JAN diodes and transistors. Parameters are presented with test conditions, mean, standard deviation, lowest reading, 10% point, 90% point and highest reading.
NASA Astrophysics Data System (ADS)
Lu, Xian; Chu, Xinzhao; Li, Haoyu; Chen, Cao; Smith, John A.; Vadas, Sharon L.
2017-09-01
We present the first statistical study of gravity waves with periods of 0.3-2.5 h that are persistent and dominant in the vertical winds measured with the University of Colorado STAR Na Doppler lidar in Boulder, CO (40.1°N, 105.2°W). The probability density functions of the wave amplitudes in temperature and vertical wind, ratios of these two amplitudes, phase differences between them, and vertical wavelengths are derived directly from the observations. The intrinsic period and horizontal wavelength of each wave are inferred from its vertical wavelength, amplitude ratio, and a designated eddy viscosity by applying the gravity wave polarization and dispersion relations. The amplitude ratios are positively correlated with the ground-based periods with a coefficient of 0.76. The phase differences between the vertical winds and temperatures (φW − φT) follow a Gaussian distribution with a mean of 84.2° and a standard deviation of 26.7°, a much larger standard deviation than that predicted for non-dissipative waves (~3.3°). The deviations of the observed phase differences from their predicted values for non-dissipative waves may indicate wave dissipation. The shorter-vertical-wavelength waves tend to have larger phase difference deviations, implying that the dissipative effects are more significant for shorter waves. The majority of these waves have vertical wavelengths ranging from 5 to 40 km with a mean and standard deviation of 18.6 and 7.2 km, respectively. For waves with similar periods, multiple peaks in the vertical wavelengths are identified frequently and the ones peaking in the vertical wind are statistically longer than those peaking in the temperature. The horizontal wavelengths range mostly from 50 to 500 km with a mean and median of 180 and 125 km, respectively. Therefore, these waves are mesoscale waves with high-to-medium frequencies. Since they have recently become resolvable in high-resolution general circulation models (GCMs), this statistical study provides an important and timely reference for them.
In vivo dosimetry for external photon treatments of head and neck cancers by diodes and TLDS.
Tung, C J; Wang, H C; Lo, S H; Wu, J M; Wang, C J
2004-01-01
In vivo dosimetry was implemented for treatments of head and neck cancers in the large fields. Diode and thermoluminescence dosemeter (TLD) measurements were carried out for the linear accelerators of 6 MV photon beams. ESTRO in vivo dosimetry protocols were followed in the determination of midline doses from measurements of entrance and exit doses. Of the fields monitored by diodes, the maximum absolute deviation of measured midline doses from planned target doses was 8%, with the mean value and the standard deviation of -1.0 and 2.7%. If planned target doses were calculated using radiological water equivalent thicknesses rather than patient geometric thicknesses, the maximum absolute deviation dropped to 4%, with the mean and the standard deviation of 0.7 and 1.8%. For in vivo dosimetry monitored by TLDs, the shift in mean dose remained small but the statistical precision became poor.
Methods of editing cloud and atmospheric layer affected pixels from satellite data
NASA Technical Reports Server (NTRS)
Nixon, P. R.; Wiegand, C. L.; Richardson, A. J.; Johnson, M. P. (Principal Investigator)
1982-01-01
Subvisible cirrus clouds (SCi) were easily distinguished in mid-infrared (MIR) TIROS-N daytime data from south Texas and northeast Mexico. The MIR (3.55-3.93 micrometer) pixel digital count means of the SCi affected areas were more than 3.5 standard deviations on the cold side of the scene means. (These standard deviations were made free of the effects of unusual instrument error by factoring out the Ch 3 MIR noise on the basis of detailed examination of noisy and noise-free pixels). SCi affected areas in the IR Ch 4 (10.5-11.5 micrometer) appeared cooler than the general scene, but were not as prominent as in Ch 3, being less than 2 standard deviations from the scene mean. Ch 3 and 4 standard deviations and coefficients of variation are not reliable indicators, by themselves, of the presence of SCi because land features can have similar statistical properties.
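A minimal Python sketch of the screening rule described above: flag pixels whose mid-infrared digital counts lie more than 3.5 standard deviations on the cold side of the scene mean. It assumes lower counts correspond to colder scenes and omits the noise factoring and Channel 4 checks discussed in the abstract.

    import numpy as np

    def flag_sci(ch3_counts, k=3.5):
        scene_mean = ch3_counts.mean()
        scene_std = ch3_counts.std()
        # Cold side of the scene mean, assuming lower counts mean colder pixels.
        return ch3_counts < scene_mean - k * scene_std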
A proof for Rhiel's range estimator of the coefficient of variation for skewed distributions.
Rhiel, G Steven
2007-02-01
This research study provides proof that the coefficient of variation (CV(high-low)) calculated from the highest and lowest values in a set of data is applicable to specific skewed distributions with varying means and standard deviations. Earlier, Rhiel provided values for d(n), the standardized mean range, and a(n), an adjustment for bias in the range estimator of μ. These values are used in estimating the coefficient of variation from the range for skewed distributions. The d(n) and a(n) values were specified for specific skewed distributions with a fixed mean and standard deviation. In this proof it is shown that the d(n) and a(n) values are applicable for the specific skewed distributions when the mean and standard deviation can take on differing values. This will give the researcher confidence in using this statistic for skewed distributions regardless of the mean and standard deviation.
NASA Technical Reports Server (NTRS)
Anspaugh, B. E.; Miyahira, T. F.; Weiss, R. S.
1979-01-01
Statistical averages and standard deviations computed over the measured cells for each intensity-temperature measurement condition are presented. Averages and standard deviations of the cell characteristics are displayed in a two-dimensional array format: one dimension representing incoming light intensity, and the other the cell temperature. Programs for calculating the temperature coefficients of the pertinent cell electrical parameters are presented, and postirradiation data are summarized.
Deviations from Rayleigh statistics in ultrasonic speckle.
Tuthill, T A; Sperry, R H; Parker, K J
1988-04-01
The statistics of speckle patterns in ultrasound images have potential for tissue characterization. In "fully developed speckle" from many random scatterers, the amplitude is widely recognized as possessing a Rayleigh distribution. This study examines how scattering populations and signal processing can produce non-Rayleigh distributions. The first order speckle statistics are shown to depend on random scatterer density and the amplitude and spacing of added periodic scatterers. Envelope detection, amplifier compression, and signal bandwidth are also shown to cause distinct changes in the signal distribution.
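A toy Python simulation of the scattering model discussed above: the envelope of a sum of many random phasors is Rayleigh distributed (mean-to-standard-deviation ratio near 1.91), while adding a strong coherent component pulls the statistics away from Rayleigh; the scatterer numbers and amplitudes here are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(0)
    n_scatterers, n_samples = 200, 20_000

    phases = rng.uniform(0, 2 * np.pi, size=(n_samples, n_scatterers))
    field = np.exp(1j * phases).sum(axis=1)            # many diffuse (random) scatterers
    env_rayleigh = np.abs(field)

    coherent = 3 * np.sqrt(n_scatterers)               # strong deterministic component
    env_nonrayleigh = np.abs(field + coherent)         # deviates from Rayleigh statistics

    for name, env in [("diffuse", env_rayleigh), ("diffuse + coherent", env_nonrayleigh)]:
        print(name, round(env.mean() / env.std(), 2))  # ~1.91 for fully developed speckle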
Pope, Larry M.; Diaz, A.M.
1982-01-01
Quality-of-water data, collected October 21-23, 1980, and a statistical summary are presented for 42 coal-mined strip pits in Crawford and Cherokee Counties, Southeastern Kansas. The statistical summary includes minimum and maximum observed values , mean, and standard deviation. Simple linear regression equations relating specific conductance, dissolved solids, and acidity to concentrations of dissolved solids, sulfate, calcium, and magnesium, potassium, aluminum, and iron are also presented. (USGS)
Statistical studies of animal response data from USF toxicity screening test method
NASA Technical Reports Server (NTRS)
Hilado, C. J.; Machado, A. M.
1978-01-01
Statistical examination of animal response data obtained using Procedure B of the USF toxicity screening test method indicates that the data deviate only slightly from a normal or Gaussian distribution. This slight departure from normality is not expected to invalidate conclusions based on theoretical statistics. Comparison of times to staggering, convulsions, collapse, and death as endpoints shows that time to death appears to be the most reliable endpoint because it offers the lowest probability of missed observations and premature judgements.
Assessment issues in the testing of children at school entry.
Rock, Donald A; Stenner, A Jackson
2005-01-01
The authors introduce readers to the research documenting racial and ethnic gaps in school readiness. They describe the key tests, including the Peabody Picture Vocabulary Test (PPVT), the Early Childhood Longitudinal Study (ECLS), and several intelligence tests, and describe how they have been administered to several important national samples of children. Next, the authors review the different estimates of the gaps and discuss how to interpret these differences. In interpreting test results, researchers use the statistical term "standard deviation" to compare scores across the tests. On average, the tests find a gap of about 1 standard deviation. The ECLS-K estimate is the lowest, about half a standard deviation. The PPVT estimate is the highest, sometimes more than 1 standard deviation. When researchers adjust those gaps statistically to take into account different outside factors that might affect children's test scores, such as family income or home environment, the gap narrows but does not disappear. Why such different estimates of the gap? The authors consider explanations such as differences in the samples, racial or ethnic bias in the tests, and whether the tests reflect different aspects of school "readiness," and conclude that none is likely to explain the varying estimates. Another possible explanation is the Spearman Hypothesis: that all tests are imperfect measures of a general ability construct, g; the more highly a given test correlates with g, the larger the gap will be. But the Spearman Hypothesis, too, leaves questions to be investigated. A gap of 1 standard deviation may not seem large, but the authors show clearly how it results in striking disparities in the performance of black and white students and why it should be of serious concern to policymakers.
Beyond δ: Tailoring marked statistics to reveal modified gravity
NASA Astrophysics Data System (ADS)
Valogiannis, Georgios; Bean, Rachel
2018-01-01
Models which attempt to explain the accelerated expansion of the universe through large-scale modifications to General Relativity (GR), must satisfy the stringent experimental constraints of GR in the solar system. Viable candidates invoke a “screening” mechanism, that dynamically suppresses deviations in high density environments, making their overall detection challenging even for ambitious future large-scale structure surveys. We present methods to efficiently simulate the non-linear properties of such theories, and consider how a series of statistics that reweight the density field to accentuate deviations from GR can be applied to enhance the overall signal-to-noise ratio in differentiating the models from GR. Our results demonstrate that the cosmic density field can yield additional, invaluable cosmological information, beyond the simple density power spectrum, that will enable surveys to more confidently discriminate between modified gravity models and ΛCDM.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Xin, E-mail: xinshih86029@gmail.com; Zhao, Xiangmo; Hui, Fei
Clock synchronization in wireless sensor networks (WSNs) has been studied extensively in recent years, and many protocols have been put forward from the viewpoint of statistical signal processing, which is an effective way to optimize accuracy. However, the accuracy derived from the statistical data can be improved mainly by sufficient packet exchange, which greatly consumes the limited power resources. In this paper, a reliable clock estimation using linear weighted fusion based on pairwise broadcast synchronization is proposed to optimize sync accuracy without expending additional sync packets. As a contribution, a linear weighted fusion scheme for multiple clock deviations is constructed with the collaborative sensing of clock timestamps. The fusion weight is defined by the covariance of sync errors for the different clock deviations. Extensive simulation results show that the proposed approach can achieve better performance in terms of sync overhead and sync accuracy.
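A minimal Python sketch of linear weighted fusion of several clock-offset estimates, with weights taken inversely proportional to the variance of each estimate's sync error; this illustrates the general idea rather than the paper's exact weighting scheme, and the example numbers are invented.

    import numpy as np

    def fuse_offsets(offsets, error_vars):
        offsets = np.asarray(offsets, dtype=float)
        error_vars = np.asarray(error_vars, dtype=float)
        w = 1.0 / error_vars
        w /= w.sum()                                   # normalized inverse-variance weights
        fused = np.sum(w * offsets)
        fused_var = 1.0 / np.sum(1.0 / error_vars)     # variance of the fused estimate
        return fused, fused_var

    # Three offset estimates (seconds) with different sync-error variances.
    print(fuse_offsets([12.1e-6, 11.4e-6, 12.9e-6], [4e-12, 9e-12, 16e-12]))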
Skewness and kurtosis analysis for non-Gaussian distributions
NASA Astrophysics Data System (ADS)
Celikoglu, Ahmet; Tirnakli, Ugur
2018-06-01
In this paper we address a number of pitfalls regarding the use of kurtosis as a measure of deviations from the Gaussian. We treat kurtosis in both its standard definition and that which arises in q-statistics, namely q-kurtosis. We have recently shown that the relation proposed by Cristelli et al. (2012) between skewness and kurtosis can only be verified for relatively small data sets, independently of the type of statistics chosen; however it fails for sufficiently large data sets, if the fourth moment of the distribution is finite. For infinite fourth moments, kurtosis is not defined as the size of the data set tends to infinity. For distributions with finite fourth moments, the size, N, of the data set for which the standard kurtosis saturates to a fixed value, depends on the deviation of the original distribution from the Gaussian. Nevertheless, using kurtosis as a criterion for deciding which distribution deviates further from the Gaussian can be misleading for small data sets, even for finite fourth moment distributions. Going over to q-statistics, we find that although the value of q-kurtosis is finite in the range of 0 < q < 3, this quantity is not useful for comparing different non-Gaussian distributed data sets, unless the appropriate q value, which truly characterizes the data set of interest, is chosen. Finally, we propose a method to determine the correct q value and thereby to compute the q-kurtosis of q-Gaussian distributed data sets.
Accuracy of Digital vs. Conventional Implant Impressions
Lee, Sang J.; Betensky, Rebecca A.; Gianneschi, Grace E.; Gallucci, German O.
2015-01-01
The accuracy of digital impressions greatly influences the clinical viability in implant restorations. The aim of this study is to compare the accuracy of gypsum models acquired from the conventional implant impression to digitally milled models created from direct digitalization by three-dimensional analysis. Thirty gypsum and 30 digitally milled models impressed directly from a reference model were prepared. The models were scanned by a laboratory scanner and 30 STL datasets from each group were imported to an inspection software. The datasets were aligned to the reference dataset by a repeated best fit algorithm and 10 specified contact locations of interest were measured in mean volumetric deviations. The areas were pooled by cusps, fossae, interproximal contacts, horizontal and vertical axes of implant position and angulation. The pooled areas were statistically analysed by comparing each group to the reference model to investigate the mean volumetric deviations accounting for accuracy and standard deviations for precision. Milled models from digital impressions had comparable accuracy to gypsum models from conventional impressions. However, differences in fossae and vertical displacement of the implant position from the gypsum and digitally milled models compared to the reference model, exhibited statistical significance (p<0.001, p=0.020 respectively). PMID:24720423
Variance reduction for Fokker–Planck based particle Monte Carlo schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorji, M. Hossein, E-mail: gorjih@ifd.mavt.ethz.ch; Andric, Nemanja; Jenny, Patrick
Recently, Fokker–Planck based particle Monte Carlo schemes have been proposed and evaluated for simulations of rarefied gas flows [1–3]. In this paper, the variance reduction for particle Monte Carlo simulations based on the Fokker–Planck model is considered. First, deviational based schemes were derived and reviewed, and it is shown that these deviational methods are not appropriate for practical Fokker–Planck based rarefied gas flow simulations. This is due to the fact that the deviational schemes considered in this study lead either to instabilities in the case of two-weight methods or to large statistical errors if the direct sampling method is applied. Motivated by this conclusion, we developed a novel scheme based on correlated stochastic processes. The main idea here is to synthesize an additional stochastic process with a known solution, which is simultaneously solved together with the main one. By correlating the two processes, the statistical errors can dramatically be reduced; especially for low Mach numbers. To assess the methods, homogeneous relaxation, planar Couette and lid-driven cavity flows were considered. For these test cases, it could be demonstrated that variance reduction based on parallel processes is very robust and effective.
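The general idea of correlating the main process with an auxiliary one of known solution can be sketched in Python as a control-variate estimator; this toy example is not a Fokker–Planck solver, it simply shows how an auxiliary quantity with a known mean, correlated with the quantity of interest, reduces the statistical error.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    z = rng.standard_normal(n)

    f = np.exp(0.1 * z) * np.cos(z)        # quantity of interest (mean treated as unknown)
    g = 1 + 0.1 * z - 0.5 * z ** 2         # correlated surrogate with known mean
    g_mean = 0.5                           # E[g] = 1 + 0.1*E[z] - 0.5*E[z^2] = 0.5

    beta = np.cov(f, g)[0, 1] / np.var(g)
    f_cv = f - beta * (g - g_mean)         # control-variate corrected samples

    print(f.mean(), f.std() / np.sqrt(n))        # plain Monte Carlo estimate and error
    print(f_cv.mean(), f_cv.std() / np.sqrt(n))  # same mean, smaller statistical error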
Echenique-Robba, Pablo; Nelo-Bazán, María Alejandra; Carrodeguas, José A
2013-01-01
When the value of a quantity x for a number of systems (cells, molecules, people, chunks of metal, DNA vectors, and so on) is measured and the aim is to replicate the whole set again for different trials or assays, scientists often obtain quite different measurements despite efforts at a near-identical design. As a consequence, some systems' averages present standard deviations that are too large to render statistically significant results. This work presents a novel correction method of very low mathematical and numerical complexity that can reduce the standard deviation of such results and increase their statistical significance. Two conditions are to be met: the inter-system variations of x matter while its absolute value does not, and a similar tendency in the values of x must be present in the different assays (in other words, the results corresponding to different assays must present a high linear correlation). We demonstrate the improvements this method offers with a cell biology experiment, but it can be applied to any problem that conforms to the described structure and requirements, in any quantitative scientific field that deals with data subject to uncertainty.
Clinical information systems for the management of tuberculosis in primary health care.
Medeiros, Eliabe Rodrigues de; Silva, Sandy Yasmine Bezerra E; Ataide, Cáthia Alessandra Varela; Pinto, Erika Simone Galvão; Silva, Maria de Lourdes Costa da; Villa, Tereza Cristina Scatena
2017-12-11
To analyze the clinical information systems used in the management of tuberculosis in Primary Health Care. A descriptive, quantitative cross-sectional study with 100 health professionals, with data collected through a questionnaire to assess local institutional capacity for the model of attention to chronic conditions, as adapted for tuberculosis care. The analysis was performed through descriptive and inferential statistics. Nurses and the Community Health Agents were classified as having fair capacity, with means of 6.4 and 6.3, respectively. The city was classified as having fair capacity, with a mean of 6.0 and standard deviation of 1.5. Family Health Units had higher capacity than Basic Health Units and Mixed Units, although the difference was not statistically significant. Clinical records and data on tuberculosis patients, items of the clinical information systems, had a higher classification than the other items, being classified as having fair capacity with a mean of 7.3 and standard deviation of 1.6; the registry of TB patients had a mean of 6.6 and standard deviation of 2.0. Clinical information systems are present in the city, mainly in clinical records and patient data, with the contribution of professionals involved with tuberculosis patients.
NASA Technical Reports Server (NTRS)
Clark, P. E.; Andre, C. G.; Adler, I.; Weidner, J.; Podwysocki, M.
1976-01-01
The positive correlation between Al/Si X-ray fluorescence intensity ratios determined during the Apollo 15 lunar mission and a broad-spectrum visible albedo of the moon is quantitatively established. Linear regression analysis performed on 246 one-degree geographic cells of X-ray fluorescence intensity and visible albedo data points produced a statistically significant correlation coefficient of 0.78. Three distinct distributions of data were identified as (1) within one standard deviation of the regression line, (2) greater than one standard deviation below the line, and (3) greater than one standard deviation above the line. The latter two distributions of data were found to occupy distinct geographic areas in the Palus Somni region.
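A small Python sketch of the regression-and-residual classification described above (illustrative only, with the arrays standing in for the per-cell Al/Si and albedo values): fit a line, then split cells by whether they fall within, below, or above one standard deviation of the residuals.

    import numpy as np

    def classify_cells(albedo, al_si):
        slope, intercept = np.polyfit(albedo, al_si, 1)
        resid = al_si - (slope * albedo + intercept)
        s = resid.std(ddof=2)                      # residual standard deviation
        within = np.abs(resid) <= s                # distribution (1)
        below = resid < -s                         # distribution (2)
        above = resid > s                          # distribution (3)
        r = np.corrcoef(albedo, al_si)[0, 1]       # correlation coefficient (0.78 reported above)
        return r, within, below, above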
Chun, Bo Young; Kwon, Soon Jae; Chae, Sun Hwa
2007-01-01
Purpose To evaluate changes in ocular alignment in partially accommodative esotropic children, aged 3 to 8 years, during occlusion therapy for amblyopia. Methods Angle measurements of twenty-two partially accommodative esotropic patients with moderate amblyopia were evaluated before and at 2 years after occlusion therapy. Results Mean deviation angle with glasses at the start of occlusion treatment was 19.45±5.97 PD and decreased to 12.14±12.96 PD at 2 years after occlusion therapy (p<0.01). After occlusion therapy, 9 (41%) patients remained surgical candidates for residual deviation, whereas if surgery had been planned before occlusion treatment, 18 (82%) of the patients would have had surgery. There was a statistical relationship between the increase in visual acuity ratio and the decrease in deviation angle (r=-0.479, p=0.024). Conclusions There was a significant reduction of the deviation angle of partially accommodative esotropic patients at 2 years after occlusion therapy. Our results suggest that occlusion therapy has an influence on ocular alignment in partially accommodative esotropic patients with amblyopia. PMID:17804922
Statistical wind analysis for near-space applications
NASA Astrophysics Data System (ADS)
Roney, Jason A.
2007-09-01
Statistical wind models were developed based on the existing observational wind data for near-space altitudes between 60 000 and 100 000 ft (18-30 km) above ground level (AGL) at two locations, Akron, OH, USA, and White Sands, NM, USA. These two sites are envisioned as playing a crucial role in the first flights of high-altitude airships. The analysis shown in this paper has not been previously applied to this region of the stratosphere for such an application. Standard statistics were compiled for these data, such as mean, median, maximum wind speed, and standard deviation, and the data were modeled with Weibull distributions. These statistics indicated that, on a yearly average, there is a lull or a “knee” in the wind between 65 000 and 72 000 ft AGL (20-22 km). From the standard statistics, trends at both locations indicated substantial seasonal variation in the mean wind speed at these heights. The yearly and monthly statistical modeling indicated that Weibull distributions were a reasonable model for the data. Forecasts and hindcasts were done by using a Weibull model based on 2004 data and comparing the model with the 2003 and 2005 data. The 2004 distribution was also a reasonable model for these years. Lastly, the Weibull distribution and cumulative function were used to predict the 50%, 95%, and 99% winds, which are directly related to the expected power requirements of a near-space station-keeping airship. These values indicated that using only the standard deviation of the mean may underestimate the operational conditions.
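A brief Python sketch of the Weibull modeling step, using a synthetic wind-speed sample in place of the Akron or White Sands data: fit a two-parameter Weibull and read off the 50th, 95th, and 99th percentiles that feed the station-keeping power estimate (NumPy and SciPy assumed).

    import numpy as np
    from scipy.stats import weibull_min

    rng = np.random.default_rng(2)
    wind = 12.0 * rng.weibull(2.0, size=2000)            # synthetic wind speeds, m/s

    shape, loc, scale = weibull_min.fit(wind, floc=0)    # two-parameter Weibull fit
    print(f"mean = {wind.mean():.1f} m/s, std = {wind.std(ddof=1):.1f} m/s")
    for p in (0.50, 0.95, 0.99):
        print(p, round(weibull_min.ppf(p, shape, loc, scale), 1), "m/s")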
Crossing statistics of laser light scattered through a nanofluid.
Arshadi Pirlar, M; Movahed, S M S; Razzaghi, D; Karimzadeh, R
2017-09-01
In this paper, we investigate the crossing statistics of speckle patterns formed in the Fresnel diffraction region by a laser beam scattering through a nanofluid. We extend zero-crossing statistics to assess the dynamical properties of the nanofluid. According to the joint probability density function of the laser beam fluctuation and its time derivative, the theoretical frameworks for Gaussian and non-Gaussian regimes are revisited. We count the number of crossings not only at the zero level but also for all available thresholds to determine the average speed of the moving particles. Because the crossing statistics are determined within a probabilistic framework, Gaussianity is not assumed a priori; therefore, even in the presence of deviations from Gaussian fluctuations, this modified approach is capable of computing relevant quantities, such as the mean speed, more precisely. Generalized total crossing, which represents the weighted summation of crossings for all thresholds to quantify small deviations from Gaussian statistics, is introduced. This criterion can also manage the contribution of noise and trends in order to infer reliable physical quantities. The characteristic time scale for having successive crossings at a given threshold is defined. In our experimental setup, we find that increasing the sample temperature leads to more consistency between Gaussian and perturbative non-Gaussian predictions. The maximum number of crossings does not necessarily occur at the mean level, indicating that we should take into account other levels in addition to the zero level to achieve more accurate assessments.
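The counting step generalized beyond the zero level can be sketched in a few lines of Python: count up-crossings of a record at a whole set of thresholds. This is only the bookkeeping piece, not the full weighting used in the generalized total-crossing statistic.

    import numpy as np

    def up_crossings(signal, thresholds):
        x = np.asarray(signal, dtype=float)
        return np.array([int(np.sum((x[:-1] < u) & (x[1:] >= u))) for u in thresholds])

    t = np.linspace(0, 10, 5000)
    sig = np.sin(2 * np.pi * 1.3 * t) + 0.3 * np.random.default_rng(3).standard_normal(t.size)
    levels = np.linspace(sig.min(), sig.max(), 21)       # thresholds, not just the mean level
    print(up_crossings(sig, levels))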
Yan, Binjun; Fang, Zhonghua; Shen, Lijuan; Qu, Haibin
2015-01-01
The batch-to-batch quality consistency of herbal drugs has always been an important issue. To propose a methodology for batch-to-batch quality control based on HPLC-MS fingerprints and process knowledgebase. The extraction process of Compound E-jiao Oral Liquid was taken as a case study. After establishing the HPLC-MS fingerprint analysis method, the fingerprints of the extract solutions produced under normal and abnormal operation conditions were obtained. Multivariate statistical models were built for fault detection and a discriminant analysis model was built using the probabilistic discriminant partial-least-squares method for fault diagnosis. Based on multivariate statistical analysis, process knowledge was acquired and the cause-effect relationship between process deviations and quality defects was revealed. The quality defects were detected successfully by multivariate statistical control charts and the type of process deviations were diagnosed correctly by discriminant analysis. This work has demonstrated the benefits of combining HPLC-MS fingerprints, process knowledge and multivariate analysis for the quality control of herbal drugs. Copyright © 2015 John Wiley & Sons, Ltd.
Long-term changes (1980-2003) in total ozone time series over Northern Hemisphere midlatitudes
NASA Astrophysics Data System (ADS)
Białek, Małgorzata
2006-03-01
Long-term changes in total ozone time series for the Arosa, Belsk, Boulder and Sapporo stations are examined. For each station we analyze time series of the following statistical characteristics of the distribution of daily ozone data: seasonal mean, standard deviation, maximum and minimum of total daily ozone values for all seasons. An iterative statistical model is proposed to estimate trends and long-term changes in the statistical distribution of the daily total ozone data. The trends are calculated for the period 1980-2003. We observe a lessening of the negative trends in the seasonal means as compared to those calculated by WMO for 1980-2000. We discuss the possibility of a change in the distribution shape of daily ozone data using the Kolmogorov-Smirnov test and by comparing trend values in the seasonal mean, standard deviation, maximum and minimum time series for the selected stations and seasons. A distribution shift toward lower values without a change in the distribution shape is suggested, with the following exceptions: a spreading of the distribution toward lower values for Belsk during winter, and no decisive result for Sapporo and Boulder in summer.
Multi-year slant path rain fade statistics at 28.56 and 19.04 GHz for Wallops Island, Virginia
NASA Technical Reports Server (NTRS)
Goldhirsh, J.
1979-01-01
Multiyear rain fade statistics at 28.56 GHz and 19.04 GHz were compiled for the region of Wallops Island, Virginia covering the time periods 1 April 1977 through 31 March 1978 and 1 September 1978 through 31 August 1979. The 28.56 GHz attenuations were derived by monitoring the beacon signals from the COMSTAR geosynchronous satellite D sub 2 during the first year, and satellite D sub 3 during the second year. Although 19.04 GHz beacons exist aboard these satellites, statistics at this frequency were predicted using the 28 GHz fade data, the measured rain rate distribution, and effective path length concepts. The prediction method used was tested against radar-derived fade distributions and excellent agreement was noted. For example, the rms deviations between the predicted and test distributions were less than or equal to 0.2 dB or 4% at 19.04 GHz. The average ratio between the 28.56 GHz and 19.04 GHz fades was also derived for equal percentages of time, resulting in a factor of 2.1 with a standard deviation of 0.05.
Naseri, Mandana; Safi, Yaser; Akbarzadeh Baghban, Alireza; Khayat, Akbar; Eftekhar, Leila
2016-01-01
Introduction: The purpose of this study was to investigate the root and canal morphology of maxillary first molars with regard to patients’ age and gender with cone-beam computed tomography (CBCT). Methods and Materials: A total of 149 CBCT scans from 92 (61.7%) female and 57 (38.3%) male patients with a mean age of 40.5 years were evaluated. Tooth length, presence of root fusion, number of roots and canals, canal types based on Vertucci’s classification, deviation of the root and apical foramen in the coronal and sagittal planes, and the correlation of all items with gender and age were recorded. The Mann-Whitney U, Kruskal-Wallis and Fisher’s exact tests were used to analyze these items. Results: The rate of root fusion was 1.3%. Multiple canals were present in the following frequencies: four canals 78.5%, five canals 11.4% and three canals 10.1%. An additional canal was detected in 86.6% of mesiobuccal roots, in which Vertucci’s type VI configuration was the most prevalent, followed by types II and I. Type I was the most common in distobuccal and palatal roots. There was no statistically significant difference in the canal configurations in relation to gender and age, nor in the incidence of root or canal numbers (P>0.05). The mean tooth length was 19.3 mm in female and 20.3 mm in male patients, which was a statistically significant difference (P<0.05). Evaluation of root deviation showed that the most common general pattern was straight-distal for the mesiobuccal root and straight-straight for the distobuccal and palatal roots. In mesiobuccal roots, straight and distal deviations were more dominant in males and females, respectively (P<0.05). The prevalence of apical foramen deviation in mesiobuccal and palatal roots differed significantly with gender. Conclusion: The root and canal configuration of this Iranian population showed different features from those of other populations. PMID:27790259
Large-deviation joint statistics of the finite-time Lyapunov spectrum in isotropic turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Perry L., E-mail: pjohns86@jhu.edu; Meneveau, Charles
2015-08-15
One of the hallmarks of turbulent flows is the chaotic behavior of fluid particle paths with exponentially growing separation among them while their distance does not exceed the viscous range. The maximal (positive) Lyapunov exponent represents the average strength of the exponential growth rate, while fluctuations in the rate of growth are characterized by the finite-time Lyapunov exponents (FTLEs). In the last decade or so, the notion of Lagrangian coherent structures (which are often computed using FTLEs) has gained attention as a tool for visualizing coherent trajectory patterns in a flow and distinguishing regions of the flow with different mixing properties. A quantitative statistical characterization of FTLEs can be accomplished using the statistical theory of large deviations, based on the so-called Cramér function. To obtain the Cramér function from data, we use both the method based on measuring moments and measuring histograms and introduce a finite-size correction to the histogram-based method. We generalize the existing univariate formalism to the joint distributions of the two FTLEs needed to fully specify the Lyapunov spectrum in 3D flows. The joint Cramér function of turbulence is measured from two direct numerical simulation datasets of isotropic turbulence. Results are compared with joint statistics of FTLEs computed using only the symmetric part of the velocity gradient tensor, as well as with joint statistics of instantaneous strain-rate eigenvalues. When using only the strain contribution of the velocity gradient, the maximal FTLE nearly doubles in magnitude, highlighting the role of rotation in de-correlating the fluid deformations along particle paths. We also extend the large-deviation theory to study the statistics of the ratio of FTLEs. The most likely ratio of the FTLEs λ₁ : λ₂ : λ₃ is shown to be about 4:1:−5, compared to about 8:3:−11 when using only the strain-rate tensor for calculating fluid volume deformations. The results serve to characterize the fundamental statistical and geometric structure of turbulence at small scales including cumulative, time integrated effects. These are important for deformable particles such as droplets and polymers advected by turbulence.
1981-09-15
Deviation: Standard deviation of the detrended phase component is calculated as ⟨φ²⟩^(1/2) in radians, as measured at the receiver output, and not corrected for... next section, were calculated they were corrected for the finite receiver reference frequency of f ≈ 402 MHz in the following manner. Assuming a... for quiet and disturbed times. The position of the geometrical enhancement for individual cases is between 60-61° Λ rather than between 63-64° Λ as
Discrete disorder models for many-body localization
NASA Astrophysics Data System (ADS)
Janarek, Jakub; Delande, Dominique; Zakrzewski, Jakub
2018-04-01
Using exact diagonalization technique, we investigate the many-body localization phenomenon in the 1D Heisenberg chain comparing several disorder models. In particular we consider a family of discrete distributions of disorder strengths and compare the results with the standard uniform distribution. Both statistical properties of energy levels and the long time nonergodic behavior are discussed. The results for different discrete distributions are essentially identical to those obtained for the continuous distribution, provided the disorder strength is rescaled by the standard deviation of the random distribution. Only for the binary distribution significant deviations are observed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Solaimani, Mohiuddin; Iftekhar, Mohammed; Khan, Latifur
Anomaly detection refers to the identification of an irregular or unusual pattern which deviates from what is standard, normal, or expected. Such deviated patterns typically correspond to samples of interest and are assigned different labels in different domains, such as outliers, anomalies, exceptions, or malware. Detecting anomalies in fast, voluminous streams of data is a formidable challenge. This paper presents a novel, generic, real-time distributed anomaly detection framework for heterogeneous streaming data where anomalies appear as a group. We have developed a distributed statistical approach to build a model and later use it to detect anomalies. As a case study, we investigate group anomaly detection for a VMware-based cloud data center, which maintains a large number of virtual machines (VMs). We have built our framework using Apache Spark to get higher throughput and lower data processing time on streaming data. We have developed a window-based statistical anomaly detection technique to detect anomalies that appear sporadically. We then relaxed this constraint with higher accuracy by implementing a cluster-based technique to detect sporadic and continuous anomalies. We conclude that our cluster-based technique outperforms other statistical techniques with higher accuracy and lower processing time.
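A minimal Python sketch of the window-based statistical idea (the Spark distribution, group-anomaly logic, and cluster-based refinement are omitted): keep a sliding window of recent values and flag an observation that deviates from the window mean by more than k standard deviations.

    from collections import deque
    import math

    def stream_anomalies(stream, window=50, k=3.0):
        buf = deque(maxlen=window)
        flags = []
        for x in stream:
            if len(buf) == window:
                m = sum(buf) / window
                s = math.sqrt(sum((v - m) ** 2 for v in buf) / (window - 1))
                flags.append(s > 0 and abs(x - m) > k * s)
            else:
                flags.append(False)        # not enough history yet
            buf.append(x)
        return flags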
JAN transistor and diode characterization test program, JANTX diode 1N5619
NASA Technical Reports Server (NTRS)
Takeda, H.
1977-01-01
A statistical summary of electrical characterization was performed on JANTX 1N5619 silicon diodes. Parameters are presented with test conditions, mean, standard deviation, lowest reading, 10% point, 90% point, and highest reading.
A two-component rain model for the prediction of attenuation and diversity improvement
NASA Technical Reports Server (NTRS)
Crane, R. K.
1982-01-01
A new model was developed to predict attenuation statistics for a single Earth-satellite or terrestrial propagation path. The model was extended to provide predictions of the joint occurrences of specified or higher attenuation values on two closely spaced Earth-satellite paths. The joint statistics provide the information required to obtain diversity gain or diversity advantage estimates. The new model is meteorologically based. It was tested against available Earth-satellite beacon observations and terrestrial path measurements. The model employs the rain climate region descriptions of the Global rain model. The rms deviation between the predicted and observed attenuation values for the terrestrial path data was 35 percent, a result consistent with the expectations of the Global model when the rain rate distribution for the path is not used in the calculation. Within the United States the rms deviation between measurement and prediction was 36 percent but worldwide it was 79 percent.
A Monte Carlo Simulation Study of the Reliability of Intraindividual Variability
Estabrook, Ryne; Grimm, Kevin J.; Bowles, Ryan P.
2012-01-01
Recent research has seen intraindividual variability (IIV) become a useful technique to incorporate trial-to-trial variability into many types of psychological studies. IIV as measured by individual standard deviations (ISDs) has shown unique predictive value for several types of positive and negative outcomes (Ram, Rabbitt, Stollery, & Nesselroade, 2005). One unanswered question regarding measuring intraindividual variability is its reliability and the conditions under which optimal reliability is achieved. Monte Carlo simulation studies were conducted to determine the reliability of the ISD compared to the intraindividual mean. The results indicate that ISDs generally have poor reliability and are sensitive to insufficient measurement occasions, poor test reliability, and unfavorable amounts and distributions of variability in the population. Secondary analysis of psychological data shows that use of individual standard deviations in unfavorable conditions leads to a marked reduction in statistical power, although careful adherence to underlying statistical assumptions allows their use as a basic research tool. PMID:22268793
Cape Canaveral, Florida range reference atmosphere 0-70 km altitude
NASA Technical Reports Server (NTRS)
Tingle, A. (Editor)
1983-01-01
The RRA contains tabulations for monthly and annual means, standard deviations, skewness coefficients for wind speed, pressure temperature, density, water vapor pressure, virtual temperature, dew-point temperature, and the means and standard deviations for the zonal and meridional wind components and the linear (product moment) correlation coefficient between the wind components. These statistical parameters are tabulated at the station elevation and at 1 km intervals from sea level to 30 km and at 2 km intervals from 30 to 90 km altitude. The wind statistics are given at approximately 10 m above the station elevations and at altitudes with respect to mean sea level thereafter. For those range sites without rocketsonde measurements, the RRAs terminate at 30 km altitude or they are extended, if required, when rocketsonde data from a nearby launch site are available. There are four sets of tables for each of the 12 monthly reference periods and the annual reference period.
Employing Sensitivity Derivatives for Robust Optimization under Uncertainty in CFD
NASA Technical Reports Server (NTRS)
Newman, Perry A.; Putko, Michele M.; Taylor, Arthur C., III
2004-01-01
A robust optimization is demonstrated on a two-dimensional inviscid airfoil problem in subsonic flow. Given uncertainties in statistically independent, random, normally distributed flow parameters (input variables), an approximate first-order statistical moment method is employed to represent the Computational Fluid Dynamics (CFD) code outputs as expected values with variances. These output quantities are used to form the objective function and constraints. The constraints are cast in probabilistic terms; that is, the probability that a constraint is satisfied is greater than or equal to some desired target probability. Gradient-based robust optimization of this stochastic problem is accomplished through use of both first and second-order sensitivity derivatives. For each robust optimization, the effect of increasing both input standard deviations and target probability of constraint satisfaction are demonstrated. This method provides a means for incorporating uncertainty when considering small deviations from input mean values.
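The first-order moment method referred to above can be sketched in Python as follows; f and its gradient here are toy stand-ins for a CFD output and its sensitivity derivatives, and the inputs are taken as independent normal variables.

    import numpy as np

    def first_order_moments(f, grad_f, mu, sigma):
        mu = np.asarray(mu, dtype=float)
        sigma = np.asarray(sigma, dtype=float)
        mean_out = f(mu)                               # expected value approximated at the input means
        var_out = np.sum((grad_f(mu) * sigma) ** 2)    # variance from gradient-weighted input variances
        return mean_out, np.sqrt(var_out)

    # Toy "CFD output" depending on two uncertain flow parameters.
    f = lambda x: 0.02 + 0.5 * x[0] ** 2 + 0.1 * x[0] * x[1]
    grad_f = lambda x: np.array([x[0] + 0.1 * x[1], 0.1 * x[0]])
    print(first_order_moments(f, grad_f, mu=[0.8, 2.0], sigma=[0.01, 0.1]))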
Levine, Judah
2016-01-01
A method is presented for synchronizing the time of a clock to a remote time standard when the channel connecting the two has significant delay variation that can be described only statistically. The method compares the Allan deviation of the channel fluctuations to the free-running stability of the local clock, and computes the optimum interval between requests based on one of three selectable requirements: (1) choosing the highest possible accuracy, (2) choosing the best tradeoff of cost vs. accuracy, or (3) minimizing the number of requests to realize a specific accuracy. Once the interval between requests is chosen, the final step is to steer the local clock based on the received data. A typical adjustment algorithm, which supports both the statistical considerations based on the Allan deviation comparison and the timely detection of errors is included as an example. PMID:26529759
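The comparison between channel fluctuations and clock stability rests on the Allan deviation; a minimal (non-overlapping) Python estimate of it from fractional-frequency samples is sketched below with synthetic white-noise data.

    import numpy as np

    def allan_deviation(y, m=1):
        """Allan deviation of fractional-frequency data y averaged over m samples."""
        y = np.asarray(y, dtype=float)
        n = y.size // m
        yb = y[:n * m].reshape(n, m).mean(axis=1)    # averages over tau = m * tau0
        d = np.diff(yb)
        return np.sqrt(0.5 * np.mean(d * d))

    rng = np.random.default_rng(4)
    y = 1e-11 * rng.standard_normal(10_000)          # synthetic white frequency noise
    print([allan_deviation(y, m) for m in (1, 10, 100)])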
NASA Technical Reports Server (NTRS)
Nastrom, G. D.; Jasperson, W. H.
1983-01-01
Temperature data obtained by the Global Atmospheric Sampling Program (GASP) during the period March 1975 to July 1979 are compiled to form flight summaries of static air temperature and a geographic temperature climatology. The flight summaries include the height and location of the coldest observed temperature and the mean flight level, temperature, and the standard deviation of temperature for each flight as well as for flight segments. These summaries are ordered by route and month. The temperature climatology was computed for all statistically independent temperature data for each flight. The grid used consists of 5 deg latitude, 30 deg longitude and 2000 feet vertical resolution from FL270 to FL430 for each month of the year. The number of statistically independent observations, their mean, standard deviation and the empirical 98, 50, 16, 2 and 0.3 probability percentiles are presented.
Ku-band radar threshold analysis
NASA Technical Reports Server (NTRS)
Weber, C. L.; Polydoros, A.
1979-01-01
The statistics of the CFAR threshold for the Ku-band radar were determined. Exact analytical results were developed for both the mean and the standard deviation in the designated search mode. The mean value is compared to the results of a previously reported simulation. The analytical results are more optimistic than the simulation results, for which no explanation is offered. The normalized standard deviation is shown to be very sensitive to signal-to-noise ratio and very insensitive to the noise correlation present in the range gates of the designated search mode. The substantial variation in the CFAR threshold is dominant at large values of SNR, where the normalized standard deviation is greater than 0.3. Whether or not this significantly affects the resulting probability of detection is a matter which deserves additional attention.
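For reference, a generic cell-averaging CFAR threshold can be sketched in Python as below; this is a textbook form for square-law-detected exponential noise and does not reproduce the Ku-band radar's designated search mode or its gate correlation.

    import numpy as np

    def ca_cfar_threshold(power, guard=2, ref=8, pfa=1e-4):
        n = 2 * ref                                      # total reference cells
        alpha = n * (pfa ** (-1.0 / n) - 1.0)            # scale factor for exponential noise
        thresh = np.full(power.shape, np.inf)
        for i in range(ref + guard, len(power) - ref - guard):
            left = power[i - ref - guard:i - guard]
            right = power[i + guard + 1:i + guard + 1 + ref]
            thresh[i] = alpha * (left.sum() + right.sum()) / n
        return thresh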
Hart, John
2011-03-01
This study describes a model for statistically analyzing follow-up numeric-based chiropractic spinal assessments for an individual patient based on his or her own baseline. Ten mastoid fossa temperature differential readings (MFTD) obtained from a chiropractic patient were used in the study. The first eight readings served as baseline and were compared to post-adjustment readings. One of the two post-adjustment MFTD readings fell outside two standard deviations of the baseline mean and therefore theoretically represents improvement according to pattern analysis theory. This study showed how standard deviation analysis may be used to identify future outliers for an individual patient based on his or her own baseline data. Copyright © 2011 National University of Health Sciences. Published by Elsevier Inc. All rights reserved.
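A minimal sketch of the baseline comparison described above, flagging a follow-up reading that falls outside two standard deviations of the patient's own baseline mean; the readings below are hypothetical and do not reproduce the study's data.

```python
import statistics

# Illustrative values only (not the study's data): eight baseline MFTD
# readings and two post-adjustment readings for one patient.
baseline = [0.42, 0.38, 0.45, 0.40, 0.44, 0.39, 0.41, 0.43]
followup = [0.40, 0.22]

mean = statistics.mean(baseline)
sd = statistics.stdev(baseline)              # sample standard deviation
lower, upper = mean - 2 * sd, mean + 2 * sd  # patient-specific 2 SD band

for reading in followup:
    outlier = reading < lower or reading > upper
    print(f"{reading:.2f}: {'outside' if outlier else 'within'} mean ± 2 SD")
```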
Lensing corrections to the Eg(z) statistics from large scale structure
NASA Astrophysics Data System (ADS)
Moradinezhad Dizgah, Azadeh; Durrer, Ruth
2016-09-01
We study the impact of the often neglected lensing contribution to galaxy number counts on the Eg statistic, which is used to constrain deviations from GR. This contribution affects both the galaxy-galaxy and the convergence-galaxy spectra, and it is larger for the latter. At the higher redshifts probed by upcoming surveys, for instance at z = 1.5, neglecting this term induces an error of (25-40)% in the spectra and therefore in the Eg statistic, which is constructed from the combination of the two. Moreover, including it renders the Eg statistic scale- and bias-dependent and hence calls into question its very objective.
Probability distributions of linear statistics in chaotic cavities and associated phase transitions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vivo, Pierpaolo; Majumdar, Satya N.; Bohigas, Oriol
2010-03-01
We establish large deviation formulas for linear statistics on the N transmission eigenvalues T_i of a chaotic cavity, in the framework of random matrix theory. Given any linear statistic of interest A = Σ_{i=1}^{N} a(T_i), the probability distribution P_A(A,N) of A generically satisfies the large deviation formula lim_{N→∞} [−2 log P_A(Nx,N)/(βN²)] = Ψ_A(x), where Ψ_A(x) is a rate function that we compute explicitly in many cases (conductance, shot noise, and moments) and β corresponds to the different symmetry classes. Using these large deviation expressions, it is possible to recover easily known results and to produce new formulas, such as a closed-form expression for v(n) = lim_{N→∞} var(T_n) (where T_n = Σ_i T_i^n) for arbitrary integer n. The universal limit v* = lim_{n→∞} v(n) = 1/(2πβ) is also computed exactly. The distributions display a central Gaussian region flanked on both sides by non-Gaussian tails. At the junction of the two regimes, weakly nonanalytical points appear, a direct consequence of phase transitions in an associated Coulomb gas problem. Numerical checks are also provided, which are in full agreement with our asymptotic results in both real and Laplace space even for moderately small N. Part of the results have been announced by Vivo et al. [Phys. Rev. Lett. 101, 216809 (2008)].
Ionospheric Anomalies on the day of the Devastating Earthquakes during 2000-2012
NASA Astrophysics Data System (ADS)
Su, Fanfan; Zhou, Yiyan; Zhu, Fuying
2013-04-01
The study of abnormal ionospheric changes during large earthquakes has attracted much attention for many years. Many papers have reported deviations of Total Electron Content (TEC) around the epicenter. Statistical analysis concludes that anomalous TEC behavior is related to earthquakes with high probability [1], but specific cases show different features [2][3]. In this study, we carry out a new statistical analysis to investigate the nature of the ionospheric anomalies during devastating earthquakes. To characterize abnormal changes of the ionospheric TEC, we examined the TEC database from the Global Ionosphere Map (GIM). The GIM ( ftp://cddisa.gsfc.nasa.gov/pub/gps/products/ionex) draws on about 200 worldwide ground-based GPS receivers. The TEC data, with a resolution of 5° longitude and 2.5° latitude, are routinely published at a 2-h time interval. Earthquake information is obtained from the USGS ( http://earthquake.usgs.gov/earthquakes/eqarchives/epic/). To avoid interference from magnetic storms, days with Dst ≤ -20 nT are excluded. Finally, a total of 13 M≥8.0 earthquakes worldwide during 2000-2012 are selected. The 27 days before the main shock are treated as background days. Here, the 27-day TEC median (Me) and standard deviation (σ) are used to detect the variation of TEC. We set the upper bound BU = Me + 3*σ and the lower bound BL = Me - 3*σ; the probability of a new TEC value lying in the interval (BL, BU) is therefore approximately 99.7%. If TEC lies between BL and BU, the deviation (DTEC) equals zero; otherwise, the deviation between TEC and the nearest bound is calculated as DTEC = BU/BL - TEC. From these deviations, the positive and negative abnormal changes of TEC can be evaluated. We investigate temporal and spatial signatures of the ionospheric anomalies on the day of the devastating earthquakes (M≥8.0). The results show that the occurrence rates of positive and negative anomalies are almost equal. The most significant anomaly of the day may occur at a time very close to the main shock, but this is not always the case. The positions of the maximal deviations always deviate from the epicenter; the direction may be southeast, southwest, northeast, or northwest with almost equal probability. The anomalies may move toward the epicenter, deviate in any direction, or stay at the same position and gradually fade out. No significant feature, such as occurrence time, position, or motion, indicates the source of the anomalies. References: [1] Le, H., J. Y. Liu, et al. (2011). "A statistical analysis of ionospheric anomalies before 736 M6.0+ earthquakes during 2002-2010." J. Geophys. Res. 116. [2] Liu, J. Y., Y. I. Chen, et al. (2009). "Seismoionospheric GPS total electron content anomalies observed before the 12 May 2008 Mw7.9 Wenchuan earthquake." J. Geophys. Res. 114. [3] Rolland, L. M., P. Lognonne, et al. (2011). "Detection and modeling of Rayleigh wave induced patterns in the ionosphere." J. Geophys. Res. 116.
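The median-plus-three-sigma bound rule described in this abstract can be sketched as follows. The data are synthetic, and the deviation is reported here as the excess beyond the violated bound (so that positive anomalies come out positive), a slight rearrangement of the abstract's DTEC = BU/BL - TEC.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic example: 27 background days of TEC at one grid cell and one 2-h slot,
# followed by the value observed on the day of the earthquake (all in TECU, illustrative).
background = rng.normal(25.0, 2.0, size=27)
observed = 34.0

me = np.median(background)                # 27-day median
sigma = np.std(background, ddof=1)        # 27-day standard deviation
bu, bl = me + 3 * sigma, me - 3 * sigma   # upper / lower bounds (~99.7% interval)

if bl <= observed <= bu:
    dtec = 0.0                            # within the bounds: no anomaly
elif observed > bu:
    dtec = observed - bu                  # positive anomaly (amount above the upper bound)
else:
    dtec = observed - bl                  # negative anomaly (amount below the lower bound)

print(round(me, 2), round(sigma, 2), round(dtec, 2))
```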
Generic dynamical phase transition in one-dimensional bulk-driven lattice gases with exclusion
NASA Astrophysics Data System (ADS)
Lazarescu, Alexandre
2017-06-01
Dynamical phase transitions are crucial features of the fluctuations of statistical systems, corresponding to boundaries between qualitatively different mechanisms of maintaining unlikely values of dynamical observables over long periods of time. They manifest themselves in the form of non-analyticities in the large deviation function of those observables. In this paper, we look at bulk-driven exclusion processes with open boundaries. It is known that the standard asymmetric simple exclusion process exhibits a dynamical phase transition in the large deviations of the current of particles flowing through it. That phase transition has been described thanks to specific calculation methods relying on the model being exactly solvable, but more general methods have also been used to describe the extreme large deviations of that current, far from the phase transition. We extend those methods to a large class of models based on the ASEP, where we add arbitrary spatial inhomogeneities in the rates and short-range potentials between the particles. We show that, as for the regular ASEP, the large deviation function of the current scales differently with the size of the system if one considers very high or very low currents, pointing to the existence of a dynamical phase transition between those two regimes: high-current large deviations are extensive in the system size, and the typical states associated with them are Coulomb gases, which are highly correlated; low-current large deviations do not depend on the system size, and the typical states associated with them are anti-shocks, consistent with hydrodynamic behaviour. Finally, we illustrate our results numerically on a simple example, and we interpret the transition in terms of the current pushing beyond its maximal hydrodynamic value, as well as relate it to the appearance of Tracy-Widom distributions in the relaxation statistics of such models.
Nick, Todd G
2007-01-01
Statistics is defined by the Medical Subject Headings (MeSH) thesaurus as the science and art of collecting, summarizing, and analyzing data that are subject to random variation. The two broad categories of summarizing and analyzing data are referred to as descriptive and inferential statistics. This chapter considers the science and art of summarizing data where descriptive statistics and graphics are used to display data. In this chapter, we discuss the fundamentals of descriptive statistics, including describing qualitative and quantitative variables. For describing quantitative variables, measures of location and spread, for example the standard deviation, are presented along with graphical presentations. We also discuss distributions of statistics, for example the variance, as well as the use of transformations. The concepts in this chapter are useful for uncovering patterns within the data and for effectively presenting the results of a project.
Statistical models for estimating daily streamflow in Michigan
Holtschlag, D.J.; Salehi, Habib
1992-01-01
Statistical models for estimating daily streamflow were analyzed for 25 pairs of streamflow-gaging stations in Michigan. Stations were paired by randomly choosing a station operated in 1989 at which 10 or more years of continuous flow data had been collected and at which flow is virtually unregulated; a nearby station was chosen where flow characteristics are similar. Streamflow data from the 25 randomly selected stations were used as the response variables; streamflow data at the nearby stations were used to generate a set of explanatory variables. Ordinary least-squares regression (OLSR) equations, autoregressive integrated moving-average (ARIMA) equations, and transfer function-noise (TFN) equations were developed to estimate the log transform of flow for the 25 randomly selected stations. The precision of each type of equation was evaluated on the basis of the standard deviation of the estimation errors. OLSR equations produce one set of estimation errors; ARIMA and TFN models each produce l sets of estimation errors corresponding to the forecast lead. The lead-l forecast is the estimate of flow l days ahead of the most recent streamflow used as a response variable in the estimation. In this analysis, the standard deviations of lead-l ARIMA and TFN forecast errors were generally lower than the standard deviation of OLSR errors for l < 2 days and l < 9 days, respectively. Composite estimates were computed as a weighted average of forecasts based on TFN equations and backcasts (forecasts of the reverse-ordered series) based on ARIMA equations. The standard deviation of composite errors varied throughout the length of the estimation interval and generally was at a maximum near the center of the interval. For comparison with OLSR errors, the mean standard deviation of composite errors was computed for intervals of length 1 to 40 days. The mean standard deviation of length-l composite errors was generally less than the standard deviation of the OLSR errors for l < 32 days. In addition, the composite estimates ensure a gradual transition between periods of estimated and measured flows. Model performance among stations of differing model error magnitudes was compared by computing ratios of the mean standard deviation of the length-l composite errors to the standard deviation of OLSR errors. The mean error ratio for the set of 25 selected stations was less than 1 for intervals l < 32 days. Considering the frequency characteristics of the length of intervals of estimated record in Michigan, the effective mean error ratio for intervals < 30 days was 0.52. Thus, for intervals of estimation of 1 month or less, the error of the composite estimate is substantially lower than the error of the OLSR estimate.
Jansma, J Martijn; de Zwart, Jacco A; van Gelderen, Peter; Duyn, Jeff H; Drevets, Wayne C; Furey, Maura L
2013-01-01
Technical developments in MRI have improved signal to noise, allowing the use of analysis methods such as finite impulse response (FIR) analysis of rapid event-related functional MRI (er-fMRI). FIR is one of the most informative analysis methods, as it determines the onset and full shape of the hemodynamic response function (HRF) without any a priori assumptions. FIR is, however, vulnerable to multicollinearity, which is directly related to the distribution of stimuli over time. Efficiency can be optimized by simplifying a design and restricting the stimulus distribution to specific sequences, while more design flexibility necessarily reduces efficiency. However, the actual effect of efficiency on fMRI results has never been tested in vivo. Thus, it is currently difficult to make an informed choice between protocol flexibility and statistical efficiency. The main goal of this study was to assign concrete fMRI signal-to-noise values to the abstract scale of FIR statistical efficiency. Ten subjects repeated a perception task with five random and m-sequence-based protocols, with varying but, according to the literature, acceptable levels of multicollinearity. Results indicated substantial differences in signal standard deviation, with the level being a function of multicollinearity. Experiment protocols varied by up to 55.4% in standard deviation. Results confirm that the quality of fMRI in an FIR analysis can significantly and substantially vary with statistical efficiency. Our in vivo measurements can be used to aid in making an informed decision between freedom in protocol design and statistical efficiency. PMID:23473798
NASA Technical Reports Server (NTRS)
Goldhirsh, J.
1982-01-01
The first absolute rain fade distribution method described establishes absolute fade statistics at a given site by means of a sampled radar data base. The second method extrapolates absolute fade statistics from one location to another, given simultaneously measured fade and rain rate statistics at the former. Both methods employ similar conditional fade statistic concepts and long term rain rate distributions. Probability deviations in the 2-19% range, with an 11% average, were obtained upon comparison of measured and predicted levels at given attenuations. The extrapolation of fade distributions to other locations at 28 GHz showed very good agreement with measured data at three sites located in the continental temperate region.
Densely calculated facial soft tissue thickness for craniofacial reconstruction in Chinese adults.
Shui, Wuyang; Zhou, Mingquan; Deng, Qingqiong; Wu, Zhongke; Ji, Yuan; Li, Kang; He, Taiping; Jiang, Haiyan
2016-09-01
Craniofacial reconstruction (CFR) is used to recreate a likeness of original facial appearance for an unidentified skull; this technique has been applied in both forensics and archeology. Many CFR techniques rely on the average facial soft tissue thickness (FSTT) of anatomical landmarks, related to ethnicity, age, sex, body mass index (BMI), etc. Previous studies typically employed FSTT at sparsely distributed anatomical landmarks, where different landmark definitions may affect the contrasting results. In the present study, a total of 90,198 one-to-one correspondence skull vertices are established on 171 head CT-scans and the FSTT of each corresponding vertex is calculated (hereafter referred to as densely calculated FSTT) for statistical analysis and CFR. Basic descriptive statistics (i.e., mean and standard deviation) for densely calculated FSTT are reported separately according to sex and age. Results show that 76.12% of overall vertices indicate that the FSTT is greater in males than females, with the exception of vertices around the zygoma, zygomatic arch and mid-lateral orbit. These sex-related significant differences are found at 55.12% of all vertices and the statistically age-related significant differences are depicted between the three age groups at a majority of all vertices (73.31% for males and 63.43% for females). Five non-overlapping categories are given and the descriptive statistics (i.e., mean, standard deviation, local standard deviation and percentage) are reported. Multiple appearances are produced using the densely calculated FSTT of various age and sex groups, and a quantitative assessment is provided to examine how relevant the choice of FSTT is to increasing the accuracy of CFR. In conclusion, this study provides a new perspective in understanding the distribution of FSTT and the construction of a new densely calculated FSTT database for craniofacial reconstruction. Copyright © 2016. Published by Elsevier Ireland Ltd.
Scenarios for Motivating the Learning of Variability: An Example in Finances
ERIC Educational Resources Information Center
Cordani, Lisbeth K.
2013-01-01
This article explores an example in finances in order to motivate the random variable learning to the very beginners in statistics. In addition, it offers a relationship between standard deviation and range in a very specific situation.
NASA Astrophysics Data System (ADS)
Jacobson, Gloria; Rella, Chris; Farinas, Alejandro
2014-05-01
Technological advancement of instrumentation in atmospheric and other geoscience disciplines over the past decade has led to a shift from discrete sample analysis to continuous, in-situ monitoring. Standard error analysis used for discrete measurements is not sufficient to assess and compare the error contribution of noise and drift from continuous-measurement instruments, and a different statistical analysis approach should be applied. The Allan standard deviation analysis technique developed for atomic clock stability assessment by David W. Allan [1] can be effectively and gainfully applied to continuous measurement instruments. As an example, P. Werle et al. have applied these techniques to look at signal averaging for atmospheric monitoring by Tunable Diode-Laser Absorption Spectroscopy (TDLAS) [2]. This presentation will build on, and translate, prior foundational publications to provide contextual definitions and guidelines for the practical application of this analysis technique to continuous scientific measurements. The specific example of a Picarro G2401 Cavity Ringdown Spectroscopy (CRDS) analyzer used for continuous atmospheric monitoring of CO2, CH4 and CO will be used to define the basic features of the Allan deviation, assess factors affecting the analysis, and explore the translation from time series to Allan deviation plot for different types of instrument noise (white noise, linear drift, and interpolated data). In addition, the application of the Allan deviation to optimize and predict the performance of different calibration schemes will be presented. Even though this presentation will use the specific example of the Picarro G2401 CRDS Analyzer for atmospheric monitoring, the objective is to present the information such that it can be successfully applied to other instrument sets and disciplines. [1] D.W. Allan, "Statistics of Atomic Frequency Standards," Proc. IEEE, vol. 54, pp. 221-230, Feb 1966. [2] P. Werle, R. Mücke, F. Slemr, "The Limits of Signal Averaging in Atmospheric Trace-Gas Monitoring by Tunable Diode-Laser Absorption Spectroscopy (TDLAS)," Applied Physics B 57, pp. 131-139, April 1993.
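A minimal sketch of a non-overlapping Allan deviation computation for an evenly sampled series, illustrating the white-noise and drift behaviour mentioned above; it is not the specific processing used for the G2401 analyzer, and all numbers are synthetic.

```python
import numpy as np

def allan_deviation(y, dt, taus):
    """Non-overlapping Allan deviation of an evenly sampled series y
    (sample interval dt) at the requested averaging times taus."""
    y = np.asarray(y, dtype=float)
    adev = []
    for tau in taus:
        m = int(round(tau / dt))              # samples per averaging bin
        if m < 1 or len(y) // m < 2:
            adev.append(np.nan)
            continue
        n_bins = len(y) // m
        means = y[: n_bins * m].reshape(n_bins, m).mean(axis=1)
        diffs = np.diff(means)
        adev.append(np.sqrt(0.5 * np.mean(diffs ** 2)))
    return np.array(adev)

# White noise alone makes the Allan deviation fall roughly as tau^(-1/2);
# a slow linear drift makes it rise again at long averaging times.
rng = np.random.default_rng(1)
t = np.arange(100_000) * 1.0                        # 1 s sampling
y = rng.normal(0.0, 0.1, size=t.size) + 1e-7 * t    # white noise + drift (synthetic)
print(allan_deviation(y, dt=1.0, taus=[1, 10, 100, 1000, 10000]))
```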
Gauging Skills of Hospital Security Personnel: a Statistically-driven, Questionnaire-based Approach.
Rinkoo, Arvind Vashishta; Mishra, Shubhra; Rahesuddin; Nabi, Tauqeer; Chandra, Vidha; Chandra, Hem
2013-01-01
This study aims to gauge the technical and soft skills of the hospital security personnel so as to enable prioritization of their training needs. A cross sectional questionnaire based study was conducted in December 2011. Two separate predesigned and pretested questionnaires were used for gauging soft skills and technical skills of the security personnel. Extensive statistical analysis, including Multivariate Analysis (Pillai-Bartlett trace along with Multi-factorial ANOVA) and Post-hoc Tests (Bonferroni Test) was applied. The 143 participants performed better on the soft skills front with an average score of 6.43 and standard deviation of 1.40. The average technical skills score was 5.09 with a standard deviation of 1.44. The study avowed a need for formal hands on training with greater emphasis on technical skills. Multivariate analysis of the available data further helped in identifying 20 security personnel who should be prioritized for soft skills training and a group of 36 security personnel who should receive maximum attention during technical skills training. This statistically driven approach can be used as a prototype by healthcare delivery institutions worldwide, after situation specific customizations, to identify the training needs of any category of healthcare staff.
Gauging Skills of Hospital Security Personnel: a Statistically-driven, Questionnaire-based Approach
Rinkoo, Arvind Vashishta; Mishra, Shubhra; Rahesuddin; Nabi, Tauqeer; Chandra, Vidha; Chandra, Hem
2013-01-01
Objectives This study aims to gauge the technical and soft skills of the hospital security personnel so as to enable prioritization of their training needs. Methodology A cross sectional questionnaire based study was conducted in December 2011. Two separate predesigned and pretested questionnaires were used for gauging soft skills and technical skills of the security personnel. Extensive statistical analysis, including Multivariate Analysis (Pillai-Bartlett trace along with Multi-factorial ANOVA) and Post-hoc Tests (Bonferroni Test) was applied. Results The 143 participants performed better on the soft skills front with an average score of 6.43 and standard deviation of 1.40. The average technical skills score was 5.09 with a standard deviation of 1.44. The study avowed a need for formal hands on training with greater emphasis on technical skills. Multivariate analysis of the available data further helped in identifying 20 security personnel who should be prioritized for soft skills training and a group of 36 security personnel who should receive maximum attention during technical skills training. Conclusion This statistically driven approach can be used as a prototype by healthcare delivery institutions worldwide, after situation specific customizations, to identify the training needs of any category of healthcare staff. PMID:23559904
Silva, A F; Sarraguça, M C; Fonteyne, M; Vercruysse, J; De Leersnyder, F; Vanhoorne, V; Bostijn, N; Verstraeten, M; Vervaet, C; Remon, J P; De Beer, T; Lopes, J A
2017-08-07
A multivariate statistical process control (MSPC) strategy was developed for the monitoring of the ConsiGma™-25 continuous tablet manufacturing line. Thirty-five logged variables encompassing three major units, namely a twin-screw high-shear granulator, a fluid bed dryer, and a product control unit, were used to monitor the process. The MSPC strategy was based on principal component analysis of data acquired under normal operating conditions using a series of four process runs. Runs with imposed disturbances in the dryer air flow and temperature, in the granulator barrel temperature, speed and liquid mass flow, and in the powder dosing unit mass flow were utilized to evaluate the model's monitoring performance. The impact of the imposed deviations on process continuity was also evaluated using Hotelling's T² and Q residual statistics control charts. The influence of the individual process variables was assessed by analyzing contribution plots at specific time points. Results show that the imposed disturbances were all detected in both control charts. Overall, the MSPC strategy was successfully developed and applied. Additionally, deviations not associated with the imposed changes were detected, mainly in the granulator barrel temperature control. Copyright © 2017 Elsevier B.V. All rights reserved.
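The following sketch shows the generic form of a PCA-based MSPC model with Hotelling's T² and Q statistics for a new sample, assuming standardized logged variables; the data, the number of retained components, and the injected disturbance are illustrative, not the ConsiGma-25 data.

```python
import numpy as np

def fit_pca_mspc(X_noc, n_components):
    """Fit a PCA monitoring model on normal-operating-condition data
    (rows = time points, columns = logged process variables)."""
    mu, sd = X_noc.mean(axis=0), X_noc.std(axis=0, ddof=1)
    Z = (X_noc - mu) / sd
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:n_components].T                        # loadings
    lam = (S[:n_components] ** 2) / (len(Z) - 1)   # variances of the retained PCs
    return mu, sd, P, lam

def t2_q_statistics(x, mu, sd, P, lam):
    """Hotelling's T^2 and Q (squared prediction error) for one new sample."""
    z = (x - mu) / sd
    t = P.T @ z                                    # scores in the PC subspace
    t2 = float(np.sum(t ** 2 / lam))
    residual = z - P @ t                           # part not explained by the model
    q = float(residual @ residual)
    return t2, q

# Illustrative data standing in for 35 logged variables of a tablet line.
rng = np.random.default_rng(2)
X_noc = rng.normal(size=(500, 35))
mu, sd, P, lam = fit_pca_mspc(X_noc, n_components=5)

x_new = rng.normal(size=35)
x_new[10] += 6.0                                   # imposed disturbance in one variable
print(t2_q_statistics(x_new, mu, sd, P, lam))
```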
Distributional properties of relative phase in bimanual coordination.
James, Eric; Layne, Charles S; Newell, Karl M
2010-10-01
Studies of bimanual coordination have typically estimated the stability of coordination patterns through the use of the circular standard deviation of relative phase. The interpretation of this statistic depends upon the assumption of a von Mises distribution. The present study tested this assumption by examining the distributional properties of relative phase in three bimanual coordination patterns. There were significant deviations from the von Mises distribution due to differences in the kurtosis of distributions. The kurtosis depended upon the relative phase pattern performed, with leptokurtic distributions occurring in the in-phase and antiphase patterns and platykurtic distributions occurring in the 30° pattern. Thus, the distributional assumptions needed to validly and reliably use the standard deviation are not necessarily present in relative phase data though they are qualitatively consistent with the landscape properties of the intrinsic dynamics.
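A short sketch of the circular standard deviation of relative phase discussed above, together with an excess-kurtosis check of the kind used to detect leptokurtic or platykurtic departures from an assumed distribution; the phase samples are simulated, not experimental.

```python
import numpy as np
from scipy.stats import kurtosis

def circular_std(angles_rad):
    """Circular standard deviation sqrt(-2 ln R), with R the mean resultant length."""
    R = np.abs(np.mean(np.exp(1j * angles_rad)))
    return np.sqrt(-2.0 * np.log(R))

# Illustrative relative-phase samples around an in-phase (0 degree) pattern.
rng = np.random.default_rng(3)
rel_phase_deg = rng.normal(0.0, 12.0, size=200)

print(np.degrees(circular_std(np.radians(rel_phase_deg))))  # spread of the pattern, in degrees
print(kurtosis(rel_phase_deg))  # excess kurtosis: ~0 for normal, >0 leptokurtic, <0 platykurtic
```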
Preliminary analysis of hot spot factors in an advanced reactor for space electric power systems
NASA Technical Reports Server (NTRS)
Lustig, P. H.; Holms, A. G.; Davison, H. W.
1973-01-01
The maximum fuel pin temperature for nominal operation in an advanced power reactor is 1370 K. Because of possible nitrogen embrittlement of the clad, the fuel temperature was limited to 1622 K. Assuming simultaneous occurrence of the most adverse conditions, a deterministic analysis gave a maximum fuel temperature of 1610 K. A statistical analysis, using a synthesized estimate of the standard deviation for the highest fuel pin temperature, showed a probability of 0.015 that this pin exceeds the temperature limit according to the distribution-free Chebyshev inequality, and a probability that is virtually nil assuming a normal distribution. The latter assumption gives a 1463 K maximum temperature at 3 standard deviations, the usually assumed cutoff. Further, the distribution and standard deviation of the fuel-clad gap are the most significant contributors to the uncertainty in the fuel temperature.
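The contrast between the distribution-free and normal-distribution probabilities can be reproduced from the numbers quoted above; the standard deviation of roughly 31 K is inferred here from the reported 1463 K value at 3 standard deviations and is an assumption, not a figure stated in the abstract.

```python
from scipy.stats import norm

mean_temp = 1370.0   # K, nominal peak fuel pin temperature
limit = 1622.0       # K, temperature limit
sigma = 31.0         # K, inferred from the reported 1463 K at 3 standard deviations (assumption)

k = (limit - mean_temp) / sigma
p_chebyshev = 1.0 / k ** 2   # distribution-free bound on P(|T - mean| >= k*sigma)
p_normal = norm.sf(k)        # upper-tail probability assuming a normal distribution

print(round(k, 2), round(p_chebyshev, 3), p_normal)
# Roughly 0.015 from Chebyshev versus an essentially negligible normal-tail probability,
# matching the qualitative conclusion of the abstract.
```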
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steffen, Jason H.; Ford, Eric B.; Rowe, Jason F.
2012-09-10
We analyze the deviations of transit times from a linear ephemeris for the Kepler Objects of Interest (KOI) through quarter six of science data. We conduct two statistical tests for all KOIs and a related statistical test for all pairs of KOIs in multi-transiting systems. These tests identify several systems which show potentially interesting transit timing variations (TTVs). Strong TTV systems have been valuable for the confirmation of planets and their mass measurements. Many of the systems identified in this study should prove fruitful for detailed TTV studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steffen, Jason H.; /Fermilab; Ford, Eric B.
2012-01-01
We analyze the deviations of transit times from a linear ephemeris for the Kepler Objects of Interest (KOI) through Quarter six (Q6) of science data. We conduct two statistical tests for all KOIs and a related statistical test for all pairs of KOIs in multi-transiting systems. These tests identify several systems which show potentially interesting transit timing variations (TTVs). Strong TTV systems have been valuable for the confirmation of planets and their mass measurements. Many of the systems identified in this study should prove fruitful for detailed TTV studies.
NASA Technical Reports Server (NTRS)
Parrish, R. S.; Carter, M. C.
1974-01-01
This analysis utilizes computer simulation and statistical estimation. Realizations of stationary Gaussian stochastic processes with selected autocorrelation functions are computer simulated. Analysis of the simulated data revealed that the mean and the variance of a process were functionally dependent upon the autocorrelation parameter and crossing level. Using predicted values for the mean and standard deviation, the distribution parameters were estimated by the method of moments. Thus, given the autocorrelation parameter, crossing level, mean, and standard deviation of a process, the probability of exceeding the crossing level for a particular length of time was calculated.
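A rough sketch of the simulation idea: generate a stationary Gaussian AR(1) process with a chosen autocorrelation parameter and estimate, by Monte Carlo, how often the process stays above a crossing level for a given duration. The AR(1) form and the duration-counting rule are simplifying assumptions for illustration, not the exact procedure of the report.

```python
import numpy as np

def simulate_ar1(n, rho, sigma, rng):
    """Stationary Gaussian AR(1) process with autocorrelation parameter rho
    and marginal standard deviation sigma."""
    x = np.empty(n)
    x[0] = rng.normal(0.0, sigma)
    noise_sd = sigma * np.sqrt(1.0 - rho ** 2)
    for i in range(1, n):
        x[i] = rho * x[i - 1] + rng.normal(0.0, noise_sd)
    return x

def frac_exceed_for_duration(x, level, duration):
    """Fraction of time points at which the process has stayed above `level`
    for at least `duration` consecutive samples (a crude Monte Carlo estimate)."""
    run, hits = 0, 0
    for value in x:
        run = run + 1 if value > level else 0
        if run >= duration:
            hits += 1
    return hits / len(x)

rng = np.random.default_rng(4)
x = simulate_ar1(100_000, rho=0.9, sigma=1.0, rng=rng)
print(frac_exceed_for_duration(x, level=1.5, duration=10))
```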
On the thermopower and thermomagnetic properties of Er_x Sn_(1-x) Se solid solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huseynov, J. I., E-mail: cih-58@mail.ru; Murguzov, M. I.; Ismayilov, Sh. S.
2017-02-15
The Er_x Sn_(1-x) Se system is characterized by a significant deviation of the temperature dependence of the differential thermopower from linearity at temperatures below room temperature and a change in the sign of the thermomagnetic coefficient. The deviation of the thermopower of Er_x Sn_(1-x) Se samples in the nonequilibrium state from linearity is found to be caused mainly by the entrainment of charge carriers by phonons, α_ph. The statistical forces of electronic entrainment, A_ph(ε), are estimated.
Spatial trends in Pearson Type III statistical parameters
Lichty, R.W.; Karlinger, M.R.
1995-01-01
Spatial trends in the statistical parameters (mean, standard deviation, and skewness coefficient) of a Pearson Type III distribution of the logarithms of annual flood peaks for small rural basins (less than 90 km2) are delineated using a climate factor CT, (T=2-, 25-, and 100-yr recurrence intervals), which quantifies the effects of long-term climatic data (rainfall and pan evaporation) on observed T-yr floods. Maps showing trends in average parameter values demonstrate the geographically varying influence of climate on the magnitude of Pearson Type III statistical parameters. The spatial trends in variability of the parameter values characterize the sensitivity of statistical parameters to the interaction of basin-runoff characteristics (hydrology) and climate. -from Authors
2013-06-01
collection are the facts that devices lack encryption or compression methods and that the log file must be saved on the host system prior to transfer...time. Statistical correlation utilizes numerical algorithms to detect deviations from normal event levels and other routine activities (Chuvakin...can also assist in detecting low-volume threats. Although easy and logical to implement, the implementation of statistical correlation algorithms
Enhanced detection and visualization of anomalies in spectral imagery
NASA Astrophysics Data System (ADS)
Basener, William F.; Messinger, David W.
2009-05-01
Anomaly detection algorithms applied to hyperspectral imagery are able to reliably identify man-made objects from a natural environment based on statistical/geometric likelihood. The process is more robust than target identification, which requires precise prior knowledge of the object of interest, but has an inherently higher false alarm rate. Standard anomaly detection algorithms measure deviation of pixel spectra from a parametric model (either statistical or linear mixing) estimating the image background. The topological anomaly detector (TAD) creates a fully non-parametric, graph theory-based, topological model of the image background and measures deviation from this background using codensity. In this paper we present a large-scale comparative test of TAD against 80+ targets in four full HYDICE images using the entire canonical target set for generation of ROC curves. TAD will be compared against several statistics-based detectors including local RX and subspace RX. Even a perfect anomaly detection algorithm would have a high practical false alarm rate in most scenes simply because the user/analyst is not interested in every anomalous object. To assist the analyst in identifying and sorting objects of interest, we investigate coloring of the anomalies with principal components projections using statistics computed from the anomalies. This gives a very useful colorization of anomalies in which objects of similar material tend to have the same color, enabling an analyst to quickly sort and identify anomalies of highest interest.
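For contrast with TAD, the parametric baseline it is compared against can be sketched as a global RX-type detector (Mahalanobis distance from the scene mean under the background covariance); this is not TAD itself, and the hyperspectral cube below is synthetic.

```python
import numpy as np

def global_rx(cube):
    """Global RX anomaly detector: Mahalanobis distance of each pixel
    spectrum from the scene mean under the background covariance."""
    h, w, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
    d = X - mu
    scores = np.einsum("ij,jk,ik->i", d, cov_inv, d)  # d_i^T C^{-1} d_i per pixel
    return scores.reshape(h, w)

# Synthetic cube: smooth background plus one spectrally distinct "object".
rng = np.random.default_rng(5)
cube = rng.normal(100.0, 5.0, size=(64, 64, 20))
cube[30:32, 40:42, :] += 40.0

scores = global_rx(cube)
print(np.unravel_index(np.argmax(scores), scores.shape))  # location of the strongest anomaly
```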
Visual field progression in glaucoma: total versus pattern deviation analyses.
Artes, Paul H; Nicolela, Marcelo T; LeBlanc, Raymond P; Chauhan, Balwantray C
2005-12-01
To compare visual field progression with total and pattern deviation analyses in a prospective longitudinal study of patients with glaucoma and healthy control subjects. A group of 101 patients with glaucoma (168 eyes) with early to moderately advanced visual field loss at baseline (average mean deviation [MD], -3.9 dB) and no clinical evidence of media opacity were selected from a prospective longitudinal study on visual field progression in glaucoma. Patients were examined with static automated perimetry at 6-month intervals for a median follow-up of 9 years. At each test location, change was established with event and trend analyses of total and pattern deviation. The event analyses compared each follow-up test to a baseline obtained from averaging the first two tests, and visual field progression was defined as deterioration beyond the 5th percentile of test-retest variability at three test locations, observed on three consecutive tests. The trend analyses were based on point-wise linear regression, and visual field progression was defined as statistically significant deterioration (P < 5%) worse than -1 dB/year at three locations, confirmed by independently omitting the last and the penultimate observation. The incidence and the time-to-progression were compared between total and pattern deviation analyses. To estimate the specificity of the progression analyses, identical criteria were applied to visual fields obtained in 102 healthy control subjects, and the rate of visual field improvement was established in the patients with glaucoma and the healthy control subjects. With both event and trend methods, pattern deviation analyses classified approximately 15% fewer eyes as having progressed than did the total deviation analyses. In eyes classified as progressing by both the total and pattern deviation methods, total deviation analyses tended to detect progression earlier than the pattern deviation analyses. A comparison of the changes observed in MD and the visual fields' general height (estimated by the 85th percentile of the total deviation values) confirmed that change in the glaucomatous eyes almost always comprised a diffuse component. Pattern deviation analyses of progression may therefore underestimate the true amount of glaucomatous visual field progression. Pattern deviation analyses of visual field progression may underestimate visual field progression in glaucoma, particularly when there is no clinical evidence of increasing media opacity. Clinicians should have access to both total and pattern deviation analyses to make informed decisions on visual field progression in glaucoma.
Baryon-antibaryon dynamics in relativistic heavy-ion collisions
NASA Astrophysics Data System (ADS)
Seifert, E.; Cassing, W.
2018-04-01
The dynamics of baryon-antibaryon annihilation and reproduction (BB̄ ↔ 3M) is studied within the Parton-Hadron-String Dynamics (PHSD) transport approach for Pb+Pb and Au+Au collisions as a function of centrality from lower Super Proton Synchrotron (SPS) up to Large Hadron Collider (LHC) energies on the basis of the quark rearrangement model. At Relativistic Heavy-Ion Collider (RHIC) energies we find a small net reduction of baryon-antibaryon (BB̄) pairs while for the LHC energy of √s_NN = 2.76 TeV a small net enhancement is found relative to calculations without annihilation (and reproduction) channels. Accordingly, the sizable difference between data and statistical calculations in Pb+Pb collisions at √s_NN = 2.76 TeV for proton and antiproton yields [ALICE Collaboration, B. Abelev et al., Phys. Rev. C 88, 044910 (2013), 10.1103/PhysRevC.88.044910], where a deviation of 2.7σ was claimed by the ALICE Collaboration, should not be attributed to a net antiproton annihilation. This is in line with the observation that no substantial deviation between the data and statistical hadronization model (SHM) calculations is seen for antihyperons, since according to the PHSD analysis the antihyperons should be modified by the same amount as antiprotons. As the PHSD results for particle ratios are in line with the ALICE data (within error bars) this might point towards a deviation from statistical equilibrium in the hadronization (at least for protons and antiprotons). Furthermore, we find that the BB̄ ↔ 3M reactions are more effective at lower SPS energies where a net suppression for antiprotons and antihyperons up to a factor of 2-2.5 can be extracted from the PHSD calculations for central Au+Au collisions.
Specification of the ISS Plasma Environment Variability
NASA Technical Reports Server (NTRS)
Minow, Joseph I.; Neergaard, Linda F.; Bui, Them H.; Mikatarian, Ronald R.; Barsamian, H.; Koontz, Steven L.
2002-01-01
Quantifying the spacecraft charging risks and corresponding hazards for the International Space Station (ISS) requires a plasma environment specification describing the natural variability of ionospheric temperature (Te) and density (Ne). Empirical ionospheric specification and forecast models such as the International Reference Ionosphere (IRI) model typically only provide estimates of long term (seasonal) mean Te and Ne values for the low Earth orbit environment. Knowledge of the Te and Ne variability as well as the likelihood of extreme deviations from the mean values are required to estimate both the magnitude and frequency of occurrence of potentially hazardous spacecraft charging environments for a given ISS construction stage and flight configuration. This paper describes the statistical analysis of historical ionospheric low Earth orbit plasma measurements used to estimate Ne, Te variability in the ISS flight environment. The statistical variability analysis of Ne and Te enables calculation of the expected frequency of Occurrence of any particular values of Ne and Te, especially those that correspond to possibly hazardous spacecraft charging environments. The database used in the original analysis included measurements from the AE-C, AE-D, and DE-2 satellites. Recent work on the database has added additional satellites to the database and ground based incoherent scatter radar observations as well. Deviations of the data values from the IRI estimated Ne, Te parameters for each data point provide a statistical basis for modeling the deviations of the plasma environment from the IRI model output. This technique, while developed specifically for the Space Station analysis, can also be generalized to provide ionospheric plasma environment risk specification models for low Earth orbit over an altitude range of 200 km through approximately 1000 km.
Describing temporal variation in reticuloruminal pH using continuous monitoring data.
Denwood, M J; Kleen, J L; Jensen, D B; Jonsson, N N
2018-01-01
Reticuloruminal pH has been linked to subclinical disease in dairy cattle, leading to considerable interest in identifying pH observations below a given threshold. The relatively recent availability of continuously monitored data from pH boluses gives new opportunities for characterizing the normal patterns of pH over time and distinguishing these from abnormal patterns using more sensitive and specific methods than simple thresholds. We fitted a series of statistical models to continuously monitored data from 93 animals on 13 farms to characterize normal variation within and between animals. We used a subset of the data to relate deviations from the normal pattern to the productivity of 24 dairy cows from a single herd. Our findings show substantial variation in pH characteristics between animals, although animals within the same farm tended to show more consistent patterns. There was strong evidence for a predictable diurnal variation in all animals, and up to 70% of the observed variation in pH could be explained using a simple statistical model. For the 24 animals with available production information, there was also a strong association between productivity (as measured by both milk yield and dry matter intake) and deviations from the expected diurnal pattern of pH 2 d before the productivity observation. In contrast, there was no association between productivity and the occurrence of observations below a threshold pH. We conclude that statistical models can be used to account for a substantial proportion of the observed variability in pH and that future work with continuously monitored pH data should focus on deviations from a predictable pattern rather than the frequency of observations below an arbitrary pH threshold. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
2017-01-01
Indicators of compliance and efficiency in combatting money laundering, collected by EUROSTAT, are plagued with shortcomings. In this paper, I have carried out a forensic analysis on a 2003–2010 dataset of indicators of compliance and efficiency in combatting money laundering, that European Union member states self-reported to EUROSTAT, and on the basis of which, their efforts were evaluated. I used Benford’s law to detect any anomalous statistical patterns and found that statistical anomalies were also consistent with strategic manipulation. According to Benford’s law, if we pick a random sample of numbers representing natural processes, and look at the distribution of the first digits of these numbers, we see that, contrary to popular belief, digit 1 occurs most often, then digit 2, and so on, with digit 9 occurring in less than 5% of the sample. Without prior knowledge of Benford’s law, since people are not intuitively good at creating truly random numbers, deviations thereof can capture strategic alterations. In order to eliminate other sources of deviation, I have compared deviations in situations where incentives and opportunities for manipulation existed and in situations where they did not. While my results are not a conclusive proof of strategic manipulation, they signal that countries that faced incentives and opportunities to misinform the international community about their efforts to combat money laundering may have manipulated these indicators. Finally, my analysis points to the high potential for disruption that the manipulation of national statistics has, and calls for the acknowledgment that strategic manipulation can be an unintended consequence of the international community’s pressure on countries to put combatting money laundering on the top of their national agenda. PMID:28122058
Deleanu, Ioana Sorina
2017-01-01
Indicators of compliance and efficiency in combatting money laundering, collected by EUROSTAT, are plagued with shortcomings. In this paper, I have carried out a forensic analysis on a 2003-2010 dataset of indicators of compliance and efficiency in combatting money laundering, that European Union member states self-reported to EUROSTAT, and on the basis of which, their efforts were evaluated. I used Benford's law to detect any anomalous statistical patterns and found that statistical anomalies were also consistent with strategic manipulation. According to Benford's law, if we pick a random sample of numbers representing natural processes, and look at the distribution of the first digits of these numbers, we see that, contrary to popular belief, digit 1 occurs most often, then digit 2, and so on, with digit 9 occurring in less than 5% of the sample. Without prior knowledge of Benford's law, since people are not intuitively good at creating truly random numbers, deviations thereof can capture strategic alterations. In order to eliminate other sources of deviation, I have compared deviations in situations where incentives and opportunities for manipulation existed and in situations where they did not. While my results are not a conclusive proof of strategic manipulation, they signal that countries that faced incentives and opportunities to misinform the international community about their efforts to combat money laundering may have manipulated these indicators. Finally, my analysis points to the high potential for disruption that the manipulation of national statistics has, and calls for the acknowledgment that strategic manipulation can be an unintended consequence of the international community's pressure on countries to put combatting money laundering on the top of their national agenda.
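A minimal sketch of the Benford first-digit check described in these abstracts: compute leading-digit frequencies, compare them with the Benford proportions log10(1 + 1/d), and quantify the deviation with a chi-square statistic. The sample is simulated and merely stands in for reported indicator values.

```python
import numpy as np
from scipy.stats import chisquare

def first_digits(values):
    """Leading (first significant) digit of each positive value."""
    values = np.asarray(values, dtype=float)
    values = values[values > 0]
    exponents = np.floor(np.log10(values))
    return (values / 10.0 ** exponents).astype(int)   # values scaled into [1, 10)

digits = np.arange(1, 10)
benford = np.log10(1.0 + 1.0 / digits)                 # expected Benford proportions

# Illustrative sample standing in for self-reported indicator counts.
rng = np.random.default_rng(6)
sample = rng.lognormal(mean=5.0, sigma=2.0, size=2000)
observed = np.bincount(first_digits(sample), minlength=10)[1:]

stat, p = chisquare(observed, f_exp=benford * observed.sum())
print(dict(zip(digits, np.round(observed / observed.sum(), 3))))
print(round(stat, 2), round(p, 3))   # small p suggests a deviation from Benford's law
```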
NASA Astrophysics Data System (ADS)
Takabayashi, Sadao; Klein, William P.; Onodera, Craig; Rapp, Blake; Flores-Estrada, Juan; Lindau, Elias; Snowball, Lejmarc; Sam, Joseph T.; Padilla, Jennifer E.; Lee, Jeunghoon; Knowlton, William B.; Graugnard, Elton; Yurke, Bernard; Kuang, Wan; Hughes, William L.
2014-10-01
High precision, high yield, and high density self-assembly of nanoparticles into arrays is essential for nanophotonics. Spatial deviations as small as a few nanometers can alter the properties of near-field coupled optical nanostructures. Several studies have reported assemblies of few nanoparticle structures with controlled spacing using DNA nanostructures with variable yield. Here, we report multi-tether design strategies and attachment yields for homo- and hetero-nanoparticle arrays templated by DNA origami nanotubes. Nanoparticle attachment yield via DNA hybridization is comparable with streptavidin-biotin binding. Independent of the number of binding sites, >97% site-occupation was achieved with four tethers and 99.2% site-occupation is theoretically possible with five tethers. The interparticle distance was within 2 nm of all design specifications and the nanoparticle spatial deviations decreased with interparticle spacing. Modified geometric, binomial, and trinomial distributions indicate that site-bridging, steric hindrance, and electrostatic repulsion were not dominant barriers to self-assembly and both tethers and binding sites were statistically independent at high particle densities. Electronic supplementary information (ESI) available. See DOI: 10.1039/c4nr03069a
Zhang, Fanghong; Miyaoka, Etsuo; Huang, Fuping; Tanaka, Yutaka
2015-01-01
The problem of establishing noninferiority between a new treatment and a standard (control) treatment with ordinal categorical data is discussed. A measure of treatment effect is used and a method of specifying the noninferiority margin for the measure is provided. Two Z-type test statistics are proposed in which the estimate of variance is constructed under the shifted null hypothesis using U-statistics. Furthermore, the confidence interval and the sample size formula are given based on the proposed test statistics. The proposed procedure is applied to a dataset from a clinical trial. A simulation study is conducted to compare the performance of the proposed test statistics with that of existing ones, and the results show that the proposed test statistics are better in terms of deviation from the nominal level and power.
Shiraishi, Satomi; Grams, Michael P; Fong de Los Santos, Luis E
2018-05-01
The purpose of this study was to demonstrate an objective quality control framework for the image review process. A total of 927 cone-beam computed tomography (CBCT) registrations were retrospectively analyzed for 33 bilateral head and neck cancer patients who received definitive radiotherapy. Two registration tracking volumes (RTVs) - cervical spine (C-spine) and mandible - were defined, within which a similarity metric was calculated and used as a registration quality tracking metric over the course of treatment. First, sensitivity to large misregistrations was analyzed for normalized cross-correlation (NCC) and mutual information (MI) in the context of statistical analysis. The distribution of metrics was obtained for displacements that varied according to a normal distribution with standard deviation of σ = 2 mm, and the detectability of displacements greater than 5 mm was investigated. Then, similarity metric control charts were created using a statistical process control (SPC) framework to objectively monitor the image registration and review process. Patient-specific control charts were created using NCC values from the first five fractions to set a patient-specific process capability limit. Population control charts were created using the average of the first five NCC values for all patients in the study. For each patient, the similarity metrics were calculated as a function of unidirectional translation, referred to as the effective displacement. Patient-specific action limits corresponding to 5 mm effective displacements were defined. Furthermore, effective displacements of the ten registrations with the lowest similarity metrics were compared with a three dimensional (3DoF) couch displacement required to align the anatomical landmarks. Normalized cross-correlation identified suboptimal registrations more effectively than MI within the framework of SPC. Deviations greater than 5 mm were detected at 2.8σ and 2.1σ from the mean for NCC and MI, respectively. Patient-specific control charts using NCC evaluated daily variation and identified statistically significant deviations. This study also showed that subjective evaluations of the images were not always consistent. Population control charts identified a patient whose tracking metrics were significantly lower than those of other patients. The patient-specific action limits identified registrations that warranted immediate evaluation by an expert. When effective displacements in the anterior-posterior direction were compared to 3DoF couch displacements, the agreement was ±1 mm for seven of 10 patients for both C-spine and mandible RTVs. Qualitative review alone of IGRT images can result in inconsistent feedback to the IGRT process. Registration tracking using NCC objectively identifies statistically significant deviations. When used in conjunction with the current image review process, this tool can assist in improving the safety and consistency of the IGRT process. © 2018 American Association of Physicists in Medicine.
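A minimal sketch of the normalized cross-correlation metric used above as the registration tracking statistic, applied to two volumes restricted to a tracking volume; the arrays are synthetic stand-ins for CT/CBCT data.

```python
import numpy as np

def normalized_cross_correlation(a, b):
    """Normalized cross-correlation between two equally shaped image volumes
    (e.g., reference and registered images restricted to a tracking volume)."""
    a = np.ravel(np.asarray(a, dtype=float))
    b = np.ravel(np.asarray(b, dtype=float))
    a = a - a.mean()
    b = b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Illustrative volumes: well-aligned anatomy versus a misregistered (shifted) copy.
rng = np.random.default_rng(7)
reference = rng.normal(size=(40, 40, 40))
aligned = reference + rng.normal(scale=0.05, size=reference.shape)
shifted = np.roll(reference, 5, axis=0) + rng.normal(scale=0.05, size=reference.shape)

print(normalized_cross_correlation(reference, aligned))   # close to 1
print(normalized_cross_correlation(reference, shifted))   # noticeably lower
```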
How random is a random vector?
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2015-12-01
Over 80 years ago Samuel Wilks proposed that the "generalized variance" of a random vector is the determinant of its covariance matrix. To date, the notion and use of the generalized variance are confined only to very specific niches in statistics. In this paper we establish that the "Wilks standard deviation", the square root of the generalized variance, is indeed the standard deviation of a random vector. We further establish that the "uncorrelation index", a derivative of the Wilks standard deviation, is a measure of the overall correlation between the components of a random vector. Both the Wilks standard deviation and the uncorrelation index are, respectively, special cases of two general notions that we introduce: "randomness measures" and "independence indices" of random vectors. In turn, these general notions give rise to "randomness diagrams": tangible planar visualizations that answer the question: How random is a random vector? The notion of "independence indices" yields a novel measure of correlation for Lévy laws. In general, the concepts and results presented in this paper are applicable to any field of science and engineering with random-vector empirical data.
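The Wilks standard deviation itself is straightforward to compute; the sketch below also includes one plausible normalization against the uncorrelated case as a stand-in for an uncorrelation-type index. The paper's exact definition of the uncorrelation index may differ; this form is an assumption for illustration only.

```python
import numpy as np

def wilks_standard_deviation(X):
    """Square root of the generalized variance: sqrt(det(covariance matrix))."""
    cov = np.cov(X, rowvar=False)
    return float(np.sqrt(np.linalg.det(cov)))

def uncorrelation_ratio(X):
    """Illustrative index: Wilks SD divided by its value for the same marginal
    variances with zero correlations (1 when uncorrelated, smaller as correlation grows)."""
    cov = np.cov(X, rowvar=False)
    return float(np.sqrt(np.linalg.det(cov) / np.prod(np.diag(cov))))

rng = np.random.default_rng(8)
uncorrelated = rng.normal(size=(10_000, 3))
mixing = np.array([[1.0, 0.8, 0.0],
                   [0.0, 1.0, 0.8],
                   [0.0, 0.0, 1.0]])
correlated = uncorrelated @ mixing        # induces correlation between components

print(wilks_standard_deviation(uncorrelated), uncorrelation_ratio(uncorrelated))
print(wilks_standard_deviation(correlated), uncorrelation_ratio(correlated))
```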
Estimation of Tooth Size Discrepancies among Different Malocclusion Groups.
Hasija, Narender; Bala, Madhu; Goyal, Virender
2014-05-01
Regards and Tribute: Late Dr Narender Hasija was a mentor and visionary in the light of knowledge and experience. We pay our regards with deepest gratitude to the departed soul to rest in peace. Bolton's ratios help in estimating overbite and overjet relationships, the effects of contemplated extractions on posterior occlusion, incisor relationships, and identification of occlusal misfit produced by tooth size discrepancies. The aim was to determine any difference in tooth size discrepancy in the anterior as well as the overall ratio in different malocclusions and to compare the results with Bolton's study. After the teeth of all 100 patients were measured, Bolton's analysis was performed. Results were compared with Bolton's means and standard deviations and were also subjected to statistical analysis. Results show that the means and standard deviations of the ideal occlusion cases are comparable with those of Bolton; however, when the means and standard deviations of the malocclusion groups are compared with those of Bolton, the values of the standard deviation are higher, though the means are comparable. How to cite this article: Hasija N, Bala M, Goyal V. Estimation of Tooth Size Discrepancies among Different Malocclusion Groups. Int J Clin Pediatr Dent 2014;7(2):82-85.
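A small worked example of Bolton's anterior and overall ratios; the tooth widths below are hypothetical, and the reference means of about 77.2% (anterior) and 91.3% (overall) are Bolton's commonly cited values rather than figures from this abstract.

```python
# Illustrative mesiodistal tooth widths (mm) for a hypothetical patient.
mandibular_6 = [5.3, 5.9, 6.8, 6.9, 5.8, 5.4]                 # lower canine to canine
maxillary_6 = [8.0, 7.0, 8.5, 8.6, 7.0, 7.7]                  # upper canine to canine
mandibular_12 = mandibular_6 + [7.0, 7.1, 10.8, 10.9, 7.2, 7.1]   # adds premolars and first molars
maxillary_12 = maxillary_6 + [6.8, 6.9, 10.2, 10.4, 6.7, 6.8]

anterior_ratio = 100.0 * sum(mandibular_6) / sum(maxillary_6)
overall_ratio = 100.0 * sum(mandibular_12) / sum(maxillary_12)

print(round(anterior_ratio, 1), round(overall_ratio, 1))
# Bolton reported means of roughly 77.2% (anterior) and 91.3% (overall);
# values far from these suggest a tooth size discrepancy.
```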
Using operations research to plan improvement of the transport of critically ill patients.
Chen, Jing; Awasthi, Anjali; Shechter, Steven; Atkins, Derek; Lemke, Linda; Fisher, Les; Dodek, Peter
2013-01-01
Operations research is the application of mathematical modeling, statistical analysis, and mathematical optimization to understand and improve processes in organizations. The objective of this study was to illustrate how the methods of operations research can be used to identify opportunities to reduce the absolute value and variability of interfacility transport intervals for critically ill patients. After linking data from two patient transport organizations in British Columbia, Canada, for all critical care transports during the calendar year 2006, the steps for transfer of critically ill patients were tabulated into a series of time intervals. Statistical modeling, root-cause analysis, Monte Carlo simulation, and sensitivity analysis were used to test the effect of changes in component intervals on overall duration and variation of transport times. Based on quality improvement principles, we focused on reducing the 75th percentile and standard deviation of these intervals. We analyzed a total of 3808 ground and air transports. Constraining time spent by transport personnel at sending and receiving hospitals was projected to reduce the total time taken by 33 minutes with as much as a 20% reduction in standard deviation of these transport intervals in 75% of ground transfers. Enforcing a policy of requiring acceptance of patients who have life- or limb-threatening conditions or organ failure was projected to reduce the standard deviation of air transport time by 63 minutes and the standard deviation of ground transport time by 68 minutes. Based on findings from our analyses, we developed recommendations for technology renovation, personnel training, system improvement, and policy enforcement. Use of the tools of operations research identifies opportunities for improvement in a complex system of critical care transport.
How does the past of a soccer match influence its future? Concepts and statistical analysis.
Heuer, Andreas; Rubner, Oliver
2012-01-01
Scoring goals in a soccer match can be interpreted as a stochastic process. In the simplest description of a soccer match one assumes that scoring goals follows from independent rate processes of both teams. This would imply simple Poissonian and Markovian behavior. Deviations from this behavior would imply that the previous course of the match has an impact on the present match behavior. Here a general framework for the identification of deviations from this behavior is presented. For this endeavor it is essential to formulate an a priori estimate of the expected number of goals per team in a specific match. This can be done based on our previous work on the estimation of team strengths. Furthermore, the well-known general increase of the number of goals in the course of a soccer match has to be removed by appropriate normalization. In general, three different types of deviations from a simple rate process can exist. First, the goal rate may depend on the exact time of the previous goals. Second, it may be influenced by the time passed since the previous goal and, third, it may reflect the present score. We show that the Poissonian scenario is fulfilled quite well for the German Bundesliga. However, a detailed analysis reveals significant deviations for the second and third aspects. Dramatic effects are observed if the away team leads by one or two goals in the final part of the match. This analysis allows one to identify generic features of soccer matches and to learn about the hidden complexities behind scoring goals. Among other things, it explains why the number of draws is larger than statistically expected.
Sketching Curves for Normal Distributions--Geometric Connections
ERIC Educational Resources Information Center
Bosse, Michael J.
2006-01-01
Within statistics instruction, students are often requested to sketch the curve representing a normal distribution with a given mean and standard deviation. Unfortunately, these sketches are often notoriously imprecise. Poor sketches are usually the result of missing mathematical knowledge. This paper considers relationships which exist among…
Fracture of mandibular condyle—to open or not to open: an attempt to settle the controversy.
Rastogi, Sanjay; Sharma, Siddharth; Kumar, Sanjeev; Reddy, Mahendra P; Niranjanaprasad Indra, B
2015-06-01
To compare the outcome of the open method versus the closed method of treatment for mandibular condylar fracture. Fifty patients with fractures of the mandibular condylar processes were evaluated. All fractures were displaced, with a degree of deviation between the condylar fragment and the ascending ramus of 10 to 45 degrees (mediolaterally). The patients were randomly divided into two groups, with group 1 receiving open reduction internal fixation and group 2 receiving closed reduction. The follow-up was done over the period of 6 months. Statistically significant improvement was seen in group 1 compared with group 2 in terms of anatomic reduction of the condyle, shortening of the ascending ramus, occlusal status, and deviation on mouth opening. A statistically significant difference was seen in the patients treated with the open method, with improved temporomandibular joint functions and fewer short- and long-term complications compared with those treated with the closed method. Copyright © 2015 Elsevier Inc. All rights reserved.
Descriptive Statistics and Cluster Analysis for Extreme Rainfall in Java Island
NASA Astrophysics Data System (ADS)
E Komalasari, K.; Pawitan, H.; Faqih, A.
2017-03-01
This study aims to describe the regional pattern of extreme rainfall, based on maximum daily rainfall for the period 1983 to 2012, in Java Island. Descriptive statistical analysis was performed to obtain the centralization, variation and distribution of the maximum precipitation data. Mean and median are used to measure the central tendency of the data, while the interquartile range (IQR) and standard deviation are used to measure its variation. In addition, skewness and kurtosis are used to characterize the shape of the distribution of the rainfall data. Cluster analysis using squared Euclidean distance and Ward's method is applied to perform regional grouping. The results of this study show that the mean of the maximum daily rainfall in the Java region during 1983-2012 is around 80-181 mm, with a median between 75-160 mm and a standard deviation between 17 and 82. Cluster analysis produces four clusters and shows that the western area of Java tends to have higher annual maxima of daily rainfall than the northern area, and has more variety in the annual maximum value.
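The workflow summarized above (per-station descriptive statistics followed by Ward clustering) can be sketched in a few lines. The annual-maximum rainfall series below are invented for demonstration, and note that SciPy's Ward linkage operates on Euclidean rather than squared Euclidean distances.

```python
import numpy as np
from scipy import stats
from scipy.cluster.hierarchy import linkage, fcluster

# Synthetic annual-maximum daily rainfall (mm) for hypothetical stations.
rng = np.random.default_rng(1)
stations = {f"station_{i}": rng.gumbel(loc=100 + 20 * i, scale=25, size=30)
            for i in range(6)}

# Descriptive statistics per station: central tendency, spread and shape.
for name, series in stations.items():
    print(name,
          f"mean={series.mean():.1f}",
          f"median={np.median(series):.1f}",
          f"IQR={stats.iqr(series):.1f}",
          f"sd={series.std(ddof=1):.1f}",
          f"skew={stats.skew(series):.2f}",
          f"kurtosis={stats.kurtosis(series):.2f}")

# Ward's method on simple per-station features (mean, standard deviation).
features = np.array([[s.mean(), s.std(ddof=1)] for s in stations.values()])
clusters = fcluster(linkage(features, method="ward"), t=4, criterion="maxclust")
print("cluster labels:", clusters)
```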
Time-variant random interval natural frequency analysis of structures
NASA Astrophysics Data System (ADS)
Wu, Binhua; Wu, Di; Gao, Wei; Song, Chongmin
2018-02-01
This paper presents a new robust method, namely the unified interval Chebyshev-based random perturbation method, to tackle the hybrid random interval structural natural frequency problem. In the proposed approach, the random perturbation method is implemented to furnish the statistical features (i.e., mean and standard deviation), and a Chebyshev surrogate model strategy is incorporated to formulate the statistical information of the natural frequency with regard to the interval inputs. The comprehensive analysis framework combines the strengths of both methods in a way that dramatically reduces the computational cost. The presented method is thus capable of investigating the day-to-day time-variant natural frequency of structures accurately and efficiently under concrete intrinsic creep effects with probabilistic and interval uncertain variables. The extreme bounds of the mean and standard deviation of the natural frequency are captured through the optimization strategy embedded within the analysis procedure. Three numerical examples with a progressive relationship in terms of both structure type and uncertainty variables are presented to demonstrate the applicability, accuracy and efficiency of the proposed method.
Eustachian tube diameter: Is it associated with chronic otitis media development?
Paltura, Ceki; Can, Tuba Selçuk; Yilmaz, Behice Kaniye; Dinç, Mehmet Emre; Develioğlu, Ömer Necati; Külekçi, Mehmet
To evaluate the effect of ET diameter on chronic otitis media (COM) pathogenesis. Retrospective. Patients with unilateral COM were included in the study. The connection between the fibrocartilaginous and osseous segments of the Eustachian tube (ET) was defined on axial computed tomography (CT) images, and the diameter of this segment was measured. The measurements were carried out bilaterally and compared statistically. 154 patients (76 (49%) male, 78 (51%) female) were diagnosed with unilateral COM and included in the study. The mean diameter of the ET was 1.947 mm (standard deviation ± 0.5247) for healthy ears and 1.788 mm (standard deviation ± 0.5306) for diseased ears. The statistical analysis showed a significantly narrower ET diameter on the diseased-ear side (p < 0.01). Dysfunction or anatomical anomalies of the ET are correlated with COM. Measurement of the bony diameter of the ET during routine temporal CT examination is recommended. Copyright © 2017 Elsevier Inc. All rights reserved.
Statistical behavior of post-shock overpressure past grid turbulence
NASA Astrophysics Data System (ADS)
Sasoh, Akihiro; Harasaki, Tatsuya; Kitamura, Takuya; Takagi, Daisuke; Ito, Shigeyoshi; Matsuda, Atsushi; Nagata, Kouji; Sakai, Yasuhiko
2014-09-01
When a shock wave ejected from the exit of a 5.4-mm inner diameter, stainless steel tube propagated through grid turbulence across a distance of 215 mm, which is 5-15 times larger than its integral length scale, and was normally incident onto a flat surface, the peak value of the post-shock overpressure at a shock Mach number of 1.0009 on the flat surface exhibited a standard deviation of up to about 9% of its ensemble average. This value was more than 40 times larger than the dynamic pressure fluctuation corresponding to the maximum value of the root-mean-square velocity fluctuation. By varying these parameters, the statistical behavior of the peak overpressure was obtained after at least 500 runs were performed for each condition. The standard deviation of the peak overpressure due to the turbulence was almost proportional to the root-mean-square velocity fluctuation. Although the overpressure modulations at two points 200 mm apart were independent of each other, we observed a weak positive correlation between the peak overpressure difference and the relative arrival time difference.
NASA Astrophysics Data System (ADS)
Klimenko, V. V.
2017-12-01
We obtain expressions for the probabilities of the normal-noise spikes with the Gaussian correlation function and for the probability density of the inter-spike intervals. As distinct from the delta-correlated noise, in which the intervals are distributed by the exponential law, the probability of the subsequent spike depends on the previous spike and the interval-distribution law deviates from the exponential one for a finite noise-correlation time (frequency-bandwidth restriction). This deviation is the most pronounced for a low detection threshold. Similarity of the behaviors of the distributions of the inter-discharge intervals in a thundercloud and the noise spikes for the varying repetition rate of the discharges/spikes, which is determined by the ratio of the detection threshold to the root-mean-square value of noise, is observed. The results of this work can be useful for the quantitative description of the statistical characteristics of the noise spikes and studying the role of fluctuations for the discharge emergence in a thundercloud.
Application of statistical process control to qualitative molecular diagnostic assays.
O'Brien, Cathal P; Finn, Stephen P
2014-01-01
Modern pathology laboratories, and in particular high-throughput laboratories such as clinical chemistry, have developed reliable systems for statistical process control (SPC). Such a system is absent from the majority of molecular laboratories and, where present, is confined to quantitative assays. As the inability to apply SPC to an assay is an obvious disadvantage, this study aimed to solve this problem by using a frequency estimate coupled with a confidence interval calculation to detect deviations from an expected mutation frequency. The results of this study demonstrate the strengths and weaknesses of this approach and highlight minimum sample number requirements. Notably, assays with low mutation frequencies and detection of small deviations from an expected value require greater sample numbers to mitigate a protracted time to detection. Modeled laboratory data were also used to illustrate how this approach might be applied in a routine molecular laboratory. This article is the first to describe the application of SPC to qualitative laboratory data.
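A minimal sketch of the kind of check described above: compare the observed mutation frequency in a batch of qualitative results against the expected frequency using a normal-approximation binomial confidence interval. The expected frequency and counts are invented, and the paper's exact procedure (including its sample-size and time-to-detection analysis) may differ.

```python
from statistics import NormalDist

def frequency_deviation(positives: int, total: int,
                        expected: float, confidence: float = 0.95) -> bool:
    """Return True if the expected frequency lies outside the confidence
    interval of the observed frequency, i.e. a possible out-of-control signal."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    p_hat = positives / total
    half_width = z * (p_hat * (1 - p_hat) / total) ** 0.5
    return not (p_hat - half_width <= expected <= p_hat + half_width)

# e.g. 12 mutation-positive results in 150 samples vs. an expected 40% rate
print(frequency_deviation(positives=12, total=150, expected=0.40))  # True -> investigate
```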
NASA Astrophysics Data System (ADS)
Reimberg, Paulo; Bernardeau, Francis
2018-01-01
We present a formalism based on the large deviation principle (LDP) applied to cosmological density fields, and more specifically to arbitrary functionals of density profiles, and we apply it to the derivation of the cumulant generating function and one-point probability distribution function (PDF) of the aperture mass (Map), a common observable for cosmic shear observations. We show that the LDP can indeed be used in practice for a much larger family of observables than previously envisioned, such as those built from continuous and nonlinear functionals of density profiles. Taking advantage of this formalism, we extend previous results, which were based on crude definitions of the aperture mass, with top-hat windows and the use of the reduced-shear approximation (replacing the reduced shear with the shear itself). We are able to quantify precisely how this latter approximation affects the Map statistical properties. In particular, we derive the corrective term for the skewness of the Map and reconstruct its one-point PDF.
NASA Astrophysics Data System (ADS)
da Silva Oliveira, C. I.; Martinez-Martinez, D.; Al-Rjoub, A.; Rebouta, L.; Menezes, R.; Cunha, L.
2018-04-01
In this paper, we present a statistical method that allows evaluating the degree of transparency of a thin film. To do so, the color coordinates are measured on different substrates, and their standard deviation is evaluated. In the case of low values, the color depends on the film and not on the substrate, and intrinsic colors are obtained. In contrast, transparent films lead to high values of the standard deviation, since the color coordinates depend on the substrate. Between both extremes, colored films with a certain degree of transparency can be found. This method allows an objective and simple evaluation of the transparency of any film, improving on subjective visual inspection and avoiding the thickness problems related to evaluation by optical spectroscopy. Zirconium oxynitride films deposited on three different substrates (Si, steel and glass) are used to test the validity of this method, whose results have been validated with optical spectroscopy and agree with the visual impression of the samples.
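A small sketch of the substrate-spread idea described above: measure the color coordinates of the same coating on several substrates and use their standard deviation as a transparency score. The CIELAB values below are invented for illustration only.

```python
import numpy as np

# Per-substrate color coordinates (L*, a*, b*) for two hypothetical coatings.
films = {
    "opaque_film":      {"Si": (62.1, 3.2, 11.5), "steel": (61.8, 3.4, 11.9), "glass": (62.4, 3.1, 11.2)},
    "transparent_film": {"Si": (45.0, 1.0, 5.0),  "steel": (68.0, 2.5, 14.0), "glass": (82.0, 0.3, 2.0)},
}

for name, measurements in films.items():
    coords = np.array(list(measurements.values()))
    spread = coords.std(axis=0, ddof=1)   # per-coordinate std across substrates
    print(f"{name}: std(L*, a*, b*) = {np.round(spread, 2)}")
    # low spread -> color is intrinsic to the film; high spread -> substrate shows through
```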
Speaks, Crystal; McGlynn, Katherine A; Cook, Michael B
2012-10-01
The current working model of type II testicular germ cell tumor (TGCT) pathogenesis states that carcinoma in situ arises during embryogenesis, is a necessary precursor, and always progresses to cancer. An implicit condition of this model is that only in utero exposures affect the development of TGCT in later life. In an age-period-cohort analysis, this working model implies an absence of calendar period deviations. We tested this contention using data from the SEER registries of the United States. We assessed age-period-cohort models of TGCTs, seminomas, and nonseminomas for the period 1973-2008. Analyses were restricted to whites diagnosed at ages 15-74 years. We tested whether calendar period deviations were significant in TGCT incidence trends adjusted for age deviations and cohort effects. This analysis included 32,250 TGCTs (18,475 seminomas and 13,775 nonseminomas). Seminoma incidence trends have increased with an average annual percentage change in log-linear rates (net drift) of 1.25%, relative to just 0.14% for nonseminoma. In more recent time periods, TGCT incidence trends have plateaued and then undergone a slight decrease. Calendar period deviations were highly statistically significant in models of TGCT (p = 1.24 × 10^-9) and seminoma (p = 3.99 × 10^-14), after adjustment for age deviations and cohort effects; results for nonseminoma (p = 0.02) indicated that the effects of calendar period were much more muted. Calendar period deviations play a significant role in incidence trends of TGCT, which indicates that postnatal exposures are etiologically relevant.
Das, Biswajit; Gangopadhyay, Gautam
2018-05-07
In the framework of large deviation theory, we have characterized the nonequilibrium turnover statistics of enzyme catalysis in a chemiostatic flow with externally controllable parameters, such as the substrate injection rate and mechanical force. For the kinetics of the process, we have shown the fluctuation theorems in terms of the symmetry of the scaled cumulant generating function (SCGF) in the transient and steady-state regimes, and a similar symmetry rule is reflected in the large deviation rate function (LDRF) as a property of the dissipation rate through the boundaries. Large deviation theory also gives the thermodynamic force of a nonequilibrium steady state, as is usually recorded experimentally by single-molecule techniques, which plays a key role in the dynamical symmetry of the SCGF and LDRF. Using some special properties of the Legendre transformation, we provide a relation between the fluctuations of fluxes and dissipation rates; among these, the fluctuation of the turnover rate is routinely estimated, but the fluctuation in the dissipation rate is yet to be characterized for small systems. Such an enzymatic reaction flow system can be a very good testing ground for systematically understanding rare events from large deviation theory, which goes beyond the fluctuation theorem and the central limit theorem.
Utrillas, María P; Marín, María J; Esteve, Anna R; Estellés, Victor; Tena, Fernando; Cañada, Javier; Martínez-Lozano, José A
2009-01-01
Values of measured and modeled diffuse UV erythemal irradiance (UVER) for all sky conditions are compared on planes inclined at 40 degrees and oriented north, south, east and west. The models used for simulating diffuse UVER are of the geometric-type, mainly the Isotropic, Klucher, Hay, Muneer, Reindl and Schauberger models. To analyze the precision of the models, some statistical estimators were used such as root mean square deviation, mean absolute deviation and mean bias deviation. It was seen that all the analyzed models reproduce adequately the diffuse UVER on the south-facing plane, with greater discrepancies for the other inclined planes. When the models are applied to cloud-free conditions, the errors obtained are higher because the anisotropy of the sky dome acquires more importance and the models do not provide the estimation of diffuse UVER accurately.
MUSiC - Model-independent search for deviations from Standard Model predictions in CMS
NASA Astrophysics Data System (ADS)
Pieta, Holger
2010-02-01
We present an approach for a model-independent search in CMS. Systematically scanning the data for deviations from the standard model Monte Carlo expectations, such an analysis can help to understand the detector and tune event generators. By minimizing the theoretical bias, the analysis is furthermore sensitive to a wide range of models for new physics, including the uncounted number of models not yet thought of. After sorting the events into classes defined by their particle content (leptons, photons, jets and missing transverse energy), a minimally prejudiced scan is performed on a number of distributions. Advanced statistical methods are used to determine the significance of the deviating regions, rigorously taking systematic uncertainties into account. A number of benchmark scenarios, including common models of new physics and possible detector effects, have been used to gauge the power of such a method.
Experiments with central-limit properties of spatial samples from locally covariant random fields
Barringer, T.H.; Smith, T.E.
1992-01-01
When spatial samples are statistically dependent, the classical estimator of sample-mean standard deviation is well known to be inconsistent. For locally dependent samples, however, consistent estimators of sample-mean standard deviation can be constructed. The present paper investigates the sampling properties of one such estimator, designated as the tau estimator of sample-mean standard deviation. In particular, the asymptotic normality properties of standardized sample means based on tau estimators are studied in terms of computer experiments with simulated sample-mean distributions. The effects of both sample size and dependency levels among samples are examined for various values of tau (denoting the size of the spatial kernel for the estimator). The results suggest that even for small degrees of spatial dependency, the tau estimator exhibits significantly stronger normality properties than does the classical estimator of standardized sample means. © 1992.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhukovskii, Yu.M.; Luksha, O.P.; Nenarokomov, E.A.
1988-03-01
We have derived a statistical model for the dissolution of uranium dioxide tablets for the 6 to 12 M concentration range and temperatures from 80°C to the boiling point. The model differs qualitatively from the dissolution model for ground uranium dioxide. In the indicated range of experimental conditions, the mean-square deviation of the curves for the model from the experimental curves is not greater than 6%.
NASA Astrophysics Data System (ADS)
Dai, Guangyao; Wu, Songhua; Song, Xiaoquan; Zhai, Xiaochun
2018-04-01
Cirrus clouds affect the energy budget and hydrological cycle of the earth's atmosphere. The Tibetan Plateau (TP) plays a significant role in the global and regional climate. Optical and geometrical properties of cirrus clouds in the TP were measured in July-August 2014 by lidar and radiosonde. The statistics and temperature dependences of the corresponding properties are analyzed. The cirrus cloud formations are discussed with respect to temperature deviation and dynamic processes.
Photon counting statistics analysis of biophotons from hands.
Jung, Hyun-Hee; Woo, Won-Myung; Yang, Joon-Mo; Choi, Chunho; Lee, Jonghan; Yoon, Gilwon; Yang, Jong S; Soh, Kwang-Sup
2003-05-01
The photon counting statistics of biophotons emitted from hands are studied with a view to testing their agreement with the Poisson distribution. The moments of the observed probability distribution up to seventh order have been evaluated. The moments of biophoton emission from hands are in good agreement with the theoretical values of the Poisson distribution, while those of the dark counts of the photomultiplier tube show large deviations. The present results are consistent with the conventional delta-value analysis of the second moment of probability.
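The moment comparison described above can be emulated as follows: draw (or load) photon counts, estimate their raw moments up to seventh order, and compare them with the moments of a Poisson law of the same mean, computed here with the standard Poisson moment recurrence. The simulated counts merely stand in for measured data.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(7)
counts = rng.poisson(lam=3.5, size=100_000)        # stand-in for measured counts
lam = counts.mean()

def raw_moment(x, k):
    return np.mean(x.astype(float) ** k)

# Poisson raw moments via the recurrence m_{k+1} = lam * sum_j C(k, j) * m_j
poisson_m = [1.0]
for k in range(7):
    poisson_m.append(lam * sum(comb(k, j) * poisson_m[j] for j in range(k + 1)))

for k in range(1, 8):
    print(f"order {k}: observed={raw_moment(counts, k):.3f}  poisson={poisson_m[k]:.3f}")
```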
Jansma, J Martijn; de Zwart, Jacco A; van Gelderen, Peter; Duyn, Jeff H; Drevets, Wayne C; Furey, Maura L
2013-05-15
Technical developments in MRI have improved the signal-to-noise ratio, allowing the use of analysis methods such as finite impulse response (FIR) modeling of rapid event-related functional MRI (er-fMRI). FIR is one of the most informative analysis methods, as it determines the onset and full shape of the hemodynamic response function (HRF) without any a priori assumptions. FIR is, however, vulnerable to multicollinearity, which is directly related to the distribution of stimuli over time. Efficiency can be optimized by simplifying a design and restricting the stimulus distribution to specific sequences, while more design flexibility necessarily reduces efficiency. However, the actual effect of efficiency on fMRI results has never been tested in vivo. Thus, it is currently difficult to make an informed choice between protocol flexibility and statistical efficiency. The main goal of this study was to assign concrete fMRI signal-to-noise values to the abstract scale of FIR statistical efficiency. Ten subjects repeated a perception task with five random and m-sequence-based protocols, with varying but, according to the literature, acceptable levels of multicollinearity. Results indicated substantial differences in signal standard deviation, with the level of variation being a function of multicollinearity. Experiment protocols varied by up to 55.4% in standard deviation. The results confirm that the quality of fMRI in an FIR analysis can vary significantly and substantially with statistical efficiency. Our in vivo measurements can be used to aid in making an informed decision between freedom in protocol design and statistical efficiency. Published by Elsevier B.V.
Sharma, P; Bhargava, M; Sukhachev, D; Datta, S; Wattal, C
2014-02-01
Tropical febrile illnesses such as malaria and dengue are challenging to differentiate clinically. Automated cellular indices from hematology analyzers may afford a preliminary rapid distinction. Blood count and VCS parameters from 114 malaria patients, 105 dengue patients, and 105 febrile controls without dengue or malaria were analyzed. Statistical discriminant functions were generated, and their diagnostic performances were assessed by ROC curve analysis. Three statistical functions were generated: (i) a malaria-vs.-controls factor incorporating platelet count and the standard deviations of lymphocyte volume and conductivity that identified malaria with 90.4% sensitivity and 88.6% specificity; (ii) a dengue-vs.-controls factor incorporating platelet count, lymphocyte percentage and the standard deviation of lymphocyte conductivity that identified dengue with 81.0% sensitivity and 77.1% specificity; and (iii) a febrile-controls-vs.-malaria/dengue factor incorporating mean corpuscular hemoglobin concentration, neutrophil percentage, mean lymphocyte and monocyte volumes, and the standard deviation of monocyte volume that distinguished malaria and dengue from other febrile illnesses with 85.1% sensitivity and 91.4% specificity. Leukocyte abnormalities quantitated by automated analyzers successfully identified malaria and dengue and distinguished them from other fevers. These economical discriminant functions can be rapidly calculated by analyzer software programs to generate electronic flags that trigger specific testing. They could potentially transform diagnostic approaches to tropical febrile illnesses in cost-constrained settings. © 2013 John Wiley & Sons Ltd.
Emergence of patterns in random processes
NASA Astrophysics Data System (ADS)
Newman, William I.; Turcotte, Donald L.; Malamud, Bruce D.
2012-08-01
Sixty years ago, it was observed that any independent and identically distributed (i.i.d.) random variable would produce a pattern of peak-to-peak sequences with, on average, three events per sequence. This outcome was employed to show that randomness could yield, as a null hypothesis for animal populations, an explanation for their apparent 3-year cycles. We show how we can explicitly obtain a universal distribution of the lengths of peak-to-peak sequences in time series and that this can be employed for long data sets as a test of their i.i.d. character. We illustrate the validity of our analysis utilizing the peak-to-peak statistics of a Gaussian white noise. We also consider the nearest-neighbor cluster statistics of point processes in time. If the time intervals are random, we show that cluster size statistics are identical to the peak-to-peak sequence statistics of time series. In order to study the influence of correlations in a time series, we determine the peak-to-peak sequence statistics for the Langevin equation of kinetic theory leading to Brownian motion. To test our methodology, we consider a variety of applications. Using a global catalog of earthquakes, we obtain the peak-to-peak statistics of earthquake magnitudes and the nearest neighbor interoccurrence time statistics. In both cases, we find good agreement with the i.i.d. theory. We also consider the interval statistics of the Old Faithful geyser in Yellowstone National Park. In this case, we find a significant deviation from the i.i.d. theory which we attribute to antipersistence. We consider the interval statistics using the AL index of geomagnetic substorms. We again find a significant deviation from i.i.d. behavior that we attribute to mild persistence. Finally, we examine the behavior of Standard and Poor's 500 stock index's daily returns from 1928-2011 and show that, while it is close to being i.i.d., there is, again, significant persistence. We expect that there will be many other applications of our methodology both to interoccurrence statistics and to time series.
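The classic i.i.d. result quoted above (on average three events per peak-to-peak sequence) can be checked with a few lines of simulation: for an i.i.d. continuous sequence the probability that an interior point is a local maximum is 1/3, so successive peaks are on average three samples apart. A sketch using Gaussian white noise, as in the article:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.standard_normal(1_000_000)

# interior indices that are strict local maxima
is_peak = (x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])
peak_idx = np.flatnonzero(is_peak) + 1

spacings = np.diff(peak_idx)
print("fraction of points that are peaks:", is_peak.mean())   # ~ 1/3
print("mean peak-to-peak spacing:", spacings.mean())          # ~ 3
```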
Weir, Christopher J; Butcher, Isabella; Assi, Valentina; Lewis, Stephanie C; Murray, Gordon D; Langhorne, Peter; Brady, Marian C
2018-03-07
Rigorous, informative meta-analyses rely on availability of appropriate summary statistics or individual participant data. For continuous outcomes, especially those with naturally skewed distributions, summary information on the mean or variability often goes unreported. While full reporting of original trial data is the ideal, we sought to identify methods for handling unreported mean or variability summary statistics in meta-analysis. We undertook two systematic literature reviews to identify methodological approaches used to deal with missing mean or variability summary statistics. Five electronic databases were searched, in addition to the Cochrane Colloquium abstract books and the Cochrane Statistics Methods Group mailing list archive. We also conducted cited reference searching and emailed topic experts to identify recent methodological developments. Details recorded included the description of the method, the information required to implement the method, any underlying assumptions and whether the method could be readily applied in standard statistical software. We provided a summary description of the methods identified, illustrating selected methods in example meta-analysis scenarios. For missing standard deviations (SDs), following screening of 503 articles, fifteen methods were identified in addition to those reported in a previous review. These included Bayesian hierarchical modelling at the meta-analysis level; summary statistic level imputation based on observed SD values from other trials in the meta-analysis; a practical approximation based on the range; and algebraic estimation of the SD based on other summary statistics. Following screening of 1124 articles for methods estimating the mean, one approximate Bayesian computation approach and three papers based on alternative summary statistics were identified. Illustrative meta-analyses showed that when replacing a missing SD the approximation using the range minimised loss of precision and generally performed better than omitting trials. When estimating missing means, a formula using the median, lower quartile and upper quartile performed best in preserving the precision of the meta-analysis findings, although in some scenarios, omitting trials gave superior results. Methods based on summary statistics (minimum, maximum, lower quartile, upper quartile, median) reported in the literature facilitate more comprehensive inclusion of randomised controlled trials with missing mean or variability summary statistics within meta-analyses.
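Two of the simpler summary-statistic approximations of the kind surveyed above can be written in a few lines. The specific formulas used here (standard deviation approximated by range/4, and mean approximated by (Q1 + median + Q3)/3) are common choices in this literature, but the review evaluates several variants, so treat these as illustrative rather than as the recommended estimators.

```python
def sd_from_range(minimum: float, maximum: float) -> float:
    """Crude standard-deviation approximation from a reported range."""
    return (maximum - minimum) / 4.0

def mean_from_quartiles(q1: float, median: float, q3: float) -> float:
    """Mean approximation from reported quartiles and median."""
    return (q1 + median + q3) / 3.0

# e.g. a trial reporting length of stay as median 7 days (Q1 5, Q3 12), range 2-30
print(sd_from_range(2, 30))            # ~7.0 days
print(mean_from_quartiles(5, 7, 12))   # 8.0 days
```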
USDA-ARS?s Scientific Manuscript database
We describe new methods for characterizing gene tree discordance in phylogenomic datasets, which screen for deviations from neutral expectations, summarize variation in statistical support among gene trees, and allow comparison of the patterns of discordance induced by various analysis choices. Usin...
Developing Competency of Teachers in Basic Education Schools
ERIC Educational Resources Information Center
Yuayai, Rerngrit; Chansirisira, Pacharawit; Numnaphol, Kochaporn
2015-01-01
This study aims to develop competency of teachers in basic education schools. The research instruments included the semi-structured in-depth interview form, questionnaire, program developing competency, and evaluation competency form. The statistics used for data analysis were percentage, mean, and standard deviation. The research found that…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dizgah, Azadeh Moradinezhad; Durrer, Ruth, E-mail: Azadeh.Moradinezhad@unige.ch, E-mail: Ruth.Durrer@unige.ch
We study the impact of the often neglected lensing contribution to galaxy number counts on the E_g statistics which is used to constrain deviations from GR. This contribution affects both the galaxy-galaxy and the convergence-galaxy spectra, while it is larger for the latter. At higher redshifts probed by upcoming surveys, for instance at z = 1.5, neglecting this term induces an error of (25-40)% in the spectra and therefore on the E_g statistics which is constructed from the combination of the two. Moreover, including it renders the E_g statistics scale- and bias-dependent and hence puts into question its very objective.
Three-level sampler having automated thresholds
NASA Technical Reports Server (NTRS)
Jurgens, R. F.
1976-01-01
A three-level sampler is described that has its thresholds controlled automatically so as to track changes in the statistics of the random process being sampled. In particular, the mean value is removed and the ratio of the standard deviation of the random process to the threshold is maintained constant. The system is configured in such a manner that slow drifts in the level comparators and digital-to-analog converters are also removed. The ratio of the standard deviation to threshold level may be chosen within the constraints of the ratios of two integers N and M. These may be chosen to minimize the quantizing noise of the sampled process.
Standard deviation of scatterometer measurements from space.
NASA Technical Reports Server (NTRS)
Fischer, R. E.
1972-01-01
The standard deviation of scatterometer measurements has been derived under assumptions applicable to spaceborne scatterometers. Numerical results are presented which show that, with sufficiently long integration times, input signal-to-noise ratios below unity do not cause excessive degradation of measurement accuracy. The effects on measurement accuracy due to varying integration times and changing the ratio of signal bandwidth to IF filter-noise bandwidth are also plotted. The results of the analysis may resolve a controversy by showing that in fact statistically useful scatterometer measurements can be made from space using a 20-W transmitter, such as will be used on the S-193 experiment for Skylab-A.
A microscopic model of the Stokes-Einstein relation in arbitrary dimension.
Charbonneau, Benoit; Charbonneau, Patrick; Szamel, Grzegorz
2018-06-14
The Stokes-Einstein relation (SER) is one of the most robust and widely employed results from the theory of liquids. Yet sizable deviations can be observed for self-solvation, which cannot be explained by the standard hydrodynamic derivation. Here, we revisit the work of Masters and Madden [J. Chem. Phys. 74, 2450-2459 (1981)], who first solved a statistical mechanics model of the SER using the projection operator formalism. By generalizing their analysis to all spatial dimensions and to partially structured solvents, we identify a potential microscopic origin of some of these deviations. We also reproduce the SER-like result from the exact dynamics of infinite-dimensional fluids.
Arisan, Volkan; Karabuda, Zihni Cüneyt; Pişkin, Bülent; Özdemir, Tayfun
2013-12-01
Deviations of implants that were placed by conventional computed tomography (CT)- or cone beam CT (CBCT)-derived mucosa-supported stereolithographic (SLA) surgical guides were analyzed in this study. Eleven patients were randomly scanned by a multi-slice CT (CT group) or a CBCT scanner (CBCT group). A total of 108 implants were planned on the software and placed using SLA guides. A new CT or CBCT scan was obtained and merged with the planning data to identify the deviations between the planned and placed implants. Results were analyzed by Mann-Whitney U test and multiple regressions (p < .05). Mean angular and linear deviations in the CT group were 3.30° (SD 0.36), and 0.75 (SD 0.32) and 0.80 mm (SD 0.35) at the implant shoulder and tip, respectively. In the CBCT group, mean angular and linear deviations were 3.47° (SD 0.37), and 0.81 (SD 0.32) and 0.87 mm (SD 0.32) at the implant shoulder and tip, respectively. No statistically significant differences were detected between the CT and CBCT groups (p = .169 and p = .551, p = .113 for angular and linear deviations, respectively). Implant placement via CT- or CBCT-derived mucosa-supported SLA guides yielded similar deviation values. Results should be confirmed on alternative CBCT scanners. © 2012 Wiley Periodicals, Inc.
Pattern statistics on Markov chains and sensitivity to parameter estimation
Nuel, Grégory
2006-01-01
Background: In order to compute pattern statistics in computational biology, a Markov model is commonly used to take into account the sequence composition. Usually its parameters must be estimated. The aim of this paper is to determine how sensitive these statistics are to parameter estimation, and what the consequences of this variability are for pattern studies (finding the most over-represented words in a genome, the most significant words common to a set of sequences, ...). Results: In the particular case where pattern statistics (overlap counting only) are computed through binomial approximations, we use the delta-method to give an explicit expression of σ, the standard deviation of a pattern statistic. This result is validated using simulations and a simple pattern study is also considered. Conclusion: We establish that the use of a high-order Markov model could easily lead to major mistakes due to the high sensitivity of pattern statistics to parameter estimation. PMID:17044916
Statistical Analysis of 30 Years Rainfall Data: A Case Study
NASA Astrophysics Data System (ADS)
Arvind, G.; Ashok Kumar, P.; Girish Karthi, S.; Suribabu, C. R.
2017-07-01
Rainfall is a prime input for various engineering designs such as hydraulic structures, bridges and culverts, canals, storm water sewers and road drainage systems. A detailed statistical analysis of each region is essential to estimate the relevant input values for the design and analysis of engineering structures and also for crop planning. A rain gauge station located in Trichy district, where agriculture is the prime occupation, is selected for statistical analysis. Daily rainfall data for a period of 30 years are used to characterize normal rainfall, deficit rainfall, excess rainfall and seasonal rainfall of the selected circle headquarters. Further, the various plotting position formulae available are used to evaluate the return period of monthly, seasonal and annual rainfall. This analysis will provide useful information for water resources planners, farmers and urban engineers to assess the availability of water and create storage accordingly. The mean, standard deviation and coefficient of variation of monthly and annual rainfall were calculated to check the rainfall variability. From the calculated results, the rainfall pattern is found to be erratic. The best-fit probability distribution was identified based on the minimum deviation between actual and estimated values. The results of the analysis were used to determine the proper onset and withdrawal of the monsoon, which is useful for land preparation and sowing.
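As an illustration of the plotting-position approach mentioned above, the sketch below estimates return periods of annual maximum rainfall with the Weibull formula T = (n + 1)/m, where m is the descending rank. The rainfall values are invented, and other plotting-position formulae (Gringorten, Hazen, etc.) differ only in how the rank is adjusted.

```python
import numpy as np

# Hypothetical annual maximum daily rainfall (mm) for a 10-year record.
annual_max_mm = np.array([112., 95., 143., 88., 160., 120., 105., 132., 99., 150.])

order = np.argsort(annual_max_mm)[::-1]          # sort descending
ranks = np.empty_like(order)
ranks[order] = np.arange(1, len(annual_max_mm) + 1)

n = len(annual_max_mm)
return_period = (n + 1) / ranks                  # Weibull plotting position

for value, T in sorted(zip(annual_max_mm, return_period), reverse=True):
    print(f"{value:6.1f} mm  ->  T = {T:4.1f} years")
```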
Sterile Basics of Compounding: Relationship Between Syringe Size and Dosing Accuracy.
Kosinski, Tracy M; Brown, Michael C; Zavala, Pedro J
2018-01-01
The purpose of this study was to investigate the accuracy and reproducibility of a 2-mL volume injection using a 3-mL and 10-mL syringe with pharmacy student compounders. An exercise was designed to assess each student's accuracy in compounding a sterile preparation with the correct 4-mg strength using a 3-mL and 10-mL syringe. The average ondansetron dose when compounded with the 3-mL syringe was 4.03 mg (standard deviation ± 0.45 mg), which was not statistically significantly different than the intended 4-mg desired dose (P=0.497). The average ondansetron dose when compounded with the 10-mL syringe was 4.18 mg (standard deviation ± 0.68 mg), which was statistically significantly different than the intended 4-mg desired dose (P=0.002). Additionally, there also was a statistically significant difference in the average ondansetron dose compounded using a 3-mL syringe (4.03 mg) and a 10-mL syringe (4.18 mg) (P=0.027). The accuracy and reproducibility of the 2-mL desired dose volume decreased as the compounding syringe size increased from 3 mL to 10 mL. Copyright © by International Journal of Pharmaceutical Compounding, Inc.
Ocular surface culture changes in patients after septoplasty.
Ozkiriş, Mahmut; Kapusuz Gencer, Zeliha; Kader, Ciğdem; Saydam, Levent
2014-01-01
To investigate the interrelationships between pre- and postoperative microbiological changes by taking samples from both eyes of 40 patients who underwent septoplasty due to septal deviation. Forty patients diagnosed with septal deviation who underwent a septoplasty operation under general anesthesia were enrolled in this study. The study was conducted on 40 patients who met the inclusion criteria and attended follow-up visits. One day before the operation and 48 h after the operation, cultures were taken individually from the conjunctivas and puncta of both eyes and sent to the microbiology laboratory. Patients who were candidates for nasal surgery because of their symptoms and clinical examination results were randomly selected, and 40 of these completed the study. No statistically significant differences in bacterial growth were observed between the eyes before the operation (P > 0.05). There were, however, statistically significant differences between the eyes in terms of bacterial growth in the postoperative period (P < 0.05). Pathogenic bacterial cultures were grown in 47 eyes in the postoperative period, and this finding was statistically significant. In the eye cultures, the most commonly isolated pathogens were S. epidermidis and S. aureus. Although the indicated microorganisms were grown in cultures from the patient groups, there were neither clinical symptoms nor signs related to ocular infections.
NASA Astrophysics Data System (ADS)
Kim, Ji-Soo; Han, Soo-Hyung; Ryang, Woo-Hun
2001-12-01
Electrical resistivity mapping was conducted to delineate the boundaries and architecture of the Cretaceous Eumsung Basin. Basin boundaries are effectively identified in electrical dipole-dipole resistivity sections as bands of high resistivity contrast. The high resistivities most likely originate from the basement of Jurassic granite and Precambrian gneiss, contrasting with the lower resistivities of the infilled sedimentary rocks. The electrical properties of the basin-margin boundaries are compatible with the results of vertical electrical soundings and very-low-frequency electromagnetic surveys. A statistical analysis of the resistivity sections in terms of standard deviation is tested and found to be an effective scheme for the subsurface reconstruction of basin architecture as well as for the surface demarcation of basin-margin faults and brittle fracture zones, which are characterized by much higher standard deviation. A pseudo three-dimensional architecture of the basin is delineated by integrating the composite resistivity structure information from two cross-basin E-W magnetotelluric lines and the dipole-dipole resistivity lines. Based on the statistical analysis, the maximum depth of the basin varies from about 1 km in the northern part to 3 km or more in the middle part. This strong variation supports the view that the basin experienced pull-apart opening with rapid subsidence of the central blocks and asymmetric cross-basinal extension.
Spectral theory of extreme statistics in birth-death systems
NASA Astrophysics Data System (ADS)
Meerson, Baruch
2008-03-01
Statistics of rare events, or large deviations, in chemical reactions and systems of birth-death type have attracted a great deal of interest in many areas of science including cell biochemistry, astrochemistry, epidemiology, population biology, etc. Large deviations become of vital importance when the discrete (non-continuum) nature of a population of "particles" (molecules, bacteria, cells, animals or even humans) and the stochastic character of interactions can drive the population to extinction. I will briefly review the novel spectral method [1-3] for calculating the extreme statistics of a broad class of birth-death processes and reactions involving a single species. The spectral method combines the probability generating function formalism with the Sturm-Liouville theory of linear differential operators. It involves a controlled perturbative treatment based on a natural large parameter of the problem: the average number of particles/individuals in a stationary or metastable state. For extinction (first passage) problems the method yields accurate results for the extinction statistics and for the quasi-stationary probability distribution, including the tails, of metastable states. I will demonstrate the power of the method on the example of a branching and annihilation reaction, A → 2A, A → ∅, representative of a rather general class of processes. [1] M. Assaf and B. Meerson, Phys. Rev. Lett. 97, 200602 (2006). [2] M. Assaf and B. Meerson, Phys. Rev. E 74, 041115 (2006). [3] M. Assaf and B. Meerson, Phys. Rev. E 75, 031122 (2007).
Analysis of Longitudinal Outcome Data with Missing Values in Total Knee Arthroplasty.
Kang, Yeon Gwi; Lee, Jang Taek; Kang, Jong Yeal; Kim, Ga Hye; Kim, Tae Kyun
2016-01-01
We sought to determine the influence of missing data on statistical results, and to determine which statistical method is most appropriate for the analysis of longitudinal outcome data of total knee arthroplasty (TKA) with missing values: repeated measures ANOVA, generalized estimating equations (GEE), or mixed effects model repeated measures (MMRM). Data sets with missing values were generated with different proportions of missing data, sample sizes and missing-data generation mechanisms. Each data set was analyzed with the three statistical methods. The influence of missing data was greater with a higher proportion of missing data and a smaller sample size. MMRM tended to show the smallest changes in the statistics. When missing values were generated by a 'missing not at random' mechanism, no statistical method could fully avoid deviations in the results. Copyright © 2016 Elsevier Inc. All rights reserved.
A Deterministic Annealing Approach to Clustering AIRS Data
NASA Technical Reports Server (NTRS)
Guillaume, Alexandre; Braverman, Amy; Ruzmaikin, Alexander
2012-01-01
We will examine the validity of means and standard deviations as a basis for climate data products. We will explore the conditions under which these two simple statistics are inadequate summaries of the underlying empirical probability distributions by contrasting them with a nonparametric method called the deterministic annealing technique.
Development of a technique for estimating noise covariances using multiple observers
NASA Technical Reports Server (NTRS)
Bundick, W. Thomas
1988-01-01
Friedland's technique for estimating the unknown noise variances of a linear system using multiple observers has been extended by developing a general solution for the estimates of the variances, developing the statistics (mean and standard deviation) of these estimates, and demonstrating the solution on two examples.
A New Look at Bias in Aptitude Tests.
ERIC Educational Resources Information Center
Scheuneman, Janice Dowd
1981-01-01
Statistical bias in measurement and ethnic-group bias in testing are discussed, reviewing predictive and construct validity studies. Item bias is reconceptualized to include distance of item content from respondent's experience. Differing values of mean and standard deviation for bias parameter are analyzed in a simulation. References are…
A Coherent VLSI Design Environment.
1985-09-30
deviation were only a few percent. If the number of paths with a delay close to 9 ns were large, even more statistical accuracy would be required.
SU-F-I-10: Spatially Local Statistics for Adaptive Image Filtering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iliopoulos, AS; Sun, X; Floros, D
Purpose: To facilitate adaptive image filtering operations, addressing spatial variations in both noise and signal. Such issues are prevalent in cone-beam projections, where physical effects such as X-ray scattering result in spatially variant noise, violating common assumptions of homogeneous noise and challenging conventional filtering approaches to signal extraction and noise suppression. Methods: We present a computational mechanism for probing into and quantifying the spatial variance of noise throughout an image. The mechanism builds a pyramid of local statistics at multiple spatial scales; local statistical information at each scale includes (weighted) mean, median, standard deviation, median absolute deviation, as well as histogram or dynamic range after local mean/median shifting. Based on inter-scale differences of local statistics, the spatial scope of distinguishable noise variation is detected in a semi- or un-supervised manner. Additionally, we propose and demonstrate the incorporation of such information in globally parametrized (i.e., non-adaptive) filters, effectively transforming the latter into spatially adaptive filters. The multi-scale mechanism is materialized by efficient algorithms and implemented in parallel CPU/GPU architectures. Results: We demonstrate the impact of local statistics for adaptive image processing and analysis using cone-beam projections of a Catphan phantom, fitted within an annulus to increase X-ray scattering. The effective spatial scope of local statistics calculations is shown to vary throughout the image domain, necessitating multi-scale noise and signal structure analysis. Filtering results with and without spatial filter adaptation are compared visually, illustrating improvements in imaging signal extraction and noise suppression, and in preserving information in low-contrast regions. Conclusion: Local image statistics can be incorporated in filtering operations to equip them with spatial adaptivity to spatial signal/noise variations. An efficient multi-scale computational mechanism is developed to curtail processing latency. Spatially adaptive filtering may impact subsequent processing tasks such as reconstruction and numerical gradient computations for deformable registration. NIH Grant No. R01-184173.
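A minimal sketch of the local-statistics idea described above: compute local mean, standard deviation and median at several window sizes on a synthetic image with spatially variant noise. A real implementation would add weighted statistics, median absolute deviation, dynamic range, and the GPU parallelism mentioned in the abstract; the image here is simulated, not a cone-beam projection.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
image = rng.normal(loc=100.0, scale=10.0, size=(128, 128))
image[:, 64:] += rng.normal(scale=25.0, size=(128, 64))   # spatially variant noise

def local_stats(img, size):
    """Local mean, standard deviation and median in a size x size window."""
    mean = ndimage.uniform_filter(img, size)
    sq_mean = ndimage.uniform_filter(img * img, size)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
    median = ndimage.median_filter(img, size)
    return mean, std, median

for size in (5, 11, 21):                        # pyramid of spatial scales
    mean, std, median = local_stats(image, size)
    print(f"scale {size:2d}: left-half std ~ {std[:, :64].mean():5.1f}, "
          f"right-half std ~ {std[:, 64:].mean():5.1f}")
```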
Barker, C.E.; Pawlewicz, M.J.
1993-01-01
In coal samples, published recommendations based on statistical methods suggest 100 measurements are needed to estimate the mean random vitrinite reflectance (Rv-r) to within ±2%. Our survey of published thermal maturation studies indicates that those using dispersed organic matter (DOM) mostly have an objective of acquiring 50 reflectance measurements. This smaller objective size in DOM versus that for coal samples poses a statistical contradiction, because the standard deviations of DOM reflectance distributions are typically larger, indicating a greater sample size is needed to accurately estimate Rv-r in DOM. However, in studies of thermal maturation using DOM, even 50 measurements can be an unrealistic requirement given the small amount of vitrinite often found in such samples. Furthermore, there is generally a reduced need for assuring precision like that needed for coal applications. Therefore, a key question in thermal maturation studies using DOM is how many measurements of Rv-r are needed to adequately estimate the mean. Our empirical approach to this problem is to compute the reflectance distribution statistics (mean, standard deviation, skewness, and kurtosis) in increments of 10 measurements. This study compares these intermediate computations of Rv-r statistics with a final one computed using all measurements for that sample. Vitrinite reflectance was measured on mudstone and sandstone samples taken from borehole M-25 in the Cerro Prieto, Mexico, geothermal system, which was selected because the rocks have a wide range of thermal maturation and a comparable humic DOM with depth. The results of this study suggest that after only 20-30 measurements the mean Rv-r is generally known to within 5% and always to within 12% of the mean Rv-r calculated using all of the measured particles. Thus, even in the worst case, the precision after measuring only 20-30 particles is in good agreement with the general precision of one decimal place recommended for mean Rv-r measurements on DOM. The coefficient of variation (V = standard deviation/mean) is proposed as a statistic to indicate the reliability of the mean Rv-r estimates made at n ≈ 20. This preliminary study suggests that a V > 0.2 indicates an unreliable mean in such small samples. © 1993.
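The incremental procedure described above is easy to emulate: recompute the mean and the coefficient of variation V = standard deviation/mean in increments of 10 measurements and compare each intermediate mean with the final one. The reflectance values below are simulated, not real DOM data.

```python
import numpy as np

rng = np.random.default_rng(11)
rv_r = rng.normal(loc=0.85, scale=0.12, size=60)    # hypothetical % Rv-r measurements

final_mean = rv_r.mean()
for n in range(10, len(rv_r) + 1, 10):
    subset = rv_r[:n]
    mean, sd = subset.mean(), subset.std(ddof=1)
    v = sd / mean                                    # coefficient of variation
    err = 100 * abs(mean - final_mean) / final_mean  # deviation from final mean (%)
    print(f"n={n:2d}  mean={mean:.3f}  V={v:.2f}  deviation from final mean={err:.1f}%")
```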
Sedaghat, Ahmad R; Kieff, David A; Bergmark, Regan W; Cunnane, Mary E; Busaba, Nicolas Y
2015-03-01
Performance of septoplasty is dependent on objective evidence of nasal septal deviation. Although physical examination including anterior rhinoscopy and endoscopic examination is the gold standard for evaluation of septal deviation, third-party payors' reviews of septoplasty claims are often made on computed tomography (CT) findings. However, the correlation between radiographic evaluation of septal deviation and physical examination findings is unknown. Retrospective, blinded, independent evaluation of septal deviation in 39 consecutive patients from physical examination, including anterior rhinoscopy and endoscopic examination, by an otolaryngologist and radiographic evaluation of sinus CT scan by a neuroradiologist. Four distinct septal locations (nasal valve, cartilaginous, inferior/maxillary crest and osseous septum) were evaluated on a 4-point scale representing (1) 0% to 25%, (2) >25% to 50%, (3) >50% to 75%, and (4) >75% obstruction. Correlation between physical examination and radiographic evaluations was made by Pearson's correlation and quantitative agreement assessed by Krippendorff's alpha. Statistically significant correlation was detected between physical examination including nasal endoscopy and radiographic assessment of septal deviation only at the osseous septum (p = 0.007, r = 0.425) with low quantitative agreement (α = 0.290). No significant correlation was detected at the cartilaginous septum (p = 0.286, r = 0.175), inferior septum (p = 0.117, r = 0.255), or nasal valve (p = 0.174, r = 0.222). Quantitative agreement at the nasal valve suggested a bias in CT to underestimate physical exam findings (α = -0.490). CT is a poor substitute for physical examination, the gold standard, in assessment of septal deviation. Clinical decisions about pursuit of septoplasty or third-party payors' decisions to approve septoplasty should not be made on radiographic evidence. © 2014 ARS-AAOA, LLC.
Test-retest reliability of 3D ultrasound measurements of the thoracic spine.
Fölsch, Christian; Schlögel, Stefanie; Lakemeier, Stefan; Wolf, Udo; Timmesfeld, Nina; Skwara, Adrian
2012-05-01
To explore the reliability of the Zebris CMS 20 ultrasound analysis system with pointer application for measuring end-range flexion, end-range extension, and neutral kyphosis angle of the thoracic spine. The study was performed within the School of Physiotherapy in cooperation with the Orthopedic Department at a University Hospital. The thoracic spines of 28 healthy subjects were measured. Measurements for neutral kyphosis angle, end-range flexion, and end-range extension were taken once at each time point. The bone landmarks were palpated by one examiner and marked with a pointer containing 2 transmitters using a frequency of 40 kHz. A third transmitter was fixed to the pelvis, and 3 microphones were used as receivers. The real angle was calculated by the software. Bland-Altman plots with 95% limits of agreement, intraclass correlations (ICC), standard deviations of mean measurements, and standard error of measurements were used for statistical analyses. The test-retest reliability in this study was measured within a 24-hour interval. Statistical parameters were used to judge reliability. The mean kyphosis angle was 44.8° with a standard deviation of 17.3° at the first measurement and a mean of 45.8° with a standard deviation of 16.2° the following day. The ICC was high at 0.95 for the neutral kyphosis angle, and the Bland-Altman 95% limits of agreement were within clinically acceptable margins. The ICC was 0.71 for end-range flexion and 0.34 for end-range extension, whereas the Bland-Altman 95% limits of agreement were wider than with the static measurement of kyphosis. Compared with static measurements, the analysis of motion with 3-dimensional ultrasound showed an increased standard deviation for test-retest measurements. The test-retest reliability of ultrasound measuring of the neutral kyphosis angle of the thoracic spine was demonstrated within 24 hours. Bland-Altman 95% limits of agreement and the standard deviation of differences did not appear to be clinically acceptable for measuring flexion and extension. Copyright © 2012 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
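For readers unfamiliar with the two reliability statistics used above, the sketch below computes Bland-Altman 95% limits of agreement and a two-way ICC from a pair of test-retest sessions; the ICC(2,1) absolute-agreement form and the simulated angles are assumptions, since the abstract does not specify the ICC variant or provide raw data.

```python
# Sketch of the test-retest statistics referenced above: Bland-Altman 95% limits
# of agreement and a two-way ICC. ICC(2,1) (absolute agreement) is assumed here;
# the abstract does not state which ICC form was used. Data are illustrative.
import numpy as np

rng = np.random.default_rng(1)
day1 = rng.normal(44.8, 17.3, 28)            # neutral kyphosis angle, session 1
day2 = day1 + rng.normal(0.0, 4.0, 28)       # session 2, 24 h later

# Bland-Altman limits of agreement
diff = day2 - day1
bias, sd_diff = diff.mean(), diff.std(ddof=1)
loa = (bias - 1.96 * sd_diff, bias + 1.96 * sd_diff)

# ICC(2,1) from two-way ANOVA mean squares
x = np.column_stack([day1, day2])
n, k = x.shape
grand = x.mean()
ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)
ss_err = ((x - grand) ** 2).sum() - (n - 1) * ms_rows - (k - 1) * ms_cols
ms_err = ss_err / ((n - 1) * (k - 1))
icc = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

print(f"bias={bias:.2f}, 95% LoA=({loa[0]:.2f}, {loa[1]:.2f}), ICC(2,1)={icc:.2f}")
```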
Baek, Chaehwan; Paeng, Jun-Young; Lee, Janice S; Hong, Jongrak
2012-05-01
A systematic classification is needed for the diagnosis and surgical treatment of facial asymmetry. The purposes of this study were to analyze the skeletal structures of patients with facial asymmetry and to objectively classify these patients into groups according to these structural characteristics. Patients with facial asymmetry and recent computed tomographic images from 2005 through 2009 were included in this study, which was approved by the institutional review board. Linear measurements, angles, and reference planes on 3-dimensional computed tomograms were obtained, including maxillary (upper midline deviation, maxilla canting, and arch form discrepancy) and mandibular (menton deviation, gonion to midsagittal plane, ramus height, and frontal ramus inclination) measurements. All measurements were analyzed using paired t tests with Bonferroni correction followed by K-means cluster analysis using SPSS 13.0 to determine an objective classification of facial asymmetry in the enrolled patients. Kruskal-Wallis test was performed to verify differences among clustered groups. P < .05 was considered statistically significant. Forty-three patients (18 male, 25 female) were included in the study. They were classified into 4 groups based on cluster analysis. Their mean age was 24.3 ± 4.4 years. Group 1 included subjects (44% of patients) with asymmetry caused by a shift or lateralization of the mandibular body. Group 2 included subjects (39%) with a significant difference between the left and right ramus height with menton deviation to the short side. Group 3 included subjects (12%) with atypical asymmetry, including deviation of the menton to the short side, prominence of the angle/gonion on the larger side, and reverse maxillary canting. Group 4 included subjects (5%) with severe maxillary canting, ramus height differences, and menton deviation to the short side. In this study, patients with asymmetry were classified into 4 statistically distinct groups according to their anatomic features. This diagnostic classification method will assist in treatment planning for patients with facial asymmetry and may be used to explore the etiology of these variants of facial asymmetry. Copyright © 2012 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Non-specific filtering of beta-distributed data.
Wang, Xinhui; Laird, Peter W; Hinoue, Toshinori; Groshen, Susan; Siegmund, Kimberly D
2014-06-19
Non-specific feature selection is a dimension reduction procedure performed prior to cluster analysis of high dimensional molecular data. Not all measured features are expected to show biological variation, so only the most varying are selected for analysis. In DNA methylation studies, DNA methylation is measured as a proportion, bounded between 0 and 1, with variance a function of the mean. Filtering on standard deviation biases the selection of probes to those with mean values near 0.5. We explore the effect this has on clustering, and develop alternate filter methods that utilize a variance stabilizing transformation for Beta distributed data and do not share this bias. We compared results for 11 different non-specific filters on eight Infinium HumanMethylation data sets, selected to span a variety of biological conditions. We found that for data sets having a small fraction of samples showing abnormal methylation of a subset of normally unmethylated CpGs, a characteristic of the CpG island methylator phenotype in cancer, a novel filter statistic that utilized a variance-stabilizing transformation for Beta distributed data outperformed the common filter of using standard deviation of the DNA methylation proportion, or its log-transformed M-value, in its ability to detect the cancer subtype in a cluster analysis. However, the standard deviation filter always performed among the best for distinguishing subgroups of normal tissue. The novel filter and standard deviation filter tended to favour features in different genome contexts; for the same data set, the novel filter always selected more features from CpG island promoters and the standard deviation filter always selected more features from non-CpG island intergenic regions. Interestingly, despite selecting largely non-overlapping sets of features, the two filters did find sample subsets that overlapped for some real data sets. We found two different filter statistics that tended to prioritize features with different characteristics, each performed well for identifying clusters of cancer and non-cancer tissue, and identifying a cancer CpG island hypermethylation phenotype. Since cluster analysis is for discovery, we would suggest trying both filters on any new data sets, evaluating the overlap of features selected and clusters discovered.
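The contrast between the two filters discussed above can be sketched as follows; the arcsine square-root transform is used here as a generic variance stabilizer for proportions and stands in for the paper's filter statistic, which may differ in detail, and the beta values are simulated.

```python
# Sketch of non-specific filtering of beta values (features x samples). The arcsine
# square-root transform stands in for "a variance-stabilizing transformation for
# Beta distributed data"; the paper's exact filter statistic may differ.
import numpy as np

rng = np.random.default_rng(2)
beta = rng.beta(a=0.5, b=0.5, size=(5000, 40))          # illustrative methylation proportions

sd_raw = beta.std(axis=1, ddof=1)                        # standard deviation filter
sd_vst = np.arcsin(np.sqrt(beta)).std(axis=1, ddof=1)    # filter after the VST

top = 1000
keep_raw = set(np.argsort(sd_raw)[-top:])
keep_vst = set(np.argsort(sd_vst)[-top:])
print(f"overlap of selected features: {len(keep_raw & keep_vst)} / {top}")
```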
Estimation of social value of statistical life using willingness-to-pay method in Nanjing, China.
Yang, Zhao; Liu, Pan; Xu, Xin
2016-10-01
Rational decision making regarding the safety related investment programs greatly depends on the economic valuation of traffic crashes. The primary objective of this study was to estimate the social value of statistical life in the city of Nanjing in China. A stated preference survey was conducted to investigate travelers' willingness to pay for traffic risk reduction. Face-to-face interviews were conducted at stations, shopping centers, schools, and parks in different districts in the urban area of Nanjing. The respondents were categorized into two groups, including motorists and non-motorists. Both the binary logit model and mixed logit model were developed for the two groups of people. The results revealed that the mixed logit model is superior to the fixed coefficient binary logit model. The factors that significantly affect people's willingness to pay for risk reduction include income, education, gender, age, drive age (for motorists), occupation, whether the charged fees were used to improve private vehicle equipment (for motorists), reduction in fatality rate, and change in travel cost. The Monte Carlo simulation method was used to generate the distribution of value of statistical life (VSL). Based on the mixed logit model, the VSL had a mean value of 3,729,493 RMB ($586,610) with a standard deviation of 2,181,592 RMB ($343,142) for motorists; and a mean of 3,281,283 RMB ($505,318) with a standard deviation of 2,376,975 RMB ($366,054) for non-motorists. Using the tax system to illustrate the contribution of different income groups to social funds, the social value of statistical life was estimated. The average social value of statistical life was found to be 7,184,406 RMB ($1,130,032). Copyright © 2016 Elsevier Ltd. All rights reserved.
Revazov, A A; Pasekov, V P; Lukasheva, I D
1975-01-01
The paper deals with the distribution of genetic markers (systems ABO, MN, Rh (D), Hp, PTC) and a number of anthropogenetic characters (arm folding, hand clasping, tongue rolling, right- and left-handedness, type of ear lobe, and types of dermatoglyphic patterns) in the inhabitants of 6 villages in the Mezen District of the Archangelsk Region of the RSFSR (river Peosa basin). The data presented in this work were obtained in the course of examination of over 800 persons. Differences in the interpretation of the results of generally adopted methods of statistical analysis of samples from small populations are discussed. Among the systems analysed, in one third of all cases there was a statistically significant deviation from Hardy-Weinberg ratios. For the MN blood groups and haptoglobins this was caused by an excess of heterozygotes. The test of Hardy-Weinberg ratios at the level of two-locus phenotypes revealed no statistically significant deviations either in separate villages or in all the villages taken together. The analysis of heterogeneity with respect to markers inherited according to Mendel's law revealed statistically significant differences between villages in all the systems except haptoglobins. A considerable heterogeneity was found in the distribution of family names, with the frequencies of some of them varying from village to village from 0 to 90%. Statistically significant differences between villages were shown for all the anthropogenetic characters except arm folding, hand clasping and right-left-handedness. Considering the uniformity of the environmental pressure in the region examined, the heterogeneity of the population studied is apparently associated with random genetic differentiation (genetic drift) and, possibly, with a founder (progenitor) effect.
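The single-locus Hardy-Weinberg check mentioned above amounts to comparing observed genotype counts with those expected from the estimated allele frequency; the sketch below does this for a generic two-allele system such as MN, with purely illustrative counts chosen so that heterozygotes are in excess.

```python
# Sketch of a single-locus Hardy-Weinberg test for a two-allele system (e.g. MN).
# Genotype counts are illustrative, not the study's data.
import numpy as np
from scipy.stats import chi2

obs = np.array([110, 162, 38])                 # counts of MM, MN, NN
n = obs.sum()
p = (2 * obs[0] + obs[1]) / (2 * n)            # frequency of allele M
exp = n * np.array([p**2, 2 * p * (1 - p), (1 - p)**2])

chi_sq = ((obs - exp) ** 2 / exp).sum()
p_value = chi2.sf(chi_sq, df=1)                # 3 classes - 1 estimated frequency - 1
print(f"chi-square={chi_sq:.2f}, p={p_value:.3f}")
# An excess of heterozygotes shows up as obs[1] > exp[1].
```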
NASA Astrophysics Data System (ADS)
Santillán, David; Mosquera, Juan-Carlos; Cueto-Felgueroso, Luis
2017-11-01
Hydraulic fracture trajectories in rocks and other materials are highly affected by spatial heterogeneity in their mechanical properties. Understanding the complexity and structure of fluid-driven fractures and their deviation from the predictions of homogenized theories is a practical problem in engineering and geoscience. We conduct a Monte Carlo simulation study to characterize the influence of heterogeneous mechanical properties on the trajectories of hydraulic fractures propagating in elastic media. We generate a large number of random fields of mechanical properties and simulate pressure-driven fracture propagation using a phase-field model. We model the mechanical response of the material as that of an elastic isotropic material with heterogeneous Young modulus and Griffith energy release rate, assuming that fractures propagate in the toughness-dominated regime. Our study shows that the variance and the spatial covariance of the mechanical properties are controlling factors in the tortuousness of the fracture paths. We characterize the deviation of fracture paths from the homogenous case statistically, and conclude that the maximum deviation grows linearly with the distance from the injection point. Additionally, fracture path deviations seem to be normally distributed, suggesting that fracture propagation in the toughness-dominated regime may be described as a random walk.
Cosmological implications of a large complete quasar sample.
Segal, I E; Nicoll, J F
1998-04-28
Objective and reproducible determinations of the probabilistic significance levels of the deviations between theoretical cosmological prediction and direct model-independent observation are made for the Large Bright Quasar Sample [Foltz, C., Chaffee, F. H., Hewett, P. C., MacAlpine, G. M., Turnshek, D. A., et al. (1987) Astron. J. 94, 1423-1460]. The Expanding Universe model as represented by the Friedmann-Lemaître cosmology with parameters q0 = 0, Λ = 0, denoted as C1, and chronometric cosmology (no relevant adjustable parameters), denoted as C2, are the cosmologies considered. The mean and the dispersion of the apparent magnitudes and the slope of the apparent magnitude-redshift relation are the directly observed statistics predicted. The C1 predictions of these cosmology-independent quantities are deviant by as much as 11σ from direct observation; none of the C2 predictions deviate by >2σ. The C1 deviations may be reconciled with theory by the hypothesis of quasar "evolution," which, however, appears incapable of being substantiated through direct observation. The excellent quantitative agreement of the C1 deviations with those predicted by C2 without adjustable parameters for the results of analysis predicated on C1 indicates that the evolution hypothesis may well be a theoretical artifact.
Foster, Ken; Anwar, Nasim; Pogue, Rhea; Morré, Dorothy M.; Keenan, T. W.; Morré, D. James
2003-01-01
Seasonal decomposition analyses were applied to the statistical evaluation of an oscillating plasma membrane NADH oxidase activity with a temperature-compensated period of 24 min. The decomposition fits were used to validate the cyclic oscillatory pattern. Three measured values, the average percentage error (MAPE), a measure of the periodic oscillation; the mean average deviation (MAD), a measure of the absolute average deviations from the fitted values; and the mean standard deviation (MSD), the measure of standard deviation from the fitted values, together with R-squared and the Henriksson-Merton p value, were used to evaluate accuracy. Decomposition was carried out by fitting a trend line to the data, then detrending the data, if necessary, by subtracting the trend component. The data, with or without detrending, were then smoothed by subtracting a centered moving average of length equal to the period length determined by Fourier analysis. Finally, the time series were decomposed into cyclic and error components. The findings not only validate the periodic nature of the major oscillations but suggest, as well, that the minor intervening fluctuations also recur within each period with a reproducible pattern of recurrence. PMID:19330112
Estimation of Tooth Size Discrepancies among Different Malocclusion Groups
Bala, Madhu; Goyal, Virender
2014-01-01
ABSTRACT Regards and Tribute: Late Dr Narender Hasija was a mentor and visionary in the light of knowledge and experience. We pay our regards with deepest gratitude to the departed soul to rest in peace. Bolton’s ratios help in estimating overbite and overjet relationships, the effects of contemplated extractions on posterior occlusion, incisor relationships and identification of occlusal misfit produced by tooth size discrepancies. Aim: To determine any difference in tooth size discrepancy in the anterior as well as the overall ratio in different malocclusions and to compare the results with Bolton’s study. Materials and methods: After measuring the teeth on all 100 patients, Bolton’s analysis was performed. Results were compared with Bolton’s means and standard deviations. The results were also subjected to statistical analysis. Results show that the means and standard deviations of ideal occlusion cases are comparable with those of Bolton but, when the means and standard deviations of the malocclusion groups are compared with those of Bolton, the values of the standard deviation are higher, though the means are comparable. How to cite this article: Hasija N, Bala M, Goyal V. Estimation of Tooth Size Discrepancies among Different Malocclusion Groups. Int J Clin Pediatr Dent 2014;7(2):82-85. PMID:25356005
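For context, Bolton's analysis compares summed mandibular and maxillary mesiodistal tooth widths; the sketch below computes the overall and anterior ratios, with the tooth widths being illustrative values rather than data from this study (Bolton's commonly cited reference means are 91.3% overall and 77.2% anterior).

```python
# Sketch of Bolton's analysis: the overall ratio uses the 12 mandibular vs 12 maxillary
# tooth widths (first molar to first molar), the anterior ratio uses the 6 anterior teeth.
# Widths below are illustrative; Bolton's reference means are 91.3% and 77.2%.
def bolton_ratios(mand_widths, max_widths):
    overall = 100.0 * sum(mand_widths) / sum(max_widths)
    anterior = 100.0 * sum(mand_widths[:6]) / sum(max_widths[:6])
    return overall, anterior

# first 6 entries = anterior teeth (canine to canine), remaining 6 = posterior
mandibular = [5.3, 5.4, 6.1, 6.2, 6.9, 7.0, 7.1, 7.2, 7.3, 7.4, 11.0, 11.1]
maxillary  = [8.5, 8.6, 6.6, 6.7, 7.9, 8.0, 6.9, 7.0, 6.8, 6.9, 10.4, 10.5]

overall, anterior = bolton_ratios(mandibular, maxillary)
print(f"overall ratio = {overall:.1f}% (Bolton mean 91.3%), "
      f"anterior ratio = {anterior:.1f}% (Bolton mean 77.2%)")
```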
WASP (Write a Scientific Paper) using Excel - 7: The t-distribution.
Grech, Victor
2018-03-01
The calculation of descriptive statistics after data collection provides researchers with an overview of the shape and nature of their datasets, along with basic descriptors, and may help identify true or incorrect outlier values. This exercise should always precede inferential statistics, when possible. This paper provides some pointers for doing so in Microsoft Excel, both statically and dynamically, with Excel's functions, including the calculation of standard deviation and variance and the relevance of the t-distribution. Copyright © 2018 Elsevier B.V. All rights reserved.
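The same descriptive statistics and t-based interval that the paper builds in Excel can be sketched as follows; the data values are illustrative, and the Excel worksheet functions mentioned in the comments (STDEV.S, VAR.S) are the usual spreadsheet equivalents rather than anything taken from the paper.

```python
# Python sketch of the descriptive statistics and t-based interval discussed above
# (the paper itself works in Excel; the data here are illustrative).
import numpy as np
from scipy import stats

data = np.array([4.1, 3.8, 5.2, 4.7, 4.0, 6.3, 4.4, 3.9, 5.1, 4.6])
n = data.size
mean = data.mean()
sd = data.std(ddof=1)                    # sample standard deviation (Excel STDEV.S)
var = data.var(ddof=1)                   # sample variance (Excel VAR.S)
t_crit = stats.t.ppf(0.975, df=n - 1)    # 95% two-sided critical value
ci = (mean - t_crit * sd / np.sqrt(n), mean + t_crit * sd / np.sqrt(n))
print(f"mean={mean:.2f}, sd={sd:.2f}, var={var:.2f}, 95% CI=({ci[0]:.2f}, {ci[1]:.2f})")
```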
Quality Control of the Print with the Application of Statistical Methods
NASA Astrophysics Data System (ADS)
Simonenko, K. V.; Bulatova, G. S.; Antropova, L. B.; Varepo, L. G.
2018-04-01
The basis for standardizing the offset printing process is the control of print quality indicators. This problem can be approached in several ways, the most important of which are statistical methods; their practical implementation for managing the quality of the printing process is the subject of this paper. The paper shows how a control chart can be constructed to identify the causes of deviations in optical density for a triad of inks in offset printing.
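A minimal sketch of the kind of control chart referred to above is given below for ink optical density; the individuals-chart convention (sigma estimated from the average moving range divided by d2 = 1.128, limits at mean ± 3 sigma) and the density readings are assumptions for illustration, not the paper's data.

```python
# Sketch of an individuals control chart for ink optical density (illustrative data).
# Sigma is estimated from the average moving range divided by d2 = 1.128, the usual
# convention for individuals charts; limits are mean +/- 3 sigma.
import numpy as np

density = np.array([1.42, 1.45, 1.44, 1.41, 1.47, 1.43, 1.52, 1.44, 1.40, 1.46])
center = density.mean()
moving_range = np.abs(np.diff(density))
sigma_hat = moving_range.mean() / 1.128
ucl, lcl = center + 3 * sigma_hat, center - 3 * sigma_hat

for i, d in enumerate(density, start=1):
    flag = "OUT OF CONTROL" if (d > ucl or d < lcl) else ""
    print(f"print {i:2d}: density={d:.2f} {flag}")
print(f"CL={center:.3f}, UCL={ucl:.3f}, LCL={lcl:.3f}")
```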
Simulation Study Using a New Type of Sample Variance
NASA Technical Reports Server (NTRS)
Howe, D. A.; Lainson, K. J.
1996-01-01
We evaluate with simulated data a new type of sample variance for the characterization of frequency stability. The new statistic (referred to as TOTALVAR and its square root TOTALDEV) is a better predictor of long-term frequency variations than the present sample Allan deviation. The statistical model uses the assumption that a time series of phase or frequency differences is wrapped (periodic) with overall frequency difference removed. We find that the variability at long averaging times is reduced considerably for the five models of power-law noise commonly encountered with frequency standards and oscillators.
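As background for the comparison above, the sketch below computes the ordinary (non-overlapping) Allan deviation that TOTALDEV is measured against; the wrapped-data construction of TOTALVAR itself is not reproduced, and the white-noise input is simulated.

```python
# Sketch of the (non-overlapping) Allan deviation, the baseline statistic that
# TOTALDEV is compared against; the TOTALDEV wrapping construction is not shown.
# Input: fractional frequency samples at spacing tau0.
import numpy as np

def allan_deviation(y, m):
    """Allan deviation at averaging factor m (tau = m * tau0)."""
    n_groups = len(y) // m
    y_bar = y[:n_groups * m].reshape(n_groups, m).mean(axis=1)
    avar = 0.5 * np.mean(np.diff(y_bar) ** 2)
    return np.sqrt(avar)

rng = np.random.default_rng(3)
y = rng.normal(0.0, 1e-12, 100_000)          # illustrative white frequency noise
for m in (1, 10, 100, 1000):
    print(f"m={m:5d}  ADEV={allan_deviation(y, m):.3e}")
```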
Weinstein, Ronald S; Krupinski, Elizabeth A; Weinstein, John B; Graham, Anna R; Barker, Gail P; Erps, Kristine A; Holtrust, Angelette L; Holcomb, Michael J
2016-01-01
A medical school general pathology course has been reformatted into a K-12 general pathology course. This new course has been implemented at a series of 7 to 12 grade levels and the student outcomes compared. Typically, topics covered mirrored those in a medical school general pathology course serving as an introduction to the mechanisms of diseases. Assessment of student performance was based on their score on a multiple-choice final examination modeled after an examination given to medical students. Two Tucson area schools, in a charter school network, participated in the study. Statistical analysis of examination performances showed that there were no significant differences as a function of school ( F = 0.258, P = .6128), with students at school A having an average test score of 87.03 (standard deviation = 8.99) and school B 86.00 (standard deviation = 8.18; F = 0.258, P = .6128). Analysis of variance was also conducted on the test scores as a function of gender and class grade. There were no significant differences as a function of gender ( F = 0.608, P = .4382), with females having an average score of 87.18 (standard deviation = 7.24) and males 85.61 (standard deviation = 9.85). There were also no significant differences as a function of grade level ( F = 0.627, P = .6003), with 7th graders having an average of 85.10 (standard deviation = 8.90), 8th graders 86.00 (standard deviation = 9.95), 9th graders 89.67 (standard deviation = 5.52), and 12th graders 86.90 (standard deviation = 7.52). The results demonstrated that middle and upper school students performed equally well in K-12 general pathology. Student course evaluations showed that the course met the student's expectations. One class voted K-12 general pathology their "elective course-of-the-year."
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brock, K; Lee, C; Samuels, S
Purpose: Tools are now available to perform daily dose assessment in radiotherapy; however, guidance is lacking as to when to replan to limit increase in normal tissue dose. This work performs statistical analysis to provide guidance for when adaptive replanning may be necessary for head/neck (HN) patients. Methods: Planning CT and daily kVCBCT images for 50 HN patients treated with VMAT were retrospectively evaluated. Twelve of 50 patients were replanned due to anatomical changes noted over their RT course. Daily dose assessment was performed to calculate the variation between the planned and delivered dose for the 38 patients not replanned and the patients replanned using their delivered plan. In addition, for the replanned patients, the dose that would have been delivered if the plan was not modified was also quantified. Deviations in dose were analyzed before and after replanning, the daily variations in patients who were not replanned assessed, and the predictive power of the deviation after 1, 5, and 15 fractions determined. Results: Dose deviations were significantly reduced following replanning, compared to if the original plan would have been delivered for the entire course. Early deviations were significantly correlated with total deviations (p<0.01). Using the criterion that a 10% increase in the final delivered dose indicates a replan may be needed earlier in the treatment course, the following guidelines can be made with a 90% specificity after the first 5 fractions: deviations of 7% in the mean dose to the inferior constrictors and 5% in the mean dose to the parotid glands and submandibular glands. No significant dose deviations were observed in any patients for the CTV-70Gy (max deviation 4%). Conclusions: A 5–7% increase in mean dose to normal tissues within the first 5 fractions strongly correlates with an overall deviation in the delivered dose for HN patients. This work is funded in part by NIH 2P01CA059827-16.
Flexner 3.0—Democratization of Medical Knowledge for the 21st Century
Krupinski, Elizabeth A.; Weinstein, John B.; Graham, Anna R.; Barker, Gail P.; Erps, Kristine A.; Holtrust, Angelette L.; Holcomb, Michael J.
2016-01-01
A medical school general pathology course has been reformatted into a K-12 general pathology course. This new course has been implemented at a series of 7 to 12 grade levels and the student outcomes compared. Typically, topics covered mirrored those in a medical school general pathology course serving as an introduction to the mechanisms of diseases. Assessment of student performance was based on their score on a multiple-choice final examination modeled after an examination given to medical students. Two Tucson area schools, in a charter school network, participated in the study. Statistical analysis of examination performances showed that there were no significant differences as a function of school (F = 0.258, P = .6128), with students at school A having an average test score of 87.03 (standard deviation = 8.99) and school B 86.00 (standard deviation = 8.18; F = 0.258, P = .6128). Analysis of variance was also conducted on the test scores as a function of gender and class grade. There were no significant differences as a function of gender (F = 0.608, P = .4382), with females having an average score of 87.18 (standard deviation = 7.24) and males 85.61 (standard deviation = 9.85). There were also no significant differences as a function of grade level (F = 0.627, P = .6003), with 7th graders having an average of 85.10 (standard deviation = 8.90), 8th graders 86.00 (standard deviation = 9.95), 9th graders 89.67 (standard deviation = 5.52), and 12th graders 86.90 (standard deviation = 7.52). The results demonstrated that middle and upper school students performed equally well in K-12 general pathology. Student course evaluations showed that the course met the student’s expectations. One class voted K-12 general pathology their “elective course-of-the-year.” PMID:28725762
A study on the measurement of wrist motion range using the iPhone 4 gyroscope application.
Kim, Tae Seob; Park, David Dae Hwan; Lee, Young Bae; Han, Dong Gil; Shim, Jeong Su; Lee, Young Jig; Kim, Peter Chan Woo
2014-08-01
Measuring the range of motion (ROM) of the wrist is an important physical examination conducted in the Department of Hand Surgery for the purpose of evaluation, diagnosis, prognosis, and treatment of patients. The most common method for performing this task is by using a universal goniometer. This study was performed using 52 healthy participants to compare wrist ROM measurement using a universal goniometer and the iPhone 4 Gyroscope application. Participants did not have previous wrist illnesses and their measured values for wrist motion were compared in each direction. Normal values for wrist ROM are 73 degrees of flexion, 71 degrees of extension, 19 degrees of radial deviation, 33 degrees of ulnar deviation, 140 degrees of supination, and 60 degrees of pronation.The average measurement values obtained using the goniometer were 74.2 (5.1) degrees for flexion, 71.1 (4.9) degrees for extension, 19.7 (3.0) degrees for radial deviation, 34.0 (3.7) degrees for ulnar deviation, 140.8 (5.6) degrees for supination, and 61.1 (4.7) degrees for pronation. The average measurement values obtained using the iPhone 4 Gyroscope application were 73.7 (5.5) degrees for flexion, 70.8 (5.1) degrees for extension, 19.5 (3.0) degrees for radial deviation, 33.7 (3.9) degrees for ulnar deviation, 140.4 (5.7) degrees for supination, and 60.8 (4.9) degrees for pronation. The differences between the measurement values by the Gyroscope application and average value were 0.7 degrees for flexion, -0.2 degrees for extension, 0.5 degrees for radial deviation, 0.7 degrees for ulnar deviation, 0.4 degrees for supination, and 0.8 degrees for pronation. The differences in average value were not statistically significant. The authors introduced a new method of measuring the range of wrist motion using the iPhone 4 Gyroscope application that is simpler to use and can be performed by the patient outside a clinical setting.
Extreme-value statistics of work done in stretching a polymer in a gradient flow.
Vucelja, M; Turitsyn, K S; Chertkov, M
2015-02-01
We analyze the statistics of work generated by a gradient flow to stretch a nonlinear polymer. We obtain the large deviation function (LDF) of the work in the full range of appropriate parameters by combining analytical and numerical tools. The LDF shows two distinct asymptotes: "near tails" are linear in work and dominated by coiled polymer configurations, while "far tails" are quadratic in work and correspond to preferentially fully stretched polymers. We find the extreme value statistics of work for several singular elastic potentials, as well as the mean and the dispersion of work near the coil-stretch transition. The dispersion shows a maximum at the transition.
NASA Technical Reports Server (NTRS)
1982-01-01
A FORTRAN coded computer program and method to predict the reaction control fuel consumption statistics for a three-axis stabilized rocket vehicle upper stage are described. A Monte Carlo approach is used, made more efficient by the use of closed-form estimates of impulses. The effects of rocket motor thrust misalignment, static unbalance, aerodynamic disturbances, and deviations in trajectory, mass properties and control system characteristics are included. This routine can be applied to many types of on-off reaction controlled vehicles. The pseudorandom number generation and statistical analysis subroutines, including the output histograms, can be used for other Monte Carlo analysis problems.
Phenomenology of small violations of Fermi and Bose statistics
NASA Astrophysics Data System (ADS)
Greenberg, O. W.; Mohapatra, Rabindra N.
1989-04-01
In a recent paper, we proposed a "paronic" field-theory framework for possible small deviations from the Pauli exclusion principle. This theory cannot be represented in a positive-metric (Hilbert) space. Nonetheless, the issue of possible small violations of the exclusion principle can be addressed in the framework of quantum mechanics, without being connected with a local quantum field theory. In this paper, we discuss the phenomenology of small violations of both Fermi and Bose statistics. We consider the implications of such violations in atomic, nuclear, particle, and condensed-matter physics and in astrophysics and cosmology. We also discuss experiments that can detect small violations of Fermi and Bose statistics or place stringent bounds on their validity.
BIG BANG NUCLEOSYNTHESIS WITH A NON-MAXWELLIAN DISTRIBUTION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertulani, C. A.; Fuqua, J.; Hussein, M. S.
The abundances of light elements based on the big bang nucleosynthesis model are calculated using the Tsallis non-extensive statistics. The impact of the variation of the non-extensive parameter q from the unity value is compared to observations and to the abundance yields from the standard big bang model. We find large differences between the reaction rates and the abundance of light elements calculated with the extensive and the non-extensive statistics. We found that the observations are consistent with a non-extensive parameter q = 1 (+0.05, -0.12), indicating that a large deviation from the Boltzmann-Gibbs statistics (q = 1) is highly unlikely.
NASA Astrophysics Data System (ADS)
Zhu, Xiaowei; Iungo, G. Valerio; Leonardi, Stefano; Anderson, William
2017-02-01
For a horizontally homogeneous, neutrally stratified atmospheric boundary layer (ABL), aerodynamic roughness length, z_0, is the effective elevation at which the streamwise component of mean velocity is zero. A priori prediction of z_0 based on topographic attributes remains an open line of inquiry in planetary boundary-layer research. Urban topographies - the topic of this study - exhibit spatial heterogeneities associated with variability of building height, width, and proximity with adjacent buildings; such variability renders a priori, prognostic z_0 models appealing. Here, large-eddy simulation (LES) has been used in an extensive parametric study to characterize the ABL response (and z_0) to a range of synthetic, urban-like topographies wherein statistical moments of the topography have been systematically varied. Using LES results, we determined the hierarchical influence of topographic moments relevant to setting z_0. We demonstrate that standard deviation and skewness are important, while kurtosis is negligible. This finding is reconciled with a model recently proposed by Flack and Schultz (J Fluids Eng 132:041203-1-041203-10, 2010), who demonstrate that z_0 can be modelled with standard deviation and skewness, and two empirical coefficients (one for each moment). We find that the empirical coefficient related to skewness is not constant, but exhibits a dependence on standard deviation over certain ranges. For idealized, quasi-uniform cubic topographies and for complex, fully random urban-like topographies, we demonstrate strong performance of the generalized Flack and Schultz model against contemporary roughness correlations.
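The two-moment roughness closure referred to above can be sketched generically as z0 = a * sigma_h * (1 + Sk)^b, with sigma_h the height standard deviation and Sk the skewness; the coefficients a and b below are placeholders for illustration and are not the fitted values of Flack and Schultz or of the LES study.

```python
# Sketch of a two-moment roughness model of the form discussed above:
# z0 = a * sigma_h * (1 + Sk)**b. The coefficients a and b are placeholders,
# not the fitted values from Flack and Schultz (2010) or from the LES study.
import numpy as np
from scipy.stats import skew

def roughness_length(heights, a=0.1, b=1.0):
    sigma_h = np.std(heights)
    sk = skew(heights.ravel())
    return a * sigma_h * (1.0 + sk) ** b

rng = np.random.default_rng(4)
urban_surface = rng.gamma(shape=2.0, scale=5.0, size=(128, 128))  # synthetic heights (m)
print(f"z0 ~ {roughness_length(urban_surface):.3f} m (illustrative coefficients)")
```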
De Sousa Fontes, Aderito; Sandrea Jiménez, Minaret; Chacaltana Ayerve, Rosa R
2013-01-01
The microdebrider is a surgical tool which has been used successfully in many endoscopic surgical procedures in otolaryngology. In this study, we analysed our experience using this powered instrument in the resection of obstructive nasal septum deviations. This was a longitudinal, prospective, descriptive study conducted between January and June 2007 on 141 patients who consulted for chronic nasal obstruction caused by a septal deviation or deformity and underwent powered endoscopic septoplasty (PES). The mean age was 39.9 years (15-63 years); 60.28% were male (n=85). Nasal symptom severity decreased after surgery from 6.12 (preoperative) to 2.01 (postoperative); this reduction in patients undergoing PES was statistically significant (P<.05). There were no statistically significant differences between the results at the 2nd week, 6th week and 5th year after surgery. All (100%) of the patients were satisfied with the results of surgery and no patient answered "No" to the question added to compare patient satisfaction after surgery. Minor complications in the postoperative period were present in 4.96% of the cases. Powered endoscopic septoplasty allows accurate, conservative repair of obstructive nasal septum deviations, with fewer complications and better functional results. In our experience, this technique offered significant perioperative advantages with high postoperative patient satisfaction in terms of reducing the severity of nasal symptoms. Copyright © 2012 Elsevier España, S.L. All rights reserved.
Chaste, Pauline; Klei, Lambertus; Sanders, Stephan J; Murtha, Michael T; Hus, Vanessa; Lowe, Jennifer K; Willsey, A Jeremy; Moreno-De-Luca, Daniel; Yu, Timothy W; Fombonne, Eric; Geschwind, Daniel; Grice, Dorothy E; Ledbetter, David H; Lord, Catherine; Mane, Shrikant M; Lese Martin, Christa; Martin, Donna M; Morrow, Eric M; Walsh, Christopher A; Sutcliffe, James S; State, Matthew W; Devlin, Bernie; Cook, Edwin H; Kim, Soo-Jeong
2013-10-15
Brain development follows a different trajectory in children with autism spectrum disorders (ASD) than in typically developing children. A proxy for neurodevelopment could be head circumference (HC), but studies assessing HC and its clinical correlates in ASD have been inconsistent. This study investigates HC and clinical correlates in the Simons Simplex Collection cohort. We used a mixed linear model to estimate effects of covariates and the deviation from the expected HC given parental HC (genetic deviation). After excluding individuals with incomplete data, 7225 individuals in 1891 families remained for analysis. We examined the relationship between HC/genetic deviation of HC and clinical parameters. Gender, age, height, weight, genetic ancestry, and ASD status were significant predictors of HC (estimate of the ASD effect = .2 cm). HC was approximately normally distributed in probands and unaffected relatives, with only a few outliers. Genetic deviation of HC was also normally distributed, consistent with a random sampling of parental genes. Whereas larger HC than expected was associated with ASD symptom severity and regression, IQ decreased with the absolute value of the genetic deviation of HC. Measured against expected values derived from covariates of ASD subjects, statistical outliers for HC were uncommon. HC is a strongly heritable trait, and population norms for HC would be far more accurate if covariates including genetic ancestry, height, and age were taken into account. The association of diminishing IQ with absolute deviation from predicted HC values suggests HC could reflect subtle underlying brain development and warrants further investigation. © 2013 Society of Biological Psychiatry.
Qualitative computer aided evaluation of dental impressions in vivo.
Luthardt, Ralph G; Koch, Rainer; Rudolph, Heike; Walter, Michael H
2006-01-01
Clinical investigations dealing with the precision of different impression techniques are rare. The objective of the present study was to develop and evaluate a procedure for the qualitative analysis of three-dimensional impression precision based on an established in-vitro procedure. The null hypothesis to be tested was that the precision of impressions does not differ depending on the impression technique used (single-step, monophase and two-step techniques) and on clinical variables. Digital surface data of patients' teeth prepared for crowns were gathered from standardized manufactured master casts after impressions with three different techniques were taken in a randomized order. Data sets were analyzed for each patient in comparison with the one-step impression chosen as the reference. The qualitative analysis was limited to data points within the 99.5% range. Based on the color-coded representation, areas with maximum deviations were determined (preparation margin and the mantle and occlusal surfaces). To qualitatively analyze the precision of the impression techniques, the hypothesis was tested in linear models for repeated measures factors (p < 0.05). For the positive 99.5% deviations, no variables with significant influence were identified in the statistical analysis. In contrast, the impression technique and the position of the preparation margin significantly influenced the negative 99.5% deviations. The influence of clinical parameters on the deviations between impression techniques can be determined reliably using the 99.5th percentile of the deviations. An analysis of the areas with maximum deviations showed high clinical relevance. The preparation margin was identified as the weak spot of impression taking.
Chaste, Pauline; Klei, Lambertus; Sanders, Stephan J.; Murtha, Michael T.; Hus, Vanessa; Lowe, Jennifer K.; Willsey, A. Jeremy; Moreno-De-Luca, Daniel; Yu, Timothy W.; Fombonne, Eric; Geschwind, Daniel; Grice, Dorothy E.; Ledbetter, David H.; Lord, Catherine; Mane, Shrikant M.; Martin, Christa Lese; Martin, Donna M.; Morrow, Eric M.; Walsh, Christopher A.; Sutcliffe, James S.; State, Matthew W.; Devlin, Bernie; Cook, Edwin H.; Kim, Soo-Jeong
2013-01-01
BACKGROUND Brain development follows a different trajectory in children with Autism Spectrum Disorders (ASD) than in typically developing children. A proxy for neurodevelopment could be head circumference (HC), but studies assessing HC and its clinical correlates in ASD have been inconsistent. This study investigates HC and clinical correlates in the Simons Simplex Collection cohort. METHODS We used a mixed linear model to estimate effects of covariates and the deviation from the expected HC given parental HC (genetic deviation). After excluding individuals with incomplete data, 7225 individuals in 1891 families remained for analysis. We examined the relationship between HC/genetic deviation of HC and clinical parameters. RESULTS Gender, age, height, weight, genetic ancestry and ASD status were significant predictors of HC (estimate of the ASD effect=0.2cm). HC was approximately normally distributed in probands and unaffected relatives, with only a few outliers. Genetic deviation of HC was also normally distributed, consistent with a random sampling of parental genes. Whereas larger HC than expected was associated with ASD symptom severity and regression, IQ decreased with the absolute value of the genetic deviation of HC. CONCLUSIONS Measured against expected values derived from covariates of ASD subjects, statistical outliers for HC were uncommon. HC is a strongly heritable trait and population norms for HC would be far more accurate if covariates including genetic ancestry, height and age were taken into account. The association of diminishing IQ with absolute deviation from predicted HC values suggests HC could reflect subtle underlying brain development and warrants further investigation. PMID:23746936
Pilot scanning patterns while viewing cockpit displays of traffic information
NASA Technical Reports Server (NTRS)
Ellis, S. R.; Stark, L.
1981-01-01
Scanning eye movements of airline pilots were recorded while they judged air traffic situations displayed on cockpit displays of traffic information (CDTI). The observed 1st order transition patterns between points of interest on the display showed reliable deviation from those patterns predicted by the assumption of statistical independence. However, both patterns of transitions correlated quite well with each other. Accordingly, the assumption of independence provided a surprisingly good model of the results. Nevertheless, the deviation between the observed patterns of transition and that based on the assumption of independence was for all subjects in the direction of increased determinism. Thus, the results provide objective evidence consistent with the existence of "scanpaths" in the data.
A New Control Paradigm for Stochastic Differential Equations
NASA Astrophysics Data System (ADS)
Schmid, Matthias J. A.
This study presents a novel comprehensive approach to the control of dynamic systems under uncertainty governed by stochastic differential equations (SDEs). Large Deviations (LD) techniques are employed to arrive at a control law for a large class of nonlinear systems minimizing sample path deviations. Thereby, a paradigm shift is suggested from point-in-time to sample path statistics on function spaces. A suitable formal control framework which leverages embedded Freidlin-Wentzell theory is proposed and described in detail. This includes the precise definition of the control objective and comprises an accurate discussion of the adaptation of the Freidlin-Wentzell theorem to the particular situation. The new control design is enabled by the transformation of an ill-posed control objective into a well-conditioned sequential optimization problem. A direct numerical solution process is presented using quadratic programming, but the emphasis is on the development of a closed-form expression reflecting the asymptotic deviation probability of a particular nominal path. This is identified as the key factor in the success of the new paradigm. An approach employing the second variation and the differential curvature of the effective action is suggested for small deviation channels leading to the Jacobi field of the rate function and the subsequently introduced Jacobi field performance measure. This closed-form solution is utilized in combination with the supplied parametrization of the objective space. For the first time, this allows for an LD based control design applicable to a large class of nonlinear systems. Thus, Minimum Large Deviations (MLD) control is effectively established in a comprehensive structured framework. The construction of the new paradigm is completed by an optimality proof for the Jacobi field performance measure, an interpretive discussion, and a suggestion for efficient implementation. The potential of the new approach is exhibited by its extension to scalar systems subject to state-dependent noise and to systems of higher order. The suggested control paradigm is further advanced when a sequential application of MLD control is considered. This technique yields a nominal path corresponding to the minimum total deviation probability on the entire time domain. It is demonstrated that this sequential optimization concept can be unified in a single objective function which is revealed to be the Jacobi field performance index on the entire domain subject to an endpoint deviation. The emerging closed-form term replaces the previously required nested optimization and, thus, results in a highly efficient application-ready control design. This effectively substantiates Minimum Path Deviation (MPD) control. The proposed control paradigm allows the specific problem of stochastic cost control to be addressed as a special case. This new technique is employed within this study for the stochastic cost problem giving rise to Cost Constrained MPD (CCMPD) as well as to Minimum Quadratic Cost Deviation (MQCD) control. An exemplary treatment of a generic scalar nonlinear system subject to quadratic costs is performed for MQCD control to demonstrate the elementary expandability of the new control paradigm. This work concludes with a numerical evaluation of both MPD and CCMPD control for three exemplary benchmark problems. Numerical issues associated with the simulation of SDEs are briefly discussed and illustrated. The numerical examples furnish proof of the successful design. 
This study is complemented by a thorough review of statistical control methods, stochastic processes, Large Deviations techniques and the Freidlin-Wentzell theory, providing a comprehensive, self-contained account. The presentation of the mathematical tools and concepts is of a unique character, specifically addressing an engineering audience.
Flow throughout the Earth's core inverted from geomagnetic observations and numerical dynamo models
NASA Astrophysics Data System (ADS)
Aubert, Julien
2013-02-01
This paper introduces inverse geodynamo modelling, a framework imaging flow throughout the Earth's core from observations of the geomagnetic field and its secular variation. The necessary prior information is provided by statistics from 3-D and self-consistent numerical simulations of the geodynamo. The core method is a linear estimation (or Kalman filtering) procedure, combined with standard frozen-flux core surface flow inversions in order to handle the non-linearity of the problem. The inversion scheme is successfully validated using synthetic test experiments. A set of four numerical dynamo models of increasing physical complexity and similarity to the geomagnetic field is then used to invert for flows at single epochs within the period 1970-2010, using data from the geomagnetic field models CM4 and gufm-sat-Q3. The resulting core surface flows generally provide satisfactory fits to the secular variation within the level of modelled errors, and robustly reproduce the most commonly observed patterns while additionally presenting a high degree of equatorial symmetry. The corresponding deep flows present a robust, highly columnar structure once rotational constraints are enforced to a high level in the prior models, with patterns strikingly similar to the results of quasi-geostrophic inversions. In particular, the presence of a persistent planetary scale, eccentric westward columnar gyre circling around the inner core is confirmed. The strength of the approach is to uniquely determine the trade-off between fit to the data and complexity of the solution by clearly connecting it to first principle physics; statistical deviations observed between the inverted flows and the standard model behaviour can then be used to quantitatively assess the shortcomings of the physical modelling. Such deviations include the (i) westwards and (ii) hemispherical character of the eccentric gyre. A prior model with angular momentum conservation of the core-mantle inner-core system, and gravitational coupling of reasonable strength between the mantle and the inner core, is shown to produce enough westward drift to resolve statistical deviation (i). Deviation (ii) is resolved by a prior with an hemispherical buoyancy release at the inner-core boundary, with excess buoyancy below Asia. This latter result suggests that the recently proposed inner-core translational instability presently transports the solid inner-core material westwards, opposite to the seismologically inferred long-term trend but consistently with the eccentricity of the geomagnetic dipole in recent times.
Structure of Pine Stands in the Southeast
William A. Bechtold; Gregory A. Ruark
1988-01-01
Distributional and statistical information associated with stand age, site index, basal area per acre, number of stems per acre, and stand density index is reported for major pine cover types of the Southeastern United States. Means, standard deviations, and ranges of these variables are listed by State and physiographic region for loblolly, slash, longleaf, pond,...
A Monte Carlo Approach to Unidimensionality Testing in Polytomous Rasch Models
ERIC Educational Resources Information Center
Christensen, Karl Bang; Kreiner, Svend
2007-01-01
Many statistical tests are designed to test the different assumptions of the Rasch model, but only few are directed at detecting multidimensionality. The Martin-Lof test is an attractive approach, the disadvantage being that its null distribution deviates strongly from the asymptotic chi-square distribution for most realistic sample sizes. A Monte…
1983-09-15
there is obvious difficulty in interpreting these results. In the absence of degradation in system performance, it would be difficult to argue that... [OCR table residue: "standard deviation during instrument take-off", rows 2-6 with values .441, -.218, .596, .831, -.916; remaining scanner artifacts removed]
A new SAS program for behavioral analysis of Electrical Penetration Graph (EPG) data
USDA-ARS?s Scientific Manuscript database
A new program is introduced that uses SAS software to duplicate output of descriptive statistics from the Sarria Excel workbook for EPG waveform analysis. Not only are publishable means and standard errors or deviations output, the user also is guided through four relatively simple sub-programs for ...
NASA Astrophysics Data System (ADS)
Slaski, G.; Ohde, B.
2016-09-01
The article presents the results of a statistical dispersion analysis of an energy and power demand for tractive purposes of a battery electric vehicle. The authors compare data distribution for different values of an average speed in two approaches, namely a short and long period of observation. The short period of observation (generally around several hundred meters) results from a previously proposed macroscopic energy consumption model based on an average speed per road section. This approach yielded high values of standard deviation and coefficient of variation (the ratio between standard deviation and the mean) around 0.7-1.2. The long period of observation (about several kilometers long) is similar in length to standardized speed cycles used in testing a vehicle energy consumption and available range. The data were analysed to determine the impact of observation length on the energy and power demand variation. The analysis was based on a simulation of electric power and energy consumption performed with speed profiles data recorded in Poznan agglomeration.
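The short-versus-long observation comparison described above comes down to how the coefficient of variation of per-distance energy use shrinks when consecutive road sections are aggregated; the sketch below illustrates this with simulated trip data rather than the recorded Poznan profiles.

```python
# Sketch of the dispersion comparison above: coefficient of variation (sd/mean) of
# energy use per section for short road sections versus longer aggregated stretches.
# Trip data are simulated, not measurements from the Poznan study.
import numpy as np

rng = np.random.default_rng(5)
segment_energy = rng.lognormal(mean=np.log(0.15), sigma=0.6, size=10_000)  # kWh per ~0.5 km

def coeff_var(x):
    return x.std(ddof=1) / x.mean()

print(f"short sections:  CV = {coeff_var(segment_energy):.2f}")

# aggregate consecutive sections into ~10 km stretches (20 sections each)
long_stretches = segment_energy.reshape(-1, 20).sum(axis=1)
print(f"long stretches:  CV = {coeff_var(long_stretches):.2f}")
```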
Climate change and the detection of trends in annual runoff
McCabe, G.J.; Wolock, D.M.
1997-01-01
This study examines the statistical likelihood of detecting a trend in annual runoff given an assumed change in mean annual runoff, the underlying year-to-year variability in runoff, and serial correlation of annual runoff. Means, standard deviations, and lag-1 serial correlations of annual runoff were computed for 585 stream gages in the conterminous United States, and these statistics were used to compute the probability of detecting a prescribed trend in annual runoff. Assuming a linear 20% change in mean annual runoff over a 100 yr period and a significance level of 95%, the average probability of detecting a significant trend was 28% among the 585 stream gages. The largest probability of detecting a trend was in the northwestern U.S., the Great Lakes region, the northeastern U.S., the Appalachian Mountains, and parts of the northern Rocky Mountains. The smallest probability of trend detection was in the central and southwestern U.S., and in Florida. Low probabilities of trend detection were associated with low ratios of mean annual runoff to the standard deviation of annual runoff and with high lag-1 serial correlation in the data.
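A Monte Carlo version of the detection-probability calculation described above is sketched below for a single gage: AR(1) annual runoff with a prescribed linear 20% change in the mean over 100 years, tested with a regression t-test at the 95% level; the study itself may have used an analytical power formula, and the gage statistics supplied are illustrative.

```python
# Monte Carlo sketch of trend-detection probability for annual runoff: AR(1) noise,
# a linear 20% change in the mean over 100 years, and a regression t-test at the
# 95% level. The original study may have used an analytical power calculation.
import numpy as np
from scipy import stats

def detection_probability(mean, sd, rho, years=100, change=0.20, alpha=0.05, reps=2000):
    t = np.arange(years)
    trend = mean * change * t / (years - 1)            # linear change in the mean
    sigma_innov = sd * np.sqrt(1.0 - rho ** 2)         # AR(1) innovation sd
    rng = np.random.default_rng(6)
    detected = 0
    for _ in range(reps):
        noise = np.empty(years)
        noise[0] = rng.normal(0.0, sd)
        for k in range(1, years):
            noise[k] = rho * noise[k - 1] + rng.normal(0.0, sigma_innov)
        runoff = mean + trend + noise
        slope, _, _, p_value, _ = stats.linregress(t, runoff)
        detected += p_value < alpha
    return detected / reps

print(detection_probability(mean=300.0, sd=90.0, rho=0.2))   # illustrative gage statistics
```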
New Evidence on the Relationship Between Climate and Conflict
NASA Astrophysics Data System (ADS)
Burke, M.
2015-12-01
We synthesize a large new body of research on the relationship between climate and conflict. We consider many types of human conflict, ranging from interpersonal conflict -- domestic violence, road rage, assault, murder, and rape -- to intergroup conflict -- riots, coups, ethnic violence, land invasions, gang violence, and civil war. After harmonizing statistical specifications and standardizing estimated effect sizes within each conflict category, we implement a meta-analysis that allows us to estimate the mean effect of climate variation on conflict outcomes as well as quantify the degree of variability in this effect size across studies. Looking across more than 50 studies, we find that deviations from moderate temperatures and precipitation patterns systematically increase the risk of conflict, often substantially, with average effects that are highly statistically significant. We find that contemporaneous temperature has the largest average effect by far, with each 1 standard deviation increase toward warmer temperatures increasing the frequency of contemporaneous interpersonal conflict by 2% and of intergroup conflict by more than 10%. We also quantify substantial heterogeneity in these effect estimates across settings.
NASA Astrophysics Data System (ADS)
Pengvanich, P.; Chernin, D. P.; Lau, Y. Y.; Luginsland, J. W.; Gilgenbach, R. M.
2007-11-01
Motivated by the current interest in mm-wave and THz sources, which use miniature, difficult-to-fabricate circuit components, we evaluate the statistical effects of random fabrication errors on a helix traveling wave tube amplifier's small signal characteristics. The small signal theory is treated in a continuum model in which the electron beam is assumed to be monoenergetic, and axially symmetric about the helix axis. Perturbations that vary randomly along the beam axis are introduced in the dimensionless Pierce parameters b, the beam-wave velocity mismatch, C, the gain parameter, and d, the cold tube circuit loss. Our study shows, as expected, that perturbation in b dominates the other two. The extensive numerical data have been confirmed by our analytic theory. They show in particular that the standard deviation of the output phase is linearly proportional to standard deviation of the individual perturbations in b, C, and d. Simple formulas have been derived which yield the output phase variations in terms of the statistical random manufacturing errors. This work was supported by AFOSR and by ONR.
NASA Astrophysics Data System (ADS)
Zhang, Zhu; Li, Hongbin; Tang, Dengping; Hu, Chen; Jiao, Yang
2017-10-01
Metering performance is the key parameter of an electronic voltage transformer (EVT), and it requires high accuracy. The conventional off-line calibration method using a standard voltage transformer is not suitable for the key equipment in a smart substation, which needs on-line monitoring. In this article, we propose a method for monitoring the metering performance of an EVT on-line based on cyber-physics correlation analysis. By the electrical and physical properties of a substation running in three-phase symmetry, the principal component analysis method is used to separate the metering deviation caused by the primary fluctuation and the EVT anomaly. The characteristic statistics of the measured data during operation are extracted, and the metering performance of the EVT is evaluated by analyzing the change in statistics. The experimental results show that the method successfully monitors the metering deviation of a Class 0.2 EVT accurately. The method demonstrates the accurate evaluation of on-line monitoring of the metering performance on an EVT without a standard voltage transformer.
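The separation idea described above can be sketched with ordinary PCA: under three-phase symmetry a genuine primary-voltage fluctuation moves all three channels together and is captured by the first principal component, while a metering drift in one EVT remains in that channel's residual; the injected 0.2% gain drift and the simulated data below are illustrative, not the paper's algorithm or measurements.

```python
# Sketch of the PCA-based separation described above: common-mode (primary) voltage
# fluctuation loads on the first principal component, while a metering drift in one
# EVT appears in that channel's residual. Simulated data and drift are illustrative.
import numpy as np

rng = np.random.default_rng(7)
n = 5000
primary = 1.0 + 0.01 * rng.standard_normal(n)        # per-unit primary fluctuation
noise = 5e-4 * rng.standard_normal((n, 3))
v = primary[:, None] * np.ones(3) + noise            # three phase channels
v[n // 2:, 1] *= 1.002                               # 0.2% gain drift injected in phase B

x = v - v.mean(axis=0)
_, _, vt = np.linalg.svd(x, full_matrices=False)     # PCA via SVD
pc1 = x @ vt[0]                                      # common-mode component
residual = x - np.outer(pc1, vt[0])                  # per-channel deviations

print("residual std per phase:", residual.std(axis=0))
# The drifted phase shows a clearly larger residual standard deviation.
```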
Extreme statistics and index distribution in the classical 1d Coulomb gas
NASA Astrophysics Data System (ADS)
Dhar, Abhishek; Kundu, Anupam; Majumdar, Satya N.; Sabhapandit, Sanjib; Schehr, Grégory
2018-07-01
We consider a 1D gas of N charged particles confined by an external harmonic potential and interacting via the 1D Coulomb potential. For this system we show that in equilibrium the charges settle, on average, uniformly and symmetrically on a finite region centred around the origin. We study the statistics of the position of the rightmost particle and show that the limiting distribution describing its typical fluctuations is different from the Tracy–Widom distribution found in the 1D log-gas. We also compute the large deviation functions which characterise the atypical fluctuations of this position far away from its mean value. In addition, we study the gap between the two rightmost particles as well as the index N_+, i.e. the number of particles on the positive semi-axis. We compute the limiting distributions associated with the typical fluctuations of these observables as well as the corresponding large deviation functions. We provide numerical support for our analytical predictions. Part of these results was announced in a recent letter, Dhar et al (2017 Phys. Rev. Lett. 119 060601).
Extended analysis of Skylab experiment M558 data
NASA Technical Reports Server (NTRS)
Ukanwa, A. O.
1976-01-01
A careful review of the data from Skylab M558 was made in an effort to explain the apparent anomaly of the existence of radial concentration gradients when none should have been observed. The very close modelling of the experimental axial concentration profiles by the unsteady-state one-dimensional solution of Fick's Law of self-diffusion in liquid zinc, and the condition of initial uniform concentration in the radioactive pellet portion of the experimental specimens, would have precluded the appearance of such radial concentration gradients. Statistical analyses were used to test the significance of the observed deviation from radial-concentration homogeneity. A Student's t-distribution test of significance showed that, at the 90% or even the 80% level of significance, there were no significant deviations from uniformity in radial concentrations. It was also concluded that the two likely causes of any deviation that existed were the zinc to zinc-65 bonding procedure and surface phenomena such as surface tension and capillary action.
NASA Technical Reports Server (NTRS)
Chadwick, C.
1984-01-01
This paper describes the development and use of an algorithm to compute approximate statistics of the magnitude of a single random trajectory correction maneuver (TCM) Delta v vector. The TCM Delta v vector is modeled as a three-component Cartesian vector, each of whose components is a random variable having a normal (Gaussian) distribution with zero mean and possibly unequal standard deviations. The algorithm uses these standard deviations as input to produce approximations to (1) the mean and standard deviation of the magnitude of Delta v, (2) points of the probability density function of the magnitude of Delta v, and (3) points of the cumulative and inverse cumulative distribution functions of Delta v. The approximations are based on Monte Carlo techniques developed in a previous paper by the author and extended here. The algorithm described is expected to be useful in both pre-flight planning and in-flight analysis of maneuver propellant requirements for space missions.
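The statistics the algorithm approximates can be sanity-checked with a brute-force Monte Carlo draw; the per-axis standard deviations below are arbitrary placeholders, not mission values.

```python
# Monte Carlo estimates of the mean, standard deviation, and quantiles of the
# magnitude of a zero-mean Gaussian Delta-v vector with unequal per-axis sigmas.
import numpy as np

rng = np.random.default_rng(0)
sigmas = np.array([1.0, 2.5, 0.7])            # m/s, per-axis 1-sigma (hypothetical)
dv = rng.normal(0.0, sigmas, size=(200_000, 3))
mag = np.linalg.norm(dv, axis=1)

print(f"mean |dv| = {mag.mean():.3f} m/s")
print(f"std  |dv| = {mag.std(ddof=1):.3f} m/s")
for p in (50, 90, 99):
    print(f"{p}th percentile = {np.percentile(mag, p):.3f} m/s")
```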
A method to estimate statistical errors of properties derived from charge-density modelling
Lecomte, Claude
2018-01-01
Estimating uncertainties of property values derived from a charge-density model is not straightforward. A methodology, based on calculation of sample standard deviations (SSD) of properties using randomly deviating charge-density models, is proposed with the MoPro software. The parameter shifts applied in the deviating models are generated in order to respect the variance–covariance matrix obtained from the least-squares refinement. This ‘SSD methodology’ procedure can be applied to estimate uncertainties of any property related to a charge-density model obtained by least-squares fitting. This includes topological properties such as critical point coordinates, electron density, Laplacian and ellipticity at critical points and charges integrated over atomic basins. Errors on electrostatic potentials and interaction energies are now also available through this procedure. The method is exemplified with the charge density of the compound (E)-5-phenylpent-1-enylboronic acid, refined at 0.45 Å resolution. The procedure is implemented in the freely available MoPro program dedicated to charge-density refinement and modelling. PMID:29724964
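A schematic version of the SSD idea, not the MoPro implementation: draw parameter sets from a multivariate normal defined by the refinement's variance-covariance matrix, re-evaluate a derived property for each draw, and quote its sample standard deviation. The two-parameter "property" below is a made-up stand-in for a real charge-density-derived quantity.

```python
# Sample-standard-deviation (SSD) style uncertainty propagation: random parameter
# draws respecting a given variance-covariance matrix, re-evaluation of a derived
# property, and its sample standard deviation.
import numpy as np

rng = np.random.default_rng(42)
p_hat = np.array([1.20, 0.35])                    # refined parameter values (made up)
cov = np.array([[4.0e-4, 1.0e-4],
                [1.0e-4, 2.5e-4]])                # variance-covariance from refinement

def derived_property(p):
    # placeholder for e.g. the electron density at a critical point
    return p[0] ** 2 / (1.0 + p[1])

draws = rng.multivariate_normal(p_hat, cov, size=5000)
values = np.array([derived_property(p) for p in draws])
print(f"property value                        = {derived_property(p_hat):.4f}")
print(f"sample standard deviation (SSD estimate) = {values.std(ddof=1):.4f}")
```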
Rice, Stephen B; Chan, Christopher; Brown, Scott C; Eschbach, Peter; Han, Li; Ensor, David S; Stefaniak, Aleksandr B; Bonevich, John; Vladár, András E; Hight Walker, Angela R; Zheng, Jiwen; Starnes, Catherine; Stromberg, Arnold; Ye, Jia; Grulke, Eric A
2015-01-01
This paper reports an interlaboratory comparison that evaluated a protocol for measuring and analysing the particle size distribution of discrete, metallic, spheroidal nanoparticles using transmission electron microscopy (TEM). The study was focused on automated image capture and automated particle analysis. NIST RM8012 gold nanoparticles (30 nm nominal diameter) were measured for area-equivalent diameter distributions by eight laboratories. Statistical analysis was used to (1) assess the data quality without using size distribution reference models, (2) determine reference model parameters for different size distribution reference models and non-linear regression fitting methods and (3) assess the measurement uncertainty of a size distribution parameter by using its coefficient of variation. The interlaboratory area-equivalent diameter mean, 27.6 nm ± 2.4 nm (computed based on a normal distribution), was quite similar to the area-equivalent diameter, 27.6 nm, assigned to NIST RM8012. The lognormal reference model was the preferred choice for these particle size distributions as, for all laboratories, its parameters had lower relative standard errors (RSEs) than the other size distribution reference models tested (normal, Weibull and Rosin–Rammler–Bennett). The RSEs for the fitted standard deviations were two orders of magnitude higher than those for the fitted means, suggesting that most of the parameter estimate errors were associated with estimating the breadth of the distributions. The coefficients of variation for the interlaboratory statistics also confirmed the lognormal reference model as the preferred choice. From quasi-linear plots, the typical range for good fits between the model and cumulative number-based distributions was 1.9 fitted standard deviations less than the mean to 2.3 fitted standard deviations above the mean. Automated image capture, automated particle analysis and statistical evaluation of the data and fitting coefficients provide a framework for assessing nanoparticle size distributions using TEM for image acquisition. PMID:26361398
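The distribution-fitting step can be illustrated with simulated diameters: fit a lognormal reference model per "laboratory" and summarize the fitted parameters by their coefficient of variation. The numbers below are illustrative, not the RM8012 measurements.

```python
# Fit a lognormal size-distribution model to simulated diameters from several
# "laboratories" and report the coefficient of variation of the fitted parameters.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
fitted_means, fitted_sds = [], []
for lab in range(8):                                             # eight simulated labs
    d = rng.lognormal(mean=np.log(27.6), sigma=0.09, size=400)   # diameters in nm
    shape, loc, scale = stats.lognorm.fit(d, floc=0)
    m, v = stats.lognorm.stats(shape, loc=loc, scale=scale, moments="mv")
    fitted_means.append(float(m))
    fitted_sds.append(float(np.sqrt(v)))

fitted_means, fitted_sds = np.array(fitted_means), np.array(fitted_sds)
print(f"interlaboratory mean diameter: {fitted_means.mean():.1f} nm")
print(f"CV of fitted means           : {100 * fitted_means.std(ddof=1) / fitted_means.mean():.1f}%")
print(f"CV of fitted std deviations  : {100 * fitted_sds.std(ddof=1) / fitted_sds.mean():.1f}%")
```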
NASA Astrophysics Data System (ADS)
Uhlemann, C.; Pajer, E.; Pichon, C.; Nishimichi, T.; Codis, S.; Bernardeau, F.
2018-03-01
Non-Gaussianities of dynamical origin are disentangled from primordial ones using the formalism of large deviation statistics with spherical collapse dynamics. This is achieved by relying on accurate analytical predictions for the one-point probability distribution function and the two-point clustering of spherically averaged cosmic densities (sphere bias). Sphere bias extends the idea of halo bias to intermediate density environments and voids as underdense regions. In the presence of primordial non-Gaussianity, sphere bias displays a strong scale dependence relevant for both high- and low-density regions, which is predicted analytically. The statistics of densities in spheres are built to model primordial non-Gaussianity via an initial skewness with a scale dependence that depends on the bispectrum of the underlying model. The analytical formulas with the measured non-linear dark matter variance as input are successfully tested against numerical simulations. For local non-Gaussianity with a range from fNL = -100 to +100, they are found to agree within 2 per cent or better for densities ρ ∈ [0.5, 3] in spheres of radius 15 Mpc h-1 down to z = 0.35. The validity of the large deviation statistics formalism is thereby established for all observationally relevant local-type departures from perfectly Gaussian initial conditions. The corresponding estimators for the amplitude of the non-linear variance σ8 and primordial skewness fNL are validated using a fiducial joint maximum likelihood experiment. The influence of observational effects and the prospects for a future detection of primordial non-Gaussianity from joint one- and two-point densities-in-spheres statistics are discussed.
Marsan, Lynnsay A.; D’Arcy, Christina E.; Olimpo, Jeffrey T.
2016-01-01
Evidence suggests that incorporating quantitative reasoning exercises into existent curricular frameworks within the science, technology, engineering, and mathematics (STEM) disciplines is essential for novices’ development of conceptual understanding and process skills in these domains. Despite this being the case, such studies acknowledge that students often experience difficulty in applying mathematics in the context of scientific problems. To address this concern, the present study sought to explore the impact of active demonstrations and critical reading exercises on novices’ comprehension of basic statistical concepts, including hypothesis testing, experimental design, and interpretation of research findings. Students first engaged in a highly interactive height activity that served to intuitively illustrate normal distribution, mean, standard deviation, and sample selection criteria. To enforce practical applications of standard deviation and p-value, student teams were subsequently assigned a figure from a peer-reviewed primary research article and instructed to evaluate the trustworthiness of the data. At the conclusion of this exercise, students presented their evaluations to the class for open discussion and commentary. Quantitative assessment of pre- and post-module survey data indicated a statistically significant increase both in students’ scientific reasoning and process skills and in their self-reported confidence in understanding the statistical concepts presented in the module. Furthermore, data indicated that the majority of students (>85%) found the module both interesting and helpful in nature. Future studies will seek to develop additional, novel exercises within this area and to evaluate the impact of such modules across a variety of STEM and non-STEM contexts. PMID:28101271
NASA Astrophysics Data System (ADS)
Rock, N. M. S.
ROBUST calculates 53 statistics, plus significance levels for 6 hypothesis tests, on each of up to 52 variables. These together allow the following properties of the data distribution for each variable to be examined in detail: (1) Location. Three means (arithmetic, geometric, harmonic) are calculated, together with the midrange and 19 high-performance robust L-, M-, and W-estimates of location (combined, adaptive, trimmed estimates, etc.). (2) Scale. The standard deviation is calculated along with the H-spread/2 (≈ semi-interquartile range), the mean and median absolute deviations from both mean and median, and a biweight scale estimator. The 23 location and 6 scale estimators programmed cover all possible degrees of robustness. (3) Normality. Distributions are tested against the null hypothesis that they are normal, using the 3rd (√b1) and 4th (b2) moments, Geary's ratio (mean deviation/standard deviation), Filliben's probability plot correlation coefficient, and a more robust test based on the biweight scale estimator. These statistics collectively are sensitive to most usual departures from normality. (4) Presence of outliers. The maximum and minimum values are assessed individually or jointly using Grubbs' maximum Studentized residuals, Harvey's and Dixon's criteria, and the Studentized range. For a single input variable, outliers can be either winsorized or eliminated and all estimates recalculated iteratively as desired. The following data transformations can also be applied: linear, log10, generalized Box-Cox power (including log, reciprocal, and square root), exponentiation, and standardization. For more than one variable, all results are tabulated in a single run of ROBUST. Further options are incorporated to assess ratios (of two variables) as well as discrete variables, and to handle missing data. Cumulative S-plots (for assessing normality graphically) can also be generated. The mutual consistency or inconsistency of all these measures helps to detect errors in data as well as to assess data distributions themselves.
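A few of the location and scale estimators of this kind, written out for a single contaminated variable; the biweight scale follows the usual Tukey formulation and is an illustration, not ROBUST's exact code.

```python
# Classical vs robust location and scale estimates on data with 5% gross outliers.
import numpy as np

def trimmed_mean(x, prop=0.1):
    x = np.sort(x)
    k = int(prop * len(x))
    return x[k:len(x) - k].mean()

def mad(x):
    m = np.median(x)
    return np.median(np.abs(x - m))           # median absolute deviation from median

def biweight_scale(x, c=9.0):
    m = np.median(x)
    u = (x - m) / (c * mad(x))
    keep = np.abs(u) < 1
    num = np.sum(((x - m) ** 2 * (1 - u ** 2) ** 4)[keep])
    den = np.sum(((1 - u ** 2) * (1 - 5 * u ** 2))[keep])
    return np.sqrt(len(x) * num) / np.abs(den)

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(10, 2, 95), [40, 45, 50, 55, 60]])
print("arithmetic mean :", round(x.mean(), 2))
print("10% trimmed mean:", round(trimmed_mean(x), 2))
print("median          :", round(np.median(x), 2))
print("std deviation   :", round(x.std(ddof=1), 2))
print("MAD             :", round(mad(x), 2))
print("biweight scale  :", round(biweight_scale(x), 2))
```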
NASA Astrophysics Data System (ADS)
Profe, Jörn; Ohlendorf, Christian
2017-04-01
XRF scanning has been the state-of-the-art technique for geochemical analyses in marine and lacustrine sedimentology for more than a decade. However, little attention has been paid to data precision and technical limitations so far. Using homogenized, dried and powdered samples (certified geochemical reference standards and samples from a lithologically-contrasting loess-paleosol sequence) minimizes many adverse effects that influence the XRF signal when analyzing wet sediment cores. This allows the investigation of data precision under ideal conditions and documents a new application of the XRF core-scanner technology at the same time. Reliable interpretations of XRF results require evaluation of the data precision of single elements as a function of X-ray tube, measurement time, sample compaction and quality of peak fitting. Data precision is quantified by ten-fold measurement of each sample. The data precision of XRF measurements theoretically obeys Poisson statistics. Fe and Ca exhibit the largest deviations from Poisson statistics. The same elements show the lowest mean relative standard deviations, in the range from 0.5% to 1%. This represents the technical limit of data precision achievable by the installed detector. Measurement times ≥ 30 s reveal mean relative standard deviations below 4% for most elements. The quality of peak fitting is only relevant for elements with overlapping fluorescence lines, such as Ba, Ti and Mn, or for elements with low concentrations, such as Y. Differences in sample compaction are marginal and do not change the mean relative standard deviation considerably. Data precision is in the range reported for geochemical reference standards measured by conventional techniques. Therefore, XRF scanning of discrete samples provides a cost- and time-efficient alternative to conventional multi-element analyses. As the best trade-off between economical operation and data quality, we recommend a measurement time of 30 s, resulting in a total scan time of 30 minutes for 30 samples.
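The Poisson limit invoked above is easy to make concrete: for counting statistics the relative standard deviation of N counts is 1/sqrt(N), so longer measurement times push the replicate scatter toward that floor. The count rate below is hypothetical.

```python
# Ten-fold replicate counts at different measurement times, compared with the
# Poisson floor RSD = 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
count_rate = 800.0                       # counts per second for one element line (hypothetical)
for t in (10, 30, 60):                   # measurement time in seconds
    replicates = rng.poisson(count_rate * t, size=10)     # ten-fold measurement
    rsd_measured = 100 * replicates.std(ddof=1) / replicates.mean()
    rsd_poisson = 100 / np.sqrt(count_rate * t)
    print(f"t = {t:2d} s: measured RSD = {rsd_measured:4.2f}%, Poisson limit = {rsd_poisson:4.2f}%")
```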
Wéra, A-C; Barazzuol, L; Jeynes, J C G; Merchant, M J; Suzuki, M; Kirkby, K J
2014-08-07
It is well known that broad-beam irradiation with heavy ions leads to variation in the number of hits received by each cell, as the distribution of particles follows Poisson statistics. Although the nucleus area determines the number of hits received for a given dose, variation across the irradiated cell population is generally not considered. In this work, we investigate the effect of the nucleus area distribution on the survival fraction. More specifically, this work aims to explain the deviation, or tail, which might be observed in the survival fraction at high irradiation doses. For this purpose, the nucleus area distribution was added to the beam Poisson statistics and the Linear-Quadratic model in order to fit the experimental data. As shown in this study, nucleus size variation, and the associated Poisson statistics, can lead to an upward survival trend after broad-beam irradiation. The influence of the distribution parameters (mean area and standard deviation) was studied using a normal distribution, along with the Linear-Quadratic model parameters (α and β). Finally, the model proposed here was successfully tested against the survival fraction of LN18 cells irradiated with an 85 keV µm⁻¹ carbon-ion broad beam for which the distribution of the nucleus area had been determined.
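A schematic re-implementation of the averaging described above (not the paper's fitted model): traversals per nucleus are Poisson with mean proportional to nucleus area, the locally delivered dose scales with the number of traversals, and Linear-Quadratic survival is averaged over a normal area distribution. All parameter values are placeholders.

```python
# Survival fraction averaged over Poisson hit statistics and a normal nucleus-area
# distribution, with Linear-Quadratic cell killing. Dose per traversal uses
# D(Gy) = 0.1602 * LET(keV/um) / area(um^2) for unit-density tissue.
import numpy as np

rng = np.random.default_rng(5)
alpha, beta = 0.35, 0.05            # LQ parameters (Gy^-1, Gy^-2), hypothetical
mean_area, sd_area = 100.0, 25.0    # nucleus area in um^2, normal distribution
let = 85.0                          # keV/um, as in the carbon-ion experiment

def survival(dose_gy, n_cells=200_000):
    areas = np.clip(rng.normal(mean_area, sd_area, n_cells), 1.0, None)
    mean_hits = dose_gy / (0.1602 * let / mean_area)     # expected hits for the average nucleus
    hits = rng.poisson(mean_hits * areas / mean_area)    # per-cell Poisson traversal count
    local_dose = hits * 0.1602 * let / areas             # Gy actually delivered to each nucleus
    return np.mean(np.exp(-alpha * local_dose - beta * local_dose ** 2))

for d in (2, 4, 6, 8, 10):
    print(f"{d} Gy: S = {survival(d):.3e}")
```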
Robust regression for large-scale neuroimaging studies.
Fritsch, Virgile; Da Mota, Benoit; Loth, Eva; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Brühl, Rüdiger; Butzek, Brigitte; Conrod, Patricia; Flor, Herta; Garavan, Hugh; Lemaitre, Hervé; Mann, Karl; Nees, Frauke; Paus, Tomas; Schad, Daniel J; Schümann, Gunter; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand
2015-05-01
Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypotheses and call for more refined models for statistical inference. Here, we demonstrate the benefits of robust regression as a tool for analyzing large neuroimaging cohorts. First, we use an analytic test based on robust parameter estimates; based on simulations, this procedure is shown to provide an accurate statistical control without resorting to permutations. Second, we show that robust regression yields more detections than standard algorithms using as an example an imaging genetics study with 392 subjects. Third, we show that robust regression can avoid false positives in a large-scale analysis of brain-behavior relationships with over 1500 subjects. Finally we embed robust regression in the Randomized Parcellation Based Inference (RPBI) method and demonstrate that this combination further improves the sensitivity of tests carried out across the whole brain. Altogether, our results show that robust procedures provide important advantages in large-scale neuroimaging group studies. Copyright © 2015 Elsevier Inc. All rights reserved.
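To see why a robust fit behaves differently from ordinary least squares in the presence of heavy-tailed residuals, here is a minimal Huber-type regression via iteratively reweighted least squares on simulated data; it is a generic sketch, not the pipeline used in the study.

```python
# Huber regression via iteratively reweighted least squares (IRLS) compared with
# ordinary least squares on data containing 5% gross outliers.
import numpy as np

def huber_irls(X, y, k=1.345, n_iter=50):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]             # OLS start
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745    # robust (MAD) scale
        u = r / (k * s)
        w = np.where(np.abs(u) <= 1.0, 1.0, 1.0 / np.abs(u))   # Huber weights
        sw = np.sqrt(w)[:, None]
        beta = np.linalg.lstsq(X * sw, y * sw.ravel(), rcond=None)[0]
    return beta

rng = np.random.default_rng(11)
n = 500
x = rng.normal(size=n)
y = 0.3 * x + rng.normal(size=n)
idx = np.argsort(x)[-25:]            # 5% gross outliers placed at large x
y[idx] -= 8.0
X = np.column_stack([np.ones(n), x])
print("OLS slope  :", round(np.linalg.lstsq(X, y, rcond=None)[0][1], 3))
print("Huber slope:", round(huber_irls(X, y)[1], 3))
```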
Evidence-based orthodontics. Current statistical trends in published articles in one journal.
Law, Scott V; Chudasama, Dipak N; Rinchuse, Donald J
2010-09-01
To ascertain the number, type, and overall usage of statistics in American Journal of Orthodontics and Dentofacial Orthopedics (AJODO) articles for 2008. These data were then compared to data from three previous years: 1975, 1985, and 2003. The original articles published in AJODO in 2008 were dichotomized into those using statistics and those not using statistics, and the frequency and distribution of the statistics used were recorded. Statistical procedures were then broadly divided into descriptive statistics (mean, standard deviation, range, percentage) and inferential statistics (t-test, analysis of variance). Descriptive statistics were used to make comparisons. In 1975, 1985, 2003, and 2008, AJODO published 72, 87, 134, and 141 original articles, respectively. The percentage of original articles using statistics was 43.1% in 1975, 75.9% in 1985, 94.0% in 2003, and 92.9% in 2008; the proportion of original articles using statistics stayed essentially the same from 2003 to 2008, with only a small 1.1% decrease. The percentage of articles using inferential statistical analyses was 23.7% in 1975, 74.2% in 1985, 92.9% in 2003, and 84.4% in 2008. Comparing AJODO publications in 2003 and 2008, there was an 8.5% increase in articles using only descriptive statistics (from 7.1% to 15.6%) and an 8.5% decrease in articles using inferential statistics (from 92.9% to 84.4%).
Creating Realistic Data Sets with Specified Properties via Simulation
ERIC Educational Resources Information Center
Goldman, Robert N.; McKenzie, John D. Jr.
2009-01-01
We explain how to simulate both univariate and bivariate raw data sets having specified values for common summary statistics. The first example illustrates how to "construct" a data set having prescribed values for the mean and the standard deviation--for a one-sample t test with a specified outcome. The second shows how to create a bivariate data…
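One common construction for the first example is to standardize an arbitrary sample and rescale it, which pins the sample mean and standard deviation exactly; the article's own recipe may differ.

```python
# Build a sample whose sample mean and sample standard deviation are exactly
# the prescribed values, by standardizing and rescaling random draws.
import numpy as np

rng = np.random.default_rng(2024)

def sample_with_stats(n, mean, sd):
    x = rng.normal(size=n)
    z = (x - x.mean()) / x.std(ddof=1)     # sample mean 0, sample SD 1, exactly
    return mean + sd * z

data = sample_with_stats(25, mean=100.0, sd=15.0)
print(round(data.mean(), 6), round(data.std(ddof=1), 6))   # 100.0 15.0
```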
Government Expenditures on Education as the Percentage of GDP in the EU
ERIC Educational Resources Information Center
Galetic, Fran
2015-01-01
This paper analyzes the government expenditures as the percentage of gross domestic product across countries of the European Union. There is a statistical model based on Z-score, whose aim is to calculate how much each EU country deviates from the average value. The model shows that government expenditures on education vary significantly between…
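The Z-score deviation underlying such a model is simply each country's distance from the average in standard-deviation units; the spending figures below are placeholders, not Eurostat data.

```python
# Z-scores of education spending (% of GDP) relative to the group average.
import numpy as np

spending = {"Country A": 6.1, "Country B": 5.4, "Country C": 4.0,
            "Country D": 5.0, "Country E": 6.8}        # % of GDP, invented values
values = np.array(list(spending.values()))
mu, sigma = values.mean(), values.std(ddof=1)
for country, v in spending.items():
    print(f"{country}: z = {(v - mu) / sigma:+.2f}")
```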
Using Group Projects to Assess the Learning of Sampling Distributions
ERIC Educational Resources Information Center
Neidigh, Robert O.; Dunkelberger, Jake
2012-01-01
In an introductory business statistics course, student groups used sample data to compare a set of sample means to the theoretical sampling distribution. Each group was given a production measurement with a population mean and standard deviation. The groups were also provided an excel spreadsheet with 40 sample measurements per week for 52 weeks…
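The comparison the groups carry out can be mimicked as follows: weekly sample means should scatter around the population mean with standard error sigma/sqrt(n). The production parameters below are hypothetical.

```python
# Compare the observed spread of 52 weekly sample means (n = 40 each) with the
# theoretical sampling-distribution standard error sigma/sqrt(n).
import numpy as np

rng = np.random.default_rng(52)
mu, sigma, n_per_week, weeks = 50.0, 4.0, 40, 52       # hypothetical production spec
samples = rng.normal(mu, sigma, size=(weeks, n_per_week))
weekly_means = samples.mean(axis=1)

print(f"theoretical standard error of the mean: {sigma / np.sqrt(n_per_week):.3f}")
print(f"observed SD of the 52 weekly means    : {weekly_means.std(ddof=1):.3f}")
```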
ERIC Educational Resources Information Center
Barchard, Kimberly A.
2012-01-01
This article introduces new statistics for evaluating score consistency. Psychologists usually use correlations to measure the degree of linear relationship between 2 sets of scores, ignoring differences in means and standard deviations. In medicine, biology, chemistry, and physics, a more stringent criterion is often used: the extent to which…
Offshore fatigue design turbulence
NASA Astrophysics Data System (ADS)
Larsen, Gunner C.
2001-07-01
Fatigue damage on wind turbines is mainly caused by stochastic loading originating from turbulence. While onshore sites display large differences in terrain topology, and thereby also in turbulence conditions, offshore sites are far more homogeneous, as the majority of them are likely to be associated with shallow water areas. However, despite this fact, specific recommendations on offshore turbulence intensities, applicable for fatigue design purposes, are lacking in the present IEC code. This article presents specific guidelines for such loading. These guidelines are based on the statistical analysis of a large number of wind data originating from two Danish shallow water offshore sites. The turbulence standard deviation depends on the mean wind speed, upstream conditions, measuring height and thermal convection. Defining a population of turbulence standard deviations, at a given measuring position, uniquely by the mean wind speed, variations in upstream conditions and atmospheric stability will appear as variability of the turbulence standard deviation. Distributions of such turbulence standard deviations, conditioned on the mean wind speed, are quantified by fitting the measured data to logarithmic Gaussian distributions. By combining a simple heuristic load model with the parametrized conditional probability density functions of the turbulence standard deviations, an empirical offshore design turbulence intensity is determined. For pure stochastic loading (as associated with standstill situations), the design turbulence intensity yields a fatigue damage equal to the average fatigue damage caused by the distributed turbulence intensity. If the stochastic loading is combined with a periodic deterministic loading (as in the normal operating situation), the proposed design turbulence intensity is shown to be conservative.
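A sketch of the two ingredients, with placeholder numbers: (1) treat the turbulence standard deviation at a given mean wind speed as lognormally distributed, and (2) pick the single fatigue-equivalent value assuming damage grows with the m-th power of the turbulence, which is one simple reading of the heuristic load model mentioned above.

```python
# Fatigue-equivalent design turbulence from a lognormal distribution of turbulence
# standard deviations, assuming damage scales as sigma^m (Woehler exponent m).
import numpy as np

rng = np.random.default_rng(8)
m = 4                                       # S-N (Woehler) exponent, hypothetical
V = 10.0                                    # mean wind speed bin, m/s
sigma_u = rng.lognormal(mean=np.log(0.9), sigma=0.25, size=20_000)   # m/s

# The design value that reproduces the average damage; by Jensen's inequality it
# exceeds the plain mean of sigma_u.
sigma_eq = np.mean(sigma_u ** m) ** (1.0 / m)
print(f"mean turbulence std dev    : {sigma_u.mean():.3f} m/s")
print(f"fatigue-equivalent std dev : {sigma_eq:.3f} m/s")
print(f"design turbulence intensity: {sigma_eq / V:.3f}")
```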
Implementation of an Algorithm for Prosthetic Joint Infection: Deviations and Problems.
Mühlhofer, Heinrich M L; Kanz, Karl-Georg; Pohlig, Florian; Lenze, Ulrich; Lenze, Florian; Toepfer, Andreas; von Eisenhart-Rothe, Ruediger; Schauwecker, Johannes
The outcome of revision surgery in arthroplasty is based on a precise diagnosis. In addition, the treatment varies based on whether the prosthetic failure is caused by aseptic or septic loosening. Algorithms can help to identify periprosthetic joint infections (PJI) and standardize diagnostic steps, however, algorithms tend to oversimplify the treatment of complex cases. We conducted a process analysis during the implementation of a PJI algorithm to determine problems and deviations associated with the implementation of this algorithm. Fifty patients who were treated after implementing a standardized algorithm were monitored retrospectively. Their treatment plans and diagnostic cascades were analyzed for deviations from the implemented algorithm. Each diagnostic procedure was recorded, compared with the algorithm, and evaluated statistically. We detected 52 deviations while treating 50 patients. In 25 cases, no discrepancy was observed. Synovial fluid aspiration was not performed in 31.8% of patients (95% confidence interval [CI], 18.1%-45.6%), while white blood cell counts (WBCs) and neutrophil differentiation were assessed in 54.5% of patients (95% CI, 39.8%-69.3%). We also observed that the prolonged incubation of cultures was not requested in 13.6% of patients (95% CI, 3.5%-23.8%). In seven of 13 cases (63.6%; 95% CI, 35.2%-92.1%), arthroscopic biopsy was performed; 6 arthroscopies were performed in discordance with the algorithm (12%; 95% CI, 3%-21%). Self-critical analysis of diagnostic processes and monitoring of deviations using algorithms are important and could increase the quality of treatment by revealing recurring faults.
Visual field changes after cataract extraction: the AGIS experience.
Koucheki, Behrooz; Nouri-Mahdavi, Kouros; Patel, Gitane; Gaasterland, Douglas; Caprioli, Joseph
2004-12-01
To test the hypothesis that cataract extraction in glaucomatous eyes improves overall sensitivity of visual function without affecting the size or depth of glaucomatous scotomas. Experimental study with no control group. One hundred fifty-eight eyes (of 140 patients) from the Advanced Glaucoma Intervention Study with at least two reliable visual fields within a year both before and after cataract surgery were included. Average mean deviation (MD), pattern standard deviation (PSD), and corrected pattern standard deviation (CPSD) were compared before and after cataract extraction. To evaluate changes in scotoma size, the number of abnormal points (P < .05) on the pattern deviation plot was compared before and after surgery. We described an index ("scotoma depth index") to investigate changes of scotoma depth after surgery. Mean values for MD, PSD, and CPSD were -13.2, 6.4, and 5.9 dB before and -11.9, 6.8, and 6.2 dB after cataract surgery (P < or = .001 for all comparisons). Mean (+/- SD) number of abnormal points on pattern deviation plot was 26.7 +/- 9.4 and 27.5 +/- 9.0 before and after cataract surgery, respectively (P = .02). Scotoma depth index did not change after cataract extraction (-19.3 vs -19.2 dB, P = .90). Cataract extraction caused generalized improvement of the visual field, which was most marked in eyes with less advanced glaucomatous damage. Although the enlargement of scotomas was statistically significant, it was not clinically meaningful. No improvement of sensitivity was observed in the deepest part of the scotomas.
Durand, Marc; Käfer, Jos; Quilliet, Catherine; Cox, Simon; Talebi, Shirin Ataei; Graner, François
2011-10-14
We propose an analytical model for the statistical mechanics of shuffled two-dimensional foams with moderate bubble size polydispersity. It predicts without any adjustable parameters the correlations between the number of sides n of the bubbles (topology) and their areas A (geometry) observed in experiments and numerical simulations of shuffled foams. Detailed statistics show that in shuffled cellular patterns n correlates better with √A (as claimed by Desch and Feltham) than with A (as claimed by Lewis and widely assumed in the literature). At the level of the whole foam, standard deviations Δn and ΔA are in proportion. Possible applications include correlations of the detailed distributions of n and A, three-dimensional foams, and biological tissues.
Santric-Milicevic, M; Vasic, V; Terzic-Supic, Z
2016-08-15
In times of austerity, the availability of econometric health knowledge assists policy-makers in understanding and balancing health expenditure with health care plans within fiscal constraints. The objective of this study is to explore whether the health workforce supply of the public health care sector, population number, and utilization of inpatient care significantly contribute to total health expenditure. The dependent variable is the total health expenditure (THE) in Serbia from 2003 to 2011. The independent variables are the number of health workers employed in the public health care sector, population number, and inpatient care discharges per 100 population. The statistical analyses include the quadratic interpolation method, natural logarithm and differentiation, and multiple linear regression analyses. The level of significance is set at P < 0.05. The regression model captures 90% of the variation in the observed dependent variable (adjusted R-squared), and the model is significant (P < 0.001). Total health expenditure increased by 1.21 standard deviations with a 1 standard deviation increase in the health workforce growth rate. Furthermore, this rate decreased by 1.12 standard deviations with a 1 standard deviation increase in the (negative) population growth rate. Finally, the growth rate increased by 0.38 standard deviations with a 1 standard deviation increase in the growth rate of inpatient care discharges per 100 population (P < 0.001). The study results demonstrate that the government has been making an effort to strongly control health budget growth. Exploring causal relationships between health expenditure and health workforce is important for countries that are trying to consolidate their public health finances and achieve universal health coverage at the same time.
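Coefficients quoted in standard-deviation units arise from regressing the z-scored dependent variable on z-scored predictors, as in the toy example below; the series are synthetic, not the Serbian data.

```python
# Standardized (beta) regression coefficients: SD change in total expenditure per
# 1 SD change in each predictor, estimated on synthetic annual series.
import numpy as np

rng = np.random.default_rng(9)
n = 9                                                  # nine annual observations, 2003-2011
workforce = rng.normal(0, 1, n).cumsum()
population = -np.arange(n, dtype=float) + rng.normal(0, 0.3, n)   # declining population
discharges = rng.normal(0, 1, n)
total_exp = 1.2 * workforce - 1.1 * population + 0.4 * discharges + rng.normal(0, 0.3, n)

def z(v):
    return (v - v.mean()) / v.std(ddof=1)

X = np.column_stack([z(workforce), z(population), z(discharges)])
beta, *_ = np.linalg.lstsq(X, z(total_exp), rcond=None)
print("standardized coefficients (SD of THE per SD of predictor):", beta.round(2))
```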
Exact Large-Deviation Statistics for a Nonequilibrium Quantum Spin Chain
NASA Astrophysics Data System (ADS)
Žnidarič, Marko
2014-01-01
We consider a one-dimensional XX spin chain in a nonequilibrium setting with a Lindblad-type boundary driving. By calculating the large-deviation rate function in the thermodynamic limit, a generalization of the free energy to a nonequilibrium setting, we obtain the complete distribution of the current, including closed expressions for lower-order cumulants. We also identify two phase-transition-like behaviors: one in the thermodynamic limit, at which the current probability distribution becomes discontinuous, and one at maximal driving, at which the range of possible current values changes discontinuously. In the thermodynamic limit the current has a finite upper and lower bound. We also explicitly confirm the nonequilibrium fluctuation relation and show that the current distribution is the same under the mapping of the coupling strength Γ→1/Γ.
System statistical reliability model and analysis
NASA Technical Reports Server (NTRS)
Lekach, V. S.; Rood, H.
1973-01-01
A digital computer code was developed to simulate the time-dependent behavior of the 5-kWe reactor thermoelectric system. The code was used to determine lifetime sensitivity coefficients for a number of system design parameters, such as thermoelectric module efficiency and degradation rate, radiator absorptivity and emissivity, fuel element barrier defect constant, beginning-of-life reactivity, etc. A probability distribution (mean and standard deviation) was estimated for each of these design parameters. Then, error analysis was used to obtain a probability distribution for the system lifetime (mean = 7.7 years, standard deviation = 1.1 years). From this, the probability that the system will achieve the design-goal lifetime of 5 years is 0.993. This value represents an estimate of the degradation reliability of the system.
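The quoted reliability follows directly from a normal lifetime model and can be checked in one line.

```python
# P(lifetime > 5 yr) for a Normal(7.7, 1.1) lifetime distribution, matching the
# 0.993 value quoted in the abstract.
from scipy.stats import norm

mean_life, sd_life, goal = 7.7, 1.1, 5.0
print(f"P(lifetime > {goal} yr) = {norm.sf(goal, loc=mean_life, scale=sd_life):.3f}")
```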
Antipodal hotspot pairs on the earth
NASA Technical Reports Server (NTRS)
Rampino, Michael R.; Caldeira, Ken
1992-01-01
The results of statistical analyses performed on three published hotspot distributions suggest that significantly more hotspots occur as nearly antipodal pairs than is anticipated from a random distribution, or from their association with geoid highs and divergent plate margins. The observed number of antipodal hotspot pairs depends on the maximum allowable deviation from exact antipodality. At a maximum deviation of not greater than 700 km, 26 to 37 percent of hotspots form antipodal pairs in the published lists examined here, significantly more than would be expected from the general hotspot distribution. Two possible mechanisms that might create such a distribution include: (1) symmetry in the generation of mantle plumes, and (2) melting related to antipodal focusing of seismic energy from large-body impacts.
2012-10-09
many papers thereafter cannot be obtained. A. Semi-ordered Pack. [Figure residue removed; FIG. 16 caption: Mean and standard deviation of the two-point probability functions S_rs (Smm, Sme, See) as a function of radius in mm, panels (a) and (b).] ... functions reflect this behavior and smooth out these standard deviation peaks.
Statistical characterization of the nonlinear noise in 2.8 Tbit/s PDM-16QAM CO-OFDM system.
Wang, Zhe; Qiao, Yaojun; Xu, Yanfei; Ji, Yuefeng
2013-07-29
We show for the first time, through comprehensive simulations under both uncompensated transmission (UT) and dispersion managed transmission (DMT) systems, that the statistical distribution of the nonlinear interference (NLI) within the polarization multiplexed 16-state quadrature amplitude modulation (PM-16QAM) Coherent Optical OFDM (CO-OFDM) system deviates from a Gaussian distribution in the absence of amplified spontaneous emission (ASE) noise. We also observe that the variance of the NLI noise appears to depend in a simple linear way on both the launch power and the logarithm of the transmission distance.
Ballanti, Fabiana; Baldini, Alberto; Ranieri, Salvatore; Nota, Alessandro; Cozza, Paola
2016-04-01
A deviated nasal septum may reduce nasal airflow; during craniofacial development, reduced nasal airflow could give rise to a chronic mouth-breathing pattern associated with moderate to severe maxillary constriction. The aim of this retrospective study is to analyze the correlation between maxillary transverse deficiency and nasal septum deviation. Frontal cephalometric analysis was performed on 66 posterior-anterior radiographs of subjects (34M, 32F; mean age 9.95±2.50 years) with maxillary transverse deficiency and on a control group of 31 posterior-anterior radiographs of subjects (13M, 18F; 9.29±2.08 years). Angular parameters of the nasal cavities were recorded and compared between the two groups using a Student's t-test. Overall, the parameters were very similar between the two groups, except for the ASY angle, which differed by about 27%; however, the Student's t-test showed no statistically significant differences between the two groups (mostly p>0.20). This study failed to show an association between transverse maxillary deficiencies and nasal septum deviations. Moreover, no significant differences were found between the mean nasal cavity dimensions in subjects with transverse maxillary deficiency and the control group. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Changes in deviation of absorbed dose to water among users by chamber calibration shift.
Katayose, Tetsurou; Saitoh, Hidetoshi; Igari, Mitsunobu; Chang, Weishan; Hashimoto, Shimpei; Morioka, Mie
2017-07-01
The JSMP01 dosimetry protocol had adopted the provisional 60Co calibration coefficient [Formula: see text], namely, the product of the exposure calibration coefficient N_C and the conversion coefficient k_D,X. After that, the absorbed dose to water (D_w) standard was established, and the JSMP12 protocol adopted the [Formula: see text] calibration. In this study, the influence of the calibration shift on the measurement of D_w among users was analyzed. An intercomparison of D_w using an ionization chamber was performed annually by visiting the related hospitals. Intercomparison results before and after the calibration shift were analyzed, the deviation of D_w among users was re-evaluated, and the cause of the deviation was estimated. As a result, the stability of the LINAC, the calibration of the thermometer and barometer, and the collection method for ion recombination were confirmed. No statistically significant change in the standard deviation of D_w was observed, but a statistically significant difference in D_w among users was observed between the N_C and [Formula: see text] calibrations. Uncertainty due to chamber-to-chamber variation was reduced by the calibration shift, consequently reducing the uncertainty among users regarding D_w. The results also pointed out that uncertainty might be reduced by accurate and detailed instructions on the setup of the ionization chamber.
Cosmological implications of a large complete quasar sample
Segal, I. E.; Nicoll, J. F.
1998-01-01
Objective and reproducible determinations of the probabilistic significance levels of the deviations between theoretical cosmological prediction and direct model-independent observation are made for the Large Bright Quasar Sample [Foltz, C., Chaffee, F. H., Hewett, P. C., MacAlpine, G. M., Turnshek, D. A., et al. (1987) Astron. J. 94, 1423–1460]. The Expanding Universe model as represented by the Friedman–Lemaitre cosmology with parameters qo = 0, Λ = 0 denoted as C1 and chronometric cosmology (no relevant adjustable parameters) denoted as C2 are the cosmologies considered. The mean and the dispersion of the apparent magnitudes and the slope of the apparent magnitude–redshift relation are the directly observed statistics predicted. The C1 predictions of these cosmology-independent quantities are deviant by as much as 11σ from direct observation; none of the C2 predictions deviate by >2σ. The C1 deviations may be reconciled with theory by the hypothesis of quasar “evolution,” which, however, appears incapable of being substantiated through direct observation. The excellent quantitative agreement of the C1 deviations with those predicted by C2 without adjustable parameters for the results of analysis predicated on C1 indicates that the evolution hypothesis may well be a theoretical artifact. PMID:9560182
Bhatia, Gaurav; Tandon, Arti; Patterson, Nick; Aldrich, Melinda C.; Ambrosone, Christine B.; Amos, Christopher; Bandera, Elisa V.; Berndt, Sonja I.; Bernstein, Leslie; Blot, William J.; Bock, Cathryn H.; Caporaso, Neil; Casey, Graham; Deming, Sandra L.; Diver, W. Ryan; Gapstur, Susan M.; Gillanders, Elizabeth M.; Harris, Curtis C.; Henderson, Brian E.; Ingles, Sue A.; Isaacs, William; De Jager, Phillip L.; John, Esther M.; Kittles, Rick A.; Larkin, Emma; McNeill, Lorna H.; Millikan, Robert C.; Murphy, Adam; Neslund-Dudas, Christine; Nyante, Sarah; Press, Michael F.; Rodriguez-Gil, Jorge L.; Rybicki, Benjamin A.; Schwartz, Ann G.; Signorello, Lisa B.; Spitz, Margaret; Strom, Sara S.; Tucker, Margaret A.; Wiencke, John K.; Witte, John S.; Wu, Xifeng; Yamamura, Yuko; Zanetti, Krista A.; Zheng, Wei; Ziegler, Regina G.; Chanock, Stephen J.; Haiman, Christopher A.; Reich, David; Price, Alkes L.
2014-01-01
The extent of recent selection in admixed populations is currently an unresolved question. We scanned the genomes of 29,141 African Americans and failed to find any genome-wide-significant deviations in local ancestry, indicating no evidence of selection influencing ancestry after admixture. A recent analysis of data from 1,890 African Americans reported that there was evidence of selection in African Americans after their ancestors left Africa, both before and after admixture. Selection after admixture was reported on the basis of deviations in local ancestry, and selection before admixture was reported on the basis of allele-frequency differences between African Americans and African populations. The local-ancestry deviations reported by the previous study did not replicate in our very large sample, and we show that such deviations were expected purely by chance, given the number of hypotheses tested. We further show that the previous study’s conclusion of selection in African Americans before admixture is also subject to doubt. This is because the FST statistics they used were inflated and because true signals of unusual allele-frequency differences between African Americans and African populations would be best explained by selection that occurred in Africa prior to migration to the Americas. PMID:25242497
Quantifying fluctuations in economic systems by adapting methods of statistical physics
NASA Astrophysics Data System (ADS)
Stanley, H. E.; Gopikrishnan, P.; Plerou, V.; Amaral, L. A. N.
2000-12-01
The emerging subfield of econophysics explores the degree to which certain concepts and methods from statistical physics can be appropriately modified and adapted to provide new insights into questions that have been the focus of interest in the economics community. Here we give a brief overview of two examples of research topics that are receiving recent attention. A first topic is the characterization of the dynamics of stock price fluctuations. For example, we investigate the relation between trading activity - measured by the number of transactions N_Δt - and the price change G_Δt for a given stock, over a time interval [t, t+Δt]. We relate the time-dependent standard deviation of price fluctuations - volatility - to two microscopic quantities: the number of transactions N_Δt in Δt and the variance W²_Δt of the price changes for all transactions in Δt. Our work indicates that while the pronounced tails in the distribution of price fluctuations arise from W_Δt, the long-range correlations found in |G_Δt| are largely due to N_Δt. We also investigate the relation between price fluctuations and the number of shares Q_Δt traded in Δt. We find that the distribution of Q_Δt is consistent with a stable Lévy distribution, suggesting a Lévy scaling relationship between Q_Δt and N_Δt, which would provide one explanation for volume-volatility co-movement. A second topic concerns cross-correlations between the price fluctuations of different stocks. We adapt a conceptual framework, random matrix theory (RMT), first used in physics to interpret statistical properties of nuclear energy spectra. RMT makes predictions for the statistical properties of matrices that are universal, that is, do not depend on the interactions between the elements comprising the system. In physics systems, deviations from the predictions of RMT provide clues regarding the mechanisms controlling the dynamics of a given system, so this framework can be of potential value if applied to economic systems. We discuss a systematic comparison between the statistics of the cross-correlation matrix C - whose elements C_ij are the correlation coefficients between the returns of stock i and j - and that of a random matrix having the same symmetry properties. Our work suggests that RMT can be used to distinguish random and non-random parts of C; the non-random part of C, which deviates from RMT results, provides information regarding genuine cross-correlations between stocks.
The Statistical Drake Equation
NASA Astrophysics Data System (ADS)
Maccone, Claudio
2010-12-01
We provide the statistical generalization of the Drake equation. From a simple product of seven positive numbers, the Drake equation is now turned into the product of seven positive random variables. We call this "the Statistical Drake Equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov Form of the CLT, or the Lindeberg Form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that: The new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the LOGNORMAL distribution. Then, as a consequence, the mean value of this lognormal distribution is the ordinary N in the Drake equation. The standard deviation, mode, and all the moments of this lognormal N are also found. The seven factors in the ordinary Drake equation now become seven positive random variables. The probability distribution of each random variable may be ARBITRARY. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our statistical Drake equation by allowing an arbitrary probability distribution for each factor. This is both physically realistic and practically very useful, of course. An application of our statistical Drake equation then follows. The (average) DISTANCE between any two neighboring and communicating civilizations in the Galaxy may be shown to be inversely proportional to the cubic root of N. Then, in our approach, this distance becomes a new random variable. We derive the relevant probability density function, apparently previously unknown and dubbed "Maccone distribution" by Paul Davies. DATA ENRICHMENT PRINCIPLE. It should be noticed that ANY positive number of random variables in the Statistical Drake Equation is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as long as more refined scientific knowledge about each factor will be known to the scientists. This capability to make room for more future factors in the statistical Drake equation, we call the "Data Enrichment Principle," and we regard it as the key to more profound future results in the fields of Astrobiology and SETI. Finally, a practical example is given of how our statistical Drake equation works numerically. We work out in detail the case, where each of the seven random variables is uniformly distributed around its own mean value and has a given standard deviation. For instance, the number of stars in the Galaxy is assumed to be uniformly distributed around (say) 350 billions with a standard deviation of (say) 1 billion. Then, the resulting lognormal distribution of N is computed numerically by virtue of a MathCad file that the author has written. This shows that the mean value of the lognormal random variable N is actually of the same order as the classical N given by the ordinary Drake equation, as one might expect from a good statistical generalization.
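A quick numerical illustration of the construction: multiply seven independent uniform factors, compare the Monte Carlo mean of the product with the classical product of the means (they agree in expectation for independent factors), and inspect how close log N is to normal. The factor ranges below are arbitrary, not the paper's values.

```python
# Monte Carlo version of the statistical Drake equation: product of seven
# independent uniform factors, compared with the classical product of means and
# checked against an approximately lognormal (normal-in-log) distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n = 200_000
ranges = [(100e9, 400e9), (0.2, 0.8), (0.5, 2.0), (0.05, 0.3),
          (0.05, 0.3), (0.01, 0.2), (1e-6, 1e-4)]        # arbitrary (low, high) per factor
factors = np.column_stack([rng.uniform(lo, hi, n) for lo, hi in ranges])
N = factors.prod(axis=1)

classical = np.prod([(lo + hi) / 2 for lo, hi in ranges])   # ordinary Drake value
logN = np.log(N)
ks = stats.kstest(logN, "norm", args=(logN.mean(), logN.std()))
print(f"classical product of means: {classical:.3e}")
print(f"Monte Carlo mean of N     : {N.mean():.3e}")
print(f"KS distance of log N from a fitted normal: {ks.statistic:.3f}")
```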
The impact of prism adaptation test on surgical outcomes in patients with primary exotropia.
Kiyak Yilmaz, Ayse; Kose, Suheyla; Guven Yilmaz, Suzan; Uretmen, Onder
2015-05-01
We aimed to determine the impact of the preoperative prism adaptation test (PAT) on surgical outcomes in patients with primary exotropia. Thirty-eight consecutive patients with primary exotropia were enrolled. Pre-operative PAT was performed in 18 randomly selected patients (Group 1). Surgery was based on the angle of deviation at distance measured after PAT. The remaining 20 patients in whom PAT was not performed comprised Group 2. Surgery was based on the angle of deviation at distance in these patients. Surgical success was defined as ocular alignment within eight prism dioptres (PD) of orthophoria. Satisfactory motor alignment (± 8 PD) was achieved in 16 Group 1 patients (88.9 per cent) and 16 Group 2 patients (80 per cent) one year after surgery (p = 0.6; chi-square test). There were no statistically significant differences in demographic parameters, pre-operative and post-operative angle of deviation between the two groups (p > 0.05; Mann-Whitney U and chi-square tests). Nine patients in Group 1 (50 per cent) and two patients in Group 2 (10 per cent) had increased binocular vision one year post-operatively. A statistically significant difference was determined in terms of change in binocular single vision between the two groups (p = 0.01; chi-square test). Although the prism adaptation test did not lead to a significant increment in motor success, it may be helpful in achieving a more favourable functional surgical outcome in patients with primary exotropia. © 2014 The Authors. Clinical and Experimental Optometry © 2014 Optometrists Association Australia.
When things go pear shaped: contour variations of contacts
NASA Astrophysics Data System (ADS)
Utzny, Clemens
2013-04-01
Traditional control of critical dimensions (CD) on photolithographic masks considers the CD average and a measure of the CD variation, such as the CD range or the standard deviation. Systematic CD deviations from the mean, such as CD signatures, are also subject to this control. These measures are valid for mask quality verification as long as patterns across a mask exhibit only size variations and no shape variation. The issue of shape variations becomes especially important in the context of contact holes on EUV masks. For EUV masks the CD error budget is much smaller than for standard optical masks. This means that small deviations from the contact shape can impact EUV wafer prints, in the sense that contact shape deformations induce asymmetric bridging phenomena. In this paper we present a detailed study of contact shape variations based on regular product data. Two data sets are analyzed: 1) contacts of varying target size and 2) a regularly spaced field of contacts. Here, the methods of statistical shape analysis are used to analyze CD-SEM generated contour data. We demonstrate that contacts on photolithographic masks show not only size variations but also pronounced nontrivial shape variations. In our data sets we find pronounced shape variations which can be interpreted as asymmetrical shape squeezing and contact rounding. Thus we demonstrate the limitations of classic CD measures for describing the feature variations on masks. Furthermore, we show how the methods of statistical shape analysis can be used for quantifying the contour variations, thus paving the way to a new understanding of mask linearity and its specification.
Order statistics applied to the most massive and most distant galaxy clusters
NASA Astrophysics Data System (ADS)
Waizmann, J.-C.; Ettori, S.; Bartelmann, M.
2013-06-01
In this work, we present an analytic framework for calculating the individual and joint distributions of the nth most massive or nth highest redshift galaxy cluster for a given survey characteristic, allowing us to formulate Λ cold dark matter (ΛCDM) exclusion criteria. We show that the cumulative distribution functions steepen with increasing order, giving them a higher constraining power with respect to the extreme value statistics. Additionally, we find that the order statistics in mass (being dominated by clusters at lower redshifts) is sensitive to the matter density and the normalization of the matter fluctuations, whereas the order statistics in redshift is particularly sensitive to the geometric evolution of the Universe. For a fixed cosmology, both order statistics are efficient probes of the functional shape of the mass function at the high-mass end. To allow a quick assessment of both order statistics, we provide fits as a function of the survey area that allow percentile estimation with an accuracy better than 2 per cent. Furthermore, we discuss the joint distributions in the two-dimensional case and find that for the combination of the largest and the second largest observation, it is most likely to find them to be realized with similar values with a broadly peaked distribution. When combining the largest observation with higher orders, it is more likely to find a larger gap between the observations and when combining higher orders in general, the joint probability density function peaks more strongly. Having introduced the theory, we apply the order statistical analysis to the South Pole Telescope (SPT) massive cluster sample and the meta-catalogue of X-ray detected clusters of galaxies and find that the 10 most massive clusters in the sample are consistent with ΛCDM and the Tinker mass function. For the order statistics in redshift, we find a discrepancy between the data and the theoretical distributions, which could in principle indicate a deviation from the standard cosmology. However, we attribute this deviation to the uncertainty in the modelling of the SPT survey selection function. In turn, by assuming the ΛCDM reference cosmology, order statistics can also be utilized for consistency checks of the completeness of the observed sample and of the modelling of the survey selection function.
Desensitizing the posterior interosseous nerve alters wrist proprioceptive reflexes.
Hagert, Elisabet; Persson, Jonas K E
2010-07-01
The presence of wrist proprioceptive reflexes after stimulation of the dorsal scapholunate interosseous ligament has previously been described. Because this ligament is primarily innervated by the posterior interosseous nerve (PIN) we hypothesized altered ligamento-muscular reflex patterns following desensitization of the PIN. Eight volunteers (3 women, 5 men; mean age, 26 y; range 21-28 y) participated in the study. In the first study on wrist proprioceptive reflexes (study 1), the scapholunate interosseous ligament was stimulated through a fine-wire electrode with 4 1-ms bipolar pulses at 200 Hz, 30 times consecutively, while EMG activity was recorded from the extensor carpi radialis brevis, extensor carpi ulnaris, flexor carpi radialis, and flexor carpi ulnaris, with the wrist in extension, flexion, radial deviation, and ulnar deviation. After completion of study 1, the PIN was anesthetized in the radial aspect of the fourth extensor compartment using 2-mL lidocaine (10 mg/mL) infiltration anesthesia. Ten minutes after desensitization, the experiment was repeated as in study 1. The average EMG results from the 30 consecutive stimulations were rectified and analyzed using Student's t-test. Statistically significant changes in EMG amplitude were plotted along time lines so that the results of study 1 and 2 could be compared. Dramatic alterations in reflex patterns were observed in wrist flexion, radial deviation, and ulnar deviation following desensitization of the PIN, with an average of 72% reduction in excitatory reactions. In ulnar deviation, the inhibitory reactions of the extensor carpi ulnaris were entirely eliminated. In wrist extension, no differences in the reflex patterns were observed. Wrist proprioception through the scapholunate ligament in flexion, radial deviation, and ulnar deviation depends on an intact PIN function. The unchanged reflex patterns in wrist extension suggest an alternate proprioceptive pathway for this position. Routine excision of the PIN during wrist surgical procedures should be avoided, as it alters the proprioceptive function of the wrist. Therapeutic IV. Copyright 2010 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
Evaluating Silent Reading Performance with an Eye Tracking System in Patients with Glaucoma
Murata, Noriaki; Fukuchi, Takeo
2017-01-01
Objective To investigate the relationship between silent reading performance and visual field defects in patients with glaucoma using an eye tracking system. Methods Fifty glaucoma patients (Group G; mean age, 52.2 years, standard deviation: 11.4 years) and 20 normal controls (Group N; mean age, 46.9 years; standard deviation: 17.2 years) were included in the study. All participants in Group G had early to advanced glaucomatous visual field defects but better than 20/20 visual acuity in both eyes. Participants silently read Japanese articles written horizontally while the eye tracking system monitored and calculated reading duration per 100 characters, number of fixations per 100 characters, and mean fixation duration, which were compared with mean deviation and visual field index values from Humphrey visual field testing (24–2 and 10–2 Swedish interactive threshold algorithm standard) of the right versus left eye and the better versus worse eye. Results There was a statistically significant difference between Groups G and N in mean fixation duration (G, 233.4 msec; N, 215.7 msec; P = 0.010). Within Group G, significant correlations were observed between reading duration and 24–2 right mean deviation (rs = -0.280, P = 0.049), 24–2 right visual field index (rs = -0.306, P = 0.030), 24–2 worse visual field index (rs = -0.304, P = 0.032), and 10–2 worse mean deviation (rs = -0.326, P = 0.025). Significant correlations were observed between mean fixation duration and 10–2 left mean deviation (rs = -0.294, P = 0.045) and 10–2 worse mean deviation (rs = -0.306, P = 0.037), respectively. Conclusions The severity of visual field defects may influence some aspects of reading performance. At least concerning silent reading, the visual field of the worse eye is an essential element of smoothness of reading. PMID:28095478
On the Determination of Poisson Statistics for Haystack Radar Observations of Orbital Debris
NASA Technical Reports Server (NTRS)
Stokely, Christopher L.; Benbrook, James R.; Horstman, Matt
2007-01-01
A convenient and powerful method is used to determine whether radar detections of orbital debris are observed according to Poisson statistics. This is done by analyzing the time interval between detection events. For Poisson statistics, the probability distribution of the time interval between events is shown to be an exponential distribution. This distribution is a special case of the Erlang distribution that is used in estimating traffic loads on telecommunication networks. Poisson statistics form the basis of many orbital debris models, but the statistical basis of these models has not been clearly demonstrated empirically until now. Interestingly, during the fiscal year 2003 observations with the Haystack radar in a fixed staring mode, no statistically significant deviations from the behavior expected under Poisson statistics are observed, either independently of altitude and inclination or as a function of them. One would potentially expect some significant clustering of events in time as a result of satellite breakups, but the presence of Poisson statistics indicates that such debris disperse rapidly with respect to Haystack's very narrow radar beam. An exception to Poisson statistics is observed in the months following the intentional breakup of the Fengyun satellite in January 2007.
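The central property used above, that for a Poisson process the waiting times between events are exponentially distributed, can be checked on a list of detection epochs with a one-sample Kolmogorov-Smirnov test. The sketch below uses simulated event times and is only a generic illustration of the idea, not the analysis actually applied to the Haystack data; note also that estimating the exponential scale from the same data biases the p-value slightly (a Lilliefors-type correction would be needed for a rigorous test).

    # Sketch: test whether inter-detection times look exponential (Poisson prediction).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    rate = 3.0                                  # hypothetical detections per hour
    event_times = np.cumsum(rng.exponential(1.0 / rate, size=500))

    intervals = np.diff(event_times)
    mean_interval = intervals.mean()            # estimate of 1/rate

    stat, p_value = stats.kstest(intervals, "expon", args=(0.0, mean_interval))
    print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3f}")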
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arutyunyan, R.V.; Bol`shov, L.A.; Vasil`ev, S.K.
1994-06-01
The objective of this study was to clarify a number of issues related to the spatial distribution of contaminants from the Chernobyl accident. The effects of local statistics were addressed by collecting and analyzing (for Cesium 137) soil samples from a number of regions, and it was found that sample activity differed by a factor of 3-5. The effect of local non-uniformity was estimated by modeling the distribution of the average activity of a set of five samples for each of the regions, with the spread in the activities for a ±2 range being equal to 25%. The statistical characteristics of the distribution of contamination were then analyzed and found to be a log-normal distribution with the standard deviation being a function of test area. All data for the Bryanskaya Oblast area were analyzed statistically and were adequately described by a log-normal function.
Ueno, Tamio; Matuda, Junichi; Yamane, Nobuhisa
2013-03-01
To evaluate the occurrence of out-of-acceptable-range results and the accuracy of antimicrobial susceptibility tests, we applied a new statistical tool to the Inter-Laboratory Quality Control Program established by the Kyushu Quality Control Research Group. First, we defined acceptable ranges of minimum inhibitory concentration (MIC) for broth microdilution tests and of inhibitory zone diameter for disk diffusion tests on the basis of Clinical and Laboratory Standards Institute (CLSI) M100-S21. In the analysis, more than two out-of-acceptable-range results in the 20 tests were considered not allowable according to the CLSI document. Of the 90 participating laboratories, 46 (51%) experienced one or more occurrences of out-of-acceptable-range results. Then, a binomial test was applied to each participating laboratory. The results indicated that the occurrences of out-of-acceptable-range results in 11 laboratories were significantly higher than the CLSI recommendation (allowable rate ≤ 0.05). Standard deviation indices (SDI) were calculated using the reported results and the mean and standard deviation values for the respective antimicrobial agents tested. In the evaluation of accuracy, the mean value from each laboratory was statistically compared with zero using a Student's t-test. The results revealed that 5 of the 11 above laboratories reported erroneous test results that systematically drifted to the side of resistance. In conclusion, our statistical approach has enabled us to detect significantly higher occurrences and the sources of interpretive errors in antimicrobial susceptibility tests; therefore, this approach can provide additional information that can improve the accuracy of test results in clinical microbiology laboratories.
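For readers unfamiliar with the two statistics used above, the sketch below shows how a standard deviation index (SDI = (reported value - peer-group mean) / peer-group standard deviation) and a binomial test on the count of out-of-acceptable-range results could be computed. The numbers are invented, and the 0.05 allowable rate simply mirrors the CLSI figure quoted in the abstract; this is not the authors' actual data or code.

    # Sketch: SDI for one laboratory and a binomial test on out-of-range counts.
    import numpy as np
    from scipy.stats import binomtest, ttest_1samp

    group_mean = np.array([1.0, 0.5, 2.0, 4.0])   # peer-group means per agent (invented)
    group_sd   = np.array([0.3, 0.2, 0.6, 1.0])   # peer-group SDs per agent (invented)
    lab_result = np.array([1.4, 0.6, 2.9, 5.1])   # one laboratory's results (invented)

    sdi = (lab_result - group_mean) / group_sd
    print("SDI per agent:", np.round(sdi, 2))

    # Accuracy: is the laboratory's mean SDI significantly different from zero?
    print(ttest_1samp(sdi, 0.0))

    # Occurrence: 3 out-of-acceptable-range results in 20 tests vs. an allowable rate of 0.05
    print(binomtest(k=3, n=20, p=0.05, alternative="greater"))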
ERIC Educational Resources Information Center
Hida, Takeyuki; Shimizu, Akinobu
This volume contains the papers and comments from the Workshop on Mathematics Education, a special session of the 15th Conference on Stochastic Processes and Their Applications, held in Nagoya, Japan, July 2-5, 1985. Topics covered include: (1) probability; (2) statistics; (3) deviation; (4) Japanese mathematics curriculum; (5) statistical…
Nathaniel E. Seavy; Suhel Quader; John D. Alexander; C. John Ralph
2005-01-01
The success of avian monitoring programs to effectively guide management decisions requires that studies be efficiently designed and data be properly analyzed. A complicating factor is that point count surveys often generate data with non-normal distributional properties. In this paper we review methods of dealing with deviations from normal assumptions, and we focus...
Sources of Instabilities in Two-Way Satellite Time Transfer
2005-08-01
Two-Way Satellite Time and Frequency Transfer (TWSTFT) has become an important... stability of TWSTFT a more complete understanding of the sources of instabilities is required. This paper analyzes several sources of instabilities... Frequency Transfer (TWSTFT) regularly delivers subnanosecond time transfer stability at 1 day as measured by the time deviation (TDEV) statistic
DIMENSION-BASED STATISTICAL LEARNING OF VOWELS
Liu, Ran; Holt, Lori L.
2015-01-01
Speech perception depends on long-term representations that reflect regularities of the native language. However, listeners rapidly adapt when speech acoustics deviate from these regularities due to talker idiosyncrasies such as foreign accents and dialects. To better understand these dual aspects of speech perception, we probe native English listeners’ baseline perceptual weighting of two acoustic dimensions (spectral quality and vowel duration) towards vowel categorization and examine how they subsequently adapt to an “artificial accent” that deviates from English norms in the correlation between the two dimensions. At baseline, listeners rely relatively more on spectral quality than vowel duration to signal vowel category, but duration nonetheless contributes. Upon encountering an “artificial accent” in which the spectral-duration correlation is perturbed relative to English language norms, listeners rapidly down-weight reliance on duration. Listeners exhibit this type of short-term statistical learning even in the context of nonwords, confirming that lexical information is not necessary to this form of adaptive plasticity in speech perception. Moreover, learning generalizes to both novel lexical contexts and acoustically-distinct altered voices. These findings are discussed in the context of a mechanistic proposal for how supervised learning may contribute to this type of adaptive plasticity in speech perception. PMID:26280268
Sampling for mercury at subnanogram per litre concentrations for load estimation in rivers
Colman, J.A.; Breault, R.F.
2000-01-01
Estimation of constituent loads in streams requires collection of stream samples that are representative of constituent concentrations, that is, composites of isokinetic multiple verticals collected along a stream transect. An all-Teflon isokinetic sampler (DH-81) cleaned in 75 °C, 4 N HCl was tested using blank, split, and replicate samples to assess systematic and random sample contamination by mercury species. Mean mercury concentrations in field-equipment blanks were low: 0.135 ng·L⁻¹ for total mercury (ΣHg) and 0.0086 ng·L⁻¹ for monomethyl mercury (MeHg). Mean square errors (MSE) for ΣHg and MeHg duplicate samples collected at eight sampling stations were not statistically different from the MSE of samples split in the laboratory, which represent the analytical and splitting error. Low field-blank concentrations and statistically equal duplicate- and split-sample MSE values indicate that no measurable contamination was occurring during sampling. Standard deviations associated with example mercury load estimations were four to five times larger, on a relative basis, than standard deviations calculated from duplicate samples, indicating that the error of the load determination was primarily a function of the loading model used, not of the sampling or analytical methods.
On statistical irregularity of stratospheric warming occurrence during northern winters
NASA Astrophysics Data System (ADS)
Savenkova, Elena N.; Gavrilov, Nikolai M.; Pogoreltsev, Alexander I.
2017-10-01
Statistical analysis of the dates of warming events observed during the years 1981-2016 at different stratospheric altitudes reveals their non-uniform distribution during the northern winter months, with maxima at the beginning of January, at the end of January to the beginning of February, and at the end of February. The climatology of the zonal-mean zonal wind, of temperature deviations from winter-averaged values, and of planetary wave (PW) characteristics at high and middle northern latitudes in the altitude range from the ground up to 60 km is studied using the MERRA meteorological reanalysis database. Climatological temperature deviations averaged over the 60-90°N latitudinal band reveal cooler and warmer layers descending due to seasonal changes during the polar night. PW amplitudes and upward Eliassen-Palm fluxes averaged over 36 years have periodic maxima, with the main maximum at the beginning of January at altitudes of 40-50 km. During the above-mentioned intervals of more frequent stratospheric warming occurrence, maxima of PW amplitudes and Eliassen-Palm fluxes, as well as minima of eastward winds in the high-latitude northern stratosphere, are found. Climatological intra-seasonal irregularities of stratospheric warming dates could indicate reiterating phases of stratospheric vacillations in different years.
Motion Control of Drives for Prosthetic Hand Using Continuous Myoelectric Signals
NASA Astrophysics Data System (ADS)
Purushothaman, Geethanjali; Ray, Kalyan Kumar
2016-03-01
In this paper the authors present motion control of a prosthetic hand through continuous myoelectric signal acquisition, classification and actuation of the prosthetic drive. Four-channel continuous electromyogram (EMG) signals, also known as myoelectric signals (MES), are acquired from able-bodied subjects to classify six unique movements of the hand and wrist, viz. hand open (HO), hand close (HC), wrist flexion (WF), wrist extension (WE), ulnar deviation (UD) and radial deviation (RD). The classification technique involves extracting features/patterns as statistical time domain (TD) parameters and/or autoregressive (AR) coefficients, which are reduced using principal component analysis (PCA). The reduced statistical TD features and/or AR coefficients are used to classify the signal patterns with a k-nearest-neighbour (kNN) classifier as well as a neural network (NN) classifier, and the performance of the two classifiers is compared. The comparison clearly shows that the kNN classifier identifies the hidden intended motion in the myoelectric signals better than the NN classifier. Once the classifier identifies the intended motion, the signal is amplified to actuate three low-power DC motors to perform the above-mentioned movements.
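The processing chain described above, time-domain features and/or AR coefficients reduced with PCA and classified with kNN, can be sketched with scikit-learn on synthetic data. The Hudgins-style feature set used here (mean absolute value, waveform length, zero crossings, slope-sign changes) is an assumption for illustration and is not necessarily the exact parameter set used by the authors.

    # Sketch: TD features + PCA + kNN for windowed EMG (synthetic data only).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)

    def td_features(window):
        """Hudgins-style time-domain features for one channel window."""
        mav = np.mean(np.abs(window))                         # mean absolute value
        wl = np.sum(np.abs(np.diff(window)))                  # waveform length
        zc = np.sum(np.diff(np.sign(window)) != 0)            # zero crossings
        ssc = np.sum(np.diff(np.sign(np.diff(window))) != 0)  # slope-sign changes
        return [mav, wl, zc, ssc]

    # 6 movement classes x 40 windows, 4 channels x 200 samples of fake EMG each
    X, y = [], []
    for label in range(6):
        for _ in range(40):
            chans = rng.normal(0.0, 1.0 + 0.2 * label, size=(4, 200))
            X.append(np.concatenate([td_features(ch) for ch in chans]))
            y.append(label)
    X, y = np.array(X), np.array(y)

    clf = make_pipeline(PCA(n_components=6), KNeighborsClassifier(n_neighbors=5))
    print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())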
Image dynamic range test and evaluation of Gaofen-2 dual cameras
NASA Astrophysics Data System (ADS)
Zhang, Zhenhua; Gan, Fuping; Wei, Dandan
2015-12-01
In order to fully understand the dynamic range of Gaofen-2 satellite data and to support data processing, application and the development of the next satellites, in this article we evaluated the dynamic range by calculating statistics such as the maximum, minimum, average and standard deviation of four images obtained at the same time by the Gaofen-2 dual cameras over the Beijing area. The maximum, minimum, average and standard deviation of each longitudinal overlap of PMS1 and PMS2 were then calculated to evaluate the dynamic range consistency within each camera, and the same four statistics of each latitudinal overlap of PMS1 and PMS2 were calculated to evaluate the dynamic range consistency between PMS1 and PMS2. The results suggest that the images obtained by PMS1 and PMS2 have a wide dynamic range of DN values and contain rich information on ground objects. In general, the dynamic range is closely consistent between images from a single camera, with only small differences, and the same holds for the dual cameras, although the consistency within a single camera is better than that between the dual cameras.
Tools for Basic Statistical Analysis
NASA Technical Reports Server (NTRS)
Luz, Paul L.
2005-01-01
Statistical Analysis Toolset is a collection of eight Microsoft Excel spreadsheet programs, each of which performs calculations pertaining to an aspect of statistical analysis. These programs present input and output data in user-friendly, menu-driven formats, with automatic execution. The following types of calculations are performed: Descriptive statistics are computed for a set of data x(i) (i = 1, 2, 3 . . . ) entered by the user. Normal Distribution Estimates will calculate the statistical value that corresponds to cumulative probability values, given a sample mean and standard deviation of the normal distribution. Normal Distribution from two Data Points will extend and generate a cumulative normal distribution for the user, given two data points and their associated probability values. Two programs perform two-way analysis of variance (ANOVA) with no replication or generalized ANOVA for two factors with four levels and three repetitions. Linear Regression-ANOVA will curvefit data to the linear equation y=f(x) and will do an ANOVA to check its significance.
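As a rough, generic counterpart to the first two spreadsheet tools described above, the sketch below computes descriptive statistics for a data set and inverts a normal distribution to find the value corresponding to a given cumulative probability; it is an illustration only, not a port of the NASA spreadsheets.

    # Sketch: descriptive statistics and a normal-distribution estimate.
    import numpy as np
    from scipy.stats import norm

    x = np.array([9.8, 10.1, 10.4, 9.6, 10.0, 10.3, 9.9])   # invented sample
    mean, sd = x.mean(), x.std(ddof=1)                       # sample mean and SD
    print(f"mean = {mean:.3f}, sd = {sd:.3f}, n = {x.size}")

    # Value corresponding to a cumulative probability of 0.95 for N(mean, sd)
    print("value at 95% cumulative probability:", norm.ppf(0.95, loc=mean, scale=sd))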
Evaluation of visual field parameters in patients with chronic obstructive pulmonary disease.
Demir, Helin Deniz; Inönü, Handan; Kurt, Semiha; Doruk, Sibel; Aydın, Erdinc; Etikan, Ilker
2012-08-01
To evaluate the effects of chronic obstructive pulmonary disease (COPD) on retina and optic nerve. Thirty-eight patients with COPD and 29 healthy controls, totally 67 subjects, were included in the study. Visual evoked potentials (VEP) and visual field assessment (both standard achromatic perimetry (SAP) and short-wavelength automated perimetry (SWAP)) were performed on each subject after ophthalmological, neurological and pulmonary examinations. Mean deviation (MD), pattern standard deviation (PSD) and corrected pattern standard deviation (CPSD) were significantly different between patient and control groups as for both SAP and SWAP measurements (p = 0.001, 0.019, 0.009 and p = 0.004,0.019, 0.031, respectively). Short-term fluctuation (SF) was not statistically different between the study and the control groups (p = 0.874 and 0.694, respectively). VEP P100 latencies were significantly different between patients with COPD and the controls (p = 0.019). Chronic obstructive pulmonary disease is a systemic disease, and hypoxia in COPD seems to affect the retina and the optic nerve. © 2012 The Authors. Acta Ophthalmologica © 2012 Acta Ophthalmologica Scandinavica Foundation.
DISFLUENCY PATTERNS AND PHONOLOGICAL SKILLS NEAR STUTTERING ONSET
Gregg, Brent Andrew; Yairi, Ehud
2012-01-01
There is a substantial amount of literature reporting the incidence of phonological difficulties to be higher for children who stutter when compared to normally fluent children, suggesting a link between stuttering and phonology. In view of this, the purpose of the investigation was to determine whether, among children who stutter, there are relationships between phonological skills and the initial characteristics of stuttering. That is, close to the onset of stuttering, are there differences in specific stuttering patterns between children who exhibit minimal and moderate phonological deviations in terms of frequency of stuttering and length of stuttering events? Twenty-nine preschool children near the onset of stuttering, ranging in age from 29 to 49 months, with a mean of 39.17 months, were divided into two groups based on the level of phonological ability: minimal phonological deviations and moderate phonological deviations. The children’s level of stuttering-like disfluencies was examined. Results revealed no statistically significant differences in the stuttering characteristics of the two groups near onset, calling into question the nature of the stuttering-phonology link. PMID:22939524
A Novel Analysis Of The Connection Between Indian Monsoon Rainfall And Solar Activity
NASA Astrophysics Data System (ADS)
Bhattacharyya, S.; Narasimha, R.
2005-12-01
The existence of possible correlations between the solar cycle period as extracted from the yearly means of sunspot numbers and any periodicities that may be present in the Indian monsoon rainfall has been addressed using wavelet analysis. The wavelet transform coefficient maps of sunspot-number time series and those of the homogeneous Indian monsoon rainfall annual time series data reveal striking similarities, especially around the 11-year period. A novel method to analyse and quantify this similarity devising statistical schemes is suggested in this paper. The wavelet transform coefficient maxima at the 11-year period for the sunspot numbers and the monsoon rainfall have each been modelled as a point process in time and a statistical scheme for identifying a trend or dependence between the two processes has been devised. A regression analysis of parameters in these processes reveals a nearly linear trend with small but systematic deviations from the regressed line. Suitable function models for these deviations have been obtained through an unconstrained error minimisation scheme. These models provide an excellent fit to the time series of the given wavelet transform coefficient maxima obtained from actual data. Statistical significance tests on these deviations suggest with 99% confidence that the deviations are sample fluctuations obtained from normal distributions. In fact our earlier studies (see, Bhattacharyya and Narasimha, 2005, Geophys. Res. Lett., Vol. 32, No. 5) revealed that average rainfall is higher during periods of greater solar activity for all cases, at confidence levels varying from 75% to 99%, being 95% or greater in 3 out of 7 of them. Analysis using standard wavelet techniques reveals higher power in the 8--16 y band during the higher solar activity period, in 6 of the 7 rainfall time series, at confidence levels exceeding 99.99%. Furthermore, a comparison between the wavelet cross spectra of solar activity with rainfall and noise (including those simulating the rainfall spectrum and probability distribution) revealed that over the two test-periods respectively of high and low solar activity, the average cross power of the solar activity index with rainfall exceeds that with the noise at z-test confidence levels exceeding 99.99% over period-bands covering the 11.6 y sunspot cycle (see, Bhattacharyya and Narasimha, SORCE 2005 14-16th September, at Durango, Colorado USA). These results provide strong evidence for connections between Indian rainfall and solar activity. The present study reveals in addition the presence of subharmonics of the solar cycle period in the monsoon rainfall time series together with information on their phase relationships.
Marghalani, Amin; Weber, Hans-Peter; Finkelman, Matthew; Kudara, Yukio; El Rafie, Khaled; Papaspyridakos, Panos
2018-04-01
To the authors' knowledge, while accuracy outcomes of the TRIOS scanner have been compared with conventional impressions, no data are available regarding the accuracy of digital scans with the Omnicam and True Definition scanners versus conventional impressions for partially edentulous arches. The purpose of this in vitro study was to compare the accuracy of digital implant scans using 2 different intraoral scanners (IOSs) with that of conventional impressions for partially edentulous arches. Two partially edentulous mandibular casts with 2 implant analogs with a 30-degree angulation from 2 different implant systems (Replace Select RP; Nobel Biocare and Tissue level RN; Straumann) were used as controls. Sixty digital models were made from these 2 definitive casts in 6 different groups (n=10). Splinted implant-level impression procedures followed by digitization were used to produce the first 2 groups. The next 2 groups were produced by digital scanning with Omnicam. The last 2 groups were produced by digital scanning with the True Definition scanner. Accuracy was evaluated by superimposing the digital files of each test group onto the digital file of the controls with inspection software. The difference in 3-dimensional (3D) deviations (median ±interquartile range) among the 3 impression groups for Nobel Biocare was statistically significant among all groups (P<.001), except for the Omnicam (20 ±4 μm) and True Definition (15 ±6 μm) groups; the median ±interquartile range for the conventional group was 39 ±18 μm. The difference in 3D deviations among the 3 impression groups for Straumann was statistically significant among all groups (P=.003), except for the conventional impression (22 ±5 μm) and True Definition (17 ±5 μm) groups; the median ±interquartile range for the Omnicam group was 26 ±15 μm. The difference in 3D deviations between the 2 implant systems was significant for the Omnicam (P=.011) and conventional (P<.001) impression techniques but not for the True Definition technique (P=.247). Within the limitations of this study, both the impression technique and the implant system affected accuracy. The True Definition technique had the smallest 3D deviations compared with the other 2 techniques; however, the accuracy of all impression techniques was within clinically acceptable levels, and not all differences were statistically significant. Copyright © 2017 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Lievens, Klaus; Van Nimmen, Katrien; Lombaert, Geert; De Roeck, Guido; Van den Broeck, Peter
2016-09-01
In civil engineering and architecture, the availability of high strength materials and advanced calculation techniques enables the construction of slender footbridges, generally highly sensitive to human-induced excitation. Due to the inherent random character of the human-induced walking load, variability in the pedestrian characteristics must be considered in the response simulation. To assess the vibration serviceability of the footbridge, the statistics of the stochastic dynamic response are evaluated by considering the instantaneous peak responses in a time range. Therefore, a large number of time windows are needed to calculate the mean value and standard deviation of the instantaneous peak values. An alternative method to evaluate the statistics is based on the standard deviation of the response and a characteristic frequency, as proposed in wind engineering applications. In this paper, the accuracy of this method is evaluated for human-induced vibrations. The methods are first compared for a group of pedestrians crossing a lightly damped footbridge. Only small differences in the instantaneous peak value were found with the method using second-order statistics. Afterwards, a TMD tuned to reduce the peak acceleration to a comfort value was added to the structure. The comparison between both methods is repeated and the accuracy is verified. It is found that the TMD parameters are tuned sufficiently and that good agreement between the two methods is found for the estimation of the instantaneous peak response of a strongly damped structure.
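The "standard deviation plus characteristic frequency" approach borrowed from wind engineering is usually implemented with a peak factor of the Davenport type: assuming a Gaussian, narrow-band response with characteristic (zero up-crossing) frequency ν observed over a window T, the expected peak equals g·σ with g as below. This is a generic sketch of that classical formula, not necessarily the authors' exact implementation, and the numerical values are invented.

    # Sketch: expected instantaneous peak from second-order statistics
    # using a Davenport-type peak factor (Gaussian response assumed).
    import numpy as np

    def peak_factor(nu, T, gamma=0.5772):
        """Davenport peak factor for characteristic frequency nu [Hz] and duration T [s]."""
        a = np.sqrt(2.0 * np.log(nu * T))
        return a + gamma / a

    sigma_acc = 0.12   # standard deviation of the acceleration response [m/s^2] (invented)
    nu = 2.0           # characteristic frequency [Hz], e.g. the governing footbridge mode
    T = 600.0          # evaluation window [s]

    g = peak_factor(nu, T)
    print(f"peak factor g = {g:.2f}, expected peak = {g * sigma_acc:.3f} m/s^2")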
Carriot, Jérome; Jamali, Mohsen; Cullen, Kathleen E; Chacron, Maurice J
2017-01-01
There is accumulating evidence that the brain's neural coding strategies are constrained by natural stimulus statistics. Here we investigated the statistics of the time varying envelope (i.e. a second-order stimulus attribute that is related to variance) of rotational and translational self-motion signals experienced by human subjects during everyday activities. We found that envelopes can reach large values across all six motion dimensions (~450 deg/s for rotations and ~4 G for translations). Unlike results obtained in other sensory modalities, the spectral power of envelope signals decreased slowly for low (< 2 Hz) and more sharply for high (>2 Hz) temporal frequencies and thus was not well-fit by a power law. We next compared the spectral properties of envelope signals resulting from active and passive self-motion, as well as those resulting from signals obtained when the subject is absent (i.e. external stimuli). Our data suggest that different mechanisms underlie deviation from scale invariance in rotational and translational self-motion envelopes. Specifically, active self-motion and filtering by the human body cause deviation from scale invariance primarily for translational and rotational envelope signals, respectively. Finally, we used well-established models in order to predict the responses of peripheral vestibular afferents to natural envelope stimuli. We found that irregular afferents responded more strongly to envelopes than their regular counterparts. Our findings have important consequences for understanding the coding strategies used by the vestibular system to process natural second-order self-motion signals.
Kee, Changwon; Cho, Changhwan
2003-06-01
The authors investigated the correlation between visual field defects detected by automated perimetry and the thickness of the retinal nerve fiber layer measured with optical coherence tomography, and examined whether there is a decrease in retinal nerve fiber layer thickness in the apparently normal hemifield of glaucomatous eyes. Forty-one patients with glaucoma and 41 normal control subjects were included in this study. Statistical correlations between the sum of the total deviation of 37 stimuli of each hemifield and the ratio of decrease in retinal nerve fiber layer thickness were evaluated. The statistical difference between the retinal nerve fiber layer thickness of the apparently normal hemifield in glaucomatous eyes and that of the corresponding hemifield in normal subjects was also evaluated. There was a statistically significant correlation in the sum of the total deviation and retinal nerve fiber layer thickness decrease ratio (superior hemifield, P = 0.001; inferior hemifield, P = 0.003). There was no significant decrease in retinal nerve fiber layer thickness in the area that corresponded to the normal visual field in the hemifield defect with respect to the horizontal meridian in glaucomatous eyes (superior side, P = 0.148; inferior side, P = 0.341). Optical coherence tomography was capable of demonstrating and measuring retinal nerve fiber layer abnormalities. No changes in the retinal nerve fiber layer thickness of the apparently normal hemifield were observed in glaucomatous eyes.
NASA Astrophysics Data System (ADS)
Lomakina, N. Ya.
2017-11-01
The work presents the results of an applied climatic division of the Siberian region into districts, based on a methodology for the objective classification of atmospheric boundary layer climates by the "temperature-moisture-wind" complex, realized using the method of principal components together with special similarity criteria for average profiles and the eigenvalues of correlation matrices. On the territory of Siberia, 14 homogeneous regions were identified for the winter season and 10 regions for summer. Local statistical models were constructed for each region. These include vertical profiles of mean values, mean square deviations, and matrices of interlevel correlation of temperature, specific humidity, and zonal and meridional wind velocity. The advantage of the obtained local statistical models over regional models is shown.
Point process statistics in atom probe tomography.
Philippe, T; Duguay, S; Grancher, G; Blavette, D
2013-09-01
We present a review of spatial point processes as statistical models that we have designed for the analysis and treatment of atom probe tomography (APT) data. As a major advantage, these methods do not require sampling. The mean distance to nearest neighbour is an attractive approach to exhibit a non-random atomic distribution. A χ² test based on distance distributions to nearest neighbour has been developed to detect deviation from randomness. Best-fit methods based on the first nearest neighbour distance (1NN method) and pair correlation function are presented and compared to assess the chemical composition of tiny clusters. Delaunay tessellation for cluster selection has also been illustrated. These statistical tools have been applied to APT experiments on microelectronics materials. Copyright © 2012 Elsevier B.V. All rights reserved.
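A minimal version of the nearest-neighbour idea reviewed above compares the observed mean first-nearest-neighbour distance of a point cloud with the value expected for complete spatial randomness; for a 3D Poisson pattern of density ρ that expectation is Γ(4/3)·(4πρ/3)^(-1/3). The sketch below uses synthetic points and ignores edge effects; it illustrates the idea only and is not the authors' χ² formulation.

    # Sketch: mean 1NN distance of a 3D point cloud vs. the random expectation.
    import numpy as np
    from scipy.spatial import cKDTree
    from scipy.special import gamma

    rng = np.random.default_rng(2)
    box = 50.0                                    # hypothetical analysed volume edge [nm]
    pts = rng.uniform(0.0, box, size=(20000, 3))  # synthetic atomic positions

    tree = cKDTree(pts)
    d, _ = tree.query(pts, k=2)                   # first column is the point itself
    mean_1nn = d[:, 1].mean()

    rho = len(pts) / box**3
    expected = gamma(4.0 / 3.0) * (4.0 * np.pi * rho / 3.0) ** (-1.0 / 3.0)
    print(f"observed mean 1NN = {mean_1nn:.3f} nm, random expectation = {expected:.3f} nm")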
Estimating error statistics for Chambon-la-Forêt observatory definitive data
NASA Astrophysics Data System (ADS)
Lesur, Vincent; Heumez, Benoît; Telali, Abdelkader; Lalanne, Xavier; Soloviev, Anatoly
2017-08-01
We propose a new algorithm for calibrating definitive observatory data with the goal of providing users with estimates of the data error standard deviations (SDs). The algorithm has been implemented and tested using Chambon-la-Forêt observatory (CLF) data. The calibration process uses all available data. It is set as a large, weakly non-linear, inverse problem that ultimately provides estimates of baseline values in three orthogonal directions, together with their expected standard deviations. For this inverse problem, absolute data error statistics are estimated from two series of absolute measurements made within a day. Similarly, variometer data error statistics are derived by comparing variometer data time series between different pairs of instruments over few years. The comparisons of these time series led us to use an autoregressive process of order 1 (AR1 process) as a prior for the baselines. Therefore the obtained baselines do not vary smoothly in time. They have relatively small SDs, well below 300 pT when absolute data are recorded twice a week - i.e. within the daily to weekly measures recommended by INTERMAGNET. The algorithm was tested against the process traditionally used to derive baselines at CLF observatory, suggesting that statistics are less favourable when this latter process is used. Finally, two sets of definitive data were calibrated using the new algorithm. Their comparison shows that the definitive data SDs are less than 400 pT and may be slightly overestimated by our process: an indication that more work is required to have proper estimates of absolute data error statistics. For magnetic field modelling, the results show that even on isolated sites like CLF observatory, there are very localised signals over a large span of temporal frequencies that can be as large as 1 nT. The SDs reported here encompass signals of a few hundred metres and less than a day wavelengths.
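For readers less familiar with the AR(1) prior mentioned above, the sketch below simulates a first-order autoregressive baseline, b(t+1) = φ·b(t) + ε(t), which produces exactly the kind of non-smooth behaviour the abstract refers to. The coefficient φ and the innovation level are arbitrary illustrative values, not those used for the CLF data.

    # Sketch: an AR(1) process as a prior model for observatory baseline values.
    import numpy as np

    rng = np.random.default_rng(3)
    phi = 0.95          # hypothetical autoregression coefficient (close to 1)
    sigma_eps = 0.1     # hypothetical innovation standard deviation [nT]
    n = 365             # e.g. one baseline value per day

    b = np.zeros(n)
    for t in range(n - 1):
        b[t + 1] = phi * b[t] + rng.normal(0.0, sigma_eps)

    # Stationary SD of an AR(1) process is sigma_eps / sqrt(1 - phi^2)
    print("empirical SD:", b.std(), "theoretical SD:", sigma_eps / np.sqrt(1 - phi**2))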
Nieuwenhuys, Angela; Papageorgiou, Eirini; Desloovere, Kaat; Molenaers, Guy; De Laet, Tinne
2017-01-01
Experts recently identified 49 joint motion patterns in children with cerebral palsy during a Delphi consensus study. Pattern definitions were therefore the result of subjective expert opinion. The present study aims to provide objective, quantitative data supporting the identification of these consensus-based patterns. To do so, statistical parametric mapping was used to compare the mean kinematic waveforms of 154 trials of typically developing children (n = 56) to the mean kinematic waveforms of 1719 trials of children with cerebral palsy (n = 356), which were classified following the classification rules of the Delphi study. Three hypotheses stated that: (a) joint motion patterns with 'no or minor gait deviations' (n = 11 patterns) do not differ significantly from the gait pattern of typically developing children; (b) all other pathological joint motion patterns (n = 38 patterns) differ from typically developing gait and the locations of difference within the gait cycle, highlighted by statistical parametric mapping, concur with the consensus-based classification rules. (c) all joint motion patterns at the level of each joint (n = 49 patterns) differ from each other during at least one phase of the gait cycle. Results showed that: (a) ten patterns with 'no or minor gait deviations' differed somewhat unexpectedly from typically developing gait, but these differences were generally small (≤3°); (b) all other joint motion patterns (n = 38) differed from typically developing gait and the significant locations within the gait cycle that were indicated by the statistical analyses, coincided well with the classification rules; (c) joint motion patterns at the level of each joint significantly differed from each other, apart from two sagittal plane pelvic patterns. In addition to these results, for several joints, statistical analyses indicated other significant areas during the gait cycle that were not included in the pattern definitions of the consensus study. Based on these findings, suggestions to improve pattern definitions were made.
Matta, Ragai-Edward; Bergauer, Bastian; Adler, Werner; Wichmann, Manfred; Nickenig, Hans-Joachim
2017-06-01
The use of a surgical template is a well-established method in advanced implantology. In addition to conventional fabrication, computer-aided design and computer-aided manufacturing (CAD/CAM) work-flow provides an opportunity to engineer implant drilling templates via a three-dimensional printer. In order to transfer the virtual planning to the oral situation, a highly accurate surgical guide is needed. The aim of this study was to evaluate the impact of the fabrication method on the three-dimensional accuracy. The same virtual planning based on a scanned plaster model was used to fabricate a conventional thermo-formed and a three-dimensional printed surgical guide for each of 13 patients (single tooth implants). Both templates were acquired individually on the respective plaster model using an optical industrial white-light scanner (ATOS II, GOM mbh, Braunschweig, Germany), and the virtual datasets were superimposed. Using the three-dimensional geometry of the implant sleeve, the deviation between both surgical guides was evaluated. The mean discrepancy of the angle was 3.479° (standard deviation, 1.904°) based on data from 13 patients. Concerning the three-dimensional position of the implant sleeve, the highest deviation was in the Z-axis at 0.594 mm. The mean deviation of the Euclidian distance, dxyz, was 0.864 mm. Although the two different fabrication methods delivered statistically significantly different templates, the deviations ranged within a decimillimeter span. Both methods are appropriate for clinical use. Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Seay, Joseph F.; Gregorczyk, Karen N.; Hasselquist, Leif
2016-01-01
Abstract Influences of load carriage and inclination on spatiotemporal parameters were examined during treadmill and overground walking. Ten soldiers walked on a treadmill and overground with three load conditions (00 kg, 20 kg, 40 kg) during level, uphill (6% grade) and downhill (-6% grade) inclinations at self-selected speed, which was constant across conditions. Mean values and standard deviations for double support percentage, stride length and a step rate were compared across conditions. Double support percentage increased with load and inclination change from uphill to level walking, with a 0.4% stance greater increase at the 20 kg condition compared to 00 kg. As inclination changed from uphill to downhill, the step rate increased more overground (4.3 ± 3.5 steps/min) than during treadmill walking (1.7 ± 2.3 steps/min). For the 40 kg condition, the standard deviations were larger than the 00 kg condition for both the step rate and double support percentage. There was no change between modes for step rate standard deviation. For overground compared to treadmill walking, standard deviation for stride length and double support percentage increased and decreased, respectively. Changes in the load of up to 40 kg, inclination of 6% grade away from the level (i.e., uphill or downhill) and mode (treadmill and overground) produced small, yet statistically significant changes in spatiotemporal parameters. Variability, as assessed by standard deviation, was not systematically lower during treadmill walking compared to overground walking. Due to the small magnitude of changes, treadmill walking appears to replicate the spatiotemporal parameters of overground walking. PMID:28149338
Preliminary results from the White Sands Missile Range sonic boom propagation experiment
NASA Technical Reports Server (NTRS)
Willshire, William L., Jr.; Devilbiss, David W.
1992-01-01
Sonic boom bow shock amplitude and rise time statistics from a recent sonic boom propagation experiment are presented. Distributions of bow shock overpressure and rise time measured under different atmospheric turbulence conditions for the same test aircraft are quite different. The peak overpressure distributions are skewed positively, indicating a tendency for positive deviations from the mean to be larger than negative deviations. Standard deviations of overpressure distributions measured under moderate turbulence were 40 percent larger than those measured under low turbulence. As turbulence increased, the difference between the median and the mean increased, indicating increased positive overpressure deviations. The effect of turbulence was more readily seen in the rise time distributions. Under moderate turbulence conditions, the rise time distribution means were larger by a factor of 4 and the standard deviations were larger by a factor of 3 from the low turbulence values. These distribution changes resulted in a transition from a peaked appearance of the rise time distribution for the morning to a flattened appearance for the afternoon rise time distributions. The sonic boom propagation experiment consisted of flying three types of aircraft supersonically over a ground-based microphone array with concurrent measurements of turbulence and other meteorological data. The test aircraft were a T-38, an F-15, and an F-111, and they were flown at speeds of Mach 1.2 to 1.3, 30,000 feet above a 16 element, linear microphone array with an inter-element spacing of 200 ft. In two weeks of testing, 57 supersonic passes of the test aircraft were flown from early morning to late afternoon.
Reactor antineutrino shoulder explained by energy scale nonlinearities?
NASA Astrophysics Data System (ADS)
Mention, G.; Vivier, M.; Gaffiot, J.; Lasserre, T.; Letourneau, A.; Materna, T.
2017-10-01
The Daya Bay, Double Chooz and RENO experiments recently observed a significant distortion in their detected reactor antineutrino spectra, at odds with the current predictions. Although such a result suggests revisiting the current reactor antineutrino spectrum modeling, an alternative scenario, which could potentially explain this anomaly, is explored in this letter. Using an appropriate statistical method, a study of the Daya Bay experiment energy scale is performed. While still being in agreement with the γ calibration data and the measured ¹²B spectrum, it is shown that an O(1%) deviation of the energy scale reproduces the distortion observed in the Daya Bay spectrum, while remaining within the quoted calibration uncertainties. Potential origins of such a deviation, which challenge the energy calibration of these detectors, are finally discussed.
Robust LOD scores for variance component-based linkage analysis.
Blangero, J; Williams, J T; Almasy, L
2000-01-01
The variance component method is now widely used for linkage analysis of quantitative traits. Although this approach offers many advantages, the importance of the underlying assumption of multivariate normality of the trait distribution within pedigrees has not been studied extensively. Simulation studies have shown that traits with leptokurtic distributions yield linkage test statistics that exhibit excessive Type I error when analyzed naively. We derive analytical formulae relating the deviation from the expected asymptotic distribution of the lod score to the kurtosis and total heritability of the quantitative trait. A simple correction constant yields a robust lod score for any deviation from normality and for any pedigree structure, and effectively eliminates the problem of inflated Type I error due to misspecification of the underlying probability model in variance component-based linkage analysis.
Optical-frequency transfer over a single-span 1840 km fiber link.
Droste, S; Ozimek, F; Udem, Th; Predehl, K; Hänsch, T W; Schnatz, H; Grosche, G; Holzwarth, R
2013-09-13
To compare the increasing number of optical frequency standards, highly stable optical signals have to be transferred over continental distances. We demonstrate optical-frequency transfer over a 1840-km underground optical fiber link using a single-span stabilization. The low inherent noise introduced by the fiber allows us to reach short-term instabilities, expressed as the modified Allan deviation, of 2×10⁻¹⁵ for a gate time τ of 1 s, reaching 4×10⁻¹⁹ in just 100 s. We find no systematic offset between the sent and transferred frequencies within the statistical uncertainty of about 3×10⁻¹⁹. The spectral noise distribution of our fiber link at low Fourier frequencies leads to a τ⁻² slope in the modified Allan deviation, which is also derived theoretically.
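The modified Allan deviation quoted above can be computed directly from phase samples x_i taken at spacing τ0, with τ = m·τ0. The sketch below implements the textbook estimator (as given, e.g., in NIST SP 1065); it is a generic illustration, not the analysis code of the experiment.

    # Sketch: modified Allan deviation from phase data (textbook estimator).
    import numpy as np

    def mod_adev(x, tau0, m):
        """Modified Allan deviation for phase samples x [s], spacing tau0 [s], factor m."""
        x = np.asarray(x, dtype=float)
        N = len(x)
        if N < 3 * m + 1:
            raise ValueError("not enough data for this averaging factor")
        acc = 0.0
        for j in range(N - 3 * m + 1):
            inner = np.sum(x[j + 2*m : j + 3*m] - 2 * x[j + m : j + 2*m] + x[j : j + m])
            acc += inner ** 2
        mvar = acc / (2.0 * m**2 * (m * tau0) ** 2 * (N - 3 * m + 1))
        return np.sqrt(mvar)

    # White phase noise example: mod sigma_y(tau) should fall roughly as tau^(-3/2)
    rng = np.random.default_rng(4)
    x = rng.normal(0.0, 1e-15, size=20000)        # simulated phase noise [s]
    for m in (1, 4, 16, 64):
        print(m, mod_adev(x, tau0=1.0, m=m))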
NASA Astrophysics Data System (ADS)
Bruschewski, Martin; Freudenhammer, Daniel; Buchenberg, Waltraud B.; Schiffer, Heinz-Peter; Grundmann, Sven
2016-05-01
Velocity measurements with magnetic resonance velocimetry offer outstanding possibilities for experimental fluid mechanics. The purpose of this study was to provide practical guidelines for the estimation of the measurement uncertainty in such experiments. Based on various test cases, it is shown that the uncertainty estimate can vary substantially depending on how the uncertainty is obtained. The conventional approach to estimate the uncertainty from the noise in the artifact-free background can lead to wrong results. A deviation of up to -75 % is observed with the presented experiments. In addition, a similarly high deviation is demonstrated with the data from other studies. As a more accurate approach, the uncertainty is estimated directly from the image region with the flow sample. Two possible estimation methods are presented.
Esik, O; Seitz, W; Lövey, J; Knocke, T H; Gaudi, I; Németh, G; Pötter, R
1999-04-01
To present an example of how to study and analyze the clinical practice and the quality of medical decision-making under daily routine working conditions in a radiotherapy department, with the aims of detecting deficiencies and improving the quality of patient care. Two departments, each with a divisional organization structure and an established internal audit system, the University Clinic of Radiotherapy and Radiobiology in Vienna (Austria), and the Department of Radiotherapy at the National Institute of Oncology in Budapest (Hungary), conducted common external audits. The descriptive parameters of the external audit provided information on the auditing (auditor and serial number of the audit), the cohorts (diagnosis, referring institution, serial number and intention of radiotherapy) and the staff responsible for the treatment (division and physician). During the ongoing external audits, the qualifying parameters were (1) the sound foundation of the indication of radiotherapy, (2) conformity to the institution protocol (3), the adequacy of the choice of radiation equipment, (4) the appropriateness of the treatment plan, and the correspondence of the latter with (5) the simulation and (6) verification films. Various degrees of deviation from the treatment principles were defined and scored on the basis of the concept of Horiot et al. (Horiot JC, Schueren van der E. Johansson KA, Bernier J, Bartelink H. The program of quality assurance of the EORTC radiotherapy group. A historical overview. Radiother. Oncol. 1993,29:81-84), with some modifications. The action was regarded as adequate (score 1) in the event of no deviation or only a small deviation with presumably no alteration of the desired end-result of the treatment. A deviation adversely influencing the result of the therapy was considered a major deviation (score 3). Cases involving a minor deviation (score 2) were those only slightly affecting the therapeutic end-results, with effects between those of cases with scores 1 and 3. Non-performance of the necessary radiotherapeutic procedures was penalized by the highest score of 4. Statistical evaluation was performed with the BMDP software package, using variance analysis. Bimonthly audits (six with a duration of 4-6 h in each institution) were carried out by three auditors from the evaluating departments; they reviewed a total of 452 cases in Department A, and 265 cases in Department B. Despite the comparable staffing and instrumental conditions, a markedly higher number (1.5 times) of new cases were treated in Department A, but with a lower quality of radiotherapy, as adequate values of qualifying parameters (1-6) were more frequent for the cases treated in Department B (85.3%, 94%, 83.4%, 28.3%, 41.9% and 81.1%) than for those in Department A (67%, 83.4%, 87.8%, 26.1%, 33.2% and 17.7%). The responsible division (including staff and instrumentation), the responsible physician and the type of the disease each exerted a highly significant effect on the quality level of the treatment. Statistical analysis revealed a positive influence of the curative (relative to the palliative/symptomatic) intention of the treatment on the level of quality, but the effect of the first radiotherapy (relative to the second or further one) was statistically significant in only one department. At the same time, the quality parameters did not vary with the referring institution, the auditing person or the serial number of the audit. 
The external audit relating to the provision of radiotherapeutic care proved feasible with the basic conformity and compliance of the staff and resulted in valuable information to take correction measures.
Isolation and characterization of microsatellite loci in the whale shark (Rhincodon typus)
Ramirez-Macias, D.; Shaw, K.; Ward, R.; Galvan-Magana, F.; Vazquez-Juarez, R.
2009-01-01
In preparation for a study on population structure of the whale shark (Rhincodon typus), nine species-specific polymorphic microsatellite DNA markers were developed. An initial screening of 50 individuals from Holbox Island, Mexico found all nine loci to be polymorphic, with two to 17 alleles observed per locus. Observed and expected heterozygosity per locus ranged from 0.200 to 0.826 and from 0.213 to 0.857, respectively. Neither statistically significant deviations from Hardy–Weinberg expectations nor statistically significant linkage disequilibrium between loci were observed. These microsatellite loci appear suitable for examining population structure, kinship assessment and other applications.
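As a simplified illustration of the Hardy-Weinberg check reported above, the sketch below runs a χ² goodness-of-fit test for a single biallelic locus with invented genotype counts. Multi-allelic microsatellite loci are normally tested with exact procedures (e.g. as implemented in GENEPOP), so this biallelic reduction is didactic only.

    # Sketch: chi-square Hardy-Weinberg test for one biallelic locus (invented counts).
    import numpy as np
    from scipy.stats import chi2

    obs = np.array([30.0, 45.0, 25.0])            # genotype counts: AA, Aa, aa
    n = obs.sum()
    p = (2 * obs[0] + obs[1]) / (2 * n)           # allele frequency of A
    q = 1.0 - p

    exp = n * np.array([p**2, 2 * p * q, q**2])   # Hardy-Weinberg expectations
    chi2_stat = np.sum((obs - exp) ** 2 / exp)
    p_value = chi2.sf(chi2_stat, df=1)            # df = 3 genotypes - 1 - 1 allele frequency
    print(f"chi2 = {chi2_stat:.3f}, p = {p_value:.3f}")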
Time-resolved measurements of statistics for a Nd:YAG laser.
Hubschmid, W; Bombach, R; Gerber, T
1994-08-20
Time-resolved measurements of the fluctuating intensity of a multimode frequency-doubled Nd:YAG laser have been performed. For various operating conditions, the enhancement factors in nonlinear optical processes that use a fluctuating instead of a single-mode laser have been determined up to the sixth order. In the case of reduced flash-lamp excitation and a switched-off laser amplifier, the intensity fluctuations agree with the normalized Gaussian model for the fluctuations of the fundamental frequency, whereas strong deviations are found under usual operating conditions. In the latter case, the frequency-doubled light has enhancement factors not far from the values of Gaussian statistics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barmin, V. V.; Asratyan, A. E.; Borisov, V. S.
2010-07-15
The data on the charge-exchange reaction K⁺Xe → K⁰pXe′, obtained with the bubble chamber DIANA, are reanalyzed using increased statistics and updated selections. Our previous evidence for formation of a narrow pK⁰ resonance with mass near 1538 MeV is confirmed. The statistical significance of the signal reaches some 8σ (6σ) standard deviations when estimated as S/√B (S/√(B + S)). The mass and intrinsic width of the Θ⁺ baryon are measured as m = 1538 ± 2 MeV and Γ = 0.39 ± 0.10 MeV.
Turbush, Sarah Katherine; Turkyilmaz, Ilser
2012-09-01
Precise treatment planning before implant surgery is necessary to identify vital structures and to ensure a predictable restorative outcome. The purpose of this study was to compare the accuracy of implant placement by using 3 different types of surgical guide: bone-supported, tooth-supported, and mucosa-supported. Thirty acrylic resin mandibles were fabricated with stereolithography (SLA) based on data from the cone beam computerized tomography (CBCT) scan of an edentulous patient. Ten of the mandibles were modified digitally before fabrication with the addition of 4 teeth, and 10 of the mandibles were modified after fabrication with soft acrylic resin to simulate mucosa. Each acrylic resin mandible had 5 implants virtually planned in a 3-D software program. A total of 150 implants were planned and placed by using SLA guides. Presurgical and postsurgical CBCT scans were superimposed to compare the virtual implant placement with the actual implant placement. For statistical analyses, a linear mixed models approach and t-test with the 2-sided alpha level set at .016 were used. All reported P values were adjusted by the Dunn-Sidak method to control the Type I error rate across multiple pairwise comparisons. The mean angular deviation of the long axis between the planned and placed implants was 2.2 ±1.2 degrees; the mean deviations in linear distance between the planned and placed implants were 1.18 ±0.42 mm at the implant neck and 1.44 ±0.67 mm at the implant apex for all 150 implants. After the superimposition procedure, the angular deviation of the placed implants was 2.26 ±1.30 degrees with the tooth-supported, 2.17 ±1.02 degrees with the bone-supported, and 2.29 ±1.28 degrees with the mucosa-supported SLA guide. The mean deviations in linear distance between the planned and placed implants at the neck and apex were 1.00 ±0.33 mm and 1.15 ±0.42 mm for the tooth-supported guides; 1.08 ±0.33 mm and 1.53 ±0.90 mm for the bone-supported guides; and 1.47 ±0.43 mm and 1.65 ±0.48 mm for the mucosa-supported SLA surgical guides. The results of this study show that stereolithographic surgical guides may be reliable in implant placement and that: 1) there was no statistically significant difference among the 3 types of guide when comparing angular deviation and 2) mucosa-supported guides were less accurate than both tooth-supported and bone-supported guides for linear deviation at the implant neck and apex. Copyright © 2012 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
Kay, Gary G; Schwartz, Howard I; Wingertzahn, Mark A; Jayawardena, Shyamalie; Rosenberg, Russell P
2016-05-01
Next-day residual effects of a nighttime dose of gabapentin 250 mg were evaluated on simulated driving performance in healthy participants in a randomized, placebo-controlled, double-blind, multicenter, four-period crossover study that included diphenhydramine citrate 76 mg and triazolam 0.5 mg. At treatment visits, participants (n = 59) were dosed at ~23:30, went to bed immediately, and awakened 6.5 h postdose for evaluation. The primary endpoint was the standard deviation of lateral position for the 100-km driving scenario. Additional measures of driving, sleepiness, and cognition were included. Study sensitivity was established with triazolam, which demonstrated significant next-day impairment on all driving endpoints, relative to placebo (p < 0.001). Gabapentin demonstrated noninferiority to placebo on standard deviation of lateral position and speed deviation but not for lane excursions. Diphenhydramine citrate demonstrated significant impairment relative to gabapentin and placebo on speed deviation (p < 0.05). Other comparisons were either nonsignificant or statistically ineligible per planned, sequential comparisons. Secondary endpoints for sleepiness and cognitive performance were supportive of these conclusions. Together, these data suggest that low-dose gabapentin had no appreciable next-day effects on simulated driving performance or cognitive functioning. Copyright © 2016 John Wiley & Sons, Ltd.
Bhatia, Gaurav; Tandon, Arti; Patterson, Nick; Aldrich, Melinda C; Ambrosone, Christine B; Amos, Christopher; Bandera, Elisa V; Berndt, Sonja I; Bernstein, Leslie; Blot, William J; Bock, Cathryn H; Caporaso, Neil; Casey, Graham; Deming, Sandra L; Diver, W Ryan; Gapstur, Susan M; Gillanders, Elizabeth M; Harris, Curtis C; Henderson, Brian E; Ingles, Sue A; Isaacs, William; De Jager, Phillip L; John, Esther M; Kittles, Rick A; Larkin, Emma; McNeill, Lorna H; Millikan, Robert C; Murphy, Adam; Neslund-Dudas, Christine; Nyante, Sarah; Press, Michael F; Rodriguez-Gil, Jorge L; Rybicki, Benjamin A; Schwartz, Ann G; Signorello, Lisa B; Spitz, Margaret; Strom, Sara S; Tucker, Margaret A; Wiencke, John K; Witte, John S; Wu, Xifeng; Yamamura, Yuko; Zanetti, Krista A; Zheng, Wei; Ziegler, Regina G; Chanock, Stephen J; Haiman, Christopher A; Reich, David; Price, Alkes L
2014-10-02
The extent of recent selection in admixed populations is currently an unresolved question. We scanned the genomes of 29,141 African Americans and failed to find any genome-wide-significant deviations in local ancestry, indicating no evidence of selection influencing ancestry after admixture. A recent analysis of data from 1,890 African Americans reported that there was evidence of selection in African Americans after their ancestors left Africa, both before and after admixture. Selection after admixture was reported on the basis of deviations in local ancestry, and selection before admixture was reported on the basis of allele-frequency differences between African Americans and African populations. The local-ancestry deviations reported by the previous study did not replicate in our very large sample, and we show that such deviations were expected purely by chance, given the number of hypotheses tested. We further show that the previous study's conclusion of selection in African Americans before admixture is also subject to doubt. This is because the FST statistics they used were inflated and because true signals of unusual allele-frequency differences between African Americans and African populations would be best explained by selection that occurred in Africa prior to migration to the Americas. Copyright © 2014 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
Roy, Tapta Kanchan; Carrington, Tucker; Gerber, R Benny
2014-08-21
Anharmonic vibrational spectroscopy calculations using MP2 and B3LYP computed potential surfaces are carried out for a series of molecules, and frequencies and intensities are compared with those from experiment. The vibrational self-consistent field with second-order perturbation correction (VSCF-PT2) is used in computing the spectra. The test calculations have been performed for the molecules HNO3, C2H4, C2H4O, H2SO4, CH3COOH, glycine, and alanine. Both MP2 and B3LYP give results in good accord with experimental frequencies, though, on the whole, MP2 gives very slightly better agreement. A statistical analysis of deviations in frequencies from experiment is carried out that gives interesting insights. The most probable percentage deviation from experimental frequencies is about -2% (to the red of the experiment) for B3LYP and +2% (to the blue of the experiment) for MP2. There is a higher probability for relatively large percentage deviations when B3LYP is used. The calculated intensities are also found to be in good accord with experiment, but the percentage deviations are much larger than those for frequencies. The results show that both MP2 and B3LYP potentials, used in VSCF-PT2 calculations, account well for anharmonic effects in the spectroscopy of molecules of the types considered.
Cristache, Corina Marilena; Gurbanescu, Silviu
2017-01-01
The aim of this study was to evaluate the accuracy of a stereolithographic template, with sleeve structure incorporated into the design, for computer-guided dental implant insertion in partially edentulous patients. Sixty-five implants were placed in twenty-five consecutive patients with a stereolithographic surgical template. After surgery, a digital impression was taken and the 3D inaccuracy of implant position at the entry point, apex, and angle deviation was measured using an inspection software tool. The Mann-Whitney U test was used to compare accuracy between maxillary and mandibular surgical guides. A p value < .05 was considered significant. Mean (and standard deviation) of the 3D error at the entry point was 0.798 mm (±0.52), at the implant apex it was 1.17 mm (±0.63), and mean angular deviation was 2.34 degrees (±0.85). A statistically significantly reduced 3D error was observed in the mandible compared with the maxilla at the entry point (p = .037), at the implant apex (p = .008), and in angular deviation (p = .030). The surgical template used proved highly accurate for implant insertion. Within the limitations of the present study, the protocol for comparing a digital file (treatment plan) with a postinsertion digital impression may be considered a useful procedure for assessing surgical template accuracy, avoiding the radiation exposure associated with postoperative CBCT scanning.
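A minimal sketch of the group comparison described, using SciPy's Mann-Whitney U test on placeholder maxillary and mandibular deviation values (not the study's data).

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
# Placeholder 3D entry-point errors (mm) for implants placed in the maxilla and mandible.
maxilla = rng.normal(0.9, 0.5, 35).clip(min=0)
mandible = rng.normal(0.7, 0.5, 30).clip(min=0)

stat, p = mannwhitneyu(maxilla, mandible, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.3f}  ->  significant at alpha = .05: {p < .05}")
```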
Professionalism in medical students at a private medical college in Karachi, Pakistan.
Sobani, Zain-ul-abedeen; Mohyuddin, Muhammad Masaud; Farooq, Fahd; Qaiser, Kanza Noor; Gani, Faiz; Bham, Nida Shahab; Raheem, Ahmed; Mehraj, Vikram; Saeed, Syed Abdul; Sharif, Hasanat; Sheerani, Mughis; Zuberi, Rukhsana Wamiq; Beg, Mohamamd Asim
2013-07-01
To determine levels of professionalism in undergraduate medical students at a private medical college and assess how changes emerge during their training. The study was conducted at Aga Khan University, a tertiary care teaching hospital, during November and December 2011. Freshmen, Year 3 and Year 5 students were requested to fill out a questionnaire. It was designed to assess the participants' levels of professionalism and how they perceived the professional environment around them by incorporating previously described scales. The questionnaire was re-validated on a random sample of practising clinicians at the same hospital. SPSS 17 was used for statistical analysis. The study sample comprised 204 participants. The mean score for level of individual professionalism was 7.72 ± 3.43. Only 13 (6.4%) students had a score one standard deviation above the faculty mean. A further 24 (11.8%) were one standard deviation below and 35 (17.2%) were two standard deviations below the faculty mean. The remaining 130 (63.7%) were >2 standard deviations below the faculty mean. Considering the level of education, the mean score for level of professionalism was 8.00 ± 3.39 for freshmen, 6.85 ± 3.41 for Year 3 students, and 8.40 ± 3.34 for Year 5 students. The currently employed teaching practices inculcating the values of professionalism in medical students serve as a buffer that keeps pre-training levels of professionalism from declining.
Fouad, Heba M; Abdelhakim, Mohamad A; Awadein, Ahmed; Elhilali, Hala
2016-10-01
To compare the outcomes of medial rectus (MR) muscle pulley fixation and augmented recession in children with convergence excess esotropia and variable-angle infantile esotropia. This was a prospective randomized interventional study in which children with convergence excess esotropia or variable-angle infantile esotropia were randomly allocated to either augmented MR muscle recession (augmented group) or MR muscle pulley posterior fixation (pulley group). In convergence excess, the MR recession was based on the average of distance and near angles of deviation with distance correction in the augmented group, and on the distance angle of deviation in the pulley group. In variable-angle infantile esotropia, the MR recession was based on the average of the largest and smallest angles in the augmented group and on the smallest angle in the pulley group. Pre- and postoperative ductions, versions, pattern strabismus, smallest and largest angles of deviation, and angle disparity were analyzed. Surgery was performed on 60 patients: 30 underwent bilateral augmented MR recession, and 30 underwent bilateral MR recession with pulley fixation. The success rate was statistically significantly higher (P = 0.037) in the pulley group (70%) than in the augmented group (40%). The postoperative smallest and largest angles and the angle disparity were statistically significantly lower in the pulley group than the augmented group (P < 0.01). Medial rectus muscle pulley fixation is a useful surgical step for addressing marked variability of the angle in variable angle esotropia and convergence excess esotropia. Copyright © 2016 American Association for Pediatric Ophthalmology and Strabismus. Published by Elsevier Inc. All rights reserved.
Yang, Fei; Hou, Xianru; Wu, Huijuan; Bao, Yongzhen
2014-02-01
To evaluate the characteristics of postoperative refractive status in age-related cataract patients with a shallow anterior chamber and the correlation between preoperative anterior chamber depth and postoperative refractive status. Prospective case-control study. Sixty-eight cases (90 eyes) with age-related cataract were recruited from October 2010 to January 2012 at Peking University People's Hospital, including 28 cases (34 eyes) in the control group and 40 cases (56 eyes) in the shallow anterior chamber group, classified according to anterior chamber depth (ACD) measured with the Pentacam system. Axial length and keratometry were measured with the IOL Master, and intraocular lens power was calculated using the SRK/T formula. Postoperative refraction, ACD measurement, and a comprehensive eye examination were performed at 1 month and 3 months after cataract surgery. A database was established in SPSS 13.0; the two groups were compared with independent-samples t-tests, and correlation analysis was performed with binary logistic regression. The postoperative refractive deviation at 1 month was (-0.39 ± 0.62) D in the control group and (+0.73 ± 0.26) D in the shallow anterior chamber group, a statistically significant difference between the two groups (P = 0.00, t = 3.67); at 3 months it was (-0.37 ± 0.62) D in the control group and (+0.79 ± 0.28) D in the shallow anterior chamber group, also statistically significant (P = 0.00, t = 3.33). In the shallow anterior chamber group, the shallower the ACD, the greater the refractive deviation (P = 0.00, r at 1 month = -0.57, r at 3 months = -0.61). A hyperopic shift was present in age-related cataract patients with shallow anterior chambers, and the shallower the ACD, the greater the hyperopic shift.
Non-parametric characterization of long-term rainfall time series
NASA Astrophysics Data System (ADS)
Tiwari, Harinarayan; Pandey, Brij Kishor
2018-03-01
The statistical study of rainfall time series is one of the approaches for efficient hydrological system design. Identifying and characterizing long-term rainfall time series could aid in improving hydrological systems forecasting. In the present study, eventual statistics was applied for the long-term (1851-2006) rainfall time series under seven meteorological regions of India. Linear trend analysis was carried out using the Mann-Kendall test for the observed rainfall series. The observed trend using the above-mentioned approach has been ascertained using the innovative trend analysis method. Innovative trend analysis has been found to be a strong tool to detect the general trend of rainfall time series. The sequential Mann-Kendall test has also been carried out to examine nonlinear trends of the series. The partial sum of cumulative deviation test is also found to be suitable to detect the nonlinear trend. Innovative trend analysis, the sequential Mann-Kendall test and the partial cumulative deviation test have potential to detect the general as well as nonlinear trend for the rainfall time series. Annual rainfall analysis suggests that the maximum change in mean rainfall is 11.53% for West Peninsular India, whereas the maximum fall in mean rainfall is 7.8% for the North Mountainous Indian region. The innovative trend analysis method is also capable of finding the number of change points in the time series. Additionally, we have performed the von Neumann ratio test and the cumulative deviation test to estimate the departure from homogeneity. Singular spectrum analysis has been applied in this study to evaluate the order of departure from homogeneity in the rainfall time series. The monsoon season (JS) of the North Mountainous India and West Peninsular India zones shows a higher departure from homogeneity, and the singular spectrum analysis results are coherent with this finding.
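A compact implementation of the classical (non-sequential) Mann-Kendall trend test of the kind applied to these rainfall series; the normal approximation without tie correction is used for brevity, and the rainfall series below is simulated.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Classical Mann-Kendall trend test (normal approximation, no tie correction)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    s = 0.0
    for i in range(n - 1):
        s += np.sign(x[i + 1:] - x[i]).sum()        # sum of sign(x_j - x_i) over all pairs i < j
    var_s = n * (n - 1) * (2 * n + 5) / 18.0         # variance of S under the no-trend null
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0   # with continuity correction
    p = 2 * norm.sf(abs(z))                          # two-sided p-value
    return s, z, p

# Illustrative annual rainfall series (mm) with a weak upward trend.
rng = np.random.default_rng(3)
years = np.arange(1851, 2007)
rain = 900 + 0.3 * (years - 1851) + rng.normal(0, 80, years.size)
print(mann_kendall(rain))
```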
Dexter, Franklin; O'Neill, Liam; Xin, Lei; Ledolter, Johannes
2008-12-01
We use resampling of data to explore the basic statistical properties of super-efficient data envelopment analysis (DEA) when used as a benchmarking tool by the manager of a single decision-making unit. Our focus is the gaps in the outputs (i.e., slacks adjusted for upward bias), as they reveal which outputs can be increased. The numerical experiments show that the estimates of the gaps fail to exhibit asymptotic consistency, a property expected for standard statistical inference. Specifically, increased sample sizes were not always associated with more accurate forecasts of the output gaps. The baseline DEA's gaps equaled the mode of the jackknife and the mode of resampling with/without replacement from any subset of the population; usually, the baseline DEA's gaps also equaled the median. The quartile deviations of gaps were close to zero when few decision-making units were excluded from the sample and the study unit happened to have few other units contributing to its benchmark. The results for the quartile deviations can be explained in terms of the effective combinations of decision-making units that contribute to the DEA solution. The jackknife can provide all the combinations contributing to the quartile deviation and only needs to be performed for those units that are part of the benchmark set. These results show that there is a strong rationale for examining DEA results with a sensitivity analysis that excludes one benchmark hospital at a time. This analysis enhances the quality of decision support using DEA estimates for the potential ofa decision-making unit to grow one or more of its outputs.
Self-selection and bias in a large prospective pregnancy cohort in Norway.
Nilsen, Roy M; Vollset, Stein Emil; Gjessing, Håkon K; Skjaerven, Rolv; Melve, Kari K; Schreuder, Patricia; Alsaker, Elin R; Haug, Kjell; Daltveit, Anne Kjersti; Magnus, Per
2009-11-01
Self-selection in epidemiological studies may introduce selection bias and influence the validity of study results. To evaluate potential bias due to self-selection in a large prospective pregnancy cohort in Norway, the authors studied differences in prevalence estimates and association measures between study participants and all women giving birth in Norway. Women who agreed to participate in the Norwegian Mother and Child Cohort Study (43.5% of invited; n = 73 579) were compared with all women giving birth in Norway (n = 398 849) using data from the population-based Medical Birth Registry of Norway in 2000-2006. Bias in the prevalence of 23 exposure and outcome variables was measured as the ratio of relative frequencies, whereas bias in exposure-outcome associations of eight relationships was measured as the ratio of odds ratios. Statistically significant relative differences in prevalence estimates between the cohort participants and the total population were found for all variables, except for maternal epilepsy, chronic hypertension and pre-eclampsia. There was a strong under-representation of the youngest women (<25 years), those living alone, mothers with more than two previous births and with previous stillbirths (relative deviation 30-45%). In addition, smokers, women with stillbirths and neonatal death were markedly under-represented in the cohort (relative deviation 22-43%), while multivitamin and folic acid supplement users were over-represented (relative deviation 31-43%). Despite this, no statistically relative differences in association measures were found between participants and the total population regarding the eight exposure-outcome associations. Using data from the Medical Birth Registry of Norway, this study suggests that prevalence estimates of exposures and outcomes, but not estimates of exposure-outcome associations are biased due to self-selection in the Norwegian Mother and Child Cohort Study.
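A small sketch of the two bias measures described, the ratio of relative frequencies for prevalence estimates and the ratio of odds ratios for an exposure-outcome association, computed on invented counts.

```python
def relative_deviation(p_cohort, p_population):
    """Bias in a prevalence estimate, as percent deviation of the cohort from the population."""
    return 100 * (p_cohort / p_population - 1)

def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    return (exposed_cases * unexposed_noncases) / (exposed_noncases * unexposed_cases)

# Invented example: smoking prevalence and a smoking -> outcome association.
print(f"prevalence bias: {relative_deviation(0.18, 0.26):+.0f}%")   # negative = under-representation
or_cohort     = odds_ratio(40, 960, 60, 3940)
or_population = odds_ratio(300, 7700, 500, 31500)
print(f"ratio of odds ratios: {or_cohort / or_population:.2f}")      # near 1 = association not biased
```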
A Bayesian Method for Identifying Contaminated Detectors in Low-Level Alpha Spectrometers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maclellan, Jay A.; Strom, Daniel J.; Joyce, Kevin E.
2011-11-02
Analyses used for radiobioassay and other radiochemical tests are normally designed to meet specified quality objectives, such as relative bias, precision, and minimum detectable activity (MDA). In the case of radiobioassay analyses for alpha emitting radionuclides, a major determiner of the process MDA is the instrument background. Alpha spectrometry detectors are often restricted to only a few counts over multi-day periods in order to meet required MDAs for nuclides such as plutonium-239 and americium-241. A detector background criterion is often set empirically based on experience, or frequentist or classical statistics are applied to the calculated background count necessary to meet a required MDA. An acceptance criterion for the detector background is set at the multiple of the estimated background standard deviation above the assumed mean that provides an acceptably small probability of observation if the mean and standard deviation estimate are correct. The major problem with this method is that the observed background counts used to estimate the mean, and thereby the standard deviation when a Poisson distribution is assumed, are often in the range of zero to three counts. At those expected count levels it is impossible to obtain a good estimate of the true mean from a single measurement. As an alternative, Bayesian statistical methods allow calculation of the expected detector background count distribution based on historical counts from new, uncontaminated detectors. This distribution can then be used to identify detectors showing an increased probability of contamination. The effect of varying the assumed range of background counts (i.e., the prior probability distribution) from new, uncontaminated detectors is discussed.
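A minimal conjugate (gamma-Poisson) sketch of the approach: build the expected background-count distribution from historical counts on new, uncontaminated detectors and flag a detector whose observed count is improbable under the posterior predictive distribution. The prior, the historical counts, and the flagging threshold are illustrative assumptions rather than the authors' choices.

```python
import numpy as np
from scipy.stats import nbinom

# Historical background counts from new, uncontaminated detectors (counts per counting period).
historical = np.array([0, 1, 0, 2, 0, 0, 1, 3, 0, 1])   # invented data

# Gamma(shape a0, rate b0) prior on the Poisson background rate; weakly informative by assumption.
a0, b0 = 0.5, 1.0
a = a0 + historical.sum()
b = b0 + historical.size

# Posterior predictive for one new counting period is negative binomial.
predictive = nbinom(n=a, p=b / (b + 1.0))

observed = 6                            # count seen on the detector being screened
p_tail = predictive.sf(observed - 1)    # P(count >= observed) under the background model
print(f"P(background count >= {observed}) = {p_tail:.4f}")
print("flag as possibly contaminated" if p_tail < 0.01 else "consistent with background")
```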
NASA Technical Reports Server (NTRS)
Meneghini, Robert; Kim, Hyokyung
2016-01-01
For an airborne or spaceborne radar, the precipitation-induced path attenuation can be estimated from the measurements of the normalized surface cross section, sigma 0, in the presence and absence of precipitation. In one implementation, the mean rain-free estimate and its variability are found from a lookup table (LUT) derived from previously measured data. For the dual-frequency precipitation radar aboard the global precipitation measurement satellite, the nominal table consists of the statistics of the rain-free 0 over a 0.5 deg x 0.5 deg latitude-longitude grid using a three-month set of input data. However, a problem with the LUT is an insufficient number of samples in many cells. An alternative table is constructed by a stepwise procedure that begins with the statistics over a 0.25 deg x 0.25 deg grid. If the number of samples at a cell is too few, the area is expanded, cell by cell, choosing at each step that cell that minimizes the variance of the data. The question arises, however, as to whether the selected region corresponds to the smallest variance. To address this question, a second type of variable-averaging grid is constructed using all possible spatial configurations and computing the variance of the data within each region. Comparisons of the standard deviations for the fixed and variable-averaged grids are given as a function of incidence angle and surface type using a three-month set of data. The advantage of variable spatial averaging is that the average standard deviation can be reduced relative to the fixed grid while satisfying the minimum sample requirement.
Marko, Nicholas F.; Weil, Robert J.
2012-01-01
Introduction Gene expression data is often assumed to be normally-distributed, but this assumption has not been tested rigorously. We investigate the distribution of expression data in human cancer genomes and study the implications of deviations from the normal distribution for translational molecular oncology research. Methods We conducted a central moments analysis of five cancer genomes and performed empiric distribution fitting to examine the true distribution of expression data both on the complete-experiment and on the individual-gene levels. We used a variety of parametric and nonparametric methods to test the effects of deviations from normality on gene calling, functional annotation, and prospective molecular classification using a sixth cancer genome. Results Central moments analyses reveal statistically-significant deviations from normality in all of the analyzed cancer genomes. We observe as much as 37% variability in gene calling, 39% variability in functional annotation, and 30% variability in prospective, molecular tumor subclassification associated with this effect. Conclusions Cancer gene expression profiles are not normally-distributed, either on the complete-experiment or on the individual-gene level. Instead, they exhibit complex, heavy-tailed distributions characterized by statistically-significant skewness and kurtosis. The non-Gaussian distribution of this data affects identification of differentially-expressed genes, functional annotation, and prospective molecular classification. These effects may be reduced in some circumstances, although not completely eliminated, by using nonparametric analytics. This analysis highlights two unreliable assumptions of translational cancer gene expression analysis: that “small” departures from normality in the expression data distributions are analytically-insignificant and that “robust” gene-calling algorithms can fully compensate for these effects. PMID:23118863
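A sketch of a per-gene central-moments check of the kind described, computing skewness, excess kurtosis, and a D'Agostino-Pearson normality test on a simulated (deliberately heavy-tailed) expression matrix.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Simulated log-expression matrix: 2000 "genes" x 60 samples, deliberately heavy-tailed.
expr = rng.standard_t(df=3, size=(2000, 60))

skew = stats.skew(expr, axis=1)
kurt = stats.kurtosis(expr, axis=1)       # excess kurtosis (0 for a normal distribution)
_, p = stats.normaltest(expr, axis=1)     # D'Agostino-Pearson omnibus test, one p-value per gene

frac_nonnormal = np.mean(p < 0.05)
print(f"median skewness {np.median(skew):.2f}, median excess kurtosis {np.median(kurt):.2f}")
print(f"{100 * frac_nonnormal:.0f}% of genes reject normality at alpha = 0.05")
```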
Effects of sales promotion on smoking among U.S. ninth graders.
Redmond, W H
1999-03-01
The purpose of this study was to examine the association between tobacco marketing efforts and daily cigarette smoking by adolescents. This was a longitudinal study of uptake of smoking on a daily basis with smoking data from the Monitoring the Future project. Diffusion modeling was used to generate expected rates of daily smoking initiation, which were compared with actual rates. Study data were from a national survey, administered annually from 1978 through 1995. Between 4,416 and 6,099 high school seniors participated per year, for a total of 94,652. The main outcome measure was a deviation score based on expected rates from diffusion modeling vs actual rates of initiation of daily use of cigarettes by ninth graders. Annual data on cigarette marketing expenditures were reported by the Federal Trade Commission. The deviation scores of expected vs actual rates of smoking initiation for ninth graders were correlated with annual changes in marketing expenditures. The correlation between sales promotion expenditures and the deviation score in daily smoking initiation was large (r = 0. 769) and statistically significant (P = 0.009) in the 1983-1992 period. Correlations between sales promotion and smoking initiation were not statistically significant in 1978-1982. Correlations between advertising expenditures and smoking initiation were not significant in either period. In years of high promotional expenditures, the rate of daily smoking initiation among ninth graders was higher than expected from diffusion model predictions. Large promotional pushes by cigarette marketers in the 1980s and 1990s appear to be linked with increased levels of daily smoking initiation among ninth graders. Copyright 1999 American Health Foundation and Academic Press.
Bohner, Lauren Oliveira Lima; De Luca Canto, Graziela; Marció, Bruno Silva; Laganá, Dalva Cruz; Sesma, Newton; Tortamano Neto, Pedro
2017-11-01
The internal and marginal adaptation of a computer-aided design and computer-aided manufacturing (CAD-CAM) prosthesis relies on the quality of the 3-dimensional image. The quality of imaging systems requires evaluation. The purpose of this in vitro study was to evaluate and compare the trueness of intraoral and extraoral scanners in scanning prepared teeth. Ten acrylic resin teeth to be used as a reference dataset were prepared according to standard guidelines and scanned with an industrial computed tomography system. Data were acquired with 4 scanner devices (n=10): the Trios intraoral scanner (TIS), the D250 extraoral scanner (DES), the Cerec Bluecam intraoral scanner (CBIS), and the Cerec InEosX5 extraoral scanner (CIES). For intraoral scanners, each tooth was digitized individually. Extraoral scanning was obtained from dental casts of each prepared tooth. The discrepancy between each scan and its respective reference model was obtained by deviation analysis (μm) and volume/area difference (μm). Statistical analysis was performed using linear models for repeated measurement factors test and 1-way ANOVA (α=.05). No significant differences in deviation values were found among scanners. For CBIS and CIES, the deviation was significantly higher (P<.05) for occlusal and cervical surfaces. With regard to volume differences, no statistically significant differences were found (TIS=340 ±230 μm; DES=380 ±360 μm; CBIS=780 ±770 μm; CIES=340 ±300 μm). Intraoral and extraoral scanners showed similar trueness in scanning prepared teeth. Higher discrepancies are expected to occur in the cervical region and on the occlusal surface. Copyright © 2017 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Schuch, Klaus; Logothetis, Nikos K.; Maass, Wolfgang
2011-01-01
A major goal of computational neuroscience is the creation of computer models for cortical areas whose response to sensory stimuli resembles that of cortical areas in vivo in important aspects. It is seldom considered whether the simulated spiking activity is realistic (in a statistical sense) in response to natural stimuli. Because certain statistical properties of spike responses were suggested to facilitate computations in the cortex, acquiring a realistic firing regimen in cortical network models might be a prerequisite for analyzing their computational functions. We present a characterization and comparison of the statistical response properties of the primary visual cortex (V1) in vivo and in silico in response to natural stimuli. We recorded from multiple electrodes in area V1 of 4 macaque monkeys and developed a large state-of-the-art network model for a 5 × 5-mm patch of V1 composed of 35,000 neurons and 3.9 million synapses that integrates previously published anatomical and physiological details. By quantitative comparison of the model response to the “statistical fingerprint” of responses in vivo, we find that our model for a patch of V1 responds to the same movie in a way which matches the statistical structure of the recorded data surprisingly well. The deviation between the firing regimen of the model and the in vivo data are on the same level as deviations among monkeys and sessions. This suggests that, despite strong simplifications and abstractions of cortical network models, they are nevertheless capable of generating realistic spiking activity. To reach a realistic firing state, it was not only necessary to include both N-methyl-d-aspartate and GABAB synaptic conductances in our model, but also to markedly increase the strength of excitatory synapses onto inhibitory neurons (>2-fold) in comparison to literature values, hinting at the importance to carefully adjust the effect of inhibition for achieving realistic dynamics in current network models. PMID:21106898
ERIC Educational Resources Information Center
Juan, Wu Xiao; Abidin, Mohamad Jafre Zainol; Eng, Lin Siew
2013-01-01
This survey aims at studying the relationship between English vocabulary threshold and word guessing strategy that is used in reading comprehension learning among 80 pre-university Chinese students in Malaysia. T-test is the main statistical test for this research, and the collected data is analysed using SPSS. From the standard deviation test…
ERIC Educational Resources Information Center
Mousavi, Bahareh; Safarzadeh, Sahar
2016-01-01
This study aimed to determine the effectiveness of the group play therapy on the insecure attachment and social skills of orphans in Ahvaz city. Statistical population included all orphans in Ahvaz city, of whom 30 students were selected whose scores in insecure attachment and in social skills were one standard deviation higher and one standard…
Mentors Offering Maternal Support (M.O.M.S.)
2011-08-02
at Sessions 1, 5, and 8. Table 1. Pretest-posttest, randomized, controlled, repeated-measures design. Experimental Intervention Sessions ... theoretical mediators of self-esteem and emotional support (0.6 standard deviation change from pretest to posttest) with reduction of effect to 0.4 ... always brought back to the designated topic. In order to have statistically significant results for the outcome variables the study sessions must
JAN transistor and diode characterization test program, JANTX diode 1N5623
NASA Technical Reports Server (NTRS)
Takeda, H.
1977-01-01
A statistical summary of the electrical characterization of diodes and transistors is presented. Each parameter is presented with test conditions, mean, standard deviation, lowest reading, 10% point (where 10% of all readings are equal to or less than the indicated reading), 90% point (where 90% of all readings are equal to or less than indicated reading) and the highest reading.
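A short sketch reproducing the per-parameter summary described (mean, standard deviation, lowest and highest readings, and the 10% and 90% points) on invented readings.

```python
import numpy as np

readings = np.array([1.02, 0.98, 1.05, 1.10, 0.95, 1.01, 0.99, 1.07, 1.03, 0.97])  # invented

summary = {
    "mean": readings.mean(),
    "std dev": readings.std(ddof=1),
    "lowest": readings.min(),
    "10% point": np.percentile(readings, 10),   # 10% of readings are <= this value
    "90% point": np.percentile(readings, 90),   # 90% of readings are <= this value
    "highest": readings.max(),
}
for name, value in summary.items():
    print(f"{name:>10}: {value:.3f}")
```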
Gerald E. Rehfeldt
1979-01-01
Growth, phenology and frost tolerance of seedlings from 50 populations of Douglas-fir (Pseudotsuga menziesii var. glauca) were compared in 12 environments. Statistical analyses of six variables (bud burst, bud set, 3-year height, spring and fall frost injuries, and deviation from regression of 3-year height on 2-year height) showed that populations not only differed in...
ERIC Educational Resources Information Center
Gidey, Mu'uz
2015-01-01
This action research is carried out in a practical classroom setting to devise an innovative way of administering tutorial classes to improve students' learning competence with particular reference to gendered test scores. A before-after analysis of test score means and standard deviations along with t-statistical tests of hypotheses of second…
Vadapalli, Sriharsha Babu; Atluri, Kaleswararao; Putcha, Madhu Sudhan; Kondreddi, Sirisha; Kumar, N. Suman; Tadi, Durga Prasad
2016-01-01
Objectives: This in vitro study was designed to compare polyvinyl-siloxane (PVS) monophase and polyether (PE) monophase materials under dry and moist conditions for properties such as surface detail reproduction, dimensional stability, and gypsum compatibility. Materials and Methods: Surface detail reproduction was evaluated using two criteria. Dimensional stability was evaluated according to American Dental Association (ADA) specification no. 19. Gypsum compatibility was assessed by two criteria. All the samples were evaluated, and the data obtained were analyzed by a two-way analysis of variance (ANOVA) and Pearson's Chi-square tests. Results: When surface detail reproduction was evaluated with modification of ADA specification no. 19, both the groups under the two conditions showed no significant difference statistically. When evaluated macroscopically both the groups showed statistically significant difference. Results for dimensional stability showed that the deviation from standard was significant among the two groups, where Aquasil group showed significantly more deviation compared to Impregum group (P < 0.001). Two conditions also showed significant difference, with moist conditions showing significantly more deviation compared to dry condition (P < 0.001). The results of gypsum compatibility when evaluated with modification of ADA specification no. 19 and by giving grades to the casts for both the groups and under two conditions showed no significant difference statistically. Conclusion: Regarding dimensional stability, both impregum and aquasil performed better in dry condition than in moist; impregum performed better than aquasil in both the conditions. When tested for surface detail reproduction according to ADA specification, under dry and moist conditions both of them performed almost equally. When tested according to macroscopic evaluation, impregum and aquasil performed significantly better in dry condition compared to moist condition. In dry condition, both the materials performed almost equally. In moist condition, aquasil performed significantly better than impregum. Regarding gypsum compatibility according to ADA specification, in dry condition both the materials performed almost equally, and in moist condition aquasil performed better than impregum. When tested by macroscopic evaluation, impregum performed better than aquasil in both the conditions. PMID:27583217
Vadapalli, Sriharsha Babu; Atluri, Kaleswararao; Putcha, Madhu Sudhan; Kondreddi, Sirisha; Kumar, N Suman; Tadi, Durga Prasad
2016-01-01
This in vitro study was designed to compare polyvinyl-siloxane (PVS) monophase and polyether (PE) monophase materials under dry and moist conditions for properties such as surface detail reproduction, dimensional stability, and gypsum compatibility. Surface detail reproduction was evaluated using two criteria. Dimensional stability was evaluated according to American Dental Association (ADA) specification no. 19. Gypsum compatibility was assessed by two criteria. All the samples were evaluated, and the data obtained were analyzed by a two-way analysis of variance (ANOVA) and Pearson's Chi-square tests. When surface detail reproduction was evaluated with modification of ADA specification no. 19, both the groups under the two conditions showed no significant difference statistically. When evaluated macroscopically both the groups showed statistically significant difference. Results for dimensional stability showed that the deviation from standard was significant among the two groups, where Aquasil group showed significantly more deviation compared to Impregum group (P < 0.001). Two conditions also showed significant difference, with moist conditions showing significantly more deviation compared to dry condition (P < 0.001). The results of gypsum compatibility when evaluated with modification of ADA specification no. 19 and by giving grades to the casts for both the groups and under two conditions showed no significant difference statistically. Regarding dimensional stability, both impregum and aquasil performed better in dry condition than in moist; impregum performed better than aquasil in both the conditions. When tested for surface detail reproduction according to ADA specification, under dry and moist conditions both of them performed almost equally. When tested according to macroscopic evaluation, impregum and aquasil performed significantly better in dry condition compared to moist condition. In dry condition, both the materials performed almost equally. In moist condition, aquasil performed significantly better than impregum. Regarding gypsum compatibility according to ADA specification, in dry condition both the materials performed almost equally, and in moist condition aquasil performed better than impregum. When tested by macroscopic evaluation, impregum performed better than aquasil in both the conditions.
NASA Astrophysics Data System (ADS)
Leka, K. D.; Barnes, G.
2003-10-01
We apply statistical tests based on discriminant analysis to the wide range of photospheric magnetic parameters described in a companion paper by Leka & Barnes, with the goal of identifying those properties that are important for the production of energetic events such as solar flares. The photospheric vector magnetic field data from the University of Hawai'i Imaging Vector Magnetograph are well sampled both temporally and spatially, and we include here data covering 24 flare-event and flare-quiet epochs taken from seven active regions. The mean value and rate of change of each magnetic parameter are treated as separate variables, thus evaluating both the parameter's state and its evolution, to determine which properties are associated with flaring. Considering single variables first, Hotelling's T2-tests show small statistical differences between flare-producing and flare-quiet epochs. Even pairs of variables considered simultaneously, which do show a statistical difference for a number of properties, have high error rates, implying a large degree of overlap of the samples. To better distinguish between flare-producing and flare-quiet populations, larger numbers of variables are simultaneously considered; lower error rates result, but no unique combination of variables is clearly the best discriminator. The sample size is too small to directly compare the predictive power of large numbers of variables simultaneously. Instead, we rank all possible four-variable permutations based on Hotelling's T2-test and look for the most frequently appearing variables in the best permutations, with the interpretation that they are most likely to be associated with flaring. These variables include an increasing kurtosis of the twist parameter and a larger standard deviation of the twist parameter, but a smaller standard deviation of the distribution of the horizontal shear angle and a horizontal field that has a smaller standard deviation but a larger kurtosis. To support the ``sorting all permutations'' method of selecting the most frequently occurring variables, we show that the results of a single 10-variable discriminant analysis are consistent with the ranking. We demonstrate that individually, the variables considered here have little ability to differentiate between flaring and flare-quiet populations, but with multivariable combinations, the populations may be distinguished.
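A compact two-sample Hotelling's T² test of the kind used to compare flare-producing and flare-quiet epochs; the four-variable parameter vectors below are simulated stand-ins, not magnetogram data.

```python
import numpy as np
from scipy.stats import f

def hotelling_t2(x, y):
    """Two-sample Hotelling's T^2 test with pooled covariance; returns (T^2, F, p)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n1, n2, p_dim = x.shape[0], y.shape[0], x.shape[1]
    d = x.mean(axis=0) - y.mean(axis=0)
    s_pooled = ((n1 - 1) * np.cov(x, rowvar=False) +
                (n2 - 1) * np.cov(y, rowvar=False)) / (n1 + n2 - 2)
    t2 = (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(s_pooled, d)
    f_stat = (n1 + n2 - p_dim - 1) / (p_dim * (n1 + n2 - 2)) * t2
    p_val = f.sf(f_stat, p_dim, n1 + n2 - p_dim - 1)
    return t2, f_stat, p_val

rng = np.random.default_rng(5)
flaring = rng.normal([0.2, 1.0, 0.5, 0.1], 1.0, size=(24, 4))   # simulated parameter vectors
quiet   = rng.normal([0.0, 1.0, 0.4, 0.0], 1.0, size=(24, 4))
print(hotelling_t2(flaring, quiet))
```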
Analysis of Statistical Methods Currently used in Toxicology Journals
Na, Jihye; Yang, Hyeri
2014-01-01
Statistical methods are frequently used in toxicology, yet it is not clear whether the methods employed by the studies are used consistently and conducted based on sound statistical grounds. The purpose of this paper is to describe statistical methods used in top toxicology journals. More specifically, we sampled 30 papers published in 2014 from Toxicology and Applied Pharmacology, Archives of Toxicology, and Toxicological Science and described methodologies used to provide descriptive and inferential statistics. One hundred thirteen endpoints were observed in those 30 papers, and most studies had sample sizes less than 10, with the median and the mode being 6 and 3 & 6, respectively. The mean (105/113, 93%) was dominantly used to measure central tendency, and standard error of the mean (64/113, 57%) and standard deviation (39/113, 34%) were used to measure dispersion, while few studies provided justification for why these methods were selected. Inferential statistics were frequently conducted (93/113, 82%), with one-way ANOVA being most popular (52/93, 56%), yet few studies conducted either normality or equal variance tests. These results suggest that more consistent and appropriate use of statistical methods is necessary, which may enhance the role of toxicology in public health. PMID:25343012
Analysis of Statistical Methods Currently used in Toxicology Journals.
Na, Jihye; Yang, Hyeri; Bae, SeungJin; Lim, Kyung-Min
2014-09-01
Statistical methods are frequently used in toxicology, yet it is not clear whether the methods employed by the studies are used consistently and conducted based on sound statistical grounds. The purpose of this paper is to describe statistical methods used in top toxicology journals. More specifically, we sampled 30 papers published in 2014 from Toxicology and Applied Pharmacology, Archives of Toxicology, and Toxicological Science and described methodologies used to provide descriptive and inferential statistics. One hundred thirteen endpoints were observed in those 30 papers, and most studies had sample sizes less than 10, with the median and the mode being 6 and 3 & 6, respectively. The mean (105/113, 93%) was dominantly used to measure central tendency, and standard error of the mean (64/113, 57%) and standard deviation (39/113, 34%) were used to measure dispersion, while few studies provided justification for why these methods were selected. Inferential statistics were frequently conducted (93/113, 82%), with one-way ANOVA being most popular (52/93, 56%), yet few studies conducted either normality or equal variance tests. These results suggest that more consistent and appropriate use of statistical methods is necessary, which may enhance the role of toxicology in public health.
Klein, Christian; Groll-Knapp, Elisabeth; Kundi, Michael; Kinz, Wieland
2009-12-17
Wearing shoes of insufficient length during childhood has often been cited as leading to deformities of the foot, particularly to the development of hallux valgus disorders. Until now, these assumptions have not been confirmed through scientific research. This study aims to investigate whether this association can be statistically proven, and if children who wear shoes of insufficient length actually do have a higher risk of a more pronounced lateral deviation of the hallux. 858 pre-school children were included in the study. The study sample was stratified by sex, urban/rural areas and Austrian province. The hallux angle and the length of the feet were recorded. The inside length of the children's footwear (indoor shoes worn in pre-school and outdoor shoes) was assessed. Personal data and different anthropometric measurements were taken. The risk of hallux valgus deviation was statistically tested by a stepwise logistic regression analysis and the relative risk (odds ratio) for a hallux angle ≥4 degrees was calculated. Exact examinations of the hallux angle could be conducted on a total of 1,579 individual feet. Only 23.9% out of 1,579 feet presented a straight position of the great toe. The others were characterized by lateral deviations (valgus position) of different degrees, equalling 10 degrees or greater in 14.2% of the children's feet. 88.8% of 808 children examined wore indoor footwear that was of insufficient length, and 69.4% of 812 children wore outdoor shoes that were too short. A significant relationship was observed between the lengthwise fit of the shoes and the hallux angle: the shorter the shoe, the higher the value of the hallux angle. The relative risk (odds ratio) of a lateral hallux deviation of ≥4 degrees in children wearing shoes of insufficient length was significantly increased. There is a significant relationship between the hallux angle in children and footwear that is too short in length. The fact that the majority of the children examined were wearing shoes of insufficient length makes the issue particularly significant. Our results emphasize the importance of ensuring that children's footwear fits properly.
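A sketch of the kind of logistic-regression odds-ratio estimate reported, with shoe shortfall as the exposure and a hallux angle of at least 4 degrees as the outcome; the dataset is simulated and statsmodels is assumed to be available.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 800
short_shoes = rng.binomial(1, 0.7, n)                   # 1 = shoe shorter than the foot (simulated)
# Simulate a higher probability of a hallux angle >= 4 degrees when shoes are too short.
p = 1 / (1 + np.exp(-(-1.0 + 0.8 * short_shoes)))
hallux_dev = rng.binomial(1, p)

X = sm.add_constant(short_shoes.astype(float))
fit = sm.Logit(hallux_dev, X).fit(disp=0)
or_est = np.exp(fit.params[1])                          # exponentiated coefficient = odds ratio
ci_low, ci_high = np.exp(fit.conf_int()[1])
print(f"odds ratio for short shoes: {or_est:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```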
Modeling of adsorption dynamics at air-liquid interfaces using statistical rate theory (SRT).
Biswas, M E; Chatzis, I; Ioannidis, M A; Chen, P
2005-06-01
A large number of natural and technological processes involve mass transfer at interfaces. Interfacial properties, e.g., adsorption, play a key role in such applications as wetting, foaming, coating, and stabilizing of liquid films. The mechanistic understanding of surface adsorption often assumes molecular diffusion in the bulk liquid and subsequent adsorption at the interface. Diffusion is well described by Fick's law, while adsorption kinetics is less understood and is commonly described using Langmuir-type empirical equations. In this study, a general theoretical model for adsorption kinetics/dynamics at the air-liquid interface is developed; in particular, a new kinetic equation based on the statistical rate theory (SRT) is derived. Similar to many reported kinetic equations, the new kinetic equation also involves a number of parameters, but all these parameters are theoretically obtainable. In the present model, the adsorption dynamics is governed by three dimensionless numbers: psi (ratio of adsorption thickness to diffusion length), lambda (ratio of square of the adsorption thickness to the ratio of adsorption to desorption rate constant), and Nk (ratio of the adsorption rate constant to the product of diffusion coefficient and bulk concentration). Numerical simulations for surface adsorption using the proposed model are carried out and verified. The difference in surface adsorption between the general and the diffusion controlled model is estimated and presented graphically as contours of deviation. Three different regions of adsorption dynamics are identified: diffusion controlled (deviation less than 10%), mixed diffusion and transfer controlled (deviation in the range of 10-90%), and transfer controlled (deviation more than 90%). These three different modes predominantly depend on the value of Nk. The corresponding ranges of Nk for the studied values of psi (10(-2)
Reed, Donovan S; Apsey, Douglas; Steigleman, Walter; Townley, James; Caldwell, Matthew
2017-11-01
In an attempt to maximize treatment outcomes, refractive surgery techniques are being directed toward customized ablations to correct not only lower-order aberrations but also higher-order aberrations specific to the individual eye. Measurement of the entirety of ocular aberrations is the most definitive means to establish the true effect of refractive surgery on image quality and visual performance. Whether or not there is a statistically significant difference in induced higher-order corneal aberrations between the VISX Star S4 (Abbott Medical Optics, Santa Ana, California) and the WaveLight EX500 (Alcon, Fort Worth, Texas) lasers was examined. A retrospective analysis was performed to investigate the difference in root-mean-square (RMS) value of the higher-order corneal aberrations postoperatively between two currently available laser platforms, the VISX Star S4 and the WaveLight EX500 lasers. The RMS is a compilation of higher-order corneal aberrations. Data from 240 total eyes of active duty military or Department of Defense beneficiaries who completed photorefractive keratectomy (PRK) or laser in situ keratomileusis (LASIK) refractive surgery at the Wilford Hall Ambulatory Surgical Center Joint Warfighter Refractive Surgery Center were examined. Using SPSS statistics software (IBM Corp., Armonk, New York), the mean changes in RMS values between the two lasers and refractive surgery procedures were determined. A Student t test was performed to compare the RMS of the higher-order aberrations of the subjects' corneas from the lasers being studied. A regression analysis was performed to adjust for preoperative spherical equivalent. The study and a waiver of informed consent have been approved by the Clinical Research Division of the 59th Medical Wing Institutional Review Board (Protocol Number: 20150093H). The mean change in RMS value for PRK using the VISX laser was 0.00122, with a standard deviation of 0.02583. The mean change in RMS value for PRK using the WaveLight EX500 laser was 0.004323, with a standard deviation of 0.02916. The mean change in RMS value for LASIK using the VISX laser was 0.00841, with a standard deviation of 0.03011. The mean change in RMS value for LASIK using the WaveLight EX500 laser was 0.0174, with a standard deviation of 0.02417. When comparing the two lasers for PRK and LASIK procedures, the p values were 0.431 and 0.295, respectively. The results of this study suggest no statistically significant difference concerning induced higher-order aberrations between the two laser platforms for either LASIK or PRK. Overall, the VISX laser did have consistently lower induced higher-order aberrations postoperatively, but this did not reach statistical significance. It is likely the statistical significance of this study was hindered by the power, given the relatively small sample size. Additional limitations of the study include its design, being a retrospective analysis, and the generalizability of the study, as the Department of Defense population may be significantly different from the typical refractive surgery population in terms of overall health and preoperative refractive error. Further investigation of visual outcomes between the two laser platforms should be investigated before determining superiority in terms of visual image and quality postoperatively. Reprint & Copyright © 2017 Association of Military Surgeons of the U.S.
NASA Astrophysics Data System (ADS)
Adams, T.; Batra, P.; Bugel, L.; Camilleri, L.; Conrad, J. M.; de Gouvêa, A.; Fisher, P. H.; Formaggio, J. A.; Jenkins, J.; Karagiorgi, G.; Kobilarcik, T. R.; Kopp, S.; Kyle, G.; Loinaz, W. A.; Mason, D. A.; Milner, R.; Moore, R.; Morfín, J. G.; Nakamura, M.; Naples, D.; Nienaber, P.; Olness, F. I.; Owens, J. F.; Pate, S. F.; Pronin, A.; Seligman, W. G.; Shaevitz, M. H.; Schellman, H.; Schienbein, I.; Syphers, M. J.; Tait, T. M. P.; Takeuchi, T.; Tan, C. Y.; van de Water, R. G.; Yamamoto, R. K.; Yu, J. Y.
We extend the physics case for a new high-energy, ultra-high statistics neutrino scattering experiment, NuSOnG (Neutrino Scattering On Glass) to address a variety of issues including precision QCD measurements, extraction of structure functions, and the derived Parton Distribution Functions (PDF's). This experiment uses a Tevatron-based neutrino beam to obtain a sample of Deep Inelastic Scattering (DIS) events which is over two orders of magnitude larger than past samples. We outline an innovative method for fitting the structure functions using a parametrized energy shift which yields reduced systematic uncertainties. High statistics measurements, in combination with improved systematics, will enable NuSOnG to perform discerning tests of fundamental Standard Model parameters as we search for deviations which may hint of "Beyond the Standard Model" physics.
Irreversibility and entanglement spectrum statistics in quantum circuits
NASA Astrophysics Data System (ADS)
Shaffer, Daniel; Chamon, Claudio; Hamma, Alioscia; Mucciolo, Eduardo R.
2014-12-01
We show that in a quantum system evolving unitarily under a stochastic quantum circuit the notions of irreversibility, universality of computation, and entanglement are closely related. As the state evolves from an initial product state, it gets asymptotically maximally entangled. We define irreversibility as the failure of searching for a disentangling circuit using a Metropolis-like algorithm. We show that irreversibility corresponds to Wigner-Dyson statistics in the level spacing of the entanglement eigenvalues, and that this is obtained from a quantum circuit made from a set of universal gates for quantum computation. If, on the other hand, the system is evolved with a non-universal set of gates, the statistics of the entanglement level spacing deviates from Wigner-Dyson and the disentangling algorithm succeeds. These results open a new way to characterize irreversibility in quantum systems.
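A small sketch of one common way to compare a spectrum against Wigner-Dyson versus Poisson statistics, the mean consecutive-gap ratio; the reference values quoted in the comments are the usual random-matrix results, and the spectra below are generic stand-ins rather than entanglement spectra from a quantum circuit.

```python
import numpy as np

def mean_gap_ratio(levels):
    """Mean ratio r = min(s_i, s_{i+1}) / max(s_i, s_{i+1}) of consecutive level spacings."""
    s = np.diff(np.sort(np.asarray(levels, float)))
    s = s[s > 0]
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return r.mean()

rng = np.random.default_rng(7)
# Stand-in spectra: eigenvalues of a GOE random matrix versus independent (uncorrelated) levels.
a = rng.normal(size=(500, 500))
goe_levels = np.linalg.eigvalsh((a + a.T) / 2)
poisson_levels = np.sort(rng.uniform(0, 1, 500))

print(f"GOE-like spectrum:   <r> = {mean_gap_ratio(goe_levels):.3f}  (Wigner-Dyson GOE ~ 0.53)")
print(f"uncorrelated levels: <r> = {mean_gap_ratio(poisson_levels):.3f}  (Poisson ~ 0.39)")
```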
Comparison of a novel fixation device with standard suturing methods for spinal cord stimulators.
Bowman, Richard G; Caraway, David; Bentley, Ishmael
2013-01-01
Spinal cord stimulation is a well-established treatment for chronic neuropathic pain of the trunk or limbs. Currently, the standard method of fixation is to affix the leads of the neuromodulation device to soft tissue, fascia or ligament, through the use of manually tying general suture. A novel semiautomated device is proposed that may be advantageous to the current standard. Comparison testing in an excised caprine spine and simulated bench top model was performed. Three tests were performed: 1) perpendicular pull from fascia of caprine spine; 2) axial pull from fascia of caprine spine; and 3) axial pull from Mylar film. Six samples of each configuration were tested for each scenario. Standard 2-0 Ethibond was compared with a novel semiautomated device (Anulex fiXate). Upon completion of testing statistical analysis was performed for each scenario. For perpendicular pull in the caprine spine, the failure load for standard suture was 8.95 lbs with a standard deviation of 1.39 whereas for fiXate the load was 15.93 lbs with a standard deviation of 2.09. For axial pull in the caprine spine, the failure load for standard suture was 6.79 lbs with a standard deviation of 1.55 whereas for fiXate the load was 12.31 lbs with a standard deviation of 4.26. For axial pull in Mylar film, the failure load for standard suture was 10.87 lbs with a standard deviation of 1.56 whereas for fiXate the load was 19.54 lbs with a standard deviation of 2.24. These data suggest a novel semiautomated device offers a method of fixation that may be utilized in lieu of standard suturing methods as a means of securing neuromodulation devices. Data suggest the novel semiautomated device in fact may provide a more secure fixation than standard suturing methods. © 2012 International Neuromodulation Society.
Summary Statistics for Homemade "Play Dough" -- Data Acquired at LLNL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kallman, J S; Morales, K E; Whipple, R E
Using x-ray computerized tomography (CT), we have characterized the x-ray linear attenuation coefficients (LAC) of a homemade Play Dough™-like material, designated as PDA. Table 1 gives the first-order statistics for each of four CT measurements, estimated with a Gaussian kernel density estimator (KDE) analysis. The mean values of the LAC range from a high of about 2700 LMHU_D at 100 kVp to a low of about 1200 LMHU_D at 300 kVp. The standard deviation of each measurement is around 10% to 15% of the mean. The entropy covers the range from 6.0 to 7.4. Ordinarily, we would model the LAC of the material and compare the modeled values to the measured values. In this case, however, we did not have the detailed chemical composition of the material and therefore did not model the LAC. Using a method recently proposed by Lawrence Livermore National Laboratory (LLNL), we estimate the value of the effective atomic number, Z_eff, to be near 10. LLNL prepared about 50 mL of the homemade 'Play Dough' in a polypropylene vial and firmly compressed it immediately prior to the x-ray measurements. We used the computer program IMGREC to reconstruct the CT images. The values of the key parameters used in the data capture and image reconstruction are given in this report. Additional details may be found in the experimental SOP and a separate document. To characterize the statistical distribution of LAC values in each CT image, we first isolated an 80% central-core segment of volume elements ('voxels') lying completely within the specimen, away from the walls of the polypropylene vial. All of the voxels within this central core, including those comprised of voids and inclusions, are included in the statistics. We then calculated the mean value, standard deviation and entropy for (a) the four image segments and for (b) their digital gradient images. (A digital gradient image of a given image was obtained by taking the absolute value of the difference between the initial image and that same image offset by one voxel horizontally, parallel to the rows of the x-ray detector array.) The statistics of the initial image of LAC values are called 'first order statistics;' those of the gradient image, 'second order statistics.'
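A sketch of the first- and second-order statistics described (mean, standard deviation, and entropy of the image and of its one-voxel horizontal digital gradient), computed on a synthetic 2-D slice rather than the actual CT data.

```python
import numpy as np

def image_stats(img, bins=256):
    """Mean, standard deviation, and Shannon entropy (bits) of an image's values."""
    img = np.asarray(img, dtype=float)
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -(p * np.log2(p)).sum()
    return img.mean(), img.std(), entropy

rng = np.random.default_rng(8)
slice_lac = rng.normal(2700, 300, size=(256, 256))            # synthetic LAC slice

# Second-order statistics: absolute difference with the image shifted by one voxel horizontally.
gradient = np.abs(slice_lac[:, 1:] - slice_lac[:, :-1])

print("first order  (mean, std, entropy):", [round(v, 2) for v in image_stats(slice_lac)])
print("second order (mean, std, entropy):", [round(v, 2) for v in image_stats(gradient)])
```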
Entanglement Entropy of Eigenstates of Quantum Chaotic Hamiltonians.
Vidmar, Lev; Rigol, Marcos
2017-12-01
In quantum statistical mechanics, it is of fundamental interest to understand how close the bipartite entanglement entropy of eigenstates of quantum chaotic Hamiltonians is to maximal. For random pure states in the Hilbert space, the average entanglement entropy is known to be nearly maximal, with a deviation that is, at most, a constant. Here we prove that, in a system that is away from half filling and divided in two equal halves, an upper bound for the average entanglement entropy of random pure states with a fixed particle number and normally distributed real coefficients exhibits a deviation from the maximal value that grows with the square root of the volume of the system. Exact numerical results for highly excited eigenstates of a particle number conserving quantum chaotic model indicate that the bound is saturated with increasing system size.
Erhart, M; Wetzel, R; Krügel, A; Ravens-Sieberer, U
2005-12-01
Within a comprehensive comparison of telephone and postal survey methods the SF-8 was applied to assess adult's health-related quality of life. The 1690 subjects were randomly assigned to a telephone survey and a postal survey. Comparisons across the different modes of administration addressed the response rates, central tendency, deviation, ceiling and floor effects observed in the SF-8 scores as well as the inter-item correlation. The importance of age and gender as moderating factors was investigated. Results indicate no or small statistically significant differences in the responses to the SF-8 depending on the actual mode of administration and the health aspect questioned. It was concluded that further investigations should focus on the exact nature of these deviations and try to generate correction factors.
Fast self contained exponential random deviate algorithm
NASA Astrophysics Data System (ADS)
Fernández, Julio F.
1997-03-01
An algorithm that generates random numbers with an exponential distribution and is about ten times faster than other well-known algorithms has been reported before (J. F. Fernández and J. Rivero, Comput. Phys. 10, 83 (1996)). That algorithm requires input of uniform random deviates. We now report a new version of it that needs no input and is nearly as fast. The only limitation we predict thus far for the quality of the output is the amount of computer memory available. Performance results under various tests will be reported. The algorithm works in close analogy to the setup that is often used in statistical physics in order to obtain the Gibbs distribution. N numbers, stored in N registers, change with time according to the rules of the algorithm, keeping their sum constant. Further details will be given.
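The abstract does not give the update rule, so the sketch below uses a generic constant-sum pairwise exchange, as in kinetic energy-exchange models, purely to illustrate how a register pool with a conserved sum can relax to an exponential distribution; it is not the published algorithm.

```python
import numpy as np

rng = np.random.default_rng(9)
N = 100_000
registers = np.ones(N)          # start with all registers equal; the sum stays fixed at N

# Generic constant-sum exchange: repeatedly split the combined content of a random pair
# uniformly between its two members. (Illustrative rule, not the published algorithm.)
for _ in range(20):
    idx = rng.permutation(N)
    a, b = idx[: N // 2], idx[N // 2:]
    total = registers[a] + registers[b]
    frac = rng.random(N // 2)
    registers[a], registers[b] = frac * total, (1 - frac) * total

print("sum preserved:", np.isclose(registers.sum(), N))
# An exponential distribution with mean 1 has variance 1.
print("mean:", round(registers.mean(), 3), " variance:", round(registers.var(), 3))
```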
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1978-10-03
This report is a six-part statistical summary of surface weather observations for Torrejon AB, Madrid, Spain. It contains the following parts: (A) Weather Conditions; Atmospheric Phenomena; (B) Precipitation, Snowfall and Snow Depth (daily amounts and extreme values); (C) Surface Winds; (D) Ceiling Versus Visibility; Sky Cover; (E) Psychrometric Summaries (daily maximum and minimum temperatures, extreme maximum and minimum temperatures, psychrometric summary of wet-bulb temperature depression versus dry-bulb temperature, means and standard deviations of dry-bulb, wet-bulb and dew-point temperatures and relative humidity); and (F) Pressure Summary (means, standard deviations, and observation counts of station pressure and sea-level pressure). Data in this report are presented in tabular form, in most cases in percentage frequency of occurrence or cumulative percentage frequency of occurrence tables.
NASA Technical Reports Server (NTRS)
Macfarlane, J. J.; Hubbard, W. B.
1983-01-01
A numerical technique for solving the Thomas-Fermi-Dirac (TFD) equation in three dimensions, for an array of ions obeying periodic boundary conditions, is presented. The technique is then used to calculate deviations from ideal mixing for an alloy of hydrogen and helium at zero temperature and high pressures. Results are compared with alternative models which apply perturbation theory to the calculation of the electron distribution, based upon the assumption of weak response of the electron gas to the ions. The TFD theory, which permits strong electron response, always predicts smaller deviations from ideal mixing than would be predicted by perturbation theory. The results indicate that predicted phase separation curves for hydrogen-helium alloys under conditions prevailing in the metallic zones of Jupiter and Saturn are very model dependent.
Mirkhani, Seyyed Alireza; Gharagheizi, Farhad; Sattari, Mehdi
2012-03-01
Evaluation of diffusion coefficients of pure compounds in air is of great interest for many diverse industrial and air quality control applications. In this communication, a QSPR method is applied to predict the molecular diffusivity of chemical compounds in air at 298.15 K and atmospheric pressure. Four thousand five hundred and seventy-nine organic compounds from a broad spectrum of chemical families have been investigated to propose a comprehensive and predictive model. The final model is derived by Genetic Function Approximation (GFA) and contains five descriptors. Using this dedicated model, we obtain satisfactory results quantified by the following statistics: Squared Correlation Coefficient = 0.9723, Standard Deviation Error = 0.003, and Average Absolute Relative Deviation = 0.3% for the predicted properties relative to existing experimental values. Copyright © 2011 Elsevier Ltd. All rights reserved.
Single-Station Sigma for the Iranian Strong Motion Stations
NASA Astrophysics Data System (ADS)
Zafarani, H.; Soghrat, M. R.
2017-11-01
In the development of ground motion prediction equations (GMPEs), the residuals are assumed to have a log-normal distribution with a zero mean and a standard deviation, designated as sigma. Sigma has a significant effect on the evaluation of seismic hazard for designing important infrastructures such as nuclear power plants and dams. Both aleatory and epistemic uncertainties are involved in the sigma parameter. However, ground-motion observations over long time periods are not available at specific sites, and the GMPEs have been derived using observed data from multiple sites for a small number of well-recorded earthquakes. Therefore, sigma is dominantly related to the statistics of the spatial variability of ground motion instead of the temporal variability at a single point (ergodic assumption). The main purpose of this study is to reduce the variability of the residuals so as to handle it as epistemic uncertainty. In this regard, we attempt to partially apply the non-ergodic assumption by removing repeatable site effects from the total variability of six GMPEs derived from local, Europe-Middle East and worldwide data. For this purpose, we used 1837 acceleration time histories from 374 shallow earthquakes with moment magnitudes ranging from Mw 4.0 to 7.3 recorded at 370 stations with at least two recordings per station. According to the estimated single-station sigma for the Iranian strong motion stations, the ratio of the event-corrected single-station standard deviation (Φss) to the within-event standard deviation (Φ) is about 0.75. In other words, removing the ergodic assumption on site response resulted in a 25% reduction of the within-event standard deviation, which reduced the total standard deviation by about 15%.
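As a purely illustrative companion to this abstract, the following Python sketch (not the authors' code; the residual table, column names, and the use of simple group means in place of a mixed-effects decomposition are assumptions) shows how a repeatable site term can be removed from within-event residuals to estimate the event-corrected single-station standard deviation Φss and its ratio to Φ.

```python
# Hedged sketch: estimating phi and phi_ss from GMPE residuals, assuming a
# table with one row per record and columns "event", "station", "total_residual".
import numpy as np
import pandas as pd

def single_station_sigma(df):
    # Between-event term: mean residual per event (a crude stand-in for a
    # mixed-effects estimate of the event terms).
    event_term = df.groupby("event")["total_residual"].transform("mean")
    within_event = df["total_residual"] - event_term
    phi = within_event.std(ddof=1)                      # within-event sigma

    # Repeatable site term: mean within-event residual per station.
    site_term = within_event.groupby(df["station"]).transform("mean")
    phi_ss = (within_event - site_term).std(ddof=1)     # single-station sigma
    return phi, phi_ss

# Synthetic residuals, only to make the sketch runnable end to end.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "event": rng.integers(0, 50, 2000),
    "station": rng.integers(0, 100, 2000),
    "total_residual": rng.normal(0.0, 0.7, 2000),
})
phi, phi_ss = single_station_sigma(df)
print(phi, phi_ss, phi_ss / phi)
```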
Validation of 10 years of SAO OMI Ozone Profiles with Ozonesonde and MLS Observations
NASA Astrophysics Data System (ADS)
Huang, G.; Liu, X.; Chance, K.; Bhartia, P. K.
2015-12-01
To evaluate the accuracy and long-term stability of the SAO OMI ozone profile product, we validate ~10 years of the ozone profile product (Oct. 2004-Dec. 2014) against collocated ozonesonde and MLS data. Ozone profiles as well as stratospheric, tropospheric, and lower tropospheric ozone columns are compared with ozonesonde data for different latitude bands and time periods (e.g., 2004-2008 and 2009-2014 for the periods without and with the row anomaly). The mean biases and their standard deviations are also assessed as a function of time to evaluate the long-term stability and bias trends. In the mid-latitude and tropical regions, OMI generally shows good agreement with ozonesonde observations. The mean ozone profile biases are generally within 6% with up to 30% standard deviations. The biases of stratospheric ozone columns (SOC) and tropospheric ozone columns (TOC) are -0.3%-2.2% and -0.2%-3%, while standard deviations are 3.9%-5.8% and 14.4%-16.0%, respectively. However, the retrievals during 2009-2014 show larger standard deviations and larger temporal variations; the standard deviations increase by ~5% in the troposphere and ~2% in the stratosphere. Retrieval biases at individual levels in the stratosphere and upper troposphere show statistically significant trends that differ between the 2004-2008 and 2009-2014 periods. The trends in integrated ozone partial columns are less significant due to cancellation from various layers, except for a significant trend in tropical SOC. These results suggest the need to perform time-dependent radiometric calibration to maintain the long-term stability of this product. Similarly, we are comparing the OMI stratospheric ozone profiles and SOC with collocated MLS data, and the results will be reported.
Revert Ventura, A J; Sanz Requena, R; Martí-Bonmatí, L; Pallardó, Y; Jornet, J; Gaspar, C
2014-01-01
To study whether the histograms of quantitative parameters of perfusion in MRI obtained from tumor volume and peritumor volume make it possible to grade astrocytomas in vivo. We included 61 patients with histological diagnoses of grade II, III, or IV astrocytomas who underwent T2*-weighted perfusion MRI after intravenous contrast agent injection. We manually selected the tumor volume and peritumor volume and quantified the following perfusion parameters on a voxel-by-voxel basis: blood volume (BV), blood flow (BF), mean transit time (TTM), transfer constant (K(trans)), washout coefficient, interstitial volume, and vascular volume. For each volume, we obtained the corresponding histogram with its mean, standard deviation, and kurtosis (using the standard deviation and kurtosis as measures of heterogeneity) and we compared the differences in each parameter between different grades of tumor. We also calculated the mean and standard deviation of the highest 10% of values. Finally, we performed a multiparametric discriminant analysis to improve the classification. For tumor volume, we found statistically significant differences among the three grades of tumor for the means and standard deviations of BV, BF, and K(trans), both for the entire distribution and for the highest 10% of values. For the peritumor volume, we found no significant differences for any parameters. The discriminant analysis improved the classification slightly. The quantification of the volume parameters of the entire region of the tumor with BV, BF, and K(trans) is useful for grading astrocytomas. The heterogeneity represented by the standard deviation of BF is the most reliable diagnostic parameter for distinguishing between low grade and high grade lesions. Copyright © 2011 SERAM. Published by Elsevier Espana. All rights reserved.
Graham, Daniel J; Field, David J
2008-01-01
Two recent studies suggest that natural scenes and paintings show similar statistical properties. But does the content or region of origin of an artwork affect its statistical properties? We addressed this question by having judges place paintings from a large, diverse collection of paintings into one of three subject-matter categories using a forced-choice paradigm. Basic statistics for images whose categorization was agreed upon by all judges showed no significant differences between those judged to be 'landscape' and 'portrait/still-life', but these two classes differed from paintings judged to be 'abstract'. All categories showed basic spatial statistical regularities similar to those typical of natural scenes. A test of the full painting collection (140 images) with respect to the works' place of origin (provenance) showed significant differences between Eastern works and Western ones, differences which we find are likely related to the materials and the choice of background color. Although artists deviate slightly from reproducing natural statistics in abstract art (compared to representational art), the great majority of human art likely shares basic statistical limitations. We argue that statistical regularities in art are rooted in the need to make art visible to the eye, not in the inherent aesthetic value of natural-scene statistics, and we suggest that variability in spatial statistics may be generally imposed by manufacture.
Mathematical aspects of assessing extreme events for the safety of nuclear plants
NASA Astrophysics Data System (ADS)
Potempski, Slawomir; Borysiewicz, Mieczyslaw
2015-04-01
In this paper, a review of the mathematical methodologies applied for assessing the low frequencies of rare natural events like earthquakes, tsunamis, hurricanes or tornadoes, floods (in particular flash floods and surge storms), lightning, solar flares, etc., is given from the perspective of the safety assessment of nuclear plants. The statistical methods are usually based on extreme value theory, which deals with the analysis of extreme deviations from the median (or the mean). In this respect, various mathematical tools can be useful, such as the Fisher-Tippett-Gnedenko extreme value theorem leading to possible choices of generalized extreme value distributions, the Pickands-Balkema-de Haan theorem for tail fitting, or methods related to large deviation theory. The most important stochastic distributions relevant for rare-event statistical analysis are also presented. This concerns, for example, the analysis of annual extreme values (maxima, "Annual Maxima Series", or minima), of peak values exceeding given thresholds over periods of interest ("Peak Over Threshold"), or the estimation of the size of exceedance. Despite the fact that there is a lack of sufficient statistical data directly containing rare events, in some cases it is still possible to extract useful information from existing larger data sets. As an example, one can consider data sets available from websites for floods, earthquakes or natural hazards in general. Some aspects of such data sets are also presented, taking into account their usefulness for the practical assessment of the risk to nuclear power plants from extreme weather conditions.
Jährig, K
1990-01-01
Using the official data of WHO statistics, the impact of some social, biological and medical factors on infant mortality rates (IMR) was compared for countries with very high, high, moderate and low IMR. Factors reflecting a low quality of life (illiteracy, low level of women's education, low urbanization, malnutrition, etc.) showed a highly significant statistical correlation with increased IMR. The lack of a nationwide family planning program and a low level of medical care (prenatal care, presence of medical personnel during delivery, availability of contraceptives, etc.) act in the same direction. In developing countries the GNP per capita did not markedly influence the IMR nor the rate of infants of low birth weight (UGR). One of the main reasons for this phenomenon is probably the wide income gap between different social groups in these countries. In contrast, the GNP in economically developed countries (Europe, Australia, North America) correlates very closely with IMR and UGR. The impact of family planning differs between countries with legal artificial abortion and those with more restrictive legislation. The nutritional status (i.e., in these countries, hyperalimentation) shows a positive correlation with UGR but no impact on IMR. Some countries (in Europe: Greece, Spain, Ireland/Yugoslavia, Romania) show a significant deviation (positive/negative) from the mean calculated according to the WHO data. These deviations can be (and should be) analysed for detecting and evaluating factors which could be influenced by strategies of social and/or medical interventions in order to further improve IMR.
Fluctuations in air pollution give risk warning signals of asthma hospitalization
NASA Astrophysics Data System (ADS)
Hsieh, Nan-Hung; Liao, Chung-Min
2013-08-01
Recent studies have indicated that air pollution is associated with asthma exacerbations. However, the key link between specific air pollutants and the consequent impact on asthma has not been shown. The purpose of this study was to quantify the fluctuations in air pollution time-series dynamics to correlate the relationships between statistical indicators and age-specific asthma hospital admissions. An indicator-based regression model was developed to predict the time trend of asthma hospital admissions in Taiwan in the period 1998-2010. Five major pollutants were included: particulate matter with aerodynamic diameter less than 10 μm (PM10), ozone (O3), nitrogen dioxide (NO2), sulfur dioxide (SO2), and carbon monoxide (CO). We used Spearman's rank correlation to detect the relationships between the time-series-based statistical indicators of standard deviation, coefficient of variation, skewness, and kurtosis and monthly asthma hospitalization. We further used the indicator-guided Poisson regression model to test and predict the impact of target air pollutants on asthma incidence. Here we show that the standard deviation of PM10 data was the most correlated indicator of asthma hospitalization for all age groups, particularly for the elderly. The skewness of O3 data gives the highest correlation for adult asthmatics. The proposed regression model shows better predictability of annual asthma hospitalization trends for pediatrics. Our results suggest that a set of statistical indicators inferred from time-series information of major air pollutants can provide advance risk warning signals in complex air pollution-asthma systems and aid in asthma management that depends heavily on monitoring the dynamics of asthma incidence and environmental stimuli.
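A minimal Python sketch of the indicator-plus-correlation step described above is given below; the data are synthetic, and the column names, resampling choices, and indicator set are assumptions rather than the study's actual pipeline.

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr, skew, kurtosis

rng = np.random.default_rng(1)
dates = pd.date_range("1998-01-01", "2010-12-31", freq="D")
pm10 = pd.Series(rng.gamma(4.0, 12.0, len(dates)), index=dates)  # synthetic PM10

grouped = pm10.resample("M")                      # monthly statistical indicators
monthly = pd.DataFrame({
    "sd": grouped.std(),
    "cv": grouped.std() / grouped.mean(),
    "skewness": grouped.apply(lambda x: skew(x)),
    "kurtosis": grouped.apply(lambda x: kurtosis(x)),
})
admissions = rng.poisson(30 + 0.5 * monthly["sd"])   # synthetic monthly admissions

for name in monthly.columns:
    rho, p = spearmanr(monthly[name], admissions)
    print(f"{name}: Spearman rho = {rho:.2f}, p = {p:.3f}")
```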
Kappa Distribution in a Homogeneous Medium: Adiabatic Limit of a Super-diffusive Process?
NASA Astrophysics Data System (ADS)
Roth, I.
2015-12-01
The classical statistical theory predicts that an ergodic, weakly interacting system, like charged particles in the presence of electromagnetic fields, performing Brownian motions (characterized by small-range deviations in phase space and short-term microscopic memory), converges to the Gibbs-Boltzmann statistics. Observation of distributions with kappa power-law tails in homogeneous systems contradicts this prediction and necessitates a renewed analysis of the basic axioms of the diffusion process: the characteristics of the transition probability density function (pdf) for a single interaction, with the possibility of a non-Markovian process and non-local interaction. The non-local, Lévy-walk deviation is related to the non-extensive statistical framework. Particles bouncing along (solar) magnetic field lines with evolving pitch angles, phases and velocities, as they interact resonantly with waves, undergo energy changes at undetermined time intervals, satisfying these postulates. The dynamic evolution of a general continuous-time random walk is determined by the pdf of jumps and waiting times, resulting in a fractional Fokker-Planck equation with non-integer derivatives whose solution is given by a Fox H-function. The resulting procedure involves fractional calculus, which is known although not frequently used in physics, while the local, Markovian process recasts the evolution into the standard Fokker-Planck equation. Solution of the fractional Fokker-Planck equation with the help of the Mellin transform and evaluation of its residues at the poles of its Gamma functions results in a slowly converging sum with power laws. It is suggested that these tails form the Kappa function. Gradual vs. impulsive solar electron distributions serve as prototypes of this description.
The impact of primary open-angle glaucoma: Quality of life in Indian patients.
Kumar, Suresh; Ichhpujani, Parul; Singh, Roopali; Thakur, Sahil; Sharma, Madhu; Nagpal, Nimisha
2018-03-01
Glaucoma significantly affects the quality of life (QoL) of a patient. Despite the huge number of glaucoma patients in India, not many QoL studies have been carried out. The purpose of the present study was to evaluate the QoL in Indian patients with varying severity of glaucoma. This was a hospital-based, cross-sectional, analytical study of 180 patients. The QoL was assessed using orally administered QoL instruments comprising two glaucoma-specific instruments, the Glaucoma Quality of Life-15 (GQL-15) and the Viswanathan 10 instrument, and one vision-specific instrument, the National Eye Institute Visual Function Questionnaire-25 (NEIVFQ25). Using the NEIVFQ25, the difference between mean QoL scores among cases (88.34 ± 4.53) and controls (95.32 ± 5.76) was statistically significant. In the GQL-15, there was a statistically significant difference between the mean scores of cases (22.58 ± 5.23) and controls (16.52 ± 1.24). The difference in mean scores with the Viswanathan 10 instrument in cases (7.92 ± 0.54) and controls (9.475 ± 0.505) was also statistically significant. QoL scores also showed moderate correlation with mean deviation, pattern standard deviation, and vertical cup-disc ratio. In our study, all three instruments showed a decrease in QoL in glaucoma patients compared to controls. With increasing severity of glaucoma, a corresponding decrease in QoL was observed. It is important for ophthalmologists to understand the QoL in glaucoma patients so as to have a more holistic approach to patients and for effective delivery of treatment.
Carriot, Jérome; Jamali, Mohsen; Cullen, Kathleen E.
2017-01-01
There is accumulating evidence that the brain’s neural coding strategies are constrained by natural stimulus statistics. Here we investigated the statistics of the time varying envelope (i.e. a second-order stimulus attribute that is related to variance) of rotational and translational self-motion signals experienced by human subjects during everyday activities. We found that envelopes can reach large values across all six motion dimensions (~450 deg/s for rotations and ~4 G for translations). Unlike results obtained in other sensory modalities, the spectral power of envelope signals decreased slowly for low (< 2 Hz) and more sharply for high (>2 Hz) temporal frequencies and thus was not well-fit by a power law. We next compared the spectral properties of envelope signals resulting from active and passive self-motion, as well as those resulting from signals obtained when the subject is absent (i.e. external stimuli). Our data suggest that different mechanisms underlie deviation from scale invariance in rotational and translational self-motion envelopes. Specifically, active self-motion and filtering by the human body cause deviation from scale invariance primarily for translational and rotational envelope signals, respectively. Finally, we used well-established models in order to predict the responses of peripheral vestibular afferents to natural envelope stimuli. We found that irregular afferents responded more strongly to envelopes than their regular counterparts. Our findings have important consequences for understanding the coding strategies used by the vestibular system to process natural second-order self-motion signals. PMID:28575032
NASA Astrophysics Data System (ADS)
Dabanlı, İsmail; Şen, Zekai
2018-04-01
The statistical climate downscaling model by the Turkish Water Foundation (TWF) is further developed and applied to a set of monthly precipitation records. The model is structured in two phases, spatial (regional) and temporal downscaling of global circulation model (GCM) scenarios. The TWF model takes into consideration the regional dependence function (RDF) for the spatial structure and the Markov whitening process (MWP) for the temporal characteristics of the records to set projections. The impact of climate change on monthly precipitation is studied by downscaling Intergovernmental Panel on Climate Change-Special Report on Emission Scenarios (IPCC-SRES) A2 and B2 emission scenarios from the Max Planck Institute (EH40PYC) and the Hadley Centre (HadCM3). The main purposes are to explain the TWF statistical climate downscaling model procedures and to present the validation tests, which are rated as "very good" for all stations except one station (Suhut) in the Akarcay basin, which is in the west central part of Turkey. Even though the validation score is a bit lower at the Suhut station, the results are "satisfactory." It is, therefore, possible to say that the TWF model has reasonably acceptable skill for highly accurate estimation with regard to the standard deviation ratio (SDR), Nash-Sutcliffe efficiency (NSE), and percent bias (PBIAS) criteria. Based on the validated model, precipitation predictions are generated from 2011 to 2100 by using the 30-year reference observation period (1981-2010). The precipitation arithmetic average and standard deviation have less than 5% error for the EH40PYC and HadCM3 SRES (A2 and B2) scenarios.
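The three validation criteria named above have standard textbook definitions, sketched below in Python under the assumption that SDR denotes the ratio of the simulated to the observed standard deviation; the TWF study's exact conventions may differ.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

def sdr(obs, sim):
    """Standard deviation ratio (assumed here to be std(sim) / std(obs))."""
    return np.std(sim, ddof=1) / np.std(obs, ddof=1)

obs = np.array([42.0, 55.0, 13.0, 80.0, 61.0])   # illustrative monthly totals
sim = np.array([40.0, 58.0, 15.0, 75.0, 66.0])
print(nse(obs, sim), pbias(obs, sim), sdr(obs, sim))
```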
Results of module electrical measurement of the DOE 46-kilowatt procurement
NASA Technical Reports Server (NTRS)
Curtis, H. B.
1978-01-01
Current-voltage measurements have been made on terrestrial solar cell modules of the DOE/JPL Low Cost Silicon Solar Array procurement. Data on short circuit current, open circuit voltage, and maximum power for the four types of modules are presented in normalized form, showing distribution of the measured values. Standard deviations from the mean values are also given. Tests of the statistical significance of the data are discussed.
Code of Federal Regulations, 2014 CFR
2014-01-01
... OPERATING RULES GENERAL OPERATING AND FLIGHT RULES Pt. 91, App. C Appendix C to Part 91—Operations in the... Oceanic Control Area, excluding the areas west of 60 degrees west and south of 38 degrees 30 minutes north... shall be less than 6.3 NM (11.7 Km). Standard deviation is a statistical measure of data about a mean...
Code of Federal Regulations, 2013 CFR
2013-01-01
... OPERATING RULES GENERAL OPERATING AND FLIGHT RULES Pt. 91, App. C Appendix C to Part 91—Operations in the... Oceanic Control Area, excluding the areas west of 60 degrees west and south of 38 degrees 30 minutes north... shall be less than 6.3 NM (11.7 Km). Standard deviation is a statistical measure of data about a mean...
Code of Federal Regulations, 2011 CFR
2011-01-01
... OPERATING RULES GENERAL OPERATING AND FLIGHT RULES Pt. 91, App. C Appendix C to Part 91—Operations in the... Oceanic Control Area, excluding the areas west of 60 degrees west and south of 38 degrees 30 minutes north... shall be less than 6.3 NM (11.7 Km). Standard deviation is a statistical measure of data about a mean...
Code of Federal Regulations, 2012 CFR
2012-01-01
... OPERATING RULES GENERAL OPERATING AND FLIGHT RULES Pt. 91, App. C Appendix C to Part 91—Operations in the... Oceanic Control Area, excluding the areas west of 60 degrees west and south of 38 degrees 30 minutes north... shall be less than 6.3 NM (11.7 Km). Standard deviation is a statistical measure of data about a mean...
Mirmohammadi, Seyyed Jalil; Hafezi, Rahmatollah; Mehrparvar, Amir Houshang; Gerdfaramarzi, Raziyeh Soltani; Mostaghaci, Mehrdad; Nodoushan, Reza Jafari; Rezaeian, Bibiseyedeh
2013-01-01
Anthropometric data can be used to identify the physical dimensions of equipment, furniture, clothing and workstations. The use of poorly designed furniture that fails to fulfil the users' anthropometric dimensions, has a negative impact on human health. In this study, we measured some anthropometric dimensions of Iranian children from different ethnicities. A total of 12,731 Iranian primary school children aged 7-11 years were included in the study and their static anthropometric dimensions were measured. Descriptive statistics such as mean, standard deviation and key percentiles were calculated. All dimensions were compared among different ethnicities and different genders. This study showed significant differences in a set of 22 anthropometric dimensions with regard to gender, age and ethnicity. Turk boys and Arab girls were larger than their contemporaries in different ages. According to the results of this study, difference between genders and among different ethnicities should be taken into account by designers and manufacturers of school furniture. In this study, we measured 22 static anthropometric dimensions of 12,731 Iranian primary school children aged 7-11 years from different ethnicities. Descriptive statistics such as mean, standard deviation and key percentiles were measured for each dimension. This study showed significant differences in a set of 22 anthropometric dimensions in different genders, ages and ethnicities.
NASA Astrophysics Data System (ADS)
Nam, Kyoung Won; Kim, In Young; Kang, Ho Chul; Yang, Hee Kyung; Yoon, Chang Ki; Hwang, Jeong Min; Kim, Young Jae; Kim, Tae Yun; Kim, Kwang Gi
2012-10-01
Accurate measurement of binocular misalignment between both eyes is important for proper preoperative management, surgical planning, and postoperative evaluation of patients with strabismus. In this study, we proposed a new computerized diagnostic algorithm that can calculate the angle of binocular eye misalignment photographically by using a dedicated three-dimensional eye model mimicking the structure of the natural human eye. To evaluate the performance of the proposed algorithm, eight healthy volunteers and eight individuals with strabismus were recruited; the horizontal deviation angle, vertical deviation angle, and angle of eye misalignment were calculated, and the angular differences between the healthy and the strabismus groups were evaluated using the nonparametric Mann-Whitney test and the Pearson correlation test. The experimental results demonstrated a statistically significant difference between the healthy and strabismus groups (p = 0.015 < 0.05), but no statistically significant difference between the proposed method and the Krimsky test (p = 0.912 > 0.05). The measurements of the two methods were highly correlated (r = 0.969, p < 0.05). From the experimental results, we believe that the proposed diagnostic method has the potential to be a diagnostic tool that measures the physical disorder of the human eye to diagnose non-invasively the severity of strabismus.
NASA Astrophysics Data System (ADS)
Attili, Antonio; Bisetti, Fabrizio
2015-11-01
Passive scalar and scalar dissipation statistics are investigated in a set of flames achieving a Taylor-scale Reynolds number in the range 100 ≤ Reλ ≤ 150 [Attili et al. Comb. Flame 161, 2014; Attili et al. Proc. Comb. Inst. 35, 2015]. The three flames simulated show an increasing level of extinction due to the decrease of the Damköhler number. In the case of negligible extinction, the non-dimensional scalar dissipation is expected to be the same in the three cases. In the present case, the deviations from the aforementioned self-similarity manifest themselves as a decrease of the non-dimensional scalar dissipation for increasing levels of local extinction, in agreement with recent experiments [Karpetis and Barlow Proc. Comb. Inst. 30, 2005; Sutton and Driscoll Combust. Flame 160, 2013]. This is caused by the decrease of molecular diffusion due to the lower temperature in the low Damköhler number cases. Probability density functions of the scalar dissipation χ show rather strong deviations from the log-normal distribution. The left tail of the pdf scales as χ^{1/2} while the right tail scales as exp(-cχ^α), in agreement with results for incompressible turbulence [Schumacher et al. J. Fluid Mech. 531, 2005].
Hierarchy of N-point functions in the ΛCDM and ReBEL cosmologies
NASA Astrophysics Data System (ADS)
Hellwing, Wojciech A.; Juszkiewicz, Roman; van de Weygaert, Rien
2010-11-01
In this work we investigate higher-order statistics for the ΛCDM and ReBEL scalar-interacting dark matter models by analyzing 180 h^{-1} Mpc dark matter N-body simulation ensembles. The N-point correlation functions and the related hierarchical amplitudes, such as skewness and kurtosis, are computed using the counts-in-cells method. Our studies demonstrate that the hierarchical amplitudes Sn of the scalar-interacting dark matter model significantly deviate from the values in the ΛCDM cosmology on scales comparable to and smaller than the screening length rs of a given scalar-interacting model. The corresponding additional forces that enhance the total attractive force exerted on dark matter particles at galaxy scales lower the values of the hierarchical amplitudes Sn. We conclude that hypothetical additional exotic interactions in the dark matter sector should leave detectable markers in the higher-order correlation statistics of the density field. We focused in detail on the redshift evolution of the dark matter field's skewness and kurtosis. From this investigation we find that the deviations from the canonical ΛCDM model introduced by the presence of the "fifth" force attain a maximum value at redshifts 0.5
NASA Astrophysics Data System (ADS)
Hong, Wei; Huang, Dexiu; Zhang, Xinliang; Zhu, Guangxi
2008-01-01
A thorough simulation and evaluation of phase noise for optical amplification using a semiconductor optical amplifier (SOA) is very important for predicting its performance in differential phase-shift keyed (DPSK) applications. In this paper, the standard deviation and probability distribution of the differential phase noise at the SOA output are obtained from the statistics of the simulated differential phase noise. By using a full-wave model of the SOA, the noise performance in the entire operation range can be investigated. It is shown that nonlinear phase noise substantially contributes to the total phase noise in the case of a noisy signal amplified by a saturated SOA, and that the nonlinear contribution is larger for shorter SOA carrier lifetimes. It is also shown that a Gaussian distribution can serve as a good approximation of the total differential phase noise statistics in the whole operation range. The power penalty due to differential phase noise is evaluated using a semi-analytical probability density function (PDF) of the receiver noise. An obvious increase of the power penalty at high signal input powers is found for low input OSNR, which is due to both the large nonlinear differential phase noise and the dependence of the curvature of the BER versus received power curve on the differential phase noise standard deviation.
Path profiles of Cn2 derived from radiometer temperature measurements and geometrical ray tracing
NASA Astrophysics Data System (ADS)
Vyhnalek, Brian E.
2017-02-01
Atmospheric turbulence has significant impairments on the operation of Free-Space Optical (FSO) communication systems, in particular temporal and spatial intensity fluctuations at the receiving aperture resulting in power surges and fades, changes in angle of arrival, spatial coherence degradation, etc. The refractive index structure parameter Cn2 is a statistical measure of the strength of turbulence in the atmosphere and is highly dependent upon vertical height. Therefore to understand atmospheric turbulence effects on vertical FSO communication links such as space-to-ground links, it is necessary to specify Cn2 profiles along the atmospheric propagation path. To avoid the limitations on the applicability of classical approaches, propagation simulation through geometrical ray tracing is applied. This is achieved by considering the atmosphere along the optical propagation path as a spatial distribution of spherical bubbles with varying relative refractive index deviations representing turbulent eddies. The relative deviations of the refractive index are statistically determined from altitude-dependent and time-varying temperature fluctuations, as measured by a microwave profiling radiometer. For each representative atmosphere ray paths are analyzed using geometrical optics, which is particularly advantageous in situations of strong turbulence where there is severe wavefront distortion and discontinuity. The refractive index structure parameter is then determined as a function of height and time.
Path Profiles of Cn2 Derived from Radiometer Temperature Measurements and Geometrical Ray Tracing
NASA Technical Reports Server (NTRS)
Vyhnalek, Brian E.
2017-01-01
Atmospheric turbulence has significant impairments on the operation of Free-Space Optical (FSO) communication systems, in particular temporal and spatial intensity fluctuations at the receiving aperture resulting in power surges and fades, changes in angle of arrival, spatial coherence degradation, etc. The refractive index structure parameter Cn2 is a statistical measure of the strength of turbulence in the atmosphere and is highly dependent upon vertical height. Therefore to understand atmospheric turbulence effects on vertical FSO communication links such as space-to-ground links, it is necessary to specify Cn2 profiles along the atmospheric propagation path. To avoid the limitations on the applicability of classical approaches, propagation simulation through geometrical ray tracing is applied. This is achieved by considering the atmosphere along the optical propagation path as a spatial distribution of spherical bubbles with varying relative refractive index deviations representing turbulent eddies. The relative deviations of the refractive index are statistically determined from altitude-dependent and time-varying temperature fluctuations, as measured by a microwave profiling radiometer. For each representative atmosphere ray paths are analyzed using geometrical optics, which is particularly advantageous in situations of strong turbulence where there is severe wavefront distortion and discontinuity. The refractive index structure parameter is then determined as a function of height and time.
NASA Astrophysics Data System (ADS)
Ghosh, Sayantan; Manimaran, P.; Panigrahi, Prasanta K.
2011-11-01
We make use of the wavelet transform to study the multi-scale, self-similar behavior, and deviations thereof, in the stock prices of large companies belonging to different economic sectors. The stock market returns exhibit multi-fractal characteristics, with some of the companies showing deviations at small and large scales. The fact that the wavelets belonging to the Daubechies (Db) basis enable one to isolate local polynomial trends of different degrees plays the key role in isolating fluctuations at different scales. One of the primary motivations of this work is to study the emergence of the k^{-3} behavior [X. Gabaix, P. Gopikrishnan, V. Plerou, H. Stanley, A theory of power law distributions in financial market fluctuations, Nature 423 (2003) 267-270] of the fluctuations starting from high-frequency fluctuations. We make use of the Db4 and Db6 basis sets to respectively isolate local linear and quadratic trends at different scales in order to study the statistical characteristics of these financial time series. The fluctuations reveal fat-tailed non-Gaussian behavior and unstable periodic modulations at finer scales, from which the characteristic k^{-3} power-law behavior emerges at sufficiently large scales. We further identify stable periodic behavior through the continuous Morlet wavelet.
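In the spirit of the approach described above, the following hedged Python sketch uses PyWavelets to separate a synthetic log-price series into detail coefficients at different scales; the data are synthetic, and the paper's Db4 and Db6 filters (4 and 6 taps, i.e. 2 and 3 vanishing moments, removing local linear and quadratic trends) correspond to the PyWavelets names 'db2' and 'db3'.

```python
import numpy as np
import pywt

rng = np.random.default_rng(2)
log_price = np.cumsum(rng.standard_t(df=3, size=4096) * 1e-3)  # synthetic series

for basis in ("db2", "db3"):        # 4- and 6-tap Daubechies filters
    coeffs = pywt.wavedec(log_price, basis, level=6)
    # coeffs[0] is the coarse approximation; coeffs[1:] are the detail
    # (fluctuation) coefficients from the coarsest to the finest scale.
    for level, detail in enumerate(coeffs[1:], start=1):
        print(basis, "scale level", level, "fluctuation std:", np.std(detail))
```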
Improved Bayesian Infrasonic Source Localization for regional infrasound
Blom, Philip S.; Marcillo, Omar; Arrowsmith, Stephen J.
2015-10-20
The Bayesian Infrasonic Source Localization (BISL) methodology is examined and simplified, providing a generalized method of estimating the source location and time for an infrasonic event within the mathematical framework used therein. The likelihood function describing an infrasonic detection used in BISL has been redefined to include the von Mises distribution developed in directional statistics and propagation-based, physically derived celerity-range and azimuth deviation models. Frameworks for constructing propagation-based celerity-range and azimuth deviation statistics are presented to demonstrate how stochastic propagation modelling methods can be used to improve the precision and accuracy of the posterior probability density function describing the source localization. Infrasonic signals recorded at a number of arrays in the western United States produced by rocket motor detonations at the Utah Test and Training Range are used to demonstrate the application of the new mathematical framework and to quantify the improvement obtained by using the stochastic propagation modelling methods. Moreover, using propagation-based priors, the spatial and temporal confidence bounds of the source decreased by more than 40 per cent in all cases and by as much as 80 per cent in one case. Further, the accuracy of the estimates remained high, keeping the ground truth within the 99 per cent confidence bounds for all cases.
Does size matter? Statistical limits of paleomagnetic field reconstruction from small rock specimens
NASA Astrophysics Data System (ADS)
Berndt, Thomas; Muxworthy, Adrian R.; Fabian, Karl
2016-01-01
As samples of ever decreasing sizes are being studied paleomagnetically, care has to be taken that the underlying assumptions of statistical thermodynamics (Maxwell-Boltzmann statistics) are being met. Here we determine how many grains and how large a magnetic moment a sample needs to have to be able to accurately record an ambient field. It is found that for samples with a thermoremanent magnetic moment larger than 10^{-11} A m^2 the assumption of a sufficiently large number of grains is usually met. Standard 25 mm diameter paleomagnetic samples usually contain enough magnetic grains such that statistical errors are negligible, but "single silicate crystal" works on, for example, zircon, plagioclase, and olivine crystals are approaching the limits of what is physically possible, leading to statistical errors in both the angular deviation and paleointensity that are comparable to other sources of error. The reliability of nanopaleomagnetic imaging techniques capable of resolving individual grains (used, for example, to study the cloudy zone in meteorites), however, is questionable due to the limited area of the material covered.
Lociciro, S; Esseiva, P; Hayoz, P; Dujourdy, L; Besacier, F; Margot, P
2008-05-20
Harmonisation and optimization of analytical and statistical methodologies were carried out between two forensic laboratories (Lausanne, Switzerland and Lyon, France) in order to provide drug intelligence for cross-border cocaine seizures. Part I dealt with the optimization of the analytical method and its robustness. This second part investigates statistical methodologies that provide reliable comparison of cocaine seizures analysed on two different gas chromatographs interfaced with flame ionisation detectors (GC-FIDs) in two distinct laboratories. Sixty-six statistical combinations (ten data pre-treatments followed by six different distance measurements and correlation coefficients) were applied. One pre-treatment (N+S: the area of each peak is divided by its standard deviation calculated from the whole data set) followed by the Cosine or Pearson correlation coefficient was found to be the best statistical compromise for optimal discrimination of linked and non-linked samples. The centralisation of the analyses in one single laboratory is no longer a required condition for comparing samples seized in different countries. This allows collaboration, but also jurisdictional control over data.
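The retained comparison metric is simple enough to sketch directly; the Python fragment below (illustrative only, with an assumed matrix layout and synthetic peak areas) applies the N+S pre-treatment and then compares two normalized profiles with the cosine and Pearson correlation coefficients.

```python
import numpy as np

def pretreat_ns(peak_areas):
    """N+S pre-treatment: divide each peak area by the standard deviation of
    that peak computed over the whole data set (rows = samples, cols = peaks)."""
    return peak_areas / peak_areas.std(axis=0, ddof=1)

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def pearson(u, v):
    return np.corrcoef(u, v)[0, 1]

rng = np.random.default_rng(3)
areas = rng.lognormal(mean=2.0, sigma=0.5, size=(20, 12))  # 20 seizures, 12 peaks
X = pretreat_ns(areas)
print("cosine:", cosine(X[0], X[1]), "pearson:", pearson(X[0], X[1]))
```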
Statistical physics of the symmetric group.
Williams, Mobolaji
2017-04-01
Ordered chains (such as chains of amino acids) are ubiquitous in biological cells, and these chains perform specific functions contingent on the sequence of their components. Using the existence and general properties of such sequences as a theoretical motivation, we study the statistical physics of systems whose state space is defined by the possible permutations of an ordered list, i.e., the symmetric group, and whose energy is a function of how certain permutations deviate from some chosen correct ordering. Such a nonfactorizable state space is quite different from the state spaces typically considered in statistical physics systems and consequently has novel behavior in systems with interacting and even noninteracting Hamiltonians. Various parameter choices of a mean-field model reveal the system to contain five different physical regimes defined by two transition temperatures, a triple point, and a quadruple point. Finally, we conclude by discussing how the general analysis can be extended to state spaces with more complex combinatorial properties and to other standard questions of statistical mechanics models.
Statistical physics of the symmetric group
NASA Astrophysics Data System (ADS)
Williams, Mobolaji
2017-04-01
Ordered chains (such as chains of amino acids) are ubiquitous in biological cells, and these chains perform specific functions contingent on the sequence of their components. Using the existence and general properties of such sequences as a theoretical motivation, we study the statistical physics of systems whose state space is defined by the possible permutations of an ordered list, i.e., the symmetric group, and whose energy is a function of how certain permutations deviate from some chosen correct ordering. Such a nonfactorizable state space is quite different from the state spaces typically considered in statistical physics systems and consequently has novel behavior in systems with interacting and even noninteracting Hamiltonians. Various parameter choices of a mean-field model reveal the system to contain five different physical regimes defined by two transition temperatures, a triple point, and a quadruple point. Finally, we conclude by discussing how the general analysis can be extended to state spaces with more complex combinatorial properties and to other standard questions of statistical mechanics models.
NASA Astrophysics Data System (ADS)
Lavely, Adam; Vijayakumar, Ganesh; Brasseur, James; Paterson, Eric; Kinzel, Michael
2011-11-01
Using large-eddy simulation (LES) of the neutral and moderately convective atmospheric boundary layers (NBL, MCBL), we analyze the impact of coherent turbulence structure of the atmospheric surface layer on the short-time statistics that are commonly collected from wind turbines. The incoming winds are conditionally sampled with a filtering and thresholding algorithm into high/low horizontal and vertical velocity fluctuation coherent events. The time scales of these events are ~5 - 20 blade rotations and are roughly twice as long in the MCBL as the NBL. Horizontal velocity events are associated with greater variability in rotor power, lift and blade-bending moment than vertical velocity events. The variability in the industry standard 10 minute average for rotor power, sectional lift and wind velocity had a standard deviation of ~ 5% relative to the ``infinite time'' statistics for the NBL and ~10% for the MCBL. We conclude that turbulence structure associated with atmospheric stability state contributes considerable, quantifiable, variability to wind turbine statistics. Supported by NSF and DOE.
NASA Astrophysics Data System (ADS)
Pickard, William F.
2004-10-01
The classical PERT inverse statistics problem requires estimation of the mean, m̄, and standard deviation, s, of a unimodal distribution given estimates of its mode, m, and of the smallest, a, and largest, b, values likely to be encountered. After placing the problem in historical perspective and showing that it is ill-posed because it is underdetermined, this paper offers an approach to resolve the ill-posedness: (a) by interpreting a and b as modes of order statistic distributions; (b) by requiring also an estimate of the number of samples, N, considered in estimating the set {m, a, b}; and (c) by maximizing a suitable likelihood, having made the traditional assumption that the underlying distribution is beta. Exact formulae relating the four parameters of the beta distribution to {m, a, b, N} and the assumed likelihood function are then used to compute the four underlying parameters of the beta distribution; and from them, m̄ and s are computed using exact formulae.
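For orientation only, the classical three-point PERT approximations of the forward problem (not the maximum-likelihood procedure developed in this paper) are easily written down:

```python
def pert_mean_sd(a, m, b):
    """Classical PERT estimates of the mean and standard deviation from the
    smallest (a), most likely (m) and largest (b) values."""
    mean = (a + 4.0 * m + b) / 6.0
    sd = (b - a) / 6.0
    return mean, sd

print(pert_mean_sd(a=2.0, m=5.0, b=14.0))   # -> (6.0, 2.0)
```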
Standard Errors and Confidence Intervals of Norm Statistics for Educational and Psychological Tests.
Oosterhuis, Hannah E M; van der Ark, L Andries; Sijtsma, Klaas
2016-11-14
Norm statistics allow for the interpretation of scores on psychological and educational tests, by relating the test score of an individual test taker to the test scores of individuals belonging to the same gender, age, or education groups, et cetera. Given the uncertainty due to sampling error, one would expect researchers to report standard errors for norm statistics. In practice, standard errors are seldom reported; they are either unavailable or derived under strong distributional assumptions that may not be realistic for test scores. We derived standard errors for four norm statistics (standard deviation, percentile ranks, stanine boundaries and Z-scores) under the mild assumption that the test scores are multinomially distributed. A simulation study showed that the standard errors were unbiased and that corresponding Wald-based confidence intervals had good coverage. Finally, we discuss the possibilities for applying the standard errors in practical test use in education and psychology. The procedure is provided via the R function check.norms, which is available in the mokken package.
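As an illustrative alternative to the analytic derivation (not the paper's method, and with synthetic scores), standard errors of norm statistics can also be approximated by a nonparametric bootstrap, which amounts to multinomial resampling of the observed score distribution:

```python
import numpy as np

def norm_stats(scores, cutoff):
    sd = np.std(scores, ddof=1)
    percentile_rank = 100.0 * np.mean(scores <= cutoff)
    return sd, percentile_rank

rng = np.random.default_rng(4)
scores = rng.binomial(40, 0.6, size=500)        # synthetic test scores (0-40)
boot = np.array([
    norm_stats(rng.choice(scores, size=scores.size, replace=True), cutoff=24)
    for _ in range(2000)
])
print("SE(sd) =", boot[:, 0].std(ddof=1),
      " SE(percentile rank at 24) =", boot[:, 1].std(ddof=1))
```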
NASA Astrophysics Data System (ADS)
Matsubara, Takahiko
2003-02-01
We formulate a general method for perturbative evaluations of statistics of smoothed cosmic fields and provide useful formulae for application of the perturbation theory to various statistics. This formalism is an extensive generalization of the method used by Matsubara, who derived a weakly nonlinear formula of the genus statistic in a three-dimensional density field. After describing the general method, we apply the formalism to a series of statistics, including genus statistics, level-crossing statistics, Minkowski functionals, and a density extrema statistic, regardless of the dimensions in which each statistic is defined. The relation between the Minkowski functionals and other geometrical statistics is clarified. These statistics can be applied to several cosmic fields, including three-dimensional density field, three-dimensional velocity field, two-dimensional projected density field, and so forth. The results are detailed for second-order theory of the formalism. The effect of the bias is discussed. The statistics of smoothed cosmic fields as functions of rescaled threshold by volume fraction are discussed in the framework of second-order perturbation theory. In CDM-like models, their functional deviations from linear predictions plotted against the rescaled threshold are generally much smaller than that plotted against the direct threshold. There is still a slight meatball shift against rescaled threshold, which is characterized by asymmetry in depths of troughs in the genus curve. A theory-motivated asymmetry factor in the genus curve is proposed.
Adaptive convergence nonuniformity correction algorithm.
Qian, Weixian; Chen, Qian; Bai, Junqi; Gu, Guohua
2011-01-01
Nowadays, convergence and ghosting artifacts are common problems in scene-based nonuniformity correction (NUC) algorithms. In this study, we introduce the idea of space frequency to scene-based NUC. We then present the convergence speed factor, which can adaptively change the convergence speed in response to changes in the scene dynamic range. In fact, the role of the convergence speed factor is to decrease the standard deviation of the statistical data. The spatial relativity characteristic of the nonuniformity was summarized from a large amount of experimental statistical data. This spatial relativity characteristic was used to correct the convergence speed factor, which makes it more stable. Finally, real and simulated infrared image sequences were applied to demonstrate the positive effect of our algorithm.
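The abstract gives no implementation details, so the following Python fragment is only a loose, heavily simplified sketch of the general idea: an offset-only LMS-style scene-based NUC whose update step is scaled by the scene dynamic range, echoing the convergence speed factor; all parameters and the synthetic scene are assumptions.

```python
import numpy as np

def nuc_update(raw, offset, base_step=0.05):
    """One offset-only LMS update; the step shrinks on low-dynamic-range scenes."""
    corrected = raw + offset
    # Reference image: 3x3 local spatial mean of the corrected frame.
    local_mean = sum(np.roll(np.roll(corrected, dy, axis=0), dx, axis=1)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    error = corrected - local_mean
    # "Convergence speed factor": proportional to the scene dynamic range.
    speed = np.ptp(raw) / (np.abs(raw).max() + 1e-9)
    return offset - base_step * speed * error

rng = np.random.default_rng(5)
x = np.arange(64)
X, Y = np.meshgrid(x, x)
fpn = rng.normal(0.0, 5.0, (64, 64))           # fixed-pattern (offset) noise
offset = np.zeros((64, 64))
for t in range(300):                            # smoothly drifting synthetic scene
    scene = 100.0 + 20.0 * np.sin((X + 2 * t) / 8.0) * np.cos(Y / 8.0)
    offset = nuc_update(scene + fpn, offset)
corrected = scene + fpn + offset
print("non-uniformity std before:", round(float(fpn.std()), 2),
      "after:", round(float((corrected - scene).std()), 2))
```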
NASA Technical Reports Server (NTRS)
Parrish, R. V.; Mckissick, B. T.; Steinmetz, G. G.
1979-01-01
A recent modification of the methodology of profile analysis, which allows testing for differences between two functions as a whole with a single test, rather than point by point with multiple tests, is discussed. The modification is applied to the examination of the issue of motion/no-motion conditions as shown by the lateral deviation curve as a function of engine-cut speed of a piloted 737-100 simulator. The results of this application are presented along with those of more conventional statistical test procedures on the same simulator data.
Statistical Deviations From the Theoretical Only-SBU Model to Estimate MCU Rates in SRAMs
NASA Astrophysics Data System (ADS)
Franco, Francisco J.; Clemente, Juan Antonio; Baylac, Maud; Rey, Solenne; Villa, Francesca; Mecha, Hortensia; Agapito, Juan A.; Puchner, Helmut; Hubert, Guillaume; Velazco, Raoul
2017-08-01
This paper addresses a well-known problem that occurs when memories are exposed to radiation: determining whether a bit flip is isolated or belongs to a multiple event. As it is unusual to know the physical layout of the memory, this paper proposes to evaluate the statistical properties of the sets of corrupted addresses and to compare the results with a mathematical prediction model in which all of the events are single bit upsets. A set of rules easy to implement in common programming languages can be iteratively applied if anomalies are observed, thus yielding a classification of errors much closer to reality (more than 80% accuracy in our experiments).
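One simple statistic of the kind alluded to above can be sketched as follows (an illustration, not the paper's rule set): compare the observed number of suspiciously close pairs of corrupted addresses with the expectation under an only-SBU model in which upsets fall uniformly at random over the memory.

```python
import numpy as np
from itertools import combinations

def close_pairs(addresses, threshold):
    """Number of address pairs within `threshold` of each other."""
    return sum(1 for a, b in combinations(addresses, 2) if abs(a - b) <= threshold)

def expected_close_pairs(n_upsets, mem_size, threshold):
    # Two uniform random addresses lie within `threshold` of each other with
    # probability about 2*threshold/mem_size (edge effects ignored).
    n_pairs = n_upsets * (n_upsets - 1) / 2
    return n_pairs * 2.0 * threshold / mem_size

rng = np.random.default_rng(6)
mem_size = 2 ** 20                                     # assumed 1 Mword memory
addresses = rng.integers(0, mem_size, 200).tolist()    # 200 random SBUs
addresses += [a + 1 for a in addresses[:10]]           # inject 10 adjacent-pair MCUs
obs = close_pairs(addresses, threshold=1)
exp = expected_close_pairs(len(addresses), mem_size, threshold=1)
print(f"observed close pairs: {obs}, expected under only-SBU model: {exp:.3f}")
```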
Increasing market efficiency in the stock markets
NASA Astrophysics Data System (ADS)
Yang, Jae-Suk; Kwak, Wooseop; Kaizoji, Taisei; Kim, In-Mook
2008-01-01
We study the temporal evolution of three stock markets: the Standard and Poor's 500 index, the Nikkei 225 Stock Average, and the Korea Composite Stock Price Index. We observe that the probability density function of the log-return has a fat tail, but the tail index has been increasing continuously in recent years. We have also found that the variance of the autocorrelation function, the scaling exponent of the standard deviation, and the statistical complexity decrease, while the entropy density increases, over time. We introduce a modified microscopic spin model and simulate the model to confirm such increasing and decreasing tendencies in statistical quantities. These findings indicate that these three stock markets are becoming more efficient.
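The tail index mentioned above can be estimated, for example, with the standard Hill estimator; the Python sketch below uses synthetic fat-tailed returns rather than the actual index data.

```python
import numpy as np

def hill_tail_index(returns, k=100):
    """Hill estimator of the tail exponent from the k largest absolute returns."""
    x = np.sort(np.abs(returns))[::-1][: k + 1]
    return 1.0 / np.mean(np.log(x[:k] / x[k]))

rng = np.random.default_rng(7)
returns = rng.standard_t(df=3, size=20_000) * 0.01   # synthetic fat-tailed returns
print("estimated tail index:", hill_tail_index(returns))  # should be close to 3
```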
NASA Astrophysics Data System (ADS)
Issaadi, N.; Hamami, A. A.; Belarbi, R.; Aït-Mokhtar, A.
2017-10-01
In this paper, the spatial variability of some transfer and storage properties of a concrete wall was assessed. The studied parameters are water porosity, water vapor permeability, intrinsic permeability and water vapor sorption isotherms. For this purpose, a concrete wall was built in the laboratory and specimens were periodically taken and tested. The obtained results provide a statistical estimation of the mean value, the standard deviation and the spatial correlation length of the studied fields for each parameter. These results were discussed and a statistical analysis was performed in order to identify the appropriate probability density function for each of these parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Öztürk, Hande; Noyan, I. Cevdet
A rigorous study of sampling and intensity statistics applicable to a powder diffraction experiment as a function of crystallite size is presented. Our analysis yields approximate equations for the expected value, variance and standard deviations for both the number of diffracting grains and the corresponding diffracted intensity for a given Bragg peak. The classical formalism published in 1948 by Alexander, Klug & Kummer [J. Appl. Phys. (1948), 19, 742–753] appears here as a special case, limited to large crystallite sizes. It is observed that both the Lorentz probability expression and the statistics equations used in the classical formalism are inapplicable for nanocrystalline powder samples.
Level statistics of a noncompact cosmological billiard
NASA Astrophysics Data System (ADS)
Csordas, Andras; Graham, Robert; Szepfalusy, Peter
1991-08-01
A noncompact chaotic billiard on a two-dimensional space of constant negative curvature, the infinite equilateral triangle describing anisotropy oscillations in the very early universe, is studied quantum-mechanically. A Weyl formula with a logarithmic correction term is derived for the smoothed number of states function. For one symmetry class of the eigenfunctions, the level spacing distribution, the spectral rigidity Delta3, and the Sigma2 statistics are determined numerically using the finite matrix approximation. Systematic deviations are found both from the Gaussian orthogonal ensemble (GOE) and the Poissonian ensemble. However, good agreement with the GOE is found if the fundamental triangle is deformed in such a way that it no longer tiles the space.
Öztürk, Hande; Noyan, I. Cevdet
2017-08-24
A rigorous study of sampling and intensity statistics applicable to a powder diffraction experiment as a function of crystallite size is presented. Our analysis yields approximate equations for the expected value, variance and standard deviations for both the number of diffracting grains and the corresponding diffracted intensity for a given Bragg peak. The classical formalism published in 1948 by Alexander, Klug & Kummer [J. Appl. Phys. (1948), 19, 742–753] appears here as a special case, limited to large crystallite sizes. It is observed that both the Lorentz probability expression and the statistics equations used in the classical formalism are inapplicable for nanocrystalline powder samples.
A search for spectral lines in gamma-ray bursts using TGRS
NASA Astrophysics Data System (ADS)
Kurczynski, P.; Palmer, D.; Seifert, H.; Teegarden, B. J.; Gehrels, N.; Cline, T. L.; Ramaty, R.; Hurley, K.; Madden, N. W.; Pehl, R. H.
1998-05-01
We present the results of an ongoing search for narrow spectral lines in gamma-ray burst data. TGRS, the Transient Gamma-Ray Spectrometer aboard the Wind satellite is a high energy-resolution Ge device. Thus it is uniquely situated among the array of space-based, burst sensitive instruments to look for line features in gamma-ray burst spectra. Our search strategy adopts a two tiered approach. An automated `quick look' scan searches spectra for statistically significant deviations from the continuum. We analyzed all possible time accumulations of spectra as well as individual spectra for each burst. Follow-up analysis of potential line candidates uses model fitting with F-test and χ2 tests for statistical significance.
Petsch, Harold E.
1979-01-01
Statistical summaries of daily streamflow data for 189 stations west of the Continental Divide in Colorado are presented in this report. Duration tables, high-flow sequence tables, and low-flow sequence tables provide information about daily mean discharge. The mean, variance, standard deviation, skewness, and coefficient of variation are provided for monthly and annual flows. Percentages of average flow are provided for monthly flows and first-order serial-correlation coefficients are provided for annual flows. The text explains the nature and derivation of the data and illustrates applications of the tabulated information by examples. The data may be used by agencies and individuals engaged in water studies. (USGS)
Large deviations of a long-time average in the Ehrenfest urn model
NASA Astrophysics Data System (ADS)
Meerson, Baruch; Zilber, Pini
2018-05-01
Since its inception in 1907, the Ehrenfest urn model (EUM) has served as a test bed of key concepts of statistical mechanics. Here we employ this model to study large deviations of a time-additive quantity. We consider two continuous-time versions of the EUM with K urns and N balls: with and without interactions between the balls in the same urn. We evaluate the probability distribution P that the average number of balls in one urn over time T takes any specified value aN, where 0 ≤ a ≤ 1. For a long observation time, T → ∞, a Donsker–Varadhan large deviation principle holds: -ln P ≃ T I(a, …), where … denote additional parameters of the model. We calculate the rate function I exactly by two different methods due to Donsker and Varadhan and compare the exact results with those obtained with a variant of the WKB approximation (after Wentzel, Kramers and Brillouin). In the absence of interactions the WKB prediction for I is exact for any N. In the presence of interactions the WKB method gives asymptotically exact results for N ≫ 1. The WKB method also uncovers the (very simple) time history of the system which dominates the contribution of different time histories to P.
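A brute-force check of the setting (not the paper's analytical calculation) is easy to write: the Monte Carlo sketch below samples the time-averaged occupation of one urn in the non-interacting two-urn (K = 2) continuous-time model, from which the decay of large excursions of the average can be examined numerically.

```python
import numpy as np

def average_occupation(N, T, rng):
    """Time average of the number of balls in urn 1 over observation time T."""
    n, t, area = N // 2, 0.0, 0.0                    # start near equilibrium
    while t < T:
        dt = min(rng.exponential(1.0 / N), T - t)    # each ball jumps at unit rate
        area += n * dt
        t += dt
        if t < T:                                    # a uniformly chosen ball changes urn
            n += 1 if rng.random() < (N - n) / N else -1
    return area / T

rng = np.random.default_rng(8)
N, T = 20, 50.0
samples = np.array([average_occupation(N, T, rng) for _ in range(2000)])
print("mean of <n>/N:", samples.mean() / N, " std:", samples.std() / N)
```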
Deviation-based spam-filtering method via stochastic approach
NASA Astrophysics Data System (ADS)
Lee, Daekyung; Lee, Mi Jin; Kim, Beom Jun
2018-03-01
In the presence of a huge number of possible purchase choices, ranks or ratings of items by others often play very important roles for a buyer to make a final purchase decision. Perfectly objective rating is an impossible task to achieve, and we often use an average rating built on how previous buyers estimated the quality of the product. The problem of using a simple average rating is that it can easily be polluted by careless users whose evaluation of products cannot be trusted, and by malicious spammers who try to bias the rating result on purpose. In this letter we suggest how trustworthiness of individual users can be systematically and quantitatively reflected to build a more reliable rating system. We compute the suitably defined reliability of each user based on the user's rating pattern for all products she evaluated. We call our proposed method the deviation-based ranking, since the statistical significance of each user's rating pattern with respect to the average rating pattern is the key ingredient. We find that our deviation-based ranking method outperforms existing methods in filtering out careless random evaluators as well as malicious spammers.
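A simplified sketch of the deviation-based idea is shown below; the rating matrix, the squared-deviation score, and the reliability weighting are illustrative assumptions rather than the authors' exact formulation.

```python
import numpy as np

# Rows = users, columns = products; np.nan marks "not rated" (assumed data).
R = np.array([[5.0, 4.0, np.nan, 3.0],
              [4.0, 4.0, 5.0,    3.0],
              [1.0, 5.0, 1.0,    5.0],      # erratic or spamming user
              [5.0, 3.0, 4.0,    np.nan]])

item_mean = np.nanmean(R, axis=0)

# Score each user by the mean squared deviation from the item averages,
# then weight ratings by a simple decreasing function of that deviation.
deviation = np.nanmean((R - item_mean) ** 2, axis=1)
reliability = 1.0 / (1.0 + deviation)

weights = np.where(np.isnan(R), 0.0, reliability[:, None])
weighted = np.nansum(np.nan_to_num(R) * weights, axis=0) / weights.sum(axis=0)

print("plain averages   :", np.round(item_mean, 2))
print("weighted averages:", np.round(weighted, 2))
```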
Phase-I monitoring of standard deviations in multistage linear profiles
NASA Astrophysics Data System (ADS)
Kalaei, Mahdiyeh; Soleimani, Paria; Niaki, Seyed Taghi Akhavan; Atashgar, Karim
2018-03-01
In most modern manufacturing systems, products are often the output of some multistage processes. In these processes, the stages are dependent on each other, where the output quality of each stage depends also on the output quality of the previous stages. This property is called the cascade property. Although there are many studies in multistage process monitoring, there are fewer works on profile monitoring in multistage processes, especially on the variability monitoring of a multistage profile in Phase I, for which no research is found in the literature. In this paper, a new methodology is proposed for Phase-I monitoring of the standard deviation of a simple linear profile in multistage processes with the cascade property. To this aim, an autoregressive correlation model between the stages is considered first. Then, the effect of the cascade property on the performance of three types of T2 control charts in Phase I with shifts in the standard deviation is investigated. Since this effect is shown to be significant, a U statistic is then used to remove the cascade effect, and the investigated control charts are modified accordingly. Simulation studies reveal good performance of the modified control charts.
Sarfraz, Muhammad Haroon; Mehboob, Mohammad Asim; Haq, Rana Intisar Ul
2017-01-01
To evaluate the correlation between Central Corneal Thickness (CCT) and Visual Field (VF) defect parameters like Mean Deviation (MD) and Pattern Standard Deviation (PSD), Cup-to-Disc Ratio (CDR) and Retinal Nerve Fibre Layer Thickness (RNFL-T) in Primary Open-Angle Glaucoma (POAG) patients. This cross-sectional study was conducted at the Armed Forces Institute of Ophthalmology (AFIO), Rawalpindi from September 2015 to September 2016. Sixty eyes of 30 patients with diagnosed POAG were analysed. Correlation of CCT with the other variables was studied. Mean age of the study population was 43.13±7.54 years. Out of 30 patients, 19 (63.33%) were males and 11 (36.67%) were females. Mean CCT, MD, PSD, CDR and RNFL-T of the study population were 528.57±25.47µm, -9.11±3.07, 6.93±2.73, 0.63±0.13 and 77.79±10.44µm, respectively. There was significant correlation of CCT with MD, PSD and CDR (r=-0.52, p<0.001; r=-0.59, p<0.001; r=-0.41, p=0.001, respectively). The correlation of CCT with RNFL-T was not statistically significant (r=-0.14, p=0.284). Central corneal thickness had a significant correlation with visual field parameters like mean deviation and pattern standard deviation, as well as with cup-to-disc ratio. However, central corneal thickness had no significant relationship with retinal nerve fibre layer thickness.
Carriot, Jérome; Jamali, Mohsen; Chacron, Maurice J; Cullen, Kathleen E
2017-04-15
In order to understand how the brain's coding strategies are adapted to the statistics of the sensory stimuli experienced during everyday life, the use of animal models is essential. Mice and non-human primates have become common models for furthering our knowledge of the neuronal coding of natural stimuli, but differences in their natural environments and behavioural repertoire may impact optimal coding strategies. Here we investigated the structure and statistics of the vestibular input experienced by mice versus non-human primates during natural behaviours, and found important differences. Our data establish that the structure and statistics of natural signals in non-human primates more closely resemble those observed previously in humans, suggesting similar coding strategies for incoming vestibular input. These results help us understand how the effects of active sensing and biomechanics will differentially shape the statistics of vestibular stimuli across species, and have important implications for sensory coding in other systems. It is widely believed that sensory systems are adapted to the statistical structure of natural stimuli, thereby optimizing coding. Recent evidence suggests that this is also the case for the vestibular system, which senses self-motion and in turn contributes to essential brain functions ranging from the most automatic reflexes to spatial perception and motor coordination. However, little is known about the statistics of self-motion stimuli actually experienced by freely moving animals in their natural environments. Accordingly, here we examined the natural self-motion signals experienced by mice and monkeys: two species commonly used to study vestibular neural coding. First, we found that probability distributions for all six dimensions of motion (three rotations, three translations) in both species deviated from normality due to long tails. Interestingly, the power spectra of natural rotational stimuli displayed similar structure for both species and were not well fitted by power laws. This result contrasts with reports that the natural spectra of other sensory modalities (i.e. vision, auditory and tactile) instead show a power-law relationship with frequency, which indicates scale invariance. Analysis of natural translational stimuli revealed important species differences as power spectra deviated from scale invariance for monkeys but not for mice. By comparing our results to previously published data for humans, we found the statistical structure of natural self-motion stimuli in monkeys and humans more closely resemble one another. Our results thus predict that, overall, neural coding strategies used by vestibular pathways to encode natural self-motion stimuli are fundamentally different in rodents and primates. © 2017 The Authors. The Journal of Physiology © 2017 The Physiological Society.
Vetter, Thomas R
2017-11-01
Descriptive statistics are specific methods basically used to calculate, describe, and summarize collected research data in a logical, meaningful, and efficient way. Descriptive statistics are reported numerically in the manuscript text and/or in its tables, or graphically in its figures. This basic statistical tutorial discusses a series of fundamental concepts about descriptive statistics and their reporting. The mean, median, and mode are 3 measures of the center or central tendency of a set of data. In addition to a measure of its central tendency (mean, median, or mode), another important characteristic of a research data set is its variability or dispersion (ie, spread). In simplest terms, variability is how much the individual recorded scores or observed values differ from one another. The range, standard deviation, and interquartile range are 3 measures of variability or dispersion. The standard deviation is typically reported for a mean, and the interquartile range for a median. Testing for statistical significance, along with calculating the observed treatment effect (or the strength of the association between an exposure and an outcome), and generating a corresponding confidence interval are 3 tools commonly used by researchers (and their collaborating biostatistician or epidemiologist) to validly make inferences and more generalized conclusions from their collected data and descriptive statistics. A number of journals, including Anesthesia & Analgesia, strongly encourage or require the reporting of pertinent confidence intervals. A confidence interval can be calculated for virtually any variable or outcome measure in an experimental, quasi-experimental, or observational research study design. Generally speaking, in a clinical trial, the confidence interval is the range of values within which the true treatment effect in the population likely resides. In an observational study, the confidence interval is the range of values within which the true strength of the association between the exposure and the outcome (eg, the risk ratio or odds ratio) in the population likely resides. There are many possible ways to graphically display or illustrate different types of data. While there is often latitude as to the choice of format, ultimately, the simplest and most comprehensible format is preferred. Common examples include a histogram, bar chart, line chart or line graph, pie chart, scatterplot, and box-and-whisker plot. Valid and reliable descriptive statistics can answer basic yet important questions about a research data set, namely: "Who, What, Why, When, Where, How, How Much?"
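The descriptive quantities and the confidence interval discussed in this tutorial can be computed as in the following sketch; the sample values are assumed for illustration.

```python
import numpy as np
from scipy import stats

x = np.array([12.0, 15.0, 15.0, 14.0, 18.0, 21.0, 13.0, 16.0, 15.0, 19.0])  # assumed sample

mean, median = x.mean(), np.median(x)
sd = x.std(ddof=1)                                   # sample standard deviation
q1, q3 = np.percentile(x, [25, 75])
iqr = q3 - q1
values, freqs = np.unique(x, return_counts=True)
mode = values[np.argmax(freqs)]                      # most frequent value

# 95% confidence interval for the mean, based on the t distribution.
sem = sd / np.sqrt(x.size)
ci_low, ci_high = stats.t.interval(0.95, df=x.size - 1, loc=mean, scale=sem)

print(f"mean={mean:.2f} median={median:.1f} mode={mode:.0f}")
print(f"SD={sd:.2f} IQR={iqr:.1f} 95% CI=({ci_low:.2f}, {ci_high:.2f})")
```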
Carriot, Jérome; Jamali, Mohsen; Chacron, Maurice J.
2017-01-01
Key points In order to understand how the brain's coding strategies are adapted to the statistics of the sensory stimuli experienced during everyday life, the use of animal models is essential. Mice and non‐human primates have become common models for furthering our knowledge of the neuronal coding of natural stimuli, but differences in their natural environments and behavioural repertoire may impact optimal coding strategies. Here we investigated the structure and statistics of the vestibular input experienced by mice versus non‐human primates during natural behaviours, and found important differences. Our data establish that the structure and statistics of natural signals in non‐human primates more closely resemble those observed previously in humans, suggesting similar coding strategies for incoming vestibular input. These results help us understand how the effects of active sensing and biomechanics will differentially shape the statistics of vestibular stimuli across species, and have important implications for sensory coding in other systems. Abstract It is widely believed that sensory systems are adapted to the statistical structure of natural stimuli, thereby optimizing coding. Recent evidence suggests that this is also the case for the vestibular system, which senses self‐motion and in turn contributes to essential brain functions ranging from the most automatic reflexes to spatial perception and motor coordination. However, little is known about the statistics of self‐motion stimuli actually experienced by freely moving animals in their natural environments. Accordingly, here we examined the natural self‐motion signals experienced by mice and monkeys: two species commonly used to study vestibular neural coding. First, we found that probability distributions for all six dimensions of motion (three rotations, three translations) in both species deviated from normality due to long tails. Interestingly, the power spectra of natural rotational stimuli displayed similar structure for both species and were not well fitted by power laws. This result contrasts with reports that the natural spectra of other sensory modalities (i.e. vision, auditory and tactile) instead show a power‐law relationship with frequency, which indicates scale invariance. Analysis of natural translational stimuli revealed important species differences as power spectra deviated from scale invariance for monkeys but not for mice. By comparing our results to previously published data for humans, we found the statistical structure of natural self‐motion stimuli in monkeys and humans more closely resemble one another. Our results thus predict that, overall, neural coding strategies used by vestibular pathways to encode natural self‐motion stimuli are fundamentally different in rodents and primates. PMID:28083981
Statistics in the pharmacy literature.
Lee, Charlene M; Soin, Herpreet K; Einarson, Thomas R
2004-09-01
Research in statistical methods is essential for maintenance of high quality of the published literature. To update previous reports of the types and frequencies of statistical terms and procedures in research studies of selected professional pharmacy journals. We obtained all research articles published in 2001 in 6 journals: American Journal of Health-System Pharmacy, The Annals of Pharmacotherapy, Canadian Journal of Hospital Pharmacy, Formulary, Hospital Pharmacy, and Journal of the American Pharmaceutical Association. Two independent reviewers identified and recorded descriptive and inferential statistical terms/procedures found in the methods, results, and discussion sections of each article. Results were determined by tallying the total number of times, as well as the percentage, that each statistical term or procedure appeared in the articles. One hundred forty-four articles were included. Ninety-eight percent employed descriptive statistics; of these, 28% used only descriptive statistics. The most common descriptive statistical terms were percentage (90%), mean (74%), standard deviation (58%), and range (46%). Sixty-nine percent of the articles used inferential statistics, the most frequent being the chi-square test (33%), Student's t-test (26%), Pearson's correlation coefficient r (18%), ANOVA (14%), and logistic regression (11%). Statistical terms and procedures were found in nearly all of the research articles published in pharmacy journals. Thus, pharmacy education should aim to provide current and future pharmacists with an understanding of the common statistical terms and procedures identified to facilitate the appropriate appraisal and consequential utilization of the information available in research articles.
NASA Technical Reports Server (NTRS)
Mozer, F. S.
1976-01-01
A computer program simulated the spectrum which resulted when a radar signal was transmitted into the ionosphere for a finite time and received for an equal finite interval. The spectrum derived from this signal is statistical in nature because the signal is scattered from the ionosphere, which is statistical in nature. Many estimates of any property of the ionosphere can be made. Their average value will approach the average property of the ionosphere which is being measured. Due to the statistical nature of the spectrum itself, the estimators will vary about this average. The square root of the variance about this average is called the standard deviation, an estimate of the error which exists in any particular radar measurement. In order to determine the feasibility of the space shuttle radar, the magnitude of these errors for measurements of physical interest must be understood.
Asymptotic inference in system identification for the atom maser.
Catana, Catalin; van Horssen, Merlijn; Guta, Madalin
2012-11-28
System identification is closely related to control theory and plays an increasing role in quantum engineering. In the quantum set-up, system identification is usually equated to process tomography, i.e. estimating a channel by probing it repeatedly with different input states. However, for quantum dynamical systems such as quantum Markov processes, it is more natural to consider the estimation based on continuous measurements of the output, with a given input that may be stationary. We address this problem using asymptotic statistics tools, for the specific example of estimating the Rabi frequency of an atom maser. We compute the Fisher information of different measurement processes as well as the quantum Fisher information of the atom maser, and establish the local asymptotic normality of these statistical models. The statistical notions can be expressed in terms of spectral properties of certain deformed Markov generators, and the connection to large deviations is briefly discussed.
Generalized Majority Logic Criterion to Analyze the Statistical Strength of S-Boxes
NASA Astrophysics Data System (ADS)
Hussain, Iqtadar; Shah, Tariq; Gondal, Muhammad Asif; Mahmood, Hasan
2012-05-01
The majority logic criterion is applicable in the evaluation process of substitution boxes used in the advanced encryption standard (AES). The performance of modified or advanced substitution boxes is predicted by processing the results of statistical analysis by the majority logic criteria. In this paper, we use the majority logic criteria to analyze some popular and prevailing substitution boxes used in encryption processes. In particular, the majority logic criterion is applied to AES, affine power affine (APA), Gray, Lui J, residue prime, S8 AES, Skipjack, and Xyi substitution boxes. The majority logic criterion is further extended into a generalized majority logic criterion which has a broader spectrum of analyzing the effectiveness of substitution boxes in image encryption applications. The integral components of the statistical analyses used for the generalized majority logic criterion are derived from results of entropy analysis, contrast analysis, correlation analysis, homogeneity analysis, energy analysis, and mean of absolute deviation (MAD) analysis.
Velocity bias in the distribution of dark matter halos
NASA Astrophysics Data System (ADS)
Baldauf, Tobias; Desjacques, Vincent; Seljak, Uroš
2015-12-01
The standard formalism for the coevolution of halos and dark matter predicts that any initial halo velocity bias rapidly decays to zero. We argue that, when the purpose is to compute statistics like power spectra etc., the coupling in the momentum conservation equation for the biased tracers must be modified. Our new formulation predicts the constancy in time of any statistical halo velocity bias present in the initial conditions, in agreement with peak theory. We test this prediction by studying the evolution of a conserved halo population in N -body simulations. We establish that the initial simulated halo density and velocity statistics show distinct features of the peak model and, thus, deviate from the simple local Lagrangian bias. We demonstrate, for the first time, that the time evolution of their velocity is in tension with the rapid decay expected in the standard approach.
A New Goodness of Fit Test for Normality with Mean and Variance Unknown.
1981-12-01
be realized, since fewer random deviates may have to be generated in order to get consistent critical values at the desired α levels. [Table residue: empirical powers at α levels .20, .15, .10, .05, .01 for sample sizes n = 10 and n = 25, comparing the straightforward and reflection calculation methods; actual population: Cauchy; statistic: Kolmogorov-Smirnov.]
Kurokawa, Masami; Nakano, Takeshi; Kondo, Hisashi
1954-01-01
Three diphtheria toxoid preparations, fractionated at various concentrations of ammonium sulfate, having various grades of purity, and showing striking differences in immunizing potency when compared at the same Lf dose, were examined for similarity of the effective constituents in the fractions. No evidence of deviations from parallelism of the dose-response regression lines was observed; thus the statistical criteria for qualitative similarity were satisfactorily met. PMID:13199660
Manufacturing Execution Systems: Examples of Performance Indicator and Operational Robustness Tools.
Gendre, Yannick; Waridel, Gérard; Guyon, Myrtille; Demuth, Jean-François; Guelpa, Hervé; Humbert, Thierry
Manufacturing Execution Systems (MES) are computerized systems used to measure production performance in terms of productivity, yield, and quality. The first part describes performance indicators, overall equipment effectiveness (OEE), process robustness tools, and statistical process control. The second part details some tools that help operators maintain process robustness and control by preventing deviations from target control charts. MES was developed by Syngenta together with CIMO for automation.
1977-01-01
balanced at the mean, with the central part steeper (leptokurtic: peaked mode or extended tails) or flatter (platykurtic: broad mode or truncated tails) … and NUPUR, have negative kurtosis (they are platykurtic, with truncated tails and/or broad modes relative to their standard deviations). FERRO, on the … the other areas, and its gradients are platykurtic but almost unskewed. Hence the square root of sine transformation (Fig. 15) and the log tangent
Shot Group Statistics for Small Arms Applications
2017-06-01
standard deviation. Analysis is presented as applied to one, n-round shot group and then is extended to treat multiple, n-round shot groups. A dispersion measure for multiple, n-round shot groups can be constructed by selecting one of the dispersion measures listed above, measuring the dispersion of …
Stochastic model of temporal changes of wind spectra in the free atmosphere
NASA Technical Reports Server (NTRS)
Huang, Y. H.
1974-01-01
Data for wind profile spectra changes with respect to time from Cape Kennedy, Florida for the time period from 28 November 1964 to 11 May 1967 have been analyzed. A universal statistical distribution of the spectral change which encompasses all vertical wave numbers, wind speed categories, and elapsed time has been developed for the standard deviation of the time changes of detailed wind profile spectra as a function of wave number.
2016-07-01
Predicted variation in (a) hot-spot number density, (b) hot-spot volume fraction, and (c) hot-spot specific surface area for each ensemble with piston speed … packing density, characterized by its effective solid volume fraction φs,0, affects hot-spot statistics for pressure-dominated waves corresponding to … distribution in solid volume fraction within each ensemble was nearly Gaussian, and its standard deviation decreased with increasing density. Analysis of
1980-12-01
to sound pressure level in decibels assuming a frequency of 1000 Hz. The perceived noisiness values are derived from a formula specified in … [Contents residue: Perceived Noise Level Analysis; Acoustic Weighting Networks; Derivations. Band analyses (octave, third-octave, perceived noise level) with basic statistical analyses: mean, variance, and standard deviation calculation.]
2009-10-20
standard deviation. The y axis indicates the scaled MB, MB′ = MB / [(1/N) Σ_{j=1}^{N} (O_j − Ō)²]^{1/2}, (12) or the biweight version, MB′_bw = MB_bw / ⟨⟨O_j⟩⟩_bw … RMSE′_bw,unbiased = RMSE_bw,unbiased / ⟨⟨O_j⟩⟩_bw. (15) To investigate the impact of outliers, results from both the Gaussian statistics [Eqs. (12) and
40 CFR 1065.1005 - Symbols, abbreviations, acronyms, and units of measure.
Code of Federal Regulations, 2012 CFR
2012-07-01
... least squares regression β ratio of diameters meter per meter m/m 1 β atomic oxygen to carbon ratio mole... consumption gram per kilowatt hour g/(kW·hr) g·3.6−1·106·m−2·kg·s2 F F-test statistic f frequency hertz Hz s−1... standard deviation S Sutherland constant kelvin K K SEE standard estimate of error T absolute temperature...
40 CFR 1065.1005 - Symbols, abbreviations, acronyms, and units of measure.
Code of Federal Regulations, 2013 CFR
2013-07-01
... least squares regression β ratio of diameters meter per meter m/m 1 β atomic oxygen to carbon ratio mole... consumption gram per kilowatt hour g/(kW·hr) g·3.6−1·106·m−2·kg·s2 F F-test statistic f frequency hertz Hz s−1... standard deviation S Sutherland constant kelvin K K SEE standard estimate of error T absolute temperature...
2007-12-01
acknowledged that Continuous Improvement (CI), or Kaizen in Japanese, is practiced in some way, shape, or form by most if not all Fortune 500 companies … greater resistance in the individualistic U.S. culture. Kaizen generally involves methodical examination and testing, followed by the adoption of new … or streamlined procedures, including scrupulous measurement and changes based on statistical deviation formulas. Kaizen appears to be a perfect fit
Biased relevance filtering in the auditory system: A test of confidence-weighted first-impressions.
Mullens, D; Winkler, I; Damaso, K; Heathcote, A; Whitson, L; Provost, A; Todd, J
2016-03-01
Although first-impressions are known to impact decision-making and to have prolonged effects on reasoning, it is less well known that the same type of rapidly formed assumptions can explain biases in automatic relevance filtering outside of deliberate behavior. This paper features two studies in which participants have been asked to ignore sequences of sound while focusing attention on a silent movie. The sequences consisted of blocks, each with a high-probability repetition interrupted by rare acoustic deviations (i.e., a sound of different pitch or duration). The probabilities of the two different sounds alternated across the concatenated blocks within the sequence (i.e., short-to-long and long-to-short). The sound probabilities are rapidly and automatically learned for each block and a perceptual inference is formed predicting the most likely characteristics of the upcoming sound. Deviations elicit a prediction-error signal known as mismatch negativity (MMN). Computational models of MMN generally assume that its elicitation is governed by transition statistics that define what sound attributes are most likely to follow the current sound. MMN amplitude reflects prediction confidence, which is derived from the stability of the current transition statistics. However, our prior research showed that MMN amplitude is modulated by a strong first-impression bias that outweighs transition statistics. Here we test the hypothesis that this bias can be attributed to assumptions about predictable vs. unpredictable nature of each tone within the first encountered context, which is weighted by the stability of that context. The results of Study 1 show that this bias is initially prevented if there is no 1:1 mapping between sound attributes and probability, but it returns once the auditory system determines which properties provide the highest predictive value. The results of Study 2 show that confidence in the first-impression bias drops if assumptions about the temporal stability of the transition-statistics are violated. Both studies provide compelling evidence that the auditory system extrapolates patterns on multiple timescales to adjust its response to prediction-errors, while profoundly distorting the effects of transition-statistics by the assumptions formed on the basis of first-impressions. Copyright © 2016 Elsevier B.V. All rights reserved.
The MSFC Solar Activity Future Estimation (MSAFE) Model
NASA Technical Reports Server (NTRS)
Suggs, Ron
2017-01-01
The MSAFE model provides forecasts for the solar indices SSN, F10.7, and Ap. These solar indices are used as inputs to space environment models used in orbital spacecraft operations and space mission analysis. Forecasts from the MSAFE model are provided on the MSFC Natural Environments Branch's solar web page and are updated as new monthly observations become available. The MSAFE prediction routine employs a statistical technique that calculates deviations of past solar cycles from the mean cycle and performs a regression analysis to calculate the deviation from the mean cycle of the solar index at the next future time interval. The forecasts are initiated for a given cycle after about 8 to 9 monthly observations from the start of the cycle are collected. A forecast made at the beginning of cycle 24 using the MSAFE program captured the cycle fairly well with some difficulty in discerning the double peak that occurred at solar cycle maximum.
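The regression-on-deviations idea described above can be sketched as follows; the historical cycle values, the new-cycle observations, and the simple month-to-month linear regression are illustrative assumptions, not the MSAFE implementation.

```python
import numpy as np

# Rows = historical cycles, columns = months since cycle start (assumed SSN-like values).
cycles = np.array([[10, 25, 50, 80, 110, 120, 115, 100],
                   [ 8, 20, 45, 70,  95, 105, 100,  90],
                   [12, 30, 60, 95, 130, 145, 140, 120]], dtype=float)
mean_cycle = cycles.mean(axis=0)

observed = np.array([11, 27, 55, 85], dtype=float)   # new cycle, months 0..3
m = observed.size - 1                                # last observed month

# Regress the deviation at month m+1 on the deviation at month m across cycles.
dev_m   = cycles[:, m]     - mean_cycle[m]
dev_mp1 = cycles[:, m + 1] - mean_cycle[m + 1]
slope, intercept = np.polyfit(dev_m, dev_mp1, 1)

current_dev = observed[m] - mean_cycle[m]
forecast = mean_cycle[m + 1] + intercept + slope * current_dev
print(f"forecast for month {m + 1}: {forecast:.1f}")
```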
Micro-bubbles and Micro-particles are Not Faithful Tracers of Turbulent Acceleration
NASA Astrophysics Data System (ADS)
Sun, Chao; Mathai, Varghese; Calzavarini, Enrico; Brons, Jon; Lohse, Detlef
2016-11-01
We report on the Lagrangian statistics of acceleration of small (sub-Kolmogorov) bubbles and tracer particles with Stokes number St << 1 in turbulent flow. At decreasing Reynolds number, the bubble accelerations show deviations from those of tracer particles, i.e. they deviate from the Heisenberg-Yaglom prediction and show a quicker decorrelation despite their small size and minute St. Using direct numerical simulations, we show that these effects arise due to the drift of these particles through the turbulent flow. We theoretically predict this gravity-driven effect for developed isotropic turbulence, with the ratio of Stokes to Froude number, or equivalently the particle drift velocity, governing the enhancement of acceleration variance and the reductions in correlation time and intermittency. Our predictions are in good agreement with experimental and numerical results. The present findings are relevant to a range of scenarios encompassing tiny bubbles and droplets that drift through the turbulent oceans and the atmosphere.
NASA Technical Reports Server (NTRS)
Sepehry-Fard, F.; Coulthard, Maurice H.
1995-01-01
The process to predict the values of the maintenance time dependent variable parameters, such as mean time between failures (MTBF), over time must be one that will not in turn introduce uncontrolled deviation in the results of the ILS analysis, such as the life cycle cost spares calculation, etc. A minor deviation in the values of the maintenance time dependent variable parameters such as MTBF over time will have a significant impact on the logistics resources demands, International Space Station availability, and maintenance support costs. It is the objective of this report to identify the magnitude of the expected enhancement in the accuracy of the results for the International Space Station reliability and maintainability data packages by providing examples. These examples partially portray the necessary information by evaluating the impact of the said enhancements on the life cycle cost and the availability of the International Space Station.
Analyses and assessments of span wise gust gradient data from NASA B-57B aircraft
NASA Technical Reports Server (NTRS)
Frost, Walter; Chang, Ho-Pen; Ringnes, Erik A.
1987-01-01
Analysis of turbulence measured across the airfoil of a Canberra B-57 aircraft is reported. The aircraft is instrumented with probes for measuring wind at both wing tips and at the nose. Statistical properties of the turbulence are reported. These consist of the standard deviations of turbulence measured by each individual probe; standard deviations and probability distributions of differences in turbulence measured between probes; and auto- and two-point spatial correlations and spectra. Procedures associated with calculation of two-point spatial correlations and spectra from the data are addressed. Methods and correction procedures for assuring the accuracy of aircraft-measured winds are also described. Results are found, in general, to agree with correlations existing in the literature. The velocity spatial differences fit a Gaussian/Bessel type probability distribution. The turbulence agrees with the von Karman turbulence correlation and with two-point spatial correlations developed from the von Karman correlation.
Routine sampling and the control of Legionella spp. in cooling tower water systems.
Bentham, R H
2000-10-01
Cooling water samples from 31 cooling tower systems were cultured for Legionella over a 16-week summer period. The selected systems were known to be colonized by Legionella. Mean Legionella counts and standard deviations were calculated and time series correlograms prepared for each system. The standard deviations of Legionella counts in all the systems were very large, indicating great variability in the systems over the time period. Time series analyses demonstrated that in the majority of cases there was no significant relationship between the Legionella counts in the cooling tower at time of collection and the culture result once it was available. In the majority of systems (25/28), culture results from Legionella samples taken from the same systems 2 weeks apart were not statistically related. The data suggest that determinations of health risks from cooling towers cannot be reliably based upon single or infrequent Legionella tests.
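The lag correlations underlying such correlograms can be computed as in this minimal sketch; the count series and sampling interval are illustrative assumptions.

```python
import numpy as np

# Fortnightly Legionella counts (assumed) from one cooling tower system.
counts = np.array([200, 1500, 300, 50, 4000, 600, 120, 900,
                   2500, 80, 700, 1800, 90, 3000, 400, 1000], dtype=float)

def lag_corr(x, k):
    """Correlation between the series and itself shifted by k sampling intervals."""
    return np.corrcoef(x[:-k], x[k:])[0, 1]

print("mean:", counts.mean(), " sd:", counts.std(ddof=1))
for k in (1, 2, 3):
    print(f"lag-{k} correlation: {lag_corr(counts, k):+.2f}")
```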
A time to be stressed? Time perspectives and cortisol dynamics among healthy adults.
Olivera-Figueroa, Lening A; Juster, Robert-Paul; Morin-Major, Julie Katia; Marin, Marie-France; Lupien, Sonia J
2015-10-01
Perceptions of past, present, and future events may be related to stress pathophysiology. We assessed whether Time Perspective (TP) is associated with cortisol dynamics among healthy adults (N=61, Ages=18-35, M=22.9, SD=4.1) exposed to the Trier Social Stress Test (TSST). TP was measured according to two profiles: maladaptive Deviation from Balanced TP (DBTP) and adaptive Deviation from Negative TP (DNTP). Eight salivary cortisol samples were analyzed using area under the curve with respect to ground (AUCg) and to increase (AUCi). Statistical analyses involved partial correlations controlling for depressive symptoms. Results for both sexes showed that higher DBTP scores were associated with lower cortisol AUCg scores, while higher DNTP scores were associated with higher cortisol AUCg scores. These novel findings suggest that maladaptive TP profiles influence hypocortisolism, whereas adaptive TP profiles influence hypercortisolism. Thus, TP profiles may impact conditions characterized by altered cortisol concentrations. Published by Elsevier B.V.
Large Fluctuations for Spatial Diffusion of Cold Atoms
NASA Astrophysics Data System (ADS)
Aghion, Erez; Kessler, David A.; Barkai, Eli
2017-06-01
We use a new approach to study the large fluctuations of a heavy-tailed system, where the standard large-deviations principle does not apply. Large-deviations theory deals with tails of probability distributions and the rare events of random processes, for example, spreading packets of particles. Mathematically, it concerns the exponential falloff of the density of thin-tailed systems. Here we investigate the spatial density Pt(x ) of laser-cooled atoms, where at intermediate length scales the shape is fat tailed. We focus on the rare events beyond this range, which dominate important statistical properties of the system. Through a novel friction mechanism induced by the laser fields, the density is explored with the recently proposed non-normalized infinite-covariant density approach. The small and large fluctuations give rise to a bifractal nature of the spreading packet. We derive general relations which extend our theory to a class of systems with multifractal moments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biassoni, Pietro
2009-01-01
In this thesis work we have measured the following upper limits at the 90% confidence level for B meson decays (in units of 10⁻⁶), using a sample of 465.0 × 10⁶ B B̄ pairs: B(B⁰ → ηK⁰) < 1.6, B(B⁰ → ηη) < 1.4, B(B⁰ → η′η′) < 2.1, B(B⁰ → ηΦ) < 0.52, B(B⁰ → ηω) < 1.6, B(B⁰ → η′Φ) < 1.2, B(B⁰ → η′ω) < 1.7. We observe none of these decay modes; the statistical significance of our measurements is in the range 1.3-3.5 standard deviations. We have 3.5σ evidence for B → ηω and 3.1σ evidence for B → η′ω. The absence of an observation of B⁰ → ηK⁰ opens an issue related to the large difference compared to the charged-mode B⁺ → ηK⁺ branching fraction, which is measured to be 3.7 ± 0.4 ± 0.1 [118]. Our results represent substantial improvements over the previous ones [109, 110, 111] and are consistent with theoretical predictions. All these results were presented at the Flavor Physics and CP Violation (FPCP) 2008 Conference in Taipei, Taiwan, and will soon be included in a paper to be submitted to Physical Review D. For the time-dependent analysis, we have reconstructed 1820 ± 48 flavor-tagged B⁰ → η′K⁰ events, using the final BABAR sample of 467.4 × 10⁶ B B̄ pairs. We use these events to measure the time-dependent asymmetry parameters S and C. We find S = 0.59 ± 0.08 ± 0.02 and C = -0.06 ± 0.06 ± 0.02. A non-zero value of C would represent a directly CP-non-conserving component in B⁰ → η′K⁰, while S would equal sin2β measured in B⁰ → J/ΨK⁰s [108], a mixing-decay interference effect, provided the decay is dominated by amplitudes with a single weak phase. The new measured value of S can be considered in agreement with Standard Model expectations within the experimental and theoretical uncertainties. The inconsistency of our result for S with CP conservation (S = 0) has a significance of 7.1 standard deviations (statistical and systematic uncertainties included). Our result for the direct-CP-violation parameter C is 0.9 standard deviations from zero (statistical and systematic uncertainties included). Our results are in agreement with the previous ones [18]. Although the statistics are only 20% larger than in the previous measurement, we improved the error on S by 20% and the error on C by 14%. This is the smallest error ever achieved, by either BABAR or Belle, in a measurement of time-dependent CP-violation parameters in a b → s transition.
Nour-Eldein, Hebatallah
2016-01-01
With limited statistical knowledge of most physicians it is not uncommon to find statistical errors in research articles. To determine the statistical methods and to assess the statistical errors in family medicine (FM) research articles that were published between 2010 and 2014. This was a cross-sectional study. All 66 FM research articles that were published over 5 years by FM authors with affiliation to Suez Canal University were screened by the researcher between May and August 2015. Types and frequencies of statistical methods were reviewed in all 66 FM articles. All 60 articles with identified inferential statistics were examined for statistical errors and deficiencies. A comprehensive 58-item checklist based on statistical guidelines was used to evaluate the statistical quality of FM articles. Inferential methods were recorded in 62/66 (93.9%) of FM articles. Advanced analyses were used in 29/66 (43.9%). Contingency tables 38/66 (57.6%), regression (logistic, linear) 26/66 (39.4%), and t-test 17/66 (25.8%) were the most commonly used inferential tests. Within 60 FM articles with identified inferential statistics, no prior sample size 19/60 (31.7%), application of wrong statistical tests 17/60 (28.3%), incomplete documentation of statistics 59/60 (98.3%), reporting P value without test statistics 32/60 (53.3%), no reporting confidence interval with effect size measures 12/60 (20.0%), use of mean (standard deviation) to describe ordinal/nonnormal data 8/60 (13.3%), and errors related to interpretation were mainly for conclusions without support by the study data 5/60 (8.3%). Inferential statistics were used in the majority of FM articles. Data analysis and reporting statistics are areas for improvement in FM research articles.
Nour-Eldein, Hebatallah
2016-01-01
Background: With limited statistical knowledge of most physicians it is not uncommon to find statistical errors in research articles. Objectives: To determine the statistical methods and to assess the statistical errors in family medicine (FM) research articles that were published between 2010 and 2014. Methods: This was a cross-sectional study. All 66 FM research articles that were published over 5 years by FM authors with affiliation to Suez Canal University were screened by the researcher between May and August 2015. Types and frequencies of statistical methods were reviewed in all 66 FM articles. All 60 articles with identified inferential statistics were examined for statistical errors and deficiencies. A comprehensive 58-item checklist based on statistical guidelines was used to evaluate the statistical quality of FM articles. Results: Inferential methods were recorded in 62/66 (93.9%) of FM articles. Advanced analyses were used in 29/66 (43.9%). Contingency tables 38/66 (57.6%), regression (logistic, linear) 26/66 (39.4%), and t-test 17/66 (25.8%) were the most commonly used inferential tests. Within 60 FM articles with identified inferential statistics, no prior sample size 19/60 (31.7%), application of wrong statistical tests 17/60 (28.3%), incomplete documentation of statistics 59/60 (98.3%), reporting P value without test statistics 32/60 (53.3%), no reporting confidence interval with effect size measures 12/60 (20.0%), use of mean (standard deviation) to describe ordinal/nonnormal data 8/60 (13.3%), and errors related to interpretation were mainly for conclusions without support by the study data 5/60 (8.3%). Conclusion: Inferential statistics were used in the majority of FM articles. Data analysis and reporting statistics are areas for improvement in FM research articles. PMID:27453839
Colegrave, Nick
2017-01-01
A common approach to the analysis of experimental data across much of the biological sciences is test-qualified pooling. Here non-significant terms are dropped from a statistical model, effectively pooling the variation associated with each removed term with the error term used to test hypotheses (or estimate effect sizes). This pooling is only carried out if statistical testing on the basis of applying that data to a previous more complicated model provides motivation for this model simplification; hence the pooling is test-qualified. In pooling, the researcher increases the degrees of freedom of the error term with the aim of increasing statistical power to test their hypotheses of interest. Despite this approach being widely adopted and explicitly recommended by some of the most widely cited statistical textbooks aimed at biologists, here we argue that (except in highly specialized circumstances that we can identify) the hoped-for improvement in statistical power will be small or non-existent, and there is likely to be much reduced reliability of the statistical procedures through deviation of type I error rates from nominal levels. We thus call for greatly reduced use of test-qualified pooling across experimental biology, more careful justification of any use that continues, and a different philosophy for initial selection of statistical models in the light of this change in procedure. PMID:28330912
On Statistical Approaches for Demonstrating Analytical Similarity in the Presence of Correlation.
Yang, Harry; Novick, Steven; Burdick, Richard K
Analytical similarity is the foundation for demonstration of biosimilarity between a proposed product and a reference product. For this assessment, currently the U.S. Food and Drug Administration (FDA) recommends a tiered system in which quality attributes are categorized into three tiers commensurate with their risk and approaches of varying statistical rigor are subsequently used for the three-tier quality attributes. Key to the analyses of Tiers 1 and 2 quality attributes is the establishment of the equivalence acceptance criterion and quality range. For particular licensure applications, the FDA has provided advice on statistical methods for demonstration of analytical similarity. For example, for Tier 1 assessment, an equivalence test can be used based on an equivalence margin of 1.5σ_R, where σ_R is the reference product variability estimated by the sample standard deviation S_R from a sample of reference lots. The quality range for demonstrating Tier 2 analytical similarity is of the form X̄_R ± K × σ_R, where the constant K is appropriately justified. To demonstrate Tier 2 analytical similarity, a large percentage (e.g., 90%) of test product must fall in the quality range. In this paper, through both theoretical derivations and simulations, we show that when the reference drug product lots are correlated, the sample standard deviation S_R underestimates the true reference product variability σ_R. As a result, substituting S_R for σ_R in the Tier 1 equivalence acceptance criterion and the Tier 2 quality range inappropriately reduces the statistical power and the ability to declare analytical similarity. Also explored is the impact of correlation among drug product lots on Type I error rate and power. Three methods based on generalized pivotal quantities are introduced, and their performance is compared against a two one-sided tests (TOST) approach. Finally, strategies to mitigate the risk of correlation among the reference product lots are discussed. A biosimilar is a generic version of the original biological drug product. A key component of a biosimilar development is the demonstration of analytical similarity between the biosimilar and the reference product. Such demonstration relies on application of statistical methods to establish a similarity margin and appropriate test for equivalence between the two products. This paper discusses statistical issues with demonstration of analytical similarity and provides alternate approaches to potentially mitigate these problems. © PDA, Inc. 2016.
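A minimal sketch of the Tier 1 equivalence test and Tier 2 quality range outlined above is given below; the lot values, the pooled-style degrees of freedom, and the choice K = 3 are assumptions made only for illustration.

```python
import numpy as np
from scipy import stats

ref  = np.array([99.8, 101.2, 100.5, 98.9, 100.1, 101.0, 99.5, 100.7])  # reference lots (assumed)
test = np.array([100.9, 101.5, 100.2, 101.8, 100.6, 101.1])             # test lots (assumed)

s_r = ref.std(ddof=1)
margin = 1.5 * s_r                       # Tier 1 equivalence margin of 1.5*S_R

# Two one-sided tests (TOST) on the difference in means.
diff = test.mean() - ref.mean()
se = np.sqrt(test.var(ddof=1) / test.size + ref.var(ddof=1) / ref.size)
df = test.size + ref.size - 2            # simple pooled-style df (assumption)
p_lower = stats.t.sf((diff + margin) / se, df)   # H0: diff <= -margin
p_upper = stats.t.cdf((diff - margin) / se, df)  # H0: diff >= +margin
equivalent = max(p_lower, p_upper) < 0.05

# Tier 2 quality range X_bar_R +/- K*S_R and the fraction of test lots inside it.
K = 3.0
low, high = ref.mean() - K * s_r, ref.mean() + K * s_r
frac_inside = np.mean((test >= low) & (test <= high))

print(f"Tier 1 equivalent: {equivalent}; Tier 2 fraction in range: {frac_inside:.0%}")
```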
Is a data set distributed as a power law? A test, with application to gamma-ray burst brightnesses
NASA Technical Reports Server (NTRS)
Wijers, Ralph A. M. J.; Lubin, Lori M.
1994-01-01
We present a method to determine whether an observed sample of data is drawn from a parent distribution that is a pure power law. The method starts from a class of statistics which have zero expectation value under the null hypothesis, H_0, that the distribution is a pure power law: F(x) ∝ x^(-α). We study one simple member of the class, named the `bending statistic' B, in detail. It is most effective for detecting a type of deviation from a power law where the power-law slope varies slowly and monotonically as a function of x. Our estimator of B has a distribution under H_0 that depends only on the size of the sample, not on the parameters of the parent population, and is approximated well by a normal distribution even for modest sample sizes. The bending statistic can therefore be used to test whether a set of numbers is drawn from any power-law parent population. Since many measurable quantities in astrophysics have distributions that are approximately power laws, and since deviations from the ideal power law often provide interesting information about the object of study (e.g., a `bend' or `break' in a luminosity function, a line in an X- or gamma-ray spectrum), we believe that a test of this type will be useful in many different contexts. In the present paper, we apply our test to various subsamples of gamma-ray burst brightnesses from the first-year Burst and Transient Source Experiment (BATSE) catalog and show that we can only marginally detect the expected steepening of the log N(>C_max) - log C_max distribution.
Permutation tests for goodness-of-fit testing of mathematical models to experimental data.
Fişek, M Hamit; Barlas, Zeynep
2013-03-01
This paper presents statistical procedures for improving the goodness-of-fit testing of theoretical models to data obtained from laboratory experiments. We use an experimental study in the expectation states research tradition which has been carried out in the "standardized experimental situation" associated with the program to illustrate the application of our procedures. We briefly review the expectation states research program and the fundamentals of resampling statistics as we develop our procedures in the resampling context. The first procedure we develop is a modification of the chi-square test which has been the primary statistical tool for assessing goodness of fit in the EST research program, but has problems associated with its use. We discuss these problems and suggest a procedure to overcome them. The second procedure we present, the "Average Absolute Deviation" test, is a new test and is proposed as an alternative to the chi square test, as being simpler and more informative. The third and fourth procedures are permutation versions of Jonckheere's test for ordered alternatives, and Kendall's tau(b), a rank order correlation coefficient. The fifth procedure is a new rank order goodness-of-fit test, which we call the "Deviation from Ideal Ranking" index, which we believe may be more useful than other rank order tests for assessing goodness-of-fit of models to experimental data. The application of these procedures to the sample data is illustrated in detail. We then present another laboratory study from an experimental paradigm different from the expectation states paradigm - the "network exchange" paradigm, and describe how our procedures may be applied to this data set. Copyright © 2012 Elsevier Inc. All rights reserved.
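A sketch in the spirit of the "Average Absolute Deviation" test is given below; the per-condition predictions, sample sizes, and the parametric resampling scheme are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
predicted = np.array([0.65, 0.70, 0.80, 0.85])   # model probabilities per condition (assumed)
n_trials  = np.array([40, 40, 40, 40])           # observations per condition (assumed)
observed  = np.array([23, 30, 35, 36]) / n_trials

def aad(obs):
    """Average absolute deviation of observed from predicted proportions."""
    return np.mean(np.abs(obs - predicted))

stat_obs = aad(observed)

# Resample under the model: binomial counts at the predicted probabilities.
n_resamples = 10_000
sims = rng.binomial(n_trials, predicted, size=(n_resamples, predicted.size)) / n_trials
stat_null = np.mean(np.abs(sims - predicted), axis=1)

p_value = np.mean(stat_null >= stat_obs)
print(f"AAD = {stat_obs:.3f}, resampling p = {p_value:.3f}")
```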
Nunes, J M; Riccio, M E; Buhler, S; Di, D; Currat, M; Ries, F; Almada, A J; Benhamamouch, S; Benitez, O; Canossi, A; Fadhlaoui-Zid, K; Fischer, G; Kervaire, B; Loiseau, P; de Oliveira, D C M; Papasteriades, C; Piancatelli, D; Rahal, M; Richard, L; Romero, M; Rousseau, J; Spiroski, M; Sulcebe, G; Middleton, D; Tiercy, J-M; Sanchez-Mazas, A
2010-07-01
During the 15th International Histocompatibility and Immunogenetics Workshop (IHIWS), 14 human leukocyte antigen (HLA) laboratories participated in the Analysis of HLA Population Data (AHPD) project where 18 new population samples were analyzed statistically and compared with data available from previous workshops. To that aim, an original methodology was developed and used (i) to estimate frequencies by taking into account ambiguous genotypic data, (ii) to test for Hardy-Weinberg equilibrium (HWE) by using a nested likelihood ratio test involving a parameter accounting for HWE deviations, (iii) to test for selective neutrality by using a resampling algorithm, and (iv) to provide explicit graphical representations including allele frequencies and basic statistics for each series of data. A total of 66 data series (1-7 loci per population) were analyzed with this standard approach. Frequency estimates were compliant with HWE in all but one population of mixed stem cell donors. Neutrality testing confirmed the observation of heterozygote excess at all HLA loci, although a significant deviation was established in only a few cases. Population comparisons showed that HLA genetic patterns were mostly shaped by geographic and/or linguistic differentiations in Africa and Europe, but not in America where both genetic drift in isolated populations and gene flow in admixed populations led to a more complex genetic structure. Overall, a fruitful collaboration between HLA typing laboratories and population geneticists allowed finding useful solutions to the problem of estimating gene frequencies and testing basic population diversity statistics on highly complex HLA data (high numbers of alleles and ambiguities), with promising applications in either anthropological, epidemiological, or transplantation studies.
An Optical Flow-Based Full Reference Video Quality Assessment Algorithm.
K, Manasa; Channappayya, Sumohana S
2016-06-01
We present a simple yet effective optical flow-based full-reference video quality assessment (FR-VQA) algorithm for assessing the perceptual quality of natural videos. Our algorithm is based on the premise that local optical flow statistics are affected by distortions and the deviation from pristine flow statistics is proportional to the amount of distortion. We characterize the local flow statistics using the mean, the standard deviation, the coefficient of variation (CV), and the minimum eigenvalue (λ_min) of the local flow patches. Temporal distortion is estimated as the change in the CV of the distorted flow with respect to the reference flow, and the correlation between λ_min of the reference and of the distorted patches. We rely on the robust multi-scale structural similarity index for spatial quality estimation. The computed temporal and spatial distortions are then pooled using a perceptually motivated heuristic to generate a spatio-temporal quality score. The proposed method is shown to be competitive with the state-of-the-art when evaluated on the LIVE SD database, the EPFL Polimi SD database, and the LIVE Mobile HD database. The distortions considered in these databases include those due to compression, packet-loss, wireless channel errors, and rate-adaptation. Our algorithm is flexible enough to allow for any robust FR spatial distortion metric for spatial distortion estimation. In addition, the proposed method is not only parameter-free but also independent of the choice of the optical flow algorithm. Finally, we show that the replacement of the optical flow vectors in our proposed method with the much coarser block motion vectors also results in an acceptable FR-VQA algorithm. Our algorithm is called the flow similarity index.
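The local flow statistics named above can be computed per patch as in this sketch; interpreting λ_min as the smaller eigenvalue of the 2×2 covariance of the (u, v) components within a patch, and the random stand-in flow field, are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
H, W, P = 64, 64, 16                        # frame size and patch size (assumed)
flow = rng.normal(size=(H, W, 2))           # stand-in for a dense optical flow field

def patch_stats(patch):                     # patch has shape (P, P, 2)
    mag = np.linalg.norm(patch, axis=-1)    # flow magnitude per pixel
    mean, sd = mag.mean(), mag.std()
    cv = sd / (mean + 1e-12)                # coefficient of variation
    uv = patch.reshape(-1, 2)
    lam_min = np.linalg.eigvalsh(np.cov(uv.T))[0]   # smaller eigenvalue
    return mean, sd, cv, lam_min

stats_grid = [patch_stats(flow[i:i + P, j:j + P])
              for i in range(0, H, P) for j in range(0, W, P)]
means, sds, cvs, lam_mins = map(np.array, zip(*stats_grid))
print("median CV:", np.median(cvs), " median lambda_min:", np.median(lam_mins))
```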
Time density curve analysis for C-arm FDCT PBV imaging.
Kamran, Mudassar; Byrne, James V
2016-04-01
Parenchymal blood volume (PBV) estimation using C-arm flat detector computed tomography (FDCT) assumes a steady-state contrast concentration in cerebral vasculature for the scan duration. Using time density curve (TDC) analysis, we explored if the steady-state assumption is met for C-arm CT PBV scans, and how consistent the contrast-material dynamics in cerebral vasculature are across patients. Thirty C-arm FDCT datasets of 26 patients with aneurysmal-SAH, acquired as part of a prospective study comparing C-arm CT PBV with MR-PWI, were analysed. TDCs were extracted from the 2D rotational projections. Goodness-of-fit of TDCs to a steady-state horizontal-line-model and the statistical similarity among the individual TDCs were tested. Influence of the differences in TDC characteristics on the agreement of resulting PBV measurements with MR-CBV was calculated. Despite identical scan parameters and contrast-injection-protocol, the individual TDCs were statistically non-identical (p < 0.01). Using Dunn's multiple comparisons test, of the total 435 individual comparisons among the 30 TDCs, 330 comparisons (62%) reached statistical significance for difference. All TDCs deviated significantly (p < 0.01) from the steady-state horizontal-line-model. PBV values of those datasets for which the TDCs showed largest deviations from the steady-state model demonstrated poor agreement and correlation with MR-CBV, compared with the PBV values of those datasets for which the TDCs were closer to steady-state. For clinical C-arm CT PBV examinations, the administered contrast material does not reach the assumed 'ideal steady-state' for the duration of scan. Using a prolonged injection protocol, the degree to which the TDCs approximate the ideal steady-state influences the agreement of resulting PBV measurements with MR-CBV. © The Author(s) 2016.
Time density curve analysis for C-arm FDCT PBV imaging
Byrne, James V
2016-01-01
Introduction Parenchymal blood volume (PBV) estimation using C-arm flat detector computed tomography (FDCT) assumes a steady-state contrast concentration in cerebral vasculature for the scan duration. Using time density curve (TDC) analysis, we explored if the steady-state assumption is met for C-arm CT PBV scans, and how consistent the contrast-material dynamics in cerebral vasculature are across patients. Methods Thirty C-arm FDCT datasets of 26 patients with aneurysmal-SAH, acquired as part of a prospective study comparing C-arm CT PBV with MR-PWI, were analysed. TDCs were extracted from the 2D rotational projections. Goodness-of-fit of TDCs to a steady-state horizontal-line-model and the statistical similarity among the individual TDCs were tested. Influence of the differences in TDC characteristics on the agreement of resulting PBV measurements with MR-CBV was calculated. Results Despite identical scan parameters and contrast-injection-protocol, the individual TDCs were statistically non-identical (p < 0.01). Using Dunn's multiple comparisons test, of the total 435 individual comparisons among the 30 TDCs, 330 comparisons (62%) reached statistical significance for difference. All TDCs deviated significantly (p < 0.01) from the steady-state horizontal-line-model. PBV values of those datasets for which the TDCs showed largest deviations from the steady-state model demonstrated poor agreement and correlation with MR-CBV, compared with the PBV values of those datasets for which the TDCs were closer to steady-state. Conclusion For clinical C-arm CT PBV examinations, the administered contrast material does not reach the assumed ‘ideal steady-state’ for the duration of scan. Using a prolonged injection protocol, the degree to which the TDCs approximate the ideal steady-state influences the agreement of resulting PBV measurements with MR-CBV. PMID:26769736
The impact of primary open-angle glaucoma: Quality of life in Indian patients
Kumar, Suresh; Ichhpujani, Parul; Singh, Roopali; Thakur, Sahil; Sharma, Madhu; Nagpal, Nimisha
2018-01-01
Purpose: Glaucoma significantly affects the quality of life (QoL) of a patient. Despite the huge number of glaucoma patients in India, not many QoL studies have been carried out. The purpose of the present study was to evaluate the QoL in Indian patients with varying severity of glaucoma. Methods: This was a hospital-based, cross-sectional, analytical study of 180 patients. The QoL was assessed using orally administered QoL instruments comprising two glaucoma-specific instruments, the Glaucoma Quality of Life-15 (GQL-15) and the Viswanathan 10 instrument, and one vision-specific instrument, the National Eye Institute Visual Function Questionnaire-25 (NEIVFQ25). Results: Using the NEIVFQ25, the difference between mean QoL scores among cases (88.34 ± 4.53) and controls (95.32 ± 5.76) was statistically significant. In the GQL-15, there was a statistically significant difference between the mean scores of cases (22.58 ± 5.23) and controls (16.52 ± 1.24). The difference in mean scores with the Viswanathan 10 instrument between cases (7.92 ± 0.54) and controls (9.475 ± 0.505) was also statistically significant. QoL scores also showed moderate correlation with mean deviation, pattern standard deviation, and vertical cup-disc ratio. Conclusion: In our study, all three instruments showed a decrease in QoL in glaucoma patients compared to controls. With increasing severity of glaucoma, a corresponding decrease in QoL was observed. It is important for ophthalmologists to understand the QoL of glaucoma patients so as to have a more holistic approach to patients and for effective delivery of treatment. PMID:29480254
Hong, Sun Suk; Lee, Jong-Woong; Seo, Jeong Beom; Jung, Jae-Eun; Choi, Jiwon; Kweon, Dae Cheol
2013-12-01
The purpose of this research is to determine the adaptive statistical iterative reconstruction (ASIR) level that enables optimal image quality and dose reduction in a chest computed tomography (CT) protocol with ASIR. A chest phantom was scanned at ASIR levels of 0-50 %, and the noise power spectrum (NPS), signal and noise, and the degree of distortion expressed as the peak signal-to-noise ratio (PSNR) and the root-mean-square error (RMSE) were measured. In addition, the objectivity of the experiment was verified using the American College of Radiology (ACR) phantom. Moreover, on a qualitative basis, the resolution, latitude and degree of distortion of five lesions in the chest phantom were evaluated and the results compiled statistically. The NPS value decreased as the frequency increased. The lowest noise and deviation occurred at the 20 % ASIR level (mean 126.15 ± 22.21). In the distortion analysis, the signal-to-noise ratio and PSNR were highest at the 20 % ASIR level (31.0 and 41.52, respectively), whereas the maximum absolute error and RMSE were lowest (11.2 and 16, respectively). In the ACR phantom study, all ASIR levels were within the acceptable allowance of the guidelines. The 20 % ASIR level also performed best in the qualitative evaluation of the five chest-phantom lesions, with a resolution score of 4.3, latitude of 3.47 and degree of distortion of 4.25. The 20 % ASIR level thus proved best in all experiments: noise, distortion evaluation using ImageJ, and qualitative evaluation of five lesions of a chest phantom. Therefore, optimal image quality as well as a reduced radiation dose can be achieved when a 20 % ASIR level is applied in thoracic CT.
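The distortion metrics named above, RMSE and PSNR, are standard image-comparison quantities; the minimal sketch below shows how they are typically computed, assuming `reference` is the baseline image and `test` is the ASIR reconstruction (an illustration of the metrics, not the authors' code).

```python
import numpy as np

def rmse_psnr(test, reference, peak=None):
    """Root-mean-square error and peak signal-to-noise ratio between two images."""
    test = np.asarray(test, dtype=float)
    reference = np.asarray(reference, dtype=float)
    rmse = np.sqrt(np.mean((test - reference) ** 2))
    peak = reference.max() if peak is None else peak
    psnr = 20.0 * np.log10(peak / rmse)  # in dB; larger is better
    return rmse, psnr
```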
Effect of Variable Spatial Scales on USLE-GIS Computations
NASA Astrophysics Data System (ADS)
Patil, R. J.; Sharma, S. K.
2017-12-01
Use of an appropriate spatial scale is very important in Universal Soil Loss Equation (USLE)-based spatially distributed soil erosion modelling. This study aimed at assessing annual rates of soil erosion at different spatial scales/grid sizes and analysing how changes in spatial scale affect USLE-GIS computations, using simulation and statistical variability measures. Efforts have been made in this study to recommend an optimum spatial scale for further USLE-GIS computations for management and planning in the study area. The present research study was conducted in the Shakkar River watershed, situated in the Narsinghpur and Chhindwara districts of Madhya Pradesh, India. Remote sensing and GIS techniques were integrated with the Universal Soil Loss Equation (USLE) to predict the spatial distribution of soil erosion in the study area at four different spatial scales, viz. 30 m, 50 m, 100 m, and 200 m. Rainfall data, a soil map, a digital elevation model (DEM), an executable C++ program, and a satellite image of the area were used for preparation of the thematic maps for the various USLE factors. Annual rates of soil erosion were estimated for 15 years (1992 to 2006) at the four grid sizes. The statistical analysis of the four estimated datasets showed that the sediment loss dataset at the 30 m spatial scale has the minimum standard deviation (2.16), variance (4.68), and percent deviation from observed values (2.68-18.91 %), and the highest coefficient of determination (R2 = 0.874) among all four datasets. Thus, it is recommended to adopt this spatial scale for USLE-GIS computations in the study area due to its minimum statistical variability and better agreement with the observed sediment loss data. This study also indicates large scope for use of finer spatial scales in spatially distributed soil erosion modelling.
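The per-scale comparison statistics listed above (standard deviation, variance, percent deviation from observed values, coefficient of determination) can be reproduced from paired predicted/observed annual sediment loss values along the following lines; the array names and the residual-based R2 definition are assumptions for illustration, not the authors' code.

```python
import numpy as np

def scale_statistics(predicted, observed):
    """Summary statistics for USLE estimates at one grid size versus observed sediment loss."""
    predicted = np.asarray(predicted, float)
    observed = np.asarray(observed, float)
    sd = predicted.std(ddof=1)                      # standard deviation of the estimates
    var = predicted.var(ddof=1)                     # variance of the estimates
    pct_dev = 100.0 * np.abs(predicted - observed) / observed
    ss_res = np.sum((observed - predicted) ** 2)    # residual sum of squares
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                      # coefficient of determination
    return sd, var, (pct_dev.min(), pct_dev.max()), r2
```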
NASA Astrophysics Data System (ADS)
Kis, A.; Lemperger, I.; Wesztergom, V.; Menvielle, M.; Szalai, S.; Novák, A.; Hada, T.; Matsukiyo, S.; Lethy, A. M.
2016-12-01
The magnetotelluric method is widely applied for the investigation of subsurface structures by imaging the spatial distribution of electric conductivity. The method is based on the experimental determination of the surface electromagnetic impedance tensor (Z) from surface geomagnetic and telluric registrations in two perpendicular orientations. In practical exploration, accurate estimation of Z necessitates the application of robust statistical methods for two reasons: 1) the geomagnetic and telluric time series are contaminated by man-made noise components, and 2) the non-homogeneous behavior of ionospheric current systems in the period range of interest (ELF-ULF and longer periods) results in systematic deviation of the impedance of individual time windows. Robust statistics mitigate both effects on Z for the purpose of subsurface investigations. However, accurate analysis of the long-term temporal variation of the first and second statistical moments of Z may provide valuable information about the characteristics of the ionospheric source current systems. Temporal variation of the extent, spatial variability and orientation of the ionospheric source currents has specific effects on the surface impedance tensor. The twenty-year-long geomagnetic and telluric recordings of the Nagycenk Geophysical Observatory provide a unique opportunity to reconstruct the so-called magnetotelluric source effect and obtain information about the spatial and temporal behavior of ionospheric source currents at mid-latitudes. A detailed investigation of the time series of the surface electromagnetic impedance tensor has been carried out in different frequency classes of the ULF range. The presentation aims to provide a brief review of our results related to long-term periodic modulations, up to the solar-cycle scale, and to eventual deviations of the electromagnetic impedance and thus of the reconstructed equivalent ionospheric source effects.
Laser acupuncture versus reflexology therapy in elderly with rheumatoid arthritis.
Adly, Afnan Sedky; Adly, Aya Sedky; Adly, Mahmoud Sedky; Serry, Zahra M H
2017-07-01
The purposes of this study are to determine and compare the efficacy of laser acupuncture versus reflexology in elderly patients with rheumatoid arthritis. Thirty elderly patients with rheumatoid arthritis aged between 60 and 70 years were classified into two groups of 15 patients each. Group A received laser acupuncture therapy (904 nm, beam area 1 cm², power 100 mW, power density 100 mW/cm², energy dose 4 J, energy density 4 J/cm², irradiation time 40 s, and frequency 100,000 Hz). The acupuncture points exposed to laser radiation were LR3, ST25, ST36, SI3, SI4, LI4, LI11, SP6, SP9, GB25, GB34, and HT7. Group B received reflexology therapy; both groups were offered 12 sessions over 4 weeks. The changes in RAQoL, HAQ, IL-6, MDA, ATP, and ROM at the wrist and ankle joints were measured at the beginning and end of treatment. There was a significant decrease in RAQoL, HAQ, IL-6, and MDA pre/post-treatment for both groups (p < 0.05); a significant increase in ATP pre/post-treatment for both groups (p < 0.05); a significant increase in ankle dorsi-flexion, plantar-flexion, wrist flexion, extension, and ulnar deviation ROM pre/post-treatment in group A (p < 0.05); and a significant increase in ankle dorsi-flexion and plantar-flexion ROM pre/post-treatment in group B (p < 0.05). Comparison between the groups showed a statistically significant decrease in MDA and a statistically significant increase in ATP in group A relative to group B. The percent change in MDA was a 41.82% decrease in group A versus a 21.68% decrease in group B; ATP increased by 226.97% in group A versus 67.02% in group B. Moreover, there was a statistically significant increase in ankle dorsi-flexion, ankle plantar-flexion, wrist flexion, wrist extension, and radial deviation in group A relative to group B. Laser therapy is associated with significantly greater improvement in MDA and ATP than reflexology. In addition, it is associated with significantly greater improvement in ankle dorsi-flexion, ankle plantar-flexion, wrist flexion, wrist extension, and radial deviation than reflexology in elderly patients with rheumatoid arthritis.
Statistical core design methodology using the VIPRE thermal-hydraulics code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lloyd, M.W.; Feltus, M.A.
1994-12-31
This Penn State Statistical Core Design Methodology (PSSCDM) is unique because it includes not only the EPRI correlation/test data standard deviation but also the computational uncertainty for the VIPRE code model and the new composite box design correlation. The resultant PSSCDM equation mimics the EPRI DNBR correlation results well, with an uncertainty of 0.0389. The combined uncertainty yields a new DNBR limit of 1.18 that will provide more plant operational flexibility. This methodology and its associated correlation and unique coefficients are for a very particular VIPRE model; thus, the correlation will be specifically linked with the lumped channel and subchannel layout. The results of this research and methodology, however, can be applied to plant-specific VIPRE models.
Design, analysis, and interpretation of field quality-control data for water-sampling projects
Mueller, David K.; Schertz, Terry L.; Martin, Jeffrey D.; Sandstrom, Mark W.
2015-01-01
The report provides extensive information about statistical methods used to analyze quality-control data in order to estimate potential bias and variability in environmental data. These methods include construction of confidence intervals on various statistical measures, such as the mean, percentiles and percentages, and standard deviation. The methods are used to compare quality-control results with the larger set of environmental data in order to determine whether the effects of bias and variability might interfere with interpretation of these data. Examples from published reports are presented to illustrate how the methods are applied, how bias and variability are reported, and how the interpretation of environmental data can be qualified based on the quality-control analysis.
Robust non-Gaussian statistics and long-range correlation of total ozone
NASA Astrophysics Data System (ADS)
Toumi, R.; Syroka, J.; Barnes, C.; Lewis, P.
2001-01-01
Three long-term total ozone time series at Camborne, Lerwick and Arosa are examined for their statistical properties. Non-Gaussian behaviour is seen for all locations. There are large interannual fluctuations in the higher moments of the probability distribution. However, only the mean for all stations and the summer standard deviation at Lerwick show significant trends. This suggests that there has been no long-term change in the stratospheric circulation, but there are decadal variations. The time series can also be characterised as scale invariant, with a Hurst exponent of about 0.8 for all three sites. The Arosa time series was found to be weakly intermittent, in agreement with the non-Gaussian characteristics of the data set.
A neurophysiological explanation for biases in visual localization.
Moreland, James C; Boynton, Geoffrey M
2017-02-01
Observers show small but systematic deviations from equal weighting of all elements when asked to localize the center of an array of dots. Counter-intuitively, with small numbers of dots drawn from a Gaussian distribution, this bias results in subjects overweighting the influence of outlier dots - inconsistent with traditional statistical estimators of central tendency. Here we show that this apparent statistical anomaly can be explained by the observation that outlier dots also lie in regions of lower dot density. Using a standard model of V1 processing, which includes spatial integration followed by a compressive static nonlinearity, we can successfully predict the finding that dots in less dense regions of an array have a relatively greater influence on the perceived center.
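The mechanism described above (spatial integration followed by a compressive static nonlinearity) can be illustrated with a minimal sketch: blur the dot array, compress the resulting activity map, and take its centroid as the predicted perceived center. The grid resolution, blur width and compression exponent are assumptions for illustration, not the authors' fitted parameters.

```python
import numpy as np

def perceived_center(dot_x, dot_y, sigma=1.0, exponent=0.5):
    """Centroid of a compressed, spatially integrated dot array (illustrative V1-style model)."""
    xs = np.linspace(min(dot_x) - 3 * sigma, max(dot_x) + 3 * sigma, 200)
    ys = np.linspace(min(dot_y) - 3 * sigma, max(dot_y) + 3 * sigma, 200)
    X, Y = np.meshgrid(xs, ys)
    density = np.zeros_like(X)
    for x0, y0 in zip(dot_x, dot_y):                 # spatial integration (Gaussian blur)
        density += np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * sigma ** 2))
    response = density ** exponent                   # compressive static nonlinearity
    cx = (X * response).sum() / response.sum()       # dots in sparse regions gain relative weight
    cy = (Y * response).sum() / response.sum()
    return cx, cy
```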
Carossa, S; Catapano, S; Previgliano, V; Preti, G
1993-05-01
The aim of this research was to measure the incidence of craniomandibular disorders in a group of patients with functional-type cervical alterations. The group consisted of 50 patients undergoing treatment for disorders of the cervical sectors of the spine. Each patient underwent a medical examination to investigate the presence of CMD signs or symptoms. Statistical analysis of the data showed a higher percentage of cases with muscular and joint pain, limited mouth opening, deviation and deflection than found in the general population. This demonstrates an overloading of the entire masticatory apparatus. Joint noise was less frequent, probably due to the exclusion from our sample of patients with arthrosis-type degenerative pathology.
Petsch, Harold E.
1979-01-01
Statistical summaries of daily streamflow data for 246 stations east of the Continental Divide in Colorado and adjacent States are presented in this report. Duration tables, high-flow sequence tables, and low-flow sequence tables provide information about daily mean discharge. The mean, variance, standard deviation, skewness, and coefficient of variation are provided for monthly and annual flows. Percentages of average flow are provided for monthly flows and first-order serial-correlation coefficients are provided for annual flows. The text explains the nature and derivation of the data and illustrates applications of the tabulated information by examples. The data may be used by agencies and individuals engaged in water studies. (USGS)
Efficient estimation of Pareto model: Some modified percentile estimators.
Bhatti, Sajjad Haider; Hussain, Shahzad; Ahmad, Tanvir; Aslam, Muhammad; Aftab, Muhammad; Raza, Muhammad Ali
2018-01-01
The article proposes three modified percentile estimators for parameter estimation of the Pareto distribution. These modifications are based on median, geometric mean and expectation of empirical cumulative distribution function of first-order statistic. The proposed modified estimators are compared with traditional percentile estimators through a Monte Carlo simulation for different parameter combinations with varying sample sizes. Performance of different estimators is assessed in terms of total mean square error and total relative deviation. It is determined that modified percentile estimator based on expectation of empirical cumulative distribution function of first-order statistic provides efficient and precise parameter estimates compared to other estimators considered. The simulation results were further confirmed using two real life examples where maximum likelihood and moment estimators were also considered.
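For context, the classical (unmodified) two-percentile estimator for the Pareto distribution can be sketched as below; the percentile choices are assumptions for illustration, and the paper's modified estimators (median, geometric mean and first-order-statistic based) are not reproduced here.

```python
import numpy as np

def pareto_percentile_estimate(sample, p1=0.25, p2=0.75):
    """Classical two-percentile estimator for Pareto(shape alpha, scale x_m), F(x) = 1 - (x_m/x)**alpha."""
    x1, x2 = np.quantile(sample, [p1, p2])
    alpha = np.log((1 - p1) / (1 - p2)) / np.log(x2 / x1)   # shape from two sample quantiles
    xm = x1 * (1 - p1) ** (1.0 / alpha)                     # scale recovered from one quantile
    return alpha, xm
```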
A novel measure of ewe efficiency for breeding and benchmarking purposes.
McHugh, Nóirín; Pabiou, Thierry; McDermott, Kevin; Wall, Eamon; Berry, Donagh P
2018-06-04
Ewe efficiency has traditionally been defined as the ratio of litter weight to ewe weight; given the statistical properties of ratio traits, an alternative strategy is proposed in the present study. The concept of using the deviation in performance of an animal from the population norm has grown in popularity as a measure of animal-level efficiency. The objective of the present study was to define novel measures of efficiency for sheep, which considers the combined weight of a litter of lambs relative to the weight of their dam, and vice versa. Two novel traits, representing the deviation in total litter weight at 40 d (DEV40L) or weaning (DEVweanL), were calculated as the residuals of a statistical model, with litter weight as the dependent variable and with the fixed effects of litter rearing size, contemporary group, and ewe weight. The deviation in ewe weight at 40-d postlambing (DEV40E) or weaning (DEVweanE) was derived using a similar approach but with ewe weight and litter weight interchanged as the dependent variable. Variance components for each trait were estimated by first deriving the litter or ewe weight deviation phenotype and subsequently estimating the variance components. The phenotypic SD in DEV40L and DEVweanL was 8.46 and 15.37 kg, respectively; the mean litter weight at 40 d and weaning was 30.97 and 47.68 kg, respectively. The genetic SD and heritability for DEV40L was 2.65 kg and 0.12, respectively. For DEVweanL, the genetic SD and heritability was 4.94 kg and 0.13, respectively. The average ewe weight at 40-d postlambing and at weaning was 66.43 and 66.87 kg, respectively. The genetic SD and heritability for DEV40E was 4.33 kg and 0.24, respectively. The heritability estimated for DEVweanE was 0.31. The traits derived in the present study may be useful not only for phenotypic benchmarking of ewes within flock on performance but also for benchmarking flocks against each other; furthermore, the extent of genetic variability in all traits, coupled with the fact that the data required to generate these novel phenotypes are usually readily available, signals huge potential within sheep breeding programs.
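The DEV-style traits described above are residuals from a model of litter (or ewe) weight with fixed effects for litter rearing size, contemporary group and the other weight. A minimal ordinary-least-squares analogue is sketched below; the function name and dummy-coding scheme are assumptions, and the paper's actual statistical model is not reproduced.

```python
import numpy as np

def deviation_phenotype(litter_wt, ewe_wt, litter_size, group):
    """Residual litter weight after adjusting for litter size, contemporary group and ewe weight."""
    lw = np.asarray(litter_wt, float)
    cols = [np.ones_like(lw), np.asarray(ewe_wt, float)]
    for factor in (np.asarray(litter_size), np.asarray(group)):
        for level in np.unique(factor)[1:]:          # dummy-code each categorical fixed effect
            cols.append((factor == level).astype(float))
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, lw, rcond=None)
    return lw - X @ beta                              # DEV40L / DEVweanL analogue
```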
Tsallis statistics and neurodegenerative disorders
NASA Astrophysics Data System (ADS)
Iliopoulos, Aggelos C.; Tsolaki, Magdalini; Aifantis, Elias C.
2016-08-01
In this paper, we perform statistical analysis of time series deriving from four neurodegenerative disorders, namely epilepsy, amyotrophic lateral sclerosis (ALS), Parkinson's disease (PD), and Huntington's disease (HD). The time series comprise electroencephalograms (EEGs) of healthy and epileptic states, as well as gait dynamics (in particular stride intervals) of ALS, PD and HD patients. We study data concerning one subject for each neurodegenerative disorder and one healthy control. The analysis is based on Tsallis non-extensive statistical mechanics and in particular on the estimation of the Tsallis q-triplet, namely {qstat, qsen, qrel}. The deviation of the Tsallis q-triplet from unity indicates non-Gaussian statistics and long-range dependencies for all time series considered. In addition, the results reveal the efficiency of Tsallis statistics in capturing differences in brain dynamics between healthy and epileptic states, as well as differences between ALS, PD and HD patients and healthy control subjects. The results indicate that estimates of the Tsallis q-indices could be used as possible biomarkers, along with others, for improving the classification and prediction of epileptic seizures, as well as for studying the complex gait dynamics of various diseases, providing new insights into severity, medication and fall risk, and improving therapeutic interventions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simonen, F.A.; Khaleel, M.A.
This paper describes a statistical evaluation of the through-thickness copper variation for welds in reactor pressure vessels, and reviews the historical basis for the static and arrest fracture toughness (K_Ic and K_Ia) equations used in the VISA-II code. Copper variability in welds is due to fabrication procedures, with copper contents being randomly distributed and variable from one location to another through the thickness of the vessel. The VISA-II procedure of sampling the copper content from a statistical distribution for every 6.35- to 12.7-mm (1/4- to 1/2-in.) layer through the thickness was found to be consistent with the statistical observations. However, the parameters of the VISA-II distribution and its statistical limits required further investigation. Copper contents at a few locations through the thickness were found to exceed the 0.4% upper limit of the VISA-II code. The data also suggest that the mean copper content varies systematically through the thickness. While the assumption of normality is not clearly supported by the available data, a statistical evaluation based on all the available data results in mean and standard deviations within the VISA-II code limits.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fresquez, Philip R.
Field mice are effective indicators of contaminant presence. This paper reports the concentrations of various radionuclides, heavy metals, polychlorinated biphenyls, high explosives, perchlorate, and dioxin/furans in field mice (mostly deer mice) collected from regional background areas in northern New Mexico. These data, represented as the regional statistical reference level (the mean plus three standard deviations = 99% confidence level), are used to compare with data from field mice collected from areas potentially impacted by Laboratory operations, as per the Environmental Surveillance Program at Los Alamos National Laboratory.
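The regional statistical reference level defined above is a simple summary statistic of the background data; a minimal sketch of its computation is given below, assuming a one-dimensional array of background concentrations (illustrative only).

```python
import numpy as np

def regional_statistical_reference_level(background_concentrations):
    """RSRL as defined in the report: background mean plus three standard deviations."""
    c = np.asarray(background_concentrations, float)
    return c.mean() + 3.0 * c.std(ddof=1)
```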
Neilson, Jennifer R.; Lamb, Berton Lee; Swann, Earlene M.; Ratz, Joan; Ponds, Phadrea D.; Liverca, Joyce
2005-01-01
The findings presented in this report represent the basic results derived from the attitude assessment survey conducted in the last quarter of 2004. The findings set forth in this report are the frequency distributions for each question in the survey instrument for all respondents. The only statistics provided are descriptive in character - namely, means and associated standard deviations.
Allen, Robert C; Rutan, Sarah C
2011-10-31
Simulated and experimental data were used to measure the effectiveness of common interpolation techniques during chromatographic alignment of comprehensive two-dimensional liquid chromatography-diode array detector (LC×LC-DAD) data. Interpolation was used to generate a sufficient number of data points in the sampled first chromatographic dimension to allow for alignment of retention times from different injections. Five different interpolation methods, linear interpolation followed by cross correlation, piecewise cubic Hermite interpolating polynomial, cubic spline, Fourier zero-filling, and Gaussian fitting, were investigated. The fully aligned chromatograms, in both the first and second chromatographic dimensions, were analyzed by parallel factor analysis to determine the relative area for each peak in each injection. A calibration curve was generated for the simulated data set. The standard error of prediction and percent relative standard deviation were calculated for the simulated peak for each technique. The Gaussian fitting interpolation technique resulted in the lowest standard error of prediction and average relative standard deviation for the simulated data. However, upon applying the interpolation techniques to the experimental data, most of the interpolation methods were not found to produce statistically different relative peak areas from each other. While most of the techniques were not statistically different, the performance was improved relative to the PARAFAC results obtained when analyzing the unaligned data. Copyright © 2011 Elsevier B.V. All rights reserved.
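Three of the interpolation schemes compared above (linear, piecewise cubic Hermite, and cubic spline) are available in SciPy and can be used to upsample a sparsely sampled first-dimension chromatogram as sketched below; the upsampling factor and function name are assumptions, and the cross-correlation alignment, Fourier zero-filling and Gaussian fitting steps are not reproduced.

```python
import numpy as np
from scipy.interpolate import interp1d, PchipInterpolator, CubicSpline

def upsample_first_dimension(t, signal, factor=10):
    """Upsample a first-dimension chromatogram with three common interpolation schemes."""
    t = np.asarray(t, float)
    signal = np.asarray(signal, float)
    t_fine = np.linspace(t[0], t[-1], factor * (len(t) - 1) + 1)
    return {
        "t": t_fine,
        "linear": interp1d(t, signal)(t_fine),
        "pchip": PchipInterpolator(t, signal)(t_fine),
        "cubic spline": CubicSpline(t, signal)(t_fine),
    }
```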
Babchishin, Kelly M; Helmus, Leslie-Maaike
2016-09-01
Correlations are the simplest and most commonly understood effect size statistic in psychology. The purpose of the current paper was to use a large sample of real-world data (109 correlations with 60,415 participants) to illustrate the base rate dependence of correlations when applied to dichotomous or ordinal data. Specifically, we examined the influence of the base rate on different effect size metrics. Correlations decreased when the dichotomous variable did not have a 50 % base rate. The higher the deviation from a 50 % base rate, the smaller the observed Pearson's point-biserial and Kendall's tau correlation coefficients. In contrast, the relationship between base rate deviations and the more commonly proposed alternatives (i.e., polychoric correlation coefficients, AUCs, Pearson/Thorndike adjusted correlations, and Cohen's d) were less remarkable, with AUCs being most robust to attenuation due to base rates. In other words, the base rate makes a marked difference in the magnitude of the correlation. As such, when using dichotomous data, the correlation may be more sensitive to base rates than is optimal for the researcher's goals. Given the magnitude of the association between the base rate and point-biserial correlations (r = -.81) and Kendall's tau (r = -.80), we recommend that AUCs, Pearson/Thorndike adjusted correlations, Cohen's d, or polychoric correlations should be considered as alternate effect size statistics in many contexts.
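The base rate dependence described above is easy to demonstrate by simulation: dichotomise one of two correlated normal variables at different base rates and compare the point-biserial correlation with the AUC. The sketch below is illustrative; the sample size, latent correlation and seed are assumptions, not values from the paper.

```python
import numpy as np
from scipy import stats

def base_rate_demo(base_rate, n=60_000, rho=0.5, seed=0):
    """Point-biserial r and AUC for a variable dichotomised at a given base rate."""
    rng = np.random.default_rng(seed)
    x, y = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n).T
    d = (y >= np.quantile(y, 1.0 - base_rate)).astype(int)   # dichotomised criterion
    r_pb = np.corrcoef(x, d)[0, 1]                           # point-biserial correlation
    u = stats.mannwhitneyu(x[d == 1], x[d == 0]).statistic
    auc = u / (d.sum() * (n - d.sum()))                      # AUC = U / (n1 * n0)
    return r_pb, auc

# r_pb shrinks as the base rate departs from 0.5, while the AUC stays comparatively stable
for p in (0.5, 0.2, 0.05):
    print(p, base_rate_demo(p))
```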
The statistical properties and possible causes of polar motion prediction errors
NASA Astrophysics Data System (ADS)
Kosek, Wieslaw; Kalarus, Maciej; Wnek, Agnieszka; Zbylut-Gorska, Maria
2015-08-01
The pole coordinate data predictions from different prediction contributors of the Earth Orientation Parameters Combination of Prediction Pilot Project (EOPCPPP) were studied to determine the statistical properties of polar motion forecasts by examining the time series of differences between them and the future IERS pole coordinate data. The mean absolute errors, standard deviations, skewness and kurtosis of these differences were computed, together with their error bars, as a function of prediction length. The ensemble predictions show slightly smaller mean absolute errors and standard deviations; however, their skewness and kurtosis values are similar to those of the predictions from the individual contributors. The skewness and kurtosis make it possible to check whether these prediction differences satisfy a normal distribution. The kurtosis values diminish with prediction length, which means that the probability distribution of the prediction differences becomes more platykurtic than leptokurtic. Non-zero skewness values result from the oscillating character of these differences for particular prediction lengths, which can be due to the irregular change of the annual oscillation phase in the joint fluid (atmospheric + ocean + land hydrology) excitation functions. The variations of the annual oscillation phase computed by the combination of the Fourier transform band-pass filter and the Hilbert transform from pole coordinate data, as well as from pole coordinate model data obtained from fluid excitations, are in good agreement.
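The per-prediction-length statistics named above can be computed from an array of prediction-minus-observation differences as sketched below; the array layout and function name are assumptions for illustration, and error bars on the moments are not included.

```python
import numpy as np
from scipy import stats

def prediction_error_stats(differences):
    """MAE, SD, skewness and excess kurtosis of forecast errors per prediction length.

    `differences` is assumed to be shaped (n_forecasts, n_prediction_lengths)."""
    d = np.asarray(differences, float)
    return {
        "MAE": np.mean(np.abs(d), axis=0),
        "SD": np.std(d, axis=0, ddof=1),
        "skewness": stats.skew(d, axis=0),
        "kurtosis": stats.kurtosis(d, axis=0),   # excess kurtosis; 0 for a normal distribution
    }
```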
NASA Astrophysics Data System (ADS)
Moreno de Castro, Maria; Schartau, Markus; Wirtz, Kai
2017-04-01
Mesocosm experiments on phytoplankton dynamics under high CO2 concentrations mimic the response of marine primary producers to future ocean acidification. However, potential acidification effects can be hindered by the high standard deviation typically found in the replicates of the same CO2 treatment level. In experiments with multiple unresolved factors and a sub-optimal number of replicates, post-processing statistical inference tools might fail to detect an effect that is present. We propose that in such cases, data-based model analyses might be suitable tools to unearth potential responses to the treatment and identify the uncertainties that could produce the observed variability. As test cases, we used data from two independent mesocosm experiments. Both experiments showed high standard deviations and, according to statistical inference tools, biomass appeared insensitive to changing CO2 conditions. Conversely, our simulations showed earlier and more intense phytoplankton blooms in modeled replicates at high CO2 concentrations and suggested that uncertainties in average cell size, phytoplankton biomass losses, and initial nutrient concentration potentially outweigh acidification effects by triggering strong variability during the bloom phase. We also estimated the thresholds below which uncertainties do not escalate to high variability. This information might help in designing future mesocosm experiments and interpreting controversial results on the effect of acidification or other pressures on ecosystem functions.
Assessment of Uncertainties Related to Seismic Hazard Using Fuzzy Analysis
NASA Astrophysics Data System (ADS)
Jorjiashvili, N.; Yokoi, T.; Javakhishvili, Z.
2013-05-01
Seismic hazard analysis has become a very important issue in the last few decades. Recently, new technologies and improved data availability have helped many scientists to understand where and why earthquakes happen, the physics of earthquakes, etc. They have also begun to understand the role of uncertainty in seismic hazard analysis. However, there is still a significant problem of how to handle existing uncertainty; the same lack of information makes it difficult to quantify uncertainty accurately. Usually, attenuation curves are obtained in a statistical way, by regression analysis. Statistical and probabilistic analyses show overlapping results for the site coefficients. This overlapping takes place not only at the border between two neighboring classes but also among more than three classes. Although the analysis starts from classifying sites using geological terms, these site coefficients are not sharply separated by class. In the present study, this problem is addressed using fuzzy set theory. Using membership functions, the ambiguities at the border between neighboring classes can be avoided. Fuzzy set theory is applied to southern California in the conventional way. In this study, the standard deviations that show the variation within each site class obtained by fuzzy set theory and by the classical approach are compared. The results of this analysis show that, when data are insufficient for hazard assessment, site classification based on fuzzy set theory yields standard deviations smaller than those obtained by the classical approach, which is direct evidence of reduced uncertainty.
Ogle, K.M.; Lee, R.W.
1994-01-01
Radon-222 activity was measured for 27 water samples from streams, an alluvial aquifer, bedrock aquifers, and a geothermal system, in and near the 510-square mile area of Owl Creek Basin, north-central Wyoming. Summary statistics of the radon-222 activities are compiled. For 16 stream-water samples, the arithmetic mean radon-222 activity was 20 pCi/L (picocuries per liter), geometric mean activity was 7 pCi/L, harmonic mean activity was 2 pCi/L and median activity was 8 pCi/L. The standard deviation of the arithmetic mean is 29 pCi/L. The activities in the stream-water samples ranged from 0.4 to 97 pCi/L. The histogram of stream-water samples is left-skewed when compared to a normal distribution. For 11 ground-water samples, the arithmetic mean radon-222 activity was 486 pCi/L, geometric mean activity was 280 pCi/L, harmonic mean activity was 130 pCi/L and median activity was 373 pCi/L. The standard deviation of the arithmetic mean is 500 pCi/L. The activity in the ground-water samples ranged from 25 to 1,704 pCi/L. The histogram of ground-water samples is left-skewed when compared to a normal distribution. (USGS)
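Summary statistics of the kind tabulated above (arithmetic, geometric and harmonic means, median, standard deviation) can be computed as sketched below for any vector of activities; the function name is an assumption for illustration.

```python
import numpy as np
from scipy import stats

def radon_summary(activities_pci_per_l):
    """Central-tendency and spread statistics for a set of radon-222 activities."""
    a = np.asarray(activities_pci_per_l, float)
    return {
        "arithmetic mean": a.mean(),
        "geometric mean": stats.gmean(a),
        "harmonic mean": stats.hmean(a),
        "median": np.median(a),
        "standard deviation": a.std(ddof=1),
    }
```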
Grueber, Catherine E; Hogg, Carolyn J; Ivy, Jamie A; Belov, Katherine
2015-04-01
Maintaining genetic diversity is a crucial goal of intensive management of threatened species, particularly for those populations that act as sources for translocation or re-introduction programmes. Most captive genetic management is based on pedigrees and a neutral theory of inheritance, an assumption that may be violated by selective forces operating in captivity. Here, we explore the conservation consequences of early viability selection: differential offspring survival that occurs prior to management or research observations, such as embryo deaths in utero. If early viability selection produces genotypic deviations from Mendelian predictions, it may undermine management strategies intended to minimize inbreeding and maintain genetic diversity. We use empirical examples to demonstrate that straightforward approaches, such as comparing litter sizes of inbred vs. noninbred breeding pairs, can be used to test whether early viability selection likely impacts estimates of inbreeding depression. We also show that comparing multilocus genotype data to pedigree predictions can reveal whether early viability selection drives systematic biases in genetic diversity, patterns that would not be detected using pedigree-based statistics alone. More sophisticated analysis combining genomewide molecular data with pedigree information will enable conservation scientists to test whether early viability selection drives deviations from neutrality across wide stretches of the genome, revealing whether this form of selection biases the pedigree-based statistics and inference upon which intensive management is based. © 2015 John Wiley & Sons Ltd.
Lutz, Werner K; Vamvakas, Spyros; Kopp-Schneider, Annette; Schlatter, Josef; Stopper, Helga
2002-12-01
Sublinear dose-response relationships are often seen in toxicity testing, particularly with bioassays for carcinogenicity. This is the result of a superimposition of various effects that modulate and contribute to the process of cancer formation. Examples are saturation of detoxification pathways or DNA repair with increasing dose, or regenerative hyperplasia and indirect DNA damage as a consequence of high-dose cytotoxicity and cell death. The response to a combination treatment can appear to be supra-additive, although it is in fact dose-additive along a sublinear dose-response curve for the single agents. Because environmental exposure of humans is usually in a low-dose range and deviation from linearity is less likely at the low-dose end, combination effects should be tested at the lowest observable effect levels (LOEL) of the components. This principle has been applied to combinations of genotoxic agents in various cellular models. For statistical analysis, all experiments were analyzed for deviation from additivity with an n-factor analysis of variance with an interaction term, n being the number of components tested in combination. Benzo[a]pyrene, benz[a]anthracene, and dibenz[a,c]anthracene were tested at the LOEL, separately and in combination, for the induction of revertants in the Ames test, using Salmonella typhimurium TA100 and rat liver S9 fraction. Combined treatment produced no deviation from additivity. The induction of micronuclei in vitro was investigated with ionizing radiation from a 137Cs source and ethyl methanesulfonate. Mouse lymphoma L5178Y cells revealed a significant 40% supra-additive combination effect in an experiment based on three independent replicates for controls and single and combination treatments. On the other hand, two human lymphoblastoid cell lines (TK6 and WTK1) as well as a pilot study with human primary fibroblasts from fetal lung did not show deviation from additivity. Data derived from one cell line should therefore not be generalized. Regarding the testing of mixtures for deviation from additive toxicity, the suggested experimental protocol is easily followed by toxicologists.
Al-Ekrish, Asma'a A; Alfadda, Sara A; Ameen, Wadea; Hörmann, Romed; Puelacher, Wolfgang; Widmann, Gerlig
2018-06-16
To compare the surface of computer-aided design (CAD) models of the maxilla produced using ultra-low MDCT doses combined with filtered backprojection (FBP), adaptive statistical iterative reconstruction (ASIR) and model-based iterative reconstruction (MBIR) reconstruction techniques with that produced from a standard dose/FBP protocol. A cadaveric completely edentulous maxilla was imaged using a standard dose protocol (CTDIvol: 29.4 mGy) and FBP, in addition to 5 low dose test protocols (LD1-5) (CTDIvol: 4.19, 2.64, 0.99, 0.53, and 0.29 mGy) reconstructed with FBP, ASIR 50, ASIR 100, and MBIR. A CAD model from each test protocol was superimposed onto the reference model using the 'Best Fit Alignment' function. Differences between the test and reference models were analyzed as maximum and mean deviations, and root-mean-square of the deviations, and color-coded models were obtained which demonstrated the location, magnitude and direction of the deviations. Based upon the magnitude, size, and distribution of areas of deviations, CAD models from the following protocols were comparable to the reference model: FBP/LD1; ASIR 50/LD1 and LD2; ASIR 100/LD1, LD2, and LD3; MBIR/LD1. The following protocols demonstrated deviations mostly between 1-2 mm or under 1 mm but over large areas, and so their effect on surgical guide accuracy is questionable: FBP/LD2; MBIR/LD2, LD3, LD4, and LD5. The following protocols demonstrated large deviations over large areas and therefore were not comparable to the reference model: FBP/LD3, LD4, and LD5; ASIR 50/LD3, LD4, and LD5; ASIR 100/LD4, and LD5. When MDCT is used for CAD models of the jaws, dose reductions of 86% may be possible with FBP, 91% with ASIR 50, and 97% with ASIR 100. Analysis of the stability and accuracy of CAD/CAM surgical guides as directly related to the jaws is needed to confirm the results.
Summary Statistics for Fun Dough Data Acquired at LLNL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kallman, J S; Morales, K E; Whipple, R E
Using x-ray computerized tomography (CT), we have characterized the x-ray linear attenuation coefficients (LAC) of a Play Dough™-like product, Fun Dough™, designated as PD. Table 1 gives the first-order statistics for each of four CT measurements, estimated with a Gaussian kernel density estimator (KDE) analysis. The mean values of the LAC range from a high of about 2100 LMHU_D at 100 kVp to a low of about 1100 LMHU_D at 300 kVp. The standard deviation of each measurement is around 1% of the mean. The entropy covers the range from 3.9 to 4.6. Ordinarily, we would model the LAC of the material and compare the modeled values to the measured values. In this case, however, we did not have the composition of the material and therefore did not model the LAC. Using a method recently proposed by Lawrence Livermore National Laboratory (LLNL), we estimate the value of the effective atomic number, Z_eff, to be near 8.5. LLNL prepared about 50 mL of the Fun Dough™ in a polypropylene vial and firmly compressed it immediately prior to the x-ray measurements. Still, layers can plainly be seen in the reconstructed images, indicating that the bulk density of the material in the container is affected by voids and bubbles. We used the computer program IMGREC to reconstruct the CT images. The values of the key parameters used in the data capture and image reconstruction are given in this report. Additional details may be found in the experimental SOP and a separate document. To characterize the statistical distribution of LAC values in each CT image, we first isolated an 80% central-core segment of volume elements ('voxels') lying completely within the specimen, away from the walls of the polypropylene vial. All of the voxels within this central core, including those comprised of voids and inclusions, are included in the statistics. We then calculated the mean value, standard deviation and entropy for (a) the four image segments and for (b) their digital gradient images. (A digital gradient image of a given image was obtained by taking the absolute value of the difference between the initial image and that same image offset by one voxel horizontally, parallel to the rows of the x-ray detector array.) The statistics of the initial image of LAC values are called 'first-order statistics'; those of the gradient image, 'second-order statistics.'
Assessment of corneal epithelial thickness in dry eye patients.
Cui, Xinhan; Hong, Jiaxu; Wang, Fei; Deng, Sophie X; Yang, Yujing; Zhu, Xiaoyu; Wu, Dan; Zhao, Yujin; Xu, Jianjiang
2014-12-01
To investigate the features of corneal epithelial thickness topography with Fourier-domain optical coherence tomography (OCT) in dry eye patients. In this cross-sectional study, 100 symptomatic dry eye patients and 35 normal subjects were enrolled. All participants answered the ocular surface disease index questionnaire and were subjected to OCT, corneal fluorescein staining, tear breakup time, Schirmer 1 test without anesthetic (S1t), and meibomian morphology. Several epithelium statistics for each eye, including central, superior, inferior, minimum, maximum, minimum - maximum, and map standard deviation, were averaged. Correlations of epithelial thickness with the symptoms of dry eye were calculated. The mean (±SD) central, superior, and inferior corneal epithelial thickness was 53.57 (±3.31) μm, 52.00 (±3.39) μm, and 53.03 (±3.67) μm in normal eyes and 52.71 (±2.83) μm, 50.58 (±3.44) μm, and 52.53 (±3.36) μm in dry eyes, respectively. The superior corneal epithelium was thinner in dry eye patients compared with normal subjects (p = 0.037), whereas central and inferior epithelium were not statistically different. In the dry eye group, patients with higher severity grades had thinner superior (p = 0.017) and minimum (p < 0.001) epithelial thickness, more wide range (p = 0.032), and greater deviation (p = 0.003). The average central epithelial thickness had no correlation with tear breakup time, S1t, or the severity of meibomian glands, whereas average superior epithelial thickness positively correlated with S1t (r = 0.238, p = 0.017). Fourier-domain OCT demonstrated that the thickness map of the dry eye corneal epithelium was thinner than normal eyes in the superior region. In more severe dry eye disease patients, the superior and minimum epithelium was much thinner, with a greater range of map standard deviation.
The Predisposing Factors between Dental Caries and Deviations from Normal Weight.
Chopra, Amandeep; Rao, Nanak Chand; Gupta, Nidhi; Vashisth, Shelja; Lakhanpal, Manav
2015-04-01
Dental caries and deviations from normal weight are two conditions which share several broadly predisposing factors, so it is important to understand any relationship between dental state and body weight if either is to be managed appropriately. The study was done to find out the correlation between body mass index (BMI), diet, and dental caries among 12-15-year-old schoolgoing children in Panchkula District. A multistage sample of 12-15-year-old school children (n = 810) in Panchkula district, Haryana was considered. Child demographic details and diet history for 5 days were recorded. Data regarding dental caries status were collected using the World Health Organization (1997) format. BMI was calculated and categorized according to the World Health Organization classification system for BMI. The data were subjected to statistical analysis using the chi-square test and binomial regression in the Statistical Package for Social Sciences (SPSS) 20.0. The mean Decayed Missing Filled Teeth (DMFT) score was found to be 1.72, with decayed, missing, and filled teeth scores of 1.22, 0.04, and 0.44, respectively. When the sample was assessed based on type of diet, it was found that vegetarians had a higher mean DMFT (1.72) compared to children having a mixed diet. Overweight children had the highest DMFT (3.21), followed by underweight (2.31) and obese children (2.23). Binomial regression revealed that females were at 1.293 times the risk of developing caries compared to males. Fair and poor Simplified Oral Hygiene Index (OHI-S) scores carried 3.920 and 4.297 times the risk of developing caries compared to good oral hygiene, respectively. Children of upper-high socioeconomic status (SES) are at the greatest risk of developing caries. Underweight, overweight, and obese children are at 2.7, 2.5, and 3 times the risk of developing caries compared to children with normal BMI, respectively. Dental caries and deviations from normal weight thus share several broadly predisposing factors such as diet, SES, lifestyle and other environmental factors.
Rescaled earthquake recurrence time statistics: application to microrepeaters
NASA Astrophysics Data System (ADS)
Goltz, Christian; Turcotte, Donald L.; Abaimov, Sergey G.; Nadeau, Robert M.; Uchida, Naoki; Matsuzawa, Toru
2009-01-01
Slip on major faults primarily occurs during `characteristic' earthquakes. The recurrence statistics of characteristic earthquakes play an important role in seismic hazard assessment. A major problem in determining applicable statistics is the short sequences of characteristic earthquakes that are available worldwide. In this paper, we introduce a rescaling technique in which sequences can be superimposed to establish larger numbers of data points. We consider the Weibull and log-normal distributions, in both cases we rescale the data using means and standard deviations. We test our approach utilizing sequences of microrepeaters, micro-earthquakes which recur in the same location on a fault. It seems plausible to regard these earthquakes as a miniature version of the classic characteristic earthquakes. Microrepeaters are much more frequent than major earthquakes, leading to longer sequences for analysis. In this paper, we present results for the analysis of recurrence times for several microrepeater sequences from Parkfield, CA as well as NE Japan. We find that, once the respective sequence can be considered to be of sufficient stationarity, the statistics can be well fitted by either a Weibull or a log-normal distribution. We clearly demonstrate this fact by our technique of rescaled combination. We conclude that the recurrence statistics of the microrepeater sequences we consider are similar to the recurrence statistics of characteristic earthquakes on major faults.
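One plausible reading of the rescaling step described above, sketched below as an assumption rather than the paper's exact recipe, is to map every recurrence-time sequence onto a common mean and standard deviation before pooling and fitting the candidate distributions.

```python
import numpy as np
from scipy import stats

def rescale_and_pool(sequences, target_mean=1.0, target_sd=0.3):
    """Rescale each recurrence-time sequence to a common mean/SD, pool, and fit two models."""
    pooled = []
    for t in sequences:
        t = np.asarray(t, float)
        pooled.append(target_mean + target_sd * (t - t.mean()) / t.std(ddof=1))
    pooled = np.concatenate(pooled)
    weibull_params = stats.weibull_min.fit(pooled)   # (shape, loc, scale)
    lognorm_params = stats.lognorm.fit(pooled)       # (shape, loc, scale)
    return weibull_params, lognorm_params
```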
Aswehlee, Amel M; Hattori, Mariko; Elbashti, Mahmoud E; Sumita, Yuka I; Taniguchi, Hisashi
This study aimed (1) to geometrically evaluate areas of facial asymmetry in patients with two different types of maxillectomy defect compared to a control group, (2) to geometrically evaluate the effect of an obturator prosthesis on facial asymmetry, and (3) to investigate the correlation between three-dimensional (3D) deviation values and number of missing teeth. Facial data from 13 normal control participants and 26 participants with two types of maxillectomy defect (groups 1 and 2) were acquired with a noncontact 3D digitizer. Facial asymmetry was evaluated by superimposing a facial scan onto its mirror scan using 3D evaluation software. Facial scans with and without obturator prostheses were also superimposed to evaluate the obturator effect. The correlation between 3D deviation values and number of missing teeth was also evaluated. Statistical analyses were performed. Facial asymmetry was significantly different between the control group and each maxillectomy defect group (group 1: P < .0001 and P = .020 without and with obturator, respectively; group 2: P < .0001 for both conditions). There were no significant differences in asymmetry between groups 1 and 2 either without or with obturator (P = .457 and P = .980, respectively). There was a significant difference in the obturator effect between groups 1 and 2 (P = .038). 3D deviation values were positively correlated with number of missing teeth in group 1 (r = 0.594, P = .032), but not in group 2. A noncontact 3D digitizer and 3D deviation assessment were effective for analyzing facial data of maxillectomy patients. Obturators were effective for improving facial deformities in these patients.
López-Valcárcel, Beatriz G; González-Martel, Christian; Peiro, Salvador
2018-01-01
Objective Newcomb-Benford’s Law (NBL) proposes a regular distribution for first digits, second digits and digit combinations applicable to many different naturally occurring sources of data. Testing deviations from NBL is used in many datasets as a screening tool for identifying data trustworthiness problems. This study aims to compare public available waiting lists (WL) data from Finland and Spain for testing NBL as an instrument to flag up potential manipulation in WLs. Design Analysis of the frequency of Finnish and Spanish WLs first digits to determine if their distribution is similar to the pattern documented by NBL. Deviations from the expected first digit frequency were analysed using Pearson’s χ2, mean absolute deviation and Kuiper tests. Setting/participants Publicly available WL data from Finland and Spain, two countries with universal health insurance and National Health Systems but characterised by different levels of transparency and good governance standards. Main outcome measures Adjustment of the observed distribution of the numbers reported in Finnish and Spanish WL data to the expected distribution according to NBL. Results WL data reported by the Finnish health system fits first digit NBL according to all statistical tests used (p=0.6519 in χ2 test). For Spanish data, this hypothesis was rejected in all tests (p<0.0001 in χ2 test). Conclusions Testing deviations from NBL distribution can be a useful tool to identify problems with WL data trustworthiness and signalling the need for further testing. PMID:29743333
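The first-digit test described above compares observed leading-digit counts with the Newcomb-Benford frequencies log10(1 + 1/d) using Pearson's chi-square statistic; a minimal sketch is given below (illustrative only; the mean absolute deviation and Kuiper tests are not reproduced).

```python
import numpy as np
from scipy import stats

def first_digit(v):
    """Leading significant digit of a nonzero number."""
    v = abs(v)
    while v < 1:
        v *= 10
    while v >= 10:
        v /= 10
    return int(v)

def benford_chi2(values):
    """Pearson chi-square test of first-digit counts against the Newcomb-Benford distribution."""
    digits = np.array([first_digit(v) for v in values if v != 0])
    observed = np.array([(digits == d).sum() for d in range(1, 10)])
    expected = np.log10(1.0 + 1.0 / np.arange(1, 10)) * observed.sum()
    return stats.chisquare(observed, expected)   # (chi-square statistic, p-value)
```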
Measuring (subglacial) bedform orientation, length, and longitudinal asymmetry - Method assessment.
Jorge, Marco G; Brennand, Tracy A
2017-01-01
Geospatial analysis software provides a range of tools that can be used to measure landform morphometry. Often, a metric can be computed with different techniques that may give different results. This study is an assessment of 5 different methods for measuring longitudinal, or streamlined, subglacial bedform morphometry: orientation, length and longitudinal asymmetry, all of which require defining a longitudinal axis. The methods use the standard deviational ellipse (not previously applied in this context), the longest straight line fitting inside the bedform footprint (2 approaches), the minimum-size footprint-bounding rectangle, and Euler's approximation. We assess how well these methods replicate morphometric data derived from a manually mapped (visually interpreted) longitudinal axis, which, though subjective, is the most typically used reference. A dataset of 100 subglacial bedforms covering the size and shape range of those in the Puget Lowland, Washington, USA is used. For bedforms with elongation > 5, deviations from the reference values are negligible for all methods but Euler's approximation (length). For bedforms with elongation < 5, most methods had small mean absolute error (MAE) and median absolute deviation (MAD) for all morphometrics and thus can be confidently used to characterize the central tendencies of their distributions. However, some methods are better than others. The least precise methods are the ones based on the longest straight line and Euler's approximation; using these for statistical dispersion analysis is discouraged. Because the standard deviational ellipse method is relatively shape invariant and closely replicates the reference values, it is the recommended method. Speculatively, this study may also apply to negative-relief, and fluvial and aeolian bedforms.
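The standard deviational ellipse approach mentioned above can be sketched with an equivalent principal-component formulation: the major axis of the ellipse is the leading eigenvector of the footprint coordinate covariance, from which orientation and length follow. This is an illustrative sketch under that equivalence, not the software workflow used in the study.

```python
import numpy as np

def sde_axis(x, y):
    """Orientation (degrees, 0-180) and length of a footprint's longitudinal axis."""
    pts = np.column_stack([x, y]).astype(float)
    pts -= pts.mean(axis=0)                              # center on the mean center
    evals, evecs = np.linalg.eigh(np.cov(pts.T))
    major = evecs[:, np.argmax(evals)]                   # unit vector along the long axis
    orientation = np.degrees(np.arctan2(major[1], major[0])) % 180.0
    proj = pts @ major                                   # positions of vertices along the axis
    length = proj.max() - proj.min()
    return orientation, length
```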
Explicit Computations of Instantons and Large Deviations in Beta-Plane Turbulence
NASA Astrophysics Data System (ADS)
Laurie, J.; Bouchet, F.; Zaboronski, O.
2012-12-01
We use a path integral formalism and instanton theory in order to make explicit analytical predictions about large deviations and rare events in beta-plane turbulence. The path integral formalism is a concise way to get large deviation results in dynamical systems forced by random noise. In the most simple cases, it leads to the same results as the Freidlin-Wentzell theory, but it has a wider range of applicability. This approach is however usually extremely limited, due to the complexity of the theoretical problems. As a consequence it provides explicit results in a fairly limited number of models, often extremely simple ones with only a few degrees of freedom. Few exceptions exist outside the realm of equilibrium statistical physics. We will show that the barotropic model of beta-plane turbulence is one of these non-equilibrium exceptions. We describe sets of explicit solutions to the instanton equation, and precise derivations of the action functional (or large deviation rate function). The reason why such exact computations are possible is related to the existence of hidden symmetries and conservation laws for the instanton dynamics. We outline several applications of this approach. For instance, we compute explicitly the very low probability of observing flows with an energy much larger or smaller than the typical one. Moreover, we consider regimes for which the system has multiple attractors (corresponding to different numbers of alternating jets), and discuss the computation of transition probabilities between two such attractors. These extremely rare events are of the utmost importance as the dynamics undergo qualitative macroscopic changes during such transitions.
Phonological skills and disfluency levels in preschool children who stutter.
Gregg, Brent Andrew; Yairi, Ehud
2007-01-01
The relation between stuttering and aspects of language, including phonology, has been investigated for many years. Whereas past literature reported that the incidence of phonological difficulties is higher for children who stutter when compared to normally fluent children, the suggestion of association between the two disorders also drew several critical evaluations. Nevertheless, only a limited amount of information exists concerning the manner and extent to which the speech sound errors exhibited by young children who stutter, close to stuttering onset, are related to the characteristics of their stuttering, such as its severity. Conversely, information is limited regarding the effects a child's phonological skills may have on his/her stuttering severity. The current study investigated the mutual relations between these two factors in 28 carefully selected preschool children near the onset of their stuttering. The children, 20 boys and 8 girls, ranged in age from 25 to 38 months, with a mean of 32.2 months. The phonological skills of two groups with different ratings of stuttering were compared. Similarly, the stuttering severities of two groups with different levels of phonological skills (minimal deviations vs. moderate deviations) were compared. No statistically significant differences were found for either of the two factors. Inspection of the data revealed interesting individual differences. The reader will be able to list: (1) differences in the phonological skills of preschool children whose stuttering is severe as compared to children whose stuttering is mild and (2) differences in stuttering severity in preschool children with minimal phonological deviations as compared to children with moderate phonological deviations.
Danielsen, J C; Karimian, K; Ciarlantini, R; Melsen, B; Kjær, I
2015-12-01
The aim of this study was to elucidate dental and skeletal findings in individuals with unilateral and bilateral maxillary dental transpositions. The sample comprised radiographic material from 63 individuals with maxillary dental transpositions from the Departments of Odontology at the Universities of Copenhagen and Aarhus and from the Danish municipal orthodontic service. The cases were divided into three groups: unilateral transposition of the canine and first premolar (Type 1U), bilateral transposition of the canine and first premolar (Type 1B), and unilateral transposition of the canine and lateral incisor (Type 2). The dentitions were analysed regarding agenesis and dental morphological anomalies on panoramic radiographs, and craniofacial aspects were analysed cephalometrically on profile images. The results were statistically evaluated. All groups demonstrated increased occurrences of agenesis (Type 1U and Type 1B: 31 agenesis in 15 patients; Type 2: three agenesis in three patients). Taurodontic root morphology was most dominant in Type 1U. Peg-shaped lateral incisors showed an increased occurrence, though not in Type 1U. Skeletally, Type 1B and Type 1U demonstrated maxillary retrognathia (more pronounced in Type 1B). Type 2 showed a significant posterior inclination of the maxilla. Transpositions of maxillary canines involve dental and skeletal deviations. Dental deviations were predominantly taurodontic root morphology and agenesis. Regarding skeletal deviations, bilateral transpositions of the canines and first premolars are associated with skeletal changes. Unilateral transpositions are possibly a localised deviation with minor or no skeletal involvement. The results indicate a possible difference in the aetiologies of unilateral and bilateral transpositions.
Braithwaite, Susan S; Umpierrez, Guillermo E; Chase, J Geoffrey
2013-09-01
Group metrics are described to quantify blood glucose (BG) variability of hospitalized patients. The "multiplicative surrogate standard deviation" (MSSD) is the reverse-transformed group mean of the standard deviations (SDs) of the logarithmically transformed BG data set of each patient. The "geometric group mean" (GGM) is the reverse-transformed group mean of the means of the logarithmically transformed BG data set of each patient. Before reverse transformation is performed, the mean of means and mean of SDs each has its own SD, which becomes a multiplicative standard deviation (MSD) after reverse transformation. Statistical predictions and comparisons of parametric or nonparametric tests remain valid after reverse transformation. A subset of a previously published BG data set of 20 critically ill patients from the first 72 h of treatment under the SPRINT protocol was transformed logarithmically. After rank ordering according to the SD of the logarithmically transformed BG data of each patient, the cohort was divided into two equal groups, those having lower or higher variability. For the entire cohort, the GGM was 106 (÷/× 1.07) mg/dl, and MSSD was 1.24 (÷/× 1.07). For the subgroups having lower and higher variability, respectively, the GGM did not differ, 104 (÷/× 1.07) versus 109 (÷/× 1.07) mg/dl, but the MSSD differed, 1.17 (÷/× 1.03) versus 1.31 (÷/× 1.05), p = .00004. By using the MSSD with its MSD, groups can be characterized and compared according to glycemic variability of individual patient members. © 2013 Diabetes Technology Society.
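The GGM and MSSD described above follow directly from the stated recipe: take per-patient means and standard deviations of log-transformed BG, average each across the group, and reverse-transform; the spread of those per-patient quantities becomes a multiplicative SD after reverse transformation. A minimal sketch, assuming `bg_by_patient` is a list of per-patient BG sequences in mg/dl, is given below (illustrative, not the authors' code).

```python
import numpy as np

def group_glycemic_metrics(bg_by_patient):
    """Geometric group mean (GGM) and multiplicative surrogate SD (MSSD), each with its multiplicative SD."""
    log_means = np.array([np.mean(np.log(np.asarray(bg, float))) for bg in bg_by_patient])
    log_sds = np.array([np.std(np.log(np.asarray(bg, float)), ddof=1) for bg in bg_by_patient])
    ggm = np.exp(log_means.mean())              # reverse-transformed group mean of patient means
    ggm_msd = np.exp(log_means.std(ddof=1))     # multiplicative SD of the GGM
    mssd = np.exp(log_sds.mean())               # reverse-transformed group mean of patient SDs
    mssd_msd = np.exp(log_sds.std(ddof=1))      # multiplicative SD of the MSSD
    return (ggm, ggm_msd), (mssd, mssd_msd)
```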