Sample records for obtaining good statistics

  1. Parameter estimation techniques based on optimizing goodness-of-fit statistics for structural reliability

    NASA Technical Reports Server (NTRS)

    Starlinger, Alois; Duffy, Stephen F.; Palko, Joseph L.

    1993-01-01

    New methods are presented that utilize the optimization of goodness-of-fit statistics in order to estimate Weibull parameters from failure data. It is assumed that the underlying population is characterized by a three-parameter Weibull distribution. Goodness-of-fit tests are based on the empirical distribution function (EDF). The EDF is a step function, calculated using failure data, and represents an approximation of the cumulative distribution function for the underlying population. Statistics (such as the Kolmogorov-Smirnov statistic and the Anderson-Darling statistic) measure the discrepancy between the EDF and the cumulative distribution function (CDF). These statistics are minimized with respect to the three Weibull parameters. Due to nonlinearities encountered in the minimization process, Powell's numerical optimization procedure is applied to obtain the optimum value of the EDF. Numerical examples show the applicability of these new estimation methods. The results are compared to the estimates obtained with Cooper's nonlinear regression algorithm.
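
    A minimal sketch of the estimation idea described above, assuming NumPy and SciPy are available: the three-parameter Weibull CDF and the Kolmogorov-Smirnov distance to the EDF are written out directly, and Powell's method is invoked through scipy.optimize.minimize. The failure data and starting values are illustrative only, and the Anderson-Darling variant would simply swap in a different objective function.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def weibull3_cdf(x, shape, scale, loc):
        """CDF of a three-parameter Weibull distribution (zero below the location parameter)."""
        z = np.clip((x - loc) / scale, 0.0, None)
        return 1.0 - np.exp(-z ** shape)

    def ks_statistic(params, data):
        """Kolmogorov-Smirnov distance between the EDF of the failure data and the Weibull CDF."""
        shape, scale, loc = params
        if shape <= 0.0 or scale <= 0.0 or loc >= data.min():
            return 10.0  # large penalty keeps Powell's search inside the feasible region
        x = np.sort(data)
        n = x.size
        cdf = weibull3_cdf(x, shape, scale, loc)
        d_plus = np.max(np.arange(1, n + 1) / n - cdf)
        d_minus = np.max(cdf - np.arange(0, n) / n)
        return max(d_plus, d_minus)

    # Illustrative failure data (e.g. fracture strengths in MPa); replace with real measurements.
    failures = np.array([212.0, 231.0, 245.0, 250.0, 262.0, 270.0, 288.0, 295.0, 310.0, 334.0])

    result = minimize(ks_statistic, x0=[2.0, 80.0, 150.0], args=(failures,), method="Powell")
    shape_hat, scale_hat, loc_hat = result.x
    print(shape_hat, scale_hat, loc_hat, result.fun)
    ```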

  2. How Good Are Statistical Models at Approximating Complex Fitness Landscapes?

    PubMed Central

    du Plessis, Louis; Leventhal, Gabriel E.; Bonhoeffer, Sebastian

    2016-01-01

    Fitness landscapes determine the course of adaptation by constraining and shaping evolutionary trajectories. Knowledge of the structure of a fitness landscape can thus predict evolutionary outcomes. Empirical fitness landscapes, however, have so far only offered limited insight into real-world questions, as the high dimensionality of sequence spaces makes it impossible to exhaustively measure the fitness of all variants of biologically meaningful sequences. We must therefore revert to statistical descriptions of fitness landscapes that are based on a sparse sample of fitness measurements. It remains unclear, however, how much data are required for such statistical descriptions to be useful. Here, we assess the ability of regression models accounting for single and pairwise mutations to correctly approximate a complex quasi-empirical fitness landscape. We compare approximations based on various sampling regimes of an RNA landscape and find that the sampling regime strongly influences the quality of the regression. On the one hand it is generally impossible to generate sufficient samples to achieve a good approximation of the complete fitness landscape, and on the other hand systematic sampling schemes can only provide a good description of the immediate neighborhood of a sequence of interest. Nevertheless, we obtain a remarkably good and unbiased fit to the local landscape when using sequences from a population that has evolved under strong selection. Thus, current statistical methods can provide a good approximation to the landscape of naturally evolving populations. PMID:27189564
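
    A minimal sketch of a regression of the kind discussed above, assuming NumPy and scikit-learn: genotypes are encoded as 0/1 mutation indicators, the design matrix carries one column per site plus one per site pair, and a ridge fit stands in for the paper's regression models. The genotype matrix and fitness values are synthetic placeholders, not the RNA landscape data.

    ```python
    import numpy as np
    from itertools import combinations
    from sklearn.linear_model import Ridge

    def design_matrix(genotypes):
        """One column per site (single-mutation effects) plus one per site pair (pairwise effects)."""
        g = np.asarray(genotypes, dtype=float)
        pair_cols = [g[:, i] * g[:, j] for i, j in combinations(range(g.shape[1]), 2)]
        return np.column_stack([g] + pair_cols)

    # Synthetic sample of a landscape: random genotypes of length 8 with noisy fitness values.
    rng = np.random.default_rng(0)
    genotypes = rng.integers(0, 2, size=(200, 8))
    fitness = genotypes @ rng.normal(size=8) + rng.normal(scale=0.1, size=200)  # placeholder

    model = Ridge(alpha=1.0).fit(design_matrix(genotypes), fitness)
    print(model.score(design_matrix(genotypes), fitness))  # in-sample R^2 of the approximation
    ```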

  3. A global goodness-of-fit statistic for Cox regression models.

    PubMed

    Parzen, M; Lipsitz, S R

    1999-06-01

    In this paper, a global goodness-of-fit test statistic for a Cox regression model, which has an approximate chi-squared distribution when the model has been correctly specified, is proposed. Our goodness-of-fit statistic is global and has power to detect whether interactions or higher order powers of covariates in the model are needed. The proposed statistic is similar to the Hosmer and Lemeshow (1980, Communications in Statistics A10, 1043-1069) goodness-of-fit statistic for binary data as well as Schoenfeld's (1980, Biometrika 67, 145-153) statistic for the Cox model. The methods are illustrated using data from a Mayo Clinic trial in primary biliary cirrhosis of the liver (Fleming and Harrington, 1991, Counting Processes and Survival Analysis), in which the outcome is the time until liver transplantation or death. There are 17 possible covariates. Two Cox proportional hazards models are fit to the data, and the proposed goodness-of-fit statistic is applied to the fitted models.

  4. Residuals and the Residual-Based Statistic for Testing Goodness of Fit of Structural Equation Models

    ERIC Educational Resources Information Center

    Foldnes, Njal; Foss, Tron; Olsson, Ulf Henning

    2012-01-01

    The residuals obtained from fitting a structural equation model are crucial ingredients in obtaining chi-square goodness-of-fit statistics for the model. The authors present a didactic discussion of the residuals, obtaining a geometrical interpretation by recognizing the residuals as the result of oblique projections. This sheds light on the…

  5. 22 CFR 92.80 - Obtaining American vital statistics records.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 22 Foreign Relations 1 2011-04-01 2011-04-01 false Obtaining American vital statistics records. 92... statistics records. Individuals who inquire as to means of obtaining copies of or extracts from American... Vital Statistics Office at the place where the record is kept, which is usually in the capital city of...

  6. 22 CFR 92.80 - Obtaining American vital statistics records.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Obtaining American vital statistics records. 92... statistics records. Individuals who inquire as to means of obtaining copies of or extracts from American... Vital Statistics Office at the place where the record is kept, which is usually in the capital city of...

  7. Modified Distribution-Free Goodness-of-Fit Test Statistic.

    PubMed

    Chun, So Yeon; Browne, Michael W; Shapiro, Alexander

    2018-03-01

    Covariance structure analysis and its structural equation modeling extensions have become one of the most widely used methodologies in social sciences such as psychology, education, and economics. An important issue in such analysis is to assess the goodness of fit of a model under analysis. One of the most popular test statistics used in covariance structure analysis is the asymptotically distribution-free (ADF) test statistic introduced by Browne (Br J Math Stat Psychol 37:62-83, 1984). The ADF statistic can be used to test models without any specific distribution assumption (e.g., multivariate normal distribution) of the observed data. Despite its advantage, it has been shown in various empirical studies that unless sample sizes are extremely large, this ADF statistic could perform very poorly in practice. In this paper, we provide a theoretical explanation for this phenomenon and further propose a modified test statistic that improves the performance in samples of realistic size. The proposed statistic deals with the possible ill-conditioning of the involved large-scale covariance matrices.

  8. Statistical alignment: computational properties, homology testing and goodness-of-fit.

    PubMed

    Hein, J; Wiuf, C; Knudsen, B; Møller, M B; Wibling, G

    2000-09-08

    The model of insertions and deletions in biological sequences, first formulated by Thorne, Kishino, and Felsenstein in 1991 (the TKF91 model), provides a basis for performing alignment within a statistical framework. Here we investigate this model. Firstly, we show how to accelerate the statistical alignment algorithms by several orders of magnitude. The main innovations are to confine likelihood calculations to a band close to the similarity based alignment, to get good initial guesses of the evolutionary parameters and to apply an efficient numerical optimisation algorithm for finding the maximum likelihood estimate. In addition, the recursions originally presented by Thorne, Kishino and Felsenstein can be simplified. Two proteins, about 1500 amino acids long, can be analysed with this method in less than five seconds on a fast desktop computer, which makes this method practical for actual data analysis. Secondly, we propose a new homology test based on this model, where homology means that an ancestor to a sequence pair can be found finitely far back in time. This test has statistical advantages relative to the traditional shuffle test for proteins. Finally, we describe a goodness-of-fit test that allows testing the proposed insertion-deletion (indel) process inherent to this model and find that real sequences (here globins) probably experience indels longer than one, contrary to what is assumed by the model. Copyright 2000 Academic Press.

  9. Obtaining Streamflow Statistics for Massachusetts Streams on the World Wide Web

    USGS Publications Warehouse

    Ries, Kernell G.; Steeves, Peter A.; Freeman, Aleda; Singh, Raj

    2000-01-01

    A World Wide Web application has been developed to make it easy to obtain streamflow statistics for user-selected locations on Massachusetts streams. The Web application, named STREAMSTATS (available at http://water.usgs.gov/osw/streamstats/massachusetts.html ), can provide peak-flow frequency, low-flow frequency, and flow-duration statistics for most streams in Massachusetts. These statistics describe the magnitude (how much), frequency (how often), and duration (how long) of flow in a stream. The U.S. Geological Survey (USGS) has published streamflow statistics, such as the 100-year peak flow, the 7-day, 10-year low flow, and flow-duration statistics, for its data-collection stations in numerous reports. Federal, State, and local agencies need these statistics to plan and manage use of water resources and to regulate activities in and around streams. Engineering and environmental consulting firms, utilities, industry, and others use the statistics to design and operate water-supply systems, hydropower facilities, industrial facilities, wastewater treatment facilities, and roads, bridges, and other structures. Until now, streamflow statistics for data-collection stations have often been difficult to obtain because they are scattered among many reports, some of which are not readily available to the public. In addition, streamflow statistics are often needed for locations where no data are available. STREAMSTATS helps solve these problems. STREAMSTATS was developed jointly by the USGS and MassGIS, the State Geographic Information Systems (GIS) agency, in cooperation with the Massachusetts Departments of Environmental Management and Environmental Protection. The application consists of three major components: (1) a user interface that displays maps and allows users to select stream locations for which they want streamflow statistics (fig. 1), (2) a data base of previously published streamflow statistics and descriptive information for 725 USGS data

  10. Statistical Characteristics of Cloud over Beijing, China Obtained From Ka Band Doppler Radar Observation

    NASA Astrophysics Data System (ADS)

    LIU, J.; Bi, Y.; Duan, S.; Lu, D.

    2017-12-01

    It is well known that cloud characteristics, such as top and base heights, layering structure, microphysical parameters, spatial coverage, and temporal duration, are important factors influencing both the radiation budget and its vertical partitioning, as well as the hydrological cycle through precipitation. Cloud structure, its statistical distribution, and its typical values also vary geographically and seasonally. Ka band radar is a powerful tool for obtaining these parameters around the world, as demonstrated by the ARM cloud radar in Oklahoma, US. Since 2006, CloudSat, one of NASA's A-Train satellite constellation, has continuously observed cloud structure with global coverage, but it monitors clouds over a given local site only twice a day, at the same local times. Using the IAP Ka band Doppler radar, which has been operating continuously since early 2013 on the roof of the IAP building in Beijing, we obtained the statistical characteristics of clouds, including cloud layering, cloud top and base heights, and the thickness of each cloud layer and its distribution, and analyzed their monthly, seasonal, and diurnal variation; a statistical analysis of cloud reflectivity profiles was also made. The analysis covers both non-precipitating and precipitating clouds. Some preliminary comparisons of the results with CloudSat/CALIPSO products for the same period and area are also made.

  11. Comment on the asymptotics of a distribution-free goodness of fit test statistic.

    PubMed

    Browne, Michael W; Shapiro, Alexander

    2015-03-01

    In a recent article Jennrich and Satorra (Psychometrika 78: 545-552, 2013) showed that a proof by Browne (British Journal of Mathematical and Statistical Psychology 37: 62-83, 1984) of the asymptotic distribution of a goodness of fit test statistic is incomplete because it fails to prove that the orthogonal component function employed is continuous. Jennrich and Satorra (Psychometrika 78: 545-552, 2013) showed how Browne's proof can be completed satisfactorily but this required the development of an extensive and mathematically sophisticated framework for continuous orthogonal component functions. This short note provides a simple proof of the asymptotic distribution of Browne's (British Journal of Mathematical and Statistical Psychology 37: 62-83, 1984) test statistic by using an equivalent form of the statistic that does not involve orthogonal component functions and consequently avoids all complicating issues associated with them.

  12. Sensitivity of goodness-of-fit statistics to rainfall data rounding off

    NASA Astrophysics Data System (ADS)

    Deidda, Roberto; Puliga, Michelangelo

    An analysis based on L-moments theory suggests adopting the generalized Pareto distribution to interpret daily rainfall depths recorded by the rain-gauge network of the Hydrological Survey of the Sardinia Region. Nevertheless, a significant and not yet completely resolved problem arises in estimating a left-censoring threshold able to assure a good fit of the rainfall data to the generalized Pareto distribution. In order to detect an optimal threshold, while keeping the largest possible number of data, we chose to apply a “failure-to-reject” method based on goodness-of-fit tests, as proposed by Choulakian and Stephens [Choulakian, V., Stephens, M.A., 2001. Goodness-of-fit tests for the generalized Pareto distribution. Technometrics 43, 478-484]. Unfortunately, the application of the test, using the percentage points provided by Choulakian and Stephens (2001), did not succeed in detecting a useful threshold value in most of the analyzed time series. A deeper analysis revealed that these failures are mainly due to the presence of large quantities of rounded-off values among the sample data, affecting the distribution of the goodness-of-fit statistics and leading to significant departures from the percentage points expected for continuous random variables. A procedure based on Monte Carlo simulations is thus proposed to overcome these problems.
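
    A sketch of the Monte Carlo idea, assuming NumPy and SciPy and an array of threshold excesses: the Anderson-Darling statistic of the observed excesses is compared with statistics recomputed from simulated generalized Pareto samples that are rounded off to the gauge resolution before refitting. The function names, rounding precision, and simulation count are illustrative assumptions, not the authors' exact procedure.

    ```python
    import numpy as np
    from scipy.stats import genpareto

    def anderson_darling(sample, dist):
        """Anderson-Darling statistic of `sample` against a fitted (frozen) distribution."""
        x = np.sort(sample)
        n = x.size
        cdf = np.clip(dist.cdf(x), 1e-12, 1.0 - 1e-12)
        i = np.arange(1, n + 1)
        return -n - np.mean((2 * i - 1) * (np.log(cdf) + np.log(1.0 - cdf[::-1])))

    def rounded_gpd_pvalue(excesses, precision=0.1, n_sim=999, seed=0):
        """Monte Carlo p-value of the AD test when the underlying data are rounded off."""
        rng = np.random.default_rng(seed)
        c, _, scale = genpareto.fit(excesses, floc=0.0)
        observed = anderson_darling(excesses, genpareto(c, loc=0.0, scale=scale))
        stats = []
        for _ in range(n_sim):
            sim = genpareto.rvs(c, loc=0.0, scale=scale, size=excesses.size, random_state=rng)
            sim = np.round(sim / precision) * precision  # reproduce the rain-gauge rounding off
            c_s, _, scale_s = genpareto.fit(sim, floc=0.0)
            stats.append(anderson_darling(sim, genpareto(c_s, loc=0.0, scale=scale_s)))
        return (1 + np.sum(np.array(stats) >= observed)) / (n_sim + 1)
    ```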

  13. Comparison of contact conditions obtained by direct simulation with statistical analysis for normally distributed isotropic surfaces

    NASA Astrophysics Data System (ADS)

    Uchidate, M.

    2018-09-01

    In this study, with the aim of establishing systematic knowledge of the impact of summit extraction methods and stochastic model selection in rough contact analysis, the contact area ratio (A_r/A_a) obtained by statistical contact models with different summit extraction methods was compared with a direct simulation using the boundary element method (BEM). Fifty areal topography datasets with different autocorrelation functions in terms of the power index and correlation length were used for the investigation. The non-causal 2D auto-regressive model, which can generate datasets with specified parameters, was employed in this research. Three summit extraction methods, Nayak's theory, 8-point analysis and watershed segmentation, were examined. With regard to the stochastic model, Bhushan's model and the BGT (Bush-Gibson-Thomas) model were applied. The values of A_r/A_a from the stochastic models tended to be smaller than those from the BEM. The discrepancy between Bhushan's model with the 8-point analysis and the BEM was slightly smaller than with Nayak's theory. The results with the watershed segmentation were similar to those with the 8-point analysis. The impact of Wolf pruning on the discrepancy between the stochastic analysis and the BEM was not very clear. In the case of the BGT model, which employs surface gradients, good quantitative agreement with the BEM was obtained when Nayak's bandwidth parameter was large.

  14. Computation of large-scale statistics in decaying isotropic turbulence

    NASA Technical Reports Server (NTRS)

    Chasnov, Jeffrey R.

    1993-01-01

    We have performed large-eddy simulations of decaying isotropic turbulence to test the prediction of self-similar decay of the energy spectrum and to compute the decay exponents of the kinetic energy. In general, good agreement between the simulation results and the assumption of self-similarity was obtained. However, the statistics of the simulations were insufficient to compute the value of gamma, which corrects the decay exponent when the spectrum follows a k^4 wavenumber behavior near k = 0. To obtain good statistics, it was found necessary to average over a large ensemble of turbulent flows.

  15. Exploratory study on a statistical method to analyse time resolved data obtained during nanomaterial exposure measurements

    NASA Astrophysics Data System (ADS)

    Clerc, F.; Njiki-Menga, G.-H.; Witschger, O.

    2013-04-01

    Most of the measurement strategies suggested at the international level to assess workplace exposure to nanomaterials rely on devices measuring airborne particle concentrations in real time (according to different metrics). Since none of the instruments used to measure aerosols can distinguish a particle of interest from the background aerosol, the statistical analysis of time resolved data requires special attention. So far, very few approaches have been used for statistical analysis in the literature, ranging from simple qualitative analysis of graphs to the implementation of more complex statistical models. To date, there is still no consensus on a particular approach, and the search for an appropriate and robust method continues. In this context, this exploratory study investigates a statistical method to analyse time resolved data based on a Bayesian probabilistic approach. To investigate and illustrate the use of this statistical method, particle number concentration data were used from a workplace study that investigated the potential for exposure via inhalation during cleanout operations by sandpapering of a reactor producing nanocomposite thin films. In this workplace study, the background issue was addressed through the near-field and far-field approaches, and several size integrated and time resolved devices were used. The analysis of the results presented here focuses only on data obtained with two handheld condensation particle counters. While one was measuring at the source of the released particles, the other was measuring far-field in parallel. The Bayesian probabilistic approach allows a probabilistic modelling of data series, and the observed task is modelled in the form of probability distributions. The probability distributions issuing from time resolved data obtained at the source can be compared with the probability distributions issuing from the time resolved data obtained far-field, leading in a

  16. The Probability of Obtaining Two Statistically Different Test Scores as a Test Index

    ERIC Educational Resources Information Center

    Muller, Jorg M.

    2006-01-01

    A new test index is defined as the probability of obtaining two randomly selected test scores (PDTS) as statistically different. After giving a concept definition of the test index, two simulation studies are presented. The first analyzes the influence of the distribution of test scores, test reliability, and sample size on PDTS within classical…
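
    A rough simulation of one reading of such an index, assuming classical test theory with standardized normal observed scores: PDTS is taken here as the share of randomly drawn score pairs whose difference exceeds the usual critical difference at level alpha. This definition is an illustrative assumption rather than the article's exact formula.

    ```python
    import numpy as np
    from scipy.stats import norm

    def pdts(reliability, n_sim=100_000, alpha=0.05, seed=0):
        """Probability that two randomly drawn observed scores differ significantly,
        under classical test theory with standardized (SD = 1) observed scores."""
        rng = np.random.default_rng(seed)
        se_diff = np.sqrt(2.0 * (1.0 - reliability))       # SE of a score difference
        critical = norm.ppf(1.0 - alpha / 2.0) * se_diff   # minimum significant difference
        scores = rng.normal(size=(n_sim, 2))               # pairs of observed scores
        return np.mean(np.abs(scores[:, 0] - scores[:, 1]) > critical)

    print(pdts(0.80), pdts(0.95))  # PDTS rises with test reliability
    ```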

  17. Slant path rain attenuation and path diversity statistics obtained through radar modeling of rain structure

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1984-01-01

    Single and joint terminal slant path attenuation statistics at frequencies of 28.56 and 19.04 GHz have been derived, employing a radar data base obtained over a three-year period at Wallops Island, VA. Statistics were independently obtained for path elevation angles of 20, 45, and 90 deg for purposes of examining how elevation angle influences both single-terminal and joint probability distributions. Both diversity gains and the dependence of the autocorrelation function on site spacing and elevation angle were determined employing the radar modeling results. Comparisons with the results of other investigators are presented. An independent path elevation angle prediction technique was developed and demonstrated to fit well the radar-derived single- and joint-terminal cumulative fade distributions at various elevation angles.

  18. Flat Plate Wake Velocity Statistics Obtained With Circular And Elliptic Trailing Edges

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan

    2016-01-01

    The near wake of a flat plate with circular and elliptic trailing edges is investigated with data from direct numerical simulations. The plate length and thickness are the same in both cases. The separating boundary layers are turbulent and statistically identical. Therefore the wake is symmetric in the two cases. The emphasis in this study is on a comparison of the wake distributions of the velocity components, normal intensity, and fluctuating shear stress obtained in the two cases.

  19. Summary goodness-of-fit statistics for binary generalized linear models with noncanonical link functions.

    PubMed

    Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J

    2016-05-01

    Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLM with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of link function chosen. We generalize the Tsiatis GOF statistic originally developed for logistic GLMCCs (TG), so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J^2) statistics can be applied directly. In a simulation study, TG, HL, and J^2 were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J^2 were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC. In this case, TG had more power than HL or J^2. © 2015 John Wiley & Sons Ltd/London School of Economics.
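
    For orientation, a minimal sketch of the classic Hosmer-Lemeshow grouping statistic for a binary GLM, assuming NumPy and SciPy; the generalized Tsiatis (TG) and Pigeon-Heyse (J^2) statistics studied in the article are not reproduced here, and the decile grouping and degrees of freedom follow the usual convention.

    ```python
    import numpy as np
    from scipy.stats import chi2

    def hosmer_lemeshow(y, p_hat, n_groups=10):
        """Group observations by predicted risk and compare observed with expected event counts."""
        y = np.asarray(y, dtype=float)
        p_hat = np.asarray(p_hat, dtype=float)
        groups = np.array_split(np.argsort(p_hat), n_groups)
        statistic = 0.0
        for g in groups:
            observed = y[g].sum()
            expected = p_hat[g].sum()
            n_g = len(g)
            statistic += (observed - expected) ** 2 / (expected * (1.0 - expected / n_g) + 1e-12)
        return statistic, chi2.sf(statistic, df=n_groups - 2)
    ```

    Passing fitted probabilities from a probit, log, or complementary log-log model works the same way, which is the sense in which such grouping-based statistics can be computed regardless of the link chosen.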

  20. Discrete Time Rescaling Theorem: Determining Goodness of Fit for Discrete Time Statistical Models of Neural Spiking

    PubMed Central

    Haslinger, Robert; Pipa, Gordon; Brown, Emery

    2010-01-01

    One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time rescaling theorem provides a goodness of fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model’s spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies upon assumptions of continuously defined time and instantaneous events. However spikes have finite width and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time rescaling theorem which analytically corrects for the effects of finite resolution. This allows us to define a rescaled time which is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting Generalized Linear Models (GLMs) to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false positive rate of the KS test and greatly increasing the reliability of model evaluation based upon the time rescaling theorem. PMID:20608868

  1. Discrete time rescaling theorem: determining goodness of fit for discrete time statistical models of neural spiking.

    PubMed

    Haslinger, Robert; Pipa, Gordon; Brown, Emery

    2010-10-01

    One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time-rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model's spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies on assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time-rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time-rescaling theorem that analytically corrects for the effects of finite resolution. This allows us to define a rescaled time that is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting generalized linear models to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false-positive rate of the KS test and greatly increasing the reliability of model evaluation based on the time-rescaling theorem.
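
    A compact sketch of the basic rescaling check, assuming NumPy and SciPy and a discrete-time model that outputs a spike probability per bin: the per-bin probabilities are converted to integrated rates via -log(1-p), which is a discretization device assumed here, and the rescaled intervals are tested against an exponential(1) with a KS test. The two corrections proposed in the paper (a simulated reference distribution and the analytic discrete-time theorem) are not reproduced.

    ```python
    import numpy as np
    from scipy.stats import kstest

    def rescaled_isi_ks(spike_bins, p_spike):
        """Time-rescaling check: integrate the model's rate between successive spikes and
        test the rescaled inter-spike intervals against an exponential(1) distribution."""
        # -log(1 - p) per bin approximates the integrated conditional intensity and
        # reduces discretization artefacts relative to summing p directly.
        rate = -np.log1p(-np.clip(p_spike, 0.0, 1.0 - 1e-12))
        cum = np.concatenate([[0.0], np.cumsum(rate)])
        spike_times = np.flatnonzero(spike_bins)
        z = np.diff(cum[spike_times + 1])   # rescaled inter-spike intervals
        return kstest(z, "expon")           # KS test against the unit exponential

    # Example with a constant-probability (Bernoulli) spiking model:
    rng = np.random.default_rng(0)
    spikes = rng.random(5000) < 0.05
    print(rescaled_isi_ks(spikes, np.full(5000, 0.05)))
    ```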

  2. Evaluation of a statistics-based Ames mutagenicity QSAR model and interpretation of the results obtained.

    PubMed

    Barber, Chris; Cayley, Alex; Hanser, Thierry; Harding, Alex; Heghes, Crina; Vessey, Jonathan D; Werner, Stephane; Weiner, Sandy K; Wichard, Joerg; Giddings, Amanda; Glowienke, Susanne; Parenty, Alexis; Brigo, Alessandro; Spirkl, Hans-Peter; Amberg, Alexander; Kemper, Ray; Greene, Nigel

    2016-04-01

    The relative wealth of bacterial mutagenicity data available in the public literature means that in silico quantitative/qualitative structure activity relationship (QSAR) systems can readily be built for this endpoint. A good means of evaluating the performance of such systems is to use private unpublished data sets, which generally represent a more distinct chemical space than publicly available test sets and, as a result, provide a greater challenge to the model. However, raw performance metrics should not be the only factor considered when judging this type of software since expert interpretation of the results obtained may allow for further improvements in predictivity. Enough information should be provided by a QSAR to allow the user to make general, scientifically-based arguments in order to assess and overrule predictions when necessary. With all this in mind, we sought to validate the performance of the statistics-based in vitro bacterial mutagenicity prediction system Sarah Nexus (version 1.1) against private test data sets supplied by nine different pharmaceutical companies. The results of these evaluations were then analysed in order to identify findings presented by the model which would be useful for the user to take into consideration when interpreting the results and making their final decision about the mutagenic potential of a given compound. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Targeted estimation of nuisance parameters to obtain valid statistical inference.

    PubMed

    van der Laan, Mark J

    2014-01-01

    In order to obtain concrete results, we focus on estimation of the treatment specific mean, controlling for all measured baseline covariates, based on observing independent and identically distributed copies of a random variable consisting of baseline covariates, a subsequently assigned binary treatment, and a final outcome. The statistical model only assumes possible restrictions on the conditional distribution of treatment, given the covariates, the so-called propensity score. Estimators of the treatment specific mean involve estimation of the propensity score and/or estimation of the conditional mean of the outcome, given the treatment and covariates. In order to make these estimators asymptotically unbiased at any data distribution in the statistical model, it is essential to use data-adaptive estimators of these nuisance parameters such as ensemble learning, and specifically super-learning. Because such estimators involve optimal trade-off of bias and variance w.r.t. the infinite dimensional nuisance parameter itself, they result in a sub-optimal bias/variance trade-off for the resulting real-valued estimator of the estimand. We demonstrate that additional targeting of the estimators of these nuisance parameters guarantees that this bias for the estimand is second order and thereby allows us to prove theorems that establish asymptotic linearity of the estimator of the treatment specific mean under regularity conditions. These insights result in novel targeted minimum loss-based estimators (TMLEs) that use ensemble learning with additional targeted bias reduction to construct estimators of the nuisance parameters. In particular, we construct collaborative TMLEs (C-TMLEs) with known influence curve allowing for statistical inference, even though these C-TMLEs involve variable selection for the propensity score based on a criterion that measures how effective the resulting fit of the propensity score is in removing bias for the estimand. As a particular special

  4. Artificial Intelligence Approach to Support Statistical Quality Control Teaching

    ERIC Educational Resources Information Center

    Reis, Marcelo Menezes; Paladini, Edson Pacheco; Khator, Suresh; Sommer, Willy Arno

    2006-01-01

    Statistical quality control--SQC (consisting of Statistical Process Control, Process Capability Studies, Acceptance Sampling and Design of Experiments) is a very important tool to obtain, maintain and improve the Quality level of goods and services produced by an organization. Despite its importance, and the fact that it is taught in technical and…

  5. Use of Selected Goodness-of-Fit Statistics to Assess the Accuracy of a Model of Henry Hagg Lake, Oregon

    NASA Astrophysics Data System (ADS)

    Rounds, S. A.; Sullivan, A. B.

    2004-12-01

    Assessing a model's ability to reproduce field data is a critical step in the modeling process. For any model, some method of determining goodness-of-fit to measured data is needed to aid in calibration and to evaluate model performance. Visualizations and graphical comparisons of model output are an excellent way to begin that assessment. At some point, however, model performance must be quantified. Goodness-of-fit statistics, including the mean error (ME), mean absolute error (MAE), root mean square error, and coefficient of determination, typically are used to measure model accuracy. Statistical tools such as the sign test or Wilcoxon test can be used to test for model bias. The runs test can detect phase errors in simulated time series. Each statistic is useful, but each has its limitations. None provides a complete quantification of model accuracy. In this study, a suite of goodness-of-fit statistics was applied to a model of Henry Hagg Lake in northwest Oregon. Hagg Lake is a man-made reservoir on Scoggins Creek, a tributary to the Tualatin River. Located on the west side of the Portland metropolitan area, the Tualatin Basin is home to more than 450,000 people. Stored water in Hagg Lake helps to meet the agricultural and municipal water needs of that population. Future water demands have caused water managers to plan for a potential expansion of Hagg Lake, doubling its storage to roughly 115,000 acre-feet. A model of the lake was constructed to evaluate the lake's water quality and estimate how that quality might change after raising the dam. The laterally averaged, two-dimensional, U.S. Army Corps of Engineers model CE-QUAL-W2 was used to construct the Hagg Lake model. Calibrated for the years 2000 and 2001 and confirmed with data from 2002 and 2003, modeled parameters included water temperature, ammonia, nitrate, phosphorus, algae, zooplankton, and dissolved oxygen. Several goodness-of-fit statistics were used to quantify model accuracy and bias. Model
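
    A minimal sketch of the summary statistics named above (mean error, mean absolute error, root mean square error, coefficient of determination), assuming NumPy; the sign, Wilcoxon, and runs tests mentioned in the abstract are not shown here.

    ```python
    import numpy as np

    def fit_statistics(observed, simulated):
        """Basic goodness-of-fit summaries used to quantify model accuracy and bias."""
        observed = np.asarray(observed, dtype=float)
        simulated = np.asarray(simulated, dtype=float)
        resid = simulated - observed
        me = resid.mean()                            # mean error (bias)
        mae = np.abs(resid).mean()                   # mean absolute error
        rmse = np.sqrt((resid ** 2).mean())          # root mean square error
        ss_res = (resid ** 2).sum()
        ss_tot = ((observed - observed.mean()) ** 2).sum()
        r2 = 1.0 - ss_res / ss_tot                   # coefficient of determination
        return {"ME": me, "MAE": mae, "RMSE": rmse, "R2": r2}
    ```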

  6. APPLICATION OF STATISTICAL ENERGY ANALYSIS TO VIBRATIONS OF MULTI-PANEL STRUCTURES.

    DTIC Science & Technology

    cylindrical shell are compared with predictions obtained from statistical energy analysis. Generally good agreement is observed. The flow of mechanical...the coefficients of proportionality between power flow and average modal energy difference, which one must know in order to apply statistical energy analysis. No

  7. Goodness-of-Fit Assessment of Item Response Theory Models

    ERIC Educational Resources Information Center

    Maydeu-Olivares, Alberto

    2013-01-01

    The article provides an overview of goodness-of-fit assessment methods for item response theory (IRT) models. It is now possible to obtain accurate "p"-values of the overall fit of the model if bivariate information statistics are used. Several alternative approaches are described. As the validity of inferences drawn on the fitted model…

  8. Record statistics of financial time series and geometric random walks

    NASA Astrophysics Data System (ADS)

    Sabir, Behlool; Santhanam, M. S.

    2014-09-01

    The study of record statistics of correlated series in physics, such as random walks, is gaining momentum, and several analytical results have been obtained in the past few years. In this work, we study the record statistics of correlated empirical data for which random walk models have relevance. We obtain results for the record statistics of select stock market data and the geometric random walk, primarily through simulations. We show that the distribution of the age of records is a power law with the exponent α lying in the range 1.5≤α≤1.8. Further, the longest record ages follow the Fréchet distribution of extreme value theory. The record statistics of the geometric random walk series are in good agreement with those obtained from empirical stock data.
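
    A short simulation sketch in the spirit of the study, assuming NumPy: upper records and their ages are extracted from a geometric random walk built from Gaussian log-returns. The drift, volatility, and series length are arbitrary placeholders; an empirical stock series would be substituted for the simulated prices.

    ```python
    import numpy as np

    def record_ages(series):
        """Indices of upper records and the 'age' of each record (time until it is broken)."""
        records = [0]
        for t in range(1, len(series)):
            if series[t] > series[records[-1]]:
                records.append(t)
        ages = np.diff(records + [len(series)])  # last record's age is censored at the end
        return np.array(records), ages

    # Illustrative geometric random walk: prices built from cumulative Gaussian log-returns.
    rng = np.random.default_rng(1)
    log_returns = rng.normal(loc=0.0, scale=0.01, size=10_000)
    prices = 100.0 * np.exp(np.cumsum(log_returns))

    _, ages = record_ages(prices)
    print(len(ages), ages.max())  # number of records and the longest record age
    ```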

  9. Using Technology to Prompt Good Questions about Distributions in Statistics

    ERIC Educational Resources Information Center

    Nabbout-Cheiban, Marie; Fisher, Forest; Edwards, Michael Todd

    2017-01-01

    The Common Core State Standards for Mathematics envisions data analysis as a key component of K-grade 12 mathematics instruction with statistics introduced in the early grades. Nonetheless, deficiencies in statistical learning persist throughout elementary school and beyond. Too often, mathematics teachers lack the statistical knowledge for…

  10. Probabilities and statistics for backscatter estimates obtained by a scatterometer with applications to new scatterometer design data

    NASA Technical Reports Server (NTRS)

    Pierson, Willard J., Jr.

    1989-01-01

    The values of the Normalized Radar Backscattering Cross Section (NRCS), sigma-0, obtained by a scatterometer are random variables whose variance is a known function of the expected value. The probability density function can be obtained from the normal distribution. Models express the expected value as a function of the properties of the waves on the ocean and the winds that generated the waves. Point estimates of the expected value were found from various statistics, given the parameters that define the probability density function for each value. Random intervals were derived with a preassigned probability of containing that value. A statistical test to determine whether or not successive values of sigma-0 are truly independent was derived. The maximum likelihood estimates for wind speed and direction were found, given a model for backscatter as a function of the properties of the waves on the ocean. These estimates are biased as a result of the terms in the equation that involve natural logarithms; calculations of the point estimates of the maximum likelihood values are used to show that the contributions of the logarithmic terms are negligible and that the terms can be omitted.

  11. Field Penetration in a Rectangular Box Using Numerical Techniques: An Effort to Obtain Statistical Shielding Effectiveness

    NASA Technical Reports Server (NTRS)

    Bunting, Charles F.; Yu, Shih-Pin

    2006-01-01

    This paper emphasizes the application of numerical methods to explore the ideas related to shielding effectiveness from a statistical view. An empty rectangular box is examined using a hybrid modal/moment method. The basic computational method is presented followed by the results for single- and multiple observation points within the over-moded empty structure. The statistics of the field are obtained by using frequency stirring, borrowed from the ideas connected with reverberation chamber techniques, and extends the ideas of shielding effectiveness well into the multiple resonance regions. The study presented in this paper will address the average shielding effectiveness over a broad spatial sample within the enclosure as the frequency is varied.

  12. Exact goodness-of-fit tests for Markov chains.

    PubMed

    Besag, J; Mondal, D

    2013-06-01

    Goodness-of-fit tests are useful in assessing whether a statistical model is consistent with available data. However, the usual χ² asymptotics often fail, either because of the paucity of the data or because a nonstandard test statistic is of interest. In this article, we describe exact goodness-of-fit tests for first- and higher order Markov chains, with particular attention given to time-reversible ones. The tests are obtained by conditioning on the sufficient statistics for the transition probabilities and are implemented by simple Monte Carlo sampling or by Markov chain Monte Carlo. They apply both to single and to multiple sequences and allow a free choice of test statistic. Three examples are given. The first concerns multiple sequences of dry and wet January days for the years 1948-1983 at Snoqualmie Falls, Washington State, and suggests that standard analysis may be misleading. The second one is for a four-state DNA sequence and lends support to the original conclusion that a second-order Markov chain provides an adequate fit to the data. The last one is six-state atomistic data arising in molecular conformational dynamics simulation of solvated alanine dipeptide and points to strong evidence against a first-order reversible Markov chain at 6 picosecond time steps. © 2013, The International Biometric Society.
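
    A simplified Monte Carlo sketch in the same spirit, assuming NumPy: a first-order chain is fitted, sequences are simulated from the fitted chain (a parametric bootstrap rather than the exact conditional test described in the article), and a Pearson-type statistic on second-order transitions supplies the free choice of test statistic. State labels are assumed to be integers 0..n_states-1 and every state is assumed to occur in the observed sequence.

    ```python
    import numpy as np

    def fit_transition_matrix(seq, n_states):
        """Maximum-likelihood transition probabilities of a first-order Markov chain."""
        counts = np.zeros((n_states, n_states))
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
        rows = counts.sum(axis=1, keepdims=True)
        return counts / np.where(rows == 0, 1.0, rows)

    def second_order_statistic(seq, P, n_states):
        """Pearson-type statistic: observed second-order transition counts versus the
        counts expected if the fitted first-order chain were adequate."""
        obs = np.zeros((n_states, n_states, n_states))
        for a, b, c in zip(seq[:-2], seq[1:-1], seq[2:]):
            obs[a, b, c] += 1
        expected = obs.sum(axis=2, keepdims=True) * P[np.newaxis, :, :]
        mask = expected > 0
        return np.sum((obs[mask] - expected[mask]) ** 2 / expected[mask])

    def markov_gof_pvalue(seq, n_states, n_sim=999, seed=0):
        """Monte Carlo p-value for the adequacy of a first-order chain."""
        rng = np.random.default_rng(seed)
        P = fit_transition_matrix(seq, n_states)
        observed = second_order_statistic(seq, P, n_states)
        exceed = 0
        for _ in range(n_sim):
            sim = [seq[0]]
            for _ in range(len(seq) - 1):
                sim.append(rng.choice(n_states, p=P[sim[-1]]))
            exceed += second_order_statistic(sim, fit_transition_matrix(sim, n_states), n_states) >= observed
        return (exceed + 1) / (n_sim + 1)
    ```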

  13. What Good Are Statistics that Don't Generalize?

    ERIC Educational Resources Information Center

    Shaffer, David Williamson; Serlin, Ronald C.

    2004-01-01

    Quantitative and qualitative inquiry are sometimes portrayed as distinct and incompatible paradigms for research in education. Approaches to combining qualitative and quantitative research typically "integrate" the two methods by letting them co-exist independently within a single research study. Here we describe intra-sample statistical analysis…

  14. Use of multivariate statistics to identify unreliable data obtained using CASA.

    PubMed

    Martínez, Luis Becerril; Crispín, Rubén Huerta; Mendoza, Maximino Méndez; Gallegos, Oswaldo Hernández; Martínez, Andrés Aragón

    2013-06-01

    In order to identify unreliable data in a dataset of motility parameters from a pilot study, acquired by a veterinarian with experience in boar semen handling but without experience in the operation of a computer assisted sperm analysis (CASA) system, a multivariate graphical and statistical analysis was performed. Sixteen boar semen samples were aliquoted, incubated with varying concentrations of progesterone from 0 to 3.33 µg/ml, and analyzed in a CASA system. After standardization of the data, Chernoff faces were drawn for each measurement, and a principal component analysis (PCA) was used to reduce the dimensionality and pre-process the data before hierarchical clustering. The first twelve individual measurements showed abnormal features when Chernoff faces were drawn. PCA revealed that principal components 1 and 2 explained 63.08% of the variance in the dataset. Values of the principal components for each individual measurement of the semen samples were mapped to identify differences among treatments or among boars. Twelve individual measurements presented low values of principal component 1. Confidence ellipses on the map of principal components showed no statistically significant effects of treatment or boar. Hierarchical clustering performed on the first two principal components produced three clusters. Cluster 1 contained evaluations of the first two samples in each treatment, each from a different boar. With the exception of one individual measurement, all other measurements in cluster 1 were the same as those observed in abnormal Chernoff faces. The unreliable data in cluster 1 are probably related to the operator's inexperience with a CASA system. These findings could be used to objectively evaluate the skill level of an operator of a CASA system. This may be particularly useful in the quality control of semen analysis using CASA systems.
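
    A compact sketch of the screening pipeline described above, assuming NumPy, scikit-learn, and SciPy: the motility parameters are standardized, reduced to two principal components, and cut into three clusters by Ward hierarchical clustering. The synthetic matrix stands in for the real CASA output, and Chernoff faces are omitted.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler
    from scipy.cluster.hierarchy import linkage, fcluster

    # Illustrative motility matrix: rows = individual CASA measurements, columns = parameters.
    rng = np.random.default_rng(2)
    motility = rng.normal(size=(48, 8))   # replace with the real CASA output

    scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(motility))
    clusters = fcluster(linkage(scores, method="ward"), t=3, criterion="maxclust")
    print(np.bincount(clusters))          # sizes of the three clusters
    ```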

  15. Statistical Properties of Echosignal Obtained from Human Dermis In Vivo

    NASA Astrophysics Data System (ADS)

    Piotrzkowska, Hanna; Litniewski, Jerzy; Nowicki, Andrzej; Szymańska, Elżbieta

    The paper presents the classification of healthy skin and skin lesions (basal cell carcinoma and actinic keratosis), based on the statistical parameters of the envelope of ultrasonic echoes. The envelope was modeled using Rayleigh and non-Rayleigh (K-distribution) statistics. Furthermore, the characteristic parameter of the K-distribution, the effective number of scatterers, was investigated. The attenuation coefficient was also used for the skin lesion assessment.

  16. Method for obtaining a collimated near-unity aspect ratio output beam from a DFB-GSE laser with good beam quality.

    PubMed

    Liew, S K; Carlson, N W

    1992-05-20

    A simple method for obtaining a collimated near-unity aspect ratio output beam from laser sources with extremely large (> 100:1) aspect ratios is demonstrated by using a distributed-feedback grating-surface-emitting laser. Far-field power-in-the-bucket measurements of the laser indicate good beam quality with a high Strehl ratio.

  17. Reflexion on linear regression trip production modelling method for ensuring good model quality

    NASA Astrophysics Data System (ADS)

    Suprayitno, Hitapriya; Ratnasari, Vita

    2017-11-01

    Transport modelling is important. For certain cases the conventional model must still be used, and a good trip production model is essential to it. A good model can only be obtained from a good sample. Two basic principles of good sampling are that the sample is capable of representing the population characteristics and capable of producing an acceptable error at a certain confidence level. These principles do not yet seem to be well understood or applied in trip production modelling. It is therefore necessary to investigate trip production modelling practice in Indonesia and to formulate a better modelling method for ensuring model quality. The results are presented as follows. Statistics offers a method for calculating the span of the prediction value at a certain confidence level for linear regression, called the Confidence Interval of Predicted Value. Common modelling practice uses R2 as the principal quality measure, while sampling practice varies and does not always conform to the sampling principles. An experiment indicates that a small sample can already give an excellent R2 value and that the sample composition can significantly change the model. Hence, a good R2 value does not always mean good model quality. This leads to three basic ideas for ensuring good model quality, i.e. reformulating the quality measure, the calculation procedure, and the sampling method. The quality measure is defined as having both a good R2 value and a good Confidence Interval of Predicted Value. The calculation procedure must incorporate the appropriate statistical calculation methods and the statistical tests needed. A good sampling method must incorporate random, well-distributed, stratified sampling with a certain minimum number of samples. These three ideas need to be further developed and tested.
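
    As a sketch of the "Confidence Interval of Predicted Value" idea, assuming statsmodels and synthetic trip-production data (household size as the single predictor): the OLS fit reports R2, and get_prediction(...).summary_frame() gives intervals around the predicted value at chosen predictor levels.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Illustrative trip-production data: household size as the single predictor of daily trips.
    rng = np.random.default_rng(3)
    household_size = rng.integers(1, 7, size=40).astype(float)
    trips = 1.5 + 0.8 * household_size + rng.normal(scale=0.6, size=40)

    X = sm.add_constant(household_size)
    fit = sm.OLS(trips, X).fit()
    print(fit.rsquared)  # the usual R2 quality measure

    # Confidence interval of the predicted value at chosen household sizes.
    new_X = sm.add_constant(np.array([2.0, 4.0, 6.0]))
    print(fit.get_prediction(new_X).summary_frame(alpha=0.05))  # mean and observation intervals
    ```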

  18. Statistical evaluation of accelerated stability data obtained at a single temperature. I. Effect of experimental errors in evaluation of stability data obtained.

    PubMed

    Yoshioka, S; Aso, Y; Takeda, Y

    1990-06-01

    Accelerated stability data obtained at a single temperature are statistically evaluated, and the utility of such data for the assessment of stability is discussed, focusing on the chemical stability of solution-state dosage forms. The probability that the drug content of a product is observed to be within the lower specification limit in the accelerated test is interpreted graphically. This probability depends on experimental errors in the assay and temperature control, as well as the true degradation rate and activation energy. Therefore, the observation that the drug content meets the specification in accelerated testing can provide only limited information on the shelf-life of the drug without knowledge of the activation energy and the accuracy and precision of the assay and temperature control.

  19. Analytical review based on statistics on good and poor financial performance of LPD in Bangli regency.

    NASA Astrophysics Data System (ADS)

    Yasa, I. B. A.; Parnata, I. K.; Susilawati, N. L. N. A. S.

    2018-01-01

    This study applies an analytical review model to analyze the influence of GCG, accounting conservatism, a financial distress model, and company size on the good and poor financial performance of LPDs in Bangli Regency. Ordinal regression analysis is used to perform the analytical review, yielding the influence of, and relationships between, the variables to be considered for further audit. The respondents in this study were the LPDs in Bangli Regency, which number 159; of these, 100 LPDs were randomly selected as the sample. The test results show that GCG and company size have a significant effect on both good and poor financial performance, while accounting conservatism and the financial distress model have no significant effect. The four variables together explain 58.8% of the overall financial performance, while the remaining 41.2% is influenced by other variables. Size, the financial distress model, and accounting conservatism are the variables recommended for further audit.

  20. The statistical treatment implemented to obtain the planetary protection bioburdens for the Mars Science Laboratory mission

    NASA Astrophysics Data System (ADS)

    Beaudet, Robert A.

    2013-06-01

    NASA Planetary Protection Policy requires that Category IV missions, such as those going to the surface of Mars, include detailed assessment and documentation of the bioburden on the spacecraft at launch. In prior missions to Mars, the approaches used to estimate the bioburden could easily be conservative without penalizing the project because spacecraft elements such as the descent and landing stages had relatively small surface areas and volumes. With the advent of a large spacecraft such as Mars Science Laboratory (MSL), it became necessary to use a modified, still conservative but more pragmatic, statistical treatment to obtain the standard deviations and the bioburden densities at about the 99.9% confidence limits. This article describes both the Gaussian and Poisson statistics that were implemented to analyze the bioburden data from the MSL spacecraft prior to launch. The standard deviations were weighted by the areas sampled with each swab or wipe. Some typical cases are given and discussed.

  1. Statistical evaluation of fatty acid profile and cholesterol content in fish (common carp) lipids obtained by different sample preparation procedures.

    PubMed

    Spiric, Aurelija; Trbovic, Dejana; Vranic, Danijela; Djinovic, Jasna; Petronijevic, Radivoj; Matekalo-Sverak, Vesna

    2010-07-05

    Studies of lipid extraction from animal and fish tissues do not provide information on the influence of the extraction procedure on the fatty acid composition of the extracted lipids or on the cholesterol content. Data presented in this paper indicate the impact of extraction procedures on the fatty acid profile of fish lipids extracted by the modified Soxhlet and ASE (accelerated solvent extraction) procedures. Cholesterol was also determined by the direct saponification method. Student's paired t-test, used to compare the total fat content in the carp population obtained by the two extraction methods, shows that the differences between the total fat content values determined by ASE and by the modified Soxhlet method are not statistically significant. Values obtained by three different methods (direct saponification, ASE and the modified Soxhlet method), used for determination of the cholesterol content in carp, were compared by one-way analysis of variance (ANOVA). The results show that the modified Soxhlet method gives values which differ significantly from those obtained by direct saponification and the ASE method, whereas the results obtained by direct saponification and the ASE method do not differ significantly from each other. The highest quantities of cholesterol (37.65 to 65.44 mg/100 g) in the analyzed fish muscle were obtained by applying the direct saponification method, as the less destructive one, followed by ASE (34.16 to 52.60 mg/100 g) and the modified Soxhlet extraction method (10.73 to 30.83 mg/100 g). The modified Soxhlet method for extraction of fish lipids gives higher values for n-6 fatty acids than the ASE method (t(paired)=3.22, t(c)=2.36), while there is no statistically significant difference in the n-3 content levels between the methods (t(paired)=1.31). The UNSFA/SFA ratio obtained using the modified Soxhlet method is also higher than the ratio obtained using the ASE method (t(paired)=4.88, t(c)=2.36). Results of Principal Component Analysis (PCA) showed that the highest positive impact to
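
    A small sketch of the two comparisons described above, assuming SciPy; the numbers are invented for illustration (roughly in the ranges quoted) and, in the actual study, the ANOVA would be followed by pairwise comparisons between methods.

    ```python
    import numpy as np
    from scipy.stats import ttest_rel, f_oneway

    # Illustrative cholesterol values (mg/100 g) for the same fish samples by three methods.
    saponification = np.array([45.1, 52.3, 61.0, 39.8, 48.6])
    ase            = np.array([40.2, 47.9, 52.6, 36.1, 44.3])
    soxhlet        = np.array([18.7, 25.4, 30.8, 12.9, 21.5])

    # Paired comparison of values measured on the same samples by two methods.
    print(ttest_rel(ase, soxhlet))

    # One-way ANOVA across the three determination methods.
    print(f_oneway(saponification, ase, soxhlet))
    ```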

  2. Self-consistent mean-field approach to the statistical level density in spherical nuclei

    NASA Astrophysics Data System (ADS)

    Kolomietz, V. M.; Sanzhur, A. I.; Shlomo, S.

    2018-06-01

    A self-consistent mean-field approach within the extended Thomas-Fermi approximation with Skyrme forces is applied to the calculation of the statistical level density in spherical nuclei. Landau's concept of quasiparticles with the nucleon effective mass and the correct description of the continuum states for finite-depth potentials are taken into consideration. The A dependence and the temperature dependence of the statistical inverse level-density parameter K are obtained in good agreement with experimental data.

  3. Peaks Over Threshold (POT): A methodology for automatic threshold estimation using goodness of fit p-value

    NASA Astrophysics Data System (ADS)

    Solari, Sebastián.; Egüen, Marta; Polo, María. José; Losada, Miguel A.

    2017-04-01

    Threshold estimation in the Peaks Over Threshold (POT) method and the impact of the estimation method on the calculation of high return period quantiles and their uncertainty (or confidence intervals) are issues that are still unresolved. In the past, methods based on goodness of fit tests and EDF-statistics have yielded satisfactory results, but their use has not yet been systematized. This paper proposes a methodology for automatic threshold estimation, based on the Anderson-Darling EDF-statistic and goodness of fit test. When combined with bootstrapping techniques, this methodology can be used to quantify both the uncertainty of threshold estimation and its impact on the uncertainty of high return period quantiles. This methodology was applied to several simulated series and to four precipitation/river flow data series. The results obtained confirmed its robustness. For the measured series, the estimated thresholds corresponded to those obtained by nonautomatic methods. Moreover, even though the uncertainty of the threshold estimation was high, this did not have a significant effect on the width of the confidence intervals of high return period quantiles.
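
    A minimal sketch of an automatic threshold scan in this spirit, assuming NumPy and SciPy: for each candidate threshold the excesses are fitted with a generalized Pareto distribution and an Anderson-Darling statistic is computed, and the lowest threshold whose fit clears a critical value is returned. The fixed critical value is a placeholder; proper percentage points depend on the fitted shape parameter (Choulakian and Stephens) or can be obtained by bootstrapping, as the paper does.

    ```python
    import numpy as np
    from scipy.stats import genpareto

    def ad_statistic(excesses, c, scale):
        """Anderson-Darling statistic of the threshold excesses against a fitted GPD."""
        x = np.sort(excesses)
        n = x.size
        cdf = np.clip(genpareto.cdf(x, c, loc=0.0, scale=scale), 1e-12, 1.0 - 1e-12)
        i = np.arange(1, n + 1)
        return -n - np.mean((2 * i - 1) * (np.log(cdf) + np.log(1.0 - cdf[::-1])))

    def select_threshold(data, candidates, critical_value=0.5, min_exceedances=30):
        """Return the lowest candidate threshold whose GPD fit is not rejected by the AD check."""
        for u in np.sort(candidates):
            excesses = data[data > u] - u
            if excesses.size < min_exceedances:
                break
            c, _, scale = genpareto.fit(excesses, floc=0.0)
            if ad_statistic(excesses, c, scale) < critical_value:
                return u
        return None  # no candidate passed; fall back to expert judgement
    ```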

  4. A method for obtaining a statistically stationary turbulent free shear flow

    NASA Technical Reports Server (NTRS)

    Timson, Stephen F.; Lele, S. K.; Moser, R. D.

    1994-01-01

    The long-term goal of the current research is the study of Large-Eddy Simulation (LES) as a tool for aeroacoustics. New algorithms and developments in computer hardware are making possible a new generation of tools for aeroacoustic predictions, which rely on the physics of the flow rather than empirical knowledge. LES, in conjunction with an acoustic analogy, holds the promise of predicting the statistics of noise radiated to the far-field of a turbulent flow. LES's predictive ability will be tested through extensive comparison of acoustic predictions based on a Direct Numerical Simulation (DNS) and an LES of the same flow, as well as a priori testing of DNS results. The method presented here is aimed at allowing simulation of a turbulent flow field that is both simple and amenable to acoustic predictions. A free shear flow that is homogeneous in both the streamwise and spanwise directions and statistically stationary will be simulated using equations based on the Navier-Stokes equations with a small number of added terms. Studying a free shear flow eliminates the need to consider flow-surface interactions as an acoustic source. The homogeneous directions and the flow's statistically stationary nature greatly simplify the application of an acoustic analogy.

  5. Statistics for wildlifers: how much and what kind?

    USGS Publications Warehouse

    Johnson, D.H.; Shaffer, T.L.; Newton, W.E.

    2001-01-01

    Quantitative methods are playing increasingly important roles in wildlife ecology and, ultimately, management. This change poses a challenge for wildlife practitioners and students who are not well-educated in mathematics and statistics. Here we give our opinions on what wildlife biologists should know about statistics, while recognizing that not everyone is inclined mathematically. For those who are, we recommend that they take mathematics coursework at least through calculus and linear algebra. They should take statistics courses that are focused conceptually, stressing the Why rather than the How of doing statistics. For less mathematically oriented wildlifers, introductory classes in statistical techniques will furnish some useful background in basic methods but may provide little appreciation of when the methods are appropriate. These wildlifers will have to rely much more on advice from statisticians. Far more important than knowing how to analyze data is an understanding of how to obtain and recognize good data. Regardless of the statistical education they receive, all wildlife biologists should appreciate the importance of controls, replication, and randomization in studies they conduct. Understanding these concepts requires little mathematical sophistication, but is critical to advancing the science of wildlife ecology.

  6. What are the attributes of a good health educator?

    PubMed

    Ilic, Dragan; Harding, Jessica; Allan, Christie; Diug, Basia

    2016-06-28

    The purpose of this study was to examine the attributes that students and educators believe are important to being a good health educator in a non-clinical setting. A cross-sectional survey of first-year health science students and educators involved with a Health Science course in Melbourne, Australia was performed. A convenience sampling approach was implemented, with participants required to rate the importance of teaching attributes on a previously developed 15-item written questionnaire. Descriptive statistics were generated, with Pearson's chi-square statistics used to examine differences between groups. In total, 94/147 (63.9%) of students and 15/15 (100%) of educators participated in the study. Of the 15 attributes, only 'scholarly activity' was not deemed to be an important attribute in defining a good educator. Knowledge base (50% vs. 13.3%) and feedback skills (22.3% vs. 0%) were rated as important attributes by students in comparison to educators. Professionalism (20% vs. 5.3%), scholarly activity (20% vs. 3.2%) and role modelling (26.7% vs. 3.2%) were rated as the most important attributes by educators in comparison to students. No single attribute makes a good health educator; rather, health educators are required to have a rounded approach to teaching. Students have greater focus on the educator providing a transfer of knowledge. Educators are additionally focused on professionalism attributes, which may not be valued by students. Students and educators must enter into a clearer understanding of expectations, from both parties, to obtain optimal education outcomes.

  7. Permutation tests for goodness-of-fit testing of mathematical models to experimental data.

    PubMed

    Fişek, M Hamit; Barlas, Zeynep

    2013-03-01

    This paper presents statistical procedures for improving the goodness-of-fit testing of theoretical models to data obtained from laboratory experiments. We use an experimental study in the expectation states research tradition which has been carried out in the "standardized experimental situation" associated with the program to illustrate the application of our procedures. We briefly review the expectation states research program and the fundamentals of resampling statistics as we develop our procedures in the resampling context. The first procedure we develop is a modification of the chi-square test which has been the primary statistical tool for assessing goodness of fit in the EST research program, but has problems associated with its use. We discuss these problems and suggest a procedure to overcome them. The second procedure we present, the "Average Absolute Deviation" test, is a new test and is proposed as an alternative to the chi-square test, being simpler and more informative. The third and fourth procedures are permutation versions of Jonckheere's test for ordered alternatives, and Kendall's tau(b), a rank order correlation coefficient. The fifth procedure is a new rank order goodness-of-fit test, which we call the "Deviation from Ideal Ranking" index, which we believe may be more useful than other rank order tests for assessing goodness-of-fit of models to experimental data. The application of these procedures to the sample data is illustrated in detail. We then present another laboratory study from an experimental paradigm different from the expectation states paradigm - the "network exchange" paradigm, and describe how our procedures may be applied to this data set. Copyright © 2012 Elsevier Inc. All rights reserved.
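
    The "Average Absolute Deviation" idea is only described verbally above. The sketch below illustrates the general spirit under stated assumptions: the statistic is the mean absolute difference between observed and model-predicted category proportions, and a Monte Carlo null distribution is generated by simulating data from the model. The multinomial resampling scheme and the counts are placeholders; the paper's exact permutation procedure is not reproduced here.

      # Resampling goodness-of-fit sketch using an average-absolute-deviation statistic.
      import numpy as np

      def aad(observed_counts, model_probs):
          props = observed_counts / observed_counts.sum()
          return np.mean(np.abs(props - model_probs))

      def resampling_gof(observed_counts, model_probs, n_sim=10000, seed=1):
          rng = np.random.default_rng(seed)
          n = int(observed_counts.sum())
          t_obs = aad(observed_counts, model_probs)
          sims = rng.multinomial(n, model_probs, size=n_sim)   # data simulated from the model
          t_sim = np.array([aad(s, model_probs) for s in sims])
          return t_obs, np.mean(t_sim >= t_obs)                # statistic and Monte Carlo p-value

      obs = np.array([42, 31, 17, 10])                         # hypothetical observed counts
      probs = np.array([0.40, 0.30, 0.20, 0.10])               # model-predicted probabilities
      print(resampling_gof(obs, probs))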

  8. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis

    PubMed Central

    Lin, Johnny; Bentler, Peter M.

    2012-01-01

    Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square, but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne’s asymptotically distribution-free method and Satorra-Bentler’s mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds a new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra-Bentler’s statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby’s study of students tested for their ability in five content areas, under either open- or closed-book conditions, were used to illustrate the real-world performance of this statistic. PMID:23144511
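
    The abstract describes scaling the mean and adjusting the degrees of freedom from the skewness of the test statistic. The sketch below illustrates only the moment-matching idea, assuming an empirical sample of the statistic (for instance from simulation or bootstrap): a chi-square with k degrees of freedom has skewness sqrt(8/k), so k is chosen from the observed skewness and the statistic is rescaled so its mean matches k. This is not the authors' derivation, which works from model-implied moments; it is a hypothetical illustration of the same principle.

      # Moment-matching sketch of a mean scaled and skewness adjusted reference distribution.
      import numpy as np
      from scipy import stats

      def mean_skew_adjust(statistic_sample, observed_value):
          m = np.mean(statistic_sample)
          g1 = stats.skew(statistic_sample)
          k = 8.0 / g1**2                                 # df chosen to match skewness
          c = k / m                                       # scale chosen to match the mean
          p_value = stats.chi2.sf(c * observed_value, df=k)
          return k, c, p_value

      # Toy demonstration: the "statistic" is drawn from a skewed distribution.
      rng = np.random.default_rng(2)
      sample = rng.gamma(shape=2.5, scale=3.0, size=5000)
      print(mean_skew_adjust(sample, observed_value=20.0))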

  9. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis.

    PubMed

    Lin, Johnny; Bentler, Peter M

    2012-01-01

    Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square, but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra-Bentler's mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds a new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra-Bentler's statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas, under either open- or closed-book conditions, were used to illustrate the real-world performance of this statistic.

  10. Towards good practice for health statistics: lessons from the Millennium Development Goal health indicators.

    PubMed

    Murray, Christopher J L

    2007-03-10

    Health statistics are at the centre of an increasing number of worldwide health controversies. Several factors are sharpening the tension between the supply and demand for high quality health information, and the health-related Millennium Development Goals (MDGs) provide a high-profile example. With thousands of indicators recommended but few measured well, the worldwide health community needs to focus its efforts on improving measurement of a small set of priority areas. Priority indicators should be selected on the basis of public-health significance and several dimensions of measurability. Health statistics can be divided into three types: crude, corrected, and predicted. Health statistics are necessary inputs to planning and strategic decision making, programme implementation, monitoring progress towards targets, and assessment of what works and what does not. Crude statistics that are biased have no role in any of these steps; corrected statistics are preferred. For strategic decision making, when corrected statistics are unavailable, predicted statistics can play an important part. For monitoring progress towards agreed targets and assessment of what works and what does not, however, predicted statistics should not be used. Perhaps the most effective method to decrease controversy over health statistics and to encourage better primary data collection and the development of better analytical methods is a strong commitment to provision of an explicit data audit trail. This initiative would make available the primary data, all post-data collection adjustments, models including covariates used for farcasting and forecasting, and necessary documentation to the public.

  11. Statistical issues in quality control of proteomic analyses: good experimental design and planning.

    PubMed

    Cairns, David A

    2011-03-01

    Quality control is becoming increasingly important in proteomic investigations as experiments become more multivariate and quantitative. Quality control applies to all stages of an investigation and statistics can play a key role. In this review, the role of statistical ideas in the design and planning of an investigation is described. This involves the design of unbiased experiments using key concepts from statistical experimental design, the understanding of the biological and analytical variation in a system using variance components analysis and the determination of a required sample size to perform a statistically powerful investigation. These concepts are described through simple examples and an example data set from a 2-D DIGE pilot experiment. Each of these concepts can prove useful in producing better and more reproducible data. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
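
    Where the review mentions determining a required sample size for a statistically powerful investigation, a minimal sketch might look like the following. The effect size, significance level and target power are placeholder values (for instance, an effect size estimated from a 2-D DIGE pilot via variance components), not values from the paper.

      # Sample-size calculation for a two-group comparison at a chosen power.
      from statsmodels.stats.power import TTestIndPower

      analysis = TTestIndPower()
      n_per_group = analysis.solve_power(effect_size=0.8,    # Cohen's d guessed from pilot data
                                         alpha=0.05,
                                         power=0.80,
                                         alternative='two-sided')
      print(f"required sample size per group: {n_per_group:.1f}")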

  12. Statistical Methods for the Analysis of Discrete Choice Experiments: A Report of the ISPOR Conjoint Analysis Good Research Practices Task Force.

    PubMed

    Hauber, A Brett; González, Juan Marcos; Groothuis-Oudshoorn, Catharina G M; Prior, Thomas; Marshall, Deborah A; Cunningham, Charles; IJzerman, Maarten J; Bridges, John F P

    2016-06-01

    Conjoint analysis is a stated-preference survey method that can be used to elicit responses that reveal preferences, priorities, and the relative importance of individual features associated with health care interventions or services. Conjoint analysis methods, particularly discrete choice experiments (DCEs), have been increasingly used to quantify preferences of patients, caregivers, physicians, and other stakeholders. Recent consensus-based guidance on good research practices, including two recent task force reports from the International Society for Pharmacoeconomics and Outcomes Research, has aided in improving the quality of conjoint analyses and DCEs in outcomes research. Nevertheless, uncertainty regarding good research practices for the statistical analysis of data from DCEs persists. There are multiple methods for analyzing DCE data. Understanding the characteristics and appropriate use of different analysis methods is critical to conducting a well-designed DCE study. This report will assist researchers in evaluating and selecting among alternative approaches to conducting statistical analysis of DCE data. We first present a simplistic DCE example and a simple method for using the resulting data. We then present a pedagogical example of a DCE and one of the most common approaches to analyzing data from such a question format: conditional logit. We then describe some common alternative methods for analyzing these data and the strengths and weaknesses of each alternative. We present the ESTIMATE checklist, which includes a list of questions to consider when justifying the choice of analysis method, describing the analysis, and interpreting the results. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
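
    The conditional logit model mentioned above can be sketched compactly: each choice task presents J alternatives described by attribute vectors, and the probability of choosing alternative j is exp(x_j'b) / sum_k exp(x_k'b). The data layout, attribute dimensions and simulated preference weights below are hypothetical, intended only to show the structure of the likelihood.

      # Minimal conditional logit fit by maximum likelihood for simulated DCE data.
      import numpy as np
      from scipy.optimize import minimize

      def neg_log_likelihood(beta, X, chosen):
          """X: (n_tasks, n_alternatives, n_attributes); chosen: index of picked alternative."""
          utilities = X @ beta                               # (n_tasks, n_alternatives)
          utilities -= utilities.max(axis=1, keepdims=True)  # numerical stability
          log_probs = utilities - np.log(np.exp(utilities).sum(axis=1, keepdims=True))
          return -log_probs[np.arange(len(chosen)), chosen].sum()

      rng = np.random.default_rng(3)
      n_tasks, n_alt, n_attr = 200, 3, 4
      X = rng.normal(size=(n_tasks, n_alt, n_attr))          # simulated attribute levels
      true_beta = np.array([1.0, -0.5, 0.8, 0.2])
      probs = np.exp(X @ true_beta)
      probs /= probs.sum(axis=1, keepdims=True)
      chosen = np.array([rng.choice(n_alt, p=p) for p in probs])

      fit = minimize(neg_log_likelihood, np.zeros(n_attr), args=(X, chosen), method="BFGS")
      print("estimated preference weights:", np.round(fit.x, 2))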

  13. Statistical Systems with Z

    NASA Astrophysics Data System (ADS)

    William, Peter

    In this dissertation several two-dimensional statistical systems exhibiting discrete Z(n) symmetries are studied. For this purpose a newly developed algorithm to compute the partition function of these models exactly is utilized. The zeros of the partition function are examined in order to obtain information about the observable quantities at the critical point. This occurs in the form of critical exponents of the order parameters which characterize phenomena at the critical point. The correlation length exponent is found to agree very well with those computed from strong coupling expansions for the mass gap and with Monte Carlo results. In Feynman's path integral formalism the partition function of a statistical system can be related to the vacuum expectation value of the time-ordered product of the observable quantities of the corresponding field theoretic model. Hence a generalization of ordinary scale invariance in the form of conformal invariance is focussed upon. This principle is very suitably applicable in the case of two-dimensional statistical models undergoing second-order phase transitions at criticality. The conformal anomaly specifies the universality class to which these models belong. From an evaluation of the partition function, the free energy at criticality is computed, to determine the conformal anomaly of these models. The conformal anomalies for all the models considered here are in good agreement with the predicted values.

  14. The Statistics of Visual Representation

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J.; Rahman, Zia-Ur; Woodell, Glenn A.

    2002-01-01

    The experience of retinex image processing has prompted us to reconsider fundamental aspects of imaging and image processing. Foremost is the idea that a good visual representation requires a non-linear transformation of the recorded (approximately linear) image data. Further, this transformation appears to converge on a specific distribution. Here we investigate the connection between numerical and visual phenomena. Specifically the questions explored are: (1) Is there a well-defined consistent statistical character associated with good visual representations? (2) Does there exist an ideal visual image? And (3) what are its statistical properties?

  15. Look good feel better workshops: a "big lift" for women with cancer.

    PubMed

    Taggart, Linda R; Ozolins, Laura; Hardie, Heather; Nyhof-Young, Joyce

    2009-01-01

    Look Good Feel Better (LGFB) aims to help women manage appearance-related side effects of cancer and its treatment. In this pilot study, we assessed the impact of LGFB workshops on self-image, social interactions, perceived social support, and anxiety. We administered scales preworkshop and postworkshop participation. We conducted semistructured telephone interviews following attendance. Statistically and qualitatively, subjects experienced significant improvement in self-image, social interaction, and anxiety. Participant anxiety decreased, but greater social support was anticipated than actually obtained. LGFB workshops increase self-image, improve social interactions, and reduce anxiety.

  16. [The research protocol VI: How to choose the appropriate statistical test. Inferential statistics].

    PubMed

    Flores-Ruiz, Eric; Miranda-Novales, María Guadalupe; Villasís-Keever, Miguel Ángel

    2017-01-01

    The statistical analysis can be divided into two main components: descriptive analysis and inferential analysis. Inference means drawing conclusions from the tests performed on data obtained from a sample of a population. Statistical tests are used in order to establish the probability that a conclusion obtained from a sample is applicable to the population from which it was obtained. However, choosing the appropriate statistical test in general poses a challenge for novice researchers. To choose the statistical test, it is necessary to take into account three aspects: the research design, the number of measurements and the scale of measurement of the variables. Statistical tests are divided into two sets, parametric and nonparametric. Parametric tests can only be used if the data show a normal distribution. Choosing the right statistical test will make it easier for readers to understand and apply the results.
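
    For the parametric-versus-nonparametric decision described above, a minimal sketch of one common workflow (check normality first, then pick the test) follows. The normality threshold and the toy data are illustrative only; the paper itself lists the decision criteria rather than code.

      # Choose between a parametric and a nonparametric two-group test.
      import numpy as np
      from scipy import stats

      def compare_two_groups(a, b, alpha=0.05):
          normal = (stats.shapiro(a).pvalue > alpha) and (stats.shapiro(b).pvalue > alpha)
          if normal:
              test_name, result = "Student's t-test", stats.ttest_ind(a, b)
          else:
              test_name, result = "Mann-Whitney U", stats.mannwhitneyu(a, b)
          return test_name, result.pvalue

      rng = np.random.default_rng(4)
      group_a = rng.normal(10.0, 2.0, size=30)
      group_b = rng.exponential(10.0, size=30)               # clearly non-normal group
      print(compare_two_groups(group_a, group_b))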

  17. EVALUATION OF A NEW MEAN SCALED AND MOMENT ADJUSTED TEST STATISTIC FOR SEM.

    PubMed

    Tong, Xiaoxiao; Bentler, Peter M

    2013-01-01

    Recently a new mean scaled and skewness adjusted test statistic was developed for evaluating structural equation models in small samples and with potentially nonnormal data, but this statistic has received only limited evaluation. The performance of this statistic is compared to normal theory maximum likelihood and two well-known robust test statistics. A modification to the Satorra-Bentler scaled statistic is developed for the condition that sample size is smaller than degrees of freedom. The behavior of the four test statistics is evaluated with a Monte Carlo confirmatory factor analysis study that varies seven sample sizes and three distributional conditions obtained using Headrick's fifth-order transformation to nonnormality. The new statistic performs badly in most conditions except under the normal distribution. The goodness-of-fit χ(2) test based on maximum-likelihood estimation performed well under normal distributions as well as under a condition of asymptotic robustness. The Satorra-Bentler scaled test statistic performed best overall, while the mean scaled and variance adjusted test statistic outperformed the others at small and moderate sample sizes under certain distributional conditions.

  18. Weighted Statistical Binning: Enabling Statistically Consistent Genome-Scale Phylogenetic Analyses

    PubMed Central

    Bayzid, Md Shamsuzzoha; Mirarab, Siavash; Boussau, Bastien; Warnow, Tandy

    2015-01-01

    Because biological processes can result in different loci having different evolutionary histories, species tree estimation requires multiple loci from across multiple genomes. While many processes can result in discord between gene trees and species trees, incomplete lineage sorting (ILS), modeled by the multi-species coalescent, is considered to be a dominant cause for gene tree heterogeneity. Coalescent-based methods have been developed to estimate species trees, many of which operate by combining estimated gene trees, and so are called "summary methods". Because summary methods are generally fast (and much faster than more complicated coalescent-based methods that co-estimate gene trees and species trees), they have become very popular techniques for estimating species trees from multiple loci. However, recent studies have established that summary methods can have reduced accuracy in the presence of gene tree estimation error, and also that many biological datasets have substantial gene tree estimation error, so that summary methods may not be highly accurate in biologically realistic conditions. Mirarab et al. (Science 2014) presented the "statistical binning" technique to improve gene tree estimation in multi-locus analyses, and showed that it improved the accuracy of MP-EST, one of the most popular coalescent-based summary methods. Statistical binning, which uses a simple heuristic to evaluate "combinability" and then uses the larger sets of genes to re-calculate gene trees, has good empirical performance, but using statistical binning within a phylogenomic pipeline does not have the desirable property of being statistically consistent. We show that weighting the re-calculated gene trees by the bin sizes makes statistical binning statistically consistent under the multispecies coalescent, and maintains the good empirical performance. Thus, "weighted statistical binning" enables highly accurate genome-scale species tree estimation, and is also statistically consistent.
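
    The weighting step is simple enough to sketch: after binning groups loci and a single tree is re-estimated per bin, the weighted variant hands each bin's tree to the summary method replicated once per locus in the bin (i.e. weighted by bin size). The tree-estimation call and the summary-method hand-off below are placeholders, not real APIs from the paper's pipeline.

      # Weighted statistical binning: replicate each bin's re-estimated tree by its bin size.
      from typing import Callable, List, Sequence

      def weighted_binning_trees(bins: Sequence[Sequence[str]],
                                 estimate_tree: Callable[[Sequence[str]], str]) -> List[str]:
          weighted_trees = []
          for loci in bins:
              tree = estimate_tree(loci)                  # one supergene tree per bin
              weighted_trees.extend([tree] * len(loci))   # replicate by bin size
          return weighted_trees                           # input to a summary method such as MP-EST

      # Toy usage with a dummy estimator that just labels the bin.
      bins = [["g1", "g2", "g3"], ["g4"], ["g5", "g6"]]
      print(weighted_binning_trees(bins, estimate_tree=lambda loci: f"tree({'+'.join(loci)})"))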

  19. Path attenuation statistics influenced by orientation of rain cells

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1976-01-01

    The influence of path azimuth on fade and space diversity statistics associated with propagation along earth-satellite paths at a frequency of 18 GHz is examined. A radar rain reflectivity data base obtained during the summer of 1973 is injected into a modeling program and the attenuations along parallel earth-satellite paths are obtained for a conglomeration of azimuths. Statistics are separated into two groupings: one pertaining to earth-satellite paths oriented in the northwest-southeast and the other in the northeast-southwest quadrants using a fixed elevation angle of 45 deg. The latter case shows greater fading and degraded space diversity, suggesting that rain cells are elongated along this direction. Cell dimensions are analyzed for both sets of quadrants and are found to have average values larger by 2 km in the northeast-southwest quadrants, a result consistent with the fade and space diversity results. Examination of the wind direction for the 14 rain days of data analyzed shows good correlation of the average or median wind directions with the directions of maximum fading and degraded space diversity.

  20. Effects of the magnetic field direction on the Tsallis statistic

    NASA Astrophysics Data System (ADS)

    González-Casanova, Diego F.; Lazarian, A.; Cho, J.

    2018-04-01

    We extend the use of the Tsallis statistic to measure the differences in gas dynamics relative to the mean magnetic field present from natural eddy-type motions existing in magnetohydrodynamical (MHD) turbulence. The variation in gas dynamics was estimated using the Tsallis parameters on the incremental probability distribution function of the observables (intensity and velocity centroid) obtained from compressible MHD simulations. We find that the Tsallis statistic is susceptible to the anisotropy produced by the magnetic field; even when anisotropy is present, the Tsallis statistic can be used to determine MHD parameters such as the sonic Mach number. We quantify the goodness of the Tsallis parameters using the coefficient of determination to measure the differences in the gas dynamics. These parameters also determine the level of magnetization and compressibility of the medium. To further simulate realistic spectroscopic observational data, we introduced smoothing, noise, and cloud boundaries to the MHD simulations.

  1. Statistics of lattice animals

    NASA Astrophysics Data System (ADS)

    Hsu, Hsiao-Ping; Nadler, Walder; Grassberger, Peter

    2005-07-01

    The scaling behavior of randomly branched polymers in a good solvent is studied in two to nine dimensions, modeled by lattice animals on simple hypercubic lattices. For the simulations, we use a biased sequential sampling algorithm with re-sampling, similar to the pruned-enriched Rosenbluth method (PERM) used extensively for linear polymers. We obtain high statistics of animals with up to several thousand sites in all dimensions 2⩽d⩽9. The partition sum (number of different animals) and gyration radii are estimated. In all dimensions we verify the Parisi-Sourlas prediction, and we verify all exactly known critical exponents in dimensions 2, 3, 4, and ⩾8. In addition, we present the hitherto most precise estimates for growth constants in d⩾3. For clusters with one site attached to an attractive surface, we verify the superuniversality of the cross-over exponent at the adsorption transition predicted by Janssen and Lyssy.

  2. Zubarev's Nonequilibrium Statistical Operator Method in the Generalized Statistics of Multiparticle Systems

    NASA Astrophysics Data System (ADS)

    Glushak, P. A.; Markiv, B. B.; Tokarchuk, M. V.

    2018-01-01

    We present a generalization of Zubarev's nonequilibrium statistical operator method based on the principle of maximum Renyi entropy. In the framework of this approach, we obtain transport equations for the basic set of parameters of the reduced description of nonequilibrium processes in a classical system of interacting particles using Liouville equations with fractional derivatives. For a classical system of particles in a medium with a fractal structure, we obtain a non-Markovian diffusion equation with fractional spatial derivatives. For a concrete model of the frequency dependence of a memory function, we obtain a generalized Cattaneo-type diffusion equation with the spatial and temporal fractality taken into account. We present a generalization of nonequilibrium thermofield dynamics in Zubarev's nonequilibrium statistical operator method in the framework of Renyi statistics.

  3. Statistical theory of dynamo

    NASA Astrophysics Data System (ADS)

    Kim, E.; Newton, A. P.

    2012-04-01

    numbers obtained in recent years 1795-1995 on a short time scale. Monte Carlo simulations are performed on these data to obtain PDFs of the solar activity on both long and short time scales. These PDFs are then compared with predicted PDFs from numerical simulation of our α-Ω dynamo model, where α is assumed to have both a mean part α0 and a fluctuating part α'. By varying the correlation time of the fluctuating α', the ratio of the amplitude of the fluctuating part to the mean part <α'^2>/α0^2 (where angular brackets denote an ensemble average), and the ratio of poloidal to toroidal magnetic fields, we show that the results from our stochastic dynamo model can match the PDFs of solar activity on both long and short time scales. In particular, good agreement is obtained when the fluctuation in alpha is roughly equal to the mean part, with a correlation time shorter than the solar period.

  4. Does bad inference drive out good?

    PubMed

    Marozzi, Marco

    2015-07-01

    The (mis)use of statistics in practice is widely debated, and a field where the debate is particularly active is medicine. Many scholars emphasize that a large proportion of published medical research contains statistical errors. It has been noted that top-class journals like Nature Medicine and The New England Journal of Medicine publish a considerable proportion of papers that contain statistical errors and poorly document the application of statistical methods. This paper joins the debate on the (mis)use of statistics in the medical literature. Even though the validation process of a statistical result may be quite elusive, a careful assessment of underlying assumptions is central in medicine as well as in other fields where a statistical method is applied. Unfortunately, a careful assessment of underlying assumptions is missing in many papers, including those published in top-class journals. In this paper, it is shown that nonparametric methods are good alternatives to parametric methods when the assumptions for the latter are not satisfied. A key step towards solving the problem of the misuse of statistics in the medical literature is for all journals to have their own statisticians review the statistical methods/analysis section of each submitted paper. © 2015 Wiley Publishing Asia Pty Ltd.

  5. RAId_DbS: Peptide Identification using Database Searches with Realistic Statistics

    PubMed Central

    Alves, Gelio; Ogurtsov, Aleksey Y; Yu, Yi-Kuo

    2007-01-01

    Background The key to mass-spectrometry-based proteomics is peptide identification. A major challenge in peptide identification is to obtain realistic E-values when assigning statistical significance to candidate peptides. Results Using a simple scoring scheme, we propose a database search method with theoretically characterized statistics. Taking into account possible skewness in the random variable distribution and the effect of finite sampling, we provide a theoretical derivation for the tail of the score distribution. For every experimental spectrum examined, we collect the scores of peptides in the database, and find good agreement between the collected score statistics and our theoretical distribution. Using Student's t-tests, we quantify the degree of agreement between the theoretical distribution and the score statistics collected. These t-tests may be used to measure the reliability of reported statistics. When combined with the P-value reported for a peptide hit under a score distribution model, this new measure prevents exaggerated statistics. Another feature of RAId_DbS is its capability of detecting multiple co-eluted peptides. The peptide identification performance and statistical accuracy of RAId_DbS are assessed and compared with several other search tools. The executables and data related to RAId_DbS are freely available upon request. PMID:17961253

  6. [Clinical research=design*measurements*statistical analyses].

    PubMed

    Furukawa, Toshiaki

    2012-06-01

    A clinical study must address true endpoints that matter for the patients and the doctors. A good clinical study starts with a good clinical question. Formulating a clinical question in the form of PECO can sharpen one's original question. In order to perform a good clinical study one must have a knowledge of study design, measurements and statistical analyses: The first is taught by epidemiology, the second by psychometrics and the third by biostatistics.

  7. Allelic frequencies and statistical data obtained from 48 AIM INDEL loci in an admixed population from the Brazilian Amazon.

    PubMed

    Francez, Pablo Abdon da Costa; Ribeiro-Rodrigues, Elzemar Martins; dos Santos, Sidney Emanuel Batista

    2012-01-01

    Allelic frequencies of 48 ancestry-informative insertion/deletion (INDEL) loci were obtained from a sample set of 130 unrelated individuals living in Macapá, a city located in the northern Amazon region, in Brazil. The values of heterozygosity (H), polymorphic information content (PIC), power of discrimination (PD), power of exclusion (PE), matching probability (MP) and typical paternity index (TPI) were calculated and showed the forensic efficiency of these genetic markers. Based on the allele frequencies obtained for the population of Macapá, we estimated an interethnic admixture for the three parental groups (European, Native American and African) of, respectively, 50%, 21% and 29%. Comparing these allele frequencies with those of other Brazilian populations and the parental populations, statistically significant distances were found. The interpopulation genetic distances (F(ST) coefficients) to the present database ranged from F(ST)=0.0431 (p<0.00001) between Macapá and Belém to F(ST)=0.266 (p<0.00001) between Macapá and the Native American group. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
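
    Two of the summary measures listed above can be computed directly from allele frequencies at a locus: expected heterozygosity H = 1 - sum(p_i^2) and the polymorphic information content PIC = 1 - sum(p_i^2) - sum over i<j of 2 p_i^2 p_j^2. The sketch below uses illustrative frequencies, not values from the paper.

      # Expected heterozygosity and PIC from allele frequencies at one locus.
      import numpy as np

      def heterozygosity(p):
          p = np.asarray(p, dtype=float)
          return 1.0 - np.sum(p**2)

      def pic(p):
          p = np.asarray(p, dtype=float)
          cross = sum(2 * p[i]**2 * p[j]**2
                      for i in range(len(p)) for j in range(i + 1, len(p)))
          return 1.0 - np.sum(p**2) - cross

      freqs = [0.55, 0.45]                      # a hypothetical biallelic INDEL locus
      print(f"H = {heterozygosity(freqs):.3f}, PIC = {pic(freqs):.3f}")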

  8. A smoothed residual based goodness-of-fit statistic for nest-survival models

    Treesearch

    Rodney X. Sturdivant; Jay J. Rotella; Robin E. Russell

    2008-01-01

    Estimating nest success and identifying important factors related to nest-survival rates is an essential goal for many wildlife researchers interested in understanding avian population dynamics. Advances in statistical methods have led to a number of estimation methods and approaches to modeling this problem. Recently developed models allow researchers to include a...

  9. Earth-space links and fade-duration statistics

    NASA Technical Reports Server (NTRS)

    Davarian, Faramaz

    1995-01-01

    In recent years, fade-duration statistics have been the subject of several experimental investigations. A good knowledge of the fade-duration distribution is important for the assessment of a satellite communication system's channel dynamics: What is a typical link outage duration? How often do link outages exceeding a given duration occur? Unfortunately there is yet no model that can universally answer the above questions. The available field measurements mainly come from temperate climatic zones and only from a few sites. Furthermore, the available statistics are also limited in the choice of frequency and path elevation angle. Yet, much can be learned from the available information. For example, we now know that the fade-duration distribution is approximately lognormal. Under certain conditions, we can even determine the median and other percentiles of the distribution. This paper reviews the available data obtained by several experimenters in different parts of the world. Areas of emphasis are mobile and fixed satellite links. Fades in mobile links are due to roadside-tree shadowing, whereas fades in fixed links are due to rain attenuation.
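
    Since the abstract notes that the fade-duration distribution is approximately lognormal and that the median and other percentiles can sometimes be determined, a minimal fitting sketch follows. The simulated durations stand in for measured fade data; nothing here reproduces the paper's measurements.

      # Fit a lognormal distribution to fade durations and read off percentiles.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      fade_durations = rng.lognormal(mean=2.0, sigma=0.9, size=500)   # seconds (toy data)

      shape, loc, scale = stats.lognorm.fit(fade_durations, floc=0)
      median = stats.lognorm.median(shape, loc=loc, scale=scale)
      p90 = stats.lognorm.ppf(0.90, shape, loc=loc, scale=scale)
      print(f"median fade duration: {median:.1f} s, 90th percentile: {p90:.1f} s")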

  10. Earth-Space Links and Fade-Duration Statistics

    NASA Technical Reports Server (NTRS)

    Davarian, Faramaz

    1996-01-01

    In recent years, fade-duration statistics have been the subject of several experimental investigations. A good knowledge of the fade-duration distribution is important for the assessment of a satellite communication system's channel dynamics: What is a typical link outage duration? How often do link outages exceeding a given duration occur? Unfortunately there is yet no model that can universally answer the above questions. The available field measurements mainly come from temperate climatic zones and only from a few sites. Furthermore, the available statistics are also limited in the choice of frequency and path elevation angle. Yet, much can be learned from the available information. For example, we now know that the fade-duration distribution is approximately lognormal. Under certain conditions, we can even determine the median and other percentiles of the distribution. This paper reviews the available data obtained by several experimenters in different parts of the world. Areas of emphasis are mobile and fixed satellite links. Fades in mobile links are due to roadside-tree shadowing, whereas fades in fixed links are due to rain attenuation.

  11. A novel approach for choosing summary statistics in approximate Bayesian computation.

    PubMed

    Aeschbacher, Simon; Beaumont, Mark A; Futschik, Andreas

    2012-11-01

    The choice of summary statistics is a crucial step in approximate Bayesian computation (ABC). Since statistics are often not sufficient, this choice involves a trade-off between loss of information and reduction of dimensionality. The latter may increase the efficiency of ABC. Here, we propose an approach for choosing summary statistics based on boosting, a technique from the machine-learning literature. We consider different types of boosting and compare them to partial least-squares regression as an alternative. To mitigate the lack of sufficiency, we also propose an approach for choosing summary statistics locally, in the putative neighborhood of the true parameter value. We study a demographic model motivated by the reintroduction of Alpine ibex (Capra ibex) into the Swiss Alps. The parameters of interest are the mean and standard deviation across microsatellites of the scaled ancestral mutation rate (θ(anc) = 4N(e)u) and the proportion of males obtaining access to matings per breeding season (ω). By simulation, we assess the properties of the posterior distribution obtained with the various methods. According to our criteria, ABC with summary statistics chosen locally via boosting with the L(2)-loss performs best. Applying that method to the ibex data, we estimate θ(anc)≈ 1.288 and find that most of the variation across loci of the ancestral mutation rate u is between 7.7 × 10(-4) and 3.5 × 10(-3) per locus per generation. The proportion of males with access to matings is estimated as ω≈ 0.21, which is in good agreement with recent independent estimates.
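
    To make the role of summary statistics in ABC concrete, here is a minimal rejection-ABC sketch with a fixed set of summaries; the boosting-based selection described above is not reproduced. The toy model (estimating the mean of a normal distribution from the sample mean and standard deviation), the prior, and the acceptance fraction are all illustrative assumptions.

      # Rejection ABC with user-supplied summary statistics.
      import numpy as np

      def abc_rejection(observed, simulate, prior_draw, summaries, n_sim=50000,
                        accept_fraction=0.01, seed=6):
          rng = np.random.default_rng(seed)
          s_obs = summaries(observed)
          params, dists = [], []
          for _ in range(n_sim):
              theta = prior_draw(rng)
              s_sim = summaries(simulate(theta, rng))
              params.append(theta)
              dists.append(np.linalg.norm(s_sim - s_obs))
          cutoff = np.quantile(dists, accept_fraction)
          return np.array(params)[np.array(dists) <= cutoff]      # approximate posterior sample

      observed = np.random.default_rng(0).normal(1.3, 1.0, size=100)
      posterior = abc_rejection(
          observed,
          simulate=lambda theta, rng: rng.normal(theta, 1.0, size=100),
          prior_draw=lambda rng: rng.uniform(-5, 5),
          summaries=lambda x: np.array([np.mean(x), np.std(x)]),
      )
      print(f"posterior mean ~ {posterior.mean():.2f}")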

  12. A Novel Approach for Choosing Summary Statistics in Approximate Bayesian Computation

    PubMed Central

    Aeschbacher, Simon; Beaumont, Mark A.; Futschik, Andreas

    2012-01-01

    The choice of summary statistics is a crucial step in approximate Bayesian computation (ABC). Since statistics are often not sufficient, this choice involves a trade-off between loss of information and reduction of dimensionality. The latter may increase the efficiency of ABC. Here, we propose an approach for choosing summary statistics based on boosting, a technique from the machine-learning literature. We consider different types of boosting and compare them to partial least-squares regression as an alternative. To mitigate the lack of sufficiency, we also propose an approach for choosing summary statistics locally, in the putative neighborhood of the true parameter value. We study a demographic model motivated by the reintroduction of Alpine ibex (Capra ibex) into the Swiss Alps. The parameters of interest are the mean and standard deviation across microsatellites of the scaled ancestral mutation rate (θ(anc) = 4N(e)u) and the proportion of males obtaining access to matings per breeding season (ω). By simulation, we assess the properties of the posterior distribution obtained with the various methods. According to our criteria, ABC with summary statistics chosen locally via boosting with the L2-loss performs best. Applying that method to the ibex data, we estimate θ̂(anc) ≈ 1.288 and find that most of the variation across loci of the ancestral mutation rate u is between 7.7 × 10(-4) and 3.5 × 10(-3) per locus per generation. The proportion of males with access to matings is estimated as ω̂ ≈ 0.21, which is in good agreement with recent independent estimates. PMID:22960215

  13. Good experimental design and statistics can save animals, but how can it be promoted?

    PubMed

    Festing, Michael F W

    2004-06-01

    Surveys of published papers show that there are many errors both in the design of the experiments and in the statistical analysis of the resulting data. This must result in a waste of animals and scientific resources, and it is surely unethical. Scientific quality might be improved, to some extent, by journal editors, but they are constrained by lack of statistical referees and inadequate statistical training of those referees that they do use. Other parties, such as welfare regulators, ethical review committees and individual scientists also have an interest in scientific quality, but they do not seem to be well placed to make the required changes. However, those who fund research would have the power to do something if they could be convinced that it is in their best interests to do so. More examples of the way in which better experimental design has led to improved experiments would be helpful in persuading these funding organisations to take further action.

  14. Radar prediction of absolute rain fade distributions for earth-satellite paths and general methods for extrapolation of fade statistics to other locations

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1982-01-01

    The first absolute rain fade distribution method described establishes absolute fade statistics at a given site by means of a sampled radar data base. The second method extrapolates absolute fade statistics from one location to another, given simultaneously measured fade and rain rate statistics at the former. Both methods employ similar conditional fade statistic concepts and long term rain rate distributions. Probability deviations in the 2-19% range, with an 11% average, were obtained upon comparison of measured and predicted levels at given attenuations. The extrapolation of fade distributions to other locations at 28 GHz showed very good agreement with measured data at three sites located in the continental temperate region.

  15. Statistical analysis of NaOH pretreatment effects on sweet sorghum bagasse characteristics

    NASA Astrophysics Data System (ADS)

    Putri, Ary Mauliva Hada; Wahyuni, Eka Tri; Sudiyani, Yanni

    2017-01-01

    We analyze the behavior of sweet sorghum bagasse characteristics before and after NaOH pretreatments by statistical analysis. These characteristics include the percentages of lignocellulosic materials and the degree of crystallinity. We use the chi-square method to obtain the values of the fitted parameters, and then deploy Student's t-test to check whether they are significantly different from zero at the 99.73% confidence level (C.L.). We find that, in the cases of hemicellulose and lignin, the percentages after pretreatment decrease statistically. On the other hand, crystallinity does not show similar behavior, as the data indicate that all fitted parameters in this case might be consistent with zero. Our statistical result is then cross-checked against the observations from X-ray diffraction (XRD) and Fourier transform infrared (FTIR) spectroscopy, showing good agreement. This result may indicate that the 10% NaOH pretreatment might not be sufficient in changing the crystallinity index of the sweet sorghum bagasse.
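
    The check described above (a chi-square/least-squares fit followed by a test of whether a fitted parameter differs from zero at the 3-sigma level) can be sketched generically. The linear model and the data points below are placeholders, not the paper's measurements.

      # Least-squares fit of a trend parameter and a 3-sigma test against zero.
      import numpy as np
      from scipy.optimize import curve_fit
      from scipy import stats

      def linear(x, slope, intercept):
          return slope * x + intercept

      x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])               # e.g. pretreatment step (toy)
      y = np.array([21.0, 17.5, 15.2, 12.9, 10.1])          # e.g. % hemicellulose (toy)
      popt, pcov = curve_fit(linear, x, y)
      slope, slope_se = popt[0], np.sqrt(pcov[0, 0])

      t_stat = slope / slope_se
      df = len(x) - 2
      p_value = 2 * stats.t.sf(abs(t_stat), df)
      print(f"slope = {slope:.2f} +/- {slope_se:.2f}, "
            f"different from zero at 99.73% C.L.: {p_value < 1 - 0.9973}")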

  16. Comparing international crash statistics

    DOT National Transportation Integrated Search

    1999-12-01

    In order to examine national developments in traffic safety, crash statistics from several of the more ... United States. Data obtained from the Fatality Analysis Reporting System (FARS) and the Internati...

  17. Prediction of transmission loss through an aircraft sidewall using statistical energy analysis

    NASA Astrophysics Data System (ADS)

    Ming, Ruisen; Sun, Jincai

    1989-06-01

    The transmission loss of randomly incident sound through an aircraft sidewall is investigated using statistical energy analysis. Formulas are also obtained for the simple calculation of sound transmission loss through single- and double-leaf panels. Both resonant and nonresonant sound transmissions can be easily calculated using the formulas. The formulas are used to predict sound transmission losses through a Y-7 propeller airplane panel. The panel measures 2.56 m x 1.38 m and has two windows. The agreement between predicted and measured values through most of the frequency ranges tested is quite good.

  18. Modelling category goodness judgments in children with residual sound errors.

    PubMed

    Dugan, Sarah Hamilton; Silbert, Noah; McAllister, Tara; Preston, Jonathan L; Sotto, Carolyn; Boyce, Suzanne E

    2018-05-24

    This study investigates category goodness judgments of /r/ in adults and children with and without residual speech errors (RSEs) using natural speech stimuli. Thirty adults, 38 children with RSE (ages 7-16) and 35 age-matched typically developing (TD) children provided category goodness judgments on whole words, recorded from 27 child speakers, with /r/ in various phonetic environments. The salient acoustic property of /r/ - the lowered third formant (F3) - was normalized in two ways. A logistic mixed-effect model quantified the relationships between listeners' responses and the third formant frequency, vowel context and clinical group status. Goodness judgments from the adult group showed a statistically significant interaction with the F3 parameter when compared to both child groups (p < 0.001) using both normalization methods. The RSE group did not differ significantly from the TD group in judgments of /r/. All listeners were significantly more likely to judge /r/ as correct in a front-vowel context. Our results suggest that normalized /r/ F3 is a statistically significant predictor of category goodness judgments for both adults and children, but children do not appear to make adult-like judgments. Category goodness judgments do not have a clear relationship with /r/ production abilities in children with RSE. These findings may have implications for clinical activities that include category goodness judgments in natural speech, especially for recorded productions.

  19. Statistical inference based on the nonparametric maximum likelihood estimator under double-truncation.

    PubMed

    Emura, Takeshi; Konno, Yoshihiko; Michimae, Hirofumi

    2015-07-01

    Doubly truncated data consist of samples whose observed values fall between the right- and left-truncation limits. With such samples, the distribution function of interest is estimated using the nonparametric maximum likelihood estimator (NPMLE) that is obtained through a self-consistency algorithm. Owing to the complicated asymptotic distribution of the NPMLE, the bootstrap method has been suggested for statistical inference. This paper proposes a closed-form estimator for the asymptotic covariance function of the NPMLE, which is a computationally attractive alternative to bootstrapping. Furthermore, we develop various statistical inference procedures, such as confidence intervals, goodness-of-fit tests, and confidence bands, to demonstrate the usefulness of the proposed covariance estimator. Simulations are performed to compare the proposed method with both the bootstrap and jackknife methods. The methods are illustrated using the childhood cancer dataset.

  20. Survey mode matters: adults' self-reported statistical confidence, ability to obtain health information, and perceptions of patient-health-care provider communication.

    PubMed

    Wallace, Lorraine S; Chisolm, Deena J; Abdel-Rasoul, Mahmoud; DeVoe, Jennifer E

    2013-08-01

    This study examined adults' self-reported understanding and formatting preferences of medical statistics, confidence in self-care and ability to obtain health advice or information, and perceptions of patient-health-care provider communication measured through dual survey modes (random digit dial and mail). Even while controlling for sociodemographic characteristics, significant differences in regard to adults' responses to survey variables emerged as a function of survey mode. While the analyses do not allow us to pinpoint the underlying causes of the differences observed, they do suggest that mode of administration should be carefully adjusted for and considered.

  1. Testing the statistical compatibility of independent data sets

    NASA Astrophysics Data System (ADS)

    Maltoni, M.; Schwetz, T.

    2003-08-01

    We discuss a goodness-of-fit method which tests the compatibility between statistically independent data sets. The method gives sensible results even in cases where the χ2 minima of the individual data sets are very low or when several parameters are fitted to a large number of data points. In particular, it avoids the problem that a possible disagreement between data sets becomes diluted by data points which are insensitive to the crucial parameters. A formal derivation of the probability distribution function for the proposed test statistics is given, based on standard theorems of statistics. The application of the method is illustrated on data from neutrino oscillation experiments, and its complementarity to the standard goodness-of-fit is discussed.
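
    Under my reading of the compatibility test described above, the statistic is the global chi-square minimum minus the sum of the individual data-set minima, compared to a chi-square distribution whose degrees of freedom count fitted parameters rather than data points. The sketch below only assembles these numbers; the chi-square minimizations themselves, and all the example values, are hypothetical inputs.

      # Parameter goodness-of-fit style compatibility test from per-data-set chi-square minima.
      from scipy import stats

      def parameter_gof(chi2_global_min, chi2_individual_mins, n_params_per_set, n_params_total):
          stat = chi2_global_min - sum(chi2_individual_mins)
          dof = sum(n_params_per_set) - n_params_total
          return stat, dof, stats.chi2.sf(stat, dof)

      # Hypothetical numbers: two data sets, each depending on both of 2 shared parameters.
      stat, dof, p = parameter_gof(chi2_global_min=12.4,
                                   chi2_individual_mins=[3.1, 4.8],
                                   n_params_per_set=[2, 2],
                                   n_params_total=2)
      print(f"PG statistic = {stat:.1f}, dof = {dof}, p = {p:.3f}")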

  2. Statistical mechanics of Fermi-Pasta-Ulam chains with the canonical ensemble

    NASA Astrophysics Data System (ADS)

    Demirel, Melik C.; Sayar, Mehmet; Atılgan, Ali R.

    1997-03-01

    Low-energy vibrations of a Fermi-Pasta-Ulam-β (FPU-β) chain with 16 repeat units are analyzed with the aid of numerical experiments and the statistical mechanics equations of the canonical ensemble. Constant temperature numerical integrations are performed by employing the cubic coupling scheme of Kusnezov et al. [Ann. Phys. 204, 155 (1990)]. Very good agreement is obtained between numerical results and theoretical predictions for the probability distributions of the generalized coordinates and momenta both of the chain and of the thermal bath. It is also shown that the average energy of the chain scales linearly with the bath temperature.

  3. Goodness-of-Fit Tests for Generalized Normal Distribution for Use in Hydrological Frequency Analysis

    NASA Astrophysics Data System (ADS)

    Das, Samiran

    2018-04-01

    The use of the three-parameter generalized normal (GNO) distribution as a hydrological frequency distribution is well recognized, but its application is limited due to the unavailability of popular goodness-of-fit (GOF) test statistics. This study develops popular empirical distribution function (EDF)-based test statistics to investigate the goodness-of-fit of the GNO distribution. The focus is on the case most relevant to the hydrologist, namely, that in which the parameter values are unknown and estimated from a sample using the method of L-moments. The widely used EDF tests such as Kolmogorov-Smirnov, Cramér-von Mises, and Anderson-Darling (AD) are considered in this study. A modified version of AD, namely, the Modified Anderson-Darling (MAD) test, is also considered and its performance is assessed against other EDF tests using a power study that incorporates six specific Wakeby distributions (WA-1, WA-2, WA-3, WA-4, WA-5, and WA-6) as the alternative distributions. The critical values of the proposed test statistics are approximated using Monte Carlo techniques and are summarized in chart and regression equation form to show the dependence on shape parameter and sample size. The performance results obtained from the power study suggest that the AD and a variant of the MAD (MAD-L) are the most powerful tests. Finally, the study performs case studies involving annual maximum flow data of selected gauged sites from Irish and US catchments to show the application of the derived critical values and recommends further assessments to be carried out on flow data sets of rivers with various hydrological regimes.
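
    The Monte Carlo approximation of critical values mentioned above has a simple structure: when parameters are re-estimated from each simulated sample, the null distribution of the EDF statistic must itself be simulated. For brevity the sketch below uses a normal distribution fitted by maximum likelihood as a stand-in for the GNO distribution fitted by L-moments; the simulation loop is the same in shape, and all settings are illustrative.

      # Monte Carlo critical value for an Anderson-Darling test with estimated parameters.
      import numpy as np
      from scipy import stats

      def anderson_darling(x, cdf):
          x = np.sort(x)
          n = len(x)
          f = np.clip(cdf(x), 1e-12, 1 - 1e-12)
          i = np.arange(1, n + 1)
          return -n - np.mean((2 * i - 1) * (np.log(f) + np.log(1 - f[::-1])))

      def mc_critical_value(n, alpha=0.05, n_sim=2000, seed=7):
          rng = np.random.default_rng(seed)
          sim_stats = []
          for _ in range(n_sim):
              sample = rng.normal(size=n)
              mu, sigma = np.mean(sample), np.std(sample, ddof=1)   # parameters re-estimated
              sim_stats.append(anderson_darling(sample, lambda x: stats.norm.cdf(x, mu, sigma)))
          return np.quantile(sim_stats, 1 - alpha)

      print(f"AD critical value (n=50, alpha=0.05): {mc_critical_value(50):.3f}")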

  4. Universal Recurrence Time Statistics of Characteristic Earthquakes

    NASA Astrophysics Data System (ADS)

    Goltz, C.; Turcotte, D. L.; Abaimov, S.; Nadeau, R. M.

    2006-12-01

    Characteristic earthquakes are defined to occur quasi-periodically on major faults. Do recurrence time statistics of such earthquakes follow a particular statistical distribution? If so, which one? The answer is fundamental and has important implications for hazard assessment. The problem cannot be solved by comparing the goodness of statistical fits as the available sequences are too short. The Parkfield sequence of M ≍ 6 earthquakes, one of the most extensive reliable data sets available, has grown to merely seven events with the last earthquake in 2004, for example. Recently, however, advances in seismological monitoring and improved processing methods have unveiled so-called micro-repeaters, micro-earthquakes which recur exactly in the same location on a fault. It seems plausible to regard these earthquakes as a miniature version of the classic characteristic earthquakes. Micro-repeaters are much more frequent than major earthquakes, leading to longer sequences for analysis. Due to their recent discovery, however, available sequences contain less than 20 events at present. In this paper we present results for the analysis of recurrence times for several micro-repeater sequences from Parkfield and adjacent regions. To improve the statistical significance of our findings, we combine several sequences into one by rescaling the individual sets by their respective mean recurrence intervals and Weibull exponents. This novel approach of rescaled combination yields the most extensive data set possible. We find that the resulting statistics can be fitted well by an exponential distribution, confirming the universal applicability of the Weibull distribution to characteristic earthquakes. A similar result is obtained from rescaled combination, however, with regard to the lognormal distribution.

  5. Probabilities and statistics for backscatter estimates obtained by a scatterometer

    NASA Technical Reports Server (NTRS)

    Pierson, Willard J., Jr.

    1989-01-01

    Methods for the recovery of winds near the surface of the ocean from measurements of the normalized radar backscattering cross section must recognize and make use of the statistics (i.e., the sampling variability) of the backscatter measurements. Radar backscatter values from a scatterometer are random variables with expected values given by a model. A model relates backscatter to properties of the waves on the ocean, which are in turn generated by the winds in the atmospheric marine boundary layer. The effective wind speed and direction at a known height for a neutrally stratified atmosphere are the values to be recovered from the model. The probability density function for the backscatter values is a normal probability distribution with the notable feature that the variance is a known function of the expected value. The sources of signal variability, the effects of this variability on the wind speed estimation, and criteria for the acceptance or rejection of models are discussed. A modified maximum likelihood method for estimating wind vectors is described. Ways to make corrections for the kinds of errors found for the Seasat SASS model function are described, and applications to a new scatterometer are given.

  6. Quick Statistics

    MedlinePlus

    ... population, or about 25 million Americans, has experienced tinnitus lasting at least five minutes in the past ... by NIDCD Epidemiology and Statistics Program staff: (1) tinnitus prevalence was obtained from the 2008 National Health ...

  7. Statistical testing of association between menstruation and migraine.

    PubMed

    Barra, Mathias; Dahl, Fredrik A; Vetvik, Kjersti G

    2015-02-01

    To repair and refine a previously proposed method for statistical analysis of association between migraine and menstruation. Menstrually related migraine (MRM) affects about 20% of female migraineurs in the general population. The exact pathophysiological link from menstruation to migraine is hypothesized to be through fluctuations in female reproductive hormones, but the exact mechanisms remain unknown. Therefore, the main diagnostic criterion today is concurrency of migraine attacks with menstruation. Methods aiming to exclude spurious associations are wanted, so that further research into these mechanisms can be performed on a population with a true association. The statistical method is based on a simple two-parameter null model of MRM (which allows for simulation modeling), and Fisher's exact test (with mid-p correction) applied to standard 2 × 2 contingency tables derived from the patients' headache diaries. Our method is a corrected version of a previously published flawed framework. To the best of our knowledge, no other published methods for establishing a menstruation-migraine association by statistical means exist today. The probabilistic methodology shows good performance when subjected to receiver operating characteristic curve analysis. Quick reference cutoff values for the clinical setting were tabulated for assessing association given a patient's headache history. In this paper, we correct a proposed method for establishing association between menstruation and migraine by statistical methods. We conclude that the proposed standard of 3-cycle observations prior to setting an MRM diagnosis should be extended with at least one perimenstrual window to obtain sufficient information for statistical processing. © 2014 American Headache Society.
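
    A common definition of the mid-p correction mentioned above subtracts half the point probability of the observed 2 × 2 table from the exact p-value. The sketch below applies that definition to a diary-style table; the counts and the table layout (attack/no attack by perimenstrual/other days) are illustrative, not taken from the paper.

      # Fisher's exact test with a mid-p correction on a 2x2 contingency table.
      import numpy as np
      from scipy import stats

      def fisher_mid_p(table):
          table = np.asarray(table)
          _, p_exact = stats.fisher_exact(table, alternative="two-sided")
          a = table[0, 0]
          n1, n2 = table[0].sum(), table[1].sum()            # row totals
          k = table[:, 0].sum()                              # first column total
          point_prob = stats.hypergeom.pmf(a, n1 + n2, n1, k)
          return p_exact - 0.5 * point_prob                  # mid-p value

      diary_table = [[8, 4],    # attacks: perimenstrual window, other days (toy counts)
                     [12, 36]]  # no attack: perimenstrual window, other days
      print(f"mid-p = {fisher_mid_p(diary_table):.4f}")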

  8. Regulatory theory: commercially sustainable markets rely upon satisfying the public interest in obtaining credible goods.

    PubMed

    Warren-Jones, Amanda

    2017-10-01

    Regulatory theory is premised on the failure of markets, prompting a focus on regulators and industry from economic perspectives. This article argues that overlooking the public interest in the sustainability of commercial markets risks markets failing completely. This point is exemplified through health care markets - meeting an essential need - and focuses upon innovative medicines as the most desired products in that market. If this seemingly invulnerable market risks failure, there is a pressing need to consider the public interest in sustainable markets within regulatory literature and practice. Innovative medicines are credence goods, meaning that the sustainability of the market fundamentally relies upon the public trusting regulators to vouch for product quality. Yet, quality is being eroded by patent bodies focused on economic benefits from market growth, rather than ensuring innovatory value. Remunerative bodies are not funding medicines relative to market value, and market authorisation bodies are not vouching for robust safety standards or confining market entry to products for 'unmet medical need'. Arguably, this failure to assure quality heightens the risk of the market failing where it cannot be substituted by the reputation or credibility of providers of goods and/or information such as health care professionals/institutions, patient groups or industry.

  9. Equivalent statistics and data interpretation.

    PubMed

    Francis, Gregory

    2017-08-01

    Recent reform efforts in psychological science have led to a plethora of choices for scientists to analyze their data. A scientist making an inference about their data must now decide whether to report a p value, summarize the data with a standardized effect size and its confidence interval, report a Bayes Factor, or use other model comparison methods. To make good choices among these options, it is necessary for researchers to understand the characteristics of the various statistics used by the different analysis frameworks. Toward that end, this paper makes two contributions. First, it shows that for the case of a two-sample t test with known sample sizes, many different summary statistics are mathematically equivalent in the sense that they are based on the very same information in the data set. When the sample sizes are known, the p value provides as much information about a data set as the confidence interval of Cohen's d or a JZS Bayes factor. Second, this equivalence means that different analysis methods differ only in their interpretation of the empirical data. At first glance, it might seem that mathematical equivalence of the statistics suggests that it does not matter much which statistic is reported, but the opposite is true because the appropriateness of a reported statistic is relative to the inference it promotes. Accordingly, scientists should choose an analysis method appropriate for their scientific investigation. A direct comparison of the different inferential frameworks provides some guidance for scientists to make good choices and improve scientific practice.
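
    The equivalence claim above can be made concrete for a two-sample t-test with known group sizes: both the two-sided p-value and Cohen's d are deterministic functions of the same t statistic, so each summary carries the same information about the data. The group sizes and simulated data below are placeholders for illustration.

      # Recover the p-value and Cohen's d from the t statistic alone.
      import numpy as np
      from scipy import stats

      def equivalent_summaries(t, n1, n2):
          df = n1 + n2 - 2
          p = 2 * stats.t.sf(abs(t), df)                 # two-sided p-value from t
          d = t * np.sqrt(1.0 / n1 + 1.0 / n2)           # Cohen's d recovered from t
          return p, d

      rng = np.random.default_rng(8)
      a, b = rng.normal(0.5, 1, 40), rng.normal(0.0, 1, 40)
      t, _ = stats.ttest_ind(a, b)
      print("p and d recovered from t alone:", equivalent_summaries(t, len(a), len(b)))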

  10. Correlation between radio-induced lymphocyte apoptosis measurements obtained from two French centres.

    PubMed

    Mirjolet, C; Merlin, J L; Dalban, C; Maingon, P; Azria, D

    2016-07-01

    In the era of modern treatment delivery, increasing the dose delivered to the target to improve local control might be modulated by the patient's intrinsic radio-sensitivity. A predictive assay based on radio-induced lymphocyte apoptosis quantification highlighted the significant correlation between CD4 and CD8 T-lymphocyte apoptosis and grade 2 or 3 radiation-induced late toxicities. Because this assay is conducted at several technical platforms, the aim of this study was to demonstrate that radio-induced lymphocyte apoptosis values obtained from two different platforms were comparable. For 25 patients included in the PARATOXOR trial running in Dijon, the radio-induced lymphocyte apoptosis results obtained from the laboratory of Montpellier (IRCM, Inserm U1194, France), considered the reference (referred to as Lab 1), were compared with those from the laboratory located at the Institut de cancérologie de Lorraine (ICL, France), referred to as Lab 2. Different statistical methods were used to measure the agreement between the radio-induced lymphocyte apoptosis data from the two laboratories (quantitative data). The Bland-Altman plot was used to identify potential bias. All statistical tests demonstrated good agreement between radio-induced lymphocyte apoptosis values obtained from both sites and no major bias was identified. Since radio-induced lymphocyte apoptosis values, which predict tolerance to radiotherapy, could be assessed by two laboratories and showed a high level of robustness and consistency, we can suggest that this assay be extended to any laboratories that use the same technique. Copyright © 2016 Société française de radiothérapie oncologique (SFRO). Published by Elsevier SAS. All rights reserved.
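
    As a minimal sketch of the Bland-Altman agreement analysis mentioned above (not the study's actual code), the snippet below computes the bias and the 95% limits of agreement for paired measurements from two laboratories; the example values are invented.

    ```python
    import numpy as np

    def bland_altman(lab1, lab2):
        """Bias and 95% limits of agreement between paired measurements."""
        lab1, lab2 = np.asarray(lab1, float), np.asarray(lab2, float)
        diff = lab2 - lab1
        bias = diff.mean()                             # systematic offset between labs
        sd = diff.std(ddof=1)
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

    # Hypothetical per-patient apoptosis values (%) from the two laboratories
    lab1 = [12.1, 18.4, 9.7, 22.3, 15.0, 11.2]
    lab2 = [11.8, 19.1, 10.2, 21.7, 14.6, 11.9]
    print(bland_altman(lab1, lab2))
    ```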

  11. The ethics of big data as a public good: which public? Whose good?

    PubMed

    Taylor, Linnet

    2016-12-28

    International development and humanitarian organizations are increasingly calling for digital data to be treated as a public good because of its value in supplementing scarce national statistics and informing interventions, including in emergencies. In response to this claim, a 'responsible data' movement has evolved to discuss guidelines and frameworks that will establish ethical principles for data sharing. However, this movement is not gaining traction with those who hold the highest-value data, particularly mobile network operators who are proving reluctant to make data collected in low- and middle-income countries accessible through intermediaries. This paper evaluates how the argument for 'data as a public good' fits with the corporate reality of big data, exploring existing models for data sharing. I draw on the idea of corporate data as an ecosystem involving often conflicting rights, duties and claims, in comparison to the utilitarian claim that data's humanitarian value makes it imperative to share them. I assess the power dynamics implied by the idea of data as a public good, and how differing incentives lead actors to adopt particular ethical positions with regard to the use of data.This article is part of the themed issue 'The ethical impact of data science'. © 2016 The Author(s).

  12. Good Mathematics Teaching from Mexican High School Students' Perspective

    ERIC Educational Resources Information Center

    Martinez-Sierra, Gustavo

    2014-01-01

    This paper reports a qualitative research that identifies the characteristics of good mathematics teaching from the perspective of Mexican high school students. For this purpose, the social representations of a good mathematics teacher and a good mathematics class were identified in a group of 67 students. In order to obtain information, a…

  13. Interpretation of statistical results.

    PubMed

    García Garmendia, J L; Maroto Monserrat, F

    2018-02-21

    The appropriate interpretation of statistical results is crucial to understanding advances in medical science. Statistical tools allow us to transform the uncertainty and apparent chaos of nature into measurable parameters that are applicable to our clinical practice. Understanding the meaning and actual scope of these instruments is essential for researchers, research funders and professionals who require permanent updating based on good evidence and support for decision making. Various aspects of study designs, results and statistical analysis are reviewed, aiming to facilitate their comprehension from the basics to concepts that are common but poorly understood, and offering a constructive, non-exhaustive but realistic view. Copyright © 2018 Elsevier España, S.L.U. y SEMICYUC. All rights reserved.

  14. Air Combat Training: Good Stick Index Validation. Final Report for Period 3 April 1978-1 April 1979.

    ERIC Educational Resources Information Center

    Moore, Samuel B.; And Others

    A study was conducted to investigate and statistically validate a performance measuring system (the Good Stick Index) in the Tactical Air Command Combat Engagement Simulator I (TAC ACES I) Air Combat Maneuvering (ACM) training program. The study utilized a twelve-week sample of eighty-nine student pilots to statistically validate the Good Stick…

  15. Pasta nucleosynthesis: Molecular dynamics simulations of nuclear statistical equilibrium

    NASA Astrophysics Data System (ADS)

    Caplan, M. E.; Schneider, A. S.; Horowitz, C. J.; Berry, D. K.

    2015-06-01

    Background: Exotic nonspherical nuclear pasta shapes are expected in nuclear matter at just below saturation density because of competition between short-range nuclear attraction and long-range Coulomb repulsion. Purpose: We explore the impact nuclear pasta may have on nucleosynthesis during neutron star mergers when cold dense nuclear matter is ejected and decompressed. Methods: We use a hybrid CPU/GPU molecular dynamics (MD) code to perform decompression simulations of cold dense matter with 51 200 and 409 600 nucleons from 0.080 fm^-3 down to 0.00125 fm^-3. Simulations are run for proton fractions Y_P = 0.05, 0.10, 0.20, 0.30, and 0.40 at temperatures T = 0.5, 0.75, and 1.0 MeV. The final composition of each simulation is obtained using a cluster algorithm and compared to a constant-density run. Results: Sizes of nuclei in the final state of decompression runs are in good agreement with nuclear statistical equilibrium (NSE) models for temperatures of 1 MeV, while constant-density runs produce nuclei smaller than those obtained with NSE. Our MD simulations produce unphysical results with large rod-like nuclei in the final state of the T = 0.5 MeV runs. Conclusions: Our MD model is valid at higher densities than simple nuclear statistical equilibrium models and may help determine the initial temperatures and proton fractions of matter ejected in mergers.

  16. Comparative analysis on the probability of being a good payer

    NASA Astrophysics Data System (ADS)

    Mihova, V.; Pavlov, V.

    2017-10-01

    Credit risk assessment is crucial for the banking industry. The current practice uses various approaches for the calculation of credit risk. The core of these approaches is the use of multiple regression models, applied in order to assess the risk associated with the approval of people applying for certain products (loans, credit cards, etc.). Based on data from the past, these models try to predict what will happen in the future. Different data require different types of models. This work studies the causal link between the conduct of an applicant upon payment of the loan and the data completed at the time of application. A database of 100 borrowers from a commercial bank is used for the purposes of the study. The available data includes information from the time of application and credit history while paying off the loan. Customers are divided into two groups, based on the credit history: Good and Bad payers. Linear and logistic regression are applied in parallel to the data in order to estimate the probability of being good for new borrowers. A variable that takes the value 1 for Good borrowers and 0 for Bad candidates is modeled as the dependent variable. To decide which of the variables listed in the database should be used in the modelling process (as independent variables), a correlation analysis is made. Based on its results, several combinations of independent variables are tested as initial models - both with linear and logistic regression. The best linear and logistic models are obtained after initial transformation of the data and following a set of standard and robust statistical criteria. A comparative analysis between the two final models is made and scorecards are obtained from both models to assess new customers at the time of application. A cut-off level of points, below which applications are rejected and above which they are accepted, has been suggested for both models, applying the strategy to keep the same Accept Rate as
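
    A minimal sketch of the logistic branch of such a scorecard is shown below; the applicant features, labels, point scaling and accept rate are entirely hypothetical and are not the study's data or model.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical applicant features (e.g. age, income, months at employer) and
    # labels: 1 = Good payer, 0 = Bad payer
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    y = (X @ np.array([0.8, 1.2, -0.5]) + rng.normal(size=100) > 0).astype(int)

    model = LogisticRegression().fit(X, y)
    prob_good = model.predict_proba(X)[:, 1]     # estimated probability of being Good

    # Toy scorecard: rescale probabilities to points and choose a cut-off that keeps
    # a target accept rate (here 70% of applicants accepted)
    points = np.round(300 + 300 * prob_good).astype(int)
    cutoff = np.quantile(points, 1 - 0.70)
    print("cut-off:", cutoff, "accept rate:", (points >= cutoff).mean())
    ```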

  17. Quantification and Statistical Analysis Methods for Vessel Wall Components from Stained Images with Masson's Trichrome

    PubMed Central

    Hernández-Morera, Pablo; Castaño-González, Irene; Travieso-González, Carlos M.; Mompeó-Corredera, Blanca; Ortega-Santana, Francisco

    2016-01-01

    Purpose To develop a digital image processing method to quantify structural components (smooth muscle fibers and extracellular matrix) in the vessel wall stained with Masson’s trichrome, and a statistical method suitable for small sample sizes to analyze the results previously obtained. Methods The quantification method comprises two stages. The pre-processing stage improves tissue image appearance and the vessel wall area is delimited. In the feature extraction stage, the vessel wall components are segmented by grouping pixels with a similar color. The area of each component is calculated by normalizing the number of pixels of each group by the vessel wall area. Statistical analyses are implemented by permutation tests, based on resampling without replacement from the set of the observed data to obtain a sampling distribution of an estimator. The implementation can be parallelized on a multicore machine to reduce execution time. Results The methods have been tested on 48 vessel wall samples of the internal saphenous vein stained with Masson’s trichrome. The results show that the segmented areas are consistent with the perception of a team of doctors and demonstrate good correlation between the expert judgments and the measured parameters for evaluating vessel wall changes. Conclusion The proposed methodology offers a powerful tool to quantify some components of the vessel wall. It is more objective, sensitive and accurate than the biochemical and qualitative methods traditionally used. The permutation tests are suitable statistical techniques to analyze the numerical measurements obtained when the underlying assumptions of the other statistical techniques are not met. PMID:26761643
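
    A minimal sketch of the kind of permutation test described above (resampling group labels without replacement), with hypothetical area fractions for two small groups, could look like this.

    ```python
    import numpy as np

    def permutation_test(x, y, n_perm=10000, seed=0):
        """Two-sample permutation test on the difference of means, resampling the
        group labels without replacement (suitable for small samples)."""
        rng = np.random.default_rng(seed)
        x, y = np.asarray(x, float), np.asarray(y, float)
        observed = x.mean() - y.mean()
        pooled = np.concatenate([x, y])
        count = 0
        for _ in range(n_perm):
            perm = rng.permutation(pooled)
            stat = perm[:len(x)].mean() - perm[len(x):].mean()
            if abs(stat) >= abs(observed):
                count += 1
        return observed, (count + 1) / (n_perm + 1)

    # Hypothetical smooth muscle area fractions for two small groups of vessel walls
    print(permutation_test([0.42, 0.39, 0.45, 0.41], [0.35, 0.38, 0.33, 0.36, 0.34]))
    ```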

  18. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network

    PubMed Central

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-01

    A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. ASTF is proposed to obtain weak fault features under background noise; it is based on statistical hypothesis testing in the frequency domain to evaluate the similarity between a reference signal (noise signal) and the original signal and to remove the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis. In this way, good SPs with high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method. PMID:26761006

  19. Intelligent Condition Diagnosis Method Based on Adaptive Statistic Test Filter and Diagnostic Bayesian Network.

    PubMed

    Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing

    2016-01-08

    A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. ASTF is proposed to obtain weak fault features under background noise; it is based on statistical hypothesis testing in the frequency domain to evaluate the similarity between a reference signal (noise signal) and the original signal and to remove the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis. In this way, good SPs with high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method.

  20. Comparison of biometric measurements obtained by the Verion Image-Guided System versus the auto-refracto-keratometer.

    PubMed

    Velasco-Barona, Cecilio; Cervantes-Coste, Guadalupe; Mendoza-Schuster, Erick; Corredor-Ortega, Claudia; Casillas-Chavarín, Nadia L; Silva-Moreno, Alejandro; Garza-León, Manuel; Gonzalez-Salinas, Roberto

    2018-06-01

    To compare the biometric measurements obtained from the Verion Image-Guided System to those obtained by auto-refracto-keratometer in normal eyes. This is a prospective, observational, comparative study conducted at the Asociación para Evitar la Ceguera en México I.A.P., Mexico. Three sets of keratometry measurements were obtained using the image-guided system to assess the coefficient of variation, the within-subject standard deviation and intraclass correlation coefficient (ICC). A paired Student t test was used to assess statistical significance between the Verion and the auto-refracto-keratometer. A Pearson's correlation coefficient (r) was obtained for all measurements, and the level of agreement was verified using Bland-Altman plots. The right eyes of 73 patients were evaluated by each platform. The Verion coefficient of variation was 0.3% for the flat and steep keratometry, with the ICC being greater than 0.9 for all parameters measured. Paired t test showed statistically significant differences between groups (P = 0.0001). A good correlation was evidenced for keratometry values between platforms (r = 0.903, P = 0.0001 for K1, and r = 0.890, P = 0.0001). Bland-Altman plots showed a wide data spread for all variables. The image-guided system provided highly repeatable corneal power and keratometry measurements. However, significant differences were evidenced between the two platforms, and although values were highly correlated, they showed a wide data spread for all analysed variables; therefore, their interchangeable use for biometry assessment is not advisable.

  1. Statistical modeling of competitive threshold collision-induced dissociation

    NASA Astrophysics Data System (ADS)

    Rodgers, M. T.; Armentrout, P. B.

    1998-08-01

    Collision-induced dissociation of (R1OH)Li+(R2OH) with xenon is studied using guided ion beam mass spectrometry. R1OH and R2OH include the following molecules: water, methanol, ethanol, 1-propanol, 2-propanol, and 1-butanol. In all cases, the primary products formed correspond to endothermic loss of one of the neutral alcohols, with minor products that include those formed by ligand exchange and loss of both ligands. The cross-section thresholds are interpreted to yield 0 and 298 K bond energies for (R1OH)Li+-R2OH and relative Li+ binding affinities of the R1OH and R2OH ligands after accounting for the effects of multiple ion-molecule collisions, internal energy of the reactant ions, and dissociation lifetimes. We introduce a means to simultaneously analyze the cross sections for these competitive dissociations using statistical theories to predict the energy dependent branching ratio. Thermochemistry in good agreement with previous work is obtained in all cases. In essence, this statistical approach provides a detailed means of correcting for the "competitive shift" inherent in multichannel processes.

  2. Statistical correlations in an ideal gas of particles obeying fractional exclusion statistics.

    PubMed

    Pellegrino, F M D; Angilella, G G N; March, N H; Pucci, R

    2007-12-01

    After a brief discussion of the concepts of fractional exchange and fractional exclusion statistics, we report partly analytical and partly numerical results on thermodynamic properties of assemblies of particles obeying fractional exclusion statistics. The effect of dimensionality is one focal point, the ratio μ/k_BT of chemical potential to thermal energy being obtained numerically as a function of a scaled particle density. Pair correlation functions are also presented as a function of the statistical parameter, with Friedel oscillations developing close to the fermion limit, for sufficiently large density.

  3. IMPLEMENTATION AND VALIDATION OF STATISTICAL TESTS IN RESEARCH'S SOFTWARE HELPING DATA COLLECTION AND PROTOCOLS ANALYSIS IN SURGERY.

    PubMed

    Kuretzki, Carlos Henrique; Campos, Antônio Carlos Ligocki; Malafaia, Osvaldo; Soares, Sandramara Scandelari Kusano de Paula; Tenório, Sérgio Bernardo; Timi, Jorge Rufino Ribas

    2016-03-01

    Information technology is widely applied in healthcare. With regard to scientific research, SINPE(c) - Integrated Electronic Protocols - was created as a tool to support researchers, offering clinical data standardization. At the time, SINPE(c) lacked statistical tests obtained by automatic analysis. The aim was to add to SINPE(c) features for the automatic execution of the main statistical methods used in medicine. The study was divided into four topics: checking users' interest in the implementation of the tests; surveying the frequency of their use in health care; carrying out the implementation; and validating the results with researchers and their protocols. It was applied to a group of users of this software working on their master's and doctoral theses in a postgraduate program in surgery. To assess the reliability of the statistics, the data obtained automatically by SINPE(c) were compared with those computed manually by a statistician experienced in this type of study. There was interest in the use of automatic statistical tests, with good acceptance. The chi-square, Mann-Whitney, Fisher and Student's t tests were considered the tests most frequently used by participants in medical studies. These methods were implemented and subsequently approved as expected. The incorporation of automatic statistical analysis into SINPE(c) was shown to be reliable and equivalent to the manual analysis, validating its use as a tool for medical research.
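
    For orientation, the four tests mentioned above are available in standard scientific Python; the sketch below applies them to hypothetical data and is not the SINPE(c) implementation.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical measurements exported from a surgical protocol database
    group_a = np.array([12.1, 14.3, 11.8, 15.2, 13.0, 12.7])
    group_b = np.array([16.4, 15.9, 17.2, 14.8, 16.1, 15.5])
    table = np.array([[18, 7], [9, 21]])             # 2x2 outcome-by-group counts

    print(stats.ttest_ind(group_a, group_b))         # Student's t test
    print(stats.mannwhitneyu(group_a, group_b))      # Mann-Whitney U test
    chi2, p, dof, expected = stats.chi2_contingency(table)
    print(chi2, p)                                   # chi-square test of independence
    print(stats.fisher_exact(table))                 # Fisher's exact test
    ```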

  4. GOODS Far Infrared Imaging with Herschel

    NASA Astrophysics Data System (ADS)

    Frayer, David T.; Elbaz, D.; Dickinson, M.; GOODS-Herschel Team

    2010-01-01

    Most of the stars in galaxies formed at high redshift in dusty environments, where their energy was absorbed and re-radiated at infrared wavelengths. Similarly, much of the growth of nuclear black holes in active galactic nuclei (AGN) was also obscured from direct view at UV/optical and X-ray wavelengths. The Great Observatories Origins Deep Survey Herschel (GOODS-H) open time key program will obtain the deepest far-infrared view of the distant universe, mapping the history of galaxy growth and AGN activity over a broad swath of cosmic time. GOODS-H will image the GOODS-North field with the PACS and SPIRE instruments at 100 to 500 microns, matching the deep survey of GOODS-South in the guaranteed time key program. GOODS-H will also observe an ultradeep sub-field within GOODS-South with PACS, reaching the deepest flux limits planned for Herschel (0.6 mJy at 100 microns with S/N=5). GOODS-H data will detect thousands of luminous and ultraluminous infrared galaxies out to z=4 or beyond, measuring their far-infrared luminosities and spectral energy distributions, and providing the best constraints on star formation rates and AGN activity during this key epoch of galaxy and black hole growth in the young universe.

  5. Modeling envelope statistics of blood and myocardium for segmentation of echocardiographic images.

    PubMed

    Nillesen, Maartje M; Lopata, Richard G P; Gerrits, Inge H; Kapusta, Livia; Thijssen, Johan M; de Korte, Chris L

    2008-04-01

    The objective of this study was to investigate the use of speckle statistics as a preprocessing step for segmentation of the myocardium in echocardiographic images. Three-dimensional (3D) and biplane image sequences of the left ventricle of two healthy children and one dog (beagle) were acquired. Pixel-based speckle statistics of manually segmented blood and myocardial regions were investigated by fitting various probability density functions (pdf). The statistics of heart muscle and blood could both be optimally modeled by a K-pdf or Gamma-pdf (Kolmogorov-Smirnov goodness-of-fit test). Scale and shape parameters of both distributions could differentiate between blood and myocardium. Local estimation of these parameters was used to obtain parametric images, where window size was related to speckle size (5 x 2 speckles). Moment-based and maximum-likelihood estimators were used. Scale parameters were still able to differentiate blood from myocardium; however, smoothing of edges of anatomical structures occurred. Estimation of the shape parameter required a larger window size, leading to unacceptable blurring. Using these parameters as an input for segmentation resulted in unreliable segmentation. Adaptive mean squares filtering was then introduced using the moment-based scale parameter (sigma(2)/mu) of the Gamma-pdf to automatically steer the two-dimensional (2D) local filtering process. This method adequately preserved sharpness of the edges. In conclusion, a trade-off between preservation of sharpness of edges and goodness-of-fit when estimating local shape and scale parameters is evident for parametric images. For this reason, adaptive filtering outperforms parametric imaging for the segmentation of echocardiographic images.
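
    A minimal sketch of the moment-based parametric image described above, computing the local Gamma scale parameter sigma^2/mu in a sliding window, might look as follows; the window size in pixels, the guard against division by zero, and the synthetic example image are assumptions for illustration.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def gamma_scale_image(envelope, window=(5, 2)):
        """Local moment-based Gamma scale parameter sigma^2/mu over a sliding window.

        'envelope' is an echo envelope image; 'window' is given in pixels here,
        whereas the paper relates the window to the speckle size."""
        img = np.asarray(envelope, float)
        local_mean = uniform_filter(img, size=window)
        local_sq_mean = uniform_filter(img**2, size=window)
        local_var = np.maximum(local_sq_mean - local_mean**2, 0.0)
        return local_var / np.maximum(local_mean, 1e-12)   # Gamma: var/mean = scale

    # Hypothetical envelope image: a brighter 'myocardium' region next to 'blood'
    rng = np.random.default_rng(0)
    img = np.concatenate([rng.gamma(4.0, 2.0, (64, 32)),
                          rng.gamma(4.0, 0.5, (64, 32))], axis=1)
    print(gamma_scale_image(img).mean(axis=0)[:5])
    ```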

  6. Algorithm for computing descriptive statistics for very large data sets and the exa-scale era

    NASA Astrophysics Data System (ADS)

    Beekman, Izaak

    2017-11-01

    An algorithm for Single-point, Parallel, Online, Converging Statistics (SPOCS) is presented. It is suited for in situ analysis that traditionally would be relegated to post-processing, and can be used to monitor statistical convergence and to estimate the error/residual in the quantity, which is also useful for uncertainty quantification. Today, data may be generated at an overwhelming rate by numerical simulations and proliferating sensing apparatuses in experiments and engineering applications. Monitoring descriptive statistics in real time lets costly computations and experiments be gracefully aborted if an error has occurred, and monitoring the level of statistical convergence allows them to be run for the shortest amount of time required to obtain good results. This algorithm extends work by Pébay (Sandia Report SAND2008-6212). Pébay's algorithms are recast into a converging delta formulation, with provably favorable properties. The mean, variance, covariances and arbitrary higher order statistical moments are computed in one pass. The algorithm is tested using Sillero, Jiménez, & Moser's (2013, 2014) publicly available UPM high Reynolds number turbulent boundary layer data set, demonstrating numerical robustness, efficiency and other favorable properties.
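
    The single-pass idea can be illustrated with a minimal online mean/variance update in the spirit of a converging-delta formulation (a sketch, not the SPOCS code itself); the per-sample delta can serve as a crude convergence monitor.

    ```python
    class OnlineStats:
        """Single-pass online mean and variance: each new sample nudges the
        running moments by a delta that shrinks as the statistics converge."""
        def __init__(self):
            self.n, self.mean, self.m2 = 0, 0.0, 0.0

        def update(self, x):
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n           # shrinking correction to the mean
            self.m2 += delta * (x - self.mean)    # running sum of squared deviations
            return delta / self.n                 # usable as a crude convergence monitor

        @property
        def variance(self):
            return self.m2 / (self.n - 1) if self.n > 1 else float("nan")

    s = OnlineStats()
    for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
        s.update(x)
    print(s.mean, s.variance)   # 5.0 and about 4.57
    ```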

  7. Determining the Statistical Power of the Kolmogorov-Smirnov and Anderson-Darling Goodness-of-Fit Tests via Monte Carlo Simulation

    DTIC Science & Technology

    2016-12-01

    Statistical power is the probability of correctly rejecting the null hypothesis when the alternative hypothesis is true. This report determines the statistical power of the Kolmogorov-Smirnov (KS) and Anderson-Darling (AD) goodness-of-fit tests via Monte Carlo simulation and uses real-world data to test the accuracy of the simulation; statistical comparison of these metrics can be necessary when making such a determination.
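
    A minimal sketch of such a Monte Carlo power estimate is shown below; the exponential alternative to normality, the sample size and the number of replications are illustrative assumptions, and the KS p value is only approximate because the parameters are estimated from the data.

    ```python
    import numpy as np
    from scipy import stats

    def mc_power(n=50, n_sim=2000, alpha=0.05, seed=1):
        """Monte Carlo power of the KS and Anderson-Darling tests to reject
        normality when the data are actually exponential (illustrative scenario)."""
        rng = np.random.default_rng(seed)
        ks_reject = ad_reject = 0
        for _ in range(n_sim):
            x = rng.exponential(size=n)
            z = (x - x.mean()) / x.std(ddof=1)           # standardized sample
            if stats.kstest(z, "norm").pvalue < alpha:   # approximate (estimated params)
                ks_reject += 1
            ad = stats.anderson(x, dist="norm")
            if ad.statistic > ad.critical_values[2]:     # 5% critical value
                ad_reject += 1
        return ks_reject / n_sim, ad_reject / n_sim

    print(mc_power())
    ```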

  8. Robust data enables managers to promote good practice.

    PubMed

    Bassett, Sally; Westmore, Kathryn

    2012-11-01

    This is the third in a series of articles examining the components of good corporate governance. The effective and efficient use of information and sources of information is crucial for good governance. This article explores the ways in which boards and management can obtain and use information to monitor performance and promote good practice, and how boards can be assured about the quality of information on which they rely. The final article in this series will look at the role of accountability in corporate governance.

  9. Statistical Model to Analyze Quantitative Proteomics Data Obtained by 18O/16O Labeling and Linear Ion Trap Mass Spectrometry

    PubMed Central

    Jorge, Inmaculada; Navarro, Pedro; Martínez-Acedo, Pablo; Núñez, Estefanía; Serrano, Horacio; Alfranca, Arántzazu; Redondo, Juan Miguel; Vázquez, Jesús

    2009-01-01

    Statistical models for the analysis of protein expression changes by stable isotope labeling are still poorly developed, particularly for data obtained by 16O/18O labeling. Besides, large-scale test experiments to validate the null hypothesis are lacking. Although the study of mechanisms underlying biological actions promoted by vascular endothelial growth factor (VEGF) on endothelial cells is of considerable interest, quantitative proteomics studies on this subject are scarce and have been performed after exposing cells to the factor for long periods of time. In this work we present the largest quantitative proteomics study to date on the short term effects of VEGF on human umbilical vein endothelial cells by 18O/16O labeling. Current statistical models based on normality and variance homogeneity were found unsuitable to describe the null hypothesis in a large scale test experiment performed on these cells, producing false expression changes. A random effects model was developed including four different sources of variance at the spectrum-fitting, scan, peptide, and protein levels. With the new model the number of outliers at scan and peptide levels was negligible in three large scale experiments, and only one false protein expression change was observed in the test experiment among more than 1000 proteins. The new model allowed the detection of significant protein expression changes upon VEGF stimulation for 4 and 8 h. The consistency of the changes observed at 4 h was confirmed by a replica at a smaller scale and further validated by Western blot analysis of some proteins. Most of the observed changes have not been described previously and are consistent with a pattern of protein expression that dynamically changes over time following the evolution of the angiogenic response. With this statistical model the 18O labeling approach emerges as a very promising and robust alternative to perform quantitative proteomics studies at a depth of several thousand proteins

  10. Goodness-Of-Fit Test for Nonparametric Regression Models: Smoothing Spline ANOVA Models as Example.

    PubMed

    Teran Hidalgo, Sebastian J; Wu, Michael C; Engel, Stephanie M; Kosorok, Michael R

    2018-06-01

    Nonparametric regression models do not require the specification of the functional form between the outcome and the covariates. Despite their popularity, the amount of diagnostic statistics, in comparison to their parametric counterparts, is small. We propose a goodness-of-fit test for nonparametric regression models with linear smoother form. In particular, we apply this testing framework to smoothing spline ANOVA models. The test can consider two sources of lack-of-fit: whether covariates that are not currently in the model need to be included, and whether the current model fits the data well. The proposed method derives estimated residuals from the model. Then, statistical dependence is assessed between the estimated residuals and the covariates using the Hilbert-Schmidt independence criterion (HSIC). If dependence exists, the model does not capture all the variability in the outcome associated with the covariates; otherwise, the model fits the data well. The bootstrap is used to obtain p-values. Application of the method is demonstrated with a neonatal mental development data analysis. We demonstrate correct type I error as well as power performance through simulations.
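
    A minimal sketch of an HSIC-based dependence check between residuals and a covariate is given below; it uses a permutation null rather than the paper's bootstrap, RBF kernels with a median-heuristic bandwidth, and the biased HSIC estimator, all of which are illustrative assumptions.

    ```python
    import numpy as np

    def rbf_gram(x, sigma=None):
        """RBF kernel Gram matrix; bandwidth defaults to the median heuristic."""
        x = np.asarray(x, float).reshape(len(x), -1)
        d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
        if sigma is None:
            sigma = np.sqrt(0.5 * np.median(d2[d2 > 0]))
        return np.exp(-d2 / (2 * sigma**2))

    def hsic_perm_test(residuals, covariate, n_perm=2000, seed=0):
        """Permutation p value for dependence between residuals and a covariate,
        using the (biased) HSIC statistic trace(K H L H) / n^2."""
        rng = np.random.default_rng(seed)
        n = len(residuals)
        H = np.eye(n) - np.ones((n, n)) / n
        K, L = rbf_gram(residuals), rbf_gram(covariate)
        HLH = H @ L @ H
        stat = np.trace(K @ HLH) / n**2
        null = np.array([np.trace(K[np.ix_(p, p)] @ HLH) / n**2
                         for p in (rng.permutation(n) for _ in range(n_perm))])
        return stat, (np.sum(null >= stat) + 1) / (n_perm + 1)

    # Hypothetical residuals that still depend (quadratically) on a covariate
    rng = np.random.default_rng(1)
    z = rng.normal(size=60)
    res = z**2 - 1 + 0.3 * rng.normal(size=60)
    print(hsic_perm_test(res, z))
    ```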

  11. A statistical theory for sound radiation and reflection from a duct

    NASA Technical Reports Server (NTRS)

    Cho, Y. C.

    1979-01-01

    A new analytical method is introduced for the study of the sound radiation and reflection from the open end of a duct. The sound is thought of as an aggregation of the quasiparticles-phonons. The motion of the latter is described in terms of the statistical distribution, which is derived from the classical wave theory. The results are in good agreement with the solutions obtained using the Wiener-Hopf technique when the latter is applicable, but the new method is simple and provides straightforward physical interpretation of the problem. Furthermore, it is applicable to a problem involving a duct in which modes are difficult to determine or cannot be defined at all, whereas the Wiener-Hopf technique is not.

  12. Summary statistics in auditory perception.

    PubMed

    McDermott, Josh H; Schemitsch, Michael; Simoncelli, Eero P

    2013-04-01

    Sensory signals are transduced at high resolution, but their structure must be stored in a more compact format. Here we provide evidence that the auditory system summarizes the temporal details of sounds using time-averaged statistics. We measured discrimination of 'sound textures' that were characterized by particular statistical properties, as normally result from the superposition of many acoustic features in auditory scenes. When listeners discriminated examples of different textures, performance improved with excerpt duration. In contrast, when listeners discriminated different examples of the same texture, performance declined with duration, a paradoxical result given that the information available for discrimination grows with duration. These results indicate that once these sounds are of moderate length, the brain's representation is limited to time-averaged statistics, which, for different examples of the same texture, converge to the same values with increasing duration. Such statistical representations produce good categorical discrimination, but limit the ability to discern temporal detail.

  13. Forecasting daily source air quality using multivariate statistical analysis and radial basis function networks.

    PubMed

    Sun, Gang; Hoff, Steven J; Zelle, Brian C; Nelson, Minda A

    2008-12-01

    It is vital to forecast gas and particle matter concentrations and emission rates (GPCER) from livestock production facilities to assess the impact of airborne pollutants on human health, ecological environment, and global warming. Modeling source air quality is a complex process because of abundant nonlinear interactions between GPCER and other factors. The objective of this study was to introduce statistical methods and a radial basis function (RBF) neural network to predict daily source air quality in Iowa swine deep-pit finishing buildings. The results show that four variables (outdoor and indoor temperature, animal units, and ventilation rates) were identified as relatively important model inputs using statistical methods. It can be further demonstrated that only two factors, the environment factor and the animal factor, were capable of explaining more than 94% of the total variability after performing principal component analysis. The introduction of fewer uncorrelated variables to the neural network would result in the reduction of the model structure complexity, minimize computation cost, and eliminate model overfitting problems. The obtained results of RBF network prediction were in good agreement with the actual measurements, with values of the correlation coefficient between 0.741 and 0.995 and very low values of systemic performance indexes for all the models. The good results indicated the RBF network could be trained to model these highly nonlinear relationships. Thus, the RBF neural network technology combined with multivariate statistical methods is a promising tool for air pollutant emissions modeling.

  14. Analysis of statistical misconception in terms of statistical reasoning

    NASA Astrophysics Data System (ADS)

    Maryati, I.; Priatna, N.

    2018-05-01

    Reasoning skill is needed by everyone in the era of globalization, because every person has to be able to manage and use information from all over the world, which can be obtained easily. Statistical reasoning skill is the ability to collect, group, process, and interpret information and to draw conclusions from it. Developing this skill can be done at various levels of education. However, the skill is often low because many people, students included, assume that statistics is just the ability to count and use formulas. Students still have a negative attitude toward courses related to research. The purpose of this research is to analyze students' misconceptions in a descriptive statistics course in relation to statistical reasoning skill. The observation was done by analyzing the results of a misconception test and a statistical reasoning skill test, and by observing the effect of students' misconceptions on statistical reasoning skill. The sample of this research was 32 students of a mathematics education department who had taken the descriptive statistics course. The mean value of the misconception test was 49.7 with a standard deviation of 10.6, whereas the mean value of the statistical reasoning skill test was 51.8 with a standard deviation of 8.5. Taking 65 as the minimum value for standard achievement of course competence, the students' mean values are below this standard. The results of the misconception study indicate which sub-topics should be given attention. Based on the assessment results, students' misconceptions occur in: 1) writing mathematical sentences and symbols correctly, 2) understanding basic definitions, and 3) determining the concept to be used in solving a problem. For statistical reasoning skill, the assessment measured reasoning about: 1) data, 2) representation, 3) statistical format, 4) probability, 5) samples, and 6) association.

  15. ASSESSMENT OF GOOD PRACTICES IN HOSPITAL FOOD SERVICE BY COMPARING EVALUATION TOOLS.

    PubMed

    Macedo Gonçalves, Juliana; Lameiro Rodrigues, Kelly; Santiago Almeida, Ângela Teresinha; Pereira, Giselda Maria; Duarte Buchweitz, Márcia Rúbia

    2015-10-01

    Since food service in hospitals complements medical treatment, it should be produced under proper hygienic and sanitary conditions; it is well known that food-transmitted illnesses affect hospitalized and immunosuppressed patients with greater severity. The objective was to evaluate good practices in hospital food service by comparing assessment instruments. Good practices were evaluated with a verification list following Resolution of Collegiate Directory (RCD) n. 216 of the Brazilian Agency for Sanitary Vigilance. Interpretation of the listed items followed the parameters of RCD 216 and of the Brazilian Association of Collective Meals Enterprises (BACME). Fisher's exact test was applied to detect whether there were statistically significant differences. Cluster analysis of the data was undertaken with the Unweighted Pair-Group Method using Arithmetic Averages, coupled with a correlation study between dissimilarity matrices to verify disagreement between the two methods. Good practice was classified with mean total rates above 75% by both methods. There were statistically significant differences between the services and the food evaluated by the BACME instrument. The hospital food services showed conditions of acceptable good practices. The comparison of interpretation tools based on RCD n. 216 and BACME provided similar results for the two classifications. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.

  16. Hand and goods judgment algorithm based on depth information

    NASA Astrophysics Data System (ADS)

    Li, Mingzhu; Zhang, Jinsong; Yan, Dan; Wang, Qin; Zhang, Ruiqi; Han, Jing

    2016-03-01

    A tablet computer with a depth camera and a color camera is mounted on a traditional shopping cart, and the two cameras capture the interior of the cart. In shopping cart monitoring, it is very important to determine whether a customer is putting goods into or taking them out of the cart. This paper establishes a basic framework for judging whether the hand is empty. It includes a hand extraction process based on depth information, a skin color model built with WPCA (Weighted Principal Component Analysis), an algorithm for judging handheld products based on motion and skin color information, and a statistical process. Within this framework, the first step ensures the integrity of the hand information and effectively avoids the influence of sleeves and other clutter; the second step accurately extracts skin color and eliminates interference from similar colors, is largely insensitive to lighting, and has the advantages of fast computation and high efficiency; and the third step greatly reduces noise interference and improves accuracy.

  17. "Inclusive Working Life" in Norway--experience from "Models of Good Practice" enterprises.

    PubMed

    Lie, Arve

    2008-08-01

    To determine whether enterprises belonging to the Bank of Models of Good Practice were more successful than average Norwegian enterprises in the reduction of sickness absence, promotion of early return to work, and prevention of early retirement. In 2004 we selected 86 enterprises with a total of approximately 90000 employees from the Inclusive Working Life (IWL) Bank of Models of Good Practice. One representative of workers and one of management from each enterprise received a questionnaire on the aims, organization, and the results of the IWL program by mail. Data on sickness absence, use of early retirement, and disability retirement in the 2000-2004 period were collected from the National Insurance Registry. Data on comparable enterprises were obtained from the National Bureau of Statistics. The response rate was 65%. Although the IWL campaign was directed at reducing sickness absence, preventing early retirement, and promoting employment of the functionally impaired, most attention was paid to reducing sickness absence. Sickness absence rate in Models of Good Practice enterprises (8.2%) was higher than in comparable enterprises that were not part of the Models of Good Practice (6.9%). Implementation of many IWL activities, empowerment and involvement of employees, and good cooperation with the occupational health service were associated with a lower rate of sickness absence. On average, 0.7% new employees per year received disability pension, which is a significantly lower percentage than expected on the basis of the rate of 1.3% per year in comparable enterprises. Frequent use of disability pensioning was associated with high rate of sickness absence and having many employees older than 50 years. On average, 0.4% employees per year received early retirement compensation, which was expected on the basis of national estimates. Frequent use of early retirement was associated with having many employees older than 50 years. Models of Good Practice enterprises had

  18. Evaluation of bond strength of resin cements using different general-purpose statistical software packages for two-parameter Weibull statistics.

    PubMed

    Roos, Malgorzata; Stawarczyk, Bogna

    2012-07-01

    This study evaluated and compared Weibull parameters of resin bond strength values using six different general-purpose statistical software packages for the two-parameter Weibull distribution. Two hundred human teeth were randomly divided into 4 groups (n=50), prepared and bonded on dentin according to the manufacturers' instructions using the following resin cements: (i) Variolink (VAN, conventional resin cement), (ii) Panavia21 (PAN, conventional resin cement), (iii) RelyX Unicem (RXU, self-adhesive resin cement) and (iv) G-Cem (GCM, self-adhesive resin cement). Subsequently, all specimens were stored in water for 24 h at 37°C. Shear bond strength was measured and the data were analyzed using Anderson-Darling goodness-of-fit (MINITAB 16) and two-parameter Weibull statistics with the following statistical software packages: Excel 2011, SPSS 19, MINITAB 16, R 2.12.1, SAS 9.1.3 and STATA 11.2 (p≤0.05). Additionally, the three-parameter Weibull was fitted using MINITAB 16. Two-parameter Weibull parameters calculated with MINITAB and STATA can be compared using an omnibus test and using 95% CI. In SAS only 95% CI were directly obtained from the output. R provided no estimates of 95% CI. In both SAS and R the global comparison of the characteristic bond strength among groups is provided by means of the Weibull regression. EXCEL and SPSS provided no default information about 95% CI and no significance test for the comparison of Weibull parameters among the groups. In summary, conventional resin cement VAN showed the highest Weibull modulus and characteristic bond strength. There are discrepancies in the Weibull statistics depending on the software package and the estimation method. The information content in the default output provided by the software packages differs to a very high extent. Copyright © 2012 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
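
    For orientation, a two-parameter Weibull fit of this kind can also be reproduced with open-source tools; the sketch below uses maximum likelihood with the location fixed at zero, and the bond strength values are invented.

    ```python
    import numpy as np
    from scipy.stats import weibull_min

    # Hypothetical shear bond strength values (MPa) for one cement group
    strengths = np.array([8.4, 10.1, 11.6, 12.3, 13.0, 13.8, 14.9, 15.7, 16.4, 18.2])

    # Two-parameter Weibull: fix the location at zero and estimate the shape
    # (Weibull modulus m) and scale (characteristic strength) by maximum likelihood.
    m, loc, sigma0 = weibull_min.fit(strengths, floc=0)
    print(f"Weibull modulus m = {m:.2f}, characteristic strength = {sigma0:.2f} MPa")
    ```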

  19. Standard and goodness-of-fit parameter estimation methods for the three-parameter lognormal distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kane, V.E.

    1982-01-01

    A class of goodness-of-fit estimators is found to provide a useful alternative in certain situations to the standard maximum likelihood method, which has some undesirable estimation characteristics for estimation from the three-parameter lognormal distribution. The class of goodness-of-fit tests considered includes the Shapiro-Wilk and Filliben tests, which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted order statistic estimators are compared to the standard procedures in Monte Carlo simulations. Robustness of the procedures is examined and example data sets are analyzed.

  20. Stationary statistical theory of two-surface multipactor regarding all impacts for efficient threshold analysis

    NASA Astrophysics Data System (ADS)

    Lin, Shu; Wang, Rui; Xia, Ning; Li, Yongdong; Liu, Chunliang

    2018-01-01

    Statistical multipactor theories are critical prediction approaches for multipactor breakdown determination. However, these approaches still require a negotiation between the calculation efficiency and accuracy. This paper presents an improved stationary statistical theory for efficient threshold analysis of two-surface multipactor. A general integral equation over the distribution function of the electron emission phase with both the single-sided and double-sided impacts considered is formulated. The modeling results indicate that the improved stationary statistical theory can not only obtain equally good accuracy of multipactor threshold calculation as the nonstationary statistical theory, but also achieve high calculation efficiency concurrently. By using this improved stationary statistical theory, the total time consumption in calculating full multipactor susceptibility zones of parallel plates can be decreased by as much as a factor of four relative to the nonstationary statistical theory. It also shows that the effect of single-sided impacts is indispensable for accurate multipactor prediction of coaxial lines and also more significant for the high order multipactor. Finally, the influence of secondary emission yield (SEY) properties on the multipactor threshold is further investigated. It is observed that the first cross energy and the energy range between the first cross and the SEY maximum both play a significant role in determining the multipactor threshold, which agrees with the numerical simulation results in the literature.

  1. Recent statistical methods for orientation data

    NASA Technical Reports Server (NTRS)

    Batschelet, E.

    1972-01-01

    The application of statistical methods for determining the areas of animal orientation and navigation is discussed. The method employed is limited to the two-dimensional case. Various tests for determining the validity of the statistical analysis are presented. Mathematical models are included to support the theoretical considerations, and tables of data are developed to show the value of information obtained by statistical analysis.

  2. Rediscovery of Good-Turing estimators via Bayesian nonparametrics.

    PubMed

    Favaro, Stefano; Nipoti, Bernardo; Teh, Yee Whye

    2016-03-01

    The problem of estimating discovery probabilities originated in the context of statistical ecology, and in recent years it has become popular due to its frequent appearance in challenging applications arising in genetics, bioinformatics, linguistics, designs of experiments, machine learning, etc. A full range of statistical approaches, parametric and nonparametric as well as frequentist and Bayesian, has been proposed for estimating discovery probabilities. In this article, we investigate the relationships between the celebrated Good-Turing approach, which is a frequentist nonparametric approach developed in the 1940s, and a Bayesian nonparametric approach recently introduced in the literature. Specifically, under the assumption of a two parameter Poisson-Dirichlet prior, we show that Bayesian nonparametric estimators of discovery probabilities are asymptotically equivalent, for a large sample size, to suitably smoothed Good-Turing estimators. As a by-product of this result, we introduce and investigate a methodology for deriving exact and asymptotic credible intervals to be associated with the Bayesian nonparametric estimators of discovery probabilities. The proposed methodology is illustrated through a comprehensive simulation study and the analysis of Expressed Sequence Tags data generated by sequencing a benchmark complementary DNA library. © 2015, The International Biometric Society.
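
    For orientation, the basic Good-Turing estimate of the discovery probability (the chance that the next observation is a previously unseen species, estimated by the proportion of singletons) can be sketched as follows; the sample labels are hypothetical.

    ```python
    from collections import Counter

    def good_turing_missing_mass(sample):
        """Basic Good-Turing estimate of the discovery probability: the chance
        that the next observation is a new, unseen species, estimated as n1/n,
        where n1 is the number of species seen exactly once."""
        counts = Counter(sample)
        n = sum(counts.values())
        n1 = sum(1 for c in counts.values() if c == 1)
        return n1 / n

    tags = ["a", "b", "a", "c", "d", "a", "e", "b", "f"]
    print(good_turing_missing_mass(tags))   # 4 singletons out of 9 draws -> about 0.44
    ```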

  3. Particle-sampling statistics in laser anemometers Sample-and-hold systems and saturable systems

    NASA Technical Reports Server (NTRS)

    Edwards, R. V.; Jensen, A. S.

    1983-01-01

    The effect of the data-processing system on the particle statistics obtained with laser anemometry of flows containing suspended particles is examined. Attention is given to the sample and hold processor, a pseudo-analog device which retains the last measurement until a new measurement is made, followed by time-averaging of the data. The second system considered features a dead time, i.e., a saturable system with a significant reset time with storage in a data buffer. It is noted that the saturable system operates independent of the particle arrival rate. The probabilities of a particle arrival in a given time period are calculated for both processing systems. It is shown that the system outputs are dependent on the mean particle flow rate, the flow correlation time, and the flow statistics, indicating that the particle density affects both systems. The results are significant for instances of good correlation between the particle density and velocity, such as occurs near the edge of a jet.

  4. Statistical principle and methodology in the NISAN system.

    PubMed Central

    Asano, C

    1979-01-01

    The NISAN system is a new interactive statistical analysis program package constructed by an organization of Japanese statisticians. The package is widely available for both statistical situations, confirmatory analysis and exploratory analysis, and is designed to capture statistical wisdom and to help senior statisticians choose an optimal process of statistical analysis.

  5. Corpus-based Statistical Screening for Phrase Identification

    PubMed Central

    Kim, Won; Wilbur, W. John

    2000-01-01

    Purpose: The authors study the extraction of useful phrases from a natural language database by statistical methods. The aim is to leverage human effort by providing preprocessed phrase lists with a high percentage of useful material. Method: The approach is to develop six different scoring methods that are based on different aspects of phrase occurrence. The emphasis here is not on lexical information or syntactic structure but rather on the statistical properties of word pairs and triples that can be obtained from a large database. Measurements: The Unified Medical Language System (UMLS) incorporates a large list of humanly acceptable phrases in the medical field as a part of its structure. The authors use this list of phrases as a gold standard for validating their methods. A good method is one that ranks the UMLS phrases high among all phrases studied. Measurements are 11-point average precision values and precision-recall curves based on the rankings. Result: The authors find that each of the six scoring methods proves effective in identifying UMLS-quality phrases in a large subset of MEDLINE. These methods are applicable both to word pairs and word triples. All six methods are optimally combined to produce composite scoring methods that are more effective than any single method. The quality of the composite methods appears sufficient to support the automatic placement of hyperlinks in text at the site of highly ranked phrases. Conclusion: Statistical scoring methods provide a promising approach to the extraction of useful phrases from a natural language database for the purpose of indexing or providing hyperlinks in text. PMID:10984469

  6. Impact of a statistical bias correction on the projected simulated hydrological changes obtained from three GCMs and two hydrology models

    NASA Astrophysics Data System (ADS)

    Hagemann, Stefan; Chen, Cui; Haerter, Jan O.; Gerten, Dieter; Heinke, Jens; Piani, Claudio

    2010-05-01

    Future climate model scenarios depend crucially on their adequate representation of the hydrological cycle. Within the European project "Water and Global Change" (WATCH) special care is taken to couple state-of-the-art climate model output to a suite of hydrological models. This coupling is expected to lead to a better assessment of changes in the hydrological cycle. However, due to the systematic model errors of climate models, their output is often not directly applicable as input for hydrological models. Thus, the methodology of a statistical bias correction has been developed, which can be used for correcting climate model output to produce internally consistent fields that have the same statistical intensity distribution as the observations. As observations, global re-analysed daily data of precipitation and temperature are used that are obtained in the WATCH project. We will apply the bias correction to global climate model data of precipitation and temperature from the GCMs ECHAM5/MPIOM, CNRM-CM3 and LMDZ-4, and intercompare the bias-corrected data to the original GCM data and the observations. Then, the original and the bias-corrected GCM data will be used to force two global hydrology models: (1) the hydrological model of the Max Planck Institute for Meteorology (MPI-HM) consisting of the Simplified Land surface (SL) scheme and the Hydrological Discharge (HD) model, and (2) the dynamic vegetation model LPJmL operated by the Potsdam Institute for Climate Impact Research. The impact of the bias correction on the projected simulated hydrological changes will be analysed, and the resulting behaviour of the two hydrology models will be compared.
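
    For orientation, the core idea of such a statistical bias correction can be sketched as a simple empirical quantile mapping (the WATCH methodology is more elaborate); the gamma-distributed example data are invented.

    ```python
    import numpy as np

    def quantile_map(model_hist, obs_hist, model_values):
        """Empirical quantile mapping: each model value is replaced by the observed
        value at the same empirical quantile of the historical period."""
        mh = np.sort(np.asarray(model_hist, float))
        oh = np.sort(np.asarray(obs_hist, float))
        q = np.interp(model_values, mh, np.linspace(0, 1, len(mh)))   # model quantile
        return np.interp(q, np.linspace(0, 1, len(oh)), oh)           # mapped to obs

    rng = np.random.default_rng(0)
    obs = rng.gamma(2.0, 2.0, size=1000)    # "observed" daily precipitation
    mod = rng.gamma(2.0, 3.0, size=1000)    # biased model precipitation
    print(quantile_map(mod, obs, mod[:5]))  # corrected values for five model days
    ```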

  7. Time series models of environmental exposures: Good predictions or good understanding.

    PubMed

    Barnett, Adrian G; Stephen, Dimity; Huang, Cunrui; Wolkewitz, Martin

    2017-04-01

    Time series data are popular in environmental epidemiology as they make use of the natural experiment of how changes in exposure over time might impact on disease. Many published time series papers have used parameter-heavy models that fully explained the second order patterns in disease to give residuals that have no short-term autocorrelation or seasonality. This is often achieved by including predictors of past disease counts (autoregression) or seasonal splines with many degrees of freedom. These approaches give great residuals, but add little to our understanding of cause and effect. We argue that modelling approaches should rely more on good epidemiology and less on statistical tests. This includes thinking about causal pathways, making potential confounders explicit, fitting a limited number of models, and not over-fitting at the cost of under-estimating the true association between exposure and disease. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. The ethics of big data as a public good: which public? Whose good?

    PubMed Central

    2016-01-01

    International development and humanitarian organizations are increasingly calling for digital data to be treated as a public good because of its value in supplementing scarce national statistics and informing interventions, including in emergencies. In response to this claim, a ‘responsible data’ movement has evolved to discuss guidelines and frameworks that will establish ethical principles for data sharing. However, this movement is not gaining traction with those who hold the highest-value data, particularly mobile network operators who are proving reluctant to make data collected in low- and middle-income countries accessible through intermediaries. This paper evaluates how the argument for ‘data as a public good’ fits with the corporate reality of big data, exploring existing models for data sharing. I draw on the idea of corporate data as an ecosystem involving often conflicting rights, duties and claims, in comparison to the utilitarian claim that data's humanitarian value makes it imperative to share them. I assess the power dynamics implied by the idea of data as a public good, and how differing incentives lead actors to adopt particular ethical positions with regard to the use of data. This article is part of the themed issue ‘The ethical impact of data science’. PMID:28336800

  9. Delay, probability, and social discounting in a public goods game.

    PubMed

    Jones, Bryan A; Rachlin, Howard

    2009-01-01

    A human social discount function measures the value to a person of a reward to another person at a given social distance. Just as delay discounting is a hyperbolic function of delay, and probability discounting is a hyperbolic function of odds-against, social discounting is a hyperbolic function of social distance. Experiment 1 obtained individual social, delay, and probability discount functions for a hypothetical $75 reward; participants also indicated how much of an initial $100 endowment they would contribute to a common investment in a public good. Steepness of discounting correlated, across participants, among all three discount dimensions. However, only social and probability discounting were correlated with the public-good contribution; high public-good contributors were more altruistic and also less risk averse than low contributors. Experiment 2 obtained social discount functions with hypothetical $75 rewards and delay discount functions with hypothetical $1,000 rewards, as well as public-good contributions. The results replicated those of Experiment 1; steepness of the two forms of discounting correlated with each other across participants but only social discounting correlated with the public-good contribution. Most participants in Experiment 2 predicted that the average contribution would be lower than their own contribution.
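
    For orientation, a hyperbolic discount function of the kind described above, v = V/(1 + kN), can be fitted to crossover values by nonlinear least squares; the sketch below uses invented data points together with the $75 amount from Experiment 1.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def hyperbolic(N, k):
        """Hyperbolic discount function v = V / (1 + k * N) for a $75 reward,
        where N is the social distance (the same form is used for delay or odds)."""
        return 75.0 / (1 + k * N)

    # Hypothetical crossover values: amount kept for self judged equal to giving
    # $75 to the person at social distance N
    social_distance = np.array([1, 2, 5, 10, 20, 50, 100], dtype=float)
    value = np.array([70, 65, 52, 40, 28, 14, 8], dtype=float)

    k_hat, _ = curve_fit(hyperbolic, social_distance, value, p0=[0.05])
    print("estimated social discount rate k:", k_hat[0])
    ```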

  10. Patients and Medical Statistics

    PubMed Central

    Woloshin, Steven; Schwartz, Lisa M; Welch, H Gilbert

    2005-01-01

    BACKGROUND People are increasingly presented with medical statistics. There are no existing measures to assess their level of interest or confidence in using medical statistics. OBJECTIVE To develop 2 new measures, the STAT-interest and STAT-confidence scales, and assess their reliability and validity. DESIGN Survey with retest after approximately 2 weeks. SUBJECTS Two hundred and twenty-four people were recruited from advertisements in local newspapers, an outpatient clinic waiting area, and a hospital open house. MEASURES We developed and revised 5 items on interest in medical statistics and 3 on confidence understanding statistics. RESULTS Study participants were mostly college graduates (52%); 25% had a high school education or less. The mean age was 53 (range 20 to 84) years. Most paid attention to medical statistics (6% paid no attention). The mean (SD) STAT-interest score was 68 (17) and ranged from 15 to 100. Confidence in using statistics was also high: the mean (SD) STAT-confidence score was 65 (19) and ranged from 11 to 100. STAT-interest and STAT-confidence scores were moderately correlated (r=.36, P<.001). Both scales demonstrated good test–retest repeatability (r=.60, .62, respectively), internal consistency reliability (Cronbach's α=0.70 and 0.78), and usability (individual item nonresponse ranged from 0% to 1.3%). Scale scores correlated only weakly with scores on a medical data interpretation test (r=.15 and .26, respectively). CONCLUSION The STAT-interest and STAT-confidence scales are usable and reliable. Interest and confidence were only weakly related to the ability to actually use data. PMID:16307623

  11. Selected Streamflow Statistics and Regression Equations for Predicting Statistics at Stream Locations in Monroe County, Pennsylvania

    USGS Publications Warehouse

    Thompson, Ronald E.; Hoffman, Scott A.

    2006-01-01

A suite of 28 streamflow statistics, ranging from extreme low to high flows, was computed for 17 continuous-record streamflow-gaging stations and predicted for 20 partial-record stations in Monroe County and contiguous counties in north-eastern Pennsylvania. The predicted statistics for the partial-record stations were based on regression analyses relating intermittent flow measurements made at the partial-record stations indexed to concurrent daily mean flows at continuous-record stations during base-flow conditions. The same statistics also were predicted for 134 ungaged stream locations in Monroe County on the basis of regression analyses relating the statistics to GIS-determined basin characteristics for the continuous-record station drainage areas. The prediction methodology for developing the regression equations used to estimate statistics was developed for estimating low-flow frequencies. This study and a companion study found that the methodology also has application potential for predicting intermediate- and high-flow statistics. The statistics included mean monthly flows, mean annual flow, 7-day low flows for three recurrence intervals, nine flow durations, mean annual base flow, and annual mean base flows for two recurrence intervals. Low standard errors of prediction and high coefficients of determination (R²) indicated good results in using the regression equations to predict the statistics. Regression equations for the larger flow statistics tended to have lower standard errors of prediction and higher coefficients of determination (R²) than equations for the smaller flow statistics. The report discusses the methodologies used in determining the statistics and the limitations of the statistics and the equations used to predict the statistics. Caution is indicated in using the predicted statistics for small drainage area situations. Study results constitute input needed by water-resource managers in Monroe County for planning purposes and evaluation
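    The report's own equations are not reproduced here; as a hedged sketch only, a regionalization regression of this kind typically relates a log-transformed flow statistic to log-transformed, GIS-derived basin characteristics, roughly as follows (all numbers hypothetical):

```python
import numpy as np

# Hypothetical basin characteristics for continuous-record stations:
# drainage area (mi^2), mean basin elevation (ft), forest cover (%)
X = np.array([
    [ 12.4,  980, 72],
    [ 45.1, 1210, 64],
    [  8.7,  890, 81],
    [102.3, 1340, 58],
    [ 23.9, 1050, 69],
])
q7_10 = np.array([0.8, 3.1, 0.5, 7.9, 1.6])   # 7-day, 10-year low flow (ft^3/s), illustrative

# Log-linear regression, the usual form for low-flow regionalization
A = np.column_stack([np.ones(len(X)), np.log10(X)])
coef, *_ = np.linalg.lstsq(A, np.log10(q7_10), rcond=None)

def predict_q7_10(area, elev, forest):
    """Predict the low-flow statistic at an ungaged site from its basin characteristics."""
    x = np.array([1.0, np.log10(area), np.log10(elev), np.log10(forest)])
    return 10 ** (x @ coef)

print(predict_q7_10(30.0, 1100, 70))  # estimate at a hypothetical ungaged site
```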

  12. Statistical inference and Aristotle's Rhetoric.

    PubMed

    Macdonald, Ranald R

    2004-11-01

    Formal logic operates in a closed system where all the information relevant to any conclusion is present, whereas this is not the case when one reasons about events and states of the world. Pollard and Richardson drew attention to the fact that the reasoning behind statistical tests does not lead to logically justifiable conclusions. In this paper statistical inferences are defended not by logic but by the standards of everyday reasoning. Aristotle invented formal logic, but argued that people mostly get at the truth with the aid of enthymemes--incomplete syllogisms which include arguing from examples, analogies and signs. It is proposed that statistical tests work in the same way--in that they are based on examples, invoke the analogy of a model and use the size of the effect under test as a sign that the chance hypothesis is unlikely. Of existing theories of statistical inference only a weak version of Fisher's takes this into account. Aristotle anticipated Fisher by producing an argument of the form that there were too many cases in which an outcome went in a particular direction for that direction to be plausibly attributed to chance. We can therefore conclude that Aristotle would have approved of statistical inference and there is a good reason for calling this form of statistical inference classical.

  13. Good pharmacovigilance practices: technology enabled.

    PubMed

    Nelson, Robert C; Palsulich, Bruce; Gogolak, Victor

    2002-01-01

The assessment of spontaneous reports is most effective when it is conducted within a defined and rigorous process. The framework for good pharmacovigilance process (GPVP) is proposed as a subset of good postmarketing surveillance process (GPMSP), a functional structure for both a public health and corporate risk management strategy. GPVP has good practices that implement each step within a defined process. These practices are designed to efficiently and effectively detect and alert the drug safety professional to new and potentially important information on drug-associated adverse reactions. These practices are enabled by applied technology designed specifically for the review and assessment of spontaneous reports. Specific practices include rules-based triage, active query prompts for severe organ insults, contextual single case evaluation, statistical proportionality and correlational checks, case-series analyses, and templates for signal work-up and interpretation. These practices and the overall GPVP are supported by state-of-the-art web-based systems with powerful analytical engines, workflow and audit trails to allow validated systems support for valid drug safety signalling efforts. It is also important to understand that a process has a defined set of steps and no single step can stand independently. Specifically, advanced use of technical alerting methods in isolation can mislead and allow one to misunderstand priorities and relative value. In the end, pharmacovigilance is a clinical art and a component process of the science of pharmacoepidemiology and risk management.

  14. Basic statistics (the fundamental concepts).

    PubMed

    Lim, Eric

    2014-12-01

An appreciation and understanding of statistics is important to all practising clinicians, not simply researchers. This is because mathematics is the fundamental basis on which we base clinical decisions, usually with reference to the benefit in relation to risk. Unless a clinician has a basic understanding of statistics, he or she will never be in a position to question healthcare management decisions that have been handed down from generation to generation, will not be able to conduct research effectively, nor evaluate the validity of published evidence (usually making an assumption that most published work is either all good or all bad). This article provides a brief introduction to basic statistical methods and illustrates their use in common clinical scenarios. In addition, pitfalls of incorrect usage have been highlighted. However, it is not meant to be a substitute for formal training or consultation with a qualified and experienced medical statistician prior to starting any research project.

  15. Homicides, Public Goods, and Population Health in the Context of High Urban Violence Rates in Cali, Colombia.

    PubMed

    Martínez, Lina; Prada, Sergio; Estrada, Daniela

    2018-06-01

Obesity and frequent mental and physical distress are often associated with major health problems. The characteristics of the urban environment, such as homicide rates and public goods provision, play an important role in influencing participation in physical activity and in overall mental health. This study aimed to determine whether homicide rates and public goods provision were related to the health outcomes of the citizens of Cali, Colombia, a city known for its high urban violence rate and low municipal investment in public goods. We used a linear probability model to relate homicide rates and public goods provision (lighted parks, effective public space per inhabitant, and bus stations) at the district level to health outcomes (obesity and frequent mental and physical distress). Individual data were obtained from the 2014 CaliBRANDO survey, and urban context characteristics were obtained from official government statistics. After controlling for individual covariates, results showed that homicide rates were a risk factor in all examined outcomes. An increase of 1.0 m² of public space per inhabitant reduced the probability of an individual being obese or overweight by 0.2% (95% confidence interval (CI) = −0.004 to −0.001) and the probability of frequent physical distress by 0.1% (95% CI = −0.002 to −0.001). On average, the presence of one additional bus station increased the probability of being obese or overweight by 1.1%, the probability of frequent mental distress by 0.3% (95% CI = 0.001-0.004), and the probability of frequent physical distress by 0.02% (95% CI = 0.000-0.003). Living in districts with adequate public space and lighted parks lowers the probability of being obese, while high homicide rates are correlated with poor health outcomes in Cali, Colombia. Investments in public goods provision and urban safety to reduce obesity rates may contribute to a better quality of life for the population.
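    The linear probability model used in the study is simply ordinary least squares applied to a 0/1 outcome; a minimal, generic sketch (variable names and layout are assumptions, not the CaliBRANDO data) is:

```python
import numpy as np

def linear_probability_model(y, X):
    """OLS of a 0/1 outcome on covariates; coefficients read as changes in
    probability per unit change in each covariate (the LPM interpretation)."""
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta  # [intercept, effect of each covariate]

# y: obesity indicator; X columns might be district homicide rate, public space
# (m^2 per inhabitant), lighted parks, bus stations -- all hypothetical here.
```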

  16. Reply to Rouder (2014): good frequentist properties raise confidence.

    PubMed

    Sanborn, Adam N; Hills, Thomas T; Dougherty, Michael R; Thomas, Rick P; Yu, Erica C; Sprenger, Amber M

    2014-04-01

    Established psychological results have been called into question by demonstrations that statistical significance is easy to achieve, even in the absence of an effect. One often-warned-against practice, choosing when to stop the experiment on the basis of the results, is guaranteed to produce significant results. In response to these demonstrations, Bayes factors have been proposed as an antidote to this practice, because they are invariant with respect to how an experiment was stopped. Should researchers only care about the resulting Bayes factor, without concern for how it was produced? Yu, Sprenger, Thomas, and Dougherty (2014) and Sanborn and Hills (2014) demonstrated that Bayes factors are sometimes strongly influenced by the stopping rules used. However, Rouder (2014) has provided a compelling demonstration that despite this influence, the evidence supplied by Bayes factors remains correct. Here we address why the ability to influence Bayes factors should still matter to researchers, despite the correctness of the evidence. We argue that good frequentist properties mean that results will more often agree with researchers' statistical intuitions, and good frequentist properties control the number of studies that will later be refuted. Both help raise confidence in psychological results.

  17. Multiple statistical tests: Lessons from a d20.

    PubMed

    Madan, Christopher R

    2016-01-01

Statistical analyses are often conducted with α = .05. When multiple statistical tests are conducted, this procedure needs to be adjusted to compensate for the otherwise inflated Type I error. In tabletop gaming, it is sometimes desired to roll a 20-sided die (or 'd20') twice and take the greater outcome. Here I draw from probability theory and the case of a d20, where the probability of obtaining any specific outcome is 1/20, to determine the probability of obtaining a specific outcome (Type I error) at least once across repeated, independent statistical tests.
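    The arithmetic behind the analogy is the complement rule: the chance of at least one 'hit' in m independent trials is 1 − (1 − p)^m, whether p is 1/20 for a d20 roll or α = .05 for a statistical test. A minimal illustration:

```python
def familywise_rate(p_single, m):
    """Probability of at least one 'success' (Type I error, or rolling a
    given face) across m independent trials, each with probability p_single."""
    return 1.0 - (1.0 - p_single) ** m

print(familywise_rate(1 / 20, 2))      # two d20 rolls, take the better: ~0.0975
print(familywise_rate(0.05, 10))       # ten tests at alpha = .05: ~0.401
# A Bonferroni-style correction keeps the familywise rate near alpha:
print(familywise_rate(0.05 / 10, 10))  # ~0.049
```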

  18. Limited-information goodness-of-fit testing of diagnostic classification item response models.

    PubMed

    Hansen, Mark; Cai, Li; Monroe, Scott; Li, Zhen

    2016-11-01

Despite the growing popularity of diagnostic classification models (e.g., Rupp et al., 2010, Diagnostic measurement: theory, methods, and applications, Guilford Press, New York, NY) in educational and psychological measurement, methods for testing their absolute goodness of fit to real data remain relatively underdeveloped. For tests of reasonable length and for realistic sample size, full-information test statistics such as Pearson's X² and the likelihood ratio statistic G² suffer from sparseness in the underlying contingency table from which they are computed. Recently, limited-information fit statistics such as Maydeu-Olivares and Joe's (2006, Psychometrika, 71, 713) M₂ have been found to be quite useful in testing the overall goodness of fit of item response theory models. In this study, we applied Maydeu-Olivares and Joe's (2006, Psychometrika, 71, 713) M₂ statistic to diagnostic classification models. Through a series of simulation studies, we found that M₂ is well calibrated across a wide range of diagnostic model structures and was sensitive to certain misspecifications of the item model (e.g., fitting disjunctive models to data generated according to a conjunctive model), errors in the Q-matrix (adding or omitting paths, omitting a latent variable), and violations of local item independence due to unmodelled testlet effects. On the other hand, M₂ was largely insensitive to misspecifications in the distribution of higher-order latent dimensions and to the specification of an extraneous attribute. To complement the analyses of the overall model goodness of fit using M₂, we investigated the utility of the Chen and Thissen (1997, J. Educ. Behav. Stat., 22, 265) local dependence statistic X²LD for characterizing sources of misfit, an important aspect of model appraisal often overlooked in favour of overall statements. The X²LD statistic was found to be slightly conservative (with Type I error rates consistently below the nominal level) but still useful
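    For context, the full-information statistics mentioned above compare observed and model-expected frequencies over all response patterns, which is exactly where sparseness bites for longer tests; a rough sketch (the model-expected probabilities are assumed to be given) is:

```python
import numpy as np

def full_information_fit(observed_counts, expected_probs):
    """Pearson X^2 and likelihood-ratio G^2 over all response patterns.
    With n binary items there are 2**n patterns, so most observed counts are 0
    for realistic test lengths -- the sparseness problem noted above."""
    N = observed_counts.sum()
    expected = N * expected_probs
    x2 = np.sum((observed_counts - expected) ** 2 / expected)
    nonzero = observed_counts > 0
    g2 = 2.0 * np.sum(observed_counts[nonzero]
                      * np.log(observed_counts[nonzero] / expected[nonzero]))
    return x2, g2
```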

  19. Web-Based Statistical Sampling and Analysis

    ERIC Educational Resources Information Center

    Quinn, Anne; Larson, Karen

    2016-01-01

    Consistent with the Common Core State Standards for Mathematics (CCSSI 2010), the authors write that they have asked students to do statistics projects with real data. To obtain real data, their students use the free Web-based app, Census at School, created by the American Statistical Association (ASA) to help promote civic awareness among school…

  20. Statistical Symbolic Execution with Informed Sampling

    NASA Technical Reports Server (NTRS)

    Filieri, Antonio; Pasareanu, Corina S.; Visser, Willem; Geldenhuys, Jaco

    2014-01-01

Symbolic execution techniques have been proposed recently for the probabilistic analysis of programs. These techniques seek to quantify the likelihood of reaching program events of interest, e.g., assert violations. They have many promising applications but have scalability issues due to high computational demand. To address this challenge, we propose a statistical symbolic execution technique that performs Monte Carlo sampling of the symbolic program paths and uses the obtained information for Bayesian estimation and hypothesis testing with respect to the probability of reaching the target events. To speed up the convergence of the statistical analysis, we propose Informed Sampling, an iterative symbolic execution that first explores the paths that have high statistical significance, prunes them from the state space and guides the execution towards less likely paths. The technique combines Bayesian estimation with a partial exact analysis for the pruned paths, leading to provably improved convergence of the statistical analysis. We have implemented statistical symbolic execution with informed sampling in the Symbolic PathFinder tool. We show experimentally that informed sampling obtains more precise results and converges faster than a purely statistical analysis and may also be more efficient than an exact symbolic analysis. When the latter does not terminate, symbolic execution with informed sampling can give meaningful results under the same time and memory limits.
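    As a hedged sketch of the statistical side only (none of the symbolic-execution machinery, and not the Symbolic PathFinder API), the Bayesian estimate of the probability of reaching a target event can combine a Beta posterior over the sampled paths with the exactly analysed mass of pruned paths:

```python
from scipy import stats

def estimate_reach_probability(hits, samples, pruned_mass=0.0,
                               pruned_hit_mass=0.0, cred=0.95):
    """Beta-Bernoulli estimate of the probability of reaching a target event.

    hits, samples    -- Monte Carlo outcomes over the paths NOT yet pruned
    pruned_mass      -- total probability of the paths analysed exactly and pruned
    pruned_hit_mass  -- portion of pruned_mass that reaches the target
    (Names and interface are illustrative, not the tool's actual API.)
    """
    post = stats.beta(1 + hits, 1 + samples - hits)   # uniform prior
    rest = 1.0 - pruned_mass                          # mass left to the sampled region
    mean = pruned_hit_mass + rest * post.mean()
    lo, hi = post.interval(cred)
    return mean, (pruned_hit_mass + rest * lo, pruned_hit_mass + rest * hi)

print(estimate_reach_probability(hits=37, samples=1000,
                                 pruned_mass=0.20, pruned_hit_mass=0.05))
```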

  1. Statistical characterization of short wind waves from stereo images of the sea surface

    NASA Astrophysics Data System (ADS)

    Mironov, Alexey; Yurovskaya, Maria; Dulov, Vladimir; Hauser, Danièle; Guérin, Charles-Antoine

    2013-04-01

reference measurements for the small-scale wave field, we could not quantify exactly the accuracy of the retrieval technique. However, it appeared clearly that the obtained accuracy is good enough for the estimation of second-order statistical quantities (such as the correlation function), acceptable for third-order quantities (such as the skewness function) and insufficient for fourth-order quantities (such as kurtosis). Therefore, the stereo technique in the present stage should not be thought of as a self-contained universal tool to characterize the surface statistics. Instead, it should be used in conjunction with other well calibrated but sparse reference measurements (such as wave gauges) for cross-validation and calibration. It then completes the statistical analysis inasmuch as it provides a snapshot of the three-dimensional field and allows for the evaluation of higher-order spatial statistics.

  2. A family of variable step-size affine projection adaptive filter algorithms using statistics of channel impulse response

    NASA Astrophysics Data System (ADS)

    Shams Esfand Abadi, Mohammad; AbbasZadeh Arani, Seyed Ali Asghar

    2011-12-01

This paper extends the recently introduced variable step-size (VSS) approach to the family of adaptive filter algorithms. This method uses prior knowledge of the channel impulse response statistics. Accordingly, the optimal step-size vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSS-APA), the VSS selective partial update NLMS (VSS-SPU-NLMS), the VSS-SPU-APA, and the VSS selective regressor APA (VSS-SR-APA). In VSS-SPU adaptive algorithms the filter coefficients are partially updated, which reduces the computational complexity. In VSS-SR-APA, the optimal selection of input regressors is performed during the adaptation. The presented algorithms feature good convergence speed, low steady-state mean square error (MSE), and low computational complexity. We demonstrate the good performance of the proposed algorithms through several simulations in a system identification scenario.
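    For orientation, a baseline NLMS update for system identification is sketched below; the paper's VSS variants replace the fixed scalar step size with a step-size vector optimized from channel impulse response statistics via the MSD, which is not reproduced here:

```python
import numpy as np

def nlms(x, d, order=8, mu=0.5, eps=1e-6):
    """Baseline NLMS system identification with a fixed scalar step size.
    x -- input signal, d -- desired (channel output) signal.
    Returns the estimated filter coefficients and the error signal."""
    w = np.zeros(order)
    e = np.zeros(len(x))
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]              # regressor (most recent sample first)
        e[n] = d[n] - w @ u
        w += mu * e[n] * u / (eps + u @ u)    # normalized coefficient update
    return w, e
```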

  3. A basic introduction to statistics for the orthopaedic surgeon.

    PubMed

    Bertrand, Catherine; Van Riet, Roger; Verstreken, Frederik; Michielsen, Jef

    2012-02-01

    Orthopaedic surgeons should review the orthopaedic literature in order to keep pace with the latest insights and practices. A good understanding of basic statistical principles is of crucial importance to the ability to read articles critically, to interpret results and to arrive at correct conclusions. This paper explains some of the key concepts in statistics, including hypothesis testing, Type I and Type II errors, testing of normality, sample size and p values.

  4. BrightStat.com: free statistics online.

    PubMed

    Stricker, Daniel

    2008-10-01

Powerful software for statistical analysis is expensive. Here I present BrightStat, statistical software running on the Internet which is free of charge. BrightStat's goals, its main capabilities and functionalities are outlined. Three different sample runs, a Friedman test, a chi-square test, and a stepwise multiple regression are presented. The results obtained by BrightStat are compared with results computed by SPSS, one of the global leaders in providing statistical software, and VassarStats, a collection of scripts for data analysis running on the Internet. Elementary statistics is an inherent part of academic education and BrightStat is an alternative to commercial products.

  5. Why significant variables aren't automatically good predictors.

    PubMed

    Lo, Adeline; Chernoff, Herman; Zheng, Tian; Lo, Shaw-Hwa

    2015-11-10

    Thus far, genome-wide association studies (GWAS) have been disappointing in the inability of investigators to use the results of identified, statistically significant variants in complex diseases to make predictions useful for personalized medicine. Why are significant variables not leading to good prediction of outcomes? We point out that this problem is prevalent in simple as well as complex data, in the sciences as well as the social sciences. We offer a brief explanation and some statistical insights on why higher significance cannot automatically imply stronger predictivity and illustrate through simulations and a real breast cancer example. We also demonstrate that highly predictive variables do not necessarily appear as highly significant, thus evading the researcher using significance-based methods. We point out that what makes variables good for prediction versus significance depends on different properties of the underlying distributions. If prediction is the goal, we must lay aside significance as the only selection standard. We suggest that progress in prediction requires efforts toward a new research agenda of searching for a novel criterion to retrieve highly predictive variables rather than highly significant variables. We offer an alternative approach that was not designed for significance, the partition retention method, which was very effective predicting on a long-studied breast cancer data set, by reducing the classification error rate from 30% to 8%.
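    A quick simulation makes the point concrete: with a large enough sample, a variable with a tiny effect reaches overwhelming significance while barely improving prediction (the AUC stays near chance). This only illustrates the phenomenon; it is not the partition retention method:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000
y = rng.integers(0, 2, n)                 # binary outcome
x = 0.05 * y + rng.normal(size=n)         # tiny true effect

# Highly "significant" ...
t, p = stats.ttest_ind(x[y == 1], x[y == 0])
print(f"p-value = {p:.2e}")               # astronomically small at this sample size

# ... but a weak predictor: the AUC barely exceeds 0.5
u = stats.mannwhitneyu(x[y == 1], x[y == 0]).statistic
auc = u / (np.sum(y == 1) * np.sum(y == 0))
print(f"AUC = {auc:.3f}")                 # about 0.51
```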

  6. Applications of statistical physics to technology price evolution

    NASA Astrophysics Data System (ADS)

    McNerney, James

    Understanding how changing technology affects the prices of goods is a problem with both rich phenomenology and important policy consequences. Using methods from statistical physics, I model technology-driven price evolution. First, I examine a model for the price evolution of individual technologies. The price of a good often follows a power law equation when plotted against its cumulative production. This observation turns out to have significant consequences for technology policy aimed at mitigating climate change, where technologies are needed that achieve low carbon emissions at low cost. However, no theory adequately explains why technology prices follow power laws. To understand this behavior, I simplify an existing model that treats technologies as machines composed of interacting components. I find that the power law exponent of the price trajectory is inversely related to the number of interactions per component. I extend the model to allow for more realistic component interactions and make a testable prediction. Next, I conduct a case-study on the cost evolution of coal-fired electricity. I derive the cost in terms of various physical and economic components. The results suggest that commodities and technologies fall into distinct classes of price models, with commodities following martingales, and technologies following exponentials in time or power laws in cumulative production. I then examine the network of money flows between industries. This work is a precursor to studying the simultaneous evolution of multiple technologies. Economies resemble large machines, with different industries acting as interacting components with specialized functions. To begin studying the structure of these machines, I examine 20 economies with an emphasis on finding common features to serve as targets for statistical physics models. I find they share the same money flow and industry size distributions. I apply methods from statistical physics to show that industries
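    The empirical regularity described here (an experience or learning curve) is usually estimated by a log-log regression of unit price against cumulative production; a minimal sketch with made-up numbers:

```python
import numpy as np

# Hypothetical unit prices observed at increasing cumulative production
cum_production = np.array([1e3, 3e3, 1e4, 3e4, 1e5, 3e5, 1e6])
unit_price     = np.array([95., 72., 50., 38., 27., 20., 14.])

# Power law p = A * Q**(-b)  =>  log p = log A - b * log Q
slope, intercept = np.polyfit(np.log(cum_production), np.log(unit_price), 1)
b, A = -slope, np.exp(intercept)
print(f"exponent b = {b:.2f}, progress ratio = {2**(-b):.2f}")
# The abstract's claim is that b is inversely related to the number of
# interactions per component in the underlying design structure.
```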

  7. A New Statistic for Evaluating Item Response Theory Models for Ordinal Data. CRESST Report 839

    ERIC Educational Resources Information Center

    Cai, Li; Monroe, Scott

    2014-01-01

We propose a new limited-information goodness-of-fit test statistic C₂ for ordinal IRT models. The construction of the new statistic lies formally between the M₂ statistic of Maydeu-Olivares and Joe (2006), which utilizes first- and second-order marginal probabilities, and the M₂* statistic of Cai and Hansen…

  8. On the connection between financial processes with stochastic volatility and nonextensive statistical mechanics

    NASA Astrophysics Data System (ADS)

    Queirós, S. M. D.; Tsallis, C.

    2005-11-01

The GARCH algorithm is the most renowned generalisation of Engle's original proposal for modelling returns, the ARCH process. Both cases are characterised by presenting a time-dependent and correlated variance or volatility. Besides a memory parameter, b (present in ARCH), and an independent and identically distributed noise, ω, GARCH involves another parameter, c, such that, for c = 0, the standard ARCH process is reproduced. In this manuscript we use a generalised noise following a distribution characterised by an index q_n, such that q_n = 1 recovers the Gaussian distribution. Matching low statistical moments of the GARCH distribution for returns with a q-Gaussian distribution obtained through maximising the entropy S_q = (1 − Σ_i p_i^q)/(q − 1), the basis of nonextensive statistical mechanics, we obtain a single analytical connection between q and (b, c, q_n) which turns out to be remarkably good when compared with computational simulations. With this result we also derive an analytical approximation for the stationary distribution for the (squared) volatility. Using a generalised Kullback-Leibler relative entropy form based on S_q, we also analyse the degree of dependence between successive returns, z_t and z_{t+1}, of GARCH(1,1) processes. This degree of dependence is quantified by an entropic index, q_op. Our analysis points to the existence of a unique relation between the three entropic indexes q_op, q and q_n of the problem, independent of the value of (b, c).
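    To keep the notation concrete, a GARCH(1,1) variance recursion with memory parameter b and the extra parameter c (c = 0 recovering ARCH(1)) can be simulated as below; the noise here is plain Gaussian rather than the q-Gaussian used in the paper, so this is only a baseline sketch:

```python
import numpy as np

def simulate_garch11(n, a=0.1, b=0.1, c=0.8, seed=0):
    """Return (z, sigma2): returns and conditional variance of a GARCH(1,1)
    process  sigma2[t] = a + b*z[t-1]**2 + c*sigma2[t-1].
    Setting c = 0 reduces this to the original ARCH(1) process."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(size=n)               # i.i.d. Gaussian noise (the q_n = 1 case)
    z = np.zeros(n)
    sigma2 = np.full(n, a / (1.0 - b - c))   # start at the stationary variance
    for t in range(1, n):
        sigma2[t] = a + b * z[t - 1] ** 2 + c * sigma2[t - 1]
        z[t] = np.sqrt(sigma2[t]) * omega[t]
    return z, sigma2

z, s2 = simulate_garch11(200_000)
print(np.var(z))   # close to a / (1 - b - c) = 1.0 for these parameters
```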

  9. The statistics of primordial density fluctuations

    NASA Astrophysics Data System (ADS)

    Barrow, John D.; Coles, Peter

    1990-05-01

The statistical properties of the density fluctuations produced by power-law inflation are investigated. It is found that, even if the fluctuations present in the scalar field driving the inflation are Gaussian, the resulting density perturbations need not be, due to stochastic variations in the Hubble parameter. All the moments of the density fluctuations are calculated, and it is argued that, for realistic parameter choices, the departures from Gaussian statistics are small and would have a negligible effect on the large-scale structure produced in the model. On the other hand, the model predicts a power spectrum with n not equal to 1, and this could be good news for large-scale structure.

  10. Statistical auditing of toxicology reports.

    PubMed

    Deaton, R R; Obenchain, R L

    1994-06-01

Statistical auditing is a new report review process used by the quality assurance unit at Eli Lilly and Co. Statistical auditing allows the auditor to review the process by which the report was generated, as opposed to the process by which the data were generated. We have the flexibility to use different sampling techniques and still obtain thorough coverage of the report data. By properly implementing our auditing process, we can work smarter rather than harder and continue to help our customers increase the quality of their products (reports). Statistical auditing is helping our quality assurance unit meet our customers' needs, while maintaining or increasing the quality of our regulatory obligations.

  11. Design of off-statistics axial-flow fans by means of vortex law optimization

    NASA Astrophysics Data System (ADS)

    Lazari, Andrea; Cattanei, Andrea

    2014-12-01

Off-statistics input data sets are common in axial-flow fan design and may easily result in some violation of the requirements of a good aerodynamic blade design. In order to circumvent this problem, in the present paper, a solution to the radial equilibrium equation is found which minimizes the outlet kinetic energy and fulfills the aerodynamic constraints, thus ensuring that the resulting blade has acceptable aerodynamic performance. The presented method is based on the optimization of a three-parameter vortex law and of the meridional channel size. The aerodynamic quantities to be employed as constraints are identified and their suitable ranges of variation are proposed. The method is validated by means of a design with critical input data values and CFD analysis. Then, by means of systematic computations with different input data sets, some correlations and charts are obtained which are analogous to classic correlations based on statistical investigations of existing machines. Such new correlations help size a fan of given characteristics as well as study the feasibility of a given design.

  12. Methods to Approach Velocity Data Reduction and Their Effects on Conformation Statistics in Viscoelastic Turbulent Channel Flows

    NASA Astrophysics Data System (ADS)

    Samanta, Gaurab; Beris, Antony; Handler, Robert; Housiadas, Kostas

    2009-03-01

Karhunen-Loeve (KL) analysis of DNS data of viscoelastic turbulent channel flows helps us to reveal more information on the time-dependent dynamics of viscoelastic modification of turbulence [Samanta et al., J. Turbulence (in press), 2008]. A selected set of KL modes can be used for a data reduction modeling of these flows. However, it is pertinent that verification be done against established DNS results. For this purpose, we compared velocity and conformation statistics and probability density functions (PDFs) of relevant quantities obtained from DNS and from fields reconstructed using selected KL modes and time-dependent coefficients. While the velocity statistics show good agreement between results from DNS and KL reconstructions even with just hundreds of KL modes, tens of thousands of KL modes are required to adequately capture the trace of polymer conformation resulting from DNS. New modifications to the KL method have therefore been attempted to account for the differences in conformation statistics. The applicability and impact of these new modified KL methods will be discussed in the perspective of data reduction modeling.
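    As a generic illustration of the KL (proper orthogonal decomposition) step referred to above, and not the authors' DNS pipeline, a snapshot decomposition and low-rank reconstruction can be carried out with an SVD:

```python
import numpy as np

def kl_reconstruct(snapshots, n_modes):
    """snapshots: (n_points, n_times) matrix of field samples.
    Returns the field reconstructed from the leading n_modes KL/POD modes
    and the fraction of variance ('energy') those modes capture."""
    mean = snapshots.mean(axis=1, keepdims=True)
    X = snapshots - mean                     # fluctuating part
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = U[:, :n_modes] @ np.diag(s[:n_modes]) @ Vt[:n_modes, :]
    energy = np.sum(s[:n_modes] ** 2) / np.sum(s ** 2)
    return mean + Xr, energy

# A few hundred modes may capture most of the velocity variance but, as the
# abstract notes, far more modes are needed for the conformation statistics.
```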

  13. Powerful Statistical Inference for Nested Data Using Sufficient Summary Statistics

    PubMed Central

    Dowding, Irene; Haufe, Stefan

    2018-01-01

    Hierarchically-organized data arise naturally in many psychology and neuroscience studies. As the standard assumption of independent and identically distributed samples does not hold for such data, two important problems are to accurately estimate group-level effect sizes, and to obtain powerful statistical tests against group-level null hypotheses. A common approach is to summarize subject-level data by a single quantity per subject, which is often the mean or the difference between class means, and treat these as samples in a group-level t-test. This “naive” approach is, however, suboptimal in terms of statistical power, as it ignores information about the intra-subject variance. To address this issue, we review several approaches to deal with nested data, with a focus on methods that are easy to implement. With what we call the sufficient-summary-statistic approach, we highlight a computationally efficient technique that can improve statistical power by taking into account within-subject variances, and we provide step-by-step instructions on how to apply this approach to a number of frequently-used measures of effect size. The properties of the reviewed approaches and the potential benefits over a group-level t-test are quantitatively assessed on simulated data and demonstrated on EEG data from a simulated-driving experiment. PMID:29615885
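    One common form of the inverse-variance idea behind a sufficient-summary-statistic analysis (the paper's recommended estimators may differ in detail) is to weight each subject's effect estimate by the precision of its within-subject estimate:

```python
import numpy as np
from scipy import stats

def weighted_group_test(effects, variances):
    """effects[i]   -- subject i's estimated effect (e.g. difference of class means)
    variances[i] -- squared standard error of that estimate (within-subject)
    Returns the precision-weighted group mean and a z-test against zero,
    treating the weights as fixed (a simplifying assumption)."""
    effects = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    mean = np.sum(w * effects) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    z = mean / se
    p = 2 * stats.norm.sf(abs(z))
    return mean, z, p

# Compare with the "naive" approach, stats.ttest_1samp(effects, 0.0),
# which ignores that some subjects' estimates are much noisier than others.
```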

  14. The Statistics of wood assays for preservative retention

    Treesearch

    Patricia K. Lebow; Scott W. Conklin

    2011-01-01

    This paper covers general statistical concepts that apply to interpreting wood assay retention values. In particular, since wood assays are typically obtained from a single composited sample, the statistical aspects, including advantages and disadvantages, of simple compositing are covered.

  15. Weather related continuity and completeness on Deep Space Ka-band links: statistics and forecasting

    NASA Technical Reports Server (NTRS)

    Shambayati, Shervin

    2006-01-01

In this paper the concept of link 'stability' as a means of measuring the continuity of the link is introduced and, through it, along with the distributions of 'good' periods and 'bad' periods, the performance of the proposed Ka-band link design method using both forecasting and long-term statistics has been analyzed. The results indicate that the proposed link design method has relatively good continuity and completeness characteristics even when only long-term statistics are used, and that the continuity performance further improves when forecasting is employed.

  16. A Unified Statistical Rain-Attenuation Model for Communication Link Fade Predictions and Optimal Stochastic Fade Control Design Using a Location-Dependent Rain-Statistic Database

    NASA Technical Reports Server (NTRS)

    Manning, Robert M.

    1990-01-01

A static and dynamic rain-attenuation model is presented which describes the statistics of attenuation on an arbitrarily specified satellite link for any location for which there are long-term rainfall statistics. The model may be used in the design of the optimal stochastic control algorithms to mitigate the effects of attenuation and maintain link reliability. A rain-statistics database is compiled, which makes it possible to apply the model to any location in the continental U.S. with a resolution of 0.5 degrees in latitude and longitude. The model predictions are compared with experimental observations, showing good agreement.

  17. Cooperation and the common good.

    PubMed

    Johnstone, Rufus A; Rodrigues, António M M

    2016-02-05

    In this paper, we draw the attention of biologists to a result from the economic literature, which suggests that when individuals are engaged in a communal activity of benefit to all, selection may favour cooperative sharing of resources even among non-relatives. Provided that group members all invest some resources in the public good, they should refrain from conflict over the division of these resources. The reason is that, given diminishing returns on investment in public and private goods, claiming (or ceding) a greater share of total resources only leads to the actor (or its competitors) investing more in the public good, such that the marginal costs and benefits of investment remain in balance. This cancels out any individual benefits of resource competition. We illustrate how this idea may be applied in the context of biparental care, using a sequential game in which parents first compete with one another over resources, and then choose how to allocate the resources they each obtain to care of their joint young (public good) versus their own survival and future reproductive success (private good). We show that when the two parents both invest in care to some extent, they should refrain from any conflict over the division of resources. The same effect can also support asymmetric outcomes in which one parent competes for resources and invests in care, whereas the other does not invest but refrains from competition. The fact that the caring parent gains higher fitness pay-offs at these equilibria suggests that abandoning a partner is not always to the latter's detriment, when the potential for resource competition is taken into account, but may instead be of benefit to the 'abandoned' mate. © 2016 The Author(s).

  18. Cooperation and the common good

    PubMed Central

    Johnstone, Rufus A.; Rodrigues, António M. M.

    2016-01-01

    In this paper, we draw the attention of biologists to a result from the economic literature, which suggests that when individuals are engaged in a communal activity of benefit to all, selection may favour cooperative sharing of resources even among non-relatives. Provided that group members all invest some resources in the public good, they should refrain from conflict over the division of these resources. The reason is that, given diminishing returns on investment in public and private goods, claiming (or ceding) a greater share of total resources only leads to the actor (or its competitors) investing more in the public good, such that the marginal costs and benefits of investment remain in balance. This cancels out any individual benefits of resource competition. We illustrate how this idea may be applied in the context of biparental care, using a sequential game in which parents first compete with one another over resources, and then choose how to allocate the resources they each obtain to care of their joint young (public good) versus their own survival and future reproductive success (private good). We show that when the two parents both invest in care to some extent, they should refrain from any conflict over the division of resources. The same effect can also support asymmetric outcomes in which one parent competes for resources and invests in care, whereas the other does not invest but refrains from competition. The fact that the caring parent gains higher fitness pay-offs at these equilibria suggests that abandoning a partner is not always to the latter's detriment, when the potential for resource competition is taken into account, but may instead be of benefit to the ‘abandoned’ mate. PMID:26729926

  19. What is a good index? Problems with statistically based indicators and the Malmquist index as alternative

    USDA-ARS?s Scientific Manuscript database

    Conventional multivariate statistical methods have been used for decades to calculate environmental indicators. These methods generally work fine if they are used in a situation where the method can be tailored to the data. But there is some skepticism that the methods might fail in the context of s...

  20. Uncertainties in obtaining high reliability from stress-strength models

    NASA Technical Reports Server (NTRS)

    Neal, Donald M.; Matthews, William T.; Vangel, Mark G.

    1992-01-01

There has been a recent interest in determining high statistical reliability in risk assessment of aircraft components. The potential consequences of incorrectly assuming a particular statistical distribution for the stress or strength data used in obtaining the high reliability values are identified. The computation of the reliability is defined as the probability of the strength being greater than the stress over the range of stress values. This method is often referred to as the stress-strength model. A sensitivity analysis was performed involving a comparison of reliability results in order to evaluate the effects of assuming specific statistical distributions. Both known population distributions, and those that differed slightly from the known, were considered. Results showed substantial differences in reliability estimates even for almost nondetectable differences in the assumed distributions. These differences represent a potential problem in using the stress-strength model for high reliability computations, since in practice it is impossible to ever know the exact (population) distribution. An alternative reliability computation procedure is examined involving determination of a lower bound on the reliability values using extreme value distributions. This procedure reduces the possibility of obtaining nonconservative reliability estimates. Results indicated the method can provide conservative bounds when computing high reliability.
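    For reference, when stress and strength are both modelled as independent normals the stress-strength reliability has a standard closed form, and the sensitivity to the assumed distribution can be probed by Monte Carlo; this is a generic sketch, not the report's procedure:

```python
import numpy as np
from scipy import stats

def reliability_normal(mu_strength, sd_strength, mu_stress, sd_stress):
    """P(strength > stress) for independent normal stress and strength."""
    z = (mu_strength - mu_stress) / np.hypot(sd_strength, sd_stress)
    return stats.norm.cdf(z)

def reliability_mc(strength_dist, stress_dist, n=1_000_000, seed=0):
    """Monte Carlo P(strength > stress) for arbitrary scipy distributions."""
    rng = np.random.default_rng(seed)
    return np.mean(strength_dist.rvs(n, random_state=rng)
                   > stress_dist.rvs(n, random_state=rng))

print(reliability_normal(100, 5, 70, 8))              # ~0.9993
print(reliability_mc(stats.weibull_min(25, scale=102),  # a nearly indistinguishable
                     stats.norm(70, 8)))                # strength model, different answer
```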

  1. Entrepreneurs' self-reported health, social life, and strategies for maintaining good health.

    PubMed

    Gunnarsson, Kristina; Josephson, Malin

    2011-01-01

    This study investigated the association between self-reported good health and self-valued good social life. An additional aim was to examine entrepreneur's strategies for maintaining good health. The study design included a two-wave questionnaire, with five years between the surveys (2001 and 2006), and qualitative interviews. The study group consisted of 246 entrepreneurs from the central region of Sweden and represented ten different trades. Entrepreneurs reporting good health in both 2001 and 2006 were compared with entrepreneurs reporting poor health on both occasions or with inconsistent answers. Six of the entrepreneurs were strategically chosen for the interview study. Consistent good health was reported by 56% of the entrepreneurs. Good social life in 2001 was associated with an increased odds ratio (OR) for consistent good health when the analyses were adjusted for physical work conditions and job satisfaction (OR 2.12, 95% CI 1.07-4.17). Findings for good leisure time, weekly moderate physical exercise, and a rating of work being less or equally important as other life areas, were similar but not statistically significant when job satisfaction was considered in the analyses. Strategies for maintaining good health included good planning and control over work, flexibility at work, good social contact with family, friends and other entrepreneurs, and regular physical exercise. This study demonstrated an association between self-reported good health and good social life for entrepreneurs in small-scale enterprises. In addition, the entrepreneurs emphasised strategies such as planning and control over work and physical exercise are important for maintaining good health.

  2. Critical analysis of adsorption data statistically

    NASA Astrophysics Data System (ADS)

    Kaushal, Achla; Singh, S. K.

    2017-10-01

Experimental data can be presented, computed, and critically analysed in a different way using statistics. A variety of statistical tests are used to make decisions about the significance and validity of the experimental data. In the present study, adsorption was carried out to remove zinc ions from contaminated aqueous solution using mango leaf powder. The experimental data were analysed statistically by hypothesis testing, applying the t test, paired t test and Chi-square test, to (a) test the optimum value of the process pH, (b) verify the success of the experiment and (c) study the effect of adsorbent dose on zinc ion removal from aqueous solutions. Comparison of calculated and tabulated values of t and χ² showed the results in favour of the data collected from the experiment and this has been shown on probability charts. The K value for the Langmuir isotherm was 0.8582 and the m value obtained for the Freundlich adsorption isotherm was 0.725, both of which are <1, indicating favourable isotherms. Karl Pearson's correlation coefficient values for the Langmuir and Freundlich adsorption isotherms were obtained as 0.99 and 0.95 respectively, which show a high degree of correlation between the variables. This validates the data obtained for adsorption of zinc ions from the contaminated aqueous solution with the help of mango leaf powder.
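    Isotherm constants of this kind are typically obtained from linearized fits; a generic sketch for the Freundlich model q = Kf·C^m, with hypothetical equilibrium data rather than the study's measurements:

```python
import numpy as np

# Hypothetical equilibrium data: residual Zn concentration C (mg/L)
# and amount adsorbed q (mg/g) on the adsorbent
C = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
q = np.array([1.8, 3.4, 5.5, 8.9, 14.2])

# Freundlich isotherm q = Kf * C**m  =>  log q = log Kf + m * log C
m, logKf = np.polyfit(np.log10(C), np.log10(q), 1)
print(f"m = {m:.3f}, Kf = {10**logKf:.3f}")  # m < 1 indicates a favourable isotherm
```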

  3. Thermal Effusivity of Vegetable Oils Obtained by a Photothermal Technique

    NASA Astrophysics Data System (ADS)

    Cervantes-Espinosa, L. M.; de L. Castillo-Alvarado, F.; Lara-Hernández, G.; Cruz-Orea, A.; Hernández-Aguilar, C.; Domínguez-Pacheco, A.

    2014-10-01

    Thermal properties of several vegetable oils such as soy, corn, and avocado commercial oils were obtained by using a photopyroelectric technique. The inverse photopyroelectric configuration was used in order to obtain the thermal effusivity of the oil samples. The theoretical equation for the photopyroelectric signal in this configuration, as a function of the incident light modulation frequency, was fitted to the experimental data in order to obtain the thermal effusivity of these samples. The obtained results are in good agreement with the thermal effusivity reported for other vegetable oils. All measurements were done at room temperature.

  4. Statistical evaluation for stability studies under stress storage conditions.

    PubMed

    Gil-Alegre, M E; Bernabeu, J A; Camacho, M A; Torres-Suarez, A I

    2001-11-01

    During the pharmaceutical development of a new drug, it is necessary to select as soon as possible the formulation with the best stability characteristics. The current International Commission for Harmonisation (ICH) regulations regarding stability testing requirements for a Registration Application provide the stress testing conditions with the aim of assessing the effect of severe conditions on the drug product. In practice, the well-known Arrhenius theory is still used to make a rapid stability prediction, to estimate a drug product shelf life during early stages of its pharmaceutical development. In this work, both the planning of a stress stability study to obtain a correct stability prediction from a temperature extrapolation and the suitable data treatment to discern the reliability of the stability results are discussed. The study was focused on the early formulation step of a very stable drug, Mitonafide (antineoplastic agent), formulated in a parenteral solution and in tablets. It was observed, for the solid system, that the extrapolated results using Arrhenius theory might be statistically good, but far from the real situation if the stability study is not designed in a correct way. The statistical data treatment and the stress-stability test proposed in this work are suitable to make a reliable stability prediction of different formulations with the same drug, within its pharmaceutical development.
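    The Arrhenius extrapolation referred to here is, in its simplest form, a linear regression of ln k on 1/T over the stress temperatures, used to predict the rate constant and shelf life at the storage temperature; a sketch with hypothetical first-order rate constants:

```python
import numpy as np

R = 8.314  # J/(mol*K)

# Hypothetical first-order degradation rate constants from stress conditions
T = np.array([313.15, 323.15, 333.15, 343.15])      # 40, 50, 60, 70 degC in K
k = np.array([2.1e-4, 5.6e-4, 1.4e-3, 3.3e-3])      # 1/day, illustrative only

# Arrhenius: ln k = ln A - Ea/(R*T), fitted as a straight line in 1/T
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R
k25 = np.exp(intercept + slope / 298.15)             # extrapolate to 25 degC
t90 = 0.1054 / k25                                   # time to 10% loss for first-order kinetics
print(f"Ea = {Ea/1000:.1f} kJ/mol, k(25C) = {k25:.2e} 1/day, t90 = {t90:.0f} days")
```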

  5. When good statistical models of aquifer heterogeneity go right: The impact of aquifer permeability structures on 3D flow and transport

    NASA Astrophysics Data System (ADS)

    Jankovic, I.; Maghrebi, M.; Fiori, A.; Dagan, G.

    2017-02-01

Natural gradient steady flow of mean velocity U takes place in heterogeneous aquifers of random logconductivity Y = ln K, characterized by the univariate PDF f(Y) and autocorrelation ρY. Solute transport is analyzed through the Breakthrough Curve (BTC) at planes at distance x from the injection plane. The study examines the impact of permeability structures sharing the same f(Y) and ρY, but differing in higher-order statistics (integral scales of variograms of Y classes), upon the numerical solution of flow and transport. Flow and transport are solved for 3D structures, rather than the 2D models adopted in most previous works. We considered a few permeability structures, including the widely employed multi-Gaussian, the connected and disconnected fields introduced by Zinn and Harvey [2003] and a model characterized by equipartition of the correlation scale among Y values. We also consider the impact of statistical anisotropy of Y, the shape of ρY and local diffusion. The main finding is that, unlike 2D, the prediction of the BTC of ergodic plumes by numerical and analytical models for different structures is quite robust, displaying a seemingly universal behavior, and can be used with confidence in applications. However, as a prerequisite the basic parameters KG (the geometric mean), σY² (the logconductivity variance) and I (the horizontal integral scale of ρY) have to be identified from field data. The results suggest that narrowing down the gap between the BTCs in applications can be achieved by obtaining Kef (the effective conductivity) or U independently (e.g. by pumping tests), rather than attempting to characterize the permeability structure beyond f(Y) and ρY.

  6. Teaching Statistics to Social Science Students: Making It Valuable

    ERIC Educational Resources Information Center

    North, D.; Zewotir, T.

    2006-01-01

    In this age of rapid information expansion and technology, statistics is playing an ever increasing role in education, particularly also in the training of social scientists. Statistics enables the social scientist to obtain a quantitative awareness of socio-economic phenomena hence is essential in their training. Statistics, however, is becoming…

  7. Calculating Statistical Orbit Distributions Using GEO Optical Observations with the Michigan Orbital Debris Survey Telescope (MODEST)

    NASA Technical Reports Server (NTRS)

    Matney, M.; Barker, E.; Seitzer, P.; Abercromby, K. J.; Rodriquez, H. M.

    2006-01-01

NASA's Orbital Debris measurements program has a goal to characterize the small debris environment in the geosynchronous Earth-orbit (GEO) region using optical telescopes ("small" refers to objects too small to catalog and track with current systems). Traditionally, observations of GEO and near-GEO objects involve following the object with the telescope long enough to obtain an orbit suitable for tracking purposes. Telescopes operating in survey mode, however, randomly observe objects that pass through their field of view. Typically, these short-arc observations are inadequate to obtain detailed orbits, but can be used to estimate approximate circular orbit elements (semimajor axis, inclination, and ascending node). From this information, it should be possible to make statistical inferences about the orbital distributions of the GEO population bright enough to be observed by the system. The Michigan Orbital Debris Survey Telescope (MODEST) has been making such statistical surveys of the GEO region for four years. During that time, the telescope has made enough observations in enough areas of the GEO belt to have had nearly complete coverage. That means that almost all objects in all possible orbits in the GEO and near-GEO region had a non-zero chance of being observed. Some regions (such as those near zero inclination) have had good coverage, while others are poorly covered. Nevertheless, it is possible to remove these statistical biases and reconstruct the orbit populations within the limits of sampling error. In this paper, these statistical techniques and assumptions are described, and the techniques are applied to the current MODEST data set to arrive at our best estimate of the GEO orbit population distribution.

  8. Sharpening method of satellite thermal image based on the geographical statistical model

    NASA Astrophysics Data System (ADS)

    Qi, Pengcheng; Hu, Shixiong; Zhang, Haijun; Guo, Guangmeng

    2016-04-01

To improve the effectiveness of thermal sharpening in mountainous regions, paying more attention to the laws of land surface energy balance, a thermal sharpening method based on the geographical statistical model (GSM) is proposed. Explanatory variables were selected from the processes of land surface energy budget and thermal infrared electromagnetic radiation transmission; then high spatial resolution (57 m) raster layers were generated for these variables through spatial simulation or by using other raster data as proxies. Based on this, the locally adapted statistical relationship between brightness temperature (BT) and the explanatory variables, i.e., the GSM, was built at 1026-m resolution using the method of multivariate adaptive regression splines. Finally, the GSM was applied to the high-resolution (57-m) explanatory variables; thus, the high-resolution (57-m) BT image was obtained. This method produced a sharpening result with low error and good visual effect. The method can avoid the blind choice of explanatory variables and remove the dependence on synchronous imagery at visible and near-infrared bands. The influences of the explanatory variable combination, the sampling method, and the residual error correction on the sharpening results were analyzed in detail, and their influence mechanisms are reported herein.

  9. Statistics in the pharmacy literature.

    PubMed

    Lee, Charlene M; Soin, Herpreet K; Einarson, Thomas R

    2004-09-01

Research in statistical methods is essential for maintenance of high quality of the published literature. To update previous reports of the types and frequencies of statistical terms and procedures in research studies of selected professional pharmacy journals. We obtained all research articles published in 2001 in 6 journals: American Journal of Health-System Pharmacy, The Annals of Pharmacotherapy, Canadian Journal of Hospital Pharmacy, Formulary, Hospital Pharmacy, and Journal of the American Pharmaceutical Association. Two independent reviewers identified and recorded descriptive and inferential statistical terms/procedures found in the methods, results, and discussion sections of each article. Results were determined by tallying the total number of times, as well as the percentage, that each statistical term or procedure appeared in the articles. One hundred forty-four articles were included. Ninety-eight percent employed descriptive statistics; of these, 28% used only descriptive statistics. The most common descriptive statistical terms were percentage (90%), mean (74%), standard deviation (58%), and range (46%). Sixty-nine percent of the articles used inferential statistics, the most frequent being chi-square (33%), Student's t-test (26%), Pearson's correlation coefficient r (18%), ANOVA (14%), and logistic regression (11%). Statistical terms and procedures were found in nearly all of the research articles published in pharmacy journals. Thus, pharmacy education should aim to provide current and future pharmacists with an understanding of the common statistical terms and procedures identified to facilitate the appropriate appraisal and consequential utilization of the information available in research articles.

  10. Good Education, the Good Teacher, and a Practical Art of Living a Good Life: A Catholic Perspective

    ERIC Educational Resources Information Center

    Hermans, Chris

    2017-01-01

What is good education? We value education for reasons connected to the good provided by education in society. This good is connected to the pedagogical aim of education. This article distinguishes five criteria for good education based on the concept of "Bildung". Next, these five criteria are used to develop the idea of the good…

  11. A laboratory evaluation of the influence of weighing gauges performance on extreme events statistics

    NASA Astrophysics Data System (ADS)

    Colli, Matteo; Lanza, Luca

    2014-05-01

The effects of inaccurate ground-based rainfall measurements on the information derived from rain records are not yet well documented in the literature. La Barbera et al. (2002) investigated the propagation of the systematic mechanical errors of tipping-bucket rain gauges (TBR) into the most common statistics of rainfall extremes, e.g. in the assessment of the return period T (or the related non-exceedance probability) of short-duration/high-intensity events. Colli et al. (2012) and Lanza et al. (2012) extended the analysis to a 22-year-long precipitation data set obtained from a virtual weighing-type gauge (WG). The artificial WG time series was obtained based on real precipitation data measured at the meteo-station of the University of Genova and by modelling the weighing gauge output as a linear dynamic system. This approximation was previously validated with dedicated laboratory experiments and is based on the evidence that the accuracy of WG measurements under real-world/time-varying rainfall conditions is mainly affected by the dynamic response of the gauge (as revealed during the last WMO Field Intercomparison of Rainfall Intensity Gauges). The investigation is now completed by analyzing actual measurements performed by two common weighing gauges, the OTT Pluvio2 load-cell gauge and the GEONOR T-200 vibrating-wire gauge, since both these instruments demonstrated very good performance in previous constant-flow-rate calibration efforts. A laboratory dynamic rainfall generation system has been arranged and validated in order to simulate a number of precipitation events with variable reference intensities. Such artificial events were generated based on real-world rainfall intensity (RI) records obtained from the meteo-station of the University of Genova so that the statistical structure of the time series is preserved. The influence of the WG RI measurement accuracy on the associated extreme events statistics is analyzed by comparing the original intensity

  12. BTS guide to good statistical practice

    DOT National Transportation Integrated Search

    2002-09-01

    Quality of data has many faces. Primarily, it has to be relevant to its users. Relevance is an outcome that is achieved through a series of steps starting with a planning process that links user needs to data requirements. It continues through acq...

  13. Which statistics should tropical biologists learn?

    PubMed

    Loaiza Velásquez, Natalia; González Lutz, María Isabel; Monge-Nájera, Julián

    2011-09-01

    Tropical biologists study the richest and most endangered biodiversity on the planet, and in these times of climate change and mega-extinctions, the need for efficient, good quality research is more pressing than in the past. However, the statistical component in research published by tropical authors sometimes suffers from poor quality in data collection, mediocre or bad experimental design, and a rigid and outdated view of data analysis. To suggest improvements in their statistical education, we listed all the statistical tests and other quantitative analyses used in two leading tropical journals, the Revista de Biología Tropical and Biotropica, during a year. The 12 most frequent tests in the articles were: Analysis of Variance (ANOVA), Chi-Square Test, Student's T Test, Linear Regression, Pearson's Correlation Coefficient, Mann-Whitney U Test, Kruskal-Wallis Test, Shannon's Diversity Index, Tukey's Test, Cluster Analysis, Spearman's Rank Correlation Test and Principal Component Analysis. We conclude that statistical education for tropical biologists must abandon the old syllabus based on the mathematical side of statistics and concentrate on the correct selection of these and other procedures and tests, on their biological interpretation and on the use of reliable and friendly freeware. We think that their time will be better spent understanding and protecting tropical ecosystems than trying to learn the mathematical foundations of statistics: in most cases, a well designed one-semester course should be enough for their basic requirements.

  14. Photon-number statistics of twin beams: Self-consistent measurement, reconstruction, and properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peřina, Jan Jr.; Haderka, Ondřej; Michálek, Václav

    2014-12-04

    A method for the determination of photon-number statistics of twin beams using the joint signal-idler photocount statistics obtained by an iCCD camera is described. It also provides the absolute quantum detection efficiency of the camera. Using the measured photocount statistics, quasi-distributions of integrated intensities are obtained. They attain negative values occurring in characteristic strips as a consequence of the pairing of photons in twin beams.

  15. Pearson-type goodness-of-fit test with bootstrap maximum likelihood estimation.

    PubMed

    Yin, Guosheng; Ma, Yanyuan

    2013-01-01

    The Pearson test statistic is constructed by partitioning the data into bins and computing the difference between the observed and expected counts in these bins. If the maximum likelihood estimator (MLE) of the original data is used, the statistic generally does not follow a chi-squared distribution or any explicit distribution. We propose a bootstrap-based modification of the Pearson test statistic to recover the chi-squared distribution. We compute the observed and expected counts in the partitioned bins by using the MLE obtained from a bootstrap sample. This bootstrap-sample MLE adjusts exactly the right amount of randomness to the test statistic, and recovers the chi-squared distribution. The bootstrap chi-squared test is easy to implement, as it only requires fitting exactly the same model to the bootstrap data to obtain the corresponding MLE, and then constructs the bin counts based on the original data. We examine the test size and power of the new model diagnostic procedure using simulation studies and illustrate it with a real data set.
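    A minimal sketch of the bin-count construction described above may help make the procedure concrete: the bins and expected counts come from a model fitted to a bootstrap resample, while the observed counts come from the original data. The normal model, the number of bins, and the chi-squared degrees of freedom used below are illustrative assumptions, not the authors' exact choices.

    import numpy as np
    from scipy import stats

    def bootstrap_pearson_statistic(x, n_bins=10, rng=None):
        rng = np.random.default_rng(rng)
        n = len(x)

        # MLE from a bootstrap resample of the original data (the key modification).
        xb = rng.choice(x, size=n, replace=True)
        mu_b, sigma_b = xb.mean(), xb.std()          # normal-model MLEs (ddof=0)

        # Equiprobable bins under the bootstrap-MLE fitted distribution.
        interior = stats.norm.ppf(np.linspace(0.0, 1.0, n_bins + 1)[1:-1], mu_b, sigma_b)
        observed = np.bincount(np.digitize(x, interior), minlength=n_bins)  # counts from the ORIGINAL data
        expected = np.full(n_bins, n / n_bins)       # equal expected counts by construction

        x2 = np.sum((observed - expected) ** 2 / expected)
        # Reference distribution: chi-squared with n_bins - 1 df (illustrative choice).
        p_value = stats.chi2.sf(x2, df=n_bins - 1)
        return x2, p_value

    x = np.random.default_rng(1).normal(0.0, 1.0, size=500)
    print(bootstrap_pearson_statistic(x, rng=2))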

  16. On the gas phase fragmentation of protonated uracil: a statistical perspective.

    PubMed

    Rossich Molina, Estefanía; Salpin, Jean-Yves; Spezia, Riccardo; Martínez-Núñez, Emilio

    2016-06-01

    The potential energy surface of protonated uracil has been explored by an automated transition state search procedure, resulting in the finding of 1398 stationary points and 751 reactive channels, which can be categorized into isomerizations between pairs of isomers, unimolecular fragmentations and bimolecular reactions. The use of statistical Rice-Ramsperger-Kassel-Marcus (RRKM) theory and Kinetic Monte Carlo (KMC) simulations allowed us to determine the relative abundances of each fragmentation channel as a function of the ion's internal energy. The KMC/RRKM product abundances are compared with novel mass spectrometry (MS) experiments in the collision energy range 1-6 eV. To facilitate the comparison between theory and experiments, further dynamics simulations are carried out to determine the fraction of collision energy converted into the ion's internal energy. The KMC simulations show that the major fragmentation channels are isocyanic acid and ammonia losses, in good agreement with experiments. The third predominant channel is water loss according to both theory and experiments, although the abundance obtained in the KMC simulations is very low, suggesting that non-statistical dynamics might play an important role in this channel. Isocyanic acid (HNCOH(+)) is also an important product in the KMC simulations, although its abundance is only significant at internal energies not accessible in the MS experiments.

  17. Nonlinear wave chaos: statistics of second harmonic fields.

    PubMed

    Zhou, Min; Ott, Edward; Antonsen, Thomas M; Anlage, Steven M

    2017-10-01

    Concepts from the field of wave chaos have been shown to successfully predict the statistical properties of linear electromagnetic fields in electrically large enclosures. The Random Coupling Model (RCM) describes these properties by incorporating both universal features described by Random Matrix Theory and the system-specific features of particular system realizations. In an effort to extend this approach to the nonlinear domain, we add an active nonlinear frequency-doubling circuit to an otherwise linear wave chaotic system, and we measure the statistical properties of the resulting second harmonic fields. We develop an RCM-based model of this system as two linear chaotic cavities coupled by means of a nonlinear transfer function. The harmonic field strengths are predicted to be the product of two statistical quantities and the nonlinearity characteristics. Statistical results from measurement-based calculation, RCM-based simulation, and direct experimental measurements are compared and show good agreement over many decades of power.

  18. ON THE DYNAMICAL DERIVATION OF EQUILIBRIUM STATISTICAL MECHANICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prigogine, I.; Balescu, R.; Henin, F.

    1960-12-01

    Work on nonequilibrium statistical mechanics, which allows an extension of the kinetic proof to all results of equilibrium statistical mechanics involving a finite number of degrees of freedom, is summarized. As an introduction to the general N-body problem, the scattering theory in classical mechanics is considered. The general N-body problem is considered for the case of classical mechanics, quantum mechanics with Boltzmann statistics, and quantum mechanics including quantum statistics. Six basic diagrams, which describe the elementary processes of the dynamics of correlations, were obtained. (M.C.G.)

  19. Do good actions inspire good actions in others?

    PubMed

    Capraro, Valerio; Marcelletti, Alessandra

    2014-12-12

    Actions such as sharing food and cooperating to reach a common goal have played a fundamental role in the evolution of human societies. Despite the importance of such good actions, little is known about whether and how they can spread from person to person. For instance, does being the recipient of an altruistic act increase your probability of being cooperative with a third party? We have conducted an experiment on Amazon Mechanical Turk to test this mechanism using economic games. We have measured willingness to be cooperative through a standard Prisoner's dilemma and willingness to act altruistically using a binary Dictator game. In the baseline treatments, the endowments needed to play were given by the experimenters, as usual; in the control treatments, they came from a good action made by someone else. Across four different comparisons and a total of 572 subjects, we have never found a significant increase of cooperation or altruism when the endowment came from a good action. We conclude that good actions do not necessarily inspire good actions in others. While this is consistent with the theoretical prediction, it challenges the majority of other experimental studies.

  20. Evaluation of statistical treatments of left-censored environmental data using coincident uncensored data sets: I. Summary statistics

    USGS Publications Warehouse

    Antweiler, Ronald C.; Taylor, Howard E.

    2008-01-01

    The main classes of statistical treatment of below-detection limit (left-censored) environmental data for the determination of basic statistics that have been used in the literature are substitution methods, maximum likelihood, regression on order statistics (ROS), and nonparametric techniques. These treatments, along with using all instrument-generated data (even those below detection), were evaluated by examining data sets in which the true values of the censored data were known. It was found that for data sets with less than 70% censored data, the best technique overall for determination of summary statistics was the nonparametric Kaplan-Meier technique. ROS and the two substitution methods of assigning one-half the detection limit value to censored data or assigning a random number between zero and the detection limit to censored data were adequate alternatives. The use of these two substitution methods, however, requires a thorough understanding of how the laboratory censored the data. The technique of employing all instrument-generated data - including numbers below the detection limit - was found to be less adequate than the above techniques. At high degrees of censoring (greater than 70% censored data), no technique provided good estimates of summary statistics. Maximum likelihood techniques were found to be far inferior to all other treatments except substitution of zero or of the detection limit value for the censored data.
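    As a concrete illustration of the two substitution treatments mentioned above (one-half the detection limit, and a random value between zero and the detection limit), the short sketch below computes summary statistics for a toy left-censored data set; the values and censoring flags are invented, and the Kaplan-Meier and ROS treatments would require additional machinery not shown here.

    import numpy as np

    rng = np.random.default_rng(0)
    values = np.array([0.8, 1.2, 0.5, 2.4, 0.3, 1.9])              # reported values (hypothetical units)
    censored = np.array([False, False, True, False, True, False])  # True: value is a detection limit

    def substitute_half_dl(values, censored):
        out = values.copy()
        out[censored] = 0.5 * values[censored]                     # replace <DL entries by DL/2
        return out

    def substitute_uniform(values, censored, rng):
        out = values.copy()
        out[censored] = rng.uniform(0.0, values[censored])         # random draw between 0 and DL
        return out

    for name, data in [("half-DL", substitute_half_dl(values, censored)),
                       ("uniform", substitute_uniform(values, censored, rng))]:
        print(name, "mean=%.3f" % data.mean(), "sd=%.3f" % data.std(ddof=1))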

  1. A statistical approach to develop a detailed soot growth model using PAH characteristics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raj, Abhijeet; Celnik, Matthew; Shirley, Raphael

    A detailed PAH growth model is developed, which is solved using a kinetic Monte Carlo algorithm. The model describes the structure and growth of planar PAH molecules, and is referred to as the kinetic Monte Carlo-aromatic site (KMC-ARS) model. A detailed PAH growth mechanism based on reactions at radical sites available in the literature, and additional reactions obtained from quantum chemistry calculations are used to model the PAH growth processes. New rates for the reactions involved in the cyclodehydrogenation process for the formation of 6-member rings on PAHs are calculated in this work based on density functional theory simulations. The KMC-ARS model is validated by comparing experimentally observed ensembles on PAHs with the computed ensembles for a C2H2 and a C6H6 flame at different heights above the burner. The motivation for this model is the development of a detailed soot particle population balance model which describes the evolution of an ensemble of soot particles based on their PAH structure. However, at present incorporating such a detailed model into a population balance is computationally unfeasible. Therefore, a simpler model referred to as the site-counting model has been developed, which replaces the structural information of the PAH molecules by their functional groups augmented with statistical closure expressions. This closure is obtained from the KMC-ARS model, which is used to develop correlations and statistics in different flame environments which describe such PAH structural information. These correlations and statistics are implemented in the site-counting model, and results from the site-counting model and the KMC-ARS model are in good agreement. Additionally the effect of steric hindrance in large PAH structures is investigated and correlations for sites unavailable for reaction are presented. (author)
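    The record above relies on a kinetic Monte Carlo solver. As a generic illustration of that class of algorithm (not the KMC-ARS mechanism itself), the sketch below performs a Gillespie-type selection step: a process is chosen with probability proportional to its rate and the clock advances by an exponentially distributed waiting time. The rates are hypothetical.

    import numpy as np

    def kmc_step(rates, rng):
        """Pick a process with probability proportional to its rate; return (index, dt)."""
        total = rates.sum()
        idx = rng.choice(len(rates), p=rates / total)   # which elementary process fires
        dt = rng.exponential(1.0 / total)               # waiting time to the event
        return idx, dt

    rng = np.random.default_rng(42)
    rates = np.array([1.0e3, 2.5e2, 4.0e1])             # hypothetical per-site reaction rates (1/s)
    t, counts = 0.0, np.zeros(3, dtype=int)
    for _ in range(10000):
        idx, dt = kmc_step(rates, rng)
        t += dt
        counts[idx] += 1
    print(t, counts / counts.sum())                     # occurrence frequencies track the rate ratios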

  2. What's statistical about learning? Insights from modelling statistical learning as a set of memory processes

    PubMed Central

    2017-01-01

    Statistical learning has been studied in a variety of different tasks, including word segmentation, object identification, category learning, artificial grammar learning and serial reaction time tasks (e.g. Saffran et al. 1996 Science 274, 1926–1928; Orban et al. 2008 Proceedings of the National Academy of Sciences 105, 2745–2750; Thiessen & Yee 2010 Child Development 81, 1287–1303; Saffran 2002 Journal of Memory and Language 47, 172–196; Misyak & Christiansen 2012 Language Learning 62, 302–331). The difference among these tasks raises questions about whether they all depend on the same kinds of underlying processes and computations, or whether they are tapping into different underlying mechanisms. Prior theoretical approaches to statistical learning have often tried to explain or model learning in a single task. However, in many cases these approaches appear inadequate to explain performance in multiple tasks. For example, explaining word segmentation via the computation of sequential statistics (such as transitional probability) provides little insight into the nature of sensitivity to regularities among simultaneously presented features. In this article, we will present a formal computational approach that we believe is a good candidate to provide a unifying framework to explore and explain learning in a wide variety of statistical learning tasks. This framework suggests that statistical learning arises from a set of processes that are inherent in memory systems, including activation, interference, integration of information and forgetting (e.g. Perruchet & Vinter 1998 Journal of Memory and Language 39, 246–263; Thiessen et al. 2013 Psychological Bulletin 139, 792–814). From this perspective, statistical learning does not involve explicit computation of statistics, but rather the extraction of elements of the input into memory traces, and subsequent integration across those memory traces that emphasize consistent information (Thiessen and Pavlik

  3. What's statistical about learning? Insights from modelling statistical learning as a set of memory processes.

    PubMed

    Thiessen, Erik D

    2017-01-05

    Statistical learning has been studied in a variety of different tasks, including word segmentation, object identification, category learning, artificial grammar learning and serial reaction time tasks (e.g. Saffran et al. 1996 Science 274, 1926-1928; Orban et al. 2008 Proceedings of the National Academy of Sciences 105, 2745-2750; Thiessen & Yee 2010 Child Development 81, 1287-1303; Saffran 2002 Journal of Memory and Language 47, 172-196; Misyak & Christiansen 2012 Language Learning 62, 302-331). The difference among these tasks raises questions about whether they all depend on the same kinds of underlying processes and computations, or whether they are tapping into different underlying mechanisms. Prior theoretical approaches to statistical learning have often tried to explain or model learning in a single task. However, in many cases these approaches appear inadequate to explain performance in multiple tasks. For example, explaining word segmentation via the computation of sequential statistics (such as transitional probability) provides little insight into the nature of sensitivity to regularities among simultaneously presented features. In this article, we will present a formal computational approach that we believe is a good candidate to provide a unifying framework to explore and explain learning in a wide variety of statistical learning tasks. This framework suggests that statistical learning arises from a set of processes that are inherent in memory systems, including activation, interference, integration of information and forgetting (e.g. Perruchet & Vinter 1998 Journal of Memory and Language 39, 246-263; Thiessen et al. 2013 Psychological Bulletin 139, 792-814). From this perspective, statistical learning does not involve explicit computation of statistics, but rather the extraction of elements of the input into memory traces, and subsequent integration across those memory traces that emphasize consistent information (Thiessen and Pavlik

  4. Evaluating the Good Ontology Design Guideline (GoodOD) with the Ontology Quality Requirements and Evaluation Method and Metrics (OQuaRE)

    PubMed Central

    Duque-Ramos, Astrid; Boeker, Martin; Jansen, Ludger; Schulz, Stefan; Iniesta, Miguela; Fernández-Breis, Jesualdo Tomás

    2014-01-01

    Objective To (1) evaluate the GoodOD guideline for ontology development by applying the OQuaRE evaluation method and metrics to the ontology artefacts that were produced by students in a randomized controlled trial, and (2) informally compare the OQuaRE evaluation method with gold standard and competency questions based evaluation methods, respectively. Background In the last decades many methods for ontology construction and ontology evaluation have been proposed. However, none of them has become a standard and there is no empirical evidence of comparative evaluation of such methods. This paper brings together GoodOD and OQuaRE. GoodOD is a guideline for developing robust ontologies. It was previously evaluated in a randomized controlled trial employing metrics based on gold standard ontologies and competency questions as outcome parameters. OQuaRE is a method for ontology quality evaluation which adapts the SQuaRE standard for software product quality to ontologies and has been successfully used for evaluating the quality of ontologies. Methods In this paper, we evaluate the effect of training in ontology construction based on the GoodOD guideline within the OQuaRE quality evaluation framework and compare the results with those obtained for the previous studies based on the same data. Results Our results show a significant effect of the GoodOD training over developed ontologies by topics: (a) a highly significant effect was detected in three topics from the analysis of the ontologies of untrained and trained students; (b) both positive and negative training effects with respect to the gold standard were found for five topics. Conclusion The GoodOD guideline had a significant effect over the quality of the ontologies developed. Our results show that GoodOD ontologies can be effectively evaluated using OQuaRE and that OQuaRE is able to provide additional useful information about the quality of the GoodOD ontologies. PMID:25148262

  5. Evaluating the Good Ontology Design Guideline (GoodOD) with the ontology quality requirements and evaluation method and metrics (OQuaRE).

    PubMed

    Duque-Ramos, Astrid; Boeker, Martin; Jansen, Ludger; Schulz, Stefan; Iniesta, Miguela; Fernández-Breis, Jesualdo Tomás

    2014-01-01

    To (1) evaluate the GoodOD guideline for ontology development by applying the OQuaRE evaluation method and metrics to the ontology artefacts that were produced by students in a randomized controlled trial, and (2) informally compare the OQuaRE evaluation method with gold standard and competency questions based evaluation methods, respectively. In the last decades many methods for ontology construction and ontology evaluation have been proposed. However, none of them has become a standard and there is no empirical evidence of comparative evaluation of such methods. This paper brings together GoodOD and OQuaRE. GoodOD is a guideline for developing robust ontologies. It was previously evaluated in a randomized controlled trial employing metrics based on gold standard ontologies and competency questions as outcome parameters. OQuaRE is a method for ontology quality evaluation which adapts the SQuaRE standard for software product quality to ontologies and has been successfully used for evaluating the quality of ontologies. In this paper, we evaluate the effect of training in ontology construction based on the GoodOD guideline within the OQuaRE quality evaluation framework and compare the results with those obtained for the previous studies based on the same data. Our results show a significant effect of the GoodOD training over developed ontologies by topics: (a) a highly significant effect was detected in three topics from the analysis of the ontologies of untrained and trained students; (b) both positive and negative training effects with respect to the gold standard were found for five topics. The GoodOD guideline had a significant effect over the quality of the ontologies developed. Our results show that GoodOD ontologies can be effectively evaluated using OQuaRE and that OQuaRE is able to provide additional useful information about the quality of the GoodOD ontologies.

  6. Statistics of Macroturbulence from Flow Equations

    NASA Astrophysics Data System (ADS)

    Marston, Brad; Iadecola, Thomas; Qi, Wanming

    2012-02-01

    Probability distribution functions of stochastically-driven and frictionally-damped fluids are governed by a linear framework that resembles quantum many-body theory. Besides the Fokker-Planck approach, there is a closely related Hopf functional method [Ookie Ma and J. B. Marston, J. Stat. Phys. Th. Exp. P10007 (2005)]; in both formalisms, zero modes of linear operators describe the stationary non-equilibrium statistics. To access the statistics, we generalize the flow equation approach [F. Wegner, Ann. Phys. 3, 77 (1994)] (also known as the method of continuous unitary transformations [S. D. Glazek and K. G. Wilson, Phys. Rev. D 48, 5863 (1993); Phys. Rev. D 49, 4214 (1994)]) to find the zero mode. We test the approach using a prototypical model of geophysical and astrophysical flows on a rotating sphere that spontaneously organizes into a coherent jet. Good agreement is found with low-order equal-time statistics accumulated by direct numerical simulation, the traditional method. Different choices for the generators of the continuous transformations, and for closure approximations of the operator algebra, are discussed.

  7. Statistics for Learning Genetics

    NASA Astrophysics Data System (ADS)

    Charles, Abigail Sheena

    This study investigated the knowledge and skills that biology students may need to help them understand statistics/mathematics as it applies to genetics. The data are based on analyses of current representative genetics texts, practicing genetics professors' perspectives, and more directly, students' perceptions of, and performance in, doing statistically-based genetics problems. This issue is at the emerging edge of modern college-level genetics instruction, and this study attempts to identify key theoretical components for creating a specialized biological statistics curriculum. The goal of this curriculum will be to prepare biology students with the skills for assimilating quantitatively-based genetic processes, increasingly at the forefront of modern genetics. To fulfill this, two college-level classes at two universities were surveyed. One university was located in the northeastern US and the other in the West Indies. There was a sample size of 42 students, and a supplementary interview was administered to a select 9 students. Interviews were also administered to professors in the field in order to gain insight into the teaching of statistics in genetics. Key findings indicated that students had very little to no background in statistics (55%). Although students did perform well on exams, with 60% of the population receiving an A or B grade, 77% of them did not offer good explanations for a probability question associated with the normal distribution provided in the survey. The scope and presentation of the applicable statistics/mathematics in some of the most used textbooks in genetics teaching, as well as the genetics syllabi used by instructors, do not help the issue. It was found that the textbooks often either did not give effective explanations for students or completely left out certain topics. The omission of certain statistically/mathematically oriented topics was also seen in the genetics syllabi reviewed for this study. Nonetheless

  8. Statistics of bow shock nonuniformity.

    NASA Technical Reports Server (NTRS)

    Greenstadt, E. W.

    1973-01-01

    The statistical occurrence of pulsation or oblique structure about the earth's generally nonuniform bow shock is estimated at selected points by combining a three-dimensional distribution of interplanetary field directions obtained for a six-day solar wind sector with an index of local pulsation geometry. The result, obtained with a pulsation index of 1.6, is a set of distribution patterns showing the dependence of the pulsation index on the field orientation at the selected shock loci for this value of the index.

  9. Confronting Passive and Active Sensors with Non-Gaussian Statistics

    PubMed Central

    Rodríguez-Gonzálvez, Pablo.; Garcia-Gago, Jesús.; Gomez-Lahoz, Javier.; González-Aguilera, Diego.

    2014-01-01

    This paper has two motivations: firstly, to compare the Digital Surface Models (DSM) derived by passive (digital camera) and by active (terrestrial laser scanner) remote sensing systems when applied to specific architectural objects, and secondly, to test how well classic Gaussian statistics, with its Least Squares principle, adapts to data sets where asymmetrical gross errors may appear and whether this approach should be changed for a non-parametric one. The field of geomatic technology automation is immersed in a highly demanding competition in which any innovation by one of the contenders immediately challenges the opponents to propose a better improvement. Nowadays, we seem to be witnessing an improvement of terrestrial photogrammetry and its integration with computer vision to overcome the performance limitations of laser scanning methods. Through this contribution some of the issues of this “technological race” are examined from the point of view of photogrammetry. New software is introduced and an experimental test is designed, performed and assessed to try to cast some light on this thrilling match. For the case considered in this study, the results show good agreement between both sensors, despite considerable asymmetry. This asymmetry suggests that the standard Normal parameters are not adequate to assess this type of data, especially when accuracy is of importance. In this case, the standard deviation fails to provide a good estimation of the results, whereas the results obtained for the Median Absolute Deviation and for the Biweight Midvariance are more appropriate measures. PMID:25196104
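    The closing sentences above contrast the standard deviation with the Median Absolute Deviation and the Biweight Midvariance. The sketch below computes all three on a deliberately asymmetric sample to show how the robust measures discount a contaminating tail; the data, and the conventional tuning constant c = 9 in the biweight estimator, are illustrative choices rather than values from the paper.

    import numpy as np

    def median_absolute_deviation(x):
        return np.median(np.abs(x - np.median(x)))

    def biweight_midvariance(x, c=9.0):
        m = np.median(x)
        mad = median_absolute_deviation(x)
        u = (x - m) / (c * mad)
        mask = np.abs(u) < 1.0                               # points beyond c*MAD get zero weight
        num = np.sum(((x[mask] - m) ** 2) * (1 - u[mask] ** 2) ** 4)
        den = np.sum((1 - u[mask] ** 2) * (1 - 5 * u[mask] ** 2))
        return len(x) * num / den ** 2

    rng = np.random.default_rng(3)
    sample = np.concatenate([rng.normal(0, 1, 950), rng.normal(8, 1, 50)])  # asymmetric outlier tail
    print("standard deviation:", sample.std(ddof=1))
    print("MAD:", median_absolute_deviation(sample))
    print("biweight midvariance:", biweight_midvariance(sample))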

  10. Assessing the sense of `good at' and `not good at' toward learning topics of mathematics with conjoint analysis

    NASA Astrophysics Data System (ADS)

    Izuta, Giido; Nishikawa, Tomoko

    2017-05-01

    Over the past years, educational psychology and pedagogy communities have focused on the metacognition formalism as a helpful approach to carry out investigations on the feeling of difficulty in mastering some classroom materials that students acquire through their subjective experiences of learning in schools. Motivated by previous studies, this work deals with the assessment of the awareness of `good at' and `not good at' that Japanese junior high school students have towards the main learning modules in their three years of mathematics. More specifically, the aims here are (i) to shed some light on how the awareness varies across grades and gender; (ii) to get some insight into the extent to which conjoint analysis can be applied to understand students' feelings toward learning activities. To accomplish them, a conjoint analysis survey with three conjoint attributes, each with two levels, was designed to assess the learners' perceptions of `good at' and `not good at' with respect to arithmetic (algebraic operations), geometry and functions, which make up the three major modules of their curricula. The measurements took place in a public junior high school with 616 school children. It turned out that the conjoint analyses for boys and girls of each grade generated partial utility and importance graphs which, along with a pre-established precision of measurement, allowed us to form groups of pupils according to their `sense of being good at' characteristics. Moreover, the results showed that the number of groups obtained differed for boys and girls, as well as across grades, when gender and school years were considered for comparisons. These findings, which suggest that female students outnumber their peers in the number of `good at' responses despite the low number of females pursuing careers in mathematics and related fields, imply that the causes of this juxtaposition should be investigated in future work.

  11. Magnification Bias in Gravitational Arc Statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caminha, G. B.; Estrada, J.; Makler, M.

    2013-08-29

    The statistics of gravitational arcs in galaxy clusters is a powerful probe of cluster structure and may provide complementary cosmological constraints. Despite recent progress, discrepancies still remain between modelling and observations of arc abundance, especially regarding the redshift distribution of strong lensing clusters. Besides, fast "semi-analytic" methods still have to incorporate the success obtained with simulations. In this paper we discuss the contribution of the magnification in gravitational arc statistics. Although lensing conserves surface brightness, the magnification increases the signal-to-noise ratio of the arcs, enhancing their detectability. We present an approach to include this and other observational effects in semi-analytic calculations for arc statistics. The cross section for arc formation (σ) is computed through a semi-analytic method based on the ratio of the eigenvalues of the magnification tensor. Using this approach we obtained the scaling of σ with respect to the magnification, and other parameters, allowing for a fast computation of the cross section. We apply this method to evaluate the expected number of arcs per cluster using an elliptical Navarro-Frenk-White matter distribution. Our results show that the magnification has a strong effect on the arc abundance, enhancing the fraction of arcs, moving the peak of the arc fraction to higher redshifts, and softening its decrease at high redshifts. We argue that the effect of magnification should be included in arc statistics modelling and that it could help to reconcile arc statistics predictions with the observational data.

  12. Indoor Soiling Method and Outdoor Statistical Risk Analysis of Photovoltaic Power Plants

    NASA Astrophysics Data System (ADS)

    Rajasekar, Vidyashree

    This is a two-part thesis. Part 1 presents an approach for working towards the development of a standardized artificial soiling method for laminated photovoltaic (PV) cells or mini-modules. Construction of an artificial chamber to maintain controlled environmental conditions and the components/chemicals used in artificial soil formulation are briefly explained. Both poly-Si mini-modules and single-cell mono-Si coupons were soiled, and characterization tests such as I-V, reflectance and quantum efficiency (QE) were carried out on both soiled and cleaned coupons. From the results obtained, poly-Si mini-modules proved to be a good measure of soil uniformity, as any non-uniformity present would not result in a smooth curve during I-V measurements. The challenges faced while executing reflectance and QE characterization tests on poly-Si due to the smaller cell size were eliminated on the mono-Si coupons with large cells, allowing highly repeatable measurements. This study indicates that reflectance measurements between 600-700 nm wavelengths can be used as a direct measure of soil density on the modules. Part 2 determines the most dominant failure modes of field-aged PV modules using experimental data obtained in the field and statistical analysis, FMECA (Failure Mode, Effect, and Criticality Analysis). The failure and degradation modes of about 744 poly-Si glass/polymer frameless modules fielded for 18 years under the cold-dry climate of New York were evaluated. A defect chart, degradation rates (at both string and module levels) and a safety map were generated using the field-measured data. A statistical reliability tool, FMECA, which uses the Risk Priority Number (RPN), is used to determine the dominant failure or degradation modes in the strings and modules by ranking and prioritizing the modes. This study on PV power plants considers all the failure and degradation modes from both safety and performance perspectives. The indoor and outdoor soiling studies were jointly

  13. Phase error statistics of a phase-locked loop synchronized direct detection optical PPM communication system

    NASA Technical Reports Server (NTRS)

    Natarajan, Suresh; Gardner, C. S.

    1987-01-01

    Receiver timing synchronization of an optical Pulse-Position Modulation (PPM) communication system can be achieved using a phase-locked loop (PLL), provided the photodetector output is suitably processed. The magnitude of the PLL phase error is a good indicator of the timing error at the receiver decoder. The statistics of the phase error are investigated while varying several key system parameters such as PPM order, signal and background strengths, and PLL bandwidth. A practical optical communication system utilizing a laser diode transmitter and an avalanche photodiode in the receiver is described, and the sampled phase error data are presented. A linear regression analysis is applied to the data to obtain estimates of the relational constants involving the phase error variance and incident signal power.

  14. Statistical Study of Nightside Quiet Time Midlatitude Ionospheric Convection

    NASA Astrophysics Data System (ADS)

    Maimaiti, M.; Ruohoniemi, J. M.; Baker, J. B. H.; Ribeiro, A. J.

    2018-03-01

    Previous studies have shown that F region midlatitude ionospheric plasma exhibits drifts of a few tens of meters per second during quiet geomagnetic conditions, predominantly in the westward direction. However, detailed morphology of this plasma motion and its drivers are still not well understood. In this study, we have used 2 years of data obtained from six midlatitude SuperDARN radars in the North American sector to derive a statistical model of quiet time midlatitude plasma convection between 52° and 58° magnetic latitude (MLAT). The model is organized in MLAT-MLT (magnetic local time) coordinates and has a spatial resolution of 1° × 7 min with thousands of velocity measurements contributing to most grid cells. Our results show that the flow is predominantly westward (20-55 m/s) and weakly northward (0-20 m/s) deep on the nightside but with a strong seasonal dependence such that the flows tend to be strongest and most structured in winter. These statistical results are in good agreement with previously reported observations from Millstone Hill incoherent scatter radar measurements for a single latitude but also show some interesting new features, one being a significant latitudinal variation of zonal flow velocity near midnight in winter. Our analysis suggests that penetration of the high-latitude convection electric fields can account for the direction of midlatitude convection in the premidnight sector, but postmidnight midlatitude convection is dominated by the neutral wind dynamo.

  15. Calculation of recoil implantation profiles using known range statistics

    NASA Technical Reports Server (NTRS)

    Fung, C. D.; Avila, R. E.

    1985-01-01

    A method has been developed to calculate the depth distribution of recoil atoms that result from ion implantation onto a substrate covered with a thin surface layer. The calculation includes first order recoils considering projected range straggles, and lateral straggles of recoils but neglecting lateral straggles of projectiles. Projectile range distributions at intermediate energies in the surface layer are deduced from look-up tables of known range statistics. A great saving of computing time and human effort is thus attained in comparison with existing procedures. The method is used to calculate recoil profiles of oxygen from implantation of arsenic through SiO2 and of nitrogen from implantation of phosphorus through Si3N4 films on silicon. The calculated recoil profiles are in good agreement with results obtained by other investigators using the Boltzmann transport equation and they also compare very well with available experimental results in the literature. The deviation between calculated and experimental results is discussed in relation to lateral straggles. From this discussion, a range of surface layer thickness for which the method applies is recommended.

  16. Statistical speed of quantum states: Generalized quantum Fisher information and Schatten speed

    NASA Astrophysics Data System (ADS)

    Gessner, Manuel; Smerzi, Augusto

    2018-02-01

    We analyze families of measures for the quantum statistical speed which include as special cases the quantum Fisher information, the trace speed, i.e., the quantum statistical speed obtained from the trace distance, and more general quantifiers obtained from the family of Schatten norms. These measures quantify the statistical speed under generic quantum evolutions and are obtained by maximizing classical measures over all possible quantum measurements. We discuss general properties, optimal measurements, and upper bounds on the speed of separable states. We further provide a physical interpretation for the trace speed by linking it to an analog of the quantum Cramér-Rao bound for median-unbiased quantum phase estimation.
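    For reference, the standard textbook definitions underlying the quantities named above (assumed here; they are not quoted from the record) are the Schatten-p norm, the trace distance, and the statistical speed of a parametrized state:

    \[
    \lVert A \rVert_p = \left(\operatorname{Tr}\,|A|^p\right)^{1/p},
    \qquad
    D_{\mathrm{tr}}(\rho,\sigma) = \tfrac{1}{2}\,\lVert \rho - \sigma \rVert_1,
    \qquad
    v_p(\theta) = \lim_{\delta\theta \to 0}\frac{\lVert \rho_{\theta+\delta\theta} - \rho_{\theta}\rVert_p}{\lvert\delta\theta\rvert}.
    \]

    Under these conventions the trace speed corresponds to the choice p = 1 (up to the conventional factor 1/2), while other values of p give the more general Schatten speeds.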

  17. A fully redundant double difference algorithm for obtaining minimum variance estimates from GPS observations

    NASA Technical Reports Server (NTRS)

    Melbourne, William G.

    1986-01-01

    In double differencing a regression system obtained from concurrent Global Positioning System (GPS) observation sequences, one either undersamples the system to avoid introducing colored measurement statistics, or one fully samples the system incurring the resulting non-diagonal covariance matrix for the differenced measurement errors. A suboptimal estimation result will be obtained in the undersampling case and will also be obtained in the fully sampled case unless the colored-noise statistics are taken into account. The latter approach requires a least squares weighting matrix derived from inversion of a non-diagonal covariance matrix for the differenced measurement errors instead of inversion of the customary diagonal one associated with white noise processes. Presented is the so-called fully redundant double differencing algorithm for generating a weighted double differenced regression system that yields equivalent estimation results, but features for certain cases a diagonal weighting matrix even though the differenced measurement error statistics are highly colored.
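    A small numerical sketch of the estimation issue discussed above: least squares weighted by the inverse of a non-diagonal error covariance (as arises for fully sampled double differences) versus ordinary least squares, which implicitly assumes white noise. The design matrix and covariance below are invented for illustration and are not a GPS observation model.

    import numpy as np

    rng = np.random.default_rng(7)
    A = rng.normal(size=(20, 3))                 # regression (design) matrix
    x_true = np.array([1.0, -2.0, 0.5])

    # Correlated ("colored") errors: differencing against a common reference
    # introduces off-diagonal covariance between measurements.
    C = 0.04 * (np.eye(20) + 0.5 * (np.ones((20, 20)) - np.eye(20)))
    e = rng.multivariate_normal(np.zeros(20), C)
    y = A @ x_true + e

    W = np.linalg.inv(C)                                # weighting matrix from the full covariance
    x_wls = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)   # minimum-variance (generalized LS) estimate
    x_ols = np.linalg.lstsq(A, y, rcond=None)[0]        # diagonal/white-noise assumption

    print("weighted:", x_wls)
    print("ordinary:", x_ols)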

  18. Evaluation of graphical and statistical representation of analytical signals of spectrophotometric methods

    NASA Astrophysics Data System (ADS)

    Lotfy, Hayam Mahmoud; Fayez, Yasmin Mohammed; Tawakkol, Shereen Mostafa; Fahmy, Nesma Mahmoud; Shehata, Mostafa Abd El-Atty

    2017-09-01

    Simultaneous determination of miconazole (MIC), mometasone furoate (MF), and gentamicin (GEN) in their pharmaceutical combination is described. Gentamicin determination is based on derivatization with the o-phthalaldehyde reagent (OPA) without any interference from the other cited drugs, while the spectra of MIC and MF are resolved using both successive and progressive resolution techniques. The first derivative spectrum of MF is measured using constant multiplication or spectrum subtraction, while its recovered zero-order spectrum is obtained using derivative transformation, besides the application of the constant value method. The zero-order spectrum of MIC is obtained by derivative transformation after getting its first derivative spectrum by the derivative subtraction method. The novel method, namely differential amplitude modulation, is used to get the concentrations of MF and MIC, while the novel graphical method, namely concentration value, is used to get the concentrations of MIC, MF, and GEN. Accuracy and precision testing of the developed methods shows good results. Specificity of the methods is ensured, and the methods are successfully applied for the analysis of the pharmaceutical formulation of the three drugs in combination. ICH guidelines are used for validation of the proposed methods. Statistical data are calculated, and the results are satisfactory, revealing no significant difference regarding accuracy and precision.

  19. The GISMO two-millimeter deep field in GOODS-N

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Staguhn, Johannes G.; Kovács, Attila; Arendt, Richard G.

    2014-07-20

    We present deep continuum observations using the GISMO camera at a wavelength of 2 mm centered on the Hubble Deep Field in the GOODS-N field. These are the first deep field observations ever obtained at this wavelength. The 1σ sensitivity in the innermost ∼4' of the 7' diameter map is ∼135 μJy beam^-1, a factor of three higher in flux/beam sensitivity than the deepest available SCUBA 850 μm observations, and almost a factor of four higher in flux/beam sensitivity than the combined MAMBO/AzTEC 1.2 mm observations of this region. Our source extraction algorithm identifies 12 sources directly, and another 3 through correlation with known sources at 1.2 mm and 850 μm. Five of the directly detected GISMO sources have counterparts in the MAMBO/AzTEC catalog, and four of those also have SCUBA counterparts. HDF850.1, one of the first blank-field detected submillimeter galaxies, is now detected at 2 mm. The median redshift of all sources with counterparts of known redshifts is z̃ = 2.91 ± 0.94. Statistically, the detections are most likely real for five of the seven 2 mm sources without shorter wavelength counterparts, while the probability for none of them being real is negligible.

  20. Statistics of concentrations due to single air pollution sources to be applied in numerical modelling of pollutant dispersion

    NASA Astrophysics Data System (ADS)

    Tumanov, Sergiu

    A test of goodness of fit based on rank statistics was applied to prove the applicability of the Eggenberger-Polya discrete probability law to hourly SO2 concentrations measured in the vicinity of single sources. With this end in view, the pollutant concentration was considered an integral quantity which may be accepted if one properly chooses the unit of measurement (in this case μg m-3) and if account is taken of the limited accuracy of measurements. The results of the test being satisfactory, even in the range of upper quantiles, the Eggenberger-Polya law was used in association with numerical modelling to estimate statistical parameters, e.g. quantiles, cumulative probabilities of threshold concentrations to be exceeded, and so on, in the grid points of a network covering the area of interest. This only needs accurate estimations of means and variances of the concentration series which can readily be obtained through routine air pollution dispersion modelling.
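    A brief sketch of how the fitted law can be used once grid-point means and variances are available from dispersion modelling. It relies on the standard identification of the Eggenberger-Polya law with a negative binomial distribution; that identification, and the numerical values below, are assumptions made here for illustration only.

    from scipy import stats

    mean, var = 40.0, 160.0            # hypothetical grid-point mean and variance (concentration units)
    threshold = 125.0                  # hypothetical threshold concentration

    # Moment matching for a negative binomial with the given mean and variance (var > mean).
    r = mean**2 / (var - mean)         # "number of successes" parameter
    p = mean / var                     # success probability

    dist = stats.nbinom(r, p)
    print("95th-percentile concentration:", dist.ppf(0.95))
    print("P(concentration > threshold):", dist.sf(threshold))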

  1. Statistical Mechanics of Disordered Systems - Series: Cambridge Series in Statistical and Probabilistic Mathematics (No. 18)

    NASA Astrophysics Data System (ADS)

    Bovier, Anton

    2006-06-01

    Our mathematical understanding of the statistical mechanics of disordered systems is going through a period of stunning progress. This self-contained book is a graduate-level introduction for mathematicians and for physicists interested in the mathematical foundations of the field, and can be used as a textbook for a two-semester course on mathematical statistical mechanics. It assumes only basic knowledge of classical physics and, on the mathematics side, a good working knowledge of graduate-level probability theory. The book starts with a concise introduction to statistical mechanics, proceeds to disordered lattice spin systems, and concludes with a presentation of the latest developments in the mathematical understanding of mean-field spin glass models. In particular, recent progress towards a rigorous understanding of the replica symmetry-breaking solutions of the Sherrington-Kirkpatrick spin glass models, due to Guerra, Aizenman-Sims-Starr and Talagrand, is reviewed in some detail. Comprehensive introduction to an active and fascinating area of research. Clear exposition that builds to the state of the art in the mathematics of spin glasses. Written by a well-known and active researcher in the field.

  2. Unifying distance-based goodness-of-fit indicators for hydrologic model assessment

    NASA Astrophysics Data System (ADS)

    Cheng, Qinbo; Reinhardt-Imjela, Christian; Chen, Xi; Schulte, Achim

    2014-05-01

    The goodness-of-fit indicator, i.e. the efficiency criterion, is very important for model calibration. However, current knowledge about goodness-of-fit indicators is largely empirical and lacks theoretical support. Based on likelihood theory, a unified distance-based goodness-of-fit indicator termed the BC-GED model is proposed, which uses the Box-Cox (BC) transformation to remove the heteroscedasticity of model errors and the generalized error distribution (GED) with zero mean to fit the distribution of model errors after the BC transformation. The BC-GED model can unify all recent distance-based goodness-of-fit indicators and reveals that the widely used mean square error (MSE) and mean absolute error (MAE) imply the statistical assumptions that the model errors follow the Gaussian distribution and the Laplace distribution with zero mean, respectively. The empirical knowledge about goodness-of-fit indicators can also be easily interpreted by the BC-GED model; e.g., the sensitivity to high flow of the goodness-of-fit indicators with a large power of model errors results from the low probability of large model errors in the distribution assumed by these indicators. In order to assess the effect of the parameters of the BC-GED model (i.e. the BC transformation parameter λ and the GED kurtosis coefficient β, also termed the power of model errors) on hydrologic model calibration, six cases of the BC-GED model were applied in the Baocun watershed (East China) with the SWAT-WB-VSA model. Comparison of the inferred model parameters and model simulation results among the six indicators demonstrates that these indicators can be clearly separated into two classes by the GED kurtosis β: β > 1 and β ≤ 1. SWAT-WB-VSA calibrated by the class β > 1 of distance-based goodness-of-fit indicators captures high flow very well but mimics the baseflow very badly, whereas calibrated by the class β ≤ 1 it mimics the baseflow very well, because, first, the larger the value of β, the greater emphasis is put on
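    The family of indicators described above can be sketched compactly: Box-Cox-transform observations and simulations, then average the absolute transformed errors raised to the power β. With λ = 1 and β = 2 this reduces (up to affine constants) to the MSE, and with β = 1 to the MAE. The function names and sample series below are illustrative, not taken from the study.

    import numpy as np

    def box_cox(y, lam):
        y = np.asarray(y, dtype=float)
        return np.log(y) if lam == 0 else (y**lam - 1.0) / lam

    def bc_ged_distance(obs, sim, lam=0.3, beta=1.0):
        err = box_cox(obs, lam) - box_cox(sim, lam)   # heteroscedasticity-reduced errors
        return np.mean(np.abs(err) ** beta)           # GED-type power of model errors

    obs = np.array([0.5, 1.2, 3.8, 10.5, 42.0, 7.3])  # hypothetical observed discharges
    sim = np.array([0.7, 1.0, 4.5,  9.0, 35.0, 8.1])  # hypothetical simulated discharges

    for beta in (0.5, 1.0, 2.0):
        print(f"beta={beta}: {bc_ged_distance(obs, sim, beta=beta):.4f}")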

  3. On Using the Weimer Statistical Model for Real-Time Ionospheric Specifications and Forecasts

    NASA Astrophysics Data System (ADS)

    Bekerat, H. A.; Schunk, R. W.; Scherliess, L.

    2002-12-01

    The Weimer statistical model (Weimer, 2001) for the high-latitude convection pattern was tested with regard to its ability to produce real-time convection patterns. This work is being conducted under the polar section of GAIM (Global Assimilation of Ionospheric Measurements). The method adopted involves the comparison of the cross-track ion drift velocities measured by DMSP satellites with those calculated from the Weimer model. Starting with a Weimer pattern obtained using real-time IMF and solar wind data at the time of a DMSP satellite pass in the high-latitude ionosphere, the cross-track ion drift velocities along the DMSP track were calculated from the Weimer convection model and compared to those measured by the DMSP satellite. Then, in order to improve the agreement between the measurements and the model, two of the input parameters to the model, the IMF clock angle and the solar wind speed, were varied to obtain the pattern that gives the best agreement with the DMSP satellite measurements. Four months of data (March, July, September, and December 1998) were used to test the Weimer model. The results show that the agreement between the measurements and the Weimer model is improved by using this procedure. The Weimer model is good in a statistical sense: it was able to reproduce the large-scale structure in most cases. However, it is not good enough to be used for real-time ionospheric specifications and forecasts because it failed to reproduce much of the mesoscale structure measured along most DMSP satellite passes. Reference: Weimer, D. R., J. Geophys. Res., 106, 407, 2001.

  4. Statistical wave climate projections for coastal impact assessments

    NASA Astrophysics Data System (ADS)

    Camus, P.; Losada, I. J.; Izaguirre, C.; Espejo, A.; Menéndez, M.; Pérez, J.

    2017-09-01

    Global multimodel wave climate projections are obtained at 1.0° × 1.0° scale from 30 Coupled Model Intercomparison Project Phase 5 (CMIP5) global circulation model (GCM) realizations. A semi-supervised weather-typing approach based on a characterization of the ocean wave generation areas and the historical wave information from the recent GOW2 database is used to train the statistical model. This framework is also applied to obtain high resolution projections of coastal wave climate and coastal impacts such as port operability and coastal flooding. Regional projections are estimated using the collection of weather types at a spacing of 1.0°. This assumption is feasible because the predictor is defined based on the wave generation area and the classification is guided by the local wave climate. The assessment of future changes in coastal impacts is based on direct downscaling of indicators defined by empirical formulations (total water level for coastal flooding and number of hours per year with overtopping for port operability). Global multimodel projections of the significant wave height and peak period are consistent with changes obtained in previous studies. Statistical confidence in the expected changes is obtained owing to the large number of GCMs used to construct the ensemble. The proposed methodology proves flexible for projecting wave climate at different spatial scales. Regional changes of additional variables, such as wave direction or other statistics, can be estimated from the future empirical distribution, with extreme values restricted to high percentiles (i.e., 95th, 99th percentiles). The statistical framework can also be applied to evaluate regional coastal impacts, integrating changes in storminess and sea level rise.

  5. Online Updating of Statistical Inference in the Big Data Setting.

    PubMed

    Schifano, Elizabeth D; Wu, Jing; Wang, Chun; Yan, Jun; Chen, Ming-Hui

    2016-01-01

    We present statistical methods for big data arising from online analytical processing, where large amounts of data arrive in streams and require fast analysis without storage/access to the historical data. In particular, we develop iterative estimating algorithms and statistical inferences for linear models and estimating equations that update as new data arrive. These algorithms are computationally efficient, minimally storage-intensive, and allow for possible rank deficiencies in the subset design matrices due to rare-event covariates. Within the linear model setting, the proposed online-updating framework leads to predictive residual tests that can be used to assess the goodness-of-fit of the hypothesized model. We also propose a new online-updating estimator under the estimating equation setting. Theoretical properties of the goodness-of-fit tests and proposed estimators are examined in detail. In simulation studies and real data applications, our estimator compares favorably with competing approaches under the estimating equation setting.
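    A minimal sketch of the online-updating idea in the linear model setting: accumulate the cross-product matrices X'X and X'y block by block as data arrive, so the least-squares estimate can be refreshed without revisiting historical data. This is a generic illustration of the principle, not the authors' estimator or their predictive residual tests.

    import numpy as np

    class OnlineLinearModel:
        def __init__(self, n_features):
            self.xtx = np.zeros((n_features, n_features))
            self.xty = np.zeros(n_features)

        def update(self, X_block, y_block):
            self.xtx += X_block.T @ X_block          # accumulate sufficient statistics
            self.xty += X_block.T @ y_block

        def coef(self):
            # pinv tolerates rank deficiency in early or sparse blocks
            return np.linalg.pinv(self.xtx) @ self.xty

    rng = np.random.default_rng(0)
    beta_true = np.array([2.0, -1.0, 0.5])
    model = OnlineLinearModel(3)
    for _ in range(50):                              # 50 incoming data blocks
        X = rng.normal(size=(100, 3))
        y = X @ beta_true + rng.normal(scale=0.1, size=100)
        model.update(X, y)
    print(model.coef())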

  6. Statistical evaluation of forecasts

    NASA Astrophysics Data System (ADS)

    Mader, Malenka; Mader, Wolfgang; Gluckman, Bruce J.; Timmer, Jens; Schelter, Björn

    2014-08-01

    Reliable forecasts of extreme but rare events, such as earthquakes, financial crashes, and epileptic seizures, would render interventions and precautions possible. Therefore, forecasting methods have been developed which intend to raise an alarm if an extreme event is about to occur. In order to statistically validate the performance of a prediction system, it must be compared to the performance of a random predictor, which raises alarms independent of the events. Such a random predictor can be obtained by bootstrapping or analytically. We propose an analytic statistical framework which, in contrast to conventional methods, allows for validating independently the sensitivity and specificity of a forecasting method. Moreover, our method accounts for the periods during which an event has to remain absent or occur after a respective forecast.
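    As a concrete, deliberately simplified illustration of comparing a forecasting method against a random predictor, the sketch below computes the sensitivity of an alarm sequence and tests it against a binomial null in which each event is caught with probability equal to the alarm-time fraction; this ignores alarm clustering and is not the analytic framework proposed in the paper. The synthetic alarm and event rates are invented.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    alarms = rng.random(10000) < 0.05                 # alarm raised in 5% of time bins
    events = rng.random(10000) < 0.002                # rare extreme events

    q = alarms.mean()                                 # alarm-time fraction of the predictor
    n_events = events.sum()
    hits = np.sum(alarms & events)                    # events falling into alarm bins
    sensitivity = hits / n_events

    # Under a random predictor, hits ~ Binomial(n_events, q)
    p_value = stats.binom.sf(hits - 1, n_events, q)
    print(f"sensitivity={sensitivity:.3f}, chance level={q:.3f}, p={p_value:.3f}")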

  7. 19 CFR 10.521 - Goods eligible for tariff preference level claims.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... States-Singapore Free Trade Agreement Tariff Preference Level § 10.521 Goods eligible for tariff... assembled in Singapore from fabric or yarn produced or obtained outside the territory of Singapore or the...

  8. A "good death": perspectives of Muslim patients and health care providers.

    PubMed

    Tayeb, Mohamad A; Al-Zamel, Ersan; Fareed, Muhammed M; Abouellail, Hesham A

    2010-01-01

    Twelve "good death" principles have been identified that apply to Westerners. This study aimed to review the TFHCOP good death perception to determine its validity for Muslim patients and health care providers, and to identify and describe other components of the Muslim good death perspective. Participants included 284 Muslims of both genders with different nationalities and careers. We used a 12-question questionnaire based on the 12 principles of the TFHCOP good death definition, followed by face-to-face interviews. We used descriptive statistics to analyze questionnaire responses. However, for new themes, we used a grounded theory approach with a "constant comparisons" method. On average, each participant agreed on eight principles of the questionnaire. Dignity, privacy, spiritual and emotional support, access to hospice care, ability to issue advance directives, and to have time to say goodbye were the top priorities. Participants identified three main domains. The first domain was related to faith and belief. The second domain included some principles related to self-esteem and person's image to friends and family. The third domain was related to satisfaction about family security after the death of the patient. Professional role distinctions were more pronounced than were gender or nationality differences. Several aspects of "good death," as perceived by Western communities, are not recognized as being important by many Muslim patients and health care providers. Furthermore, our study introduced three novel components of good death in Muslim society.

  9. New approach in the quantum statistical parton distribution

    NASA Astrophysics Data System (ADS)

    Sohaily, Sozha; Vaziri (Khamedi), Mohammad

    2017-12-01

    An attempt to find simple parton distribution functions (PDFs) based on a quantum statistical approach is presented. The PDFs described by the statistical model have very interesting physical properties which help in understanding the structure of partons. The longitudinal portion of the distribution functions is given by applying the maximum entropy principle. An interesting and simple approach to determine the statistical variables exactly, without fitting and fixing parameters, is surveyed. Analytic expressions of the x-dependent PDFs are obtained in the whole x region [0, 1], and the computed distributions are consistent with the experimental observations. The agreement with experimental data gives a robust confirmation of our simple statistical model.

  10. Depreciation of public goods in spatial public goods games

    NASA Astrophysics Data System (ADS)

    Shi, Dong-Mei; Zhuang, Yong; Li, Yu-Jian; Wang, Bing-Hong

    2011-10-01

    In real situations, the value of public goods will be reduced or even lost because of external factors or for intrinsic reasons. In this work, we investigate the evolution of cooperation by considering the effect of depreciation of public goods in spatial public goods games on a square lattice. It is assumed that each individual gains full advantage if the number of the cooperators nc within a group centered on that individual equals or exceeds the critical mass (CM). Otherwise, there is depreciation of the public goods, which is realized by rescaling the multiplication factor r to (nc/CM)r. It is shown that the emergence of cooperation is remarkably promoted for CM > 1 even at small values of r, and a global cooperative level is achieved at an intermediate value of CM = 4 at a small r. We further study the effect of depreciation of public goods on different topologies of a regular lattice, and find that the system always reaches global cooperation at a moderate value of CM = G - 1 regardless of whether or not there exist overlapping triangle structures on the regular lattice, where G is the group size of the associated regular lattice.
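
    The rescaling described above translates directly into a payoff rule. The following sketch (function and parameter names are ours, not from the paper) computes per-round payoffs in a single group with depreciation of the public good below the critical mass.

    def group_payoffs(n_c, group_size, r, critical_mass, cost=1.0):
        # If the number of cooperators n_c reaches the critical mass, the full
        # multiplication factor r applies; otherwise r is rescaled to (n_c/CM)*r.
        r_eff = r if n_c >= critical_mass else (n_c / critical_mass) * r
        share = r_eff * n_c * cost / group_size      # public good split equally
        return share - cost, share                   # (cooperator, defector) payoff

    # Illustrative group of 5 with CM = 4: depreciation hits when only 2 cooperate.
    print(group_payoffs(n_c=4, group_size=5, r=3.5, critical_mass=4))
    print(group_payoffs(n_c=2, group_size=5, r=3.5, critical_mass=4))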

  11. Inappropriate Fiddling with Statistical Analyses to Obtain a Desirable P-value: Tests to Detect its Presence in Published Literature

    PubMed Central

    Gadbury, Gary L.; Allison, David B.

    2012-01-01

    Much has been written regarding p-values below certain thresholds (most notably 0.05) denoting statistical significance and the tendency of such p-values to be more readily publishable in peer-reviewed journals. Intuition suggests that there may be a tendency to manipulate statistical analyses to push a “near significant p-value” to a level that is considered significant. This article presents a method for detecting the presence of such manipulation (herein called “fiddling”) in a distribution of p-values from independent studies. Simulations are used to illustrate the properties of the method. The results suggest that the method has low type I error and that power approaches acceptable levels as the number of p-values being studied approaches 1000. PMID:23056287
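
    The authors' detection method is not reproduced here; as a hedged illustration of the general idea, a simple "caliper"-style check compares how many independent p-values fall just below versus just above 0.05, counts that should be roughly balanced when the p-value density is smooth near the threshold. The window width and the example p-values are assumptions.

    import math

    def binom_sf(k, n, p=0.5):
        return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    def caliper_test(p_values, threshold=0.05, width=0.01):
        # Count p-values just below vs just above the threshold and return a
        # one-sided binomial p-value for an excess just below it.
        below = sum(threshold - width < p <= threshold for p in p_values)
        above = sum(threshold < p <= threshold + width for p in p_values)
        n = below + above
        return below, above, (binom_sf(below, n) if n else float("nan"))

    # Hypothetical collection of published p-values:
    pvals = [0.049, 0.048, 0.047, 0.046, 0.052, 0.031, 0.044, 0.063, 0.049, 0.012]
    print(caliper_test(pvals))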

  12. Inappropriate fiddling with statistical analyses to obtain a desirable p-value: tests to detect its presence in published literature.

    PubMed

    Gadbury, Gary L; Allison, David B

    2012-01-01

    Much has been written regarding p-values below certain thresholds (most notably 0.05) denoting statistical significance and the tendency of such p-values to be more readily publishable in peer-reviewed journals. Intuition suggests that there may be a tendency to manipulate statistical analyses to push a "near significant p-value" to a level that is considered significant. This article presents a method for detecting the presence of such manipulation (herein called "fiddling") in a distribution of p-values from independent studies. Simulations are used to illustrate the properties of the method. The results suggest that the method has low type I error and that power approaches acceptable levels as the number of p-values being studied approaches 1000.

  13. VizieR Online Data Catalog: GOODS-S CANDELS multiwavelength catalog (Guo+, 2013)

    NASA Astrophysics Data System (ADS)

    Guo, Y.; Ferguson, H. C.; Giavalisco, M.; Barro, G.; Willner, S. P.; Ashby, M. L. N.; Dahlen, T.; Donley, J. L.; Faber, S. M.; Fontana, A.; Galametz, A.; Grazian, A.; Huang, K.-H.; Kocevski, D. D.; Koekemoer, A. M.; Koo, D. C.; McGrath, E. J.; Peth, M.; Salvato, M.; Wuyts, S.; Castellano, M.; Cooray, A. R.; Dickinson, M. E.; Dunlop, J. S.; Fazio, G. G.; Gardner, J. P.; Gawiser, E.; Grogin, N. A.; Hathi, N. P.; Hsu, L.-T.; Lee, K.-S.; Lucas, R. A.; Mobasher, B.; Nandra, K.; Newman, J. A.; van der Wel, A.

    2014-04-01

    The Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS; Grogin et al. 2011ApJS..197...35G; Koekemoer et al. 2011ApJS..197...36K) is designed to document galaxy formation and evolution over the redshift range of z=1.5-8. The core of CANDELS is to use the revolutionary near-infrared HST/WFC3 camera, installed on HST in 2009 May, to obtain deep imaging of faint and faraway objects. The GOODS-S field, centered at RAJ2000=03:32:30 and DEJ2000=-27:48:20 and located within the Chandra Deep Field South (CDFS; Giacconi et al. 2002, Cat. J/ApJS/139/369), is a sky region of about 170arcmin2 which has been targeted for some of the deepest observations ever taken by NASA's Great Observatories, HST, Spitzer, and Chandra as well as by other world-class telescopes. The field has been imaged (among other programs) at optical wavelengths with HST/ACS in the F435W, F606W, F775W, and F850LP bands as part of the HST Treasury Program: the Great Observatories Origins Deep Survey (GOODS; Giavalisco et al. 2004, Cat. II/261); and in the mid-IR (3.6-24um) with Spitzer as part of the GOODS Spitzer Legacy Program (PI: M. Dickinson). The CDF-S/GOODS field was observed by the MOSAIC II imager on the CTIO 4m Blanco telescope to obtain deep U-band observations in 2001 September. Another U-band survey in GOODS-S was carried out using the VIMOS instrument mounted at the Melipal Unit Telescope of the VLT at ESO's Cerro Paranal Observatory, Chile. This large ESO program (168.A-0485; PI: C. Cesarsky) was carried out as service-mode observations at UT3 between 2004 August and fall 2006. In the ground-based NIR, imaging observations of the CDFS were carried out in the J, H, and Ks bands using the ISAAC instrument mounted at the Antu Unit Telescope of the VLT. Data were obtained as part of the ESO Large Programme 168.A-0485 (PI: C. Cesarsky) as well as ESO Programmes 64.O-0643, 66.A-0572, and 68.A-0544 (PI: E. Giallongo) with a total allocation time of ~500 hr from 1999 October to 2007 January

  14. The public goods game with a new form of shared reward

    NASA Astrophysics Data System (ADS)

    Zhang, Chunyan; Chen, Zengqiang

    2016-10-01

    Altruistic contribution to a common good evenly enjoyed by all group members is hard to explain because a defector obtains greater benefits than a cooperator. A variety of mechanisms have been proposed to resolve the collective dilemma over the years, including rewards for altruism. An underrated and easily ignored phenomenon is that the altruistic behaviors of cooperators not only directly enhance the benefits of their game opponents, but also indirectly benefit other allied members in their surroundings (e.g. relatives or friends). Here we propose a shared reward, in the form of extensive benefits, to extend the traditional definition of the public goods game. Mathematical analysis using the Moran process helps us to obtain the fixation probability for one ‘mutant’ cooperator to invade and dominate the whole defecting population. Results suggest that a tunable parameter exists, above a certain critical value of which natural selection favors cooperation over defection. In addition, analytical results with replicator dynamics show that this critical value influencing the evolution of altruism is closely correlated with the population size, the gaming group size and the synergy factor of the public goods game. These results, based on an extended notion of shared reward and extensive benefits, are expected to provide novel explanations for the emergence of altruistic behaviors.
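
    A hedged sketch of the fixation-probability calculation mentioned above, using the standard Moran-process formula. The payoff structure (a well-mixed public goods game plus a flat shared-reward bonus delta for cooperators) and all parameter values are illustrative assumptions, not the paper's exact extensive-benefit scheme.

    def fixation_probability(N, fitness_C, fitness_D):
        # Fixation probability of a single mutant cooperator in a Moran process:
        # rho = 1 / (1 + sum_k prod_{j<=k} f_D(j)/f_C(j)),
        # where fitness_C(j), fitness_D(j) are fitnesses when j cooperators are present.
        total, prod = 1.0, 1.0
        for k in range(1, N):
            prod *= fitness_D(k) / fitness_C(k)
            total += prod
        return 1.0 / total

    # Hypothetical well-mixed public goods game with cost c, synergy r, and a
    # shared-reward bonus delta added to cooperator payoffs (illustrative only).
    N, c, r, delta, w = 50, 1.0, 3.0, 0.4, 0.1   # w = selection intensity
    payoff_C = lambda j: r * c * j / N - c + delta
    payoff_D = lambda j: r * c * j / N
    f_C = lambda j: 1 - w + w * payoff_C(j)
    f_D = lambda j: 1 - w + w * payoff_D(j)
    print(fixation_probability(N, f_C, f_D))      # compare with the neutral value 1/N = 0.02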

  15. Wolfram's class IV automata and a good life

    NASA Astrophysics Data System (ADS)

    McIntosh, Harold V.

    1990-09-01

    A comprehensive discussion of Wolfram's four classes of cellular automata is given, with the intention of relating them to Conway's criteria for a good game of Life. Although it is known that such classifications cannot be entirely rigorous, much information about the behavior of an automaton can be gleaned from the statistical properties of its transition table. Still more information can be deduced from the mean field approximation to its state densities, in particular, from the distribution of horizontal and diagonal tangents of the latter. In turn these characteristics can be related to the presence or absence of certain loops in the de Bruijn diagram of the automaton.
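
    As a sketch of the mean field approximation referred to above (assuming the standard Wolfram rule-number convention for elementary, radius-1, two-state automata), the next-step density is a cubic polynomial in the current density; its fixed points and tangents are the features the abstract relates to the behavior classes.

    def mean_field_map(rule_number):
        # p -> p' = sum over the 8 neighborhoods of rule(neighborhood) * p^k (1-p)^(3-k),
        # where k is the number of live cells in the neighborhood.
        bits = [(rule_number >> n) & 1 for n in range(8)]   # output for neighborhood index n = 4a+2b+c
        def f(p):
            out = 0.0
            for n, bit in enumerate(bits):
                if bit:
                    k = bin(n).count("1")
                    out += p**k * (1 - p)**(3 - k)
            return out
        return f

    # Approximate fixed points of the mean-field map for rule 22 (a "complex" rule):
    f = mean_field_map(22)
    for i in range(101):
        p = i / 100
        if abs(f(p) - p) < 5e-3:
            print("approximate fixed point near p =", p)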

  16. Teaching Basic Probability in Undergraduate Statistics or Management Science Courses

    ERIC Educational Resources Information Center

    Naidu, Jaideep T.; Sanford, John F.

    2017-01-01

    Standard textbooks in core Statistics and Management Science classes present various examples to introduce basic probability concepts to undergraduate business students. These include tossing of a coin, throwing a die, and examples of that nature. While these are good examples to introduce basic probability, we use improvised versions of Russian…

  17. 19 CFR 10.605 - Goods classifiable as goods put up in sets.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...-Central America-United States Free Trade Agreement Rules of Origin § 10.605 Goods classifiable as goods... 19 Customs Duties 1 2010-04-01 2010-04-01 false Goods classifiable as goods put up in sets. 10.605 Section 10.605 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY...

  18. Modeling of carbon dioxide condensation in the high pressure flows using the statistical BGK approach

    NASA Astrophysics Data System (ADS)

    Kumar, Rakesh; Li, Zheng; Levin, Deborah A.

    2011-05-01

    In this work, we propose a new heat accommodation model to simulate freely expanding homogeneous condensation flows of gaseous carbon dioxide using a new approach, the statistical Bhatnagar-Gross-Krook method. The motivation for the present work comes from the earlier work of Li et al. [J. Phys. Chem. 114, 5276 (2010)] in which condensation models were proposed and used in the direct simulation Monte Carlo method to simulate the flow of carbon dioxide from supersonic expansions of small nozzles into near-vacuum conditions. Simulations conducted for stagnation pressures of one and three bar were compared with the measurements of gas and cluster number densities, cluster size, and carbon dioxide rotational temperature obtained by Ramos et al. [Phys. Rev. A 72, 3204 (2005)]. Due to the high computational cost of the direct simulation Monte Carlo method, comparison between simulations and data could only be performed for these stagnation pressures, with good agreement obtained beyond the condensation onset point, in the farfield. As the stagnation pressure increases, the degree of condensation also increases; therefore, to improve the modeling of condensation onset, one must be able to simulate higher stagnation pressures. In simulations of an expanding flow of argon through a nozzle, Kumar et al. [AIAA J. 48, 1531 (2010)] found that the statistical Bhatnagar-Gross-Krook method provides the same accuracy as the direct simulation Monte Carlo method, but at one half of the computational cost. In this work, the statistical Bhatnagar-Gross-Krook method was modified to account for internal degrees of freedom for multi-species polyatomic gases. With the computational approach in hand, we developed and tested a new heat accommodation model for a polyatomic system to properly account for the heat release of condensation. We then developed condensation models in the framework of the statistical Bhatnagar-Gross-Krook method. Simulations were found to agree well with the experiment for

  19. Descriptive statistics: the specification of statistical measures and their presentation in tables and graphs. Part 7 of a series on evaluation of scientific publications.

    PubMed

    Spriestersbach, Albert; Röhrig, Bernd; du Prel, Jean-Baptist; Gerhold-Ay, Aslihan; Blettner, Maria

    2009-09-01

    Descriptive statistics are an essential part of biometric analysis and a prerequisite for the understanding of further statistical evaluations, including the drawing of inferences. When data are well presented, it is usually obvious whether the author has collected and evaluated them correctly and in keeping with accepted practice in the field. Statistical variables in medicine may be of either the metric (continuous, quantitative) or categorical (nominal, ordinal) type. Easily understandable examples are given. Basic techniques for the statistical description of collected data are presented and illustrated with examples. The goal of a scientific study must always be clearly defined. The definition of the target value or clinical endpoint determines the level of measurement of the variables in question. Nearly all variables, whatever their level of measurement, can be usefully presented graphically and numerically. The level of measurement determines what types of diagrams and statistical values are appropriate. There are also different ways of presenting combinations of two independent variables graphically and numerically. The description of collected data is indispensable. If the data are of good quality, valid and important conclusions can already be drawn when they are properly described. Furthermore, data description provides a basis for inferential statistics.

  20. The effects of good glycaemic control on left ventricular and coronary endothelial functions in patients with poorly controlled Type 2 diabetes mellitus.

    PubMed

    Erdogan, Dogan; Akcay, Salaheddin; Yucel, Habil; Ersoy, I Hakkı; Icli, Atilla; Kutlucan, Ali; Arslan, Akif; Yener, Mahmut; Ozaydin, Mehmet; Tamer, M Numan

    2015-03-01

    Diabetics are at risk for developing overt heart failure and subclinical left ventricular (LV) dysfunction. Also, impaired coronary flow reserve (CFR) reflecting coronary microvascular dysfunction is common in diabetics. However, no substantial data regarding the effects of good glycaemic control on subclinical LV dysfunction and CFR are available. To investigate whether good glycaemic control had favourable effects on subclinical LV dysfunction and CFR. Prospective, open-label, follow-up study. Diabetics (n = 202) were classified according to baseline HbA1C levels into patients with good (group 1; <7·0%) and poor glycaemic control (≥7·0%). All patients underwent echocardiographic examination at the baseline evaluation, which was repeated at months 6 and 12. Based on HbA1C levels obtained at month 6, the patients with poor glycaemic control were divided into two groups: those who achieved (group 2) and those who did not achieve (group 3) good glycaemic control. The groups were comparable with respect to diastolic function parameters including left atrium diameter, mitral E/A, Sm, Em/Am, E/E' and Tei index, and these parameters did not significantly change at follow-up in the groups. At baseline, CFR was slightly higher in group 1 than in group 2 and group 3, but the difference did not reach statistical significance. At follow-up, CFR remained unchanged in group 1 (P = 0·58) and group 3 (P = 0·86), but increased in group 2 (P = 0·02: month 6 vs baseline and P = 0·004: month 12 vs baseline). Diabetics with poor and good glycaemic control were comparable with respect to echocardiographic parameters reflecting subclinical LV dysfunction, and good glycaemic control did not affect these parameters. However, good glycaemic control improved CFR. © 2014 John Wiley & Sons Ltd.

  1. Bayesian statistics and Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Koch, K. R.

    2018-03-01

    The Bayesian approach allows an intuitive way to derive the methods of statistics. Probability is defined as a measure of the plausibility of statements or propositions. Three rules are sufficient to obtain the laws of probability. If the statements refer to the numerical values of variables, the so-called random variables, univariate and multivariate distributions follow. They lead to point estimation, by which unknown quantities, i.e. unknown parameters, are computed from measurements. The unknown parameters are random variables; in traditional statistics, which is not founded on Bayes' theorem, they are fixed quantities. Bayesian statistics therefore recommends itself for Monte Carlo methods, which generate random variates from given distributions. Monte Carlo methods, of course, can also be applied in traditional statistics. The unknown parameters are introduced as functions of the measurements, and the Monte Carlo methods give the covariance matrix and the expectation of these functions. A confidence region is derived within which the unknown parameters are situated with a given probability. Following a method of traditional statistics, hypotheses are tested by determining whether a value for an unknown parameter lies inside or outside the confidence region. The error propagation of a random vector by the Monte Carlo methods is presented as an application. If the random vector results from a nonlinearly transformed vector, its covariance matrix and its expectation follow from the Monte Carlo estimate. This saves a considerable number of derivatives from having to be computed, and errors of the linearization are avoided. The Monte Carlo method is therefore efficient. If the functions of the measurements are given by a sum of two or more random vectors with different multivariate distributions, the resulting distribution is generally not known. The Monte Carlo methods are then needed to obtain the covariance matrix and the expectation of the sum.
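
    A minimal sketch of the Monte Carlo error propagation described above: draw variates of the measurement vector, push them through a nonlinear transformation, and take the sample expectation and covariance of the result, with no linearization. The distribution parameters and the transformation below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Measurements: a 2-vector with a given expectation and covariance (illustrative values).
    mu = np.array([10.0, 0.5])
    cov = np.array([[0.04, 0.01],
                    [0.01, 0.02]])

    # Nonlinear transformation of the measurements (polar-style quantities as an example).
    def g(x):
        r, phi = x[..., 0], x[..., 1]
        return np.stack([r * np.cos(phi), r * np.sin(phi)], axis=-1)

    samples = rng.multivariate_normal(mu, cov, size=100_000)   # Monte Carlo variates
    y = g(samples)

    print("E[g(x)]   ~", y.mean(axis=0))
    print("Cov[g(x)] ~\n", np.cov(y, rowvar=False))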

  2. Shelf life of packaged bakery goods--a review.

    PubMed

    Galić, K; Curić, D; Gabrić, D

    2009-05-01

    Packaging requirements for fresh bakery goods are often minimal as many of the products are for immediate consumption. However, packaging can be an important factor in extending the shelf life of other cereal-based goods (toast, frozen products, biscuits, cakes, pastas). Some of the texture changes and flavor loss that manifest over the shelf life of a soft-baked good can usually be minimized or delayed by effective use of packaging materials. The gains in the extension of shelf life will be application specific. It is recognized that defining the shelf life of a food is a difficult task and is an area of intense research for food product development scientists (food technologists, microbiologists, packaging experts). Proper application of chemical kinetic principles to food quality loss allows for efficiently designing appropriate shelf-life tests and maximizing the useful information that can be obtained from the resulting data. In the development of any new food product, including reformulation, a change of packaging, or a change of storage/distribution conditions (to penetrate a new market), one important aspect is knowledge of the shelf life.

  3. Good Agreements Make Good Friends

    PubMed Central

    Han, The Anh; Pereira, Luís Moniz; Santos, Francisco C.; Lenaerts, Tom

    2013-01-01

    When starting a new collaborative endeavor, it pays to establish upfront how strongly your partner commits to the common goal and what compensation can be expected in case the collaboration is violated. Diverse examples in biological and social contexts have demonstrated the pervasiveness of making prior agreements on posterior compensations, suggesting that this behavior could have been shaped by natural selection. Here, we analyze the evolutionary relevance of such a commitment strategy and relate it to the costly punishment strategy, where no prior agreements are made. We show that when the cost of arranging a commitment deal lies within certain limits, substantial levels of cooperation can be achieved. Moreover, these levels are higher than that achieved by simple costly punishment, especially when one insists on sharing the arrangement cost. Not only do we show that good agreements make good friends, agreements based on shared costs result in even better outcomes. PMID:24045873

  4. "Good mothering" or "good citizenship"?

    PubMed

    Porter, Maree; Kerridge, Ian H; Jordens, Christopher F C

    2012-03-01

    Umbilical cord blood banking is one of many biomedical innovations that confront pregnant women with new choices about what they should do to secure their own and their child's best interests. Many mothers can now choose to donate their baby's umbilical cord blood (UCB) to a public cord blood bank or pay to store it in a private cord blood bank. Donation to a public bank is widely regarded as an altruistic act of civic responsibility. Paying to store UCB may be regarded as a "unique opportunity" to provide "insurance" for the child's future. This paper reports findings from a survey of Australian women that investigated the decision to either donate or store UCB. We conclude that mothers are faced with competing discourses that force them to choose between being a "good mother" and fulfilling their role as a "good citizen." We discuss this finding with reference to the concept of value pluralism.

  5. Evaluation of Rock Powdering Methods to Obtain Fine-grained Samples for CHEMIN, a Combined XRD/XRF Instrument

    NASA Technical Reports Server (NTRS)

    Chipera, S. J.; Vaniman, D. T.; Bish, D. L.; Sarrazin, P.; Feldman, S.; Blake, D. F.; Bearman, G.; Bar-Cohen, Y.

    2004-01-01

    A miniature XRD/XRF (X-ray diffraction / X-ray fluorescence) instrument, CHEMIN, is currently being developed for definitive mineralogic analysis of soils and rocks on Mars. One of the technical issues that must be addressed to enable remote XRD analysis is how best to obtain a representative sample powder for analysis. For powder XRD analyses, it is beneficial to have a fine-grained sample to reduce preferred orientation effects and to provide a statistically significant number of crystallites to the X-ray beam. Although a two-dimensional detector as used in the CHEMIN instrument will produce good results even with poorly prepared powder, the quality of the data will improve and the time required for data collection will be reduced if the sample is fine-grained and randomly oriented. A variety of methods have been proposed for XRD sample preparation. Chipera et al. presented grain size distributions and XRD results from powders generated with an Ultrasonic/Sonic Driller/Corer (USDC) currently being developed at JPL. The USDC was shown to be an effective instrument for sampling rock to produce powder suitable for XRD. In this paper, we compare powders prepared using the USDC, a miniaturized rock crusher developed at JPL, and a rotary tungsten carbide bit with powders obtained from a laboratory bench-scale Retsch mill (which provides benchmark mineralogical data). These comparisons will allow assessment of the suitability of these methods for analysis by an XRD/XRF instrument such as CHEMIN.

  6. Statistical Significance and Effect Size: Two Sides of a Coin.

    ERIC Educational Resources Information Center

    Fan, Xitao

    This paper suggests that statistical significance testing and effect size are two sides of the same coin; they complement each other, but do not substitute for one another. Good research practice requires that both should be taken into consideration to make sound quantitative decisions. A Monte Carlo simulation experiment was conducted, and a…

  7. Spectral statistics of the acoustic stadium

    NASA Astrophysics Data System (ADS)

    Méndez-Sánchez, R. A.; Báez, G.; Leyvraz, F.; Seligman, T. H.

    2014-01-01

    We calculate the normal-mode frequencies and wave amplitudes of the two-dimensional acoustical stadium. We also obtain the statistical properties of the acoustical spectrum and show that they agree with the results given by random matrix theory. Some normal-mode wave amplitudes showing scarring are presented.
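
    One common spectral statistic of the kind referred to above is the nearest-neighbour spacing distribution, compared with the GOE Wigner surmise from random matrix theory. In this sketch the "measured" frequencies are stand-ins generated from a random GOE matrix, and the unfolding is a crude normalisation to unit mean spacing; both are assumptions for illustration only.

    import numpy as np

    def spacing_histogram(frequencies, bins=20):
        # Nearest-neighbour spacings normalised to unit mean (a crude "unfolding").
        freqs = np.sort(np.asarray(frequencies))
        s = np.diff(freqs)
        s = s / s.mean()
        return np.histogram(s, bins=bins, range=(0, 4), density=True)

    def wigner_surmise(s):
        # GOE prediction P(s) = (pi/2) s exp(-pi s^2 / 4).
        return 0.5 * np.pi * s * np.exp(-np.pi * s**2 / 4)

    # Placeholder spectrum: eigenvalues of a random GOE matrix stand in for measured modes.
    rng = np.random.default_rng(1)
    A = rng.normal(size=(500, 500))
    freqs = np.linalg.eigvalsh((A + A.T) / 2)[100:400]   # keep a window where the density is roughly flat
    hist, edges = spacing_histogram(freqs)
    centers = 0.5 * (edges[:-1] + edges[1:])
    for c, h in zip(centers, hist):
        print(f"s={c:4.2f}  empirical={h:5.2f}  Wigner={wigner_surmise(c):5.2f}")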

  8. Snug as a Bug: Goodness of Fit and Quality of Models.

    PubMed

    Jupiter, Daniel C

    In elucidating risk factors, or attempting to make predictions about the behavior of subjects in our biomedical studies, we often build statistical models. These models are meant to capture some aspect of reality, or some real-world process underlying the phenomena we are examining. However, no model is perfect, and it is thus important to have tools to assess how accurate models are. In this commentary, we delve into the various roles that our models can play. Then we introduce the notion of the goodness of fit of models and lay the ground work for further study of diagnostic tests for assessing both the fidelity of our models and the statistical assumptions underlying them. Copyright © 2017 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.

  9. Sources of international migration statistics in Africa.

    PubMed

    1984-01-01

    The sources of international migration data for Africa may be classified into 2 main categories: 1) administrative records and 2) censuses and survey data. Both categories are sources for the direct measurement of migration, but the 2nd category can be used for the indirect estimation of net international migration. The administrative records from which data on international migration may be derived include 1) entry/departure cards or forms completed at international borders, 2) residence/work permits issued to aliens, and 3) general population registers and registers of aliens. The statistics derived from the entry/departure cards may be described as 1) land frontier control statistics and 2) port control statistics. The former refer to data derived from movements across land borders and the latter refer to information collected at international airports and seaports. Other administrative records which are potential sources of statistics on international migration in some African countries include some limited population registers, records of the registration of aliens, and particulars of residence/work permits issued to aliens. Although frontier control data are considered the most important source of international migration statistics, in many African countries these data are too deficient to provide a satisfactory indication of the level of international migration. Thus decennial population censuses and/or sample surveys are the major sources of the available statistics on the stock and characteristics of international migration. Indirect methods can be used to supplement census data with intercensal estimates of net migration using census data on the total population. This indirect method of obtaining information on migration can be used to evaluate estimates derived from frontier control records, and it also offers the means of obtaining alternative information on international migration in African countries which have not directly investigated migration topics

  10. Understanding Statistics - Cancer Statistics

    Cancer.gov

    Annual reports of U.S. cancer statistics including new cases, deaths, trends, survival, prevalence, lifetime risk, and progress toward Healthy People targets, plus statistical summaries for a number of common cancer types.

  11. Can Low Frequency Measurements Be Good Enough? - A Statistical Assessment of Citizen Hydrology Streamflow Observations

    NASA Astrophysics Data System (ADS)

    Davids, J. C.; Rutten, M.; Van De Giesen, N.

    2016-12-01

    Hydrologic data has traditionally been collected with permanent installations of sophisticated and relatively accurate but expensive monitoring equipment at limited numbers of sites. Consequently, the spatial coverage of the data is limited and costs are high. Achieving adequate maintenance of sophisticated monitoring equipment often exceeds local technical and resource capacity, and permanently deployed monitoring equipment is susceptible to vandalism, theft, and other hazards. Rather than using expensive, vulnerable installations at a few points, SmartPhones4Water (S4W), a form of Citizen Hydrology, leverages widely available mobile technology to gather hydrologic data at many sites in a manner that is repeatable and scalable. However, there is currently a limited understanding of the impact of decreased observational frequency on the accuracy of key streamflow statistics like minimum flow, maximum flow, and runoff. As a first step towards evaluating the tradeoffs between traditional continuous monitoring approaches and emerging Citizen Hydrology methods, we randomly selected 50 active U.S. Geological Survey (USGS) streamflow gauges in California. We used historical 15 minute flow data from 01/01/2008 through 12/31/2014 to develop minimum flow, maximum flow, and runoff values (7 year total) for each gauge. In order to mimic lower frequency Citizen Hydrology observations, we developed a bootstrap randomized subsampling with replacement procedure. We calculated the same statistics, along with their respective distributions, from 50 subsample iterations with four different subsampling intervals (i.e. daily, three day, weekly, and monthly). Based on our results we conclude that, depending on the types of questions being asked, and the watershed characteristics, Citizen Hydrology streamflow measurements can provide useful and accurate information. Depending on watershed characteristics, minimum flows were reasonably estimated with subsample intervals ranging from
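
    A hedged sketch of one simple variant of such a subsampling experiment (the flow series, intervals, and offsets below are synthetic placeholders, not the USGS data or the authors' exact bootstrap procedure): subsample a fine-resolution record at lower frequencies with random start offsets and examine the spread of the derived minimum, maximum, and runoff statistics.

    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic "15-minute" streamflow record for one year (placeholder for gauge data).
    t = np.arange(365 * 96)                          # 96 quarter-hours per day
    flow = 5 + 3 * np.sin(2 * np.pi * t / (365 * 96)) + rng.gamma(2.0, 0.5, t.size)

    def subsample_stats(flow, step, n_iter=50):
        # Subsample the record every `step` values at random offsets and return
        # the derived min, max, and a runoff proxy (mean flow scaled to record length).
        stats = []
        for _ in range(n_iter):
            start = rng.integers(0, step)
            sub = flow[start::step]
            stats.append((sub.min(), sub.max(), sub.mean() * flow.size))
        return np.array(stats)

    for label, step in [("daily", 96), ("weekly", 96 * 7), ("monthly", 96 * 30)]:
        s = subsample_stats(flow, step)
        print(label, "mean of (min, max, runoff proxy):", s.mean(axis=0).round(1))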

  12. Statistical analysis of field data for aircraft warranties

    NASA Astrophysics Data System (ADS)

    Lakey, Mary J.

    Air Force and Navy maintenance data collection systems were researched to determine their scientific applicability to the warranty process. New and unique algorithms were developed to extract failure distributions, which were then used to characterize how selected families of equipment typically fail. Families of similar equipment were identified in terms of function, technology and failure patterns. Statistical analyses and applications such as goodness-of-fit tests, maximum likelihood estimation and derivation of confidence intervals for the probability density function parameters were applied to characterize the distributions and their failure patterns. Statistical and reliability theory, with relevance to equipment design and operational failures, were also determining factors in characterizing the failure patterns of the equipment families. Inferences about the families with relevance to warranty needs were then made.
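
    A small sketch of the kind of analysis described (fitting a failure-time distribution by maximum likelihood and checking it with a goodness-of-fit test), using SciPy's two-parameter Weibull with the location fixed at zero; the failure times below are synthetic placeholders for field data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    failure_hours = stats.weibull_min.rvs(c=1.8, scale=1200.0, size=200, random_state=rng)

    # Maximum-likelihood fit of a two-parameter Weibull (location fixed at 0).
    shape, loc, scale = stats.weibull_min.fit(failure_hours, floc=0)
    print(f"shape={shape:.2f}  scale={scale:.0f} h")

    # Kolmogorov-Smirnov goodness-of-fit test against the fitted distribution.
    # (Parameters are estimated from the same data, so this p-value is optimistic.)
    ks = stats.kstest(failure_hours, "weibull_min", args=(shape, loc, scale))
    print(f"KS statistic={ks.statistic:.3f}  p-value={ks.pvalue:.3f}")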

  13. Good vaccination practice: it all starts with a good vaccine storage temperature.

    PubMed

    Vangroenweghe, Frédéric

    2017-01-01

    Recent introduction of strategies to reduce antibiotic use in food animal production implies an increased use of vaccines in order to prevent the economic impact of several important diseases in swine. Good Vaccination Practice (GVP) is an overall approach on the swine farm aiming to obtain maximal efficacy of vaccination through good storage, preparation and finally correct application to the target animals. In order to gain better insight into GVP on swine farms and into vaccine storage conditions, a survey on vaccination practices was performed at a farmers' fair, and temperatures in vaccine storage refrigerators were measured during farm visits over a period of 1 year. The survey revealed that knowledge of GVP, such as vaccine storage and handling, needle management and injection location, could be improved. Less than 10% had a thermometer in their vaccine storage refrigerator at the time of the visit. Temperature measurement revealed that only 71% of the measured refrigerators were in line with the recommended temperature range of +2 °C to +8 °C. Temperatures both below +2 °C and above +8 °C were registered during all seasons of the year. Compliance was lower during summer, with an average temperature of 9.2 °C, while only 43% of the measured temperatures were within the recommended range. The present study clearly showed the need for continuous education on GVP for swine veterinarians, swine farmers and their farm personnel in general and on vaccine storage management in particular. In veterinary medicine, the correct storage of vaccines is crucial since both too low and too high temperatures can provoke damage to specific vaccine types. Adjuvanted killed or subunit vaccines can be damaged (e.g. the structure of aluminium hydroxide in the adjuvant) by too low temperatures (below 0 °C), whereas lyophilized live vaccines are susceptible (e.g. loss of vaccine potency) to heat damage by temperatures above +8 °C. In conclusion, knowledge and awareness of GVP

  14. An entropy-based statistic for genomewide association studies.

    PubMed

    Zhao, Jinying; Boerwinkle, Eric; Xiong, Momiao

    2005-07-01

    Efficient genotyping methods and the availability of a large collection of single-nucleotide polymorphisms provide valuable tools for genetic studies of human disease. The standard chi2 statistic for case-control studies, which uses a linear function of allele frequencies, has limited power when the number of marker loci is large. We introduce a novel test statistic for genetic association studies that uses Shannon entropy and a nonlinear function of allele frequencies to amplify the differences in allele and haplotype frequencies to maintain statistical power with large numbers of marker loci. We investigate the relationship between the entropy-based test statistic and the standard chi2 statistic and show that, in most cases, the power of the entropy-based statistic is greater than that of the standard chi2 statistic. The distribution of the entropy-based statistic and the type I error rates are validated using simulation studies. Finally, we apply the new entropy-based test statistic to two real data sets, one for the COMT gene and schizophrenia and one for the MMP-2 gene and esophageal carcinoma, to evaluate the performance of the new method for genetic association studies. The results show that the entropy-based statistic obtained smaller P values than did the standard chi2 statistic.
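
    The paper's exact statistic is not reproduced here; as an illustration of an entropy-based alternative to the standard chi-square, the sketch below computes both the chi-square and the likelihood-ratio (G) statistic, which can be written in terms of Shannon entropies and is a nonlinear function of the allele frequencies, on a table of hypothetical allele counts.

    import math

    def chi2_and_entropy(case_counts, control_counts):
        # Standard chi-square and the entropy-based (likelihood-ratio / G) statistic
        # for a 2 x k table of allele counts in cases and controls.
        k = len(case_counts)
        n_case, n_ctrl = sum(case_counts), sum(control_counts)
        n = n_case + n_ctrl
        chi2 = g = 0.0
        for j in range(k):
            col = case_counts[j] + control_counts[j]
            for obs, row_total in ((case_counts[j], n_case), (control_counts[j], n_ctrl)):
                exp = row_total * col / n
                chi2 += (obs - exp) ** 2 / exp
                if obs > 0:
                    g += 2 * obs * math.log(obs / exp)
        return chi2, g

    # Hypothetical allele counts (A, a) in cases and controls:
    print(chi2_and_entropy([180, 220], [140, 260]))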

  15. VizieR Online Data Catalog: GOODS-MUSIC sample: multicolour catalog (Grazian+, 2006)

    NASA Astrophysics Data System (ADS)

    Grazian, A.; Fontana, A.; de Santis, C.; Nonino, M.; Salimbeni, S.; Giallongo, E.; Cristiani, S.; Gallozzi, S.; Vanzella, E.

    2006-02-01

    The GOODS-MUSIC multi-wavelength catalog provides photometric and spectroscopic information for galaxies in the GOODS Southern field. It includes two U-band images obtained with the ESO 2.2m telescope and one U-band image from VLT-VIMOS, the ACS-HST images in four optical (B,V,i,z) bands, the VLT-ISAAC J, H, and Ks bands, as well as the Spitzer images at 3.5, 4.5, 5.8, and 8 micron. Most of these images have been made publicly available in the coadded version by the GOODS team, while the U band data were retrieved in raw format and reduced by our team. We also collected all the available spectroscopic information from public spectroscopic surveys and cross-correlated the spectroscopic redshifts with our photometric catalog. For the unobserved fraction of the objects, we applied our photometric redshift code to obtain well-calibrated photometric redshifts. The final catalog is made up of 14847 objects, with at least 72 known stars, 68 AGNs, and 928 galaxies with spectroscopic redshift (668 galaxies with reliable redshift determination). (3 data files).

  16. Assessing the statistical robustness of inter- and intra-basinal carbon isotope chemostratigraphic correlation

    NASA Astrophysics Data System (ADS)

    Hay, C.; Creveling, J. R.; Huybers, P. J.

    2016-12-01

    Excursions in the stable carbon isotopic composition of carbonate rocks (δ13Ccarb) can facilitate correlation of Precambrian and Phanerozoic sedimentary successions at a higher temporal resolution than radiometric and biostratigraphic frameworks typically afford. Within the bounds of litho- and biostratigraphic constraints, stratigraphers often correlate isotopic patterns between distant stratigraphic sections through visual alignment of local maxima and minima of isotopic values. The reproducibility of this method can prove challenging and, thus, evaluating the statistical robustness of intrabasinal composite carbon isotope curves, and global correlations to these reference curves, remains difficult. To assess the reproducibility of stratigraphic alignment of δ13Ccarb data, and correlations between carbon isotope excursions, we employ a numerical dynamic time warping methodology that stretches and squeezes the time axis of a record to obtain an optimal correlation (in a least-squares sense) between time-uncertain series of data. In particular, we assess various alignments between series of Early Cambrian δ13Ccarb data with respect to plausible matches. We first show that an alignment of these records obtained visually, and published previously, is broadly reproducible using dynamic time warping. Alternative alignments with similar goodness of fit are also obtainable, and their stratigraphic plausibility is discussed. This approach should be generalizable to an algorithm for the purposes of developing a library of plausible alignments between multiple time-uncertain stratigraphic records.
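
    A minimal dynamic time warping sketch in the least-squares sense described above (without the litho- and biostratigraphic constraints the authors use): the index axis of one series is stretched and squeezed to minimise the cumulative squared misfit. The two isotope profiles below are hypothetical.

    def dtw(a, b):
        # Classic dynamic time warping: minimal cumulative squared distance between
        # two sequences when the index axis may be stretched or squeezed.
        INF = float("inf")
        n, m = len(a), len(b)
        D = [[INF] * (m + 1) for _ in range(n + 1)]
        D[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = (a[i - 1] - b[j - 1]) ** 2
                D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
        return D[n][m]

    # Two hypothetical d13C profiles sampled at different, distorted resolutions:
    section_1 = [2.0, 1.5, -1.0, -4.5, -2.0, 0.5, 2.5]
    section_2 = [2.1, 1.8, 1.0, -2.0, -4.8, -4.0, -1.5, 0.8, 2.4]
    print("DTW misfit:", dtw(section_1, section_2))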

  17. Algorithm for repairing the damaged images of grain structures obtained from the cellular automata and measurement of grain size

    NASA Astrophysics Data System (ADS)

    Ramírez-López, A.; Romero-Romo, M. A.; Muñoz-Negron, D.; López-Ramírez, S.; Escarela-Pérez, R.; Duran-Valencia, C.

    2012-10-01

    Computational models are developed to create grain structures using mathematical algorithms based on chaos theory, such as cellular automata, geometrical models, fractals, and stochastic methods. Because of the chaotic nature of grain structures, some of the most popular routines are based on the Monte Carlo method, statistical distributions, and random walk methods, which can be easily programmed and included in nested loops. Nevertheless, grain structures can end up poorly defined as a result of computational errors and numerical inconsistencies in the mathematical methods. Due to the finite representation of numbers and the numerical restrictions during the simulation of solidification, damaged images appear on the screen. These images must be repaired to obtain a good measurement of grain geometrical properties. In the present work, mathematical algorithms were developed to repair, measure, and characterize grain structures obtained from cellular automata. An appropriate measurement of grain size and the correct identification of interfaces and their lengths are very important topics in materials science because they allow mathematical models to be represented by and validated against real samples. The developed algorithms are tested and shown to be appropriate and efficient for eliminating the errors and characterizing the grain structures.
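
    The authors' repair algorithm is not reproduced here; as a hedged illustration of the task, the sketch below fills cells flagged as damaged with the most frequent valid grain label in their 3 x 3 neighbourhood, sweeping until no further repairs are possible. The encoding of damaged cells as 0 and the tiny grain map are assumptions.

    from collections import Counter

    def repair(grid, damaged=0):
        # Fill damaged cells with the most common valid grain label among their
        # 8 neighbours; repeat sweeps until nothing changes.
        rows, cols = len(grid), len(grid[0])
        changed = True
        while changed:
            changed = False
            for i in range(rows):
                for j in range(cols):
                    if grid[i][j] != damaged:
                        continue
                    neigh = [grid[x][y]
                             for x in range(max(0, i - 1), min(rows, i + 2))
                             for y in range(max(0, j - 1), min(cols, j + 2))
                             if (x, y) != (i, j) and grid[x][y] != damaged]
                    if neigh:
                        grid[i][j] = Counter(neigh).most_common(1)[0][0]
                        changed = True
        return grid

    # Tiny grain map with two grains (1, 2) and damaged cells marked 0:
    grain_map = [[1, 1, 0, 2],
                 [1, 0, 2, 2],
                 [1, 1, 2, 0]]
    print(repair([row[:] for row in grain_map]))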

  18. Statistical dielectronic recombination rates for multielectron ions in plasma

    NASA Astrophysics Data System (ADS)

    Demura, A. V.; Leont'iev, D. S.; Lisitsa, V. S.; Shurygin, V. A.

    2017-10-01

    We describe the general analytic derivation of the dielectronic recombination (DR) rate coefficient for multielectron ions in a plasma based on the statistical theory of an atom in terms of the spatial distribution of the atomic electron density. The dielectronic recombination rates for complex multielectron tungsten ions are calculated numerically in a wide range of variation of the plasma temperature, which is important for modern nuclear fusion studies. The results of statistical theory are compared with the data obtained using level-by-level codes ADPAK, FAC, HULLAC, and experimental results. We consider different statistical DR models based on the Thomas-Fermi distribution, viz., integral and differential with respect to the orbital angular momenta of the ion core and the trapped electron, as well as the Rost model, which is an analog of the Frank-Condon model as applied to atomic structures. In view of its universality and relative simplicity, the statistical approach can be used for obtaining express estimates of the dielectronic recombination rate coefficients in complex calculations of the parameters of the thermonuclear plasmas. The application of statistical methods also provides information for the dielectronic recombination rates with much smaller computer time expenditures as compared to available level-by-level codes.

  19. Effects of epidemic threshold definition on disease spread statistics

    NASA Astrophysics Data System (ADS)

    Lagorio, C.; Migueles, M. V.; Braunstein, L. A.; López, E.; Macri, P. A.

    2009-03-01

    We study the statistical properties of SIR epidemics in random networks, when an epidemic is defined as only those SIR propagations that reach or exceed a minimum size sc. Using percolation theory to calculate the average fractional size of an epidemic, we find that the strength of the spanning link percolation cluster P∞ is an upper bound to the average fractional epidemic size. For small values of sc, P∞ is no longer a good approximation, and the average fractional size has to be computed directly. We find that the choice of sc is generally (but not always) guided by the network structure and the value of the transmissibility T of the disease in question. If the goal is to always obtain P∞ as the average epidemic size, one should choose sc to be the typical size of the largest percolation cluster at the critical percolation threshold for the transmissibility. We also study Q, the probability that an SIR propagation reaches the epidemic mass sc, and find that it is well characterized by percolation theory. We apply our results to real networks (DIMES and Tracerouter) to measure the consequences of the choice of sc on predictions of average outcome sizes of computer failure epidemics.

  20. Variational stereo imaging of oceanic waves with statistical constraints.

    PubMed

    Gallego, Guillermo; Yezzi, Anthony; Fedele, Francesco; Benetazzo, Alvise

    2013-11-01

    An image processing observational technique for the stereoscopic reconstruction of the waveform of oceanic sea states is developed. The technique incorporates the enforcement of any given statistical wave law modeling the quasi-Gaussianity of oceanic waves observed in nature. The problem is posed in a variational optimization framework, where the desired waveform is obtained as the minimizer of a cost functional that combines image observations, smoothness priors and a weak statistical constraint. The minimizer is obtained by combining gradient descent and multigrid methods on the necessary optimality equations of the cost functional. Robust photometric error criteria and a spatial intensity compensation model are also developed to improve the performance of the presented image matching strategy. The weak statistical constraint is thoroughly evaluated in combination with other elements presented to reconstruct and enforce constraints on experimental stereo data, demonstrating the improvement in the estimation of the observed ocean surface.

  1. Air Carrier Financial Statistics : third quarter : [2009

    DOT National Transportation Integrated Search

    2009-01-01

    This report presents airline financial statistics obtained from carrier reports to DOT on BTS Form 41 financial schedules. Effective with a rule (ER-1297) adopted July 1982, the filing frequency for income statement and balance sheet data was changed...

  2. Air Carrier Financial Statistics : fourth quarter : [2006

    DOT National Transportation Integrated Search

    2006-01-01

    This report presents airline financial statistics obtained from carrier reports to DOT on BTS Form 41 financial schedules. Effective with a rule (ER-1297) adopted July 1982, the filing frequency for income statement and balance sheet data was changed...

  3. Air Carrier Financial Statistics : second quarter : [2013

    DOT National Transportation Integrated Search

    2013-01-01

    This report presents airline financial statistics obtained from carrier reports to DOT on BTS Form 41 financial schedules. Effective with a rule (ER-1297) adopted July 1982, the filing frequency for income statement and balance sheet data was changed...

  4. Air Carrier Financial Statistics : first quarter : [2009

    DOT National Transportation Integrated Search

    2009-01-01

    This report presents airline financial statistics obtained from carrier reports to DOT on BTS Form 41 financial schedules. Effective with a rule (ER-1297) adopted July 1982, the filing frequency for income statement and balance sheet data was changed...

  5. Air Carrier Financial Statistics : fourth quarter : [2004

    DOT National Transportation Integrated Search

    2004-01-01

    This report presents airline financial statistics obtained from carrier reports to DOT on BTS Form 41 financial schedules. Effective with a rule (ER-1297) adopted July 1982, the filing frequency for income statement and balance sheet data was changed...

  6. Air Carrier Financial Statistics : second quarter : [2008

    DOT National Transportation Integrated Search

    2008-01-01

    This report presents airline financial statistics obtained from carrier reports to DOT on BTS Form 41 financial schedules. Effective with a rule (ER-1297) adopted July 1982, the filing frequency for income statement and balance sheet data was changed...

  7. Air Carrier Financial Statistics : second quarter : [2012

    DOT National Transportation Integrated Search

    2012-01-01

    This report presents airline financial statistics obtained from carrier reports to DOT on BTS Form 41 financial schedules. Effective with a rule (ER-1297) adopted July 1982, the filing frequency for income statement and balance sheet data was changed...

  8. Air Carrier Financial Statistics : first quarter : [2008

    DOT National Transportation Integrated Search

    2008-01-01

    This report presents airline financial statistics obtained from carrier reports to DOT on BTS Form 41 financial schedules. Effective with a rule (ER-1297) adopted July 1982, the filing frequency for income statement and balance sheet data was changed...

  9. Air Carrier Financial Statistics : fourth quarter : [2003

    DOT National Transportation Integrated Search

    2003-01-01

    This report presents airline financial statistics obtained from carrier reports to DOT on BTS Form 41 financial schedules. Effective with a rule (ER-1297) adopted July 1982, the filing frequency for income statement and balance sheet data was changed...

  10. Air Carrier Financial Statistics : second quarter : [2011

    DOT National Transportation Integrated Search

    2011-01-01

    This report presents airline financial statistics obtained from carrier reports to DOT on BTS Form 41 financial schedules. Effective with a rule (ER-1297) adopted July 1982, the filing frequency for income statement and balance sheet data was changed...

  11. Air Carrier Financial Statistics : fourth quarter : [2013

    DOT National Transportation Integrated Search

    2013-01-01

    This report presents airline financial statistics obtained from carrier reports to DOT on BTS Form 41 financial schedules. Effective with a rule (ER-1297) adopted July 1982, the filing frequency for income statement and balance sheet data was changed...

  12. Air Carrier Financial Statistics : fourth quarter : [2002

    DOT National Transportation Integrated Search

    2002-01-01

    This report presents airline financial statistics obtained from carrier reports to DOT on BTS Form 41 financial schedules. Effective with a rule (ER-1297) adopted July 1982, the filing frequency for income statement and balance sheet data was changed...

  13. Air Carrier Financial Statistics : fourth quarter : [2005

    DOT National Transportation Integrated Search

    2005-01-01

    This report presents airline financial statistics obtained from carrier reports to DOT on BTS Form 41 financial schedules. Effective with a rule (ER-1297) adopted July 1982, the filing frequency for income statement and balance sheet data was changed...

  14. Air Carrier Financial Statistics : second quarter : [2012

    DOT National Transportation Integrated Search

    2010-01-01

    This report presents airline financial statistics obtained from carrier reports to DOT on BTS Form 41 financial schedules. Effective with a rule (ER-1297) adopted July 1982, the filing frequency for income statement and balance sheet data was changed...

  15. Air Carrier Financial Statistics : second quarter : [2010

    DOT National Transportation Integrated Search

    2010-01-01

    This report presents airline financial statistics obtained from carrier reports to DOT on BTS Form 41 financial schedules. Effective with a rule (ER-1297) adopted July 1982, the filing frequency for income statement and balance sheet data was changed...

  16. Air Carrier Financial Statistics : second quarter : [2009

    DOT National Transportation Integrated Search

    2009-01-01

    This report presents airline financial statistics obtained from carrier reports to DOT on BTS Form 41 financial schedules. Effective with a rule (ER-1297) adopted July 1982, the filing frequency for income statement and balance sheet data was changed...

  17. Air Carrier Financial Statistics : first quarter : [2012

    DOT National Transportation Integrated Search

    2012-01-01

    This report presents airline financial statistics obtained from carrier reports to DOT on BTS Form 41 financial schedules. Effective with a rule (ER-1297) adopted July 1982, the filing frequency for income statement and balance sheet data was changed...

  18. Air Carrier Financial Statistics : fourth quarter : [2010

    DOT National Transportation Integrated Search

    2010-01-01

    This report presents airline financial statistics obtained from carrier reports to DOT on BTS Form 41 financial schedules. Effective with a rule (ER-1297) adopted July 1982, the filing frequency for income statement and balance sheet data was changed...

  19. Air Carrier Financial Statistics : fourth quarter : [2012

    DOT National Transportation Integrated Search

    2012-01-01

    This report presents airline financial statistics obtained from carrier reports to DOT on BTS Form 41 financial schedules. Effective with a rule (ER-1297) adopted July 1982, the filing frequency for income statement and balance sheet data was changed...

  20. Air Carrier Financial Statistics : third quarter : [2011

    DOT National Transportation Integrated Search

    2011-01-01

    This report presents airline financial statistics obtained from carrier reports to DOT on BTS Form 41 financial schedules. Effective with a rule (ER-1297) adopted July 1982, the filing frequency for income statement and balance sheet data was changed...

  1. Air Carrier Financial Statistics : fourth quarter : [2011

    DOT National Transportation Integrated Search

    2011-01-01

    This report presents airline financial statistics obtained from carrier reports to DOT on BTS Form 41 financial schedules. Effective with a rule (ER-1297) adopted July 1982, the filing frequency for income statement and balance sheet data was changed...

  2. Air Carrier Financial Statistics : fourth quarter : [2007

    DOT National Transportation Integrated Search

    2007-01-01

    This report presents airline financial statistics obtained from carrier reports to DOT on BTS Form 41 financial schedules. Effective with a rule (ER-1297) adopted July 1982, the filing frequency for income statement and balance sheet data was changed...

  3. Air Carrier Financial Statistics : third quarter : [2013

    DOT National Transportation Integrated Search

    2013-01-01

    This report presents airline financial statistics obtained from carrier reports to DOT on BTS Form 41 financial schedules. Effective with a rule (ER-1297) adopted July 1982, the filing frequency for income statement and balance sheet data was changed...

  4. Air Carrier Financial Statistics : first quarter : [2011

    DOT National Transportation Integrated Search

    2011-01-01

    This report presents airline financial statistics obtained from carrier reports to DOT on BTS Form 41 financial schedules. Effective with a rule (ER-1297) adopted July 1982, the filing frequency for income statement and balance sheet data was changed...

  5. Air Carrier Financial Statistics : first quarter : [2014

    DOT National Transportation Integrated Search

    2014-01-01

    This report presents airline financial statistics obtained from carrier reports to DOT on BTS Form 41 financial schedules. Effective with a rule (ER-1297) adopted July 1982, the filing frequency for income statement and balance sheet data was changed...

  6. Air Carrier Financial Statistics : first quarter : [2013

    DOT National Transportation Integrated Search

    2013-01-01

    This report presents airline financial statistics obtained from carrier reports to DOT on BTS Form 41 financial schedules. Effective with a rule (ER-1297) adopted July 1982, the filing frequency for income statement and balance sheet data was changed...

  7. Air Carrier Financial Statistics : third quarter : [2012

    DOT National Transportation Integrated Search

    2012-01-01

    This report presents airline financial statistics obtained from carrier reports to DOT on BTS Form 41 financial schedules. Effective with a rule (ER-1297) adopted July 1982, the filing frequency for income statement and balance sheet data was changed...

  8. Air Carrier Financial Statistics : third quarter : [2008

    DOT National Transportation Integrated Search

    2008-01-01

    This report presents airline financial statistics obtained from carrier reports to DOT on BTS Form 41 financial schedules. Effective with a rule (ER-1297) adopted July 1982, the filing frequency for income statement and balance sheet data was changed...

  9. Air Carrier Financial Statistics : first quarter : [2010

    DOT National Transportation Integrated Search

    2010-01-01

    This report presents airline financial statistics obtained from carrier reports to DOT on BTS Form 41 financial schedules. Effective with a rule (ER-1297) adopted July 1982, the filing frequency for income statement and balance sheet data was changed...

  10. Air Carrier Financial Statistics : fourth quarter : [2009

    DOT National Transportation Integrated Search

    2009-01-01

    This report presents airline financial statistics obtained from carrier reports to DOT on BTS Form 41 financial schedules. Effective with a rule (ER-1297) adopted July 1982, the filing frequency for income statement and balance sheet data was changed...

  11. Air Carrier Financial Statistics : fourth quarter : [2008

    DOT National Transportation Integrated Search

    2008-01-01

    This report presents airline financial statistics obtained from carrier reports to DOT on BTS Form 41 financial schedules. Effective with a rule (ER-1297) adopted July 1982, the filing frequency for income statement and balance sheet data was changed...

  12. An Introduction to Goodness of Fit for PMU Parameter Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riepnieks, Artis; Kirkham, Harold

    2017-10-01

    New results of measurements of phasor-like signals are presented based on our previous work on the topic. In this document an improved estimation method is described. The algorithm (which is realized in MATLAB software) is discussed. We examine the effect of noisy and distorted signals on the Goodness of Fit metric. The estimation method is shown to perform very well with clean data and with a measurement window as short as half a cycle and as few as 5 samples per cycle. The Goodness of Fit decreases predictably with added phase noise, and seems to be acceptable even with visible distortion in the signal. While the exact results we obtain are specific to our method of estimation, the Goodness of Fit method could be implemented in any phasor measurement unit.

  13. Estimation of Global Network Statistics from Incomplete Data

    PubMed Central

    Bliss, Catherine A.; Danforth, Christopher M.; Dodds, Peter Sheridan

    2014-01-01

    Complex networks underlie an enormous variety of social, biological, physical, and virtual systems. A profound complication for the science of complex networks is that in most cases, observing all nodes and all network interactions is impossible. Previous work addressing the impacts of partial network data is surprisingly limited, focuses primarily on missing nodes, and suggests that network statistics derived from subsampled data are not suitable estimators for the same network statistics describing the overall network topology. We generate scaling methods to predict true network statistics, including the degree distribution, from only partial knowledge of nodes, links, or weights. Our methods are transparent and do not assume a known generating process for the network, thus enabling prediction of network statistics for a wide variety of applications. We validate analytical results on four simulated network classes and empirical data sets of various sizes. We perform subsampling experiments by varying proportions of sampled data and demonstrate that our scaling methods can provide very good estimates of true network statistics while acknowledging limits. Lastly, we apply our techniques to a set of rich and evolving large-scale social networks, Twitter reply networks. Based on 100 million tweets, we use our scaling techniques to propose a statistical characterization of the Twitter Interactome from September 2008 to November 2008. Our treatment allows us to find support for Dunbar's hypothesis in detecting an upper threshold for the number of active social contacts that individuals maintain over the course of one week. PMID:25338183
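
    Not the authors' scaling methods, but a toy illustration of the underlying bias they correct: in a node-induced subsample containing a fraction p of the nodes, a sampled node retains on average only a fraction p of its links, so dividing the observed mean degree by p gives a first-order estimate of the true mean degree. The random graph and the sampling fraction below are assumptions for illustration.

    import random

    random.seed(3)

    # Build a random (Erdos-Renyi-like) network as an adjacency map.
    n, p_edge = 2000, 0.004
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if random.random() < p_edge:
                adj[u].add(v); adj[v].add(u)

    true_mean_degree = sum(len(adj[v]) for v in adj) / n

    # Node-induced subsample of a fraction p of the nodes.
    p = 0.2
    sampled = set(random.sample(range(n), int(p * n)))
    obs_degrees = [sum(1 for w in adj[v] if w in sampled) for v in sampled]
    obs_mean = sum(obs_degrees) / len(obs_degrees)

    print("true mean degree      :", round(true_mean_degree, 2))
    print("observed in subsample :", round(obs_mean, 2))
    print("rescaled estimate     :", round(obs_mean / p, 2))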

  14. AGN Variability in the GOODS Fields

    NASA Astrophysics Data System (ADS)

    Sarajedini, Vicki

    2007-07-01

    Variability is a proven method to identify intrinsically faint active nuclei in galaxies found in deep HST surveys. We propose to extend our short-term variability study of the GOODS fields to include the more recent epochs obtained via supernova searches, increasing the overall time baseline from 6 months to 2.5 years. Based on typical AGN lightcurves, we expect to detect 70% more AGN by including these more recent epochs. Variability-selected AGN samples complement current X-ray and mid-IR surveys for AGN by providing unambiguous evidence of nuclear activity. Additionally, a significant number of variable nuclei are not associated with X-ray or mid-IR sources and would thus go undetected. With the increased time baseline, we will be able to construct the structure function (variability amplitude vs. time) for low-luminosity AGN to z ~ 1. The inclusion of the longer time interval will allow for better discrimination among the various models describing the nature of AGN variability. The variability survey will be compared against spectroscopically selected AGN from the Team Keck Redshift Survey of the GOODS-N and the upcoming Flamingos-II NIR survey of the GOODS-S. The high-resolution ACS images will be used to separate the AGN from the host galaxy light and to study the morphology, size and environment of the host galaxy. These studies will address questions concerning the nature of low-luminosity AGN evolution and variability at z ~ 1.

  15. ASYMPTOTIC DISTRIBUTION OF ΔAUC, NRIs, AND IDI BASED ON THEORY OF U-STATISTICS

    PubMed Central

    Demler, Olga V.; Pencina, Michael J.; Cook, Nancy R.; D’Agostino, Ralph B.

    2017-01-01

    The change in AUC (ΔAUC), the IDI, and NRI are commonly used measures of risk prediction model performance. Some authors have reported good validity of associated methods of estimating their standard errors (SE) and construction of confidence intervals, whereas others have questioned their performance. To address these issues, we unite the ΔAUC, IDI, and three versions of the NRI under the umbrella of the U-statistics family. We rigorously show that the asymptotic behavior of ΔAUC, NRIs, and IDI fits the asymptotic distribution theory developed for U-statistics. We prove that the ΔAUC, NRIs, and IDI are asymptotically normal, unless they compare nested models under the null hypothesis. In the latter case, asymptotic normality and existing SE estimates cannot be applied to ΔAUC, NRIs, or IDI. In the former case, SE formulas proposed in the literature are equivalent to SE formulas obtained from U-statistics theory if we ignore adjustment for estimated parameters. We use the Sukhatme-Randles-deWet condition to determine when adjustment for estimated parameters is necessary. We show that adjustment is not necessary for SEs of the ΔAUC and two versions of the NRI when added predictor variables are significant and normally distributed. The SEs of the IDI and three-category NRI should always be adjusted for estimated parameters. These results allow us to define when existing formulas for SE estimates can be used and when resampling methods such as the bootstrap should be used instead when comparing nested models. We also use the U-statistic theory to develop a new SE estimate of ΔAUC. PMID:28627112
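
    As a point of reference for the U-statistic framing above, the minimal sketch below (Python, with made-up scores; not the authors' SE derivation) writes the AUC as a two-sample U-statistic with the Mann-Whitney kernel and computes ΔAUC between two hypothetical nested-model scores.

    ```python
    import numpy as np

    def auc_u_statistic(scores_pos, scores_neg):
        """AUC as a two-sample U-statistic: the average of the Mann-Whitney kernel
        h(x, y) = 1 if x > y, 0.5 if x == y, 0 otherwise, over all (pos, neg) pairs."""
        diff = scores_pos[:, None] - scores_neg[None, :]
        kernel = np.where(diff > 0, 1.0, np.where(diff == 0, 0.5, 0.0))
        return kernel.mean()

    rng = np.random.default_rng(0)
    # Hypothetical risk scores from two nested models on the same subjects.
    y = rng.integers(0, 2, 500)                      # event indicator
    old = y * 0.8 + rng.normal(size=500)             # baseline model score
    new = y * 1.2 + rng.normal(size=500)             # model with an added predictor

    auc_old = auc_u_statistic(old[y == 1], old[y == 0])
    auc_new = auc_u_statistic(new[y == 1], new[y == 0])
    print(f"AUC(old)={auc_old:.3f}  AUC(new)={auc_new:.3f}  dAUC={auc_new - auc_old:.3f}")
    ```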

  16. Behavioral Patterns in Special Education. Good Teaching Practices.

    PubMed

    Rodríguez-Dorta, Manuela; Borges, África

    2017-01-01

    Providing quality education means responding to the diversity in the classroom. The teacher is a key figure in responding to the various educational needs presented by students. Specifically, special education professionals are of great importance, as they are the ones who lend their support to regular classroom teachers and offer specialized educational assistance to students who require it. Therefore, special education is different from what takes place in the regular classroom, demanding greater commitment from the teacher. There are certain behaviors, considered good teaching practices, that have always been connected with good teaching and good learning. To ensure that these teachers are carrying out their educational work properly, it is necessary to evaluate them, which means having appropriate instruments. The Observational Protocol for Teaching Functions in Primary School and Special Education (PROFUNDO-EPE, v.3, in Spanish) makes it possible to capture behaviors from these professionals and behavioral patterns that correspond to good teaching practices. This study evaluates the behavior of two special education teachers who work with students from different educational stages and with different educational needs. It reveals that the analyzed teachers adapt their behavior to the needs and characteristics of their students, responding adequately to the needs presented and showing good teaching practices. The patterns obtained indicate that they offer support, help and clear guidelines for performing the tasks. They motivate students toward learning by providing positive feedback, and they check that students have properly assimilated the contents through questions or non-verbal supervision. Also, they provide a safe and reliable climate for learning.

  17. Do quality indicators for general practice teaching practices predict good outcomes for students?

    PubMed

    Bartlett, Maggie; Potts, Jessica; McKinley, Bob

    2016-07-01

    Keele medical students spend 113 days in general practices over our five-year programme. We collect practice data thought to indicate good quality teaching. We explored the relationships between these data and two outcomes for students: Objective Structured Clinical Examination (OSCE) scores and feedback regarding the placements. Though both are surrogate markers of good teaching, they are widely used. We collated practice and outcome data for one academic year. Two separate statistical analyses were carried out: (1) to determine how much of the variation seen in the OSCE scores was due to the effect of the practice and how much to the individual student, and (2) to identify practice characteristics with a relationship to student feedback scores. (1) OSCE performance: 268 students in 90 practices: six quality indicators independently influenced the OSCE score, though without linear relationships and not to statistical significance. (2) Student satisfaction: 144 students in 69 practices: student feedback scores are not influenced by practice characteristics. The relationships between the quality indicators we collect for practices and the outcomes for students are not clear. It may be that neither the quality indicators nor the outcome measures are reliable enough to inform decisions about practices' suitability for teaching.

  18. Polypropylene Production Optimization in Fluidized Bed Catalytic Reactor (FBCR): Statistical Modeling and Pilot Scale Experimental Validation

    PubMed Central

    Khan, Mohammad Jakir Hossain; Hussain, Mohd Azlan; Mujtaba, Iqbal Mohammed

    2014-01-01

    Polypropylene is one type of plastic that is widely used in our everyday life. This study focuses on the identification and justification of the optimum process parameters for polypropylene production in a novel pilot-plant-based fluidized bed reactor. This first-of-its-kind statistical modeling with experimental validation of the process parameters for polypropylene production was conducted by applying the ANOVA (analysis of variance) method within Response Surface Methodology (RSM). Three important process variables, i.e., reaction temperature, system pressure and hydrogen percentage, were considered as the important input factors for polypropylene production in the analysis performed. In order to examine the effect of the process parameters and their interactions, the ANOVA method was utilized among a range of other statistical diagnostic tools, such as the correlation between actual and predicted values, the residuals and predicted response, the outlier t plot, and 3D response surface and contour analysis plots. The statistical analysis showed that the proposed quadratic model had a good fit with the experimental results. At optimum conditions, with a temperature of 75°C, a system pressure of 25 bar and a hydrogen percentage of 2%, the highest polypropylene production obtained is 5.82% per pass. Hence it is concluded that the developed experimental design and proposed model can be successfully employed, with over a 95% confidence level, for optimum polypropylene production in a fluidized bed catalytic reactor (FBCR). PMID:28788576

  19. Machine learning Z2 quantum spin liquids with quasiparticle statistics

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Melko, Roger G.; Kim, Eun-Ah

    2017-12-01

    After decades of progress and effort, obtaining a phase diagram for a strongly correlated topological system still remains a challenge. Although in principle one could turn to Wilson loops and long-range entanglement, evaluating these nonlocal observables at many points in phase space can be prohibitively costly. With growing excitement over topological quantum computation comes the need for an efficient approach for obtaining topological phase diagrams. Here we turn to machine learning using quantum loop topography (QLT), a notion we have recently introduced. Specifically, we propose a construction of QLT that is sensitive to quasiparticle statistics. We then use mutual statistics between the spinons and visons to detect a Z2 quantum spin liquid in a multiparameter phase space. We successfully obtain the quantum phase boundary between the topological and trivial phases using a simple feed-forward neural network. Furthermore, we demonstrate advantages of our approach for the evaluation of phase diagrams relating to speed and storage. Such statistics-based machine learning of topological phases opens new efficient routes to studying topological phase diagrams in strongly correlated systems.

  20. Attitude of teaching faculty towards statistics at a medical university in Karachi, Pakistan.

    PubMed

    Khan, Nazeer; Mumtaz, Yasmin

    2009-01-01

    Statistics is mainly used in biological research to verify clinicians' and researchers' findings and impressions, and gives scientific validity to their inferences. In Pakistan, the educational curriculum is developed in such a way that students who are interested in entering the field of biological sciences do not study mathematics after grade 10. Therefore, owing to their fragile background in mathematical skills, Pakistani medical professionals feel that they do not have an adequate base to understand the basic concepts of statistical techniques when they try to use them in their research or read a scientific article. The aim of the study was to assess the attitude of medical faculty towards statistics. A questionnaire containing 42 close-ended and 4 open-ended questions, related to the attitude and knowledge of statistics, was distributed among the teaching faculty of Dow University of Health Sciences (DUHS). One hundred and sixty-seven filled questionnaires were returned from 374 faculty members (response rate 44.7%). Forty-three percent of the respondents claimed that they had an 'introductory' level of statistics courses, 63% of the respondents strongly agreed that a good researcher must have some training in statistics, and 82% of the faculty were in favour (strongly agreed or agreed) of the view that statistics is really useful for research. Only 17% correctly stated that statistics is the science of uncertainty. Half of the respondents accepted that they have problems writing the statistical section of an article. 64% of the subjects indicated that statistical teaching methods were the main reason for the impression of its difficulty. 53% of the faculty indicated that co-authorship for the statistician should depend upon his/her contribution to the study. Gender did not show any significant difference among the responses. However, senior faculty showed higher level of the importance for the use of statistics and difficulties of writing result section of

  1. A Sorting Statistic with Application in Neurological Magnetic Resonance Imaging of Autism.

    PubMed

    Levman, Jacob; Takahashi, Emi; Forgeron, Cynthia; MacDonald, Patrick; Stewart, Natalie; Lim, Ashley; Martel, Anne

    2018-01-01

    Effect size refers to the assessment of the extent of differences between two groups of samples on a single measurement. Assessing effect size in medical research is typically accomplished with Cohen's d statistic. Cohen's d statistic assumes that average values are good estimators of the position of a distribution of numbers and also assumes Gaussian (or bell-shaped) underlying data distributions. In this paper, we present an alternative evaluative statistic that can quantify differences between two data distributions in a manner that is similar to traditional effect size calculations; however, the proposed approach avoids making assumptions regarding the shape of the underlying data distribution. The proposed sorting statistic is compared with Cohen's d statistic and is demonstrated to be capable of identifying feature measurements of potential interest for which Cohen's d statistic implies the measurement would be of little use. This proposed sorting statistic has been evaluated on a large clinical autism dataset from Boston Children's Hospital, Harvard Medical School, demonstrating that it can potentially play a constructive role in future healthcare technologies.
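
    For orientation, the following minimal sketch computes the pooled-standard-deviation Cohen's d that the abstract contrasts against; the sorting statistic itself is defined in the paper and is not reproduced here, and the group data are simulated placeholders.

    ```python
    import numpy as np

    def cohens_d(a, b):
        """Cohen's d with the pooled standard deviation: assumes means are good
        location summaries and roughly Gaussian spread, as noted in the abstract."""
        na, nb = len(a), len(b)
        pooled_var = ((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
        return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)

    rng = np.random.default_rng(1)
    # Hypothetical feature measurements for two groups: a heavy-tailed case where a
    # mean-based effect size can understate the separation between distributions.
    group_a = rng.standard_t(df=2, size=200) + 0.5
    group_b = rng.standard_t(df=2, size=200)
    print(f"Cohen's d = {cohens_d(group_a, group_b):.3f}")
    ```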

  2. A Sorting Statistic with Application in Neurological Magnetic Resonance Imaging of Autism

    PubMed Central

    Takahashi, Emi; Lim, Ashley; Martel, Anne

    2018-01-01

    Effect size refers to the assessment of the extent of differences between two groups of samples on a single measurement. Assessing effect size in medical research is typically accomplished with Cohen's d statistic. Cohen's d statistic assumes that average values are good estimators of the position of a distribution of numbers and also assumes Gaussian (or bell-shaped) underlying data distributions. In this paper, we present an alternative evaluative statistic that can quantify differences between two data distributions in a manner that is similar to traditional effect size calculations; however, the proposed approach avoids making assumptions regarding the shape of the underlying data distribution. The proposed sorting statistic is compared with Cohen's d statistic and is demonstrated to be capable of identifying feature measurements of potential interest for which Cohen's d statistic implies the measurement would be of little use. This proposed sorting statistic has been evaluated on a large clinical autism dataset from Boston Children's Hospital, Harvard Medical School, demonstrating that it can potentially play a constructive role in future healthcare technologies. PMID:29796236

  3. A statistical approach to optimizing concrete mixture design.

    PubMed

    Ahmad, Shamsad; Alghamdi, Saeid A

    2014-01-01

    A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3³). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m³), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options.
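
    The design-and-fit step described above can be illustrated with a minimal sketch: a 3³ full factorial design over the reported factor levels and an ordinary-least-squares fit of a second-order (quadratic) response-surface model. The strength responses below are invented placeholders, not the paper's measurements.

    ```python
    import itertools
    import numpy as np

    # The three factor levels reported in the abstract.
    wc_levels = [0.38, 0.43, 0.48]   # water/cementitious ratio
    cm_levels = [350, 375, 400]      # cementitious content, kg/m^3
    fa_levels = [0.35, 0.40, 0.45]   # fine/total aggregate ratio

    X_raw = np.array(list(itertools.product(wc_levels, cm_levels, fa_levels)))  # 27 runs

    def quadratic_design_matrix(X):
        """Columns: intercept, linear, two-way interaction, and squared terms."""
        x1, x2, x3 = X.T
        return np.column_stack([np.ones(len(X)), x1, x2, x3,
                                x1 * x2, x1 * x3, x2 * x3, x1**2, x2**2, x3**2])

    # Hypothetical compressive-strength responses (MPa), standing in for lab data.
    rng = np.random.default_rng(2)
    y = 80 - 60 * X_raw[:, 0] + 0.05 * X_raw[:, 1] - 10 * X_raw[:, 2] + rng.normal(0, 1, len(X_raw))

    beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X_raw), y, rcond=None)
    print("fitted coefficients:", np.round(beta, 3))
    ```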

  4. A Statistical Approach to Optimizing Concrete Mixture Design

    PubMed Central

    Alghamdi, Saeid A.

    2014-01-01

    A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3³). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m³), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options. PMID:24688405

  5. Monetary and affective judgments of consumer goods: modes of evaluation matter.

    PubMed

    Seta, John J; Seta, Catherine E; McCormick, Michael; Gallagher, Ashleigh H

    2014-01-01

    Participants who evaluated 2 positively valued items separately reported more positive attraction (using affective and monetary measures) than those who evaluated the same two items as a unit. In Experiments 1-3, this separate/unitary evaluation effect was obtained when participants evaluated products that they were purchasing for a friend. Similar findings were obtained in Experiments 4 and 5 when we considered the amount participants were willing to spend to purchase insurance for items that they currently owned. The averaging/summation model was contrasted with several theoretical perspectives and implicated averaging and summation integration processes in how items are evaluated. The procedural and theoretical similarities and differences between this work and related research on unpacking, comparison processes, public goods, and price bundling are discussed. Overall, the results support the operation of integration processes and contribute to an understanding of how these processes influence the evaluation and valuation of private goods.

  6. Asymptotic modal analysis and statistical energy analysis

    NASA Technical Reports Server (NTRS)

    Dowell, Earl H.

    1992-01-01

    Asymptotic Modal Analysis (AMA) is a method which is used to model linear dynamical systems with many participating modes. The AMA method was originally developed to show the relationship between statistical energy analysis (SEA) and classical modal analysis (CMA). In the limit of a large number of modes of a vibrating system, the classical modal analysis result can be shown to be equivalent to the statistical energy analysis result. As the CMA result evolves into the SEA result, a number of systematic assumptions are made. Most of these assumptions are based upon the supposition that the number of modes approaches infinity. It is for this reason that the term 'asymptotic' is used. AMA is the asymptotic result of taking the limit of CMA as the number of modes approaches infinity. AMA refers to any of the intermediate results between CMA and SEA, as well as the SEA result which is derived from CMA. The main advantage of the AMA method is that individual modal characteristics are not required in the model or computations. By contrast, CMA requires that each modal parameter be evaluated at each frequency. In the latter, contributions from each mode are computed and the final answer is obtained by summing over all the modes in the particular band of interest. AMA evaluates modal parameters only at their center frequency and does not sum the individual contributions from each mode in order to obtain a final result. The method is similar to SEA in this respect. However, SEA is only capable of obtaining spatial averages or means, as it is a statistical method. Since AMA is systematically derived from CMA, it can obtain local spatial information as well.

  7. Further developments in cloud statistics for computer simulations

    NASA Technical Reports Server (NTRS)

    Chang, D. T.; Willand, J. H.

    1972-01-01

    This study is a part of NASA's continued program to provide global statistics of cloud parameters for computer simulation. The primary emphasis was on the development of the data bank of the global statistical distributions of cloud types and cloud layers and their applications in the simulation of the vertical distributions of in-cloud parameters such as liquid water content. These statistics were compiled from actual surface observations as recorded in Standard WBAN forms. Data for a total of 19 stations were obtained and reduced. These stations were selected to be representative of the 19 primary cloud climatological regions defined in previous studies of cloud statistics. Using the data compiled in this study, a limited study was conducted of the homogeneity of cloud regions, the latitudinal dependence of cloud-type distributions, the dependence of these statistics on sample size, and other factors in the statistics which are of significance to the problem of simulation. The application of the statistics in cloud simulation was investigated. In particular, the inclusion of the new statistics in an expanded multi-step Monte Carlo simulation scheme is suggested and briefly outlined.

  8. Statistical physics of interacting neural networks

    NASA Astrophysics Data System (ADS)

    Kinzel, Wolfgang; Metzler, Richard; Kanter, Ido

    2001-12-01

    Recent results on the statistical physics of time series generation and prediction are presented. A neural network is trained on quasi-periodic and chaotic sequences and overlaps to the sequence generator as well as the prediction errors are calculated numerically. For each network there exists a sequence for which it completely fails to make predictions. Two interacting networks show a transition to perfect synchronization. A pool of interacting networks shows good coordination in the minority game-a model of competition in a closed market. Finally, as a demonstration, a perceptron predicts bit sequences produced by human beings.

  9. Sequential selection of economic good and action in medial frontal cortex of macaques during value-based decisions

    PubMed Central

    Chen, Xiaomo; Stuphorn, Veit

    2015-01-01

    Value-based decisions could rely either on the selection of desired economic goods or on the selection of the actions that will obtain the goods. We investigated this question by recording from the supplementary eye field (SEF) of monkeys during a gambling task that allowed us to distinguish chosen good from chosen action signals. Analysis of the individual neuron activity, as well as of the population state-space dynamic, showed that SEF encodes first the chosen gamble option (the desired economic good) and only ~100 ms later the saccade that will obtain it (the chosen action). The action selection is likely driven by inhibitory interactions between different SEF neurons. Our results suggest that during value-based decisions, the selection of economic goods precedes and guides the selection of actions. The two selection steps serve different functions and can therefore not compensate for each other, even when information guiding both processes is given simultaneously. DOI: http://dx.doi.org/10.7554/eLife.09418.001 PMID:26613409

  10. Calculating weighted estimates of peak streamflow statistics

    USGS Publications Warehouse

    Cohn, Timothy A.; Berenbrock, Charles; Kiang, Julie E.; Mason, Jr., Robert R.

    2012-01-01

    According to the Federal guidelines for flood-frequency estimation, the uncertainty of peak streamflow statistics, such as the 1-percent annual exceedance probability (AEP) flow at a streamgage, can be reduced by combining the at-site estimate with the regional regression estimate to obtain a weighted estimate of the flow statistic. The procedure assumes the estimates are independent, which is reasonable in most practical situations. The purpose of this publication is to describe and make available a method for calculating a weighted estimate from the uncertainty or variance of the two independent estimates.
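
    A minimal sketch of the variance-weighted combination described above, assuming two independent estimates; the flow values and variances below are illustrative, not taken from the publication.

    ```python
    def weighted_estimate(x_site, var_site, x_region, var_region):
        """Variance-weighted average of two independent flow estimates.
        Each estimate is weighted in proportion to the inverse of its variance,
        which minimizes the variance of the combined estimate when the two are independent."""
        w_site = var_region / (var_site + var_region)
        x_w = w_site * x_site + (1 - w_site) * x_region
        var_w = (var_site * var_region) / (var_site + var_region)
        return x_w, var_w

    # Hypothetical 1-percent AEP flow estimates (log units are often used in practice).
    x_w, var_w = weighted_estimate(x_site=3.20, var_site=0.010, x_region=3.35, var_region=0.025)
    print(f"weighted estimate = {x_w:.3f}, variance = {var_w:.4f}")
    ```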

  11. Covariance approximation for fast and accurate computation of channelized Hotelling observer statistics

    NASA Astrophysics Data System (ADS)

    Bonetto, P.; Qi, Jinyi; Leahy, R. M.

    2000-08-01

    Describes a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, the authors derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. The theoretical analysis models both the Poisson statistics of PET data and the inhomogeneity of tracer uptake. The authors show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow the authors to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm.
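
    The channelized Hotelling observer statistic itself has a compact form, sketched below on synthetic image ensembles with placeholder channels; this is a generic CHO computation, not the authors' theoretical approximation for MAP reconstructions.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_pix, n_ch, n_img = 64 * 64, 6, 200

    # Hypothetical channel matrix (n_pix x n_ch); in practice these would be
    # frequency-selective channels applied to the reconstructed images.
    T = rng.normal(size=(n_pix, n_ch))

    # Synthetic signal-absent / signal-present image ensembles (flattened images).
    signal = np.zeros(n_pix)
    signal[:50] = 0.5
    img_absent = rng.normal(size=(n_img, n_pix))
    img_present = rng.normal(size=(n_img, n_pix)) + signal

    v0, v1 = img_absent @ T, img_present @ T            # channel outputs
    S = 0.5 * (np.cov(v0, rowvar=False) + np.cov(v1, rowvar=False))
    w = np.linalg.solve(S, v1.mean(0) - v0.mean(0))      # Hotelling template in channel space

    t0, t1 = v0 @ w, v1 @ w                              # CHO test statistics per image
    snr = (t1.mean() - t0.mean()) / np.sqrt(0.5 * (t1.var(ddof=1) + t0.var(ddof=1)))
    print(f"CHO detectability SNR ~ {snr:.2f}")
    ```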

  12. Goodness-of-fit tests for open capture-recapture models

    USGS Publications Warehouse

    Pollock, K.H.; Hines, J.E.; Nichols, J.D.

    1985-01-01

    General goodness-of-fit tests for the Jolly-Seber model are proposed. These tests are based on conditional arguments using minimal sufficient statistics. The tests are shown to be of simple hypergeometric form so that a series of independent contingency table chi-square tests can be performed. The relationship of these tests to other proposed tests is discussed. This is followed by a simulation study of the power of the tests to detect departures from the assumptions of the Jolly-Seber model. Some meadow vole capture-recapture data are used to illustrate the testing procedure which has been implemented in a computer program available from the authors.
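
    The building block of such tests, a contingency-table chi-square statistic, can be sketched as follows; the 2x2 table is a hypothetical, simplified analogue of one test component (recapture outcome for newly versus previously marked animals), not the exact conditional tests in the paper.

    ```python
    from scipy.stats import chi2_contingency

    # Hypothetical 2x2 table for one sampling occasion: rows are newly marked vs
    # previously marked animals released, columns are recaptured later vs never seen again.
    table = [[34, 120],
             [51, 95]]
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
    ```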

  13. A fast neural signature of motivated attention to consumer goods separates the sexes.

    PubMed

    Junghöfer, Markus; Kissler, Johanna; Schupp, Harald T; Putsche, Christian; Elling, Ludger; Dobel, Christian

    2010-01-01

    Emotional stimuli guide selective visual attention and receive enhanced processing. Previous event-related potential studies have identified an early (>120 ms) negative potential shift over occipito-temporal regions (early posterior negativity, EPN) presumed to indicate the facilitated processing of survival-relevant stimuli. The present study investigated whether this neural signature of motivated attention is also responsive to the intrinsic significance of man-made objects and consumer goods. To address this issue, we capitalized on gender differences towards specific man-made objects, shoes and motorcycles, for which the Statistical Yearbook 2005 of Germany's Federal Statistical Office (Statistisches Bundesamt, 2005) revealed pronounced differences in consumer behavior. In a passive viewing paradigm, male and female participants viewed pictures of motorcycles and shoes, while their magnetoencephalographic brain responses were measured. Source localization of the magnetic counterpart of the EPN (EPNm) revealed pronounced gender differences in picture processing. Specifically, between 130 and 180 ms, all female participants generated stronger activity in occipito-temporal regions when viewing shoes compared to motorcycles, while all men except one showed stronger activation for motorcycles than shoes. Thus, the EPNm allowed a sex-dimorphic classification of the processing of consumer goods. Self-report data confirmed gender differences in consumer behavior, which, however, were less distinct compared to the brain based measure. Considering the latency of the EPNm, the reflected automatic emotional network activity is most likely not yet affected by higher cognitive functions such as response strategies or social expectancy. Non-invasive functional neuroimaging measures of early brain activity may thus serve as objective measure for individual preferences towards consumer goods.

  14. A “good death”: perspectives of Muslim patients and health care providers

    PubMed Central

    Tayeb, Mohamad A.; Al-Zamel, Ersan; Fareed, Muhammed M.; Abouellail, Hesham A.

    2010-01-01

    BACKGROUND AND OBJECTIVES: Twelve “good death” principles have been identified that apply to Westerners. This study aimed to review the TFHCOP good death perception to determine its validity for Muslim patients and health care providers, and to identify and describe other components of the Muslim good death perspective. SUBJECTS AND METHODS: Participants included 284 Muslims of both genders with different nationalities and careers. We used a 12-question questionnaire based on the 12 principles of the TFHCOP good death definition, followed by face-to-face interviews. We used descriptive statistics to analyze questionnaire responses. However, for new themes, we used a grounded theory approach with a “constant comparisons” method. RESULT: On average, each participant agreed on eight principles of the questionnaire. Dignity, privacy, spiritual and emotional support, access to hospice care, the ability to issue advance directives, and having time to say goodbye were the top priorities. Participants identified three main domains. The first domain was related to faith and belief. The second domain included principles related to self-esteem and a person's image to friends and family. The third domain was related to satisfaction about family security after the death of the patient. Professional role distinctions were more pronounced than were gender or nationality differences. CONCLUSION: Several aspects of “good death,” as perceived by Western communities, are not recognized as being important by many Muslim patients and health care providers. Furthermore, our study introduced three novel components of good death in Muslim society. PMID:20427938

  15. Behavioral Patterns in Special Education. Good Teaching Practices

    PubMed Central

    Rodríguez-Dorta, Manuela; Borges, África

    2017-01-01

    Providing quality education means responding to the diversity in the classroom. The teacher is a key figure in responding to the various educational needs presented by students. Specifically, special education professionals are of great importance, as they are the ones who lend their support to regular classroom teachers and offer specialized educational assistance to students who require it. Therefore, special education is different from what takes place in the regular classroom, demanding greater commitment from the teacher. There are certain behaviors, considered good teaching practices, that have always been connected with good teaching and good learning. To ensure that these teachers are carrying out their educational work properly, it is necessary to evaluate them, which means having appropriate instruments. The Observational Protocol for Teaching Functions in Primary School and Special Education (PROFUNDO-EPE, v.3, in Spanish) makes it possible to capture behaviors from these professionals and behavioral patterns that correspond to good teaching practices. This study evaluates the behavior of two special education teachers who work with students from different educational stages and with different educational needs. It reveals that the analyzed teachers adapt their behavior to the needs and characteristics of their students, responding adequately to the needs presented and showing good teaching practices. The patterns obtained indicate that they offer support, help and clear guidelines for performing the tasks. They motivate students toward learning by providing positive feedback, and they check that students have properly assimilated the contents through questions or non-verbal supervision. Also, they provide a safe and reliable climate for learning. PMID:28512437

  16. Return on research investments: personal good versus public good

    NASA Astrophysics Data System (ADS)

    Fox, P. A.

    2017-12-01

    For some time the outputs of publicly and privately funded research, i.e. what is produced, have been necessary but far from sufficient when considering an overall return on (research) investment. At the present time, products such as peer-reviewed papers, websites, data, and software are recognized by funders on timescales related to research awards and reporting. However, from a consumer perspective, impact and value are determined at the time a product is discovered, accessed, assessed and used. As is often the case, the perspectives of producer and consumer communities can be distinct and may not intersect at all. We contrast personal good, i.e. credit and reputation, with public good, e.g. interest, leverage, exploitation, and more. This presentation will elaborate on both the metaphorical and ideological aspects of applying a "return on investment" frame to the topic of assessing "good".

  17. Patients and medical statistics. Interest, confidence, and ability.

    PubMed

    Woloshin, Steven; Schwartz, Lisa M; Welch, H Gilbert

    2005-11-01

    People are increasingly presented with medical statistics. There are no existing measures to assess their level of interest or confidence in using medical statistics. To develop 2 new measures, the STAT-interest and STAT-confidence scales, and assess their reliability and validity. Survey with retest after approximately 2 weeks. Two hundred and twenty-four people were recruited from advertisements in local newspapers, an outpatient clinic waiting area, and a hospital open house. We developed and revised 5 items on interest in medical statistics and 3 on confidence understanding statistics. Study participants were mostly college graduates (52%); 25% had a high school education or less. The mean age was 53 (range 20 to 84) years. Most paid attention to medical statistics (6% paid no attention). The mean (SD) STAT-interest score was 68 (17) and ranged from 15 to 100. Confidence in using statistics was also high: the mean (SD) STAT-confidence score was 65 (19) and ranged from 11 to 100. STAT-interest and STAT-confidence scores were moderately correlated (r=.36, P<.001). Both scales demonstrated good test-retest repeatability (r=.60, .62, respectively), internal consistency reliability (Cronbach's alpha=0.70 and 0.78), and usability (individual item nonresponse ranged from 0% to 1.3%). Scale scores correlated only weakly with scores on a medical data interpretation test (r=.15 and .26, respectively). The STAT-interest and STAT-confidence scales are usable and reliable. Interest and confidence were only weakly related to the ability to actually use data.

  18. Evaluation of agreement between temporal series obtained from electrocardiogram and pulse wave.

    NASA Astrophysics Data System (ADS)

    Leikan, GM; Rossi, E.; Sanz, MCuadra; Delisle Rodríguez, D.; Mántaras, MC; Nicolet, J.; Zapata, D.; Lapyckyj, I.; Siri, L. Nicola; Perrone, MS

    2016-04-01

    Heart rate variability allows studying cardiovascular autonomic nervous system modulation. Usually, this signal is obtained from the electrocardiogram (ECG). A simpler method for recording the pulse wave (PW) is finger photoplethysmography (PPG), which also provides information about the duration of the cardiac cycle. In this study, the correlation and agreement between the time series of the intervals between heartbeats obtained from the ECG and those obtained from the PPG were studied. The signals analyzed were obtained from young, healthy subjects at rest. For statistical analysis, the Pearson correlation coefficient and the Bland and Altman limits of agreement were used. Results show that the time series constructed from the PW would not replace the ones obtained from the ECG.
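
    A minimal sketch of the two agreement measures named above (Pearson correlation and Bland-Altman limits of agreement), applied to simulated inter-beat interval series standing in for the ECG and pulse-wave recordings.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    # Hypothetical inter-beat intervals (ms) from ECG R-R peaks and from the pulse wave.
    rr_ecg = rng.normal(850, 60, 300)
    rr_ppg = rr_ecg + rng.normal(5, 12, 300)      # PW series with a small offset and extra jitter

    r = np.corrcoef(rr_ecg, rr_ppg)[0, 1]          # Pearson correlation

    diff = rr_ppg - rr_ecg                          # Bland-Altman: bias and limits of agreement
    bias = diff.mean()
    loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
    print(f"r = {r:.3f}, bias = {bias:.1f} ms, limits of agreement = ({loa[0]:.1f}, {loa[1]:.1f}) ms")
    ```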

  19. Mutual interference between statistical summary perception and statistical learning.

    PubMed

    Zhao, Jiaying; Ngo, Nhi; McKendrick, Ryan; Turk-Browne, Nicholas B

    2011-09-01

    The visual system is an efficient statistician, extracting statistical summaries over sets of objects (statistical summary perception) and statistical regularities among individual objects (statistical learning). Although these two kinds of statistical processing have been studied extensively in isolation, their relationship is not yet understood. We first examined how statistical summary perception influences statistical learning by manipulating the task that participants performed over sets of objects containing statistical regularities (Experiment 1). Participants who performed a summary task showed no statistical learning of the regularities, whereas those who performed control tasks showed robust learning. We then examined how statistical learning influences statistical summary perception by manipulating whether the sets being summarized contained regularities (Experiment 2) and whether such regularities had already been learned (Experiment 3). The accuracy of summary judgments improved when regularities were removed and when learning had occurred in advance. In sum, calculating summary statistics impeded statistical learning, and extracting statistical regularities impeded statistical summary perception. This mutual interference suggests that statistical summary perception and statistical learning are fundamentally related.

  20. Statistical inference for noisy nonlinear ecological dynamic systems.

    PubMed

    Wood, Simon N

    2010-08-26

    Chaotic ecological dynamic systems defy conventional statistical analysis. Systems with near-chaotic dynamics are little better. Such systems are almost invariably driven by endogenous dynamic processes plus demographic and environmental process noise, and are only observable with error. Their sensitivity to history means that minute changes in the driving noise realization, or the system parameters, will cause drastic changes in the system trajectory. This sensitivity is inherited and amplified by the joint probability density of the observable data and the process noise, rendering it useless as the basis for obtaining measures of statistical fit. Because the joint density is the basis for the fit measures used by all conventional statistical methods, this is a major theoretical shortcoming. The inability to make well-founded statistical inferences about biological dynamic models in the chaotic and near-chaotic regimes, other than on an ad hoc basis, leaves dynamic theory without the methods of quantitative validation that are essential tools in the rest of biological science. Here I show that this impasse can be resolved in a simple and general manner, using a method that requires only the ability to simulate the observed data on a system from the dynamic model about which inferences are required. The raw data series are reduced to phase-insensitive summary statistics, quantifying local dynamic structure and the distribution of observations. Simulation is used to obtain the mean and the covariance matrix of the statistics, given model parameters, allowing the construction of a 'synthetic likelihood' that assesses model fit. This likelihood can be explored using a straightforward Markov chain Monte Carlo sampler, but one further post-processing step returns pure likelihood-based inference. I apply the method to establish the dynamic nature of the fluctuations in Nicholson's classic blowfly experiments.
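
    A minimal sketch of the synthetic-likelihood idea under stated assumptions: a toy noisy map stands in for the ecological model, and an illustrative choice of summary statistics replaces the paper's phase-insensitive summaries.

    ```python
    import numpy as np

    def summaries(series):
        """Illustrative summary statistics of one series (mean, spread, lag-1 correlation)."""
        return np.array([series.mean(), series.std(ddof=1),
                         np.corrcoef(series[:-1], series[1:])[0, 1]])

    def simulate(theta, n=200, rng=None):
        """Stand-in stochastic model: a noisy logistic map clipped to [0, 1];
        the real application would simulate the ecological model under study."""
        if rng is None:
            rng = np.random.default_rng()
        x = np.empty(n)
        x[0] = 0.5
        for t in range(1, n):
            x[t] = np.clip(theta * x[t - 1] * (1 - x[t - 1]) + rng.normal(0, 0.01), 0.0, 1.0)
        return x

    def synthetic_loglik(theta, s_obs, n_rep=200, rng=None):
        """Gaussian log-likelihood of the observed summaries under the simulated
        mean and covariance of the summaries at this parameter value."""
        if rng is None:
            rng = np.random.default_rng(5)
        S = np.array([summaries(simulate(theta, rng=rng)) for _ in range(n_rep)])
        mu, cov = S.mean(0), np.cov(S, rowvar=False)
        resid = s_obs - mu
        sign, logdet = np.linalg.slogdet(cov)
        return -0.5 * (resid @ np.linalg.solve(cov, resid) + logdet)

    rng = np.random.default_rng(6)
    s_obs = summaries(simulate(3.7, rng=rng))      # pretend these are the observed data
    for theta in (3.5, 3.7, 3.9):
        print(theta, round(synthetic_loglik(theta, s_obs), 2))
    ```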

  1. Statistical description and transport in stochastic magnetic fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vanden Eijnden, E.; Balescu, R.

    1996-03-01

    The statistical description of particle motion in a stochastic magnetic field is presented. Starting from the stochastic Liouville equation (or hybrid kinetic equation) associated with the equations of motion of a test particle, the probability distribution function of the system is obtained for various magnetic fields and collisional processes. The influence of these two ingredients on the statistics of the particle dynamics is stressed. In all cases, transport properties of the system are discussed. © 1996 American Institute of Physics.

  2. Applying Bootstrap Resampling to Compute Confidence Intervals for Various Statistics with R

    ERIC Educational Resources Information Center

    Dogan, C. Deha

    2017-01-01

    Background: Most of the studies in academic journals use p values to represent statistical significance. However, this is not a good indicator of practical significance. Although confidence intervals provide information about the precision of point estimation, they are, unfortunately, rarely used. The infrequent use of confidence intervals might…
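
    The study works in R; purely as an illustration of the same idea, here is a minimal percentile-bootstrap confidence interval in Python for a skewed toy sample.

    ```python
    import numpy as np

    def bootstrap_ci(data, stat=np.mean, n_boot=10_000, alpha=0.05, seed=0):
        """Percentile bootstrap confidence interval for an arbitrary statistic."""
        rng = np.random.default_rng(seed)
        boot = np.array([stat(rng.choice(data, size=len(data), replace=True))
                         for _ in range(n_boot)])
        lo, hi = np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)])
        return lo, hi

    sample = np.random.default_rng(7).exponential(scale=2.0, size=40)   # skewed toy data
    lo, hi = bootstrap_ci(sample)
    print(f"mean = {sample.mean():.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
    ```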

  3. A Framework for Assessing High School Students' Statistical Reasoning.

    PubMed

    Chan, Shiau Wei; Ismail, Zaleha; Sumintono, Bambang

    2016-01-01

    Based on a synthesis of literature, earlier studies, analyses and observations on high school students, this study developed an initial framework for assessing students' statistical reasoning about descriptive statistics. Framework descriptors were established across five levels of statistical reasoning and four key constructs. The former consisted of idiosyncratic reasoning, verbal reasoning, transitional reasoning, procedural reasoning, and integrated process reasoning. The latter include describing data, organizing and reducing data, representing data, and analyzing and interpreting data. In contrast to earlier studies, this initial framework formulated a complete and coherent statistical reasoning framework. A statistical reasoning assessment tool was then constructed from this initial framework. The tool was administered to 10 tenth-grade students in a task-based interview. The initial framework was refined, and the statistical reasoning assessment tool was revised. The ten students then participated in the second task-based interview, and the data obtained were used to validate the framework. The findings showed that the students' statistical reasoning levels were consistent across the four constructs, and this result confirmed the framework's cohesion. Developed to contribute to statistics education, this newly developed statistical reasoning framework provides a guide for planning learning goals and designing instruction and assessments.

  4. Photon counting statistics analysis of biophotons from hands.

    PubMed

    Jung, Hyun-Hee; Woo, Won-Myung; Yang, Joon-Mo; Choi, Chunho; Lee, Jonghan; Yoon, Gilwon; Yang, Jong S; Soh, Kwang-Sup

    2003-05-01

    The photon counting statistics of biophotons emitted from hands are studied with a view to testing their agreement with the Poisson distribution. The moments of the observed probability up to seventh order have been evaluated. The moments of biophoton emission from hands are in good agreement with the Poisson values, while those of the dark counts of the photomultiplier tube show large deviations from the theoretical values of the Poisson distribution. The present results are consistent with the conventional delta-value analysis of the second moment of probability.
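
    A minimal sketch of one way to run such a check: sample factorial moments of counts compared with the Poisson values lambda**k. The counts are simulated placeholders; the biophoton data themselves are not reproduced here.

    ```python
    import numpy as np

    def factorial_moment(counts, k):
        """Sample k-th factorial moment E[X(X-1)...(X-k+1)]; for a Poisson
        distribution with mean lambda this equals lambda**k, giving a simple check."""
        counts = np.asarray(counts, dtype=float)
        vals = np.ones_like(counts)
        for j in range(k):
            vals *= counts - j
        return vals.mean()

    rng = np.random.default_rng(8)
    lam = 3.2                                       # hypothetical mean counts per bin
    counts = rng.poisson(lam, size=50_000)          # stand-in for measured photon counts

    for k in range(1, 8):
        print(k, round(factorial_moment(counts, k), 2), round(lam**k, 2))
    ```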

  5. High cumulants of conserved charges and their statistical uncertainties

    NASA Astrophysics Data System (ADS)

    Li-Zhu, Chen; Ye-Yin, Zhao; Xue, Pan; Zhi-Ming, Li; Yuan-Fang, Wu

    2017-10-01

    We study the influence of measured high cumulants of conserved charges on their associated statistical uncertainties in relativistic heavy-ion collisions. With a given number of events, the measured cumulants randomly fluctuate with an approximately normal distribution, while the estimated statistical uncertainties are found to be correlated with corresponding values of the obtained cumulants. Generally, with a given number of events, the larger the cumulants we measure, the larger the statistical uncertainties that are estimated. The error-weighted averaged cumulants are dependent on statistics. Despite this effect, however, it is found that the three sigma rule of thumb is still applicable when the statistics are above one million. Supported by NSFC (11405088, 11521064, 11647093), Major State Basic Research Development Program of China (2014CB845402) and Ministry of Science and Technology (MoST) (2016YFE0104800)

  6. The Effect of Missing Data Handling Methods on Goodness of Fit Indices in Confirmatory Factor Analysis

    ERIC Educational Resources Information Center

    Köse, Alper

    2014-01-01

    The primary objective of this study was to examine the effect of missing data on goodness of fit statistics in confirmatory factor analysis (CFA). For this aim, four missing data handling methods; listwise deletion, full information maximum likelihood, regression imputation and expectation maximization (EM) imputation were examined in terms of…

  7. The role of noise in the spatial public goods game

    NASA Astrophysics Data System (ADS)

    Javarone, Marco Alberto; Battiston, Federico

    2016-07-01

    In this work we aim to analyze the role of noise in the spatial public goods game, one of the most famous games in evolutionary game theory. The dynamics of this game is affected by a number of parameters and processes, namely the topology of interactions among the agents, the synergy factor, and the strategy revision phase. The latter is a process that allows agents to change their strategy. Notably, rational agents tend to imitate richer neighbors, in order to increase the probability to maximize their payoff. By implementing a stochastic revision process, it is possible to control the level of noise in the system, so that even irrational updates may occur. In particular, in this work we study the effect of noise on the macroscopic behavior of a finite structured population playing the public goods game. We consider both the case of a homogeneous population, where the noise in the system is controlled by tuning a parameter representing the level of stochasticity in the strategy revision phase, and a heterogeneous population composed of a variable proportion of rational and irrational agents. In both cases numerical investigations show that the public goods game has a very rich behavior which strongly depends on the amount of noise in the system and on the value of the synergy factor. To conclude, our study sheds a new light on the relations between the microscopic dynamics of the public goods game and its macroscopic behavior, strengthening the link between the field of evolutionary game theory and statistical physics.
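
    The noisy strategy-revision step can be illustrated with the Fermi imitation rule commonly used in this literature (a generic sketch, not necessarily the exact update in the paper), where the parameter K plays the role of the noise level.

    ```python
    import numpy as np

    def imitation_probability(payoff_self, payoff_neighbor, K):
        """Fermi-rule probability of copying a neighbour's strategy.
        Small K -> nearly deterministic imitation of richer neighbours (rational updates);
        large K -> updates approach coin flips (irrational / noisy updates)."""
        return 1.0 / (1.0 + np.exp((payoff_self - payoff_neighbor) / K))

    payoff_gap = 2.0   # the neighbour earns 2 payoff units more than the focal agent
    for K in (0.1, 0.5, 2.0, 10.0):
        p = imitation_probability(0.0, payoff_gap, K)
        print(f"K = {K:>4}: copy probability = {p:.2f}")
    ```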

  8. "Everyone just ate good food": 'Good food' in Islamabad, Pakistan.

    PubMed

    Hasnain, Saher

    2018-08-01

    In recent years, consumption of alternatively produced foods has increased in popularity in response to the deleterious effects of rapidly globalising and industrialised food systems. Concerns over food safety in relation to these changes may result from elevated levels of risk and changing perceptions associated with food production practices. This paper explores how the middle class residents of Islamabad, Pakistan, use the concept of 'good food' to reconnect themselves with nature, changing food systems, and traditional values. The paper also demonstrates how these ideas relate to those of organic, local, and traditional food consumption as currently used in more economically developed states in the Global North. Through research based on participant observation and semi-structured interviews, this paper illustrates that besides price and convenience, purity, freshness, association with specific places, and 'Pakistani-ness' were considered as the basis for making decisions about 'good food'. The results show that while individuals are aware of and have some access to imported organic and local food, they prefer using holistic and culturally informed concepts of 'good food' instead that reconnect them with food systems. I argue that through conceptualisations of 'good food', the urban middle class in Islamabad is reducing their disconnection and dis-embeddedness from nature, the food systems, and their social identities. The paper contributes to literature on food anxieties, reconnections in food geography, and 'good food' perceptions, with a focus on Pakistan. Copyright © 2018. Published by Elsevier Ltd.

  9. Improving GEFS Weather Forecasts for Indian Monsoon with Statistical Downscaling

    NASA Astrophysics Data System (ADS)

    Agrawal, Ankita; Salvi, Kaustubh; Ghosh, Subimal

    2014-05-01

    Weather forecasting has always been a challenging research problem, yet it is of paramount importance as it serves as a key input in formulating a modus operandi for the immediate future. Short-range rainfall forecasts influence a wide range of entities, right from the agricultural industry to the common man. Accurate forecasts help minimize possible damage by enabling pre-decided plans of action, and hence it is necessary to gauge the quality of forecasts, which may vary with the complexity of the weather state and regional parameters. Indian Summer Monsoon Rainfall (ISMR) is one such perfect arena to check the quality of weather forecasts, not only because of the level of intricacy in the spatial and temporal patterns associated with it, but also because of the amount of damage it can cause (through poor forecasts) to the Indian economy by affecting the agriculture industry. The present study is undertaken with the rationales of assessing the ability of the Global Ensemble Forecast System (GEFS) in predicting ISMR over central India, and the skill of statistical downscaling in adding value to the predictions by taking them closer to the evidentiary target dataset. GEFS is a global numerical weather prediction system providing forecasts of different climate variables at a fine resolution (0.5 degree and 1 degree). GEFS shows good skill in predicting different climatic variables but performs poorly for Indian summer monsoon rainfall, which is evident from very low to negative correlation values between predicted and observed rainfall. Towards the fulfilment of the second rationale, a statistical relationship is established between the reasonably well predicted climate variables (GEFS) and observed rainfall. The GEFS predictors are treated with multicollinearity and dimensionality reduction techniques, such as principal component analysis (PCA) and least absolute shrinkage and selection operator (LASSO). Statistical relationship is
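
    A minimal sketch of the predictor-reduction-plus-regression step (PCA followed by LASSO), with random placeholder data standing in for the GEFS predictor fields and the observed rainfall series.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LassoCV
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(10)
    n_days, n_predictors = 600, 120                       # e.g. gridded predictor fields, flattened
    X = rng.normal(size=(n_days, n_predictors))           # placeholder predictor matrix
    y = X[:, :5] @ rng.normal(size=5) + rng.normal(0, 1, n_days)   # placeholder rainfall anomaly

    model = make_pipeline(StandardScaler(),
                          PCA(n_components=20),           # handles multicollinearity / dimensionality
                          LassoCV(cv=5))                   # sparse regression on the components
    model.fit(X[:500], y[:500])
    print("out-of-sample R^2:", round(model.score(X[500:], y[500:]), 2))
    ```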

  10. Asymptotic distribution of ∆AUC, NRIs, and IDI based on theory of U-statistics.

    PubMed

    Demler, Olga V; Pencina, Michael J; Cook, Nancy R; D'Agostino, Ralph B

    2017-09-20

    The change in area under the curve (∆AUC), the integrated discrimination improvement (IDI), and net reclassification index (NRI) are commonly used measures of risk prediction model performance. Some authors have reported good validity of associated methods of estimating their standard errors (SE) and construction of confidence intervals, whereas others have questioned their performance. To address these issues, we unite the ∆AUC, IDI, and three versions of the NRI under the umbrella of the U-statistics family. We rigorously show that the asymptotic behavior of ∆AUC, NRIs, and IDI fits the asymptotic distribution theory developed for U-statistics. We prove that the ∆AUC, NRIs, and IDI are asymptotically normal, unless they compare nested models under the null hypothesis. In the latter case, asymptotic normality and existing SE estimates cannot be applied to ∆AUC, NRIs, or IDI. In the former case, SE formulas proposed in the literature are equivalent to SE formulas obtained from U-statistics theory if we ignore adjustment for estimated parameters. We use the Sukhatme-Randles-deWet condition to determine when adjustment for estimated parameters is necessary. We show that adjustment is not necessary for SEs of the ∆AUC and two versions of the NRI when added predictor variables are significant and normally distributed. The SEs of the IDI and three-category NRI should always be adjusted for estimated parameters. These results allow us to define when existing formulas for SE estimates can be used and when resampling methods such as the bootstrap should be used instead when comparing nested models. We also use the U-statistic theory to develop a new SE estimate of ∆AUC. Copyright © 2017 John Wiley & Sons, Ltd.

  11. A Statistical Method of Identifying Interactions in Neuron–Glia Systems Based on Functional Multicell Ca2+ Imaging

    PubMed Central

    Nakae, Ken; Ikegaya, Yuji; Ishikawa, Tomoe; Oba, Shigeyuki; Urakubo, Hidetoshi; Koyama, Masanori; Ishii, Shin

    2014-01-01

    Crosstalk between neurons and glia may constitute a significant part of information processing in the brain. We present a novel method of statistically identifying interactions in a neuron–glia network. We attempted to identify neuron–glia interactions from neuronal and glial activities via maximum-a-posteriori (MAP)-based parameter estimation by developing a generalized linear model (GLM) of a neuron–glia network. The interactions in our interest included functional connectivity and response functions. We evaluated the cross-validated likelihood of GLMs that resulted from the addition or removal of connections to confirm the existence of specific neuron-to-glia or glia-to-neuron connections. We only accepted addition or removal when the modification improved the cross-validated likelihood. We applied the method to a high-throughput, multicellular in vitro Ca2+ imaging dataset obtained from the CA3 region of a rat hippocampus, and then evaluated the reliability of connectivity estimates using a statistical test based on a surrogate method. Our findings based on the estimated connectivity were in good agreement with currently available physiological knowledge, suggesting our method can elucidate undiscovered functions of neuron–glia systems. PMID:25393874

  12. Comparison of urine and bladder or urethral mucosal biopsy culture obtained by transurethral cystoscopy in dogs with chronic lower urinary tract disease: 41 cases (2002 to 2011).

    PubMed

    Sycamore, K F; Poorbaugh, V R; Pullin, S S; Ward, C R

    2014-07-01

    To compare aerobic bacterial culture of urine to cystoscopically obtained mucosal biopsies of the lower urinary tract in dogs. Retrospective review of case records from dogs that had transurethral cystoscopy at a veterinary teaching hospital between 2002 and 2011. Dogs that had culture results from cystocentesis-obtained urine and transurethral cystoscopically obtained mucosal samples were included in the study. Pathogens identified were compared between sampling methods. Forty dogs underwent transurethral cystoscopy for lower urinary tract disease on 41 occasions. There was significant (P = 0.0003) agreement between urine and mucosal biopsy cultures. Both cultures were negative in 66% and positive in 17% of dogs. There was a 17% disagreement between the sampling methods. Although not statistically significant, more mucosal samples than urine cultures were positive for Escherichia coli. There was good agreement between pathogen identification from urine and lower urinary tract mucosal cultures. These results do not support the utilisation of transurethral cystoscopy to obtain biopsy samples for culture in dogs with urinary tract infection and a positive urine culture. Individual cases with possible chronic urinary tract infection and negative urine culture may benefit from transurethral cystoscopy to obtain biopsies for culture. © 2014 British Small Animal Veterinary Association.

  13. Good work ability among unemployed individuals: Association of sociodemographic, work-related and well-being factors.

    PubMed

    Hult, Marja; Pietilä, Anna-Maija; Koponen, Päivikki; Saaranen, Terhi

    2018-05-01

    The aims of this study were to describe the perceived work ability of unemployed individuals and to explore the association between perceived good work ability and sociodemographic, work-related and well-being factors. The data were derived from the Finnish Regional Health and Well-being Study (ATH) collected by postal and Internet-based questionnaires in 2014-2015. The random sample was selected from the Finnish National Population Register. The present study includes data from unemployed or laid-off respondents (n=1975) aged 20-65 years. Logistic regression was used in the statistical analysis. Perceived work ability was measured with the Work Ability Score. Factors significantly associated with good work ability were having young children living in the household, short-term unemployment, low or moderate physical strain in most recent job, moderate mental strain in most recent job, satisfaction with most recent job, good self-rated health and good quality of life. Good self-rated health (odds ratio=10.53, 95% confidence interval 5.90-18.80) was the most substantial factor in the multivariate model. The findings provide further evidence on the factors related to good work ability of the unemployed. These factors should be considered when designing interventions for promoting work ability and to minimise the harmful effects of long-term unemployment.

  14. [Statistics for statistics?--Thoughts about psychological tools].

    PubMed

    Berger, Uwe; Stöbel-Richter, Yve

    2007-12-01

    Statistical methods take a prominent place in psychologists' educational programmes. Because these contents are known to be difficult to understand and hard to learn, students fear them, and those who do not aspire to a research career at university quickly forget the drilled material. Furthermore, because at first glance it does not apply to work with patients and other target groups, the methodological education as a whole has often been questioned. For many psychological practitioners, statistical education makes sense only as a way of earning respect from other professions, namely physicians; for their own practice, statistics is rarely taken seriously as a professional tool. The reason seems clear: statistics treats numbers, while psychotherapy treats subjects. So, is statistics an end in itself? With this article, we try to answer the question of whether, and how, statistical methods are represented within psychotherapeutic and psychological research. To this end, we analysed 46 original articles from a complete volume of the journal Psychotherapy, Psychosomatics, Psychological Medicine (PPmP). Within the volume, 28 different analysis methods were applied, of which 89 per cent were directly based on statistics. Being able to write and critically read original articles, the backbone of research, presumes a high degree of statistical education. To ignore statistics is to ignore research and, ultimately, to leave one's own professional work open to arbitrariness.

  15. Uncertainty propagation for statistical impact prediction of space debris

    NASA Astrophysics Data System (ADS)

    Hoogendoorn, R.; Mooij, E.; Geul, J.

    2018-01-01

    Predictions of the impact time and location of space debris in a decaying trajectory are highly influenced by uncertainties. The traditional Monte Carlo (MC) method can be used to perform accurate statistical impact predictions, but requires a large computational effort. A method is investigated that directly propagates a Probability Density Function (PDF) in time, which has the potential to obtain more accurate results with less computational effort. The decaying trajectory of Delta-K rocket stages was used to test the methods using a six degrees-of-freedom state model. The PDF of the state of the body was propagated in time to obtain impact-time distributions. This Direct PDF Propagation (DPP) method results in a multi-dimensional scattered dataset of the PDF of the state, which is highly challenging to process. No accurate results could be obtained, because of the structure of the DPP data and the high dimensionality. Therefore, the DPP method is less suitable for practical uncontrolled entry problems and the traditional MC method remains superior. Additionally, the MC method was used with two improved uncertainty models to obtain impact-time distributions, which were validated using observations of true impacts. For one of the two uncertainty models, statistically more valid impact-time distributions were obtained than in previous research.
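
    The study's six-degrees-of-freedom propagation is far beyond a short example, but the Monte Carlo idea itself is simple: sample the uncertain inputs from their distributions, propagate each sample through the dynamics, and summarise the resulting impact-time distribution. The toy dynamics below is an invented stand-in, not the Delta-K entry model.

        import numpy as np

        rng = np.random.default_rng(2)

        # Toy stand-in for the entry dynamics: impact time depends nonlinearly on
        # an uncertain initial altitude and an uncertain ballistic-coefficient factor.
        def impact_time(h0_km, beta):
            return 86400.0 * (h0_km / 200.0) ** 1.5 / beta

        # Monte Carlo: sample the uncertain inputs, propagate each sample, summarise the output.
        n = 100_000
        h0 = rng.normal(200.0, 5.0, n)          # initial altitude [km]
        beta = rng.lognormal(0.0, 0.1, n)       # ballistic coefficient scale factor
        t_impact = impact_time(h0, beta)

        lo, hi = np.percentile(t_impact, [2.5, 97.5])
        print(f"median impact time {np.median(t_impact)/3600:.2f} h, "
              f"95% interval [{lo/3600:.2f}, {hi/3600:.2f}] h")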

  16. Structure and statistics of turbulent flow over riblets

    NASA Astrophysics Data System (ADS)

    Henderson, R. D.; Crawford, C. H.; Karniadakis, G. E.

    1993-01-01

    In this paper we present comparisons of turbulence statistics obtained from direct numerical simulation of flow over streamwise aligned triangular riblets with experimental results. We also present visualizations of the instantaneous velocity field inside and around the riblet valleys. In light of the behavior of the statistics and flowfields inside the riblet valleys, we investigate previously reported physical mechanisms for the drag reducing effect of riblets; our results here support the hypothesis of flow anchoring by the riblet valleys and the corresponding inhibition of spanwise flow motions.

  17. Gain statistics of a fiber optical parametric amplifier with a temporally incoherent pump.

    PubMed

    Xu, Y Q; Murdoch, S G

    2010-03-15

    We present an investigation of the statistics of the gain fluctuations of a fiber optical parametric amplifier pumped with a temporally incoherent pump. We derive a simple expression for the probability distribution of the gain of the amplified optical signal. The gain statistics are shown to be a strong function of the signal detuning and allow the possibility of generating optical gain distributions with controllable long-tails. Very good agreement is found between this theory and the experimentally measured gain distributions of an incoherently pumped amplifier.

  18. Statistical Tools And Artificial Intelligence Approaches To Predict Fracture In Bulk Forming Processes

    NASA Astrophysics Data System (ADS)

    Di Lorenzo, R.; Ingarao, G.; Fonti, V.

    2007-05-01

    The crucial task in the prevention of ductile fracture is the availability of a tool for predicting the occurrence of such defects. The technical literature reports extensive investigation of this topic, with contributions from many authors following different approaches. The main class of approaches concerns the development of fracture criteria: generally, such criteria are expressed by determining a critical value of a damage function that depends on stress and strain paths, and ductile fracture is assumed to occur when this critical value is reached during the analysed process. There is a relevant drawback related to the use of ductile fracture criteria: each criterion usually performs well in predicting fracture for particular stress-strain paths, i.e. it works very well for certain processes but may give poor results for others. On the other hand, approaches based on damage mechanics formulations are very effective from a theoretical point of view, but they are complex and their proper calibration is quite difficult. In this paper, two different approaches are investigated to predict fracture occurrence in cold forming operations. The final aim of the proposed method is a tool with general reliability, i.e. one able to predict fracture for different forming processes. The proposed approach represents a step forward within a research project focused on the utilization of innovative predictive tools for ductile fracture. The paper presents a comparison between an artificial neural network design procedure and an approach based on statistical tools; both approaches aim to predict fracture occurrence or absence based on a set of stress and strain path data. The proposed approach is based on the utilization of experimental data available, for a given material, on fracture occurrence in different processes. More in detail, the approach consists in the analysis of

  19. Statistical mechanics in the context of special relativity. II.

    PubMed

    Kaniadakis, G

    2005-09-01

    The special relativity laws emerge as one-parameter (light speed) generalizations of the corresponding laws of classical physics. These generalizations, imposed by the Lorentz transformations, affect both the definition of the various physical observables (e.g., momentum, energy, etc.), as well as the mathematical apparatus of the theory. Here, following the general lines of [Phys. Rev. E 66, 056125 (2002)], we show that the Lorentz transformations impose also a proper one-parameter generalization of the classical Boltzmann-Gibbs-Shannon entropy. The obtained relativistic entropy permits us to construct a coherent and self-consistent relativistic statistical theory, preserving the main features of the ordinary statistical theory, which is recovered in the classical limit. The predicted distribution function is a one-parameter continuous deformation of the classical Maxwell-Boltzmann distribution and has a simple analytic form, showing power law tails in accordance with the experimental evidence. Furthermore, this statistical mechanics can be obtained as the stationary case of a generalized kinetic theory governed by an evolution equation obeying the H theorem and reproducing the Boltzmann equation of the ordinary kinetics in the classical limit.

  20. Statistics of Sxy estimates

    NASA Technical Reports Server (NTRS)

    Freilich, M. H.; Pawka, S. S.

    1987-01-01

    The statistics of Sxy estimates derived from orthogonal-component measurements are examined. Based on results of Goodman (1957), the probability density function (pdf) for Sxy(f) estimates is derived, and a closed-form solution for arbitrary moments of the distribution is obtained. Characteristic functions are used to derive the exact pdf of Sxy(tot). In practice, a simple Gaussian approximation is found to be highly accurate even for relatively few degrees of freedom. Implications for experiment design are discussed, and a maximum-likelihood estimator for a posterior estimation is outlined.

  1. Student perceptions of a good teacher: the gender perspective.

    PubMed

    Jules, V; Kutnick, P

    1997-12-01

    A large-scale survey of pupils' perceptions of a good teacher in the Caribbean republic of Trinidad and Tobago is reported. An essay-based, interpretative mode of research was used to elicit and identify constructs used by boys and girls. The study explores similarities and differences between boys and girls in their perceptions of a good teacher, in a society where girls achieve superior academic performance to boys. A total of 1756 pupils and students aged between 8 and 16 provided the sample, which was proportional, stratified, and clustered. Within these constraints classrooms were randomly selected to be representative of primary and secondary schools across the two islands. Altogether 1539 essays and 217 interviews were content analysed, coded for age development and compared between boys and girls. Content items identified by the pupils were logically grouped into: physical and personal characteristics of the teacher, quality of the relationship between the teacher and pupil, control of behaviour by the teacher, descriptions of the teaching process, and educational and other outcomes obtained by pupils due to teacher efforts. Female pupils identified more good teacher concepts at all age levels than males. There was some commonality between the sexes in concepts regarding interpersonal relationships and inclusiveness in the good teachers' teaching practices, and boys showed significantly greater concerns regarding teacher control and use of punishment. Males as young as 8 years stated that good teachers should be sensitive to their needs. Only among the 16-year-old males were males noted as good teachers. Consideration is given to the roles of male and female teachers, how their classroom actions may set the basis for future success (or failure) of their pupils, and the needs of pupils with regard to teacher support within developing and developed countries.

  2. Statistical downscaling for winter streamflow in Douro River

    NASA Astrophysics Data System (ADS)

    Jesús Esteban Parra, María; Hidalgo Muñoz, José Manuel; García-Valdecasas-Ojeda, Matilde; Raquel Gámiz Fortis, Sonia; Castro Díez, Yolanda

    2015-04-01

    In this paper we have obtained climate change projections for winter flow of the Douro River in the period 2071-2100 by applying the technique of Partial Regression and various General Circulation Models of CMIP5. The streamflow database used has been provided by the Center for Studies and Experimentation of Public Works, CEDEX. Series from gauging stations and reservoirs with less than 10% of missing data (filled by regression with well correlated neighboring stations) have been considered. The homogeneity of these series has been evaluated through the Pettitt test, and the degree of human alteration through the Common Area Index. The application of these criteria led to the selection of 42 streamflow time series homogeneously distributed over the basin, covering the period 1951-2011. For these streamflow data, winter seasonal values were obtained by averaging the monthly values from January to March. Statistical downscaling models for the streamflow have been fitted using as predictors the main atmospheric modes of variability over the North Atlantic region. These modes have been obtained using winter sea level pressure data of the NCEP reanalysis, averaged for the months from December to February. The period 1951-1995 was used for calibration, while the period 1996-2011 was used to validate the fitted models. In general, these models are able to reproduce about 70% of the variability of the winter streamflow of the Douro River. Finally, the obtained statistical models have been applied to obtain projections for the 2071-2100 period, using outputs from different CMIP5 models under the RCP8.5 scenario. The results for the end of the century show modest declines of winter streamflow in this river for most of the models. Keywords: Statistical downscaling, streamflow, Douro River, climate change. ACKNOWLEDGEMENTS This work has been financed by the projects P11-RNM-7941 (Junta de Andalucía-Spain) and CGL2013-48539-R (MINECO-Spain, FEDER).

  3. The Good Work.

    ERIC Educational Resources Information Center

    Csikszentmihalyi, Mihaly

    2003-01-01

    Examines the working lives of geneticists and journalists to place into perspective what lies behind personal ethics and success. Defines "good work" as productive activity that is valued socially and loved by people engaged in it. Asserts that certain cultural values, social controls, and personal standards are necessary to maintain good work and…

  4. Papillary Thyroid Cancer: The Good and Bad of the "Good Cancer".

    PubMed

    Randle, Reese W; Bushman, Norah M; Orne, Jason; Balentine, Courtney J; Wendt, Elizabeth; Saucke, Megan; Pitt, Susan C; Macdonald, Cameron L; Connor, Nadine P; Sippel, Rebecca S

    2017-07-01

    Papillary thyroid cancer is often described as the "good cancer" because of its treatability and relatively favorable survival rates. This study sought to characterize the thoughts of papillary thyroid cancer patients as they relate to having the "good cancer." This qualitative study included 31 papillary thyroid cancer patients enrolled in an ongoing randomized trial. Semi-structured interviews were conducted with participants at the preoperative visit and two weeks, six weeks, six months, and one year after thyroidectomy. Grounded theory was used, inductively coding the first 113 interview transcripts with NVivo 11. The concept of thyroid cancer as "good cancer" emerged unprompted from 94% (n = 29) of participants, mostly concentrated around the time of diagnosis. Patients encountered this perception from healthcare providers, Internet research, friends, and preconceived ideas about other cancers. While patients generally appreciated optimism, this perspective also generated negative feelings. It eased the diagnosis of cancer but created confusion when individual experiences varied from expectations. Despite initially feeling reassured, participants described feeling the "good cancer" characterization invalidated their fears of having cancer. Thyroid cancer patients expressed that they did not want to hear that it's "only thyroid cancer" and that it's "no big deal," because "cancer is cancer," and it is significant. Patients with papillary thyroid cancer commonly confront the perception that their malignancy is "good," but the favorable prognosis and treatability of the disease do not comprehensively represent their cancer fight. The "good cancer" perception is at the root of many mixed and confusing emotions. Clinicians emphasize optimistic outcomes, hoping to comfort, but they might inadvertently invalidate the impact thyroid cancer has on patients' lives.

  5. A Framework for Assessing High School Students' Statistical Reasoning

    PubMed Central

    2016-01-01

    Based on a synthesis of literature, earlier studies, analyses and observations on high school students, this study developed an initial framework for assessing students’ statistical reasoning about descriptive statistics. Framework descriptors were established across five levels of statistical reasoning and four key constructs. The former consisted of idiosyncratic reasoning, verbal reasoning, transitional reasoning, procedural reasoning, and integrated process reasoning. The latter include describing data, organizing and reducing data, representing data, and analyzing and interpreting data. In contrast to earlier studies, this initial framework formulated a complete and coherent statistical reasoning framework. A statistical reasoning assessment tool was then constructed from this initial framework. The tool was administered to 10 tenth-grade students in a task-based interview. The initial framework was refined, and the statistical reasoning assessment tool was revised. The ten students then participated in the second task-based interview, and the data obtained were used to validate the framework. The findings showed that the students’ statistical reasoning levels were consistent across the four constructs, and this result confirmed the framework’s cohesion. Developed to contribute to statistics education, this newly developed statistical reasoning framework provides a guide for planning learning goals and designing instruction and assessments. PMID:27812091

  6. Results of the Verification of the Statistical Distribution Model of Microseismicity Emission Characteristics

    NASA Astrophysics Data System (ADS)

    Cianciara, Aleksander

    2016-09-01

    The paper presents the results of research aimed at verifying the hypothesis that the Weibull distribution is an appropriate statistical distribution model of microseismicity emission characteristics, namely the energy of phenomena and the inter-event time. It is understood that the emission under consideration is induced by natural rock mass fracturing. Because the recorded emission contains noise, it is subjected to appropriate filtering. The study has been conducted using statistical verification of the null hypothesis that the Weibull distribution fits the empirical cumulative distribution function. As the model describing the cumulative distribution function is given in analytical form, its verification may be performed using the Kolmogorov-Smirnov goodness-of-fit test. Interpretation by means of probabilistic methods requires specifying the correct model for the statistical distribution of the data, because in these methods the measurement data are not used directly but through their statistical distributions, e.g., in the method based on hazard analysis or in the one that uses maximum value statistics.
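
    A minimal version of the verification step described here can be sketched with SciPy: fit a Weibull model to (filtered) inter-event times and apply the Kolmogorov-Smirnov goodness-of-fit test. Note that the standard KS p-value is only approximate when the parameters are estimated from the same sample; the data and parameter values below are illustrative.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)

        # Illustrative "inter-event time" sample; in the study this would be filtered microseismic data.
        times = 2.0 * rng.weibull(1.3, 400)

        # Fit a two-parameter Weibull (location fixed at zero) and test the fit with Kolmogorov-Smirnov.
        shape, loc, scale = stats.weibull_min.fit(times, floc=0)
        ks_stat, p_value = stats.kstest(times, "weibull_min", args=(shape, loc, scale))

        # A large p-value means the Weibull hypothesis is not rejected at the chosen significance level.
        print(f"shape={shape:.3f}, scale={scale:.3f}, KS={ks_stat:.3f}, p={p_value:.3f}")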

  7. Reproducibility-optimized test statistic for ranking genes in microarray studies.

    PubMed

    Elo, Laura L; Filén, Sanna; Lahesmaa, Riitta; Aittokallio, Tero

    2008-01-01

    A principal goal of microarray studies is to identify the genes showing differential expression under distinct conditions. In such studies, the selection of an optimal test statistic is a crucial challenge, which depends on the type and amount of data under analysis. While previous studies on simulated or spike-in datasets do not provide practical guidance on how to choose the best method for a given real dataset, we introduce an enhanced reproducibility-optimization procedure, which enables the selection of a suitable gene-ranking statistic directly from the data. In comparison with existing ranking methods, the reproducibility-optimized statistic shows good performance consistently under various simulated conditions and on the Affymetrix spike-in dataset. Further, the feasibility of the novel statistic is confirmed in a practical research setting using data from an in-house cDNA microarray study of asthma-related gene expression changes. These results suggest that the procedure facilitates the selection of an appropriate test statistic for a given dataset without relying on a priori assumptions, which may bias the findings and their interpretation. Moreover, the general reproducibility-optimization procedure is not limited to detecting differential expression only but could be extended to a wide range of other applications as well.

  8. [Notes on vital statistics for the study of perinatal health].

    PubMed

    Juárez, Sol Pía

    2014-01-01

    Vital statistics, published by the National Statistics Institute in Spain, are a highly important source for the study of perinatal health nationwide. However, the process of data collection is not well-known and has implications both for the quality and interpretation of the epidemiological results derived from this source. The aim of this study was to present how the information is collected and some of the associated problems. This study is the result of an analysis of the methodological notes from the National Statistics Institute and first-hand information obtained from hospitals, the Central Civil Registry of Madrid, and the Madrid Institute for Statistics. Greater integration between these institutions is required to improve the quality of birth and stillbirth statistics. Copyright © 2014 SESPAS. Published by Elsevier Espana. All rights reserved.

  9. Bayesian Statistics and Uncertainty Quantification for Safety Boundary Analysis in Complex Systems

    NASA Technical Reports Server (NTRS)

    He, Yuning; Davies, Misty Dawn

    2014-01-01

    The analysis of a safety-critical system often requires detailed knowledge of safe regions and their high-dimensional non-linear boundaries. We present a statistical approach to iteratively detect and characterize the boundaries, which are provided as parameterized shape candidates. Using methods from uncertainty quantification and active learning, we incrementally construct a statistical model from only a few simulation runs and obtain statistically sound estimates of the shape parameters for safety boundaries.

  10. The Statistical Drake Equation

    NASA Astrophysics Data System (ADS)

    Maccone, Claudio

    2010-12-01

    function, apparently previously unknown and dubbed "Maccone distribution" by Paul Davies. DATA ENRICHMENT PRINCIPLE. It should be noticed that ANY positive number of random variables in the Statistical Drake Equation is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as long as more refined scientific knowledge about each factor will be known to the scientists. This capability to make room for more future factors in the statistical Drake equation, we call the "Data Enrichment Principle," and we regard it as the key to more profound future results in the fields of Astrobiology and SETI. Finally, a practical example is given of how our statistical Drake equation works numerically. We work out in detail the case, where each of the seven random variables is uniformly distributed around its own mean value and has a given standard deviation. For instance, the number of stars in the Galaxy is assumed to be uniformly distributed around (say) 350 billions with a standard deviation of (say) 1 billion. Then, the resulting lognormal distribution of N is computed numerically by virtue of a MathCad file that the author has written. This shows that the mean value of the lognormal random variable N is actually of the same order as the classical N given by the ordinary Drake equation, as one might expect from a good statistical generalization.
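
    The numerical claim, that the product of several independent, uniformly distributed Drake factors is approximately lognormal by the central limit theorem applied to its logarithm, is easy to check by simulation. The sketch below uses invented means and spreads for the seven factors; it is not the author's MathCad computation.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)

        # Seven Drake factors, each uniform around an illustrative mean with its own half-width.
        means   = np.array([3.5e11, 0.5, 2.0, 0.33, 0.01, 0.01, 1e-7])
        spreads = 0.2 * means

        n = 200_000
        factors = rng.uniform(means - spreads, means + spreads, size=(n, 7))
        N = factors.prod(axis=1)                   # the Drake product for each Monte Carlo draw

        # By the CLT applied to log N (a sum of independent log-factors), N should be close to lognormal.
        shape, loc, scale = stats.lognorm.fit(N, floc=0)
        ks = stats.kstest(np.log(N), "norm", args=(np.log(N).mean(), np.log(N).std()))
        print(f"mean N = {N.mean():.3e}, lognormal shape = {shape:.3f}, "
              f"KS of log N vs normal: {ks.statistic:.4f}")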

  11. "Good Citizen" Program.

    ERIC Educational Resources Information Center

    Placer Hills Union Elementary School District, Meadow Vista, CA.

    THE FOLLOWING IS THE FULL TEXT OF THIS DOCUMENT: The "Good Citizen" Program was developed for many reasons: to keep the campus clean, to reward students for improvement, to reward students for good deeds, to improve the total school climate, to reward students for excellence, and to offer staff members a method of reward for positive…

  12. Ordering statistics of four random walkers on a line

    NASA Astrophysics Data System (ADS)

    Helenbrook, Brian; ben-Avraham, Daniel

    2018-05-01

    We study the ordering statistics of four random walkers on the line, obtaining a much improved estimate for the long-time decay exponent of the probability that a particle leads to time t, P_lead(t) ~ t^(-0.91287850), and that a particle lags to time t (never assumes the lead), P_lag(t) ~ t^(-0.30763604). Exponents of several other ordering statistics for N = 4 walkers are obtained to eight-digit accuracy as well. The subtle correlations between n walkers that lag jointly, out of a field of N, are discussed: for N = 3 there are no correlations and P_lead(t) ~ P_lag(t)^2. In contrast, our results rule out the possibility that P_lead(t) ~ P_lag(t)^3 for N = 4, although the correlations in this borderline case are tiny.
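
    The eight-digit exponents quoted above come from specialised methods, but the quantity itself is easy to estimate crudely by simulation: the sketch below tracks, for four independent lattice walkers started together, the probability that one fixed walker has stayed in the (weak) lead up to time t, and reads off a rough decay exponent from a log-log fit. The trial count, tie handling, and fitting window are arbitrary choices.

        import numpy as np

        rng = np.random.default_rng(5)

        def p_lead(n_walkers=4, t_max=400, n_trials=4000):
            """Monte Carlo estimate of the probability that walker 0 has stayed in the (weak) lead up to each time."""
            steps = rng.choice([-1, 1], size=(n_trials, t_max, n_walkers))
            paths = steps.cumsum(axis=1)
            in_lead = paths[:, :, 0] >= paths[:, :, 1:].max(axis=2)   # weak lead: ties count as leading
            still_leading = np.cumprod(in_lead, axis=1).astype(bool)  # leading at every step so far
            return still_leading.mean(axis=0)

        p = p_lead()
        t = np.arange(1, len(p) + 1)
        # Crude log-log slope over late times; the paper reports P_lead(t) ~ t^-0.913 (to eight digits).
        slope = np.polyfit(np.log(t[50:]), np.log(p[50:] + 1e-12), 1)[0]
        print(f"estimated decay exponent ~ {slope:.2f}")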

  13. A Guerilla Guide to Common Problems in 'Neurostatistics': Essential Statistical Topics in Neuroscience.

    PubMed

    Smith, Paul F

    2017-01-01

    Effective inferential statistical analysis is essential for high quality studies in neuroscience. However, recently, neuroscience has been criticised for the poor use of experimental design and statistical analysis. Many of the statistical issues confronting neuroscience are similar to other areas of biology; however, there are some that occur more regularly in neuroscience studies. This review attempts to provide a succinct overview of some of the major issues that arise commonly in the analyses of neuroscience data. These include: the non-normal distribution of the data; inequality of variance between groups; extensive correlation in data for repeated measurements across time or space; excessive multiple testing; inadequate statistical power due to small sample sizes; pseudo-replication; and an over-emphasis on binary conclusions about statistical significance as opposed to effect sizes. Statistical analysis should be viewed as just another neuroscience tool, which is critical to the final outcome of the study. Therefore, it needs to be done well and it is a good idea to be proactive and seek help early, preferably before the study even begins.

  14. The ultrasound-enhanced bioscouring performance of four polygalacturonase enzymes obtained from rhizopus oryzae

    USDA-ARS?s Scientific Manuscript database

    An analytical and statistical method has been developed to measure the ultrasound-enhanced bioscouring performance of milligram quantities of endo- and exo-polygalacturonase enzymes obtained from Rhizopus oryzae fungi. UV-Vis spectrophotometric data and a general linear mixed models procedure indic...

  15. New heterogeneous test statistics for the unbalanced fixed-effect nested design.

    PubMed

    Guo, Jiin-Huarng; Billard, L; Luh, Wei-Ming

    2011-05-01

    When the underlying variances are unknown and/or unequal, using the conventional F test is problematic in the two-factor hierarchical data structure. Prompted by the approximate test statistics (Welch and Alexander-Govern methods), the authors develop four new heterogeneous test statistics to test factor A and factor B nested within A for the unbalanced fixed-effect two-stage nested design under variance heterogeneity. The actual significance levels and statistical power of the test statistics were compared in a simulation study. The results show that the proposed procedures maintain better Type I error rate control and have greater statistical power than those obtained by the conventional F test in various conditions. Therefore, the proposed test statistics are recommended in terms of robustness and ease of implementation. ©2010 The British Psychological Society.

  16. Pure human urine is a good fertiliser for cucumbers.

    PubMed

    Heinonen-Tanski, Helvi; Sjöblom, Annalena; Fabritius, Helena; Karinen, Päivi

    2007-01-01

    Human urine obtained from separating toilets was tested as a fertiliser for cultivation of outdoor cucumber (Cucumis sativus L.) in a Nordic climate. The urine used contained high amounts of nitrogen with some phosphorus and potassium, but numbers of enteric microorganisms were low even though urine had not been preserved before sampling. The cucumber yield after urine fertilisation was similar or slightly better than the yield obtained from control rows fertilised with commercial mineral fertiliser. None of the cucumbers contained any enteric microorganisms (coliforms, enterococci, coliphages and clostridia). In the taste assessment, 11 out of 20 persons could recognise which cucumber of three cucumbers was different but they did not prefer one over the other cucumber samples, since all of them were assessed as equally good.

  17. The crossing statistic: dealing with unknown errors in the dispersion of Type Ia supernovae

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shafieloo, Arman; Clifton, Timothy; Ferreira, Pedro, E-mail: arman@ewha.ac.kr, E-mail: tclifton@astro.ox.ac.uk, E-mail: p.ferreira1@physics.ox.ac.uk

    2011-08-01

    We propose a new statistic that has been designed to be used in situations where the intrinsic dispersion of a data set is not well known: The Crossing Statistic. This statistic is in general less sensitive than χ² to the intrinsic dispersion of the data, and hence allows us to make progress in distinguishing between different models using goodness of fit to the data even when the errors involved are poorly understood. The proposed statistic makes use of the shape and trends of a model's predictions in a quantifiable manner. It is applicable to a variety of circumstances, although we consider it to be especially well suited to the task of distinguishing between different cosmological models using type Ia supernovae. We show that this statistic can easily distinguish between different models in cases where the χ² statistic fails. We also show that the last mode of the Crossing Statistic is identical to χ², so that it can be considered as a generalization of χ².

  18. Perspective: chemical dynamics simulations of non-statistical reaction dynamics

    PubMed Central

    Ma, Xinyou; Hase, William L.

    2017-01-01

    Non-statistical chemical dynamics are exemplified by disagreements with the transition state (TS), RRKM and phase space theories of chemical kinetics and dynamics. The intrinsic reaction coordinate (IRC) is often used for the former two theories, and non-statistical dynamics arising from non-IRC dynamics are often important. In this perspective, non-statistical dynamics are discussed for chemical reactions, with results primarily obtained from chemical dynamics simulations and to a lesser extent from experiment. The non-statistical dynamical properties discussed are: post-TS dynamics, including potential energy surface bifurcations, product energy partitioning in unimolecular dissociation and avoiding exit-channel potential energy minima; non-RRKM unimolecular decomposition; non-IRC dynamics; direct mechanisms for bimolecular reactions with pre- and/or post-reaction potential energy minima; non-TS theory barrier recrossings; and roaming dynamics. This article is part of the themed issue ‘Theoretical and computational studies of non-equilibrium and non-statistical dynamics in the gas phase, in the condensed phase and at interfaces’. PMID:28320906

  19. A Simplified Algorithm for Statistical Investigation of Damage Spreading

    NASA Astrophysics Data System (ADS)

    Gecow, Andrzej

    2009-04-01

    To simulate the adaptive evolution of a complex system describing a living object or a human-developed project, a fitness should be defined on node states or network external outputs. Feedbacks lead to circular attractors of these states or outputs, which makes it difficult to define a fitness. The main statistical effects of the adaptive condition result from the tendency towards small change, and to appear they only require a statistically correct size of the damage initiated by an evolutionary change of the system. This observation allows us to cut loops of feedbacks and, in effect, to obtain a particular statistically correct state instead of a long circular attractor which, in the quenched model, is expected for a chaotic network with feedback. Defining fitness on such states is simple. We calculate only damaged nodes, and only once. Such an algorithm is optimal for the investigation of damage spreading, i.e. the statistical connection of the structural parameters of an initial change with the size of the resulting damage. It is a reversed-annealed method: functions and states (signals) may be randomly substituted, but connections are important and are preserved. The small damages important for adaptive evolution are correctly depicted, in comparison to the Derrida annealed approximation, which expects equilibrium levels for large networks. The algorithm indicates these levels correctly. The relevant program in Pascal, which executes the algorithm for a wide range of parameters, can be obtained from the author.

  20. The Computer Student Worksheet Based Mathematical Literacy for Statistics

    NASA Astrophysics Data System (ADS)

    Manoy, J. T.; Indarasati, N. A.

    2018-01-01

    The student worksheet is a teaching medium that can improve teaching activity in the classroom. Mathematical literacy indicators included in a student worksheet can help students apply the concept in daily life. The use of computers in learning also makes learning more environmentally friendly. This research used a developmental design, the Thiagarajan (Four-D) model. There are four stages in the Four-D model: define, design, develop, and disseminate. However, this research was carried out only up to the third (develop) stage. The computer-based student worksheet on mathematical literacy for statistics achieved good quality. The student worksheet meets the criteria if it achieves three aspects: validity, practicality, and effectiveness. The subjects in this research were the students at The 1st State Senior High School of Driyorejo, Gresik, grade eleven of The 5th Mathematics and Natural Sciences. The computer-based student worksheet on mathematical literacy for statistics achieved good quality across the validity, practicality, and effectiveness aspects. It achieved the validity aspect with an average of 3.79 (94.72%) and the practicality aspect with an average of 2.85 (71.43%). It also achieved the effectiveness aspect, with 94.74% of students reaching classical completeness and a 75% positive student response.

  1. Quantitative Comparison of Tandem Mass Spectra Obtained on Various Instruments

    NASA Astrophysics Data System (ADS)

    Bazsó, Fanni Laura; Ozohanics, Oliver; Schlosser, Gitta; Ludányi, Krisztina; Vékey, Károly; Drahos, László

    2016-08-01

    The similarity between two tandem mass spectra, which were measured on different instruments, was compared quantitatively using the similarity index (SI), defined as the dot product of the square root of peak intensities in the respective spectra. This function was found to be useful for comparing energy-dependent tandem mass spectra obtained on various instruments. Spectral comparisons show the similarity index in a 2D "heat map", indicating which collision energy combinations result in similar spectra, and how good this agreement is. The results and methodology can be used in the pharma industry to design experiments and equipment well suited for good reproducibility. We suggest that to get good long-term reproducibility, it is best to adjust the collision energy to yield a spectrum very similar to a reference spectrum. It is likely to yield better results than using the same tuning file, which, for example, does not take into account that contamination of the ion source due to extended use may influence instrument tuning. The methodology may be used to characterize energy dependence on various instrument types, to optimize instrumentation, and to study the influence or correlation between various experimental parameters.
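
    A minimal implementation of the similarity index as described, the dot product of square-rooted peak intensities of two aligned spectra, might look as follows. Normalising to unit length so that identical spectra give SI = 1, and assuming that peak matching onto a common m/z grid has already been done, are assumptions added here, not details from the paper.

        import numpy as np

        def similarity_index(spec_a, spec_b):
            """Dot product of the square-rooted, length-normalized peak intensities of two aligned spectra."""
            a = np.sqrt(np.asarray(spec_a, dtype=float))
            b = np.sqrt(np.asarray(spec_b, dtype=float))
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

        # Illustrative intensity vectors on a common m/z grid (peak matching/binning is assumed done).
        spectrum_1 = [100, 0, 40, 5, 0, 12]
        spectrum_2 = [ 90, 2, 35, 0, 0, 20]
        print(f"SI = {similarity_index(spectrum_1, spectrum_2):.3f}")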

  2. Significance levels for studies with correlated test statistics.

    PubMed

    Shi, Jianxin; Levinson, Douglas F; Whittemore, Alice S

    2008-07-01

    When testing large numbers of null hypotheses, one needs to assess the evidence against the global null hypothesis that none of the hypotheses is false. Such evidence typically is based on the test statistic of the largest magnitude, whose statistical significance is evaluated by permuting the sample units to simulate its null distribution. Efron (2007) has noted that correlation among the test statistics can induce substantial interstudy variation in the shapes of their histograms, which may cause misleading tail counts. Here, we show that permutation-based estimates of the overall significance level also can be misleading when the test statistics are correlated. We propose that such estimates be conditioned on a simple measure of the spread of the observed histogram, and we provide a method for obtaining conditional significance levels. We justify this conditioning using the conditionality principle described by Cox and Hinkley (1974). Application of the method to gene expression data illustrates the circumstances when conditional significance levels are needed.

  3. A Good Suit Beats a Good Idea.

    ERIC Educational Resources Information Center

    Machiavelli, Nick

    1992-01-01

    Inspired by Niccolo Machiavelli, this column offers beleaguered school executives advice on looking good, dressing well, losing weight, beating the proper enemy, and saying nothing. Administrators who follow these simple rules should have an easier life, jealous colleagues, well-tended gardens, and respectful board members. (MLH)

  4. Inverse statistics in the foreign exchange market

    NASA Astrophysics Data System (ADS)

    Jensen, M. H.; Johansen, A.; Petroni, F.; Simonsen, I.

    2004-09-01

    We investigate intra-day foreign exchange (FX) time series using the inverse statistic analysis developed by Simonsen et al. (Eur. Phys. J. 27 (2002) 583) and Jensen et al. (Physica A 324 (2003) 338). Specifically, we study the time-averaged distributions of waiting times needed to obtain a certain increase (decrease) ρ in the price of an investment. The analysis is performed for the Deutsche Mark (DM) against the US dollar for the full year of 1998, but similar results are obtained for the Japanese Yen against the US dollar. With high statistical significance, the presence of “resonance peaks” in the waiting time distributions is established. Such peaks are a consequence of the trading habits of the market participants, as they are not present in the corresponding tick (business) waiting time distributions. Furthermore, a new stylized fact is observed for the (normalized) waiting time distribution in the form of a power-law pdf. This result is achieved by rescaling the physical waiting time by the corresponding tick time, thereby partially removing scale-dependent features of the market activity.
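
    The core of the inverse-statistics analysis, the waiting time until the price has first gained a fixed amount ρ measured from every starting point, can be sketched in a few lines. The synthetic log-price series and the value of ρ below are placeholders for the tick data used in the study.

        import numpy as np

        rng = np.random.default_rng(6)

        # Synthetic log-price series standing in for intra-day FX quotes (illustrative only).
        log_price = np.cumsum(rng.normal(0.0, 1e-4, 20_000))

        def waiting_times(x, rho):
            """For each starting index, the number of ticks until the series has first risen by rho."""
            out = []
            for i in range(len(x) - 1):
                ahead = x[i + 1:]
                hit = np.argmax(ahead >= x[i] + rho)        # index of first crossing, 0 if never reached
                if ahead[hit] >= x[i] + rho:
                    out.append(hit + 1)
            return np.array(out)

        tau = waiting_times(log_price, rho=5e-4)
        print(f"{len(tau)} waiting times; most probable waiting time ~ {np.bincount(tau).argmax()} ticks")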

  5. Statistical downscaling of GCM simulations to streamflow using relevance vector machine

    NASA Astrophysics Data System (ADS)

    Ghosh, Subimal; Mujumdar, P. P.

    2008-01-01

    General circulation models (GCMs), the climate models often used in assessing the impact of climate change, operate on a coarse scale and thus the simulation results obtained from GCMs are not particularly useful in a comparatively smaller river basin scale hydrology. The article presents a methodology of statistical downscaling based on sparse Bayesian learning and Relevance Vector Machine (RVM) to model streamflow at river basin scale for monsoon period (June, July, August, September) using GCM simulated climatic variables. NCEP/NCAR reanalysis data have been used for training the model to establish a statistical relationship between streamflow and climatic variables. The relationship thus obtained is used to project the future streamflow from GCM simulations. The statistical methodology involves principal component analysis, fuzzy clustering and RVM. Different kernel functions are used for comparison purpose. The model is applied to Mahanadi river basin in India. The results obtained using RVM are compared with those of state-of-the-art Support Vector Machine (SVM) to present the advantages of RVMs over SVMs. A decreasing trend is observed for monsoon streamflow of Mahanadi due to high surface warming in future, with the CCSR/NIES GCM and B2 scenario.

  6. Appropriate Statistics for Determining Chance-Removed Interpractitioner Agreement.

    PubMed

    Popplewell, Michael; Reizes, John; Zaslawski, Chris

    2018-05-31

    Fleiss' Kappa (FK) has been commonly, but incorrectly, employed as the "standard" for evaluating chance-removed inter-rater agreement with ordinal data. This practice may lead to misleading conclusions in inter-rater agreement research. An example is presented that demonstrates the conditions where FK produces inappropriate results, compared with Gwet's AC2, which is proposed as a more appropriate statistic. A novel format for recording Chinese Medicine (CM) diagnoses, called the Diagnostic System of Oriental Medicine (DSOM), was used to record and compare patient diagnostic data, which, unlike the contemporary CM diagnostic format, allows agreement by chance to be considered when evaluating patient data obtained with unrestricted diagnostic options available to diagnosticians. Five CM practitioners diagnosed 42 subjects drawn from an open population. Subjects' diagnoses were recorded using the DSOM format. All the available data were initially used to evaluate agreement. Then, the subjects were sorted into three groups to demonstrate the effects of differing data marginality on the calculated chance-removed agreement. Agreement between the practitioners for each subject was evaluated with linearly weighted simple agreement, FK, and Gwet's AC2. In all cases, overall agreement was much lower with FK than with Gwet's AC2. Larger differences occurred when the data were more free marginal. Inter-rater agreement determined with the FK statistic is unlikely to be correct unless it can be shown that the data from which agreement is determined are, in fact, fixed marginal. It follows that results obtained on agreement between practitioners with FK are probably incorrect. It is shown that inter-rater agreement evaluated with the AC2 statistic is an appropriate measure when fixed marginal data are neither expected nor guaranteed. The AC2 statistic should be used as the standard statistical approach for determining agreement between practitioners.
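
    For reference, Fleiss' Kappa itself is straightforward to compute from a subjects-by-categories table of rating counts; the sketch below does so directly (Gwet's AC2 uses a different chance-agreement term and weighting, and is not reproduced here). The example table is invented.

        import numpy as np

        def fleiss_kappa(counts):
            """Fleiss' kappa from an (n_subjects x n_categories) table of rating counts."""
            counts = np.asarray(counts, dtype=float)
            n_raters = counts.sum(axis=1)[0]                 # assumes the same number of raters per subject
            p_cat = counts.sum(axis=0) / counts.sum()        # overall category proportions
            p_subject = (counts * (counts - 1)).sum(axis=1) / (n_raters * (n_raters - 1))
            p_bar, p_e = p_subject.mean(), (p_cat ** 2).sum()
            return (p_bar - p_e) / (1.0 - p_e)

        # Illustrative table: 6 subjects, 5 raters, 3 diagnostic categories.
        table = [[5, 0, 0],
                 [4, 1, 0],
                 [3, 2, 0],
                 [0, 5, 0],
                 [1, 1, 3],
                 [0, 0, 5]]
        print(f"Fleiss' kappa = {fleiss_kappa(table):.3f}")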

  7. [Subjective health and burden of disease in seniors: Overview of official statistics and public health reports].

    PubMed

    Bardehle, D

    2015-12-01

    There are different types of information on men's health in older age. A high morbidity burden is offset by subjective assessments of "very good" and "good" health by 52% of men over 65 years. The aim of this study is to assess the health situation of seniors from official publications and public health reports. How can the quality of life in our male population be positively influenced so that they can actively participate in society in old age? Information on the health of seniors and the burden of disease was taken from men's health reports and official publications from the Robert-Koch-Institute, the Federal Statistical Office, and the IHME Institute of the USA according to age groups and gender. The burden of disease in seniors is influenced by one's own health behavior and the social situation. The increase in life expectancy of seniors is characterized by longer life with chronic conditions. Official statistics indicate that about 50% of seniors are affected by disease or severe disability, while 50% assess their health status as "very good" or "good". Aging of the population requires diverse health promotion activities. In parallel with the inevitably increased multimorbidity in the elderly, maintaining and increasing physical fitness is required so that seniors have a positive "subjective health" or "wellbeing".

  8. The Relationship between Statistics Self-Efficacy, Statistics Anxiety, and Performance in an Introductory Graduate Statistics Course

    ERIC Educational Resources Information Center

    Schneider, William R.

    2011-01-01

    The purpose of this study was to determine the relationship between statistics self-efficacy, statistics anxiety, and performance in introductory graduate statistics courses. The study design compared two statistics self-efficacy measures developed by Finney and Schraw (2003), a statistics anxiety measure developed by Cruise and Wilkins (1980),…

  9. 2000 Iowa crash facts : a summary of motor vehicle crash statistics on Iowa roadways

    DOT National Transportation Integrated Search

    2000-01-01

    All statistics are gathered and calculated by the Iowa Department of Transportation's Office of Driver Services. National statistics are obtained from Traffic Safety Facts 2000, published by the U.S. Department of Transportation's National...

  10. Studies of concentration and temperature dependences of precipitation kinetics in iron-copper alloys using kinetic Monte Carlo and stochastic statistical simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khromov, K. Yu.; Vaks, V. G., E-mail: vaks@mbslab.kiae.ru; Zhuravlev, I. A.

    2013-02-15

    The previously developed ab initio model and the kinetic Monte Carlo method (KMCM) are used to simulate precipitation in a number of iron-copper alloys with different copper concentrations x and temperatures T. The same simulations are also made using an improved version of the previously suggested stochastic statistical method (SSM). The results obtained enable us to make a number of general conclusions about the dependences of the decomposition kinetics in Fe-Cu alloys on x and T. We also show that the SSM usually describes the precipitation kinetics in good agreement with the KMCM, and using the SSM in conjunction with the KMCM allows extending the KMC simulations to longer evolution times. The results of simulations seem to agree with available experimental data for Fe-Cu alloys within statistical errors of simulations and the scatter of experimental results. Comparison of simulation results with experiments for some multicomponent Fe-Cu-based alloys allows making certain conclusions about the influence of alloying elements in these alloys on the precipitation kinetics at different stages of evolution.

  11. An experimental study of the surface elevation probability distribution and statistics of wind-generated waves

    NASA Technical Reports Server (NTRS)

    Huang, N. E.; Long, S. R.

    1980-01-01

    Laboratory experiments were performed to measure the surface elevation probability density function and associated statistical properties for a wind-generated wave field. The laboratory data along with some limited field data were compared. The statistical properties of the surface elevation were processed for comparison with the results derived from the Longuet-Higgins (1963) theory. It is found that, even for the highly non-Gaussian cases, the distribution function proposed by Longuet-Higgins still gives good approximations.

  12. Near-exact distributions for the block equicorrelation and equivariance likelihood ratio test statistic

    NASA Astrophysics Data System (ADS)

    Coelho, Carlos A.; Marques, Filipe J.

    2013-09-01

    In this paper the authors combine the equicorrelation and equivariance test introduced by Wilks [13] with the likelihood ratio test (l.r.t.) for independence of groups of variables to obtain the l.r.t. of block equicorrelation and equivariance. This test or its single block version may find applications in many areas as in psychology, education, medicine, genetics and they are important "in many tests of multivariate analysis, e.g. in MANOVA, Profile Analysis, Growth Curve analysis, etc" [12, 9]. By decomposing the overall hypothesis into the hypotheses of independence of groups of variables and the hypothesis of equicorrelation and equivariance we are able to obtain the expressions for the overall l.r.t. statistic and its moments. From these we obtain a suitable factorization of the characteristic function (c.f.) of the logarithm of the l.r.t. statistic, which enables us to develop highly manageable and precise near-exact distributions for the test statistic.

  13. Statistical Analysis of Spectral Properties and Prosodic Parameters of Emotional Speech

    NASA Astrophysics Data System (ADS)

    Přibil, J.; Přibilová, A.

    2009-01-01

    The paper addresses reflection of microintonation and spectral properties in male and female acted emotional speech. Microintonation component of speech melody is analyzed regarding its spectral and statistical parameters. According to psychological research of emotional speech, different emotions are accompanied by different spectral noise. We control its amount by spectral flatness according to which the high frequency noise is mixed in voiced frames during cepstral speech synthesis. Our experiments are aimed at statistical analysis of cepstral coefficient values and ranges of spectral flatness in three emotions (joy, sadness, anger), and a neutral state for comparison. Calculated histograms of spectral flatness distribution are visually compared and modelled by Gamma probability distribution. Histograms of cepstral coefficient distribution are evaluated and compared using skewness and kurtosis. Achieved statistical results show good correlation comparing male and female voices for all emotional states portrayed by several Czech and Slovak professional actors.
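
    The descriptive steps mentioned, fitting a Gamma model to spectral-flatness values and summarising cepstral-coefficient histograms by skewness and kurtosis, can be sketched with SciPy as below; the random stand-in data merely illustrate the function calls and are not speech measurements.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)

        # Illustrative stand-ins: spectral flatness values in (0, 1] and one cepstral coefficient track.
        flatness = rng.beta(2.0, 5.0, 1000)
        cepstral_c1 = rng.normal(0.3, 0.1, 1000)

        # Model the spectral-flatness histogram with a Gamma distribution.
        a, loc, scale = stats.gamma.fit(flatness, floc=0)
        print(f"Gamma fit: shape={a:.2f}, scale={scale:.3f}")

        # Compare cepstral-coefficient distributions via skewness and kurtosis.
        print(f"skewness={stats.skew(cepstral_c1):.3f}, kurtosis={stats.kurtosis(cepstral_c1):.3f}")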

  14. Computing Science and Statistics: Volume 24. Graphics and Visualization

    DTIC Science & Technology

    1993-03-20

    Fragmentary OCR text from the scanned proceedings volume; the recoverable phrases describe a logistic-map example (when the parameter r is set to 3.569, the population eventually oscillates about 16 fixed values, with kneading bread dough as an analogy) and a light-hearted "fun statistics" talk likening regression analysis to a fortune cookie.

  15. Statistical properties of MHD fluctuations associated with high speed streams from HELIOS 2 observations

    NASA Technical Reports Server (NTRS)

    Bavassano, B.; Dobrowolny, H.; Fanfoni, G.; Mariani, F.; Ness, N. F.

    1981-01-01

    Helios 2 magnetic data were used to obtain several statistical properties of MHD fluctuations associated with the trailing edge of a given stream observed in different solar rotations. Eigenvalues and eigenvectors of the variance matrix, the total power, and the degree of compressibility of the fluctuations were derived and discussed both as a function of distance from the Sun and as a function of the frequency range included in the sample. The results obtained add new information to the picture of MHD turbulence in the solar wind. In particular, a dependence of the radial gradients of various statistical quantities on the frequency range is obtained.

  16. Assessing colour-dependent occupation statistics inferred from galaxy group catalogues

    NASA Astrophysics Data System (ADS)

    Campbell, Duncan; van den Bosch, Frank C.; Hearin, Andrew; Padmanabhan, Nikhil; Berlind, Andreas; Mo, H. J.; Tinker, Jeremy; Yang, Xiaohu

    2015-09-01

    We investigate the ability of current implementations of galaxy group finders to recover colour-dependent halo occupation statistics. To test the fidelity of group catalogue inferred statistics, we run three different group finders used in the literature over a mock that includes galaxy colours in a realistic manner. Overall, the resulting mock group catalogues are remarkably similar, and most colour-dependent statistics are recovered with reasonable accuracy. However, it is also clear that certain systematic errors arise as a consequence of correlated errors in group membership determination, central/satellite designation, and halo mass assignment. We introduce a new statistic, the halo transition probability (HTP), which captures the combined impact of all these errors. As a rule of thumb, errors tend to equalize the properties of distinct galaxy populations (i.e. red versus blue galaxies or centrals versus satellites), and to result in inferred occupation statistics that are more accurate for red galaxies than for blue galaxies. A statistic that is particularly poorly recovered from the group catalogues is the red fraction of central galaxies as a function of halo mass. Group finders do a good job in recovering galactic conformity, but also have a tendency to introduce weak conformity when none is present. We conclude that proper inference of colour-dependent statistics from group catalogues is best achieved using forward modelling (i.e. running group finders over mock data) or by implementing a correction scheme based on the HTP, as long as the latter is not too strongly model dependent.

  17. Statistical thermodynamics of a two-dimensional relativistic gas.

    PubMed

    Montakhab, Afshin; Ghodrat, Malihe; Barati, Mahmood

    2009-03-01

    In this paper we study a fully relativistic model of a two-dimensional hard-disk gas. This model avoids the general problems associated with relativistic particle collisions and is therefore an ideal system to study relativistic effects in statistical thermodynamics. We study this model using molecular-dynamics simulation, concentrating on the velocity distribution functions. We obtain results for the x and y components of velocity in the rest frame (Γ) as well as the moving frame (Γ′). Our results confirm that the Jüttner distribution is the correct generalization of the Maxwell-Boltzmann distribution. We obtain the same "temperature" parameter beta for both frames, consistent with a recent study of a limited one-dimensional model. We also address the controversial topic of temperature transformation. We show that while local thermal equilibrium holds in the moving frame, relying on statistical methods such as distribution functions or the equipartition theorem is ultimately inconclusive in deciding on a correct temperature transformation law (if any).

  18. Relationship between Graduate Students' Statistics Self-Efficacy, Statistics Anxiety, Attitude toward Statistics, and Social Support

    ERIC Educational Resources Information Center

    Perepiczka, Michelle; Chandler, Nichelle; Becerra, Michael

    2011-01-01

    Statistics plays an integral role in graduate programs. However, numerous intra- and interpersonal factors may lead to successful completion of needed coursework in this area. The authors examined the extent of the relationship between self-efficacy to learn statistics and statistics anxiety, attitude towards statistics, and social support of 166…

  19. The use and misuse of aircraft and missile RCS statistics

    NASA Astrophysics Data System (ADS)

    Bishop, Lee R.

    1991-07-01

    Both static and dynamic radar cross section (RCS) measurements are used for RCS predictions, but the static data are less complete than the dynamic. Integrated dynamic RCS data also have limitations for predicting radar detection performance. When raw static data are properly used, good first-order detection estimates are possible. The research to develop more-usable RCS statistics is reviewed, and windowing techniques for creating probability density functions from static RCS data are discussed.

  20. Statistical characterization of thermal plumes in turbulent thermal convection

    NASA Astrophysics Data System (ADS)

    Zhou, Sheng-Qi; Xie, Yi-Chao; Sun, Chao; Xia, Ke-Qing

    2016-09-01

    We report an experimental study on the statistical properties of the thermal plumes in turbulent thermal convection. A method has been proposed to extract the basic characteristics of thermal plumes from temporal temperature measurement inside the convection cell. It has been found that both plume amplitude A and cap width w , in a time domain, are approximately in the log-normal distribution. In particular, the normalized most probable front width is found to be a characteristic scale of thermal plumes, which is much larger than the thermal boundary layer thickness. Over a wide range of the Rayleigh number, the statistical characterizations of the thermal fluctuations of plumes, and the turbulent background, the plume front width and plume spacing have been discussed and compared with the theoretical predictions and morphological observations. For the most part good agreements have been found with the direct observations.

  1. The power and robustness of maximum LOD score statistics.

    PubMed

    Yoo, Y J; Mendell, N R

    2008-07-01

    The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.
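
    To make the idea of maximizing a LOD score over unknown genetic parameters concrete, here is a minimal sketch for the simplest two-point linkage setting, where the only parameter is the recombination fraction θ. This is not the multi-parameter (penetrance and phenocopy) maximization studied in the paper, and the recombinant counts in the example are hypothetical.

```python
import numpy as np

def lod_score(theta, n_recomb, n_total):
    """Two-point LOD score for recombination fraction theta, given
    n_recomb recombinant out of n_total informative meioses."""
    n_non = n_total - n_recomb
    return (n_recomb * np.log10(theta)
            + n_non * np.log10(1.0 - theta)
            - n_total * np.log10(0.5))

def max_lod(n_recomb, n_total, grid=None):
    """Maximize the LOD score over a grid of recombination fractions."""
    if grid is None:
        grid = np.linspace(0.001, 0.499, 499)
    scores = lod_score(grid, n_recomb, n_total)
    i = int(np.argmax(scores))
    return grid[i], scores[i]

if __name__ == "__main__":
    theta_hat, lod_max = max_lod(n_recomb=12, n_total=100)  # hypothetical counts
    print(f"theta_hat = {theta_hat:.3f}, max LOD = {lod_max:.2f}")
```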

  2. Statistical process control: a practical application for hospitals.

    PubMed

    VanderVeen, L M

    1992-01-01

    A six-step plan based on using statistics was designed to improve quality in the central processing and distribution department of a 223-bed hospital in Oakland, CA. This article describes how the plan was implemented sequentially, starting with the crucial first step of obtaining administrative support. The QI project succeeded in overcoming beginners' fear of statistics and in training both managers and staff to use inspection checklists, Pareto charts, cause-and-effect diagrams, and control charts. The best outcome of the program was the increased commitment to quality improvement by the members of the department.
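
    One of the tools listed above, the control chart, amounts to a small calculation. The sketch below computes the center line and 3-sigma limits of a p-chart (fraction nonconforming), as one possible example; the article does not specify which chart type the department used, and the daily counts and sample sizes are hypothetical.

```python
import numpy as np

def p_chart_limits(defect_counts, sample_sizes):
    """Center line and 3-sigma control limits for a p-chart
    (fraction nonconforming per inspected sample)."""
    defect_counts = np.asarray(defect_counts, dtype=float)
    sample_sizes = np.asarray(sample_sizes, dtype=float)
    p_bar = defect_counts.sum() / sample_sizes.sum()          # overall fraction nonconforming
    sigma = np.sqrt(p_bar * (1.0 - p_bar) / sample_sizes)
    ucl = p_bar + 3.0 * sigma
    lcl = np.clip(p_bar - 3.0 * sigma, 0.0, None)             # lower limit cannot be negative
    return p_bar, lcl, ucl

if __name__ == "__main__":
    counts = [4, 7, 3, 9, 5]            # nonconforming trays per day (hypothetical)
    sizes = [120, 118, 125, 122, 119]   # trays inspected per day (hypothetical)
    p_bar, lcl, ucl = p_chart_limits(counts, sizes)
    print("center line:", round(p_bar, 4))
    print("LCL:", np.round(lcl, 4))
    print("UCL:", np.round(ucl, 4))
```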

  3. Review of a bituminous concrete statistical specification : final report.

    DOT National Transportation Integrated Search

    1971-01-01

    The statistically oriented specification for bituminous concrete production reviewed in this report was used as a basis for acceptance of more than one million tons of bituminous concrete in 1970. Data obtained from this system were analyzed for grad...

  4. Good Concrete Activity Is Good Mental Activity

    ERIC Educational Resources Information Center

    McDonough, Andrea

    2016-01-01

    Early years mathematics classrooms can be colourful, exciting, and challenging places of learning. Andrea McDonough and fellow teachers have noticed that some students make good decisions about using materials to assist their problem solving, but this is not always the case. These experiences led her to ask the following questions: (1) Are…

  5. Teaching Students to Use Summary Statistics and Graphics to Clean and Analyze Data

    ERIC Educational Resources Information Center

    Holcomb, John; Spalsbury, Angela

    2005-01-01

    Textbooks and websites today abound with real data. One neglected issue is that statistical investigations often require a good deal of "cleaning" to ready data for analysis. The purpose of this dataset and exercise is to teach students to use exploratory tools to identify erroneous observations. This article discusses the merits of such…

  6. Effect of the depreciation of public goods in spatial public goods games

    NASA Astrophysics Data System (ADS)

    Shi, Dong-Mei; Zhuang, Yong; Wang, Bing-Hong

    2012-02-01

    In this work, the depreciation effect of public goods is considered in the public goods games, which is realized by rescaling the multiplication factor r of each group as r′ = r·f^β (β ≥ 0), where f is the fraction of cooperators in the group. It is assumed that each individual enjoys the full profit r of the public goods if all the players of this group are cooperators. Otherwise, the value of the public goods is reduced to r′. It is found that, compared with the original version (β = 0), the emergence of cooperation is remarkably promoted for β > 0, and there exist intermediate values of β inducing the best cooperation. In particular, there exists a range of β inducing the highest cooperative level, and this range of β broadens as r increases. It is further shown that the variation of cooperator density with noise is closely related to the values of β and r, and that cooperation at an intermediate value of β = 1.0 is most tolerant to noise.
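
    A minimal sketch of the payoff rule implied by the abstract: each cooperator contributes one unit, the pot is multiplied by the depreciated factor and shared equally within the group. The rescaling r_eff = r * (n_coop / group_size)**beta is reconstructed from the description (full profit r when every group member cooperates), so the functional form and the parameter values used here are assumptions rather than the paper's exact specification.

```python
def group_payoffs(n_coop, group_size, r, beta):
    """Payoffs in one public-goods group with depreciation:
    effective factor r_eff = r * (n_coop / group_size)**beta, which reduces to r
    when all members cooperate. Each cooperator contributes 1 unit; the enhanced
    pot is shared equally among all group members."""
    r_eff = r * (n_coop / group_size) ** beta
    share = r_eff * n_coop / group_size
    payoff_cooperator = share - 1.0   # cooperators pay the unit contribution
    payoff_defector = share           # defectors free-ride
    return payoff_cooperator, payoff_defector

if __name__ == "__main__":
    for beta in (0.0, 1.0, 2.0):
        pc, pd = group_payoffs(n_coop=3, group_size=5, r=4.0, beta=beta)
        print(f"beta={beta}: cooperator {pc:.3f}, defector {pd:.3f}")
```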

  7. Condensate statistics in interacting and ideal dilute bose gases

    PubMed

    Kocharovsky; Kocharovsky; Scully

    2000-03-13

    We obtain analytical formulas for the statistics, in particular, for the characteristic function and all cumulants, of the Bose-Einstein condensate in dilute weakly interacting and ideal equilibrium gases in the canonical ensemble via the particle-number-conserving operator formalism of Girardeau and Arnowitt. We prove that the ground-state occupation statistics is not Gaussian even in the thermodynamic limit. We calculate the effect of Bogoliubov coupling on suppression of ground-state occupation fluctuations and show that they are governed by a pair-correlation, squeezing mechanism.

  8. DTI segmentation by statistical surface evolution.

    PubMed

    Lenglet, Christophe; Rousson, Mikaël; Deriche, Rachid

    2006-06-01

    We address the problem of the segmentation of cerebral white matter structures from diffusion tensor images (DTI). A DTI produces, from a set of diffusion-weighted MR images, tensor-valued images where each voxel is assigned a 3 x 3 symmetric, positive-definite matrix. This second-order tensor is simply the covariance matrix of a local Gaussian process, with zero mean, modeling the average motion of water molecules. As we will show in this paper, the definition of a dissimilarity measure and statistics between such quantities is a nontrivial task which must be tackled carefully. We claim and demonstrate that, by using the theoretically well-founded differential geometrical properties of the manifold of multivariate normal distributions, it is possible to improve the quality of the segmentation results obtained with other dissimilarity measures such as the Euclidean distance or the Kullback-Leibler divergence. The main goal of this paper is to prove that the choice of the probability metric, i.e., the dissimilarity measure, has a deep impact on the tensor statistics and, hence, on the achieved results. We introduce a variational formulation, in the level-set framework, to estimate the optimal segmentation of a DTI according to the following hypothesis: diffusion tensors exhibit a Gaussian distribution in the different partitions. We must also respect the geometric constraints imposed by the interfaces existing among the cerebral structures and detected by the gradient of the DTI. We show how to express all the statistical quantities for the different probability metrics. We validate and compare the results obtained on various synthetic data sets, a biological rat spinal cord phantom, and human brain DTIs.
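
    To illustrate why the choice of dissimilarity measure between diffusion tensors matters, the sketch below contrasts the plain Euclidean (Frobenius) distance with the affine-invariant geodesic distance on the manifold of symmetric positive-definite matrices, one standard geometry-aware alternative. It is a generic illustration rather than the paper's exact formulation, and the example tensors are made up.

```python
import numpy as np
from scipy.linalg import sqrtm, logm

def euclidean_distance(a, b):
    """Frobenius (Euclidean) distance between two diffusion tensors."""
    return np.linalg.norm(a - b, ord="fro")

def affine_invariant_distance(a, b):
    """Geodesic distance on the SPD manifold: d(A, B) = ||log(A^(-1/2) B A^(-1/2))||_F."""
    a_inv_sqrt = np.linalg.inv(sqrtm(a))
    middle = a_inv_sqrt @ b @ a_inv_sqrt
    return np.linalg.norm(logm(middle), ord="fro")

if __name__ == "__main__":
    a = np.diag([3.0, 1.0, 1.0])   # strongly anisotropic tensor (hypothetical)
    b = np.diag([1.0, 1.0, 0.5])   # nearly isotropic tensor (hypothetical)
    print("Euclidean distance      :", round(float(euclidean_distance(a, b)), 4))
    print("Affine-invariant distance:", round(float(affine_invariant_distance(a, b)), 4))
```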

  9. Statistical analysis of the magnetization signatures of impact basins

    NASA Astrophysics Data System (ADS)

    Gabasova, L. R.; Wieczorek, M. A.

    2017-09-01

    We quantify the magnetic signatures of the largest lunar impact basins using recent mission data and robust statistical bounds, and obtain an early activity timeline for the lunar core dynamo which appears to peak earlier than indicated by Apollo paleointensity measurements.

  10. Statistical characterization of portal images and noise from portal imaging systems.

    PubMed

    González-López, Antonio; Morales-Sánchez, Juan; Verdú-Monedero, Rafael; Larrey-Ruiz, Jorge

    2013-06-01

    In this paper, we consider the statistical characteristics of the so-called portal images, which are acquired prior to the radiotherapy treatment, as well as the noise present in portal imaging systems, in order to analyze whether the well-known noise and image features of other image modalities, such as natural images, can be found in the portal imaging modality. The study is carried out in the spatial image domain, in the Fourier domain, and finally in the wavelet domain. The probability density of the noise in the spatial image domain, the power spectral densities of the image and noise, and the marginal, joint, and conditional statistical distributions of the wavelet coefficients are estimated. Moreover, the statistical dependencies between noise and signal are investigated. The obtained results are compared with practical and useful references, like the characteristics of natural images and white noise. Finally, we discuss the implications of the results obtained for several noise reduction methods that operate in the wavelet domain.

  11. α -induced reactions on 115In: Cross section measurements and statistical model analysis

    NASA Astrophysics Data System (ADS)

    Kiss, G. G.; Szücs, T.; Mohr, P.; Török, Zs.; Huszánk, R.; Gyürky, Gy.; Fülöp, Zs.

    2018-05-01

    Background: α-nucleus optical potentials are basic ingredients of statistical model calculations used in nucleosynthesis simulations. While the nucleon+nucleus optical potential is fairly well known, for the α+nucleus optical potential several different parameter sets exist and large deviations, reaching sometimes even an order of magnitude, are found between the cross section predictions calculated using different parameter sets. Purpose: A measurement of the radiative α-capture and the α-induced reaction cross sections on the nucleus 115In at low energies allows a stringent test of statistical model predictions. Since experimental data are scarce in this mass region, this measurement can be an important input to test the global applicability of α+nucleus optical model potentials and further ingredients of the statistical model. Methods: The reaction cross sections were measured by means of the activation method. The produced activities were determined by off-line detection of the γ rays and characteristic x rays emitted during the electron capture decay of the produced Sb isotopes. The 115In(α,γ)119Sb and 115In(α,n)118mSb reaction cross sections were measured between Ec.m. = 8.83 and 15.58 MeV, and the 115In(α,n)118gSb reaction was studied between Ec.m. = 11.10 and 15.58 MeV. The theoretical analysis was performed within the statistical model. Results: The simultaneous measurement of the (α,γ) and (α,n) cross sections allowed us to determine a best-fit combination of all parameters for the statistical model. The α+nucleus optical potential is identified as the most important input for the statistical model. The best fit is obtained for the new Atomki-V1 potential, and good reproduction of the experimental data is also achieved for the first version of the Demetriou potentials and the simple McFadden-Satchler potential. The nucleon optical potential, the γ-ray strength function, and the level density parametrization are also

  12. APA's Learning Objectives for Research Methods and Statistics in Practice: A Multimethod Analysis

    ERIC Educational Resources Information Center

    Tomcho, Thomas J.; Rice, Diana; Foels, Rob; Folmsbee, Leah; Vladescu, Jason; Lissman, Rachel; Matulewicz, Ryan; Bopp, Kara

    2009-01-01

    Research methods and statistics courses constitute a core undergraduate psychology requirement. We analyzed course syllabi and faculty self-reported coverage of both research methods and statistics course learning objectives to assess the concordance with APA's learning objectives (American Psychological Association, 2007). We obtained a sample of…

  13. A Guerilla Guide to Common Problems in ‘Neurostatistics’: Essential Statistical Topics in Neuroscience

    PubMed Central

    Smith, Paul F.

    2017-01-01

    Effective inferential statistical analysis is essential for high quality studies in neuroscience. However, recently, neuroscience has been criticised for the poor use of experimental design and statistical analysis. Many of the statistical issues confronting neuroscience are similar to other areas of biology; however, there are some that occur more regularly in neuroscience studies. This review attempts to provide a succinct overview of some of the major issues that arise commonly in the analyses of neuroscience data. These include: the non-normal distribution of the data; inequality of variance between groups; extensive correlation in data for repeated measurements across time or space; excessive multiple testing; inadequate statistical power due to small sample sizes; pseudo-replication; and an over-emphasis on binary conclusions about statistical significance as opposed to effect sizes. Statistical analysis should be viewed as just another neuroscience tool, which is critical to the final outcome of the study. Therefore, it needs to be done well and it is a good idea to be proactive and seek help early, preferably before the study even begins. PMID:29371855

  14. CHOICE OF INDICATOR DETERMINES THE SIGNIFICANCE AND RISK OBTAINED FROM THE STATISTICAL ASSOCIATION BETWEEN FINE PARTICULATE MATTER MASS AND CARDIOVASCULAR MORTALITY

    EPA Science Inventory

    Minor changes in the indicator used to measure fine PM, which cause only modest changes in mass concentrations, can lead to dramatic changes in the statistical relationship of fine PM mass with cardiovascular mortality. An epidemiologic study in Phoenix (Mar et al., 2000), augme...

  15. Harmonic statistics

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2017-05-01

    The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, but yet they fall short in their 'public relations' for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford's law, and 1/f noise.

  16. Reanalysis Intercomparison on a Surface Wind Statistical Downscaling Exercise over Northeastern North America.

    NASA Astrophysics Data System (ADS)

    Lucio-Eceiza, Etor E.; Fidel González-Rouco, J.; Navarro, Jorge; García-Bustamante, Elena; Beltrami, Hugo; Rojas-Labanda, Cristina

    2017-04-01

    The area of North Eastern North America is located in a privileged position for the study of wind behaviour, as it lies within the track of many of the extratropical cyclones that travel that half of the continent. During the winter season the cyclonic activity and wind intensity are higher in the region, offering a great opportunity to analyse the relationships of the surface wind field with various large-scale configurations. The analysis of the wind behaviour is conducted via a statistical downscaling method based on Canonical Correlation Analysis (CCA). This methodology exploits the relationships among the main modes of circulation over the North Atlantic and Pacific sectors and the behaviour of an observational surface wind database. For this exercise, various predictor variables have been selected (surface wind, SLP, geopotential height at 850 and 500 hPa, and thermal thickness between these two levels), obtained from all the global reanalysis products available to date. Our predictand field consists of an observational surface wind dataset with 525 sites distributed over North Eastern North America that spans a period of about 60 years (1953-2010). These data have been previously subjected to an exhaustive quality control process. A sensitivity analysis of the methodology to different parameter configurations has been carried out, such as reanalysis product, window size, predictor variables, number of retained EOF and CCA modes, and cross-validation subset (to test the robustness of the method). An evaluation of the predictive skill of the wind estimations has also been conducted. Overall, the methodology offers a good representation of the wind variability, which is very consistent across all the reanalysis products. The wind directly obtained from the reanalyses offers a better temporal correlation but a larger range and, in many cases, a worse representation of the local variability. The long observational period has also permitted the study of intra to

  17. Seeing is believing: good graphic design principles for medical research.

    PubMed

    Duke, Susan P; Bancken, Fabrice; Crowe, Brenda; Soukup, Mat; Botsis, Taxiarchis; Forshee, Richard

    2015-09-30

    Have you noticed when you browse a book, journal, study report, or product label how your eye is drawn to figures more than to words and tables? Statistical graphs are powerful ways to transparently and succinctly communicate the key points of medical research. Furthermore, the graphic design itself adds to the clarity of the messages in the data. The goal of this paper is to provide a mechanism for selecting the appropriate graph to thoughtfully construct quality deliverables using good graphic design principles. Examples are motivated by the efforts of a Safety Graphics Working Group that consisted of scientists from the pharmaceutical industry, Food and Drug Administration, and academic institutions. Copyright © 2015 John Wiley & Sons, Ltd.

  18. The Application of FT-IR Spectroscopy for Quality Control of Flours Obtained from Polish Producers

    PubMed Central

    Ceglińska, Alicja; Reder, Magdalena; Ciemniewska-Żytkiewicz, Hanna

    2017-01-01

    Samples of wheat, spelt, rye, and triticale flours produced by different Polish mills were studied by both classic chemical methods and FT-IR MIR spectroscopy. An attempt was made to statistically correlate FT-IR spectral data with reference data with regard to content of various components, for example, proteins, fats, ash, and fatty acids, as well as properties such as moisture, falling number, and energetic value. This correlation resulted in calibrated and validated statistical models for versatile evaluation of unknown flour samples. The calibration data set was used to construct calibration models with use of the CSR and PLS methods with the leave-one-out cross-validation technique. The calibrated models were validated with a validation data set. The results obtained confirmed that application of statistical models based on MIR spectral data is a robust, accurate, precise, rapid, inexpensive, and convenient methodology for determination of flour characteristics, as well as for detection of the content of selected flour ingredients. The obtained models' characteristics were as follows: R2 = 0.97, PRESS = 2.14; R2 = 0.96, PRESS = 0.69; R2 = 0.95, PRESS = 1.27; R2 = 0.94, PRESS = 0.76, for content of proteins, lipids, ash, and moisture level, respectively. The best results of the CSR models were obtained for protein, ash, and crude fat (R2 = 0.86, 0.82, and 0.78, respectively). PMID:28243483

  19. A statistical spatial power spectrum of the Earth's lithospheric magnetic field

    NASA Astrophysics Data System (ADS)

    Thébault, E.; Vervelidou, F.

    2015-05-01

    The magnetic field of the Earth's lithosphere arises from rock magnetization contrasts that were shaped over geological times. The field can be described mathematically in spherical harmonics or with distributions of magnetization. We exploit this dual representation and assume that the lithospheric field is induced by spatially varying susceptibility values within a shell of constant thickness. By introducing a statistical assumption about the power spectrum of the susceptibility, we then derive a statistical expression for the spatial power spectrum of the crustal magnetic field for spatial scales ranging from 60 to 2500 km. This expression depends on the mean induced magnetization, the thickness of the shell, and a power law exponent for the power spectrum of the susceptibility. We test the relevance of this form with a misfit analysis against the observational NGDC-720 lithospheric magnetic field model power spectrum. This allows us to estimate, at the 95 per cent level, a mean global apparent induced magnetization value between 0.3 and 0.6 A m-1, a mean magnetic crustal thickness value between 23 and 30 km, and a root mean square field value between 190 and 205 nT. These estimates are in good agreement with independent models of the crustal magnetization and of the seismic crustal thickness. We carry out the same analysis in the continental and oceanic domains separately. We complement the misfit analyses with a Kolmogorov-Smirnov goodness-of-fit test and conclude that, in each case, the observed power spectrum can be a sample of the statistical one.

  20. Use of Statistical Analyses in the Ophthalmic Literature

    PubMed Central

    Lisboa, Renato; Meira-Freitas, Daniel; Tatham, Andrew J.; Marvasti, Amir H.; Sharpsten, Lucie; Medeiros, Felipe A.

    2014-01-01

    Purpose: To identify the most commonly used statistical analyses in the ophthalmic literature and to determine the likely gain in comprehension of the literature that readers could expect if they were to sequentially add knowledge of more advanced techniques to their statistical repertoire. Design: Cross-sectional study. Methods: All articles published from January 2012 to December 2012 in Ophthalmology, American Journal of Ophthalmology and Archives of Ophthalmology were reviewed. A total of 780 peer-reviewed articles were included. Two reviewers examined each article and assigned categories to each one depending on the type of statistical analyses used. Discrepancies between reviewers were resolved by consensus. Main Outcome Measures: Total number and percentage of articles containing each category of statistical analysis were obtained. Additionally, we estimated the accumulated number and percentage of articles that a reader would be expected to be able to interpret depending on their statistical repertoire. Results: Readers with little or no statistical knowledge would be expected to be able to interpret the statistical methods presented in only 20.8% of articles. In order to understand more than half (51.4%) of the articles published, readers were expected to be familiar with at least 15 different statistical methods. Knowledge of 21 categories of statistical methods was necessary to comprehend 70.9% of articles, while knowledge of more than 29 categories was necessary to comprehend more than 90% of articles. Articles in the retina and glaucoma subspecialties showed a tendency to use more complex analysis when compared to cornea. Conclusions: Readers of clinical journals in ophthalmology need to have substantial knowledge of statistical methodology to understand the results of published studies in the literature. The frequency of use of complex statistical analyses also indicates that those involved in the editorial peer-review process must have sound statistical

  1. Forest statistics for Southeast Texas counties - 1986

    Treesearch

    William H. McWilliams; Daniel F. Bertelson

    1986-01-01

    These tables were derived from data obtained during a 1986 inventory of 22 counties comprising the Southeast Unit of Texas (fig. 1). Grimes, Leon, Madison, and Waller counties have been added to the Southeastern Unit since the previous inventory of 1975. All comparisons of the 1975 and 1986 forest statistics made in this Bulletin account for this change. The data on...

  2. Infinitely divisible cascades to model the statistics of natural images.

    PubMed

    Chainais, Pierre

    2007-12-01

    We propose to model the statistics of natural images using the large class of stochastic processes called Infinitely Divisible Cascades (IDC). IDC were first introduced in one dimension to provide multifractal time series modeling the so-called intermittency phenomenon in hydrodynamical turbulence. We have extended the definition of scalar infinitely divisible cascades from 1 to N dimensions and commented on the relevance of such a model to fully developed turbulence in [1]. In this article, we focus on the particular two-dimensional case. IDC appear as good candidates to model the statistics of natural images. They share most of their usual properties and appear to be consistent with several independent theoretical and experimental approaches in the literature. We point out the interest of IDC for applications to procedural texture synthesis.

  3. Application of an Online Reference for Reviewing Basic Statistical Principles of Operating Room Management

    ERIC Educational Resources Information Center

    Dexter, Franklin; Masursky, Danielle; Wachtel, Ruth E.; Nussmeier, Nancy A.

    2010-01-01

    Operating room (OR) management differs from clinical anesthesia in that statistical literacy is needed daily to make good decisions. Two of the authors teach a course in operations research for surgical services to anesthesiologists, anesthesia residents, OR nursing directors, hospital administration students, and analysts to provide them with the…

  4. Nonequilibrium Statistical Operator Method and Generalized Kinetic Equations

    NASA Astrophysics Data System (ADS)

    Kuzemsky, A. L.

    2018-01-01

    We consider some principal problems of nonequilibrium statistical thermodynamics in the framework of the Zubarev nonequilibrium statistical operator approach. We present a brief comparative analysis of some approaches to describing irreversible processes based on the concept of nonequilibrium Gibbs ensembles and their applicability to describing nonequilibrium processes. We discuss the derivation of generalized kinetic equations for a system in a heat bath. We obtain and analyze a damped Schrödinger-type equation for a dynamical system in a heat bath. We study the dynamical behavior of a particle in a medium taking the dissipation effects into account. We consider the scattering problem for neutrons in a nonequilibrium medium and derive a generalized Van Hove formula. We show that the nonequilibrium statistical operator method is an effective, convenient tool for describing irreversible processes in condensed matter.

  5. Reconsidering the "Good Divorce"

    PubMed

    Amato, Paul R; Kane, Jennifer B; James, Spencer

    2011-12-01

    This study attempted to assess the notion that a "good divorce" protects children from the potential negative consequences of marital dissolution. A cluster analysis of data on postdivorce parenting from 944 families resulted in three groups: cooperative coparenting, parallel parenting, and single parenting. Children in the cooperative coparenting (good divorce) cluster had the smallest number of behavior problems and the closest ties to their fathers. Nevertheless, children in this cluster did not score significantly better than other children on 10 additional outcomes. These findings provide only modest support for the good divorce hypothesis.

  6. Innovative Formulation Combining Al, Zr and Si Precursors to Obtain Anticorrosion Hybrid Sol-Gel Coating.

    PubMed

    Genet, Clément; Menu, Marie-Joëlle; Gavard, Olivier; Ansart, Florence; Gressier, Marie; Montpellaz, Robin

    2018-05-10

    The aim of our study is to improve the corrosion resistance of an aluminium alloy with an Organic-Inorganic Hybrid (OIH) sol-gel coating. Coatings are obtained from an unusual formulation mixing the precursors glycidoxypropyltrimethoxysilane (GPTMS), zirconium (IV) propoxide (TPOZ) and aluminium tri-sec-butoxide (ASB). This formulation was characterized and compared with the sol formulations GPTMS/TPOZ and GPTMS/ASB. In each formulation, a corrosion inhibitor, cerium (III) nitrate hexahydrate, is employed to improve the corrosion performance. Coatings obtained from the sol based on GPTMS/TPOZ/ASB show good anti-corrosion performance, with Natural Salt Spray (NSS) resistance of 500 h for a thickness lower than 4 µm. Contact angle measurements showed hydrophobic coating behaviour. To understand these performances, nuclear magnetic resonance (NMR) analyses were performed; the results make the condensation of the sol-gel coating evident and are in very good agreement with previous results.

  7. Linear and Order Statistics Combiners for Pattern Classification

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep; Lau, Sonie (Technical Monitor)

    2001-01-01

    Several researchers have experimentally shown that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This chapter provides an analytical framework to quantify the improvements in classification results due to combining. The results apply to both linear combiners and order statistics combiners. We first show that, to a first-order approximation, the error rate obtained over and above the Bayes error rate is directly proportional to the variance of the actual decision boundaries around the Bayes optimum boundary. Combining classifiers in output space reduces this variance, and hence reduces the 'added' error. If N unbiased classifiers are combined by simple averaging, the added error rate can be reduced by a factor of N if the individual errors in approximating the decision boundaries are uncorrelated. Expressions are then derived for linear combiners which are biased or correlated, and the effect of output correlations on ensemble performance is quantified. For order-statistics-based non-linear combiners, we derive expressions that indicate how much the median, the maximum, and in general the i-th order statistic can improve classifier performance. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space. Experimental results on several public domain data sets are provided to illustrate the benefits of combining and to support the analytical results.
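
    The distinction between the two combiner families is easy to state in code: a linear combiner averages the per-class scores across classifiers, while an order-statistics combiner takes, for example, the per-class median. The sketch below is a generic illustration with random scores and does not reproduce the chapter's error-rate analysis.

```python
import numpy as np

def combine_mean(scores):
    """Linear combiner: average the class scores of the individual classifiers.
    `scores` has shape (n_classifiers, n_samples, n_classes)."""
    return np.mean(scores, axis=0)

def combine_order_statistic(scores, which="median"):
    """Order-statistics combiner: take the median (or max/min) score per class."""
    ops = {"median": np.median, "max": np.max, "min": np.min}
    return ops[which](scores, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    scores = rng.random((5, 4, 3))               # 5 classifiers, 4 samples, 3 classes (synthetic)
    for name, combined in (("mean", combine_mean(scores)),
                           ("median", combine_order_statistic(scores))):
        print(name, "predictions:", np.argmax(combined, axis=1))
```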

  8. Statistics Anxiety and Business Statistics: The International Student

    ERIC Educational Resources Information Center

    Bell, James A.

    2008-01-01

    Does the international student suffer from statistics anxiety? To investigate this, the Statistics Anxiety Rating Scale (STARS) was administered to sixty-six beginning statistics students, including twelve international students and fifty-four domestic students. Due to the small number of international students, nonparametric methods were used to…

  9. Hospice in Assisted Living: Promoting Good Quality Care at End of Life

    ERIC Educational Resources Information Center

    Cartwright, Juliana C.; Miller, Lois; Volpin, Miriam

    2009-01-01

    Purpose: The purpose of this study was to describe good quality care at the end of life (EOL) for hospice-enrolled residents in assisted living facilities (ALFs). Design and Methods: A qualitative descriptive design was used to obtain detailed descriptions of EOL care provided by ALF medication aides, caregivers, nurses, and hospice nurses in…

  10. Statistical analysis of low level atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Tieleman, H. W.; Chen, W. W. L.

    1974-01-01

    The statistical properties of low-level wind-turbulence data were obtained with the model 1080 total vector anemometer and the model 1296 dual split-film anemometer, both manufactured by Thermo Systems Incorporated. The data obtained from the above fast-response probes were compared with the results obtained from a pair of Gill propeller anemometers. The digitized time series representing the three velocity components and the temperature were each divided into a number of blocks, the length of which depended on the lowest frequency of interest and also on the storage capacity of the available computer. A moving-average and differencing high-pass filter was used to remove the trend and the low frequency components in the time series. The calculated results for each of the anemometers used are represented in graphical or tabulated form.

  11. Velocity statistics of the Nagel-Schreckenberg model

    NASA Astrophysics Data System (ADS)

    Bain, Nicolas; Emig, Thorsten; Ulm, Franz-Josef; Schreckenberg, Michael

    2016-02-01

    The statistics of velocities in the cellular automaton model of Nagel and Schreckenberg for traffic are studied. From numerical simulations, we obtain the probability distribution function (PDF) for vehicle velocities and the velocity-velocity (vv) covariance function. We identify the probability to find a standing vehicle as a potential order parameter that signals nicely the transition between free and congested flow for a sufficiently large number of velocity states. Our results for the vv covariance function resemble features of a second-order phase transition. We develop a 3-body approximation that allows us to relate the PDFs for velocities and headways. Using this relation, an approximation to the velocity PDF is obtained from the headway PDF observed in simulations. We find a remarkable agreement between this approximation and the velocity PDF obtained from simulations.
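
    For readers unfamiliar with the model, the sketch below implements the four standard Nagel-Schreckenberg update rules (acceleration, gap-limited braking, random slowdown, movement) on a ring road and estimates the velocity PDF together with the probability of a standing vehicle, the quantity used above as a potential order parameter. The road length, car number, v_max and slowdown probability are arbitrary choices, not the settings of the paper.

```python
import numpy as np

def nasch_step(pos, vel, road_length, v_max=5, p_slow=0.3, rng=None):
    """One parallel update of the Nagel-Schreckenberg cellular automaton on a ring."""
    if rng is None:
        rng = np.random.default_rng()
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    gaps = (np.roll(pos, -1) - pos - 1) % road_length     # empty cells to the car ahead
    vel = np.minimum(vel + 1, v_max)                       # 1. accelerate
    vel = np.minimum(vel, gaps)                            # 2. brake to avoid collisions
    slow = rng.random(len(vel)) < p_slow
    vel = np.where(slow, np.maximum(vel - 1, 0), vel)      # 3. random slowdown
    pos = (pos + vel) % road_length                        # 4. move
    return pos, vel

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    road_length, n_cars = 1000, 150
    pos = rng.choice(road_length, size=n_cars, replace=False)
    vel = np.zeros(n_cars, dtype=int)
    samples = []
    for t in range(2000):
        pos, vel = nasch_step(pos, vel, road_length, rng=rng)
        if t >= 500:                                       # discard the transient
            samples.append(vel.copy())
    v = np.concatenate(samples)
    pdf = np.bincount(v, minlength=6) / v.size
    print("velocity PDF:", np.round(pdf, 3), " P(v=0):", round(float(pdf[0]), 3))
```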

  12. Velocity statistics of the Nagel-Schreckenberg model.

    PubMed

    Bain, Nicolas; Emig, Thorsten; Ulm, Franz-Josef; Schreckenberg, Michael

    2016-02-01

    The statistics of velocities in the cellular automaton model of Nagel and Schreckenberg for traffic are studied. From numerical simulations, we obtain the probability distribution function (PDF) for vehicle velocities and the velocity-velocity (vv) covariance function. We identify the probability to find a standing vehicle as a potential order parameter that signals nicely the transition between free and congested flow for a sufficiently large number of velocity states. Our results for the vv covariance function resemble features of a second-order phase transition. We develop a 3-body approximation that allows us to relate the PDFs for velocities and headways. Using this relation, an approximation to the velocity PDF is obtained from the headway PDF observed in simulations. We find a remarkable agreement between this approximation and the velocity PDF obtained from simulations.

  13. Statistical Analysis of CFD Solutions from the Drag Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Hemsch, Michael J.

    2002-01-01

    A simple, graphical framework is presented for robust statistical evaluation of results obtained from N-Version testing of a series of RANS CFD codes. The solutions were obtained by a variety of code developers and users for the June 2001 Drag Prediction Workshop sponsored by the AIAA Applied Aerodynamics Technical Committee. The aerodynamic configuration used for the computational tests is the DLR-F4 wing-body combination previously tested in several European wind tunnels and for which a previous N-Version test had been conducted. The statistical framework is used to evaluate code results for (1) a single cruise design point, (2) drag polars and (3) drag rise. The paper concludes with a discussion of the meaning of the results, especially with respect to predictability, validation, and reporting of solutions.

  14. Management Documentation: Indicators & Good Practice at Cultural Heritage Places

    NASA Astrophysics Data System (ADS)

    Eppich, R.; Garcia Grinda, J. L.

    2015-08-01

    Documentation for cultural heritage places usually refers to describing the physical attributes, surrounding context, condition or environment, most of the time with images, graphics, maps or digital 3D models in their various forms with supporting textual information. Just as important as this type of information is the documentation of managerial attributes. How do managers of cultural heritage places collect information related to financial or economic well-being? How are data collected over time measured, and what are significant indicators for improvement? What quality of indicator is good enough? Good management of cultural heritage places is essential for conservation longevity, preservation of values and enjoyment by the public. But how is management documented? The paper will describe the research methodology, selection and description of attributes or indicators related to good management practice. It will describe the criteria for indicator selection and why they are important, how and when they are collected, by whom, and the difficulties in obtaining this information. As importantly, it will describe how this type of documentation directly contributes to improving conservation practice. Good practice summaries will be presented that highlight this type of documentation, including Pamplona and Ávila, Spain, and Valletta, Malta. Conclusions are drawn with preliminary recommendations for improvement of this important aspect of documentation. Documentation of this nature is not typical and presents a unique challenge to collect, measure and communicate easily. However, it is a category that is often ignored yet absolutely essential in order to conserve cultural heritage places.

  15. Statistical methods for convergence detection of multi-objective evolutionary algorithms.

    PubMed

    Trautmann, H; Wagner, T; Naujoks, B; Preuss, M; Mehnen, J

    2009-01-01

    In this paper, two approaches for estimating the generation in which a multi-objective evolutionary algorithm (MOEA) shows statistically significant signs of convergence are introduced. A set-based perspective is taken where convergence is measured by performance indicators. The proposed techniques fulfill the requirements of proper statistical assessment on the one hand and efficient optimisation for real-world problems on the other hand. The first approach accounts for the stochastic nature of the MOEA by repeating the optimisation runs for increasing generation numbers and analysing the performance indicators using statistical tools. This technique results in a very robust offline procedure. Moreover, an online convergence detection method is introduced as well. This method automatically stops the MOEA when either the variance of the performance indicators falls below a specified threshold or a stagnation of their overall trend is detected. Both methods are analysed and compared for two MOEAs and on different classes of benchmark functions. It is shown that the methods successfully operate on all stated problems, needing fewer function evaluations while preserving good approximation quality at the same time.
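
    A stripped-down version of the online stopping idea can be written in a few lines: track a performance indicator per generation and stop once its variance over a recent window falls below a threshold. This ignores the second criterion mentioned above (stagnation of the overall trend) and the proper statistical testing; the indicator values, window size and threshold below are hypothetical.

```python
import numpy as np

def should_stop(indicator_history, window=10, var_threshold=1e-4):
    """Simplified online convergence check: stop when the variance of a
    performance indicator over the last `window` generations is below a threshold."""
    if len(indicator_history) < window:
        return False
    recent = np.asarray(indicator_history[-window:])
    return float(np.var(recent)) < var_threshold

if __name__ == "__main__":
    # hypothetical hypervolume-like indicator values that level off over generations
    history = [0.40, 0.55, 0.63, 0.70, 0.74, 0.76, 0.77, 0.775,
               0.776, 0.7761, 0.7762, 0.7762, 0.7763, 0.7763]
    for gen in range(1, len(history) + 1):
        if should_stop(history[:gen]):
            print("stop at generation", gen)
            break
```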

  16. Carbon nanofibers obtained from electrospinning process

    NASA Astrophysics Data System (ADS)

    Bovi de Oliveira, Juliana; Müller Guerrini, Lília; Sizuka Oishi, Silvia; Rogerio de Oliveira Hein, Luis; dos Santos Conejo, Luíza; Cerqueira Rezende, Mirabel; Cocchieri Botelho, Edson

    2018-02-01

    In recent years, reinforcements consisting of carbon nanostructures, such as carbon nanotubes, fullerenes, graphenes, and carbon nanofibers, have received significant attention due mainly to their chemical inertness and good mechanical, electrical and thermal properties. Since carbon nanofibers constitute a continuous reinforcement with high specific surface area, and can be obtained at low cost and in large amounts, they have proven advantageous compared to traditional carbon nanotubes. The main objective of this work is the processing of carbon nanofibers, using polyacrylonitrile (PAN) as a precursor, obtained by the electrospinning process via polymer solution, with subsequent use as reinforcement in polymer composites for aerospace applications. In this work, PAN nanofibers were first produced by electrospinning from a dimethylformamide solution, with diameters in the range of (375 ± 85) nm. Using a furnace, the PAN nanofibers were converted into carbon nanofibers. The morphologies and structures of the PAN and carbon nanofibers were investigated by scanning electron microscopy, Raman spectroscopy, thermogravimetric analysis and differential scanning calorimetry. The residual weight after carbonization was approximately 38 wt%, with a diameter reduction of 50% and a carbon yield of 25%. From the analysis of the crystalline structure of the carbonized material, it was found that the material presented a disordered structure.

  17. Measuring a diffusion coefficient by single-particle tracking: statistical analysis of experimental mean squared displacement curves.

    PubMed

    Ernst, Dominique; Köhler, Jürgen

    2013-01-21

    We provide experimental results on the accuracy of diffusion coefficients obtained by a mean squared displacement (MSD) analysis of single-particle trajectories. We have recorded very long trajectories comprising more than 1.5 × 10^5 data points and decomposed these long trajectories into shorter segments, providing us with ensembles of trajectories of variable lengths. This enabled a statistical analysis of the resulting MSD curves as a function of the lengths of the segments. We find that the relative error of the diffusion coefficient can be minimized by taking an optimum number of points into account for fitting the MSD curves, and that this optimum does not depend on the segment length. Yet, the magnitude of the relative error for the diffusion coefficient does, and achieving an accuracy on the order of 10% requires the recording of trajectories with about 1000 data points. Finally, we compare our results with theoretical predictions and find very good qualitative and quantitative agreement between experiment and theory.
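
    The MSD workflow discussed above is short to write down: compute the time-averaged MSD of a trajectory and fit the first few lags with MSD(tau) ≈ 2 * dims * D * tau. The sketch below does this for a simulated 2D Brownian trajectory; the number of fitted points (n_fit) and all parameter values are illustrative and are not the optimum reported in the paper.

```python
import numpy as np

def mean_squared_displacement(traj, max_lag):
    """Time-averaged MSD of a single trajectory; traj has shape (T, dims)."""
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        disp = traj[lag:] - traj[:-lag]
        msd[lag - 1] = np.mean(np.sum(disp**2, axis=1))
    return msd

def fit_diffusion_coefficient(msd, dt, n_fit=4, dims=2):
    """Estimate D from a linear fit MSD(tau) ~ 2*dims*D*tau over the first n_fit lags."""
    lags = np.arange(1, n_fit + 1) * dt
    slope, _ = np.polyfit(lags, msd[:n_fit], 1)
    return slope / (2 * dims)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    dt, d_true = 0.01, 0.5
    # simulate a 2D Brownian trajectory with diffusion coefficient d_true
    steps = rng.normal(scale=np.sqrt(2 * d_true * dt), size=(10_000, 2))
    traj = np.cumsum(steps, axis=0)
    msd = mean_squared_displacement(traj, max_lag=50)
    print("estimated D:", round(float(fit_diffusion_coefficient(msd, dt)), 4))
```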

  18. Incorporating an Interactive Statistics Workshop into an Introductory Biology Course-Based Undergraduate Research Experience (CURE) Enhances Students' Statistical Reasoning and Quantitative Literacy Skills.

    PubMed

    Olimpo, Jeffrey T; Pevey, Ryan S; McCabe, Thomas M

    2018-01-01

    Course-based undergraduate research experiences (CUREs) provide an avenue for student participation in authentic scientific opportunities. Within the context of such coursework, students are often expected to collect, analyze, and evaluate data obtained from their own investigations. Yet, limited research has been conducted that examines mechanisms for supporting students in these endeavors. In this article, we discuss the development and evaluation of an interactive statistics workshop that was expressly designed to provide students with an open platform for graduate teaching assistant (GTA)-mentored data processing, statistical testing, and synthesis of their own research findings. Mixed methods analyses of pre/post-intervention survey data indicated a statistically significant increase in students' reasoning and quantitative literacy abilities in the domain, as well as enhancement of student self-reported confidence in and knowledge of the application of various statistical metrics to real-world contexts. Collectively, these data reify an important role for scaffolded instruction in statistics in preparing emergent scientists to be data-savvy researchers in a globally expansive STEM workforce.

  19. Dynamically biased statistical model for the ortho/para conversion in the H2 + H3+ --> H3+ + H2 reaction

    NASA Astrophysics Data System (ADS)

    Gómez-Carrasco, Susana; González-Sánchez, Lola; Aguado, Alfredo; Sanz-Sanz, Cristina; Zanchet, Alexandre; Roncero, Octavio

    2012-09-01

    In this work we present a dynamically biased statistical model to describe the evolution of the title reaction from a statistical to a more direct mechanism, using quasi-classical trajectories (QCT). The method is based on the one previously proposed by Park and Light [J. Chem. Phys. 126, 044305 (2007), 10.1063/1.2430711]. A recent global potential energy surface is used here to calculate the capture probabilities, instead of the long-range ion-induced dipole interactions. The dynamical constraints are introduced by considering a scrambling matrix which depends on energy and determines the probability of the identity/hop/exchange mechanisms. These probabilities are calculated using QCT. It is found that the high zero-point energy of the fragments is transferred to the rest of the degrees of freedom, which shortens the lifetime of the H5+ complexes and, as a consequence, the exchange mechanism occurs in lower proportion. The zero-point energy (ZPE) is not properly described in quasi-classical trajectory calculations, and an approximation is made in which the initial ZPE of the reactants is reduced in the QCT calculations to obtain a new ZPE-biased scrambling matrix. This reduction of the ZPE is explained by the need to correct the purely classical level number of the H5+ complex, as is done in classical simulations of unimolecular processes to obtain equivalent quantum and classical rate constants using Rice-Ramsperger-Kassel-Marcus theory. This matrix allows us to obtain a ratio of hop/exchange mechanisms, α(T), in rather good agreement with recent experimental results by Crabtree et al. [J. Chem. Phys. 134, 194311 (2011), 10.1063/1.3587246] at room temperature. At lower temperatures, however, the present simulations predict too high ratios because the biased scrambling matrix is not statistical enough. This demonstrates the importance of applying quantum methods to simulate this reaction at the low temperatures of astrophysical interest.

  20. Understanding AlN Obtaining Through Computational Thermodynamics Combined with Experimental Investigation

    NASA Astrophysics Data System (ADS)

    Florea, R. M.

    2017-06-01

    The basic material concept, technology and some results of studies on an aluminum matrix composite with dispersed aluminum nitride reinforcement are presented. The studied composites were manufactured by an "in situ" technique. Aluminum nitride (AlN) has recently attracted wide interest as a suitable material for hybrid integrated circuit substrates because of its high thermal conductivity, good dielectric properties, high flexural strength, a thermal expansion coefficient that matches that of Si, and its non-toxic nature. AlMg alloys are the best matrix for obtaining AlN. The Al2O3-AlMg, AlN-Al2O3, and AlN-AlMg binary diagrams were thermodynamically modelled. The obtained Gibbs free energies of the components, solution parameters and stoichiometric phases were used to build a thermodynamic database of the AlN-Al2O3-AlMg system. The obtaining of AlN with a liquid phase of AlMg as the matrix has been studied and compared with the thermodynamic results. The secondary phase microstructure has a significant effect on the final thermal conductivity of the obtained AlN. Thermodynamic modelling of the AlN-Al2O3-AlMg system provided an important basis for understanding the process of obtaining AlN and for interpreting the experimental results.

  1. Common pitfalls in statistical analysis: Clinical versus statistical significance

    PubMed Central

    Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

    2015-01-01

    In clinical research, study results, which are statistically significant are often interpreted as being clinically important. While statistical significance indicates the reliability of the study results, clinical significance reflects its impact on clinical practice. The third article in this series exploring pitfalls in statistical analysis clarifies the importance of differentiating between statistical significance and clinical significance. PMID:26229754

  2. Cooperation among cancer cells as public goods games on Voronoi networks.

    PubMed

    Archetti, Marco

    2016-05-07

    Cancer cells produce growth factors that diffuse and sustain tumour proliferation, a form of cooperation that can be studied using mathematical models of public goods in the framework of evolutionary game theory. Cell populations, however, form heterogeneous networks that cannot be described by regular lattices or scale-free networks, the types of graphs generally used in the study of cooperation. To describe the dynamics of growth factor production in populations of cancer cells, I study public goods games on Voronoi networks, using a range of non-linear benefits that account for the known properties of growth factors, and different types of diffusion gradients. The results are surprisingly similar to those obtained on regular graphs and different from results on scale-free networks, revealing that network heterogeneity per se does not promote cooperation when public goods diffuse beyond one-step neighbours. The exact shape of the diffusion gradient is not crucial, however, whereas the type of non-linear benefit is an essential determinant of the dynamics. Public goods games on Voronoi networks can shed light on intra-tumour heterogeneity, the evolution of resistance to therapies that target growth factors, and new types of cell therapy. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Statistical properties of filtered pseudorandom digital sequences formed from the sum of maximum-length sequences

    NASA Technical Reports Server (NTRS)

    Wallace, G. R.; Weathers, G. D.; Graf, E. R.

    1973-01-01

    The statistics of filtered pseudorandom digital sequences called hybrid-sum sequences, formed from the modulo-two sum of several maximum-length sequences, are analyzed. The results indicate that a relation exists between the statistics of the filtered sequence and the characteristic polynomials of the component maximum length sequences. An analysis procedure is developed for identifying a large group of sequences with good statistical properties for applications requiring the generation of analog pseudorandom noise. By use of the analysis approach, the filtering process is approximated by the convolution of the sequence with a sum of unit step functions. A parameter reflecting the overall statistical properties of filtered pseudorandom sequences is derived. This parameter is called the statistical quality factor. A computer algorithm to calculate the statistical quality factor for the filtered sequences is presented, and the results for two examples of sequence combinations are included. The analysis reveals that the statistics of the signals generated with the hybrid-sum generator are potentially superior to the statistics of signals generated with maximum-length generators. Furthermore, fewer calculations are required to evaluate the statistics of a large group of hybrid-sum generators than are required to evaluate the statistics of the same size group of approximately equivalent maximum-length sequences.
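
    As a rough illustration of the ingredients involved, the sketch below generates two maximum-length sequences from linear recurrences (the primitive trinomials x^5 + x^2 + 1 and x^7 + x^6 + 1), forms their modulo-two (hybrid) sum, and applies a crude moving-average 'filter' before computing simple sample statistics. The statistical quality factor derived in the paper is not implemented, and the seeds, lengths and filter are arbitrary.

```python
import numpy as np

def lfsr_sequence(lags, seed, length):
    """Binary sequence from the recurrence s[n] = XOR of s[n - lag] over `lags`.
    Lags (3, 5) realize x^5 + x^2 + 1 (period 31); lags (1, 7) realize
    x^7 + x^6 + 1 (period 127)."""
    s = list(seed)                       # seed length = max(lags), not all zero
    while len(s) < length:
        bit = 0
        for lag in lags:
            bit ^= s[-lag]
        s.append(bit)
    return np.array(s[:length])

def hybrid_sum(seq_a, seq_b):
    """Modulo-two sum of two component maximum-length sequences."""
    return seq_a ^ seq_b

if __name__ == "__main__":
    n = 4096
    a = lfsr_sequence((3, 5), seed=[1, 0, 0, 1, 0], length=n)
    b = lfsr_sequence((1, 7), seed=[1, 0, 1, 0, 0, 1, 0], length=n)
    h = hybrid_sum(a, b)
    # map bits to +/-1 and apply a crude moving-average low-pass "filter"
    analog = np.convolve(2.0 * h - 1.0, np.ones(8) / 8.0, mode="valid")
    print("filtered mean:", round(float(analog.mean()), 4),
          "std:", round(float(analog.std()), 4))
```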

  4. Defining the Good Reading Teacher.

    ERIC Educational Resources Information Center

    Kupersmith, Judy; And Others

    In the quest for a definition of the good reading teacher, a review of the literature shows that new or copious materials, one specific teaching method, and static teaching behaviors are not responsible for effective teaching. However, observations of five reading teachers, with good references and good reputations but with widely divergent…

  5. Thermodynamics and statistical mechanics. [thermodynamic properties of gases

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The basic thermodynamic properties of gases are reviewed and the relations between them are derived from the first and second laws. The elements of statistical mechanics are then formulated and the partition function is derived. The classical form of the partition function is used to obtain the Maxwell-Boltzmann distribution of kinetic energies in the gas phase and the equipartition of energy theorem is given in its most general form. The thermodynamic properties are all derived as functions of the partition function. Quantum statistics are reviewed briefly and the differences between the Boltzmann distribution function for classical particles and the Fermi-Dirac and Bose-Einstein distributions for quantum particles are discussed.

  6. Influences of Moral, Emotional and Adversity Quotient on Good Citizenship of Rajabhat University's Students in the Northeast of Thailand

    ERIC Educational Resources Information Center

    Siphai, Sunan

    2015-01-01

    The objective of this study is to investigate the influences of moral, emotional and adversity quotient on good citizenship of Rajabhat University's students in Northeastern Region of Thailand. The samples included 1,087 undergraduate students from 8 different Rajabhat universities. Data analysis was conducted in descriptive statistics and…

  7. The level crossing rates and associated statistical properties of a random frequency response function

    NASA Astrophysics Data System (ADS)

    Langley, Robin S.

    2018-03-01

    This work is concerned with the statistical properties of the frequency response function of the energy of a random system. Earlier studies have considered the statistical distribution of the function at a single frequency, or alternatively the statistics of a band-average of the function. In contrast the present analysis considers the statistical fluctuations over a frequency band, and results are obtained for the mean rate at which the function crosses a specified level (or equivalently, the average number of times the level is crossed within the band). Results are also obtained for the probability of crossing a specified level at least once, the mean rate of occurrence of peaks, and the mean trough-to-peak height. The analysis is based on the assumption that the natural frequencies and mode shapes of the system have statistical properties that are governed by the Gaussian Orthogonal Ensemble (GOE), and the validity of this assumption is demonstrated by comparison with numerical simulations for a random plate. The work has application to the assessment of the performance of dynamic systems that are sensitive to random imperfections.
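
    The central quantity, the mean rate at which the response crosses a specified level within a band, can be estimated from any sampled response simply by counting up-crossings. The sketch below does this for a synthetic energy response built from randomly placed resonances; it is only a counting illustration and does not implement the GOE-based analytical results of the paper.

```python
import numpy as np

def upcrossing_count(values, level):
    """Number of up-crossings of `level` in a sampled curve."""
    above = values >= level
    return int(np.count_nonzero(~above[:-1] & above[1:]))

if __name__ == "__main__":
    # hypothetical energy frequency response of a bank of randomly placed resonances
    rng = np.random.default_rng(2)
    freqs = np.linspace(50.0, 150.0, 5000)
    peaks = rng.uniform(50.0, 150.0, size=40)      # random natural frequencies
    damping = 0.5
    response = sum(1.0 / ((f0**2 - freqs**2)**2 + (damping * f0 * freqs)**2)
                   for f0 in peaks)
    level = np.mean(response)
    n_cross = upcrossing_count(response, level)
    print("up-crossings of the mean level:", n_cross,
          " rate per unit frequency:", round(n_cross / (freqs[-1] - freqs[0]), 4))
```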

  8. Comparison of hypertabastic survival model with other unimodal hazard rate functions using a goodness-of-fit test.

    PubMed

    Tahir, M Ramzan; Tran, Quang X; Nikulin, Mikhail S

    2017-05-30

    We studied the problem of testing a hypothesized distribution in survival regression models when the data are right censored and survival times are influenced by covariates. A modified chi-squared type test, known as the Nikulin-Rao-Robson statistic, is applied for the comparison of accelerated failure time models. This statistic is used to test the goodness-of-fit for the hypertabastic survival model and four other unimodal hazard rate functions. The results of the simulation study showed that the hypertabastic distribution can be used as an alternative to the log-logistic and log-normal distributions. In statistical modeling, because of its flexible shape of hazard functions, this distribution can also be used as a competitor of the Birnbaum-Saunders and inverse Gaussian distributions. The results for the real data application are shown. Copyright © 2017 John Wiley & Sons, Ltd.

  9. Statistical mechanics in the context of special relativity.

    PubMed

    Kaniadakis, G

    2002-11-01

    In Ref. [Physica A 296, 405 (2001)], starting from the one-parameter deformation of the exponential function exp_κ(x) = (√(1 + κ²x²) + κx)^(1/κ), a statistical mechanics has been constructed which reduces to the ordinary Boltzmann-Gibbs statistical mechanics as the deformation parameter κ approaches zero. The distribution f = exp_κ(-βE + βμ) obtained within this statistical mechanics shows a power-law tail and depends on the nonspecified parameter β, containing all the information about the temperature of the system. On the other hand, the entropic form S_κ = ∫ d³p (c_κ f^(1+κ) + c_(-κ) f^(1-κ)), which after maximization produces the distribution f and reduces to the standard Boltzmann-Shannon entropy S_0 as κ → 0, contains the coefficient c_κ whose expression involves, besides the Boltzmann constant, another nonspecified parameter α. In the present effort we show that S_κ is the unique existing entropy obtained by a continuous deformation of S_0 and preserving unaltered its fundamental properties of concavity, additivity, and extensivity. These properties of S_κ permit us to determine unequivocally the values of the above-mentioned parameters β and α. Subsequently, we explain the origin of the deformation mechanism introduced by κ and show that this deformation emerges naturally within the Einstein special relativity. Furthermore, we extend the theory in order to treat statistical systems in a time-dependent and relativistic context. Then, we show that it is possible to determine in a self-consistent scheme within the special relativity the value of the free parameter κ, which turns out to depend on the light speed c and reduces to zero as c → ∞, recovering in this way the ordinary statistical mechanics and thermodynamics. The statistical mechanics here presented does not contain free parameters, and preserves unaltered the mathematical and epistemological structure of
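
    For reference, the κ-deformed exponential quoted above is simple to evaluate numerically: it reduces to the ordinary exponential as κ → 0 and develops a power-law tail for κ > 0. The short sketch below performs just that check; the sample points and κ values are arbitrary.

```python
import numpy as np

def exp_kappa(x, kappa):
    """Kaniadakis kappa-exponential exp_k(x) = (sqrt(1 + k^2 x^2) + k x)^(1/k),
    which reduces to the ordinary exponential as kappa -> 0."""
    x = np.asarray(x, dtype=float)
    if kappa == 0.0:
        return np.exp(x)
    return (np.sqrt(1.0 + (kappa * x) ** 2) + kappa * x) ** (1.0 / kappa)

if __name__ == "__main__":
    x = np.linspace(-2.0, 5.0, 8)
    for k in (0.0, 0.1, 0.5):
        print(f"kappa={k}:", np.round(exp_kappa(x, k), 4))
    # for large x and kappa > 0, exp_kappa(x, kappa) behaves like a power law ~ (2*kappa*x)**(1/kappa)
```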

  10. Software Used to Generate Cancer Statistics - SEER Cancer Statistics

    Cancer.gov

    Videos that highlight topics and trends in cancer statistics and definitions of statistical terms. Also software tools for analyzing and reporting cancer statistics, which are used to compile SEER's annual reports.

  11. Good life good death according to Christiaan Barnard.

    PubMed

    Toledo-Pereyra, Luis H

    2010-06-01

    Christiaan Barnard (1922-2002), pioneering heart transplant surgeon, introduced his ideas on euthanasia in a well-written and researched book, Good Life Good Death. A Doctor's Case for Euthanasia and Suicide, published in 1980. His courage in analyzing this topic in a forthright and clear manner is worth reviewing today. In essence, Barnard supported and practiced passive euthanasia (the ending of life by indirect methods, such as stopping of life support) and discussed, but never practiced, active euthanasia (the ending of life by direct means). Barnard believed that "the primary goal of medicine was to alleviate suffering-not merely to prolong life-he argued that advances in modern medical technology demanded that we evaluate our view of death and the handling of terminal illness." Some in the surgical community took issue with Barnard when he publicized his personal views on euthanasia. We discuss Barnard's beliefs and attempt to clarify some misunderstandings regarding this particular controversial area of medicine.

  12. Radar derived spatial statistics of summer rain. Volume 1: Experiment description

    NASA Technical Reports Server (NTRS)

    Katz, I.; Arnold, A.; Goldhirsh, J.; Konrad, T. G.; Vann, W. L.; Dobson, E. B.; Rowland, J. R.

    1975-01-01

    An experiment was performed at Wallops Island, Virginia, to obtain a statistical description of summer rainstorms. Its purpose was to obtain information needed for design of earth and space communications systems in which precipitation in the earth's atmosphere scatters or attenuates the radio signal. Rainstorms were monitored with the high resolution SPANDAR radar and the 3-dimensional structures of the storms were recorded on digital tape. The equipment, the experiment, and tabulated data obtained during the experiment are described.

  13. Theoretical verification of experimentally obtained conformation-dependent electronic conductance in a biphenyl molecule

    NASA Astrophysics Data System (ADS)

    Maiti, Santanu K.

    2014-07-01

    The experimentally obtained (Venkataraman et al. [1]) cosine squared relation of electronic conductance in a biphenyl molecule is verified theoretically within a tight-binding framework. Using Green's function formalism we numerically calculate two-terminal conductance as a function of relative twist angle among the molecular rings and find that the results are in good agreement with the experimental observation.

  14. Simple Data Sets for Distinct Basic Summary Statistics

    ERIC Educational Resources Information Center

    Lesser, Lawrence M.

    2011-01-01

    It is important to avoid ambiguity with numbers because unfortunate choices of numbers can inadvertently make it possible for students to form misconceptions or make it difficult for teachers to tell if students obtained the right answer for the right reason. Therefore, it is important to make sure when introducing basic summary statistics that…

  15. Forest statistics for east Oklahoma counties - 1993

    Treesearch

    Patrick E. Miller; Andrew J. Hartsell; Jack D. London

    1993-01-01

    This report contains the statistical tables and figures derived from data obtained during a recent inventory of east Oklahoma. The multiresource inventory included 18 counties and two survey regions. Data on forest acreage and timber volume involved a three-step procedure. First, estimates of forest acreage were made for each county using aerial photographs....

  16. VizieR Online Data Catalog: GOODS-MUSIC catalog updated version (Santini+, 2009)

    NASA Astrophysics Data System (ADS)

    Santini, P.; Fontana, A.; Grazian, A.; Salimbeni, S.; Fiore, F.; Fontanot, F.; Boutsia, K.; Castelllano, M.; Cristiani, S.; de Santis, C.; Gallozzi, S.; Giallongo, E.; Nonino, M.; Menci, N.; Paris, D.; Pentericci, L.; Vanzella, E.

    2009-06-01

    The GOODS-MUSIC multiwavelength catalog provides photometric and spectroscopic information for galaxies in the GOODS Southern field. It includes two U images obtained with the ESO 2.2m telescope and one U band image from VLT-VIMOS, the ACS-HST images in four optical (B,V,i,z) bands, the VLT-ISAAC J, H, and Ks bands as well as the Spitzer images at 3.6, 4.5, 5.8, and 8 micron (IRAC) and 24 micron (MIPS). Most of these images have been made publicly available in the coadded version by the GOODS team, while the U band data were retrieved in raw format and reduced by our team. We also collected all the available spectroscopic information from public spectroscopic surveys and cross-correlated the spectroscopic redshifts with our photometric catalog. For the unobserved fraction of the objects, we applied our photometric redshift code to obtain well-calibrated photometric redshifts. The final catalog is made up of 15208 objects, with 209 known stars and 61 AGNs. The major new feature of this updated release is the inclusion of 24 micron photometry. Further improvements concern a revised photometry in the four IRAC bands (mainly based on the use of new PSF-matching kernels and on a revised procedure for estimating the background), the enlargement of the sample of galaxies with spectroscopic redshifts, the addition of objects selected on the IRAC 4.5 micron image and a more careful selection of AGN sources. (1 data file).

  17. Understanding Statistics and Statistics Education: A Chinese Perspective

    ERIC Educational Resources Information Center

    Shi, Ning-Zhong; He, Xuming; Tao, Jian

    2009-01-01

    In recent years, statistics education in China has made great strides. However, there still exists a fairly large gap with the advanced levels of statistics education in more developed countries. In this paper, we identify some existing problems in statistics education in Chinese schools and make some proposals as to how they may be overcome. We…

  18. Statistical mentoring at early training and career stages

    DOE PAGES

    Anderson-Cook, Christine M.; Hamada, Michael S.; Moore, Leslie M.; ...

    2016-06-27

    At Los Alamos National Laboratory (LANL), statistical scientists develop solutions for a variety of national security challenges through scientific excellence, typically as members of interdisciplinary teams. At LANL, mentoring is actively encouraged and practiced to develop statistical skills and positive career-building behaviors. Mentoring activities targeted at different career phases from student to junior staff are an important catalyst for both short and long term career development. This article discusses mentoring strategies for undergraduate and graduate students through internships as well as for postdoctoral research associates and junior staff. Topics addressed include project selection, progress, and outcome; intellectual and social activities that complement the student internship experience; key skills/knowledge not typically obtained in academic training; and the impact of such internships on students’ careers. Experiences and strategies from a number of successful mentorships are presented. Feedback from former mentees obtained via a questionnaire is incorporated. These responses address some of the benefits the respondents received from mentoring, helpful contributions and advice from their mentors, key skills learned, and how mentoring impacted their later careers.

  19. Statistical mentoring at early training and career stages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson-Cook, Christine M.; Hamada, Michael S.; Moore, Leslie M.

    At Los Alamos National Laboratory (LANL), statistical scientists develop solutions for a variety of national security challenges through scientific excellence, typically as members of interdisciplinary teams. At LANL, mentoring is actively encouraged and practiced to develop statistical skills and positive career-building behaviors. Mentoring activities targeted at different career phases from student to junior staff are an important catalyst for both short and long term career development. This article discusses mentoring strategies for undergraduate and graduate students through internships as well as for postdoctoral research associates and junior staff. Topics addressed include project selection, progress, and outcome; intellectual and social activities that complement the student internship experience; key skills/knowledge not typically obtained in academic training; and the impact of such internships on students’ careers. Experiences and strategies from a number of successful mentorships are presented. Feedback from former mentees obtained via a questionnaire is incorporated. These responses address some of the benefits the respondents received from mentoring, helpful contributions and advice from their mentors, key skills learned, and how mentoring impacted their later careers.

  20. Directory of Michigan Library Statistics. 1994 Edition. Reporting 1992 and 1993 Statistical Activities including: Public Library Statistics, Library Cooperative Statistics, Regional/Subregional Statistics.

    ERIC Educational Resources Information Center

    Leaf, Donald C., Comp.; Neely, Linda, Comp.

    This edition focuses on statistical data supplied by Michigan public libraries, public library cooperatives, and those public libraries which serve as regional or subregional outlets for blind and physically handicapped services. Since statistics in Michigan academic libraries are typically collected in odd-numbered years, they are not included…

  1. Pennsylvania StreamStats--A web-based application for obtaining water-resource-related information

    USGS Publications Warehouse

    Stuckey, Marla H.; Hoffman, Scott A.

    2010-01-01

    StreamStats is a national web-based Geographic Information System (GIS) application, developed by the U.S. Geological Survey (USGS), in cooperation with Environmental Systems Research Institute, Inc., to provide a variety of water-resource-related information. Users can easily obtain descriptive information, basin characteristics, and streamflow statistics for USGS streamgages and ungaged stream locations throughout Pennsylvania. StreamStats also allows users to search upstream and (or) downstream from user-selected points to identify locations of and obtain information for water-resource-related activities, such as dams and streamgages.

  2. Level statistics of a noncompact cosmological billiard

    NASA Astrophysics Data System (ADS)

    Csordas, Andras; Graham, Robert; Szepfalusy, Peter

    1991-08-01

    A noncompact chaotic billiard on a two-dimensional space of constant negative curvature, the infinite equilateral triangle describing anisotropy oscillations in the very early universe, is studied quantum-mechanically. A Weyl formula with a logarithmic correction term is derived for the smoothed number of states function. For one symmetry class of the eigenfunctions, the level spacing distribution, the spectral rigidity Delta3, and the Sigma2 statistics are determined numerically using the finite matrix approximation. Systematic deviations are found both from the Gaussian orthogonal ensemble (GOE) and the Poissonian ensemble. However, good agreement with the GOE is found if the fundamental triangle is deformed in such a way that it no longer tiles the space.

  3. What Are Good Universities?

    ERIC Educational Resources Information Center

    Connell, Raewyn

    2016-01-01

    This paper considers how we can arrive at a concept of the good university. It begins with ideas expressed by Australian Vice-Chancellors and in the "league tables" for universities, which essentially reproduce existing privilege. It then considers definitions of the good university via wish lists, classic texts, horror lists, structural…

  4. Does Breast Cancer Drive the Building of Survival Probability Models among States? An Assessment of Goodness of Fit for Patient Data from SEER Registries

    PubMed

    Khan, Hafiz; Saxena, Anshul; Perisetti, Abhilash; Rafiq, Aamrin; Gabbidon, Kemesha; Mende, Sarah; Lyuksyutova, Maria; Quesada, Kandi; Blakely, Summre; Torres, Tiffany; Afesse, Mahlet

    2016-12-01

    Background: Breast cancer is a worldwide public health concern and is the most prevalent type of cancer in women in the United States. This study concerned the best fit of statistical probability models on the basis of survival times for nine state cancer registries: California, Connecticut, Georgia, Hawaii, Iowa, Michigan, New Mexico, Utah, and Washington. Materials and Methods: A probability random sampling method was applied to select and extract records of 2,000 breast cancer patients from the Surveillance Epidemiology and End Results (SEER) database for each of the nine state cancer registries used in this study. EasyFit software was utilized to identify the best probability models by using goodness of fit tests, and to estimate parameters for various statistical probability distributions that fit survival data. Results: Summary statistics are reported for each of the states for the years 1973 to 2012. Kolmogorov-Smirnov, Anderson-Darling, and Chi-squared goodness of fit test values were used for the survival data, the highest values of goodness of fit statistics being considered indicative of the best fit survival model for each state. Conclusions: It was found that California, Connecticut, Georgia, Iowa, New Mexico, and Washington followed the Burr probability distribution, while the Dagum probability distribution gave the best fit for Michigan and Utah, and Hawaii followed the Gamma probability distribution. These findings highlight differences between states through selected sociodemographic variables and also demonstrate probability modeling differences in breast cancer survival times. The results of this study can be used to guide healthcare providers and researchers for further investigations into social and environmental factors in order to reduce the occurrence of and mortality due to breast cancer. Creative Commons Attribution License
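
    A hedged sketch of the same kind of model-selection exercise using open tools instead of EasyFit: scipy is used to fit a few candidate lifetime distributions and rank them with the Kolmogorov-Smirnov distance (a smaller D means a better fit). The survival times are synthetic placeholders, not SEER records, and the Dagum distribution is omitted because scipy does not ship it directly.

```python
import numpy as np
from scipy import stats

# Illustrative survival times in months (placeholder for SEER records).
surv = stats.gamma(a=2.0, scale=24.0).rvs(size=2000, random_state=2)

# Candidate models; scipy has Burr XII and Gamma directly.
candidates = {"burr12": stats.burr12, "gamma": stats.gamma,
              "lognorm": stats.lognorm, "weibull_min": stats.weibull_min}

for name, dist in candidates.items():
    params = dist.fit(surv, floc=0)                 # location fixed at 0
    ks = stats.kstest(surv, dist.cdf, args=params)  # smaller D = better fit
    print(f"{name:12s} KS D = {ks.statistic:.4f}  p = {ks.pvalue:.3f}")
```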

  5. Estimated Accuracy of Three Common Trajectory Statistical Methods

    NASA Technical Reports Server (NTRS)

    Kabashnikov, Vitaliy P.; Chaikovsky, Anatoli P.; Kucsera, Tom L.; Metelskaya, Natalia S.

    2011-01-01

    Three well-known trajectory statistical methods (TSMs), namely concentration field (CF), concentration weighted trajectory (CWT), and potential source contribution function (PSCF) methods were tested using known sources and artificially generated data sets to determine the ability of TSMs to reproduce spatial distribution of the sources. In the works by other authors, the accuracy of the trajectory statistical methods was estimated for particular species and at specified receptor locations. We have obtained a more general statistical estimation of the accuracy of source reconstruction and have found optimum conditions to reconstruct source distributions of atmospheric trace substances. Only virtual pollutants of the primary type were considered. In real world experiments, TSMs are intended for application to a priori unknown sources. Therefore, the accuracy of TSMs has to be tested with all possible spatial distributions of sources. An ensemble of geographical distributions of virtual sources was generated. Spearman's rank order correlation coefficient between spatial distributions of the known virtual and the reconstructed sources was taken to be a quantitative measure of the accuracy. Statistical estimates of the mean correlation coefficient and a range of the most probable values of correlation coefficients were obtained. All the TSMs that were considered here showed similar close results. The maximum of the ratio of the mean correlation to the width of the correlation interval containing the most probable correlation values determines the optimum conditions for reconstruction. An optimal geographical domain roughly coincides with the area supplying most of the substance to the receptor. The optimal domain's size is dependent on the substance decay time. Under optimum reconstruction conditions, the mean correlation coefficients can reach 0.70-0.75. The boundaries of the interval with the most probable correlation values are 0.6-0.9 for the decay time of 240 h
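
    The accuracy measure described above is easy to reproduce in outline: the sketch below computes Spearman's rank-order correlation between a known virtual source field and a noisy stand-in for a TSM reconstruction; the grid size, field values and noise level are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# True (virtual) source field and a noisy "reconstructed" field on a grid,
# standing in for a TSM output; values and grid size are illustrative.
true_src = rng.gamma(shape=2.0, scale=1.0, size=(40, 60))
reconstructed = true_src + rng.normal(scale=0.8, size=true_src.shape)

# Quantitative accuracy measure used in the study: Spearman's rank-order
# correlation between the known and the reconstructed spatial distributions.
rho, p = stats.spearmanr(true_src.ravel(), reconstructed.ravel())
print(f"Spearman rho = {rho:.3f} (p = {p:.2g})")
```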

  6. The global public good concept: a means of promoting good veterinary governance.

    PubMed

    Eloit, M

    2012-08-01

    At the outset, the concept of a 'public good' was associated with economic policies. However, it has now evolved not only from a national to a global concept (global public good), but also from a concept applying solely to the production of goods to one encompassing societal issues (education, environment, etc.) and fundamental rights, including the right to health and food. Through their actions, Veterinary Services, as defined by the Terrestrial Animal Health Code (Terrestrial Code) of the World Organisation for Animal Health (OIE), help to improve animal health and reduce production losses. In this way they contribute directly and indirectly to food security and to safeguarding human health and economic resources. The organisation and operating procedures of Veterinary Services are therefore key to the efficient governance required to achieve these objectives. The OIE is a major player in global cooperation and governance in the fields of animal and public health through the implementation of its strategic standardisation mission and other programmes for the benefit of Veterinary Services and OIE Member Countries. Thus, the actions of Veterinary Services and the OIE deserve to be recognised as a global public good, backed by public investment to ensure that all Veterinary Services are in a position to apply the principles of good governance and to comply with the international standards for the quality of Veterinary Services set out in the OIE Terrestrial Code (Section 3 on Quality of Veterinary Services) and Aquatic Animal Health Code (Section 3 on Quality of Aquatic Animal Health Services).

  7. Statistics of primordial density perturbations from discrete seed masses

    NASA Technical Reports Server (NTRS)

    Scherrer, Robert J.; Bertschinger, Edmund

    1991-01-01

    The statistics of density perturbations for general distributions of seed masses with arbitrary matter accretion is examined. Formal expressions for the power spectrum, the N-point correlation functions, and the density distribution function are derived. These results are applied to the case of uncorrelated seed masses, and power spectra are derived for accretion of both hot and cold dark matter plus baryons. The reduced moments (cumulants) of the density distribution are computed and used to obtain a series expansion for the density distribution function. Analytic results are obtained for the density distribution function in the case of a distribution of seed masses with a spherical top-hat accretion pattern. More generally, the formalism makes it possible to give a complete characterization of the statistical properties of any random field generated from a discrete linear superposition of kernels. In particular, the results can be applied to density fields derived by smoothing a discrete set of points with a window function.

  8. Statistical distributions of extreme dry spell in Peninsular Malaysia

    NASA Astrophysics Data System (ADS)

    Zin, Wan Zawiah Wan; Jemain, Abdul Aziz

    2010-11-01

    Statistical distributions of annual extreme (AE) series and partial duration (PD) series for dry-spell event are analyzed for a database of daily rainfall records of 50 rain-gauge stations in Peninsular Malaysia, with recording period extending from 1975 to 2004. The three-parameter generalized extreme value (GEV) and generalized Pareto (GP) distributions are considered to model both series. In both cases, the parameters of these two distributions are fitted by means of the L-moments method, which provides a robust estimation of them. The goodness-of-fit (GOF) between empirical data and theoretical distributions are then evaluated by means of the L-moment ratio diagram and several goodness-of-fit tests for each of the 50 stations. It is found that for the majority of stations, the AE and PD series are well fitted by the GEV and GP models, respectively. Based on the models that have been identified, we can reasonably predict the risks associated with extreme dry spells for various return periods.
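
    As a simplified illustration of fitting and checking an extreme-value model for annual dry-spell maxima, the sketch below fits a GEV by maximum likelihood (the paper uses L-moments, which scipy does not provide out of the box), runs a KS goodness-of-fit check, and evaluates a return level; the synthetic series and parameter values are assumptions, and the GP fit for the partial duration series is omitted for brevity.

```python
import numpy as np
from scipy import stats

# Illustrative annual-maximum dry-spell lengths (days) for one station.
annual_extremes = stats.genextreme(c=-0.1, loc=20, scale=5).rvs(
    size=30, random_state=4)

# Fit the GEV by maximum likelihood and check the fit with a KS test.
c, loc, scale = stats.genextreme.fit(annual_extremes)
ks = stats.kstest(annual_extremes, stats.genextreme.cdf, args=(c, loc, scale))
print(f"GEV shape={c:.3f} loc={loc:.2f} scale={scale:.2f}  KS p={ks.pvalue:.3f}")

# Return level for a T-year return period from the fitted GEV.
T = 50
return_level = stats.genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)
print(f"{T}-year dry-spell return level: {return_level:.1f} days")
```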

  9. 19 CFR 102.12 - Fungible goods.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... RULES OF ORIGIN Rules of Origin § 102.12 Fungible goods. When fungible goods of different countries of origin are commingled the country of origin of the goods: (a) Is the countries of origin of those... the origin of the commingled good is not practical, the country or countries of origin may be...

  10. Spatial Statistical Data Fusion (SSDF)

    NASA Technical Reports Server (NTRS)

    Braverman, Amy J.; Nguyen, Hai M.; Cressie, Noel

    2013-01-01

    As remote sensing for scientific purposes has transitioned from an experimental technology to an operational one, the selection of instruments has become more coordinated, so that the scientific community can exploit complementary measurements. However, technological and scientific heterogeneity across devices means that the statistical characteristics of the data they collect are different. The challenge addressed here is how to combine heterogeneous remote sensing data sets in a way that yields optimal statistical estimates of the underlying geophysical field, and provides rigorous uncertainty measures for those estimates. Different remote sensing data sets may have different spatial resolutions, different measurement error biases and variances, and other disparate characteristics. A state-of-the-art spatial statistical model was used to relate the true, but not directly observed, geophysical field to noisy, spatial aggregates observed by remote sensing instruments. The spatial covariances of the true field and the covariances of the true field with the observations were modeled. The observations are spatial averages of the true field values, over pixels, with different measurement noise superimposed. A kriging framework is used to infer optimal (minimum mean squared error and unbiased) estimates of the true field at point locations from pixel-level, noisy observations. A key feature of the spatial statistical model is the spatial mixed effects model that underlies it. The approach models the spatial covariance function of the underlying field using linear combinations of basis functions of fixed size. Approaches based on kriging require the inversion of very large spatial covariance matrices, and this is usually done by making simplifying assumptions about spatial covariance structure that simply do not hold for geophysical variables. In contrast, this method does not require these assumptions, and is also computationally much faster. This method is

  11. Incorporating an Interactive Statistics Workshop into an Introductory Biology Course-Based Undergraduate Research Experience (CURE) Enhances Students’ Statistical Reasoning and Quantitative Literacy Skills †

    PubMed Central

    Olimpo, Jeffrey T.; Pevey, Ryan S.; McCabe, Thomas M.

    2018-01-01

    Course-based undergraduate research experiences (CUREs) provide an avenue for student participation in authentic scientific opportunities. Within the context of such coursework, students are often expected to collect, analyze, and evaluate data obtained from their own investigations. Yet, limited research has been conducted that examines mechanisms for supporting students in these endeavors. In this article, we discuss the development and evaluation of an interactive statistics workshop that was expressly designed to provide students with an open platform for graduate teaching assistant (GTA)-mentored data processing, statistical testing, and synthesis of their own research findings. Mixed methods analyses of pre/post-intervention survey data indicated a statistically significant increase in students’ reasoning and quantitative literacy abilities in the domain, as well as enhancement of student self-reported confidence in and knowledge of the application of various statistical metrics to real-world contexts. Collectively, these data reify an important role for scaffolded instruction in statistics in preparing emergent scientists to be data-savvy researchers in a globally expansive STEM workforce. PMID:29904549

  12. Assessment of corneal properties based on statistical modeling of OCT speckle.

    PubMed

    Jesus, Danilo A; Iskander, D Robert

    2017-01-01

    A new approach to assess the properties of the corneal micro-structure in vivo, based on the statistical modeling of speckle obtained from Optical Coherence Tomography (OCT), is presented. A number of statistical models were proposed to fit the corneal speckle data obtained from the OCT raw image. Short-term changes in corneal properties were studied by inducing corneal swelling, whereas age-related changes were observed by analyzing data from sixty-five subjects aged between twenty-four and seventy-three years. The Generalized Gamma distribution has been shown to be the best model, in terms of the Akaike's Information Criterion, to fit the OCT corneal speckle. Its parameters have shown statistically significant differences (Kruskal-Wallis, p < 0.001) for short-term and age-related corneal changes. In addition, it was observed that age-related changes influence the corneal biomechanical behaviour when corneal swelling is induced. This study shows that the Generalized Gamma distribution can be utilized to model corneal speckle in OCT in vivo, providing complementary quantified information where the micro-structure of corneal tissue is of essence.
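
    A small sketch of the AIC-based model comparison described above, assuming synthetic speckle amplitudes in place of real OCT data: each candidate distribution is fitted by maximum likelihood and scored with AIC (lower is better); fixing the location at zero adds the same constant to every model's AIC, so the ranking is unaffected.

```python
import numpy as np
from scipy import stats

# Illustrative OCT speckle amplitudes (placeholder data, not real images).
amps = stats.gengamma(a=1.5, c=1.2, scale=0.8).rvs(size=5000, random_state=5)

def aic(dist, data, **fit_kwargs):
    """AIC = 2k - 2 ln L for a scipy distribution fitted by MLE."""
    params = dist.fit(data, **fit_kwargs)
    loglik = np.sum(dist.logpdf(data, *params))
    return 2 * len(params) - 2 * loglik

models = {"generalized gamma": stats.gengamma,
          "gamma": stats.gamma,
          "rayleigh": stats.rayleigh,
          "weibull": stats.weibull_min}
for name, dist in models.items():
    print(f"{name:18s} AIC = {aic(dist, amps, floc=0):.1f}")
```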

  13. Assessment of corneal properties based on statistical modeling of OCT speckle

    PubMed Central

    Jesus, Danilo A.; Iskander, D. Robert

    2016-01-01

    A new approach to assess the properties of the corneal micro-structure in vivo, based on the statistical modeling of speckle obtained from Optical Coherence Tomography (OCT), is presented. A number of statistical models were proposed to fit the corneal speckle data obtained from the OCT raw image. Short-term changes in corneal properties were studied by inducing corneal swelling, whereas age-related changes were observed by analyzing data from sixty-five subjects aged between twenty-four and seventy-three years. The Generalized Gamma distribution has been shown to be the best model, in terms of the Akaike’s Information Criterion, to fit the OCT corneal speckle. Its parameters have shown statistically significant differences (Kruskal-Wallis, p < 0.001) for short-term and age-related corneal changes. In addition, it was observed that age-related changes influence the corneal biomechanical behaviour when corneal swelling is induced. This study shows that the Generalized Gamma distribution can be utilized to model corneal speckle in OCT in vivo, providing complementary quantified information where the micro-structure of corneal tissue is of essence. PMID:28101409

  14. Development of a statistical model for cervical cancer cell death with irreversible electroporation in vitro.

    PubMed

    Yang, Yongji; Moser, Michael A J; Zhang, Edwin; Zhang, Wenjun; Zhang, Bing

    2018-01-01

    The aim of this study was to develop a statistical model for cell death by irreversible electroporation (IRE) and to show that the statistical model is more accurate than the electric field threshold model in the literature, using cervical cancer cells in vitro. The HeLa cell line was cultured and treated with different IRE protocols in order to obtain data for modeling the statistical relationship between cell death and pulse-setting parameters. In total, 340 in vitro experiments were performed with a commercial IRE pulse system, including a pulse generator and an electric cuvette. The Trypan blue staining technique was used to evaluate cell death after 4 hours of incubation following IRE treatment. The Peleg-Fermi model was used in the study to build the statistical relationship using the cell viability data obtained from the in vitro experiments. A finite element model of IRE for the electric field distribution was also built. Comparison of ablation zones between the statistical model and the electric threshold model (drawn from the finite element model) was used to show the accuracy of the proposed statistical model in the description of the ablation zone and its applicability for different pulse-setting parameters. The statistical models describing the relationships between HeLa cell death and pulse length and the number of pulses, respectively, were built. The values of the curve fitting parameters were obtained using the Peleg-Fermi model for the treatment of cervical cancer with IRE. The difference in the ablation zone between the statistical model and the electric threshold model was also illustrated to show the accuracy of the proposed statistical model in the representation of the ablation zone in IRE. This study concluded that: (1) the proposed statistical model accurately described the ablation zone of IRE with cervical cancer cells, and was more accurate compared with the electric field model; (2) the proposed statistical model was able to estimate the value of electric
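
    As a rough illustration of a Peleg-Fermi type fit, the sketch below fits a sigmoidal survival curve S = 1 / (1 + exp((E - Ec)/A)) to made-up viability data with scipy, assuming (as is common in the IRE literature) that Ec and A decay exponentially with the number of pulses; the functional forms, starting values and data are illustrative assumptions, not the study's fitted model.

```python
import numpy as np
from scipy.optimize import curve_fit

# Peleg-Fermi survival model: S = 1 / (1 + exp((E - Ec) / A)); the critical
# field Ec and kinetic constant A are assumed to decay with pulse number n.
def peleg_fermi(X, Ec0, k1, A0, k2):
    E, n = X
    Ec = Ec0 * np.exp(-k1 * n)
    A = A0 * np.exp(-k2 * n)
    return 1.0 / (1.0 + np.exp((E - Ec) / A))

# Illustrative viability data: field strength E (V/cm), pulse number n, and
# measured surviving fraction S (all numbers made up for the sketch).
E = np.array([500, 1000, 1500, 2000, 500, 1000, 1500, 2000], float)
n = np.array([10, 10, 10, 10, 90, 90, 90, 90], float)
S = np.array([0.95, 0.70, 0.30, 0.08, 0.85, 0.45, 0.12, 0.02])

popt, _ = curve_fit(peleg_fermi, (E, n), S,
                    p0=(1500, 0.005, 300, 0.005), maxfev=20000)
print("fitted Ec0, k1, A0, k2:", np.round(popt, 4))
```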

  15. Dynamically biased statistical model for the ortho/para conversion in the H2 + H3+ → H3+ + H2 reaction.

    PubMed

    Gómez-Carrasco, Susana; González-Sánchez, Lola; Aguado, Alfredo; Sanz-Sanz, Cristina; Zanchet, Alexandre; Roncero, Octavio

    2012-09-07

    In this work we present a dynamically biased statistical model to describe the evolution of the title reaction from a statistical to a more direct mechanism, using quasi-classical trajectories (QCT). The method is based on the one previously proposed by Park and Light [J. Chem. Phys. 126, 044305 (2007)]. A recent global potential energy surface is used here to calculate the capture probabilities, instead of the long-range ion-induced dipole interactions. The dynamical constraints are introduced by considering a scrambling matrix which depends on energy and determines the probability of the identity/hop/exchange mechanisms. These probabilities are calculated using QCT. It is found that the high zero-point energy of the fragments is transferred to the rest of the degrees of freedom, which shortens the lifetime of H5(+) complexes and, as a consequence, the exchange mechanism is produced in a lower proportion. The zero-point energy (ZPE) is not properly described in quasi-classical trajectory calculations, and an approximation is made in which the initial ZPE of the reactants is reduced in the QCT calculations to obtain a new ZPE-biased scrambling matrix. This reduction of the ZPE is explained by the need to correct the pure classical level number of the H5(+) complex, as done in classical simulations of unimolecular processes, and to get equivalent quantum and classical rate constants using Rice-Ramsperger-Kassel-Marcus theory. This matrix allows us to obtain a ratio of hop/exchange mechanisms, α(T), in rather good agreement with recent experimental results by Crabtree et al. [J. Chem. Phys. 134, 194311 (2011)] at room temperature. At lower temperatures, however, the present simulations predict ratios that are too high because the biased scrambling matrix is not statistical enough. This demonstrates the importance of applying quantum methods to simulate this reaction at the low temperatures of astrophysical interest.

  16. "Act in Good Faith."

    ERIC Educational Resources Information Center

    McKay, Robert B.

    1979-01-01

    It is argued that the Supreme Court's Bakke decision overturning the University of California's minority admissions program is good for those who favor affirmative action programs in higher education. The Supreme Court gives wide latitude for devising programs that take race and ethnic background into account if colleges are acting in good faith.…

  17. A framework for risk assessment and decision-making strategies in dangerous good transportation.

    PubMed

    Fabiano, B; Currò, F; Palazzi, E; Pastorino, R

    2002-07-01

    This paper addresses the risk from dangerous goods transport by road and strategies for selecting loads and routes, by developing an original site-oriented framework of general applicability at the local level. A realistic evaluation of the frequency must take into account, on one side, inherent factors (e.g. tunnels, rail bridges, bend radii, slope, characteristics of the neighborhood, etc.) and, on the other side, factors correlated with traffic conditions (e.g. dangerous goods trucks, etc.). Field data were collected on the selected highway by systematic investigation, providing input data for a database reporting tendencies and intrinsic parameter/site-oriented statistics. The developed technique was applied to a pilot area, considering both individual risk and societal risk and making reference to flammable and explosive scenarios. In this way, a risk assessment sensitive to route features and the exposed population is proposed, so that the overall uncertainties in risk analysis can be lowered.

  18. A Pretty Good Paper about Pretty Good Privacy.

    ERIC Educational Resources Information Center

    McCollum, Roy

    With today's growth in the use of electronic information systems for e-mail, data development and research, and the relative ease of access to such resources, protecting one's data and correspondence has become a great concern. "Pretty Good Privacy" (PGP), an encryption program developed by Phil Zimmermann, may be the software tool that…

  19. OCT Amplitude and Speckle Statistics of Discrete Random Media.

    PubMed

    Almasian, Mitra; van Leeuwen, Ton G; Faber, Dirk J

    2017-11-01

    Speckle, amplitude fluctuations in optical coherence tomography (OCT) images, contains information on sub-resolution structural properties of the imaged sample. Speckle statistics could therefore be utilized in the characterization of biological tissues. However, a rigorous theoretical framework relating OCT speckle statistics to structural tissue properties has yet to be developed. As a first step, we present a theoretical description of OCT speckle, relating the OCT amplitude variance to size and organization for samples of discrete random media (DRM). Starting the calculations from the size and organization of the scattering particles, we analytically find expressions for the OCT amplitude mean, amplitude variance, the backscattering coefficient and the scattering coefficient. We assume fully developed speckle and verify the validity of this assumption by experiments on controlled samples of silica microspheres suspended in water. We show that the OCT amplitude variance is sensitive to sub-resolution changes in size and organization of the scattering particles. Experimentally determined and theoretically calculated optical properties are compared and in good agreement.

  20. A Statistical Framework for the Functional Analysis of Metagenomes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharon, Itai; Pati, Amrita; Markowitz, Victor

    2008-10-01

    Metagenomic studies consider the genetic makeup of microbial communities as a whole, rather than their individual member organisms. The functional and metabolic potential of microbial communities can be analyzed by comparing the relative abundance of gene families in their collective genomic sequences (metagenome) under different conditions. Such comparisons require accurate estimation of gene family frequencies. They present a statistical framework for assessing these frequencies based on the Lander-Waterman theory developed originally for Whole Genome Shotgun (WGS) sequencing projects. They also provide a novel method for assessing the reliability of the estimations, which can be used for removing seemingly unreliable measurements. They tested their method on a wide range of datasets, including simulated genomes and real WGS data from sequencing projects of whole genomes. Results suggest that their framework corrects inherent biases in accepted methods and provides a good approximation to the true statistics of gene families in WGS projects.

  1. 42 CFR 93.210 - Good faith.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 1 2014-10-01 2014-10-01 false Good faith. 93.210 Section 93.210 Public Health... MISCONDUCT Definitions § 93.210 Good faith. Good faith as applied to a complainant or witness, means having a... allegation or cooperation with a research misconduct proceeding is not in good faith if made with knowing or...

  2. 42 CFR 93.210 - Good faith.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 1 2013-10-01 2013-10-01 false Good faith. 93.210 Section 93.210 Public Health... MISCONDUCT Definitions § 93.210 Good faith. Good faith as applied to a complainant or witness, means having a... allegation or cooperation with a research misconduct proceeding is not in good faith if made with knowing or...

  3. 42 CFR 93.210 - Good faith.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 1 2012-10-01 2012-10-01 false Good faith. 93.210 Section 93.210 Public Health... MISCONDUCT Definitions § 93.210 Good faith. Good faith as applied to a complainant or witness, means having a... allegation or cooperation with a research misconduct proceeding is not in good faith if made with knowing or...

  4. The statistical analysis of global climate change studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardin, J.W.

    1992-01-01

    The focus of this work is to contribute to the enhancement of the relationship between climatologists and statisticians. The analysis of global change data has been underway for many years by atmospheric scientists. Much of this analysis includes a heavy reliance on statistics and statistical inference. Some specific climatological analyses are presented and the dependence on statistics is documented before the analysis is undertaken. The first problem presented involves the fluctuation-dissipation theorem and its application to global climate models. This problem has a sound theoretical niche in the literature of both climate modeling and physics, but a statistical analysis in which the data are obtained from the model to show graphically the relationship has not been undertaken. It is under this motivation that the author presents this problem. A second problem concerning the standard errors in estimating global temperatures is purely statistical in nature, although very little material exists for sampling on such a frame. This problem not only has climatological and statistical ramifications, but political ones as well. It is planned to use these results in a further analysis of global warming using actual data collected on the earth. In order to simplify the analysis of these problems, the development of a computer program, MISHA, is presented. This interactive program contains many of the routines, functions, graphics, and map projections needed by the climatologist in order to effectively enter the arena of data visualization.

  5. Statistical Inference at Work: Statistical Process Control as an Example

    ERIC Educational Resources Information Center

    Bakker, Arthur; Kent, Phillip; Derry, Jan; Noss, Richard; Hoyles, Celia

    2008-01-01

    To characterise statistical inference in the workplace this paper compares a prototypical type of statistical inference at work, statistical process control (SPC), with a type of statistical inference that is better known in educational settings, hypothesis testing. Although there are some similarities between the reasoning structure involved in…

  6. Hyperparameterization of soil moisture statistical models for North America with Ensemble Learning Models (Elm)

    NASA Astrophysics Data System (ADS)

    Steinberg, P. D.; Brener, G.; Duffy, D.; Nearing, G. S.; Pelissier, C.

    2017-12-01

    Hyperparameterization of statistical models, i.e. automated model scoring and selection using methods such as evolutionary algorithms, grid searches, and randomized searches, can improve forecast model skill by reducing errors associated with model parameterization, model structure, and the statistical properties of training data. Ensemble Learning Models (Elm), and the related Earthio package, provide a flexible interface for automating the selection of parameters and model structure for machine learning models common in climate science and land cover classification, offering convenient tools for loading NetCDF, HDF, Grib, or GeoTiff files, decomposition methods like PCA and manifold learning, and parallel training and prediction with unsupervised and supervised classification, clustering, and regression estimators. Continuum Analytics is using Elm to experiment with statistical soil moisture forecasting based on meteorological forcing data from NASA's North American Land Data Assimilation System (NLDAS). There, Elm uses the NSGA-2 multiobjective optimization algorithm to optimize the statistical preprocessing of the forcing data (i.e. feature engineering) and so improve the goodness-of-fit of the statistical models. This presentation will discuss Elm and its components, including dask (distributed task scheduling), xarray (data structures for n-dimensional arrays), and scikit-learn (statistical preprocessing, clustering, classification, regression), and it will show how NSGA-2 is used to automate the selection of soil moisture forecast statistical models for North America.
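
    Elm's own API is not reproduced here; as a generic stand-in for the automated model scoring and selection idea, the sketch below runs a randomized hyperparameter search over a gradient-boosting regressor with scikit-learn, on synthetic data playing the role of NLDAS forcing features and soil moisture targets.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import RandomizedSearchCV

# Placeholder features/targets standing in for forcing data and soil moisture.
X, y = make_regression(n_samples=2000, n_features=12, noise=10.0,
                       random_state=0)

param_distributions = {
    "n_estimators": [100, 200, 400],
    "max_depth": [2, 3, 4, 5],
    "learning_rate": np.logspace(-3, -0.5, 10),
    "subsample": [0.6, 0.8, 1.0],
}

# Automated model scoring and selection via cross-validated random search.
search = RandomizedSearchCV(GradientBoostingRegressor(random_state=0),
                            param_distributions, n_iter=20, cv=3,
                            scoring="neg_mean_squared_error", random_state=0)
search.fit(X, y)
print("best parameters:", search.best_params_)
print("best CV MSE:", -search.best_score_)
```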

  7. Public Goods and Services.

    ERIC Educational Resources Information Center

    Zicht, Barbara, Ed.; And Others

    1982-01-01

    This document includes an introduction to the role of government in the production of public goods and services and 3 brief teaching units. The introduction describes the nature of a mixed economy and points out why most people identify the production of goods and services with private enterprise rather than government. It develops a rationale for…

  8. Productivity and Capital Goods.

    ERIC Educational Resources Information Center

    Zicht, Barbara, Ed.; And Others

    1981-01-01

    Providing teacher background on the concepts of productivity and capital goods, this document presents 3 teaching units about these ideas for different grade levels. The grade K-2 unit, "How Do They Do It?," is designed to provide students with an understanding of how physical capital goods add to productivity. Activities include a field trip to…

  9. Fully Bayesian tests of neutrality using genealogical summary statistics.

    PubMed

    Drummond, Alexei J; Suchard, Marc A

    2008-10-31

    Many data summary statistics have been developed to detect departures from neutral expectations of evolutionary models. However, questions about the neutrality of the evolution of genetic loci within natural populations remain difficult to assess. One critical cause of this difficulty is that most methods for testing neutrality make simplifying assumptions simultaneously about the mutational model and the population size model. Consequently, rejecting the null hypothesis of neutrality under these methods could result from violations of either or both assumptions, making interpretation troublesome. Here we harness posterior predictive simulation to exploit summary statistics of both the data and model parameters to test the goodness-of-fit of standard models of evolution. We apply the method to test the selective neutrality of molecular evolution in non-recombining gene genealogies and we demonstrate the utility of our method on four real data sets, identifying significant departures from neutrality in human influenza A virus, even after controlling for variation in population size. Importantly, by employing a full model-based Bayesian analysis, our method separates the effects of demography from the effects of selection. The method also allows multiple summary statistics to be used in concert, thus potentially increasing sensitivity. Furthermore, our method remains useful in situations where analytical expectations and variances of summary statistics are not available. This aspect has great potential for the analysis of temporally spaced data, an expanding area previously ignored for limited availability of theory and methods.
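
    The genealogical machinery itself is beyond a short example, but the core idea of a posterior predictive check with a summary statistic can be sketched on a toy conjugate model: simulate replicate datasets from the posterior, compute the same summary statistic on each, and see where the observed value falls. Everything below (the Poisson-Gamma model, the variance statistic, the data) is an illustrative assumption, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(6)

# Observed counts that are deliberately over-dispersed relative to a Poisson.
observed = rng.negative_binomial(n=5, p=0.3, size=100)
obs_stat = observed.var()          # summary statistic: sample variance

# Posterior for the Poisson rate under a Gamma(a0, b0) prior (conjugate).
a0, b0 = 1.0, 1.0
a_post, b_post = a0 + observed.sum(), b0 + observed.size

# Posterior predictive simulation of replicate datasets and their statistic.
rep_stats = []
for _ in range(2000):
    lam = rng.gamma(a_post, 1.0 / b_post)
    rep = rng.poisson(lam, size=observed.size)
    rep_stats.append(rep.var())
rep_stats = np.array(rep_stats)

# Posterior predictive p-value: values near 0 or 1 flag a poorly fitting model.
ppp = np.mean(rep_stats >= obs_stat)
print(f"observed variance = {obs_stat:.1f}, posterior predictive p = {ppp:.3f}")
```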

  10. Flexible statistical modelling detects clinical functional magnetic resonance imaging activation in partially compliant subjects.

    PubMed

    Waites, Anthony B; Mannfolk, Peter; Shaw, Marnie E; Olsrud, Johan; Jackson, Graeme D

    2007-02-01

    Clinical functional magnetic resonance imaging (fMRI) occasionally fails to detect significant activation, often due to variability in task performance. The present study seeks to test whether a more flexible statistical analysis can better detect activation, by accounting for variance associated with variable compliance to the task over time. Experimental results and simulated data both confirm that even at 80% compliance to the task, such a flexible model outperforms standard statistical analysis when assessed using the extent of activation (experimental data), goodness of fit (experimental data), and area under the operator characteristic curve (simulated data). Furthermore, retrospective examination of 14 clinical fMRI examinations reveals that in patients where the standard statistical approach yields activation, there is a measurable gain in model performance in adopting the flexible statistical model, with little or no penalty in lost sensitivity. This indicates that a flexible model should be considered, particularly for clinical patients who may have difficulty complying fully with the study task.

  11. Variability-aware compact modeling and statistical circuit validation on SRAM test array

    NASA Astrophysics Data System (ADS)

    Qiao, Ying; Spanos, Costas J.

    2016-03-01

    Variability modeling at the compact transistor model level can enable statistically optimized designs in view of limitations imposed by the fabrication technology. In this work we propose a variability-aware compact model characterization methodology based on stepwise parameter selection. Transistor I-V measurements are obtained from a bit-transistor-accessible SRAM test array fabricated using a collaborating foundry's 28nm FDSOI technology. Our in-house customized Monte Carlo simulation bench can incorporate these statistical compact models, and simulation results on SRAM writability performance are very close to measurements in terms of the estimated distributions. Our proposed statistical compact model parameter extraction methodology also has the potential to predict non-Gaussian behavior in statistical circuit performances through mixtures of Gaussian distributions.

  12. Differentiation of commercial vermiculite based on statistical analysis of bulk chemical data: Fingerprinting vermiculite from Libby, Montana U.S.A

    USGS Publications Warehouse

    Gunter, M.E.; Singleton, E.; Bandli, B.R.; Lowers, H.A.; Meeker, G.P.

    2005-01-01

    Major-, minor-, and trace-element compositions, as determined by X-ray fluorescence (XRF) analysis, were obtained on 34 samples of vermiculite to ascertain whether chemical differences exist to the extent of determining the source of commercial products. The sample set included ores from four deposits, seven commercially available garden products, and insulation from four attics. The trace-element distributions of Ba, Cr, and V can be used to distinguish the Libby vermiculite samples from the garden products. In general, the overall composition of the Libby and South Carolina deposits appeared similar, but differed from the South Africa and China deposits based on simple statistical methods. Cluster analysis provided a good distinction of the four ore types, grouped the four attic samples with the Libby ore, and, with less certainty, grouped the garden samples with the South Africa ore.
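
    A small sketch of the clustering step on trace-element data: standardize Ba, Cr and V concentrations and apply hierarchical (Ward) clustering with scipy. The concentrations below are invented to show the workflow and are not the paper's XRF measurements.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

# Illustrative Ba/Cr/V concentrations (ppm) for a handful of vermiculite
# samples; the values are made up, not real XRF data.
samples = np.array([
    [1200,  90, 110],
    [1150,  95, 105],
    [ 300, 250,  60],
    [ 320, 240,  65],
    [ 800, 150,  90],
    [ 780, 160,  95],
], float)

# Standardize each element, then cluster with Ward linkage and cut the tree.
Z = linkage(zscore(samples, axis=0), method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")
print("cluster assignment per sample:", labels)
```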

  13. Frog Statistics

    Science.gov Websites

    Web-server access statistics (wwwstats output) for the Whole Frog Project and Virtual Frog Dissection pages, filtered to exclude duplicate or extraneous accesses; note that the reported totals under-represent the bytes actually requested.

  14. 'She sort of shines': midwives' accounts of 'good' midwifery and 'good' leadership.

    PubMed

    Byrom, Sheena; Downe, Soo

    2010-02-01

    to explore midwives' accounts of the characteristics of 'good' leadership and 'good' midwifery. a phenomenological interview survey. Participants were asked about what made both good and poor midwives and leaders. two maternity departments within National Health Service trusts in the North West of England. qualified midwives, selected by random sampling stratified to encompass senior and junior grades. thematic analysis, carried out manually. ten midwives were interviewed. Sixteen codes and six sub-themes were generated. Across the responses, two clear dimensions (themes) were identified, relating on the one hand to aspects of knowledge, skill and competence (termed 'skilled competence'), and on the other hand to specific personality characteristics (termed 'emotional intelligence'). This study suggests that the ability to act knowledgeably, safely and competently was seen as a basic requirement for both clinical midwives and midwife leaders. The added element which made both the midwife and the leader 'good' was the extent of their emotional capability. this small-scale in-depth study could form the basis for hypothesis generation for larger scale work in this area in future. The findings offer some reinforcement for the potential applicability of theories of transformational leadership to midwifery management and practice. Copyright 2008 Elsevier Ltd. All rights reserved.

  15. Median statistics estimates of Hubble and Newton's constants

    NASA Astrophysics Data System (ADS)

    Bethapudi, Suryarao; Desai, Shantanu

    2017-02-01

    Robustness of any statistic depends upon the number of assumptions it makes about the measured data. We point out the advantages of median statistics using toy numerical experiments and demonstrate its robustness when the number of assumptions we can make about the data is limited. We then apply the median statistics technique to obtain estimates of two constants of nature, the Hubble constant (H0) and Newton's gravitational constant (G), both of which show significant differences between different measurements. For H0, we update the analyses done by Chen and Ratra (2011) and Gott et al. (2001) using 576 measurements. We find, after grouping the different results according to their primary type of measurement, that the median estimates are H0 = 72.5 (+2.5, -8) km/s/Mpc, with errors corresponding to the 95% c.l. (2σ), and G = 6.674702 (+0.0014, -0.0009) × 10^-11 N m^2 kg^-2, corresponding to the 68% c.l. (1σ).
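
    A compact sketch of the median statistics idea, assuming only that the measurements are independent: the probability that the true median lies between order statistics i and i+1 is C(N, i) / 2^N, so a confidence range can be read off the sorted values by widening symmetrically until the binomial coverage is reached. The H0 values below are placeholders, not the paper's 576 measurements.

```python
import numpy as np
from scipy.stats import binom

def median_statistics(values, cl=0.95):
    """Median and a confidence range from binomial order statistics."""
    x = np.sort(np.asarray(values, float))
    N = len(x)
    med = np.median(x)
    lo = hi = N // 2
    # Widen [x[lo], x[hi]] until P(true median inside) >= cl; that coverage
    # equals binom.cdf(hi) - binom.cdf(lo) with success probability 1/2.
    while (binom.cdf(hi, N, 0.5) - binom.cdf(lo, N, 0.5)) < cl and \
          (lo > 0 or hi < N - 1):
        lo, hi = max(lo - 1, 0), min(hi + 1, N - 1)
    return med, x[lo], x[hi]

# Illustrative H0 measurements (km/s/Mpc); placeholder values only.
h0 = [67.4, 73.2, 69.8, 74.0, 70.0, 72.5, 68.3, 71.1, 75.1, 69.0]
print("median and 95% range:", median_statistics(h0))
```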

  16. Mining a clinical data warehouse to discover disease-finding associations using co-occurrence statistics

    PubMed Central

    Cao, Hui; Markatou, Marianthi; Melton, Genevieve B.; Chiang, Michael F.; Hripcsak, George

    2005-01-01

    This paper applies co-occurrence statistics to discover disease-finding associations in a clinical data warehouse. We used two methods, χ2 statistics and the proportion confidence interval (PCI) method, to measure the dependence of pairs of diseases and findings, and then used heuristic cutoff values for association selection. An intrinsic evaluation showed that 94 percent of disease-finding associations obtained by χ2 statistics and 76.8 percent obtained by the PCI method were true associations. The selected associations were used to construct knowledge bases of disease-finding relations (KB-χ2, KB-PCI). An extrinsic evaluation showed that both KB-χ2 and KB-PCI could assist in eliminating clinically non-informative and redundant findings from problem lists generated by our automated problem list summarization system. PMID:16779011
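
    A minimal sketch of scoring one disease-finding pair: a chi-squared test of independence on a 2x2 co-occurrence table, plus a simple Wald confidence interval for P(finding | disease) standing in for the PCI idea; the counts and the selection cutoffs are illustrative assumptions, not the paper's data or exact rule.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Toy co-occurrence counts for one (disease, finding) pair across records:
# rows = disease present/absent, columns = finding present/absent.
table = np.array([[120,  80],    # disease present
                  [ 60, 740]])   # disease absent

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2g}")

# Wald 95% confidence interval for P(finding | disease); an association could
# be kept if the interval stays above a chosen cutoff.
k, n = table[0, 0], table[0].sum()
phat = k / n
half = 1.96 * np.sqrt(phat * (1 - phat) / n)
print(f"P(finding|disease) = {phat:.2f}  95% CI = ({phat-half:.2f}, {phat+half:.2f})")
```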

  17. Mining a clinical data warehouse to discover disease-finding associations using co-occurrence statistics.

    PubMed

    Cao, Hui; Markatou, Marianthi; Melton, Genevieve B; Chiang, Michael F; Hripcsak, George

    2005-01-01

    This paper applies co-occurrence statistics to discover disease-finding associations in a clinical data warehouse. We used two methods, chi2 statistics and the proportion confidence interval (PCI) method, to measure the dependence of pairs of diseases and findings, and then used heuristic cutoff values for association selection. An intrinsic evaluation showed that 94 percent of disease-finding associations obtained by chi2 statistics and 76.8 percent obtained by the PCI method were true associations. The selected associations were used to construct knowledge bases of disease-finding relations (KB-chi2, KB-PCI). An extrinsic evaluation showed that both KB-chi2 and KB-PCI could assist in eliminating clinically non-informative and redundant findings from problem lists generated by our automated problem list summarization system.

  18. Statistical physics of human cooperation

    NASA Astrophysics Data System (ADS)

    Perc, Matjaž; Jordan, Jillian J.; Rand, David G.; Wang, Zhen; Boccaletti, Stefano; Szolnoki, Attila

    2017-05-01

    Extensive cooperation among unrelated individuals is unique to humans, who often sacrifice personal benefits for the common good and work together to achieve what they are unable to execute alone. The evolutionary success of our species is indeed due, to a large degree, to our unparalleled other-regarding abilities. Yet, a comprehensive understanding of human cooperation remains a formidable challenge. Recent research in the social sciences indicates that it is important to focus on the collective behavior that emerges as the result of the interactions among individuals, groups, and even societies. Non-equilibrium statistical physics, in particular Monte Carlo methods and the theory of collective behavior of interacting particles near phase transition points, has proven to be very valuable for understanding counterintuitive evolutionary outcomes. By treating models of human cooperation as classical spin models, a physicist can draw on familiar settings from statistical physics. However, unlike pairwise interactions among particles that typically govern solid-state physics systems, interactions among humans often involve group interactions, and they also involve a larger number of possible states even for the most simplified description of reality. The complexity of solutions therefore often surpasses that observed in physical systems. Here we review experimental and theoretical research that advances our understanding of human cooperation, focusing on spatial pattern formation, on the spatiotemporal dynamics of observed solutions, and on self-organization that may either promote or hinder socially favorable states.

  19. Visual and Statistical Analysis of Digital Elevation Models Generated Using Idw Interpolator with Varying Powers

    NASA Astrophysics Data System (ADS)

    Asal, F. F.

    2012-07-01

    Digital elevation data obtained from different Engineering Surveying techniques are utilized in generating a Digital Elevation Model (DEM), which is employed in many engineering and environmental applications. These data are usually in discrete point format, making it necessary to utilize an interpolation approach for the creation of the DEM. Quality assessment of the DEM is a vital issue controlling its use in different applications; however, this assessment relies heavily on statistical methods while neglecting visual methods. This research applies visual analysis to DEMs generated using the IDW interpolator with varying powers in order to examine its potential for assessing the effects of the variation of the IDW power on DEM quality. Real elevation data have been collected in the field using a total station instrument in corrugated terrain. DEMs have been generated from the data at a unified cell size using the IDW interpolator with power values ranging from one to ten. Visual analysis has been undertaken using 2D and 3D views of the DEM; in addition, statistical analysis has been performed to assess the validity of the visual techniques in such analysis. Visual analysis has shown that smoothing of the DEM decreases as the power value increases up to a power of four; however, increasing the power beyond four does not leave noticeable changes in the 2D and 3D views of the DEM. The statistical analysis has supported these results, where the Standard Deviation (SD) of the DEM has increased with increasing power. More specifically, changing the power from one to two has produced 36% of the total increase in SD (the increase due to changing the power from one to ten), and changing to powers of three and four has given 60% and 75%, respectively. This reflects a decrease in DEM smoothing as the power of the IDW increases. The study also has shown that applying visual methods supported by statistical
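
    The effect reported above (less smoothing, hence a larger DEM standard deviation, as the IDW power grows) is easy to reproduce in outline; the sketch below interpolates invented scattered elevations onto a grid with several power values. Grid size, point density and the underlying surface are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Scattered elevation points (x, y, z); placeholder for total-station data.
pts = rng.uniform(0, 100, size=(200, 2))
z = (50 + 10 * np.sin(pts[:, 0] / 15.0) + 8 * np.cos(pts[:, 1] / 20.0)
     + rng.normal(scale=0.5, size=200))

def idw_grid(pts, z, power, cell=2.0, eps=1e-12):
    """Inverse Distance Weighting onto a regular grid with a given power."""
    gx, gy = np.meshgrid(np.arange(0, 100, cell), np.arange(0, 100, cell))
    dem = np.empty_like(gx, dtype=float)
    for i in range(gx.shape[0]):
        for j in range(gx.shape[1]):
            d = np.hypot(pts[:, 0] - gx[i, j], pts[:, 1] - gy[i, j])
            w = 1.0 / (d + eps) ** power
            dem[i, j] = np.sum(w * z) / np.sum(w)
    return dem

# Smoothing decreases (DEM standard deviation rises) as the power grows.
for p in (1, 2, 4, 10):
    print(f"power {p:2d}: DEM SD = {idw_grid(pts, z, p).std():.3f}")
```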

  20. Statistics of indicated pressure in combustion engine.

    NASA Astrophysics Data System (ADS)

    Sitnik, L. J.; Andrych-Zalewska, M.

    2016-09-01

    The paper presents the classic form of the pressure waveforms in the combustion chamber of a diesel engine, based on a strict analytical treatment of the displacement volume. The pressure measurements were obtained with the engine running on an engine dynamometer stand. The study was conducted using the 13-phase ESC (European Stationary Cycle) test. In each test phase, 90 pressure waveforms were archived. Extensive statistical analysis showed that, while the engine is idling, the distribution of the 90 pressure values at any crankshaft angle can be described by a uniform distribution. At each point of the engine characteristic corresponding to the individual phases of the ESC test, the 90 pressure values at any crankshaft angle can be described by a normal distribution. These relationships were verified using the Shapiro-Wilk, Jarque-Bera, Lilliefors, and Anderson-Darling tests. Subsequently, descriptive statistics of the pressure data were obtained for each value of the crank angle. In essence, this yields a new approach to pressure waveform analysis in the combustion chamber of the engine. The new method can be used for further analysis, especially of the combustion process in the engine. For example, very large variances of pressure were found near the transition from the compression to the expansion stroke. This lack of stationarity of the process can be important both for exhaust gas emissions and for the fuel consumption of the engine.
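
    As a rough illustration of the normality checks named above, a minimal sketch applying three of the four tests to a synthetic sample of 90 values; the data and parameters are invented, and the Lilliefors variant is available separately (e.g. in statsmodels):

        # Normality tests on a synthetic sample of 90 "pressure" values at one crank angle.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        pressure = rng.normal(loc=55.0, scale=1.2, size=90)     # 90 simulated pressure readings, bar

        w_stat, w_p = stats.shapiro(pressure)                   # Shapiro-Wilk
        jb_stat, jb_p = stats.jarque_bera(pressure)             # Jarque-Bera
        ad = stats.anderson(pressure, dist='norm')               # Anderson-Darling (critical values, not p)

        print(f"Shapiro-Wilk:     W={w_stat:.3f}, p={w_p:.3f}")
        print(f"Jarque-Bera:      JB={jb_stat:.3f}, p={jb_p:.3f}")
        print(f"Anderson-Darling: A2={ad.statistic:.3f}, 5% critical value={ad.critical_values[2]:.3f}")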

  1. MQSA National Statistics

    MedlinePlus

    ... Standards Act and Program MQSA Insights MQSA National Statistics ... but should level off with time. Archived Scorecard Statistics: 2018, 2017, 2016 ...

  2. The prevalence and characterization of self-medication for obtaining pain relief among undergraduate nursing students.

    PubMed

    Souza, Layz Alves Ferreira; da Silva, Camila Damázio; Ferraz, Gisely Carvalho; Sousa, Fátima Aparecida Emm Faleiros; Pereira, Lílian Varanda

    2011-01-01

    This study investigates the prevalence of self-medication among undergraduate nursing students seeking to relieve pain and characterizes the pain and the relief obtained through the medication used. This epidemiological and cross-sectional study was carried out with 211 nursing students from a public university in Goiás, GO, Brazil. A numerical scale (0-10) measured pain intensity and relief. The prevalence of self-medication was 38.8%. The source and main determining factor of this practice were the student him/herself (54.1%) and lack of time to go to a doctor (50%), respectively. The most frequently used analgesic was dipyrone (59.8%), and pain relief was classified as good (Md = 8.5; Max = 10; Min = 0). The prevalence of self-medication was higher than that observed in similar studies. Many students reported that the relief obtained through self-medication was good, a fact that can delay the clarification of a diagnosis and its appropriate treatment.

  3. Tribological behaviour and statistical experimental design of sintered iron-copper based composites

    NASA Astrophysics Data System (ADS)

    Popescu, Ileana Nicoleta; Ghiţă, Constantin; Bratu, Vasile; Palacios Navarro, Guillermo

    2013-11-01

    Sintered iron-copper based composites for automotive brake pads have a complex composition and should have good physical, mechanical, and tribological characteristics. In this paper, we obtained frictional composites by the Powder Metallurgy (P/M) technique and characterized them from a microstructural and tribological point of view. The morphology of the raw powders was determined by SEM, and the surfaces of the obtained sintered friction materials were analyzed by ESEM, EDS elemental analysis, and compo-images. One lot of samples was tested on a "pin-on-disc" type wear machine under dry sliding conditions, at applied loads between 3.5 and 11.5 × 10^-1 MPa and relative speeds of 12.5 to 16.9 m/s at the braking point, at constant temperature. The other lot of samples was tested on an inertial test stand according to a methodology simulating the real conditions of dry friction, at a contact pressure of 2.5-3 MPa and at 300-1200 rpm. The most important characteristics required of sintered friction materials are a high and stable friction coefficient during braking and, for high durability in service, low wear, high corrosion resistance, high thermal conductivity, mechanical resistance, and thermal stability at elevated temperature. Because of the importance of the tribological characteristics (wear rate and friction coefficient) of sintered iron-copper based composites, we predicted the tribological behaviour through statistical analysis. For the first lot of samples, the response variables Yi (the wear rate and friction coefficient) were correlated with x1 and x2 (the coded values of applied load and relative speed at the braking point, respectively) using a linear factorial design approach. We obtained brake friction materials with improved wear resistance characteristics and high and stable friction coefficients. It has been shown, through the experimental data and the obtained linear regression equations, that the sintered composites' wear rate increases
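
    A minimal sketch of the kind of two-factor linear factorial fit described above, regressing a response on coded load and speed; the numbers are synthetic placeholders, not the paper's measurements:

        # Two-level factorial fit: Y = b0 + b1*x1 + b2*x2 + b12*x1*x2 with coded factors.
        import numpy as np

        x1 = np.array([-1, -1,  1,  1])          # coded applied load (low/high)
        x2 = np.array([-1,  1, -1,  1])          # coded relative speed (low/high)
        y  = np.array([0.82, 0.95, 1.10, 1.31])  # illustrative wear-rate responses

        X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2])  # intercept, main effects, interaction
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        b0, b1, b2, b12 = coef
        print(f"Y = {b0:.3f} + {b1:.3f}*x1 + {b2:.3f}*x2 + {b12:.3f}*x1*x2")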

  4. 41 CFR 302-7.201 - Is temporary storage in excess of authorized limits and excess valuation of goods and services...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... excess of authorized limits and excess valuation of goods and services payable at Government expense? 302... Government expense? No, charges for excess weight, valuation above the minimum amount, and services obtained... HOUSEHOLD GOODS AND PROFESSIONAL BOOKS, PAPERS, AND EQUIPMENT (PBP&E) Actual Expense Method § 302-7.201 Is...

  5. 41 CFR 302-7.201 - Is temporary storage in excess of authorized limits and excess valuation of goods and services...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... excess of authorized limits and excess valuation of goods and services payable at Government expense? 302... Government expense? No, charges for excess weight, valuation above the minimum amount, and services obtained... HOUSEHOLD GOODS AND PROFESSIONAL BOOKS, PAPERS, AND EQUIPMENT (PBP&E) Actual Expense Method § 302-7.201 Is...

  6. The Content of Statistical Requirements for Authors in Biomedical Research Journals

    PubMed Central

    Liu, Tian-Yi; Cai, Si-Yu; Nie, Xiao-Lu; Lyu, Ya-Qi; Peng, Xiao-Xia; Feng, Guo-Shuang

    2016-01-01

    Background: Robust statistical design, sound statistical analysis, and standardized presentation are important to enhance the quality and transparency of biomedical research. This systematic review was conducted to summarize the statistical reporting requirements introduced by biomedical research journals with an impact factor of 10 or above, so that researchers give statistical issues serious consideration not only at the stage of data analysis but also at the stage of methodological design. Methods: Detailed statistical instructions for authors were downloaded from the homepage of each of the included journals or obtained from the editors directly via email. Then, we described the types and numbers of statistical guidelines introduced by different press groups. Items of the statistical reporting guidelines, as well as particular requirements, were summarized by frequency and grouped into design, method of analysis, and presentation. Finally, updated statistical guidelines and particular requirements for improvement were summed up. Results: In total, 21 of 23 press groups introduced at least one statistical guideline. More than half of the press groups update their statistical instructions for authors as new statistical reporting guidelines are issued. In addition, 16 press groups, covering 44 journals, address particular statistical requirements. Most of the particular requirements focused on the performance of statistical analysis and transparency in statistical reporting, including “address issues relevant to research design, including participant flow diagram, eligibility criteria, and sample size estimation,” and “statistical methods and the reasons.” Conclusions: Statistical requirements for authors are becoming increasingly comprehensive. Statistical requirements for authors remind researchers that they should give sufficient consideration not only to statistical methods during the research

  7. The Content of Statistical Requirements for Authors in Biomedical Research Journals.

    PubMed

    Liu, Tian-Yi; Cai, Si-Yu; Nie, Xiao-Lu; Lyu, Ya-Qi; Peng, Xiao-Xia; Feng, Guo-Shuang

    2016-10-20

    Robust statistical design, sound statistical analysis, and standardized presentation are important to enhance the quality and transparency of biomedical research. This systematic review was conducted to summarize the statistical reporting requirements introduced by biomedical research journals with an impact factor of 10 or above, so that researchers give statistical issues serious consideration not only at the stage of data analysis but also at the stage of methodological design. Detailed statistical instructions for authors were downloaded from the homepage of each of the included journals or obtained from the editors directly via email. Then, we described the types and numbers of statistical guidelines introduced by different press groups. Items of the statistical reporting guidelines, as well as particular requirements, were summarized by frequency and grouped into design, method of analysis, and presentation. Finally, updated statistical guidelines and particular requirements for improvement were summed up. In total, 21 of 23 press groups introduced at least one statistical guideline. More than half of the press groups update their statistical instructions for authors as new statistical reporting guidelines are issued. In addition, 16 press groups, covering 44 journals, address particular statistical requirements. Most of the particular requirements focused on the performance of statistical analysis and transparency in statistical reporting, including "address issues relevant to research design, including participant flow diagram, eligibility criteria, and sample size estimation," and "statistical methods and the reasons." Statistical requirements for authors are becoming increasingly comprehensive. Statistical requirements for authors remind researchers that they should give sufficient consideration not only to statistical methods during the research design, but also to standardized statistical reporting

  8. Statistical physics inspired energy-efficient coded-modulation for optical communications.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2012-04-15

    Because Shannon's entropy can be obtained by Stirling's approximation of the thermodynamic entropy, statistical physics energy-minimization methods are directly applicable to signal constellation design. We demonstrate that statistical physics inspired energy-efficient (EE) signal constellation designs, in combination with large-girth low-density parity-check (LDPC) codes, significantly outperform conventional LDPC-coded polarization-division multiplexed quadrature amplitude modulation schemes. We also describe an EE signal constellation design algorithm. Finally, we propose a discrete-time implementation of the D-dimensional transceiver and the corresponding EE polarization-division multiplexed system. © 2012 Optical Society of America
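
    The entropy connection invoked above is the textbook Stirling step, sketched here for orientation rather than reproduced from the paper. With N symbols distributed over constellation points with probabilities p_i = n_i / N:

        S = k_B \ln W, \qquad W = \frac{N!}{\prod_i n_i!}, \qquad n_i = N p_i ,

        \ln W \approx (N \ln N - N) - \sum_i \left( n_i \ln n_i - n_i \right) = -N \sum_i p_i \ln p_i ,

        S \approx -k_B N \sum_i p_i \ln p_i ,

    i.e. N times Shannon's entropy up to the constant k_B, which is why thermodynamic energy-minimization machinery carries over to constellation optimization.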

  9. Obtaining an equivalent beam

    NASA Technical Reports Server (NTRS)

    Butler, Thomas G.

    1990-01-01

    In modeling a complex structure, the researcher was faced with a component that would have logical appeal if it were modeled as a beam. The structure was the mast of a robot-controlled gantry crane. The structure up to this point already had a large number of degrees of freedom, so the idea of conserving grid points by modeling the mast as a beam was attractive. The researcher decided to make a separate problem of the mast and model it in three dimensions with plates, then extract the equivalent beam properties by setting up the loading to simulate beam-like deformation and constraints. The results could then be used to represent the mast as a beam in the full model. A comparison was made of properties derived from models with different constraints versus manual calculations. The researcher shows that the three-dimensional model is ineffective in trying to conform to the requirements of an equivalent beam representation. If a full 3-D plate model were used in the complete representation of the crane structure, good results would be obtained. Since the attempt is to economize on the size of the model, a better way to achieve the same results is to use substructuring and condense the mast to equivalent end boundary and intermediate mass points.

  10. Good Health Before Pregnancy: Preconception Care

    MedlinePlus

    ... Good Health Before Pregnancy: Preconception Care ... FAQ056, April 2017, PDF format ... What is a preconception care ...

  11. Study/experimental/research design: much more than statistics.

    PubMed

    Knight, Kenneth L

    2010-01-01

    The purpose of study, experimental, or research design in scientific manuscripts has changed significantly over the years. It has evolved from an explanation of the design of the experiment (ie, data gathering or acquisition) to an explanation of the statistical analysis. This practice makes "Methods" sections hard to read and understand. To clarify the difference between study design and statistical analysis, to show the advantages of a properly written study design on article comprehension, and to encourage authors to correctly describe study designs. The role of study design is explored from the introduction of the concept by Fisher through modern-day scientists and the AMA Manual of Style. At one time, when experiments were simpler, the study design and statistical design were identical or very similar. With the complex research that is common today, which often includes manipulating variables to create new variables and the multiple (and different) analyses of a single data set, data collection is very different than statistical design. Thus, both a study design and a statistical design are necessary. Scientific manuscripts will be much easier to read and comprehend. A proper experimental design serves as a road map to the study methods, helping readers to understand more clearly how the data were obtained and, therefore, assisting them in properly analyzing the results.

  12. Study/Experimental/Research Design: Much More Than Statistics

    PubMed Central

    Knight, Kenneth L.

    2010-01-01

    Abstract Context: The purpose of study, experimental, or research design in scientific manuscripts has changed significantly over the years. It has evolved from an explanation of the design of the experiment (ie, data gathering or acquisition) to an explanation of the statistical analysis. This practice makes “Methods” sections hard to read and understand. Objective: To clarify the difference between study design and statistical analysis, to show the advantages of a properly written study design on article comprehension, and to encourage authors to correctly describe study designs. Description: The role of study design is explored from the introduction of the concept by Fisher through modern-day scientists and the AMA Manual of Style. At one time, when experiments were simpler, the study design and statistical design were identical or very similar. With the complex research that is common today, which often includes manipulating variables to create new variables and the multiple (and different) analyses of a single data set, data collection is very different than statistical design. Thus, both a study design and a statistical design are necessary. Advantages: Scientific manuscripts will be much easier to read and comprehend. A proper experimental design serves as a road map to the study methods, helping readers to understand more clearly how the data were obtained and, therefore, assisting them in properly analyzing the results. PMID:20064054

  13. Are family physicians good for you? Endogenous doctor supply and individual health.

    PubMed

    Gravelle, Hugh; Morris, Stephen; Sutton, Matt

    2008-08-01

    To investigate the impact of family physician (FP) supply on individual health, adjusting for factors that affect both health and FPs' choice of location. A total of 49,541 individuals in 351 English local authorities (LAs). Data on individual health and personal characteristics from three rounds (1998, 1999, and 2000) of the Health Survey for England were linked to LA data on FP supply. Three methods for analyzing self-reported health were used. FP supply, instrumented by house prices and by age-weighted capitation payments for patients on FP lists, was included in individual-level health regressions along with individual and LA covariates. When no instruments are used, FPs have a positive but statistically insignificant effect on health. When FP supply is instrumented by age-related capitation, it has markedly larger and statistically significant effects. A 10 percent increase in FP supply increases the probability of reporting very good health by 6 percent. After allowing for endogeneity, an increase in FP supply has a significant positive effect on self-reported individual health.
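
    For readers unfamiliar with instrumenting an endogenous regressor, a minimal two-stage least squares sketch on synthetic data; all variable names, coefficients, and the "true effect" are illustrative assumptions, not the study's data or results:

        # Manual 2SLS: regress the endogenous regressor on the instrument, then regress
        # the outcome on the fitted values; compare with naive OLS under confounding.
        import numpy as np

        rng = np.random.default_rng(2)
        n = 5000
        instrument = rng.normal(size=n)                     # e.g. age-weighted capitation
        confound   = rng.normal(size=n)                     # unobserved factor driving both
        fp_supply  = 0.8 * instrument + 0.5 * confound + rng.normal(size=n)
        health     = 0.3 * fp_supply - 0.7 * confound + rng.normal(size=n)  # true effect = 0.3

        def ols(y, X):
            X = np.column_stack([np.ones(len(y)), X])
            return np.linalg.lstsq(X, y, rcond=None)[0]

        naive = ols(health, fp_supply)[1]                                   # biased by the confounder
        fp_hat = np.column_stack([np.ones(n), instrument]) @ ols(fp_supply, instrument)
        iv = ols(health, fp_hat)[1]                                         # first stage, then second stage
        print(f"naive OLS: {naive:.3f}   2SLS: {iv:.3f}   (true effect 0.3)")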

  14. 28 CFR 523.14 - Industrial good time.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ..., AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.14 Industrial good time. Extra good time... 28 Judicial Administration 2 2012-07-01 2012-07-01 false Industrial good time. 523.14 Section 523... Industries is not awarded industrial good time until actually employed. ...

  15. 28 CFR 523.14 - Industrial good time.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ..., AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.14 Industrial good time. Extra good time... 28 Judicial Administration 2 2014-07-01 2014-07-01 false Industrial good time. 523.14 Section 523... Industries is not awarded industrial good time until actually employed. ...

  16. 28 CFR 523.14 - Industrial good time.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ..., AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.14 Industrial good time. Extra good time... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Industrial good time. 523.14 Section 523... Industries is not awarded industrial good time until actually employed. ...

  17. 28 CFR 523.14 - Industrial good time.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.14 Industrial good time. Extra good time... 28 Judicial Administration 2 2011-07-01 2011-07-01 false Industrial good time. 523.14 Section 523... Industries is not awarded industrial good time until actually employed. ...

  18. 28 CFR 523.14 - Industrial good time.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.14 Industrial good time. Extra good time... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Industrial good time. 523.14 Section 523... Industries is not awarded industrial good time until actually employed. ...

  19. Statistical Diversions

    ERIC Educational Resources Information Center

    Petocz, Peter; Sowey, Eric

    2008-01-01

    As a branch of knowledge, Statistics is ubiquitous and its applications can be found in (almost) every field of human endeavour. In this article, the authors track down the possible source of the link between the "Siren song" and applications of Statistics. Answers to their previous five questions and five new questions on Statistics are presented.

  20. A system of registration and statistics.

    PubMed

    Blayo, C

    1993-06-01

    In 1971, WHO recommended obligatory reporting to countries preparing to legalize induced abortion; however, there is no registration of abortions in Austria, Greece, Luxembourg, or Portugal, nor in Northern Ireland, Ireland, and Malta, where abortion is prohibited, nor in Switzerland, where it is limited. Albania is preparing to institute such a provision. Registration is not always complete in Germany, France, Italy, Poland, and Spain, or in the republics of the former USSR, particularly Lithuania. The data gathered are often further impoverished at the stage of publication of the statistics. Certain estimates, or even survey results, make up for these shortcomings. A retrospective survey of a sample representing all women aged 15 years or older would allow the reconstruction of abortion statistics for past years. Systematic registration must be accompanied by the publication of a statistical record. Sterilization appears to be spreading in Europe, but it is only very rarely registered. The proportion of couples sterilized is sometimes obtained through surveys, but there is hardly any information on the characteristics of this group. On the other hand, the practice of contraception can be assessed more easily, as in the majority of countries contraceptives are dispensed through pharmacies, public family planning centers, and private practitioners. Family planning centers are sometimes sources of statistical data. In some countries, producers' associations make statistics available on the sale of contraceptives. Exact surveys facilitate the characterization of the users and reveal the methods they employ. Many countries carried out such surveys at the end of the 1970s under the aegis of the world fertility surveys. It is urgent to invest in data collection suitable for determining the proportion of women who use each method of contraception in all the countries of Europe.

  1. Lagrangian statistics of mesoscale turbulence in a natural environment: The Agulhas return current.

    PubMed

    Carbone, Francesco; Gencarelli, Christian N; Hedgecock, Ian M

    2016-12-01

    The properties of mesoscale geophysical turbulence in an oceanic environment have been investigated through the Lagrangian statistics of sea surface temperature measured by a drifting buoy within the Agulhas return current, where strong temperature mixing produces locally sharp temperature gradients. By disentangling the large-scale forcing which affects the small-scale statistics, we found that the statistical properties of intermittency are identical to those obtained from the multifractal prediction in the Lagrangian frame for the velocity trajectory. The results suggest a possible universality of turbulence scaling.

  2. Evaluation of PDA Technical Report No 33. Statistical Testing Recommendations for a Rapid Microbiological Method Case Study.

    PubMed

    Murphy, Thomas; Schwedock, Julie; Nguyen, Kham; Mills, Anna; Jones, David

    2015-01-01

    New recommendations for the validation of rapid microbiological methods have been included in the revised Technical Report 33 release from the PDA. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This case study applies those statistical methods to accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological methods system being evaluated for water bioburden testing. Results presented demonstrate that the statistical methods described in the PDA Technical Report 33 chapter can all be successfully applied to the rapid microbiological method data sets and gave the same interpretation for equivalence to the standard method. The rapid microbiological method was in general able to pass the requirements of PDA Technical Report 33, though the study shows that there can be occasional outlying results and that caution should be used when applying statistical methods to low average colony-forming unit values. Prior to use in a quality-controlled environment, any new method or technology has to be shown to work as designed by the manufacturer for the purpose required. For new rapid microbiological methods that detect and enumerate contaminating microorganisms, additional recommendations have been provided in the revised PDA Technical Report No. 33. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This paper applies those statistical methods to analyze accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological method system being validated for water bioburden testing. The case study demonstrates that the statistical methods described in the PDA Technical Report No. 33 chapter can be successfully applied to rapid microbiological method data sets and give the same comparability results for similarity or difference as the standard method. © PDA, Inc

  3. Ecological statistics of Gestalt laws for the perceptual organization of contours.

    PubMed

    Elder, James H; Goldberg, Richard M

    2002-01-01

    Although numerous studies have measured the strength of visual grouping cues for controlled psychophysical stimuli, little is known about the statistical utility of these various cues for natural images. In this study, we conducted experiments in which human participants trace perceived contours in natural images. These contours are automatically mapped to sequences of discrete tangent elements detected in the image. By examining relational properties between pairs of successive tangents on these traced curves, and between randomly selected pairs of tangents, we are able to estimate the likelihood distributions required to construct an optimal Bayesian model for contour grouping. We employed this novel methodology to investigate the inferential power of three classical Gestalt cues for contour grouping: proximity, good continuation, and luminance similarity. The study yielded a number of important results: (1) these cues, when appropriately defined, are approximately uncorrelated, suggesting a simple factorial model for statistical inference; (2) moderate image-to-image variation of the statistics indicates the utility of general probabilistic models for perceptual organization; (3) these cues differ greatly in their inferential power, proximity being by far the most powerful; and (4) statistical modeling of the proximity cue indicates a scale-invariant power law in close agreement with prior psychophysics.
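
    Under the factorial (approximately uncorrelated cue) assumption noted above, the cues can be combined multiplicatively as independent likelihood ratios; a minimal sketch with made-up likelihood ratios and prior odds, none of which come from the study:

        # Naive-Bayes combination of independent grouping cues into posterior odds.
        import numpy as np

        def posterior_same_contour(likelihood_ratios, prior_odds=0.01):
            """Combine independent cue likelihood ratios with prior odds (factorial model)."""
            odds = prior_odds * np.prod(likelihood_ratios)
            return odds / (1.0 + odds)

        cues = {"proximity": 120.0, "good_continuation": 4.0, "luminance_similarity": 1.8}
        p = posterior_same_contour(np.array(list(cues.values())))
        print(f"P(same contour | cues) = {p:.3f}")   # proximity dominates, as in the study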

  4. Statistical Thermodynamic Approach to Vibrational Solitary Waves in Acetanilide

    NASA Astrophysics Data System (ADS)

    Vasconcellos, Áurea R.; Mesquita, Marcus V.; Luzzi, Roberto

    1998-03-01

    We analyze the behavior of the macroscopic thermodynamic state of polymers, centering on acetanilide. The nonlinear equations of evolution for the populations and the statistically averaged field amplitudes of CO-stretching modes are derived. The existence of excitations of the solitary wave type is evidenced. The infrared spectrum is calculated and compared with the experimental data of Careri et al. [Phys. Rev. Lett. 51, 104 (1983)], resulting in a good agreement. We also consider the situation of a nonthermally highly excited sample, predicting the occurrence of a large increase in the lifetime of the solitary wave excitation.

  5. Descriptive statistics.

    PubMed

    Nick, Todd G

    2007-01-01

    Statistics is defined by the Medical Subject Headings (MeSH) thesaurus as the science and art of collecting, summarizing, and analyzing data that are subject to random variation. The two broad categories of summarizing and analyzing data are referred to as descriptive and inferential statistics. This chapter considers the science and art of summarizing data where descriptive statistics and graphics are used to display data. In this chapter, we discuss the fundamentals of descriptive statistics, including describing qualitative and quantitative variables. For describing quantitative variables, measures of location and spread, for example the standard deviation, are presented along with graphical presentations. We also discuss distributions of statistics, for example the variance, as well as the use of transformations. The concepts in this chapter are useful for uncovering patterns within the data and for effectively presenting the results of a project.
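
    For illustration, a minimal sketch computing the measures of location and spread mentioned above, together with a simple transformation, on a small synthetic sample:

        # Descriptive statistics for a small sample containing one outlier.
        import numpy as np

        x = np.array([4.2, 5.1, 5.6, 6.0, 6.3, 7.1, 7.4, 8.9, 9.5, 21.0])  # note the outlier
        print("mean:", x.mean(), " median:", np.median(x))
        print("SD:", x.std(ddof=1), " IQR:", np.percentile(x, 75) - np.percentile(x, 25))
        print("SD after log transform:", np.log(x).std(ddof=1))  # a transformation can tame the outlier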

  6. Output statistics of laser anemometers in sparsely seeded flows

    NASA Technical Reports Server (NTRS)

    Edwards, R. V.; Jensen, A. S.

    1982-01-01

    It is noted that until very recently, research on this topic concentrated on the particle arrival statistics and the influence of the optical parameters on them. Little attention has been paid to the influence of subsequent processing on the measurement statistics. There is also controversy over whether the effects of the particle statistics can be measured. It is shown here that some of the confusion derives from a lack of understanding of the experimental parameters that are to be controlled or known. A rigorous framework is presented for examining the measurement statistics of such systems. To provide examples, two problems are then addressed. The first has to do with a sample-and-hold processor, the second with what is called a saturable processor. The sample-and-hold processor converts the output to a continuous signal by holding the last reading until a new one is obtained. The saturable system is one where the maximum processable rate is determined by the dead time of some unit in the system. At high particle rates, the processed rate is determined through the dead time.
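
    A minimal sketch of a sample-and-hold processor driven by Poisson particle arrivals, to illustrate how the statistics of the held output can differ from those of the underlying velocity signal; all signals and rates here are invented, and this is not the paper's analysis:

        # Sample-and-hold: read the velocity only at random particle arrivals, then hold
        # the last reading on a regular output clock and compare the resulting statistics.
        import numpy as np

        rng = np.random.default_rng(3)
        t = np.linspace(0.0, 10.0, 10001)                        # regular output clock, s
        u = 1.0 + 0.3 * np.sin(2 * np.pi * 1.0 * t)              # "true" velocity, m/s

        mean_rate = 20.0                                         # mean particle arrivals per second
        n_arrivals = rng.poisson(mean_rate * t[-1])
        arrival_times = np.sort(rng.uniform(0.0, t[-1], n_arrivals))
        measured = np.interp(arrival_times, t, u)                # velocity read at each arrival

        # hold the last measured value until the next particle arrives
        idx = np.searchsorted(arrival_times, t, side="right") - 1
        held = np.where(idx >= 0, measured[np.clip(idx, 0, None)], u[0])

        print(f"true mean {u.mean():.4f}, held mean {held.mean():.4f}")
        print(f"true var  {u.var():.5f}, held var  {held.var():.5f}")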

  7. Redefining the lower statistical limit in x-ray phase-contrast imaging

    NASA Astrophysics Data System (ADS)

    Marschner, M.; Birnbacher, L.; Willner, M.; Chabior, M.; Fehringer, A.; Herzen, J.; Noël, P. B.; Pfeiffer, F.

    2015-03-01

    Phase-contrast x-ray computed tomography (PCCT) is currently being investigated and developed as a potentially very interesting extension of conventional CT, because it promises to provide high soft-tissue contrast for weakly absorbing samples. For data acquisition, several images at different grating positions are combined to obtain a phase-contrast projection. For short exposure times, which are necessary for a lower radiation dose, the photon counts at a single stepping position are very low. In this case, the currently used phase retrieval does not provide reliable results for some pixels. This uncertainty results in statistical phase wrapping, which leads to a higher standard deviation in the phase-contrast projections than theoretically expected. For even lower statistics, the phase retrieval breaks down completely and the phase information is lost. Newer measurement procedures rely on a linear approximation of the sinusoidal phase-stepping curve around its zero crossings. In this case, only two images are acquired to obtain the phase-contrast projection. The approximation is only valid for small phase values; however, typically nearly all pixels are within this regime due to the differential nature of the signal. We examine the statistical properties of a linear approximation method and illustrate by simulation and experiment that the lower statistical limit can be redefined using this method. That means that the phase signal can be retrieved even with very low photon counts and statistical phase wrapping can be avoided. This is an important step towards enhanced image quality in PCCT with very low photon counts.
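
    To make the statistical phase wrapping discussed above concrete, a minimal simulation of conventional phase retrieval from a sinusoidal stepping curve with Poisson counting noise; the visibility, step count, and photon numbers are invented, and the two-image linear approximation itself is not reproduced here:

        # Retrieve phi from I_k = a*(1 + v*cos(2*pi*k/N + phi)) with Poisson noise and
        # show how the spread of the retrieved phase grows as photon counts drop.
        import numpy as np

        rng = np.random.default_rng(4)

        def retrieve_phase(counts):
            k = np.arange(counts.shape[-1])
            coeff = (counts * np.exp(-2j * np.pi * k / counts.shape[-1])).sum(axis=-1)
            return np.angle(coeff)

        def simulate(mean_counts, phi=0.1, visibility=0.3, steps=8, trials=20000):
            k = np.arange(steps)
            expected = mean_counts * (1 + visibility * np.cos(2 * np.pi * k / steps + phi))
            counts = rng.poisson(expected, size=(trials, steps))
            return retrieve_phase(counts)

        for n in (2, 10, 1000):                                   # photons per stepping position
            phases = simulate(n)
            print(f"{n:5d} counts/step: std of retrieved phase = {phases.std():.3f} rad")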

  8. A Statistical Comparison between Photospheric Vector Magnetograms Obtained by SDO/HMI and Hinode/SP

    NASA Astrophysics Data System (ADS)

    Sainz Dalda, Alberto

    2017-12-01

    Since 2010 May 1, we have been able to study (almost) continuously the vector magnetic field in the Sun, thanks to two space-based observatories: the Solar Dynamics Observatory (SDO) and Hinode. Both are equipped with instruments able to measure the Stokes parameters of Zeeman-induced polarization of photospheric line radiation. But the observation modes; the spectral lines; the spatial, spectral, and temporal sampling; and even the inversion codes used to recover magnetic and thermodynamic information from the Stokes profiles are different. We compare the vector magnetic fields derived from observations with the HMI instrument on board SDO with those observed by the SP instrument on Hinode. We have obtained relationships between components of the magnetic vectors in the umbra, penumbra, and plage observed in 14 maps of NOAA Active Region 11084. Importantly, we have transformed the SP data into observables comparable to those of HMI, to explore possible influences of the different modes of operation of the two instruments and the inversion schemes used to infer the magnetic fields. The assumed filling factor (the fraction of each pixel containing a Zeeman signature) produces the most significant differences in derived magnetic properties, especially in the plage. The spectral and angular samplings have the next-largest effects. We suggest treating the disambiguation in the same way in the data provided by HMI and SP. That would strengthen the relationship between the vector magnetic fields recovered from these data, which would favor the simultaneous or complementary use of both instruments.

  9. Statistical properties of superimposed stationary spike trains.

    PubMed

    Deger, Moritz; Helias, Moritz; Boucsein, Clemens; Rotter, Stefan

    2012-06-01

    The Poisson process is an often-employed model for the activity of neuronal populations. It is known, though, that superpositions of realistic, non-Poisson spike trains are not in general Poisson processes, not even for large numbers of superimposed processes. Here we construct superimposed spike trains from intracellular in vivo recordings from rat neocortex neurons and compare their statistics to specific point process models. The constructed superimposed spike trains reveal strong deviations from the Poisson model. We find that superpositions of model spike trains that take the effective refractoriness of the neurons into account yield a much better description. A minimal model of this kind is the Poisson process with dead-time (PPD). For this process, and for superpositions thereof, we obtain analytical expressions for some second-order statistical quantities (such as the count variability, inter-spike interval (ISI) variability, and ISI correlations) and demonstrate the match with the in vivo data. We conclude that effective refractoriness is the key property that shapes the statistical properties of the superposition spike trains. We present new, efficient algorithms to generate superpositions of PPDs and of gamma processes that can be used to provide more realistic background input in simulations of networks of spiking neurons. Using these generators, we show in simulations that neurons which receive superimposed spike trains as input are highly sensitive to the statistical effects induced by neuronal refractoriness.
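
    A minimal sketch of one way to generate a PPD spike train and a superposition of several such trains; the rate, dead time, and the convention of rescaling the exponential part so that the total mean rate is preserved are illustrative assumptions, not the paper's algorithms:

        # Poisson process with dead-time (PPD): exponential ISIs shifted by an absolute
        # refractory period; pool many trains and inspect the count Fano factor.
        import numpy as np

        rng = np.random.default_rng(5)

        def ppd_spike_train(rate, dead_time, duration):
            """Spike times of a Poisson process with absolute dead time."""
            lam = rate / (1.0 - rate * dead_time)         # keep the overall mean rate at `rate`
            isis = dead_time + rng.exponential(1.0 / lam, size=int(2 * rate * duration) + 10)
            spikes = np.cumsum(isis)
            return spikes[spikes < duration]

        trains = [ppd_spike_train(rate=10.0, dead_time=0.003, duration=100.0) for _ in range(50)]
        pooled = np.sort(np.concatenate(trains))
        edges = np.arange(0.0, 100.0, 0.1)                # 100 ms counting windows
        counts, _ = np.histogram(pooled, bins=edges)
        print(f"pooled rate = {len(pooled) / 100.0:.1f} Hz")
        print(f"count Fano factor = {counts.var() / counts.mean():.3f}  (1.0 for a Poisson process)")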

  10. Statistical study of auroral omega bands

    NASA Astrophysics Data System (ADS)

    Partamies, Noora; Weygand, James M.; Juusola, Liisa

    2017-09-01

    The presence of very few statistical studies on auroral omega bands motivated us to test a semi-automatic method for identifying large-scale undulations of the diffuse aurora boundary and to investigate their occurrence. Five identical all-sky cameras with overlapping fields of view provided data for 438 auroral omega-like structures over Fennoscandian Lapland from 1996 to 2007. The results from this set of omega band events agree remarkably well with previous observations of omega band occurrence in magnetic local time (MLT), lifetime, location between the region 1 and 2 field-aligned currents, as well as current density estimates. The average peak emission height of omega forms corresponds to estimated precipitation energies of a few keV, which showed no significant change during the events. Analysis of both local and global magnetic indices demonstrates that omega bands are observed during substorm expansion and recovery phases that are more intense than average substorm expansion and recovery phases in the same region. The omega occurrence with respect to the substorm expansion and recovery phases is in very good agreement with an earlier observed distribution of fast earthward flows in the plasma sheet during expansion and recovery phases. These findings support the theory that omegas are produced by fast earthward flows and auroral streamers, despite the rarity of good conjugate observations.

  11. Orthogonality catastrophe and fractional exclusion statistics

    NASA Astrophysics Data System (ADS)

    Ares, Filiberto; Gupta, Kumar S.; de Queiroz, Amilcar R.

    2018-02-01

    We show that the N-particle Sutherland model with inverse-square and harmonic interactions exhibits orthogonality catastrophe. For a fixed value of the harmonic coupling, the overlap of the N-body ground state wave functions with two different values of the inverse-square interaction term goes to zero in the thermodynamic limit. When the two values of the inverse-square coupling differ by an infinitesimal amount, the wave function overlap shows an exponential suppression. This is qualitatively different from the usual power law suppression observed in Anderson's orthogonality catastrophe. We also obtain an analytic expression for the wave function overlaps for an arbitrary set of couplings, whose properties are analyzed numerically. The quasiparticles constituting the ground state wave functions of the Sutherland model are known to obey fractional exclusion statistics. Our analysis indicates that the orthogonality catastrophe may be valid in systems with more general kinds of statistics than just the fermionic type.

  12. Orthogonality catastrophe and fractional exclusion statistics.

    PubMed

    Ares, Filiberto; Gupta, Kumar S; de Queiroz, Amilcar R

    2018-02-01

    We show that the N-particle Sutherland model with inverse-square and harmonic interactions exhibits orthogonality catastrophe. For a fixed value of the harmonic coupling, the overlap of the N-body ground state wave functions with two different values of the inverse-square interaction term goes to zero in the thermodynamic limit. When the two values of the inverse-square coupling differ by an infinitesimal amount, the wave function overlap shows an exponential suppression. This is qualitatively different from the usual power law suppression observed in Anderson's orthogonality catastrophe. We also obtain an analytic expression for the wave function overlaps for an arbitrary set of couplings, whose properties are analyzed numerically. The quasiparticles constituting the ground state wave functions of the Sutherland model are known to obey fractional exclusion statistics. Our analysis indicates that the orthogonality catastrophe may be valid in systems with more general kinds of statistics than just the fermionic type.

  13. An order statistics approach to the halo model for galaxies

    NASA Astrophysics Data System (ADS)

    Paul, Niladri; Paranjape, Aseem; Sheth, Ravi K.

    2017-04-01

    We use the halo model to explore the implications of assuming that galaxy luminosities in groups are randomly drawn from an underlying luminosity function. We show that even the simplest of such order statistics models - one in which this luminosity function p(L) is universal - naturally produces a number of features associated with previous analyses based on the 'central plus Poisson satellites' hypothesis. These include the monotonic relation of mean central luminosity with halo mass, the lognormal distribution around this mean, and the tight relation between the central and satellite mass scales. In stark contrast to observations of galaxy clustering, however, this model predicts no luminosity dependence of large-scale clustering. We then show that an extended version of this model, based on the order statistics of a halo mass dependent luminosity function p(L|m), is in much better agreement with the clustering data as well as satellite luminosities, but systematically underpredicts central luminosities. This brings into focus the idea that central galaxies constitute a distinct population that is affected by different physical processes than are the satellites. We model this physical difference as a statistical brightening of the central luminosities, over and above the order statistics prediction. The magnitude gap between the brightest and second brightest group galaxy is predicted as a by-product, and is also in good agreement with observations. We propose that this order statistics framework provides a useful language in which to compare the halo model for galaxies with more physically motivated galaxy formation models.
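
    As a toy illustration of the order-statistics idea above (not the paper's p(L|m) model), one can draw group luminosities from a single fixed distribution and identify the brightest member as the central; the distribution, group sizes, and printed quantities are arbitrary choices:

        # Order statistics of group luminosities: richer groups automatically host
        # brighter "centrals" (the maximum of more draws), and a magnitude-gap analogue
        # falls out as a by-product.
        import numpy as np

        rng = np.random.default_rng(6)

        def group_luminosities(n_gal, n_groups=20000):
            # stand-in luminosity function: log-normal in L (illustrative only)
            return rng.lognormal(mean=0.0, sigma=1.0, size=(n_groups, n_gal))

        for n_gal in (2, 5, 20, 100):
            L = group_luminosities(n_gal)
            central = L.max(axis=1)                       # brightest group galaxy
            second = np.sort(L, axis=1)[:, -2]            # second brightest
            gap = np.log10(central / second)              # "magnitude gap" analogue, in dex
            print(f"N={n_gal:3d}: <L_central>={central.mean():6.2f}, median gap={np.median(gap):.3f} dex")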

  14. Antimüllerian hormone as a predictor of good-quality supernumerary blastocyst cryopreservation among women with levels <1 ng/mL versus 1-4 ng/mL.

    PubMed

    Kavoussi, Shahryar K; Odenwald, Kate C; Boehnlein, Lynn M; Summers-Colquitt, Roxanne B; Pool, Thomas B; Swain, Jason E; Jones, Jeffrey M; Lindstrom, Mary J; Lebovic, Dan I

    2015-09-01

    To determine whether antimüllerian hormone (AMH) levels predict the availability of good-quality supernumerary blastocysts for cryopreservation. Retrospective study. Two fertility centers. First fresh IVF cycles (n = 247) grouped as follows: 40 women <35 years old with AMH <1 ng/mL and 77 women with AMH 1-4 ng/mL; 62 women ≥35 years old with AMH <1 ng/mL, and 68 women with AMH 1-4 ng/mL. AMH level was measured before IVF, with ovarian stimulation protocols based on patient age and AMH level, including short gonadotropin-releasing hormone (GnRH) agonist, GnRH antagonist, or GnRH agonist microdose flare; supernumerary good-quality blastocysts were cryopreserved on day 5 or 6 after retrieval. Supernumerary good-quality blastocysts for cryopreservation in relation to AMH levels. Among women <35 years of age, there was a statistically significant difference in the number of patients with supernumerary good-quality blastocysts for cryopreservation between the groups with AMH <1 ng/mL and AMH 1-4 ng/mL (30.0% vs. 58.4%) when adjusted for age. Among women ≥35 years of age, there was a statistically significant difference in the number of patients with supernumerary good-quality blastocyst cryopreservation between the groups with AMH <1 ng/mL and AMH 1-4 ng/mL (16.1% vs. 42.6%) when adjusted for age. Low AMH levels are associated with a statistically significantly lower likelihood of blastocysts for cryopreservation as compared with higher AMH levels. This effect was seen among women both <35 and ≥35 years of age. Patient counseling should include realistic expectations for the probability of good-quality supernumerary blastocysts being available for cryopreservation. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  15. Are general practice characteristics predictors of good glycaemic control in patients with diabetes? A cross-sectional study.

    PubMed

    Esterman, Adrian J; Fountaine, Tim; McDermott, Robyn

    2016-01-18

    To determine whether certain characteristics of general practices are associated with good glycaemic control in patients with diabetes and with completing an annual cycle of care (ACC). Our cross-sectional analysis used baseline data from the Australian Diabetes Care Project conducted between 2011 and 2014. Practice characteristics were self-reported. Characteristics of the patients that were assessed included glycaemic control (HbA1c level ≤ 53 mmol/mol), age, sex, duration of diabetes, socio-economic disadvantage (SEIFA) score, the complexity of the patient's condition, and whether the patient had completed an ACC for diabetes in the past 18 months. Clustered logistic regression was used to establish predictors of glycaemic control and of a completed ACC. Data were available from 147 general practices and 5455 patients with established type 1 or type 2 diabetes in three Australian states. After adjustment for other patient characteristics, only the patient having completed an ACC was a statistically significant predictor of glycaemic control (P = 0.011). In a multivariate model, the practice having a chronic disease-focused practice nurse (P = 0.036) and running educational events for patients with diabetes (P = 0.004) were statistically significant predictors of the patient having completed an ACC. Patient characteristics are moderately good predictors of whether the patient is in glycaemic control, whereas practice characteristics appear to predict only the likelihood of patients completing an ACC. The ACC is an established indicator of good diabetes management. This is the first study to report a positive association between having completed an ACC and the patient being in glycaemic control.

  16. Qualities of a good Singaporean psychiatrist: Qualitative differences between psychiatrists and patients.

    PubMed

    Tor, Phern-Chern; Tan, Jacinta O A

    2015-06-01

    Pilot studies in Singapore established four themes (personal values, professional, relationship, academic-executive) relating to the qualities of a good psychiatrist, and suggested potential differences of opinion between patients and psychiatrists. We sought to explore differences between patients and psychiatrists regarding the qualities of a good psychiatrist. Qualitative analysis of interviews using a modified grounded theory approach with 21 voluntary psychiatric inpatients and 18 psychiatrists. One hundred thirty-one separate qualities emerged from the data. The qualities of a good psychiatrist were viewed in the context of motivations, functions, methods, and results obtained, mirroring the themes established in the pilot studies. Patients and psychiatrists mostly concurred on the qualities of a good psychiatrist, with 62.6% of the qualities emerging from both groups. However significant differences existed. Patient-specific qualities included proof of altruistic motives, diligence, clinical competence, and positive results. What the psychiatrist represented to patients in relation to gender, culture, and clinical prestige also mattered to patients. Psychiatrist-specific qualities related to societal (e.g. public protection) and professional concerns (e.g. boundary issues). The results of this study demonstrate that patients and psychiatrists have different views about the qualities of a good psychiatrist. Patients may expect proof of care, diligence, and competence from the psychiatrist, along with positive results. In addition, psychiatrists should be mindful of what they represent to patients and how that can impact the doctor-patient relationship. © 2014 Wiley Publishing Asia Pty Ltd.

  17. One-pot synthesis of fluorescent nitrogen-doped carbon dots with good biocompatibility for cell labeling.

    PubMed

    Zhang, Zhengwei; Yan, Kun; Yang, Qiulian; Liu, Yanhua; Yan, Zhengyu; Chen, Jianqiu

    2017-12-01

    Here we report an easy and economical hydrothermal carbonization approach to synthesizing fluorescent nitrogen-doped carbon dots (N-CDs), developed using citric acid and triethanolamine as the precursors. The synthesis conditions were optimized to obtain N-CDs with superior fluorescence performance. The as-prepared N-CDs are monodispersed spherical nanoparticles with good water solubility, and they exhibited strong fluorescence, favourable photostability, and excitation wavelength-dependent behavior. Furthermore, the in vitro cytotoxicity and cellular labeling of the N-CDs were investigated using rat glomerular mesangial cells. The results showed that the N-CDs have less pronounced cytotoxicity and better biosafety in comparison with ZnSe quantum dots, although both targeted the cells successfully. Considering their admirable photostability, low toxicity, and good biocompatibility, the as-obtained N-CDs could have potential applications in biosensors, cellular imaging, and other fields. Copyright © 2017 John Wiley & Sons, Ltd.

  18. High-performance parallel computing in the classroom using the public goods game as an example

    NASA Astrophysics Data System (ADS)

    Perc, Matjaž

    2017-07-01

    The use of computers in statistical physics is common because the sheer number of equations that describe the behaviour of an entire system particle by particle often makes it impossible to solve them exactly. Monte Carlo methods form a particularly important class of numerical methods for solving problems in statistical physics. Although these methods are simple in principle, their proper use requires a good command of statistical mechanics, as well as considerable computational resources. The aim of this paper is to demonstrate how the usage of widely accessible graphics cards on personal computers can elevate the computing power in Monte Carlo simulations by orders of magnitude, thus allowing live classroom demonstration of phenomena that would otherwise be out of reach. As an example, we use the public goods game on a square lattice where two strategies compete for common resources in a social dilemma situation. We show that the second-order phase transition to an absorbing phase in the system belongs to the directed percolation universality class, and we compare the time needed to arrive at this result by means of the main processor and by means of a suitable graphics card. Parallel computing on graphics processing units has been developed actively during the last decade, to the point where today the learning curve for entry is anything but steep for those familiar with programming. The subject is thus ripe for inclusion in graduate and advanced undergraduate curricula, and we hope that this paper will facilitate this process in the realm of physics education. To that end, we provide a documented source code for an easy reproduction of presented results and for further development of Monte Carlo simulations of similar systems.
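
    A minimal CPU sketch of the spatial public goods game with Fermi strategy updating described above; the lattice size, multiplication factor r, noise K, and number of sweeps are kept small so that plain Python finishes in well under a minute, and all values are illustrative only (the paper's point is that a GPU makes far larger simulations feasible in class):

        # Spatial public goods game on a square lattice with Monte Carlo (Fermi) updating.
        import numpy as np

        rng = np.random.default_rng(7)
        L, r, K, mcs = 30, 3.8, 0.5, 50                  # lattice size, multiplication factor, noise, sweeps
        s = rng.integers(0, 2, size=(L, L))              # 1 = cooperator, 0 = defector

        def payoff(s, i, j):
            """Accumulated payoff of site (i, j) from the five groups it belongs to."""
            total = 0.0
            for ci, cj in [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]:   # groups centred on self and neighbours
                gi, gj = (i + ci) % L, (j + cj) % L
                members = [(gi, gj), ((gi + 1) % L, gj), ((gi - 1) % L, gj),
                           (gi, (gj + 1) % L), (gi, (gj - 1) % L)]
                nc = sum(s[a, b] for a, b in members)
                total += r * nc / 5.0 - s[i, j]          # share of the common pool minus own contribution
            return total

        for sweep in range(mcs):
            for _ in range(L * L):                       # one Monte Carlo sweep = L*L elementary updates
                i, j = rng.integers(0, L, size=2)
                di, dj = [(1, 0), (-1, 0), (0, 1), (0, -1)][rng.integers(0, 4)]
                ni, nj = (i + di) % L, (j + dj) % L
                if s[i, j] != s[ni, nj]:
                    p = 1.0 / (1.0 + np.exp((payoff(s, i, j) - payoff(s, ni, nj)) / K))
                    if rng.random() < p:                 # adopt the neighbour's strategy with Fermi probability
                        s[i, j] = s[ni, nj]
        print(f"cooperator fraction after {mcs} sweeps: {s.mean():.3f}")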

  19. Classical Statistics and Statistical Learning in Imaging Neuroscience

    PubMed Central

    Bzdok, Danilo

    2017-01-01

    Brain-imaging research has predominantly generated insight by means of classical statistics, including regression-type analyses and null-hypothesis testing using the t-test and ANOVA. In recent years, statistical learning methods have enjoyed increasing popularity, especially for applications to rich and complex data, including cross-validated out-of-sample prediction using pattern classification and sparsity-inducing regression. This concept paper discusses the implications of inferential justifications and algorithmic methodologies in common data analysis scenarios in neuroimaging. It is retraced how classical statistics and statistical learning originated from different historical contexts, build on different theoretical foundations, make different assumptions, and evaluate different outcome metrics to permit differently nuanced conclusions. The present considerations should help reduce current confusion between model-driven classical hypothesis testing and data-driven learning algorithms for investigating the brain with imaging techniques. PMID:29056896
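
    A minimal sketch contrasting the two approaches described above on the same synthetic data set: a classical group-difference test on a single feature versus cross-validated out-of-sample prediction from many features (the data, sample sizes, and effect sizes are invented, not neuroimaging data):

        # Classical inference (t-test on one feature) versus statistical learning
        # (cross-validated classification accuracy using all features).
        import numpy as np
        from scipy import stats
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(8)
        n, p = 200, 50
        y = np.repeat([0, 1], n // 2)                    # two groups ("patients"/"controls")
        X = rng.normal(size=(n, p))
        X[y == 1, :5] += 0.4                             # weak signal in the first five features

        t, pval = stats.ttest_ind(X[y == 0, 0], X[y == 1, 0])     # null-hypothesis test on feature 0
        acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
        print(f"t-test on feature 0: t={t:.2f}, p={pval:.4f}")
        print(f"5-fold cross-validated accuracy using all {p} features: {acc:.2f}")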

  20. Statistical properties of a cloud ensemble - A numerical study

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Simpson, Joanne; Soong, Su-Tzai

    1987-01-01

    The statistical properties of cloud ensembles under a specified large-scale environment, such as mass flux by cloud drafts and vertical velocity as well as the condensation and evaporation associated with these cloud drafts, are examined using a three-dimensional numerical cloud ensemble model described by Soong and Ogura (1980) and Tao and Soong (1986). The cloud drafts are classified as active and inactive, and separate contributions to cloud statistics in areas of different cloud activity are then evaluated. The model results compare well with results obtained from aircraft measurements of a well-organized ITCZ rainband that occurred on August 12, 1974, during the Global Atmospheric Research Program's Atlantic Tropical Experiment.