Sample records for statistical tests show

  1. New heterogeneous test statistics for the unbalanced fixed-effect nested design.

    PubMed

    Guo, Jiin-Huarng; Billard, L; Luh, Wei-Ming

    2011-05-01

    When the underlying variances are unknown and/or unequal, using the conventional F test is problematic in the two-factor hierarchical data structure. Prompted by the approximate test statistics (Welch and Alexander-Govern methods), the authors develop four new heterogeneous test statistics to test factor A and factor B nested within A for the unbalanced fixed-effect two-stage nested design under variance heterogeneity. The actual significance levels and statistical power of the test statistics were compared in a simulation study. The results show that the proposed procedures maintain better Type I error rate control and have greater statistical power than the conventional F test under various conditions. Therefore, the proposed test statistics are recommended in terms of robustness and ease of implementation. ©2010 The British Psychological Society.
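
    A minimal illustrative sketch, not taken from the article: the ordinary one-way Welch heteroscedastic test statistic that approaches like this one build on. The authors' nested-design statistics are more involved; the function name welch_anova and the simulated data below are assumptions of this sketch.

      import numpy as np
      from scipy import stats

      def welch_anova(groups):
          """Welch's heteroscedastic one-way test for a list of 1-D samples."""
          k = len(groups)
          n = np.array([len(g) for g in groups], dtype=float)
          m = np.array([np.mean(g) for g in groups])
          v = np.array([np.var(g, ddof=1) for g in groups])
          w = n / v                               # precision weights
          grand = np.sum(w * m) / np.sum(w)       # weighted grand mean
          num = np.sum(w * (m - grand) ** 2) / (k - 1)
          tmp = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
          den = 1 + 2 * (k - 2) * tmp / (k ** 2 - 1)
          f_stat = num / den
          df2 = (k ** 2 - 1) / (3 * tmp)
          return f_stat, stats.f.sf(f_stat, k - 1, df2)

      rng = np.random.default_rng(0)
      groups = [rng.normal(0, s, size=n) for s, n in [(1, 10), (2, 15), (4, 20)]]
      print(welch_anova(groups))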

  2. [The research protocol VI: How to choose the appropriate statistical test. Inferential statistics].

    PubMed

    Flores-Ruiz, Eric; Miranda-Novales, María Guadalupe; Villasís-Keever, Miguel Ángel

    2017-01-01

    The statistical analysis can be divided into two main components: descriptive analysis and inferential analysis. Inference involves drawing conclusions from tests performed on data obtained from a sample of a population. Statistical tests are used to establish the probability that a conclusion obtained from a sample is applicable to the population from which it was drawn. However, choosing the appropriate statistical test generally poses a challenge for novice researchers. Choosing the statistical test requires taking three aspects into account: the research design, the number of measurements, and the scale of measurement of the variables. Statistical tests are divided into two sets, parametric and nonparametric. Parametric tests can be used only if the data show a normal distribution. Choosing the right statistical test will make it easier for readers to understand and apply the results.

  3. The intermediates take it all: asymptotics of higher criticism statistics and a powerful alternative based on equal local levels.

    PubMed

    Gontscharuk, Veronika; Landwehr, Sandra; Finner, Helmut

    2015-01-01

    The higher criticism (HC) statistic, which can be seen as a normalized version of the famous Kolmogorov-Smirnov statistic, has a long history, dating back to the mid-seventies. Originally, HC statistics were used in connection with goodness of fit (GOF) tests, but they recently gained some attention in the context of testing the global null hypothesis in high-dimensional data. The continuing interest in HC seems to be inspired by a series of nice asymptotic properties related to this statistic. For example, unlike Kolmogorov-Smirnov tests, GOF tests based on the HC statistic are known to be asymptotically sensitive in the moderate tails; hence it is favorably applied for detecting the presence of signals in sparse mixture models. However, some questions around the asymptotic behavior of the HC statistic are still open. We focus on two of them, namely, why a specific intermediate range is crucial for GOF tests based on the HC statistic and why the convergence of the HC distribution to the limiting one is extremely slow. Moreover, the inconsistency between the asymptotic and finite-sample behavior of the HC statistic prompts us to provide a new HC test that has better finite-sample properties than the original HC test while showing the same asymptotics. This test is motivated by the asymptotic behavior of the so-called local levels related to the original HC test. By means of numerical calculations and simulations we show that the new HC test is typically more powerful than the original HC test in normal mixture models. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
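
    For orientation, a hedged sketch of one commonly used form of the higher criticism statistic computed from sorted p-values (in the Donoho-Jin style); the equal-local-levels alternative proposed in the article is not reproduced. The function name and the tuning constant alpha0 are illustrative assumptions.

      import numpy as np

      def higher_criticism(pvals, alpha0=0.5):
          """HC statistic maximized over the smallest alpha0 fraction of p-values."""
          p = np.sort(np.asarray(pvals))
          n = len(p)
          i = np.arange(1, n + 1)
          hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
          k = max(1, int(alpha0 * n))      # restrict to the intermediate range
          return np.max(hc[:k])

      rng = np.random.default_rng(1)
      null_p = rng.uniform(size=1000)                  # global null: uniform p-values
      sparse = np.concatenate([rng.uniform(size=990),  # sparse signal: a few tiny p-values
                               rng.uniform(0, 1e-4, size=10)])
      print(higher_criticism(null_p), higher_criticism(sparse))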

  4. Comment on the asymptotics of a distribution-free goodness of fit test statistic.

    PubMed

    Browne, Michael W; Shapiro, Alexander

    2015-03-01

    In a recent article Jennrich and Satorra (Psychometrika 78: 545-552, 2013) showed that a proof by Browne (British Journal of Mathematical and Statistical Psychology 37: 62-83, 1984) of the asymptotic distribution of a goodness of fit test statistic is incomplete because it fails to prove that the orthogonal component function employed is continuous. Jennrich and Satorra (Psychometrika 78: 545-552, 2013) showed how Browne's proof can be completed satisfactorily but this required the development of an extensive and mathematically sophisticated framework for continuous orthogonal component functions. This short note provides a simple proof of the asymptotic distribution of Browne's (British Journal of Mathematical and Statistical Psychology 37: 62-83, 1984) test statistic by using an equivalent form of the statistic that does not involve orthogonal component functions and consequently avoids all complicating issues associated with them.

  5. A statistical test to show negligible trend

    Treesearch

    Philip M. Dixon; Joseph H.K. Pechmann

    2005-01-01

    The usual statistical tests of trend are inappropriate for demonstrating the absence of trend. This is because failure to reject the null hypothesis of no trend does not prove that null hypothesis. The appropriate statistical method is based on an equivalence test. The null hypothesis is that the trend is not zero, i.e., outside an a priori specified equivalence region...
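
    A hedged sketch of the general equivalence-testing idea described here, applied to a regression slope via two one-sided tests (equivalently, checking that a 90% confidence interval lies inside the equivalence region); this is not the authors' exact procedure, and the function name and delta value are assumptions.

      import numpy as np
      from scipy import stats

      def negligible_trend(x, y, delta, alpha=0.05):
          """Declare the trend negligible if the (1 - 2*alpha) CI for the slope lies in (-delta, delta)."""
          res = stats.linregress(x, y)
          df = len(x) - 2
          tcrit = stats.t.ppf(1 - alpha, df)
          lo = res.slope - tcrit * res.stderr
          hi = res.slope + tcrit * res.stderr
          return (lo > -delta) and (hi < delta), (lo, hi)

      rng = np.random.default_rng(2)
      years = np.arange(20)
      counts = 50 + rng.normal(0, 3, size=20)   # simulated series with no real trend
      print(negligible_trend(years, counts, delta=0.5))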

  6. Prospective Elementary and Secondary School Mathematics Teachers' Statistical Reasoning

    ERIC Educational Resources Information Center

    Karatoprak, Rabia; Karagöz Akar, Gülseren; Börkan, Bengü

    2015-01-01

    This study investigated prospective elementary (PEMTs) and secondary (PSMTs) school mathematics teachers' statistical reasoning. The study began with the adaptation of the Statistical Reasoning Assessment (Garfield, 2003) test. Then, the test was administered to 82 PEMTs and 91 PSMTs in a metropolitan city of Turkey. Results showed that both…

  7. Finding differentially expressed genes in high dimensional data: Rank based test statistic via a distance measure.

    PubMed

    Mathur, Sunil; Sadana, Ajit

    2015-12-01

    We present a rank-based test statistic for the identification of differentially expressed genes using a distance measure. The proposed test statistic is highly robust against extreme values and does not assume the distribution of the parent population. Simulation studies show that the proposed test is more powerful than some commonly used methods, such as the paired t-test, the Wilcoxon signed rank test, and significance analysis of microarrays (SAM), under certain non-normal distributions. The asymptotic distribution of the test statistic and the p-value function are discussed. The application of the proposed method is shown using a real-life data set. © The Author(s) 2011.

  8. Perspectives on the Use of Null Hypothesis Statistical Testing. Part III: the Various Nuts and Bolts of Statistical and Hypothesis Testing

    ERIC Educational Resources Information Center

    Marmolejo-Ramos, Fernando; Cousineau, Denis

    2017-01-01

    The number of articles showing dissatisfaction with the null hypothesis statistical testing (NHST) framework has been progressively increasing over the years. Alternatives to NHST have been proposed and the Bayesian approach seems to have achieved the highest amount of visibility. In this last part of the special issue, a few alternative…

  9. [Clinical research IV. Relevancy of the statistical test chosen].

    PubMed

    Talavera, Juan O; Rivas-Ruiz, Rodolfo

    2011-01-01

    When we look at the difference between two therapies or the association of a risk factor or prognostic indicator with its outcome, we need to evaluate the accuracy of the result. This assessment is based on a judgment that uses information about the study design and the statistical management of the information. This paper specifically addresses the relevance of the statistical test selected. Statistical tests are chosen mainly on the basis of two characteristics: the objective of the study and the type of variables. The objective can be divided into three test groups: a) those in which you want to show differences between groups or within a group before and after a maneuver, b) those that seek to show the relationship (correlation) between variables, and c) those that aim to predict an outcome. The types of variables are divided into two: quantitative (continuous and discontinuous) and qualitative (ordinal and dichotomous). For example, if we seek to demonstrate differences in age (quantitative variable) among patients with systemic lupus erythematosus (SLE) with and without neurological disease (two groups), the appropriate test is the "Student t test for independent samples." But if the comparison is about the frequency of females (binomial variable), then the appropriate statistical test is the χ(2).
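
    A small worked illustration of the abstract's two examples using simulated, made-up data (not from any real SLE study): an independent-samples t test for a quantitative variable and a chi-square test for a binomial variable.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      age_with_nd = rng.normal(38, 10, size=40)      # SLE with neurological disease (simulated ages)
      age_without_nd = rng.normal(35, 10, size=60)   # SLE without neurological disease (simulated ages)
      t, p_t = stats.ttest_ind(age_with_nd, age_without_nd)   # independent-samples t test

      # 2x2 table of sex by group (counts invented for illustration)
      table = np.array([[34, 6],     # with neurological disease: female, male
                        [50, 10]])   # without neurological disease: female, male
      chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
      print(p_t, p_chi2)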

  10. An entropy-based statistic for genomewide association studies.

    PubMed

    Zhao, Jinying; Boerwinkle, Eric; Xiong, Momiao

    2005-07-01

    Efficient genotyping methods and the availability of a large collection of single-nucleotide polymorphisms provide valuable tools for genetic studies of human disease. The standard chi2 statistic for case-control studies, which uses a linear function of allele frequencies, has limited power when the number of marker loci is large. We introduce a novel test statistic for genetic association studies that uses Shannon entropy and a nonlinear function of allele frequencies to amplify the differences in allele and haplotype frequencies to maintain statistical power with large numbers of marker loci. We investigate the relationship between the entropy-based test statistic and the standard chi2 statistic and show that, in most cases, the power of the entropy-based statistic is greater than that of the standard chi2 statistic. The distribution of the entropy-based statistic and the type I error rates are validated using simulation studies. Finally, we apply the new entropy-based test statistic to two real data sets, one for the COMT gene and schizophrenia and one for the MMP-2 gene and esophageal carcinoma, to evaluate the performance of the new method for genetic association studies. The results show that the entropy-based statistic obtained smaller P values than did the standard chi2 statistic.
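
    A heavily hedged sketch for comparison purposes only: the standard chi-square on a case-control allele count table next to one simple entropy-based, nonlinear-in-frequencies statistic (the mutual-information or G statistic). This is not the authors' statistic; it merely illustrates replacing a linear frequency contrast with a Shannon-entropy-based one.

      import numpy as np
      from scipy import stats

      def shannon_entropy(p):
          p = p[p > 0]
          return -np.sum(p * np.log(p))

      def entropy_g_statistic(table):
          """2*N*(H(pooled) - weighted H(rows)), identical to the likelihood-ratio G statistic (nats)."""
          table = np.asarray(table, dtype=float)
          n = table.sum()
          row_n = table.sum(axis=1)
          pooled = table.sum(axis=0) / n
          h_rows = sum((row_n[i] / n) * shannon_entropy(table[i] / row_n[i])
                       for i in range(table.shape[0]))
          return 2.0 * n * (shannon_entropy(pooled) - h_rows)

      counts = [[120, 80],   # cases: allele A, allele a (invented counts)
                [90, 110]]   # controls: allele A, allele a
      chi2, p, dof, _ = stats.chi2_contingency(counts, correction=False)
      g = entropy_g_statistic(counts)
      print(chi2, g, stats.chi2.sf(g, dof))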

  11. Test Statistics and Confidence Intervals to Establish Noninferiority between Treatments with Ordinal Categorical Data.

    PubMed

    Zhang, Fanghong; Miyaoka, Etsuo; Huang, Fuping; Tanaka, Yutaka

    2015-01-01

    The problem of establishing noninferiority between a new treatment and a standard (control) treatment is discussed for ordinal categorical data. A measure of treatment effect is used, and a method of specifying the noninferiority margin for the measure is provided. Two Z-type test statistics are proposed in which the variance estimate is constructed under the shifted null hypothesis using U-statistics. Furthermore, the confidence interval and the sample size formula are given based on the proposed test statistics. The proposed procedure is applied to a dataset from a clinical trial. A simulation study is conducted to compare the performance of the proposed test statistics with that of existing ones, and the results show that the proposed test statistics are better in terms of deviation from the nominal level and power.

  12. Which Statistic Should Be Used to Detect Item Preknowledge When the Set of Compromised Items Is Known?

    PubMed

    Sinharay, Sandip

    2017-09-01

    Benefiting from item preknowledge is a major type of fraudulent behavior during educational assessments. Belov suggested the posterior shift statistic for detection of item preknowledge and showed its performance to be better on average than that of seven other statistics for detection of item preknowledge for a known set of compromised items. Sinharay suggested a statistic based on the likelihood ratio test for detection of item preknowledge; the advantage of the statistic is that its null distribution is known. Results from simulated and real data and adaptive and nonadaptive tests are used to demonstrate that the Type I error rate and power of the statistic based on the likelihood ratio test are very similar to those of the posterior shift statistic. Thus, the statistic based on the likelihood ratio test appears promising in detecting item preknowledge when the set of compromised items is known.

  13. Heteroscedastic Test Statistics for One-Way Analysis of Variance: The Trimmed Means and Hall's Transformation Conjunction

    ERIC Educational Resources Information Center

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2005-01-01

    To deal with nonnormal and heterogeneous data for the one-way fixed effect analysis of variance model, the authors adopted a trimmed means method in conjunction with Hall's invertible transformation into a heteroscedastic test statistic (Alexander-Govern test or Welch test). The results of simulation experiments showed that the proposed technique…

  14. A new test of multivariate nonlinear causality

    PubMed Central

    Bai, Zhidong; Jiang, Dandan; Lv, Zhihui; Wong, Wing-Keung; Zheng, Shurong

    2018-01-01

    The multivariate nonlinear Granger causality test developed by Bai et al. (2010) (Mathematics and Computers in Simulation. 2010; 81: 5-17) plays an important role in detecting the dynamic interrelationships between two groups of variables. Following the idea of the Hiemstra-Jones (HJ) test proposed by Hiemstra and Jones (1994) (Journal of Finance. 1994; 49(5): 1639-1664), they attempt to establish a central limit theorem (CLT) for their test statistic by applying the asymptotic properties of multivariate U-statistics. However, Bai et al. (2016) (2016; arXiv: 1701.03992) revisit the HJ test and find that the test statistic given by HJ is NOT a function of U-statistics, which implies that neither the CLT proposed by Hiemstra and Jones (1994) nor the one extended by Bai et al. (2010) is valid for statistical inference. In this paper, we re-estimate the probabilities and re-establish the CLT of the new test statistic. Numerical simulation shows that our new estimates are consistent and that our new test has good size and power. PMID:29304085

  15. A new test of multivariate nonlinear causality.

    PubMed

    Bai, Zhidong; Hui, Yongchang; Jiang, Dandan; Lv, Zhihui; Wong, Wing-Keung; Zheng, Shurong

    2018-01-01

    The multivariate nonlinear Granger causality test developed by Bai et al. (2010) (Mathematics and Computers in Simulation. 2010; 81: 5-17) plays an important role in detecting the dynamic interrelationships between two groups of variables. Following the idea of the Hiemstra-Jones (HJ) test proposed by Hiemstra and Jones (1994) (Journal of Finance. 1994; 49(5): 1639-1664), they attempt to establish a central limit theorem (CLT) for their test statistic by applying the asymptotic properties of multivariate U-statistics. However, Bai et al. (2016) (2016; arXiv: 1701.03992) revisit the HJ test and find that the test statistic given by HJ is NOT a function of U-statistics, which implies that neither the CLT proposed by Hiemstra and Jones (1994) nor the one extended by Bai et al. (2010) is valid for statistical inference. In this paper, we re-estimate the probabilities and re-establish the CLT of the new test statistic. Numerical simulation shows that our new estimates are consistent and that our new test has good size and power.

  16. Detection of Person Misfit in Computerized Adaptive Tests with Polytomous Items.

    ERIC Educational Resources Information Center

    van Krimpen-Stoop, Edith M. L. A.; Meijer, Rob R.

    2002-01-01

    Compared the nominal and empirical null distributions of the standardized log-likelihood statistic for polytomous items for paper-and-pencil (P&P) and computerized adaptive tests (CATs). Results show that the empirical distribution of the statistic differed from the assumed standard normal distribution for both P&P tests and CATs. Also…

  17. Fine Mapping Causal Variants with an Approximate Bayesian Method Using Marginal Test Statistics

    PubMed Central

    Chen, Wenan; Larrabee, Beth R.; Ovsyannikova, Inna G.; Kennedy, Richard B.; Haralambieva, Iana H.; Poland, Gregory A.; Schaid, Daniel J.

    2015-01-01

    Two recently developed fine-mapping methods, CAVIAR and PAINTOR, demonstrate better performance over other fine-mapping methods. They also have the advantage of using only the marginal test statistics and the correlation among SNPs. Both methods leverage the fact that the marginal test statistics asymptotically follow a multivariate normal distribution and are likelihood based. However, their relationship with Bayesian fine mapping, such as BIMBAM, is not clear. In this study, we first show that CAVIAR and BIMBAM are actually approximately equivalent to each other. This leads to a fine-mapping method using marginal test statistics in the Bayesian framework, which we call CAVIAR Bayes factor (CAVIARBF). Another advantage of the Bayesian framework is that it can answer both association and fine-mapping questions. We also used simulations to compare CAVIARBF with other methods under different numbers of causal variants. The results showed that both CAVIARBF and BIMBAM have better performance than PAINTOR and other methods. Compared to BIMBAM, CAVIARBF has the advantage of using only marginal test statistics and takes about one-quarter to one-fifth of the running time. We applied different methods on two independent cohorts of the same phenotype. Results showed that CAVIARBF, BIMBAM, and PAINTOR selected the same top 3 SNPs; however, CAVIARBF and BIMBAM had better consistency in selecting the top 10 ranked SNPs between the two cohorts. Software is available at https://bitbucket.org/Wenan/caviarbf. PMID:25948564

  18. Detecting Answer Copying Using Alternate Test Forms and Seat Locations in Small-Scale Examinations

    ERIC Educational Resources Information Center

    van der Ark, L. Andries; Emons, Wilco H. M.; Sijtsma, Klaas

    2008-01-01

    Two types of answer-copying statistics for detecting copiers in small-scale examinations are proposed. One statistic identifies the "copier-source" pair, and the other in addition suggests who is copier and who is source. Both types of statistics can be used when the examination has alternate test forms. A simulation study shows that the…

  19. Exploiting excess sharing: a more powerful test of linkage for affected sib pairs than the transmission/disequilibrium test.

    PubMed Central

    Wicks, J

    2000-01-01

    The transmission/disequilibrium test (TDT) is a popular, simple, and powerful test of linkage, which can be used to analyze data consisting of transmissions to the affected members of families with any kind of pedigree structure, including affected sib pairs (ASPs). Although it is based on the preferential transmission of a particular marker allele across families, it is not a valid test of association for ASPs. Martin et al. devised a similar statistic for ASPs, Tsp, which is also based on preferential transmission of a marker allele but which is a valid test of both linkage and association for ASPs. It is, however, less powerful than the TDT as a test of linkage for ASPs. What I show is that the differences between the TDT and Tsp are due to the fact that, although both statistics are based on preferential transmission of a marker allele, the TDT also exploits excess sharing in identity-by-descent transmissions to ASPs. Furthermore, I show that both of these statistics are members of a family of "TDT-like" statistics for ASPs. The statistics in this family are based on preferential transmission but also, to varying extents, exploit excess sharing. From this family of statistics, we see that, although the TDT exploits excess sharing to some extent, it is possible to do so to a greater extent, and thus produce a more powerful test of linkage for ASPs than is provided by the TDT. Power simulations conducted under a number of disease models are used to verify that the most powerful member of this family of TDT-like statistics is more powerful than the TDT for ASPs. PMID:10788332

  20. Exploiting excess sharing: a more powerful test of linkage for affected sib pairs than the transmission/disequilibrium test.

    PubMed

    Wicks, J

    2000-06-01

    The transmission/disequilibrium test (TDT) is a popular, simple, and powerful test of linkage, which can be used to analyze data consisting of transmissions to the affected members of families with any kind of pedigree structure, including affected sib pairs (ASPs). Although it is based on the preferential transmission of a particular marker allele across families, it is not a valid test of association for ASPs. Martin et al. devised a similar statistic for ASPs, Tsp, which is also based on preferential transmission of a marker allele but which is a valid test of both linkage and association for ASPs. It is, however, less powerful than the TDT as a test of linkage for ASPs. What I show is that the differences between the TDT and Tsp are due to the fact that, although both statistics are based on preferential transmission of a marker allele, the TDT also exploits excess sharing in identity-by-descent transmissions to ASPs. Furthermore, I show that both of these statistics are members of a family of "TDT-like" statistics for ASPs. The statistics in this family are based on preferential transmission but also, to varying extents, exploit excess sharing. From this family of statistics, we see that, although the TDT exploits excess sharing to some extent, it is possible to do so to a greater extent, and thus produce a more powerful test of linkage for ASPs than is provided by the TDT. Power simulations conducted under a number of disease models are used to verify that the most powerful member of this family of TDT-like statistics is more powerful than the TDT for ASPs.
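
    For reference, a sketch of the classic TDT chi-square computed from counts of heterozygous parental transmissions; the family of TDT-like statistics exploiting excess identity-by-descent sharing described above is not reproduced. The example counts are invented.

      from scipy import stats

      def tdt_statistic(b, c):
          """McNemar-type TDT: b parents transmit the candidate allele, c do not; (b - c)^2 / (b + c) is asymptotically chi-square(1)."""
          stat = (b - c) ** 2 / (b + c)
          return stat, stats.chi2.sf(stat, df=1)

      print(tdt_statistic(b=62, c=38))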

  1. Spatial Autocorrelation Approaches to Testing Residuals from Least Squares Regression.

    PubMed

    Chen, Yanguang

    2016-01-01

    In geo-statistics, the Durbin-Watson test is frequently employed to detect the presence of residual serial correlation from least squares regression analyses. However, the Durbin-Watson statistic is only suitable for ordered time or spatial series. If the variables comprise cross-sectional data coming from spatial random sampling, the test will be ineffectual because the value of Durbin-Watson's statistic depends on the sequence of data points. This paper develops two new statistics for testing serial correlation of residuals from least squares regression based on spatial samples. By analogy with the new form of Moran's index, an autocorrelation coefficient is defined with a standardized residual vector and a normalized spatial weight matrix. Then by analogy with the Durbin-Watson statistic, two types of new serial correlation indices are constructed. As a case study, the two newly presented statistics are applied to a spatial sample of 29 China's regions. These results show that the new spatial autocorrelation models can be used to test the serial correlation of residuals from regression analysis. In practice, the new statistics can make up for the deficiencies of the Durbin-Watson test.
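
    A hedged sketch of the general idea: a Moran-type autocorrelation coefficient of OLS residuals built from a standardized residual vector and a normalized spatial weight matrix. It is not the paper's two new Durbin-Watson-like indices, and all names and the simulated sample are assumptions.

      import numpy as np

      def residual_moran(y, X, W):
          """Moran-type autocorrelation of OLS residuals; W is an n x n spatial weight matrix."""
          X1 = np.column_stack([np.ones(len(y)), X])      # add intercept
          beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
          e = y - X1 @ beta
          z = (e - e.mean()) / e.std(ddof=1)              # standardized residuals
          Wn = W / W.sum()                                # globally normalized weights
          return len(y) * (z @ Wn @ z) / (z @ z)          # Moran-style index

      rng = np.random.default_rng(4)
      n = 29
      coords = rng.uniform(size=(n, 2))                   # random spatial sample
      d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
      W = 1.0 / (d + np.eye(n))                           # inverse-distance weights
      np.fill_diagonal(W, 0.0)
      X = rng.normal(size=(n, 2))
      y = X @ np.array([1.0, -0.5]) + rng.normal(size=n)
      print(residual_moran(y, X, W))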

  2. A scan statistic to extract causal gene clusters from case-control genome-wide rare CNV data.

    PubMed

    Nishiyama, Takeshi; Takahashi, Kunihiko; Tango, Toshiro; Pinto, Dalila; Scherer, Stephen W; Takami, Satoshi; Kishino, Hirohisa

    2011-05-26

    Several statistical tests have been developed for analyzing genome-wide association data by incorporating gene pathway information in terms of gene sets. Using these methods, hundreds of gene sets are typically tested, and the tested gene sets often overlap. This overlapping greatly increases the probability of generating false positives, and the results obtained are difficult to interpret, particularly when many gene sets show statistical significance. We propose a flexible statistical framework to circumvent these problems. Inspired by spatial scan statistics for detecting clustering of disease occurrence in the field of epidemiology, we developed a scan statistic to extract disease-associated gene clusters from a whole gene pathway. Extracting one or a few significant gene clusters from a global pathway limits the overall false positive probability, which results in increased statistical power, and facilitates the interpretation of test results. In the present study, we applied our method to genome-wide association data for rare copy-number variations, which have been strongly implicated in common diseases. Application of our method to a simulated dataset demonstrated the high accuracy of this method in detecting disease-associated gene clusters in a whole gene pathway. The scan statistic approach proposed here shows a high level of accuracy in detecting gene clusters in a whole gene pathway. This study has provided a sound statistical framework for analyzing genome-wide rare CNV data by incorporating topological information on the gene pathway.

  3. Online incidental statistical learning of audiovisual word sequences in adults: a registered report.

    PubMed

    Kuppuraj, Sengottuvel; Duta, Mihaela; Thompson, Paul; Bishop, Dorothy

    2018-02-01

    Statistical learning has been proposed as a key mechanism in language learning. Our main goal was to examine whether adults are capable of simultaneously extracting statistical dependencies in a task where stimuli include a range of structures amenable to statistical learning within a single paradigm. We devised an online statistical learning task using real word auditory-picture sequences that vary in two dimensions: (i) predictability and (ii) adjacency of dependent elements. This task was followed by an offline recall task to probe learning of each sequence type. We registered three hypotheses with specific predictions. First, adults would extract regular patterns from continuous stream (effect of grammaticality). Second, within grammatical conditions, they would show differential speeding up for each condition as a factor of statistical complexity of the condition and exposure. Third, our novel approach to measure online statistical learning would be reliable in showing individual differences in statistical learning ability. Further, we explored the relation between statistical learning and a measure of verbal short-term memory (STM). Forty-two participants were tested and retested after an interval of at least 3 days on our novel statistical learning task. We analysed the reaction time data using a novel regression discontinuity approach. Consistent with prediction, participants showed a grammaticality effect, agreeing with the predicted order of difficulty for learning different statistical structures. Furthermore, a learning index from the task showed acceptable test-retest reliability ( r  = 0.67). However, STM did not correlate with statistical learning. We discuss the findings noting the benefits of online measures in tracking the learning process.

  4. On the assessment of the added value of new predictive biomarkers.

    PubMed

    Chen, Weijie; Samuelson, Frank W; Gallas, Brandon D; Kang, Le; Sahiner, Berkman; Petrick, Nicholas

    2013-07-29

    The surge in biomarker development calls for research on statistical evaluation methodology to rigorously assess emerging biomarkers and classification models. Recently, several authors reported the puzzling observation that, in assessing the added value of new biomarkers to existing ones in a logistic regression model, statistical significance of new predictor variables does not necessarily translate into a statistically significant increase in the area under the ROC curve (AUC). Vickers et al. concluded that this inconsistency is because AUC "has vastly inferior statistical properties," i.e., it is extremely conservative. This statement is based on simulations that misuse the DeLong et al. method. Our purpose is to provide a fair comparison of the likelihood ratio (LR) test and the Wald test versus diagnostic accuracy (AUC) tests. We present a test to compare ideal AUCs of nested linear discriminant functions via an F test. We compare it with the LR test and the Wald test for the logistic regression model. The null hypotheses of these three tests are equivalent; however, the F test is an exact test whereas the LR test and the Wald test are asymptotic tests. Our simulation shows that the F test has the nominal type I error even with a small sample size. Our results also indicate that the LR test and the Wald test have inflated type I errors when the sample size is small, while the type I error converges to the nominal value asymptotically with increasing sample size as expected. We further show that the DeLong et al. method tests a different hypothesis and has the nominal type I error when it is used within its designed scope. Finally, we summarize the pros and cons of all four methods we consider in this paper. We show that there is nothing inherently less powerful or disagreeable about ROC analysis for showing the usefulness of new biomarkers or characterizing the performance of classification models. Each statistical method for assessing biomarkers and classification models has its own strengths and weaknesses. Investigators need to choose methods based on the assessment purpose, the biomarker development phase at which the assessment is being performed, the available patient data, and the validity of assumptions behind the methodologies.
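
    A sketch contrasting two of the ways discussed here to judge an added biomarker: a likelihood-ratio test between nested logistic models and the change in AUC, on simulated data. The authors' exact F test for nested linear discriminant functions and the DeLong comparison are not reproduced.

      import numpy as np
      import statsmodels.api as sm
      from scipy import stats
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(5)
      n = 500
      x_old = rng.normal(size=n)                          # established biomarker
      x_new = rng.normal(size=n)                          # candidate biomarker with a weak effect
      y = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * x_old + 0.3 * x_new))))

      X0 = sm.add_constant(x_old)
      X1 = sm.add_constant(np.column_stack([x_old, x_new]))
      m0 = sm.Logit(y, X0).fit(disp=0)
      m1 = sm.Logit(y, X1).fit(disp=0)

      lr_stat = 2 * (m1.llf - m0.llf)                     # likelihood-ratio test, 1 df
      p_lr = stats.chi2.sf(lr_stat, df=1)
      auc0 = roc_auc_score(y, m0.predict(X0))             # AUC without the new marker
      auc1 = roc_auc_score(y, m1.predict(X1))             # AUC with the new marker
      print(p_lr, auc0, auc1)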

  5. Cognitive Stimulation of Elderly Residents in Social Protection Centers in Cartagena, 2014.

    PubMed

    Melguizo Herrera, Estela; Bertel De La Hoz, Anyel; Paternina Osorio, Diego; Felfle Fuentes, Yurani; Porto Osorio, Leidy

    To determine the effectiveness of a program of cognitive stimulation of elderly residents in Social Protection Centers in Cartagena, 2014. Quasi-experimental study with pre- and post-tests in control and experimental groups. A sample of 37 elderly residents in Social Protection Centers participated: 23 in the experimental group and 14 in the control group. A survey and a mental evaluation test (Pfeiffer) were applied. The experimental group participated in 10 sessions of cognitive stimulation. The paired t-test showed statistically significant pre- to post-intervention differences in the Pfeiffer test in the experimental group (P=.0005). The unpaired t-test showed statistically significant differences in Pfeiffer test results between the experimental and control groups (P=.0450). Principal component analysis showed that the most interrelated variables were age, diseases, number of errors, and test results; these were grouped around the disease variable, with a negative association. The intervention demonstrated a statistically significant improvement in the cognitive functioning of the elderly. Nursing can lead this type of intervention. It should be studied further to strengthen and clarify these results. Copyright © 2016 Asociación Colombiana de Psiquiatría. Publicado por Elsevier España. All rights reserved.

  6. Research design and statistical methods in Pakistan Journal of Medical Sciences (PJMS).

    PubMed

    Akhtar, Sohail; Shah, Syed Wadood Ali; Rafiq, M; Khan, Ajmal

    2016-01-01

    This article compares the study designs and statistical methods used in the 2005, 2010 and 2015 issues of the Pakistan Journal of Medical Sciences (PJMS). Only original articles of PJMS were considered for the analysis. The articles were carefully reviewed for statistical methods and designs, and then recorded accordingly. The frequency of each statistical method and research design was estimated and compared with previous years. A total of 429 articles were evaluated (n=74 in 2005, n=179 in 2010, n=176 in 2015), of which 171 (40%) were cross-sectional and 116 (27%) were prospective study designs. A variety of statistical methods were found in the analysis. The most frequent methods include: descriptive statistics (n=315, 73.4%), chi-square/Fisher's exact tests (n=205, 47.8%) and Student's t-test (n=186, 43.4%). There was a significant increase in the use of statistical methods over the time period: t-test, chi-square/Fisher's exact test, logistic regression, epidemiological statistics, and non-parametric tests. This study shows that a diverse variety of statistical methods have been used in the research articles of PJMS and their frequency increased from 2005 to 2015. However, descriptive statistics was the most frequent method of statistical analysis in the published articles, while the cross-sectional study design was the most common study design.

  7. Relationship between sitting volleyball performance and field fitness of sitting volleyball players in Korea

    PubMed Central

    Jeoung, Bogja

    2017-01-01

    The purpose of this study was to evaluate the relationship between sitting volleyball performance and the field fitness of sitting volleyball players. Forty-five elite sitting volleyball players participated in 10 field fitness tests. Additionally, the players’ head coach and coach assessed their volleyball performance (receive and defense, block, attack, and serve). Data were analyzed with SPSS software version 21 using correlation and regression analyses, and the significance level was set at P < 0.05. The results showed that chest pass, overhand throw, one-hand throw, one-hand side throw, sprint, speed endurance, reaction time, and graded exercise test results had a statistically significant influence on the players’ abilities to attack, serve, and block. Grip strength, t-test, speed, and agility showed a statistically significant relationship with the players’ skill at defense and receive. Our results showed that chest pass, overhand throw, one-hand throw, one-hand side throw, speed endurance, reaction time, and graded exercise test results had a statistically significant influence on volleyball performance. PMID:29326896

  8. Fine Mapping Causal Variants with an Approximate Bayesian Method Using Marginal Test Statistics.

    PubMed

    Chen, Wenan; Larrabee, Beth R; Ovsyannikova, Inna G; Kennedy, Richard B; Haralambieva, Iana H; Poland, Gregory A; Schaid, Daniel J

    2015-07-01

    Two recently developed fine-mapping methods, CAVIAR and PAINTOR, demonstrate better performance over other fine-mapping methods. They also have the advantage of using only the marginal test statistics and the correlation among SNPs. Both methods leverage the fact that the marginal test statistics asymptotically follow a multivariate normal distribution and are likelihood based. However, their relationship with Bayesian fine mapping, such as BIMBAM, is not clear. In this study, we first show that CAVIAR and BIMBAM are actually approximately equivalent to each other. This leads to a fine-mapping method using marginal test statistics in the Bayesian framework, which we call CAVIAR Bayes factor (CAVIARBF). Another advantage of the Bayesian framework is that it can answer both association and fine-mapping questions. We also used simulations to compare CAVIARBF with other methods under different numbers of causal variants. The results showed that both CAVIARBF and BIMBAM have better performance than PAINTOR and other methods. Compared to BIMBAM, CAVIARBF has the advantage of using only marginal test statistics and takes about one-quarter to one-fifth of the running time. We applied different methods on two independent cohorts of the same phenotype. Results showed that CAVIARBF, BIMBAM, and PAINTOR selected the same top 3 SNPs; however, CAVIARBF and BIMBAM had better consistency in selecting the top 10 ranked SNPs between the two cohorts. Software is available at https://bitbucket.org/Wenan/caviarbf. Copyright © 2015 by the Genetics Society of America.

  9. TRANSIT TIMING OBSERVATIONS FROM KEPLER. VI. POTENTIALLY INTERESTING CANDIDATE SYSTEMS FROM FOURIER-BASED STATISTICAL TESTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steffen, Jason H.; Ford, Eric B.; Rowe, Jason F.

    2012-09-10

    We analyze the deviations of transit times from a linear ephemeris for the Kepler Objects of Interest (KOI) through quarter six of science data. We conduct two statistical tests for all KOIs and a related statistical test for all pairs of KOIs in multi-transiting systems. These tests identify several systems which show potentially interesting transit timing variations (TTVs). Strong TTV systems have been valuable for the confirmation of planets and their mass measurements. Many of the systems identified in this study should prove fruitful for detailed TTV studies.

  10. Transit Timing Observations from Kepler: VII. Potentially interesting candidate systems from Fourier-based statistical tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steffen, Jason H.; Ford, Eric B.

    2012-01-01

    We analyze the deviations of transit times from a linear ephemeris for the Kepler Objects of Interest (KOI) through Quarter six (Q6) of science data. We conduct two statistical tests for all KOIs and a related statistical test for all pairs of KOIs in multi-transiting systems. These tests identify several systems which show potentially interesting transit timing variations (TTVs). Strong TTV systems have been valuable for the confirmation of planets and their mass measurements. Many of the systems identified in this study should prove fruitful for detailed TTV studies.

  11. Efficient statistical tests to compare Youden index: accounting for contingency correlation.

    PubMed

    Chen, Fangyao; Xue, Yuqiang; Tan, Ming T; Chen, Pingyan

    2015-04-30

    Youden index is widely utilized in studies evaluating the accuracy of diagnostic tests and the performance of predictive, prognostic, or risk models. However, both the one-sample and the two-independent-sample tests on the Youden index have been derived ignoring the dependence (association) between sensitivity and specificity, resulting in potentially misleading findings. Besides, a paired-sample test on the Youden index is currently unavailable. This article develops efficient statistical inference procedures for one-sample, independent-sample, and paired-sample tests on the Youden index by accounting for contingency correlation, namely associations between sensitivity and specificity and paired samples typically represented in contingency tables. For the one- and two-independent-sample tests, the variances are estimated by the Delta method, and the statistical inference is based on central limit theory; these are then verified by bootstrap estimates. For the paired-sample test, we show that the estimated covariance of the two sensitivities and specificities can be represented as a function of the kappa statistic, so the test can be readily carried out. We then show the remarkable accuracy of the estimated variance using a constrained optimization approach. Simulation is performed to evaluate the statistical properties of the derived tests. The proposed approaches yield more stable type I errors at the nominal level and substantially higher power (efficiency) than does the original Youden's approach. Therefore, the simple explicit large-sample solution performs very well. Because we can readily implement the asymptotic and exact bootstrap computation with common software like R, the method is broadly applicable to the evaluation of diagnostic tests and model performance. Copyright © 2015 John Wiley & Sons, Ltd.
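
    A minimal sketch, not the authors' method: the Youden index of a dichotomized test with a percentile bootstrap confidence interval. The Delta-method variances accounting for contingency correlation and the kappa-based paired-sample test are not reproduced; the data and cutoff below are invented.

      import numpy as np

      def youden(disease, positive):
          """Youden index J = sensitivity + specificity - 1 for a dichotomized test."""
          disease, positive = np.asarray(disease, bool), np.asarray(positive, bool)
          sens = np.mean(positive[disease])
          spec = np.mean(~positive[~disease])
          return sens + spec - 1

      def bootstrap_ci(disease, positive, n_boot=2000, alpha=0.05, seed=0):
          rng = np.random.default_rng(seed)
          n = len(disease)
          reps = [youden(disease[idx], positive[idx])
                  for idx in rng.integers(0, n, size=(n_boot, n))]
          return np.quantile(reps, [alpha / 2, 1 - alpha / 2])

      rng = np.random.default_rng(6)
      disease = rng.binomial(1, 0.3, size=300).astype(bool)
      score = rng.normal(loc=disease.astype(float), scale=1.0)   # test score, higher in diseased
      positive = score > 0.5                                      # dichotomize at an arbitrary cutoff
      print(youden(disease, positive), bootstrap_ci(disease, positive))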

  12. Spatial Autocorrelation Approaches to Testing Residuals from Least Squares Regression

    PubMed Central

    Chen, Yanguang

    2016-01-01

    In geo-statistics, the Durbin-Watson test is frequently employed to detect the presence of residual serial correlation from least squares regression analyses. However, the Durbin-Watson statistic is only suitable for ordered time or spatial series. If the variables comprise cross-sectional data coming from spatial random sampling, the test will be ineffectual because the value of Durbin-Watson’s statistic depends on the sequence of data points. This paper develops two new statistics for testing serial correlation of residuals from least squares regression based on spatial samples. By analogy with the new form of Moran’s index, an autocorrelation coefficient is defined with a standardized residual vector and a normalized spatial weight matrix. Then by analogy with the Durbin-Watson statistic, two types of new serial correlation indices are constructed. As a case study, the two newly presented statistics are applied to a spatial sample of 29 China’s regions. These results show that the new spatial autocorrelation models can be used to test the serial correlation of residuals from regression analysis. In practice, the new statistics can make up for the deficiencies of the Durbin-Watson test. PMID:26800271

  13. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis.

    PubMed

    Lin, Johnny; Bentler, Peter M

    2012-01-01

    Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and the Satorra-Bentler mean-scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of the Satorra-Bentler statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent, and Bibby's study of students tested for their ability in five content areas, with exams that were either open or closed book, were used to illustrate the real-world performance of this statistic.

  14. Statistical inference involving binomial and negative binomial parameters.

    PubMed

    García-Pérez, Miguel A; Núñez-Antón, Vicente

    2009-05-01

    Statistical inference about two binomial parameters implies that they are both estimated by binomial sampling. There are occasions in which one aims at testing the equality of two binomial parameters before and after the occurrence of the first success along a sequence of Bernoulli trials. In these cases, the binomial parameter before the first success is estimated by negative binomial sampling whereas that after the first success is estimated by binomial sampling, and both estimates are related. This paper derives statistical tools to test two hypotheses, namely, that both binomial parameters equal some specified value and that both parameters are equal though unknown. Simulation studies are used to show that in small samples both tests are accurate in keeping the nominal Type-I error rates, and also to determine sample size requirements to detect large, medium, and small effects with adequate power. Additional simulations also show that the tests are sufficiently robust to certain violations of their assumptions.

  15. Comparing the Fit of Item Response Theory and Factor Analysis Models

    ERIC Educational Resources Information Center

    Maydeu-Olivares, Alberto; Cai, Li; Hernandez, Adolfo

    2011-01-01

    Linear factor analysis (FA) models can be reliably tested using test statistics based on residual covariances. We show that the same statistics can be used to reliably test the fit of item response theory (IRT) models for ordinal data (under some conditions). Hence, the fit of an FA model and of an IRT model to the same data set can now be…

  16. A Discussion of the Effect of Open-Book and Closed-Book Exams on Student Achievement in an Introductory Statistics Course

    ERIC Educational Resources Information Center

    Block, Robert M.

    2012-01-01

    The use of open-book tests, closed-book tests, and notecards on tests in an introductory statistics course is described in this article. A review of the literature shows that open-book assessments are universally recognized to reduce anxiety. The literature is mixed however on whether deeper learning or better preparation occurs with open-book…

  17. Research design and statistical methods in Pakistan Journal of Medical Sciences (PJMS)

    PubMed Central

    Akhtar, Sohail; Shah, Syed Wadood Ali; Rafiq, M.; Khan, Ajmal

    2016-01-01

    Objective: This article compares the study designs and statistical methods used in the 2005, 2010 and 2015 issues of the Pakistan Journal of Medical Sciences (PJMS). Methods: Only original articles of PJMS were considered for the analysis. The articles were carefully reviewed for statistical methods and designs, and then recorded accordingly. The frequency of each statistical method and research design was estimated and compared with previous years. Results: A total of 429 articles were evaluated (n=74 in 2005, n=179 in 2010, n=176 in 2015), of which 171 (40%) were cross-sectional and 116 (27%) were prospective study designs. A variety of statistical methods were found in the analysis. The most frequent methods include: descriptive statistics (n=315, 73.4%), chi-square/Fisher’s exact tests (n=205, 47.8%) and Student’s t-test (n=186, 43.4%). There was a significant increase in the use of statistical methods over the time period: t-test, chi-square/Fisher’s exact test, logistic regression, epidemiological statistics, and non-parametric tests. Conclusion: This study shows that a diverse variety of statistical methods have been used in the research articles of PJMS and their frequency increased from 2005 to 2015. However, descriptive statistics was the most frequent method of statistical analysis in the published articles, while the cross-sectional study design was the most common study design. PMID:27022365

  18. Reproducibility-optimized test statistic for ranking genes in microarray studies.

    PubMed

    Elo, Laura L; Filén, Sanna; Lahesmaa, Riitta; Aittokallio, Tero

    2008-01-01

    A principal goal of microarray studies is to identify the genes showing differential expression under distinct conditions. In such studies, the selection of an optimal test statistic is a crucial challenge, which depends on the type and amount of data under analysis. While previous studies on simulated or spike-in datasets do not provide practical guidance on how to choose the best method for a given real dataset, we introduce an enhanced reproducibility-optimization procedure, which enables the selection of a suitable gene-ranking statistic directly from the data. In comparison with existing ranking methods, the reproducibility-optimized statistic shows good performance consistently under various simulated conditions and on Affymetrix spike-in dataset. Further, the feasibility of the novel statistic is confirmed in a practical research setting using data from an in-house cDNA microarray study of asthma-related gene expression changes. These results suggest that the procedure facilitates the selection of an appropriate test statistic for a given dataset without relying on a priori assumptions, which may bias the findings and their interpretation. Moreover, the general reproducibility-optimization procedure is not limited to detecting differential expression only but could be extended to a wide range of other applications as well.

  19. Distinguishing Positive Selection From Neutral Evolution: Boosting the Performance of Summary Statistics

    PubMed Central

    Lin, Kao; Li, Haipeng; Schlötterer, Christian; Futschik, Andreas

    2011-01-01

    Summary statistics are widely used in population genetics, but they suffer from the drawback that no simple sufficient summary statistic exists that captures all information required to distinguish different evolutionary hypotheses. Here, we apply boosting, a recent statistical method that combines simple classification rules to maximize their joint predictive performance. We show that our implementation of boosting has high power to detect selective sweeps. Demographic events, such as bottlenecks, do not result in a large excess of false positives. A comparison shows that our boosting implementation performs well relative to other neutrality tests. Furthermore, we evaluated the relative contribution of different summary statistics to the identification of selection and found that for recent sweeps integrated haplotype homozygosity is very informative, whereas older sweeps are better detected by Tajima's π. Overall, Watterson's θ was found to contribute the most information for distinguishing between bottlenecks and selection. PMID:21041556

  20. Descriptive Statistics for Modern Test Score Distributions: Skewness, Kurtosis, Discreteness, and Ceiling Effects

    ERIC Educational Resources Information Center

    Ho, Andrew D.; Yu, Carol C.

    2015-01-01

    Many statistical analyses benefit from the assumption that unconditional or conditional distributions are continuous and normal. More than 50 years ago in this journal, Lord and Cook chronicled departures from normality in educational tests, and Micerri similarly showed that the normality assumption is met rarely in educational and psychological…

  1. Properties of permutation-based gene tests and controlling type 1 error using a summary statistic based gene test

    PubMed Central

    2013-01-01

    Background: The advent of genome-wide association studies has led to many novel disease-SNP associations, opening the door to focused study on their biological underpinnings. Because of the importance of analyzing these associations, numerous statistical methods have been devoted to them. However, fewer methods have attempted to associate entire genes or genomic regions with outcomes, which is potentially more useful knowledge from a biological perspective and those methods currently implemented are often permutation-based. Results: One property of some permutation-based tests is that their power varies as a function of whether significant markers are in regions of linkage disequilibrium (LD) or not, which we show from a theoretical perspective. We therefore develop two methods for quantifying the degree of association between a genomic region and outcome, both of whose power does not vary as a function of LD structure. One method uses dimension reduction to “filter” redundant information when significant LD exists in the region, while the other, called the summary-statistic test, controls for LD by scaling marker Z-statistics using knowledge of the correlation matrix of markers. An advantage of this latter test is that it does not require the original data, but only their Z-statistics from univariate regressions and an estimate of the correlation structure of markers, and we show how to modify the test to protect the type 1 error rate when the correlation structure of markers is misspecified. We apply these methods to sequence data of oral cleft and compare our results to previously proposed gene tests, in particular permutation-based ones. We evaluate the versatility of the modification of the summary-statistic test since the specification of correlation structure between markers can be inaccurate. Conclusion: We find a significant association in the sequence data between the 8q24 region and oral cleft using our dimension reduction approach and a borderline significant association using the summary-statistic based approach. We also implement the summary-statistic test using Z-statistics from an already-published GWAS of Chronic Obstructive Pulmonary Disorder (COPD) and correlation structure obtained from HapMap. We experiment with the modification of this test because the correlation structure is assumed imperfectly known. PMID:24199751

  2. Properties of permutation-based gene tests and controlling type 1 error using a summary statistic based gene test.

    PubMed

    Swanson, David M; Blacker, Deborah; Alchawa, Taofik; Ludwig, Kerstin U; Mangold, Elisabeth; Lange, Christoph

    2013-11-07

    The advent of genome-wide association studies has led to many novel disease-SNP associations, opening the door to focused study on their biological underpinnings. Because of the importance of analyzing these associations, numerous statistical methods have been devoted to them. However, fewer methods have attempted to associate entire genes or genomic regions with outcomes, which is potentially more useful knowledge from a biological perspective and those methods currently implemented are often permutation-based. One property of some permutation-based tests is that their power varies as a function of whether significant markers are in regions of linkage disequilibrium (LD) or not, which we show from a theoretical perspective. We therefore develop two methods for quantifying the degree of association between a genomic region and outcome, both of whose power does not vary as a function of LD structure. One method uses dimension reduction to "filter" redundant information when significant LD exists in the region, while the other, called the summary-statistic test, controls for LD by scaling marker Z-statistics using knowledge of the correlation matrix of markers. An advantage of this latter test is that it does not require the original data, but only their Z-statistics from univariate regressions and an estimate of the correlation structure of markers, and we show how to modify the test to protect the type 1 error rate when the correlation structure of markers is misspecified. We apply these methods to sequence data of oral cleft and compare our results to previously proposed gene tests, in particular permutation-based ones. We evaluate the versatility of the modification of the summary-statistic test since the specification of correlation structure between markers can be inaccurate. We find a significant association in the sequence data between the 8q24 region and oral cleft using our dimension reduction approach and a borderline significant association using the summary-statistic based approach. We also implement the summary-statistic test using Z-statistics from an already-published GWAS of Chronic Obstructive Pulmonary Disorder (COPD) and correlation structure obtained from HapMap. We experiment with the modification of this test because the correlation structure is assumed imperfectly known.
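
    A hedged sketch of a summary-statistic style gene test in the spirit described above: combine per-marker Z-statistics from univariate regressions with an estimate R of the marker correlation (LD) matrix through the quadratic form Z'R^{-1}Z, which is chi-square with k degrees of freedom under the global null. This is one standard construction, not necessarily the authors' exact scaling; the ridge term stands in for their protection against a misspecified correlation structure.

      import numpy as np
      from scipy import stats

      def summary_stat_test(z, R, ridge=1e-3):
          """Quadratic-form test for k correlated marker Z-statistics."""
          z = np.asarray(z, float)
          k = len(z)
          R = np.asarray(R, float) + ridge * np.eye(k)   # regularize a noisy or misspecified R
          stat = z @ np.linalg.solve(R, z)
          return stat, stats.chi2.sf(stat, df=k)

      R = np.array([[1.0, 0.6, 0.3],
                    [0.6, 1.0, 0.5],
                    [0.3, 0.5, 1.0]])                    # LD-based correlation of 3 markers (invented)
      z = np.array([2.1, 1.8, 0.4])                      # marker Z-statistics from univariate regressions (invented)
      print(summary_stat_test(z, R))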

  3. Standard Errors and Confidence Intervals of Norm Statistics for Educational and Psychological Tests.

    PubMed

    Oosterhuis, Hannah E M; van der Ark, L Andries; Sijtsma, Klaas

    2016-11-14

    Norm statistics allow for the interpretation of scores on psychological and educational tests, by relating the test score of an individual test taker to the test scores of individuals belonging to the same gender, age, or education groups, et cetera. Given the uncertainty due to sampling error, one would expect researchers to report standard errors for norm statistics. In practice, standard errors are seldom reported; they are either unavailable or derived under strong distributional assumptions that may not be realistic for test scores. We derived standard errors for four norm statistics (standard deviation, percentile ranks, stanine boundaries and Z-scores) under the mild assumption that the test scores are multinomially distributed. A simulation study showed that the standard errors were unbiased and that corresponding Wald-based confidence intervals had good coverage. Finally, we discuss the possibilities for applying the standard errors in practical test use in education and psychology. The procedure is provided via the R function check.norms, which is available in the mokken package.
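
    An illustrative simulation check of the multinomial idea, not the authors' closed-form derivation (which is available via check.norms in the R package mokken): treat the observed score frequencies as multinomial and obtain a Monte Carlo standard error for a percentile rank. All numbers are invented.

      import numpy as np

      def percentile_rank(freq, score):
          """Percentile rank of `score` given frequencies over scores 0..len(freq)-1."""
          freq = np.asarray(freq, float)
          n = freq.sum()
          below = freq[:score].sum()
          return 100.0 * (below + 0.5 * freq[score]) / n

      def multinomial_se(freq, score, n_sim=5000, seed=0):
          rng = np.random.default_rng(seed)
          freq = np.asarray(freq, float)
          n = int(freq.sum())
          sims = rng.multinomial(n, freq / n, size=n_sim)   # resample score frequencies
          prs = [percentile_rank(s, score) for s in sims]
          return np.std(prs, ddof=1)

      observed = [5, 12, 30, 48, 60, 45, 25, 10, 5]      # frequencies for scores 0..8 (invented)
      print(percentile_rank(observed, 4), multinomial_se(observed, 4))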

  4. Robustness of S1 statistic with Hodges-Lehmann for skewed distributions

    NASA Astrophysics Data System (ADS)

    Ahad, Nor Aishah; Yahaya, Sharipah Soaad Syed; Yin, Lee Ping

    2016-10-01

    Analysis of variance (ANOVA) is a commonly used parametric method to test differences in means for more than two groups when the populations are normally distributed. ANOVA is highly inefficient under non-normal and heteroscedastic settings. When the assumptions are violated, researchers look for alternatives such as the nonparametric Kruskal-Wallis test or robust methods. This study focused on a flexible method, the S1 statistic, for comparing groups using the median as the location estimator. The S1 statistic was modified by substituting the median with the Hodges-Lehmann estimator and the default scale estimator with the variance of the Hodges-Lehmann estimator and with MADn, producing two different test statistics for comparing groups. The bootstrap method was used for testing the hypotheses, since the sampling distributions of these modified S1 statistics are unknown. The performance of the proposed statistics in terms of Type I error was measured and compared against the original S1 statistic, ANOVA, and Kruskal-Wallis. The proposed procedures showed improvement over the original statistic, especially under extremely skewed distributions.
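
    A hedged sketch of two ingredients mentioned here: the Hodges-Lehmann estimator (median of Walsh averages) and a bootstrap test comparing two groups through the difference of their Hodges-Lehmann estimates. The exact modified S1 statistic with the MADn or Hodges-Lehmann-based scale is not reproduced; the names and data are assumptions.

      import numpy as np

      def hodges_lehmann(x):
          """One-sample Hodges-Lehmann estimator: median of all pairwise Walsh averages."""
          x = np.asarray(x, float)
          walsh = (x[:, None] + x[None, :]) / 2.0
          return np.median(walsh[np.triu_indices(len(x))])

      def bootstrap_group_test(x, y, n_boot=4000, seed=0):
          """Bootstrap p-value for H0: equal Hodges-Lehmann locations in two groups."""
          rng = np.random.default_rng(seed)
          obs = hodges_lehmann(x) - hodges_lehmann(y)
          xc, yc = x - hodges_lehmann(x), y - hodges_lehmann(y)   # center each group to impose H0
          diffs = np.empty(n_boot)
          for b in range(n_boot):
              xb = rng.choice(xc, size=len(xc), replace=True)
              yb = rng.choice(yc, size=len(yc), replace=True)
              diffs[b] = hodges_lehmann(xb) - hodges_lehmann(yb)
          return obs, np.mean(np.abs(diffs) >= abs(obs))

      rng = np.random.default_rng(7)
      g1 = rng.gamma(2.0, 2.0, size=25)                  # skewed samples
      g2 = rng.gamma(2.0, 2.0, size=30) + 1.5
      print(bootstrap_group_test(g1, g2))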

  5. Why Current Statistics of Complementary Alternative Medicine Clinical Trials is Invalid.

    PubMed

    Pandolfi, Maurizio; Carreras, Giulia

    2018-06-07

    It is not sufficiently known that frequentist statistics cannot provide direct information on the probability that the research hypothesis under test is correct. The error resulting from this misunderstanding is compounded when the hypotheses under scrutiny have precarious scientific bases, as those of complementary alternative medicine (CAM) generally do. In such cases, it is mandatory to use inferential methods that consider the prior probability that the hypothesis tested is true, such as Bayesian statistics. The authors show that, under such circumstances, no real statistical significance can be achieved in CAM clinical trials. In this respect, CAM trials involving human material are also hardly defensible from an ethical viewpoint.

  6. Significance levels for studies with correlated test statistics.

    PubMed

    Shi, Jianxin; Levinson, Douglas F; Whittemore, Alice S

    2008-07-01

    When testing large numbers of null hypotheses, one needs to assess the evidence against the global null hypothesis that none of the hypotheses is false. Such evidence typically is based on the test statistic of the largest magnitude, whose statistical significance is evaluated by permuting the sample units to simulate its null distribution. Efron (2007) has noted that correlation among the test statistics can induce substantial interstudy variation in the shapes of their histograms, which may cause misleading tail counts. Here, we show that permutation-based estimates of the overall significance level also can be misleading when the test statistics are correlated. We propose that such estimates be conditioned on a simple measure of the spread of the observed histogram, and we provide a method for obtaining conditional significance levels. We justify this conditioning using the conditionality principle described by Cox and Hinkley (1974). Application of the method to gene expression data illustrates the circumstances when conditional significance levels are needed.

  7. Admixture, Population Structure, and F-Statistics.

    PubMed

    Peter, Benjamin M

    2016-04-01

    Many questions about human genetic history can be addressed by examining the patterns of shared genetic variation between sets of populations. A useful methodological framework for this purpose is F-statistics, which measure shared genetic drift between sets of two, three, and four populations and can be used to test simple and complex hypotheses about admixture between populations. This article provides context from phylogenetic and population genetic theory. I review how F-statistics can be interpreted as branch lengths or paths and derive new interpretations using coalescent theory. I further show that the admixture tests can be interpreted as testing general properties of phylogenies, allowing some of these ideas and applications to be extended to arbitrary phylogenetic trees. The new results are used to investigate the behavior of the statistics under different models of population structure and show how population substructure complicates inference. The results lead to simplified estimators in many cases, and I recommend replacing F3 with the average number of pairwise differences for estimating population divergence. Copyright © 2016 by the Genetics Society of America.

  8. Critical analysis of adsorption data statistically

    NASA Astrophysics Data System (ADS)

    Kaushal, Achla; Singh, S. K.

    2017-10-01

    Experimental data can be presented, processed, and critically analysed in different ways using statistics. A variety of statistical tests are used to make decisions about the significance and validity of experimental data. In the present study, adsorption was carried out to remove zinc ions from contaminated aqueous solution using mango leaf powder. The experimental data were analysed statistically by hypothesis testing, applying the t test, paired t test and Chi-square test to (a) test the optimum value of the process pH, (b) verify the success of the experiment and (c) study the effect of adsorbent dose on zinc ion removal from aqueous solutions. Comparison of the calculated and tabulated values of t and χ2 showed the results in favour of the data collected from the experiment, and this has been shown on probability charts. The K value for the Langmuir isotherm was 0.8582 and the m value obtained for the Freundlich adsorption isotherm was 0.725, both of which are less than 1, indicating favourable isotherms. Karl Pearson's correlation coefficient values for the Langmuir and Freundlich adsorption isotherms were obtained as 0.99 and 0.95 respectively, which show a high degree of correlation between the variables. This validates the data obtained for adsorption of zinc ions from the contaminated aqueous solution with the help of mango leaf powder.

  9. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis

    PubMed Central

    Lin, Johnny; Bentler, Peter M.

    2012-01-01

    Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra-Bentler's mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds a new application for them in the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of the Satorra-Bentler statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at very small sample sizes offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic. PMID:23144511

  10. Multiple Phenotype Association Tests Using Summary Statistics in Genome-Wide Association Studies

    PubMed Central

    Liu, Zhonghua; Lin, Xihong

    2017-01-01

    We study in this paper jointly testing the associations of a genetic variant with correlated multiple phenotypes using the summary statistics of individual phenotype analysis from Genome-Wide Association Studies (GWASs). We estimated the between-phenotype correlation matrix using the summary statistics of individual phenotype GWAS analyses, and developed genetic association tests for multiple phenotypes by accounting for between-phenotype correlation without the need to access individual-level data. Since genetic variants often affect multiple phenotypes differently across the genome and the between-phenotype correlation can be arbitrary, we proposed robust and powerful multiple phenotype testing procedures by jointly testing a common mean and a variance component in linear mixed models for summary statistics. We computed the p-values of the proposed tests analytically. This computational advantage makes our methods practically appealing in large-scale GWASs. We performed simulation studies to show that the proposed tests maintained correct type I error rates, and to compare their powers in various settings with the existing methods. We applied the proposed tests to a GWAS Global Lipids Genetics Consortium summary statistics data set and identified additional genetic variants that were missed by the original single-trait analysis. PMID:28653391

  11. Multiple phenotype association tests using summary statistics in genome-wide association studies.

    PubMed

    Liu, Zhonghua; Lin, Xihong

    2018-03-01

    We study in this article jointly testing the associations of a genetic variant with correlated multiple phenotypes using the summary statistics of individual phenotype analysis from Genome-Wide Association Studies (GWASs). We estimated the between-phenotype correlation matrix using the summary statistics of individual phenotype GWAS analyses, and developed genetic association tests for multiple phenotypes by accounting for between-phenotype correlation without the need to access individual-level data. Since genetic variants often affect multiple phenotypes differently across the genome and the between-phenotype correlation can be arbitrary, we proposed robust and powerful multiple phenotype testing procedures by jointly testing a common mean and a variance component in linear mixed models for summary statistics. We computed the p-values of the proposed tests analytically. This computational advantage makes our methods practically appealing in large-scale GWASs. We performed simulation studies to show that the proposed tests maintained correct type I error rates, and to compare their powers in various settings with the existing methods. We applied the proposed tests to a GWAS Global Lipids Genetics Consortium summary statistics data set and identified additional genetic variants that were missed by the original single-trait analysis. © 2017, The International Biometric Society.
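
    A minimal sketch of the two ingredients described above, under simplifying assumptions: the between-phenotype correlation of Z-statistics is estimated from variants assumed to be null, and a single variant's Z-vector is then tested with a correlation-aware quadratic form. This omnibus chi-square is a simplification, not the paper's mixed-model common-mean and variance-component tests.

```python
# Hedged sketch of a correlation-aware multi-phenotype test built from summary statistics.
import numpy as np
from scipy import stats

def phenotype_correlation(Z_null):
    """Z_null: (variants x phenotypes) Z-statistics from variants assumed to be null."""
    return np.corrcoef(np.asarray(Z_null, float), rowvar=False)

def omnibus_test(z_variant, Sigma):
    z = np.asarray(z_variant, float)
    stat = z @ np.linalg.solve(Sigma, z)             # z' Sigma^{-1} z ~ chi2_K under H0
    return stat, stats.chi2.sf(stat, df=z.size)

rng = np.random.default_rng(3)
K, M = 4, 5000
L = np.linalg.cholesky(0.4 * np.ones((K, K)) + 0.6 * np.eye(K))  # hypothetical correlation
Z_null = rng.standard_normal((M, K)) @ L.T                        # correlated null Z's
Sigma_hat = phenotype_correlation(Z_null)
z_obs = np.array([2.5, 2.2, 1.9, 2.4])                            # one variant's Z-scores
print(omnibus_test(z_obs, Sigma_hat))
```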

  12. Effectiveness of groundwater governance structures and institutions in Tanzania

    NASA Astrophysics Data System (ADS)

    Gudaga, J. L.; Kabote, S. J.; Tarimo, A. K. P. R.; Mosha, D. B.; Kashaigili, J. J.

    2018-05-01

    This paper examines the effectiveness of groundwater governance structures and institutions in Mbarali District, Mbeya Region. The paper adopts an exploratory sequential research design to collect quantitative and qualitative data. A random sample of 90 groundwater users, 50% of them women, was involved in the survey. Descriptive statistics, the Kruskal-Wallis H test and the Mann-Whitney U test were used to compare the differences in responses between groups, while qualitative data were subjected to content analysis. The results show that the Village Councils and Community Water Supply Organizations (COWSOs) were effective in governing groundwater. The results also show a statistically significant difference in the overall extent of effectiveness of the Village Councils in governing groundwater between villages (P = 0.0001), yet there was no significant difference (P > 0.05) between male and female responses on the effectiveness of Village Councils, village water committees and COWSOs. The Mann-Whitney U test showed a statistically significant difference between male and female responses on the effectiveness of formal and informal institutions (P = 0.0001), with informal institutions being more effective than formal institutions. The Kruskal-Wallis H test also showed a statistically significant difference (P ≤ 0.05) in the extent of effectiveness of formal institutions, norms and values between the low, medium and high categories. The paper concludes that COWSOs were more effective in governing groundwater than other groundwater governance structures. Similarly, norms and values were more effective than formal institutions. The paper recommends sensitization and awareness creation on formal institutions so that they can influence water users' behaviour in governing groundwater.

  13. Statistical studies of animal response data from USF toxicity screening test method

    NASA Technical Reports Server (NTRS)

    Hilado, C. J.; Machado, A. M.

    1978-01-01

    Statistical examination of animal response data obtained using Procedure B of the USF toxicity screening test method indicates that the data deviate only slightly from a normal or Gaussian distribution. This slight departure from normality is not expected to invalidate conclusions based on theoretical statistics. Comparison of times to staggering, convulsions, collapse, and death as endpoints shows that time to death appears to be the most reliable endpoint because it offers the lowest probability of missed observations and premature judgements.

  14. Recent statistical methods for orientation data

    NASA Technical Reports Server (NTRS)

    Batschelet, E.

    1972-01-01

    The application of statistical methods in the area of animal orientation and navigation is discussed. The method employed is limited to the two-dimensional case. Various tests for determining the validity of the statistical analysis are presented. Mathematical models are included to support the theoretical considerations, and tables of data are developed to show the value of the information obtained by statistical analysis.

  15. Using the Bootstrap Method to Evaluate the Critical Range of Misfit for Polytomous Rasch Fit Statistics.

    PubMed

    Seol, Hyunsoo

    2016-06-01

    The purpose of this study was to apply the bootstrap procedure to evaluate how bootstrapped confidence intervals (CIs) for polytomous Rasch fit statistics might differ according to sample size and test length, in comparison with the rule-of-thumb critical value of misfit. A total of 25 simulated data sets were generated to fit the Rasch model, and 1,000 replications were then conducted to compute the bootstrapped CIs under each of the 25 testing conditions. The results showed that rule-of-thumb critical values for assessing the magnitude of misfit were not applicable, because the infit and outfit mean square error statistics showed different magnitudes of variability over testing conditions and the standardized fit statistics did not exactly follow the standard normal distribution. Further, item and person misfit did not share the same critical range. Based on the results of the study, the bootstrapped CIs can be used to identify misfitting items or persons, as they offer a reasonable alternative solution, especially when the distributions of the infit and outfit statistics are not well known and depend on sample size. © The Author(s) 2016.

  16. Depression and Self-Esteem in Early Adolescence.

    PubMed

    Tripković, Ingrid; Roje, Romilda; Krnić, Silvana; Nazor, Mirjana; Karin, Željka; Čapkun, Vesna

    2015-06-01

    Depression prevalence has increased in the last few decades, affecting younger age groups. The aim of this research was to determine the extent of depression and low self-esteem in elementary school children in the city of Split. Testing was carried out at school and the sample comprised 1,549 children (714 boys and 832 girls, aged 13). Two psychological instruments were used: the Coopersmith Self-Esteem Inventory (SEI) and the Children and Adolescent Depression Scale (SDD). The average SEI score was 17.8 for all tested children, with no statistically significant difference between boys and girls. It was found that 11.9% of children showed signs of clinically significant depression, and 16.2% showed signs of depression. A statistically significant association between low self-esteem and clinically significant depression was found. No statistically significant difference between boys and girls was found for the cognitive dimension of depression, whereas the level of emotional depression was significantly higher in girls than in boys. Both the cognitive and the emotional dimensions of depression decreased proportionally as SEI test scores increased. The results of this study show that early detection of emotional difficulties is necessary in order to prevent serious mental disorders. Copyright© by the National Institute of Public Health, Prague 2015.

  17. Systems for measuring response statistics of gigahertz bandwidth photomultipliers

    NASA Technical Reports Server (NTRS)

    Abshire, J. B.; Rowe, H. E.

    1977-01-01

    New systems have been developed for measuring the average impulse response, the pulse-height spectrum, the transit-time statistics as a function of signal level, and the dark-count spectrum of gigahertz bandwidth photomultipliers. Measurements showed that the 0.53 microns pulse used as an optical test source had a 30 picoseconds and less than 70 ps pulse width. Calibration data showed the system resolution to be less than 20 ps for root mean square transit-time measurements. Test data for a static crossed-field photomultiplier showed 2-photoelectron resolution and less than 30-ps time jitter over the 1- to 100-photoelectron range.

  18. Data from the Television Game Show "Friend or Foe?"

    ERIC Educational Resources Information Center

    Kalist, David E.

    2004-01-01

    The data discussed in this paper are from the television game show "Friend or Foe", and can be used to examine whether age, gender, race, and the amount of prize money affect contestants' strategies. The data are suitable for a variety of statistical analyses, such as descriptive statistics, testing for differences in means or proportions, and…

  19. Predicting Slag Generation in Sub-Scale Test Motors Using a Neural Network

    NASA Technical Reports Server (NTRS)

    Wiesenberg, Brent

    1999-01-01

    Generation of slag (aluminum oxide) is an important issue for the Reusable Solid Rocket Motor (RSRM). Thiokol performed testing to quantify the relationship between raw material variations and slag generation in solid propellants by testing sub-scale motors cast with propellant containing various combinations of aluminum fuel and ammonium perchlorate (AP) oxidizer particle sizes. The test data were analyzed using statistical methods and an artificial neural network. This paper primarily addresses the neural network results with some comparisons to the statistical results. The neural network showed that the particle sizes of both the aluminum and unground AP have a measurable effect on slag generation. The neural network analysis showed that aluminum particle size is the dominant driver in slag generation, about 40% more influential than AP. The network predictions of the amount of slag produced during firing of sub-scale motors were 16% better than the predictions of a statistically derived empirical equation. Another neural network successfully characterized the slag generated during full-scale motor tests. The success is attributable to the ability of neural networks to characterize multiple complex factors including interactions that affect slag generation.

  20. Derivation and Applicability of Asymptotic Results for Multiple Subtests Person-Fit Statistics

    PubMed Central

    Albers, Casper J.; Meijer, Rob R.; Tendeiro, Jorge N.

    2016-01-01

    In high-stakes testing, it is important to check the validity of individual test scores. Although a test may, in general, result in valid test scores for most test takers, for some test takers, test scores may not provide a good description of the test taker's proficiency level. Person-fit statistics have been proposed to check the validity of individual test scores. In this study, the theoretical asymptotic sampling distribution of two person-fit statistics that can be used for tests consisting of multiple subtests is first discussed. Second, a simulation study was conducted to investigate the applicability of this asymptotic theory for tests of finite length, in which the correlation between subtests and the number of items in the subtests were varied. The authors showed that these distributions provide reasonable approximations, even for tests consisting of subtests of only 10 items each. These results have practical value because researchers do not have to rely on extensive simulation studies to simulate sampling distributions. PMID:29881053

  1. The choice of statistical methods for comparisons of dosimetric data in radiotherapy.

    PubMed

    Chaikh, Abdulhamid; Giraud, Jean-Yves; Perrin, Emmanuel; Bresciani, Jean-Pierre; Balosso, Jacques

    2014-09-18

    Novel irradiation techniques are continuously introduced in radiotherapy to optimize the accuracy, the safety and the clinical outcome of treatments. These changes could raise the question of discontinuity in dosimetric presentation and the subsequent need for practice adjustments in case of significant modifications. This study proposes a comprehensive approach to compare different techniques and to test whether their respective dose calculation algorithms give rise to statistically significant differences in the treatment doses for the patient. Statistical investigation principles are presented in the framework of a clinical example based on 62 fields of radiotherapy for lung cancer. The delivered doses in monitor units were calculated using three different dose calculation methods: the reference method computes the dose without tissue density correction using the Pencil Beam Convolution (PBC) algorithm, whereas the newer methods calculate the dose with 1D and 3D tissue density corrections using the Modified Batho (MB) method and the Equivalent Tissue-Air Ratio (ETAR) method, respectively. The normality of the data and the homogeneity of variance between groups were tested using the Shapiro-Wilk and Levene tests, respectively, and non-parametric statistical tests were then performed. Specifically, the dose means estimated by the different calculation methods were compared using Friedman's test and the Wilcoxon signed-rank test. In addition, the correlation between the doses calculated by the three methods was assessed using Spearman's rank and Kendall's rank tests. Friedman's test showed a significant effect of the calculation method on the delivered dose for lung cancer patients (p < 0.001). The density correction methods yielded lower doses than PBC, by on average -5 (± 4.4 SD) for MB and -4.7 (± 5 SD) for ETAR. Post-hoc Wilcoxon signed-rank tests of paired comparisons indicated that the delivered dose was significantly reduced using the density-corrected methods as compared to the reference method. Spearman's and Kendall's rank tests indicated a positive correlation between the doses calculated with the different methods. This paper illustrates and justifies the use of statistical tests and graphical representations for dosimetric comparisons in radiotherapy. The statistical analysis shows the significance of dose differences resulting from two or more techniques in radiotherapy.
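
    A minimal sketch of the statistical workflow described above, applied to hypothetical monitor-unit values for the same fields computed with three algorithms; the data are simulated, and the function calls are the standard scipy.stats implementations of the named tests.

```python
# Hedged sketch: assumption checks, paired non-parametric comparisons, and rank correlations
# for three dose-calculation methods (PBC, MB, ETAR) on simulated, illustrative data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
pbc = rng.normal(200, 20, 62)           # hypothetical monitor units, reference algorithm
mb = pbc - rng.normal(5.0, 4.4, 62)     # density-corrected doses, slightly lower
etar = pbc - rng.normal(4.7, 5.0, 62)

# assumption checks
print("Shapiro-Wilk (PBC):", stats.shapiro(pbc))
print("Levene:", stats.levene(pbc, mb, etar))

# non-parametric comparisons of paired dose distributions
print("Friedman:", stats.friedmanchisquare(pbc, mb, etar))
print("Wilcoxon PBC vs MB:", stats.wilcoxon(pbc, mb))
print("Wilcoxon PBC vs ETAR:", stats.wilcoxon(pbc, etar))

# correlation between methods
print("Spearman:", stats.spearmanr(pbc, mb))
print("Kendall:", stats.kendalltau(pbc, etar))
```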

  2. Comparative Analysis of Serum (Anti)oxidative Status Parameters in Healthy Persons

    PubMed Central

    Jansen, Eugène HJM; Ruskovska, Tatjana

    2013-01-01

    Five antioxidant and two oxidative stress assays were applied to serum samples of 43 healthy males. The antioxidant tests showed different inter-assay correlations. A very good correlation of 0.807 was observed between the ferric reducing ability of plasma (FRAP) and total antioxidant status (TAS) assay and also a fair correlation of 0.501 between the biological antioxidant potential (BAP) and TAS assay. There was no statistically significant correlation between the BAP and FRAP assay. The anti-oxidant assays have a high correlation with uric acid, especially the TAS (0.922) and FRAP assay (0.869). The BAP assay has a much lower and no statistically significant correlation with uric acid (0.302), which makes BAP more suitable for the antioxidant status. The total thiol assay showed no statistically significant correlation with uric acid (0.114). The total thiol assay, which is based on a completely different principle, showed a good and statistically significant correlation with the BAP assay (0.510) and also to the TAS assay, but to a lower and not significant extent (0.279) and not with the FRAP assay (−0.008). The oxy-adsorbent test (OXY) assay has no correlation with any of the other assays tested. The oxidative stress assays, reactive oxygen metabolites (ROM) and total oxidant status (TOS), based on a different principle, do not show a statistically significant correlation with the serum samples in this study. Both assays showed a negative, but not significant, correlation with the antioxidant assays. In conclusion, the ROM, TOS, BAP and TTP assays are based on different principles and will have an additional value when a combination of these assays will be applied in large-scale population studies. PMID:23507749

  3. A Comparison of Student Understanding of Seasons Using Inquiry and Didactic Teaching Methods

    NASA Astrophysics Data System (ADS)

    Ashcraft, Paul G.

    2006-02-01

    Student performance on open-ended questions concerning seasons in a university physical science content course was examined to note differences between classes that experienced inquiry using a 5-E lesson planning model and those that experienced the same content with a traditional, didactic lesson. The class examined is a required content course for elementary education majors and understanding the seasons is part of the university's state's elementary science standards. The two self-selected groups of students showed no statistically significant differences in pre-test scores, while there were statistically significant differences between the groups' post-test scores with those who participated in inquiry-based activities scoring higher. There were no statistically significant differences between the pre-test and the post-test for the students who experienced didactic teaching, while there were statistically significant improvements for the students who experienced the 5-E lesson.

  4. Test-retest reliability of biodex system 4 pro for isometric ankle-eversion and -inversion measurement.

    PubMed

    Tankevicius, Gediminas; Lankaite, Doanata; Krisciunas, Aleksandras

    2013-08-01

    The lack of knowledge about isometric ankle testing indicates the need for research in this area. to assess test-retest reliability and to determine the optimal position for isometric ankle-eversion and -inversion testing. Test-retest reliability study. Isometric ankle eversion and inversion were assessed in 3 different dynamometer foot-plate positions: 0°, 7°, and 14° of inversion. Two maximal repetitions were performed at each angle. Both limbs were tested (40 ankles in total). The test was performed 2 times with a period of 7 d between the tests. University hospital. The study was carried out on 20 healthy athletes with no history of ankle sprains. Reliability was assessed using intraclass correlation coefficient (ICC2,1); minimal detectable change (MDC) was calculated using a 95% confidence interval. Paired t test was used to measure statistically significant changes, and P <.05 was considered statistically significant. Eversion and inversion peak torques showed high ICCs in all 3 angles (ICC values .87-.96, MDC values 3.09-6.81 Nm). Eversion peak torque was the smallest when testing at the 0° angle and gradually increased, reaching maximum values at 14° angle. The increase of eversion peak torque was statistically significant at 7 ° and 14° of inversion. Inversion peak torque showed an opposite pattern-it was the smallest when measured at the 14° angle and increased at the other 2 angles; statistically significant changes were seen only between measures taken at 0° and 14°. Isometric eversion and inversion testing using the Biodex 4 Pro system is a reliable method. The authors suggest that the angle of 7° of inversion is the best for isometric eversion and inversion testing.

  5. On two-sample McNemar test.

    PubMed

    Xiang, Jim X

    2016-01-01

    Measuring a change in the existence of disease symptoms before and after a treatment is examined for statistical significance by means of the McNemar test. When comparing two treatments, Feuer and Kessler (1989) proposed a two-sample McNemar test. In this article, we show that this test usually inflates the type I error in hypothesis testing, and we propose a new two-sample McNemar test that is superior in terms of preserving the type I error. We also make the connection between the two-sample McNemar test and the test statistic for equal residual effects in a 2 × 2 crossover design. The limitations of the two-sample McNemar test are also discussed.
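
    For illustration only: a generic large-sample comparison of the net before/after change in two independent groups, built from the discordant counts of each group's McNemar table. This is not necessarily the corrected statistic proposed in the article.

```python
# Hedged sketch of a two-sample McNemar-type z-test comparing net change between two
# independent treatment groups; the variance approximation is the standard McNemar-style one.
import numpy as np
from scipy import stats

def two_sample_mcnemar(b1, c1, n1, b2, c2, n2):
    """b, c = discordant counts (improved, worsened); n = group size."""
    d1 = (b1 - c1) / n1                      # net change, group 1
    d2 = (b2 - c2) / n2                      # net change, group 2
    var = (b1 + c1) / n1**2 + (b2 + c2) / n2**2
    z = (d1 - d2) / np.sqrt(var)
    return z, 2 * stats.norm.sf(abs(z))

# hypothetical counts: treatment A vs treatment B
print(two_sample_mcnemar(b1=30, c1=10, n1=100, b2=18, c2=15, n2=100))
```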

  6. Statistical test for ΔρDCCA cross-correlation coefficient

    NASA Astrophysics Data System (ADS)

    Guedes, E. F.; Brito, A. A.; Oliveira Filho, F. M.; Fernandez, B. F.; de Castro, A. P. N.; da Silva Filho, A. M.; Zebende, G. F.

    2018-07-01

    In this paper we propose a new statistical test for ΔρDCCA, the Detrended Cross-Correlation Coefficient Difference, a tool to measure contagion/interdependence effects between time series of size N at different time scales n. For this purpose we analyzed simulated and real time series. The results show that the statistical significance of ΔρDCCA depends on the size N and the time scale n, and that a critical value for this dependency can be defined at the 90%, 95%, and 99% confidence levels, as shown in this paper.
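
    The test above is built on the DCCA cross-correlation coefficient ρDCCA(n); the sketch below follows the standard detrended cross-correlation recipe (integrate, detrend in boxes of size n, compare the detrended covariance with the two detrended variances). Box handling details are a simplification.

```python
# Hedged sketch of rho_DCCA(n) for two series x and y.
import numpy as np

def rho_dcca(x, y, n):
    x, y = np.asarray(x, float), np.asarray(y, float)
    X = np.cumsum(x - x.mean())            # integrated profiles
    Y = np.cumsum(y - y.mean())
    f2_xy = f2_xx = f2_yy = 0.0
    n_boxes = 0
    for start in range(0, len(x) - n):     # overlapping boxes of n+1 points
        t = np.arange(n + 1)
        xs, ys = X[start:start + n + 1], Y[start:start + n + 1]
        # remove local linear trends
        rx = xs - np.polyval(np.polyfit(t, xs, 1), t)
        ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
        f2_xy += np.mean(rx * ry)
        f2_xx += np.mean(rx * rx)
        f2_yy += np.mean(ry * ry)
        n_boxes += 1
    return (f2_xy / n_boxes) / np.sqrt((f2_xx / n_boxes) * (f2_yy / n_boxes))

rng = np.random.default_rng(5)
common = rng.standard_normal(1000)
x = common + rng.standard_normal(1000)
y = common + rng.standard_normal(1000)
print(round(rho_dcca(x, y, n=16), 3))      # positive, reflecting the shared component
# The paper's Delta rho_DCCA compares such coefficients; its critical values are obtained
# by simulating independent series of the same size N and scale n.
```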

  7. A test for patterns of modularity in sequences of developmental events.

    PubMed

    Poe, Steven

    2004-08-01

    This study presents a statistical test for modularity in the context of relative timing of developmental events. The test assesses whether sets of developmental events show special phylogenetic conservation of rank order. The test statistic is the correlation coefficient of developmental ranks of the N events of the hypothesized module across taxa. The null distribution is obtained by taking correlation coefficients for randomly sampled sets of N events. This test was applied to two datasets, including one where phylogenetic information was taken into account. The events of limb development in two frog species were found to behave as a module.
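
    A hedged sketch of the permutation logic described above, for the two-taxon case: the observed statistic is the correlation of developmental ranks of the module's N events across taxa, and the null distribution comes from randomly drawn sets of N events.

```python
# Hedged sketch of a modularity test on developmental-event ranks (two taxa, no phylogeny).
import numpy as np

def module_rank_test(ranks_a, ranks_b, module_idx, n_perm=5000, seed=0):
    """ranks_a, ranks_b: developmental ranks of all events in taxon A and taxon B."""
    rng = np.random.default_rng(seed)
    ranks_a, ranks_b = np.asarray(ranks_a, float), np.asarray(ranks_b, float)
    module_idx = np.asarray(module_idx)
    obs = np.corrcoef(ranks_a[module_idx], ranks_b[module_idx])[0, 1]
    n_events, n_mod = ranks_a.size, module_idx.size
    null = np.empty(n_perm)
    for b in range(n_perm):
        idx = rng.choice(n_events, size=n_mod, replace=False)   # random set of N events
        null[b] = np.corrcoef(ranks_a[idx], ranks_b[idx])[0, 1]
    # one-sided: is the hypothesized module unusually conserved in rank order?
    return obs, (1 + np.sum(null >= obs)) / (1 + n_perm)

# hypothetical ranks of 12 developmental events in two taxa; events 0-3 form the module
rng = np.random.default_rng(6)
ranks_a = rng.permutation(12) + 1
ranks_b = ranks_a.copy()
ranks_b[4:] = rng.permutation(ranks_b[4:])     # non-module events reshuffled in taxon B
print(module_rank_test(ranks_a, ranks_b, module_idx=[0, 1, 2, 3]))
```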

  8. Computation of the Molenaar Sijtsma Statistic

    NASA Astrophysics Data System (ADS)

    Andries van der Ark, L.

    The Molenaar Sijtsma statistic is an estimate of the reliability of a test score. In some special cases, computation of the Molenaar Sijtsma statistic requires provisional measures. These provisional measures have not been fully described in the literature, and we show that they have not been implemented in the software. We describe the required provisional measures so as to allow the computation of the Molenaar Sijtsma statistic for all data sets.

  9. Statistical Analysis of Zebrafish Locomotor Response.

    PubMed

    Liu, Yiwen; Carmer, Robert; Zhang, Gaonan; Venkatraman, Prahatha; Brown, Skye Ashton; Pang, Chi-Pui; Zhang, Mingzhi; Ma, Ping; Leung, Yuk Fai

    2015-01-01

    Zebrafish larvae display rich locomotor behaviour upon external stimulation. The movement can be simultaneously tracked from many larvae arranged in multi-well plates. The resulting time-series locomotor data have been used to reveal new insights into neurobiology and pharmacology. However, the data are of large scale, and the corresponding locomotor behavior is affected by multiple factors. These issues pose a statistical challenge for comparing larval activities. To address this gap, this study has analyzed a visually-driven locomotor behaviour named the visual motor response (VMR) by the Hotelling's T-squared test. This test is congruent with comparing locomotor profiles from a time period. Different wild-type (WT) strains were compared using the test, which shows that they responded differently to light change at different developmental stages. The performance of this test was evaluated by a power analysis, which shows that the test was sensitive for detecting differences between experimental groups with sample numbers that were commonly used in various studies. In addition, this study investigated the effects of various factors that might affect the VMR by multivariate analysis of variance (MANOVA). The results indicate that the larval activity was generally affected by stage, light stimulus, their interaction, and location in the plate. Nonetheless, different factors affected larval activity differently over time, as indicated by a dynamical analysis of the activity at each second. Intriguingly, this analysis also shows that biological and technical repeats had negligible effect on larval activity. This finding is consistent with that from the Hotelling's T-squared test, and suggests that experimental repeats can be combined to enhance statistical power. Together, these investigations have established a statistical framework for analyzing VMR data, a framework that should be generally applicable to other locomotor data with similar structure.

  10. Statistical Analysis of Zebrafish Locomotor Response

    PubMed Central

    Zhang, Gaonan; Venkatraman, Prahatha; Brown, Skye Ashton; Pang, Chi-Pui; Zhang, Mingzhi; Ma, Ping; Leung, Yuk Fai

    2015-01-01

    Zebrafish larvae display rich locomotor behaviour upon external stimulation. The movement can be simultaneously tracked from many larvae arranged in multi-well plates. The resulting time-series locomotor data have been used to reveal new insights into neurobiology and pharmacology. However, the data are of large scale, and the corresponding locomotor behavior is affected by multiple factors. These issues pose a statistical challenge for comparing larval activities. To address this gap, this study has analyzed a visually-driven locomotor behaviour named the visual motor response (VMR) by the Hotelling’s T-squared test. This test is congruent with comparing locomotor profiles from a time period. Different wild-type (WT) strains were compared using the test, which shows that they responded differently to light change at different developmental stages. The performance of this test was evaluated by a power analysis, which shows that the test was sensitive for detecting differences between experimental groups with sample numbers that were commonly used in various studies. In addition, this study investigated the effects of various factors that might affect the VMR by multivariate analysis of variance (MANOVA). The results indicate that the larval activity was generally affected by stage, light stimulus, their interaction, and location in the plate. Nonetheless, different factors affected larval activity differently over time, as indicated by a dynamical analysis of the activity at each second. Intriguingly, this analysis also shows that biological and technical repeats had negligible effect on larval activity. This finding is consistent with that from the Hotelling’s T-squared test, and suggests that experimental repeats can be combined to enhance statistical power. Together, these investigations have established a statistical framework for analyzing VMR data, a framework that should be generally applicable to other locomotor data with similar structure. PMID:26437184
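
    A minimal sketch of a two-sample Hotelling's T-squared comparison of locomotor profiles, assuming each larva contributes a short activity vector (a few time bins); the data and dimensions are hypothetical, and the VMR-specific preprocessing is omitted.

```python
# Hedged sketch: two-sample Hotelling's T-squared with the standard F conversion.
import numpy as np
from scipy import stats

def hotelling_t2(X, Y):
    """X, Y: (samples x variables) activity matrices for two groups."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    nx, ny, p = X.shape[0], Y.shape[0], X.shape[1]
    dbar = X.mean(0) - Y.mean(0)
    # pooled covariance matrix
    S = ((nx - 1) * np.cov(X, rowvar=False) + (ny - 1) * np.cov(Y, rowvar=False)) / (nx + ny - 2)
    t2 = (nx * ny) / (nx + ny) * dbar @ np.linalg.solve(S, dbar)
    # convert T^2 to an F statistic with (p, nx + ny - p - 1) degrees of freedom
    f = (nx + ny - p - 1) / (p * (nx + ny - 2)) * t2
    return t2, stats.f.sf(f, p, nx + ny - p - 1)

rng = np.random.default_rng(7)
wt_a = rng.normal(1.0, 0.3, size=(24, 5))        # hypothetical strain A, 5 time bins
wt_b = rng.normal(1.2, 0.3, size=(24, 5))        # hypothetical strain B
print(hotelling_t2(wt_a, wt_b))
```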

  11. Testing for significance of phase synchronisation dynamics in the EEG.

    PubMed

    Daly, Ian; Sweeney-Reed, Catherine M; Nasuto, Slawomir J

    2013-06-01

    A number of tests exist to check for statistical significance of phase synchronisation within the Electroencephalogram (EEG); however, the majority suffer from a lack of generality and applicability. They may also fail to account for temporal dynamics in the phase synchronisation, regarding synchronisation as a constant state instead of a dynamical process. Therefore, a novel test is developed for identifying the statistical significance of phase synchronisation based upon a combination of work characterising temporal dynamics of multivariate time-series and Markov modelling. We show how this method is better able to assess the significance of phase synchronisation than a range of commonly used significance tests. We also show how the method may be applied to identify and classify significantly different phase synchronisation dynamics in both univariate and multivariate datasets.

  12. Statistical methods for the quality control of steam cured concrete : final report.

    DOT National Transportation Integrated Search

    1971-01-01

    Concrete strength test results from three prestressing plants utilizing steam curing were evaluated statistically in terms of the concrete as received and the effectiveness of the plants' steaming procedures. Control charts were prepared to show tren...

  13. Exploring students’ perceived and actual ability in solving statistical problems based on Rasch measurement tools

    NASA Astrophysics Data System (ADS)

    Azila Che Musa, Nor; Mahmud, Zamalia; Baharun, Norhayati

    2017-09-01

    One of the important skills required of any student learning statistics is knowing how to solve statistical problems correctly using appropriate statistical methods. This enables them to arrive at a conclusion and to make significant contributions and decisions for society. In this study, a group of 22 students majoring in statistics at UiTM Shah Alam were given problems relating to hypothesis testing, which required them to solve the problems using the confidence interval, traditional and p-value approaches. Hypothesis testing is one of the techniques used in solving real problems and is listed as one of the concepts students find difficult to grasp. The objectives of this study are to explore students' perceived and actual ability in solving statistical problems and to determine which items in statistical problem solving students find difficult to grasp. Students' perceived and actual ability were measured using instruments developed from the respective topics. Rasch measurement tools such as the Wright map and item measures for fit statistics were used to accomplish the objectives. Data were collected and analysed using the Winsteps 3.90 software, which is developed based on the Rasch measurement model. The results showed that students perceived themselves as moderately competent in solving the statistical problems using the confidence interval and p-value approaches, even though their actual performance showed otherwise. Item measures for fit statistics also showed that the maximum estimated measures were found on two problems. These measures indicate that none of the students answered these problems correctly, for reasons that include their lack of understanding of confidence intervals and probability values.

  14. Establishment of an equivalence acceptance criterion for accelerated stability studies.

    PubMed

    Burdick, Richard K; Sidor, Leslie

    2013-01-01

    In this article, the use of statistical equivalence testing for providing evidence of process comparability in an accelerated stability study is advocated over the use of a test of differences. The objective of such a study is to demonstrate comparability by showing that the stability profiles under nonrecommended storage conditions of two processes are equivalent. Because it is difficult at accelerated conditions to find a direct link to product specifications, and hence product safety and efficacy, an equivalence acceptance criterion is proposed that is based on the statistical concept of effect size. As with all statistical tests of equivalence, it is important to collect input from appropriate subject-matter experts when defining the acceptance criterion.
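
    A minimal sketch of an equivalence comparison via two one-sided tests (TOST), with the acceptance margin expressed as an effect size in pooled standard deviations, in the spirit of the criterion described above; the margin of 1 SD and the data are purely illustrative.

```python
# Hedged sketch: TOST equivalence test for two process means with an effect-size margin.
import numpy as np
from scipy import stats

def tost_equivalence(x, y, margin_sd=1.0, alpha=0.05):
    """Two one-sided tests that |mean(x) - mean(y)| < margin_sd * pooled SD."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = x.size, y.size
    sp = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2))
    margin = margin_sd * sp
    se = sp * np.sqrt(1 / nx + 1 / ny)
    df = nx + ny - 2
    diff = x.mean() - y.mean()
    p_lower = stats.t.sf((diff + margin) / se, df)      # H0: diff <= -margin
    p_upper = stats.t.cdf((diff - margin) / se, df)     # H0: diff >= +margin
    p = max(p_lower, p_upper)
    return diff, p, p < alpha       # equivalent only if both one-sided tests reject

rng = np.random.default_rng(8)
process_a = rng.normal(100.0, 2.0, 12)    # hypothetical stability responses
process_b = rng.normal(100.5, 2.0, 12)
print(tost_equivalence(process_a, process_b, margin_sd=1.0))
```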

  15. Increasing the statistical significance of entanglement detection in experiments.

    PubMed

    Jungnitsch, Bastian; Niekamp, Sönke; Kleinmann, Matthias; Gühne, Otfried; Lu, He; Gao, Wei-Bo; Chen, Yu-Ao; Chen, Zeng-Bing; Pan, Jian-Wei

    2010-05-28

    Entanglement is often verified by a violation of an inequality like a Bell inequality or an entanglement witness. Considerable effort has been devoted to the optimization of such inequalities in order to obtain a high violation. We demonstrate theoretically and experimentally that such an optimization does not necessarily lead to a better entanglement test, if the statistical error is taken into account. Theoretically, we show for different error models that reducing the violation of an inequality can improve the significance. Experimentally, we observe this phenomenon in a four-photon experiment, testing the Mermin and Ardehali inequality for different levels of noise. Furthermore, we provide a way to develop entanglement tests with high statistical significance.

  16. Statistical inference, the bootstrap, and neural-network modeling with application to foreign exchange rates.

    PubMed

    White, H; Racine, J

    2001-01-01

    We propose tests for individual and joint irrelevance of network inputs. Such tests can be used to determine whether an input or group of inputs "belong" in a particular model, thus permitting valid statistical inference based on estimated feedforward neural-network models. The approaches employ well-known statistical resampling techniques. We conduct a small Monte Carlo experiment showing that our tests have reasonable level and power behavior, and we apply our methods to examine whether there are predictable regularities in foreign exchange rates. We find that exchange rates do appear to contain information that is exploitable for enhanced point prediction, but the nature of the predictive relations evolves through time.

  17. Detecting Genomic Clustering of Risk Variants from Sequence Data: Cases vs. Controls

    PubMed Central

    Schaid, Daniel J.; Sinnwell, Jason P.; McDonnell, Shannon K.; Thibodeau, Stephen N.

    2013-01-01

    As the ability to measure dense genetic markers approaches the limit of the DNA sequence itself, taking advantage of possible clustering of genetic variants in, and around, a gene would benefit genetic association analyses and likely provide biological insights. The greatest benefit might be realized when multiple rare variants cluster in a functional region. Several statistical tests have been developed, one of which is based on the popular Kulldorff scan statistic for spatial clustering of disease. We extended another popular spatial clustering method – Tango’s statistic – to genomic sequence data. An advantage of Tango’s method is that it is rapid to compute, and when a single test statistic is computed, its distribution is well approximated by a scaled chi-square distribution, making computation of p-values very rapid. We compared the Type-I error rates and power of several clustering statistics, as well as the omnibus sequence kernel association test (SKAT). Although our version of Tango’s statistic, which we call the “Kernel Distance” statistic, took approximately half as long to compute as the Kulldorff scan statistic, it had slightly less power than the scan statistic. Our results showed that the Ionita-Laza version of Kulldorff’s scan statistic had the greatest power over a range of clustering scenarios. PMID:23842950
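
    For illustration, a simplified Tango-style quadratic form over case-control variant-frequency differences with a Gaussian kernel of genomic distance, and a label-permutation p-value. This is a sketch of the general kernel-distance idea, not the exact statistic evaluated in the paper, and the kernel width tau is an arbitrary tuning choice.

```python
# Hedged sketch of a kernel-distance clustering statistic for case-control sequence data.
import numpy as np

def kernel_distance_stat(G, y, pos, tau):
    """G: (subjects x variants) genotypes; y: 0/1 disease status; pos: variant positions."""
    d = G[y == 1].mean(0) - G[y == 0].mean(0)            # case-control frequency differences
    K = np.exp(-np.subtract.outer(pos, pos) ** 2 / tau**2)
    return d @ K @ d

def permutation_pvalue(G, y, pos, tau=5000.0, n_perm=2000, seed=0):
    rng = np.random.default_rng(seed)
    obs = kernel_distance_stat(G, y, pos, tau)
    null = np.array([kernel_distance_stat(G, rng.permutation(y), pos, tau)
                     for _ in range(n_perm)])
    return obs, (1 + np.sum(null >= obs)) / (1 + n_perm)

rng = np.random.default_rng(9)
n, m = 400, 30
pos = np.sort(rng.integers(0, 100_000, m)).astype(float)
G = rng.binomial(2, 0.02, size=(n, m)).astype(float)     # rare variants
y = rng.binomial(1, 0.5, n)
print(permutation_pvalue(G, y, pos))
```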

  18. Differences in Performance Among Test Statistics for Assessing Phylogenomic Model Adequacy.

    PubMed

    Duchêne, David A; Duchêne, Sebastian; Ho, Simon Y W

    2018-05-18

    Statistical phylogenetic analyses of genomic data depend on models of nucleotide or amino acid substitution. The adequacy of these substitution models can be assessed using a number of test statistics, allowing the model to be rejected when it is found to provide a poor description of the evolutionary process. A potentially valuable use of model-adequacy test statistics is to identify when data sets are likely to produce unreliable phylogenetic estimates, but their differences in performance are rarely explored. We performed a comprehensive simulation study to identify test statistics that are sensitive to some of the most commonly cited sources of phylogenetic estimation error. Our results show that, for many test statistics, traditional thresholds for assessing model adequacy can fail to reject the model when the phylogenetic inferences are inaccurate and imprecise. This is particularly problematic when analysing loci that have few variable informative sites. We propose new thresholds for assessing substitution model adequacy and demonstrate their effectiveness in analyses of three phylogenomic data sets. These thresholds lead to frequent rejection of the model for loci that yield topological inferences that are imprecise and are likely to be inaccurate. We also propose the use of a summary statistic that provides a practical assessment of overall model adequacy. Our approach offers a promising means of enhancing model choice in genome-scale data sets, potentially leading to improvements in the reliability of phylogenomic inference.

  19. [Statistical validity of the Mexican Food Security Scale and the Latin American and Caribbean Food Security Scale].

    PubMed

    Villagómez-Ornelas, Paloma; Hernández-López, Pedro; Carrasco-Enríquez, Brenda; Barrios-Sánchez, Karina; Pérez-Escamilla, Rafael; Melgar-Quiñónez, Hugo

    2014-01-01

    This article validates the statistical consistency of two food security scales: the Mexican Food Security Scale (EMSA) and the Latin American and Caribbean Food Security Scale (ELCSA). Validity tests were conducted in order to verify that both scales are consistent instruments, composed of independent, properly calibrated and adequately ordered items arranged along a continuum of severity. The following tests were developed: ordering of items; Cronbach's alpha analysis; parallelism of prevalence curves; Rasch models; and sensitivity analysis through hypothesis tests of mean differences. The tests showed that both scales meet the required attributes and are robust statistical instruments for food security measurement. This is relevant given that the lack-of-access-to-food indicator, included in the multidimensional poverty measurement in Mexico, is calculated with the EMSA.
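
    One of the listed checks, Cronbach's alpha, can be computed directly from an item-response matrix; the sketch below uses hypothetical dichotomous responses generated from a latent severity variable.

```python
# Hedged sketch: Cronbach's alpha for a set of dichotomous food-security items.
import numpy as np

def cronbach_alpha(items):
    """items: (respondents x items) matrix of scored responses."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(10)
severity = rng.normal(size=400)                            # latent food insecurity
thresholds = np.linspace(-1.5, 1.5, 8)                     # 8 items of increasing severity
responses = (severity[:, None] > thresholds[None, :]).astype(int)
responses ^= rng.binomial(1, 0.05, responses.shape)        # add some response noise
print(round(cronbach_alpha(responses), 2))
```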

  20. PEPA test: fast and powerful differential analysis from relative quantitative proteomics data using shared peptides.

    PubMed

    Jacob, Laurent; Combes, Florence; Burger, Thomas

    2018-06-18

    We propose a new hypothesis test for the differential abundance of proteins in mass-spectrometry based relative quantification. An important feature of this type of high-throughput analysis is that it involves an enzymatic digestion of the sample proteins into peptides prior to identification and quantification. Due to numerous homologous sequences, different proteins can lead to peptides with identical amino acid chains, so that their parent protein is ambiguous. These so-called shared peptides make the protein-level statistical analysis a challenge and are often not accounted for. In this article, we use a linear model describing peptide-protein relationships to build a likelihood ratio test of differential abundance for proteins. We show that the likelihood ratio statistic can be computed in linear time with the number of peptides. We also provide the asymptotic null distribution of a regularized version of our statistic. Experiments on both real and simulated datasets show that our procedure outperforms state-of-the-art methods. The procedures are available via the pepa.test function of the DAPAR Bioconductor R package.

  1. A study of correlations between crude oil spot and futures markets: A rolling sample test

    NASA Astrophysics Data System (ADS)

    Liu, Li; Wan, Jieqiu

    2011-10-01

    In this article, we investigate the asymmetries of exceedance correlations and cross-correlations between the West Texas Intermediate (WTI) spot and futures markets. First, employing the test statistic proposed by Hong et al. [Asymmetries in stock returns: statistical tests and economic evaluation, Review of Financial Studies 20 (2007) 1547-1581], we find that the exceedance correlations were overall symmetric. However, the results from rolling windows show that some occasional events could induce significant asymmetries in the exceedance correlations. Second, employing the test statistic proposed by Podobnik et al. [Quantifying cross-correlations using local and global detrending approaches, European Physical Journal B 71 (2009) 243-250], we find that the cross-correlations were significant even for large lagged orders. Using the detrended cross-correlation analysis proposed by Podobnik and Stanley [Detrended cross-correlation analysis: a new method for analyzing two nonstationary time series, Physical Review Letters 100 (2008) 084102], we find that the cross-correlations were weakly persistent and were stronger between the spot price and the futures contract with larger maturity. Our results from the rolling sample test also show the apparent effects of exogenous events. Additionally, we provide some relevant discussion of the obtained evidence.

  2. Introducing 3D U-statistic method for separating anomaly from background in exploration geochemical data with associated software development

    NASA Astrophysics Data System (ADS)

    Ghannadpour, Seyyed Saeed; Hezarkhani, Ardeshir

    2016-03-01

    The U-statistic method is one of the most important structural methods for separating anomaly from background. It considers the locations of samples and carries out the statistical analysis of the data without judging from a geochemical point of view, attempting to separate subpopulations and determine anomalous areas. In the present study, to use the U-statistic method in a three-dimensional (3D) setting, the U-statistic is applied to the grades of two ideal test examples while taking the samples' Z values (elevation) into account. This is the first time that the method has been applied in a 3D setting. To evaluate the performance of the 3D U-statistic method, and in order to compare the U-statistic with a non-structural method, the threshold-assessment method based on the median and standard deviation (MSD method) is applied to the two test examples. Results show that the samples indicated as anomalous by the U-statistic method are more regular and involve less dispersion than those indicated by the MSD method, so that, according to the locations of the anomalous samples, their denser areas can be delineated as promising zones. Moreover, the results show that at a threshold of U = 0, the total misclassification error for the U-statistic method is much smaller than the total error of the x̄ + n × s criterion. Finally, a 3D model of the two test examples separating anomaly from background using the 3D U-statistic method is provided. The source code for a software program, developed in the MATLAB programming language to perform the calculations of the 3D U-spatial statistic method, is additionally provided. This software is compatible with all geochemical varieties and can be used in similar exploration projects.

  3. Interpreting support vector machine models for multivariate group wise analysis in neuroimaging

    PubMed Central

    Gaonkar, Bilwaj; Shinohara, Russell T; Davatzikos, Christos

    2015-01-01

    Machine learning based classification algorithms like support vector machines (SVMs) have shown great promise for turning high-dimensional neuroimaging data into clinically useful decision criteria. However, tracing the imaging-based patterns that contribute significantly to classifier decisions remains an open problem. This is an issue of critical importance in imaging studies seeking to determine which anatomical or physiological imaging features contribute to the classifier's decision, thereby allowing users to critically evaluate the findings of such machine learning methods and to understand disease mechanisms. The majority of published work addresses the question of statistical inference for support vector classification using permutation tests based on SVM weight vectors. Such permutation testing ignores the SVM margin, which is critical in SVM theory. In this work we emphasize the use of a statistic that explicitly accounts for the SVM margin and show that the null distributions associated with this statistic are asymptotically normal. Further, our experiments show that this statistic is far less conservative than weight-based permutation tests and yet specific enough to tease out multivariate patterns in the data. Thus, we can better understand the multivariate patterns that the SVM uses for neuroimaging-based classification. PMID:26210913

  4. Statistical tests for detecting associations with groups of genetic variants: generalization, evaluation, and implementation

    PubMed Central

    Ferguson, John; Wheeler, William; Fu, YiPing; Prokunina-Olsson, Ludmila; Zhao, Hongyu; Sampson, Joshua

    2013-01-01

    With recent advances in sequencing, genotyping arrays, and imputation, GWAS now aim to identify associations with rare and uncommon genetic variants. Here, we describe and evaluate a class of statistics, generalized score statistics (GSS), that can test for an association between a group of genetic variants and a phenotype. GSS are a simple weighted sum of single-variant statistics and their cross-products. We show that the majority of statistics currently used to detect associations with rare variants are equivalent to choosing a specific set of weights within this framework. We then evaluate the power of various weighting schemes as a function of variant characteristics, such as MAF, the proportion associated with the phenotype, and the direction of effect. Ultimately, we find that two classical tests are robust and powerful, but details are provided as to when other GSS may perform favorably. The software package CRaVe is available at our website (http://dceg.cancer.gov/bb/tools/crave). PMID:23092956
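
    A hedged sketch of the weighted-sum idea: per-variant score statistics are combined through a quadratic form U' W U, where an all-ones weight matrix gives a burden-style test and an identity matrix gives a variance-component (SKAT-like) test; permutation p-values are used here in place of the analytic null distributions, and the data are simulated.

```python
# Hedged sketch of generalized-score-style statistics as weighted sums of
# single-variant score statistics and their cross-products.
import numpy as np

def score_vector(G, y):
    return G.T @ (y - y.mean())                 # per-variant score statistics U_j

def gss(G, y, W):
    U = score_vector(G, y)
    return U @ W @ U

def permutation_p(G, y, W, n_perm=2000, seed=0):
    rng = np.random.default_rng(seed)
    obs = gss(G, y, W)
    null = np.array([gss(G, rng.permutation(y), W) for _ in range(n_perm)])
    return obs, (1 + np.sum(null >= obs)) / (1 + n_perm)

rng = np.random.default_rng(11)
n, m = 500, 20
G = rng.binomial(2, 0.05, size=(n, m)).astype(float)   # rare-variant genotypes
y = rng.binomial(1, 0.3, n).astype(float)               # binary phenotype

W_burden = np.ones((m, m))                      # effects assumed in the same direction
W_skat = np.eye(m)                              # arbitrary directions of effect
print("burden:", permutation_p(G, y, W_burden))
print("variance component:", permutation_p(G, y, W_skat))
```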

  5. Biofeedback-assisted relaxation training to decrease test anxiety in nursing students.

    PubMed

    Prato, Catherine A; Yucha, Carolyn B

    2013-01-01

    Nursing students experiencing debilitating test anxiety may be unable to demonstrate their knowledge and are at risk of poor academic performance. A biofeedback-assisted relaxation training program was created to reduce test anxiety. Anxiety was measured using Spielberger's Test Anxiety Inventory (TAI) and by monitoring peripheral skin temperature, pulse, and respiration rates during the training. Participants were introduced to diaphragmatic breathing, progressive muscle relaxation, and autogenic training. Statistically significant changes occurred in respiratory rates and skin temperatures during the diaphragmatic breathing session; in respiratory rates and peripheral skin temperatures during the progressive muscle relaxation session; and in respiratory and pulse rates and peripheral skin temperatures during the autogenic sessions. No statistically significant difference was noted between the first and second TAI. Subjective test anxiety scores of the students did not decrease by the end of training. The autogenic training session was most effective, showing statistically significant decreases in respiratory and pulse rates and increases in peripheral skin temperature.

  6. Test 6, Test 7, and Gas Standard Analysis Results

    NASA Technical Reports Server (NTRS)

    Perez, Horacio, III

    2007-01-01

    This viewgraph presentation shows results of analyses of odor, toxic offgassing, and gas standards. The topics include: 1) Statistical Analysis Definitions; 2) Odor Analysis Results, NASA Standard 6001 Test 6; 3) Toxic Offgassing Analysis Results, NASA Standard 6001 Test 7; and 4) Gas Standard Results, NASA Standard 6001 Test 7.

  7. Initial Experience with Balloon-Occluded Trans-catheter Arterial Chemoembolization (B-TACE) for Hepatocellular Carcinoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maruyama, Mitsunari, E-mail: mitunari@med-shimane.u.ac.jp; Yoshizako, Takeshi, E-mail: yosizako@med.shimane-u.ac.jp; Nakamura, Tomonori, E-mail: t-naka@med.shimane-u.ac.jp

    2016-03-15

    Purpose: This study was performed to evaluate the accumulation of lipiodol emulsion (LE) and adverse events during our initial experience of balloon-occluded trans-catheter arterial chemoembolization (B-TACE) for hepatocellular carcinoma (HCC) compared with conventional TACE (C-TACE). Methods: The B-TACE group (50 cases) was compared with the C-TACE group (50 cases). The ratio of the LE concentration in the tumor to that in the surrounding embolized liver parenchyma (LE ratio) was calculated after each treatment. Adverse events were evaluated according to the Common Terminology Criteria for Adverse Events (CTCAE) version 4.0. Results: The LE ratio at the subsegmental level showed a statistically significant difference between the groups (t test: P < 0.05). Only elevation of alanine aminotransferase was more frequent in the B-TACE group, showing a statistically significant difference (Mann-Whitney test: P < 0.05). While B-TACE caused severe adverse events (liver abscess and infarction) in patients with bile duct dilatation, there was no statistically significant difference in incidence between the groups. Multivariate logistic regression analysis suggested that the significant risk factor for liver abscess/infarction was bile duct dilatation (P < 0.05). Conclusion: The LE ratio at the subsegmental level showed a statistically significant difference between the groups (t test: P < 0.05). B-TACE caused severe adverse events (liver abscess and infarction) in patients with bile duct dilatation.

  8. The score statistic of the LD-lod analysis: detecting linkage adaptive to linkage disequilibrium.

    PubMed

    Huang, J; Jiang, Y

    2001-01-01

    We study the properties of a modified lod score method for testing linkage that incorporates linkage disequilibrium (LD-lod). By examination of its score statistic, we show that the LD-lod score method adaptively combines two sources of information: (a) the IBD sharing score which is informative for linkage regardless of the existence of LD and (b) the contrast between allele-specific IBD sharing scores which is informative for linkage only in the presence of LD. We also consider the connection between the LD-lod score method and the transmission-disequilibrium test (TDT) for triad data and the mean test for affected sib pair (ASP) data. We show that, for triad data, the recessive LD-lod test is asymptotically equivalent to the TDT; and for ASP data, it is an adaptive combination of the TDT and the ASP mean test. We demonstrate that the LD-lod score method has relatively good statistical efficiency in comparison with the ASP mean test and the TDT for a broad range of LD and the genetic models considered in this report. Therefore, the LD-lod score method is an interesting approach for detecting linkage when the extent of LD is unknown, such as in a genome-wide screen with a dense set of genetic markers. Copyright 2001 S. Karger AG, Basel

  9. Identifying Pleiotropic Genes in Genome-Wide Association Studies for Multivariate Phenotypes with Mixed Measurement Scales

    PubMed Central

    Williams, L. Keoki; Buu, Anne

    2017-01-01

    We propose a multivariate genome-wide association test for mixed continuous, binary, and ordinal phenotypes. A latent response model is used to estimate the correlation between phenotypes with different measurement scales so that the empirical distribution of the Fisher's combination statistic under the null hypothesis is estimated efficiently. The simulation study shows that our proposed correlation estimation methods have high levels of accuracy. More importantly, our approach conservatively estimates the variance of the test statistic so that the type I error rate is controlled. The simulation also shows that the proposed test maintains power at a level very close to that of the ideal analysis based on known latent phenotypes while controlling the type I error. In contrast, conventional approaches, such as dichotomizing all observed phenotypes or treating them as continuous variables, could either reduce the power or employ a linear regression model unfit for the data. Furthermore, the statistical analysis of the database of the Study of Addiction: Genetics and Environment (SAGE) demonstrates that conducting a multivariate test on multiple phenotypes can increase the power to identify markers that might not otherwise be chosen using marginal tests. The proposed method also offers a new approach to analyzing the Fagerström Test for Nicotine Dependence as multivariate phenotypes in genome-wide association studies. PMID:28081206
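
    The combination step can be illustrated with a brief sketch. This is not the authors' latent-response implementation; it only shows, on simulated toy data with placeholder marginal tests, how a Fisher's combination statistic over correlated per-phenotype p-values can be referred to an empirical null obtained by permuting genotypes, which preserves the correlation among the phenotypes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy data: one SNP (coded 0/1/2) and three correlated phenotypes of mixed type.
n = 500
g = rng.binomial(2, 0.3, size=n)
cov = [[1.0, 0.5, 0.5], [0.5, 1.0, 0.5], [0.5, 0.5, 1.0]]
latent = 0.15 * g + rng.multivariate_normal(np.zeros(3), cov, size=n).T
y_cont = latent[0]                                   # continuous trait
y_bin = (latent[1] > 0).astype(int)                  # binary trait
y_ord = np.digitize(latent[2], [-0.5, 0.5])          # ordinal trait (3 levels)

def fisher_combination(g, y_cont, y_bin, y_ord):
    """-2 * sum(log p) over simple per-phenotype association tests
    (placeholders for the marginal models used in practice)."""
    p1 = stats.pearsonr(g, y_cont)[1]
    p2 = stats.pearsonr(g, y_bin)[1]
    p3 = stats.spearmanr(g, y_ord)[1]
    return -2.0 * np.sum(np.log([p1, p2, p3]))

t_obs = fisher_combination(g, y_cont, y_bin, y_ord)

# Empirical null: permuting the genotype breaks any genotype-phenotype
# association while preserving the correlation among the phenotypes.
null = np.array([fisher_combination(rng.permutation(g), y_cont, y_bin, y_ord)
                 for _ in range(999)])
p_emp = (1 + np.sum(null >= t_obs)) / (1 + len(null))
print(f"Fisher combination statistic = {t_obs:.2f}, permutation p = {p_emp:.3f}")
```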

  10. Statistical validation of normal tissue complication probability models.

    PubMed

    Xu, Cheng-Jian; van der Schaaf, Arjen; Van't Veld, Aart A; Langendijk, Johannes A; Schilstra, Cornelis

    2012-09-01

    To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use. Copyright © 2012 Elsevier Inc. All rights reserved.
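
    A minimal sketch of the validation idea, assuming synthetic data in place of the xerostomia cohort: an L1-penalized (LASSO-type) logistic model is scored by cross-validated AUC, and a permutation test on the outcome labels indicates whether that performance exceeds chance. The nested (double) cross-validation used for hyperparameter tuning in the paper is omitted here for brevity.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, permutation_test_score

# Synthetic stand-in for dose/clinical predictors and a binary complication outcome.
X, y = make_classification(n_samples=200, n_features=20, n_informative=4,
                           weights=[0.7, 0.3], random_state=1)

# L1 penalty gives LASSO-type variable selection for the NTCP-style model.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5, max_iter=1000)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
auc, perm_aucs, p_value = permutation_test_score(
    model, X, y, cv=cv, scoring="roc_auc", n_permutations=200, random_state=1)

print(f"cross-validated AUC = {auc:.3f}")
print(f"permutation p-value = {p_value:.3f}")  # chance of reaching this AUC with shuffled labels
```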

  11. How allele frequency and study design affect association test statistics with misrepresentation errors.

    PubMed

    Escott-Price, Valentina; Ghodsi, Mansoureh; Schmidt, Karl Michael

    2014-04-01

    We evaluate the effect of genotyping errors on the type-I error of a general association test based on genotypes, showing that, in the presence of errors in the case and control samples, the test statistic asymptotically follows a scaled non-central $\chi^2$ distribution. We give explicit formulae for the scaling factor and non-centrality parameter for the symmetric allele-based genotyping error model and for additive and recessive disease models. They show how genotyping errors can lead to a significantly higher false-positive rate, growing with sample size, compared with the nominal significance levels. The strength of this effect depends very strongly on the population distribution of the genotype, with a pronounced effect in the case of rare alleles, and a great robustness against error in the case of large minor allele frequency. We also show how these results can be used to correct $p$-values.
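
    Once the scaling factor and non-centrality parameter are known, the inflated false-positive rate follows directly. The sketch below uses hypothetical values for these two parameters (the paper derives them from the error rates, allele frequency, disease model and sample size) and evaluates the actual type-I error when the nominal chi-squared critical value is applied.

```python
from scipy.stats import chi2, ncx2

df = 2          # e.g. a 2-df genotype-based association test
alpha = 5e-8    # nominal (genome-wide) significance level

# Hypothetical values; in the paper these follow from the genotyping-error
# rates, the allele frequency, the disease model and the sample size.
scale, lam = 1.05, 3.0   # scaling factor and non-centrality parameter

crit = chi2.ppf(1 - alpha, df)            # nominal critical value
# If T ~ scale * ncx2(df, lam), then P(T > crit) = P(ncx2 > crit / scale).
actual_alpha = ncx2.sf(crit / scale, df, lam)
print(f"nominal alpha = {alpha:.1e}, actual type-I error = {actual_alpha:.1e}")
```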

  12. Wavelet analysis in ecology and epidemiology: impact of statistical tests

    PubMed Central

    Cazelles, Bernard; Cazelles, Kévin; Chavez, Mario

    2014-01-01

    Wavelet analysis is now frequently used to extract information from ecological and epidemiological time series. Statistical hypothesis tests are conducted on associated wavelet quantities to assess the likelihood that they are due to a random process. Such random processes represent null models and are generally based on synthetic data that share some statistical characteristics with the original time series. This allows the comparison of null statistics with those obtained from original time series. When creating synthetic datasets, different techniques of resampling result in different characteristics shared by the synthetic time series. Therefore, it becomes crucial to consider the impact of the resampling method on the results. We have addressed this point by comparing seven different statistical testing methods applied with different real and simulated data. Our results show that statistical assessment of periodic patterns is strongly affected by the choice of the resampling method, so two different resampling techniques could lead to two different conclusions about the same time series. Moreover, our results clearly show the inadequacy of resampling series generated by white noise and red noise that are nevertheless the methods currently used in the wide majority of wavelets applications. Our results highlight that the characteristics of a time series, namely its Fourier spectrum and autocorrelation, are important to consider when choosing the resampling technique. Results suggest that data-driven resampling methods should be used such as the hidden Markov model algorithm and the ‘beta-surrogate’ method. PMID:24284892

  13. Wavelet analysis in ecology and epidemiology: impact of statistical tests.

    PubMed

    Cazelles, Bernard; Cazelles, Kévin; Chavez, Mario

    2014-02-06

    Wavelet analysis is now frequently used to extract information from ecological and epidemiological time series. Statistical hypothesis tests are conducted on associated wavelet quantities to assess the likelihood that they are due to a random process. Such random processes represent null models and are generally based on synthetic data that share some statistical characteristics with the original time series. This allows the comparison of null statistics with those obtained from original time series. When creating synthetic datasets, different techniques of resampling result in different characteristics shared by the synthetic time series. Therefore, it becomes crucial to consider the impact of the resampling method on the results. We have addressed this point by comparing seven different statistical testing methods applied with different real and simulated data. Our results show that statistical assessment of periodic patterns is strongly affected by the choice of the resampling method, so two different resampling techniques could lead to two different conclusions about the same time series. Moreover, our results clearly show the inadequacy of resampling series generated by white noise and red noise that are nevertheless the methods currently used in the wide majority of wavelets applications. Our results highlight that the characteristics of a time series, namely its Fourier spectrum and autocorrelation, are important to consider when choosing the resampling technique. Results suggest that data-driven resampling methods should be used such as the hidden Markov model algorithm and the 'beta-surrogate' method.
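
    The general testing loop can be sketched independently of the resampling method, which is precisely the design choice the paper shows to be critical. The sketch below uses a simple periodogram peak as the test statistic and a circular block bootstrap as one illustrative resampler on toy data; it is not a recommendation of that resampler, and any surrogate generator (e.g. a data-driven method) can be plugged in instead.

```python
import numpy as np

rng = np.random.default_rng(2)

def peak_power(x):
    """Test statistic: largest periodogram ordinate (dominant periodicity)."""
    spec = np.abs(np.fft.rfft(x - x.mean())) ** 2
    return spec[1:].max()

def resample(x, block=20):
    """One possible null model: a circular block bootstrap. The choice of
    resampler is exactly the design decision the paper investigates."""
    n = len(x)
    starts = rng.integers(0, n, size=n // block + 1)
    idx = (starts[:, None] + np.arange(block)) % n
    return x[idx.ravel()][:n]

# Toy series: a noisy annual cycle sampled monthly for 20 years.
t = np.arange(240)
x = np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, t.size)

t_obs = peak_power(x)
null = np.array([peak_power(resample(x)) for _ in range(999)])
p = (1 + np.sum(null >= t_obs)) / (1 + len(null))
print(f"observed peak power = {t_obs:.1f}, surrogate p-value = {p:.3f}")
```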

  14. Radiographic comparison of different concentrations of recombinant human bone morphogenetic protein with allogenic bone compared with the use of 100% mineralized cancellous bone allograft in maxillary sinus grafting.

    PubMed

    Froum, Stuart J; Wallace, Stephen; Cho, Sang-Choon; Khouly, Ismael; Rosenberg, Edwin; Corby, Patricia; Froum, Scott; Mascarenhas, Patrick; Tarnow, Dennis P

    2014-01-01

    The purpose of this study was to radiographically evaluate, then analyze, bone height, volume, and density with reference to percentage of vital bone after maxillary sinuses were grafted using two different doses of recombinant human bone morphogenetic protein 2/acellular collagen sponge (rhBMP-2/ACS) combined with mineralized cancellous bone allograft (MCBA) and a control sinus grafted with MCBA only. A total of 18 patients (36 sinuses) were used for analysis of height and volume measurements, each having two of three graft combinations (one in each sinus): (1) control, MCBA only; (2) test 1, MCBA + 5.6 mL of rhBMP-2/ACS (containing 8.4 mg of rhBMP-2); and (3) test 2, MCBA + 2.8 mL of rhBMP-2/ACS (containing 4.2 mg of rhBMP-2). The study was completed with 16 patients who also had bilateral cores removed 6 to 9 months following sinus augmentation. A computer software system was used to evaluate 36 computed tomography scans, and two time points were selected for measurements of height. The results indicated that the height of the grafted sinus was significantly greater in the treatment groups compared with the control; however, by the second time point, there were no statistically significant differences. Three weeks post-surgery, bone volume measurements showed similar statistically significant differences between the test and control groups. However, prior to core removal, test group 1, with the greater dose of rhBMP-2, showed a statistically significantly greater increase compared with test group 2 and the control; there was no statistically significant difference between the latter two groups. All three groups had similar volume and shrinkage. Density measurements varied from the above results, with the control showing statistically significantly greater density at both time points. By contrast, the density increase over time in both rhBMP groups was similar and statistically higher than in the control group. There were strong associations between height and volume in all groups, and between volume and new vital bone only in the control group. There were no statistically significant relationships observed between height and bone density or between volume and bone density for any parameter measured. More cases and monitoring of the future survival of implants placed in these augmented sinuses are needed to verify these results.

  15. Mysid (Mysidopsis bahia) life-cycle test: Design comparisons and assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lussier, S.M.; Champlin, D.; Kuhn, A.

    1996-12-31

    This study examines ASTM Standard E1191-90, "Standard Guide for Conducting Life-cycle Toxicity Tests with Saltwater Mysids," 1990, using Mysidopsis bahia, by comparing several test designs to assess growth, reproduction, and survival. The primary objective was to determine the most labor-efficient and statistically powerful test design for the measurement of statistically detectable effects on biologically sensitive endpoints. Five different test designs were evaluated, varying compartment size, number of organisms per compartment and sex ratio. Results showed that while paired organisms in the ASTM design had the highest rate of reproduction among designs tested, no individual design had greater statistical power to detect differences in reproductive effects. Reproduction was not statistically different between organisms paired in the ASTM design and those with randomized sex ratios using larger test compartments. These treatments had numerically higher reproductive success and lower within-tank replicate variance than treatments using smaller compartments where organisms were randomized, or had a specific sex ratio. In this study, survival and growth were not statistically different among designs tested. Within-tank replicate variability can be reduced by using many exposure compartments with pairs, or few compartments with many organisms in each. While this improves variance within replicate chambers, it does not strengthen the power of detection among treatments in the test. An increase in the number of true replicates (exposure chambers) to eight will have the effect of reducing the percent detectable difference by a factor of two.

  16. Variations in intensity statistics for representational and abstract art, and for art from the Eastern and Western hemispheres.

    PubMed

    Graham, Daniel J; Field, David J

    2008-01-01

    Two recent studies suggest that natural scenes and paintings show similar statistical properties. But does the content or region of origin of an artwork affect its statistical properties? We addressed this question by having judges place paintings from a large, diverse collection into one of three subject-matter categories using a forced-choice paradigm. Basic statistics for images whose categorization was agreed upon by all judges showed no significant differences between those judged to be 'landscape' and 'portrait/still-life', but these two classes differed from paintings judged to be 'abstract'. All categories showed basic spatial statistical regularities similar to those typical of natural scenes. A test of the full painting collection (140 images) with respect to the works' place of origin (provenance) showed significant differences between Eastern and Western works, differences which we find are likely related to the materials and the choice of background color. Although artists deviate slightly from reproducing natural statistics in abstract art (compared to representational art), the great majority of human art likely shares basic statistical limitations. We argue that statistical regularities in art are rooted in the need to make art visible to the eye, not in the inherent aesthetic value of natural-scene statistics, and we suggest that variability in spatial statistics may be generally imposed by manufacture.

  17. Sequential CFAR detectors using a dead-zone limiter

    NASA Astrophysics Data System (ADS)

    Tantaratana, Sawasd

    1990-09-01

    The performances of some proposed sequential constant-false-alarm-rate (CFAR) detectors are evaluated. The observations are passed through a dead-zone limiter, the output of which is -1, 0, or +1, depending on whether the input is less than -c, between -c and c, or greater than c, where c is a constant. The test statistic is the sum of the outputs. The test is performed on a reduced set of data (those with absolute value larger than c), with the test statistic being the sum of the signs of the reduced set of data. Both constant and linear boundaries are considered. Numerical results show a significant reduction of the average number of observations needed to achieve the same false alarm and detection probabilities as a fixed-sample-size CFAR detector using the same kind of test statistic.
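
    A minimal sketch of this detector, with illustrative values for the dead-zone width, the constant boundary and the truncation point rather than the values designed in the paper: samples inside the dead zone are discarded, and the running sum of the signs of the retained samples is compared with a fixed boundary. Because those signs are equally likely to be +1 or -1 under zero-mean symmetric noise, the false-alarm rate does not depend on the unknown noise power, which is the CFAR property.

```python
import numpy as np

def sequential_cfar(samples, c=0.5, threshold=25, max_reduced=100):
    """Sequential detector using a dead-zone limiter.

    Samples with |x| <= c carry no sign information and are skipped; the
    test statistic is the running sum of the signs of the retained samples.
    Boundary and truncation values here are illustrative only.
    """
    s, n = 0, 0
    for x in samples:
        if abs(x) <= c:          # dead zone: discard the sample
            continue
        s += 1 if x > 0 else -1
        n += 1
        if s >= threshold:
            return "signal present", n
        if n >= max_reduced:
            break
    return "noise only", n

rng = np.random.default_rng(3)
print(sequential_cfar(rng.normal(0.0, 1.0, 1000)))   # noise-only input
print(sequential_cfar(rng.normal(0.5, 1.0, 1000)))   # input with a positive mean shift
```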

  18. General Framework for Meta-analysis of Rare Variants in Sequencing Association Studies

    PubMed Central

    Lee, Seunggeun; Teslovich, Tanya M.; Boehnke, Michael; Lin, Xihong

    2013-01-01

    We propose a general statistical framework for meta-analysis of gene- or region-based multimarker rare variant association tests in sequencing association studies. In genome-wide association studies, single-marker meta-analysis has been widely used to increase statistical power by combining results via regression coefficients and standard errors from different studies. In analysis of rare variants in sequencing studies, region-based multimarker tests are often used to increase power. We propose meta-analysis methods for commonly used gene- or region-based rare variants tests, such as burden tests and variance component tests. Because estimation of regression coefficients of individual rare variants is often unstable or not feasible, the proposed method avoids this difficulty by calculating score statistics instead that only require fitting the null model for each study and then aggregating these score statistics across studies. Our proposed meta-analysis rare variant association tests are conducted based on study-specific summary statistics, specifically score statistics for each variant and between-variant covariance-type (linkage disequilibrium) relationship statistics for each gene or region. The proposed methods are able to incorporate different levels of heterogeneity of genetic effects across studies and are applicable to meta-analysis of multiple ancestry groups. We show that the proposed methods are essentially as powerful as joint analysis by directly pooling individual level genotype data. We conduct extensive simulations to evaluate the performance of our methods by varying levels of heterogeneity across studies, and we apply the proposed methods to meta-analysis of rare variant effects in a multicohort study of the genetics of blood lipid levels. PMID:23768515
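
    The aggregation step can be sketched in a few lines under simplifying assumptions (simulated study-level summaries, equal variant weights); this is not the authors' software, only the generic score-based burden meta-analysis it describes: per-study score vectors and their covariance (LD-type) matrices add across studies, and a weighted collapse of the combined scores is referred to a 1-df chi-squared distribution.

```python
import numpy as np
from scipy.stats import chi2

# Study-level summary statistics for one gene with m rare variants:
# U_k = score vector, V_k = its covariance (LD-type) matrix in study k.
rng = np.random.default_rng(4)
m, studies = 5, 3
U_list, V_list = [], []
for _ in range(studies):
    A = rng.normal(size=(m, m))
    V = A @ A.T / m                      # a positive-definite covariance
    U = rng.multivariate_normal(np.zeros(m), V)
    U_list.append(U)
    V_list.append(V)

# Meta-analysis: score statistics and covariances simply add across studies.
U = np.sum(U_list, axis=0)
V = np.sum(V_list, axis=0)

# Weighted burden test: collapse variants with weights w, refer to chi2 (1 df).
w = np.ones(m)                           # e.g. equal or MAF-based weights
T = (w @ U) ** 2 / (w @ V @ w)
print(f"burden meta-statistic = {T:.2f}, p = {chi2.sf(T, df=1):.3f}")
```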

  19. Properties of different selection signature statistics and a new strategy for combining them.

    PubMed

    Ma, Y; Ding, X; Qanbari, S; Weigend, S; Zhang, Q; Simianer, H

    2015-11-01

    Identifying signatures of recent or ongoing selection is of high relevance in livestock population genomics. From a statistical perspective, determining a proper testing procedure and combining various test statistics is challenging. On the basis of extensive simulations in this study, we discuss the statistical properties of eight different established selection signature statistics. In the considered scenario, we show that a reasonable power to detect selection signatures is achieved with high marker density (>1 SNP/kb) as obtained from sequencing, while rather small sample sizes (~15 diploid individuals) appear to be sufficient. Most selection signature statistics, such as the composite likelihood ratio and cross-population extended haplotype homozygosity, have the highest power when fixation of the selected allele is reached, while the integrated haplotype score has the highest power when selection is ongoing. We suggest a novel strategy, called de-correlated composite of multiple signals (DCMS), to combine different statistics for detecting selection signatures while accounting for the correlation between the different selection signature statistics. When examined with simulated data, DCMS consistently has higher power than most of the single statistics and shows a reliable positional resolution. We illustrate the new statistic on the established selective sweep around the lactase gene in human HapMap data, providing further evidence of its reliability. We then apply it to scan for selection signatures in two chicken samples with diverse skin color. Our analysis suggests that a set of well-known genes such as BCO2, MC1R, ASIP and TYR were involved in the divergent selection for this trait.
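
    The idea of a correlation-aware composite can be illustrated with a simplified sketch on toy data. This is in the spirit of DCMS but is not the authors' exact statistic: each component's -log10 p-value is downweighted by its total absolute correlation with the full set of statistics, so that near-duplicate signals do not dominate the combined score.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy matrix of -log10(p) values: rows are genomic windows, columns are
# different selection-signature statistics (e.g. CLR, iHS, XP-EHH, ...).
n_windows, n_stats = 1000, 4
cov = 0.5 * np.eye(n_stats) + 0.5          # moderately correlated components
logp = np.abs(rng.multivariate_normal(np.zeros(n_stats), cov, size=n_windows))

# Correlation between the component statistics, estimated genome-wide.
R = np.corrcoef(logp, rowvar=False)

# Downweight each statistic by its total absolute correlation with all
# statistics (including itself), so correlated signals count roughly once.
weights = 1.0 / np.abs(R).sum(axis=1)
composite = logp @ weights

top = np.argsort(composite)[-5:][::-1]
print("windows with the strongest composite evidence:", top)
```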

  20. SU-E-J-261: Statistical Analysis and Chaotic Dynamics of Respiratory Signal of Patients in BodyFix

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michalski, D; Huq, M; Bednarz, G

    Purpose: To quantify respiratory signal of patients in BodyFix undergoing 4DCT scan with and without immobilization cover. Methods: 20 pairs of respiratory tracks recorded with RPM system during 4DCT scan were analyzed. Descriptive statistic was applied to selected parameters of exhale-inhale decomposition. Standardized signals were used with the delay method to build orbits in embedded space. Nonlinear behavior was tested with surrogate data. Sample entropy SE, Lempel-Ziv complexity LZC and the largest Lyapunov exponents LLE were compared. Results: Statistical tests show difference between scans for inspiration time and its variability, which is bigger for scans without cover. The same ismore » for variability of the end of exhalation and inhalation. Other parameters fail to show the difference. For both scans respiratory signals show determinism and nonlinear stationarity. Statistical test on surrogate data reveals their nonlinearity. LLEs show signals chaotic nature and its correlation with breathing period and its embedding delay time. SE, LZC and LLE measure respiratory signal complexity. Nonlinear characteristics do not differ between scans. Conclusion: Contrary to expectation cover applied to patients in BodyFix appears to have limited effect on signal parameters. Analysis based on trajectories of delay vectors shows respiratory system nonlinear character and its sensitive dependence on initial conditions. Reproducibility of respiratory signal can be evaluated with measures of signal complexity and its predictability window. Longer respiratory period is conducive for signal reproducibility as shown by these gauges. Statistical independence of the exhale and inhale times is also supported by the magnitude of LLE. The nonlinear parameters seem more appropriate to gauge respiratory signal complexity since its deterministic chaotic nature. It contrasts with measures based on harmonic analysis that are blind for nonlinear features. Dynamics of breathing, so crucial for 4D-based clinical technologies, can be better controlled if nonlinear-based methodology, which reflects respiration characteristic, is applied. Funding provided by Varian Medical Systems via Investigator Initiated Research Project.« less

  1. Evaluation of Two Statistical Methods Provides Insights into the Complex Patterns of Alternative Polyadenylation Site Switching

    PubMed Central

    Li, Jie; Li, Rui; You, Leiming; Xu, Anlong; Fu, Yonggui; Huang, Shengfeng

    2015-01-01

    Switching between different alternative polyadenylation (APA) sites plays an important role in the fine tuning of gene expression. New technologies for the execution of 3’-end enriched RNA-seq allow genome-wide detection of the genes that exhibit significant APA site switching between different samples. Here, we show that the independence test gives better results than the linear trend test in detecting APA site-switching events. Further examination suggests that the discrepancy between these two statistical methods arises from complex APA site-switching events that cannot be represented by a simple change of average 3’-UTR length. In theory, the linear trend test is only effective in detecting these simple changes. We classify the switching events into four switching patterns: two simple patterns (3’-UTR shortening and lengthening) and two complex patterns. By comparing the results of the two statistical methods, we show that complex patterns account for 1/4 of all observed switching events that happen between normal and cancerous human breast cell lines. Because simple and complex switching patterns may convey different biological meanings, they merit separate study. We therefore propose to combine both the independence test and the linear trend test in practice. First, the independence test should be used to detect APA site switching; second, the linear trend test should be invoked to identify simple switching events; and third, those complex switching events that pass independence testing but fail linear trend testing can be identified. PMID:25875641
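
    The contrast between the two tests can be sketched on a toy 2 x K table of read counts over ordered poly(A) sites. The counts below are invented for illustration and describe a "complex" switch that moves usage from the middle site toward both ends, so the average 3'-UTR length barely changes; the chi-square independence test detects it while the linear trend test (in its standard (N-1)r^2 form over ordered site scores) does not.

```python
import numpy as np
from scipy.stats import chi2, chi2_contingency

# Toy 2 x K table: read counts on K ordered poly(A) sites (proximal -> distal)
# in two conditions; the switch is symmetric, so mean 3'-UTR length is stable.
table = np.array([[120,  40, 300,  45, 115],    # condition A
                  [ 60, 150,  80, 160,  70]])   # condition B

# Independence (chi-square) test: sensitive to any change in site usage.
chi2_ind, p_ind, _, _ = chi2_contingency(table)

# Linear trend test: scores the ordered sites and tests for a linear shift,
# i.e. a simple overall 3'-UTR shortening or lengthening.
scores = np.arange(table.shape[1])              # 0, 1, ..., K-1 site positions
n = table.sum()
row = np.repeat([0, 1], table.sum(axis=1))      # expand to per-read labels
col = np.concatenate([np.repeat(scores, table[i]) for i in range(2)])
r = np.corrcoef(row, col)[0, 1]
trend_stat = (n - 1) * r ** 2                   # ~ chi2 with 1 df under H0
p_trend = chi2.sf(trend_stat, df=1)

print(f"independence test p = {p_ind:.2e}, linear trend test p = {p_trend:.3f}")
```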

  2. The breaking load method - Results and statistical modification from the ASTM interlaboratory test program

    NASA Technical Reports Server (NTRS)

    Colvin, E. L.; Emptage, M. R.

    1992-01-01

    The breaking load test provides quantitative stress corrosion cracking data by determining the residual strength of tension specimens that have been exposed to corrosive environments. Eight laboratories have participated in a cooperative test program under the auspices of ASTM Committee G-1 to evaluate the new test method. All eight laboratories were able to distinguish between three tempers of aluminum alloy 7075. The statistical analysis procedures that were used in the test program do not work well in all situations. An alternative procedure using Box-Cox transformations shows a great deal of promise. An ASTM standard method has been drafted which incorporates the Box-Cox procedure.
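
    A minimal sketch of the Box-Cox step on synthetic residual-strength data (the three groups below are invented stand-ins for the 7075 tempers, not the ASTM program data): the transformation parameter is estimated by maximum likelihood from the pooled sample, applied to each group, and the transformed values are then compared by one-way ANOVA.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Synthetic residual breaking loads (arbitrary units) for three tempers,
# right-skewed as strength data often are.
temper_a = rng.gamma(shape=9.0, scale=50, size=30)
temper_b = rng.gamma(shape=12.0, scale=50, size=30)
temper_c = rng.gamma(shape=16.0, scale=50, size=30)

# Box-Cox requires positive data; lambda is estimated from the pooled sample
# and then applied to each group so all groups share one transformation.
pooled = np.concatenate([temper_a, temper_b, temper_c])
_, lam = stats.boxcox(pooled)
groups = [stats.boxcox(g, lmbda=lam) for g in (temper_a, temper_b, temper_c)]

F, p = stats.f_oneway(*groups)
print(f"Box-Cox lambda = {lam:.2f}; ANOVA on transformed data: F = {F:.1f}, p = {p:.2e}")
```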

  3. [Hydrologic variability and sensitivity based on Hurst coefficient and Bartels statistic].

    PubMed

    Lei, Xu; Xie, Ping; Wu, Zi Yi; Sang, Yan Fang; Zhao, Jiang Yan; Li, Bin Bin

    2018-04-01

    Due to global climate change and frequent human activities in recent years, the purely stochastic component of a hydrological sequence may be mixed with one or several forms of variation, including jumps, trends, periodicity and dependency. It is therefore urgently needed to clarify which indices should be used to quantify the degree of this variability. In this study, we defined hydrological variability based on the Hurst coefficient and the Bartels statistic, and used Monte Carlo statistical tests to analyze their sensitivity to different variants. When the hydrological sequence had a jump or trend variation, both the Hurst coefficient and the Bartels statistic could reflect the variation, with the Hurst coefficient being more sensitive to weak jump or trend variation. When the sequence had a periodic component, only the Bartels statistic could detect the change in the sequence. When the sequence had dependency, both the Hurst coefficient and the Bartels statistic could reflect the variation, with the latter able to detect weaker dependent variations. For all four types of variation, both the Hurst variability and the Bartels variability increased as the range of variation increased; thus, they can be used to measure the variation intensity of a hydrological sequence. We analyzed the temperature series of different weather stations in the Lancang River basin. Results showed that the temperature at all stations exhibited an upward trend or jump, indicating that the entire basin has experienced warming in recent years and that the temperature variability in the upper and lower reaches is much higher. This case study demonstrates the practicability of the proposed method.
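
    As a rough sketch of one of the two ingredients (not the authors' implementation), the Bartels statistic, the rank version of von Neumann's ratio, can be computed directly from the ranks of the series; rather than quoting its exact null moments, the example below calibrates it with a permutation null, echoing the Monte Carlo tests used in the paper. The toy temperature series with a warming trend is invented for illustration.

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(7)

def bartels_rvn(x):
    """Bartels statistic: rank version of von Neumann's ratio.
    Values near 2 indicate randomness; small values indicate trend or
    positive dependence, large values indicate rapid alternation."""
    r = rankdata(x)
    return np.sum(np.diff(r) ** 2) / np.sum((r - r.mean()) ** 2)

# Toy annual temperature series with a warming trend.
n = 60
temp = 0.03 * np.arange(n) + rng.normal(0, 0.5, n)

rvn_obs = bartels_rvn(temp)
# Monte Carlo null: under pure randomness every ordering is equally likely.
null = np.array([bartels_rvn(rng.permutation(temp)) for _ in range(4999)])
p = (1 + np.sum(null <= rvn_obs)) / (1 + len(null))   # one-sided: trend/dependence
print(f"Bartels RVN = {rvn_obs:.2f} (2 expected under randomness), p = {p:.4f}")
```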

  4. Prototyping a Distributed Information Retrieval System That Uses Statistical Ranking.

    ERIC Educational Resources Information Center

    Harman, Donna; And Others

    1991-01-01

    Built using a distributed architecture, this prototype distributed information retrieval system uses statistical ranking techniques to provide better service to the end user. Distributed architecture was shown to be a feasible alternative to centralized or CD-ROM information retrieval, and user testing of the ranking methodology showed both…

  5. How many spectral lines are statistically significant?

    NASA Astrophysics Data System (ADS)

    Freund, J.

    When experimental line spectra are fitted with least squares techniques one frequently does not know whether n or n + 1 lines may be fitted safely. This paper shows how an F-test can be applied in order to determine the statistical significance of including an extra line into the fitting routine.
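
    A short sketch of the nested-model F-test on a toy spectrum (all data and starting values below are invented): fit the spectrum with n and n + 1 Gaussian lines by least squares, and test whether the drop in residual sum of squares justifies the three extra parameters.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import f as f_dist

rng = np.random.default_rng(8)

def gauss(x, a, mu, s):
    return a * np.exp(-0.5 * ((x - mu) / s) ** 2)

def model(x, *p):
    """Sum of an arbitrary number of Gaussian lines (3 parameters each)."""
    return sum(gauss(x, *p[i:i + 3]) for i in range(0, len(p), 3))

# Toy spectrum: two overlapping lines plus noise.
x = np.linspace(0, 10, 300)
y = gauss(x, 1.0, 4.0, 0.5) + gauss(x, 0.6, 5.5, 0.5) + rng.normal(0, 0.05, x.size)

def fit_rss(n_lines, guess):
    popt, _ = curve_fit(model, x, y, p0=guess)
    return np.sum((y - model(x, *popt)) ** 2), 3 * n_lines

rss1, k1 = fit_rss(1, [1, 4.5, 1])
rss2, k2 = fit_rss(2, [1, 4, 0.5, 0.5, 5.5, 0.5])

# F-test for the extra line: is the RSS reduction larger than expected by chance?
dof2 = x.size - k2
F = ((rss1 - rss2) / (k2 - k1)) / (rss2 / dof2)
p = f_dist.sf(F, k2 - k1, dof2)
print(f"F = {F:.1f}, p = {p:.2e} -> extra line {'is' if p < 0.05 else 'is not'} justified")
```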

  6. Color stability and degree of cure of direct composite restoratives after accelerated aging.

    PubMed

    Sarafianou, Aspasia; Iosifidou, Soultana; Papadopoulos, Triantafillos; Eliades, George

    2007-01-01

    This study evaluated the color changes and amount of remaining C=C bonds (%RDB) in three dental composites after hydrothermal- and photoaging. The materials tested were Estelite sigma, Filtek Supreme and Tetric Ceram. Specimens were fabricated from each material and subjected to L* a* b* colorimetry and FTIR spectroscopy before and after aging. Statistical evaluation of the deltaL*, deltaa*, deltab*, deltaE and %deltaRDB data was performed by one-way ANOVA and Tukey's test. The %RDB data before and after aging were statistically analyzed using two-way ANOVA and Student-Newman-Keuls test. In all cases an alpha = 0.05 significance level was used. No statistically significant differences were found in deltaL*, deltaa*, deltaE and %deltaRDB among the materials tested. Tetric Ceram demonstrated a significant difference in deltab*. All the materials showed visually perceptible (deltaE > 1) but clinically acceptable values (deltaE < 3.3). Within each material group, statistically significant differences in %RDB were noticed before and after aging (p < 0.05). Filtek Supreme presented the lowest %RDB before aging, with Tetric Ceram presenting the lowest %RDB after aging (p < 0.05). The %deltaRDB mean values were statistically significantly different among all the groups tested. No correlation was found between deltaE and %deltaRDB.

  7. Definition of simulated driving tests for the evaluation of drivers' reactions and responses.

    PubMed

    Bartolozzi, Riccardo; Frendo, Francesco

    2014-01-01

    This article aims at identifying the most significant measures in 2 perception-response (PR) tests performed at a driving simulator: a braking test and a lateral skid test, which were developed in this work. Forty-eight subjects (26 females and 22 males) with a mean age of 24.9 ± 3.0 years were enrolled for this study. They were asked to perform a drive on the driving simulator at the University of Pisa (Italy) following a specific test protocol, including 8-10 braking tests and 8-10 lateral skid tests. Driver input signals and vehicle model signals were recorded during the drives and analyzed to extract measures such as the reaction time, first response time, etc. Following a statistical procedure (based on analysis of variance [ANOVA] and post hoc tests), all test measures (3 for the braking test and 8 for the lateral skid test) were analyzed in terms of statistically significant differences among different drivers. The presented procedure allows evaluation of the capability of a given test to distinguish among different drivers. In the braking test, the reaction time showed a high dispersion among single drivers, leading to just 4.8 percent of statistically significant driver pairs (using the Games-Howell post hoc test), whereas the pedal transition time scored 31.9 percent. In the lateral skid test, 28.5 percent of the 2 × 2 comparisons showed significantly different reaction times, 19.5 percent had different response times, 35.2 percent had a different second peak of the steering wheel signal, and 33 percent showed different values of the integral of the steering wheel signal. For the braking test, which has been widely employed in similar forms in the literature, it was shown how the reaction time, with respect to the pedal transition time, can have a higher dispersion due to the influence of external factors. For the lateral skid test, the following measures were identified as the most significant for application studies: the reaction time for the reaction phase, the second peak of the steering wheel angle for the first instinctive response, and the integral of the steering wheel angle for the complete response. The methodology used to analyze the test measures was founded on statistically based and objective evaluation criteria and could be applied to other tests. Even if obtained with a fixed-base simulator, the obtained results represent useful information for applications of the presented PR tests in experimental campaigns with driving simulators.

  8. Association analysis of multiple traits by an approach of combining P values.

    PubMed

    Chen, Lili; Wang, Yong; Zhou, Yajing

    2018-03-01

    Increasing evidence shows that one variant can affect multiple traits, a widespread phenomenon in complex diseases. Joint analysis of multiple traits can increase the statistical power of association analysis and uncover the underlying genetic mechanism. Although there are many statistical methods to analyse multiple traits, most of them are suited to detecting common variants associated with multiple traits. However, because of the low minor allele frequency of rare variants, these methods are not optimal for rare variant association analysis. In this paper, we extend an adaptive combination of P values method (termed ADA) for a single trait to test the association between multiple traits and rare variants in a given region. For a given region, we use a reverse regression model to test each rare variant for association with multiple traits and obtain the P value of the single-variant test. We then take the weighted combination of these P values as the test statistic. Extensive simulation studies show that our approach is more powerful than several other comparison methods in most cases and is robust to the inclusion of a high proportion of neutral variants and to different directions of effects of causal variants.

  9. Emergence of patterns in random processes

    NASA Astrophysics Data System (ADS)

    Newman, William I.; Turcotte, Donald L.; Malamud, Bruce D.

    2012-08-01

    Sixty years ago, it was observed that any independent and identically distributed (i.i.d.) random variable would produce a pattern of peak-to-peak sequences with, on average, three events per sequence. This outcome was employed to show that randomness could yield, as a null hypothesis for animal populations, an explanation for their apparent 3-year cycles. We show how we can explicitly obtain a universal distribution of the lengths of peak-to-peak sequences in time series and that this can be employed for long data sets as a test of their i.i.d. character. We illustrate the validity of our analysis utilizing the peak-to-peak statistics of a Gaussian white noise. We also consider the nearest-neighbor cluster statistics of point processes in time. If the time intervals are random, we show that cluster size statistics are identical to the peak-to-peak sequence statistics of time series. In order to study the influence of correlations in a time series, we determine the peak-to-peak sequence statistics for the Langevin equation of kinetic theory leading to Brownian motion. To test our methodology, we consider a variety of applications. Using a global catalog of earthquakes, we obtain the peak-to-peak statistics of earthquake magnitudes and the nearest neighbor interoccurrence time statistics. In both cases, we find good agreement with the i.i.d. theory. We also consider the interval statistics of the Old Faithful geyser in Yellowstone National Park. In this case, we find a significant deviation from the i.i.d. theory which we attribute to antipersistence. We consider the interval statistics using the AL index of geomagnetic substorms. We again find a significant deviation from i.i.d. behavior that we attribute to mild persistence. Finally, we examine the behavior of Standard and Poor's 500 stock index's daily returns from 1928-2011 and show that, while it is close to being i.i.d., there is, again, significant persistence. We expect that there will be many other applications of our methodology both to interoccurrence statistics and to time series.
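
    The three-events-per-sequence result is easy to verify by simulation: for an i.i.d. continuous series, an interior point is a local maximum with probability 1/3, so the mean spacing between successive peaks is 3. The short sketch below checks this for Gaussian white noise and tabulates the distribution of peak-to-peak sequence lengths.

```python
import numpy as np

rng = np.random.default_rng(9)
x = rng.normal(size=1_000_000)          # i.i.d. Gaussian white noise

# Interior local maxima: x[i-1] < x[i] > x[i+1] (ties have probability zero).
is_peak = (x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])
peak_idx = np.flatnonzero(is_peak) + 1

# Peak-to-peak sequence lengths and their empirical distribution.
lengths = np.diff(peak_idx)
values, counts = np.unique(lengths, return_counts=True)
print(f"mean events per peak-to-peak sequence: {lengths.mean():.3f} (theory: 3)")
print("P(length = k):", dict(zip(values[:5], np.round(counts[:5] / len(lengths), 3))))
```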

  10. Determination of the criterion-related validity of hip joint angle test for estimating hamstring flexibility using a contemporary statistical approach.

    PubMed

    Sainz de Baranda, Pilar; Rodríguez-Iniesta, María; Ayala, Francisco; Santonja, Fernando; Cejudo, Antonio

    2014-07-01

    To examine the criterion-related validity of the horizontal hip joint angle (H-HJA) test and vertical hip joint angle (V-HJA) test for estimating hamstring flexibility measured through the passive straight-leg raise (PSLR) test using contemporary statistical measures. Validity study. Controlled laboratory environment. One hundred thirty-eight professional trampoline gymnasts (61 women and 77 men). Hamstring flexibility. Each participant performed 2 trials of H-HJA, V-HJA, and PSLR tests in a randomized order. The criterion-related validity of H-HJA and V-HJA tests was measured through the estimation equation, typical error of the estimate (TEEST), validity correlation (β), and their respective confidence limits. The findings from this study suggest that although H-HJA and V-HJA tests showed moderate to high validity scores for estimating hamstring flexibility (standardized TEEST = 0.63; β = 0.80), the TEEST statistic reported for both tests was not narrow enough for clinical purposes (H-HJA = 10.3 degrees; V-HJA = 9.5 degrees). Subsequently, the predicted likely thresholds for the true values that were generated were too wide (H-HJA = predicted value ± 13.2 degrees; V-HJA = predicted value ± 12.2 degrees). The results suggest that although the HJA test showed moderate to high validity scores for estimating hamstring flexibility, the prediction intervals between the HJA and PSLR tests are not strong enough to suggest that clinicians and sport medicine practitioners should use the HJA and PSLR tests interchangeably as gold standard measurement tools to evaluate and detect short hamstring muscle flexibility.

  11. Registered nurses' medication management of the elderly in aged care facilities.

    PubMed

    Lim, L M; Chiu, L H; Dohrmann, J; Tan, K-L

    2010-03-01

    Data on adverse drug reactions (ADRs) showed a rising trend in the elderly over 65 years using multiple medications. To identify registered nurses' (RNs) knowledge of medication management and ADRs in the elderly in aged care facilities; evaluate an education programme to increase pharmacology knowledge and prevent ADRs in the elderly; and develop a learning programme with a view to extending provision, if successful. This exploratory study used a non-randomized, one-group, pre- and post-test quasi-experimental design without comparators. It comprised a 23-item knowledge-based test questionnaire, a one-hour teaching session and a self-directed learning package. The volunteer sample was RNs from residential aged care facilities involved in medication management. Participants sat a pre-test immediately before the education and a post-test 4 weeks later (same questionnaire); participants' perceptions were also obtained. Pre-test sample n = 58, post-test n = 40, an attrition rate of 31%. Using Microsoft Excel 2000, descriptive statistical analysis of overall pre- and post-test incorrect responses showed: pre-test proportion of incorrect responses = 0.40; post-test proportion of incorrect responses = 0.27; Z-test comparing pre- and post-test scores of incorrect responses = 6.55 and one-sided P-value = 2.8E-11 (P < 0.001). The pre-test showed knowledge deficits in medication management and ADRs in the elderly; the post-test showed a statistically significant improvement in RNs' knowledge. It highlighted a need for continuing professional education. Further studies are required on a larger sample of RNs in other aged care facilities, and on the clinical impact of education by investigating nursing practice and elderly residents' outcomes.

  12. Meta-analysis of gene-level associations for rare variants based on single-variant statistics.

    PubMed

    Hu, Yi-Juan; Berndt, Sonja I; Gustafsson, Stefan; Ganna, Andrea; Hirschhorn, Joel; North, Kari E; Ingelsson, Erik; Lin, Dan-Yu

    2013-08-08

    Meta-analysis of genome-wide association studies (GWASs) has led to the discoveries of many common variants associated with complex human diseases. There is a growing recognition that identifying "causal" rare variants also requires large-scale meta-analysis. The fact that association tests with rare variants are performed at the gene level rather than at the variant level poses unprecedented challenges in the meta-analysis. First, different studies may adopt different gene-level tests, so the results are not compatible. Second, gene-level tests require multivariate statistics (i.e., components of the test statistic and their covariance matrix), which are difficult to obtain. To overcome these challenges, we propose to perform gene-level tests for rare variants by combining the results of single-variant analysis (i.e., p values of association tests and effect estimates) from participating studies. This simple strategy is possible because of an insight that multivariate statistics can be recovered from single-variant statistics, together with the correlation matrix of the single-variant test statistics, which can be estimated from one of the participating studies or from a publicly available database. We show both theoretically and numerically that the proposed meta-analysis approach provides accurate control of the type I error and is as powerful as joint analysis of individual participant data. This approach accommodates any disease phenotype and any study design and produces all commonly used gene-level tests. An application to the GWAS summary results of the Genetic Investigation of ANthropometric Traits (GIANT) consortium reveals rare and low-frequency variants associated with human height. The relevant software is freely available. Copyright © 2013 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  13. Comparison of hypertabastic survival model with other unimodal hazard rate functions using a goodness-of-fit test.

    PubMed

    Tahir, M Ramzan; Tran, Quang X; Nikulin, Mikhail S

    2017-05-30

    We studied the problem of testing a hypothesized distribution in survival regression models when the data are right censored and survival times are influenced by covariates. A modified chi-squared type test, known as the Nikulin-Rao-Robson statistic, is applied for the comparison of accelerated failure time models. This statistic is used to test the goodness-of-fit of the hypertabastic survival model and four other unimodal hazard rate functions. The results of the simulation study showed that the hypertabastic distribution can be used as an alternative to the log-logistic and log-normal distributions. In statistical modeling, because of its flexible shape of hazard functions, this distribution can also be used as a competitor of the Birnbaum-Saunders and inverse Gaussian distributions. The results for the real data application are shown. Copyright © 2017 John Wiley & Sons, Ltd.

  14. A statistical investigation of z test and ROC curve on seismo-ionospheric anomalies in TEC associated earthquakes in Taiwan during 1999-2014

    NASA Astrophysics Data System (ADS)

    Shih, A. L.; Liu, J. Y. G.

    2015-12-01

    A median-based method and a z test are employed to find characteristics of seismo-ionospheric precursors (SIPs) of the total electron content (TEC) in the global ionosphere map (GIM) associated with 129 M≥5.5 earthquakes in Taiwan during 1999-2014. Results show that both negative and positive anomalies in the GIM TEC with statistical significance under the z test appear a few days before the earthquakes. The receiver operating characteristic (ROC) curve is further applied to see whether the SIPs exist in Taiwan.
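
    The ROC step can be sketched on invented daily data: each day carries an anomaly score and a label saying whether an earthquake follows within a chosen lead window (both hypothetical here), and the curve summarizes how well the score separates the two classes across all alarm thresholds.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(10)

# Toy daily data: a TEC anomaly score and a label for whether an M >= 5.5
# earthquake occurs within the next few days (values are hypothetical).
n_days = 2000
quake_follows = rng.random(n_days) < 0.05
anomaly_score = rng.normal(0, 1, n_days) + 0.8 * quake_follows  # weak precursor signal

auc = roc_auc_score(quake_follows, anomaly_score)
fpr, tpr, thresholds = roc_curve(quake_follows, anomaly_score)

# One simple operating point: the threshold maximising Youden's J = TPR - FPR.
j_best = np.argmax(tpr - fpr)
print(f"AUC = {auc:.2f}; threshold = {thresholds[j_best]:.2f} "
      f"(TPR = {tpr[j_best]:.2f}, FPR = {fpr[j_best]:.2f})")
```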

  15. Rank-based testing of equal survivorship based on cross-sectional survival data with or without prospective follow-up.

    PubMed

    Chan, Kwun Chuen Gary; Qin, Jing

    2015-10-01

    Existing linear rank statistics cannot be applied to cross-sectional survival data without follow-up, since all subjects are essentially censored. However, partial survival information is available from backward recurrence times, which are frequently collected in health surveys without prospective follow-up. Under length-biased sampling, a class of linear rank statistics is proposed based only on backward recurrence times without any prospective follow-up. When follow-up data are available, the proposed rank statistic and a conventional rank statistic that utilizes follow-up information from the same sample are shown to be asymptotically independent. We discuss four ways to combine these two statistics when follow-up is present. Simulations show that all combined statistics have substantially improved power compared with conventional rank statistics, and a Mantel-Haenszel test performed the best among the proposed statistics. The method is applied to a cross-sectional health survey without follow-up and a study of Alzheimer's disease with prospective follow-up. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Angers, Crystal Plume; Bottema, Ryan; Buckley, Les

    Purpose: Treatment unit uptime statistics are typically used to monitor radiation equipment performance. The Ottawa Hospital Cancer Centre has introduced the use of Quality Control (QC) test success as a quality indicator for equipment performance and overall health of the equipment QC program. Methods: Implemented in 2012, QATrack+ is used to record and monitor over 1100 routine machine QC tests each month for 20 treatment and imaging units (http://qatrackplus.com/). Using an SQL (structured query language) script, automated queries of the QATrack+ database are used to generate program metrics such as the number of QC tests executed and the percentage of tests passing, at tolerance or at action. These metrics are compared against machine uptime statistics already reported within the program. Results: Program metrics for 2015 show good correlation between the pass rate of QC tests and uptime for a given machine. For the nine conventional linacs, the QC test success rate was consistently greater than 97%. The corresponding uptimes for these units are better than 98%. Machines that consistently show higher failure or tolerance rates in the QC tests have lower uptimes. This points to either poor machine performance requiring corrective action or to problems with the QC program. Conclusions: QATrack+ significantly improves the organization of QC data but can also aid in overall equipment management. Complementing machine uptime statistics with QC test metrics provides a more complete picture of overall machine performance and can be used to identify areas of improvement in the machine service and QC programs.

  17. Online incidental statistical learning of audiovisual word sequences in adults: a registered report

    PubMed Central

    Duta, Mihaela; Thompson, Paul

    2018-01-01

    Statistical learning has been proposed as a key mechanism in language learning. Our main goal was to examine whether adults are capable of simultaneously extracting statistical dependencies in a task where stimuli include a range of structures amenable to statistical learning within a single paradigm. We devised an online statistical learning task using real word auditory–picture sequences that vary in two dimensions: (i) predictability and (ii) adjacency of dependent elements. This task was followed by an offline recall task to probe learning of each sequence type. We registered three hypotheses with specific predictions. First, adults would extract regular patterns from continuous stream (effect of grammaticality). Second, within grammatical conditions, they would show differential speeding up for each condition as a factor of statistical complexity of the condition and exposure. Third, our novel approach to measure online statistical learning would be reliable in showing individual differences in statistical learning ability. Further, we explored the relation between statistical learning and a measure of verbal short-term memory (STM). Forty-two participants were tested and retested after an interval of at least 3 days on our novel statistical learning task. We analysed the reaction time data using a novel regression discontinuity approach. Consistent with prediction, participants showed a grammaticality effect, agreeing with the predicted order of difficulty for learning different statistical structures. Furthermore, a learning index from the task showed acceptable test–retest reliability (r = 0.67). However, STM did not correlate with statistical learning. We discuss the findings noting the benefits of online measures in tracking the learning process. PMID:29515876

  18. Statistical analysis of time transfer data from Timation 2. [US Naval Observatory and Australia

    NASA Technical Reports Server (NTRS)

    Luck, J. M.; Morgan, P.

    1974-01-01

    Between July 1973 and January 1974, three time transfer experiments using the Timation 2 satellite were conducted to measure time differences between the U.S. Naval Observatory and Australia. Statistical tests showed that the results are unaffected by the satellite's position with respect to the sunrise/sunset line or by its closest approach azimuth at the Australian station. Further tests revealed that forward predictions of time scale differences, based on the measurements, can be made with high confidence.

  19. KNOWLEDGE OF PUERPERAL MOTHERS ABOUT THE GUTHRIE TEST.

    PubMed

    Arduini, Giovanna Abadia Oliveira; Balarin, Marly Aparecida Spadotto; Silva-Grecco, Roseane Lopes da; Marqui, Alessandra Bernadete Trovó de

    2017-01-01

    This study aimed to assess the knowledge of puerperal mothers about the Guthrie test. A total of 75 mothers who sought primary care between October 2014 and February 2015 were investigated. The form was applied by the main researcher, and the data were analyzed using descriptive statistics in Microsoft Office Excel and the Statistical Package for the Social Sciences (SPSS); association tests were applied and statistical power was calculated. Among the 75 mothers, 47 (62.7%) would have liked to receive more information about newborn screening, especially regarding the correct sample collection period, followed by the screened morbidities. Most participants (n=55; 85.9%) took their children to be tested between the third and the seventh day after birth, as recommended by the Brazilian Health Ministry. Fifty-four women (72%) were unable to name the morbidities screened by the test in Minas Gerais, and they were also unaware that most have a genetic etiology. The health professional who informed the mother about the Guthrie test was mainly the physician; this information was given prenatally in 57% of the cases and at the time of discharge from the hospital in 43%. The association test showed that mothers with higher education have more knowledge about the purpose and importance of the Guthrie test. The statistical power was 83.5%. Maternal knowledge about the Guthrie test is superficial and may reflect the health team's usual practice.

  20. Effect of repeated cycles of chemical disinfection on the roughness and hardness of hard reline acrylic resins.

    PubMed

    Pinto, Luciana de Rezende; Acosta, Emílio José T Rodríguez; Távora, Flora Freitas Fernandes; da Silva, Paulo Maurício Batista; Porto, Vinícius Carvalho

    2010-06-01

    The aim of this study was to assess the effect of repeated cycles of five chemical disinfectant solutions on the roughness and hardness of three hard chairside reliners. A total of 180 circular specimens (30 mm x 6 mm) were fabricated using three hard chairside reliners (Jet, n = 60; Kooliner, n = 60; Tokuyama Rebase II Fast, n = 60), which were immersed in deionised water (control) and five disinfectant solutions (1%, 2%, 5.25% sodium hypochlorite; 2% glutaraldehyde; 4% chlorhexidine gluconate). They were tested for Knoop hardness (KHN) and surface roughness (microm) before and after 30 simulated disinfecting cycles. Data were analysed in a 6 x 2 factorial scheme by two-way analysis of variance (ANOVA), followed by Tukey's test. For Jet (from 18.74 to 13.86 KHN), Kooliner (from 14.09 to 8.72 KHN) and Tokuyama (from 12.57 to 8.28 KHN), a significant decrease in hardness was observed irrespective of the solution used. For Jet (from 0.09 to 0.11 microm) there was a statistically significant increase in roughness, Kooliner (from 0.36 to 0.26 microm) presented a statistically significant decrease in roughness, and Tokuyama (from 0.15 to 0.11 microm) presented no statistically significant difference after 30 days. This study showed that all disinfectant solutions promoted a statistically significant decrease in hardness, whereas for roughness the materials tested showed statistically significant changes, except for Tokuyama. Although statistically significant values were registered, these results could not be considered clinically significant.

  1. Retention of veneered stainless steel crowns on replicated typodont primary incisors: an in vitro study.

    PubMed

    Guelmann, Marcio; Gehring, Daren F; Turner, Clara

    2003-01-01

    The purpose of this in vitro study was to determine the effect of crimping and cementation on retention of veneered stainless steel crowns. One hundred twenty crowns, 90 from 3 commercially available brands of veneered stainless steel crowns (Dura Crown, Kinder Krown, and NuSmile Primary Crown) and 30 (plain) Unitek stainless steel crowns were assessed for retention. An orthodontic wire was soldered perpendicular to the incisal edge of the crowns; the crowns were fitted to acrylic replicas of ideal crown preparations and were divided equally into 3 test groups: group 1--crowns were crimped only (no cement used); group 2--crowns were cemented only; and group 3--crowns were crimped and cemented to the acrylic replicas. An Instron machine recorded the amount of force necessary to dislodge the crowns and the results were statistically analyzed using 2-way ANOVA and the Tukey honestly significant difference (HSD) test. Group 3 was statistically more retentive than groups 1 and 2. Group 2 was statistically more retentive than group 1 (P < .001). In group 1, Unitek crowns were statistically more retentive than the veneered crowns (P < .05). In group 2, NuSmile crowns showed statistically lower retention values than all other crowns (P < .05). In group 3, Kinder Krown crowns showed statistically better retention rates than all other brands (P < .05). Significantly higher retention values were obtained for all brands tested when crimping and cement were combined. The crowns with veneer facings were significantly more retentive than the nonveneered ones when cement and crimping were combined.

  2. Fisher's method of combining dependent statistics using generalizations of the gamma distribution with applications to genetic pleiotropic associations.

    PubMed

    Li, Qizhai; Hu, Jiyuan; Ding, Juan; Zheng, Gang

    2014-04-01

    A classical approach to combining independent test statistics is Fisher's combination of $p$-values, which follows the $\chi^2$ distribution. When the test statistics are dependent, the gamma distribution (GD) is commonly used for the Fisher's combination test (FCT). We propose to use two generalizations of the GD: the generalized and the exponentiated GDs. We study some properties of mis-using the GD for the FCT to combine dependent statistics when one of the two proposed distributions is true. Our results show that both generalizations have better control of type I error rates than the GD, which tends to have inflated type I error rates at more extreme tails. In practice, common model selection criteria (e.g. Akaike information criterion/Bayesian information criterion) can be used to help select a better distribution to use for the FCT. A simple strategy for applying the two generalizations of the GD in genome-wide association studies is discussed. Applications of the results to genetic pleiotropic associations are described, where multiple traits are tested for association with a single marker.
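
    For context, the baseline GD calibration that the paper generalizes can be sketched by moment-matching a gamma distribution to the null of Fisher's statistic under dependence (simulated here with equicorrelated z-scores; the observed value T_obs is hypothetical). The generalized and exponentiated GD alternatives proposed in the paper are not reproduced here.

```python
import numpy as np
from scipy.stats import gamma, norm

rng = np.random.default_rng(11)

# Simulate k dependent test statistics (equicorrelated z-scores) under the null
# to obtain the null distribution of Fisher's T = -2 * sum(log p).
k, rho, n_sim = 4, 0.5, 20000
cov = rho * np.ones((k, k)) + (1 - rho) * np.eye(k)
z = rng.multivariate_normal(np.zeros(k), cov, size=n_sim)
p = 2 * norm.sf(np.abs(z))                     # two-sided p-values
T_null = -2 * np.log(p).sum(axis=1)

# Moment-match a gamma distribution to the null of T (the usual GD calibration).
mean_T, var_T = T_null.mean(), T_null.var()
scale = var_T / mean_T                         # gamma scale
shape = mean_T / scale                         # gamma shape

# Calibrated p-value for an observed combination statistic.
T_obs = 22.0                                   # hypothetical observed value
p_gd = gamma.sf(T_obs, a=shape, scale=scale)
p_naive = gamma.sf(T_obs, a=k, scale=2.0)      # independence case (chi2 with 2k df)
print(f"GD-calibrated p = {p_gd:.4f} vs naive chi-squared p = {p_naive:.4f}")
```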

  3. A Ratio Test of Interrater Agreement with High Specificity

    ERIC Educational Resources Information Center

    Cousineau, Denis; Laurencelle, Louis

    2015-01-01

    Existing tests of interrater agreements have high statistical power; however, they lack specificity. If the ratings of the two raters do not show agreement but are not random, the current tests, some of which are based on Cohen's kappa, will often reject the null hypothesis, leading to the wrong conclusion that agreement is present. A new test of…

  4. How White Teachers Experience and Think about Race in Professional Development

    ERIC Educational Resources Information Center

    Marcy, Renee

    2010-01-01

    The public educational system in the United States fails to proficiently educate a majority of African American, Latino/a, and students from low-income backgrounds. Test score statistics show an average scaled score gap of twenty-six points between African American and White students (National Center for Education Statistics, 2007). The term…

  5. United States Middle School Students' Perspectives on Learning Statistics

    ERIC Educational Resources Information Center

    Dwyer, Jerry; Moorhouse, Kim; Colwell, Malinda J.

    2009-01-01

    This paper describes an intervention at the 8th grade level where university mathematics researchers presented a series of lessons on introductory concepts in probability and statistics. Pre- and post-tests, and interviews were conducted to examine whether or not students at this grade level can understand these concepts. Students showed a…

  6. Comparing the Lifetimes of Two Brands of Batteries

    ERIC Educational Resources Information Center

    Dunn, Peter K.

    2013-01-01

In this paper, we report a case study that illustrates the importance of interpreting the results from statistical tests and shows the difference between practical importance and statistical significance. This case study presents three sets of data concerning the performance of two brands of batteries. The data are easy to describe and…

  7. 75 FR 79035 - Nationally Recognized Testing Laboratories; Supplier's Declaration of Conformity

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-17

    ... statistic, however, covers only a narrow subset of ICT equipment, and excludes laptop computers and computer... between 2003 and March of 2009 shows a total of 60 product recalls, including laptop computers, scanners..., statistical or similar data and studies, of a credible nature, supporting any claims made by commenters.'' (73...

  8. Generalized Hurst exponent estimates differentiate EEG signals of healthy and epileptic patients

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim

    2018-01-01

The aim of our current study is to check whether multifractal patterns of the electroencephalographic (EEG) signals of normal and epileptic patients are statistically similar or different. In this regard, the generalized Hurst exponent (GHE) method is used for robust estimation of the multifractals in each type of EEG signal, and three powerful statistical tests are performed to check for differences between the GHEs estimated from healthy control subjects and from epileptic patients. The obtained results show that multifractals exist in both types of EEG signals. In particular, the degree of fractality is more pronounced in short variations of normal EEG signals than in short variations of EEG signals with seizure-free intervals. In contrast, it is more pronounced in long variations of EEG signals with seizure-free intervals than in normal EEG signals. Importantly, both parametric and nonparametric statistical tests show strong evidence that the estimated GHEs of normal EEG signals are statistically and significantly different from those with seizure-free intervals. Therefore, GHEs can be efficiently used to distinguish between healthy subjects and patients suffering from epilepsy.
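
    The following is a simplified, illustrative sketch (not the study's code) of a structure-function estimate of the generalized Hurst exponent, followed by the parametric and nonparametric two-sample comparisons mentioned above; the "EEG" series and group sizes are simulated stand-ins.

    ```python
    import numpy as np
    from scipy import stats

    def generalized_hurst(x, q=2, max_lag=20):
        """Estimate H(q) from S_q(tau) = <|x(t+tau) - x(t)|^q> ~ tau^(q*H(q))."""
        x = np.asarray(x, dtype=float)
        lags = np.arange(2, max_lag + 1)
        sq = [np.mean(np.abs(x[lag:] - x[:-lag]) ** q) for lag in lags]
        slope, _ = np.polyfit(np.log(lags), np.log(sq), 1)
        return slope / q

    rng = np.random.default_rng(2)
    # Stand-ins for EEG segments: cumulative sums of noise with different tails
    healthy = [generalized_hurst(np.cumsum(rng.normal(size=4096))) for _ in range(30)]
    epileptic = [generalized_hurst(np.cumsum(rng.standard_t(3, size=4096))) for _ in range(30)]

    print(stats.ttest_ind(healthy, epileptic, equal_var=False))  # parametric
    print(stats.mannwhitneyu(healthy, epileptic))                # nonparametric
    ```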

  9. The “χ” of the Matter: Testing the Relationship between Paleoenvironments and Three Theropod Clades

    PubMed Central

    Sales, Marcos A. F.; Lacerda, Marcel B.; Horn, Bruno L. D.; de Oliveira, Isabel A. P.; Schultz, Cesar L.

    2016-01-01

The view of spinosaurs as dinosaurs of semi-aquatic habits, strongly associated with marginal and coastal habitats, is deeply rooted in both scientific and popular knowledge, but it has never been statistically tested. Inspired by a previous analysis of other dinosaur clades and major paleoenvironmental categories, here we present our own statistical evaluation of the association between coastal and terrestrial paleoenvironments and spinosaurids, along with two other theropod taxa: abelisaurids and carcharodontosaurids. We also included a taphonomic perspective and classified the occurrences in categories related to potential biases in order to better address our interpretations. Our main results can be summarized as follows: 1) the taxon with the largest amount of statistical evidence showing it positively associated with coastal paleoenvironments is Spinosauridae; 2) abelisaurids and carcharodontosaurids had more statistical evidence showing them positively associated with terrestrial paleoenvironments; 3) spinosaurids likely also occupied inland areas to an extent at least comparable to carcharodontosaurids; 4) abelisaurids may have been more common than the other two taxa in inland habitats. PMID:26829315
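
    The kind of test implied by the title can be sketched as a chi-square test of independence between clade and paleoenvironmental category; the occurrence counts below are made up for illustration and are not the paper's data.

    ```python
    import numpy as np
    from scipy.stats import chi2_contingency

    # rows: Spinosauridae, Abelisauridae, Carcharodontosauridae
    # cols: coastal occurrences, terrestrial occurrences (hypothetical counts)
    table = np.array([[30, 10],
                      [ 8, 25],
                      [ 9, 22]])
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
    ```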

  10. Descriptive Statistics for Modern Test Score Distributions: Skewness, Kurtosis, Discreteness, and Ceiling Effects.

    PubMed

    Ho, Andrew D; Yu, Carol C

    2015-06-01

Many statistical analyses benefit from the assumption that unconditional or conditional distributions are continuous and normal. More than 50 years ago in this journal, Lord and Cook chronicled departures from normality in educational tests, and Micceri similarly showed that the normality assumption is met rarely in educational and psychological practice. In this article, the authors extend these previous analyses to state-level educational test score distributions that are an increasingly common target of high-stakes analysis and interpretation. Among 504 scale-score and raw-score distributions from state testing programs from recent years, nonnormal distributions are common and are often associated with particular state programs. The authors explain how scaling procedures from item response theory lead to nonnormal distributions as well as unusual patterns of discreteness. The authors recommend that distributional descriptive statistics be calculated routinely to inform model selection for large-scale test score data, and they illustrate consequences of nonnormality using sensitivity studies that compare baseline results to those from normalized score scales.
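
    A minimal sketch of the routine distributional description the authors recommend, applied to a simulated bounded, skewed score scale (the numbers are not from the study):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    scores = rng.beta(5, 2, size=10000) * 100            # skewed, bounded "scale scores"

    print("skewness:", stats.skew(scores))
    print("excess kurtosis:", stats.kurtosis(scores))    # Fisher definition: normal -> 0
    print("normality (D'Agostino K^2):", stats.normaltest(scores))
    ```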

  11. A Seven-Year Follow-Up of Intelligence Test Scores of Foster Grandparents

    ERIC Educational Resources Information Center

    Troll, Lillian E.; And Others

    1976-01-01

After seven years, a group (N=32) of originally nonemployed poverty-level older people (over 60) now employed as foster grandparents were retested with the WAIS. Three subtest scores showed stability and Digit Span showed a statistically significant drop. Neither age nor initial level of health nor WAIS scores was related to test-score changes over…

  12. Cognitive predictors of balance in Parkinson's disease.

    PubMed

    Fernandes, Ângela; Mendes, Andreia; Rocha, Nuno; Tavares, João Manuel R S

    2016-06-01

Postural instability is one of the most incapacitating symptoms of Parkinson's disease (PD) and appears to be related to cognitive deficits. This study aims to determine the cognitive factors that can predict deficits in static and dynamic balance in individuals with PD. A sociodemographic questionnaire characterized 52 individuals with PD for this work. The Trail Making Test, Rule Shift Cards Test, and Digit Span Test assessed the executive functions. The static balance was assessed using a plantar pressure platform, and dynamic balance was based on the Timed Up and Go Test. The results were statistically analysed using SPSS Statistics software through linear regression analysis. The results show that a statistically significant model based on cognitive outcomes was able to explain the variance of motor variables. Also, the explanatory value of the model tended to increase with the addition of individual and clinical variables, although the resulting model was not statistically significant. The model explained 25-29% of the variability of the Timed Up and Go Test, while for the anteroposterior displacement it was 23-34%, and for the mediolateral displacement it was 24-39%. From the findings, we conclude that cognitive performance, especially the executive functions, is a predictor of balance deficit in individuals with PD.

  13. Assessment of surface hardness of acrylic resins submitted to accelerated artificial aging.

    PubMed

    Tornavoi, D C; Agnelli, J A M; Lepri, C P; Mazzetto, M O; Botelho, A L; Soares, R G; Dos Reis, A C

    2012-06-01

The aim of this study was to assess the influence of accelerated artificial aging (AAA) on the surface hardness of acrylic resins. The following three commercial brands of acrylic resins were tested: Vipi Flash (autopolymerized resin), Vipi Wave (microwave heat-polymerized resin) and Vipi Cril (conventional heat-polymerized resin). To perform the tests, 21 test specimens (65x10x3 mm) were made, 7 for each resin. Three surface hardness readings were performed for each test specimen, before and after AAA, and the means were submitted to the following tests: Kolmogorov-Smirnov (P>0.05), Levene statistic, two-way ANOVA and Tukey post hoc (P<0.05), with SPSS Statistical Software 17.0. The analysis of the factors showed significant differences in the hardness values (P<0.05). Before aging, the autopolymerized acrylic resin Vipi Flash showed lower hardness values when compared with the heat-polymerized resin Vipi Cril (P=0.001). After aging, the 3 materials showed similar performance. Vipi Cril was the only one affected by AAA and showed lower hardness values after this procedure (P=0.003). It may be concluded that accelerated artificial aging influenced the surface hardness of the heat-polymerized acrylic resin Vipi Cril.

  14. Approximate sample size formulas for the two-sample trimmed mean test with unequal variances.

    PubMed

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2007-05-01

Yuen's two-sample trimmed mean test statistic is one of the most robust methods to apply when variances are heterogeneous. The present study develops formulas for the sample size required for the test. The formulas are applicable for the cases of unequal variances, non-normality and unequal sample sizes. Given the specified alpha and power (1-beta), the minimum sample size needed by the proposed formulas under various conditions is less than that given by the conventional formulas. Moreover, given a specified sample size calculated by the proposed formulas, simulation results show that Yuen's test can achieve statistical power which is generally superior to that of the approximate t test. A numerical example is provided.
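
    For context, Yuen's trimmed-mean test itself is available in SciPy; the sketch below runs it on simulated skewed, heteroscedastic samples. It illustrates the test, not the paper's sample-size formulas, and the trim level and data are assumptions.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    a = rng.lognormal(mean=0.0, sigma=0.6, size=40)
    b = rng.lognormal(mean=0.3, sigma=1.2, size=25)     # unequal variance, unequal n

    # trim=0.2 trims 20% from each tail; equal_var=False gives the Welch-type
    # (Yuen) denominator based on winsorized variances.
    res = stats.ttest_ind(a, b, trim=0.2, equal_var=False)
    print(res)
    ```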

  15. Evaluation of bond strength of various epoxy resin based sealers in oval shaped root canals.

    PubMed

    Cakici, Fatih; Cakici, Elif Bahar; Ceyhanli, Kadir Tolga; Celik, Ersan; Kucukekenci, Funda Fundaoglu; Gunseren, Arif Onur

    2016-09-30

The aim of this study was to evaluate the bond strength of AH Plus, Acroseal, and Adseal to the root canal dentin. A total of 36 single-rooted, mandibular premolar teeth were used. Root canal shaping procedures were performed with ProTaper rotary instruments (Dentsply Maillefer) up to size F4. The prepared samples were then randomly assigned to 3 groups (n = 12). For each group, an ultrasonic tip (size 15, 0.02 taper) coated with an epoxy resin based sealer was placed 2 mm short of the working length. The sealer was then activated for 10 s. A push-out test was used to measure the bond strength between the root canal dentine and the sealer. The Kruskal-Wallis test was used to evaluate the push-out bond strength of the epoxy based sealers (P = 0.05). The failure mode data were statistically analyzed using Pearson's chi-square test (P = 0.05). The Kruskal-Wallis test indicated that there was no statistically significant difference among the groups in push-out bond strength at 3 mm (P = 0.123) or 6 mm (P = 0.057), but there was a statistically significant difference at 9 mm (P = 0.032). Pearson's chi-square test showed statistically significant differences in the failure types among the groups. Various epoxy resin based sealers activated ultrasonically showed similar bond strength in oval shaped root canals. Apical sections for all groups had higher push-out bond strength values than middle and coronal sections.

  16. Is My Network Module Preserved and Reproducible?

    PubMed Central

    Langfelder, Peter; Luo, Rui; Oldham, Michael C.; Horvath, Steve

    2011-01-01

    In many applications, one is interested in determining which of the properties of a network module change across conditions. For example, to validate the existence of a module, it is desirable to show that it is reproducible (or preserved) in an independent test network. Here we study several types of network preservation statistics that do not require a module assignment in the test network. We distinguish network preservation statistics by the type of the underlying network. Some preservation statistics are defined for a general network (defined by an adjacency matrix) while others are only defined for a correlation network (constructed on the basis of pairwise correlations between numeric variables). Our applications show that the correlation structure facilitates the definition of particularly powerful module preservation statistics. We illustrate that evaluating module preservation is in general different from evaluating cluster preservation. We find that it is advantageous to aggregate multiple preservation statistics into summary preservation statistics. We illustrate the use of these methods in six gene co-expression network applications including 1) preservation of cholesterol biosynthesis pathway in mouse tissues, 2) comparison of human and chimpanzee brain networks, 3) preservation of selected KEGG pathways between human and chimpanzee brain networks, 4) sex differences in human cortical networks, 5) sex differences in mouse liver networks. While we find no evidence for sex specific modules in human cortical networks, we find that several human cortical modules are less preserved in chimpanzees. In particular, apoptosis genes are differentially co-expressed between humans and chimpanzees. Our simulation studies and applications show that module preservation statistics are useful for studying differences between the modular structure of networks. Data, R software and accompanying tutorials can be downloaded from the following webpage: http://www.genetics.ucla.edu/labs/horvath/CoexpressionNetwork/ModulePreservation. PMID:21283776

  17. Ridge preservation using a composite bone graft and a bioabsorbable membrane with and without primary wound closure: a comparative clinical trial.

    PubMed

    Engler-Hamm, Daniel; Cheung, Wai S; Yen, Alec; Stark, Paul C; Griffin, Terrence

    2011-03-01

    The aim of this single-masked, randomized controlled clinical trial is to compare hard and soft tissue changes after ridge preservation performed with (control, RPc) and without (test, RPe) primary soft tissue closure in a split-mouth design. Eleven patients completed this 6-month trial. Extraction and ridge preservation were performed using a composite bone graft of inorganic bovine-derived hydroxyapatite matrix and cell binding peptide P-15 (ABM/P-15), demineralized freeze-dried bone allograft, and a copolymer bioabsorbable membrane. Primary wound closure was achieved on the control sites (RPc), whereas test sites (RPe) left the membrane exposed. Pocket probing depth on adjacent teeth, repositioning of the mucogingival junction, bone width, bone fill, and postoperative discomfort were assessed. Bone cores were obtained for histological examination. Intragroup analyses for both groups demonstrated statistically significant mean reductions in probing depth (RPc: 0.42 mm, P = 0.012; RPe: 0.25 mm, P = 0.012) and bone width (RPc: 3 mm, P = 0.002; RPe: 3.42 mm, P <0.001). However, intergroup analysis did not find these parameters to be statistically different at 6 months. The test group showed statistically significant mean change in bone fill (7.21 mm; P <0.001). Compared to the control group, the test group showed statistically significant lower mean postoperative discomfort (RPc 4 versus RPe 2; P = 0.002). Histomorphometric analysis showed presence of 0% to 40% of ABM/P-15 and 5% to 20% of new bone formation in both groups. Comparison of clinical variables between the two groups at 6 months revealed that the mucogingival junction was statistically significantly more coronally displaced in the control group than in the test group, with a mean of 3.83 mm versus 1.21 mm (P = 0.002). Ridge preservation without flap advancement preserves more keratinized tissue and has less postoperative discomfort and swelling. Although ridge preservation is performed with either method, ≈27% to 30% of bone width is lost.

  18. Determination of ABO blood grouping and Rhesus factor from tooth material

    PubMed Central

    Kumar, Pooja Vijay; Vanishree, M; Anila, K; Hunasgi, Santosh; Suryadevra, Sri Sujan; Kardalkar, Swetha

    2016-01-01

Objective: The aim of the study was to determine blood groups and Rhesus factor from dentin and pulp using the absorption-elution (AE) technique at different time periods of 0, 3, 6, 9 and 12 months, respectively. Materials and Methods: A total of 150 cases, 30 patients each at 0, 3, 6, 9 and 12 months, were included in the study. The samples consisted of males and females with ages ranging from 13 to 60 years. Each patient's blood group was checked and considered the “control.” The dentin and pulp of extracted teeth were tested for the presence of ABO/Rh antigens at the respective time periods by the AE technique. Statistical Analysis: Data were analyzed as proportions. For comparison, the Chi-square test or Fisher's exact test was used for small samples. Results: Blood group antigens of ABO and Rh factor were detected in dentin and pulp up to 12 months. For both ABO and Rh factor, dentin and pulp showed 100% sensitivity for the samples tested at 0 months and showed a gradual decrease in sensitivity as the time period increased. The sensitivity of pulp was better than that of dentin for both blood grouping systems, and ABO blood group antigens were better detected than Rh antigens. Conclusion: In dentin and pulp, the antigens of ABO and Rh factor were detected up to 12 months but showed a progressive decrease in antigenicity as the time period increased. The results obtained from dentin and pulp for ABO and Rh factor grouping were similar, with no statistically significant difference. The sensitivity of ABO blood grouping was better than that of Rh factor blood grouping, and this difference was statistically significant. PMID:27721625

  19. Adhesive properties and adhesive joints strength of graphite/epoxy composites

    NASA Astrophysics Data System (ADS)

    Rudawska, Anna; Stančeková, Dana; Cubonova, Nadezda; Vitenko, Tetiana; Müller, Miroslav; Valášek, Petr

    2017-05-01

The article presents the results of experimental research on the adhesive joint strength of graphite/epoxy composites and on the surface free energy of the composite surfaces. Two types of graphite/epoxy composite of different thicknesses, used in aircraft structures, were tested. Single-lap adhesive joints of the epoxy composites were considered. Adhesive properties were described by surface free energy; the Owens-Wendt method was used to determine it. A two-component epoxy adhesive was used to prepare the adhesive joints. A Zwick/Roell 100 testing machine was used to determine the shear strength of the adhesive joints of the epoxy composites. The strength test results showed that the highest value was obtained for adhesive joints of the graphite/epoxy composite of smaller material thickness (0.48 mm). Statistical analysis showed statistically significant differences between the strength values at the 0.95 confidence level. The statistical analysis also showed that there are no statistically significant differences in the average values of surface free energy (0.95 confidence level). It was noted that in each of the results the dispersion component of surface free energy was much greater than the polar component.

  20. A close examination of double filtering with fold change and t test in microarray analysis

    PubMed Central

    2009-01-01

Background Many researchers use the double filtering procedure with fold change and t test to identify differentially expressed genes, in the hope that the double filtering will provide extra confidence in the results. Due to its simplicity, the double filtering procedure has been popular with applied researchers despite the development of more sophisticated methods. Results This paper, for the first time to our knowledge, provides theoretical insight into the drawback of the double filtering procedure. We show that fold change assumes all genes to have a common variance, while the t statistic assumes gene-specific variances. The two statistics are based on contradicting assumptions. Under the assumption that gene variances arise from a mixture of a common variance and gene-specific variances, we develop the theoretically most powerful likelihood ratio test statistic. We further demonstrate that the posterior inference based on a Bayesian mixture model and the widely used significance analysis of microarrays (SAM) statistic are better approximations to the likelihood ratio test than the double filtering procedure. Conclusion We demonstrate through hypothesis testing theory, simulation studies and real data examples, that well constructed shrinkage testing methods, which can be united under the mixture gene variance assumption, can considerably outperform the double filtering procedure. PMID:19995439
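
    A minimal sketch (not taken from the paper) of the double filtering procedure being critiqued: a gene is flagged only if it passes both a fold-change cutoff and a t-test cutoff. The expression matrix and thresholds are simulated and illustrative.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    n_genes, n_per_group = 1000, 6
    base = rng.normal(8, 1, size=(n_genes, 1))
    group1 = base + rng.normal(0, 0.5, size=(n_genes, n_per_group))
    group2 = base + rng.normal(0, 0.5, size=(n_genes, n_per_group))
    group2[:50] += 1.0                       # 50 genes truly shifted (log2 scale)

    log_fc = group2.mean(axis=1) - group1.mean(axis=1)       # log2 fold change
    t_stat, p_val = stats.ttest_ind(group2, group1, axis=1)

    selected = (np.abs(log_fc) > 1.0) & (p_val < 0.05)       # the double filter
    print("genes passing both filters:", selected.sum())
    ```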

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poyer, D.A.

In this report, tests of statistical significance of five sets of variables with household energy consumption (at the point of end-use) are described. Five models, in sequence, were empirically estimated and tested for statistical significance by using the Residential Energy Consumption Survey of the US Department of Energy, Energy Information Administration. Each model incorporated additional information, embodied in a set of variables not previously specified in the energy demand system. The variable sets were generally labeled as economic variables, weather variables, household-structure variables, end-use variables, and housing-type variables. The tests of statistical significance showed each of the variable sets to be highly significant in explaining the overall variance in energy consumption. The findings imply that the contemporaneous interaction of different types of variables, and not just one exclusive set of variables, determines the level of household energy consumption.

  2. Outlier Detection in High-Stakes Certification Testing.

    ERIC Educational Resources Information Center

    Meijer, Rob R.

    2002-01-01

    Used empirical data from a certification test to study methods from statistical process control that have been proposed to classify an item score pattern as fitting or misfitting the underlying item response theory model in computerized adaptive testing. Results for 1,392 examinees show that different types of misfit can be distinguished. (SLD)

  3. Lower incisor inclination regarding different reference planes.

    PubMed

    Zataráin, Brenda; Avila, Josué; Moyaho, Angeles; Carrasco, Rosendo; Velasco, Carmen

    2016-09-01

The purpose of this study was to assess the degree of lower incisor inclination with respect to different reference planes. It was an observational, analytical, longitudinal, prospective study conducted on 100 lateral cephalograms which were corrected according to the photograph in natural head position in order to draw the true vertical plane (TVP). The incisor mandibular plane angle (IMPA) was compensated to eliminate the variation due to mandibular plane growth type with the formula "FMA px. - 25 (FMA) + IMPA px. = compensated IMPA (IMPACOM)". As the data followed a normal distribution, determined by the Kolmogorov-Smirnov test, parametric tests were used for the statistical analysis: t-test, ANOVA and Pearson correlation test. Statistical analysis was performed using a statistical significance of p <0.05. There is correlation between TVP and the NB line (NB) (0.8614), Frankfort mandibular incisor angle (FMIA) (0.8894), IMPA (0.6351), APo line (APo) (0.609), IMPACOM (0.8895) and McHorris angle (MH) (0.7769). ANOVA showed statistically significant differences between the means for the 7 variables at the 95% confidence level, P=0.0001. The multiple range test showed no significant difference between the following pairs of means: APo-NB (0.88), IMPA-MH (0.36), IMPA-NB (0.65), FMIA-IMPACOM (0.01), FMIA-TVP (0.18), TVP-IMPACOM (0.17). There was correlation among all reference planes. There were statistically significant differences among the means of the planes measured, except for IMPACOM, FMIA and TVP. The IMPA differed significantly from the IMPACOM. The compensated IMPA and the FMIA did not differ significantly from the TVP. The true horizontal plane was mismatched with the Frankfort plane in 84% of the sample, with a range of 19°. The true vertical plane is adequate for measuring lower incisor inclination. Sociedad Argentina de Investigación Odontológica.

  4. Improving Non-Destructive Concrete Strength Tests Using Support Vector Machines

    PubMed Central

    Shih, Yi-Fan; Wang, Yu-Ren; Lin, Kuo-Liang; Chen, Chin-Wen

    2015-01-01

    Non-destructive testing (NDT) methods are important alternatives when destructive tests are not feasible to examine the in situ concrete properties without damaging the structure. The rebound hammer test and the ultrasonic pulse velocity test are two popular NDT methods to examine the properties of concrete. The rebound of the hammer depends on the hardness of the test specimen and ultrasonic pulse travelling speed is related to density, uniformity, and homogeneity of the specimen. Both of these two methods have been adopted to estimate the concrete compressive strength. Statistical analysis has been implemented to establish the relationship between hammer rebound values/ultrasonic pulse velocities and concrete compressive strength. However, the estimated results can be unreliable. As a result, this research proposes an Artificial Intelligence model using support vector machines (SVMs) for the estimation. Data from 95 cylinder concrete samples are collected to develop and validate the model. The results show that combined NDT methods (also known as SonReb method) yield better estimations than single NDT methods. The results also show that the SVMs model is more accurate than the statistical regression model. PMID:28793627
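
    A hedged sketch of the general approach (not the paper's trained model): a support vector regression mapping rebound number and ultrasonic pulse velocity to compressive strength, fitted on synthetic data.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(6)
    n = 95
    rebound = rng.uniform(20, 50, n)                  # rebound hammer number
    upv = rng.uniform(3500, 5000, n)                  # ultrasonic pulse velocity (m/s)
    strength = 0.8 * rebound + 0.01 * upv + rng.normal(0, 3, n)   # MPa, synthetic

    X = np.column_stack([rebound, upv])
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
    model.fit(X, strength)
    print("predicted strength:", model.predict([[35.0, 4200.0]]))
    ```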

  5. metaCCA: summary statistics-based multivariate meta-analysis of genome-wide association studies using canonical correlation analysis.

    PubMed

    Cichonska, Anna; Rousu, Juho; Marttinen, Pekka; Kangas, Antti J; Soininen, Pasi; Lehtimäki, Terho; Raitakari, Olli T; Järvelin, Marjo-Riitta; Salomaa, Veikko; Ala-Korpela, Mika; Ripatti, Samuli; Pirinen, Matti

    2016-07-01

A dominant approach to genetic association studies is to perform univariate tests between genotype-phenotype pairs. However, analyzing related traits together increases statistical power, and certain complex associations become detectable only when several variants are tested jointly. Currently, modest sample sizes of individual cohorts, and restricted availability of individual-level genotype-phenotype data across the cohorts limit conducting multivariate tests. We introduce metaCCA, a computational framework for summary statistics-based analysis of a single or multiple studies that allows multivariate representation of both genotype and phenotype. It extends the statistical technique of canonical correlation analysis to the setting where original individual-level records are not available, and employs a covariance shrinkage algorithm to achieve robustness. Multivariate meta-analysis of two Finnish studies of nuclear magnetic resonance metabolomics by metaCCA, using standard univariate output from the program SNPTEST, shows an excellent agreement with the pooled individual-level analysis of original data. Motivated by strong multivariate signals in the lipid genes tested, we envision that multivariate association testing using metaCCA has a great potential to provide novel insights from already published summary statistics from high-throughput phenotyping technologies. Code is available at https://github.com/aalto-ics-kepaco. Contact: anna.cichonska@helsinki.fi or matti.pirinen@helsinki.fi. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
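
    metaCCA itself operates on summary statistics; purely to illustrate the underlying canonical correlation idea, the hedged sketch below runs ordinary CCA on simulated individual-level genotype and phenotype matrices. Everything here is an assumption for illustration, not the package's interface.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(7)
    n = 500
    genotypes = rng.binomial(2, 0.3, size=(n, 5)).astype(float)     # 5 SNPs, 0/1/2 coding
    phenotypes = 0.2 * genotypes[:, [0]] + rng.normal(size=(n, 3))  # 3 correlated traits

    cca = CCA(n_components=1)
    cca.fit(genotypes, phenotypes)
    u, v = cca.transform(genotypes, phenotypes)
    print("first canonical correlation:", np.corrcoef(u[:, 0], v[:, 0])[0, 1])
    ```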

  6. metaCCA: summary statistics-based multivariate meta-analysis of genome-wide association studies using canonical correlation analysis

    PubMed Central

    Cichonska, Anna; Rousu, Juho; Marttinen, Pekka; Kangas, Antti J.; Soininen, Pasi; Lehtimäki, Terho; Raitakari, Olli T.; Järvelin, Marjo-Riitta; Salomaa, Veikko; Ala-Korpela, Mika; Ripatti, Samuli; Pirinen, Matti

    2016-01-01

    Motivation: A dominant approach to genetic association studies is to perform univariate tests between genotype-phenotype pairs. However, analyzing related traits together increases statistical power, and certain complex associations become detectable only when several variants are tested jointly. Currently, modest sample sizes of individual cohorts, and restricted availability of individual-level genotype-phenotype data across the cohorts limit conducting multivariate tests. Results: We introduce metaCCA, a computational framework for summary statistics-based analysis of a single or multiple studies that allows multivariate representation of both genotype and phenotype. It extends the statistical technique of canonical correlation analysis to the setting where original individual-level records are not available, and employs a covariance shrinkage algorithm to achieve robustness. Multivariate meta-analysis of two Finnish studies of nuclear magnetic resonance metabolomics by metaCCA, using standard univariate output from the program SNPTEST, shows an excellent agreement with the pooled individual-level analysis of original data. Motivated by strong multivariate signals in the lipid genes tested, we envision that multivariate association testing using metaCCA has a great potential to provide novel insights from already published summary statistics from high-throughput phenotyping technologies. Availability and implementation: Code is available at https://github.com/aalto-ics-kepaco Contacts: anna.cichonska@helsinki.fi or matti.pirinen@helsinki.fi Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153689

  7. Methods for flexible sample-size design in clinical trials: Likelihood, weighted, dual test, and promising zone approaches.

    PubMed

    Shih, Weichung Joe; Li, Gang; Wang, Yining

    2016-03-01

    Sample size plays a crucial role in clinical trials. Flexible sample-size designs, as part of the more general category of adaptive designs that utilize interim data, have been a popular topic in recent years. In this paper, we give a comparative review of four related methods for such a design. The likelihood method uses the likelihood ratio test with an adjusted critical value. The weighted method adjusts the test statistic with given weights rather than the critical value. The dual test method requires both the likelihood ratio statistic and the weighted statistic to be greater than the unadjusted critical value. The promising zone approach uses the likelihood ratio statistic with the unadjusted value and other constraints. All four methods preserve the type-I error rate. In this paper we explore their properties and compare their relationships and merits. We show that the sample size rules for the dual test are in conflict with the rules of the promising zone approach. We delineate what is necessary to specify in the study protocol to ensure the validity of the statistical procedure and what can be kept implicit in the protocol so that more flexibility can be attained for confirmatory phase III trials in meeting regulatory requirements. We also prove that under mild conditions, the likelihood ratio test still preserves the type-I error rate when the actual sample size is larger than the re-calculated one. Copyright © 2015 Elsevier Inc. All rights reserved.
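
    As a hedged sketch of the weighted approach described above (not the authors' code), the snippet combines two stage-wise Z statistics with prespecified weights via the inverse-normal combination rule; the weights and Z values are illustrative.

    ```python
    import numpy as np
    from scipy.stats import norm

    w1, w2 = np.sqrt(0.5), np.sqrt(0.5)      # prefixed weights, w1^2 + w2^2 = 1
    z1, z2 = 1.4, 2.1                        # stage-wise Z statistics (made up)

    z_weighted = w1 * z1 + w2 * z2           # N(0,1) under H0 regardless of re-sizing
    p_value = norm.sf(z_weighted)            # one-sided
    print(f"weighted Z = {z_weighted:.3f}, one-sided p = {p_value:.4f}")
    ```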

  8. Reproducible detection of disease-associated markers from gene expression data.

    PubMed

    Omae, Katsuhiro; Komori, Osamu; Eguchi, Shinto

    2016-08-18

Detection of disease-associated markers plays a crucial role in gene screening for biological studies. Two-sample test statistics, such as the t-statistic, are widely used to rank genes based on gene expression data. However, the resultant gene ranking is often not reproducible among different data sets. Such irreproducibility may be caused by disease heterogeneity. When we divided data into two subsets, we found that the signs of the two t-statistics were often reversed. Focusing on such instability, we proposed a sign-sum statistic that counts the signs of the t-statistics for all possible subsets. The proposed method excludes genes affected by heterogeneity, thereby improving the reproducibility of gene ranking. We compared the sign-sum statistic with the t-statistic by a theoretical evaluation of the upper confidence limit. Through simulations and applications to real data sets, we show that the sign-sum statistic exhibits superior performance. We derive the sign-sum statistic to obtain a robust gene ranking. The sign-sum statistic gives a more reproducible ranking than the t-statistic. Using simulated data sets, we show that the sign-sum statistic excludes hetero-type genes well. Also, for the real data sets, the sign-sum statistic performs well in terms of ranking reproducibility.
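
    A simplified, hedged sketch of the idea behind the sign-sum statistic: compute the two-sample t statistic on repeated subsets of the samples and sum the signs. The paper enumerates all possible subsets; random subsets are used here purely for illustration, and all data are simulated.

    ```python
    import numpy as np
    from scipy import stats

    def sign_sum(case, control, n_splits=200, seed=None):
        """Average sign of the t statistic over random case/control subsets."""
        rng = np.random.default_rng(seed)
        signs = 0.0
        for _ in range(n_splits):
            ci = rng.choice(len(case), size=len(case) // 2, replace=False)
            ki = rng.choice(len(control), size=len(control) // 2, replace=False)
            t, _ = stats.ttest_ind(case[ci], control[ki])
            signs += np.sign(t)
        return signs / n_splits

    rng = np.random.default_rng(8)
    stable_gene = (rng.normal(1.0, 1, 40), rng.normal(0.0, 1, 40))
    hetero_gene = (np.concatenate([rng.normal(2, 1, 20), rng.normal(-2, 1, 20)]),
                   rng.normal(0.0, 1, 40))
    print("stable gene sign-sum:", sign_sum(*stable_gene, seed=1))
    print("heterogeneous gene sign-sum:", sign_sum(*hetero_gene, seed=1))
    ```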

  9. Comparisons of false negative rates from a trend test alone and from a trend test jointly with a control-high groups pairwise test in the determination of the carcinogenicity of new drugs.

    PubMed

    Lin, Karl K; Rahman, Mohammad A

    2018-05-21

Interest has been expressed in using a joint test procedure that requires the results of both a trend test and a pairwise comparison test between the control and high groups to be statistically significant simultaneously, at the levels of significance recommended for the separate tests in the FDA 2001 draft guidance for industry document, in order for the drug effect on the development of an individual tumor type to be considered statistically significant. Results of our simulation studies show that using the above joint test procedure with the levels of significance recommended for the separate tests has a serious consequence in the final interpretation of the carcinogenicity potential of a new drug: a large inflation of the false negative rate through a large decrease in the false positive rate. The inflation can be as high as 204.5% of the false negative rate obtained when the trend test alone is required to be statistically significant. To correct the problem, new sets of levels of significance have also been developed for those who want to use the joint test in reviews of carcinogenicity studies.

  10. An accurate test for homogeneity of odds ratios based on Cochran's Q-statistic.

    PubMed

    Kulinskaya, Elena; Dollinger, Michael B

    2015-06-10

    A frequently used statistic for testing homogeneity in a meta-analysis of K independent studies is Cochran's Q. For a standard test of homogeneity the Q statistic is referred to a chi-square distribution with K-1 degrees of freedom. For the situation in which the effects of the studies are logarithms of odds ratios, the chi-square distribution is much too conservative for moderate size studies, although it may be asymptotically correct as the individual studies become large. Using a mixture of theoretical results and simulations, we provide formulas to estimate the shape and scale parameters of a gamma distribution to fit the distribution of Q. Simulation studies show that the gamma distribution is a good approximation to the distribution for Q. Use of the gamma distribution instead of the chi-square distribution for Q should eliminate inaccurate inferences in assessing homogeneity in a meta-analysis. (A computer program for implementing this test is provided.) This hypothesis test is competitive with the Breslow-Day test both in accuracy of level and in power.
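
    For reference, a minimal sketch of computing Cochran's Q for log odds ratios and the conventional chi-square p-value; the gamma-based refinement proposed in the paper is not reproduced here, and the effect estimates and variances are made up.

    ```python
    import numpy as np
    from scipy.stats import chi2

    log_or = np.array([0.10, 0.35, -0.05, 0.22, 0.18])    # study log odds ratios
    var = np.array([0.04, 0.09, 0.06, 0.05, 0.07])        # their estimated variances

    w = 1.0 / var
    theta_bar = np.sum(w * log_or) / np.sum(w)            # fixed-effect pooled estimate
    Q = np.sum(w * (log_or - theta_bar) ** 2)
    K = len(log_or)
    print(f"Q = {Q:.3f}, chi-square p-value = {chi2.sf(Q, K - 1):.3f}")
    ```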

  11. Comparing the Effectiveness of Traditional and Active Learning Methods in Business Statistics: Convergence to the Mean

    ERIC Educational Resources Information Center

    Weltman, David; Whiteside, Mary

    2010-01-01

    This research shows that active learning is not universally effective and, in fact, may inhibit learning for certain types of students. The results of this study show that as increased levels of active learning are utilized, student test scores decrease for those with a high grade point average. In contrast, test scores increase as active learning…

  12. Goodness-of-Fit Tests for Generalized Normal Distribution for Use in Hydrological Frequency Analysis

    NASA Astrophysics Data System (ADS)

    Das, Samiran

    2018-04-01

The use of the three-parameter generalized normal (GNO) as a hydrological frequency distribution is well recognized, but its application is limited due to the unavailability of popular goodness-of-fit (GOF) test statistics. This study develops popular empirical distribution function (EDF)-based test statistics to investigate the goodness of fit of the GNO distribution. The focus is on the case most relevant to the hydrologist, namely, that in which the parameter values are unknown and estimated from a sample using the method of L-moments. The widely used EDF tests such as Kolmogorov-Smirnov, Cramer-von Mises, and Anderson-Darling (AD) are considered in this study. A modified version of AD, namely, the Modified Anderson-Darling (MAD) test, is also considered and its performance is assessed against other EDF tests using a power study that incorporates six specific Wakeby distributions (WA-1, WA-2, WA-3, WA-4, WA-5, and WA-6) as the alternative distributions. The critical values of the proposed test statistics are approximated using Monte Carlo techniques and are summarized in chart and regression equation form to show their dependence on shape parameter and sample size. The performance results obtained from the power study suggest that the AD and a variant of the MAD (MAD-L) are the most powerful tests. Finally, the study performs case studies involving annual maximum flow data of selected gauged sites from Irish and US catchments to show the application of the derived critical values and recommends further assessments to be carried out on flow data sets of rivers with various hydrological regimes.
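
    A hedged sketch of the Monte Carlo idea used to approximate critical values of an EDF statistic when parameters are re-estimated from each sample. For brevity it uses the Anderson-Darling statistic with a maximum-likelihood-fitted normal rather than the GNO fitted by L-moments, so the numbers are purely illustrative.

    ```python
    import numpy as np
    from scipy import stats

    def ad_statistic(x, dist):
        """Anderson-Darling statistic of x against a fully specified distribution."""
        x = np.sort(x)
        n = len(x)
        u = dist.cdf(x)
        i = np.arange(1, n + 1)
        return -n - np.mean((2 * i - 1) * (np.log(u) + np.log(1 - u[::-1])))

    def mc_critical_value(n, n_sim=2000, alpha=0.05, seed=0):
        rng = np.random.default_rng(seed)
        sims = []
        for _ in range(n_sim):
            x = rng.normal(size=n)
            mu, sigma = x.mean(), x.std(ddof=1)     # parameters re-estimated each time
            sims.append(ad_statistic(x, stats.norm(mu, sigma)))
        return np.quantile(sims, 1 - alpha)

    print("approximate 5% critical value, n=50:", round(mc_critical_value(50), 3))
    ```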

  13. [Treatment of patients with neuromuscular disease in a warm climate].

    PubMed

    Dahl, Arve; Skjeldal, Ola H; Simensen, Andreas; Dalen, Håkon E; Bråthen, Tone; Ahlvin, Petra; Svendsby, Ellen Kathrine; Sveinall, Anne; Fredriksen, Per Morten

    2004-07-01

    Several patient groups request treatment in a warm climate, in spite of the fact that the effects of such treatment are undocumented. 47 children and 40 adults with neuromuscular diseases were recruited, stratified according to sex, use or non-use of electric wheelchair, primary myopathy or hereditary neuropathy, and randomised into two adult and two children groups. The patients were treated in a rehabilitation centre, either on Lanzarote or in Norway. All patients were monitored with physical tests and questionnaires at the start of the study, at the end of the treatment period, after three months (all groups) and after six months (adults only). No significant differences in effect between the groups were found. In the warm climate, the adult patient group showed a statistically significant improvement regarding pain, quality of life, depression, and results of physical tests at the end of treatment. After three months, the improvement in physical tests was still present. Among adult patients treated in Norway, improvement in physical tests was statistically significant after three months, but not at the end of the treatment period. This study did not show a statistically significant difference between patients with various neuromuscular diseases treated in a warm climate compared to similar patients treated in Norway.

  14. Statistical analysis of solid waste composition data: Arithmetic mean, standard deviation and correlation coefficients.

    PubMed

    Edjabou, Maklawe Essonanawe; Martín-Fernández, Josep Antoni; Scheutz, Charlotte; Astrup, Thomas Fruergaard

    2017-11-01

Data for fractional solid waste composition provide relative magnitudes of individual waste fractions, the percentages of which always sum to 100, thereby connecting them intrinsically. Due to this sum constraint, waste composition data represent closed data, and their interpretation and analysis require statistical methods other than classical statistics, which are suitable only for non-constrained data such as absolute values. However, the closed characteristics of waste composition data are often ignored when analysed. The results of this study showed, for example, that unavoidable animal-derived food waste amounted to 2.21±3.12% with a confidence interval of (-4.03; 8.45), which highlights the problem of biased negative proportions. A Pearson's correlation test, applied to waste fraction generation (kg mass), indicated a positive correlation between avoidable vegetable food waste and plastic packaging. However, correlation tests applied to waste fraction compositions (percentage values) showed a negative association in this regard, thus demonstrating that statistical analyses applied to compositional waste fraction data, without addressing the closed characteristics of these data, have the potential to generate spurious or misleading results. Therefore, compositional data should be transformed adequately prior to any statistical analysis, such as computing the mean, standard deviation and correlation coefficients. Copyright © 2017 Elsevier Ltd. All rights reserved.
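
    A minimal sketch of the kind of log-ratio transformation the authors call for before computing such statistics: a centred log-ratio (clr) transform of percentage compositions, with a correlation computed on the transformed values. The composition matrix is made up.

    ```python
    import numpy as np

    comp = np.array([[42.0, 23.0, 20.0, 15.0],       # rows: samples
                     [38.0, 27.0, 18.0, 17.0],       # cols: waste fractions (%)
                     [45.0, 20.0, 22.0, 13.0]])
    comp = comp / comp.sum(axis=1, keepdims=True)    # close to proportions

    gm = np.exp(np.mean(np.log(comp), axis=1, keepdims=True))   # geometric mean per sample
    clr = np.log(comp / gm)                                     # centred log-ratio

    print("correlation of fractions 1 and 2 after clr:",
          np.corrcoef(clr[:, 0], clr[:, 1])[0, 1])
    ```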

  15. Single-variant and multi-variant trend tests for genetic association with next-generation sequencing that are robust to sequencing error.

    PubMed

    Kim, Wonkuk; Londono, Douglas; Zhou, Lisheng; Xing, Jinchuan; Nato, Alejandro Q; Musolf, Anthony; Matise, Tara C; Finch, Stephen J; Gordon, Derek

    2012-01-01

    As with any new technology, next-generation sequencing (NGS) has potential advantages and potential challenges. One advantage is the identification of multiple causal variants for disease that might otherwise be missed by SNP-chip technology. One potential challenge is misclassification error (as with any emerging technology) and the issue of power loss due to multiple testing. Here, we develop an extension of the linear trend test for association that incorporates differential misclassification error and may be applied to any number of SNPs. We call the statistic the linear trend test allowing for error, applied to NGS, or LTTae,NGS. This statistic allows for differential misclassification. The observed data are phenotypes for unrelated cases and controls, coverage, and the number of putative causal variants for every individual at all SNPs. We simulate data considering multiple factors (disease mode of inheritance, genotype relative risk, causal variant frequency, sequence error rate in cases, sequence error rate in controls, number of loci, and others) and evaluate type I error rate and power for each vector of factor settings. We compare our results with two recently published NGS statistics. Also, we create a fictitious disease model based on downloaded 1000 Genomes data for 5 SNPs and 388 individuals, and apply our statistic to those data. We find that the LTTae,NGS maintains the correct type I error rate in all simulations (differential and non-differential error), while the other statistics show large inflation in type I error for lower coverage. Power for all three methods is approximately the same for all three statistics in the presence of non-differential error. Application of our statistic to the 1000 Genomes data suggests that, for the data downloaded, there is a 1.5% sequence misclassification rate over all SNPs. Finally, application of the multi-variant form of LTTae,NGS shows high power for a number of simulation settings, although it can have lower power than the corresponding single-variant simulation results, most probably due to our specification of multi-variant SNP correlation values. In conclusion, our LTTae,NGS addresses two key challenges with NGS disease studies; first, it allows for differential misclassification when computing the statistic; and second, it addresses the multiple-testing issue in that there is a multi-variant form of the statistic that has only one degree of freedom, and provides a single p value, no matter how many loci. Copyright © 2013 S. Karger AG, Basel.
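
    For orientation, the sketch below implements a standard Cochran-Armitage linear trend test on genotype counts; it does not include the differential-misclassification adjustment that defines LTTae,NGS, and the counts are illustrative.

    ```python
    import numpy as np
    from scipy.stats import norm

    cases    = np.array([120, 60, 20])   # genotype counts (AA, Aa, aa) in cases
    controls = np.array([150, 40, 10])   # genotype counts in controls
    x = np.array([0.0, 1.0, 2.0])        # additive genotype scores

    R, S = cases.sum(), controls.sum()   # numbers of cases and controls
    N = R + S
    n = cases + controls                 # genotype totals

    # Trend statistic and its null variance (conditional on the margins)
    T = np.sum(x * (S * cases - R * controls))
    var_T = R * S * (N * np.sum(x**2 * n) - np.sum(x * n) ** 2) / (N - 1)
    z = T / np.sqrt(var_T)
    print(f"trend Z = {z:.3f}, two-sided p = {2 * norm.sf(abs(z)):.4f}")
    ```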

  16. Single variant and multi-variant trend tests for genetic association with next generation sequencing that are robust to sequencing error

    PubMed Central

    Kim, Wonkuk; Londono, Douglas; Zhou, Lisheng; Xing, Jinchuan; Nato, Andrew; Musolf, Anthony; Matise, Tara C.; Finch, Stephen J.; Gordon, Derek

    2013-01-01

    As with any new technology, next generation sequencing (NGS) has potential advantages and potential challenges. One advantage is the identification of multiple causal variants for disease that might otherwise be missed by SNP-chip technology. One potential challenge is misclassification error (as with any emerging technology) and the issue of power loss due to multiple testing. Here, we develop an extension of the linear trend test for association that incorporates differential misclassification error and may be applied to any number of SNPs. We call the statistic the linear trend test allowing for error, applied to NGS, or LTTae,NGS. This statistic allows for differential misclassification. The observed data are phenotypes for unrelated cases and controls, coverage, and the number of putative causal variants for every individual at all SNPs. We simulate data considering multiple factors (disease mode of inheritance, genotype relative risk, causal variant frequency, sequence error rate in cases, sequence error rate in controls, number of loci, and others) and evaluate type I error rate and power for each vector of factor settings. We compare our results with two recently published NGS statistics. Also, we create a fictitious disease model, based on downloaded 1000 Genomes data for 5 SNPs and 388 individuals, and apply our statistic to that data. We find that the LTTae,NGS maintains the correct type I error rate in all simulations (differential and non-differential error), while the other statistics show large inflation in type I error for lower coverage. Power for all three methods is approximately the same for all three statistics in the presence of non-differential error. Application of our statistic to the 1000 Genomes data suggests that, for the data downloaded, there is a 1.5% sequence misclassification rate over all SNPs. Finally, application of the multi-variant form of LTTae,NGS shows high power for a number of simulation settings, although it can have lower power than the corresponding single variant simulation results, most probably due to our specification of multi-variant SNP correlation values. In conclusion, our LTTae,NGS addresses two key challenges with NGS disease studies; first, it allows for differential misclassification when computing the statistic; and second, it addresses the multiple-testing issue in that there is a multi-variant form of the statistic that has only one degree of freedom, and provides a single p-value, no matter how many loci. PMID:23594495

  17. Pathway analysis with next-generation sequencing data.

    PubMed

    Zhao, Jinying; Zhu, Yun; Boerwinkle, Eric; Xiong, Momiao

    2015-04-01

    Although pathway analysis methods have been developed and successfully applied to association studies of common variants, the statistical methods for pathway-based association analysis of rare variants have not been well developed. Many investigators observed highly inflated false-positive rates and low power in pathway-based tests of association of rare variants. The inflated false-positive rates and low true-positive rates of the current methods are mainly due to their lack of ability to account for gametic phase disequilibrium. To overcome these serious limitations, we develop a novel statistic that is based on the smoothed functional principal component analysis (SFPCA) for pathway association tests with next-generation sequencing data. The developed statistic has the ability to capture position-level variant information and account for gametic phase disequilibrium. By intensive simulations, we demonstrate that the SFPCA-based statistic for testing pathway association with either rare or common or both rare and common variants has the correct type 1 error rates. Also the power of the SFPCA-based statistic and 22 additional existing statistics are evaluated. We found that the SFPCA-based statistic has a much higher power than other existing statistics in all the scenarios considered. To further evaluate its performance, the SFPCA-based statistic is applied to pathway analysis of exome sequencing data in the early-onset myocardial infarction (EOMI) project. We identify three pathways significantly associated with EOMI after the Bonferroni correction. In addition, our preliminary results show that the SFPCA-based statistic has much smaller P-values to identify pathway association than other existing methods.

  18. A novel approach to detect test-seeking behaviour in the blood donor population: making the invisible visible.

    PubMed

    de Vos, A S; Lieshout-Krikke, R W; Slot, E; Cator, E A; Janssen, M P

    2016-10-01

    Individuals may donate blood in order to determine their infection status after exposure to an increased infection risk. Such test-seeking behaviour decreases transfusion safety. Instances of test seeking are difficult to substantiate as donors are unlikely to admit to such behaviour. However, manifestation in a population of repeat donors may be determined using statistical inference. Test-seeking donors would be highly motivated to donate following infection risk, influencing the timing of their donation. Donation intervals within 2005-2014 of all Dutch blood donors who acquired syphilis (N = 50), HIV (N = 13), HTLV (N = 4) or HCV (N = 2) were compared to donation intervals of uninfected blood donors (N = 7 327 836) using the Anderson-Darling test. We adjusted for length bias as well as for age, gender and donation type of the infected. Additionally, the power of the proposed method was investigated by simulation. Among the Dutch donors who acquired infection, we found only a non-significant overrepresentation of short donation intervals (P = 0·54). However, we show by simulation that both relatively short and long donation intervals among infected donors can reveal test seeking. The power of the method is >90% if among 69 infected donors >35 (51%) are test seeking, or if among 320 infected donors >90 (30%) are test seeking. We show how statistical analysis may be used to reveal the extent of test seeking in repeat blood donor populations. In the Dutch setting, indications for test-seeking behaviour were not statistically significant. This may, however, be due to the low number of infected individuals. © 2016 International Society of Blood Transfusion.
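
    A hedged sketch of the comparison described above, using SciPy's k-sample Anderson-Darling test on simulated donation intervals (not the Dutch data); the gamma-distributed intervals and group sizes are assumptions.

    ```python
    import numpy as np
    from scipy.stats import anderson_ksamp

    rng = np.random.default_rng(9)
    uninfected_intervals = rng.gamma(shape=2.0, scale=90.0, size=5000)   # days
    infected_intervals = rng.gamma(shape=2.0, scale=70.0, size=50)       # shorter on average

    res = anderson_ksamp([infected_intervals, uninfected_intervals])
    print("AD statistic:", round(res.statistic, 3),
          "approx. significance level:", res.significance_level)
    ```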

  19. [Again review of research design and statistical methods of Chinese Journal of Cardiology].

    PubMed

    Kong, Qun-yu; Yu, Jin-ming; Jia, Gong-xian; Lin, Fan-li

    2012-11-01

To re-evaluate and compare the research design and the use of statistical methods in the Chinese Journal of Cardiology. We summarized the research designs and statistical methods in all of the original papers published in the Chinese Journal of Cardiology throughout 2011 and compared the results with the evaluation of 2008. (1) There was no difference between the two volumes in the distribution of research designs. Compared with the earlier volume, the use of survival regression and non-parametric tests increased, while the proportion of articles with no statistical analysis decreased. (2) The proportions of flawed articles in the later volume were significantly lower than in the former: 6 (4%) with flaws in design, 5 (3%) with flaws in expression, and 9 (5%) with incomplete analysis. (3) The rate of correct use of analysis of variance increased, as did that of multi-group comparisons and tests of normality. The rate of errors arising from ignoring the test of homogeneity of variance went from 17% to 25%, a change that was not statistically significant. The Chinese Journal of Cardiology showed many improvements, such as better regulation of design and statistics. More attention should be paid to the homogeneity of variance in future work.

  20. Evaluation of moxifloxacin-hydroxyapatite composite graft in the regeneration of intrabony defects: A clinical, radiographic, and microbiological study

    PubMed Central

    Nagarjuna Reddy, Y. V.; Deepika, P. C.; Venkatesh, M. P.; Rajeshwari, K. G.

    2016-01-01

    Background: The formation of new connective periodontal attachment is contingent upon the elimination or marked reduction of pathogens at the treated periodontal site. An anti-microbial agent, i.e. moxifloxacin has been incorporated into the bone graft to control infection and facilitate healing during and after periodontal therapy. Materials and Methods: By purposive sampling, 15 patients with at least two contralateral vertical defect sites were selected. The selected sites in each individual were divided randomly into test and control sites according to split-mouth design. Test site received moxifloxacin-hydroxyapatite composite graft and control site received hydroxyapatite-placebo gel composite graft. Probing depth (PD) and Clinical attachment level (CAL) were assessed at baseline, 3, 6, 9, and 12 months. Bone probing depth (BPD) and hard tissue parameters such as amount of defect fill, percentage of defect fill, and changes in alveolar crest were assessed at baseline, 6, and 12 months. Changes in subgingival microflora were also assessed by culturing the subgingival plaque samples at baseline and at 3-month follow-up. The clinical, radiographic, and microbiological data obtained were subjected to statistical analysis using descriptive statistics, paired sample t-test, independent t-test, and contingency test. Results: On intragroup comparison at test and control sites, there was a significant improvement in all clinical and radiographic parameters. However, on intergroup comparison of the same, there was no statistically significant difference between test and control sites at any interval. Although test sites showed slightly higher amount of bone fill, it was not statistically significant. There was a significant reduction in the counts of Aggregatibacter actinomycetemcomitans and Porphyromonas gingivalis at both sites from baseline to 3 months. In addition, there was a significant reduction at test sites as compared to control sites at 3-month follow-up (P = 0.003 and P = 0.013). Conclusion: The reduction in microbial counts found in test sites at 3-month follow-up could not bring similar significant improvements in the clinical and radiographic parameters though the test sites showed slightly higher bone fill. PMID:27630501

  1. Normal Distribution of CD8+ T-Cell-Derived ELISPOT Counts within Replicates Justifies the Reliance on Parametric Statistics for Identifying Positive Responses.

    PubMed

    Karulin, Alexey Y; Caspell, Richard; Dittrich, Marcus; Lehmann, Paul V

    2015-03-02

Accurate assessment of positive ELISPOT responses for low frequencies of antigen-specific T-cells is controversial. In particular, it is still unknown whether ELISPOT counts within replicate wells follow a theoretical distribution function, and thus whether high power parametric statistics can be used to discriminate between positive and negative wells. We studied experimental distributions of spot counts for up to 120 replicate wells of IFN-γ production by CD8+ T-cells responding to the EBV LMP2A (426-434) peptide in human PBMC. The cells were tested in serial dilutions covering a wide range of average spot counts per condition, from just a few to hundreds of spots per well. Statistical analysis of the data using diagnostic Q-Q plots and the Shapiro-Wilk normality test showed that, over the entire dynamic range of ELISPOT, spot counts within replicate wells followed a normal distribution. This result implies that the Student's t-test and ANOVA are suitable for identifying positive responses. We also show experimentally that borderline responses can be reliably detected by involving more replicate wells, plating higher numbers of PBMC, addition of IL-7, or a combination of these. Furthermore, we have experimentally verified that the number of replicates needed for detection of weak responses can be calculated using parametric statistics.
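
    A minimal sketch of the kind of normality check described above, applied to simulated replicate-well counts (not the study's data): a Shapiro-Wilk test plus a diagnostic Q-Q plot.

    ```python
    import numpy as np
    from scipy import stats
    import statsmodels.api as sm
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(10)
    spot_counts = rng.poisson(lam=40, size=24).astype(float)   # 24 replicate wells

    w, p = stats.shapiro(spot_counts)
    print(f"Shapiro-Wilk W = {w:.3f}, p = {p:.3f}")   # large p: no evidence against normality

    sm.qqplot(spot_counts, line="s")                  # diagnostic Q-Q plot
    plt.show()
    ```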

  2. Hypothesis testing for band size detection of high-dimensional banded precision matrices.

    PubMed

    An, Baiguo; Guo, Jianhua; Liu, Yufeng

    2014-06-01

    Many statistical analysis procedures require a good estimator for a high-dimensional covariance matrix or its inverse, the precision matrix. When the precision matrix is banded, the Cholesky-based method often yields a good estimator of the precision matrix. One important aspect of this method is determination of the band size of the precision matrix. In practice, crossvalidation is commonly used; however, we show that crossvalidation not only is computationally intensive but can be very unstable. In this paper, we propose a new hypothesis testing procedure to determine the band size in high dimensions. Our proposed test statistic is shown to be asymptotically normal under the null hypothesis, and its theoretical power is studied. Numerical examples demonstrate the effectiveness of our testing procedure.
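
    The abstract contrasts its hypothesis test with the crossvalidation commonly used in practice. The sketch below illustrates only that crossvalidation baseline for the Cholesky-based (sequential-regression) banded estimator; the simulated data, helper names, and scoring are assumptions, not the authors' procedure.

    ```python
    # Crossvalidation baseline for band-size selection of a banded precision
    # matrix via the modified Cholesky approach (illustrative sketch only).
    import numpy as np

    def banded_precision(X, k):
        """Cholesky-based banded precision estimate; assumes column-centred X."""
        n, p = X.shape
        T = np.eye(p)                      # unit lower-triangular factor
        d = np.empty(p)                    # innovation variances
        d[0] = X[:, 0].var()
        for j in range(1, p):
            lo = max(0, j - k)
            Z = X[:, lo:j]                 # at most k predecessors
            phi, *_ = np.linalg.lstsq(Z, X[:, j], rcond=None)
            T[j, lo:j] = -phi
            d[j] = (X[:, j] - Z @ phi).var()
        return T.T @ np.diag(1.0 / d) @ T  # Omega = T' D^{-1} T

    def heldout_loglik(Omega, X):
        _, logdet = np.linalg.slogdet(Omega)
        return 0.5 * (logdet * X.shape[0] - np.einsum('ij,jk,ik->', X, Omega, X))

    rng = np.random.default_rng(1)
    n, p = 220, 30
    X = np.empty((n, p))
    X[:, 0] = rng.standard_normal(n)
    for j in range(1, p):                  # AR(1) across variables -> true band 1
        X[:, j] = 0.6 * X[:, j - 1] + rng.standard_normal(n)

    train, test = X[:170], X[170:]
    scores = {k: heldout_loglik(banded_precision(train, k), test) for k in range(1, 8)}
    print("selected band size:", max(scores, key=scores.get))
    ```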

  3. A benchmark for statistical microarray data analysis that preserves actual biological and technical variance.

    PubMed

    De Hertogh, Benoît; De Meulder, Bertrand; Berger, Fabrice; Pierre, Michael; Bareke, Eric; Gaigneaux, Anthoula; Depiereux, Eric

    2010-01-11

    Recent reanalysis of spike-in datasets underscored the need for new and more accurate benchmark datasets for statistical microarray analysis. We present here a fresh method using biologically-relevant data to evaluate the performance of statistical methods. Our novel method ranks the probesets from a dataset composed of publicly-available biological microarray data and extracts subset matrices with precise information/noise ratios. Our method can be used to determine the capability of different methods to better estimate variance for a given number of replicates. The mean-variance and mean-fold change relationships of the matrices revealed a closer approximation of biological reality. Performance analysis refined the results from benchmarks published previously. We show that the Shrinkage t test (close to Limma) was the best of the methods tested, except when two replicates were examined, where the Regularized t test and the Window t test performed slightly better. The R scripts used for the analysis are available at http://urbm-cluster.urbm.fundp.ac.be/~bdemeulder/.

  4. The relationship between the behavior problems and motor skills of students with intellectual disability.

    PubMed

    Lee, Yangchool; Jeoung, Bogja

    2016-12-01

    The purpose of this study was to determine the relationship between the motor skills and the behavior problems of students with intellectual disabilities. The study participants were 117 students with intellectual disabilities who were between 7 and 25 years old (male, n=79; female, n=38) and attending special education schools in South Korea. Motor skill abilities were assessed by using the second version of the Bruininks-Oseretsky test of motor proficiency, which includes subtests in fine motor control, manual coordination, body coordination, strength, and agility. Data were analyzed with SPSS IBM 21 by using correlation and regression analyses, and the significance level was set at P <0.05. The results showed that fine motor precision and integration had a statistically significant influence on aggressive behavior. Manual dexterity showed a statistically significant influence on somatic complaint and anxiety/depression, and bilateral coordination had a statistically significant influence on social problems, attention problem, and aggressive behavior. Our results showed that balance had a statistically significant influence on social problems and aggressive behavior, and speed and agility had a statistically significant influence on social problems and aggressive behavior. Upper limb coordination and strength had a statistically significant influence on social problems.

  5. Statistical inference for Hardy-Weinberg proportions in the presence of missing genotype information.

    PubMed

    Graffelman, Jan; Sánchez, Milagros; Cook, Samantha; Moreno, Victor

    2013-01-01

    In genetic association studies, tests for Hardy-Weinberg proportions are often employed as a quality control checking procedure. Missing genotypes are typically discarded prior to testing. In this paper we show that inference for Hardy-Weinberg proportions can be biased when missing values are discarded. We propose to use multiple imputation of missing values in order to improve inference for Hardy-Weinberg proportions. For imputation we employ a multinomial logit model that uses information from allele intensities and/or neighbouring markers. Analysis of an empirical data set of single nucleotide polymorphisms possibly related to colon cancer reveals that missing genotypes are not missing completely at random. Deviation from Hardy-Weinberg proportions is mostly due to a lack of heterozygotes. Inbreeding coefficients estimated by multiple imputation of the missings are typically lowered with respect to inbreeding coefficients estimated by discarding the missings. Accounting for missings by multiple imputation qualitatively changed the results of 10 to 17% of the statistical tests performed. Estimates of inbreeding coefficients obtained by multiple imputation showed high correlation with estimates obtained by single imputation using an external reference panel. Our conclusion is that imputation of missing data leads to improved statistical inference for Hardy-Weinberg proportions.
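
    For readers unfamiliar with the underlying quality-control test, a minimal chi-square check of Hardy-Weinberg proportions on complete genotype counts is sketched below; the counts are hypothetical and the paper's multinomial-logit multiple imputation is not reproduced.

    ```python
    # Minimal sketch: chi-square test of Hardy-Weinberg proportions for one
    # biallelic SNP from observed genotype counts (AA, AB, BB). Counts are toy.
    from scipy import stats

    n_AA, n_AB, n_BB = 510, 410, 80          # hypothetical complete genotypes
    n = n_AA + n_AB + n_BB
    p = (2 * n_AA + n_AB) / (2 * n)          # allele-A frequency

    expected = [n * p**2, n * 2 * p * (1 - p), n * (1 - p)**2]
    chi2, pval = stats.chisquare([n_AA, n_AB, n_BB], f_exp=expected, ddof=1)
    print(f"chi2={chi2:.3f}, p={pval:.4f}")  # ddof=1: one parameter (p) estimated

    # Inbreeding coefficient, the summary the abstract reports per imputation
    f_hat = 1 - (n_AB / n) / (2 * p * (1 - p))
    print(f"inbreeding coefficient ~ {f_hat:.3f}")
    ```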

  6. mvp - an open-source preprocessor for cleaning duplicate records and missing values in mass spectrometry data.

    PubMed

    Lee, Geunho; Lee, Hyun Beom; Jung, Byung Hwa; Nam, Hojung

    2017-07-01

    Mass spectrometry (MS) data are used to analyze biological phenomena based on chemical species. However, these data often contain unexpected duplicate records and missing values due to technical or biological factors. These 'dirty data' problems increase the difficulty of performing MS analyses because they lead to performance degradation when statistical or machine-learning tests are applied to the data. Thus, we have developed missing values preprocessor (mvp), an open-source software for preprocessing data that might include duplicate records and missing values. mvp uses the property of MS data in which identical chemical species present the same or similar values for key identifiers, such as the mass-to-charge ratio and intensity signal, and forms cliques via graph theory to process dirty data. We evaluated the validity of the mvp process via quantitative and qualitative analyses and compared the results from a statistical test that analyzed the original and mvp-applied data. This analysis showed that using mvp reduces problems associated with duplicate records and missing values. We also examined the effects of using unprocessed data in statistical tests and examined the improved statistical test results obtained with data preprocessed using mvp.
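
    The clique idea described above can be illustrated with a toy sketch: records become graph nodes, edges connect records whose key identifiers agree within tolerances, and maximal cliques mark putative duplicate groups. The tolerances, field names, and records below are assumptions, not mvp's actual implementation.

    ```python
    # Illustrative sketch (not the mvp code): group putative duplicate MS records
    # by connecting features whose m/z and intensity agree within tolerances,
    # then read duplicate groups off the maximal cliques of that graph.
    import networkx as nx

    records = [
        {"id": 0, "mz": 445.120, "intensity": 1.02e5},
        {"id": 1, "mz": 445.121, "intensity": 1.00e5},   # likely duplicate of 0
        {"id": 2, "mz": 512.300, "intensity": 3.10e4},
        {"id": 3, "mz": 445.119, "intensity": 0.99e5},   # likely duplicate of 0/1
    ]

    G = nx.Graph()
    G.add_nodes_from(r["id"] for r in records)
    for i, a in enumerate(records):
        for b in records[i + 1:]:
            same_mz = abs(a["mz"] - b["mz"]) <= 0.005                 # ~10 ppm
            same_int = abs(a["intensity"] - b["intensity"]) / a["intensity"] <= 0.05
            if same_mz and same_int:
                G.add_edge(a["id"], b["id"])

    duplicate_groups = [c for c in nx.find_cliques(G) if len(c) > 1]
    print(duplicate_groups)   # e.g. [[0, 1, 3]] -> merge into one cleaned record
    ```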

  7. On the Computation of the RMSEA and CFI from the Mean-And-Variance Corrected Test Statistic with Nonnormal Data in SEM.

    PubMed

    Savalei, Victoria

    2018-01-01

    A new type of nonnormality correction to the RMSEA has recently been developed, which has several advantages over existing corrections. In particular, the new correction adjusts the sample estimate of the RMSEA for the inflation due to nonnormality, while leaving its population value unchanged, so that established cutoff criteria can still be used to judge the degree of approximate fit. A confidence interval (CI) for the new robust RMSEA based on the mean-corrected ("Satorra-Bentler") test statistic has also been proposed. Follow up work has provided the same type of nonnormality correction for the CFI (Brosseau-Liard & Savalei, 2014). These developments have recently been implemented in lavaan. This note has three goals: a) to show how to compute the new robust RMSEA and CFI from the mean-and-variance corrected test statistic; b) to offer a new CI for the robust RMSEA based on the mean-and-variance corrected test statistic; and c) to caution that the logic of the new nonnormality corrections to RMSEA and CFI is most appropriate for the maximum likelihood (ML) estimator, and cannot easily be generalized to the most commonly used categorical data estimators.

  8. LandScape: a simple method to aggregate p-values and other stochastic variables without a priori grouping.

    PubMed

    Wiuf, Carsten; Schaumburg-Müller Pallesen, Jonatan; Foldager, Leslie; Grove, Jakob

    2016-08-01

    In many areas of science it is custom to perform many, potentially millions, of tests simultaneously. To gain statistical power it is common to group tests based on a priori criteria such as predefined regions or by sliding windows. However, it is not straightforward to choose grouping criteria and the results might depend on the chosen criteria. Methods that summarize, or aggregate, test statistics or p-values, without relying on a priori criteria, are therefore desirable. We present a simple method to aggregate a sequence of stochastic variables, such as test statistics or p-values, into fewer variables without assuming a priori defined groups. We provide different ways to evaluate the significance of the aggregated variables based on theoretical considerations and resampling techniques, and show that under certain assumptions the FWER is controlled in the strong sense. Validity of the method was demonstrated using simulations and real data analyses. Our method may be a useful supplement to standard procedures relying on evaluation of test statistics individually. Moreover, by being agnostic and not relying on predefined selected regions, it might be a practical alternative to conventionally used methods of aggregation of p-values over regions. The method is implemented in Python and freely available online (through GitHub, see the Supplementary information).
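
    For contrast with the a-priori-free aggregation described above, the sketch below shows the conventional window-based alternative the abstract mentions: combining p-values over predefined windows with Fisher's method. The window size and simulated p-values are assumptions, and this is not the LandScape procedure itself.

    ```python
    # Conventional baseline: Fisher combination of p-values over fixed windows.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    pvals = rng.uniform(size=1000)
    pvals[400:410] = rng.uniform(0, 1e-3, size=10)   # a small cluster of signal

    window = 20
    combined = []
    for start in range(0, len(pvals) - window + 1, window):
        chunk = pvals[start:start + window]
        _, p_comb = stats.combine_pvalues(chunk, method="fisher")
        combined.append((start, p_comb))

    # Bonferroni across windows; note the grouping was chosen a priori, which is
    # exactly the dependence the aggregation method above avoids.
    n_windows = len(combined)
    hits = [(s, p) for s, p in combined if p < 0.05 / n_windows]
    print(hits)
    ```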

  9. Knowledge dimensions in hypothesis test problems

    NASA Astrophysics Data System (ADS)

    Krishnan, Saras; Idris, Noraini

    2012-05-01

    The reformation in statistics education over the past two decades has predominantly shifted the focus of statistical teaching and learning from procedural understanding to conceptual understanding. The emphasis of procedural understanding is on formulas and calculation procedures. Meanwhile, conceptual understanding emphasizes students knowing why they are using a particular formula or executing a specific procedure. In addition, the Revised Bloom's Taxonomy offers a two-dimensional framework to describe learning objectives, comprising the six revised cognition levels of the original Bloom's taxonomy and four knowledge dimensions. Depending on the level of complexity, the four knowledge dimensions essentially distinguish basic understanding from the more connected understanding. This study identifies the factual, procedural, and conceptual knowledge dimensions in hypothesis test problems. The hypothesis test, being an important tool for making inferences about a population from sample information, is taught in many introductory statistics courses. However, researchers find that students in these courses still have difficulty in understanding the underlying concepts of hypothesis tests. Past studies also show that even though students can perform the hypothesis testing procedure, they may not understand the rationale for executing these steps or know how to apply them in novel contexts. Besides knowing the procedural steps in conducting a hypothesis test, students must have fundamental statistical knowledge and a deep understanding of the underlying inferential concepts such as the sampling distribution and the central limit theorem. By identifying the knowledge dimensions of hypothesis test problems in this study, suitable instructional and assessment strategies can be developed in future to enhance students' learning of the hypothesis test as a valuable inferential tool.

  10. Effect of in-office bleaching agents on physical properties of dental composite resins.

    PubMed

    Mourouzis, Petros; Koulaouzidou, Elisabeth A; Helvatjoglu-Antoniades, Maria

    2013-04-01

    The physical properties of dental restorative materials have a crucial effect on the longevity of restorations and moreover on the esthetic demands of patients, but they may be compromised by bleaching treatments. The purpose of this study was to evaluate the effects of in-office bleaching agents on the physical properties of three composite resin restorative materials. The bleaching agents used were hydrogen peroxide and carbamide peroxide at high concentrations. Specimens of each material were prepared, cured, and polished. Measurements of color difference, microhardness, and surface roughness were recorded before and after bleaching and data were examined statistically by analysis of variance (ANOVA) and Tukey HSD post-hoc test at P < .05. The measurements showed that hue and chroma of silorane-based composite resin altered after the bleaching procedure (P < .05). No statistically significant differences were found when testing the microhardness and surface roughness of composite resins tested (P > .05). The silorane-based composite resin tested showed some color alteration after bleaching procedures. The bleaching procedure did not alter the microhardness and the surface roughness of all composite resins tested.

  11. Corrosion Analysis of an Experimental Noble Alloy on Commercially Pure Titanium Dental Implants

    PubMed Central

    Bortagaray, Manuel Alberto; Ibañez, Claudio Arturo Antonio; Ibañez, Maria Constanza; Ibañez, Juan Carlos

    2016-01-01

    Objective: To determine whether the Noble Bond® Argen® alloy was electrochemically suitable for the manufacturing of prosthetic superstructures over commercially pure titanium (c.p. Ti) implants. Also, the electrolytic corrosion effects on three types of materials used in prosthetic superstructures coupled with titanium implants were analysed: Noble Bond® (Argen®), Argelite 76sf+® (Argen®), and commercially pure titanium. Materials and Methods: Fifteen samples were studied, each consisting of one abutment and one c.p. titanium implant. They were divided into three groups: control group, five c.p. titanium abutments (B&W®); test group 1, five Noble Bond® (Argen®) cast abutments; and test group 2, five Argelite 76sf+® (Argen®) abutments. In order to observe the corrosion effects, the surface topography was imaged using a confocal microscope. Three metric parameters (Sa, arithmetical mean height of the surface; Sp, maximum height of peaks; Sv, maximum height of valleys) were measured at three different areas: abutment neck, implant neck, and implant body. The samples were immersed in artificial saliva for 3 months, after which the procedure was repeated. The metric parameters were compared by statistical analysis. Results: The analysis of Sa at the level of the implant neck, abutment neck, and implant body showed no statistically significant differences on combining c.p. Ti implants with the three studied alloys. Sp showed no statistically significant differences between the three alloys. Sv showed no statistically significant differences between the three alloys. Conclusion: The effects of electrogalvanic corrosion on each of the materials used when they were in contact with c.p. Ti showed no statistically significant differences. PMID:27733875

  12. Generalized functional linear models for gene-based case-control association studies.

    PubMed

    Fan, Ruzong; Wang, Yifan; Mills, James L; Carter, Tonia C; Lobach, Iryna; Wilson, Alexander F; Bailey-Wilson, Joan E; Weeks, Daniel E; Xiong, Momiao

    2014-11-01

    By using functional data analysis techniques, we developed generalized functional linear models for testing association between a dichotomous trait and multiple genetic variants in a genetic region while adjusting for covariates. Both fixed and mixed effect models are developed and compared. Extensive simulations show that Rao's efficient score tests of the fixed effect models are very conservative since they generate lower type I errors than nominal levels, and global tests of the mixed effect models generate accurate type I errors. Furthermore, we found that the Rao's efficient score test statistics of the fixed effect models have higher power than the sequence kernel association test (SKAT) and its optimal unified version (SKAT-O) in most cases when the causal variants are both rare and common. When the causal variants are all rare (i.e., minor allele frequencies less than 0.03), the Rao's efficient score test statistics and the global tests have similar or slightly lower power than SKAT and SKAT-O. In practice, it is not known whether rare variants or common variants in a gene region are disease related. All we can assume is that a combination of rare and common variants influences disease susceptibility. Thus, the improved performance of our models when the causal variants are both rare and common shows that the proposed models can be very useful in dissecting complex traits. We compare the performance of our methods with SKAT and SKAT-O on real neural tube defects and Hirschsprung's disease datasets. The Rao's efficient score test statistics and the global tests are more sensitive than SKAT and SKAT-O in the real data analysis. Our methods can be used in either gene-disease genome-wide/exome-wide association studies or candidate gene analyses. © 2014 WILEY PERIODICALS, INC.

  13. Generalized Functional Linear Models for Gene-based Case-Control Association Studies

    PubMed Central

    Mills, James L.; Carter, Tonia C.; Lobach, Iryna; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Weeks, Daniel E.; Xiong, Momiao

    2014-01-01

    By using functional data analysis techniques, we developed generalized functional linear models for testing association between a dichotomous trait and multiple genetic variants in a genetic region while adjusting for covariates. Both fixed and mixed effect models are developed and compared. Extensive simulations show that Rao's efficient score tests of the fixed effect models are very conservative since they generate lower type I errors than nominal levels, and global tests of the mixed effect models generate accurate type I errors. Furthermore, we found that the Rao's efficient score test statistics of the fixed effect models have higher power than the sequence kernel association test (SKAT) and its optimal unified version (SKAT-O) in most cases when the causal variants are both rare and common. When the causal variants are all rare (i.e., minor allele frequencies less than 0.03), the Rao's efficient score test statistics and the global tests have similar or slightly lower power than SKAT and SKAT-O. In practice, it is not known whether rare variants or common variants in a gene are disease-related. All we can assume is that a combination of rare and common variants influences disease susceptibility. Thus, the improved performance of our models when the causal variants are both rare and common shows that the proposed models can be very useful in dissecting complex traits. We compare the performance of our methods with SKAT and SKAT-O on real neural tube defects and Hirschsprung's disease data sets. The Rao's efficient score test statistics and the global tests are more sensitive than SKAT and SKAT-O in the real data analysis. Our methods can be used in either gene-disease genome-wide/exome-wide association studies or candidate gene analyses. PMID:25203683

  14. Is there an association between flow diverter fish mouthing and delayed-type hypersensitivity to metals?-a case-control study.

    PubMed

    Kocer, Naci; Mondel, Prabath Kumar; Yamac, Elif; Kavak, Ayse; Kizilkilic, Osman; Islak, Civan

    2017-11-01

    Flow diverters are increasingly used in the treatment of complex and giant intracranial aneurysms. However, they are associated with complications like late aneurysmal rupture. Additionally, flow diverters can show a focal structural decrease in luminal diameter without any intimal hyperplasia, which resembles a "fish mouth" when viewed en face. In this pilot study, we tested the hypothesis of a possible association between flow diverter fish mouthing and delayed-type hypersensitivity to its metal constituents. We retrospectively reviewed patient records from our center between May 2010 and November 2015. A total of nine patients had flow diverter fish mouthing. A control group of 25 patients was selected. All study participants underwent a prospective patch test to detect hypersensitivity to flow diverter metal constituents. Analysis was performed using logistic regression and the Wilcoxon sign rank sum test. Univariate and multivariate analyses were performed to test variables predicting flow diverter fish mouthing. The association between flow diverter fish mouthing and a positive patch test was not statistically significant. In multivariate analysis, history of allergy and maximum aneurysm size category were associated with flow diverter fish mouthing. This was further confirmed by the Wilcoxon sign rank sum test. The study showed a statistically significant association between flow diverter fish mouthing and a history of contact allergy and a small aneurysm size. Further large-scale studies are needed to detect a statistically significant association between flow diverter fish mouthing and the patch test. We recommend early and more frequent follow-up imaging in patients with contact allergy to detect flow diverter fish mouthing and its subsequent evolution.

  15. Equivalence Testing of Complex Particle Size Distribution Profiles Based on Earth Mover's Distance.

    PubMed

    Hu, Meng; Jiang, Xiaohui; Absar, Mohammad; Choi, Stephanie; Kozak, Darby; Shen, Meiyu; Weng, Yu-Ting; Zhao, Liang; Lionberger, Robert

    2018-04-12

    Particle size distribution (PSD) is an important property of particulates in drug products. In the evaluation of generic drug products formulated as suspensions, emulsions, and liposomes, PSD comparisons between a test product and the branded product can provide useful information regarding in vitro and in vivo performance. Historically, the FDA has recommended the population bioequivalence (PBE) statistical approach to compare the PSD descriptors D50 and SPAN from test and reference products to support product equivalence. In this study, the earth mover's distance (EMD) is proposed as a new metric for comparing PSDs, particularly when the PSD profile exhibits a complex distribution (e.g., multiple peaks) that is not accurately described by the D50 and SPAN descriptors. EMD is a statistical metric that measures the discrepancy (distance) between size distribution profiles without a prior assumption of the distribution. PBE is then adopted to perform a statistical test to establish equivalence based on the calculated EMD distances. Simulations show that the proposed EMD-based approach is effective in comparing test and reference profiles for equivalence testing and is superior to commonly used distance measures, e.g., Euclidean and Kolmogorov-Smirnov distances. The proposed approach was demonstrated by evaluating the equivalence of cyclosporine ophthalmic emulsion PSDs that were manufactured under different conditions. Our results show that the proposed approach can effectively pass an equivalent product (e.g., reference product against itself) and reject an inequivalent product (e.g., reference product against negative control), thus suggesting its usefulness in supporting bioequivalence determination of a test product to the reference product when both possess multimodal PSDs.
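
    A minimal sketch of the distance calculation is shown below, assuming illustrative binned PSD profiles; the PBE acceptance step on replicate lots is omitted.

    ```python
    # Earth mover's (Wasserstein-1) distance between a test and a reference
    # particle size distribution. The binned profiles are illustrative only.
    import numpy as np
    from scipy.stats import wasserstein_distance

    sizes_um = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0])        # bin centres (µm)
    ref_psd  = np.array([0.05, 0.20, 0.30, 0.25, 0.10, 0.07, 0.03])  # reference profile
    test_psd = np.array([0.04, 0.18, 0.28, 0.27, 0.12, 0.08, 0.03])  # test profile

    emd = wasserstein_distance(sizes_um, sizes_um,
                               u_weights=ref_psd, v_weights=test_psd)
    print(f"EMD between profiles: {emd:.4f} µm")
    # Smaller EMD -> closer profiles; the paper then feeds such distances into a
    # population bioequivalence (PBE) test across replicate lots.
    ```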

  16. Texture and haptic cues in slant discrimination: reliability-based cue weighting without statistically optimal cue combination

    NASA Astrophysics Data System (ADS)

    Rosas, Pedro; Wagemans, Johan; Ernst, Marc O.; Wichmann, Felix A.

    2005-05-01

    A number of models of depth-cue combination suggest that the final depth percept results from a weighted average of independent depth estimates based on the different cues available. The weight of each cue in such an average is thought to depend on the reliability of each cue. In principle, such a depth estimation could be statistically optimal in the sense of producing the minimum-variance unbiased estimator that can be constructed from the available information. Here we test such models by using visual and haptic depth information. Different texture types produce differences in slant-discrimination performance, thus providing a means for testing a reliability-sensitive cue-combination model with texture as one of the cues to slant. Our results show that the weights for the cues were generally sensitive to their reliability but fell short of statistically optimal combination - we find reliability-based reweighting but not statistically optimal cue combination.
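
    The statistically optimal benchmark mentioned above is inverse-variance (minimum-variance) weighting of the individual cue estimates. A minimal sketch with illustrative numbers:

    ```python
    # Reliability-weighted cue combination: weights proportional to 1/variance.
    slant_texture, var_texture = 32.0, 16.0    # degrees; texture cue, its variance
    slant_haptic,  var_haptic  = 26.0,  4.0    # haptic cue is more reliable here

    w_texture = (1 / var_texture) / (1 / var_texture + 1 / var_haptic)
    w_haptic = 1 - w_texture
    combined = w_texture * slant_texture + w_haptic * slant_haptic
    combined_var = 1 / (1 / var_texture + 1 / var_haptic)

    print(f"weights: texture={w_texture:.2f}, haptic={w_haptic:.2f}")
    print(f"combined slant ~ {combined:.1f} deg, variance {combined_var:.1f}")
    # The study finds weights that track reliability qualitatively but fall short
    # of this statistically optimal prediction.
    ```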

  17. Decadal power in land air temperatures: Is it statistically significant?

    NASA Astrophysics Data System (ADS)

    Thejll, Peter A.

    2001-12-01

    The geographical distribution and properties of the well-known 10-11 year signal in terrestrial temperature records are investigated. By analyzing the Global Historical Climate Network data for surface air temperatures we verify that the signal is strongest in North America and is similar in nature to that reported earlier by R. G. Currie. The decadal signal is statistically significant for individual stations, but it is not possible to show that the signal is statistically significant globally, using strict tests. In North America, during the twentieth century, the decadal variability in the solar activity cycle is associated with the decadal part of the North Atlantic Oscillation index series in such a way that both of these signals correspond to the same spatial pattern of cooling and warming. A method is presented for testing statistical results with Monte Carlo trials on data fields that retain a specified temporal structure and spatial correlation.
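
    One common form of the Monte Carlo testing described above is to compare the decadal-band periodogram power of a station series against AR(1) ("red noise") surrogates fitted to its lag-1 autocorrelation. The sketch below uses a synthetic series and arbitrary band limits; the paper's field-wide test with retained spatial correlation is not reproduced.

    ```python
    # Monte Carlo significance of decadal-band power against AR(1) surrogates.
    import numpy as np

    rng = np.random.default_rng(42)
    n_years = 120
    t = np.arange(n_years)
    series = 0.5 * np.sin(2 * np.pi * t / 10.5) + rng.standard_normal(n_years)

    def decadal_power(x):
        x = x - x.mean()
        freqs = np.fft.rfftfreq(len(x), d=1.0)        # cycles per year
        power = np.abs(np.fft.rfft(x)) ** 2
        band = (freqs >= 1 / 11) & (freqs <= 1 / 10)  # 10-11 yr band
        return power[band].sum()

    r1 = np.corrcoef(series[:-1], series[1:])[0, 1]   # lag-1 autocorrelation
    sigma = series.std() * np.sqrt(1 - r1 ** 2)
    observed = decadal_power(series)
    exceed = 0
    for _ in range(2000):
        surr = np.empty(n_years)
        surr[0] = series.std() * rng.standard_normal()
        for i in range(1, n_years):
            surr[i] = r1 * surr[i - 1] + sigma * rng.standard_normal()
        exceed += decadal_power(surr) >= observed

    print(f"Monte Carlo p-value ~ {(exceed + 1) / 2001:.3f}")
    ```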

  18. Applying the multivariate time-rescaling theorem to neural population models

    PubMed Central

    Gerhard, Felipe; Haslinger, Robert; Pipa, Gordon

    2011-01-01

    Statistical models of neural activity are integral to modern neuroscience. Recently, interest has grown in modeling the spiking activity of populations of simultaneously recorded neurons to study the effects of correlations and functional connectivity on neural information processing. However any statistical model must be validated by an appropriate goodness-of-fit test. Kolmogorov-Smirnov tests based upon the time-rescaling theorem have proven to be useful for evaluating point-process-based statistical models of single-neuron spike trains. Here we discuss the extension of the time-rescaling theorem to the multivariate (neural population) case. We show that even in the presence of strong correlations between spike trains, models which neglect couplings between neurons can be erroneously passed by the univariate time-rescaling test. We present the multivariate version of the time-rescaling theorem, and provide a practical step-by-step procedure for applying it towards testing the sufficiency of neural population models. Using several simple analytically tractable models and also more complex simulated and real data sets, we demonstrate that important features of the population activity can only be detected using the multivariate extension of the test. PMID:21395436
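
    A minimal univariate sketch of the time-rescaling check is given below, assuming an inhomogeneous-Poisson model and simulated spikes; the multivariate extension discussed in the paper is not reproduced.

    ```python
    # Time-rescaling goodness-of-fit: rescale spike times by the model's cumulative
    # intensity; under a correct model the rescaled intervals are Exponential(1),
    # which a Kolmogorov-Smirnov test can check.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    def rate(t):                                   # model intensity, spikes/s
        return 20 + 15 * np.sin(2 * np.pi * t)

    # simulate spikes from the same intensity by thinning (so the model is correct)
    t_max, lam_max = 5.0, 35.0
    cand = np.cumsum(rng.exponential(1 / lam_max, size=2000))
    cand = cand[cand < t_max]
    spikes = cand[rng.uniform(size=cand.size) < rate(cand) / lam_max]

    # cumulative intensity Lambda(t), evaluated numerically at the spike times
    grid = np.linspace(0, t_max, 20001)
    Lambda = np.concatenate(([0.0], np.cumsum(rate(grid[:-1]) * np.diff(grid))))
    rescaled = np.interp(spikes, grid, Lambda)

    intervals = np.diff(rescaled)                  # should be Exp(1) if model fits
    ks_stat, p = stats.kstest(intervals, "expon")
    print(f"KS statistic={ks_stat:.3f}, p={p:.3f}")
    ```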

  19. Robust regression for large-scale neuroimaging studies.

    PubMed

    Fritsch, Virgile; Da Mota, Benoit; Loth, Eva; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Brühl, Rüdiger; Butzek, Brigitte; Conrod, Patricia; Flor, Herta; Garavan, Hugh; Lemaitre, Hervé; Mann, Karl; Nees, Frauke; Paus, Tomas; Schad, Daniel J; Schümann, Gunter; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand

    2015-05-01

    Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypotheses and call for more refined models for statistical inference. Here, we demonstrate the benefits of robust regression as a tool for analyzing large neuroimaging cohorts. First, we use an analytic test based on robust parameter estimates; based on simulations, this procedure is shown to provide an accurate statistical control without resorting to permutations. Second, we show that robust regression yields more detections than standard algorithms using as an example an imaging genetics study with 392 subjects. Third, we show that robust regression can avoid false positives in a large-scale analysis of brain-behavior relationships with over 1500 subjects. Finally we embed robust regression in the Randomized Parcellation Based Inference (RPBI) method and demonstrate that this combination further improves the sensitivity of tests carried out across the whole brain. Altogether, our results show that robust procedures provide important advantages in large-scale neuroimaging group studies. Copyright © 2015 Elsevier Inc. All rights reserved.
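
    As a small illustration of the core tool, the sketch below contrasts ordinary least squares with Huber robust regression on toy data containing a few artifact-like outliers; the variable names and data are assumptions, and the paper's RPBI combination is not shown.

    ```python
    # Huber robust regression vs OLS on data with gross outliers (statsmodels).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 300
    genotype = rng.integers(0, 3, size=n).astype(float)       # e.g. allele count
    brain_measure = 0.4 * genotype + rng.standard_normal(n)
    brain_measure[:5] += 15                                    # artifact-like outliers

    X = sm.add_constant(genotype)
    ols_fit = sm.OLS(brain_measure, X).fit()
    rlm_fit = sm.RLM(brain_measure, X, M=sm.robust.norms.HuberT()).fit()

    print("OLS   slope, p:", ols_fit.params[1].round(3), ols_fit.pvalues[1].round(4))
    print("Huber slope, p:", rlm_fit.params[1].round(3), rlm_fit.pvalues[1].round(4))
    # The robust fit downweights the outlying subjects instead of letting them
    # drive (or mask) the association.
    ```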

  20. Statistical alignment: computational properties, homology testing and goodness-of-fit.

    PubMed

    Hein, J; Wiuf, C; Knudsen, B; Møller, M B; Wibling, G

    2000-09-08

    The model of insertions and deletions in biological sequences, first formulated by Thorne, Kishino, and Felsenstein in 1991 (the TKF91 model), provides a basis for performing alignment within a statistical framework. Here we investigate this model. Firstly, we show how to accelerate the statistical alignment algorithms by several orders of magnitude. The main innovations are to confine likelihood calculations to a band close to the similarity based alignment, to get good initial guesses of the evolutionary parameters and to apply an efficient numerical optimisation algorithm for finding the maximum likelihood estimate. In addition, the recursions originally presented by Thorne, Kishino and Felsenstein can be simplified. Two proteins, about 1500 amino acids long, can be analysed with this method in less than five seconds on a fast desktop computer, which makes this method practical for actual data analysis. Secondly, we propose a new homology test based on this model, where homology means that an ancestor to a sequence pair can be found finitely far back in time. This test has statistical advantages relative to the traditional shuffle test for proteins. Finally, we describe a goodness-of-fit test that allows testing the proposed insertion-deletion (indel) process inherent to this model, and find that real sequences (here globins) probably experience indels longer than one, contrary to what is assumed by the model. Copyright 2000 Academic Press.

  1. Steganalysis of recorded speech

    NASA Astrophysics Data System (ADS)

    Johnson, Micah K.; Lyu, Siwei; Farid, Hany

    2005-03-01

    Digital audio provides a suitable cover for high-throughput steganography. At 16 bits per sample and sampled at a rate of 44,100 Hz, digital audio has the bit-rate to support large messages. In addition, audio is often transient and unpredictable, facilitating the hiding of messages. Using an approach similar to our universal image steganalysis, we show that hidden messages alter the underlying statistics of audio signals. Our statistical model begins by building a linear basis that captures certain statistical properties of audio signals. A low-dimensional statistical feature vector is extracted from this basis representation and used by a non-linear support vector machine for classification. We show the efficacy of this approach on LSB embedding and Hide4PGP. While no explicit assumptions about the content of the audio are made, our technique has been developed and tested on high-quality recorded speech.
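
    A skeleton of the general pipeline (statistical features plus a non-linear SVM) is sketched below. The synthetic signals, the LSB-embedding helper, and the simple summary features are assumptions standing in for the paper's linear-basis features, so no detection performance should be read into it.

    ```python
    # Pipeline skeleton: extract features from audio-like signals and classify
    # clean vs LSB-embedded examples with an RBF SVM. Illustrative only; with
    # these crude features the cross-validated score stays near chance.
    import numpy as np
    from scipy.stats import skew, kurtosis
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(5)

    def lsb_embed(signal_int16, n_bits):
        """Replace the least significant bits of the first n_bits samples."""
        out = signal_int16.copy()
        bits = rng.integers(0, 2, size=n_bits, dtype=np.int16)
        out[:n_bits] = (out[:n_bits] & ~1) | bits
        return out

    def feature_vector(x):
        d = np.diff(x.astype(float))
        return [x.std(), skew(d), kurtosis(d), np.abs(d).mean()]

    X, y = [], []
    for _ in range(200):
        clean = (3000 * np.sin(np.linspace(0, 50, 4096))
                 + rng.normal(0, 300, 4096)).astype(np.int16)
        X.append(feature_vector(clean)); y.append(0)
        X.append(feature_vector(lsb_embed(clean, 2048))); y.append(1)

    clf = SVC(kernel="rbf", gamma="scale")
    print(cross_val_score(clf, np.array(X), y, cv=5).mean())
    ```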

  2. Appraisal of within- and between-laboratory reproducibility of non-radioisotopic local lymph node assay using flow cytometry, LLNA:BrdU-FCM: comparison of OECD TG429 performance standard and statistical evaluation.

    PubMed

    Yang, Hyeri; Na, Jihye; Jang, Won-Hee; Jung, Mi-Sook; Jeon, Jun-Young; Heo, Yong; Yeo, Kyung-Wook; Jo, Ji-Hoon; Lim, Kyung-Min; Bae, SeungJin

    2015-05-05

    The mouse local lymph node assay (LLNA, OECD TG429) is an alternative test replacing conventional guinea pig tests (OECD TG406) for skin sensitization testing, but the use of a radioisotopic agent, (3)H-thymidine, deters its active dissemination. The new non-radioisotopic LLNA, LLNA:BrdU-FCM, employs a non-radioisotopic analog, 5-bromo-2'-deoxyuridine (BrdU), and flow cytometry. For an analogous method, the OECD TG429 performance standard (PS) advises that two reference compounds be tested repeatedly and that the ECt (threshold) values obtained must fall within acceptable ranges to prove within- and between-laboratory reproducibility. However, these criteria are somewhat arbitrary and the sample size for ECt is less than five, raising concerns about insufficient reliability. Here, we explored various statistical methods to evaluate the reproducibility of LLNA:BrdU-FCM with the stimulation index (SI), the raw data for ECt calculation, produced from 3 laboratories. Descriptive statistics along with graphical representations of SI were presented. For inferential statistics, parametric and non-parametric methods were applied to test the reproducibility of the SI of a concurrent positive control, and the robustness of the results was investigated. Descriptive statistics and graphical representation of SI alone could illustrate the within- and between-laboratory reproducibility. Inferential statistics employing parametric and nonparametric methods drew similar conclusions. While all labs passed the within- and between-laboratory reproducibility criteria given by the OECD TG429 PS based on ECt values, statistical evaluation based on SI values showed that only two labs succeeded in achieving within-laboratory reproducibility. For those two labs that satisfied within-lab reproducibility, between-laboratory reproducibility could also be attained based on inferential as well as descriptive statistics. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  3. A rule-based software test data generator

    NASA Technical Reports Server (NTRS)

    Deason, William H.; Brown, David B.; Chang, Kai-Hsiung; Cross, James H., II

    1991-01-01

    Rule-based software test data generation is proposed as an alternative to either path/predicate analysis or random data generation. A prototype rule-based test data generator for Ada programs is constructed and compared to a random test data generator. Four Ada procedures are used in the comparison. Approximately 2000 rule-based test cases and 100,000 randomly generated test cases are automatically generated and executed. The success of the two methods is compared using standard coverage metrics. Simple statistical tests are performed, showing that even the primitive rule-based test data generation prototype is significantly better than random data generation. This result demonstrates that rule-based test data generation is feasible and shows great promise in assisting test engineers, especially when the rule base is developed further.

  4. Assessing the impacts of ethanol and isobutanol on gaseous and particulate emissions from flexible fuel vehicles.

    PubMed

    Karavalakis, Georgios; Short, Daniel; Russell, Robert L; Jung, Heejung; Johnson, Kent C; Asa-Awuku, Akua; Durbin, Thomas D

    2014-12-02

    This study investigated the effects of higher ethanol blends and an isobutanol blend on the criteria emissions, fuel economy, gaseous toxic pollutants, and particulate emissions from two flexible-fuel vehicles equipped with spark ignition engines, with one wall-guided direct injection and one port fuel injection configuration. Both vehicles were tested over triplicate Federal Test Procedure (FTP) and Unified Cycles (UC) using a chassis dynamometer. Emissions of nonmethane hydrocarbons (NMHC) and carbon monoxide (CO) showed some statistically significant reductions with higher alcohol fuels, while total hydrocarbons (THC) and nitrogen oxides (NOx) did not show strong fuel effects. Acetaldehyde emissions exhibited sharp increases with higher ethanol blends for both vehicles, whereas butyraldehyde emissions showed higher emissions for the butanol blend relative to the ethanol blends at a statistically significant level. Particulate matter (PM) mass, number, and soot mass emissions showed strong reductions with increasing alcohol content in gasoline. Particulate emissions were found to be clearly influenced by certain fuel parameters including oxygen content, hydrogen content, and aromatics content.

  5. Comparison of Value System among a Group of Military Prisoners with Controls in Tehran.

    PubMed

    Mirzamani, Seyed Mahmood

    2011-01-01

    Religious values were investigated in a group of Iranian Revolutionary Guards in Tehran. The sample consisted of official duty troops and conscripts who were in prison due to a crime. One hundred thirty-seven individuals cooperated in the project (37 official personnel and 100 conscripts). The instruments used included a demographic questionnaire containing personal data and the Allport, Vernon, and Lindzey Study of Values Test. The analysis mainly used descriptive statistical methods such as frequencies, means, and tables, together with the t-test. The results showed that religious value was lower in the criminal group than in the control group (p<.001). This study showed lower religious value scores in the criminal group, suggesting the possibility that lower religious value increases the probability of committing crimes.

  6. Validation of a modification to Performance-Tested Method 070601: Reveal Listeria Test for detection of Listeria spp. in selected foods and selected environmental samples.

    PubMed

    Alles, Susan; Peng, Linda X; Mozola, Mark A

    2009-01-01

    A modification to Performance-Tested Method (PTM) 070601, Reveal Listeria Test (Reveal), is described. The modified method uses a new media formulation, LESS enrichment broth, in single-step enrichment protocols for both foods and environmental sponge and swab samples. Food samples are enriched for 27-30 h at 30 degrees C and environmental samples for 24-48 h at 30 degrees C. Implementation of these abbreviated enrichment procedures allows test results to be obtained on a next-day basis. In testing of 14 food types in internal comparative studies with inoculated samples, there was a statistically significant difference in performance between the Reveal and reference culture [U.S. Food and Drug Administration's Bacteriological Analytical Manual (FDA/BAM) or U.S. Department of Agriculture-Food Safety and Inspection Service (USDA-FSIS)] methods for only a single food in one trial (pasteurized crab meat) at the 27 h enrichment time point, with more positive results obtained with the FDA/BAM reference method. No foods showed statistically significant differences in method performance at the 30 h time point. Independent laboratory testing of 3 foods again produced a statistically significant difference in results for crab meat at the 27 h time point; otherwise results of the Reveal and reference methods were statistically equivalent. Overall, considering both internal and independent laboratory trials, sensitivity of the Reveal method relative to the reference culture procedures in testing of foods was 85.9% at 27 h and 97.1% at 30 h. Results from 5 environmental surfaces inoculated with various strains of Listeria spp. showed that the Reveal method was more productive than the reference USDA-FSIS culture procedure for 3 surfaces (stainless steel, plastic, and cast iron), whereas results were statistically equivalent to the reference method for the other 2 surfaces (ceramic tile and sealed concrete). An independent laboratory trial with ceramic tile inoculated with L. monocytogenes confirmed the effectiveness of the Reveal method at the 24 h time point. Overall, sensitivity of the Reveal method at 24 h relative to that of the USDA-FSIS method was 153%. The Reveal method exhibited extremely high specificity, with only a single false-positive result in all trials combined for overall specificity of 99.5%.

  7. A Simple and Robust Statistical Test for Detecting the Presence of Recombination

    PubMed Central

    Bruen, Trevor C.; Philippe, Hervé; Bryant, David

    2006-01-01

    Recombination is a powerful evolutionary force that merges historically distinct genotypes. But the extent of recombination within many organisms is unknown, and even determining its presence within a set of homologous sequences is a difficult question. Here we develop a new statistic, Φw, that can be used to test for recombination. We show through simulation that our test can discriminate effectively between the presence and absence of recombination, even in diverse situations such as exponential growth (star-like topologies) and patterns of substitution rate correlation. A number of other tests, Max χ2, NSS, a coalescent-based likelihood permutation test (from LDHat), and correlation of linkage disequilibrium (both r2 and |D′|) with distance, all tend to underestimate the presence of recombination under strong population growth. Moreover, both Max χ2 and NSS falsely infer the presence of recombination under a simple model of mutation rate correlation. Results on empirical data show that our test can be used to detect recombination between closely as well as distantly related samples, regardless of the suspected rate of recombination. The results suggest that Φw is one of the best approaches to distinguish recurrent mutation from recombination in a wide variety of circumstances. PMID:16489234

  8. Comparison of Piezosurgery and Conventional Rotary Instruments for Removal of Impacted Mandibular Third Molars: A Randomized Controlled Clinical and Radiographic Trial

    PubMed Central

    Shokry, Mohamed; Aboelsaad, Nayer

    2016-01-01

    The purpose of this study was to test the effect of the surgical removal of impacted mandibular third molars using piezosurgery versus the conventional surgical technique on postoperative sequelae and bone healing. Material and Methods. This study was carried out as a randomized controlled clinical trial with a split-mouth design. Twenty patients with bilateral mandibular third molar mesioangular impaction (class II, position B) indicated for surgical extraction were treated randomly using either the piezosurgery or the conventional bur technique on each site. Duration of the procedure, postoperative edema, trismus, pain, healing, and bone density and quantity were evaluated up to 6 months postoperatively. Results. Test and control sites were compared using the paired t-test. There was a statistically significant reduction in pain and swelling at test sites, whereas the duration of the procedure was significantly longer at test sites. For bone quantity and quality, a statistically significant difference was found, with test sites showing better results. Conclusion. The piezosurgery technique improves patients' quality of life in the form of decreased postoperative pain, trismus, and swelling. Furthermore, it enhances bone quality within the extraction socket and bone quantity along the distal aspect of the mandibular second molar. PMID:27597866

  9. Can power-law scaling and neuronal avalanches arise from stochastic dynamics?

    PubMed

    Touboul, Jonathan; Destexhe, Alain

    2010-02-11

    The presence of self-organized criticality in biology is often evidenced by a power-law scaling of event size distributions, which can be measured by linear regression on logarithmic axes. We show here that such a procedure does not necessarily mean that the system exhibits self-organized criticality. We first provide an analysis of multisite local field potential (LFP) recordings of brain activity and show that event size distributions defined as negative LFP peaks can be close to power-law distributions. However, this result is not robust to change in detection threshold, or when tested using more rigorous statistical analyses such as the Kolmogorov-Smirnov test. Similar power-law scaling is observed for surrogate signals, suggesting that power-law scaling may be a generic property of thresholded stochastic processes. We next investigate this problem analytically, and show that, indeed, stochastic processes can produce spurious power-law scaling without the presence of underlying self-organized criticality. However, this power-law is only apparent in logarithmic representations, and does not survive more rigorous analysis such as the Kolmogorov-Smirnov test. The same analysis was also performed on an artificial network known to display self-organized criticality. In this case, both the graphical representations and the rigorous statistical analysis reveal with no ambiguity that the avalanche size is distributed as a power-law. We conclude that logarithmic representations can lead to spurious power-law scaling induced by the stochastic nature of the phenomenon. This apparent power-law scaling does not constitute a proof of self-organized criticality, which should be demonstrated by more stringent statistical tests.
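
    The contrast drawn above, between a straight-line fit on log-log axes and a more stringent Kolmogorov-Smirnov check, can be sketched on event sizes drawn from a simple stochastic process with no underlying power law. The exponential toy data, binning, and tail cutoff below are assumptions for illustration.

    ```python
    # (1) regression of log density on log size; (2) KS test of the same data
    # against the Pareto law implied by that fit.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)
    sizes = rng.exponential(scale=10.0, size=20_000)   # stand-in "avalanche" sizes

    # (1) slope of log density vs log size over log-spaced bins
    bins = np.logspace(np.log10(sizes.min()), np.log10(sizes.max()), 30)
    hist, _ = np.histogram(sizes, bins=bins, density=True)
    centers = np.sqrt(bins[:-1] * bins[1:])
    keep = hist > 0
    slope, intercept, r, *_ = stats.linregress(np.log10(centers[keep]),
                                               np.log10(hist[keep]))
    print(f"log-log fit: slope={slope:.2f}, r={r:.3f}")

    # (2) KS test against the implied Pareto law on the upper tail
    b = max(-slope - 1.0, 0.05)        # density ~ x^-(b+1) => shape b from slope
    xmin = np.quantile(sizes, 0.5)     # fit the upper tail only, a common choice
    tail = sizes[sizes >= xmin]
    ks, p = stats.kstest(tail, "pareto", args=(b, 0.0, xmin))
    print(f"KS against fitted power law: D={ks:.3f}, p={p:.2g}")
    ```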

  10. [Comparison of thromboelastography and routine coagulation tests for evaluation of blood coagulation function in patients].

    PubMed

    Chen, Guan-Yi; Ou Yang, Xi-Lin; Wu, Jing-Hui; Wang, Li-Hua; Yang, Jin-Hua; Gu, Li-Nan; Lu, Zhu-Jie; Zhao, Xiao-Zi

    2015-04-01

    To investigate the correlation and consistency between thromboelastography (TEG) and routine coagulation tests, and to evaluate the value of the two methods in assessing the blood coagulation of patients. TEG, routine coagulation tests, and platelet counts of 182 patients from the Intensive Care Unit (ICU) and Department of Gastroenterology in our hospital from January to September 2014 were performed and analyzed retrospectively using correlation, Kappa agreement, and chi-square analyses, and the diagnostic sensitivity and specificity of both methods in patients with bleeding were evaluated. The TEG R time showed a linear relationship with PT and with APTT (P<0.01). The TEG K value, α-Angle, and MA showed a linear relationship with fibrinogen (P<0.001), and likewise with the platelet count (P<0.001). The Kappa values of the TEG R time with PT and APTT were 0.038 (P>0.05) and 0.061 (P>0.05), respectively, and the corresponding chi-square values were 35.309 (P<0.001) and 15.848 (P<0.001). The Kappa values between fibrinogen and the TEG K value, α-Angle, and MA value were statistically significant (P<0.001), at 0.323, 0.288, and 0.427, respectively, whereas the corresponding chi-square values were not statistically significant: X2=1.091 (P=0.296), X2=1.361 (P=0.243), and X2=0.108 (P=0.742). The Kappa values of the platelet count with the TEG K value, α-Angle, and MA value were 0.379, 0.208, and 0.352, respectively, which were also statistically significant (P<0.001); the corresponding chi-square values were likewise significant (P<0.001): X2=37.5, X2=37.23, and X2=26.630. The diagnostic sensitivity of the two methods for the patients with bleeding was less than 50%. There was a significant correlation between some TEG parameters and routine coagulation tests, but the consistency was weak. Moreover, the diagnostic sensitivity of the two methods in patients with bleeding was low. It was concluded that TEG cannot replace the conventional coagulation tests, and it remains uncertain which method better reflects the risk of bleeding.

  11. The Effect of Compressive Loading on the Fatigue Lifetime of Graphite/ Epoxy Laminates

    DTIC Science & Technology

    1979-10-01

    [Extraction fragments of the report's list of figures and abstract: the listed figures include the un-notched and notched specimen configurations, the locations of thickness and width measurements, an overall view of the composite compression test grips in the universal testing machine, the specimen positioning device, and the "full-fixity" apparatus with auxiliary platens. The surviving abstract text refers to the accumulation of a statistically significant data base and to a previous research study [1] showing that graphite/epoxy composites under constant ...]

  12. Toward a perceptual image quality assessment of color quantized images

    NASA Astrophysics Data System (ADS)

    Frackiewicz, Mariusz; Palus, Henryk

    2018-04-01

    Color image quantization is an important operation in the field of color image processing. In this paper, we consider new perceptual image quality metrics for the assessment of quantized images. These types of metrics, e.g., DSCSI, MDSIs, MDSIm, and HPSI, achieve the highest correlation coefficients with MOS in tests on six publicly available image databases. Research was limited to images distorted by two types of compression: JPG and JPG2K. Statistical analysis of the correlation coefficients based on the Friedman test and post-hoc procedures showed that the differences between the four new perceptual metrics are not statistically significant.
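
    A sketch of the statistical comparison described above is given below, using a hypothetical matrix of per-database correlations with MOS; the numbers are illustrative and were chosen so that, as in the study, the omnibus test does not reject.

    ```python
    # Friedman test across metrics on per-database correlation-with-MOS scores,
    # with a Bonferroni-corrected pairwise follow-up if the omnibus test rejects.
    import numpy as np
    from itertools import combinations
    from scipy import stats

    # rows = image databases (blocks), columns = metrics
    scores = np.array([
        # DSCSI  MDSIs  MDSIm  HPSI   <- hypothetical correlations with MOS
        [0.91,  0.92,  0.90,  0.93],
        [0.90,  0.88,  0.89,  0.89],
        [0.93,  0.94,  0.94,  0.92],
        [0.90,  0.91,  0.92,  0.90],
        [0.88,  0.87,  0.86,  0.89],
        [0.92,  0.93,  0.92,  0.93],
    ])

    stat, p = stats.friedmanchisquare(*scores.T)
    print(f"Friedman chi2={stat:.2f}, p={p:.3f}")

    if p < 0.05:   # post-hoc pairwise comparisons only after a significant omnibus test
        m = scores.shape[1]
        alpha = 0.05 / (m * (m - 1) / 2)
        for i, j in combinations(range(m), 2):
            _, pw = stats.wilcoxon(scores[:, i], scores[:, j])
            print(f"metrics {i} vs {j}: Wilcoxon p={pw:.3f} (threshold {alpha:.3f})")
    ```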

  13. Impairment of Concept Formation Ability in Children with ADHD: Comparisons between Lower Grades and Higher Grades

    PubMed Central

    Hong, Hye Jeong; Kim, Jin Sung; Seo, Wan Seok; Koo, Bon Hoon; Bai, Dai Seg; Jeong, Jin Young

    2010-01-01

    Objective We investigated executive functions (EFs), as evaluated by the Wisconsin Card Sorting Test (WCST) and other EF measures, in lower-grade (LG) versus higher-grade (HG) elementary-school-age attention deficit hyperactivity disorder (ADHD) children. Methods We classified a sample of 112 ADHD children into 4 groups (composed of 28 each) based on age (LG vs. HG) and WCST performance [lower vs. higher performance on the WCST, defined by the number of completed categories (CC)]. Participants in each group were matched according to age, gender, ADHD subtype, and intelligence. We used the Wechsler Intelligence Scale for Children, 3rd edition, to test intelligence and the Computerized Neurocognitive Function Test-IV, which included the WCST, to test EF. Results Comparisons of EF scores in LG ADHD children showed statistically significant differences in backward digit span, some verbal learning scores, including all memory scores, and Stroop test scores. However, comparisons of EF scores in HG ADHD children did not show any statistically significant differences. Correlation analyses of the CC and EF variables and stepwise multiple regression analysis in LG ADHD children showed that a combination of the backward forms of the Digit Span and Visual Span tests in lower-performance ADHD participants significantly predicted the number of CC (R2=0.273, p<0.001). Conclusion This study suggests that the design of any battery of neuropsychological tests for measuring EF in ADHD children should first consider age before interpreting developmental variations and neuropsychological test results. Researchers should consider the dynamics of relationships within EF, as measured by neuropsychological tests. PMID:20927306

  14. Migraine patients consistently show abnormal vestibular bedside tests.

    PubMed

    Maranhão, Eliana Teixeira; Maranhão-Filho, Péricles; Luiz, Ronir Raggio; Vincent, Maurice Borges

    2016-01-01

    Migraine and vertigo are common disorders, with lifetime prevalences of 16% and 7% respectively, and co-morbidity around 3.2%. Vestibular syndromes and dizziness occur more frequently in migraine patients. We investigated bedside clinical signs indicative of vestibular dysfunction in migraineurs, to test the hypothesis that vestibulo-ocular reflex, vestibulo-spinal reflex, and fall risk (FR) responses as measured by 14 bedside tests are abnormal in migraineurs without vertigo, as compared with controls. This was a cross-sectional study including sixty individuals: thirty migraineurs (25 women, 19-60 years old) and thirty gender- and age-matched healthy controls. Migraineurs showed a tendency to perform worse in almost all tests, although only the tandem Romberg test differed statistically from controls. A combination of four abnormal tests better discriminated the two groups (93.3% specificity). Migraine patients consistently showed abnormal vestibular bedside tests when compared with controls.

  15. Shear bond strength of orthodontic color-change adhesives with different light-curing times

    PubMed Central

    Bayani, Shahin; Ghassemi, Amirreza; Manafi, Safa; Delavarian, Mohadeseh

    2015-01-01

    Background: The purpose of this study was to evaluate the effect of light-curing time on the shear bond strength (SBS) of two orthodontic color-change adhesives (CCAs). Materials and Methods: A total of 72 extracted premolars were randomly assigned into 6 groups of 12 teeth each. Subsequent to primer application, a metal bracket was bonded to the buccal surface using an orthodontic adhesive. Two CCAs (Greengloo and Transbond Plus) were tested and one conventional light-cured adhesive (Resilience) served as control. For each adhesive, the specimens were light-cured for two different times of 20 and 40 s. All the specimens underwent mechanical testing using a universal testing machine to measure the SBS. The adhesive remnant index (ARI) was used to assess the remnant adhesive material on the tooth surface. All statistical analyses were performed using SPSS software. The significance level for all statistical tests was set at P ≤ 0.05. Results: The SBSs of the tested groups were in the range of 14.05-31.25 MPa. Greengloo adhesive showed the highest SBS values when light-cured for 40 s, and Transbond Plus adhesive showed the lowest values when light-cured for 20 s. ARI scores of Transbond Plus adhesive were significantly higher than those of controls, while other differences in ARI values were not significant. Conclusion: Within the limitations of this study, decreasing the light-curing time from 40 to 20 s decreased the SBS of the tested adhesives; however, this decline in SBS was statistically significant only in Transbond Plus adhesive. PMID:26005468

  16. Study of statistical coding for digital TV

    NASA Technical Reports Server (NTRS)

    Gardenhire, L. W.

    1972-01-01

    The results are presented for a detailed study to determine a pseudo-optimum statistical code to be installed in a digital TV demonstration test set. Studies of source encoding were undertaken, using redundancy removal techniques in which the picture is reproduced within a preset tolerance. A method of source encoding, which preliminary studies show to be encouraging, is statistical encoding. A pseudo-optimum code was defined and the associated performance of the code was determined. The format was fixed at 525 lines per frame, 30 frames per second, as per commercial standards.
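
    Statistical (entropy) coding of the kind referred to above can be illustrated with a small Huffman coder; the symbol frequencies are an illustrative assumption, not the study's picture statistics.

    ```python
    # Build a Huffman prefix code from symbol frequencies and compare the average
    # code length with a fixed-length code.
    import heapq
    from collections import Counter

    def huffman_code(freqs):
        """Return a prefix code (symbol -> bitstring) for the given frequencies."""
        heap = [[f, i, {s: ""}] for i, (s, f) in enumerate(freqs.items())]
        heapq.heapify(heap)
        i = len(heap)
        while len(heap) > 1:
            f1, _, c1 = heapq.heappop(heap)
            f2, _, c2 = heapq.heappop(heap)
            merged = {s: "0" + b for s, b in c1.items()}
            merged.update({s: "1" + b for s, b in c2.items()})
            heapq.heappush(heap, [f1 + f2, i, merged]); i += 1
        return heap[0][2]

    # hypothetical distribution of quantized pixel-difference symbols
    samples = [0]*700 + [1]*120 + [-1]*110 + [2]*40 + [-2]*20 + [3]*10
    code = huffman_code(Counter(samples))
    bits = sum(len(code[s]) for s in samples)
    print(code)
    print(f"{bits / len(samples):.2f} bits/symbol vs 3 bits for a fixed-length code")
    ```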

  17. Statistics of Scientific Procedures on Living Animals Great Britain 2015 - highlighting an ongoing upward trend in animal use and missed opportunities.

    PubMed

    Hudson-Shore, Michelle

    2016-12-01

    The Annual Statistics of Scientific Procedures on Living Animals Great Britain 2015 indicate that the Home Office were correct in recommending that caution should be exercised when interpreting the 2014 data as an apparent decline in animal experiments. The 2015 report shows that, as the changes to the format of the annual statistics have become more familiar and less problematic, there has been a re-emergence of the upward trend in animal research and testing in Great Britain. The 2015 statistics report an increase in animal procedures (up to 4,142,631) and in the number of animals used (up to 4,069,349). This represents 1% more than the totals in 2013, and a 7% increase on the procedures reported in 2014. This paper details an analysis of these most recent statistics, providing information on overall animal use and highlighting specific issues associated with genetically-altered animals, dogs and primates. It also reflects on areas of the new format that have previously been highlighted as being problematic, and concludes with a discussion about the use of animals in regulatory research and testing, and how there are significant missed opportunities for replacing some of the animal-based tests in this area. 2016 FRAME.

  18. Story Based Activities Enhance Literacy Skills in Preschool Children

    ERIC Educational Resources Information Center

    Yazici, Elçin; Bolay, Hayrunnisa

    2017-01-01

    We investigated the impact of story-based activities on literacy skills in pre-school children. The efficacy of the story-based activities program was tested with a literacy skills survey test. Results showed that the scores for overall literacy skills and all subset skills in the study group (n = 45) were statistically significantly higher than the…

  19. A Comparison of Two Tests for the Significance of a Mean Vector.

    DTIC Science & Technology

    1978-01-01

    rejected as soon as a component test in the sequence shows significance. It is well known (Roy 1958; Roy, Gnanadesikan and Srivastava 1971 (p... confidence bounds", Annals of Mathematical Statistics, 29, 491-503. [14] Roy, S.N., Gnanadesikan, R., and Srivastava, J.N. (1971). Analysis and

  20. An Indirect Test of Children's Influence on Efficiencies in Parental Consumer Behavior.

    ERIC Educational Resources Information Center

    Polachek, Dora E.; Polachek, Solomon W.

    1989-01-01

    Results of a statistical test show that not only do children have an influence on parental consumption, but also that the influence is beneficial. Not accounting for this benefit could cause underestimation of the rate of return to education or benefits of governmental programs such as Head Start. (JOW)

  1. ANTIPLAQUE AND ANTIGINGIVITIS EFFECT OF LIPPIA SIDOIDES. A DOUBLE-BLIND CLINICAL STUDY IN HUMANS

    PubMed Central

    Rodrigues, Ítalo Sarto Carvalho; Tavares, Vinícius Nascimento; Pereira, Sérgio Luís da Silva; da Costa, Flávio Nogueira

    2009-01-01

    Objectives: The antiplaque and antigingivitis effect of Lippia Sidoides (LS) was evaluated in this in vivo investigation. Material and Methods: Twenty-three subjects participated in a cross-over, double-blind clinical study, using a 21-day partial-mouth experimental model of gingivitis. A tooth shield was constructed for each volunteer to prevent brushing of the 4 experimental posterior teeth in the lower left quadrant. The subjects were randomly assigned initially to use either the placebo gel (control group) or the test gel containing 10% LS (test group). Results: The clinical results showed statistically significant differences for the plaque index (PLI) (p<0.01) between days 0 and 21 in both groups; however, only the control group showed a statistically significant difference (p<0.01) for the bleeding (IB) and gingival (GI) indices within the experimental period of 21 days. On day 21, the test group presented significantly better results than the control group with regard to the GI (p<0.05). Conclusions: The test gel containing 10% LS was effective in the control of gingivitis. PMID:19936516

  2. Retentive force and microleakage of stainless steel crowns cemented with three different luting agents.

    PubMed

    Yilmaz, Yucel; Dalmis, Anya; Gurbuz, Taskin; Simsek, Sera

    2004-12-01

    The aim of this investigation was to compare the tensile strength, microleakage, and scanning electron microscope (SEM) evaluations of stainless steel crowns (SSCs) cemented using different adhesive cements on primary molars. Sixty-three extracted primary first molars were used. Tooth preparations were done. Crowns were altered and adapted for the purpose of the investigation, and then cemented using glass ionomer cement (Aqua Meron), resin-modified cement (RelyX Luting), and resin cement (Panavia F) on the prepared teeth. Samples were divided into two groups of 30 samples each for the tensile strength and microleakage tests. The remaining three samples were used for SEM evaluation. Data were analyzed with one-way ANOVA and the Tukey test. ANOVA revealed significant differences among the groups for both the tensile strength and microleakage tests (p < 0.05). The Tukey test showed a statistically significant difference between Panavia F and RelyX Luting (p < 0.05), but none between the others (p > 0.05). This study showed that the higher the retentive force a crown possessed, the lower would be the possibility of microleakage.

  3. What do results from coordinate-based meta-analyses tell us?

    PubMed

    Albajes-Eizagirre, Anton; Radua, Joaquim

    2018-08-01

    Coordinate-based meta-analyses (CBMA) methods, such as Activation Likelihood Estimation (ALE) and Seed-based d Mapping (SDM), have become an invaluable tool for summarizing the findings of voxel-based neuroimaging studies. However, the progressive sophistication of these methods may have concealed two particularities of their statistical tests. Common univariate voxelwise tests (such as the t/z-tests used in SPM and FSL) detect voxels that activate, or voxels that show differences between groups. Conversely, the tests conducted in CBMA test for "spatial convergence" of findings, i.e., they detect regions where studies report "more peaks than in most regions", regions that activate "more than most regions do", or regions that show "larger differences between groups than most regions do". The first particularity is that these tests rely on two spatial assumptions (voxels are independent and have the same probability to have a "false" peak), whose violation may make their results either conservative or liberal, though fortunately current versions of ALE, SDM and some other methods consider these assumptions. The second particularity is that the use of these tests involves an important paradox: the statistical power to detect a given effect is higher if there are no other effects in the brain, and lower in the presence of multiple effects. Copyright © 2018 Elsevier Inc. All rights reserved.

  4. A Monte Carlo Study of Levene's Test of Homogeneity of Variance: Empirical Frequencies of Type I Error in Normal Distributions.

    ERIC Educational Resources Information Center

    Neel, John H.; Stallings, William M.

    An influential statistics text recommends Levene's test for homogeneity of variance. A recent note suggests that Levene's test is upwardly biased for small samples. Another report shows inflated alpha estimates and low power. Neither study utilized more than two sample sizes. This Monte Carlo study involved sampling from a normal population for…
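
    As a quick illustration of the kind of Monte Carlo check described above, the following Python sketch (assuming NumPy and SciPy; the sample sizes, replication count and nominal alpha are arbitrary illustrative choices, not those of the study) estimates the empirical Type I error rate of Levene's test when both samples are drawn from the same normal population.

    # Monte Carlo estimate of the empirical Type I error of Levene's test
    # under equal variances; any rejection is a Type I error by construction.
    import numpy as np
    from scipy import stats

    def levene_type1_error(n1=10, n2=10, reps=5000, alpha=0.05, seed=1):
        rng = np.random.default_rng(seed)
        rejections = 0
        for _ in range(reps):
            x = rng.normal(0.0, 1.0, n1)   # both groups come from the same
            y = rng.normal(0.0, 1.0, n2)   # normal population
            _, p = stats.levene(x, y)      # Levene's test for equal variances
            rejections += (p < alpha)
        return rejections / reps

    print(levene_type1_error())  # should land near the nominal 0.05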

  5. Design and analysis of multiple diseases genome-wide association studies without controls.

    PubMed

    Chen, Zhongxue; Huang, Hanwen; Ng, Hon Keung Tony

    2012-11-15

    In genome-wide association studies (GWAS), multiple diseases with shared controls is one of the case-control study designs. If data obtained from these studies are appropriately analyzed, this design can have several advantages such as improving statistical power in detecting associations and reducing the time and cost in the data collection process. In this paper, we propose a study design for GWAS which involves multiple diseases but without controls. We also propose a corresponding statistical data analysis strategy for GWAS with multiple diseases but no controls. Through a simulation study, we show that the statistical association test with the proposed study design is more powerful than the test with a single disease sharing common controls, and it has comparable power to the overall test based on the whole dataset including the controls. We also apply the proposed method to a real GWAS dataset to illustrate the methodologies and the advantages of the proposed design. Some possible limitations of this study design and testing method and their solutions are also discussed. Our findings indicate that the proposed study design and statistical analysis strategy could be more efficient than the usual case-control GWAS as well as those with shared controls. Copyright © 2012 Elsevier B.V. All rights reserved.

  6. Blonanserin – A Novel Antianxiety and Antidepressant Drug? An Experimental Study

    PubMed Central

    Limaye, Ramchandra Prabhakar; Patil, Aditi Nitin

    2016-01-01

    Introduction Many psychiatric disorders show signs and symptoms of anxiety and depression. A drug with both effects and fewer adverse effects is always desired. Blonanserin is a novel drug with a postulated effect on anxiety and depression. Aim The study aimed to evaluate the effect of Blonanserin on anxiety and depression in animal models. Materials and Methods The antianxiety and antidepressant effects were evaluated using the elevated plus maze test and the forced swimming test. Animal ethics protocols were followed strictly. A total of 50 rats (10 rats per group) were used for each test. As control drugs, diazepam and imipramine were used in the elevated plus maze and forced swimming tests, respectively. Blonanserin was tested at 3 doses: 0.075, 0.2 and 0.8 mg. These doses were selected from previous references as well as by extrapolating human doses. Results This study showed an antianxiety effect of Blonanserin comparable to diazepam, which was statistically significant. The optimal effect was observed with 0.075 mg, followed by 0.2 and 0.8 mg. It also showed an antidepressant effect which was statistically significant. The optimal effect was observed at the 0.2 mg dose. Conclusion The results showed that at a dose range of 0.075 to 0.2 mg Blonanserin has the potential to exert an adjuvant antianxiety and antidepressant activity in animal models. In order to extrapolate this to patients, longer clinical studies with comparable doses should be planned. The present study underlines the potential of Blonanserin as a novel drug for such studies. PMID:27790460

  7. Blonanserin - A Novel Antianxiety and Antidepressant Drug? An Experimental Study.

    PubMed

    Limaye, Ramchandra Prabhakar; Patil, Aditi Nitin

    2016-09-01

    Many psychiatric disorders show signs and symptoms of anxiety and depression. A drug with both effects and fewer adverse effects is always desired. Blonanserin is a novel drug with a postulated effect on anxiety and depression. The study aimed to evaluate the effect of Blonanserin on anxiety and depression in animal models. The antianxiety and antidepressant effects were evaluated using the elevated plus maze test and the forced swimming test. Animal ethics protocols were followed strictly. A total of 50 rats (10 rats per group) were used for each test. As control drugs, diazepam and imipramine were used in the elevated plus maze and forced swimming tests, respectively. Blonanserin was tested at 3 doses: 0.075, 0.2 and 0.8 mg. These doses were selected from previous references as well as by extrapolating human doses. This study showed an antianxiety effect of Blonanserin comparable to diazepam, which was statistically significant. The optimal effect was observed with 0.075 mg, followed by 0.2 and 0.8 mg. It also showed an antidepressant effect which was statistically significant. The optimal effect was observed at the 0.2 mg dose. The results showed that at a dose range of 0.075 to 0.2 mg Blonanserin has the potential to exert an adjuvant antianxiety and antidepressant activity in animal models. In order to extrapolate this to patients, longer clinical studies with comparable doses should be planned. The present study underlines the potential of Blonanserin as a novel drug for such studies.

  8. ArraySolver: an algorithm for colour-coded graphical display and Wilcoxon signed-rank statistics for comparing microarray gene expression data.

    PubMed

    Khan, Haseeb Ahmad

    2004-01-01

    The massive surge in the production of microarray data poses a great challenge for proper analysis and interpretation. In recent years numerous computational tools have been developed to extract meaningful interpretation of microarray gene expression data. However, a convenient tool for two-groups comparison of microarray data is still lacking and users have to rely on commercial statistical packages that might be costly and require special skills, in addition to extra time and effort for transferring data from one platform to another. Various statistical methods, including the t-test, analysis of variance, Pearson test and Mann-Whitney U test, have been reported for comparing microarray data, whereas the utilization of the Wilcoxon signed-rank test, which is an appropriate test for two-groups comparison of gene expression data, has largely been neglected in microarray studies. The aim of this investigation was to build an integrated tool, ArraySolver, for colour-coded graphical display and comparison of gene expression data using the Wilcoxon signed-rank test. The results of software validation showed similar outputs with ArraySolver and SPSS for large datasets, whereas the former program appeared to be more accurate for 25 or fewer pairs (n ≤ 25), suggesting its potential application in analysing molecular signatures that usually contain small numbers of genes. The main advantages of ArraySolver are easy data selection, convenient report format, accurate statistics and the familiar Excel platform.

  9. ArraySolver: An Algorithm for Colour-Coded Graphical Display and Wilcoxon Signed-Rank Statistics for Comparing Microarray Gene Expression Data

    PubMed Central

    2004-01-01

    The massive surge in the production of microarray data poses a great challenge for proper analysis and interpretation. In recent years numerous computational tools have been developed to extract meaningful interpretation of microarray gene expression data. However, a convenient tool for two-groups comparison of microarray data is still lacking and users have to rely on commercial statistical packages that might be costly and require special skills, in addition to extra time and effort for transferring data from one platform to another. Various statistical methods, including the t-test, analysis of variance, Pearson test and Mann–Whitney U test, have been reported for comparing microarray data, whereas the utilization of the Wilcoxon signed-rank test, which is an appropriate test for two-groups comparison of gene expression data, has largely been neglected in microarray studies. The aim of this investigation was to build an integrated tool, ArraySolver, for colour-coded graphical display and comparison of gene expression data using the Wilcoxon signed-rank test. The results of software validation showed similar outputs with ArraySolver and SPSS for large datasets, whereas the former program appeared to be more accurate for 25 or fewer pairs (n ≤ 25), suggesting its potential application in analysing molecular signatures that usually contain small numbers of genes. The main advantages of ArraySolver are easy data selection, convenient report format, accurate statistics and the familiar Excel platform. PMID:18629036
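
    The two ArraySolver records above describe a paired two-group comparison via the Wilcoxon signed-rank test. As a minimal sketch of that statistic itself (not of ArraySolver's implementation), the following Python snippet uses SciPy on made-up expression values for a small gene set.

    # Wilcoxon signed-rank test on paired (control vs. treated) expression
    # values; the numbers are invented purely for illustration.
    import numpy as np
    from scipy import stats

    control = np.array([5.1, 4.8, 6.0, 5.5, 4.9, 5.2, 6.1, 5.0])
    treated = np.array([6.3, 5.9, 6.8, 6.0, 5.4, 6.5, 7.0, 5.8])

    # Tests whether the paired differences are symmetric about zero
    stat, p_value = stats.wilcoxon(treated, control)
    print(f"W = {stat:.1f}, p = {p_value:.4f}")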

  10. Evaluation of surface detail reproduction, dimensional stability and gypsum compatibility of monophase polyvinyl-siloxane and polyether elastomeric impression materials under dry and moist conditions

    PubMed Central

    Vadapalli, Sriharsha Babu; Atluri, Kaleswararao; Putcha, Madhu Sudhan; Kondreddi, Sirisha; Kumar, N. Suman; Tadi, Durga Prasad

    2016-01-01

    Objectives: This in vitro study was designed to compare polyvinyl-siloxane (PVS) monophase and polyether (PE) monophase materials under dry and moist conditions for properties such as surface detail reproduction, dimensional stability, and gypsum compatibility. Materials and Methods: Surface detail reproduction was evaluated using two criteria. Dimensional stability was evaluated according to American Dental Association (ADA) specification no. 19. Gypsum compatibility was assessed by two criteria. All the samples were evaluated, and the data obtained were analyzed by a two-way analysis of variance (ANOVA) and Pearson's Chi-square tests. Results: When surface detail reproduction was evaluated with modification of ADA specification no. 19, both the groups under the two conditions showed no significant difference statistically. When evaluated macroscopically both the groups showed statistically significant difference. Results for dimensional stability showed that the deviation from standard was significant among the two groups, where Aquasil group showed significantly more deviation compared to Impregum group (P < 0.001). Two conditions also showed significant difference, with moist conditions showing significantly more deviation compared to dry condition (P < 0.001). The results of gypsum compatibility when evaluated with modification of ADA specification no. 19 and by giving grades to the casts for both the groups and under two conditions showed no significant difference statistically. Conclusion: Regarding dimensional stability, both impregum and aquasil performed better in dry condition than in moist; impregum performed better than aquasil in both the conditions. When tested for surface detail reproduction according to ADA specification, under dry and moist conditions both of them performed almost equally. When tested according to macroscopic evaluation, impregum and aquasil performed significantly better in dry condition compared to moist condition. In dry condition, both the materials performed almost equally. In moist condition, aquasil performed significantly better than impregum. Regarding gypsum compatibility according to ADA specification, in dry condition both the materials performed almost equally, and in moist condition aquasil performed better than impregum. When tested by macroscopic evaluation, impregum performed better than aquasil in both the conditions. PMID:27583217

  11. Evaluation of surface detail reproduction, dimensional stability and gypsum compatibility of monophase polyvinyl-siloxane and polyether elastomeric impression materials under dry and moist conditions.

    PubMed

    Vadapalli, Sriharsha Babu; Atluri, Kaleswararao; Putcha, Madhu Sudhan; Kondreddi, Sirisha; Kumar, N Suman; Tadi, Durga Prasad

    2016-01-01

    This in vitro study was designed to compare polyvinyl-siloxane (PVS) monophase and polyether (PE) monophase materials under dry and moist conditions for properties such as surface detail reproduction, dimensional stability, and gypsum compatibility. Surface detail reproduction was evaluated using two criteria. Dimensional stability was evaluated according to American Dental Association (ADA) specification no. 19. Gypsum compatibility was assessed by two criteria. All the samples were evaluated, and the data obtained were analyzed by a two-way analysis of variance (ANOVA) and Pearson's Chi-square tests. When surface detail reproduction was evaluated with modification of ADA specification no. 19, both the groups under the two conditions showed no significant difference statistically. When evaluated macroscopically both the groups showed statistically significant difference. Results for dimensional stability showed that the deviation from standard was significant among the two groups, where Aquasil group showed significantly more deviation compared to Impregum group (P < 0.001). Two conditions also showed significant difference, with moist conditions showing significantly more deviation compared to dry condition (P < 0.001). The results of gypsum compatibility when evaluated with modification of ADA specification no. 19 and by giving grades to the casts for both the groups and under two conditions showed no significant difference statistically. Regarding dimensional stability, both impregum and aquasil performed better in dry condition than in moist; impregum performed better than aquasil in both the conditions. When tested for surface detail reproduction according to ADA specification, under dry and moist conditions both of them performed almost equally. When tested according to macroscopic evaluation, impregum and aquasil performed significantly better in dry condition compared to moist condition. In dry condition, both the materials performed almost equally. In moist condition, aquasil performed significantly better than impregum. Regarding gypsum compatibility according to ADA specification, in dry condition both the materials performed almost equally, and in moist condition aquasil performed better than impregum. When tested by macroscopic evaluation, impregum performed better than aquasil in both the conditions.

  12. Simple Statistical Model to Quantify Maximum Expected EMC in Spacecraft and Avionics Boxes

    NASA Technical Reports Server (NTRS)

    Trout, Dawn H.; Bremner, Paul

    2014-01-01

    This study shows cumulative distribution function (CDF) comparisons of composite fairing electromagnetic field data obtained by computational electromagnetic 3D full wave modeling and laboratory testing. Test and model data correlation is shown. In addition, this presentation shows the application of the power balance approach and an extension of this method to predict the variance and maximum expected mean of the E-field data. This is valuable for large-scale evaluations of transmission inside cavities.

  13. Explorations in statistics: hypothesis tests and P values.

    PubMed

    Curran-Everett, Douglas

    2009-06-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This second installment of Explorations in Statistics delves into test statistics and P values, two concepts fundamental to the test of a scientific null hypothesis. The essence of a test statistic is that it compares what we observe in the experiment to what we expect to see if the null hypothesis is true. The P value associated with the magnitude of that test statistic answers this question: if the null hypothesis is true, what proportion of possible values of the test statistic are at least as extreme as the one I got? Although statisticians continue to stress the limitations of hypothesis tests, there are two realities we must acknowledge: hypothesis tests are ingrained within science, and the simple test of a null hypothesis can be useful. As a result, it behooves us to explore the notions of hypothesis tests, test statistics, and P values.
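
    The idea that a test statistic measures how far the observation lies from what the null hypothesis predicts, and that the P value is the proportion of possible statistic values at least as extreme, can be made concrete with a one-sample t-test. The sketch below (Python with NumPy/SciPy; the data and null mean are invented) computes the statistic by hand and checks it against the library routine.

    # Hand-computed one-sample t statistic and two-sided P value, compared
    # with scipy's built-in routine. Data and the null mean are illustrative.
    import numpy as np
    from scipy import stats

    x = np.array([101.2, 99.8, 103.4, 100.9, 102.7, 98.5, 104.1, 101.6])
    mu0 = 100.0                                               # null-hypothesis mean

    t = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(len(x)))  # test statistic
    p = 2 * stats.t.sf(abs(t), df=len(x) - 1)                 # two-sided P value
    print(t, p)
    print(stats.ttest_1samp(x, mu0))                          # same result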

  14. Effect of Internet-Based Cognitive Apprenticeship Model (i-CAM) on Statistics Learning among Postgraduate Students.

    PubMed

    Saadati, Farzaneh; Ahmad Tarmizi, Rohani; Mohd Ayub, Ahmad Fauzi; Abu Bakar, Kamariah

    2015-01-01

    Because students' ability to use statistics, which is mathematical in nature, is one of the concerns of educators, embedding the pedagogical characteristics of learning within an e-learning system is 'value added' because it facilitates the conventional method of learning mathematics. Many researchers emphasize the effectiveness of cognitive apprenticeship in learning and problem solving in the workplace. In a cognitive apprenticeship learning model, skills are learned within a community of practitioners through observation of modelling and then practice plus coaching. This study utilized an internet-based Cognitive Apprenticeship Model (i-CAM) in three phases and evaluated its effectiveness for improving statistics problem-solving performance among postgraduate students. The results showed that, when compared to the conventional mathematics learning model, the i-CAM could significantly promote students' problem-solving performance at the end of each phase. In addition, the differences in students' test scores were statistically significant after controlling for the pre-test scores. The findings conveyed in this paper confirmed the considerable value of i-CAM in the improvement of statistics learning for non-specialized postgraduate students.

  15. Decomposing the Site Frequency Spectrum: The Impact of Tree Topology on Neutrality Tests.

    PubMed

    Ferretti, Luca; Ledda, Alice; Wiehe, Thomas; Achaz, Guillaume; Ramos-Onsins, Sebastian E

    2017-09-01

    We investigate the dependence of the site frequency spectrum on the topological structure of genealogical trees. We show that basic population genetic statistics, for instance, estimators of θ or neutrality tests such as Tajima's D, can be decomposed into components of waiting times between coalescent events and of tree topology. Our results clarify the relative impact of the two components on these statistics. We provide a rigorous interpretation of positive or negative values of an important class of neutrality tests in terms of the underlying tree shape. In particular, we show that values of Tajima's D and Fay and Wu's H depend in a direct way on a peculiar measure of tree balance, which is mostly determined by the root balance of the tree. We present a new test for selection in the same class as Fay and Wu's H and discuss its interpretation and power. Finally, we determine the trees corresponding to extreme expected values of these neutrality tests and present formulas for these extreme values as a function of sample size and number of segregating sites. Copyright © 2017 by the Genetics Society of America.
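
    Since the abstract emphasizes that neutrality tests such as Tajima's D are functions of the site frequency spectrum, a small worked example may help. The Python sketch below (NumPy only; the SFS counts are invented for a sample of n = 10 sequences) computes Tajima's D from an unfolded SFS using the standard 1989 formulas; it is a generic textbook implementation, not the decomposition developed in the paper.

    # Tajima's D from an unfolded site frequency spectrum (SFS).
    # sfs[i-1] = number of sites where the derived allele occurs i times,
    # for i = 1 .. n-1, in a sample of n sequences.
    import numpy as np

    def tajimas_d(sfs):
        n = len(sfs) + 1
        i = np.arange(1, n)
        S = sfs.sum()                                   # segregating sites
        a1 = np.sum(1.0 / i)
        a2 = np.sum(1.0 / i**2)
        theta_w = S / a1                                # Watterson's estimator
        pi = np.sum(sfs * i * (n - i)) / (n * (n - 1) / 2.0)  # pairwise diversity
        b1 = (n + 1) / (3.0 * (n - 1))
        b2 = 2.0 * (n**2 + n + 3) / (9.0 * n * (n - 1))
        c1 = b1 - 1.0 / a1
        c2 = b2 - (n + 2) / (a1 * n) + a2 / a1**2
        e1 = c1 / a1
        e2 = c2 / (a1**2 + a2)
        var = e1 * S + e2 * S * (S - 1)
        return (pi - theta_w) / np.sqrt(var)

    sfs = np.array([12, 7, 5, 3, 2, 2, 1, 1, 1])        # n = 10, illustrative counts
    print(tajimas_d(sfs))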

  16. Testing for nonlinearity in time series: The method of surrogate data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Theiler, J.; Galdrikian, B.; Longtin, A.

    1991-01-01

    We describe a statistical approach for identifying nonlinearity in time series; in particular, we want to avoid claims of chaos when simpler models (such as linearly correlated noise) can explain the data. The method requires a careful statement of the null hypothesis which characterizes a candidate linear process, the generation of an ensemble of "surrogate" data sets which are similar to the original time series but consistent with the null hypothesis, and the computation of a discriminating statistic for the original and for each of the surrogate data sets. The idea is to test the original time series against the null hypothesis by checking whether the discriminating statistic computed for the original time series differs significantly from the statistics computed for each of the surrogate sets. We present algorithms for generating surrogate data under various null hypotheses, and we show the results of numerical experiments on artificial data using correlation dimension, Lyapunov exponent, and forecasting error as discriminating statistics. Finally, we consider a number of experimental time series -- including sunspots, electroencephalogram (EEG) signals, and fluid convection -- and evaluate the statistical significance of the evidence for nonlinear structure in each case. 56 refs., 8 figs.
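
    A bare-bones version of the surrogate-data recipe described above can be sketched in a few lines of Python (NumPy only). Phase-randomized surrogates preserve the linear correlation structure of the series, and the discriminating statistic used here (a simple time-asymmetry measure) and the AR(1) test series are arbitrary illustrative choices, not those of the paper.

    # Surrogate-data test: compare a discriminating statistic computed on the
    # original series with its distribution over phase-randomized surrogates.
    import numpy as np

    rng = np.random.default_rng(0)

    def phase_randomized_surrogate(x, rng):
        """Randomize Fourier phases while keeping the amplitude spectrum."""
        spectrum = np.fft.rfft(x)
        phases = rng.uniform(0, 2 * np.pi, len(spectrum))
        randomized = np.abs(spectrum) * np.exp(1j * phases)
        randomized[0] = spectrum[0]      # keep the mean (DC term) unchanged
        randomized[-1] = spectrum[-1]    # keep the Nyquist term unchanged
        return np.fft.irfft(randomized, n=len(x))

    def time_asymmetry(x):
        """Simple nonlinear discriminating statistic: cubed forward differences."""
        return np.mean(np.diff(x) ** 3)

    # Linearly correlated noise (AR(1)), so the null should not be rejected
    noise = rng.normal(size=1024)
    x = np.zeros(1024)
    for t in range(1, 1024):
        x[t] = 0.8 * x[t - 1] + noise[t]

    stat_obs = time_asymmetry(x)
    stat_surr = np.array([time_asymmetry(phase_randomized_surrogate(x, rng))
                          for _ in range(99)])
    # Rank of the observed statistic among the surrogates gives a rough p-value
    p = (np.sum(np.abs(stat_surr) >= np.abs(stat_obs)) + 1) / (len(stat_surr) + 1)
    print(stat_obs, p)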

  17. Anonymous HIV testing: the impact of availability on demand in Arizona.

    PubMed Central

    Hirano, D; Gellert, G A; Fleming, K; Boyd, D; Englender, S J; Hawks, H

    1994-01-01

    The purpose of this study was to evaluate the impact of anonymous testing availability on human immunodeficiency virus (HIV) test demand in Arizona. Testing patterns before and after the introduction of anonymous testing were compared. Client knowledge of new test policy and delay in testing until an anonymous option was available were assessed. Test numbers among men who have sex with men showed a statistically significant increase after introduction of an anonymous testing option. Arizona continues to maintain anonymous testing availability. Public health agencies should consider how test policy may influence people's HIV test decisions. PMID:7998649

  18. A weighted generalized score statistic for comparison of predictive values of diagnostic tests.

    PubMed

    Kosinski, Andrzej S

    2013-03-15

    Positive and negative predictive values are important measures of a medical diagnostic test performance. We consider testing equality of two positive or two negative predictive values within a paired design in which all patients receive two diagnostic tests. The existing statistical tests for testing equality of predictive values are either Wald tests based on the multinomial distribution or the empirical Wald and generalized score tests within the generalized estimating equations (GEE) framework. As presented in the literature, these test statistics have considerably complex formulas without clear intuitive insight. We propose their re-formulations that are mathematically equivalent but algebraically simple and intuitive. As is clearly seen with a new re-formulation we presented, the generalized score statistic does not always reduce to the commonly used score statistic in the independent samples case. To alleviate this, we introduce a weighted generalized score (WGS) test statistic that incorporates empirical covariance matrix with newly proposed weights. This statistic is simple to compute, always reduces to the score statistic in the independent samples situation, and preserves type I error better than the other statistics as demonstrated by simulations. Thus, we believe that the proposed WGS statistic is the preferred statistic for testing equality of two predictive values and for corresponding sample size computations. The new formulas of the Wald statistics may be useful for easy computation of confidence intervals for difference of predictive values. The introduced concepts have potential to lead to development of the WGS test statistic in a general GEE setting. Copyright © 2012 John Wiley & Sons, Ltd.

  19. A weighted generalized score statistic for comparison of predictive values of diagnostic tests

    PubMed Central

    Kosinski, Andrzej S.

    2013-01-01

    Positive and negative predictive values are important measures of a medical diagnostic test performance. We consider testing equality of two positive or two negative predictive values within a paired design in which all patients receive two diagnostic tests. The existing statistical tests for testing equality of predictive values are either Wald tests based on the multinomial distribution or the empirical Wald and generalized score tests within the generalized estimating equations (GEE) framework. As presented in the literature, these test statistics have considerably complex formulas without clear intuitive insight. We propose their re-formulations which are mathematically equivalent but algebraically simple and intuitive. As is clearly seen with a new re-formulation we present, the generalized score statistic does not always reduce to the commonly used score statistic in the independent samples case. To alleviate this, we introduce a weighted generalized score (WGS) test statistic which incorporates empirical covariance matrix with newly proposed weights. This statistic is simple to compute, it always reduces to the score statistic in the independent samples situation, and it preserves type I error better than the other statistics as demonstrated by simulations. Thus, we believe the proposed WGS statistic is the preferred statistic for testing equality of two predictive values and for corresponding sample size computations. The new formulas of the Wald statistics may be useful for easy computation of confidence intervals for difference of predictive values. The introduced concepts have potential to lead to development of the weighted generalized score test statistic in a general GEE setting. PMID:22912343

  20. Flexural strength and reliability of monolithic and trilayer ceramic structures obtained by the CAD-on technique.

    PubMed

    Basso, G R; Moraes, R R; Borba, M; Griggs, J A; Della Bona, A

    2015-12-01

    To evaluate the flexural strength, Weibull modulus, fracture toughness, and failure behavior of ceramic structures obtained by the CAD-on technique, testing the null hypothesis that trilayer structures show similar properties to monolithic structures. Bar-shaped (1.8 mm × 4 mm × 16 mm) monolithic specimens of zirconia (IPS e.max ZirCAD - Ivoclar Vivadent) and trilayer specimens of zirconia/fusion ceramic/lithium disilicate (IPS e.max ZirCAD/IPS e.max CAD Crystall./Connect/IPS e.max CAD, Ivoclar Vivadent) were fabricated (n=30). Specimens were tested in flexure in 37°C deionized water using a universal testing machine at a crosshead speed of 0.5 mm/min. Failure loads were recorded, and the flexural strength values were calculated. Fractography principles were used to examine the fracture surfaces under optical and scanning electron microscopy. Data were statistically analyzed using Student's t-test and Weibull statistics (α=0.05). Monolithic and trilayer specimens showed similar mean flexural strengths, characteristic strengths, and Weibull moduli. Trilayer structures showed greater mean critical flaw and fracture toughness values than monolithic specimens (p<0.001). Most critical flaws in the trilayer groups were located on the Y-TZP surface subjected to tension and propagated catastrophically. Trilayer structures showed no flaw deflection at the interface. Considering the CAD-on technique, the trilayer structures showed greater fracture toughness than the monolithic zirconia specimens. Copyright © 2015 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  1. Stereomicroscopic evaluation of defects caused by torsional fatigue in used hand and rotary nickel-titanium instruments

    PubMed Central

    Asthana, Geeta; Kapadwala, Marsrat I.; Parmar, Girish J.

    2016-01-01

    Objective: The aim of this study was to evaluate defects caused by torsional fatigue in used hand and rotary nickel-titanium (Ni-Ti) instruments by stereomicroscopic examination. Materials and Methods: One hundred five greater taper Ni-Ti instruments were used including Protaper universal hand (Dentsply Maillefer, Ballaigues, Switzerland), Protaper universal rotary (Dentsply Maillefer, Ballaigues, Switzerland), and Revo-S rotary (MicroMega, Besançon, France) files. Files were used on lower anterior teeth. After every use, the files were observed with both naked eyes and stereomicroscope at 20× magnification (Olympus, Shinjuku, Tokyo, Japan) to evaluate defects caused by torsional fatigue. Scoring was assigned to each file according to the degree of damage. Statistics: The results were statistically analyzed using the Mann-Whitney U test and the Kruskal-Wallis test. Results: A greater number of defects were seen under the stereomicroscope than on examining with naked eyes. However, the difference in methods of evaluation was not statistically significant. Revo-S files showed minimum defects, while Protaper universal hand showed maximum defects. The intergroup comparison of defects showed that the bend in Protaper universal hand instruments was statistically significant. Conclusion: Visible defects in Ni-Ti files due to torsional fatigue were seen by naked eyes as well as by stereomicroscope. This study emphasizes that all the files should be observed before and after every instrument cycle to minimize the risk of separation. PMID:27099415

  2. Countermeasures for Reducing Unsteady Aerodynamic Force Acting on High-Speed Train in Tunnel by Use of Modifications of Train Shapes

    NASA Astrophysics Data System (ADS)

    Suzuki, Masahiro; Nakade, Koji; Ido, Atsushi

    As the maximum speed of high-speed trains increases, flow-induced vibration of trains in tunnels has become a subject of discussion in Japan. In this paper, we report the results of a study on the use of train-shape modifications as a countermeasure for reducing the unsteady aerodynamic force, based on on-track tests and a wind tunnel test. First, we conduct a statistical analysis of on-track test data to identify the exterior parts of a train which cause the unsteady aerodynamic force. Next, we carry out a wind tunnel test to measure the unsteady aerodynamic force acting on a train in a tunnel and examine train shapes with a particular emphasis on the exterior parts identified by the statistical analysis. The wind tunnel test shows that fins under the car body are effective in reducing the unsteady aerodynamic force. Finally, we test the fins in an on-track test and confirm their effectiveness.

  3. Usefulness of Leukocyte Esterase Test Versus Rapid Strep Test for Diagnosis of Acute Strep Pharyngitis

    PubMed Central

    2015-01-01

    Objective: A study to compare throat swab testing for leukocyte esterase on a test strip (urine dipstick/multistick) with the rapid strep test for rapid diagnosis of Group A beta-hemolytic streptococci in cases of acute pharyngitis in children. Hypothesis: Testing a throat swab for leukocyte esterase on the test strip currently used for urine testing may detect throat infection and might be as useful as the rapid strep test. Methods: All patients who presented with a complaint of sore throat and fever were examined clinically for erythema of the pharynx and tonsils and for any exudates. Informed consent was obtained from the parents and assent from the subjects. Three swabs were taken from the pharyngo-tonsillar region, for culture, rapid strep and leukocyte esterase testing. Results: The total number of subjects was 100. Cultures were positive in 9; the rapid strep test was negative in 84 and positive in 16; the LE test was negative in 80 and positive in 20. Statistics: The data configuration indicates that rapid strep and LE results are not randomly (independently) assigned but are strongly aligned, and the two tests give very agreeable results. The calculated chi-squared value exceeds the tabulated value with 1 degree of freedom (P<0.0001), so the null hypothesis is rejected in favor of the alternative. Conclusions: Leukocyte esterase testing of a throat swab, on the test strip currently used for urine dipsticks, is as useful as the rapid strep test for rapid diagnosis of strep pharyngitis causing acute pharyngitis in children. PMID:27335975
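
    The abstract reports only the marginal totals for each test (16/84 positive/negative for rapid strep, 20/80 for leukocyte esterase) and a chi-squared statistic with 1 degree of freedom. The Python sketch below (SciPy) shows how such a chi-squared test of association is computed from a 2x2 cross-classification of the two tests; the cell counts are hypothetical values chosen only to be consistent with the reported marginals, not the study's actual data.

    # Chi-squared test of association between two rapid tests on the same swabs.
    # The 2x2 cross-classification is hypothetical; only the marginal totals
    # (16/84 rapid strep, 20/80 LE) are reported in the abstract.
    from scipy.stats import chi2_contingency

    #                LE +   LE -
    table = [[15,      1],   # rapid strep +
             [ 5,     79]]   # rapid strep -

    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    print(f"chi2 = {chi2:.1f}, df = {dof}, p = {p:.2g}")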

  4. Distinguishing humans from computers in the game of go: A complex network approach

    NASA Astrophysics Data System (ADS)

    Coquidé, C.; Georgeot, B.; Giraud, O.

    2017-08-01

    We compare complex networks built from the game of go and obtained from databases of human-played games with those obtained from computer-played games. Our investigations show that statistical features of the human-based networks and the computer-based networks differ, and that these differences can be statistically significant on a relatively small number of games using specific estimators. We show that the deterministic or stochastic nature of the computer algorithm playing the game can also be distinguished from these quantities. This can be seen as a tool to implement a Turing-like test for go simulators.

  5. Experimental comparisons of hypothesis test and moving average based combustion phase controllers.

    PubMed

    Gao, Jinwu; Wu, Yuhu; Shen, Tielong

    2016-11-01

    For engine control, combustion phase is the most effective and direct parameter for improving fuel efficiency. In this paper, a statistical control strategy based on a hypothesis-test criterion is discussed. Taking the location of peak pressure (LPP) as the combustion phase indicator, a statistical model of LPP is first proposed, and then the controller design method is discussed on the basis of both Z- and t-tests. For comparison, a moving-average-based control strategy is also presented and implemented in this study. Experiments on a spark-ignition gasoline engine at various operating conditions show that the hypothesis-test-based controller is able to regulate LPP close to the set point while maintaining a rapid transient response, and the variance of LPP is also well constrained. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Nursing students' mathematic calculation skills.

    PubMed

    Rainboth, Lynde; DeMasi, Chris

    2006-12-01

    This mixed-method study used a pre-test/post-test design to evaluate the efficacy of a teaching strategy in improving beginning nursing student learning outcomes. During a 4-week student teaching period, a convenience sample of 54 sophomore-level nursing students were required to complete calculation assignments, were taught one calculation method, and were mandated to attend medication calculation classes. These students completed pre- and post-math tests and a major medication mathematics exam. Scores from the intervention student group were compared to those achieved by the previous sophomore class. Results demonstrated a statistically significant improvement from pre- to post-test, and the students who received the intervention had statistically significantly higher scores on the major medication calculation exam than did the students in the control group. The evaluation completed by the intervention group showed that the students were satisfied with the method and outcome.

  7. Analysis of visual quality improvements provided by known tools for HDR content

    NASA Astrophysics Data System (ADS)

    Kim, Jaehwan; Alshina, Elena; Lee, JongSeok; Park, Youngo; Choi, Kwang Pyo

    2016-09-01

    In this paper, the visual quality of different solutions for high dynamic range (HDR) compression using MPEG test contents is analyzed. We also simulate a method for efficient HDR compression which is based on the statistical properties of the signal. The method is compliant with the HEVC specification and is also easily compatible with other alternative methods which might require HEVC specification changes. It was subjectively tested on commercial TVs and compared with alternative solutions for HDR coding. Subjective visual quality tests were performed using an SUHD TV model (SAMSUNG JS9500) with maximum luminance up to 1000 nits. The solution based on the statistical properties of the signal shows not only improved objective performance but also improved visual quality compared to other HDR solutions, while remaining compatible with the HEVC specification.

  8. Morphometric analysis of root canal cleaning after rotary instrumentation with or without laser irradiation

    NASA Astrophysics Data System (ADS)

    Marchesan, Melissa A.; Geurisoli, Danilo M. Z.; Brugnera, Aldo, Jr.; Barbin, Eduardo L.; Pecora, Jesus D.

    2002-06-01

    The present study examined root canal cleaning, using the optical microscope, after rotary instrumentation with ProFile .04 with or without laser application at different output energies. Cleaning and shaping can be accomplished manually, with ultrasonic and subsonic devices, or with rotary instruments; recently, increasing development of laser radiation has shown promising results for disinfection and smear layer removal. In this study, 30 palatal maxillary molar roots were examined using an optical microscope after rotary instrumentation with ProFile .04 with or without Er:YAG laser application (KaVo KeyLaser II, Germany) at different output energies (2940 nm, 15 Hz, 300 pulses, 500 ms duration, 42 J; 140 mJ shown on the display as input with 61 mJ at the fiberoptic tip as output, and 140 mJ shown on the display as input with 51 mJ at the fiberoptic tip as output). Statistical analysis showed no statistical differences between the tested treatments (ANOVA, p>0.05). ANOVA also showed a statistically significant difference (p<0.01) between the root canal thirds, indicating that the middle third had less debris than the apical third. We conclude that: 1) none of the tested treatments led to totally cleaned root canals; 2) all treatments removed debris similarly; 3) the middle third had less debris than the apical third; 4) variation in output energy did not increase cleaning.

  9. The extended statistical analysis of toxicity tests using standardised effect sizes (SESs): a comparison of nine published papers.

    PubMed

    Festing, Michael F W

    2014-01-01

    The safety of chemicals, drugs, novel foods and genetically modified crops is often tested using repeat-dose sub-acute toxicity tests in rats or mice. It is important to avoid misinterpretations of the results as these tests are used to help determine safe exposure levels in humans. Treated and control groups are compared for a range of haematological, biochemical and other biomarkers which may indicate tissue damage or other adverse effects. However, the statistical analysis and presentation of such data poses problems due to the large number of statistical tests which are involved. Often, it is not clear whether a "statistically significant" effect is real or a false positive (type I error) due to sampling variation. The authors' conclusions appear to be reached somewhat subjectively from the pattern of statistical significances, discounting those which they judge to be type I errors and ignoring any biomarker where the p-value is greater than p = 0.05. However, by using standardised effect sizes (SESs) a range of graphical methods and an over-all assessment of the mean absolute response can be made. The approach is an extension, not a replacement of existing methods. It is intended to assist toxicologists and regulators in the interpretation of the results. Here, the SES analysis has been applied to data from nine published sub-acute toxicity tests in order to compare the findings with those of the original authors. Line plots, box plots and bar plots show the pattern of response. Dose-response relationships are easily seen. A "bootstrap" test compares the mean absolute differences across dose groups. In four out of seven papers where the no observed adverse effect level (NOAEL) was estimated by the authors, it was set too high according to the bootstrap test, suggesting that possible toxicity is under-estimated.
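
    To make the standardised effect size (SES) idea concrete, the Python sketch below (NumPy) computes the mean difference divided by the pooled standard deviation for a handful of invented biomarkers and attaches a simple bootstrap interval to the mean absolute SES. It is only a sketch in the spirit of the approach described above; the published procedure may differ in detail.

    # Standardised effect sizes for several biomarkers plus a bootstrap
    # interval for the mean absolute SES. All data are made up for illustration.
    import numpy as np

    rng = np.random.default_rng(42)

    def ses(control, treated):
        """Standardised effect size: mean difference divided by the pooled SD."""
        nc, nt = len(control), len(treated)
        pooled_sd = np.sqrt(((nc - 1) * control.var(ddof=1) +
                             (nt - 1) * treated.var(ddof=1)) / (nc + nt - 2))
        return (treated.mean() - control.mean()) / pooled_sd

    # Five hypothetical biomarkers, five animals per group (rows = animals)
    control = rng.normal(0.0, 1.0, size=(5, 5))
    treated = rng.normal(0.6, 1.0, size=(5, 5))
    mean_abs_ses = lambda c, t: np.mean([abs(ses(c[:, j], t[:, j])) for j in range(5)])
    print("observed mean |SES|:", mean_abs_ses(control, treated))

    # Bootstrap: resample animals within each group with replacement
    boot = []
    for _ in range(2000):
        ci = rng.integers(0, 5, 5)
        ti = rng.integers(0, 5, 5)
        boot.append(mean_abs_ses(control[ci], treated[ti]))
    print("95% bootstrap interval:", np.percentile(boot, [2.5, 97.5]))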

  10. Analysis of the color alteration and radiopacity promoted by bismuth oxide in calcium silicate cement.

    PubMed

    Marciano, Marina Angélica; Estrela, Carlos; Mondelli, Rafael Francisco Lia; Ordinola-Zapata, Ronald; Duarte, Marco Antonio Hungaro

    2013-01-01

    The aim of the study was to determine if the increase in radiopacity provided by bismuth oxide is related to the color alteration of calcium silicate-based cement. Calcium silicate cement (CSC) was mixed with 0%, 15%, 20%, 30% and 50% of bismuth oxide (BO), determined by weight. Mineral trioxide aggregate (MTA) was the control group. The radiopacity test was performed according to ISO 6876/2001. The color was evaluated using the CIE system. The assessments were performed after 24 hours, 7 and 30 days of setting time, using a spectrophotometer to obtain the ΔE, Δa, Δb and ΔL values. The statistical analyses were performed using the Kruskal-Wallis/Dunn and ANOVA/Tukey tests (p<0.05). The cements in which bismuth oxide was added showed radiopacity corresponding to the ISO recommendations (>3 mm equivalent of Al). The MTA group was statistically similar to the CSC/30% BO group (p>0.05). In regard to color, the increase of bismuth oxide resulted in a decrease in the ΔE value of the calcium silicate cement. The CSC group presented statistically higher ΔE values than the CSC/50% BO group (p<0.05). The comparison between 24 hours and 7 days showed higher ΔE for the MTA group, with statistical differences for the CSC/15% BO and CSC/50% BO groups (p<0.05). After 30 days, CSC showed statistically higher ΔE values than CSC/30% BO and CSC/50% BO (p<0.05). In conclusion, the increase in radiopacity provided by bismuth oxide has no relation to the color alteration of calcium silicate-based cements.

  11. An ANOVA approach for statistical comparisons of brain networks.

    PubMed

    Fraiman, Daniel; Fraiman, Ricardo

    2018-03-16

    The study of brain networks has developed extensively over the last couple of decades. By contrast, techniques for the statistical analysis of these networks are less developed. In this paper, we focus on the statistical comparison of brain networks in a nonparametric framework and discuss the associated detection and identification problems. We tested network differences between groups with an analysis of variance (ANOVA) test we developed specifically for networks. We also propose and analyse the behaviour of a new statistical procedure designed to identify different subnetworks. As an example, we show the application of this tool in resting-state fMRI data obtained from the Human Connectome Project. We identify, among other variables, that the amount of sleep the days before the scan is a relevant variable that must be controlled. Finally, we discuss the potential bias in neuroimaging findings that is generated by some behavioural and brain structure variables. Our method can also be applied to other kind of networks such as protein interaction networks, gene networks or social networks.

  12. Dissolution curve comparisons through the F(2) parameter, a Bayesian extension of the f(2) statistic.

    PubMed

    Novick, Steven; Shen, Yan; Yang, Harry; Peterson, John; LeBlond, Dave; Altan, Stan

    2015-01-01

    Dissolution (or in vitro release) studies constitute an important aspect of pharmaceutical drug development. One important use of such studies is for justifying a biowaiver for post-approval changes which requires establishing equivalence between the new and old product. We propose a statistically rigorous modeling approach for this purpose based on the estimation of what we refer to as the F2 parameter, an extension of the commonly used f2 statistic. A Bayesian test procedure is proposed in relation to a set of composite hypotheses that capture the similarity requirement on the absolute mean differences between test and reference dissolution profiles. Several examples are provided to illustrate the application. Results of our simulation study comparing the performance of f2 and the proposed method show that our Bayesian approach is comparable to or in many cases superior to the f2 statistic as a decision rule. Further useful extensions of the method, such as the use of continuous-time dissolution modeling, are considered.
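
    For reference, the classical f2 similarity factor that the Bayesian F2 parameter extends is simple to compute. The Python sketch below (NumPy; the two dissolution profiles are invented percent-dissolved values at common time points) applies the usual formula, with f2 >= 50 conventionally taken to indicate similar profiles.

    # Classical f2 similarity factor for two dissolution profiles.
    # f2 = 50 * log10(100 / sqrt(1 + mean squared difference)).
    import numpy as np

    def f2(reference, test):
        reference, test = np.asarray(reference, float), np.asarray(test, float)
        msd = np.mean((reference - test) ** 2)
        return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

    ref = [18, 36, 58, 76, 88, 95]   # % dissolved, reference product
    new = [15, 32, 55, 74, 87, 94]   # % dissolved, changed product
    print(round(f2(ref, new), 1))    # about 78, i.e. "similar" by the f2 >= 50 rule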

  13. The Development of Statistics Textbook Supported with ICT and Portfolio-Based Assessment

    NASA Astrophysics Data System (ADS)

    Hendikawati, Putriaji; Yuni Arini, Florentina

    2016-02-01

    This research was development research that aimed to develop and produce a Statistics textbook model supported by information and communication technology (ICT) and portfolio-based assessment. The book was designed for mathematics students at the college level to improve students' ability in mathematical connection and communication. There were three stages in this research, i.e. define, design, and develop. The textbook consisted of 10 chapters, each containing an introduction and core material together with examples and exercises. The development phase began with the initial design of the book (draft 1), which was then validated by experts. Revision of draft 1 produced draft 2, which then underwent a limited readability test. Revision of draft 2 in turn produced draft 3, which was simulated on a small sample to produce a valid textbook model. The data were analysed with descriptive statistics. The analysis showed that the Statistics textbook model supported by ICT and portfolio-based assessment is valid and fulfils the criteria of practicality.

  14. Time series, periodograms, and significance

    NASA Astrophysics Data System (ADS)

    Hernandez, G.

    1999-05-01

    The geophysical literature shows a wide and conflicting usage of methods employed to extract meaningful information on coherent oscillations from measurements. This makes it difficult, if not impossible, to relate the findings reported by different authors. Therefore, we have undertaken a critical investigation of the tests and methodology used for determining the presence of statistically significant coherent oscillations in periodograms derived from time series. Statistical significance tests are only valid when performed on the independent frequencies present in a measurement. Both the number of possible independent frequencies in a periodogram and the significance tests are determined by the number of degrees of freedom, which is the number of true independent measurements, present in the time series, rather than the number of sample points in the measurement. The number of degrees of freedom is an intrinsic property of the data, and it must be determined from the serial coherence of the time series. As part of this investigation, a detailed study has been performed which clearly illustrates the deleterious effects that the apparently innocent and commonly used processes of filtering, de-trending, and tapering of data have on periodogram analysis and the consequent difficulties in the interpretation of the statistical significance thus derived. For the sake of clarity, a specific example of actual field measurements containing unevenly-spaced measurements, gaps, etc., as well as synthetic examples, have been used to illustrate the periodogram approach, and pitfalls, leading to the (statistical) significance tests for the presence of coherent oscillations. Among the insights of this investigation are: (1) the concept of a time series being (statistically) band limited by its own serial coherence and thus having a critical sampling rate which defines one of the necessary requirements for the proper statistical design of an experiment; (2) the design of a critical test for the maximum number of significant frequencies which can be used to describe a time series, while retaining intact the variance of the test sample; (3) a demonstration of the unnecessary difficulties that manipulation of the data brings into the statistical significance interpretation of said data; and (4) the resolution and correction of the apparent discrepancy in significance results obtained by the use of the conventional Lomb-Scargle significance test, when compared with the long-standing Schuster-Walker and Fisher tests.
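
    The role played by the number of truly independent frequencies can be illustrated with the classical significance test for the largest peak of a periodogram. In the Python sketch below (NumPy; the evenly sampled series, a weak sinusoid buried in white noise, is invented), the normalized powers of white Gaussian noise are approximately independent unit-mean exponential variables, so the false-alarm probability of the maximum is 1 - (1 - exp(-z))**M. As the abstract stresses, M must reflect the actual degrees of freedom of the data: for serially coherent, filtered, de-trended or tapered series it is smaller than the roughly N/2 used here for white noise.

    # False-alarm probability of the largest normalized periodogram peak for an
    # evenly sampled series (white-noise null). Series and parameters are
    # illustrative only.
    import numpy as np

    rng = np.random.default_rng(3)
    N = 256
    t = np.arange(N)
    y = np.sin(2 * np.pi * t / 32) + rng.normal(0, 1.5, N)  # weak oscillation + noise
    y = y - y.mean()

    power = np.abs(np.fft.rfft(y))**2 / (N * y.var())       # normalized periodogram
    z = power[1:N // 2]                                      # drop mean and Nyquist terms
    M = len(z)                                               # independent frequencies
                                                             # (white-noise assumption)
    z_max = z.max()
    false_alarm = 1.0 - (1.0 - np.exp(-z_max))**M
    print(f"peak power z = {z_max:.1f}, false-alarm probability = {false_alarm:.3g}")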

  15. Considering whether Medicaid is worth the cost: revisiting the Oregon Health Study.

    PubMed

    Muennig, Peter A; Quan, Ryan; Chiuzan, Codruta; Glied, Sherry

    2015-05-01

    The Oregon Health Study was a groundbreaking experiment in which uninsured participants were randomized to either apply for Medicaid or stay with their current care. The study showed that Medicaid produced numerous important socioeconomic and health benefits but had no statistically significant impact on hypertension, hypercholesterolemia, or diabetes. Medicaid opponents interpreted the findings to mean that Medicaid is not a worthwhile investment. Medicaid proponents viewed the experiment as statistically underpowered and, irrespective of the laboratory values, suggestive that Medicaid is a good investment. We tested these competing claims and, using a sensitive joint test and statistical power analysis, confirmed that the Oregon Health Study did not improve laboratory values. However, we also found that Medicaid is a good value, with a cost of just $62 000 per quality-adjusted life-year gained.

  16. Acute effects of exergames on cognitive function of institutionalized older persons: a single-blinded, randomized and controlled pilot study.

    PubMed

    Monteiro-Junior, Renato Sobral; da Silva Figueiredo, Luiz Felipe; Maciel-Pinheiro, Paulo de Tarso; Abud, Erick Lohan Rodrigues; Braga, Ana Elisa Mendes Montalvão; Barca, Maria Lage; Engedal, Knut; Nascimento, Osvaldo José M; Deslandes, Andrea Camaz; Laks, Jerson

    2017-06-01

    Improvements in balance, gait and cognition are some of the benefits of exergames. Few studies have investigated the cognitive effects of exergames in institutionalized older persons. To assess the acute effect of a single session of exergames on cognition of institutionalized older persons. Nineteen institutionalized older persons were randomly allocated to Wii (WG, n = 10, 86 ± 7 years, two males) or control groups (CG, n = 9, 86 ± 5 years, one male). The WG performed six exercises with virtual reality, whereas the CG performed six exercises without virtual reality. The verbal fluency test (VFT), digit span forward and digit span backward were used to evaluate semantic memory/executive function, short-term memory and working memory, respectively, before and after exergames, and Δ post- to pre-session (absolute) and Δ % (relative) were calculated. Parametric (independent t-test) and nonparametric (Mann-Whitney test) statistics and effect sizes were applied to test for efficacy. VFT was statistically significant within the WG (-3.07, df = 9, p = 0.013). We found no statistically significant differences between the two groups (p > 0.05). The between-group effect size for Δ % (median = 21 %) showed a moderate effect for the WG (0.63). Our data show moderate improvement of semantic memory/executive function due to the exergame session. It is possible that cognitive brain areas are activated during exergames, increasing clinical response. A single session of exergames showed no significant improvement in short-term memory, working memory and semantic memory/executive function. The effect size for verbal fluency was promising, and future studies on this issue should be developed. RBR-6rytw2.

  17. Statistically Assessing Time-Averaged and Paleosecular Variation Field Models Against Paleomagnetic Directional Data Sets. Can Likely non-Zonal Features be Detected in a Robust way ?

    NASA Astrophysics Data System (ADS)

    Hulot, G.; Khokhlov, A.

    2007-12-01

    We recently introduced a method to rigorously test the statistical compatibility of combined time-averaged (TAF) and paleosecular variation (PSV) field models against any lava flow paleomagnetic database (Khokhlov et al., 2001, 2006). Applying this method to test (TAF+PSV) models against synthetic data produced from those models shows that the method is very efficient at discriminating models, and very sensitive, provided data errors are properly taken into account. This prompted us to test a variety of published combined (TAF+PSV) models against a test Brunhes stable polarity data set extracted from the Quidelleur et al. (1994) database. Not surprisingly, ignoring data errors leads all models to be rejected. But taking data errors into account leads to the stimulating conclusion that at least one (TAF+PSV) model appears to be compatible with the selected data set, this model being purely axisymmetric. This result shows that in practice also, and with the databases currently available, the method can discriminate various candidate models and decide which actually best fits a given data set. But it also shows that likely non-zonal signatures of non-homogeneous boundary conditions imposed by the mantle are difficult to identify as statistically robust from paleomagnetic directional data sets. In the present paper, we will discuss the possibility that such signatures could eventually be identified as robust with the help of more recent data sets (such as the one put together under the collaborative "TAFI" effort, see e.g. Johnson et al. abstract #GP21A-0013, AGU Fall Meeting, 2005) or by taking additional information into account (such as the possible coincidence of non-zonal time-averaged field patterns with analogous patterns in the modern field).

  18. An investigation of new toxicity test method performance in validation studies: 1. Toxicity test methods that have predictive capacity no greater than chance.

    PubMed

    Bruner, L H; Carr, G J; Harbell, J W; Curren, R D

    2002-06-01

    An approach commonly used to measure new toxicity test method (NTM) performance in validation studies is to divide toxicity results into positive and negative classifications, and then identify true positive (TP), true negative (TN), false positive (FP) and false negative (FN) results. After this step is completed, the contingent probability statistics (CPS), sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) are calculated. Although these statistics are widely used and often the only statistics used to assess the performance of toxicity test methods, there is little specific guidance in the validation literature on what values for these statistics indicate adequate performance. The purpose of this study was to begin developing data-based answers to this question by characterizing the CPS obtained from an NTM whose data have a completely random association with a reference test method (RTM). Determining the CPS of this worst-case scenario is useful because it provides a lower baseline from which the performance of an NTM can be judged in future validation studies. It also provides an indication of relationships in the CPS that help identify random or near-random relationships in the data. The results from this study of randomly associated tests show that the values obtained for the statistics vary significantly depending on the cut-offs chosen, that high values can be obtained for individual statistics, and that the different measures cannot be considered independently when evaluating the performance of an NTM. When the association between the results of an NTM and RTM is random, the sum of the complementary pairs of statistics (sensitivity + specificity, NPV + PPV) is approximately 1, and the prevalence (i.e., the proportion of toxic chemicals in the population of chemicals) and PPV are equal. Given that combinations of high sensitivity-low specificity or low sensitivity-high specificity (i.e., the sum of the sensitivity and specificity equal to approximately 1) indicate lack of predictive capacity, an NTM having these performance characteristics should be considered no better for predicting toxicity than chance alone.
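
    The identities described for a randomly associated NTM (sensitivity + specificity ≈ 1, NPV + PPV ≈ 1, and PPV ≈ prevalence) are easy to reproduce by simulation. The following is a small illustrative sketch with made-up numbers, not the study's data; the prevalence and cut-off values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# An NTM whose positive/negative calls are unrelated to the reference method (RTM).
n = 10_000
prevalence = 0.3                          # assumed fraction of truly toxic chemicals
truth = rng.random(n) < prevalence        # RTM classification
ntm_positive = rng.random(n) < 0.5        # NTM calls made at random (assumed cut-off)

tp = np.sum(ntm_positive & truth)
tn = np.sum(~ntm_positive & ~truth)
fp = np.sum(ntm_positive & ~truth)
fn = np.sum(~ntm_positive & truth)

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)

# For a random association: sens + spec ~= 1, NPV + PPV ~= 1, and PPV ~= prevalence.
print(f"sens + spec = {sensitivity + specificity:.3f}, NPV + PPV = {npv + ppv:.3f}")
print(f"PPV = {ppv:.3f} vs prevalence = {prevalence}")
```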

  19. In vitro cariostatic effect of whitening toothpastes in human dental enamel-microhardness evaluation.

    PubMed

    Watanabe, Melina Mayumi; Rodrigues, José Augusto; Marchi, Giselle Maria; Ambrosano, Gláucia Maria Bovi

    2005-06-01

    The aim of this study was to evaluate, in vitro, the cariostatic effect of whitening toothpastes. Ninety-five dental fragments were obtained from nonerupted third molars. The fragments were embedded in polystyrene resin and sequentially polished with abrasive papers (400-, 600-, and 1,000-grit) and diamond pastes of 6, 3, and 1 microm. The fragments were assigned to five groups according to toothpaste treatment: G1 = Rembrandt Plus with Peroxide; G2 = Crest Dual Action Whitening; G3 = Aquafresh Whitening Triple Protection; and the control groups: G4 = Sensodyne Original (without fluoride); G5 = Sensodyne Sodium Bicarbonated (with fluoride). Initial enamel microhardness evaluations were done. For 2 weeks the fragments were submitted daily to a de-remineralization cycle followed by a 10-minute toothpaste slurry. After that, the final microhardness tests were done. The percentage of mineral loss of enamel was determined for statistical analysis. Analysis of variance and the Tukey test were applied. The results did not show statistically significant differences in mineral loss among groups G1, G2, G3, and G5, which differed statistically from G4 (the toothpaste without fluoride). G4 showed the highest mineral loss (P < or = .05). The whitening toothpastes evaluated showed a cariostatic effect similar to that of regular, nonwhitening toothpaste.

  20. Virtual and stereoscopic anatomy: when virtual reality meets medical education.

    PubMed

    de Faria, Jose Weber Vieira; Teixeira, Manoel Jacobsen; de Moura Sousa Júnior, Leonardo; Otoch, Jose Pinhata; Figueiredo, Eberval Gadelha

    2016-11-01

    OBJECTIVE The authors sought to construct, implement, and evaluate an interactive and stereoscopic resource for teaching neuroanatomy, accessible from personal computers. METHODS Forty fresh brains (80 hemispheres) were dissected. Images of areas of interest were captured using a manual turntable and processed and stored in a 5337-image database. Pedagogic evaluation was performed with 84 graduate medical students, divided into 3 groups: 1 (conventional method), 2 (interactive nonstereoscopic), and 3 (interactive and stereoscopic). The method was evaluated through a written theory test and a lab practicum. RESULTS Groups 2 and 3 showed the highest mean scores in the pedagogic evaluations and differed significantly from Group 1 (p < 0.05). Group 2 did not differ statistically from Group 3 (p > 0.05). Effect sizes, measured as differences in scores before and after lectures, indicate the effectiveness of the method. ANOVA results showed a significant difference (p < 0.05) between groups, and the Tukey test showed statistical differences between Group 1 and the other 2 groups (p < 0.05). No statistical differences between Groups 2 and 3 were found in the practicum. However, there were significant differences when Groups 2 and 3 were compared with Group 1 (p < 0.05). CONCLUSIONS The authors conclude that this method promoted further improvement in knowledge for students and fostered significantly higher learning when compared with traditional teaching resources.

  1. Mechanical properties and radiopacity of experimental glass-silica-metal hybrid composites.

    PubMed

    Jandt, Klaus D; Al-Jasser, Abdullah M O; Al-Ateeq, Khalid; Vowles, Richard W; Allen, Geoff C

    2002-09-01

    Experimental glass-silica-metal hybrid composites (polycomposites) were developed and tested mechanically and radiographically in this fundamental pilot study. To determine whether the mechanical properties of a glass-silica filled two-paste dental composite based on a Bis-GMA/polyglycol dimethacrylate blend could be improved through the incorporation of titanium (Ti) particles (particle size ranging from 1 to 3 microm) or silver-tin-copper (Ag-Sn-Cu) particles (particle size ranging from 1 to 50 microm), we measured the diametral tensile strength, fracture toughness and radiopacity of five composites. The five materials were: I, the original unmodified composite (control group); II, as group I but containing 5% (wt/wt) of Ti particles; III, as group II but with Ti particles treated with 4-methacryloyloxyethyl trimellitate anhydride (4-META) to promote Ti-resin bonding; IV, as group I but containing 5% (wt/wt) of Ag-Sn-Cu particles; and V, as group IV but with the metal particles treated with 4-META. Ten specimens of each group were tested in a standard diametral tensile strength test and a fracture toughness test using a single-edge notched sample design, and five specimens of each group were tested using a radiopacity test. The diametral tensile strength increased statistically significantly after incorporation of Ti treated with 4-META, as tested by ANOVA (P=0.004) and Fisher's LSD test. A statistically significant increase in fracture toughness was observed between the control group and groups II, III and V, as tested by ANOVA (P=0.003) and Fisher's LSD test. All other groups showed no statistically significant increase in diametral tensile strength or fracture toughness, respectively, when compared to the control group. No statistically significant increase in radiopacity was found between the control group and the Ti filled composite, whereas a statistically significant increase in radiopacity was found between the control group and the Ag-Sn-Cu filled composite, as tested by ANOVA (P=0.000) and Fisher's LSD procedure. Titanium and silver-tin-copper fillers have potential as added components in composites to provide increased mechanical strength and radiopacity, for example for use in core materials.

  2. On Improving the Experiment Methodology in Pedagogical Research

    ERIC Educational Resources Information Center

    Horakova, Tereza; Houska, Milan

    2014-01-01

    The paper shows how the methodology for a pedagogical experiment can be improved through including the pre-research stage. If the experiment has the form of a test procedure, an improvement of methodology can be achieved using for example the methods of statistical and didactic analysis of tests which are traditionally used in other areas, i.e.…

  3. Expert system verification and validation guidelines/workshop task. Deliverable no. 1: ES V/V guidelines

    NASA Technical Reports Server (NTRS)

    French, Scott W.

    1991-01-01

    The goals are to show that verifying and validating a software system is a required part of software development and has a direct impact on the software's design and structure. Workshop tasks are given in the areas of statistics, integration/system test, unit and architectural testing, and a traffic controller problem.

  4. A comparison of bicortical and intramedullary screw fixations of Jones' fractures.

    PubMed

    Husain, Zeeshan S; DeFronzo, Donna J

    2002-01-01

    Two different fixations for treatment of Jones' fracture were tested in bone models and cadaveric specimens to determine the differences in the stability of the constructs. A bicortical 3.5-mm cannulated cortical screw and an intramedullary 4.0-mm partially threaded cancellous screw were tested using physiologic loads with an Instron 8500 servohydraulic tensiometer (Instron Corporation, Canton, MA). In bone models, the bicortical construct (n = 5, 87+/-23 N) showed superior fixation strength (p = .0009) when compared to the intramedullary screw fixation (n = 5, 25+/-13 N). Cadaveric testing showed similar statistical significance (p = .0124), with the bicortical construct (n = 5, 152+/-71 N) having greater load resistance than the intramedullary screw fixation (n = 4, 29+/-20 N). In bone models, the bicortical constructs (23+/-9 N/mm) showed more than twice the elastic modulus of the intramedullary screw fixations (9+/-4 N/mm), with statistical significance (p = .0115). The elastic modulus in the cadaveric group showed a similar pattern between the bicortical (19+/-17 N/mm) and intramedullary (9+/-6 N/mm) screw constructs. Analysis of the bicortical screw failure patterns revealed that screw orientation had a critical impact on fixation stability. The more distal the exit site of the bicortical screw was from the fracture site, the greater the load needed to displace the fixation.

  5. The control of manual entry accuracy in management/engineering information systems, phase 1

    NASA Technical Reports Server (NTRS)

    Hays, Daniel; Nocke, Henry; Wilson, Harold; Woo, John, Jr.; Woo, June

    1987-01-01

    It was shown that clerical personnel can be tested for proofreading performance under simulated industrial conditions. A statistical study showed that errors in proofreading follow an extreme value probability distribution. The study also showed that innovative man/machine interfaces can be developed to improve and control accuracy during data entry.

  6. Ice Water Classification Using Statistical Distribution Based Conditional Random Fields in RADARSAT-2 Dual Polarization Imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Li, F.; Zhang, S.; Hao, W.; Zhu, T.; Yuan, L.; Xiao, F.

    2017-09-01

    In this paper, a Statistical Distribution based Conditional Random Fields (STA-CRF) algorithm is exploited for improving marginal ice-water classification. Pixel-level ice concentration is presented for the comparison of the CRF-based methods. Furthermore, in order to explore the most effective statistical distribution model to be integrated into STA-CRF, five statistical distribution models are investigated. The STA-CRF methods are tested on 2 scenes around Prydz Bay and Adélie Depression, which contain a variety of ice types during the melt season. Experimental results indicate that the proposed method can resolve the sea ice edge well in the Marginal Ice Zone (MIZ) and shows a robust distinction between ice and water.

  7. Effects of botulinum toxin A therapy and multidisciplinary rehabilitation on upper and lower limb spasticity in post-stroke patients.

    PubMed

    Hara, Takatoshi; Abo, Masahiro; Hara, Hiroyoshi; Kobayashi, Kazushige; Shimamoto, Yusuke; Samizo, Yuta; Sasaki, Nobuyuki; Yamada, Naoki; Niimi, Masachika

    2017-06-01

    The purpose of this study was to examine the effects of combined botulinum toxin type A (BoNT-A) and inpatient multidisciplinary (MD) rehabilitation therapy on the improvement of upper and lower limb function in post-stroke patients. In this retrospective study, a 12-day inpatient treatment protocol was implemented for 51 post-stroke patients with spasticity. Assessments were performed on the day of admission, at discharge, and at 3 months following discharge. At the time of discharge, all of the evaluated items showed a statistically significant improvement. Only the Functional Reach Test (FRT) showed a statistically significant improvement at 3 months. In subgroup analyses, the slowest walking speed group showed a significantly greater change ratio of the 10 Meter Walk Test relative to the other groups, from the time of admission to discharge. This group also showed a greater FRT change ratio than the other groups from the time of admission to the 3-month follow-up. Combined inpatient therapy consisting of simultaneous BoNT-A injections to the upper and lower limbs and MD rehabilitation may improve motor function.

  8. Pitfalls and important issues in testing reliability using intraclass correlation coefficients in orthopaedic research.

    PubMed

    Lee, Kyoung Min; Lee, Jaebong; Chung, Chin Youb; Ahn, Soyeon; Sung, Ki Hyuk; Kim, Tae Won; Lee, Hui Jong; Park, Moon Seok

    2012-06-01

    Intra-class correlation coefficients (ICCs) provide a statistical means of testing reliability. However, their interpretation is not well documented in the orthopedic field. The purpose of this study was to investigate the use of ICCs in the orthopedic literature and to demonstrate pitfalls regarding their use. First, orthopedic articles that used ICCs were retrieved from the Pubmed database, and journal demography, ICC models and concurrent statistics used were evaluated. Second, a reliability test was performed on three common physical examinations in cerebral palsy, namely, the Thomas test, the Staheli test, and popliteal angle measurement. Thirty patients were assessed by three orthopedic surgeons to explore the statistical methods for testing reliability. Third, the factors affecting the ICC values were examined by simulating data sets based on the physical examination data, where the ranges, slopes, and interobserver variability were modified. Of the 92 orthopedic articles identified, 58 articles (63%) did not clarify the ICC model used, and only 5 articles (5%) described all models, types, and measures. In reliability testing, although the popliteal angle showed a larger mean absolute difference than the Thomas test and the Staheli test, the ICC of the popliteal angle was higher, which was believed to be contrary to the context of measurement. In addition, the ICC values were affected by the model, type, and measures used. In the simulated data sets, the ICC showed higher values when the range of the data sets was larger, the slopes of the data sets were parallel, and the interobserver variability was smaller. Care should be taken when interpreting absolute ICC values, i.e., a higher ICC does not necessarily mean less variability, because the ICC values can also be affected by various factors. The authors recommend that researchers clarify the ICC model used and that ICC values be interpreted in the context of measurement.
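
    The dependence of the reported value on the ICC model chosen can be made concrete with a short computation. The sketch below is not the authors' code and uses an invented rating matrix: it computes the Shrout-Fleiss single-rater forms ICC(2,1) and ICC(3,1) from a two-way ANOVA decomposition, so a systematic offset between observers lowers the absolute-agreement coefficient more than the consistency coefficient.

```python
import numpy as np

def icc_single_rater(x):
    """ICC(2,1) and ICC(3,1) from an n-subjects x k-raters matrix (Shrout-Fleiss forms).

    ICC(2,1): raters random, absolute agreement. ICC(3,1): raters fixed, consistency.
    """
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)                          # per-subject means
    col_means = x.mean(axis=0)                          # per-rater means

    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)
    resid = x - row_means[:, None] - col_means[None, :] + grand
    ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))

    icc21 = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
    icc31 = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
    return icc21, icc31

# Invented ratings: observer 3 measures systematically higher than observers 1 and 2,
# which lowers ICC(2,1) (absolute agreement) more than ICC(3,1) (consistency).
ratings = np.array([[40, 42, 50],
                    [55, 54, 63],
                    [30, 33, 41],
                    [70, 68, 79],
                    [48, 50, 58]])
print(icc_single_rater(ratings))
```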

  9. Correcting evaluation bias of relational classifiers with network cross validation

    DOE PAGES

    Neville, Jennifer; Gallagher, Brian; Eliassi-Rad, Tina; ...

    2011-01-04

    Recently, a number of modeling techniques have been developed for data mining and machine learning in relational and network domains where the instances are not independent and identically distributed (i.i.d.). These methods specifically exploit the statistical dependencies among instances in order to improve classification accuracy. However, there has been little focus on how these same dependencies affect our ability to draw accurate conclusions about the performance of the models. More specifically, the complex link structure and attribute dependencies in relational data violate the assumptions of many conventional statistical tests and make it difficult to use these tests to assess the models in an unbiased manner. In this work, we examine the task of within-network classification and the question of whether two algorithms will learn models that will result in significantly different levels of performance. We show that the commonly used form of evaluation (paired t-test on overlapping network samples) can result in an unacceptable level of Type I error. Furthermore, we show that Type I error increases as (1) the correlation among instances increases and (2) the size of the evaluation set increases (i.e., the proportion of labeled nodes in the network decreases). Lastly, we propose a method for network cross-validation that, combined with paired t-tests, produces more acceptable levels of Type I error while still providing reasonable levels of statistical power (i.e., 1 - Type II error).

  10. Active Female Maximal and Anaerobic Threshold Cardiorespiratory Responses to Six Different Water Aerobics Exercises.

    PubMed

    Antunes, Amanda H; Alberton, Cristine L; Finatto, Paula; Pinto, Stephanie S; Cadore, Eduardo L; Zaffari, Paula; Kruel, Luiz F M

    2015-01-01

    Maximal tests conducted on land are not suitable for the prescription of aquatic exercises, which makes it difficult to optimize the intensity of water aerobics classes. The aim of the present study was to evaluate the maximal and anaerobic threshold cardiorespiratory responses to 6 water aerobics exercises. Volunteers performed 3 of the exercises in the sagittal plane and 3 in the frontal plane. Twelve active female volunteers (aged 24 ± 2 years) performed 6 maximal progressive test sessions. Throughout the exercise tests, we measured heart rate (HR) and oxygen consumption (VO2). We randomized all sessions with a minimum interval of 48 hr between each session. For statistical analysis, we used repeated-measures 1-way analysis of variance. Regarding the maximal responses, for peak VO2, abductor hop and jumping jacks (JJ) showed significantly lower values than frontal kick and cross-country skiing (CCS; p < .001; partial η² = .509), while for peak HR, JJ showed statistically significantly lower responses compared with stationary running and CCS (p < .001; partial η² = .401). At the anaerobic threshold intensity, expressed as the percentage of the maximum values, no statistically significant differences were found among exercises. Cardiorespiratory responses are directly associated with the muscle mass involved in the exercise. Thus, it is worth emphasizing the importance of performing a maximal test that is specific to the analyzed exercise so that the intensity prescription can be safer and more valid.

  11. Dental and Chronological Ages as Determinants of Peak Growth Period and Its Relationship with Dental Calcification Stages

    PubMed Central

    Litsas, George; Lucchese, Alessandra

    2016-01-01

    Purpose: To investigate the relationship between dental, chronological, and cervical vertebral maturation growth in the peak growth period, as well as to study the association between the dental calcification phases and the skeletal maturity stages during the same growth period. Methods: Subjects were selected from orthodontic pre-treatment cohorts consisting of 420 subjects, of whom 255 were identified and enrolled in the study, comprising 145 girls and 110 boys. The lateral cephalometric and panoramic radiographs were examined from the archives of the Department of Orthodontics, Aristotle University of Thessaloniki, Greece. Dental age was assessed according to the method of Demirjian, and skeletal maturation according to the Cervical Vertebral Maturation Method. Statistical elaboration included the Spearman-Brown formula, descriptive statistics, Pearson's correlation coefficient and regression analysis, paired samples t-test, and Spearman's rho correlation coefficient. Results: Chronological and dental age showed a high correlation for both genders (r = 0.741 for boys, r = 0.770 for girls, p < 0.001). The strongest correlation was for CVM Stage IV for both males (r = 0.554) and females (r = 0.68). The lowest correlation was for CVM Stage III in males (r = 0.433, p < 0.001) and for CVM Stage II in females (r = 0.393, p > 0.001). The t-test revealed statistically significant differences between these variables (p < 0.001) during the peak period. A statistically significant correlation (p < 0.001) between tooth calcification and CVM stages was determined. The second molars showed the highest correlation with CVM stages (CVMS) (r = 0.65 for boys, r = 0.72 for girls). Conclusion: Dental age was more advanced than chronological age for both boys and girls for all CVMS. During the peak period these differences were more pronounced. Moreover, all correlations between skeletal and dental stages were statistically significant. The second molars showed the highest correlation whereas the canines showed the lowest correlation for both genders. PMID:27335610

  12. Explorations in Statistics: Hypothesis Tests and P Values

    ERIC Educational Resources Information Center

    Curran-Everett, Douglas

    2009-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This second installment of "Explorations in Statistics" delves into test statistics and P values, two concepts fundamental to the test of a scientific null hypothesis. The essence of a test statistic is that it compares what…

  13. A Performance Comparison on the Probability Plot Correlation Coefficient Test using Several Plotting Positions for GEV Distribution.

    NASA Astrophysics Data System (ADS)

    Ahn, Hyunjun; Jung, Younghun; Om, Ju-Seong; Heo, Jun-Haeng

    2014-05-01

    Selecting an appropriate probability distribution is very important in statistical hydrology. A goodness-of-fit test is a statistical method for selecting an appropriate probability model for given data. The probability plot correlation coefficient (PPCC) test, one of the goodness-of-fit tests, was originally developed for the normal distribution. Since then, this test has been widely applied to other probability models. The PPCC test is regarded as one of the best goodness-of-fit tests because it shows higher rejection power than many alternatives. In this study, we focus on PPCC tests for the GEV distribution, which is widely used around the world. For the GEV model, several plotting position formulas have been suggested. However, the PPCC statistics are derived only for the plotting position formulas (Goel and De, In-na and Nguyen, and Kim et al.) in which the skewness coefficient (or shape parameter) is included. The regression equations are then derived as a function of the shape parameter and sample size for a given significance level. In addition, the rejection powers of these formulas are compared using Monte-Carlo simulation. Keywords: Goodness-of-fit test, Probability plot correlation coefficient test, Plotting position, Monte-Carlo simulation. ACKNOWLEDGEMENTS This research was supported by a grant 'Establishing Active Disaster Management System of Flood Control Structures by using 3D BIM Technique' [NEMA-12-NH-57] from the Natural Hazard Mitigation Research Group, National Emergency Management Agency of Korea.
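
    The core of a PPCC test is simply the correlation between the ordered sample and the candidate distribution's quantiles evaluated at chosen plotting positions. The sketch below is a minimal illustration, not the statistics derived in the study: it uses a generic Cunnane-type plotting position rather than the skewness-dependent formulas the study compares, and the critical values needed for a formal test are not reproduced.

```python
import numpy as np
from scipy import stats

def ppcc_gev(sample, shape, a=0.4):
    """PPCC for a GEV candidate: correlation of ordered data with GEV quantiles.

    Uses a generic Cunnane-type plotting position p_i = (i - a) / (n + 1 - 2a),
    not the skewness-dependent formulas compared in the study.
    """
    x = np.sort(np.asarray(sample, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    p = (i - a) / (n + 1 - 2 * a)                 # plotting positions
    q = stats.genextreme.ppf(p, c=shape)          # GEV quantiles (scipy sign convention)
    return np.corrcoef(x, q)[0, 1]                # closer to 1 = better fit

# Annual-maximum-like data drawn from a GEV should give a PPCC near 1.
rng = np.random.default_rng(2)
data = stats.genextreme.rvs(c=-0.1, loc=100, scale=20, size=60, random_state=rng)
print(ppcc_gev(data, shape=-0.1))
```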

  14. Gram-Negative Bacterial Wound Infections

    DTIC Science & Technology

    2015-05-01

    not statistically different from that of the control group. The levels (CFU/g) of bacteria in lung tissue correlated with the survival curves. The ... median levels in the control and 2.5 mg/kg-treated groups were almost identical, at 9.04 and 9.07 log CFU/g, respectively. Figure 6B shows a decrease ... Dunn's multiple comparison test found a statistically significant difference in bacterial burden when the control group was compared to animals

  15. A Closer Look at Data Independence: Comment on “Lies, Damned Lies, and Statistics (in Geology)”

    NASA Astrophysics Data System (ADS)

    Kravtsov, Sergey; Saunders, Rolando Olivas

    2011-02-01

    In his Forum (Eos, 90(47), 443, doi:10.1029/2009EO470004, 2009), P. Vermeesch suggests that statistical tests are not fit to interpret long data records. He asserts that for large enough data sets any true null hypothesis will always be rejected. This is certainly not the case! Here we revisit this author's example of weekly distribution of earthquakes and show that statistical results support the commonsense expectation that seismic activity does not depend on weekday (see the online supplement to this Eos issue for details (http://www.agu.org/eos_elec/)).

  16. Wood Products Analysis

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Structural Reliability Consultants' computer program creates graphic plots showing the statistical parameters of glue laminated timbers, or 'glulam.' The company president, Dr. Joseph Murphy, read in NASA Tech Briefs about work related to analysis of Space Shuttle surface tile strength performed for Johnson Space Center by Rockwell International Corporation. Analysis led to a theory of 'consistent tolerance bounds' for statistical distributions, applicable in industrial testing where statistical analysis can influence product development and use. Dr. Murphy then obtained the Tech Support Package that covers the subject in greater detail. The TSP became the basis for Dr. Murphy's computer program PC-DATA, which he is marketing commercially.

  17. Effect of Probiotic Curd on Salivary pH and Streptococcus mutans: A Double Blind Parallel Randomized Controlled Trial.

    PubMed

    Srivastava, Shivangi; Saha, Sabyasachi; Kumari, Minti; Mohd, Shafaat

    2016-02-01

    Dairy products like curd seem to be the most natural way to ingest probiotics, which can reduce Streptococcus mutans levels and also increase salivary pH, thereby reducing dental caries risk. The aim was to estimate the effect of probiotic curd on salivary pH and Streptococcus mutans count over a period of 7 days. This double-blind parallel randomized clinical trial was conducted at the institution with 60 caries-free volunteers belonging to the age group of 20-25 years who were randomly allocated into two groups. The Test Group consisted of 30 subjects who consumed 100 ml of probiotic curd daily for seven days, while an equal-numbered Control Group was given 100 ml of regular curd for seven days. Saliva samples were assessed at baseline and after ½ hour, 1 hour and 7 days of the intervention period using a pH meter and Mitis Salivarius Bacitracin agar to estimate salivary pH and S. mutans count. Data were statistically analysed using paired and unpaired t-tests. The study revealed a reduction in salivary pH after ½ hour and 1 hour in both groups. However, after 7 days, normal curd showed a statistically significant (p < 0.05) reduction in salivary pH while probiotic curd showed a statistically significant (p < 0.05) increase in salivary pH. Similarly, with regard to S. mutans colony counts, probiotic curd showed a statistically significant reduction (p < 0.05) as compared to normal curd. Short-term consumption of probiotic curd showed marked salivary pH elevation and reduction of salivary S. mutans counts and thus can be exploited for the prevention of enamel demineralization as a long-term remedy, keeping in mind its cost effectiveness.

  18. Statistical control process to compare and rank treatment plans in radiation oncology: impact of heterogeneity correction on treatment planning in lung cancer.

    PubMed

    Chaikh, Abdulhamid; Balosso, Jacques

    2016-12-01

    This study proposes a statistical process to compare different treatment plans issued from different irradiation techniques or different treatment phases. This approach aims to provide arguments for discussion about the impact on clinical results of any condition able to significantly alter dosimetric or ballistic data. The principles of the statistical investigation are presented in the framework of a clinical example based on 40 fields of radiotherapy for lung cancers. Two treatment plans were generated for each patient, producing a change in dose distribution due to the variation of the lung density correction. The 2D gamma index (γ) data, including the pixels having γ≤1, were used to determine the capability index (Cp) and the acceptability index (Cpk) of the process. To measure the strength of the relationship between the γ passing rates and the Cp and Cpk indices, the Spearman's rank non-parametric test was used to calculate P values. The comparison between reference and tested plans showed that 95% of pixels have γ≤1 with criteria (6%, 6 mm). The values of the Cp and Cpk indices were lower than one, showing a significant dose difference. The data showed a strong correlation between γ passing rates and the indices with P>0.8. The statistical analysis using Cp and Cpk shows the significance of dose differences resulting from two plans in radiotherapy. These indices can be used in adaptive radiotherapy to measure the difference between the initial plan and the daily delivered plan. Significant changes in dose distribution could raise the question of whether to continue treating the patient with the initial plan or whether adjustments are needed.
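
    Cp and Cpk are the standard process-capability indices, so the core calculation is short. The sketch below is illustrative only: the tolerance limits and the simulated dose-difference values are assumptions, and the paper's pre-processing of the gamma-index data is not reproduced. Indices below one flag a process (here, the agreement between two plans) that does not fit within the chosen tolerance.

```python
import numpy as np

def capability_indices(values, lower, upper):
    """Standard process-capability indices Cp and Cpk for given tolerance limits."""
    values = np.asarray(values, dtype=float)
    mu, sigma = values.mean(), values.std(ddof=1)
    cp = (upper - lower) / (6 * sigma)
    cpk = min(upper - mu, mu - lower) / (3 * sigma)
    return cp, cpk

# Hypothetical per-pixel local dose differences (%) between two plans, tolerance +/- 6%.
rng = np.random.default_rng(3)
dose_diff = rng.normal(loc=1.5, scale=2.5, size=5000)
print(capability_indices(dose_diff, lower=-6.0, upper=6.0))   # values < 1 flag disagreement
```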

  19. Statistical learning of movement.

    PubMed

    Ongchoco, Joan Danielle Khonghun; Uddenberg, Stefan; Chun, Marvin M

    2016-12-01

    The environment is dynamic, but objects move in predictable and characteristic ways, whether they are a dancer in motion, or a bee buzzing around in flight. Sequences of movement are comprised of simpler motion trajectory elements chained together. But how do we know where one trajectory element ends and another begins, much like we parse words from continuous streams of speech? As a novel test of statistical learning, we explored the ability to parse continuous movement sequences into simpler element trajectories. Across four experiments, we showed that people can robustly parse such sequences from a continuous stream of trajectories under increasingly stringent tests of segmentation ability and statistical learning. Observers viewed a single dot as it moved along simple sequences of paths, and were later able to discriminate these sequences from novel and partial ones shown at test. Observers demonstrated this ability when there were potentially helpful trajectory-segmentation cues such as a common origin for all movements (Experiment 1); when the dot's motions were entirely continuous and unconstrained (Experiment 2); when sequences were tested against partial sequences as a more stringent test of statistical learning (Experiment 3); and finally, even when the element trajectories were in fact pairs of trajectories, so that abrupt directional changes in the dot's motion could no longer signal inter-trajectory boundaries (Experiment 4). These results suggest that observers can automatically extract regularities in movement - an ability that may underpin our capacity to learn more complex biological motions, as in sport or dance.

  20. A wavelet-based estimator of the degrees of freedom in denoised fMRI time series for probabilistic testing of functional connectivity and brain graphs.

    PubMed

    Patel, Ameera X; Bullmore, Edward T

    2016-11-15

    Connectome mapping using techniques such as functional magnetic resonance imaging (fMRI) has become a focus of systems neuroscience. There remain many statistical challenges in analysis of functional connectivity and network architecture from BOLD fMRI multivariate time series. One key statistic for any time series is its (effective) degrees of freedom, df, which will generally be less than the number of time points (or nominal degrees of freedom, N). If we know the df, then probabilistic inference on other fMRI statistics, such as the correlation between two voxel or regional time series, is feasible. However, we currently lack good estimators of df in fMRI time series, especially after the degrees of freedom of the "raw" data have been modified substantially by denoising algorithms for head movement. Here, we used a wavelet-based method both to denoise fMRI data and to estimate the (effective) df of the denoised process. We show that seed voxel correlations corrected for locally variable df could be tested for false positive connectivity with better control over Type I error and greater specificity of anatomical mapping than probabilistic connectivity maps using the nominal degrees of freedom. We also show that wavelet despiked statistics can be used to estimate all pairwise correlations between a set of regional nodes, assign a P value to each edge, and then iteratively add edges to the graph in order of increasing P. These probabilistically thresholded graphs are likely more robust to regional variation in head movement effects than comparable graphs constructed by thresholding correlations. Finally, we show that time-windowed estimates of df can be used for probabilistic connectivity testing or dynamic network analysis so that apparent changes in the functional connectome are appropriately corrected for the effects of transient noise bursts. Wavelet despiking is both an algorithm for fMRI time series denoising and an estimator of the (effective) df of denoised fMRI time series. Accurate estimation of df offers many potential advantages for probabilistically thresholding functional connectivity and network statistics tested in the context of spatially variant and non-stationary noise. Code for wavelet despiking, seed correlational testing and probabilistic graph construction is freely available to download as part of the BrainWavelet Toolbox at www.brainwavelet.org. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
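
    The practical payoff of an effective-df estimate is that it can replace the nominal N in ordinary probabilistic tests of a correlation. The sketch below shows that general idea only; it is not the BrainWavelet code, and df_eff is assumed to come from an external estimator such as the wavelet-based one described here. The same correlation becomes less significant as the effective degrees of freedom shrink.

```python
import numpy as np
from scipy import stats

def corr_pvalue(x, y, df_eff):
    """Two-sided p-value for a Pearson correlation using an effective df.

    df_eff is assumed to be supplied by an external estimator; it replaces the
    nominal number of time points in the usual t transform of r.
    """
    r = np.corrcoef(x, y)[0, 1]
    t = r * np.sqrt((df_eff - 2) / (1 - r ** 2))
    p = 2 * stats.t.sf(abs(t), df_eff - 2)
    return r, p

rng = np.random.default_rng(5)
ts1, ts2 = rng.standard_normal((2, 200))
print(corr_pvalue(ts1, ts2, df_eff=200))   # nominal df
print(corr_pvalue(ts1, ts2, df_eff=60))    # fewer effective df -> larger p-value
```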

  1. EVALUATION OF A NEW MEAN SCALED AND MOMENT ADJUSTED TEST STATISTIC FOR SEM.

    PubMed

    Tong, Xiaoxiao; Bentler, Peter M

    2013-01-01

    Recently a new mean scaled and skewness adjusted test statistic was developed for evaluating structural equation models in small samples and with potentially nonnormal data, but this statistic has received only limited evaluation. The performance of this statistic is compared to normal theory maximum likelihood and two well-known robust test statistics. A modification to the Satorra-Bentler scaled statistic is developed for the condition that sample size is smaller than degrees of freedom. The behavior of the four test statistics is evaluated with a Monte Carlo confirmatory factor analysis study that varies seven sample sizes and three distributional conditions obtained using Headrick's fifth-order transformation to nonnormality. The new statistic performs badly in most conditions except under the normal distribution. The goodness-of-fit χ² test based on maximum-likelihood estimation performed well under normal distributions as well as under a condition of asymptotic robustness. The Satorra-Bentler scaled test statistic performed best overall, while the mean scaled and variance adjusted test statistic outperformed the others at small and moderate sample sizes under certain distributional conditions.

  2. Why Flash Type Matters: A Statistical Analysis

    NASA Astrophysics Data System (ADS)

    Mecikalski, Retha M.; Bitzer, Phillip M.; Carey, Lawrence D.

    2017-09-01

    While the majority of research only differentiates between intracloud (IC) and cloud-to-ground (CG) flashes, there exists a third flash type, known as hybrid flashes. These flashes have extensive IC components as well as return strokes to ground but are misclassified as CG flashes in current flash type analyses due to the presence of a return stroke. In an effort to show that IC, CG, and hybrid flashes should be separately classified, the two-sample Kolmogorov-Smirnov (KS) test was applied to the flash sizes, flash initiation, and flash propagation altitudes for each of the three flash types. The KS test statistically showed that IC, CG, and hybrid flashes do not have the same parent distributions and thus should be separately classified. Separate classification of hybrid flashes will lead to improved lightning-related research, because unambiguously classified hybrid flashes occur on the same order of magnitude as CG flashes for multicellular storms.
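
    The comparison described here rests on the two-sample Kolmogorov-Smirnov test, which asks whether two samples could share a parent distribution. The snippet below is a generic illustration with simulated, hypothetical numbers (not the study's lightning data), using scipy's ks_2samp.

```python
import numpy as np
from scipy import stats

# Two-sample KS comparison of a per-flash quantity for two flash types
# (simulated, hypothetical values; not the study's data).
rng = np.random.default_rng(4)
ic_flash_size = rng.lognormal(mean=4.0, sigma=0.6, size=400)       # stand-in "IC" sample
hybrid_flash_size = rng.lognormal(mean=4.4, sigma=0.6, size=150)   # stand-in "hybrid" sample

stat, p = stats.ks_2samp(ic_flash_size, hybrid_flash_size)
print(f"KS statistic = {stat:.3f}, p = {p:.2e}")
# A small p-value indicates the two samples are unlikely to share a parent distribution.
```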

  3. Learning Scene Categories from High Resolution Satellite Image for Aerial Video Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheriyadat, Anil M

    2011-01-01

    Automatic scene categorization can benefit various aerial video processing applications. This paper addresses the problem of predicting the scene category from aerial video frames using a prior model learned from satellite imagery. We show that local and global features, in the form of line statistics and 2-D power spectrum parameters respectively, can characterize the aerial scene well. The line feature statistics and spatial frequency parameters are useful cues to distinguish between different urban scene categories. We learn the scene prediction model from high-resolution satellite imagery and test the model on the Columbus Surrogate Unmanned Aerial Vehicle (CSUAV) dataset collected by a high-altitude wide-area UAV sensor platform. We compare the proposed features with the popular Scale Invariant Feature Transform (SIFT) features. Our experimental results show that the proposed approach outperforms the SIFT model when the training and testing are conducted on disparate data sources.

  4. Radioactivity measurement of radioactive contaminated soil by using a fiber-optic radiation sensor

    NASA Astrophysics Data System (ADS)

    Joo, Hanyoung; Kim, Rinah; Moon, Joo Hyun

    2016-06-01

    A fiber-optic radiation sensor (FORS) was developed to measure the gamma radiation from radioactive contaminated soil. The FORS was fabricated using an inorganic scintillator (Lu,Y)2SiO5:Ce (LYSO:Ce), a mixture of epoxy resin and hardener, aluminum foil, and a plastic optical fiber. Before its real application, the FORS was tested to determine whether it performed adequately. The test result showed that the measurements by the FORS adequately followed the theoretically estimated values. The FORS was then applied to measure the gamma radiation from radioactive contaminated soil. For comparison, a commercial radiation detector was also applied to measure the same soil samples. The measurement data were analyzed using a statistical parameter, the critical level, to determine whether net radioactivity statistically different from background was present in the soil sample. The analysis showed that the soil sample had radioactivity distinguishable from background.

  5. Towards a web-based decision support tool for selecting appropriate statistical test in medical and biological sciences.

    PubMed

    Suner, Aslı; Karakülah, Gökhan; Dicle, Oğuz

    2014-01-01

    Statistical hypothesis testing is an essential component of biological and medical studies for making inferences and estimations from the data collected in the study; however, the misuse of statistical tests is widely common. In order to prevent possible errors in selecting a suitable statistical test, it is currently possible to consult available test selection algorithms developed for various purposes. However, the lack of an algorithm presenting the most common statistical tests used in biomedical research in a single flowchart causes several problems, such as shifting users among the algorithms, poor decision support in test selection, and lack of satisfaction of potential users. Herein, we demonstrate a unified flowchart that covers the most commonly used statistical tests in the biomedical domain, to provide decision aid to non-statistician users in choosing the appropriate statistical test for their hypothesis. We also discuss some of the findings made while integrating the flowcharts into each other to develop a single but more comprehensive decision algorithm.
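
    The essence of such a flowchart is a small set of branch points: the type of outcome, the number of groups, whether the observations are paired, and whether distributional assumptions hold. The function below is a deliberately simplified, hypothetical sketch of that logic for a handful of common comparisons; it is not the flowchart developed in the paper.

```python
def suggest_test(outcome, n_groups, paired, normal=True):
    """A deliberately simplified, hypothetical test-selection sketch.

    Covers only a few common group comparisons; it is not the flowchart
    developed in the paper.
    """
    if outcome == "categorical":
        return "McNemar test" if paired else "Chi-square (or Fisher's exact) test"
    if n_groups == 2:
        if paired:
            return "Paired t-test" if normal else "Wilcoxon signed-rank test"
        return "Independent t-test" if normal else "Mann-Whitney U test"
    if paired:
        return "Repeated-measures ANOVA" if normal else "Friedman test"
    return "One-way ANOVA" if normal else "Kruskal-Wallis test"

print(suggest_test("continuous", n_groups=2, paired=False, normal=False))  # Mann-Whitney U test
```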

  6. Effects of different centrifugation conditions on clinical chemistry and Immunology test results.

    PubMed

    Minder, Elisabeth I; Schibli, Adrian; Mahrer, Dagmar; Nesic, Predrag; Plüer, Kathrin

    2011-05-10

    The effect of centrifugation time of heparinized blood samples on clinical chemistry and immunology results has rarely been studied. The WHO guideline proposed a 15 min centrifugation time without citing any scientific publications. The centrifugation time has a considerable impact on the turn-around time. We investigated 74 parameters in samples from 44 patients on a Roche Cobas 6000 system, to see whether there was a statistically significant difference in the test results among specimens centrifuged at 2180 g for 15 min, at 2180 g for 10 min or at 1870 g for 7 min, respectively. Two tubes with different plasma separators (both Greiner Bio-One) were used for each centrifugation condition. Statistical comparisons were made by Deming fit. Tubes with different separators showed identical results for all parameters. Likewise, excellent correlations were found among tubes to which different centrifugation conditions were applied. Fifty percent of the slopes lay between 0.99 and 1.01. Only 3.6 percent of the statistical test results fell outside the significance level of p < 0.05, which was less than the expected 5%. This suggests that the outliers are the result of random variation and the large number of statistical tests performed. Further, we found that our data are sufficient not to miss a biased test (beta error) with a probability of 0.10 to 0.05 for most parameters. A centrifugation time of either 7 or 10 min provided test results identical to those obtained with the 15 min time proposed by WHO, under the conditions used in our study.
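
    The method comparison named here, the Deming fit, is an errors-in-variables regression that allows measurement error in both conditions being compared. The sketch below implements the standard closed-form slope under an assumed error-variance ratio of 1 and uses invented paired values; it is not the study's analysis pipeline.

```python
import numpy as np

def deming_fit(x, y, delta=1.0):
    """Deming regression slope and intercept.

    delta is the assumed ratio of the error variances var(err_y)/var(err_x),
    taken as 1 when both measurement conditions are assumed equally noisy.
    """
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    slope = (syy - delta * sxx + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

# Invented paired results for one analyte under two centrifugation conditions:
rng = np.random.default_rng(6)
true_conc = rng.uniform(1, 10, 50)
cond_a = true_conc + rng.normal(0, 0.1, 50)
cond_b = true_conc + rng.normal(0, 0.1, 50)
print(deming_fit(cond_a, cond_b))   # slope ~ 1 and intercept ~ 0 for equivalent conditions
```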

  7. Effects of different centrifugation conditions on clinical chemistry and Immunology test results

    PubMed Central

    2011-01-01

    Background The effect of centrifugation time of heparinized blood samples on clinical chemistry and immunology results has rarely been studied. The WHO guideline proposed a 15 min centrifugation time without citing any scientific publications. The centrifugation time has a considerable impact on the turn-around time. Methods We investigated 74 parameters in samples from 44 patients on a Roche Cobas 6000 system, to see whether there was a statistically significant difference in the test results among specimens centrifuged at 2180 g for 15 min, at 2180 g for 10 min or at 1870 g for 7 min, respectively. Two tubes with different plasma separators (both Greiner Bio-One) were used for each centrifugation condition. Statistical comparisons were made by Deming fit. Results Tubes with different separators showed identical results for all parameters. Likewise, excellent correlations were found among tubes to which different centrifugation conditions were applied. Fifty percent of the slopes lay between 0.99 and 1.01. Only 3.6 percent of the statistical test results fell outside the significance level of p < 0.05, which was less than the expected 5%. This suggests that the outliers are the result of random variation and the large number of statistical tests performed. Further, we found that our data are sufficient not to miss a biased test (beta error) with a probability of 0.10 to 0.05 for most parameters. Conclusion A centrifugation time of either 7 or 10 min provided test results identical to those obtained with the 15 min time proposed by WHO, under the conditions used in our study. PMID:21569233

  8. Soft Tissue Response to Titanium Abutments with Different Surface Treatment: Preliminary Histologic Report of a Randomized Controlled Trial.

    PubMed

    Canullo, Luigi; Dehner, Jan Friedrich; Penarrocha, David; Checchi, Vittorio; Mazzoni, Annalisa; Breschi, Lorenzo

    2016-01-01

    The aim of this preliminary prospective RCT was to histologically evaluate peri-implant soft tissues around titanium abutments treated using different cleaning methods. Sixteen patients were randomized into three groups: laboratory customized abutments underwent Plasma of Argon treatment (Plasma Group), laboratory customized abutments underwent cleaning by steam (Steam Group), and abutments were used as they came from industry (Control Group). Seven days after the second surgery, soft tissues around abutments were harvested. Samples were histologically analyzed. Soft tissues surrounding Plasma Group abutments predominantly showed diffuse chronic infiltrate, almost no acute infiltrate, with presence of few polymorphonuclear neutrophil granulocytes, and a diffuse presence of collagenization bands. Similarly, in Steam Group, the histological analysis showed a high variability of inflammatory expression factors. Tissues harvested from Control Group showed presence of few neutrophil granulocytes, moderate presence of lymphocytes, and diffuse collagenization bands in some sections, while they showed absence of acute infiltrate in 40% of sections. However, no statistical difference was found among the tested groups for each parameter (p > 0.05). Within the limit of the present study, results showed no statistically significant difference concerning inflammation and healing tendency between test and control groups.

  9. Differences in genotoxic activity of alpha-Ni3S2 on human lymphocytes from nickel-hypersensitized and nickel-unsensitized donors.

    PubMed

    Arrouijal, F Z; Marzin, D; Hildebrand, H F; Pestel, J; Haguenoer, J M

    1992-05-01

    The genotoxic activity of alpha-Ni3S2 was assessed on human lymphocytes from nickel-hypersensitized (SSL) and nickel-unsensitized (USL) subjects. Three genotoxicity tests were performed: the sister chromatid exchange (SCE) test, the metaphase analysis test and the micronucleus test. (i) The SCE test (3-100 micrograms/ml) showed a weak but statistically significant increase in the number of SCE in both lymphocyte types with respect to controls, USL presenting a slightly higher SCE incidence but only at one concentration. (ii) The metaphase analysis test demonstrated a high dose-dependent clastogenic activity of alpha-Ni3S2 in both lymphocyte types. The frequency of chromosomal anomalies was significantly higher in USL than in SSL for all concentrations applied. (iii) The micronucleus test confirmed the dose-dependent clastogenic activity of alpha-Ni3S2 and the differences already observed between USL and SSL, i.e. the number of cells with micronuclei was statistically higher in USL. Finally, the incorporation study with alpha-63Ni3S2 showed a higher uptake of its solubilized fraction by USL. This allows an explanation of the different genotoxic action of nickel on the two cell types. In this study we demonstrated that hypersensitivity has an influence on the incorporation of alpha-Ni3S2 and subsequently on the different induction of chromosomal aberrations in human lymphocytes.

  10. Influence of Stepped Osteotomy on Primary Stability of Implants Inserted in Low-Density Bone Sites: An In Vitro Study.

    PubMed

    Degidi, Marco; Daprile, Giuseppe; Piattelli, Adriano

    The aims of this study were to evaluate the ability of a stepped osteotomy to improve dental implant primary stability in low-density bone sites and to investigate possible correlations between primary stability parameters. The study was performed on fresh humid bovine bone classified as type III. The test group consisted of 30 Astra Tech EV implants inserted following the protocol provided by the manufacturer. The first control group consisted of 30 Astra Tech EV implants inserted in sites without the underpreparation of the apical portion. The second control group consisted of 30 Astra Tech TX implants inserted following the protocol provided by the manufacturer. Implant insertion was performed at the predetermined 30 rpm. The insertion torque data were recorded and exported as a curve; using a trapezoidal integration technique, the area underlying the curve was calculated: this area represents the variable torque work (VTW). Peak insertion torque (pIT) and resonance frequency analysis (RFA) were also recorded. A Mann-Whitney test showed that the mean VTW was significantly higher in the test group compared with the first control and second control groups; furthermore, statistical analysis showed that pIT also was significantly higher in the test group compared with the first and second control groups. Analyzing RFA values, only the difference between the test group and second control group showed statistical significance. Pearson correlation analysis showed a very strong positive correlation between pIT and VTW values in all groups; furthermore, it showed a positive correlation between pIT and RFA values and between VTW and RFA values only in the test group. Within the limitations of an in vitro study, the results show that stepped osteotomy can be a viable method to improve implant primary stability in low-density bone sites, and that, when a traditional osteotomy method is performed, RFA presents no correlation with pIT and VTW.

  11. An efficient genome-wide association test for mixed binary and continuous phenotypes with applications to substance abuse research.

    PubMed

    Buu, Anne; Williams, L Keoki; Yang, James J

    2018-03-01

    We propose a new genome-wide association test for mixed binary and continuous phenotypes that uses an efficient numerical method to estimate the empirical distribution of the Fisher's combination statistic under the null hypothesis. Our simulation study shows that the proposed method controls the type I error rate and also maintains its power at the level of the permutation method. More importantly, the computational efficiency of the proposed method is much higher than the one of the permutation method. The simulation results also indicate that the power of the test increases when the genetic effect increases, the minor allele frequency increases, and the correlation between responses decreases. The statistical analysis on the database of the Study of Addiction: Genetics and Environment demonstrates that the proposed method combining multiple phenotypes can increase the power of identifying markers that may not be, otherwise, chosen using marginal tests.
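
    The building block here, Fisher's combination of per-phenotype p-values judged against an empirical (permutation-based) null, can be sketched in a few lines. The code below illustrates that general idea only, not the paper's efficient numerical approximation of the null distribution; the marker-level tests (Fisher's exact test for the binary trait, a two-sample t-test for the continuous trait) and the simulated data are assumptions.

```python
import numpy as np
from scipy import stats

def fisher_combination_test(genotype, y_binary, y_continuous, n_perm=1000, seed=0):
    """Fisher's combination of a binary-trait and a continuous-trait test for one marker.

    The empirical null of the combined statistic is obtained by permuting the
    genotype vector, which preserves the correlation between the two phenotypes.
    """
    rng = np.random.default_rng(seed)

    def combined_stat(g):
        carrier = g > 0
        # Binary phenotype: 2x2 association test between carrier status and trait.
        table = np.array([[np.sum(y_binary[carrier]), np.sum(~y_binary[carrier])],
                          [np.sum(y_binary[~carrier]), np.sum(~y_binary[~carrier])]])
        _, p1 = stats.fisher_exact(table)
        # Continuous phenotype: two-sample t-test between carriers and non-carriers.
        _, p2 = stats.ttest_ind(y_continuous[carrier], y_continuous[~carrier])
        return -2.0 * (np.log(p1) + np.log(p2))

    t_obs = combined_stat(genotype)
    t_null = np.array([combined_stat(rng.permutation(genotype)) for _ in range(n_perm)])
    p_emp = (1 + np.sum(t_null >= t_obs)) / (n_perm + 1)
    return t_obs, p_emp

# Simulated example: one marker with a modest effect on both phenotypes.
rng = np.random.default_rng(7)
g = rng.binomial(1, 0.3, 300)                     # carrier indicator
y_cont = 0.4 * g + rng.standard_normal(300)
y_bin = rng.random(300) < (0.3 + 0.1 * g)
print(fisher_combination_test(g, y_bin, y_cont, n_perm=500))
```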

  12. Correlation Between University Students' Kinematic Achievement and Learning Styles

    NASA Astrophysics Data System (ADS)

    Çirkinoğlu, A. G.; Demirci, N.

    2007-04-01

    In the literature, some research on kinematics has revealed that students have many difficulties in connecting graphs and physics. Some research has also shown that the method used in the classroom affects students' further learning. In this study, the correlation between university students' kinematics achievement and learning styles is investigated. For this purpose, a Kinematics Achievement Test and a Learning Style Inventory were applied to 573 students enrolled in general physics 1 courses at Balikesir University in the fall semester of 2005-2006. The Kinematics Achievement Test, consisting of 12 multiple-choice and 6 open-ended questions, was developed by the researchers to assess students' understanding, interpreting, and drawing of graphs. The Learning Style Inventory, a 24-item test covering visual, auditory, and kinesthetic learning styles, was developed and used by Barsch. The data obtained in this study were analyzed with the necessary statistical calculations (t-test, correlation, ANOVA, etc.) using the SPSS statistical program. Based on the research findings, tentative recommendations are made.

  13. Testing manifest monotonicity using order-constrained statistical inference.

    PubMed

    Tijmstra, Jesper; Hessen, David J; van der Heijden, Peter G M; Sijtsma, Klaas

    2013-01-01

    Most dichotomous item response models share the assumption of latent monotonicity, which states that the probability of a positive response to an item is a nondecreasing function of a latent variable intended to be measured. Latent monotonicity cannot be evaluated directly, but it implies manifest monotonicity across a variety of observed scores, such as the restscore, a single item score, and in some cases the total score. In this study, we show that manifest monotonicity can be tested by means of the order-constrained statistical inference framework. We propose a procedure that uses this framework to determine whether manifest monotonicity should be rejected for specific items. This approach provides a likelihood ratio test for which the p-value can be approximated through simulation. A simulation study is presented that evaluates the Type I error rate and power of the test, and the procedure is applied to empirical data.

  14. Second Language Experience Facilitates Statistical Learning of Novel Linguistic Materials.

    PubMed

    Potter, Christine E; Wang, Tianlin; Saffran, Jenny R

    2017-04-01

    Recent research has begun to explore individual differences in statistical learning, and how those differences may be related to other cognitive abilities, particularly their effects on language learning. In this research, we explored a different type of relationship between language learning and statistical learning: the possibility that learning a new language may also influence statistical learning by changing the regularities to which learners are sensitive. We tested two groups of participants, Mandarin Learners and Naïve Controls, at two time points, 6 months apart. At each time point, participants performed two different statistical learning tasks: an artificial tonal language statistical learning task and a visual statistical learning task. Only the Mandarin-learning group showed significant improvement on the linguistic task, whereas both groups improved equally on the visual task. These results support the view that there are multiple influences on statistical learning. Domain-relevant experiences may affect the regularities that learners can discover when presented with novel stimuli. Copyright © 2016 Cognitive Science Society, Inc.

  15. Second language experience facilitates statistical learning of novel linguistic materials

    PubMed Central

    Potter, Christine E.; Wang, Tianlin; Saffran, Jenny R.

    2016-01-01

    Recent research has begun to explore individual differences in statistical learning, and how those differences may be related to other cognitive abilities, particularly their effects on language learning. In the present research, we explored a different type of relationship between language learning and statistical learning: the possibility that learning a new language may also influence statistical learning by changing the regularities to which learners are sensitive. We tested two groups of participants, Mandarin Learners and Naïve Controls, at two time points, six months apart. At each time point, participants performed two different statistical learning tasks: an artificial tonal language statistical learning task and a visual statistical learning task. Only the Mandarin-learning group showed significant improvement on the linguistic task, while both groups improved equally on the visual task. These results support the view that there are multiple influences on statistical learning. Domain-relevant experiences may affect the regularities that learners can discover when presented with novel stimuli. PMID:27988939

  16. Optimizing Aircraft Availability: Where to Spend Your Next O&M Dollar

    DTIC Science & Technology

    2010-03-01

    patterns of variance are present. In addition, we use the Breusch-Pagan test to statistically determine whether homoscedasticity exists. For this... Breusch-Pagan test, large p-values are preferred so that we may accept the null hypothesis of normality. Failure to meet the fourth assumption is... Next, we show the residual by predicted plot and the Breusch-Pagan test for constant variance of the residuals. The null hypothesis is that the
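
    The excerpt above relies on the Breusch-Pagan test for constant residual variance. A minimal, self-contained sketch of that check (using synthetic data rather than the report's, with Python and statsmodels) might look like this:

      import numpy as np
      import statsmodels.api as sm
      from statsmodels.stats.diagnostic import het_breuschpagan

      rng = np.random.default_rng(0)
      x = rng.uniform(0, 10, 200)
      y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=200)   # homoscedastic errors by construction

      X = sm.add_constant(x)
      fit = sm.OLS(y, X).fit()

      # het_breuschpagan returns (LM statistic, LM p-value, F statistic, F p-value);
      # a large p-value means we fail to reject the null of constant error variance.
      lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(fit.resid, X)
      print(f"Breusch-Pagan LM p-value = {lm_pvalue:.3f}")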

  17. A note on the misuses of the variance test in meteorological studies

    NASA Astrophysics Data System (ADS)

    Hazra, Arnab; Bhattacharya, Sourabh; Banik, Pabitra; Bhattacharya, Sabyasachi

    2017-12-01

    Stochastic modeling of rainfall data is an important area in meteorology. The gamma distribution is a widely used probability model for non-zero rainfall. Typically the choice of the distribution for such meteorological studies is based on two goodness-of-fit tests—the Pearson's Chi-square test and the Kolmogorov-Smirnov test. Inspired by the index of dispersion introduced by Fisher (Statistical methods for research workers. Hafner Publishing Company Inc., New York, 1925), Mooley (Mon Weather Rev 101:160-176, 1973) proposed the variance test as a goodness-of-fit measure in this context and a number of researchers have implemented it since then. We show that the asymptotic distribution of the test statistic for the variance test is generally not comparable to any central Chi-square distribution and hence the test is erroneous. We also describe a method for checking the validity of the asymptotic distribution for a class of distributions. We implement the erroneous test on some simulated, as well as real datasets and demonstrate how it leads to some wrong conclusions.
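
    A small, hypothetical simulation (with assumed gamma parameters rather than the rainfall data analyzed in the note) illustrates the mismatch described here: the variance-test statistic computed from gamma samples does not match the central Chi-square reference distribution.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      n, reps, shape, scale = 50, 20000, 0.8, 10.0      # assumed gamma parameters, not fitted values

      samples = rng.gamma(shape, scale, size=(reps, n))
      d_stat = (n - 1) * samples.var(axis=1, ddof=1) / samples.mean(axis=1)

      # If the statistic followed a central chi-square with n - 1 degrees of freedom,
      # its upper quantiles would match the chi-square ones; here they do not.
      print("simulated 95th percentile of the statistic:", np.quantile(d_stat, 0.95))
      print("chi-square(n - 1) 95th percentile:", stats.chi2.ppf(0.95, df=n - 1))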

  18. Investigation of the Role of Training Health Volunteers in Promoting Pap Smear Test Use among Iranian Women Based on the Protection Motivation Theory.

    PubMed

    Ghahremani, Leila; Harami, Zahra Khiyali; Kaveh, Mohammad Hossein; Keshavarzi, Sareh

    2016-01-01

    Cervical cancer, known as one of the most prevalent types of cancer and a major public health problem in developing countries, can be detected by the Pap test, prevented, and treated. Despite the effective role of the Pap test in decreasing the incidence of and mortality due to cervical cancer, it is still one of the most common causes of cancer-related deaths among women, especially in developing countries. Thus, this study aimed to examine the effect of educational interventions implemented by health volunteers based on protection motivation theory (PMT) on promoting Pap test use among women. This quasi-experimental study was conducted on 60 health volunteers and 420 women. The study participants were divided into an intervention and a control group. Data were collected using a valid self-reported questionnaire including demographic variables and PMT constructs, which was completed by both groups before and 2 months after the intervention. Then, the data were entered into the SPSS statistical software, version 19, and analyzed using the Chi-square test, independent t-test, and descriptive statistical methods. P<0.05 was considered statistically significant. The findings of this study showed that the mean scores of the PMT constructs (i.e. perceived vulnerability, perceived severity, fear, response costs, self-efficacy, and intention) increased in the intervention group after the intervention (P<0.001). However, no significant difference was found between the two groups regarding response efficacy after the intervention (P=0.06). The rate of Pap test use also increased by about 62.9% among the study women. This study showed a significant positive relationship between PMT-based training and Pap test use. The results also revealed the successful contribution of health volunteers to training in cervical cancer screening. Thus, training interventions based on PMT are suggested to be designed and implemented, and health volunteers are recommended to be employed for educational purposes and for promoting the community's, especially women's, health.

  19. The use of imputed sibling genotypes in sibship-based association analysis: on modeling alternatives, power and model misspecification.

    PubMed

    Minică, Camelia C; Dolan, Conor V; Hottenga, Jouke-Jan; Willemsen, Gonneke; Vink, Jacqueline M; Boomsma, Dorret I

    2013-05-01

    When phenotypic, but no genotypic, data are available for relatives of participants in genetic association studies, previous research has shown that family-based imputed genotypes can boost the statistical power when included in such studies. Here, using simulations, we compared the performance of two statistical approaches suitable for modeling imputed genotype data: the mixture approach, which involves the full distribution of the imputed genotypes, and the dosage approach, where the mean of the conditional distribution features as the imputed genotype. Simulations were run by varying sibship size, the size of the phenotypic correlations among siblings, imputation accuracy and the minor allele frequency of the causal SNP. Furthermore, as imputing sibling data and extending the model to include sibships of size two or greater requires modeling the familial covariance matrix, we inquired whether model misspecification affects power. Finally, the results obtained via simulations were empirically verified in two datasets with continuous phenotype data (height) and with a dichotomous phenotype (smoking initiation). Across the settings considered, the mixture and the dosage approach are equally powerful and both produce unbiased parameter estimates. In addition, the likelihood-ratio test in the linear mixed model appears to be robust to the considered misspecification in the background covariance structure, given low to moderate phenotypic correlations among siblings. Empirical results show that the inclusion of imputed sibling genotypes in association analysis does not always result in a larger test statistic. The actual test statistic may drop in value due to small effect sizes. That is, if the power benefit is small, so that the change in the distribution of the test statistic under the alternative is relatively small, the probability of obtaining a smaller test statistic is greater. As the genetic effects are typically hypothesized to be small, in practice the decision on whether family-based imputation could be used as a means to increase power should be informed by prior power calculations and by consideration of the background correlation.
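
    As a hypothetical numerical illustration of the two modeling options contrasted above (the probabilities and parameter values below are invented, not taken from the study), the dosage approach replaces an unmeasured sibling's genotype by the mean of its conditional distribution, while the mixture approach keeps the whole distribution and weights the per-genotype likelihood accordingly:

      import numpy as np

      genotypes = np.array([0, 1, 2])
      # Assumed posterior probabilities P(genotype | observed family data) for one sibling.
      post = np.array([0.10, 0.55, 0.35])

      dosage = np.sum(genotypes * post)       # dosage approach: a single imputed value E[g | data]
      print("dosage =", dosage)               # 1.25

      def mixture_loglik(y, beta0, beta, sigma):
          # Mixture approach: the phenotype likelihood sums over the possible genotypes,
          # weighted by their posterior probabilities (a normal phenotype model is assumed here).
          dens = np.exp(-0.5 * ((y - beta0 - beta * genotypes) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
          return np.log(np.sum(post * dens))

      print("mixture log-likelihood at (beta0, beta, sigma) = (0, 0.3, 1):", mixture_loglik(1.0, 0.0, 0.3, 1.0))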

  20. One-minute heart rate recovery after cycloergometer exercise testing as a predictor of mortality in a large cohort of exercise test candidates: substantial differences with the treadmill-derived parameter.

    PubMed

    Gaibazzi, Nicola; Petrucci, Nicola; Ziacchi, Vigilio

    2004-03-01

    Previous work showed a strong inverse association between 1-min heart rate recovery (HRR) after exercising on a treadmill and all-cause mortality. The aim of this study was to determine whether the results could be replicated in a wide population of real-world exercise ECG candidates in our center, using a standard bicycle exercise test. Between 1991 and 1997, 1420 consecutive patients underwent ECG exercise testing performed according to our standard cycloergometer protocol. Three pre-specified cut-point values of 1-min HRR, derived from previous studies in the medical literature, were tested to see whether they could identify a higher-risk group for all-cause mortality; furthermore, we tested the possible association between 1-min HRR as a continuous variable and mortality using logistic regression. Both methods showed a lack of a statistically significant association between 1-min HRR and all-cause mortality. A weak trend toward an inverse association, although not statistically significant, could not be excluded. We could not validate the clear-cut results from some previous studies performed using the treadmill exercise test. The results in our study may only "not exclude" a mild inverse association between 1-min HRR measured after cycloergometer exercise testing and all-cause mortality. The 1-min HRR measured after cycloergometer exercise testing was not clinically useful as a prognostic marker.

  1. Assessing the status of airline safety culture and its relationship to key employee attitudes

    NASA Astrophysics Data System (ADS)

    Owen, Edward L.

    The need to identify the factors that influence the overall safety environment and compliance with safety procedures within airline operations is substantial. This study examines the relationships between job satisfaction, the overall perception of the safety culture, and compliance with safety rules and regulations of airline employees working in flight operations. A survey questionnaire administered via the internet gathered responses which were converted to numerical values for quantitative analysis. The results were grouped to provide indications of overall average levels in each of the three categories, satisfaction, perceptions, and compliance. Correlations between data in the three sets were tested for statistical significance using two-sample t-tests assuming equal variances. Strong statistical significance was found between job satisfaction and compliance with safety rules and between perceptions of the safety environment and safety compliance. The relationship between job satisfaction and safety perceptions did not show strong statistical significance.

  2. Robust Statistical Detection of Power-Law Cross-Correlation.

    PubMed

    Blythe, Duncan A J; Nikulin, Vadim V; Müller, Klaus-Robert

    2016-06-02

    We show that widely used approaches in statistical physics incorrectly indicate the existence of power-law cross-correlations between financial stock market fluctuations measured over several years and the neuronal activity of the human brain lasting for only a few minutes. While such cross-correlations are nonsensical, no current methodology allows them to be reliably discarded, leaving researchers at greater risk when the spurious nature of cross-correlations is not clear from the unrelated origin of the time series and rather requires careful statistical estimation. Here we propose a theory and method (PLCC-test) which allows us to rigorously and robustly test for power-law cross-correlations, correctly detecting genuine and discarding spurious cross-correlations, thus establishing meaningful relationships between processes in complex physical systems. Our method reveals for the first time the presence of power-law cross-correlations between amplitudes of the alpha and beta frequency ranges of the human electroencephalogram.

  3. Differences in Temperature Changes in Premature Infants During Invasive Procedures in Incubators and Radiant Warmers.

    PubMed

    Handhayanti, Ludwy; Rustina, Yeni; Budiati, Tri

    Premature infants tend to lose heat quickly. This loss can be aggravated when they receive an invasive procedure involving a venous puncture. This research used a crossover design, conducting 2 intervention tests to compare 2 different treatments on the same sample. The research involved 2 groups with 18 premature infants in each. Data were analyzed using an independent t test. Interventions conducted in an open incubator showed a p value of .001, which was statistically related to heat loss in premature infants. In contrast, for the radiant warmer, a p value of .001 referred to a different range of heat gain before and after the venous puncture. The radiant warmer protected the premature infant from hypothermia during the invasive procedure. However, it is inadvisable for routine care of newborn infants since it can increase insensible water loss.

  4. Robust Statistical Detection of Power-Law Cross-Correlation

    PubMed Central

    Blythe, Duncan A. J.; Nikulin, Vadim V.; Müller, Klaus-Robert

    2016-01-01

    We show that widely used approaches in statistical physics incorrectly indicate the existence of power-law cross-correlations between financial stock market fluctuations measured over several years and the neuronal activity of the human brain lasting for only a few minutes. While such cross-correlations are nonsensical, no current methodology allows them to be reliably discarded, leaving researchers at greater risk when the spurious nature of cross-correlations is not clear from the unrelated origin of the time series and rather requires careful statistical estimation. Here we propose a theory and method (PLCC-test) which allows us to rigorously and robustly test for power-law cross-correlations, correctly detecting genuine and discarding spurious cross-correlations, thus establishing meaningful relationships between processes in complex physical systems. Our method reveals for the first time the presence of power-law cross-correlations between amplitudes of the alpha and beta frequency ranges of the human electroencephalogram. PMID:27250630

  5. Determination of ABO blood grouping and Rhesus factor from tooth material.

    PubMed

    Kumar, Pooja Vijay; Vanishree, M; Anila, K; Hunasgi, Santosh; Suryadevra, Sri Sujan; Kardalkar, Swetha

    2016-01-01

    The aim of the study was to determine blood groups and Rhesus factor from dentin and pulp using the absorption-elution (AE) technique at different time periods of 0, 3, 6, 9 and 12 months, respectively. A total of 150 cases, 30 patients each at 0, 3, 6, 9 and 12 months, were included in the study. The samples consisted of males and females with ages ranging from 13 to 60 years. Each patient's blood group was checked and was considered as the "control." The dentin and pulp of extracted teeth were tested for the presence of ABO/Rh antigens at the respective time periods by the AE technique. Data were analyzed as proportions. For comparisons, the Chi-square test or Fisher's exact test (for small samples) was used. Blood group antigens of ABO and Rh factor were detected in dentin and pulp up to 12 months. For both ABO and Rh factor, dentin and pulp showed 100% sensitivity for the samples tested at 0 months and showed a gradual decrease in sensitivity as the time period increased. The sensitivity of pulp was better than that of dentin for both blood grouping systems, and ABO blood group antigens were better detected than Rh antigens. In dentin and pulp, the antigens of ABO and Rh factor were detected up to 12 months but showed a progressive decrease in antigenicity as the time period increased. When the results obtained from dentin and pulp were compared, ABO and Rh factor grouping showed similar results with no statistically significant difference. The sensitivity of ABO blood grouping was better than that of Rh factor blood grouping, and this difference was statistically significant.

  6. The Sperm Chromatin Structure Assay (SCSA(®)) and other sperm DNA fragmentation tests for evaluation of sperm nuclear DNA integrity as related to fertility.

    PubMed

    Evenson, Donald P

    2016-06-01

    Thirty-five years ago the pioneering paper in Science (240:1131) on the relationship between sperm DNA integrity and pregnancy outcome was featured as the cover issue showing a fluorescence photomicrograph of red and green stained sperm. The flow cytometry data showed a very significant difference in sperm DNA integrity between fertile and subfertile bulls and men. This study utilized heat (100°C, 5min) to denature DNA at sites of DNA strand breaks followed by staining with acridine orange (AO) and measurements of 5000 individual sperm of green double strand (ds) DNA and red single strand (ss) DNA fluorescence. Later, the heat protocol was changed to a low pH protocol to denature the DNA at sites of strand breaks; the heat and acid procedures produced the same results. SCSA data are very advantageously dual parameter with 1024 channels (degrees) of both red and green fluorescence. Hundreds of publications on the use of the SCSA test in animals and humans have validated the SCSA as a highly useful test for determining male breeding soundness. The SCSA test is a rapid, non-biased flow cytometer machine measurement providing robust statistical data with exceptional precision and repeatability. Many genotoxic experiments showed excellent dose response data with very low coefficient of variation that further validated the SCSA as being a highly powerful assay for sperm DNA integrity. Twelve years following the introduction of the SCSA test, the terminal deoxynucleotidyl transferase-mediated fluorescein-dUTP nick end labelling (TUNEL) test (1993) for sperm was introduced as the only other flow cytometric assay for sperm DNA fragmentation. However, the TUNEL test can also be done by light microscopy with much less statistical robustness. The COMET (1998) and Sperm Chromatin Dispersion (SCD; HALO) (2003) tests were introduced as light microscope tests that don't require a flow cytometer. Since these tests measure only 50-200 sperm per sample, they suffer from the lack of the statistical robustness of flow cytometric measurements. Only the SCSA test has an exact standardization of a fixed protocol. The many variations of the other tests make it very difficult to compare data and thresholds for risk of male factor infertility. Data from these four sperm DNA fragmentation tests plus the light microscope acridine orange test (AOT) are correlated to various degrees. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  7. Parameter estimation techniques based on optimizing goodness-of-fit statistics for structural reliability

    NASA Technical Reports Server (NTRS)

    Starlinger, Alois; Duffy, Stephen F.; Palko, Joseph L.

    1993-01-01

    New methods are presented that utilize the optimization of goodness-of-fit statistics in order to estimate Weibull parameters from failure data. It is assumed that the underlying population is characterized by a three-parameter Weibull distribution. Goodness-of-fit tests are based on the empirical distribution function (EDF). The EDF is a step function, calculated using failure data, and represents an approximation of the cumulative distribution function for the underlying population. Statistics (such as the Kolmogorov-Smirnov statistic and the Anderson-Darling statistic) measure the discrepancy between the EDF and the cumulative distribution function (CDF). These statistics are minimized with respect to the three Weibull parameters. Due to nonlinearities encountered in the minimization process, Powell's numerical optimization procedure is applied to obtain the optimum value of the EDF. Numerical examples show the applicability of these new estimation methods. The results are compared to the estimates obtained with Cooper's nonlinear regression algorithm.
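
    A minimal sketch of the estimation idea described above, assuming synthetic failure data and using SciPy's Kolmogorov-Smirnov statistic and Powell optimizer in place of the authors' implementation:

      import numpy as np
      from scipy import optimize, stats

      rng = np.random.default_rng(2)
      # Synthetic "failure data" from a three-parameter Weibull distribution.
      data = stats.weibull_min.rvs(c=2.0, loc=5.0, scale=10.0, size=60, random_state=rng)

      def ks_distance(params):
          shape, loc, scale = params
          if shape <= 0 or scale <= 0 or loc >= data.min():
              return np.inf                     # keep the parameters admissible
          return stats.kstest(data, "weibull_min", args=(shape, loc, scale)).statistic

      # Powell's derivative-free method minimizes the EDF-based discrepancy.
      res = optimize.minimize(ks_distance, x0=[1.5, 0.0, 8.0], method="Powell")
      print("estimated (shape, location, scale):", res.x)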

  8. Robot-assisted walking training for individuals with Parkinson's disease: a pilot randomized controlled trial.

    PubMed

    Sale, Patrizio; De Pandis, Maria Francesca; Le Pera, Domenica; Sova, Ivan; Cimolin, Veronica; Ancillao, Andrea; Albertini, Giorgio; Galli, Manuela; Stocchi, Fabrizio; Franceschini, Marco

    2013-05-24

    Over the last years, the introduction of robotic technologies into Parkinson's disease rehabilitation settings has progressed from concept to reality. However, the benefit of robotic training remains elusive. This pilot randomized controlled observer trial is aimed at investigating the feasibility, effectiveness and efficacy of new end-effector robot training in people with mild Parkinson's disease. Design: pilot randomized controlled trial. Robot training was feasible, acceptable, safe, and the participants completed 100% of the prescribed training sessions. A statistically significant improvement in gait index was found in favour of the EG (T0 versus T1). In particular, the statistical analysis of the primary outcome (gait speed) using the Friedman test showed statistically significant improvements for the EG (p = 0.0195). Friedman tests of step length, left (p = 0.0195) and right (p = 0.0195), and stride length, left (p = 0.0078) and right (p = 0.0195), also showed statistically significant gains. No statistically significant improvements were found in the CG. Robot training is a feasible and safe form of rehabilitative exercise for cognitively intact people with mild PD. This original approach can contribute to increasing short-term lower limb motor recovery in idiopathic PD patients. The focus on gait recovery is a further characteristic that makes this research relevant to clinical practice. On the whole, the simplicity of the treatment, the lack of side effects, and the positive results from patients support the recommendation to extend the use of this treatment. Further investigation regarding the long-term effectiveness of robot training is warranted. ClinicalTrials.gov NCT01668407.

  9. Comparative Evaluation of Microleakage Between Nano-Ionomer, Giomer and Resin Modified Glass Ionomer Cement in Class V Cavities- CLSM Study

    PubMed Central

    Hari, Archana; Thumu, Jayaprakash; Velagula, Lakshmi Deepa; Bolla, Nagesh; Varri, Sujana; Kasaraneni, Srikanth; Nalli, Siva Venkata Malathi

    2016-01-01

    Introduction Marginal integrity of adhesive restorative materials provides better sealing ability for enamel and dentin and plays an important role in the success of restorations in Class V cavities. A restorative material with good marginal adaptation improves the longevity of restorations. Aim The aim of this study was to evaluate microleakage in Class V cavities which were restored with Resin Modified Glass Ionomer Cement (RMGIC), Giomer and Nano-Ionomer. Materials and Methods This in-vitro study was performed on 60 human maxillary and mandibular premolars which were extracted for orthodontic reasons. A standard wedge-shaped defect was prepared on the buccal surfaces of the teeth with the gingival margin placed near the Cemento Enamel Junction (CEJ). Teeth were divided into three groups of 20 each, restored with RMGIC, Giomer and Nano-Ionomer, and subjected to thermocycling. Teeth were then immersed in 0.5% Rhodamine B dye for 48 hours. They were sectioned longitudinally from the middle of the cavity into mesial and distal parts. The sections were observed under a Confocal Laser Scanning Microscope (CLSM) to evaluate microleakage. Depth of dye penetration was measured in millimeters. Statistical Analysis The data were analysed using the Kruskal-Wallis test. Pairwise comparison was done with the Mann-Whitney U test. A p-value <0.05 was considered statistically significant. Results Nano-Ionomer showed significantly less microleakage than Giomer (p=0.0050). No statistically significant difference was found between Nano-Ionomer and RMGIC (p=0.3550). There was a statistically significant difference between RMGIC and Giomer (p=0.0450). Conclusion Nano-Ionomer and RMGIC showed significantly less leakage and better adaptation than Giomer, and there was no statistically significant difference between Nano-Ionomer and RMGIC. PMID:27437363

  10. Impact of genotyping errors on statistical power of association tests in genomic analyses: A case study

    PubMed Central

    Hou, Lin; Sun, Ning; Mane, Shrikant; Sayward, Fred; Rajeevan, Nallakkandi; Cheung, Kei-Hoi; Cho, Kelly; Pyarajan, Saiju; Aslan, Mihaela; Miller, Perry; Harvey, Philip D.; Gaziano, J. Michael; Concato, John; Zhao, Hongyu

    2017-01-01

    A key step in genomic studies is to assess high throughput measurements across millions of markers for each participant’s DNA, either using microarrays or sequencing techniques. Accurate genotype calling is essential for downstream statistical analysis of genotype-phenotype associations, and next generation sequencing (NGS) has recently become a more common approach in genomic studies. How the accuracy of variant calling in NGS-based studies affects downstream association analysis has not, however, been studied using empirical data in which both microarrays and NGS were available. In this article, we investigate the impact of variant calling errors on the statistical power to identify associations between single nucleotides and disease, and on associations between multiple rare variants and disease. Both differential and nondifferential genotyping errors are considered. Our results show that the power of burden tests for rare variants is strongly influenced by the specificity in variant calling, but is rather robust with regard to sensitivity. By using the variant calling accuracies estimated from a substudy of a Cooperative Studies Program project conducted by the Department of Veterans Affairs, we show that the power of association tests is mostly retained with commonly adopted variant calling pipelines. An R package, GWAS.PC, is provided to accommodate power analysis that takes account of genotyping errors (http://zhaocenter.org/software/). PMID:28019059

  11. HYPOTHESIS SETTING AND ORDER STATISTIC FOR ROBUST GENOMIC META-ANALYSIS.

    PubMed

    Song, Chi; Tseng, George C

    2014-01-01

    Meta-analysis techniques have been widely developed and applied in genomic applications, especially for combining multiple transcriptomic studies. In this paper, we propose an order statistic of p-values (the r-th ordered p-value, rOP) across combined studies as the test statistic. We illustrate different hypothesis settings that detect gene markers differentially expressed (DE) "in all studies", "in the majority of studies", or "in one or more studies", and specify rOP as a suitable method for detecting DE genes "in the majority of studies". We develop methods to estimate the parameter r in rOP for real applications. Statistical properties such as its asymptotic behavior and a one-sided testing correction for detecting markers of concordant expression changes are explored. Power calculation and simulation show better performance of rOP compared to classical Fisher's method, Stouffer's method, the minimum p-value method and the maximum p-value method under the focused hypothesis setting. Theoretically, rOP is found to be connected to the naïve vote counting method and can be viewed as a generalized form of vote counting with better statistical properties. The method is applied to three microarray meta-analysis examples including major depressive disorder, brain cancer and diabetes. The results demonstrate rOP as a more generalizable, robust and sensitive statistical framework to detect disease-related markers.
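
    A minimal sketch of the rOP statistic itself (with invented p-values; the full method in the paper also estimates r and handles one-sided corrections): under the joint null hypothesis, with independent uniform p-values across n studies, the r-th smallest p-value follows a Beta(r, n - r + 1) distribution, which yields the combined p-value.

      import numpy as np
      from scipy import stats

      def rop_test(pvals, r):
          pvals = np.sort(np.asarray(pvals))
          n = pvals.size
          rop = pvals[r - 1]                               # the r-th ordered p-value
          return rop, stats.beta.cdf(rop, r, n - r + 1)    # its null cumulative probability

      # e.g. one gene measured in 6 studies; require evidence "in the majority" (r = 4).
      study_pvals = [0.001, 0.004, 0.02, 0.03, 0.40, 0.75]
      rop, p_combined = rop_test(study_pvals, r=4)
      print(f"rOP = {rop:.3f}, combined p-value = {p_combined:.4f}")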

  12. Statistical fluctuations in pedestrian evacuation times and the effect of social contagion

    NASA Astrophysics Data System (ADS)

    Nicolas, Alexandre; Bouzat, Sebastián; Kuperman, Marcelo N.

    2016-08-01

    Mathematical models of pedestrian evacuation and the associated simulation software have become essential tools for the assessment of the safety of public facilities and buildings. While a variety of models is now available, their calibration and test against empirical data are generally restricted to global averaged quantities; the statistics compiled from the time series of individual escapes ("microscopic" statistics) measured in recent experiments are thus overlooked. In the same spirit, much research has primarily focused on the average global evacuation time, whereas the whole distribution of evacuation times over some set of realizations should matter. In the present paper we propose and discuss the validity of a simple relation between this distribution and the microscopic statistics, which is theoretically valid in the absence of correlations. To this purpose, we develop a minimal cellular automaton, with features that afford a semiquantitative reproduction of the experimental microscopic statistics. We then introduce a process of social contagion of impatient behavior in the model and show that the simple relation under test may dramatically fail at high contagion strengths, the latter being responsible for the emergence of strong correlations in the system. We conclude with comments on the potential practical relevance for safety science of calculations based on microscopic statistics.

  13. The Quality vs. the Quantity of Schooling: What Drives Economic Growth?

    ERIC Educational Resources Information Center

    Breton, Theodore R.

    2011-01-01

    This paper challenges Hanushek and Woessmann's (2008) contention that the quality and not the quantity of schooling determines a nation's rate of economic growth. I first show that their statistical analysis is flawed. I then show that when a nation's average test scores and average schooling attainment are included in a national income model,…

  14. Gene-Based Testing of Interactions in Association Studies of Quantitative Traits

    PubMed Central

    Ma, Li; Clark, Andrew G.; Keinan, Alon

    2013-01-01

    Various methods have been developed for identifying gene–gene interactions in genome-wide association studies (GWAS). However, most methods focus on individual markers as the testing unit, and the large number of such tests drastically erodes statistical power. In this study, we propose novel interaction tests of quantitative traits that are gene-based and that confer advantage in both statistical power and biological interpretation. The framework of gene-based gene–gene interaction (GGG) tests combine marker-based interaction tests between all pairs of markers in two genes to produce a gene-level test for interaction between the two. The tests are based on an analytical formula we derive for the correlation between marker-based interaction tests due to linkage disequilibrium. We propose four GGG tests that extend the following P value combining methods: minimum P value, extended Simes procedure, truncated tail strength, and truncated P value product. Extensive simulations point to correct type I error rates of all tests and show that the two truncated tests are more powerful than the other tests in cases of markers involved in the underlying interaction not being directly genotyped and in cases of multiple underlying interactions. We applied our tests to pairs of genes that exhibit a protein–protein interaction to test for gene-level interactions underlying lipid levels using genotype data from the Atherosclerosis Risk in Communities study. We identified five novel interactions that are not evident from marker-based interaction testing and successfully replicated one of these interactions, between SMAD3 and NEDD9, in an independent sample from the Multi-Ethnic Study of Atherosclerosis. We conclude that our GGG tests show improved power to identify gene-level interactions in existing, as well as emerging, association studies. PMID:23468652
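
    As a hypothetical sketch of the combining step only (ignoring the LD-induced correlation between marker pairs that the paper's analytical correction accounts for, and omitting the two truncated methods), the minimum-P and Simes combinations of pairwise interaction p-values could look like this:

      import numpy as np

      def min_p(pvals):
          # Bonferroni-style minimum-P combination of m p-values.
          p = np.asarray(pvals)
          return min(1.0, p.size * p.min())

      def simes(pvals):
          # Simes combination: minimum over i of m * p_(i) / i.
          p = np.sort(np.asarray(pvals))
          m = p.size
          return min(1.0, np.min(m * p / np.arange(1, m + 1)))

      # e.g. 3 SNPs in gene A x 4 SNPs in gene B -> 12 pairwise interaction p-values (invented).
      pairwise = np.array([0.003, 0.21, 0.08, 0.46, 0.012, 0.33,
                           0.70, 0.05, 0.18, 0.91, 0.27, 0.09])
      print("min-P gene-level p:", min_p(pairwise))
      print("Simes gene-level p:", simes(pairwise))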

  15. Antiadherent and antibacterial properties of stainless steel and NiTi orthodontic wires coated with silver against Lactobacillus acidophilus--an in vitro study.

    PubMed

    Mhaske, Arun Rameshwar; Shetty, Pradeep Chandra; Bhat, N Sham; Ramachandra, C S; Laxmikanth, S M; Nagarahalli, Kiran; Tekale, Pawankumar Dnyandeo

    2015-01-01

    The purpose of the study was to assess the antiadherent and antibacterial properties of surface-modified stainless steel and NiTi orthodontic wires coated with silver against Lactobacillus acidophilus. The study was done on 80 specimens of stainless steel and NiTi orthodontic wires. The specimens were divided into eight test groups, each consisting of 10 specimens. Groups containing uncoated wires acted as control groups for their respective experimental groups containing coated wires. Surface modification of the wires was carried out by the thermal vacuum evaporation method with silver. The wires were then subjected to microbiological tests to assess the antiadherent and antibacterial properties of the silver coating against L. acidophilus. The Mann-Whitney U test was used to analyze the colony-forming units (CFUs) in the control and test groups, and Student's t test (two-tailed, dependent) was used to find the significance of study parameters on a continuous scale within each group. Orthodontic wires coated with silver showed an antiadherent effect against L. acidophilus compared with uncoated wires. Uncoated stainless steel and NiTi wires showed 35.4% and 20.5% increases in weight, respectively, which were statistically significant (P < 0.001), whereas the surface-modified wires showed only 4.08% and 4.4% increases in weight (statistically insignificant, P > 0.001). The groups containing surface-modified wires showed a statistically significant decrease in the survival rate of L. acidophilus, expressed as CFU and as log of colony count, when compared to the groups containing uncoated wires: 836.60 ± 48.97 CFU for uncoated stainless steel versus 220.90 ± 30.73 CFU for silver-modified stainless steel, and 748.90 ± 35.64 CFU for uncoated NiTi versus 203.20 ± 41.94 CFU for surface-modified NiTi. Surface modification of orthodontic wires with silver can be used to prevent the accumulation of dental plaque and the development of dental caries during orthodontic treatment.

  16. Mode-Stirred Method Implementation for HIRF Susceptibility Testing and Results Comparison with Anechoic Method

    NASA Technical Reports Server (NTRS)

    Nguyen, Truong X.; Ely, Jay J.; Koppen, Sandra V.

    2001-01-01

    This paper describes the implementation of the mode-stirred method for susceptibility testing according to the current DO-160D standard. Test results on an Engine Data Processor using the implemented procedure, and comparisons with the standard anechoic test results, are presented. The comparison shows experimentally that the susceptibility thresholds found with the mode-stirred method are consistently higher than those found with the anechoic method. This is consistent with the recent statistical analysis finding by NIST that the current calibration procedure overstates field strength by a fixed amount. Once the test results are adjusted for this value, the comparisons with the anechoic results are excellent. The results also show that the test method has excellent chamber-to-chamber repeatability. Several areas for improvement to the current procedure are also identified and implemented.

  17. Vacuum Strength of Two Candidate Glasses for a Space Observatory

    NASA Technical Reports Server (NTRS)

    Manning, Timothy Andrew; Tucker, Dennis S.; Herren, Kenneth A.; Gregory, Don A.

    2007-01-01

    The strengths of two candidate glass types for use in a space observatory were measured. Samples of ultra-low expansion glass (ULE) and borosilicate (Pyrex) were tested in air and in vacuum at room temperature (20 degrees C) and in vacuum after being heated to 200 degrees C. Both glasses tested in vacuum showed a significant increase in strength over those tested in air. However, there was no statistical difference between the strength of samples tested in vacuum at room temperature and those tested in vacuum after heating to 200 degrees C.

  18. Vacuum Strength of Two Candidate Glasses for a Space Observatory

    NASA Technical Reports Server (NTRS)

    Manning, T. a.; Tucker, D. S.; Herren, K. A.; Gregory, D. A.

    2007-01-01

    The strengths of two candidate glass types for use in a space observatory were measured. Samples of ultra-low expansion glass (ULE) and borosilicate (Pyrex) were tested in air and in vacuum at room temperature (20 C) and in vacuum after being heated to 200 C. Both glasses tested in vacuum showed an increase in strength over those tested in air. However, there was no statistical difference between the strength of samples tested in vacuum at room temperature and those tested in vacuum after heating to 200 C.

  19. Effect of Internet-Based Cognitive Apprenticeship Model (i-CAM) on Statistics Learning among Postgraduate Students

    PubMed Central

    Saadati, Farzaneh; Ahmad Tarmizi, Rohani

    2015-01-01

    Because students’ ability to use statistics, which is mathematical in nature, is one of the concerns of educators, embedding within an e-learning system the pedagogical characteristics of learning is ‘value added’ because it facilitates the conventional method of learning mathematics. Many researchers emphasize the effectiveness of cognitive apprenticeship in learning and problem solving in the workplace. In a cognitive apprenticeship learning model, skills are learned within a community of practitioners through observation of modelling and then practice plus coaching. This study utilized an internet-based Cognitive Apprenticeship Model (i-CAM) in three phases and evaluated its effectiveness for improving statistics problem-solving performance among postgraduate students. The results showed that, when compared to the conventional mathematics learning model, the i-CAM could significantly promote students’ problem-solving performance at the end of each phase. In addition, the combination of the differences in students' test scores was considered to be statistically significant after controlling for the pre-test scores. The findings conveyed in this paper confirmed the considerable value of i-CAM in the improvement of statistics learning for non-specialized postgraduate students. PMID:26132553

  20. Evaluation of setting time and flow properties of self-synthesize alginate impressions

    NASA Astrophysics Data System (ADS)

    Halim, Calista; Cahyanto, Arief; Sriwidodo, Harsatiningsih, Zulia

    2018-02-01

    Alginate is an elastic hydrocolloid dental impression material used to obtain a negative reproduction of the oral mucosa, for example to record soft-tissue and occlusal relationships. The aim of the present study was to synthesize alginate and to determine its setting time and flow properties. There were five groups of alginate, comprising fifty samples in total of self-synthesized alginate and a commercial alginate impression product. The fifty samples were divided between the two tests, twenty-five each for the setting time and flow tests. Setting time was recorded in seconds (s), while flow was recorded in mm2. The fastest setting time was in group three (148.8 s) and the slowest was in group four. The highest flow was in group three (69.70 mm2) and the lowest in group one (58.34 mm2). Results were analyzed statistically by one-way ANOVA (α = 0.05), which showed a statistically significant difference in setting time but no statistically significant difference in flow between the self-synthesized alginate and the commercial alginate impression product. In conclusion, the alginate impression material was successfully self-synthesized, and variations in composition influence setting time and flow properties. The setting time most resembling that of the control group was in group three, and the flow most resembling the control group was in group four.

  1. Comparing a single case to a control group - Applying linear mixed effects models to repeated measures data.

    PubMed

    Huber, Stefan; Klein, Elise; Moeller, Korbinian; Willmes, Klaus

    2015-10-01

    In neuropsychological research, single-cases are often compared with a small control sample. Crawford and colleagues developed inferential methods (i.e., the modified t-test) for such a research design. In the present article, we suggest an extension of the methods of Crawford and colleagues employing linear mixed models (LMM). We first show that a t-test for the significance of a dummy coded predictor variable in a linear regression is equivalent to the modified t-test of Crawford and colleagues. As an extension to this idea, we then generalized the modified t-test to repeated measures data by using LMMs to compare the performance difference in two conditions observed in a single participant to that of a small control group. The performance of LMMs regarding Type I error rates and statistical power were tested based on Monte-Carlo simulations. We found that starting with about 15-20 participants in the control sample Type I error rates were close to the nominal Type I error rate using the Satterthwaite approximation for the degrees of freedom. Moreover, statistical power was acceptable. Therefore, we conclude that LMMs can be applied successfully to statistically evaluate performance differences between a single-case and a control sample. Copyright © 2015 Elsevier Ltd. All rights reserved.
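
    A minimal sketch of the equivalence noted above, with made-up scores: Crawford and Howell's modified t-test comparing one case with a small control sample gives the same t and p values as the test of a dummy-coded predictor (case = 1, controls = 0) in an ordinary least-squares regression.

      import numpy as np
      import statsmodels.api as sm
      from scipy import stats

      controls = np.array([12.1, 10.4, 11.8, 9.9, 10.7, 11.2, 12.5, 10.1])  # made-up control scores
      case = 7.3                                                            # made-up single-case score

      # Modified t-test: compare the case with the control mean, inflating the SE by sqrt(1 + 1/n).
      n = controls.size
      t_mod = (case - controls.mean()) / (controls.std(ddof=1) * np.sqrt(1 + 1 / n))
      p_mod = 2 * stats.t.sf(abs(t_mod), df=n - 1)

      # Same test as the t-statistic of a dummy-coded predictor in an OLS regression.
      y = np.append(controls, case)
      dummy = np.append(np.zeros(n), 1.0)
      fit = sm.OLS(y, sm.add_constant(dummy)).fit()

      print("modified t-test:       ", t_mod, p_mod)
      print("dummy-coded regression:", fit.tvalues[1], fit.pvalues[1])   # identical values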

  2. The effect of hydroxyzine on treating bruxism of 2- to 14-year-old children admitted to the clinic of Bandar Abbas Children Hospital in 2013-2014.

    PubMed

    Rahmati, M; Moayedi, A; Zakery Shahvari, S; Golmirzaei, J; Zahirinea, M; Abbasi, B

    2015-01-01

    Introduction. Bruxism is the pressing or grinding of the teeth against each other in non-physiologic situations, when an individual is not swallowing or chewing. If not treated, dental problems, stress, mental disorders, frequent night waking, and headache are expected. This research aimed to study the effect of hydroxyzine on treating bruxism of 2- to 14-year-old children admitted to the clinic of Bandar Abbas Children Hospital. Methodology. In this clinical trial, 143 children aged 4-12 years admitted to the Children Hospital were divided randomly into test and control groups. The test group consisted of 88 hydroxyzine-treated children and the control group of 55 children who used hot towels. Both groups were examined at several stages, including the pre-test stage before starting treatment, at two, four, and six weeks, and four months after stopping the treatment. The effects of each treatment on reducing bruxism symptoms were assessed by a questionnaire. The data were analyzed using SPSS with descriptive statistics, the t-test, and ANOVA. Results. As far as bruxism severity was concerned, the results showed a significant difference between the test group members who received hydroxyzine and the control group members who received no medication. T-test results showed a statistically significant difference between the test and control groups in the second post-test (four weeks later) (p-value ≤ 0.05). The mean bruxism severity score in the test group changed significantly in the post-tests (at two, four, and six weeks) compared to the pre-test, whereas, in terms of response to treatment, no significant difference was recorded between the control and test groups 4 weeks after the treatment. Discussion. The results showed that prescribing hydroxyzine for 4 weeks had a considerable effect in diminishing bruxism severity in the test group.

  3. Statistical analysis of content of Cs-137 in soils in Bansko-Razlog region

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kobilarov, R. G., E-mail: rkobi@tu-sofia.bg

    Statistical analysis of the data set consisting of the activity concentrations of ¹³⁷Cs in soils in the Bansko-Razlog region is carried out in order to establish the dependence of the deposition and the migration of ¹³⁷Cs on the soil type. The descriptive statistics and the test of normality show that the data set does not have a normal distribution. A positively skewed distribution and possible outlying values of the activity of ¹³⁷Cs in soils were observed. After reduction of the effects of outliers, the data set is divided into two parts, depending on the soil type. Tests of normality of the two new data sets show that they have a normal distribution. The ordinary kriging technique is used to characterize the spatial distribution of the activity of ¹³⁷Cs over an area covering 40 km² (the whole Razlog valley). The result (a map of the spatial distribution of the activity concentration of ¹³⁷Cs) can be used as a reference point for future studies on the assessment of radiological risk to the population and the erosion of soils in the study area.
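
    A small, hypothetical sketch of the normality screening described above (synthetic activities stand in for the measured ¹³⁷Cs data; the kriging step is omitted): the pooled data from two soil types typically fail a Shapiro-Wilk test, while each soil-type subset passes.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      soil_a = rng.normal(45.0, 8.0, 30)     # assumed activities (Bq/kg) for soil type A
      soil_b = rng.normal(12.0, 3.0, 30)     # assumed activities for soil type B
      pooled = np.concatenate([soil_a, soil_b])

      for name, values in [("pooled", pooled), ("soil A", soil_a), ("soil B", soil_b)]:
          w, p = stats.shapiro(values)
          print(f"{name:7s}  Shapiro-Wilk W = {w:.3f}, p = {p:.3f}")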

  4. "Magnitude-based inference": a statistical review.

    PubMed

    Welsh, Alan H; Knight, Emma J

    2015-04-01

    We consider "magnitude-based inference" and its interpretation by examining in detail its use in the problem of comparing two means. We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how "magnitude-based inference" is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. We show that "magnitude-based inference" is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with "magnitude-based inference" and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using "magnitude-based inference," a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis.

  5. The effect of communication skills training on quality of care, self-efficacy, job satisfaction and communication skills rate of nurses in hospitals of tabriz, iran.

    PubMed

    Khodadadi, Esmail; Ebrahimi, Hossein; Moghaddasian, Sima; Babapour, Jalil

    2013-03-01

    Having an effective relationship with the patient in the process of treatment is essential. Nurses must have communication skills in order to establish effective relationships with patients. This study evaluated the impact of communication skills training on quality of care, self-efficacy, job satisfaction and communication skills of nurses. This is an experimental study with a control group, conducted in 2012. The study sample consisted of 73 nurses who work in hospitals of Tabriz; they were selected by a proportional randomization method. The intervention was conducted only on the experimental group. In order to measure the quality of care, 160 patients who had received care from the nurses participated in this study. The data were analyzed with SPSS (ver. 13). Comparing the mean scores of communication skills showed a statistically significant difference between the control and experimental groups after the intervention. The paired t-test showed a statistically significant difference in the experimental group before and after the intervention. The independent t-test showed a statistically significant difference in the quality of care reported by patients between the control and experimental groups after the intervention. The results showed that communication skills training can increase nurses' communication skills and improve the quality of nursing care. Therefore, in order to improve the quality of nursing care, it is recommended that communication skills be established and taught as a separate course in nursing education.

  6. Construction of cosmic string induced temperature anisotropy maps with CMBFAST and statistical analysis

    NASA Astrophysics Data System (ADS)

    Simatos, N.; Perivolaropoulos, L.

    2001-01-01

    We use the publicly available code CMBFAST, as modified by Pogosian and Vachaspati, to simulate the effects of wiggly cosmic strings on the cosmic microwave background (CMB). Using the modified CMBFAST code, which takes into account vector modes and models wiggly cosmic strings by the one-scale model, we go beyond the angular power spectrum to construct CMB temperature maps with a resolution of a few degrees. The statistics of these maps are then studied using conventional and recently proposed statistical tests optimized for the detection of hidden temperature discontinuities induced by the Gott-Kaiser-Stebbins effect. We show, however, that these realistic maps cannot be distinguished in a statistically significant way from purely Gaussian maps with an identical power spectrum.

  7. The effects of multiple repairs on Inconel 718 weld mechanical properties

    NASA Technical Reports Server (NTRS)

    Russell, C. K.; Nunes, A. C., Jr.; Moore, D.

    1991-01-01

    Inconel 718 weldments were repaired 3, 6, 9, and 13 times using the gas tungsten arc welding process. The welded panels were machined into mechanical test specimens, postweld heat treated, and nondestructively tested. Tensile properties and high cycle fatigue life were evaluated and the results compared to unrepaired weld properties. Mechanical property data were analyzed using the statistical methods of difference in means for tensile properties, and difference in log means and Weibull analysis for high cycle fatigue properties. Statistical analysis performed on the data did not show a significant decrease in tensile or high cycle fatigue properties due to the repeated repairs. Some degradation was observed in all properties; however, it was minimal.

  8. Immersive Theater - a Proven Way to Enhance Learning Retention

    NASA Astrophysics Data System (ADS)

    Reiff, P. H.; Zimmerman, L.; Spillane, S.; Sumners, C.

    2014-12-01

    The portable immersive theater has gone from our first demonstration at fall AGU 2003 to a product offered by multiple companies in various versions to literally millions of users per year. As part of our NASA funded outreach program, we conducted a test of learning in a portable Discovery Dome as contrasted with learning the same materials (visuals and sound track) on a computer screen. We tested 200 middle school students (primarily underserved minorities). Paired t-tests and an independent t-test were used to compare the amount of learning that students achieved. Interest questionnaires were administered to participants in formal (public school) settings and focus groups were conducted in informal (museum camp and educational festival) settings. Overall results from the informal and formal educational setting indicated that there was a statistically significant increase in test scores after viewing We Choose Space. There was a statistically significant increase in test scores for students who viewed We Choose Space in the portable Discovery Dome (9.75) as well as with the computer (8.88). However, long-term retention of the material tested on the questionnaire indicated that for students who watched We Choose Space in the portable Discovery Dome, there was a statistically significant long-term increase in test scores (10.47), whereas, six weeks after learning on the computer, the improvements over the initial baseline (3.49) were far less and were not statistically significant. The test score improvement six weeks after learning in the dome was essentially the same as the post test immediately after watching the show, demonstrating virtually no loss of gained information in the six week interval. In the formal educational setting, approximately 34% of the respondents indicated that they wanted to learn more about becoming a scientist, while 35% expressed an interest in a career in space science. In the informal setting, 26% indicated that they were interested in pursuing a career in space science.

  9. The Effect of Personality Traits of Managers/Supervisor on Job Satisfaction of Medical Sciences University Staffs.

    PubMed

    Abedi, G; Molazadeh-Mahali, Q A; Mirzaian, B; Nadi-Ghara, A; Heidari-Gorji, A M

    2016-01-01

    Today, people spend most of their working lives in the workplace; therefore, investigating the factors related to job satisfaction is a research necessity. The purpose of this research was to analyze the effect of managers' personality traits on employee job satisfaction. The present study is a descriptive, causal-comparative one conducted on a statistical sample of 44 managers and 119 employees. The data were examined and analyzed through descriptive and inferential statistics: Student's t-test (independent t), one-way ANOVA, and the Kolmogorov-Smirnov test. Findings showed that managers and supervisors with the personality traits of extraversion, eagerness to new experiences, adaptability, and dutifulness had higher subordinate employee job satisfaction. However, for the neurotic trait, the result was different: job satisfaction was low with respect to neurosis. Based on this, it is suggested that, before any selection for managerial and supervisory positions, candidates receive a personality test and, in case an individual has a neurotic trait, appropriate intervention take place both in this group and in that of the employees.

  10. Length bias correction in gene ontology enrichment analysis using logistic regression.

    PubMed

    Mi, Gu; Di, Yanming; Emerson, Sarah; Cumbie, Jason S; Chang, Jeff H

    2012-01-01

    When assessing differential gene expression from RNA sequencing data, commonly used statistical tests tend to have greater power to detect differential expression of genes encoding longer transcripts. This phenomenon, called "length bias", will influence subsequent analyses such as Gene Ontology enrichment analysis. In the presence of length bias, Gene Ontology categories that include longer genes are more likely to be identified as enriched. These categories, however, are not necessarily biologically more relevant. We show that one can effectively adjust for length bias in Gene Ontology analysis by including transcript length as a covariate in a logistic regression model. The logistic regression model makes the statistical issue underlying length bias more transparent: transcript length becomes a confounding factor when it correlates with both the Gene Ontology membership and the significance of the differential expression test. The inclusion of the transcript length as a covariate allows one to investigate the direct correlation between the Gene Ontology membership and the significance of testing differential expression, conditional on the transcript length. We present both real and simulated data examples to show that the logistic regression approach is simple, effective, and flexible.
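
    A minimal sketch of the adjustment described above, using simulated genes in which both the differential expression (DE) call and Gene Ontology membership depend on transcript length but not on each other; the unadjusted logistic model shows spurious enrichment, while adding log length as a covariate removes it (the lengths and effect sizes below are invented):

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(3)
      n_genes = 5000
      log_length = rng.normal(7.5, 1.0, n_genes)                            # assumed log transcript lengths
      de_call = rng.binomial(1, 1 / (1 + np.exp(-(log_length - 7.5))))      # DE calls favour long genes
      in_go = rng.binomial(1, 1 / (1 + np.exp(-(log_length - 8.5))))        # GO category also favours long genes

      # Unadjusted model: the DE coefficient picks up the shared dependence on length.
      fit0 = sm.GLM(in_go, sm.add_constant(de_call), family=sm.families.Binomial()).fit()
      # Adjusted model: conditioning on log length removes the spurious enrichment signal.
      X = sm.add_constant(np.column_stack([de_call, log_length]))
      fit1 = sm.GLM(in_go, X, family=sm.families.Binomial()).fit()

      print("unadjusted DE coefficient:", round(fit0.params[1], 3), "p =", round(fit0.pvalues[1], 4))
      print("adjusted DE coefficient:  ", round(fit1.params[1], 3), "p =", round(fit1.pvalues[1], 4))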

  11. Accurate Modeling of Galaxy Clustering on Small Scales: Testing the Standard ΛCDM + Halo Model

    NASA Astrophysics Data System (ADS)

    Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron; Scoccimarro, Roman

    2015-01-01

    The large-scale distribution of galaxies can be explained fairly simply by assuming (i) a cosmological model, which determines the dark matter halo distribution, and (ii) a simple connection between galaxies and the halos they inhabit. This conceptually simple framework, called the halo model, has been remarkably successful at reproducing the clustering of galaxies on all scales, as observed in various galaxy redshift surveys. However, none of these previous studies have carefully modeled the systematics and thus truly tested the halo model in a statistically rigorous sense. We present a new accurate and fully numerical halo model framework and test it against clustering measurements from two luminosity samples of galaxies drawn from the SDSS DR7. We show that the simple ΛCDM cosmology + halo model is not able to simultaneously reproduce the galaxy projected correlation function and the group multiplicity function. In particular, the more luminous sample shows significant tension with theory. We discuss the implications of our findings and how this work paves the way for constraining galaxy formation by accurate simultaneous modeling of multiple galaxy clustering statistics.

  12. The Effect of Personality Traits of Managers/Supervisor on Job Satisfaction of Medical Sciences University Staffs

    PubMed Central

    Abedi, G; Molazadeh-Mahali, QA; Mirzaian, B; Nadi-Ghara, A; Heidari-Gorji, AM

    2016-01-01

    Background: Today, people spend most of their working lives in the workplace, so investigating the factors related to job satisfaction is a research necessity. Aim: The purpose of this research was to analyze the effect of managers' personality traits on employee job satisfaction. Subjects and Methods: The present study is a descriptive, causal-comparative one conducted on a statistical sample of 44 managers and 119 employees. The data were examined and analyzed with descriptive and inferential statistics: the independent Student's t-test, one-way ANOVA, and the Kolmogorov–Smirnov test. Results: Findings showed that managers and supervisors with the personality traits of extraversion, openness to new experiences, adaptability, and dutifulness had higher subordinate employee job satisfaction. However, for the neuroticism trait the result was different. Conclusion: The results showed that job satisfaction was low where neuroticism was high. Based on this, it is suggested that, before any selection for managerial and supervisory positions, candidates take a personality test, and that, if an individual shows a neurotic trait, an appropriate intervention be made for both the managers' group and the employees' group. PMID:28480099

  13. Comparative evaluation of insertion torque and mechanical stability for self-tapping and self-drilling orthodontic miniscrews - an in vitro study.

    PubMed

    Tepedino, Michele; Masedu, Francesco; Chimenti, Claudio

    2017-05-30

    The aim of the present study was to evaluate the relationship between insertion torque and the stability of miniscrews, in terms of resistance against dislocation, and to compare a self-tapping screw with a self-drilling one. Insertion torque was measured during placement of 30 self-drilling and 31 self-tapping stainless steel miniscrews (Leone SpA, Sesto Fiorentino, Italy) in synthetic bone blocks. An increasing pulling force was then applied at angles of 90° and 45°, and the displacement of the miniscrews was recorded. The statistical analysis showed a statistically significant difference between the mean Maximum Insertion Torque (MIT) of the two groups, and showed that force angulation and MIT have a statistically significant effect on miniscrew stability. For both miniscrews, an angle of 90° between the miniscrew and the loading force is preferable in terms of stability. The tested self-drilling orthodontic miniscrews showed higher MIT and greater resistance against dislocation than the self-tapping ones.

  14. Accelerated dissolution testing for controlled release microspheres using the flow-through dissolution apparatus.

    PubMed

    Collier, Jarrod W; Thakare, Mohan; Garner, Solomon T; Israel, Bridg'ette; Ahmed, Hisham; Granade, Saundra; Strong, Deborah L; Price, James C; Capomacchia, A C

    2009-01-01

    Theophylline controlled-release capsules (THEO-24 CR) were used as a model system to evaluate accelerated dissolution tests for process and quality control and for formulation development of controlled-release formulations. Dissolution test acceleration was provided by increasing temperature, pH, or flow rate, or by adding surfactant. Electron microscope studies of the theophylline microspheres after each experiment showed that at pH values of 6.6 and 7.6 the microspheres remained intact, but at pH 8.6 they showed deterioration. As temperature was increased from 37 to 57 degrees C, no change in microsphere integrity was noted. Increased flow rate also showed no detrimental effect on integrity. Increased temperature was determined to be the statistically significant variable.

  15. Statistics 101 for Radiologists.

    PubMed

    Anvari, Arash; Halpern, Elkan F; Samir, Anthony E

    2015-10-01

    Diagnostic tests have wide clinical applications, including screening, diagnosis, measuring treatment effect, and determining prognosis. Interpreting diagnostic test results requires an understanding of key statistical concepts used to evaluate test efficacy. This review explains descriptive statistics and discusses probability, including mutually exclusive and independent events and conditional probability. In the inferential statistics section, a statistical perspective on study design is provided, together with an explanation of how to select appropriate statistical tests. Key concepts in recruiting study samples are discussed, including representativeness and random sampling. Variable types are defined, including predictor, outcome, and covariate variables, and the relationship of these variables to one another. In the hypothesis testing section, we explain how to determine if observed differences between groups are likely to be due to chance. We explain type I and II errors, statistical significance, and study power, followed by an explanation of effect sizes and how confidence intervals can be used to generalize observed effect sizes to the larger population. Statistical tests are explained in four categories: t tests and analysis of variance, proportion analysis tests, nonparametric tests, and regression techniques. We discuss sensitivity, specificity, accuracy, receiver operating characteristic analysis, and likelihood ratios. Measures of reliability and agreement, including κ statistics, intraclass correlation coefficients, and Bland-Altman graphs and analysis, are introduced. © RSNA, 2015.
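
    As a small illustration of several of the concepts reviewed here, the sketch below computes sensitivity, specificity, likelihood ratios, and the ROC AUC from simulated test results; the cutoff and the data are arbitrary and carry no clinical meaning.

```python
# Toy illustration of diagnostic-test summary statistics discussed in the review.
# Data are simulated and carry no clinical meaning.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
disease = rng.binomial(1, 0.3, size=500)                  # true disease status
score = rng.normal(loc=disease * 1.2, scale=1.0)          # continuous test result
positive = score > 0.6                                    # dichotomize at an arbitrary cutoff

tp = np.sum(positive & (disease == 1))
fp = np.sum(positive & (disease == 0))
fn = np.sum(~positive & (disease == 1))
tn = np.sum(~positive & (disease == 0))

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
lr_pos = sensitivity / (1 - specificity)                  # positive likelihood ratio
lr_neg = (1 - sensitivity) / specificity                  # negative likelihood ratio
auc = roc_auc_score(disease, score)                       # threshold-free discrimination

print(f"Se={sensitivity:.2f} Sp={specificity:.2f} LR+={lr_pos:.2f} LR-={lr_neg:.2f} AUC={auc:.2f}")
```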

  16. The Sport Students’ Ability of Literacy and Statistical Reasoning

    NASA Astrophysics Data System (ADS)

    Hidayah, N.

    2017-03-01

    The ability of literacy and statistical reasoning is very important for students of sport education colleges, because material for statistical learning can be drawn from many of their activities, such as sport competitions, test and measurement results, predicting achievement based on training, finding connections among variables, and others. This research describes the sport education college students’ ability of literacy and statistical reasoning related to the identification of data types, probability, table interpretation, description and explanation using bar or pie graphics, explanation of variability, and the interpretation, calculation, and explanation of mean, median, and mode, measured through an instrument. The instrument was administered to 50 college students majoring in sport; only 26% of the students scored above 30%, while the others scored below 30%. Across all subjects, 56% of students were able to identify data classifications, 49% were able to read, display, and interpret tables through graphics, 27% had the ability in probability, 33% were able to describe variability, and 16.32% were able to read, calculate, and describe mean, median, and mode. The results show that the sport students’ ability of literacy and statistical reasoning is not yet adequate and that their statistical study has not reached conceptual understanding, literacy training, and statistical reasoning, so it is critical to improve the sport students’ ability of literacy and statistical reasoning.

  17. Biomechanical in vitro - stability testing on human specimens of a locking plate system against conventional screw fixation of a proximal first metatarsal lateral displacement osteotomy.

    PubMed

    Arnold, Heino; Stukenborg-Colsman, Christina; Hurschler, Christof; Seehaus, Frank; Bobrowitsch, Evgenij; Waizy, Hazibullah

    2012-01-01

    The aim of this study was to examine resistance to angulation and displacement of the internal fixation of a proximal first metatarsal lateral displacement osteotomy, using a locking plate system compared with a conventional crossed-screw fixation. Seven anatomical human specimens were tested. Each specimen was tested with a locking screw plate as well as a crossed cancellous screw fixation. The statistical analysis was performed with the Friedman test; the level of significance was p = 0.05. We found greater stability about all three axes of movement analyzed for the locking plate (PLATE) than for the crossed-screw osteosynthesis (CSO). The Friedman test showed statistical significance at the p = 0.05 level for all groups and for both translational and rotational movements. The results of our study confirm that fixation of the lateral proximal first metatarsal displacement osteotomy with a locking plate is a technically simple procedure of superior stability.
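
    A hedged sketch of the Friedman test used here is shown below with scipy; the angulation values are fabricated, and a third (hypothetical) fixation condition is included only because scipy's implementation requires at least three related samples.

```python
# Sketch of a Friedman test for related stability measurements on the same specimens.
# The angulation values are fabricated; a third condition is added only because
# scipy.stats.friedmanchisquare needs at least three related samples.
import numpy as np
from scipy.stats import friedmanchisquare

plate  = np.array([1.2, 1.0, 1.5, 1.1, 0.9, 1.3, 1.4])   # locking plate, one value per specimen
screws = np.array([2.1, 1.8, 2.4, 2.0, 1.7, 2.2, 2.3])   # crossed-screw fixation
other  = np.array([1.6, 1.5, 1.9, 1.7, 1.3, 1.8, 1.9])   # hypothetical third condition

stat, p = friedmanchisquare(plate, screws, other)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")
```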

  18. Biomechanical In Vitro - Stability Testing on Human Specimens of a Locking Plate System Against Conventional Screw Fixation of a Proximal First Metatarsal Lateral Displacement Osteotomy

    PubMed Central

    Arnold, Heino; Stukenborg-Colsman, Christina; Hurschler, Christof; Seehaus, Frank; Bobrowitsch, Evgenij; Waizy, Hazibullah

    2012-01-01

    Introduction: The aim of this study was to examine resistance to angulation and displacement of the internal fixation of a proximal first metatarsal lateral displacement osteotomy, using a locking plate system compared with a conventional crossed-screw fixation. Materials and Methodology: Seven anatomical human specimens were tested. Each specimen was tested with a locking screw plate as well as a crossed cancellous screw fixation. The statistical analysis was performed with the Friedman test; the level of significance was p = 0.05. Results: We found greater stability about all three axes of movement analyzed for the locking plate (PLATE) than for the crossed-screw osteosynthesis (CSO). The Friedman test showed statistical significance at the p = 0.05 level for all groups and for both translational and rotational movements. Conclusion: The results of our study confirm that fixation of the lateral proximal first metatarsal displacement osteotomy with a locking plate is a technically simple procedure of superior stability. PMID:22675409

  19. Interlaboratory round robin study on axial tensile properties of SiC-SiC CMC tubular test specimens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Gyanender P.; Gonczy, Steve T.; Deck, Christian P.

    An interlaboratory round robin study was conducted on the tensile strength of SiC–SiC ceramic matrix composite (CMC) tubular test specimens at room temperature, with the objective of expanding the database of mechanical properties of nuclear-grade SiC–SiC and establishing the precision and bias statement for standard test method ASTM C1773. The mechanical property statistics from the round robin study and the precision statistics and precision statement are presented herein. The data show reasonable consistency across the laboratories, indicating that the current C1773–13 ASTM standard is adequate for testing ceramic fiber reinforced ceramic matrix composite tubular test specimens. Furthermore, it was found that the distribution of ultimate tensile strength data was best described with a two-parameter Weibull distribution, while a lognormal distribution provided a good description of the distribution of proportional limit stress data.
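
    For readers who want to reproduce the distribution-fitting step on their own data, the following is a minimal sketch (with synthetic strength values, not the round-robin results) of fitting a two-parameter Weibull and a lognormal distribution with scipy, fixing the location parameter at zero in both cases.

```python
# Sketch: fitting a two-parameter Weibull and a lognormal to tensile-strength data.
# Strength values are synthetic stand-ins, not round-robin results.
from scipy import stats

uts = stats.weibull_min.rvs(c=8.0, scale=250.0, size=40, random_state=42)   # MPa, synthetic

# Two-parameter Weibull: fix the location parameter at zero
shape, loc, scale = stats.weibull_min.fit(uts, floc=0)
print(f"Weibull modulus (shape) = {shape:.2f}, characteristic strength = {scale:.1f} MPa")

# Lognormal fit (location fixed at zero), e.g. for proportional limit stress data
s, loc_ln, scale_ln = stats.lognorm.fit(uts, floc=0)
print(f"lognormal sigma = {s:.3f}, median = {scale_ln:.1f} MPa")

# Compare goodness of fit via the Kolmogorov-Smirnov statistic of each candidate
print("KS (Weibull)  :", stats.kstest(uts, "weibull_min", args=(shape, 0, scale)).statistic)
print("KS (lognormal):", stats.kstest(uts, "lognorm", args=(s, 0, scale_ln)).statistic)
```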

  20. Order-restricted inference for means with missing values.

    PubMed

    Wang, Heng; Zhong, Ping-Shou

    2017-09-01

    Missing values appear very often in many applications, but the problem of missing values has not received much attention in testing order-restricted alternatives. Under the missing at random (MAR) assumption, we impute the missing values nonparametrically using kernel regression. For data with imputation, the classical likelihood ratio test designed for testing the order-restricted means is no longer applicable since the likelihood does not exist. This article proposes a novel method for constructing test statistics for assessing means with an increasing order or a decreasing order based on jackknife empirical likelihood (JEL) ratio. It is shown that the JEL ratio statistic evaluated under the null hypothesis converges to a chi-bar-square distribution, whose weights depend on missing probabilities and nonparametric imputation. Simulation study shows that the proposed test performs well under various missing scenarios and is robust for normally and nonnormally distributed data. The proposed method is applied to an Alzheimer's disease neuroimaging initiative data set for finding a biomarker for the diagnosis of the Alzheimer's disease. © 2017, The International Biometric Society.
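
    The nonparametric imputation step can be illustrated with a simple Nadaraya-Watson kernel regression of the missing responses on a fully observed covariate; the sketch below uses simulated data and a fixed bandwidth and does not reproduce the jackknife empirical likelihood machinery of the paper.

```python
# Minimal sketch of nonparametric (Nadaraya-Watson) imputation of responses that are
# missing at random given a fully observed covariate x. Illustrative only; the paper's
# jackknife empirical likelihood step is not reproduced here.
import numpy as np

def nw_impute(x, y, missing, bandwidth):
    """Replace y[missing] with Gaussian-kernel-weighted averages of observed y values."""
    y = y.astype(float).copy()
    x_obs, y_obs = x[~missing], y[~missing]
    for i in np.where(missing)[0]:
        w = np.exp(-0.5 * ((x[i] - x_obs) / bandwidth) ** 2)
        y[i] = np.sum(w * y_obs) / np.sum(w)
    return y

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, size=200)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=200)
missing = rng.random(200) < 1 / (1 + np.exp(-(x - 5)))    # MAR: missingness depends only on x

y_imp = nw_impute(x, y, missing, bandwidth=0.8)
print("mean of observed y :", round(y[~missing].mean(), 3))
print("mean after imputing:", round(y_imp.mean(), 3))
```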

  1. Interlaboratory round robin study on axial tensile properties of SiC-SiC CMC tubular test specimens

    DOE PAGES

    Singh, Gyanender P.; Gonczy, Steve T.; Deck, Christian P.; ...

    2018-04-19

    An interlaboratory round robin study was conducted on the tensile strength of SiC–SiC ceramic matrix composite (CMC) tubular test specimens at room temperature, with the objective of expanding the database of mechanical properties of nuclear-grade SiC–SiC and establishing the precision and bias statement for standard test method ASTM C1773. The mechanical property statistics from the round robin study and the precision statistics and precision statement are presented herein. The data show reasonable consistency across the laboratories, indicating that the current C1773–13 ASTM standard is adequate for testing ceramic fiber reinforced ceramic matrix composite tubular test specimens. Furthermore, it was found that the distribution of ultimate tensile strength data was best described with a two-parameter Weibull distribution, while a lognormal distribution provided a good description of the distribution of proportional limit stress data.

  2. Genotoxicity of AMPA, the environmental metabolite of glyphosate, assessed by the Comet assay and cytogenetic tests.

    PubMed

    Mañas, F; Peralta, L; Raviolo, J; García Ovando, H; Weyers, A; Ugnia, L; Gonzalez Cid, M; Larripa, I; Gorla, N

    2009-03-01

    Formulations containing glyphosate are the most widely used herbicides in the world. AMPA is the major environmental breakdown product of glyphosate. The purpose of this study was to evaluate the in vitro genotoxicity of AMPA using the Comet assay in Hep-2 cells after 4 h of incubation and the chromosome aberration (CA) test in human lymphocytes after 48 h of exposure. Potential in vivo genotoxicity was evaluated through the micronucleus test in mice. In the Comet assay, the level of DNA damage in cells exposed at 2.5-7.5 mM showed a significant increase compared with the control group. In human lymphocytes, we found a statistically significant clastogenic effect of AMPA at 1.8 mM compared with the control group. In vivo, the micronucleus test showed statistically significant increases at 200-400 mg/kg. AMPA was genotoxic in all three tests performed. Very scarce data are available on the potential genotoxicity of AMPA.

  3. An Investigation of the Impact of Guessing on Coefficient α and Reliability

    PubMed Central

    2014-01-01

    Guessing is known to influence the test reliability of multiple-choice tests. Although there are many studies that have examined the impact of guessing, they used rather restrictive assumptions (e.g., parallel test assumptions, homogeneous inter-item correlations, homogeneous item difficulty, and homogeneous guessing levels across items) to evaluate the relation between guessing and test reliability. Based on the item response theory (IRT) framework, this study investigated the extent of the impact of guessing on reliability under more realistic conditions where item difficulty, item discrimination, and guessing levels actually vary across items with three different test lengths (TL). By accommodating multiple item characteristics simultaneously, this study also focused on examining interaction effects between guessing and other variables entered in the simulation to be more realistic. The simulation of the more realistic conditions and calculations of reliability and classical test theory (CTT) item statistics were facilitated by expressing CTT item statistics, coefficient α, and reliability in terms of IRT model parameters. In addition to the general negative impact of guessing on reliability, results showed interaction effects between TL and guessing and between guessing and test difficulty.
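
    For reference, coefficient alpha itself is straightforward to compute from an item-score matrix; the sketch below simulates 3PL-style responses with a constant guessing level and applies the usual sample formula, without reproducing the study's IRT-based expressions.

```python
# Sketch: coefficient alpha computed directly from a simulated 0/1 item-score matrix
# generated under a 3PL-like model with a constant guessing level.
import numpy as np

def coefficient_alpha(scores):
    """Cronbach's alpha for a (n_examinees, n_items) score matrix."""
    k = scores.shape[1]
    item_var_sum = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(4)
n_persons, n_items = 1000, 30
ability = rng.normal(size=(n_persons, 1))
difficulty = rng.normal(scale=0.8, size=(1, n_items))
guessing = 0.2                                            # constant lower asymptote
p_correct = guessing + (1 - guessing) / (1 + np.exp(-(ability - difficulty)))
scores = (rng.random((n_persons, n_items)) < p_correct).astype(int)

print(f"coefficient alpha = {coefficient_alpha(scores):.3f}")
```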

  4. Sequential Blood Filtration for Extracorporeal Circulation: Initial Results from a Proof-of-Concept Prototype.

    PubMed

    Herbst, Daniel P

    2014-09-01

    Micropore filters are used during extracorporeal circulation to prevent gaseous and solid particles from entering the patient's systemic circulation. Although these devices improve patient safety, limitations in current designs have prompted the development of a new concept in micropore filtration. A prototype of the new design was made using 40-μm filter screens and compared against four commercially available filters for performance in pressure loss and gross air handling. Pre- and postfilter bubble counts for 5- and 10-mL bolus injections in an ex vivo test circuit were recorded using a Doppler ultrasound bubble counter. Statistical analysis of results for bubble volume reduction between test filters was performed with one-way repeated-measures analysis of variance using Bonferroni post hoc tests. Changes in filter performance with changes in microbubble load were also assessed with dependent t tests using the 5- and 10-mL bolus injections as the paired sample for each filter. Significance was set at p < .05. All filters in the test group were comparable in pressure loss performance, showing a range of 26-33 mmHg at a flow rate of 6 L/min. In gross air-handling studies, the prototype showed improved bubble volume reduction, reaching statistical significance with three of the four commercial filters. All test filters showed decreased performance in bubble volume reduction when the microbubble load was increased. Findings from this research support the underpinning theories of a sequential arterial-line filter design and suggest that improvements in microbubble filtration may be possible using this technique.
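
    The dependent t-test mentioned above, comparing a filter's bubble-volume reduction at the 5- and 10-mL bolus loads, can be sketched as follows; the percentage values are fabricated examples, not study data.

```python
# Sketch of the dependent (paired) t-test comparing one filter's bubble-volume
# reduction at the 5-mL and 10-mL bolus loads. The percentages are fabricated.
import numpy as np
from scipy.stats import ttest_rel

reduction_5ml  = np.array([92.0, 89.5, 94.1, 90.2, 91.8, 93.0])   # % reduction, 5-mL bolus
reduction_10ml = np.array([85.3, 84.0, 88.7, 83.9, 86.1, 87.2])   # same runs, 10-mL bolus

t, p = ttest_rel(reduction_5ml, reduction_10ml)
print(f"paired t = {t:.2f}, p = {p:.4f}")
```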

  5. Sequential Blood Filtration for Extracorporeal Circulation: Initial Results from a Proof-of-Concept Prototype

    PubMed Central

    Herbst, Daniel P.

    2014-01-01

    Abstract: Micropore filters are used during extracorporeal circulation to prevent gaseous and solid particles from entering the patient’s systemic circulation. Although these devices improve patient safety, limitations in current designs have prompted the development of a new concept in micropore filtration. A prototype of the new design was made using 40-μm filter screens and compared against four commercially available filters for performance in pressure loss and gross air handling. Pre- and postfilter bubble counts for 5- and 10-mL bolus injections in an ex vivo test circuit were recorded using a Doppler ultrasound bubble counter. Statistical analysis of results for bubble volume reduction between test filters was performed with one-way repeated-measures analysis of variance using Bonferroni post hoc tests. Changes in filter performance with changes in microbubble load were also assessed with dependent t tests using the 5- and 10-mL bolus injections as the paired sample for each filter. Significance was set at p < .05. All filters in the test group were comparable in pressure loss performance, showing a range of 26–33 mmHg at a flow rate of 6 L/min. In gross air-handling studies, the prototype showed improved bubble volume reduction, reaching statistical significance with three of the four commercial filters. All test filters showed decreased performance in bubble volume reduction when the microbubble load was increased. Findings from this research support the underpinning theories of a sequential arterial-line filter design and suggest that improvements in microbubble filtration may be possible using this technique. PMID:26357790

  6. Quantifying the evolution of flow boiling bubbles by statistical testing and image analysis: toward a general model.

    PubMed

    Xiao, Qingtai; Xu, Jianxin; Wang, Hua

    2016-08-16

    A new index, the estimate of the error variance, was proposed to quantify the evolution of flow patterns when multiphase components or tracers are difficult to distinguish. The homogeneity degree of the luminance space distribution behind the viewing windows in the direct contact boiling heat transfer process was explored. With image analysis and a linear statistical model, the F-test was used to test whether the light was uniform, and a non-linear method was used to determine the direction and position of a fixed light source. The experimental results showed that the inflection point of the new index was approximately equal to the mixing time. The new index was further applied to a multiphase macro-mixing process driven by top blowing in a stirred tank. Moreover, a general quantifying model was introduced to describe the relationship between the flow patterns of the bubble swarms and heat transfer. The results can be applied to other mixing processes in which the target is difficult to recognize.
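
    One simple way to frame an F-test of light uniformity, sketched below under assumed (synthetic) data, is the overall F-statistic of a linear model of luminance on pixel coordinates: if the light is uniform, position should explain no variance. This is only an illustration of the idea, not the authors' exact model.

```python
# Sketch: testing spatial uniformity of luminance with the overall F-test of a linear
# model luminance ~ row + column. The image is synthetic; not the authors' exact model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
rows, cols = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
luminance = 120 + 0.15 * cols + rng.normal(scale=5.0, size=(64, 64))   # slight left-right gradient

X = sm.add_constant(np.column_stack([rows.ravel(), cols.ravel()]).astype(float))
fit = sm.OLS(luminance.ravel(), X).fit()
print(f"regression F = {fit.fvalue:.1f}, p = {fit.f_pvalue:.3g}")      # small p => non-uniform light
```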

  7. Quantifying the evolution of flow boiling bubbles by statistical testing and image analysis: toward a general model

    PubMed Central

    Xiao, Qingtai; Xu, Jianxin; Wang, Hua

    2016-01-01

    A new index, the estimate of the error variance, was proposed to quantify the evolution of flow patterns when multiphase components or tracers are difficult to distinguish. The homogeneity degree of the luminance space distribution behind the viewing windows in the direct contact boiling heat transfer process was explored. With image analysis and a linear statistical model, the F-test was used to test whether the light was uniform, and a non-linear method was used to determine the direction and position of a fixed light source. The experimental results showed that the inflection point of the new index was approximately equal to the mixing time. The new index was further applied to a multiphase macro-mixing process driven by top blowing in a stirred tank. Moreover, a general quantifying model was introduced to describe the relationship between the flow patterns of the bubble swarms and heat transfer. The results can be applied to other mixing processes in which the target is difficult to recognize. PMID:27527065

  8. Proper Image Subtraction—Optimal Transient Detection, Photometry, and Hypothesis Testing

    NASA Astrophysics Data System (ADS)

    Zackay, Barak; Ofek, Eran O.; Gal-Yam, Avishay

    2016-10-01

    Transient detection and flux measurement via image subtraction stand at the base of time domain astronomy. Due to the varying seeing conditions, the image subtraction process is non-trivial, and existing solutions suffer from a variety of problems. Starting from basic statistical principles, we develop the optimal statistic for transient detection, flux measurement, and any image-difference hypothesis testing. We derive a closed-form statistic that: (1) is mathematically proven to be the optimal transient detection statistic in the limit of background-dominated noise, (2) is numerically stable, (3) for accurately registered, adequately sampled images, does not leave subtraction or deconvolution artifacts, (4) allows automatic transient detection to the theoretical sensitivity limit by providing credible detection significance, (5) has uncorrelated white noise, (6) is a sufficient statistic for any further statistical test on the difference image, and, in particular, allows us to distinguish particle hits and other image artifacts from real transients, (7) is symmetric to the exchange of the new and reference images, (8) is at least an order of magnitude faster to compute than some popular methods, and (9) is straightforward to implement. Furthermore, we present extensions of this method that make it resilient to registration errors, color-refraction errors, and any noise source that can be modeled. In addition, we show that the optimal way to prepare a reference image is the proper image coaddition presented in Zackay & Ofek. We demonstrate this method on simulated data and real observations from the PTF data release 2. We provide an implementation of this algorithm in MATLAB and Python.
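
    The paper's closed-form statistic is not reproduced here, but the flavor of PSF-weighted detection in the background-dominated limit can be conveyed by a generic matched filter applied to a difference image; in the sketch below the difference image, PSF, and injected source are all simulated.

```python
# Generic matched-filter detection on a difference image in the background-dominated
# (white-noise) limit. This conveys the flavor of PSF-weighted detection only; it is
# not the closed-form statistic derived in the paper. Everything here is simulated.
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(6)
sigma_bkg = 5.0
diff = rng.normal(scale=sigma_bkg, size=(256, 256))        # difference image: pure background

yy, xx = np.mgrid[-7:8, -7:8]                              # assumed-known Gaussian PSF
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
psf /= psf.sum()
diff[93:108, 143:158] += 300.0 * psf                       # inject a fake transient at (100, 150)

# Cross-correlate with the PSF and normalize so each pixel is ~N(0,1) under the null
score = fftconvolve(diff, psf[::-1, ::-1], mode="same")
score /= sigma_bkg * np.sqrt((psf**2).sum())

peak = np.unravel_index(np.argmax(score), score.shape)
print(f"peak significance {score[peak]:.1f} sigma at pixel {peak}")
```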

  9. Tests of selection in pooled case-control data: an empirical study.

    PubMed

    Udpa, Nitin; Zhou, Dan; Haddad, Gabriel G; Bafna, Vineet

    2011-01-01

    For smaller organisms with faster breeding cycles, artificial selection can be used to create sub-populations with different phenotypic traits. Genetic tests can be employed to identify the causal markers for the phenotypes, as a precursor to engineering strains with a combination of traits. Traditional approaches involve analyzing crosses of inbred strains to test for co-segregation with genetic markers. Here we take advantage of cheaper next generation sequencing techniques to identify genetic signatures of adaptation to the selection constraints. Obtaining individual sequencing data is often unrealistic due to cost and sample issues, so we focus on pooled genomic data. We explore a series of statistical tests for selection using pooled case (under selection) and control populations. The tests generally capture skews in the scaled frequency spectrum of alleles in a region, which are indicative of a selective sweep. Extensive simulations are used to show that these approaches work well for a wide range of population divergence times and strong selective pressures. Control vs control simulations are used to determine an empirical False Positive Rate, and regions under selection are determined using a 1% FPR level. We show that pooling does not have a significant impact on statistical power. The tests are also robust to reasonable variations in several different parameters, including window size, base-calling error rate, and sequencing coverage. We then demonstrate the viability (and the challenges) of one of these methods in two independent Drosophila populations (Drosophila melanogaster) bred under selection for hypoxia and accelerated development, respectively. Testing for extreme hypoxia tolerance showed clear signals of selection, pointing to loci that are important for hypoxia adaptation. Overall, we outline a strategy for finding regions under selection using pooled sequences, then devise optimal tests for that strategy. The approaches show promise for detecting selection, even several generations after fixation of the beneficial allele has occurred.

  10. Nonparametric predictive inference for combining diagnostic tests with parametric copula

    NASA Astrophysics Data System (ADS)

    Muhammad, Noryanti; Coolen, F. P. A.; Coolen-Maturi, T.

    2017-09-01

    Measuring the accuracy of diagnostic tests is crucial in many application areas, including medicine and health care. The Receiver Operating Characteristic (ROC) curve is a popular statistical tool for describing the performance of diagnostic tests, and the area under the ROC curve (AUC) is often used as a measure of the overall performance of a diagnostic test. In this paper, we are interested in developing strategies for combining test results in order to increase diagnostic accuracy. We introduce nonparametric predictive inference (NPI) for combining two diagnostic test results while accounting for the dependence structure using a parametric copula. NPI is a frequentist statistical framework for inference on a future observation based on past data observations; it uses lower and upper probabilities to quantify uncertainty and is based on only a few modelling assumptions. A copula is a well-known statistical concept for modelling the dependence of random variables: it is a joint distribution function whose marginals are all uniformly distributed, and it can be used to model the dependence separately from the marginal distributions. In this research, we estimate the copula density using a parametric method, namely the maximum likelihood estimator (MLE). We investigate the performance of the proposed method on data sets from the literature and discuss the results to show how our method performs for different families of copulas. Finally, we briefly outline related challenges and opportunities for future research.
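
    The AUC target itself can be estimated empirically as the Mann-Whitney probability that a diseased case scores higher than a healthy control; the sketch below applies this to two simulated tests and to a naive equal-weight combination, and does not implement the NPI-with-copula method of the paper.

```python
# Empirical AUC of single and combined diagnostic scores as a Mann-Whitney probability.
# This illustrates the AUC target only; the NPI-with-copula method is not implemented.
import numpy as np

def empirical_auc(scores_pos, scores_neg):
    """Estimate P(case score > control score), counting ties as one half."""
    diff = scores_pos[:, None] - scores_neg[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

rng = np.random.default_rng(7)
test1_pos, test1_neg = rng.normal(1.0, 1, 100), rng.normal(0.0, 1, 150)
test2_pos, test2_neg = rng.normal(0.8, 1, 100), rng.normal(0.0, 1, 150)

# Naive equal-weight combination as a stand-in for a combined test result
combined_pos = test1_pos + test2_pos
combined_neg = test1_neg + test2_neg

print("AUC test 1  :", round(empirical_auc(test1_pos, test1_neg), 3))
print("AUC test 2  :", round(empirical_auc(test2_pos, test2_neg), 3))
print("AUC combined:", round(empirical_auc(combined_pos, combined_neg), 3))
```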

  11. Graphical Tests for Power Comparison of Competing Designs.

    PubMed

    Hofmann, H; Follett, L; Majumder, M; Cook, D

    2012-12-01

    Lineups have been established as tools for visual testing similar to standard statistical inference tests, allowing us to evaluate the validity of graphical findings in an objective manner. In simulation studies lineups have been shown as being efficient: the power of visual tests is comparable to classical tests while being much less stringent in terms of distributional assumptions made. This makes lineups versatile, yet powerful, tools in situations where conditions for regular statistical tests are not or cannot be met. In this paper we introduce lineups as a tool for evaluating the power of competing graphical designs. We highlight some of the theoretical properties and then show results from two studies evaluating competing designs: both studies are designed to go to the limits of our perceptual abilities to highlight differences between designs. We use both accuracy and speed of evaluation as measures of a successful design. The first study compares the choice of coordinate system: polar versus cartesian coordinates. The results show strong support in favor of cartesian coordinates in finding fast and accurate answers to spotting patterns. The second study is aimed at finding shift differences between distributions. Both studies are motivated by data problems that we have recently encountered, and explore using simulated data to evaluate the plot designs under controlled conditions. Amazon Mechanical Turk (MTurk) is used to conduct the studies. The lineups provide an effective mechanism for objectively evaluating plot designs.

  12. The effects of spatial autoregressive dependencies on inference in ordinary least squares: a geometric approach

    NASA Astrophysics Data System (ADS)

    Smith, Tony E.; Lee, Ka Lok

    2012-01-01

    There is a common belief that the presence of residual spatial autocorrelation in ordinary least squares (OLS) regression leads to inflated significance levels in beta coefficients and, in particular, inflated levels relative to the more efficient spatial error model (SEM). However, our simulations show that this is not always the case. Hence, the purpose of this paper is to examine this question from a geometric viewpoint. The key idea is to characterize the OLS test statistic in terms of angle cosines and examine the geometric implications of this characterization. Our first result is to show that if the explanatory variables in the regression exhibit no spatial autocorrelation, then the distribution of test statistics for individual beta coefficients in OLS is independent of any spatial autocorrelation in the error term. Hence, inferences about betas exhibit all the optimality properties of the classic uncorrelated error case. However, a second more important series of results show that if spatial autocorrelation is present in both the dependent and explanatory variables, then the conventional wisdom is correct. In particular, even when an explanatory variable is statistically independent of the dependent variable, such joint spatial dependencies tend to produce "spurious correlation" that results in over-rejection of the null hypothesis. The underlying geometric nature of this problem is clarified by illustrative examples. The paper concludes with a brief discussion of some possible remedies for this problem.
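
    The over-rejection effect can be demonstrated with a small Monte Carlo experiment, sketched below under assumed settings: x and y are generated independently but both are spatially autocorrelated (white noise smoothed on a grid), and the OLS t-test for the slope rejects the true null far more often than the nominal 5% level.

```python
# Monte Carlo sketch of the over-rejection effect: x and y are generated independently,
# but both are spatially autocorrelated (smoothed white noise on a grid), and the OLS
# t-test for the slope rejects the true null far more often than the nominal 5%.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy import stats

def spatial_field(rng, shape=(20, 20), smooth=2.0):
    z = gaussian_filter(rng.normal(size=shape), sigma=smooth)
    return (z / z.std()).ravel()

rng = np.random.default_rng(8)
n_rej, n_rep = 0, 500
for _ in range(n_rep):
    x = spatial_field(rng)
    y = spatial_field(rng)                       # independent of x by construction
    result = stats.linregress(x, y)
    n_rej += result.pvalue < 0.05

print(f"empirical rejection rate: {n_rej / n_rep:.3f} (nominal 0.05)")
```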

  13. Temporal and spatial variability of rainfall over Greece

    NASA Astrophysics Data System (ADS)

    Markonis, Y.; Batelis, S. C.; Dimakos, Y.; Moschou, E.; Koutsoyiannis, D.

    2017-10-01

    Recent studies have shown a significant decrease in rainfall over Greece during the second half of the previous century, following an overall decrease in precipitation over the eastern Mediterranean. However, during the last decade an increase in rainfall was observed in most regions of the country, contrary to general circulation climate model forecasts. An updated high-resolution dataset of monthly sums and annual daily maxima derived from 136 stations during the period 1940-2012 allowed us to present new evidence on the observed change and its statistical significance. The statistical framework used to determine the significance of the slopes in annual rainfall was not limited to the time-independence assumption (Mann-Kendall test); we also investigated the effect of short- and long-term persistence through Monte Carlo simulation. Our findings show that (a) change occurs at different scales: most regions show a decline since 1950, an increase since 1980, and remain stable during the last 15 years; (b) the significance of the observed decline is highly dependent on the statistical assumptions used, and there are indications that the Mann-Kendall test may be the least suitable method; and (c) change in time is strongly linked with change in space: for scales below 40 years, relatively close regions may develop even opposite trends, while at larger scales change is more uniform.
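
    For readers unfamiliar with the baseline procedure, the following is a minimal Mann-Kendall trend test (normal approximation, no correction for ties or serial persistence), i.e. the version that relies on the time-independence assumption discussed above; the annual rainfall series is synthetic.

```python
# Minimal Mann-Kendall trend test (normal approximation, no correction for ties or
# serial persistence), i.e. the version relying on time independence. Data are synthetic.
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0   # continuity correction
    return s, z, 2 * norm.sf(abs(z))

rng = np.random.default_rng(9)
years = np.arange(1940, 2013)
rain = 800.0 - 1.5 * (years - years[0]) + rng.normal(scale=120.0, size=years.size)  # mm/year

s, z, p = mann_kendall(rain)
print(f"S = {s:.0f}, Z = {z:.2f}, two-sided p = {p:.4f}")
```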

  14. Stereomicroscopic evaluation of defects caused by torsional fatigue in used hand and rotary nickel-titanium instruments.

    PubMed

    Asthana, Geeta; Kapadwala, Marsrat I; Parmar, Girish J

    2016-01-01

    The aim of this study was to evaluate defects caused by torsional fatigue in used hand and rotary nickel-titanium (Ni-Ti) instruments by stereomicroscopic examination. One hundred five greater-taper Ni-Ti instruments were used, including ProTaper Universal hand (Dentsply Maillefer, Ballaigues, Switzerland), ProTaper Universal rotary (Dentsply Maillefer, Ballaigues, Switzerland), and Revo-S rotary (MicroMega, Besançon, France) files. Files were used on lower anterior teeth. After every use, the files were examined with the naked eye and under a stereomicroscope at 20× magnification (Olympus, Shinjuku, Tokyo, Japan) to evaluate defects caused by torsional fatigue. A score was assigned to each file according to the degree of damage. The results were statistically analyzed using the Mann-Whitney U test and the Kruskal-Wallis test. A greater number of defects were seen under the stereomicroscope than with the naked eye; however, the difference between the two methods of evaluation was not statistically significant. Revo-S files showed the fewest defects, while ProTaper Universal hand files showed the most. The intergroup comparison of defects showed that bending in ProTaper Universal hand instruments was statistically significant. Visible defects in Ni-Ti files due to torsional fatigue were seen with the naked eye as well as under the stereomicroscope. This study emphasizes that all files should be examined before and after every instrument cycle to minimize the risk of separation.

  15. Evaluating the Effects of Aromatics Content in Gasoline on Gaseous and Particulate Matter Emissions from SI-PFI and SIDI Vehicles.

    PubMed

    Karavalakis, Georgios; Short, Daniel; Vu, Diep; Russell, Robert; Hajbabaei, Maryam; Asa-Awuku, Akua; Durbin, Thomas D

    2015-06-02

    We assessed the emissions response of a fleet of seven light-duty gasoline vehicles for gasoline fuel aromatic content while operating over the LA92 driving cycle. The test fleet consisted of model year 2012 vehicles equipped with spark-ignition (SI) and either port fuel injection (PFI) or direct injection (DI) technology. Three gasoline fuels were blended to meet a range of total aromatics targets (15%, 25%, and 35% by volume) while holding other fuel properties relatively constant within specified ranges, and a fourth fuel was formulated to meet a 35% by volume total aromatics target but with a higher octane number. Our results showed statistically significant increases in carbon monoxide, nonmethane hydrocarbon, particulate matter (PM) mass, particle number, and black carbon emissions with increasing aromatics content for all seven vehicles tested. Only one vehicle showed a statistically significant increase in total hydrocarbon emissions. The monoaromatic hydrocarbon species that were evaluated showed increases with increasing aromatic content in the fuel. Changes in fuel composition had no statistically significant effect on the emissions of nitrogen oxides (NOx), formaldehyde, or acetaldehyde. A good correlation was also found between the PM index and PM mass and number emissions for all vehicle/fuel combinations with the total aromatics group being a significant contributor to the total PM index followed by naphthalenes and indenes.

  16. Statistical analysis of global horizontal solar irradiation GHI in Fez city, Morocco

    NASA Astrophysics Data System (ADS)

    Bounoua, Z.; Mechaqrane, A.

    2018-05-01

    An accurate knowledge of the solar energy reaching the ground is necessary for sizing and optimizing the performance of solar installations. This paper describes a statistical analysis of the global horizontal solar irradiation (GHI) at Fez city, Morocco. For better reliability, we first applied a set of check procedures to test the quality of the hourly GHI measurements and then eliminated erroneous values, which are generally due to measurement errors or the cosine effect. The statistical analysis shows that the annual mean daily GHI is approximately 5 kWh/m²/day. Monthly mean daily values and other parameters are also calculated.
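
    A sketch of the aggregation step, under assumed synthetic data rather than the Fez record, is shown below: hourly GHI values (Wh/m²) are summed to daily totals (kWh/m²/day) with pandas, from which the annual mean daily value and monthly means can be taken.

```python
# Sketch: hourly GHI (Wh/m^2) aggregated to daily totals (kWh/m^2/day) and summarized.
# The hourly series is synthetic; it is not the Fez measurement record.
import numpy as np
import pandas as pd

idx = pd.date_range("2015-01-01", "2015-12-31 23:00", freq="H")
hour, doy = idx.hour.values, idx.dayofyear.values
rng = np.random.default_rng(10)

# Crude diurnal/seasonal shape plus noise, clipped at zero and zeroed at night
ghi = 800 * np.sin(np.pi * (hour - 6) / 12) * (0.7 + 0.3 * np.sin(2 * np.pi * doy / 365))
ghi = np.clip(ghi + rng.normal(scale=60, size=idx.size), 0, None)
ghi[(hour < 6) | (hour > 18)] = 0.0

hourly = pd.Series(ghi, index=idx)                  # hourly irradiation in Wh/m^2
daily_kwh = hourly.resample("D").sum() / 1000.0     # daily totals in kWh/m^2/day
monthly_mean = daily_kwh.groupby(daily_kwh.index.month).mean()

print("annual mean daily GHI: %.2f kWh/m2/day" % daily_kwh.mean())
print(monthly_mean.round(2))
```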

  17. [The growth behavior of mouse fibroblasts on intraocular lens surface of various silicone and PMMA materials].

    PubMed

    Kammann, J; Kreiner, C F; Kaden, P

    1994-08-01

    Experience with intraocular lenses (IOL) made of PMMA dates back ca. 40 years, while silicone IOLs have been in use for only about 10 years. The biocompatibility of PMMA and silicone caoutchouc was tested in a comparative study investigating the growth of mouse fibroblasts on different IOL materials. Spectrophotometric determination of protein synthesis and liquid scintillation counting of DNA synthesis were carried out. The spreading of cells was planimetrically determined, and the DNA synthesis of individual cells in direct contact with the test sample was tested. The results showed that the biocompatibility of silicone lenses made of purified caoutchouc is comparable with that of PMMA lenses; there is no statistically significant difference. However, impurities arising during material synthesis result in a statistically significant inhibition of cell growth on the IOL surfaces.

  18. Heart Rate Variability Dynamics for the Prognosis of Cardiovascular Risk

    PubMed Central

    Ramirez-Villegas, Juan F.; Lam-Espinosa, Eric; Ramirez-Moreno, David F.; Calvo-Echeverry, Paulo C.; Agredo-Rodriguez, Wilfredo

    2011-01-01

    Statistical, spectral, multi-resolution and non-linear methods were applied to heart rate variability (HRV) series linked with classification schemes for the prognosis of cardiovascular risk. A total of 90 HRV records were analyzed: 45 from healthy subjects and 45 from cardiovascular risk patients. A total of 52 features from all the analysis methods were evaluated using standard two-sample Kolmogorov-Smirnov test (KS-test). The results of the statistical procedure provided input to multi-layer perceptron (MLP) neural networks, radial basis function (RBF) neural networks and support vector machines (SVM) for data classification. These schemes showed high performances with both training and test sets and many combinations of features (with a maximum accuracy of 96.67%). Additionally, there was a strong consideration for breathing frequency as a relevant feature in the HRV analysis. PMID:21386966
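
    The two-stage scheme of univariate KS screening followed by a classifier can be sketched as follows with scipy and scikit-learn; the 52 features are simulated stand-ins (with 8 informative ones), not the HRV measures used in the study, and the SVM settings are arbitrary.

```python
# Sketch of the two-stage scheme: univariate two-sample KS screening of candidate
# features, then classification with an SVM. Features are simulated stand-ins, not
# the 52 HRV measures used in the study; classifier settings are arbitrary.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(11)
n_per_class, n_features, n_informative = 45, 52, 8
X0 = rng.normal(size=(n_per_class, n_features))                   # healthy subjects
X1 = rng.normal(size=(n_per_class, n_features))
X1[:, :n_informative] += 1.0                                      # risk group differs in 8 features
X = np.vstack([X0, X1])
y = np.repeat([0, 1], n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# Keep only features whose class-conditional distributions differ on the training set
keep = [j for j in range(n_features)
        if ks_2samp(X_tr[y_tr == 0, j], X_tr[y_tr == 1, j]).pvalue < 0.05]

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr[:, keep], y_tr)
print(f"{len(keep)} features kept; test accuracy = {clf.score(X_te[:, keep], y_te):.2f}")
```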

  19. Lubricant and additive effects on spur gear fatigue life

    NASA Technical Reports Server (NTRS)

    Townsend, D. P.; Zaretsky, E. V.; Scibbe, H. W.

    1985-01-01

    Spur gear endurance tests were conducted with six lubricants using a single lot of consumable-electrode vacuum melted (CVM) AISI 9310 spur gears. The sixth lubricant was divided into four batches each of which had a different additive content. Lubricants tested with a phosphorus-type load carrying additive showed a statistically significant improvement in life over lubricants without this type of additive. The presence of sulfur type antiwear additives in the lubricant did not appear to affect the surface fatigue life of the gears. No statistical difference in life was produced with those lubricants of different base stocks but with similar viscosity, pressure-viscosity coefficients and antiwear additives. Gears tested with a 0.1 wt % sulfur and 0.1 wt % phosphorus EP additives in the lubricant had reactive films that were 200 to 400 (0.8 to 1.6 microns) thick.

  20. The relationship between procrastination, learning strategies and statistics anxiety among Iranian college students: a canonical correlation analysis.

    PubMed

    Vahedi, Shahrum; Farrokhi, Farahman; Gahramani, Farahnaz; Issazadegan, Ali

    2012-01-01

    Approximately 66-80% of graduate students experience statistics anxiety, and some researchers propose that many students identify statistics courses as the most anxiety-inducing courses in their academic curriculums. As such, it is likely that statistics anxiety is, in part, responsible for many students delaying enrollment in these courses for as long as possible. This paper proposes a canonical model treating academic procrastination (AP) and learning strategies (LS) as predictor variables and statistics anxiety (SA) as the explained variable set. A questionnaire survey was used for data collection, and 246 female college students participated in this study. To examine the mutually independent relations between the procrastination, learning strategy, and statistics anxiety variables, a canonical correlation analysis was computed. Findings show that two canonical functions were statistically significant. The set of variables (metacognitive self-regulation, source management, preparing homework, preparing for tests, and preparing term papers) helped predict changes in statistics anxiety with respect to fearful behavior, attitude towards math and class, and performance, but not anxiety. These findings could be used in educational and psychological interventions in the context of statistics anxiety reduction.
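
    A canonical correlation analysis of this kind can be sketched with scikit-learn as follows; the predictor and criterion scores are simulated around a common latent factor, and the variable names are placeholders rather than the study's instruments.

```python
# Sketch of a canonical correlation analysis between a predictor set (procrastination
# and learning-strategy scores) and a criterion set (statistics-anxiety facets).
# All scores are simulated around one latent factor; names are placeholders.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(12)
n = 246
latent = rng.normal(size=(n, 1))
X = np.hstack([latent + rng.normal(scale=1.0, size=(n, 1)) for _ in range(5)])   # 5 predictors
Y = np.hstack([latent + rng.normal(scale=1.5, size=(n, 1)) for _ in range(4)])   # 4 anxiety facets

cca = CCA(n_components=2)
U, V = cca.fit_transform(X, Y)

# Canonical correlations are the correlations between paired canonical variates
canonical_corrs = [np.corrcoef(U[:, k], V[:, k])[0, 1] for k in range(2)]
print("canonical correlations:", np.round(canonical_corrs, 3))
```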

  1. Validation of a modification to Performance-Tested Method 010403: microwell DNA hybridization assay for detection of Listeria spp. in selected foods and selected environmental surfaces.

    PubMed

    Alles, Susan; Peng, Linda X; Mozola, Mark A

    2009-01-01

    A modification to Performance-Tested Method 010403, GeneQuence Listeria Test (DNAH method), is described. The modified method uses a new media formulation, LESS enrichment broth, in single-step enrichment protocols for both foods and environmental sponge and swab samples. Food samples are enriched for 27-30 h at 30 degrees C, and environmental samples for 24-48 h at 30 degrees C. Implementation of these abbreviated enrichment procedures allows test results to be obtained on a next-day basis. In testing of 14 food types in internal comparative studies with inoculated samples, there were statistically significant differences in method performance between the DNAH method and reference culture procedures for only 2 foods (pasteurized crab meat and lettuce) at the 27 h enrichment time point and for only a single food (pasteurized crab meat) in one trial at the 30 h enrichment time point. Independent laboratory testing with 3 foods showed statistical equivalence between the methods for all foods, and results support the findings of the internal trials. Overall, considering both internal and independent laboratory trials, sensitivity of the DNAH method relative to the reference culture procedures was 90.5%. Results of testing 5 environmental surfaces inoculated with various strains of Listeria spp. showed that the DNAH method was more productive than the reference U.S. Department of Agriculture-Food Safety and Inspection Service (USDA-FSIS) culture procedure for 3 surfaces (stainless steel, plastic, and cast iron), whereas results were statistically equivalent to the reference method for the other 2 surfaces (ceramic tile and sealed concrete). An independent laboratory trial with ceramic tile inoculated with L. monocytogenes confirmed the effectiveness of the DNAH method at the 24 h time point. Overall, sensitivity of the DNAH method at 24 h relative to that of the USDA-FSIS method was 152%. The DNAH method exhibited extremely high specificity, with only 1% false-positive reactions overall.

  2. Comparative Evaluation of Immediate Post-Operative Sequelae after Surgical Removal of Impacted Mandibular Third Molar with or without Tube Drain - Split-Mouth Study.

    PubMed

    Kumar, Barun; Bhate, Kalyani; Dolas, R S; Kumar, Sn Santhosh; Waknis, Pushkar

    2016-12-01

    Third molar surgery is one of the most common surgical procedures performed in general dentistry. Post-operative variables such as pain, swelling, and trismus are major concerns after impacted mandibular third molar surgery, and the use of a passive tube drain is thought to help reduce these immediate post-operative sequelae. The current study was designed to compare the post-operative sequelae after surgical removal of impacted mandibular third molars with and without a tube drain. Thirty patients with bilateral impacted mandibular third molars were divided into two groups: a test group (with tube drain) and a control group (without tube drain). In the test group, a tube drain was inserted through the releasing incision and kept in place for three days; the control group was left without a tube drain. The post-operative variables pain, swelling, and trismus were assessed at 24 hours, 72 hours, 7 days, and 15 days in both groups and analyzed statistically using chi-square and t-test analyses. The test group showed less swelling than the control group, with the difference in swelling reaching statistical significance at post-operative days 3 and 7 (p ≤ 0.05). There were no statistically significant differences in the pain and trismus variables between the groups. The use of a tube drain helps to control swelling following impacted mandibular third molar surgery; however, it does not have much effect on pain or trismus.

  3. The effect of four-phase teaching method on midwifery students’ emotional intelligence in managing the childbirth

    PubMed Central

    Mohamadirizi, Soheila; Fahami, Fariba; Bahadoran, Parvin; Ehsanpour, Soheila

    2015-01-01

    Background: Active teaching methods have been used widely in medical education. The aim of this study was to determine the effectiveness of the four-phase teaching method on midwifery students’ emotional intelligence (EQ) in managing childbirth. Materials and Methods: This was an experimental study performed in 2013 at Isfahan University of Medical Sciences. Thirty midwifery students were involved in the study, selected through a random sampling method. The EQ questionnaire (43Q) was completed by both groups before and after the education. The collected data were analyzed using SPSS 14, the independent t-test, and the paired t-test. Statistical significance was set at P < 0.05. Results: The independent t-test did not show any significant difference between the EQ scores of the experimental and control groups before the intervention, whereas a statistically significant difference was observed between the scores of the two groups after the intervention (P = 0.009). The paired t-test showed a statistically significant difference in EQ scores after the intervention in the four-phase and control groups (P = 0.005 and P = 0.018, respectively). Furthermore, the rate of self-efficacy increased in the experimental and control groups by 66% and 13%, respectively (P = 0.024). Conclusion: The four-phase teaching method can increase the EQ levels of midwifery students. Therefore, this educational model is recommended as an effective learning method. PMID:26097861

  4. Statistical classification approach to discrimination between weak earthquakes and quarry blasts recorded by the Israel Seismic Network

    NASA Astrophysics Data System (ADS)

    Kushnir, A. F.; Troitsky, E. V.; Haikin, L. M.; Dainty, A.

    1999-06-01

    A semi-automatic procedure has been developed to achieve statistically optimum discrimination between earthquakes and explosions at local or regional distances, based on a learning set specific to a given region. The method is used for step-by-step testing of candidate discrimination features to find the optimum (combination) subset of features, with the decision taken on a rigorous statistical basis. Linear (LDF) and Quadratic (QDF) Discriminant Functions based on Gaussian distributions of the discrimination features are implemented and statistically grounded; the features may be transformed by the Box-Cox transformation z = (y^α − 1)/α to make them more Gaussian. Tests of the method were successfully conducted on seismograms from the Israel Seismic Network using features consisting of spectral ratios between and within phases. Results showed that the QDF was more effective than the LDF and required five features out of 18 candidates for the optimum set. It was found that discrimination improved with increasing distance within the local range, and that eliminating transformation of the features and failing to correct for noise led to degradation of discrimination.
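
    The transform-then-discriminate idea can be sketched as follows; the spectral-ratio features are simulated lognormal stand-ins, each feature is Box-Cox transformed (z = (y^α − 1)/α) toward Gaussianity, and linear and quadratic discriminant functions are compared by cross-validation. Estimating the transformation on the pooled data, as done here, is a simplification relative to a proper learning-set design.

```python
# Sketch: Box-Cox transformation of positive-valued features (e.g. spectral ratios),
# then linear and quadratic discriminant functions compared by cross-validation.
# Features are simulated; lambda is estimated on the pooled data for simplicity.
import numpy as np
from scipy.stats import boxcox
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                            QuadraticDiscriminantAnalysis)
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(13)
n = 200
eq = rng.lognormal(mean=0.0, sigma=0.5, size=(n, 3))   # earthquake-like feature vectors
ex = rng.lognormal(mean=0.6, sigma=0.5, size=(n, 3))   # explosion-like feature vectors
X = np.vstack([eq, ex])
y = np.repeat([0, 1], n)

# Box-Cox each feature toward Gaussianity: z = (y**lmbda - 1) / lmbda
Xt = np.column_stack([boxcox(X[:, j])[0] for j in range(X.shape[1])])

for name, clf in [("LDF", LinearDiscriminantAnalysis()),
                  ("QDF", QuadraticDiscriminantAnalysis())]:
    acc = cross_val_score(clf, Xt, y, cv=5).mean()
    print(f"{name}: 5-fold CV accuracy = {acc:.3f}")
```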

  5. Guide to Using Onionskin Analysis Code (U)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fugate, Michael Lynn; Morzinski, Jerome Arthur

    2016-09-15

    This document is a guide to using R code written for the purpose of analyzing onionskin experiments. We expect the user to be very familiar with statistical methods and the R programming language. For more details about onionskin experiments and the statistical methods mentioned in this document, see Storlie, Fugate, et al. (2013). Engineers at LANL experiment with detonators and high explosives to assess performance. The experimental unit, called an onionskin, is a hemisphere consisting of a detonator and a booster pellet surrounded by explosive material. When the detonator explodes, a streak camera mounted above the pole of the hemisphere records when the shock wave arrives at the surface. The output from the camera is a two-dimensional image that is transformed into a curve that shows the arrival time as a function of polar angle. The statistical challenge is to characterize a baseline population of arrival time curves and to compare the baseline curves to curves from a new, so-called test series. The hope is that the new test series of curves is statistically similar to the baseline population.

  6. [Prosthodontic research design from the standpoint of statistical analysis: learning and knowing the research design].

    PubMed

    Tanoue, Naomi

    2007-10-01

    For any kind of research, the research design is the most important element. The design is used to structure the research and to show how all of the major parts of the research project fit together. All researchers should begin their research only after planning the research design: what the main theme is, what the background and references are, what kind of data are needed, and what kind of analysis is needed. This may seem to be a roundabout route, but in fact it is a shortcut. The research methods must be appropriate to the objectives of the study. For hypothesis-testing research, which is the traditional style of research, a research design based on statistics is undoubtedly necessary, given that such research essentially tests a hypothesis with data and statistical theory. For the clinical trial, which is the clinical version of hypothesis-testing research, the statistical method must be specified in the clinical trial plan. This report describes the basics of research design for a prosthodontics study.

  7. Analyzing the Influence of a New Dental Implant Design on Primary Stability.

    PubMed

    Valente, Mariana Lima da Costa; de Castro, Denise Tornavoi; Shimano, Antonio Carlos; Lepri, César Penazzo; dos Reis, Andréa Cândido

    2016-02-01

    The macrogeometry of dental implants strongly influences primary stability and hence the osseointegration process. The aim was to compare the performance of conventional and modified implant models in terms of primary stability. A total of 36 implants (Neodent®) with two different formats (n = 18 each), Alvim CM (Conical CM, Ø 4.3 mm × 10 mm in length) and Titamax Ti (Cylindrical HE, Ø 4.0 mm × 11 mm in length), were inserted into artificial bone blocks. Nine implants from each set were selected to undergo external geometry changes. Primary stability was quantified by insertion torque, by resonance frequency using an Osstell device, and by the pullout test. One-way analysis of variance and Tukey's test were used for statistical evaluation. The comparative analysis showed a significant increase in insertion torque for the modified Conical CM (p = 0.000) and Cylindrical HE (p = 0.043) implants; for resonance frequency, the modified Cylindrical HE showed a lower mean (p = 0.002) than the conventional model, and in the pullout test both modified implants showed a significant reduction (p = 0.000). Within the limitations of this study, the proposed modification showed good stability levels and advantages when compared with the conventional implants. © 2015 Wiley Periodicals, Inc.

  8. Statistical analysis of trends in monthly precipitation at the Limbang River Basin, Sarawak (NW Borneo), Malaysia

    NASA Astrophysics Data System (ADS)

    Krishnan, M. V. Ninu; Prasanna, M. V.; Vijith, H.

    2018-05-01

    The effect of climate change in a region can be characterised by analysing rainfall trends. In the present research, monthly rainfall trends at the Limbang River Basin (LRB) in Sarawak, Malaysia, over a period of 45 years (1970-2015) were characterised through the non-parametric Mann-Kendall and Spearman's Rho tests and a relative seasonality index. Statistically processed monthly rainfall from 12 well-distributed rain gauging stations in the LRB shows almost equal amounts of rainfall in all months. The Mann-Kendall and Spearman's Rho tests revealed a specific pattern of rainfall trends, with a definite boundary marked in the months of January and August showing positive trends at all stations. Among the stations, Limbang DID, Long Napir and Ukong showed positive (increasing) trends in all months, with a maximum increase of 4.06 mm/year (p = 0.01) in November. All other stations showed varying (both increasing and decreasing) trends. Significant (p = 0.05) decreasing trends were noticed at Ulu Medalam and Setuan during September (-1.67 and -1.79 mm/year) and October (-1.59 and -1.68 mm/year) in the Mann-Kendall and Spearman's Rho tests. The spatial pattern of monthly rainfall trends showed two clusters of increasing rainfall (maxima) in the upper and lower parts of the river basin, separated by a dominant corridor of decreasing rainfall. The results indicate a generally increasing trend of rainfall in Sarawak, Borneo.

  9. Quality of reporting statistics in two Indian pharmacology journals.

    PubMed

    Jaykaran; Yadav, Preeti

    2011-04-01

    To evaluate the reporting of statistical methods in articles published in two Indian pharmacology journals. All original articles published since 2002 were downloaded from the journals' websites (Indian Journal of Pharmacology (IJP) and Indian Journal of Physiology and Pharmacology (IJPP)). These articles were evaluated on the basis of the appropriateness of their descriptive and inferential statistics. Descriptive statistics were evaluated on the basis of the reporting of the method of description and of central tendencies. Inferential statistics were evaluated on the basis of whether the assumptions of the statistical methods were fulfilled and whether the statistical tests were appropriate. Values are described as frequencies, percentages, and 95% confidence intervals (CI) around the percentages. Inappropriate descriptive statistics were observed in 150 (78.1%, 95% CI 71.7-83.3%) articles. The most common reason for inappropriate descriptive statistics was the use of mean ± SEM in place of "mean (SD)" or "mean ± SD." The most common statistical method used was one-way ANOVA (58.4%). Information regarding the checking of assumptions of statistical tests was mentioned in only two articles. Inappropriate statistical tests were observed in 61 (31.7%, 95% CI 25.6-38.6%) articles. The most common reason for an inappropriate statistical test was the use of a two-group test for three or more groups. Articles published in these two Indian pharmacology journals are not devoid of statistical errors.

  10. Effect of open rhinoplasty on the smile line.

    PubMed

    Tabrizi, Reza; Mirmohamadsadeghi, Hoori; Daneshjoo, Danadokht; Zare, Samira

    2012-05-01

    Open rhinoplasty is an esthetic surgical technique that is becoming increasingly popular and can affect the nose and upper lip compartments. The aim of this study was to evaluate the effect of open rhinoplasty on tooth show and the smile line. The study participants were 61 patients with a mean age of 24.3 years (range, 17.2 to 39.6 years). The surgical procedure consisted of an esthetic open rhinoplasty without alar resection. Analysis of tooth show was limited to pre- and postoperative (12-month) measurements at rest and at maximum smile, taken with a ruler while participants held their heads in a natural position. Statistical analyses were performed with SPSS 13.0, and paired-sample t tests were used to compare mean tooth show before and after the operation. Analysis of the rest position showed no statistically significant change in tooth show (P = .15), but analysis of participants' maximum smile data showed a statistically significant increase in tooth show after surgery (P < .05). In addition, Pearson correlation analysis showed a positive relation between rhinoplasty and tooth show increases in maximum smile, especially in subjects with high smile lines. This study shows that the nasolabial compartment is a single unit and any change in 1 part may influence the other parts. Further studies should be conducted to investigate these interactions. Copyright © 2012 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  11. PROMISE: a tool to identify genomic features with a specific biologically interesting pattern of associations with multiple endpoint variables.

    PubMed

    Pounds, Stan; Cheng, Cheng; Cao, Xueyuan; Crews, Kristine R; Plunkett, William; Gandhi, Varsha; Rubnitz, Jeffrey; Ribeiro, Raul C; Downing, James R; Lamba, Jatinder

    2009-08-15

    In some applications, prior biological knowledge can be used to define a specific pattern of association of multiple endpoint variables with a genomic variable that is biologically most interesting. However, to our knowledge, there is no statistical procedure designed to detect specific patterns of association with multiple endpoint variables. Projection onto the most interesting statistical evidence (PROMISE) is proposed as a general procedure to identify genomic variables that exhibit a specific biologically interesting pattern of association with multiple endpoint variables. Biological knowledge of the endpoint variables is used to define a vector that represents the biologically most interesting values for statistics that characterize the associations of the endpoint variables with a genomic variable. A test statistic is defined as the dot-product of the vector of the observed association statistics and the vector of the most interesting values of the association statistics. By definition, this test statistic is proportional to the length of the projection of the observed vector of correlations onto the vector of most interesting associations. Statistical significance is determined via permutation. In simulation studies and an example application, PROMISE shows greater statistical power to identify genes with the interesting pattern of associations than classical multivariate procedures, individual endpoint analyses or listing genes that have the pattern of interest and are significant in more than one individual endpoint analysis. Documented R routines are freely available from www.stjuderesearch.org/depts/biostats and will soon be available as a Bioconductor package from www.bioconductor.org.
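
    A rough sketch of the projection idea described above, under assumptions of mine: Spearman correlation stands in for the per-endpoint association statistic, the "biologically interesting" pattern is a signed vector such as [1, -1, 1], and significance comes from permuting the genomic variable. Function and variable names are illustrative and are not taken from the PROMISE package.

      import numpy as np
      from scipy.stats import spearmanr

      def promise_like_stat(genomic, endpoints, pattern):
          """Dot product of per-endpoint association statistics with a
          prespecified 'biologically interesting' pattern vector."""
          assoc = np.array([spearmanr(genomic, endpoints[:, k])[0]
                            for k in range(endpoints.shape[1])])
          return float(assoc @ np.asarray(pattern, dtype=float))

      def promise_like_p(genomic, endpoints, pattern, n_perm=1000, seed=0):
          """One-sided permutation p-value for the projection statistic."""
          rng = np.random.default_rng(seed)
          obs = promise_like_stat(genomic, endpoints, pattern)
          null = [promise_like_stat(rng.permutation(genomic), endpoints, pattern)
                  for _ in range(n_perm)]
          return float(np.mean(np.array(null) >= obs))

      # e.g. higher expression expected to track more apoptosis (+1),
      # lower viability (-1) and better clinical response (+1):
      # p = promise_like_p(expr_gene_j, np.column_stack([apo, via, resp]), [1, -1, 1])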

  12. Feasibility and effects of newly developed balance control trainer for mobility and balance in chronic stroke patients: a randomized controlled trial.

    PubMed

    Lee, So Hyun; Byun, Seung Deuk; Kim, Chul Hyun; Go, Jin Young; Nam, Hyeon Uk; Huh, Jin Seok; Jung, Tae Du

    2012-08-01

    To investigate the feasibility and effects of balance training with a newly developed Balance Control Trainer (BCT) that applied the concept of vertical movement for the improvement of mobility and balance in chronic stroke patients. Forty chronic stroke patients were randomly assigned to an experimental or a control group. The experimental group (n=20) underwent training with a BCT for 20 minutes a day, 5 days a week for 4 weeks, in addition to concurrent conventional physical therapy. The control group (n=20) underwent only conventional therapy for 4 weeks. All participants were assessed with the Functional Ambulation Categories (FAC), 10-meter Walking Test (10mWT), Timed Up and Go test (TUG), Berg Balance Scale (BBS), Korean Modified Barthel Index (MBI), and Manual Muscle Test (MMT) before training and at 2 and 4 weeks of training. In the experimental group, there were statistically significant improvements in all parameters except knee extensor power at 2 weeks of treatment, and all parameters except the MBI showed further statistically significant progress over the next two weeks (p<0.05). After the full 4 weeks, statistically significant improvements on all measurements were observed in the experimental group. Comparing the two groups at 2 and 4 weeks of training, the 10mWT, TUG, and BBS showed significantly greater improvements in the experimental group (p<0.05). Balance training with a newly developed BCT is feasible and may be an effective tool to improve balance and gait in ambulatory chronic stroke patients. Furthermore, it may provide additional benefits when used in conjunction with conventional therapies.

  13. An evaluation of shear bond strength of self-etch adhesive on pre-etched enamel: an in vitro study.

    PubMed

    Rao, Bhadra; Reddy, Satti Narayana; Mujeeb, Abdul; Mehta, Kanchan; Saritha, G

    2013-11-01

    To determine the shear bond strength of the self-etch adhesive G-bond on pre-etched enamel. Thirty caries-free human mandibular premolars extracted for orthodontic purposes were used for the study. The occlusal surfaces of all teeth were flattened with a diamond bur, and silicon carbide paper was used for surface smoothening. The thirty samples were randomly divided into three groups, and three different etch systems were used for the composite build-up: group 1 (G-bond self-etch adhesive system), group 2 (G-bond) and group 3 (Adper Single Bond). Light curing was applied for 10 seconds with an LED unit for the composite build-up on the occlusal surface of each tooth, 8 millimeters (mm) in diameter and 3 mm in thickness. The specimens in each group were tested in shear mode using a knife-edge testing apparatus in a universal testing machine at a crosshead speed of 1 mm/minute. Shear bond strength values in MPa were calculated from the peak load at failure divided by the specimen surface area. The mean shear bond strength of each group was calculated, and statistical analysis was carried out between the groups using one-way analysis of variance (ANOVA). The mean bond strength was 15.5 MPa for group 1, 19.5 MPa for group 2 and 20.1 MPa for group 3. Group 1 showed statistically significantly lower bond strength when compared to groups 2 and 3, whereas no statistically significant difference was found between groups 2 and 3 at the 0.05 level. The self-etch adhesive G-bond showed an increase in shear bond strength on pre-etched enamel.

  14. Just Be It! Healthy and Fit Increases Fifth Graders' Fruit and Vegetable Intake, Physical Activity, and Nutrition Knowledge

    ERIC Educational Resources Information Center

    DelCampo, Diana; Baca, Jacqueline S.; Jimenez, Desaree; Sanchez, Paula Roybal; DelCampo, Robert

    2011-01-01

    Just Be It! Healthy and Fit reduces the risk factors for childhood obesity for fifth graders using hands-on field trips, in-class lessons, and parent outreach efforts. Pre-test and post-test scores from the year-long classroom instruction showed a statistically significant increase in fruit and vegetable intake, physical activity, and nutrition…

  15. The Usual and the Unusual: Solving Remote Associates Test Tasks Using Simple Statistical Natural Language Processing Based on Language Use

    ERIC Educational Resources Information Center

    Klein, Ariel; Badia, Toni

    2015-01-01

    In this study we show how complex creative relations can arise from fairly frequent semantic relations observed in everyday language. By doing this, we reflect on some key cognitive aspects of linguistic and general creativity. In our experimentation, we automated the process of solving a battery of Remote Associates Test tasks. By applying…

  16. Student Background, School Climate, School Disorder, and Student Achievement: An Empirical Study of New York City's Middle Schools

    ERIC Educational Resources Information Center

    Chen, Greg; Weikart, Lynne A.

    2008-01-01

    This study develops and tests a school disorder and student achievement model based upon the school climate framework. The model was fitted to 212 New York City middle schools using the Structural Equations Modeling Analysis method. The analysis shows that the model fits the data well based upon test statistics and goodness of fit indices. The…

  17. Population activity statistics dissect subthreshold and spiking variability in V1.

    PubMed

    Bányai, Mihály; Koman, Zsombor; Orbán, Gergő

    2017-07-01

    Response variability, as measured by fluctuating responses upon repeated performance of trials, is a major component of neural responses, and its characterization is key to interpret high dimensional population recordings. Response variability and covariability display predictable changes upon changes in stimulus and cognitive or behavioral state, providing an opportunity to test the predictive power of models of neural variability. Still, there is little agreement on which model to use as a building block for population-level analyses, and models of variability are often treated as a subject of choice. We investigate two competing models, the doubly stochastic Poisson (DSP) model assuming stochasticity at spike generation, and the rectified Gaussian (RG) model tracing variability back to membrane potential variance, to analyze stimulus-dependent modulation of both single-neuron and pairwise response statistics. Using a pair of model neurons, we demonstrate that the two models predict similar single-cell statistics. However, DSP and RG models have contradicting predictions on the joint statistics of spiking responses. To test the models against data, we build a population model to simulate stimulus change-related modulations in pairwise response statistics. We use single-unit data from the primary visual cortex (V1) of monkeys to show that while model predictions for variance are qualitatively similar to experimental data, only the RG model's predictions are compatible with joint statistics. These results suggest that models using Poisson-like variability might fail to capture important properties of response statistics. We argue that membrane potential-level modeling of stochasticity provides an efficient strategy to model correlations. NEW & NOTEWORTHY Neural variability and covariability are puzzling aspects of cortical computations. For efficient decoding and prediction, models of information encoding in neural populations hinge on an appropriate model of variability. Our work shows that stimulus-dependent changes in pairwise but not in single-cell statistics can differentiate between two widely used models of neuronal variability. Contrasting model predictions with neuronal data provides hints on the noise sources in spiking and provides constraints on statistical models of population activity. Copyright © 2017 the American Physiological Society.

  18. Statistical Inference for Quality-Adjusted Survival Time

    DTIC Science & Technology

    2003-08-01

    survival functions of QAL. If an influence function for a test statistic exists for complete data case, denoted as ’i, then a test statistic for...the survival function for the censoring variable. Zhao and Tsiatis (2001) proposed a test statistic where O is the influence function of the general...to 1 everywhere until a subject’s death. We have considered other forms of test statistics. One option is to use an influence function 0i that is

  19. The effects of simulated bone loss on the implant-abutment assembly and likelihood of fracture: an in vitro study.

    PubMed

    Manzoor, Behzad; Suleiman, Mahmood; Palmer, Richard M

    2013-01-01

    The crestal bone level around a dental implant may influence its strength characteristics by offering protection against mechanical failures. Therefore, the present study investigated the effect of simulated bone loss on modes, loads, and cycles to failure in an in vitro model. Different amounts of bone loss were simulated: 0, 1.5, 3.0, and 4.5 mm from the implant head. Forty narrow-diameter (3.0-mm) implant-abutment assemblies were tested using compressive bending and cyclic fatigue testing. Weibull and accelerated life testing analysis were used to assess reliability and functional life. Statistical analyses were performed using the Fisher-Exact test and the Spearman ranked correlation. Compressive bending tests showed that the level of bone loss influenced the load-bearing capacity of implant-abutment assemblies. Fatigue testing showed that the modes, loads, and cycles to failure had a statistically significant relationship with the level of bone loss. All 16 samples with bone loss of 3.0 mm or more experienced horizontal implant body fractures. In contrast, 14 of 16 samples with 0 and 1.5 mm of bone loss showed abutment and screw fractures. Weibull and accelerated life testing analysis indicated a two-group distribution: the 0- and 1.5-mm bone loss samples had better functional life and reliability than the 3.0- and 4.5-mm samples. Progressive bone loss had a significant effect on modes, loads, and cycles to failure. In addition, bone loss influenced the functional life and reliability of the implant-abutment assemblies. Maintaining crestal bone levels is important in ensuring biomechanical sustainability and predictable long-term function of dental implant assemblies.

  20. The Statistical Analysis Techniques to Support the NGNP Fuel Performance Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bihn T. Pham; Jeffrey J. Einerson

    2010-06-01

    This paper describes the development and application of statistical analysis techniques to support the AGR experimental program on NGNP fuel performance. The experiments conducted in the Idaho National Laboratory’s Advanced Test Reactor employ fuel compacts placed in a graphite cylinder shrouded by a steel capsule. The tests are instrumented with thermocouples embedded in graphite blocks and the target quantity (fuel/graphite temperature) is regulated by the He-Ne gas mixture that fills the gap volume. Three techniques for statistical analysis, namely control charting, correlation analysis, and regression analysis, are implemented in the SAS-based NGNP Data Management and Analysis System (NDMAS) for automated processing and qualification of the AGR measured data. The NDMAS also stores daily neutronic (power) and thermal (heat transfer) code simulation results along with the measurement data, allowing for their combined use and comparative scrutiny. The ultimate objective of this work includes (a) a multi-faceted system for data monitoring and data accuracy testing, (b) identification of possible modes of diagnostics deterioration and changes in experimental conditions, (c) qualification of data for use in code validation, and (d) identification and use of data trends to support effective control of test conditions with respect to the test target. Analysis results and examples given in the paper show the three statistical analysis techniques providing a complementary capability to warn of thermocouple failures. It also suggests that the regression analysis models relating calculated fuel temperatures and thermocouple readings can enable online regulation of experimental parameters (i.e. gas mixture content), to effectively maintain the target quantity (fuel temperature) within a given range.
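
    As an illustration of the first of the three techniques named (control charting), the sketch below flags readings outside Shewhart-style mean ± 3σ limits. The simulated data, the limits and the variable names are illustrative assumptions of mine, not the NDMAS implementation.

      import numpy as np

      def control_limits(readings, k=3.0):
          """Mean +/- k*sigma limits for one thermocouple channel."""
          mu, sigma = np.mean(readings), np.std(readings, ddof=1)
          return mu - k * sigma, mu + k * sigma

      def out_of_control(readings, k=3.0):
          lo, hi = control_limits(readings, k)
          return [i for i, r in enumerate(readings) if r < lo or r > hi]

      temps = np.random.default_rng(1).normal(1100.0, 8.0, size=200)  # simulated readings
      temps[120] = 1020.0                   # injected dropout, e.g. a failing thermocouple
      print(out_of_control(temps))          # the injected point at index 120 is flagged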

  1. Detecting epistasis with the marginal epistasis test in genetic mapping studies of quantitative traits

    PubMed Central

    Zeng, Ping; Mukherjee, Sayan; Zhou, Xiang

    2017-01-01

    Epistasis, commonly defined as the interaction between multiple genes, is an important genetic component underlying phenotypic variation. Many statistical methods have been developed to model and identify epistatic interactions between genetic variants. However, because of the large combinatorial search space of interactions, most epistasis mapping methods face enormous computational challenges and often suffer from low statistical power due to multiple test correction. Here, we present a novel, alternative strategy for mapping epistasis: instead of directly identifying individual pairwise or higher-order interactions, we focus on mapping variants that have non-zero marginal epistatic effects—the combined pairwise interaction effects between a given variant and all other variants. By testing marginal epistatic effects, we can identify candidate variants that are involved in epistasis without the need to identify the exact partners with which the variants interact, thus potentially alleviating much of the statistical and computational burden associated with standard epistatic mapping procedures. Our method is based on a variance component model, and relies on a recently developed variance component estimation method for efficient parameter inference and p-value computation. We refer to our method as the “MArginal ePIstasis Test”, or MAPIT. With simulations, we show how MAPIT can be used to estimate and test marginal epistatic effects, produce calibrated test statistics under the null, and facilitate the detection of pairwise epistatic interactions. We further illustrate the benefits of MAPIT in a QTL mapping study by analyzing the gene expression data of over 400 individuals from the GEUVADIS consortium. PMID:28746338
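
    In notation of my own (a schematic reading of the abstract, not necessarily the paper's exact parameterization), the variance-component idea is to give variant k its own marginal-epistasis component and test whether its variance is zero:

      \mathbf{y} = \mathbf{x}_k\beta_k + \mathbf{g}_{-k} + \mathbf{m}_k + \boldsymbol{\varepsilon},
      \qquad \mathbf{g}_{-k} \sim \mathcal{N}\!\left(0,\ \sigma^2 \mathbf{K}_{-k}\right),
      \quad \mathbf{m}_k \sim \mathcal{N}\!\left(0,\ \omega_k^2 \mathbf{M}_k\right),
      \quad \boldsymbol{\varepsilon} \sim \mathcal{N}\!\left(0,\ \tau^2 \mathbf{I}\right),

    where K_{-k} captures the additive effects of all other variants, M_k captures the combined pairwise interactions between variant k and all other variants, and the marginal epistasis test is of H0: omega_k^2 = 0.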

  2. Effects of animal-assisted therapy on agitated behaviors and social interactions of older adults with dementia.

    PubMed

    Richeson, Nancy E

    2003-01-01

    The effects of a therapeutic recreation intervention using animal-assisted therapy (AAT) on the agitated behaviors and social interactions of older adults with dementia were examined using the Cohen-Mansfield Agitation Inventory and the Animal-Assisted Therapy Flow Sheet. In a pilot study, 15 nursing home residents with dementia participated in a daily AAT intervention for three weeks. Results showed statistically significant decreases in agitated behaviors and a statistically significant increase in social interaction from pretest to post-test.

  3. Photon counting statistics analysis of biophotons from hands.

    PubMed

    Jung, Hyun-Hee; Woo, Won-Myung; Yang, Joon-Mo; Choi, Chunho; Lee, Jonghan; Yoon, Gilwon; Yang, Jong S; Soh, Kwang-Sup

    2003-05-01

    The photon counting statistics of biophotons emitted from hands are studied with a view to testing their agreement with the Poisson distribution. The moments of the observed probability up to seventh order have been evaluated. The moments of biophoton emission from hands are in good agreement with the theoretical values of the Poisson distribution, while those of the dark counts of the photomultiplier tube show large deviations. The present results are consistent with the conventional delta-value analysis of the second moment of probability.

  4. Corneal permeability changes in dry eye disease: an observational study.

    PubMed

    Fujitani, Kenji; Gadaria, Neha; Lee, Kyu-In; Barry, Brendan; Asbell, Penny

    2016-05-13

    Diagnostic tests for dry eye disease (DED), including the ocular surface disease index (OSDI), tear breakup time (TBUT), corneal fluorescein staining, and lissamine staining, have a great deal of variability. We investigated whether fluorophotometry correlated with previously established DED diagnostic tests and whether it could serve as a novel objective metric to evaluate DED. Dry eye patients who had had established signs or symptoms for at least 6 months were included in this observational study. Normal subjects with no symptoms of dry eyes served as controls. Each eye had a baseline fluorescein scan prior to any fluorescein dye. Fluorescein dye was then placed into both eyes, rinsed with saline solution, and scanned at 5, 10, 15, and 30 min. Patients were administered the following diagnostic tests to correlate with fluorophotometry: OSDI, TBUT, fluorescein, and lissamine. Standard protocols were used. P < 0.05 was considered significant. Fifty eyes from 25 patients (DED = 22 eyes, 11 patients; normal = 28 eyes, 14 patients) were included. Baseline scans of the dry eye and control groups did not show any statistical difference (p = 0.84). Fluorescein concentrations of DED and normal patients differed significantly at all time intervals (p < 10^-5, 0.001, 0.002, and 0.049 for 5, 10, 15, and 30 min, respectively). Fluorophotometry values converged towards baseline as time elapsed, but the two groups were still statistically different at 30 min (p < 0.01). We used four fluorophotometry scoring methods and correlated them with OSDI, TBUT, fluorescein, and lissamine along with adjusted and aggregate scores. The four scoring schemes did not show any significant correlations with the other tests, except for correlations of lissamine staining with the 10-min (p = 0.045, 0.034) and 15-min (p = 0.013, 0.012) scans, and of aggregate scores with the 15-min scan (p = 0.042, 0.017). Fluorophotometry generally did not correlate with any other DED tests, even though it was capable of differentiating between DED and normal eyes up to 30 min after fluorescein dye instillation. There may be an aspect of DED that is missed in the current regimen of DED tests and only captured with fluorophotometry. Adding fluorophotometry may be useful in screening, diagnosing, and monitoring patients with DED.

  5. Scaled test statistics and robust standard errors for non-normal data in covariance structure analysis: a Monte Carlo study.

    PubMed

    Chou, C P; Bentler, P M; Satorra, A

    1991-11-01

    Research studying robustness of maximum likelihood (ML) statistics in covariance structure analysis has concluded that test statistics and standard errors are biased under severe non-normality. An estimation procedure known as asymptotic distribution free (ADF), making no distributional assumption, has been suggested to avoid these biases. Corrections to the normal theory statistics to yield more adequate performance have also been proposed. This study compares the performance of a scaled test statistic and robust standard errors for two models under several non-normal conditions and also compares these with the results from ML and ADF methods. Both ML and ADF test statistics performed rather well in one model and considerably worse in the other. In general, the scaled test statistic seemed to behave better than the ML test statistic and the ADF statistic performed the worst. The robust and ADF standard errors yielded more appropriate estimates of sampling variability than the ML standard errors, which were usually downward biased, in both models under most of the non-normal conditions. ML test statistics and standard errors were found to be quite robust to the violation of the normality assumption when data had either symmetric and platykurtic distributions, or non-symmetric and zero kurtotic distributions.

  6. Robust inference from multiple test statistics via permutations: a better alternative to the single test statistic approach for randomized trials.

    PubMed

    Ganju, Jitendra; Yu, Xinxin; Ma, Guoguang Julie

    2013-01-01

    Formal inference in randomized clinical trials is based on controlling the type I error rate associated with a single pre-specified statistic. The deficiency of using just one method of analysis is that it depends on assumptions that may not be met. For robust inference, we propose pre-specifying multiple test statistics and relying on the minimum p-value for testing the null hypothesis of no treatment effect. The null hypothesis associated with the various test statistics is that the treatment groups are indistinguishable. The critical value for hypothesis testing comes from permutation distributions. Rejection of the null hypothesis when the smallest p-value is less than the critical value controls the type I error rate at its designated value. Even if one of the candidate test statistics has low power, the adverse effect on the power of the minimum p-value statistic is small. Its use is illustrated with examples. We conclude that it is better to rely on the minimum p-value rather than a single statistic, particularly when that single statistic is the logrank test, because of the cost and complexity of many survival trials. Copyright © 2013 John Wiley & Sons, Ltd.
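
    A compact sketch of the idea, with two generic candidate statistics (Welch t and Wilcoxon-Mann-Whitney) standing in for the trial-specific statistics the paper discusses; the function names and the simple label-permutation scheme are my simplifications.

      import numpy as np
      from scipy.stats import ttest_ind, mannwhitneyu

      def min_p(y, group):
          """Smallest p-value over a prespecified set of candidate tests."""
          a, b = y[group == 0], y[group == 1]
          p_t = ttest_ind(a, b, equal_var=False).pvalue
          p_u = mannwhitneyu(a, b, alternative="two-sided").pvalue
          return min(p_t, p_u)

      def min_p_permutation_test(y, group, n_perm=2000, seed=0):
          """Calibrate the min-p statistic by permuting treatment labels."""
          rng = np.random.default_rng(seed)
          obs = min_p(y, group)
          null = np.array([min_p(y, rng.permutation(group)) for _ in range(n_perm)])
          return float(np.mean(null <= obs))   # permutation p-value for the min-p statistic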

  7. Wilcoxon's signed-rank statistic: what null hypothesis and why it matters.

    PubMed

    Li, Heng; Johnson, Terri

    2014-01-01

    In statistical literature, the term 'signed-rank test' (or 'Wilcoxon signed-rank test') has been used to refer to two distinct tests: a test for symmetry of distribution and a test for the median of a symmetric distribution, sharing a common test statistic. To avoid potential ambiguity, we propose to refer to those two tests by different names, as 'test for symmetry based on signed-rank statistic' and 'test for median based on signed-rank statistic', respectively. The utility of such terminological differentiation should become evident through our discussion of how those tests connect and contrast with sign test and one-sample t-test. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.
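
    To make the contrast concrete, the sketch below runs the signed-rank test, the sign test and the one-sample t-test on the same vector of paired differences; the scipy calls are standard, but the data are invented.

      import numpy as np
      from scipy.stats import wilcoxon, ttest_1samp, binomtest

      diff = np.array([1.2, -0.4, 0.8, 2.1, -0.3, 0.9, 1.5, 0.2, -1.1, 0.7])

      # Signed-rank statistic: a test of symmetry about 0, or of the median
      # if symmetry is assumed (the two readings the paper distinguishes).
      print(wilcoxon(diff))

      # Sign test: uses only the signs of the differences.
      print(binomtest(int(np.sum(diff > 0)), n=len(diff), p=0.5))

      # One-sample t-test: targets the mean, assuming approximate normality.
      print(ttest_1samp(diff, popmean=0.0))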

  8. Estimation on the concentration of total suspended matter in Lombok Coastal using Landsat 8 OLI, Indonesia

    NASA Astrophysics Data System (ADS)

    Emiyati; Manoppo, Anneke K. S.; Budhiman, Syarif

    2017-01-01

    Total Suspended Matter (TSM) consists of fine materials suspended and floating in the water column. TSM can make the water column turbid, which reduces the depth of light penetration and causes low productivity in coastal waters. The objective of this study was to estimate TSM concentration in the coastal waters of Lombok, Indonesia, from Landsat 8 OLI data by using empirical and analytical approaches relating three visible bands of Landsat 8 OLI subsurface reflectance (OLI 2, OLI 3 and OLI 4) to field data. The accuracy of the model was tested using error estimation and statistical analysis. Water colour, transparency and reflectance values showed that clear water has high transparency and low reflectance, while turbid water has low transparency and high reflectance. The estimated TSM concentrations in Lombok coastal waters range from 0.39 to 20.7 mg/l. TSM concentrations are high near the coast and low far from the coast. The statistical analysis showed that the TSM model derived from Landsat 8 OLI data could describe TSM from field measurements with a correlation of 91.8% and an RMSE value of 0.52. The t-test and F-test showed that TSM derived from Landsat 8 OLI and TSM measured in the field were not significantly different.
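
    A minimal sketch of the empirical approach described (a regression between visible-band reflectance and field TSM, evaluated by correlation and RMSE). The synthetic match-up data, the linear model form and the in-sample evaluation are simplifications of mine, not the authors' algorithm.

      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(2)
      # Hypothetical match-ups: subsurface reflectance of OLI bands 2-4 vs. field TSM (mg/l)
      R = rng.uniform(0.005, 0.05, size=(30, 3))
      tsm_field = 2.0 + 300.0 * R[:, 2] + rng.normal(0.0, 0.5, size=30)

      model = LinearRegression().fit(R, tsm_field)
      tsm_pred = model.predict(R)
      rmse = float(np.sqrt(np.mean((tsm_field - tsm_pred) ** 2)))
      r = float(np.corrcoef(tsm_field, tsm_pred)[0, 1])
      print(f"r = {r:.3f}, RMSE = {rmse:.2f} mg/l")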

  9. The Effects of CO2 Laser with or without Nanohydroxyapatite Paste in the Occlusion of Dentinal Tubules

    PubMed Central

    Al-maliky, Mohammed Abbood; Mahmood, Ali Shukur; Al-karadaghi, Tamara Sardar; Kurzmann, Christoph; Laky, Markus; Franz, Alexander; Moritz, Andreas

    2014-01-01

    The aim of this study was to evaluate a new treatment modality for the occlusion of dentinal tubules (DTs) via the combination of a 10.6 µm carbon dioxide (CO2) laser and a nanoparticle hydroxyapatite paste (n-HAp). Forty-six sound human molars were used in the current experiment. Ten of the molars were used to assess the temperature elevation during lasing. Thirty were evaluated in the dentinal permeability test, subdivided into 3 groups: the control group (C), laser only (L−), and laser plus n-HAp (L+). Six samples, two per group, were used for surface and cross-section morphology, evaluated through scanning electron microscopy (SEM). The temperature measurements showed that the maximum temperature increase was 3.2°C. Morphologically, groups (L−) and (L+) presented narrower DTs, and an almost complete occlusion of the dentinal tubules was found for group (L+). The Kruskal-Wallis nonparametric test for the permeability test data showed statistical differences between the groups (P < 0.05). For the intergroup comparison, all groups were statistically different from each other, with group (L+) showing significantly less dye penetration than the control group. We concluded that the CO2 laser at moderate power density combined with n-HAp seems to be a good treatment modality for reducing the permeability of dentin. PMID:25386616

  10. Detecting Multiple Model Components with the Likelihood Ratio Test

    NASA Astrophysics Data System (ADS)

    Protassov, R. S.; van Dyk, D. A.

    2000-05-01

    The likelihood ratio test (LRT) and the F-test, popularized in astrophysics by Bevington (Data Reduction and Error Analysis for the Physical Sciences) and Cash (1977, ApJ 228, 939), do not (even asymptotically) adhere to their nominal χ2 and F distributions in many statistical tests commonly used in astrophysics. The many legitimate uses of the LRT (see, e.g., the examples given in Cash (1977)) notwithstanding, it can be impossible to compute the false positive rate of the LRT or related tests such as the F-test. For example, although Cash (1977) did not suggest the LRT for detecting a line profile in a spectral model, it has become common practice despite the lack of certain required mathematical regularity conditions. Contrary to common practice, the nominal distribution of the LRT statistic should not be used in these situations. In this paper, we characterize an important class of problems where the LRT fails, show the non-standard behavior of the test in this setting, and provide a Bayesian alternative to the LRT, i.e., posterior predictive p-values. We emphasize that there are many legitimate uses of the LRT in astrophysics, and even when the LRT is inappropriate, there remain several statistical alternatives (e.g., judicious use of error bars and Bayes factors). We illustrate this point in our analysis of GRB 970508, which was studied by Piro et al. (ApJ, 514:L73-L77, 1999).
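
    For reference, the statistic in question is

      T_{\mathrm{LRT}} \;=\; -2\,\ln\frac{\sup_{\theta \in \Theta_0} L(\theta \mid x)}{\sup_{\theta \in \Theta} L(\theta \mid x)},

    which follows its nominal χ2 distribution asymptotically only under regularity conditions, for instance that the null value does not lie on the boundary of the parameter space; testing for an added emission-line component with non-negative strength violates exactly this condition, which is the abstract's point.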

  11. Task-based learning versus problem-oriented lecture in neurology continuing medical education.

    PubMed

    Vakani, Farhan; Jafri, Wasim; Ahmad, Amina; Sonawalla, Aziz; Sheerani, Mughis

    2014-01-01

    To determine whether general practitioners learned better with task-based learning or problem-oriented lecture in a Continuing Medical Education (CME) set-up. Quasi-experimental study. The Aga Khan University, Karachi campus, from April to June 2012. Fifty-nine physicians were given a choice to opt for either Task-based Learning (TBL) or Problem Oriented Lecture (PBL) in a continuing medical education set-up about headaches. The TBL group had 30 participants divided into 10 small groups, and were assigned case-based tasks. The lecture group had 29 participants. Both groups were given a pre- and a post-test. Pre/post assessment was done using one-best MCQs. The reliability coefficient of scores for both groups was estimated through Cronbach's alpha. An item analysis for difficulty and discriminatory indices was calculated for both groups. The paired t-test was used to determine the difference between pre- and post-test scores of both groups. The independent t-test was used to compare the impact of the two teaching methods in terms of learning through scores produced by the MCQ test. Cronbach's alpha was 0.672 for the lecture group and 0.881 for the TBL group. Item analysis for difficulty (p) and discriminatory (d) indexes was obtained for both groups. The results for the lecture group showed pre-test (p) = 42% vs. post-test (p) = 43%; pre-test (d) = 0.60 vs. post-test (d) = 0.40. The TBL group showed pre-test (p) = 48% vs. post-test (p) = 70%; pre-test (d) = 0.69 vs. post-test (d) = 0.73. The lecture group's pre-/post-test mean scores were 8.52 ± 2.95 vs. 12.41 ± 2.65 (p < 0.001), whereas the TBL group's were 9.70 ± 3.65 vs. 14 ± 3.99 (p < 0.001). The independent t-test showed an insignificant difference at baseline (lecture 8.52 ± 2.95 vs. TBL 9.70 ± 3.65; p = 0.177). The post-test scores were also not statistically different (lecture 12.41 ± 2.65 vs. TBL 14 ± 3.99; p = 0.07). Both delivery methods were found to be equally effective, showing statistically insignificant differences. However, the TBL group's higher post-test mean scores and the radical increase in the post-test difficulty index demonstrated improved learning through TBL delivery and call for further exploration with longitudinal studies in the context of CME.
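
    A short sketch of the classical item-analysis indices quoted above (difficulty = proportion answering correctly; discrimination = upper-group minus lower-group proportion correct), under my own choice of a 27% grouping fraction and invented 0/1 score data rather than the study's responses.

      import numpy as np

      def item_indices(scores, frac=0.27):
          """scores: (n_examinees, n_items) matrix of 0/1 item scores."""
          scores = np.asarray(scores)
          order = np.argsort(scores.sum(axis=1))
          k = max(1, int(round(frac * scores.shape[0])))
          low, high = scores[order[:k]], scores[order[-k:]]
          difficulty = scores.mean(axis=0)                  # proportion answering correctly
          discrimination = high.mean(axis=0) - low.mean(axis=0)
          return difficulty, discrimination

      mcq = np.random.default_rng(4).integers(0, 2, size=(30, 20))   # 30 examinees, 20 items
      p_index, d_index = item_indices(mcq)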

  12. Evaluation of bearing capacity of piles from cone penetration test data.

    DOT National Transportation Integrated Search

    2007-12-01

    A statistical analysis and ranking criteria were used to compare the CPT methods and the conventional alpha design method. Based on the results, the de Ruiter/Beringen and LCPC methods showed the best capability in predicting the measured load carryi...

  13. The effect of rare variants on inflation of the test statistics in case-control analyses.

    PubMed

    Pirie, Ailith; Wood, Angela; Lush, Michael; Tyrer, Jonathan; Pharoah, Paul D P

    2015-02-20

    The detection of bias due to cryptic population structure is an important step in the evaluation of findings of genetic association studies. The standard method of measuring this bias in a genetic association study is to compare the observed median association test statistic to the expected median test statistic. This ratio is inflated in the presence of cryptic population structure. However, inflation may also be caused by the properties of the association test itself particularly in the analysis of rare variants. We compared the properties of the three most commonly used association tests: the likelihood ratio test, the Wald test and the score test when testing rare variants for association using simulated data. We found evidence of inflation in the median test statistics of the likelihood ratio and score tests for tests of variants with less than 20 heterozygotes across the sample, regardless of the total sample size. The test statistics for the Wald test were under-inflated at the median for variants below the same minor allele frequency. In a genetic association study, if a substantial proportion of the genetic variants tested have rare minor allele frequencies, the properties of the association test may mask the presence or absence of bias due to population structure. The use of either the likelihood ratio test or the score test is likely to lead to inflation in the median test statistic in the absence of population structure. In contrast, the use of the Wald test is likely to result in under-inflation of the median test statistic which may mask the presence of population structure.
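
    The inflation measure referred to above, the genomic-control style ratio of the observed median test statistic to its expected value under the null, can be computed as below; the simulated 1-df null statistics are only there to show that a calibrated test gives a value near 1.

      import numpy as np
      from scipy.stats import chi2

      def inflation_lambda(chisq_stats, df=1):
          """Ratio of the observed median statistic to the null median."""
          return float(np.median(chisq_stats) / chi2.ppf(0.5, df))

      null_stats = np.random.default_rng(5).chisquare(df=1, size=100_000)
      print(inflation_lambda(null_stats))   # close to 1.0 for a well-calibrated test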

  14. Selecting the most appropriate inferential statistical test for your quantitative research study.

    PubMed

    Bettany-Saltikov, Josette; Whittaker, Victoria Jane

    2014-06-01

    To discuss the issues and processes relating to the selection of the most appropriate statistical test. A review of the basic research concepts together with a number of clinical scenarios is used to illustrate this. Quantitative nursing research generally features the use of empirical data which necessitates the selection of both descriptive and statistical tests. Different types of research questions can be answered by different types of research designs, which in turn need to be matched to a specific statistical test(s). Discursive paper. This paper discusses the issues relating to the selection of the most appropriate statistical test and makes some recommendations as to how these might be dealt with. When conducting empirical quantitative studies, a number of key issues need to be considered. Considerations for selecting the most appropriate statistical tests are discussed and flow charts provided to facilitate this process. When nursing clinicians and researchers conduct quantitative research studies, it is crucial that the most appropriate statistical test is selected to enable valid conclusions to be made. © 2013 John Wiley & Sons Ltd.

  15. Heparin Reversal After Cardiopulmonary Bypass: Are Point-of-Care Coagulation Tests Interchangeable?

    PubMed

    Willems, Ariane; Savan, Veaceslav; Faraoni, David; De Ville, Andrée; Rozen, Laurence; Demulder, Anne; Van der Linden, Philippe

    2016-10-01

    Protamine is used to neutralize heparin after patient separation from cardiopulmonary bypass (CPB). Different bedside tests are used to monitor the adequacy of heparin neutralization. For this study, the interchangeability of the activated coagulation time (ACT) and thromboelastometry (ROTEM; Tem Innovations GmbH, Basel, Switzerland) clotting time (CT) ratios in children undergoing cardiac surgery was assessed. Single-center, retrospective, cohort study between September 2010 and January 2012. University children's hospital. The study comprised children 0 to 16 years old undergoing elective cardiac surgery with CPB. Exclusion criteria were preoperative coagulopathy, Jehovah's witnesses, and children in a moribund condition (American Society of Anesthesiologists score 5). None. After heparin neutralization with protamine, the ratio between ACT, with and without heparinase, and the CT measured with INTEM/HEPTEM (intrinsic test activated with ellagic acid was performed without heparinase [INTEM] and with heparinase [HEPTEM]) using tests of ROTEM were calculated. Agreement was evaluated using Cohen's kappa statistics, Passing-Bablok regression, and Bland-Altman analysis. Among the 173 patients included for analysis, agreement between both tests showed a Cohen's kappa statistic of 0.06 (95% CI: -0.02 to 0.14; p = 0.22). Bland-Altman analysis showed a bias of 0.01, with a standard deviation of 0.13, and limits of agreement between -0.24 and 0.26. Passing-Bablok regression showed a systematic difference of 0.40 (95% CI: 0.16-0.59) and a proportional difference of 0.61 (95% CI: 0.42-0.86). The residual standard deviation was 0.11 (95% CI: -0.22 to 0.22), and the test for linearity showed p = 0.10. ACT, with or without heparinase, and the INTEM/HEPTEM CT ratios are not interchangeable to evaluate heparin reversal after pediatric patient separation from CPB. Therefore, the results of these tests should be corroborated with the absence/presence of bleeding and integrated into center-specific treatment algorithms. Copyright © 2016 Elsevier Inc. All rights reserved.
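
    A sketch of two of the agreement analyses named above (Bland-Altman bias with 95% limits of agreement, and Cohen's kappa after dichotomizing the ratios). The simulated ratios and the 1.1 cut-off are illustrative assumptions of mine, not the study's data or threshold.

      import numpy as np
      from sklearn.metrics import cohen_kappa_score

      def bland_altman(a, b):
          """Bias and 95% limits of agreement between paired measurements."""
          d = np.asarray(a) - np.asarray(b)
          bias, sd = d.mean(), d.std(ddof=1)
          return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

      rng = np.random.default_rng(6)
      act_ratio = rng.normal(1.0, 0.10, 173)               # hypothetical ACT ratios
      ct_ratio = act_ratio + rng.normal(0.01, 0.13, 173)   # hypothetical INTEM/HEPTEM CT ratios

      print(bland_altman(act_ratio, ct_ratio))
      print(cohen_kappa_score(act_ratio > 1.1, ct_ratio > 1.1))   # agreement at a 1.1 cut-off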

  16. Assessing student understanding of measurement and uncertainty

    NASA Astrophysics Data System (ADS)

    Abbott, David Scot

    A test to assess student understanding of measurement and uncertainty has been developed and administered to more than 500 students at two large research universities. The aim is two-fold: (1) to assess what students learn in the first semester of introductory physics labs and (2) to uncover patterns in student reasoning and practice. The forty-minute, eleven-item test focuses on direct measurement and student attitudes toward multiple measurements. After one revision cycle using think-aloud interviews, the test was administered to students in three groups: students enrolled in traditional laboratory sections of first-semester physics at North Carolina State University (NCSU), students in an experimental (SCALE-UP) section of first-semester physics at NCSU, and students in first-semester physics at the University of North Carolina at Chapel Hill. The results were analyzed using a mixture of qualitative and quantitative methods. In the traditional NCSU labs, where students receive no instruction in uncertainty and measurement, students show no improvement on any of the areas examined by the test. In SCALE-UP and at UNC, students show statistically significant gains in most areas of the test. Gains on specific test items in SCALE-UP and at UNC correspond to areas of instructional emphasis. Test items were grouped into four main aspects of performance: "point/set" reasoning, meaning of spread, ruler reading and "stacking." Student performance on the pretest was examined to identify links between these aspects. Items within each aspect are correlated to one another, sometimes quite strongly, but items from different aspects rarely show statistically significant correlation. Taken together, these results suggest that student difficulties may not be linked to a single underlying cause. The study shows that current instruction techniques improve student understanding, but that many students exit the introductory physics lab course without an appreciation or coherent understanding of the concept of measurement uncertainty.

  17. Quality of reporting statistics in two Indian pharmacology journals

    PubMed Central

    Jaykaran; Yadav, Preeti

    2011-01-01

    Objective: To evaluate the reporting of statistical methods in articles published in two Indian pharmacology journals. Materials and Methods: All original articles published since 2002 were downloaded from the journals’ (Indian Journal of Pharmacology (IJP) and Indian Journal of Physiology and Pharmacology (IJPP)) websites. These articles were evaluated on the basis of the appropriateness of descriptive statistics and inferential statistics. Descriptive statistics were evaluated on the basis of reporting of the method of description and central tendencies. Inferential statistics were evaluated on the basis of fulfilment of the assumptions of statistical methods and the appropriateness of statistical tests. Values are described as frequencies, percentages, and 95% confidence intervals (CI) around the percentages. Results: Inappropriate descriptive statistics were observed in 150 (78.1%, 95% CI 71.7–83.3%) articles. The most common reason for inappropriate descriptive statistics was the use of mean ± SEM in place of “mean (SD)” or “mean ± SD.” The most common statistical method used was one-way ANOVA (58.4%). Information regarding checking of the assumptions of the statistical test was mentioned in only two articles. An inappropriate statistical test was observed in 61 (31.7%, 95% CI 25.6–38.6%) articles. The most common reason for an inappropriate statistical test was the use of a two-group test for three or more groups. Conclusion: Articles published in two Indian pharmacology journals are not devoid of statistical errors. PMID:21772766

  18. Evaluation of PDA Technical Report No 33. Statistical Testing Recommendations for a Rapid Microbiological Method Case Study.

    PubMed

    Murphy, Thomas; Schwedock, Julie; Nguyen, Kham; Mills, Anna; Jones, David

    2015-01-01

    New recommendations for the validation of rapid microbiological methods have been included in the revised Technical Report 33 release from the PDA. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This case study applies those statistical methods to accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological methods system being evaluated for water bioburden testing. Results presented demonstrate that the statistical methods described in the PDA Technical Report 33 chapter can all be successfully applied to the rapid microbiological method data sets and gave the same interpretation for equivalence to the standard method. The rapid microbiological method was in general able to pass the requirements of PDA Technical Report 33, though the study shows that there can be occasional outlying results and that caution should be used when applying statistical methods to low average colony-forming unit values. Prior to use in a quality-controlled environment, any new method or technology has to be shown to work as designed by the manufacturer for the purpose required. For new rapid microbiological methods that detect and enumerate contaminating microorganisms, additional recommendations have been provided in the revised PDA Technical Report No. 33. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This paper applies those statistical methods to analyze accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological method system being validated for water bioburden testing. The case study demonstrates that the statistical methods described in the PDA Technical Report No. 33 chapter can be successfully applied to rapid microbiological method data sets and give the same comparability results for similarity or difference as the standard method. © PDA, Inc. 2015.

  19. [Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].

    PubMed

    Suzukawa, Yumi; Toyoda, Hideki

    2012-04-01

    This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in the fields like perception, cognition or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, meaningless effects could be detected. This implies that researchers who could not get large enough effect sizes would use larger samples to obtain significant results.
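
    A sketch of the kind of calculation behind such a survey: post-hoc power of a two-sample t-test for a given sample effect size and group size, and the per-group sample size needed for 80% power, using statsmodels. The numbers are illustrative and are not taken from the surveyed articles.

      from statsmodels.stats.power import TTestIndPower

      analysis = TTestIndPower()
      # Power for Cohen's d = 0.5 with n = 20 per group, two-sided alpha = .05
      print(analysis.power(effect_size=0.5, nobs1=20, alpha=0.05, ratio=1.0))
      # Per-group n needed to reach 80% power for the same effect size
      print(analysis.solve_power(effect_size=0.5, power=0.80, alpha=0.05, ratio=1.0))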

  20. Cyclic fatigue resistance of 3 different nickel-titanium reciprocating instruments in artificial canals.

    PubMed

    Higuera, Oscar; Plotino, Gianluca; Tocci, Luigi; Carrillo, Gabriela; Gambarini, Gianluca; Jaramillo, David E

    2015-06-01

    The purpose of this study was to evaluate the cyclic fatigue resistance of 3 different nickel-titanium reciprocating instruments. A total of 45 nickel-titanium instruments were tested and divided into 3 experimental groups (n = 15): group 1, WaveOne Primary instruments; group 2, Reciproc R25 instruments; and group 3, Twisted File (TF) Adaptive M-L1 instruments. The instruments were then subjected to cyclic fatigue test on a static model consisting of a metal block with a simulated canal with 60° angle of curvature and a 5-mm radius of curvature. WaveOne Primary, Reciproc R25, and TF Adaptive instruments were activated by using their proprietary movements, WaveOne ALL, Reciproc ALL, and TF Adaptive, respectively. All instruments were activated until fracture occurred, and the time to fracture was recorded visually for each file with a 1/100-second chronometer. Mean number of cycles to failure and standard deviations were calculated for each group, and data were statistically analyzed (P < .05). Instruments were also observed through scanning electron microscopy to evaluate type of fracture. Cyclic fatigue resistance of Reciproc R25 and TF Adaptive M-L1 was significantly higher than that of WaveOne Primary (P = .009 and P = .002, respectively). The results showed no statistically significant difference between TF Adaptive M-L1 and Reciproc R25 (P = .686). Analysis of the fractured portion under scanning electron microscopy indicated that all instruments showed morphologic characteristics of ductile fracture that were due to accumulation of metal fatigue. No statistically significant differences were found between the instruments tested except for WaveOne Primary, which showed the lowest resistance to cyclic fatigue. Copyright © 2015 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  1. Single Trial EEG Patterns for the Prediction of Individual Differences in Fluid Intelligence.

    PubMed

    Qazi, Emad-Ul-Haq; Hussain, Muhammad; Aboalsamh, Hatim; Malik, Aamir Saeed; Amin, Hafeez Ullah; Bamatraf, Saeed

    2016-01-01

    Assessing a person's intelligence level is required in many situations, such as career counseling and clinical applications. EEG evoked potentials in oddball task and fluid intelligence score are correlated because both reflect the cognitive processing and attention. A system for prediction of an individual's fluid intelligence level using single trial Electroencephalography (EEG) signals has been proposed. For this purpose, we employed 2D and 3D contents and 34 subjects each for 2D and 3D, which were divided into low-ability (LA) and high-ability (HA) groups using Raven's Advanced Progressive Matrices (RAPM) test. Using visual oddball cognitive task, neural activity of each group was measured and analyzed over three midline electrodes (Fz, Cz, and Pz). To predict whether an individual belongs to LA or HA group, features were extracted using wavelet decomposition of EEG signals recorded in visual oddball task and support vector machine (SVM) was used as a classifier. Two different types of Haar wavelet transform based features have been extracted from the band (0.3 to 30 Hz) of EEG signals. Statistical wavelet features and wavelet coefficient features from the frequency bands 0.0-1.875 Hz (delta low) and 1.875-3.75 Hz (delta high), resulted in the 100 and 98% prediction accuracies, respectively, both for 2D and 3D contents. The analysis of these frequency bands showed clear difference between LA and HA groups. Further, discriminative values of the features have been validated using statistical significance tests and inter-class and intra-class variation analysis. Also, statistical test showed that there was no effect of 2D and 3D content on the assessment of fluid intelligence level. Comparisons with state-of-the-art techniques showed the superiority of the proposed system.
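
    A rough sketch of the pipeline outlined above (Haar wavelet decomposition of single trials, statistical summaries of the low-frequency coefficients, SVM classification). The package choices (pywt, scikit-learn), the decomposition level, the feature set and the random data are assumptions of mine and do not reproduce the authors' exact features or accuracies.

      import numpy as np
      import pywt
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      def haar_features(trial, level=6):
          """Statistical summaries of the coarsest Haar coefficients of one trial."""
          approx = pywt.wavedec(trial, 'haar', level=level)[0]   # lowest-frequency content
          return [approx.mean(), approx.std(), approx.min(), approx.max()]

      rng = np.random.default_rng(7)
      trials = rng.normal(size=(68, 512))       # hypothetical single trials (subjects x samples)
      labels = rng.integers(0, 2, size=68)      # 0 = LA, 1 = HA (illustrative only)

      X = np.array([haar_features(t) for t in trials])
      print(cross_val_score(SVC(kernel='rbf', C=1.0), X, labels, cv=5).mean())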

  2. A novel metric that quantifies risk stratification for evaluating diagnostic tests: The example of evaluating cervical-cancer screening tests across populations.

    PubMed

    Katki, Hormuzd A; Schiffman, Mark

    2018-05-01

    Our work involves assessing whether new biomarkers might be useful for cervical-cancer screening across populations with different disease prevalences and biomarker distributions. When comparing across populations, we show that standard diagnostic accuracy statistics (predictive values, risk-differences, Youden's index and Area Under the Curve (AUC)) can easily be misinterpreted. We introduce an intuitively simple statistic for a 2 × 2 table, Mean Risk Stratification (MRS): the average change in risk (pre-test vs. post-test) revealed for tested individuals. High MRS implies better risk separation achieved by testing. MRS has 3 key advantages for comparing test performance across populations with different disease prevalences and biomarker distributions. First, MRS demonstrates that conventional predictive values and the risk-difference do not measure risk-stratification because they do not account for test-positivity rates. Second, Youden's index and AUC measure only multiplicative relative gains in risk-stratification: AUC = 0.6 achieves only 20% of maximum risk-stratification (AUC = 0.9 achieves 80%). Third, large relative gains in risk-stratification might not imply large absolute gains if disease is rare, demonstrating a "high-bar" to justify population-based screening for rare diseases such as cancer. We illustrate MRS by our experience comparing the performance of cervical-cancer screening tests in China vs. the USA. The test with the worst AUC = 0.72 in China (visual inspection with acetic acid) provides twice the risk-stratification (i.e. MRS) of the test with best AUC = 0.83 in the USA (human papillomavirus and Pap cotesting) because China has three times more cervical precancer/cancer. MRS could be routinely calculated to better understand the clinical/public-health implications of standard diagnostic accuracy statistics. Published by Elsevier Inc.
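
    One way to read the verbal definition above, "the average change in risk (pre-test vs. post-test) revealed for tested individuals", is as the expected absolute difference between post-test and pre-test risk, weighted by how often each test result occurs. The sketch below implements that reading; it is not necessarily the authors' exact estimator, and the counts are invented.

      def mean_risk_stratification(tp, fp, fn, tn):
          """Average absolute change from pre-test to post-test risk (one reading of MRS)."""
          n = tp + fp + fn + tn
          prevalence = (tp + fn) / n                    # pre-test risk
          p_pos, p_neg = (tp + fp) / n, (fn + tn) / n
          risk_pos = tp / (tp + fp) if tp + fp else prevalence
          risk_neg = fn / (fn + tn) if fn + tn else prevalence
          return p_pos * abs(risk_pos - prevalence) + p_neg * abs(risk_neg - prevalence)

      # Same sensitivity and specificity (0.9 each), but tripling the prevalence roughly
      # triples the risk stratification, mirroring the China vs. USA comparison above.
      print(mean_risk_stratification(tp=90, fp=990, fn=10, tn=8910))    # prevalence 1%
      print(mean_risk_stratification(tp=270, fp=970, fn=30, tn=8730))   # prevalence 3%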

  3. Biomonitoring of pollen grains of a river bank suburban city, Konnagar, Calcutta, India, and its link and impact on local people.

    PubMed

    Ghosal, Kavita; Pandey, Naren; Bhattacharya, Swati Gupta

    2015-01-01

    Pollen grains released by plants are dispersed into the air and can become trapped in the human nasal mucosa, causing immediate release of allergens that trigger severe Type 1 hypersensitivity reactions in susceptible allergic patients. Recent epidemiologic data show that 11-12% of people in India suffer from this type of disorder. Hence, it is important to examine whether pollen grains have a role in triggering respiratory problems, including allergy and asthma, in a subtropical suburban city. Meteorological data were collected for a period of two years, together with aerobiological sampling with a Burkard sampler. A pollen calendar was prepared for the city. A health survey and the hospitalization rate of local people for the above problems were documented, followed by statistical analysis between pollen counts and the data from the two above-mentioned sources. The Skin Prick Test and indirect ELISA were performed for the identification of allergenic pollen grains. Bio-monitoring results showed that a total of 36 species of pollen grains were present in the air of the study area, where their presence is controlled by many important meteorological parameters, as shown by the SPSS statistical analysis, and by their blooming periods. Statistical analysis showed a high positive correlation of monthly pollen counts with the data from the survey and the hospital. Biochemical tests revealed the allergenic nature of pollen grains of many local species found in the sampler. Bio-monitoring, together with the statistical and biochemical results, leaves no doubt about the role of pollen as a bio-pollutant. General knowledge about pollen allergy and the specific allergenic pollen grains of a particular locality could be a good step towards better health for the cosmopolitan suburban city.

  4. What can we learn from noise? — Mesoscopic nonequilibrium statistical physics —

    PubMed Central

    KOBAYASHI, Kensuke

    2016-01-01

    Mesoscopic systems — small electric circuits working in the quantum regime — offer us a unique experimental stage to explore quantum transport in a tunable and precise way. The purpose of this Review is to show how they can contribute to statistical physics. We introduce the significance of fluctuation, or equivalently noise, as noise measurement enables us to address the fundamental aspects of a physical system. The significance of the fluctuation theorem (FT) in statistical physics is noted. We explain what information can be deduced from the current noise measurement in mesoscopic systems. As an important application of the noise measurement to statistical physics, we describe our experimental work on the current and current noise in an electron interferometer, which is the first experimental test of FT in the quantum regime. Our attempt will shed new light on the research field of mesoscopic quantum statistical physics. PMID:27477456

  5. What can we learn from noise? - Mesoscopic nonequilibrium statistical physics.

    PubMed

    Kobayashi, Kensuke

    2016-01-01

    Mesoscopic systems - small electric circuits working in the quantum regime - offer us a unique experimental stage to explore quantum transport in a tunable and precise way. The purpose of this Review is to show how they can contribute to statistical physics. We introduce the significance of fluctuation, or equivalently noise, as noise measurement enables us to address the fundamental aspects of a physical system. The significance of the fluctuation theorem (FT) in statistical physics is noted. We explain what information can be deduced from the current noise measurement in mesoscopic systems. As an important application of the noise measurement to statistical physics, we describe our experimental work on the current and current noise in an electron interferometer, which is the first experimental test of FT in the quantum regime. Our attempt will shed new light on the research field of mesoscopic quantum statistical physics.

  6. A seven-year follow-up of intelligence test scores of foster grandparents.

    PubMed

    Troll, L E; Saltz, R; Dunin-Markiewicz, A

    1976-09-01

    After 7 years, a group of originally nonemployed poverty-level older people (over 60) who had been employed as foster grandparents were retested with the WAIS. Four WAIS subtests - Vocabulary, Similarities, Digit Span, and Block Design - were employed. Of the original group of 39, complete data were available for 28; 18 of these were still working on the project, and the other 10 had dropped out. Dropouts as a group tested lower originally and also showed more deterioration in functional health ratings over time. For the total group of 32 foster grandparents, three subtest scores showed stability over the 7 years. Only Digit Span showed a statistically significant drop. Neither age nor the initial level of health or WAIS scores was related to test-score changes over time.

  7. Ranking metrics in gene set enrichment analysis: do they matter?

    PubMed

    Zyla, Joanna; Marczyk, Michal; Weiner, January; Polanska, Joanna

    2017-05-12

    There exist many methods for describing the complex relation between changes of gene expression in molecular pathways or gene ontologies under different experimental conditions. Among them, Gene Set Enrichment Analysis seems to be one of the most commonly used (over 10,000 citations). An important parameter, which can affect the final result, is the choice of a metric for the ranking of genes; applying a default ranking metric may lead to poor results. In this work, 28 benchmark data sets were used to evaluate the sensitivity and false positive rate of gene set analysis for 16 different ranking metrics, including new proposals. Furthermore, the robustness of the chosen methods to sample size was tested. Using the k-means clustering algorithm, a group of four metrics with the highest performance in terms of overall sensitivity, overall false positive rate and computational load was established, i.e. the absolute value of the Moderated Welch Test statistic, the Minimum Significant Difference, the absolute value of the Signal-To-Noise ratio and the Baumgartner-Weiss-Schindler test statistic. In the case of false positive rate estimation, all selected ranking metrics were robust with respect to sample size. In the case of sensitivity, the absolute value of the Moderated Welch Test statistic and the absolute value of the Signal-To-Noise ratio gave stable results, while the Baumgartner-Weiss-Schindler statistic and the Minimum Significant Difference showed better results for larger sample sizes. Finally, the Gene Set Enrichment Analysis method with all tested ranking metrics was parallelised and implemented in MATLAB, and is available at https://github.com/ZAEDPolSl/MrGSEA . Choosing a ranking metric in Gene Set Enrichment Analysis has a critical impact on the results of pathway enrichment analysis. The absolute value of the Moderated Welch Test has the best overall sensitivity and the Minimum Significant Difference has the best overall specificity of gene set analysis. When the number of non-normally distributed genes is high, the Baumgartner-Weiss-Schindler test statistic gives better outcomes; it also finds more enriched pathways than the other tested metrics, which may lead to new biological discoveries.
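
    A minimal sketch of two of the ranking metrics compared above, the absolute signal-to-noise ratio and the absolute Welch t statistic (without the moderation step of the Moderated Welch Test), applied to a toy expression matrix:

        import numpy as np
        from scipy import stats

        def signal_to_noise(x, y):
            # Classic GSEA signal-to-noise ratio: difference of group means
            # divided by the sum of the group standard deviations.
            return (x.mean() - y.mean()) / (x.std(ddof=1) + y.std(ddof=1))

        rng = np.random.default_rng(0)
        expr = rng.normal(size=(1000, 20))      # 1000 genes x 20 samples (toy data)
        group = np.array([0] * 10 + [1] * 10)   # two experimental conditions

        snr = np.array([abs(signal_to_noise(g[group == 0], g[group == 1])) for g in expr])
        welch = np.array([abs(stats.ttest_ind(g[group == 0], g[group == 1],
                                              equal_var=False).statistic) for g in expr])

        ranking_by_snr = np.argsort(-snr)       # gene indices, most discriminative first
        ranking_by_welch = np.argsort(-welch)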

  8. Accuracy Evaluation of the Unified P-Value from Combining Correlated P-Values

    PubMed Central

    Alves, Gelio; Yu, Yi-Kuo

    2014-01-01

    Meta-analysis methods that combine p-values into a single unified p-value are frequently employed to improve confidence in hypothesis testing. An assumption made by most meta-analysis methods is that the p-values to be combined are independent, which may not always be true. To investigate the accuracy of the unified p-value from combining correlated p-values, we have evaluated a family of statistical methods that combine independent, weighted independent, correlated, and weighted correlated p-values. Statistical accuracy evaluation by combining simulated correlated p-values showed that correlation among p-values can have a significant effect on the accuracy of the combined p-value obtained. Among the statistical methods evaluated, those that weight p-values compute more accurate combined p-values than those that do not. Also, statistical methods that utilize the correlation information have the best performance, producing significantly more accurate combined p-values. In our study we have demonstrated that statistical methods that combine p-values based on the assumption of independence can produce inaccurate p-values when combining correlated p-values, even when the p-values are only weakly correlated. Therefore, to prevent drawing false conclusions during hypothesis testing, our study advises caution when interpreting the p-value obtained from combining p-values of unknown correlation. However, when the correlation information is available, the weighting-capable statistical method, first introduced by Brown and recently modified by Hou, seems to perform the best amongst the methods investigated. PMID:24663491
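
    A minimal sketch of the two ends of the spectrum discussed above: Fisher's method, which assumes independent p-values, and a Brown-style correction that uses the correlation matrix of the underlying statistics. The polynomial covariance approximation below is the commonly used one and may differ in detail from the weighted variant of Brown's method modified by Hou.

        import numpy as np
        from scipy import stats

        def fisher_combine(pvals):
            # Assumes independence: -2 * sum(log p) ~ chi-square with 2k degrees of freedom.
            x = -2.0 * np.sum(np.log(pvals))
            return stats.chi2.sf(x, df=2 * len(pvals))

        def brown_combine(pvals, corr):
            # corr: correlation matrix of the underlying test statistics.
            k = len(pvals)
            x = -2.0 * np.sum(np.log(pvals))
            cov = 0.0
            for i in range(k):
                for j in range(i + 1, k):
                    r = corr[i, j]
                    # Polynomial approximation to cov(-2 ln p_i, -2 ln p_j).
                    cov += r * (3.25 + 0.75 * r) if r >= 0 else r * (3.27 + 0.71 * r)
            mean_x, var_x = 2.0 * k, 4.0 * k + 2.0 * cov
            f = 2.0 * mean_x ** 2 / var_x    # effective degrees of freedom
            c = var_x / (2.0 * mean_x)       # scale factor
            return stats.chi2.sf(x / c, df=f)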

  9. Programmable quantum random number generator without postprocessing.

    PubMed

    Nguyen, Lac; Rehain, Patrick; Sua, Yong Meng; Huang, Yu-Ping

    2018-02-15

    We demonstrate a viable source of unbiased quantum random numbers whose statistical properties can be arbitrarily programmed without the need for any postprocessing such as randomness distillation or distribution transformation. It is based on measuring the arrival time of single photons in shaped temporal modes that are tailored with an electro-optical modulator. We show that quantum random numbers can be created directly in customized probability distributions and pass all randomness tests of the NIST and Dieharder test suites without any randomness extraction. The min-entropies of such generated random numbers are measured close to the theoretical limits, indicating their near-ideal statistics and ultrahigh purity. Easy to implement and arbitrarily programmable, this technique can find versatile uses in a multitude of data analysis areas.
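
    The min-entropy figure mentioned above can be estimated from the empirical symbol distribution as H_min = -log2(max_i p_i); a toy sketch with simulated (not measured) 8-bit numbers:

        import numpy as np

        def min_entropy(samples, n_bits):
            counts = np.bincount(samples, minlength=2 ** n_bits)
            p_max = counts.max() / counts.sum()
            return -np.log2(p_max)          # equals n_bits only for a perfectly uniform source

        rng = np.random.default_rng(1)
        raw = rng.integers(0, 256, size=1_000_000)   # simulated 8-bit outputs
        print(min_entropy(raw, n_bits=8))            # close to 8 for near-uniform data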

  10. Frequentist Model Averaging in Structural Equation Modelling.

    PubMed

    Jin, Shaobo; Ankargren, Sebastian

    2018-06-04

    Model selection from a set of candidate models plays an important role in many structural equation modelling applications. However, traditional model selection methods introduce extra randomness that is not accounted for by post-model selection inference. In the current study, we propose a model averaging technique within the frequentist statistical framework. Instead of selecting an optimal model, the contributions of all candidate models are acknowledged. Valid confidence intervals and a [Formula: see text] test statistic are proposed. A simulation study shows that the proposed method is able to produce a robust mean-squared error, a better coverage probability, and a better goodness-of-fit test compared to model selection. It is an interesting compromise between model selection and the full model.

  11. Imprints of magnetic power and helicity spectra on radio polarimetry statistics

    NASA Astrophysics Data System (ADS)

    Junklewitz, H.; Enßlin, T. A.

    2011-06-01

    The statistical properties of turbulent magnetic fields in radio-synchrotron sources should be imprinted on the statistics of polarimetric observables. In search of these imprints, i.e. characteristic modifications of the polarimetry statistics caused by magnetic field properties, we calculate correlation and cross-correlation functions from a set of observables that contain total intensity I, polarized intensity P, and Faraday depth φ. The correlation functions are evaluated for all combinations of observables up to fourth order in magnetic field B. We derive these analytically as far as possible and from first principles using only some basic assumptions, such as Gaussian statistics for the underlying magnetic field in the observed region and statistical homogeneity. We further assume some simplifications to reduce the complexity of the calculations, because for a start we were interested in a proof of concept. Using this statistical approach, we show that it is possible to gain information about the helical part of the magnetic power spectrum via the correlation functions ⟨P(k_⊥) φ(k'_⊥) φ(k''_⊥)⟩_B and ⟨I(k_⊥) φ(k'_⊥) φ(k''_⊥)⟩_B. Using this insight, we construct an easy-to-use test for helicity called LITMUS (Local Inference Test for Magnetic fields which Uncovers heliceS), which gives a spectrally integrated measure of helicity. For now, all calculations are given in a Faraday-free case, but set up so that Faraday rotational effects can be included later.

  12. Correlation of MRI Visual Scales with Neuropsychological Profile in Mild Cognitive Impairment of Parkinson's Disease.

    PubMed

    Vasconcellos, Luiz Felipe; Pereira, João Santos; Adachi, Marcelo; Greca, Denise; Cruz, Manuela; Malak, Ana Lara; Charchat-Fichman, Helenice; Spitz, Mariana

    2017-01-01

    Few studies have evaluated magnetic resonance imaging (MRI) visual scales in Parkinson's disease-Mild Cognitive Impairment (PD-MCI). We selected 79 PD patients and 92 controls (CO) to perform neurologic and neuropsychological evaluation. Brain MRI was performed to evaluate the following scales: Global Cortical Atrophy (GCA), Fazekas, and medial temporal atrophy (MTA). The analysis revealed that both PD groups (amnestic and nonamnestic) showed worse performance on several tests when compared to CO. Memory, executive function, and attention impairment were more severe in amnestic PD-MCI group. Overall analysis of frequency of MRI visual scales by MCI subtype did not reveal any statistically significant result. Statistically significant inverse correlation was observed between GCA scale and Mini-Mental Status Examination (MMSE), Montreal Cognitive Assessment (MoCA), semantic verbal fluency, Stroop test, figure memory test, trail making test (TMT) B, and Rey Auditory Verbal Learning Test (RAVLT). The MTA scale correlated with Stroop test and Fazekas scale with figure memory test, digit span, and Stroop test according to the subgroup evaluated. Visual scales by MRI in MCI should be evaluated by cognitive domain and might be more useful in more severely impaired MCI or dementia patients.

  13. Improved Statistics for Genome-Wide Interaction Analysis

    PubMed Central

    Ueki, Masao; Cordell, Heather J.

    2012-01-01

    Recently, Wu and colleagues [1] proposed two novel statistics for genome-wide interaction analysis using case/control or case-only data. In computer simulations, their proposed case/control statistic outperformed competing approaches, including the fast-epistasis option in PLINK and logistic regression analysis under the correct model; however, reasons for its superior performance were not fully explored. Here we investigate the theoretical properties and performance of Wu et al.'s proposed statistics and explain why, in some circumstances, they outperform competing approaches. Unfortunately, we find minor errors in the formulae for their statistics, resulting in tests that have higher than nominal type 1 error. We also find minor errors in PLINK's fast-epistasis and case-only statistics, although theory and simulations suggest that these errors have only negligible effect on type 1 error. We propose adjusted versions of all four statistics that, both theoretically and in computer simulations, maintain correct type 1 error rates under the null hypothesis. We also investigate statistics based on correlation coefficients that maintain similar control of type 1 error. Although designed to test specifically for interaction, we show that some of these previously-proposed statistics can, in fact, be sensitive to main effects at one or both loci, particularly in the presence of linkage disequilibrium. We propose two new “joint effects” statistics that, provided the disease is rare, are sensitive only to genuine interaction effects. In computer simulations we find, in most situations considered, that highest power is achieved by analysis under the correct genetic model. Such an analysis is unachievable in practice, as we do not know this model. However, generally high power over a wide range of scenarios is exhibited by our joint effects and adjusted Wu statistics. We recommend use of these alternative or adjusted statistics and urge caution when using Wu et al.'s originally-proposed statistics, on account of the inflated error rate that can result. PMID:22496670
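
    For orientation, a generic case/control interaction test of the kind used as a benchmark above is a logistic regression with an interaction term compared by a likelihood-ratio test; this is not Wu et al.'s closed-form statistic nor the adjusted versions proposed here, and the data below are simulated.

        import numpy as np
        import statsmodels.api as sm
        from scipy.stats import chi2

        rng = np.random.default_rng(2)
        n = 2000
        g1 = rng.integers(0, 3, n)              # genotype at locus 1 (0/1/2)
        g2 = rng.integers(0, 3, n)              # genotype at locus 2
        logit = -1.0 + 0.1 * g1 + 0.1 * g2 + 0.3 * g1 * g2
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

        X0 = sm.add_constant(np.column_stack([g1, g2]))            # main effects only
        X1 = sm.add_constant(np.column_stack([g1, g2, g1 * g2]))   # plus interaction
        ll0 = sm.Logit(y, X0).fit(disp=0).llf
        ll1 = sm.Logit(y, X1).fit(disp=0).llf
        lrt = 2.0 * (ll1 - ll0)                 # ~ chi2(1) under no interaction
        print(chi2.sf(lrt, df=1))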

  14. Climatological Characterization of Three-Dimensional Storm Structure from Operational Radar and Rain Gauge Data.

    NASA Astrophysics Data System (ADS)

    Steiner, Matthias; Houze, Robert A., Jr.; Yuter, Sandra E.

    1995-09-01

    Three algorithms extract information on precipitation type, structure, and amount from operational radar and rain gauge data. Tests on one month of data from one site show that the algorithms perform accurately and provide products that characterize the essential features of the precipitation climatology. Input to the algorithms are the operationally executed volume scans of a radar and the data from a surrounding rain gauge network. The algorithms separate the radar echoes into convective and stratiform regions, statistically summarize the vertical structure of the radar echoes, and determine precipitation rates and amounts on high spatial resolution. The convective and stratiform regions are separated on the basis of the intensity and sharpness of the peaks of echo intensity. The peaks indicate the centers of the convective region. Precipitation not identified as convective is stratiform. This method avoids the problem of underestimating the stratiform precipitation. The separation criteria are applied in exactly the same way throughout the observational domain and the product generated by the algorithm can be compared directly to model output. An independent test of the algorithm on data for which high-resolution dual-Doppler observations are available shows that the convective stratiform separation algorithm is consistent with the physical definitions of convective and stratiform precipitation. The vertical structure algorithm presents the frequency distribution of radar reflectivity as a function of height and thus summarizes in a single plot the vertical structure of all the radar echoes observed during a month (or any other time period). Separate plots reveal the essential differences in structure between the convective and stratiform echoes. Tests yield similar results (within less than 10%) for monthly rain statistics regardless of the technique used for estimating the precipitation, as long as the radar reflectivity values are adjusted to agree with monthly rain gauge data. It makes little difference whether the adjustment is by monthly mean rates or percentiles. Further tests show that 1-h sampling is sufficient to obtain an accurate estimate of monthly rain statistics.
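
    A highly simplified sketch of a peakedness-based convective/stratiform separation in the spirit of the description above: a pixel is flagged convective when its reflectivity exceeds an absolute threshold or stands out sufficiently from the local background mean. The thresholds and background size here are illustrative placeholders, not the values used operationally.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def separate(refl_dbz, abs_thresh=40.0, peak_thresh=6.0, bg_size=11):
            # refl_dbz: 2D Cartesian reflectivity field in dBZ.
            background = uniform_filter(refl_dbz, size=bg_size)   # local mean reflectivity
            convective = (refl_dbz >= abs_thresh) | (refl_dbz - background >= peak_thresh)
            stratiform = ~convective & (refl_dbz > 0.0)           # remaining echo
            return convective, stratiform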

  15. Knowledge about sources of dietary fibres and health effects using a validated scale: a cross-country study.

    PubMed

    Guiné, R P F; Duarte, J; Ferreira, M; Correia, P; Leal, M; Rumbak, I; Barić, I C; Komes, D; Satalić, Z; Sarić, M M; Tarcea, M; Fazakas, Z; Jovanoska, D; Vanevski, D; Vittadini, E; Pellegrini, N; Szűcs, V; Harangozó, J; El-Kenawy, A; El-Shenawy, O; Yalçın, E; Kösemeci, C; Klava, D; Straumite, E

    2016-12-01

    Dietary fibre (DF) is one of the components of diet that strongly contributes to health improvements, particularly in the gastrointestinal system. Hence, this work intended to evaluate the effect of sociodemographic variables such as age, gender, level of education, living environment and country on the levels of knowledge about dietary fibre (KADF), its sources and its effects on human health, using a validated scale. The present study was a cross-sectional study. A methodological study was conducted with 6010 participants residing in 10 countries from different continents (Europe, America, Africa). The instrument was a self-response questionnaire aimed at collecting information on knowledge about food fibres, and it was used to validate a scale (KADF) whose model was used in the present work to identify the best predictors of knowledge. The statistical tools used were as follows: basic descriptive statistics, decision trees, and inferential analysis (t-test for independent samples with Levene's test, and one-way ANOVA with post hoc multiple comparison tests). The results showed that the best predictor for the three types of knowledge evaluated (about DF, about its sources and about its effects on human health) was always the country, meaning that social, cultural and/or political conditions greatly determine the level of knowledge. The tests also showed statistically significant differences in the three types of knowledge for all sociodemographic variables evaluated: age, gender, level of education, living environment and country. These results indicate that actions planned to improve the level of knowledge should not be delineated in general terms intended to reach all sectors of the population; rather, in addressing different people, different methodologies must be designed so as to provide effective health education. Copyright © 2016 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  16. Three-dimensional textural features of conventional MRI improve diagnostic classification of childhood brain tumours.

    PubMed

    Fetit, Ahmed E; Novak, Jan; Peet, Andrew C; Arvanitits, Theodoros N

    2015-09-01

    The aim of this study was to assess the efficacy of three-dimensional texture analysis (3D TA) of conventional MR images for the classification of childhood brain tumours in a quantitative manner. The dataset comprised pre-contrast T1- and T2-weighted MRI series obtained from 48 children diagnosed with brain tumours (medulloblastoma, pilocytic astrocytoma and ependymoma). 3D and 2D TA were carried out on the images using first-, second- and higher order statistical methods. Six supervised classification algorithms were trained with the most influential 3D and 2D textural features, and their performances in the classification of tumour types, using the two feature sets, were compared. Model validation was carried out using the leave-one-out cross-validation (LOOCV) approach, as well as stratified 10-fold cross-validation, in order to provide additional reassurance. McNemar's test was used to test the statistical significance of any improvements demonstrated by 3D-trained classifiers. Supervised learning models trained with 3D textural features showed improved classification performances compared with those trained with conventional 2D features. For instance, a neural network classifier showed a 12% improvement in area under the receiver operating characteristic curve (AUC) and 19% in overall classification accuracy. These improvements were statistically significant for four of the tested classifiers, as per McNemar's tests. This study shows that 3D textural features extracted from conventional T1- and T2-weighted images can improve the diagnostic classification of childhood brain tumours. Long-term benefits of accurate, yet non-invasive, diagnostic aids include a reduction in surgical procedures, improvement in surgical and therapy planning, and support of discussions with patients' families. It remains necessary, however, to extend the analysis to a multicentre cohort in order to assess the scalability of the techniques used. Copyright © 2015 John Wiley & Sons, Ltd.
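
    McNemar's test, used above to compare 3D- and 2D-trained classifiers on the same cases, works on the counts of discordant predictions; a small sketch with invented counts:

        from statsmodels.stats.contingency_tables import mcnemar

        # Rows: 3D-trained classifier correct / wrong; columns: 2D-trained correct / wrong.
        table = [[30, 12],   # both correct | only the 3D-trained model correct
                 [4, 2]]     # only the 2D-trained model correct | both wrong
        result = mcnemar(table, exact=True)      # exact binomial test for small counts
        print(result.statistic, result.pvalue)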

  17. Measurements of Turbulent Flow Field in Separate Flow Nozzles with Enhanced Mixing Devices - Test Report

    NASA Technical Reports Server (NTRS)

    Bridges, James

    2002-01-01

    As part of the Advanced Subsonic Technology Program, a series of experiments was conducted at NASA Glenn Research Center on the effect of mixing enhancement devices on the aeroacoustic performance of separate flow nozzles. Initial acoustic evaluations of the devices showed that they reduced jet noise significantly, while creating very little thrust loss. The explanation for the improvement required that turbulence measurements, namely single point mean and RMS statistics and two-point spatial correlations, be made to determine the change in the turbulence caused by the mixing enhancement devices that lead to the noise reduction. These measurements were made in the summer of 2000 in a test program called Separate Nozzle Flow Test 2000 (SFNT2K) supported by the Aeropropulsion Research Program at NASA Glenn Research Center. Given the hot high-speed flows representative of a contemporary bypass ratio 5 turbofan engine, unsteady flow field measurements required the use of an optical measurement method. To achieve the spatial correlations, the Particle Image Velocimetry technique was employed, acquiring high-density velocity maps of the flows from which the required statistics could be derived. This was the first successful use of this technique for such flows, and shows the utility of this technique for future experimental programs. The extensive statistics obtained were likewise unique and give great insight into the turbulence which produces noise and how the turbulence can be modified to reduce jet noise.

  18. Estimating the proportion of true null hypotheses when the statistics are discrete.

    PubMed

    Dialsingh, Isaac; Austin, Stefanie R; Altman, Naomi S

    2015-07-15

    In high-dimensional testing problems, π0, the proportion of null hypotheses that are true, is an important parameter. For discrete test statistics, the P values come from a discrete distribution with finite support, and the null distribution may depend on an ancillary statistic, such as a table margin, that varies among the test statistics. Methods for estimating π0 developed for continuous test statistics, which depend on a uniform or identical null distribution of P values, may not perform well when applied to discrete testing problems. This article introduces a number of π0 estimators, the regression and 'T' methods, that perform well with discrete test statistics, and also assesses how well methods developed for or adapted from continuous tests perform with discrete tests. We demonstrate the usefulness of these estimators in the analysis of high-throughput biological RNA-seq and single-nucleotide polymorphism data. The methods are implemented in R. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
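
    For contrast with the discrete-statistic estimators introduced above, the familiar Storey-type estimator for continuous p-values is a one-liner; the abstract's point is precisely that this kind of estimator can misbehave when p-values are discrete.

        import numpy as np

        def storey_pi0(pvals, lam=0.5):
            # Fraction of p-values above lambda, rescaled by the width of (lambda, 1].
            pvals = np.asarray(pvals)
            return min(1.0, np.mean(pvals > lam) / (1.0 - lam))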

  19. Remineralization Property of an Orthodontic Primer Containing a Bioactive Glass with Silver and Zinc

    PubMed Central

    Lee, Seung-Min; Kim, In-Ryoung; Park, Bong-Soo; Ko, Ching-Chang; Son, Woo-Sung; Kim, Yong-Il

    2017-01-01

    White spot lesions (WSLs) are irreversible damage occurring during orthodontic treatment due to excessive etching or demineralization by microorganisms. In this study, we conducted mechanical and cell viability tests to examine the antibacterial properties of 0.2% and 1% bioactive glass (BAG) and of silver-doped and zinc-doped BAGs in a primer, and evaluated their clinical applicability to prevent WSLs. The microhardness increased in a statistically significant manner for the BAG-containing adhesive, while the other samples showed no statistically significant difference compared with the control group. The shear bond strength of all samples increased compared with that of the control group. The cell viability of the control and sample groups was similar within 24 h, but decreased slightly over 48 h. All samples showed antibacterial properties. Regarding remineralization, the groups containing 0.2% of the samples showed remineralization compared with the control group, but the difference was not statistically significant; the groups containing 1% of the samples showed a significant difference compared with the control group. Among them, the orthodontic bonding primer containing 1% silver-doped BAG showed the highest remineralization. The new orthodontic bonding primer used in this study showed an antimicrobial effect, a chemical remineralization effect and WSL prevention, as well as clinically applicable properties, both physically and biologically. PMID:29088092

  20. Assessment of a Learning Strategy among Spine Surgeons.

    PubMed

    Gotfryd, Alberto Ofenhejm; Corredor, Jose Alfredo; Teixeira, William Jacobsen; Martins, Delio Eulálio; Milano, Jeronimo; Iutaka, Alexandre Sadao

    2017-02-01

    Pilot test, observational study. To evaluate objectively the knowledge transfer provided by theoretical and practical activities during AOSpine courses for spine surgeons. During two AOSpine principles courses, 62 participants underwent precourse assessment, which consisted of questions about their professional experience, preferences regarding adolescent idiopathic scoliosis (AIS) classification, and classifying the curves by means of the Lenke classification of two AIS clinical cases. Two learning strategies were used during the course. A postcourse questionnaire was applied to reclassify the same deformity cases. Differences in the correct answers of clinical cases between pre- and postcourse were analyzed, revealing the number of participants whose accuracy in classification improved after the course. Analysis showed a decrease in the number of participants with wrong answers in both cases after the course. In the first case, statistically significant differences were observed in both curve pattern (83.3%, p = 0.005) and lumbar spine modifier (46.6%, p = 0.049). No statistically significant improvement was seen in the sagittal thoracic modifier (33.3%, p = 0.309). In the second case, statistical improvement was obtained in curve pattern (27.4%, p = 0.018). No statistically significant improvement was seen regarding lumbar spine modifier (9.8%, p = 0.121) and sagittal thoracic modifier (12.9%, p = 0.081). This pilot test showed objectively that learning strategies used during AOSpine courses improved the participants' knowledge. Teaching strategies must be continually improved to ensure an optimal level of knowledge transfer.

  1. Assessment of a Learning Strategy among Spine Surgeons

    PubMed Central

    Gotfryd, Alberto Ofenhejm; Teixeira, William Jacobsen; Martins, Delio Eulálio; Milano, Jeronimo; Iutaka, Alexandre Sadao

    2017-01-01

    Study Design: Pilot test, observational study. Objective: To evaluate objectively the knowledge transfer provided by theoretical and practical activities during AOSpine courses for spine surgeons. Methods: During two AOSpine principles courses, 62 participants underwent precourse assessment, which consisted of questions about their professional experience, preferences regarding adolescent idiopathic scoliosis (AIS) classification, and classifying the curves by means of the Lenke classification of two AIS clinical cases. Two learning strategies were used during the course. A postcourse questionnaire was applied to reclassify the same deformity cases. Differences in the correct answers of clinical cases between pre- and postcourse were analyzed, revealing the number of participants whose accuracy in classification improved after the course. Results: Analysis showed a decrease in the number of participants with wrong answers in both cases after the course. In the first case, statistically significant differences were observed in both curve pattern (83.3%, p = 0.005) and lumbar spine modifier (46.6%, p = 0.049). No statistically significant improvement was seen in the sagittal thoracic modifier (33.3%, p = 0.309). In the second case, statistical improvement was obtained in curve pattern (27.4%, p = 0.018). No statistically significant improvement was seen regarding lumbar spine modifier (9.8%, p = 0.121) and sagittal thoracic modifier (12.9%, p = 0.081). Conclusion: This pilot test showed objectively that learning strategies used during AOSpine courses improved the participants' knowledge. Teaching strategies must be continually improved to ensure an optimal level of knowledge transfer. PMID:28451507

  2. Self esteem and organizational commitment among health information management staff in tertiary care hospitals in Tehran.

    PubMed

    Sadoughi, Farahnaz; Ebrahimi, Kamal

    2014-12-12

    Self esteem (SE) and organizational commitment (OC) have a significant impact on the quality of work life. This study aims to gain a better understanding of the relationships between SE and OC among health information management staff in tertiary care hospitals in Tehran (Iran). This was a descriptive, correlational and cross-sectional study conducted on the health information management staff of tertiary care hospitals in Tehran, Iran. A total of 155 participants were randomly selected from 400 staff. Data were collected by two standard questionnaires. SE and OC were measured using the Eysenck SE scale and Meyer and Allen's three-component model, respectively. The collected data were analyzed with SPSS (version 16) using the independent t-test, Pearson correlation coefficient, one-way ANOVA and F tests. The employees' OC and SE were 67.8 out of 120 (weak) and 21.0 out of 30 (moderate), respectively. The values for affective commitment, normative commitment, and continuance commitment were respectively 21.3 out of 40 (moderate), 23.9 out of 40 (moderate), and 22.7 out of 40 (moderate). The Pearson correlation coefficient test showed that the relationship between OC and SE was statistically significant (P<0.05). The one-way ANOVA test did not show any significant differences in SE and OC according to educational degree or work experience (P<0.05). This research showed that SE and OC are moderate. SE and OC have a strong correlation with turnover, critical thinking, job satisfaction, and individual and organizational improvement. Therefore, applying appropriate human resource policies is crucial to reinforcing these measures.

  3. Longitudinal and Immediate Effect of Kundalini Yoga on Salivary Levels of Cortisol and Activity of Alpha-Amylase and Its Effect on Perceived Stress

    PubMed Central

    García-Sesnich, Jocelyn N; Flores, Mauricio Garrido; Ríos, Marcela Hernández; Aravena, Jorge Gamonal

    2017-01-01

    Context: Stress is defined as an alteration of an organism's balance in response to a demand perceived from the environment. Diverse methods exist to evaluate the physiological response. A noninvasive method is the salivary measurement of cortisol and alpha-amylase. A growing body of evidence suggests that the regular practice of Yoga may be an effective treatment for stress. Aims: To determine the effect of Kundalini Yoga (KY), both immediate and after 3 months of regular practice, on the perception of psychological stress, salivary cortisol levels and alpha-amylase activity. Settings and Design: Perceived psychological stress and salivary levels of cortisol and alpha-amylase activity were determined and compared between participants attending KY classes for 3 months and a group not practicing any type of yoga. Subjects and Methods: The total sample consisted of 26 people between 18 and 45 years old; 13 taking part in KY classes given at the Faculty of Dentistry, University of Chile, and 13 controls. Salivary samples were collected; an enzyme-linked immunosorbent assay was performed to quantify cortisol, and a kinetic reaction test was used to determine alpha-amylase activity. The Perceived Stress Scale was applied at the beginning and at the end of the intervention. Statistical Analysis Used: Statistical analysis was performed using Stata v11.1 software. The Shapiro–Wilk test was used to assess data distribution. Paired analyses were carried out with the t-test or the Wilcoxon signed-rank test, and the t-test or Mann–Whitney test was applied to compare longitudinal data. Statistical significance was set at P < 0.05. Results: KY practice had an immediate effect on salivary cortisol. The activity of alpha-amylase did not show significant changes. A significant decrease in perceived stress was found in the study group. Conclusions: KY practice shows an immediate effect on salivary cortisol levels and on perceived stress after 3 months of practice. PMID:28546677

  4. Accurate estimation of normal incidence absorption coefficients with confidence intervals using a scanning laser Doppler vibrometer

    NASA Astrophysics Data System (ADS)

    Vuye, Cedric; Vanlanduit, Steve; Guillaume, Patrick

    2009-06-01

    When optical measurements of the sound fields inside a glass tube, near the material under test, are used to estimate the reflection and absorption coefficients, not only these acoustical parameters but also their confidence intervals can be determined. The sound fields are visualized using a scanning laser Doppler vibrometer (SLDV). In this paper the influence of different test signals on the quality of the results obtained with this technique is examined. The amount of data gathered during one measurement scan makes a thorough statistical analysis possible, leading to knowledge of the confidence intervals. The use of a multi-sine constructed on the resonance frequencies of the test tube proves to be a very good alternative to the traditional periodic chirp. This signal offers the ability to obtain data for multiple frequencies in one measurement, without the danger of a low signal-to-noise ratio. The variability analysis in this paper clearly shows the advantages of the proposed multi-sine compared to the periodic chirp. The measurement procedure and the statistical analysis are validated by measuring the reflection ratio at a closed end and comparing the results with the theoretical value. Results of the testing of two building materials (an acoustic ceiling tile and linoleum) are presented and compared to supplier data.

  5. Analysis of residual stress and hardness in regions of pre-manufactured and manual bends in fixation plates for maxillary advancement.

    PubMed

    Araújo, Marcelo Marotta; Lauria, Andrezza; Mendes, Marcelo Breno Meneses; Claro, Ana Paula Rosifini Alves; Claro, Cristiane Aparecida de Assis; Moreira, Roger William Fernandes

    2015-12-01

    The aim of this study was to analyze, through Vickers hardness test and photoelasticity analysis, pre-bent areas, manually bent areas, and areas without bends of 10-mm advancement pre-bent titanium plates (Leibinger system). The work was divided into three groups: group I-region without bend, group II-region of 90° manual bend, and group III-region of 90° pre-fabricated bends. All the materials were evaluated through hardness analysis by the Vickers hardness test, stress analysis by residual images obtained in a polariscope, and photoelastic analysis by reflection during the manual bending. The data obtained from the hardness tests were statistically analyzed using ANOVA and Tukey's tests at a significance level of 5 %. The pre-bent plate (group III) showed hardness means statistically significantly higher (P < 0.05) than those of the other groups (I-region without bends, II-90° manually bent region). Through the study of photoelastic reflection, it was possible to identify that the stress gradually increased, reaching a pink color (1.81 δ / λ), as the bending was performed. A general analysis of the results showed that the bent plate region of pre-bent titanium presented the best results.

  6. Mapping cell populations in flow cytometry data for cross‐sample comparison using the Friedman–Rafsky test statistic as a distance measure

    PubMed Central

    Hsiao, Chiaowen; Liu, Mengya; Stanton, Rick; McGee, Monnie; Qian, Yu

    2015-01-01

    Flow cytometry (FCM) is a fluorescence-based single-cell experimental technology that is routinely applied in biomedical research for identifying cellular biomarkers of normal physiological responses and abnormal disease states. While many computational methods have been developed that focus on identifying cell populations in individual FCM samples, very few have addressed how the identified cell populations can be matched across samples for comparative analysis. This article presents FlowMap-FR, a novel method for cell population mapping across FCM samples. FlowMap-FR is based on the Friedman-Rafsky nonparametric test statistic (FR statistic), which quantifies the equivalence of multivariate distributions. As applied to FCM data by FlowMap-FR, the FR statistic objectively quantifies the similarity between cell populations based on the shapes, sizes, and positions of fluorescence data distributions in the multidimensional feature space. To test and evaluate the performance of FlowMap-FR, we simulated the kinds of biological and technical sample variations that are commonly observed in FCM data. The results show that FlowMap-FR is able to effectively identify equivalent cell populations between samples under scenarios of proportion differences and modest position shifts. As a statistical test, FlowMap-FR can be used to determine whether the expression of a cellular marker is statistically different between two cell populations, suggesting candidates for new cellular phenotypes by providing an objective statistical measure. In addition, FlowMap-FR can indicate situations in which inappropriate splitting or merging of cell populations has occurred during gating procedures. We compared the FR statistic with the symmetric version of Kullback-Leibler divergence measure used in a previous population matching method with both simulated and real data. The FR statistic outperforms the symmetric version of KL-distance in distinguishing equivalent from nonequivalent cell populations. FlowMap-FR was also employed as a distance metric to match cell populations delineated by manual gating across 30 FCM samples from a benchmark FlowCAP data set. An F-measure of 0.88 was obtained, indicating high precision and recall of the FR-based population matching results. FlowMap-FR has been implemented as a standalone R/Bioconductor package so that it can be easily incorporated into current FCM data analytical workflows. © 2015 International Society for Advancement of Cytometry PMID:26274018

  7. Mapping cell populations in flow cytometry data for cross-sample comparison using the Friedman-Rafsky test statistic as a distance measure.

    PubMed

    Hsiao, Chiaowen; Liu, Mengya; Stanton, Rick; McGee, Monnie; Qian, Yu; Scheuermann, Richard H

    2016-01-01

    Flow cytometry (FCM) is a fluorescence-based single-cell experimental technology that is routinely applied in biomedical research for identifying cellular biomarkers of normal physiological responses and abnormal disease states. While many computational methods have been developed that focus on identifying cell populations in individual FCM samples, very few have addressed how the identified cell populations can be matched across samples for comparative analysis. This article presents FlowMap-FR, a novel method for cell population mapping across FCM samples. FlowMap-FR is based on the Friedman-Rafsky nonparametric test statistic (FR statistic), which quantifies the equivalence of multivariate distributions. As applied to FCM data by FlowMap-FR, the FR statistic objectively quantifies the similarity between cell populations based on the shapes, sizes, and positions of fluorescence data distributions in the multidimensional feature space. To test and evaluate the performance of FlowMap-FR, we simulated the kinds of biological and technical sample variations that are commonly observed in FCM data. The results show that FlowMap-FR is able to effectively identify equivalent cell populations between samples under scenarios of proportion differences and modest position shifts. As a statistical test, FlowMap-FR can be used to determine whether the expression of a cellular marker is statistically different between two cell populations, suggesting candidates for new cellular phenotypes by providing an objective statistical measure. In addition, FlowMap-FR can indicate situations in which inappropriate splitting or merging of cell populations has occurred during gating procedures. We compared the FR statistic with the symmetric version of Kullback-Leibler divergence measure used in a previous population matching method with both simulated and real data. The FR statistic outperforms the symmetric version of KL-distance in distinguishing equivalent from nonequivalent cell populations. FlowMap-FR was also employed as a distance metric to match cell populations delineated by manual gating across 30 FCM samples from a benchmark FlowCAP data set. An F-measure of 0.88 was obtained, indicating high precision and recall of the FR-based population matching results. FlowMap-FR has been implemented as a standalone R/Bioconductor package so that it can be easily incorporated into current FCM data analytical workflows. © The Authors. Published by Wiley Periodicals, Inc. on behalf of ISAC.
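
    A rough sketch of the core of the Friedman-Rafsky statistic: build a minimum spanning tree on the pooled events and count the edges that join the two samples; few cross-sample edges indicate differing distributions. The normalization and null calibration used by FlowMap-FR are omitted here.

        import numpy as np
        from scipy.spatial.distance import pdist, squareform
        from scipy.sparse.csgraph import minimum_spanning_tree

        def fr_cross_edges(x, y):
            # x, y: (n_events, n_markers) arrays for the two cell populations.
            pooled = np.vstack([x, y])
            labels = np.array([0] * len(x) + [1] * len(y))
            dist = squareform(pdist(pooled))
            mst = minimum_spanning_tree(dist).tocoo()
            return int(np.sum(labels[mst.row] != labels[mst.col]))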

  8. An Establishment-Level Test of the Statistical Discrimination Hypothesis.

    ERIC Educational Resources Information Center

    Tomaskovic-Devey, Donald; Skaggs, Sheryl

    1999-01-01

    Analysis of a sample of 306 workers shows that neither the gender nor racial composition of the workplace is associated with productivity. An alternative explanation for lower wages of women and minorities is social closure--the monopolizing of desirable positions by advantaged workers. (SK)

  9. Effect of water extract of Psidium guajava leaves on alloxan-induced diabetic rats.

    PubMed

    Mukhtar, H M; Ansari, S H; Ali, M; Naved, T; Bhat, Z A

    2004-09-01

    A water extract of Psidium guajava leaves was screened for hypoglycemic activity on alloxan-induced diabetic rats. In both acute and sub-acute tests, the water extract, at an oral dose of 250 mg/kg, showed statistically significant hypoglycemic activity.

  10. Teaching and Learning with Individually Unique Exercises

    ERIC Educational Resources Information Center

    Joerding, Wayne

    2010-01-01

    In this article, the author describes the pedagogical benefits of giving students individually unique homework exercises from an exercise template. Evidence from a test of this approach shows statistically significant improvements in subsequent exam performance by students receiving unique problems compared with students who received traditional…

  11. Ensuring Positiveness of the Scaled Difference Chi-Square Test Statistic

    ERIC Educational Resources Information Center

    Satorra, Albert; Bentler, Peter M.

    2010-01-01

    A scaled difference test statistic T[tilde][subscript d] that can be computed from standard software of structural equation models (SEM) by hand calculations was proposed in Satorra and Bentler (Psychometrika 66:507-514, 2001). The statistic T[tilde][subscript d] is asymptotically equivalent to the scaled difference test statistic T[bar][subscript…

  12. Combining Multiple Hypothesis Testing with Machine Learning Increases the Statistical Power of Genome-wide Association Studies

    PubMed Central

    Mieth, Bettina; Kloft, Marius; Rodríguez, Juan Antonio; Sonnenburg, Sören; Vobruba, Robin; Morcillo-Suárez, Carlos; Farré, Xavier; Marigorta, Urko M.; Fehr, Ernst; Dickhaus, Thorsten; Blanchard, Gilles; Schunk, Daniel; Navarro, Arcadi; Müller, Klaus-Robert

    2016-01-01

    The standard approach to the analysis of genome-wide association studies (GWAS) is based on testing each position in the genome individually for statistical significance of its association with the phenotype under investigation. To improve the analysis of GWAS, we propose a combination of machine learning and statistical testing that takes correlation structures within the set of SNPs under investigation in a mathematically well-controlled manner into account. The novel two-step algorithm, COMBI, first trains a support vector machine to determine a subset of candidate SNPs and then performs hypothesis tests for these SNPs together with an adequate threshold correction. Applying COMBI to data from a WTCCC study (2007) and measuring performance as replication by independent GWAS published within the 2008–2015 period, we show that our method outperforms ordinary raw p-value thresholding as well as other state-of-the-art methods. COMBI presents higher power and precision than the examined alternatives while yielding fewer false (i.e. non-replicated) and more true (i.e. replicated) discoveries when its results are validated on later GWAS studies. More than 80% of the discoveries made by COMBI upon WTCCC data have been validated by independent studies. Implementations of the COMBI method are available as a part of the GWASpi toolbox 2.0. PMID:27892471
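
    A conceptual sketch of such a screen-then-test procedure (a COMBI-style two-step, not the published implementation): a linear SVM ranks SNPs by absolute weight and only the top k are tested, with the multiplicity correction applied over k rather than over all SNPs. Genotypes are assumed coded 0/1/2 and the phenotype 0/1 with both classes present.

        import numpy as np
        from scipy import stats
        from sklearn.svm import LinearSVC

        def screen_then_test(genotypes, phenotype, k=100, alpha=0.05):
            # Step 1: machine-learning screen on all SNPs.
            svm = LinearSVC(C=0.1, dual=False).fit(genotypes, phenotype)
            top = np.argsort(-np.abs(svm.coef_.ravel()))[:k]
            # Step 2: univariate association tests on the selected SNPs only.
            hits = []
            for j in top:
                tab = np.array([[np.sum((genotypes[:, j] == g) & (phenotype == c))
                                 for g in (0, 1, 2)] for c in (0, 1)])
                tab = tab[:, tab.sum(axis=0) > 0]      # drop unobserved genotypes
                if tab.shape[1] < 2:
                    continue
                p = stats.chi2_contingency(tab)[1]
                if p < alpha / k:                      # correction over k, not all SNPs
                    hits.append((j, p))
            return hits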

  13. Combining Multiple Hypothesis Testing with Machine Learning Increases the Statistical Power of Genome-wide Association Studies.

    PubMed

    Mieth, Bettina; Kloft, Marius; Rodríguez, Juan Antonio; Sonnenburg, Sören; Vobruba, Robin; Morcillo-Suárez, Carlos; Farré, Xavier; Marigorta, Urko M; Fehr, Ernst; Dickhaus, Thorsten; Blanchard, Gilles; Schunk, Daniel; Navarro, Arcadi; Müller, Klaus-Robert

    2016-11-28

    The standard approach to the analysis of genome-wide association studies (GWAS) is based on testing each position in the genome individually for statistical significance of its association with the phenotype under investigation. To improve the analysis of GWAS, we propose a combination of machine learning and statistical testing that takes correlation structures within the set of SNPs under investigation in a mathematically well-controlled manner into account. The novel two-step algorithm, COMBI, first trains a support vector machine to determine a subset of candidate SNPs and then performs hypothesis tests for these SNPs together with an adequate threshold correction. Applying COMBI to data from a WTCCC study (2007) and measuring performance as replication by independent GWAS published within the 2008-2015 period, we show that our method outperforms ordinary raw p-value thresholding as well as other state-of-the-art methods. COMBI presents higher power and precision than the examined alternatives while yielding fewer false (i.e. non-replicated) and more true (i.e. replicated) discoveries when its results are validated on later GWAS studies. More than 80% of the discoveries made by COMBI upon WTCCC data have been validated by independent studies. Implementations of the COMBI method are available as a part of the GWASpi toolbox 2.0.

  14. Not Just a Sum? Identifying Different Types of Interplay between Constituents in Combined Interventions

    PubMed Central

    Van Deun, Katrijn; Thorrez, Lieven; van den Berg, Robert A.; Smilde, Age K.; Van Mechelen, Iven

    2015-01-01

    Motivation: Experiments in which the effect of combined manipulations is compared with the effects of their pure constituents have received a great deal of attention. Examples include the study of combination therapies and the comparison of double and single knockout model organisms. Often the effect of the combined manipulation is not a mere addition of the effects of its constituents, with quite different forms of interplay between the constituents being possible. Yet, a well-formalized taxonomy of possible forms of interplay is lacking, let alone a statistical methodology to test for their presence in empirical data. Results: Starting from a taxonomy of a broad range of forms of interplay between constituents of a combined manipulation, we propose a sound statistical hypothesis testing framework to test for the presence of each particular form of interplay. We illustrate the framework with analyses of public gene expression data on the combined treatment of dendritic cells with curdlan and GM-CSF and show that these lead to valuable insights into the mode of action of the constituent treatments and their combination. Availability and Implementation: R code implementing the statistical testing procedure for microarray gene expression data is available as supplementary material. The data are available from the Gene Expression Omnibus with accession number GSE32986. PMID:25965065

  15. Not Just a Sum? Identifying Different Types of Interplay between Constituents in Combined Interventions.

    PubMed

    Van Deun, Katrijn; Thorrez, Lieven; van den Berg, Robert A; Smilde, Age K; Van Mechelen, Iven

    2015-01-01

    Experiments in which the effect of combined manipulations is compared with the effects of their pure constituents have received a great deal of attention. Examples include the study of combination therapies and the comparison of double and single knockout model organisms. Often the effect of the combined manipulation is not a mere addition of the effects of its constituents, with quite different forms of interplay between the constituents being possible. Yet, a well-formalized taxonomy of possible forms of interplay is lacking, let alone a statistical methodology to test for their presence in empirical data. Starting from a taxonomy of a broad range of forms of interplay between constituents of a combined manipulation, we propose a sound statistical hypothesis testing framework to test for the presence of each particular form of interplay. We illustrate the framework with analyses of public gene expression data on the combined treatment of dendritic cells with curdlan and GM-CSF and show that these lead to valuable insights into the mode of action of the constituent treatments and their combination. R code implementing the statistical testing procedure for microarray gene expression data is available as supplementary material. The data are available from the Gene Expression Omnibus with accession number GSE32986.
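
    The simplest form of interplay to test is departure from additivity, which can be illustrated (well short of the full taxonomy proposed above) by fitting a two-way linear model with an interaction term to simulated data:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(3)
        a = np.repeat([0, 1, 0, 1], 25)        # constituent A absent/present
        b = np.repeat([0, 0, 1, 1], 25)        # constituent B absent/present
        y = 1.0 * a + 1.0 * b + 0.8 * a * b + rng.normal(0, 1, 100)   # synergy built in

        df = pd.DataFrame({"y": y, "A": a, "B": b})
        fit = smf.ols("y ~ A * B", data=df).fit()
        print(fit.pvalues["A:B"])              # small p-value: effect beyond a mere sum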

  16. Combining Multiple Hypothesis Testing with Machine Learning Increases the Statistical Power of Genome-wide Association Studies

    NASA Astrophysics Data System (ADS)

    Mieth, Bettina; Kloft, Marius; Rodríguez, Juan Antonio; Sonnenburg, Sören; Vobruba, Robin; Morcillo-Suárez, Carlos; Farré, Xavier; Marigorta, Urko M.; Fehr, Ernst; Dickhaus, Thorsten; Blanchard, Gilles; Schunk, Daniel; Navarro, Arcadi; Müller, Klaus-Robert

    2016-11-01

    The standard approach to the analysis of genome-wide association studies (GWAS) is based on testing each position in the genome individually for statistical significance of its association with the phenotype under investigation. To improve the analysis of GWAS, we propose a combination of machine learning and statistical testing that takes correlation structures within the set of SNPs under investigation in a mathematically well-controlled manner into account. The novel two-step algorithm, COMBI, first trains a support vector machine to determine a subset of candidate SNPs and then performs hypothesis tests for these SNPs together with an adequate threshold correction. Applying COMBI to data from a WTCCC study (2007) and measuring performance as replication by independent GWAS published within the 2008-2015 period, we show that our method outperforms ordinary raw p-value thresholding as well as other state-of-the-art methods. COMBI presents higher power and precision than the examined alternatives while yielding fewer false (i.e. non-replicated) and more true (i.e. replicated) discoveries when its results are validated on later GWAS studies. More than 80% of the discoveries made by COMBI upon WTCCC data have been validated by independent studies. Implementations of the COMBI method are available as a part of the GWASpi toolbox 2.0.

  17. Permutation-based inference for the AUC: A unified approach for continuous and discontinuous data.

    PubMed

    Pauly, Markus; Asendorf, Thomas; Konietschke, Frank

    2016-11-01

    We investigate rank-based studentized permutation methods for the nonparametric Behrens-Fisher problem, that is, inference methods for the area under the ROC curve. We hereby prove that the studentized permutation distribution of the Brunner-Munzel rank statistic is asymptotically standard normal, even under the alternative, thus incidentally providing the hitherto missing theoretical foundation for the Neubert and Brunner studentized permutation test. In particular, we not only show its consistency, but also show that confidence intervals for the underlying treatment effects can be computed by inverting this permutation test. In addition, we derive permutation-based range-preserving confidence intervals. Extensive simulation studies show that the permutation-based confidence intervals appear to maintain the preassigned coverage probability quite accurately (even for rather small sample sizes). For a convenient application of the proposed methods, a freely available software package for the statistical software R has been developed. A real data example illustrates the application. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
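
    A bare-bones sketch of a studentized permutation approach in this spirit, using the Brunner-Munzel statistic available in SciPy and comparing the observed value with its permutation distribution (range-preserving confidence intervals and the exact studentization of the paper are not reproduced here):

        import numpy as np
        from scipy import stats

        def bm_permutation_pvalue(x, y, n_perm=5000, seed=0):
            rng = np.random.default_rng(seed)
            obs = stats.brunnermunzel(x, y).statistic
            pooled = np.concatenate([x, y])
            n_x = len(x)
            perm = np.empty(n_perm)
            for i in range(n_perm):
                rng.shuffle(pooled)
                perm[i] = stats.brunnermunzel(pooled[:n_x], pooled[n_x:]).statistic
            return np.mean(np.abs(perm) >= abs(obs))   # two-sided permutation p-value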

  18. A study of tensile test on open-cell aluminum foam sandwich

    NASA Astrophysics Data System (ADS)

    Ibrahim, N. A.; Hazza, M. H. F. Al; Adesta, E. Y. T.; Abdullah Sidek, Atiah Bt.; Endut, N. A.

    2018-01-01

    Aluminum foam sandwich (AFS) panels are increasingly used in various industries because of their light weight. AFS is also known for its excellent stiffness-to-weight ratio and high energy absorption. Because of these advantages, many researchers have shown an interest in aluminum foam materials and in expanding the use of foam structures. However, a gap still needs to be filled in order to develop reliable data on the mechanical behavior of AFS for different parameters and analysis methods; few researchers have focused on open-cell aluminum foam and statistical analysis. Thus, this research was conducted using an open-cell aluminum foam core of grade 6101 with aluminum sheet skins, tested under tension. The data were analyzed using a full factorial design in JMP statistical analysis software (version 11). The ANOVA results show a significant model value of less than 0.500, while the scatter diagram and 3D surface profiler plot show that skin thickness has a significant impact on the stress/strain values compared with core thickness.

  19. The Effect of Communication Skills Training on Quality of Care, Self-Efficacy, Job Satisfaction and Communication Skills Rate of Nurses in Hospitals of Tabriz, Iran

    PubMed Central

    Khodadadi, Esmail; Ebrahimi, Hossein; Moghaddasian, Sima; Babapour, Jalil

    2013-01-01

    Introduction: Having an effective relationship with the patient in the process of treatment is essential. Nurses must have communication skills in order to establish effective relationships with the patients. This study evaluated the impact of communication skills training on quality of care, self-efficacy, job satisfaction and communication skills of nurses. Methods: This is an experimental study with a control group, conducted in 2012. The study sample consisted of 73 nurses who work in hospitals of Tabriz; they were selected by a proportional randomization method. The intervention was conducted only on the experimental group. In order to measure the quality of care, 160 patients who had received care from the nurses participated in this study. The data were analyzed by SPSS (ver. 13). Results: Comparing the mean scores of communication skills showed a statistically significant difference between the control and experimental groups after the intervention. The paired t-test showed a statistically significant difference in the experimental group before and after the intervention. The independent t-test showed a statistically significant difference in the quality of care between patients of the control and experimental groups after the intervention. Conclusion: The results showed that communication skills training can improve nurses' communication skills and elevate the quality of nursing care. Therefore, in order to improve the quality of nursing care, it is recommended that communication skills be established and taught as a separate course in nursing education. PMID:25276707

  20. Assessment of statistical education in Indonesia: Preliminary results and initiation to simulation-based inference

    NASA Astrophysics Data System (ADS)

    Saputra, K. V. I.; Cahyadi, L.; Sembiring, U. A.

    2018-01-01

    In this paper, we assess our traditional elementary statistics education and also introduce elementary statistics with simulation-based inference. To assess our statistical class, we adapt the well-known CAOS (Comprehensive Assessment of Outcomes in Statistics) test, which serves as an external measure of students' basic statistical literacy. This test is generally accepted as a measure of statistical literacy. We also introduce a new teaching method in the elementary statistics class. In contrast to the traditional elementary statistics course, we introduce a simulation-based inference method for conducting hypothesis testing. The literature has shown that this new teaching method works very well in increasing students' understanding of statistics.
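
    A tiny example of the simulation-based inference idea mentioned above: instead of a z-test for a proportion, the null distribution is built by simulation and the p-value is the fraction of simulated samples at least as extreme as the observed one (numbers invented for illustration).

        import numpy as np

        rng = np.random.default_rng(4)
        n, observed_heads, p_null = 50, 34, 0.5
        sims = rng.binomial(n, p_null, size=10_000)    # simulate the null many times
        p_value = np.mean(sims >= observed_heads)      # one-sided simulation p-value
        print(p_value)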

  1. Nursing students' attitudes toward statistics: Effect of a biostatistics course and association with examination performance.

    PubMed

    Kiekkas, Panagiotis; Panagiotarou, Aliki; Malja, Alvaro; Tahirai, Daniela; Zykai, Rountina; Bakalis, Nick; Stefanopoulos, Nikolaos

    2015-12-01

    Although statistical knowledge and skills are necessary for promoting evidence-based practice, health sciences students have expressed anxiety about statistics courses, which may hinder their learning of statistical concepts. To evaluate the effects of a biostatistics course on nursing students' attitudes toward statistics and to explore the association between these attitudes and their performance in the course examination. One-group quasi-experimental pre-test/post-test design. Undergraduate nursing students of the fifth or higher semester of studies, who attended a biostatistics course. Participants were asked to complete the pre-test and post-test forms of The Survey of Attitudes Toward Statistics (SATS)-36 scale at the beginning and end of the course respectively. Pre-test and post-test scale scores were compared, while correlations between post-test scores and participants' examination performance were estimated. Among 156 participants, post-test scores of the overall SATS-36 scale and of the Affect, Cognitive Competence, Interest and Effort components were significantly higher than pre-test ones, indicating that the course was followed by more positive attitudes toward statistics. Among 104 students who participated in the examination, higher post-test scores of the overall SATS-36 scale and of the Affect, Difficulty, Interest and Effort components were significantly but weakly correlated with higher examination performance. Students' attitudes toward statistics can be improved through appropriate biostatistics courses, while positive attitudes contribute to higher course achievements and possibly to improved statistical skills in later professional life. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. [Therapy of organic brain syndrome with nicergoline given once a day].

    PubMed

    Ladurner, G; Erhart, P; Erhart, C; Scheiber, V

    1991-01-01

    In a double-blind, active-controlled study 30 patients with mild to moderate multiinfarct dementia diagnosed according to DSM III definition were treated by either 20 mg nicergoline or 4.5 mg co-dergocrine mesilate once daily during eight weeks. Therapeutic effects on symptoms of the organic brain syndrome were quantitatively measured by standardized psychological and psychometric methods evaluating cognitive and thymopsychic functions. Main criteria, which were tested by inferential analysis, were SCAG total score (Sandoz Clinical Assessment Geriatric Scale), SCAG overall impression and the AD Test (alphabetischer Durchstreichtest). Other results were assessed by descriptive statistics. Both treatments resulted in a statistically significant improvement in most of the tested functions. The effects of 4.5 mg co-dergocrine mesilate s.i.d. were in accordance with published results. Although differing slightly with respect to individual results 20 mg of nicergoline once daily showed the same efficacy on the whole.

  3. Fracture load and failure analysis of zirconia single crowns veneered with pressed and layered ceramics after chewing simulation.

    PubMed

    Stawarczyk, Bogna; Ozcan, Mutlu; Roos, Malgorzata; Trottmann, Albert; Hämmerle, Christoph H F

    2011-01-01

    This study determined the fracture load of zirconia crowns veneered with four overpressed and four layered ceramics after chewing simulation. The veneered zirconia crowns were cemented and subjected to chewing cycling. Subsequently, the specimens were loaded at an angle of 45° in a Universal Testing Machine to determine the fracture load. One-way ANOVA, followed by a post-hoc Scheffé test, t-test and Weibull statistic were performed. Overpressed crowns showed significantly lower fracture load (543-577 N) compared to layered ones (805-1067 N). No statistical difference was found between the fracture loads within the overpressed group. Within the layered groups, LV (1067 N) presented significantly higher results compared to LC (805 N). The mean values of all other groups were not significantly different. Single zirconia crowns veneered with overpressed ceramics exhibited lower fracture load than those of the layered ones after chewing simulation.

  4. A comment on measuring the Hurst exponent of financial time series

    NASA Astrophysics Data System (ADS)

    Couillard, Michel; Davison, Matt

    2005-03-01

    A fundamental hypothesis of quantitative finance is that stock price variations are independent and can be modeled using Brownian motion. In recent years, it was proposed to use rescaled range analysis and its characteristic value, the Hurst exponent, to test for independence in financial time series. Theoretically, independent time series should be characterized by a Hurst exponent of 1/2. However, finite Brownian motion data sets will always give a value of the Hurst exponent larger than 1/2 and without an appropriate statistical test such a value can mistakenly be interpreted as evidence of long term memory. We obtain a more precise statistical significance test for the Hurst exponent and apply it to real financial data sets. Our empirical analysis shows no long-term memory in some financial returns, suggesting that Brownian motion cannot be rejected as a model for price dynamics.

  5. Enabling High-Energy, High-Voltage Lithium-Ion Cells: Standardization of Coin-Cell Assembly, Electrochemical Testing, and Evaluation of Full Cells

    DOE PAGES

    Long, Brandon R.; Rinaldo, Steven G.; Gallagher, Kevin G.; ...

    2016-11-09

    Coin-cells are often the test format of choice for laboratories engaged in battery research and development as they provide a convenient platform for rapid testing of new materials on a small scale. However, reliable, reproducible data via the coin-cell format is inherently difficult, particularly in the full-cell configuration. In addition, statistical evaluation to prove the consistency and reliability of such data is often neglected. Herein we report on several studies aimed at formalizing physical process parameters and coin-cell construction related to full cells. Statistical analysis and performance benchmarking approaches are advocated as a means to more confidently track changes inmore » cell performance. Finally, we show that trends in the electrochemical data obtained from coin-cells can be reliable and informative when standardized approaches are implemented in a consistent manner.« less

  6. Extractive-spectrophotometric determination of disopyramide and irbesartan in their pharmaceutical formulation

    NASA Astrophysics Data System (ADS)

    Abdellatef, Hisham E.

    2007-04-01

    Picric acid, bromocresol green, bromothymol blue, cobalt thiocyanate and molybdenum(V) thiocyanate have been tested as spectrophotometric reagents for the determination of disopyramide and irbesartan. Reaction conditions have been optimized to obtain coloured comoplexes of higher sensitivity and longer stability. The absorbance of ion-pair complexes formed were found to increases linearity with increases in concentrations of disopyramide and irbesartan which were corroborated by correction coefficient values. The developed methods have been successfully applied for the determination of disopyramide and irbesartan in bulk drugs and pharmaceutical formulations. The common excipients and additives did not interfere in their determination. The results obtained by the proposed methods have been statistically compared by means of student t-test and by the variance ratio F-test. The validity was assessed by applying the standard addition technique. The results were compared statistically with the official or reference methods showing a good agreement with high precision and accuracy.

  7. Using a cross section to train veterinary students to visualize anatomical structures in three dimensions

    NASA Astrophysics Data System (ADS)

    Provo, Judy; Lamar, Carlton; Newby, Timothy

    2002-01-01

    A cross section was used to enhance three-dimensional knowledge of anatomy of the canine head. All veterinary students in two successive classes (n = 124) dissected the head; experimental groups also identified structures on a cross section of the head. A test assessing spatial knowledge of the head generated 10 dependent variables from two administrations. The test had content validity and statistically significant interrater and test-retest reliability. A live-dog examination generated one additional dependent variable. Analysis of covariance controlling for performance on course examinations and quizzes revealed no treatment effect. Including spatial skill as a third covariate revealed a statistically significant effect of spatial skill on three dependent variables. Men initially had greater spatial skill than women, but spatial skills were equal after 8 months. A qualitative analysis showed the positive impact of this experience on participants. Suggestions for improvement and future research are discussed.

  8. A global logrank test for adaptive treatment strategies based on observational studies.

    PubMed

    Li, Zhiguo; Valenstein, Marcia; Pfeiffer, Paul; Ganoczy, Dara

    2014-02-28

    In studying adaptive treatment strategies, a natural question that is of paramount interest is whether there is any significant difference among all possible treatment strategies. When the outcome variable of interest is time-to-event, we propose an inverse probability weighted logrank test for testing the equivalence of a fixed set of pre-specified adaptive treatment strategies based on data from an observational study. The weights take into account both the possible selection bias in an observational study and the fact that the same subject may be consistent with more than one treatment strategy. The asymptotic distribution of the weighted logrank statistic under the null hypothesis is obtained. We show that, in an observational study where the treatment selection probabilities need to be estimated, the estimation of these probabilities does not have an effect on the asymptotic distribution of the weighted logrank statistic, as long as the estimation of the parameters in the models for these probabilities is n-consistent. Finite sample performance of the test is assessed via a simulation study. We also show in the simulation that the test can be pretty robust to misspecification of the models for the probabilities of treatment selection. The method is applied to analyze data on antidepressant adherence time from an observational database maintained at the Department of Veterans Affairs' Serious Mental Illness Treatment Research and Evaluation Center. Copyright © 2013 John Wiley & Sons, Ltd.

  9. Distinguishing synchronous and time-varying synergies using point process interval statistics: motor primitives in frog and rat

    PubMed Central

    Hart, Corey B.; Giszter, Simon F.

    2013-01-01

    We present and apply a method that uses point process statistics to discriminate the forms of synergies in motor pattern data, prior to explicit synergy extraction. The method uses electromyogram (EMG) pulse peak timing or onset timing. Peak timing is preferable in complex patterns where pulse onsets may be overlapping. An interval statistic derived from the point processes of EMG peak timings distinguishes time-varying synergies from synchronous synergies (SS). Model data shows that the statistic is robust for most conditions. Its application to both frog hindlimb EMG and rat locomotion hindlimb EMG show data from these preparations is clearly most consistent with synchronous synergy models (p < 0.001). Additional direct tests of pulse and interval relations in frog data further bolster the support for synchronous synergy mechanisms in these data. Our method and analyses support separated control of rhythm and pattern of motor primitives, with the low level execution primitives comprising pulsed SS in both frog and rat, and both episodic and rhythmic behaviors. PMID:23675341

  10. Vehicle occupants' exposure to aromatic volatile organic compounds while commuting on an urban-suburban route in Korea.

    PubMed

    Jo, W K; Choi, S J

    1996-08-01

    This study identified in-auto and in-bus exposures to six selected aromatic volatile organic compounds (VOCs) for commutes on an urban-suburban route in Korea. A bus-service route was selected to include three segments of Taegu and one suburban segment (Hayang) to satisfy the criteria specified for this study. This study indicates that motor vehicle exhaust and evaporative emissions are major sources of both auto and bus occupants' exposures to aromatic VOCs in both Taegu and Hayang. A nonparametric statistical test (Wilcoxon test) showed that in-auto benzene levels were significantly different from in-bus benzene levels for both urban-segment and suburban-segment commutes. The test also showed that the benzene-level difference between urban-segment and suburban-segment commutes was significant for both autos and buses. An F-test showed the same statistical results for the comparison of the summed in-vehicle concentration of the six target VOCs (benzene, toluene, ethylbenzene, and o,m,p-xylenes) as those for the comparison of the in-vehicle benzene concentration. On the other hand, the in-vehicle benzene level only and the sum were not significantly different among the three urban-segment commutes and between the morning and evening commutes. The in-auto VOC concentrations were intermediate between the results for the Los Angeles and Boston. The in-bus VOC concentrations were about one-tenth of the Taipei, Taiwan results.

  11. Comparative Evaluation of Microleakage Between Nano-Ionomer, Giomer and Resin Modified Glass Ionomer Cement in Class V Cavities- CLSM Study.

    PubMed

    Bollu, Indira Priyadarshini; Hari, Archana; Thumu, Jayaprakash; Velagula, Lakshmi Deepa; Bolla, Nagesh; Varri, Sujana; Kasaraneni, Srikanth; Nalli, Siva Venkata Malathi

    2016-05-01

    Marginal integrity of adhesive restorative materials provides better sealing ability for enamel and dentin and plays an important role in success of restoration in Class V cavities. Restorative material with good marginal adaptation improves the longevity of restorations. Aim of this study was to evaluate microleakage in Class V cavities which were restored with Resin Modified Glass Ionomer Cement (RMGIC), Giomer and Nano-Ionomer. This in-vitro study was performed on 60 human maxillary and mandibular premolars which were extracted for orthodontic reasons. A standard wedge shaped defect was prepared on the buccal surfaces of teeth with the gingival margin placed near Cemento Enamel Junction (CEJ). Teeth were divided into three groups of 20 each and restored with RMGIC, Giomer and Nano-Ionomer and were subjected to thermocycling. Teeth were then immersed in 0.5% Rhodamine B dye for 48 hours. They were sectioned longitudinally from the middle of cavity into mesial and distal parts. The sections were observed under Confocal Laser Scanning Microscope (CLSM) to evaluate microleakage. Depth of dye penetration was measured in millimeters. The data was analysed using the Kruskal Wallis test. Pair wise comparison was done with Mann Whitney U Test. A p-value<0.05 is taken as statistically significant. Nano-Ionomer showed less microleakage which was statistically significant when compared to Giomer (p=0.0050). Statistically no significant difference was found between Nano Ionomer and RMGIC (p=0.3550). There was statistically significant difference between RMGIC and Giomer (p=0.0450). Nano-Ionomer and RMGIC showed significantly less leakage and better adaptation than Giomer and there was no statistically significant difference between Nano-Ionomer and RMGIC.

  12. PROMISE: a tool to identify genomic features with a specific biologically interesting pattern of associations with multiple endpoint variables

    PubMed Central

    Pounds, Stan; Cheng, Cheng; Cao, Xueyuan; Crews, Kristine R.; Plunkett, William; Gandhi, Varsha; Rubnitz, Jeffrey; Ribeiro, Raul C.; Downing, James R.; Lamba, Jatinder

    2009-01-01

    Motivation: In some applications, prior biological knowledge can be used to define a specific pattern of association of multiple endpoint variables with a genomic variable that is biologically most interesting. However, to our knowledge, there is no statistical procedure designed to detect specific patterns of association with multiple endpoint variables. Results: Projection onto the most interesting statistical evidence (PROMISE) is proposed as a general procedure to identify genomic variables that exhibit a specific biologically interesting pattern of association with multiple endpoint variables. Biological knowledge of the endpoint variables is used to define a vector that represents the biologically most interesting values for statistics that characterize the associations of the endpoint variables with a genomic variable. A test statistic is defined as the dot-product of the vector of the observed association statistics and the vector of the most interesting values of the association statistics. By definition, this test statistic is proportional to the length of the projection of the observed vector of correlations onto the vector of most interesting associations. Statistical significance is determined via permutation. In simulation studies and an example application, PROMISE shows greater statistical power to identify genes with the interesting pattern of associations than classical multivariate procedures, individual endpoint analyses or listing genes that have the pattern of interest and are significant in more than one individual endpoint analysis. Availability: Documented R routines are freely available from www.stjuderesearch.org/depts/biostats and will soon be available as a Bioconductor package from www.bioconductor.org. Contact: stanley.pounds@stjude.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19528086

  13. Spontaneous genetic damage in the tegu lizard (Tupinambis merianae): the effect of age.

    PubMed

    Schaumburg, Laura G; Poletta, Gisela L; Siroski, Pablo A; Mudry, Marta D

    2014-05-15

    Several studies indicate that certain factors such as age, sex or nutritional status among others, may affect the level of DNA damage, both induced and spontaneous, so it is very important to consider them for a more accurate interpretation of the findings. The aim of this study was to analyze the influence of age, sex, and nest of origin on spontaneous genetic damage of Tupinambis merianae determined by the comet assay (CA) and the micronucleus (MN) test, in order to improve reference data for future in vivo studies of xenobiotics exposure in this species. Sixty-five tegu lizards of three different ages: newborns (NB), juveniles (JUV) and adults (AD), both sexes and from different nests of origin were used. Blood samples were collected from the caudal vein of all animals and the MN test and CA were applied on peripheral blood erythrocytes to determine basal frequency of MN (BFMN) and basal damage index (BDI). The comparison between age groups showed statistically significant differences in the BFMN and BDI (p<0.05). NB animals showed significantly higher BDI values in relation to JUV and AD (p<0.016), but no statistically differences were found between the latter two. NB showed lower BFMN respect to other age groups, being statistically significant only when compared to AD (p<0.016). BFMN or BDI showed no statistically significant differences between sexes or nests of origin (p>0.05). A weak negative relationship was found only between BFMN and weight of NB tegu lizard (p=0.014; R(2)=0.245). Basal values of genetic damage obtained with both biomarkers in the tegu lizard evidenced that age is an intrinsic factor that should be taken into account to avoid misunderstanding of the results in future biomonitoring studies. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Normality of raw data in general linear models: The most widespread myth in statistics

    USGS Publications Warehouse

    Kery, Marc; Hatfield, Jeff S.

    2003-01-01

    In years of statistical consulting for ecologists and wildlife biologists, by far the most common misconception we have come across has been the one about normality in general linear models. These comprise a very large part of the statistical models used in ecology and include t tests, simple and multiple linear regression, polynomial regression, and analysis of variance (ANOVA) and covariance (ANCOVA). There is a widely held belief that the normality assumption pertains to the raw data rather than to the model residuals. We suspect that this error may also occur in countless published studies, whenever the normality assumption is tested prior to analysis. This may lead to the use of nonparametric alternatives (if there are any), when parametric tests would indeed be appropriate, or to use of transformations of raw data, which may introduce hidden assumptions such as multiplicative effects on the natural scale in the case of log-transformed data. Our aim here is to dispel this myth. We very briefly describe relevant theory for two cases of general linear models to show that the residuals need to be normally distributed if tests requiring normality are to be used, such as t and F tests. We then give two examples demonstrating that the distribution of the response variable may be nonnormal, and yet the residuals are well behaved. We do not go into the issue of how to test normality; instead we display the distributions of response variables and residuals graphically.

  15. “Magnitude-based Inference”: A Statistical Review

    PubMed Central

    Welsh, Alan H.; Knight, Emma J.

    2015-01-01

    ABSTRACT Purpose We consider “magnitude-based inference” and its interpretation by examining in detail its use in the problem of comparing two means. Methods We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how “magnitude-based inference” is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. Results and Conclusions We show that “magnitude-based inference” is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with “magnitude-based inference” and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using “magnitude-based inference,” a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis. PMID:25051387

  16. Comparative Evaluation of Marginal Adaptation of BiodentineTM and Other Commonly Used Root End Filling Materials-An Invitro Study

    PubMed Central

    P.V., Ravichandra; Vemisetty, Harikumar; K., Deepthi; Reddy S, Jayaprada; D., Ramkiran; Krishna M., Jaya Nagendra; Malathi, Gita

    2014-01-01

    Aim: The purpose of this investigation was to evaluate the marginal adaptation of three root-end filling materials Glass ionomer cement, Mineral trioxide aggregate and BiodentineTM. Methodology: Thirty human single-rooted teeth were resected 3 mm from the apex. Root-end cavities were then prepared using an ultrasonic tip and filled with one of the following materials Glass ionomer cement (GIC), Mineral trioxide aggregate (MTA) and a bioactive cement BiodentineTM. The apical portions of the roots were then sectioned to obtain three 1 mm thick transversal sections. Confocal laser scanning microscopy (CLSM) was used to determine area of gaps and adaptation of the root-end filling materials with the dentin. The Post hoc test, a multiple comparison test was used for statistical data analysis. Results: Statistical analysis showed lowest marginal gaps (11143.42±967.753m2) and good marginal adaptation with BiodentineTM followed by MTA (22300.97±3068.883m2) and highest marginal gaps with GIC (33388.17±12155.903m2) which were statistically significant (p<0.0001). Conclusion: A new root end filling material BiodentineTM showed better marginal adaptation than commonly used root end filling materials PMID:24783148

  17. Volumetric analysis of hand, reciprocating and rotary instrumentation techniques in primary molars using spiral computed tomography: An in vitro comparative study.

    PubMed

    Jeevanandan, Ganesh; Thomas, Eapen

    2018-01-01

    This present study was conducted to analyze the volumetric change in the root canal space and instrumentation time between hand files, hand files in reciprocating motion, and three rotary files in primary molars. One hundred primary mandibular molars were randomly allotted to one of the five groups. Instrumentation was done using Group I; nickel-titanium (Ni-Ti) hand file, Group II; Ni-Ti hand files in reciprocating motion, Group III; Race rotary files, Group IV; prodesign pediatric rotary files, and Group V; ProTaper rotary files. The mean volumetric changes were assessed using pre- and post-operative spiral computed tomography scans. Instrumentation time was recorded. Statistical analysis to access intergroup comparison for mean canal volume and instrumentation time was done using Bonferroni-adjusted Mann-Whitney test and Mann-Whitney test, respectively. Intergroup comparison of mean canal volume showed statistically significant difference between Groups II versus IV, Groups III versus V, and Groups IV versus V. Intergroup comparison of mean instrumentation time showed statistically significant difference among all the groups except Groups IV versus V. Among the various instrumentation techniques available, rotary instrumentation is the considered to be the better instrumentation technique for canal preparation in primary teeth.

  18. Efficacy and safety of 3% minoxidil versus combined 3% minoxidil / 0.1% finasteride in male pattern hair loss: a randomized, double-blind, comparative study.

    PubMed

    Tanglertsampan, Chuchai

    2012-10-01

    Topical minoxidil and oral finasteride have been used to treat men with androgenetic alopecia (AGA). There are concerns about side effects of oral finasteride especially erectile dysfunction. To compare the efficacy and safety of the 24 weeks application of 3% minoxidil lotion (MNX) versus combined 3% minoxidil and 0.1% finasteride lotion (MFX) in men with AGA. Forty men with AGA were randomized treated with MNX or MFX. Efficacy was evaluated by hair counts and global photographic assessment. Safety assessment was performed by history and physical examination. At week 24, hair counts were increased from baseline in both groups. However paired t-test revealed statistical difference only in MFX group (p = 0.044). Unpaired t-test revealed no statistical difference between two groups with respect to change of hair counts at 24 weeks from baseline (p = 0.503). MFX showed significantly higher efficacy than MNX by global photographic assessment (p = 0.003). There was no significant difference in side effects between both groups. Although change of hair counts was not statistically different between two groups, global photographic assessment showed significantly greater improvement in the MFX group than the MNX group. There was no sexual side effect. MFX may be a safe and effective treatment option.

  19. Random fractional ultrapulsed CO2 resurfacing of photodamaged facial skin: long-term evaluation.

    PubMed

    Tretti Clementoni, Matteo; Galimberti, Michela; Tourlaki, Athanasia; Catenacci, Maximilian; Lavagno, Rosalia; Bencini, Pier Luca

    2013-02-01

    Although numerous papers have recently been published on ablative fractional resurfacing, there is a lack of information in literature on very long-term results. The aim of this retrospective study is to evaluate the efficacy, adverse side effects, and long-term results of a random fractional ultrapulsed CO2 laser on a large population with photodamaged facial skin. Three hundred twelve patients with facial photodamaged skin were enrolled and underwent a single full-face treatment. Six aspects of photodamaged skin were recorded using a 5 point scale at 3, 6, and 24 months after the treatment. The results were compared with a non-parametric statistical test, the Wilcoxon's exact test. Three hundred one patients completed the study. All analyzed features showed a significant statistical improvement 3 months after the procedure. Three months later all features, except for pigmentations, once again showed a significant statistical improvement. Results after 24 months were similar to those assessed 18 months before. No long-term or other serious complications were observed. From the significant number of patients analyzed, long-term results demonstrate not only how fractional ultrapulsed CO2 resurfacing can achieve good results on photodamaged facial skin but also how these results can be considered stable 2 years after the procedure.

  20. Women victims of intentional homicide in Italy: New insights comparing Italian trends to German and U.S. trends, 2008-2014.

    PubMed

    Terranova, Claudio; Zen, Margherita

    2018-01-01

    National statistics on female homicide could be a useful tool to evaluate the phenomenon and plan adequate strategies to prevent and reduce this crime. The aim of the study is to contribute to the analysis of intentional female homicides in Italy by comparing Italian trends to German and United States trends from 2008 to 2014. This is a population study based on data deriving primarily from national and European statistical institutes, from the U.S. Federal Bureau of Investigation's Uniform Crime Reporting and from the National Center for Health Statistics. Data were analyzed in relation to trends and age by Chi-square test, Student's t-test and linear regression. Results show that female homicides, unlike male homicides, remained stable in the three countries. Regression analysis showed a higher risk for female homicide in all age groups in the U.S. Middle-aged women result at higher risk, and the majority of murdered women are killed by people they know. These results confirm previous findings and suggest the need to focus also in Italy on preventive strategies to reduce those precipitating factors linked to violence and present in the course of a relationship or within the family. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  1. Dentascan – Is the Investment Worth the Hype ???

    PubMed Central

    Shah, Monali A; Shah, Sneha S; Dave, Deepak

    2013-01-01

    Background: Open Bone Measurement (OBM) and Bone Sounding (BS) are most reliable but invasive clinical methods for Alveolar Bone Level (ABL) assessment, causing discomfort to the patient. Routinely, IOPAs & OPGs are the commonest radiographic techniques used, which tend to underestimate bone loss and obscure buccal/lingual defects. Novel technique like dentascan (CBCT) eliminates this limitation by giving images in 3 planes – sagittal, coronal and axial. Aim: To compare & correlate non-invasive 3D radiographic technique of Dentascan with BS & OBM, and IOPA and OPG, in assessing the ABL. Settings and Design: Cross-sectional diagnostic study. Material and Methods: Two hundred and five sites were subjected to clinical and radiographic diagnostic techniques. Relative distance between the alveolar bone crest and reference wire was measured. All the measurements were compared and tested against the OBM. Statistical Analysis: Student’s t-test, ANOVA, Pearson correlation coefficient. Results: There is statistically significant difference between dentascan and OBM, only BS showed agreement with OBM (p < 0.05). Dentascan weakly correlated with OBM & BS lingually.Rest all techniques showed statistically significant difference between them (p= 0.00). Conclusion: Within the limitations of this study, only BS seems to be comparable with OBM with no superior result of Dentascan over the conventional techniques, except for lingual measurements. PMID:24551722

  2. Statistical Method to Overcome Overfitting Issue in Rational Function Models

    NASA Astrophysics Data System (ADS)

    Alizadeh Moghaddam, S. H.; Mokhtarzade, M.; Alizadeh Naeini, A.; Alizadeh Moghaddam, S. A.

    2017-09-01

    Rational function models (RFMs) are known as one of the most appealing models which are extensively applied in geometric correction of satellite images and map production. Overfitting is a common issue, in the case of terrain dependent RFMs, that degrades the accuracy of RFMs-derived geospatial products. This issue, resulting from the high number of RFMs' parameters, leads to ill-posedness of the RFMs. To tackle this problem, in this study, a fast and robust statistical approach is proposed and compared to Tikhonov regularization (TR) method, as a frequently-used solution to RFMs' overfitting. In the proposed method, a statistical test, namely, significance test is applied to search for the RFMs' parameters that are resistant against overfitting issue. The performance of the proposed method was evaluated for two real data sets of Cartosat-1 satellite images. The obtained results demonstrate the efficiency of the proposed method in term of the achievable level of accuracy. This technique, indeed, shows an improvement of 50-80% over the TR.

  3. Quantifying variation in speciation and extinction rates with clade data.

    PubMed

    Paradis, Emmanuel; Tedesco, Pablo A; Hugueny, Bernard

    2013-12-01

    High-level phylogenies are very common in evolutionary analyses, although they are often treated as incomplete data. Here, we provide statistical tools to analyze what we name "clade data," which are the ages of clades together with their numbers of species. We develop a general approach for the statistical modeling of variation in speciation and extinction rates, including temporal variation, unknown variation, and linear and nonlinear modeling. We show how this approach can be generalized to a wide range of situations, including testing the effects of life-history traits and environmental variables on diversification rates. We report the results of an extensive simulation study to assess the performance of some statistical tests presented here as well as of the estimators of speciation and extinction rates. These latter results suggest the possibility to estimate correctly extinction rate in the absence of fossils. An example with data on fish is presented. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.

  4. [Health-related behavior in a sample of Brazilian college students: gender differences].

    PubMed

    Colares, Viviane; Franca, Carolina da; Gonzalez, Emília

    2009-03-01

    This study investigated whether undergraduate students' health-risk behaviors differed according to gender. The sample consisted of 382 subjects, aged 20-29 years, from public universities in Pernambuco State, Brazil. Data were collected using the National College Health Risk Behavior Survey, previously validated in Portuguese. Descriptive and inferential statistical techniques were used. Associations were analyzed with the chi-square test or Fisher's exact test. Statistical significance was set at p < or = 0.05. In general, females engaged in the following risk behaviors less frequently than males: alcohol consumption (p = 0.005), smoking (p = 0.002), experimenting with marijuana (p = 0.002), consumption of inhalants (p < or = 0.001), steroid use (p = 0.003), carrying weapons (p = 0.001), and involvement in physical fights (p = 0.014). Meanwhile, female students displayed more concern about losing or maintaining weight, although they exercised less frequently than males. The findings thus showed statistically different health behaviors between genders. In conclusion, different approaches need to be used for the two genders.

  5. Elemental, microstructural, and mechanical characterization of high gold orthodontic brackets after intraoral aging.

    PubMed

    Hersche, Sepp; Sifakakis, Iosif; Zinelis, Spiros; Eliades, Theodore

    2017-02-01

    The purpose of the present study was to investigate the elemental composition, the microstructure, and the selected mechanical properties of high gold orthodontic brackets after intraoral aging. Thirty Incognito™ (3M Unitek, Bad Essen, Germany) lingual brackets were studied, 15 brackets as received (control group) and 15 brackets retrieved from different patients after orthodontic treatment. The surface of the wing area was examined by scanning electron microscopy (SEM). Backscattered electron imaging (BEI) was performed, and the elemental composition was determined by X-ray EDS analysis (EDX). After appropriate metallographic preparation, the mechanical properties tested were Martens hardness (HM), indentation modulus (EIT), elastic index (ηIT), and Vickers hardness (HV). These properties were determined employing instrumented indentation testing (IIT) with a Vickers indenter. The results were statistically analyzed by unpaired t-test (α=0.05). There were no statistically significant differences evidenced in surface morphology and elemental content between the control and the experimental group. These two groups of brackets showed no statistically significant difference in surface morphology. Moreover, the mean values of HM, EIT, ηIT, and HV did not reach statistical significance between the groups (p>0.05). Under the limitations of this study, it may be concluded that the surface elemental content and microstructure as well as the evaluated mechanical properties of the Incognito™ lingual brackets remain unaffected by intraoral aging.

  6. The Influence of 16-year-old Students' Gender, Mental Abilities, and Motivation on their Reading and Drawing Submicrorepresentations Achievements

    NASA Astrophysics Data System (ADS)

    Devetak, Iztok; Aleksij Glažar, Saša

    2010-08-01

    Submicrorepresentations (SMRs) are a powerful tool for identifying misconceptions of chemical concepts and for generating proper mental models of chemical phenomena in students' long-term memory during chemical education. The main purpose of the study was to determine which independent variables (gender, formal reasoning abilities, visualization abilities, and intrinsic motivation for learning chemistry) have the maximum influence on students' reading and drawing SMRs. A total of 386 secondary school students (aged 16.3 years) participated in the study. The instruments used in the study were: test of Chemical Knowledge, Test of Logical Thinking, two tests of visualization abilities Patterns and Rotations, and questionnaire on Intrinsic Motivation for Learning Science. The results show moderate, but statistically significant correlations between students' intrinsic motivation, formal reasoning abilities and chemical knowledge at submicroscopic level based on reading and drawing SMRs. Visualization abilities are not statistically significantly correlated with students' success on items that comprise reading or drawing SMRs. It can be also concluded that there is a statistically significant difference between male and female students in solving problems that include reading or drawing SMRs. Based on these statistical results and content analysis of the sample problems, several educational strategies can be implemented for students to develop adequate mental models of chemical concepts on all three levels of representations.

  7. Temperature, Not Fine Particulate Matter (PM2.5), is Causally Associated with Short-Term Acute Daily Mortality Rates: Results from One Hundred United States Cities

    PubMed Central

    Cox, Tony; Popken, Douglas; Ricci, Paolo F

    2013-01-01

    Exposures to fine particulate matter (PM2.5) in air (C) have been suspected of contributing causally to increased acute (e.g., same-day or next-day) human mortality rates (R). We tested this causal hypothesis in 100 United States cities using the publicly available NMMAPS database. Although a significant, approximately linear, statistical C-R association exists in simple statistical models, closer analysis suggests that it is not causal. Surprisingly, conditioning on other variables that have been extensively considered in previous analyses (usually using splines or other smoothers to approximate their effects), such as month of the year and mean daily temperature, suggests that they create strong, nonlinear confounding that explains the statistical association between PM2.5 and mortality rates in this data set. As this finding disagrees with conventional wisdom, we apply several different techniques to examine it. Conditional independence tests for potential causation, non-parametric classification tree analysis, Bayesian Model Averaging (BMA), and Granger-Sims causality testing, show no evidence that PM2.5 concentrations have any causal impact on increasing mortality rates. This apparent absence of a causal C-R relation, despite their statistical association, has potentially important implications for managing and communicating the uncertain health risks associated with, but not necessarily caused by, PM2.5 exposures. PMID:23983662

  8. Comparative Evaluation of Immediate Post-Operative Sequelae after Surgical Removal of Impacted Mandibular Third Molar with or without Tube Drain - Split-Mouth Study

    PubMed Central

    Bhate, Kalyani; Dolas, RS; Kumar, SN Santhosh; Waknis, Pushkar

    2016-01-01

    Introduction Third molar surgery is one of the most common surgical procedures performed in general dentistry. Post-operative variables such as pain, swelling and trismus are major concerns after impacted mandibular third molar surgery. Use of passive tube drain is supposed to help reduce these immediate post-operative sequelae. The current study was designed to compare the effect of tube drain on immediate post-operative sequelae following impacted mandibular third molar surgery. Aim To compare the post-operative sequelae after surgical removal of impacted mandibular third molar surgery with or without tube drain. Materials and Methods Thirty patients with bilateral impacted mandibular third molars were divided into two groups: Test (with tube drain) and control (without tube drain) group. In the test group, a tube drain was inserted through the releasing incision, and kept in place for three days. The control group was left without a tube drain. The post-operative variables like, pain, swelling, and trismus were calculated after 24 hours, 72 hours, 7 days, and 15 days in both the groups and analyzed statistically using chi-square and t-test analysis. Results The test group showed lesser swelling as compared to control group, with the swelling variable showing statistically significant difference at post-operative day 3 and 7 (p≤ 0.05) in both groups. There were no statistically significant differences in pain and trismus variables in both the groups. Conclusion The use of tube drain helps to control swelling following impacted mandibular third molar surgery. However, it does not have much effect on pain or trismus. PMID:28209003

  9. Does the use of a novel self-adhesive flowable composite reduce nanoleakage?

    PubMed

    Naga, Abeer Abo El; Yousef, Mohammed; Ramadan, Rasha; Fayez Bahgat, Sherif; Alshawwa, Lana

    2015-01-01

    The aim of the study reported here was to evaluate the performance of a self-adhesive flowable composite and two self-etching adhesive systems, when subjected to cyclic loading, in preventing the nanoleakage of Class V restorations. Wedge-shape Class V cavities were prepared (4×2×2 mm [length × width × depth]) on the buccal surfaces of 90 sound human premolars. Cavities were divided randomly into three groups (n=30) according to the used adhesive (Xeno(®) V [self-etching adhesive system]) and BOND-1(®) SF (solvent-free self-etching adhesive system) in conjunction with Artiste(®) Nano Composite resin, and Fusio™ Liquid Dentin (self-adhesive flowable composite), consecutively. Each group was further divided into three subgroups (n=10): (A) control, (B) subjected to occlusal cyclic loading (90N for 5,000 cycles), and (C) subjected to occlusal cyclic loading (90N for 10,000 cycles). Teeth then were coated with nail polish up to 1 mm from the interface, immersed in 50% silver nitrate solution for 24 hours and tested for nanoleakage using the environmental scanning electron microscopy and energy dispersive analysis X-ray analysis. Data were statistically analyzed using two-way analysis of variance and Tukey's post hoc tests (P≤0.05). The Fusio Liquid Dentin group showed statistically significant lower percentages of silver penetration (0.55 μ) compared with the BOND-1 SF (3.45 μ) and Xeno V (3.82 μ) groups, which were not statistically different from each other, as they both showed higher silver penetration. Under the test conditions, the self-adhesive flowable composite provided better sealing ability. Aging of the two tested adhesive systems, as a function of cyclic loading, increased nanoleakage.

  10. Does the use of a novel self-adhesive flowable composite reduce nanoleakage?

    PubMed Central

    Naga, Abeer Abo El; Yousef, Mohammed; Ramadan, Rasha; Fayez Bahgat, Sherif; Alshawwa, Lana

    2015-01-01

    Objective The aim of the study reported here was to evaluate the performance of a self-adhesive flowable composite and two self-etching adhesive systems, when subjected to cyclic loading, in preventing the nanoleakage of Class V restorations. Methods Wedge-shape Class V cavities were prepared (4×2×2 mm [length × width × depth]) on the buccal surfaces of 90 sound human premolars. Cavities were divided randomly into three groups (n=30) according to the used adhesive (Xeno® V [self-etching adhesive system]) and BOND-1® SF (solvent-free self-etching adhesive system) in conjunction with Artiste® Nano Composite resin, and Fusio™ Liquid Dentin (self-adhesive flowable composite), consecutively. Each group was further divided into three subgroups (n=10): (A) control, (B) subjected to occlusal cyclic loading (90N for 5,000 cycles), and (C) subjected to occlusal cyclic loading (90N for 10,000 cycles). Teeth then were coated with nail polish up to 1 mm from the interface, immersed in 50% silver nitrate solution for 24 hours and tested for nanoleakage using the environmental scanning electron microscopy and energy dispersive analysis X-ray analysis. Data were statistically analyzed using two-way analysis of variance and Tukey’s post hoc tests (P≤0.05). Results The Fusio Liquid Dentin group showed statistically significant lower percentages of silver penetration (0.55 μ) compared with the BOND-1 SF (3.45 μ) and Xeno V (3.82 μ) groups, which were not statistically different from each other, as they both showed higher silver penetration. Conclusion Under the test conditions, the self-adhesive flowable composite provided better sealing ability. Aging of the two tested adhesive systems, as a function of cyclic loading, increased nanoleakage. PMID:25848318

  11. On Determining the Rise, Size, and Duration Classes of a Sunspot Cycle

    NASA Astrophysics Data System (ADS)

    Wilson, Robert M.; Hathaway, David H.; Reichmann, Edwin J.

    1996-09-01

    The behavior of ascent duration, maximum amplitude, and period for cycles 1 to 21 suggests that they are not mutually independent. Analysis of the resultant three-dimensional contingency table for cycles divided according to rise time (ascent duration), size (maximum amplitude), and duration (period) yields a chi-square statistic (= 18.59) that is larger than the test statistic (= 9.49 for 4 degrees-of-freedom at the 5-percent level of significance), thereby, inferring that the null hypothesis (mutual independence) can be rejected. Analysis of individual 2 by 2 contingency tables (based on Fisher's exact test) for these parameters shows that, while ascent duration is strongly related to maximum amplitude in the negative sense (inverse correlation) - the Waldmeier effect, it also is related (marginally) to period, but in the positive sense (direct correlation). No significant (or marginally significant) correlation is found between period and maximum amplitude. Using cycle 22 as a test case, we show that by the 12th month following conventional onset, cycle 22 appeared highly likely to be a fast-rising, larger-than-average-size cycle. Because of the inferred correlation between ascent duration and period, it also seems likely that it will have a period shorter than average length.

  12. Recurrence network measures for hypothesis testing using surrogate data: Application to black hole light curves

    NASA Astrophysics Data System (ADS)

    Jacob, Rinku; Harikrishnan, K. P.; Misra, R.; Ambika, G.

    2018-01-01

    Recurrence networks and the associated statistical measures have become important tools in the analysis of time series data. In this work, we test how effective the recurrence network measures are in analyzing real world data involving two main types of noise, white noise and colored noise. We use two prominent network measures as discriminating statistic for hypothesis testing using surrogate data for a specific null hypothesis that the data is derived from a linear stochastic process. We show that the characteristic path length is especially efficient as a discriminating measure with the conclusions reasonably accurate even with limited number of data points in the time series. We also highlight an additional advantage of the network approach in identifying the dimensionality of the system underlying the time series through a convergence measure derived from the probability distribution of the local clustering coefficients. As examples of real world data, we use the light curves from a prominent black hole system and show that a combined analysis using three primary network measures can provide vital information regarding the nature of temporal variability of light curves from different spectroscopic classes.

  13. Physical fitness profile of professional Italian firefighters: differences among age groups.

    PubMed

    Perroni, Fabrizio; Cignitti, Lamberto; Cortis, Cristina; Capranica, Laura

    2014-05-01

    Firefighters perform many tasks which require a high level of fitness and their personal safety may be compromised by the physiological aging process. The aim of the study was to evaluate strength (bench-press), power (countermovement jump), sprint (20 m) and endurance (with and without Self Contained Breathing Apparatus - S.C.B.A.) of 161 Italian firefighters recruits in relation to age groups (<25 yr; 26-30 yr; 31-35 yr; 36-40 yr; 41-42 yr). Descriptive statistics and an ANOVA were calculated to provide the physical fitness profile for each parameter and to assess differences (p < 0.05) among age groups. Anthropometric values showed an age-effect for height and BMI, while performances values showed statistical differences for strength, power, sprint tests and endurance test with S.C.B.A. Wearing the S.C.B.A., 14% of all recruits failed to complete the endurance test. We propose that the firefighters should participate in an assessment of work capacity and specific fitness programs aimed to maintain an optimal fitness level for all ages. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  14. Educational Gaming for Pharmacy Students - Design and Evaluation of a Diabetes-themed Escape Room.

    PubMed

    Eukel, Heidi N; Frenzel, Jeanne E; Cernusca, Dan

    2017-09-01

    Objective. To design an educational game that will increase third-year professional pharmacy students' knowledge of diabetes mellitus disease management and to evaluate their perceived value of the game. Methods. Faculty members created an innovative educational game, the diabetes escape room. An authentic escape room gaming environment was established through the use of a locked room, an escape time limit, and game rules within which student teams completed complex puzzles focused on diabetes disease management. To evaluate the impact, students completed a pre-test and post-test to measure the knowledge they've gained and a perception survey to identify moderating factors that could help instructors improve the game's effectiveness and utility. Results. Students showed statistically significant increases in knowledge after completion of the game. A one-sample t -test indicated that students' mean perception was statistically significantly higher than the mean value of the evaluation scale. This statically significant result proved that this gaming act offers a potential instructional benefit beyond its novelty. Conclusion. The diabetes escape room proved to be a valuable educational game that increased students' knowledge of diabetes mellitus disease management and showed a positive perceived overall value by student participants.

  15. On Determining the Rise, Size, and Duration Classes of a Sunspot Cycle

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.; Hathaway, David H.; Reichmann, Edwin J.

    1996-01-01

    The behavior of ascent duration, maximum amplitude, and period for cycles 1 to 21 suggests that they are not mutually independent. Analysis of the resultant three-dimensional contingency table for cycles divided according to rise time (ascent duration), size (maximum amplitude), and duration (period) yields a chi-square statistic (= 18.59) that is larger than the test statistic (= 9.49 for 4 degrees-of-freedom at the 5-percent level of significance), thereby, inferring that the null hypothesis (mutual independence) can be rejected. Analysis of individual 2 by 2 contingency tables (based on Fisher's exact test) for these parameters shows that, while ascent duration is strongly related to maximum amplitude in the negative sense (inverse correlation) - the Waldmeier effect, it also is related (marginally) to period, but in the positive sense (direct correlation). No significant (or marginally significant) correlation is found between period and maximum amplitude. Using cycle 22 as a test case, we show that by the 12th month following conventional onset, cycle 22 appeared highly likely to be a fast-rising, larger-than-average-size cycle. Because of the inferred correlation between ascent duration and period, it also seems likely that it will have a period shorter than average length.

  16. The Relationship Between Procrastination, Learning Strategies and Statistics Anxiety Among Iranian College Students: A Canonical Correlation Analysis

    PubMed Central

    Vahedi, Shahrum; Farrokhi, Farahman; Gahramani, Farahnaz; Issazadegan, Ali

    2012-01-01

    Objective: Approximately 66-80%of graduate students experience statistics anxiety and some researchers propose that many students identify statistics courses as the most anxiety-inducing courses in their academic curriculums. As such, it is likely that statistics anxiety is, in part, responsible for many students delaying enrollment in these courses for as long as possible. This paper proposes a canonical model by treating academic procrastination (AP), learning strategies (LS) as predictor variables and statistics anxiety (SA) as explained variables. Methods: A questionnaire survey was used for data collection and 246-college female student participated in this study. To examine the mutually independent relations between procrastination, learning strategies and statistics anxiety variables, a canonical correlation analysis was computed. Results: Findings show that two canonical functions were statistically significant. The set of variables (metacognitive self-regulation, source management, preparing homework, preparing for test and preparing term papers) helped predict changes of statistics anxiety with respect to fearful behavior, Attitude towards math and class, Performance, but not Anxiety. Conclusion: These findings could be used in educational and psychological interventions in the context of statistics anxiety reduction. PMID:24644468

  17. The Clinical Utility of Vestibular Evoked Myogenic Potentials in Patients of Benign Paroxysmal Positional Vertigo.

    PubMed

    Sreenivasan, Anuprasad; Sivaraman, Ganesan; Parida, Pradiptata Kumar; Alexander, Arun; Saxena, Sunil Kumar; Suria, Gopalakrishnan

    2015-06-01

    Vestibular Evoked Myogenic Potentials (VEMP) are an emerging tool to diagnose Benign Paroxysmal Positional Vertigo (BPPV). The clinical utility of VEMP has been reported only sparsely in the Indian literature. The aim was to study the latency and amplitude of VEMP in patients with BPPV and compare them with those of normal subjects. The study included two groups. Group one (control group) comprised 18 normal subjects. Group two (test group) comprised 15 subjects with unilateral BPPV. Subjects who fulfilled the selection criteria based on case history and audiological assessment were taken for VEMP recording. The VEMP response consists of successive positive and negative waves (p1-n1), with latency values in adults of about 13 and 23 milliseconds respectively. Data were analysed using the Statistical Package for Social Sciences (SPSS) version 12 (Chicago, IL, USA). An unpaired t-test was employed to measure the statistical difference between the control group and the test group. The differences in n23 latency and in peak-to-peak amplitude between the ipsilateral and contralateral ears of the test group were statistically significant, whereas the difference in p13 latency turned out to be statistically insignificant. It should be noted that, out of 15 patients in the test group, five patients showed only artifact trace recordings in both ears, which was considered no response. The heterogeneity of the results extended from absence of VEMP to prolongation of both p13 and n23, prolongation of p13 alone, and even side-to-side variations. An absent response from the ipsilateral ear, prolonged latency of n23 and decreased peak-to-peak amplitude (p13-n23) indicate the disease pathology. However, a larger sample size is required to draw further conclusions and to consolidate the use of VEMP in the diagnosis of BPPV.

  18. [Oral health status of women with normal and high-risk pregnancies].

    PubMed

    Chaloupka, P; Korečko, V; Turek, J; Merglová, V

    2014-01-01

    The aim of this study was to compare the oral health status of women with normal pregnancies and those with high-risk pregnancies. A total of 142 women in the third trimester of pregnancy were randomly selected for this study. The pregnant women were divided into two groups: a normal pregnancy group (group F, n = 61) and a high-risk pregnancy group (group R, n = 81). The following variables were recorded for each woman: age, general health status, DMF index, CPITN index, PBI index, amounts of Streptococcus mutans in the saliva and dental treatment needs. The data obtained were analysed statistically. The Mann-Whitney test, Kruskal-Wallis test and chi-square test were used, and p-values less than 0.05 were considered statistically significant. The two-sided t-test was used to compare the two cohorts. Women with high-risk pregnancies showed increased values in all measured indices and tests, but there were no statistically significant differences between the two groups in the DMF index, CPITN index and amounts of Streptococcus mutans present in the saliva. Statistically significant differences were detected between the two groups for the PBI index and dental treatment needs. The maximum PBI index value was 2.9 in group F and 3.8 in group R. Significant differences were also found in mean PBI values. Out of the entire study cohort, 94 women (66.2%) required dental treatment, including 52% (n = 32) of the women with normal pregnancies and 77% (n = 62) of the women with high-risk pregnancies. This study found that women with complications during pregnancy had severe gingivitis and needed more frequent dental treatment than women with normal pregnancies.

  19. Physique and Performance of Young Wheelchair Basketball Players in Relation with Classification

    PubMed Central

    Zancanaro, Carlo

    2015-01-01

    The relationships among physical characteristics, performance, and functional ability classification of younger wheelchair basketball players have been barely investigated to date. The purpose of this work was to assess anthropometry, body composition, and performance in sport-specific field tests in a national sample of Italian younger wheelchair basketball players as well as to evaluate the association of these variables with the players’ functional ability classification and game-related statistics. Several anthropometric measurements were obtained for 52 out of 91 eligible players nationwide. Performance was assessed in seven sport-specific field tests (5m sprint, 20m sprint with ball, suicide, maximal pass, pass for accuracy, spot shot and lay-ups) and game-related statistics (free-throw points scored per match, two- and three-point field-goals scored per match, and their sum). Associations between variables and predictive ability were assessed by correlation and regression analysis, respectively. Players were grouped into four Classes of increasing functional ability (A-D). One-way ANOVA with Bonferroni’s correction for multiple comparisons was used to assess differences between Classes. Sitting height and functional ability Class especially correlated with performance outcomes, but wheelchair basketball experience and skinfolds did not. Game-related statistics and sport-specific field-test scores all showed significant correlations with each other. Upper arm circumference and/or maximal pass and lay-ups test scores were able to explain 42 to 59% of the variance in game-related statistics (P<0.001). A clear difference in performance was found only between functional ability Classes A and D. Conclusion: In younger wheelchair basketball players, sitting height positively contributes to performance. The maximal pass and lay-ups tests should be carefully considered in younger wheelchair basketball training plans. Functional ability Class reflects the actual differences in performance only to a limited extent. PMID:26606681

  20. Effect of the infrastructure material on the failure behavior of prosthetic crowns.

    PubMed

    Sonza, Queli Nunes; Della Bona, Alvaro; Borba, Márcia

    2014-05-01

    To evaluate the effect of infrastructure (IS) material on the fracture behavior of prosthetic crowns. Restorations were fabricated using a metal die simulating a prepared tooth. Four groups were evaluated: YZ-C, Y-TZP (In-Ceram YZ, Vita) IS produced by CAD-CAM; IZ-C, In-Ceram Zirconia (Vita) IS produced by CAD-CAM; IZ-S, In-Ceram Zirconia (Vita) IS produced by slip-casting; MC, metal IS (control). The IS were veneered with porcelain and resin cemented to fiber-reinforced composite dies. Specimens were loaded in compression to failure using a universal testing machine. The load was applied at a 30° angle by a spherical piston in 37°C distilled water. Fractography was performed using a stereomicroscope and SEM. Data were statistically analyzed with ANOVA and Student-Newman-Keuls tests (α=0.05). Significant differences were found between groups (p=0.022). MC showed the highest mean failure load, statistically similar to YZ-C. There was no statistical difference between YZ-C, IZ-C and IZ-S. MC and YZ-C showed no catastrophic failures. IZ-C and IZ-S showed chipping and catastrophic failures. The fracture behavior was similar to that of reported clinical failures. Considering the ceramic systems evaluated, YZ-C and MC crowns present a greater fracture load and a more favorable failure mode than In-Ceram Zirconia crowns, regardless of the fabrication method (CAD-CAM or slip-cast). Copyright © 2014 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  1. Comparison of safety, efficacy and tolerability of dexibuprofen and ibuprofen in the treatment of osteoarthritis of the hip or knee.

    PubMed

    Zamani, Omid; Böttcher, Elke; Rieger, Jörg D; Mitterhuber, Johann; Hawel, Reinhold; Stallinger, Sylvia; Eller, Norbert

    2014-06-01

    In this observer-blinded, multicenter, non-inferiority study, 489 patients suffering from painful osteoarthritis of the hip or knee were included to investigate the safety and tolerability of Dexibuprofen vs. Ibuprofen powder for oral suspension. Only patients who had everyday joint pain for the past 3 months and "moderate" to "severe" global pain intensity in the involved hip/knee within the last 48 h were enrolled. The treatment period was up to 14 days with a control visit after 3 days. The test product was Dexibuprofen 400 mg powder for oral suspension (daily dose 800 mg) compared to Ibuprofen 400 mg powder for oral suspension (daily dose 1,600 mg). Gastrointestinal adverse drug reactions were reported in 8 patients (3.3 %) in the Dexibuprofen group and in 19 patients (7.8 %) in the Ibuprofen group. Statistically significant non-inferiority was shown for Dexibuprofen. Comparing both groups by a chi-square test showed a statistically significantly lower proportion of related gastrointestinal events in the Dexibuprofen group. All analyses of secondary tolerability parameters showed the same result of a significantly better safety profile in this therapy setting for Dexibuprofen compared to Ibuprofen. The sum of pain intensity, pain relief and global assessments showed no significant difference between treatment groups. In summary, the analyses revealed at least non-inferiority in terms of efficacy and a statistically significantly better safety profile for the Dexibuprofen treatment.

  2. Effect of Eye Movement Desensitization and Reprocessing (EMDR) on Depression in Patients With Myocardial Infarction (MI)

    PubMed Central

    Behnammoghadam, Mohammad; Alamdari, Ali Karam; Behnammoghadam, Aziz; Darban, Fatemeh

    2015-01-01

    Background: Coronary heart disease is the most important cause of death and disability in all communities. Depressive symptoms are frequent among post-myocardial infarction (MI) patients and may negatively affect cardiac prognosis. This study was conducted to identify the efficacy of EMDR on depression in patients with MI. Methods: This study is a clinical trial. Sixty patients with MI were selected by simple sampling and were randomly divided into experimental and control groups. To collect data, a demographic questionnaire and the Beck Depression Questionnaire were used. In the experimental group, EMDR therapy was performed in three sessions on alternate days, lasting 45–90 minutes each, during the four months after their MI. Depression levels were measured before, and one week after, EMDR therapy. Data were analyzed using the paired t-test, independent t-test, and chi-square test. Results: The mean depression level in the experimental group was 27.26 ± 6.41 before the intervention and 11.76 ± 3.71 after the intervention, a statistically significant difference (P<0.001). The mean depression level in the control group was 24.53 ± 5.81 before the intervention and 31.66 ± 6.09 after the intervention, also a statistically significant difference (P<0.001). The comparison of mean depression levels at post-treatment between the two groups showed a statistically significant difference (P<0.001). Conclusion: EMDR is an effective, useful, efficient, and non-invasive method for treating and reducing depression in patients with MI. PMID:26153191
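
    As a generic illustration of the pre/post and between-group comparisons described here (invented numbers, not the study's data or code), a paired and an independent t-test might be run in Python as follows.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        # Hypothetical depression scores, pre and post, for two groups of 30.
        exp_pre  = rng.normal(27, 6, 30)
        exp_post = rng.normal(12, 4, 30)
        ctl_post = rng.normal(31, 6, 30)

        # Paired t-test: within-group change from pre to post.
        t_paired, p_paired = stats.ttest_rel(exp_pre, exp_post)

        # Independent t-test: between-group comparison of post-treatment scores.
        t_ind, p_ind = stats.ttest_ind(exp_post, ctl_post)

        print(f"paired: t={t_paired:.2f}, p={p_paired:.4f}")
        print(f"independent: t={t_ind:.2f}, p={p_ind:.4f}")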

  3. On Mars too, expect macroweather

    NASA Astrophysics Data System (ADS)

    Boisvert, Jean-Philippe; Lovejoy, Shaun; Muller, Jan-Peter

    2015-04-01

    Terrestrial atmospheric and oceanic spectra show drastic transitions at τw ≈ 10 days and τow ≈ 1 year respectively; this has been theorized as the lifetime of planetary scale structures. For wind and temperature, the forms of the low and high frequency parts of the spectra (macroweather, weather) as well as the τw can be theoretically estimated, the latter depending notably on the solar induced turbulent energy flux. We extend the theory to other planets and test it using Viking lander and reanalysis data from Mars. When the Martian spectra are scaled by the theoretical amount, they agree very well with their terrestrial atmospheric counterparts. Although the usual interpretation of Martian atmospheric dynamics is highly mechanistic (e.g. wave and tidal explanations are invoked), trace moment analysis of the reanalysis fields shows that the statistics well respect the predictions of multiplicative cascade theories. This shows that statistical scaling can be compatible with conventional deterministic thinking. However, since we are usually interested in statistical knowledge, it is the former not the latter that is of primary interest. We discuss the implications for understanding planetary fluid dynamical systems.

  4. Ensuring Positiveness of the Scaled Difference Chi-square Test Statistic.

    PubMed

    Satorra, Albert; Bentler, Peter M

    2010-06-01

    A scaled difference test statistic T̃(d) that can be computed by hand calculation from the standard output of structural equation modeling (SEM) software was proposed in Satorra and Bentler (2001). The statistic T̃(d) is asymptotically equivalent to the scaled difference test statistic T̄(d) introduced in Satorra (2000), which requires more involved computations beyond the standard output of SEM software. The test statistic T̃(d) has been widely used in practice, but in some applications it is negative due to negativity of its associated scaling correction. Using the implicit function theorem, this note develops an improved scaling correction leading to a new scaled difference statistic T̄(d) that avoids negative chi-square values.
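
    A minimal sketch of the hand calculation behind the 2001 scaled difference statistic (the quantity whose scaling correction can turn negative) might look like the following Python function; it assumes you already have the ML chi-square values, degrees of freedom, and scaling correction factors for the nested and comparison models from your SEM software, and the example numbers are invented.

        def scaled_difference_chi2(T0, df0, c0, T1, df1, c1):
            """Satorra-Bentler (2001) scaled chi-square difference test.

            T0, df0, c0: ML chi-square, df, and scaling correction of the
                         more restrictive (nested) model.
            T1, df1, c1: the same quantities for the less restrictive model.
            Returns the scaled difference statistic and its degrees of freedom.
            Note: the denominator cd can be negative in small samples, which is
            exactly the problem the 2010 paper addresses.
            """
            df_diff = df0 - df1
            cd = (df0 * c0 - df1 * c1) / df_diff   # scaling correction of the difference
            Td = (T0 - T1) / cd                    # scaled difference chi-square
            return Td, df_diff

        # Hypothetical values for illustration only:
        print(scaled_difference_chi2(T0=120.5, df0=40, c0=1.20, T1=95.3, df1=35, c1=1.15))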

  5. Using CRANID to test the population affinity of known crania.

    PubMed

    Kallenberger, Lauren; Pilbrow, Varsha

    2012-11-01

    CRANID is a statistical program used to infer the source population of a cranium of unknown origin by comparing its cranial dimensions with a worldwide craniometric database. It has great potential for estimating ancestry in archaeological, forensic and repatriation cases. In this paper we test the validity of CRANID in classifying crania of known geographic origin. Twenty-three crania of known geographic origin but unknown sex were selected from the osteological collections of the University of Melbourne. Only 18 crania showed a good statistical match with the CRANID database. Without considering accuracy of sex allocation, 11 crania were accurately classified into major geographic regions and nine were correctly classified to the geographically closest available reference populations. Four of the five crania with a poor statistical match were nonetheless correctly allocated to major geographical regions, although none was accurately assigned to the geographically closest reference samples. We conclude that if sex allocations are overlooked, CRANID can accurately assign 39% of specimens to the geographically closest matching reference samples and 48% to major geographic regions. Better source population representation may improve goodness of fit, but known sex-differentiated samples are needed to further test the utility of CRANID. © 2012 The Authors Journal of Anatomy © 2012 Anatomical Society.

  6. Communication skills in individuals with spastic diplegia.

    PubMed

    Lamônica, Dionísia Aparecida Cusin; Paiva, Cora Sofia Takaya; Abramides, Dagma Venturini Marques; Biazon, Jamile Lozano

    2015-01-01

    To assess communication skills in children with spastic diplegia. The study included 20 subjects: 10 preschool children with spastic diplegia and 10 typically developing children matched according to gender, mental age, and socioeconomic status. Assessment procedures were the following: interviews with parents, the Stanford-Binet test, the Gross Motor Function Classification System, the Observation of Communicative Behavior protocol, the Peabody Picture Vocabulary Test, the Denver Developmental Screening Test II, and the MacArthur Communicative Development Inventory. Statistical analysis was performed using the mean, median, minimum and maximum values, and using Student's t-test, the Mann-Whitney test, and the paired t-test. Individuals with spastic diplegia, when compared to their peers of the same mental age, presented no significant difference in receptive and expressive vocabulary, fine motor skills, or the adaptive, personal-social, and language domains. The most affected area in individuals with spastic cerebral palsy was gross motor function. Participation in intervention procedures and the pairing of participants according to mental age may have brought the performance of the groups closer together. There was no statistically significant difference in the comparison between groups, indicating appropriate communication skills, although the experimental group did not behave homogeneously.

  7. Applying the Anderson-Darling test to suicide clusters: evidence of contagion at U. S. universities?

    PubMed

    MacKenzie, Donald W

    2013-01-01

    Suicide clusters at Cornell University and the Massachusetts Institute of Technology (MIT) prompted popular and expert speculation of suicide contagion. However, some clustering is to be expected in any random process. This work tested whether suicide clusters at these two universities differed significantly from those expected under a homogeneous Poisson process, in which suicides occur randomly and independently of one another. Suicide dates were collected for MIT and Cornell for 1990-2012. The Anderson-Darling statistic was used to test the goodness-of-fit of the intervals between suicides to the distribution expected under the Poisson process. Suicides at MIT were consistent with the homogeneous Poisson process, while those at Cornell showed clustering inconsistent with such a process (p = .05). The Anderson-Darling test provides a statistically powerful means to identify suicide clustering in small samples. Practitioners can use this method to test for clustering in relevant communities. The difference in clustering behavior between the two institutions suggests that more institutions should be studied to determine the prevalence of suicide clustering in universities and its causes.
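
    As a rough sketch of this kind of check (not the author's code or data), inter-event intervals can be tested against the exponential distribution implied by a homogeneous Poisson process; the event dates below are invented.

        import numpy as np
        from scipy import stats

        # Hypothetical event dates expressed as days since the start of observation.
        event_days = np.sort(np.array([12, 95, 130, 320, 355, 360, 600, 910, 1400]))
        intervals = np.diff(event_days)

        # Under a homogeneous Poisson process, inter-event intervals are exponential.
        result = stats.anderson(intervals, dist="expon")
        print("A-D statistic:", result.statistic)
        print("critical values:", result.critical_values)       # at several significance levels
        print("significance levels (%):", result.significance_level)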

  8. On the Performance of the Marginal Homogeneity Test to Detect Rater Drift.

    PubMed

    Sgammato, Adrienne; Donoghue, John R

    2018-06-01

    When constructed response items are administered repeatedly, "trend scoring" can be used to test for rater drift. In trend scoring, raters rescore responses from the previous administration. Two simulation studies evaluated the utility of Stuart's Q measure of marginal homogeneity as a way of evaluating rater drift when monitoring trend scoring. In the first study, data were generated based on trend scoring tables obtained from an operational assessment. The second study tightly controlled table margins to disentangle certain features present in the empirical data. In addition to Q , the paired t test was included as a comparison, because of its widespread use in monitoring trend scoring. Sample size, number of score categories, interrater agreement, and symmetry/asymmetry of the margins were manipulated. For identical margins, both statistics had good Type I error control. For a unidirectional shift in margins, both statistics had good power. As expected, when shifts in the margins were balanced across categories, the t test had little power. Q demonstrated good power for all conditions and identified almost all items identified by the t test. Q shows substantial promise for monitoring of trend scoring.
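
    For readers who want to see the mechanics, a bare-bones Stuart-Maxwell (marginal homogeneity) statistic for a square score-by-rescore table can be computed as in the sketch below; the table values are hypothetical and this is not the simulation code from the study.

        import numpy as np
        from scipy import stats

        def stuart_maxwell_q(table):
            """Stuart-Maxwell test of marginal homogeneity for a k x k table."""
            table = np.asarray(table, dtype=float)
            k = table.shape[0]
            d = table.sum(axis=1) - table.sum(axis=0)      # row margin minus column margin
            # Covariance matrix of the margin differences.
            S = -(table + table.T)
            np.fill_diagonal(S, table.sum(axis=1) + table.sum(axis=0) - 2 * np.diag(table))
            d, S = d[:-1], S[:-1, :-1]                     # drop the redundant last category
            q = d @ np.linalg.solve(S, d)
            p = stats.chi2.sf(q, k - 1)
            return q, p

        # Hypothetical 4-category table: rows = original scores, columns = rescored values.
        table = [[30,  5,  1, 0],
                 [ 6, 40,  7, 1],
                 [ 1,  8, 35, 4],
                 [ 0,  2,  5, 20]]
        print(stuart_maxwell_q(table))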

  9. Effects of Heterogeneity on Spatial Pattern Analysis of Wild Pistachio Trees in Zagros Woodlands, Iran

    NASA Astrophysics Data System (ADS)

    Erfanifard, Y.; Rezayan, F.

    2014-10-01

    Vegetation heterogeneity biases second-order summary statistics, e.g., Ripley's K-function, applied for spatial pattern analysis in ecology. Second-order analysis based on Ripley's K-function and related statistics (i.e., the L-function and the pair correlation function g) is widely used in ecology to develop hypotheses about underlying processes by characterizing spatial patterns of vegetation. The aim of this study was to demonstrate the effects of the underlying heterogeneity of wild pistachio (Pistacia atlantica Desf.) trees on the second-order summary statistics of point pattern analysis in a part of the Zagros woodlands, Iran. The spatial distribution of 431 wild pistachio trees was accurately mapped in a 40 ha stand in the Wild Pistachio & Almond Research Site, Fars province, Iran. Three commonly used second-order summary statistics (i.e., the K-, L-, and g-functions) were applied to analyse their spatial pattern. The two-sample Kolmogorov-Smirnov goodness-of-fit test showed that the observed pattern significantly followed an inhomogeneous Poisson process null model in the study region. The results also showed that the heterogeneous pattern of the wild pistachio trees biased the homogeneous forms of the K-, L-, and g-functions, suggesting a stronger aggregation of the trees at scales of 0-50 m than actually existed and an aggregation at scales of 150-200 m where the trees were in fact regularly distributed. Consequently, we showed that heterogeneity of point patterns may bias the results of homogeneous second-order summary statistics, and we suggest applying inhomogeneous summary statistics with related null models for spatial pattern analysis of heterogeneous vegetation.
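
    A naive estimate of Ripley's K-function (ignoring the edge correction that a real analysis such as this one would include) can be sketched in Python as follows; the point coordinates are simulated, not the mapped pistachio trees.

        import numpy as np
        from scipy.spatial.distance import pdist

        rng = np.random.default_rng(2)
        # Simulated point pattern in a 400 m x 1000 m (40 ha) window.
        width, height = 400.0, 1000.0
        points = rng.uniform([0, 0], [width, height], size=(431, 2))

        area = width * height
        n = len(points)
        d = pdist(points)                      # all pairwise distances

        def ripley_k(r):
            """Naive (uncorrected) Ripley's K estimate at distance r."""
            return area * 2 * np.sum(d <= r) / (n * (n - 1))

        for r in (10, 25, 50, 100, 200):
            # Under complete spatial randomness, K(r) is approximately pi * r^2.
            print(r, ripley_k(r), np.pi * r ** 2)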

  10. Quantification and Statistical Analysis Methods for Vessel Wall Components from Stained Images with Masson's Trichrome

    PubMed Central

    Hernández-Morera, Pablo; Castaño-González, Irene; Travieso-González, Carlos M.; Mompeó-Corredera, Blanca; Ortega-Santana, Francisco

    2016-01-01

    Purpose To develop a digital image processing method to quantify structural components (smooth muscle fibers and extracellular matrix) in the vessel wall stained with Masson’s trichrome, and a statistical method suitable for small sample sizes to analyze the results previously obtained. Methods The quantification method comprises two stages. The pre-processing stage improves tissue image appearance and the vessel wall area is delimited. In the feature extraction stage, the vessel wall components are segmented by grouping pixels with a similar color. The area of each component is calculated by normalizing the number of pixels of each group by the vessel wall area. Statistical analyses are implemented by permutation tests, based on resampling without replacement from the set of the observed data to obtain a sampling distribution of an estimator. The implementation can be parallelized on a multicore machine to reduce execution time. Results The methods have been tested on 48 vessel wall samples of the internal saphenous vein stained with Masson’s trichrome. The results show that the segmented areas are consistent with the perception of a team of doctors and demonstrate good correlation between the expert judgments and the measured parameters for evaluating vessel wall changes. Conclusion The proposed methodology offers a powerful tool to quantify some components of the vessel wall. It is more objective, sensitive and accurate than the biochemical and qualitative methods traditionally used. The permutation tests are suitable statistical techniques to analyze the numerical measurements obtained when the underlying assumptions of the other statistical techniques are not met. PMID:26761643
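
    The permutation idea used here can be illustrated with a short, generic two-group example (hypothetical measurements, not the study's data): shuffle the group labels without replacement and rebuild the sampling distribution of the test statistic.

        import numpy as np

        rng = np.random.default_rng(3)
        # Hypothetical smooth-muscle area fractions (%) for two small groups of samples.
        group_a = np.array([42.1, 39.5, 45.0, 41.2, 38.7, 44.3])
        group_b = np.array([35.2, 37.8, 33.9, 36.5, 34.1])

        observed = group_a.mean() - group_b.mean()
        pooled = np.concatenate([group_a, group_b])
        n_a = len(group_a)

        n_perm = 10_000
        count = 0
        for _ in range(n_perm):
            perm = rng.permutation(pooled)     # resampling without replacement
            diff = perm[:n_a].mean() - perm[n_a:].mean()
            if abs(diff) >= abs(observed):
                count += 1

        p_value = (count + 1) / (n_perm + 1)   # add-one correction keeps p > 0
        print(observed, p_value)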

  11. The effect of hydroxyzine on treating bruxism of 2- to 14-year-old children admitted to the clinic of Bandar Abbas Children Hospital in 2013-2014

    PubMed Central

    Rahmati, M; Moayedi, A; Zakery Shahvari, S; Golmirzaei, J; Zahirinea, M; Abbasi, B

    2015-01-01

    Introduction. Bruxism is the non-physiologic pressing or grinding of the teeth against each other when an individual is not swallowing or chewing. If not treated, dental problems, stress, mental disorders, frequent night waking, and headache are expected. This research aimed to study the effect of hydroxyzine on treating bruxism in 2- to 14-year-old children admitted to the clinic of Bandar Abbas Children Hospital. Methodology. In this clinical trial, 143 children between 4 and 12 years of age admitted to the Children Hospital were divided randomly into test and control groups. The test group consisted of 88 hydroxyzine-treated children and the control group consisted of 55 children who used hot towels. Both groups were examined at several stages: a pre-test stage before starting treatment; at two, four, and six weeks; and four months after stopping the treatment. The effects of each treatment on reducing bruxism symptoms were assessed by a questionnaire. The data were analyzed using SPSS with descriptive statistics, the t-test, and ANOVA. Results. As far as bruxism severity was concerned, the results showed a significant difference between the test group members who received hydroxyzine and the control group members who received no medication. T-test results showed a statistically significant difference between the test and control groups in the second post-test (four weeks later) (p-value ≤ 0.05). The mean bruxism severity score in the test group changed significantly in the post-tests (at two, four, and six weeks) compared to the pre-test, whereas, in terms of response to treatment, no significant difference was recorded between the control and test groups 4 weeks after the treatment. Discussion. The results showed that prescribing hydroxyzine for 4 weeks had a considerable effect in diminishing bruxism severity in the test group. PMID:28316738

  12. Biomechanical Analysis of Military Boots. Phase 1. Materials Testing of Military and Commercial Footwear

    DTIC Science & Technology

    1992-10-01

    Text fragments from the report's list of tables: summary statistics (N=8 and N=4) and results of statistical analyses for impact tests performed on the forefoot of unworn and worn footwear. A truncated passage notes that early footwear evaluations used tests to assess heel and forefoot shock absorption, upper and sole durability, and flexibility (Cavanagh, 1978).

  13. Comparison of antibacterial activity of Talok (Muntingia calabura L) leaves ethanolic and n-hexane extracts on Propionibacterium acnes

    NASA Astrophysics Data System (ADS)

    Desrini, Sufi; Ghiffary, Hifzhan Maulana

    2018-04-01

    Muntingia calabura L., also known locally as Talok or Kersen, is a plant which has been widely used as traditional medicine in Indonesia. In this study, we evaluated the antibacterial activity of Muntingia calabura L. leaves ethanolic and n-hexane extracts on Propionibacterium acnes. Antibacterial activity of the extracts was determined using the agar well diffusion method. The antibacterial activities of each extract (2 mg/mL, 8 mg/mL, 20 mg/mL, 30 mg/mL, and 40 mg/mL) were tested against Propionibacterium acnes. The zones of inhibition of the ethanolic and n-hexane extracts were measured, compared, and analyzed using a statistical programme. The phytochemical analyses of the plants were carried out using thin-layer chromatography (TLC). The average diameter of the zone of inhibition at a concentration of 2 mg/mL of the ethanolic extract was 9.97 mm, while the n-hexane extract at the same concentration showed 0 mm. The statistical analysis was non-parametric: a Kruskal-Wallis test followed by Mann-Whitney tests to assess the magnitude of the differences between concentrations among groups. The Kruskal-Wallis test revealed a significant result (p < 0.001). Based on the post hoc Mann-Whitney tests, there were statistically significant differences between each concentration of the ethanolic and n-hexane extracts as well as the positive control group (p-value < 0.05). Both extracts have antibacterial activity on P. acnes; however, the ethanolic extract of Muntingia calabura L. was more effective at inhibiting Propionibacterium acnes growth than the n-hexane extract.
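
    The non-parametric workflow described here can be sketched generically in Python (invented inhibition-zone values, not the study's measurements):

        import numpy as np
        from scipy import stats

        # Hypothetical inhibition-zone diameters (mm) for three extract concentrations.
        c2  = [9.9, 10.1, 9.8, 10.0]
        c20 = [12.5, 12.9, 12.2, 12.7]
        c40 = [15.1, 14.8, 15.4, 15.0]

        # Omnibus comparison across all concentration groups.
        h_stat, p_kw = stats.kruskal(c2, c20, c40)
        print(f"Kruskal-Wallis: H={h_stat:.2f}, p={p_kw:.4f}")

        # Post hoc pairwise comparisons (a real analysis would correct for multiplicity).
        for name, (a, b) in {"2 vs 20": (c2, c20),
                             "2 vs 40": (c2, c40),
                             "20 vs 40": (c20, c40)}.items():
            u_stat, p_mw = stats.mannwhitneyu(a, b, alternative="two-sided")
            print(f"Mann-Whitney {name}: U={u_stat:.1f}, p={p_mw:.4f}")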

  14. Testing for qualitative heterogeneity: An application to composite endpoints in survival analysis.

    PubMed

    Oulhaj, Abderrahim; El Ghouch, Anouar; Holman, Rury R

    2017-01-01

    Composite endpoints are frequently used in clinical outcome trials to provide more endpoints, thereby increasing statistical power. A key requirement for a composite endpoint to be meaningful is the absence of the so-called qualitative heterogeneity to ensure a valid overall interpretation of any treatment effect identified. Qualitative heterogeneity occurs when individual components of a composite endpoint exhibit differences in the direction of a treatment effect. In this paper, we develop a general statistical method to test for qualitative heterogeneity, that is to test whether a given set of parameters share the same sign. This method is based on the intersection-union principle and, provided that the sample size is large, is valid whatever the model used for parameters estimation. We propose two versions of our testing procedure, one based on a random sampling from a Gaussian distribution and another version based on bootstrapping. Our work covers both the case of completely observed data and the case where some observations are censored which is an important issue in many clinical trials. We evaluated the size and power of our proposed tests by carrying out some extensive Monte Carlo simulations in the case of multivariate time to event data. The simulations were designed under a variety of conditions on dimensionality, censoring rate, sample size and correlation structure. Our testing procedure showed very good performances in terms of statistical power and type I error. The proposed test was applied to a data set from a single-center, randomized, double-blind controlled trial in the area of Alzheimer's disease.
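
    A rough sketch of an intersection-union style check in this spirit (a simplified Gaussian-approximation version, not the authors' exact procedure or their bootstrap variant) is shown below; the estimates and standard errors are hypothetical. A same-sign check along the lines of the paper would combine the "all positive" and "all negative" versions of this test.

        import numpy as np
        from scipy import stats

        def iut_all_positive(estimates, std_errors, alpha=0.05):
            """Intersection-union (min) test of the alternative: all parameters > 0.

            Each component null (theta_j <= 0) must be rejected individually, so the
            overall test rejects only when the smallest z-statistic exceeds z_{1-alpha}.
            """
            z = np.asarray(estimates) / np.asarray(std_errors)
            return bool(np.min(z) > stats.norm.ppf(1 - alpha))

        # Hypothetical component-wise treatment effects and standard errors.
        est = [0.25, 0.18, 0.30]
        se  = [0.08, 0.07, 0.10]
        print(iut_all_positive(est, se))              # evidence that all effects are positive?
        print(iut_all_positive(-np.array(est), se))   # mirrored test for "all negative"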

  15. The results of STEM education methods for enhancing critical thinking and problem solving skill in physics the 10th grade level

    NASA Astrophysics Data System (ADS)

    Soros, P.; Ponkham, K.; Ekkapim, S.

    2018-01-01

    This research aimed to: 1) compare critical thinking and problem-solving skills before and after learning using a STEM Education plan, 2) compare student achievement on force and the laws of motion before and after learning using the STEM Education plan, and 3) assess satisfaction with learning using STEM Education. The sample comprised 37 grade 10 students at Borabu School, Borabu District, Mahasarakham Province, in semester 2 of the 2016 academic year. The tools used in this study consisted of: 1) a STEM Education plan on force and the laws of motion for grade 10 students, one scheme totaling 14 hours; 2) a test of critical thinking and problem-solving skills with 30 multiple-choice items (5-option and 2-option formats); 3) an achievement test on force and the laws of motion with 30 four-option multiple-choice items; and 4) a satisfaction questionnaire of 20 items on a 5-point rating scale. The statistics used in the data analysis were percentage, mean, standard deviation, and the dependent-samples t-test. The results showed that 1) students learning with the STEM Education plan scored higher on critical thinking and problem solving in the post-test than in the pre-test, statistically significant at the .01 level; 2) students learning with the STEM Education plan had higher achievement scores in the post-test than in the pre-test, statistically significant at the .01 level; and 3) students' satisfaction with learning using the STEM Education plan was at a high level (x̄ = 4.51, S.D. = 0.56).

  16. Comparison of immediate complete denture, tooth and implant-supported overdenture on vertical dimension and muscle activity

    PubMed Central

    Shah, Farhan Khalid; Gebreel, Ashraf; Elshokouki, Ali hamed; Habib, Ahmed Ali

    2012-01-01

    PURPOSE To compare the changes in occlusal vertical dimension, masseter muscle activity and biting force after insertion of immediate mandibular complete dentures constructed as conventional, tooth-supported and implant-supported prostheses. MATERIALS AND METHODS Patients were selected and treatment was carried out with all three different concepts, i.e., immediate mandibular complete dentures constructed as conventional (Group A), tooth-supported (Group B) and implant-supported (Group C) prostheses. Parameters of evaluation and comparison were occlusal vertical dimension measured by radiograph (at three different time intervals), masseter muscle electromyographic (EMG) measurement by EMG analysis (at three different jaw positions) and bite force measured by a force transducer (at two different time intervals). The obtained data were statistically analyzed using the ANOVA F test at the 5% level of significance. If the F test was significant, the Least Significant Difference test was performed to test for further significant differences between variables. RESULTS The comparison of mean differences in occlusal vertical dimension between the tested groups was statistically significant only at 1 year after immediate denture insertion. The comparison of mean differences in wavelet packet coefficients of the electromyographic signals of the masseter muscles between the tested groups was not significant at the rest position, but was significant at the initial contact and maximum voluntary clench positions. The comparison of mean differences in maximum biting force between the tested groups was not statistically significant at the 5% level of significance. CONCLUSION Immediate complete overdentures, whether tooth- or implant-supported, are recommended over entirely mucosa-supported prostheses. PMID:22737309

  17. Bee Venom Pharmacopuncture Responses According to Sasang Constitution and Gender

    PubMed Central

    Kim, Chaeweon; Lee, Kwangho

    2013-01-01

    Objectives: The current study was performed to compare bee venom pharmacopuncture skin test reactions among groups of different sex and Sasang constitution. Methods: Between July 2012 and June 2013, all 76 patients who underwent bee venom pharmacopuncture skin tests and Sasang constitution diagnoses at Oriental Medicine Hospital of Sangji University were included in this study. The skin test was performed on the patient’s forearm intracutaneously with 0.05 ml of sweet bee venom (SBV) on their first visit. If the patients showed a positive response, the test was discontinued. On the other hand, if the patient showed a negative response, the test was performed on the opposite forearm intracutaneously with 0.05 ml of bee venom pharmacopuncture 25% on the next day or the next visit. Three groups were made to compare the differences in the bee venom pharmacopuncture skin tests according to sex and Sasang constitution: group A showed a positive response to SBV, group B showed a positive response to bee venom pharmacopuncture 25%, and group C showed a negative response on all bee venom pharmacopuncture skin tests. Fisher’s exact test was performed to evaluate the differences statistically. Results: The results of the bee venom pharmacopuncture skin tests showed no significant differences according to Sasang constitution (P = 0.300) or sex (P = 0.163). Conclusion: No significant differences in the results of the bee venom pharmacopuncture skin tests were observed according to either factor, Sasang constitution or sex. PMID:25780682

  18. High Impact = High Statistical Standards? Not Necessarily So

    PubMed Central

    Tressoldi, Patrizio E.; Giofré, David; Sella, Francesco; Cumming, Geoff

    2013-01-01

    What are the statistical practices of articles published in journals with a high impact factor? Are there differences compared with articles published in journals with a somewhat lower impact factor that have adopted editorial policies to reduce the impact of limitations of Null Hypothesis Significance Testing? To investigate these questions, the current study analyzed all articles related to psychological, neuropsychological and medical issues, published in 2011 in four journals with high impact factors: Science, Nature, The New England Journal of Medicine and The Lancet, and three journals with relatively lower impact factors: Neuropsychology, Journal of Experimental Psychology-Applied and the American Journal of Public Health. Results show that Null Hypothesis Significance Testing without any use of confidence intervals, effect size, prospective power and model estimation, is the prevalent statistical practice used in articles published in Nature, 89%, followed by articles published in Science, 42%. By contrast, in all other journals, both with high and lower impact factors, most articles report confidence intervals and/or effect size measures. We interpreted these differences as consequences of the editorial policies adopted by the journal editors, which are probably the most effective means to improve the statistical practices in journals with high or low impact factors. PMID:23418533

  19. High impact  =  high statistical standards? Not necessarily so.

    PubMed

    Tressoldi, Patrizio E; Giofré, David; Sella, Francesco; Cumming, Geoff

    2013-01-01

    What are the statistical practices of articles published in journals with a high impact factor? Are there differences compared with articles published in journals with a somewhat lower impact factor that have adopted editorial policies to reduce the impact of limitations of Null Hypothesis Significance Testing? To investigate these questions, the current study analyzed all articles related to psychological, neuropsychological and medical issues, published in 2011 in four journals with high impact factors: Science, Nature, The New England Journal of Medicine and The Lancet, and three journals with relatively lower impact factors: Neuropsychology, Journal of Experimental Psychology-Applied and the American Journal of Public Health. Results show that Null Hypothesis Significance Testing without any use of confidence intervals, effect size, prospective power and model estimation, is the prevalent statistical practice used in articles published in Nature, 89%, followed by articles published in Science, 42%. By contrast, in all other journals, both with high and lower impact factors, most articles report confidence intervals and/or effect size measures. We interpreted these differences as consequences of the editorial policies adopted by the journal editors, which are probably the most effective means to improve the statistical practices in journals with high or low impact factors.

  20. Can We Spin Straw Into Gold? An Evaluation of Immigrant Legal Status Imputation Approaches

    PubMed Central

    Van Hook, Jennifer; Bachmeier, James D.; Coffman, Donna; Harel, Ofer

    2014-01-01

    Researchers have developed logical, demographic, and statistical strategies for imputing immigrants’ legal status, but these methods have never been empirically assessed. We used Monte Carlo simulations to test whether, and under what conditions, legal status imputation approaches yield unbiased estimates of the association of unauthorized status with health insurance coverage. We tested five methods under a range of missing data scenarios. Logical and demographic imputation methods yielded biased estimates across all missing data scenarios. Statistical imputation approaches yielded unbiased estimates only when unauthorized status was jointly observed with insurance coverage; when this condition was not met, these methods overestimated insurance coverage for unauthorized relative to legal immigrants. We next showed how bias can be reduced by incorporating prior information about unauthorized immigrants. Finally, we demonstrated the utility of the best-performing statistical method for increasing power. We used it to produce state/regional estimates of insurance coverage among unauthorized immigrants in the Current Population Survey, a data source that contains no direct measures of immigrants’ legal status. We conclude that commonly employed legal status imputation approaches are likely to produce biased estimates, but data and statistical methods exist that could substantially reduce these biases. PMID:25511332

  1. Use of Tests of Statistical Significance and Other Analytic Choices in a School Psychology Journal: Review of Practices and Suggested Alternatives.

    ERIC Educational Resources Information Center

    Snyder, Patricia A.; Thompson, Bruce

    The use of tests of statistical significance was explored, first by reviewing some criticisms of contemporary practice in the use of statistical tests as reflected in a series of articles in the "American Psychologist" and in the appointment of a "Task Force on Statistical Inference" by the American Psychological Association…

  2. A General Class of Test Statistics for Van Valen’s Red Queen Hypothesis

    PubMed Central

    Wiltshire, Jelani; Huffer, Fred W.; Parker, William C.

    2014-01-01

    Van Valen’s Red Queen hypothesis states that within a homogeneous taxonomic group the age is statistically independent of the rate of extinction. The case of the Red Queen hypothesis being addressed here is when the homogeneous taxonomic group is a group of similar species. Since Van Valen’s work, various statistical approaches have been used to address the relationship between taxon age and the rate of extinction. We propose a general class of test statistics that can be used to test for the effect of age on the rate of extinction. These test statistics allow for a varying background rate of extinction and attempt to remove the effects of other covariates when assessing the effect of age on extinction. No model is assumed for the covariate effects. Instead we control for covariate effects by pairing or grouping together similar species. Simulations are used to compare the power of the statistics. We apply the test statistics to data on Foram extinctions and find that age has a positive effect on the rate of extinction. A derivation of the null distribution of one of the test statistics is provided in the supplementary material. PMID:24910489

  3. A General Class of Test Statistics for Van Valen's Red Queen Hypothesis.

    PubMed

    Wiltshire, Jelani; Huffer, Fred W; Parker, William C

    2014-09-01

    Van Valen's Red Queen hypothesis states that within a homogeneous taxonomic group the age is statistically independent of the rate of extinction. The case of the Red Queen hypothesis being addressed here is when the homogeneous taxonomic group is a group of similar species. Since Van Valen's work, various statistical approaches have been used to address the relationship between taxon age and the rate of extinction. We propose a general class of test statistics that can be used to test for the effect of age on the rate of extinction. These test statistics allow for a varying background rate of extinction and attempt to remove the effects of other covariates when assessing the effect of age on extinction. No model is assumed for the covariate effects. Instead we control for covariate effects by pairing or grouping together similar species. Simulations are used to compare the power of the statistics. We apply the test statistics to data on Foram extinctions and find that age has a positive effect on the rate of extinction. A derivation of the null distribution of one of the test statistics is provided in the supplementary material.

  4. (abstract) A VLBI Test of Tropospheric Delay Calibration with WVRs

    NASA Technical Reports Server (NTRS)

    Linfield, R. P.; Teitelbaum, L. P.; Keihm, S. J.; Resch, G. M.; Mahoney, M. J.; Treuhaft, R. N.

    1994-01-01

    Dual frequency (S/X band) very long baseline interferometry (VLBI) observations were used to test troposphere calibration by water vapor radiometers (WVRs). Comparison of the VLBI and WVR measurements shows a statistical agreement (specifically, their structure functions agree) on time scales less than 700 seconds. On longer time scales, VLBI instrumental errors become important. The improvement in VLBI residual delays from WVR calibration was consistent with the measured level of tropospheric fluctuations.
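
    A delay structure function of the kind compared here can be estimated from a time series with a few lines of Python; the series below is synthetic, not the VLBI or WVR measurements.

        import numpy as np

        rng = np.random.default_rng(4)
        dt = 10.0                                        # sample spacing in seconds
        delays = np.cumsum(rng.normal(0, 1e-12, 2000))   # synthetic delay series (seconds)

        def structure_function(x, lag):
            """First-order temporal structure function D(tau) = <[x(t+tau) - x(t)]^2>."""
            return np.mean((x[lag:] - x[:-lag]) ** 2)

        for tau_samples in (1, 10, 70):                  # lags of 10 s, 100 s, 700 s
            print(tau_samples * dt, structure_function(delays, tau_samples))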

  5. Navy Littoral Combat Ship (LCS)/Frigate Program: Background and Issues for Congress

    DTIC Science & Technology

    2015-09-23

    Text fragments from the report include press citations (Defense Daily, June 2, 2014: 4-5; Michael Fabey, “Robust Air Defense Not Needed In New Frigates, Studies Show,” Aerospace Daily & Defense Report), a reference to an Aug. 7 notice on the frigate posted to the Federal Business Opportunities website, a section on technical risk and issues relating to the program, and a truncated quotation from the Pentagon’s top test and evaluation officer: “Recent developmental testing provides no statistical evidence that the system is demonstrating...”

  6. SIRU utilization. Volume 1: Theory, development and test evaluation

    NASA Technical Reports Server (NTRS)

    Musoff, H.

    1974-01-01

    The theory, development, and test evaluations of the Strapdown Inertial Reference Unit (SIRU) are discussed. The statistical failure detection and isolation, single position calibration, and self alignment techniques are emphasized. Circuit diagrams of the system components are provided. Mathematical models are developed to show the performance characteristics of the subsystems. Specific areas of the utilization program are identified as: (1) error source propagation characteristics and (2) local level navigation performance demonstrations.

  7. Technology-assisted stroke rehabilitation in Mexico: a pilot randomized trial comparing traditional therapy to circuit training in a Robot/technology-assisted therapy gym.

    PubMed

    Bustamante Valles, Karla; Montes, Sandra; Madrigal, Maria de Jesus; Burciaga, Adan; Martínez, María Elena; Johnson, Michelle J

    2016-09-15

    Stroke rehabilitation in low- and middle-income countries, such as Mexico, is often hampered by lack of clinical resources and funding. To provide a cost-effective solution for comprehensive post-stroke rehabilitation that can alleviate the need for one-on-one physical or occupational therapy, in lower and upper extremities, we proposed and implemented a technology-assisted rehabilitation gymnasium in Chihuahua, Mexico. The Gymnasium for Robotic Rehabilitation (Robot Gym) consisted of low- and high-tech systems for upper and lower limb rehabilitation. Our hypothesis is that the Robot Gym can provide a cost- and labor-efficient alternative for post-stroke rehabilitation, while being as effective as, or more effective than, traditional physical and occupational therapy approaches. A typical group of stroke patients was randomly allocated to an intervention (n = 10) or a control group (n = 10). The intervention group received rehabilitation using the devices in the Robot Gym, whereas the control group (n = 10) received time-matched standard care. All of the study subjects were subjected to 24 two-hour therapy sessions over a period of 6 to 8 weeks. Several clinical assessment tests for the upper and lower extremities were used to evaluate motor function pre- and post-intervention. A cost analysis was done to compare the cost effectiveness of both therapies. No significant differences were observed when comparing the results of the pre-intervention Mini-mental, Brunnstrom Test, and Geriatric Depression Scale Test, showing that both groups were functionally similar prior to the intervention. Although both training groups were functionally equivalent, they had a significant age difference. The results of all of the upper extremity tests showed an improvement in function in both groups with no statistically significant differences between the groups. The Fugl-Meyer and the 10 Meters Walk lower extremity tests showed greater improvement in the intervention group compared to the control group. On the Timed Up and Go Test, no statistically significant differences were observed pre- and post-intervention when comparing the control and the intervention groups. For the 6 Minute Walk Test, both groups presented a statistically significant difference pre- and post-intervention, showing progress in their performance. The Robot Gym therapy was more cost-effective than the traditional one-to-one therapy used during this study in that it enabled therapists to train up to 1.5 to 6 times more patients for approximately the same cost in the long term. The results of this study showed that the patients who received therapy using the Robot Gym had enhanced functionality in the upper extremity tests similar to patients in the control group. In the lower extremity tests, the intervention patients showed more improvement than those subjected to traditional therapy. These results support that the Robot Gym can be as effective as traditional therapy for stroke patients, presenting a more cost- and labor-efficient option for countries with scarce clinical resources and funding. ISRCTN98578807.

  8. Effect of repeated simulated clinical use and sterilization on the cutting efficiency and flexibility of Hyflex CM nickel-titanium rotary files.

    PubMed

    Seago, Scott T; Bergeron, Brian E; Kirkpatrick, Timothy C; Roberts, Mark D; Roberts, Howard W; Himel, Van T; Sabey, Kent A

    2015-05-01

    Recent nickel-titanium manufacturing processes have resulted in an alloy that remains in a twinned martensitic phase at operating temperature. This alloy has been shown to have increased flexibility with added tolerance to cyclic and torsional fatigue. The aim of this study was to assess the effect of repeated simulated clinical use and sterilization on cutting efficiency and flexibility of Hyflex CM rotary files. Cutting efficiency was determined by measuring the load required to maintain a constant feed rate while instrumenting simulated canals. Flexibility was determined by using a 3-point bending test. Files were autoclaved after each use according to the manufacturer's recommendations. Files were tested through 10 simulated clinical uses. For cutting efficiency, mean data were analyzed by using multiple factor analysis of variance and the Dunnett post hoc test (P < .05). For flexibility, mean data were analyzed by using Levene's Test of Equality of Error and a general linear model (P < .05). No statistically significant decrease in cutting efficiency was noted in groups 2, 5, 6, and 7. A statistically significant decrease in cutting efficiency was noted in groups 3, 4, 8, 9, and 10. No statistically significant decrease in flexibility was noted in groups 2, 3, and 7. A statistically significant decrease in flexibility was noted in groups 4, 5, 6, 8, 9, 10, and 11. Repeated simulated clinical use and sterilization showed no effect on cutting efficiency through 1 use and no effect on flexibility through 2 uses. Published by Elsevier Inc.

  9. Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF) scores generated from the MMPI-2 and MMPI-2-RF test booklets: internal structure comparability in a sample of criminal defendants.

    PubMed

    Tarescavage, Anthony M; Alosco, Michael L; Ben-Porath, Yossef S; Wood, Arcangela; Luna-Jones, Lynn

    2015-04-01

    We investigated the internal structure comparability of Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF) scores derived from the MMPI-2 and MMPI-2-RF booklets in a sample of 320 criminal defendants (229 males and 54 females). After exclusion of invalid protocols, the final sample consisted of 96 defendants who were administered the MMPI-2-RF booklet and 83 who completed the MMPI-2. No statistically significant differences in MMPI-2-RF invalidity rates were observed between the two forms. Individuals in the final sample who completed the MMPI-2-RF did not statistically differ on demographics or referral question from those who were administered the MMPI-2 booklet. Independent t tests showed no statistically significant differences between MMPI-2-RF scores generated with the MMPI-2 and MMPI-2-RF booklets on the test's substantive scales. Statistically significant small differences were observed on the revised Variable Response Inconsistency (VRIN-r) and True Response Inconsistency (TRIN-r) scales. Cronbach's alpha and standard errors of measurement were approximately equal between the booklets for all MMPI-2-RF scales. Finally, MMPI-2-RF intercorrelations produced from the two forms yielded mostly small and a few medium differences, indicating that discriminant validity and test structure are maintained. Overall, our findings reflect the internal structure comparability of MMPI-2-RF scale scores generated from MMPI-2 and MMPI-2-RF booklets. Implications of these results and limitations of these findings are discussed. © The Author(s) 2014.
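
    For reference, the two reliability quantities compared in this record (Cronbach's alpha and the standard error of measurement) can be computed from item-level data as in the sketch below; the responses are simulated, not MMPI-2-RF data.

        import numpy as np

        rng = np.random.default_rng(5)
        # Simulated item responses: 200 respondents x 10 dichotomous items.
        latent = rng.normal(size=(200, 1))
        items = (latent + rng.normal(size=(200, 10)) > 0).astype(float)

        def cronbach_alpha(item_scores):
            """Cronbach's alpha from an (n_respondents, n_items) score matrix."""
            k = item_scores.shape[1]
            item_vars = item_scores.var(axis=0, ddof=1)
            total_var = item_scores.sum(axis=1).var(ddof=1)
            return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

        alpha = cronbach_alpha(items)
        total = items.sum(axis=1)
        sem = total.std(ddof=1) * np.sqrt(1 - alpha)   # standard error of measurement
        print(alpha, sem)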

  10. Tissue sparing, behavioral recovery, supraspinal axonal sparing/regeneration following sub-acute glial transplantation in a model of spinal cord contusion.

    PubMed

    Barbour, Helen R; Plant, Christine D; Harvey, Alan R; Plant, Giles W

    2013-09-27

    It has been shown that olfactory ensheathing glia (OEG) and Schwann cell (SCs) transplantation are beneficial as cellular treatments for spinal cord injury (SCI), especially at acute and sub-acute time points. In this study, we transplanted DsRED-transduced adult OEG and SCs sub-acutely (14 days) following a T10 moderate spinal cord contusion injury in the rat. Behaviour was measured by open field (BBB) and horizontal ladder walking tests to ascertain improvements in locomotor function. Fluorogold tracer was injected into the distal spinal cord to determine the extent of supraspinal and propriospinal axonal sparing/regeneration at the 4-month post-injection time point. The purpose of this study was to investigate if OEG and SCs injected sub-acutely (14 days after injury) could: (i) improve behavioral outcomes, (ii) induce sparing/regeneration of propriospinal and supraspinal projections, and (iii) reduce tissue loss. OEG and SCs transplanted rats showed significantly increased locomotion compared to injury-only controls in the open field test (BBB). However, the ladder walk test did not show statistically significant differences between treatment and control groups. Fluorogold retrograde tracing showed a statistically significant increase in the number of supraspinal nuclei projecting into the distal spinal cord in both OEG and SCs transplanted rats. These included the raphe, reticular and vestibular systems. Further pairwise multiple comparison tests also showed a statistically significant increase in raphe-projecting neurons in OEG transplanted rats when compared to SCs transplanted animals. Immunohistochemistry of spinal cord sections short term (2 weeks) and long term (4 months) showed differences in host glial activity, migration and proteoglycan deposits between the two cell types. Histochemical staining revealed that the volume of tissue remaining at the lesion site had increased in all OEG and SCs treated groups. Significant tissue sparing was observed at both time points following glial cell transplantation. In addition, OEG transplants showed significantly decreased chondroitin proteoglycan synthesis in the lesion site, suggesting a more CNS-tolerant graft. These results show that transplantation of OEG and SCs in a sub-acute phase can improve anatomical outcomes after a contusion injury to the spinal cord, by increasing the number of spared/regenerated supraspinal fibers, reducing cavitation and enhancing tissue integrity. This provides important information on the time window of glial transplantation for the repair of the spinal cord.

  11. Comparing perceived and test-based knowledge of cancer risk and prevention among Hispanic and African Americans: an example of community participatory research.

    PubMed

    Jones, Loretta; Bazargan, Mohsen; Lucas-Wright, Anna; Vadgama, Jaydutt V; Vargas, Roberto; Smith, James; Otoukesh, Salman; Maxwell, Annette E

    2013-01-01

    Most theoretical formulations acknowledge that knowledge and awareness of cancer screening and prevention recommendations significantly influence health behaviors. This study compares perceived knowledge of cancer prevention and screening with test-based knowledge in a community sample. We also examine demographic variables and self-reported cancer screening and prevention behaviors as correlates of both knowledge scores, and consider whether cancer-related knowledge can be accurately assessed using just a few, simple questions in a short and easy-to-complete survey. We used a community-partnered participatory research approach to develop our study aims and a survey. The study sample was composed of 180 predominantly African American and Hispanic community individuals who participated in a full-day cancer prevention and screening promotion conference in South Los Angeles, California, in July 2011. Participants completed a self-administered survey in English or Spanish at the beginning of the conference. Our data indicate that perceived and test-based knowledge scores are only moderately correlated. The perceived knowledge score shows a stronger association with demographic characteristics and other cancer-related variables than the test-based score. Thirteen out of twenty variables that were examined in our study showed a statistically significant correlation with the perceived knowledge score; however, only four variables demonstrated a statistically significant correlation with the test-based knowledge score. Perceived knowledge of cancer prevention and screening was assessed with fewer items than test-based knowledge. Thus, using this assessment could potentially reduce respondent burden. However, our data demonstrate that perceived and test-based knowledge are separate constructs.

  12. Low-level contrast statistics are diagnostic of invariance of natural textures

    PubMed Central

    Groen, Iris I. A.; Ghebreab, Sennay; Lamme, Victor A. F.; Scholte, H. Steven

    2012-01-01

    Texture may provide important clues for real world object and scene perception. To be reliable, these clues should ideally be invariant to common viewing variations such as changes in illumination and orientation. In a large image database of natural materials, we found textures with low-level contrast statistics that varied substantially under viewing variations, as well as textures that remained relatively constant. This led us to ask whether textures with constant contrast statistics give rise to more invariant representations compared to other textures. To test this, we selected natural texture images with either high (HV) or low (LV) variance in contrast statistics and presented these to human observers. In two distinct behavioral categorization paradigms, participants more often judged HV textures as “different” compared to LV textures, showing that textures with constant contrast statistics are perceived as being more invariant. In a separate electroencephalogram (EEG) experiment, evoked responses to single texture images (single-image ERPs) were collected. The results show that differences in contrast statistics correlated with both early and late differences in occipital ERP amplitude between individual images. Importantly, ERP differences between images of HV textures were mainly driven by illumination angle, which was not the case for LV images: there, differences were completely driven by texture membership. These converging neural and behavioral results imply that some natural textures are surprisingly invariant to illumination changes and that low-level contrast statistics are diagnostic of the extent of this invariance. PMID:22701419
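
    As a simplified illustration of what "variance in contrast statistics" across viewing conditions might look like computationally, the sketch below ranks hypothetical materials by how much a basic contrast measure varies across images of the same material. RMS contrast is a stand-in chosen for brevity, not the contrast statistics used in the study, and the image data are random placeholders.

    ```python
    # Simplified illustration (not the authors' measure): rank materials by how much a
    # basic contrast statistic varies across different photographs of the same material.
    import numpy as np

    rng = np.random.default_rng(2)

    def rms_contrast(img):
        """Root-mean-square contrast of a grayscale image with values in [0, 1]."""
        return img.std() / max(img.mean(), 1e-8)

    # Hypothetical data: 5 materials x 8 viewing conditions, 64x64 grayscale patches.
    materials = {f"material_{i}": [rng.random((64, 64)) for _ in range(8)] for i in range(5)}

    variance_of_contrast = {
        name: np.var([rms_contrast(img) for img in imgs])
        for name, imgs in materials.items()
    }

    # Low-variance materials would be candidates for an "LV" set, high-variance for "HV".
    for name, v in sorted(variance_of_contrast.items(), key=lambda kv: kv[1]):
        print(f"{name}: variance of RMS contrast = {v:.5f}")
    ```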

  13. Evaluation of dredged material proposed for ocean disposal from Arthur Kill Project Area, New York

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gruendell, B.D.; Barrows, E.S.; Borde, A.B.

    1997-01-01

    The objective of the bioassay reevaluation of the Arthur Kill Federal Project was to re-perform toxicity testing on proposed dredged material following current ammonia reduction protocols. Arthur Kill was one of four waterways sampled and evaluated for dredging and disposal in April 1993. Sediment samples were recollected from the Arthur Kill project areas in August 1995. Tests and analyses were conducted according to the manual developed by the USACE and the U.S. Environmental Protection Agency (EPA), Evaluation of Dredged Material Proposed for Ocean Disposal (Testing Manual), commonly referred to as the "Green Book," and the regional manual developed by the USACE-NYD and EPA Region II, Guidance for Performing Tests on Dredged Material to be Disposed of in Ocean Waters. The reevaluation of proposed dredged material from the Arthur Kill project areas consisted of benthic acute toxicity tests. Thirty-three individual sediment core samples were collected from the Arthur Kill project area. Three composite sediments, representing each reach of the area proposed for dredging, were used in benthic acute toxicity testing. Benthic acute toxicity tests were performed with the amphipod Ampelisca abdita and the mysid Mysidopsis bahia. The amphipod and mysid benthic toxicity test procedures followed EPA guidance for reduction of total ammonia concentrations in test systems prior to test initiation. Statistically significant acute toxicity was found in all Arthur Kill composites in the static renewal tests with A. abdita, but not in the static tests with M. bahia. Statistically significant acute toxicity and a greater than 20% increase in mortality over the reference sediment were found in the static renewal tests with A. abdita. M. bahia did not show statistically significant acute toxicity or a greater than 10% increase in mortality over the reference sediment in static tests. 5 refs., 2 figs., 2 tabs.
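
    The toxicity call described above combines a hypothesis test against the reference sediment with a minimum mortality-increase threshold. The sketch below illustrates that kind of two-part rule with invented survival data; the t-test, replicate counts, and threshold handling are assumptions rather than the study's protocol.

    ```python
    # Illustrative sketch of a two-part decision rule: a toxicity "hit" here requires
    # (a) significantly lower survival than the reference sediment and (b) mortality
    # exceeding the reference by more than a set threshold. Data are hypothetical.
    import numpy as np
    from scipy import stats

    reference_survival = np.array([18, 19, 17, 20, 18]) / 20  # survivors out of 20 per replicate
    composite_survival = np.array([12, 13, 11, 14, 12]) / 20

    t_stat, p_val = stats.ttest_ind(composite_survival, reference_survival)
    mortality_increase = reference_survival.mean() - composite_survival.mean()

    threshold = 0.20  # 20% increase in mortality over reference (amphipod criterion)
    is_toxic = (p_val < 0.05) and (mortality_increase > threshold)
    print(f"p = {p_val:.4f}, mortality increase = {mortality_increase:.0%}, toxic: {is_toxic}")
    ```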

  14. [The relationship between ischemic preconditioning-induced infarction size limitation and duration of test myocardial ischemia].

    PubMed

    Blokhin, I O; Galagudza, M M; Vlasov, T D; Nifontov, E M; Petrishchev, N N

    2008-07-01

    Traditionally, the infarct-size reduction produced by ischemic preconditioning is estimated at a single, fixed duration of test ischemia. This approach limits the understanding of the real anti-ischemic efficacy of ischemic preconditioning. The present study was performed in an in vivo rat model of regional myocardial ischemia-reperfusion and showed that the protective effect afforded by ischemic preconditioning progressively decreased as the test ischemia was prolonged. There were no statistically significant differences in infarction size between control and preconditioned animals when the duration of test ischemia was increased up to 1 hour. Preconditioning ensured a maximal infarct-limiting effect at test ischemia durations of 20 to 40 minutes.

  15. Using Multidimensional Scaling To Assess the Dimensionality of Dichotomous Item Data.

    ERIC Educational Resources Information Center

    Meara, Kevin; Robin, Frederic; Sireci, Stephen G.

    2000-01-01

    Investigated the usefulness of multidimensional scaling (MDS) for assessing the dimensionality of dichotomous test data. Focused on two MDS proximity measures, one based on the PC statistic (T. Chen and M. Davidson, 1996) and the other on interitem Euclidean distances. Simulation results show that both MDS procedures correctly identify…
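
    One of the two proximity measures named here, interitem Euclidean distance, lends itself to a short sketch: compute distances between item response vectors and inspect how MDS stress falls as dimensionality increases. The simulated responses, item count, and use of scikit-learn's metric MDS are assumptions for illustration, not the procedure evaluated in the paper.

    ```python
    # Minimal sketch, assuming simulated dichotomous (0/1) item responses: build an
    # interitem Euclidean distance matrix and examine MDS fit (stress) across dimensions.
    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from sklearn.manifold import MDS

    rng = np.random.default_rng(3)
    responses = rng.integers(0, 2, size=(500, 20))   # 500 examinees x 20 dichotomous items

    # Interitem Euclidean distances: treat each item's response vector as a point.
    dist = squareform(pdist(responses.T, metric="euclidean"))

    # Fit MDS solutions of increasing dimensionality; a sharp drop in stress suggests
    # the number of underlying dimensions.
    for k in range(1, 5):
        mds = MDS(n_components=k, dissimilarity="precomputed", random_state=0)
        mds.fit(dist)
        print(f"{k}-D stress: {mds.stress_:.1f}")
    ```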

  16. A multi-scale analysis of landscape statistics

    Treesearch

    Douglas H. Cain; Kurt H. Riitters; Kenneth Orvis

    1997-01-01

    It is now feasible to monitor some aspects of landscape ecological condition nationwide using remotely-sensed imagery and indicators of land cover pattern. Previous research showed redundancies among many reported pattern indicators and identified six unique dimensions of land cover pattern. This study tested the stability of those dimensions and representative...

  17. Impact of Oral Health Education on Oral Health Knowledge of Private School Children in Riyadh City, Saudi Arabia

    PubMed Central

    Al Saffan, Abdulrahman Dahham; Baseer, Mohammad Abdul; Alshammary, Abdul Aziz; Assery, Mansour; Kamel, Ashraf; Rahman, Ghousia

    2017-01-01

    Aims and Objectives: To assess the early effect of oral health education on the oral health knowledge of primary and intermediate students of private schools, using pre/post questionnaire data from oral health educational projects in Riyadh city, Saudi Arabia, and to examine topic-specific knowledge differences between genders, nationalities, and educational levels of the students. Materials and Methods: Cross-sectional oral health educational data of private school students (n = 1279) at primary and intermediate levels were extracted from the King Salman Centre for Children's Health (KSCCH) projects undertaken by Riyadh Colleges of Dentistry and Pharmacy. Students' pre- and post-test data were analyzed for changes in oral health knowledge. The overall knowledge score and topic-specific knowledge scores were calculated, and differences between gender, nationality, and educational level were examined using the Mann–Whitney U-test. The pre/post change in oral health knowledge was evaluated by Wilcoxon's signed-rank test. Results: Immediately after the oral health educational session, the high knowledge score category increased by 25.6%, while the medium and low knowledge score categories decreased by 3.2% and 22.3%, respectively; this change was statistically significant (P < 0.001). Comparison of correct responses between pre- and post-test showed a statistically significant (P < 0.05) increase for all questions except the one on the timing of tooth brushing. Females, non-Saudi nationals, and students at the primary level of education showed significantly higher mean knowledge (P < 0.001) at the post-test assessment. Conclusion: Primary and intermediate private school students' overall and topic-specific oral health knowledge improved immediately after the educational intervention provided by the KSCCH. A high knowledge gain was observed among female non-Saudi primary school students. PMID:29285475
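
    The two tests named in the Methods can be illustrated with a minimal sketch. The scores, group sizes, and scale below are invented; the comparison of female versus male scores and of paired pre/post scores mirrors the kind of analysis described, not the study's actual data.

    ```python
    # Minimal sketch (hypothetical scores, not the study data): compare knowledge
    # scores between two independent groups with the Mann-Whitney U-test, and test
    # the pre/post change within students with Wilcoxon's signed-rank test.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    female_scores = rng.integers(5, 16, 60)   # post-test knowledge scores (0-15 scale, assumed)
    male_scores = rng.integers(3, 14, 60)

    u_stat, p_between = stats.mannwhitneyu(female_scores, male_scores, alternative="two-sided")
    print(f"Mann-Whitney U = {u_stat:.0f}, p = {p_between:.4f}")

    pre = rng.integers(3, 12, 100)            # paired pre/post scores for the same students
    post = np.clip(pre + rng.integers(0, 5, 100), 0, 15)

    w_stat, p_paired = stats.wilcoxon(pre, post)
    print(f"Wilcoxon W = {w_stat:.0f}, p = {p_paired:.4f}")
    ```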

  18. Larger differences in utilization of rarely requested tests in primary care in Spain.

    PubMed

    Salinas, Maria; López-Garrigós, Maite; Flores, Emilio; Uris, Joaquín; Leiva-Salinas, Carlos

    2015-01-01

    The study was performed to compare and analyze the inter-departmental variability in the request of rarely requested laboratory tests in primary care, as opposed to other more common and highly requested tests. Data from production statistics for the year 2012 from 76 Spanish laboratories were used. The numbers of antinuclear antibody, antistreptolysin O, creatinine, cyclic citrullinated peptide antibody, deamidated gliadin peptide IgA antibody, glucose, protein electrophoresis, rheumatoid factor, transglutaminase IgA antibody, urinalysis and uric acid tests requested were collected. The number of test requests per 1000 inhabitants was calculated, and the coefficient of quartile dispersion was used to explore the variability. The smallest variation was seen for creatinine, glucose, uric acid and urinalysis, the most requested tests; the least requested tests showed the greatest variability. Through a very simple approach, in a population of close to twenty million inhabitants, our study shows that in primary care the variability in laboratory test requests is inversely proportional to the request rate.
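
    The variability measure used here, the coefficient of quartile dispersion, is the interquartile range divided by the sum of the quartiles, CQD = (Q3 − Q1) / (Q3 + Q1). The sketch below computes it for two sets of made-up per-laboratory request rates; the rate values and test names are illustrative only.

    ```python
    # Minimal sketch with made-up request rates (tests per 1000 inhabitants) across
    # laboratories: the coefficient of quartile dispersion, CQD = (Q3 - Q1) / (Q3 + Q1),
    # is a robust measure of relative variability.
    import numpy as np

    def coefficient_of_quartile_dispersion(rates):
        q1, q3 = np.percentile(rates, [25, 75])
        return (q3 - q1) / (q3 + q1)

    rng = np.random.default_rng(5)
    glucose_rates = rng.normal(300, 30, 76)          # common test: tight relative spread (hypothetical)
    anti_ccp_rates = np.abs(rng.normal(3, 2.5, 76))  # rare test: wide relative spread (hypothetical)

    print(f"CQD glucose:  {coefficient_of_quartile_dispersion(glucose_rates):.2f}")
    print(f"CQD anti-CCP: {coefficient_of_quartile_dispersion(anti_ccp_rates):.2f}")
    ```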

  19. The Practicality of Statistical Physics Handout Based on KKNI and the Constructivist Approach

    NASA Astrophysics Data System (ADS)

    Sari, S. Y.; Afrizon, R.

    2018-04-01

    Observations of statistical physics lectures show that: 1) the performance of lecturers, the social climate, students' competence, and the soft skills needed at work are in the 'enough' category; 2) students find statistical physics lectures difficult to follow because the material is abstract; 3) 40.72% of students need more support in the form of repetition, practice questions and structured tasks; and 4) the depth of the statistical physics material needs to be improved gradually and in a structured way. This indicates that learning materials aligned with the Indonesian National Qualification Framework, or Kerangka Kualifikasi Nasional Indonesia (KKNI), together with an appropriate learning approach, are needed to help lecturers and students. The authors have designed statistical physics handouts that meet the 'very valid' criterion (90.89%) according to expert judgment. In addition, the practicality of the handouts must be considered so that they are easy to use, interesting, and efficient in lectures. The purpose of this research is to determine the practicality level of a statistical physics handout based on the KKNI and a constructivist approach. This research is part of a research-and-development study using the 4-D model developed by Thiagarajan and has reached the development-testing part of the Development stage. Data were collected using a questionnaire distributed to lecturers and students and analyzed descriptively in the form of percentages. The questionnaire analysis shows that the statistical physics handout meets the 'very practical' criterion. The conclusion of this study is that statistical physics handouts based on the KKNI and a constructivist approach are practical for use in lectures.
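
    The descriptive percentage analysis mentioned above can be sketched by converting questionnaire responses into a percentage of the maximum possible score and mapping that onto labeled categories. The 1-4 response scale, the cut-off values, and the simulated responses below are assumptions, not the instrument or criteria used in the paper.

    ```python
    # Illustrative sketch only: turn Likert-type questionnaire responses (1-4 scale,
    # assumed) into a practicality percentage and map it onto labeled categories.
    # The cut-off values below are assumptions, not the ones used in the paper.
    import numpy as np

    def practicality_percentage(responses, max_score=4):
        """Percentage of the maximum possible total score."""
        responses = np.asarray(responses, dtype=float)
        return 100.0 * responses.sum() / (responses.size * max_score)

    def category(pct):
        if pct > 85:
            return "very practical"
        if pct > 70:
            return "practical"
        if pct > 55:
            return "fairly practical"
        return "not practical"

    rng = np.random.default_rng(6)
    student_responses = rng.integers(3, 5, size=(30, 12))  # 30 students x 12 items, mostly 3s and 4s

    pct = practicality_percentage(student_responses)
    print(f"Practicality: {pct:.2f}% -> {category(pct)}")
    ```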

  20. Modified Distribution-Free Goodness-of-Fit Test Statistic.

    PubMed

    Chun, So Yeon; Browne, Michael W; Shapiro, Alexander

    2018-03-01

    Covariance structure analysis and its structural equation modeling extensions have become one of the most widely used methodologies in social sciences such as psychology, education, and economics. An important issue in such analysis is to assess the goodness of fit of a model under analysis. One of the most popular test statistics used in covariance structure analysis is the asymptotically distribution-free (ADF) test statistic introduced by Browne (Br J Math Stat Psychol 37:62-83, 1984). The ADF statistic can be used to test models without any specific distribution assumption (e.g., multivariate normal distribution) of the observed data. Despite its advantage, it has been shown in various empirical studies that unless sample sizes are extremely large, this ADF statistic could perform very poorly in practice. In this paper, we provide a theoretical explanation for this phenomenon and further propose a modified test statistic that improves the performance in samples of realistic size. The proposed statistic deals with the possible ill-conditioning of the involved large-scale covariance matrices.
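
    For orientation, the ADF statistic discussed here is usually written as a quadratic form in the residual moments, weighted by an estimate of their asymptotic covariance matrix (which is where the ill-conditioning arises for large models). The notation below is a standard background sketch following Browne (1984), not the modified statistic proposed in this paper.

    ```latex
    % Standard form of the ADF fit function and test statistic (background sketch).
    % s             : vector of the p(p+1)/2 non-duplicated sample variances/covariances
    % \sigma(\theta): model-implied counterpart under parameter vector \theta (q free parameters)
    % \widehat{W}   : consistent estimate of the asymptotic covariance matrix of s,
    %                 built from fourth-order sample moments
    F_{\mathrm{ADF}}(\theta)
      = \bigl(s - \sigma(\theta)\bigr)^{\top} \widehat{W}^{-1} \bigl(s - \sigma(\theta)\bigr),
    \qquad
    T_{\mathrm{ADF}} = (N-1)\,\min_{\theta} F_{\mathrm{ADF}}(\theta)
      \;\overset{d}{\longrightarrow}\; \chi^{2}_{\,p(p+1)/2 - q}.
    ```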
