Sample records for statistically compared results

  1. Publication of statistically significant research findings in prosthodontics & implant dentistry in the context of other dental specialties.

    PubMed

    Papageorgiou, Spyridon N; Kloukos, Dimitrios; Petridis, Haralampos; Pandis, Nikolaos

    2015-10-01

    To assess the hypothesis that there is excessive reporting of statistically significant studies published in prosthodontic and implantology journals, which could indicate selective publication. The last 30 issues of 9 journals in prosthodontics and implant dentistry were hand-searched for articles with statistical analyses. The percentages of significant and non-significant results were tabulated by parameter of interest. Univariable/multivariable logistic regression analyses were applied to identify possible predictors of reporting statistically significant findings. The results of this study were compared with similar studies in dentistry with random-effects meta-analyses. Of the 2323 included studies, 71% reported statistically significant results, with the proportion of significant results ranging from 47% to 86%. Multivariable modeling identified geographical area and involvement of a statistician as predictors of statistically significant results. Compared to interventional studies, the odds that in vitro and observational studies would report statistically significant results were increased by 1.20 times (OR: 2.20, 95% CI: 1.66-2.92) and 0.35 times (OR: 1.35, 95% CI: 1.05-1.73), respectively. The probability of statistically significant results from randomized controlled trials was significantly lower compared to various study designs (difference: 30%, 95% CI: 11-49%). Likewise, the probability of statistically significant results in prosthodontics and implant dentistry was lower compared to other dental specialties, but this difference did not reach statistical significance (P>0.05). The majority of studies identified in the fields of prosthodontics and implant dentistry presented statistically significant results; the same trend existed in publications of other dental specialties. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Chi-squared and C statistic minimization for low count per bin data

    NASA Astrophysics Data System (ADS)

    Nousek, John A.; Shue, David R.

    1989-07-01

    Results are presented from a computer simulation comparing two statistical fitting techniques on data samples with large and small counts per bin; the results are then related specifically to X-ray astronomy. The Marquardt and Powell minimization techniques are compared by using both to minimize the chi-squared statistic. In addition, Cash's C statistic is applied, with Powell's method, and it is shown that the C statistic produces better fits in the low-count regime than chi-squared.
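
    For Poisson-distributed counts, Pearson's chi-squared relies on a Gaussian approximation that degrades at low counts per bin, while Cash's C statistic is twice the negative Poisson log-likelihood and remains well behaved. A minimal Python sketch of the comparison, using hypothetical data and a constant-rate model rather than the authors' simulation setup:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)

    # Hypothetical low-count data: ~2 counts per bin on average
    counts = rng.poisson(lam=2.0, size=50)

    def chi2(mu, n):
        # Pearson chi-squared with the model value as the variance estimate
        return np.sum((n - mu) ** 2 / np.maximum(mu, 1e-12))

    def cash_c(mu, n):
        # Cash's C statistic: twice the negative Poisson log-likelihood,
        # dropping the mu-independent log(n!) term
        return 2.0 * np.sum(mu - n * np.log(np.maximum(mu, 1e-12)))

    for stat in (chi2, cash_c):
        res = minimize(lambda p: stat(p[0], counts), x0=[1.0],
                       bounds=[(1e-6, None)])
        print(stat.__name__, "best-fit rate:", res.x[0])
    # Minimizing chi-squared overestimates the true rate (2.0) in the
    # low-count regime; minimizing C recovers roughly the sample mean.
    ```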

  3. Chi-squared and C statistic minimization for low count per bin data [sampling in X-ray astronomy]

    NASA Technical Reports Server (NTRS)

    Nousek, John A.; Shue, David R.

    1989-01-01

    Results are presented from a computer simulation comparing two statistical fitting techniques on data samples with large and small counts per bin; the results are then related specifically to X-ray astronomy. The Marquardt and Powell minimization techniques are compared by using both to minimize the chi-squared statistic. In addition, Cash's C statistic is applied, with Powell's method, and it is shown that the C statistic produces better fits in the low-count regime than chi-squared.

  4. Statistics in three biomedical journals.

    PubMed

    Pilcík, T

    2003-01-01

    In this paper we analyze the use of statistics, and associated problems, in three Czech biological journals in the year 2000. We investigated 23 articles in Folia Biologica, 60 articles in Folia Microbiologica, and 88 articles in Physiological Research. The statistical methods used most frequently in publications with statistical content were descriptive statistics and the t-test. The most common mistakes were the absence of any reference to the statistical software used and insufficient description of the data. We compared our results with the results of similar studies of other medical journals. The use of the important statistical methods is comparable with that in most medical journals, and the proportion of articles in which the applied method is insufficiently described is moderately low.

  5. Statistical evaluation of the metallurgical test data in the ORR-PSF-PVS irradiation experiment [PWR; BWR]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stallmann, F.W.

    1984-08-01

    A statistical analysis of Charpy test results from the two-year Pressure Vessel Simulation metallurgical irradiation experiment was performed. Determinations of transition temperature and upper-shelf energy derived from computer fits compare well with eyeball fits, and uncertainties for all results can be obtained with computer fits. The results were compared with predictions in Regulatory Guide 1.99 and other irradiation damage models.

  6. Using the Bootstrap Method for a Statistical Significance Test of Differences between Summary Histograms

    NASA Technical Reports Server (NTRS)

    Xu, Kuan-Man

    2006-01-01

    A new method is proposed to compare statistical differences between summary histograms, which are the histograms summed over a large ensemble of individual histograms. It consists of choosing a distance statistic for measuring the difference between summary histograms and using a bootstrap procedure to calculate the statistical significance level. Bootstrapping is an approach to statistical inference that makes few assumptions about the underlying probability distribution that describes the data. Three distance statistics are compared in this study. They are the Euclidean distance, the Jeffries-Matusita distance and the Kuiper distance. The data used in testing the bootstrap method are satellite measurements of cloud systems called cloud objects. Each cloud object is defined as a contiguous region/patch composed of individual footprints or fields of view. A histogram of measured values over footprints is generated for each parameter of each cloud object and then summary histograms are accumulated over all individual histograms in a given cloud-object size category. The results of statistical hypothesis tests using all three distances as test statistics are generally similar, indicating the validity of the proposed method. The Euclidean distance is determined to be most suitable after comparing the statistical tests of several parameters with distinct probability distributions among three cloud-object size categories. Impacts on the statistical significance levels resulting from differences in the total lengths of satellite footprint data between two size categories are also discussed.
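
    A simplified sketch of the proposed procedure, using hypothetical Gaussian samples in place of the cloud-object data: compute a distance statistic between the two summary histograms, then resample the individual histograms to build a null distribution for that distance.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def euclidean(h1, h2):
        return np.sqrt(np.sum((h1 - h2) ** 2))

    def summary_hist(samples, bins):
        h, _ = np.histogram(np.concatenate(samples), bins=bins)
        return h / h.sum()

    # Hypothetical ensembles of individual histograms (two size categories)
    cat_a = [rng.normal(0.0, 1.0, size=200) for _ in range(30)]
    cat_b = [rng.normal(0.2, 1.0, size=200) for _ in range(30)]
    bins = np.linspace(-4, 4, 21)

    observed = euclidean(summary_hist(cat_a, bins), summary_hist(cat_b, bins))

    # Null distribution: resample individual histograms from the pooled set
    pooled, n_a = cat_a + cat_b, len(cat_a)
    null = []
    for _ in range(999):
        idx = rng.integers(0, len(pooled), size=len(pooled))
        res = [pooled[i] for i in idx]
        null.append(euclidean(summary_hist(res[:n_a], bins),
                              summary_hist(res[n_a:], bins)))

    p_value = np.mean(np.array(null) >= observed)
    print(f"Euclidean distance={observed:.4f}, bootstrap p={p_value:.3f}")
    ```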

  7. An audit of the statistics and the comparison with the parameter in the population

    NASA Astrophysics Data System (ADS)

    Bujang, Mohamad Adam; Sa'at, Nadiah; Joys, A. Reena; Ali, Mariana Mohamad

    2015-10-01

    The sample size needed for sample statistics to closely estimate particular population parameters has long been an issue. Although a sample size may be calculated according to the objective of a study, it is difficult to confirm whether the resulting statistics are close to the parameters of the particular population. Meanwhile, the guideline of a p-value less than 0.05 is widely used as inferential evidence. This study therefore audited results computed from various subsamples and statistical analyses and compared them with the parameters of three different populations. Eight types of statistical analysis, with eight subsamples for each, were analyzed. The statistics were consistent and close to the parameters when the sample covered at least 15% to 35% of the population. Larger sample sizes are needed to estimate parameters involving categorical variables than those involving numerical variables. Sample sizes of 300 to 500 are sufficient to estimate the parameters of a medium-sized population.
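
    The audit idea can be reproduced in miniature with a simulation (hypothetical population, not the study's data): draw repeated subsamples at increasing sampling fractions and track how closely the sample statistic approaches the known population parameter.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical skewed population of 10,000 numeric values
    population = rng.gamma(shape=2.0, scale=10.0, size=10_000)
    parameter = population.mean()

    for frac in (0.05, 0.15, 0.25, 0.35):
        n = int(frac * population.size)
        estimates = [rng.choice(population, size=n, replace=False).mean()
                     for _ in range(200)]
        rel_err = np.mean(np.abs(np.array(estimates) - parameter)) / parameter
        print(f"{frac:4.0%} of population: mean relative error {rel_err:.3%}")
    ```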

  8. Using the bootstrap to establish statistical significance for relative validity comparisons among patient-reported outcome measures

    PubMed Central

    2013-01-01

    Background Relative validity (RV), a ratio of ANOVA F-statistics, is often used to compare the validity of patient-reported outcome (PRO) measures. We used the bootstrap to establish the statistical significance of the RV and to identify key factors affecting its significance. Methods Based on responses from 453 chronic kidney disease (CKD) patients to 16 CKD-specific and generic PRO measures, RVs were computed to determine how well each measure discriminated across clinically-defined groups of patients compared to the most discriminating (reference) measure. Statistical significance of RV was quantified by the 95% bootstrap confidence interval. Simulations examined the effects of sample size, denominator F-statistic, correlation between comparator and reference measures, and number of bootstrap replicates. Results The statistical significance of the RV increased as the magnitude of denominator F-statistic increased or as the correlation between comparator and reference measures increased. A denominator F-statistic of 57 conveyed sufficient power (80%) to detect an RV of 0.6 for two measures correlated at r = 0.7. Larger denominator F-statistics or higher correlations provided greater power. Larger sample size with a fixed denominator F-statistic or more bootstrap replicates (beyond 500) had minimal impact. Conclusions The bootstrap is valuable for establishing the statistical significance of RV estimates. A reasonably large denominator F-statistic (F > 57) is required for adequate power when using the RV to compare the validity of measures with small or moderate correlations (r < 0.7). Substantially greater power can be achieved when comparing measures of a very high correlation (r > 0.9). PMID:23721463
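
    A condensed sketch of the RV bootstrap on assumed synthetic data (three clinical groups and two hypothetical measures; the study itself used 16 PRO measures and clinically defined CKD groups):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    # Synthetic data: group membership plus two measures, the reference
    # discriminating across groups more strongly than the comparator
    groups = rng.integers(0, 3, size=450)
    reference = 1.0 * groups + rng.normal(0, 2.0, size=450)
    comparator = 0.6 * groups + rng.normal(0, 2.0, size=450)

    def f_stat(y, g):
        return stats.f_oneway(*(y[g == k] for k in np.unique(g))).statistic

    def rv(y_comp, y_ref, g):
        return f_stat(y_comp, g) / f_stat(y_ref, g)  # ratio of ANOVA Fs

    observed = rv(comparator, reference, groups)

    # Percentile bootstrap CI, resampling patients with replacement
    boot = []
    for _ in range(1000):
        i = rng.integers(0, groups.size, size=groups.size)
        boot.append(rv(comparator[i], reference[i], groups[i]))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"RV={observed:.2f}, 95% bootstrap CI=({lo:.2f}, {hi:.2f})")
    ```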

  9. Clinical relevance vs. statistical significance: Using neck outcomes in patients with temporomandibular disorders as an example.

    PubMed

    Armijo-Olivo, Susan; Warren, Sharon; Fuentes, Jorge; Magee, David J

    2011-12-01

    Statistical significance has been used extensively to evaluate the results of research studies. Nevertheless, it offers only limited information to clinicians. The assessment of clinical relevance can facilitate the translation of research results into clinical practice. The objective of this study was to explore different methods of evaluating the clinical relevance of results, using as an example a cross-sectional study comparing different neck outcomes between subjects with temporomandibular disorders and healthy controls. Subjects were compared for head and cervical posture, maximal cervical muscle strength, endurance of the cervical flexor and extensor muscles, and electromyographic activity of the cervical flexor muscles during the CranioCervical Flexion Test (CCFT). The evaluation of the clinical relevance of the results was performed based on the effect size (ES), minimal important difference (MID), and clinical judgement. The results of this study show that it is possible to have statistical significance without clinical relevance, to have both statistical significance and clinical relevance, to have clinical relevance without statistical significance, or to have neither. The evaluation of clinical relevance in clinical research is crucial to simplify the transfer of knowledge from research into practice. Clinical researchers should present the clinical relevance of their results. Copyright © 2011 Elsevier Ltd. All rights reserved.
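
    The distinction can be made concrete in a few lines (synthetic scores and an assumed MID value, not the study's neck outcomes):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)

    # Hypothetical outcome scores: patients vs healthy controls
    patients = rng.normal(48.0, 10.0, size=40)
    controls = rng.normal(52.0, 10.0, size=40)

    t, p = stats.ttest_ind(patients, controls)

    # Cohen's d with a pooled standard deviation
    pooled_sd = np.sqrt(((patients.size - 1) * patients.var(ddof=1) +
                         (controls.size - 1) * controls.var(ddof=1))
                        / (patients.size + controls.size - 2))
    d = (controls.mean() - patients.mean()) / pooled_sd

    MID = 5.0  # assumed minimal important difference for this outcome
    diff = controls.mean() - patients.mean()
    print(f"p={p:.3f}, Cohen's d={d:.2f}, difference={diff:.1f} (MID={MID})")
    # A small p with diff < MID is statistically significant but clinically
    # irrelevant; diff > MID with a large p is the reverse case.
    ```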

  10. Preliminary comparative assessment of PM10 hourly measurement results from new monitoring stations type using stochastic and exploratory methodology and models

    NASA Astrophysics Data System (ADS)

    Czechowski, Piotr Oskar; Owczarek, Tomasz; Badyda, Artur; Majewski, Grzegorz; Rogulski, Mariusz; Ogrodnik, Paweł

    2018-01-01

    The paper presents key issues from the preliminary stage of a proposed extended equivalence assessment for new portable devices: comparing hourly PM10 concentration series against reference station measurements using statistical methods. The article describes technical aspects of the new portable meters. Emphasis is placed on assessing the comparability of results with a methodology built on stochastic and exploratory methods. The concept rests on the observation that a simple comparison of result series in the time domain is insufficient; regularities should be compared in three complementary fields of statistical modeling: time, frequency, and space. The proposal is based on models of five annual series of measurement results from the new mobile devices and from a WIOS (Provincial Environmental Protection Inspectorate) reference station located in the city of Nowy Sacz. The results obtained indicate both the completeness of the comparison methodology and the high correspondence between results from the new devices and the reference measurements.

  11. Improving esthetic results in benign parotid surgery: statistical evaluation of facelift approach, sternocleidomastoid flap, and superficial musculoaponeurotic system flap application.

    PubMed

    Bianchi, Bernardo; Ferri, Andrea; Ferrari, Silvano; Copelli, Chiara; Sesenna, Enrico

    2011-04-01

    The purpose of this article was to analyze the efficacy of facelift incision, sternocleidomastoid muscle flap, and superficial musculoaponeurotic system flap for improving the esthetic results in patients undergoing partial parotidectomy for benign parotid tumor resection. The usefulness of partial parotidectomy is discussed, and a statistical evaluation of the esthetic results was performed. From January 1, 1996, to January 1, 2007, 274 patients treated for benign parotid tumors were studied. Of these, 172 underwent partial parotidectomy. The 172 patients were divided into 4 groups: partial parotidectomy with classic or modified Blair incision without reconstruction (group 1), partial parotidectomy with facelift incision and without reconstruction (group 2), partial parotidectomy with facelift incision associated with sternocleidomastoid muscle flap (group 3), and partial parotidectomy with facelift incision associated with superficial musculoaponeurotic system flap (group 4). Patients were considered, after a follow-up of at least 18 months, for functional and esthetic evaluation. The functional outcome was assessed considering the facial nerve function, Frey syndrome, and recurrence. The esthetic evaluation was performed by inviting the patients and a blind panel of 1 surgeon and 2 secretaries of the department to give a score of 1 to 10 to assess the final cosmetic outcome. The statistical analysis was finally performed using the Mann-Whitney U test for nonparametric data to compare the different group results. P less than .05 was considered significant. No recurrence developed in any of the 4 groups or in any of the 274 patients during the follow-up period. The statistical analysis, comparing group 1 and the other groups, revealed a highly statistically significant difference (P < .0001) for all groups. Also, when group 2 was compared with groups 3 and 4, the difference was highly statistically significant (P = .0018 for group 3 and P = .0005 for group 4). Finally, when groups 3 and 4 were compared, the difference was not statistically significant (P = .3467). Partial parotidectomy is the key to improving esthetic results in benign parotid surgery. The evaluation of functional complications and the recurrence rate in this series of patients has confirmed that this technique can be safely used for parotid benign tumor resection. The use of a facelift incision alone led to a highly statistically significant improvement in the esthetic outcome. When the facelift incision was used with reconstructive techniques, such as the sternocleidomastoid muscle flap or the superficial musculoaponeurotic system flap, the esthetic results improved further. Finally, no statistically significant difference resulted comparing the use of the superficial musculoaponeurotic system and the sternocleidomastoid muscle flap. Copyright © 2011 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  12. Statistical analysis of activation and reaction energies with quasi-variational coupled-cluster theory

    NASA Astrophysics Data System (ADS)

    Black, Joshua A.; Knowles, Peter J.

    2018-06-01

    The performance of quasi-variational coupled-cluster (QV) theory applied to the calculation of activation and reaction energies has been investigated. A statistical analysis of results obtained for six different sets of reactions has been carried out, and the results have been compared to those from standard single-reference methods. In general, the QV methods lead to increased activation energies and larger absolute reaction energies compared to those obtained with traditional coupled-cluster theory.

  13. Assessments: an open and closed case

    NASA Astrophysics Data System (ADS)

    Nazim Khan, R.

    2015-10-01

    Open book assessment is not a new idea, but it does not seem to have gained ground in higher education. In particular, not much literature is available on open book examinations in mathematics and statistics in higher education. The objective of this paper is to investigate the appropriateness of open book assessments in a first-year business statistics course. Data from two semesters of open book assessments provided some interesting results when compared with the closed book assessment regime of the following semester. The relevance of the results is discussed and compared with findings from the literature. The implications of the insights gained for further practice in the assessment of mathematics and statistics are also discussed.

  14. Comparing Student Success and Understanding in Introductory Statistics under Consensus and Simulation-Based Curricula

    ERIC Educational Resources Information Center

    Hildreth, Laura A.; Robison-Cox, Jim; Schmidt, Jade

    2018-01-01

    This study examines the transferability of results from previous studies of simulation-based curriculum in introductory statistics using data from 3,500 students enrolled in an introductory statistics course at Montana State University from fall 2013 through spring 2016. During this time, four different curricula, a traditional curriculum and…

  15. Consequences of common data analysis inaccuracies in CNS trauma injury basic research.

    PubMed

    Burke, Darlene A; Whittemore, Scott R; Magnuson, David S K

    2013-05-15

    The development of successful treatments for humans after traumatic brain or spinal cord injuries (TBI and SCI, respectively) requires animal research. This effort can be hampered when promising experimental results cannot be replicated because of incorrect data analysis procedures. To identify and hopefully avoid these errors in future studies, the articles in seven journals with the highest number of basic science central nervous system TBI and SCI animal research studies published in 2010 (N=125 articles) were reviewed for their data analysis procedures. After identifying the most common statistical errors, the implications of those findings were demonstrated by reanalyzing previously published data from our laboratories using the identified inappropriate statistical procedures, then comparing the two sets of results. Overall, 70% of the articles contained at least one type of inappropriate statistical procedure. The highest percentage involved incorrect post hoc t-tests (56.4%), followed by inappropriate parametric statistics (analysis of variance and t-test; 37.6%). Repeated Measures analysis was inappropriately missing in 52.0% of all articles and, among those with behavioral assessments, 58% were analyzed incorrectly. Reanalysis of our published data using the most common inappropriate statistical procedures resulted in a 14.1% average increase in significant effects compared to the original results. Specifically, an increase of 15.5% occurred with Independent t-tests and 11.1% after incorrect post hoc t-tests. Utilizing proper statistical procedures can allow more-definitive conclusions, facilitate replicability of research results, and enable more accurate translation of those results to the clinic.
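
    The most frequent error identified, uncorrected post hoc t-tests, can be contrasted with a corrected procedure in a short sketch (synthetic data; Tukey's HSD from statsmodels stands in here as one standard correction):

    ```python
    import numpy as np
    from scipy import stats
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    rng = np.random.default_rng(5)

    # Four hypothetical treatment groups drawn from the SAME distribution:
    # uncorrected pairwise t-tests inflate the false-positive rate
    scores = [rng.normal(50, 8, size=12) for _ in range(4)]
    labels = np.repeat(["A", "B", "C", "D"], 12)

    # Appropriate: omnibus ANOVA followed by a corrected post hoc test
    f, p = stats.f_oneway(*scores)
    print(f"ANOVA: F={f:.2f}, p={p:.3f}")
    print(pairwise_tukeyhsd(np.concatenate(scores), labels).summary())

    # Inappropriate (the common error): six uncorrected t-tests at alpha=.05
    for i in range(4):
        for j in range(i + 1, 4):
            _, pij = stats.ttest_ind(scores[i], scores[j])
            print(f"uncorrected t-test {labels[i*12]} vs {labels[j*12]}: "
                  f"p={pij:.3f}")
    ```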

  16. BrightStat.com: free statistics online.

    PubMed

    Stricker, Daniel

    2008-10-01

    Powerful software for statistical analysis is expensive. Here I present BrightStat, statistical software running on the Internet that is free of charge. BrightStat's goals and its main capabilities and functionalities are outlined. Three sample runs are presented: a Friedman test, a chi-square test, and a stepwise multiple regression. The results obtained by BrightStat are compared with results computed by SPSS, one of the global leaders in statistical software, and by VassarStats, a collection of scripts for data analysis running on the Internet. Elementary statistics is an inherent part of academic education and BrightStat is an alternative to commercial products.
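
    For instance, the first of the sample runs described, a Friedman test, takes one call in open-source tooling (hypothetical ratings; an independent illustration, not BrightStat's own code):

    ```python
    from scipy import stats

    # Hypothetical repeated measures: 8 subjects rated under 3 conditions
    cond1 = [7.0, 9.9, 8.5, 5.1, 10.3, 8.6, 7.8, 6.4]
    cond2 = [5.3, 5.7, 4.7, 3.5, 7.7, 6.2, 5.4, 4.9]
    cond3 = [4.9, 7.6, 5.5, 2.8, 8.4, 6.8, 5.9, 5.2]

    stat, p = stats.friedmanchisquare(cond1, cond2, cond3)
    print(f"Friedman chi-square={stat:.2f}, p={p:.4f}")
    ```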

  17. Statistical methods and neural network approaches for classification of data from multiple sources

    NASA Technical Reports Server (NTRS)

    Benediktsson, Jon Atli; Swain, Philip H.

    1990-01-01

    Statistical methods for classification of data from multiple data sources are investigated and compared to neural network models. A problem with using conventional multivariate statistical approaches for classification of data of multiple types is in general that a multivariate distribution cannot be assumed for the classes in the data sources. Another common problem with statistical classification methods is that the data sources are not equally reliable. This means that the data sources need to be weighted according to their reliability but most statistical classification methods do not have a mechanism for this. This research focuses on statistical methods which can overcome these problems: a method of statistical multisource analysis and consensus theory. Reliability measures for weighting the data sources in these methods are suggested and investigated. Secondly, this research focuses on neural network models. The neural networks are distribution free since no prior knowledge of the statistical distribution of the data is needed. This is an obvious advantage over most statistical classification methods. The neural networks also automatically take care of the problem involving how much weight each data source should have. On the other hand, their training process is iterative and can take a very long time. Methods to speed up the training procedure are introduced and investigated. Experimental results of classification using both neural network models and statistical methods are given, and the approaches are compared based on these results.

  18. An empirical comparison of statistical tests for assessing the proportional hazards assumption of Cox's model.

    PubMed

    Ng'andu, N H

    1997-03-30

    In the analysis of survival data using the Cox proportional hazard (PH) model, it is important to verify that the explanatory variables analysed satisfy the proportional hazard assumption of the model. This paper presents results of a simulation study that compares five test statistics to check the proportional hazard assumption of Cox's model. The test statistics were evaluated under proportional hazards and the following types of departures from the proportional hazard assumption: increasing relative hazards; decreasing relative hazards; crossing hazards; diverging hazards, and non-monotonic hazards. The test statistics compared include those based on partitioning of failure time and those that do not require partitioning of failure time. The simulation results demonstrate that the time-dependent covariate test, the weighted residuals score test and the linear correlation test have equally good power for detection of non-proportionality in the varieties of non-proportional hazards studied. Using illustrative data from the literature, these test statistics performed similarly.
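
    One member of the test families compared, the weighted (scaled Schoenfeld) residuals score test, is available in standard survival libraries. A sketch using the Python lifelines package on simulated data (an illustration under assumed data, not the paper's simulation code):

    ```python
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter
    from lifelines.statistics import proportional_hazard_test

    rng = np.random.default_rng(6)

    # Simulated survival data with one binary covariate and ~20% censoring
    n = 300
    x = rng.integers(0, 2, size=n)
    time = rng.exponential(scale=np.where(x == 1, 5.0, 10.0))
    event = (rng.uniform(size=n) < 0.8).astype(int)
    df = pd.DataFrame({"T": time, "E": event, "x": x})

    cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")

    # Scaled Schoenfeld residual test of the proportional hazards assumption
    result = proportional_hazard_test(cph, df, time_transform="rank")
    result.print_summary()
    ```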

  19. A comparative evaluation of dental caries status among hearing-impaired and normal children of Malda, West Bengal, evaluated with the Caries Assessment Spectrum and Treatment.

    PubMed

    Kar, Sudipta; Kundu, Goutam; Maiti, Shyamal Kumar; Ghosh, Chiranjit; Bazmi, Badruddin Ahamed; Mukhopadhyay, Santanu

    2016-01-01

    Dental caries is one of the major modern-day diseases of dental hard tissue. It may affect both normal and hearing-impaired children. This study aimed to evaluate and compare the prevalence of dental caries in hearing-impaired and normal children of Malda, West Bengal, using the Caries Assessment Spectrum and Treatment (CAST) instrument. In a cross-sectional, case-control study, the dental caries status of 6-12-year-old children was assessed; statistical analysis was carried out using the Z-test. A statistically significant difference was found between the study (hearing-impaired) and control (normal) groups: about 30.51% of hearing-impaired children were affected by caries, compared with 15.81% of normal children (P < 0.05). Regarding the individual caries assessment criteria, nearly all subgroups showed a statistically significant difference, the exceptions being the sealed tooth structure, internal caries-related discoloration in dentin, and distinct cavitation into dentine groups. The dental health of hearing-impaired children was found to be less satisfactory than that of normal children with respect to dental caries status evaluated with CAST.

  20. Efficacy and safety of brand-risperidone versus similar-risperidone in elderly patients with neuropsychiatric disorders: A retrospective study

    PubMed Central

    Folquitto, Jefferson Cunha; de Barros, Sérgio Barbosa; Pinto Junior, Jony Arrais; Bottino, Cássio M.C.

    2010-01-01

    To compare the efficacy and tolerability of brand-risperidone against similar-risperidone in elderly outpatients. Method The medical files of 16 elderly outpatients from the IPq-HCFMUSP treated with two formulations of risperidone (brand and similar) between July/1999 and February/2000 were reviewed. Two independent raters, using the Clinical Global Impression scale, evaluated the efficacy of the treatment with risperidone and the frequency of adverse effects. Results Comparing October/1999 to November/1999, Rater 1 observed a trend (p=0.059) and Rater 2 found a statistically significant difference in favor of the brand-risperidone group (p=0.014). Comparing October/1999 to February/2000, Rater 1 observed no statistically significant difference (p=0.190), but Rater 2 found a statistically significant difference in favor of the brand-risperidone group (p=0.029). Comparing November/1999 to February/2000, both raters found no statistically significant differences between the two risperidone formulations. Regarding adverse effects, a statistically significant difference (p=0.046) was found in favor of the patients treated with brand-risperidone. Conclusions Brand-risperidone, compared to similar-risperidone, showed a trend toward greater efficacy and tolerability. PMID:29213664

  1. Your Chi-Square Test Is Statistically Significant: Now What?

    ERIC Educational Resources Information Center

    Sharpe, Donald

    2015-01-01

    Applied researchers have employed chi-square tests for more than one hundred years. This paper addresses the question of how one should follow a statistically significant chi-square test result in order to determine the source of that result. Four approaches were evaluated: calculating residuals, comparing cells, ransacking, and partitioning. Data…
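
    Of the four approaches, calculating residuals is the most compact to show; the sketch below computes adjusted standardized residuals for a hypothetical contingency table to locate the cells driving a significant omnibus result:

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical 3x2 contingency table
    table = np.array([[30, 10],
                      [25, 25],
                      [10, 40]])

    chi2, p, dof, expected = stats.chi2_contingency(table)
    print(f"chi-square={chi2:.2f}, dof={dof}, p={p:.4f}")

    # Adjusted standardized residuals; |value| > 2 flags cells that
    # contribute substantially to the significant omnibus result
    n = table.sum()
    row = table.sum(axis=1, keepdims=True) / n
    col = table.sum(axis=0, keepdims=True) / n
    adj_resid = (table - expected) / np.sqrt(expected * (1 - row) * (1 - col))
    print(np.round(adj_resid, 2))
    ```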

  2. Validating Coherence Measurements Using Aligned and Unaligned Coherence Functions

    NASA Technical Reports Server (NTRS)

    Miles, Jeffrey Hilton

    2006-01-01

    This paper describes a novel approach based on coherence functions and statistical theory for sensor validation in a harsh environment. By using aligned and unaligned coherence functions together with statistical theory, one can test for sensor degradation, total sensor failure, or changes in the signal. This advanced diagnostic approach and the novel data processing methodology discussed provide a single number that conveys this information. This number, as calculated with standard statistical procedures for comparing the means of two distributions, is compared with results obtained using Yuen's robust statistical method for creating confidence intervals. Examination of experimental data from Kulite pressure transducers mounted in a Pratt & Whitney PW4098 combustor, using spectrum analysis methods on aligned and unaligned time histories, has verified the effectiveness of the proposed method. All the procedures produce good results, which demonstrates the robustness of the technique.
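
    The aligned/unaligned idea can be sketched with generic signals (synthetic data, not the combustor measurements): coherence between two records of a shared broadband source is high when the records are aligned and collapses to a statistical noise floor when one record is time-shifted.

    ```python
    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(9)
    fs, n = 1000.0, 20_000
    common = rng.normal(size=n)  # shared broadband source

    # Two sensors seeing the same source plus independent noise
    x = common + 0.7 * rng.normal(size=n)
    y = common + 0.7 * rng.normal(size=n)

    # Aligned coherence: high when both sensors observe the same source
    f, c_aligned = signal.coherence(x, y, fs=fs, nperseg=1024)

    # Unaligned coherence: shifting one record destroys the shared signal,
    # leaving the noise floor against which degradation can be judged
    f, c_unaligned = signal.coherence(x, np.roll(y, 5000), fs=fs, nperseg=1024)

    print(f"mean aligned coherence:   {c_aligned.mean():.3f}")
    print(f"mean unaligned coherence: {c_unaligned.mean():.3f}")
    ```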

  3. Comparing Simulated and Theoretical Sampling Distributions of the U3 Person-Fit Statistic.

    ERIC Educational Resources Information Center

    Emons, Wilco H. M.; Meijer, Rob R.; Sijtsma, Klaas

    2002-01-01

    Studied whether the theoretical sampling distribution of the U3 person-fit statistic is in agreement with the simulated sampling distribution under different item response theory models and varying item and test characteristics. Simulation results suggest that the use of standard normal deviates for the standardized version of the U3 statistic may…

  4. Can statistical linkage of missing variables reduce bias in treatment effect estimates in comparative effectiveness research studies?

    PubMed

    Crown, William; Chang, Jessica; Olson, Melvin; Kahler, Kristijan; Swindle, Jason; Buzinec, Paul; Shah, Nilay; Borah, Bijan

    2015-09-01

    Missing data, particularly missing variables, can create serious analytic challenges in observational comparative effectiveness research studies. Statistical linkage of datasets is a potential method for incorporating missing variables. Prior studies have focused upon the bias introduced by imperfect linkage. This analysis uses a case study of hepatitis C patients to estimate the net effect of statistical linkage on bias, also accounting for the potential reduction in missing variable bias. The results show that statistical linkage can reduce bias while also enabling parameter estimates to be obtained for the formerly missing variables. The usefulness of statistical linkage will vary depending upon the strength of the correlations of the missing variables with the treatment variable, as well as the outcome variable of interest.

  5. [Regression on order statistics and its application in estimating nondetects for food exposure assessment].

    PubMed

    Yu, Xiaojin; Liu, Pei; Min, Jie; Chen, Qiguang

    2009-01-01

    To explore the application of regression on order statistics (ROS) in estimating nondetects for food exposure assessment, regression on order statistics was applied to a cadmium residue data set from global food contaminant monitoring; the mean residual was estimated with SAS programming and compared with the results from substitution methods. The results show that the ROS method clearly outperforms substitution methods, being robust and convenient for subsequent analysis. Regression on order statistics is worth adopting, but more effort should be devoted to the details of its application.
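
    A minimal ROS sketch for a single detection limit (hypothetical residue values; the study used SAS, so this Python version is purely illustrative): regress the logs of the detected values on normal quantiles of their plotting positions, then impute the nondetects from the fitted line.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical residues (mg/kg); 5 samples fell below the detection
    # limit DL and are known only as "<DL"
    DL = 0.05
    detects = np.array([0.06, 0.07, 0.08, 0.09, 0.12, 0.15, 0.20, 0.31])
    n_nd = 5
    n = detects.size + n_nd

    # Blom plotting positions for all n ranks; the lowest ranks are censored
    pp = (np.arange(1, n + 1) - 0.375) / (n + 0.25)
    z = stats.norm.ppf(pp)

    # Fit log(value) ~ z on the detected (upper) ranks
    slope, intercept, *_ = stats.linregress(z[n_nd:], np.log(detects))

    # Impute the nondetects from the fitted line, then estimate the mean
    imputed = np.exp(intercept + slope * z[:n_nd])
    mean_ros = np.concatenate([imputed, detects]).mean()
    mean_sub = np.concatenate([np.full(n_nd, DL / 2), detects]).mean()
    print(f"ROS mean={mean_ros:.4f}  vs  DL/2 substitution={mean_sub:.4f}")
    ```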

  6. A retrospective survey of research design and statistical analyses in selected Chinese medical journals in 1998 and 2008.

    PubMed

    Jin, Zhichao; Yu, Danghui; Zhang, Luoman; Meng, Hong; Lu, Jian; Gao, Qingbin; Cao, Yang; Ma, Xiuqiang; Wu, Cheng; He, Qian; Wang, Rui; He, Jia

    2010-05-25

    High quality clinical research not only requires advanced professional knowledge, but also needs sound study design and correct statistical analyses. The number of clinical research articles published in Chinese medical journals has increased immensely in the past decade, but study design quality and statistical analyses have remained suboptimal. The aim of this investigation was to gather evidence on the quality of study design and statistical analyses in clinical research conducted in China during the first decade of the new millennium. Ten (10) leading Chinese medical journals were selected and all original articles published in 1998 (N = 1,335) and 2008 (N = 1,578) were thoroughly categorized and reviewed. A well-defined and validated checklist on study design, statistical analyses, results presentation, and interpretation was used for review and evaluation. Main outcomes were the frequencies of different types of study design, error/defect proportion in design and statistical analyses, and implementation of CONSORT in randomized clinical trials. From 1998 to 2008: The error/defect proportion in statistical analyses decreased significantly (χ² = 12.03, p<0.001), from 59.8% (545/1,335) in 1998 to 52.2% (664/1,578) in 2008. The overall error/defect proportion of study design also decreased (χ² = 21.22, p<0.001), from 50.9% (680/1,335) to 42.4% (669/1,578). In 2008, the share of randomized clinical trials remained in the single digits (3.8%, 60/1,578), with two-thirds showing poor results reporting (defects in 44 papers, 73.3%). Nearly half of the published studies were retrospective in nature: 49.3% (658/1,335) in 1998 compared to 48.2% (761/1,578) in 2008. Decreases in defect proportions were observed in both results presentation (χ² = 93.26, p<0.001), from 92.7% (945/1,019) to 78.2% (1023/1,309), and interpretation (χ² = 27.26, p<0.001), from 9.7% (99/1,019) to 4.3% (56/1,309), although some serious defects persisted. Chinese medical research seems to have made significant progress regarding statistical analyses, but there remains ample room for improvement regarding study designs. Retrospective clinical studies are the most often used design, whereas randomized clinical trials are rare and often show methodological weaknesses. Urgent implementation of the CONSORT statement is imperative.

  7. Comparing Visual and Statistical Analysis of Multiple Baseline Design Graphs.

    PubMed

    Wolfe, Katie; Dickenson, Tammiee S; Miller, Bridget; McGrath, Kathleen V

    2018-04-01

    A growing number of statistical analyses are being developed for single-case research. One important factor in evaluating these methods is the extent to which each corresponds to visual analysis. Few studies have compared statistical and visual analysis, and information about more recently developed statistics is scarce. Therefore, our purpose was to evaluate the agreement between visual analysis and four statistical analyses: improvement rate difference (IRD); Tau-U; Hedges, Pustejovsky, Shadish (HPS) effect size; and between-case standardized mean difference (BC-SMD). Results indicate that IRD and BC-SMD had the strongest overall agreement with visual analysis. Although Tau-U had strong agreement with visual analysis on raw values, it had poorer agreement when those values were dichotomized to represent the presence or absence of a functional relation. Overall, visual analysis appeared to be more conservative than statistical analysis, but further research is needed to evaluate the nature of these disagreements.

  8. Results of the 1979 NACUBO Comparative Performance Study and Investment Questionnaire.

    ERIC Educational Resources Information Center

    Dresner, Bruce M.

    Results of the 1979 Comparative Performance Study of the National Association of College and Business Officers are presented. The study is designed to aid administrators in evaluating the performance of their investment pools. The report covers comparative performance information and related investment performance statistics and other endowment…

  9. People Patterns: Statistics. Environmental Module for Use in a Mathematics Laboratory Setting.

    ERIC Educational Resources Information Center

    Zastrocky, Michael; Trojan, Arthur

    This module on statistics consists of 18 worksheets that cover such topics as sample spaces, mean, median, mode, taking samples, posting results, analyzing data, and graphing. The last four worksheets require the students to work with samples and use these to compare people's responses. A computer dating service is one result of this work.…

  10. Statistical dielectronic recombination rates for multielectron ions in plasma

    NASA Astrophysics Data System (ADS)

    Demura, A. V.; Leont'iev, D. S.; Lisitsa, V. S.; Shurygin, V. A.

    2017-10-01

    We describe the general analytic derivation of the dielectronic recombination (DR) rate coefficient for multielectron ions in a plasma based on the statistical theory of an atom in terms of the spatial distribution of the atomic electron density. The dielectronic recombination rates for complex multielectron tungsten ions are calculated numerically over a wide range of plasma temperatures, which is important for modern nuclear fusion studies. The results of the statistical theory are compared with the data obtained using the level-by-level codes ADPAK, FAC, and HULLAC, and with experimental results. We consider different statistical DR models based on the Thomas-Fermi distribution, viz., integral and differential with respect to the orbital angular momenta of the ion core and the trapped electron, as well as the Rost model, which is an analog of the Frank-Condon model as applied to atomic structures. In view of its universality and relative simplicity, the statistical approach can be used for obtaining express estimates of the dielectronic recombination rate coefficients in complex calculations of the parameters of thermonuclear plasmas. The statistical methods also provide dielectronic recombination rates with much smaller computer time expenditure than the available level-by-level codes.

  11. Statistical Analysis of Spectral Properties and Prosodic Parameters of Emotional Speech

    NASA Astrophysics Data System (ADS)

    Přibil, J.; Přibilová, A.

    2009-01-01

    The paper addresses the reflection of microintonation and spectral properties in male and female acted emotional speech. The microintonation component of speech melody is analyzed with respect to its spectral and statistical parameters. According to psychological research on emotional speech, different emotions are accompanied by different spectral noise. We control its amount by spectral flatness, according to which high-frequency noise is mixed into voiced frames during cepstral speech synthesis. Our experiments are aimed at statistical analysis of cepstral coefficient values and ranges of spectral flatness in three emotions (joy, sadness, anger) and a neutral state for comparison. Calculated histograms of the spectral flatness distribution are visually compared and modelled by a Gamma probability distribution. Histograms of the cepstral coefficient distribution are evaluated and compared using skewness and kurtosis. The statistical results show good correlation between male and female voices for all emotional states, as portrayed by several Czech and Slovak professional actors.

  12. Quadriceps Tendon Autograft in Anterior Cruciate Ligament Reconstruction: A Systematic Review.

    PubMed

    Hurley, Eoghan T; Calvo-Gurry, Manuel; Withers, Dan; Farrington, Shane K; Moran, Ray; Moran, Cathal J

    2018-05-01

    To systematically review the current evidence to ascertain whether quadriceps tendon autograft (QT) is a viable option in anterior cruciate ligament reconstruction. A literature review was conducted in accordance with Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) guidelines. Cohort studies comparing QT with bone-patellar tendon-bone autograft (BPTB) or hamstring tendon autograft (HT) were included. Clinical outcomes were compared, with all statistical analyses performed using IBM SPSS Statistics for Windows, version 22.0, with P < .05 being considered statistically significant. We identified 15 clinical trials with 1,910 patients. In all included studies, QT resulted in lower rates of anterior knee pain than BPTB. There was no difference in the rate of graft rupture between QT and BPTB or HT in any of the studies reporting this. One study found that QT resulted in greater knee stability than BPTB, and another study found increased stability compared with HT. One study found that QT resulted in improved functional outcomes compared with BPTB, and another found improved outcomes compared with HT, but one study found worse outcomes compared with BPTB. Current literature suggests QT is a viable option in anterior cruciate ligament reconstruction, with published literature showing comparable knee stability, functional outcomes, donor-site morbidity, and rerupture rates compared with BPTB and HT. Level III, systematic review of Level I, II, and III studies. Copyright © 2018 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  13. A comparison of two experimental design approaches in applying conjoint analysis in patient-centered outcomes research: a randomized trial.

    PubMed

    Kinter, Elizabeth T; Prior, Thomas J; Carswell, Christopher I; Bridges, John F P

    2012-01-01

    While the application of conjoint analysis and discrete-choice experiments in health are now widely accepted, a healthy debate exists around competing approaches to experimental design. There remains, however, a paucity of experimental evidence comparing competing design approaches and their impact on the application of these methods in patient-centered outcomes research. Our objectives were to directly compare the choice-model parameters and predictions of an orthogonal and a D-efficient experimental design using a randomized trial (i.e., an experiment on experiments) within an application of conjoint analysis studying patient-centered outcomes among outpatients diagnosed with schizophrenia in Germany. Outpatients diagnosed with schizophrenia were surveyed and randomized to receive choice tasks developed using either an orthogonal or a D-efficient experimental design. The choice tasks elicited judgments from the respondents as to which of two patient profiles (varying across seven outcomes and process attributes) was preferable from their own perspective. The results from the two survey designs were analyzed using the multinomial logit model, and the resulting parameter estimates and their robust standard errors were compared across the two arms of the study (i.e., the orthogonal and D-efficient designs). The predictive performances of the two resulting models were also compared by computing their percentage of survey responses classified correctly, and the potential for variation in scale between the two designs of the experiments was tested statistically and explored graphically. The results of the two models were statistically identical. No difference was found using an overall chi-squared test of equality for the seven parameters (p = 0.69) or via uncorrected pairwise comparisons of the parameter estimates (p-values ranged from 0.30 to 0.98). The D-efficient design resulted in directionally smaller standard errors for six of the seven parameters, of which only two were statistically significant, and no differences were found in the observed D-efficiencies of their standard errors (p = 0.62). The D-efficient design resulted in poorer predictive performance, but this was not significant (p = 0.73); there was some evidence that the parameters of the D-efficient design were biased marginally towards the null. While no statistical difference in scale was detected between the two designs (p = 0.74), the D-efficient design had a higher relative scale (1.06). This could be observed when the parameters were explored graphically, as the D-efficient parameters were lower. Our results indicate that orthogonal and D-efficient experimental designs have produced results that are statistically equivalent. This said, we have identified several qualitative findings that speak to the potential differences in these results that may have been statistically identified in a larger sample. While more comparative studies focused on the statistical efficiency of competing design strategies are needed, a more pressing research problem is to document the impact the experimental design has on respondent efficiency.

  14. Shear Bond Strength of Superficial, Intermediate and Deep Dentin In Vitro with Recent Generation Self-etching Primers and Single Nano Composite Resin.

    PubMed

    Singh, Kulshrest; Naik, Rajaram; Hegde, Srinidhi; Damda, Aftab

    2015-01-01

    This in vitro study compared the shear bond strength of recent self-etching primers at superficial, intermediate, and deep dentin levels. All teeth were sectioned at various levels and grouped randomly into two experimental groups and two control groups, each with three subgroups. The experimental groups consisted of two different dentin bonding systems. The positive control group used All Bond 2 and the negative control group had no bonding agent. The specimens were then tested for shear bond strength in an Instron machine, with the maximum shear bond strength noted at the time of fracture, and the results were statistically analyzed. Comparing shear bond strength values, All Bond 2 (Group III) demonstrated fairly higher bond strength values at the different levels of dentin, and comparing All Bond 2 with the other two experimental groups generally revealed highly statistically significant results. In the present investigation, higher mean shear bond strength values were recorded with the fourth-generation system than with the self-etching primers. When intermediate dentin shear bond strength was compared with deep dentin shear bond strength, statistically significant results were found with Clearfil Liner Bond 2V, All Bond 2, and the negative control. There was a statistically significant difference in shear bond strength values for both the self-etching primers and the control groups (fourth-generation bonding system and no bonding system) at superficial, intermediate, and deep dentin. There was a significant fall in bond strength values when moving from superficial to intermediate to deep dentin.

  15. Discrepancy between results and abstract conclusions in industry- vs nonindustry-funded studies comparing topical prostaglandins.

    PubMed

    Alasbali, Tariq; Smith, Michael; Geffen, Noa; Trope, Graham E; Flanagan, John G; Jin, Yaping; Buys, Yvonne M

    2009-01-01

    To investigate the relationship between industry- vs nonindustry-funded publications comparing the efficacy of topical prostaglandin analogs by evaluating the correspondence between the statistical significance of the publication's main outcome measure and its abstract conclusions. Retrospective, observational cohort study. English publications comparing the ocular hypotensive efficacy between any or all of latanoprost, travoprost, and bimatoprost were searched from the MEDLINE database. Each article was reviewed by three independent observers and was evaluated for source of funding, study quality, statistically significant main outcome measure, correspondence between results of main outcome measure and abstract conclusion, number of intraocular pressure outcomes compared, and journal impact factor. Funding was determined by published disclosure or, in cases of no documented disclosure, the corresponding author was contacted directly to confirm industry funding. Discrepancies were resolved by consensus. The main outcome measure was correspondence between abstract conclusion and reported statistical significance of the publications' main outcome measure. Thirty-nine publications were included, of which 29 were industry funded and 10 were nonindustry funded. The published abstract conclusion was not consistent with the results of the main outcome measure in 18 (62%) of 29 of the industry-funded studies compared with zero (0%) of 10 of the nonindustry-funded studies (P = .0006). Twenty-six (90%) of the industry-funded studies had proindustry abstract conclusions. Twenty-four percent of the industry-funded publications had a statistically significant main outcome measure; however, 90% of the industry-funded studies had proindustry abstract conclusions. Both readers and reviewers should scrutinize publications carefully to ensure that data support the authors' conclusions.

  16. Effect of Correlated Rotational Noise

    NASA Astrophysics Data System (ADS)

    Hancock, Benjamin; Wagner, Caleb; Baskaran, Aparna

    The traditional model of a self-propelled particle (SPP) is one where the body axis along which the particle travels reorients itself through rotational diffusion. If the reorientation process is instead driven by colored noise, rather than the standard Gaussian white noise, the resulting statistical mechanics cannot be accessed through conventional methods. In this talk we present results comparing three methods of deriving the statistical mechanics of an SPP whose reorientation process is driven by colored noise. We illustrate the differences and similarities in the resulting statistical mechanics through their ability to accurately capture the particle's response to external aligning fields.

  17. Statistical EMC: A new dimension electromagnetic compatibility of digital electronic systems

    NASA Astrophysics Data System (ADS)

    Tsaliovich, Anatoly

    Electromagnetic compatibility compliance test results are used as a database for addressing three classes of electromagnetic compatibility (EMC) related problems: statistical EMC profiles of digital electronic systems, the effect of equipment-under-test (EUT) parameters on electromagnetic emission characteristics, and EMC measurement specifics. Open area test site (OATS) and absorber-lined shielded room (AR) results are compared for the highest radiated emissions of the equipment under test. The suggested statistical evaluation methodology can be used to correlate the results of different EMC test techniques, characterize the EMC performance of electronic systems and components, and develop recommendations for optimal EMC design of electronic products.

  18. [Confidence interval or p-value--similarities and differences between two important methods of statistical inference of quantitative studies].

    PubMed

    Harari, Gil

    2014-01-01

    Statistical significance, also known as the p-value, and the CI (confidence interval) are common statistical measures and are essential for the statistical analysis of studies in medicine and the life sciences. These measures provide complementary information about the statistical probability of, and conclusions regarding the clinical significance of, study findings. This article describes the two methodologies, compares them, assesses their suitability for the different needs of study results analysis, and explains situations in which each method should be used.

  19. Crossover between the Gaussian orthogonal ensemble, the Gaussian unitary ensemble, and Poissonian statistics.

    PubMed

    Schweiner, Frank; Laturner, Jeanine; Main, Jörg; Wunner, Günter

    2017-11-01

    Until now only for specific crossovers between Poissonian statistics (P), the statistics of a Gaussian orthogonal ensemble (GOE), or the statistics of a Gaussian unitary ensemble (GUE) have analytical formulas for the level spacing distribution function been derived within random matrix theory. We investigate arbitrary crossovers in the triangle between all three statistics. To this aim we propose an according formula for the level spacing distribution function depending on two parameters. Comparing the behavior of our formula for the special cases of P→GUE, P→GOE, and GOE→GUE with the results from random matrix theory, we prove that these crossovers are described reasonably. Recent investigations by F. Schweiner et al. [Phys. Rev. E 95, 062205 (2017)] have shown that the Hamiltonian of magnetoexcitons in cubic semiconductors can exhibit all three statistics in dependence on the system parameters. Evaluating the numerical results for magnetoexcitons in dependence on the excitation energy and on a parameter connected with the cubic valence band structure and comparing the results with the formula proposed allows us to distinguish between regular and chaotic behavior as well as between existent or broken antiunitary symmetries. Increasing one of the two parameters, transitions between different crossovers, e.g., from the P→GOE to the P→GUE crossover, are observed and discussed.
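
    The level repulsion that distinguishes the three statistics can be checked numerically with a generic random-matrix sketch (unrelated to the magnetoexciton Hamiltonian): near s = 0, P(s) is constant for Poisson, grows like s for the GOE, and like s^2 for the GUE.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    N, trials = 40, 2000

    def central_spacing(h):
        e = np.linalg.eigvalsh(h)
        return e[N // 2] - e[N // 2 - 1]  # spacing at the spectrum center

    goe, gue = [], []
    for _ in range(trials):
        a = rng.normal(size=(N, N))
        goe.append(central_spacing((a + a.T) / 2))
        b = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
        gue.append(central_spacing((b + b.conj().T) / 2))

    poisson = rng.exponential(size=trials)  # uncorrelated levels

    for name, s in (("GOE", np.array(goe)), ("GUE", np.array(gue)),
                    ("Poisson", poisson)):
        s = s / s.mean()  # normalize the mean spacing to 1
        print(f"{name}: fraction of spacings < 0.1 = {np.mean(s < 0.1):.4f}")
    ```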

  20. Toxicity of zero-valent iron nanoparticles to a trichloroethylene-degrading groundwater microbial community.

    PubMed

    Zabetakis, Kara M; Niño de Guzmán, Gabriela T; Torrents, Alba; Yarwood, Stephanie

    2015-01-01

    The microbiological impact of zero-valent iron used in the remediation of groundwater was investigated by exposing a trichloroethylene-degrading anaerobic microbial community to two types of iron nanoparticles. Changes in total bacterial and archaeal population numbers were analyzed using qPCR and were compared to results from a blank and negative control to assess for microbial toxicity. Additionally, the results were compared to those of samples exposed to silver nanoparticles and iron filings in an attempt to discern the source of toxicity. Statistical analysis revealed that the three different iron treatments were equally toxic to the total bacteria and archaea populations, as compared with the controls. Conversely, the silver nanoparticles had a limited statistical impact when compared to the controls and increased the microbial populations in some instances. Therefore, the findings suggest that zero-valent iron toxicity does not result from a unique nanoparticle-based effect.

  1. easyGWAS: A Cloud-Based Platform for Comparing the Results of Genome-Wide Association Studies.

    PubMed

    Grimm, Dominik G; Roqueiro, Damian; Salomé, Patrice A; Kleeberger, Stefan; Greshake, Bastian; Zhu, Wangsheng; Liu, Chang; Lippert, Christoph; Stegle, Oliver; Schölkopf, Bernhard; Weigel, Detlef; Borgwardt, Karsten M

    2017-01-01

    The ever-growing availability of high-quality genotypes for a multitude of species has enabled researchers to explore the underlying genetic architecture of complex phenotypes at an unprecedented level of detail using genome-wide association studies (GWAS). The systematic comparison of results obtained from GWAS of different traits opens up new possibilities, including the analysis of pleiotropic effects. Other advantages that result from the integration of multiple GWAS are the ability to replicate GWAS signals and to increase statistical power to detect such signals through meta-analyses. In order to facilitate the simple comparison of GWAS results, we present easyGWAS, a powerful, species-independent online resource for computing, storing, sharing, annotating, and comparing GWAS. The easyGWAS tool supports multiple species, the uploading of private genotype data and summary statistics of existing GWAS, as well as advanced methods for comparing GWAS results across different experiments and data sets in an interactive and user-friendly interface. easyGWAS is also a public data repository for GWAS data and summary statistics and already includes published data and results from several major GWAS. We demonstrate the potential of easyGWAS with a case study of the model organism Arabidopsis thaliana, using flowering and growth-related traits. © 2016 American Society of Plant Biologists. All rights reserved.

  2. Understanding regulatory networks requires more than computing a multitude of graph statistics. Comment on "Drivers of structural features in gene regulatory networks: From biophysical constraints to biological function" by O.C. Martin et al.

    NASA Astrophysics Data System (ADS)

    Tkačik, Gašper

    2016-07-01

    The article by O. Martin and colleagues provides a much needed systematic review of a body of work that relates the topological structure of genetic regulatory networks to evolutionary selection for function. This connection is very important. Using the current wealth of genomic data, statistical features of regulatory networks (e.g., degree distributions, motif composition, etc.) can be quantified rather easily; it is, however, often unclear how to interpret the results. On a graph theoretic level the statistical significance of the results can be evaluated by comparing observed graphs to “randomized” ones (bravely ignoring the issue of how precisely to randomize!) and comparing the frequency of appearance of a particular network structure relative to a randomized null expectation. While this is a convenient operational test for statistical significance, its biological meaning is questionable. In contrast, an in-silico genotype-to-phenotype model makes explicit the assumptions about the network function, and thus clearly defines the expected network structures that can be compared to the case of no selection for function and, ultimately, to data.
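
    The "convenient operational test" described above can be made concrete in a few lines. The sketch below (our own illustration, assuming networkx is available; the graph and motif choice are arbitrary) scores an observed triangle count against a degree-preserving randomized ensemble:

    ```python
    import networkx as nx
    import numpy as np

    def motif_count(g):
        return sum(nx.triangles(g).values()) // 3     # triangles as a toy "motif"

    g_obs = nx.barabasi_albert_graph(200, 3, seed=1)  # stand-in for a real network
    observed = motif_count(g_obs)

    null = []
    for seed in range(100):
        g_rand = g_obs.copy()
        # double_edge_swap rewires edges while preserving the degree sequence
        nx.double_edge_swap(g_rand, nswap=4 * g_rand.number_of_edges(),
                            max_tries=10**6, seed=seed)
        null.append(motif_count(g_rand))

    z = (observed - np.mean(null)) / np.std(null)
    print(f"observed = {observed}, null mean = {np.mean(null):.1f}, z = {z:.2f}")
    ```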

  3. Determining Differences in Efficacy of Two Disinfectants Using t-Tests.

    ERIC Educational Resources Information Center

    Brehm, Michael A.; And Others

    1996-01-01

    Presents an experiment to compare the effectiveness of 95% ethanol to 20% bleach as disinfectants using t-tests for the statistical analysis of the data. Reports that bleach is a better disinfectant. Discusses the statistical and practical significance of the results. (JRH)
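
    For readers who want to reproduce this style of analysis, an unpaired t-test takes a few lines in Python. The colony counts below are invented for illustration; the article's data are not reproduced here:

    ```python
    from scipy.stats import ttest_ind

    ethanol = [12, 15, 9, 14, 11]   # surviving colonies after 95% ethanol
    bleach  = [3, 5, 2, 4, 6]       # surviving colonies after 20% bleach

    t, p = ttest_ind(ethanol, bleach)
    print(f"t = {t:.2f}, p = {p:.4f}")  # a small p favors a real difference
    ```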

  4. A Statistical Method for Syntactic Dialectometry

    ERIC Educational Resources Information Center

    Sanders, Nathan C.

    2010-01-01

    This dissertation establishes the utility and reliability of a statistical distance measure for syntactic dialectometry, expanding dialectometry's methods to include syntax as well as phonology and the lexicon. It establishes the measure's reliability by comparing its results to those of dialectology and phonological dialectometry on Swedish…

  5. Investigation of Weibull statistics in fracture analysis of cast aluminum

    NASA Technical Reports Server (NTRS)

    Holland, Frederic A., Jr.; Zaretsky, Erwin V.

    1989-01-01

    The fracture strengths of two large batches of A357-T6 cast aluminum coupon specimens were compared by using two-parameter Weibull analysis. The minimum number of these specimens necessary to find the fracture strength of the material was determined. The applicability of three-parameter Weibull analysis was also investigated. A design methodology based on the combination of elementary stress analysis and Weibull statistical analysis is advanced and applied to the design of a spherical pressure vessel shell. The results from this design methodology are compared with results from the applicable ASME pressure vessel code.
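
    A two-parameter Weibull fit of the kind described can be sketched as follows: the location is fixed at zero, leaving the shape (Weibull modulus) and scale (characteristic strength) to be estimated. The strengths are synthetic, not the A357-T6 coupon data:

    ```python
    import numpy as np
    from scipy.stats import weibull_min

    rng = np.random.default_rng(0)
    strengths = weibull_min.rvs(c=10, scale=300, size=50, random_state=rng)  # MPa

    shape, loc, scale = weibull_min.fit(strengths, floc=0)  # two-parameter form
    print(f"Weibull modulus m = {shape:.1f}, characteristic strength = {scale:.0f} MPa")
    ```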

  6. A Retrospective Survey of Research Design and Statistical Analyses in Selected Chinese Medical Journals in 1998 and 2008

    PubMed Central

    Jin, Zhichao; Yu, Danghui; Zhang, Luoman; Meng, Hong; Lu, Jian; Gao, Qingbin; Cao, Yang; Ma, Xiuqiang; Wu, Cheng; He, Qian; Wang, Rui; He, Jia

    2010-01-01

    Background High quality clinical research not only requires advanced professional knowledge, but also needs sound study design and correct statistical analyses. The number of clinical research articles published in Chinese medical journals has increased immensely in the past decade, but study design quality and statistical analyses have remained suboptimal. The aim of this investigation was to gather evidence on the quality of study design and statistical analyses in clinical research conducted in China during the first decade of the new millennium. Methodology/Principal Findings Ten (10) leading Chinese medical journals were selected and all original articles published in 1998 (N = 1,335) and 2008 (N = 1,578) were thoroughly categorized and reviewed. A well-defined and validated checklist on study design, statistical analyses, results presentation, and interpretation was used for review and evaluation. Main outcomes were the frequencies of different types of study design, error/defect proportions in design and statistical analyses, and implementation of CONSORT in randomized clinical trials. From 1998 to 2008: the error/defect proportion in statistical analyses decreased significantly (χ2 = 12.03, p<0.001), 59.8% (545/1,335) in 1998 compared to 52.2% (664/1,578) in 2008. The overall error/defect proportion in study design also decreased (χ2 = 21.22, p<0.001), 50.9% (680/1,335) compared to 42.4% (669/1,578). In 2008, the proportion of randomized clinical trials remained in the single digits (3.8%, 60/1,578), with two-thirds showing poor results reporting (defects in 44 papers, 73.3%). Nearly half of the published studies were retrospective in nature, 49.3% (658/1,335) in 1998 compared to 48.2% (761/1,578) in 2008. Decreases in defect proportions were observed in both results presentation (χ2 = 93.26, p<0.001), 92.7% (945/1,019) compared to 78.2% (1,023/1,309), and interpretation (χ2 = 27.26, p<0.001), 9.7% (99/1,019) compared to 4.3% (56/1,309), though some serious defects persisted. Conclusions/Significance Chinese medical research seems to have made significant progress regarding statistical analyses, but there remains ample room for improvement regarding study designs. Retrospective clinical studies are the most often used design, whereas randomized clinical trials are rare and often show methodological weaknesses. Urgent implementation of the CONSORT statement is imperative. PMID:20520824
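
    The year-to-year comparisons of proportions reported above are chi-square tests on 2x2 tables of counts. As an illustrative sketch only, using the 1998 and 2008 statistical-analysis counts quoted in the abstract:

    ```python
    import numpy as np
    from scipy.stats import chi2_contingency

    #                 defect      no defect
    table = np.array([[545, 1335 - 545],    # 1998
                      [664, 1578 - 664]])   # 2008
    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3g}")
    ```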

  7. Application of microarray analysis on computer cluster and cloud platforms.

    PubMed

    Bernau, C; Boulesteix, A-L; Knaus, J

    2013-01-01

    Analysis of recent high-dimensional biological data tends to be computationally intensive as many common approaches such as resampling or permutation tests require the basic statistical analysis to be repeated many times. A crucial advantage of these methods is that they can be easily parallelized due to the computational independence of the resampling or permutation iterations, which has induced many statistics departments to establish their own computer clusters. An alternative is to rent computing resources in the cloud, e.g. at Amazon Web Services. In this article we analyze whether a selection of statistical projects, recently implemented at our department, can be efficiently realized on these cloud resources. Moreover, we illustrate an opportunity to combine computer cluster and cloud resources. In order to compare the efficiency of computer cluster and cloud implementations and their respective parallelizations we use microarray analysis procedures and compare their runtimes on the different platforms. Amazon Web Services provide various instance types which meet the particular needs of the different statistical projects we analyzed in this paper. Moreover, the network capacity is sufficient and the parallelization is comparable in efficiency to standard computer cluster implementations. Our results suggest that many statistical projects can be efficiently realized on cloud resources. It is important to mention, however, that workflows can change substantially as a result of a shift from computer cluster to cloud computing.
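
    The property exploited here, that resampling iterations are computationally independent, is easy to demonstrate on a single machine before scaling out to a cluster or a cloud instance. A minimal sketch with toy data and the standard library's process pool:

    ```python
    import numpy as np
    from multiprocessing import Pool

    rng = np.random.default_rng(0)
    group_a = rng.normal(0.0, 1.0, 50)   # e.g., expression values, condition A
    group_b = rng.normal(0.4, 1.0, 50)   # condition B
    observed = group_a.mean() - group_b.mean()
    pooled = np.concatenate([group_a, group_b])

    def one_permutation(seed):
        # each iteration is independent, so it can run in any worker process
        perm = np.random.default_rng(seed).permutation(pooled)
        return perm[:50].mean() - perm[50:].mean()

    if __name__ == "__main__":
        with Pool() as pool:
            null = pool.map(one_permutation, range(10_000))
        p = np.mean(np.abs(null) >= abs(observed))
        print(f"two-sided permutation p = {p:.4f}")
    ```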

  8. Comparison of accelerated and conventional corneal collagen cross-linking for progressive keratoconus.

    PubMed

    Cınar, Yasin; Cingü, Abdullah Kürşat; Türkcü, Fatih Mehmet; Çınar, Tuba; Yüksel, Harun; Özkurt, Zeynep Gürsel; Çaça, Ihsan

    2014-09-01

    To compare outcomes of accelerated and conventional corneal cross-linking (CXL) for progressive keratoconus (KC). Patients were divided into two groups, the accelerated CXL group and the conventional CXL group. The uncorrected distant visual acuity (UDVA), corrected distant visual acuity (CDVA), refraction and keratometric values were measured preoperatively and postoperatively. The data of the two groups were compared statistically. The mean UDVA and CDVA were better at six months postoperatively when compared with preoperative values in both groups. While the change in UDVA and CDVA was statistically significant in the accelerated CXL group (p = 0.035 and p = 0.047, respectively), it did not reach statistical significance in the conventional CXL group (p = 0.184 and p = 0.113, respectively). The decreases in the mean corneal power (Km) and maximum keratometric value (Kmax) were statistically significant in both groups (p = 0.012 and 0.046, respectively, in the accelerated CXL group; p = 0.012 and 0.041, respectively, in the conventional CXL group). There was no statistically significant difference in visual and refractive results between the two groups (p > 0.05). Refractive and visual results of the accelerated and conventional CXL methods for the treatment of KC over this short time period were similar. The accelerated CXL method is faster and provides higher patient throughput.

  9. IBM Watson Analytics: Automating Visualization, Descriptive, and Predictive Statistics

    PubMed Central

    2016-01-01

    Background We live in an era of explosive data generation that will continue to grow and involve all industries. One of the results of this explosion is the need for newer and more efficient data analytics procedures. Traditionally, data analytics required a substantial background in statistics and computer science. In 2015, International Business Machines Corporation (IBM) released the IBM Watson Analytics (IBMWA) software that delivered advanced statistical procedures based on the Statistical Package for the Social Sciences (SPSS). The latest entry of Watson Analytics into the field of analytical software products provides users with enhanced functions that are not available in many existing programs. For example, Watson Analytics automatically analyzes datasets, examines data quality, and determines the optimal statistical approach. Users can request exploratory, predictive, and visual analytics. Using natural language processing (NLP), users are able to submit additional questions for analyses in a quick response format. This analytical package is available free to academic institutions (faculty and students) that plan to use the tools for noncommercial purposes. Objective To report the features of IBMWA and discuss how this software subjectively and objectively compares to other data mining programs. Methods The salient features of the IBMWA program were examined and compared with other common analytical platforms, using validated health datasets. Results Using a validated dataset, IBMWA delivered similar predictions compared with several commercial and open source data mining software applications. The visual analytics generated by IBMWA were similar to results from programs such as Microsoft Excel and Tableau Software. In addition, assistance with data preprocessing and data exploration was an inherent component of the IBMWA application. Sensitivity and specificity were not included in the IBMWA predictive analytics results, nor were odds ratios, confidence intervals, or a confusion matrix. Conclusions IBMWA is a new alternative for data analytics software that automates descriptive, predictive, and visual analytics. This program is very user-friendly but requires data preprocessing, statistical conceptual understanding, and domain expertise. PMID:27729304

  10. Upward Flame Propagation and Wire Insulation Flammability: 2006 Round Robin Data Analysis

    NASA Technical Reports Server (NTRS)

    Hirsch, David B.

    2007-01-01

    This viewgraph document reviews results from tests of different materials used for wire insulation, covering flame propagation and flammability. The presentation focused on investigating data variability both within and between laboratories. Between-laboratory consistency was evaluated through the consistency statistic h, which indicates how one laboratory's cell average compares with the averages of the other laboratories; within-laboratory consistency was evaluated through the consistency statistic k, which indicates how one laboratory's within-laboratory variability compares with the combined variability of the other laboratories. Extreme results were tested to determine whether they arose by chance or from nonrandom causes (human error, instrument calibration shift, non-adherence to procedures, etc.).
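
    The h and k statistics mentioned are the standard interlaboratory consistency statistics (as used in ASTM E691-style round robins). A hedged numerical sketch, with made-up replicate measurements per laboratory:

    ```python
    import numpy as np

    # rows = laboratories, columns = replicate measurements (synthetic data)
    data = np.array([[10.2, 10.5,  9.9],
                     [11.1, 11.4, 11.0],
                     [ 9.6,  9.8, 10.1],
                     [10.4, 10.2, 10.6]])

    cell_avg = data.mean(axis=1)
    cell_sd  = data.std(axis=1, ddof=1)

    h = (cell_avg - cell_avg.mean()) / cell_avg.std(ddof=1)  # between-lab consistency
    k = cell_sd / np.sqrt((cell_sd**2).mean())               # within-lab consistency

    for lab, (hi, ki) in enumerate(zip(h, k), start=1):
        print(f"lab {lab}: h = {hi:+.2f}, k = {ki:.2f}")
    ```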

  11. An application of statistics to comparative metagenomics

    PubMed Central

    Rodriguez-Brito, Beltran; Rohwer, Forest; Edwards, Robert A

    2006-01-01

    Background Metagenomics, the sequence analysis of genomic DNA isolated directly from the environment, can be used to identify organisms and model community dynamics of a particular ecosystem. Metagenomics also has the potential to identify significantly different metabolic potential in different environments. Results Here we use a statistical method to compare curated subsystems, to predict the physiology, metabolism, and ecology from metagenomes. This approach can be used to identify those subsystems that are significantly different between metagenome sequences. Subsystems that were overrepresented in the Sargasso Sea and Acid Mine Drainage metagenomes when compared to non-redundant databases were identified. Conclusion The methodology described herein applies statistics to the comparison of metabolic potential in metagenomes. This analysis reveals those subsystems that are more, or less, represented in the different environments that are compared. These differences in metabolic potential lead to several testable hypotheses about the physiology and metabolism of microbes from these ecosystems. PMID:16549025
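
    The abstract does not spell out the test itself; one common choice for deciding whether a subsystem is overrepresented in one metagenome relative to another is an exact test on a 2x2 table of annotation counts. A sketch with hypothetical counts (our assumption of a reasonable test, not necessarily the paper's):

    ```python
    from scipy.stats import fisher_exact

    #                subsystem hits, all other hits
    sargasso_sea = [120, 9880]
    acid_mine    = [45, 9955]

    odds_ratio, p = fisher_exact([sargasso_sea, acid_mine])
    print(f"odds ratio = {odds_ratio:.2f}, p = {p:.3g}")
    ```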

  12. Do clinical safety charts improve paramedic key performance indicator results? (A clinical improvement programme evaluation).

    PubMed

    Ebbs, Phillip; Middleton, Paul M; Bonner, Ann; Loudfoot, Allan; Elliott, Peter

    2012-07-01

    Is the Clinical Safety Chart clinical improvement programme (CIP) effective at improving paramedic key performance indicator (KPI) results within the Ambulance Service of New South Wales? The CIP intervention area was compared with the non-intervention area in order to determine whether there was a statistically significant improvement in KPI results. The CIP was associated with a statistically significant improvement in paramedic KPI results within the intervention area. The strategies used within this CIP are recommended for further consideration.

  13. Statistical methodology for the analysis of dye-switch microarray experiments

    PubMed Central

    Mary-Huard, Tristan; Aubert, Julie; Mansouri-Attia, Nadera; Sandra, Olivier; Daudin, Jean-Jacques

    2008-01-01

    Background In individually dye-balanced microarray designs, each biological sample is hybridized on two different slides, once with Cy3 and once with Cy5. While this strategy ensures an automatic correction of the gene-specific labelling bias, it also induces dependencies between log-ratio measurements that must be taken into account in the statistical analysis. Results We present two original statistical procedures for the statistical analysis of individually balanced designs. These procedures are compared with the usual ML and REML mixed model procedures proposed in most statistical toolboxes, on both simulated and real data. Conclusion The UP procedure we propose as an alternative to usual mixed model procedures is more efficient and significantly faster to compute. This result provides some useful guidelines for the analysis of complex designs. PMID:18271965

  14. Statistical procedures for evaluating daily and monthly hydrologic model predictions

    USGS Publications Warehouse

    Coffey, M.E.; Workman, S.R.; Taraba, J.L.; Fogle, A.W.

    2004-01-01

    The overall study objective was to evaluate the applicability of different qualitative and quantitative methods for comparing daily and monthly SWAT computer model hydrologic streamflow predictions to observed data, and to recommend statistical methods for use in future model evaluations. Statistical methods were tested using daily streamflows and monthly equivalent runoff depths. The statistical techniques included linear regression, Nash-Sutcliffe efficiency, nonparametric tests, t-test, objective functions, autocorrelation, and cross-correlation. None of the methods specifically applied to the non-normal distribution and dependence between data points for the daily predicted and observed data. Of the tested methods, median objective functions, sign test, autocorrelation, and cross-correlation were most applicable for the daily data. The robust coefficient of determination (CD*) and robust modeling efficiency (EF*) objective functions were the preferred methods for daily model results due to the ease of comparing these values with a fixed ideal reference value of one. Predicted and observed monthly totals were more normally distributed, and there was less dependence between individual monthly totals than was observed for the corresponding predicted and observed daily values. More statistical methods were available for comparing SWAT model-predicted and observed monthly totals. The 1995 monthly SWAT model predictions and observed data had a regression R² of 0.70, a Nash-Sutcliffe efficiency of 0.41, and the t-test failed to reject the equal data means hypothesis. The Nash-Sutcliffe coefficient and the R² coefficient were the preferred methods for monthly results due to the ability to compare these coefficients to a set ideal value of one.
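
    For reference, the Nash-Sutcliffe efficiency used throughout this comparison is a single line of arithmetic: one minus the ratio of residual variance to the variance of the observations. A small sketch with invented runoff depths:

    ```python
    import numpy as np

    def nash_sutcliffe(observed, simulated):
        observed, simulated = np.asarray(observed), np.asarray(simulated)
        residual = np.sum((observed - simulated) ** 2)
        variance = np.sum((observed - observed.mean()) ** 2)
        return 1.0 - residual / variance   # 1 means a perfect fit

    obs = [1.2, 3.4, 2.8, 5.1, 4.0]   # hypothetical monthly runoff depths
    sim = [1.0, 3.0, 3.1, 4.6, 4.4]
    print(f"NSE = {nash_sutcliffe(obs, sim):.2f}")
    ```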

  15. DEIVA: a web application for interactive visual analysis of differential gene expression profiles.

    PubMed

    Harshbarger, Jayson; Kratz, Anton; Carninci, Piero

    2017-01-07

    Differential gene expression (DGE) analysis is a technique to identify statistically significant differences in RNA abundance for genes or arbitrary features between different biological states. The result of a DGE test is typically further analyzed using statistical software, spreadsheets or custom ad hoc algorithms. We identified a need for a web-based system to share DGE statistical test results, and to locate and identify genes in DGE statistical test results, with a very low barrier of entry. We have developed DEIVA, a free and open source, browser-based single page application (SPA) with a strong emphasis on user-friendliness, which enables locating and identifying single or multiple genes in an immediate, interactive, and intuitive manner. By design, DEIVA scales with very large numbers of users and datasets. Compared to existing software, DEIVA offers a unique combination of design decisions that enable inspection and analysis of DGE statistical test results with an emphasis on ease of use.

  16. Statistics usage in the American Journal of Obstetrics and Gynecology: has anything changed?

    PubMed

    Welch, Gerald E; Gabbe, Steven G

    2002-03-01

    Our purpose was to compare statistical listing and usage between articles published in the American Journal of Obstetrics and Gynecology in 1994 with those published in 1999. All papers included in the obstetrics, fetus-placenta-newborn, and gynecology sections and the transactions of societies sections of the January through June 1999 issues of the American Journal of Obstetrics and Gynecology (volume 180, numbers 1 to 6) were reviewed for statistical usage. Each paper was given a rating for the cataloging of applied statistics and a rating for the appropriateness of statistical usage, when possible. These results were compared with the data collected on a similar review of articles published in 1994. Of the 238 available articles, 195 contained statistics and were reviewed. In comparison to the articles published in 1994, there were significantly more articles that completely cataloged applied statistics (74.3% vs 47.4%) (P <.0001), and there was a significant improvement in appropriateness of statistical usage (56.4% vs 30.3%) (P <.0001). Changes in the Instructions to Authors regarding the description of applied statistics and probable changes in the behavior of researchers and Editors have led to an improvement in the quality of statistics in papers published in the American Journal of Obstetrics and Gynecology.

  17. Scaled test statistics and robust standard errors for non-normal data in covariance structure analysis: a Monte Carlo study.

    PubMed

    Chou, C P; Bentler, P M; Satorra, A

    1991-11-01

    Research studying robustness of maximum likelihood (ML) statistics in covariance structure analysis has concluded that test statistics and standard errors are biased under severe non-normality. An estimation procedure known as asymptotic distribution free (ADF), making no distributional assumption, has been suggested to avoid these biases. Corrections to the normal theory statistics to yield more adequate performance have also been proposed. This study compares the performance of a scaled test statistic and robust standard errors for two models under several non-normal conditions and also compares these with the results from ML and ADF methods. Both ML and ADF test statistics performed rather well in one model and considerably worse in the other. In general, the scaled test statistic seemed to behave better than the ML test statistic and the ADF statistic performed the worst. The robust and ADF standard errors yielded more appropriate estimates of sampling variability than the ML standard errors, which were usually downward biased, in both models under most of the non-normal conditions. ML test statistics and standard errors were found to be quite robust to the violation of the normality assumption when data had either symmetric and platykurtic distributions, or non-symmetric and zero kurtotic distributions.

  18. Comparative efficacy and safety of two 0.025% tretinoin gels: results from a multicenter double-blind, parallel study.

    PubMed

    Lucky, A W; Cullen, S I; Jarratt, M T; Quigley, J W

    1998-04-01

    The addition of polyolprepolymer-2 to tretinoin formulations may reduce tretinoin-induced cutaneous irritation. This study compared the efficacy and safety of a new 0.025% tretinoin gel containing polyolprepolymer-2, its vehicle, and a commercially-available 0.025% tretinoin gel in patients with mild to moderate acne vulgaris. In this 12-week multicenter, double-blind, parallel group study, efficacy was evaluated by objective lesion counts and the investigators' global evaluations. Safety was evaluated by subjective assessments of cutaneous irritation by the investigators and patients. The efficacy of the two active treatments in this 215-patient study was comparable, and both treatments were statistically significantly more effective than vehicle. When compared with the commercially-available tretinoin gel, the formulation containing polyolprepolymer-2 demonstrated statistically significantly less peeling at days 28, 56, and 84, statistically significantly less dryness by day 84, and statistically significantly less itching at day 14. Irritation scores for the formulation containing polyolprepolymer-2 were numerically lower but not statistically different from those of the commercially-available gel for erythema and burning. The numbers of cutaneous and noncutaneous adverse events were similar for both active medications. The two 0.025% gels studied demonstrated comparable efficacy. However, the gel formulation containing polyolprepolymer-2 caused significantly less peeling and drying than the commercially-available formulation by day 84 of the study.

  19. On the Utility of Content Analysis in Author Attribution: "The Federalist."

    ERIC Educational Resources Information Center

    Martindale, Colin; McKenzie, Dean

    1995-01-01

    Compares the success of lexical statistics, content analysis, and function words in determining the true author of "The Federalist." The function word approach proved most successful in attributing the papers to James Madison. Lexical statistics contributed nothing, while content analytic measures resulted in some success. (MJP)

  20. Comparative evaluation of statistical and mechanistic models of Escherichia coli at beaches in southern Lake Michigan

    USGS Publications Warehouse

    Safaie, Ammar; Wendzel, Aaron; Ge, Zhongfu; Nevers, Meredith; Whitman, Richard L.; Corsi, Steven R.; Phanikumar, Mantha S.

    2016-01-01

    Statistical and mechanistic models are popular tools for predicting the levels of indicator bacteria at recreational beaches. Researchers tend to use one class of model or the other, and it is difficult to generalize statements about their relative performance due to differences in how the models are developed, tested, and used. We describe a cooperative modeling approach for freshwater beaches impacted by point sources in which insights derived from mechanistic modeling were used to further improve the statistical models and vice versa. The statistical models provided a basis for assessing the mechanistic models which were further improved using probability distributions to generate high-resolution time series data at the source, long-term “tracer” transport modeling based on observed electrical conductivity, better assimilation of meteorological data, and the use of unstructured-grids to better resolve nearshore features. This approach resulted in improved models of comparable performance for both classes including a parsimonious statistical model suitable for real-time predictions based on an easily measurable environmental variable (turbidity). The modeling approach outlined here can be used at other sites impacted by point sources and has the potential to improve water quality predictions resulting in more accurate estimates of beach closures.

  1. A clinicomicrobiological study to evaluate the efficacy of manual and powered toothbrushes among autistic patients

    PubMed Central

    Vajawat, Mayuri; Deepika, P. C.; Kumar, Vijay; Rajeshwari, P.

    2015-01-01

    Aim: To compare the efficacy of powered toothbrushes in improving gingival health and reducing salivary red complex counts, as compared to manual toothbrushes, among autistic individuals. Materials and Methods: Forty autistic individuals were selected. The test group received powered toothbrushes, and the control group received manual toothbrushes. Plaque index and gingival index were recorded. Unstimulated saliva was collected for analysis of red complex organisms using polymerase chain reaction. Results: A statistically significant reduction in plaque scores was seen over a period of 12 weeks in both groups (P < 0.001 for tests and P = 0.002 for controls). This reduction was statistically more significant in the test group (P = 0.024). A statistically significant reduction in gingival scores was seen over a period of 12 weeks in both groups (P < 0.001 for tests and P = 0.001 for controls). This reduction was statistically more significant in the test group (P = 0.042). No statistically significant reduction in the detection rate of red complex organisms was seen at 4 weeks in either group. Conclusion: Powered toothbrushes result in a significant overall improvement in gingival health when constant reinforcement of oral hygiene instructions is given. PMID:26681855

  2. Multi-Reader ROC studies with Split-Plot Designs: A Comparison of Statistical Methods

    PubMed Central

    Obuchowski, Nancy A.; Gallas, Brandon D.; Hillis, Stephen L.

    2012-01-01

    Rationale and Objectives Multi-reader imaging trials often use a factorial design, in which study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of the design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper we compare three methods of analysis for the split-plot design. Materials and Methods Three statistical methods are presented: the Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean ANOVA approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power, and confidence interval coverage of the three test statistics. Results The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% CIs falls close to the nominal coverage for small and large sample sizes. Conclusions The split-plot MRMC study design can be statistically efficient compared with the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rate, similar power, and nominal CI coverage, are available for this study design. PMID:23122570

  3. A comparative analysis of the statistical properties of large mobile phone calling networks.

    PubMed

    Li, Ming-Xia; Jiang, Zhi-Qiang; Xie, Wen-Jie; Miccichè, Salvatore; Tumminello, Michele; Zhou, Wei-Xing; Mantegna, Rosario N

    2014-05-30

    Mobile phone calling is one of the most widely used communication methods in modern society. The records of calls among mobile phone users provide a valuable proxy for understanding the human communication patterns embedded in social networks. Mobile phone users call each other, forming a directed calling network. If only reciprocal calls are considered, we obtain an undirected mutual calling network. The preferential communication behavior between two connected users can be statistically tested, and it results in two Bonferroni networks with statistically validated edges. We perform a comparative analysis of the statistical properties of these four networks, which are constructed from the calling records of more than nine million individuals in Shanghai over a period of 110 days. We find that these networks share many common structural properties and also exhibit idiosyncratic features when compared with previously studied large mobile calling networks. The empirical findings provide an intriguing picture of a representative large social network that might shed new light on the modelling of large social networks.

  4. Comparative analysis of the fit of 3-unit implant-supported frameworks cast in nickel-chromium and cobalt-chromium alloys and commercially pure titanium after casting, laser welding, and simulated porcelain firings.

    PubMed

    Tiossi, Rodrigo; Rodrigues, Renata Cristina Silveira; de Mattos, Maria da Glória Chiarello; Ribeiro, Ricardo Faria

    2008-01-01

    This study compared the vertical misfit of 3-unit implant-supported nickel-chromium (Ni-Cr) and cobalt-chromium (Co-Cr) alloy and commercially pure titanium (cpTi) frameworks after casting as 1 piece, after sectioning and laser welding, and after simulated porcelain firings. The results on the tightened side showed no statistically significant differences. On the opposite side, statistically significant differences were found for the Co-Cr alloy (118.64 microm [SD: 91.48] to 39.90 microm [SD: 27.13]) and cpTi (118.56 microm [51.35] to 27.87 microm [12.71]) when comparing 1-piece to laser-welded frameworks. With both sides tightened, only the Co-Cr alloy showed statistically significant differences after laser welding. The Ni-Cr alloy showed the lowest misfit values, though the differences were not statistically significant. Simulated porcelain firings revealed no significant differences.

  5. Discriminatory power of water polo game-related statistics at the 2008 Olympic Games.

    PubMed

    Escalante, Yolanda; Saavedra, Jose M; Mansilla, Mirella; Tella, Victor

    2011-02-01

    The aims of this study were (1) to compare water polo game-related statistics by context (winning and losing teams) and sex (men and women), and (2) to identify characteristics discriminating the performances for each sex. The game-related statistics of the 64 matches (44 men's and 20 women's) played in the final phase of the Olympic Games held in Beijing in 2008 were analysed. Unpaired t-tests compared winners and losers and men and women, and confidence intervals and effect sizes of the differences were calculated. The results were subjected to a discriminant analysis to identify the differentiating game-related statistics of the winning and losing teams. The results showed the differences between winning and losing men's teams to be in both defence and offence, whereas in women's teams they were only in offence. In men's games, passing (assists), aggressive play (exclusions), centre position effectiveness (centre shots), and goalkeeper defence (goalkeeper-blocked 5-m shots) predominated, whereas in women's games the play was more dynamic (possessions). The variable that most discriminated performance in men was goalkeeper-blocked shots, and in women shooting effectiveness (shots). These results should help coaches when planning training and competition.

  6. Round Window Application of an Active Middle Ear Implant: A Comparison With Hearing Aid Usage in Japan.

    PubMed

    Iwasaki, Satoshi; Usami, Shin-Ichi; Takahashi, Haruo; Kanda, Yukihiko; Tono, Tetsuya; Doi, Katsumi; Kumakawa, Kozo; Gyo, Kiyofumi; Naito, Yasushi; Kanzaki, Sho; Yamanaka, Noboru; Kaga, Kimitaka

    2017-07-01

    To report on the safety and efficacy of an investigational active middle ear implant (AMEI) in Japan, and to compare results to preoperative results with a hearing aid. Prospective study conducted in Japan in which 23 Japanese-speaking adults suffering from conductive or mixed hearing loss received a VIBRANT SOUNDBRIDGE with implantation at the round window. Postoperative thresholds, speech perception results (word recognition scores, speech reception thresholds, signal-to-noise ratio [SNR]), and quality of life questionnaires at 20 weeks were compared with preoperative results with all patients receiving the same, best available hearing aid (HA). Statistically significant improvements in postoperative AMEI-aided thresholds (1, 2, 4, and 8 kHz) and on the speech reception thresholds and word recognition scores tests, compared with preoperative HA-aided results, were observed. On the SNR, the subjects' mean values showed statistically significant improvement, with -5.7 dB SNR for the AMEI-aided mean and -2.1 dB SNR for the preoperative HA-assisted mean. The APHAB quality of life questionnaire also showed statistically significant improvement with the AMEI. Results with the AMEI applied to the round window exceeded those of the best available hearing aid in speech perception as well as quality of life questionnaires. There were minimal adverse events or changes to patients' residual hearing.

  7. Comparing geological and statistical approaches for element selection in sediment tracing research

    NASA Astrophysics Data System (ADS)

    Laceby, J. Patrick; McMahon, Joe; Evrard, Olivier; Olley, Jon

    2015-04-01

    Elevated suspended sediment loads reduce reservoir capacity and significantly increase the cost of operating water treatment infrastructure, making the management of sediment supply to reservoirs increasingly important. Sediment fingerprinting techniques can be used to determine the relative contributions of different sources of sediment accumulating in reservoirs. The objective of this research is to compare geological and statistical approaches to element selection for sediment fingerprinting modelling. Time-integrated samplers (n=45) were used to obtain source samples from four major subcatchments flowing into the Baroon Pocket Dam in South East Queensland, Australia. The geochemistry of the potential sources was compared to the geochemistry of sediment cores (n=12) sampled in the reservoir. The geological approach selected elements for modelling that provided expected, observed and statistical discrimination between sediment sources. Two statistical approaches selected elements for modelling with the Kruskal-Wallis H-test and Discriminant Function Analysis (DFA). In particular, two different significance levels (0.05 and 0.35) for the DFA were included to investigate the importance of element selection on modelling results. A distribution model determined the relative contributions of different sources to sediment sampled in the Baroon Pocket Dam. Elemental discrimination was expected between one subcatchment (Obi Obi Creek) and the remaining subcatchments (Lexys, Falls and Bridge Creek). Six major elements were expected to provide discrimination. Of these six, only Fe2O3 and SiO2 provided expected, observed and statistical discrimination. Modelling results with the geological approach indicated 36% (+/- 9%) of sediment sampled in the reservoir cores was from mafic-derived sources and 64% (+/- 9%) from felsic-derived sources. The geological approach and the first statistical approach (DFA0.05) differed by only 1% (σ 5%) for 5 out of 6 model groupings, with only the Lexys Creek modelling results differing significantly (35%). The statistical model with expanded elemental selection (DFA0.35) differed from the geological model by an average of 30% across all 6 models. Elemental selection for sediment fingerprinting therefore has the potential to impact modelling results. Accordingly, it is important to incorporate both robust geological and statistical approaches when selecting elements for sediment fingerprinting. For the Baroon Pocket Dam, management should focus on reducing the supply of sediment derived from felsic sources in each of the subcatchments.
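
    The Kruskal-Wallis screening step described above amounts to testing, element by element, whether concentrations differ across the source groups and retaining only the discriminating elements. A hedged sketch with synthetic geochemistry (group names follow the subcatchments above; the values are invented):

    ```python
    import numpy as np
    from scipy.stats import kruskal

    rng = np.random.default_rng(2)
    element_by_source = {                      # e.g., Fe2O3 wt% per source sample
        "Obi Obi": rng.normal(8.0, 1.0, 12),
        "Lexys":   rng.normal(5.5, 1.0, 11),
        "Falls":   rng.normal(5.2, 1.0, 10),
        "Bridge":  rng.normal(5.4, 1.0, 12),
    }

    H, p = kruskal(*element_by_source.values())
    print(f"H = {H:.2f}, p = {p:.3g}, retain element: {p < 0.05}")
    ```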

  8. Research design and statistical methods in Pakistan Journal of Medical Sciences (PJMS)

    PubMed Central

    Akhtar, Sohail; Shah, Syed Wadood Ali; Rafiq, M.; Khan, Ajmal

    2016-01-01

    Objective: This article compares the study designs and statistical methods used in the 2005, 2010 and 2015 issues of the Pakistan Journal of Medical Sciences (PJMS). Methods: Only original articles of PJMS were considered for the analysis. The articles were carefully reviewed for statistical methods and designs, and then recorded accordingly. The frequency of each statistical method and research design was estimated and compared with previous years. Results: A total of 429 articles were evaluated (n=74 in 2005, n=179 in 2010, n=176 in 2015), of which 171 (40%) were cross-sectional and 116 (27%) were prospective study designs. A variety of statistical methods was found in the analysis. The most frequent methods included descriptive statistics (n=315, 73.4%), chi-square/Fisher’s exact tests (n=205, 47.8%) and Student’s t-test (n=186, 43.4%). There was a significant increase in the use of statistical methods over the time period: t-test, chi-square/Fisher’s exact test, logistic regression, epidemiological statistics, and non-parametric tests. Conclusion: This study shows that a diverse variety of statistical methods has been used in the research articles of PJMS and that their frequency increased from 2005 to 2015. However, descriptive statistics was the most frequent method of statistical analysis in the published articles, while the cross-sectional design was the most common study design. PMID:27022365

  9. [Again review of research design and statistical methods of Chinese Journal of Cardiology].

    PubMed

    Kong, Qun-yu; Yu, Jin-ming; Jia, Gong-xian; Lin, Fan-li

    2012-11-01

    To re-evaluate the research designs and the use of statistical methods in the Chinese Journal of Cardiology, and to compare the findings with an earlier evaluation. We summarized the research designs and statistical methods in all original papers published in the Chinese Journal of Cardiology throughout 2011 and compared the results with the evaluation of 2008. (1) There was no difference in the distribution of research designs between the two volumes. Compared with the earlier volume, the use of survival regression and non-parametric tests increased, while the proportion of articles with no statistical analysis decreased. (2) The proportions of problem articles in the later volume were significantly lower than in the former: 6 (4%) with flaws in design, 5 (3%) with flaws in expression, and 9 (5%) with incomplete analysis. (3) The rates of correct use of variance analysis, multi-group comparisons and tests of normality increased. The usage error rate decreased (17% vs. 25%), though without statistical significance, owing to neglect of the test of homogeneity of variance. The Chinese Journal of Cardiology showed many improvements, such as in the regulation of design and statistics. The homogeneity of variance should be given more attention in future applications.

  10. Optical diagnosis of cervical cancer by higher order spectra and boosting

    NASA Astrophysics Data System (ADS)

    Pratiher, Sawon; Mukhopadhyay, Sabyasachi; Barman, Ritwik; Pratiher, Souvik; Pradhan, Asima; Ghosh, Nirmalya; Panigrahi, Prasanta K.

    2017-03-01

    In this contribution, we report the application of higher order statistical moments with decision tree and ensemble based learning methodologies to the development of diagnostic algorithms for the optical diagnosis of cancer. The classification results were compared to those obtained with an independent feature extractor, linear discriminant analysis (LDA). The boosting methodology using higher order statistics as features achieved higher specificity and sensitivity while being much faster compared to other time-frequency domain based methods.

  11. Comparing multiple statistical methods for inverse prediction in nuclear forensics applications

    DOE PAGES

    Lewis, John R.; Zhang, Adah; Anderson-Cook, Christine Michaela

    2017-10-29

    Forensic science seeks to predict source characteristics using measured observables. Statistically, this objective can be thought of as an inverse problem where interest is in the unknown source characteristics or factors (X) of some underlying causal model producing the observables or responses (Y = g(X) + error). Here, this paper reviews several statistical methods for use in inverse problems and demonstrates that comparing results from multiple methods can be used to assess predictive capability. Motivation for assessing inverse predictions comes from the desired application to historical and future experiments involving nuclear material production for forensics research, in which inverse predictions, along with an assessment of predictive capability, are desired.
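
    In its simplest form, the inverse problem described here reduces to classical calibration: fit the forward model Y = g(X) on training data, then invert it to predict the unknown source characteristic from a new observable. The sketch below is a generic linear example, not one of the paper's methods:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    x = np.linspace(0, 10, 30)                        # known source characteristic
    y = 2.0 + 0.8 * x + rng.normal(0, 0.3, x.size)    # measured observable

    b, a = np.polyfit(x, y, 1)                        # forward calibration fit
    y_new = 7.1                                       # new measurement
    x_pred = (y_new - a) / b                          # classical inverse prediction
    print(f"predicted X = {x_pred:.2f}")
    ```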

  13. Cardiac arrest risk standardization using administrative data compared to registry data

    PubMed Central

    Gaieski, David F.; Donnino, Michael W.; Nelson, Joshua I. M.; Mutter, Eric L.; Carr, Brendan G.; Abella, Benjamin S.; Wiebe, Douglas J.

    2017-01-01

    Background Methods for comparing hospitals regarding cardiac arrest (CA) outcomes, vital for improving resuscitation performance, rely on data collected by cardiac arrest registries. However, most CA patients are treated at hospitals that do not participate in such registries. This study aimed to determine whether CA risk standardization modeling based on administrative data could perform as well as that based on registry data. Methods and results Two risk standardization logistic regression models were developed using 2453 patients treated from 2000–2015 at three hospitals in an academic health system. Registry and administrative data were accessed for all patients. The outcome was death at hospital discharge. The registry model was considered the “gold standard” with which to compare the administrative model, using metrics including comparing areas under the curve, calibration curves, and Bland-Altman plots. The administrative risk standardization model had a c-statistic of 0.891 (95% CI: 0.876–0.905) compared to a registry c-statistic of 0.907 (95% CI: 0.895–0.919). When limited to only non-modifiable factors, the administrative model had a c-statistic of 0.818 (95% CI: 0.799–0.838) compared to a registry c-statistic of 0.810 (95% CI: 0.788–0.831). All models were well-calibrated. There was no significant difference between c-statistics of the models, providing evidence that valid risk standardization can be performed using administrative data. Conclusions Risk standardization using administrative data performs comparably to standardization using registry data. This methodology represents a new tool that can enable opportunities to compare hospital performance in specific hospital systems or across the entire US in terms of survival after CA. PMID:28783754
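
    The c-statistic reported throughout is the area under the ROC curve of the discharge-mortality model. A minimal sketch with toy outcomes and risk scores (assuming scikit-learn; the values are not from the study):

    ```python
    from sklearn.metrics import roc_auc_score

    died          = [0, 0, 1, 1, 1, 0, 1, 0]                  # outcome at discharge
    risk_registry = [0.1, 0.3, 0.8, 0.7, 0.9, 0.2, 0.6, 0.4]  # registry-based model
    risk_admin    = [0.2, 0.3, 0.7, 0.6, 0.9, 0.1, 0.7, 0.5]  # administrative model

    print(f"registry c-statistic       = {roc_auc_score(died, risk_registry):.3f}")
    print(f"administrative c-statistic = {roc_auc_score(died, risk_admin):.3f}")
    ```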

  14. The construction and assessment of a statistical model for the prediction of protein assay data.

    PubMed

    Pittman, J; Sacks, J; Young, S Stanley

    2002-01-01

    The focus of this work is the development of a statistical model for a bioinformatics database whose distinctive structure makes model assessment an interesting and challenging problem. The key components of the statistical methodology, including a fast approximation to the singular value decomposition and the use of adaptive spline modeling and tree-based methods, are described, and preliminary results are presented. These results are shown to compare favorably to selected results achieved using comparative methods. An attempt to determine the predictive ability of the model through the use of cross-validation experiments is discussed. In conclusion, a synopsis of the results of these experiments and their implications for the analysis of bioinformatics databases in general is presented.

  15. [Comparison of application of Cochran-Armitage trend test and linear regression analysis for rate trend analysis in epidemiology study].

    PubMed

    Wang, D Z; Wang, C; Shen, C F; Zhang, Y; Zhang, H; Song, G D; Xue, X D; Xu, Z L; Zhang, S; Jiang, G H

    2017-05-10

    We described the time trend of acute myocardial infarction (AMI) incidence in Tianjin from 1999 to 2013 using the Cochran-Armitage trend (CAT) test and linear regression analysis, and compared the results. Based on the actual population, the CAT test had much stronger statistical power than linear regression analysis for both the overall incidence trend and the age-specific incidence trends (Cochran-Armitage trend P value
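
    SciPy does not ship a Cochran-Armitage test, but the statistic is short to compute from its standard form: T = sum_i s_i (r_i - n_i p̄), with variance p̄(1 - p̄) [sum_i s_i² n_i - (sum_i s_i n_i)² / N]. A hedged sketch with invented incidence counts:

    ```python
    import numpy as np
    from scipy.stats import norm

    def cochran_armitage(cases, totals, scores):
        cases, totals, scores = map(np.asarray, (cases, totals, scores))
        p_bar = cases.sum() / totals.sum()
        t = np.sum(scores * (cases - totals * p_bar))
        var = p_bar * (1 - p_bar) * (np.sum(scores**2 * totals)
                                     - np.sum(scores * totals)**2 / totals.sum())
        z = t / np.sqrt(var)
        return z, 2 * norm.sf(abs(z))            # two-sided p-value

    scores = np.arange(5)                        # ordered year index
    cases  = [120, 135, 150, 170, 190]           # hypothetical annual AMI counts
    totals = [100_000] * 5                       # population at risk per year
    z, p = cochran_armitage(cases, totals, scores)
    print(f"z = {z:.2f}, p = {p:.3g}")
    ```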

  16. Results of a joint NOAA/NASA sounder simulation study

    NASA Technical Reports Server (NTRS)

    Phillips, N.; Susskind, Joel; Mcmillin, L.

    1988-01-01

    This paper presents the results of a joint NOAA and NASA sounder simulation study in which the accuracies of atmospheric temperature profiles and surface skin temperature measurements retrieved from two sounders were compared: (1) the currently used IR temperature sounder HIRS2 (High-resolution Infrared Radiation Sounder 2); and (2) the recently proposed high-spectral-resolution IR sounder AMTS (Advanced Moisture and Temperature Sounder). Simulations were conducted for both clear and partial cloud conditions. Data were analyzed at NASA using a physical inversion technique and at NOAA using a statistical technique. Results show significant improvement of AMTS compared to HIRS2 for both clear and cloudy conditions. The improvements are indicated by both methods of data analysis, but the physical retrievals outperform the statistical retrievals.

  17. Methods for detrending success metrics to account for inflationary and deflationary factors*

    NASA Astrophysics Data System (ADS)

    Petersen, A. M.; Penner, O.; Stanley, H. E.

    2011-01-01

    Time-dependent economic, technological, and social factors can artificially inflate or deflate quantitative measures of career success. Here we develop and test a statistical method for normalizing career success metrics across time-dependent factors. In particular, this method addresses the long-standing question: how do we compare the career achievements of professional athletes from different historical eras? Developing an objective approach will be of particular importance over the next decade as major league baseball (MLB) players from the “steroids era” become eligible for Hall of Fame induction. Some experts are calling for asterisks (*) to be placed next to the career statistics of athletes found guilty of using performance-enhancing drugs (PED). Here we address this issue, as well as the general problem of comparing statistics from distinct eras, by detrending the seasonal statistics of professional baseball players. We detrend player statistics by normalizing achievements to seasonal averages, which accounts for changes in relative player ability resulting from a range of factors. Our methods are general, and can be extended to various arenas of competition where time-dependent factors play a key role. For five statistical categories, we compare the probability density function (pdf) of detrended career statistics to the pdf of raw career statistics calculated for all player careers in the 90-year period 1920-2009. We find that the functional form of these pdfs is stationary under detrending. This stationarity implies that the statistical regularity observed in the right-skewed distributions for longevity and success in professional sports arises from both the wide range of intrinsic talent among athletes and the underlying nature of competition. We fit the pdfs for career success with the Gamma distribution in order to calculate objective benchmarks based on extreme statistics which can be used for the identification of extraordinary careers.
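
    The detrending recipe described, normalizing each season's tallies by that season's league average before pooling across eras, fits in a few lines; the Gamma fit at the end mirrors the benchmark step. The tallies are synthetic, not real MLB statistics:

    ```python
    import numpy as np
    from scipy.stats import gamma

    rng = np.random.default_rng(4)
    seasons = {1925: rng.gamma(2.0, 6.0, 200),    # per-player tallies, deflated era
               2005: rng.gamma(2.0, 12.0, 200)}   # inflated era: doubled mean

    detrended = np.concatenate(
        [tallies / tallies.mean() for tallies in seasons.values()])

    # fit the pooled, detrended distribution with a Gamma pdf
    shape, loc, scale = gamma.fit(detrended, floc=0)
    print(f"gamma shape = {shape:.2f}, scale = {scale:.2f}")
    ```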

  18. Bayesian Statistics in Educational Research: A Look at the Current State of Affairs

    ERIC Educational Resources Information Center

    König, Christoph; van de Schoot, Rens

    2018-01-01

    The ability of a scientific discipline to build cumulative knowledge depends on its predominant method of data analysis. A steady accumulation of knowledge requires approaches which allow researchers to consider results from comparable prior research. Bayesian statistics is especially relevant for establishing a cumulative scientific discipline,…

  19. The Empirical Review of Meta-Analysis Published in Korea

    ERIC Educational Resources Information Center

    Park, Sunyoung; Hong, Sehee

    2016-01-01

    Meta-analysis is a statistical method that is increasingly utilized to combine and compare the results of previous primary studies. However, because of the lack of comprehensive guidelines for how to use meta-analysis, many meta-analysis studies have failed to consider important aspects, such as statistical programs, power analysis, publication…

  20. Comparing the Lifetimes of Two Brands of Batteries

    ERIC Educational Resources Information Center

    Dunn, Peter K.

    2013-01-01

    In this paper, we report a case study that illustrates the importance in interpreting the results from statistical tests, and shows the difference between practical importance and statistical significance. This case study presents three sets of data concerning the performance of two brands of batteries. The data are easy to describe and…

  1. Detection of Person Misfit in Computerized Adaptive Tests with Polytomous Items.

    ERIC Educational Resources Information Center

    van Krimpen-Stoop, Edith M. L. A.; Meijer, Rob R.

    2002-01-01

    Compared the nominal and empirical null distributions of the standardized log-likelihood statistic for polytomous items for paper-and-pencil (P&P) and computerized adaptive tests (CATs). Results show that the empirical distribution of the statistic differed from the assumed standard normal distribution for both P&P tests and CATs. Also…

  2. The Development and Demonstration of Multiple Regression Models for Operant Conditioning Questions.

    ERIC Educational Resources Information Center

    Fanning, Fred; Newman, Isadore

    Based on the assumption that inferential statistics can make the operant conditioner more sensitive to possible significant relationships, regression models were developed to test the statistical significance of differences between the slopes and Y intercepts of the experimental and control group subjects. These results were then compared to the traditional operant…

  3. Flipped Statistics Class Results: Better Performance than Lecture over One Year Later

    ERIC Educational Resources Information Center

    Winquist, Jennifer R.; Carlson, Keith A.

    2014-01-01

    In this paper, we compare an introductory statistics course taught using a flipped classroom approach to the same course taught using a traditional lecture based approach. In the lecture course, students listened to lecture, took notes, and completed homework assignments. In the flipped course, students read relatively simple chapters and answered…

  4. Comparative evaluation of guided tissue regeneration with use of collagen-based barrier freeze-dried dura mater allograft for mandibular class 2 furcation defects (a comparative controlled clinical study).

    PubMed

    Patel, Sandeep; Kubavat, Ajay; Ruparelia, Brijesh; Agarwal, Arvind; Panda, Anup

    2012-01-01

    The aim of periodontal surgery is complete regeneration. The present study was designed to evaluate and compare clinically the soft tissue changes, in the form of probing pocket depth, gingival shrinkage, and attachment level, and the hard tissue changes, in the form of horizontal and vertical bone level, using resorbable membranes. Twelve subjects with bilateral class 2 furcation defects were selected. After initial phase one treatment, open debridement was performed in the control site while a freeze-dried dura mater allograft (FDDMA) was used in the experimental site. Soft and hard tissue parameters were registered intrasurgically. Reentry at nine months allowed better understanding and evaluation of the final outcome of the study. Guided tissue regeneration is a predictable treatment modality for class 2 furcation defects. There was a statistically significant reduction in pocket depth as compared to control (p < 0.01). There was a statistically significant increase in periodontal attachment level within both control and experimental sites, with the experimental sites showing better results (p < 0.01). For the hard tissue parameters, significant defect fill resulted in the experimental group, while in the control group less significant defect fill was found in the horizontal direction and nonsignificant defect fill in the vertical direction. The results showed statistically significant improvement in soft and hard tissue parameters and less gingival shrinkage in the experimental sites compared to the control sites. The use of FDDMA in furcation defects helps to achieve predictable results. This cross-linked collagen membrane has better handling properties and ease of procurement as well as economic viability, making it a logical material to be used in regenerative surgeries.

  5. Low-level contrast statistics are diagnostic of invariance of natural textures

    PubMed Central

    Groen, Iris I. A.; Ghebreab, Sennay; Lamme, Victor A. F.; Scholte, H. Steven

    2012-01-01

    Texture may provide important clues for real world object and scene perception. To be reliable, these clues should ideally be invariant to common viewing variations such as changes in illumination and orientation. In a large image database of natural materials, we found textures with low-level contrast statistics that varied substantially under viewing variations, as well as textures that remained relatively constant. This led us to ask whether textures with constant contrast statistics give rise to more invariant representations compared to other textures. To test this, we selected natural texture images with either high (HV) or low (LV) variance in contrast statistics and presented these to human observers. In two distinct behavioral categorization paradigms, participants more often judged HV textures as “different” compared to LV textures, showing that textures with constant contrast statistics are perceived as being more invariant. In a separate electroencephalogram (EEG) experiment, evoked responses to single texture images (single-image ERPs) were collected. The results show that differences in contrast statistics correlated with both early and late differences in occipital ERP amplitude between individual images. Importantly, ERP differences between images of HV textures were mainly driven by illumination angle, which was not the case for LV images: there, differences were completely driven by texture membership. These converging neural and behavioral results imply that some natural textures are surprisingly invariant to illumination changes and that low-level contrast statistics are diagnostic of the extent of this invariance. PMID:22701419

  6. Efficacy of a radiation absorbing shield in reducing dose to the interventionalist during peripheral endovascular procedures: a single centre pilot study.

    PubMed

    Power, S; Mirza, M; Thakorlal, A; Ganai, B; Gavagan, L D; Given, M F; Lee, M J

    2015-06-01

    This prospective pilot study was undertaken to evaluate the feasibility and effectiveness of using a radiation absorbing shield to reduce operator dose from scatter during lower limb endovascular procedures. A commercially available bismuth shield system (RADPAD) was used. Sixty consecutive patients undergoing lower limb angioplasty were included. Thirty procedures were performed without the RADPAD (control group) and thirty with the RADPAD (study group). Two separate methods were used to measure dose to a single operator. Thermoluminescent dosimeter (TLD) badges were used to measure hand, eye, and unshielded body dose. A direct dosimeter with digital readout was also used to measure eye and unshielded body dose. To allow for variation between control and study groups, dose per unit time was calculated. TLD results demonstrated a significant reduction in median body dose per unit time for the study group compared with controls (p = 0.001), corresponding to a mean dose reduction rate of 65 %. Median eye and hand dose per unit time were also reduced in the study group compared with the control group; however, this was not statistically significant (p = 0.081 for eye, p = 0.628 for hand). Direct dosimeter readings also showed statistically significant reduction in median unshielded body dose rate for the study group compared with controls (p = 0.037). Eye dose rate was reduced for the study group but this was not statistically significant (p = 0.142). Initial results are encouraging. Use of the shield resulted in a statistically significant reduction in unshielded dose to the operator's body. Measured doses to the eye and hand of the operator were also reduced but did not reach statistical significance in this pilot study.
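
    As an illustration of the normalization and comparison described in this record, the sketch below computes dose per unit time and compares group medians. The abstract does not name the statistical test; a Mann-Whitney U test is used here as one plausible nonparametric choice, and all dose and time values are synthetic.

```python
# Sketch of dose-per-unit-time normalization and a median comparison.
# The abstract does not name the test; Mann-Whitney U is an assumption.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(9)
dose_control = rng.lognormal(mean=3.0, sigma=0.5, size=30)   # uSv per procedure (synthetic)
dose_shield = rng.lognormal(mean=2.0, sigma=0.5, size=30)
time_control = rng.uniform(5, 30, size=30)                   # fluoroscopy minutes (synthetic)
time_shield = rng.uniform(5, 30, size=30)

rate_control = dose_control / time_control                   # dose per unit time
rate_shield = dose_shield / time_shield

u, p = mannwhitneyu(rate_shield, rate_control, alternative="two-sided")
reduction = 1 - np.median(rate_shield) / np.median(rate_control)
print(f"median dose-rate reduction = {reduction:.0%}, p = {p:.4f}")
```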

  7. Evaluation of Three Different Processing Techniques in the Fabrication of Complete Dentures

    PubMed Central

    Chintalacheruvu, Vamsi Krishna; Balraj, Rajasekaran Uttukuli; Putchala, Lavanya Sireesha; Pachalla, Sreelekha

    2017-01-01

    Aims and Objectives: The objective of the present study was to compare the effectiveness of three different processing techniques and to determine the accuracy of the processing techniques through the number of occlusal interferences and the increase in vertical dimension after denture processing. Materials and Methods: A cross-sectional study was conducted on a sample of 18 patients indicated for complete denture fabrication, divided into three subgroups. Three processing techniques, compression molding and injection molding using either prepolymerized or unpolymerized resin, were used to fabricate dentures for each of the groups. After processing, laboratory-remounted dentures were evaluated for the number of occlusal interferences in centric and eccentric relations and for the change in vertical dimension through vertical pin rise in the articulator. Data were analyzed with one-way ANOVA using SPSS software version 19.0 (IBM). Results: Data obtained from the three groups were subjected to a one-way ANOVA test. Results with significant variations were then subjected to a post hoc test. The number of occlusal interferences with the compression molding technique was higher in both centric and eccentric positions than with the two injection molding techniques, with statistical significance in centric, protrusive, right lateral nonworking, and left lateral working positions (P < 0.05). Mean vertical pin rise (0.52 mm) was higher in the compression molding technique than in the injection molding techniques, a statistically significant difference (P < 0.001). Conclusions: Within the limitations of this study, injection molding techniques exhibited fewer processing errors than the compression molding technique, with statistical significance. There was no statistically significant difference in processing errors between the two injection molding systems. PMID:28713763
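
    The analysis pipeline this record describes (one-way ANOVA followed by a post hoc test on significant results) can be sketched as follows; the vertical pin rise values are hypothetical and Tukey's HSD is assumed as the post hoc test, which the abstract does not specify.

```python
# Illustrative sketch: one-way ANOVA followed by a post hoc test.
# Pin-rise values are hypothetical; Tukey's HSD is an assumed post hoc choice.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

compression = np.array([0.55, 0.48, 0.60, 0.50, 0.52, 0.47])      # vertical pin rise, mm
injection_pre = np.array([0.20, 0.25, 0.18, 0.22, 0.21, 0.19])
injection_unpoly = np.array([0.23, 0.19, 0.24, 0.20, 0.22, 0.21])

f_stat, p_value = f_oneway(compression, injection_pre, injection_unpoly)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Pairwise post hoc comparisons only if the omnibus test is significant
if p_value < 0.05:
    values = np.concatenate([compression, injection_pre, injection_unpoly])
    groups = ["compression"] * 6 + ["injection-pre"] * 6 + ["injection-unpoly"] * 6
    print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```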

  8. Effectiveness of Platelet Rich Plasma and Bone Graft in the Treatment of Intrabony Defects: A Clinico-radiographic Study

    PubMed Central

    Jalaluddin, Mohammad; Mahesh, Jayachandran; Mahesh, Rethi; Jayanti, Ipsita; Faizuddin, Mohamed; Kripal, Krishna; Nazeer, Nazia

    2018-01-01

    Background & Objectives: Periodontal disease is characterized by the presence of gingival inflammation, periodontal pocket formation, loss of connective tissue attachment, and loss of alveolar bone around the affected tooth. Different modalities have been employed in the treatment and regeneration of periodontal defects, including the use of bone grafts, PRP, and other growth factors. The purpose of this prospective, randomized controlled study was to compare the regenerative efficacy of PRP and bone graft in intrabony periodontal defects. Methodology: This randomized controlled trial was carried out in the Department of Periodontics & Oral Implantology, Kalinga Institute of Dental Sciences and Hospital, KIIT University, Bhubaneswar. The study sample included 20 periodontal infrabony defects in 20 patients, 12 males and 8 females. The patients were aged between 25 and 45 years (mean age 35 years). The 20 sites selected for the study were randomly divided into 2 groups of 10 sites each. Group A: PRP alone; Group B: bone graft. Statistical Analysis & Results: Statistical analysis was done using SPSS (version 18.0), with paired ‘t’ tests and ANOVA, which revealed a significant reduction in gingival index, plaque index, and probing pocket depth and a gain in clinical attachment level at various time intervals within both groups. Radiographic evaluation revealed statistically significant defect fill (p<0.001) at the end of 6 months within both groups. However, there was a statistically significant difference seen in group B radiographically, when compared to group A. Conclusion: Both groups showed promising results in enhancing periodontal regeneration; however, the results with bone graft were comparatively better, although not statistically significantly so when compared to PRP alone. PMID:29682091

  9. A Comparative Evaluation of Mixed Dentition Analysis on Reliability of Cone Beam Computed Tomography Image Compared to Plaster Model

    PubMed Central

    Gowd, Snigdha; Shankar, T; Dash, Samarendra; Sahoo, Nivedita; Chatterjee, Suravi; Mohanty, Pritam

    2017-01-01

    Aims and Objective: The aim of the study was to evaluate the reliability of cone beam computed tomography (CBCT) obtained images over plaster models for the assessment of mixed dentition analysis. Materials and Methods: Thirty CBCT-derived images and thirty plaster models were derived from the dental archives, and Moyer's and Tanaka-Johnston analyses were performed. The data obtained were interpreted and analyzed statistically using SPSS 10.0/PC (SPSS Inc., Chicago, IL, USA). Descriptive and analytical analysis along with Student's t-test was performed to evaluate the data, and P < 0.05 was considered statistically significant. Results: Statistically significant results were obtained on data comparison between CBCT-derived images and plaster models; the mean for Moyer's analysis in the left and right lower arch for CBCT and plaster models was 21.2 mm, 21.1 mm and 22.5 mm, 22.5 mm, respectively. Conclusion: CBCT-derived images were less reliable than data obtained directly from plaster models for mixed dentition analysis. PMID:28852639

  10. Comparative evaluation of the accuracy of linear measurements between cone beam computed tomography and 3D microtomography.

    PubMed

    Mangione, Francesca; Meleo, Deborah; Talocco, Marco; Pecci, Raffaella; Pacifici, Luciano; Bedini, Rossella

    2013-01-01

    The aim of this study was to evaluate the influence of artifacts on the accuracy of linear measurements estimated with a common cone beam computed tomography (CBCT) system used in dental clinical practice, by comparing it with a microCT system as the standard reference. Ten bovine bone cylindrical samples, each containing one implant, able to provide both points of reference and image quality degradation, were scanned with the CBCT and microCT systems. Using the software of the two systems, two diameters were measured for each cylindrical sample at different levels, using different points of the implants as references. Results were analyzed by ANOVA and a statistically significant difference was found. Based on these results, the measurements made with the two different instruments are still not statistically comparable, although similar performances, with differences that were not statistically significant, were obtained in some samples. With the improvement of the hardware and software of CBCT systems, in the near future the two instruments may be able to provide similar performances.

  11. Wastewater-Based Epidemiology of Stimulant Drugs: Functional Data Analysis Compared to Traditional Statistical Methods.

    PubMed

    Salvatore, Stefania; Bramness, Jørgen Gustav; Reid, Malcolm J; Thomas, Kevin Victor; Harman, Christopher; Røislien, Jo

    2015-01-01

    Wastewater-based epidemiology (WBE) is a new methodology for estimating the drug load in a population. Simple summary statistics and specification tests have typically been used to analyze WBE data, comparing differences between weekday and weekend loads. Such standard statistical methods may, however, overlook important nuanced information in the data. In this study, we apply functional data analysis (FDA) to WBE data and compare the results to those obtained from more traditional summary measures. We analysed temporal WBE data from 42 European cities, using sewage samples collected daily for one week in March 2013. For each city, the main temporal features of two selected drugs were extracted using functional principal component (FPC) analysis, along with simpler measures such as the area under the curve (AUC). The individual cities' scores on each of the temporal FPCs were then used as outcome variables in multiple linear regression analysis with various city and country characteristics as predictors. The results were compared to those of functional analysis of variance (FANOVA). The first three FPCs explained more than 99% of the temporal variation. The first component (FPC1) represented the level of the drug load, while the second and third temporal components represented the level and the timing of a weekend peak. AUC was highly correlated with FPC1, but other temporal characteristics were not captured by the simple summary measures. FANOVA was less flexible than the FPCA-based regression, but showed concordant results. Geographical location was the main predictor for the general level of the drug load. FDA of WBE data extracts more detailed information about drug load patterns during the week which are not identified by more traditional statistical methods. Results also suggest that regression based on FPC results is a valuable addition to FANOVA for estimating associations between temporal patterns and covariate information.
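
    A minimal sketch of the FPCA step described above, assuming each row of the data matrix is one city's seven-day load curve: the FPCs are obtained from an SVD of the centered curves, and the city scores could then feed a regression, as in the study. The data here are synthetic.

```python
# Functional PCA on weekly drug-load curves (cities x days), via SVD of the
# centered data matrix. All loads are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_cities, n_days = 42, 7
base = rng.gamma(shape=5.0, scale=100.0, size=(n_cities, 1))          # city-specific level
weekend = np.array([0, 0, 0, 0, 1.0, 1.5, 0.5]) * rng.gamma(2.0, 30.0, size=(n_cities, 1))
loads = base + weekend + rng.normal(0, 10, size=(n_cities, n_days))   # Mon..Sun loads

mean_curve = loads.mean(axis=0)
centered = loads - mean_curve
U, S, Vt = np.linalg.svd(centered, full_matrices=False)   # rows of Vt are the FPCs

explained = S**2 / np.sum(S**2)
print("variance explained by first 3 FPCs:", explained[:3].round(3))

scores = centered @ Vt.T        # city scores on each FPC; these could serve as
print(scores[:5, :3])           # outcome variables in a regression, as in the study
```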

  12. A Comparison of Forecast Error Generators for Modeling Wind and Load Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Ning; Diao, Ruisheng; Hafen, Ryan P.

    2013-07-25

    This paper presents four algorithms to generate random forecast error time series. The performance of the four algorithms is compared. The error time series are used to create real-time (RT), hour-ahead (HA), and day-ahead (DA) wind and load forecast time series that statistically match historically observed forecasting data sets used in power grid operation, to study the net load balancing need in variable generation integration studies. The four algorithms are truncated-normal distribution models, state-space based Markov models, seasonal autoregressive moving average (ARMA) models, and a stochastic-optimization based approach. The comparison is made using historical DA load forecast and actual load values to generate new sets of DA forecasts with similar statistical forecast error characteristics (i.e., mean, standard deviation, autocorrelation, and cross-correlation). The results show that all methods generate satisfactory results. One method may preserve one or two required statistical characteristics better than the other methods, but may not preserve the other statistical characteristics as well. Because the wind and load forecast error generators are used in wind integration studies to produce wind and load forecast time series for stochastic planning processes, it is sometimes critical to use multiple methods to generate the error time series to obtain a statistically robust result. Therefore, this paper discusses and compares the capabilities of each algorithm to preserve the characteristics of the historical forecast data sets.
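
    As a hedged sketch of one of the four approaches (an ARMA-family model, here reduced to AR(1)), the code below generates a synthetic error series whose mean, standard deviation, and lag-1 autocorrelation match a stand-in historical series; the actual algorithms in the paper are more elaborate.

```python
# AR(1) forecast-error generator matching mean, standard deviation, and
# lag-1 autocorrelation of a (synthetic) historical error series.
import numpy as np

rng = np.random.default_rng(42)
historical_error = rng.normal(0, 50, 1000)                                  # stand-in DA errors
historical_error = np.convolve(historical_error, [0.6, 0.4], mode="same")  # induce correlation

mu, sigma = historical_error.mean(), historical_error.std()
rho = np.corrcoef(historical_error[:-1], historical_error[1:])[0, 1]        # lag-1 autocorrelation

# AR(1): e_t = rho * e_{t-1} + w_t, innovation variance chosen so the
# stationary variance equals sigma^2.
n = 1000
innov_sd = sigma * np.sqrt(1 - rho**2)
synthetic = np.empty(n)
synthetic[0] = rng.normal(0, sigma)
for t in range(1, n):
    synthetic[t] = rho * synthetic[t - 1] + rng.normal(0, innov_sd)
synthetic += mu

print(f"target  mean={mu:.2f} sd={sigma:.2f} rho={rho:.2f}")
print(f"sampled mean={synthetic.mean():.2f} sd={synthetic.std():.2f} "
      f"rho={np.corrcoef(synthetic[:-1], synthetic[1:])[0, 1]:.2f}")
```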

  13. Differences in pulmonary biochemical and inflammatory responses of rats and guinea pigs resulting from daytime or nighttime, single and repeated exposure to ozone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Bree, L.; Marra, M.; Rombout, P.J.

    1992-10-01

    Rats and guinea pigs were exposed to 0.8 mg ozone (O3)/m3 (approximately 0.4 ppm) for 12 hr during the daytime, 12 hr during the nighttime, or continuously to investigate circadian variation in O3-induced pulmonary toxicity during single and repeated O3 exposures. Biomarkers in bronchoalveolar lavage (BAL) fluid and lung tissues were measured as indicators of biochemical and inflammatory responses. Nighttime O3 exposure of rats resulted in larger increases of protein, albumin, and inflammatory cells in BAL fluid compared to those after daytime O3 exposure and this daytime-nighttime difference was statistically significant (p < 0.05). Single daytime or nighttime O3 exposure of guinea pigs resulted in comparable increases of BAL fluid proteins and inflammatory cells without a daytime-nighttime difference. Nighttime and continuous O3 exposure of rats for 3 days resulted in comparable increases in lung antioxidant enzyme activities, both of which differed statistically from effects from daytime O3 exposures (p < 0.05). Continuous O3 exposure of guinea pigs for 3 days caused, in general, statistically larger increases in lung tissue parameters compared to nighttime O3 exposures (p < 0.05). These results suggest that the extent of O3-induced acute pulmonary biochemical and inflammatory responses is directly related to the level of physical and respiratory activity. For rats, effects from continuous O3 exposure appear to be controlled by the nighttime, physically active period. In guinea pigs, the comparable responses following daytime or nighttime O3 exposure seem in accordance with their random behavioral daily activity pattern. This study supports the view that physical activity-related increases in inhaled dose significantly enhance the pulmonary O3 responses.

  14. Objective forensic analysis of striated, quasi-striated and impressed toolmarks

    NASA Astrophysics Data System (ADS)

    Spotts, Ryan E.

    Following the 1993 Daubert v. Merrell Dow Pharmaceuticals, Inc. court case and continuing to the 2010 National Academy of Sciences report, comparative forensic toolmark examination has received many challenges to its admissibility in court cases and its scientific foundations. Many of these challenges deal with the subjective nature of determining whether toolmarks are identifiable. This questioning of current identification methods has created a demand for objective methods of identification - "objective" implying known error rates and statistical reliability. The demand for objective methods has resulted in research that created a statistical algorithm capable of comparing toolmarks to determine their statistical similarity, and thus the ability to separate matching and nonmatching toolmarks. This was expanded to the creation of virtual toolmarking (characterization of a tool to predict the toolmark it will create). The statistical algorithm, originally designed for two-dimensional striated toolmarks, had been successfully applied to striated screwdriver and quasi-striated plier toolmarks. Following this success, a blind study was conducted to validate the virtual toolmarking capability using striated screwdriver marks created at various angles of incidence. Work was also performed to optimize the statistical algorithm by implementing means to ensure the algorithm operations were constrained to logical comparison regions (e.g. the opposite ends of two toolmarks do not need to be compared because they do not coincide with each other). This work was performed on quasi-striated shear-cut marks made with pliers - a previously tested, more difficult application of the statistical algorithm that could demonstrate the difference in results due to optimization. The final research was conducted with pseudostriated impression toolmarks made with chisels. Impression marks, which are more complex than striated marks, were analyzed using the algorithm to separate matching and nonmatching toolmarks. Results of the conducted research are presented, as well as evidence for the primary assumption of forensic toolmark examination: that all tools can create identifiably unique toolmarks.

  15. IBM Watson Analytics: Automating Visualization, Descriptive, and Predictive Statistics.

    PubMed

    Hoyt, Robert Eugene; Snider, Dallas; Thompson, Carla; Mantravadi, Sarita

    2016-10-11

    We live in an era of explosive data generation that will continue to grow and involve all industries. One of the results of this explosion is the need for newer and more efficient data analytics procedures. Traditionally, data analytics required a substantial background in statistics and computer science. In 2015, International Business Machines Corporation (IBM) released the IBM Watson Analytics (IBMWA) software that delivered advanced statistical procedures based on the Statistical Package for the Social Sciences (SPSS). The latest entry of Watson Analytics into the field of analytical software products provides users with enhanced functions that are not available in many existing programs. For example, Watson Analytics automatically analyzes datasets, examines data quality, and determines the optimal statistical approach. Users can request exploratory, predictive, and visual analytics. Using natural language processing (NLP), users are able to submit additional questions for analyses in a quick response format. This analytical package is available free to academic institutions (faculty and students) that plan to use the tools for noncommercial purposes. To report the features of IBMWA and discuss how this software subjectively and objectively compares to other data mining programs. The salient features of the IBMWA program were examined and compared with other common analytical platforms, using validated health datasets. Using a validated dataset, IBMWA delivered similar predictions compared with several commercial and open source data mining software applications. The visual analytics generated by IBMWA were similar to results from programs such as Microsoft Excel and Tableau Software. In addition, assistance with data preprocessing and data exploration was an inherent component of the IBMWA application. Sensitivity and specificity were not included in the IBMWA predictive analytics results, nor were odds ratios, confidence intervals, or a confusion matrix. IBMWA is a new alternative for data analytics software that automates descriptive, predictive, and visual analytics. This program is very user-friendly but requires data preprocessing, statistical conceptual understanding, and domain expertise.

  16. New heterogeneous test statistics for the unbalanced fixed-effect nested design.

    PubMed

    Guo, Jiin-Huarng; Billard, L; Luh, Wei-Ming

    2011-05-01

    When the underlying variances are unknown and/or unequal, using the conventional F test is problematic in the two-factor hierarchical data structure. Prompted by the approximate test statistics (Welch and Alexander-Govern methods), the authors develop four new heterogeneous test statistics to test factor A and factor B nested within A for the unbalanced fixed-effect two-stage nested design under variance heterogeneity. The actual significance levels and statistical power of the test statistics were compared in a simulation study. The results show that the proposed procedures maintain better Type I error rate control and have greater statistical power than those obtained by the conventional F test in various conditions. Therefore, the proposed test statistics are recommended in terms of robustness and ease of implementation. ©2010 The British Psychological Society.
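
    The four new statistics themselves are not available in standard libraries, but the Alexander-Govern method that prompted them is implemented in SciPy (1.7+). The sketch below contrasts it with the conventional F test on synthetic heteroscedastic groups.

```python
# Alexander-Govern test (heteroscedasticity-robust) vs. the conventional
# F test on three synthetic groups with unequal variances and sizes.
import numpy as np
from scipy.stats import alexandergovern, f_oneway

rng = np.random.default_rng(1)
g1 = rng.normal(10.0, 1.0, 15)   # small variance
g2 = rng.normal(10.5, 4.0, 25)   # larger variance
g3 = rng.normal(11.0, 8.0, 10)   # very large variance, small n

ag = alexandergovern(g1, g2, g3)       # robust to unequal variances
f_stat, f_p = f_oneway(g1, g2, g3)     # conventional F test, for contrast

print(f"Alexander-Govern: A = {ag.statistic:.3f}, p = {ag.pvalue:.3f}")
print(f"conventional F:   F = {f_stat:.3f}, p = {f_p:.3f}")
```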

  17. Statistical tools for transgene copy number estimation based on real-time PCR.

    PubMed

    Yuan, Joshua S; Burris, Jason; Stewart, Nathan R; Mentewab, Ayalew; Stewart, C Neal

    2007-11-01

    As compared with traditional transgene copy number detection technologies such as Southern blot analysis, real-time PCR provides a fast, inexpensive and high-throughput alternative. However, real-time PCR based transgene copy number estimation tends to be ambiguous and subjective, stemming from the lack of proper statistical analysis and data quality control to render a reliable estimation of copy number with a prediction value. Despite recent progress in statistical analysis of real-time PCR, few publications have integrated these advancements in real-time PCR based transgene copy number determination. Three experimental designs and four data-quality-control integrated statistical models are presented. For the first method, external calibration curves are established for the transgene based on serially-diluted templates. The Ct numbers from a control transgenic event and a putative transgenic event are compared to derive the transgene copy number or zygosity estimation. Simple linear regression and two-group t-test procedures were combined to model the data from this design. For the second experimental design, standard curves were generated for both an internal reference gene and the transgene, and the copy number of the transgene was compared with that of the internal reference gene. Multiple regression models and ANOVA models can be employed to analyze the data and perform quality control for this approach. In the third experimental design, transgene copy number is compared with the reference gene without a standard curve, based directly on fluorescence data. Two different multiple regression models were proposed to analyze the data based on two different approaches of amplification efficiency integration. Our results highlight the importance of proper statistical treatment and quality control integration in real-time PCR-based transgene copy number determination. These statistical methods allow real-time PCR-based transgene copy number estimation to be more reliable and precise. Proper confidence intervals are necessary for unambiguous prediction of transgene copy number. The four different statistical methods are compared for their advantages and disadvantages. Moreover, the statistical methods can also be applied to other real-time PCR-based quantification assays, including transfection efficiency analysis and pathogen quantification.
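
    The first design (an external calibration curve plus a two-group t-test on Ct values) can be sketched roughly as below; all dilution, Ct, and replicate values are hypothetical.

```python
# Sketch of the first design: a standard curve from serially diluted templates,
# plus a t-test comparing Ct values of a control and a putative event.
import numpy as np
from scipy import stats

log10_copies = np.array([2, 3, 4, 5, 6], dtype=float)    # serial dilution series
ct_standard = np.array([30.1, 26.8, 23.4, 20.0, 16.7])   # measured Ct per dilution (hypothetical)

slope, intercept, r, p, se = stats.linregress(log10_copies, ct_standard)
print(f"standard curve: Ct = {intercept:.2f} {slope:+.2f} * log10(copies), R^2 = {r**2:.3f}")

def copies_from_ct(ct):
    """Invert the calibration curve to estimate template copies from a Ct value."""
    return 10 ** ((ct - intercept) / slope)

# Ct replicates from a single-copy control event and a putative event (hypothetical)
ct_control = np.array([23.5, 23.4, 23.6, 23.5])
ct_putative = np.array([22.5, 22.4, 22.6, 22.5])

t_stat, t_p = stats.ttest_ind(ct_control, ct_putative)
ratio = copies_from_ct(ct_putative.mean()) / copies_from_ct(ct_control.mean())
print(f"t = {t_stat:.2f}, p = {t_p:.4f}; estimated copy ratio ~ {ratio:.1f}")
```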

  18. Results of the 1980 NACUBO Comparative Performance Study and Investment Questionnaire.

    ERIC Educational Resources Information Center

    Dresner, Bruce M.

    The purpose of the annual National Association of College and University Business Officers' (NACUBO) Comparative Performance Study is to aid administrators in evaluating the performance of their investment pools. The 1980 study contains two parts: (1) comparative performance information and related investment performance statistics; and (2) other…

  19. --No Title--

    Science.gov Websites

    Documentation (pdf) Latest statistics (comparing FNMOC raw and bias corrected ensemble forecast) Statistics For September (comparing NCEP20s, NCEP20sb, NAEFS40nb, NAEFS/NUOPC60gb) Statistics For October (comparing NCEP20s, NCEP20sb, NAEFS40nb, NAEFS/NUOPC60gb) Statistics For November (comparing NCEP20s, NCEP20sb

  20. Statistical methods for detecting and comparing periodic data and their application to the nycthemeral rhythm of bodily harm: A population based study

    PubMed Central

    2010-01-01

    Background: Animals, including humans, exhibit a variety of biological rhythms. This article describes a method for the detection and simultaneous comparison of multiple nycthemeral rhythms. Methods: A statistical method for detecting periodic patterns in time-related data via harmonic regression is described. The method is particularly capable of detecting nycthemeral rhythms in medical data. Additionally, a method for simultaneously comparing two or more periodic patterns is described, which derives from the analysis of variance (ANOVA). This method statistically confirms or rejects equality of periodic patterns. Mathematical descriptions of the detection method and the comparison method are presented. Results: Nycthemeral rhythms of incidents of bodily harm in Middle Franconia are analyzed in order to demonstrate both methods. Every day of the week showed a significant nycthemeral rhythm of bodily harm. These seven patterns of the week were compared to each other, revealing only two different nycthemeral rhythms, one for Friday and Saturday and one for the other weekdays. PMID:21059197
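
    A minimal sketch of harmonic regression for rhythm detection, assuming a single 24-hour harmonic and an F test of the harmonic terms against a constant model; the hourly counts are synthetic stand-ins for the bodily-harm data.

```python
# Harmonic regression: regress counts on sine/cosine of a 24 h period and
# F-test the harmonic terms against the constant-only model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
hours = np.arange(24 * 14)                        # two weeks of hourly observations
true_rate = 5 + 3 * np.sin(2 * np.pi * hours / 24 - 1.0)
counts = rng.poisson(true_rate)                   # synthetic counts

# Design matrix: intercept plus the first harmonic of the 24 h period
X = np.column_stack([np.ones(len(hours)),
                     np.sin(2 * np.pi * hours / 24),
                     np.cos(2 * np.pi * hours / 24)])
beta, _, _, _ = np.linalg.lstsq(X, counts, rcond=None)

rss_full = np.sum((counts - X @ beta) ** 2)
rss_null = np.sum((counts - counts.mean()) ** 2)
df1, df2 = 2, len(hours) - 3                      # harmonic terms vs. residual dof
F = ((rss_null - rss_full) / df1) / (rss_full / df2)
p = stats.f.sf(F, df1, df2)

amplitude = np.hypot(beta[1], beta[2])            # amplitude of the fitted rhythm
print(f"F = {F:.1f}, p = {p:.2e}; amplitude = {amplitude:.2f} around mean {beta[0]:.2f}")
```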

  1. A comparator-hypothesis account of biased contingency detection.

    PubMed

    Vadillo, Miguel A; Barberia, Itxaso

    2018-02-12

    Our ability to detect statistical dependencies between different events in the environment is strongly biased by the number of coincidences between them. Even when there is no true covariation between a cue and an outcome, if the marginal probability of either of them is high, people tend to perceive some degree of statistical contingency between both events. The present paper explores the ability of the Comparator Hypothesis to explain the general pattern of results observed in this literature. Our simulations show that this model can account for the biasing effects of the marginal probabilities of cues and outcomes. Furthermore, the overall fit of the Comparator Hypothesis to a sample of experimental conditions from previous studies is comparable to that of the popular Rescorla-Wagner model. These results should encourage researchers to further explore and put to the test the predictions of the Comparator Hypothesis in the domain of biased contingency detection. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Empirical investigation into depth-resolution of Magnetotelluric data

    NASA Astrophysics Data System (ADS)

    Piana Agostinetti, N.; Ogaya, X.

    2017-12-01

    We investigate the depth-resolution of magnetotelluric (MT) data by comparing reconstructed 1D resistivity profiles with measured resistivity and lithostratigraphy from borehole data. Inversion of MT data has been widely used to reconstruct the 1D fine-layered resistivity structure beneath an isolated MT station. Uncorrelated noise is generally assumed to be associated with MT data. However, wrong assumptions on error statistics have been proved to strongly bias the results obtained in geophysical inversions. In particular, the number of resolved layers at depth strongly depends on the error statistics. In this study, we applied a trans-dimensional McMC algorithm to reconstruct the 1D resistivity profile near the location of a 1500 m-deep borehole, using MT data. We solve the MT inverse problem imposing different models for the error statistics associated with the MT data. Following a Hierarchical Bayes approach, we also inverted for the hyper-parameters associated with each error statistics model. Preliminary results indicate that assuming uncorrelated noise leads to a number of resolved layers larger than expected from the retrieved lithostratigraphy. Moreover, comparison with the inversion of synthetic resistivity data obtained from the "true" resistivity stratification measured along the borehole shows that a consistent number of resistivity layers can be obtained using a Gaussian model for the error statistics, with substantial correlation length.

  3. [The application of the prospective space-time statistic in early warning of infectious disease].

    PubMed

    Yin, Fei; Li, Xiao-Song; Feng, Zi-Jian; Ma, Jia-Qi

    2007-06-01

    To investigate the application of the prospective space-time scan statistic in the early detection of infectious disease outbreaks. The prospective space-time scan statistic was tested by mimicking daily prospective analyses of bacillary dysentery data of Chengdu city in 2005 (3212 cases in 102 towns and villages), and the results were compared with those of the purely temporal scan statistic. The prospective space-time scan statistic could give specific messages in both space and time. The results for June indicated that the prospective space-time scan statistic could promptly detect the outbreaks that started from the local site, and the early warning message was powerful (P = 0.007). The purely temporal scan statistic detected the outbreak two days later, and the signal was less powerful (P = 0.039). The prospective space-time scan statistic could make full use of the spatial and temporal information in infectious disease data and could detect outbreaks that start from local sites in a timely and effective manner. The prospective space-time scan statistic could be an important tool for local and national CDCs setting up early detection surveillance systems.

  4. Nonlinear wave chaos: statistics of second harmonic fields.

    PubMed

    Zhou, Min; Ott, Edward; Antonsen, Thomas M; Anlage, Steven M

    2017-10-01

    Concepts from the field of wave chaos have been shown to successfully predict the statistical properties of linear electromagnetic fields in electrically large enclosures. The Random Coupling Model (RCM) describes these properties by incorporating both universal features described by Random Matrix Theory and the system-specific features of particular system realizations. In an effort to extend this approach to the nonlinear domain, we add an active nonlinear frequency-doubling circuit to an otherwise linear wave chaotic system, and we measure the statistical properties of the resulting second harmonic fields. We develop an RCM-based model of this system as two linear chaotic cavities coupled by means of a nonlinear transfer function. The harmonic field strengths are predicted to be the product of two statistical quantities and the nonlinearity characteristics. Statistical results from measurement-based calculation, RCM-based simulation, and direct experimental measurements are compared and show good agreement over many decades of power.

  5. Testing prediction methods: Earthquake clustering versus the Poisson model

    USGS Publications Warehouse

    Michael, A.J.

    1997-01-01

    Testing earthquake prediction methods requires statistical techniques that compare observed success to random chance. One technique is to produce simulated earthquake catalogs and measure the relative success of predicting real and simulated earthquakes. The accuracy of these tests depends on the validity of the statistical model used to simulate the earthquakes. This study tests the effect of clustering in the statistical earthquake model on the results. Three simulation models were used to produce significance levels for a VLF earthquake prediction method. As the degree of simulated clustering increases, the statistical significance drops. Hence, the use of a seismicity model with insufficient clustering can lead to overly optimistic results. A successful method must pass the statistical tests with a model that fully replicates the observed clustering. However, a method can be rejected based on tests with a model that contains insufficient clustering. U.S. copyright. Published in 1997 by the American Geophysical Union.
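
    The testing logic can be sketched as below: the number of observed events falling inside prediction alarms is compared with the distribution obtained from catalogs simulated under the statistical model (here a memoryless, Poisson-like model with no clustering). All catalogs and alarms are synthetic.

```python
# Monte Carlo significance test for a prediction method: compare observed
# "hits" against hits on catalogs simulated without clustering.
import numpy as np

rng = np.random.default_rng(3)
n_days, rate = 3650, 0.05                                   # 10 years, ~0.05 events/day
alarm_days = rng.choice(n_days, size=200, replace=False)    # days covered by predictions

real_catalog = np.flatnonzero(rng.random(n_days) < rate)    # stand-in for observed events
observed_hits = np.isin(real_catalog, alarm_days).sum()

n_sims = 10_000
sim_hits = np.empty(n_sims, dtype=int)
for i in range(n_sims):
    sim_catalog = np.flatnonzero(rng.random(n_days) < rate)  # unclustered model catalog
    sim_hits[i] = np.isin(sim_catalog, alarm_days).sum()

# Fraction of simulated catalogs doing at least as well as the real one;
# a clustered simulation model would typically inflate this tail probability.
p_value = (sim_hits >= observed_hits).mean()
print(f"observed hits = {observed_hits}, p = {p_value:.3f}")
```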

  6. Statistical analysis of flight times for space shuttle ferry flights

    NASA Technical Reports Server (NTRS)

    Graves, M. E.; Perlmutter, M.

    1974-01-01

    Markov chain and Monte Carlo analysis techniques are applied to the simulated Space Shuttle Orbiter Ferry flights to obtain statistical distributions of flight time duration between Edwards Air Force Base and Kennedy Space Center. The two methods are compared, and are found to be in excellent agreement. The flights are subjected to certain operational and meteorological requirements, or constraints, which cause eastbound and westbound trips to yield different results. Persistence of events theory is applied to the occurrence of inclement conditions to find their effect upon the statistical flight time distribution. In a sensitivity test, some of the constraints are varied to observe the corresponding changes in the results.

  7. Providing peak river flow statistics and forecasting in the Niger River basin

    NASA Astrophysics Data System (ADS)

    Andersson, Jafet C. M.; Ali, Abdou; Arheimer, Berit; Gustafsson, David; Minoungou, Bernard

    2017-08-01

    Flooding is a growing concern in West Africa. Improved quantification of discharge extremes and associated uncertainties is needed to improve infrastructure design, and operational forecasting is needed to provide timely warnings. In this study, we use discharge observations, a hydrological model (Niger-HYPE) and extreme value analysis to estimate peak river flow statistics (e.g. the discharge magnitude with a 100-year return period) across the Niger River basin. To test the model's capacity to predict peak flows, we compared 30-year maximum discharge and peak flow statistics derived from the model vs. those derived from nine observation stations. The results indicate that the model simulates peak discharge reasonably well (on average +20%). However, the peak flow statistics have a large uncertainty range, which ought to be considered in infrastructure design. We then applied the methodology to derive basin-wide maps of peak flow statistics and their associated uncertainty. The results indicate that the method is applicable across the hydrologically active part of the river basin, and that the uncertainty varies substantially depending on location. Subsequently, we used the most recent bias-corrected climate projections to analyze potential changes in peak flow statistics in a changed climate. The results are generally ambiguous, with consistent changes only in very few areas. To test the forecasting capacity, we ran Niger-HYPE with a combination of meteorological data sets for the 2008 high-flow season and compared with observations. The results indicate reasonable forecasting capacity (on average 17% deviation), but additional years should also be evaluated. We finish by presenting a strategy and pilot project which will develop an operational flood monitoring and forecasting system based on in-situ data, earth observations, modelling, and extreme statistics. In this way we aim to build capacity to ultimately improve resilience toward floods, protecting lives and infrastructure in the region.
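
    One common way to obtain such peak flow statistics is to fit a generalized extreme value (GEV) distribution to annual maximum discharge; the record does not state the exact estimator used, so the sketch below is only indicative, with synthetic annual maxima and a parametric bootstrap to express the uncertainty range the record emphasizes.

```python
# GEV fit to annual maxima and the 100-year return level, with a parametric
# bootstrap for uncertainty. All discharge values are synthetic.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(11)
annual_max = genextreme.rvs(c=-0.1, loc=1500, scale=400, size=30, random_state=rng)

c, loc, scale = genextreme.fit(annual_max)
q100 = genextreme.isf(1 / 100, c, loc=loc, scale=scale)   # exceeded once per 100 years on average
print(f"estimated 100-year discharge: {q100:.0f} m^3/s")

# Parametric bootstrap to express the (often large) uncertainty of the estimate
boot = [genextreme.isf(1 / 100, *genextreme.fit(
            genextreme.rvs(c, loc=loc, scale=scale, size=len(annual_max), random_state=rng)))
        for _ in range(200)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% bootstrap interval: [{lo:.0f}, {hi:.0f}] m^3/s")
```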

  8. Steganalysis based on reducing the differences of image statistical characteristics

    NASA Astrophysics Data System (ADS)

    Wang, Ran; Niu, Shaozhang; Ping, Xijian; Zhang, Tao

    2018-04-01

    Compared with the process of embedding, the image contents make a more significant impact on the differences of image statistical characteristics. This makes image steganalysis a classification problem with larger within-class scatter distances and smaller between-class scatter distances. As a result, the steganalysis features will be inseparable due to the differences of image statistical characteristics. In this paper, a new steganalysis framework which can reduce the differences of image statistical characteristics caused by various contents and processing methods is proposed. The given images are segmented into several sub-images according to texture complexity. Steganalysis features are separately extracted from each subset with the same or close texture complexity to build a classifier. The final steganalysis result is obtained through a weighted fusion process. The theoretical analysis and experimental results demonstrate the validity of the framework.

  9. Correlation of RNA secondary structure statistics with thermodynamic stability and applications to folding.

    PubMed

    Wu, Johnny C; Gardner, David P; Ozer, Stuart; Gutell, Robin R; Ren, Pengyu

    2009-08-28

    The accurate prediction of the secondary and tertiary structure of an RNA with different folding algorithms is dependent on several factors, including the energy functions. However, an RNA higher-order structure cannot be predicted accurately from its sequence based on a limited set of energy parameters. The inter- and intramolecular forces between this RNA and other small molecules and macromolecules, in addition to other factors in the cell such as pH, ionic strength, and temperature, influence the complex dynamics associated with the transition of a single-stranded RNA to its secondary and tertiary structure. Since all of the factors that affect the formation of an RNA's 3D structure cannot be determined experimentally, statistically derived potential energies have been used, as in the prediction of protein structure. In the current work, we evaluate the statistical free energy of various secondary structure motifs, including base-pair stacks, hairpin loops, and internal loops, using their statistical frequency obtained from the comparative analysis of more than 50,000 RNA sequences stored in the RNA Comparative Analysis Database (rCAD) at the Comparative RNA Web (CRW) Site. Statistical energy was computed from the structural statistics for several datasets. While the statistical energy for a base-pair stack correlates with experimentally derived free energy values, suggesting a Boltzmann-like distribution, variation is observed between different molecules and their location on the phylogenetic tree of life. Our statistical energy values calculated for several structural elements were utilized in the Mfold RNA-folding algorithm. The combined statistical energy values for base-pair stacks, hairpins, and internal loop flanks result in a significant improvement in the accuracy of secondary structure prediction; the hairpin flanks contribute the most.
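
    The inverse-Boltzmann idea behind such statistical energies can be sketched as follows; the stack counts, reference state, and temperature are hypothetical choices, not values from rCAD/CRW.

```python
# Knowledge-based ("statistical") energy: convert observed motif frequencies
# into pseudo-free-energies via an inverse Boltzmann relation.
import math

R = 1.987e-3   # gas constant, kcal/(mol*K)
T = 310.0      # temperature, K (an arbitrary choice)

# Hypothetical counts of base-pair stacks observed in an alignment database
stack_counts = {"GC/GC": 52000, "AU/GC": 31000, "AU/AU": 12000, "GU/AU": 5000}
total = sum(stack_counts.values())
n_types = len(stack_counts)

for stack, count in stack_counts.items():
    f_obs = count / total
    f_ref = 1.0 / n_types                  # uniform reference state (a modeling choice)
    energy = -R * T * math.log(f_obs / f_ref)
    print(f"{stack}: f = {f_obs:.3f}, statistical energy = {energy:+.2f} kcal/mol")
```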

  10. Efficacy of a Radiation Absorbing Shield in Reducing Dose to the Interventionalist During Peripheral Endovascular Procedures: A Single Centre Pilot Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Power, S.; Mirza, M.; Thakorlal, A.

    Purpose: This prospective pilot study was undertaken to evaluate the feasibility and effectiveness of using a radiation absorbing shield to reduce operator dose from scatter during lower limb endovascular procedures. Materials and Methods: A commercially available bismuth shield system (RADPAD) was used. Sixty consecutive patients undergoing lower limb angioplasty were included. Thirty procedures were performed without the RADPAD (control group) and thirty with the RADPAD (study group). Two separate methods were used to measure dose to a single operator. Thermoluminescent dosimeter (TLD) badges were used to measure hand, eye, and unshielded body dose. A direct dosimeter with digital readout was also used to measure eye and unshielded body dose. To allow for variation between control and study groups, dose per unit time was calculated. Results: TLD results demonstrated a significant reduction in median body dose per unit time for the study group compared with controls (p = 0.001), corresponding to a mean dose reduction rate of 65 %. Median eye and hand dose per unit time were also reduced in the study group compared with the control group; however, this was not statistically significant (p = 0.081 for eye, p = 0.628 for hand). Direct dosimeter readings also showed statistically significant reduction in median unshielded body dose rate for the study group compared with controls (p = 0.037). Eye dose rate was reduced for the study group but this was not statistically significant (p = 0.142). Conclusion: Initial results are encouraging. Use of the shield resulted in a statistically significant reduction in unshielded dose to the operator’s body. Measured doses to the eye and hand of the operator were also reduced but did not reach statistical significance in this pilot study.

  11. An experimental study of the surface elevation probability distribution and statistics of wind-generated waves

    NASA Technical Reports Server (NTRS)

    Huang, N. E.; Long, S. R.

    1980-01-01

    Laboratory experiments were performed to measure the surface elevation probability density function and associated statistical properties for a wind-generated wave field. The laboratory data along with some limited field data were compared. The statistical properties of the surface elevation were processed for comparison with the results derived from the Longuet-Higgins (1963) theory. It is found that, even for the highly non-Gaussian cases, the distribution function proposed by Longuet-Higgins still gives good approximations.

  12. Bayesian networks and statistical analysis application to analyze the diagnostic test accuracy

    NASA Astrophysics Data System (ADS)

    Orzechowski, P.; Makal, Jaroslaw; Onisko, A.

    2005-02-01

    The computer aided BPH diagnosis system based on a Bayesian network is described in the paper. First results are compared with a given statistical method. Different statistical methods have been used successfully in medicine for years. However, the undoubted advantages of probabilistic methods make them useful in newly created systems, which are frequent in medicine but do not yet have full and competent knowledge bases. The article presents the advantages of the computer aided BPH diagnosis system in clinical practice for urologists.

  13. Probability of detection of internal voids in structural ceramics using microfocus radiography

    NASA Technical Reports Server (NTRS)

    Baaklini, G. Y.; Roth, D. J.

    1986-01-01

    The reliability of microfocus X-radiography for detecting subsurface voids in structural ceramic test specimens was statistically evaluated. The microfocus system was operated in the projection mode using low X-ray photon energies (20 keV) and a 10 micro m focal spot. The statistics were developed for implanted subsurface voids in green and sintered silicon carbide and silicon nitride test specimens. These statistics were compared with previously-obtained statistics for implanted surface voids in similar specimens. Problems associated with void implantation are discussed. Statistical results are given as probability-of-detection curves at a 95 percent confidence level for voids ranging in size from 20 to 528 micro m in diameter.

  14. Probability of detection of internal voids in structural ceramics using microfocus radiography

    NASA Technical Reports Server (NTRS)

    Baaklini, G. Y.; Roth, D. J.

    1985-01-01

    The reliability of microfocus x-radiography for detecting subsurface voids in structural ceramic test specimens was statistically evaluated. The microfocus system was operated in the projection mode using low X-ray photon energies (20 keV) and a 10 micro m focal spot. The statistics were developed for implanted subsurface voids in green and sintered silicon carbide and silicon nitride test specimens. These statistics were compared with previously-obtained statistics for implanted surface voids in similar specimens. Problems associated with void implantation are discussed. Statistical results are given as probability-of-detection curves at a 95 percent confidence level for voids ranging in size from 20 to 528 micro m in diameter.

  15. Unrealistic comparative optimism: An unsuccessful search for evidence of a genuinely motivational bias

    PubMed Central

    Harris, Adam J. L.; de Molière, Laura; Soh, Melinda; Hahn, Ulrike

    2017-01-01

    One of the most accepted findings across psychology is that people are unrealistically optimistic in their judgments of comparative risk concerning future life events—they judge negative events as less likely to happen to themselves than to the average person. Harris and Hahn (2011), however, demonstrated how unbiased (non-optimistic) responses can result in data patterns commonly interpreted as indicative of optimism due to statistical artifacts. In the current paper, we report the results of 5 studies that control for these statistical confounds and observe no evidence for residual unrealistic optimism, even observing a ‘severity effect’ whereby severe outcomes were overestimated relative to neutral ones (Studies 3 & 4). We conclude that there is no evidence supporting an optimism interpretation of previous results using the prevalent comparison method. PMID:28278200

  16. Unrealistic comparative optimism: An unsuccessful search for evidence of a genuinely motivational bias.

    PubMed

    Harris, Adam J L; de Molière, Laura; Soh, Melinda; Hahn, Ulrike

    2017-01-01

    One of the most accepted findings across psychology is that people are unrealistically optimistic in their judgments of comparative risk concerning future life events-they judge negative events as less likely to happen to themselves than to the average person. Harris and Hahn (2011), however, demonstrated how unbiased (non-optimistic) responses can result in data patterns commonly interpreted as indicative of optimism due to statistical artifacts. In the current paper, we report the results of 5 studies that control for these statistical confounds and observe no evidence for residual unrealistic optimism, even observing a 'severity effect' whereby severe outcomes were overestimated relative to neutral ones (Studies 3 & 4). We conclude that there is no evidence supporting an optimism interpretation of previous results using the prevalent comparison method.

  17. Research Design and Statistical Methods in Indian Medical Journals: A Retrospective Survey

    PubMed Central

    Hassan, Shabbeer; Yellur, Rajashree; Subramani, Pooventhan; Adiga, Poornima; Gokhale, Manoj; Iyer, Manasa S.; Mayya, Shreemathi S.

    2015-01-01

    Good quality medical research generally requires not only an expertise in the chosen medical field of interest but also a sound knowledge of statistical methodology. The number of medical research articles which have been published in Indian medical journals has increased quite substantially in the past decade. The aim of this study was to collate all evidence on study design quality and statistical analyses used in selected leading Indian medical journals. Ten (10) leading Indian medical journals were selected based on impact factors and all original research articles published in 2003 (N = 588) and 2013 (N = 774) were categorized and reviewed. A validated checklist on study design, statistical analyses, results presentation, and interpretation was used for review and evaluation of the articles. Main outcomes considered in the present study were – study design types and their frequencies, the proportion of errors/defects in study design, statistical analyses, and implementation of the CONSORT checklist in RCTs (randomized clinical trials). From 2003 to 2013: The proportion of erroneous statistical analyses did not decrease (χ2=0.592, Φ=0.027, p=0.4418), 25% (80/320) in 2003 compared to 22.6% (111/490) in 2013. Compared with 2003, significant improvement was seen in 2013; the proportion of papers using statistical tests increased significantly (χ2=26.96, Φ=0.16, p<0.0001) from 42.5% (250/588) to 56.7% (439/774). The overall proportion of errors in study design decreased significantly (χ2=16.783, Φ=0.12, p<0.0001), 41.3% (243/588) compared to 30.6% (237/774). In 2013, the proportion of randomized clinical trial designs remained very low (7.3%, 43/588), with the majority showing some errors (41 papers, 95.3%). The majority of the published studies were retrospective in nature both in 2003 [79.1% (465/588)] and in 2013 [78.2% (605/774)]. Major decreases in error proportions were observed in both results presentation (χ2=24.477, Φ=0.17, p<0.0001), 82.2% (263/320) compared to 66.3% (325/490), and interpretation (χ2=25.616, Φ=0.173, p<0.0001), 32.5% (104/320) compared to 17.1% (84/490), though some serious ones were still present. Indian medical research seems to have made no major progress regarding using correct statistical analyses, but errors/defects in study designs have decreased significantly. Randomized clinical trials are quite rarely published and have a high proportion of methodological problems. PMID:25856194
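
    Several of the reported comparisons can be re-derived from the counts given in the abstract; for example, the chi-square test on erroneous statistical analyses (80/320 in 2003 vs. 111/490 in 2013) reproduces the reported χ2 = 0.592 when no continuity correction is applied.

```python
# Re-computing one reported comparison from the abstract's counts.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[80, 320 - 80],       # 2003: erroneous vs. correct analyses
                  [111, 490 - 111]])    # 2013

chi2, p, dof, expected = chi2_contingency(table, correction=False)  # no Yates correction
print(f"chi2 = {chi2:.3f}, p = {p:.4f}")   # ~0.592 and ~0.44, matching the abstract

phi = np.sqrt(chi2 / table.sum())          # effect size (phi coefficient)
print(f"phi = {phi:.3f}")                  # ~0.027, as reported
```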

  18. Research design and statistical methods in Indian medical journals: a retrospective survey.

    PubMed

    Hassan, Shabbeer; Yellur, Rajashree; Subramani, Pooventhan; Adiga, Poornima; Gokhale, Manoj; Iyer, Manasa S; Mayya, Shreemathi S

    2015-01-01

    Good quality medical research generally requires not only an expertise in the chosen medical field of interest but also a sound knowledge of statistical methodology. The number of medical research articles which have been published in Indian medical journals has increased quite substantially in the past decade. The aim of this study was to collate all evidence on study design quality and statistical analyses used in selected leading Indian medical journals. Ten (10) leading Indian medical journals were selected based on impact factors and all original research articles published in 2003 (N = 588) and 2013 (N = 774) were categorized and reviewed. A validated checklist on study design, statistical analyses, results presentation, and interpretation was used for review and evaluation of the articles. Main outcomes considered in the present study were - study design types and their frequencies, the proportion of errors/defects in study design, statistical analyses, and implementation of the CONSORT checklist in RCTs (randomized clinical trials). From 2003 to 2013: The proportion of erroneous statistical analyses did not decrease (χ2=0.592, Φ=0.027, p=0.4418), 25% (80/320) in 2003 compared to 22.6% (111/490) in 2013. Compared with 2003, significant improvement was seen in 2013; the proportion of papers using statistical tests increased significantly (χ2=26.96, Φ=0.16, p<0.0001) from 42.5% (250/588) to 56.7% (439/774). The overall proportion of errors in study design decreased significantly (χ2=16.783, Φ=0.12, p<0.0001), 41.3% (243/588) compared to 30.6% (237/774). In 2013, the proportion of randomized clinical trial designs remained very low (7.3%, 43/588), with the majority showing some errors (41 papers, 95.3%). The majority of the published studies were retrospective in nature both in 2003 [79.1% (465/588)] and in 2013 [78.2% (605/774)]. Major decreases in error proportions were observed in both results presentation (χ2=24.477, Φ=0.17, p<0.0001), 82.2% (263/320) compared to 66.3% (325/490), and interpretation (χ2=25.616, Φ=0.173, p<0.0001), 32.5% (104/320) compared to 17.1% (84/490), though some serious ones were still present. Indian medical research seems to have made no major progress regarding using correct statistical analyses, but errors/defects in study designs have decreased significantly. Randomized clinical trials are quite rarely published and have a high proportion of methodological problems.

  19. A comparative evaluation of subepithelial connective tissue graft (SCTG) versus platelet concentrate graft (PCG) in the treatment of gingival recession using coronally advanced flap technique: A 12-month study

    PubMed Central

    Kumar, G. Naveen Vital; Murthy, K. Raja Venkatesh

    2013-01-01

    Objective: The objective of this study was to clinically evaluate and compare the efficacy of platelet concentrate graft (PCG) with that of subepithelial connective tissue graft (SCTG) using a coronally advanced flap technique in the treatment of gingival recession. Materials and Methods: Twelve patients with a total of 24 gingival recession defects were selected and randomly assigned either to experimental site-A (SCTG) or experimental site-B (PCG). The clinical parameters were recorded at baseline up to 12 months post-operatively and compared. Results: The mean vertical recession depth (VRD) showed a statistically significant decrease from 2.50 ± 0.48 mm to 0.54 ± 0.50 mm with PCG and from 2.75 ± 0.58 mm to 0.54 ± 0.45 mm with SCTG at 12 months. No statistically significant differences between the treatments were found for VRD and clinical attachment level (CAL), while the keratinized tissue width (KTW) gain was statistically significant. Conclusion: Both the SCTG and PCG groups resulted in a significant amount of root coverage. The PCG technique was less invasive and required minimal time and clinical maneuvering. It resulted in a superior aesthetic outcome and lower post-surgical discomfort at the 12-month follow-up. PMID:24554889

  20. Anticoagulant vs. antiplatelet therapy in patients with cryptogenic stroke and patent foramen ovale: an individual participant data meta-analysis.

    PubMed

    Kent, David M; Dahabreh, Issa J; Ruthazer, Robin; Furlan, Anthony J; Weimar, Christian; Serena, Joaquín; Meier, Bernhard; Mattle, Heinrich P; Di Angelantonio, Emanuele; Paciaroni, Maurizio; Schuchlenz, Herwig; Homma, Shunichi; Lutz, Jennifer S; Thaler, David E

    2015-09-14

    The preferred antithrombotic strategy for secondary prevention in patients with cryptogenic stroke (CS) and patent foramen ovale (PFO) is unknown. We pooled multiple observational studies and used propensity score-based methods to estimate the comparative effectiveness of oral anticoagulation (OAC) compared with antiplatelet therapy (APT). Individual participant data from 12 databases of medically treated patients with CS and PFO were analysed with Cox regression models, to estimate database-specific hazard ratios (HRs) comparing OAC with APT, for both the primary composite outcome [recurrent stroke, transient ischaemic attack (TIA), or death] and stroke alone. Propensity scores were applied via inverse probability of treatment weighting to control for confounding. We synthesized database-specific HRs using random-effects meta-analysis models. This analysis included 2385 (OAC = 804 and APT = 1581) patients with 227 composite endpoints (stroke/TIA/death). The difference between OAC and APT was not statistically significant for the primary composite outcome [adjusted HR = 0.76, 95% confidence interval (CI) 0.52-1.12] or for the secondary outcome of stroke alone (adjusted HR = 0.75, 95% CI 0.44-1.27). Results were consistent in analyses applying alternative weighting schemes, with the exception that OAC had a statistically significant beneficial effect on the composite outcome in analyses standardized to the patient population who actually received APT (adjusted HR = 0.64, 95% CI 0.42-0.99). Subgroup analyses did not detect statistically significant heterogeneity of treatment effects across clinically important patient groups. We did not find a statistically significant difference comparing OAC with APT; our results justify randomized trials comparing different antithrombotic approaches in these patients. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2015. For permissions please email: journals.permissions@oup.com.
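
    The pooling step (random-effects meta-analysis of database-specific hazard ratios) can be sketched with the standard DerSimonian-Laird estimator; the per-database HRs and confidence bounds below are hypothetical, not the study's data.

```python
# DerSimonian-Laird random-effects pooling of log hazard ratios.
# All per-database HRs and CI bounds are hypothetical.
import numpy as np
from scipy.stats import norm

hr = np.array([0.70, 0.85, 0.60, 1.10, 0.75])       # hypothetical database-specific HRs
ci_hi = np.array([1.30, 1.50, 1.20, 2.10, 1.40])    # hypothetical upper 95% CI bounds

y = np.log(hr)                                       # effect sizes on the log scale
se = (np.log(ci_hi) - y) / norm.ppf(0.975)           # back out standard errors from the CIs
w = 1 / se**2                                        # fixed-effect (inverse-variance) weights

# DerSimonian-Laird estimate of between-database heterogeneity tau^2
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)
tau2 = max(0.0, (Q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1 / (se**2 + tau2)                            # random-effects weights
y_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
ci = np.exp(y_re + np.array([-1, 1]) * norm.ppf(0.975) * se_re)
print(f"pooled HR = {np.exp(y_re):.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}, tau^2 = {tau2:.3f}")
```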

  1. Statistical Parameter Study of the Time Interval Distribution for Nonparalyzable, Paralyzable, and Hybrid Dead Time Models

    NASA Astrophysics Data System (ADS)

    Syam, Nur Syamsi; Maeng, Seongjin; Kim, Myo Gwang; Lim, Soo Yeon; Lee, Sang Hoon

    2018-05-01

    A large dead time of a Geiger Mueller (GM) detector may cause a large count loss in radiation measurements and consequently may distort the Poisson statistics of radiation events into a new distribution. The new distribution will have different statistical parameters compared to the original distribution. Therefore, the variance, skewness, and excess kurtosis in association with the observed count rate of the time interval distribution for well-known nonparalyzable, paralyzable, and nonparalyzable-paralyzable hybrid dead time models of a Geiger Mueller detector were studied using Monte Carlo simulation (GMSIM). These parameters were then compared with the statistical parameters of a perfect detector to observe the change in the distribution. The results show that the behaviors of the statistical parameters for the three dead time models were different. The values of the skewness and the excess kurtosis of the nonparalyzable model are equal or very close to those of the perfect detector, which are ≅2 for skewness, and ≅6 for excess kurtosis, while the statistical parameters in the paralyzable and hybrid models obtain minimum values that occur around the maximum observed count rates. The different trends of the three models resulting from the GMSIM simulation can be used to distinguish the dead time behavior of a GM counter, i.e., whether the GM counter can be described best by using the nonparalyzable, paralyzable, or hybrid model. In a future study, these statistical parameters need to be analyzed further to determine the possibility of using them to determine a dead time for each model, particularly for the paralyzable and hybrid models.
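
    A minimal Monte Carlo sketch in the spirit of this study: imposing a nonparalyzable dead time on exponentially distributed arrival times and computing the skewness and excess kurtosis of the recorded intervals, which for this model should stay near the perfect-detector values of 2 and 6. The rate and dead time values are arbitrary.

```python
# Nonparalyzable dead time simulation: skewness and excess kurtosis of the
# recorded time intervals (a perfect detector gives ~2 and ~6, the values of
# the exponential distribution).
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(5)
rate, tau = 1e4, 2e-4                        # true count rate (1/s), dead time (s)
arrivals = np.cumsum(rng.exponential(1 / rate, size=200_000))

# Nonparalyzable model: an event is recorded only if it arrives at least tau
# after the previously *recorded* event.
recorded = [arrivals[0]]
for t in arrivals[1:]:
    if t - recorded[-1] >= tau:
        recorded.append(t)
intervals = np.diff(recorded)

print(f"observed rate = {len(recorded) / arrivals[-1]:.0f} cps "
      f"(theory: {rate / (1 + rate * tau):.0f})")
print(f"skewness = {skew(intervals):.2f}, excess kurtosis = {kurtosis(intervals):.2f}")
```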

  2. Manual tracing versus smartphone application (app) tracing: a comparative study.

    PubMed

    Sayar, Gülşilay; Kilinc, Delal Dara

    2017-11-01

    This study aimed to compare the results of conventional manual cephalometric tracing with those acquired with smartphone application (app) cephalometric tracing. The cephalometric radiographs of 55 patients (25 females and 30 males) were traced via the manual and app methods and were subsequently examined with Steiner's analysis. Five skeletal measurements, five dental measurements and two soft tissue measurements were taken based on 21 landmarks. The time required by each method was also compared. SNA (Sella, Nasion, A point angle) and SNB (Sella, Nasion, B point angle) values for the manual method were statistically lower (p < .001) than those for the app method. The ANB value for the manual method was statistically lower than that for the app method. L1-NB (°) and upper lip protrusion values for the manual method were statistically higher than those for the app method. Go-GN/SN, U1-NA (°) and U1-NA (mm) values for the manual method were statistically lower than those for the app method. No differences between the two methods were found in the L1-NB (mm), occlusal plane to SN, interincisal angle or lower lip protrusion values. Although statistically significant differences were found between the two methods, cephalometric tracing proceeded faster with the app method than with the manual method.

  3. Comparing the Persuasiveness of Narrative and Statistical Evidence Using Meta-Analysis.

    ERIC Educational Resources Information Center

    Allen, Mike; Preiss, Raymond W.

    1997-01-01

    Compares the persuasiveness of using statistical versus narrative evidence (case studies or examples) across 15 investigations. Indicates that when comparing messages, statistical evidence is more persuasive than narrative evidence. (PA)

  4. Comparative Financial Statistics for Public Two-Year Colleges: FY 1993 National Sample.

    ERIC Educational Resources Information Center

    Dickmeyer, Nathan; Meeker, Bradley

    This report provides comparative information derived from a national sample of 516 public two-year colleges, highlighting financial statistics for fiscal year 1992-93. The report provides space for colleges to compare their institutional statistics with national sample medians, quartile data for the national sample, and statistics presented in a…

  5. Assessment of the efficacy and safety profiles of aspirin and acetaminophen with codeine: results from 2 randomized, controlled trials in individuals with tension-type headache and postoperative dental pain.

    PubMed

    Gatoulis, Sergio C; Voelker, Michael; Fisher, Matt

    2012-01-01

    Aspirin is a widely used NSAID that has been extensively studied in numerous conditions. Nonprescription analgesics, such as aspirin, are frequently used for a wide variety of common ailments, including conditions such as dental pain and tension-type headache. We sought to compare the efficacy and safety profiles of aspirin, acetaminophen with codeine, and placebo in the treatment of postoperative dental pain and tension-type headache. These were 2 randomized, double-blind, placebo-controlled, single-dose clinical trials that assigned participants (2:2:1) to receive either aspirin (1000 mg), acetaminophen (300 mg) with codeine (30 mg), or placebo. The primary efficacy end point was the sum of pain intensity differences from baseline (SPID) over 6 hours for the dental pain study and over 4 hours for the tension-type headache study. Other common analgesic measures, in addition to safety, were also evaluated. The results of the dental pain study for aspirin and acetaminophen with codeine suggest statistically significant efficacy for all measures compared with placebo at all time points. Aspirin provided statistically significant efficacy compared with acetaminophen with codeine for SPID(0-4) (P = 0.028). In the tension-type headache study, aspirin and acetaminophen with codeine provided statistically significant efficacy compared with placebo for SPID(0-4) and SPID(0-6) (P < 0.001) and for total pain relief (P < 0.001). There were no significant differences between aspirin and acetaminophen with codeine at any evaluation of SPID (P ≥ 0.070), complete relief (P ≥ 0.179), or time to meaningful relief (P ≥ 0.245). Regarding safety, there were no statistically significant differences between treatment groups in the incidence of adverse events in the dental pain and tension-type headache studies. These 2 randomized, double-blind, placebo-controlled studies demonstrate that treatment with aspirin (1000 mg) provides statistically significant analgesic efficacy compared with placebo use and comparable efficacy with acetaminophen (300 mg) with codeine (30 mg) therapy after impacted third molar extraction and in tension-type headache. Copyright © 2012 Elsevier HS Journals, Inc. All rights reserved.

  6. Fundamental frequency and voice perturbation measures in smokers and non-smokers: An acoustic and perceptual study

    NASA Astrophysics Data System (ADS)

    Freeman, Allison

    This research examined fundamental frequency and perturbation (jitter % and shimmer %) measures in young adult (20-30 year-old) and middle-aged adult (40-55 year-old) smokers and non-smokers; there were 36 smokers and 36 non-smokers. Acoustic analysis was carried out utilizing one task: production of a sustained /a/. These voice samples were analyzed utilizing Multi-Dimensional Voice Program (MDVP) software, which provided values for fundamental frequency, jitter %, and shimmer %. These values were analyzed for trends regarding smoking status, age, and gender. Statistical significance was found for fundamental frequency, jitter %, and shimmer % in smokers as compared to non-smokers; smokers were found to have significantly lower fundamental frequency values, and significantly higher jitter % and shimmer % values. Statistical significance was not found for fundamental frequency, jitter %, or shimmer % in the age group comparisons. With regard to gender, statistical significance was found for fundamental frequency; females were found to have statistically higher fundamental frequencies as compared to males. However, the relationships between gender and jitter % and shimmer % lacked statistical significance. These results indicate that smoking negatively affects voice quality. This study also examined the ability of untrained listeners to identify smokers and non-smokers based on their voices. Results of this voice perception task suggest that listeners are not able to accurately identify smokers and non-smokers, as statistical significance was not reached. However, despite the lack of significance, trends in the data suggest that listeners are able to utilize voice quality to identify smokers and non-smokers.
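    For readers unfamiliar with the perturbation measures, the sketch below computes jitter % and shimmer % from cycle-to-cycle period and amplitude sequences using one common textbook definition (MDVP's exact internal definitions may differ); the measurement values are invented.

```python
import numpy as np

def jitter_percent(periods):
    """Local jitter: mean absolute difference of consecutive glottal periods,
    as a percentage of the mean period (one common definition)."""
    periods = np.asarray(periods, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def shimmer_percent(amplitudes):
    """Local shimmer: the same construction applied to peak amplitudes."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)

# Illustrative cycle-to-cycle measurements from a sustained /a/ (made up):
periods = [0.0049, 0.0051, 0.0050, 0.0052, 0.0048]   # seconds
amps    = [0.81, 0.78, 0.83, 0.80, 0.79]             # arbitrary units
print(f"F0 ~ {1.0/np.mean(periods):.0f} Hz, "
      f"jitter = {jitter_percent(periods):.2f}%, "
      f"shimmer = {shimmer_percent(amps):.2f}%")
```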

  7. DNA Damage and Genetic Instability as Harbingers of Prostate Cancer

    DTIC Science & Technology

    2013-01-01

    incidence of prostate cancer as compared to placebo. Primary analysis of this trial indicated no statistically significant effect of selenium...Identification, isolation, staining, processing, and statistical analysis of slides for ERG and PTEN markers (aim 1) and interpretation of these results...participating in this study being conducted under Investigational New Drug #29829 from the Food and Drug Administration. STANDARD TREATMENT Patients

  8. To Evaluate & Compare Retention of Complete Cast Crown in Natural Teeth Using Different Auxiliary Retentive Features with Two Different Crown Heights - An In Vitro Study.

    PubMed

    Vinaya, Kundapur; Rakshith, Hegde; Prasad D, Krishna; Manoj, Shetty; Sunil, Mankar; Naresh, Shetty

    2015-06-01

    To evaluate the retention of complete cast crowns on teeth with adequate and inadequate crown height, and to evaluate the effect of auxiliary retentive features on the retention form of complete cast crowns. Sixty freshly extracted human premolars were used. They were divided into 2 major groups depending upon the height of the teeth after preparation: Group 1 (H1), prepared teeth with a constant height of 3.5 mm, and Group 2 (H2), prepared teeth with a constant height of 2.5 mm. Each group was further subdivided into 3 subgroups depending upon the retentive features incorporated: the first subgroup was prepared conventionally, the second with proximal grooves, and the third with proximal boxes. Castings produced in nickel-chromium alloy were cemented with glass ionomer cement, and the cemented castings were subjected to the tensional forces required to dislodge each casting from its preparation, which were used to compare retentive quality. The data obtained were statistically analyzed using a one-way ANOVA test. The results showed a statistically significant difference between the adequate (H1) and inadequate (H2) groups, and an increase in retention when retentive features were incorporated compared with conventional preparations. Retention obtained with grooves was significantly greater than that obtained with boxes. Results also showed no statistically significant difference between the long conventional and short groove preparations. Complete cast crowns on teeth with adequate crown height exhibited greater retention than those with inadequate crown height. Proximal grooves provided a greater amount of retention when compared with proximal boxes.

  9. Characterization and coating stability evaluation of nickel-titanium orthodontic esthetic wires: an in vivo study.

    PubMed

    Argalji, Nina; Silva, Eduardo Moreira da; Cury-Saramago, Adriana; Mattos, Claudia Trindade

    2017-08-21

    The objective of this study was to compare coating dimensions and surface characteristics of two different esthetic coated nickel-titanium orthodontic rectangular archwires, as-received from the manufacturer and after oral exposure. The study was designed for comparative purposes. Both archwires, as-received from the manufacturer, were observed using a stereomicroscope to measure coating thickness and inner metallic dimensions. The wires were also exposed to the oral environment in 11 orthodontically active patients for 21 days. After removing the samples, stereomicroscopy images were captured, coating loss was measured and its percentage was calculated. Three segments of each wire (one as-received and two after oral exposure) were observed using scanning electron microscopy for a qualitative analysis of the labial surface of the wires. The Lilliefors test and independent t-test were applied to verify normality of data and statistical differences between wires, respectively. The significance level adopted was 0.05. The results showed that the differences between the wires when comparing inner height and thickness were statistically significant (p < 0.0001). On average, the most recently launched wire presented a coating thickness twice that of the control wire, which was also a statistically significant difference. The coating loss percentage was also statistically different (p = 0.0346) when the most recently launched wire (13.27%) was compared to the control (29.63%). In conclusion, the coating of the most recent wire was thicker and more uniform, whereas the control had a thinner coating on the edges. After oral exposure, both tested wires presented coating loss, but the most recently launched wire exhibited better results.

  10. Statistical downscaling of GCM simulations to streamflow using relevance vector machine

    NASA Astrophysics Data System (ADS)

    Ghosh, Subimal; Mujumdar, P. P.

    2008-01-01

    General circulation models (GCMs), the climate models often used in assessing the impact of climate change, operate on a coarse scale, and thus the simulation results obtained from GCMs are not directly useful for hydrology at the comparatively smaller river-basin scale. The article presents a methodology of statistical downscaling based on sparse Bayesian learning and the Relevance Vector Machine (RVM) to model streamflow at the river-basin scale for the monsoon period (June, July, August, September) using GCM-simulated climatic variables. NCEP/NCAR reanalysis data have been used to train the model to establish a statistical relationship between streamflow and climatic variables. The relationship thus obtained is used to project future streamflow from GCM simulations. The statistical methodology involves principal component analysis, fuzzy clustering and RVM. Different kernel functions are used for comparison purposes. The model is applied to the Mahanadi river basin in India. The results obtained using RVM are compared with those of the state-of-the-art Support Vector Machine (SVM) to present the advantages of RVMs over SVMs. A decreasing trend is observed for the monsoon streamflow of the Mahanadi due to high surface warming in the future, under the CCSR/NIES GCM and B2 scenario.
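    A skeleton of such a downscaling pipeline is sketched below with synthetic data. Since scikit-learn ships no RVM, support-vector regression stands in for the RVM step; the pipeline structure (standardization, principal component analysis, kernel regression) is the point of the example, not the specific regressor.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Stand-ins for gridded monthly climate predictors (e.g. reanalysis fields
# flattened to columns) and observed monsoon streamflow; both are synthetic.
X = rng.normal(size=(240, 50))                  # 240 months x 50 grid cells
y = X[:, :3] @ np.array([1.0, -0.5, 0.8]) + rng.normal(scale=0.3, size=240)

# Statistical downscaling skeleton: standardize -> PCA -> kernel regression.
# SVR is only a placeholder for the RVM step used in the paper.
model = make_pipeline(StandardScaler(), PCA(n_components=10),
                      SVR(kernel="rbf", C=10.0))
model.fit(X[:180], y[:180])                     # "reanalysis" training period
print("held-out R^2 =", round(model.score(X[180:], y[180:]), 2))
```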

  11. Statistics attack on `quantum private comparison with a malicious third party' and its improvement

    NASA Astrophysics Data System (ADS)

    Gu, Jun; Ho, Chih-Yung; Hwang, Tzonelih

    2018-02-01

    Recently, Sun et al. (Quantum Inf Process 14:2125-2133, 2015) proposed a quantum private comparison protocol allowing two participants to compare the equality of their secrets via a malicious third party (TP). They designed an interesting trap comparison method to prevent the TP from knowing the final comparison result. However, this study shows that the malicious TP can use a statistics attack to reveal the comparison result. A simple modification is hence proposed to solve this problem.

  12. Regulatory considerations in the design of comparative observational studies using propensity scores.

    PubMed

    Yue, Lilly Q

    2012-01-01

    In the evaluation of medical products, including drugs, biological products, and medical devices, comparative observational studies could play an important role when properly conducted randomized, well-controlled clinical trials are infeasible for ethical or practical reasons. However, various biases could be introduced at every stage and into every aspect of the observational study, and consequently the interpretation of the resulting statistical inference would be of concern. While there do exist statistical techniques for addressing some of the challenging issues, often based on propensity score methodology, these statistical tools probably have not been as widely employed in prospectively designing observational studies as they should be. There are also times when they are implemented in an unscientific manner, such as selecting the propensity score model using the same dataset that contains the outcome data, so that the integrity of observational study design and the interpretability of outcome analysis results could be compromised. In this paper, regulatory considerations on prospective study design using propensity scores are shared and illustrated with hypothetical examples.
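    The design point about separating the treatment model from the outcome can be illustrated with a minimal inverse-probability-of-treatment-weighting sketch on synthetic data: the propensity model sees only treatment and covariates, and the outcome enters only in the final weighted comparison.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Synthetic observational data: two confounders, a binary treatment whose
# assignment depends on them, and a binary outcome (all made up).
n = 2000
x = rng.normal(size=(n, 2))
p_treat = 1 / (1 + np.exp(-(0.8 * x[:, 0] - 0.5 * x[:, 1])))
treat = rng.binomial(1, p_treat)
outcome = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * x[:, 0] - 0.3 * treat))))

# Step 1: propensity scores from a treatment model that does NOT see the
# outcome -- the design principle the passage emphasizes.
ps = LogisticRegression().fit(x, treat).predict_proba(x)[:, 1]

# Step 2: inverse probability of treatment weights (unstabilized, ATE).
w = np.where(treat == 1, 1 / ps, 1 / (1 - ps))

# Step 3: only now bring in the outcome, via a weighted comparison.
rd = (np.average(outcome[treat == 1], weights=w[treat == 1])
      - np.average(outcome[treat == 0], weights=w[treat == 0]))
print("IPTW-adjusted risk difference:", round(rd, 3))
```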

  13. The statistical average of optical properties for alumina particle cluster in aircraft plume

    NASA Astrophysics Data System (ADS)

    Li, Jingying; Bai, Lu; Wu, Zhensen; Guo, Lixin

    2018-04-01

    We establish a lognormal distribution model for the monomer radius and number of alumina particle clusters in a plume. Based on Multi-Sphere T-Matrix (MSTM) theory, we provide a method for computing the statistical average of the optical properties of alumina particle clusters in a plume, analyze the effect of different distributions and different detection wavelengths on this average, and compare the statistically averaged optical properties under the cluster model established in this study with those under three simplified alumina particle models. The results show that the monomer number of an alumina particle cluster and its size distribution have a considerable effect on its statistically averaged optical properties. The statistical averages at common detection wavelengths exhibit clear differences, which strongly affect the modeling of the IR and UV radiation properties of the plume. Compared with the three simplified models, the alumina particle cluster model presented here features both higher extinction and scattering efficiencies. An accurate description of the scattering properties of alumina particles in an aircraft plume is therefore of great significance for the study of plume radiation properties.
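    The averaging step itself is simple to sketch: weight a size-dependent optical efficiency by the lognormal probability density and integrate over radius. The efficiency curve and distribution parameters below are placeholders, not MSTM output.

```python
import numpy as np

# Statistical averaging of an optical efficiency over a lognormal size
# distribution. q_ext(r) is a stand-in for the MSTM-computed extinction
# efficiency of a cluster with monomer radius r.
def q_ext(r):
    return 2.0 + np.sin(10 * r) / (10 * r)      # placeholder curve only

mu, sigma = np.log(0.5), 0.4                    # lognormal parameters (assumed)
r = np.linspace(0.05, 3.0, 2000)                # radius grid, micrometres
pdf = (np.exp(-(np.log(r) - mu) ** 2 / (2 * sigma ** 2))
       / (r * sigma * np.sqrt(2 * np.pi)))

# Uniform grid, so the grid spacing cancels in the weighted mean.
q_avg = np.sum(q_ext(r) * pdf) / np.sum(pdf)
print("distribution-averaged extinction efficiency:", round(q_avg, 3))
```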

  14. Songs as an aid for language acquisition.

    PubMed

    Schön, Daniele; Boyer, Maud; Moreno, Sylvain; Besson, Mireille; Peretz, Isabelle; Kolinsky, Régine

    2008-02-01

    In previous research, Saffran and colleagues [Saffran, J. R., Aslin, R. N., & Newport, E. L. (1996). Statistical learning by 8-month-old infants. Science, 274, 1926-1928; Saffran, J. R., Newport, E. L., & Aslin, R. N. (1996). Word segmentation: The role of distributional cues. Journal of Memory and Language, 35, 606-621.] have shown that adults and infants can use the statistical properties of syllable sequences to extract words from continuous speech. They also showed that a similar learning mechanism operates with musical stimuli [Saffran, J. R., Johnson, R. E. K., Aslin, N., & Newport, E. L. (1999). Statistical learning of tone sequences by human infants and adults. Cognition, 70, 27-52.]. In this work we combined linguistic and musical information and we compared language learning based on speech sequences to language learning based on sung sequences. We hypothesized that, compared to speech sequences, a consistent mapping of linguistic and musical information would enhance learning. Results confirmed the hypothesis, showing strong facilitation of learning with song compared to speech. Most importantly, the present results show that learning a new language, especially in the first learning phase wherein one needs to segment new words, may largely benefit from the motivational and structuring properties of music in song.

  15. ArrayVigil: a methodology for statistical comparison of gene signatures using segregated-one-tailed (SOT) Wilcoxon's signed-rank test.

    PubMed

    Khan, Haseeb Ahmad

    2005-01-28

    Owing to their versatile diagnostic and prognostic fidelity, molecular signatures or fingerprints are anticipated to become the most powerful tools for cancer management in the near future. Notwithstanding the experimental advancements in microarray technology, methods for analyzing either whole arrays or gene signatures have not been firmly established. Recently, an algorithm, ArraySolver, was reported by Khan for two-group comparison of microarray gene expression data using the two-tailed Wilcoxon signed-rank test. Most molecular signatures are composed of two sets of genes (hybrid signatures), wherein up-regulation of one set and down-regulation of the other collectively define the purpose of a gene signature. Since the direction of a selected gene's expression (positive or negative) with respect to a particular disease condition is known, application of one-tailed statistics could be a more relevant choice. A novel method, ArrayVigil, is described for comparing hybrid signatures using a segregated-one-tailed (SOT) Wilcoxon signed-rank test, and the results are compared with integrated-two-tailed (ITT) procedures (SPSS and ArraySolver). ArrayVigil resulted in lower P values than those obtained from ITT statistics when comparing real data from four signatures.
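    A minimal sketch of the SOT idea using scipy: test the up-regulated set with the one-sided alternative "greater" and the down-regulated set with "less", instead of a single two-tailed test over the pooled signature. The paired expression values below are fabricated.

```python
import numpy as np
from scipy.stats import wilcoxon

# Fabricated paired expression values (condition A vs condition B) for the
# two halves of a hybrid signature.
up_a   = np.array([5.1, 6.3, 7.2, 5.9, 6.8, 6.1])
up_b   = np.array([3.9, 5.0, 6.1, 4.8, 5.5, 5.7])
down_a = np.array([2.1, 1.8, 2.5, 2.0, 1.6, 2.2])
down_b = np.array([3.0, 2.9, 3.8, 2.6, 2.4, 2.5])

# Up-regulated set: the expected direction is A > B.
stat_up, p_up = wilcoxon(up_a, up_b, alternative="greater")
# Down-regulated set: the expected direction is A < B.
stat_dn, p_dn = wilcoxon(down_a, down_b, alternative="less")
print(f"up-regulated set: p = {p_up:.3f}; down-regulated set: p = {p_dn:.3f}")
```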

  16. Ecological Momentary Assessments and Automated Time Series Analysis to Promote Tailored Health Care: A Proof-of-Principle Study

    PubMed Central

    Emerencia, Ando C; Bos, Elisabeth H; Rosmalen, Judith GM; Riese, Harriëtte; Aiello, Marco; Sytema, Sjoerd; de Jonge, Peter

    2015-01-01

    Background Health promotion can be tailored by combining ecological momentary assessments (EMA) with time series analysis. This combined method allows for studying the temporal order of dynamic relationships among variables, which may provide concrete indications for intervention. However, application of this method in health care practice is hampered because analyses are conducted manually and advanced statistical expertise is required. Objective This study aims to show how this limitation can be overcome by introducing automated vector autoregressive modeling (VAR) of EMA data and to evaluate its feasibility through comparisons with results of previously published manual analyses. Methods We developed a Web-based open source application, called AutoVAR, which automates time series analyses of EMA data and provides output that is intended to be interpretable by nonexperts. The statistical technique we used was VAR. AutoVAR tests and evaluates all possible VAR models within a given combinatorial search space and summarizes their results, thereby replacing the researcher's tasks of conducting the analysis, making an informed selection of models, and choosing the best model. We compared the output of AutoVAR to the output of a previously published manual analysis (n=4). Results An illustrative example consisting of 4 analyses was provided. Compared to the manual output, the AutoVAR output presents similar model characteristics and statistical results in terms of the Akaike information criterion, the Bayesian information criterion, and the test statistic of the Granger causality test. Conclusions Results suggest that automated analysis and interpretation of time series is feasible. Compared to a manual procedure, the automated procedure is more robust and can save days of time. These findings may pave the way for using time series analysis for health promotion on a larger scale. AutoVAR was evaluated using the results of a previously conducted manual analysis. Analysis of additional datasets is needed in order to validate and refine the application for general use. PMID:26254160
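    The core of such an automated analysis can be sketched with statsmodels: fit a VAR with automatic lag-order selection by information criterion and run a Granger causality test. The two-variable diary data are synthetic, and AIC-only selection is a simplification of AutoVAR's exhaustive model search.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(3)

# Fabricated EMA diary: 90 daily ratings of two variables, where mood is
# constructed to lag activity by one day.
n = 90
activity = rng.normal(size=n).cumsum() * 0.1 + rng.normal(size=n)
mood = np.roll(activity, 1) * 0.4 + rng.normal(size=n)
data = pd.DataFrame({"mood": mood, "activity": activity})

# Automated lag-order selection over a small search space (AIC here;
# AutoVAR evaluates and summarizes all candidate models).
results = VAR(data).fit(maxlags=3, ic="aic")
print("selected lag order:", results.k_ar)
print("AIC:", round(results.aic, 2), " BIC:", round(results.bic, 2))

# Granger causality: does activity help predict mood?
gc = results.test_causality("mood", ["activity"], kind="f")
print("Granger F =", round(gc.test_statistic, 2), " p =", round(gc.pvalue, 4))
```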

  17. Signal Statistics and Maximum Likelihood Sequence Estimation in Intensity Modulated Fiber Optic Links Containing a Single Optical Pre-amplifier.

    PubMed

    Alić, Nikola; Papen, George; Saperstein, Robert; Milstein, Laurence; Fainman, Yeshaiahu

    2005-06-13

    Exact signal statistics for fiber-optic links containing a single optical pre-amplifier are calculated and applied to sequence estimation for electronic dispersion compensation. The performance is evaluated and compared with results based on the approximate chi-square statistics. We show that detection in existing systems based on exact statistics can be improved relative to using a chi-square distribution for realistic filter shapes. In contrast, for high-spectral efficiency systems the difference between the two approaches diminishes, and performance tends to be less dependent on the exact shape of the filter used.

  18. Comparison of parameterized nitric acid rainout rates using a coupled stochastic-photochemical tropospheric model

    NASA Technical Reports Server (NTRS)

    Stewart, Richard W.; Thompson, Anne M.; Owens, Melody A.; Herwehe, Jerold A.

    1989-01-01

    A major tropospheric loss of soluble species such as nitric acid results from scavenging by water droplets. Several theoretical formulations have been advanced which relate an effective time-independent loss rate for soluble species to statistical properties of precipitation such as the wet fraction and the length of a precipitation cycle. In this paper, various 'effective' loss rates that have been proposed are compared with the results of detailed time-dependent model calculations carried out over a seasonal time scale. The model is a stochastic precipitation model coupled to a tropospheric photochemical model. The results of numerous time-dependent seasonal model runs are used to derive numerical values for the nitric acid residence time for several assumed sets of precipitation statistics. These values are then compared with the results obtained by utilizing theoretical 'effective' loss rates in time-independent models.

  19. Effects of vibratory stimulation on sexual response in women with spinal cord injury.

    PubMed

    Sipski, Marca L; Alexander, Craig J; Gomez-Marin, Orlando; Grossbard, Marissa; Rosen, Raymond

    2005-01-01

    Women with spinal cord injuries (SCIs) have predictable alterations in sexual responses. They commonly have a decreased ability to achieve genital sexual arousal. This study determined whether the use of vibratory stimulation would result in increased genital arousal as measured by vaginal pulse amplitude in women with SCIs. Subjects included 46 women with SCIs and 11 nondisabled control subjects. Results revealed that vibratory clitoral stimulation resulted in increased vaginal pulse amplitude as compared with manual clitoral stimulation in both SCI and nondisabled subjects; however, these differences were not statistically significant. Subjective levels of arousal were also compared between SCI and nondisabled control subjects. Both vibratory and manual clitoral stimulation resulted in significantly increased arousal levels in both groups of subjects; however, statistically significant differences between the two conditions were only noted in nondisabled subjects. Further studies of the effects of repetitive vibratory stimulation are underway.

  20. Statistical properties of a cloud ensemble - A numerical study

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Simpson, Joanne; Soong, Su-Tzai

    1987-01-01

    The statistical properties of cloud ensembles under a specified large-scale environment, such as mass flux by cloud drafts and vertical velocity as well as the condensation and evaporation associated with these cloud drafts, are examined using a three-dimensional numerical cloud ensemble model described by Soong and Ogura (1980) and Tao and Soong (1986). The cloud drafts are classified as active and inactive, and separate contributions to cloud statistics in areas of different cloud activity are then evaluated. The model results compare well with results obtained from aircraft measurements of a well-organized ITCZ rainband that occurred on August 12, 1974, during the Global Atmospheric Research Program's Atlantic Tropical Experiment.

  1. Deciphering the Landauer-Büttiker Transmission Function from Single Molecule Break Junction Experiments

    NASA Astrophysics Data System (ADS)

    Reuter, Matthew; Tschudi, Stephen

    When investigating the electrical response properties of molecules, experiments often measure conductance whereas computation predicts transmission probabilities. Although the Landauer-Büttiker theory relates the two in the limit of coherent scattering through the molecule, a direct comparison between experiment and computation can still be difficult. Experimental data (specifically from break junctions) are statistical, whereas computational results are deterministic. Many studies compare the most probable experimental conductance with computation, but such an analysis discards almost all of the experimental statistics. In this work we develop tools to decipher the Landauer-Büttiker transmission function directly from experimental statistics and then apply them to enable a fairer comparison between experimental and computational results.

  2. Modelling the effect of structural QSAR parameters on skin penetration using genetic programming

    NASA Astrophysics Data System (ADS)

    Chung, K. K.; Do, D. Q.

    2010-09-01

    In order to model relationships between chemical structures and biological effects in quantitative structure-activity relationship (QSAR) data, an alternative artificial intelligence technique, genetic programming (GP), was investigated and compared with the traditional statistical approach. GP, whose primary advantage is that it generates explicit mathematical equations, was employed to model QSAR data and to identify the most important molecular descriptors in the data. The models produced by GP agreed with the statistical results, and ANOVA showed that the most predictive GP models were significantly improved compared with the statistical models. Recently, artificial intelligence techniques have been applied widely to analyse QSAR data. With its capability of generating mathematical equations, GP can be considered an effective and efficient method for modelling QSAR data.
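    As a hedged illustration of GP's main selling point, the explicit equation it returns, the sketch below uses gplearn, an open-source GP library (not necessarily the implementation used in the study), on a toy two-descriptor dataset with a known hidden equation.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor  # one open-source GP library

rng = np.random.default_rng(4)

# Toy QSAR-like data: two "molecular descriptors" and a response that
# depends on them through a simple hidden equation (entirely synthetic).
X = rng.uniform(-1, 1, size=(200, 2))
y = 1.5 * X[:, 0] ** 2 - 0.7 * X[:, 0] * X[:, 1]

gp = SymbolicRegressor(population_size=500, generations=15,
                       function_set=("add", "sub", "mul"),
                       parsimony_coefficient=0.01, random_state=0)
gp.fit(X, y)
# Unlike a black-box model, GP returns a readable symbolic expression.
print(gp._program)
```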

  3. Effect and safety of early weight-bearing on the outcome after open-wedge high tibial osteotomy: a systematic review and meta-analysis.

    PubMed

    Lee, O-Sung; Ahn, Soyeon; Lee, Yong Seuk

    2017-07-01

    The purpose of this systematic review and meta-analysis was to evaluate the effectiveness and safety of early weight-bearing by comparing clinical and radiological outcomes between early and traditional delayed weight-bearing after open-wedge high tibial osteotomy (OWHTO). A rigorous and systematic approach was used, and the methodological quality of the included studies was assessed. Outcomes reported in two or more articles were presented as forest plots. A 95% confidence interval was calculated for each effect size, and we calculated the I² statistic, which presents the percentage of total variation attributable to heterogeneity among studies. The random-effects model was used to calculate the effect size. Six articles were included in the final analysis. All case groups involved early full weight-bearing within 2 weeks; all control groups involved late full weight-bearing between 6 weeks and 2 months. Pooled analysis was possible for the improvement in Lysholm score, but no statistically significant difference was shown between groups. Other clinical results were also similar between groups. Four studies reported the mechanical femorotibial angle (mFTA), and this result showed no statistically significant difference between groups in the pooled analysis. Furthermore, early weight-bearing showed more favorable results in some radiologic results (osseointegration and patellar height) and complications (thrombophlebitis and recurrence). Our analysis supports that early full weight-bearing after OWHTO using a locking plate leads to outcomes comparable to those of delayed weight-bearing in clinical and radiological terms; moreover, early weight-bearing was more favorable with respect to some radiologic parameters and complications compared with delayed weight-bearing.
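    As a reminder of the heterogeneity measure mentioned above, I² expresses the share of across-study variation attributable to heterogeneity rather than chance; the sketch below computes it from Cochran's Q with invented numbers.

```python
# I^2 from Cochran's Q: the excess of Q over its degrees of freedom,
# as a share of Q, floored at zero. Values here are illustrative only.
k = 6            # number of pooled studies
q = 9.4          # Cochran's Q from the inverse-variance weights (made up)
i2 = max(0.0, (q - (k - 1)) / q) * 100
print(f"I^2 = {i2:.0f}%")
```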

  4. Exploratory Visual Analysis of Statistical Results from Microarray Experiments Comparing High and Low Grade Glioma

    PubMed Central

    Reif, David M.; Israel, Mark A.; Moore, Jason H.

    2007-01-01

    The biological interpretation of gene expression microarray results is a daunting challenge. For complex diseases such as cancer, wherein the body of published research is extensive, the incorporation of expert knowledge provides a useful analytical framework. We have previously developed the Exploratory Visual Analysis (EVA) software for exploring data analysis results in the context of annotation information about each gene, as well as biologically relevant groups of genes. We present EVA as a flexible combination of statistics and biological annotation that provides a straightforward visual interface for the interpretation of microarray analyses of gene expression in the most commonly occurring class of brain tumors, glioma. We demonstrate the utility of EVA for the biological interpretation of statistical results by analyzing publicly available gene expression profiles of two important glial tumors. The results of a statistical comparison between 21 malignant, high-grade glioblastoma multiforme (GBM) tumors and 19 indolent, low-grade pilocytic astrocytomas were analyzed using EVA. By using EVA to examine the results of a relatively simple statistical analysis, we were able to identify tumor class-specific gene expression patterns having both statistical and biological significance. Our interactive analysis highlighted the potential importance of genes involved in cell cycle progression, proliferation, signaling, adhesion, migration, motility, and structure, as well as candidate gene loci on a region of Chromosome 7 that has been implicated in glioma. Because EVA does not require statistical or computational expertise and has the flexibility to accommodate any type of statistical analysis, we anticipate EVA will prove a useful addition to the repertoire of computational methods used for microarray data analysis. EVA is available at no charge to academic users and can be found at http://www.epistasis.org. PMID:19390666

  5. Limited privacy protection and poor sensitivity: Is it time to move on from the statistical linkage key-581?

    PubMed

    Randall, Sean M; Ferrante, Anna M; Boyd, James H; Brown, Adrian P; Semmens, James B

    2016-08-01

    The statistical linkage key (SLK-581) is a common tool for record linkage in Australia, due to its ability to provide some privacy protection. However, newer privacy-preserving approaches may provide greater privacy protection, while allowing high-quality linkage. To evaluate the standard SLK-581, encrypted SLK-581 and a newer privacy-preserving approach using Bloom filters, in terms of both privacy and linkage quality. Linkage quality was compared by conducting linkages on Australian health datasets using these three techniques and examining results. Privacy was compared qualitatively in relation to a series of scenarios where privacy breaches may occur. The Bloom filter technique offered greater privacy protection and linkage quality compared to the SLK-based method commonly used in Australia. The adoption of new privacy-preserving methods would allow both greater confidence in research results, while significantly improving privacy protection. © The Author(s) 2016.
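    A minimal sketch of the Bloom-filter technique (in the style of Schnell et al., with an assumed filter size and hash count): encode each name's character bigrams into a bit array with several salted hashes, then compare records by the Dice similarity of the arrays, so that similar spellings remain detectable without exchanging the names themselves.

```python
import hashlib

def bigrams(name):
    s = f"_{name.lower()}_"                      # pad so edge characters count
    return {s[i:i + 2] for i in range(len(s) - 1)}

def bloom(name, size=100, hashes=10):
    """Encode a name's bigrams into a Bloom filter (illustrative parameters)."""
    bits = [0] * size
    for gram in bigrams(name):
        for k in range(hashes):                  # k salted hash functions
            h = hashlib.sha1(f"{k}{gram}".encode()).digest()
            bits[int.from_bytes(h, "big") % size] = 1
    return bits

def dice(a, b):
    """Dice similarity of two bit vectors; values near 1 suggest a match."""
    inter = sum(x & y for x, y in zip(a, b))
    return 2 * inter / (sum(a) + sum(b))

print(round(dice(bloom("catherine"), bloom("katherine")), 2))  # similar names
print(round(dice(bloom("catherine"), bloom("smith")), 2))      # different names
```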

  6. [Statistical approach to evaluate the occurrence of out-of acceptable ranges and accuracy for antimicrobial susceptibility tests in inter-laboratory quality control program].

    PubMed

    Ueno, Tamio; Matuda, Junichi; Yamane, Nobuhisa

    2013-03-01

    To evaluate the occurrence of out-of-acceptable-range results and the accuracy of antimicrobial susceptibility tests, we applied a new statistical tool to the Inter-Laboratory Quality Control Program established by the Kyushu Quality Control Research Group. First, we defined acceptable ranges of minimum inhibitory concentration (MIC) for broth microdilution tests and of inhibitory zone diameter for disk diffusion tests on the basis of Clinical and Laboratory Standards Institute (CLSI) M100-S21. In the analysis, more than two out-of-acceptable-range results in the 20 tests were considered not allowable according to the CLSI document. Of the 90 participating laboratories, 46 (51%) experienced one or more occurrences of out-of-acceptable-range results. Then, a binomial test was applied to each participating laboratory. The results indicated that the occurrences of out-of-acceptable-range results in 11 laboratories were significantly higher than the CLSI recommendation (allowable rate ≤ 0.05). The standard deviation indices (SDI) were calculated using the reported results and the mean and standard deviation values for the respective antimicrobial agents tested. In the evaluation of accuracy, the mean value from each laboratory was statistically compared with zero using a Student's t-test. The results revealed that 5 of the 11 above laboratories reported erroneous test results that systematically drifted to the side of resistance. In conclusion, our statistical approach has enabled us to detect significantly higher occurrences and sources of interpretive errors in antimicrobial susceptibility tests; therefore, this approach can provide us with additional information that can improve the accuracy of test results in clinical microbiology laboratories.
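    The binomial test in this setting is a one-liner with scipy; the sketch below asks whether, say, 4 out-of-acceptable-range results in 20 tests is significantly more than the allowable rate of 0.05 (the count of 4 is illustrative, not taken from the paper).

```python
from scipy.stats import binomtest

# A laboratory reports k out-of-acceptable-range results in n = 20 tests.
# Under the allowable error rate p <= 0.05, test whether the observed
# count is significantly higher than allowed (one-sided).
result = binomtest(k=4, n=20, p=0.05, alternative="greater")
print(f"P(X >= 4 | n=20, p=0.05) = {result.pvalue:.4f}")
```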

  7. Semiquantitative determination of mesophilic, aerobic microorganisms in cocoa products using the Soleris NF-TVC method.

    PubMed

    Montei, Carolyn; McDougal, Susan; Mozola, Mark; Rice, Jennifer

    2014-01-01

    The Soleris Non-fermenting Total Viable Count method was previously validated for a wide variety of food products, including cocoa powder. A matrix extension study was conducted to validate the method for use with cocoa butter and cocoa liquor. Test samples included naturally contaminated cocoa liquor and cocoa butter inoculated with natural microbial flora derived from cocoa liquor. A probability of detection statistical model was used to compare Soleris results at multiple test thresholds (dilutions) with aerobic plate counts determined using the AOAC Official Method 966.23 dilution plating method. Results of the two methods were not statistically different at any dilution level in any of the three trials conducted. The Soleris method offers the advantage of results within 24 h, compared to the 48 h required by standard dilution plating methods.

  8. Disutility analysis of oil spills: graphs and trends.

    PubMed

    Ventikos, Nikolaos P; Sotiropoulos, Foivos S

    2014-04-15

    This paper reports the results of an analysis of oil spill cost data assembled from a worldwide pollution database that mainly includes data from the International Oil Pollution Compensation Fund. The purpose of the study is to analyze the conditions of marine pollution accidents and the factors that impact the costs of oil spills worldwide. The accidents are classified into categories based on their characteristics, and the cases are compared using charts to show how the costs are affected under all conditions. This study can be used as a helpful reference for developing a detailed statistical model that is capable of reliably and realistically estimating the total costs of oil spills. To illustrate the differences identified by this statistical analysis, the results are compared with the results of previous studies, and the findings are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. How Statisticians Speak Risk

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Redus, K.S.

    2007-07-01

    The foundation of statistics deals with (a) how to measure and collect data and (b) how to identify models using estimates of statistical parameters derived from the data. Risk is a term used by the statistical community and those that employ statistics to express the results of a statistically based study. Statistical risk is represented as a probability that, for example, a statistical model is sufficient to describe a data set; but, risk is also interpreted as a measure of worth of one alternative when compared to another. The common thread of any risk-based problem is the combination of (a) the chance an event will occur, with (b) the value of the event. This paper presents an introduction to, and some examples of, statistical risk-based decision making from a quantitative, visual, and linguistic sense. This should help in understanding areas of radioactive waste management that can be suitably expressed using statistical risk and vice-versa. (authors)

  10. Analysis of uncertainties and convergence of the statistical quantities in turbulent wall-bounded flows by means of a physically based criterion

    NASA Astrophysics Data System (ADS)

    Andrade, João Rodrigo; Martins, Ramon Silva; Thompson, Roney Leon; Mompean, Gilmar; da Silveira Neto, Aristeu

    2018-04-01

    The present paper provides an analysis of the statistical uncertainties associated with direct numerical simulation (DNS) results and experimental data for turbulent channel and pipe flows, introducing a new physically based quantification of these errors to improve the determination of the statistical deviations between DNSs and experiments. The analysis is carried out using a criterion recently proposed by Thompson et al. ["A methodology to evaluate statistical errors in DNS data of plane channel flows," Comput. Fluids 130, 1-7 (2016)] for fully turbulent plane channel flows, where the mean velocity error is estimated by considering the Reynolds stress tensor and using the balance of the mean force equation. It also shows how the residual error evolves in time for a DNS of a plane channel flow and how the Reynolds number influences its convergence rate. The root mean square of the residual error is shown in order to capture a single quantitative value of the error associated with the dimensionless averaging time. The evolution in time of the error norm is compared with the final error provided by DNS data at similar Reynolds numbers available in the literature. A direct consequence of this approach is that it was possible to compare different numerical results and experimental data, providing an improved understanding of the convergence of the statistical quantities in turbulent wall-bounded flows.

  11. Statistical Approaches to Adjusting Weights for Dependent Arms in Network Meta-analysis.

    PubMed

    Su, Yu-Xuan; Tu, Yu-Kang

    2018-05-22

    Network meta-analysis compares multiple treatments in terms of their efficacy and harm by including evidence from randomized controlled trials. Most clinical trials use a parallel design, where patients are randomly allocated to different treatments and receive only one treatment. However, some trials use within-person designs such as split-body, split-mouth and cross-over designs, where each patient may receive more than one treatment. Data from treatment arms within these trials are no longer independent, so the correlations between dependent arms need to be accounted for in the statistical analyses. Ignoring these correlations may result in incorrect conclusions. The main objective of this study is to develop statistical approaches to adjusting weights for dependent arms within special-design trials. In this study, we demonstrate the following three approaches: the data augmentation approach, the adjusting variance approach, and the reducing weight approach. These three approaches can be readily implemented in current statistical tools such as R and Stata. An example of periodontal regeneration was used to demonstrate how these approaches could be undertaken and implemented within statistical software packages, and to compare results from different approaches. The adjusting variance approach can be implemented within the network package in Stata, while the reducing weight approach requires computer programming to set up the within-study variance-covariance matrix. This article is protected by copyright. All rights reserved.

  12. Fine Mapping Causal Variants with an Approximate Bayesian Method Using Marginal Test Statistics

    PubMed Central

    Chen, Wenan; Larrabee, Beth R.; Ovsyannikova, Inna G.; Kennedy, Richard B.; Haralambieva, Iana H.; Poland, Gregory A.; Schaid, Daniel J.

    2015-01-01

    Two recently developed fine-mapping methods, CAVIAR and PAINTOR, demonstrate better performance than other fine-mapping methods. They also have the advantage of using only the marginal test statistics and the correlation among SNPs. Both methods leverage the fact that the marginal test statistics asymptotically follow a multivariate normal distribution and are likelihood based. However, their relationship with Bayesian fine mapping, such as BIMBAM, is not clear. In this study, we first show that CAVIAR and BIMBAM are actually approximately equivalent to each other. This leads to a fine-mapping method using marginal test statistics in the Bayesian framework, which we call CAVIAR Bayes factor (CAVIARBF). Another advantage of the Bayesian framework is that it can answer both association and fine-mapping questions. We also used simulations to compare CAVIARBF with other methods under different numbers of causal variants. The results showed that both CAVIARBF and BIMBAM have better performance than PAINTOR and other methods. Compared to BIMBAM, CAVIARBF has the advantage of using only marginal test statistics and takes about one-quarter to one-fifth of the running time. We applied the different methods to two independent cohorts of the same phenotype. Results showed that CAVIARBF, BIMBAM, and PAINTOR selected the same top 3 SNPs; however, CAVIARBF and BIMBAM had better consistency in selecting the top 10 ranked SNPs between the two cohorts. Software is available at https://bitbucket.org/Wenan/caviarbf. PMID:25948564
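    The single-SNP building block behind such marginal-statistic Bayes factors can be sketched with Wakefield's (2009) approximation; the z-scores, standard errors, and prior effect variance below are arbitrary illustrative values, and the full methods additionally model multiple SNPs jointly through their correlation matrix.

```python
import numpy as np

def wakefield_abf(z, se, w=0.04):
    """Approximate Bayes factor in favour of association for one SNP from
    its marginal statistics (Wakefield 2009): z is the Wald statistic, se
    the standard error of the effect estimate, and w the prior variance of
    the effect (0.2^2 here, a conventional but arbitrary choice)."""
    v = se ** 2
    return np.sqrt(v / (v + w)) * np.exp(z ** 2 * w / (2 * (v + w)))

# Illustrative marginal statistics for three candidate SNPs (made up).
for z, se in [(5.2, 0.05), (2.1, 0.05), (0.4, 0.05)]:
    print(f"z = {z:4.1f} -> ABF for association = {wakefield_abf(z, se):.3g}")
```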

  13. Surgical Treatment for Discogenic Low-Back Pain: Lumbar Arthroplasty Results in Superior Pain Reduction and Disability Level Improvement Compared With Lumbar Fusion

    PubMed Central

    2007-01-01

    Background The US Food and Drug Administration approved the Charité artificial disc on October 26, 2004. This approval was based on an extensive analysis and review process; 20 years of disc usage worldwide; and the results of a prospective, randomized, controlled clinical trial that compared lumbar artificial disc replacement to fusion. The results of the investigational device exemption (IDE) study led to a conclusion that clinical outcomes following lumbar arthroplasty were at least as good as outcomes from fusion. Methods The author performed a new analysis of the Visual Analog Scale pain scores and the Oswestry Disability Index scores from the Charité artificial disc IDE study and used a nonparametric statistical test, because observed data distributions were not normal. The analysis included all of the enrolled subjects in both the nonrandomized and randomized phases of the study. Results Subjects from both the treatment and control groups improved from the baseline situation (P < .001) at all follow-up times (6 weeks to 24 months). Additionally, these pain and disability levels with artificial disc replacement were superior (P < .05) to the fusion treatment at all follow-up times including 2 years. Conclusions The a priori statistical plan for an IDE study may not adequately address the final distribution of the data. Therefore, statistical analyses more appropriate to the distribution may be necessary to develop meaningful statistical conclusions from the study. A nonparametric statistical analysis of the Charité artificial disc IDE outcomes scores demonstrates superiority for lumbar arthroplasty versus fusion at all follow-up time points to 24 months. PMID:25802574
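    The kind of rank-based reanalysis described, comparing two skewed outcome distributions without a normality assumption, can be sketched with a Mann-Whitney U test; the score distributions below are fabricated stand-ins, not the IDE study data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(5)

# Fabricated 24-month Oswestry Disability Index scores for two arms; the
# skewed gamma draws mimic the non-normal distributions the author reports.
arthroplasty = rng.gamma(shape=2.0, scale=8.0, size=200)
fusion = rng.gamma(shape=2.0, scale=11.0, size=100)

# Rank-based comparison that does not assume normality.
stat, p = mannwhitneyu(arthroplasty, fusion, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.0f}, p = {p:.4f}")
```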

  14. [Comparison of the effect of different diagnostic criteria of subclinical hypothyroidism and positive TPO-Ab on pregnancy outcomes].

    PubMed

    He, Yiping; He, Tongqiang; Wang, Yanxia; Xu, Zhao; Xu, Yehong; Wu, Yiqing; Ji, Jing; Mi, Yang

    2014-11-01

    To explore the effect of different diagnostic criteria for subclinical hypothyroidism, based on thyroid stimulating hormone (TSH) and positive thyroid peroxidase antibodies (TPO-Ab), on pregnancy outcomes. A total of 3 244 pregnant women who received antenatal care and delivered in the Child and Maternity Health Hospital of Shaanxi Province from August 2011 to February 2013 were recruited prospectively. According to the standard of the American Thyroid Association (ATA), pregnant women with normal serum free thyroxine (FT4) and a serum TSH level > 2.50 mU/L were diagnosed with subclinical hypothyroidism in pregnancy (foreign standard group). According to the Guideline of Diagnosis and Therapy of Prenatal and Postpartum Thyroid Disease issued by the Chinese Society of Endocrinology and the Chinese Society of Perinatal Medicine in 2012, pregnant women with a serum TSH level > 5.76 mU/L and normal FT4 were diagnosed with subclinical hypothyroidism in pregnancy (national standard group). Pregnant women with subclinical hypothyroidism whose serum TSH levels were between 2.50 and 5.76 mU/L were designated the study observed group, and pregnant women with a serum TSH level < 2.50 mU/L and negative TPO-Ab were designated the control group. Positive TPO-Ab results and the pregnancy outcomes were analyzed. (1) There were 635 cases in the foreign standard group, an incidence of 19.57% (635/3 244), and 70 cases in the national standard group, an incidence of 2.16% (70/3 244); the difference between the two groups was statistically significant (P < 0.01). There were 565 cases in the study observed group, an incidence of 17.42% (565/3 244), which differed significantly from the national standard group (P < 0.01) but not from the foreign standard group (P > 0.05). (2) Among the 3 244 cases, 402 had positive TPO-Ab. In the foreign standard group, 318 cases were TPO-Ab positive, giving an incidence of subclinical hypothyroidism of 79.10% (318/402), while 317 cases were TPO-Ab negative, an incidence of 11.15% (317/2 842); the difference was statistically significant (P < 0.01). In the national standard group, 46 cases were TPO-Ab positive, an incidence of 11.44% (46/402), and 24 cases were TPO-Ab negative, an incidence of 0.84% (24/2 842); the difference was statistically significant (P < 0.01). In the study observed group, 272 cases were TPO-Ab positive, an incidence of 67.66% (272/402), and 293 cases were negative, an incidence of 10.31% (293/2 842); the difference was statistically significant (P < 0.01). (3) The incidences of miscarriage, premature delivery, gestational hypertension disease and gestational diabetes mellitus (GDM) in the foreign standard group differed significantly from those in the control group (P < 0.05), while there were no statistically significant differences in the incidence of placental abruption or fetal distress (P > 0.05). The incidences of miscarriage, premature delivery, gestational hypertension disease and GDM in the national standard group likewise differed significantly from the control group (P < 0.05), with no statistically significant differences in placental abruption or fetal distress (P > 0.05). In the study observed group, the incidences of miscarriage, gestational hypertension disease and GDM differed significantly from those in the control group (P < 0.05), while the incidences of preterm labor, placental abruption and fetal distress showed no statistically significant differences (P > 0.05). (4) The incidences of miscarriage, premature delivery, gestational hypertension disease, GDM, placental abruption and fetal distress in the TPO-Ab-positive cases of the national standard group showed an increasing trend compared with the TPO-Ab-negative cases, but without statistical significance (P > 0.05). In the study observed group, the incidences of gestational hypertension disease and GDM differed significantly between TPO-Ab-positive and TPO-Ab-negative cases (P < 0.05), while the incidences of miscarriage, premature birth, placental abruption and fetal distress did not (P > 0.05). In the foreign standard group, the incidences of gestational hypertension disease and GDM also differed significantly between TPO-Ab-positive and TPO-Ab-negative cases (P < 0.05). (1) The incidence of subclinical hypothyroidism is rather high during early pregnancy and can lead to adverse pregnancy outcomes. (2) A positive TPO-Ab result has important predictive value for thyroid dysfunction and GDM. (3) The ATA diagnostic standard (serum TSH level > 2.50 mU/L) is comparatively safer for antenatal care, whereas the national standard (serum TSH level > 5.76 mU/L) is less conducive to pregnancy management.

  15. Selection of the Maximum Spatial Cluster Size of the Spatial Scan Statistic by Using the Maximum Clustering Set-Proportion Statistic.

    PubMed

    Ma, Yue; Yin, Fei; Zhang, Tao; Zhou, Xiaohua Andrew; Li, Xiaosong

    2016-01-01

    Spatial scan statistics are widely used in various fields. The performance of these statistics is influenced by parameters, such as maximum spatial cluster size, and can be improved by parameter selection using performance measures. Current performance measures are based on the presence of clusters and are thus inapplicable to data sets without known clusters. In this work, we propose a novel overall performance measure called maximum clustering set-proportion (MCS-P), which is based on the likelihood of the union of detected clusters and the applied dataset. MCS-P was compared with existing performance measures in a simulation study to select the maximum spatial cluster size. Results of other performance measures, such as sensitivity and misclassification, suggest that the spatial scan statistic achieves accurate results in most scenarios with the maximum spatial cluster sizes selected using MCS-P. Given that previously known clusters are not required in the proposed strategy, selection of the optimal maximum cluster size with MCS-P can improve the performance of the scan statistic in applications without identified clusters.

  16. Selection of the Maximum Spatial Cluster Size of the Spatial Scan Statistic by Using the Maximum Clustering Set-Proportion Statistic

    PubMed Central

    Ma, Yue; Yin, Fei; Zhang, Tao; Zhou, Xiaohua Andrew; Li, Xiaosong

    2016-01-01

    Spatial scan statistics are widely used in various fields. The performance of these statistics is influenced by parameters, such as maximum spatial cluster size, and can be improved by parameter selection using performance measures. Current performance measures are based on the presence of clusters and are thus inapplicable to data sets without known clusters. In this work, we propose a novel overall performance measure called maximum clustering set–proportion (MCS-P), which is based on the likelihood of the union of detected clusters and the applied dataset. MCS-P was compared with existing performance measures in a simulation study to select the maximum spatial cluster size. Results of other performance measures, such as sensitivity and misclassification, suggest that the spatial scan statistic achieves accurate results in most scenarios with the maximum spatial cluster sizes selected using MCS-P. Given that previously known clusters are not required in the proposed strategy, selection of the optimal maximum cluster size with MCS-P can improve the performance of the scan statistic in applications without identified clusters. PMID:26820646

  17. Test Statistics and Confidence Intervals to Establish Noninferiority between Treatments with Ordinal Categorical Data.

    PubMed

    Zhang, Fanghong; Miyaoka, Etsuo; Huang, Fuping; Tanaka, Yutaka

    2015-01-01

    The problem of establishing noninferiority between a new treatment and a standard (control) treatment is discussed for ordinal categorical data. A measure of treatment effect is used, and a method of specifying the noninferiority margin for the measure is provided. Two Z-type test statistics are proposed, in which the variance estimate is constructed under the shifted null hypothesis using U-statistics. Furthermore, the confidence interval and the sample size formula are given based on the proposed test statistics. The proposed procedure is applied to a dataset from a clinical trial. A simulation study is conducted to compare the performance of the proposed test statistics with that of existing ones, and the results show that the proposed test statistics are better in terms of deviation from the nominal level and power.
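    The generic shifted-null construction behind such Z-type tests can be sketched as follows; the effect estimate, standard error, and margin below are invented, and the paper's U-statistic variance estimation under the shifted null is not reproduced here.

```python
from scipy.stats import norm

def noninferiority_z(theta_hat, se, margin):
    """Generic Z-type noninferiority test: H0 says the new treatment is
    worse than the standard by at least `margin` on the chosen effect
    measure; rejecting H0 establishes noninferiority."""
    z = (theta_hat - margin) / se
    return z, norm.sf(z)          # one-sided p-value

# Illustrative numbers: estimated effect -0.02, SE 0.03, margin -0.10.
z, p = noninferiority_z(-0.02, 0.03, -0.10)
print(f"z = {z:.2f}, one-sided p = {p:.4f}")
```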

  18. Assessment of chamber pressure oscillations in the Shuttle SRB

    NASA Technical Reports Server (NTRS)

    Mathes, H. B.

    1980-01-01

    Combustion stability evaluations of the Shuttle solid propellant booster motor are reviewed. Measurements of the amplitude and frequency of the low-level chamber pressure oscillations detected in motor firings are discussed, and a statistical analysis of the data is presented. Oscillatory data from three recent motor firings are shown, and the results are compared with statistical predictions based on earlier motor firings.

  19. Wavelet analysis in ecology and epidemiology: impact of statistical tests

    PubMed Central

    Cazelles, Bernard; Cazelles, Kévin; Chavez, Mario

    2014-01-01

    Wavelet analysis is now frequently used to extract information from ecological and epidemiological time series. Statistical hypothesis tests are conducted on associated wavelet quantities to assess the likelihood that they are due to a random process. Such random processes represent null models and are generally based on synthetic data that share some statistical characteristics with the original time series. This allows the comparison of null statistics with those obtained from original time series. When creating synthetic datasets, different techniques of resampling result in different characteristics shared by the synthetic time series. Therefore, it becomes crucial to consider the impact of the resampling method on the results. We have addressed this point by comparing seven different statistical testing methods applied with different real and simulated data. Our results show that statistical assessment of periodic patterns is strongly affected by the choice of the resampling method, so two different resampling techniques could lead to two different conclusions about the same time series. Moreover, our results clearly show the inadequacy of resampling series generated by white noise and red noise that are nevertheless the methods currently used in the wide majority of wavelets applications. Our results highlight that the characteristics of a time series, namely its Fourier spectrum and autocorrelation, are important to consider when choosing the resampling technique. Results suggest that data-driven resampling methods should be used such as the hidden Markov model algorithm and the ‘beta-surrogate’ method. PMID:24284892
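    The surrogate-testing loop at issue can be sketched generically: compute a statistic on the original series, recompute it on many surrogates, and take the exceedance proportion as the p-value. The AR(1) generator below is the red-noise baseline the authors criticize; a data-driven generator (e.g. their HMM or beta-surrogate approach) would plug into the same loop. The test statistic here is a periodogram peak standing in for a wavelet quantity.

```python
import numpy as np

rng = np.random.default_rng(6)

def ar1_surrogate(x, rng):
    """Red-noise surrogate matching the series' lag-1 autocorrelation;
    shown as the pluggable baseline a data-driven generator would replace."""
    x = x - x.mean()
    phi = np.corrcoef(x[:-1], x[1:])[0, 1]
    noise = rng.normal(scale=x.std() * np.sqrt(1 - phi**2), size=len(x))
    s = np.zeros(len(x))
    for t in range(1, len(x)):
        s[t] = phi * s[t - 1] + noise[t]
    return s

def peak_power(x):
    """Test quantity: the periodogram maximum (excluding the DC bin)."""
    return np.abs(np.fft.rfft(x - x.mean()))[1:].max()

x = np.sin(2 * np.pi * np.arange(256) / 32) + rng.normal(size=256)  # toy series
null = [peak_power(ar1_surrogate(x, rng)) for _ in range(999)]
p = (1 + sum(n >= peak_power(x) for n in null)) / (1 + len(null))
print(f"surrogate-based p-value = {p:.3f}")
```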

  20. Wavelet analysis in ecology and epidemiology: impact of statistical tests.

    PubMed

    Cazelles, Bernard; Cazelles, Kévin; Chavez, Mario

    2014-02-06

    Wavelet analysis is now frequently used to extract information from ecological and epidemiological time series. Statistical hypothesis tests are conducted on associated wavelet quantities to assess the likelihood that they are due to a random process. Such random processes represent null models and are generally based on synthetic data that share some statistical characteristics with the original time series. This allows the comparison of null statistics with those obtained from original time series. When creating synthetic datasets, different techniques of resampling result in different characteristics shared by the synthetic time series. Therefore, it becomes crucial to consider the impact of the resampling method on the results. We have addressed this point by comparing seven different statistical testing methods applied with different real and simulated data. Our results show that statistical assessment of periodic patterns is strongly affected by the choice of the resampling method, so two different resampling techniques could lead to two different conclusions about the same time series. Moreover, our results clearly show the inadequacy of resampling series generated by white noise and red noise that are nevertheless the methods currently used in the wide majority of wavelets applications. Our results highlight that the characteristics of a time series, namely its Fourier spectrum and autocorrelation, are important to consider when choosing the resampling technique. Results suggest that data-driven resampling methods should be used such as the hidden Markov model algorithm and the 'beta-surrogate' method.
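
    To make the resampling issue concrete, here is a minimal sketch of significance testing against red-noise (AR(1)) surrogates, using the Fourier periodogram as a simple stand-in for wavelet power; the function names and the AR(1) fitting shortcut are illustrative assumptions, and the data-driven schemes the paper recommends (hidden Markov model, beta-surrogate) are more elaborate.

        import numpy as np

        def ar1_surrogates(x, n_surr=1000, rng=None):
            # AR(1) ("red noise") surrogates matched to the lag-1
            # autocorrelation and variance of x.
            rng = np.random.default_rng(rng)
            x = np.asarray(x, float) - np.mean(x)
            phi = np.corrcoef(x[:-1], x[1:])[0, 1]
            sigma = np.std(x) * np.sqrt(1 - phi**2)
            surr = np.empty((n_surr, len(x)))
            surr[:, 0] = rng.normal(0, np.std(x), n_surr)
            for t in range(1, len(x)):
                surr[:, t] = phi * surr[:, t - 1] + rng.normal(0, sigma, n_surr)
            return surr

        def spectral_significance(x, n_surr=1000, q=0.95):
            # Compare the periodogram of x with the null distribution from
            # AR(1) surrogates; True marks frequencies exceeding the null.
            power = np.abs(np.fft.rfft(x - np.mean(x)))**2
            null = np.abs(np.fft.rfft(ar1_surrogates(x, n_surr), axis=1))**2
            threshold = np.quantile(null, q, axis=0)
            return power > threshold

    Swapping ar1_surrogates for a white-noise generator typically lowers the low-frequency threshold and inflates the number of "significant" periodicities in an autocorrelated series, which illustrates the inadequacy the authors report.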

  1. TVT-Exact and midurethral sling (SLING-IUFT) operative procedures: a randomized study

    PubMed Central

    Aniulis, Povilas; Skaudickas, Darijus

    2015-01-01

    Objectives The aim of the study was to compare the results, effectiveness and complications of TVT-Exact and midurethral sling (SLING-IUFT) operations in the treatment of female stress urinary incontinence (SUI). Methods A single-center, nonblind, randomized study of women with SUI who were randomized to TVT-Exact or SLING-IUFT was performed by one surgeon from April 2009 to April 2011. SUI was diagnosed on coughing and Valsalva test, and urodynamics (cystometry and uroflowmetry) were assessed before operation and 1 year after surgery. This was a prospective randomized study with a follow-up period of 12 months. 76 patients were operated on using the TVT-Exact operation and 78 patients using the SLING-IUFT operation. There were no statistically significant differences between groups for BMI, parity, menopausal status or prolapse stage (no patient had cystocele greater than stage II). Results Mean operative time was significantly shorter in the SLING-IUFT group (19 ± 5.6 min) compared with the TVT-Exact group (27 ± 7.1 min). There were statistically significant differences in the effectiveness of the two procedures after one year: TVT-Exact at 94.5% and SLING-IUFT at 61.2%. Hospital stay was statistically significantly shorter in the SLING-IUFT group (1.2 ± 0.5 days) compared with the TVT-Exact group (3.5 ± 1.5 days). Statistically significantly fewer complications occurred in the SLING-IUFT group. Conclusion The TVT-Exact and SLING-IUFT operations are both effective for the surgical treatment of female stress urinary incontinence. The SLING-IUFT involved a shorter operation time and a lower complication rate, whereas the TVT-Exact procedure had statistically significantly more complications but higher effectiveness. PMID:28352711

  2. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing

    PubMed Central

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2018-01-01

    Aims A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R2), using R2 as the primary metric of assay agreement. However, the use of R2 alone does not adequately quantify constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. Methods We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing assays (NGS). NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Results Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. Conclusions The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. PMID:28747393
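
    A minimal sketch of the two statistical tools named above, under the usual textbook definitions rather than the authors' code: Bland-Altman bias with 95% limits of agreement, and Deming regression, where a slope different from 1 indicates proportional error and a non-zero intercept indicates constant error.

        import numpy as np

        def bland_altman(a, b):
            # Mean bias and 95% limits of agreement between two assays.
            a, b = np.asarray(a, float), np.asarray(b, float)
            diff = a - b
            bias, sd = diff.mean(), diff.std(ddof=1)
            return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

        def deming(x, y, lam=1.0):
            # Deming regression slope/intercept; lam is the ratio of the
            # two error variances (lam = 1.0 gives orthogonal regression).
            x, y = np.asarray(x, float), np.asarray(y, float)
            sxx, syy = x.var(ddof=1), y.var(ddof=1)
            sxy = np.cov(x, y, ddof=1)[0, 1]
            slope = ((syy - lam * sxx
                      + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2))
                     / (2 * sxy))
            intercept = y.mean() - slope * x.mean()
            return slope, intercept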

  3. Dissolution curve comparisons through the F(2) parameter, a Bayesian extension of the f(2) statistic.

    PubMed

    Novick, Steven; Shen, Yan; Yang, Harry; Peterson, John; LeBlond, Dave; Altan, Stan

    2015-01-01

    Dissolution (or in vitro release) studies constitute an important aspect of pharmaceutical drug development. One important use of such studies is justifying a biowaiver for post-approval changes, which requires establishing equivalence between the new and old product. We propose a statistically rigorous modeling approach for this purpose based on the estimation of what we refer to as the F2 parameter, an extension of the commonly used f2 statistic. A Bayesian test procedure is proposed in relation to a set of composite hypotheses that capture the similarity requirement on the absolute mean differences between test and reference dissolution profiles. Several examples are provided to illustrate the application. Results of our simulation study comparing the performance of f2 and the proposed method show that our Bayesian approach is comparable to, or in many cases superior to, the f2 statistic as a decision rule. Further useful extensions of the method, such as the use of continuous-time dissolution modeling, are considered.
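
    For reference, the conventional f2 similarity factor that the F2 parameter extends can be computed directly from two dissolution profiles; this follows the standard regulatory formula, with made-up profile data for illustration.

        import numpy as np

        def f2(reference, test):
            # Similarity factor f2 for two dissolution profiles measured
            # at the same time points (percent dissolved):
            # f2 = 50 * log10(100 / sqrt(1 + mean squared difference))
            r, t = np.asarray(reference, float), np.asarray(test, float)
            msd = np.mean((r - t) ** 2)
            return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

        # Conventional decision rule: f2 >= 50 (mean differences within
        # about 10%) is taken to indicate similarity of the two profiles.
        ref = [18, 39, 57, 72, 85, 93]
        tst = [15, 35, 54, 70, 84, 92]
        print(f2(ref, tst))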

  4. Effects of statistical learning on the acquisition of grammatical categories through Qur'anic memorization: A natural experiment.

    PubMed

    Zuhurudeen, Fathima Manaar; Huang, Yi Ting

    2016-03-01

    Empirical evidence for statistical learning comes from artificial language tasks, but it is unclear how these effects scale up outside of the lab. The current study turns to a real-world test case of statistical learning where native English speakers encounter the syntactic regularities of Arabic through memorization of the Qur'an. This unique input provides extended exposure to the complexity of a natural language, with minimal semantic cues. Memorizers were asked to distinguish unfamiliar nouns and verbs based on their co-occurrence with familiar pronouns in an Arabic language sample. Their performance was compared to that of classroom learners who had explicit knowledge of pronoun meanings and grammatical functions. Grammatical judgments were more accurate in memorizers compared to non-memorizers. No effects of classroom experience were found. These results demonstrate that real-world exposure to the statistical properties of a natural language facilitates the acquisition of grammatical categories. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. A Comparative Evaluation of Mixed Dentition Analysis on Reliability of Cone Beam Computed Tomography Image Compared to Plaster Model.

    PubMed

    Gowd, Snigdha; Shankar, T; Dash, Samarendra; Sahoo, Nivedita; Chatterjee, Suravi; Mohanty, Pritam

    2017-01-01

    The aim of the study was to evaluate the reliability of cone beam computed tomography (CBCT)-derived images compared with plaster models for the assessment of mixed dentition analysis. Thirty CBCT-derived images and thirty plaster models were obtained from the dental archives, and Moyer's and Tanaka-Johnston analyses were performed. The data obtained were interpreted and analyzed statistically using SPSS 10.0/PC (SPSS Inc., Chicago, IL, USA). Descriptive and analytical analysis along with Student's t-test was performed to qualitatively evaluate the data, and P < 0.05 was considered statistically significant. Statistically significant results were obtained on comparison of data between CBCT-derived images and plaster models; the mean for Moyer's analysis in the left and right lower arch for CBCT and plaster models was 21.2 mm, 21.1 mm and 22.5 mm, 22.5 mm, respectively. CBCT-derived images were less reliable than data obtained directly from plaster models for mixed dentition analysis.

  6. Suggestions for presenting the results of data analyses

    USGS Publications Warehouse

    Anderson, David R.; Link, William A.; Johnson, Douglas H.; Burnham, Kenneth P.

    2001-01-01

    We give suggestions for the presentation of research results from frequentist, information-theoretic, and Bayesian analysis paradigms, followed by several general suggestions. The information-theoretic and Bayesian methods offer alternative approaches to data analysis and inference compared to traditionally used methods. Guidance is lacking on the presentation of results under these alternative procedures and on nontesting aspects of classical frequentist methods of statistical analysis. Null hypothesis testing has come under intense criticism. We recommend less reporting of the results of statistical tests of null hypotheses in cases where the null is surely false anyway, or where the null hypothesis is of little interest to science or management.

  7. Ecological Momentary Assessments and Automated Time Series Analysis to Promote Tailored Health Care: A Proof-of-Principle Study.

    PubMed

    van der Krieke, Lian; Emerencia, Ando C; Bos, Elisabeth H; Rosmalen, Judith Gm; Riese, Harriëtte; Aiello, Marco; Sytema, Sjoerd; de Jonge, Peter

    2015-08-07

    Health promotion can be tailored by combining ecological momentary assessments (EMA) with time series analysis. This combined method allows for studying the temporal order of dynamic relationships among variables, which may provide concrete indications for intervention. However, application of this method in health care practice is hampered because analyses are conducted manually and advanced statistical expertise is required. This study aims to show how this limitation can be overcome by introducing automated vector autoregressive modeling (VAR) of EMA data and to evaluate its feasibility through comparisons with results of previously published manual analyses. We developed a Web-based open source application, called AutoVAR, which automates time series analyses of EMA data and provides output that is intended to be interpretable by nonexperts. The statistical technique we used was VAR. AutoVAR tests and evaluates all possible VAR models within a given combinatorial search space and summarizes their results, thereby replacing the researcher's tasks of conducting the analysis, making an informed selection of models, and choosing the best model. We compared the output of AutoVAR to the output of a previously published manual analysis (n=4). An illustrative example consisting of 4 analyses was provided. Compared to the manual output, the AutoVAR output presents similar model characteristics and statistical results in terms of the Akaike information criterion, the Bayesian information criterion, and the test statistic of the Granger causality test. Results suggest that automated analysis and interpretation of time series is feasible. Compared to a manual procedure, the automated procedure is more robust and can save days of time. These findings may pave the way for using time series analysis for health promotion on a larger scale. AutoVAR was evaluated using the results of a previously conducted manual analysis. Analysis of additional datasets is needed in order to validate and refine the application for general use.
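
    AutoVAR itself is a Web application, but the underlying steps can be sketched with statsmodels: fit candidate VAR models over a range of lag orders, keep the best by an information criterion, and run a Granger causality test. The diary variables below are hypothetical EMA data, and AutoVAR's actual search space and model-selection rules are broader than this sketch.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.api import VAR

        rng = np.random.default_rng(0)
        # hypothetical EMA diary: daily mood and physical activity scores
        n = 90
        activity = rng.normal(size=n)
        mood = np.zeros(n)
        for t in range(1, n):   # mood partly driven by yesterday's activity
            mood[t] = (0.5 * mood[t - 1] + 0.4 * activity[t - 1]
                       + rng.normal(scale=0.5))
        data = pd.DataFrame({"mood": mood, "activity": activity})

        model = VAR(data)
        # evaluate candidate lag orders and keep the best by AIC
        res = model.fit(maxlags=7, ic="aic")
        print(res.summary())                 # coefficients, AIC/BIC
        gc = res.test_causality("mood", ["activity"], kind="f")
        print(gc.summary())                  # Granger causality test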

  8. An Open Label Clinical Trial of a Peptide Treatment Serum and Supporting Regimen Designed to Improve the Appearance of Aging Facial Skin.

    PubMed

    Draelos, Zoe Diana; Kononov, Tatiana; Fox, Theresa

    2016-09-01

    A 14-week single-center clinical usage study was conducted to test the efficacy of a peptide treatment serum and supporting skincare regimen in 29 women with mild to moderately photodamaged facial skin. The peptide treatment serum contained gamma-aminobutyric acid (GABA) and various peptides with neurotransmitter inhibiting and cell signaling properties. It was hypothesized that the peptide treatment serum would ameliorate eye and facial expression lines including crow's feet and forehead lines. The efficacy of the supporting skincare regimen was also evaluated. An expert investigator examined the subjects at rest and at maximum smile. Additionally, the subjects completed self-assessment questionnaires. At week 14, the expert investigator found a statistically significant improvement in facial lines, facial wrinkles, eye lines, and eye wrinkles at rest when compared to baseline results. The expert investigator also found statistically significant improvement at week 14 in facial lines, eye lines, and eye wrinkles when compared to baseline results at maximum smile. In addition, there was continued highly statistically significant improvement in smoothness, softness, firmness, radiance, luminosity, and overall appearance at rest when compared to baseline results at the 14-week time point. The test regimen was well perceived by the subjects for efficacy and product attributes. The products were well tolerated with no adverse events.

    J Drugs Dermatol. 2016;15(9):1100-1106.

  9. Radiographic comparison of different concentrations of recombinant human bone morphogenetic protein with allogenic bone compared with the use of 100% mineralized cancellous bone allograft in maxillary sinus grafting.

    PubMed

    Froum, Stuart J; Wallace, Stephen; Cho, Sang-Choon; Khouly, Ismael; Rosenberg, Edwin; Corby, Patricia; Froum, Scott; Mascarenhas, Patrick; Tarnow, Dennis P

    2014-01-01

    The purpose of this study was to radiographically evaluate, then analyze, bone height, volume, and density with reference to percentage of vital bone after maxillary sinuses were grafted using two different doses of recombinant human bone morphogenetic protein 2/acellular collagen sponge (rhBMP-2/ACS) combined with mineralized cancellous bone allograft (MCBA) and a control sinus grafted with MCBA only. A total of 18 patients (36 sinuses) were used for analysis of height and volume measurements, having two of three graft combinations (one in each sinus): (1) control, MCBA only; (2) test 1, MCBA + 5.6 mL of rhBMP-2/ACS (containing 8.4 mg of rhBMP-2); and (3) test 2, MCBA + 2.8 mL of rhBMP-2/ACS (containing 4.2 mg of rhBMP-2). The study was completed with 16 patients who also had bilateral cores removed 6 to 9 months following sinus augmentation. A computer software system was used to evaluate 36 computed tomography scans. Two time points were selected for measurements of height. The results indicated that height of the grafted sinus was significantly greater in the treatment groups compared with the control. However, by the second time point, there were no statistically significant differences. Three weeks post-surgery, bone volume measurements showed similar statistically significant differences between test and controls. However, prior to core removal, test group 1 with the greater dose of rhBMP-2 showed a statistically significant greater increase compared with test group 2 and the control. There was no statistically significant difference between the latter two groups. All three groups had similar volume and shrinkage. Density measurements varied from the above results, with the control showing statistically significant greater density at both time points. By contrast, the density increase over time in both rhBMP groups was similar and statistically higher than in the control group. There were strong associations between height and volume in all groups and between volume and new vital bone only in the control group. There were no statistically significant relationships observed between height and bone density or between volume and bone density for any parameter measured. More cases and monitoring of the future survival of implants placed in these augmented sinuses are needed to verify these results.

  10. A statistical simulation model for field testing of non-target organisms in environmental risk assessment of genetically modified plants.

    PubMed

    Goedhart, Paul W; van der Voet, Hilko; Baldacchino, Ferdinando; Arpaia, Salvatore

    2014-04-01

    Genetic modification of plants may result in unintended effects causing potentially adverse effects on the environment. A comparative safety assessment is therefore required by authorities, such as the European Food Safety Authority, in which the genetically modified plant is compared with its conventional counterpart. Part of the environmental risk assessment is a comparative field experiment in which the effect on non-target organisms is compared. Statistical analysis of such trials comes in two flavors: difference testing and equivalence testing. It is important to know the statistical properties of these, for example, the power to detect environmental change of a given magnitude, before the start of an experiment. Such prospective power analysis can best be studied by means of a statistical simulation model. This paper describes a general framework for simulating data typically encountered in environmental risk assessment of genetically modified plants. The simulation model, available as Supplementary Material, can be used to generate count data having different statistical distributions possibly with excess-zeros. In addition the model employs completely randomized or randomized block experiments, can be used to simulate single or multiple trials across environments, enables genotype by environment interaction by adding random variety effects, and finally includes repeated measures in time following a constant, linear or quadratic pattern in time possibly with some form of autocorrelation. The model also allows adding a set of reference varieties to the GM plant and its comparator to assess the natural variation, which can then be used to set limits of concern for equivalence testing. The different count distributions are described in some detail and some examples of how to use the simulation model to study various aspects, including a prospective power analysis, are provided.

  11. A statistical simulation model for field testing of non-target organisms in environmental risk assessment of genetically modified plants

    PubMed Central

    Goedhart, Paul W; van der Voet, Hilko; Baldacchino, Ferdinando; Arpaia, Salvatore

    2014-01-01

    Genetic modification of plants may result in unintended effects causing potentially adverse effects on the environment. A comparative safety assessment is therefore required by authorities, such as the European Food Safety Authority, in which the genetically modified plant is compared with its conventional counterpart. Part of the environmental risk assessment is a comparative field experiment in which the effect on non-target organisms is compared. Statistical analysis of such trials comes in two flavors: difference testing and equivalence testing. It is important to know the statistical properties of these, for example, the power to detect environmental change of a given magnitude, before the start of an experiment. Such prospective power analysis can best be studied by means of a statistical simulation model. This paper describes a general framework for simulating data typically encountered in environmental risk assessment of genetically modified plants. The simulation model, available as Supplementary Material, can be used to generate count data having different statistical distributions possibly with excess-zeros. In addition the model employs completely randomized or randomized block experiments, can be used to simulate single or multiple trials across environments, enables genotype by environment interaction by adding random variety effects, and finally includes repeated measures in time following a constant, linear or quadratic pattern in time possibly with some form of autocorrelation. The model also allows adding a set of reference varieties to the GM plant and its comparator to assess the natural variation, which can then be used to set limits of concern for equivalence testing. The different count distributions are described in some detail and some examples of how to use the simulation model to study various aspects, including a prospective power analysis, are provided. PMID:24834325
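
    The published simulation model is supplied as Supplementary Material to the paper; the sketch below merely illustrates the kind of data it generates, drawing zero-inflated negative-binomial counts (via a gamma-Poisson mixture) for a randomized block comparison of a GM variety and its comparator. All parameter names and values here are illustrative assumptions.

        import numpy as np

        def simulate_counts(n_blocks=4, mu_control=10.0, effect=1.5,
                            k=2.0, p_zero=0.1, block_sd=0.3, rng=None):
            # Zero-inflated negative-binomial counts for a randomized
            # block experiment. effect: multiplicative change in mean;
            # k: NB dispersion; p_zero: excess-zero probability;
            # block_sd: random block effect on the log scale.
            rng = np.random.default_rng(rng)
            rows = []
            for b in range(n_blocks):
                block = rng.normal(0.0, block_sd)
                for variety, mult in (("comparator", 1.0), ("GM", effect)):
                    mu = mu_control * mult * np.exp(block)
                    # NB as gamma-Poisson mixture: shape k, mean mu
                    count = rng.poisson(rng.gamma(k, mu / k))
                    if rng.random() < p_zero:        # excess zeros
                        count = 0
                    rows.append((b, variety, count))
            return rows

        for row in simulate_counts(rng=1):
            print(row)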

  12. The statistics of identifying differentially expressed genes in Expresso and TM4: a comparison

    PubMed Central

    Sioson, Allan A; Mane, Shrinivasrao P; Li, Pinghua; Sha, Wei; Heath, Lenwood S; Bohnert, Hans J; Grene, Ruth

    2006-01-01

    Background Analysis of DNA microarray data takes as input spot intensity measurements from scanner software and returns differential expression of genes between two conditions, together with a statistical significance assessment. This process typically consists of two steps: data normalization and identification of differentially expressed genes through statistical analysis. The Expresso microarray experiment management system implements these steps with a two-stage, log-linear ANOVA mixed model technique, tailored to individual experimental designs. The complement of tools in TM4, on the other hand, is based on a number of preset design choices that limit its flexibility. In the TM4 microarray analysis suite, normalization, filter, and analysis methods form an analysis pipeline. TM4 computes integrated intensity values (IIV) from the average intensities and spot pixel counts returned by the scanner software as input to its normalization steps. By contrast, Expresso can use either IIV data or median intensity values (MIV). Here, we compare Expresso and TM4 analysis of two experiments and assess the results against qRT-PCR data. Results The Expresso analysis using MIV data consistently identifies more genes as differentially expressed, when compared to Expresso analysis with IIV data. The typical TM4 normalization and filtering pipeline corrects systematic intensity-specific bias on a per microarray basis. Subsequent statistical analysis with Expresso or a TM4 t-test can effectively identify differentially expressed genes. The best agreement with qRT-PCR data is obtained through the use of Expresso analysis and MIV data. Conclusion The results of this research are of practical value to biologists who analyze microarray data sets. The TM4 normalization and filtering pipeline corrects microarray-specific systematic bias and complements the normalization stage in Expresso analysis. The results of Expresso using MIV data have the best agreement with qRT-PCR results. In one experiment, MIV is a better choice than IIV as input to data normalization and statistical analysis methods, as it yields a greater number of statistically significant differentially expressed genes; TM4 does not support the choice of MIV input data. Overall, the more flexible and extensive statistical models of Expresso achieve more accurate analytical results, when judged by the yardstick of qRT-PCR data, in the context of an experimental design of modest complexity. PMID:16626497

  13. 1999 Customer Satisfaction Survey Report: How Do We Measure Up?

    ERIC Educational Resources Information Center

    Salvucci, Sameena; Parker, Albert C. E.; Cash, R. William; Thurgood, Lori

    2001-01-01

    Summarizes results of a 1999 survey regarding the satisfaction of various groups with publications, databases, and services of the National Center for Education Statistics. Groups studied were federal, state, and local policymakers; academic researchers; and journalists. Compared 1999 results with 1997 results. (Author/SLD)

  14. Explorations in statistics: hypothesis tests and P values.

    PubMed

    Curran-Everett, Douglas

    2009-06-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This second installment of Explorations in Statistics delves into test statistics and P values, two concepts fundamental to the test of a scientific null hypothesis. The essence of a test statistic is that it compares what we observe in the experiment to what we expect to see if the null hypothesis is true. The P value associated with the magnitude of that test statistic answers this question: if the null hypothesis is true, what proportion of possible values of the test statistic are at least as extreme as the one I got? Although statisticians continue to stress the limitations of hypothesis tests, there are two realities we must acknowledge: hypothesis tests are ingrained within science, and the simple test of a null hypothesis can be useful. As a result, it behooves us to explore the notions of hypothesis tests, test statistics, and P values.
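
    The question posed in the middle of this abstract can be made concrete with a permutation test, where the null distribution is built by recomputing the test statistic over shuffled group labels and the P value is literally the proportion of values at least as extreme as the observed one; the data below are made up for illustration.

        import numpy as np

        rng = np.random.default_rng(42)
        control   = np.array([4.1, 5.0, 4.6, 4.3, 5.2, 4.8])
        treatment = np.array([5.4, 5.9, 5.1, 6.0, 5.6, 5.3])

        observed = treatment.mean() - control.mean()   # the test statistic
        pooled = np.concatenate([control, treatment])

        n_perm = 100_000
        count = 0
        for _ in range(n_perm):
            rng.shuffle(pooled)
            # first 6 values play "control", last 6 play "treatment"
            stat = pooled[6:].mean() - pooled[:6].mean()
            if abs(stat) >= abs(observed):             # at least as extreme
                count += 1

        p_value = count / n_perm   # proportion of shuffles as extreme
        print(observed, p_value)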

  15. Identification of natural images and computer-generated graphics based on statistical and textural features.

    PubMed

    Peng, Fei; Li, Jiao-ting; Long, Min

    2015-03-01

    To discriminate the acquisition pipelines of digital images, a novel scheme for the identification of natural images and computer-generated graphics is proposed based on statistical and textural features. First, the differences between them are investigated from the viewpoint of statistics and texture, and a 31-dimensional feature vector is acquired for identification. Then, LIBSVM is used for the classification. Finally, the experimental results are presented. The results show that the scheme can achieve an identification accuracy of 97.89% for computer-generated graphics, and an identification accuracy of 97.75% for natural images. The analyses also demonstrate that the proposed method has excellent performance, compared with some existing methods based only on statistical features or other features. The method has great potential to be implemented for the identification of natural images and computer-generated graphics. © 2014 American Academy of Forensic Sciences.
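
    As a sketch of the classification stage, scikit-learn's SVC (which wraps LIBSVM) can be trained on precomputed feature vectors; the random 31-dimensional features below are placeholders for the statistical and textural features the paper extracts, so no meaningful accuracy should be expected from this toy data.

        import numpy as np
        from sklearn.svm import SVC              # scikit-learn wraps LIBSVM
        from sklearn.model_selection import train_test_split
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        # placeholder data: rows are 31-dimensional feature vectors;
        # 1 = natural image, 0 = computer-generated graphic
        rng = np.random.default_rng(0)
        X = rng.normal(size=(400, 31))
        y = rng.integers(0, 2, size=400)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                                  random_state=0)
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        clf.fit(X_tr, y_tr)
        print("accuracy:", clf.score(X_te, y_te))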

  16. Sex-specific substance abuse treatment for female healthcare professionals: implications.

    PubMed

    Koos, Erin; Brand, Michael; Rojas, Julio; Li, Ji

    2014-01-01

    Gender plays a significant role in the development and treatment of substance abuse disorders. Sex-specific treatment for girls and women has recurrently proven more effective, with better outcomes than traditional treatment. Research on impaired healthcare professionals (HCPs) has largely focused on men, garnering little attention for women and sex differences. With the increasing numbers of female HCPs, it is imperative to identify potential sex differences that may have implications for treatment. Our study compared a convenience sample of male and female HCPs with substance abuse disorders treated in an outpatient program to identify sex differences that may have implications for treatment. Our sample consisted of 96 HCPs (54 men, 42 women) and 17 non-healthcare professional (N-HCP) women. All of the participants were evaluated using the program's clinical interview and the Personality Assessment Inventory (PAI). Chart review data contained categorical variables, qualitative variables, diagnoses, and psychological test scores. A second analysis was conducted through two separate comparisons: the PAI results of comparing impaired female HCPs with impaired male HCPs and the PAI results of comparing impaired female HCPs with impaired female N-HCPs. Statistically significant differences indicated more male participants received prior treatment and more intensive treatment than female participants. More female subjects reported being diagnosed as having a comorbid psychiatric condition and taking psychotropic medications. Several statistically significant differences in the PAI scores were found. Among female HCPs, elevations were found in anxiety, depression, paranoia, and borderline personality disorder. Substantive differences, although not statistically significant, were elevations in somatic complaints and anxiety disorders in female HCPs. In the comparison of female HCPs and N-HCPs, the only statistically significant difference was the significantly higher anxiety score of N-HCPs. The results indicate greater differences between female HCPs and male HCPs than between female HCPs and N-HCPs.

  17. Potential for added value in precipitation simulated by high-resolution nested Regional Climate Models and observations

    NASA Astrophysics Data System (ADS)

    di Luca, Alejandro; de Elía, Ramón; Laprise, René

    2012-03-01

    Regional Climate Models (RCMs) constitute the most often used method to perform affordable high-resolution regional climate simulations. The key issue in the evaluation of nested regional models is to determine whether RCM simulations improve the representation of climatic statistics compared to the driving data, that is, whether RCMs add value. In this study we examine a necessary condition that some climate statistics derived from the precipitation field must satisfy in order that the RCM technique can generate some added value: we focus on whether the climate statistics of interest contain some fine spatial-scale variability that would be absent on a coarser grid. The presence and magnitude of the fine-scale precipitation variance required to adequately describe a given climate statistic is then used to quantify the potential added value (PAV) of RCMs. Our results show that the PAV of RCMs is much higher for short temporal scales (e.g., 3-hourly data) than for long temporal scales (16-day average data) due to the filtering resulting from the time-averaging process. PAV is higher in the warm season compared to the cold season due to the higher proportion of precipitation falling from small-scale weather systems in the warm season. In regions of complex topography, the orographic forcing induces an extra component of PAV, no matter the season or the temporal scale considered. The PAV is also estimated using high-resolution datasets based on observations, allowing evaluation of the sensitivity to changing resolution in the real climate system. The results show that RCMs tend to reproduce the PAV relatively well compared to observations, although they overestimate the PAV in the warm season and in mountainous regions.

  18. Humans make efficient use of natural image statistics when performing spatial interpolation.

    PubMed

    D'Antona, Anthony D; Perry, Jeffrey S; Geisler, Wilson S

    2013-12-16

    Visual systems learn through evolution and experience over the lifespan to exploit the statistical structure of natural images when performing visual tasks. Understanding which aspects of this statistical structure are incorporated into the human nervous system is a fundamental goal in vision science. To address this goal, we measured human ability to estimate the intensity of missing image pixels in natural images. Human estimation accuracy is compared with various simple heuristics (e.g., local mean) and with optimal observers that have nearly complete knowledge of the local statistical structure of natural images. Human estimates are more accurate than those of simple heuristics, and they match the performance of an optimal observer that knows the local statistical structure of relative intensities (contrasts). This optimal observer predicts the detailed pattern of human estimation errors and hence the results place strong constraints on the underlying neural mechanisms. However, humans do not reach the performance of an optimal observer that knows the local statistical structure of the absolute intensities, which reflect both local relative intensities and local mean intensity. As predicted from a statistical analysis of natural images, human estimation accuracy is negligibly improved by expanding the context from a local patch to the whole image. Our results demonstrate that the human visual system exploits efficiently the statistical structure of natural images.

  19. Statistical primer: how to deal with missing data in scientific research?

    PubMed

    Papageorgiou, Grigorios; Grant, Stuart W; Takkenberg, Johanna J M; Mokhles, Mostafa M

    2018-05-10

    Missing data are a common challenge encountered in research, one that can compromise the results of statistical inference when not handled appropriately. This paper aims to introduce basic concepts of missing data to a non-statistical audience, list and compare some of the most popular approaches for handling missing data in practice and provide guidelines and recommendations for dealing with and reporting missing data in scientific research. Complete case analysis and single imputation are simple approaches for handling missing data and are popular in practice; however, in most cases they are not guaranteed to provide valid inferences. Multiple imputation is a robust and general alternative which is appropriate for data missing at random, surpassing the disadvantages of the simpler approaches, but should always be conducted with care. The aforementioned approaches are illustrated and compared in an example application using Cox regression.
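
    A minimal multiple-imputation sketch in the spirit of the recommendations above: draw m imputations (here with scikit-learn's IterativeImputer in posterior-sampling mode), estimate the quantity of interest in each completed dataset, and pool with Rubin's rules. The paper's worked example uses Cox regression; a simple mean is substituted here for brevity, and the data are synthetic.

        import numpy as np
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer

        rng = np.random.default_rng(0)
        X = rng.multivariate_normal([0, 0], [[1, .6], [.6, 1]], size=200)
        X[rng.random(200) < 0.3, 1] = np.nan    # 30% missing in column 1

        m = 20                                   # number of imputations
        estimates, variances = [], []
        for i in range(m):
            imp = IterativeImputer(sample_posterior=True, random_state=i)
            col = imp.fit_transform(X)[:, 1]
            estimates.append(col.mean())         # per-imputation estimate
            variances.append(col.var(ddof=1) / len(col))

        # Rubin's rules: total variance = within + (1 + 1/m) * between
        qbar = np.mean(estimates)
        within = np.mean(variances)
        between = np.var(estimates, ddof=1)
        total_var = within + (1 + 1 / m) * between
        print(qbar, np.sqrt(total_var))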

  20. Comparative evaluation of topographical data of dental implant surfaces applying optical interferometry and scanning electron microscopy.

    PubMed

    Kournetas, N; Spintzyk, S; Schweizer, E; Sawada, T; Said, F; Schmid, P; Geis-Gerstorfer, J; Eliades, G; Rupp, F

    2017-08-01

    Comparability of topographical data of implant surfaces in the literature is low and their clinical relevance often equivocal. The aim of this study was to investigate the ability of scanning electron microscopy and optical interferometry to assess statistically similar 3-dimensional roughness parameter results and to evaluate these data based on predefined criteria regarded as relevant for a favorable biological response. Four different commercial dental screw-type implants (NanoTite Certain Prevail, TiUnite Brånemark Mk III, XiVE S Plus and SLA Standard Plus) were analyzed by stereo scanning electron microscopy and white light interferometry. Surface height, spatial and hybrid roughness parameters (Sa, Sz, Ssk, Sku, Sal, Str, Sdr) were assessed from raw and filtered data (Gaussian 50μm and 5μm cut-off-filters), respectively. Data were statistically compared by one-way ANOVA and Tukey-Kramer post-hoc test. For a clinically relevant interpretation, a categorizing evaluation approach was used based on predefined threshold criteria for each roughness parameter. The two methods predominantly exhibited statistically significant differences. Depending on roughness parameters and filter settings, both methods showed variations in rankings of the implant surfaces and differed in their ability to discriminate the different topographies. Overall, the analyses revealed scale-dependent roughness data. Compared to the pure statistical approach, the categorizing evaluation resulted in many more similarities between the two methods. This study suggests reconsidering current approaches to the topographical evaluation of implant surfaces and further seeking proper experimental settings. Furthermore, the specific role of different roughness parameters for the bioresponse has to be studied in detail in order to better define clinically relevant, scale-dependent and parameter-specific thresholds and ranges. Copyright © 2017 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
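
    For orientation, the height-based parameters named above follow standard ISO 25178-style definitions and can be computed from a filtered height map as below; this is a generic sketch, not the software used in the study, and Sal, Str and Sdr require spatial and hybrid computations beyond these moment formulas.

        import numpy as np

        def roughness_params(z):
            # Height-based areal roughness parameters (ISO 25178-style)
            # from a 2-D height map z (after form removal/filtering).
            z = np.asarray(z, float)
            z = z - z.mean()                 # reference to the mean plane
            sq = np.sqrt(np.mean(z**2))      # RMS height
            return {
                "Sa":  np.mean(np.abs(z)),   # arithmetic mean height
                "Sz":  z.max() - z.min(),    # maximum height range
                "Ssk": np.mean(z**3) / sq**3,  # skewness
                "Sku": np.mean(z**4) / sq**4,  # kurtosis
            }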

  1. A Statistical Test for Comparing Nonnested Covariance Structure Models.

    ERIC Educational Resources Information Center

    Levy, Roy; Hancock, Gregory R.

    While statistical procedures are well known for comparing hierarchically related (nested) covariance structure models, statistical tests for comparing nonhierarchically related (nonnested) models have proven more elusive. While isolated attempts have been made, none exists within the commonly used maximum likelihood estimation framework, thereby…

  2. Efficacy of lycopene in the treatment of gingivitis: a randomised, placebo-controlled clinical trial.

    PubMed

    Chandra, Rampalli Viswa; Prabhuji, M L Venkatesh; Roopa, D Adinarayana; Ravirajan, Sandhya; Kishore, Hadal C

    2007-01-01

    The aim of the present study was to compare the effect of systemically administered lycopene (LycoRed) as a monotherapy and as an adjunct to scaling and root planing in gingivitis patients. Twenty systemically healthy patients showing clinical signs of gingivitis were involved in a randomised, double-blind, parallel, split-mouth study. The subjects were randomly distributed between the two treatment groups: experimental group (n = 10), 8 mg lycopene/day for 2 weeks; and controls (n = 10), placebo for 2 weeks. Quadrant allocation within each group was randomised with two quadrants treated with oral prophylaxis (OP) and two quadrants not receiving any form of treatment (non-OP). Bleeding index (SBI) and non-invasive measures of plaque (PI) and gingivitis (GI) were assessed at baseline, 1 and 2 weeks. Salivary uric acid levels were also measured. All the treatment groups demonstrated statistically significant reductions in the GI, SBI and PI. Treatment with OP-lycopene resulted in a statistically significant decrease in GI when compared with OP-placebo (p < 0.05) and non-OP-placebo (p < 0.01). Treatment with non-OP-lycopene resulted in a statistically significant decrease in GI when compared with non-OP-placebo (p < 0.01). The OP-lycopene group showed a statistically significant reduction in SBI values when compared with the non-OP-lycopene group (p < 0.05) and the non-OP-placebo group (p < 0.001). There was a strong negative correlation between the salivary uric acid levels and the percentage reduction in GI at 1 and 2 weeks in the OP-lycopene group (r = -0.852 and -0.802 respectively) and in the non-OP-lycopene group (r = -0.640 and -0.580 respectively). The results presented in this study suggest that lycopene shows great promise as a treatment modality in gingivitis. The possibility of obtaining an additive effect by combining routine oral prophylaxis with lycopene is also an exciting possibility, which deserves further study.

  3. How to Interpret Thyroid Biopsy Results: A Three-Year Retrospective Interventional Radiology Experience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oppenheimer, Jason D., E-mail: j-oppenheimer@md.northwestern.edu; Kasuganti, Deepa; Nayar, Ritu

    2010-08-15

    Results of thyroid biopsy determine whether thyroid nodule resection is appropriate and the extent of thyroid surgery. At our institution we use 20/22-gauge core biopsy (CBx) in conjunction with fine-needle aspiration (FNA) to decrease the number of passes and improve adequacy. Occasionally, both ultrasound (US)-guided FNA and CBx yield unsatisfactory specimens. To justify clinical recommendations for these unsatisfactory thyroid biopsies, we compare rates of malignancy at surgical resection for unsatisfactory biopsy results against definitive biopsy results. We retrospectively reviewed a database of 1979 patients who had a total of 2677 FNA and 663 CBx performed by experienced interventional radiologists under US guidance from 2003 to 2006 at a tertiary-care academic center. In 451 patients who had surgery following biopsy, Fisher's exact test was used to compare surgical malignancy rates between unsatisfactory and malignant biopsy cohorts as well as between unsatisfactory and benign biopsy cohorts. We defined statistical significance at P = 0.05. We reported an overall unsatisfactory thyroid biopsy rate of 3.7% (100/2677). A statistically significant higher rate of surgically proven malignancies was found in malignant biopsy patients compared to unsatisfactory biopsy patients (P = 0.0001). The incidence of surgically proven malignancy in unsatisfactory biopsy patients was not significantly different from that in benign biopsy patients (P = 0.8625). In conclusion, an extremely low incidence of malignancy was associated with both benign and unsatisfactory thyroid biopsy results. The difference in incidence between these two groups was not statistically significant. Therefore, patients with unsatisfactory biopsy specimens can be reassured and counseled accordingly.
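
    The comparison of malignancy rates between biopsy cohorts reduces to Fisher's exact test on a 2 x 2 table; the sketch below uses hypothetical counts, not the study's data.

        from scipy.stats import fisher_exact

        # hypothetical 2x2 table: rows = biopsy result (malignant vs
        # unsatisfactory), columns = surgical pathology (malignant vs benign)
        table = [[180, 20],    # malignant biopsy: 180 malignant at surgery
                 [  4, 26]]    # unsatisfactory biopsy: 4 malignant at surgery
        odds_ratio, p = fisher_exact(table)
        print(odds_ratio, p)   # small p: rates differ between the cohorts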

  4. Investigation of Super Learner Methodology on HIV-1 Small Sample: Application on Jaguar Trial Data.

    PubMed

    Houssaïni, Allal; Assoumou, Lambert; Marcelin, Anne Geneviève; Molina, Jean Michel; Calvez, Vincent; Flandre, Philippe

    2012-01-01

    Background. Many statistical models have been tested to predict phenotypic or virological response from genotypic data. A statistical framework called Super Learner has been introduced either to compare different methods/learners (discrete Super Learner) or to combine them in a Super Learner prediction method. Methods. The Jaguar trial is used to apply the Super Learner framework. The Jaguar study is an "add-on" trial comparing the efficacy of adding didanosine to an on-going failing regimen. Our aim was also to investigate the impact of using different cross-validation strategies and different loss functions. Four different partitions between training and validation sets were tested with two loss functions. Six statistical methods were compared. We assess performance by evaluating R(2) values and accuracy by calculating the rates of patients being correctly classified. Results. Our results indicated that the more recent Super Learner methodology of building a new predictor based on a weighted combination of different methods/learners provided good performance. A simple linear model provided similar results to those of this new predictor. Slight discrepancies arise between the two loss functions investigated, and also between results based on cross-validated risks and results from the full dataset. The Super Learner methodology and the linear model correctly classified around 80% of patients. The difference between the lower and higher rates is around 10 percent. The number of mutations retained in different learners also varies from one to 41. Conclusions. The more recent Super Learner methodology combining the predictions of many learners provided good performance on our small dataset.
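
    A compact sketch of the Super Learner idea, assuming squared-error loss: collect out-of-fold predictions from each base learner, weight the learners by non-negative least squares on those predictions, and predict with the weighted combination. The learners, data and fold scheme below are illustrative, not those of the Jaguar analysis.

        import numpy as np
        from scipy.optimize import nnls
        from sklearn.model_selection import KFold
        from sklearn.linear_model import LinearRegression, Lasso
        from sklearn.ensemble import RandomForestRegressor

        def super_learner(X, y, learners, n_splits=5):
            # Weight base learners by non-negative least squares on
            # out-of-fold predictions (squared-error loss).
            oof = np.zeros((len(y), len(learners)))
            kf = KFold(n_splits, shuffle=True, random_state=0)
            for tr, te in kf.split(X):
                for j, make in enumerate(learners):
                    oof[te, j] = make().fit(X[tr], y[tr]).predict(X[te])
            w, _ = nnls(oof, y)                 # cross-validated weights
            if w.sum() > 0:
                w = w / w.sum()
            fitted = [make().fit(X, y) for make in learners]  # refit on all
            predict = lambda Xn: np.column_stack(
                [f.predict(Xn) for f in fitted]) @ w
            return w, predict

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 10))
        y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=120)
        w, predict = super_learner(X, y,
                                   [LinearRegression, Lasso,
                                    RandomForestRegressor])
        print(w)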

  5. Statistical Analysis of CFD Solutions From the Fifth AIAA Drag Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Morrison, Joseph H.

    2013-01-01

    A graphical framework is used for statistical analysis of the results from an extensive N-version test of a collection of Reynolds-averaged Navier-Stokes computational fluid dynamics codes. The solutions were obtained by code developers and users from North America, Europe, Asia, and South America using a common grid sequence and multiple turbulence models for the June 2012 fifth Drag Prediction Workshop sponsored by the AIAA Applied Aerodynamics Technical Committee. The aerodynamic configuration for this workshop was the Common Research Model subsonic transport wing-body previously used for the 4th Drag Prediction Workshop. This work continues the statistical analysis begun in the earlier workshops and compares the results from the grid convergence study of the most recent workshop with previous workshops.

  6. Statistical comparison of coherent structures in fully developed turbulent pipe flow with and without drag reduction

    NASA Astrophysics Data System (ADS)

    Sogaro, Francesca; Poole, Robert; Dennis, David

    2014-11-01

    High-speed stereoscopic particle image velocimetry has been performed in fully developed turbulent pipe flow at moderate Reynolds numbers with and without a drag-reducing additive (an aqueous solution of high molecular weight polyacrylamide). Three-dimensional large and very large-scale motions (LSM and VLSM) are extracted from the flow fields by a detection algorithm and the characteristics for each case are statistically compared. The results show that the three-dimensional extent of VLSMs in drag reduced (DR) flow appears to increase significantly compared to their Newtonian counterparts. A statistical increase in azimuthal extent of DR VLSM is observed by means of two-point spatial autocorrelation of the streamwise velocity fluctuation in the radial-azimuthal plane. Furthermore, a remarkable increase in length of these structures is observed by three-dimensional two-point spatial autocorrelation. These results are accompanied by an analysis of the swirling strength in the flow field that shows a significant reduction in strength and number of the vortices for the DR flow. The findings suggest that the damping of the small scales due to polymer addition results in the undisturbed development of longer flow structures.
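
    The two-point spatial autocorrelation used to quantify the streamwise extent of these structures has a simple one-dimensional form, sketched below for a fluctuating velocity signal; in the study it is computed over planes and in three dimensions.

        import numpy as np

        def two_point_autocorr(u):
            # Two-point spatial autocorrelation of a fluctuating velocity
            # signal u(x): R(dx) = <u'(x) u'(x+dx)> / <u'^2>, computed for
            # separations up to half the record length.
            up = np.asarray(u, float) - np.mean(u)
            n = len(up)
            R = np.array([np.mean(up[:n - k] * up[k:])
                          for k in range(n // 2)])
            return R / R[0]

    A structure length can then be defined from where R first falls below a small threshold; longer streamwise structures keep the correlation elevated over larger separations.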

  7. Revising the lower statistical limit of x-ray grating-based phase-contrast computed tomography.

    PubMed

    Marschner, Mathias; Birnbacher, Lorenz; Willner, Marian; Chabior, Michael; Herzen, Julia; Noël, Peter B; Pfeiffer, Franz

    2017-01-01

    Phase-contrast x-ray computed tomography (PCCT) is currently investigated as an interesting extension of conventional CT, providing high soft-tissue contrast even when examining weakly absorbing specimens. Until now, the potential for dose reduction was thought to be limited compared to attenuation CT, since meaningful phase retrieval fails for scans with very low photon counts when using the conventional phase retrieval method via phase stepping. In this work, we examine the statistical behaviour of the reverse projection method, an alternative phase retrieval approach, and compare the results to the conventional phase retrieval technique. We investigate the noise levels in the projections as well as the image quality and quantitative accuracy of the reconstructed tomographic volumes. The results of our study show that this method performs better in a low-dose scenario than the conventional phase retrieval approach, resulting in lower noise levels, enhanced image quality and more accurate quantitative values. Overall, we demonstrate that the lower statistical limit of the phase stepping procedure as proposed by recent literature does not apply to this alternative phase retrieval technique. However, further development is necessary to overcome the experimental challenges posed by this method, which would enable mainstream or even clinical application of PCCT.

  8. Endurance and failure characteristics of modified Vasco X-2, CBS 600 and AISI 9310 spur gears. [aircraft construction materials

    NASA Technical Reports Server (NTRS)

    Townsend, D. P.; Zaretsky, E. V.

    1980-01-01

    Gear endurance tests and rolling-element fatigue tests were conducted to compare the performance of spur gears made from AISI 9310, CBS 600 and modified Vasco X-2 and to compare the pitting fatigue lives of these three materials. Gears manufactured from CBS 600 exhibited lives longer than those manufactured from AISI 9310. However, rolling-element fatigue tests resulted in statistically equivalent lives. Modified Vasco X-2 exhibited statistically equivalent lives to AISI 9310. CBS 600 and modified Vasco X-2 gears exhibited the potential of tooth fracture occurring at a tooth surface fatigue pit. Case carburization of all gear surfaces for the modified Vasco X-2 gears results in fracture at the tips of the gears.

  9. Statistical significance of the rich-club phenomenon in complex networks

    NASA Astrophysics Data System (ADS)

    Jiang, Zhi-Qiang; Zhou, Wei-Xing

    2008-04-01

    We propose that the rich-club phenomenon in complex networks should be defined in the spirit of bootstrapping, in which a null model is adopted to assess the statistical significance of the rich-club detected. Our method can serve as a definition of the rich-club phenomenon and is applied to analyze three real networks and three model networks. The results show significant improvement compared with previously reported results. We report a dilemma with an exceptional example, showing that there does not exist an omnipotent definition for the rich-club phenomenon.
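
    The bootstrapping spirit of the proposal can be sketched with networkx: compare the rich-club coefficient of the observed network against its distribution over degree-preserving rewirings. The swap counts and the one-sided p-value convention below are illustrative choices, not the authors' exact null model.

        import networkx as nx
        import numpy as np

        def rich_club_significance(G, n_null=100, k=None, seed=0):
            # Compare the rich-club coefficient phi(k) of G with its null
            # distribution over degree-preserving rewirings.
            rng = np.random.default_rng(seed)
            phi = nx.rich_club_coefficient(G, normalized=False)
            k = k if k is not None else max(phi)   # highest degree level
            null = []
            for _ in range(n_null):
                R = G.copy()
                nx.double_edge_swap(R, nswap=10 * G.number_of_edges(),
                                    max_tries=1000 * G.number_of_edges(),
                                    seed=int(rng.integers(1 << 31)))
                null.append(nx.rich_club_coefficient(R,
                                                     normalized=False)[k])
            p = np.mean([v >= phi[k] for v in null])   # one-sided p-value
            return phi[k], p

        G = nx.barabasi_albert_graph(200, 3, seed=1)
        print(rich_club_significance(G))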

  10. Inclusion probability for DNA mixtures is a subjective one-sided match statistic unrelated to identification information

    PubMed Central

    Perlin, Mark William

    2015-01-01

    Background: DNA mixtures of two or more people are a common type of forensic crime scene evidence. A match statistic that connects the evidence to a criminal defendant is usually needed for court. Jurors rely on this strength of match to help decide guilt or innocence. However, the reliability of unsophisticated match statistics for DNA mixtures has been questioned. Materials and Methods: The most prevalent match statistic for DNA mixtures is the combined probability of inclusion (CPI), used by crime labs for over 15 years. When testing 13 short tandem repeat (STR) genetic loci, the CPI-1 value is typically around a million, regardless of DNA mixture composition. However, actual identification information, as measured by a likelihood ratio (LR), spans a much broader range. This study examined probability of inclusion (PI) mixture statistics for 517 locus experiments drawn from 16 reported cases and compared them with LR locus information calculated independently on the same data. The log(PI-1) values were examined and compared with corresponding log(LR) values. Results: The LR and CPI methods were compared in case examples of false inclusion, false exclusion, a homicide, and criminal justice outcomes. Statistical analysis of crime laboratory STR data shows that inclusion match statistics exhibit a truncated normal distribution having zero center, with little correlation to actual identification information. By the law of large numbers (LLN), CPI-1 increases with the number of tested genetic loci, regardless of DNA mixture composition or match information. These statistical findings explain why CPI is relatively constant, with implications for DNA policy, criminal justice, cost of crime, and crime prevention. Conclusions: Forensic crime laboratories have generated CPI statistics on hundreds of thousands of DNA mixture evidence items. However, this commonly used match statistic behaves like a random generator of inclusionary values, following the LLN rather than measuring identification information. A quantitative CPI number adds little meaningful information beyond the analyst's initial qualitative assessment that a person's DNA is included in a mixture. Statistical methods for reporting on DNA mixture evidence should be scientifically validated before they are relied upon by criminal justice. PMID:26605124
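
    The law-of-large-numbers behaviour described above follows directly from the form of the statistic: CPI is a product over loci of squared included-allele frequency sums, so its reciprocal grows multiplicatively with the number of loci regardless of mixture composition. A toy illustration, with made-up per-locus frequency masses:

        import numpy as np

        def cpi(included_allele_freqs):
            # Combined probability of inclusion: product over loci of
            # (sum of frequencies of the alleles present in the mixture)^2.
            return np.prod([np.sum(f) ** 2 for f in included_allele_freqs])

        rng = np.random.default_rng(0)
        # hypothetical mixture: at each locus the alleles present cover
        # 30-60% of the population allele frequency mass
        for n_loci in (5, 10, 13, 20):
            freqs = [[rng.uniform(0.3, 0.6)] for _ in range(n_loci)]
            print(n_loci, 1.0 / cpi(freqs))   # reciprocal grows with loci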

  11. Robustness of S1 statistic with Hodges-Lehmann for skewed distributions

    NASA Astrophysics Data System (ADS)

    Ahad, Nor Aishah; Yahaya, Sharipah Soaad Syed; Yin, Lee Ping

    2016-10-01

    Analysis of variance (ANOVA) is a commonly used parametric method to test differences in means for more than two groups when the populations are normally distributed. ANOVA is highly inefficient under non-normal and heteroscedastic settings. When the assumptions are violated, researchers look for alternatives such as the nonparametric Kruskal-Wallis test or robust methods. This study focused on a flexible method, the S1 statistic, for comparing groups using the median as the location estimator. The S1 statistic was modified by substituting the median with the Hodges-Lehmann estimator and the default scale estimator with the variance of Hodges-Lehmann and MADn, producing two different test statistics for comparing groups. The bootstrap method was used for testing the hypotheses, since the sampling distributions of these modified S1 statistics are unknown. The performance of the proposed statistics in terms of Type I error was measured and compared against the original S1 statistic, ANOVA and Kruskal-Wallis. The proposed procedures show improvement over the original statistic, especially under extremely skewed distributions.
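
    The exact S1 form is not reproduced here; the sketch below only shows the general shape of such a procedure for two groups, pairing the Hodges-Lehmann estimator with MADn scaling and a centred bootstrap for the p-value. Extension to more than two groups, and the paper's precise scale estimators, are beyond this illustration.

        import numpy as np

        def hodges_lehmann(x):
            # One-sample Hodges-Lehmann estimator: median of Walsh averages.
            x = np.asarray(x, float)
            i, j = np.triu_indices(len(x))
            return np.median((x[i] + x[j]) / 2.0)

        def madn(x):
            # Normalized median absolute deviation (consistent for sigma).
            x = np.asarray(x, float)
            return 1.4826 * np.median(np.abs(x - np.median(x)))

        def bootstrap_test(x, y, n_boot=2000, seed=0):
            # Bootstrap p-value for H0: equal locations, comparing
            # Hodges-Lehmann estimates scaled by MADn-based standard errors.
            rng = np.random.default_rng(seed)
            def stat(a, b):
                s = np.sqrt(madn(a)**2 / len(a) + madn(b)**2 / len(b))
                return abs(hodges_lehmann(a) - hodges_lehmann(b)) / s
            t_obs = stat(x, y)
            # centre each group so H0 holds in the resampling world
            xc, yc = x - hodges_lehmann(x), y - hodges_lehmann(y)
            t_null = [stat(rng.choice(xc, len(xc), replace=True),
                           rng.choice(yc, len(yc), replace=True))
                      for _ in range(n_boot)]
            return np.mean(np.asarray(t_null) >= t_obs)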

  12. A global estimate of the Earth's magnetic crustal thickness

    NASA Astrophysics Data System (ADS)

    Vervelidou, Foteini; Thébault, Erwan

    2014-05-01

    The Earth's lithosphere is considered to be magnetic only down to the Curie isotherm. Therefore the Curie isotherm can, in principle, be estimated by analysis of magnetic data. Here, we propose such an analysis in the spectral domain by means of a newly introduced regional spatial power spectrum. This spectrum is based on the Revised Spherical Cap Harmonic Analysis (R-SCHA) formalism (Thébault et al., 2006). We briefly discuss its properties and its relationship with the Spherical Harmonic spatial power spectrum. This relationship allows us to adapt any theoretical expression of the lithospheric field power spectrum expressed in Spherical Harmonic degrees to the regional formulation. We compared previously published statistical expressions (Jackson, 1994; Voorhies et al., 2002) to the recent lithospheric field models derived from the CHAMP and airborne measurements, and we finally developed a new statistical form for the power spectrum of the Earth's magnetic lithosphere that we think provides more consistent results. This expression depends on the mean magnetization, the mean crustal thickness and a power law value that describes the amount of spatial correlation of the sources. In this study, we make combined use of the R-SCHA surface power spectrum and this statistical form. We conduct a series of regional spectral analyses for the entire Earth. For each region, we estimate the R-SCHA surface power spectrum of the NGDC-720 Spherical Harmonic model (Maus, 2010). We then fit each of these observational spectra to the statistical expression of the power spectrum of the Earth's lithosphere. By doing so, we estimate the large wavelengths of the magnetic crustal thickness on a global scale that are not accessible directly from the magnetic measurements due to the masking core field. We then discuss these results and compare them to the results we obtained by conducting a similar spectral analysis, but this time in Cartesian coordinates, by means of a published statistical expression (Maus et al., 1997). We also compare our results to crustal thickness global maps derived by means of additional geophysical data (Purucker et al., 2002).

  13. Input respiratory impedance in mice: comparison between the flow-based and the wavetube method to perform the forced oscillation technique.

    PubMed

    Mori, V; Oliveira, M A; Vargas, M H M; da Cunha, A A; de Souza, R G; Pitrez, P M; Moriya, H T

    2017-06-01

    Objective and approach: In this study, we estimated the constant phase model (CPM) parameters from the respiratory impedance of male BALB/c mice by performing the forced oscillation technique (FOT) in a control group (n = 8) and in a murine model of asthma (OVA) (n = 10). Then, we compared the results obtained by two different methods, using commercial equipment (flexiVent-flexiWare 7.X; SCIREQ, Montreal, Canada) (FXV) and wavetube method equipment (Sly et al 2003 J. Appl. Physiol. 94 1460-6) (WVT). We believe that the results from different methods may not be comparable. First, we compared the results by performing a two-way analysis of variance (ANOVA) on the resistance, elastance and tissue damping. We found statistically significant differences in all CPM parameters, except for resistance, when comparing Control and OVA groups. When comparing devices, we found statistically significant differences in resistance, while differences in elastance were not observed. For tissue damping, the results from WVT were observed to be higher than those from FXV. Finally, when comparing the relative variation between the CPM parameters of the Control and OVA groups in both devices, no significant differences were observed for any parameter. We then conclude that this assessment can compensate for the effect of using different cannulas. Furthermore, tissue damping differences between groups can be compensated, since bronchoconstrictors were not used. Therefore, we believe that relative variations in the results between groups can be a comparing parameter when using different equipment without bronchoconstrictor administration.
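
    For context, the constant phase model referred to above is commonly written Zrs(f) = Rn + j*2*pi*f*I + (G - jH)/(2*pi*f)^alpha with alpha = (2/pi)*arctan(H/G), and can be fitted by least squares on stacked real and imaginary residuals; the frequencies, starting values and synthetic data below are illustrative assumptions, not the study's measurements.

        import numpy as np
        from scipy.optimize import least_squares

        def cpm(params, f):
            # Constant phase model impedance:
            # Z(f) = Rn + j*w*I + (G - j*H)/w**alpha, with w = 2*pi*f
            # and alpha = (2/pi) * arctan(H/G).
            Rn, I, G, H = params
            w = 2 * np.pi * f
            alpha = (2 / np.pi) * np.arctan2(H, G)
            return Rn + 1j * w * I + (G - 1j * H) / w**alpha

        def fit_cpm(f, Z, x0=(0.5, 0.01, 3.0, 15.0)):
            # Least-squares CPM fit on stacked real/imaginary residuals.
            def resid(p):
                e = cpm(p, f) - Z
                return np.concatenate([e.real, e.imag])
            return least_squares(resid, x0, bounds=(0, np.inf)).x

        # hypothetical FOT frequencies (Hz) and a synthetic "measured" Z
        f = np.array([1.0, 2.5, 5.0, 7.5, 10.0, 15.0, 20.0])
        Z_meas = cpm((0.4, 0.005, 4.0, 20.0), f)
        print(fit_cpm(f, Z_meas))   # recovers Rn, I, G (damping), H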

  14. Estimation of Mouse Organ Locations Through Registration of a Statistical Mouse Atlas With Micro-CT Images

    PubMed Central

    Stout, David B.; Chatziioannou, Arion F.

    2012-01-01

    Micro-CT is widely used in preclinical studies of small animals. Due to the low soft-tissue contrast in typical studies, segmentation of soft-tissue organs from noncontrast-enhanced micro-CT images is a challenging problem. Here, we propose an atlas-based approach for estimating the major organs in mouse micro-CT images. A statistical atlas of major trunk organs was constructed based on 45 training subjects. The statistical shape model technique was used to include inter-subject anatomical variations. The shape correlations between different organs were described using a conditional Gaussian model. For registration, first the high-contrast organs in micro-CT images were registered by fitting the statistical shape model, while the low-contrast organs were subsequently estimated from the high-contrast organs using the conditional Gaussian model. The registration accuracy was validated based on 23 noncontrast-enhanced and 45 contrast-enhanced micro-CT images. Three different accuracy metrics (Dice coefficient, organ volume recovery coefficient, and surface distance) were used for evaluation. The Dice coefficients vary from 0.45 ± 0.18 for the spleen to 0.90 ± 0.02 for the lungs, the volume recovery coefficients vary from for the liver to 1.30 ± 0.75 for the spleen, the surface distances vary from 0.18 ± 0.01 mm for the lungs to 0.72 ± 0.42 mm for the spleen. The registration accuracy of the statistical atlas was compared with two publicly available single-subject mouse atlases, i.e., the MOBY phantom and the DIGIMOUSE atlas, and the results showed that the statistical atlas is more accurate than the single-subject atlases. To evaluate the influence of the number of training subjects, different numbers of training subjects were used for atlas construction and registration. The results showed an improvement in registration accuracy when more training subjects were used for atlas construction. The statistical atlas-based registration was also compared with thin-plate-spline-based deformable registration, commonly used in mouse atlas registration. The results revealed that the statistical atlas has the advantage of improving the estimation of low-contrast organs. PMID:21859613
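
    The overlap metrics reported above are straightforward to compute. The sketch below evaluates the Dice coefficient and the organ volume recovery coefficient for two hypothetical boolean voxel masks; the random masks are stand-ins, not atlas output.

```python
# Minimal sketch of two of the reported accuracy metrics: the Dice
# coefficient and the organ volume recovery coefficient, computed for two
# hypothetical boolean voxel masks.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice = 2|A intersect B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

rng = np.random.default_rng(2)
estimated = rng.random((64, 64, 64)) > 0.7   # registered atlas organ mask
truth = rng.random((64, 64, 64)) > 0.7       # ground-truth organ mask
print(f"Dice coefficient:            {dice(estimated, truth):.3f}")
print(f"volume recovery coefficient: {estimated.sum() / truth.sum():.3f}")
```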

  15. A Study of the NASS-CDS System for Injury/Fatality Rates of Occupants in Various Restraints and A Discussion of Alternative Presentation Methods

    PubMed Central

    Stucki, Sheldon Lee; Biss, David J.

    2000-01-01

    An analysis was performed using the National Automotive Sampling System Crashworthiness Data System (NASS-CDS) database to compare the injury/fatality rates of variously restrained drivers with those of unrestrained drivers, both across the total database of frontal-crash drivers and by Delta-V. A structured search of the NASS-CDS was done using the SAS® statistical analysis software to extract the data for this analysis, and the SUDAAN software package was used to arrive at statistical significance indicators. In addition, this paper investigates different methods for presenting results of accident database searches, including significance results; a risk versus Delta-V format for specific exposures; and a percent cumulative injury versus Delta-V format to characterize injury trends. These alternative presentation methods are then discussed by example using the present study results. PMID:11558105

  16. Cluster detection methods applied to the Upper Cape Cod cancer data.

    PubMed

    Ozonoff, Al; Webster, Thomas; Vieira, Veronica; Weinberg, Janice; Ozonoff, David; Aschengrau, Ann

    2005-09-15

    A variety of statistical methods have been suggested to assess the degree and/or the location of spatial clustering of disease cases. However, there is relatively little in the literature devoted to comparison and critique of different methods. Most of the available comparative studies rely on simulated data rather than real data sets. We have chosen three methods currently used for examining spatial disease patterns: the M-statistic of Bonetti and Pagano; the Generalized Additive Model (GAM) method as applied by Webster; and Kulldorff's spatial scan statistic. We apply these statistics to analyze breast cancer data from the Upper Cape Cancer Incidence Study using three different latency assumptions. The three latency assumptions produced three different spatial patterns of cases and controls. For the 20-year latency assumption, all three methods generally concur. However, for the 15-year and no-latency assumptions, the methods produce different results when testing for global clustering. Comparative analyses of real data sets by different statistical methods provide insight into directions for further research. We suggest a research program designed around examining real data sets to guide focused investigation of relevant features using simulated data, for the purpose of understanding how to interpret statistical methods applied to epidemiological data with a spatial component.

  17. Detection of Clostridium difficile infection clusters, using the temporal scan statistic, in a community hospital in southern Ontario, Canada, 2006-2011.

    PubMed

    Faires, Meredith C; Pearl, David L; Ciccotelli, William A; Berke, Olaf; Reid-Smith, Richard J; Weese, J Scott

    2014-05-12

    In hospitals, Clostridium difficile infection (CDI) surveillance relies on unvalidated guidelines or threshold criteria to identify outbreaks. This can result in false-positive and -negative cluster alarms. The application of statistical methods to identify and understand CDI clusters may be a useful alternative or complement to standard surveillance techniques. The objectives of this study were to investigate the utility of the temporal scan statistic for detecting CDI clusters and determine if there are significant differences in the rate of CDI cases by month, season, and year in a community hospital. Bacteriology reports of patients identified with a CDI from August 2006 to February 2011 were collected. For patients detected with CDI from March 2010 to February 2011, stool specimens were obtained. Clostridium difficile isolates were characterized by ribotyping and investigated for the presence of toxin genes by PCR. CDI clusters were investigated using a retrospective temporal scan test statistic. Statistically significant clusters were compared to known CDI outbreaks within the hospital. A negative binomial regression model was used to identify associations between year, season, month and the rate of CDI cases. Overall, 86 CDI cases were identified. Eighteen specimens were analyzed and nine ribotypes were classified with ribotype 027 (n = 6) the most prevalent. The temporal scan statistic identified significant CDI clusters at the hospital (n = 5), service (n = 6), and ward (n = 4) levels (P ≤ 0.05). Three clusters were concordant with the one C. difficile outbreak identified by hospital personnel. Two clusters were identified as potential outbreaks. The negative binomial model indicated years 2007-2010 (P ≤ 0.05) had decreased CDI rates compared to 2006 and spring had an increased CDI rate compared to the fall (P = 0.023). Application of the temporal scan statistic identified several clusters, including potential outbreaks not detected by hospital personnel. The identification of time periods with decreased or increased CDI rates may have been a result of specific hospital events. Understanding the clustering of CDIs can aid in the interpretation of surveillance data and lead to the development of better early detection systems.
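
    As a rough illustration of the scanning idea (not the SaTScan implementation typically used in such studies), the Python sketch below evaluates a Poisson log-likelihood-ratio scan over all contiguous windows of a synthetic monthly count series and assigns the most likely cluster a Monte Carlo p-value. All counts and rates are invented.

```python
# Hedged sketch of a retrospective temporal scan statistic (Kulldorff-style,
# Poisson model): scan all contiguous windows of monthly counts, compute a
# log-likelihood ratio for each, and judge the maximum against a Monte Carlo
# null that redistributes the same total uniformly over months.
import numpy as np

def scan_llr(counts):
    total, months = counts.sum(), len(counts)
    best = (0.0, None)
    for i in range(months):
        run = 0
        for j in range(i, months):
            run += counts[j]
            inside, n_in = run, j - i + 1
            expected_in = total * n_in / months
            outside = total - inside
            if inside <= expected_in or inside == 0 or outside == 0:
                continue
            llr = (inside * np.log(inside / expected_in)
                   + outside * np.log(outside / (total - expected_in)))
            if llr > best[0]:
                best = (llr, (i, j))
    return best

rng = np.random.default_rng(3)
counts = rng.poisson(1.5, 55)          # synthetic monthly case counts
counts[20:24] += rng.poisson(4, 4)     # implant a cluster
llr_obs, window = scan_llr(counts)

sims = [scan_llr(rng.multinomial(counts.sum(), np.ones(55) / 55))[0]
        for _ in range(99)]
p = (1 + sum(s >= llr_obs for s in sims)) / 100
print(f"most likely cluster months {window}, LLR={llr_obs:.2f}, p={p:.2f}")
```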

  18. Exploring Robust Methods for Evaluating Treatment and Comparison Groups in Chronic Care Management Programs

    PubMed Central

    Hamar, Brent; Bradley, Chastity; Gandy, William M.; Harrison, Patricia L.; Sidney, James A.; Coberley, Carter R.; Rula, Elizabeth Y.; Pope, James E.

    2013-01-01

    Evaluation of chronic care management (CCM) programs is necessary to determine the behavioral, clinical, and financial value of the programs. Financial outcomes of members who are exposed to interventions (treatment group) typically are compared to those not exposed (comparison group) in a quasi-experimental study design. However, because member assignment is not randomized, outcomes reported from these designs may be biased or inefficient if study groups are not comparable or balanced prior to analysis. Two matching techniques used to achieve balanced groups are Propensity Score Matching (PSM) and Coarsened Exact Matching (CEM). Unlike PSM, CEM has been shown to yield estimates of causal (program) effects that are lowest in variance and bias for any given sample size. The objective of this case study was to provide a comprehensive comparison of these 2 matching methods within an evaluation of a CCM program administered to a large health plan during a 2-year time period. Descriptive and statistical methods were used to assess the level of balance between comparison and treatment members pre matching. Compared with PSM, CEM retained more members, achieved better balance between matched members, and resulted in a statistically insignificant Wald test statistic for group aggregation. In terms of program performance, the results showed an overall higher medical cost savings among treatment members matched using CEM compared with those matched using PSM (-$25.57 versus -$19.78, respectively). Collectively, the results suggest CEM is a viable alternative, if not the most appropriate matching method, to apply when evaluating CCM program performance. (Population Health Management 2013;16:35–45) PMID:22788834

  19. Exploring robust methods for evaluating treatment and comparison groups in chronic care management programs.

    PubMed

    Wells, Aaron R; Hamar, Brent; Bradley, Chastity; Gandy, William M; Harrison, Patricia L; Sidney, James A; Coberley, Carter R; Rula, Elizabeth Y; Pope, James E

    2013-02-01

    Evaluation of chronic care management (CCM) programs is necessary to determine the behavioral, clinical, and financial value of the programs. Financial outcomes of members who are exposed to interventions (treatment group) typically are compared to those not exposed (comparison group) in a quasi-experimental study design. However, because member assignment is not randomized, outcomes reported from these designs may be biased or inefficient if study groups are not comparable or balanced prior to analysis. Two matching techniques used to achieve balanced groups are Propensity Score Matching (PSM) and Coarsened Exact Matching (CEM). Unlike PSM, CEM has been shown to yield estimates of causal (program) effects that are lowest in variance and bias for any given sample size. The objective of this case study was to provide a comprehensive comparison of these 2 matching methods within an evaluation of a CCM program administered to a large health plan during a 2-year time period. Descriptive and statistical methods were used to assess the level of balance between comparison and treatment members pre matching. Compared with PSM, CEM retained more members, achieved better balance between matched members, and resulted in a statistically insignificant Wald test statistic for group aggregation. In terms of program performance, the results showed an overall higher medical cost savings among treatment members matched using CEM compared with those matched using PSM (-$25.57 versus -$19.78, respectively). Collectively, the results suggest CEM is a viable alternative, if not the most appropriate matching method, to apply when evaluating CCM program performance.
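
    The CEM step described in both records reduces, in its simplest form, to coarsening each covariate into bins and matching exactly on the bin signature. The sketch below shows that core idea on simulated member data; the covariates, bin counts and the omission of stratum weights are simplifications, not the study's specification.

```python
# Minimal sketch of Coarsened Exact Matching (CEM): coarsen covariates into
# bins, match exactly on the bin signature, and drop strata lacking both
# treatment and comparison members. Data are simulated stand-ins.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
df = pd.DataFrame({
    "treated": rng.integers(0, 2, 500),
    "age": rng.normal(55, 12, 500),
    "risk_score": rng.normal(0.5, 0.2, 500),
})

# Coarsen: cut each covariate into a small number of bins
df["age_bin"] = pd.cut(df["age"], bins=5, labels=False)
df["risk_bin"] = pd.cut(df["risk_score"], bins=4, labels=False)

# Keep only strata containing both treated and comparison members
strata = df.groupby(["age_bin", "risk_bin"])["treated"]
keep = strata.transform(lambda s: s.nunique() == 2)
matched = df[keep]
print(f"retained {len(matched)} of {len(df)} members across "
      f"{matched.groupby(['age_bin', 'risk_bin']).ngroups} strata")
```

    A full CEM implementation would additionally weight retained comparison members to reflect stratum sizes before estimating program effects.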

  20. Clinical evaluation of selected Yogic procedures in individuals with low back pain

    PubMed Central

    Pushpika Attanayake, A. M.; Somarathna, K. I. W. K.; Vyas, G. H.; Dash, S. C.

    2010-01-01

    The present study was conducted to evaluate selected yogic procedures in individuals with low back pain. Back pain is one of the commonest presentations in clinical practice, which motivated the present study; it has been estimated that more than three-quarters of the world's population experience back pain at some time in their lives. Twelve patients were selected and randomly divided into two groups, viz., group A (yogic group) and group B (control group). Advice on lifestyle and diet was given to all patients. The effect of the therapy was assessed subjectively and objectively. The scores for the yogic group and the control group were individually analyzed before and after treatment, and the values were compared using standard statistical protocols. Yogic intervention revealed 79% relief in both subjective and objective parameters (i.e., 7 out of 14 parameters showed statistically highly significant results (P < 0.01), while 4 showed significant results (P < 0.05)). The comparative effect of the yogic group and the control group showed 79% relief in both subjective and objective parameters (i.e., 6 out of 14 parameters showed statistically highly significant results (P < 0.01), while 5 showed significant results (P < 0.05)). PMID:22131719

  1. U. S. Acquisition Cost Reduction and Avoidance Due to Foreign Military Sales

    DTIC Science & Technology

    2016-05-25

    delinquencies resulting in forfeiture, regulatory non-compliance, and possible misunderstandings, misrepresentations that might cause a business deal to fail...Knoema. (2015, September 26). Ongoing Armed Conflicts, 2014-2015 - knoema.com. Retrieved September 26, 2015, from Free data, statistics, analysis...States-of-America/Ease-of-Doing-Business?compareTo=GB,NL,IT,RU,FR Knoema. (2016, January 17). World and regional statistics, national data, maps

  2. Comparative study of dental cephalometric patterns of Japanese-Brazilian, Caucasian and Mongoloid patients

    PubMed Central

    Sathler, Renata; Pinzan, Arnaldo; Fernandes, Thais Maria Freire; de Almeida, Renato Rodrigues; Henriques, José Fernando Castanha

    2014-01-01

    Introduction The objective of this study was to identify the patterns of dental variables of adolescent Japanese-Brazilian descendants with normal occlusion, and also to compare them with similar Caucasian and Mongoloid samples. Methods Lateral cephalometric radiographs were used to compare the groups: Caucasian (n = 40), Japanese-Brazilian (n = 32) and Mongoloid (n = 33). The statistical tests used were one-way ANOVA and ANCOVA. The cephalometric measurements followed the analyses of Steiner, Tweed and McNamara Jr. Results Statistical differences (P < 0.05) indicated a smaller interincisal angle and overbite for the Japanese-Brazilian sample when compared to the Caucasian sample, although with values similar to the Mongoloid group. Conclusion The dental patterns found for the Japanese-Brazilian descendants were, in general, more similar to those of the Mongoloid sample. PMID:25279521

  3. Statistics of Low-Mass Companions to Stars: Implications for Their Origin

    NASA Technical Reports Server (NTRS)

    Stepinski, T. F.; Black, D. C.

    2001-01-01

    One of the more significant results from observational astronomy over the past few years has been the detection, primarily via radial velocity studies, of low-mass companions (LMCs) to solar-like stars. The commonly held interpretation is that the majority of these are "extrasolar planets" whereas the rest are brown dwarfs, the distinction made on the basis of an apparent discontinuity in the distribution of M sin i for LMCs as revealed by a histogram. We report here results from a statistical analysis of M sin i, as well as of the orbital elements data for available LMCs, to test the assertion that the LMC population is heterogeneous. The outcome is mixed. Solely on the basis of the distribution of M sin i, a heterogeneous model is preferable. Overall, we find that a definitive statement asserting that the LMC population is heterogeneous is, at present, unjustified. In addition we compare statistics of LMCs with a comparable sample of stellar binaries. We find a remarkable statistical similarity between these two populations. This similarity, coupled with a marked populational dissimilarity between LMCs and acknowledged planets, motivates us to suggest a common-origin hypothesis for LMCs and stellar binaries as an alternative to the prevailing interpretation. We discuss the merits of such a hypothesis and indicate a possible scenario for the formation of LMCs.

  4. A novel comparator featured with input data characteristic

    NASA Astrophysics Data System (ADS)

    Jiang, Xiaobo; Ye, Desheng; Xu, Xiangmin; Zheng, Shuai

    2016-03-01

    Two types of low-power asynchronous comparators exploiting the statistical characteristics of the input data are proposed in this article. The asynchronous ripple comparator stops comparing at the first unequal bit but delivers the result to the least significant bit. The pre-stop asynchronous comparator can completely stop comparing and obtain the result immediately. The proposed and contrastive comparators were implemented in the SMIC 0.18 μm process with different bit widths. Simulation shows that the proposed pre-stop asynchronous comparator features the lowest power consumption, shortest average propagation delay and highest area efficiency among the comparators. Data paths of a low-density parity-check decoder using the proposed pre-stop asynchronous comparators are the most power efficient compared with other data paths using synthesised, clock-gating and bitwise-competition-logic comparators.
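
    A software analogue of the pre-stop idea, assuming unsigned operands: scan from the most significant bit and stop at the first unequal bit, so the expected number of bit evaluations depends on the input data statistics rather than the full word width. The paper implements this in 0.18 μm CMOS logic; this sketch only mirrors the control flow.

```python
# Illustrative early-termination comparison, most significant bit first.
def prestop_compare(a: int, b: int, width: int) -> tuple[int, int]:
    """Compare unsigned ints MSB-first; return (sign, bit positions examined)."""
    for examined, pos in enumerate(range(width - 1, -1, -1), start=1):
        bit_a = (a >> pos) & 1
        bit_b = (b >> pos) & 1
        if bit_a != bit_b:                 # first unequal bit decides
            return (1 if bit_a else -1), examined
    return 0, width                        # equal: all bits examined

print(prestop_compare(0b1010_1100, 0b1010_0110, 8))  # first difference at bit 3
```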

  5. Image statistics for surface reflectance perception.

    PubMed

    Sharan, Lavanya; Li, Yuanzhen; Motoyoshi, Isamu; Nishida, Shin'ya; Adelson, Edward H

    2008-04-01

    Human observers can distinguish the albedo of real-world surfaces even when the surfaces are viewed in isolation, contrary to the Gelb effect. We sought to measure this ability and to understand the cues that might underlie it. We took photographs of complex surfaces such as stucco and asked observers to judge their diffuse reflectance by comparing them to a physical Munsell scale. Their judgments, while imperfect, were highly correlated with the true reflectance. The judgments were also highly correlated with certain image statistics, such as moment and percentile statistics of the luminance and subband histograms. When we digitally manipulated these statistics in an image, human judgments were correspondingly altered. Moreover, linear combinations of such statistics allow a machine vision system (operating within the constrained world of single surfaces) to estimate albedo with an accuracy similar to that of human observers. Taken together, these results indicate that some simple image statistics have a strong influence on the judgment of surface reflectance.
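
    The statistics named above are simple to extract. The sketch below computes moment and percentile statistics of a luminance histogram and of one crude bandpass subband for a random stand-in image; the Gaussian-difference subband is an assumption, not the authors' filter bank.

```python
# Illustrative moment and percentile statistics of luminance and subband
# histograms, of the kind correlated with judged reflectance.
import numpy as np
from scipy import ndimage
from scipy.stats import skew, kurtosis

def luminance_stats(img):
    lum = img.ravel()
    return {
        "mean": lum.mean(), "std": lum.std(),
        "skew": skew(lum), "kurtosis": kurtosis(lum),
        "p10": np.percentile(lum, 10), "p90": np.percentile(lum, 90),
    }

rng = np.random.default_rng(5)
image = rng.random((128, 128))                       # stand-in for a photo
subband = image - ndimage.gaussian_filter(image, 3)  # crude bandpass subband
print("luminance:", luminance_stats(image))
print("subband:  ", luminance_stats(subband))
```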

  6. Parameter estimation techniques based on optimizing goodness-of-fit statistics for structural reliability

    NASA Technical Reports Server (NTRS)

    Starlinger, Alois; Duffy, Stephen F.; Palko, Joseph L.

    1993-01-01

    New methods are presented that utilize the optimization of goodness-of-fit statistics in order to estimate Weibull parameters from failure data. It is assumed that the underlying population is characterized by a three-parameter Weibull distribution. Goodness-of-fit tests are based on the empirical distribution function (EDF). The EDF is a step function, calculated using failure data, and represents an approximation of the cumulative distribution function for the underlying population. Statistics (such as the Kolmogorov-Smirnov statistic and the Anderson-Darling statistic) measure the discrepancy between the EDF and the cumulative distribution function (CDF). These statistics are minimized with respect to the three Weibull parameters. Due to nonlinearities encountered in the minimization process, Powell's numerical optimization procedure is applied to obtain the optimum value of the EDF statistic. Numerical examples show the applicability of these new estimation methods. The results are compared to the estimates obtained with Cooper's nonlinear regression algorithm.
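
    A hedged sketch of the approach: choose three-parameter Weibull values that minimize the Kolmogorov-Smirnov discrepancy between the fitted CDF and the EDF, using Powell's derivative-free method as the paper does. The synthetic failure data and starting point are illustrative.

```python
# Estimate three-parameter Weibull values by minimizing the KS statistic
# with Powell's method. The Anderson-Darling statistic could be swapped in
# as the objective in the same way.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(6)
failures = stats.weibull_min.rvs(c=2.5, loc=10.0, scale=50.0, size=60,
                                 random_state=rng)

def ks_statistic(params):
    shape, loc, scale = params
    if shape <= 0 or scale <= 0 or loc >= failures.min():
        return np.inf                      # keep parameters feasible
    x = np.sort(failures)
    cdf = stats.weibull_min.cdf(x, c=shape, loc=loc, scale=scale)
    n = x.size
    ecdf_hi = np.arange(1, n + 1) / n
    ecdf_lo = np.arange(0, n) / n
    return max(np.max(ecdf_hi - cdf), np.max(cdf - ecdf_lo))

result = minimize(ks_statistic, x0=[1.5, 0.0, 30.0], method="Powell")
print("shape, location, scale =", np.round(result.x, 3))
```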

  7. Evaluation of Root Canal Preparation Using Rotary System and Hand Instruments Assessed by Micro-Computed Tomography

    PubMed Central

    Stavileci, Miranda; Hoxha, Veton; Görduysus, Ömer; Tatar, Ilkan; Laperre, Kjell; Hostens, Jeroen; Küçükkaya, Selen; Muhaxheri, Edmond

    2015-01-01

    Background Complete mechanical preparation of the root canal system is rarely achieved. Therefore, the purpose of this study was to evaluate and compare the root canal shaping efficacy of ProTaper rotary files and standard stainless steel K-files using micro-computed tomography. Material/Methods Sixty extracted upper second premolars were selected and divided into 2 groups of 30 teeth each. Before preparation, all samples were scanned by micro-computed tomography. Thirty teeth were prepared with the ProTaper system and the other 30 with stainless steel files. After preparation, the untouched surface and root canal straightening were evaluated with micro-computed tomography. The percentage of untouched root canal surface was calculated in the coronal, middle, and apical parts of the canal. We also calculated straightening of the canal after root canal preparation. Results from the 2 groups were statistically compared using the Minitab statistical package. Results ProTaper rotary files left less untouched root canal surface compared with manual preparation in coronal, middle, and apical sector (p<0.001). Similarly, there was a statistically significant difference in root canal straightening after preparation between the techniques (p<0.001). Conclusions Neither manual nor rotary techniques completely prepared the root canal, and both techniques caused slight straightening of the root canal. PMID:26092929

  8. Dentascan – Is the Investment Worth the Hype ???

    PubMed Central

    Shah, Monali A; Shah, Sneha S; Dave, Deepak

    2013-01-01

    Background: Open Bone Measurement (OBM) and Bone Sounding (BS) are the most reliable but invasive clinical methods for Alveolar Bone Level (ABL) assessment, causing discomfort to the patient. Routinely, IOPAs and OPGs are the commonest radiographic techniques used, which tend to underestimate bone loss and obscure buccal/lingual defects. A novel technique like Dentascan (CBCT) eliminates this limitation by giving images in 3 planes – sagittal, coronal and axial. Aim: To compare and correlate the non-invasive 3D radiographic technique of Dentascan with BS and OBM, and with IOPA and OPG, in assessing the ABL. Settings and Design: Cross-sectional diagnostic study. Material and Methods: Two hundred and five sites were subjected to clinical and radiographic diagnostic techniques. The relative distance between the alveolar bone crest and a reference wire was measured. All the measurements were compared and tested against the OBM. Statistical Analysis: Student's t-test, ANOVA, Pearson correlation coefficient. Results: There was a statistically significant difference between Dentascan and OBM; only BS showed agreement with OBM (p < 0.05). Dentascan correlated weakly with OBM and BS lingually. All other techniques showed statistically significant differences between them (p = 0.00). Conclusion: Within the limitations of this study, only BS seems to be comparable with OBM, with no superior result of Dentascan over the conventional techniques, except for lingual measurements. PMID:24551722

  9. Data processing of qualitative results from an interlaboratory comparison for the detection of “Flavescence dorée” phytoplasma: How the use of statistics can improve the reliability of the method validation process in plant pathology

    PubMed Central

    Renaudin, Isabelle; Poliakoff, Françoise

    2017-01-01

    A working group established in the framework of the EUPHRESCO European collaborative project aimed to compare and validate diagnostic protocols for the detection of “Flavescence dorée” (FD) phytoplasma in grapevines. Seven molecular protocols were compared in an interlaboratory test performance study where each laboratory had to analyze the same panel of samples consisting of DNA extracts prepared by the organizing laboratory. The tested molecular methods consisted of universal and group-specific real-time and end-point nested PCR tests. Different statistical approaches were applied to this collaborative study. Firstly, there was the standard statistical approach consisting in analyzing samples which are known to be positive and samples which are known to be negative and reporting the proportion of false-positive and false-negative results to respectively calculate diagnostic specificity and sensitivity. This approach was supplemented by the calculation of repeatability and reproducibility for qualitative methods based on the notions of accordance and concordance. Other new approaches were also implemented, based, on the one hand, on the probability of detection model, and, on the other hand, on Bayes’ theorem. These various statistical approaches are complementary and give consistent results. Their combination, and in particular, the introduction of new statistical approaches give overall information on the performance and limitations of the different methods, and are particularly useful for selecting the most appropriate detection scheme with regards to the prevalence of the pathogen. Three real-time PCR protocols (methods M4, M5 and M6 respectively developed by Hren (2007), Pelletier (2009) and under patent oligonucleotides) achieved the highest levels of performance for FD phytoplasma detection. This paper also addresses the issue of indeterminate results and the identification of outlier results. The statistical tools presented in this paper and their combination can be applied to many other studies concerning plant pathogens and other disciplines that use qualitative detection methods. PMID:28384335

  10. Data processing of qualitative results from an interlaboratory comparison for the detection of "Flavescence dorée" phytoplasma: How the use of statistics can improve the reliability of the method validation process in plant pathology.

    PubMed

    Chabirand, Aude; Loiseau, Marianne; Renaudin, Isabelle; Poliakoff, Françoise

    2017-01-01

    A working group established in the framework of the EUPHRESCO European collaborative project aimed to compare and validate diagnostic protocols for the detection of "Flavescence dorée" (FD) phytoplasma in grapevines. Seven molecular protocols were compared in an interlaboratory test performance study where each laboratory had to analyze the same panel of samples consisting of DNA extracts prepared by the organizing laboratory. The tested molecular methods consisted of universal and group-specific real-time and end-point nested PCR tests. Different statistical approaches were applied to this collaborative study. Firstly, there was the standard statistical approach consisting in analyzing samples which are known to be positive and samples which are known to be negative and reporting the proportion of false-positive and false-negative results to respectively calculate diagnostic specificity and sensitivity. This approach was supplemented by the calculation of repeatability and reproducibility for qualitative methods based on the notions of accordance and concordance. Other new approaches were also implemented, based, on the one hand, on the probability of detection model, and, on the other hand, on Bayes' theorem. These various statistical approaches are complementary and give consistent results. Their combination, and in particular, the introduction of new statistical approaches give overall information on the performance and limitations of the different methods, and are particularly useful for selecting the most appropriate detection scheme with regards to the prevalence of the pathogen. Three real-time PCR protocols (methods M4, M5 and M6 respectively developed by Hren (2007), Pelletier (2009) and under patent oligonucleotides) achieved the highest levels of performance for FD phytoplasma detection. This paper also addresses the issue of indeterminate results and the identification of outlier results. The statistical tools presented in this paper and their combination can be applied to many other studies concerning plant pathogens and other disciplines that use qualitative detection methods.
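
    The Bayes' theorem element mentioned in both records can be made concrete in a few lines: given a method's diagnostic sensitivity and specificity, the probability that a positive result reflects true infection depends strongly on pathogen prevalence. The numbers below are illustrative, not the study's estimates.

```python
# Worked Bayes' theorem example: positive predictive value of a qualitative
# detection method as a function of prevalence.
def positive_predictive_value(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

for prev in (0.01, 0.10, 0.50):
    ppv = positive_predictive_value(sensitivity=0.95, specificity=0.98,
                                    prevalence=prev)
    print(f"prevalence {prev:4.0%} -> PPV {ppv:.2f}")
```

    This is why the records stress selecting a detection scheme with regard to prevalence: the same test that is nearly always right in an outbreak situation produces many false positives when the pathogen is rare.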

  11. Multi-reader ROC studies with split-plot designs: a comparison of statistical methods.

    PubMed

    Obuchowski, Nancy A; Gallas, Brandon D; Hillis, Stephen L

    2012-12-01

    Multireader imaging trials often use a factorial design, in which study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of this design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper, the authors compare three methods of analysis for the split-plot design. Three statistical methods are presented: the Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean analysis-of-variance approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power, and confidence interval coverage of the three test statistics. The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% confidence intervals falls close to the nominal coverage for small and large sample sizes. The split-plot multireader, multicase study design can be statistically efficient compared to the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rates, similar power, and nominal confidence interval coverage, are available for this study design. Copyright © 2012 AUR. All rights reserved.

  12. Comparative efficacy of golimumab, infliximab, and adalimumab for moderately to severely active ulcerative colitis: a network meta-analysis accounting for differences in trial designs.

    PubMed

    Thorlund, Kristian; Druyts, Eric; Toor, Kabirraaj; Mills, Edward J

    2015-05-01

    To conduct a network meta-analysis (NMA) to establish the comparative efficacy of infliximab, adalimumab and golimumab for the treatment of moderately to severely active ulcerative colitis (UC). A systematic literature search identified five randomized controlled trials for inclusion in the NMA. One trial assessed golimumab, two assessed infliximab and two assessed adalimumab. Outcomes included clinical response, clinical remission, mucosal healing, sustained clinical response and sustained clinical remission. Innovative methods were used to allow inclusion of the golimumab trial data given the alternative design of this trial (i.e., two-stage re-randomization). After induction, no statistically significant differences were found between golimumab and adalimumab or between golimumab and infliximab. Infliximab was statistically superior to adalimumab after induction for all outcomes and treatment ranking suggested infliximab as the superior treatment for induction. Golimumab and infliximab were associated with similar efficacy for achieving maintained clinical remission and sustained clinical remission, whereas adalimumab was not significantly better than placebo for sustained clinical remission. Golimumab and infliximab were also associated with similar efficacy for achieving maintained clinical response, sustained clinical response and mucosal healing. Finally, golimumab 50 and 100 mg was statistically superior to adalimumab for clinical response and sustained clinical response, and golimumab 100 mg was also statistically superior to adalimumab for mucosal healing. The results of our NMA suggest that infliximab was statistically superior to adalimumab after induction, and that golimumab was statistically superior to adalimumab for sustained outcomes. Golimumab and infliximab appeared comparable in efficacy.

  13. Statistical Energy Analysis (SEA) and Energy Finite Element Analysis (EFEA) Predictions for a Floor-Equipped Composite Cylinder

    NASA Technical Reports Server (NTRS)

    Grosveld, Ferdinand W.; Schiller, Noah H.; Cabell, Randolph H.

    2011-01-01

    Comet Enflow is commercially available, high-frequency vibroacoustic analysis software founded on Energy Finite Element Analysis (EFEA) and Energy Boundary Element Analysis (EBEA). EFEA was validated on a floor-equipped composite cylinder by comparing EFEA vibroacoustic response predictions with Statistical Energy Analysis (SEA) predictions and experimental results. SEA predictions were made using the commercial software program VA One 2009 from ESI Group. The frequency region of interest for this study covers the one-third octave bands with center frequencies from 100 Hz to 4000 Hz.

  14. Effect of Abdominoplasty in the Lipid Profile of Patients with Dyslipidemia

    PubMed Central

    Ramos-Gallardo, Guillermo; Pérez Verdin, Ana; Fuentes, Miguel; Godínez Gutiérrez, Sergio; Ambriz-Plascencia, Ana Rosa; González-García, Ignacio; Gómez-Fonseca, Sonia Mericia; Madrigal, Rosalio; González-Reynoso, Luis Iván; Figueroa, Sandra; Toscano Igartua, Xavier; Jiménez Gutierrez, Déctor Francisco

    2013-01-01

    Introduction. Dyslipidemia, like other chronic degenerative diseases, is pandemic in Latin America and around the world. Many patients asking for body contouring surgery can be sick without knowing it. Objective. To observe the lipid profile of patients with dyslipidemia before and three months after an abdominoplasty. Methods. Patients who were candidates for an abdominoplasty and did not have morbid obesity were followed before and three months after the surgery. We compared the lipid profile, glucose, insulin, and HOMA (a cardiovascular risk marker) before and three months after the surgery. We used Student's t test to compare the results. A P value less than 0.05 was considered significant. Results. Twenty-six patients were observed before and after the surgery. At the third month, we found statistical differences only in LDL and triglyceride values (P = 0.04 and P = 0.03). The rest of the metabolic values did not reach statistical significance. Conclusion. In this group of patients with dyslipidemia, at the third month, only LDL and triglyceride values reached statistical significance. There was no significant change in glucose, insulin, HOMA, cholesterol, VLDL, or HDL. PMID:23956856

  15. A New Scoring System to Predict the Risk for High-risk Adenoma and Comparison of Existing Risk Calculators.

    PubMed

    Murchie, Brent; Tandon, Kanwarpreet; Hakim, Seifeldin; Shah, Kinchit; O'Rourke, Colin; Castro, Fernando J

    2017-04-01

    Colorectal cancer (CRC) screening guidelines likely over-generalize CRC risk: 35% of Americans are not up to date with screening, and the incidence of CRC in younger patients is growing. We developed a practical prediction model for high-risk colon adenomas in an average-risk population, using an expanded definition of high-risk polyps (≥3 nonadvanced adenomas) to expose higher-than-average-risk patients. We also compared results with previously created calculators. Patients aged 40 to 59 years, undergoing first-time average-risk screening or diagnostic colonoscopies, were evaluated. Risk calculators for advanced adenomas and high-risk adenomas were created based on age, body mass index, sex, race, and smoking history. Previously established calculators with similar risk factors were selected for comparison of the concordance statistic (c-statistic) and external validation. A total of 5063 patients were included. Advanced adenomas and high-risk adenomas were seen in 5.7% and 7.4% of the patient population, respectively. The c-statistic for our calculator was 0.639 for the prediction of advanced adenomas, and 0.650 for high-risk adenomas. When applied to our population, all previous models had lower c-statistic results, although one performed similarly. Our model compares favorably to previously established prediction models. Age and body mass index were used as continuous variables, likely improving the c-statistic. It also reports absolute predictive probabilities of advanced and high-risk polyps, allowing for more individualized risk assessment of CRC.
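
    As a sketch of how such a calculator and its c-statistic are typically produced (the paper's own modeling details are not reproduced), the code below fits a logistic regression on simulated data with the listed predictors, keeping age and body mass index continuous, and reports the area under the ROC curve on held-out data. All coefficients and the simulated prevalence are assumptions.

```python
# Hedged sketch: risk calculator via logistic regression, scored by the
# c-statistic (area under the ROC curve). Data are simulated stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 5000
X = np.column_stack([
    rng.uniform(40, 60, n),        # age, kept continuous
    rng.normal(28, 5, n),          # body mass index, kept continuous
    rng.integers(0, 2, n),         # sex
    rng.integers(0, 2, n),         # smoking history
])
logit = -8 + 0.08 * X[:, 0] + 0.05 * X[:, 1] + 0.4 * X[:, 2] + 0.5 * X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # simulated high-risk adenoma

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
c_stat = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"c-statistic: {c_stat:.3f}")
```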

  16. Anomaly detection in hyperspectral imagery: statistics vs. graph-based algorithms

    NASA Astrophysics Data System (ADS)

    Berkson, Emily E.; Messinger, David W.

    2016-05-01

    Anomaly detection (AD) algorithms are frequently applied to hyperspectral imagery, but different algorithms produce different outlier results depending on the image scene content and the assumed background model. This work provides the first comparison of anomaly score distributions between common statistics-based anomaly detection algorithms (RX and subspace-RX) and the graph-based Topological Anomaly Detector (TAD). Anomaly scores in statistical AD algorithms should theoretically approximate a chi-squared distribution; however, this is rarely the case with real hyperspectral imagery. The expected distribution of scores found with graph-based methods remains unclear. We also look for general trends in algorithm performance with varied scene content. Three separate scenes were extracted from the hyperspectral MegaScene image taken over downtown Rochester, NY with the VIS-NIR-SWIR ProSpecTIR instrument. In order of most to least cluttered, we study an urban, suburban, and rural scene. The three AD algorithms were applied to each scene, and the distributions of the most anomalous 5% of pixels were compared. We find that subspace-RX performs better than RX, because the data becomes more normal when the highest variance principal components are removed. We also see that compared to statistical detectors, anomalies detected by TAD are easier to separate from the background. Due to their different underlying assumptions, the statistical and graph-based algorithms highlighted different anomalies within the urban scene. These results will lead to a deeper understanding of these algorithms and their applicability across different types of imagery.
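
    A minimal version of the global RX detector referenced above: each pixel's anomaly score is its Mahalanobis distance from the scene-wide background mean and covariance. The cube below is random; subspace-RX and the graph-based TAD are not sketched here.

```python
# Global RX anomaly detector on a hyperspectral cube (rows, cols, bands).
import numpy as np

def rx_scores(cube):
    """Return per-pixel Mahalanobis distances from the global background."""
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands)
    mu = pixels.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(pixels, rowvar=False))
    centered = pixels - mu
    scores = np.einsum("ij,jk,ik->i", centered, cov_inv, centered)
    return scores.reshape(rows, cols)

rng = np.random.default_rng(8)
cube = rng.normal(0, 1, (100, 100, 30))
cube[50, 50] += 5.0                       # implant one spectral anomaly
scores = rx_scores(cube)
top5 = np.percentile(scores, 95)          # the "most anomalous 5%" threshold
print(f"implanted pixel score {scores[50, 50]:.1f} vs 95th pct {top5:.1f}")
```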

  17. Evidence-based pathology in its second decade: toward probabilistic cognitive computing.

    PubMed

    Marchevsky, Alberto M; Walts, Ann E; Wick, Mark R

    2017-03-01

    Evidence-based pathology advocates using a combination of best available data ("evidence") from the literature and personal experience for the diagnosis, estimation of prognosis, and assessment of other variables that impact individual patient care. Evidence-based pathology relies on systematic reviews of the literature, evaluation of the quality of evidence as categorized by evidence levels and statistical tools such as meta-analyses, estimates of probabilities and odds, and others. However, it is well known that previously "statistically significant" information usually does not accurately forecast the future for individual patients. There is great interest in "cognitive computing" in which "data mining" is combined with "predictive analytics" designed to forecast future events and estimate the strength of those predictions. This study demonstrates the use of IBM Watson Analytics software to evaluate and predict the prognosis of 101 patients with typical and atypical pulmonary carcinoid tumors in which Ki-67 indices have been determined. The results obtained with this system are compared with those previously reported using "routine" statistical software and the help of a professional statistician. IBM Watson Analytics interactively provides statistical results that are comparable to those obtained with routine statistical tools but much more rapidly, with considerably less effort and with interactive graphics that are intuitively easy to apply. It also enables analysis of natural language variables and yields detailed survival predictions for patient subgroups selected by the user. Potential applications of this tool and basic concepts of cognitive computing are discussed. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Fine Mapping Causal Variants with an Approximate Bayesian Method Using Marginal Test Statistics.

    PubMed

    Chen, Wenan; Larrabee, Beth R; Ovsyannikova, Inna G; Kennedy, Richard B; Haralambieva, Iana H; Poland, Gregory A; Schaid, Daniel J

    2015-07-01

    Two recently developed fine-mapping methods, CAVIAR and PAINTOR, demonstrate better performance over other fine-mapping methods. They also have the advantage of using only the marginal test statistics and the correlation among SNPs. Both methods leverage the fact that the marginal test statistics asymptotically follow a multivariate normal distribution and are likelihood based. However, their relationship with Bayesian fine mapping, such as BIMBAM, is not clear. In this study, we first show that CAVIAR and BIMBAM are actually approximately equivalent to each other. This leads to a fine-mapping method using marginal test statistics in the Bayesian framework, which we call CAVIAR Bayes factor (CAVIARBF). Another advantage of the Bayesian framework is that it can answer both association and fine-mapping questions. We also used simulations to compare CAVIARBF with other methods under different numbers of causal variants. The results showed that both CAVIARBF and BIMBAM have better performance than PAINTOR and other methods. Compared to BIMBAM, CAVIARBF has the advantage of using only marginal test statistics and takes about one-quarter to one-fifth of the running time. We applied different methods on two independent cohorts of the same phenotype. Results showed that CAVIARBF, BIMBAM, and PAINTOR selected the same top 3 SNPs; however, CAVIARBF and BIMBAM had better consistency in selecting the top 10 ranked SNPs between the two cohorts. Software is available at https://bitbucket.org/Wenan/caviarbf. Copyright © 2015 by the Genetics Society of America.

  19. Potential errors and misuse of statistics in studies on leakage in endodontics.

    PubMed

    Lucena, C; Lopez, J M; Pulgar, R; Abalos, C; Valderrama, M J

    2013-04-01

    To assess the quality of the statistical methodology used in studies of leakage in Endodontics, and to compare the results found using appropriate versus inappropriate inferential statistical methods. The search strategy used the descriptors 'root filling' 'microleakage', 'dye penetration', 'dye leakage', 'polymicrobial leakage' and 'fluid filtration' for the time interval 2001-2010 in journals within the categories 'Dentistry, Oral Surgery and Medicine' and 'Materials Science, Biomaterials' of the Journal Citation Report. All retrieved articles were reviewed to find potential pitfalls in statistical methodology that may be encountered during study design, data management or data analysis. The database included 209 papers. In all the studies reviewed, the statistical methods used were appropriate for the category attributed to the outcome variable, but in 41% of the cases, the chi-square test or parametric methods were inappropriately selected subsequently. In 2% of the papers, no statistical test was used. In 99% of cases, a statistically 'significant' or 'not significant' effect was reported as a main finding, whilst only 1% also presented an estimation of the magnitude of the effect. When the appropriate statistical methods were applied in the studies with originally inappropriate data analysis, the conclusions changed in 19% of the cases. Statistical deficiencies in leakage studies may affect their results and interpretation and might be one of the reasons for the poor agreement amongst the reported findings. Therefore, more effort should be made to standardize statistical methodology. © 2012 International Endodontic Journal.

  20. Comparing long term impact on ovarian reserve between laparoscopic ovarian cystectomy and open laparotomy for ovarian endometrioma

    PubMed Central

    2013-01-01

    Objective To compare the long-term impact on ovarian reserve of laparoscopic ovarian cystectomy with bipolar electrocoagulation versus laparotomic cystectomy with suturing for ovarian endometriotic cysts. Patient and method(s) 121 patients with benign ovarian endometriotic cysts were randomised to either laparoscopic ovarian cystectomy using bipolar electrocoagulation (61 patients) or laparotomic ovarian cystectomy using sutures (60 patients). Serum follicle-stimulating hormone, anti-Müllerian hormone, basal antral follicle count, mean ovarian diameter, and ovarian stromal blood flow velocity were measured at 6, 12 and 18 months after surgery and compared between the two groups. Result(s) A statistically significant increase in serum FSH was found in the laparoscopic bipolar group at 6-, 12- and 18-month follow-up compared to the open laparotomy suture group. Likewise, a statistically significant decrease in the mean AMH value occurred in the laparoscopic bipolar group at 6-, 12- and 18-month follow-up compared to the open laparotomy suture group. Basal antral follicle number, mean ovarian diameter and peak systolic velocity were significantly decreased during the 6-, 12- and 18-month follow-up in the laparoscopic bipolar group compared to the open laparotomy suture group. Conclusion(s) After laparoscopic ovarian cystectomy for endometrioma, all parameters of ovarian reserve are significantly decreased on long-term follow-up as compared to open laparotomy. PMID:24180348

  1. Offset Stream Technology Test-Summary of Results

    NASA Technical Reports Server (NTRS)

    Brown, Clifford A.; Bridges, James E.; Henderson, Brenda

    2007-01-01

    Statistical jet noise prediction codes that accurately predict spectral directivity for both cold and hot jets are highly sought both in industry and academia. Their formulation, whether based upon manipulations of the Navier-Stokes equations or upon heuristic arguments, requires substantial experimental observation of jet turbulence statistics. Unfortunately, the statistics of most interest involve the space-time correlation of flow quantities, especially velocity. Until the last 10 years, all turbulence statistics were made with single-point probes, such as hotwires or laser Doppler anemometry. Particle image velocimetry (PIV) brought many new insights with its ability to measure velocity fields over large regions of jets simultaneously; however, it could not measure velocity at rates higher than a few fields per second, making it unsuitable for obtaining temporal spectra and correlations. The development of time-resolved PIV, herein called TR-PIV, has removed this limitation, enabling measurement of velocity fields at high resolution in both space and time. In this paper, ground-breaking results from the application of TR-PIV to single-flow hot jets are used to explore the impact of heat on turbulent statistics of interest to jet noise models. First, a brief summary of validation studies is reported, undertaken to show that the new technique produces the same trusted results as hotwires in cold, low-speed jets. Second, velocity spectra from cold and hot jets are compared to see the effect of heat on the spectra. It is seen that heated jets possess 10 percent more turbulence intensity than unheated jets with the same velocity. The spectral shapes, when normalized using Strouhal scaling, are insensitive to temperature if the stream-wise location is normalized relative to the potential core length. Similarly, second-order velocity correlations, of interest in the modeling of jet noise sources, are also insensitive to temperature.

  2. Statistical colour models: an automated digital image analysis method for quantification of histological biomarkers.

    PubMed

    Shu, Jie; Dolman, G E; Duan, Jiang; Qiu, Guoping; Ilyas, Mohammad

    2016-04-27

    Colour is the most important feature used in quantitative immunohistochemistry (IHC) image analysis; IHC is used to provide information relating to aetiology and to confirm malignancy. Statistical modelling is a technique widely used for colour detection in computer vision. We have developed a statistical model of colour detection applicable to the detection of stain colour in digital IHC images. The model was first trained on a large set of colour pixels collected semi-automatically. To speed up the training and detection processes, we removed the luminance (Y) channel of the YCbCr colour space and chose 128 histogram bins, the optimal number. A maximum likelihood classifier is used to automatically classify pixels in digital slides into positively or negatively stained pixels. The model-based tool was developed within ImageJ to quantify targets identified using IHC and histochemistry. The purpose of the evaluation was to compare the computer model with human evaluation. Several large datasets were prepared and obtained from human oesophageal cancer, colon cancer and liver cirrhosis with different colour stains. Experimental results demonstrated that the model-based tool achieves more accurate results than colour deconvolution and the CMYK model in the detection of brown colour, and is comparable to colour deconvolution in the detection of pink colour. We have also demonstrated that the proposed model has little inter-dataset variation. A robust and effective statistical model is introduced in this paper. The model-based interactive tool in ImageJ, which can create a visual representation of the statistical model and detect a specified colour automatically, is easy to use and available freely at http://rsb.info.nih.gov/ij/plugins/ihc-toolbox/index.html . Testing of the tool by different users showed only minor inter-observer variations in results.
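
    A stripped-down sketch of the described pipeline, with synthetic training pixels standing in for the semi-automatically collected ones: discard the Y channel, histogram the CbCr values into 128 bins per axis, and classify by maximum likelihood between the positive-stain and negative-stain histograms. The cluster centers and the Laplace smoothing are assumptions, not details from the paper.

```python
# Histogram-based maximum likelihood colour classifier in CbCr space.
import numpy as np

BINS = 128  # the bin count reported as optimal in the paper

def fit_histogram(pixels_cbcr):
    hist, _, _ = np.histogram2d(pixels_cbcr[:, 0], pixels_cbcr[:, 1],
                                bins=BINS, range=[[0, 256], [0, 256]])
    return (hist + 1) / (hist.sum() + BINS * BINS)   # Laplace smoothing

def classify(pixels_cbcr, pos_hist, neg_hist):
    idx = np.clip((pixels_cbcr / 256 * BINS).astype(int), 0, BINS - 1)
    return pos_hist[idx[:, 0], idx[:, 1]] > neg_hist[idx[:, 0], idx[:, 1]]

rng = np.random.default_rng(9)
pos_train = rng.normal([150, 110], 12, (5000, 2))    # brown-ish stain cluster
neg_train = rng.normal([120, 128], 12, (5000, 2))    # background cluster
pos_hist, neg_hist = fit_histogram(pos_train), fit_histogram(neg_train)

test = rng.normal([150, 110], 12, (1000, 2))
print(f"fraction called positive: {classify(test, pos_hist, neg_hist).mean():.2f}")
```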

  3. The Global Error Assessment (GEA) model for the selection of differentially expressed genes in microarray data.

    PubMed

    Mansourian, Robert; Mutch, David M; Antille, Nicolas; Aubert, Jerome; Fogel, Paul; Le Goff, Jean-Marc; Moulin, Julie; Petrov, Anton; Rytz, Andreas; Voegel, Johannes J; Roberts, Matthew-Alan

    2004-11-01

    Microarray technology has become a powerful research tool in many fields of study; however, the cost of microarrays often results in the use of a low number of replicates (k). Under circumstances where k is low, it becomes difficult to perform standard statistical tests to extract the most biologically significant experimental results. Other more advanced statistical tests have been developed; however, their use and interpretation often remain difficult to implement in routine biological research. The present work outlines a method that achieves sufficient statistical power for selecting differentially expressed genes under conditions of low k, while remaining as an intuitive and computationally efficient procedure. The present study describes a Global Error Assessment (GEA) methodology to select differentially expressed genes in microarray datasets, and was developed using an in vitro experiment that compared control and interferon-gamma treated skin cells. In this experiment, up to nine replicates were used to confidently estimate error, thereby enabling methods of different statistical power to be compared. Gene expression results of a similar absolute expression are binned, so as to enable a highly accurate local estimate of the mean squared error within conditions. The model then relates variability of gene expression in each bin to absolute expression levels and uses this in a test derived from the classical ANOVA. The GEA selection method is compared with both the classical and permutational ANOVA tests, and demonstrates an increased stability, robustness and confidence in gene selection. A subset of the selected genes were validated by real-time reverse transcription-polymerase chain reaction (RT-PCR). All these results suggest that GEA methodology is (i) suitable for selection of differentially expressed genes in microarray data, (ii) intuitive and computationally efficient and (iii) especially advantageous under conditions of low k. The GEA code for R software is freely available upon request to authors.

  4. Analysis of corneal endothelial cell density and morphology after laser in situ keratomileusis using two types of femtosecond lasers

    PubMed Central

    Tomita, Minoru; Waring, George O; Watabe, Miyuki

    2012-01-01

    Purpose To compare two different femtosecond lasers used for flap creation during laser-assisted in situ keratomileusis (LASIK) surgery in terms of their effects on the corneal endothelium. Methods We performed LASIK surgery on 254 eyes of 131 patients using IntraLase FS60 (Abbott Medical Optics, Inc, Irvine, CA; IntraLase group) and 254 eyes of 136 patients using Femto LDV (Ziemer Group AG, Port, Switzerland; LDV group) for corneal flap creation. The mean cell density, coefficient of variation, and hexagonality of the corneal endothelial cells were determined and the results were statistically compared. Results There were no statistically significant differences in the corneal morphology between pre and post LASIK results in each group, nor were there significant differences between the results of both groups at 3 months post LASIK. Conclusions Both IntraLase FS60 and Ziemer Femto LDV are able to create flaps without significant adverse effects on the corneal endothelial morphology through 3 months after LASIK surgery. PMID:23055680

  5. One-dimensional turbulence modeling of a turbulent counterflow flame with comparison to DNS

    DOE PAGES

    Jozefik, Zoltan; Kerstein, Alan R.; Schmidt, Heiko; ...

    2015-06-01

    The one-dimensional turbulence (ODT) model is applied to a reactant-to-product counterflow configuration and results are compared with DNS data. The model employed herein solves conservation equations for momentum, energy, and species on a one-dimensional (1D) domain corresponding to the line spanning the domain between nozzle orifice centers. The effects of turbulent mixing are modeled via a stochastic process, while the Kolmogorov and reactive length and time scales are explicitly resolved and a detailed chemical kinetic mechanism is used. Comparisons between model and DNS results for spatial mean and root-mean-square (RMS) velocity, temperature, and major and minor species profiles are shown. The ODT approach shows qualitatively and quantitatively reasonable agreement with the DNS data. Scatter plots and statistics conditioned on temperature are also compared for heat release rate and all species. ODT is able to capture the range of results depicted by DNS. As a result, conditional statistics show signs of underignition.

  6. Hypertension screening

    NASA Technical Reports Server (NTRS)

    Foulke, J. M.

    1975-01-01

    An attempt was made to measure the response to an announcement of hypertension screening at the Goddard Space Center and to compare the results with previously reported statistics. Education and patient awareness of the problem were stressed.

  7. A Quantitative Comparative Study of Blended and Traditional Models in the Secondary Advanced Placement Statistics Classroom

    ERIC Educational Resources Information Center

    Owens, Susan T.

    2017-01-01

    Technology is becoming an integral tool in the classroom and can make a positive impact on how students learn. This quantitative comparative research study examined gender-based differences among secondary Advanced Placement (AP) Statistics students, comparing Educational Testing Service (ETS) College Board AP Statistics examination scores…

  8. Comparative Financial Statistics for Public Two-Year Colleges: FY 1991 National Sample.

    ERIC Educational Resources Information Center

    Dickmeyer, Nathan; Cirino, Anna Marie

    This report provides comparative financial information derived from a national sample of 503 public two-year colleges. The report includes space for colleges to compare their institutional statistics with data provided on national sample medians; quartile data for the national sample; and statistics presented in various formats, including tables,…

  9. A power comparison of generalized additive models and the spatial scan statistic in a case-control setting

    PubMed Central

    2010-01-01

    Background A common, important problem in spatial epidemiology is measuring and identifying variation in disease risk across a study region. In application of statistical methods, the problem has two parts. First, spatial variation in risk must be detected across the study region and, second, areas of increased or decreased risk must be correctly identified. The location of such areas may give clues to environmental sources of exposure and disease etiology. One statistical method applicable in spatial epidemiologic settings is a generalized additive model (GAM) which can be applied with a bivariate LOESS smoother to account for geographic location as a possible predictor of disease status. A natural hypothesis when applying this method is whether residential location of subjects is associated with the outcome, i.e. is the smoothing term necessary? Permutation tests are a reasonable hypothesis testing method and provide adequate power under a simple alternative hypothesis. These tests have yet to be compared to other spatial statistics. Results This research uses simulated point data generated under three alternative hypotheses to evaluate the properties of the permutation methods and compare them to the popular spatial scan statistic in a case-control setting. Case 1 was a single circular cluster centered in a circular study region. The spatial scan statistic had the highest power though the GAM method estimates did not fall far behind. Case 2 was a single point source located at the center of a circular cluster and Case 3 was a line source at the center of the horizontal axis of a square study region. Each had linearly decreasing logodds with distance from the point. The GAM methods outperformed the scan statistic in Cases 2 and 3. Comparing sensitivity, measured as the proportion of the exposure source correctly identified as high or low risk, the GAM methods outperformed the scan statistic in all three Cases. Conclusions The GAM permutation testing methods provide a regression-based alternative to the spatial scan statistic. Across all hypotheses examined in this research, the GAM methods had competing or greater power estimates and sensitivities exceeding that of the spatial scan statistic. PMID:20642827
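
    For orientation, the sketch below implements a bare-bones Bernoulli-model circular scan over simulated case-control points: circles centered on data points, a handful of radii, and the standard likelihood-ratio comparison of case fractions inside versus outside. Monte Carlo inference on the maximum, as in Kulldorff's SaTScan, is omitted, and all data are invented.

```python
# Bare-bones Bernoulli circular scan statistic for case-control point data.
import numpy as np

def bernoulli_llr(c_in, n_in, c_tot, n_tot):
    c_out, n_out = c_tot - c_in, n_tot - n_in
    if n_in == 0 or n_out == 0 or c_in / n_in <= c_out / n_out:
        return 0.0
    def ll(c, n):  # log-likelihood of c cases among n at rate c/n
        p = c / n
        return c * np.log(p) + (n - c) * np.log(1 - p) if 0 < p < 1 else 0.0
    return ll(c_in, n_in) + ll(c_out, n_out) - ll(c_tot, n_tot)

rng = np.random.default_rng(10)
xy = rng.uniform(0, 1, (800, 2))
risk = np.where(np.hypot(xy[:, 0] - 0.5, xy[:, 1] - 0.5) < 0.15, 0.5, 0.25)
case = rng.random(800) < risk                       # elevated risk in a disc

best = (0.0, None)
for i in range(len(xy)):                            # circles centered on points
    d = np.hypot(*(xy - xy[i]).T)
    for r in (0.05, 0.10, 0.15, 0.20):
        inside = d < r
        llr = bernoulli_llr(case[inside].sum(), inside.sum(),
                            case.sum(), len(xy))
        if llr > best[0]:
            best = (llr, (i, r))
print(f"max LLR {best[0]:.1f} at center index {best[1][0]}, radius {best[1][1]}")
```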

  10. Probability workshop to be better in probability topic

    NASA Astrophysics Data System (ADS)

    Asmat, Aszila; Ujang, Suriyati; Wahid, Sharifah Norhuda Syed

    2015-02-01

    The purpose of the present study was to examine whether statistics anxiety and attitudes towards the probability topic among students at the higher education level have an effect on their performance. Sixty-two fourth-semester science students were given statistics anxiety questionnaires about their perception of the probability topic. Results indicated that students' performance on the probability topic was not related to anxiety level; that is, higher statistics anxiety did not translate into lower scores on the probability topic. The study also revealed that motivated students benefited from the probability workshop, with performance on the probability topic showing a positive improvement compared to before the workshop. In addition, there was a significant difference in performance between genders, with better achievement among female students compared to male students. Thus, more initiatives in learning programs with different teaching approaches are needed to provide useful information for improving student learning outcomes in higher learning institutions.

  11. Blended Learning Versus Traditional Lecture in Introductory Nursing Pathophysiology Courses.

    PubMed

    Blissitt, Andrea Marie

    2016-04-01

    Currently, many undergraduate nursing courses use blended-learning course formats with success; however, little evidence exists that supports the use of blended formats in introductory pathophysiology courses. The purpose of this study was to compare the scores on pre- and posttests and course satisfaction between traditional and blended course formats in an introductory nursing pathophysiology course. This study used a quantitative, quasi-experimental, nonrandomized control group, pretest-posttest design. Analysis of covariance compared pre- and posttest scores, and a t test for independent samples compared students' reported course satisfaction of the traditional and blended course formats. Results indicated that the differences in posttest scores were not statistically significant between groups. Students in the traditional group reported statistically significantly higher satisfaction ratings than students in the blended group. The results of this study support the need for further research of using blended learning in introductory pathophysiology courses in undergraduate baccalaureate nursing programs. Further investigation into how satisfaction is affected by course formats is needed. Copyright 2016, SLACK Incorporated.

  12. Short-term Forecasting of the Prevalence of Trachoma: Expert Opinion, Statistical Regression, versus Transmission Models

    PubMed Central

    Liu, Fengchen; Porco, Travis C.; Amza, Abdou; Kadri, Boubacar; Nassirou, Baido; West, Sheila K.; Bailey, Robin L.; Keenan, Jeremy D.; Solomon, Anthony W.; Emerson, Paul M.; Gambhir, Manoj; Lietman, Thomas M.

    2015-01-01

    Background Trachoma programs rely on guidelines made in large part using expert opinion of what will happen with and without intervention. Large community-randomized trials offer an opportunity to actually compare forecasting methods in a masked fashion. Methods The Program for the Rapid Elimination of Trachoma trials estimated longitudinal prevalence of ocular chlamydial infection from 24 communities treated annually with mass azithromycin. Given antibiotic coverage and biannual assessments from baseline through 30 months, forecasts of the prevalence of infection in each of the 24 communities at 36 months were made by three methods: the sum of 15 experts’ opinion, statistical regression of the square-root-transformed prevalence, and a stochastic hidden Markov model of infection transmission (Susceptible-Infectious-Susceptible, or SIS model). All forecasters were masked to the 36-month results and to the other forecasts. Forecasts of the 24 communities were scored by the likelihood of the observed results and compared using Wilcoxon’s signed-rank statistic. Findings Regression and SIS hidden Markov models had significantly better likelihood than community expert opinion (p = 0.004 and p = 0.01, respectively). All forecasts scored better when perturbed to decrease Fisher’s information. Each individual expert’s forecast was poorer than the sum of experts. Interpretation Regression and SIS models performed significantly better than expert opinion, although all forecasts were overly confident. Further model refinements may score better, although would need to be tested and compared in new masked studies. Construction of guidelines that rely on forecasting future prevalence could consider use of mathematical and statistical models. PMID:26302380
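
    The paired scoring step is easy to sketch. The snippet below (Python with scipy; the per-community log-likelihood values are invented placeholders, not trial data) shows how two forecasters' scores over 24 communities would be compared with Wilcoxon's signed-rank test:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Hypothetical per-community log-likelihood scores for two forecasters
    # (24 communities, as in the trials; the values are illustrative only).
    ll_regression = rng.normal(-2.0, 0.5, size=24)
    ll_experts = ll_regression - np.abs(rng.normal(0.3, 0.4, size=24))

    # Paired comparison of forecast scores across communities.
    res = stats.wilcoxon(ll_regression, ll_experts)
    print(f"Wilcoxon signed-rank: statistic={res.statistic:.1f}, p={res.pvalue:.4f}")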

  13. Short-term Forecasting of the Prevalence of Trachoma: Expert Opinion, Statistical Regression, versus Transmission Models.

    PubMed

    Liu, Fengchen; Porco, Travis C; Amza, Abdou; Kadri, Boubacar; Nassirou, Baido; West, Sheila K; Bailey, Robin L; Keenan, Jeremy D; Solomon, Anthony W; Emerson, Paul M; Gambhir, Manoj; Lietman, Thomas M

    2015-08-01

    Trachoma programs rely on guidelines made in large part using expert opinion of what will happen with and without intervention. Large community-randomized trials offer an opportunity to actually compare forecasting methods in a masked fashion. The Program for the Rapid Elimination of Trachoma trials estimated longitudinal prevalence of ocular chlamydial infection from 24 communities treated annually with mass azithromycin. Given antibiotic coverage and biannual assessments from baseline through 30 months, forecasts of the prevalence of infection in each of the 24 communities at 36 months were made by three methods: the sum of 15 experts' opinion, statistical regression of the square-root-transformed prevalence, and a stochastic hidden Markov model of infection transmission (Susceptible-Infectious-Susceptible, or SIS model). All forecasters were masked to the 36-month results and to the other forecasts. Forecasts of the 24 communities were scored by the likelihood of the observed results and compared using Wilcoxon's signed-rank statistic. Regression and SIS hidden Markov models had significantly better likelihood than community expert opinion (p = 0.004 and p = 0.01, respectively). All forecasts scored better when perturbed to decrease Fisher's information. Each individual expert's forecast was poorer than the sum of experts. Regression and SIS models performed significantly better than expert opinion, although all forecasts were overly confident. Further model refinements may score better, although would need to be tested and compared in new masked studies. Construction of guidelines that rely on forecasting future prevalence could consider use of mathematical and statistical models. Clinicaltrials.gov NCT00792922.

  14. Early-Stage Estimated Value of Blend Sign on the Prognosis of Patients with Intracerebral Hemorrhage

    PubMed Central

    Zhou, Ningquan; Wang, Chao

    2018-01-01

    Background and Purpose This study aimed to explore the relationship between the blend sign and the prognosis of patients with intracerebral hemorrhage (ICH). Methods Between January 2014 and December 2016, the results of cranial computed tomography imaging within 24 h after the onset of symptoms from 275 patients with ICH were retrospectively analyzed. Patients with or without the blend sign were compared on early-stage rates of coagulation function abnormality, rebleeding, mortality, and poor prognosis. Results Of the 275 patients with ICH, 47 had Blend Sign I (17.09%) and 17 had Blend Sign II (6.18%). The coagulation function abnormality rate did not differ significantly among the Blend Sign I, Blend Sign II, and conventional groups (P > 0.05). In the Blend Sign I group, the rebleeding rate was 4.26%, the poor prognosis rate 25.53%, and the mortality rate 6.38%, none of which differed significantly from the conventional group (P > 0.05). In the Blend Sign II group, the rebleeding rate was 47.06%, the poor prognosis rate 82.35%, and the mortality rate 47.06%, all of which differed significantly from the conventional and Blend Sign I groups (P < 0.05). Conclusions For patients with Blend Sign I, the prognosis was equivalent to that in the conventional group, with no statistically significant difference. The rebleeding, poor prognosis, and mortality rates were higher in the Blend Sign II group than in the conventional group and deserve more attention.

  15. Qualitative Literature Review of the Prevalence of Depression in Medical Students Compared to Students in Non-medical Degrees.

    PubMed

    Bacchi, Stephen; Licinio, Julio

    2015-06-01

    The purpose of this study is to review studies published in English between 1 January 2000 and 16 June 2014, in peer-reviewed journals, that have assessed the prevalence of depression, comparing medical students and non-medical students with a single evaluation method. The databases PubMed, Medline, EMBASE, PsycINFO, and Scopus were searched for eligible articles. Searches used combinations of the Medical Subject Headings medical student and depression. Titles and abstracts were reviewed to determine eligibility before full-text articles were retrieved, which were then also reviewed. Twelve studies met eligibility criteria. Non-medical groups surveyed included dentistry, business, humanities, nursing, pharmacy, and architecture students. One study found statistically significant results suggesting that medical students had a higher prevalence of depression than groups of non-medical students; five studies found statistically significant results indicating that the prevalence of depression in medical students was less than that in groups of non-medical students; four studies found no statistically significant difference, and two studies did not report on the statistical significance of their findings. One study was longitudinal, and 11 studies were cross-sectional. While there are limitations to these comparisons, in the main, the reviewed literature suggests that medical students have similar or lower rates of depression compared to certain groups of non-medical students. A lack of longitudinal studies meant that potential common underlying causes could not be discerned, highlighting the need for further research in this area. The high rates of depression among medical students indicate the continuing need for interventions to reduce depression.

  16. Categorical data processing for real estate objects valuation using statistical analysis

    NASA Astrophysics Data System (ADS)

    Parygin, D. S.; Malikov, V. P.; Golubev, A. V.; Sadovnikova, N. P.; Petrova, T. M.; Finogeev, A. G.

    2018-05-01

    Theoretical and practical approaches to the use of statistical methods for studying various properties of infrastructure objects are analyzed in the paper. Methods of forecasting the value of objects are considered. A method for coding categorical variables describing properties of real estate objects is proposed. The analysis of the results of modeling the price of real estate objects using regression analysis and an algorithm based on a comparative approach is carried out.
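
    As an illustration of coding categorical variables for a price regression (a minimal sketch assuming one-hot dummy coding; the column names and toy listing data are hypothetical, not the authors' dataset or method):

    import pandas as pd
    from sklearn.linear_model import LinearRegression

    # Hypothetical listings; categorical attributes describe the property.
    df = pd.DataFrame({
        "area_m2":  [54, 72, 38, 95, 61],
        "district": ["north", "center", "north", "south", "center"],
        "material": ["brick", "panel", "brick", "monolith", "panel"],
        "price_k":  [3100, 4800, 2400, 6900, 4100],
    })

    # One-hot (dummy) coding turns each category into indicator columns.
    X = pd.get_dummies(df[["area_m2", "district", "material"]], drop_first=True)
    model = LinearRegression().fit(X, df["price_k"])
    print(dict(zip(X.columns, model.coef_.round(1))))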

  17. Adaptive statistical pattern classifiers for remotely sensed data

    NASA Technical Reports Server (NTRS)

    Gonzalez, R. C.; Pace, M. O.; Raulston, H. S.

    1975-01-01

    A technique for the adaptive estimation of nonstationary statistics necessary for Bayesian classification is developed. The basic approach to the adaptive estimation procedure consists of two steps: (1) an optimal stochastic approximation of the parameters of interest and (2) a projection of the parameters in time or position. A divergence criterion is developed to monitor algorithm performance. Comparative results of adaptive and nonadaptive classifier tests are presented for simulated four dimensional spectral scan data.
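
    A minimal sketch of the two-step idea, assuming a constant-gain recursive mean update for step (1) and a linear projection in time for step (2) (the gain, the drift model, and the four-band data are illustrative, not the paper's algorithm):

    import numpy as np

    rng = np.random.default_rng(0)

    def track_mean(samples, gain=0.05):
        # Step (1): stochastic approximation of a drifting class mean,
        # mu <- mu + gain * (x - mu); a constant gain tracks nonstationarity.
        mu = samples[0].copy()
        history = [mu.copy()]
        for x in samples[1:]:
            mu += gain * (x - mu)
            history.append(mu.copy())
        return np.array(history)

    # Nonstationary 4-band "spectral" samples whose true mean drifts linearly.
    t = np.arange(300)
    true_mean = 0.01 * t[:, None] * np.ones(4)
    samples = true_mean + rng.normal(0, 0.5, (300, 4))
    est = track_mean(samples)

    # Step (2): project the estimated parameter forward in time.
    coeffs = np.polyfit(t[-100:], est[-100:, 0], 1)
    print("projected band-0 mean at t=310:", np.polyval(coeffs, 310).round(2))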

  18. Transfer of SIMNET Training in the Armor Officer Basic Course

    DTIC Science & Technology

    1991-01-01

    group correctly performed more tasks in the posttest, but the difference was not statistically significant for these small samples. Gains from pretest...to posttest were not compared statistically, but the field-trained group showed little average gain. Based on these results and other supporting data...that serve as a control group, and (b) SIMNET classes after the change that serve as a treatment group. The comparison is termed quasi-experimental

  19. Reproducible detection of disease-associated markers from gene expression data.

    PubMed

    Omae, Katsuhiro; Komori, Osamu; Eguchi, Shinto

    2016-08-18

    Detection of disease-associated markers plays a crucial role in gene screening for biological studies. Two-sample test statistics, such as the t-statistic, are widely used to rank genes based on gene expression data. However, the resultant gene ranking is often not reproducible among different data sets. Such irreproducibility may be caused by disease heterogeneity. When we divided data into two subsets, we found that the signs of the two t-statistics were often reversed. Focusing on such instability, we proposed a sign-sum statistic that counts the signs of the t-statistics for all possible subsets. The proposed method excludes genes affected by heterogeneity, thereby improving the reproducibility of gene ranking. We compared the sign-sum statistic with the t-statistic by a theoretical evaluation of the upper confidence limit. Through simulations and applications to real data sets, we show that the sign-sum statistic exhibits superior performance. We derive the sign-sum statistic to obtain a robust gene ranking: it gives a more reproducible ranking than the t-statistic. Using simulated data sets, we show that the sign-sum statistic excludes hetero-type genes well; on the real data sets it also performs well from the viewpoint of ranking reproducibility.
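
    The instability idea admits a compact sketch. The snippet below (Python; an illustrative approximation that sums t-statistic signs over random subsamples rather than over all possible subsets, so it is not the paper's exact estimator) shows how a heterogeneity-affected gene scores near zero:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def sign_sum(case, ctrl, n_splits=200, frac=0.5):
        # Repeatedly subsample each group, compute the two-sample
        # t-statistic, and average its signs; sign flips across subsets
        # pull the score toward zero, flagging heterogeneity.
        total = 0.0
        for _ in range(n_splits):
            sub_case = rng.choice(case, size=max(2, int(frac * len(case))), replace=False)
            sub_ctrl = rng.choice(ctrl, size=max(2, int(frac * len(ctrl))), replace=False)
            t, _ = stats.ttest_ind(sub_case, sub_ctrl)
            total += np.sign(t)
        return total / n_splits

    ctrl = rng.normal(0.0, 1.0, 20)
    homog = rng.normal(1.0, 1.0, 20)                                        # stable effect
    hetero = np.concatenate([rng.normal(3, 1, 10), rng.normal(-3, 1, 10)])  # sign-flipping
    print(f"homogeneous gene:   {sign_sum(homog, ctrl):+.2f}")
    print(f"heterogeneous gene: {sign_sum(hetero, ctrl):+.2f}")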

  20. Low-Level Contrast Statistics of Natural Images Can Modulate the Frequency of Event-Related Potentials (ERP) in Humans.

    PubMed

    Ghodrati, Masoud; Ghodousi, Mahrad; Yoonessi, Ali

    2016-01-01

    Humans are fast and accurate in categorizing complex natural images. It is, however, unclear what features of visual information are exploited by the brain to perceive images with such speed and accuracy. It has been shown that low-level contrast statistics of natural scenes can explain the variance of the amplitude of event-related potentials (ERP) in response to rapidly presented images. In this study, we investigated the effect of these statistics on the frequency content of ERPs. We recorded ERPs from human subjects while they viewed natural images, each presented for 70 ms. Our results showed that Weibull contrast statistics, as a biologically plausible model, explained the variance of ERPs best, compared to the other image statistics that we assessed. Our time-frequency analysis revealed a significant correlation between these statistics and ERPs' power within the theta frequency band (~3-7 Hz). This is interesting, as the theta band is believed to be involved in context updating and semantic encoding. This correlation became significant at ~110 ms after stimulus onset and peaked at 138 ms. Our results show that not only the amplitude but also the frequency of neural responses can be modulated by low-level contrast statistics of natural images, and they highlight their potential role in scene perception.

  1. Low-Level Contrast Statistics of Natural Images Can Modulate the Frequency of Event-Related Potentials (ERP) in Humans

    PubMed Central

    Ghodrati, Masoud; Ghodousi, Mahrad; Yoonessi, Ali

    2016-01-01

    Humans are fast and accurate in categorizing complex natural images. It is, however, unclear what features of visual information are exploited by the brain to perceive images with such speed and accuracy. It has been shown that low-level contrast statistics of natural scenes can explain the variance of the amplitude of event-related potentials (ERP) in response to rapidly presented images. In this study, we investigated the effect of these statistics on the frequency content of ERPs. We recorded ERPs from human subjects while they viewed natural images, each presented for 70 ms. Our results showed that Weibull contrast statistics, as a biologically plausible model, explained the variance of ERPs best, compared to the other image statistics that we assessed. Our time-frequency analysis revealed a significant correlation between these statistics and ERPs' power within the theta frequency band (~3–7 Hz). This is interesting, as the theta band is believed to be involved in context updating and semantic encoding. This correlation became significant at ~110 ms after stimulus onset and peaked at 138 ms. Our results show that not only the amplitude but also the frequency of neural responses can be modulated by low-level contrast statistics of natural images, and they highlight their potential role in scene perception. PMID:28018197

  2. Neural network approaches versus statistical methods in classification of multisource remote sensing data

    NASA Technical Reports Server (NTRS)

    Benediktsson, Jon A.; Swain, Philip H.; Ersoy, Okan K.

    1990-01-01

    Neural network learning procedures and statistical classification methods are applied and compared empirically in the classification of multisource remote sensing and geographic data. Statistical multisource classification by means of a method based on Bayesian classification theory is also investigated and modified. The modifications permit control of the influence of the data sources involved in the classification process. Reliability measures are introduced to rank the quality of the data sources. The data sources are then weighted according to these rankings in the statistical multisource classification. Four data sources are used in experiments: Landsat MSS data and three forms of topographic data (elevation, slope, and aspect). Experimental results show that the two approaches have unique advantages and disadvantages in this classification application.

  3. Statistical tests to compare motif count exceptionalities

    PubMed Central

    Robin, Stéphane; Schbath, Sophie; Vandewalle, Vincent

    2007-01-01

    Background Finding over- or under-represented motifs in biological sequences is now a common task in genomics. Thanks to p-value calculation for motif counts, exceptional motifs are identified and represent candidate functional motifs. The present work addresses the related question of comparing the exceptionality of one motif in two different sequences. Just comparing the motif count p-values in each sequence is indeed not sufficient to decide if this motif is significantly more exceptional in one sequence compared to the other one. A statistical test is required. Results We develop and analyze two statistical tests, an exact binomial one and an asymptotic likelihood ratio test, to decide whether the exceptionality of a given motif is equivalent or significantly different in two sequences of interest. For that purpose, motif occurrences are modeled by Poisson processes, with a special care for overlapping motifs. Both tests can take the sequence compositions into account. As an illustration, we compare the octamer exceptionalities in the Escherichia coli K-12 backbone versus variable strain-specific loops. Conclusion The exact binomial test is particularly adapted for small counts. For large counts, we advise to use the likelihood ratio test which is asymptotic but strongly correlated with the exact binomial test and very simple to use. PMID:17346349
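
    A bare-bones version of the exact binomial comparison can be sketched as follows (Python with scipy; the counts and sequence lengths are made up, and the simple per-base Poisson rate ignores the paper's treatment of overlapping occurrences and sequence composition):

    from scipy.stats import binomtest

    # Hypothetical counts of one octamer in two sequences.
    n1, len1 = 42, 2_500_000   # occurrences and length, sequence 1
    n2, len2 = 11, 400_000     # occurrences and length, sequence 2

    # With independent Poisson counts of equal per-base rate, conditioning
    # on the total makes n1 ~ Binomial(n1 + n2, len1 / (len1 + len2)).
    res = binomtest(n1, n1 + n2, p=len1 / (len1 + len2))
    print(f"two-sided exact binomial p = {res.pvalue:.4f}")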

  4. Determination of errors in derived magnetic field directions in geosynchronous orbit: results from a statistical approach

    NASA Astrophysics Data System (ADS)

    Chen, Yue; Cunningham, Gregory; Henderson, Michael

    2016-09-01

    This study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Second, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors < ~2°, than those from the three empirical models with averaged errors > ~5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. Finally, as applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. This study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.

  5. Determination of errors in derived magnetic field directions in geosynchronous orbit: results from a statistical approach

    DOE PAGES

    Chen, Yue; Cunningham, Gregory; Henderson, Michael

    2016-09-21

    Our study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Furthermore, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors < ~2°, than those from the three empirical models with averaged errors > ~5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. As applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. Finally, this study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.

  6. Determination of errors in derived magnetic field directions in geosynchronous orbit: results from a statistical approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yue; Cunningham, Gregory; Henderson, Michael

    Our study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Furthermore, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors < ~2°, than those from the three empirical models with averaged errors > ~5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. As applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. Finally, this study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.

  7. Numerical study of axial turbulent flow over long cylinders

    NASA Technical Reports Server (NTRS)

    Neves, J. C.; Moin, P.; Moser, R. D.

    1991-01-01

    The effects of transverse curvature are investigated by means of direct numerical simulations of turbulent axial flow over cylinders. Two cases of Reynolds number of about 3400 and layer-thickness-to-cylinder-radius ratios of 5 and 11 were simulated. All essential turbulence scales were resolved in both calculations, and a large number of turbulence statistics were computed. The results are compared with the plane channel results of Kim et al. (1987) and with experiments. With transverse curvature the skin friction coefficient increases and the turbulence statistics, when scaled with wall units, are lower than in the plane channel. The momentum equation provides a scaling that collapses the cylinder statistics, and allows the results to be interpreted in light of the plane channel flow. The azimuthal and radial length scales of the structures in the flow are of the order of the cylinder diameter. Boomerang-shaped structures with large spanwise length scales were observed in the flow.

  8. Determination of apparent coupling factors for adhesive bonded acrylic plates using SEAL approach

    NASA Astrophysics Data System (ADS)

    Pankaj, Achuthan. C.; Shivaprasad, M. V.; Murigendrappa, S. M.

    2018-04-01

    Apparent coupling loss factors (CLF) and velocity responses have been computed for two lap-joined adhesive-bonded plates using a finite element and an experimental statistical energy analysis (SEA)-like approach. A finite element model of the plates was created using ANSYS software. The statistical energy parameters were computed using the velocity responses obtained from a harmonic forced-excitation analysis. Experiments were carried out for two different cases of adhesive-bonded joints, and the results were compared with the apparent coupling factors and velocity responses obtained from finite element analysis. The results signify the importance of modeling adhesive-bonded joints in the computation of apparent coupling factors and their further use in computing energies and velocity responses with a statistical energy analysis-like approach.

  9. Appropriate Statistics for Determining Chance-Removed Interpractitioner Agreement.

    PubMed

    Popplewell, Michael; Reizes, John; Zaslawski, Chris

    2018-05-31

    Fleiss' Kappa (FK) has been commonly, but incorrectly, employed as the "standard" for evaluating chance-removed inter-rater agreement with ordinal data. This practice may lead to misleading conclusions in inter-rater agreement research. An example is presented that demonstrates the conditions where FK produces inappropriate results, compared with Gwet's AC2, which is proposed as a more appropriate statistic. A novel format for recording Chinese Medicine (CM) diagnoses, called the Diagnostic System of Oriental Medicine (DSOM), was used to record and compare patient diagnostic data; unlike the contemporary CM diagnostic format, it allows agreement by chance to be considered when evaluating patient data obtained with unrestricted diagnostic options available to diagnosticians. Five CM practitioners diagnosed 42 subjects drawn from an open population. Subjects' diagnoses were recorded using the DSOM format. All the available data were initially used to evaluate agreement. Then, the subjects were sorted into three groups to demonstrate the effects of differing data marginality on the calculated chance-removed agreement. Agreement between the practitioners for each subject was evaluated with linearly weighted simple agreement, FK, and Gwet's AC2. In all cases, overall agreement was much lower with FK than with Gwet's AC2. Larger differences occurred when the data were more free-marginal. Inter-rater agreement determined with the FK statistic is unlikely to be correct unless it can be shown that the data from which agreement is determined are, in fact, fixed-marginal. It follows that results obtained on agreement between practitioners with FK are probably incorrect. It is shown that inter-rater agreement evaluated with the AC2 statistic is an appropriate measure when fixed-marginal data are neither expected nor guaranteed. The AC2 statistic should be used as the standard statistical approach for determining agreement between practitioners.
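
    To make the contrast concrete, the sketch below computes Fleiss' kappa with statsmodels next to a hand-rolled Gwet AC1 (the unweighted, nominal-scale relative of AC2; the rating table is invented and AC2's ordinal weighting is omitted) on data with skewed marginals, where FK drops while the Gwet-style statistic stays high:

    import numpy as np
    from statsmodels.stats.inter_rater import fleiss_kappa

    # counts[i, k] = number of raters assigning subject i to category k
    # (illustrative: strong agreement but skewed category prevalence).
    counts = np.array([
        [5, 0, 0], [5, 0, 0], [4, 1, 0], [5, 0, 0],
        [5, 0, 0], [4, 0, 1], [5, 0, 0], [0, 5, 0],
    ])

    def gwet_ac1(counts):
        # Chance agreement uses average category propensities, which keeps
        # the statistic stable when marginals are highly skewed.
        n, q = counts.shape
        r = counts.sum(axis=1)                                # raters per subject
        pa = ((counts * (counts - 1)).sum(axis=1) / (r * (r - 1))).mean()
        pi = (counts / r[:, None]).mean(axis=0)               # category propensities
        pe = (pi * (1 - pi)).sum() / (q - 1)
        return (pa - pe) / (1 - pe)

    print("Fleiss kappa:", round(fleiss_kappa(counts), 3))
    print("Gwet AC1:    ", round(gwet_ac1(counts), 3))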

  10. [The growth behavior of mouse fibroblasts on intraocular lens surface of various silicone and PMMA materials].

    PubMed

    Kammann, J; Kreiner, C F; Kaden, P

    1994-08-01

    Experience with intraocular lenses (IOL) made of PMMA dates back ca. 40 years, while silicone IOLs have been in use for only about 10 years. The biocompatibility of PMMA and silicone caoutchouc was tested in a comparative study investigating the growth of mouse fibroblasts on different IOL materials. Spectrophotometric determination of protein synthesis and liquid scintillation counting of DNA synthesis were carried out. The spreading of cells was planimetrically determined, and the DNA synthesis of individual cells in direct contact with the test sample was tested. The results showed that the biocompatibility of silicone lenses made of purified caoutchouc is comparable with that of PMMA lenses; there is no statistically significant difference. However, impurities arising during material synthesis result in a statistically significant inhibition of cell growth on the IOL surfaces.

  11. Efficacy of Curcuma for Treatment of Osteoarthritis

    PubMed Central

    Perkins, Kimberly; Sahy, William; Beckett, Robert D.

    2016-01-01

    The objective of this review is to identify, summarize, and evaluate clinical trials to determine the efficacy of curcuma in the treatment of osteoarthritis. A literature search for interventional studies assessing the efficacy of curcuma was performed, resulting in 8 clinical trials. Studies have investigated the effect of curcuma on pain, stiffness, and functionality in patients with knee osteoarthritis. Curcuma-containing products consistently demonstrated statistically significant improvement in osteoarthritis-related endpoints compared with placebo, with one exception. When compared with active control, curcuma-containing products were similar to nonsteroidal anti-inflammatory drugs, and potentially to glucosamine. While statistically significant differences in outcomes were reported in a majority of studies, the small magnitude of effect and the presence of major study limitations hinder the application of these results. Further rigorous studies are needed prior to recommending curcuma as an effective alternative therapy for knee osteoarthritis. PMID:26976085

  12. Statistical and Machine Learning forecasting methods: Concerns and ways forward

    PubMed Central

    Makridakis, Spyros; Assimakopoulos, Vassilios

    2018-01-01

    Machine Learning (ML) methods have been proposed in the academic literature as alternatives to statistical ones for time series forecasting. Yet, scant evidence is available about their relative performance in terms of accuracy and computational requirements. The purpose of this paper is to evaluate such performance across multiple forecasting horizons using a large subset of 1045 monthly time series used in the M3 Competition. After comparing the post-sample accuracy of popular ML methods with that of eight traditional statistical ones, we found that the former are dominated across both accuracy measures used and for all forecasting horizons examined. Moreover, we observed that their computational requirements are considerably greater than those of statistical methods. The paper discusses the results, explains why the accuracy of ML models is below that of statistical ones and proposes some possible ways forward. The empirical results found in our research stress the need for objective and unbiased ways to test the performance of forecasting methods that can be achieved through sizable and open competitions allowing meaningful comparisons and definite conclusions. PMID:29584784

  13. Sealing ability of root-end filling materials.

    PubMed

    Amezcua, Octávio; Gonzalez, Álvaro Cruz; Borges, Álvaro Henrique; Bandeca, Matheus Coelho; Estrela, Cyntia Rodrigues de Araújo; Estrela, Carlos

    2015-03-01

    The aim of this research was to compare the apical sealing ability of different root-end filling materials (SuperEBA(®), ProRoot MTA(®), thermoplasticized gutta-percha + AH-Plus(®), thermoplasticized RealSeal(®)) by means of microbial indicators. Fifty human single-rooted teeth were employed, which were shaped up to size 50, retro-prepared with ultrasonic tips, and assigned to 4 groups, retro-filled with each material or used as controls. A platform split into two halves was employed: an upper chamber, into which the microbial suspension containing the biological indicators was introduced (E. faecalis + S. aureus + P. aeruginosa + B. subtilis + C. albicans), and a lower chamber containing brain heart infusion culture medium, in which 3 mm of the apical region of the teeth were kept immersed. Readings were taken daily for 60 days, using the turbidity of the culture medium as indicative of microbial contamination. Statistical analyses were carried out at the 5% level of significance. The results showed microbial leakage in at least some specimens in all of the groups. RealSeal(®) showed statistically significantly more microbial leakage compared with ProRoot(®) MTA and SuperEBA(®). No significant differences were observed between ProRoot(®) MTA and SuperEBA(®). The gutta-percha + AH Plus results showed no statistically significant differences when compared with the other groups. All the tested materials showed microbial leakage. Root-end fillings with SuperEBA or MTA had the lowest bacterial filtration, and RealSeal showed the highest bacterial filtration.

  14. Clinical evaluation of subepithelial connective tissue graft and guided tissue regeneration for treatment of Miller’s class 1 gingival recession (comparative, split mouth, six months study)

    PubMed Central

    Bhavsar, Neeta-V.; Dulani, Kirti; Trivedi, Rahul

    2014-01-01

    Objectives: The present study aims to clinically compare and evaluate subepithelial connective tissue graft and GTR-based root coverage in the treatment of Miller’s Class I gingival recession. Study Design: 30 patients with at least one pair of Miller’s Class I gingival recessions were treated either with subepithelial connective tissue graft (Group A) or guided tissue regeneration (Group B). Clinical parameters monitored included recession depth (RD), width of keratinized gingiva (KG), probing depth (PD), clinical attachment level (CAL), attached gingiva (AG), residual probing depth (RPD), and percentage of root coverage (%RC). Measurements were taken at baseline, three months, and six months. A standard surgical procedure was used for both Group A and Group B. Data were recorded and statistical analysis was done for both intergroup and intragroup comparisons. Results: At the end of six months, the %RC obtained was 84.47% (Group A) and 81.67% (Group B). Both treatments resulted in statistically significant improvement in clinical parameters. When compared, no statistically significant difference was found between the groups except in RPD, which was significantly greater in Group A. Conclusions: The GTR technique has advantages over subepithelial connective tissue graft for shallow Miller’s Class I defects, and this procedure can be used to avoid patient discomfort and reduce treatment time. Key words:Collagen membrane, comparative split mouth study, gingival recession, subepithelial connective tissue graft, guided tissue regeneration (GTR). PMID:25136420

  15. Comparison of the Cellient(™) automated cell block system and agar cell block method.

    PubMed

    Kruger, A M; Stevens, M W; Kerley, K J; Carter, C D

    2014-12-01

    To compare the Cellient(TM) automated cell block system with the agar cell block method in terms of quantity and quality of diagnostic material and morphological, histochemical and immunocytochemical features. Cell blocks were prepared from 100 effusion samples using the agar method and Cellient system, and routinely sectioned and stained for haematoxylin and eosin and periodic acid-Schiff with diastase (PASD). A preliminary immunocytochemical study was performed on selected cases (27/100 cases). Sections were evaluated using a three-point grading system to compare a set of morphological parameters. Statistical analysis was performed using Fisher's exact test. Parameters assessing cellularity, presence of single cells and definition of nuclear membrane, nucleoli, chromatin and cytoplasm showed a statistically significant improvement on Cellient cell blocks compared with agar cell blocks (P < 0.05). No significant difference was seen for definition of cell groups, PASD staining or the intensity or clarity of immunocytochemical staining. A discrepant immunocytochemistry (ICC) result was seen in 21% (13/63) of immunostains. The Cellient technique is comparable with the agar method, with statistically significant results achieved for important morphological features. It demonstrates potential as an alternative cell block preparation method which is relevant for the rapid processing of fine needle aspiration samples, malignant effusions and low-cellularity specimens, where optimal cell morphology and architecture are essential. Further investigation is required to optimize immunocytochemical staining using the Cellient method. © 2014 John Wiley & Sons Ltd.

  16. Expectation maximization for hard X-ray count modulation profiles

    NASA Astrophysics Data System (ADS)

    Benvenuto, F.; Schwartz, R.; Piana, M.; Massone, A. M.

    2013-07-01

    Context. This paper is concerned with the image reconstruction problem when the measured data are solar hard X-ray modulation profiles obtained from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) instrument. Aims: Our goal is to demonstrate that a statistical iterative method classically applied to the image deconvolution problem is very effective when utilized to analyze count modulation profiles in solar hard X-ray imaging based on rotating modulation collimators. Methods: The algorithm described in this paper solves the maximum likelihood problem iteratively and encodes a positivity constraint into the iterative optimization scheme. The result is therefore a classical expectation maximization method this time applied not to an image deconvolution problem but to image reconstruction from count modulation profiles. The technical reason that makes our implementation particularly effective in this application is the use of a very reliable stopping rule which is able to regularize the solution providing, at the same time, a very satisfactory Cash-statistic (C-statistic). Results: The method is applied to both reproduce synthetic flaring configurations and reconstruct images from experimental data corresponding to three real events. In this second case, the performance of expectation maximization, when compared to Pixon image reconstruction, shows a comparable accuracy and a notably reduced computational burden; when compared to CLEAN, shows a better fidelity with respect to the measurements with a comparable computational effectiveness. Conclusions: If optimally stopped, expectation maximization represents a very reliable method for image reconstruction in the RHESSI context when count modulation profiles are used as input data.
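
    The iteration is the familiar multiplicative expectation-maximization (Richardson-Lucy) update for Poisson counts. A minimal sketch (Python; the modulation matrix and source are toys, and a fixed iteration count stands in for the paper's C-statistic-based stopping rule):

    import numpy as np

    def em_poisson(A, y, n_iter=200):
        # Maximize the Poisson likelihood of y ~ Poisson(A @ x); the
        # multiplicative update keeps every component of x nonnegative.
        x = np.ones(A.shape[1])
        norm = A.sum(axis=0)                     # A^T 1
        for _ in range(n_iter):
            x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / norm
        return x

    # Toy test: recover a sparse positive source from modulated counts.
    rng = np.random.default_rng(0)
    A = rng.uniform(0.1, 1.0, size=(120, 40))    # stand-in modulation patterns
    x_true = np.zeros(40)
    x_true[[8, 25]] = [50.0, 80.0]
    y = rng.poisson(A @ x_true)
    x_hat = em_poisson(A, y)
    print("true peaks:", sorted(np.argsort(x_true)[-2:]),
          "recovered peaks:", sorted(np.argsort(x_hat)[-2:]))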

  17. Quantifying, displaying and accounting for heterogeneity in the meta-analysis of RCTs using standard and generalised Q statistics

    PubMed Central

    2011-01-01

    Background Clinical researchers have often preferred to use a fixed effects model for the primary interpretation of a meta-analysis. Heterogeneity is usually assessed via the well known Q and I2 statistics, along with the random effects estimate they imply. In recent years, alternative methods for quantifying heterogeneity have been proposed, that are based on a 'generalised' Q statistic. Methods We review 18 IPD meta-analyses of RCTs into treatments for cancer, in order to quantify the amount of heterogeneity present and also to discuss practical methods for explaining heterogeneity. Results Differing results were obtained when the standard Q and I2 statistics were used to test for the presence of heterogeneity. The two meta-analyses with the largest amount of heterogeneity were investigated further, and on inspection the straightforward application of a random effects model was not deemed appropriate. Compared to the standard Q statistic, the generalised Q statistic provided a more accurate platform for estimating the amount of heterogeneity in the 18 meta-analyses. Conclusions Explaining heterogeneity via the pre-specification of trial subgroups, graphical diagnostic tools and sensitivity analyses produced a more desirable outcome than an automatic application of the random effects model. Generalised Q statistic methods for quantifying and adjusting for heterogeneity should be incorporated as standard into statistical software. Software is provided to help achieve this aim. PMID:21473747
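
    For reference, the standard (non-generalised) heterogeneity quantities take only a few lines to compute. A minimal sketch, assuming inverse-variance weights and invented trial effects:

    import numpy as np

    def q_and_i2(effects, variances):
        # Cochran's Q and the I^2 statistic for a fixed-effect meta-analysis;
        # variances are the squared standard errors of the trial effects.
        w = 1.0 / np.asarray(variances)
        theta_fixed = np.sum(w * effects) / np.sum(w)
        Q = np.sum(w * (effects - theta_fixed) ** 2)
        df = len(effects) - 1
        I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0
        return Q, I2

    # Illustrative log hazard ratios and variances from 6 hypothetical trials.
    effects = np.array([-0.22, -0.10, -0.35, 0.05, -0.18, -0.40])
    variances = np.array([0.02, 0.05, 0.04, 0.03, 0.06, 0.05])
    Q, I2 = q_and_i2(effects, variances)
    print(f"Q = {Q:.2f} on {len(effects) - 1} df, I^2 = {I2:.1f}%")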

  18. Validation of survey information on smoking and alcohol consumption against import statistics, Greenland 1993-2010.

    PubMed

    Bjerregaard, Peter; Becker, Ulrik

    2013-01-01

    Questionnaires are widely used to obtain information on health-related behaviour, and they are more often than not the only method that can be used to assess the distribution of behaviour in subgroups of the population. No validation studies of reported consumption of tobacco or alcohol have been published from circumpolar indigenous communities. The purpose of the study is to compare information on the consumption of tobacco and alcohol obtained from 3 population surveys in Greenland with import statistics. Estimates of consumption of cigarettes and alcohol using several different survey instruments in cross-sectional population studies from 1993-1994, 1999-2001 and 2005-2010 were compared with import statistics from the same years. For cigarettes, survey results accounted for virtually the total import. Alcohol consumption was significantly under-reported with reporting completeness ranging from 40% to 51% for different estimates of habitual weekly consumption in the 3 study periods. Including an estimate of binge drinking increased the estimated total consumption to 78% of the import. Compared with import statistics, questionnaire-based population surveys capture the consumption of cigarettes well in Greenland. Consumption of alcohol is under-reported, but asking about binge episodes in addition to the usual intake considerably increased the reported intake in this population and made it more in agreement with import statistics. It is unknown to what extent these findings at the population level can be inferred to population subgroups.

  19. The suitability of using death certificates as a data source for cancer mortality assessment in Turkey

    PubMed Central

    Ulus, Tumer; Yurtseven, Eray; Cavdar, Sabanur; Erginoz, Ethem; Erdogan, M. Sarper

    2012-01-01

    Aim To compare the quality of the 2008 cancer mortality data of the Istanbul Directorate of Cemeteries (IDC) with the 2008 data of the International Agency for Research on Cancer (IARC) and the Turkish Statistical Institute (TUIK), and to discuss the suitability of using this databank for estimations of cancer mortality in the future. Methods We used 2008 and 2010 death records of the IDC and compared them to TUIK and IARC data. Results According to the WHO statistics, in Turkey in 2008 there were 67 255 estimated cancer deaths. As the population of Turkey was 71 517 100, the cancer mortality rate was 9.4 per 10 000. According to the IDC statistics, the cancer mortality rate in Istanbul in 2008 was 5.97 per 10 000. Conclusion IDC estimates were lower than WHO estimates, probably because WHO bases its estimates on a sample group and because of the restrictions of the IDC data collection method. Death certificates could be a reliable and accurate data source for mortality statistics if the problems of data collection are solved. PMID:23100210

  20. Statistical analysis for validating ACO-KNN algorithm as feature selection in sentiment analysis

    NASA Astrophysics Data System (ADS)

    Ahmad, Siti Rohaidah; Yusop, Nurhafizah Moziyana Mohd; Bakar, Azuraliza Abu; Yaakub, Mohd Ridzwan

    2017-10-01

    This research paper aims to propose a hybrid of ant colony optimization (ACO) and k-nearest neighbor (KNN) algorithms as a feature selection method for choosing relevant features from customer review datasets. Information gain (IG), genetic algorithm (GA), and rough set attribute reduction (RSAR) were used as baseline algorithms in a performance comparison with the proposed algorithm. This paper also discusses the significance test, which was used to evaluate the performance differences between the ACO-KNN, IG-GA, and IG-RSAR algorithms. This study evaluated the performance of the ACO-KNN algorithm using precision, recall, and F-score, which were validated using parametric statistical significance tests. The evaluation process has statistically proven that the ACO-KNN algorithm performs significantly better than the baseline algorithms. In addition, the experimental results have proven that ACO-KNN can be used as a feature selection technique in sentiment analysis to obtain a quality, optimal feature subset that can represent the actual data in customer review data.

  1. Comparative Financial Statistics for Public Two-Year Colleges: FY 1992 National Sample.

    ERIC Educational Resources Information Center

    Dickmeyer, Nathan; Cirino, Anna Marie

    This report, the 15th in an annual series, provides comparative information derived from a national sample of 544 public two-year colleges, highlighting financial statistics for fiscal year 1991-92. The report offers space for colleges to compare their institutional statistics with data provided on national sample medians; quartile data for the…

  2. Comparative Financial Statistics for Public Two-Year Colleges: FY 1991 Peer Groups Sample.

    ERIC Educational Resources Information Center

    Dickmeyer, Nathan; Cirino, Anna Marie

    Comparative financial information, derived from two national surveys of 503 public two-year colleges, is presented in this report for fiscal year (FY) 1990-91. The report includes statistics for the national sample and six peer groups, space for colleges to compare their institutional statistics with national and peer groups, and tables, bar…

  3. Evidence for a Global Sampling Process in Extraction of Summary Statistics of Item Sizes in a Set.

    PubMed

    Tokita, Midori; Ueda, Sachiyo; Ishiguchi, Akira

    2016-01-01

    Several studies have shown that our visual system may construct a "summary statistical representation" over groups of visual objects. Although there is a general understanding that human observers can accurately represent sets of a variety of features, many questions on how summary statistics, such as an average, are computed remain unanswered. This study investigated sampling properties of visual information used by human observers to extract two types of summary statistics of item sets, average and variance. We presented three models of ideal observers to extract the summary statistics: a global sampling model without sampling noise, global sampling model with sampling noise, and limited sampling model. We compared the performance of an ideal observer of each model with that of human observers using statistical efficiency analysis. Results suggest that summary statistics of items in a set may be computed without representing individual items, which makes it possible to discard the limited sampling account. Moreover, the extraction of summary statistics may not necessarily require the representation of individual objects with focused attention when the sets of items are larger than 4.
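
    The model comparison can be caricatured in a few lines. The sketch below (Python; the set size, noise levels, and mean-discrimination task are illustrative assumptions, not the study's psychophysics) contrasts a global-sampling-with-noise observer against a limited-sampling observer:

    import numpy as np

    rng = np.random.default_rng(0)

    def accuracy(observer, n_trials=5000, set_size=8, delta=0.05):
        # Two-interval task: which of two 8-item sets has the larger mean size?
        correct = 0
        for _ in range(n_trials):
            a = rng.normal(1.0, 0.15, set_size)
            b = rng.normal(1.0 + delta, 0.15, set_size)
            correct += observer(a) < observer(b)
        return correct / n_trials

    global_noisy = lambda items: items.mean() + rng.normal(0, 0.02)        # all items, late noise
    limited_k2 = lambda items: rng.choice(items, 2, replace=False).mean()  # only 2 items

    print("global sampling + noise:", accuracy(global_noisy))
    print("limited sampling (k=2): ", accuracy(limited_k2))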

  4. A Note on Comparing the Power of Test Statistics at Low Significance Levels.

    PubMed

    Morris, Nathan; Elston, Robert

    2011-01-01

    It is an obvious fact that the power of a test statistic is dependent upon the significance (alpha) level at which the test is performed. It is perhaps a less obvious fact that the relative performance of two statistics in terms of power is also a function of the alpha level. Through numerous personal discussions, we have noted that even some competent statisticians have the mistaken intuition that relative power comparisons at traditional levels such as α = 0.05 will be roughly similar to relative power comparisons at very low levels, such as the level α = 5 × 10⁻⁸, which is commonly used in genome-wide association studies. In this brief note, we demonstrate that this notion is in fact quite wrong, especially with respect to comparing tests with differing degrees of freedom. In fact, at very low alpha levels the cost of additional degrees of freedom is often comparatively low. Thus we recommend that statisticians exercise caution when interpreting the results of power comparison studies which use alpha levels that will not be used in practice.
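
    The point is easy to verify numerically. A minimal sketch (Python with scipy; the 80% power target and the 1-vs-4 df comparison are illustrative) computes the noncentrality each chi-squared test needs, showing the relative penalty for extra degrees of freedom shrinking as alpha drops:

    from scipy.optimize import brentq
    from scipy.stats import chi2, ncx2

    def required_nc(df, alpha, power=0.8):
        # Noncentrality at which a chi-squared test with `df` degrees of
        # freedom attains `power` at significance level `alpha`.
        crit = chi2.ppf(1 - alpha, df)
        return brentq(lambda nc: ncx2.sf(crit, df, nc) - power, 1e-6, 500)

    for alpha in (0.05, 5e-8):
        nc1, nc4 = required_nc(1, alpha), required_nc(4, alpha)
        print(f"alpha={alpha:g}: nc(1 df)={nc1:.1f}, nc(4 df)={nc4:.1f}, "
              f"ratio={nc4 / nc1:.2f}")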

  5. To compare the gingival melanin repigmentation after diode laser application and surgical removal.

    PubMed

    Mahajan, Gaurav; Kaur, Harjit; Jain, Sanjeev; Kaur, Navnit; Sehgal, Navneet Kaur; Gautam, Aditi

    2017-01-01

    The aim of the present study is to compare gingival melanin repigmentation after diode laser application and surgical removal by scraping with a Kirkland knife. This was a randomized split-mouth study in which 10 patients presenting with unattractive, diffuse, dark brown to black gingival discoloration on the facial aspect of the maxillary gingiva were treated by diode laser application and surgical removal and followed up at 3-, 6-, and 9-month intervals. The results showed a statistically significant difference in repigmentation between the groups at 3 months (P = 0.040), but the difference was not statistically significant at 6 months (P = 0.118) and 9 months (P = 0.146). All surgically treated sites showed repigmentation of the gingiva, whereas two laser-treated individuals showed no repigmentation even at the end of the 9-month observation period. The incidence of repigmentation was slightly lower in laser-treated sites as compared to surgical depigmentation, although the difference was statistically significant only up to 3 months.

  6. Effect of Different Ceramic Crown Preparations on Tooth Structure Loss: An In Vitro Study

    NASA Astrophysics Data System (ADS)

    Ebrahimpour, Ashkan

    Objective: To quantify and compare the amount of tooth-structure reduction following full-coverage preparations for crowns of porcelain-fused-to-metal, lithium disilicate glass-ceramic, and yttria-stabilized tetragonal zirconia polycrystalline materials for three tooth morphologies. Methods: Groups of resin teeth of different morphologies were individually weighed to high precision, then prepared following the preparation guidelines. The teeth were re-weighed after preparation and the amount of structural reduction was calculated. Statistical analyses were performed to determine whether there was a significant difference among the groups. Results: The amount of tooth reduction for zirconia crown preparations was the lowest and statistically different compared with the other two materials. No statistically significant difference was found between the amounts of reduction for porcelain-fused-to-metal and lithium disilicate glass-ceramic crowns. Conclusion: Within the limitations of this study, more tooth structure can be saved when utilizing zirconia full-coverage restorations compared with lithium disilicate glass-ceramic and porcelain-fused-to-metal crowns in maxillary central incisors, first premolars and first molars.

  7. Evaluation of Three Different Processing Techniques in the Fabrication of Complete Dentures.

    PubMed

    Chintalacheruvu, Vamsi Krishna; Balraj, Rajasekaran Uttukuli; Putchala, Lavanya Sireesha; Pachalla, Sreelekha

    2017-06-01

    The objective of the present study is to compare the effectiveness of three different processing techniques and to determine their accuracy through the number of occlusal interferences and the increase in vertical dimension after denture processing. A cross-sectional study was conducted on a sample of 18 patients indicated for complete denture fabrication, divided into three subgroups. Three processing techniques, compression molding and injection molding using prepolymerized resin and unpolymerized resin, were used to fabricate dentures for each of the groups. After processing, laboratory-remounted dentures were evaluated for the number of occlusal interferences in centric and eccentric relations and the change in vertical dimension through vertical pin rise in the articulator. Data were analyzed with one-way ANOVA using SPSS software version 19.0 (IBM); results with significant variations were then subjected to a post hoc test. The number of occlusal interferences with the compression molding technique was higher in both centric and eccentric positions as compared to the two injection molding techniques, with statistical significance in centric, protrusive, right lateral nonworking, and left lateral working positions (P < 0.05). Mean vertical pin rise (0.52 mm) was greater with the compression molding technique as compared to the injection molding techniques, which is statistically significant (P < 0.001). Within the limitations of this study, injection molding techniques exhibited fewer processing errors as compared to the compression molding technique, with statistical significance. There was no statistically significant difference in processing errors reported between the two injection molding systems.

  8. Hitting Is Contagious in Baseball: Evidence from Long Hitting Streaks

    PubMed Central

    Bock, Joel R.; Maewal, Akhilesh; Gough, David A.

    2012-01-01

    Data analysis is used to test the hypothesis that “hitting is contagious”. A statistical model is described to study the effect of a hot hitter upon his teammates’ batting during a consecutive-game hitting streak. Box score data for entire seasons comprising qualifying streaks were compiled. Treatment and control sample groups were constructed from core lineups of players on the streaking batter’s team. The percentile-method bootstrap was used to calculate confidence intervals for statistics representing differences in the mean distributions of two batting statistics between groups. Batters in the treatment group (hot streak active) showed statistically significant improvements in hitting performance compared against the control: both mean batting average and the batting heat index introduced in the paper were higher during hot streaks. For each performance statistic, the null hypothesis was rejected at the chosen significance level. We conclude that the evidence suggests the potential existence of a “statistical contagion effect”. Psychological mechanisms essential to the empirical results are suggested, as several studies from the scientific literature lend credence to contagious phenomena in sports. Causal inference from these results is difficult, but we suggest and discuss several latent variables that may contribute to the observed results, and offer possible directions for future research. PMID:23251507

  9. Randomized clinical trial of two resin-modified glass ionomer materials: 1-year results.

    PubMed

    Perdigão, J; Dutra-Corrêa, M; Saraceni, S H C; Ciaramicoli, M T; Kiyan, V H

    2012-01-01

    With institutional review board approval, 33 patients who needed restoration of noncarious cervical lesions (NCCL) were enrolled in this study. A total of 92 NCCL were selected and randomly assigned to three groups: (1) Ambar (FGM), a two-step etch-and-rinse adhesive (control), combined with the nanofilled composite resin Filtek Supreme Plus (FSP; 3M ESPE); (2) Fuji II LC (GC America), a traditional resin-modified glass ionomer (RMGIC) restorative material; (3) Ketac Nano (3M ESPE), a nanofilled RMGIC restorative material. Restorations were evaluated at six months and one year using modified United States Public Health Service parameters. At six months after initial placement, 84 restorations (a 91.3% recall rate) were evaluated. At one year, 78 restorations (an 84.8% recall rate) were available for evaluation. The six-month and one-year overall retention rates were 93.1% and 92.6%, respectively, for Ambar/FSP; 100% and 100%, respectively, for Fuji II LC; and 100% and 100%, respectively, for Ketac Nano, with no statistical difference between any pair of groups at each recall. Sensitivity to air decreased for all three adhesive materials from the preoperative to the postoperative stage, but the difference was not statistically significant. For Ambar/FSP, there were no statistical differences for any of the parameters from baseline to six months or from baseline to one year. For Fuji II LC, surface texture worsened significantly from baseline to six months and from baseline to one year. For Ketac Nano, enamel marginal staining increased significantly from baseline to one year and from six months to one year. Marginal adaptation was statistically worse at one year compared with baseline only for Ketac Nano. When parameters were compared across materials at each recall, Ketac Nano resulted in significantly worse color match than either of the other two materials at every evaluation period. At one year, Ketac Nano resulted in significantly worse marginal adaptation than the other two materials and worse marginal staining than Fuji II LC. Surface texture was statistically worse for Fuji II LC compared with the other two materials at all evaluation periods. The one-year retention rate was statistically similar for the three adhesive materials. Nevertheless, enamel marginal deficiencies and color mismatch were more prevalent for Ketac Nano, and the surface texture of Fuji II LC restorations deteriorated quickly.

  10. Neuroimaging study of sex differences in the neuropathology of cocaine abuse.

    PubMed

    Li, Chiang-shan Ray; Kemp, Kathleen; Milivojevic, Verica; Sinha, Rajita

    2005-09-01

    Female and male substance abusers differ in their disease patterns and clinical outcomes. An important question in addiction neuroscience thus concerns the neural substrates underlying these sex differences. This article aims to examine what is known of the neural mechanisms involved in the sex differences between substance abusers. We reviewed neuroimaging studies that addressed sex differences in cerebral perfusion deficits after chronic cocaine use and in regional brain activation during pharmacologic challenge and cue-induced craving. We also present results from a preliminary study in which cocaine-dependent men and women participated in script-guided imagery of stress- and drug cue-related situations while blood oxygenation level-dependent signals of their brain were acquired in a 1.5T scanner. Spatial pre-processing and statistical analysis of brain images were performed. Regional brain activation was compared between stress and drug cue trials in men versus women. The results of our study showed greater activation in the left uncus and right claustrum (both, statistical threshold of P = 0.01, uncorrected; extent = 10 voxels) in men (n = 5) during drug cue trials compared with stress trials. No brain regions showed greater activation during stress trials compared with drug cue trials. In contrast, women (n = 6) showed greater activation in the right medial and superior frontal gyri during stress trials compared with drug cue trials at the same statistical threshold. No brain regions showed more activation during drug cue trials than during stress trials. The studies reviewed underscore the need to consider sex-related factors in examining the neuropathology of cocaine addiction. Our preliminary results also suggest important sex differences in the effect of stress- and drug cue-associated brain activation in individuals with cocaine use disorder.

  11. pROC: an open-source package for R and S+ to analyze and compare ROC curves.

    PubMed

    Robin, Xavier; Turck, Natacha; Hainard, Alexandre; Tiberti, Natalia; Lisacek, Frédérique; Sanchez, Jean-Charles; Müller, Markus

    2011-03-17

    Receiver operating characteristic (ROC) curves are useful tools to evaluate classifiers in biomedical and bioinformatics applications. However, conclusions are often reached through inconsistent use or insufficient statistical analysis. To support researchers in their ROC curve analyses we developed pROC, a package for R and S+ that contains a set of tools for displaying, analyzing, smoothing and comparing ROC curves in a user-friendly, object-oriented and flexible interface. With data previously imported into the R or S+ environment, the pROC package builds ROC curves and includes functions for computing confidence intervals, statistical tests for comparing total or partial area under the curve or the operating points of different classifiers, and methods for smoothing ROC curves. Intermediary and final results are visualised in user-friendly interfaces. A case study based on published clinical and biomarker data shows how to perform a typical ROC analysis with pROC. pROC is a package for R and S+ specifically dedicated to ROC analysis. It proposes multiple statistical tests to compare ROC curves, and in particular partial areas under the curve, allowing proper ROC interpretation. pROC is available in two versions: in the R programming language or with a graphical user interface in the S+ statistical software. It is accessible at http://expasy.org/tools/pROC/ under the GNU General Public License. It is also distributed through the CRAN and CSAN public repositories, facilitating its installation.
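
    pROC itself is an R/S+ package, so the snippet below is only a rough Python analogue of one of its comparison tools: a bootstrap test for the difference between two classifiers' areas under the curve. Labels and scores are synthetic.

    ```python
    # Bootstrap comparison of two AUCs, loosely analogous to one of the
    # ROC comparison tests offered by pROC.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(2)
    y = rng.integers(0, 2, 200)                # binary class labels
    score_a = y + rng.normal(0, 0.8, 200)      # stronger classifier
    score_b = y + rng.normal(0, 1.2, 200)      # weaker classifier

    obs_diff = roc_auc_score(y, score_a) - roc_auc_score(y, score_b)
    diffs = []
    for _ in range(2000):
        idx = rng.integers(0, len(y), len(y))  # resample cases
        if len(np.unique(y[idx])) < 2:         # need both classes present
            continue
        diffs.append(roc_auc_score(y[idx], score_a[idx])
                     - roc_auc_score(y[idx], score_b[idx]))
    lo, hi = np.percentile(diffs, [2.5, 97.5])
    print(f"AUC diff {obs_diff:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
    ```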

  12. Assessing socioeconomic vulnerability to dengue fever in Cali, Colombia: statistical vs expert-based modeling

    PubMed Central

    2013-01-01

    Background As a result of changes in climatic conditions and greater resistance to insecticides, many regions across the globe, including Colombia, have been facing a resurgence of vector-borne diseases, and dengue fever in particular. Timely information on both (1) the spatial distribution of the disease, and (2) prevailing vulnerabilities of the population are needed to adequately plan targeted preventive intervention. We propose a methodology for the spatial assessment of current socioeconomic vulnerabilities to dengue fever in Cali, a tropical urban environment of Colombia. Methods Based on a set of socioeconomic and demographic indicators derived from census data and ancillary geospatial datasets, we develop a spatial approach for both expert-based and purely statistical-based modeling of current vulnerability levels across 340 neighborhoods of the city using a Geographic Information System (GIS). The results of both approaches are comparatively evaluated by means of spatial statistics. A web-based approach is proposed to facilitate the visualization and the dissemination of the output vulnerability index to the community. Results The statistical and the expert-based modeling approaches exhibit high concordance, both globally and spatially. The expert-based approach indicates a slightly higher vulnerability mean (0.53) and vulnerability median (0.56) across all neighborhoods, compared to the purely statistical approach (mean = 0.48; median = 0.49). Both approaches reveal that high values of vulnerability tend to cluster in the eastern, north-eastern, and western part of the city. These are poor neighborhoods with high percentages of young (i.e., < 15 years) and illiterate residents, as well as a high proportion of individuals who are either unemployed or doing housework. Conclusions Both modeling approaches reveal similar outputs, indicating that in the absence of local expertise, statistical approaches could be used, with caution. By decomposing identified vulnerability “hotspots” into their underlying factors, our approach provides valuable information on both (1) the location of neighborhoods, and (2) vulnerability factors that should be given priority in the context of targeted intervention strategies. The results support decision makers in allocating resources in a manner that may reduce existing susceptibilities and strengthen resilience, and thus help to reduce the burden of vector-borne diseases. PMID:23945265

  13. Comparability of Computer- and Paper-Administered Multiple-Choice Tests for K-12 Populations: A Synthesis

    ERIC Educational Resources Information Center

    Kingston, Neal M.

    2009-01-01

    There have been many studies of the comparability of computer-administered and paper-administered tests. Not surprisingly (given the variety of measurement and statistical sampling issues that can affect any one study) the results of such studies have not always been consistent. Moreover, the quality of computer-based test administration systems…

  14. A Study Comparing Fifth Grade Student Achievement in Mathematics in Departmentalized and Non-Departmentalized Settings

    ERIC Educational Resources Information Center

    Nelson, Karen Ann

    2014-01-01

    The purpose of this quantitative, causal-comparative study was to examine the application of the teaching and learning theory of social constructivism in order to determine if mathematics instruction provided in a departmentalized classroom setting at the fifth grade level resulted in a statistically significant difference in student achievement…

  15. Pisces did not have increased heart failure: data-driven comparisons of binary proportions between levels of a categorical variable can result in incorrect statistical significance levels.

    PubMed

    Austin, Peter C; Goldwasser, Meredith A

    2008-03-01

    We examined the impact on statistical inference when a chi-square test is used to compare the proportion of successes in the level of a categorical variable that has the highest observed proportion of successes with the proportion of successes in all other levels of the categorical variable combined. Monte Carlo simulations and a case study examining the association between astrological sign and hospitalization for heart failure were used. A standard chi-square test results in an inflation of the type I error rate, with the type I error rate increasing as the number of levels of the categorical variable increases. Using a standard chi-square test, the hospitalization rate for Pisces was statistically significantly different from that of the other 11 astrological signs combined (P=0.026). After accounting for the fact that the selection of Pisces was based on it having the highest observed proportion of heart failure hospitalizations, subjects born under the sign of Pisces no longer had a significantly higher rate of heart failure hospitalization compared to the other residents of Ontario (P=0.152). Post hoc comparisons of the proportions of successes across different levels of a categorical variable can result in incorrect inferences.
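
    A small Monte Carlo sketch in Python makes the abstract's point concrete: when the level with the highest observed event rate is selected post hoc and tested against the rest with a standard chi-square test, the type I error rate far exceeds the nominal 5%. The group count, sample sizes, and event rate below are arbitrary choices.

    ```python
    # Simulating post hoc selection of the "worst" category followed by
    # a standard chi-square test against all other categories combined.
    import numpy as np
    from scipy.stats import chi2_contingency

    rng = np.random.default_rng(3)
    k, n, p = 12, 1000, 0.05          # 12 "signs", same true rate everywhere
    n_sim, rejections = 2000, 0
    for _ in range(n_sim):
        events = rng.binomial(n, p, size=k)
        top = int(np.argmax(events))  # data-driven choice of the level
        rest = events.sum() - events[top]
        table = [[events[top], n - events[top]],
                 [rest, (k - 1) * n - rest]]
        _, pval, _, _ = chi2_contingency(table)
        rejections += pval < 0.05
    print(f"empirical type I error: {rejections / n_sim:.3f} (nominal 0.05)")
    ```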

  16. A comparative evaluation of microleakage of three different newer direct composite resins using a self etching primer in class V cavities: An in vitro study

    PubMed Central

    Hegde, Mithra N; Vyapaka, Pallavi; Shetty, Shishir

    2009-01-01

    Aims/Objectives: The aim of this in vitro study was to measure and compare the microleakage of three newer direct composite resins used with a self-etch adhesive bonding system in class V cavities, by the fluorescent dye penetration technique. Materials and Methods: Class V cavities were prepared on 45 human maxillary premolar teeth. On all specimens, one coat of G-Bond (GC Japan) was applied and light cured. The teeth were then equally divided into 3 groups of 15 samples each. Filtek Z350 (3M ESPE), Ceram X duo (Dentsply Asia) and Synergy D6 (Coltene/Whaledent) resin composites were placed on the samples of Groups I, II and III, respectively, in increments and light cured. After polishing the restorations, the specimens were suspended in Rhodamine 6G fluorescent dye for 48 h. The teeth were then sectioned longitudinally and observed for the extent of microleakage under a fluorescence microscope. Statistical Analysis Used: The results were subjected to statistical analysis using the Kruskal-Wallis and Mann-Whitney U tests. Results: The results showed no statistically significant difference among the three groups tested. Conclusions: None of the materials tested was able to completely eliminate microleakage in class V cavities. PMID:20543926
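
    The nonparametric analysis reported above can be sketched in Python with SciPy. The ordinal microleakage scores below are invented for illustration, and the pairwise follow-up runs only if the omnibus test rejects.

    ```python
    # Kruskal-Wallis omnibus test across three groups, with pairwise
    # Mann-Whitney U follow-ups on a significant result.
    from scipy.stats import kruskal, mannwhitneyu

    group1 = [1, 2, 1, 0, 2, 1, 3, 2, 1, 2, 1, 0, 2, 1, 2]  # Filtek Z350
    group2 = [2, 1, 2, 3, 1, 2, 2, 1, 3, 2, 1, 2, 2, 3, 1]  # Ceram X duo
    group3 = [1, 2, 2, 1, 3, 2, 1, 2, 2, 1, 2, 3, 1, 2, 2]  # Synergy D6

    h, p = kruskal(group1, group2, group3)
    print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.3f}")
    if p < 0.05:  # pairwise follow-up only if the omnibus test rejects
        pairs = {"1 vs 2": (group1, group2),
                 "1 vs 3": (group1, group3),
                 "2 vs 3": (group2, group3)}
        for name, (a, b) in pairs.items():
            u, pu = mannwhitneyu(a, b)
            print(f"Mann-Whitney {name}: U = {u:.1f}, p = {pu:.3f}")
    ```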

  17. Women victims of intentional homicide in Italy: New insights comparing Italian trends to German and U.S. trends, 2008-2014.

    PubMed

    Terranova, Claudio; Zen, Margherita

    2018-01-01

    National statistics on female homicide could be a useful tool to evaluate the phenomenon and to plan adequate strategies to prevent and reduce this crime. The aim of the study is to contribute to the analysis of intentional female homicides in Italy by comparing Italian trends to German and United States trends from 2008 to 2014. This is a population study based on data deriving primarily from national and European statistical institutes, from the U.S. Federal Bureau of Investigation's Uniform Crime Reporting program and from the National Center for Health Statistics. Data were analyzed in relation to trends and age by the Chi-square test, Student's t-test and linear regression. Results show that female homicides, unlike male homicides, remained stable in the three countries. Regression analysis showed a higher risk for female homicide in all age groups in the U.S. Middle-aged women are at higher risk, and the majority of murdered women are killed by people they know. These results confirm previous findings and suggest the need, in Italy as well, to focus on preventive strategies that reduce the precipitating factors linked to violence in the course of a relationship or within the family. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  18. Multivariate statistical model for 3D image segmentation with application to medical images.

    PubMed

    John, Nigel M; Kabuka, Mansur R; Ibrahim, Mohamed O

    2003-12-01

    In this article we describe a statistical model that was developed to segment brain magnetic resonance images. The statistical segmentation algorithm was applied after a pre-processing stage involving the use of a 3D anisotropic filter along with histogram equalization techniques. The segmentation algorithm makes use of prior knowledge and a probability-based multivariate model designed to semi-automate the process of segmentation. The algorithm was applied to images obtained from the Center for Morphometric Analysis at Massachusetts General Hospital as part of the Internet Brain Segmentation Repository (IBSR). The developed algorithm showed improved accuracy over the k-means, adaptive maximum a posteriori (MAP), biased MAP, and other algorithms. Experimental results showing the segmentation and the results of comparisons with other algorithms are provided. Results are based on an overlap criterion against expertly segmented images from the IBSR. The algorithm produced average results of approximately 80% overlap with the expertly segmented images (compared with 85% for manual segmentation and 55% for other algorithms).
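
    The abstract reports results against an overlap criterion without naming the measure; the following Python sketch uses the common Dice coefficient between binary masks as one plausible stand-in, applied to synthetic volumes.

    ```python
    # Dice overlap between a reference mask and an algorithm's output.
    import numpy as np

    def dice(mask_a, mask_b):
        """Dice overlap: 2*|A and B| / (|A| + |B|), in [0, 1]."""
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    rng = np.random.default_rng(4)
    expert = rng.random((64, 64, 64)) > 0.7             # stand-in expert mask
    algo = expert ^ (rng.random(expert.shape) > 0.95)   # output, ~5% flips
    print(f"Dice overlap: {dice(expert, algo):.3f}")
    ```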

  19. Effect of higher order nonlinearity, directionality and finite water depth on wave statistics: Comparison of field data and numerical simulations

    NASA Astrophysics Data System (ADS)

    Fernández, Leandro; Monbaliu, Jaak; Onorato, Miguel; Toffoli, Alessandro

    2014-05-01

    This research is focused on the study of the nonlinear evolution of irregular wave fields in water of arbitrary depth by comparing field measurements and numerical simulations. It is now well accepted that modulational instability, known as one of the main mechanisms for the formation of rogue waves, induces strong departures from Gaussian statistics. However, whereas non-Gaussian properties are remarkable when wave fields follow one direction of propagation over infinite water depth, wave statistics only weakly deviate from Gaussianity when waves spread over a range of different directions. Over finite water depth, furthermore, wave instability attenuates overall and eventually vanishes for relative water depths as low as kh = 1.36 (where k is the wavenumber of the dominant waves and h the water depth). Recent experimental results, nonetheless, seem to indicate that oblique perturbations are capable of triggering and sustaining modulational instability even if kh < 1.36. In this regard, the aim of this research is to understand whether the combined effect of directionality and finite water depth has a significant effect on wave statistics, and particularly on the occurrence of extremes. For this purpose, numerical experiments have been performed by solving the Euler equations of motion with the Higher Order Spectral Method (HOSM) and compared with data from short-crested wave fields for different sea states observed at Lake George (Australia). A comparative analysis of the statistical properties (i.e., the density function of the surface elevation and its statistical moments, skewness and kurtosis) between simulations and in-situ data provides a direct comparison between the numerical developments and real observations under field conditions.
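
    The comparison statistics named above (skewness and kurtosis of the surface elevation) are straightforward to compute; the Python sketch below does so for a synthetic elevation record with an artificial quadratic nonlinearity, not for the Lake George data.

    ```python
    # Skewness and excess kurtosis of a synthetic surface elevation.
    import numpy as np
    from scipy.stats import skew, kurtosis

    rng = np.random.default_rng(5)
    eta = rng.normal(0.0, 1.0, 100_000)  # Gaussian stand-in elevation
    eta = eta + 0.1 * (eta**2 - 1.0)     # mimic weak second-order nonlinearity

    print(f"skewness: {skew(eta):.3f}")             # 0 for a Gaussian sea
    print(f"excess kurtosis: {kurtosis(eta):.3f}")  # > 0 means heavier tails
    ```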

  20. Calculation of precise firing statistics in a neural network model

    NASA Astrophysics Data System (ADS)

    Cho, Myoung Won

    2017-08-01

    A precise prediction of neural firing dynamics is requisite to understand the function of, and the learning process in, a biological neural network, which works depending on exact spike timings. Fundamentally, the prediction of firing statistics is a delicate many-body problem because the firing probability of a neuron at a given time is determined by a summation over all effects from past firing states. A neural network model with the Feynman path integral formulation was recently introduced. In this paper, we present several methods to calculate firing statistics in that model. We apply the methods to some cases and compare the theoretical predictions with simulation results.

  1. Estimating procedure times for surgeries by determining location parameters for the lognormal model.

    PubMed

    Spangler, William E; Strum, David P; Vargas, Luis G; May, Jerrold H

    2004-05-01

    We present an empirical study of methods for estimating the location parameter of the lognormal distribution. Our results identify the best order statistic to use, and indicate that using the best order statistic instead of the median may lead to less frequent incorrect rejection of the lognormal model, more accurate critical value estimates, and higher goodness-of-fit. Using simulation data, we constructed and compared two models for identifying the best order statistic, one based on conventional nonlinear regression and the other using a data mining/machine learning technique. Better surgical procedure time estimates may lead to improved surgical operations.
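
    The paper's order statistic estimators are not available in standard libraries, so as context the following Python sketch fits a three-parameter (shape, location, scale) lognormal to synthetic procedure times by maximum likelihood; the location parameter plays the role discussed above.

    ```python
    # Fitting a three-parameter lognormal to synthetic procedure times.
    import numpy as np
    from scipy.stats import lognorm

    rng = np.random.default_rng(6)
    true_loc = 15.0  # hypothetical minimum setup time, minutes
    times = true_loc + lognorm.rvs(s=0.5, scale=45.0, size=400,
                                   random_state=rng)

    shape, loc, scale = lognorm.fit(times)  # MLE for all three parameters
    print(f"estimated location: {loc:.1f} min (true {true_loc})")
    print(f"shape = {shape:.3f}, scale = {scale:.1f}")
    ```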

  2. A Comparative Analysis of Results Using Three Leadership Style Measurement Instruments.

    DTIC Science & Technology

    The objective of this research was to determine if there was conceptual similarity among leadership styles as measured by the results of the Ohio...hybrid statistic was developed here to measure the degree of association among the leadership styles recorded by each individual respondent. The

  3. The Response of Higher Education to Women's Inequality.

    ERIC Educational Resources Information Center

    Rae, Judith

    The status of academic women is compared with that of men to determine whether disciminating practices and resulting inequality for women continue to exist. Current scientific periodicals, monographs, and books were searched, and the most recent statistics are presented. Results are discussed in terms of admissions, enrollment and degrees earned,…

  4. Statistical characterization of portal images and noise from portal imaging systems.

    PubMed

    González-López, Antonio; Morales-Sánchez, Juan; Verdú-Monedero, Rafael; Larrey-Ruiz, Jorge

    2013-06-01

    In this paper, we consider the statistical characteristics of the so-called portal images, which are acquired prior to radiotherapy treatment, as well as the noise present in portal imaging systems, in order to analyze whether the well-known noise and image features of other image modalities, such as natural images, can also be found in the portal imaging modality. The study is carried out in the spatial image domain, in the Fourier domain, and finally in the wavelet domain. The probability density of the noise in the spatial image domain, the power spectral densities of the image and noise, and the marginal, joint, and conditional statistical distributions of the wavelet coefficients are estimated. Moreover, the statistical dependencies between noise and signal are investigated. The obtained results are compared with practical and useful references, such as the characteristics of natural images and white noise. Finally, we discuss the implications of these results for several noise reduction methods that operate in the wavelet domain.

  5. Statistical Analysis of CFD Solutions from the Fourth AIAA Drag Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Morrison, Joseph H.

    2010-01-01

    A graphical framework is used for statistical analysis of the results from an extensive N-version test of a collection of Reynolds-averaged Navier-Stokes computational fluid dynamics codes. The solutions were obtained by code developers and users from the U.S., Europe, Asia, and Russia using a variety of grid systems and turbulence models for the June 2009 4th Drag Prediction Workshop sponsored by the AIAA Applied Aerodynamics Technical Committee. The aerodynamic configuration for this workshop was a new subsonic transport model, the Common Research Model, designed using a modern approach for the wing and included a horizontal tail. The fourth workshop focused on the prediction of both absolute and incremental drag levels for wing-body and wing-body-horizontal tail configurations. This work continues the statistical analysis begun in the earlier workshops and compares the results from the grid convergence study of the most recent workshop with earlier workshops using the statistical framework.

  6. Structure-guided statistical textural distinctiveness for salient region detection in natural images.

    PubMed

    Scharfenberger, Christian; Wong, Alexander; Clausi, David A

    2015-01-01

    We propose a simple yet effective structure-guided statistical textural distinctiveness approach to salient region detection. Our method uses a multilayer approach to analyze the structural and textural characteristics of natural images as important features for salient region detection from a scale point of view. To represent the structural characteristics, we abstract the image using structured image elements and extract rotational-invariant neighborhood-based textural representations to characterize each element by an individual texture pattern. We then learn a set of representative texture atoms for sparse texture modeling and construct a statistical textural distinctiveness matrix to determine the distinctiveness between all representative texture atom pairs in each layer. Finally, we determine saliency maps for each layer based on the occurrence probability of the texture atoms and their respective statistical textural distinctiveness and fuse them to compute a final saliency map. Experimental results using four public data sets and a variety of performance evaluation metrics show that our approach provides promising results when compared with existing salient region detection approaches.

  7. Statistical Design in Isothermal Aging of Polyimide Resins

    NASA Technical Reports Server (NTRS)

    Sutter, James K.; Jobe, Marcus; Crane, Elizabeth A.

    1995-01-01

    Recent developments in research on polyimides for high temperature applications have led to the synthesis of many new polymers. Among the criteria that determine their thermal oxidative stability, isothermal aging is one of the most important. Isothermal aging studies require that many experimental factors be controlled to provide accurate results. In this article we describe a statistical plan that compares the isothermal stability of several polyimide resins while minimizing the variations inherent in high-temperature aging studies.

  8. Statistical analysis of sperm sorting

    NASA Astrophysics Data System (ADS)

    Koh, James; Marcos, Marcos

    2017-11-01

    The success rate of assisted reproduction depends on the proportion of morphologically normal sperm. It is possible to use an external field for manipulation and sorting. Depending on their morphology, the extent of response varies. Due to the wide distribution in sperm morphology even among individuals, the resulting distribution of kinematic behaviour, and consequently the feasibility of sorting, should be analysed statistically. In this theoretical work, Resistive Force Theory and Slender Body Theory will be applied and compared.

  9. Assessing effects of a semi-customized experimental cervical pillow on symptomatic adults with chronic neck pain with and without headache

    PubMed Central

    Erfanian, Parham; Tenzif, Siamak; Guerriero, Rocco C

    2004-01-01

    Objective To determine the effects of a semi-customized experimental cervical pillow on symptomatic adults with chronic neck pain (with and without headache) during a four week study. Design A randomized controlled trial. Sample size Thirty-six adults were recruited for the trial and randomly assigned to experimental or non-experimental groups of 17 and 19 participants respectively. Subjects Adults with chronic biomechanical neck pain who were recruited from the Canadian Memorial Chiropractic College (CMCC) Walk-in Clinic. Outcome measures Subjective findings were assessed using a mail-in self-report daily pain diary and the CMCC Neck Disability Index (NDI). Statistical analysis Using repeated measures analysis of variance, weekly NDI scores and average weekly AM and PM pain scores were compared between the experimental and non-experimental groups throughout the study. Results The experimental group had statistically significantly lower NDI scores (p < 0.05) than the non-experimental group. The average weekly AM pain scores were also significantly lower (p < 0.05) in the experimental group. The PM scores in the experimental group were lower than those of the other group, but not statistically significantly so. Conclusions The study results show that, compared to conventional pillows, this experimental semi-customized cervical pillow was effective in reducing low-level neck pain intensity, especially in the morning following its use, over a 4 week long study. PMID:17549216

  10. Analysis of Lagrangian stretching in turbulent channel flow using a database task-parallel particle tracking approach

    NASA Astrophysics Data System (ADS)

    Meneveau, Charles; Johnson, Perry; Hamilton, Stephen; Burns, Randal

    2016-11-01

    An intrinsic property of turbulent flows is the exponential deformation of fluid elements along Lagrangian paths. The production of enstrophy by vorticity stretching follows from a similar mechanism in the Lagrangian view, though the alignment statistics differ and viscosity prevents unbounded growth. In this paper, the stretching properties of fluid elements and vorticity along Lagrangian paths are studied in a channel flow at Reτ = 1000 and compared with prior, known results from isotropic turbulence. To track Lagrangian paths in a public database containing Direct Numerical Simulation (DNS) results, the task-parallel approach previously employed in the isotropic database is extended to the case of flow in a bounded domain. It is shown that above 100 viscous units from the wall, stretching statistics are equal to their isotropic values, in support of the local isotropy hypothesis. Normalized by dissipation rate, the stretching in the buffer layer and below is less efficient due to less favorable alignment statistics. The Cramér function characterizing cumulative Lagrangian stretching statistics shows that overall the channel flow has about half of the stretching per unit dissipation compared with isotropic turbulence. Supported by a National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1232825, and by National Science Foundation Grants CBET-1507469, ACI-1261715, OCI-1244820 and by JHU IDIES.

  11. Long-term Results of an Analytical Assessment of Student Compounded Preparations

    PubMed Central

    Roark, Angie M.; Anksorus, Heidi N.

    2014-01-01

    Objective. To investigate the long-term (ie, 6-year) impact of a required remake vs an optional remake on student performance in a compounding laboratory course in which students’ compounded preparations were analyzed. Methods. The analysis data for several preparations made by students were compared for differences in the analyzed content of the active pharmaceutical ingredient (API) and in the number of students who successfully compounded the preparation on the first attempt. Results. There was a consistent statistical difference in the API amount or concentration for 4 of the preparations (diphenhydramine, ketoprofen, metoprolol, and progesterone) in each optional remake year compared to the required remake year. As the analysis requirement was continued, the outcome for each preparation approached and/or attained the expected API result. Two preparations required more than 1 year to demonstrate a statistical difference. Conclusion. The analytical assessment resulted in a consistent, long-term improvement in student performance during the 5-year period after the optional remake policy was instituted. Our assumption is that investment in such an assessment would result in similar benefits at other colleges and schools of pharmacy. PMID:26056402

  12. Use of Statistical Analyses in the Ophthalmic Literature

    PubMed Central

    Lisboa, Renato; Meira-Freitas, Daniel; Tatham, Andrew J.; Marvasti, Amir H.; Sharpsten, Lucie; Medeiros, Felipe A.

    2014-01-01

    Purpose To identify the most commonly used statistical analyses in the ophthalmic literature and to determine the likely gain in comprehension of the literature that readers could expect if they were to sequentially add knowledge of more advanced techniques to their statistical repertoire. Design Cross-sectional study Methods All articles published from January 2012 to December 2012 in Ophthalmology, American Journal of Ophthalmology and Archives of Ophthalmology were reviewed. A total of 780 peer-reviewed articles were included. Two reviewers examined each article and assigned categories to each one depending on the type of statistical analyses used. Discrepancies between reviewers were resolved by consensus. Main Outcome Measures Total number and percentage of articles containing each category of statistical analysis were obtained. Additionally we estimated the accumulated number and percentage of articles that a reader would be expected to be able to interpret depending on their statistical repertoire. Results Readers with little or no statistical knowledge would be expected to be able to interpret the statistical methods presented in only 20.8% of articles. In order to understand more than half (51.4%) of the articles published, readers were expected to be familiar with at least 15 different statistical methods. Knowledge of 21 categories of statistical methods was necessary to comprehend 70.9% of articles, while knowledge of more than 29 categories was necessary to comprehend more than 90% of articles. Articles in retina and glaucoma subspecialties showed a tendency for using more complex analysis when compared to cornea. Conclusions Readers of clinical journals in ophthalmology need to have substantial knowledge of statistical methodology to understand the results of published studies in the literature. The frequency of use of complex statistical analyses also indicates that those involved in the editorial peer-review process must have sound statistical knowledge in order to critically appraise articles submitted for publication. The results of this study could provide guidance to direct the statistical learning of clinical ophthalmologists, researchers and educators involved in the design of courses for residents and medical students. PMID:24612977

  13. 40 CFR Appendix IV to Part 264 - Cochran's Approximation to the Behrens-Fisher Students' t-test

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... summary measures to calculate a t-statistic (t*) and a comparison t-statistic (tc). The t* value is compared to the tc value and a conclusion reached as to whether there has been a statistically significant... made in collecting the background data. The t-statistic (tc), against which t* will be compared...

  14. 40 CFR Appendix IV to Part 264 - Cochran's Approximation to the Behrens-Fisher Students' t-test

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... summary measures to calculate a t-statistic (t*) and a comparison t-statistic (tc). The t* value is compared to the tc value and a conclusion reached as to whether there has been a statistically significant... made in collecting the background data. The t-statistic (tc), against which t* will be compared...
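
    The procedure excerpted in these two records computes a sample t-statistic t* and a weighted comparison value tc. The Python sketch below implements a Cochran-type weighted critical value on synthetic well data; it is a plausible reading of the truncated text, not a verbatim transcription of the regulation.

    ```python
    # Cochran's approximation: compare t* against a weighted critical
    # value tc built from the two samples' per-group critical t values.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    background = rng.normal(10.0, 1.0, 8)  # hypothetical background wells
    compliance = rng.normal(11.2, 2.0, 4)  # hypothetical compliance well

    w1 = background.var(ddof=1) / background.size  # s1^2 / n1
    w2 = compliance.var(ddof=1) / compliance.size  # s2^2 / n2
    t_star = (compliance.mean() - background.mean()) / np.sqrt(w1 + w2)

    alpha = 0.05
    t1 = stats.t.ppf(1 - alpha, background.size - 1)
    t2 = stats.t.ppf(1 - alpha, compliance.size - 1)
    t_c = (w1 * t1 + w2 * t2) / (w1 + w2)  # weighted critical value

    print(f"t* = {t_star:.2f}, tc = {t_c:.2f}")
    print("significant increase" if t_star > t_c else "no significant increase")
    ```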

  15. Distinguishing Positive Selection From Neutral Evolution: Boosting the Performance of Summary Statistics

    PubMed Central

    Lin, Kao; Li, Haipeng; Schlötterer, Christian; Futschik, Andreas

    2011-01-01

    Summary statistics are widely used in population genetics, but they suffer from the drawback that no simple sufficient summary statistic exists which captures all information required to distinguish different evolutionary hypotheses. Here, we apply boosting, a recent statistical method that combines simple classification rules to maximize their joint predictive performance. We show that our implementation of boosting has a high power to detect selective sweeps. Demographic events, such as bottlenecks, do not result in a large excess of false positives. A comparison shows that our boosting implementation performs well relative to other neutrality tests. Furthermore, we evaluated the relative contribution of different summary statistics to the identification of selection and found that for recent sweeps integrated haplotype homozygosity is very informative, whereas older sweeps are better detected by Tajima's π. Overall, Watterson's θ was found to contribute the most information for distinguishing between bottlenecks and selection. PMID:21041556
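
    The idea of combining several summary statistics with boosting can be sketched with scikit-learn. The three feature columns below are random stand-ins for real population genetic summary statistics, and gradient boosting replaces the paper's specific boosting implementation.

    ```python
    # Boosted classification of "sweep" vs "neutral" regions from a
    # small set of (simulated) summary statistic features.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(8)
    n = 1000
    neutral = rng.normal(0.0, 1.0, (n, 3))  # stand-in summary statistics
    sweep = rng.normal(0.7, 1.0, (n, 3))    # shifted under selection
    X = np.vstack([neutral, sweep])
    y = np.repeat([0, 1], n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = GradientBoostingClassifier().fit(X_tr, y_tr)
    print(f"held-out accuracy: {accuracy_score(y_te, clf.predict(X_te)):.3f}")
    ```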

  16. What is too much variation? The null hypothesis in small-area analysis.

    PubMed Central

    Diehr, P; Cain, K; Connell, F; Volinn, E

    1990-01-01

    A small-area analysis (SAA) in health services research often calculates surgery rates for several small areas, compares the largest rate to the smallest, notes that the difference is large, and attempts to explain this discrepancy as a function of service availability, physician practice styles, or other factors. SAAs are often difficult to interpret because there is little theoretical basis for determining how much variation would be expected under the null hypothesis that all of the small areas have similar underlying surgery rates and that the observed variation is due to chance. We developed a computer program to simulate the distribution of several commonly used descriptive statistics under the null hypothesis, and used it to examine the variability in rates among the counties of the state of Washington. The expected variability when the null hypothesis is true is surprisingly large, and becomes worse for procedures with low incidence, for smaller populations, when there is variability among the populations of the counties, and when readmissions are possible. The characteristics of four descriptive statistics were studied and compared. None was uniformly good, but the chi-square statistic had better performance than the others. When we reanalyzed five journal articles that presented sufficient data, the results were usually statistically significant. Since SAA research today is tending to deal with low-incidence events, smaller populations, and measures where readmissions are possible, more research is needed on the distribution of small-area statistics under the null hypothesis. New standards are proposed for the presentation of SAA results. PMID:2312306

  17. What is too much variation? The null hypothesis in small-area analysis.

    PubMed

    Diehr, P; Cain, K; Connell, F; Volinn, E

    1990-02-01

    A small-area analysis (SAA) in health services research often calculates surgery rates for several small areas, compares the largest rate to the smallest, notes that the difference is large, and attempts to explain this discrepancy as a function of service availability, physician practice styles, or other factors. SAAs are often difficult to interpret because there is little theoretical basis for determining how much variation would be expected under the null hypothesis that all of the small areas have similar underlying surgery rates and that the observed variation is due to chance. We developed a computer program to simulate the distribution of several commonly used descriptive statistics under the null hypothesis, and used it to examine the variability in rates among the counties of the state of Washington. The expected variability when the null hypothesis is true is surprisingly large, and becomes worse for procedures with low incidence, for smaller populations, when there is variability among the populations of the counties, and when readmissions are possible. The characteristics of four descriptive statistics were studied and compared. None was uniformly good, but the chi-square statistic had better performance than the others. When we reanalyzed five journal articles that presented sufficient data, the results were usually statistically significant. Since SAA research today is tending to deal with low-incidence events, smaller populations, and measures where readmissions are possible, more research is needed on the distribution of small-area statistics under the null hypothesis. New standards are proposed for the presentation of SAA results.
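
    The simulation strategy used in both versions of this paper is easy to reproduce in outline: generate counts for each small area under a common underlying rate and examine the null distribution of a descriptive statistic such as the largest-to-smallest rate ratio. The populations and common rate in this Python sketch are hypothetical.

    ```python
    # Null distribution of the max/min rate ratio across small areas
    # that share one underlying surgery rate.
    import numpy as np

    rng = np.random.default_rng(9)
    populations = np.array([5_000, 12_000, 30_000, 8_000, 60_000, 15_000])
    true_rate = 2.0 / 1000  # same underlying rate in every area

    ratios = np.empty(10_000)
    for i in range(ratios.size):
        counts = rng.poisson(populations * true_rate)
        rates = np.maximum(counts, 1) / populations  # guard empty areas
        ratios[i] = rates.max() / rates.min()

    print(f"median max/min ratio under the null: {np.median(ratios):.1f}")
    print(f"95th percentile: {np.percentile(ratios, 95):.1f}")
    ```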

  18. Comparative evaluation of terminalia chebula extract mouthwash and chlorhexidine mouthwash on plaque and gingival inflammation - 4-week randomised control trial.

    PubMed

    Gupta, Devanand; Gupta, Rajendra Kumar; Bhaskar, Dara John; Gupta, Vipul

    2015-01-01

    The present study was conducted to assess the effectiveness of Terminalia chebula on plaque and gingival inflammation and to compare it with the gold standard chlorhexidine (CHX 0.2%) and with distilled water as a placebo control. A double-blind randomised controlled trial was conducted among undergraduate students who volunteered. They were randomly allocated into three study groups: 1) Terminalia chebula mouthwash (n = 30); 2) chlorhexidine (active control) (n = 30); 3) distilled water (placebo) (n = 30). Assessment was carried out using plaque and gingival scores. Statistical analysis was carried out to compare the effect of the mouthwashes; ANOVA and post hoc LSD tests were performed using SPSS version 17, with p ≤ 0.05 considered statistically significant. Our results showed that Terminalia chebula mouthrinse is as effective as chlorhexidine in reducing dental plaque and gingival inflammation. The results demonstrated a significant reduction of gingival bleeding and plaque indices in both groups at 15 and 30 days as compared to the placebo. The results of the present study indicate that Terminalia chebula may prove to be an effective mouthwash, and its extract mouthrinse can be used as an alternative to chlorhexidine mouthrinse as it has similar properties without the side-effects of the latter.

  19. The extended statistical analysis of toxicity tests using standardised effect sizes (SESs): a comparison of nine published papers.

    PubMed

    Festing, Michael F W

    2014-01-01

    The safety of chemicals, drugs, novel foods and genetically modified crops is often tested using repeat-dose sub-acute toxicity tests in rats or mice. It is important to avoid misinterpretation of the results, as these tests are used to help determine safe exposure levels in humans. Treated and control groups are compared for a range of haematological, biochemical and other biomarkers which may indicate tissue damage or other adverse effects. However, the statistical analysis and presentation of such data pose problems due to the large number of statistical tests involved. Often, it is not clear whether a "statistically significant" effect is real or a false positive (type I error) due to sampling variation. Authors' conclusions appear to be reached somewhat subjectively from the pattern of statistical significances, discounting those which they judge to be type I errors and ignoring any biomarker where the p-value is greater than p = 0.05. By using standardised effect sizes (SESs), however, a range of graphical methods and an overall assessment of the mean absolute response can be employed. The approach is an extension, not a replacement, of existing methods, and is intended to assist toxicologists and regulators in the interpretation of the results. Here, the SES analysis has been applied to data from nine published sub-acute toxicity tests in order to compare its findings with those of the original authors. Line plots, box plots and bar plots show the pattern of response, and dose-response relationships are easily seen. A "bootstrap" test compares the mean absolute differences across dose groups. In four out of the seven papers where the no observed adverse effect level (NOAEL) was estimated by the authors, it was set too high according to the bootstrap test, suggesting that possible toxicity was under-estimated.
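
    The building block of the SES approach is a standardized effect size. A minimal Python sketch of Cohen's d with a pooled standard deviation is shown below on synthetic biomarker values; the paper's own graphical and bootstrap extensions are not reproduced here.

    ```python
    # Cohen's d: mean difference divided by the pooled standard deviation.
    import numpy as np

    def cohens_d(treated, control):
        """Standardized effect size with a pooled SD."""
        n1, n2 = treated.size, control.size
        pooled_var = ((n1 - 1) * treated.var(ddof=1)
                      + (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2)
        return (treated.mean() - control.mean()) / np.sqrt(pooled_var)

    rng = np.random.default_rng(10)
    control = rng.normal(100.0, 10.0, 10)    # hypothetical biomarker, controls
    high_dose = rng.normal(112.0, 10.0, 10)  # hypothetical treated group
    print(f"Cohen's d: {cohens_d(high_dose, control):.2f}")
    ```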

  20. Assessing Discriminative Performance at External Validation of Clinical Prediction Models

    PubMed Central

    Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W.

    2016-01-01

    Introduction External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures for judging changes in the c-statistic from the development to the external validation setting. Methods We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in a validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. Results The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. Conclusion The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients. PMID:26881753
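
    In the spirit of the permutation test discussed above, the following Python sketch pools development and validation cases, permutes set membership, and asks how often the difference in c-statistic (AUC) is as extreme as the one observed. The data, model scores, and set sizes are all synthetic.

    ```python
    # Permutation test on the difference in c-statistic between a
    # development set and a validation set scored by one model.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(11)
    n_dev, n_val = 300, 200
    y_dev = rng.integers(0, 2, n_dev)
    s_dev = y_dev + rng.normal(0.0, 1.0, n_dev)  # stronger discrimination
    y_val = rng.integers(0, 2, n_val)
    s_val = y_val + rng.normal(0.0, 1.4, n_val)  # weaker discrimination

    obs = roc_auc_score(y_dev, s_dev) - roc_auc_score(y_val, s_val)
    y_all = np.concatenate([y_dev, y_val])
    s_all = np.concatenate([s_dev, s_val])

    extreme, n_perm = 0, 2000
    for _ in range(n_perm):
        idx = rng.permutation(y_all.size)  # shuffle set membership
        d, v = idx[:n_dev], idx[n_dev:]
        diff = (roc_auc_score(y_all[d], s_all[d])
                - roc_auc_score(y_all[v], s_all[v]))
        extreme += abs(diff) >= abs(obs)
    print(f"AUC drop {obs:.3f}, permutation p = {extreme / n_perm:.3f}")
    ```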

  1. Adaptive Statistical Iterative Reconstruction-Applied Ultra-Low-Dose CT with Radiography-Comparable Radiation Dose: Usefulness for Lung Nodule Detection

    PubMed Central

    Yoon, Hyun Jung; Hwang, Hye Sun; Moon, Jung Won; Lee, Kyung Soo

    2015-01-01

    Objective To assess the performance of adaptive statistical iterative reconstruction (ASIR)-applied ultra-low-dose CT (ULDCT) in detecting small lung nodules. Materials and Methods Thirty patients underwent both ULDCT and standard dose CT (SCT). After determining the reference standard nodules, five observers, blinded to the reference standard reading results, independently evaluated SCT and both subsets of ASIR- and filtered back projection (FBP)-driven ULDCT images. Data assessed by observers were compared statistically. Results Converted effective doses in SCT and ULDCT were 2.81 ± 0.92 and 0.17 ± 0.02 mSv, respectively. A total of 114 lung nodules were detected on SCT as a standard reference. There was no statistically significant difference in sensitivity between ASIR-driven ULDCT and SCT for three out of the five observers (p = 0.678, 0.735, < 0.01, 0.038, and 0.868 for observers 1, 2, 3, 4, and 5, respectively). The sensitivity of FBP-driven ULDCT was significantly lower than that of ASIR-driven ULDCT in three out of the five observers (p < 0.01 for three observers, and p = 0.064 and 0.146 for two observers). In jackknife alternative free-response receiver operating characteristic analysis, the mean values of figure-of-merit (FOM) for FBP, ASIR-driven ULDCT, and SCT were 0.682, 0.772, and 0.821, respectively, and there were no significant differences in FOM values between ASIR-driven ULDCT and SCT (p = 0.11), but the FOM value of FBP-driven ULDCT was significantly lower than that of ASIR-driven ULDCT and SCT (p = 0.01 and 0.00). Conclusion Adaptive statistical iterative reconstruction-driven ULDCT delivering a radiation dose of only 0.17 mSv offers acceptable sensitivity in nodule detection compared with SCT and has better performance than FBP-driven ULDCT. PMID:26357505

  2. Extreme value statistics for two-dimensional convective penetration in a pre-main sequence star

    NASA Astrophysics Data System (ADS)

    Pratt, J.; Baraffe, I.; Goffrey, T.; Constantino, T.; Viallet, M.; Popov, M. V.; Walder, R.; Folini, D.

    2017-08-01

    Context. In the interior of stars, a convectively unstable zone typically borders a zone that is stable to convection. Convective motions can penetrate the boundary between these zones, creating a layer characterized by intermittent convective mixing, and gradual erosion of the density and temperature stratification. Aims: We examine a penetration layer formed between a central radiative zone and a large convection zone in the deep interior of a young low-mass star. Using the Multidimensional Stellar Implicit Code (MUSIC) to simulate two-dimensional compressible stellar convection in a spherical geometry over long times, we produce statistics that characterize the extent and impact of convective penetration in this layer. Methods: We apply extreme value theory to the maximal extent of convective penetration at any time. We compare statistical results from simulations which treat non-local convection, throughout a large portion of the stellar radius, with simulations designed to treat local convection in a small region surrounding the penetration layer. For each of these situations, we compare simulations of different resolution, which have different velocity magnitudes. We also compare statistical results between simulations that radiate energy at a constant rate to those that allow energy to radiate from the stellar surface according to the local surface temperature. Results: Based on the frequency and depth of penetrating convective structures, we observe two distinct layers that form between the convection zone and the stable radiative zone. We show that the probability density function of the maximal depth of convective penetration at any time corresponds closely in space with the radial position where internal waves are excited. We find that the maximal penetration depth can be modeled by a Weibull distribution with a small shape parameter. Using these results, and building on established scalings for diffusion enhanced by large-scale convective motions, we propose a new form for the diffusion coefficient that may be used for one-dimensional stellar evolution calculations in the large Péclet number regime. These results should contribute to the 321D link.
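
    The Weibull model for the maximal penetration depth can be illustrated with SciPy: the sketch below fits a Weibull distribution to synthetic per-snapshot maxima. The shape and scale used to generate them are arbitrary choices, not values from the MUSIC simulations.

    ```python
    # Fitting a Weibull distribution to synthetic block maxima, as one
    # might for per-snapshot maximal penetration depths.
    import numpy as np
    from scipy.stats import weibull_min

    rng = np.random.default_rng(12)
    # Hypothetical maximal penetration depths per snapshot
    maxima = weibull_min.rvs(c=1.3, scale=0.2, size=500, random_state=rng)

    shape, loc, scale = weibull_min.fit(maxima, floc=0.0)  # pin location at 0
    print(f"Weibull shape = {shape:.2f} (small values: heavier upper tail)")
    print(f"scale = {scale:.3f}")
    ```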

  3. Underestimates of unintentional firearm fatalities: comparing Supplementary Homicide Report data with the National Vital Statistics System

    PubMed Central

    Barber, C; Hemenway, D; Hochstadt, J; Azrael, D

    2002-01-01

    Objective: A growing body of evidence suggests that the nation's vital statistics system undercounts unintentional firearm deaths that are not self inflicted. This issue was examined by comparing how unintentional firearm injuries identified in police Supplementary Homicide Report (SHR) data were coded in the National Vital Statistics System. Methods: National Vital Statistics System data are based on death certificates and divide firearm fatalities into six subcategories: homicide, suicide, accident, legal intervention, war operations, and undetermined. SHRs are completed by local police departments as part of the FBI's Uniform Crime Reports program. The SHR divides homicides into two categories: "murder and non-negligent manslaughter" (type A) and "negligent manslaughter" (type B). Type B shooting deaths are those that are inflicted by another person and that a police investigation determined were inflicted unintentionally, as in a child killing a playmate after mistaking a gun for a toy. In 1997, the SHR classified 168 shooting victims this way. Using probabilistic matching, 140 of these victims were linked to their death certificate records. Results: Among the 140 linked cases, 75% were recorded on the death certificate as homicides and only 23% as accidents. Conclusion: Official data from the National Vital Statistics System almost certainly undercount firearm accidents when the victim is shot by another person. PMID:12226128

  4. Perinatal outcomes in women over 40 years of age compared to those of other gestations

    PubMed Central

    Canhaço, Evandro Eduardo; Bergamo, Angela Mendes; Lippi, Umberto Gazi; Lopes, Reginaldo Guedes Coelho

    2015-01-01

    Objective To clarify whether older pregnant women are more likely to have adverse perinatal outcomes when compared to women at an ideal age to have a child. Methods The groups were divided according to age: under 20 years, ≥20 to <40 years, and ≥40 years. Results During the period from January 1st, 2008, to December 31st, 2008, there were 76 births from patients younger than 20 years and 91 births from patients aged 40 years or over. To form a third group of intermediate age, the data of 92 patients aged 20 to 40 years were obtained, totaling 259 patients. Patients aged 40 or older had a statistically greater number of cesarean sections and less frequent use of forceps or normal deliveries (p<0.001). The use of spinal anesthesia was statistically higher among those aged 40 years or more (p<0.001). The frequency of male newborns was statistically higher in older patients, a group with statistically fewer first pregnancies (p<0.001). The frequency of premature newborns was statistically higher in patients aged 40 years or more (p=0.004). Conclusion It is crucial to give priority to older pregnant women, so that prenatal care is appropriate, minimizing maternal complications and improving perinatal outcomes in this unique group. PMID:25993070

  5. Fast, Statistical Model of Surface Roughness for Ion-Solid Interaction Simulations and Efficient Code Coupling

    NASA Astrophysics Data System (ADS)

    Drobny, Jon; Curreli, Davide; Ruzic, David; Lasa, Ane; Green, David; Canik, John; Younkin, Tim; Blondel, Sophie; Wirth, Brian

    2017-10-01

    Surface roughness greatly impacts material erosion, and thus plays an important role in Plasma-Surface Interactions. Developing strategies for efficiently introducing rough surfaces into ion-solid interaction codes will be an important step towards whole-device modeling of plasma devices and future fusion reactors such as ITER. Fractal TRIDYN (F-TRIDYN) is an upgraded version of the Monte Carlo binary collision approximation (BCA) program TRIDYN, developed for this purpose, which includes an explicit fractal model of surface roughness and extended input and output options for file-based code coupling. Code coupling with both plasma and material codes has been achieved and allows for multi-scale, whole-device modeling of plasma experiments. These code coupling results will be presented. F-TRIDYN has been further upgraded with an alternative, statistical model of surface roughness. The statistical model is significantly faster than the fractal model and compares favorably to it. Additionally, the statistical model compares well to alternative computational surface roughness models and experiments. Theoretical links between the fractal and statistical models are made, and further connections to experimental measurements of surface roughness are explored. This work was supported by the PSI-SciDAC Project funded by the U.S. Department of Energy through contract DOE-DE-SC0008658.

  6. Statistics of voids in hierarchical universes

    NASA Technical Reports Server (NTRS)

    Fry, J. N.

    1986-01-01

    As one alternative to the N-point galaxy correlation function statistics, the distribution of holes or the probability that a volume of given size and shape be empty of galaxies can be considered. The probability of voids resulting from a variety of hierarchical patterns of clustering is considered, and these are compared with the results of numerical simulations and with observations. A scaling relation required by the hierarchical pattern of higher order correlation functions is seen to be obeyed in the simulations, and the numerical results show a clear difference between neutrino models and cold-particle models; voids are more likely in neutrino universes. Observational data do not yet distinguish but are close to being able to distinguish between models.

  7. Routine hospital data – is it good enough for trials? An example using England’s Hospital Episode Statistics in the SHIFT trial of Family Therapy vs. Treatment as Usual in adolescents following self-harm

    PubMed Central

    Graham, Elizabeth; Cottrell, David; Farrin, Amanda

    2018-01-01

    Background: Use of routine data sources within clinical research is increasing and is endorsed by the National Institute for Health Research to increase trial efficiencies; however there is limited evidence for its use in clinical trials, especially in relation to self-harm. One source of routine data, Hospital Episode Statistics, is collated and distributed by NHS Digital and contains details of admissions, outpatient, and Accident and Emergency attendances provided periodically by English National Health Service hospitals. We explored the reliability and accuracy of Hospital Episode Statistics, compared to data collected directly from hospital records, to assess whether it would provide complete, accurate, and reliable means of acquiring hospital attendances for self-harm – the primary outcome for the SHIFT (Self-Harm Intervention: Family Therapy) trial evaluating Family Therapy for adolescents following self-harm. Methods: Participant identifiers were linked to Hospital Episode Statistics Accident and Emergency, and Admissions data, and episodes combined to describe participants’ complete hospital attendance. Attendance data were initially compared to data previously gathered by trial researchers from pre-identified hospitals. Final comparison was conducted of subsequent attendances collected through Hospital Episode Statistics and researcher follow-up. Consideration was given to linkage rates; number and proportion of attendances retrieved; reliability of Accident and Emergency, and Admissions data; percentage of self-harm episodes recorded and coded appropriately; and percentage of required data items retrieved. Results: Participants were first linked to Hospital Episode Statistics with an acceptable match rate of 95%, identifying a total of 341 complete hospital attendances, compared to 139 reported by the researchers at the time. More than double the proportion of Hospital Episode Statistics Accident and Emergency episodes could not be classified in relation to self-harm (75%) compared to 34.9% of admitted episodes, and of overall attendances, 18% were classified as self-harm related and 20% not related, while ambiguity or insufficient information meant 62% were unclassified. Of 39 self-harm-related attendances reported by the researchers, Hospital Episode Statistics identified 24 (62%) as self-harm related while 15 (38%) were unclassified. Based on final data received, 1490 complete hospital attendances were identified and comparison to researcher follow-up found Hospital Episode Statistics underestimated the number of self-harm attendances by 37.2% (95% confidence interval 32.6%–41.9%). Conclusion: Advantages of routine data collection via NHS Digital included the acquisition of more comprehensive and timely trial outcome data, identifying more than double the number of hospital attendances than researchers. Disadvantages included ambiguity in the classification of self-harm relatedness. Our resulting primary outcome data collection strategy used routine data to identify hospital attendances supplemented by targeted researcher data collection for attendances requiring further self-harm classification. PMID:29498542

  8. The efficacy of tamsulosin in lower ureteral calculi

    PubMed Central

    Griwan, M.S.; Singh, Santosh Kumar; Paul, Himanshu; Pawar, Devendra Singh; Verma, Manish

    2010-01-01

    Context: There has been a paradigm shift in the management of ureteral calculi in the last decade with the introduction of new, less invasive methods, such as ureterorenoscopy and extracorporeal shock wave lithotripsy (ESWL). Aims: Recent studies have reported excellent results with medical expulsive therapy (MET) for distal ureteral calculi, both in terms of stone expulsion and control of ureteral colic pain. Settings and Design: We conducted a comparative study between watchful waiting and MET with tamsulosin. Materials and Methods: Watchful waiting (Group I) was compared with MET with tamsulosin (Group II) in 60 patients, with a follow-up of 28 days. Statistical Analysis: Independent 't' test and chi-square test. Results: Group II showed a statistically significant advantage in terms of the stone expulsion rate. The mean number of episodes of pain, mean days to stone expulsion and mean amount of analgesic dosage used were statistically significantly lower in Group II (P = 0.007, 0.01 and 0.007, respectively) as compared to Group I. Conclusions: It is concluded that MET should be considered for uncomplicated distal ureteral calculi before ureteroscopy or extracorporeal lithotripsy. Tamsulosin was found to increase and hasten stone expulsion, decrease acute attacks by acting as a spasmolytic, reduce mean days to stone expulsion and decrease analgesic dose usage. PMID:20882156

  9. Clinical study of the Erlanger silver catheter--data management and biometry.

    PubMed

    Martus, P; Geis, C; Lugauer, S; Böswald, M; Guggenbichler, J P

    1999-01-01

    The clinical evaluation of venous catheters for catheter-induced infections must conform to a strict biometric methodology. The statistical planning of the study (target population, design, degree of blinding), data management (database design, definition of variables, coding), quality assurance (data inspection at several levels) and the biometric evaluation of the Erlanger silver catheter project are described. The three-step data flow included: 1) primary data from the hospital, 2) a relational database, 3) files accessible for statistical evaluation. Two different statistical models were compared: including only the first catheter of each patient (independent data) and analyzing several catheters from the same patient (dependent data) by means of the generalized estimating equations (GEE) method. The main result of the study was based on the comparison of both statistical models.
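
    The two models compared can be sketched as follows (a minimal illustration with hypothetical data and column names, using the statsmodels library): an ordinary logistic regression on each patient's first catheter only, versus a GEE logistic model on all catheters with an exchangeable within-patient correlation structure:

      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      # Hypothetical data: one row per catheter; patient_id links
      # repeated catheters from the same patient.
      df = pd.DataFrame({
          "patient_id": [1, 1, 2, 3, 3, 4, 5, 5],
          "silver":     [1, 0, 1, 0, 1, 0, 1, 0],  # 1 = silver catheter
          "infection":  [0, 1, 1, 1, 0, 0, 0, 1],
      })

      # Model 1: first catheter per patient only (independent data)
      first = df.groupby("patient_id").head(1)
      m1 = smf.logit("infection ~ silver", data=first).fit(disp=False)

      # Model 2: all catheters, dependence handled by GEE
      m2 = smf.gee("infection ~ silver", groups="patient_id", data=df,
                   family=sm.families.Binomial(),
                   cov_struct=sm.cov_struct.Exchangeable()).fit()
      print(m1.params, m2.params, sep="\n")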

  10. A comparison of InVivoStat with other statistical software packages for analysis of data generated from animal experiments.

    PubMed

    Clark, Robin A; Shoaib, Mohammed; Hewitt, Katherine N; Stanford, S Clare; Bate, Simon T

    2012-08-01

    InVivoStat is a free-to-use statistical software package for analysis of data generated from animal experiments. The package is designed specifically for researchers in the behavioural sciences, where exploiting the experimental design is crucial for reliable statistical analyses. This paper compares the analysis of three experiments conducted using InVivoStat with other widely used statistical packages: SPSS (V19), PRISM (V5), UniStat (V5.6) and Statistica (V9). We show that InVivoStat provides results that are similar to those from the other packages and, in some cases, are more advanced. This investigation provides evidence of further validation of InVivoStat and should strengthen users' confidence in this new software package.

  11. Speckle noise reduction of 1-look SAR imagery

    NASA Technical Reports Server (NTRS)

    Nathan, Krishna S.; Curlander, John C.

    1987-01-01

    Speckle noise is inherent to synthetic aperture radar (SAR) imagery. Since the degradation of the image due to this noise results in uncertainties in the interpretation of the scene and in a loss of apparent resolution, it is desirable to filter the image to reduce this noise. In this paper, an adaptive algorithm based on the calculation of the local statistics around a pixel is applied to 1-look SAR imagery. The filter adapts to the nonstationarity of the image statistics since the size of the blocks is very small compared to that of the image. The performance of the filter is measured in terms of the equivalent number of looks (ENL) of the filtered image and the resulting resolution degradation. The results are compared to those obtained from different techniques applied to similar data. The local adaptive filter (LAF) significantly increases the ENL of the final image. The associated loss of resolution is also lower than that for other commonly used speckle reduction techniques.
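
    The paper's LAF is not reproduced here, but a generic Lee-type local-statistics filter of the same family can be sketched (assuming intensity imagery and a fully developed multiplicative speckle model):

      import numpy as np
      from scipy.ndimage import uniform_filter

      def lee_filter(img, win=7, enl=1.0):
          # Local-statistics speckle filter: blend each pixel with the
          # local mean according to how much the local variance exceeds
          # the variance expected from speckle alone. enl is the
          # equivalent number of looks of the input (about 1 for 1-look).
          mean = uniform_filter(img, win)
          mean_sq = uniform_filter(img * img, win)
          var = mean_sq - mean * mean
          noise_var = (mean * mean) / enl
          weight = np.clip((var - noise_var) / np.maximum(var, 1e-12),
                           0.0, 1.0)
          return mean + weight * (img - mean)

      # Toy example: constant scene under 1-look intensity speckle
      rng = np.random.default_rng(0)
      scene = np.full((128, 128), 100.0)
      speckled = scene * rng.exponential(1.0, scene.shape)
      filtered = lee_filter(speckled, win=7, enl=1.0)
      print(speckled.std(), filtered.std())  # variance drops, ENL rises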

  12. An alternative way to evaluate chemistry-transport model variability

    NASA Astrophysics Data System (ADS)

    Menut, Laurent; Mailler, Sylvain; Bessagnet, Bertrand; Siour, Guillaume; Colette, Augustin; Couvidat, Florian; Meleux, Frédérik

    2017-03-01

    A simple and complementary technique for evaluating regional chemistry-transport models is discussed. The methodology is based on the concept that we can learn about model performance by comparing the simulation results with observational data available for time periods other than the period originally targeted. First, the statistical indicators selected in this study (spatial and temporal correlations) are computed for a given time period, using observation and simulation data colocated in time and space. Second, the same indicators are used to calculate scores for several other years while conserving the spatial locations and Julian days of the year. The difference between the results provides useful insight into the model's capability to reproduce the observed day-to-day and spatial variability. In order to synthesize the large amount of results, a new indicator is proposed, designed to compare several error statistics across all the years of validation and to quantify whether the period and area being studied were well captured by the model for the correct reasons.
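
    The idea can be sketched as follows (hypothetical arrays; the study's actual indicator is not reproduced): score one simulated year against observations from the targeted year and from several other years, holding station locations and Julian days fixed, and check that the targeted year scores best:

      import numpy as np

      def temporal_correlation(obs, sim):
          # Pearson correlation over colocated (station x day) values
          return np.corrcoef(obs.ravel(), sim.ravel())[0, 1]

      # Hypothetical setup: 50 stations x 365 days; observations from
      # the targeted year (2012) should match the simulation best.
      rng = np.random.default_rng(1)
      sim_2012 = rng.normal(size=(50, 365))
      obs = {yr: sim_2012 + rng.normal(scale=s, size=sim_2012.shape)
             for yr, s in [(2010, 2.0), (2011, 1.5), (2012, 0.5),
                           (2013, 1.8)]}

      scores = {yr: temporal_correlation(o, sim_2012)
                for yr, o in obs.items()}
      print(scores, "best:", max(scores, key=scores.get))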

  13. Statistical Analysis of CFD Solutions from the 6th AIAA CFD Drag Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Derlaga, Joseph M.; Morrison, Joseph H.

    2017-01-01

    A graphical framework is used for statistical analysis of the results from an extensive N-version test of a collection of Reynolds-averaged Navier-Stokes computational fluid dynamics codes. The solutions were obtained by code developers and users from North America, Europe, Asia, and South America using both common and custom grid sequences as well as multiple turbulence models for the June 2016 6th AIAA CFD Drag Prediction Workshop sponsored by the AIAA Applied Aerodynamics Technical Committee. The aerodynamic configuration for this workshop was the Common Research Model subsonic transport wing-body previously used for both the 4th and 5th Drag Prediction Workshops. This work continues the statistical analysis begun in the earlier workshops and compares the results from the grid convergence study of the most recent workshop with previous workshops.

  14. Assessment of oral health parameters among students attending special schools of Mangalore city.

    PubMed

    Peter, Tom; Cherian, Deepthi Anna; Peter, Tim

    2017-01-01

    The aim of the study was to assess the oral health status and treatment needs and the correlation between dental caries susceptibility and salivary pH, buffering capacity and total antioxidant capacity among students attending special schools of Mangalore city. In this study 361 subjects in the age range of 12-18 years were divided into normal ( n = 84), physically challenged ( n = 68), and mentally challenged ( n = 209) groups. Their oral health status and treatment needs were recorded using the modified WHO oral health assessment proforma. Saliva was collected to estimate the salivary parameters. Statistical analysis was done using the Statistical Package for the Social Sciences (SPSS) version 17 (SPSS Inc., Chicago, IL). On examining the dentition status of the study subjects, the mean number of decayed teeth was 1.57 for the normal, 2.54 for the physically challenged and 4.41 for the mentally challenged study subjects. These results were highly statistically significant ( P < 0.001). The treatment needs of the study subjects revealed that the mean number of teeth requiring pulp care and restoration was 1 for the normal, 0.12 for the physically challenged, and 1.21 for the mentally challenged study subjects. These results were highly statistically significant ( P < 0.001). The mean salivary pH and buffering capacity were found to be lowest among the mentally challenged subjects. The physically challenged group had the lowest mean total antioxidant capacity among the study subjects. Among the study subjects, normal students had the highest mean salivary pH, buffering capacity, and total antioxidant capacity. These results were highly statistically significant ( P < 0.001). The better dentition status of the normal subjects compared to the physically and mentally challenged study subjects could be due to their better oral health practices. The difference in the treatment needs could be due to the higher prevalence of untreated dental caries and also due to the neglected oral health care among the mentally challenged study subjects. The salivary pH and buffering capacity were comparatively lower among the physically and mentally challenged study subjects, which could contribute to their increased caries experience compared to the normal study subjects. However, further studies are needed to establish a more conclusive result on the total antioxidant capacity of the saliva and dental caries.

  15. A Comparison of Delinquent Prostitutes and Delinquent Non-Prostitutes on Self-Concept.

    ERIC Educational Resources Information Center

    Bour, Daria S.; And Others

    1984-01-01

    Compared social and demographic statistics and self-concept in 50 delinquent females (25 prostitutes and 25 nonprostitutes). Results indicated early sexual intercourse and a positive physical self-image were related to prostitution. (JAC)

  16. Data Comparability and Public Policy: New Interest in Public Library Data. Papers Presented at Meetings of the American Statistical Association. Working Paper Series.

    ERIC Educational Resources Information Center

    National Center for Education Statistics (ED), Washington, DC.

    The four papers contained in this volume were presented at the August 1994 meetings of the American Statistical Association as a session titled, "Public Policy and Data Comparability: New Interest in Public Library Data." The first paper, "Public Library Statistics: Two Systems Compared" (Mary Jo Lynch), describes two systems…

  17. Extractive-spectrophotometric determination of disopyramide and irbesartan in their pharmaceutical formulation

    NASA Astrophysics Data System (ADS)

    Abdellatef, Hisham E.

    2007-04-01

    Picric acid, bromocresol green, bromothymol blue, cobalt thiocyanate and molybdenum(V) thiocyanate have been tested as spectrophotometric reagents for the determination of disopyramide and irbesartan. Reaction conditions have been optimized to obtain coloured complexes of higher sensitivity and longer stability. The absorbance of the ion-pair complexes formed was found to increase linearly with increasing concentrations of disopyramide and irbesartan, as corroborated by the correlation coefficient values. The developed methods have been successfully applied to the determination of disopyramide and irbesartan in bulk drugs and pharmaceutical formulations. The common excipients and additives did not interfere in their determination. The results obtained by the proposed methods have been statistically compared by means of Student's t-test and the variance-ratio F-test. The validity was assessed by applying the standard addition technique. The results were compared statistically with the official or reference methods, showing good agreement with high precision and accuracy.
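
    The closing statistical comparison can be illustrated with a short sketch (hypothetical recovery values, using scipy): Student's t-test compares mean accuracy, and the variance-ratio F-test compares precision, between the proposed and reference methods:

      import numpy as np
      from scipy import stats

      # Hypothetical recoveries (%) by the proposed and official methods
      proposed = np.array([99.1, 100.4, 98.7, 99.8, 100.2, 99.5])
      official = np.array([99.4, 100.1, 99.0, 99.9, 100.6, 99.2])

      # Student's t-test for accuracy (difference in means)
      t, p_t = stats.ttest_ind(proposed, official)

      # Variance-ratio F-test for precision (larger variance on top)
      s1, s2 = proposed.var(ddof=1), official.var(ddof=1)
      F = max(s1, s2) / min(s1, s2)
      dfn = dfd = len(proposed) - 1      # equal group sizes here
      p_f = 2 * stats.f.sf(F, dfn, dfd)  # two-sided

      print(f"t = {t:.3f} (p = {p_t:.3f}), F = {F:.3f} (p = {p_f:.3f})")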

  18. Efficacy of Curcuma for Treatment of Osteoarthritis.

    PubMed

    Perkins, Kimberly; Sahy, William; Beckett, Robert D

    2017-01-01

    The objective of this review is to identify, summarize, and evaluate clinical trials to determine the efficacy of curcuma in the treatment of osteoarthritis. A literature search for interventional studies assessing the efficacy of curcuma was performed, resulting in 8 clinical trials. Studies have investigated the effect of curcuma on pain, stiffness, and functionality in patients with knee osteoarthritis. Curcuma-containing products consistently demonstrated statistically significant improvement in osteoarthritis-related endpoints compared with placebo, with one exception. When compared with active control, curcuma-containing products were similar to nonsteroidal anti-inflammatory drugs, and potentially to glucosamine. While statistically significant differences in outcomes were reported in a majority of studies, the small magnitude of effect and the presence of major study limitations hinder the application of these results. Further rigorous studies are needed prior to recommending curcuma as an effective alternative therapy for knee osteoarthritis. © The Author(s) 2016.

  19. Cryotherapy and ankle motion in chronic venous disorders

    PubMed Central

    Kelechi, Teresa J.; Mueller, Martina; Zapka, Jane G.; King, Dana E.

    2013-01-01

    This study compared ankle range of motion (AROM), including dorsiflexion, plantar flexion, inversion and eversion, and venous refill time (VRT) in leg skin inflamed by venous disorders, before and after a new cryotherapy ulcer prevention treatment. Fifty-seven individuals participated in the randomized clinical trial: 28 in the experimental group and 29 in the usual care group. Results revealed no statistically significant differences between the experimental and usual care groups, although AROM measures in the experimental group showed a consistent, non-clinically relevant decrease compared to the usual care group, except for dorsiflexion. Within-treatment-group comparisons of VRT results showed a statistically significant increase in both dorsiflexion and plantar flexion for patients with severe VRT in the experimental group (6.9 ± 6.8; p = 0.002 and 5.8 ± 12.6; p = 0.02, respectively). Cryotherapy did not further restrict already compromised AROM, and in some cases there were minor improvements. PMID:23516043

  20. Comparison of clinical outcomes in decompression and fusion versus decompression only in patients with ossification of the posterior longitudinal ligament: a meta-analysis.

    PubMed

    Mehdi, Syed K; Alentado, Vincent J; Lee, Bryan S; Mroz, Thomas E; Benzel, Edward C; Steinmetz, Michael P

    2016-06-01

    OBJECTIVE Ossification of the posterior longitudinal ligament (OPLL) is a pathological calcification or ossification of the PLL, predominantly occurring in the cervical spine. Although surgery is often necessary for patients with symptomatic neurological deterioration, there remains controversy with regard to the optimal surgical treatment. In this systematic review and meta-analysis, the authors identified differences in complications and outcomes after anterior or posterior decompression and fusion versus after decompression alone for the treatment of cervical myelopathy due to OPLL. METHODS A MEDLINE, SCOPUS, and Web of Science search was performed for studies reporting complications and outcomes after decompression and fusion or after decompression alone for patients with OPLL. A meta-analysis was performed to calculate effect summary mean values, 95% CIs, Q statistics, and I² values. Forest plots were constructed for each analysis group. RESULTS Of the 2630 retrieved articles, 32 met the inclusion criteria. There was no statistically significant difference in the incidence of excellent and good outcomes and of fair and poor outcomes between the decompression and fusion and the decompression-only cohorts. However, the decompression and fusion cohort had a statistically significantly higher recovery rate (63.2% vs 53.9%; p < 0.0001), a higher final Japanese Orthopaedic Association score (14.0 vs 13.5; p < 0.0001), and a lower incidence of OPLL progression (< 1% vs 6.3%; p < 0.0001) compared with the decompression-only cohort. There was no statistically significant difference in the incidence of complications between the 2 cohorts. CONCLUSIONS This study represents the only comprehensive review of outcomes and complications after decompression and fusion or after decompression alone for OPLL across a heterogeneous group of surgeons and patients. Based on these results, decompression and fusion is a superior surgical technique compared with posterior decompression alone in patients with OPLL. These results indicate that surgical decompression and fusion lead to a faster recovery, improved postoperative neurological functioning, and a lower incidence of OPLL progression compared with posterior decompression only. Furthermore, decompression and fusion did not lead to a greater incidence of complications compared with posterior decompression only.
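
    A minimal sketch of the random-effects machinery named in METHODS (a DerSimonian-Laird-style computation with hypothetical inputs, not the study's data): pooled effect, 95% CI, Cochran's Q and I²:

      import numpy as np

      def random_effects_meta(effects, variances):
          # effects: per-study estimates (e.g. log odds ratios)
          # variances: their within-study variances
          e = np.asarray(effects, float)
          v = np.asarray(variances, float)
          w = 1.0 / v                           # fixed-effect weights
          mu_fe = np.sum(w * e) / np.sum(w)
          Q = np.sum(w * (e - mu_fe) ** 2)      # Cochran's Q
          k = len(e)
          I2 = 100 * max(0.0, (Q - (k - 1)) / Q) if Q > 0 else 0.0
          tau2 = max(0.0, (Q - (k - 1)) /
                     (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
          w_re = 1.0 / (v + tau2)               # random-effects weights
          mu = np.sum(w_re * e) / np.sum(w_re)
          se = np.sqrt(1.0 / np.sum(w_re))
          return mu, (mu - 1.96 * se, mu + 1.96 * se), Q, I2

      # Hypothetical log odds ratios from five studies
      print(random_effects_meta([0.2, 0.5, 0.1, 0.4, 0.3],
                                [0.04, 0.09, 0.05, 0.02, 0.07]))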

  1. Serum zinc levels of cord blood: relation to birth weight and gestational period.

    PubMed

    Gómez, Tahiry; Bequer, Leticia; Mollineda, Angel; González, Olga; Diaz, Mireisy; Fernández, Douglas

    2015-04-01

    Zn-deficiency has been associated with numerous alterations during pregnancy, including low birth weight; however, research relating neonatal zinc status and birth weight has not produced reliable results. To compare the serum Zn-levels of cord blood in healthy newborns and low birth weight newborns, and to assess a possible relationship between zinc concentration and neonatal birth weight and gestational age. 123 newborns divided into a "study group" (n=50) of <2500 g birth weight neonates and a "control group" (n=73) of ≥2500 g birth weight neonates were enrolled. The study group was subdivided according to gestational age into preterm (<37 weeks) and full-term (≥37 weeks). Serum cord blood samples were collected, and the Zn-levels were analyzed using the flame atomic absorption spectrophotometry method, with results expressed in μmol/L. The Zn-levels were compared between the groups (Mann-Whitney U test) and correlated with birth weight and gestational age (Spearman's rank correlations). A statistically significant low positive correlation between Zn-levels and birth weight (ρ=0.283; p=0.005) was found. No statistically significant difference between Zn-levels of the study and control groups [17.00±0.43 vs. 18.16±0.32 (p=0.053)] was found. A statistically significant low positive correlation between Zn-levels and gestational age (ρ=0.351; p=0.001) was found. No statistically significant difference between Zn-levels of preterm as compared to full-term newborns [16.33±0.42 vs. 18.43±0.93 (p=0.079)] was found. The Zn-level of the preterm subgroup was significantly lower compared to the control group (p=0.001). Although low birth weight preterm neonates had significantly lower serum cord blood zinc levels than healthy term neonates, the correlations between cord blood zinc levels and birth weight and gestational age were weak. The results are not sufficient to relate changes in cord blood zinc concentration to birth weight values or gestational period. Regarding complicated pregnancies, further studies of blood zinc levels in our population are required. Copyright © 2015 Elsevier GmbH. All rights reserved.

  2. Quantitative comparison of tympanic membrane displacements using two optical methods to recover the optical phase

    NASA Astrophysics Data System (ADS)

    Santiago-Lona, Cynthia V.; Hernández-Montes, María del Socorro; Mendoza-Santoyo, Fernando; Esquivel-Tejeda, Jesús

    2018-02-01

    The study and quantification of the tympanic membrane (TM) displacements add important information to advance the knowledge about the hearing process. A comparative statistical analysis between two commonly used demodulation methods employed to recover the optical phase in digital holographic interferometry, namely the fast Fourier transform and phase-shifting interferometry, is presented as applied to study thin tissues such as the TM. The resulting experimental TM surface displacement data are used to contrast both methods through the analysis of variance and F tests. Data are gathered when the TMs are excited with continuous sound stimuli at levels 86, 89 and 93 dB SPL for the frequencies of 800, 1300 and 2500 Hz under the same experimental conditions. The statistical analysis shows repeatability in z-direction displacements with a standard deviation of 0.086, 0.098 and 0.080 μm using the Fourier method, and 0.080, 0.104 and 0.055 μm with the phase-shifting method at a 95% confidence level for all frequencies. The precision and accuracy are evaluated by means of the coefficient of variation; the results with the Fourier method are 0.06143, 0.06125, 0.06154 and 0.06154, 0.06118, 0.06111 with phase-shifting. The relative error between both methods is 7.143, 6.250 and 30.769%. On comparing the measured displacements, the results indicate that there is no statistically significant difference between both methods for frequencies at 800 and 1300 Hz; however, errors and other statistics increase at 2500 Hz.

  3. Clinical comparison of CR and screen film for imaging the critically ill neonate

    NASA Astrophysics Data System (ADS)

    Andriole, Katherine P.; Brasch, Robert C.; Gooding, Charles A.; Gould, Robert G.; Cohen, Pierre A.; Rencken, Ingo R.; Huang, H. K.

    1996-05-01

    A clinical comparison of computed radiography (CR) versus screen-film for imaging the critically-ill neonate is performed, utilizing a modified (hybrid) film cassette containing a CR (standard ST-V) imaging plate, a conventional screen and film, allowing simultaneous acquisition of perfectly matched CR and plain film images. For 100 portable neonatal chest and abdominal projection radiographs, plain film was subjectively compared to CR hardcopy. Three pediatric radiologists graded overall image quality on a scale of one (poor) to five (excellent), as well as visualization of various anatomic structures (i.e., lung parenchyma, pulmonary vasculature, tubes/lines) and pathological findings (i.e., pulmonary interstitial emphysema, pleural effusion, pneumothorax). Results analyzed using a combined kappa statistic of the differences between scores from each matched set, combined over the three readers, showed no statistically significant difference in overall image quality between screen-film and CR (p equals 0.19). Similarly, no statistically significant difference was seen between screen-film and CR for anatomic structure visualization and for visualization of pathological findings. These results indicate that the image quality of CR is comparable to plain film, and that CR may be a suitable alternative to screen-film imaging for portable neonatal chest and abdominal examinations.
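
    As a simplified illustration (hypothetical grades; the authors' combined three-reader kappa is not reproduced), a weighted Cohen's kappa between matched CR and screen-film quality scores can be computed with scikit-learn:

      import numpy as np
      from sklearn.metrics import cohen_kappa_score

      # Hypothetical 1-5 quality grades for matched CR / film pairs
      cr_grades   = np.array([4, 4, 3, 5, 4, 3, 4, 5, 3, 4])
      film_grades = np.array([4, 3, 3, 5, 4, 4, 4, 5, 3, 5])

      # Quadratic weighting suits ordinal scales such as 1-5 grades
      kappa = cohen_kappa_score(cr_grades, film_grades,
                                weights="quadratic")
      print(f"weighted kappa = {kappa:.2f}")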

  4. Statistical approaches in published ophthalmic clinical science papers: a comparison to statistical practice two decades ago.

    PubMed

    Zhang, Harrison G; Ying, Gui-Shuang

    2018-02-09

    The aim of this study is to evaluate the current practice of statistical analysis of eye data in clinical science papers published in the British Journal of Ophthalmology (BJO) and to determine whether the practice of statistical analysis has improved in the past two decades. All clinical science papers (n=125) published in BJO in January-June 2017 were reviewed for their statistical analysis approaches for analysing the primary ocular measure. We compared our findings to the results from a previous paper that reviewed BJO papers in 1995. Of 112 papers eligible for analysis, half of the studies analysed the data at an individual level because of the nature of observation, 16 (14%) studies analysed data from one eye only, 36 (32%) studies analysed data from both eyes at the ocular level, one study (1%) analysed the overall summary of ocular findings per individual and three (3%) studies used paired comparisons. Among studies with data available from both eyes, 50 (89%) of 56 papers in 2017 did not analyse data from both eyes or ignored the intereye correlation, compared with 60 (90%) of 67 papers in 1995 (P=0.96). Among studies that analysed data from both eyes at an ocular level, 33 (92%) of 36 studies completely ignored the intereye correlation in 2017, compared with 16 (89%) of 18 studies in 1995 (P=0.40). A majority of studies did not analyse the data properly when data from both eyes were available. The practice of statistical analysis did not improve in the past two decades. Collaborative efforts should be made in the vision research community to improve the practice of statistical analysis for ocular data. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  5. Statistical Analysis of Zebrafish Locomotor Response.

    PubMed

    Liu, Yiwen; Carmer, Robert; Zhang, Gaonan; Venkatraman, Prahatha; Brown, Skye Ashton; Pang, Chi-Pui; Zhang, Mingzhi; Ma, Ping; Leung, Yuk Fai

    2015-01-01

    Zebrafish larvae display rich locomotor behaviour upon external stimulation. The movement can be simultaneously tracked from many larvae arranged in multi-well plates. The resulting time-series locomotor data have been used to reveal new insights into neurobiology and pharmacology. However, the data are of large scale, and the corresponding locomotor behavior is affected by multiple factors. These issues pose a statistical challenge for comparing larval activities. To address this gap, this study has analyzed a visually-driven locomotor behaviour named the visual motor response (VMR) by the Hotelling's T-squared test. This test is congruent with comparing locomotor profiles from a time period. Different wild-type (WT) strains were compared using the test, which shows that they responded differently to light change at different developmental stages. The performance of this test was evaluated by a power analysis, which shows that the test was sensitive for detecting differences between experimental groups with sample numbers that were commonly used in various studies. In addition, this study investigated the effects of various factors that might affect the VMR by multivariate analysis of variance (MANOVA). The results indicate that the larval activity was generally affected by stage, light stimulus, their interaction, and location in the plate. Nonetheless, different factors affected larval activity differently over time, as indicated by a dynamical analysis of the activity at each second. Intriguingly, this analysis also shows that biological and technical repeats had negligible effect on larval activity. This finding is consistent with that from the Hotelling's T-squared test, and suggests that experimental repeats can be combined to enhance statistical power. Together, these investigations have established a statistical framework for analyzing VMR data, a framework that should be generally applicable to other locomotor data with similar structure.
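
    A minimal sketch of the two-sample Hotelling's T-squared test used above (hypothetical activity profiles, p time bins per larva, with the standard F-distribution conversion):

      import numpy as np
      from scipy import stats

      def hotelling_t2(x, y):
          # x, y: (n_subjects, p) matrices of activity profiles
          nx, p = x.shape
          ny = y.shape[0]
          dx = x.mean(axis=0) - y.mean(axis=0)
          S = ((nx - 1) * np.cov(x, rowvar=False) +
               (ny - 1) * np.cov(y, rowvar=False)) / (nx + ny - 2)
          t2 = nx * ny / (nx + ny) * dx @ np.linalg.solve(S, dx)
          f = t2 * (nx + ny - p - 1) / (p * (nx + ny - 2))
          return t2, stats.f.sf(f, p, nx + ny - p - 1)

      # Hypothetical: two strains, 24 larvae each, 5 time bins
      rng = np.random.default_rng(0)
      strain_a = rng.normal(1.0, 0.3, size=(24, 5))
      strain_b = rng.normal(1.2, 0.3, size=(24, 5))
      print(hotelling_t2(strain_a, strain_b))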

  7. How conservative is Fisher's exact test? A quantitative evaluation of the two-sample comparative binomial trial.

    PubMed

    Crans, Gerald G; Shuster, Jonathan J

    2008-08-15

    The debate as to which statistical methodology is most appropriate for the analysis of the two-sample comparative binomial trial has persisted for decades. Practitioners who favor the conditional methods of Fisher, i.e. Fisher's exact test (FET), claim that only experimental outcomes containing the same amount of information should be considered when performing analyses. Hence, the total number of successes should be fixed at its observed level in hypothetical repetitions of the experiment. Using conditional methods in clinical settings can pose interpretation difficulties, since results are derived using conditional sample spaces rather than the set of all possible outcomes. Perhaps more importantly from a clinical trial design perspective, this test can be too conservative, resulting in greater resource requirements and more subjects exposed to an experimental treatment. The actual significance level attained by FET (the size of the test) has not been reported in the statistical literature. Berger (J. R. Statist. Soc. D (The Statistician) 2001; 50:79-85) proposed assessing the conservativeness of conditional methods using p-value confidence intervals. In this paper we develop a numerical algorithm that calculates the size of FET for sample sizes, n, up to 125 per group at the two-sided significance level alpha = 0.05. Additionally, this numerical method is used to define new significance levels alpha* = alpha + epsilon, where epsilon is a small positive number, for each n, such that the size of the test is as close as possible to the pre-specified alpha (0.05 for the current work) without exceeding it. Lastly, a sample size and power calculation example is presented, which demonstrates the statistical advantages of implementing the adjustment to FET (using alpha* instead of alpha) in the two-sample comparative binomial trial. 2008 John Wiley & Sons, Ltd.
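
    The size computation can be sketched in a brute-force form (an illustrative reimplementation, not the authors' algorithm): mark every outcome pair that FET rejects at the nominal alpha, then take the supremum of the rejection probability over a grid of common success probabilities:

      import numpy as np
      from scipy import stats

      def fet_size(n, alpha=0.05):
          # Actual size of two-sided FET for two groups of n each
          reject = np.zeros((n + 1, n + 1))
          for x in range(n + 1):
              for y in range(n + 1):
                  _, pv = stats.fisher_exact([[x, n - x], [y, n - y]])
                  reject[x, y] = pv <= alpha
          size = 0.0
          for p in np.linspace(0.01, 0.99, 99):   # common p under H0
              probs = stats.binom.pmf(np.arange(n + 1), n, p)
              size = max(size, probs @ reject @ probs)
          return size

      print(fet_size(20))   # well below the nominal 0.05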

  8. Hydrometeor classification through statistical clustering of polarimetric radar measurements: a semi-supervised approach

    NASA Astrophysics Data System (ADS)

    Besic, Nikola; Ventura, Jordi Figueras i.; Grazioli, Jacopo; Gabella, Marco; Germann, Urs; Berne, Alexis

    2016-09-01

    Polarimetric radar-based hydrometeor classification is the procedure of identifying different types of hydrometeors by exploiting polarimetric radar observations. The main drawback of the existing supervised classification methods, mostly based on fuzzy logic, is a significant dependency on a presumed electromagnetic behaviour of different hydrometeor types. Namely, the results of the classification largely rely upon the quality of scattering simulations. The unsupervised approach, on the other hand, lacks the constraints related to hydrometeor microphysics. The idea of the proposed method is to compensate for these drawbacks by combining the two approaches in a way that microphysical hypotheses can, to a degree, adjust the content of the classes obtained statistically from the observations. This is done by means of an iterative approach, performed offline, which, in a statistical framework, examines clustered representative polarimetric observations by comparing them to the presumed polarimetric properties of each hydrometeor class. Aside from this comparison, a routine alters the content of clusters by encouraging further statistical clustering in case of non-identification. By merging all identified clusters, the multi-dimensional polarimetric signatures of various hydrometeor types are obtained for each of the studied representative datasets, i.e. for each radar system of interest. These are depicted by sets of centroids, which are then employed in operational labelling of different hydrometeors. The method has been applied to three C-band datasets, each acquired by a different operational radar from the MeteoSwiss Rad4Alp network, as well as to two X-band datasets acquired by two research mobile radars. The results are discussed through a comparative analysis which includes corresponding supervised and unsupervised approaches, emphasising the operational potential of the proposed method.

  9. [Technique of thulium laser in managing bladder cuff in nephroureterectomy for upper urinary tract urothelium carcinoma].

    PubMed

    Pang, Kun; Sun, Xiao-Wen; Liu, Shi-Bo; Li, Wei-Guo; Shao, Yi; Zhuo, Jian; Wei, Hai-Bin; Xia, Shu-Jie

    2012-11-13

    To explore the application of the thulium laser (2 µm laser) in managing the bladder cuff in nephroureterectomy for upper urinary tract urothelium carcinoma (UUT-UC). The medical records of 56 patients undergoing nephroureterectomy at our hospital were reviewed retrospectively. The operative indicators, oncologic outcomes and clinicopathologic data were compared among the groups of open surgery (Group A), electric coagulation (Group B) and the thulium laser technique (Group C). Furthermore, a model of burst pressure measurement was built to measure the different burst pressures of the sealed distal ureter. Follow-up results: when operative duration, intraoperative blood loss volume, removal time of drainage tube, removal time of catheter and hospital stay were compared among the three groups, Group A showed no statistically significant differences from Groups B and C in removal time of drainage tube or removal time of catheter, but significant differences existed in operative duration, intraoperative blood loss volume and hospital stay ((232 ± 52) vs (148 ± 47) and (130 ± 49) min, (358 ± 81) vs (136 ± 74) and (145 ± 70) ml, (13 ± 3) vs (11 ± 4) and (10 ± 3) d, all P < 0.05). No statistically significant differences existed between Groups B and C in any of the above indicators. Burst pressure measurement results: no statistically significant difference existed between Groups C and B ((116 ± 21) vs (139 ± 32) cm H2O, P > 0.05). For the surgical treatment of UUT-UC, the thulium laser technique shows no difference in operative indicators and oncologic outcomes compared to open surgery. Besides, it has the advantages of improved spatial beam quality and more precise tissue incision.

  10. Metrological characterization of X-ray diffraction methods at different acquisition geometries for determination of crystallite size in nano-scale materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uvarov, Vladimir, E-mail: vladimiru@savion.huji.ac.il; Popov, Inna

    2013-11-15

    Crystallite size values were determined by X-ray diffraction methods for 183 powder samples. The tested size range was from a few to several hundred nanometers. Crystallite size was calculated with direct use of the Scherrer equation, the Williamson–Hall method and the Rietveld procedure via the application of a series of commercial and free software. The results were statistically treated to estimate the significance of the differences in size resulting from these methods. We also estimated the effect of acquisition conditions (Bragg–Brentano, parallel-beam geometry, step size, counting time) and data processing on the calculated crystallite size values. On the basis of the obtained results it is possible to conclude that direct use of the Scherrer equation, the Williamson–Hall method and the Rietveld refinement employed by a series of software (EVA, PCW and TOPAS, respectively) yield very close results for crystallite sizes less than 60 nm for parallel-beam geometry and less than 100 nm for Bragg–Brentano geometry. However, we found that although the differences between the crystallite sizes calculated by the various methods are small in absolute value, they are statistically significant in some cases. The values of crystallite size determined from XRD were compared with those obtained by imaging in transmission (TEM) and scanning electron microscopes (SEM). It was found that there was a good correlation in size only for crystallites smaller than 50–60 nm. Highlights: • The crystallite sizes for 183 nanopowders were calculated using different XRD methods. • The obtained results were subject to statistical treatment. • Results obtained with Bragg–Brentano and parallel-beam geometries were compared. • The influence of XRD pattern acquisition conditions on the results was estimated. • Crystallite sizes calculated by XRD were compared with those obtained by TEM and SEM.
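
    For reference, direct use of the Scherrer equation D = K·λ/(β·cosθ) is a one-liner (illustrative values; instrumental broadening is assumed to have been subtracted already):

      import numpy as np

      def scherrer_size(fwhm_deg, two_theta_deg,
                        wavelength_nm=0.15406, K=0.9):
          # Crystallite size in nm; beta is the peak FWHM in radians
          beta = np.radians(fwhm_deg)
          theta = np.radians(two_theta_deg / 2.0)
          return K * wavelength_nm / (beta * np.cos(theta))

      # Cu K-alpha radiation, 0.25 deg FWHM peak at 2theta = 38 deg
      print(f"{scherrer_size(0.25, 38.0):.1f} nm")   # about 34 nm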

  11. What You Learn is What You See: Using Eye Movements to Study Infant Cross-Situational Word Learning

    PubMed Central

    Smith, Linda

    2016-01-01

    Recent studies show that both adults and young children possess powerful statistical learning capabilities to solve the word-to-world mapping problem. However, the underlying mechanisms that make statistical learning possible and powerful are not yet known. With the goal of providing new insights into this issue, the research reported in this paper used an eye tracker to record the moment-by-moment eye movement data of 14-month-old babies in statistical learning tasks. Various measures are applied to such fine-grained temporal data, such as looking duration and shift rate (the number of shifts in gaze from one visual object to the other) trial by trial, showing different eye movement patterns between strong and weak statistical learners. Moreover, an information-theoretic measure is developed and applied to gaze data to quantify the degree of learning uncertainty trial by trial. Next, a simple associative statistical learning model is applied to eye movement data and these simulation results are compared with empirical results from young children, showing strong correlations between these two. This suggests that an associative learning mechanism with selective attention can provide a cognitively plausible model of cross-situational statistical learning. The work represents the first steps to use eye movement data to infer underlying real-time processes in statistical word learning. PMID:22213894

  12. Evaluation of trauma care using TRISS method: the role of adjusted misclassification rate and adjusted w-statistic.

    PubMed

    Llullaku, Sadik S; Hyseni, Nexhmi Sh; Bytyçi, Cen I; Rexhepi, Sylejman K

    2009-01-15

    Major trauma is a leading cause of death worldwide. Evaluation of trauma care using the Trauma and Injury Severity Score (TRISS) method is focused on trauma outcome (deaths and survivors). For testing the TRISS method, the TRISS misclassification rate is used. Calculating the w-statistic, as the difference between observed and TRISS-expected survivors, we compare our trauma care results with the TRISS standard. The aim of this study is to analyze the interaction between the misclassification rate and the w-statistic and to adjust these parameters to be closer to the truth. Analysis of the components of the TRISS misclassification rate and w-statistic and the actual trauma outcome. The false negative (FN) component (deaths unexpected by the TRISS method) has two parts: preventable (Pd) and non-preventable (nonPd) trauma deaths. Pd represents inappropriate trauma care by an institution; non-preventable trauma deaths represent errors in the TRISS method. Removing patients with preventable trauma deaths we get an adjusted misclassification rate: (FP + FN - Pd)/N, or (b + c - Pd)/N. Subtracting nonPd from the FN value in the w-statistic formula we get an adjusted w-statistic: [FP - (FN - nonPd)]/N, that is (FP - Pd)/N, or (b - Pd)/N. Because the adjusted formulas clean the method of inappropriate trauma care, and clean trauma care of the method's errors, the TRISS adjusted misclassification rate and adjusted w-statistic give more realistic results and may be used in research on trauma outcome.
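
    The adjusted formulas translate directly into code (hypothetical counts; b and c follow the abstract's notation for FP and FN):

      def adjusted_triss(fp, fn, n, pd_):
          # fp: unexpected survivors (b); fn: unexpected deaths (c)
          # pd_: preventable deaths, the part of fn reflecting poor care
          misclassification = (fp + fn) / n
          adj_misclassification = (fp + fn - pd_) / n
          w = (fp - fn) / n        # observed minus expected survivors
          adj_w = (fp - pd_) / n   # removes the method's own errors
          return misclassification, adj_misclassification, w, adj_w

      # Hypothetical cohort: 400 patients, 12 unexpected survivors,
      # 10 unexpected deaths of which 4 were judged preventable
      print(adjusted_triss(fp=12, fn=10, n=400, pd_=4))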

  13. Comparison of long-term results between laparoscopy-assisted gastrectomy and open gastrectomy with D2 lymph node dissection for advanced gastric cancer.

    PubMed

    Hamabe, Atsushi; Omori, Takeshi; Tanaka, Koji; Nishida, Toshirou

    2012-06-01

    Laparoscopy-assisted gastrectomy (LAG) has been established as a low-invasive surgery for early gastric cancer. However, it remains unknown whether it is applicable also for advanced gastric cancer, mainly because the long-term results of LAG with D2 lymph node dissection for advanced gastric cancer have not been well validated compared with open gastrectomy (OG). A retrospective cohort study was performed to compare LAG and OG with D2 lymph node dissection. For this study, 167 patients (66 LAG and 101 OG patients) who underwent gastrectomy with D2 lymph node dissection for advanced gastric cancer were reviewed. Recurrence-free survival and overall survival time were estimated using Kaplan-Meier curves. Stratified log-rank statistical evaluation was used to compare the difference between the LAG and OG groups stratified by histologic type, pathologic T status, N status, and postoperative adjuvant chemotherapy. The adjusted Cox proportional hazards regression models were used to calculate the hazard ratios (HRs) of LAG. The 5-year recurrence-free survival rate was 89.6% in the LAG group and 75.8% in the OG group (nonsignificant difference; stratified log-rank statistic, 3.11; P = 0.0777). The adjusted HR of recurrence for LAG compared with OG was 0.389 [95% confidence interval (CI) 0.131-1.151]. The 5-year overall survival rate was 94.4% in the LAG group and 78.5% in the OG group (nonsignificant difference; stratified log-rank statistic, 0.4817; P = 0.4877). The adjusted HR of death for LAG compared with OG was 0.633 (95% CI 0.172-2.325). The findings show that LAG with D2 lymph node dissection is acceptable in terms of long-term results for advanced gastric cancer cases and may be applicable for advanced gastric cancer treatment.

  14. Total Ossicular Replacement Prosthesis: A New Fat Interposition Technique

    PubMed Central

    Saliba, Issam; Sabbah, Valérie; Poirier, Jackie Bibeau

    2018-01-01

    Objective: To compare audiometric results between the standard total ossicular replacement prosthesis (TORP-S) and a new fat interposition total ossicular replacement prosthesis (TORP-F) in pediatric and adult patients and to assess the complications and undesirable outcomes. Study design: This is a retrospective study. Methods: This study included 104 patients who had undergone titanium implants with TORP-F and 54 patients who had undergone the procedure with TORP-S between 2008 and 2013 in our tertiary care centers. The new technique consists of interposing a fat graft between the 4 legs of the universal titanium prosthesis (Medtronic Xomed Inc, Jacksonville, FL, USA) to provide a more stable TORP in the oval window niche. Normally, this prosthesis is designed to fit on the stapes’ head as a partial ossicular replacement prosthesis. Results: The proportion with a postoperative air-bone gap of less than 25 dB in the combined cohort was 69.2% and 41.7% for the TORP-F and TORP-S groups, respectively. The mean follow-up was 17 months postoperatively. By stratifying data, the pediatric cohort shows 56.5% in the TORP-F group (n = 52) compared with 40% in the TORP-S group (n = 29). However, the adult cohort shows 79.3% in the TORP-F group (n = 52) compared with 43.75% in the TORP-S group (n = 25). These improvements in hearing were statistically significant. There were no statistically significant differences in the speech discrimination scores. The only undesirable outcome that was statistically different between the 2 groups was prosthesis displacement: 7% in the TORP-F group compared with 19% in the TORP-S group (P = .03). Conclusions: The interposition of a fat graft between the legs of the titanium implant (TORP-F) provides superior hearing results compared with the standard procedure (TORP-S) in pediatric and adult populations because of its better stability in the oval window niche. PMID:29326537

  15. Comparative evaluation of serum antioxidant levels in periodontally diseased patients: An interventional study

    PubMed Central

    Thomas, Biju; Madani, Shabeer Mohamed; Prasad, B. Rajendra; Kumari, Suchetha

    2014-01-01

    Background: Periodontal disease is an immune-inflammatory disease characterized by connective tissue breakdown, loss of attachment and alveolar bone resorption. In normal physiology, there is a dynamic equilibrium between reactive oxygen species activity and antioxidant defense capacity, and when that equilibrium shifts in favor of reactive oxygen species, oxidative stress results. Oxidative stress is thought to play a causative role in the pathogenesis of periodontal diseases. Catalase (CAT) protects cells from hydrogen peroxide generated within them. Even though CAT is not essential for some cell types under normal conditions, it plays an important role in countering the effects of oxidative stress on the cell. Aim: This study was designed to estimate and compare the CAT and total antioxidant capacity (TAOC) levels in the serum of periodontitis, gingivitis, and healthy individuals before and after nonsurgical periodontal therapy. Materials and Methods: This study was conducted in the Department of Periodontics, A. B. Shetty Memorial Institute of Dental Sciences, Deralakatte, Mangalore. The study was designed as a single-blinded interventional study comprising 75 subjects, inclusive of both sexes and divided into three groups of 25 patients each. Patients were categorized into chronic periodontitis, gingivitis and healthy. The severity of inflammation was assessed by using the gingival index and pocket probing depth. Biochemical analysis was done to estimate the TAOC and CAT levels before and after nonsurgical periodontal therapy. Results obtained were then statistically analyzed using the ANOVA test and paired t-test. Results: The results showed a higher level of serum TAOC and CAT in the healthy group compared with the other groups. The difference was found to be statistically significant (P < 0.0001). The posttreatment levels of TAOC were statistically higher than the pretreatment levels in the periodontitis group. PMID:25191070

  16. Effect of essential oil of Origanum rotundifolium on some plant pathogenic bacteria, seed germination and plant growth of tomato

    NASA Astrophysics Data System (ADS)

    Dadaşoǧlu, Fatih; Kotan, Recep; Karagöz, Kenan; Dikbaş, Neslihan; Ćakmakçi, Ramazan; Ćakir, Ahmet; Kordali, Şaban; Özer, Hakan

    2016-04-01

    The aim of this study is to determine the effect of Origanum rotundifolium essential oil on some plant-pathogenic bacteria, seed germination and plant growth of tomato. A Xanthomonas axonopodis pv. vesicatoria strain (Xcv-761) and a Clavibacter michiganensis ssp. michiganensis strain (Cmm) were inoculated onto tomato seed. The seeds were tested for germination in vitro, and for disease severity and some plant growth parameters in vivo. In the in vitro assay, maximum seed germination was observed at the 62.5 µl/ml essential oil treatment in seeds inoculated with Xcv-761, and at the 62.5 µl/ml essential oil and streptomycin treatments in seeds inoculated with Cmm. The lowest number of infected cotyledons was observed at the 500 µg/ml streptomycin treatment in seeds inoculated with Cmm. In the in vivo assay, maximum seed germination was observed at the 250 µl/ml essential oil treatment in tomato inoculated with Cmm. The lowest disease severity was seen in Cmm-infected seeds with the 250 µl/ml essential oil application; these results were statistically significant when compared with pathogen-infected seeds. Similarly, for Xcv-761-infected seeds, the lowest disease severity resulted from the 250 µl/ml essential oil application. For the 62.5 µl/ml essential oil application to Cmm-infected seeds, disease severity was statistically insignificant compared to the 250 µl/ml essential oil application, but statistically significant compared to pathogen-infected seeds. The results showed that the essential oil of O. rotundifolium has potential to suppress some plant diseases when used at an appropriate dose.

  17. Financial statistics for public health dispensary decisions in Nigeria: insights on standard presentation typologies.

    PubMed

    Agundu, Prince Umor C

    2003-01-01

    Public health dispensaries in Nigeria have in recent times demonstrated the poise to boost corporate productivity in the new millennium and to drive the nation closer to concretising the lofty goal of health-for-all. This is very pronounced considering the face-lift given to the physical environment, the increase in the recruitment and development of professionals, and the upward review of financial subventions. However, there is little or no emphasis on basic statistical appreciation/application, which enhances the decision-making ability of corporate executives. This study used the responses of 120 senior public health officials in Nigeria and analyzed them with the chi-square statistical technique. The results established low statistical aptitude, inadequate statistical training programmes, and little/no emphasis on statistical literacy compared to computer literacy, amongst others. Consequently, it was recommended that these lapses be promptly addressed to enhance executive performance in these establishments. Basic statistical data presentation typologies have been articulated in this study to serve as first-aid instructions to the target group, as they represent the contributions of eminent scholars in this area of intellectualism.

  18. Effectiveness of Quantitative Real Time PCR in Long-Term Follow-up of Chronic Myeloid Leukemia Patients.

    PubMed

    Savasoglu, Kaan; Payzin, Kadriye Bahriye; Ozdemirkiran, Fusun; Berber, Belgin

    2015-08-01

    To determine the utility of the quantitative real-time PCR (RQ-PCR) assay in the follow-up of chronic myeloid leukemia (CML) patients. Cross-sectional, observational. Izmir Ataturk Education and Research Hospital, Izmir, Turkey, from 2009 to 2013. Cytogenetic, FISH and RQ-PCR test results from the materials of 177 CML patients, selected between 2009 and 2013, were set up for comparison analysis. Statistical analysis was performed to compare the FISH, karyotype and RQ-PCR results of the patients. Karyotyping and FISH specificity and sensitivity rates, determined by ROC analysis, were compared with the RQ-PCR results. The chi-square test was used to compare test failure rates. Sensitivity and specificity were 17.6% and 98% for karyotyping (p=0.118, p > 0.05) and 22.5% and 96% for FISH (p=0.064, p > 0.05), respectively. FISH sensitivity was slightly higher than that of karyotyping, and there was a strong correlation between the two (p < 0.001). The RQ-PCR test failure rate did not correlate with those of the other two tests (p > 0.05); however, the difference between the karyotyping and FISH test failure rates was statistically significant (p < 0.001). Apart from situations requiring karyotype analysis, the RQ-PCR assay can be used alone in the follow-up of CML disease.

  19. Comparing Assessment Methods in Undergraduate Statistics Courses

    ERIC Educational Resources Information Center

    Baxter, Sarah E.

    2017-01-01

    The purpose of this study was to compare undergraduate students' academic performance and attitudes about statistics in the context of two different types of assessment structures for an introductory statistics course. One assessment structure used in-class quizzes that emphasized computation and procedural fluency as well as vocabulary…

  20. Social and economic sustainability of urban systems: comparative analysis of metropolitan statistical areas in Ohio, USA

    EPA Science Inventory

    This article presents a general and versatile methodology for assessing sustainability with Fisher Information as a function of dynamic changes in urban systems. Using robust statistical methods, six Metropolitan Statistical Areas (MSAs) in Ohio were evaluated to comparatively as...

  1. Selecting the right statistical model for analysis of insect count data by using information theoretic measures.

    PubMed

    Sileshi, G

    2006-10-01

    Researchers and regulatory agencies often make statistical inferences from insect count data using modelling approaches that assume homogeneous variance. Such models do not allow for formal appraisal of variability which in its different forms is the subject of interest in ecology. Therefore, the objectives of this paper were to (i) compare models suitable for handling variance heterogeneity and (ii) select optimal models to ensure valid statistical inferences from insect count data. The log-normal, standard Poisson, Poisson corrected for overdispersion, zero-inflated Poisson, the negative binomial distribution and zero-inflated negative binomial models were compared using six count datasets on foliage-dwelling insects and five families of soil-dwelling insects. Akaike's and Schwarz Bayesian information criteria were used for comparing the various models. Over 50% of the counts were zeros even in locally abundant species such as Ootheca bennigseni Weise, Mesoplatys ochroptera Stål and Diaecoderus spp. The Poisson model after correction for overdispersion and the standard negative binomial distribution model provided better description of the probability distribution of seven out of the 11 insects than the log-normal, standard Poisson, zero-inflated Poisson or zero-inflated negative binomial models. It is concluded that excess zeros and variance heterogeneity are common data phenomena in insect counts. If not properly modelled, these properties can invalidate the normal distribution assumptions resulting in biased estimation of ecological effects and jeopardizing the integrity of the scientific inferences. Therefore, it is recommended that statistical models appropriate for handling these data properties be selected using objective criteria to ensure efficient statistical inference.
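
    A minimal sketch of such a comparison (hypothetical zero-heavy counts, using statsmodels; the zero-inflated negative binomial is omitted for brevity), with models ranked by AIC and BIC:

      import numpy as np
      import statsmodels.api as sm
      from statsmodels.discrete.count_model import ZeroInflatedPoisson

      # Hypothetical per-quadrat insect counts with excess zeros
      rng = np.random.default_rng(0)
      counts = np.where(rng.random(200) < 0.55, 0,
                        rng.poisson(4.0, 200))
      X = np.ones((len(counts), 1))   # intercept-only models

      fits = {
          "Poisson": sm.Poisson(counts, X).fit(disp=False),
          "NegBin":  sm.NegativeBinomial(counts, X).fit(disp=False),
          "ZIP":     ZeroInflatedPoisson(counts, X).fit(disp=False),
      }
      for name, f in fits.items():
          print(f"{name:8s} AIC = {f.aic:8.1f}  BIC = {f.bic:8.1f}")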

  2. Effectiveness of feature and classifier algorithms in character recognition systems

    NASA Astrophysics Data System (ADS)

    Wilson, Charles L.

    1993-04-01

    At the first Census Optical Character Recognition Systems Conference, NIST generated accuracy data for the participating character recognition systems. Most systems were tested on the recognition of isolated digits and upper and lower case alphabetic characters. The recognition experiments were performed on sample sizes of 58,000 digits, and 12,000 upper and lower case alphabetic characters. The algorithms used by the 26 conference participants included rule-based methods, image-based methods, statistical methods, and neural networks. The neural network methods included Multi-Layer Perceptrons, Learning Vector Quantization, Neocognitrons, and cascaded neural networks. In this paper 11 different systems are compared using correlations between the answers of different systems, comparing the decrease in error rate as a function of confidence of recognition, and comparing the writer dependence of recognition. This comparison shows that methods that used different algorithms for feature extraction and recognition performed with very high levels of correlation. This is true for neural network systems, hybrid systems, and statistically based systems, and leads to the conclusion that neural networks have not yet demonstrated a clear superiority over more conventional statistical methods. Comparison of these results with the models of Vapnik (for estimation problems), MacKay (for Bayesian statistical models), Moody (for effective parameterization), and Boltzmann models (for information content) demonstrates that as the limits of training data variance are approached, all classifier systems have similar statistical properties. The limiting condition can only be approached for sufficiently rich feature sets because the accuracy limit is controlled by the available information content of the training set, which must pass through the feature extraction process prior to classification.

  3. Efficacy of UV-C irradiation for inactivation of food-borne pathogens on sliced cheese packaged with different types and thicknesses of plastic films.

    PubMed

    Ha, Jae-Won; Back, Kyeong-Hwan; Kim, Yoon-Hee; Kang, Dong-Hyun

    2016-08-01

    In this study, the efficacy of UV-C light for inactivating Escherichia coli O157:H7, Salmonella Typhimurium, and Listeria monocytogenes on sliced cheese packaged with 0.07 mm films of polyethylene terephthalate (PET), polyvinylchloride (PVC), polypropylene (PP), and polyethylene (PE) was investigated. Compared with PET and PVC, packaging in PP and PE films yielded significantly reduced levels of the three pathogens relative to inoculated but non-treated controls. Therefore, PP and PE films of different thicknesses (0.07 mm, 0.10 mm, and 0.13 mm) were then evaluated for pathogen reduction on inoculated sliced cheese samples. Compared with the 0.10 and 0.13 mm films, 0.07 mm thick PP and PE films did not show statistically significant reductions relative to non-packaged treated samples. Moreover, there were no statistically significant differences between the efficacy of PP and PE films. These results suggest that suitable PP or PE film packaging in conjunction with UV-C radiation can be applied to control foodborne pathogens in the dairy industry. Copyright © 2016. Published by Elsevier Ltd.

  4. Statistical Techniques to Analyze Pesticide Data Program Food Residue Observations.

    PubMed

    Szarka, Arpad Z; Hayworth, Carol G; Ramanarayanan, Tharacad S; Joseph, Robert S I

    2018-06-26

    The U.S. EPA conducts dietary-risk assessments to ensure that levels of pesticides on food in the U.S. food supply are safe. Often these assessments utilize conservative residue estimates, maximum residue levels (MRLs), and a high-end estimate derived from registrant-generated field-trial data sets. A more realistic estimate of consumers' pesticide exposure from food may be obtained by utilizing residues from food-monitoring programs, such as the Pesticide Data Program (PDP) of the U.S. Department of Agriculture. A substantial portion of food-residue concentrations in PDP monitoring programs are below the limits of detection (left-censored), which makes the comparison of regulatory-field-trial and PDP residue levels difficult. In this paper, we present a novel adaptation of established statistical techniques, the Kaplan-Meier estimator (K-M), robust regression on order statistics (ROS), and the maximum-likelihood estimator (MLE), to quantify pesticide-residue concentrations in the presence of heavily censored data sets. The examined statistical approaches include the most commonly used parametric and nonparametric methods for handling left-censored data that have been used in the fields of medical and environmental sciences. This work presents a case study in which data of thiamethoxam residue on bell pepper generated from registrant field trials were compared with PDP-monitoring residue values. The results from the statistical techniques were evaluated and compared with commonly used simple substitution methods for the determination of summary statistics. It was found that the maximum-likelihood estimator (MLE) is the most appropriate statistical method to analyze this residue data set. Using the MLE technique, the data analyses showed that the median and mean PDP bell pepper residue levels were approximately 19 and 7 times lower, respectively, than the corresponding statistics of the field-trial residues.
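
    A hedged sketch of the censored-likelihood idea: assuming lognormal residues and a single limit of detection, detects contribute density terms and each non-detect contributes a cumulative-probability term. All numbers are invented, and this is not the authors' exact implementation:

```python
import numpy as np
from scipy import optimize, stats

# Invented residues (ppm): measured detects plus non-detects below one LOD
detects = np.array([0.021, 0.035, 0.012, 0.050, 0.018])
lod, n_censored = 0.010, 12

def neg_loglik(params):
    mu, sigma = params[0], np.exp(params[1])   # log-scale mean, sd > 0
    ll_det = stats.norm.logpdf(np.log(detects), mu, sigma).sum()
    ll_cen = n_censored * stats.norm.logcdf(np.log(lod), mu, sigma)
    return -(ll_det + ll_cen)

res = optimize.minimize(neg_loglik, x0=[np.log(0.02), 0.0])
mu, sigma = res.x[0], np.exp(res.x[1])
print("MLE median:", np.exp(mu))                 # lognormal median
print("MLE mean:  ", np.exp(mu + sigma**2 / 2))  # lognormal mean
```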

  5. The role of ensemble-based statistics in variational assimilation of cloud-affected observations from infrared imagers

    NASA Astrophysics Data System (ADS)

    Hacker, Joshua; Vandenberghe, Francois; Jung, Byoung-Jo; Snyder, Chris

    2017-04-01

    Effective assimilation of cloud-affected radiance observations from space-borne imagers, with the aim of improving cloud analysis and forecasting, has proven to be difficult. Large observation biases, nonlinear observation operators, and non-Gaussian innovation statistics present many challenges. Ensemble-variational data assimilation (EnVar) systems offer the benefits of flow-dependent background error statistics from an ensemble, and the ability of variational minimization to handle nonlinearity. The specific benefits of ensemble statistics, relative to the static background errors more commonly used in variational systems, have not been quantified for the problem of assimilating cloudy radiances. A simple experiment framework is constructed with a regional NWP model and an operational variational data assimilation system, to provide a basis for understanding the importance of ensemble statistics in cloudy radiance assimilation. Restricting the observations to those corresponding to clouds in the background forecast leads to innovations that are more Gaussian. The number of large innovations is reduced compared to the more general case of all observations, but not eliminated. The Huber norm is investigated to handle the fat tails of the distributions, and to allow more observations to be assimilated without the need for strict background checks that eliminate them. Comparing assimilation using only ensemble background error statistics with assimilation using only static background error statistics elucidates the importance of the ensemble statistics. Although the cost functions in both experiments converge to similar values after sufficient outer-loop iterations, the resulting cloud water, ice, and snow content are greater in the ensemble-based analysis. The subsequent forecasts from the ensemble-based analysis also retain more condensed water species, indicating that the local environment is more supportive of clouds. In this presentation we provide details that explain the apparent benefit from using ensembles for cloudy radiance assimilation in an EnVar context.
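
    The Huber penalty mentioned above can be illustrated in a few lines; it is quadratic near zero and linear in the tails, so outlying innovations contribute less than under a purely quadratic cost. The threshold k below is a conventional default, not a value from this study:

```python
import numpy as np

def huber(d, k=1.345):
    """Huber penalty for a normalized innovation d = (y - H(x)) / sigma.

    Quadratic near zero, linear in the tails, so large non-Gaussian
    innovations are down-weighted rather than rejected outright.
    """
    a = np.abs(d)
    return np.where(a <= k, 0.5 * d**2, k * a - 0.5 * k**2)

d = np.linspace(-6.0, 6.0, 7)
print(huber(d))         # compare with the quadratic penalty 0.5 * d**2
```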

  6. Measuring the statistical validity of summary meta-analysis and meta-regression results for use in clinical practice.

    PubMed

    Willis, Brian H; Riley, Richard D

    2017-09-20

    An important question for clinicians appraising a meta-analysis is: are the findings likely to be valid in their own practice: does the reported effect accurately represent the effect that would occur in their own clinical population? To this end we advance the concept of statistical validity, where the parameter being estimated equals the corresponding parameter for a new independent study. Using a simple ('leave-one-out') cross-validation technique, we demonstrate how we may test meta-analysis estimates for statistical validity using a new validation statistic, Vn, and derive its distribution. We compare this with the usual approach of investigating heterogeneity in meta-analyses and demonstrate the link between statistical validity and homogeneity. Using a simulation study, the properties of Vn and the Q statistic are compared for univariate random effects meta-analysis and a tailored meta-regression model, where information from the setting (included as model covariates) is used to calibrate the summary estimate to the setting of application. Their properties are found to be similar when there are 50 studies or more, but for fewer studies Vn has greater power than Q, at the cost of a higher type 1 error rate. The power and type 1 error rate of Vn are also shown to depend on the within-study variance, between-study variance, study sample size, and the number of studies in the meta-analysis. Finally, we apply Vn to two published meta-analyses and conclude that it usefully augments standard methods when deciding upon the likely validity of summary meta-analysis estimates in clinical practice. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
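
    The Vn statistic itself is not reproduced here, but the leave-one-out logic can be sketched: pool all studies but one under a random-effects model, then standardize the held-out study's deviation from the pooled estimate. The DerSimonian-Laird estimator and all numbers below are illustrative assumptions rather than the authors' method:

```python
import numpy as np

def dl_pool(y, v):
    """DerSimonian-Laird random-effects pooled estimate and tau^2."""
    w = 1.0 / v
    yw = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - yw) ** 2)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)
    w_re = 1.0 / (v + tau2)
    return np.sum(w_re * y) / np.sum(w_re), tau2

# Invented study effects and within-study variances
y = np.array([0.30, 0.10, 0.45, 0.22, 0.05, 0.38])
v = np.array([0.02, 0.03, 0.04, 0.02, 0.05, 0.03])

# Leave one study out, pool the rest, and standardize the held-out deviation
for i in range(len(y)):
    keep = np.arange(len(y)) != i
    mu_i, tau2_i = dl_pool(y[keep], v[keep])
    z = (y[i] - mu_i) / np.sqrt(v[i] + tau2_i)
    print(f"study {i}: z = {z:+.2f}")
```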

  7. DISSCO: direct imputation of summary statistics allowing covariates

    PubMed Central

    Xu, Zheng; Duan, Qing; Yan, Song; Chen, Wei; Li, Mingyao; Lange, Ethan; Li, Yun

    2015-01-01

    Background: Imputation of individual level genotypes at untyped markers using an external reference panel of genotyped or sequenced individuals has become standard practice in genetic association studies. Direct imputation of summary statistics can also be valuable, for example in meta-analyses where individual level genotype data are not available. Two methods (DIST and ImpG-Summary/LD) that assume a multivariate Gaussian distribution for the association summary statistics have been proposed for imputing association summary statistics. However, both methods assume that the correlations between association summary statistics are the same as the correlations between the corresponding genotypes. This assumption can be violated in the presence of confounding covariates. Methods: We analytically show that in the absence of covariates, correlation among association summary statistics is indeed the same as that among the corresponding genotypes, thus serving as a theoretical justification for the recently proposed methods. We further prove that in the presence of covariates, correlation among association summary statistics becomes the partial correlation of the corresponding genotypes controlling for covariates. We therefore develop direct imputation of summary statistics allowing covariates (DISSCO). Results: We consider two real-life scenarios where the correlation and partial correlation likely make a practical difference: (i) association studies in admixed populations; (ii) association studies in the presence of other confounding covariate(s). Application of DISSCO to real datasets under both scenarios shows at least comparable, if not better, performance compared with existing correlation-based methods, particularly for lower frequency variants. For example, DISSCO can reduce the absolute deviation from the truth by 3.9–15.2% for variants with minor allele frequency <5%. Availability and implementation: http://www.unc.edu/∼yunmli/DISSCO. Contact: yunli@med.unc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25810429
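
    A minimal sketch of the partial-correlation argument: residualizing two simulated genotype vectors on an invented confounding covariate before correlating them yields the partial correlation that, per the result above, governs summary statistics in the presence of covariates:

```python
import numpy as np

def partial_corr(g1, g2, covar):
    """Correlation of g1 and g2 after regressing out a covariate vector."""
    C = np.column_stack([np.ones(len(g1)), covar])
    r1 = g1 - C @ np.linalg.lstsq(C, g1, rcond=None)[0]
    r2 = g2 - C @ np.linalg.lstsq(C, g2, rcond=None)[0]
    return np.corrcoef(r1, r2)[0, 1]

rng = np.random.default_rng(2)
ancestry = rng.normal(size=500)                 # invented confounder
g1 = rng.binomial(2, 0.3, 500) + 0.5 * ancestry
g2 = rng.binomial(2, 0.3, 500) + 0.5 * ancestry
print(np.corrcoef(g1, g2)[0, 1])                # inflated by the confounder
print(partial_corr(g1, g2, ancestry))           # close to zero
```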

  8. A framework for incorporating DTI Atlas Builder registration into Tract-Based Spatial Statistics and a simulated comparison to standard TBSS.

    PubMed

    Leming, Matthew; Steiner, Rachel; Styner, Martin

    2016-02-27

    Tract-based spatial statistics (TBSS) is a software pipeline widely employed in comparative analysis of white matter integrity from diffusion tensor imaging (DTI) datasets. In this study, we seek to evaluate the relationship between different methods of atlas registration for use with TBSS and different measurements of DTI (fractional anisotropy, FA; axial diffusivity, AD; radial diffusivity, RD; and mean diffusivity, MD). To do so, we have developed a novel tool that builds on existing diffusion atlas building software, integrating it into an adapted version of TBSS called DAB-TBSS (DTI Atlas Builder-Tract-Based Spatial Statistics) by using the advanced registration offered in DTI Atlas Builder. To compare the effectiveness of these two versions of TBSS, we also propose a framework for simulating population differences for diffusion tensor imaging data, providing a more substantive means of empirically comparing DTI group analysis programs such as TBSS. In this study, we used 33 diffusion tensor imaging datasets and simulated group-wise changes in these data by increasing, in three different simulations, the principal eigenvalue (directly altering AD), the second and third eigenvalues (RD), and all three eigenvalues (MD) in the genu, the right uncinate fasciculus, and the left IFO. Additionally, we assessed the benefits of comparing the tensors directly using a functional analysis of diffusion tensor tract statistics (FADTTS). Our results indicate comparable levels of FA-based detection between DAB-TBSS and TBSS, with standard TBSS registration reporting a higher rate of false positives in other measurements of DTI. Within the simulated changes investigated here, this study suggests that the use of DTI Atlas Builder's registration enhances TBSS group-based studies.
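
    For orientation, the sketch below computes the four DTI scalars from tensor eigenvalues and mimics the kind of simulated group change described (scaling the principal eigenvalue raises AD); the eigenvalues are typical illustrative values, not data from this study:

```python
import numpy as np

def dti_scalars(l1, l2, l3):
    """FA, AD, RD and MD from the three diffusion tensor eigenvalues."""
    md = (l1 + l2 + l3) / 3.0
    fa = np.sqrt(1.5 * ((l1 - md)**2 + (l2 - md)**2 + (l3 - md)**2)
                 / (l1**2 + l2**2 + l3**2))
    return fa, l1, (l2 + l3) / 2.0, md           # FA, AD, RD, MD

# Typical white-matter eigenvalues (x10^-3 mm^2/s), values illustrative only
l1, l2, l3 = 1.4, 0.35, 0.35
print(dti_scalars(l1, l2, l3))
print(dti_scalars(1.1 * l1, l2, l3))  # inflated principal eigenvalue raises AD
```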

  9. Effects of Plasma Rich in Growth Factors and Platelet-Rich Fibrin on Proliferation and Viability of Human Gingival Fibroblasts

    PubMed Central

    Vahabi, Surena; Vaziri, Shahram; Torshabi, Maryam

    2015-01-01

    Objectives: Platelet preparations are commonly used to enhance bone and soft tissue regeneration. Considering the existing controversies on the efficacy of platelet products for tissue regeneration, more in vitro studies are required. The aim of the present study was to compare the in vitro effects of plasma rich in growth factors (PRGF) and platelet-rich fibrin (PRF) on proliferation and viability of human gingival fibroblasts (HGFs). Materials and Methods: Anitua's PRGF and Choukroun's PRF were prepared according to the standard protocols. After culture periods of 24, 48 and 72 hours, proliferation of HGFs was evaluated by the methyl thiazol tetrazolium assay. Statistical analysis was performed using one-way ANOVA followed by Tukey-Kramer's multiple comparisons, and P-values < 0.05 were considered statistically significant. Results: PRGF treatment induced statistically significant (P<0.001) proliferation of HGF cells compared to the negative control (100% viability) at 24, 48 and 72 hours, with values of 123%±2.25%, 102%±2.8% and 101%±3.92%, respectively. The PRF membrane treatment of HGF cells had a statistically significant effect on cell proliferation (21%±1.73%, P<0.001) at 24 hours compared to the negative control. However, at 48 and 72 hours after treatment, PRF had a negative effect on HGF cell proliferation and caused 38% and 60% decreases in viability and proliferation compared to the negative control, respectively. HGF cell proliferation was significantly higher in the PRGF group than in the PRF group (P<0.001). Conclusion: This study demonstrated that PRGF had a strong stimulatory effect on HGF cell viability and proliferation compared to PRF. PMID:26877740

  10. Comparison of classical statistical methods and artificial neural network in traffic noise prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nedic, Vladimir, E-mail: vnedic@kg.ac.rs; Despotovic, Danijela, E-mail: ddespotovic@kg.ac.rs; Cvetanovic, Slobodan, E-mail: slobodan.cvetanovic@eknfak.ni.ac.rs

    2014-11-15

    Traffic is the main source of noise in urban environments and significantly affects human mental and physical health and labor productivity. Therefore it is very important to model the noise produced by various vehicles. Techniques for traffic noise prediction are mainly based on regression analysis, which generally is not good enough to describe the trends of noise. In this paper the application of artificial neural networks (ANNs) for the prediction of traffic noise is presented. As input variables of the neural network, the structure of the traffic flow and the average speed of the traffic flow are chosen. The output variable of the network is the equivalent noise level in the given time period, L_eq. Based on these parameters, the network is modeled, trained and tested through a comparative analysis of the calculated values and measured levels of traffic noise, using an originally developed user-friendly software package. It is shown that artificial neural networks can be a useful tool for the prediction of noise with sufficient accuracy. In addition, the measured values were also used to calculate the equivalent noise level by means of classical methods, and a comparative analysis is given. The results clearly show that the ANN approach is superior to other statistical methods in traffic noise level prediction. - Highlights: • We proposed an ANN model for prediction of traffic noise. • We developed an originally designed user-friendly software package. • The results are compared with classical statistical methods. • The ANN model showed much better predictive capability.
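
    A hedged sketch of such an ANN noise model with scikit-learn; the input features, the invented formula generating the training targets, and the network size are assumptions for illustration, not the authors' configuration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Hypothetical inputs: total flow (veh/h), share of heavy vehicles, and
# average speed (km/h); target: equivalent noise level Leq (dB) generated
# from an invented formula purely to give the network something to learn.
rng = np.random.default_rng(3)
n = 500
X = np.column_stack([rng.uniform(100, 3000, n),
                     rng.uniform(0.0, 0.3, n),
                     rng.uniform(20, 90, n)])
leq = (40 + 10 * np.log10(X[:, 0]) + 8 * X[:, 1] + 0.05 * X[:, 2]
       + rng.normal(0, 1, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, leq, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8),
                                   max_iter=5000, random_state=0))
model.fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```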

  11. Detecting Genomic Clustering of Risk Variants from Sequence Data: Cases vs. Controls

    PubMed Central

    Schaid, Daniel J.; Sinnwell, Jason P.; McDonnell, Shannon K.; Thibodeau, Stephen N.

    2013-01-01

    As the ability to measure dense genetic markers approaches the limit of the DNA sequence itself, taking advantage of possible clustering of genetic variants in, and around, a gene would benefit genetic association analyses, and likely provide biological insights. The greatest benefit might be realized when multiple rare variants cluster in a functional region. Several statistical tests have been developed, one of which is based on the popular Kulldorff scan statistic for spatial clustering of disease. We extended another popular spatial clustering method – Tango’s statistic – to genomic sequence data. An advantage of Tango’s method is that it is rapid to compute, and when a single test statistic is computed, its distribution is well approximated by a scaled chi-square distribution, making computation of p-values very rapid. We compared the Type-I error rates and power of several clustering statistics, as well as the omnibus sequence kernel association test (SKAT). Although our version of Tango’s statistic, which we call the “Kernel Distance” statistic, took approximately half the time of the Kulldorff scan statistic to compute, it had slightly less power than the scan statistic. Our results showed that the Ionita-Laza version of Kulldorff’s scan statistic had the greatest power over a range of clustering scenarios. PMID:23842950

  12. [Continuity of hospital identifiers in hospital discharge data - Analysis of the nationwide German DRG Statistics from 2005 to 2013].

    PubMed

    Nimptsch, Ulrike; Wengler, Annelene; Mansky, Thomas

    2016-11-01

    In Germany, nationwide hospital discharge data (DRG statistics provided by the research data centers of the Federal Statistical Office and the Statistical Offices of the 'Länder') are increasingly used as a data source for health services research. Within these data, hospitals can be separated via their hospital identifier ([Institutionskennzeichen] IK). However, this hospital identifier primarily designates the invoicing unit and is not necessarily equivalent to one hospital location. Aiming to investigate the direction and extent of possible bias in hospital-level analyses, this study examines the continuity of the hospital identifier in cross-sectional and longitudinal approaches and compares the results to official hospital census statistics. Within the DRG statistics from 2005 to 2013, the annual number of hospitals as classified by hospital identifiers was counted for each year of observation. The annual number of hospitals derived from DRG statistics was compared to the number of hospitals in the official census statistics 'Grunddaten der Krankenhäuser'. Subsequently, the temporal continuity of hospital identifiers in the DRG statistics was analyzed within cohorts of hospitals. Until 2013, the annual number of hospital identifiers in the DRG statistics fell by 175 (from 1,725 to 1,550). This decline affected only providers with small or medium case volume. The number of hospitals identified in the DRG statistics was lower than the number given in the census statistics (e.g., in 2013, 1,550 hospital identifiers vs. 1,668 hospitals in the census statistics). The longitudinal analyses revealed that the majority of hospital identifiers persisted over the years of observation, while one fifth of hospital identifiers changed. In cross-sectional studies of German hospital discharge data, separating hospitals via the hospital identifier might lead to underestimation of the number of hospitals and consequent overestimation of caseload per hospital. Discontinuities of hospital identifiers over time might impair the follow-up of hospital cohorts. These limitations must be taken into account in analyses of German hospital discharge data focusing on the hospital level. Copyright © 2016. Published by Elsevier GmbH.

  13. Generalization of Entropy Based Divergence Measures for Symbolic Sequence Analysis

    PubMed Central

    Ré, Miguel A.; Azad, Rajeev K.

    2014-01-01

    Entropy based measures have been frequently used in symbolic sequence analysis. A symmetrized and smoothed form of Kullback-Leibler divergence or relative entropy, the Jensen-Shannon divergence (JSD), is of particular interest because of its sharing properties with families of other divergence measures and its interpretability in different domains including statistical physics, information theory and mathematical statistics. The uniqueness and versatility of this measure arise because of a number of attributes including generalization to any number of probability distributions and association of weights to the distributions. Furthermore, its entropic formulation allows its generalization in different statistical frameworks, such as non-extensive Tsallis statistics and higher-order Markovian statistics. We revisit these generalizations and propose a new generalization of JSD in the integrated Tsallis and Markovian statistical framework. We show that this generalization can be interpreted in terms of mutual information. We also investigate the performance of different JSD generalizations in deconstructing chimeric DNA sequences assembled from bacterial genomes including those of E. coli, S. enterica typhi, Y. pestis and H. influenzae. Our results show that the JSD generalizations bring in more pronounced improvements when the sequences being compared are from phylogenetically proximal organisms, which are often difficult to distinguish because of their compositional similarity. While small but noticeable improvements were observed with the Tsallis statistical JSD generalization, relatively large improvements were observed with the Markovian generalization. In contrast, the proposed Tsallis-Markovian generalization yielded more pronounced improvements relative to the Tsallis and Markovian generalizations, specifically when the sequences being compared arose from phylogenetically proximal organisms. PMID:24728338
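
    A small sketch of the weighted Jensen-Shannon divergence that these generalizations build on (standard Shannon form only; the Tsallis and Markovian extensions are not shown). The two composition vectors are invented:

```python
import numpy as np
from scipy.stats import entropy

def jsd(dists, weights=None):
    """Weighted Jensen-Shannon divergence of several distributions."""
    p = np.asarray(dists, dtype=float)
    w = (np.full(len(p), 1.0 / len(p)) if weights is None
         else np.asarray(weights, dtype=float))
    mix = w @ p                                  # weighted mixture distribution
    return entropy(mix, base=2) - np.sum(w * np.array([entropy(q, base=2)
                                                       for q in p]))

# Nucleotide compositions (A, C, G, T) of two invented sequence segments
p1 = np.array([0.25, 0.25, 0.25, 0.25])
p2 = np.array([0.40, 0.10, 0.40, 0.10])
print(jsd([p1, p2]))           # 0 for identical inputs, up to 1 in base 2
```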

  14. The effect of rare variants on inflation of the test statistics in case-control analyses.

    PubMed

    Pirie, Ailith; Wood, Angela; Lush, Michael; Tyrer, Jonathan; Pharoah, Paul D P

    2015-02-20

    The detection of bias due to cryptic population structure is an important step in the evaluation of findings of genetic association studies. The standard method of measuring this bias in a genetic association study is to compare the observed median association test statistic to the expected median test statistic. This ratio is inflated in the presence of cryptic population structure. However, inflation may also be caused by the properties of the association test itself, particularly in the analysis of rare variants. We compared the properties of the three most commonly used association tests, the likelihood ratio test, the Wald test and the score test, when testing rare variants for association using simulated data. We found evidence of inflation in the median test statistics of the likelihood ratio and score tests for tests of variants with fewer than 20 heterozygotes across the sample, regardless of the total sample size. The test statistics for the Wald test were under-inflated at the median for variants below the same minor allele frequency. In a genetic association study, if a substantial proportion of the genetic variants tested have rare minor allele frequencies, the properties of the association test may mask the presence or absence of bias due to population structure. The use of either the likelihood ratio test or the score test is likely to lead to inflation in the median test statistic in the absence of population structure. In contrast, the use of the Wald test is likely to result in under-inflation of the median test statistic, which may mask the presence of population structure.
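
    The inflation check described here is commonly summarized by the genomic inflation factor, the ratio of the observed median statistic to the median of the null chi-squared distribution; a minimal sketch on simulated statistics:

```python
import numpy as np
from scipy.stats import chi2

def inflation_factor(obs_stats):
    """Genomic inflation lambda: observed vs. expected median chi-square."""
    return np.median(obs_stats) / chi2.ppf(0.5, df=1)  # expected median ~0.455

# Hypothetical 1-df association test statistics for 10,000 variants
rng = np.random.default_rng(4)
null_stats = rng.chisquare(1, size=10_000)
print(inflation_factor(null_stats))        # ~1.0 under the null
print(inflation_factor(1.2 * null_stats))  # >1 mimics inflation
```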

  15. Generalization of entropy based divergence measures for symbolic sequence analysis.

    PubMed

    Ré, Miguel A; Azad, Rajeev K

    2014-01-01

    Entropy based measures have been frequently used in symbolic sequence analysis. A symmetrized and smoothed form of Kullback-Leibler divergence or relative entropy, the Jensen-Shannon divergence (JSD), is of particular interest because of its sharing properties with families of other divergence measures and its interpretability in different domains including statistical physics, information theory and mathematical statistics. The uniqueness and versatility of this measure arise because of a number of attributes including generalization to any number of probability distributions and association of weights to the distributions. Furthermore, its entropic formulation allows its generalization in different statistical frameworks, such as non-extensive Tsallis statistics and higher-order Markovian statistics. We revisit these generalizations and propose a new generalization of JSD in the integrated Tsallis and Markovian statistical framework. We show that this generalization can be interpreted in terms of mutual information. We also investigate the performance of different JSD generalizations in deconstructing chimeric DNA sequences assembled from bacterial genomes including those of E. coli, S. enterica typhi, Y. pestis and H. influenzae. Our results show that the JSD generalizations bring in more pronounced improvements when the sequences being compared are from phylogenetically proximal organisms, which are often difficult to distinguish because of their compositional similarity. While small but noticeable improvements were observed with the Tsallis statistical JSD generalization, relatively large improvements were observed with the Markovian generalization. In contrast, the proposed Tsallis-Markovian generalization yielded more pronounced improvements relative to the Tsallis and Markovian generalizations, specifically when the sequences being compared arose from phylogenetically proximal organisms.

  16. Hybrid cochlear implantation: quality of life, quality of hearing, and working performance compared to patients with conventional unilateral or bilateral cochlear implantation.

    PubMed

    Härkönen, Kati; Kivekäs, Ilkka; Kotti, Voitto; Sivonen, Ville; Vasama, Juha-Pekka

    2017-10-01

    The objective of the present study is to evaluate the effect of hybrid cochlear implantation (hCI) on quality of life (QoL), quality of hearing (QoH), and working performance in adult patients, and to compare the long-term results of patients with hCI to those of patients with conventional unilateral cochlear implantation (CI), bilateral CI, and single-sided deafness (SSD) with CI. Sound localization accuracy and a speech-in-noise test were also compared between these groups. Eight patients with high-frequency sensorineural hearing loss of unknown etiology were selected for the study. Patients with hCI had better long-term speech perception in noise than uni- or bilateral CI patients, but the difference was not statistically significant. The sound localization accuracy was equal in the hCI, bilateral CI, and SSD patients. QoH was statistically significantly better in bilateral CI patients than in the others. In hCI patients, residual hearing was preserved in all patients after the surgery. During the 3.6-year follow-up, the mean hearing threshold at 125-500 Hz decreased on average by 15 dB HL in the implanted ear. QoL and working performance improved significantly in all CI patients. Hearing outcomes with hCI are comparable to the results of bilateral CI or CI with SSD, but hearing in noise and sound localization are statistically significantly better than with unilateral CI. Interestingly, the impact of CI on QoL, QoH, and working performance was similar in all groups.

  17. Coronal Holes and Solar f-Mode Wave Scattering Off Linear Boundaries

    NASA Astrophysics Data System (ADS)

    Hess Webber, Shea A.

    2016-11-01

    Coronal holes (CHs) are solar atmospheric features that have reduced emission in the extreme ultraviolet (EUV) spectrum due to decreased plasma density along open magnetic field lines. CHs are the source of the fast solar wind, can influence other solar activity, and track the solar cycle. Our interest in them deals with boundary detection near the solar surface. Detecting CH boundaries is important for estimating their size and tracking their evolution through time, as well as for comparing the physical properties within and outside of the feature. In this thesis, we (1) investigate CHs using statistical properties and image processing techniques on EUV images to detect CH boundaries in the low corona and chromosphere. SOHO/EIT data is used to locate polar CH boundaries on the solar limb, which are then tracked through two solar cycles. Additionally, we develop an edge-detection algorithm that we use on SDO/AIA data of a polar hole extension with an approximately linear boundary. These locations are used later to inform part of the helioseismic investigation; (2) develop a local time-distance (TD) helioseismology technique that can be used to detect CH boundary signatures at the photospheric level. We employ a new averaging scheme that makes use of the quasi-linear topology of elongated scattering regions, and create simulated data to test the new technique and compare results of some associated assumptions. This method enhances the wave propagation signal in the direction perpendicular to the linear feature and reduces the computational time of the TD analysis. We also apply a new statistical analysis of the significance of differences between the TD results; and (3) apply the TD techniques to solar CH data from SDO/HMI. The data correspond to the AIA data used in the edge-detection algorithm on EUV images. We look for statistically significant differences between the TD results inside and outside the CH region. In investigation (1), we found that the polar CH areas did not change significantly between minima, even though the magnetic field strength weakened. The results of (2) indicate that TD helioseismology techniques can be extended to make use of feature symmetry in the domain. The linear technique used here produces results that differ between a linear scattering region and a circular scattering region, shown using the simulated data algorithm. This suggests that using usual TD methods on scattering regions that are radially asymmetric may produce results with signatures of the anisotropy. The results of (1) and (3) indicate that the TD signal within our CH is statistically significantly different compared to unrelated quiet sun results. Surprisingly, the TD results in the quiet sun near the CH boundary also show significant differences compared to the separate quiet sun.

  18. Important Literature in Endocrinology: Citation Analysis and Historical Methodology.

    ERIC Educational Resources Information Center

    Hurt, C. D.

    1982-01-01

    Results of a study comparing two approaches to the identification of important literature in endocrinology reveal that the association between the rankings of cited items produced by the two methods is not statistically significant, and that the use of citation or historical analysis alone will not yield the same set of literature. Forty-two sources are appended. (EJS)

  19. Contribution of artificial intelligence to the knowledge of prognostic factors in laryngeal carcinoma.

    PubMed

    Zapater, E; Moreno, S; Fortea, M A; Campos, A; Armengot, M; Basterra, J

    2000-11-01

    Many studies have investigated prognostic factors in laryngeal carcinoma, with sometimes conflicting results. Apart from the importance of environmental factors, the different statistical methods employed may have contributed to such discrepancies. A program based on artificial intelligence techniques was designed to determine the prognostic factors in a series of 122 laryngeal carcinomas. The results obtained were compared with those derived from two classical statistical methods (Cox regression and mortality tables). Tumor location was found to be the most important prognostic factor by all methods. The proposed intelligent system was found to be a sound method capable of detecting exceptional cases.

  20. Evaluation of EMIT and RIA high volume test procedures for THC metabolites in urine utilizing GC/MS confirmation.

    PubMed

    Abercrombie, M L; Jewell, J S

    1986-01-01

    Results of EMIT, Abuscreen RIA, and GC/MS tests for THC metabolites in a high volume random urinalysis program are compared. Samples were field tested by non-laboratory personnel with an EMIT system using a 100 ng/mL cutoff. Samples were then sent to the Army Forensic Toxicology Drug Testing Laboratory (WRAMC) at Fort Meade, Maryland, where they were tested by RIA (Abuscreen) using a statistical 100 ng/mL cutoff. Confirmations of all RIA positives were accomplished using a GC/MS procedure. EMIT and RIA results agreed for 91% of samples. Data indicated a 4% false positive rate and a 10% false negative rate for EMIT field testing. In a related study, results for samples which tested positive by RIA for THC metabolites using a statistical 100 ng/mL cutoff were compared with results by GC/MS utilizing a 20 ng/mL cutoff for the THCA metabolite. Presence of THCA metabolite was detected in 99.7% of RIA positive samples. No relationship between quantitations determined by the two tests was found.

  1. Long-term Results of an Analytical Assessment of Student Compounded Preparations.

    PubMed

    Roark, Angie M; Anksorus, Heidi N; Shrewsbury, Robert P

    2014-11-15

    To investigate the long-term (ie, 6-year) impact of a required remake vs an optional remake on student performance in a compounding laboratory course in which students' compounded preparations were analyzed. The analysis data for several preparations made by students were compared for differences in the analyzed content of the active pharmaceutical ingredient (API) and the number of students who successfully compounded the preparation on the first attempt. There was a consistent statistical difference in the API amount or concentration in 4 of the preparations (diphenhydramine, ketoprofen, metoprolol, and progesterone) in each optional remake year compared to the required remake year. As the analysis requirement was continued, the outcome for each preparation approached and/or attained the expected API result. Two preparations required more than 1 year to demonstrate a statistical difference. The analytical assessment resulted in a consistent, long-term improvement in student performance during the 5-year period after the optional remake policy was instituted. Our assumption is that investment in such an assessment would result in similar benefits at other colleges and schools of pharmacy.

  2. Comparative evaluation of the results of three techniques in the reconstruction of the anterior cruciate ligament, with a minimum follow-up of two years.

    PubMed

    Cury, Ricardo de Paula Leite; Sprey, Jan Willem Cerf; Bragatto, André Luiz Lima; Mansano, Marcelo Valentim; Moscovici, Herman Fabian; Guglielmetti, Luiz Gabriel Betoni

    2017-01-01

    To compare the clinical results of the reconstruction of the anterior cruciate ligament by transtibial, transportal, and outside-in techniques. This was a retrospective study on 90 patients (ACL reconstruction with autologous flexor tendons) operated between August 2009 and June 2012, by the medial transportal (30), transtibial (30), and "outside-in" (30) techniques. The following parameters were assessed: objective and subjective IKDC, Lysholm, KT1000, Lachman test, Pivot-Shift and anterior drawer test. On physical examination, the Lachman test and Pivot-Shift indicated a slight superiority of the outside-in technique, but without statistical significance (p = 0.132 and p = 0.186, respectively). The anterior drawer, KT1000, subjective IKDC, Lysholm, and objective IKDC tests showed similar results in the groups studied. A higher number of complications was observed in the medial transportal technique (p = 0.033). There were no statistically significant differences in the clinical results of patients undergoing reconstruction of the anterior cruciate ligament by transtibial, medial transportal, and outside-in techniques.

  3. The Skillings-Mack test (Friedman test when there are missing data).

    PubMed

    Chatfield, Mark; Mander, Adrian

    2009-04-01

    The Skillings-Mack statistic (Skillings and Mack, 1981, Technometrics 23: 171-177) is a general Friedman-type statistic that can be used in almost any block design with an arbitrary missing-data structure. The missing data can be either missing by design, for example, an incomplete block design, or missing completely at random. The Skillings-Mack test is equivalent to the Friedman test when there are no missing data in a balanced complete block design, and the Skillings-Mack test is equivalent to the test suggested in Durbin (1951, British Journal of Psychology, Statistical Section 4: 85-90) for a balanced incomplete block design. The Friedman test was implemented in Stata by Goldstein (1991, Stata Technical Bulletin 3: 26-27) and further developed in Goldstein (2005, Stata Journal 5: 285). This article introduces the skilmack command, which performs the Skillings-Mack test. The skilmack command is also useful when there are many ties or equal ranks (N.B. the Friedman statistic compared with the chi-squared distribution will give a conservative result), as well as for small samples; appropriate results can be obtained by simulating the distribution of the test statistic under the null hypothesis.
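
    Python has no standard Skillings-Mack implementation, but the complete-data special case, the Friedman test, is available in SciPy; the block design below is invented. For incomplete blocks, the null distribution could be simulated, as the abstract suggests:

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Invented balanced complete block design: 6 blocks x 3 treatments
blocks = np.array([[9.0, 9.5, 10.1],
                   [8.2, 8.9, 9.4],
                   [7.5, 8.1, 8.3],
                   [9.1, 9.0, 10.0],
                   [8.8, 9.4, 9.8],
                   [7.9, 8.2, 8.6]])
stat, p = friedmanchisquare(*blocks.T)   # one sample per treatment column
print(stat, p)
```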

  4. Statistical analysis plan for the family-led rehabilitation after stroke in India (ATTEND) trial: A multicenter randomized controlled trial of a new model of stroke rehabilitation compared to usual care.

    PubMed

    Billot, Laurent; Lindley, Richard I; Harvey, Lisa A; Maulik, Pallab K; Hackett, Maree L; Murthy, Gudlavalleti Vs; Anderson, Craig S; Shamanna, Bindiganavale R; Jan, Stephen; Walker, Marion; Forster, Anne; Langhorne, Peter; Verma, Shweta J; Felix, Cynthia; Alim, Mohammed; Gandhi, Dorcas Bc; Pandian, Jeyaraj Durai

    2017-02-01

    Background In low- and middle-income countries, few patients receive organized rehabilitation after stroke, yet the burden of chronic diseases such as stroke is increasing in these countries. Affordable models of effective rehabilitation could have a major impact. The ATTEND trial is evaluating a family-led caregiver delivered rehabilitation program after stroke. Objective To publish the detailed statistical analysis plan for the ATTEND trial prior to trial unblinding. Methods Based upon the published registration and protocol, the blinded steering committee and management team, led by the trial statistician, have developed a statistical analysis plan. The plan has been informed by the chosen outcome measures, the data collection forms and knowledge of key baseline data. Results The resulting statistical analysis plan is consistent with best practice and will allow open and transparent reporting. Conclusions Publication of the trial statistical analysis plan reduces potential bias in trial reporting, and clearly outlines pre-specified analyses. Clinical Trial Registrations India CTRI/2013/04/003557; Australian New Zealand Clinical Trials Registry ACTRN1261000078752; Universal Trial Number U1111-1138-6707.

  5. Comparative Gender Performance in Business Statistics.

    ERIC Educational Resources Information Center

    Mogull, Robert G.

    1989-01-01

    Comparative performance of male and female students in introductory and intermediate statistics classes was examined for over 16 years at a state university. Gender means from 97 classes and 1,609 males and 1,085 females revealed a probabilistic--although statistically insignificant--superior performance by female students that appeared to…

  6. Image Statistics and the Representation of Material Properties in the Visual Cortex

    PubMed Central

    Baumgartner, Elisabeth; Gegenfurtner, Karl R.

    2016-01-01

    We explored perceived material properties (roughness, texturedness, and hardness) with a novel approach that compares perception, image statistics and brain activation, as measured with fMRI. We initially asked participants to rate 84 material images with respect to the above mentioned properties, and then scanned 15 of the participants with fMRI while they viewed the material images. The images were analyzed with a set of image statistics capturing their spatial frequency and texture properties. Linear classifiers were then applied to the image statistics as well as the voxel patterns of visually responsive voxels and early visual areas to discriminate between images with high and low perceptual ratings. Roughness and texturedness could be classified above chance level based on image statistics. Roughness and texturedness could also be classified based on the brain activation patterns in visual cortex, whereas hardness could not. Importantly, the agreement in classification based on image statistics and brain activation was also above chance level. Our results show that information about visual material properties is to a large degree contained in low-level image statistics, and that these image statistics are also partially reflected in brain activity patterns induced by the perception of material images. PMID:27582714
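
    A sketch of the classification step described above: a cross-validated linear classifier separating images rated high vs. low on a property from their image statistics. The features and effect structure are simulated stand-ins, not the study's data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Invented stand-ins for the study's image statistics (e.g. spatial-frequency
# energy and texture moments) for 84 images, with binary roughness ratings.
rng = np.random.default_rng(5)
feats = rng.normal(size=(84, 6))
rough = (feats[:, 0] + 0.5 * feats[:, 3] + rng.normal(0, 1, 84) > 0).astype(int)

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, feats, rough, cv=5)
print("cross-validated accuracy:", acc.mean())   # chance level is 0.5
```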

  7. Image Statistics and the Representation of Material Properties in the Visual Cortex.

    PubMed

    Baumgartner, Elisabeth; Gegenfurtner, Karl R

    2016-01-01

    We explored perceived material properties (roughness, texturedness, and hardness) with a novel approach that compares perception, image statistics and brain activation, as measured with fMRI. We initially asked participants to rate 84 material images with respect to the above mentioned properties, and then scanned 15 of the participants with fMRI while they viewed the material images. The images were analyzed with a set of image statistics capturing their spatial frequency and texture properties. Linear classifiers were then applied to the image statistics as well as the voxel patterns of visually responsive voxels and early visual areas to discriminate between images with high and low perceptual ratings. Roughness and texturedness could be classified above chance level based on image statistics. Roughness and texturedness could also be classified based on the brain activation patterns in visual cortex, whereas hardness could not. Importantly, the agreement in classification based on image statistics and brain activation was also above chance level. Our results show that information about visual material properties is to a large degree contained in low-level image statistics, and that these image statistics are also partially reflected in brain activity patterns induced by the perception of material images.

  8. Extracorporeal Shock Wave Therapy Versus Trigger Point Injection in the Treatment of Myofascial Pain Syndrome in the Quadratus Lumborum

    PubMed Central

    2017-01-01

    Objective To compare the effectiveness of extracorporeal shock wave therapy (ESWT) and trigger point injection (TPI) for the treatment of myofascial pain syndrome in the quadratus lumborum. Methods In a retrospective study at our institute, 30 patients with myofascial pain syndrome in the quadratus lumborum were assigned to ESWT or TPI groups. We assessed ESWT and TPI treatment according to their effects on pain relief and disability improvement. The outcome measures for the pain assessment were a visual analogue scale score and pain pressure threshold. The outcome measures for the disability assessment were Oswestry Disability Index, Roles and Maudsley, and Quebec Back Pain Disability Scale scores. Results Both groups demonstrated statistically significant improvements in pain and disability measures after treatment. However, in comparing the treatments, we found ESWT to be more effective than TPI for pain relief. There were no statistically significant differences between the groups with respect to disability. Conclusion Compared to TPI, ESWT showed superior results for pain relief. Thus, we consider ESWT an effective treatment for myofascial pain syndrome in the quadratus lumborum. PMID:28971042

  9. Schoolchildren with Learning Difficulties Have Low Iron Status and High Anemia Prevalence

    PubMed Central

    Arcanjo, C. P. C.; Santos, P. R.

    2016-01-01

    Background. In developing countries there is a high prevalence of iron deficiency anemia, which reduces cognitive performance, work performance, and endurance; it also causes learning difficulties and has a negative impact on development in the infant population. Methods. This was a case-control study; data were collected from an appropriate sample of schoolchildren aged 8 years. The sample was divided into two subgroups: those with deficient initial reading skills (DIRS) (cases) and those without (controls). Blood samples were taken to analyze hemoglobin and serum ferritin levels. These results were then used to compare the two groups with Student's t-test. The association between DIRS and anemia was analyzed using the odds ratio (OR). Results. Hemoglobin and serum ferritin levels of schoolchildren with DIRS were statistically lower than in those without (hemoglobin, p = 0.02; serum ferritin, p = 0.04). DIRS was statistically associated with a risk of anemia, with a weighted OR of 1.62. Conclusions. In this study, schoolchildren with DIRS had lower hemoglobin and serum ferritin levels when compared to those without. PMID:27703806

  10. Schoolchildren with Learning Difficulties Have Low Iron Status and High Anemia Prevalence.

    PubMed

    Arcanjo, F P N; Arcanjo, C P C; Santos, P R

    2016-01-01

    Background. In developing countries there is a high prevalence of iron deficiency anemia, which reduces cognitive performance, work performance, and endurance; it also causes learning difficulties and has a negative impact on development in the infant population. Methods. This was a case-control study; data were collected from an appropriate sample of schoolchildren aged 8 years. The sample was divided into two subgroups: those with deficient initial reading skills (DIRS) (cases) and those without (controls). Blood samples were taken to analyze hemoglobin and serum ferritin levels. These results were then used to compare the two groups with Student's t-test. The association between DIRS and anemia was analyzed using the odds ratio (OR). Results. Hemoglobin and serum ferritin levels of schoolchildren with DIRS were statistically lower than in those without (hemoglobin, p = 0.02; serum ferritin, p = 0.04). DIRS was statistically associated with a risk of anemia, with a weighted OR of 1.62. Conclusions. In this study, schoolchildren with DIRS had lower hemoglobin and serum ferritin levels when compared to those without.

  11. Comparison of anti-plaque efficacy between a low and high cost dentifrice: A short term randomized double-blind trial

    PubMed Central

    Ganavadiya, Rahul; Shekar, B. R. Chandra; Goel, Pankaj; Hongal, Sudheer G.; Jain, Manish; Gupta, Ruchika

    2014-01-01

    Objective: The aim of this study was to compare the anti-plaque efficacy of a low- and a high-cost commercially available toothpaste among 13-20-year-old adolescents in a Residential Home, Bhopal, India. Materials and Methods: The study was a randomized, double-blind, parallel clinical trial conducted in a Residential Home, Bhopal, India. A total of 65 patients with established dental plaque and gingivitis were randomly assigned to either the low-cost or the high-cost dentifrice group for 4 weeks. The plaque and gingival scores at baseline and post-intervention were assessed and compared. Statistical analysis was performed using the paired t-test and the independent-sample t-test. Statistical significance was set at 0.05. Results: Results indicated a significant reduction in plaque and gingival scores in both groups post-intervention compared with baseline. The difference between the groups was not significant. No adverse events were reported and both dentifrices were well-tolerated. Conclusion: The low-cost dentifrice was as effective as the high-cost dentifrice in reducing plaque and gingival inflammation. PMID:25202220

  12. Imaging of the midpalatal suture in a porcine model: flat-panel volume computed tomography compared with multislice computed tomography.

    PubMed

    Hahn, Wolfram; Fricke-Zech, Susanne; Fialka-Fricke, Julia; Dullin, Christian; Zapf, Antonia; Gruber, Rudolf; Sennhenn-Kirchner, Sabine; Kubein-Meesenburg, Dietmar; Sadat-Khonsari, Reza

    2009-09-01

    An investigation was conducted to compare the image quality of prototype flat-panel volume computed tomography (fpVCT) and multislice computed tomography (MSCT) for suture structures. Bone samples were taken from the midpalatal suture of 5 young (16 weeks) and 5 old (200 weeks) Sus scrofa domestica and fixed in formalin solution. An fpVCT prototype and an MSCT scanner were used to obtain images of the specimens. The facial reformations were assessed by 4 observers using a rating scale from 1 (excellent) to 5 (poor) for the weighted criterion of visualization of the suture structure. A linear mixed model was used for statistical analysis. Results with P < .05 were considered statistically significant. The visualization of the suture of young specimens was significantly better than that of older animals (P < .001). Visualization of the suture with fpVCT was significantly better than with MSCT (P < .001). Compared with MSCT, fpVCT produces superior results in the visualization of the midpalatal suture in a Sus scrofa domestica model.

  13. graph-GPA: A graphical model for prioritizing GWAS results and investigating pleiotropic architecture.

    PubMed

    Chung, Dongjun; Kim, Hang J; Zhao, Hongyu

    2017-02-01

    Genome-wide association studies (GWAS) have identified tens of thousands of genetic variants associated with hundreds of phenotypes and diseases, which have provided clinical and medical benefits to patients with novel biomarkers and therapeutic targets. However, identification of risk variants associated with complex diseases remains challenging as they are often affected by many genetic variants with small or moderate effects. There has been accumulating evidence suggesting that different complex traits share common risk basis, namely pleiotropy. Recently, several statistical methods have been developed to improve statistical power to identify risk variants for complex traits through a joint analysis of multiple GWAS datasets by leveraging pleiotropy. While these methods were shown to improve statistical power for association mapping compared to separate analyses, they are still limited in the number of phenotypes that can be integrated. In order to address this challenge, in this paper, we propose a novel statistical framework, graph-GPA, to integrate a large number of GWAS datasets for multiple phenotypes using a hidden Markov random field approach. Application of graph-GPA to a joint analysis of GWAS datasets for 12 phenotypes shows that graph-GPA improves statistical power to identify risk variants compared to statistical methods based on smaller number of GWAS datasets. In addition, graph-GPA also promotes better understanding of genetic mechanisms shared among phenotypes, which can potentially be useful for the development of improved diagnosis and therapeutics. The R implementation of graph-GPA is currently available at https://dongjunchung.github.io/GGPA/.

  14. The effects of BleedArrest on hemorrhage control in a porcine model.

    PubMed

    Gegel, Brian; Burgert, James; Loughren, Michael; Johnson, Don

    2012-01-01

    The purpose of this study was to examine the effectiveness of the hemostatic agent BleedArrest compared to a control. This was a prospective, experimental study employing an established porcine model of uncontrolled hemorrhage. The minimum number of animals (n=10 per group) was used to obtain a statistically valid result. There were no statistically significant differences between the groups (P>.05), indicating that the groups were equivalent on the following parameters: activated clotting time, subject weights, core body temperatures, amount of one-minute hemorrhage, arterial blood pressures, and the amount and percentage of total blood volume. There was a significant difference in the amount of hemorrhage (P=.033) between the BleedArrest (mean = 72, SD = 72 mL) and control (mean = 317.30, SD = 112.02 mL) groups. BleedArrest is statistically and clinically superior at controlling hemorrhage compared to the standard pressure dressing control group. In conclusion, BleedArrest is an effective hemostatic agent for use in civilian and military trauma management.

  15. Statistical approaches to account for missing values in accelerometer data: Applications to modeling physical activity.

    PubMed

    Yue Xu, Selene; Nelson, Sandahl; Kerr, Jacqueline; Godbole, Suneeta; Patterson, Ruth; Merchant, Gina; Abramson, Ian; Staudenmayer, John; Natarajan, Loki

    2018-04-01

    Physical inactivity is a recognized risk factor for many chronic diseases. Accelerometers are increasingly used as an objective means to measure daily physical activity. One challenge in using these devices is missing data due to device nonwear. We used a well-characterized cohort of 333 overweight postmenopausal breast cancer survivors to examine missing data patterns of accelerometer outputs over the day. Based on these observed missingness patterns, we created pseudo-simulated datasets with realistic missing data patterns. We developed statistical methods to design imputation and variance weighting algorithms to account for missing data effects when fitting regression models. Bias and precision of each method were evaluated and compared. Our results indicated that not accounting for missing data in the analysis yielded unstable estimates in the regression analysis. Incorporating variance weights and/or subject-level imputation improved precision by >50%, compared to ignoring missing data. We recommend that these simple easy-to-implement statistical tools be used to improve analysis of accelerometer data.
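
    A toy sketch of the two ideas evaluated above, subject-level imputation plus wear-time weighting, on a simulated subjects-by-hours matrix; the data, missingness rate and weighting rule are invented for illustration:

```python
import numpy as np

# Hypothetical hour-level activity counts: subjects x hours, NaN = nonwear
rng = np.random.default_rng(6)
counts = rng.gamma(2.0, 50.0, size=(5, 24))
counts[rng.random(counts.shape) < 0.2] = np.nan   # ~20% device nonwear

# Subject-level imputation: fill each subject's gaps with their own mean,
# then weight subjects by the share of hours actually observed.
observed = ~np.isnan(counts)
subj_mean = np.nanmean(counts, axis=1, keepdims=True)
imputed = np.where(observed, counts, subj_mean)
weights = observed.mean(axis=1)                   # more wear, more weight

daily_total = imputed.sum(axis=1)
print(np.average(daily_total, weights=weights))
```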

  16. Is math anxiety in the secondary classroom limiting physics mastery? A study of math anxiety and physics performance

    NASA Astrophysics Data System (ADS)

    Mercer, Gary J.

    This quantitative study examined the relationship between secondary students with math anxiety and physics performance in an inquiry-based constructivist classroom. The Revised Math Anxiety Rating Scale was used to evaluate math anxiety levels. The results were then compared to the performance on a physics standardized final examination. A simple correlation was performed, followed by a multivariate regression analysis to examine effects based on gender and prior math background. The correlation showed statistical significance between math anxiety and physics performance. The regression analysis showed statistical significance for math anxiety, physics performance, and prior math background, but did not show statistical significance for math anxiety, physics performance, and gender.

  17. Evaluation of changes to foot shape in females 5 years after mastectomy: a case-control study.

    PubMed

    Głowacka-Mrotek, Iwona; Sowa, Magdalena; Siedlecki, Zygmunt; Nowikiewicz, Tomasz; Hagner, Wojciech; Zegarski, Wojciech

    2017-06-01

    The aim of this study was to evaluate changes in the foot shape of women 5 years after undergoing breast amputation. Evaluation of foot shape was performed using a non-invasive device for computer analysis of the plantar surface of the foot. Obtained results were compared between feet on the healthy breast side (F1) and on the amputated breast side (F2). 128 women aged 63.60 ± 8.83 years, 5-6 years after breast amputation, were enrolled in this case-control study. Weight bearing on the lower extremity on the amputated breast side (F2) compared with the healthy breast side (F1) showed statistically significant differences (p < 0.01); patients put more weight onto the healthy breast side. No statistically significant difference was found with regard to F1 and F2 foot length (p = 0.4239), or the BETA (p = 0.4470) and GAMMA (p = 0.4566) angles. Highly statistically significant differences were noted with respect to foot width, the ALPHA angle, and the Sztriter-Godunov index, with higher values observed on the healthy breast side (p < 0.001). Highly statistically significant differences were also noted when comparing Clark's angles, with higher values observed on the operated breast side (p < 0.001). Differences in foot shape between the healthy breast side and the amputated breast side constitute a long-term negative consequence of mastectomy, and can be caused by unbalanced weight put on the feet on the healthy breast side compared to the amputated breast side.

  18. An effect size filter improves the reproducibility in spectral counting-based comparative proteomics.

    PubMed

    Gregori, Josep; Villarreal, Laura; Sánchez, Alex; Baselga, José; Villanueva, Josep

    2013-12-16

    The microarray community has shown that the low reproducibility observed in gene expression-based biomarker discovery studies is partially due to relying solely on p-values to get the lists of differentially expressed genes. Their conclusions recommended complementing the p-value cutoff with the use of effect-size criteria. The aim of this work was to evaluate the influence of such an effect-size filter on spectral counting-based comparative proteomic analysis. The results proved that the filter increased the number of true positives while decreasing the number of false positives and the false discovery rate of the dataset. These results were confirmed by simulation experiments in which the effect-size filter was used to systematically evaluate variable fractions of differentially expressed proteins. Our results suggest that relaxing the p-value cut-off followed by a post-test filter based on effect-size and signal-level thresholds can increase the reproducibility of statistical results obtained in comparative proteomic analysis. Based on our work, we recommend using a filter consisting of a minimum absolute log2 fold change of 0.8 and a minimum signal of 2-4 SpC on the most abundant condition for the general practice of comparative proteomics. The implementation of feature filtering approaches could improve proteomic biomarker discovery initiatives by increasing the reproducibility of the results obtained among independent laboratories and MS platforms. Quality control analysis of microarray-based gene expression studies pointed out that the low reproducibility observed in the lists of differentially expressed genes could be partially attributed to the fact that these lists are generated relying solely on p-values. Our study has established that the implementation of an effect-size post-test filter improves the statistical results of spectral count-based quantitative proteomics: the filter increased the number of true positives while decreasing the false positives and the false discovery rate of the datasets. The results presented here prove that a post-test filter applying reasonable effect-size and signal-level thresholds helps to increase the reproducibility of statistical results in comparative proteomic analysis. Furthermore, the implementation of feature filtering approaches could improve proteomic biomarker discovery initiatives by increasing the reproducibility of results obtained among independent laboratories and MS platforms. This article is part of a Special Issue entitled: Standardization and Quality Control in Proteomics. Copyright © 2013 Elsevier B.V. All rights reserved.
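
As an illustration of the recommended filter, the sketch below (Python, with hypothetical column names and spectral-count values) keeps only features passing a p-value cutoff, a minimum absolute log2 fold change of 0.8, and a minimum signal on the most abundant condition:

```python
import numpy as np
import pandas as pd

# Hypothetical spectral-count (SpC) table; column names are illustrative.
df = pd.DataFrame({
    "protein": ["P1", "P2", "P3", "P4"],
    "spc_cond_a": [12.0, 3.0, 1.0, 30.0],
    "spc_cond_b": [4.0, 2.5, 0.5, 28.0],
    "p_value": [0.03, 0.04, 0.01, 0.20],
})

log2_fc = np.log2(df["spc_cond_a"] / df["spc_cond_b"])
max_signal = df[["spc_cond_a", "spc_cond_b"]].max(axis=1)

# Relaxed p-value cutoff plus the paper's recommended post-test filter:
# |log2 FC| >= 0.8 and a minimum of ~2 SpC on the most abundant condition.
keep = (df["p_value"] < 0.05) & (log2_fc.abs() >= 0.8) & (max_signal >= 2)
print(df[keep])
```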

  19. A noninferiority clinical trial comparing fluconazole and ketoconazole in combination with cephalexin for the treatment of dogs with Malassezia dermatitis.

    PubMed

    Sickafoose, L; Hosgood, G; Snook, T; Westermeyer, R; Merchant, S

    2010-01-01

    This double-blinded noninferiority clinical trial evaluated the use of oral fluconazole for the treatment of Malassezia dermatitis in dogs by comparing it with an accepted therapeutic agent, ketoconazole. Dogs presenting with Malassezia dermatitis were treated with either fluconazole or ketoconazole, in addition to cephalexin for concurrent bacterial dermatitis. Statistically significant improvements in cytologic yeast count, clinical signs associated with Malassezia dermatitis, and pruritus were seen with both antifungal treatments. There was no statistically significant difference between the treatments with regard to the magnitude of reduction in these parameters. These results suggest that fluconazole is at least as effective as ketoconazole for the treatment of dogs with Malassezia dermatitis.

  20. Comparative analysis on the selection of number of clusters in community detection

    NASA Astrophysics Data System (ADS)

    Kawamoto, Tatsuro; Kabashima, Yoshiyuki

    2018-02-01

    We conduct a comparative analysis of various estimates of the number of clusters in community detection. An exhaustive comparison requires testing all possible combinations of frameworks, algorithms, and assessment criteria. In this paper we focus on the framework based on the stochastic block model, and investigate the performance of greedy algorithms, statistical inference, and spectral methods. For the assessment criteria, we consider modularity, the map equation, the Bethe free energy, prediction errors, and isolated eigenvalues. The analysis makes apparent the tendencies of the assessment criteria and algorithms to overfit or underfit. In addition, we propose the alluvial diagram as a suitable tool for visualizing statistical inference results, which can be useful for determining the number of clusters.

  1. Efficient estimation of Pareto model: Some modified percentile estimators.

    PubMed

    Bhatti, Sajjad Haider; Hussain, Shahzad; Ahmad, Tanvir; Aslam, Muhammad; Aftab, Muhammad; Raza, Muhammad Ali

    2018-01-01

    The article proposes three modified percentile estimators for parameter estimation of the Pareto distribution. These modifications are based on median, geometric mean and expectation of empirical cumulative distribution function of first-order statistic. The proposed modified estimators are compared with traditional percentile estimators through a Monte Carlo simulation for different parameter combinations with varying sample sizes. Performance of different estimators is assessed in terms of total mean square error and total relative deviation. It is determined that modified percentile estimator based on expectation of empirical cumulative distribution function of first-order statistic provides efficient and precise parameter estimates compared to other estimators considered. The simulation results were further confirmed using two real life examples where maximum likelihood and moment estimators were also considered.
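
The paper's modified estimators follow its own derivations; as background, a minimal Python sketch of a classical two-point percentile estimator for the Pareto distribution (with arbitrary percentile choices) looks like this:

```python
import numpy as np

# Classical percentile estimator for the Pareto distribution
# F(x) = 1 - (xm / x)**alpha, x >= xm: match two sample quantiles to the
# theoretical CDF and solve for (alpha, xm).  The article's modifications
# replace the percentile points with quantities based on the median, the
# geometric mean, and E[F(X(1))]; those exact forms are not reproduced here.
def pareto_percentile_fit(x, p1=0.25, p2=0.75):
    q1, q2 = np.quantile(x, [p1, p2])
    alpha = np.log((1 - p1) / (1 - p2)) / np.log(q2 / q1)
    xm = q1 * (1 - p1) ** (1 / alpha)
    return alpha, xm

rng = np.random.default_rng(1)
true_alpha, true_xm = 2.5, 1.0
sample = true_xm * (1 + rng.pareto(true_alpha, size=5000))
print(pareto_percentile_fit(sample))   # should be close to (2.5, 1.0)
```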

  2. A comparative study of event-related coupling patterns during an auditory oddball task in schizophrenia

    NASA Astrophysics Data System (ADS)

    Bachiller, Alejandro; Poza, Jesús; Gómez, Carlos; Molina, Vicente; Suazo, Vanessa; Hornero, Roberto

    2015-02-01

    Objective. The aim of this research is to explore the coupling patterns of brain dynamics during an auditory oddball task in schizophrenia (SCH). Approach. Event-related electroencephalographic (ERP) activity was recorded from 20 SCH patients and 20 healthy controls. The coupling changes between auditory response and pre-stimulus baseline were calculated in conventional EEG frequency bands (theta, alpha, beta-1, beta-2 and gamma), using three coupling measures: coherence, phase-locking value and Euclidean distance. Main results. Our results showed a statistically significant increase from baseline to response in theta coupling and a statistically significant decrease in beta-2 coupling in controls. No statistically significant changes were observed in SCH patients. Significance. Our findings support the aberrant salience hypothesis, since SCH patients failed to change their coupling dynamics between stimulus response and baseline when performing an auditory cognitive task. This result may reflect an impaired communication among neural areas, which may be related to abnormal cognitive functions.
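
Of the three coupling measures, the phase-locking value (PLV) has a particularly compact standard definition; a minimal Python sketch (assuming band-pass filtering to the band of interest has already been applied, and using synthetic signals rather than the study's EEG data) is:

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase-locking value: |mean(exp(i*(phi_x - phi_y)))| over time."""
    phi_x = np.angle(hilbert(x))
    phi_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phi_x - phi_y))))

fs = 250                                   # hypothetical sampling rate, Hz
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(2)
x = np.sin(2 * np.pi * 6 * t)              # 6 Hz carrier (theta band)
y = np.sin(2 * np.pi * 6 * t + 0.5) + 0.3 * rng.normal(size=t.size)
print(plv(x, y))                           # near 1 for strongly locked signals
```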

  3. [Study of the reliability in one dimensional size measurement with digital slit lamp microscope].

    PubMed

    Wang, Tao; Qi, Chaoxiu; Li, Qigen; Dong, Lijie; Yang, Jiezheng

    2010-11-01

    To study the reliability of the digital slit lamp microscope as a tool for quantitative analysis in one-dimensional size measurement. Three single-blinded observers acquired and repeatedly measured images of 4.00 mm and 10.00 mm targets on a vernier caliper, simulating the human pupil and corneal diameter, under a China-made digital slit lamp microscope at objective magnifications of 4x, 10x, 16x, 25x and 40x for the 4.00 mm target, and 4x, 10x and 16x for the 10.00 mm target. The correctness and precision of the measurements were compared. For the 4.00 mm images, the three investigators' average values fell between 3.98 and 4.06 mm; for the 10.00 mm images, the average values fell between 10.00 and 10.04 mm. For the 4.00 mm images, the measured values differed significantly from the true value except for A4, B25, C16 and C25; for the 10.00 mm images, the measured values differed significantly from the true value except for A10. When the same investigator measured the same size at different magnifications, the results differed significantly across magnifications, except for investigator A's measurements of the 10.00 mm dimension. When measurements of the same size at the same magnification were compared among investigators, only the 4.00 mm measurements at 4-fold magnification showed no significant difference; the remaining comparisons were statistically significant. The coefficient of variation of all measurement results was less than 5%, and decreased as magnification increased. The digital slit lamp microscope has good reliability in one-dimensional size measurement, but a reliability analysis should be performed before it is used for quantitative analysis, to reduce systematic errors.

  4. The Relationship between Statistics Self-Efficacy, Statistics Anxiety, and Performance in an Introductory Graduate Statistics Course

    ERIC Educational Resources Information Center

    Schneider, William R.

    2011-01-01

    The purpose of this study was to determine the relationship between statistics self-efficacy, statistics anxiety, and performance in introductory graduate statistics courses. The study design compared two statistics self-efficacy measures developed by Finney and Schraw (2003), a statistics anxiety measure developed by Cruise and Wilkins (1980),…

  5. Statistical significance test for transition matrices of atmospheric Markov chains

    NASA Technical Reports Server (NTRS)

    Vautard, Robert; Mo, Kingtse C.; Ghil, Michael

    1990-01-01

    Low-frequency variability of large-scale atmospheric dynamics can be represented schematically by a Markov chain of multiple flow regimes. This Markov chain contains useful information for the long-range forecaster, provided that the statistical significance of the associated transition matrix can be reliably tested. Monte Carlo simulation yields a very reliable significance test for the elements of this matrix. The results of this test agree with previously used empirical formulae when each cluster of maps identified as a distinct flow regime is sufficiently large and when they all contain a comparable number of maps. Monte Carlo simulation provides a more reliable way to test the statistical significance of transitions to and from small clusters. It can determine the most likely transitions, as well as the most unlikely ones, with a prescribed level of statistical significance.
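
A minimal sketch of such a Monte Carlo significance test for a single transition-matrix element might look like the following (Python; the regime sequence and shuffling null model here are hypothetical simplifications of the paper's procedure):

```python
import numpy as np

# Monte Carlo test: compare the observed count of i -> j transitions with
# its null distribution under random shuffling of the regime sequence,
# which preserves regime frequencies but destroys temporal order.
def transition_count(seq, i, j):
    return int(np.sum((seq[:-1] == i) & (seq[1:] == j)))

rng = np.random.default_rng(3)
seq = rng.choice(3, size=500, p=[0.5, 0.3, 0.2])   # hypothetical regime labels
obs = transition_count(seq, 0, 1)

null = np.array([transition_count(rng.permutation(seq), 0, 1)
                 for _ in range(2000)])
p_high = (np.sum(null >= obs) + 1) / (null.size + 1)  # unusually likely 0 -> 1?
print(obs, p_high)
```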

  6. Statistical Models for Averaging of the Pump–Probe Traces: Example of Denoising in Terahertz Time-Domain Spectroscopy

    NASA Astrophysics Data System (ADS)

    Skorobogatiy, Maksim; Sadasivan, Jayesh; Guerboukha, Hichem

    2018-05-01

    In this paper, we first discuss the main types of noise in a typical pump-probe system, and then focus specifically on terahertz time domain spectroscopy (THz-TDS) setups. We then introduce four statistical models for the noisy pulses obtained in such systems, and detail rigorous mathematical algorithms to de-noise such traces, find the proper averages and characterise various types of experimental noise. Finally, we perform a comparative analysis of the performance, advantages and limitations of the algorithms by testing them on the experimental data collected using a particular THz-TDS system available in our laboratories. We conclude that using advanced statistical models for trace averaging results in the fitting errors that are significantly smaller than those obtained when only a simple statistical average is used.

  7. Dealing with the Conflicting Results of Psycholinguistic Experiments: How to Resolve Them with the Help of Statistical Meta-analysis.

    PubMed

    Rákosi, Csilla

    2018-01-22

    This paper proposes the use of the tools of statistical meta-analysis as a method of conflict resolution with respect to experiments in cognitive linguistics. With the help of statistical meta-analysis, the effect sizes of similar experiments can be compared, a well-founded and robust synthesis of the experimental data can be achieved, and possible causes of any divergence(s) in the outcomes can be revealed. This application of statistical meta-analysis offers a novel method for dealing with diverging evidence. The workability of this idea is exemplified by a case study dealing with a series of experiments conducted as non-exact replications of Thibodeau and Boroditsky (PLoS ONE 6(2):e16782, 2011. https://doi.org/10.1371/journal.pone.0016782).
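
As a sketch of the kind of synthesis involved, the standard DerSimonian-Laird random-effects pooling of per-study effect sizes can be written as follows (Python; the effect sizes shown are hypothetical, not those of the case study):

```python
import numpy as np

# DerSimonian-Laird random-effects pooling: y = per-study effect sizes,
# v = their sampling variances; tau2 estimates between-study heterogeneity.
def random_effects_pool(y, v):
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)                 # Cochran's Q
    k = y.size
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

y = np.array([0.42, 0.10, 0.35, -0.05])   # hypothetical replication effects
v = np.array([0.02, 0.03, 0.05, 0.04])
print(random_effects_pool(y, v))
```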

  8. Choroidal Thickness Analysis in Patients with Usher Syndrome Type 2 Using EDI OCT.

    PubMed

    Colombo, L; Sala, B; Montesano, G; Pierrottet, C; De Cillà, S; Maltese, P; Bertelli, M; Rossetti, L

    2015-01-01

    To characterize Usher Syndrome type 2 by analyzing choroidal thickness and comparing the data with published findings for RP and healthy subjects. Methods. 20 eyes of 10 patients with clinical signs and a genetic diagnosis of Usher Syndrome type 2 were included. Each patient underwent a complete ophthalmologic examination including Best Corrected Visual Acuity (BCVA), intraocular pressure (IOP), axial length (AL), automated visual field (VF), and EDI OCT. Both retinal and choroidal thicknesses were measured. Statistical analysis was performed to correlate choroidal thickness with age, BCVA, IOP, AL, VF, and RT. Comparison with data from healthy people and nonsyndromic RP patients was performed. Results. Mean subfoveal choroidal thickness (SFCT) was 248.21 ± 79.88 microns. SFCT was statistically significantly correlated with age (correlation coefficient -0.7248179, p < 0.01). No statistically significant correlation was found between SFCT and BCVA, IOP, AL, VF, or RT. SFCT was reduced compared to healthy subjects (p < 0.01). No difference was found when compared to choroidal thickness in nonsyndromic RP patients (p = 0.2138). Conclusions. Our study demonstrated in vivo choroidal thickness reduction in patients with Usher Syndrome type 2. These data are important for understanding the mechanisms of disease and for evaluating therapeutic approaches.

  9. A multi-time-step noise reduction method for measuring velocity statistics from particle tracking velocimetry

    NASA Astrophysics Data System (ADS)

    Machicoane, Nathanaël; López-Caballero, Miguel; Bourgoin, Mickael; Aliseda, Alberto; Volk, Romain

    2017-10-01

    We present a method to improve the accuracy of velocity measurements for fluid flow or particles immersed in it, based on a multi-time-step approach that allows for cancellation of noise in the velocity measurements. Improved velocity statistics, a critical element in turbulent flow measurements, can be computed from the combination of the velocity moments computed using standard particle tracking velocimetry (PTV) or particle image velocimetry (PIV) techniques for data sets that have been collected over different values of time intervals between images. This method produces Eulerian velocity fields and Lagrangian velocity statistics with much lower noise levels compared to standard PIV or PTV measurements, without the need of filtering and/or windowing. Particle displacement between two frames is computed for multiple different time-step values between frames in a canonical experiment of homogeneous isotropic turbulence. The second order velocity structure function of the flow is computed with the new method and compared to results from traditional measurement techniques in the literature. Increased accuracy is also demonstrated by comparing the dissipation rate of turbulent kinetic energy measured from this function against previously validated measurements.
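
To illustrate the principle (not the paper's full procedure, which also treats higher moments and Lagrangian statistics), the sketch below simulates finite-difference velocity estimates at several time steps and removes the position-noise contribution, which scales as 2σ²/Δt², by extrapolation (Python, hypothetical parameters):

```python
import numpy as np

# A finite-difference velocity estimate over time step dt carries position-
# noise variance 2*sigma**2/dt**2 on top of the true velocity variance, so
# fitting var(dt) = var_true + 2*sigma**2/dt**2 over several dt lets one
# extrapolate the noise term away.
rng = np.random.default_rng(4)
sigma, var_true = 1e-3, 0.25                   # hypothetical noise and variance
dts = np.array([1e-3, 2e-3, 4e-3, 8e-3])

var_meas = []
for dt in dts:
    u = rng.normal(0, np.sqrt(var_true), 200_000)      # true velocities
    dx = u * dt + rng.normal(0, sigma, u.size) - rng.normal(0, sigma, u.size)
    var_meas.append(np.var(dx / dt))

# Linear fit in 1/dt^2: the intercept is the noise-free velocity variance.
slope, intercept = np.polyfit(1.0 / dts**2, var_meas, 1)
print("estimated true variance:", intercept)
```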

  10. Using expert knowledge to incorporate uncertainty in cause-of-death assignments for modeling of cause-specific mortality

    USGS Publications Warehouse

    Walsh, Daniel P.; Norton, Andrew S.; Storm, Daniel J.; Van Deelen, Timothy R.; Heisy, Dennis M.

    2018-01-01

    Implicit and explicit use of expert knowledge to inform ecological analyses is becoming increasingly common because it often represents the sole source of information in many circumstances. Thus, there is a need to develop statistical methods that explicitly incorporate expert knowledge, and can successfully leverage this information while properly accounting for associated uncertainty during analysis. Studies of cause-specific mortality provide an example of implicit use of expert knowledge when causes-of-death are uncertain and assigned based on the observer's knowledge of the most likely cause. To explicitly incorporate this use of expert knowledge and the associated uncertainty, we developed a statistical model for estimating cause-specific mortality using a data augmentation approach within a Bayesian hierarchical framework. Specifically, for each mortality event, we elicited the observer's belief of cause-of-death by having them specify the probability that the death was due to each potential cause. These probabilities were then used as prior predictive values within our framework. This hierarchical framework permitted a simple and rigorous estimation method that was easily modified to include covariate effects and regularizing terms. Although applied to survival analysis, this method can be extended to any event-time analysis with multiple event types, for which there is uncertainty regarding the true outcome. We conducted simulations to determine how our framework compared to traditional approaches that use expert knowledge implicitly and assume that cause-of-death is specified accurately. Simulation results supported the inclusion of observer uncertainty in cause-of-death assignment in modeling of cause-specific mortality to improve model performance and inference. Finally, we applied the statistical model we developed and a traditional method to cause-specific survival data for white-tailed deer, and compared results. We demonstrate that model selection results changed between the two approaches, and incorporating observer knowledge in cause-of-death increased the variability associated with parameter estimates when compared to the traditional approach. These differences between the two approaches can impact reported results, and therefore, it is critical to explicitly incorporate expert knowledge in statistical methods to ensure rigorous inference.

  11. Comparison of culture media for ex vivo cultivation of limbal epithelial progenitor cells

    PubMed Central

    Loureiro, Renata Ruoco; Cristovam, Priscila Cardoso; Martins, Caio Marques; Covre, Joyce Luciana; Sobrinho, Juliana Aparecida; Ricardo, José Reinaldo da Silva; Hazarbassanov, Rossen Myhailov; Höfling-Lima, Ana Luisa; Belfort, Rubens; Nishi, Mauro

    2013-01-01

    Purpose To compare the effectiveness of three culture media for the growth, proliferation, differentiation, and viability of ex vivo cultured limbal epithelial progenitor cells. Methods Limbal epithelial progenitor cell cultures were established from ten human corneal rims and grown in plastic wells in three culture media: supplemental hormonal epithelial medium (SHEM), keratinocyte serum-free medium (KSFM), and Epilife. The performance of each medium for culturing limbal epithelial progenitor cells was evaluated according to the following parameters: growth area of epithelial migration; immunocytochemistry for adenosine 5′-triphosphate-binding cassette member 2 (ABCG2), p63, Ki67, cytokeratin 3 (CK3), and vimentin (VMT); real-time reverse transcription polymerase chain reaction (RT-PCR) for CK3, ABCG2, and p63; and cell viability using Hoechst staining. Results Limbal epithelial progenitor cells cultivated in SHEM tended to migrate faster than those in KSFM and Epilife. Immunocytochemical analysis showed that proliferated cells in SHEM had lower expression of markers related to progenitor epithelial cells (ABCG2) and putative progenitor cells (p63), and a higher percentage of positive cells for differentiated epithelium (CK3), when compared to KSFM and Epilife. In PCR analysis, ABCG2 expression was statistically higher for Epilife compared to SHEM. Expression of p63 was statistically higher for Epilife compared to SHEM and KSFM. However, CK3 expression was statistically lower for KSFM compared to SHEM. Conclusions Based on our findings, we conclude that cells cultured in KSFM and Epilife media presented a higher percentage of limbal epithelial progenitor cells compared to SHEM. PMID:23378720

  12. Anxiety and Attitude of Graduate Students in On-Campus vs. Online Statistics Courses

    ERIC Educational Resources Information Center

    DeVaney, Thomas A.

    2010-01-01

    This study compared levels of statistics anxiety and attitude toward statistics for graduate students in on-campus and online statistics courses. The Survey of Attitudes Toward Statistics and three subscales of the Statistics Anxiety Rating Scale were administered at the beginning and end of graduate level educational statistic courses.…

  13. Statistical and Spatial Analysis of Bathymetric Data for the St. Clair River, 1971-2007

    USGS Publications Warehouse

    Bennion, David

    2009-01-01

    To address questions concerning ongoing geomorphic processes in the St. Clair River, selected bathymetric datasets spanning 36 years were analyzed. Comparisons of recent high-resolution datasets covering the upper river indicate a highly variable, active environment. Although statistical and spatial comparisons of the datasets show that some changes to the channel size and shape have taken place during the study period, uncertainty associated with the various survey methods and interpolation processes limits the statistical certainty of the results. The methods used to spatially compare the datasets are sensitive to small variations in position and depth that are within the range of uncertainty associated with the datasets. Characteristics of the data, such as the density of measured points and the range of values surveyed, can also influence the results of spatial comparison. With due consideration of these limitations, apparently active and ongoing areas of elevation change in the river are mapped and discussed.

  14. Eigenvalue statistics for the sum of two complex Wishart matrices

    NASA Astrophysics Data System (ADS)

    Kumar, Santosh

    2014-09-01

    The sum of independent Wishart matrices, taken from distributions with unequal covariance matrices, plays a crucial role in multivariate statistics, and has applications in the fields of quantitative finance and telecommunication. However, analytical results concerning the corresponding eigenvalue statistics have remained unavailable, even for the sum of two Wishart matrices. This can be attributed to the complicated and rotationally noninvariant nature of the matrix distribution that makes extracting the information about eigenvalues a nontrivial task. Using a generalization of the Harish-Chandra-Itzykson-Zuber integral, we find exact solution to this problem for the complex Wishart case when one of the covariance matrices is proportional to the identity matrix, while the other is arbitrary. We derive exact and compact expressions for the joint probability density and marginal density of eigenvalues. The analytical results are compared with numerical simulations and we find perfect agreement.
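
Numerical simulations of the kind used for comparison are straightforward to set up; a minimal Python sketch for sampling eigenvalues of the sum of two complex Wishart matrices (arbitrary dimensions and covariance, not the paper's parameters) is:

```python
import numpy as np

# Sample eigenvalues of A + B where A is complex Wishart with identity
# covariance and B is complex Wishart with an arbitrary covariance sigma_b.
rng = np.random.default_rng(5)
n, na, nb, trials = 4, 8, 10, 5000
sigma_b = np.diag([1.0, 1.5, 2.0, 3.0])        # arbitrary covariance for B
Lb = np.linalg.cholesky(sigma_b)

eigs = []
for _ in range(trials):
    ga = (rng.normal(size=(n, na)) + 1j * rng.normal(size=(n, na))) / np.sqrt(2)
    gb = Lb @ (rng.normal(size=(n, nb)) + 1j * rng.normal(size=(n, nb))) / np.sqrt(2)
    w = ga @ ga.conj().T + gb @ gb.conj().T    # sum of two complex Wisharts
    eigs.extend(np.linalg.eigvalsh(w))

print(np.mean(eigs), np.var(eigs))   # summary of the empirical spectral density
```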

  15. Clonality: an R package for testing clonal relatedness of two tumors from the same patient based on their genomic profiles.

    PubMed

    Ostrovnaya, Irina; Seshan, Venkatraman E; Olshen, Adam B; Begg, Colin B

    2011-06-15

    If a cancer patient develops multiple tumors, it is sometimes impossible to determine whether these tumors are independent or clonal based solely on pathological characteristics. Investigators have studied how to improve this diagnostic challenge by comparing the presence of loss of heterozygosity (LOH) at selected genetic locations of tumor samples, or by comparing genomewide copy number array profiles. We have previously developed statistical methodology to compare such genomic profiles for an evidence of clonality. We assembled the software for these tests in a new R package called 'Clonality'. For LOH profiles, the package contains significance tests. The analysis of copy number profiles includes a likelihood ratio statistic and reference distribution, as well as an option to produce various plots that summarize the results. Bioconductor (http://bioconductor.org/packages/release/bioc/html/Clonality.html) and http://www.mskcc.org/mskcc/html/13287.cfm.

  16. Comparing Data Sets: Implicit Summaries of the Statistical Properties of Number Sets

    ERIC Educational Resources Information Center

    Morris, Bradley J.; Masnick, Amy M.

    2015-01-01

    Comparing datasets, that is, sets of numbers in context, is a critical skill in higher order cognition. Although much is known about how people compare single numbers, little is known about how number sets are represented and compared. We investigated how subjects compared datasets that varied in their statistical properties, including ratio of…

  17. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing.

    PubMed

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2018-02-01

    A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R²), using R² as the primary metric of assay agreement. However, the use of R² alone does not adequately quantify the constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing (NGS) assays. NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of the performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
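
As a minimal illustration of the Bland-Altman computation itself (Python, hypothetical paired measurements; the paper's NGS datasets are not reproduced here):

```python
import numpy as np

# Bland-Altman agreement: bias (mean difference) and 95% limits of
# agreement between a validation assay and a reference assay.
ref = np.array([0.05, 0.12, 0.22, 0.31, 0.44, 0.52])  # hypothetical values
new = np.array([0.06, 0.11, 0.25, 0.33, 0.42, 0.55])

diff = new - ref
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"bias = {bias:.4f}, limits of agreement = "
      f"[{bias - loa:.4f}, {bias + loa:.4f}]")
```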

  18. Effect of Temperature on Jet Velocity Spectra

    NASA Technical Reports Server (NTRS)

    Bridges, James E.; Wernet, Mark P.

    2007-01-01

    Statistical jet noise prediction codes that accurately predict spectral directivity for both cold and hot jets are highly sought both in industry and academia. Their formulation, whether based upon manipulations of the Navier-Stokes equations or upon heuristic arguments, requires substantial experimental observation of jet turbulence statistics. Unfortunately, the statistics of most interest involve the space-time correlation of flow quantities, especially velocity. Until the last 10 years, all turbulence statistics were made with single-point probes, such as hotwires or laser Doppler anemometry. Particle image velocimetry (PIV) brought many new insights with its ability to measure velocity fields over large regions of jets simultaneously; however, it could not measure velocity at rates higher than a few fields per second, making it unsuitable for obtaining temporal spectra and correlations. The development of time-resolved PIV, herein called TR-PIV, has removed this limitation, enabling measurement of velocity fields at high resolution in both space and time. In this paper, ground-breaking results from the application of TR-PIV to single-flow hot jets are used to explore the impact of heat on turbulent statistics of interest to jet noise models. First, a brief summary of validation studies is reported, undertaken to show that the new technique produces the same trusted results as hotwire anemometry in cold, low-speed jets. Second, velocity spectra from cold and hot jets are compared to see the effect of heat on the spectra. It is seen that heated jets possess 10 percent more turbulence intensity compared to unheated jets at the same velocity. The spectral shapes, when normalized using Strouhal scaling, are insensitive to temperature if the stream-wise location is normalized relative to the potential core length. Similarly, second-order velocity correlations, of interest in the modeling of jet noise sources, are also insensitive to temperature.

  19. Maternal characteristics and immunization status of children in North Central of Nigeria

    PubMed Central

    Adenike, Olugbenga-Bello; Adejumoke, Jimoh; Olufunmi, Oke; Ridwan, Oladejo

    2017-01-01

    Introduction Routine immunization coverage in Nigeria is one of the lowest national coverage rates in the world. The objective of this study was to compare mothers' characteristics and their child's immunization status in selected rural and urban communities in the North Central part of Nigeria. Methods A descriptive cross-sectional study, using a multistage sampling technique to select 600 respondent women with an index child aged 0-12 months. Results Mean age was 31.40 ± 7.21 years among rural respondents and 32.72 ± 6.77 years among urban respondents; there was no statistically significant difference in age between the 2 locations (p = 0.762). One hundred and ninety-seven (65.7%) rural and 241 (80.3%) urban respondents were aware of immunization; the difference was statistically significant (p = 0.016), and knowledge among urban respondents was better than among rural respondents. There was a statistically significant association between respondents' age, employment status, and mothers' educational status and the child's immunization status (p < 0.05), while variables like parity, age at marriage, marital status, number of children, household income and place of index were not statistically associated with immunization status (p > 0.05). More than half of respondents, 179 (59.7%) rural and 207 (69.0%) urban, had good immunization practice, though the difference was not statistically significant (p = 0.165). Conclusion The immunization coverage in the urban community was better than that of the rural community. The results of this study clearly indicate that mothers in Nigeria have improved on taking their children for immunization in both rural and urban areas compared to previous reports. PMID:28588745

  20. ROTAS: a rotamer-dependent, atomic statistical potential for assessment and prediction of protein structures.

    PubMed

    Park, Jungkap; Saitou, Kazuhiro

    2014-09-18

    Multibody potentials accounting for cooperative effects of molecular interactions have shown better accuracy than typical pairwise potentials. The main challenge in the development of such potentials is to find relevant structural features that characterize tightly folded proteins. Also, the side-chains of residues adopt several specific, staggered conformations, known as rotamers, within protein structures. Different molecular conformations result in different dipole moments and induce charge reorientations. However, the rotameric state of residues has not previously been incorporated into the development of multibody potentials for modeling non-bonded interactions in protein structures. In this study, we develop a new multibody statistical potential which can account for the influence of rotameric states on the specificity of atomic interactions. In this potential, named the "rotamer-dependent atomic statistical potential" (ROTAS), the interaction between two atoms is specified not only by the distance and relative orientation but also by two state parameters concerning the rotameric states of the residues to which the interacting atoms belong. It was clearly found that the rotameric state is correlated with the specificity of atomic interactions. Such rotamer-dependencies are not limited to specific types or certain ranges of interactions. The performance of ROTAS was tested using 13 sets of decoys and was compared to those of existing atomic-level statistical potentials which incorporate orientation-dependent energy terms. The results show that ROTAS performs better than other competing potentials not only in native structure recognition, but also in best model selection and in correlation coefficients between energy and model quality. A new multibody statistical potential, ROTAS, accounting for the influence of rotameric states on the specificity of atomic interactions, was developed and tested on decoy sets. The results show that ROTAS has an improved ability to recognize native structures from decoy models compared to other potentials. The effectiveness of ROTAS may provide insightful information for the development of many applications which require accurate side-chain modeling, such as protein design, mutation analysis, and docking simulation.

  1. A Multiphase Validation of Atlas-Based Automatic and Semiautomatic Segmentation Strategies for Prostate MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, Spencer; Rodrigues, George, E-mail: george.rodrigues@lhsc.on.ca; Department of Epidemiology/Biostatistics, University of Western Ontario, London

    2013-01-01

    Purpose: To perform a rigorous technological assessment and statistical validation of a software technology for anatomic delineations of the prostate on MRI datasets. Methods and Materials: A 3-phase validation strategy was used. Phase I consisted of anatomic atlas building using 100 prostate cancer MRI data sets to provide training data sets for the segmentation algorithms. In phase II, 2 experts contoured 15 new MRI prostate cancer cases using 3 approaches (manual, N points, and region of interest). In phase III, 5 new physicians with variable MRI prostate contouring experience segmented the same 15 phase II datasets using 3 approaches: manual, N points with no editing, and full autosegmentation with user editing allowed. Statistical analyses for time and accuracy (using the Dice similarity coefficient) endpoints used traditional descriptive statistics, analysis of variance, analysis of covariance, and the pooled Student t test. Results: In phase I, average (SD) total contouring times for the 2 physicians were 228 (75) and 209 (65) seconds, and average per-slice contouring times were 17 (3.5) and 15 (3.9) seconds, respectively. In phase II, statistically significant differences in physician contouring time were observed based on physician, type of contouring, and case sequence. The N points strategy resulted in superior segmentation accuracy when initial autosegmented contours were compared with final contours. In phase III, statistically significant differences in contouring time were again observed based on physician, type of contouring, and case sequence. The average relative time savings for N points and autosegmentation were 49% and 27%, respectively, compared with manual contouring. The N points and autosegmentation strategies resulted in average Dice values of 0.89 and 0.88, respectively. Pre- and postedited autosegmented contours demonstrated a higher average Dice similarity coefficient of 0.94. Conclusion: The software provided robust contours with minimal editing required. Time savings were observed for all physicians irrespective of experience level and baseline manual contouring speed.
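
The accuracy endpoint above, the Dice similarity coefficient, is simple to compute for two binary contour masks; a minimal Python sketch (toy masks, not the study's contours) is:

```python
import numpy as np

# Dice similarity coefficient for binary masks A and B:
# DSC = 2|A intersect B| / (|A| + |B|).
def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

a = np.zeros((64, 64), dtype=bool); a[16:48, 16:48] = True   # contour 1
b = np.zeros((64, 64), dtype=bool); b[20:52, 18:50] = True   # contour 2
print(dice(a, b))   # ~0.82 for these overlapping toy squares
```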

  2. Weigh-in-Motion Sensor and Controller Operation and Performance Comparison

    DOT National Transportation Integrated Search

    2018-01-01

    This research project utilized statistical inference and comparison techniques to compare the performance of different Weigh-in-Motion (WIM) sensors. First, we analyzed test-vehicle data to perform an accuracy check of the results reported by the sen...

  3. Role of spatial inhomogenity in GPCR dimerisation predicted by receptor association-diffusion models

    NASA Astrophysics Data System (ADS)

    Deshpande, Sneha A.; Pawar, Aiswarya B.; Dighe, Anish; Athale, Chaitanya A.; Sengupta, Durba

    2017-06-01

    G protein-coupled receptor (GPCR) association is an emerging paradigm with far-reaching implications in the regulation of signalling pathways and therapeutic interventions. Recent super-resolution microscopy studies have revealed that the receptor dimer steady state exhibits sub-second dynamics. In particular, the GPCRs muscarinic acetylcholine receptor M1 (M1MR) and formyl peptide receptor (FPR) have been demonstrated to exhibit fast association/dissociation kinetics, independent of ligand binding. In this work, we have developed a spatial kinetic Monte Carlo model to investigate receptor homo-dimerisation at single-receptor resolution. Experimentally measured association/dissociation kinetic parameters and diffusion coefficients were used as inputs to the model. To test the effect of membrane spatial heterogeneity on the simulated steady state, simulations were compared to experimental statistics of dimerisation. In the simplest case the receptors are assumed to diffuse in a spatially homogeneous environment, while spatial heterogeneity is modelled as arising from crowding, membrane micro-domains and cytoskeletal compartmentalisation or 'corrals'. We show that a simple association-diffusion model is sufficient to reproduce M1MR association statistics, but fails to reproduce FPR statistics despite comparable kinetic constants. A parameter sensitivity analysis is required to reproduce the association statistics of FPR. The model reveals the complex interplay between cytoskeletal components and their influence on receptor association kinetics within the features of the membrane landscape. These results constitute an important step towards understanding the factors modulating GPCR organisation.

  4. Experimental design matters for statistical analysis: how to handle blocking.

    PubMed

    Jensen, Signe M; Schaarschmidt, Frank; Onofri, Andrea; Ritz, Christian

    2018-03-01

    Nowadays, evaluation of the effects of pesticides often relies on experimental designs that involve multiple concentrations of the pesticide of interest or multiple pesticides at specific comparable concentrations and, possibly, secondary factors of interest. Unfortunately, the experimental design is often more or less neglected when analysing data. Two data examples were analysed using different modelling strategies. First, in a randomized complete block design, mean heights of maize treated with a herbicide and one of several adjuvants were compared. Second, translocation of an insecticide applied to maize as a seed treatment was evaluated using incomplete data from an unbalanced design with several layers of hierarchical sampling. Extensive simulations were carried out to further substantiate the effects of different modelling strategies. It was shown that results from suboptimal approaches (two-sample t-tests and ordinary ANOVA assuming independent observations) may be both quantitatively and qualitatively different from the results obtained using an appropriate linear mixed model. The simulations demonstrated that the different approaches may lead to differences in coverage percentages of confidence intervals and type 1 error rates, confirming that misleading conclusions can easily happen when an inappropriate statistical approach is chosen. To ensure that experimental data are summarized appropriately, avoiding misleading conclusions, the experimental design should duly be reflected in the choice of statistical approaches and models. We recommend that author guidelines should explicitly point out that authors need to indicate how the statistical analysis reflects the experimental design. © 2017 Society of Chemical Industry. © 2017 Society of Chemical Industry.
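
As an illustration of the recommended approach (not the paper's own code or data), the sketch below simulates a randomized complete block design and fits a linear mixed model with a random block effect using statsmodels, instead of a t-test that wrongly assumes independent observations:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a randomized complete block design: a shared block effect
# induces correlation among observations within the same block.
rng = np.random.default_rng(6)
rows = []
for block in range(8):
    block_effect = rng.normal(0, 5)
    for trt in ["control", "herbicide"]:
        y = 100 + (8 if trt == "herbicide" else 0) + block_effect + rng.normal(0, 2)
        rows.append({"block": block, "treatment": trt, "height": y})
df = pd.DataFrame(rows)

# Linear mixed model: fixed treatment effect, random intercept per block.
mixed = smf.mixedlm("height ~ treatment", df, groups=df["block"]).fit()
print(mixed.summary())
```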

  5. Statistical analysis of corn yields responding to climate variability at various spatio-temporal resolutions

    NASA Astrophysics Data System (ADS)

    Jiang, H.; Lin, T.

    2017-12-01

    Rain-fed corn production systems are subject to sub-seasonal variations of precipitation and temperature during the growing season. Because each growth phase has its own inherent physiological processes, plants require different optimal environmental conditions during each phase. However, this temporal heterogeneity in response to climate variability over the crop lifecycle is often simplified and fixed as a constant response in large-scale statistical modeling analyses. To capture the time-varying growing requirements in large-scale statistical analysis, we develop and compare statistical models at various spatial and temporal resolutions to quantify the relationship between corn yield and weather factors for 12 corn-belt states from 1981 to 2016. The study compares three spatial resolutions (county, agricultural district, and state scale) and three temporal resolutions (crop growth phase, monthly, and growing season) to characterize the effects of spatial and temporal variability. Our results show that the agricultural district model with growth-phase resolution can explain 52% of the variation in corn yield caused by temperature and precipitation variability. It provides a practical model structure, balancing the overfitting problem of county-specific models against the weak explanatory power of state-specific models. In the US corn belt, precipitation has a positive impact on corn yield throughout the growing season except during the vegetative stage, while sensitivity to extreme heat is highest from the silking to dough phases. The results show that the northern counties of the corn-belt area are less affected by extreme heat but are more vulnerable to water deficiency.

  6. Electron microscopic quantification of collagen fibril diameters in the rabbit medial collateral ligament: a baseline for comparison.

    PubMed

    Frank, C; Bray, D; Rademaker, A; Chrusch, C; Sabiston, P; Bodie, D; Rangayyan, R

    1989-01-01

    To establish a normal baseline for comparison, thirty-one thousand collagen fibril diameters were measured in calibrated transmission electron microscope (TEM) photomicrographs of normal rabbit medial collateral ligaments (MCLs). A new automated method of quantitation was used to statistically compare fibril minimum-diameter distributions at one midsubstance location in both MCLs from six animals at 3 months of age (immature) and three animals at 10 months of age (mature). Pooled results demonstrate that rabbit MCLs have statistically different (p < 0.001) mean minimum diameters at these two ages. Interanimal differences in mean fibril minimum diameters were also significant (p < 0.001) and varied by 20% to 25% in both mature and immature animals. Finally, there were significant differences (p < 0.001) in mean diameters and distributions from side to side in all animals. These mean left-to-right differences were less than 10% in all mature animals but as much as 62% in some immature animals. Statistical analysis of these data demonstrates that animal-to-animal comparisons using these protocols require a large number of animals, with appropriate numbers of fibrils being measured, to detect small intergroup differences. In experiments which compare left to right ligaments, far fewer animals are required to detect similarly small differences. These results demonstrate the necessity of rigorous control of sampling, an extensive normal baseline, and statistically confirmed experimental designs in any TEM comparison of collagen fibril diameters.

  7. An intelligent system based on fuzzy probabilities for medical diagnosis– a study in aphasia diagnosis*

    PubMed Central

    Moshtagh-Khorasani, Majid; Akbarzadeh-T, Mohammad-R; Jahangiri, Nader; Khoobdel, Mehdi

    2009-01-01

    BACKGROUND: Aphasia diagnosis is particularly challenging due to linguistic uncertainty and vagueness, inconsistencies in the definition of aphasic syndromes, a large number of imprecise measurements, and natural diversity and subjectivity in test objects as well as in the opinions of the experts who diagnose the disease. METHODS: Fuzzy probability is proposed here as the basic framework for handling the uncertainties in medical diagnosis, and particularly aphasia diagnosis. To efficiently construct this fuzzy probabilistic mapping, statistical analysis is performed that constructs the input membership functions and determines an effective set of input features. RESULTS: Considering the high sensitivity of performance measures to different distributions of testing/training sets, a statistical t-test of significance is applied to compare the fuzzy approach results with neural network (NN) results, as well as with the author's earlier work using fuzzy logic. The proposed fuzzy probability estimator approach clearly provides better diagnosis for both classes of data sets. Specifically, for the first and second types of fuzzy probability classifiers, i.e. the spontaneous speech and comprehensive models, P-values are 2.24E-08 and 0.0059, respectively, strongly rejecting the null hypothesis. CONCLUSIONS: The technique is applied and compared on both comprehensive and spontaneous speech test data for the diagnosis of four aphasia types: Anomic, Broca, Global and Wernicke. Statistical analysis confirms that the proposed approach can significantly improve accuracy using fewer aphasia features. PMID:21772867

  8. Quality of life in breast cancer patients--a quantile regression analysis.

    PubMed

    Pourhoseingholi, Mohamad Amin; Safaee, Azadeh; Moghimi-Dehkordi, Bijan; Zeighami, Bahram; Faghihzadeh, Soghrat; Tabatabaee, Hamid Reza; Pourhoseingholi, Asma

    2008-01-01

    Quality of life studies play an important role in health care, especially for chronic diseases, in clinical judgment and in the allocation of medical resources. Statistical tools like linear regression are widely used to assess the predictors of quality of life, but when the response is not normally distributed the results can be misleading. The aim of this study was to determine the predictors of quality of life in breast cancer patients using a quantile regression model and to compare the results with linear regression. A cross-sectional study was conducted on 119 breast cancer patients admitted and treated in the chemotherapy ward of Namazi hospital in Shiraz. We used the QLQ-C30 questionnaire to assess quality of life in these patients. A quantile regression was employed to assess the associated factors, and the results were compared to linear regression. All analyses were carried out using SAS. The mean score for global health status for breast cancer patients was 64.92 ± 11.42. Linear regression showed that only grade of tumor, occupational status, menopausal status, financial difficulties and dyspnea were statistically significant. In contrast to linear regression, financial difficulties were not significant in the quantile regression analysis, and dyspnea was significant only for the first quartile. Emotional functioning and duration of disease also statistically predicted the QOL score in the third quartile. The results demonstrate that using quantile regression leads to better interpretation and richer inference about predictors of quality of life in breast cancer patients.
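
As a sketch of the contrast drawn here (Python, simulated data with hypothetical variable names, not the study's data): ordinary least squares models the mean response, while quantile regression targets specific quartiles, which is more informative when scores are skewed:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated, right-skewed QOL scores with one binary predictor.
rng = np.random.default_rng(7)
n = 150
dyspnea = rng.integers(0, 2, n)
qol = 65 - 6 * dyspnea + rng.gamma(2, 5, n) - 10   # skewed errors

df = pd.DataFrame({"qol": qol, "dyspnea": dyspnea})
ols = smf.ols("qol ~ dyspnea", df).fit()           # models the mean
q25 = smf.quantreg("qol ~ dyspnea", df).fit(q=0.25)  # first quartile
q75 = smf.quantreg("qol ~ dyspnea", df).fit(q=0.75)  # third quartile
print(ols.params["dyspnea"], q25.params["dyspnea"], q75.params["dyspnea"])
```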

  9. Improving information retrieval in functional analysis.

    PubMed

    Rodriguez, Juan C; González, Germán A; Fresno, Cristóbal; Llera, Andrea S; Fernández, Elmer A

    2016-12-01

    Transcriptome analysis is essential to understand the mechanisms regulating key biological processes and functions. The first step usually consists of identifying candidate genes; to find out which pathways are affected by those genes, however, functional analysis (FA) is mandatory. The most frequently used strategies for this purpose are Gene Set and Singular Enrichment Analysis (GSEA and SEA) over Gene Ontology. Several statistical methods have been developed and compared in terms of computational efficiency and/or statistical appropriateness. However, whether their results are similar or complementary, the sensitivity to parameter settings, or possible bias in the analyzed terms has not been addressed so far. Here, two GSEA and four SEA methods and their parameter combinations were evaluated in six datasets by comparing two breast cancer subtypes with well-known differences in genetic background and patient outcomes. We show that GSEA and SEA lead to different results depending on the chosen statistic, model and/or parameters. Both approaches provide complementary results from a biological perspective. Hence, an Integrative Functional Analysis (IFA) tool is proposed to improve information retrieval in FA. It provides a common gene expression analytic framework that grants a comprehensive and coherent analysis. Only a minimal user parameter setting is required, since the best SEA/GSEA alternatives are integrated. IFA utility was demonstrated by evaluating four prostate cancer and the TCGA breast cancer microarray datasets, which showed its biological generalization capabilities. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Average ambulatory measures of sound pressure level, fundamental frequency, and vocal dose do not differ between adult females with phonotraumatic lesions and matched control subjects

    PubMed Central

    Van Stan, Jarrad H.; Mehta, Daryush D.; Zeitels, Steven M.; Burns, James A.; Barbu, Anca M.; Hillman, Robert E.

    2015-01-01

    Objectives Clinical management of phonotraumatic vocal fold lesions (nodules, polyps) is based largely on assumptions that abnormalities in habitual levels of sound pressure level (SPL), fundamental frequency (f0), and/or amount of voice use play a major role in lesion development and chronic persistence. This study used ambulatory voice monitoring to evaluate if significant differences in voice use exist between patients with phonotraumatic lesions and normal matched controls. Methods Subjects were 70 adult females: 35 with vocal fold nodules or polyps and 35 age-, sex-, and occupation-matched normal individuals. Weeklong summary statistics of voice use were computed from anterior neck surface acceleration recorded using a smartphone-based ambulatory voice monitor. Results Paired t-tests and Kolmogorov-Smirnov tests resulted in no statistically significant differences between patients and matched controls regarding average measures of SPL, f0, vocal dose measures, and voicing/voice rest periods. Paired t-tests comparing f0 variability between the groups resulted in statistically significant differences with moderate effect sizes. Conclusions Individuals with phonotraumatic lesions did not exhibit differences in average ambulatory measures of vocal behavior when compared with matched controls. More refined characterizations of underlying phonatory mechanisms and other potentially contributing causes are warranted to better understand risk factors associated with phonotraumatic lesions. PMID:26024911

  11. Use of error grid analysis to evaluate acceptability of a point of care prothrombin time meter.

    PubMed

    Petersen, John R; Vonmarensdorf, Hans M; Weiss, Heidi L; Elghetany, M Tarek

    2010-02-01

    Statistical methods (linear regression, correlation analysis, etc.) are frequently employed in comparing methods in the central laboratory (CL). Assessing the acceptability of point-of-care testing (POCT) equipment, however, is more difficult because statistically significant biases may not have an impact on clinical care. We showed how error grid (EG) analysis can be used to compare POCT PT INR with the CL. We compared results from 103 patients seen in an anticoagulation clinic who were on Coumadin maintenance therapy, using fingerstick samples for POCT (Roche CoaguChek XS and S) and citrated venous blood samples for the CL (Stago STAR). To compare the clinical acceptability of results, we developed an EG with zones A, B, C and D. Using second-order polynomial equation analysis, POCT results correlate highly with the CL for the CoaguChek XS (R² = 0.955) and CoaguChek S (R² = 0.93), but this does not indicate whether POCT results are clinically interchangeable with the CL. Using the EG it is readily apparent which levels can be considered clinically identical to the CL despite analytical bias. We have demonstrated the usefulness of EG in determining the acceptability of POCT PT INR testing and how it can be used to determine cut-offs where differences in POCT results may impact clinical care. Copyright 2009 Elsevier B.V. All rights reserved.

  12. Strategies Used by Students to Compare Two Data Sets

    ERIC Educational Resources Information Center

    Reaburn, Robyn

    2012-01-01

    One of the common tasks of inferential statistics is to compare two data sets. Long before formal statistical procedures, however, students can be encouraged to make comparisons between data sets and therefore build up intuitive statistical reasoning. Such tasks also give meaning to the data collection students may do. This study describes the…

  13. Developing Teachers' Reasoning about Comparing Distributions: A Cross-Institutional Effort

    ERIC Educational Resources Information Center

    Tran, Dung; Lee, Hollylynne; Doerr, Helen

    2016-01-01

    The research reported here uses a pre/post-test model and stimulated recall interviews to assess teachers' statistical reasoning about comparing distributions, when enrolled in a graduate-level statistics education course. We discuss key aspects of the course design aimed at improving teachers' learning and teaching of statistics, and the…

  14. Retention of Statistical Concepts in a Preliminary Randomization-Based Introductory Statistics Curriculum

    ERIC Educational Resources Information Center

    Tintle, Nathan; Topliff, Kylie; VanderStoep, Jill; Holmes, Vicki-Lynn; Swanson, Todd

    2012-01-01

    Previous research suggests that a randomization-based introductory statistics course may improve student learning compared to the consensus curriculum. However, it is unclear whether these gains are retained by students post-course. We compared the conceptual understanding of a cohort of students who took a randomization-based curriculum (n = 76)…

  15. Evaluation of Methods Used for Estimating Selected Streamflow Statistics, and Flood Frequency and Magnitude, for Small Basins in North Coastal California

    USGS Publications Warehouse

    Mann, Michael P.; Rizzardo, Jule; Satkowski, Richard

    2004-01-01

    Accurate streamflow statistics are essential to water resource agencies involved in both science and decision-making. When long-term streamflow data are lacking at a site, estimation techniques are often employed to generate streamflow statistics. However, procedures for accurately estimating streamflow statistics often are lacking, and when estimation procedures are developed, they often are not evaluated properly before being applied. Use of unevaluated or underevaluated flow-statistic estimation techniques can result in improper water-resources decision-making. The California State Water Resources Control Board (SWRCB) uses two key techniques, a modified rational equation and drainage basin area-ratio transfer, to estimate streamflow statistics at ungaged locations. These techniques have been implemented to varying degrees, but have not been formally evaluated. For estimating peak flows at the 2-, 5-, 10-, 25-, 50-, and 100-year recurrence intervals, the SWRCB uses the U.S. Geological Survey's (USGS) regional peak-flow equations. In this study, conducted cooperatively by the USGS and SWRCB, the SWRCB estimated several flow statistics at 40 USGS streamflow gaging stations in the north coast region of California. The SWRCB estimates were made without reference to USGS flow data. The USGS used the streamflow data from the 40 stations to generate flow statistics that could be compared with SWRCB estimates for accuracy. While some SWRCB estimates compared favorably with USGS statistics, results were subject to varying degrees of error over the region. Flow-based estimation techniques generally performed better than rain-based methods, especially for estimation of December 15 to March 31 mean daily flows. The USGS peak-flow equations also performed well, but tended to underestimate peak flows. The USGS equations performed within reported error bounds, but will require updating in the future as peak-flow data sets grow larger. Little correlation was discovered between estimation errors and geographic locations or various basin characteristics. However, for 25-percentile-year mean-daily-flow estimates for December 15 to March 31, the greatest estimation errors were at east San Francisco Bay area stations with mean annual precipitation less than or equal to 30 inches and estimated 2-year/24-hour rainfall intensity less than 3 inches.

  16. Mapping cell populations in flow cytometry data for cross‐sample comparison using the Friedman–Rafsky test statistic as a distance measure

    PubMed Central

    Hsiao, Chiaowen; Liu, Mengya; Stanton, Rick; McGee, Monnie; Qian, Yu

    2015-01-01

    Flow cytometry (FCM) is a fluorescence‐based single‐cell experimental technology that is routinely applied in biomedical research for identifying cellular biomarkers of normal physiological responses and abnormal disease states. While many computational methods have been developed that focus on identifying cell populations in individual FCM samples, very few have addressed how the identified cell populations can be matched across samples for comparative analysis. This article presents FlowMap‐FR, a novel method for cell population mapping across FCM samples. FlowMap‐FR is based on the Friedman–Rafsky nonparametric test statistic (FR statistic), which quantifies the equivalence of multivariate distributions. As applied to FCM data by FlowMap‐FR, the FR statistic objectively quantifies the similarity between cell populations based on the shapes, sizes, and positions of fluorescence data distributions in the multidimensional feature space. To test and evaluate the performance of FlowMap‐FR, we simulated the kinds of biological and technical sample variations that are commonly observed in FCM data. The results show that FlowMap‐FR is able to effectively identify equivalent cell populations between samples under scenarios of proportion differences and modest position shifts. As a statistical test, FlowMap‐FR can be used to determine whether the expression of a cellular marker is statistically different between two cell populations, suggesting candidates for new cellular phenotypes by providing an objective statistical measure. In addition, FlowMap‐FR can indicate situations in which inappropriate splitting or merging of cell populations has occurred during gating procedures. We compared the FR statistic with the symmetric version of Kullback–Leibler divergence measure used in a previous population matching method with both simulated and real data. The FR statistic outperforms the symmetric version of KL‐distance in distinguishing equivalent from nonequivalent cell populations. FlowMap‐FR was also employed as a distance metric to match cell populations delineated by manual gating across 30 FCM samples from a benchmark FlowCAP data set. An F‐measure of 0.88 was obtained, indicating high precision and recall of the FR‐based population matching results. FlowMap‐FR has been implemented as a standalone R/Bioconductor package so that it can be easily incorporated into current FCM data analytical workflows. © 2015 International Society for Advancement of Cytometry PMID:26274018

  17. Mapping cell populations in flow cytometry data for cross-sample comparison using the Friedman-Rafsky test statistic as a distance measure.

    PubMed

    Hsiao, Chiaowen; Liu, Mengya; Stanton, Rick; McGee, Monnie; Qian, Yu; Scheuermann, Richard H

    2016-01-01

    Flow cytometry (FCM) is a fluorescence-based single-cell experimental technology that is routinely applied in biomedical research for identifying cellular biomarkers of normal physiological responses and abnormal disease states. While many computational methods have been developed that focus on identifying cell populations in individual FCM samples, very few have addressed how the identified cell populations can be matched across samples for comparative analysis. This article presents FlowMap-FR, a novel method for cell population mapping across FCM samples. FlowMap-FR is based on the Friedman-Rafsky nonparametric test statistic (FR statistic), which quantifies the equivalence of multivariate distributions. As applied to FCM data by FlowMap-FR, the FR statistic objectively quantifies the similarity between cell populations based on the shapes, sizes, and positions of fluorescence data distributions in the multidimensional feature space. To test and evaluate the performance of FlowMap-FR, we simulated the kinds of biological and technical sample variations that are commonly observed in FCM data. The results show that FlowMap-FR is able to effectively identify equivalent cell populations between samples under scenarios of proportion differences and modest position shifts. As a statistical test, FlowMap-FR can be used to determine whether the expression of a cellular marker is statistically different between two cell populations, suggesting candidates for new cellular phenotypes by providing an objective statistical measure. In addition, FlowMap-FR can indicate situations in which inappropriate splitting or merging of cell populations has occurred during gating procedures. We compared the FR statistic with the symmetric version of Kullback-Leibler divergence measure used in a previous population matching method with both simulated and real data. The FR statistic outperforms the symmetric version of KL-distance in distinguishing equivalent from nonequivalent cell populations. FlowMap-FR was also employed as a distance metric to match cell populations delineated by manual gating across 30 FCM samples from a benchmark FlowCAP data set. An F-measure of 0.88 was obtained, indicating high precision and recall of the FR-based population matching results. FlowMap-FR has been implemented as a standalone R/Bioconductor package so that it can be easily incorporated into current FCM data analytical workflows. © The Authors. Published by Wiley Periodicals, Inc. on behalf of ISAC.
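
    The records above describe the Friedman-Rafsky statistic at the heart of FlowMap-FR (itself an R/Bioconductor package). The following Python sketch shows the core construction under simple assumptions (Euclidean distances, a permutation null): build a minimum spanning tree on the pooled samples and count the edges joining points from different samples; fewer cross-sample edges than expected indicates nonequivalent distributions. Function names are hypothetical:

        import numpy as np
        from scipy.sparse.csgraph import minimum_spanning_tree
        from scipy.spatial.distance import pdist, squareform

        def fr_cross_edges(x, y):
            """Count MST edges that join points from different samples."""
            pooled = np.vstack([x, y])
            labels = np.r_[np.zeros(len(x)), np.ones(len(y))]
            mst = minimum_spanning_tree(squareform(pdist(pooled))).tocoo()
            return int(np.sum(labels[mst.row] != labels[mst.col]))

        def fr_permutation_pvalue(x, y, n_perm=499, seed=0):
            """Small cross-edge counts are evidence the two samples differ."""
            rng = np.random.default_rng(seed)
            observed = fr_cross_edges(x, y)
            pooled, n = np.vstack([x, y]), len(x)
            hits = sum(
                fr_cross_edges(pooled[p[:n]], pooled[p[n:]]) <= observed
                for p in (rng.permutation(len(pooled)) for _ in range(n_perm)))
            return (hits + 1) / (n_perm + 1)

        # Shifted Gaussian clouds give a small p-value; identical clouds do not.
        rng = np.random.default_rng(1)
        a, b = rng.normal(0, 1, (60, 3)), rng.normal(1.0, 1, (60, 3))
        print(fr_permutation_pvalue(a, b))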

  18. Direct and indirect comparison meta-analysis of levetiracetam versus phenytoin or valproate for convulsive status epilepticus.

    PubMed

    Brigo, Francesco; Bragazzi, Nicola; Nardone, Raffaele; Trinka, Eugen

    2016-11-01

    The aim of this study was to conduct a meta-analysis of published studies to directly compare intravenous (IV) levetiracetam (LEV) with IV phenytoin (PHT) or IV valproate (VPA) as second-line treatment of status epilepticus (SE), to indirectly compare IV LEV with IV VPA using common reference-based indirect comparison meta-analysis, and to verify whether results of indirect comparisons are consistent with results of head-to-head randomized controlled trials (RCTs) directly comparing IV LEV with IV VPA. Random-effects Mantel-Haenszel meta-analyses were used to obtain odds ratios (ORs) for the efficacy and safety of LEV versus VPA and of LEV or VPA versus PHT. Adjusted indirect comparisons between LEV and VPA were then made. Two RCTs comparing LEV with PHT (144 episodes of SE) and 3 RCTs comparing VPA with PHT (227 episodes of SE) were included. Direct comparisons showed no difference in clinical seizure cessation, neither between VPA and PHT (OR: 1.07; 95% CI: 0.57 to 2.03) nor between LEV and PHT (OR: 1.18; 95% CI: 0.50 to 2.79). Indirect comparisons showed no difference between LEV and VPA for clinical seizure cessation (OR: 1.16; 95% CI: 0.45 to 2.97). Results of indirect comparisons are consistent with results of a recent RCT directly comparing LEV with VPA. The absence of a statistically significant difference in direct and indirect comparisons is due to the lack of sufficient statistical power to detect a difference. Conducting an RCT that lacks enough participants to detect a clinically important difference, or to estimate an effect with sufficient precision, can be regarded as a waste of time and resources and may raise several ethical concerns, especially in RCTs on SE. Copyright © 2016 Elsevier Inc. All rights reserved.
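
    The adjusted indirect comparison referred to here is commonly done with the Bucher method: subtract the two log odds ratios that share the common comparator (PHT) and add their variances. A hedged Python sketch using the ORs quoted above; it lands near, but not exactly on, the reported indirect OR of 1.16 (0.45 to 2.97), since the published analysis pooled trial-level data:

        import math

        def bucher_indirect(or_a_vs_c, ci_a, or_b_vs_c, ci_b, z=1.96):
            """Indirect comparison of A vs B through common comparator C.
            CIs are (low, high) 95% intervals on the OR scale."""
            se_a = (math.log(ci_a[1]) - math.log(ci_a[0])) / (2 * z)
            se_b = (math.log(ci_b[1]) - math.log(ci_b[0])) / (2 * z)
            log_or = math.log(or_a_vs_c) - math.log(or_b_vs_c)
            se = math.sqrt(se_a ** 2 + se_b ** 2)
            return (math.exp(log_or),
                    math.exp(log_or - z * se),
                    math.exp(log_or + z * se))

        # LEV vs PHT: 1.18 (0.50-2.79); VPA vs PHT: 1.07 (0.57-2.03)
        print(bucher_indirect(1.18, (0.50, 2.79), 1.07, (0.57, 2.03)))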

  19. An application of an optimal statistic for characterizing relative orientations

    NASA Astrophysics Data System (ADS)

    Jow, Dylan L.; Hill, Ryley; Scott, Douglas; Soler, J. D.; Martin, P. G.; Devlin, M. J.; Fissel, L. M.; Poidevin, F.

    2018-02-01

    We present the projected Rayleigh statistic (PRS), a modification of the classic Rayleigh statistic, as a test for non-uniform relative orientation between two pseudo-vector fields. In the application here, this gives an effective way of investigating whether polarization pseudo-vectors (spin-2 quantities) are preferentially parallel or perpendicular to filaments in the interstellar medium. There are also other potential applications in astrophysics, e.g. comparing small-scale orientations with larger-scale shear patterns. We compare the efficiency of the PRS against histogram binning methods that have previously been used for characterizing the relative orientations of gas column density structures with the magnetic field projected on the plane of the sky. We examine data for the Vela C molecular cloud, where the column density is inferred from Herschel submillimetre observations, and the magnetic field from observations by the Balloon-borne Large-Aperture Submillimetre Telescope in the 250-, 350- and 500-μm wavelength bands. We find that the PRS has greater statistical power than approaches that bin the relative orientation angles, as it makes more efficient use of the information contained in the data. In particular, the use of the PRS to test for preferential alignment results in a higher statistical significance, in each of the four Vela C regions, with the greatest increase being by a factor of 1.3 in the South-Nest region in the 250-μm band.
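
    The statistic itself is a one-liner. In the sketch below, the normalization by sqrt(n/2) follows from cos(2θ) having zero mean and variance 1/2 for uniformly distributed relative angles, so Z is approximately standard normal under the null of no preferred orientation (significantly positive means parallel, significantly negative means perpendicular):

        import numpy as np

        def projected_rayleigh_statistic(theta):
            """PRS for relative angles theta (radians) between spin-2 fields."""
            theta = np.asarray(theta)
            return np.sum(np.cos(2.0 * theta)) / np.sqrt(len(theta) / 2.0)

        rng = np.random.default_rng(2)
        print(projected_rayleigh_statistic(rng.uniform(0, np.pi / 2, 1000)))   # ~N(0,1)
        print(projected_rayleigh_statistic(np.abs(rng.normal(0, 0.2, 1000))))  # large +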

  20. Statistical Inference at Work: Statistical Process Control as an Example

    ERIC Educational Resources Information Center

    Bakker, Arthur; Kent, Phillip; Derry, Jan; Noss, Richard; Hoyles, Celia

    2008-01-01

    To characterise statistical inference in the workplace, this paper compares a prototypical type of statistical inference at work, statistical process control (SPC), with a type of statistical inference that is better known in educational settings, hypothesis testing. Although there are some similarities between the reasoning structure involved in…

  1. Dental enamel defect diagnosis through different technology-based devices.

    PubMed

    Kobayashi, Tatiana Yuriko; Vitor, Luciana Lourenço Ribeiro; Carrara, Cleide Felício Carvalho; Silva, Thiago Cruvinel; Rios, Daniela; Machado, Maria Aparecida Andrade Moreira; Oliveira, Thais Marchini

    2018-06-01

    Dental enamel defects (DEDs) are faulty or deficient enamel formations of primary and permanent teeth. Changes during tooth development result in hypoplasia (a quantitative defect) and/or hypomineralisation (a qualitative defect). The aim was to compare technology-based diagnostic methods for detecting DEDs. Two hundred and nine dental surfaces of anterior permanent teeth were selected in patients, 6-11 years of age, with cleft lip with/without cleft palate. First, a conventional clinical examination was conducted according to the modified Developmental Defects of Enamel Index (DDE Index). Dental surfaces were evaluated using an operating microscope and a fluorescence-based device. Interexaminer reproducibility was determined using the kappa test. To compare groups, McNemar's test was used. Cramer's V test was used for comparing the distribution of index codes obtained after classification of all dental surfaces. Cramer's V test revealed statistically significant differences (P < .0001) in the distribution of index codes obtained using the different methods; the coefficients were 0.365 for conventional clinical examination versus fluorescence, 0.961 for conventional clinical examination versus operating microscope and 0.358 for operating microscope versus fluorescence. The differences in sensitivity for the operating microscope and the fluorescence method were statistically significant (P = .008 and P < .0001, respectively). Otherwise, the results did not show statistically significant differences in accuracy and specificity for either the operating microscope or the fluorescence methods. This study suggests that the operating microscope performed better than the fluorescence-based device and could be an auxiliary method for the detection of DEDs. © 2017 FDI World Dental Federation.

  2. Socio-Spatial Patterning of Off-Sale and On-Sale Alcohol Outlets in a Texas City

    PubMed Central

    Han, Daikwon; Gorman, Dennis M.

    2014-01-01

    Introduction and Aims: To examine the socio-spatial patterning of off-sale and on-sale alcohol outlets following a policy change that ended prohibition of off-sale outlets in Lubbock, Texas. Design and Methods: The spatial patterning of alcohol outlets by licensing type was examined using the k-function difference (D statistic) to compare the relative degree of spatial aggregation of the two types of alcohol outlets and by the spatial scan statistic to identify statistically significant geographic clusters of outlets. The sociodemographic characteristics of the areas containing clusters of outlets were compared to the rest of the city. In addition, the socioeconomic characteristics of census block groups with and without existing on-sale outlets were compared, as were the socioeconomic characteristics of census block groups with and without the newly issued off-sale licenses. Results: The existing on-sale premises in Lubbock and the newly established off-sale premises introduced as a result of the 2009 policy change displayed different spatial patterns, with the latter being more spatially dispersed. A large cluster of on-sale outlets identified in the north-east of the city was located in a socially and economically disadvantaged area of the city. Discussion and Conclusion: The findings support the view that it is important to understand the local context of deprivation within a city when examining the location of alcohol outlets and add to the existing research by drawing attention to the importance of geographic scale in assessing such relationships. PMID:24320205

  3. A stochastic model of particle dispersion in turbulent reacting gaseous environments

    NASA Astrophysics Data System (ADS)

    Sun, Guangyuan; Lignell, David; Hewson, John

    2012-11-01

    We are performing fundamental studies of dispersive transport and time-temperature histories of Lagrangian particles in turbulent reacting flows. The particle-flow statistics, including the full particle temperature PDF, are of interest. A challenge in modeling particle motions is the accurate prediction of fine-scale aerosol-fluid interactions. A computationally affordable stochastic modeling approach, one-dimensional turbulence (ODT), is a proven method that captures the full range of length and time scales, and provides detailed statistics of fine-scale turbulent-particle mixing and transport. Limited results of particle transport in ODT have been reported in non-reacting flow. Here, we extend ODT to particle transport in reacting flow. The results of particle transport in three flow configurations are presented: channel flow, homogeneous isotropic turbulence, and jet flames. We investigate the statistics of particle-flow interactions, including (1) a parametric study with varying temperatures, Reynolds numbers, and particle Stokes numbers; (2) particle temperature histories and PDFs; and (3) time scales and sensitivity to initial and boundary conditions. Flow statistics are compared to both experimental measurements and DNS data.

  4. Statistical Study between Solar Wind, Magnetosheath and Plasma Sheet Fluctuation Properties and Correlation with Magnetotail Bursty Bulk Flows

    NASA Astrophysics Data System (ADS)

    Chu, C. S.; Nykyri, K.; Dimmock, A. P.

    2017-12-01

    In this paper we test a hypothesis that magnetotail reconnection in the thin current sheet could be initiated by external fluctuations. Kelvin-Helmholtz instability (KHI) has been observed during southward IMF and it can produce cold, dense plasma transport and compressional fluctuations that can move further into the magnetosphere. The properties of the KHI depend on the magnetosheath seed fluctuation spectrum (Nykyri et al., JGR, 2017). In this paper we present a statistical correlation study between Solar Wind, Magnetosheath and Plasma sheet fluctuation properties using 9+ years of THEMIS data in aberrated GSM frame, and in a normalized coordinate system that takes into account the changes of the magnetopause and bow shock location with respect to changing solar wind conditions. We present statistical results of the plasma sheet fluctuation properties (dn, dV and dB) and their dependence on IMF orientation and fluctuation properties and resulting magnetosheath state. These statistical maps are compared with spatial distribution of magnetotail Bursty Bulk Flows to study possible correlations with magnetotail reconnection and these fluctuations.

  5. LD-SPatt: large deviations statistics for patterns on Markov chains.

    PubMed

    Nuel, G

    2004-01-01

    Statistics on Markov chains are widely used for the study of patterns in biological sequences. Statistics on these models can be computed through several approaches. Central limit theorem (CLT) methods producing Gaussian approximations are among the most popular. Unfortunately, in order to find a pattern of interest, these methods have to deal with tail distribution events for which the CLT approximation is especially poor. In this paper, we propose a new approach based on large deviations theory to assess pattern statistics. We first recall theoretical results for empirical mean (level 1) as well as empirical distribution (level 2) large deviations on Markov chains. Then, we present the applications of these results focusing on numerical issues. LD-SPatt is the name of GPL software implementing these algorithms. We compare this approach to several existing ones in terms of complexity and reliability and show that the large deviations are more reliable than the Gaussian approximations in absolute values as well as in terms of ranking and are at least as reliable as compound Poisson approximations. We then finally discuss some further possible improvements and applications of this new method.

  6. Comparison of probability statistics for automated ship detection in SAR imagery

    NASA Astrophysics Data System (ADS)

    Henschel, Michael D.; Rey, Maria T.; Campbell, J. W. M.; Petrovic, D.

    1998-12-01

    This paper discusses the initial results of a recent operational trial of the Ocean Monitoring Workstation's (OMW) ship detection algorithm, which is essentially a Constant False Alarm Rate (CFAR) filter applied to Synthetic Aperture Radar data. The choice of probability distribution and methodologies for calculating scene-specific statistics are discussed in some detail. An empirical basis for the choice of probability distribution used is discussed. We compare the results using a 1-look K-distribution function with various parameter choices and methods of estimation. As a special case of sea clutter statistics, the application of a χ²-distribution is also discussed. Comparisons are made with reference to RADARSAT data collected during the Maritime Command Operation Training exercise conducted in Atlantic Canadian waters in June 1998. Reference is also made to previously collected statistics. The OMW is a commercial software suite that provides modules for automated vessel detection, oil spill monitoring, and environmental monitoring. This work has been undertaken to fine-tune the OMW algorithms, with special emphasis on the false alarm rate of each algorithm.
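
    For the χ²-distribution special case mentioned above, the CFAR recipe reduces to fitting the clutter model to scene samples and thresholding at the (1 - PFA) quantile; the K-distribution case follows the same pattern with a different parametric family. This is an illustrative sketch, not the OMW implementation:

        import numpy as np
        from scipy import stats

        def cfar_threshold_chi2(clutter_samples, pfa=1e-4):
            """Threshold giving a constant false-alarm rate under a
            chi-squared clutter model fitted to local scene statistics."""
            df, loc, scale = stats.chi2.fit(clutter_samples, floc=0)
            return stats.chi2.ppf(1.0 - pfa, df, loc=loc, scale=scale)

        rng = np.random.default_rng(3)
        scene = rng.chisquare(df=4, size=100_000)    # simulated sea clutter
        threshold = cfar_threshold_chi2(scene)
        print(threshold, (scene > threshold).sum())  # ~10 false alarms expected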

  7. Evaluating pictogram prediction in a location-aware augmentative and alternative communication system.

    PubMed

    Garcia, Luís Filipe; de Oliveira, Luís Caldas; de Matos, David Martins

    2016-01-01

    This study compared the performance of two statistical location-aware pictogram prediction mechanisms with an all-purpose (All) pictogram prediction mechanism having no location knowledge. The All approach used a single language model across all locations. One of the location-aware alternatives, the location-specific (Spec) approach, made use of specific language models for pictogram prediction in each location of interest. The other location-aware approach resulted from combining the Spec and the All approaches, and was designated the mixed approach (Mix). In this approach, the language models acquired knowledge from all locations, but a higher relevance was assigned to the vocabulary from the associated location. Results from simulations showed that the Mix and Spec approaches could only outperform the baseline in a statistically significant way if pictogram users reuse more than 50% and 75% of their sentences, respectively. Under low sentence reuse conditions there were no statistically significant differences between the location-aware approaches and the All approach. Under these conditions, the Mix approach performed better than the Spec approach in a statistically significant way.
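
    A toy Python sketch of the Mix idea under stated assumptions: unigram pictogram frequencies per location, linearly interpolated with an all-locations model. The weight lam = 0.7 and all names here are hypothetical, not parameters from the study:

        from collections import Counter

        def mix_probability(pictogram, location, counts_by_location, lam=0.7):
            """Interpolate a location-specific unigram model with an
            all-locations model, weighting local vocabulary more heavily."""
            local = counts_by_location[location]
            overall = Counter()
            for c in counts_by_location.values():
                overall.update(c)
            p_spec = local[pictogram] / max(1, sum(local.values()))
            p_all = overall[pictogram] / max(1, sum(overall.values()))
            return lam * p_spec + (1 - lam) * p_all

        usage = {"kitchen": Counter({"eat": 8, "drink": 4}),
                 "school": Counter({"book": 6, "eat": 1})}
        print(mix_probability("eat", "kitchen", usage))  # local use dominates
        print(mix_probability("eat", "school", usage))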

  8. Statistical flaws in design and analysis of fertility treatment studies on cryopreservation raise doubts on the conclusions

    PubMed Central

    van Gelder, P.H.A.J.M.; Nijs, M.

    2011-01-01

    Decisions about pharmacotherapy are being taken by medical doctors and authorities based on comparative studies on the use of medications. In studies on fertility treatments in particular, the methodological quality is of utmost importance in the application of evidence-based medicine and systematic reviews. Nevertheless, flaws and omissions appear quite regularly in these types of studies. The current study aims to present an overview of some of the typical statistical flaws, illustrated by a number of example studies which have been published in peer-reviewed journals. Based on an investigation of eleven randomly selected studies on fertility treatments with cryopreservation, it appeared that the methodological quality of these studies often did not fulfil the required statistical criteria. The following statistical flaws were identified: flaws in study design, patient selection, and units of analysis or in the definition of the primary endpoints. Other errors could be found in p-value and power calculations or in critical p-value definitions. Proper interpretation of the results and/or use of these study results in a meta-analysis should therefore be conducted with care. PMID:24753877

  9. Statistical flaws in design and analysis of fertility treatment -studies on cryopreservation raise doubts on the conclusions.

    PubMed

    van Gelder, P H A J M; Nijs, M

    2011-01-01

    Decisions about pharmacotherapy are being taken by medical doctors and authorities based on comparative studies on the use of medications. In studies on fertility treatments in particular, the methodological quality is of utmost importance in the application of evidence-based medicine and systematic reviews. Nevertheless, flaws and omissions appear quite regularly in these types of studies. The current study aims to present an overview of some of the typical statistical flaws, illustrated by a number of example studies which have been published in peer-reviewed journals. Based on an investigation of eleven randomly selected studies on fertility treatments with cryopreservation, it appeared that the methodological quality of these studies often did not fulfil the required statistical criteria. The following statistical flaws were identified: flaws in study design, patient selection, and units of analysis or in the definition of the primary endpoints. Other errors could be found in p-value and power calculations or in critical p-value definitions. Proper interpretation of the results and/or use of these study results in a meta-analysis should therefore be conducted with care.

  10. Isospin Breaking Corrections to the HVP with Domain Wall Fermions

    NASA Astrophysics Data System (ADS)

    Boyle, Peter; Guelpers, Vera; Harrison, James; Juettner, Andreas; Lehner, Christoph; Portelli, Antonin; Sachrajda, Christopher

    2018-03-01

    We present results for the QED and strong isospin breaking corrections to the hadronic vacuum polarization using Nf = 2 + 1 Domain Wall fermions. QED is included in an electro-quenched setup using two different methods, a stochastic and a perturbative approach. Results and statistical errors from both methods are directly compared with each other.

  11. Appraising into the Sun: Six-State Solar Home Paired-Sale Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lawrence Berkeley National Laboratory

    Although residential solar photovoltaic (PV) installations have proliferated, PV systems on some U.S. homes still receive no value during an appraisal because comparable home sales are lacking. To value residential PV, some previous studies have employed paired-sales appraisal methods to analyze small PV home samples in depth, while others have used statistical methods to analyze large samples. Our first-of-its-kind study connects the two approaches. It uses appraisal methods to evaluate sales price premiums for owned PV systems on single-unit detached houses that were also evaluated in a large statistical study. Independent appraisers evaluated 43 recent home sales pairs in six states: California, Oregon, Florida, Maryland, North Carolina, and Pennsylvania. We compare these results with contributory-value estimates—based on income (using the PV Value® tool), gross cost, and net cost—as well as hedonic modeling results from the recent statistical study. The results provide strong, appraisal-based evidence of PV premiums in all states. More importantly, the results support the use of cost- and income-based PV premium estimates when paired-sales analysis is impossible. PV premiums from the paired-sales analysis are most similar to net PV cost estimates. PV Value® income results generally track the appraised premiums, although conservatively. The appraised premiums are in agreement with the hedonic modeling results as well, which bolsters the suitability of both approaches for estimating PV home premiums. Therefore, these results will benefit valuation professionals and mortgage lenders who increasingly are encountering homes equipped with PV and need to understand the factors that can both contribute to and detract from market value.

  12. An empirical Bayes method for updating inferences in analysis of quantitative trait loci using information from related genome scans.

    PubMed

    Zhang, Kui; Wiener, Howard; Beasley, Mark; George, Varghese; Amos, Christopher I; Allison, David B

    2006-08-01

    Individual genome scans for quantitative trait loci (QTL) mapping often suffer from low statistical power and imprecise estimates of QTL location and effect. This lack of precision yields large confidence intervals for QTL location, which are problematic for subsequent fine mapping and positional cloning. In prioritizing areas for follow-up after an initial genome scan and in evaluating the credibility of apparent linkage signals, investigators typically examine the results of other genome scans of the same phenotype and informally update their beliefs about which linkage signals in their scan most merit confidence and follow-up via a subjective-intuitive integration approach. A method that acknowledges the wisdom of this general paradigm but formally borrows information from other scans, adding confidence and objectivity, would be a benefit. We developed an empirical Bayes analytic method to integrate information from multiple genome scans. The linkage statistic obtained from a single genome scan study is updated by incorporating statistics from other genome scans as prior information. This technique does not require that all studies have an identical marker map or a common estimated QTL effect. The updated linkage statistic can then be used for the estimation of QTL location and effect. We evaluate the performance of our method by using extensive simulations based on actual marker spacing and allele frequencies from available data. Results indicate that the empirical Bayes method can account for between-study heterogeneity, estimate the QTL location and effect more precisely, and provide narrower confidence intervals than results from any single individual study. We also compared the empirical Bayes method with a method originally developed for meta-analysis (a closely related but distinct purpose). In the face of marked heterogeneity among studies, the empirical Bayes method outperforms the comparator.
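
    A generic normal-normal empirical Bayes sketch of the updating idea (not the authors' exact estimator): prior moments for a locus are estimated from the other scans' statistics, then combined with the current scan by precision weighting. The prior-variance estimate and all names are simplifying assumptions:

        import numpy as np

        def eb_update(z_current, se_current, z_other, se_other):
            """Shrink the current scan's linkage statistic toward the
            consensus of related scans, weighting by precision."""
            z_o, se_o = np.asarray(z_other, float), np.asarray(se_other, float)
            mu0 = np.average(z_o, weights=1.0 / se_o ** 2)      # prior mean
            # between-study spread beyond sampling error, floored to avoid
            # a degenerate prior
            tau2 = max(np.var(z_o) - np.mean(se_o ** 2), 1e-6)
            post_prec = 1.0 / se_current ** 2 + 1.0 / tau2
            post_mean = (z_current / se_current ** 2 + mu0 / tau2) / post_prec
            return post_mean, np.sqrt(1.0 / post_prec)

        # Heterogeneous but mostly positive signals elsewhere pull the
        # current scan's modest statistic upward:
        print(eb_update(2.1, 1.0, [3.5, 1.2, 2.8], [0.6, 0.6, 0.6]))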

  13. Possible future changes in South East Australian frost frequency: an inter-comparison of statistical downscaling approaches

    NASA Astrophysics Data System (ADS)

    Crimp, Steven; Jin, Huidong; Kokic, Philip; Bakar, Shuvo; Nicholls, Neville

    2018-04-01

    Anthropogenic climate change has already been shown to affect the frequency, intensity, spatial extent, duration and seasonality of extreme climate events. Understanding these changes is an important step in determining exposure, vulnerability and focus for adaptation. In an attempt to support adaptation decision-making, we have examined statistical modelling techniques to improve the representation of global climate model (GCM) derived projections of minimum temperature extremes (frosts) in Australia. We examine the spatial changes in minimum temperature extreme metrics (e.g. monthly and seasonal frost frequency) for a region exhibiting the strongest station trends in Australia, and compare these changes with minimum temperature extreme metrics derived from 10 GCMs from the Coupled Model Inter-comparison Project Phase 5 (CMIP5) datasets, and via statistical downscaling. We compare the observed trends with those derived from the "raw" GCM minimum temperature data, and examine whether quantile matching (QM) or spatio-temporal modelling with quantile matching (spTimerQM) can be used to improve the correlation between observed and simulated extreme minimum temperatures. We demonstrate that the spTimerQM modelling approach achieves a correlation of 0.22 with observed daily minimum temperatures for the period August to November. This represents an almost fourfold improvement over either the "raw" GCM or QM results. The spTimerQM modelling approach also improves correlations with observed monthly frost frequency statistics to 0.84, as opposed to 0.37 and 0.81 for the "raw" GCM and QM results, respectively. We apply the spatio-temporal model to examine future extreme minimum temperature projections for the period 2016 to 2048. The spTimerQM modelling results suggest the persistence of current levels of frost risk out to 2030, with evidence of continuing decadal variation.
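
    The quantile matching (QM) step is straightforward to sketch: map each model value through the model's reference-period empirical CDF onto the observed distribution at the same quantile. The spatio-temporal Bayesian layer of spTimerQM is beyond a few lines and is not shown:

        import numpy as np

        def quantile_map(model_values, model_ref, obs_ref):
            """Empirical quantile matching against reference climatologies."""
            model_ref, obs_ref = np.sort(model_ref), np.sort(obs_ref)
            q = np.searchsorted(model_ref, model_values) / len(model_ref)
            return np.quantile(obs_ref, np.clip(q, 0.0, 1.0))

        rng = np.random.default_rng(4)
        obs = rng.normal(3.0, 4.0, 5000)    # observed Tmin climatology
        gcm = rng.normal(5.0, 4.0, 5000)    # model runs ~2 degrees warm
        print(np.mean(quantile_map(gcm[:100], gcm, obs)))  # pulled back to ~3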

  14. The choice of statistical methods for comparisons of dosimetric data in radiotherapy.

    PubMed

    Chaikh, Abdulhamid; Giraud, Jean-Yves; Perrin, Emmanuel; Bresciani, Jean-Pierre; Balosso, Jacques

    2014-09-18

    Novel irradiation techniques are continuously introduced in radiotherapy to optimize the accuracy, the security and the clinical outcome of treatments. These changes could raise the question of discontinuity in dosimetric presentation and the subsequent need for practice adjustments in case of significant modifications. This study proposes a comprehensive approach to compare different techniques and tests whether their respective dose calculation algorithms give rise to statistically significant differences in the treatment doses for the patient. Statistical investigation principles are presented in the framework of a clinical example based on 62 fields of radiotherapy for lung cancer. The delivered doses in monitor units were calculated using three different dose calculation methods: the reference method computes the dose without tissue density corrections using the Pencil Beam Convolution (PBC) algorithm, whereas the newer methods calculate the dose with tissue density corrections in 1D and 3D using the Modified Batho (MB) method and the Equivalent Tissue-Air Ratio (ETAR) method, respectively. The normality of the data and the homogeneity of variance between groups were tested using the Shapiro-Wilk and Levene tests, respectively; non-parametric statistical tests were then performed. Specifically, the dose means estimated by the different calculation methods were compared using Friedman's test and the Wilcoxon signed-rank test. In addition, the correlation between the doses calculated by the three methods was assessed using Spearman's rank and Kendall's rank tests. The Friedman's test showed a significant effect of the calculation method on the delivered dose for lung cancer patients (p < 0.001). The density correction methods yielded lower doses than PBC, by on average (-5 ± 4.4 SD) for MB and (-4.7 ± 5 SD) for ETAR. Post-hoc Wilcoxon signed-rank tests of paired comparisons indicated that the delivered dose was significantly reduced using density-corrected methods as compared to the reference method. Spearman's and Kendall's rank tests indicated a positive correlation between the doses calculated with the different methods. This paper illustrates and justifies the use of statistical tests and graphical representations for dosimetric comparisons in radiotherapy. The statistical analysis shows the significance of dose differences resulting from two or more techniques in radiotherapy.
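
    The testing sequence described here maps directly onto standard SciPy calls; a sketch with illustrative variable names (one dose value per treatment field for each calculation method):

        from scipy import stats

        def compare_dose_methods(pbc, mb, etar, alpha=0.05):
            """Normality and variance checks, non-parametric omnibus and
            post-hoc paired tests, and rank correlations, as described above."""
            normal = all(stats.shapiro(x).pvalue > alpha for x in (pbc, mb, etar))
            equal_var = stats.levene(pbc, mb, etar).pvalue > alpha
            p_friedman = stats.friedmanchisquare(pbc, mb, etar).pvalue
            p_mb = stats.wilcoxon(pbc, mb).pvalue        # MB vs reference
            p_etar = stats.wilcoxon(pbc, etar).pvalue    # ETAR vs reference
            rho = stats.spearmanr(pbc, mb).correlation
            tau = stats.kendalltau(pbc, mb).correlation
            return dict(normal=normal, equal_var=equal_var,
                        friedman=p_friedman, wilcoxon_mb=p_mb,
                        wilcoxon_etar=p_etar, spearman=rho, kendall=tau)

        # Calling compare_dose_methods with the 62 per-field monitor-unit
        # values would reproduce the pipeline described above.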

  15. External cooling methods for treatment of fever in adults: a systematic review.

    PubMed

    Chan, E Y; Chen, W T; Assam, P N

    It is unclear if the use of external cooling to treat fever contributes to better patient outcomes. Despite this, it is a common practice to treat febrile patients using external cooling methods alone or in combination with pharmacological antipyretics. The objective of this systematic review was to evaluate the effectiveness and complications of external cooling methods in febrile adults in acute care settings. We included adults admitted to acute care settings who developed an elevated body temperature. We considered any external cooling method compared to no cooling. We considered randomised controlled trials (RCTs), quasi-randomised trials and controlled trials with concurrent control groups. We searched relevant published or unpublished studies up to October 2009 regardless of language. We searched major databases, reference lists and bibliographies of all relevant articles, and contacted experts in the field for additional studies. Two reviewers independently screened titles and abstracts, and retrieved all potentially relevant studies. Two reviewers independently conducted the assessment of methodological quality of included studies. Where appropriate, the results of studies were quantitatively summarised. Relative risks or weighted mean differences and their 95% confidence intervals were calculated using the random effects model in Review Manager 5. For each pooled comparison, heterogeneity was assessed using the chi-squared test at the 5% level of statistical significance, with the I² statistic used to assess the impact of statistical heterogeneity on study results. Where statistical summary was not appropriate or possible, the findings were summarised in narrative form. We found six RCTs that compared the effectiveness and complications of external cooling methods against no external cooling. There was wide variation in the outcome measures between the included trials. We performed meta-analyses on data from two RCTs totalling 356 patients testing external cooling combined with antipyretics versus antipyretics alone, for the resolution of fever. The results did not show a statistically significant reduction in fever (relative risk 1.12, 95% CI 0.95 to 1.31; P=0.35; I²=0%). The evidence from four trials suggested that there was no difference in the mean drop in body temperature post treatment initiation between external cooling and no cooling groups. The results of most other outcomes also did not demonstrate a statistically significant difference. However, a summary of five trials comprising 371 patients found that the external cooling group was more likely to shiver when compared to the no cooling group (relative risk 6.37, 95% CI 2.01 to 20.11; P=0.61; I²=0%). Overall, this review suggested that external cooling methods (whether used alone or in combination with pharmacologic methods) were not effective in treating fever among adults admitted to acute care settings, yet they were associated with higher incidences of shivering. These results should be interpreted in light of the methodological limitations of available trials. Given the current available evidence, the routine use of external cooling methods to treat fever in adults may not be warranted until further evidence is available. They could be considered for patients whose conditions cannot tolerate even a slight increase in temperature or who request them. Whenever they are used, shivering should be prevented.
Well-designed, adequately powered, randomised trials comparing external cooling methods against no cooling are needed.
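
    A compact Python sketch of the random-effects machinery used in such reviews (DerSimonian-Laird pooling of relative risks, with Cochran's Q and the I² heterogeneity statistic), assuming binary outcomes reported as events per arm; the numbers in the example are hypothetical, not the review's data:

        import numpy as np

        def pooled_rr_random_effects(events_t, n_t, events_c, n_c):
            """DerSimonian-Laird pooled RR with 95% CI and I-squared."""
            e_t, e_c = np.asarray(events_t, float), np.asarray(events_c, float)
            n_t, n_c = np.asarray(n_t, float), np.asarray(n_c, float)
            log_rr = np.log((e_t / n_t) / (e_c / n_c))
            var = 1/e_t - 1/n_t + 1/e_c - 1/n_c      # variance of log RR
            w = 1.0 / var                            # fixed-effect weights
            mean_fe = np.sum(w * log_rr) / np.sum(w)
            q = np.sum(w * (log_rr - mean_fe) ** 2)  # Cochran's Q
            k = len(log_rr)
            tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
            i2 = max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
            w_re = 1.0 / (var + tau2)                # random-effects weights
            mean_re = np.sum(w_re * log_rr) / np.sum(w_re)
            se = np.sqrt(1.0 / np.sum(w_re))
            return (np.exp(mean_re), np.exp(mean_re - 1.96 * se),
                    np.exp(mean_re + 1.96 * se), i2)

        # Two hypothetical trials: events/total in cooled vs control arms.
        print(pooled_rr_random_effects([40, 55], [90, 88], [35, 52], [88, 90]))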

  16. The nature and influence of pharmaceutical industry involvement in asthma trials

    PubMed Central

    Bond, Kenneth; Spooner, Carol; Tjosvold, Lisa; Lemière, Catherine; Rowe, Brian H

    2012-01-01

    BACKGROUND: Pharmaceutical industry-sponsored research has been shown to be biased toward reporting positive results. Frequent industry participation in trials assessing the efficacy of inhaled corticosteroid (ICS) and long-acting beta2-agonist (LABA) combination treatment makes assessing industry influence difficult and warrants an assessment of specific potential publication bias in this area. OBJECTIVE: To describe the frequency of industry involvement in ICS/LABA trials and explore associations among significant outcomes, type of industry involvement and type of primary outcome. METHODS: A systematic review of trials comparing ICS/LABA combination therapy with ICS monotherapy for asthma was conducted. Data concerning the type of industry sponsorship, primary outcome and statistical results were collected. Comparisons between type of sponsorship and significant results were analyzed using Pearson’s χ2 test and relative risk. RESULTS: Of 91 included studies (median year of publication 2005 [interquartile range 1994 to 2008]), 86 (95%) reported pharmaceutical involvement. Author affiliation was reported in 49 of 86 (57%), and 19 of 86 (22%) were industry-reported trials without full publications. The remainder were published journal articles. Studies with a first or senior author affiliated with industry were 1.5 times more likely to report statistically significant results for the primary outcome compared with studies with other types of industry involvement. Pulmonary measures were 1.5 times more likely to be statistically significant than were measures of asthma control. CONCLUSIONS: The potential biases identified were consistent with other research focused on author role and industry involvement, and suggest that degree of bias may vary with type of affiliation. PMID:22891187

  17. Lack of grading agreement among international hemostasis external quality assessment programs

    PubMed Central

    Olson, John D.; Jennings, Ian; Meijer, Piet; Bon, Chantal; Bonar, Roslyn; Favaloro, Emmanuel J.; Higgins, Russell A.; Keeney, Michael; Mammen, Joy; Marlar, Richard A.; Meley, Roland; Nair, Sukesh C.; Nichols, William L.; Raby, Anne; Reverter, Joan C.; Srivastava, Alok; Walker, Isobel

    2018-01-01

    Laboratory quality programs rely on internal quality control and external quality assessment (EQA). EQA programs provide unknown specimens for the laboratory to test. The laboratory's result is compared with other (peer) laboratories performing the same test. EQA programs assign target values using a variety of statistical methods and tools, and a performance assessment of ‘pass’ or ‘fail’ is made. EQA provider members of the international organization, external quality assurance in thrombosis and hemostasis, took part in a study to compare the outcome of performance analysis using the same data set of laboratory results. Eleven EQA organizations using eight different analytical approaches participated. Data for a normal and prolonged activated partial thromboplastin time (aPTT) and a normal and reduced factor VIII (FVIII) from 218 laboratories were sent to the EQA providers, who analyzed the data set using their method of evaluation for aPTT and FVIII, determining the performance for each laboratory record in the data set. Providers also summarized their statistical approach to assignment of target values and laboratory performance. Each laboratory record in the data set was graded pass/fail by all EQA providers for each of the four analytes. There was a lack of agreement of pass/fail grading among EQA programs. Discordance in the grading was 17.9 and 11% of normal and prolonged aPTT results, respectively, and 20.2 and 17.4% of normal and reduced FVIII results, respectively. All EQA programs in this study employed statistical methods compliant with International Organization for Standardization (ISO) standard ISO 13528, yet the evaluation of laboratory results for all four analytes showed remarkable grading discordance. PMID:29232255

  18. Test Population Selection from Weibull-Based, Monte Carlo Simulations of Fatigue Life

    NASA Technical Reports Server (NTRS)

    Vlcek, Brian L.; Zaretsky, Erwin V.; Hendricks, Robert C.

    2008-01-01

    Fatigue life is probabilistic and not deterministic. Experimentally establishing the fatigue life of materials, components, and systems is both time consuming and costly. As a result, conclusions regarding fatigue life are often inferred from a statistically insufficient number of physical tests. A proposed methodology for comparing life results as a function of variability due to Weibull parameters, variability between successive trials, and variability due to size of the experimental population is presented. Using Monte Carlo simulation of randomly selected lives from a large Weibull distribution, the variation in the L10 fatigue life of aluminum alloy AL6061 rotating rod fatigue tests was determined as a function of population size. These results were compared to the L10 fatigue lives of small (10 each) populations from AL2024, AL7075 and AL6061. For aluminum alloy AL6061, a simple algebraic relationship was established for the upper and lower L10 fatigue life limits as a function of the number of specimens failed. For most engineering applications where less than 30 percent variability can be tolerated in the maximum and minimum values, at least 30 to 35 test samples are necessary. The variability of test results based on small sample sizes can be greater than actual differences, if any, that exist between materials and can result in erroneous conclusions. The fatigue life of AL2024 is statistically longer than AL6061 and AL7075. However, there is no statistical difference between the fatigue lives of AL6061 and AL7075 even though AL7075 had a fatigue life 30 percent greater than AL6061.

  19. Test Population Selection from Weibull-Based, Monte Carlo Simulations of Fatigue Life

    NASA Technical Reports Server (NTRS)

    Vlcek, Brian L.; Zaretsky, Erwin V.; Hendricks, Robert C.

    2012-01-01

    Fatigue life is probabilistic and not deterministic. Experimentally establishing the fatigue life of materials, components, and systems is both time consuming and costly. As a result, conclusions regarding fatigue life are often inferred from a statistically insufficient number of physical tests. A proposed methodology for comparing life results as a function of variability due to Weibull parameters, variability between successive trials, and variability due to size of the experimental population is presented. Using Monte Carlo simulation of randomly selected lives from a large Weibull distribution, the variation in the L10 fatigue life of aluminum alloy AL6061 rotating rod fatigue tests was determined as a function of population size. These results were compared to the L10 fatigue lives of small (10 each) populations from AL2024, AL7075 and AL6061. For aluminum alloy AL6061, a simple algebraic relationship was established for the upper and lower L10 fatigue life limits as a function of the number of specimens failed. For most engineering applications where less than 30 percent variability can be tolerated in the maximum and minimum values, at least 30 to 35 test samples are necessary. The variability of test results based on small sample sizes can be greater than actual differences, if any, that exist between materials and can result in erroneous conclusions. The fatigue life of AL2024 is statistically longer than AL6061 and AL7075. However, there is no statistical difference between the fatigue lives of AL6061 and AL7075 even though AL7075 had a fatigue life 30 percent greater than AL6061.
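
    A Monte Carlo sketch in Python of the population-size effect described in these two records: draw repeated fatigue-life samples from an assumed two-parameter Weibull distribution and track how the estimated L10 life (the 10th-percentile life) spreads as the test population grows. The shape and scale values are illustrative, not the AL6061 fit:

        import numpy as np

        def l10_spread(shape, scale, sample_sizes, trials=2000, seed=5):
            """90% band of the estimated-to-true L10 ratio vs population size."""
            rng = np.random.default_rng(seed)
            true_l10 = scale * (-np.log(0.9)) ** (1.0 / shape)  # Weibull 10th pct
            out = {}
            for n in sample_sizes:
                est = [np.percentile(rng.weibull(shape, n) * scale, 10)
                       for _ in range(trials)]
                lo, hi = np.percentile(est, [5, 95])
                out[n] = (lo / true_l10, hi / true_l10)
            return out

        for n, band in l10_spread(2.0, 1.0e6, [10, 35, 100]).items():
            print(n, band)  # the band tightens markedly from n=10 to n=35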

  20. Water resources management: Hydrologic characterization through hydrograph simulation may bias streamflow statistics

    NASA Astrophysics Data System (ADS)

    Farmer, W. H.; Kiang, J. E.

    2017-12-01

    The development, deployment and maintenance of water resources management infrastructure and practices rely on hydrologic characterization, which requires an understanding of local hydrology. With regard to streamflow, this understanding is typically quantified with statistics derived from long-term streamgage records. However, a fundamental problem is how to characterize local hydrology without the luxury of streamgage records, which complicates water resources management at ungaged locations and for long-term future projections. This problem has typically been addressed through the development of point estimators, such as regression equations, to estimate particular statistics. Physically-based precipitation-runoff models, which are capable of producing simulated hydrographs, offer an alternative to point estimators. The advantage of simulated hydrographs is that they can be used to compute any number of streamflow statistics from a single source (the simulated hydrograph) rather than relying on a diverse set of point estimators. However, the use of simulated hydrographs introduces a degree of model uncertainty that is propagated through to estimated streamflow statistics and may have drastic effects on management decisions. We compare the accuracy and precision of streamflow statistics (e.g. the mean annual streamflow, the annual maximum streamflow exceeded in 10% of years, and the minimum seven-day average streamflow exceeded in 90% of years, among others) derived from point estimators (e.g. regressions, kriging, machine learning) to that of statistics derived from simulated hydrographs across the continental United States. Initial results suggest that the error introduced through hydrograph simulation may substantially bias the resulting hydrologic characterization.
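
    To make the single-source idea concrete: once a daily hydrograph exists, the named statistics all fall out of the same array. A short Python sketch (the synthetic lognormal series is purely illustrative):

        import numpy as np

        def streamflow_statistics(daily_q):
            """Statistics from a daily hydrograph shaped (n_years, 365)."""
            q = np.asarray(daily_q, float)
            kernel = np.ones(7) / 7.0
            seven_day_min = np.array(
                [np.convolve(year, kernel, mode="valid").min() for year in q])
            return {
                "mean_annual_flow": q.mean(axis=1).mean(),
                "annual_max_exceeded_10pct_years": np.percentile(q.max(axis=1), 90),
                "7day_min_exceeded_90pct_years": np.percentile(seven_day_min, 10),
            }

        rng = np.random.default_rng(6)
        print(streamflow_statistics(rng.lognormal(3.0, 1.0, size=(30, 365))))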

  1. The Necessity of the Hippocampus for Statistical Learning

    PubMed Central

    Covington, Natalie V.; Brown-Schmidt, Sarah; Duff, Melissa C.

    2018-01-01

    Converging evidence points to a role for the hippocampus in statistical learning, but open questions about its necessity remain. Evidence for necessity comes from Schapiro and colleagues, who report that a single patient with damage to the hippocampus and broader medial temporal lobe cortex was unable to discriminate new from old sequences in several statistical learning tasks. The aim of the current study was to replicate these methods in a larger group of patients with either damage localized to the hippocampus or broader medial temporal lobe damage, to ascertain the necessity of the hippocampus in statistical learning. Patients with hippocampal damage consistently showed less learning overall compared with healthy comparison participants, consistent with an emerging consensus for hippocampal contributions to statistical learning. Interestingly, lesion size did not reliably predict performance. However, patients with hippocampal damage were not uniformly at chance and demonstrated above-chance performance in some task variants. These results suggest that the hippocampus is necessary for the statistical learning levels achieved by most healthy comparison participants, but significant hippocampal pathology alone does not abolish such learning. PMID:29308986

  2. Statistical-mechanical predictions and Navier-Stokes dynamics of two-dimensional flows on a bounded domain.

    PubMed

    Brands, H; Maassen, S R; Clercx, H J

    1999-09-01

    In this paper the applicability of a statistical-mechanical theory to freely decaying two-dimensional (2D) turbulence on a bounded domain is investigated. We consider an ensemble of direct numerical simulations in a square box with stress-free boundaries, with a Reynolds number that is of the same order as in experiments on 2D decaying Navier-Stokes turbulence. The results of these simulations are compared with the corresponding statistical equilibria, calculated from different stages of the evolution. It is shown that the statistical equilibria calculated from early times of the Navier-Stokes evolution do not correspond to the dynamical quasistationary states. At best, the global topological structure is correctly predicted from a relatively late time in the Navier-Stokes evolution, when the quasistationary state has almost been reached. This failure of the (basically inviscid) statistical-mechanical theory is related to viscous dissipation and net leakage of vorticity in the Navier-Stokes dynamics at moderate values of the Reynolds number.

  3. Statistical reporting inconsistencies in experimental philosophy

    PubMed Central

    Colombo, Matteo; Duev, Georgi; Nuijten, Michèle B.; Sprenger, Jan

    2018-01-01

    Experimental philosophy (x-phi) is a young field of research in the intersection of philosophy and psychology. It aims to make progress on philosophical questions by using experimental methods traditionally associated with the psychological and behavioral sciences, such as null hypothesis significance testing (NHST). Motivated by recent discussions about a methodological crisis in the behavioral sciences, questions have been raised about the methodological standards of x-phi. Here, we focus on one aspect of this question, namely the rate of inconsistencies in statistical reporting. Previous research has examined the extent to which published articles in psychology and other behavioral sciences present statistical inconsistencies in reporting the results of NHST. In this study, we used the R package statcheck to detect statistical inconsistencies in x-phi, and compared rates of inconsistencies in psychology and philosophy. We found that rates of inconsistencies in x-phi are lower than in the psychological and behavioral sciences. From the point of view of statistical reporting consistency, x-phi seems to do no worse, and perhaps even better, than psychological science. PMID:29649220
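
    The core consistency check is simple to sketch: recompute the p-value implied by a reported test statistic and its degrees of freedom, then compare it with the reported p. The Python fragment below handles only t tests; statcheck itself is an R package that also parses F, χ², r, and z reports:

        from scipy import stats

        def check_t_report(t_value, df, reported_p, tol=0.01, two_sided=True):
            """Flag a reported t-test p-value that disagrees with the
            p-value recomputed from the statistic and df."""
            p = stats.t.sf(abs(t_value), df) * (2 if two_sided else 1)
            return p, abs(p - reported_p) <= tol

        print(check_t_report(2.20, 28, 0.04))  # "t(28)=2.20, p=.04": consistent
        print(check_t_report(2.20, 28, 0.01))  # "t(28)=2.20, p=.01": flagged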

  4. Comparative statistics of Garman-Klass, Parkinson, Roger-Satchell and bridge estimators

    NASA Astrophysics Data System (ADS)

    Lapinova, S.; Saichev, A.

    2017-01-01

    Comparative statistical properties of the Parkinson, Garman-Klass, Rogers-Satchell and bridge oscillation estimators are discussed. Point and interval estimation related to these estimators is considered.
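
    The three named range-based estimators have closed forms, shown below as per-period variance estimates from open/high/low/close data (the bridge estimator studied in the paper is omitted). The simulation check assumes a driftless Gaussian intraday path:

        import numpy as np

        def parkinson(h, l):
            """Parkinson high-low range estimator."""
            return np.mean(np.log(h / l) ** 2) / (4.0 * np.log(2.0))

        def garman_klass(o, h, l, c):
            """Garman-Klass OHLC estimator."""
            return np.mean(0.5 * np.log(h / l) ** 2
                           - (2.0 * np.log(2.0) - 1.0) * np.log(c / o) ** 2)

        def rogers_satchell(o, h, l, c):
            """Rogers-Satchell estimator (robust to nonzero drift)."""
            return np.mean(np.log(h / c) * np.log(h / o)
                           + np.log(l / c) * np.log(l / o))

        rng = np.random.default_rng(7)
        steps, days, sigma = 390, 250, 0.02
        paths = np.exp(np.cumsum(
            rng.normal(0.0, sigma / np.sqrt(steps), (days, steps)), axis=1))
        o, c = paths[:, 0], paths[:, -1]
        h, l = paths.max(axis=1), paths.min(axis=1)
        for est in (parkinson(h, l), garman_klass(o, h, l, c),
                    rogers_satchell(o, h, l, c)):
            print(np.sqrt(est))  # each estimate should land near sigma = 0.02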

  5. Single-row, double-row, and transosseous equivalent techniques for isolated supraspinatus tendon tears with minimal atrophy: A retrospective comparative outcome and radiographic analysis at minimum 2-year followup

    PubMed Central

    McCormick, Frank; Gupta, Anil; Bruce, Ben; Harris, Josh; Abrams, Geoff; Wilson, Hillary; Hussey, Kristen; Cole, Brian J.

    2014-01-01

    Purpose: The purpose of this study was to measure and compare the subjective, objective, and radiographic healing outcomes of single-row (SR), double-row (DR), and transosseous equivalent (TOE) suture techniques for arthroscopic rotator cuff repair. Materials and Methods: A retrospective comparative analysis of arthroscopic rotator cuff repairs by one surgeon from 2004 to 2010 at minimum 2-year followup was performed. Cohorts were matched for age, sex, and tear size. Subjective outcome variables included ASES, Constant, SST, UCLA, and SF-12 scores. Objective outcome variables included strength and active range of motion (ROM). Radiographic healing was assessed by magnetic resonance imaging (MRI). Statistical analysis was performed using analysis of variance (ANOVA), Mann-Whitney and Kruskal-Wallis tests, and the Fisher exact probability test, with significance set at P < 0.05. Results: Sixty-three patients completed the study requirements (20 SR, 21 DR, 22 TOE). There was a clinically and statistically significant improvement in outcomes with all repair techniques (ASES mean improvement, P < 0.0001). The mean final ASES scores were: SR 83 (SD 21.4); DR 87 (SD 18.2); TOE 87 (SD 13.2) (P = 0.73). There was a statistically significant improvement in strength for each repair technique (P < 0.001). There was no significant difference between techniques across all secondary outcome assessments: ASES improvement, Constant, SST, UCLA, SF-12, ROM, strength, and MRI re-tear rates. There was a decrease in re-tear rates from single-row (22%) to double-row (18%) to transosseous equivalent (11%); however, this difference was not statistically significant (P = 0.6). Conclusions: Compared to preoperatively, arthroscopic rotator cuff repair, using SR, DR, or TOE techniques, yielded a clinically and statistically significant improvement in subjective and objective outcomes at a minimum 2-year follow-up. Level of Evidence: Therapeutic level 3. PMID:24926159

  6. Evaluation of bearing capacity of piles from cone penetration test data.

    DOT National Transportation Integrated Search

    2007-12-01

    A statistical analysis and ranking criteria were used to compare the CPT methods and the conventional alpha design method. Based on the results, the de Ruiter/Beringen and LCPC methods showed the best capability in predicting the measured load carryi...

  7. Motor vehicle traffic crash fatality counts and estimates of people injured for 2005

    DOT National Transportation Integrated Search

    2006-08-22

    This report updates the 2005 Projections released in April 2006, which : were based on a statistical procedure using incomplete/partial data. : This report also compares fatality counts and estimates of people : injured resulting from motor vehicle t...

  8. METHOD FOR EVALUATING MOLD GROWTH ON CEILING TILE

    EPA Science Inventory

    A method to extract mold spores from porous ceiling tiles was developed using a masticator blender. Ceiling tiles were inoculated and analyzed using four species of mold. Statistical analysis comparing results obtained by masticator extraction and the swab method was performed. T...

  9. Statistics Education Research in Malaysia and the Philippines: A Comparative Analysis

    ERIC Educational Resources Information Center

    Reston, Enriqueta; Krishnan, Saras; Idris, Noraini

    2014-01-01

    This paper presents a comparative analysis of statistics education research in Malaysia and the Philippines by modes of dissemination, research areas, and trends. An electronic search for published research papers in the area of statistics education from 2000-2012 yielded 20 for Malaysia and 19 for the Philippines. Analysis of these papers showed…

  10. Efficacy of double mirrored omega pattern for skin sparing mastectomy to reduce ischemic complications.

    PubMed

    Santanelli di Pompeo, Fabio; Sorotos, Michail; Laporta, Rosaria; Pagnoni, Marco; Longo, Benedetto

    2018-02-01

    Excellent cosmetic results from skin-sparing mastectomy (SSM) are often impaired by skin flap necrosis (SFN), with rates of 8%-25%, or worse in smokers. This study prospectively investigated the efficacy of the Double-Mirrored Omega Pattern (DMOP-SSM) compared to the Wise Pattern SSM (WP-SSM) for immediate reconstruction in moderate/large-breasted smokers. From 2008-2010, DMOP-SSM was performed in 51 consecutive immediate breast reconstructions on 41 smokers (mean age = 49.8 years) with moderate/large and ptotic breasts. This active group (AG) was compared to a similar historical control group (CG) of 37 smokers (mean age = 51.1 years) who underwent WP-SSM and immediate breast reconstruction, with a mean follow-up of 37.6 months. Skin ischaemic complications, number of surgical revisions, time to wound healing, and patient satisfaction were analysed. Descriptive statistics were reported and comparison of performance endpoints was performed using Fisher's exact test and Mann-Whitney U-test. A p-value <.05 was considered significant. Patients' mean age (p = .316) and BMI (p = .215) were not statistically different between groups. Ischaemic complications occurred in 11.7% of DMOP-SSMs and in 32.4% of WP-SSMs (p = .017), and revision rates were, respectively, 5.8% and 24.3% (p = .012), both statistically significant. Mean time to wound healing was, respectively, 16.8 days and 18.4 days (p = .205). Mean patient satisfaction scores were, respectively, 18.9 and 21.1, a statistically significant difference (p = .022). Although tobacco use in moderate/large-breasted patients can severely impair outcomes of breast reconstruction, the DMOP-SSM approach, compared to WP-SSM, allows smokers to benefit from SSM, with statistically significantly fewer skin flap ischaemic complications, fewer surgical revisions, and better cosmetic outcomes.

  11. TVT-Exact and midurethral sling (SLING-IUFT) operative procedures: a randomized study.

    PubMed

    Aniuliene, Rosita; Aniulis, Povilas; Skaudickas, Darijus

    2015-01-01

    The aim of the study was to compare the results, effectiveness and complications of TVT-Exact and midurethral sling (SLING-IUFT) operations in the treatment of female stress urinary incontinence (SUI). A single-center, nonblind, randomized study of women with SUI who were randomized to TVT-Exact and SLING-IUFT was performed by one surgeon from April 2009 to April 2011. SUI was diagnosed on coughing and Valsalva test, and urodynamics (cystometry and uroflowmetry) were assessed before operation and 1 year after surgery. This was a prospective randomized study with a follow-up period of 12 months. 76 patients were operated on using the TVT-Exact operation and 78 patients using the SLING-IUFT operation. There were no statistically significant differences between groups for BMI, parity, menopausal status and prolapse stage (no patients had cystocele greater than stage II). Mean operative time was significantly shorter in the SLING-IUFT group (19 ± 5.6 min) compared with the TVT-Exact group (27 ± 7.1 min). There were statistically significant differences in the effectiveness of the two procedures: TVT-Exact at 94.5% and SLING-IUFT at 61.2% after one year. Hospital stay was statistically significantly shorter in the SLING-IUFT group (1.2 ± 0.5 days) compared with the TVT-Exact group (3.5 ± 1.5 days). Statistically significantly fewer complications occurred in the SLING-IUFT group. The TVT-Exact and SLING-IUFT operations are both effective for the surgical treatment of female stress urinary incontinence. The SLING-IUFT involved a shorter operation time and a lower complication rate; the TVT-Exact procedure had statistically significantly more complications than the SLING-IUFT operation, but higher effectiveness.

  12. Malignant pleural effusions and the role of talc poudrage and talc slurry: a systematic review and meta-analysis

    PubMed Central

    Mummadi, Srinivas; Kumbam, Anusha; Hahn, Peter Y.

    2015-01-01

    Background: Malignant Pleural Effusion (MPE) is common with advanced malignancy. Palliative care with minimal adverse events is the cornerstone of management. Although talc pleurodesis plays an important role in treatment, the best modality of talc application remains controversial. Objective: To compare rates of successful pleurodesis and rates of respiratory and non-respiratory complications between thoracoscopic talc insufflation/poudrage (TTI) and talc slurry (TS). Data sources and study selection: MEDLINE (PubMed, OVID), EBM Reviews (Cochrane Database of Systematic Reviews, ACP Journal Club, DARE, Cochrane Central Register of Controlled Trials, Cochrane Methodology Register, Health Technology Assessment and NHS Economic Evaluation Database), EMBASE and Scopus. Randomized controlled trials published between 01/01/1980 and 10/1/2014 comparing the two strategies were selected. Results: Twenty-eight potential studies were identified, of which 24 were excluded, leaving four studies. No statistically significant difference in the probability of successful pleurodesis was observed between TS and TTI groups (RR 1.06; 95% CI 0.99-1.14; Q statistic 4.84). There was a higher risk of post-procedural respiratory complications in the TTI group compared to the TS group (RR 1.91; 95% CI 1.24-2.93; Q statistic 3.15). No statistically significant difference in the incidence of non-respiratory complications between the TTI group and the TS group was observed (RR 0.88; 95% CI 0.72-1.07; Q statistic 4.61). Conclusions: There is no difference in success rates of pleurodesis based on patient-centered outcomes between talc poudrage and talc slurry treatments. Respiratory complications are more common with talc poudrage via thoracoscopy. PMID:25878773

  13. A randomized trial in a massive online open course shows people don't know what a statistically significant relationship looks like, but they can learn.

    PubMed

    Fisher, Aaron; Anderson, G Brooke; Peng, Roger; Leek, Jeff

    2014-01-01

    Scatterplots are the most common way for statisticians, scientists, and the public to visually detect relationships between measured variables. At the same time, and despite widely publicized controversy, P-values remain the most commonly used measure to statistically justify relationships identified between variables. Here we measure the ability to detect statistically significant relationships from scatterplots in a randomized trial of 2,039 students in a statistics massive open online course (MOOC). Each subject was shown a random set of scatterplots and asked to visually determine if the underlying relationships were statistically significant at the P < 0.05 level. Subjects correctly classified only 47.4% (95% CI [45.1%-49.7%]) of statistically significant relationships, and 74.6% (95% CI [72.5%-76.6%]) of non-significant relationships. Adding visual aids such as a best fit line or scatterplot smooth increased the probability a relationship was called significant, regardless of whether the relationship was actually significant. Classification of statistically significant relationships improved on repeat attempts of the survey, although classification of non-significant relationships did not. Our results suggest: (1) that evidence-based data analysis can be used to identify weaknesses in theoretical procedures in the hands of average users, (2) data analysts can be trained to improve detection of statistically significant results with practice, but (3) data analysts have incorrect intuition about what statistically significant relationships look like, particularly for small effects. We have built a web tool for people to compare scatterplots with their corresponding p-values which is available here: http://glimmer.rstudio.com/afisher/EDA/.
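
    For readers who want to reproduce the task the trial describes, here is a minimal sketch that generates a random scatterplot sample with a known correlation and checks significance at P < 0.05; the sample size and effect size are arbitrary choices, not the study's parameters.

    ```python
    # Generate bivariate data with a known true correlation and test whether
    # the observed relationship is statistically significant at P < 0.05.
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    n, rho = 100, 0.2                  # arbitrary sample size and true correlation
    x = rng.standard_normal(n)
    y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

    r, p = pearsonr(x, y)
    print(f"r = {r:.3f}, p = {p:.4f}, significant: {p < 0.05}")
    ```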

  14. A randomized trial in a massive online open course shows people don’t know what a statistically significant relationship looks like, but they can learn

    PubMed Central

    Fisher, Aaron; Anderson, G. Brooke; Peng, Roger

    2014-01-01

    Scatterplots are the most common way for statisticians, scientists, and the public to visually detect relationships between measured variables. At the same time, and despite widely publicized controversy, P-values remain the most commonly used measure to statistically justify relationships identified between variables. Here we measure the ability to detect statistically significant relationships from scatterplots in a randomized trial of 2,039 students in a statistics massive open online course (MOOC). Each subject was shown a random set of scatterplots and asked to visually determine if the underlying relationships were statistically significant at the P < 0.05 level. Subjects correctly classified only 47.4% (95% CI [45.1%–49.7%]) of statistically significant relationships, and 74.6% (95% CI [72.5%–76.6%]) of non-significant relationships. Adding visual aids such as a best fit line or scatterplot smooth increased the probability a relationship was called significant, regardless of whether the relationship was actually significant. Classification of statistically significant relationships improved on repeat attempts of the survey, although classification of non-significant relationships did not. Our results suggest: (1) that evidence-based data analysis can be used to identify weaknesses in theoretical procedures in the hands of average users, (2) data analysts can be trained to improve detection of statistically significant results with practice, but (3) data analysts have incorrect intuition about what statistically significant relationships look like, particularly for small effects. We have built a web tool for people to compare scatterplots with their corresponding p-values which is available here: http://glimmer.rstudio.com/afisher/EDA/. PMID:25337457

  15. External validation of ADO, DOSE, COTE and CODEX at predicting death in primary care patients with COPD using standard and machine learning approaches.

    PubMed

    Morales, Daniel R; Flynn, Rob; Zhang, Jianguo; Trucco, Emmanuel; Quint, Jennifer K; Zutis, Kris

    2018-05-01

    Several models for predicting the risk of death in people with chronic obstructive pulmonary disease (COPD) exist but have not undergone large-scale validation in primary care. The objective of this study was to externally validate these models using statistical and machine learning approaches. We used a primary care COPD cohort identified using data from the UK Clinical Practice Research Datalink. Age-standardised mortality rates were calculated for the population by gender, and the discrimination of ADO (age, dyspnoea, airflow obstruction), COTE (COPD-specific comorbidity test), DOSE (dyspnoea, airflow obstruction, smoking, exacerbations) and CODEX (comorbidity, dyspnoea, airflow obstruction, exacerbations) at predicting death over 1-3 years was measured using logistic regression and a support vector machine (SVM) method of analysis. The age-standardised mortality rate was 32.8 (95%CI 32.5-33.1) and 25.2 (95%CI 25.4-25.7) per 1000 person-years for men and women respectively. Complete data were available for 54,879 patients to predict 1-year mortality. ADO performed best (c-statistic 0.730) compared with DOSE (c-statistic 0.645), COTE (c-statistic 0.655) and CODEX (c-statistic 0.649) at predicting 1-year mortality. Discrimination of ADO and DOSE improved at predicting 1-year mortality when combined with COTE comorbidities (c-statistic 0.780 ADO + COTE; c-statistic 0.727 DOSE + COTE). Discrimination did not change significantly over 1-3 years. Comparable results were observed using SVM. In primary care, ADO appears superior at predicting death in COPD. Performance of ADO and DOSE improved when combined with COTE comorbidities, suggesting better models may be generated with additional data facilitated using novel approaches. Copyright © 2018. Published by Elsevier Ltd.
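
    As a sketch of the discrimination measure used above, the following computes a c-statistic (area under the ROC curve) from a fitted logistic regression. The covariates and outcome are synthetic placeholders, not CPRD data, and a real validation would score the model out-of-sample.

    ```python
    # Fit a logistic regression and compute its c-statistic (ROC AUC).
    # X and died_1yr are hypothetical stand-ins for ADO-style covariates
    # and a 1-year mortality indicator.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    X = rng.standard_normal((500, 3))                       # hypothetical covariates
    died_1yr = (rng.random(500) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)

    model = LogisticRegression().fit(X, died_1yr)
    risk = model.predict_proba(X)[:, 1]                     # predicted 1-year risk
    print(f"c-statistic = {roc_auc_score(died_1yr, risk):.3f}")
    ```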

  16. Measuring the statistical validity of summary meta‐analysis and meta‐regression results for use in clinical practice

    PubMed Central

    Riley, Richard D.

    2017-01-01

    An important question for clinicians appraising a meta‐analysis is: are the findings likely to be valid in their own practice—does the reported effect accurately represent the effect that would occur in their own clinical population? To this end we advance the concept of statistical validity—where the parameter being estimated equals the corresponding parameter for a new independent study. Using a simple (‘leave‐one‐out’) cross‐validation technique, we demonstrate how we may test meta‐analysis estimates for statistical validity using a new validation statistic, Vn, and derive its distribution. We compare this with the usual approach of investigating heterogeneity in meta‐analyses and demonstrate the link between statistical validity and homogeneity. Using a simulation study, the properties of Vn and the Q statistic are compared for univariate random effects meta‐analysis and a tailored meta‐regression model, where information from the setting (included as model covariates) is used to calibrate the summary estimate to the setting of application. Their properties are found to be similar when there are 50 studies or more, but for fewer studies Vn has greater power but a higher type 1 error rate than Q. The power and type 1 error rate of Vn are also shown to depend on the within‐study variance, between‐study variance, study sample size, and the number of studies in the meta‐analysis. Finally, we apply Vn to two published meta‐analyses and conclude that it usefully augments standard methods when deciding upon the likely validity of summary meta‐analysis estimates in clinical practice. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:28620945
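
    The leave-one-out idea behind the validation approach can be sketched as below. This is only the cross-validation skeleton under a fixed-effect simplification with made-up study estimates; the paper's actual Vn statistic and its distribution are derived there and are not reproduced here.

    ```python
    # Leave-one-out loop: re-estimate the pooled effect without study i and
    # compare it to study i's own estimate, standardized by the combined
    # uncertainty. Fixed-effect simplification; study data are hypothetical.
    import numpy as np

    effects = np.array([0.30, 0.25, 0.41, 0.18, 0.35])  # hypothetical estimates
    ses = np.array([0.10, 0.12, 0.15, 0.09, 0.11])      # hypothetical standard errors
    w = 1 / ses**2                                      # inverse-variance weights

    for i in range(len(effects)):
        keep = np.arange(len(effects)) != i
        pooled = np.sum(w[keep] * effects[keep]) / np.sum(w[keep])
        z = (effects[i] - pooled) / np.sqrt(ses[i]**2 + 1 / np.sum(w[keep]))
        print(f"study {i}: left-out standardized difference = {z:+.2f}")
    ```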

  17. Kappa statistic to measure agreement beyond chance in free-response assessments.

    PubMed

    Carpentier, Marc; Combescure, Christophe; Merlini, Laura; Perneger, Thomas V

    2017-04-19

    The usual kappa statistic requires that all observations be enumerated. However, in free-response assessments, only positive (or abnormal) findings are notified, but negative (or normal) findings are not. This situation occurs frequently in imaging or other diagnostic studies. We propose here a kappa statistic that is suitable for free-response assessments. We derived the equivalent of Cohen's kappa statistic for two raters under the assumption that the number of possible findings for any given patient is very large, as well as a formula for sampling variance that is applicable to independent observations (for clustered observations, a bootstrap procedure is proposed). The proposed statistic was applied to a real-life dataset, and compared with the common practice of collapsing observations within a finite number of regions of interest. The free-response kappa is computed from the total numbers of discordant (b and c) and concordant positive (d) observations made in all patients, as 2d/(b + c + 2d). In 84 full-body magnetic resonance imaging procedures in children that were evaluated by 2 independent raters, the free-response kappa statistic was 0.820. Aggregation of results within regions of interest resulted in overestimation of agreement beyond chance. The free-response kappa provides an estimate of agreement beyond chance in situations where only positive findings are reported by raters.
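
    Since the abstract gives the estimator explicitly, a direct implementation is straightforward; the counts in the example are invented for illustration (they happen to give a value close to the reported 0.820).

    ```python
    # Free-response kappa as defined in the abstract: 2d / (b + c + 2d),
    # where b and c are discordant counts and d the concordant positive
    # count, summed over all patients. Example counts are made up.
    def free_response_kappa(b: int, c: int, d: int) -> float:
        """Agreement beyond chance when only positive findings are recorded."""
        return 2 * d / (b + c + 2 * d)

    print(free_response_kappa(b=18, c=15, d=75))  # ~0.820 with these invented counts
    ```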

  18. Statistical methods to estimate treatment effects from multichannel electroencephalography (EEG) data in clinical trials.

    PubMed

    Ma, Junshui; Wang, Shubing; Raubertas, Richard; Svetnik, Vladimir

    2010-07-15

    With the increasing popularity of using electroencephalography (EEG) to reveal treatment effects in drug development clinical trials, the vast volume and complex nature of EEG data pose an intriguing, but challenging, topic. In this paper the statistical analysis methods recommended by the EEG community, along with methods frequently used in the published literature, are first reviewed. A straightforward adjustment of the existing methods to handle multichannel EEG data is then introduced. In addition, based on the spatial smoothness property of EEG data, a new category of statistical methods is proposed. The new methods use a linear combination of low-degree spherical harmonic (SPHARM) basis functions to represent a spatially smoothed version of the EEG data on the scalp, which is close to a sphere in shape. In total, seven statistical methods, including both the existing and the newly proposed methods, are applied to two clinical datasets to compare their power to detect a drug effect. Contrary to the EEG community's recommendation, our results suggest that (1) the nonparametric method does not outperform its parametric counterpart; and (2) including baseline data in the analysis does not always improve the statistical power. In addition, our results recommend that (3) simple paired statistical tests should be avoided due to their poor power; and (4) the proposed spatially smoothed methods perform better than their unsmoothed versions. Copyright 2010 Elsevier B.V. All rights reserved.
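
    A minimal sketch of the SPHARM idea described above: fit a low-degree real spherical harmonic basis to scalp measurements by least squares, yielding a spatially smoothed representation. Electrode angles and values here are synthetic; a real analysis would use actual montage coordinates.

    ```python
    # Least-squares fit of low-degree real spherical harmonics to synthetic
    # scalp data; the fitted values are the spatially smoothed channels.
    import numpy as np
    from scipy.special import sph_harm

    rng = np.random.default_rng(2)
    n_ch = 32
    theta = rng.uniform(0, 2 * np.pi, n_ch)               # electrode azimuth
    phi = rng.uniform(0, np.pi / 2, n_ch)                 # polar angle, upper hemisphere
    data = np.cos(phi) + 0.1 * rng.standard_normal(n_ch)  # smooth signal + noise

    L = 3                                                 # low max degree => smooth fit
    cols = []
    for l in range(L + 1):
        cols.append(sph_harm(0, l, theta, phi).real)
        for m in range(1, l + 1):
            Y = sph_harm(m, l, theta, phi)
            cols.append(np.sqrt(2) * Y.real)              # cos(m*theta) component
            cols.append(np.sqrt(2) * Y.imag)              # sin(m*theta) component
    basis = np.column_stack(cols)                         # (L+1)**2 = 16 functions

    coef, *_ = np.linalg.lstsq(basis, data, rcond=None)
    smoothed = basis @ coef
    print(f"residual RMS: {np.sqrt(np.mean((data - smoothed)**2)):.3f}")
    ```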

  19. Statistical Engineering in Air Traffic Management Research

    NASA Technical Reports Server (NTRS)

    Wilson, Sara R.

    2015-01-01

    NASA is working to develop an integrated set of advanced technologies to enable efficient arrival operations in high-density terminal airspace for the Next Generation Air Transportation System. This integrated arrival solution is being validated and verified in laboratories and transitioned to a field prototype for an operational demonstration at a major U.S. airport. Within NASA, this is a collaborative effort between Ames and Langley Research Centers involving a multi-year iterative experimentation process. Designing and analyzing a series of sequential batch computer simulations and human-in-the-loop experiments across multiple facilities and simulation environments involves a number of statistical challenges. Experiments conducted in separate laboratories typically have different limitations and constraints, and can take different approaches with respect to the fundamental principles of statistical design of experiments. This often makes it difficult to compare results from multiple experiments and incorporate findings into the next experiment in the series. A statistical engineering approach is being employed within this project to support risk-informed decision making and maximize the knowledge gained within the available resources. This presentation describes a statistical engineering case study from NASA, highlights statistical challenges, and discusses areas where existing statistical methodology is adapted and extended.

  20. A proposal for the measurement of graphical statistics effectiveness: Does it enhance or interfere with statistical reasoning?

    NASA Astrophysics Data System (ADS)

    Agus, M.; Penna, M. P.; Peró-Cebollero, M.; Guàrdia-Olmos, J.

    2015-02-01

    Numerous studies have examined students' difficulties in understanding some notions related to statistical problems. Some authors have observed that the presentation of distinct visual representations could increase statistical reasoning, supporting the principle of graphical facilitation. Other researchers disagree with this viewpoint, emphasising the impediments related to the use of illustrations that could overload the cognitive system with insignificant data. In this work we aim at comparing probabilistic statistical reasoning across two different formats of problem presentation: graphical and verbal-numerical. We conceived and presented five pairs of homologous simple problems in the verbal-numerical and graphical formats to 311 undergraduate Psychology students (n=156 in Italy and n=155 in Spain) without statistical expertise. The purpose of our work was to evaluate the effect of graphical facilitation in probabilistic statistical reasoning. Each undergraduate solved each pair of problems in the two formats, with different problem presentation orders and sequences. Data analyses highlighted that the effect of graphical facilitation is infrequent in psychology undergraduates. This effect is related to many factors (such as knowledge, abilities, attitudes, and anxiety); moreover, it might be considered the result of an interaction between individual and task characteristics.

  1. The results of nucleic acid testing in remunerated and non-remunerated blood donors in Lithuania

    PubMed Central

    Kalibatas, Vytenis; Kalibatienė, Lina

    2014-01-01

    Background In Lithuania, governmentally covered remuneration for whole blood donations prevails. Donors may choose to accept or reject the remuneration. The purpose of this study was to compare the rate of nucleic acid testing (NAT) discriminatory-positive markers for human immunodeficiency virus-1 (HIV-1), hepatitis B virus (HBV) and hepatitis C virus (HCV) in seronegative, first-time and repeat, remunerated and non-remunerated donations at the National Blood Centre in Lithuania during the period from 2005 to 2010. Materials and methods All seronegative whole blood and blood component donations were individually analysed by NAT for HIV-1, HBV and HCV. Only discriminatory-positive NAT results were classified. The prevalence of discriminatory-positive NAT per 100,000 donations in the donor groups and the odds ratios comparing the remunerated and non-remunerated donations were determined. Results Significant differences were observed for HBV NAT results: 47.42 and 26.29 per 100,000 remunerated first-time and repeat donations, respectively, compared to 10.6 and 3.58 per 100,000 non-remunerated first-time and repeat, seronegative donations, respectively. The differences were also significant for HCV NAT results: 47.42 and 51.99 for remunerated first-time and repeat donations, respectively, compared to 2.12 and 0 per 100,000 non-remunerated first-time and repeat, seronegative donations, respectively. No seronegative, discriminatory-positive NAT HIV case was found. The odds of discriminatory HBV and HCV NAT-positive results were statistically significantly higher for both first-time and repeat remunerated donations compared to first-time and repeat non-remunerated donations. Discussion First-time and repeat remunerated seronegative donations were associated with a statistically significantly higher prevalence and odds of discriminatory-positive HBV and HCV NAT results compared to first-time and repeat non-remunerated donations at the National Blood Centre in Lithuania. PMID:24120587
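
    A sketch of the odds-ratio comparison reported above, computed from a 2x2 table with a Wald 95% confidence interval on the log scale; the counts are hypothetical stand-ins chosen only to match the order of magnitude of the reported rates, not the Lithuanian data.

    ```python
    # Odds ratio with Wald 95% CI from 2x2 counts. The example counts are
    # hypothetical (roughly 47 and 11 per 100,000, as in the abstract).
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """a,b = positives/negatives in group 1; c,d = same in group 2."""
        oratio = (a / b) / (c / d)
        se = math.sqrt(1/a + 1/b + 1/c + 1/d)          # SE of log odds ratio
        lo = math.exp(math.log(oratio) - z * se)
        hi = math.exp(math.log(oratio) + z * se)
        return oratio, lo, hi

    # hypothetical: 9 positives in 19,000 remunerated vs 3 in 28,000 non-remunerated
    print(odds_ratio_ci(9, 19000 - 9, 3, 28000 - 3))
    ```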

  2. Statistical Power of Alternative Structural Models for Comparative Effectiveness Research: Advantages of Modeling Unreliability.

    PubMed

    Coman, Emil N; Iordache, Eugen; Dierker, Lisa; Fifield, Judith; Schensul, Jean J; Suggs, Suzanne; Barbour, Russell

    2014-05-01

    The advantages of modeling the unreliability of outcomes when evaluating the comparative effectiveness of health interventions are illustrated. Adding an action-research intervention component to a regular summer job program for youth was expected to help in preventing risk behaviors. A series of simple two-group alternative structural equation models are compared to test the effect of the intervention on one key attitudinal outcome in terms of model fit and statistical power with Monte Carlo simulations. Some models presuming parameters equal across the intervention and comparison groups were underpowered to detect the intervention effect, yet modeling the unreliability of the outcome measure increased their statistical power and helped in the detection of the hypothesized effect. Comparative Effectiveness Research (CER) could benefit from flexible multi-group alternative structural models organized in decision trees, and modeling the unreliability of measures can be of tremendous help for both the fit of statistical models to the data and their statistical power.
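
    A simplified Monte Carlo illustration of the underlying point: measurement unreliability attenuates an observed intervention effect and hence statistical power. This toy simulation uses a plain t-test and is not the authors' structural equation model comparison.

    ```python
    # Estimate power of a two-group comparison as outcome reliability drops.
    # Reliability = true variance / observed variance, so the error SD for a
    # unit-variance latent score is sqrt(1/reliability - 1).
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(3)

    def power(reliability, n=80, effect=0.5, n_sim=2000):
        hits = 0
        for _ in range(n_sim):
            true_c = rng.standard_normal(n)
            true_t = rng.standard_normal(n) + effect
            noise_sd = np.sqrt(1 / reliability - 1)
            obs_c = true_c + noise_sd * rng.standard_normal(n)
            obs_t = true_t + noise_sd * rng.standard_normal(n)
            hits += ttest_ind(obs_t, obs_c).pvalue < 0.05
        return hits / n_sim

    for rel in (1.0, 0.8, 0.6):
        print(f"reliability {rel:.1f}: power ~ {power(rel):.2f}")
    ```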

  3. A range of complex probabilistic models for RNA secondary structure prediction that includes the nearest-neighbor model and more.

    PubMed

    Rivas, Elena; Lang, Raymond; Eddy, Sean R

    2012-02-01

    The standard approach for single-sequence RNA secondary structure prediction uses a nearest-neighbor thermodynamic model with several thousand experimentally determined energy parameters. An attractive alternative is to use statistical approaches with parameters estimated from growing databases of structural RNAs. Good results have been reported for discriminative statistical methods using complex nearest-neighbor models, including CONTRAfold, Simfold, and ContextFold. Little work has been reported on generative probabilistic models (stochastic context-free grammars [SCFGs]) of comparable complexity, although probabilistic models are generally easier to train and to use. To explore a range of probabilistic models of increasing complexity, and to directly compare probabilistic, thermodynamic, and discriminative approaches, we created TORNADO, a computational tool that can parse a wide spectrum of RNA grammar architectures (including the standard nearest-neighbor model and more) using a generalized super-grammar that can be parameterized with probabilities, energies, or arbitrary scores. By using TORNADO, we find that probabilistic nearest-neighbor models perform comparably to (but not significantly better than) discriminative methods. We find that complex statistical models are prone to overfitting RNA structure and that evaluations should use structurally nonhomologous training and test data sets. Overfitting has affected at least one published method (ContextFold). The most important barrier to improving statistical approaches for RNA secondary structure prediction is the lack of diversity of well-curated single-sequence RNA secondary structures in current RNA databases.

  4. A range of complex probabilistic models for RNA secondary structure prediction that includes the nearest-neighbor model and more

    PubMed Central

    Rivas, Elena; Lang, Raymond; Eddy, Sean R.

    2012-01-01

    The standard approach for single-sequence RNA secondary structure prediction uses a nearest-neighbor thermodynamic model with several thousand experimentally determined energy parameters. An attractive alternative is to use statistical approaches with parameters estimated from growing databases of structural RNAs. Good results have been reported for discriminative statistical methods using complex nearest-neighbor models, including CONTRAfold, Simfold, and ContextFold. Little work has been reported on generative probabilistic models (stochastic context-free grammars [SCFGs]) of comparable complexity, although probabilistic models are generally easier to train and to use. To explore a range of probabilistic models of increasing complexity, and to directly compare probabilistic, thermodynamic, and discriminative approaches, we created TORNADO, a computational tool that can parse a wide spectrum of RNA grammar architectures (including the standard nearest-neighbor model and more) using a generalized super-grammar that can be parameterized with probabilities, energies, or arbitrary scores. By using TORNADO, we find that probabilistic nearest-neighbor models perform comparably to (but not significantly better than) discriminative methods. We find that complex statistical models are prone to overfitting RNA structure and that evaluations should use structurally nonhomologous training and test data sets. Overfitting has affected at least one published method (ContextFold). The most important barrier to improving statistical approaches for RNA secondary structure prediction is the lack of diversity of well-curated single-sequence RNA secondary structures in current RNA databases. PMID:22194308

  5. Pulsed recording of anisotropy and holographic polarization gratings in azo-polymethacrylates with different molecular architectures

    NASA Astrophysics Data System (ADS)

    Forcén, Patricia; Oriol, Luis; Sánchez, Carlos; Alcalá, Rafael; Jankova, Katja; Hvilsted, Søren

    2008-06-01

    Recording of anisotropy and holographic polarization gratings using 532 nm, 4 ns light pulses has been carried out in thin films of polymers with the same azobenzene content (20 wt%) and different molecular architectures. Random and block copolymers comprising azobenzene and methyl methacrylate (MMA) moieties, as well as statistical terpolymers with azobenzene, biphenyl, and MMA units, have been compared in terms of recording sensitivity and stability upon pulsed excitation. Photoinduced anisotropy just after the pulse was significantly higher in the case of the block copolymers than in the two statistical copolymers. The stability of the recorded anisotropy has also been studied. While a stationary value of the photoinduced anisotropy (approximately 50% of the initial photoinduced value) is reached for the block copolymer, photoinduced anisotropy almost vanished after a few hours in the statistical copolymers. Polarization holographic gratings have been registered using two orthogonally circularly polarized light beams. The results are qualitatively similar to those of photoinduced anisotropy, that is, stability of the registered grating and larger values of diffraction efficiency for the block copolymer as compared with the random copolymers. The recording of holographic gratings with submicron period in films several microns thick, showing both polarization and angular selectivity, has also been demonstrated. Block copolymers showed a lamellar block nanosegregated morphology. The interaction among azo chromophores within the nanosegregated azo blocks seems to be the reason for the stability and the photoresponse enhancement in the block copolymer as compared with the statistical ones.

  6. DNA Damage Analysis in Children with Non-syndromic Developmental Delay by Comet Assay.

    PubMed

    Susai, Surraj; Chand, Parkash; Ballambattu, Vishnu Bhat; Hanumanthappa, Nandeesha; Veeramani, Raveendranath

    2016-05-01

    The majority of developmental delays in children are non-syndromic, and they are believed to have an underlying DNA damage, though this is not well substantiated. Hence the present study was carried out to find out if there is any increased DNA damage in children with non-syndromic developmental delay by using the comet assay. The present case-control study was undertaken to assess the level of DNA damage in children with non-syndromic developmental delay and compare it with that of age- and sex-matched controls using submarine gel electrophoresis (comet assay). The blood from clinically diagnosed children with non-syndromic developmental delay and controls was subjected to the alkaline version of the comet assay (single-cell gel electrophoresis) using lymphocytes isolated from the peripheral blood. The comets were observed under a bright-field microscope, photo-captured, and scored using the ImageJ image quantification software. Comet parameters were compared between the cases and controls, and statistical analysis and interpretation of results were done using the statistical software SPSS version 20. The mean comet tail length in cases and controls was 20.77 ± 7.659 μm and 8.97 ± 4.398 μm respectively, which was statistically significant (p < 0.001). Other comet parameters like total comet length and % DNA in tail also showed a statistically significant difference (p < 0.001) between cases and controls. The current investigation revealed increased levels of DNA damage in children with non-syndromic developmental delay when compared to the controls.

  7. Comparison of the Fenwal Amicus and Fresenius Com.Tec cell separators for autologous peripheral blood progenitor cell collection.

    PubMed

    Altuntas, Fevzi; Kocyigit, Ismail; Ozturk, Ahmet; Kaynar, Leylagul; Sari, Ismail; Oztekin, Mehmet; Solmaz, Musa; Eser, Bulent; Cetin, Mustafa; Unal, Ali

    2007-04-01

    Peripheral blood progenitor cells (PBPC) are commonly used as a stem cell source for autologous transplantation. This study was undertaken to evaluate blood cell separators with respect to separation results and content of the harvest. Forty autologous PBPC collections in patients with hematological malignancies were performed with either the Amicus or the COM.TEC cell separator. The median product volume was lower with the Amicus compared to the COM.TEC (125 mL vs. 300 mL; p < 0.001). There was no statistically significant difference in the median number of CD34+ cells/kg in the product between the Amicus and the COM.TEC (3.0 × 10^6 vs. 4.1 × 10^6; p = 0.129). There was a statistically higher mean volume of ACD used in collections on the Amicus compared to the COM.TEC (1040 ± 241 mL vs. 868 ± 176 mL; p = 0.019). There was a statistically significant difference in platelet (PLT) contamination of the products between the Amicus and the COM.TEC (0.3 × 10^11 vs. 1.1 × 10^11; p < 0.001). The median percentage decrease in peripheral blood PLT count was statistically higher with the COM.TEC compared to the Amicus instrument (18.5% vs. 9.5%; p = 0.028). In conclusion, both instruments collected PBPCs efficiently. However, the Amicus has the advantages of lower PLT contamination in the product and less decrease in peripheral blood platelet count, with lower product volume, in the autologous setting.

  8. The effectiveness of the directional microphone in the Oticon Medical Ponto Pro in participants with unilateral sensorineural hearing loss.

    PubMed

    Oeding, Kristi; Valente, Michael

    2013-09-01

    Current bone anchored hearing solutions (BAHSs) have incorporated automatic adaptive multichannel directional microphones (DMs). Previous fixed single-channel hypercardioid DMs in BAHSs have provided benefit in a diffuse listening environment, but little data are available on the performance of adaptive multichannel DMs in BAHSs for persons with unilateral sensorineural hearing loss (USNHL). The primary goal was to determine if statistically significant differences existed in the mean Reception Threshold for Sentences (RTS in dB) in diffuse uncorrelated restaurant noise between unaided, an omnidirectional microphone (OM), split DM (SDM), and full DM (FDM) in the Oticon Medical Ponto Pro. A second goal was to assess subjective benefit using the Abbreviated Profile of Hearing Aid Benefit (APHAB) comparing the Ponto Pro to the participant's current BAHS, and the Ponto Pro and participant's own BAHS to unaided. The third goal was to compare RTS data of the Ponto Pro to data from an identical study examining Cochlear Americas' Divino. A randomized, repeated-measures, single-blind design was used to measure an RTS for each participant for unaided, OM, SDM, and FDM. Fifteen BAHS users with USNHL were recruited from Washington University in St. Louis and the surrounding area. The Ponto Pro was fit by measuring in-situ bone conduction thresholds and was worn for 4 wk. An RTS was obtained utilizing Hearing in Noise Test (HINT) sentences in uncorrelated restaurant noise from an eight-loudspeaker array, and subjective benefit was determined utilizing the APHAB. Analysis of variance (ANOVA) was used to analyze the results of the Ponto Pro HINT and APHAB data, and comparisons between the Ponto Pro and previous Divino data. No statistically significant differences existed in mean RTS between unaided, the Ponto Pro's OM, SDM, or FDM (p = 0.10). The Ponto Pro provided statistically significant benefit for the Background Noise (BN) (p < 0.01) and Reverberation (RV) (p < 0.05) subscales compared to the participant's own BAHS. The Ponto Pro (Ease of Communication [EC] [p < 0.01], BN [p < 0.001], and RV [p < 0.01] subscales) and participant's own BAHS (BN [p < 0.01] and RV [p < 0.01] subscales) overall provided statistically significant benefit compared to unaided. Clinically significant benefit of 5% was present for the Ponto Pro compared to the participant's own BAHS, and of 10% for the Ponto Pro and the participant's own BAHS compared to unaided. The Ponto Pro's OM (p = 0.05), SDM (p = 0.05), and FDM (p < 0.01) were statistically significantly better than the Divino's OM. No significant differences existed between the Ponto Pro's OM, SDM, and FDM compared to the Divino's DM. No statistically significant differences existed between unaided, OM, SDM, or FDM. Participants preferred the Ponto Pro compared to the participant's own BAHS, and the Ponto Pro and participant's own BAHS compared to unaided. The RTS of the Ponto Pro's adaptive multichannel DM was similar to the Divino's fixed hypercardioid DM, but the Ponto Pro's OM was statistically significantly better than the Divino's OM. American Academy of Audiology.

  9. System of Mueller-Jones matrix polarizing mapping of blood plasma films in breast pathology

    NASA Astrophysics Data System (ADS)

    Zabolotna, Natalia I.; Radchenko, Kostiantyn O.; Tarnovskiy, Mykola H.

    2017-08-01

    A combined method of Mueller-Jones matrix mapping and blood plasma film analysis, based on the system proposed in this paper, is presented. Based on the obtained data about the structure and state of the blood plasma samples, diagnostic conclusions can be made about the state of breast cancer patients ("normal" or "pathology"). Statistical analysis is then used to obtain statistical and correlation moments for every coordinate distribution; these indicators serve as diagnostic criteria. The final step is to compare the results and choose the most effective diagnostic indicators. The paper presents the results of Mueller-Jones matrix mapping of optically thin (attenuation coefficient τ ≤ 0.1) blood plasma layers.

  10. The social construction of "evidence-based" drug prevention programs: a reanalysis of data from the Drug Abuse Resistance Education (DARE) program.

    PubMed

    Gorman, Dennis M; Huber, J Charles

    2009-08-01

    This study explores the possibility that any drug prevention program might be considered "evidence-based" given the use of data analysis procedures that optimize the chance of producing statistically significant results, by reanalyzing data from a Drug Abuse Resistance Education (DARE) program evaluation. The analysis produced a number of statistically significant differences between the DARE and control conditions on alcohol and marijuana use measures. Many of these differences occurred at cutoff points on the assessment scales for which post hoc meaningful labels were created. Our results are compared to those from evaluations of programs that appear on evidence-based drug prevention lists.

  11. Channel fading for mobile satellite communications using spread spectrum signaling and TDRSS

    NASA Technical Reports Server (NTRS)

    Jenkins, Jeffrey D.; Fan, Yiping; Osborne, William P.

    1995-01-01

    This paper presents some preliminary results from a propagation experiment which employed NASA's TDRSS and an 8 MHz chip rate spread spectrum signal. Channel fade statistics were measured and analyzed in 21 representative geographical locations covering urban/suburban, open plain, and forested areas. Cumulative distribution functions (CDFs) of 12 individual locations are presented and classified based on location. Representative CDFs from each of the three types of terrain are summarized. These results are discussed, and the fade depths exceeded 10 percent of the time in the three types of environments are tabulated. The spread spectrum fade statistics for tree-lined roads are compared with the Empirical Roadside Shadowing Model.
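
    The tabulated summary statistic can be sketched as follows: the fade depth exceeded 10 percent of the time is the 90th percentile of the fade distribution. The fade samples below are synthetic stand-ins for the TDRSS measurements.

    ```python
    # The fade depth exceeded 10% of the time is the 90th percentile of the
    # measured fades. Synthetic gamma-distributed fades are used here.
    import numpy as np

    rng = np.random.default_rng(4)
    fades_db = rng.gamma(shape=2.0, scale=2.0, size=10_000)  # synthetic fades (dB)

    fade_10pct = np.percentile(fades_db, 90)
    print(f"fade depth exceeded 10% of the time: {fade_10pct:.1f} dB")
    ```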

  12. Endovascular Treatment of Diabetic Foot in a Selected Population of Patients with Below-the-Knee Disease: Is the Angiosome Model Effective?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fossaceca, Rita, E-mail: rfossaceca@hotmail.com; Guzzardi, Giuseppe, E-mail: guz@libero.it; Cerini, Paolo, E-mail: cerini84@hotmail.it

    Purpose. To evaluate the efficacy of percutaneous transluminal angioplasty (PTA) in a selected population of diabetic patients with below-the-knee (BTK) disease and to analyze the reliability of the angiosome model. Methods. We made a retrospective analysis of the results of PTA performed in 201 diabetic patients with BTK-only disease treated at our institute from January 2005 to December 2011. We evaluated the postoperative technical success, and at 1, 6, and 12 months' follow-up, we assessed the rates and values of partial and complete ulcer healing, restenosis, major and minor amputation, limb salvage, and percutaneous oximetry (TcPO2) (Student's t test). We used the angiosome model to compare different clinicolaboratory outcomes in patients treated by direct revascularization (DR) with those in patients treated with the indirect revascularization (IR) technique, using Student's t test and the χ² test. Results. At a mean ± standard deviation follow-up of 17.5 ± 12 months, we observed a mortality rate of 3.5%, a major amputation rate of 9.4%, and a limb salvage rate of 87%, with a statistically significant increase of TcPO2 values at follow-up compared to baseline (p < 0.05). In 34 patients, treatment was performed with the IR technique and in 167 by DR; in both groups, there was a statistically significant increase of TcPO2 values at follow-up compared to baseline (p < 0.05), without statistically significant differences in therapeutic efficacy. Conclusion. PTA of BTK-only disease is a safe and effective option. The DR technique is the first treatment option; we believe, however, that IR is similarly effective, with good results over time.

  13. Plaque removal efficacy of a battery-operated toothbrush compared to a manual toothbrush.

    PubMed

    Ruhlman, C D; Bartizek, R D; Biesbrock, A R

    2001-08-01

    Recently, a new power toothbrush has been marketed with a design that fundamentally differs from other marketed power toothbrushes, in that it incorporates a round oscillating head in conjunction with fixed bristles. The objective of this study was to compare the plaque removal efficacy of a control manual toothbrush (Colgate Navigator) to this experimental power toothbrush (Crest SpinBrush) following a single use. This study was a randomized, controlled, examiner-blind, 4-period crossover design which examined plaque removal with the two toothbrushes following a single use in 40 completed subjects. Plaque was scored before and after brushing using the Turesky Modification of the Quigley-Hein Index. Baseline plaque scores were 1.77 for both the experimental toothbrush and control toothbrush treatment groups. With respect to all surfaces examined, the experimental toothbrush delivered an adjusted (via analysis of covariance) mean difference between baseline and post-brushing plaque scores of 0.48 while the control toothbrush delivered an adjusted mean difference of 0.35. The experimental toothbrush removed, on average, 37.6% more plaque than the control toothbrush. These results were statistically significant (P < 0.001). With respect to buccal surfaces, the experimental toothbrush delivered an adjusted mean difference between baseline and post-brushing plaque scores of 0.54 while the control toothbrush delivered an adjusted mean difference of 0.42. This represents 27.8% more plaque removal with the experimental toothbrush compared to the control toothbrush. These results were also statistically significant (P = 0.001). Results on lingual surfaces also demonstrated statistically significantly (P < 0.001) greater plaque removal for the experimental toothbrush, with an average of 53.4% more plaque removal.

  14. Flow Cytometry of Human Primary Epidermal and Follicular Keratinocytes

    PubMed Central

    Gragnani, Alfredo; Ipolito, Michelle Zampieri; Sobral, Christiane S; Brunialti, Milena Karina Coló; Salomão, Reinaldo; Ferreira, Lydia Masako

    2008-01-01

    Objective: The aim of this study was to characterize, using flow cytometry, cultured human primary keratinocytes isolated from the epidermis and hair follicles by different methods. Methods: Human keratinocytes derived from discarded fragments of total skin and scalp hair follicles from patients who underwent plastic surgery in the Plastic Surgery Division at UNIFESP were used. The epidermal keratinocytes were isolated by using 3 different methods: the standard method, upon exposure to trypsin for 30 minutes; the second, by treatment with dispase for 18 hours and with trypsin for 10 minutes; and the third, by treatment with dispase for 18 hours and with trypsin for 30 minutes. Follicular keratinocytes were isolated using the standard method. Results: On comparing the group treated with dispase for 18 hours and with trypsin for 10 minutes with the group treated with dispase for 18 hours and with trypsin for 30 minutes, it was observed that the first group presented the largest number of viable cells, a statistically significantly smaller number of cells in late apoptosis and necrosis, and no difference in apoptosis. When we compared the group treated with dispase for 18 hours and with trypsin for 10 minutes with the group treated with trypsin, the first group presented the largest number of viable cells, a statistically significantly smaller number of cells in apoptosis, and no difference in late apoptosis and necrosis. When we compared the results of the group treated with dispase for 18 hours and with trypsin for 10 minutes with the results for follicular isolation, there was a statistically significant difference in apoptosis and viable cells. Conclusion: The isolation method of treatment with dispase for 18 hours and with trypsin for 10 minutes produced the largest number of viable cells and the smallest number of cells in apoptosis/necrosis. PMID:18350110

  15. Spectral statistics of the uni-modular ensemble

    NASA Astrophysics Data System (ADS)

    Joyner, Christopher H.; Smilansky, Uzy; Weidenmüller, Hans A.

    2017-09-01

    We investigate the spectral statistics of Hermitian matrices in which the elements are chosen uniformly from U(1), called the uni-modular ensemble (UME), in the limit of large matrix size. Using three complementary methods (a supersymmetric integration method, a combinatorial graph-theoretical analysis, and a Brownian motion approach), we are able to derive expressions for 1/N corrections to the mean spectral moments and also analyse the fluctuations about this mean. By addressing the same ensemble from three different points of view, we can critically compare their relative advantages and derive some new results.

  16. Optimization of Statistical Methods Impact on Quantitative Proteomics Data.

    PubMed

    Pursiheimo, Anna; Vehmas, Anni P; Afzal, Saira; Suomi, Tomi; Chand, Thaman; Strauss, Leena; Poutanen, Matti; Rokka, Anne; Corthals, Garry L; Elo, Laura L

    2015-10-02

    As tools for quantitative label-free mass spectrometry (MS) rapidly develop, a consensus about the best practices is not apparent. In the work described here we compared popular statistical methods for detecting differential protein expression from quantitative MS data, using both controlled experiments with known quantitative differences for specific proteins used as standards and "real" experiments where differences in protein abundance are not known a priori. Our results suggest that data-driven reproducibility-optimization can consistently produce reliable differential expression rankings for label-free proteome tools and is straightforward in its application.

  17. Evaluating image reconstruction methods for tumor detection performance in whole-body PET oncology imaging

    NASA Astrophysics Data System (ADS)

    Lartizien, Carole; Kinahan, Paul E.; Comtat, Claude; Lin, Michael; Swensson, Richard G.; Trebossen, Regine; Bendriem, Bernard

    2000-04-01

    This work presents initial results from observer detection performance studies using the same volume visualization software tools that are used in clinical PET oncology imaging. Research into the FORE+OSEM and FORE+AWOSEM statistical image reconstruction methods tailored to whole-body 3D PET oncology imaging has indicated potential improvements in image SNR compared to currently used analytic reconstruction methods (FBP). To assess the resulting impact of these reconstruction methods on the performance of human observers in detecting and localizing tumors, we use a non-Monte Carlo technique to generate multiple statistically accurate realizations of 3D whole-body PET data, based on an extended MCAT phantom and with clinically realistic levels of statistical noise. For each realization, we add a fixed number of randomly located 1 cm diam. lesions whose contrast is varied among pre-calibrated values so that the range of true positive fractions is well sampled. The observer is told the number of tumors and, similar to the AFROC method, asked to localize all of them. The true positive fraction for the three algorithms (FBP, FORE+OSEM, FORE+AWOSEM) as a function of lesion contrast is calculated, although other protocols could be compared. A confidence level for each tumor is also recorded for incorporation into later AFROC analysis.

  18. Virtual and stereoscopic anatomy: when virtual reality meets medical education.

    PubMed

    de Faria, Jose Weber Vieira; Teixeira, Manoel Jacobsen; de Moura Sousa Júnior, Leonardo; Otoch, Jose Pinhata; Figueiredo, Eberval Gadelha

    2016-11-01

    OBJECTIVE The authors sought to construct, implement, and evaluate an interactive and stereoscopic resource for teaching neuroanatomy, accessible from personal computers. METHODS Forty fresh brains (80 hemispheres) were dissected. Images of areas of interest were captured using a manual turntable and processed and stored in a 5337-image database. Pedagogic evaluation was performed in 84 graduate medical students, divided into 3 groups: 1 (conventional method), 2 (interactive nonstereoscopic), and 3 (interactive and stereoscopic). The method was evaluated through a written theory test and a lab practicum. RESULTS Groups 2 and 3 showed the highest mean scores in pedagogic evaluations and differed significantly from Group 1 (p < 0.05). Group 2 did not differ statistically from Group 3 (p > 0.05). Effect sizes, measured as differences in scores before and after lectures, indicate the effectiveness of the method. ANOVA results showed a significant difference (p < 0.05) between groups, and the Tukey test showed statistical differences between Group 1 and the other 2 groups (p < 0.05). No statistical differences between Groups 2 and 3 were found in the practicum. However, there were significant differences when Groups 2 and 3 were compared with Group 1 (p < 0.05). CONCLUSIONS The authors conclude that this method promoted further improvement in knowledge for students and fostered significantly higher learning when compared with traditional teaching resources.

  19. Comparative Evaluation of Cone-beam Computed Tomography versus Direct Surgical Measurements in the Diagnosis of Mandibular Molar Furcation Involvement

    PubMed Central

    Padmanabhan, Shyam; Dommy, Ahila; Guru, Sanjeela R.; Joseph, Ajesh

    2017-01-01

    Aim: Periodontists frequently experience inconvenience in the accurate assessment and treatment of furcation areas affected by periodontal disease. Furcation involvement (FI) most commonly affects the mandibular molars. Diagnosis of furcation-involved teeth is mainly by the assessment of probing pocket depth, clinical attachment level, furcation entrance probing, and intraoral periapical radiographs. Three-dimensional imaging has provided an advantage to the clinician in the assessment of bone morphology. Thus, the present study aimed to compare the diagnostic efficacy of cone-beam computed tomography (CBCT) against direct intrasurgical measurements of furcation defects in mandibular molars. Subjects and Methods: The study population included 14 patients with 25 mandibular molar furcation sites. CBCT was performed to measure the height, width, and depth of furcation defects of mandibular molars with Grade II and Grade III FI. Intrasurgical measurements of the FI were assessed during periodontal flap surgery in indicated teeth and were compared with the CBCT measurements. Statistical analysis was done using a paired t-test and a Bland–Altman plot. Results: The CBCT versus intrasurgical furcation measurements were 2.18 ± 0.86 mm and 2.30 ± 0.89 mm for furcation height, 1.87 ± 0.52 mm and 1.84 ± 0.49 mm for furcation width, and 3.81 ± 1.37 mm and 4.05 ± 1.49 mm for furcation depth, respectively. Results showed that there was no statistically significant difference between the measured parameters, indicating that the two methods were statistically similar. Conclusion: The accuracy of assessment of mandibular molar FI by CBCT was comparable to that of direct surgical measurements. These findings indicate that CBCT is an excellent adjunctive diagnostic tool in periodontal treatment planning. PMID:29042732
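
    A minimal sketch of the agreement analysis named above: a paired t-test plus Bland–Altman bias and 95% limits of agreement for two methods measuring the same defects; the measurement vectors are hypothetical, not the study's data.

    ```python
    # Paired t-test and Bland-Altman bias with 95% limits of agreement for
    # two measurement methods. Depth values (mm) are hypothetical.
    import numpy as np
    from scipy.stats import ttest_rel

    cbct = np.array([3.5, 4.0, 2.8, 5.1, 3.9, 4.4])     # hypothetical CBCT depths
    surgery = np.array([3.8, 4.1, 3.0, 5.5, 4.2, 4.6])  # hypothetical surgical depths

    t, p = ttest_rel(cbct, surgery)
    diff = cbct - surgery
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)                       # half-width of limits of agreement
    print(f"p = {p:.3f}, bias = {bias:.2f} mm, LoA = ({bias - loa:.2f}, {bias + loa:.2f}) mm")
    ```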

  20. Nesfatin-1 alleviates extrahepatic cholestatic damage of liver in rats

    PubMed Central

    Solmaz, Ali; Gülçiçek, Osman Bilgin; Erçetin, Candaş; Yiğitbaş, Hakan; Yavuz, Erkan; Arıcı, Sinan; Erzik, Can; Zengi, Oğuzhan; Demirtürk, Pelin; Çelik, Atilla; Çelebi, Fatih

    2016-01-01

    Obstructive jaundice (OJ) can be defined as cessation of bile flow into the small intestine due to benign or malignant changes. Nesfatin-1, a recently discovered anorexigenic peptide derived from nucleobindin-2 in hypothalamic nuclei, has been shown to have anti-inflammatory and antiapoptotic effects. This study aimed to investigate the therapeutic effects of nesfatin-1 on OJ in rats. Twenty-four adult male Wistar-Hannover rats were randomly assigned to three groups: sham (n = 8), control (n = 8), and nesfatin (n = 8). After bile duct ligation, the study groups were treated with saline or nesfatin-1 for 10 days. Afterward, blood and liver tissue samples were obtained for biochemical analyses, measurement of cytokines, determination of oxidative DNA damage, DNA fragmentation, and histopathologic analyses. Alanine aminotransferase and gamma-glutamyl transferase levels were decreased after the nesfatin treatment; however, these drops were statistically non-significant compared to the control group (p = 0.345, p = 0.114). Malondialdehyde levels decreased significantly in the nesfatin group compared to the control group (p = 0.032). Decreases in interleukin-6 and tumor necrosis factor-α levels in the liver tissue samples were not statistically significant in the nesfatin group compared to the control group. The level of oxidative DNA damage was lower in the nesfatin group; however, this result was not statistically significant (p = 0.75). DNA fragmentation results of all groups were similar. Histopathological examination revealed that there was less neutrophil infiltration, edema, bile duct proliferation, hepatocyte necrosis, basement membrane damage, and parenchymal necrosis in the nesfatin group compared to the control group. Nesfatin-1 treatment could alleviate cholestatic liver damage caused by OJ due to its anti-inflammatory and antioxidant effects. PMID:27524109

  1. Measuring hospital efficiency--comparing four European countries.

    PubMed

    Mateus, Céu; Joaquim, Inês; Nunes, Carla

    2015-02-01

    Performing international comparisons of efficiency usually has two main drawbacks: the lack of comparability of data from different countries and the appropriateness and adequacy of the data selected for efficiency measurement. With inpatient discharges for four countries, some of the problems of data comparability usually found in international comparisons were mitigated. The objectives are to assess and compare hospital efficiency levels within and between countries, using stochastic frontier analysis with both cross-sectional and panel data. Data from English (2005-2008), Portuguese (2002-2009), Spanish (2003-2009) and Slovenian (2005-2009) hospital discharges and characteristics are used. Weighted hospital discharges were considered as outputs, while the numbers of employees, physicians, nurses and beds were selected as inputs of the production function. Stochastic frontier analyses using both cross-sectional and panel data were performed, as well as ordinary least squares (OLS) analysis. The adequacy of the data was assessed with Kolmogorov-Smirnov and Breusch-Pagan/Cook-Weisberg tests. The available data were sufficient to perform efficiency measurements using stochastic frontier analysis with cross-sectional data. The likelihood ratio test reveals that with cross-sectional data stochastic frontier analysis (SFA) is not statistically different from OLS for the Portuguese data, while SFA and OLS estimates are statistically different for the Spanish, Slovenian and English data. In the panel data, the inefficiency term is statistically different from 0 in the four countries in the analysis, though for Portugal it is still close to 0. Panel data are preferred over cross-sectional analysis because the results are more robust. For all countries except Slovenia, beds and employees are relevant inputs for the production process. © The Author 2015. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.

  2. Zirconia Dental Implants: Investigation of Clinical Parameters, Patient Satisfaction, and Microbial Contamination.

    PubMed

    Holländer, Jens; Lorenz, Jonas; Stübinger, Stefan; Hölscher, Werner; Heidemann, Detlef; Ghanaati, Shahram; Sader, Robert

    2016-01-01

    In recent years, dental implants made from zirconia have been further developed and are considered a reliable treatment method for replacing missing teeth. The aim of this study was to analyze dental implants made from zirconia regarding their clinical performance compared with natural teeth (control). One hundred six zirconia implants in 38 adults were analyzed in a clinical study after 1 year of loading. The plaque index (PI), bleeding on probing (BOP), probing pocket depth (PPD), probing attachment level (PAL), and creeping or recession (CR/REC) of the gingiva were recorded and compared with natural control teeth (CT). Furthermore, the papilla index (PAP), Periotest values (PTV), microbial colonization of the implant/dental sulcus fluid, and patient satisfaction were assessed. The survival rate was 100%. No statistical significance was observed between implants and teeth regarding BOP, PPD, and PAL. Statistical significance was detected regarding PI and CR/REC, with significantly less plaque accumulation and recession in the study group. Mean PAP was 1.76 ± 0.55, whereas mean PTV was -1.31 ± 2.24 (range from -5 to +6). A statistically non-significant higher colonization of periodontitis/peri-implantitis bacteria was observed in the implant group. The questionnaire showed that the majority of the patients were satisfied with the overall treatment. One-piece zirconia dental implants exhibited similar clinical results (BOP, PPD, and PAL) compared with natural teeth; in regard to adhesion of plaque (PI) and creeping attachment (CR/REC), zirconia implants performed even better. The favorable results for PAL and CR/REC reflect the comparably low affinity of zirconia for plaque adhesion. Patient satisfaction indicated a high level of acceptance of zirconia implants. However, a long-term follow-up is needed to support these findings.

  3. The Impact of Supplemental Antioxidants on Visual Function in Nonadvanced Age-Related Macular Degeneration: A Head-to-Head Randomized Clinical Trial.

    PubMed

    Akuffo, Kwadwo Owusu; Beatty, Stephen; Peto, Tunde; Stack, Jim; Stringham, Jim; Kelly, David; Leung, Irene; Corcoran, Laura; Nolan, John M

    2017-10-01

    The purpose of this study was to evaluate the impact of supplemental macular carotenoids (including versus not including meso-zeaxanthin) in combination with coantioxidants on visual function in patients with nonadvanced age-related macular degeneration. In this study, 121 participants were randomly assigned to group 1 (Age-Related Eye Disease Study 2 formulation with a low dose [25 mg] of zinc and an addition of 10 mg meso-zeaxanthin; n = 60) or group 2 (Age-Related Eye Disease Study 2 formulation with a low dose [25 mg] of zinc; n = 61). Visual function was assessed using best-corrected visual acuity, contrast sensitivity (CS), glare disability, retinal straylight, photostress recovery time, reading performance, and the National Eye Institute Visual Function Questionnaire-25. Macular pigment was measured using customized heterochromatic flicker photometry. There was a statistically significant improvement in the primary outcome measure (letter CS at 6 cycles per degree [6 cpd]) over time (P = 0.013), and this observed improvement was statistically comparable between interventions (P = 0.881). Statistically significant improvements in several secondary outcome visual function measures (letter CS at 1.2 and 2.4 cpd; mesopic and photopic CS at all spatial frequencies; mesopic glare disability at 1.5, 3, and 6 cpd; photopic glare disability at 1.5, 3, 6, and 12 cpd; photostress recovery time; retinal straylight; mean and maximum reading speed) were also observed over time (P < 0.05, for all), and were statistically comparable between interventions (P > 0.05, for all). Statistically significant increases in macular pigment at all eccentricities were observed over time (P < 0.0005, for all), and the degree of augmentation was statistically comparable between interventions (P > 0.05). Antioxidant supplementation in patients with nonadvanced age-related macular degeneration results in significant increases in macular pigment and improvements in CS and other measures of visual function. (Clinical trial, http://www.isrctn.com/ISRCTN13894787).

  4. Morbidity and chronic pain following different techniques of caesarean section: A comparative study.

    PubMed

    Belci, D; Di Renzo, G C; Stark, M; Đurić, J; Zoričić, D; Belci, M; Peteh, L L

    2015-01-01

    Research examining long-term outcomes after childbirth performed with different techniques of caesarean section has been limited and does not provide information on morbidity and neuropathic pain. This study compares two groups of patients, one operated on by the 'Traditional' method using a Pfannenstiel incision and the other by the 'Misgav Ladach' method, ≥ 5 years after the operation. We find better long-term postoperative results in the patients treated with the Misgav Ladach method compared with the Traditional method. The results were statistically better regarding the intensity of pain, the presence of neuropathic and chronic pain, and the level of satisfaction with the cosmetic appearance of the scar.

  5. Control of the amplifications of large-band amplitude-modulated pulses in an Nd-glass amplifier chain

    NASA Astrophysics Data System (ADS)

    Videau, Laurent; Bar, Emmanuel; Rouyer, Claude; Gouedard, Claude; Garnier, Josselin C.; Migus, Arnold

    1999-07-01

    We study nonlinear effects in the amplification of partially coherent pulses in a high-power laser chain, comparing statistical models with experimental results for temporal and spatial effects. First, we show the interplay between self-phase modulation, which broadens the spectral bandwidth, and gain narrowing, which reduces the output spectrum. Theoretical results are presented for spectral broadening and energy limitation in the case of time-incoherent pulses. In the second part, we introduce spatial incoherence with a multimode optical fiber, which provides a smoothed beam. We show with experimental results that spatial filter pinholes are responsible for additional energy losses in the amplification. We develop a statistical model which takes into account the deformation of the focused beam as a function of the B integral. We estimate the energy transmission of the spatial filter pinholes and compare this model with experimental data, finding good agreement between theory and experiment. In conclusion, we present an analogy between temporal and spatial effects, with spectral broadening and spectral filtering. Finally, we propose some solutions to control energy limitations in the amplification of smoothed pulses.
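
    The beam-deformation model mentioned above is parameterized by the B integral, the accumulated nonlinear phase B = (2π/λ) ∫ n2 I dz. A minimal sketch of its evaluation, with illustrative Nd:glass numbers (the wavelength, nonlinear index, and intensity profile are all assumptions):

        import numpy as np
        from scipy.integrate import trapezoid

        lam = 1.053e-6   # Nd:glass wavelength (m)
        n2 = 3e-20       # nonlinear refractive index (m^2/W), typical phosphate glass
        z = np.linspace(0.0, 0.2, 500)   # 20 cm of amplifying glass
        I = 2e13 * np.exp(z / 0.2)       # assumed intensity growth along the chain (W/m^2)
        B = 2.0 * np.pi / lam * trapezoid(n2 * I, z)  # accumulated nonlinear phase (rad)
        print(f"B integral = {B:.2f} rad")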

  6. Effects of 980 diode laser treatment combined with scaling and root planing on periodontal pockets in chronic periodontitis patients

    NASA Astrophysics Data System (ADS)

    Fallah, Alireza

    2010-02-01

    Objective: This study compared the effect of a 980 diode laser plus scaling and root planing (SRP) versus SRP alone in the treatment of chronic periodontitis. Method: 21 healthy patients with moderate periodontitis and a probing depth of at least 5 mm were included in the study. A total of 42 sites were treated over 6 weeks with a combination of the 980 diode laser and SRP (21 sites) or SRP alone (21 sites). The gingival index (GI), probing pocket depth (PPD) and bleeding on probing (BOP) were examined at baseline and 6 weeks after the start of treatment. Results: Both groups showed statistically significant improvements in GI, BOP and PPD after treatment. The results also showed significantly greater improvement in the laser + SRP group than in the SRP-alone group. Conclusion: The present data suggest that treatment of chronic periodontitis with either the 980 diode laser + SRP or SRP alone results in statistically significant improvements in the clinical parameters. The combination of 980 diode laser irradiation in the gingival sulcus and SRP was significantly better compared with SRP alone.

  7. Universal Recurrence Time Statistics of Characteristic Earthquakes

    NASA Astrophysics Data System (ADS)

    Goltz, C.; Turcotte, D. L.; Abaimov, S.; Nadeau, R. M.

    2006-12-01

    Characteristic earthquakes are defined to occur quasi-periodically on major faults. Do recurrence time statistics of such earthquakes follow a particular statistical distribution? If so, which one? The answer is fundamental and has important implications for hazard assessment. The problem cannot be solved by comparing the goodness of statistical fits, as the available sequences are too short. The Parkfield sequence of M ≍ 6 earthquakes, one of the most extensive reliable data sets available, has grown to merely seven events with the last earthquake in 2004, for example. Recently, however, advances in seismological monitoring and improved processing methods have unveiled so-called micro-repeaters, micro-earthquakes which recur at exactly the same location on a fault. It seems plausible to regard these earthquakes as a miniature version of the classic characteristic earthquakes. Micro-repeaters are much more frequent than major earthquakes, leading to longer sequences for analysis. Due to their recent discovery, however, available sequences contain fewer than 20 events at present. In this paper we present results for the analysis of recurrence times for several micro-repeater sequences from Parkfield and adjacent regions. To improve the statistical significance of our findings, we combine several sequences into one by rescaling the individual sets by their respective mean recurrence intervals and Weibull exponents. This novel approach of rescaled combination yields the most extensive data set possible. We find that the resulting statistics can be fitted well by an exponential distribution, confirming the universal applicability of the Weibull distribution to characteristic earthquakes. A similar result is, however, obtained from rescaled combination with regard to the lognormal distribution.
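
    The rescaling step lends itself to a short sketch: if recurrence times T follow a Weibull distribution with a given shape and scale, then (T/scale)^shape is unit-rate exponential, so rescaling each sequence by its fitted parameters should leave a pooled sample that passes an exponential test. A minimal illustration on simulated sequences (all parameters invented, not the Parkfield data):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        # Three synthetic micro-repeater sequences with different Weibull parameters
        sequences = [stats.weibull_min.rvs(c, scale=s, size=15, random_state=rng)
                     for c, s in [(1.6, 2.0), (2.1, 0.8), (1.3, 4.5)]]

        pooled = []
        for seq in sequences:
            shape, _, scale = stats.weibull_min.fit(seq, floc=0)  # location fixed at 0
            pooled.extend((seq / scale) ** shape)  # Weibull -> Exp(1) under the fitted model

        ks = stats.kstest(pooled, "expon")  # does the pooled sample look exponential?
        print(f"n = {len(pooled)}, KS = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")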

  8. Graft survival of diabetic versus nondiabetic donor tissue after initial keratoplasty.

    PubMed

    Vislisel, Jesse M; Liaboe, Chase A; Wagoner, Michael D; Goins, Kenneth M; Sutphin, John E; Schmidt, Gregory A; Zimmerman, M Bridget; Greiner, Mark A

    2015-04-01

    To compare corneal graft survival using tissue from diabetic and nondiabetic donors in patients undergoing initial Descemet stripping automated endothelial keratoplasty (DSAEK) or penetrating keratoplasty (PKP). A retrospective chart review of pseudophakic eyes that underwent DSAEK or PKP was performed. The primary outcome measure was graft failure. Cox proportional hazards regression and Kaplan-Meier survival analyses were used to compare diabetic versus nondiabetic donor tissue for all keratoplasty cases. A total of 183 eyes (136 DSAEK, 47 PKP) were included in the statistical analysis. Among 24 procedures performed using diabetic donor tissue, there were 4 cases (16.7%) of graft failure (3 DSAEK, 1 PKP), and among 159 procedures performed using nondiabetic donor tissue, there were 18 cases (11.3%) of graft failure (12 DSAEK, 6 PKP). The Cox proportional hazards ratio of graft failure for all cases, comparing diabetic with nondiabetic donor tissue, was 1.69, but this difference was not statistically significant (95% confidence interval, 0.56-5.06; P = 0.348). There were no significant differences in Kaplan-Meier curves comparing diabetic with nondiabetic donor tissue for all cases (P = 0.380). Statistical analysis of graft failure by donor diabetes status within each procedure type was not possible because of the small number of graft failure events involving diabetic tissue. We found similar rates of graft failure in all keratoplasty cases when comparing tissue from diabetic and nondiabetic donors, but further investigation is needed to determine whether diabetic donor tissue results in different graft failure rates after DSAEK compared with PKP.
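
    A minimal sketch of this style of analysis using the lifelines package, on made-up graft-survival data — the column names and values are illustrative assumptions, not the study's dataset:

        import pandas as pd
        from lifelines import CoxPHFitter, KaplanMeierFitter
        from lifelines.statistics import logrank_test

        df = pd.DataFrame({
            "years":    [1.2, 3.5, 5.0, 0.8, 4.1, 2.2, 5.0, 3.9, 2.7, 4.8, 1.9, 5.0],
            "failed":   [1,   0,   0,   1,   0,   1,   0,   0,   0,   0,   1,   0],
            "diabetic": [1,   1,   0,   1,   0,   0,   0,   1,   1,   0,   0,   1],
        })

        # Cox regression: hazard ratio and 95% CI for the `diabetic` covariate
        cph = CoxPHFitter().fit(df, duration_col="years", event_col="failed")
        cph.print_summary()

        # Kaplan-Meier estimate per donor group
        for grp, sub in df.groupby("diabetic"):
            km = KaplanMeierFitter().fit(sub["years"], sub["failed"], label=f"diabetic={grp}")
            print(km.survival_function_.tail(1))  # survival at the end of follow-up

        d, nd = df[df.diabetic == 1], df[df.diabetic == 0]
        print(logrank_test(d["years"], nd["years"], d["failed"], nd["failed"]).p_value)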

  9. Condensate statistics and thermodynamics of weakly interacting Bose gas: Recursion relation approach

    NASA Astrophysics Data System (ADS)

    Dorfman, K. E.; Kim, M.; Svidzinsky, A. A.

    2011-03-01

    We study the condensate statistics and thermodynamics of a weakly interacting Bose gas with a fixed total number N of particles in a cubic box. We find the exact recursion relation for the canonical ensemble partition function. Using this relation, we calculate the distribution function of condensate particles for N=200. We also calculate the distribution function based on a multinomial expansion of the characteristic function. As for the ideal gas, both approaches give exact statistical moments for all temperatures within the framework of the Bogoliubov model. We compare them with the results of the unconstrained canonical-ensemble quasiparticle formalism and the hybrid master equation approach. The present recursion relation can be used for any external potential and boundary conditions. We investigate the temperature dependence of the first few statistical moments of condensate fluctuations, as well as thermodynamic potentials and heat capacity, analytically and numerically over the whole temperature range.
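
    In the ideal-gas limit the canonical recursion takes the well-known form Z_N(β) = (1/N) Σ_{k=1}^{N} Z_1(kβ) Z_{N−k}(β) with Z_0 = 1. A minimal sketch for particles in a cubic box (the interacting-gas relation in the paper is more involved; the box units and level cutoff here are illustrative assumptions):

        import numpy as np

        def canonical_Z(N, beta, nmax=20):
            """Z_N via the ideal-gas recursion, for particles in a cubic box.
            Energies are n_x^2 + n_y^2 + n_z^2 in units of hbar^2 pi^2 / (2 m L^2),
            measured from the ground state."""
            ns = np.arange(1, nmax + 1)
            e = (ns[:, None, None] ** 2 + ns[None, :, None] ** 2
                 + ns[None, None, :] ** 2 - 3).ravel()
            z1 = lambda b: np.exp(-b * e).sum()   # single-particle partition function
            Z = np.ones(N + 1)                    # Z_0 = 1
            for n in range(1, N + 1):
                Z[n] = sum(z1(k * beta) * Z[n - k] for k in range(1, n + 1)) / n
            return Z

        Z = canonical_Z(50, beta=0.5)
        print(Z[-1])   # moments then follow from ratios/derivatives of Z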

  10. Absolute fragmentation cross sections in atom-molecule collisions: Scaling laws for non-statistical fragmentation of polycyclic aromatic hydrocarbon molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, T.; Gatchell, M.; Stockett, M. H.

    2014-06-14

    We present scaling laws for absolute cross sections for non-statistical fragmentation in collisions between polycyclic aromatic hydrocarbons (PAH/PAH+) and hydrogen or helium atoms with kinetic energies ranging from 50 eV to 10 keV. Further, we calculate the total fragmentation cross sections (including statistical fragmentation) for 110 eV PAH/PAH+ + He collisions, and show that they compare well with experimental results. We demonstrate that non-statistical fragmentation becomes dominant for large PAHs and that it yields highly reactive fragments forming strong covalent bonds with atoms (H and N) and molecules (C6H5). Thus non-statistical fragmentation may be an effective initial step in the formation of, e.g., polycyclic aromatic nitrogen heterocycles (PANHs). This relates to recent discussions on the evolution of PANHs in space and the reactivities of defect graphene structures.

  11. Statistical Modeling of Zr/Hf Extraction using TBP-D2EHPA Mixtures

    NASA Astrophysics Data System (ADS)

    Rezaeinejhad Jirandehi, Vahid; Haghshenas Fatmehsari, Davoud; Firoozi, Sadegh; Taghizadeh, Mohammad; Keshavarz Alamdari, Eskandar

    2012-12-01

    In the present work, response surface methodology was employed for the study and prediction of Zr/Hf extraction curves in a solvent extraction system using D2EHPA-TBP mixtures. The effect of changes in the levels of temperature, nitric acid concentration, and TBP/D2EHPA ratio (T/D) on Zr/Hf extraction/separation was studied by use of a central composite design. The results showed a statistically significant effect of T/D, nitric acid concentration, and temperature on the extraction percentages of Zr and Hf. In the case of Zr, a statistically significant interaction was found between T/D and nitric acid, whereas for Hf, the interaction terms between T/D and both temperature and nitric acid were significant. Additionally, the extraction curves were successfully predicted by applying the developed statistical regression equations; this approach is faster and more economical than obtaining the curves experimentally.
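
    A minimal sketch of this kind of central-composite/response-surface analysis, fitting a full quadratic model with interactions; the design is in coded units and the response values are invented stand-ins for measured extraction percentages:

        import itertools
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(2)
        # Central composite design in coded units: 8 factorial + 6 axial + 3 center points
        factorial = np.array(list(itertools.product([-1.0, 1.0], repeat=3)))
        axial = np.vstack([a * np.eye(3)[i] for i in range(3) for a in (-1.682, 1.682)])
        design = np.vstack([factorial, axial, np.zeros((3, 3))])
        df = pd.DataFrame(design, columns=["temp", "hno3", "td"])

        # Invented response standing in for the measured Zr extraction percentage
        df["extraction"] = (55 + 4 * df.temp + 6 * df.hno3 + 3 * df.td
                            + 2.5 * df.hno3 * df.td - 1.5 * df.temp ** 2
                            + rng.normal(0, 1.0, len(df)))

        # Full quadratic model: main effects, two-way interactions, squared terms
        model = smf.ols("extraction ~ (temp + hno3 + td)**2"
                        " + I(temp**2) + I(hno3**2) + I(td**2)", data=df).fit()
        print(model.params)   # significant terms map onto the reported interactions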

  12. Multiple Versus Single Set Validation of Multivariate Models to Avoid Mistakes.

    PubMed

    Harrington, Peter de Boves

    2018-01-02

    Validation of multivariate models is of current importance for a wide range of chemical applications. Although important, it is often neglected. The common practice is to use a single external validation set for evaluation. This approach is deficient and may mislead investigators with results that are specific to the single validation set of data. In addition, no statistics are available regarding the precision of a derived figure of merit (FOM). A statistical approach using bootstrapped Latin partitions is advocated. This validation method makes efficient use of the data because each object is used once for validation. It was reviewed a decade earlier, but primarily for the optimization of chemometric models; this review presents the reasons it should be used for generalized statistical validation. Average FOMs with confidence intervals are reported, and powerful matched-sample statistics may be applied for comparing models and methods. Examples demonstrate the problems with single validation sets.
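
    A minimal sketch of the idea on toy data, using repeated stratified splits as a stand-in for Latin partitions: each object is validated exactly once per repeat, and the figure of merit is averaged across repeats with a confidence interval. The dataset and classifier are illustrative assumptions.

        import numpy as np
        from sklearn.datasets import load_iris
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import StratifiedKFold, cross_val_predict

        X, y = load_iris(return_X_y=True)
        accs = []
        for seed in range(30):  # 30 bootstrapped partitionings
            cv = StratifiedKFold(n_splits=4, shuffle=True, random_state=seed)
            pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=cv)
            accs.append(np.mean(pred == y))  # one figure of merit per repeat

        accs = np.array(accs)
        half = 1.96 * accs.std(ddof=1) / np.sqrt(len(accs))
        print(f"accuracy = {accs.mean():.3f} +/- {half:.3f} (95% CI)")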

  13. Estimation of absorption rate constant (ka) following oral administration by Wagner-Nelson, Loo-Riegelman, and statistical moments in the presence of a secondary peak.

    PubMed

    Mahmood, Iftekhar

    2004-01-01

    The objective of this study was to evaluate the performance of the Wagner-Nelson, Loo-Riegelman, and statistical moments methods in determining the absorption rate constant(s) in the presence of a secondary peak. These methods were also evaluated when there were two absorption rates without a secondary peak. Different sets of plasma concentration versus time data for a hypothetical drug following one- or two-compartment models were generated by simulation. The true ka was compared with the ka estimated by the Wagner-Nelson, Loo-Riegelman, and statistical moments methods. The results of this study indicate that the Wagner-Nelson, Loo-Riegelman, and statistical moments methods may be unsuitable for the estimation of absorption rate constants in the presence of a secondary peak or when absorption takes place with two absorption rates.
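
    For reference, the Wagner-Nelson calculation itself is short: the fraction absorbed is F(t) = (C(t) + ke·AUC(0-t)) / (ke·AUC(0-inf)), and for first-order absorption ka is the negative slope of ln(1 − F) versus t. A minimal sketch on simulated one-compartment data (all constants invented) so the estimate can be checked against the known truth:

        import numpy as np

        ka_true, ke, F_dose = 1.5, 0.2, 100.0  # 1/h, 1/h, F*Dose/V
        t = np.array([0.25, 0.5, 1, 1.5, 2, 3, 4, 6, 8, 12])
        C = F_dose * ka_true / (ka_true - ke) * (np.exp(-ke * t) - np.exp(-ka_true * t))

        # Trapezoidal AUC(0-t), including the triangle from t=0 to the first sample
        auc_t = np.concatenate([[0.0], np.cumsum(np.diff(t) * (C[1:] + C[:-1]) / 2)])
        auc_t += t[0] * C[0] / 2
        auc_inf = auc_t[-1] + C[-1] / ke       # log-tail extrapolation

        F = (C + ke * auc_t) / (ke * auc_inf)  # Wagner-Nelson fraction absorbed
        early = t < 3                          # absorption phase only
        slope = np.polyfit(t[early], np.log(1 - F[early]), 1)[0]
        print(f"ka estimate = {-slope:.2f} (true {ka_true})")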

  14. Statistical trends of episiotomy around the world: Comparative systematic review of changing practices.

    PubMed

    Clesse, Christophe; Lighezzolo-Alnot, Joëlle; De Lavergne, Sylvie; Hamlin, Sandrine; Scheffler, Michèle

    2018-06-01

    The authors' purpose in this article is to identify, review and interpret all publications on episiotomy rates worldwide. Based on criteria from the PRISMA guidelines, twenty databases were scrutinized. All studies that include national statistics related to episiotomy were selected, as well as studies presenting estimated data. Sixty-one papers were selected, with publication dates between 1995 and 2016. A static and dynamic analysis of all the results was carried out. The assumption of a decline in the number of episiotomies is discussed and confirmed, while noting that high rates of episiotomy remain today in less industrialized countries and East Asia. Finally, our analysis investigates the potential determinants that underlie the apparent statistical disparities.

  15. Volcano plots in analyzing differential expressions with mRNA microarrays.

    PubMed

    Li, Wentian

    2012-12-01

    A volcano plot displays unstandardized signal (e.g. log-fold-change) against noise-adjusted/standardized signal (e.g. the t-statistic or -log10(p-value) from the t-test). We review the basic and interactive use of the volcano plot and its crucial role in understanding the regularized t-statistic. The joint filtering gene selection criterion based on regularized statistics has a curved discriminant line in the volcano plot, as compared to the two perpendicular lines of the "double filtering" criterion. This review attempts to provide a unifying framework for discussions on alternative measures of differential expression, improved methods for estimating variance, and the visual display of a microarray analysis result. We also discuss the possibility of applying volcano plots to fields beyond microarrays.
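
    A minimal sketch of constructing a volcano plot from simulated expression data, with the "double filtering" thresholds drawn as perpendicular lines; group sizes, effect sizes, and thresholds are illustrative assumptions:

        import numpy as np
        import matplotlib.pyplot as plt
        from scipy import stats

        rng = np.random.default_rng(3)
        genes, n = 2000, 6
        a = rng.normal(8, 1, size=(genes, n))            # log2 expression, condition A
        b = rng.normal(8, 1, size=(genes, n))            # log2 expression, condition B
        b[:100] += rng.normal(1.5, 0.3, size=(100, n))   # spike in 100 true changes

        lfc = b.mean(axis=1) - a.mean(axis=1)            # log2 fold change (x-axis)
        p = stats.ttest_ind(b, a, axis=1).pvalue         # per-gene two-sample t-test

        plt.scatter(lfc, -np.log10(p), s=4, alpha=0.4)
        plt.axhline(-np.log10(0.01), ls="--")            # p-value filter line
        plt.axvline(1, ls="--"); plt.axvline(-1, ls="--")  # fold-change filter lines
        plt.xlabel("log2 fold change"); plt.ylabel("-log10 p (t-test)")
        plt.show()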

  16. Comparison of the Mahalanobis distance and Pearson's χ² statistic as measures of similarity of isotope patterns.

    PubMed

    Zamanzad Ghavidel, Fatemeh; Claesen, Jürgen; Burzykowski, Tomasz; Valkenborg, Dirk

    2014-02-01

    To extract a genuine peptide signal from a mass spectrum, an observed series of peaks at a particular mass can be compared with the isotope distribution expected for a peptide of that mass. To decide whether the observed series of peaks is similar to the isotope distribution, a similarity measure is needed. In this short communication, we investigate whether the Mahalanobis distance could be an alternative measure for the commonly employed Pearson's χ² statistic. We evaluate the performance of the two measures by using a controlled MALDI-TOF experiment. The results indicate that Pearson's χ² statistic has better discriminatory performance than the Mahalanobis distance and is a more robust measure.
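
    Both measures are easy to state in code. A minimal sketch on an invented isotope pattern, with the covariance for the Mahalanobis distance estimated from simulated replicate spectra:

        import numpy as np

        expected = np.array([0.52, 0.31, 0.12, 0.04, 0.01])  # theoretical isotope pattern
        observed = np.array([0.50, 0.33, 0.11, 0.05, 0.01])  # normalized observed peaks

        # Pearson's chi-squared statistic
        chi2 = np.sum((observed - expected) ** 2 / expected)

        # The Mahalanobis distance needs a covariance estimate, e.g. from replicates
        rng = np.random.default_rng(4)
        replicates = rng.normal(expected, 0.01, size=(40, 5))  # stand-in replicate spectra
        cov = np.cov(replicates, rowvar=False)
        diff = observed - expected
        d2 = diff @ np.linalg.solve(cov, diff)  # squared Mahalanobis distance

        print(f"Pearson chi2 = {chi2:.4f}, squared Mahalanobis distance = {d2:.2f}")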

  17. Comparison of the Mahalanobis Distance and Pearson's χ2 Statistic as Measures of Similarity of Isotope Patterns

    NASA Astrophysics Data System (ADS)

    Zamanzad Ghavidel, Fatemeh; Claesen, Jürgen; Burzykowski, Tomasz; Valkenborg, Dirk

    2014-02-01

    To extract a genuine peptide signal from a mass spectrum, an observed series of peaks at a particular mass can be compared with the isotope distribution expected for a peptide of that mass. To decide whether the observed series of peaks is similar to the isotope distribution, a similarity measure is needed. In this short communication, we investigate whether the Mahalanobis distance could be an alternative measure for the commonly employed Pearson's χ2 statistic. We evaluate the performance of the two measures by using a controlled MALDI-TOF experiment. The results indicate that Pearson's χ2 statistic has better discriminatory performance than the Mahalanobis distance and is a more robust measure.

  18. Accurate estimation of normal incidence absorption coefficients with confidence intervals using a scanning laser Doppler vibrometer

    NASA Astrophysics Data System (ADS)

    Vuye, Cedric; Vanlanduit, Steve; Guillaume, Patrick

    2009-06-01

    Optical measurements of the sound field inside a glass tube, near the material under test, can be used to estimate not only the reflection and absorption coefficients but also their confidence intervals. The sound fields are visualized using a scanning laser Doppler vibrometer (SLDV). In this paper the influence of different test signals on the quality of the results obtained with this technique is examined. The amount of data gathered during one measurement scan makes a thorough statistical analysis possible, yielding confidence intervals. A multi-sine constructed on the resonance frequencies of the test tube proves to be a very good alternative to the traditional periodic chirp: it yields data for multiple frequencies in one measurement, without the danger of a low signal-to-noise ratio. The variability analysis in this paper clearly shows the advantages of the proposed multi-sine over the periodic chirp. The measurement procedure and the statistical analysis are validated by measuring the reflection ratio at a closed end and comparing the results with the theoretical value. Results of tests on two building materials (an acoustic ceiling tile and linoleum) are presented and compared with supplier data.
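
    A minimal sketch of constructing such a multi-sine, with one tone per assumed tube resonance snapped onto the FFT grid and random phases to keep the crest factor moderate; the sample rate, tube length, and mode count are illustrative assumptions:

        import numpy as np

        fs, T = 51200, 1.0                  # sample rate (Hz) and signal period (s)
        t = np.arange(0.0, T, 1.0 / fs)
        c, L = 343.0, 1.0                   # speed of sound (m/s), tube length (m)
        resonances = np.array([k * c / (2 * L) for k in range(1, 11)])  # assumed modes
        resonances = np.round(resonances * T) / T   # snap each tone onto the FFT grid

        rng = np.random.default_rng(5)
        phases = rng.uniform(0, 2 * np.pi, resonances.size)  # randomized phases
        multisine = np.sum(np.sin(2 * np.pi * resonances[:, None] * t
                                  + phases[:, None]), axis=0)
        multisine /= np.abs(multisine).max()        # normalize amplitude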

  19. Does peer learning or higher levels of e-learning improve learning abilities? A randomized controlled trial

    PubMed Central

    Worm, Bjarne Skjødt; Jensen, Kenneth

    2013-01-01

    Background and aims The fast development of e-learning and social forums demands that we update our understanding of e-learning and peer learning. We aimed to investigate whether higher, pre-defined levels of e-learning or social interaction in web forums improved students' learning ability. Methods One hundred and twenty Danish medical students were randomized to six groups of 20 students each (eCases level 1, eCases level 2, eCases level 2+, eTextbook level 1, eTextbook level 2, and eTextbook level 2+). All students participated in a pre-test; the eCases groups participated in an interactive case-based e-learning program, while the eTextbook groups were presented with textbook material electronically. The 2+ groups were able to discuss the material between themselves in a web forum. The subject was head injury and the associated treatment and observation guidelines in the emergency room. Following the e-learning, all students completed a post-test. Pre- and post-tests each consisted of 25 questions randomly chosen from a pool of 50 different questions. Results All students concluded the study with comparable pre-test results. Students at level 2 (in both groups) improved statistically significantly compared with students at level 1 (p<0.05). There was no statistically significant difference between level 2 and level 2+. However, level 2+ was associated with statistically significantly greater student satisfaction than the rest of the groups (p<0.05). Conclusions This study applies a new way of comparing different types of e-learning, using a pre-defined level division and the possibility of peer learning. Our findings show that higher levels of e-learning do in fact provide better results when compared with the same type of e-learning at lower levels. While social interaction in web forums increases student satisfaction, learning ability does not seem to change. Both findings are relevant when designing new e-learning materials. PMID:24229729

  20. In silico identification and comparative analysis of differentially expressed genes in human and mouse tissues

    PubMed Central

    Pao, Sheng-Ying; Lin, Win-Li; Hwang, Ming-Jing

    2006-01-01

    Background Screening for differentially expressed genes on the genomic scale, and comparative analysis of the expression profiles of orthologous genes between species to study gene function and regulation, are becoming increasingly feasible. Expressed sequence tags (ESTs) are an excellent source of data for such studies using bioinformatic approaches because of the rich libraries and the tremendous amount of data now available in the public domain. However, any large-scale EST-based bioinformatics analysis must deal with the heterogeneous, and often ambiguous, tissue and organ terms used to describe EST libraries. Results To deal with the issue of tissue source, in this work we carefully screened and organized more than 8 million human and mouse ESTs into 157 human and 108 mouse tissue/organ categories, to which we applied an established statistical test, with different thresholds of the p value, to identify genes differentially expressed in different tissues. Further analysis of the tissue distribution and level of expression of human and mouse orthologous genes showed that tissue-specific orthologs tended to have more similar expression patterns than those lacking significant tissue specificity. On the other hand, a number of orthologs were found to have significant disparities in their expression profiles, hinting at novel functions, divergent regulation, or new ortholog relationships. Conclusion Comprehensive statistics on the tissue-specific expression of human and mouse genes were obtained in this very large-scale, EST-based analysis. These statistical results have been organized into a database, freely accessible at our website, for easy searching of human and mouse tissue-specific genes and for investigating gene expression profiles in the context of comparative genomics. Comparative analysis showed that, although highly tissue-specific genes tend to exhibit similar expression profiles in human and mouse, there are significant exceptions, indicating that orthologous genes, while sharing basic genomic properties, can result in distinct phenotypes. PMID:16626500
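
    The abstract does not name the statistical test it applied; a common stand-in for flagging tissue-enriched genes from EST counts is Fisher's exact test on a 2×2 table (counts for one gene in the tissue of interest versus all other tissues). A minimal sketch with invented counts:

        from scipy.stats import fisher_exact

        gene_in_tissue, gene_elsewhere = 45, 60             # EST counts for one gene
        total_in_tissue, total_elsewhere = 50_000, 950_000  # library sizes (invented)

        table = [[gene_in_tissue, gene_elsewhere],
                 [total_in_tissue - gene_in_tissue, total_elsewhere - gene_elsewhere]]
        odds, p = fisher_exact(table, alternative="greater")
        print(f"odds ratio = {odds:.1f}, p = {p:.3g}")      # small p -> tissue-enriched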
