Interrupted Time Series Versus Statistical Process Control in Quality Improvement Projects.
Andersson Hagiwara, Magnus; Andersson Gäre, Boel; Elg, Mattias
2016-01-01
To measure the effect of quality improvement interventions, it is appropriate to use analysis methods that measure data over time. Examples of such methods include statistical process control analysis and interrupted time series with segmented regression analysis. This article compares the use of statistical process control analysis and interrupted time series with segmented regression analysis for evaluating the longitudinal effects of quality improvement interventions, using as an example a study evaluating a computerized decision support system.
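As a rough illustration of the segmented-regression side of this comparison, the sketch below (not taken from the article) fits the standard interrupted-time-series model with an immediate level change and a slope change at the intervention; the simulated data, variable names, and use of statsmodels are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

# Illustrative monthly series: 24 pre-intervention and 24 post-intervention points.
rng = np.random.default_rng(0)
n_pre, n_post = 24, 24
time = np.arange(n_pre + n_post)                    # overall time trend
post = (time >= n_pre).astype(int)                  # level-change indicator
time_after = np.where(post == 1, time - n_pre, 0)   # slope change after intervention
y = 50 + 0.2 * time - 5 * post - 0.3 * time_after + rng.normal(0, 2, time.size)

# Segmented regression: y = b0 + b1*time + b2*post + b3*time_after + error
X = sm.add_constant(np.column_stack([time, post, time_after]))
fit = sm.OLS(y, X).fit()
print(fit.params)   # b2 = immediate level change, b3 = change in slope
```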
STATISTICAL SAMPLING AND DATA ANALYSIS
Research is being conducted to develop approaches to improve soil and sediment sampling techniques, measurement design and geostatistics, and data analysis via chemometric, environmetric, and robust statistical methods. Improvements in sampling contaminated soil and other hetero...
CORSSA: The Community Online Resource for Statistical Seismicity Analysis
Michael, Andrew J.; Wiemer, Stefan
2010-01-01
Statistical seismology is the application of rigorous statistical methods to earthquake science with the goal of improving our knowledge of how the earth works. Within statistical seismology there is a strong emphasis on the analysis of seismicity data in order to improve our scientific understanding of earthquakes and to improve the evaluation and testing of earthquake forecasts, earthquake early warning, and seismic hazards assessments. Given the societal importance of these applications, statistical seismology must be done well. Unfortunately, a lack of educational resources and available software tools makes it difficult for students and new practitioners to learn about this discipline. The goal of the Community Online Resource for Statistical Seismicity Analysis (CORSSA) is to promote excellence in statistical seismology by providing the knowledge and resources necessary to understand and implement the best practices, so that the reader can apply these methods to their own research. This introduction describes the motivation for and vision of CORSSA. It also describes its structure and contents.
ERIC Educational Resources Information Center
Petocz, Agnes; Newbery, Glenn
2010-01-01
Statistics education in psychology often falls disappointingly short of its goals. The increasing use of qualitative approaches in statistics education research has extended and enriched our understanding of statistical cognition processes, and thus facilitated improvements in statistical education and practices. Yet conceptual analysis, a…
Metz, Anneke M
2008-01-01
There is an increasing need for students in the biological sciences to build a strong foundation in quantitative approaches to data analyses. Although most science, engineering, and math field majors are required to take at least one statistics course, statistical analysis is poorly integrated into undergraduate biology course work, particularly at the lower-division level. Elements of statistics were incorporated into an introductory biology course, including a review of statistics concepts and an opportunity for students to perform statistical analysis in a biological context. Learning gains were measured with an 11-item statistics learning survey instrument developed for the course. Students showed a statistically significant 25% (p < 0.005) increase in statistics knowledge after completing introductory biology. Students improved their scores on the survey after completing introductory biology even if they had previously completed an introductory statistics course (9% improvement, p < 0.005). Students retested 1 yr after completing introductory biology showed no loss of their statistics knowledge as measured by this instrument, suggesting that the use of statistics in biology course work may aid long-term retention of statistics knowledge. No statistically significant differences in learning were detected between male and female students in the study.
Statistical process control methods allow the analysis and improvement of anesthesia care.
Fasting, Sigurd; Gisvold, Sven E
2003-10-01
Quality aspects of the anesthetic process are reflected in the rate of intraoperative adverse events. The purpose of this report is to illustrate how the quality of the anesthesia process can be analyzed using statistical process control methods, and exemplify how this analysis can be used for quality improvement. We prospectively recorded anesthesia-related data from all anesthetics for five years. The data included intraoperative adverse events, which were graded into four levels, according to severity. We selected four adverse events, representing important quality and safety aspects, for statistical process control analysis. These were: inadequate regional anesthesia, difficult emergence from general anesthesia, intubation difficulties and drug errors. We analyzed the underlying process using 'p-charts' for statistical process control. In 65,170 anesthetics we recorded adverse events in 18.3%; mostly of lesser severity. Control charts were used to define statistically the predictable normal variation in problem rate, and then used as a basis for analysis of the selected problems with the following results: Inadequate plexus anesthesia: stable process, but unacceptably high failure rate; Difficult emergence: unstable process, because of quality improvement efforts; Intubation difficulties: stable process, rate acceptable; Medication errors: methodology not suited because of low rate of errors. By applying statistical process control methods to the analysis of adverse events, we have exemplified how this allows us to determine if a process is stable, whether an intervention is required, and if quality improvement efforts have the desired effect.
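A minimal sketch of the p-chart calculation that underlies this kind of analysis, assuming monthly adverse-event counts and denominators; the numbers and variable names are illustrative, not the study's data.

```python
import numpy as np

# Illustrative monthly counts: adverse events and number of anaesthetics per month.
events = np.array([12, 15, 9, 14, 11, 18, 13, 10, 16, 12])
cases  = np.array([80, 95, 70, 90, 85, 100, 88, 75, 92, 84])

p = events / cases
p_bar = events.sum() / cases.sum()          # centre line: overall proportion

# p-chart limits vary with subgroup size: p_bar +/- 3*sqrt(p_bar*(1-p_bar)/n)
sigma = np.sqrt(p_bar * (1 - p_bar) / cases)
ucl = np.clip(p_bar + 3 * sigma, 0, 1)
lcl = np.clip(p_bar - 3 * sigma, 0, 1)

out_of_control = (p > ucl) | (p < lcl)      # special-cause signals
print(p_bar, list(zip(p.round(3), ucl.round(3), lcl.round(3), out_of_control)))
```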
NASA Astrophysics Data System (ADS)
Hendikawati, P.; Dewi, N. R.
2017-04-01
Statistics is needed in the data analysis process and has broad applications in daily life, so students must master statistical material well. The use of a Statistics textbook supported by ICT together with a portfolio assessment approach was expected to help students improve their mathematical connection skills. The subjects of this research were 30 student teachers taking Statistics courses. The results show that the use of a Statistics textbook supported by ICT together with a portfolio assessment approach can improve students' mathematical connection skills.
Improving Student Understanding of Spatial Ecology Statistics
ERIC Educational Resources Information Center
Hopkins, Robert, II; Alberts, Halley
2015-01-01
This activity is designed as a primer to teaching population dispersion analysis. The aim is to help improve students' spatial thinking and their understanding of how spatial statistic equations work. Students use simulated data to develop their own statistic and apply that equation to experimental behavioral data for Gambusia affinis (western…
Advanced statistical methods for improved data analysis of NASA astrophysics missions
NASA Technical Reports Server (NTRS)
Feigelson, Eric D.
1992-01-01
The investigators under this grant studied ways to improve the statistical analysis of astronomical data. They looked at existing techniques, the development of new techniques, and the production and distribution of specialized software to the astronomical community. Abstracts of nine papers that were produced are included, as well as brief descriptions of four software packages. The articles that are abstracted discuss analytical and Monte Carlo comparisons of six different linear least squares fits, a (second) paper on linear regression in astronomy, two reviews of public domain software for the astronomer, subsample and half-sample methods for estimating sampling distributions, a nonparametric estimation of survival functions under dependent competing risks, censoring in astronomical data due to nondetections, an astronomy survival analysis computer package called ASURV, and improving the statistical methodology of astronomical data analysis.
NASA Astrophysics Data System (ADS)
McCray, Wilmon Wil L., Jr.
The research was prompted by a need to conduct a study that assesses process improvement, quality management, and analytical techniques taught to students in undergraduate and graduate systems engineering and computing science (e.g., software engineering, computer science, and information technology) degree programs at U.S. colleges and universities during their academic training that can be applied to quantitatively manage processes for performance. Everyone involved in executing repeatable processes in the software and systems development lifecycle needs to become familiar with the concepts of quantitative management, statistical thinking, process improvement methods, and how they relate to process performance. Organizations are starting to embrace the de facto Software Engineering Institute (SEI) Capability Maturity Model Integration (CMMI) models as process improvement frameworks to improve business process performance. High maturity process areas in the CMMI model imply the use of analytical, statistical, and quantitative management techniques, and of process performance modeling, to identify and eliminate sources of variation, continually improve process performance, reduce cost, and predict future outcomes. The research study identifies and provides a detailed discussion of the gap analysis findings on process improvement and quantitative analysis techniques taught in U.S. university systems engineering and computing science degree programs, gaps that exist in the literature, and a comparison analysis which identifies the gaps between the SEI's "healthy ingredients" of a process performance model and the courses taught in U.S. university degree programs. The research also heightens awareness that academicians have conducted little research on applicable statistics and quantitative techniques that can be used to demonstrate high maturity as implied in the CMMI models. The research also includes a Monte Carlo simulation optimization model and dashboard that demonstrates the use of statistical methods, statistical process control, sensitivity analysis, and quantitative and optimization techniques to establish a baseline and predict future customer satisfaction index scores (outcomes). The American Customer Satisfaction Index (ACSI) model and industry benchmarks were used as a framework for the simulation model.
ERIC Educational Resources Information Center
Madhere, Serge
An analytic procedure, efficiency analysis, is proposed for improving the utility of quantitative program evaluation for decision making. The three features of the procedure are explained: (1) for statistical control, it adopts and extends the regression-discontinuity design; (2) for statistical inferences, it de-emphasizes hypothesis testing in…
The estimation of the measurement results with using statistical methods
NASA Astrophysics Data System (ADS)
Velychko, O.; Gordiyenko, T.
2015-02-01
A number of international standards and guides describe various statistical methods that apply to the management, control, and improvement of processes for the purpose of analyzing technical measurement results. This paper analyzes the international standards and guides on statistical methods for the estimation of measurement results and their recommendations for application in laboratories. To support this analysis, cause-and-effect (Ishikawa) diagrams concerning the application of statistical methods for the estimation of measurement results are constructed.
Randomization Procedures Applied to Analysis of Ballistic Data
1991-06-01
Technical Report BRL-TR-3245 by Malcolm S. Taylor and Barry A. Bodt, June 1991 (AD-A238 389). Keywords: data analysis; computationally intensive statistics; randomization tests; permutation tests; nonparametric statistics. From the report: any reasonable statistical procedure would fail to support the notion of improvement of dynamic over standard indexing based on these data.
Comparing Visual and Statistical Analysis of Multiple Baseline Design Graphs.
Wolfe, Katie; Dickenson, Tammiee S; Miller, Bridget; McGrath, Kathleen V
2018-04-01
A growing number of statistical analyses are being developed for single-case research. One important factor in evaluating these methods is the extent to which each corresponds to visual analysis. Few studies have compared statistical and visual analysis, and information about more recently developed statistics is scarce. Therefore, our purpose was to evaluate the agreement between visual analysis and four statistical analyses: improvement rate difference (IRD); Tau-U; Hedges, Pustejovsky, Shadish (HPS) effect size; and between-case standardized mean difference (BC-SMD). Results indicate that IRD and BC-SMD had the strongest overall agreement with visual analysis. Although Tau-U had strong agreement with visual analysis on raw values, it had poorer agreement when those values were dichotomized to represent the presence or absence of a functional relation. Overall, visual analysis appeared to be more conservative than statistical analysis, but further research is needed to evaluate the nature of these disagreements.
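As a point of reference for the nonoverlap statistics discussed here, the sketch below computes the basic pairwise nonoverlap Tau between a baseline and an intervention phase; Tau-U's baseline-trend correction and the other indices (IRD, HPS, BC-SMD) are not implemented, and the data are invented for illustration.

```python
import numpy as np

# Baseline (A) and intervention (B) phase observations for one case (illustrative).
A = np.array([3, 4, 2, 5, 3, 4])
B = np.array([6, 7, 5, 8, 7, 9])

# Basic nonoverlap Tau: proportion of A-B pairs where B exceeds A minus the
# proportion where A exceeds B, over all pairwise comparisons.
diffs = B[None, :] - A[:, None]               # all pairwise B - A differences
pos, neg = (diffs > 0).sum(), (diffs < 0).sum()
tau = (pos - neg) / diffs.size
print(tau)                                    # 1.0 indicates complete nonoverlap
```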
ERIC Educational Resources Information Center
Larson-Hall, Jenifer; Herrington, Richard
2010-01-01
In this article we introduce language acquisition researchers to two broad areas of applied statistics that can improve the way data are analyzed. First we argue that visual summaries of information are as vital as numerical ones, and suggest ways to improve them. Specifically, we recommend choosing boxplots over barplots and adding locally…
Statistical analysis of RHIC beam position monitors performance
NASA Astrophysics Data System (ADS)
Calaga, R.; Tomás, R.
2004-04-01
A detailed statistical analysis of beam position monitors (BPM) performance at RHIC is a critical factor in improving regular operations and future runs. Robust identification of malfunctioning BPMs plays an important role in any orbit or turn-by-turn analysis. Singular value decomposition and Fourier transform methods, which have evolved as powerful numerical techniques in signal processing, will aid in such identification from BPM data. This is the first attempt at RHIC to use a large set of data to statistically enhance the capability of these two techniques and determine BPM performance. A comparison from run 2003 data shows striking agreement between the two methods and hence can be used to improve BPM functioning at RHIC and possibly other accelerators.
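A hedged sketch of how SVD can flag a malfunctioning BPM in turn-by-turn data: columns whose variance is poorly explained by the leading common modes are suspect. The synthetic data, the number of retained modes, and the ranking logic are illustrative assumptions, not the RHIC analysis itself.

```python
import numpy as np

rng = np.random.default_rng(1)
n_turns, n_bpms = 1000, 40

# Illustrative turn-by-turn data: a common betatron-like oscillation plus noise,
# with one BPM (index 7) replaced by uncorrelated noise to mimic a fault.
turns = np.arange(n_turns)
signal = np.sin(0.31 * 2 * np.pi * turns)[:, None] * rng.uniform(0.5, 1.5, n_bpms)
data = signal + 0.05 * rng.normal(size=(n_turns, n_bpms))
data[:, 7] = rng.normal(size=n_turns)

# SVD of the mean-subtracted matrix; leading modes capture the collective motion.
centered = data - data.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
k = 2                                          # number of "physical" modes kept
residual = centered - (U[:, :k] * s[:k]) @ Vt[:k]
noise_fraction = residual.std(axis=0) / data.std(axis=0)

print(np.argsort(noise_fraction)[::-1][:3])    # BPMs least explained by common modes
```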
Multiple outcomes are often measured on each experimental unit in toxicology experiments. These multiple observations typically imply the existence of correlation between endpoints, and a statistical analysis that incorporates it may result in improved inference. When both disc...
ERIC Educational Resources Information Center
Maric, Marija; Wiers, Reinout W.; Prins, Pier J. M.
2012-01-01
Despite guidelines and repeated calls from the literature, statistical mediation analysis in youth treatment outcome research is rare. Even more concerning is that many studies that "have" reported mediation analyses do not fulfill basic requirements for mediation analysis, providing inconclusive data and clinical implications. As a result, after…
Zhang, Harrison G; Ying, Gui-Shuang
2018-02-09
The aim of this study is to evaluate the current practice of statistical analysis of eye data in clinical science papers published in the British Journal of Ophthalmology (BJO) and to determine whether the practice of statistical analysis has improved in the past two decades. All clinical science papers (n=125) published in BJO in January-June 2017 were reviewed for their statistical analysis approaches for analysing the primary ocular measure. We compared our findings to the results from a previous paper that reviewed BJO papers in 1995. Of 112 papers eligible for analysis, half of the studies analysed the data at an individual level because of the nature of observation, 16 (14%) studies analysed data from one eye only, 36 (32%) studies analysed data from both eyes at the ocular level, one study (1%) analysed the overall summary of ocular findings per individual, and three (3%) studies used a paired comparison. Among studies with data available from both eyes, 50 (89%) of 56 papers in 2017 did not analyse data from both eyes or ignored the intereye correlation, compared with 60 (90%) of 67 papers in 1995 (P=0.96). Among studies that analysed data from both eyes at an ocular level, 33 (92%) of 36 studies completely ignored the intereye correlation in 2017, compared with 16 (89%) of 18 studies in 1995 (P=0.40). A majority of studies did not analyse the data properly when data from both eyes were available. The practice of statistical analysis did not improve in the past two decades. Collaborative efforts should be made in the vision research community to improve the practice of statistical analysis for ocular data. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Linnorm: improved statistical analysis for single cell RNA-seq expression data
Yip, Shun H.; Wang, Panwen; Kocher, Jean-Pierre A.; Sham, Pak Chung
2017-01-01
Linnorm is a novel normalization and transformation method for the analysis of single cell RNA sequencing (scRNA-seq) data. Linnorm is developed to remove technical noises and simultaneously preserve biological variations in scRNA-seq data, such that existing statistical methods can be improved. Using real scRNA-seq data, we compared Linnorm with existing normalization methods, including NODES, SAMstrt, SCnorm, scran, DESeq and TMM. Linnorm shows advantages in speed, technical noise removal and preservation of cell heterogeneity, which can improve existing methods in the discovery of novel subtypes, pseudo-temporal ordering of cells, clustering analysis, etc. Linnorm also performs better than existing DEG analysis methods, including BASiCS, NODES, SAMstrt, Seurat and DESeq2, in false positive rate control and accuracy. PMID:28981748
Can Percentiles Replace Raw Scores in the Statistical Analysis of Test Data?
ERIC Educational Resources Information Center
Zimmerman, Donald W.; Zumbo, Bruno D.
2005-01-01
Educational and psychological testing textbooks typically warn of the inappropriateness of performing arithmetic operations and statistical analysis on percentiles instead of raw scores. This seems inconsistent with the well-established finding that transforming scores to ranks and using nonparametric methods often improves the validity and power…
[The main directions of reforming the service of medical statistics in Ukraine].
Golubchykov, Mykhailo V; Orlova, Nataliia M; Bielikova, Inna V
2018-01-01
Introduction: The implementation of new methods of information support for managerial decision-making should enable effective health system reform and create conditions for improving the quality of operational management, reasonable planning of medical care, and more efficient use of system resources. Reform of the Medical Statistics Service of Ukraine should be considered only in the context of the reform of the entire health system. The aim: This work analyzes the current situation and justifies the main directions for reforming the Medical Statistics Service of Ukraine. Material and methods: The work uses a range of methods: content analysis, bibliosemantic analysis, and a systematic approach. The information base of the research comprised WHO strategic and program documents and data from the Medical Statistics Center of the Ministry of Health of Ukraine. Review: The Medical Statistics Service of Ukraine has a complete and effective structure, headed by the State Institution "Medical Statistics Center of the Ministry of Health of Ukraine." This institution reports on behalf of the Ministry of Health of Ukraine to the State Statistical Service of Ukraine, the WHO European Office, and other international organizations. An analysis of the current situation showed that, to achieve the goal of reform, it is necessary to: improve the system of statistical indicators for an adequate assessment of the performance of health institutions, including in the economic aspect; create a well-developed medical-statistical base for administrative territories; change the existing technologies for the formation of information resources; strengthen the material and technical base of the structural units of the Medical Statistics Service; improve the system of training and retraining of personnel for the medical statistics service; develop international cooperation in the methodology and practice of medical statistics, implementing internationally accepted methods for collecting, processing, analyzing, and disseminating medical-statistical information; and create a medical-statistical service that is adapted to the specifics of market relations in health care and is flexible and sensitive to changes in international methodologies and standards. Conclusions: Medical statistics data are the basis for managerial decisions by managers at all levels of health care. Reform of the Medical Statistics Service of Ukraine should be considered only in the context of the reform of the entire health system. The main directions of the reform of the medical statistics service in Ukraine are: the introduction of information technologies, improved training of personnel for the service, improved material and technical equipment, and the maximum reuse of the data obtained, which requires the unification of primary data and of the system of indicators. The most difficult area is the formation of information funds and the introduction of modern information technologies.
Kaneta, Tomohiro; Nakatsuka, Masahiro; Nakamura, Kei; Seki, Takashi; Yamaguchi, Satoshi; Tsuboi, Masahiro; Meguro, Kenichi
2016-01-01
SPECT is an important diagnostic tool for dementia. Recently, statistical analysis of SPECT has been commonly used for dementia research. In this study, we evaluated the accuracy of visual SPECT evaluation and/or statistical analysis for the diagnosis (Dx) of Alzheimer disease (AD) and other forms of dementia in our community-based study "The Osaki-Tajiri Project." Eighty-nine consecutive outpatients with dementia were enrolled and underwent brain perfusion SPECT with 99mTc-ECD. Diagnostic accuracy of SPECT was tested using 3 methods: visual inspection (SPECT Dx), automated diagnostic tool using statistical analysis with easy Z-score imaging system (eZIS Dx), and visual inspection plus eZIS (integrated Dx). Integrated Dx showed the highest sensitivity, specificity, and accuracy, whereas eZIS was the second most accurate method. We also observed that a higher than expected rate of SPECT images indicated false-negative cases of AD. Among these, 50% showed hypofrontality and were diagnosed as frontotemporal lobar degeneration. These cases typically showed regional "hot spots" in the primary sensorimotor cortex (ie, a sensorimotor hot spot sign), which we determined were associated with AD rather than frontotemporal lobar degeneration. We concluded that the diagnostic abilities were improved by the integrated use of visual assessment and statistical analysis. In addition, the detection of a sensorimotor hot spot sign was useful to detect AD when hypofrontality is present and improved the ability to properly diagnose AD.
Improved score statistics for meta-analysis in single-variant and gene-level association studies.
Yang, Jingjing; Chen, Sai; Abecasis, Gonçalo
2018-06-01
Meta-analysis is now an essential tool for genetic association studies, allowing them to combine large studies and greatly accelerating the pace of genetic discovery. Although the standard meta-analysis methods perform equivalently to the more cumbersome joint analysis under ideal settings, they result in substantial power loss under unbalanced settings with various case-control ratios. Here, we investigate the power loss problem of the standard meta-analysis methods for unbalanced studies, and further propose novel meta-analysis methods performing equivalently to the joint analysis under both balanced and unbalanced settings. We derive improved meta-score-statistics that can accurately approximate the joint-score-statistics with combined individual-level data, for both linear and logistic regression models, with and without covariates. In addition, we propose a novel approach to adjust for population stratification by correcting for known population structures through minor allele frequencies. In the simulated gene-level association studies under unbalanced settings, our method recovered up to 85% power loss caused by the standard methods. We further showed the power gain of our methods in gene-level tests with 26 unbalanced studies of age-related macular degeneration. In addition, we took the meta-analysis of three unbalanced studies of type 2 diabetes as an example to discuss the challenges of meta-analyzing multi-ethnic samples. In summary, our improved meta-score-statistics with corrections for population stratification can be used to construct both single-variant and gene-level association studies, providing a useful framework for ensuring well-powered, convenient, cross-study analyses. © 2018 WILEY PERIODICALS, INC.
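For context, a minimal sketch of the standard score-based meta-analysis that this work improves upon: per-study score statistics and their variances are summed and compared to a chi-square distribution. The paper's corrections for unbalanced case-control ratios and population stratification are not shown, and the numbers are illustrative.

```python
import numpy as np
from scipy import stats

# Illustrative per-study score statistics U_k and their variances V_k,
# e.g. from score tests of the same variant in each contributing study.
U = np.array([3.1, -1.2, 4.5, 2.0])
V = np.array([8.0, 5.5, 10.2, 6.1])

# Standard score-based meta-analysis: sum scores and variances across studies.
U_meta, V_meta = U.sum(), V.sum()
chi2 = U_meta**2 / V_meta
p_value = stats.chi2.sf(chi2, df=1)
print(chi2, p_value)
```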
Rossi, Pierre; Gillet, François; Rohrbach, Emmanuelle; Diaby, Nouhou; Holliger, Christof
2009-01-01
The variability of terminal restriction fragment polymorphism analysis applied to complex microbial communities was assessed statistically. Recent technological improvements were implemented in the successive steps of the procedure, resulting in a standardized procedure which provided a high level of reproducibility. PMID:19749066
Statistical process control: separating signal from noise in emergency department operations.
Pimentel, Laura; Barrueto, Fermin
2015-05-01
Statistical process control (SPC) is a visually appealing and statistically rigorous methodology very suitable to the analysis of emergency department (ED) operations. We demonstrate that the control chart is the primary tool of SPC; it is constructed by plotting data measuring the key quality indicators of operational processes in rationally ordered subgroups such as units of time. Control limits are calculated using formulas reflecting the variation in the data points from one another and from the mean. SPC allows managers to determine whether operational processes are controlled and predictable. We review why the moving range chart is most appropriate for use in the complex ED milieu, how to apply SPC to ED operations, and how to determine when performance improvement is needed. SPC is an excellent tool for operational analysis and quality improvement for these reasons: 1) control charts make large data sets intuitively coherent by integrating statistical and visual descriptions; 2) SPC provides analysis of process stability and capability rather than simple comparison with a benchmark; 3) SPC allows distinction between special cause variation (signal), indicating an unstable process requiring action, and common cause variation (noise), reflecting a stable process; and 4) SPC keeps the focus of quality improvement on process rather than individual performance. Because data have no meaning apart from their context, and every process generates information that can be used to improve it, we contend that SPC should be seriously considered for driving quality improvement in emergency medicine. Copyright © 2015 Elsevier Inc. All rights reserved.
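A minimal sketch of the individuals and moving-range (XmR) chart computation favored here for ED data, using the conventional 2.66 and 3.267 constants; the daily values and the choice of metric are illustrative assumptions.

```python
import numpy as np

# Illustrative daily median ED length-of-stay values (minutes).
x = np.array([182, 175, 190, 201, 188, 179, 195, 210, 185, 192, 230, 187])

mr = np.abs(np.diff(x))            # moving ranges between consecutive points
x_bar, mr_bar = x.mean(), mr.mean()

# Standard XmR constants: limits at x_bar +/- 2.66 * mean moving range.
ucl_x, lcl_x = x_bar + 2.66 * mr_bar, x_bar - 2.66 * mr_bar
ucl_mr = 3.267 * mr_bar            # upper limit for the moving-range chart

signals = (x > ucl_x) | (x < lcl_x)   # special-cause variation on the X chart
print(round(x_bar, 1), round(ucl_x, 1), round(lcl_x, 1), signals.nonzero()[0])
```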
Barber, Julie A; Thompson, Simon G
1998-01-01
Objective: To review critically the statistical methods used for health economic evaluations in randomised controlled trials where an estimate of cost is available for each patient in the study. Design: Survey of published randomised trials including an economic evaluation with cost values suitable for statistical analysis; 45 such trials published in 1995 were identified from Medline. Main outcome measures: The use of statistical methods for cost data was assessed in terms of the descriptive statistics reported, use of statistical inference, and whether the reported conclusions were justified. Results: Although all 45 trials reviewed apparently had cost data for each patient, only 9 (20%) reported adequate measures of variability for these data and only 25 (56%) gave results of statistical tests or a measure of precision for the comparison of costs between the randomised groups. Only 16 (36%) of the articles gave conclusions which were justified on the basis of results presented in the paper. No paper reported sample size calculations for costs. Conclusions: The analysis and interpretation of cost data from published trials reveal a lack of statistical awareness. Strong and potentially misleading conclusions about the relative costs of alternative therapies have often been reported in the absence of supporting statistical evidence. Improvements in the analysis and reporting of health economic assessments are urgently required. Health economic guidelines need to be revised to incorporate more detailed statistical advice. Key messages: Health economic evaluations required for important healthcare policy decisions are often carried out in randomised controlled trials. A review of such published economic evaluations assessed whether statistical methods for cost outcomes have been appropriately used and interpreted. Few publications presented adequate descriptive information for costs or performed appropriate statistical analyses. In at least two thirds of the papers, the main conclusions regarding costs were not justified. The analysis and reporting of health economic assessments within randomised controlled trials urgently need improving. PMID:9794854
Introduction of statistical information in a syntactic analyzer for document image recognition
NASA Astrophysics Data System (ADS)
Maroneze, André O.; Coüasnon, Bertrand; Lemaitre, Aurélie
2011-01-01
This paper presents an improvement to document layout analysis systems, offering a possible solution to Sayre's paradox (which states that an element "must be recognized before it can be segmented; and it must be segmented before it can be recognized"). This improvement, based on stochastic parsing, allows integration of statistical information, obtained from recognizers, during syntactic layout analysis. We present how this fusion of numeric and symbolic information in a feedback loop can be applied to syntactic methods to improve document description expressiveness. To limit combinatorial explosion during exploration of solutions, we devised an operator that allows optional activation of the stochastic parsing mechanism. Our evaluation on 1250 handwritten business letters shows this method allows the improvement of global recognition scores.
McCormick, Frank; Gupta, Anil; Bruce, Ben; Harris, Josh; Abrams, Geoff; Wilson, Hillary; Hussey, Kristen; Cole, Brian J.
2014-01-01
Purpose: The purpose of this study was to measure and compare the subjective, objective, and radiographic healing outcomes of single-row (SR), double-row (DR), and transosseous equivalent (TOE) suture techniques for arthroscopic rotator cuff repair. Materials and Methods: A retrospective comparative analysis of arthroscopic rotator cuff repairs by one surgeon from 2004 to 2010 at a minimum 2-year follow-up was performed. Cohorts were matched for age, sex, and tear size. Subjective outcome variables included ASES, Constant, SST, UCLA, and SF-12 scores. Objective outcome variables included strength and active range of motion (ROM). Radiographic healing was assessed by magnetic resonance imaging (MRI). Statistical analysis was performed using analysis of variance (ANOVA), Mann-Whitney and Kruskal-Wallis tests, and the Fisher exact probability test, with significance set at P < 0.05. Results: Sixty-three patients completed the study requirements (20 SR, 21 DR, 22 TOE). There was a clinically and statistically significant improvement in outcomes with all repair techniques (ASES mean improvement, P < 0.0001). The mean final ASES scores were: SR 83 (SD 21.4); DR 87 (SD 18.2); TOE 87 (SD 13.2); (P = 0.73). There was a statistically significant improvement in strength for each repair technique (P < 0.001). There was no significant difference between techniques across all secondary outcome assessments: ASES improvement, Constant, SST, UCLA, SF-12, ROM, strength, and MRI re-tear rates. There was a decrease in re-tear rates from single-row (22%) to double-row (18%) to transosseous equivalent (11%); however, this difference was not statistically significant (P = 0.6). Conclusions: Compared to preoperatively, arthroscopic rotator cuff repair, using SR, DR, or TOE techniques, yielded a clinically and statistically significant improvement in subjective and objective outcomes at a minimum 2-year follow-up. Level of Evidence: Therapeutic level 3. PMID:24926159
Modified Distribution-Free Goodness-of-Fit Test Statistic.
Chun, So Yeon; Browne, Michael W; Shapiro, Alexander
2018-03-01
Covariance structure analysis and its structural equation modeling extensions have become one of the most widely used methodologies in social sciences such as psychology, education, and economics. An important issue in such analysis is to assess the goodness of fit of a model under analysis. One of the most popular test statistics used in covariance structure analysis is the asymptotically distribution-free (ADF) test statistic introduced by Browne (Br J Math Stat Psychol 37:62-83, 1984). The ADF statistic can be used to test models without any specific distribution assumption (e.g., multivariate normal distribution) of the observed data. Despite its advantage, it has been shown in various empirical studies that unless sample sizes are extremely large, this ADF statistic could perform very poorly in practice. In this paper, we provide a theoretical explanation for this phenomenon and further propose a modified test statistic that improves the performance in samples of realistic size. The proposed statistic deals with the possible ill-conditioning of the involved large-scale covariance matrices.
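For orientation, a hedged reminder of how the ADF fit function and test statistic are usually written (following Browne, 1984); the modified statistic proposed in this paper is not reproduced here.

```latex
\[
F_{\mathrm{ADF}}(\theta) = \big(s - \sigma(\theta)\big)^{\top} \hat{W}^{-1} \big(s - \sigma(\theta)\big),
\qquad
T_{\mathrm{ADF}} = (N - 1)\,\min_{\theta} F_{\mathrm{ADF}}(\theta)
\;\overset{a}{\sim}\; \chi^{2}_{\,p(p+1)/2 - q},
\]
```

where s stacks the p(p+1)/2 non-duplicated sample variances and covariances, sigma(theta) gives the model-implied values under the q-parameter structural model, and W-hat is a consistent estimate of the asymptotic covariance matrix of s computed from fourth-order sample moments without a normality assumption.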
NASA Astrophysics Data System (ADS)
Lin, Shu; Wang, Rui; Xia, Ning; Li, Yongdong; Liu, Chunliang
2018-01-01
Statistical multipactor theories are critical prediction approaches for multipactor breakdown determination. However, these approaches still require a negotiation between the calculation efficiency and accuracy. This paper presents an improved stationary statistical theory for efficient threshold analysis of two-surface multipactor. A general integral equation over the distribution function of the electron emission phase with both the single-sided and double-sided impacts considered is formulated. The modeling results indicate that the improved stationary statistical theory can not only obtain equally good accuracy of multipactor threshold calculation as the nonstationary statistical theory, but also achieve high calculation efficiency concurrently. By using this improved stationary statistical theory, the total time consumption in calculating full multipactor susceptibility zones of parallel plates can be decreased by as much as a factor of four relative to the nonstationary statistical theory. It also shows that the effect of single-sided impacts is indispensable for accurate multipactor prediction of coaxial lines and also more significant for the high order multipactor. Finally, the influence of secondary emission yield (SEY) properties on the multipactor threshold is further investigated. It is observed that the first cross energy and the energy range between the first cross and the SEY maximum both play a significant role in determining the multipactor threshold, which agrees with the numerical simulation results in the literature.
Experimental toxicology: Issues of statistics, experimental design, and replication.
Briner, Wayne; Kirwan, Jeral
2017-01-01
The difficulty of replicating experiments has drawn considerable attention. Issues with replication occur for a variety of reasons ranging from experimental design to laboratory errors to inappropriate statistical analysis. Here we review a variety of guidelines for statistical analysis, design, and execution of experiments in toxicology. In general, replication can be improved by using hypothesis driven experiments with adequate sample sizes, randomization, and blind data collection techniques. Copyright © 2016 Elsevier B.V. All rights reserved.
[Review of research design and statistical methods in Chinese Journal of Cardiology].
Zhang, Li-jun; Yu, Jin-ming
2009-07-01
To evaluate the research design and the use of statistical methods in the Chinese Journal of Cardiology, we reviewed the research design and statistical methods in all of the original papers published in the Chinese Journal of Cardiology from December 2007 to November 2008. The most frequently used research designs were cross-sectional design (34%), prospective design (21%) and experimental design (25%). Among all of the articles, 49 (25%) used incorrect statistical methods, 29 (15%) lacked some form of statistical analysis, and 23 (12%) had inconsistencies in the description of methods. There were significant differences between different statistical methods (P < 0.001). The rate of correct use of multifactor analysis was low, and repeated measurement data were not analysed with repeated measures methods. Many problems exist in the Chinese Journal of Cardiology. Better research design and correct use of statistical methods are still needed. Stricter review by statisticians and epidemiologists is also required to improve the quality of the literature.
Generalization of Entropy Based Divergence Measures for Symbolic Sequence Analysis
Ré, Miguel A.; Azad, Rajeev K.
2014-01-01
Entropy based measures have been frequently used in symbolic sequence analysis. A symmetrized and smoothed form of Kullback-Leibler divergence or relative entropy, the Jensen-Shannon divergence (JSD), is of particular interest because of its sharing properties with families of other divergence measures and its interpretability in different domains including statistical physics, information theory and mathematical statistics. The uniqueness and versatility of this measure arise because of a number of attributes including generalization to any number of probability distributions and association of weights to the distributions. Furthermore, its entropic formulation allows its generalization in different statistical frameworks, such as, non-extensive Tsallis statistics and higher order Markovian statistics. We revisit these generalizations and propose a new generalization of JSD in the integrated Tsallis and Markovian statistical framework. We show that this generalization can be interpreted in terms of mutual information. We also investigate the performance of different JSD generalizations in deconstructing chimeric DNA sequences assembled from bacterial genomes including that of E. coli, S. enterica typhi, Y. pestis and H. influenzae. Our results show that the JSD generalizations bring in more pronounced improvements when the sequences being compared are from phylogenetically proximal organisms, which are often difficult to distinguish because of their compositional similarity. While small but noticeable improvements were observed with the Tsallis statistical JSD generalization, relatively large improvements were observed with the Markovian generalization. In contrast, the proposed Tsallis-Markovian generalization yielded more pronounced improvements relative to the Tsallis and Markovian generalizations, specifically when the sequences being compared arose from phylogenetically proximal organisms. PMID:24728338
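For reference, a small sketch of the plain weighted two-distribution Jensen-Shannon divergence discussed here; the Tsallis and Markovian generalizations proposed in the paper are not implemented, and the example distributions are invented.

```python
import numpy as np

def jensen_shannon(p, q, w1=0.5, w2=0.5):
    """Weighted Jensen-Shannon divergence between two discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    m = w1 * p + w2 * q          # weighted mixture distribution

    def h(d):                    # Shannon entropy in bits, skipping zero terms
        d = d[d > 0]
        return -(d * np.log2(d)).sum()

    return h(m) - w1 * h(p) - w2 * h(q)

# Example: nucleotide compositions of two sequence windows.
print(jensen_shannon([0.30, 0.20, 0.20, 0.30], [0.25, 0.25, 0.25, 0.25]))
```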
ERIC Educational Resources Information Center
Kennedy, Kate; Peters, Mary; Thomas, Mike
2012-01-01
Value-added analysis is the most robust, statistically significant method available for helping educators quantify student progress over time. This powerful tool also reveals tangible strategies for improving instruction. Built around the work of Battelle for Kids, this book provides a field-tested continuous improvement model for using…
Chung, Dongjun; Kim, Hang J; Zhao, Hongyu
2017-02-01
Genome-wide association studies (GWAS) have identified tens of thousands of genetic variants associated with hundreds of phenotypes and diseases, which have provided clinical and medical benefits to patients with novel biomarkers and therapeutic targets. However, identification of risk variants associated with complex diseases remains challenging as they are often affected by many genetic variants with small or moderate effects. There has been accumulating evidence suggesting that different complex traits share common risk basis, namely pleiotropy. Recently, several statistical methods have been developed to improve statistical power to identify risk variants for complex traits through a joint analysis of multiple GWAS datasets by leveraging pleiotropy. While these methods were shown to improve statistical power for association mapping compared to separate analyses, they are still limited in the number of phenotypes that can be integrated. In order to address this challenge, in this paper, we propose a novel statistical framework, graph-GPA, to integrate a large number of GWAS datasets for multiple phenotypes using a hidden Markov random field approach. Application of graph-GPA to a joint analysis of GWAS datasets for 12 phenotypes shows that graph-GPA improves statistical power to identify risk variants compared to statistical methods based on smaller number of GWAS datasets. In addition, graph-GPA also promotes better understanding of genetic mechanisms shared among phenotypes, which can potentially be useful for the development of improved diagnosis and therapeutics. The R implementation of graph-GPA is currently available at https://dongjunchung.github.io/GGPA/.
Duncan, Fiona; Haigh, Carol
2013-10-01
To explore and improve the quality of continuous epidural analgesia for pain relief using Statistical Process Control tools. Measuring the quality of pain management interventions is complex. Intermittent audits do not accurately capture the results of quality improvement initiatives. The failure rate for one intervention, epidural analgesia, is approximately 30% in everyday practice, so it is an important area for improvement. Continuous measurement and analysis are required to understand the multiple factors involved in providing effective pain relief. Design: process control and quality improvement. Routine prospectively acquired data collection started in 2006. Patients were asked about their pain and side effects of treatment. Statistical Process Control methods were applied for continuous data analysis. A multidisciplinary group worked together to identify reasons for variation in the data and instigated ideas for improvement. The key measure for improvement was a reduction in the percentage of patients with an epidural in severe pain. The baseline control charts illustrated the recorded variation in the rate of several processes and outcomes for 293 surgical patients. The mean visual analogue pain score (VNRS) was four. There was no special cause variation when data were stratified by surgeons, clinical area or patients who had experienced pain before surgery. Fifty-seven per cent of patients were hypotensive on the first day after surgery. We were able to demonstrate a significant improvement in the failure rate of epidurals as the project continued with quality improvement interventions. Statistical Process Control is a useful tool for measuring and improving the quality of pain management. The applications of Statistical Process Control methods offer the potential to learn more about the process of change and outcomes in an Acute Pain Service both locally and nationally. We have been able to develop measures for improvement and benchmarking in routine care that have led to the establishment of a national pain registry. © 2013 Blackwell Publishing Ltd.
NASA Astrophysics Data System (ADS)
Yousif, Dilon
The purpose of this study was to improve the Quality Assurance (QA) System at the Nemak Windsor Aluminum Plant (WAP). The project used the Six Sigma method based on Define, Measure, Analyze, Improve, and Control (DMAIC). Analysis of the in-process melt at WAP was based on chemical, thermal, and mechanical testing. The control limits for the W319 Al Alloy were statistically recalculated using the composition measured under stable conditions. The "Chemistry Viewer" software was developed for statistical analysis of alloy composition. This software features the Silicon Equivalency (SiBQ) developed by the IRC. The Melt Sampling Device (MSD) was designed and evaluated at WAP to overcome traditional sampling limitations. The Thermal Analysis "Filters" software was developed for cooling curve analysis of the 3XX Al Alloy(s) using IRC techniques. The impact of low-melting-point impurities on the start of melting was evaluated using the Universal Metallurgical Simulator and Analyzer (UMSA).
Mathysen, Danny G P; Aclimandos, Wagih; Roelant, Ella; Wouters, Kristien; Creuzot-Garcher, Catherine; Ringens, Peter J; Hawlina, Marko; Tassignon, Marie-José
2013-11-01
To investigate whether introduction of item-response theory (IRT) analysis, in parallel to the 'traditional' statistical analysis methods available for performance evaluation of multiple T/F items as used in the European Board of Ophthalmology Diploma (EBOD) examination, has proved beneficial, and secondly, to study whether the overall assessment performance of the current written part of EBOD is sufficiently high (KR-20≥ 0.90) to be kept as examination format in future EBOD editions. 'Traditional' analysis methods for individual MCQ item performance comprise P-statistics, Rit-statistics and item discrimination, while overall reliability is evaluated through KR-20 for multiple T/F items. The additional set of statistical analysis methods for the evaluation of EBOD comprises mainly IRT analysis. These analysis techniques are used to monitor whether the introduction of negative marking for incorrect answers (since EBOD 2010) has a positive influence on the statistical performance of EBOD as a whole and its individual test items in particular. Item-response theory analysis demonstrated that item performance parameters should not be evaluated individually, but should be related to one another. Before the introduction of negative marking, the overall EBOD reliability (KR-20) was good though with room for improvement (EBOD 2008: 0.81; EBOD 2009: 0.78). After the introduction of negative marking, the overall reliability of EBOD improved significantly (EBOD 2010: 0.92; EBOD 2011:0.91; EBOD 2012: 0.91). Although many statistical performance parameters are available to evaluate individual items, our study demonstrates that the overall reliability assessment remains the only crucial parameter to be evaluated allowing comparison. While individual item performance analysis is worthwhile to undertake as secondary analysis, drawing final conclusions seems to be more difficult. Performance parameters need to be related, as shown by IRT analysis. Therefore, IRT analysis has proved beneficial for the statistical analysis of EBOD. Introduction of negative marking has led to a significant increase in the reliability (KR-20 > 0.90), indicating that the current examination format can be kept for future EBOD examinations. © 2013 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Rubin, D.; Aldering, G.; Barbary, K.; Boone, K.; Chappell, G.; Currie, M.; Deustua, S.; Fagrelius, P.; Fruchter, A.; Hayden, B.; Lidman, C.; Nordin, J.; Perlmutter, S.; Saunders, C.; Sofiatti, C.; Supernova Cosmology Project, The
2015-11-01
While recent supernova (SN) cosmology research has benefited from improved measurements, current analysis approaches are not statistically optimal and will prove insufficient for future surveys. This paper discusses the limitations of current SN cosmological analyses in treating outliers, selection effects, shape- and color-standardization relations, unexplained dispersion, and heterogeneous observations. We present a new Bayesian framework, called UNITY (Unified Nonlinear Inference for Type-Ia cosmologY), that incorporates significant improvements in our ability to confront these effects. We apply the framework to real SN observations and demonstrate smaller statistical and systematic uncertainties. We verify earlier results that SNe Ia require nonlinear shape and color standardizations, but we now include these nonlinear relations in a statistically well-justified way. This analysis was primarily performed blinded, in that the basic framework was first validated on simulated data before transitioning to real data. We also discuss possible extensions of the method.
Parsons, Nick R; Price, Charlotte L; Hiskens, Richard; Achten, Juul; Costa, Matthew L
2012-04-25
The application of statistics in reported research in trauma and orthopaedic surgery has become ever more important and complex. Despite the extensive use of statistical analysis, it is still a subject which is often not conceptually well understood, resulting in clear methodological flaws and inadequate reporting in many papers. A detailed statistical survey sampled 100 representative orthopaedic papers using a validated questionnaire that assessed the quality of the trial design and statistical analysis methods. The survey found evidence of failings in study design, statistical methodology and presentation of the results. Overall, in 17% (95% confidence interval; 10-26%) of the studies investigated the conclusions were not clearly justified by the results, in 39% (30-49%) of studies a different analysis should have been undertaken and in 17% (10-26%) a different analysis could have made a difference to the overall conclusions. It is only by an improved dialogue between statistician, clinician, reviewer and journal editor that the failings in design methodology and analysis highlighted by this survey can be addressed.
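The survey abstract does not state how its confidence intervals were computed; as a hedged illustration, a Wilson score interval for a proportion such as "17 of 100 papers" gives bounds close to those reported (an exact Clopper-Pearson interval would be slightly wider). A minimal sketch:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% interval for a binomial proportion."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

lo, hi = wilson_ci(17, 100)   # e.g. 17 of 100 papers with unjustified conclusions
print(f"17% (95% CI {lo:.0%}-{hi:.0%})")
```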
Compositional Solution Space Quantification for Probabilistic Software Analysis
NASA Technical Reports Server (NTRS)
Borges, Mateus; Pasareanu, Corina S.; Filieri, Antonio; d'Amorim, Marcelo; Visser, Willem
2014-01-01
Probabilistic software analysis aims at quantifying how likely a target event is to occur during program execution. Current approaches rely on symbolic execution to identify the conditions to reach the target event and try to quantify the fraction of the input domain satisfying these conditions. Precise quantification is usually limited to linear constraints, while only approximate solutions can be provided in general through statistical approaches. However, statistical approaches may fail to converge to an acceptable accuracy within a reasonable time. We present a compositional statistical approach for the efficient quantification of solution spaces for arbitrarily complex constraints over bounded floating-point domains. The approach leverages interval constraint propagation to improve the accuracy of the estimation by focusing the sampling on the regions of the input domain containing the sought solutions. Preliminary experiments show significant improvement on previous approaches both in results accuracy and analysis time.
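To make the idea concrete, the sketch below estimates the fraction of a bounded two-dimensional floating-point domain that satisfies a toy path condition. The constraint, the coarse corner-based box pruning, and all parameter choices are illustrative assumptions only; genuine interval constraint propagation yields guaranteed enclosures rather than the heuristic pruning used here.

```python
import math
import random

def path_condition(x, y):
    # stand-in for a constraint collected by symbolic execution
    return x * x + math.sin(y) < 1.0

def estimate_fraction(domain, n_samples=100_000, grid=8, seed=0):
    rng = random.Random(seed)
    (x_lo, x_hi), (y_lo, y_hi) = domain
    # 1) crude pruning: keep boxes with at least one satisfying corner (heuristic, not sound)
    boxes = []
    for i in range(grid):
        for j in range(grid):
            bx = (x_lo + (x_hi - x_lo) * i / grid, x_lo + (x_hi - x_lo) * (i + 1) / grid)
            by = (y_lo + (y_hi - y_lo) * j / grid, y_lo + (y_hi - y_lo) * (j + 1) / grid)
            if any(path_condition(px, py) for px in bx for py in by):
                boxes.append((bx, by))
    kept_volume = len(boxes) / (grid * grid)
    # 2) focus the Monte Carlo sampling on the retained boxes
    hits = 0
    for _ in range(n_samples):
        bx, by = rng.choice(boxes)
        hits += path_condition(rng.uniform(*bx), rng.uniform(*by))
    return kept_volume * hits / n_samples

print(estimate_fraction(((-2.0, 2.0), (-2.0, 2.0))))
```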
Harris, Alex; Reeder, Rachelle; Hyun, Jenny
2011-01-01
The authors surveyed 21 editors and reviewers from major psychology journals to identify and describe the statistical and design errors they encounter most often and to get their advice regarding prevention of these problems. Content analysis of the text responses revealed themes in 3 major areas: (a) problems with research design and reporting (e.g., lack of an a priori power analysis, lack of congruence between research questions and study design/analysis, failure to adequately describe statistical procedures); (b) inappropriate data analysis (e.g., improper use of analysis of variance, too many statistical tests without adjustments, inadequate strategy for addressing missing data); and (c) misinterpretation of results. If researchers attended to these common methodological and analytic issues, the scientific quality of manuscripts submitted to high-impact psychology journals might be significantly improved.
Dexter, Franklin; Shafer, Steven L
2017-03-01
Considerable attention has been drawn to poor reproducibility in the biomedical literature. One explanation is inadequate reporting of statistical methods by authors and inadequate assessment of statistical reporting and methods during peer review. In this narrative review, we examine scientific studies of several well-publicized efforts to improve statistical reporting. We also review several retrospective assessments of the impact of these efforts. These studies show that instructions to authors and statistical checklists are not sufficient; no findings suggested that either improves the quality of statistical methods and reporting. Second, even basic statistics, such as power analyses, are frequently missing or incorrectly performed. Third, statistical review is needed for all papers that involve data analysis. A consistent finding in the studies was that nonstatistical reviewers (eg, "scientific reviewers") and journal editors generally poorly assess statistical quality. We finish by discussing our experience with statistical review at Anesthesia & Analgesia from 2006 to 2016.
Monitoring and Evaluation: Statistical Support for Life-cycle Studies, Annual Report 2003.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skalski, John
2003-11-01
The ongoing mission of this project is the development of statistical tools for analyzing fisheries tagging data in the most precise and appropriate manner possible. This mission also includes providing statistical guidance on the best ways to design large-scale tagging studies. This mission continues because the technologies for conducting fish tagging studies continuously evolve. In just the last decade, fisheries biologists have seen the evolution from freeze-brands and coded wire tags (CWT) to passive integrated transponder (PIT) tags, balloon-tags, radiotelemetry, and now, acoustic-tags. With each advance, the technology holds the promise of more detailed and precise information. However, the technology for analyzing and interpreting the data also becomes more complex as the tagging techniques become more sophisticated. The goal of the project is to develop the analytical tools in parallel with the technical advances in tagging studies, so that maximum information can be extracted on a timely basis. Associated with this mission is the transfer of these analytical capabilities to the field investigators to assure consistency and the highest levels of design and analysis throughout the fisheries community. Consequently, this project provides detailed technical assistance on the design and analysis of tagging studies to groups requesting assistance throughout the fisheries community. Ideally, each project and each investigator would invest in the statistical support needed for the successful completion of their study. However, this is an ideal that is rarely if ever attained. Furthermore, there is only a small pool of highly trained scientists in this specialized area of tag analysis here in the Northwest. Project 198910700 provides the financial support to sustain this local expertise on the statistical theory of tag analysis at the University of Washington and make it available to the fisheries community. Piecemeal and fragmented support from various agencies and organizations would be incapable of maintaining a center of expertise. The mission of the project is to help assure that tagging studies are designed and analyzed from the outset to extract the best available information using state-of-the-art statistical methods. The overarching goal of the project is to assure statistically sound survival studies so that fish managers can focus on the management implications of their findings and not be distracted by concerns about whether the studies are statistically reliable. Specific goals and objectives of the study include the following: (1) Provide consistent application of statistical methodologies for survival estimation across all salmon life cycle stages to assure comparable performance measures and assessment of results through time, to maximize learning and adaptive management opportunities, and to improve and maintain the ability to responsibly evaluate the success of implemented Columbia River FWP salmonid mitigation programs and identify future mitigation options. (2) Improve analytical capabilities to conduct research on survival processes of wild and hatchery chinook and steelhead during smolt outmigration, to improve monitoring and evaluation capabilities and assist in-season river management to optimize operational and fish passage strategies to maximize survival. (3) Extend statistical support to estimate ocean survival and in-river survival of returning adults. Provide statistical guidance in implementing a river-wide adult PIT-tag detection capability.
(4) Develop statistical methods for survival estimation for all potential users and make this information available through peer-reviewed publications, statistical software, and technology transfers to organizations such as NOAA Fisheries, the Fish Passage Center, US Fish and Wildlife Service, US Geological Survey (USGS), US Army Corps of Engineers (USACE), Public Utility Districts (PUDs), the Independent Scientific Advisory Board (ISAB), and other members of the Northwest fisheries community. (5) Provide and maintain statistical software for tag analysis and user support. (6) Provide improvements in statistical theory and software as requested by user groups. These improvements include extending software capabilities to address new research issues, adapting tagging techniques to new study designs, and extending the analysis capabilities to new technologies such as radio-tags and acoustic-tags.
Statistical analysis of arthroplasty data
2011-01-01
It is envisaged that guidelines for statistical analysis and presentation of results will improve the quality and value of research. The Nordic Arthroplasty Register Association (NARA) has therefore developed guidelines for the statistical analysis of arthroplasty register data. The guidelines are divided into two parts, one with an introduction and a discussion of the background to the guidelines (Ranstam et al. 2011a, see pages x-y in this issue), and this one with a more technical statistical discussion on how specific problems can be handled. This second part contains (1) recommendations for the interpretation of methods used to calculate survival, (2) recommendations on how to deal with bilateral observations, and (3) a discussion of problems and pitfalls associated with analysis of factors that influence survival or comparisons between outcomes extracted from different hospitals. PMID:21619500
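Arthroplasty register survival is typically summarized with Kaplan-Meier (or competing-risk) estimates of time to revision; the guidelines themselves should be consulted for the recommended choices. The following is only a minimal Kaplan-Meier sketch on hypothetical follow-up data.

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival curve; events = 1 for revision, 0 for censored follow-up."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    at_risk, surv, curve = len(times), 1.0, []
    for t in np.unique(times):
        mask = times == t
        d = events[mask].sum()            # revisions at time t
        if d > 0:
            surv *= 1.0 - d / at_risk     # at_risk = implants unrevised and uncensored before t
        curve.append((t, surv))
        at_risk -= mask.sum()
    return curve

# hypothetical data: follow-up in years, 1 = revised, 0 = censored
print(kaplan_meier([1.2, 2.5, 2.5, 3.1, 4.0, 5.5], [1, 0, 1, 0, 1, 0]))
```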
D'Agostino, M F; Sanz, J; Martínez-Castro, I; Giuffrè, A M; Sicari, V; Soria, A C
2014-07-01
Statistical analysis has been used for the first time to evaluate the dispersion of quantitative data in the solid-phase microextraction (SPME) followed by gas chromatography-mass spectrometry (GC-MS) analysis of blackberry (Rubus ulmifolius Schott) volatiles with the aim of improving their precision. Experimental and randomly simulated data were compared using different statistical parameters (correlation coefficients, Principal Component Analysis loadings and eigenvalues). Non-random factors were shown to significantly contribute to total dispersion; groups of volatile compounds could be associated with these factors. A significant improvement of precision was achieved when considering percent concentration ratios, rather than percent values, among those blackberry volatiles with a similar dispersion behavior. As novelty over previous references, and to complement this main objective, the presence of non-random dispersion trends in data from simple blackberry model systems was evidenced. Although the influence of the type of matrix on data precision was proved, the possibility of a better understanding of the dispersion patterns in real samples was not possible from model systems. The approach here used was validated for the first time through the multicomponent characterization of Italian blackberries from different harvest years. Copyright © 2014 Elsevier B.V. All rights reserved.
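The Principal Component Analysis loadings and eigenvalues referred to above can be obtained from an eigendecomposition of the covariance matrix of the replicate data. The sketch below uses hypothetical percent-area values for four volatiles; it is not the authors' dataset or workflow.

```python
import numpy as np

# hypothetical matrix: rows = SPME-GC-MS replicates, columns = volatile percent areas
X = np.array([[12.1, 30.5, 8.2, 49.2],
              [11.4, 31.8, 8.9, 47.9],
              [13.0, 29.7, 7.8, 49.5],
              [12.6, 30.1, 8.5, 48.8]])

Xc = X - X.mean(axis=0)                       # mean-centre each volatile
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(eigvals)[::-1]             # sort components by explained variance
eigvals, loadings = eigvals[order], eigvecs[:, order]
print("explained variance ratios:", np.round(eigvals / eigvals.sum(), 3))
print("PC1 loadings:", np.round(loadings[:, 0], 3))
```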
Statistical analysis for validating ACO-KNN algorithm as feature selection in sentiment analysis
NASA Astrophysics Data System (ADS)
Ahmad, Siti Rohaidah; Yusop, Nurhafizah Moziyana Mohd; Bakar, Azuraliza Abu; Yaakub, Mohd Ridzwan
2017-10-01
This research paper aims to propose a hybrid of ant colony optimization (ACO) and k-nearest neighbor (KNN) algorithms as a feature selection method for selecting relevant features from customer review datasets. Information gain (IG), genetic algorithm (GA), and rough set attribute reduction (RSAR) were used as baseline algorithms in a performance comparison with the proposed algorithm. This paper also discusses the significance test, which was used to evaluate the performance differences between the ACO-KNN, IG-GA, and IG-RSAR algorithms. This study evaluated the performance of the ACO-KNN algorithm using precision, recall, and F-score, which were validated using parametric statistical significance tests. The evaluation process statistically confirmed that the ACO-KNN algorithm performs significantly better than the baseline algorithms. In addition, the experimental results showed that ACO-KNN can be used as a feature selection technique in sentiment analysis to obtain a high-quality, optimal feature subset that represents the actual data in customer review datasets.
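The abstract does not name the parametric test used; a common choice for comparing two feature-selection methods evaluated on the same cross-validation folds is a paired t-test on per-fold scores. A minimal sketch with hypothetical F-scores:

```python
from scipy import stats

# hypothetical per-fold F-scores from the same 10-fold cross-validation split
f_aco_knn = [0.81, 0.79, 0.83, 0.80, 0.82, 0.78, 0.84, 0.80, 0.81, 0.83]
f_ig_ga   = [0.76, 0.74, 0.78, 0.75, 0.77, 0.73, 0.79, 0.75, 0.76, 0.77]

t, p = stats.ttest_rel(f_aco_knn, f_ig_ga)   # paired test because folds are matched
print(f"t = {t:.2f}, p = {p:.4f}")
```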
Against Inferential Statistics: How and Why Current Statistics Teaching Gets It Wrong
ERIC Educational Resources Information Center
White, Patrick; Gorard, Stephen
2017-01-01
Recent concerns about a shortage of capacity for statistical and numerical analysis skills among social science students and researchers have prompted a range of initiatives aiming to improve teaching in this area. However, these projects have rarely re-evaluated the content of what is taught to students and have instead focussed primarily on…
Zhu, Xiaofeng; Feng, Tao; Tayo, Bamidele O; Liang, Jingjing; Young, J Hunter; Franceschini, Nora; Smith, Jennifer A; Yanek, Lisa R; Sun, Yan V; Edwards, Todd L; Chen, Wei; Nalls, Mike; Fox, Ervin; Sale, Michele; Bottinger, Erwin; Rotimi, Charles; Liu, Yongmei; McKnight, Barbara; Liu, Kiang; Arnett, Donna K; Chakravati, Aravinda; Cooper, Richard S; Redline, Susan
2015-01-08
Genome-wide association studies (GWASs) have identified many genetic variants underlying complex traits. Many detected genetic loci harbor variants that associate with multiple, even distinct, traits. Most current analysis approaches focus on single traits, even though the final results from multiple traits are evaluated together. Such approaches miss the opportunity to systemically integrate the phenome-wide data available for genetic association analysis. In this study, we propose a general approach that can integrate association evidence from summary statistics of multiple traits, either correlated, independent, continuous, or binary traits, which might come from the same or different studies. We allow for trait heterogeneity effects. Population structure and cryptic relatedness can also be controlled. Our simulations suggest that the proposed method has improved statistical power over single-trait analysis in most of the cases we studied. We applied our method to the Continental Origins and Genetic Epidemiology Network (COGENT) African ancestry samples for three blood pressure traits and identified four loci (CHIC2, HOXA-EVX1, IGFBP1/IGFBP3, and CDH17; p < 5.0 × 10^-8) associated with hypertension-related traits that were missed by a single-trait analysis in the original report. Six additional loci with suggestive association evidence (p < 5.0 × 10^-7) were also observed, including CACNA1D and WNT3. Our study strongly suggests that analyzing multiple phenotypes can improve statistical power and that such analysis can be executed with the summary statistics from GWASs. Our method also provides a way to study cross-phenotype (CP) associations by using summary statistics from GWASs of multiple phenotypes. Copyright © 2015 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
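The authors' method is more general than this, but the basic idea of combining per-trait summary statistics while accounting for their correlation can be illustrated with a generic O'Brien-type linear combination of z-scores. Everything in the sketch (the correlation matrix and the z-scores) is hypothetical, and the test shown is not the authors' exact procedure.

```python
import numpy as np
from scipy import stats

def combine_z(z, sigma):
    """Combine per-trait z-scores for one variant under a trait correlation matrix sigma
    (a generic inverse-variance linear combination, not the authors' exact test)."""
    z = np.asarray(z, dtype=float)
    ones = np.ones_like(z)
    w = np.linalg.solve(sigma, ones)
    z_comb = (w @ z) / np.sqrt(ones @ w)
    return z_comb, 2 * stats.norm.sf(abs(z_comb))

# hypothetical z-scores for three blood pressure traits at one variant
sigma = np.array([[1.0, 0.6, 0.8],
                  [0.6, 1.0, 0.7],
                  [0.8, 0.7, 1.0]])
print(combine_z([3.1, 2.4, 2.9], sigma))
```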
Statistical Considerations of Food Allergy Prevention Studies.
Bahnson, Henry T; du Toit, George; Lack, Gideon
Clinical studies to prevent the development of food allergy have recently helped reshape public policy recommendations on the early introduction of allergenic foods. These trials are also prompting new research, and it is therefore important to address the unique design and analysis challenges of prevention trials. We highlight statistical concepts and give recommendations that clinical researchers may wish to adopt when designing future study protocols and analysis plans for prevention studies. Topics include selecting a study sample, addressing internal and external validity, improving statistical power, choosing alpha and beta, analysis innovations to address dilution effects, and analysis methods to deal with poor compliance, dropout, and missing data. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Liu, Bilan; Qiu, Xing; Zhu, Tong; Tian, Wei; Hu, Rui; Ekholm, Sven; Schifitto, Giovanni; Zhong, Jianhui
2016-03-01
Subject-specific longitudinal DTI study is vital for investigation of pathological changes of lesions and disease evolution. Spatial Regression Analysis of Diffusion tensor imaging (SPREAD) is a non-parametric permutation-based statistical framework that combines spatial regression and resampling techniques to achieve effective detection of localized longitudinal diffusion changes within the whole brain at individual level without a priori hypotheses. However, boundary blurring and dislocation limit its sensitivity, especially towards detecting lesions of irregular shapes. In the present study, we propose an improved SPREAD (dubbed improved SPREAD, or iSPREAD) method by incorporating a three-dimensional (3D) nonlinear anisotropic diffusion filtering method, which provides edge-preserving image smoothing through a nonlinear scale space approach. The statistical inference based on iSPREAD was evaluated and compared with the original SPREAD method using both simulated and in vivo human brain data. Results demonstrated that the sensitivity and accuracy of the SPREAD method has been improved substantially by adapting nonlinear anisotropic filtering. iSPREAD identifies subject-specific longitudinal changes in the brain with improved sensitivity, accuracy, and enhanced statistical power, especially when the spatial correlation is heterogeneous among neighboring image pixels in DTI.
Analysis of Statistical Methods and Errors in the Articles Published in the Korean Journal of Pain
Yim, Kyoung Hoon; Han, Kyoung Ah; Park, Soo Young
2010-01-01
Background Statistical analysis is essential in regard to obtaining objective reliability for medical research. However, medical researchers do not have enough statistical knowledge to properly analyze their study data. To help understand and potentially alleviate this problem, we have analyzed the statistical methods and errors of articles published in the Korean Journal of Pain (KJP), with the intention to improve the statistical quality of the journal. Methods All the articles, except case reports and editorials, published from 2004 to 2008 in the KJP were reviewed. The types of applied statistical methods and errors in the articles were evaluated. Results One hundred and thirty-nine original articles were reviewed. Inferential statistics and descriptive statistics were used in 119 papers and 20 papers, respectively. Only 20.9% of the papers were free from statistical errors. The most commonly adopted statistical method was the t-test (21.0%) followed by the chi-square test (15.9%). Errors of omission were encountered 101 times in 70 papers. Among the errors of omission, "no statistics used even though statistical methods were required" was the most common (40.6%). The errors of commission were encountered 165 times in 86 papers, among which "parametric inference for nonparametric data" was the most common (33.9%). Conclusions We found various types of statistical errors in the articles published in the KJP. This suggests that meticulous attention should be given not only in applying statistical procedures but also in the reviewing process to improve the value of the articles. PMID:20552071
Toward improved analysis of concentration data: Embracing nondetects.
Shoari, Niloofar; Dubé, Jean-Sébastien
2018-03-01
Various statistical tests on concentration data serve to support decision-making regarding characterization and monitoring of contaminated media, assessing exposure to a chemical, and quantifying the associated risks. However, the routine statistical protocols cannot be directly applied because of challenges arising from nondetects or left-censored observations, which are concentration measurements below the detection limit of measuring instruments. Despite the existence of techniques based on survival analysis that can adjust for nondetects, these are seldom taken into account properly. A comprehensive review of the literature showed that managing policies regarding analysis of censored data do not always agree and that guidance from regulatory agencies may be outdated. Therefore, researchers and practitioners commonly resort to the most convenient way of tackling the censored data problem by substituting nondetects with arbitrary constants prior to data analysis, although this is generally regarded as a bias-prone approach. Hoping to improve the interpretation of concentration data, the present article aims to familiarize researchers in different disciplines with the significance of left-censored observations and provides theoretical and computational recommendations (under both frequentist and Bayesian frameworks) for adequate analysis of censored data. In particular, the present article synthesizes key findings from previous research with respect to 3 noteworthy aspects of inferential statistics: estimation of descriptive statistics, hypothesis testing, and regression analysis. Environ Toxicol Chem 2018;37:643-656. © 2017 SETAC.
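One of the frequentist options discussed in reviews of this kind is maximum likelihood under a lognormal model, in which each nondetect contributes the probability of falling below its detection limit rather than a substituted constant. The sketch below is a minimal illustration with made-up concentrations and detection limits; Kaplan-Meier-type and Bayesian treatments are alternatives.

```python
import numpy as np
from scipy import optimize, stats

# hypothetical data on the log scale: detected concentrations and detection limits of nondetects
detects = np.log([4.2, 7.9, 3.1, 12.5, 5.6])
dl = np.log([1.0, 1.0, 2.0])

def neg_loglik(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    ll = stats.norm.logpdf(detects, mu, sigma).sum()   # observed values
    ll += stats.norm.logcdf(dl, mu, sigma).sum()       # left-censored values: P(X < DL)
    return -ll

res = optimize.minimize(neg_loglik, x0=[np.log(5.0), 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print("estimated lognormal mean:", np.exp(mu_hat + sigma_hat**2 / 2))
```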
Measurements and analysis in imaging for biomedical applications
NASA Astrophysics Data System (ADS)
Hoeller, Timothy L.
2009-02-01
A Total Quality Management (TQM) approach can be used to analyze data from biomedical optical and imaging platforms of tissues. A shift from individuals to teams, partnerships, and total participation is necessary from health care groups for improved prognostics using measurement analysis. Proprietary measurement analysis software is available for calibrated, pixel-to-pixel measurements of angles and distances in digital images. Feature size, count, and color are determinable on an absolute and comparative basis. Although changes in histomic images depend on numerous complex factors, correlations of imaging changes with the time course, extent, and progression of illness can be derived. Statistical methods are preferred. Applications of the proprietary measurement software are available for any imaging platform. Quantification of results provides improved categorization of illness towards better health. As health care practitioners try to use quantified measurement data for patient diagnosis, the techniques reported can be used to track and isolate causes better. Comparisons, norms, and trends are available from processing of measurement data, which is obtained easily and quickly from Scientific Software and methods. Example results for the classes of preventative and corrective care, in ophthalmology and dermatology respectively, are provided. Improved and quantified diagnosis can lead to better health and lower costs associated with health care. Such systems support improvements towards Lean and Six Sigma across all branches of biology and medicine. As an example of the use of statistics, the major types of variation in a study of Bone Mineral Density (BMD) are examined. Typically, special causes in medicine relate to illness and activities, whereas common causes are known to be associated with gender, race, size, and genetic make-up. Such a strategy of Continuous Process Improvement (CPI) involves comparison of patient results to baseline data using F-statistics. Self-pairings over time are also useful. Special and common causes are identified apart from aging when applying the statistical methods. In the future, implementation of imaging measurement methods by research staff, doctors, and engaged patient partners will result in improved health diagnosis, reporting, and cause determination. The long-term prospect for quantified measurements is better quality in imaging analysis, with applications of higher utility for health care providers.
Dark matter constraints from a joint analysis of dwarf Spheroidal galaxy observations with VERITAS
Archambault, S.; Archer, A.; Benbow, W.; ...
2017-04-05
We present constraints on the annihilation cross section of weakly interacting massive particles dark matter based on the joint statistical analysis of four dwarf galaxies with VERITAS. These results are derived from an optimized photon weighting statistical technique that improves on standard imaging atmospheric Cherenkov telescope (IACT) analyses by utilizing the spectral and spatial properties of individual photon events.
Response to Comments on "Evidence for mesothermy in dinosaurs".
Grady, John M; Enquist, Brian J; Dettweiler-Robinson, Eva; Wright, Natalie A; Smith, Felisa A
2015-05-29
D'Emic and Myhrvold raise a number of statistical and methodological issues with our recent analysis of dinosaur growth and energetics. However, their critiques and suggested improvements lack biological and statistical justification. Copyright © 2015, American Association for the Advancement of Science.
Statistical Symbolic Execution with Informed Sampling
NASA Technical Reports Server (NTRS)
Filieri, Antonio; Pasareanu, Corina S.; Visser, Willem; Geldenhuys, Jaco
2014-01-01
Symbolic execution techniques have been proposed recently for the probabilistic analysis of programs. These techniques seek to quantify the likelihood of reaching program events of interest, e.g., assert violations. They have many promising applications but have scalability issues due to high computational demand. To address this challenge, we propose a statistical symbolic execution technique that performs Monte Carlo sampling of the symbolic program paths and uses the obtained information for Bayesian estimation and hypothesis testing with respect to the probability of reaching the target events. To speed up the convergence of the statistical analysis, we propose Informed Sampling, an iterative symbolic execution that first explores the paths that have high statistical significance, prunes them from the state space and guides the execution towards less likely paths. The technique combines Bayesian estimation with a partial exact analysis for the pruned paths, leading to provably improved convergence of the statistical analysis. We have implemented statistical symbolic execution with informed sampling in the Symbolic PathFinder tool. We show experimentally that informed sampling obtains more precise results and converges faster than a purely statistical analysis, and may also be more efficient than an exact symbolic analysis. When the latter does not terminate, symbolic execution with informed sampling can give meaningful results under the same time and memory limits.
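Stripped of the informed-sampling and exact-pruning machinery, the Bayesian core of such an analysis can be sketched as Monte Carlo sampling of program paths followed by a Beta posterior over the probability of reaching the target event. The sampler and the threshold below are purely illustrative assumptions.

```python
import random
from scipy import stats

def sample_run(rng):
    # stand-in for executing one sampled program path and checking the target event
    return rng.uniform(-10.0, 10.0) > 7.5

rng = random.Random(0)
n = 20_000
hits = sum(sample_run(rng) for _ in range(n))

post = stats.beta(1 + hits, 1 + n - hits)        # Beta(1, 1) prior on the event probability
print("posterior mean:", post.mean())
print("95% credible interval:", post.interval(0.95))
print("P(probability > 0.1 | data):", post.sf(0.1))   # simple Bayesian hypothesis test
```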
De Spiegelaere, Ward; Malatinkova, Eva; Lynch, Lindsay; Van Nieuwerburgh, Filip; Messiaen, Peter; O'Doherty, Una; Vandekerckhove, Linos
2014-06-01
Quantification of integrated proviral HIV DNA by repetitive-sampling Alu-HIV PCR is a candidate virological tool to monitor the HIV reservoir in patients. However, the experimental procedures and data analysis of the assay are complex and hinder its widespread use. Here, we provide an improved and simplified data analysis method by adopting binomial and Poisson statistics. A modified analysis method on the basis of Poisson statistics was used to analyze the binomial data of positive and negative reactions from a 42-replicate Alu-HIV PCR by use of dilutions of an integration standard and on samples of 57 HIV-infected patients. Results were compared with the quantitative output of the previously described Alu-HIV PCR method. Poisson-based quantification of the Alu-HIV PCR was linearly correlated with the standard dilution series, indicating that absolute quantification with the Poisson method is a valid alternative for data analysis of repetitive-sampling Alu-HIV PCR data. Quantitative outputs of patient samples assessed by the Poisson method correlated with the previously described Alu-HIV PCR analysis, indicating that this method is a valid alternative for quantifying integrated HIV DNA. Poisson-based analysis of the Alu-HIV PCR data enables absolute quantification without the need of a standard dilution curve. Implementation of the CI estimation permits improved qualitative analysis of the data and provides a statistical basis for the required minimal number of technical replicates. © 2014 The American Association for Clinical Chemistry.
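The Poisson-based quantification described above amounts to a most-probable-number calculation: if integrations are distributed randomly across replicate reactions, the mean number of integrated copies per reaction follows from the fraction of negative wells. A minimal sketch with hypothetical counts (the assay's full analysis also includes confidence-interval estimation, which is not shown here):

```python
import math

def copies_per_reaction(n_reactions, n_negative):
    """Estimate mean integrated copies per reaction from the negative fraction:
    lambda = -ln(negatives / total), assuming a Poisson distribution of targets."""
    if n_negative == 0 or n_negative == n_reactions:
        raise ValueError("all-positive or all-negative runs cannot be quantified this way")
    return -math.log(n_negative / n_reactions)

# hypothetical 42-replicate Alu-HIV PCR run with 15 negative wells
print(round(copies_per_reaction(42, 15), 2))
```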
Yue Xu, Selene; Nelson, Sandahl; Kerr, Jacqueline; Godbole, Suneeta; Patterson, Ruth; Merchant, Gina; Abramson, Ian; Staudenmayer, John; Natarajan, Loki
2018-04-01
Physical inactivity is a recognized risk factor for many chronic diseases. Accelerometers are increasingly used as an objective means to measure daily physical activity. One challenge in using these devices is missing data due to device nonwear. We used a well-characterized cohort of 333 overweight postmenopausal breast cancer survivors to examine missing data patterns of accelerometer outputs over the day. Based on these observed missingness patterns, we created pseudo-simulated datasets with realistic missing data patterns. We developed statistical methods to design imputation and variance weighting algorithms to account for missing data effects when fitting regression models. Bias and precision of each method were evaluated and compared. Our results indicated that not accounting for missing data in the analysis yielded unstable estimates in the regression analysis. Incorporating variance weights and/or subject-level imputation improved precision by >50%, compared to ignoring missing data. We recommend that these simple easy-to-implement statistical tools be used to improve analysis of accelerometer data.
On the analysis of very small samples of Gaussian repeated measurements: an alternative approach.
Westgate, Philip M; Burchett, Woodrow W
2017-03-15
The analysis of very small samples of Gaussian repeated measurements can be challenging. First, due to a very small number of independent subjects contributing outcomes over time, statistical power can be quite small. Second, nuisance covariance parameters must be appropriately accounted for in the analysis in order to maintain the nominal test size. However, available statistical strategies that ensure valid statistical inference may lack power, whereas more powerful methods may have the potential for inflated test sizes. Therefore, we explore an alternative approach to the analysis of very small samples of Gaussian repeated measurements, with the goal of maintaining valid inference while also improving statistical power relative to other valid methods. This approach uses generalized estimating equations with a bias-corrected empirical covariance matrix that accounts for all small-sample aspects of nuisance correlation parameter estimation in order to maintain valid inference. Furthermore, the approach utilizes correlation selection strategies with the goal of choosing the working structure that will result in the greatest power. In our study, we show that when accurate modeling of the nuisance correlation structure impacts the efficiency of regression parameter estimation, this method can improve power relative to existing methods that yield valid inference. Copyright © 2017 John Wiley & Sons, Ltd.
Application of multivariable statistical techniques in plant-wide WWTP control strategies analysis.
Flores, X; Comas, J; Roda, I R; Jiménez, L; Gernaey, K V
2007-01-01
The main objective of this paper is to present the application of selected multivariable statistical techniques in plant-wide wastewater treatment plant (WWTP) control strategies analysis. In this study, cluster analysis (CA), principal component analysis/factor analysis (PCA/FA) and discriminant analysis (DA) are applied to the evaluation matrix data set obtained by simulation of several control strategies applied to the plant-wide IWA Benchmark Simulation Model No 2 (BSM2). These techniques make it possible (i) to determine natural groups or clusters of control strategies with a similar behaviour, (ii) to find and interpret hidden, complex and causal relation features in the data set, and (iii) to identify important discriminant variables within the groups found by the cluster analysis. This study illustrates the usefulness of multivariable statistical techniques for both analysis and interpretation of complex multicriteria data sets and allows an improved use of information for effective evaluation of control strategies.
[Statistical analysis of German radiologic periodicals: developmental trends in the last 10 years].
Golder, W
1999-09-01
To identify which statistical tests are applied in German radiological publications, to what extent their use has changed during the last decade, and which factors might be responsible for this development. The major articles published in "ROFO" and "DER RADIOLOGE" during 1988, 1993 and 1998 were reviewed for statistical content. The contributions were classified by principal focus and radiological subspecialty. The methods used were assigned to descriptive, basal and advanced statistics. Sample size, significance level and power were established. The use of experts' assistance was monitored. Finally, we calculated the so-called cumulative accessibility of the publications. 525 contributions were found to be eligible. In 1988, 87% used descriptive statistics only, 12.5% basal, and 0.5% advanced statistics. The corresponding figures in 1993 and 1998 are 62 and 49%, 32 and 41%, and 6 and 10%, respectively. Statistical techniques were most likely to be used in research on musculoskeletal imaging and in articles dedicated to MRI. Six basic categories of statistical methods account for the complete statistical analysis appearing in 90% of the articles. ROC analysis is the single most common advanced technique. Authors increasingly make use of statistical experts' advice and of statistical programs. During the last decade, the use of statistical methods in German radiological journals has fundamentally improved, both quantitatively and qualitatively. Presently, advanced techniques account for 20% of the pertinent statistical tests. This development seems to be promoted by the increasing availability of statistical analysis software.
The Content of Statistical Requirements for Authors in Biomedical Research Journals
Liu, Tian-Yi; Cai, Si-Yu; Nie, Xiao-Lu; Lyu, Ya-Qi; Peng, Xiao-Xia; Feng, Guo-Shuang
2016-01-01
Background: Robust statistical designing, sound statistical analysis, and standardized presentation are important to enhance the quality and transparency of biomedical research. This systematic review was conducted to summarize the statistical reporting requirements introduced by biomedical research journals with an impact factor of 10 or above, so that researchers give serious consideration to statistical issues not only at the stage of data analysis but also at the stage of methodological design. Methods: Detailed statistical instructions for authors were downloaded from the homepage of each of the included journals or obtained from the editors directly via email. We then described the types and numbers of statistical guidelines introduced by different press groups. Items of the statistical reporting guidelines, as well as particular requirements, were summarized by frequency and grouped into design, method of analysis, and presentation, respectively. Finally, updated statistical guidelines and particular requirements for improvement were summed up. Results: In total, 21 of 23 press groups introduced at least one statistical guideline. More than half of the press groups update their statistical instructions for authors as new statistical reporting guidelines are issued. In addition, 16 press groups, covering 44 journals, address particular statistical requirements. Most of the particular requirements focused on the performance of statistical analysis and transparency in statistical reporting, including “address issues relevant to research design, including participant flow diagram, eligibility criteria, and sample size estimation,” and “statistical methods and the reasons.” Conclusions: Statistical requirements for authors are becoming increasingly complete. They remind researchers to give sufficient consideration not only to statistical methods during research design but also to standardized statistical reporting, which would help provide stronger evidence and make critical appraisal of the evidence more accessible. PMID:27748343
2011-07-01
…joined the project team in the statistical and research coordination role. Dr. Collin is an employee at the University of Pittsburgh. A successful… 3. Submit to Ft. Detrick. Completed Milestone: Statistical analysis planning. 1. Review planned data metrics and data gathering tools… approach to performance assessment for continuous quality improvement. Analyzing data with modern statistical techniques to determine the…
Davis, J.C.
2000-01-01
Geologists may feel that geological data are not amenable to statistical analysis, or at best require specialized approaches such as nonparametric statistics and geostatistics. However, there are many circumstances, particularly in systematic studies conducted for environmental or regulatory purposes, where traditional parametric statistical procedures can be beneficial. An example is the application of analysis of variance to data collected in an annual program of measuring groundwater levels in Kansas. Influences such as well conditions, operator effects, and use of the water can be assessed and wells that yield less reliable measurements can be identified. Such statistical studies have resulted in yearly improvements in the quality and reliability of the collected hydrologic data. Similar benefits may be achieved in other geological studies by the appropriate use of classical statistical tools.
Research design and statistical methods in Pakistan Journal of Medical Sciences (PJMS).
Akhtar, Sohail; Shah, Syed Wadood Ali; Rafiq, M; Khan, Ajmal
2016-01-01
This article compares the study designs and statistical methods used in the 2005, 2010 and 2015 volumes of the Pakistan Journal of Medical Sciences (PJMS). Only original articles of PJMS were considered for the analysis. The articles were carefully reviewed for statistical methods and designs, and then recorded accordingly. The frequency of each statistical method and research design was estimated and compared with previous years. A total of 429 articles were evaluated (n=74 in 2005, n=179 in 2010, n=176 in 2015), of which 171 (40%) were cross-sectional and 116 (27%) were prospective study designs. A variety of statistical methods was found in the analysis. The most frequent methods included descriptive statistics (n=315, 73.4%), chi-square/Fisher's exact tests (n=205, 47.8%) and Student's t-test (n=186, 43.4%). There was a significant increase in the use of statistical methods over the time period: t-test, chi-square/Fisher's exact test, logistic regression, epidemiological statistics, and non-parametric tests. This study shows that a diverse variety of statistical methods has been used in the research articles of PJMS and that their frequency increased from 2005 to 2015. However, descriptive statistics was the most frequent method of statistical analysis in the published articles, while the cross-sectional study design was the most common study design.
Improving Markov Chain Models for Road Profiles Simulation via Definition of States
2012-04-01
…wavelet transform in pavement profile analysis," Vehicle System Dynamics: International Journal of Vehicle Mechanics and Mobility, vol. 47, no. 4… "Estimating Markov Transition Probabilities from Micro-Unit Data," Journal of the Royal Statistical Society, Series C (Applied Statistics), pp. 355-371.
Singal, Amit G.; Mukherjee, Ashin; Elmunzer, B. Joseph; Higgins, Peter DR; Lok, Anna S.; Zhu, Ji; Marrero, Jorge A; Waljee, Akbar K
2015-01-01
Background Predictive models for hepatocellular carcinoma (HCC) have been limited by modest accuracy and lack of validation. Machine learning algorithms offer a novel methodology, which may improve HCC risk prognostication among patients with cirrhosis. Our study's aim was to develop and compare predictive models for HCC development among cirrhotic patients, using conventional regression analysis and machine learning algorithms. Methods We enrolled 442 patients with Child A or B cirrhosis at the University of Michigan between January 2004 and September 2006 (UM cohort) and prospectively followed them until HCC development, liver transplantation, death, or study termination. Regression analysis and machine learning algorithms were used to construct predictive models for HCC development, which were tested on an independent validation cohort from the Hepatitis C Antiviral Long-term Treatment against Cirrhosis (HALT-C) Trial. Both models were also compared to the previously published HALT-C model. Discrimination was assessed using receiver operating characteristic curve analysis and diagnostic accuracy was assessed with net reclassification improvement and integrated discrimination improvement statistics. Results After a median follow-up of 3.5 years, 41 patients developed HCC. The UM regression model had a c-statistic of 0.61 (95%CI 0.56-0.67), whereas the machine learning algorithm had a c-statistic of 0.64 (95%CI 0.60–0.69) in the validation cohort. The machine learning algorithm had significantly better diagnostic accuracy as assessed by net reclassification improvement (p<0.001) and integrated discrimination improvement (p=0.04). The HALT-C model had a c-statistic of 0.60 (95%CI 0.50-0.70) in the validation cohort and was outperformed by the machine learning algorithm (p=0.047). Conclusion Machine learning algorithms improve the accuracy of risk stratifying patients with cirrhosis and can be used to accurately identify patients at high-risk for developing HCC. PMID:24169273
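The c-statistics quoted above are areas under the receiver operating characteristic curve. The sketch below computes them for two hypothetical risk scores on simulated data, treating HCC development as a binary outcome for simplicity (the study itself used time-to-event follow-up and reclassification statistics as well).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# simulated validation cohort: 1 = developed HCC during follow-up
y = rng.binomial(1, 0.1, size=300)
risk_regression = rng.normal(size=300) + 0.4 * y   # score from a regression-style model
risk_ml = rng.normal(size=300) + 0.6 * y           # score from a machine-learning model

for name, score in [("regression", risk_regression), ("machine learning", risk_ml)]:
    print(name, "c-statistic:", round(roc_auc_score(y, score), 3))
```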
Money Does Matter Somewhere: A Reply to Hanushek.
ERIC Educational Resources Information Center
Hedges, Larry V.; And Others
1994-01-01
Replies to E. A. Hanushek's questioning of the validity of meta-analysis as used by the authors in analyzing resource allocation and its effects on improving student academic performance. Statistical analysis procedures are examined. (GLR)
Shrout, Patrick E; Rodgers, Joseph L
2018-01-04
Psychology advances knowledge by testing statistical hypotheses using empirical observations and data. The expectation is that most statistically significant findings can be replicated in new data and in new laboratories, but in practice many findings have replicated less often than expected, leading to claims of a replication crisis. We review recent methodological literature on questionable research practices, meta-analysis, and power analysis to explain the apparently high rates of failure to replicate. Psychologists can improve research practices to advance knowledge in ways that improve replicability. We recommend that researchers adopt open science conventions of preregistration and full disclosure and that replication efforts be based on multiple studies rather than on a single replication attempt. We call for more sophisticated power analyses, careful consideration of the various influences on effect sizes, and more complete disclosure of nonsignificant as well as statistically significant findings.
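As an illustration of the kind of power analysis being called for, the sketch below uses statsmodels to ask how many subjects per group a two-sample t-test needs for 80% power at a medium effect size, and what power a typical small replication attempt actually achieves; the effect size and sample size are assumptions for the example only.

```python
from statsmodels.stats.power import TTestIndPower

power_calc = TTestIndPower()

# sample size per group for 80% power to detect Cohen's d = 0.5 at alpha = 0.05
n_per_group = power_calc.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(round(n_per_group))                                   # roughly 64 per group

# power achieved by a replication attempt with only 20 subjects per group
print(round(power_calc.power(effect_size=0.5, nobs1=20, alpha=0.05), 2))
```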
An improved multiple linear regression and data analysis computer program package
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
NEWRAP, an improved version of a previous multiple linear regression program called RAPIER, CREDUC, and CRSPLT, allows for a complete regression analysis including cross plots of the independent and dependent variables, correlation coefficients, regression coefficients, analysis of variance tables, t-statistics and their probability levels, rejection of independent variables, plots of residuals against the independent and dependent variables, and a canonical reduction of quadratic response functions useful in optimum seeking experimentation. A major improvement over RAPIER is that all regression calculations are done in double precision arithmetic.
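The regression outputs listed above (coefficients, analysis-of-variance quantities, t-statistics and their probability levels) are standard multiple linear regression results. A compact modern equivalent in Python, on simulated data rather than anything from the original program, might look like the following sketch.

```python
import numpy as np
from scipy import stats

def ols_with_tstats(X, y):
    """Ordinary least squares with coefficient t-statistics and two-sided p-values."""
    X = np.column_stack([np.ones(len(y)), X])        # prepend an intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]
    sigma2 = resid @ resid / dof                     # residual variance
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    t = beta / se
    p = 2 * stats.t.sf(np.abs(t), dof)
    return beta, t, p

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
y = 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=30)
print(ols_with_tstats(X, y))
```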
Tsallis statistics and neurodegenerative disorders
NASA Astrophysics Data System (ADS)
Iliopoulos, Aggelos C.; Tsolaki, Magdalini; Aifantis, Elias C.
2016-08-01
In this paper, we perform statistical analysis of time series deriving from four neurodegenerative disorders, namely epilepsy, amyotrophic lateral sclerosis (ALS), Parkinson's disease (PD) and Huntington's disease (HD). The time series are concerned with electroencephalograms (EEGs) of healthy and epileptic states, as well as gait dynamics (in particular, stride intervals) in ALS, PD and HD. We study data concerning one subject for each neurodegenerative disorder and one healthy control. The analysis is based on Tsallis non-extensive statistical mechanics and in particular on the estimation of the Tsallis q-triplet, namely {qstat, qsen, qrel}. The deviation of the Tsallis q-triplet from unity indicates non-Gaussian statistics and long-range dependencies for all time series considered. In addition, the results reveal the efficiency of Tsallis statistics in capturing differences in brain dynamics between healthy and epileptic states, as well as differences between ALS, PD and HD patients and healthy control subjects. The results indicate that estimates of the Tsallis q-indices could be used as possible biomarkers, along with others, for improving classification and prediction of epileptic seizures, as well as for studying the complex gait dynamics of various diseases, providing new insights into severity, medications and fall risk, and improving therapeutic interventions.
NASA Astrophysics Data System (ADS)
Andrade, João Rodrigo; Martins, Ramon Silva; Thompson, Roney Leon; Mompean, Gilmar; da Silveira Neto, Aristeu
2018-04-01
The present paper provides an analysis of the statistical uncertainties associated with direct numerical simulation (DNS) results and experimental data for turbulent channel and pipe flows, showing a new physically based quantification of these errors, to improve the determination of the statistical deviations between DNSs and experiments. The analysis is carried out using a recently proposed criterion by Thompson et al. ["A methodology to evaluate statistical errors in DNS data of plane channel flows," Comput. Fluids 130, 1-7 (2016)] for fully turbulent plane channel flows, where the mean velocity error is estimated by considering the Reynolds stress tensor, and using the balance of the mean force equation. It also presents how the residual error evolves in time for a DNS of a plane channel flow, and the influence of the Reynolds number on its convergence rate. The root mean square of the residual error is shown in order to capture a single quantitative value of the error associated with the dimensionless averaging time. The evolution in time of the error norm is compared with the final error provided by DNS data of similar Reynolds numbers available in the literature. A direct consequence of this approach is that it was possible to compare different numerical results and experimental data, providing an improved understanding of the convergence of the statistical quantities in turbulent wall-bounded flows.
Thin layer asphaltic concrete density measuring using nuclear gages.
DOT National Transportation Integrated Search
1989-03-01
A Troxler 4640 thin layer nuclear gage was evaluated under field conditions to determine if it would provide improved accuracy of density measurements on asphalt overlays of 1-3/4 and 2 inches in thickness. Statistical analysis shows slightly improve...
Bunijevac, Mila; Petrović-Lazić, Mirjana; Jovanović-Simić, Nadica; Vuković, Mile
2016-02-01
The major role of the larynx in speech, respiration and swallowing makes carcinomas of this region, and their treatment, very influential for patients' quality of life. The aim of this study was to assess the importance of voice therapy in patients after open surgery on the vocal cords. This study included 21 male patients and a control group of 19 subjects. The vowel (A) was recorded and analyzed for each examinee. All the patients were recorded twice: first, when they contacted the clinic, and second, after three months of vocal therapy, which was held twice per week on an outpatient basis. The voice analysis was carried out in the Ear, Nose and Throat (ENT) Clinic, Clinical Hospital Center "Zvezdara" in Belgrade. The values of the acoustic parameters in the patients who underwent open surgery on the vocal cords, measured before vocal rehabilitation, differed significantly from those of the control group for all specified parameters, suggesting that the patients' voices were damaged before vocal rehabilitation. The acoustic parameters of the vowel (A) before and after vocal rehabilitation of the patients with open surgery on the vocal cords were also statistically significantly different. For jitter (%) and shimmer (%), the observed difference was highly statistically significant (p < 0.01). The voice turbulence index and the noise/harmonic ratio were also notably improved, and the observed difference was statistically significant (p < 0.05). The analysis of the tremor intensity index showed no significant improvement, and the observed difference was not statistically significant (p > 0.05). In conclusion, there was a significant improvement of the acoustic parameters of the vowel (A) in the study subjects three months after vocal therapy. Only one out of five representative parameters showed no significant improvement.
Dai, Mingwei; Ming, Jingsi; Cai, Mingxuan; Liu, Jin; Yang, Can; Wan, Xiang; Xu, Zongben
2017-09-15
Results from genome-wide association studies (GWAS) suggest that a complex phenotype is often affected by many variants with small effects, known as 'polygenicity'. Tens of thousands of samples are often required to ensure statistical power for identifying these variants with small effects. However, it is often the case that a research group can only get approval for access to individual-level genotype data with a limited sample size (e.g. a few hundreds or thousands). Meanwhile, summary statistics generated using single-variant-based analysis are becoming publicly available. The sample sizes associated with the summary statistics datasets are usually quite large. How to make the most efficient use of existing abundant data resources largely remains an open question. In this study, we propose a statistical approach, IGESS, to increasing statistical power of identifying risk variants and improving accuracy of risk prediction by integrating individual-level genotype data and summary statistics. An efficient algorithm based on variational inference is developed to handle the genome-wide analysis. Through comprehensive simulation studies, we demonstrated the advantages of IGESS over methods which take either individual-level data or summary statistics data as input. We applied IGESS to perform integrative analysis of Crohn's disease data from WTCCC and summary statistics from other studies. IGESS was able to significantly increase the statistical power of identifying risk variants and improve the risk prediction accuracy from 63.2% (±0.4%) to 69.4% (±0.1%) using about 240,000 variants. The IGESS software is available at https://github.com/daviddaigithub/IGESS. Contact: zbxu@xjtu.edu.cn, xwan@comp.hkbu.edu.hk or eeyang@hkbu.edu.hk. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
Woo, Jason R; Shikanov, Sergey; Zorn, Kevin C; Shalhav, Arieh L; Zagaja, Gregory P
2009-12-01
Posterior rhabdosphincter (PR) reconstruction during robot-assisted radical prostatectomy (RARP) was introduced in an attempt to improve postoperative continence. In the present study, we evaluate time to achieve continence in patients who are undergoing RARP with and without PR reconstruction. A prospective RARP database was searched for most recent cases that were accomplished with PR reconstruction (group 1, n = 69) or with standard technique (group 2, n = 63). We performed the analysis applying two definitions of continence: 0 pads per day or 0-1 security pad per day. Patients were evaluated by telephone interview. Statistical analysis was carried out using the Kaplan-Meier method and log-rank test. With PR reconstruction, continence was improved when defined as 0-1 security pad per day (median time of 90 vs 150 days; P = 0.01). This difference did not achieve statistical significance when continence was defined as 0 pads per day (P = 0.12). A statistically significant improvement in continence rate and time to achieve continence is seen in patients who are undergoing PR reconstruction during RARP, with continence defined as 0-1 security/safety pad per day. A larger, prospective and randomized study is needed to better understand the impact of this technique on postoperative continence.
Grbovic, Vesna; Jurisic-Skevin, Aleksandra; Djukic, Svetlana; Stefanović, Srdjan; Nurkovic, Jasmin
2016-01-01
[Purpose] Painful diabetic polyneuropathy occurs as a complication in 16% of all patients with diabetes mellitus. [Subjects and Methods] A clinical, prospective, open-label, randomized intervention study was conducted in 60 adult patients with type 2 diabetes mellitus and distal sensorimotor diabetic neuropathy, divided into two groups of 30 patients. Patients in group A were treated with combined physical procedures, and patients in group B were treated with alpha-lipoic acid. [Results] There were statistically significant improvements in terminal latency and the amplitude of the action potential in group A patients, while group B patients showed statistically significant improvements in conduction velocity and terminal latency of n. peroneus. Group A patients showed statistically significant improvements in conduction velocity and terminal latency, while group B patients also showed statistically significant improvements in conduction velocity and terminal latency. This was reflected in significant improvements in the electrophysiological parameters (conduction velocity, amplitude, and latency) of the motor and sensory nerves (n. peroneus, n. suralis). [Conclusion] These results present further evidence justifying the use of physical agents in the treatment of diabetic sensorimotor polyneuropathy. PMID:27065527
Improved analyses using function datasets and statistical modeling
John S. Hogland; Nathaniel M. Anderson
2014-01-01
Raster modeling is an integral component of spatial analysis. However, conventional raster modeling techniques can require a substantial amount of processing time and storage space and have limited statistical functionality and machine learning algorithms. To address this issue, we developed a new modeling framework using C# and ArcObjects and integrated that framework...
Statistical process control for residential treated wood
Patricia K. Lebow; Timothy M. Young; Stan Lebow
2017-01-01
This paper is the first stage of a study that attempts to improve the process of manufacturing treated lumber through the use of statistical process control (SPC). Analysis of industrial and auditing agency data sets revealed there are differences between the industry and agency probability density functions (pdf) for normalized retention data. Resampling of batches of...
Zhang, Ying; Sun, Jin; Zhang, Yun-Jiao; Chai, Qian-Yun; Zhang, Kang; Ma, Hong-Li; Wu, Xiao-Ke; Liu, Jian-Ping
2016-10-21
Although Traditional Chinese Medicine (TCM) has been widely used in clinical settings, a major remaining challenge in TCM is to evaluate its efficacy scientifically. This randomized controlled trial aims to evaluate the efficacy and safety of berberine in the treatment of patients with polycystic ovary syndrome. In order to improve the transparency and research quality of this clinical trial, we prepared this statistical analysis plan (SAP). The trial design, primary and secondary outcomes, and safety outcomes are declared to reduce selection bias in data analysis and result reporting. We specified detailed methods for data management and statistical analyses, and outlined the statistics to be presented in the corresponding tables, listings, and graphs. The SAP provides more detailed information on data management and statistical analysis methods than the trial protocol. Any post hoc analyses can be identified by referring to this SAP, reducing possible selection and performance bias in the trial. This study is registered at ClinicalTrials.gov (NCT01138930), registered on 7 June 2010.
Effect of the absolute statistic on gene-sampling gene-set analysis methods.
Nam, Dougu
2017-06-01
Gene-set enrichment analysis and its modified versions have commonly been used for identifying altered functions or pathways in disease from microarray data. In particular, the simple gene-sampling gene-set analysis methods have been heavily used for datasets with only a few sample replicates. The biggest problem with this approach is the highly inflated false-positive rate. In this paper, the effect of the absolute gene statistic on gene-sampling gene-set analysis methods is systematically investigated. Thus far, the absolute gene statistic has merely been regarded as a supplementary method for capturing the bidirectional changes in each gene set. Here, it is shown that incorporating the absolute gene statistic in gene-sampling gene-set analysis substantially reduces the false-positive rate and improves the overall discriminatory ability. Its effect was investigated in terms of power, false-positive rate, and receiver operating characteristic curves for a number of simulated and real datasets. The performances of gene-set analysis methods in one-tailed (genome-wide association study) and two-tailed (gene expression data) tests were also compared and discussed.
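As a hedged illustration of the approach discussed above, the sketch below computes a gene-set score from absolute per-gene t-statistics and derives a p-value by gene sampling (randomly drawing gene sets of the same size). The expression matrix and set membership are synthetic, and the details of the published methods differ.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_genes, n_ctrl, n_case = 2000, 3, 3          # few replicates, as in the abstract
expr = rng.normal(size=(n_genes, n_ctrl + n_case))
group = np.array([0] * n_ctrl + [1] * n_case)

# Per-gene t-statistics, then the absolute statistic
t, _ = stats.ttest_ind(expr[:, group == 1], expr[:, group == 0], axis=1)
abs_t = np.abs(t)

gene_set = rng.choice(n_genes, size=50, replace=False)   # hypothetical gene set
observed = abs_t[gene_set].mean()

# Gene-sampling null: mean absolute statistic of random sets of the same size
n_perm = 5000
null = np.array([abs_t[rng.choice(n_genes, size=gene_set.size, replace=False)].mean()
                 for _ in range(n_perm)])
p_value = (np.sum(null >= observed) + 1) / (n_perm + 1)
print(f"set score={observed:.3f}, gene-sampling p={p_value:.4f}")
```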
NASA Astrophysics Data System (ADS)
Jokhio, Gul A.; Syed Mohsin, Sharifah M.; Gul, Yasmeen
2018-04-01
It has been established that adobe, in addition to being sustainable and economical, provides better indoor air quality than modern synthetic materials without requiring extensive amounts of energy. The material, however, exhibits weak structural behaviour when subjected to adverse loading conditions, and a wide range of mechanical properties has been reported in the literature owing to a lack of research and standardization. The present paper presents a statistical analysis of results obtained from compressive and flexural tests on adobe specimens with and without wire mesh reinforcement. The analysis shows that the compressive strength of adobe increases by about 43% after adding a single layer of wire mesh reinforcement, and this increase is statistically significant. The flexural response of adobe also improves with the addition of wire mesh reinforcement; however, the statistical significance of that improvement could not be established.
Spouge, J L
1992-01-01
Reports on retroviral primate trials rarely publish any statistical analysis. Present statistical methodology lacks appropriate tests for these trials and effectively discourages quantitative assessment. This paper describes the theory behind VACMAN, a user-friendly computer program that calculates statistics for in vitro and in vivo infectivity data. VACMAN's analysis applies to many retroviral trials using i.v. challenges and is valid whenever the viral dose-response curve has a particular shape. Statistics from actual i.v. retroviral trials illustrate some unappreciated principles of effective animal use: dilutions other than 1:10 can improve titration accuracy; infecting titration animals at the lowest doses possible can lower challenge doses; and finally, challenging test animals in small trials with more virus than controls safeguards against false successes, "reuses" animals, and strengthens experimental conclusions. The theory presented also explains the important concept of viral saturation, a phenomenon that may cause in vitro and in vivo titrations to agree for some retroviral strains and disagree for others. PMID:1323844
A critique of Rasch residual fit statistics.
Karabatsos, G
2000-01-01
In test analysis involving the Rasch model, a large degree of importance is placed on the "objective" measurement of individual abilities and item difficulties. The degree to which the objectivity properties are attained, of course, depends on the degree to which the data fit the Rasch model. It is therefore important to utilize fit statistics that accurately and reliably detect the person-item response inconsistencies that threaten the measurement objectivity of persons and items. Given this argument, it is somewhat surprising that far more emphasis is placed on the objective measurement of persons and items than on the measurement quality of Rasch fit statistics. This paper provides a critical analysis of the residual fit statistics of the Rasch model, arguably the most often used fit statistics, in an effort to illustrate that the task of Rasch fit analysis is not as simple and straightforward as it appears to be. The faulty statistical properties of the residual fit statistics do not allow either a convenient or a straightforward approach to Rasch fit analysis. For instance, given a residual fit statistic, the use of a single minimum critical value for misfit diagnosis across different testing situations, where the situations vary in sample and test properties, leads to both the overdetection and underdetection of misfit. To improve this situation, it is argued that psychometricians need to implement residual-free Rasch fit statistics that are based on the number of Guttman response errors, or use indices that are statistically optimal in detecting measurement disturbances.
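For context, the residual fit statistics criticized above are typically the infit and outfit mean-square statistics computed from standardized Rasch residuals. The sketch below shows those conventional computations for dichotomous responses, given person and item parameters assumed to have been estimated elsewhere; it illustrates the statistics being critiqued, not the residual-free alternatives the paper advocates.

```python
import numpy as np

def rasch_prob(theta, b):
    """P(correct) under the dichotomous Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))

def infit_outfit(x, theta, b):
    """Item infit/outfit mean squares from standardized residuals.
    x: persons x items matrix of 0/1 responses."""
    p = rasch_prob(theta, b)
    w = p * (1.0 - p)                              # response variance
    z2 = (x - p) ** 2 / w                          # squared standardized residuals
    outfit = z2.mean(axis=0)                       # unweighted mean square per item
    infit = (w * z2).sum(axis=0) / w.sum(axis=0)   # information-weighted mean square
    return infit, outfit

# Hypothetical parameters and simulated responses
rng = np.random.default_rng(2)
theta = rng.normal(size=200)                       # person abilities
b = np.linspace(-2, 2, 10)                         # item difficulties
x = (rng.random((200, 10)) < rasch_prob(theta, b)).astype(float)
infit, outfit = infit_outfit(x, theta, b)
print(np.round(infit, 2), np.round(outfit, 2))
```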
Statistical Evaluation of Time Series Analysis Techniques
NASA Technical Reports Server (NTRS)
Benignus, V. A.
1973-01-01
The performance of a modified version of NASA's multivariate spectrum analysis program is discussed. A multiple regression model was used to make the revisions. Performance improvements were documented and compared to the standard fast Fourier transform by Monte Carlo techniques.
A methodological analysis of chaplaincy research: 2000-2009.
Galek, Kathleen; Flannelly, Kevin J; Jankowski, Katherine R B; Handzo, George F
2011-01-01
The present article presents a comprehensive review and analysis of quantitative research conducted in the United States on chaplaincy and closely related topics published between 2000 and 2009. A combined search strategy identified 49 quantitative studies in 13 journals. The analysis focuses on the methodological sophistication of the studies, compared to earlier research on chaplaincy and pastoral care. Cross-sectional surveys of convenience samples still dominate the field, but sample sizes have increased somewhat over the past three decades. Reporting of the validity and reliability of measures continues to be low, although reporting of response rates has improved. Improvements in the use of inferential statistics and statistical controls were also observed, compared to previous research. The authors conclude that more experimental research is needed on chaplaincy, along with an increased use of hypothesis testing, regardless of the research designs that are used.
On the Use of Statistics in Design and the Implications for Deterministic Computer Experiments
NASA Technical Reports Server (NTRS)
Simpson, Timothy W.; Peplinski, Jesse; Koch, Patrick N.; Allen, Janet K.
1997-01-01
Perhaps the most prevalent use of statistics in engineering design is through Taguchi's parameter and robust design -- using orthogonal arrays to compute signal-to-noise ratios in a process of design improvement. In our view, however, there is an equally exciting use of statistics in design that could become just as prevalent: it is the concept of metamodeling whereby statistical models are built to approximate detailed computer analysis codes. Although computers continue to get faster, analysis codes always seem to keep pace so that their computational time remains non-trivial. Through metamodeling, approximations of these codes are built that are orders of magnitude cheaper to run. These metamodels can then be linked to optimization routines for fast analysis, or they can serve as a bridge for integrating analysis codes across different domains. In this paper we first review metamodeling techniques that encompass design of experiments, response surface methodology, Taguchi methods, neural networks, inductive learning, and kriging. We discuss their existing applications in engineering design and then address the dangers of applying traditional statistical techniques to approximate deterministic computer analysis codes. We conclude with recommendations for the appropriate use of metamodeling techniques in given situations and how common pitfalls can be avoided.
Research design and statistical methods in Pakistan Journal of Medical Sciences (PJMS)
Akhtar, Sohail; Shah, Syed Wadood Ali; Rafiq, M.; Khan, Ajmal
2016-01-01
Objective: This article compares the study designs and statistical methods used in the 2005, 2010 and 2015 issues of the Pakistan Journal of Medical Sciences (PJMS). Methods: Only original articles of PJMS were considered for the analysis. The articles were carefully reviewed for statistical methods and designs, which were then recorded accordingly. The frequency of each statistical method and research design was estimated and compared with previous years. Results: A total of 429 articles were evaluated (n=74 in 2005, n=179 in 2010, n=176 in 2015), of which 171 (40%) were cross-sectional and 116 (27%) were prospective study designs. A variety of statistical methods was found in the analysis. The most frequent methods were descriptive statistics (n=315, 73.4%), chi-square/Fisher’s exact tests (n=205, 47.8%) and Student's t-test (n=186, 43.4%). There was a significant increase over the time period in the use of several statistical methods: the t-test, chi-square/Fisher’s exact test, logistic regression, epidemiological statistics, and non-parametric tests. Conclusion: This study shows that a diverse range of statistical methods has been used in the research articles of PJMS and that their frequency increased from 2005 to 2015. Descriptive statistics remained the most frequent method of statistical analysis in the published articles, and the cross-sectional design was the most common study design. PMID:27022365
Jia, Yongliang; Leung, Siu-wai
2015-11-01
There have been no systematic reviews, let alone meta-analyses, of randomized controlled trials (RCTs) comparing tongxinluo capsule (TXL) and beta-blockers in treating angina pectoris. This study aimed to evaluate the efficacy of TXL and beta-blockers in treating angina pectoris by a meta-analysis of eligible RCTs. The RCTs comparing TXL with beta-blockers (including metoprolol) in treating angina pectoris were searched and retrieved from databases including PubMed, Chinese National Knowledge Infrastructure, and WanFang Data. Eligible RCTs were selected according to prespecified criteria. Meta-analysis was performed on the odds ratios (OR) of symptomatic and electrocardiographic (ECG) improvements after treatment. Subgroup analysis, sensitivity analysis, meta-regression, and publication bias analysis were conducted to evaluate the robustness of the results. Seventy-three RCTs published between 2000 and 2014 with 7424 participants were eligible. Overall ORs comparing TXL with beta-blockers were 3.40 (95% confidence interval [CI], 2.97-3.89; p<0.0001) for symptomatic improvement and 2.63 (95% CI, 2.29-3.02; p<0.0001) for ECG improvement. Subgroup analysis and sensitivity analysis found no statistically significant dependence of overall ORs on specific study characteristics except efficacy criteria. Meta-regression found no significant covariate except sample size for the data on symptomatic improvement. Publication bias was statistically significant. TXL seems to be more effective than beta-blockers in treating angina pectoris, on the basis of the eligible RCTs. Further RCTs are warranted to reduce publication bias and verify efficacy.
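As a hedged sketch of the pooled odds-ratio computation underlying a meta-analysis like this one, the code below combines per-trial 2x2 counts into fixed-effect and DerSimonian-Laird random-effects estimates on the log-OR scale. The counts are invented, and the published analysis may differ in detail (e.g., continuity corrections, software).

```python
import numpy as np

# Hypothetical 2x2 counts per trial:
# a (TXL improved), b (TXL not), c (control improved), d (control not)
trials = np.array([
    [45,  5, 35, 15],
    [80, 20, 60, 40],
    [30, 10, 22, 18],
], dtype=float)

a, b, c, d = trials.T
log_or = np.log((a * d) / (b * c))
var = 1 / a + 1 / b + 1 / c + 1 / d

# Fixed-effect (inverse variance) pooling
w = 1 / var
fe = np.sum(w * log_or) / np.sum(w)

# DerSimonian-Laird random-effects pooling
q = np.sum(w * (log_or - fe) ** 2)
tau2 = max(0.0, (q - (len(trials) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1 / (var + tau2)
re = np.sum(w_re * log_or) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
ci = np.exp([re - 1.96 * se_re, re + 1.96 * se_re])
print(f"pooled OR (random effects) = {np.exp(re):.2f}, 95% CI {ci.round(2)}")
```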
Ignjatović, Aleksandra; Stojanović, Miodrag; Milošević, Zoran; Anđelković Apostolović, Marija
2017-12-02
Developing risk models in medicine is not only appealing but also associated with many obstacles across the different aspects of predictive model development. Initially, the association of one or more biomarkers with a specific outcome was established by statistical significance, but novel and more demanding questions have required the development of new and more complex statistical techniques. Progress in statistical analysis in biomedical research is best observed through the history of the Framingham study and the development of the Framingham score. Evaluation of predictive models rests on a combination of several metrics. When logistic regression or Cox proportional hazards regression is used, calibration testing and ROC curve analysis should be mandatory and eliminatory, while newer statistical techniques should take a central place. To obtain complete information about a new marker added to a model, it is now recommended to use reclassification tables, calculating the net reclassification index and the integrated discrimination improvement. Decision curve analysis is a novel method for evaluating the clinical usefulness of a predictive model. It may be noted that customizing and fine-tuning the Framingham risk score initiated much of this development in statistical analysis. A clinically applicable predictive model should be a trade-off between all of the abovementioned statistical metrics: between calibration and discrimination, accuracy and decision-making, costs and benefits, and quality and quantity of the patient's life.
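As a hedged illustration of the reclassification metrics mentioned above, the sketch below computes the category-free (continuous) net reclassification improvement and the integrated discrimination improvement from predicted risks of a baseline model and a model with an added marker. The risk vectors are synthetic, and real analyses usually add confidence intervals (e.g., by bootstrap).

```python
import numpy as np

def continuous_nri_idi(p_old, p_new, y):
    """Category-free NRI and IDI for a binary outcome y (1 = event)."""
    ev, ne = (y == 1), (y == 0)
    up, down = (p_new > p_old), (p_new < p_old)
    nri = (up[ev].mean() - down[ev].mean()) + (down[ne].mean() - up[ne].mean())
    idi = (p_new[ev].mean() - p_old[ev].mean()) - (p_new[ne].mean() - p_old[ne].mean())
    return nri, idi

rng = np.random.default_rng(3)
y = rng.binomial(1, 0.2, size=500)                         # hypothetical outcomes
p_old = np.clip(0.2 + 0.1 * rng.normal(size=500), 0, 1)    # baseline model risks
p_new = np.clip(p_old + 0.05 * (y - 0.2) + 0.02 * rng.normal(size=500), 0, 1)
nri, idi = continuous_nri_idi(p_old, p_new, y)
print(f"NRI={nri:.3f}, IDI={idi:.3f}")
```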
Austin, Peter C
2007-11-01
I conducted a systematic review of the use of propensity score matching in the cardiovascular surgery literature. I examined the adequacy of reporting and whether appropriate statistical methods were used. I examined 60 articles published in the Annals of Thoracic Surgery, European Journal of Cardio-thoracic Surgery, Journal of Cardiovascular Surgery, and the Journal of Thoracic and Cardiovascular Surgery between January 1, 2004, and December 31, 2006. Thirty-one of the 60 studies did not provide adequate information on how the propensity score-matched pairs were formed. Eleven (18%) of studies did not report on whether matching on the propensity score balanced baseline characteristics between treated and untreated subjects in the matched sample. No studies used appropriate methods to compare baseline characteristics between treated and untreated subjects in the propensity score-matched sample. Eight (13%) of the 60 studies explicitly used statistical methods appropriate for the analysis of matched data when estimating the effect of treatment on the outcomes. Two studies used appropriate methods for some outcomes, but not for all outcomes. Thirty-nine (65%) studies explicitly used statistical methods that were inappropriate for matched-pairs data when estimating the effect of treatment on outcomes. Eleven studies did not report the statistical tests that were used to assess the statistical significance of the treatment effect. Analysis of propensity score-matched samples tended to be poor in the cardiovascular surgery literature. Most statistical analyses ignored the matched nature of the sample. I provide suggestions for improving the reporting and analysis of studies that use propensity score matching.
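To make concrete the two steps the review checks for (forming matched pairs, then using an analysis appropriate for matched data), the sketch below fits a propensity model, performs greedy 1:1 nearest-neighbour matching without replacement, and tests a binary outcome with an exact McNemar test on the discordant pairs. The data are simulated, and the matching details (calipers, balance diagnostics) are deliberately minimal.

```python
import numpy as np
from scipy.stats import binomtest
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 600
x = rng.normal(size=(n, 3))                          # baseline covariates
treat = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))  # treatment depends on covariates
outcome = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * treat + x[:, 1]))))

# Step 1: propensity scores
ps = LogisticRegression().fit(x, treat).predict_proba(x)[:, 1]

# Step 2: greedy 1:1 nearest-neighbour matching without replacement
treated = np.flatnonzero(treat == 1)
controls = list(np.flatnonzero(treat == 0))
pairs = []
for i in treated:
    j = min(controls, key=lambda c: abs(ps[i] - ps[c]))
    pairs.append((i, j))
    controls.remove(j)

# Step 3: matched-pairs analysis (exact McNemar test on discordant pairs)
b = sum(outcome[i] == 1 and outcome[j] == 0 for i, j in pairs)
c = sum(outcome[i] == 0 and outcome[j] == 1 for i, j in pairs)
p_value = binomtest(b, b + c, 0.5).pvalue if (b + c) > 0 else 1.0
print(f"discordant pairs: b={b}, c={c}, McNemar exact p={p_value:.4f}")
```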
Changes in parameters of right ventricular function with cardiac resynchronization therapy.
Sharma, Abhishek; Lavie, Carl J; Vallakati, Ajay; Garg, Akash; Goel, Sunny; Lazar, Jason; Fonarow, Gregg C
2017-11-01
Studies have shown that cardiac resynchronization therapy (CRT) significantly improves right ventricular (RV) size and function in patients with heart failure (HF). However, it is unclear whether CRT improves RV function independent of baseline clinical variables. A systematic search of studies published between 1966 and August 31, 2015 was conducted using PubMed, CINAHL, Cochrane CENTRAL and the Web of Science databases. Studies reporting tricuspid annular plane systolic excursion (TAPSE), RV basal strain, RV long axis diameter, RV short axis diameter, or RV fractional area change (FAC) before and after CRT were identified. A meta-analysis was performed using random effects with the inverse variance method to determine the pooled mean difference in various parameters of RV function after CRT. Meta-regression analysis was performed to test the relationship between the change in various parameters of RV function after CRT and the covariates age, QRS duration, and left ventricular ejection fraction (LVEF). Thirteen studies (N=1541) were selected for final analysis. CRT led to statistically significant increases in TAPSE [1.21 (95% CI 0.55-1.86; p<0.001)], RV FAC [2.26 (95% CI 0.50-4.01; p<0.001)] and basal strain [2.82 (95% CI 0.59-5.05; p<0.001)] and statistically significant decreases in mean RV long axis diameter [-2.94 (95% CI -5.07 to -0.82; p=0.005)] and short axis diameter [-1.39 (95% CI -2.10 to -0.67; p=0.876)] after a mean follow-up period of 9 months. However, after meta-regression analysis with age, QRS duration, and baseline LVEF as covariates, there was no significant improvement in any of the parameters of RV function after CRT. There was a statistically significant improvement in TAPSE, RV basal strain, RV fractional area change, and RV long axis and short axis diameters with CRT. However, improvement in these echocardiographic parameters of RV function after CRT was not independent of baseline clinical variables but statistically dependent on age, QRS duration and baseline LVEF. © 2017 Wiley Periodicals, Inc.
Secondary School Mathematics Curriculum Improvement Study Information Bulletin 7.
ERIC Educational Resources Information Center
Secondary School Mathematics Curriculum Improvement Study, New York, NY.
The background, objectives, and design of Secondary School Mathematics Curriculum Improvement Study (SSMCIS) are summarized. Details are given of the content of the text series, "Unified Modern Mathematics," in the areas of algebra, geometry, linear algebra, probability and statistics, analysis (calculus), logic, and computer…
DOT National Transportation Integrated Search
1975-02-01
Rail fasteners for concrete ties and direct fixation and bolted rail joints have been identified as key components for improving track performance. However, the lack of statistical load data limits the development of improved design criteria and eval...
On Improving the Experiment Methodology in Pedagogical Research
ERIC Educational Resources Information Center
Horakova, Tereza; Houska, Milan
2014-01-01
The paper shows how the methodology for a pedagogical experiment can be improved through including the pre-research stage. If the experiment has the form of a test procedure, an improvement of methodology can be achieved using for example the methods of statistical and didactic analysis of tests which are traditionally used in other areas, i.e.…
Improving information retrieval in functional analysis.
Rodriguez, Juan C; González, Germán A; Fresno, Cristóbal; Llera, Andrea S; Fernández, Elmer A
2016-12-01
Transcriptome analysis is essential to understand the mechanisms regulating key biological processes and functions. The first step usually consists of identifying candidate genes; to find out which pathways are affected by those genes, however, functional analysis (FA) is mandatory. The most frequently used strategies for this purpose are Gene Set and Singular Enrichment Analysis (GSEA and SEA) over Gene Ontology. Several statistical methods have been developed and compared in terms of computational efficiency and/or statistical appropriateness. However, whether their results are similar or complementary, the sensitivity to parameter settings, or possible bias in the analyzed terms has not been addressed so far. Here, two GSEA and four SEA methods and their parameter combinations were evaluated in six datasets by comparing two breast cancer subtypes with well-known differences in genetic background and patient outcomes. We show that GSEA and SEA lead to different results depending on the chosen statistic, model and/or parameters. Both approaches provide complementary results from a biological perspective. Hence, an Integrative Functional Analysis (IFA) tool is proposed to improve information retrieval in FA. It provides a common gene expression analytic framework that grants a comprehensive and coherent analysis. Only a minimal user parameter setting is required, since the best SEA/GSEA alternatives are integrated. IFA utility was demonstrated by evaluating four prostate cancer and the TCGA breast cancer microarray datasets, which showed its biological generalization capabilities. Copyright © 2016 Elsevier Ltd. All rights reserved.
Sale, Patrizio; De Pandis, Maria Francesca; Le Pera, Domenica; Sova, Ivan; Cimolin, Veronica; Ancillao, Andrea; Albertini, Giorgio; Galli, Manuela; Stocchi, Fabrizio; Franceschini, Marco
2013-05-24
Over the last years, the introduction of robotic technologies into Parkinson's disease rehabilitation settings has progressed from concept to reality. However, the benefit of robotic training remains elusive. This pilot randomized controlled observer trial aimed to investigate the feasibility, effectiveness and efficacy of a new end-effector robot training in people with mild Parkinson's disease. Design: pilot randomized controlled trial. Robot training was feasible, acceptable and safe, and the participants completed 100% of the prescribed training sessions. A statistically significant improvement in gait index was found in favour of the experimental group (EG) (T0 versus T1). In particular, statistical analysis of the primary outcome (gait speed) using the Friedman test showed statistically significant improvements for the EG (p = 0.0195). Friedman tests of step length, left (p = 0.0195) and right (p = 0.0195), and stride length, left (p = 0.0078) and right (p = 0.0195), also showed statistically significant gains. No statistically significant improvements were found in the control group (CG). Robot training is a feasible and safe form of rehabilitative exercise for cognitively intact people with mild PD. This original approach can contribute to short-term lower-limb motor recovery in idiopathic PD patients. The focus on gait recovery is a further characteristic that makes this research relevant to clinical practice. On the whole, the simplicity of the treatment, the lack of side effects, and the positive results support the recommendation to extend the use of this treatment. Further investigation of the long-term effectiveness of robot training is warranted. ClinicalTrials.gov NCT01668407.
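The Friedman test used above compares repeated measurements on the same subjects across time points (or conditions) without assuming normality. A minimal sketch with SciPy is shown below; the gait-speed arrays are synthetic values at three hypothetical assessment times, not the trial data.

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(5)
n_subjects = 20
t0 = rng.normal(0.9, 0.1, n_subjects)         # gait speed (m/s) at baseline
t1 = t0 + rng.normal(0.08, 0.05, n_subjects)  # after training
t2 = t0 + rng.normal(0.10, 0.05, n_subjects)  # at follow-up

stat, p = friedmanchisquare(t0, t1, t2)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")
```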
ASCS online fault detection and isolation based on an improved MPCA
NASA Astrophysics Data System (ADS)
Peng, Jianxin; Liu, Haiou; Hu, Yuhui; Xi, Junqiang; Chen, Huiyan
2014-09-01
Multi-way principal component analysis (MPCA) has received considerable attention and been widely used in process monitoring. A traditional MPCA algorithm unfolds multiple batches of historical data into a two-dimensional matrix and cuts the matrix along the time axis to form subspaces. However, low efficiency of the subspaces and difficult fault isolation are common disadvantages of the principal component model. This paper presents a new subspace construction method based on a kernel density estimation function that can effectively reduce the amount of subspace information that must be stored. The MPCA model and the knowledge base are built on the new subspace. Fault detection and isolation with the squared prediction error (SPE) statistic and the Hotelling T² statistic are then realized in process monitoring. When a fault occurs, fault isolation based on the SPE statistic is achieved by residual contribution analysis of the different variables. For fault isolation within a subspace based on the T² statistic, the relationship between the statistical indicator and the state variables is constructed, and constraint conditions are presented to check the validity of the fault isolation. Then, to improve the robustness of fault isolation against unexpected disturbances, a statistical method is adopted to relate single subspaces to multiple subspaces and increase the rate of correct fault isolation. Finally, fault detection and isolation based on the improved MPCA is used to monitor the automatic shift control system (ASCS) to demonstrate the correctness and effectiveness of the algorithm. The research proposes a new subspace construction method to reduce the required storage capacity and improve the robustness of the principal component model, and establishes the relationship between the state variables and the fault detection indicators for fault isolation.
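For reference, the two monitoring indices named above are conventionally computed from a principal component model of normal operating data: Hotelling's T² in the retained score space and the squared prediction error (SPE, or Q) in the residual space. The sketch below shows those standard computations with scikit-learn on simulated data; it is a generic PCA monitoring illustration, not the multi-way (batch-unfolding) model or the subspace construction described in the paper.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)
X_train = rng.normal(size=(500, 8))          # normal operating data (already scaled)
pca = PCA(n_components=3).fit(X_train)

def t2_spe(x):
    """Hotelling T^2 and squared prediction error for one observation."""
    t = pca.transform(x.reshape(1, -1))[0]                 # scores
    t2 = np.sum(t**2 / pca.explained_variance_)            # score-space distance
    x_hat = pca.inverse_transform(t.reshape(1, -1))[0]     # reconstruction
    spe = np.sum((x - x_hat) ** 2)                         # residual-space distance
    return t2, spe

# Empirical 99th-percentile control limits from the training data
train_stats = np.array([t2_spe(x) for x in X_train])
t2_lim, spe_lim = np.percentile(train_stats, 99, axis=0)

x_new = rng.normal(size=8) + np.array([0, 0, 0, 0, 3, 0, 0, 0])   # simulated fault
t2, spe = t2_spe(x_new)
print(f"T2={t2:.2f} (limit {t2_lim:.2f}), SPE={spe:.2f} (limit {spe_lim:.2f})")
```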
Statistical process management: An essential element of quality improvement
NASA Astrophysics Data System (ADS)
Buckner, M. R.
Successful quality improvement requires a balanced program involving the three elements that control quality: organization, people and technology. The focus of the SPC/SPM User's Group is to advance the technology component of Total Quality by networking within the Group and by providing an outreach within Westinghouse to foster the appropriate use of statistical techniques to achieve Total Quality. SPM encompasses the disciplines by which a process is measured against its intrinsic design capability, in the face of measurement noise and other obscuring variability. SPM tools facilitate decisions about the process that generated the data. SPM deals typically with manufacturing processes, but with some flexibility of definition and technique it accommodates many administrative processes as well. The techniques of SPM are those of Statistical Process Control, Statistical Quality Control, Measurement Control, and Experimental Design. In addition, techniques such as job and task analysis and concurrent engineering are important elements of systematic planning and analysis that are needed early in the design process to ensure success. The SPC/SPM User's Group is endeavoring to achieve its objectives by sharing successes that have occurred within members' own Westinghouse departments as well as within other US and foreign industries. In addition, failures are reviewed to establish lessons learned in order to improve future applications. In broader terms, the Group is interested in making SPM the accepted way of doing business within Westinghouse.
Improved Statistics for Genome-Wide Interaction Analysis
Ueki, Masao; Cordell, Heather J.
2012-01-01
Recently, Wu and colleagues [1] proposed two novel statistics for genome-wide interaction analysis using case/control or case-only data. In computer simulations, their proposed case/control statistic outperformed competing approaches, including the fast-epistasis option in PLINK and logistic regression analysis under the correct model; however, reasons for its superior performance were not fully explored. Here we investigate the theoretical properties and performance of Wu et al.'s proposed statistics and explain why, in some circumstances, they outperform competing approaches. Unfortunately, we find minor errors in the formulae for their statistics, resulting in tests that have higher than nominal type 1 error. We also find minor errors in PLINK's fast-epistasis and case-only statistics, although theory and simulations suggest that these errors have only negligible effect on type 1 error. We propose adjusted versions of all four statistics that, both theoretically and in computer simulations, maintain correct type 1 error rates under the null hypothesis. We also investigate statistics based on correlation coefficients that maintain similar control of type 1 error. Although designed to test specifically for interaction, we show that some of these previously-proposed statistics can, in fact, be sensitive to main effects at one or both loci, particularly in the presence of linkage disequilibrium. We propose two new “joint effects” statistics that, provided the disease is rare, are sensitive only to genuine interaction effects. In computer simulations we find, in most situations considered, that highest power is achieved by analysis under the correct genetic model. Such an analysis is unachievable in practice, as we do not know this model. However, generally high power over a wide range of scenarios is exhibited by our joint effects and adjusted Wu statistics. We recommend use of these alternative or adjusted statistics and urge caution when using Wu et al.'s originally-proposed statistics, on account of the inflated error rate that can result. PMID:22496670
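The adjusted statistics proposed in that work are not reproduced here; as a hedged baseline for comparison, the sketch below runs the standard logistic regression interaction test between two SNPs in case/control data using statsmodels, i.e., the kind of regression-based benchmark the abstract mentions. Genotypes and labels are simulated, and coding choices (additive 0/1/2 here) matter in practice.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 2000
g1 = rng.binomial(2, 0.3, n)          # additive genotype coding at locus 1
g2 = rng.binomial(2, 0.4, n)          # locus 2
eta = -1.0 + 0.1 * g1 + 0.1 * g2 + 0.3 * g1 * g2   # simulated interaction effect
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))

X = sm.add_constant(np.column_stack([g1, g2, g1 * g2]))
fit = sm.Logit(y, X).fit(disp=0)
print("interaction beta = %.3f, Wald p = %.3g" % (fit.params[3], fit.pvalues[3]))
```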
Nightingale, Gemma; Shehab, Qasem; Kandiah, Chandrakumaran; Rush, Lorraine; Rowe-Jones, Clare; Phillips, Christian H
2018-02-01
To determine whether sexual dysfunction in women with recurrent urinary tract infections (RUTI) improved following treatment with intravesical hyaluronic acid (HA) instillations. Ethical approval was obtained for a prospective study to be performed. Patients referred for bladder instillations to treat RUTI, and who were sexually active, were recruited to the study. A selection of validated questionnaires (ICIQ-UI, ICIQ-VS, FSDS-R, ICIQ-FLUTS, O'Leary/Sant and PGI-I) was completed at baseline and at three, six and 12 months after initiation of treatment with bladder instillations. Treatment consisted of weekly bladder instillations with a preparation containing HA for four weeks, then monthly for two further treatments. Results were entered into SPSS for statistical analysis, and the power calculation indicated that 22 patients were required to detect statistical significance. Thirty women were included in the study. FSDS-R was used to determine sexual dysfunction and showed that 57% of patients with RUTI had significant sexual distress. There was a significant improvement in FSDS-R at three, six and 12 months when compared to baseline (Friedman two-way analysis, p < 0.001). ICIQ-FLUTS F and I scores, O'Leary/Sant, ICIQ-VS and PGI-I also showed a statistically significant improvement throughout the period of follow-up. A statistically significant negative correlation was found between FSDS-R and PGI-I at 12 months (r = -0.468, p = 0.009). We have reinforced previous work showing the association between RUTI and sexual dysfunction, and an improvement in bladder symptoms following treatment with HA. To our knowledge, this is the first study to demonstrate an improvement in sexual dysfunction following intravesical treatment with HA that is sustained for up to 12 months. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
Cuomo, Raphael E; Mackey, Tim K
2014-12-02
To explore healthcare policy and system improvements that would more proactively respond to future penetration of counterfeit cancer medications in the US drug supply chain using geospatial analysis. A statistical and geospatial analysis of areas that received notices from the Food and Drug Administration (FDA) about the possibility of counterfeit Avastin penetrating the US drug supply chain. Data from FDA warning notices were compared to data from 44 demographic variables available from the US Census Bureau via correlation, means testing and geospatial visualisation. Results were interpreted in light of existing literature in order to recommend improvements to surveillance of counterfeit medicines. This study analysed 791 distinct healthcare provider addresses that received FDA warning notices across 30,431 zip codes in the USA. Statistical outputs were Pearson's correlation coefficients and t values. Geospatial outputs were cartographic visualisations. These data were used to generate the overarching study outcome, which was a recommendation for a drug safety surveillance strategy congruent with existing literature on counterfeit medication. Zip codes with greater numbers of individuals aged 65+ and greater numbers of ethnic white individuals were most correlated with receipt of a counterfeit Avastin notice. Geospatial visualisations designed in conjunction with statistical analysis of demographic variables appeared more capable of suggesting areas and populations that may be at risk for undetected counterfeit Avastin penetration. This study suggests that dual incorporation of statistical and geospatial analysis in surveillance of counterfeit medicine may be helpful in guiding efforts to prevent, detect and visualise counterfeit medicine penetrations in the US drug supply chain and other settings. Importantly, the information generated by these analyses could be utilised to identify at-risk populations associated with demographic characteristics. Stakeholders should explore these results as another tool to improve counterfeit medicine surveillance. Published by the BMJ Publishing Group Limited.
Ximenes, Sofia; Silva, Ana; Soares, António; Flores-Colen, Inês; de Brito, Jorge
2016-05-04
Statistical models using multiple linear regression are some of the most widely used methods to study the influence of independent variables in a given phenomenon. This study's objective is to understand the influence of the various components of aerogel-based renders on their thermal and mechanical performance, namely cement (three types), fly ash, aerial lime, silica sand, expanded clay, type of aerogel, expanded cork granules, expanded perlite, air entrainers, resins (two types), and rheological agent. The statistical analysis was performed using SPSS (Statistical Package for Social Sciences), based on 85 mortar mixes produced in the laboratory and on their values of thermal conductivity and compressive strength obtained using tests in small-scale samples. The results showed that aerial lime assumes the main role in improving the thermal conductivity of the mortars. Aerogel type, fly ash, expanded perlite and air entrainers are also relevant components for a good thermal conductivity. Expanded clay can improve the mechanical behavior and aerogel has the opposite effect.
Extending Working Life: Which Competencies are Crucial in Near-Retirement Age?
Wiktorowicz, Justyna
2018-01-01
Nowadays, one of the most important economic and social phenomena is population ageing. Due to the low activity rate of older people, one of the most important challenges is to take actions supporting active ageing, which are intended to extend working life and, along with it, improve the competencies of older people. The aim of this paper is to evaluate the relevance of different competencies for extending working life, limiting the analysis to Poland. The paper also assesses the competencies of mature Polish people (aged 50+, but still of working age). In the statistical analysis, I used logistic regression, as well as descriptive statistics and appropriate statistical tests. The results show that among the actions aimed at extending working life, the most important are those related to lifelong learning, targeted at improving the competencies of the older generation. The competencies (both soft and hard) of people aged 50+ are more important than their formal education.
Phillips, David E; AbouZahr, Carla; Lopez, Alan D; Mikkelsen, Lene; de Savigny, Don; Lozano, Rafael; Wilmoth, John; Setel, Philip W
2015-10-03
In this Series paper, we examine whether well functioning civil registration and vital statistics (CRVS) systems are associated with improved population health outcomes. We present a conceptual model connecting CRVS to wellbeing, and describe an ecological association between CRVS and health outcomes. The conceptual model posits that the legal identity that civil registration provides to individuals is key to access entitlements and services. Vital statistics produced by CRVS systems provide essential information for public health policy and prevention. These outcomes benefit individuals and societies, including improved health. We use marginal linear models and lag-lead analysis to measure ecological associations between a composite metric of CRVS performance and three health outcomes. Results are consistent with the conceptual model: improved CRVS performance coincides with improved health outcomes worldwide in a temporally consistent manner. Investment to strengthen CRVS systems is not only an important goal for individuals and societies, but also a development imperative that is good for health. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Monteiro, Mayra; Oliveira, Victor; Santos, Francisco; Barros Neto, Eduardo; Silva, Karyn; Silva, Rayane; Henrique, João; Chibério, Abimaelle
2017-08-01
In order to obtain cassava starch films with mechanical properties improved relative to the synthetic polymers used in packaging production, a complete 2³ factorial design was carried out to investigate which factors significantly influence the tensile strength of the biofilm. The factors investigated were the cassava starch, glycerol and modified clay contents. Modified bentonite clay was used as the filler of the biofilm, and glycerol was the plasticizer used to thermoplastify the cassava starch. The factorial analysis suggested a regression model capable of predicting the optimal mechanical property of the cassava starch film through maximization of the tensile strength. The reliability of the regression model was checked against the experimental data using a Pareto chart. Modified clay was the factor of greatest statistical significance on the observed response variable, being the factor that contributed most to the improvement of the mechanical properties of the starch film. The factorial experiments showed that the interaction of glycerol with both modified clay and cassava starch was significant for the reduction of biofilm ductility. Modified clay and cassava starch contributed to the maximization of biofilm ductility, while glycerol contributed to its minimization.
Statistical considerations in the development of injury risk functions.
McMurry, Timothy L; Poplin, Gerald S
2015-01-01
We address 4 frequently misunderstood and important statistical ideas in the construction of injury risk functions. These include the similarities of survival analysis and logistic regression, the correct scale on which to construct pointwise confidence intervals for injury risk, the ability to discern which form of injury risk function is optimal, and the handling of repeated tests on the same subject. The statistical models are explored through simulation and examination of the underlying mathematics. We provide recommendations for the statistically valid construction and correct interpretation of single-predictor injury risk functions. This article aims to provide useful and understandable statistical guidance to improve the practice in constructing injury risk functions.
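One of the points above, constructing pointwise confidence intervals for injury risk on the correct scale, can be illustrated with a minimal sketch: fit a logistic model for injury probability versus stimulus magnitude, form the interval on the linear-predictor (logit) scale, and only then transform to the probability scale. The data below are simulated; the choice of model form and the handling of censored or repeated tests are separate issues discussed in the paper.

```python
import numpy as np
import statsmodels.api as sm
from scipy.special import expit

rng = np.random.default_rng(8)
stim = rng.uniform(10, 60, 150)                         # stimulus magnitude (e.g., kN)
y = rng.binomial(1, expit(-6 + 0.15 * stim))            # 1 = injury observed

X = sm.add_constant(stim)
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()

grid = sm.add_constant(np.linspace(10, 60, 100))
eta = grid @ fit.params                                  # linear predictor
se = np.sqrt(np.einsum("ij,jk,ik->i", grid, fit.cov_params(), grid))
risk = expit(eta)
lower, upper = expit(eta - 1.96 * se), expit(eta + 1.96 * se)   # transform last
print(risk[:3].round(3), lower[:3].round(3), upper[:3].round(3))
```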
BATMAN: Bayesian Technique for Multi-image Analysis
NASA Astrophysics Data System (ADS)
Casado, J.; Ascasibar, Y.; García-Benito, R.; Guidi, G.; Choudhury, O. S.; Bellocchi, E.; Sánchez, S. F.; Díaz, A. I.
2017-04-01
This paper describes the Bayesian Technique for Multi-image Analysis (BATMAN), a novel image-segmentation technique based on Bayesian statistics that characterizes any astronomical data set containing spatial information and performs a tessellation based on the measurements and errors provided as input. The algorithm iteratively merges spatial elements as long as they are statistically consistent with carrying the same information (i.e., identical signal within the errors). We illustrate its operation and performance with a set of test cases including both synthetic and real integral-field spectroscopic data. The output segmentations adapt to the underlying spatial structure, regardless of its morphology and/or the statistical properties of the noise. The quality of the recovered signal represents an improvement with respect to the input, especially in regions with low signal-to-noise ratio. However, the algorithm may be sensitive to small-scale random fluctuations, and its performance in the presence of spatial gradients is limited. Due to these effects, errors may be underestimated by as much as a factor of 2. Our analysis reveals that the algorithm prioritizes conservation of all the statistically significant information over noise reduction, and that the precise choice of the input data has a crucial impact on the results. Hence, the philosophy of BaTMAn is not to be used as a 'black box' to improve the signal-to-noise ratio, but as a new approach to characterize spatially resolved data prior to its analysis. The source code is publicly available at http://astro.ft.uam.es/SELGIFS/BaTMAn.
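The core idea described above, iteratively merging neighbouring elements whose signals agree within their errors, can be illustrated with a toy one-dimensional analogue. The sketch below greedily merges adjacent bins whose inverse-variance-weighted means differ by less than a chosen number of sigma; this is only a schematic analogue, not the BATMAN algorithm or its Bayesian merging criterion.

```python
import numpy as np

def merge_consistent_bins(signal, error, n_sigma=2.0):
    """Greedily merge adjacent 1-D bins that are statistically consistent."""
    segs = [(s, e**2) for s, e in zip(signal, error)]   # (mean, variance) per segment
    merged = True
    while merged and len(segs) > 1:
        merged = False
        for i in range(len(segs) - 1):
            (m1, v1), (m2, v2) = segs[i], segs[i + 1]
            if abs(m1 - m2) < n_sigma * np.sqrt(v1 + v2):   # consistent within errors
                w1, w2 = 1 / v1, 1 / v2
                segs[i:i + 2] = [((w1 * m1 + w2 * m2) / (w1 + w2), 1 / (w1 + w2))]
                merged = True
                break
    return segs

rng = np.random.default_rng(9)
truth = np.repeat([1.0, 3.0, 1.5], 10)                   # piecewise-constant signal
signal = truth + rng.normal(0, 0.3, truth.size)
segments = merge_consistent_bins(signal, np.full(truth.size, 0.3))
print(len(segments), "segments recovered from", truth.size, "bins")
```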
NASA Astrophysics Data System (ADS)
Lopez, S. R.; Hogue, T. S.
2011-12-01
Global climate models (GCMs) are primarily used to generate historical and future large-scale circulation patterns at a coarse resolution (typically of the order of 50,000 km²) and fail to capture climate variability at the ground level due to localized surface influences (i.e., topography, marine layer, land cover, etc.). Their inability to accurately resolve these processes has led to the development of numerous 'downscaling' techniques. The goal of this study is to enhance statistical downscaling of daily precipitation and temperature for regions with heterogeneous land cover and topography. Our analysis was divided into two periods, historical (1961-2000) and contemporary (1980-2000), and tested using sixteen predictand combinations from four GCMs (GFDL CM2.0, GFDL CM2.1, CNRM-CM3 and MRI-CGCM2 3.2a). The Southern California area was separated into five county regions: Santa Barbara, Ventura, Los Angeles, Orange and San Diego. Principal component analysis (PCA) was performed on ground-based observations in order to (1) reduce the number of redundant gauges and minimize dimensionality and (2) cluster gauges that behave statistically similarly for post-analysis. Post-PCA analysis included extensive testing of predictor-predictand relationships using an enhanced canonical correlation analysis (ECCA). The ECCA includes obtaining the optimal predictand sets for all models within each spatial domain (county) as governed by daily and monthly overall statistics. Results show all models maintain mean annual and monthly behavior within each county and daily statistics are improved. The level of improvement depends strongly on the vegetation extent within each county and the land-to-ocean ratio within the GCM spatial grid. The utilization of the entire historical period also leads to better statistical representation of observed daily precipitation. The validated ECCA technique is being applied to future climate scenarios distributed by the IPCC in order to provide forcing data for regional hydrologic models and assess future water resources in the Southern California region.
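As a hedged sketch of the canonical-correlation step in statistical downscaling, the code below relates a field of coarse GCM predictors to station predictands with scikit-learn's CCA and then predicts station values for a held-out period. The arrays are random placeholders; the enhanced CCA (ECCA) described above involves additional predictor screening and verification statistics not reproduced here.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(10)
n_days = 1000
X = rng.normal(size=(n_days, 20))            # large-scale GCM predictors (e.g., PCs)
B = rng.normal(size=(20, 5)) * 0.4
Y = X @ B + rng.normal(size=(n_days, 5))     # station predictands (e.g., precipitation)

cca = CCA(n_components=3)
cca.fit(X[:800], Y[:800])                    # calibrate on the historical period
Y_hat = cca.predict(X[800:])                 # downscale the held-out period

# Simple verification: per-station correlation between observed and downscaled values
for j in range(Y.shape[1]):
    r = np.corrcoef(Y[800:, j], Y_hat[:, j])[0, 1]
    print(f"station {j}: r = {r:.2f}")
```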
Which statistics should tropical biologists learn?
Loaiza Velásquez, Natalia; González Lutz, María Isabel; Monge-Nájera, Julián
2011-09-01
Tropical biologists study the richest and most endangered biodiversity on the planet, and in these times of climate change and mega-extinctions, the need for efficient, good-quality research is more pressing than in the past. However, the statistical component in research published by tropical authors sometimes suffers from poor quality in data collection, mediocre or bad experimental design, and a rigid and outdated view of data analysis. To suggest improvements in their statistical education, we listed all the statistical tests and other quantitative analyses used in two leading tropical journals, the Revista de Biología Tropical and Biotropica, over one year. The 12 most frequent tests in the articles were: Analysis of Variance (ANOVA), Chi-Square Test, Student's T Test, Linear Regression, Pearson's Correlation Coefficient, Mann-Whitney U Test, Kruskal-Wallis Test, Shannon's Diversity Index, Tukey's Test, Cluster Analysis, Spearman's Rank Correlation Test and Principal Component Analysis. We conclude that statistical education for tropical biologists must abandon the old syllabus based on the mathematical side of statistics and concentrate on the correct selection of these and other procedures and tests, on their biological interpretation and on the use of reliable and friendly freeware. We think that their time will be better spent understanding and protecting tropical ecosystems than trying to learn the mathematical foundations of statistics: in most cases, a well-designed one-semester course should be enough for their basic requirements.
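Most of the tests listed above are available in standard free software. As a brief illustration (with made-up measurements), the sketch below runs several of them in SciPy; test selection should of course follow the design of each study, which is the article's main point.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
a, b, c = (rng.normal(loc, 1.0, 30) for loc in (5.0, 5.5, 6.0))   # three site samples
counts = np.array([[18, 12], [9, 21]])                            # habitat x presence table

chi2, p_chi2, dof, expected = stats.chi2_contingency(counts)
print("ANOVA:          ", stats.f_oneway(a, b, c))
print("Chi-square:     ", chi2, p_chi2)
print("Student's t:    ", stats.ttest_ind(a, b))
print("Mann-Whitney U: ", stats.mannwhitneyu(a, b))
print("Kruskal-Wallis: ", stats.kruskal(a, b, c))
print("Pearson r:      ", stats.pearsonr(a, b))
print("Spearman rho:   ", stats.spearmanr(a, b))
```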
Carmichael, Mary C.; St. Clair, Candace; Edwards, Andrea M.; Barrett, Peter; McFerrin, Harris; Davenport, Ian; Awad, Mohamed; Kundu, Anup; Ireland, Shubha Kale
2016-01-01
Xavier University of Louisiana leads the nation in awarding BS degrees in the biological sciences to African-American students. In this multiyear study with ∼5500 participants, data-driven interventions were adopted to improve student academic performance in a freshman-level general biology course. The three hour-long exams were common and administered concurrently to all students. New exam questions were developed using Bloom’s taxonomy, and exam results were analyzed statistically with validated assessment tools. All but the comprehensive final exam were returned to students for self-evaluation and remediation. Among other approaches, course rigor was monitored by using an identical set of 60 questions on the final exam across 10 semesters. Analysis of the identical sets of 60 final exam questions revealed that overall averages increased from 72.9% (2010) to 83.5% (2015). Regression analysis demonstrated a statistically significant correlation between high-risk students and their averages on the 60 questions. Additional analysis demonstrated statistically significant improvements for at least one letter grade from midterm to final and a 20% increase in the course pass rates over time, also for the high-risk population. These results support the hypothesis that our data-driven interventions and assessment techniques are successful in improving student retention, particularly for our academically at-risk students. PMID:27543637
Cost-Effectiveness Analysis: a proposal of new reporting standards in statistical analysis
Bang, Heejung; Zhao, Hongwei
2014-01-01
Cost-effectiveness analysis (CEA) is a method for evaluating the outcomes and costs of competing strategies designed to improve health, and has been applied to a variety of different scientific fields. Yet, there are inherent complexities in cost estimation and CEA from statistical perspectives (e.g., skewness, bi-dimensionality, and censoring). The incremental cost-effectiveness ratio that represents the additional cost per one unit of outcome gained by a new strategy has served as the most widely accepted methodology in the CEA. In this article, we call for expanded perspectives and reporting standards reflecting a more comprehensive analysis that can elucidate different aspects of available data. Specifically, we propose that mean and median-based incremental cost-effectiveness ratios and average cost-effectiveness ratios be reported together, along with relevant summary and inferential statistics as complementary measures for informed decision making. PMID:24605979
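To make the proposal concrete, the sketch below computes mean-based and median-based incremental cost-effectiveness ratios for two hypothetical strategies, with a simple percentile bootstrap for the mean-based ICER. The cost and effectiveness vectors are simulated (skewed costs on purpose); real analyses must also handle censoring and negative incremental effects, which are not addressed here.

```python
import numpy as np

rng = np.random.default_rng(12)
n = 300
cost_new, cost_old = rng.lognormal(9.2, 0.5, n), rng.lognormal(9.0, 0.5, n)  # skewed costs
eff_new, eff_old = rng.normal(1.9, 0.4, n), rng.normal(1.7, 0.4, n)          # e.g., QALYs

def icer(c1, c0, e1, e0, center=np.mean):
    """Incremental cost-effectiveness ratio with a chosen summary statistic."""
    return (center(c1) - center(c0)) / (center(e1) - center(e0))

print("mean-based ICER:  ", round(icer(cost_new, cost_old, eff_new, eff_old), 1))
print("median-based ICER:", round(icer(cost_new, cost_old, eff_new, eff_old, np.median), 1))

# Percentile bootstrap for the mean-based ICER
boot = []
for _ in range(2000):
    i = rng.integers(0, n, n)
    j = rng.integers(0, n, n)
    boot.append(icer(cost_new[i], cost_old[j], eff_new[i], eff_old[j]))
print("95% bootstrap interval:", np.percentile(boot, [2.5, 97.5]).round(1))
```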
Leveraging Code Comments to Improve Software Reliability
ERIC Educational Resources Information Center
Tan, Lin
2009-01-01
Commenting source code has long been a common practice in software development. This thesis, consisting of three pieces of work, made novel use of the code comments written in natural language to improve software reliability. Our solution combines Natural Language Processing (NLP), Machine Learning, Statistics, and Program Analysis techniques to…
2007-01-01
Background The US Food and Drug Administration approved the Charité artificial disc on October 26, 2004. This approval was based on an extensive analysis and review process; 20 years of disc usage worldwide; and the results of a prospective, randomized, controlled clinical trial that compared lumbar artificial disc replacement to fusion. The results of the investigational device exemption (IDE) study led to a conclusion that clinical outcomes following lumbar arthroplasty were at least as good as outcomes from fusion. Methods The author performed a new analysis of the Visual Analog Scale pain scores and the Oswestry Disability Index scores from the Charité artificial disc IDE study and used a nonparametric statistical test, because observed data distributions were not normal. The analysis included all of the enrolled subjects in both the nonrandomized and randomized phases of the study. Results Subjects from both the treatment and control groups improved from the baseline situation (P < .001) at all follow-up times (6 weeks to 24 months). Additionally, these pain and disability levels with artificial disc replacement were superior (P < .05) to the fusion treatment at all follow-up times including 2 years. Conclusions The a priori statistical plan for an IDE study may not adequately address the final distribution of the data. Therefore, statistical analyses more appropriate to the distribution may be necessary to develop meaningful statistical conclusions from the study. A nonparametric statistical analysis of the Charité artificial disc IDE outcomes scores demonstrates superiority for lumbar arthroplasty versus fusion at all follow-up time points to 24 months. PMID:25802574
Macfarlane, Sarah B.
2005-01-01
Efforts to strengthen health information systems in low- and middle-income countries should include forging links with systems in other social and economic sectors. Governments are seeking comprehensive socioeconomic data on the basis of which to implement strategies for poverty reduction and to monitor achievement of the Millennium Development Goals. The health sector is looking to take action on the social factors that determine health outcomes. But there are duplications and inconsistencies between sectors in the collection, reporting, storage and analysis of socioeconomic data. National offices of statistics give higher priority to collection and analysis of economic than to social statistics. The Report of the Commission for Africa has estimated that an additional US$ 60 million a year is needed to improve systems to collect and analyse statistics in Africa. Some donors recognize that such systems have been weakened by numerous international demands for indicators, and have pledged support for national initiatives to strengthen statistical systems, as well as sectoral information systems such as those in health and education. Many governments are working to coordinate information systems to monitor and evaluate poverty reduction strategies. There is therefore an opportunity for the health sector to collaborate with other sectors to lever international resources to rationalize definition and measurement of indicators common to several sectors; streamline the content, frequency and timing of household surveys; and harmonize national and subnational databases that store socioeconomic data. Without long-term commitment to improve training and build career structures for statisticians and information technicians working in the health and other sectors, improvements in information and statistical systems cannot be sustained. PMID:16184278
Analysis and meta-analysis of single-case designs: an introduction.
Shadish, William R
2014-04-01
The last 10 years have seen great progress in the analysis and meta-analysis of single-case designs (SCDs). This special issue includes five articles that provide an overview of current work on that topic, including standardized mean difference statistics, multilevel models, Bayesian statistics, and generalized additive models. Each article analyzes a common example across articles and presents syntax or macros for how to do them. These articles are followed by commentaries from single-case design researchers and journal editors. This introduction briefly describes each article and then discusses several issues that must be addressed before we can know what analyses will eventually be best to use in SCD research. These issues include modeling trend, modeling error covariances, computing standardized effect size estimates, assessing statistical power, incorporating more accurate models of outcome distributions, exploring whether Bayesian statistics can improve estimation given the small samples common in SCDs, and the need for annotated syntax and graphical user interfaces that make complex statistics accessible to SCD researchers. The article then discusses reasons why SCD researchers are likely to incorporate statistical analyses into their research more often in the future, including changing expectations and contingencies regarding SCD research from outside SCD communities, changes and diversity within SCD communities, corrections of erroneous beliefs about the relationship between SCD research and statistics, and demonstrations of how statistics can help SCD researchers better meet their goals. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.
1992-01-01
An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design of failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.
SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.
Chu, Annie; Cui, Jenny; Dinov, Ivo D
2009-03-01
The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include models commonly used in undergraduate statistics courses, such as linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as the t-test in the parametric category, and the Wilcoxon rank sum test, Kruskal-Wallis test and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis test models, such as contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), with the aim of contributing to the efforts of the statistical computing community. The code includes functionality for each specific analysis model, and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with an API (Application Programming Interface) have been implemented for statistical summaries, least-squares solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is ongoing and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for the most up-to-date information and newly added models.
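One of the general utilities mentioned above is the least-squares solution of a general linear model. A minimal sketch of that computation in Python (not the SOCR Java implementation; the data and variable names here are hypothetical):

```python
import numpy as np

# hypothetical design matrix (intercept plus two predictors) and response
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(30), rng.normal(size=30), rng.normal(size=30)])
y = X @ np.array([2.0, 0.5, -1.0]) + rng.normal(scale=0.3, size=30)

# least-squares solution of the general linear model y = X b + e
beta, _, rank, _ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta
sigma2 = residuals @ residuals / (len(y) - rank)   # residual variance estimate
print("coefficients:", beta)
print("residual variance:", sigma2)
```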
NASA Technical Reports Server (NTRS)
Feiveson, Alan H.; Foy, Millennia; Ploutz-Snyder, Robert; Fiedler, James
2014-01-01
Do you have elevated p-values? Is the data analysis process getting you down? Do you experience anxiety when you need to respond to criticism of statistical methods in your manuscript? You may be suffering from Insufficient Statistical Support Syndrome (ISSS). For symptomatic relief of ISSS, come for a free consultation with JSC biostatisticians at our help desk during the poster sessions at the HRP Investigators Workshop. Get answers to common questions about sample size, missing data, multiple testing, when to trust the results of your analyses and more. Side effects may include sudden loss of statistics anxiety, improved interpretation of your data, and increased confidence in your results.
Hashmi, Noreen Rahat; Khan, Shazad Ali
2018-05-31
To check whether mobile health (m-Health) short message service (SMS) can improve the knowledge and practice of the American Diabetes Association preventive care guidelines (ADA guidelines) recommendations among physicians. Quasi-experimental pre-post study design with a control group. The participants of the study were 62 medical officers/medical postgraduate trainees from two hospitals in Lahore, Pakistan. A pretested questionnaire was used to collect baseline information about physicians' knowledge and adherence according to the ADA guidelines. All the respondents attended a 1-day workshop about the guidelines. The intervention group received regular reminders by SMS about the ADA guidelines for the next 5 months. Postintervention knowledge and practice scores of 13 variables were checked again using the same questionnaire. Statistical analysis included χ2 and McNemar's tests for categorical variables and t-tests for continuous variables. Pearson's correlation analysis was done to check the correlation between knowledge and practice scores in the intervention group. P values of <0.05 were considered statistically significant. The total number of participating physicians was 62. Fifty-three (85.5%) respondents completed the study. Composite scores within the intervention group showed statistically significant improvement in knowledge (p<0.001) and practice (p<0.001) postintervention. The overall composite scores preintervention and postintervention also showed a statistically significant difference in improvement in knowledge (p=0.002) and practice (p=0.001) between the non-intervention and intervention groups. Adherence to the 13 individual ADA preventive care guideline items was suboptimal at baseline. Statistically significant improvement in the intervention group was seen in the following individual variables: review of symptoms of hypoglycaemia and hyperglycaemia, eye examination, neurological examination, lipid examination, referral to an ophthalmologist, and counselling about non-smoking. m-Health technology can be a useful educational tool to help improve knowledge and practice of diabetic guidelines. Future multicentre trials will help to scale this intervention for wider use in resource-limited countries. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
NASA Astrophysics Data System (ADS)
Zavaletta, Vanessa A.; Bartholmai, Brian J.; Robb, Richard A.
2007-03-01
Diffuse lung diseases, such as idiopathic pulmonary fibrosis (IPF), can be characterized and quantified by analysis of volumetric high resolution CT scans of the lungs. These data sets typically have dimensions of 512 x 512 x 400. It is too subjective and labor intensive for a radiologist to analyze each slice and quantify regional abnormalities manually. Thus, computer aided techniques are necessary, particularly texture analysis techniques which classify various lung tissue types. Second and higher order statistics which relate the spatial variation of the intensity values are good discriminatory features for various textures. The intensity values in lung CT scans range between [-1024, 1024]. Calculation of second order statistics on this range is too computationally intensive so the data is typically binned between 16 or 32 gray levels. There are more effective ways of binning the gray level range to improve classification. An optimal and very efficient way to nonlinearly bin the histogram is to use a dynamic programming algorithm. The objective of this paper is to show that nonlinear binning using dynamic programming is computationally efficient and improves the discriminatory power of the second and higher order statistics for more accurate quantification of diffuse lung disease.
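The nonlinear binning step described above can be illustrated with a generic dynamic-programming quantizer that partitions the intensity histogram into a fixed number of contiguous bins minimizing the count-weighted within-bin variance. This is a minimal sketch of the idea, not the authors' implementation; the function names and the toy histogram are hypothetical, and the O(K·n²) pure-Python loop is for illustration only.

```python
import numpy as np

def optimal_bins(levels, counts, n_bins):
    """Dynamic programming: split sorted intensity levels into n_bins contiguous
    bins minimizing the total count-weighted within-bin sum of squared deviations.
    Returns the indices at which each bin starts."""
    n = len(levels)
    w = np.asarray(counts, float)
    x = np.asarray(levels, float)
    cw = np.concatenate(([0.0], np.cumsum(w)))            # prefix sums give
    cwx = np.concatenate(([0.0], np.cumsum(w * x)))        # O(1) segment costs
    cwx2 = np.concatenate(([0.0], np.cumsum(w * x * x)))

    def seg_cost(i, j):                                    # inclusive indices i..j
        W = cw[j + 1] - cw[i]
        if W == 0.0:
            return 0.0
        S = cwx[j + 1] - cwx[i]
        return (cwx2[j + 1] - cwx2[i]) - S * S / W         # weighted SSE

    D = np.full((n_bins + 1, n), np.inf)                   # D[k, j]: best cost, items 0..j, k bins
    back = np.zeros((n_bins + 1, n), dtype=int)
    for j in range(n):
        D[1, j] = seg_cost(0, j)
    for k in range(2, n_bins + 1):
        for j in range(k - 1, n):
            for i in range(k - 1, j + 1):                  # last bin covers i..j
                c = D[k - 1, i - 1] + seg_cost(i, j)
                if c < D[k, j]:
                    D[k, j], back[k, j] = c, i
    starts, j = [], n - 1                                  # recover bin boundaries
    for k in range(n_bins, 0, -1):
        i = back[k, j] if k > 1 else 0
        starts.append(i)
        j = i - 1
    return sorted(starts)

# hypothetical coarse histogram over the CT range [-1024, 1024]
levels = np.linspace(-1024, 1024, 128)
counts = 1e5 * np.exp(-0.5 * ((levels + 800) / 120.0) ** 2) + 50.0
print(optimal_bins(levels, counts, n_bins=16))
```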
NASA Astrophysics Data System (ADS)
Yin, Hui; Yu, Dejie; Yin, Shengwen; Xia, Baizhan
2016-10-01
This paper introduces mixed fuzzy and interval parametric uncertainties into the FE components of the hybrid Finite Element/Statistical Energy Analysis (FE/SEA) model for mid-frequency analysis of built-up systems, creating an uncertain ensemble that combines non-parametric uncertainty with mixed fuzzy and interval parametric uncertainties. A fuzzy interval Finite Element/Statistical Energy Analysis (FIFE/SEA) framework is proposed to obtain the uncertain responses of built-up systems, which are described as intervals with fuzzy bounds, termed fuzzy-bounded intervals (FBIs) in this paper. Based on the level-cut technique, a first-order fuzzy interval perturbation FE/SEA (FFIPFE/SEA) and a second-order fuzzy interval perturbation FE/SEA method (SFIPFE/SEA) are developed to handle the mixed parametric uncertainties efficiently. FFIPFE/SEA approximates the response functions by a first-order Taylor series, while SFIPFE/SEA improves the accuracy by retaining the second-order terms of the Taylor series, with all mixed second-order terms neglected. To further improve the accuracy, a Chebyshev fuzzy interval method (CFIM) is proposed, in which Chebyshev polynomials are used to approximate the response functions. The FBIs are eventually reconstructed by assembling the extreme-value solutions at all cut levels. Numerical results on two built-up systems verify the effectiveness of the proposed methods.
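As a conceptual illustration of the first-order perturbation idea underlying FFIPFE/SEA, the sketch below propagates interval parameters through a scalar response by linearizing about the interval midpoints; at each fuzzy α-cut the uncertain parameters reduce to such intervals, and the resulting response intervals can be stacked to rebuild a fuzzy-bounded interval. The response function and parameter values are hypothetical, and none of the FE/SEA equations are reproduced here.

```python
import numpy as np

def first_order_interval(f, x0, half_widths, eps=1e-6):
    """First-order (Taylor) interval perturbation of a scalar response f:
    parameters lie in [x0 - d, x0 + d]; the response interval is approximated
    from the gradient at the midpoint."""
    x0 = np.asarray(x0, float)
    d = np.asarray(half_widths, float)
    y0 = f(x0)
    grad = np.array([(f(x0 + eps * e) - f(x0 - eps * e)) / (2 * eps)
                     for e in np.eye(len(x0))])           # central differences
    spread = np.sum(np.abs(grad) * d)
    return y0 - spread, y0 + spread

# hypothetical response of two uncertain parameters at one alpha-cut
response = lambda x: x[0] ** 2 / (1.0 + x[1])
print(first_order_interval(response, x0=[2.0, 0.5], half_widths=[0.1, 0.05]))
```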
Reporting Practices and Use of Quantitative Methods in Canadian Journal Articles in Psychology.
Counsell, Alyssa; Harlow, Lisa L
2017-05-01
With recent focus on the state of research in psychology, it is essential to assess the nature of the statistical methods and analyses used and reported by psychological researchers. To that end, we investigated the prevalence of different statistical procedures and the nature of statistical reporting practices in recent articles from the four major Canadian psychology journals. The majority of authors evaluated their research hypotheses through the use of analysis of variance (ANOVA), t -tests, and multiple regression. Multivariate approaches were less common. Null hypothesis significance testing remains a popular strategy, but the majority of authors reported a standardized or unstandardized effect size measure alongside their significance test results. Confidence intervals on effect sizes were infrequently employed. Many authors provided minimal details about their statistical analyses and less than a third of the articles presented on data complications such as missing data and violations of statistical assumptions. Strengths of and areas needing improvement for reporting quantitative results are highlighted. The paper concludes with recommendations for how researchers and reviewers can improve comprehension and transparency in statistical reporting.
Precipitate statistics in an Al-Mg-Si-Cu alloy from scanning precession electron diffraction data
NASA Astrophysics Data System (ADS)
Sunde, J. K.; Paulsen, Ø.; Wenner, S.; Holmestad, R.
2017-09-01
The key microstructural feature providing strength to age-hardenable Al alloys is nanoscale precipitates. Alloy development requires a reliable statistical assessment of these precipitates, in order to link the microstructure with material properties. Here, it is demonstrated that scanning precession electron diffraction combined with computational analysis enable the semi-automated extraction of precipitate statistics in an Al-Mg-Si-Cu alloy. Among the main findings is the precipitate number density, which agrees well with a conventional method based on manual counting and measurements. By virtue of its data analysis objectivity, our methodology is therefore seen as an advantageous alternative to existing routines, offering reproducibility and efficiency in alloy statistics. Additional results include improved qualitative information on phase distributions. The developed procedure is generic and applicable to any material containing nanoscale precipitates.
2014-12-01
example of maximizing or minimizing decision variables within a model. Carol Stoker and Stephen Mehay present a comparative analysis of marketing and advertising strategies...strategy development process; documenting various recruiting, marketing, and advertising initiatives in each nation; and examining efforts to
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sigeti, David E.; Pelak, Robert A.
We present a Bayesian statistical methodology for identifying improvement in predictive simulations, including an analysis of the number of (presumably expensive) simulations that will need to be made in order to establish with a given level of confidence that an improvement has been observed. Our analysis assumes the ability to predict (or postdict) the same experiments with legacy and new simulation codes and uses a simple binomial model for the probability, θ, that, in an experiment chosen at random, the new code will provide a better prediction than the old. This model makes it possible to do statistical analysis with an absolute minimum of assumptions about the statistics of the quantities involved, at the price of discarding some potentially important information in the data. In particular, the analysis depends only on whether or not the new code predicts better than the old in any given experiment, and not on the magnitude of the improvement. We show how the posterior distribution for θ may be used, in a kind of Bayesian hypothesis testing, both to decide if an improvement has been observed and to quantify our confidence in that decision. We quantify the predictive probability that should be assigned, prior to taking any data, to the possibility of achieving a given level of confidence, as a function of sample size. We show how this predictive probability depends on the true value of θ and, in particular, how there will always be a region around θ = 1/2 where it is highly improbable that we will be able to identify an improvement in predictive capability, although the width of this region will shrink to zero as the sample size goes to infinity. We show how the posterior standard deviation may be used as a kind of 'plan B metric' in the case that the analysis shows that θ is close to 1/2, and argue that such a plan B should generally be part of hypothesis testing. All the analysis presented in the paper is done with a general beta-function prior for θ, enabling sequential analysis in which a small number of new simulations may be done and the resulting posterior for θ used as a prior to inform the next stage of power analysis.
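A minimal sketch of the beta-binomial machinery the abstract describes, assuming a Beta prior on θ and counting how often the new code beats the old; the function names and the example counts are hypothetical, and the published analysis covers sequential and power aspects not shown here.

```python
import numpy as np
from scipy.stats import beta, binom

def posterior(successes, n, a0=1.0, b0=1.0):
    """Posterior for theta (probability that the new code beats the old in a
    randomly chosen experiment) under a Beta(a0, b0) prior."""
    return beta(a0 + successes, b0 + n - successes)

def prob_improvement(successes, n, a0=1.0, b0=1.0):
    """Posterior probability that theta > 1/2, used as the confidence that an
    improvement has been observed."""
    return posterior(successes, n, a0, b0).sf(0.5)

def predictive_prob_of_confidence(theta_true, n, conf=0.95, a0=1.0, b0=1.0):
    """Probability, before any data are taken, that n comparisons will end with
    P(theta > 1/2 | data) >= conf, if the true value is theta_true."""
    k = np.arange(n + 1)
    reaches = np.array([prob_improvement(ki, n, a0, b0) >= conf for ki in k])
    return binom(n, theta_true).pmf(k)[reaches].sum()

# e.g. the new code wins 14 of 18 hypothetical comparisons
print(prob_improvement(14, 18))                 # high confidence of improvement
print(predictive_prob_of_confidence(0.55, 18))  # small when theta_true is near 1/2
```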
Townsend, E; Hawton, K; Altman, D G; Arensman, E; Gunnell, D; Hazell, P; House, A; Van Heeringen, K
2001-08-01
Brief problem-solving therapy is regarded as a pragmatic treatment for deliberate self-harm (DSH) patients. A recent meta-analysis of randomized controlled trials (RCTs) evaluating this approach indicated a trend towards reduced repetition of DSH but the pooled odds ratio was not statistically significant. We have now examined other important outcomes using this procedure, namely depression, hopelessness and improvement in problems. Six trials in which problem-solving therapy was compared with control treatment were identified from an extensive literature review of RCTs of treatments for DSH patients. Data concerning depression, hopelessness and improvement in problems were extracted. Where relevant statistical data (e.g. standard deviations) were missing these were imputed using various statistical methods. Results were pooled using meta-analytical procedures. At follow-up, patients who were offered problem-solving therapy had significantly greater improvement in scores for depression (standardized mean difference = -0.36; 95% CI -0.61 to -0.11) and hopelessness (weighted mean difference =-3.2; 95% CI -4.0 to -2.41), and significantly more reported improvement in their problems (odds ratio = 2.31; 95% CI 1.29 to 4.13), than patients who were in the control treatment groups. Problem-solving therapy for DSH patients appears to produce better results than control treatment with regard to improvement in depression, hopelessness and problems. It is desirable that this finding is confirmed in a large trial, which will also allow adequate testing of the impact of this treatment on repetition of DSH.
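The pooling step in such a meta-analysis can be sketched with a simple inverse-variance (fixed-effect) combination of study-level standardized mean differences; the numbers below are hypothetical, and the published review used more elaborate procedures, including imputation of missing standard deviations.

```python
import numpy as np

def pool_fixed_effect(estimates, standard_errors):
    """Inverse-variance (fixed-effect) pooling of effect sizes such as
    standardized mean differences; returns the pooled estimate and 95% CI."""
    est = np.asarray(estimates, float)
    se = np.asarray(standard_errors, float)
    w = 1.0 / se ** 2
    pooled = np.sum(w * est) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

# hypothetical standardized mean differences for depression from six trials
smd = [-0.45, -0.20, -0.55, -0.10, -0.40, -0.35]
se = [0.20, 0.25, 0.30, 0.22, 0.18, 0.28]
print(pool_fixed_effect(smd, se))
```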
Spectral Analysis of B Stars: An Application of Bayesian Statistics
NASA Astrophysics Data System (ADS)
Mugnes, J.-M.; Robert, C.
2012-12-01
To better understand the processes involved in stellar physics, it is necessary to obtain accurate stellar parameters (effective temperature, surface gravity, abundances…). Spectral analysis is a powerful tool for investigating stars, but it is also vital to reduce uncertainties at a decent computational cost. Here we present a spectral analysis method based on a combination of Bayesian statistics and grids of synthetic spectra obtained with TLUSTY. This method simultaneously constrains the stellar parameters by using all the lines accessible in observed spectra and thus greatly reduces uncertainties and improves the overall spectrum fitting. Preliminary results are shown using spectra from the Observatoire du Mont-Mégantic.
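The grid-based Bayesian approach described above can be caricatured as evaluating a Gaussian likelihood for every synthetic spectrum in the grid and normalizing the result; the sketch below assumes a simple chi-square likelihood and a hypothetical two-point model grid, and is not the TLUSTY-based pipeline itself.

```python
import numpy as np

def grid_posterior(obs_flux, obs_err, model_grid, prior=None):
    """Posterior over a grid of synthetic spectra under Gaussian noise.
    model_grid maps (Teff, logg) tuples to model fluxes on the observed
    wavelength sampling; prior (optional) maps the same keys to prior weights."""
    params = list(model_grid.keys())
    log_like = np.array([
        -0.5 * np.sum(((obs_flux - model_grid[p]) / obs_err) ** 2)
        for p in params
    ])
    log_prior = 0.0 if prior is None else np.log([prior[p] for p in params])
    log_post = log_like + log_prior
    post = np.exp(log_post - log_post.max())   # normalize without overflow
    post /= post.sum()
    return dict(zip(params, post))

# hypothetical grid and simulated observation around one absorption line
wave = np.linspace(4000.0, 4500.0, 200)
grid = {(20000, 4.0): 1.0 - 0.2 * np.exp(-((wave - 4340) / 5.0) ** 2),
        (25000, 4.0): 1.0 - 0.1 * np.exp(-((wave - 4340) / 5.0) ** 2)}
obs = grid[(20000, 4.0)] + np.random.default_rng(1).normal(0, 0.02, wave.size)
print(grid_posterior(obs, 0.02, grid))
```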
Mocellin, Simone; Pasquali, Sandro; Rossi, Carlo R; Nitti, Donato
2010-04-07
Based on previous meta-analyses of randomized controlled trials (RCTs), the use of interferon alpha (IFN-alpha) in the adjuvant setting improves disease-free survival (DFS) in patients with high-risk cutaneous melanoma. However, RCTs have yielded conflicting data on the effect of IFN-alpha on overall survival (OS). We conducted a systematic review and meta-analysis to examine the effect of IFN-alpha on DFS and OS in patients with high-risk cutaneous melanoma. The systematic review was performed by searching MEDLINE, EMBASE, Cancerlit, Cochrane, ISI Web of Science, and ASCO databases. The meta-analysis was performed using time-to-event data from which hazard ratios (HRs) and 95% confidence intervals (CIs) of DFS and OS were estimated. Subgroup and meta-regression analyses to investigate the effect of dose and treatment duration were also performed. Statistical tests were two-sided. The meta-analysis included 14 RCTs, published between 1990 and 2008, and involved 8122 patients, of which 4362 patients were allocated to the IFN-alpha arm. IFN-alpha alone was compared with observation in 12 of the 14 trials, and 17 comparisons (IFN-alpha vs comparator) were generated in total. IFN-alpha treatment was associated with a statistically significant improvement in DFS in 10 of the 17 comparisons (HR for disease recurrence = 0.82, 95% CI = 0.77 to 0.87; P < .001) and improved OS in four of the 14 comparisons (HR for death = 0.89, 95% CI = 0.83 to 0.96; P = .002). No between-study heterogeneity in either DFS or OS was observed. No optimal IFN-alpha dose and/or treatment duration or a subset of patients more responsive to adjuvant therapy was identified using subgroup analysis and meta-regression. In patients with high-risk cutaneous melanoma, IFN-alpha adjuvant treatment showed statistically significant improvement in both DFS and OS.
The impact of varicocelectomy on sperm parameters: a meta-analysis.
Schauer, Ingrid; Madersbacher, Stephan; Jost, Romy; Hübner, Wilhelm Alexander; Imhof, Martin
2012-05-01
We determined the impact of 3 surgical techniques (high ligation, inguinal varicocelectomy and the subinguinal approach) for varicocelectomy on sperm parameters (count and motility) and pregnancy rates. By searching the literature using MEDLINE and the Cochrane Library with the last search performed in February 2011, focusing on the last 20 years, a total of 94 articles published between 1975 and 2011 reporting on sperm parameters before and after varicocelectomy were identified. Inclusion criteria for this meta-analysis were at least 2 semen analyses (before and 3 or more months after the procedure), patient age older than 19 years, clinical subfertility and/or abnormal semen parameters, and a clinically palpable varicocele. To rule out skewing factors a bias analysis was performed, and statistical analysis was done with RevMan5(®) and SPSS 15.0(®). A total of 14 articles were included in the statistical analysis. All 3 surgical approaches led to significant or highly significant postoperative improvement of both parameters with only slight numeric differences among the techniques. This difference did not reach statistical significance for sperm count (p = 0.973) or sperm motility (p = 0.372). After high ligation surgery sperm count increased by 10.85 million per ml (p = 0.006) and motility by 6.80% (p <0.00001) on the average. Inguinal varicocelectomy led to an improvement in sperm count of 7.17 million per ml (p <0.0001) while motility changed by 9.44% (p = 0.001). Subinguinal varicocelectomy provided an increase in sperm count of 9.75 million per ml (p = 0.002) and sperm motility by 12.25% (p = 0.001). Inguinal varicocelectomy showed the highest pregnancy rate of 41.48% compared to 26.90% and 26.56% after high ligation and subinguinal varicocelectomy, respectively, and the difference was statistically significant (p = 0.035). This meta-analysis suggests that varicocelectomy leads to significant improvements in sperm count and motility regardless of surgical technique, with the inguinal approach offering the highest pregnancy rate. Copyright © 2012 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
An improved method for determining force balance calibration accuracy
NASA Technical Reports Server (NTRS)
Ferris, Alice T.
1993-01-01
The results of an improved statistical method used at Langley Research Center for determining and stating the accuracy of a force balance calibration are presented. The application of the method for initial loads, initial load determination, auxiliary loads, primary loads, and proof loads is described. The data analysis is briefly addressed.
ERIC Educational Resources Information Center
Higher Education Funding Council for England, 2016
2016-01-01
The Higher Education Funding Council for England (HEFCE) asked the Higher Education Statistics Agency (HESA) and the Higher Education Academy (HEA) to undertake research to explore the current issues around academic teaching qualifications in the HESA Staff record and to offer recommendations to improve data quality and coverage in future…
Birenbaum, H J; Pfoh, E R; Helou, S; Pane, M A; Marinkovich, G A; Dentry, A; Yeh, Hsin-Chieh; Updegraff, L; Arnold, C; Liverman, S; Cawman, H
2016-05-19
We previously demonstrated a significant reduction in our incidence of chronic lung disease in our NICU using potentially better practices of avoiding delivery room endotracheal intubation and using early nasal CPAP. We sought to demonstrate whether these improvements were sustained and/or improved over time. We conducted a retrospective, cross-sectional analysis of infants 501-1500 grams born at our hospital between 2005 and 2013. Infants born during the 2005-2007, 2008-2010 and 2011-2013 epochs were grouped together, respectively. Descriptive analysis was conducted to determine the number and percent of maternal and neonatal characteristics by year grouping. Chi-squared tests were used to determine whether there were any statistically significant changes in characteristics across year groupings. Two outcome variables were assessed: a diagnosis of chronic lung disease based on the Vermont Oxford Network definition and being discharged home on supplemental oxygen. There was a statistically significant improvement in the incidence of chronic lung disease in infants below 27 weeks' gestation in the three-year 2011-2013 cohort compared with those in the 2005-2007 cohort. We also found a statistically significant improvement in the number of infants discharged on home oxygen with birth weights 751-1000 grams and infants with gestational age less than 27 weeks in the 2011-2013 cohort compared to the 2005-2007 cohort. We demonstrated sustained improvement in our incidence of CLD between 2005 and 2013. We speculate that a multifaceted strategy of avoiding intubation and excessive oxygen in the delivery room, the early use of CPAP, as well as the use of volume-targeted ventilation, when needed, may help significantly reduce the incidence of CLD.
DOT National Transportation Integrated Search
2014-03-01
Recent research in highway safety has focused on the more advanced and statistically proven techniques of highway safety analysis. This project focuses on the two most recent safety analysis tools, the Highway Safety Manual (HSM) and SafetyAnalys...
Opportunities for Applied Behavior Analysis in the Total Quality Movement.
ERIC Educational Resources Information Center
Redmon, William K.
1992-01-01
This paper identifies critical components of recent organizational quality improvement programs and specifies how applied behavior analysis can contribute to quality technology. Statistical Process Control and Total Quality Management approaches are compared, and behavior analysts are urged to build their research base and market behavior change…
Intuitive Analysis of Variance-- A Formative Assessment Approach
ERIC Educational Resources Information Center
Trumpower, David
2013-01-01
This article describes an assessment activity that can show students how much they intuitively understand about statistics, but also alert them to common misunderstandings. How the activity can be used formatively to help improve students' conceptual understanding of analysis of variance is discussed. (Contains 1 figure and 1 table.)
Improved dynamical scaling analysis using the kernel method for nonequilibrium relaxation.
Echinaka, Yuki; Ozeki, Yukiyasu
2016-10-01
The dynamical scaling analysis for the Kosterlitz-Thouless transition in the nonequilibrium relaxation method is improved by the use of Bayesian statistics and the kernel method. This allows data to be fitted to a scaling function without using any parametric model function, which makes the results more reliable and reproducible and enables automatic and faster parameter estimation. Applying this method, the bootstrap method is introduced and a numerical discrimination for the transition type is proposed.
Use of iPhone technology in improving acetabular component position in total hip arthroplasty.
Tay, Xiau Wei; Zhang, Benny Xu; Gayagay, George
2017-09-01
Improper acetabular cup positioning is associated with a high risk of complications after total hip arthroplasty. The aim of our study is to objectively compare 3 methods, namely (1) free hand, (2) alignment jig (Sputnik), and (3) iPhone application, to identify an easy, reproducible, and accurate method for improving acetabular cup placement. We designed a simple setup and carried out a simple experiment (see Method section). Using statistical analysis, the difference in inclination angles using the iPhone application compared with the freehand method was found to be statistically significant (F(2,51) = 4.17, P = .02) in the "untrained group". No statistically significant difference was detected for the other groups. This suggests a potential role for iPhone applications in helping junior surgeons overcome the steep learning curve.
Automated Box-Cox Transformations for Improved Visual Encoding.
Maciejewski, Ross; Pattath, Avin; Ko, Sungahn; Hafen, Ryan; Cleveland, William S; Ebert, David S
2013-01-01
The concept of preconditioning data (utilizing a power transformation as an initial step) for analysis and visualization is well established within the statistical community and is employed as part of statistical modeling and analysis. Such transformations condition the data to various inherent assumptions of statistical inference procedures, as well as making the data more symmetric and easier to visualize and interpret. In this paper, we explore the use of the Box-Cox family of power transformations to semiautomatically adjust visual parameters. We focus on time-series scaling, axis transformations, and color binning for choropleth maps. We illustrate the usage of this transformation through various examples, and discuss the value and some issues in semiautomatically using these transformations for more effective data visualization.
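A minimal illustration of the preconditioning idea in Python, applying a maximum-likelihood Box-Cox transformation to a skewed variable before binning it into equal-width color classes for a choropleth map; the data are hypothetical and this is not the authors' visualization system.

```python
import numpy as np
from scipy import stats

# hypothetical right-skewed values (e.g. per-region rates for a choropleth map)
rng = np.random.default_rng(0)
values = rng.lognormal(mean=2.0, sigma=0.8, size=500)

# Box-Cox requires strictly positive data; lambda is chosen by maximum likelihood
transformed, lam = stats.boxcox(values)
print(f"estimated lambda: {lam:.2f}")

# bin the (more symmetric) transformed values into five equal-width color classes
edges = np.linspace(transformed.min(), transformed.max(), 6)
classes = np.digitize(transformed, edges[1:-1])
print(np.bincount(classes))
```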
Biostatistical analysis of quantitative immunofluorescence microscopy images.
Giles, C; Albrecht, M A; Lam, V; Takechi, R; Mamo, J C
2016-12-01
Semiquantitative immunofluorescence microscopy has become a key methodology in biomedical research. Typical statistical workflows are considered in the context of avoiding pseudo-replication and marginalising experimental error. However, immunofluorescence microscopy naturally generates hierarchically structured data that can be leveraged to improve statistical power and enrich biological interpretation. Herein, we describe a robust distribution fitting procedure and compare several statistical tests, outlining their potential advantages/disadvantages in the context of biological interpretation. Further, we describe tractable procedures for power analysis that incorporates the underlying distribution, sample size and number of images captured per sample. The procedures outlined have significant potential for increasing understanding of biological processes and decreasing both ethical and financial burden through experimental optimization. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
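The kind of power analysis the authors advocate, one that respects the hierarchy of samples and images within samples, can be sketched by simulation; the variance components, effect size and group sizes below are hypothetical, and images are simply averaged within sample to avoid pseudo-replication.

```python
import numpy as np
from scipy import stats

def simulated_power(n_per_group=6, images_per_sample=5, effect=0.5,
                    sd_sample=1.0, sd_image=0.5, n_sims=2000, alpha=0.05):
    """Monte Carlo power for a two-group comparison when each biological sample
    contributes several images (hierarchical data); images are averaged within
    sample before a t-test on per-sample means."""
    rng = np.random.default_rng(1)

    def group_means(mu):
        sample_means = mu + rng.normal(0, sd_sample, n_per_group)
        images = sample_means[:, None] + rng.normal(
            0, sd_image, (n_per_group, images_per_sample))
        return images.mean(axis=1)                 # one value per sample

    hits = 0
    for _ in range(n_sims):
        _, p = stats.ttest_ind(group_means(0.0), group_means(effect))
        hits += p < alpha
    return hits / n_sims

print(simulated_power())   # estimated power under the assumed variance components
```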
Assessment of statistical methods used in library-based approaches to microbial source tracking.
Ritter, Kerry J; Carruthers, Ethan; Carson, C Andrew; Ellender, R D; Harwood, Valerie J; Kingsley, Kyle; Nakatsu, Cindy; Sadowsky, Michael; Shear, Brian; West, Brian; Whitlock, John E; Wiggins, Bruce A; Wilbur, Jayson D
2003-12-01
Several commonly used statistical methods for fingerprint identification in microbial source tracking (MST) were examined to assess the effectiveness of pattern-matching algorithms to correctly identify sources. Although numerous statistical methods have been employed for source identification, no widespread consensus exists as to which is most appropriate. A large-scale comparison of several MST methods, using identical fecal sources, presented a unique opportunity to assess the utility of several popular statistical methods. These included discriminant analysis, nearest neighbour analysis, maximum similarity and average similarity, along with several measures of distance or similarity. Threshold criteria for excluding uncertain or poorly matched isolates from final analysis were also examined for their ability to reduce false positives and increase prediction success. Six independent libraries used in the study were constructed from indicator bacteria isolated from fecal materials of humans, seagulls, cows and dogs. Three of these libraries were constructed using the rep-PCR technique and three relied on antibiotic resistance analysis (ARA). Five of the libraries were constructed using Escherichia coli and one using Enterococcus spp. (ARA). Overall, the outcome of this study suggests a high degree of variability across statistical methods. Despite large differences in correct classification rates among the statistical methods, no single statistical approach emerged as superior. Thresholds failed to consistently increase rates of correct classification and improvement was often associated with substantial effective sample size reduction. Recommendations are provided to aid in selecting appropriate analyses for these types of data.
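As a schematic of one of the library-based classification schemes compared above (discriminant analysis with a threshold criterion for excluding poorly matched isolates), the sketch below uses randomly generated fingerprint profiles, so the correct-classification rate only illustrates the mechanics rather than any real source signal; all names and numbers are hypothetical.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_predict

# hypothetical fingerprint profiles (rows: isolates, columns: assay measurements)
rng = np.random.default_rng(5)
X = rng.normal(size=(200, 8))
y = rng.choice(["human", "gull", "cow", "dog"], size=200)

lda = LinearDiscriminantAnalysis()
proba = cross_val_predict(lda, X, y, cv=5, method="predict_proba")
classes = np.sort(np.unique(y))                  # column order of predict_proba
pred = classes[proba.argmax(axis=1)]

# threshold criterion: drop isolates whose best match is too uncertain
keep = proba.max(axis=1) >= 0.75
print("correct classification, all isolates:", np.mean(pred == y))
print("correct classification, retained isolates:",
      np.mean(pred[keep] == y[keep]) if keep.any() else float("nan"))
print("fraction retained:", keep.mean())
```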
Padula, William V; Mishra, Manish K; Weaver, Christopher D; Yilmaz, Taygan; Splaine, Mark E
2012-06-01
To demonstrate complementary results of regression and statistical process control (SPC) chart analyses for hospital-acquired pressure ulcers (HAPUs), and identify possible links between changes and opportunities for improvement between hospital microsystems and macrosystems. Ordinary least squares and panel data regression of retrospective hospital billing data, and SPC charts of prospective patient records for a US tertiary-care facility (2004-2007). A prospective cohort of hospital inpatients at risk for HAPUs was the study population. There were 337 HAPU incidences hospital wide among 43 844 inpatients. A probit regression model predicted the correlation of age, gender and length of stay on HAPU incidence (pseudo R(2)=0.096). Panel data analysis determined that for each additional day in the hospital, there was a 0.28% increase in the likelihood of HAPU incidence. A p-chart of HAPU incidence showed a mean incidence rate of 1.17% remaining in statistical control. A t-chart showed the average time between events for the last 25 HAPUs was 13.25 days. There was one 57-day period between two incidences during the observation period. A p-chart addressing Braden scale assessments showed that 40.5% of all patients were risk stratified for HAPUs upon admission. SPC charts complement standard regression analysis. SPC amplifies patient outcomes at the microsystem level and is useful for guiding quality improvement. Macrosystems should monitor effective quality improvement initiatives in microsystems and aid the spread of successful initiatives to other microsystems, followed by system-wide analysis with regression. Although HAPU incidence in this study is below the national mean, there is still room to improve HAPU incidence in this hospital setting since 0% incidence is theoretically achievable. Further assessment of pressure ulcer incidence could illustrate improvement in the quality of care and prevent HAPUs.
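The SPC side of such an analysis can be sketched as a p-chart with 3-sigma limits that vary with each month's denominator; the monthly counts below are hypothetical, and the published study also used t-charts and regression models not shown here.

```python
import numpy as np

def p_chart_limits(events, n_at_risk):
    """p-chart for incidence proportions: pooled centre line with 3-sigma
    control limits that depend on each period's denominator."""
    events = np.asarray(events, float)
    n = np.asarray(n_at_risk, float)
    p_bar = events.sum() / n.sum()
    sigma = np.sqrt(p_bar * (1.0 - p_bar) / n)
    ucl = p_bar + 3.0 * sigma
    lcl = np.clip(p_bar - 3.0 * sigma, 0.0, None)
    return p_bar, lcl, ucl

# hypothetical monthly pressure ulcer counts and patients at risk
events = [9, 7, 12, 8, 10, 6]
n_at_risk = [900, 850, 1000, 870, 940, 800]
centre, lcl, ucl = p_chart_limits(events, n_at_risk)
print(centre, lcl, ucl)
```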
Bianchi, Bernardo; Ferri, Andrea; Ferrari, Silvano; Copelli, Chiara; Sesenna, Enrico
2011-04-01
The purpose of this article was to analyze the efficacy of facelift incision, sternocleidomastoid muscle flap, and superficial musculoaponeurotic system flap for improving the esthetic results in patients undergoing partial parotidectomy for benign parotid tumor resection. The usefulness of partial parotidectomy is discussed, and a statistical evaluation of the esthetic results was performed. From January 1, 1996, to January 1, 2007, 274 patients treated for benign parotid tumors were studied. Of these, 172 underwent partial parotidectomy. The 172 patients were divided into 4 groups: partial parotidectomy with classic or modified Blair incision without reconstruction (group 1), partial parotidectomy with facelift incision and without reconstruction (group 2), partial parotidectomy with facelift incision associated with sternocleidomastoid muscle flap (group 3), and partial parotidectomy with facelift incision associated with superficial musculoaponeurotic system flap (group 4). Patients were considered, after a follow-up of at least 18 months, for functional and esthetic evaluation. The functional outcome was assessed considering the facial nerve function, Frey syndrome, and recurrence. The esthetic evaluation was performed by inviting the patients and a blind panel of 1 surgeon and 2 secretaries of the department to give a score of 1 to 10 to assess the final cosmetic outcome. The statistical analysis was finally performed using the Mann-Whitney U test for nonparametric data to compare the different group results. P less than .05 was considered significant. No recurrence developed in any of the 4 groups or in any of the 274 patients during the follow-up period. The statistical analysis, comparing group 1 and the other groups, revealed a highly significant statistical difference (P < .0001) for all groups. Also, when group 2 was compared with groups 3 and 4, the difference was highly significantly different statistically (P = .0018 for group 3 and P = .0005 for group 4). Finally, when groups 3 and 4 were compared, the difference was not statistically significant (P = .3467). Partial parotidectomy is the real key point for improving esthetic results in benign parotid surgery. The evaluation of functional complications and the recurrence rate in this series of patients has confirmed that this technique can be safely used for parotid benign tumor resection. The use of a facelift incision alone led to a high statistically significant improvement in the esthetic outcome. When the facelift incision was used with reconstructive techniques, such as the sternocleidomastoid muscle flap or the superficial musculoaponeurotic system flap, the esthetic results improved further. Finally, no statistically significant difference resulted comparing the use of the superficial musculoaponeurotic system and the sternocleidomastoid muscle flap. Copyright © 2011 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Improving Incremental Balance in the GSI 3DVAR Analysis System
NASA Technical Reports Server (NTRS)
Errico, Ronald M.; Yang, Runhua; Kleist, Daryl T.; Parrish, David F.; Derber, John C.; Treadon, Russ
2008-01-01
The Gridpoint Statistical Interpolation (GSI) analysis system is a unified global/regional 3DVAR analysis code that has been under development for several years at the National Centers for Environmental Prediction (NCEP)/Environmental Modeling Center. It has recently been implemented into operations at NCEP in both the global and North American data assimilation systems (GDAS and NDAS). An important aspect of this development has been improving the balance of the analysis produced by GSI. The improved balance between variables has been achieved through the inclusion of a Tangent Linear Normal Mode Constraint (TLNMC). The TLNMC method has proven to be very robust and effective. The TLNMC as part of the global GSI system has resulted in substantial improvement in data assimilation both at NCEP and at the NASA Global Modeling and Assimilation Office (GMAO).
Makeyev, Oleksandr; Joe, Cody; Lee, Colin; Besio, Walter G
2017-07-01
Concentric ring electrodes have shown promise in non-invasive electrophysiological measurement demonstrating their superiority to conventional disc electrodes, in particular, in accuracy of Laplacian estimation. Recently, we have proposed novel variable inter-ring distances concentric ring electrodes. Analytic and finite element method modeling results for linearly increasing distances electrode configurations suggested they may decrease the truncation error resulting in more accurate Laplacian estimates compared to currently used constant inter-ring distances configurations. This study assesses statistical significance of Laplacian estimation accuracy improvement due to novel variable inter-ring distances concentric ring electrodes. Full factorial design of analysis of variance was used with one categorical and two numerical factors: the inter-ring distances, the electrode diameter, and the number of concentric rings in the electrode. The response variables were the Relative Error and the Maximum Error of Laplacian estimation computed using a finite element method model for each of the combinations of levels of three factors. Effects of the main factors and their interactions on Relative Error and Maximum Error were assessed and the obtained results suggest that all three factors have statistically significant effects in the model confirming the potential of using inter-ring distances as a means of improving accuracy of Laplacian estimation.
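The full factorial analysis of variance described above can be sketched with an ordinary least squares model containing all main effects and interactions of the three factors; the table of finite-element outputs below is fabricated for illustration, and the factor levels do not correspond to the study's actual design points.

```python
from itertools import product
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# hypothetical full factorial: 2 inter-ring distance types x 3 diameters x 3 ring counts
rng = np.random.default_rng(2)
rows = []
for dist, diam, rings in product(["constant", "linear"], [10, 20, 30], [2, 3, 4]):
    err = 0.10 - 0.02 * (dist == "linear") - 0.001 * diam - 0.01 * rings
    rows.append({"distances": dist, "diameter": diam, "rings": rings,
                 "rel_error": err + rng.normal(0, 0.003)})
df = pd.DataFrame(rows)

# main effects of the three factors plus all interactions
model = smf.ols("rel_error ~ C(distances) * diameter * rings", data=df).fit()
print(anova_lm(model, typ=2))
```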
75 FR 61136 - Notice of Proposed Information Collection Requests
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-04
... EdFacts data as well as data from surveys of school principals and special education designees about their school improvement practices. The study will use descriptive statistics and regression analysis to...
Assessment of a Learning Strategy among Spine Surgeons.
Gotfryd, Alberto Ofenhejm; Corredor, Jose Alfredo; Teixeira, William Jacobsen; Martins, Delio Eulálio; Milano, Jeronimo; Iutaka, Alexandre Sadao
2017-02-01
Pilot test, observational study. To evaluate objectively the knowledge transfer provided by theoretical and practical activities during AOSpine courses for spine surgeons. During two AOSpine principles courses, 62 participants underwent precourse assessment, which consisted of questions about their professional experience, preferences regarding adolescent idiopathic scoliosis (AIS) classification, and classifying the curves by means of the Lenke classification of two AIS clinical cases. Two learning strategies were used during the course. A postcourse questionnaire was applied to reclassify the same deformity cases. Differences in the correct answers of clinical cases between pre- and postcourse were analyzed, revealing the number of participants whose accuracy in classification improved after the course. Analysis showed a decrease in the number of participants with wrong answers in both cases after the course. In the first case, statistically significant differences were observed in both curve pattern (83.3%, p = 0.005) and lumbar spine modifier (46.6%, p = 0.049). No statistically significant improvement was seen in the sagittal thoracic modifier (33.3%, p = 0.309). In the second case, statistical improvement was obtained in curve pattern (27.4%, p = 0.018). No statistically significant improvement was seen regarding lumbar spine modifier (9.8%, p = 0.121) and sagittal thoracic modifier (12.9%, p = 0.081). This pilot test showed objectively that learning strategies used during AOSpine courses improved the participants' knowledge. Teaching strategies must be continually improved to ensure an optimal level of knowledge transfer.
Ha Dinh, Thi Thuy; Bonner, Ann; Clark, Robyn; Ramsbotham, Joanne; Hines, Sonia
2016-01-01
Chronic diseases are increasing worldwide and have become a significant burden to those affected by those diseases. Disease-specific education programs have demonstrated improved outcomes, although people do forget information quickly or memorize it incorrectly. The teach-back method was introduced in an attempt to reinforce education to patients. To date, the evidence regarding the effectiveness of health education employing the teach-back method in improved care has not yet been reviewed systematically. This systematic review examined the evidence on using the teach-back method in health education programs for improving adherence and self-management of people with chronic disease. Adults aged 18 years and over with one or more than one chronic disease.All types of interventions which included the teach-back method in an education program for people with chronic diseases. The comparator was chronic disease education programs that did not involve the teach-back method.Randomized and non-randomized controlled trials, cohort studies, before-after studies and case-control studies.The outcomes of interest were adherence, self-management, disease-specific knowledge, readmission, knowledge retention, self-efficacy and quality of life. Searches were conducted in CINAHL, MEDLINE, EMBASE, Cochrane CENTRAL, Web of Science, ProQuest Nursing and Allied Health Source, and Google Scholar databases. Search terms were combined by AND or OR in search strings. Reference lists of included articles were also searched for further potential references. Two reviewers conducted quality appraisal of papers using the Joanna Briggs Institute Meta-Analysis of Statistics Assessment and Review Instrument. Data were extracted using the Joanna Briggs Institute Meta-Analysis of Statistics Assessment and Review Instrument data extraction instruments. There was significant heterogeneity in selected studies, hence a meta-analysis was not possible and the results were presented in narrative form. Of the 21 articles retrieved in full, 12 on the use of the teach-back method met the inclusion criteria and were selected for analysis. Four studies confirmed improved disease-specific knowledge in intervention participants. One study showed a statistically significant improvement in adherence to medication and diet among type 2 diabetics patients in the intervention group compared to the control group (p < 0.001). Two studies found statistically significant improvements in self-efficacy (p = 0.0026 and p < 0.001) in the intervention groups. One study examined quality of life in heart failure patients but the results did not improve from the intervention (p = 0.59). Five studies found a reduction in readmission rates and hospitalization but these were not always statistically significant. Two studies showed improvement in daily weighing among heart failure participants, and in adherence to diet, exercise and foot care among those with type 2 diabetes. Overall, the teach-back method showed positive effects in a wide range of health care outcomes although these were not always statistically significant. Studies in this systematic review revealed improved outcomes in disease-specific knowledge, adherence, self-efficacy and the inhaler technique. There was a positive but inconsistent trend also seen in improved self-care and reduction of hospital readmission rates. 
There was limited evidence on improvement in quality of life or disease-related knowledge retention. Evidence from the systematic review supports the use of the teach-back method in educating people with chronic disease to maximize their disease understanding and promote knowledge, adherence, self-efficacy and self-care skills. Future studies are required to strengthen the evidence on the effects of the teach-back method. Larger randomized controlled trials will be needed to determine the effectiveness of the teach-back method for quality of life and for reducing readmissions and hospitalizations.
[Again review of research design and statistical methods of Chinese Journal of Cardiology].
Kong, Qun-yu; Yu, Jin-ming; Jia, Gong-xian; Lin, Fan-li
2012-11-01
To re-evaluate and compare the research design and the use of statistical methods in the Chinese Journal of Cardiology. We summarized the research design and statistical methods in all original papers published in the Chinese Journal of Cardiology during 2011 and compared the results with the evaluation of 2008. (1) There was no difference in the distribution of research designs between the two volumes. Compared with the earlier volume, the use of survival regression and non-parametric tests increased, while the proportion of articles with no statistical analysis decreased. (2) The proportions of flawed articles in the later volume were significantly lower than in the former: 6 (4%) with flaws in design, 5 (3%) with flaws in presentation, and 9 (5%) with incomplete analysis. (3) The rate of correct use of analysis of variance increased, as did that of multi-group comparisons and tests of normality. The error rate of usage decreased from 17% to 25% without statistical significance, owing to neglect of the test of homogeneity of variance. Many improvements were evident in the Chinese Journal of Cardiology, such as better regulation of design and statistics. More attention should be paid to the homogeneity of variance in future applications.
[Notes on vital statistics for the study of perinatal health].
Juárez, Sol Pía
2014-01-01
Vital statistics, published by the National Statistics Institute in Spain, are a highly important source for the study of perinatal health nationwide. However, the process of data collection is not well-known and has implications both for the quality and interpretation of the epidemiological results derived from this source. The aim of this study was to present how the information is collected and some of the associated problems. This study is the result of an analysis of the methodological notes from the National Statistics Institute and first-hand information obtained from hospitals, the Central Civil Registry of Madrid, and the Madrid Institute for Statistics. Greater integration between these institutions is required to improve the quality of birth and stillbirth statistics. Copyright © 2014 SESPAS. Published by Elsevier Espana. All rights reserved.
NASA Astrophysics Data System (ADS)
Mori, Kaya; Chonko, James C.; Hailey, Charles J.
2005-10-01
We have reanalyzed the 260 ks XMM-Newton observation of 1E 1207.4-5209. There are several significant improvements over previous work. First, a much broader range of physically plausible spectral models was used. Second, we have used a more rigorous statistical analysis. The standard F-distribution was not employed, but rather the exact finite statistics F-distribution was determined by Monte Carlo simulations. This approach was motivated by the recent work of Protassov and coworkers and Freeman and coworkers. They demonstrated that the standard F-distribution is not even asymptotically correct when applied to assess the significance of additional absorption features in a spectrum. With our improved analysis we do not find a third and fourth spectral feature in 1E 1207.4-5209 but only the two broad absorption features previously reported. Two additional statistical tests, one line model dependent and the other line model independent, confirmed our modified F-test analysis. For all physically plausible continuum models in which the weak residuals are strong enough to fit, the residuals occur at the instrument Au M edge. As a sanity check we confirmed that the residuals are consistent in strength and position with the instrument Au M residuals observed in 3C 273.
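The Monte Carlo calibration of the F-test described above, simulating data under the continuum-only model and refitting both nested models to build an empirical null distribution for F, can be sketched as follows. The continuum and line shapes, noise model and starting values are hypothetical stand-ins, not the instrument response or spectral models actually used.

```python
import numpy as np
from scipy.optimize import curve_fit

def continuum(x, a, b):
    return a + b * x

def continuum_line(x, a, b, depth, center, width):
    return (a + b * x) - depth * np.exp(-0.5 * ((x - center) / width) ** 2)

def f_stat(chi2_simple, dof_simple, chi2_complex, dof_complex):
    """Classical F statistic for comparing nested spectral models."""
    return ((chi2_simple - chi2_complex) / (dof_simple - dof_complex)) / (
        chi2_complex / dof_complex)

def mc_null_f(x, continuum_params, yerr, n_sim=1000, seed=3):
    """Empirical null distribution of F: simulate spectra from the best-fitting
    continuum-only model, refit both models, and record F for each realization."""
    rng = np.random.default_rng(seed)
    truth = continuum(x, *continuum_params)
    f_values = []
    for _ in range(n_sim):
        y = truth + rng.normal(0.0, yerr, size=x.size)
        p0, _ = curve_fit(continuum, x, y, sigma=np.full(x.size, yerr),
                          absolute_sigma=True)
        chi2_0 = np.sum(((y - continuum(x, *p0)) / yerr) ** 2)
        try:
            p1, _ = curve_fit(continuum_line, x, y, sigma=np.full(x.size, yerr),
                              absolute_sigma=True,
                              p0=[*p0, 0.1, x.mean(), 0.05 * (x.max() - x.min())])
        except RuntimeError:
            continue                                   # skip non-converged fits
        chi2_1 = np.sum(((y - continuum_line(x, *p1)) / yerr) ** 2)
        f_values.append(f_stat(chi2_0, x.size - 2, chi2_1, x.size - 5))
    return np.array(f_values)

# the significance of an observed F is then its rank within this simulated
# distribution rather than a tail probability from the analytic F distribution
x = np.linspace(0.5, 3.0, 200)
null_f = mc_null_f(x, continuum_params=(10.0, -1.5), yerr=0.4, n_sim=200)
print(np.quantile(null_f, 0.95))
```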
Using statistical text classification to identify health information technology incidents
Chai, Kevin E K; Anthony, Stephen; Coiera, Enrico; Magrabi, Farah
2013-01-01
Objective To examine the feasibility of using statistical text classification to automatically identify health information technology (HIT) incidents in the USA Food and Drug Administration (FDA) Manufacturer and User Facility Device Experience (MAUDE) database. Design We used a subset of 570 272 incidents including 1534 HIT incidents reported to MAUDE between 1 January 2008 and 1 July 2010. Text classifiers using regularized logistic regression were evaluated with both ‘balanced’ (50% HIT) and ‘stratified’ (0.297% HIT) datasets for training, validation, and testing. Dataset preparation, feature extraction, feature selection, cross-validation, classification, performance evaluation, and error analysis were performed iteratively to further improve the classifiers. Feature-selection techniques such as removing short words and stop words, stemming, lemmatization, and principal component analysis were examined. Measurements κ statistic, F1 score, precision and recall. Results Classification performance was similar on both the stratified (0.954 F1 score) and balanced (0.995 F1 score) datasets. Stemming was the most effective technique, reducing the feature set size to 79% while maintaining comparable performance. Training with balanced datasets improved recall (0.989) but reduced precision (0.165). Conclusions Statistical text classification appears to be a feasible method for identifying HIT reports within large databases of incidents. Automated identification should enable more HIT problems to be detected, analyzed, and addressed in a timely manner. Semi-supervised learning may be necessary when applying machine learning to big data analysis of patient safety incidents and requires further investigation. PMID:23666777
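The classifier described above can be approximated with a standard bag-of-words pipeline: TF-IDF features feeding an L2-regularized logistic regression, scored with cross-validated F1. The toy narratives below are invented and far simpler than MAUDE reports, so the score is not meaningful; the sketch only shows the shape of such a pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# hypothetical incident narratives and labels (1 = health IT related)
texts = ["infusion pump interface froze during order entry",
         "catheter fractured during insertion",
         "medication order lost after system upgrade",
         "guidewire retained after procedure"] * 50
labels = [1, 0, 1, 0] * 50

# TF-IDF features with stop-word removal, then regularized logistic regression
clf = make_pipeline(
    TfidfVectorizer(stop_words="english", min_df=2),
    LogisticRegression(C=1.0, max_iter=1000),
)
print(cross_val_score(clf, texts, labels, cv=5, scoring="f1").mean())
```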
Statistical Analysis of Adaptive Beam-Forming Methods
1988-05-01
minimum amount of computing resources? What are the tradeoffs being made when a system design selects block averaging over exponential averaging? Will... understood by many signal processing practitioners, however, is how system parameters and the number of sensors affect the distribution of the... system performance improve and if so by how much? It is well known that the noise sampled at adjacent sensors is not statistically independent
Breast cancer statistics and prediction methodology: a systematic review and analysis.
Dubey, Ashutosh Kumar; Gupta, Umesh; Jain, Sonal
2015-01-01
Breast cancer is a menacing cancer, primarily affecting women. Continuous research is ongoing to detect breast cancer at an early stage, when the possibility of cure is greatest. This study had two main objectives: first, to establish statistics for breast cancer, and second, to identify methodologies that can be helpful in early-stage detection of breast cancer based on previous studies. The breast cancer incidence and mortality statistics of the UK, US, India and Egypt were considered for this study. The findings show that overall mortality rates in the UK and US have improved because of awareness, improved medical technology and screening, but in India and Egypt the situation is less positive because of lack of awareness. The methodological findings of this study suggest a combined framework based on data mining and evolutionary algorithms, which provides a strong bridge to improving the classification and detection accuracy of breast cancer data.
Comparisons of modeled height predictions to ocular height estimates
W.A. Bechtold; S.J. Zarnoch; W.G. Burkman
1998-01-01
Equations used by USDA Forest Service Forest Inventory and Analysis projects to predict individual tree heights on the basis of species and d.b.h. were improved by the addition of mean overstory height. However, ocular estimates of total height by field crews were more accurate than the statistically improved models, especially for hardwood species. Height predictions...
Mixed-Methods Research in the Discipline of Nursing.
Beck, Cheryl Tatano; Harrison, Lisa
2016-01-01
In this review article, we examined the prevalence and characteristics of 294 mixed-methods studies in the discipline of nursing. Creswell and Plano Clark's typology was most frequently used along with concurrent timing. Bivariate statistics was most often the highest level of statistics reported in the results. As for qualitative data analysis, content analysis was most frequently used. The majority of nurse researchers did not specifically address the purpose, paradigm, typology, priority, timing, interaction, or integration of their mixed-methods studies. Strategies are suggested for improving the design, conduct, and reporting of mixed-methods studies in the discipline of nursing.
The data life cycle applied to our own data.
Goben, Abigail; Raszewski, Rebecca
2015-01-01
Increased demand for data-driven decision making is driving the need for librarians to be facile with the data life cycle. This case study follows the migration of reference desk statistics from handwritten to digital format. This shift presented two opportunities: first, the availability of a nonsensitive data set to improve the librarians' understanding of data-management and statistical analysis skills, and second, the use of analytics to directly inform staffing decisions and departmental strategic goals. By working through each step of the data life cycle, library faculty explored data gathering, storage, sharing, and analysis questions.
Janssen, Dirk P
2012-03-01
Psychologists, psycholinguists, and other researchers using language stimuli have been struggling for more than 30 years with the problem of how to analyze experimental data that contain two crossed random effects (items and participants). The classical analysis of variance does not apply; alternatives have been proposed but have failed to catch on, and a statistically unsatisfactory procedure of using two approximations (known as F1 and F2) has become the standard. A simple and elegant solution using mixed-model analysis has been available for 15 years, and recent improvements in statistical software have made it widely accessible. The aim of this article is to increase the use of mixed models by giving a concise practical introduction and clear directions for undertaking the analysis in the most popular statistical packages. The article also introduces the DJMIXED add-on package for SPSS, which makes entering the models and reporting their results as straightforward as possible.
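For readers who do not use SPSS, the following sketch shows one way to fit a mixed model with crossed random effects for participants and items in Python with statsmodels. The data are simulated and the column names are assumptions; this is not the DJMIXED package itself.

```python
# Sketch of a mixed model with crossed random effects (participants and items),
# using statsmodels' documented single-group / variance-components approach.
# The reaction-time data and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_item = 20, 12
df = pd.DataFrame([
    {"participant": s, "item": i, "condition": i % 2}
    for s in range(n_subj) for i in range(n_item)
])
# simulate RTs with participant and item random intercepts (hypothetical effects)
subj_re = rng.normal(0, 30, n_subj)
item_re = rng.normal(0, 20, n_item)
df["rt"] = (500 + 25 * df["condition"]
            + subj_re[df["participant"]] + item_re[df["item"]]
            + rng.normal(0, 40, len(df)))

# Crossed random effects: treat the whole data set as one group and model both
# factors as variance components.
df["group"] = 1
model = smf.mixedlm(
    "rt ~ condition", df, groups="group", re_formula="0",
    vc_formula={"participant": "0 + C(participant)", "item": "0 + C(item)"},
)
print(model.fit().summary())
```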
Ubhi, T; Bhakta, B; Ives, H; Allgar, V; Roussounis, S
2000-01-01
BACKGROUND—Cerebral palsy is the commonest cause of severe physical disability in childhood. For many years treatment has centred on the use of physiotherapy and orthotics to overcome the problems of leg spasticity, which interferes with walking and can lead to limb deformity. Intramuscular botulinum toxin (BT-A) offers a targeted form of therapy to reduce spasticity in specific muscle groups. AIMS—To determine whether intramuscular BT-A can improve walking in children with cerebral palsy. DESIGN—Randomised, double blind, placebo controlled trial. METHODS—Forty patients with spastic diplegia or hemiplegia were enrolled. Twenty two received botulinum toxin and 18 received placebo. The primary outcome measure was video gait analysis and secondary outcome measures were gross motor function measure (GMFM), physiological cost index (PCI), and passive ankle dorsiflexion. RESULTS—Video gait analysis showed clinically and statistically significant improvement in initial foot contact following BT-A at six weeks and 12 weeks compared to placebo. Forty eight per cent of BT-A treated children showed clinical improvement in VGA compared to 17% of placebo treated children. The GMFM (walking dimension) showed a statistically significant improvement in favour of the botulinum toxin treated group. Changes in PCI and passive ankle dorsiflexion were not statistically significant. CONCLUSION—The study gives further support to the use of intramuscular botulinum toxin type A as an adjunct to conventional physiotherapy and orthoses to reduce spasticity and improve functional mobility in children with spastic diplegic or hemiplegic cerebral palsy. PMID:11087280
GIS Tools For Improving Pedestrian & Bicycle Safety
DOT National Transportation Integrated Search
2000-07-01
Geographic Information System (GIS) software turns statistical data, such as accidents, and geographic data, such as roads and crash locations, into meaningful information for spatial analysis and mapping. In this project, GIS-based analytical techni...
Mukhopadhyay, Somnath; Sen, Swarnali; Datta, Himadri
2015-01-01
To compare the role of topically applied serum therapy with preservative-free artificial tear (AT) drops in patients with moderate to severe dry eye in Hansen's disease, along with changes in the tear protein profile. 144 consecutive patients were randomly divided into three groups. After a baseline examination of clinical parameters, each patient received the designated modality of topical therapy six times a day for 6 weeks. Post-treatment documentation of clinical parameters was done at 6 weeks, and again at 12 weeks after discontinuation of topical therapy. Analysis of three tear proteins using gel electrophoresis (sodium dodecyl sulfate polyacrylamide gel electrophoresis) was done at baseline and at the first and second post-treatment visits. In the cord blood serum (CBS) group, all clinical parameters except the McMonnies score and staining score showed continued improvement in the first and second post-treatment analyses. In the autologous serum (ALS) group, all clinical parameters except Schirmer's I showed significant improvement in the first post-treatment analysis. This was sustained at a significant level in the second analysis except for tear film break-up time (TBUT) and conjunctival impression cytology grading. In the AT group, all parameters improved at a non-significant level except for TBUT in the first analysis; in the next analysis, apart from the McMonnies score and TBUT, the clinical parameters did not improve. In the ALS and CBS groups, tear lysozyme and lactoferrin levels improved in both post-treatment measurements (statistically insignificant). Total tear protein continued to increase at statistically significant levels in the first and second post-treatment analyses in the CBS group and at a statistically insignificant level in the ALS group. In the AT group, the three tear proteins continued to decrease in both analyses. In moderate to severe dry eye in Hansen's disease, serum therapy, in comparison with AT drops, improves clinical parameters and the tear protein profile. CTRI/2013/07/003802.
Common Scientific and Statistical Errors in Obesity Research
George, Brandon J.; Beasley, T. Mark; Brown, Andrew W.; Dawson, John; Dimova, Rositsa; Divers, Jasmin; Goldsby, TaShauna U.; Heo, Moonseong; Kaiser, Kathryn A.; Keith, Scott; Kim, Mimi Y.; Li, Peng; Mehta, Tapan; Oakes, J. Michael; Skinner, Asheley; Stuart, Elizabeth; Allison, David B.
2015-01-01
We identify 10 common errors and problems in the statistical analysis, design, interpretation, and reporting of obesity research and discuss how they can be avoided. The 10 topics are: 1) misinterpretation of statistical significance, 2) inappropriate testing against baseline values, 3) excessive and undisclosed multiple testing and “p-value hacking,” 4) mishandling of clustering in cluster randomized trials, 5) misconceptions about nonparametric tests, 6) mishandling of missing data, 7) miscalculation of effect sizes, 8) ignoring regression to the mean, 9) ignoring confirmation bias, and 10) insufficient statistical reporting. We hope that discussion of these errors can improve the quality of obesity research by helping researchers to implement proper statistical practice and to know when to seek the help of a statistician. PMID:27028280
NASA Technical Reports Server (NTRS)
Neustadter, H. E.; Sidik, S. M.; Burr, J. C., Jr.
1972-01-01
Air quality data for Cleveland, Ohio, for the period of 1967 to 1971 were collated and subjected to statistical analysis. The total suspended particulate component is lognormally distributed; while sulfur dioxide and nitrogen dioxide are reasonably approximated by lognormal distributions. Only sulfur dioxide, in some residential neighborhoods, meets Ohio air quality standards. Air quality has definitely improved in the industrial valley, while in the rest of the city, only sulfur dioxide has shown consistent improvement. A pollution index is introduced which displays directly the degree to which the environmental air conforms to mandated standards.
Simulation on a car interior aerodynamic noise control based on statistical energy analysis
NASA Astrophysics Data System (ADS)
Chen, Xin; Wang, Dengfeng; Ma, Zhengdong
2012-09-01
Accurate simulation of interior aerodynamic noise is an important issue in car interior noise reduction. The unsteady aerodynamic pressure on body surfaces proves to be the key factor in controlling car interior aerodynamic noise at high frequencies and high speeds. In this paper, a detailed statistical energy analysis (SEA) model is built, and the vibro-acoustic power inputs are loaded onto the model to obtain valid results for car interior noise analysis. The model provides a solid foundation for further optimization of car interior noise control. After a comprehensive SEA analysis identifies the subsystems whose power contributions to car interior noise are most sensitive, the sound pressure level of interior aerodynamic noise can be reduced by improving their sound and damping characteristics. Further vehicle testing shows that the interior acoustic performance can be improved by using the detailed SEA model, which comprises more than 80 subsystems, together with calculation of the unsteady aerodynamic pressure on body surfaces and improvement of the sound and damping properties of materials. A reduction of more than 2 dB is obtained at centre frequencies in the spectrum above 800 Hz. The proposed optimization method can serve as a reference for car interior aerodynamic noise control using a detailed SEA model integrated with unsteady computational fluid dynamics (CFD) and acoustic contribution sensitivity analysis.
Nearfield Summary and Statistical Analysis of the Second AIAA Sonic Boom Prediction Workshop
NASA Technical Reports Server (NTRS)
Park, Michael A.; Nemec, Marian
2017-01-01
A summary is provided for the Second AIAA Sonic Boom Workshop held 8-9 January 2017 in conjunction with AIAA SciTech 2017. The workshop used three required models of increasing complexity: an axisymmetric body, a wing body, and a complete configuration with flow-through nacelle. An optional complete configuration with propulsion boundary conditions is also provided. These models are designed with similar nearfield signatures to isolate geometry and shock/expansion interaction effects. Eleven international participant groups submitted nearfield signatures with forces, pitching moment, and iterative convergence norms. Statistics and grid convergence of these nearfield signatures are presented. These submissions are propagated to the ground, and noise levels are computed. This allows the grid convergence and the statistical distribution of noise levels to be computed. While progress since the first workshop is documented, improvements to the analysis methods are suggested for a possible subsequent workshop. The complete configuration with flow-through nacelle showed the most dramatic improvement between the two workshops. The current workshop cases are more relevant to vehicles with lower loudness and have the potential for lower annoyance than the first workshop cases. The models for this workshop, with quieter ground noise levels than those of the first workshop, exposed weaknesses in the analysis, particularly in convective discretization.
Universal Recurrence Time Statistics of Characteristic Earthquakes
NASA Astrophysics Data System (ADS)
Goltz, C.; Turcotte, D. L.; Abaimov, S.; Nadeau, R. M.
2006-12-01
Characteristic earthquakes are defined to occur quasi-periodically on major faults. Do recurrence time statistics of such earthquakes follow a particular statistical distribution? If so, which one? The answer is fundamental and has important implications for hazard assessment. The problem cannot be solved by comparing the goodness of statistical fits as the available sequences are too short. The Parkfield sequence of M ≍ 6 earthquakes, one of the most extensive reliable data sets available, has grown to merely seven events with the last earthquake in 2004, for example. Recently, however, advances in seismological monitoring and improved processing methods have unveiled so-called micro-repeaters, micro-earthquakes which recur exactly in the same location on a fault. It seems plausible to regard these earthquakes as a miniature version of the classic characteristic earthquakes. Micro-repeaters are much more frequent than major earthquakes, leading to longer sequences for analysis. Due to their recent discovery, however, available sequences contain less than 20 events at present. In this paper we present results for the analysis of recurrence times for several micro-repeater sequences from Parkfield and adjacent regions. To improve the statistical significance of our findings, we combine several sequences into one by rescaling the individual sets by their respective mean recurrence intervals and Weibull exponents. This novel approach of rescaled combination yields the most extensive data set possible. We find that the resulting statistics can be fitted well by an exponential distribution, confirming the universal applicability of the Weibull distribution to characteristic earthquakes. A similar result is obtained from rescaled combination, however, with regard to the lognormal distribution.
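A minimal sketch of the rescaled-combination idea, assuming hypothetical recurrence-interval sequences: each sequence is normalized by its mean interval, the sequences are pooled, and a Weibull distribution is fitted. (The study also rescales by the Weibull exponents; that refinement is omitted here.)

```python
# Sketch of rescaled combination: normalize each micro-repeater sequence by its mean
# recurrence interval, pool the sequences, and fit a Weibull distribution.
# The recurrence intervals below are hypothetical placeholders, not the Parkfield data.
import numpy as np
from scipy import stats

sequences = [
    np.array([1.8, 2.1, 2.4, 1.9, 2.2]),   # recurrence intervals, sequence 1 (years)
    np.array([0.8, 1.1, 0.9, 1.2, 1.0]),   # sequence 2
    np.array([3.2, 2.8, 3.5, 3.0]),        # sequence 3
]

pooled = np.concatenate([seq / seq.mean() for seq in sequences])  # rescale by mean interval

# Two-parameter Weibull fit (location fixed at zero); a shape of ~1 corresponds to the
# exponential distribution, a special case of the Weibull.
shape, loc, scale = stats.weibull_min.fit(pooled, floc=0)
print(f"Weibull shape = {shape:.2f}, scale = {scale:.2f}")
```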
Kepler Planet Detection Metrics: Statistical Bootstrap Test
NASA Technical Reports Server (NTRS)
Jenkins, Jon M.; Burke, Christopher J.
2016-01-01
This document describes the data produced by the Statistical Bootstrap Test over the final three Threshold Crossing Event (TCE) deliveries to NExScI: SOC 9.1 (Q1-Q16) (Tenenbaum et al. 2014), SOC 9.2 (Q1-Q17), also known as DR24 (Seader et al. 2015), and SOC 9.3 (Q1-Q17), also known as DR25 (Twicken et al. 2016). The last few years have seen significant improvements in the SOC science data processing pipeline, leading to higher quality light curves and more sensitive transit searches. The statistical bootstrap analysis results presented here and the numerical results archived at NASA's Exoplanet Science Institute (NExScI) bear witness to these software improvements. This document attempts to introduce and describe the main features and differences between these three data sets as a consequence of the software changes.
NASA Astrophysics Data System (ADS)
Wang, Hao; Wang, Qunwei; He, Ming
2018-05-01
In order to investigate and improve the capability of domestic laboratories to determine the water content of liquid chemical reagents, the proficiency testing provider PT0031 (CNAS) organized a proficiency testing program for water content in toluene, in which 48 laboratories from 18 provinces and municipalities took part. This paper describes the implementation of the program, including sample preparation, homogeneity and stability testing, and the statistical treatment of results using an iterative robust statistical technique. The results obtained with the different test standards widely used in the participating laboratories are summarized and analyzed, and technical suggestions are put forward for improving the quality of water content determinations. Satisfactory results were obtained by 43 laboratories, amounting to 89.6% of the participating laboratories.
Evaluating the consistency of gene sets used in the analysis of bacterial gene expression data.
Tintle, Nathan L; Sitarik, Alexandra; Boerema, Benjamin; Young, Kylie; Best, Aaron A; Dejongh, Matthew
2012-08-08
Statistical analyses of whole genome expression data require functional information about genes in order to yield meaningful biological conclusions. The Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) are common sources of functionally grouped gene sets. For bacteria, the SEED and MicrobesOnline provide alternative, complementary sources of gene sets. To date, no comprehensive evaluation of the data obtained from these resources has been performed. We define a series of gene set consistency metrics directly related to the most common classes of statistical analyses for gene expression data, and then perform a comprehensive analysis of 3581 Affymetrix® gene expression arrays across 17 diverse bacteria. We find that gene sets obtained from GO and KEGG demonstrate lower consistency than those obtained from the SEED and MicrobesOnline, regardless of gene set size. Despite the widespread use of GO and KEGG gene sets in bacterial gene expression data analysis, the SEED and MicrobesOnline provide more consistent sets for a wide variety of statistical analyses. Increased use of the SEED and MicrobesOnline gene sets in the analysis of bacterial gene expression data may improve statistical power and utility of expression data.
Ahmed, K S
1979-01-01
In Bangladesh the Population Control and Family Planning Division of the Ministry of Health and Population Control has decided to delegate increased financial and administrative powers to the officers of the family planning program at the district level and below. Currently, about 20,000 family planning workers and officials are at work in rural areas. The government believes that the success of the entire family planning program depends on the performance of workers in rural areas, because that is where about 90% of the population lives. Awareness of the need to improve statistical data in Bangladesh has been increasing, particularly in regard to the development of rural areas. An accurate statistical profile of rural Bangladesh is crucial to the formation, implementation and evaluation of rural development programs. A Seminar on Statistics for Rural Development will be held from June 18-20, 1980. The primary objectives of the Seminar are to make an exhaustive analysis of the current availability of statistics required for rural development programs and to consider methodological and operational improvements toward building up an adequate data base.
Good experimental design and statistics can save animals, but how can it be promoted?
Festing, Michael F W
2004-06-01
Surveys of published papers show that there are many errors both in the design of the experiments and in the statistical analysis of the resulting data. This must result in a waste of animals and scientific resources, and it is surely unethical. Scientific quality might be improved, to some extent, by journal editors, but they are constrained by lack of statistical referees and inadequate statistical training of those referees that they do use. Other parties, such as welfare regulators, ethical review committees and individual scientists also have an interest in scientific quality, but they do not seem to be well placed to make the required changes. However, those who fund research would have the power to do something if they could be convinced that it is in their best interests to do so. More examples of the way in which better experimental design has led to improved experiments would be helpful in persuading these funding organisations to take further action.
A quality improvement management model for renal care.
Vlchek, D L; Day, L M
1991-04-01
The purpose of this article is to explore the potential for applying the theory and tools of quality improvement (total quality management) in the renal care setting. We believe that the coupling of the statistical techniques used in the Deming method of quality improvement, with modern approaches to outcome and process analysis, will provide the renal care community with powerful tools, not only for improved quality (i.e., reduced morbidity and mortality), but also for technology evaluation and resource allocation.
Falgreen, Steffen; Laursen, Maria Bach; Bødker, Julie Støve; Kjeldsen, Malene Krag; Schmitz, Alexander; Nyegaard, Mette; Johnsen, Hans Erik; Dybkær, Karen; Bøgsted, Martin
2014-06-05
In vitro generated dose-response curves of human cancer cell lines are widely used to develop new therapeutics. The curves are summarised by simplified statistics that ignore the conventionally used dose-response curves' dependency on drug exposure time and growth kinetics. This may lead to suboptimal exploitation of data and biased conclusions on the potential of the drug in question. Therefore we set out to improve the dose-response assessments by eliminating the impact of time dependency. First, a mathematical model for drug induced cell growth inhibition was formulated and used to derive novel dose-response curves and improved summary statistics that are independent of time under the proposed model. Next, a statistical analysis workflow for estimating the improved statistics was suggested consisting of 1) nonlinear regression models for estimation of cell counts and doubling times, 2) isotonic regression for modelling the suggested dose-response curves, and 3) resampling based method for assessing variation of the novel summary statistics. We document that conventionally used summary statistics for dose-response experiments depend on time so that fast growing cell lines compared to slowly growing ones are considered overly sensitive. The adequacy of the mathematical model is tested for doxorubicin and found to fit real data to an acceptable degree. Dose-response data from the NCI60 drug screen were used to illustrate the time dependency and demonstrate an adjustment correcting for it. The applicability of the workflow was illustrated by simulation and application on a doxorubicin growth inhibition screen. The simulations show that under the proposed mathematical model the suggested statistical workflow results in unbiased estimates of the time independent summary statistics. Variance estimates of the novel summary statistics are used to conclude that the doxorubicin screen covers a significant diverse range of responses ensuring it is useful for biological interpretations. Time independent summary statistics may aid the understanding of drugs' action mechanism on tumour cells and potentially renew previous drug sensitivity evaluation studies.
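As a rough illustration of the isotonic-regression step of the workflow (step 2), the sketch below fits a monotone dose-response curve to hypothetical relative cell counts with scikit-learn. The doses, responses, and the IC50-like summary are placeholders, not the authors' implementation or proposed summary statistics.

```python
# Sketch of the isotonic-regression step: fit a monotone (non-increasing) dose-response
# curve of relative cell counts versus log-dose. Doses and responses are hypothetical.
import numpy as np
from sklearn.isotonic import IsotonicRegression

log_dose = np.log10([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])            # hypothetical doses (uM)
relative_count = np.array([1.02, 0.98, 0.95, 0.80, 0.55, 0.30, 0.22])  # treated / control

iso = IsotonicRegression(increasing=False)       # viability should not increase with dose
fitted = iso.fit_transform(log_dose, relative_count)

# Rough IC50-like summary: the log-dose at which the fitted curve crosses 0.5
# (illustrative only; not one of the time-independent statistics proposed in the paper).
ic50_log = np.interp(0.5, fitted[::-1], log_dose[::-1])
print("fitted:", np.round(fitted, 3), " approx log10 IC50:", round(ic50_log, 2))
```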
Kleikers, Pamela W M; Hooijmans, Carlijn; Göb, Eva; Langhauser, Friederike; Rewell, Sarah S J; Radermacher, Kim; Ritskes-Hoitinga, Merel; Howells, David W; Kleinschnitz, Christoph; Schmidt, Harald H H W
2015-08-27
Biomedical research suffers from a dramatically poor translational success. For example, in ischemic stroke, a condition with a high medical need, over a thousand experimental drug targets were unsuccessful. Here, we adopt methods from clinical research for a late-stage pre-clinical meta-analysis (MA) and randomized confirmatory trial (pRCT) approach. A profound body of literature suggests NOX2 to be a major therapeutic target in stroke. Systematic review and MA of all available NOX2(-/y) studies revealed a positive publication bias and lack of statistical power to detect a relevant reduction in infarct size. A fully powered multi-center pRCT rejects NOX2 as a target to improve neurofunctional outcomes or achieve a translationally relevant infarct size reduction. Thus stringent statistical thresholds, reporting negative data and a MA-pRCT approach can ensure biomedical data validity and overcome risks of bias.
Enhanced Higgs boson to τ(+)τ(-) search with deep learning.
Baldi, P; Sadowski, P; Whiteson, D
2015-03-20
The Higgs boson is thought to provide the interaction that imparts mass to the fundamental fermions, but while measurements at the Large Hadron Collider (LHC) are consistent with this hypothesis, current analysis techniques lack the statistical power to cross the traditional 5σ significance barrier without more data. Deep learning techniques have the potential to increase the statistical power of this analysis by automatically learning complex, high-level data representations. In this work, deep neural networks are used to detect the decay of the Higgs boson to a pair of tau leptons. A Bayesian optimization algorithm is used to tune the network architecture and training algorithm hyperparameters, resulting in a deep network of eight nonlinear processing layers that improves upon the performance of shallow classifiers even without the use of features specifically engineered by physicists for this application. The improvement in discovery significance is equivalent to an increase in the accumulated data set of 25%.
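The sketch below is only a schematic of a deep feed-forward classifier on synthetic low-level features; it does not reproduce the study's eight-layer architecture, physics features, or Bayesian hyperparameter optimization.

```python
# Minimal sketch of a deep feed-forward classifier on low-level event-like features.
# Illustration only: the data are synthetic and the architecture is an assumption.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 25))                                        # 25 hypothetical features
y = (X[:, :5].sum(axis=1) + rng.normal(size=5000) > 0).astype(int)     # synthetic signal/background

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,) * 8,    # eight nonlinear hidden layers
                    activation="relu", max_iter=200, random_state=0)
clf.fit(X_tr, y_tr)

print("test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```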
Alsaggaf, Rotana; O'Hara, Lyndsay M; Stafford, Kristen A; Leekha, Surbhi; Harris, Anthony D
2018-02-01
OBJECTIVE A systematic review of quasi-experimental studies in the field of infectious diseases was published in 2005. The aim of this study was to assess improvements in the design and reporting of quasi-experiments 10 years after the initial review. We also aimed to report the statistical methods used to analyze quasi-experimental data. DESIGN Systematic review of articles published from January 1, 2013, to December 31, 2014, in 4 major infectious disease journals. METHODS Quasi-experimental studies focused on infection control and antibiotic resistance were identified and classified based on 4 criteria: (1) type of quasi-experimental design used, (2) justification of the use of the design, (3) use of correct nomenclature to describe the design, and (4) statistical methods used. RESULTS Of 2,600 articles, 173 (7%) featured a quasi-experimental design, compared to 73 of 2,320 articles (3%) in the previous review (P<.01). Moreover, 21 articles (12%) utilized a study design with a control group; 6 (3.5%) justified the use of a quasi-experimental design; and 68 (39%) identified their design using the correct nomenclature. In addition, 2-group statistical tests were used in 75 studies (43%); 58 studies (34%) used standard regression analysis; 18 (10%) used segmented regression analysis; 7 (4%) used standard time-series analysis; 5 (3%) used segmented time-series analysis; and 10 (6%) did not utilize statistical methods for comparisons. CONCLUSIONS While some progress occurred over the decade, it is crucial to continue improving the design and reporting of quasi-experimental studies in the fields of infection control and antibiotic resistance to better evaluate the effectiveness of important interventions. Infect Control Hosp Epidemiol 2018;39:170-176.
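One of the methods tallied above, segmented regression for an interrupted time series, can be sketched as an ordinary regression with level-change and trend-change terms. The monthly rates and intervention point below are simulated, not data from the review.

```python
# Sketch of segmented regression for an interrupted time series: baseline trend plus a
# level change and a trend change at the intervention. All values are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

n_months, intervention = 36, 18
rng = np.random.default_rng(2)
df = pd.DataFrame({"month": np.arange(n_months)})
df["post"] = (df["month"] >= intervention).astype(int)           # level-change indicator
df["months_since"] = np.maximum(0, df["month"] - intervention)   # trend-change term
df["rate"] = (10 - 0.05 * df["month"] - 2.0 * df["post"]
              - 0.10 * df["months_since"] + rng.normal(0, 0.5, n_months))

# rate ~ baseline trend + level change at intervention + change in trend after it
fit = smf.ols("rate ~ month + post + months_since", data=df).fit()
print(fit.params)
```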
A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis.
Lin, Johnny; Bentler, Peter M
2012-01-01
Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square, but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra and Bentler's mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds a new application in the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra and Bentler's statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic.
Differentiation of chocolates according to the cocoa's geographical origin using chemometrics.
Cambrai, Amandine; Marcic, Christophe; Morville, Stéphane; Sae Houer, Pierre; Bindler, Françoise; Marchioni, Eric
2010-02-10
The determination of the geographical origin of cocoa used to produce chocolate has been assessed through the analysis of the volatile compounds of chocolate samples. Analysis of the volatile content, followed by statistical processing with multivariate analyses, tended to form independent groups for both Africa and Madagascar, even if some of the chocolate samples analyzed appeared in a mixed zone together with those from America. This analysis also allowed a clear separation between Caribbean chocolates and those from other origins. Eight compounds (such as linalool or (E,E)-2,4-decadienal) characteristic of chocolate's different geographical origins were also identified. The method described in this work (hydrodistillation, GC analysis, and statistical treatment) may improve the control of the geographical origin of chocolate during its long production process.
SOCR Analyses – an Instructional Java Web-based Statistical Analysis Toolkit
Chu, Annie; Cui, Jenny; Dinov, Ivo D.
2011-01-01
The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses like linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as t-test in the parametric category; and Wilcoxon rank sum test, Kruskal-Wallis test, Friedman's test, in the non-parametric category. SOCR Analyses also include several hypothesis test models, such as Contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with API (Application Programming Interface) have been implemented in statistical summary, least square solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is on-going and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for most updated information and newly added models. PMID:21546994
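SOCR Analyses itself is a Java web toolkit; purely as an illustration of the same classes of tests it implements, the sketch below runs parametric and non-parametric comparisons with SciPy on synthetic data.

```python
# Illustration only: the same families of tests offered by SOCR Analyses
# (parametric and non-parametric), run here with SciPy on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
a, b, c = rng.normal(0, 1, 30), rng.normal(0.5, 1, 30), rng.normal(1.0, 1, 30)

print("t-test:           ", stats.ttest_ind(a, b))
print("Wilcoxon rank-sum:", stats.ranksums(a, b))
print("Kruskal-Wallis:   ", stats.kruskal(a, b, c))
print("Friedman:         ", stats.friedmanchisquare(a, b, c))
print("Fisher's exact:   ", stats.fisher_exact([[8, 2], [1, 9]]))
```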
Burke, F J T; Ravaghi, V; Mackenzie, L; Priest, N; Falcon, H C
2017-04-21
Aim To assess the performance, and thereby the progress, of the foundation dentists (FDs) when they carried out a number of simulated clinical exercises at the start and at the end of their FD year. Methods A standardised simulated clinical restorative dentistry training exercise was carried out by a group of 61 recently qualified dental graduates undertaking a 12-month foundation training programme in England, at both the start and end of the programme. Participants completed a Class II cavity preparation and amalgam restoration, a Class IV composite resin restoration and two preparations for a porcelain-metal full crown. The completed preparations and restorations were independently assessed by an experienced consultant in restorative dentistry, using a scoring system based on previously validated criteria. The data were subjected to statistical analysis. Results There was wide variation in individual performance. Overall, there was a small but not statistically significant improvement in performance by the end of the programme. A statistically significant improvement was observed for the amalgam preparation and restoration, and, overall, for one of the five geographical sub-groups in the study. Possible reasons for the variable performance and improvement are discussed. Conclusions There was variability in the performance of the FDs. The operative performance of FDs at the commencement and end of their FD year indicated an overall moderately improved performance over the year and a statistically significant improvement in their performance with regard to amalgam restoration.
Luo, Li; Zhu, Yun; Xiong, Momiao
2012-06-01
The genome-wide association studies (GWAS) designed for next-generation sequencing data involve testing association of genomic variants, including common, low-frequency, and rare variants. The current strategies for association studies are well developed for identifying associations of common variants with common diseases, but may be ill-suited when large amounts of allelic heterogeneity are present in sequence data. Recently, group tests that analyze the collective frequency differences of multiple variants between cases and controls have shifted the current variant-by-variant analysis paradigm for GWAS of common variants to the collective testing of multiple variants in the association analysis of rare variants. However, group tests ignore differences in genetic effects among SNPs at different genomic locations. As an alternative to group tests, we developed a novel genome-information content-based statistic for testing association of the entire allele frequency spectrum of genomic variation with disease. To evaluate the performance of the proposed statistic, we use large-scale simulations based on whole genome low coverage pilot data in the 1000 Genomes Project to calculate the type 1 error rates and power of seven alternative statistics: a genome-information content-based statistic, the generalized T(2), the collapsing method, the combined multivariate and collapsing (CMC) method, the individual χ(2) test, the weighted-sum statistic, and the variable threshold statistic. Finally, we apply the seven statistics to a published resequencing dataset from the ANGPTL3, ANGPTL4, ANGPTL5, and ANGPTL6 genes in the Dallas Heart Study. We report that the genome-information content-based statistic shows significantly improved type 1 error rates and higher power than the other six statistics in both simulated and empirical datasets.
Wagar, Elizabeth A; Tamashiro, Lorraine; Yasin, Bushra; Hilborne, Lee; Bruckner, David A
2006-11-01
Patient safety is an increasingly visible and important mission for clinical laboratories. Attention to improving processes related to patient identification and specimen labeling is being paid by accreditation and regulatory organizations because errors in these areas that jeopardize patient safety are common and avoidable through improvement in the total testing process. To assess patient identification and specimen labeling improvement after multiple implementation projects using longitudinal statistical tools. Specimen errors were categorized by a multidisciplinary health care team. Patient identification errors were grouped into 3 categories: (1) specimen/requisition mismatch, (2) unlabeled specimens, and (3) mislabeled specimens. Specimens with these types of identification errors were compared preimplementation and postimplementation for 3 patient safety projects: (1) reorganization of phlebotomy (4 months); (2) introduction of an electronic event reporting system (10 months); and (3) activation of an automated processing system (14 months) for a 24-month period, using trend analysis and Student t test statistics. Of 16,632 total specimen errors, mislabeled specimens, requisition mismatches, and unlabeled specimens represented 1.0%, 6.3%, and 4.6% of errors, respectively. Student t test showed a significant decrease in the most serious error, mislabeled specimens (P < .001) when compared to before implementation of the 3 patient safety projects. Trend analysis demonstrated decreases in all 3 error types for 26 months. Applying performance-improvement strategies that focus longitudinally on specimen labeling errors can significantly reduce errors, therefore improving patient safety. This is an important area in which laboratory professionals, working in interdisciplinary teams, can improve safety and outcomes of care.
A comprehensive study on pavement edge line implementation.
DOT National Transportation Integrated Search
2014-04-01
The previous 2011 study, Safety Improvement from Edge Lines on Rural Two-Lane Highways, analyzed crash data from three years before and one year after edge line implementation using the latest statistical methods for safety analysis. It concl...
Rodríguez-Arias, Miquel Angel; Rodó, Xavier
2004-03-01
Here we describe a practical, step-by-step primer to scale-dependent correlation (SDC) analysis. The analysis of transitory processes is an important but often neglected topic in ecological studies because only a few statistical techniques appear to detect temporary features accurately enough. We introduce here the SDC analysis, a statistical and graphical method to study transitory processes at any temporal or spatial scale. SDC analysis, thanks to the combination of conventional procedures and simple well-known statistical techniques, becomes an improved time-domain analogue of wavelet analysis. We use several simple synthetic series to describe the method, a more complex example, full of transitory features, to compare SDC and wavelet analysis, and finally we analyze some selected ecological series to illustrate the methodology. The SDC analysis of time series of copepod abundances in the North Sea indicates that ENSO primarily is the main climatic driver of short-term changes in population dynamics. SDC also uncovers some long-term, unexpected features in the population. Similarly, the SDC analysis of Nicholson's blowflies data locates where the proposed models fail and provides new insights about the mechanism that drives the apparent vanishing of the population cycle during the second half of the series.
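A highly simplified, hypothetical illustration of the idea behind SDC, local correlation within windows of a fixed size (one "scale"), is sketched below; the full SDC method also assesses the significance of each local correlation and scans lags and scales.

```python
# Highly simplified illustration of the scale-dependent correlation idea: compute local
# correlations between two series in windows of a fixed size (one "scale"). The full SDC
# method also tests significance and scans lags/scales; the series here are synthetic.
import numpy as np

rng = np.random.default_rng(4)
n, scale = 200, 24                          # series length and window size (the "scale")
x = np.sin(np.linspace(0, 8 * np.pi, n)) + rng.normal(0, 0.3, n)
y = np.roll(x, 3) + rng.normal(0, 0.3, n)   # y lags x by 3 steps (a transitory coupling)

local_r = np.array([np.corrcoef(x[i:i + scale], y[i:i + scale])[0, 1]
                    for i in range(n - scale + 1)])
print("windows with |r| > 0.7:", int(np.sum(np.abs(local_r) > 0.7)), "of", local_r.size)
```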
GWAMA: software for genome-wide association meta-analysis.
Mägi, Reedik; Morris, Andrew P
2010-05-28
Despite the recent success of genome-wide association studies in identifying novel loci contributing effects to complex human traits, such as type 2 diabetes and obesity, much of the genetic component of variation in these phenotypes remains unexplained. One way of improving power to detect further novel loci is through meta-analysis of studies from the same population, increasing the sample size over any individual study. Although statistical software analysis packages incorporate routines for meta-analysis, they are ill-equipped to meet the challenges of the scale and complexity of data generated in genome-wide association studies. We have developed flexible, open-source software for the meta-analysis of genome-wide association studies. The software incorporates a variety of error trapping facilities, and provides a range of meta-analysis summary statistics. The software is distributed with scripts that allow simple formatting of files containing the results of each association study and generate graphical summaries of genome-wide meta-analysis results. The GWAMA (Genome-Wide Association Meta-Analysis) software has been developed to perform meta-analysis of summary statistics generated from genome-wide association studies of dichotomous phenotypes or quantitative traits. Software with source files, documentation and example data files are freely available online at http://www.well.ox.ac.uk/GWAMA.
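The core per-variant calculation that such meta-analysis software typically performs can be sketched as fixed-effect inverse-variance weighting; the per-study effect sizes and standard errors below are hypothetical, and this is not GWAMA's code.

```python
# Sketch of fixed-effect inverse-variance meta-analysis for a single variant.
# The per-study betas and standard errors are hypothetical placeholders.
import numpy as np
from scipy import stats

beta = np.array([0.12, 0.08, 0.15])    # per-study effect estimates (hypothetical)
se = np.array([0.05, 0.04, 0.07])      # per-study standard errors (hypothetical)

w = 1.0 / se**2                        # inverse-variance weights
beta_meta = np.sum(w * beta) / np.sum(w)
se_meta = np.sqrt(1.0 / np.sum(w))
z = beta_meta / se_meta
p = 2 * stats.norm.sf(abs(z))

print(f"meta beta = {beta_meta:.3f}, SE = {se_meta:.3f}, z = {z:.2f}, p = {p:.3g}")
```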
NASA Astrophysics Data System (ADS)
Kim, J.; Kim, J. H.; Jee, G.; Lee, C.; Kim, Y.
2017-12-01
The Spectral Airglow Temperature Imager (SATI) installed at King Sejong Station (62.22S, 58.78W), Antarctica, has continuously measured the airglow emissions from the OH (6-2) Meinel and O2 (0-1) atmospheric bands since 2002 in order to investigate the dynamics of the polar MLT region. The measurements allow us to derive the rotational temperature at the peak emission heights, about 87 km and 94 km for the OH and O2 airglows, respectively. In this study, we briefly introduce an improved analysis technique that modifies the original analysis code; the major change is an improved function for finding the exact center position in the observed image. In addition to this brief introduction of the improved technique, we present a statistical investigation of the periodic variations in the temperatures of the two layers during the period 2002 through 2011 and compare our results with temperatures measured by satellite.
Outcomes assessment following treatment of spasmodic dysphonia with botulinum toxin.
Courey, M S; Garrett, C G; Billante, C R; Stone, R E; Portell, M D; Smith, T L; Netterville, J L
2000-09-01
Spasmodic dysphonia (SD), a disabling focal dystonia involving the laryngeal musculature, is most commonly treated by the intramuscular injection of botulinum toxin (BTX). Although the treatment is well tolerated and generally produces clinical voice improvement, it has never been statistically shown to alter the patient's perception of voice quality or general health. Declining resources for medical care mandate that treatment outcomes be documented. A prospective analysis of the effects of BTX on the patient's perception of voice and general health was undertaken. The Voice Handicap Index (VHI) and Short Form 36 (SF-36) surveys were administered to patients before treatment and 1 month after. Pretreatment and posttreatment scores were analyzed with a Student's t-test. On the VHI, improvements in the patients' perception of their functional, physical, and emotional voice handicap reached statistical significance (p < or = .0005). On the SF-36, patients had statistically significant improvements in mental health (p < or = .03) and social functioning (p < or = .04). Treatment of SD with BTX significantly lessened the patients' perception of dysphonia. In addition, it improved their social functioning and their perception of their mental health. These outcome measures justify the continued treatment of SD with BTX.
AGR-1 Thermocouple Data Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeff Einerson
2012-05-01
This report documents an effort to analyze measured and simulated data obtained in the Advanced Gas Reactor (AGR) fuel irradiation test program conducted in the INL's Advanced Test Reactor (ATR) to support the Next Generation Nuclear Plant (NGNP) R&D program. The work follows up on a previous study (Pham and Einerson, 2010), in which statistical analysis methods were applied for AGR-1 thermocouple data qualification. The present work exercises the idea that, while recognizing uncertainties inherent in physics and thermal simulations of the AGR-1 test, results of the numerical simulations can be used in combination with the statistical analysis methods to further improve qualification of measured data. Additionally, the combined analysis of measured and simulation data can generate insights about simulation model uncertainty that can be useful for model improvement. This report also describes an experimental control procedure to maintain fuel target temperature in the future AGR tests using regression relationships that include simulation results. The report is organized into four chapters. Chapter 1 introduces the AGR Fuel Development and Qualification program, AGR-1 test configuration and test procedure, overview of AGR-1 measured data, and overview of physics and thermal simulation, including modeling assumptions and uncertainties. A brief summary of statistical analysis methods developed in (Pham and Einerson 2010) for AGR-1 measured data qualification within the NGNP Data Management and Analysis System (NDMAS) is also included for completeness. Chapters 2-3 describe and discuss cases in which the combined use of experimental and simulation data is realized. A set of issues associated with measurement and modeling uncertainties resulting from the combined analysis is identified. This includes demonstration that such a combined analysis led to important insights for reducing uncertainty in presentation of AGR-1 measured data (Chapter 2) and interpretation of simulation results (Chapter 3). The statistics-based, simulation-aided experimental control procedure described for the future AGR tests is developed and demonstrated in Chapter 4. The procedure for controlling the target fuel temperature (capsule peak or average) is based on regression functions of thermocouple readings and other relevant parameters, accounting for possible changes in both physical and thermal conditions and in instrument performance.
Shitara, Kohei; Matsuo, Keitaro; Oze, Isao; Mizota, Ayako; Kondo, Chihiro; Nomura, Motoo; Yokota, Tomoya; Takahari, Daisuke; Ura, Takashi; Muro, Kei
2011-08-01
We performed a systematic review and meta-analysis to determine the impact of neutropenia or leukopenia experienced during chemotherapy on survival. Eligible studies included prospective or retrospective analyses that evaluated neutropenia or leukopenia as a prognostic factor for overall survival or disease-free survival. Statistical analyses were conducted to calculate a summary hazard ratio and 95% confidence interval (CI) using random-effects or fixed-effects models based on the heterogeneity of the included studies. Thirteen trials were selected for the meta-analysis, with a total of 9,528 patients. The hazard ratio of death was 0.69 (95% CI, 0.64-0.75) for patients with higher-grade neutropenia or leukopenia compared to patients with lower-grade or lack of cytopenia. Our analysis was also stratified by statistical method (any statistical method to decrease lead-time bias; time-varying analysis or landmark analysis), but no differences were observed. Our results indicate that neutropenia or leukopenia experienced during chemotherapy is associated with improved survival in patients with advanced cancer or hematological malignancies undergoing chemotherapy. Future prospective analyses designed to investigate the potential impact of chemotherapy dose adjustment coupled with monitoring of neutropenia or leukopenia on survival are warranted.
Photographic and photometric enhancement of Lunar Orbiter products, projects A, B and C
NASA Technical Reports Server (NTRS)
1972-01-01
A detailed discussion is presented of the framelet joining, photometric data improvement, and statistical error analysis. The Lunar Orbiter film handling system, readout system, and the digitization are described, along with the technique of joining adjacent framelets by using a digital computer. Time and cost estimates are given. The problems and techniques involved in improving the digitized data are discussed. It was found that spectacular improvements are possible. Program documentation is included.
Alteration of histological gastritis after cure of Helicobacter pylori infection.
Hojo, M; Miwa, H; Ohkusa, T; Ohkura, R; Kurosawa, A; Sato, N
2002-11-01
It is still disputed whether gastric atrophy or intestinal metaplasia improves after the cure of Helicobacter pylori infection. To clarify the histological changes after the cure of H. pylori infection through a literature survey. Fifty-one selected reports from 1066 relevant articles were reviewed. The extracted data were pooled according to histological parameters of gastritis based on the (updated) Sydney system. Activity improved more rapidly than inflammation. Eleven of 25 reports described significant improvement of atrophy. Atrophy was not improved in one of four studies with a large sample size (> 100 samples) and in two of five studies with a long follow-up period (> 12 months), suggesting that disagreement between the studies was not totally due to sample size or follow-up period. Methodological flaws, such as patient selection, and statistical analysis based on the assumption that atrophy improves continuously and generally in all patients might be responsible for the inconsistent results. Four of 28 studies described significant improvement of intestinal metaplasia [corrected]. Activity and inflammation were improved after the cure of H. pylori infection. Atrophy did not improve generally among all patients, but improved in certain patients. Improvement of intestinal metaplasia was difficult to analyse due to methodological problems including statistical power.
The Practicality of Statistical Physics Handout Based on KKNI and the Constructivist Approach
NASA Astrophysics Data System (ADS)
Sari, S. Y.; Afrizon, R.
2018-04-01
Evaluation of the statistical physics lectures shows that: 1) the performance of lecturers, the social climate, students' competence, and the soft skills needed at work are only in the 'sufficient' category; 2) students find it difficult to follow statistical physics lectures because the material is abstract; 3) 40.72% of students need additional support in the form of repetition, practice questions, and structured tasks; and 4) the depth of the statistical physics material needs to be improved gradually and in a structured way. This indicates that learning materials aligned with the Indonesian National Qualification Framework (Kerangka Kualifikasi Nasional Indonesia, KKNI) and an appropriate learning approach are needed to help lecturers and students in lectures. The authors have designed statistical physics handouts that meet the 'very valid' criterion (90.89%) according to expert judgment. In addition, the practicality of the handouts must also be considered so that they are easy to use, interesting, and efficient in lectures. The purpose of this research is to determine the practicality of a statistical physics handout based on the KKNI and a constructivist approach. This research is part of a research and development effort using the 4-D model developed by Thiagarajan and has reached the development-testing part of the Development stage. Data were collected with a questionnaire distributed to lecturers and students and analyzed using descriptive techniques expressed as percentages. The analysis of the questionnaire shows that the statistical physics handout meets the 'very practical' criterion. The conclusion of this study is that statistical physics handouts based on the KKNI and a constructivist approach are practical for use in lectures.
An Update to the Budget and Economic Outlook: 2015 to 2025
2015-08-01
Economic Analysis; Bureau of Labor Statistics; Federal Reserve. Notes: Real gross domestic product is the output of the economy adjusted to remove the ... the next few years, in CBO's view, will be continued improvements in households' creditworthiness and in the availability of credit. Delinquency rates ... Budget Office; Bureau of Labor Statistics. Notes: The rate of short-term unemployment is the percentage of the labor force that has been out of work for ...
Image encryption based on a delayed fractional-order chaotic logistic system
NASA Astrophysics Data System (ADS)
Wang, Zhen; Huang, Xia; Li, Ning; Song, Xiao-Na
2012-05-01
A new image encryption scheme is proposed based on a delayed fractional-order chaotic logistic system. In the process of generating a key stream, the time-varying delay and fractional derivative are embedded in the proposed scheme to improve the security. Such a scheme is described in detail with security analyses including correlation analysis, information entropy analysis, run statistic analysis, mean-variance gray value analysis, and key sensitivity analysis. Experimental results show that the newly proposed image encryption scheme possesses high security.
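Purely as an illustration of a chaotic key stream, the sketch below XORs image bytes with bytes generated by an ordinary (integer-order, undelayed) logistic map; it is not the proposed delayed fractional-order scheme and is not secure.

```python
# Illustration only: an ordinary logistic map used to generate a byte key stream that is
# XOR-ed with image data. The paper's scheme uses a delayed fractional-order system and
# further diffusion steps; this simplified sketch is neither secure nor their algorithm.
import numpy as np

def logistic_keystream(n_bytes, x0=0.3141, r=3.99):
    x, out = x0, np.empty(n_bytes, dtype=np.uint8)
    for i in range(n_bytes):
        x = r * x * (1.0 - x)        # logistic map iteration
        out[i] = int(x * 256) % 256  # quantize the state to a key byte
    return out

image = np.random.default_rng(5).integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
key = logistic_keystream(image.size).reshape(image.shape)
cipher = image ^ key                        # encryption
assert np.array_equal(cipher ^ key, image)  # decryption recovers the image
```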
Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu
2015-09-01
Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and analysis of the glucose production levels when λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior and we can use the λ parameter to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and the λ value in saccharification performance assessment were discussed. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
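A minimal sketch of how such a Weibull-type saccharification curve could be fitted is shown below; the functional form y(t) = y_max·(1 − exp(−(t/λ)^n)) and the time-course data are assumptions chosen for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_yield(t, y_max, lam, n):
    """Weibull-type saccharification curve: cumulative yield vs. time.

    lam is the characteristic time (the time at which ~63.2% of y_max is
    reached) and n is the shape parameter.
    """
    return y_max * (1.0 - np.exp(-(t / lam) ** n))

# hypothetical time course (hours) and glucose yields (g/L)
t = np.array([2, 4, 8, 12, 24, 48, 72], dtype=float)
y = np.array([3.1, 5.8, 9.6, 12.0, 15.9, 18.4, 19.1])

popt, pcov = curve_fit(weibull_yield, t, y, p0=[20.0, 10.0, 1.0])
y_max, lam, n = popt
print(f"y_max = {y_max:.1f} g/L, lambda = {lam:.1f} h, n = {n:.2f}")
```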
Schwämmle, Veit; León, Ileana Rodríguez; Jensen, Ole Nørregaard
2013-09-06
Large-scale quantitative analyses of biological systems are often performed with few replicate experiments, leading to multiple nonidentical data sets due to missing values. For example, mass spectrometry driven proteomics experiments are frequently performed with few biological or technical replicates due to sample-scarcity or due to duty-cycle or sensitivity constraints, or limited capacity of the available instrumentation, leading to incomplete results where detection of significant feature changes becomes a challenge. This problem is further exacerbated for the detection of significant changes on the peptide level, for example, in phospho-proteomics experiments. In order to assess the extent of this problem and the implications for large-scale proteome analysis, we investigated and optimized the performance of three statistical approaches by using simulated and experimental data sets with varying numbers of missing values. We applied three tools, including standard t test, moderated t test, also known as limma, and rank products for the detection of significantly changing features in simulated and experimental proteomics data sets with missing values. The rank product method was improved to work with data sets containing missing values. Extensive analysis of simulated and experimental data sets revealed that the performance of the statistical analysis tools depended on simple properties of the data sets. High-confidence results were obtained by using the limma and rank products methods for analyses of triplicate data sets that exhibited more than 1000 features and more than 50% missing values. The maximum number of differentially represented features was identified by using limma and rank products methods in a complementary manner. We therefore recommend combined usage of these methods as a novel and optimal way to detect significantly changing features in these data sets. This approach is suitable for large quantitative data sets from stable isotope labeling and mass spectrometry experiments and should be applicable to large data sets of any type. An R script that implements the improved rank products algorithm and the combined analysis is available.
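The improved rank products algorithm itself is distributed by the authors as an R script; the following Python sketch only illustrates the basic rank-product idea in the presence of missing values (rank within each replicate, ignore NaNs, take the geometric mean of the available ranks). The toy data are invented.

```python
import numpy as np
from scipy.stats import rankdata

def rank_products(log_ratios):
    """Rank product per feature for a features x replicates matrix of
    log-ratios that may contain NaNs (missing values).

    Within each replicate, features are ranked from most down-regulated
    (rank 1) upward; missing entries are ignored, and the geometric mean
    of the available ranks is returned for each feature.
    """
    n_features, n_reps = log_ratios.shape
    ranks = np.full_like(log_ratios, np.nan, dtype=float)
    for j in range(n_reps):
        col = log_ratios[:, j]
        ok = ~np.isnan(col)
        ranks[ok, j] = rankdata(col[ok])             # smallest ratio gets rank 1
    with np.errstate(invalid="ignore"):
        log_rp = np.nanmean(np.log(ranks), axis=1)   # geometric mean of ranks
    return np.exp(log_rp)

# toy data: 5 features, 3 replicates, one missing value
data = np.array([[-2.1, -1.8, np.nan],
                 [ 0.1,  0.2,  0.0],
                 [-0.4,  0.3, -0.1],
                 [ 1.5,  1.2,  1.7],
                 [-1.9, -2.2, -1.6]])
print(rank_products(data))   # features 1 and 5 get the smallest rank products
```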
Hamel, Jean-Francois; Saulnier, Patrick; Pe, Madeline; Zikos, Efstathios; Musoro, Jammbe; Coens, Corneel; Bottomley, Andrew
2017-09-01
Over the last decades, Health-related Quality of Life (HRQoL) end-points have become an important outcome of the randomised controlled trials (RCTs). HRQoL methodology in RCTs has improved following international consensus recommendations. However, no international recommendations exist concerning the statistical analysis of such data. The aim of our study was to identify and characterise the quality of the statistical methods commonly used for analysing HRQoL data in cancer RCTs. Building on our recently published systematic review, we analysed a total of 33 published RCTs studying the HRQoL methods reported in RCTs since 1991. We focussed on the ability of the methods to deal with the three major problems commonly encountered when analysing HRQoL data: their multidimensional and longitudinal structure and the commonly high rate of missing data. All studies reported HRQoL being assessed repeatedly over time for a period ranging from 2 to 36 months. Missing data were common, with compliance rates ranging from 45% to 90%. From the 33 studies considered, 12 different statistical methods were identified. Twenty-nine studies analysed each of the questionnaire sub-dimensions without type I error adjustment. Thirteen studies repeated the HRQoL analysis at each assessment time again without type I error adjustment. Only 8 studies used methods suitable for repeated measurements. Our findings show a lack of consistency in statistical methods for analysing HRQoL data. Problems related to multiple comparisons were rarely considered leading to a high risk of false positive results. It is therefore critical that international recommendations for improving such statistical practices are developed. Copyright © 2017. Published by Elsevier Ltd.
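One of the problems highlighted above, testing many HRQoL sub-dimensions and assessment times without type I error control, can be addressed with standard multiplicity adjustments. A small illustrative sketch follows, using hypothetical p-values and the Holm correction from statsmodels; it is not part of the reviewed studies.

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

# hypothetical p-values from testing each HRQoL sub-dimension at one time point
p_values = np.array([0.012, 0.049, 0.003, 0.21, 0.08, 0.0004, 0.62, 0.04])

# Holm step-down adjustment keeps the family-wise type I error at 5%
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="holm")

for p, p_adj, r in zip(p_values, p_adjusted, reject):
    print(f"raw p = {p:.4f}  adjusted p = {p_adj:.4f}  significant = {r}")
```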
Fetit, Ahmed E; Novak, Jan; Peet, Andrew C; Arvanitits, Theodoros N
2015-09-01
The aim of this study was to assess the efficacy of three-dimensional texture analysis (3D TA) of conventional MR images for the classification of childhood brain tumours in a quantitative manner. The dataset comprised pre-contrast T1- and T2-weighted MRI series obtained from 48 children diagnosed with brain tumours (medulloblastoma, pilocytic astrocytoma and ependymoma). 3D and 2D TA were carried out on the images using first-, second- and higher order statistical methods. Six supervised classification algorithms were trained with the most influential 3D and 2D textural features, and their performances in the classification of tumour types, using the two feature sets, were compared. Model validation was carried out using the leave-one-out cross-validation (LOOCV) approach, as well as stratified 10-fold cross-validation, in order to provide additional reassurance. McNemar's test was used to test the statistical significance of any improvements demonstrated by 3D-trained classifiers. Supervised learning models trained with 3D textural features showed improved classification performance compared with those trained with conventional 2D features. For instance, a neural network classifier showed 12% improvement in area under the receiver operator characteristics curve (AUC) and 19% in overall classification accuracy. These improvements were statistically significant for four of the tested classifiers, as per McNemar's tests. This study shows that 3D textural features extracted from conventional T1- and T2-weighted images can improve the diagnostic classification of childhood brain tumours. Long-term benefits of accurate, yet non-invasive, diagnostic aids include a reduction in surgical procedures, improvement in surgical and therapy planning, and support of discussions with patients' families. It remains necessary, however, to extend the analysis to a multicentre cohort in order to assess the scalability of the techniques used. Copyright © 2015 John Wiley & Sons, Ltd.
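McNemar's test, as used here to compare paired classifier decisions, can be reproduced with standard libraries. The sketch below uses a hypothetical 2x2 table of agreement between a 2D-trained and a 3D-trained classifier on 48 cases; the counts are invented for illustration.

```python
from statsmodels.stats.contingency_tables import mcnemar

# hypothetical paired outcomes for one classifier trained on 2D vs. 3D features:
# rows = 2D-trained model (correct / wrong), columns = 3D-trained model
#                 3D correct   3D wrong
table = [[30,           2],     # 2D correct
         [ 9,           7]]     # 2D wrong

# exact=True uses the binomial distribution on the discordant pairs,
# which is appropriate for the small cell counts typical of 48 patients
result = mcnemar(table, exact=True)
print(f"statistic = {result.statistic}, p-value = {result.pvalue:.4f}")
```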
NASA Astrophysics Data System (ADS)
Zhao, Runchen; Ientilucci, Emmett J.
2017-05-01
Hyperspectral remote sensing systems provide spectral data composed of hundreds of narrow spectral bands. Spectral remote sensing systems can be used to identify targets, for example, without physical interaction. Often it is of interest to characterize the spectral variability of targets or objects. The purpose of this paper is to identify and characterize the LWIR spectral variability of targets based on an improved earth observing statistical performance model, known as the Forecasting and Analysis of Spectroradiometric System Performance (FASSP) model. FASSP contains three basic modules: a scene model, a sensor model and a processing model. Instead of using mean surface reflectance only as input to the model, FASSP transfers user-defined statistical characteristics of a scene through the image chain (i.e., from source to sensor). The radiative transfer model MODTRAN is used to simulate the radiative transfer based on user-defined atmospheric parameters. To retrieve class emissivity and temperature statistics, or temperature / emissivity separation (TES), a LWIR atmospheric compensation method is necessary. The FASSP model has a method to transform statistics in the visible (i.e., ELM) but currently does not have a LWIR TES algorithm in place. This paper addresses the implementation of such a TES algorithm and its associated transformation of statistics.
Robustness of S1 statistic with Hodges-Lehmann for skewed distributions
NASA Astrophysics Data System (ADS)
Ahad, Nor Aishah; Yahaya, Sharipah Soaad Syed; Yin, Lee Ping
2016-10-01
Analysis of variance (ANOVA) is a commonly used parametric method to test differences in means among more than two groups when the populations are normally distributed. ANOVA is highly inefficient under non-normal and heteroscedastic settings. When the assumptions are violated, researchers look for alternatives such as the nonparametric Kruskal-Wallis test or robust methods. This study focused on a flexible method, the S1 statistic, for comparing groups using the median as the location estimator. The S1 statistic was modified by substituting the median with the Hodges-Lehmann estimator and the default scale estimator with the variance of Hodges-Lehmann and MADn, producing two different test statistics for comparing groups. A bootstrap method was used for testing the hypotheses since the sampling distributions of these modified S1 statistics are unknown. The performance of the proposed statistics in terms of Type I error was measured and compared against the original S1 statistic, ANOVA and Kruskal-Wallis. The proposed procedures show improvement over the original statistic, especially under extremely skewed distributions.
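For readers unfamiliar with the Hodges-Lehmann estimator substituted into the S1 statistic, a minimal sketch of the classical one-sample version (the median of all pairwise Walsh averages) is given below; it illustrates the location estimator only, not the full modified S1 test or its bootstrap.

```python
import numpy as np
from itertools import combinations

def hodges_lehmann(x):
    """One-sample Hodges-Lehmann estimator: the median of all pairwise
    (Walsh) averages (x_i + x_j) / 2 for i <= j."""
    x = np.asarray(x, dtype=float)
    walsh = [(a + b) / 2.0 for a, b in combinations(x, 2)]
    walsh.extend(x)                       # include the i == j pairs
    return np.median(walsh)

# toy sample from a skewed distribution
sample = np.array([0.3, 0.5, 0.6, 0.9, 1.1, 1.4, 4.8])
print("median         :", np.median(sample))
print("Hodges-Lehmann :", hodges_lehmann(sample))
```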
ERIC Educational Resources Information Center
Thompson, Terry J.; Gould, Karen J.
2005-01-01
In recent years the Metropolitan School District of Wayne Township in Indianapolis has been awash in data. In attempts to improve levels of student achievement, the authors collected all manner of statistical details about students and schools and attempted to perform data analysis as part of the school improvement process. The authors were never…
NASA Technical Reports Server (NTRS)
Rana, D. S.
1980-01-01
The data reduction capabilities of the current data reduction programs were assessed and a search for a more comprehensive system with higher data analytic capabilities was made. Results of the investigation are presented.
Control of hot mix production by cold feed only : final report.
DOT National Transportation Integrated Search
1978-04-01
This report is concerned with an analysis of the gradation control possible with recently improved aggregate cold feed systems. The gradation control for three mix types produced in a screenless batch plant was monitored and statistically compared wi...
NASA Technical Reports Server (NTRS)
Hailperin, Max
1993-01-01
This thesis provides design and analysis of techniques for global load balancing on ensemble architectures running soft-real-time object-oriented applications with statistically periodic loads. It focuses on estimating the instantaneous average load over all the processing elements. The major contribution is the use of explicit stochastic process models for both the loading and the averaging itself. These models are exploited via statistical time-series analysis and Bayesian inference to provide improved average load estimates, and thus to facilitate global load balancing. This thesis explains the distributed algorithms used and provides some optimality results. It also describes the algorithms' implementation and gives performance results from simulation. These results show that our techniques allow more accurate estimation of the global system loading, resulting in fewer object migrations than local methods. Our method is shown to provide superior performance, relative not only to static load-balancing schemes but also to many adaptive methods.
Chae, Su Jin; Jeong, So Mi; Chung, Yoon-Sok
2017-09-01
This study is aimed at identifying the relationships between medical school students' academic burnout, empathy, and calling, and determining whether their calling has a mediating effect on the relationship between academic burnout and empathy. A mixed method study was conducted. One hundred twenty-seven medical students completed a survey. Scales measuring academic burnout, medical students' empathy, and calling were utilized. For statistical analysis, correlation analysis, descriptive statistics analysis, and hierarchical multiple regression analyses were conducted. For qualitative approach, eight medical students participated in a focus group interview. The study found that empathy has a statistically significant, negative correlation with academic burnout, while having a significant, positive correlation with calling. Sense of calling proved to be an effective mediator of the relationship between academic burnout and empathy. This result demonstrates that calling is a key variable that mediates the relationship between medical students' academic burnout and empathy. As such, this study provides baseline data for an education that could improve medical students' empathy skills.
Statistical analysis of the determinations of the Sun's Galactocentric distance
NASA Astrophysics Data System (ADS)
Malkin, Zinovy
2013-02-01
Based on several tens of R0 measurements made during the past two decades, several studies have been performed to derive the best estimate of R0. Some used just simple averaging to derive a result, whereas others provided comprehensive analyses of possible errors in published results. In either case, detailed statistical analyses of data used were not performed. However, a computation of the best estimates of the Galactic rotation constants is not only an astronomical but also a metrological task. Here we perform an analysis of 53 R0 measurements (published in the past 20 years) to assess the consistency of the data. Our analysis shows that they are internally consistent. It is also shown that any trend in the R0 estimates from the last 20 years is statistically negligible, which renders the presence of a bandwagon effect doubtful. On the other hand, the formal errors in the published R0 estimates improve significantly with time.
Vajawat, Mayuri; Deepika, P. C.; Kumar, Vijay; Rajeshwari, P.
2015-01-01
Aim: To compare the efficacy of powered toothbrushes in improving gingival health and reducing salivary red complex counts, as compared to manual toothbrushes, among autistic individuals. Materials and Methods: Forty autistic individuals were selected. The test group received powered toothbrushes, and the control group received manual toothbrushes. Plaque index and gingival index were recorded. Unstimulated saliva was collected for analysis of red complex organisms using polymerase chain reaction. Results: A statistically significant reduction in the plaque scores was seen over a period of 12 weeks in both groups (P < 0.001 for tests and P = 0.002 for controls). This reduction was statistically more significant in the test group (P = 0.024). A statistically significant reduction in the gingival scores was seen over a period of 12 weeks in both groups (P < 0.001 for tests and P = 0.001 for controls). This reduction was statistically more significant in the test group (P = 0.042). No statistically significant reduction in the detection rate of red complex organisms was seen at 4 weeks in either group. Conclusion: Powered toothbrushes result in a significant overall improvement in gingival health when constant reinforcement of oral hygiene instructions is given. PMID:26681855
Akifuddin, Syed; Khatoon, Farheen
2015-12-01
Health care faces challenges due to complications, inefficiencies and other concerns that threaten the safety of patients. The purpose of this study was to identify causes of complications encountered after administration of local anaesthesia for dental and oral surgical procedures and to reduce the incidence of complications by introducing Six Sigma methodology. The DMAIC (Define, Measure, Analyse, Improve and Control) process of Six Sigma was applied, together with failure mode and effect analysis, to reduce the incidence of complications encountered after administration of local anaesthesia injections for dental and oral surgical procedures. Pareto analysis was used to identify the most recurring complications. A paired z-sample test using Minitab statistical inference and Fisher's exact test were used to statistically analyse the obtained data. A p-value <0.05 was considered significant. In total, 54 systemic and 62 local complications occurred during the three-month Analyse and Measure phases. Syncope, failure of anaesthesia, trismus, self-inflicted bites (auto mordeduras) and pain at the injection site were found to be the most recurring complications. The cumulative defective percentage was 7.99 for the pre-improvement data and decreased to 4.58 in the Control phase. The estimate for the difference was 0.0341228 and the 95% lower bound for the difference was 0.0193966. The p-value was highly significant (p = 0.000). The application of Six Sigma improvement methodology in healthcare tends to deliver consistently better results to patients as well as hospitals and results in better patient compliance and satisfaction.
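As an illustration of how the pre- and post-improvement defective percentages could be compared with Fisher's exact test, a short sketch follows; the injection counts are hypothetical values chosen only to roughly reproduce the reported 7.99% and 4.58% rates.

```python
from scipy.stats import fisher_exact

# hypothetical counts: injections with vs. without a complication,
# before and after the Improve phase (numbers chosen only for illustration)
pre_improve  = [116, 1336]   # 116 / 1452 ≈ 8.0% with a complication
post_improve = [ 67, 1396]   # 67 / 1463 ≈ 4.6% with a complication

table = [pre_improve, post_improve]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```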
Analysis of visual quality improvements provided by known tools for HDR content
NASA Astrophysics Data System (ADS)
Kim, Jaehwan; Alshina, Elena; Lee, JongSeok; Park, Youngo; Choi, Kwang Pyo
2016-09-01
In this paper, the visual quality of different solutions for high dynamic range (HDR) compression is analyzed using MPEG test contents. We also simulate a method for efficient HDR compression based on the statistical properties of the signal. The method is compliant with the HEVC specification and is also easily compatible with alternative methods that might require HEVC specification changes. It was subjectively tested on commercial TVs and compared with alternative solutions for HDR coding. Subjective visual quality tests were performed using an SUHD TV model (Samsung JS9500) with maximum luminance up to 1000 nit. The solution based on statistical properties of the signal shows not only improved objective performance but also improved visual quality compared with other HDR solutions, while remaining compatible with the HEVC specification.
Lee, O-Sung; Ahn, Soyeon; Lee, Yong Seuk
2017-07-01
The purpose of this systematic review and meta-analysis was to evaluate the effectiveness and safety of early weight-bearing by comparing clinical and radiological outcomes between early and traditional delayed weight-bearing after open-wedge high tibial osteotomy (OWHTO). A rigorous and systematic approach was used, and the methodological quality was also assessed. Results that could be compared across two or more articles were presented as forest plots. A 95% confidence interval was calculated for each effect size, and we calculated the I² statistic, which represents the percentage of total variation attributable to heterogeneity among studies. The random-effects model was used to calculate the effect size. Six articles were included in the final analysis. All case groups were composed of early full weight-bearing within 2 weeks. All control groups were composed of late full weight-bearing between 6 weeks and 2 months. Pooled analysis was possible for the improvement in Lysholm score, but no statistically significant difference was shown between groups. Other clinical results were also similar between groups. Four studies reported the mechanical femorotibial angle (mFTA), which showed no statistically significant difference between groups in the pooled analysis. Furthermore, early weight-bearing showed more favorable results for some radiologic outcomes (osseointegration and patellar height) and complications (thrombophlebitis and recurrence). Our analysis supports that early full weight-bearing after OWHTO using a locking plate leads to improvement in outcomes and is comparable to delayed weight-bearing in terms of clinical and radiological outcomes, while being more favorable with respect to some radiologic parameters and complications.
NASA Technical Reports Server (NTRS)
Hailperin, M.
1993-01-01
This thesis provides design and analysis of techniques for global load balancing on ensemble architectures running soft-real-time object-oriented applications with statistically periodic loads. It focuses on estimating the instantaneous average load over all the processing elements. The major contribution is the use of explicit stochastic process models for both the loading and the averaging itself. These models are exploited via statistical time-series analysis and Bayesian inference to provide improved average load estimates, and thus to facilitate global load balancing. This thesis explains the distributed algorithms used and provides some optimality results. It also describes the algorithms' implementation and gives performance results from simulation. These results show that the authors' techniques allow more accurate estimation of the global system loading, resulting in fewer object migrations than local methods. The authors' method is shown to provide superior performance, relative not only to static load-balancing schemes but also to many adaptive load-balancing methods. Results from a preliminary analysis of another system and from simulation with a synthetic load provide some evidence of more general applicability.
Surzhikov, V D; Surzhikov, D V
2014-01-01
The search for and measurement of causal relationships between exposure to air pollution and the health state of the population is based on system analysis and risk assessment to improve the quality of research. For this purpose, modern statistical analysis is applied, using independence criteria, principal component analysis and discriminant function analysis. The analysis separated four main components from all atmospheric pollutants: for diseases of the circulatory system, the main principal component is associated with concentrations of suspended solids, nitrogen dioxide, carbon monoxide and hydrogen fluoride; for respiratory diseases, the main principal component is closely associated with suspended solids, sulfur dioxide, nitrogen dioxide and charcoal black. The discriminant function was shown to be usable as a measure of the level of air pollution.
Statistics, Structures & Satisfied Customers: Using Web Log Data to Improve Site Performance.
ERIC Educational Resources Information Center
Peacock, Darren
This paper explores some of the ways in which the National Museum of Australia is using Web analysis tools to shape its future directions in the delivery of online services. In particular, it explores the potential of quantitative analysis, based on Web server log data, to convert these ephemeral traces of user experience into a strategic…
Piotrowski, T; Rodrigues, G; Bajon, T; Yartsev, S
2014-03-01
Multi-institutional collaborations allow for more information to be analyzed but the data from different sources may vary in the subgroup sizes and/or conditions of measuring. Rigorous statistical analysis is required for pooling the data in a larger set. Careful comparison of all the components of the data acquisition is indispensable: identical conditions allow for enlargement of the database with improved statistical analysis, clearly defined differences provide opportunity for establishing a better practice. The optimal sequence of required normality, asymptotic normality, and independence tests is proposed. An example of analysis of six subgroups of position corrections in three directions obtained during image guidance procedures for 216 prostate cancer patients from two institutions is presented. Copyright © 2013 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Chung, Moo K.; Kim, Seung-Goo; Schaefer, Stacey M.; van Reekum, Carien M.; Peschke-Schmitz, Lara; Sutterer, Matthew J.; Davidson, Richard J.
2014-03-01
The sparse regression framework has been widely used in medical image processing and analysis. However, it has been rarely used in anatomical studies. We present a sparse shape modeling framework using the Laplace-Beltrami (LB) eigenfunctions of the underlying shape and show its improvement of statistical power. Traditionally, the LB-eigenfunctions are used as a basis for intrinsically representing surface shapes as a form of Fourier descriptors. To reduce high frequency noise, only the first few terms are used in the expansion and higher frequency terms are simply thrown away. However, some lower frequency terms may not necessarily contribute significantly in reconstructing the surfaces. Motivated by this idea, we present a LB-based method to filter out only the significant eigenfunctions by imposing a sparse penalty. For dense anatomical data such as deformation fields on a surface mesh, the sparse regression behaves like a smoothing process, which will reduce the error of incorrectly detecting false negatives. Hence the statistical power improves. The sparse shape model is then applied in investigating the influence of age on amygdala and hippocampus shapes in the normal population. The advantage of the LB sparse framework is demonstrated by showing the increased statistical power.
NASA Technical Reports Server (NTRS)
Gyekenyesi, John P.; Nemeth, Noel N.
1987-01-01
The SCARE (Structural Ceramics Analysis and Reliability Evaluation) computer program on statistical fast fracture reliability analysis with quadratic elements for volume distributed imperfections is enhanced to include the use of linear finite elements and the capability of designing against concurrent surface flaw induced ceramic component failure. The SCARE code is presently coupled as a postprocessor to the MSC/NASTRAN general purpose, finite element analysis program. The improved version now includes the Weibull and Batdorf statistical failure theories for both surface and volume flaw based reliability analysis. The program uses the two-parameter Weibull fracture strength cumulative failure probability distribution model with the principle of independent action for poly-axial stress states, and Batdorf's shear-sensitive as well as shear-insensitive statistical theories. The shear-sensitive surface crack configurations include the Griffith crack and Griffith notch geometries, using the total critical coplanar strain energy release rate criterion to predict mixed-mode fracture. Weibull material parameters based on both surface and volume flaw induced fracture can also be calculated from modulus of rupture bar tests, using the least squares method with known specimen geometry and grouped fracture data. The statistical fast fracture theories for surface flaw induced failure, along with selected input and output formats and options, are summarized. An example problem to demonstrate various features of the program is included.
Parmigiani, Leandro; Furtado, Rita N V; Lopes, Roberta V; Ribeiro, Luiza H C; Natour, Jamil
2010-11-01
The aim was to compare the medium-term effectiveness and tolerance of joint lavage (JL) in combination with triamcinolone hexacetonide (TH) intra-articular injection (IAI) versus IAI with TH alone for treatment of primary osteoarthritis (OA) of the knee. A randomized, double-blind, controlled study was carried out on 60 patients with primary OA of the knee, randomized into two intervention groups: the JL/TH group (joint lavage in combination with TH intra-articular injection) and the TH group (TH intra-articular injection alone). Patients were followed for 12 weeks by a blinded observer using the following outcome measurements: visual analogue scale for pain at rest and in movement, goniometry, WOMAC, Lequesne's index, timed 50-ft walk, perception of improvement, Likert scale for improvement assessment, use of nonsteroidal anti-inflammatory drugs and analgesics, and local side effects. There were no statistical differences in the inter-group analysis for any of the variables studied over the 12-week period, although both groups demonstrated statistical improvement in the intra-group evaluation (except for the Likert scale according to the patient and the use of anti-inflammatory drugs). In the Kellgren-Lawrence (KL) grade 2 and 3 sub-analysis, there was a statistical difference regarding joint flexion among patients classified as KL 2, favoring the TH group (p=0.03). For the KL 3 patients, there were statistical differences favoring the JL/TH group regarding Lequesne (p=0.021), WOMAC pain score (p=0.01), and the Likert scale according to the patient (p=0.028) and the physician (p=0.034). The combination of joint lavage and IAI with TH was not more effective than IAI with TH alone in the treatment of primary OA of the knee. However, KL 3 patients may receive a major benefit from this combination.
Lim, Kyungjae; Kwon, Heejin; Cho, Jinhan; Oh, Jongyoung; Yoon, Seongkuk; Kang, Myungjin; Ha, Dongho; Lee, Jinhwa; Kang, Eunju
2015-01-01
The purpose of this study was to assess the image quality of a novel advanced iterative reconstruction (IR) method called "adaptive statistical IR V" (ASIR-V) by comparing its image noise, contrast-to-noise ratio (CNR), and spatial resolution with those of filtered back projection (FBP) and adaptive statistical IR (ASIR) on computed tomography (CT) phantom images. We performed CT scans at 5 different tube currents (50, 70, 100, 150, and 200 mA) using 3 types of CT phantoms. Scanned images were subsequently reconstructed with 7 different settings: FBP and 3 levels each of ASIR and ASIR-V (30%, 50%, and 70%). Image noise was measured in the first study using a body phantom, CNR was measured in the second study using a contrast phantom, and spatial resolution was measured in the third study using a high-resolution phantom. We compared the image noise, CNR, and spatial resolution among the 7 reconstruction settings to determine whether noise reduction, high CNR, and high spatial resolution could be achieved with ASIR-V. Quantitative analysis of the first and second studies showed that images reconstructed using ASIR-V had reduced image noise and improved CNR compared with those of FBP and ASIR (P < 0.001). Qualitative analysis of the third study also showed that images reconstructed using ASIR-V had significantly better spatial resolution than those of FBP and ASIR (P < 0.001). Our phantom studies showed that ASIR-V provides a significant reduction in image noise and a significant improvement in CNR as well as spatial resolution. Therefore, this technique has the potential to further reduce the radiation dose without compromising image quality.
A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis
Lin, Johnny; Bentler, Peter M.
2012-01-01
Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne’s asymptotically distribution-free method and Satorra Bentler’s mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra Bentler’s statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby’s study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic. PMID:23144511
Colegrave, Nick
2017-01-01
A common approach to the analysis of experimental data across much of the biological sciences is test-qualified pooling. Here non-significant terms are dropped from a statistical model, effectively pooling the variation associated with each removed term with the error term used to test hypotheses (or estimate effect sizes). This pooling is only carried out if statistical testing on the basis of applying that data to a previous more complicated model provides motivation for this model simplification; hence the pooling is test-qualified. In pooling, the researcher increases the degrees of freedom of the error term with the aim of increasing statistical power to test their hypotheses of interest. Despite this approach being widely adopted and explicitly recommended by some of the most widely cited statistical textbooks aimed at biologists, here we argue that (except in highly specialized circumstances that we can identify) the hoped-for improvement in statistical power will be small or non-existent, and there is likely to be much reduced reliability of the statistical procedures through deviation of type I error rates from nominal levels. We thus call for greatly reduced use of test-qualified pooling across experimental biology, more careful justification of any use that continues, and a different philosophy for initial selection of statistical models in the light of this change in procedure. PMID:28330912
Rushton, Paul R P; Grevitt, Michael P
2013-04-20
Review and statistical analysis of studies evaluating the effect of surgery on the health-related quality of life of adolescents with adolescent idiopathic scoliosis, using Scoliosis Research Society (SRS) outcomes. Apply published minimum clinical important differences (MCID) values for the SRS22r questionnaire to the literature to identify what areas of health-related quality of life are consistently affected by surgery and whether changes are clinically meaningful. The interpretation of published studies using the SRS outcomes has been limited by the lack of MCID values for the questionnaire domains. The recent publication of these data allows the clinical importance of any changes in these studies to be examined for the first time. A literature search was undertaken to locate suitable studies that were then analyzed. Statistically significant differences from baseline to 2 years postoperatively were ascertained by narratively reporting the analyses within included studies. When possible, clinically significant changes were assessed using 95% confidence intervals for the change in mean domain score. If the lower bound of the confidence intervals for the change exceeded the MCID for that domain, the change was considered clinically significant. The numbers of cohorts available for the different analyses varied (5-16). Eighty-one percent and 94% of included cohorts experienced statistically significant improvements in pain and self-image domains. In terms of clinical significance, it was only self-image that regularly improved by more than MCID, doing so in 4 of 5 included cohorts (80%) compared with 1 of 12 cohorts (8%) for pain. No clinically relevant changes occurred in mental health or activity domains. Evidence suggests that surgery can lead to clinically important improvement in patient self-image. Surgeons and patients should be aware of the limited evidence for improvements in domains other than self-image after surgery. Surgical decision-making will also be influenced by the natural history of adolescent idiopathic scoliosis.
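The clinical-significance criterion used here, that the lower bound of the 95% confidence interval for the change in a domain score must exceed the MCID, is simple to implement. A sketch follows with simulated change scores and an assumed MCID value; neither comes from the reviewed cohorts.

```python
import numpy as np
from scipy import stats

def clinically_significant(change_scores, mcid, alpha=0.05):
    """Return True if the lower bound of the 95% CI for the mean change in an
    SRS-22r domain exceeds the domain's MCID (the criterion described above)."""
    change_scores = np.asarray(change_scores, dtype=float)
    n = change_scores.size
    mean = change_scores.mean()
    sem = change_scores.std(ddof=1) / np.sqrt(n)
    lower, upper = stats.t.interval(1 - alpha, df=n - 1, loc=mean, scale=sem)
    return lower > mcid, (lower, upper)

# hypothetical pre-to-post changes in the self-image domain; MCID assumed 0.98
changes = np.random.default_rng(1).normal(loc=1.3, scale=0.6, size=60)
ok, ci = clinically_significant(changes, mcid=0.98)
print(f"95% CI = ({ci[0]:.2f}, {ci[1]:.2f}); clinically significant: {ok}")
```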
Winzer, Klaus-Jürgen; Buchholz, Anika; Schumacher, Martin; Sauerbrei, Willi
2016-01-01
Background: Prognostic factors and prognostic models play a key role in medical research and patient management. The Nottingham Prognostic Index (NPI) is a well-established prognostic classification scheme for patients with breast cancer. In a very simple way, it combines the information from tumor size, lymph node stage and tumor grade. For the resulting index, cutpoints are proposed to classify it into three to six groups with different prognoses. As not all prognostic information from the three and other standard factors is used, we consider improving the prognostic ability using suitable analysis approaches. Methods and Findings: Reanalyzing overall survival data of 1560 patients from a clinical database using multivariable fractional polynomials and further modern statistical methods, we illustrate suitable multivariable modelling and methods to derive and assess the prognostic ability of an index. Using a REMARK-type profile, we summarize the relevant steps of the analysis. Adding the information from hormonal receptor status and using the full information from the three NPI components, specifically concerning the number of positive lymph nodes, an extended NPI with improved prognostic ability is derived. Conclusions: The prognostic ability of even one of the best-established prognostic indices in medicine can be improved by using suitable statistical methodology to extract the full information from standard clinical data. This extended version of the NPI can serve as a benchmark to assess the added value of new information, ranging from a new single clinical marker to a derived index from omics data. An established benchmark would also help to harmonize the statistical analyses of such studies and protect against the propagation of many false promises concerning the prognostic value of new measurements. The statistical methods used are generally available and can be used for similar analyses in other diseases. PMID:26938061
Harris, Alex H S; Reeder, Rachelle; Hyun, Jenny K
2009-10-01
Journal editors and statistical reviewers are often in the difficult position of catching serious problems in submitted manuscripts after the research is conducted and data have been analyzed. We sought to learn from editors and reviewers of major psychiatry journals what common statistical and design problems they most often find in submitted manuscripts and what they wished to communicate to authors regarding these issues. Our primary goal was to facilitate communication between journal editors/reviewers and researchers/authors and thereby improve the scientific and statistical quality of research and submitted manuscripts. Editors and statistical reviewers of 54 high-impact psychiatry journals were surveyed to learn what statistical or design problems they encounter most often in submitted manuscripts. Respondents completed the survey online. The authors analyzed survey text responses using content analysis procedures to identify major themes related to commonly encountered statistical or research design problems. Editors and reviewers (n=15) who handle manuscripts from 39 different high-impact psychiatry journals responded to the survey. The most commonly cited problems regarded failure to map statistical models onto research questions, improper handling of missing data, not controlling for multiple comparisons, not understanding the difference between equivalence and difference trials, and poor controls in quasi-experimental designs. The scientific quality of psychiatry research and submitted reports could be greatly improved if researchers became sensitive to, or sought consultation on frequently encountered methodological and analytic issues.
Rule-based statistical data mining agents for an e-commerce application
NASA Astrophysics Data System (ADS)
Qin, Yi; Zhang, Yan-Qing; King, K. N.; Sunderraman, Rajshekhar
2003-03-01
Intelligent data mining techniques have useful e-Business applications. Because an e-Commerce application is related to multiple domains such as statistical analysis, market competition, price comparison, profit improvement and personal preferences, this paper presents a hybrid knowledge-based e-Commerce system fusing intelligent techniques, statistical data mining, and personal information to enhance QoS (Quality of Service) of e-Commerce. A Web-based e-Commerce application software system, eDVD Web Shopping Center, is successfully implemented using Java servlets and an Oracle8i database server. Simulation results have shown that the hybrid intelligent e-Commerce system is able to make smart decisions for different customers.
Mihic, Marko M; Todorovic, Marija Lj; Obradovic, Vladimir Lj; Mitrovic, Zorica M
2016-01-01
Social services aimed at the elderly are facing great challenges caused by the progressive aging of the global population but also by the constant pressure to spend funds in a rational manner. This paper focuses on analyzing investments in human resources aimed at enhancing home care for the elderly, since many countries have recorded progress in this area over the past years. The goal of this paper is to stress the significance of performing an economic analysis of the investment. This paper combines statistical analysis methods such as correlation and regression analysis, methods of economic analysis, and the scenario method. The economic analysis of investing in human resources for the home care service in Serbia showed that both scenarios, investing in either additional home care hours or more beneficiaries, are cost-efficient. However, the optimal solution, with the positive (and the highest) value of the economic net present value criterion, is to invest in human resources to boost the number of home care hours from 6 to 8 hours per week and increase the number of beneficiaries to 33%. This paper shows how the statistical and economic analysis results can be used to evaluate different scenarios and enable quality decision-making based on exact data in order to improve the health and quality of life of the elderly and spend funds in a rational manner.
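A minimal sketch of the economic net present value criterion mentioned above is given below; the discount rate and cash flows are hypothetical and are not the figures from the Serbian analysis.

```python
def npv(rate, cash_flows):
    """Net present value of a series of yearly cash flows, with the
    investment (a negative flow) at year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# hypothetical scenario: up-front training cost, then yearly net benefits
# from extending home care from 6 to 8 hours per week
flows = [-120_000, 35_000, 40_000, 45_000, 45_000]   # EUR, years 0..4
print(f"ENPV at a 5% discount rate: {npv(0.05, flows):,.0f} EUR")
```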
Balanced mechanical resonator for powder handling device
NASA Technical Reports Server (NTRS)
Sarrazin, Philippe C. (Inventor); Brunner, Will M. (Inventor)
2012-01-01
A system incorporating a balanced mechanical resonator and a method for vibration of a sample composed of granular material to generate motion of a powder sample inside the sample holder for obtaining improved analysis statistics, without imparting vibration to the sample holder support.
A clinical research analytics toolkit for cohort study.
Yu, Yiqin; Zhu, Yu; Sun, Xingzhi; Tao, Ying; Zhang, Shuo; Xu, Linhao; Pan, Yue
2012-01-01
This paper presents a clinical informatics toolkit that can assist physicians to conduct cohort studies effectively and efficiently. The toolkit has three key features: 1) support of procedures defined in epidemiology, 2) recommendation of statistical methods in data analysis, and 3) automatic generation of research reports. On one hand, our system can help physicians control research quality by leveraging the integrated knowledge of epidemiology and medical statistics; on the other hand, it can improve productivity by reducing the complexities for physicians during their cohort studies.
The Impact of Measurement Noise in GPA Diagnostic Analysis of a Gas Turbine Engine
NASA Astrophysics Data System (ADS)
Ntantis, Efstratios L.; Li, Y. G.
2013-12-01
The performance diagnostic analysis of a gas turbine is accomplished by estimating a set of internal engine health parameters from available sensor measurements. No physical measuring instrument, however, can ever completely eliminate measurement uncertainties. Sensor measurements are often distorted by noise and bias, leading to inaccurate estimation results. This paper explores the impact of measurement noise on gas turbine gas path analysis (GPA). The analysis is demonstrated with a test case in which the gas turbine performance simulation and diagnostics code TURBOMATCH is used to build a performance model of an engine similar to the Rolls-Royce Trent 500 turbofan and to carry out the diagnostic analysis in the presence of different levels of measurement noise. Finally, to improve the reliability of the diagnostic results, a statistical analysis of the data scattering caused by sensor uncertainties is made. The diagnostic tool used for the statistical analysis of the measurement noise impact is a model-based method utilizing non-linear GPA.
NASA Technical Reports Server (NTRS)
Safford, Robert R.; Jackson, Andrew E.; Swart, William W.; Barth, Timothy S.
1994-01-01
Successful ground processing at KSC requires that flight hardware and ground support equipment conform to specifications at tens of thousands of checkpoints. Knowledge of conformance is an essential requirement for launch. That knowledge of conformance at every requisite point does not, however, enable identification of past problems with equipment, or potential problem areas. This paper describes how the introduction of Statistical Process Control and Process Capability Analysis identification procedures into existing shuttle processing procedures can enable identification of potential problem areas and candidates for improvements to increase processing performance measures. Results of a case study describing application of the analysis procedures to Thermal Protection System processing are used to illustrate the benefits of the approaches described in the paper.
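Process Capability Analysis summarises how well a checkpoint's measurements fit within their specification limits, typically via the Cp and Cpk indices. A brief sketch follows; the measurement data and specification limits are hypothetical and do not represent actual Thermal Protection System data.

```python
import numpy as np

def process_capability(measurements, lsl, usl):
    """Cp and Cpk for a checkpoint with lower/upper specification limits.

    Cp compares the spec width to the process spread; Cpk also penalises
    off-centre processes. Values above ~1.33 are commonly taken to indicate
    a capable process."""
    x = np.asarray(measurements, dtype=float)
    mu, sigma = x.mean(), x.std(ddof=1)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# hypothetical bond-line thickness measurements (mm) with spec limits 4.0-6.0
data = np.random.default_rng(7).normal(loc=5.2, scale=0.25, size=50)
print("Cp = %.2f, Cpk = %.2f" % process_capability(data, lsl=4.0, usl=6.0))
```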
Snell, Kym Ie; Ensor, Joie; Debray, Thomas Pa; Moons, Karel Gm; Riley, Richard D
2017-01-01
If individual participant data are available from multiple studies or clusters, then a prediction model can be externally validated multiple times. This allows the model's discrimination and calibration performance to be examined across different settings. Random-effects meta-analysis can then be used to quantify overall (average) performance and heterogeneity in performance. This typically assumes a normal distribution of 'true' performance across studies. We conducted a simulation study to examine this normality assumption for various performance measures relating to a logistic regression prediction model. We simulated data across multiple studies with varying degrees of variability in baseline risk or predictor effects and then evaluated the shape of the between-study distribution in the C-statistic, calibration slope, calibration-in-the-large, and E/O statistic, and possible transformations thereof. We found that a normal between-study distribution was usually reasonable for the calibration slope and calibration-in-the-large; however, the distributions of the C-statistic and E/O were often skewed across studies, particularly in settings with large variability in the predictor effects. Normality was vastly improved when using the logit transformation for the C-statistic and the log transformation for E/O, and therefore we recommend these scales to be used for meta-analysis. An illustrated example is given using a random-effects meta-analysis of the performance of QRISK2 across 25 general practices.
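A minimal sketch of the recommended approach, random-effects pooling of the C-statistic on the logit scale with a delta-method standard error, is shown below using one common between-study variance estimator (DerSimonian-Laird); the study-level C-statistics and standard errors are invented for illustration.

```python
import numpy as np

def dl_meta_analysis(effects, variances):
    """DerSimonian-Laird random-effects pooling of study-level estimates."""
    w = 1.0 / variances
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                 # between-study variance
    w_star = 1.0 / (variances + tau2)
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

# hypothetical C-statistics and standard errors from 5 validation studies
c_stat = np.array([0.72, 0.68, 0.75, 0.70, 0.66])
se_c   = np.array([0.02, 0.03, 0.025, 0.02, 0.04])

# meta-analyse on the logit scale (delta method for the standard errors),
# as recommended above, then back-transform the pooled value
logit = np.log(c_stat / (1 - c_stat))
var_logit = (se_c / (c_stat * (1 - c_stat))) ** 2
pooled_logit, se_pooled, tau2 = dl_meta_analysis(logit, var_logit)
pooled_c = 1 / (1 + np.exp(-pooled_logit))
print(f"pooled C = {pooled_c:.3f}, tau^2 (logit scale) = {tau2:.3f}")
```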
Carmichael, Mary C; St Clair, Candace; Edwards, Andrea M; Barrett, Peter; McFerrin, Harris; Davenport, Ian; Awad, Mohamed; Kundu, Anup; Ireland, Shubha Kale
2016-01-01
Xavier University of Louisiana leads the nation in awarding BS degrees in the biological sciences to African-American students. In this multiyear study with ∼5500 participants, data-driven interventions were adopted to improve student academic performance in a freshman-level general biology course. The three hour-long exams were common and administered concurrently to all students. New exam questions were developed using Bloom's taxonomy, and exam results were analyzed statistically with validated assessment tools. All but the comprehensive final exam were returned to students for self-evaluation and remediation. Among other approaches, course rigor was monitored by using an identical set of 60 questions on the final exam across 10 semesters. Analysis of the identical sets of 60 final exam questions revealed that overall averages increased from 72.9% (2010) to 83.5% (2015). Regression analysis demonstrated a statistically significant correlation between high-risk students and their averages on the 60 questions. Additional analysis demonstrated statistically significant improvements for at least one letter grade from midterm to final and a 20% increase in the course pass rates over time, also for the high-risk population. These results support the hypothesis that our data-driven interventions and assessment techniques are successful in improving student retention, particularly for our academically at-risk students. © 2016 M. C. Carmichael et al. CBE—Life Sciences Education © 2016 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
Defining the ecological hydrology of Taiwan Rivers using multivariate statistical methods
NASA Astrophysics Data System (ADS)
Chang, Fi-John; Wu, Tzu-Ching; Tsai, Wen-Ping; Herricks, Edwin E.
2009-09-01
The identification and verification of ecohydrologic flow indicators has found new support as the importance of ecological flow regimes is recognized in modern water resources management, particularly in river restoration and reservoir management. An ecohydrologic indicator system reflecting the unique characteristics of Taiwan's water resources and hydrology has been developed, the Taiwan ecohydrological indicator system (TEIS). A major challenge for the water resources community is using the TEIS to provide environmental flow rules that improve existing water resources management. This paper examines data from the extensive network of flow monitoring stations in Taiwan using TEIS statistics to define and refine environmental flow options in Taiwan. Multivariate statistical methods were used to examine TEIS statistics for 102 stations representing the geographic and land use diversity of Taiwan. The Pearson correlation coefficient showed high multicollinearity between the TEIS statistics. Watersheds were separated into upper and lower-watershed locations. An analysis of variance indicated significant differences between upstream, more natural, and downstream, more developed, locations in the same basin with hydrologic indicator redundancy in flow change and magnitude statistics. Issues of multicollinearity were examined using a Principal Component Analysis (PCA) with the first three components related to general flow and high/low flow statistics, frequency and time statistics, and quantity statistics. These principal components explain about 85% of the total variation. A major conclusion is that managers must be aware of differences among basins, as well as differences within basins that will require careful selection of management procedures to achieve needed flow regimes.
Performance of Between-Study Heterogeneity Measures in the Cochrane Library.
Ma, Xiaoyue; Lin, Lifeng; Qu, Zhiyong; Zhu, Motao; Chu, Haitao
2018-05-29
The growth in comparative effectiveness research and evidence-based medicine has increased attention to systematic reviews and meta-analyses. Meta-analysis synthesizes and contrasts evidence from multiple independent studies to improve statistical efficiency and reduce bias. Assessing heterogeneity is critical for performing a meta-analysis and interpreting results. As a widely used heterogeneity measure, the I² statistic quantifies the proportion of total variation across studies that is due to real differences in effect size. The presence of outlying studies can seriously exaggerate the I² statistic. Two alternative heterogeneity measures, the Ir and Im, have been recently proposed to reduce the impact of outlying studies. To evaluate these measures' performance empirically, we applied them to 20,599 meta-analyses in the Cochrane Library. We found that the Ir and Im have strong agreement with the I², while they are more robust than the I² when outlying studies appear.
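For reference, the classical I² computation from Cochran's Q, and its sensitivity to a single outlying study, can be illustrated in a few lines; the effect sizes and variances below are invented.

```python
import numpy as np

def i_squared(effects, variances):
    """Classical I-squared: the percentage of total variation across studies
    attributable to heterogeneity rather than chance, I^2 = max(0, (Q - df) / Q)."""
    effects = np.asarray(effects, float)
    w = 1.0 / np.asarray(variances, float)
    pooled = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - pooled) ** 2)       # Cochran's Q
    df = len(effects) - 1
    return 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0

# hypothetical log odds ratios: one outlying study inflates I^2 substantially
log_or = [0.10, 0.05, 0.12, 0.08, 1.20]
var    = [0.02, 0.03, 0.02, 0.04, 0.03]
print(f"I^2 with the outlier:    {i_squared(log_or, var):.1f}%")
print(f"I^2 without the outlier: {i_squared(log_or[:-1], var[:-1]):.1f}%")
```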
A Comparison of Analytical and Data Preprocessing Methods for Spectral Fingerprinting
Luthria, Devanand L.; Mukhopadhyay, Sudarsan; Lin, Long-Ze; Harnly, James M.
2013-01-01
Spectral fingerprinting, as a method of discriminating between plant cultivars and growing treatments for a common set of broccoli samples, was compared for six analytical instruments. Spectra were acquired for finely powdered solid samples using Fourier transform infrared (FT-IR) and Fourier transform near-infrared (NIR) spectrometry. Spectra were also acquired for unfractionated aqueous methanol extracts of the powders using molecular absorption in the ultraviolet (UV) and visible (VIS) regions and mass spectrometry with negative (MS−) and positive (MS+) ionization. The spectra were analyzed using nested one-way analysis of variance (ANOVA) and principal component analysis (PCA) to statistically evaluate the quality of discrimination. All six methods showed statistically significant differences between the cultivars and treatments. The significance of the statistical tests was improved by the judicious selection of spectral regions (IR and NIR), masses (MS+ and MS−), and derivatives (IR, NIR, UV, and VIS). PMID:21352644
Lucas, Nathanael Cc; Hume, Carl G; Al-Chanati, Abdal; Diprose, William; Roberts, Sally; Freeman, Josh; Mogol, Vernon; Hoskins, David; Hamblin, Richard; Frampton, Chris; Bagg, Warwick; Merry, Alan F
2017-01-13
Hand hygiene is important in reducing healthcare-associated infections. The World Health Organization has defined 'five moments' when hand hygiene compliance is required. During 2013, New Zealand national data showed poor compliance with these moments by medical students. The aim was to improve medical students' compliance with the five moments. In this prospective student-led quality improvement initiative, student investigators developed, implemented and evaluated a multi-modal intervention comprising a three-month social media campaign, a competition and an entertaining educational video. Data on individual patient-medical student interactions were collected covertly by observers at baseline and at one week, six weeks and three months after initiation of the intervention. During the campaign, compliance improved in moment 2, but not significantly in moments 1, 3, 4 or 5. Statistical analysis of amalgamated data was limited by non-independent data points, a consideration apparently not always addressed in previous studies. The initiative produced improvements in compliance by medical students with one hand hygiene moment. Statistical analysis of amalgamated data for all five moments should allow for the non-independence of each occasion in which clinicians interact with a patient. More work is needed to ensure excellent hand hygiene practices of future doctors.
Statistical Significance of Optical Map Alignments
Sarkar, Deepayan; Goldstein, Steve; Schwartz, David C.
2012-01-01
The Optical Mapping System constructs ordered restriction maps spanning entire genomes through the assembly and analysis of large datasets comprising individually analyzed genomic DNA molecules. Such restriction maps uniquely reveal mammalian genome structure and variation, but also raise computational and statistical questions beyond those that have been solved in the analysis of smaller, microbial genomes. We address the problem of how to filter maps that align poorly to a reference genome. We obtain map-specific thresholds that control errors and improve iterative assembly. We also show how an optimal self-alignment score provides an accurate approximation to the probability of alignment, which is useful in applications seeking to identify structural genomic abnormalities. PMID:22506568
Equivalent statistics and data interpretation.
Francis, Gregory
2017-08-01
Recent reform efforts in psychological science have led to a plethora of choices for scientists to analyze their data. A scientist making an inference about their data must now decide whether to report a p value, summarize the data with a standardized effect size and its confidence interval, report a Bayes Factor, or use other model comparison methods. To make good choices among these options, it is necessary for researchers to understand the characteristics of the various statistics used by the different analysis frameworks. Toward that end, this paper makes two contributions. First, it shows that for the case of a two-sample t test with known sample sizes, many different summary statistics are mathematically equivalent in the sense that they are based on the very same information in the data set. When the sample sizes are known, the p value provides as much information about a data set as the confidence interval of Cohen's d or a JZS Bayes factor. Second, this equivalence means that different analysis methods differ only in their interpretation of the empirical data. At first glance, it might seem that mathematical equivalence of the statistics suggests that it does not matter much which statistic is reported, but the opposite is true because the appropriateness of a reported statistic is relative to the inference it promotes. Accordingly, scientists should choose an analysis method appropriate for their scientific investigation. A direct comparison of the different inferential frameworks provides some guidance for scientists to make good choices and improve scientific practice.
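A minimal numerical illustration of the equivalence noted above, using simulated samples (x and y are placeholders): for a pooled two-sample t test with known sample sizes, Cohen's d is a deterministic function of t, so t, the p value and d carry the same information about the data.

```python
# Minimal sketch (simulated data): t, p, and Cohen's d computed from the same samples;
# for known sample sizes, d = t * sqrt(1/n1 + 1/n2), so they carry the same information.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x, y = rng.normal(0.0, 1.0, 40), rng.normal(0.5, 1.0, 40)

t, p = stats.ttest_ind(x, y)                 # pooled two-sample t test
n1, n2 = len(x), len(y)
sp = np.sqrt(((n1 - 1) * x.var(ddof=1) + (n2 - 1) * y.var(ddof=1)) / (n1 + n2 - 2))
d = (x.mean() - y.mean()) / sp               # Cohen's d

print(t, p, d, t * np.sqrt(1 / n1 + 1 / n2))  # the last two values agree
```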
NASA Astrophysics Data System (ADS)
Hacker, Joshua; Vandenberghe, Francois; Jung, Byoung-Jo; Snyder, Chris
2017-04-01
Effective assimilation of cloud-affected radiance observations from space-borne imagers, with the aim of improving cloud analysis and forecasting, has proven to be difficult. Large observation biases, nonlinear observation operators, and non-Gaussian innovation statistics present many challenges. Ensemble-variational data assimilation (EnVar) systems offer the benefits of flow-dependent background error statistics from an ensemble, and the ability of variational minimization to handle nonlinearity. The specific benefits of ensemble statistics, relative to static background errors more commonly used in variational systems, have not been quantified for the problem of assimilating cloudy radiances. A simple experiment framework is constructed with a regional NWP model and operational variational data assimilation system, to provide a basis for understanding the importance of ensemble statistics in cloudy radiance assimilation. Restricting the observations to those corresponding to clouds in the background forecast leads to innovations that are more Gaussian. The number of large innovations is reduced compared to the more general case of all observations, but not eliminated. The Huber norm is investigated to handle the fat tails of the distributions, and to allow more observations to be assimilated without the need for strict background checks that eliminate them. Comparing assimilation using only ensemble background error statistics with assimilation using only static background error statistics elucidates the importance of the ensemble statistics. Although the cost functions in both experiments converge to similar values after sufficient outer-loop iterations, the resulting cloud water, ice, and snow content are greater in the ensemble-based analysis. The subsequent forecasts from the ensemble-based analysis also retain more condensed water species, indicating that the local environment is more supportive of clouds. In this presentation we provide details that explain the apparent benefit from using ensembles for cloudy radiance assimilation in an EnVar context.
Chiou, Chei-Chang; Wang, Yu-Min; Lee, Li-Tze
2014-08-01
Statistical knowledge is widely used in academia; however, statistics teachers struggle with the issue of how to reduce students' statistics anxiety and enhance students' statistics learning. This study assesses the effectiveness of a "one-minute paper strategy" in reducing students' statistics-related anxiety and in improving students' statistics-related achievement. Participants were 77 undergraduates from two classes enrolled in applied statistics courses. An experiment was implemented according to a pretest/posttest comparison group design. The quasi-experimental design showed that the one-minute paper strategy significantly reduced students' statistics anxiety and improved students' statistics learning achievement. The strategy was a better instructional tool than the textbook exercise for reducing students' statistics anxiety and improving students' statistics achievement.
Experimental Design in Clinical 'Omics Biomarker Discovery.
Forshed, Jenny
2017-11-03
This tutorial highlights some issues in the experimental design of clinical 'omics biomarker discovery: how to avoid bias and obtain quantities from biochemical analyses that are as close to the true values as possible, and how to select samples to improve the chance of answering the clinical question at issue. This includes the importance of defining the clinical aim and end point, knowing the variability in the results, randomization of samples, sample size and statistical power, and how to avoid confounding factors by including clinical data in the sample selection, that is, how to avoid unpleasant surprises at the point of statistical analysis. The aim of this tutorial is to support translational clinical and preclinical biomarker candidate research and to improve the validity and potential of future biomarker candidate findings.
Fairchild, Amanda J.; Abara, Winston E.; Gottschall, Amanda C.; Tein, Jenn-Yun; Prinz, Ronald J.
2015-01-01
The purpose of this article is to introduce and describe a statistical model that researchers can use to evaluate underlying mechanisms of behavioral onset and other event occurrence outcomes. Specifically, the article develops a framework for estimating mediation effects with outcomes measured in discrete-time epochs by integrating the statistical mediation model with discrete-time survival analysis. The methodology has the potential to help strengthen health research by targeting prevention and intervention work more effectively as well as by improving our understanding of discretized periods of risk. The model is applied to an existing longitudinal data set to demonstrate its use, and programming code is provided to facilitate its implementation. PMID:24296470
Results of a joint NOAA/NASA sounder simulation study
NASA Technical Reports Server (NTRS)
Phillips, N.; Susskind, Joel; Mcmillin, L.
1988-01-01
This paper presents the results of a joint NOAA and NASA sounder simulation study in which the accuracies of atmospheric temperature profiles and surface skin temperature measurements retrieved from two sounders were compared: (1) the currently used IR temperature sounder HIRS2 (High-resolution Infrared Radiation Sounder 2); and (2) the recently proposed high-spectral-resolution IR sounder AMTS (Advanced Moisture and Temperature Sounder). Simulations were conducted for both clear and partial cloud conditions. Data were analyzed at NASA using a physical inversion technique and at NOAA using a statistical technique. Results show significant improvement of AMTS compared to HIRS2 for both clear and cloudy conditions. The improvements are indicated by both methods of data analysis, but the physical retrievals outperform the statistical retrievals.
NASA Astrophysics Data System (ADS)
Seaward, James Nicholas
International development organizations have recently ramped up efforts to promote the use of improved cookstoves (ICS) in developing countries, aiming to reduce the harmful environmental and public health impacts of the burning of biomass for cooking and heating. I hypothesize that ICS use also has additional benefits, economic and social, that can contribute to women's economic empowerment in the developing world. To explore the relationship between ICS use and women's economic empowerment, I use Ordinary Least Squares and Logit models based on data from the India Human Development Survey (IHDS) to analyze differences between women living in households that use ICS and those living in homes that use traditional cookstoves. My regression results reveal that ICS use has a statistically significant and negative effect on the amount of time women and girls spend on fuel collection and a statistically significant and positive effect on the likelihood of women's participation in side businesses, but does not have a statistically significant effect on the likelihood of lost productivity. My analysis shows promise that in addition to health and environmental benefits, fuel-efficient cooking technologies can also have social and economic impacts that are especially beneficial to women. It is my hope that the analysis provided in this paper will be used to further the dialogue about the importance of women's access to modern energy services in the fight to improve women's living standards in the developing world.
NASA Astrophysics Data System (ADS)
Jahani, Nariman; Cohen, Eric; Hsieh, Meng-Kang; Weinstein, Susan P.; Pantalone, Lauren; Davatzikos, Christos; Kontos, Despina
2018-02-01
We examined the ability of DCE-MRI longitudinal features to give early prediction of recurrence-free survival (RFS) in women undergoing neoadjuvant chemotherapy for breast cancer, in a retrospective analysis of 106 women from the ISPY 1 cohort. These features were based on the voxel-wise changes seen in registered images taken before treatment and after the first round of chemotherapy. We computed the transformation field using a robust deformable image registration technique to match breast images from these two visits. Using the deformation field, parametric response maps (PRM), a voxel-based feature analysis of longitudinal changes in images between visits, were computed for maps of four kinetic features (signal enhancement ratio, peak enhancement, and wash-in/wash-out slopes). A two-level discrete wavelet transform was applied to these PRMs to extract heterogeneity information about tumor change between visits. To estimate survival, a Cox proportional hazard model was applied with the C statistic as the measure of success in predicting RFS. The best PRM feature (as determined by C statistic in univariable analysis) was selected for each of the four kinetic features. The baseline model, incorporating functional tumor volume, age, race, and hormone response status, had a C statistic of 0.70 in predicting RFS. The model augmented with the four PRM features had a C statistic of 0.76. Thus, our results suggest that adding information on the texture of voxel-level changes in tumor kinetic response between registered images of first and second visits could improve early RFS prediction in breast cancer after neoadjuvant chemotherapy.
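The C statistic referenced above is a standard concordance measure; the following is a minimal, self-contained sketch of Harrell's concordance index for right-censored survival data, with made-up times, event indicators and risk scores (not the ISPY 1 data or the authors' Cox model).

```python
# Minimal sketch (hypothetical data): Harrell's C statistic for a risk score against
# right-censored recurrence-free survival times.
import numpy as np

def concordance_index(time, event, risk):
    """Fraction of usable pairs in which the higher-risk subject fails earlier."""
    conc, ties, usable = 0.0, 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            # a pair is usable when subject i has an observed event before time[j]
            if event[i] and time[i] < time[j]:
                usable += 1
                if risk[i] > risk[j]:
                    conc += 1
                elif risk[i] == risk[j]:
                    ties += 1
    return (conc + 0.5 * ties) / usable

time = np.array([5.0, 8.0, 12.0, 3.0, 9.0])
event = np.array([1, 0, 1, 1, 0])           # 1 = recurrence observed, 0 = censored
risk = np.array([2.1, 0.5, 0.4, 3.0, 1.0])  # e.g., a Cox-type linear predictor
print(concordance_index(time, event, risk))
```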
Stewart, Sarah; Pearson, Janet; Rome, Keith; Dalbeth, Nicola; Vandal, Alain C
2018-01-01
Statistical techniques currently used in musculoskeletal research often inefficiently account for paired-limb measurements or the relationship between measurements taken from multiple regions within limbs. This study compared three commonly used analysis methods with a mixed-models approach that appropriately accounted for the association between limbs, regions, and trials and that utilised all information available from repeated trials. Four analysis methods were applied to an existing data set containing plantar pressure data, which was collected for seven masked regions on right and left feet, over three trials, across three participant groups. Methods 1-3 averaged data over trials and analysed right foot data (Method 1), data from a randomly selected foot (Method 2), and averaged right and left foot data (Method 3). Method 4 used all available data in a mixed-effects regression that accounted for repeated measures taken for each foot, foot region and trial. Confidence interval widths for the mean differences between groups for each foot region were used as a criterion for comparison of statistical efficiency. Mean differences in pressure between groups were similar across methods for each foot region, while the confidence interval widths were consistently smaller for Method 4. Method 4 also revealed significant between-group differences that were not detected by Methods 1-3. A mixed-effects linear model approach generates improved efficiency and power by producing more precise estimates compared to alternative approaches that discard information in the process of accounting for paired-limb measurements. This approach is recommended for generating more clinically sound and statistically efficient research outputs. Copyright © 2017 Elsevier B.V. All rights reserved.
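A minimal sketch of the Method 4 idea (a mixed-effects regression that keeps every foot, region and trial) using statsmodels; the simulated long-format table and its column names ("pressure", "group", "region", "foot", "trial", "subject") are hypothetical stand-ins for the plantar pressure data.

```python
# Minimal sketch: random intercept per participant, all repeated measures retained.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
regions, feet, trials = ["R1", "R2", "R3"], ["L", "R"], [1, 2, 3]
rows = [{"subject": s, "group": "A" if s < 6 else "B", "region": r, "foot": f,
         "trial": t, "pressure": rng.normal(100 + (5 if s >= 6 else 0), 10)}
        for s in range(12) for r in regions for f in feet for t in trials]
df = pd.DataFrame(rows)

# Mixed-effects model: fixed effects for group and region, random intercept per subject
model = smf.mixedlm("pressure ~ group * region", df, groups=df["subject"])
print(model.fit().summary())
```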
Wrestling with Philosophy: Improving Scholarship in Higher Education
ERIC Educational Resources Information Center
Kezar, Adrianna
2004-01-01
Method is usually viewed as completely separate from philosophy or theory, focusing instead on techniques and procedures of interviewing, focus groups, observation, or statistical analysis. Several texts on methodology published recently have added significant sections on philosophy, such as Creswell's (1998) Qualitative inquiry and research…
NASA Astrophysics Data System (ADS)
Rosenheim, B. E.; Firesinger, D.; Roberts, M. L.; Burton, J. R.; Khan, N.; Moyer, R. P.
2016-12-01
Radiocarbon (14C) sediment core chronologies benefit from a high density of dates, even when precision of individual dates is sacrificed. This is demonstrated by a combined approach of rapid 14C analysis of CO2 gas generated from carbonates and organic material coupled with Bayesian statistical modeling. Analysis of 14C is facilitated by the gas ion source on the Continuous Flow Accelerator Mass Spectrometry (CFAMS) system at the Woods Hole Oceanographic Institution's National Ocean Sciences Accelerator Mass Spectrometry facility. This instrument is capable of producing a 14C determination of +/- 100 14C y precision every 4-5 minutes, with limited sample handling (dissolution of carbonates and/or combustion of organic carbon in evacuated containers). Rapid analysis allows over-preparation of samples to include replicates at each depth and/or comparison of different sample types at particular depths in a sediment or peat core. Analysis priority is given to depths that have the least chronologic precision as determined by Bayesian modeling of the chronology of calibrated ages. Use of such a statistical approach to determine the order in which samples are run ensures that the chronology constantly improves so long as material is available for the analysis of chronologic weak points. Ultimately, accuracy of the chronology is determined by the material that is actually being dated, and our combined approach allows testing of different constituents of the organic carbon pool and the carbonate minerals within a core. We will present preliminary results from a deep-sea sediment core abundant in deep-sea foraminifera as well as coastal wetland peat cores to demonstrate statistical improvements in sediment- and peat-core chronologies obtained by increasing the quantity and decreasing the quality of individual dates.
Soto-Gordoa, Myriam; Arrospide, Arantzazu; Merino Hernández, Marisa; Mora Amengual, Joana; Fullaondo Zabala, Ane; Larrañaga, Igor; de Manuel, Esteban; Mar, Javier
2017-01-01
To develop a framework for the management of complex health care interventions within the Deming continuous improvement cycle and to test the framework in the case of an integrated intervention for multimorbid patients in the Basque Country within the CareWell project. Statistical analysis alone, although necessary, may not always represent the practical significance of the intervention. Thus, to ascertain the true economic impact of the intervention, the statistical results can be integrated into the budget impact analysis. The intervention of the case study consisted of a comprehensive approach that integrated new provider roles and new technological infrastructure for multimorbid patients, with the aim of reducing patient decompensations by 10% over 5 years. The study period was 2012 to 2020. Given the aging of the general population, the conventional scenario predicts an increase of 21% in the health care budget for care of multimorbid patients during the study period. With a successful intervention, this figure should drop to 18%. The statistical analysis, however, showed no significant differences in costs either in primary care or in hospital care between 2012 and 2014. The real costs in 2014 were by far closer to those in the conventional scenario than to the reductions expected in the objective scenario. The present implementation should be reappraised, because the present expenditure did not move closer to the objective budget. This work demonstrates the capacity of budget impact analysis to enhance the implementation of complex interventions. Its integration in the context of the continuous improvement cycle is transferable to other contexts in which implementation depth and time are important. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Jenkinson, Garrett; Abante, Jordi; Feinberg, Andrew P; Goutsias, John
2018-03-07
DNA methylation is a stable form of epigenetic memory used by cells to control gene expression. Whole genome bisulfite sequencing (WGBS) has emerged as a gold-standard experimental technique for studying DNA methylation by producing high resolution genome-wide methylation profiles. Statistical modeling and analysis are employed to computationally extract and quantify information from these profiles in an effort to identify regions of the genome that demonstrate crucial or aberrant epigenetic behavior. However, the performance of most currently available methods for methylation analysis is hampered by their inability to directly account for statistical dependencies between neighboring methylation sites, thus ignoring significant information available in WGBS reads. We present a powerful information-theoretic approach for genome-wide modeling and analysis of WGBS data based on the 1D Ising model of statistical physics. This approach takes into account correlations in methylation by utilizing a joint probability model that encapsulates all information available in WGBS methylation reads and produces accurate results even when applied to single WGBS samples with low coverage. Using the Shannon entropy, our approach provides a rigorous quantification of methylation stochasticity in individual WGBS samples genome-wide. Furthermore, it utilizes the Jensen-Shannon distance to evaluate differences in methylation distributions between a test and a reference sample. Differential performance assessment using simulated and real human lung normal/cancer data demonstrates a clear superiority of our approach over DSS, a recently proposed method for WGBS data analysis. Critically, these results demonstrate that marginal methods become statistically invalid when correlations are present in the data. This contribution demonstrates clear benefits and the necessity of modeling joint probability distributions of methylation using the 1D Ising model of statistical physics and of quantifying methylation stochasticity using concepts from information theory. By employing this methodology, substantial improvement of DNA methylation analysis can be achieved by effectively taking into account the massive amount of statistical information available in WGBS data, which is largely ignored by existing methods.
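The entropy and distance measures named above are standard quantities; here is a minimal sketch (with invented methylation-level distributions, not the authors' Ising-model estimates) of Shannon entropy for a single sample and the Jensen-Shannon distance between a test and a reference sample.

```python
# Minimal sketch: methylation stochasticity via Shannon entropy and a test-vs-reference
# comparison via the Jensen-Shannon distance; the distributions are made up.
import numpy as np
from scipy.stats import entropy
from scipy.spatial.distance import jensenshannon

# Probability distributions over methylation levels {0, 0.25, 0.5, 0.75, 1}
normal = np.array([0.70, 0.10, 0.05, 0.05, 0.10])
cancer = np.array([0.30, 0.20, 0.20, 0.15, 0.15])

print("entropy (normal):", entropy(normal, base=2))
print("entropy (cancer):", entropy(cancer, base=2))
print("JS distance:", jensenshannon(normal, cancer, base=2))
```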
Mild cognitive impairment and fMRI studies of brain functional connectivity: the state of the art
Farràs-Permanyer, Laia; Guàrdia-Olmos, Joan; Peró-Cebollero, Maribel
2015-01-01
In the last 15 years, many articles have studied brain connectivity in Mild Cognitive Impairment patients with fMRI techniques, seemingly using different connectivity statistical models in each investigation to identify complex connectivity structures so as to recognize typical behavior in this type of patient. This diversity in statistical approaches may cause problems in comparing results. This paper seeks to describe how researchers approached the study of brain connectivity in MCI patients using fMRI techniques from 2002 to 2014. The focus is on the statistical analysis proposed by each research group in reference to the limitations and possibilities of those techniques, with the aim of identifying some recommendations to improve the study of functional connectivity. The included articles came from a search of Web of Science and PsycINFO using the following keywords: fMRI, MCI, and functional connectivity. Eighty-one papers were found, but two of them were discarded because of the lack of statistical analysis. Accordingly, 79 articles were included in this review. We summarized some parts of the articles, including the goal of every investigation, the cognitive paradigm and methods used, brain regions involved, use of ROI analysis and statistical analysis, emphasizing the connectivity estimation model used in each investigation. The present analysis allowed us to confirm the remarkable variability of the statistical analysis methods found. Additionally, the study of brain connectivity in this type of population is not providing, at the moment, any significant information or results related to clinical aspects relevant for prediction and treatment. We propose following guidelines for publishing fMRI data, which would be a good solution to the problem of study replication. The latter aspect could be important for future publications because a higher homogeneity would benefit the comparison between publications and the generalization of results. PMID:26300802
Adams, James; Kruger, Uwe; Geis, Elizabeth; Gehn, Eva; Fimbres, Valeria; Pollard, Elena; Mitchell, Jessica; Ingram, Julie; Hellmers, Robert; Quig, David; Hahn, Juergen
2017-01-01
Introduction A number of previous studies examined a possible association of toxic metals and autism, and over half of those studies suggest that toxic metal levels are different in individuals with Autism Spectrum Disorders (ASD). Additionally, several studies found that those levels correlate with the severity of ASD. Methods In order to further investigate these points, this paper performs the most detailed statistical analysis to date of a data set in this field. First morning urine samples were collected from 67 children and adults with ASD and 50 neurotypical controls of similar age and gender. The samples were analyzed to determine the levels of 10 urinary toxic metals (UTM). Autism-related symptoms were assessed with eleven behavioral measures. Statistical analysis was used to distinguish participants on the ASD spectrum and neurotypical participants based upon the UTM data alone. The analysis also included examining the association of autism severity with toxic metal excretion data using linear and nonlinear analysis. “Leave-one-out” cross-validation was used to ensure statistical independence of results. Results and Discussion Average excretion levels of several toxic metals (lead, tin, thallium, antimony) were significantly higher in the ASD group. However, ASD classification using univariate statistics proved difficult due to large variability, but nonlinear multivariate statistical analysis significantly improved ASD classification with Type I/II errors of 15% and 18%, respectively. These results clearly indicate that the urinary toxic metal excretion profiles of participants in the ASD group were significantly different from those of the neurotypical participants. Similarly, nonlinear methods determined a significantly stronger association between the behavioral measures and toxic metal excretion. The association was strongest for the Aberrant Behavior Checklist (including subscales on Irritability, Stereotypy, Hyperactivity, and Inappropriate Speech), but significant associations were found for UTM with all eleven autism-related assessments with cross-validation R2 values ranging from 0.12–0.48. PMID:28068407
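A minimal sketch of leave-one-out cross-validated classification on simulated stand-in data (not the urinary toxic metal measurements); a plain logistic regression is used here purely for illustration, not the nonlinear multivariate method applied in the study.

```python
# Minimal sketch: each participant is predicted by a model that never saw their sample.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 1, (50, 10)), rng.normal(0.6, 1, (67, 10))])  # 10 "metals"
y = np.array([0] * 50 + [1] * 67)                                            # 0 = control, 1 = ASD

pred = cross_val_predict(LogisticRegression(max_iter=1000), X, y, cv=LeaveOneOut())
type_i = np.mean(pred[y == 0] == 1)   # controls classified as ASD
type_ii = np.mean(pred[y == 1] == 0)  # ASD participants classified as control
print(type_i, type_ii)
```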
Moore, Sarah J; Herst, Patries M; Louwe, Robert J W
2018-05-01
A remarkable improvement in patient positioning was observed after the implementation of various process changes aiming to increase the consistency of patient positioning throughout the radiotherapy treatment chain. However, no tool was available to describe these changes over time in a standardised way. This study reports on the feasibility of Statistical Process Control (SPC) to highlight changes in patient positioning accuracy and facilitate correlation of these changes with the underlying process changes. Metrics were designed to quantify the systematic and random patient deformation as input for the SPC charts. These metrics were based on data obtained from multiple local ROI matches for 191 patients who were treated for head-and-neck cancer during the period 2011-2016. SPC highlighted a significant improvement in patient positioning that coincided with multiple intentional process changes. The observed improvements could be described as a combination of a reduction in outliers and a systematic improvement in the patient positioning accuracy of all patients. SPC is able to track changes in the reproducibility of patient positioning in head-and-neck radiation oncology, and distinguish between systematic and random process changes. Identification of process changes underlying these trends requires additional statistical analysis and seems only possible when the changes do not overlap in time. Copyright © 2018 Elsevier B.V. All rights reserved.
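A minimal sketch of one common SPC tool, an individuals (I) control chart with moving-range control limits; the positioning-metric values below are invented and this is not the authors' chart design.

```python
# Minimal sketch: individuals control chart with 3-sigma limits estimated from the
# average moving range (d2 = 1.128 for subgroups of size 2).
import numpy as np

metric = np.array([2.1, 2.4, 1.9, 2.2, 2.6, 2.0, 1.7, 1.5, 1.6, 1.4, 1.3, 1.5])

center = metric.mean()
moving_range = np.abs(np.diff(metric))
sigma_hat = moving_range.mean() / 1.128
ucl, lcl = center + 3 * sigma_hat, center - 3 * sigma_hat

out_of_control = np.where((metric > ucl) | (metric < lcl))[0]
print(f"CL={center:.2f}, UCL={ucl:.2f}, LCL={lcl:.2f}, signals at {out_of_control}")
```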
NASA Astrophysics Data System (ADS)
Noel, Jean; Prieto, Juan C.; Styner, Martin
2017-03-01
Functional Analysis of Diffusion Tensor Tract Statistics (FADTTS) is a toolbox for analysis of white matter (WM) fiber tracts. It allows associating diffusion properties along major WM bundles with a set of covariates of interest, such as age, diagnostic status and gender, and the structure of the variability of these WM tract properties. However, to use this toolbox, a user must have an intermediate knowledge in scripting languages (MATLAB). FADTTSter was created to overcome this issue and make the statistical analysis accessible to any non-technical researcher. FADTTSter is actively being used by researchers at the University of North Carolina. FADTTSter guides non-technical users through a series of steps including quality control of subjects and fibers in order to setup the necessary parameters to run FADTTS. Additionally, FADTTSter implements interactive charts for FADTTS' outputs. This interactive chart enhances the researcher experience and facilitates the analysis of the results. FADTTSter's motivation is to improve usability and provide a new analysis tool to the community that complements FADTTS. Ultimately, by enabling FADTTS to a broader audience, FADTTSter seeks to accelerate hypothesis testing in neuroimaging studies involving heterogeneous clinical data and diffusion tensor imaging. This work is submitted to the Biomedical Applications in Molecular, Structural, and Functional Imaging conference. The source code of this application is available in NITRC.
Research on the raw data processing method of the hydropower construction project
NASA Astrophysics Data System (ADS)
Tian, Zhichao
2018-01-01
Based on the characteristics of fixed data, this paper compares various mathematical-statistical analysis methods and selects an improved Grubbs criterion to analyze the data; through analysis of the data processing, processing methods that are not suitable are identified. It is shown that the chosen method can be applied to the processing of fixed raw data. The paper provides a reference for reasonably determining effective quota analysis data.
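For reference, a minimal sketch of the classical two-sided Grubbs outlier criterion (the paper's "improved" variant is not specified here); the sample values are illustrative only.

```python
# Minimal sketch: Grubbs statistic and its critical value at significance level alpha.
import numpy as np
from scipy import stats

def grubbs_statistic_and_critical(x, alpha=0.05):
    n = len(x)
    g = np.max(np.abs(x - x.mean())) / x.std(ddof=1)            # Grubbs statistic
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)                  # t quantile
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
    return g, g_crit

x = np.array([12.1, 11.9, 12.3, 12.0, 11.8, 12.2, 15.4])
g, g_crit = grubbs_statistic_and_critical(x)
print(g, g_crit, "outlier" if g > g_crit else "no outlier")
```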
Zhang, Chunmei; Liu, Xiangjuan; Wang, Xiaomeng; Wang, Qi; Zhang, Yun; Ge, Zhiming
2015-11-01
A growing number of patients with chronic artery disease suffer from angina, despite optimal medical management (i.e., β-blockers, calcium channel blockers, and long-acting nitrates) and revascularization. Currently, enhanced external counterpulsation (EECP) therapy has been verified as a noninvasive, safe therapy for refractory angina. The study was designed to evaluate the efficacy of EECP in patients with chronic refractory angina according to Canadian Cardiovascular Society (CCS) angina class. We systematically searched MEDLINE, EMBASE, the Cochrane Clinical Trials Register Database, and the ClinicalTrials.gov website from 1990 to 2015. Studies were considered eligible if they were prospective and reported data on CCS class before and after EECP treatment. Meta-analysis was performed to assess the efficacy of EECP therapy, defined as improvement by at least 1 CCS angina class, and the pooled proportion along with its 95% confidence interval (CI) was calculated. Statistical heterogeneity was assessed with the I² statistic and the Q statistic. Sensitivity analysis was performed to test the influence of individual trials on the overall pooled results. Subgroup analysis was applied to explore potential reasons for heterogeneity. Eighteen studies were enrolled in our meta-analysis. Pooled analysis showed that 85% of patients who underwent EECP improved by at least 1 CCS class (95% CI 0.81-0.88, I² = 58.5%, P < 0.001). Among patients with chronic heart failure (CHF) in the included studies, the proportion improving by at least 1 CCS class after EECP was about 84% (95% CI 0.81-0.88, I² = 32.7%, P = 0.1668). After 3 large studies were excluded, the pooled proportion was 82% (95% CI 0.79-0.86, I² = 18%, P = 0.2528). The funnel plot indicated some asymmetry, while the Begg and Egger tests showed no publication bias (P = 0.1495 and 0.2859, respectively). Our study confirmed that EECP provides an effective treatment for patients who are unresponsive to medical management and/or invasive therapy. However, further studies are needed to evaluate the long-term benefits of EECP therapy in the management of chronic refractory angina.
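The Q and I² heterogeneity statistics quoted above follow standard formulas; below is a minimal sketch with invented study-level proportions, not the 18 trials pooled in this meta-analysis.

```python
# Minimal sketch: fixed-effect pooled proportion, Cochran's Q, and I^2 from
# inverse-variance weights; the study counts are illustrative placeholders.
import numpy as np

events = np.array([40, 55, 30, 70])     # patients improving by >= 1 CCS class
totals = np.array([50, 60, 40, 80])

p = events / totals
var = p * (1 - p) / totals               # simple binomial variance per study
w = 1 / var
pooled = np.sum(w * p) / np.sum(w)       # fixed-effect pooled proportion
q = np.sum(w * (p - pooled) ** 2)        # Cochran's Q
df = len(p) - 1
i2 = max(0.0, (q - df) / q) * 100        # I^2: % of variation due to heterogeneity
print(f"pooled={pooled:.2f}, Q={q:.2f}, I2={i2:.1f}%")
```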
Vesterinen, Hanna M; Egan, Kieren; Deister, Amelie; Schlattmann, Peter; Macleod, Malcolm R; Dirnagl, Ulrich
2011-04-01
Translating experimental findings into clinically effective therapies is one of the major bottlenecks of modern medicine. As this has been particularly true for cerebrovascular research, attention has turned to the quality and validity of experimental cerebrovascular studies. We set out to assess the study design, statistical analyses, and reporting of cerebrovascular research. We assessed all original articles published in the Journal of Cerebral Blood Flow and Metabolism during the year 2008 against a checklist designed to capture the key attributes relating to study design, statistical analyses, and reporting. A total of 156 original publications were included (animal, in vitro, human). Few studies reported a primary research hypothesis, statement of purpose, or measures to safeguard internal validity (such as randomization, blinding, exclusion or inclusion criteria). Many studies lacked sufficient information regarding methods and results to form a reasonable judgment about their validity. In nearly 20% of studies, statistical tests were either not appropriate or information to allow assessment of appropriateness was lacking. This study identifies a number of factors that should be addressed if the quality of research in basic and translational biomedicine is to be improved. We support the widespread implementation of the ARRIVE (Animal Research Reporting In Vivo Experiments) statement for the reporting of experimental studies in biomedicine, for improving training in proper study design and analysis, and that reviewers and editors adopt a more constructively critical approach in the assessment of manuscripts for publication.
Statistical Coupling Analysis-Guided Library Design for the Discovery of Mutant Luciferases.
Liu, Mira D; Warner, Elliot A; Morrissey, Charlotte E; Fick, Caitlyn W; Wu, Taia S; Ornelas, Marya Y; Ochoa, Gabriela V; Zhang, Brendan S; Rathbun, Colin M; Porterfield, William B; Prescher, Jennifer A; Leconte, Aaron M
2018-02-06
Directed evolution has proven to be an invaluable tool for protein engineering; however, there is still a need for developing new approaches to continue to improve the efficiency and efficacy of these methods. Here, we demonstrate a new method for library design that applies a previously developed bioinformatic method, Statistical Coupling Analysis (SCA). SCA uses homologous enzymes to identify amino acid positions that are mutable and functionally important and engage in synergistic interactions between amino acids. We use SCA to guide a library of the protein luciferase and demonstrate that, in a single round of selection, we can identify luciferase mutants with several valuable properties. Specifically, we identify luciferase mutants that possess both red-shifted emission spectra and improved stability relative to those of the wild-type enzyme. We also identify luciferase mutants that possess a >50-fold change in specificity for modified luciferins. To understand the mutational origin of these improved mutants, we demonstrate the role of mutations at N229, S239, and G246 in altered function. These studies show that SCA can be used to guide library design and rapidly identify synergistic amino acid mutations from a small library.
Richardson, Karen J; Sengstack, Patricia; Doucette, Jeffrey N; Hammond, William E; Schertz, Matthew; Thompson, Julie; Johnson, Constance
2016-02-01
The primary aim of this performance improvement project was to determine whether the electronic health record implementation of stroke-specific nursing documentation flowsheet templates and clinical decision support alerts improved the nursing documentation of eligible stroke patients in seven stroke-certified emergency departments. Two system enhancements were introduced into the electronic record in an effort to improve nursing documentation: disease-specific documentation flowsheets and clinical decision support alerts. Using a pre-post design, project measures included six stroke management goals as defined by the National Institute of Neurological Disorders and Stroke and three clinical decision support measures based on entry of orders used to trigger documentation reminders for nursing: (1) the National Institutes of Health's Stroke Scale, (2) neurological checks, and (3) dysphagia screening. Data were reviewed 6 months prior (n = 2293) and 6 months following the intervention (n = 2588). Fisher exact test was used for statistical analysis. Statistical significance was found for documentation of five of the six stroke management goals, although effect sizes were small. Customizing flowsheets to meet the needs of nursing workflow showed improvement in the completion of documentation. The effects of the decision support alerts on the completeness of nursing documentation were not statistically significant (likely due to lack of order entry). For example, an order for the National Institutes of Health Stroke Scale was entered only 10.7% of the time, which meant no alert would fire for nursing in the postintervention group. Future work should focus on decision support alerts that trigger reminders for clinicians to place relevant orders for this population.
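A minimal sketch of the pre/post comparison with Fisher's exact test; the documented/not-documented counts below are hypothetical, and only the encounter totals (2293 and 2588) come from the abstract.

```python
# Minimal sketch: Fisher's exact test on a 2x2 table of documentation completion
# before and after the intervention (counts are invented for illustration).
from scipy.stats import fisher_exact

#               documented, not documented
pre_counts  = [1500, 793]    # n = 2293 pre-intervention encounters
post_counts = [1900, 688]    # n = 2588 post-intervention encounters

odds_ratio, p_value = fisher_exact([pre_counts, post_counts])
print(odds_ratio, p_value)
```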
Khatoon, Farheen
2015-01-01
Background Health care faces challenges due to complications, inefficiencies and other concerns that threaten the safety of patients. Aim The purpose of this study was to identify causes of complications encountered after administration of local anaesthesia for dental and oral surgical procedures and to reduce the incidence of complications by introducing Six Sigma methodology. Materials and Methods The DMAIC (Define, Measure, Analyse, Improve and Control) process of Six Sigma, together with failure mode and effect analysis, was used to reduce the incidence of complications encountered after administration of local anaesthesia injections for dental and oral surgical procedures. Pareto analysis was used to identify the most recurring complications. A paired-sample z-test (performed in Minitab) and Fisher's exact test were used to statistically analyse the data; a p-value <0.05 was considered significant. Results A total of 54 systemic and 62 local complications occurred during the three-month measure and analyse phases. Syncope, failure of anaesthesia, trismus, self-biting (auto mordeduras) and pain at the injection site were the most recurring complications. The cumulative defective percentage decreased from 7.99 before the improvement to 4.58 in the control phase. The estimated difference was 0.0341228, with a 95% lower bound of 0.0193966, and the p-value was highly significant (reported as p = 0.000). Conclusion The application of Six Sigma improvement methodology in healthcare tends to deliver consistently better results to patients as well as hospitals and results in better patient compliance and satisfaction. PMID:26816989
Rhee, Chanseok; Visintini, Sarah; Dunning, Cynthia E; Oxner, William M; Glennie, R Andrew
2017-10-01
It is controversial whether the surgical restoration of sagittal balance and spinopelvic angulation in a single level lumbar degenerative spondylolisthesis results in clinical improvements. The purpose of this study was to systematically review the available literature to determine whether the surgical correction of malalignment in lumbar degenerative spondylolisthesis correlates with improvements in patient-reported clinical outcomes. Literature searches were performed via Ovid Medline, Embase, CENTRAL and Web of Science using search terms "lumbar," "degenerative/spondylolisthesis" and "surgery/surgical/surgeries/fusion". This resulted in 844 articles and after reviewing the abstracts and full-texts, 13 articles were included for summary and final analysis. There were two Level II articles, four Level III articles and five Level IV articles. The most commonly used patient-reported outcome measures (PROMs) were the Oswestry Disability Index (ODI) and the visual analogue scale (VAS). Four articles were included for the final statistical analysis. There was no statistically significant difference between the patient groups who achieved successful surgical correction of malalignment and those who did not for either ODI (mean difference -0.94, CI -8.89 to 7.00) or VAS (mean difference 1.57, CI -3.16 to 6.30). Two studies assessed the efficacy of manual reduction of lumbar degenerative spondylolisthesis and the clinical outcomes after the operation, and there was no statistically significant improvement. Overall, the restoration of focal lumbar lordosis and restoration of sagittal balance for single-level lumbar degenerative spondylolisthesis does not seem to yield clinical improvements, but well-powered studies on this specific topic are lacking in the current literature. Future well-powered studies are needed for a more definitive conclusion. Copyright © 2017 Elsevier Ltd. All rights reserved.
Lohuis, Peter J F M; Datema, Frank R
2015-04-01
Tongue-in-groove (TIG) is a conservative but powerful surgical suture technique to control shape, rotation, and projection of the nasal tip. In this study, statistical analyses were performed to determine the aesthetic and functional effectiveness of TIG rhinoplasty. Prospective cohort study including 110 Caucasian or Mediterranean aesthetic rhinoplasty patients treated by one surgeon between 2007 and 2012, with a 1-year follow-up. Data were collected using the Utrecht Questionnaire, a validated instrument routinely offered to our patients before and 1 year after surgery. Aesthetic results were reflected by change in subjective body image in relation to nasal appearance, scored on five aesthetic questions and a 10-cm visual analog scale (VAS). Functional results were determined by change in subjective nasal airway patency, scored on a 10-cm VAS for both sides. The mean aesthetic sum score (range 5 = low burden to 25 = high burden) significantly improved from 14.01 to 6.54 (P < .01). The mean aesthetic VAS score (0 = very ugly to 10 = very nice) significantly improved from 3.35 to 7.78 (P < .01). The mean functional VAS score (0 = very bad to 10 = very good) showed a small but significant improvement on both sides (left, 6.83 to 7.96; right, 6.88 to 7.80; P < .01). Statistical analysis of quantified subjective data on nasal aesthetics and function shows that TIG is a reliable technique that can help to deliver consistently good results in Caucasian and Mediterranean patients seeking aesthetic rhinoplasty. A small additional functional improvement can be expected. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.
NASA Technical Reports Server (NTRS)
Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.
1992-01-01
An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.
NASA Technical Reports Server (NTRS)
Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.
1992-01-01
An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.
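A minimal Monte Carlo sketch of the general idea behind such probabilistic assessments (uncertain parameters propagated through an analytical failure model to a failure probability); the toy stress-life model, the distributions, and the required life are all illustrative, not the documented PFA models or software.

```python
# Minimal sketch: propagate parameter uncertainty through a toy Basquin-type life
# model and estimate the probability that predicted life falls short of the mission.
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
required_life = 5.0e4                                           # cycles demanded (toy value)

stress = rng.normal(300.0, 20.0, n)                             # MPa, uncertain load
coef = rng.lognormal(mean=np.log(2.0e12), sigma=0.3, size=n)    # material scatter
exponent = 3.0                                                  # fixed model exponent

life = coef / stress**exponent                                  # predicted cycles to failure
p_fail = np.mean(life < required_life)
print(f"estimated failure probability: {p_fail:.4f}")
```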
NASA Astrophysics Data System (ADS)
J-Me, Teh; Noh, Norlaili Mohd.; Aziz, Zalina Abdul
2015-05-01
In the chip industry today, the key goal of a chip development organization is to develop and market chips within a short time frame to gain foothold on market share. This paper proposes a design flow around the area of parasitic extraction to improve the design cycle time. The proposed design flow utilizes the usage of metal fill emulation as opposed to the current flow which performs metal fill insertion directly. By replacing metal fill structures with an emulation methodology in earlier iterations of the design flow, this is targeted to help reduce runtime in fill insertion stage. Statistical design of experiments methodology utilizing the randomized complete block design was used to select an appropriate emulated metal fill width to improve emulation accuracy. The experiment was conducted on test cases of different sizes, ranging from 1000 gates to 21000 gates. The metal width was varied from 1 x minimum metal width to 6 x minimum metal width. Two-way analysis of variance and Fisher's least significant difference test were used to analyze the interconnect net capacitance values of the different test cases. This paper presents the results of the statistical analysis for the 45 nm process technology. The recommended emulated metal fill width was found to be 4 x the minimum metal width.
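A minimal sketch of a two-way ANOVA for emulated fill width with test-case size as a blocking factor, in the spirit of a randomized complete block design; the simulated capacitance-error values and column names are hypothetical, not the 45 nm data.

```python
# Minimal sketch: two-way ANOVA (width and block) on simulated capacitance errors.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
widths = [1, 2, 3, 4, 5, 6]          # multiples of the minimum metal width
blocks = ["1k", "5k", "13k", "21k"]  # test-case sizes acting as blocks
df = pd.DataFrame([{"width": w, "block": b,
                    "cap_error": rng.normal(1.0 / w, 0.05)}   # error shrinks with width
                   for w in widths for b in blocks for _ in range(3)])

model = smf.ols("cap_error ~ C(width) + C(block)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```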
NASA Astrophysics Data System (ADS)
Wang, S.; Ancell, B. C.; Huang, G. H.; Baetz, B. W.
2018-03-01
Data assimilation using the ensemble Kalman filter (EnKF) has been increasingly recognized as a promising tool for probabilistic hydrologic predictions. However, little effort has been made to conduct the pre- and post-processing of assimilation experiments, posing a significant challenge in achieving the best performance of hydrologic predictions. This paper presents a unified data assimilation framework for improving the robustness of hydrologic ensemble predictions. Statistical pre-processing of assimilation experiments is conducted through the factorial design and analysis to identify the best EnKF settings with maximized performance. After the data assimilation operation, statistical post-processing analysis is also performed through the factorial polynomial chaos expansion to efficiently address uncertainties in hydrologic predictions, as well as to explicitly reveal potential interactions among model parameters and their contributions to the predictive accuracy. In addition, the Gaussian anamorphosis is used to establish a seamless bridge between data assimilation and uncertainty quantification of hydrologic predictions. Both synthetic and real data assimilation experiments are carried out to demonstrate feasibility and applicability of the proposed methodology in the Guadalupe River basin, Texas. Results suggest that statistical pre- and post-processing of data assimilation experiments provide meaningful insights into the dynamic behavior of hydrologic systems and enhance robustness of hydrologic ensemble predictions.
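A minimal sketch of a stochastic ensemble Kalman filter analysis step with a linear observation operator, illustrating how flow-dependent background error statistics enter the update; the dimensions and values are toy placeholders, not the hydrologic system or the operational assimilation code used in the study.

```python
# Minimal sketch: stochastic EnKF analysis step with perturbed observations.
import numpy as np

rng = np.random.default_rng(6)
n_state, n_obs, n_ens = 8, 3, 20

X = rng.normal(size=(n_state, n_ens))            # background ensemble (state x members)
H = np.zeros((n_obs, n_state)); H[0, 0] = H[1, 3] = H[2, 6] = 1.0
R = 0.5 * np.eye(n_obs)                          # observation error covariance
y = rng.normal(size=n_obs)                       # observations

A = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
Pf = A @ A.T / (n_ens - 1)                       # flow-dependent background covariance
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain

# perturbed observations give each member its own innovation
Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
Xa = X + K @ (Y - H @ X)                         # analysis ensemble
print(Xa.mean(axis=1))
```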
An R package for analyzing and modeling ranking data.
Lee, Paul H; Yu, Philip L H
2013-05-14
In medical informatics, psychology, market research and many other fields, researchers often need to analyze and model ranking data. However, there is no statistical software that provides tools for the comprehensive analysis of ranking data. Here, we present pmr, an R package for analyzing and modeling ranking data with a bundle of tools. The pmr package enables descriptive statistics (mean rank, pairwise frequencies, and marginal matrix), Analytic Hierarchy Process models (with Saaty's and Koczkodaj's inconsistencies), probability models (Luce model, distance-based model, and rank-ordered logit model), and the visualization of ranking data with multidimensional preference analysis. Examples of the use of package pmr are given using a real ranking dataset from medical informatics, in which 566 Hong Kong physicians ranked the top five incentives (1: competitive pressures; 2: increased savings; 3: government regulation; 4: improved efficiency; 5: improved quality care; 6: patient demand; 7: financial incentives) to the computerization of clinical practice. The mean rank showed that item 4 is the most preferred item and item 3 is the least preferred item, and a significant difference was found between physicians' preferences with respect to their monthly income. A multidimensional preference analysis identified two dimensions that explain 42% of the total variance. The first can be interpreted as the overall preference of the seven items (labeled as "internal/external"), and the second dimension can be interpreted as the overall variance of those preferences (labeled as "push/pull factors"). Various statistical models were fitted, and the best were found to be weighted distance-based models with Spearman's footrule distance. In this paper, we presented the R package pmr, the first package for analyzing and modeling ranking data. The package provides insight to users through descriptive statistics of ranking data. Users can also visualize ranking data by applying multidimensional preference analysis. Various probability models for ranking data are also included, allowing users to choose the one most suitable to their specific situations.
Sochacki, Kyle R; Jack, Robert A; Hirase, Takashi; McCulloch, Patrick C; Lintner, David M; Liberman, Shari R; Harris, Joshua D
2017-12-01
The purpose of this investigation was to determine whether arthroscopic debridement of primary elbow osteoarthritis results in statistically significant and clinically relevant improvement in (1) elbow range of motion and (2) clinical outcomes with (3) low complication and reoperation rates. A systematic review was registered with PROSPERO and performed using PRISMA guidelines. Databases were searched for studies that investigated the outcomes of arthroscopic debridement for the treatment of primary osteoarthritis of the elbow in adult human patients. Study methodological quality was analyzed. Studies that included post-traumatic arthritis were excluded. Elbow motion and all elbow-specific patient-reported outcome scores were eligible for analysis. Comparisons between preoperative and postoperative values from each study were made using 2-sample Z-tests (http://in-silico.net/tools/statistics/ztest) with a P value < .05 considered significant. Nine articles (209 subjects, 213 elbows, 187 males, 22 females, mean age 45.7 ± 7.1 years, mean follow-up 41.7 ± 16.3 months; 75% right, 25% left; 79% dominant elbow, 21% nondominant) were analyzed. Elbow extension (23.4°-10.7°, Δ 12.7°), flexion (115.9°-128.7°, Δ 12.8°), and global arc of motion (94.5°-117.6°, Δ 23.1°) had statistically significant and clinically relevant improvement following arthroscopic debridement (P < .0001 for all). There was also a statistically significant (P < .0001) and clinically relevant improvement in the Mayo Elbow Performance Score (60.7-84.6, Δ 23.9) postoperatively. Six patients (2.8%) had postoperative complications. Nine (4.2%) underwent reoperation. Elbow arthroscopic debridement for primary degenerative osteoarthritis results in statistically significant and clinically relevant improvement in elbow range of motion and clinical outcomes with low complication and reoperation rates. Systematic review of level IV studies. Copyright © 2017 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
A DMAIC approach for process capability improvement an engine crankshaft manufacturing process
NASA Astrophysics Data System (ADS)
Sharma, G. V. S. S.; Rao, P. Srinivasa
2014-05-01
The define-measure-analyze-improve-control (DMAIC) approach is a five-phase, scientific approach for reducing deviations and improving the capability levels of manufacturing processes. The present work elaborates on the DMAIC approach applied to reducing the process variation of the stub-end-hole boring operation in crankshaft manufacture. This statistical process control study starts with selection of the critical-to-quality (CTQ) characteristic in the define phase. The next phase constitutes the collection of dimensional measurement data on the identified CTQ characteristic. This is followed by the analysis and improvement phases, where quality control tools such as the Ishikawa diagram, physical mechanism analysis, failure modes and effects analysis, and analysis of variance are applied. Finally, process monitoring charts are deployed at the workplace for regular monitoring and control of the concerned CTQ characteristic. By adopting the DMAIC approach, the standard deviation was reduced from 0.003 to 0.002. The process potential capability index (Cp) improved from 1.29 to 2.02 and the process performance capability index (Cpk) improved from 0.32 to 1.45.
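The capability indices quoted above follow standard formulas; a minimal sketch with illustrative measurements and specification limits (not the crankshaft data):

```python
# Minimal sketch: Cp and Cpk from a sample of measurements and spec limits.
import numpy as np

x = np.random.default_rng(7).normal(10.000, 0.002, 200)  # bored-hole dimension, mm
lsl, usl = 9.994, 10.006                                  # specification limits, mm

mu, sigma = x.mean(), x.std(ddof=1)
cp = (usl - lsl) / (6 * sigma)                            # process potential
cpk = min(usl - mu, mu - lsl) / (3 * sigma)               # process performance
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```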
2013-01-01
Background Over the last years, the introduction of robotic technologies into Parkinson's disease rehabilitation settings has progressed from concept to reality. However, the benefit of robotic training remains elusive. This pilot randomized controlled observer trial is aimed at investigating the feasibility, effectiveness and efficacy of a new end-effector robot training in people with mild Parkinson's disease. Methods Design. Pilot randomized controlled trial. Setting. Robot assisted gait training (EG) compared to treadmill training (CG). Participants. Twenty cognitively intact participants with mild Parkinson's disease and gait disturbance. Interventions. The EG underwent a rehabilitation programme of robot assisted walking for 40 minutes, 5 times a week for 4 weeks. The CG received a treadmill training programme for 40 minutes, 5 times a week for 4 weeks. Main outcome measures. The outcome measure of efficacy was recorded in a gait analysis laboratory. The assessments were performed at the beginning (T0) and at the end of the treatment (T1). The main outcome was the change in velocity. The feasibility of the intervention was assessed by recording exercise adherence and acceptability with a specific test. Results Robot training was feasible, acceptable, and safe, and the participants completed 100% of the prescribed training sessions. A statistically significant improvement in gait index was found in favour of the EG (T0 versus T1). In particular, the statistical analysis of the primary outcome (gait speed) using the Friedman test showed statistically significant improvements for the EG (p = 0.0195). The statistical analysis by Friedman test of step length left (p = 0.0195) and right (p = 0.0195) and stride length left (p = 0.0078) and right (p = 0.0195) showed a statistically significant gain. No statistically significant improvements in the CG were found. Conclusions Robot training is a feasible and safe form of rehabilitative exercise for cognitively intact people with mild PD. This original approach can contribute to short-term lower limb motor recovery in idiopathic PD patients. The focus on gait recovery is a further characteristic that makes this research relevant to clinical practice. On the whole, the simplicity of treatment, the lack of side effects, and the positive results from patients support the recommendation to extend the use of this treatment. Further investigation regarding the long-term effectiveness of robot training is warranted. Trial registration ClinicalTrials.gov NCT01668407 PMID:23706025
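A minimal sketch of the Friedman test used for the repeated gait measures is shown below. The gait-speed values and the three assessment points are hypothetical (the trial compared only T0 and T1, but scipy's friedmanchisquare requires at least three related samples, so a third time point is added purely for illustration).

```python
from scipy.stats import friedmanchisquare

# Hypothetical gait speed (m/s) for 10 subjects at three repeated assessments
baseline   = [0.82, 0.75, 0.90, 0.68, 0.77, 0.85, 0.72, 0.80, 0.69, 0.88]
post_treat = [0.95, 0.83, 0.97, 0.74, 0.86, 0.93, 0.80, 0.91, 0.75, 0.96]
follow_up  = [0.93, 0.81, 0.99, 0.73, 0.84, 0.95, 0.78, 0.90, 0.74, 0.97]

# Friedman's test ranks values within each subject, so it respects the repeated-measures design
stat, p = friedmanchisquare(baseline, post_treat, follow_up)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")
```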
NASA Astrophysics Data System (ADS)
Boning, Duane S.; Chung, James E.
1998-11-01
Advanced process technology will require more detailed understanding and tighter control of variation in devices and interconnects. The purpose of statistical metrology is to provide methods to measure and characterize variation, to model systematic and random components of that variation, and to understand the impact of variation on both yield and performance of advanced circuits. Of particular concern are spatial or pattern-dependencies within individual chips; such systematic variation within the chip can have a much larger impact on performance than wafer-level random variation. Statistical metrology methods will play an important role in the creation of design rules for advanced technologies. For example, a key issue in multilayer interconnect is the uniformity of interlevel dielectric (ILD) thickness within the chip. For the case of ILD thickness, we describe phases of statistical metrology development and application to understanding and modeling thickness variation arising from chemical-mechanical polishing (CMP). These phases include screening experiments, including the design of test structures and test masks to gather electrical or optical data; techniques for statistical decomposition and analysis of the data; and approaches to calibrating empirical and physical variation models. These models can be integrated with circuit CAD tools to evaluate different process integration or design rule strategies. One focus for the generation of interconnect design rules is guidelines for the use of "dummy fill" or "metal fill" to improve the uniformity of underlying metal density and thus improve the uniformity of oxide thickness within the die. Trade-offs that can be evaluated via statistical metrology include the improvements to uniformity possible versus the effect of increased capacitance due to additional metal.
Online and offline tools for head movement compensation in MEG.
Stolk, Arjen; Todorovic, Ana; Schoffelen, Jan-Mathijs; Oostenveld, Robert
2013-03-01
Magnetoencephalography (MEG) is measured above the head, which makes it sensitive to variations of the head position with respect to the sensors. Head movements blur the topography of the neuronal sources of the MEG signal, increase localization errors, and reduce statistical sensitivity. Here we describe two novel and readily applicable methods that compensate for the detrimental effects of head motion on the statistical sensitivity of MEG experiments. First, we introduce an online procedure that continuously monitors head position. Second, we describe an offline analysis method that takes into account the head position time-series. We quantify the performance of these methods in the context of three different experimental settings, involving somatosensory, visual and auditory stimuli, assessing both individual and group-level statistics. The online head localization procedure allowed for optimal repositioning of the subjects over multiple sessions, resulting in a 28% reduction of the variance in dipole position and an improvement of up to 15% in statistical sensitivity. Offline incorporation of the head position time-series into the general linear model resulted in improvements of group-level statistical sensitivity between 15% and 29%. These tools can substantially reduce the influence of head movement within and between sessions, increasing the sensitivity of many cognitive neuroscience experiments. Copyright © 2012 Elsevier Inc. All rights reserved.
Evidential Value That Exercise Improves BMI z-Score in Overweight and Obese Children and Adolescents
Kelley, George A.; Kelley, Kristi S.
2015-01-01
Background. Given the cardiovascular disease (CVD) related importance of understanding the true effects of exercise on adiposity in overweight and obese children and adolescents, this study examined whether there is evidential value to rule out excessive and inappropriate reporting of statistically significant results, a major problem in the published literature, with respect to exercise-induced improvements in BMI z-score among overweight and obese children and adolescents. Methods. Using data from a previous meta-analysis of 10 published studies that included 835 overweight and obese children and adolescents, a novel, recently developed approach (p-curve) was used to test for evidential value and rule out selective reporting of findings. Chi-squared tests (χ²) were used to test for statistical significance with alpha (p) values <0.05 considered statistically significant. Results. Six of 10 findings (60%) were statistically significant. Statistically significant right-skew to rule out selective reporting was found (χ² = 38.8, p = 0.0001). Conversely, studies neither lacked evidential value (χ² = 6.8, p = 0.87) nor lacked evidential value and were intensely p-hacked (χ² = 4.3, p = 0.98). Conclusion. Evidential value results confirm that exercise reduces BMI z-score in overweight and obese children and adolescents, an important therapeutic strategy for treating and preventing CVD. PMID:26509145
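The sketch below illustrates the right-skew test at the heart of p-curve, under the assumption that it follows the published procedure: each statistically significant p-value is converted to a pp-value (p divided by alpha, uniform on (0,1) when there is no true effect) and the pp-values are combined with Fisher's method. The study-level p-values used here are hypothetical, not the ten findings from the meta-analysis.

```python
import numpy as np
from scipy.stats import chi2

def pcurve_right_skew(p_values, alpha=0.05):
    """Fisher-style test for right-skew among statistically significant p-values."""
    sig = np.array([p for p in p_values if p < alpha])
    pp = sig / alpha                       # pp-values: uniform on (0,1) under no true effect
    stat = -2 * np.sum(np.log(pp))         # Fisher's chi-square statistic, df = 2k
    df = 2 * len(pp)
    p_right_skew = chi2.sf(stat, df)       # small p indicates evidential value (right-skewed p-curve)
    return stat, df, p_right_skew

# Hypothetical significant study-level p-values
stat, df, p = pcurve_right_skew([0.001, 0.004, 0.012, 0.020, 0.030, 0.041])
print(f"chi2({df}) = {stat:.1f}, p = {p:.4f}")
```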
NASA Astrophysics Data System (ADS)
Bhowmik, Mrinal Kanti; Gogoi, Usha Rani; Das, Kakali; Ghosh, Anjan Kumar; Bhattacharjee, Debotosh; Majumdar, Gautam
2016-05-01
The non-invasive, painless, radiation-free and cost-effective infrared breast thermography (IBT) makes a significant contribution to improving the survival rate of breast cancer patients by early detecting the disease. This paper presents a set of standard breast thermogram acquisition protocols to improve the potentiality and accuracy of infrared breast thermograms in early breast cancer detection. By maintaining all these protocols, an infrared breast thermogram acquisition setup has been established at the Regional Cancer Centre (RCC) of Government Medical College (AGMC), Tripura, India. The acquisition of breast thermogram is followed by the breast thermogram interpretation, for identifying the presence of any abnormality. However, due to the presence of complex vascular patterns, accurate interpretation of breast thermogram is a very challenging task. The bilateral symmetry of the thermal patterns in each breast thermogram is quantitatively computed by statistical feature analysis. A series of statistical features are extracted from a set of 20 thermograms of both healthy and unhealthy subjects. Finally, the extracted features are analyzed for breast abnormality detection. The key contributions made by this paper can be highlighted as -- a) the designing of a standard protocol suite for accurate acquisition of breast thermograms, b) creation of a new breast thermogram dataset by maintaining the protocol suite, and c) statistical analysis of the thermograms for abnormality detection. By doing so, this proposed work can minimize the rate of false findings in breast thermograms and thus, it will increase the utilization potentiality of breast thermograms in early breast cancer detection.
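As a hedged illustration of the bilateral-symmetry idea, the sketch below computes simple first-order statistical features for the left and right halves of a thermogram and their absolute differences. The region split and the particular feature set (mean, standard deviation, skewness, kurtosis) are assumptions chosen for illustration, not necessarily the exact features extracted in the paper.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def first_order_features(region):
    """First-order statistical features of a thermogram region (2-D array of temperatures)."""
    values = region.ravel()
    return {
        "mean": values.mean(),
        "std": values.std(ddof=1),
        "skewness": skew(values),
        "kurtosis": kurtosis(values),
    }

# Hypothetical thermogram (degrees Celsius) split into left/right breast regions
rng = np.random.default_rng(1)
thermogram = rng.normal(loc=33.0, scale=0.5, size=(240, 320))
left, right = thermogram[:, :160], thermogram[:, 160:]

f_left, f_right = first_order_features(left), first_order_features(right)
asymmetry = {k: abs(f_left[k] - f_right[k]) for k in f_left}
print(asymmetry)   # large left-right differences may flag an asymmetric thermal pattern
```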
Frequent statistics of link-layer bit stream data based on AC-IM algorithm
NASA Astrophysics Data System (ADS)
Cao, Chenghong; Lei, Yingke; Xu, Yiming
2017-08-01
At present, there is much research on data processing using classical pattern matching and its improved algorithms, but little on frequent statistics of link-layer bit stream data. This paper adopts a frequent statistical method for link-layer bit stream data based on the AC-IM algorithm, because classical multi-pattern matching algorithms such as the AC algorithm have high computational complexity and low efficiency, and cannot be applied directly to binary bit stream data. The method's maximum jump distance in the pattern tree is the length of the shortest pattern string plus 3, while ensuring that no matches are missed. Theoretical analysis is first made of the principle of the algorithm's construction, and the experimental results then show that the algorithm can adapt to the binary bit stream data environment and extract frequent sequences more accurately, with an obvious effect. Meanwhile, compared with the classical AC algorithm and other improved algorithms, the AC-IM algorithm has a greater maximum jump distance and is less time-consuming.
Using simulation modeling to improve patient flow at an outpatient orthopedic clinic.
Rohleder, Thomas R; Lewkonia, Peter; Bischak, Diane P; Duffy, Paul; Hendijani, Rosa
2011-06-01
We report on the use of discrete event simulation modeling to support process improvements at an orthopedic outpatient clinic. The clinic was effective in treating patients, but waiting time and congestion in the clinic created patient dissatisfaction and staff morale issues. The modeling helped to identify improvement alternatives including optimized staffing levels, better patient scheduling, and an emphasis on staff arriving promptly. Quantitative results from the modeling provided motivation to implement the improvements. Statistical analysis of data taken before and after the implementation indicates that waiting time measures were significantly improved and overall patient time in the clinic was reduced.
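A minimal discrete-event sketch of a clinic queue, written with the SimPy library, is shown below. The arrival rate, consultation time, exponential distributions, and staffing level are hypothetical placeholders for the clinic's measured data; the point is only to show how a waiting-time distribution can be generated and then compared across staffing or scheduling scenarios.

```python
import random
import simpy

WAITS = []

def patient(env, clinicians):
    arrival = env.now
    with clinicians.request() as req:                 # queue for an available clinician
        yield req
        WAITS.append(env.now - arrival)               # record this patient's waiting time
        yield env.timeout(random.expovariate(1 / 20)) # consultation of ~20 minutes on average

def arrivals(env, clinicians):
    while True:
        yield env.timeout(random.expovariate(1 / 10)) # a new arrival roughly every 10 minutes
        env.process(patient(env, clinicians))

random.seed(42)
env = simpy.Environment()
clinicians = simpy.Resource(env, capacity=2)          # staffing level to experiment with
env.process(arrivals(env, clinicians))
env.run(until=8 * 60)                                 # one 8-hour clinic day, in minutes

print(f"patients seen: {len(WAITS)}, mean wait: {sum(WAITS) / len(WAITS):.1f} min")
```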
Analysis of repeated measurement data in the clinical trials
Singh, Vineeta; Rana, Rakesh Kumar; Singhal, Richa
2013-01-01
Statistics is an integral part of clinical trials. Elements of statistics span clinical trial design, data monitoring, analyses and reporting. A solid understanding of statistical concepts by clinicians improves the comprehension and the resulting quality of clinical trials. In biomedical research it has been seen that researchers frequently use the t-test and ANOVA to compare means between the groups of interest, irrespective of the nature of the data. In clinical trials we often record data on the patients at more than two time points. In such a situation, using the standard ANOVA procedures is not appropriate, as they do not consider the dependencies between observations within subjects in the analysis. To deal with such study data, repeated measures ANOVA should be used. In this article the application of one-way repeated measures ANOVA is demonstrated using the software SPSS (Statistical Package for Social Sciences) Version 15.0 on data collected at four time points (0 day, 15th day, 30th day, and 45th day) of a multicentre clinical trial conducted on Pandu Roga (~Iron Deficiency Anemia) with an Ayurvedic formulation, Dhatrilauha. PMID:23930038
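A minimal sketch of a one-way repeated-measures ANOVA is shown below, using Python's statsmodels (AnovaRM) rather than SPSS. The haemoglobin values, the number of patients, and the effect sizes are simulated placeholders; only the four-visit layout mirrors the design described above.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical haemoglobin (g/dL) for 12 patients measured at four time points
rng = np.random.default_rng(7)
subjects = np.repeat(np.arange(12), 4)
time = np.tile(["day0", "day15", "day30", "day45"], 12)
hb = rng.normal(loc=10.0, scale=0.8, size=48) + np.tile([0.0, 0.4, 0.8, 1.1], 12)

df = pd.DataFrame({"subject": subjects, "time": time, "hb": hb})

# AnovaRM accounts for the within-subject dependence of repeated measurements
result = AnovaRM(df, depvar="hb", subject="subject", within=["time"]).fit()
print(result)
```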
NASA Astrophysics Data System (ADS)
Karuppiah, R.; Faldi, A.; Laurenzi, I.; Usadi, A.; Venkatesh, A.
2014-12-01
An increasing number of studies are focused on assessing the environmental footprint of different products and processes, especially using life cycle assessment (LCA). This work shows how combining statistical methods and Geographic Information Systems (GIS) with environmental analyses can help improve the quality of results and their interpretation. Most environmental assessments in the literature yield single numbers that characterize the environmental impact of a process/product - typically global or country averages, often unchanging in time. In this work, we show how statistical analysis and GIS can help address these limitations. For example, we demonstrate a method to separately quantify uncertainty and variability in the result of LCA models using a power generation case study. This is important for rigorous comparisons between the impacts of different processes. Another challenge is lack of data that can affect the rigor of LCAs. We have developed an approach to estimate environmental impacts of incompletely characterized processes using predictive statistical models. This method is applied to estimate unreported coal power plant emissions in several world regions. There is also a general lack of spatio-temporal characterization of the results in environmental analyses. For instance, studies that focus on water usage do not put in context where and when water is withdrawn. Through the use of hydrological modeling combined with GIS, we quantify water stress on a regional and seasonal basis to understand water supply and demand risks for multiple users. Another example where it is important to consider regional dependency of impacts is when characterizing how agricultural land occupation affects biodiversity in a region. We developed a data-driven methodology used in conjunction with GIS to determine if there is a statistically significant difference between the impacts of growing different crops on different species in various biomes of the world.
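One standard way to separate uncertainty from variability, as described above, is a nested (two-dimensional) Monte Carlo simulation: an outer loop samples the epistemically uncertain parameters and an inner loop samples the real-world variability. The sketch below is a generic illustration with hypothetical distributions, not the power-generation case study itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n_uncertainty, n_variability = 200, 1000

results = np.empty((n_uncertainty, n_variability))
for i in range(n_uncertainty):
    # Outer loop: epistemic uncertainty in an emission factor (kg CO2 per unit fuel)
    emission_factor = rng.normal(loc=0.9, scale=0.1)
    # Inner loop: plant-to-plant variability in conversion efficiency
    efficiency = rng.uniform(0.30, 0.45, size=n_variability)
    results[i, :] = emission_factor / efficiency

# Variability appears as spread within a row; uncertainty as spread across row means
within = results.std(axis=1).mean()     # average spread due to variability
between = results.mean(axis=1).std()    # spread due to parameter uncertainty
print(f"variability spread ~ {within:.2f}, uncertainty spread ~ {between:.2f}")
```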
Connell, J.F.; Bailey, Z.C.
1989-01-01
A total of 338 single-well aquifer tests from Bear Creek and Melton Valley, Tennessee were statistically grouped to estimate hydraulic conductivities for the geologic formations in the valleys. A cross-sectional simulation model linked to a regression model was used to further refine the statistical estimates for each of the formations and to improve understanding of ground-water flow in Bear Creek Valley. Median hydraulic-conductivity values were used as initial values in the model. Model-calculated estimates of hydraulic conductivity were generally lower than the statistical estimates. Simulations indicate that (1) the Pumpkin Valley Shale controls groundwater flow between Pine Ridge and Bear Creek; (2) all the recharge on Chestnut Ridge discharges to the Maynardville Limestone; (3) the formations having smaller hydraulic gradients may have a greater tendency for flow along strike; (4) local hydraulic conditions in the Maynardville Limestone cause inaccurate model-calculated estimates of hydraulic conductivity; and (5) the conductivity of deep bedrock neither affects the results of the model nor does it add information on the flow system. Improved model performance would require: (1) more water level data for the Copper Ridge Dolomite; (2) improved estimates of hydraulic conductivity in the Copper Ridge Dolomite and Maynardville Limestone; and (3) more water level data and aquifer tests in deep bedrock. (USGS)
Linear and Order Statistics Combiners for Pattern Classification
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Ghosh, Joydeep; Lau, Sonie (Technical Monitor)
2001-01-01
Several researchers have experimentally shown that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This chapter provides an analytical framework to quantify the improvements in classification results due to combining. The results apply to both linear combiners and order statistics combiners. We first show that, to a first order approximation, the error rate obtained over and above the Bayes error rate is directly proportional to the variance of the actual decision boundaries around the Bayes optimum boundary. Combining classifiers in output space reduces this variance, and hence reduces the 'added' error. If N unbiased classifiers are combined by simple averaging, the added error rate can be reduced by a factor of N if the individual errors in approximating the decision boundaries are uncorrelated. Expressions are then derived for linear combiners which are biased or correlated, and the effect of output correlations on ensemble performance is quantified. For order statistics based non-linear combiners, we derive expressions that indicate how much the median, the maximum and, in general, the i-th order statistic can improve classifier performance. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space. Experimental results on several public domain data sets are provided to illustrate the benefits of combining and to support the analytical results.
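The factor-of-N variance reduction for averaging unbiased, uncorrelated classifiers can be checked numerically. The sketch below simplifies the chapter's framework by treating each classifier's boundary estimate as an unbiased noisy number around the optimum; the noise scale and ensemble size are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
true_boundary = 0.0
n_classifiers, n_trials = 10, 100_000

# Each classifier's boundary estimate: unbiased, uncorrelated noise around the optimum
estimates = rng.normal(loc=true_boundary, scale=1.0, size=(n_trials, n_classifiers))

single_var = estimates[:, 0].var()            # variance of one classifier's boundary estimate
ensemble_var = estimates.mean(axis=1).var()   # variance of the simple-average boundary

print(f"single: {single_var:.3f}, ensemble of {n_classifiers}: {ensemble_var:.3f}")
# The ensemble variance is close to single_var / N, i.e. roughly a ten-fold reduction here
```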
Papakostas, George I; Clain, Alisabet; Ameral, Victoria E; Baer, Lee; Brintz, Carrie; Smith, Ward T; Londborg, Peter D; Glaudin, Vincent; Painter, John R; Fava, Maurizio
2010-01-01
Anxious depression, defined as major depressive disorder (MDD) accompanied by high levels of anxiety, seems to be both common and difficult to treat, with antidepressant monotherapy often yielding modest results. We sought to examine the relative benefits of antidepressant-anxiolytic cotherapy versus antidepressant monotherapy for patients with versus without anxious depression. We conducted a post-hoc analysis of an existing dataset (N=80) from a 3-week, randomized, double-blind trial which demonstrated cotherapy with fluoxetine and clonazepam to result in superior efficacy compared with fluoxetine monotherapy in MDD. The present analysis involved examining whether anxious depression status served as a predictor and moderator of symptom improvement. Anxious depression status was not found to predict symptom improvement, or to serve as a moderator of clinical improvement with cotherapy versus monotherapy. However, the advantage in remission rates in favor of cotherapy versus monotherapy was, numerically, much larger for patients with anxious depression (32.2%) than it was for patients without anxious MDD (9.7%). The respective number-needed-to-treat statistics for these two differences in response rates were, approximately, one in three for patients with anxious depression versus one in 10 for patients without anxious depression. The efficacy of fluoxetine-clonazepam cotherapy compared with fluoxetine monotherapy was numerically but not statistically enhanced for patients with anxious depression compared with those without anxious depression.
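The number-needed-to-treat figures quoted above follow directly from the reported rate advantages (32.2% and 9.7%); a short sketch of the calculation:

```python
def number_needed_to_treat(risk_difference):
    """NNT = 1 / absolute difference in event rates between the two treatment arms."""
    return 1.0 / risk_difference

# Advantages in remission rates for cotherapy over monotherapy, as reported in the abstract
print(f"anxious depression:     NNT ~ {number_needed_to_treat(0.322):.1f}")  # about 1 in 3
print(f"non-anxious depression: NNT ~ {number_needed_to_treat(0.097):.1f}")  # about 1 in 10
```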
Lanting, Sean M; Johnson, Nathan A; Baker, Michael K; Caterson, Ian D; Chuter, Vivienne H
2017-02-01
This study aimed to review the efficacy of exercise training for improving cutaneous microvascular reactivity in response to local stimulus in human adults. Systematic review with meta-analysis. A systematic search of Medline, Cinahl, AMED, Web of Science, Scopus, and Embase was conducted up to June 2015. Included studies were controlled trials assessing the effect of an exercise training intervention on cutaneous microvascular reactivity as instigated by local stimulus such as local heating, iontophoresis and post-occlusive reactive hyperaemia. Studies where the control was only measured at baseline or which included participants with vasospastic disorders were excluded. Two authors independently reviewed and selected relevant controlled trials and extracted data. Quality was assessed using the Downs and Black checklist. Seven trials were included, with six showing a benefit of exercise training but only two reaching statistical significance with effect size ranging from -0.14 to 1.03. The meta-analysis revealed that aerobic exercise had a moderate statistically significant effect on improving cutaneous microvascular reactivity (effect size (ES)=0.43, 95% CI: 0.08-0.78, p=0.015). Individual studies employing an exercise training intervention have tended to have small sample sizes and hence lacked sufficient power to detect clinically meaningful benefits to cutaneous microvascular reactivity. Pooled analysis revealed a clear benefit of exercise training on improving cutaneous microvascular reactivity in older and previously inactive adult cohorts. Exercise training may provide a cost-effective option for improving cutaneous microvascular reactivity in adults and may be of benefit to those with cardiovascular disease and metabolic disorders such as diabetes. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Boriani, Filippo; Villani, Riccardo; Morselli, Paolo Giovanni
2014-10-01
Obesity is increasingly frequent in our society and is associated closely with metabolic disorders. As some studies have suggested, removal of fat tissue through liposuction and dermolipectomies may be of some benefit in the improvement of metabolic indices. This article aimed to review the published literature on this topic and to evaluate metabolic variations meta-analytically after liposuction, dermolipectomy, or both. Through a literature search with the PubMed/Medline database, 14 studies were identified. All articles were analyzed, and several metabolic variables were chosen in the attempt to meta-analyze the effect of adipose tissue removal through the various studies. All statistical calculations were performed with Review Manager (RevMan), version 5.0. Several cardiovascular and metabolic variables are described as prone to variations after body-contouring procedures when a significant amount of adipose tissue has been excised. Four of the studies included in the analysis reported improvements in all the parameters examined. Seven articles showed improvement in some variables and no improvement in others, whereas three studies showed no beneficial variation in any of the considered indicators after body-contouring procedures. Fasting plasma insulin was identified as the only variable for which a meta-analysis of five included studies was possible. The meta-analysis showed a statistically significant reduction in fasting plasma insulin resulting from large-volume liposuction in obese healthy women. Many beneficial metabolic effects resulting from dermolipectomy and liposuction procedures are described in the literature. In particular, fasting plasma insulin and thus insulin sensitivity seem to be positively influenced. Further research, including prospective clinical studies, is necessary for better exploration of the effects that body-contouring plastic surgery procedures have on metabolic parameters.
Maciejewski, Conrad C; Haines, Trevor; Rourke, Keith F
2017-05-01
To identify factors that predict patient satisfaction after urethroplasty by prospectively examining patient-reported quality of life scores using 3 validated instruments. A 3-part prospective survey consisting of the International Prostate Symptom Score (IPSS), the International Index of Erectile Function (IIEF) score, and a urethroplasty quality of life survey was completed by patients who underwent urethroplasty preoperatively and at 6 months postoperatively. The quality of life score included questions on genitourinary pain, urinary tract infection (UTI), postvoid dribbling, chordee, shortening, overall satisfaction, and overall health. Data were analyzed using descriptive statistics, paired t test, univariate and multivariate logistic regression analyses, and Wilcoxon signed-rank analysis. Patients were enrolled in the study from February 2011 to December 2014, and a total of 94 patients who underwent a total of 102 urethroplasties completed the study. Patients reported statistically significant improvements in IPSS (P < .001). Ordinal linear regression analysis revealed no association between age, IPSS, or IIEF score and patient satisfaction. Wilcoxon signed-rank analysis revealed significant improvements in pain scores (P = .02), UTI (P < .001), perceived overall health (P = .01), and satisfaction (P < .001). Univariate logistic regression identified a length >4 cm and the absence of UTI, pain, shortening, and chordee as predictors of patient satisfaction. Multivariate analysis of quality of life domain scores identified absence of shortening and absence of chordee as independent predictors of patient satisfaction following urethroplasty (P < .01). Patient voiding function and quality of life improve significantly following urethroplasty, but improvement in voiding function is not associated with patient satisfaction. Chordee status and perceived penile shortening impact patient satisfaction, and should be included in patient-reported outcome measures. Copyright © 2017 Elsevier Inc. All rights reserved.
Hilgers, Ralf-Dieter; Bogdan, Malgorzata; Burman, Carl-Fredrik; Dette, Holger; Karlsson, Mats; König, Franz; Male, Christoph; Mentré, France; Molenberghs, Geert; Senn, Stephen
2018-05-11
IDeAl (Integrated designs and analysis of small population clinical trials) is an EU-funded project developing new statistical design and analysis methodologies for clinical trials in small population groups. Here we provide an overview of IDeAl findings and give recommendations to applied researchers. The description of the findings is broken down by the nine scientific IDeAl work packages and summarizes results from the project's more than 60 publications to date in peer-reviewed journals. In addition, we applied text mining to evaluate the publications and the IDeAl work packages' output in relation to the design and analysis terms derived from the IRDiRC task force report on small population clinical trials. The results are summarized, describing the developments from an applied viewpoint. The main result presented here is 33 practical recommendations drawn from the work, giving researchers comprehensive guidance to the improved methodology. In particular, the findings will help design and analyse efficient clinical trials in rare diseases with a limited number of patients available. We developed a network representation relating the hot topics developed by the IRDiRC task force on small population clinical trials to IDeAl's work, as well as relating the important methodologies that, by IDeAl's definition, are necessary to consider in the design and analysis of small-population clinical trials. These network representations establish a new perspective on the design and analysis of small-population clinical trials. IDeAl has provided a huge number of options to refine the statistical methodology for small-population clinical trials from various perspectives. A total of 33 recommendations developed and related to the work packages help the researcher design small-population clinical trials. The route to improvements is displayed in the IDeAl network, which represents the important statistical methodological skills necessary for the design and analysis of small-population clinical trials. The methods are ready for use.
Using Statistical Analysis Software to Advance Nitro Plasticizer Wettability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shear, Trevor Allan
Statistical analysis in science is an extremely powerful tool that is often underutilized. Additionally, it is frequently the case that data is misinterpreted or not used to its fullest extent. Utilizing the advanced software JMP®, many aspects of experimental design and data analysis can be evaluated and improved. This overview will detail the features of JMP® and how they were used to advance a project, resulting in time and cost savings, as well as the collection of scientifically sound data. The project analyzed in this report addresses the inability of a nitro plasticizer to coat a gold coated quartz crystal sensor used in a quartz crystal microbalance. Through the use of the JMP® software, the wettability of the nitro plasticizer was increased by over 200% using an atmospheric plasma pen, ensuring good sample preparation and reliable results.
Carney, Timothy Jay; Morgan, Geoffrey P.; Jones, Josette; McDaniel, Anna M.; Weaver, Michael; Weiner, Bryan; Haggstrom, David A.
2014-01-01
Our conceptual model demonstrates our goal to investigate the impact of clinical decision support (CDS) utilization on cancer screening improvement strategies in the community health care (CHC) setting. We employed a dual modeling technique using both statistical and computational modeling to evaluate impact. Our statistical model used the Spearman's Rho test to evaluate the strength of the relationship between our proximal outcome measures (CDS utilization) and our distal outcome measure (provider self-reported cancer screening improvement). Our computational model relied on network evolution theory and made use of a tool called Construct-TM to model the use of CDS as measured by the rate of organizational learning. We employed the use of previously collected survey data from the community health centers Cancer Health Disparities Collaborative (HDCC). Our intent is to demonstrate the added value gained by using a computational modeling tool in conjunction with a statistical analysis when evaluating the impact of a health information technology, in the form of CDS, on health care quality process outcomes such as facility-level screening improvement. Significant simulated disparities in organizational learning over time were observed between community health centers beginning the simulation with high and low clinical decision support capability. PMID:24953241
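A minimal sketch of the Spearman rank-correlation step is shown below. The facility-level CDS-utilization and screening-improvement scores are hypothetical placeholders for the HDCC survey variables, not the actual study data.

```python
from scipy.stats import spearmanr

# Hypothetical facility-level scores: CDS utilization vs self-reported screening improvement
cds_utilization = [2, 5, 3, 4, 1, 5, 2, 4, 3, 5]
screening_improvement = [1, 4, 3, 4, 2, 5, 2, 3, 3, 4]

# Spearman's rho uses ranks, so it tolerates ordinal survey scales and monotone nonlinearity
rho, p = spearmanr(cds_utilization, screening_improvement)
print(f"Spearman's rho = {rho:.2f}, p = {p:.3f}")
```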
NASA Astrophysics Data System (ADS)
Vinh, T.
1980-08-01
There is a need for better and more effective lightning protection for transmission and switching substations. In the past, a number of empirical methods were utilized to design systems to protect substations and transmission lines from direct lightning strokes. The need exists for convenient analytical lightning models adequate for engineering usage. In this study, analytical lightning models were developed along with a method for improved analysis of the physical properties of lightning through their use. This method of analysis is based upon the most recent statistical field data. The result is an improved method for predicting the occurrence of shielding failure and for designing more effective protection for high and extra high voltage substations from direct strokes.
2017-01-01
Purpose This study is aimed at identifying the relationships between medical school students’ academic burnout, empathy, and calling, and determining whether their calling has a mediating effect on the relationship between academic burnout and empathy. Methods A mixed method study was conducted. One hundred twenty-seven medical students completed a survey. Scales measuring academic burnout, medical students’ empathy, and calling were utilized. For statistical analysis, correlation analysis, descriptive statistics analysis, and hierarchical multiple regression analyses were conducted. For qualitative approach, eight medical students participated in a focus group interview. Results The study found that empathy has a statistically significant, negative correlation with academic burnout, while having a significant, positive correlation with calling. Sense of calling proved to be an effective mediator of the relationship between academic burnout and empathy. Conclusion This result demonstrates that calling is a key variable that mediates the relationship between medical students’ academic burnout and empathy. As such, this study provides baseline data for an education that could improve medical students’ empathy skills. PMID:28870019
Phospholipid Fatty Acid Analysis: Past, Present and Future
NASA Astrophysics Data System (ADS)
Findlay, R. H.
2008-12-01
With their 1980 publication, Bobbie and White initiated the use of phospholipid fatty acids for the study of microbial communities. This method, integrated with a previously published biomass assay based on the colorimetric detection of orthophosphate liberated from phospholipids, provided the first quantitative method for determining microbial community structure. The method is based on a quantitative extraction of lipids from the sample matrix, isolation of the phospholipids, conversion of the phospholipid fatty acids to their corresponding fatty acid methyl esters (known by the acronym FAME) and the separation, identification and quantification of the FAME by gas chromatography. Early laboratory and field studies focused on correlating individual fatty acids to particular groups of microorganisms. Subsequent improvements to the methodology include reduced solvent volumes for extractions, improved sensitivity in the detection of orthophosphate and the use of solid phase extraction technology. Improvements in the field of gas chromatography also increased accessibility of the technique and it has been widely applied to water, sediment, soil and aerosol samples. Whole cell fatty acid analysis, a related but not identical technique, is currently used for phenotypic characterization in bacterial species descriptions and is the basis for a commercial, rapid bacterial identification system. In the early 1990s, application of multivariate statistical analysis, first cluster analysis and then principal component analysis, further improved the usefulness of the technique and allowed the development of a functional group approach to the interpretation of phospholipid fatty acid profiles. Statistical techniques currently applied to the analysis of phospholipid fatty acid profiles include constrained ordinations and neural networks. Using redundancy analysis, a form of constrained ordination, we have recently shown that both cation concentration and dissolved organic matter (DOM) quality are determinants of microbial community structure in forested headwater streams. One of the most exciting recent developments in phospholipid fatty acid analysis is the application of compound specific stable isotope analysis. We are currently applying this technique to stream sediments to help determine which microorganisms are involved in the initial processing of DOM, and the technique promises to be a useful tool for assigning ecological function to microbial populations.
NASA Astrophysics Data System (ADS)
Iwaki, Y.
2010-07-01
The quality assurance (QA) of a measurand has been discussed for many years within Quality Engineering (QE), and further discussion of the ISO standard is needed. Improving measurement accuracy means finding the root fault elements and removing them. Accuracy assurance requires investigating the Reference Material (RM) used for calibration and improving the accuracy of data processing; this research addresses accuracy improvement in the field of data processing. Among the fault elements relevant to measurement accuracy, two or more elements are often buried in the data. QE assumes the frequency with which fault states are generated and resolves the fault factors from the highest rank downward, first by Failure Mode and Effects Analysis (FMEA); it then investigates the root causes of the fault elements by Root Cause Analysis (RCA) and Fault Tree Analysis (FTA), and calculates the order of the elements generating each assumed fault. Nowadays, the accuracy assurance of measurement results has become a duty in Proficiency Testing (PT). The ISO standard was established by ISO-GUM (Guide to the expression of Uncertainty in Measurement) as guidance for accuracy assurance in 1993 [1] for QA. The analysis method of ISO-GUM has shifted from Analysis of Variance (ANOVA) to Exploratory Data Analysis (EDA). EDA calculates step by step, according to the law of propagation of uncertainty, until an assured performance is obtained. If the true value is unknown, ISO-GUM substitutes a reference value; the reference value is set up by EDA and checked with a Key Comparison (KC) method, which compares a null hypothesis against a frequency hypothesis. Assurance by ISO-GUM proceeds in the order of the standard uncertainty, the combined uncertainty of the many fault elements, and the expanded uncertainty used for assurance. An assurance value is authorized by multiplying the final expanded uncertainty [2] by the coverage factor k. The k-value is calculated from the effective degrees of freedom, for which the number of samples is important; the degrees of freedom are based on the maximum likelihood method of an improved information criterion (AIC) for Quality Control (QC). The assurance performance of ISO-GUM is obtained and decided by setting up the confidence interval [3]. The research result for the Decision Level/Minimum Detectable Concentration (DL/MDC) could be obtained by this operation. QE was developed for the QC of industry; however, such analyses have been processed by regression analysis, treating the frequency probability of a statistic as a normal distribution. The occurrence probability of the statistics of fault elements accompanying natural phenomena often follows a non-normal distribution, which requires obtaining an assurance value by methods other than the Type B statistical evaluation in ISO-GUM. Combining this with the improvement of workers through QE has become important for preserving the reliability of measurement accuracy and safety. This research applied these results to Blood Chemical Analysis (BCA) in the field of clinical testing.
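A minimal sketch of the GUM-style workflow described above is given below: independent standard uncertainty components are combined via the law of propagation of uncertainty and then multiplied by a coverage factor k to obtain the expanded uncertainty. The component values are hypothetical, and k is simply fixed at 2 here rather than derived from the effective degrees of freedom as the text discusses.

```python
from math import sqrt

# Hypothetical standard uncertainties of independent components
# (e.g. reference material, repeatability, data processing)
u_components = [0.12, 0.05, 0.08]

u_combined = sqrt(sum(u**2 for u in u_components))   # root-sum-of-squares for uncorrelated inputs
k = 2                                                 # coverage factor, ~95 % coverage (assumed fixed)
U_expanded = k * u_combined

print(f"combined standard uncertainty = {u_combined:.3f}")
print(f"expanded uncertainty (k={k})    = {U_expanded:.3f}")
```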
Mihic, Marko M; Todorovic, Marija Lj; Obradovic, Vladimir Lj; Mitrovic, Zorica M
2016-01-01
Background Social services aimed at the elderly are facing great challenges caused not only by the progressive aging of the global population but also by the constant pressure to spend funds in a rational manner. Purpose This paper focuses on analyzing the investments into human resources aimed at enhancing home care for the elderly, since many countries have recorded progress in the area over the past years. The goal of this paper is to stress the significance of performing an economic analysis of the investment. Methods This paper combines statistical analysis methods such as correlation and regression analysis, methods of economic analysis, and the scenario method. Results The economic analysis of investing in human resources for home care service in Serbia showed that both scenarios of investing, in either additional home care hours or more beneficiaries, are cost-efficient. However, the optimal solution with the positive (and the highest) value of the economic net present value criterion is to invest in human resources to boost the number of home care hours from 6 to 8 hours per week and increase the number of the beneficiaries to 33%. Conclusion This paper shows how the statistical and economic analysis results can be used to evaluate different scenarios and enable quality decision-making based on exact data in order to improve health and quality of life of the elderly and spend funds in a rational manner. PMID:26869778
Reinventing Biostatistics Education for Basic Scientists
Weissgerber, Tracey L.; Garovic, Vesna D.; Milin-Lazovic, Jelena S.; Winham, Stacey J.; Obradovic, Zoran; Trzeciakowski, Jerome P.; Milic, Natasa M.
2016-01-01
Numerous studies demonstrating that statistical errors are common in basic science publications have led to calls to improve statistical training for basic scientists. In this article, we sought to evaluate statistical requirements for PhD training and to identify opportunities for improving biostatistics education in the basic sciences. We provide recommendations for improving statistics training for basic biomedical scientists, including: 1. Encouraging departments to require statistics training, 2. Tailoring coursework to the students’ fields of research, and 3. Developing tools and strategies to promote education and dissemination of statistical knowledge. We also provide a list of statistical considerations that should be addressed in statistics education for basic scientists. PMID:27058055
Fisher, Aaron; Anderson, G Brooke; Peng, Roger; Leek, Jeff
2014-01-01
Scatterplots are the most common way for statisticians, scientists, and the public to visually detect relationships between measured variables. At the same time, and despite widely publicized controversy, P-values remain the most commonly used measure to statistically justify relationships identified between variables. Here we measure the ability to detect statistically significant relationships from scatterplots in a randomized trial of 2,039 students in a statistics massive open online course (MOOC). Each subject was shown a random set of scatterplots and asked to visually determine if the underlying relationships were statistically significant at the P < 0.05 level. Subjects correctly classified only 47.4% (95% CI [45.1%-49.7%]) of statistically significant relationships, and 74.6% (95% CI [72.5%-76.6%]) of non-significant relationships. Adding visual aids such as a best fit line or scatterplot smooth increased the probability a relationship was called significant, regardless of whether the relationship was actually significant. Classification of statistically significant relationships improved on repeat attempts of the survey, although classification of non-significant relationships did not. Our results suggest: (1) that evidence-based data analysis can be used to identify weaknesses in theoretical procedures in the hands of average users, (2) data analysts can be trained to improve detection of statistically significant results with practice, but (3) data analysts have incorrect intuition about what statistically significant relationships look like, particularly for small effects. We have built a web tool for people to compare scatterplots with their corresponding p-values which is available here: http://glimmer.rstudio.com/afisher/EDA/.
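The task given to the MOOC subjects can be reproduced in a few lines: simulate a scatterplot with a chosen underlying correlation, compute the corresponding p-value, and compare the visual judgment against it. The sample size and correlation below are illustrative choices, not the parameters used in the trial.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n, true_r = 50, 0.3                       # sample size and underlying correlation

x = rng.normal(size=n)
y = true_r * x + np.sqrt(1 - true_r**2) * rng.normal(size=n)

r, p = pearsonr(x, y)
print(f"observed r = {r:.2f}, p = {p:.3f}")
# Show the (x, y) scatterplot to a viewer, ask whether the relationship is significant
# at p < 0.05, and then compare the visual call against the computed p-value.
```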
Design and analysis of multiple diseases genome-wide association studies without controls.
Chen, Zhongxue; Huang, Hanwen; Ng, Hon Keung Tony
2012-11-15
In genome-wide association studies (GWAS), the use of multiple diseases with shared controls is one of the case-control study designs. If data obtained from these studies are appropriately analyzed, this design can have several advantages, such as improving statistical power in detecting associations and reducing the time and cost of the data collection process. In this paper, we propose a study design for GWAS which involves multiple diseases but no controls. We also propose a corresponding statistical data analysis strategy for GWAS with multiple diseases but no controls. Through a simulation study, we show that the statistical association test with the proposed study design is more powerful than the test with a single disease sharing common controls, and it has comparable power to the overall test based on the whole dataset including the controls. We also apply the proposed method to a real GWAS dataset to illustrate the methodologies and the advantages of the proposed design. Some possible limitations of this study design and testing method, and their solutions, are also discussed. Our findings indicate that the proposed study design and statistical analysis strategy could be more efficient than the usual case-control GWAS as well as those with shared controls. Copyright © 2012 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Croft, S.; Favalli, Andrea; Weaver, Brian Phillip
2015-10-06
In this paper we develop and investigate several criteria for assessing how well a proposed spectral form fits observed spectra. We consider the classical improved figure of merit (FOM) along with several modifications, as well as criteria motivated by Poisson regression from the statistical literature. We also develop a new FOM that is based on the statistical idea of the bootstrap. A spectral simulator has been developed to assess the performance of these different criteria under multiple data configurations.
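A generic sketch of the bootstrap idea behind a resampling-based figure of merit is shown below: channel counts are resampled from the fitted model to build a reference distribution for the fit statistic. The spectrum, the flat model, and the chi-square-like statistic are placeholders for illustration and are not the criteria actually developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed spectrum (counts per channel) and a fitted model prediction
observed = rng.poisson(lam=50, size=128).astype(float)
model = np.full(128, 50.0)

def fom(counts, model):
    """Chi-square-like figure of merit for Poisson-distributed channel counts."""
    return np.sum((counts - model) ** 2 / np.maximum(model, 1.0))

# Parametric resampling: regenerate spectra from the fitted model to get the FOM's null distribution
boot = np.array([fom(rng.poisson(lam=model), model) for _ in range(2000)])

observed_fom = fom(observed, model)
p_boot = np.mean(boot >= observed_fom)       # fraction of resampled FOMs at least as extreme
print(f"FOM = {observed_fom:.1f}, resampling p = {p_boot:.3f}")
```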
Research Using Government Data Sets: An Underutilised Resource
ERIC Educational Resources Information Center
Knipe, Sally
2011-01-01
The use of existing data for education research activities can be a valuable resource. Improvement in statistical analysis and data management and retrieval techniques, as well as access to government data bases, has expanded opportunities for researchers seeking to investigate issues that are institutional in nature, such as participation…
A Fishy Problem for Advanced Students
ERIC Educational Resources Information Center
Patterson, Richard A.
1977-01-01
While developing a research course for gifted high school students, improvements were made in a local pond. Students worked for a semester learning research techniques, statistical analysis, and limnology. At the end of the course, the three students produced a joint scientific paper detailing their study of the pond. (MA)
Using Symbolic-Logic Matrices To Improve Confirmatory Factor Analysis Techniques.
ERIC Educational Resources Information Center
Creighton, Theodore B.; Coleman, Donald G.; Adams, R. C.
A continuing and vexing problem associated with survey instrument development is the creation of items, initially, that correlate favorably a posteriori with constructs being measured. This study tests the use of symbolic-logic matrices developed by D. G. Coleman (1979) in creating factorially "pure" statistically discrete constructs in…
Effectiveness of Simulation in a Hybrid and Online Networking Course.
ERIC Educational Resources Information Center
Cameron, Brian H.
2003-01-01
Reports on a study that compares the performance of students enrolled in two sections of a Web-based computer networking course: one utilizing a simulation package and the second utilizing a static, graphical software package. Analysis shows statistically significant improvements in performance in the simulation group compared to the…
Spatial variability effects on precision and power of forage yield estimation
USDA-ARS?s Scientific Manuscript database
Spatial analyses of yield trials are important, as they adjust cultivar means for spatial variation and improve the statistical precision of yield estimation. While the relative efficiency of spatial analysis has been frequently reported in several yield trials, its application on long-term forage y...
75 FR 9257 - SBA Lender Risk Rating System
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-01
... Liquidation Rate; 3. Gross Delinquency Rate; 4. Gross Past-Due Rate; 5. Six (6) Month Net Flow Indicator; 6.... The statistical analysis performed showed that incorporating the Portfolio Size/Age component improved...) Month Delinquency Rate; 3. Gross Delinquency Rate; 4. Gross Past-Due Rate; 5. Average Small Business...
Systems Analysis of Alternative Architectures for Riverine Warfare in 2010
2006-12-01
propose system of systems improvements for the RF in 2010. With the RF currently working to establish a command structure, train and equip its forces...opposing force. Measures of performance such as time to first enemy detection and loss exchange ratio were collected from MANA. A detailed statistical
How to Engage Medical Students in Chronobiology: An Example on Autorhythmometry
ERIC Educational Resources Information Center
Rol de Lama, M. A.; Lozano, J. P.; Ortiz, V.; Sanchez-Vazquez, F. J.; Madrid, J. A.
2005-01-01
This contribution describes a new laboratory experience that improves medical students' learning of chronobiology by introducing them to basic chronobiology concepts as well as to methods and statistical analysis tools specific for circadian rhythms. We designed an autorhythmometry laboratory session where students simultaneously played the role…
Impact of Accreditation on Public and Private Universities: A Comparative Study
ERIC Educational Resources Information Center
Dattey, Kwame; Westerheijden, Don F.; Hofman, Wiecher H. Adriaan
2014-01-01
Based on two cycles of assessments for accreditation, this study assesses the differential impacts of accreditation on public and private universities in Ghana. Analysis of the evaluator reports indicates no statistically significant difference--improvement or deterioration--between the two cycles of evaluations for both types of institutions. A…
Improving Student Retention and Performance in Quantitative Courses Using Clickers
ERIC Educational Resources Information Center
Liu, Wallace C.; Stengel, Donald N.
2011-01-01
Clickers offer instructors of mathematics-related courses an opportunity to involve students actively in class sessions while diminishing the embarrassment of being wrong. This paper reports on the use of clickers in two university-level courses in quantitative analysis and business statistics. Results for student retention and examination…
Using operations research to plan improvement of the transport of critically ill patients.
Chen, Jing; Awasthi, Anjali; Shechter, Steven; Atkins, Derek; Lemke, Linda; Fisher, Les; Dodek, Peter
2013-01-01
Operations research is the application of mathematical modeling, statistical analysis, and mathematical optimization to understand and improve processes in organizations. The objective of this study was to illustrate how the methods of operations research can be used to identify opportunities to reduce the absolute value and variability of interfacility transport intervals for critically ill patients. After linking data from two patient transport organizations in British Columbia, Canada, for all critical care transports during the calendar year 2006, the steps for transfer of critically ill patients were tabulated into a series of time intervals. Statistical modeling, root-cause analysis, Monte Carlo simulation, and sensitivity analysis were used to test the effect of changes in component intervals on overall duration and variation of transport times. Based on quality improvement principles, we focused on reducing the 75th percentile and standard deviation of these intervals. We analyzed a total of 3808 ground and air transports. Constraining time spent by transport personnel at sending and receiving hospitals was projected to reduce the total time taken by 33 minutes with as much as a 20% reduction in standard deviation of these transport intervals in 75% of ground transfers. Enforcing a policy of requiring acceptance of patients who have life- or limb-threatening conditions or organ failure was projected to reduce the standard deviation of air transport time by 63 minutes and the standard deviation of ground transport time by 68 minutes. Based on findings from our analyses, we developed recommendations for technology renovation, personnel training, system improvement, and policy enforcement. Use of the tools of operations research identifies opportunities for improvement in a complex system of critical care transport.
Fan, Tao; Zhao, XinGang; Zhao, HaiJun; Liang, Cong; Wang, YinQian; Gai, QiFei; Zhang, Fangyi
2015-10-01
It is well established that syringomyelia can cause neurological symptoms and deficits through the accumulation of fluid within syrinx cavities, which leads to internal compression within the spinal cord. When other interventions treating the underlying etiology fail to yield any improvement, the next option is a procedure to divert the fluid from the syrinx cavity, such as syringo-subarachnoid, syringo-peritoneal or syringo-pleural shunting. The indications and long-term efficacy of these direct shunting procedures are still questionable and controversial. To investigate the clinical indications, outcomes and complications of the syringo-pleural shunt (SPS) as an alternative treatment for syringomyelia, we report a retrospective series of 26 cases of syringomyelia found to have an indication for a diversion procedure, in which SPS was offered. Patients' symptoms, mJOA scores, and MRI were collected to evaluate the change in the syringomyelia and the prognosis of the patients. A 2-tailed Wilcoxon signed-rank test was used to perform the statistical analysis of the mJOA scores. All 26 patients underwent SPS; clinical information was collected, with a mean follow-up time of 27.4 months. The key surgical technique, outcomes and complications of SPS are reported in detail. No mortality or severe complications occurred. Postoperative MRIs revealed near-complete resolution of the syrinx in 14 patients, significant shrinkage of the syrinx in 10 patients, and no obvious reduction or an unchanged syrinx in the remaining 2 patients. Postoperatively, the symptoms improved in 24 cases (92.3%). Statistical analysis of the mJOA scores showed statistical significance (P<0.001) between the preoperative group and the 2-week postoperative group, with no further significant improvement between 2 weeks and the final follow-up at 27 months. Collapse or remarkable shrinkage of the syrinx by SPS could ameliorate or at least stabilize the symptoms for the patient. We recommend a small laminectomy and a myelotomy of less than 3 mm either at the PML or the DREZ. The SPS procedure can be an effective and relatively long-lived treatment for idiopathic syringomyelia and for patients who have failed other options. Copyright © 2015 Elsevier B.V. All rights reserved.
Nariai, N; Kim, S; Imoto, S; Miyano, S
2004-01-01
We propose a statistical method to estimate gene networks from DNA microarray data and protein-protein interactions. Because physical interactions between proteins or multiprotein complexes are likely to regulate biological processes, using only mRNA expression data is not sufficient for estimating a gene network accurately. Our method adds knowledge about protein-protein interactions to the estimation method of gene networks under a Bayesian statistical framework. In the estimated gene network, a protein complex is modeled as a virtual node based on principal component analysis. We show the effectiveness of the proposed method through the analysis of Saccharomyces cerevisiae cell cycle data. The proposed method improves the accuracy of the estimated gene networks, and successfully identifies some biological facts.
[Analysis on theses of the Chinese Journal of Parasitology and Parasitic Diseases in 2009-2012].
Yi, Feng-Yun; Qu, Lin-Ping; Yan, He; Sheng, Hui-Feng
2013-12-01
The published articles in the Chinese Journal of Parasitology and Parasitic Diseases in 2009-2012 were statistically analyzed. Among the 547 papers published in the four years, original articles accounted for 45.3% (248/547). The number of authors was 2712, with an average cooperation degree of 5.0, and co-authored papers accounted for 95.4% of the total. Authors were mainly from colleges/universities (51.9%, 284/547), institutions for disease control (34.4%, 188/547) and hospitals/health centers (13.7%, 75/547). The average publishing delay was 212, 141, 191 and 207 days in 2009, 2010, 2011 and 2012, respectively. The statistical analysis reflected the characteristics and academic level of the journal, revealed the latest developments and trends, and provides a basis for improving the journal's quality.
Treated cabin acoustic prediction using statistical energy analysis
NASA Technical Reports Server (NTRS)
Yoerkie, Charles A.; Ingraham, Steven T.; Moore, James A.
1987-01-01
The application of statistical energy analysis (SEA) to the modeling and design of helicopter cabin interior noise control treatment is demonstrated. The information presented here is obtained from work sponsored at NASA Langley for the development of analytic modeling techniques and the basic understanding of cabin noise. Utility and executive interior models are developed directly from existing S-76 aircraft designs. The relative importance of panel transmission loss (TL), acoustic leakage, and absorption to the control of cabin noise is shown using the SEA modeling parameters. It is shown that the major cabin noise improvement below 1000 Hz comes from increased panel TL, while above 1000 Hz it comes from reduced acoustic leakage and increased absorption in the cabin and overhead cavities.
Sornborger, Andrew; Broder, Josef; Majumder, Anirban; Srinivasamoorthy, Ganesh; Porter, Erika; Reagin, Sean S; Keith, Charles; Lauderdale, James D
2008-09-01
Ratiometric fluorescent indicators are used for making quantitative measurements of a variety of physiological variables. Their utility is often limited by noise. This is the second in a series of papers describing statistical methods for denoising ratiometric data with the aim of obtaining improved quantitative estimates of variables of interest. Here, we outline a statistical optimization method that is designed for the analysis of ratiometric imaging data in which multiple measurements have been taken of systems responding to the same stimulation protocol. This method takes advantage of correlated information across multiple datasets for objectively detecting and estimating ratiometric signals. We demonstrate our method by showing results of its application on multiple, ratiometric calcium imaging experiments.
Analysis of Machine Learning Techniques for Heart Failure Readmissions.
Mortazavi, Bobak J; Downing, Nicholas S; Bucholz, Emily M; Dharmarajan, Kumar; Manhapra, Ajay; Li, Shu-Xia; Negahban, Sahand N; Krumholz, Harlan M
2016-11-01
The current ability to predict readmissions in patients with heart failure is modest at best. It is unclear whether machine learning techniques that address higher dimensional, nonlinear relationships among variables would enhance prediction. We sought to compare the effectiveness of several machine learning algorithms for predicting readmissions. Using data from the Telemonitoring to Improve Heart Failure Outcomes trial, we compared the effectiveness of random forests, boosting, random forests combined hierarchically with support vector machines or logistic regression (LR), and Poisson regression against traditional LR to predict 30- and 180-day all-cause readmissions and readmissions because of heart failure. We randomly selected 50% of patients for a derivation set, and a validation set comprised the remaining patients, validated using 100 bootstrapped iterations. We compared C statistics for discrimination and distributions of observed outcomes in risk deciles for predictive range. In 30-day all-cause readmission prediction, the best performing machine learning model, random forests, provided a 17.8% improvement over LR (mean C statistics, 0.628 and 0.533, respectively). For readmissions because of heart failure, boosting improved the C statistic by 24.9% over LR (mean C statistic 0.678 and 0.543, respectively). For 30-day all-cause readmission, the observed readmission rates in the lowest and highest deciles of predicted risk with random forests (7.8% and 26.2%, respectively) showed a much wider separation than LR (14.2% and 16.4%, respectively). Machine learning methods improved the prediction of readmission after hospitalization for heart failure compared with LR and provided the greatest predictive range in observed readmission rates. © 2016 American Heart Association, Inc.
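A minimal scikit-learn sketch of the comparison described above — fitting logistic regression and a random forest, then reporting the C statistic (area under the ROC curve) and the observed event rates in the extreme risk deciles — is shown below. It runs on synthetic data; the trial's variables, preprocessing and bootstrap validation are not reproduced.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the readmission data: binary outcome, mixed signal.
X, y = make_classification(n_samples=2000, n_features=30, n_informative=8,
                           weights=[0.75, 0.25], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=500, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    p = model.predict_proba(X_te)[:, 1]
    # The C statistic for a binary outcome equals the area under the ROC curve.
    print(f"{name}: C statistic = {roc_auc_score(y_te, p):.3f}")
    # Predictive range: observed event rates in the lowest and highest risk deciles.
    deciles = np.digitize(p, np.quantile(p, np.arange(0.1, 1.0, 0.1)))
    print(f"  lowest decile rate = {y_te[deciles == 0].mean():.3f}, "
          f"highest decile rate = {y_te[deciles == 9].mean():.3f}")
```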
Improving the performance of a filling line based on simulation
NASA Astrophysics Data System (ADS)
Jasiulewicz-Kaczmarek, M.; Bartkowiak, T.
2016-08-01
The paper describes a method of improving the performance of a filling line based on simulation. This study concerns a production line located in a manufacturing centre of an FMCG company. A discrete event simulation model was built using data provided by the maintenance data acquisition system. Two types of failures were identified in the system and were approximated using continuous statistical distributions. The model was validated taking into consideration line performance measures. A brief Pareto analysis of line failures was conducted to identify potential areas of improvement. Two improvement scenarios were proposed and tested via simulation. The outcomes of the simulations were the basis of a financial analysis: NPV and ROI values were calculated taking into account depreciation, profits, losses, the current CIT rate and inflation. A validated simulation model can be a useful tool in the maintenance decision-making process.
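The financial step can be sketched as follows; the investment, savings, depreciation schedule, tax and discount figures are purely illustrative assumptions, not values from the study.

```python
# Illustrative NPV/ROI calculation for an improvement scenario; all figures
# (investment, yearly savings, tax and discount rates) are hypothetical.
investment = 120_000.0          # up-front cost of the improvement
yearly_savings = 45_000.0       # gross savings from higher line performance
depreciation = investment / 5   # straight-line depreciation over 5 years
cit_rate = 0.19                 # corporate income tax rate (assumed)
discount = 0.08                 # nominal discount rate incl. inflation (assumed)

total_cash = 0.0
npv = -investment
for year in range(1, 6):
    taxable = yearly_savings - depreciation
    cash_flow = yearly_savings - max(taxable, 0.0) * cit_rate  # after-tax cash flow
    total_cash += cash_flow
    npv += cash_flow / (1 + discount) ** year

roi = (total_cash - investment) / investment
print(f"NPV over 5 years: {npv:,.0f}; simple ROI: {roi:.1%}")
```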
Improving suicide mortality statistics in Tarragona (Catalonia, Spain) between 2004-2012.
Barbería, Eneko; Gispert, Rosa; Gallo, Belén; Ribas, Gloria; Puigdefàbregas, Anna; Freitas, Adriana; Segú, Elena; Torralba, Pilar; García-Sayago, Francisco; Estarellas, Aina
2016-07-20
Monitoring and preventing suicidal behaviour requires, among other data, knowing suicide deaths precisely. They often appear under-reported or misclassified in the official mortality statistics. The aim of this study is to analyse the under-reporting found in the suicide mortality statistics of Tarragona (a province of Catalonia, Spain). The analysis takes into account all suicide deaths that occurred in the Tarragona Area of the Catalan Institute of Legal Medicine and Forensic Sciences (TA-CILMFS) between 2004 and 2012. The sources of information were the death data files of the Catalan Mortality Register, as well as the Autopsies Files of the TA-CILMFS. Suicide rates and socio-demographic profiles were statistically compared between the suicide initially reported and the final one. The mean percentage of non-reported cases in the period was 16.2%, with a minimum percentage of 2.2% in 2005 and a maximum of 26.8% in 2009. The crude mortality rate by suicide rose from 6.6 to 7.9 per 100,000 inhabitants once forensic data were incorporated. Small differences were detected between the socio-demographic profile of the suicide initially reported and the final one. Supplementary information was obtained on the suicide method, which revealed a significant increase in poisoning and suicides involving trains. An exhaustive review of suicide deaths data from forensic sources has led to an improvement in the under-reported statistical information. It also improves the knowledge of the method of suicide and personal characteristics. Copyright © 2016 SEP y SEPB. Publicado por Elsevier España, S.L.U. All rights reserved.
Kim, Won Hwa; Singh, Vikas; Chung, Moo K.; Hinrichs, Chris; Pachauri, Deepti; Okonkwo, Ozioma C.; Johnson, Sterling C.
2014-01-01
Statistical analysis on arbitrary surface meshes such as the cortical surface is an important approach to understanding brain diseases such as Alzheimer's disease (AD). Surface analysis may be able to identify specific cortical patterns that relate to certain disease characteristics or exhibit differences between groups. Our goal in this paper is to make group analysis of signals on surfaces more sensitive. To do this, we derive multi-scale shape descriptors that characterize the signal around each mesh vertex, i.e., its local context, at varying levels of resolution. In order to define such a shape descriptor, we make use of recent results from harmonic analysis that extend traditional continuous wavelet theory from the Euclidean to a non-Euclidean setting (i.e., a graph, mesh or network). Using this descriptor, we conduct experiments on two different datasets, the Alzheimer's Disease NeuroImaging Initiative (ADNI) data and images acquired at the Wisconsin Alzheimer's Disease Research Center (W-ADRC), focusing on individuals labeled as having Alzheimer's disease (AD), mild cognitive impairment (MCI) and healthy controls. In particular, we contrast traditional univariate methods with our multi-resolution approach, which shows increased sensitivity and improved statistical power to detect group-level effects. We also provide an open source implementation. PMID:24614060
Hilarion, Pilar; Groene, Oliver; Colom, Joan; Lopez, Rosa M; Suñol, Rosa
2010-10-23
The Health Department of the Regional Government of Catalonia, Spain, issued a quality plan for substance abuse centers. The objective of this paper is to evaluate the impact of a multidimensional quality improvement initiative in the field of substance abuse care and to discuss potentials and limitations for further quality improvement. The study uses an uncontrolled, sector-wide pre-post design. All centers providing services for persons with substance abuse issues in the Autonomous Community of Catalonia participated in this assessment. Measures of compliance were developed based on indicators reported in the literature and by broad stakeholder involvement. We compared pre-post differences in dimension-specific and overall compliance scores using one-way ANOVA for repeated measures and the Friedman statistic. We described the spread of the data using the inter-quartile range and the Fligner-Killeen statistic. Finally, we adjusted compliance scores for location and size using linear and logistic regression models. We performed a baseline and follow-up assessment in 22 centers for substance abuse care and observed substantial and statistically significant improvements in overall compliance (pre: 60.9%; post: 79.1%) and in compliance in the dimensions 'care pathway' (pre: 66.5%; post: 83.5%) and 'organization and management' (pre: 50.5%; post: 77.2%). We observed improvements in the dimension 'environment and infrastructure' (pre: 81.8%; post: 95.5%) and in the dimension 'relations and user rights' (pre: 66.5%; post: 72.5%); however, these were not statistically significant. The regression analysis suggests that improvements in compliance are positively influenced by being located in the Barcelona region in the case of the dimension 'relations and user rights'. The positive results of this quality improvement initiative are possibly associated with the successful involvement of stakeholders, the consciously constructed feedback reports on individual and sector-wide performance, and the support of evidence-based guidance wherever possible. Further research should address how contextual issues shape the uptake and effectiveness of quality improvement actions and how such quality improvements can be sustained.
San-Martín, Montserrat; Delgado-Bolton, Roberto; Vivanco, Luis
2017-01-01
Background: Empathy in the context of patient care is defined as a predominantly cognitive attribute that involves an understanding of the patient's experiences, concerns, and perspectives, combined with a capacity to communicate this understanding and an intention to help. In medical education, it is recognized that empathy can be improved by interventional approaches. In this sense, a semiotic-based curriculum could be an important didactic tool for improving medical empathy. The main purpose of this study was to determine if in medical schools where a semiotic-based curriculum is offered, the empathetic orientation of medical students improves as a consequence of the acquisition and development of students' communication skills that are required in clinician-patient encounters. Design: This quasi-experimental study was conducted in three medical schools of the Dominican Republic that offer three different medical curricula: (i) a theoretical and practical semiotic-based curriculum; (ii) a theoretical semiotic-based curriculum; and (iii) a curriculum without semiotic courses. The Jefferson scale of empathy was administered in two different moments to students enrolled in pre-clinical cycles of those institutions. Data was subjected to comparative statistical analysis and logistic regression analysis. Results: The study included 165 students (55 male and 110 female). Comparison analysis showed statistically significant differences in the development of empathy among groups ( p < 0.001). Logistic regression confirmed that gender, age, and a semiotic-based curriculum contributed toward the enhancement of empathy. Conclusion: These findings demonstrate the importance of medical semiotics as a didactic teaching method for improving beginners' empathetic orientation in patients' care.
Moradkhani, Shirin; Salehi, Iraj; Abdolmaleki, Somayeh; Komaki, Alireza
2015-01-01
Background: Medicinal plants, owing to their different mechanisms such as antioxidant effects, may improve learning and memory impairments in diabetic rats. Calendula officinalis (CO) has significant antioxidant activity. Aims: To examine the effect of a hydroalcoholic extract of CO on passive avoidance learning (PAL) and memory in streptozotocin (STZ)-induced diabetic male rats. Settings and Design: A total of 32 adult male Wistar rats were randomly allocated to four groups: control, diabetic, control + extract of CO and diabetic control + extract of CO, with free access to a regular rat diet. Subjects and Methods: Diabetes was induced in the diabetic rats by a single intraperitoneal injection of 60 mg/kg STZ. After confirmation of diabetes, oral administration of 300 mg/kg CO extract to the extract-treated groups was performed. PAL was tested 8 weeks after onset of treatment, and blood glucose and body weight were measured in all groups at the beginning and end of the experiment. Statistical Analysis Used: The statistical analysis of the data was performed by ANOVA followed by least significant difference post-hoc analysis. Results: Diabetes decreased learning and memory. In the retention test (after 24 and 48 h), the CO extract was associated with a significant decrease in step-through latency and an increase in time spent in the dark compartment. The extract also partially improved hyperglycemia and reduced body weight. Conclusion: Taken together, CO extract can improve PAL and memory impairments in STZ-diabetic rats. This improvement may be due to its antioxidant or anticholinergic activities or its ability to reduce hyperglycemia. PMID:26120230
Demonstration of Wavelet Techniques in the Spectral Analysis of Bypass Transition Data
NASA Technical Reports Server (NTRS)
Lewalle, Jacques; Ashpis, David E.; Sohn, Ki-Hyeon
1997-01-01
A number of wavelet-based techniques for the analysis of experimental data are developed and illustrated. A multiscale analysis based on the Mexican hat wavelet is demonstrated as a tool for acquiring physical and quantitative information not obtainable by standard signal analysis methods. Experimental data for the analysis came from simultaneous hot-wire velocity traces in a bypass transition of the boundary layer on a heated flat plate. A pair of traces (two components of velocity) at one location was excerpted. A number of ensemble and conditional statistics related to dominant time scales for energy and momentum transport were calculated. The analysis revealed a lack of energy-dominant time scales inside turbulent spots but identified transport-dominant scales inside spots that account for the largest part of the Reynolds stress. Momentum transport was much more intermittent than were energetic fluctuations. This work is the first step in a continuing study of the spatial evolution of these scale-related statistics, the goal being to apply the multiscale analysis results to improve the modeling of transitional and turbulent industrial flows.
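A minimal sketch of a Mexican-hat (Ricker) multiscale decomposition of a one-dimensional trace is given below, using a synthetic signal with an intermittent high-frequency "spot" in place of the hot-wire velocity data. The wavelet is implemented directly with NumPy; the widths and signal parameters are illustrative.

```python
import numpy as np

def ricker(points, width):
    """Mexican hat (Ricker) wavelet sampled on `points` samples at scale `width`."""
    t = np.arange(points) - (points - 1) / 2.0
    a = 2.0 / (np.sqrt(3.0 * width) * np.pi ** 0.25)
    return a * (1.0 - (t / width) ** 2) * np.exp(-0.5 * (t / width) ** 2)

rng = np.random.default_rng(1)
n = 2048
t = np.arange(n)
# Synthetic stand-in for a velocity trace: slow wave plus an intermittent
# "spot" of high-frequency activity and background noise.
u = np.sin(2 * np.pi * t / 400) + 0.1 * rng.standard_normal(n)
u[900:1100] += 0.8 * np.sin(2 * np.pi * t[900:1100] / 20)

widths = [4, 8, 16, 32, 64]
coeffs = np.array([np.convolve(u, ricker(min(10 * w, n), w), mode="same")
                   for w in widths])

# Scale-by-scale "energy" highlights which time scales dominate, and where.
for w, row in zip(widths, coeffs):
    print(f"width {w:3d}: mean |coeff|^2 inside spot = "
          f"{np.mean(row[900:1100]**2):.4f}, outside = {np.mean(row[:900]**2):.4f}")
```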
An analysis of landing rates and separations at the Dallas/Fort Worth International Airport
NASA Technical Reports Server (NTRS)
Ballin, Mark G.; Erzberger, Heinz
1996-01-01
Advanced air traffic management systems such as the Center/TRACON Automation System (CTAS) should yield a wide range of benefits, including reduced aircraft delays and controller workload. To determine the traffic-flow benefits achievable from future terminal airspace automation, live radar information was used to perform an analysis of current aircraft landing rates and separations at the Dallas/Fort Worth International Airport. Separation statistics that result when controllers balance complex control procedural constraints in order to maintain high landing rates are presented. In addition, the analysis estimates the potential for airport capacity improvements by determining the unused landing opportunities that occur during rush traffic periods. Results suggest a large potential for improving the accuracy and consistency of spacing between arrivals on final approach, and they support earlier simulation findings that improved air traffic management would increase capacity and reduce delays.
Improved biliary detection and diagnosis through intelligent machine analysis.
Logeswaran, Rajasvaran
2012-09-01
This paper reports on work undertaken to improve automated detection of bile ducts in magnetic resonance cholangiopancreatography (MRCP) images, with the objective of conducting preliminary classification of the images for diagnosis. The proposed I-BDeDIMA (Improved Biliary Detection and Diagnosis through Intelligent Machine Analysis) scheme is a multi-stage framework consisting of successive phases of image normalization, denoising, structure identification, object labeling, feature selection and disease classification. A combination of multiresolution wavelet, dynamic intensity thresholding, segment-based region growing, region elimination, statistical analysis and neural networks, is used in this framework to achieve good structure detection and preliminary diagnosis. Tests conducted on over 200 clinical images with known diagnosis have shown promising results of over 90% accuracy. The scheme outperforms related work in the literature, making it a viable framework for computer-aided diagnosis of biliary diseases. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Providing peak river flow statistics and forecasting in the Niger River basin
NASA Astrophysics Data System (ADS)
Andersson, Jafet C. M.; Ali, Abdou; Arheimer, Berit; Gustafsson, David; Minoungou, Bernard
2017-08-01
Flooding is a growing concern in West Africa. Improved quantification of discharge extremes and associated uncertainties is needed to improve infrastructure design, and operational forecasting is needed to provide timely warnings. In this study, we use discharge observations, a hydrological model (Niger-HYPE) and extreme value analysis to estimate peak river flow statistics (e.g. the discharge magnitude with a 100-year return period) across the Niger River basin. To test the model's capacity to predict peak flows, we compared 30-year maximum discharge and peak flow statistics derived from the model vs. derived from nine observation stations. The results indicate that the model simulates peak discharge reasonably well (on average +20%). However, the peak flow statistics have a large uncertainty range, which ought to be considered in infrastructure design. We then applied the methodology to derive basin-wide maps of peak flow statistics and their associated uncertainty. The results indicate that the method is applicable across the hydrologically active part of the river basin, and that the uncertainty varies substantially depending on location. Subsequently, we used the most recent bias-corrected climate projections to analyze potential changes in peak flow statistics in a changed climate. The results are generally ambiguous, with consistent changes only in very few areas. To test the forecasting capacity, we ran Niger-HYPE with a combination of meteorological data sets for the 2008 high-flow season and compared with observations. The results indicate reasonable forecasting capacity (on average 17% deviation), but additional years should also be evaluated. We finish by presenting a strategy and pilot project which will develop an operational flood monitoring and forecasting system based on in-situ data, earth observations, modelling, and extreme value statistics. In this way we aim to build capacity to ultimately improve resilience toward floods, protecting lives and infrastructure in the region.
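The extreme value analysis step can be sketched as follows: fit a generalized extreme value distribution to annual maximum discharges, read return levels off as upper quantiles, and bootstrap an uncertainty range. The series here is synthetic, not Niger basin data, and the operational analysis may use different estimators.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-in for 30 years of annual maximum discharge (m^3/s).
annual_max = stats.gumbel_r.rvs(loc=1500, scale=400, size=30, random_state=rng)

# Fit a generalized extreme value (GEV) distribution to the annual maxima.
shape, loc, scale = stats.genextreme.fit(annual_max)

# Return level: the discharge exceeded on average once every T years is the
# (1 - 1/T) quantile of the fitted annual-maximum distribution.
for T in (10, 50, 100):
    q = stats.genextreme.ppf(1 - 1 / T, shape, loc=loc, scale=scale)
    print(f"{T:3d}-year return level: {q:7.0f} m^3/s")

# A crude bootstrap gives an uncertainty range for the 100-year level.
boot = []
for _ in range(500):
    sample = rng.choice(annual_max, size=annual_max.size, replace=True)
    c, l, s = stats.genextreme.fit(sample)
    boot.append(stats.genextreme.ppf(0.99, c, loc=l, scale=s))
print("100-year level, 90% bootstrap interval:",
      np.percentile(boot, [5, 95]).round(0))
```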
The quality improvement attitude survey: Development and preliminary psychometric characteristics.
Dunagan, Pamela B
2017-12-01
To report the development of a tool to measure nurses' attitudes about quality improvement in their practice setting and to examine preliminary psychometric characteristics of the Quality Improvement Nursing Attitude Scale. Human factors such as nursing attitudes of complacency have been identified as root causes of sentinel events. Attitudes of nurses concerning use of the Quality and Safety Education for Nurses competencies can be most challenging to teach and to change. No tool has been developed measuring attitudes of nurses concerning their role in quality improvement. A descriptive study design with preliminary psychometric evaluation was used to examine the preliminary psychometric characteristics of the Quality Improvement Nursing Attitude Scale. Registered bedside clinical nurses comprised the sample for the study (n = 57). Quantitative data were analysed using descriptive statistics and Cronbach's alpha reliability. Total score and individual item statistics were evaluated. Two open-ended items were used to collect statements about nurses' feelings regarding their experience in quality improvement efforts. Strong support for the internal consistency reliability and face validity of the Quality Improvement Nursing Attitude Scale was found. Total scale scores were high, indicating nurse participants valued Quality and Safety Education for Nurses competencies in practice. However, item-level statistics indicated nurses felt powerless when other nurses deviated from care standards. Additionally, the sample indicated they did not consistently report patient safety issues and did not feel valued in efforts to improve care. Findings suggested organisational culture fosters nurses' reporting of safety issues and feeling valued in efforts to improve care. Participants' narrative comments and item analysis revealed the need to generate new items for the Quality Improvement Nursing Attitude Scale focused on nurses' perception of their importance in quality and safety and their power to enact principles. The Quality Improvement Nursing Attitude Scale-Revised edition was designed to help in understanding nurses' attitudes and values. It can be used to further explore broad concepts of quality improvement efforts. © 2017 John Wiley & Sons Ltd.
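A minimal sketch of the internal-consistency calculation (Cronbach's alpha) on simulated Likert-type responses is shown below; the item count and response pattern are assumptions, not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(3)
# Simulated 5-point Likert responses from 57 nurses to a 20-item scale,
# driven by a common latent attitude so the items are positively correlated.
latent = rng.normal(size=(57, 1))
responses = np.clip(np.round(3 + latent + 0.8 * rng.normal(size=(57, 20))), 1, 5)

print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```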
Visscher, P M; Haley, C S; Ewald, H; Mors, O; Egeland, J; Thiel, B; Ginns, E; Muir, W; Blackwood, D H
2005-02-05
To test the hypothesis that the same genetic loci confer susceptibility to, or protection from, disease in different populations, and that a combined analysis would improve the map resolution of a common susceptibility locus, we analyzed data from three studies that had reported linkage to bipolar disorder in a small region on chromosome 4p. Data sets comprised phenotypic information and genetic marker data on Scottish, Danish, and USA extended pedigrees. Across the three data sets, 913 individuals appeared in the pedigrees; 462 were classified either as unaffected (323) or affected (139) with unipolar or bipolar disorder. A consensus linkage map was created from 14 microsatellite markers in a 33 cM region. Phenotypic and genetic data were analyzed using a variance component (VC) and an allele sharing method. All previously reported elevated test statistics in the region were confirmed with one or both analysis methods, indicating the presence of one or more susceptibility genes for bipolar disorder in the three populations in the studied chromosome segment. When the results from both the VC and allele sharing methods were considered, there was strong evidence for a susceptibility locus in the data from Scotland, some evidence in the data from Denmark and relatively less evidence in the data from the USA. The test statistics from the Scottish data set dominated the test statistics from the other studies, and no improved map resolution for a putative genetic locus underlying susceptibility in all three studies was obtained. Studies reporting linkage to the same region require careful scrutiny and preferably joint or meta-analysis on the same basis in order to ensure that the results are truly comparable. (c) 2004 Wiley-Liss, Inc.
[Quality assessment in anesthesia].
Kupperwasser, B
1996-01-01
Quality assessment (assurance/improvement) is the set of methods used to measure and improve the delivered care and the department's performance against pre-established criteria or standards. The four stages of the self-maintained quality assessment cycle are: problem identification, problem analysis, problem correction and evaluation of corrective actions. Quality assessment is a measurable entity for which it is necessary to define and calibrate measurement parameters (indicators) from available data gathered from the hospital anaesthesia environment. Problem identification comes from the accumulation of indicators. There are four types of quality indicators: structure, process, outcome and sentinel indicators. The latter signal a quality defect, are independent of outcomes, are easier to analyse by statistical methods and are closely related to processes and the main targets of quality improvement. The three types of methods to analyse the problems (indicators) are: peer review, quantitative methods and risk management techniques. Peer review is performed by qualified anaesthesiologists. To improve its validity, the review process should be made explicit and conclusions based on standards of practice and literature references. The quantitative methods are statistical analyses applied to the collected data and presented in a graphic format (histogram, Pareto diagram, control charts). The risk management techniques include: a) critical incident analysis, establishing an objective relationship between a 'critical' event and the associated human behaviours; b) system accident analysis, based on the fact that accidents continue to occur despite safety systems and sophisticated technologies, which checks all the process components leading to the unpredictable outcome and not just the human factors; c) cause-effect diagrams, which facilitate problem analysis by reducing its causes to four fundamental components (persons, regulations, equipment, process). Definition and implementation of corrective measures, based on the findings of the two previous stages, are the third step of the evaluation cycle. The Hawthorne effect is an outcome improvement that occurs before the implementation of any corrective actions. Verification of the implemented actions is the final and mandatory step closing the evaluation cycle.
Accuracy Evaluation of the Unified P-Value from Combining Correlated P-Values
Alves, Gelio; Yu, Yi-Kuo
2014-01-01
Meta-analysis methods that combine P-values into a single unified P-value are frequently employed to improve confidence in hypothesis testing. An assumption made by most meta-analysis methods is that the P-values to be combined are independent, which may not always be true. To investigate the accuracy of the unified P-value from combining correlated P-values, we have evaluated a family of statistical methods that combine: independent, weighted independent, correlated, and weighted correlated P-values. Statistical accuracy evaluation by combining simulated correlated P-values showed that correlation among P-values can have a significant effect on the accuracy of the combined P-value obtained. Among the statistical methods evaluated, those that weight P-values compute more accurate combined P-values than those that do not. Also, statistical methods that utilize the correlation information have the best performance, producing significantly more accurate combined P-values. In our study we have demonstrated that statistical methods that combine P-values based on the assumption of independence can produce inaccurate P-values when combining correlated P-values, even when the P-values are only weakly correlated. Therefore, to prevent drawing false conclusions during hypothesis testing, our study advises caution when interpreting the P-value obtained from combining P-values of unknown correlation. However, when the correlation information is available, the weighting-capable statistical method, first introduced by Brown and recently modified by Hou, seems to perform the best amongst the methods investigated. PMID:24663491
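The contrast between independence-based and correlation-aware combination can be sketched as follows: Fisher's method is applied to simulated correlated P-values, and a Brown-style rescaled chi-square reference is derived from the empirical variance of the combined statistic. This is an illustrative simplification; Brown's method normally derives that variance from the pairwise correlations rather than from simulation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
k, rho, n_sim = 5, 0.6, 20_000

# Simulate k correlated test statistics under the null and convert to P-values.
cov = np.full((k, k), rho) + (1 - rho) * np.eye(k)
z = rng.multivariate_normal(np.zeros(k), cov, size=n_sim)
pvals = 2 * stats.norm.sf(np.abs(z))              # two-sided, correlated P-values

fisher_stat = -2 * np.log(pvals).sum(axis=1)      # Fisher combination statistic

# Fisher's method assumes independence: chi-square reference with 2k d.o.f.
p_fisher = stats.chi2.sf(fisher_stat, df=2 * k)

# Brown-style adjustment: match the mean and variance of the statistic to a
# scaled chi-square (the variance here is estimated from the simulation).
mean, var = 2.0 * k, fisher_stat.var(ddof=1)
c, f = var / (2.0 * mean), 2.0 * mean**2 / var
p_brown = stats.chi2.sf(fisher_stat / c, df=f)

# Under the null a well-calibrated method rejects about 5% of the time at 0.05.
print(f"Fisher (independence assumed): {np.mean(p_fisher < 0.05):.3f} rejected")
print(f"Brown-style (correlation-aware): {np.mean(p_brown < 0.05):.3f} rejected")
```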
Ma, Junshui; Wang, Shubing; Raubertas, Richard; Svetnik, Vladimir
2010-07-15
With the increasing popularity of using electroencephalography (EEG) to reveal the treatment effect in drug development clinical trials, the vast volume and complex nature of EEG data compose an intriguing, but challenging, topic. In this paper the statistical analysis methods recommended by the EEG community, along with methods frequently used in the published literature, are first reviewed. A straightforward adjustment of the existing methods to handle multichannel EEG data is then introduced. In addition, based on the spatial smoothness property of EEG data, a new category of statistical methods is proposed. The new methods use a linear combination of low-degree spherical harmonic (SPHARM) basis functions to represent a spatially smoothed version of the EEG data on the scalp, which is close to a sphere in shape. In total, seven statistical methods, including both the existing and the newly proposed methods, are applied to two clinical datasets to compare their power to detect a drug effect. Contrary to the EEG community's recommendation, our results suggest that (1) the nonparametric method does not outperform its parametric counterpart; and (2) including baseline data in the analysis does not always improve the statistical power. In addition, our results recommend that (3) simple paired statistical tests should be avoided due to their poor power; and (4) the proposed spatially smoothed methods perform better than their unsmoothed versions. Copyright 2010 Elsevier B.V. All rights reserved.
Jansma, J Martijn; de Zwart, Jacco A; van Gelderen, Peter; Duyn, Jeff H; Drevets, Wayne C; Furey, Maura L
2013-01-01
Technical developments in MRI have improved the signal-to-noise ratio, allowing use of analysis methods such as finite impulse response (FIR) analysis of rapid event-related functional MRI (er-fMRI). FIR is one of the most informative analysis methods, as it determines the onset and full shape of the hemodynamic response function (HRF) without any a priori assumptions. FIR is however vulnerable to multicollinearity, which is directly related to the distribution of stimuli over time. Efficiency can be optimized by simplifying a design and restricting stimulus distribution to specific sequences, while more design flexibility necessarily reduces efficiency. However, the actual effect of efficiency on fMRI results has never been tested in vivo. Thus, it is currently difficult to make an informed choice between protocol flexibility and statistical efficiency. The main goal of this study was to assign concrete fMRI signal-to-noise values to the abstract scale of FIR statistical efficiency. Ten subjects repeated a perception task under five random and m-sequence-based protocols, with varying but, according to the literature, acceptable levels of multicollinearity. Results indicated substantial differences in signal standard deviation, with the level being a function of multicollinearity; experimental protocols varied by up to 55.4% in standard deviation. Results confirm that the quality of fMRI in an FIR analysis can significantly and substantially vary with statistical efficiency. Our in vivo measurements can be used to aid in making an informed decision between freedom in protocol design and statistical efficiency. PMID:23473798
New insights into time series analysis. II - Non-correlated observations
NASA Astrophysics Data System (ADS)
Ferreira Lopes, C. E.; Cross, N. J. G.
2017-08-01
Context. Statistical parameters are used to draw conclusions in a vast number of fields such as finance, weather, industry, and science. These parameters are also used to identify variability patterns in photometric data in order to select non-stochastic variations that are indicative of astrophysical effects. New, more efficient, selection methods are mandatory to analyze the huge amount of astronomical data. Aims: We seek to improve the current methods used to select non-stochastic variations in non-correlated data. Methods: We used standard and new data-mining parameters to analyze non-correlated data to find the best way to discriminate between stochastic and non-stochastic variations. A new approach that includes a modified Strateva function was introduced to select non-stochastic variations. Monte Carlo simulations and public time-domain data were used to estimate its accuracy and performance. Results: We introduce 16 modified statistical parameters covering different features of statistical distributions such as average, dispersion, and shape parameters. Many dispersion and shape parameters are unbound parameters, i.e. equations that do not require the calculation of the average. Unbound parameters are computed in a single loop, hence decreasing running time. Moreover, the majority of these parameters have lower errors than previous parameters, which is mainly observed for distributions with few measurements. A set of non-correlated variability indices, sample size corrections, and a new noise model, along with tests of different apertures and cut-offs on the data (BAS approach), are introduced. The number of mis-selections is reduced by about 520% using a single waveband and 1200% combining all wavebands. On the other hand, the even-mean also improves the correlated indices introduced in Paper I. The mis-selection rate is reduced by about 18% if the even-mean is used instead of the mean to compute the correlated indices in the WFCAM database. Even-statistics allow us to improve the effectiveness of both correlated and non-correlated indices. Conclusions: The selection of non-stochastic variations is improved by non-correlated indices. The even-averages provide a better estimation of the mean and median for almost all statistical distributions analyzed. The correlated variability indices, which were proposed in the first paper of this series, are also improved if the even-mean is used. The even-parameters will also be useful for classifying light curves in the last step of this project. We consider that the first step of this project, where we set out new techniques and methods that provide a huge improvement in the efficiency of selection of variable stars, is now complete. Many of these techniques may be useful for a large number of fields. Next, we will commence a new step of this project regarding the analysis of period search methods.
Automated Analysis of Renewable Energy Datasets ('EE/RE Data Mining')
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bush, Brian; Elmore, Ryan; Getman, Dan
This poster illustrates methods to substantially improve the understanding of renewable energy data sets and the depth and efficiency of their analysis through the application of statistical learning methods ('data mining') in the intelligent processing of these often large and messy information sources. The six examples apply methods for anomaly detection, data cleansing, and pattern mining to time-series data (measurements from metering points in buildings) and spatiotemporal data (renewable energy resource datasets).
Spatio-temporal analysis of annual rainfall in Crete, Greece
NASA Astrophysics Data System (ADS)
Varouchakis, Emmanouil A.; Corzo, Gerald A.; Karatzas, George P.; Kotsopoulou, Anastasia
2018-03-01
Analysis of rainfall data from the island of Crete, Greece was performed to identify key hydrological years and return periods as well as to analyze the inter-annual behavior of the rainfall variability during the period 1981-2014. The rainfall spatial distribution was also examined in detail to identify vulnerable areas of the island. Statistical tools and spectral analysis were applied to investigate and interpret the temporal course of the available rainfall data set. In addition, spatial analysis techniques were applied and compared to determine the rainfall spatial distribution on the island of Crete. The analysis showed that, in contrast to Regional Climate Model estimations, rainfall rates have not decreased, while return periods vary depending on seasonality and geographic location. A small but statistically significant increasing trend was detected in the inter-annual rainfall variations, as well as a significant rainfall cycle almost every 8 years. In addition, a statistically significant correlation of the island's rainfall variability with the North Atlantic Oscillation was identified for the examined period. On the other hand, the regression kriging method, combining surface elevation as secondary information, improved the estimation of the annual rainfall spatial variability on the island of Crete by 70% compared to ordinary kriging. The rainfall spatial and temporal trends on the island of Crete have variable characteristics that depend on the geographical area and on the hydrological period.
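A rough sketch of the regression-kriging idea (an elevation trend plus spatially interpolated residuals) is given below. It substitutes scikit-learn's Gaussian process regression for a dedicated kriging package, and all coordinates, elevations and rainfall values are synthetic, so it illustrates the workflow rather than the paper's implementation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)
n = 120
xy = rng.uniform(0, 100, size=(n, 2))          # station coordinates (km)
elev = rng.uniform(0, 2000, size=n)            # station elevation (m)
# Synthetic annual rainfall: elevation trend + spatially smooth residual + noise.
spatial = 200 * np.sin(xy[:, 0] / 25) * np.cos(xy[:, 1] / 30)
rain = 400 + 0.3 * elev + spatial + rng.normal(0, 30, n)

# Step 1: regression of rainfall on the secondary variable (elevation).
trend = LinearRegression().fit(elev.reshape(-1, 1), rain)
resid = rain - trend.predict(elev.reshape(-1, 1))

# Step 2: interpolate the residuals spatially (GP regression stands in here
# for ordinary kriging of the residuals).
gp = GaussianProcessRegressor(kernel=RBF(20.0) + WhiteKernel(1.0),
                              normalize_y=True).fit(xy, resid)

# Prediction at a new location = elevation trend + interpolated residual.
new_xy, new_elev = np.array([[50.0, 50.0]]), np.array([[900.0]])
pred = trend.predict(new_elev) + gp.predict(new_xy)
print(f"regression-kriging estimate at (50, 50): {pred[0]:.0f} mm")
```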
Impact of electronic health records on the patient experience in a hospital setting.
Migdal, Christopher W; Namavar, Aram A; Mosley, Virgie N; Afsar-manesh, Nasim
2014-10-01
The impact of electronic health records (EHRs) and their effects on optimizing the patient experience has been debated nationally. Currently, there is a paucity of data in this area, and existing research offers conflicting results. Since 2006, the Assessing Residents' CI-CARE (ARC) program has evaluated the physician-patient interaction of resident physicians at University of California, Los Angeles (UCLA) Health utilizing a 20-item questionnaire administered through facilitator-patient interviews. To evaluate the impact of EHR implementation on the patient experience. Retrospective cohort study. Two academic medical campuses: Ronald Reagan UCLA Medical Center and UCLA Medical Center, Santa Monica. A total of 3417 surveys, spanning December 1, 2012 to May 30, 2013, were assessed. This included patient representation from 9 departments within UCLA Health. Surveys were analyzed to assess physician-patient communication. Statistical comparisons were made using χ² analysis. All 16 questions assessing physician-patient communication received better responses in the 3 months following EHR implementation, compared to the 3 months prior to implementation. Of these, 9 questions illustrated statistically significant improvement, whereas the improvement in the remaining 7 questions was not statistically significant. These results suggest that EHRs may improve physician-patient communication. The ARC infrastructure allowed for observation of this trend; however, future research should aim to further validate and understand the etiologies of this improvement. © 2014 Society of Hospital Medicine.
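The per-question comparison can be sketched with a chi-square test on a contingency table of response counts before vs. after implementation; the counts below are hypothetical.

```python
from scipy.stats import chi2_contingency

# Hypothetical response counts for one survey item, pre vs. post EHR
# implementation (rows) by response category (columns: yes / sometimes / no).
table = [[820, 510, 170],   # pre-implementation
         [940, 460, 120]]   # post-implementation

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```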
Deriving injury risk curves using survival analysis from biomechanical experiments.
Yoganandan, Narayan; Banerjee, Anjishnu; Hsu, Fang-Chi; Bass, Cameron R; Voo, Liming; Pintar, Frank A; Gayzik, F Scott
2016-10-03
Injury risk curves from biomechanical experimental data analysis are used in automotive studies to improve crashworthiness and advance occupant safety. Metrics such as acceleration and deflection coupled with outcomes such as fractures and anatomical disruptions from impact tests are used in simple binary regression models. As an improvement, the International Standards Organization suggested a different approach. It was based on survival analysis. While probability curves for side-impact-induced thorax and abdominal injuries and frontal impact-induced foot-ankle-leg injuries are developed using this approach, deficiencies are apparent. The objective of this study is to present an improved, robust and generalizable methodology in an attempt to resolve these issues. It includes: (a) statistical identification of the most appropriate independent variable (metric) from a pool of candidate metrics, measured and or derived during experimentation and analysis processes, based on the highest area under the receiver operator curve, (b) quantitative determination of the most optimal probability distribution based on the lowest Akaike information criterion, (c) supplementing the qualitative/visual inspection method for comparing the selected distribution with a non-parametric distribution with objective measures, (d) identification of overly influential observations using different methods, and (e) estimation of confidence intervals using techniques more appropriate to the underlying survival statistical model. These clear and quantified details can be easily implemented with commercial/open source packages. They can be used in retrospective analysis and prospective design of experiments, and in applications to different loading scenarios such as underbody blast events. The feasibility of the methodology is demonstrated using post mortem human subject experiments and 24 metrics associated with thoracic/abdominal injuries in side-impacts. Published by Elsevier Ltd.
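Two of the listed steps — choosing the candidate metric with the highest area under the receiver operator curve, and choosing the probability distribution with the lowest Akaike information criterion — can be sketched as follows on simulated data. The real analysis fits censored survival models; the plain maximum-likelihood fits below are a simplification, and all values are illustrative.

```python
import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(11)
n = 200
# Simulated candidate injury metrics and a binary injury outcome driven
# mostly by "deflection" (all values are illustrative).
deflection = rng.gamma(4, 10, n)
acceleration = rng.gamma(3, 15, n)
injury = (deflection + rng.normal(0, 15, n) > 55).astype(int)

# Step (a): choose the metric with the highest area under the ROC curve.
for name, metric in [("deflection", deflection), ("acceleration", acceleration)]:
    print(f"AUC for {name}: {roc_auc_score(injury, metric):.3f}")

# Step (b): choose the probability distribution with the lowest AIC for the
# injury-producing metric values (uncensored fits shown for brevity).
x = deflection[injury == 1]
candidates = {"weibull": stats.weibull_min, "lognormal": stats.lognorm}
for name, dist in candidates.items():
    params = dist.fit(x, floc=0)
    aic = 2 * len(params) - 2 * dist.logpdf(x, *params).sum()
    print(f"{name}: AIC = {aic:.1f}")
```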
Lin, Susie; McKenna, Samuel J; Yao, Chuan-Fong; Chen, Yu-Ray; Chen, Chit
2017-01-01
The objective of this study was to evaluate the efficacy of hypotensive anesthesia in reducing intraoperative blood loss, decreasing operation time, and improving the quality of the surgical field during orthognathic surgery. A systematic review and meta-analysis of randomized controlled trials addressing these issues were carried out. An electronic database search was performed. The risk of bias was evaluated with the Jadad Scale and Delphi List. The inverse variance statistical method and a random-effects model were used. Ten randomized controlled trials were included for analysis. Our meta-analysis indicated that hypotensive anesthesia reduced intraoperative blood loss by a mean of about 169 mL. Hypotensive anesthesia was not shown to reduce the operation time for orthognathic surgery, but it did improve the quality of the surgical field. Subgroup analysis indicated that for blood loss in double-jaw surgery, the weighted mean difference favored the hypotensive group, with a reduction in blood loss of 175 mL, but no statistically significant reduction in blood loss was found for anterior maxillary osteotomy. If local anesthesia with epinephrine was used in conjunction with hypotensive anesthesia, the reduction in intraoperative blood loss was increased to 254.93 mL. Hypotensive anesthesia was effective in reducing blood loss and improving the quality of the surgical field, but it did not reduce the operation time for orthognathic surgery. The use of local anesthesia in conjunction with hypotensive general anesthesia further reduced the amount of intraoperative blood loss for orthognathic surgery. Copyright © 2016 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
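A minimal sketch of inverse-variance random-effects pooling (the DerSimonian-Laird form) of mean differences is shown below; the per-trial mean differences and standard errors are illustrative, not the review's extracted data.

```python
import numpy as np

# Illustrative per-trial mean differences in blood loss (mL, hypotensive minus
# control) and their standard errors; these are not the meta-analysis values.
md = np.array([-150., -210., -120., -185., -90.])
se = np.array([  40.,   55.,   35.,   60.,  45.])

w_fixed = 1 / se**2
q = np.sum(w_fixed * (md - np.average(md, weights=w_fixed))**2)
k = len(md)
# DerSimonian-Laird estimate of the between-trial variance tau^2.
tau2 = max(0.0, (q - (k - 1)) /
           (w_fixed.sum() - (w_fixed**2).sum() / w_fixed.sum()))

w_re = 1 / (se**2 + tau2)                      # random-effects weights
pooled = np.average(md, weights=w_re)
pooled_se = np.sqrt(1 / w_re.sum())
print(f"pooled mean difference = {pooled:.1f} mL "
      f"(95% CI {pooled - 1.96*pooled_se:.1f} to {pooled + 1.96*pooled_se:.1f})")
```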
Samuel, Jonathan C; Sankhulani, Edward; Qureshi, Javeria S; Baloyi, Paul; Thupi, Charles; Lee, Clara N; Miller, William C; Cairns, Bruce A; Charles, Anthony G
2012-01-01
Road traffic injuries are a major cause of preventable death in sub-Saharan Africa. Accurate epidemiologic data are scarce and under-reporting from primary data sources is common. Our objectives were to estimate the incidence of road traffic deaths in Malawi using capture-recapture statistical analysis and determine what future efforts will best improve upon this estimate. Our capture-recapture model combined primary data from both police and hospital-based registries over a one year period (July 2008 to June 2009). The mortality incidences from the primary data sources were 0.075 and 0.051 deaths/1000 person-years, respectively. Using capture-recapture analysis, the combined incidence of road traffic deaths ranged 0.192-0.209 deaths/1000 person-years. Additionally, police data were more likely to include victims who were male, drivers or pedestrians, and victims from incidents with greater than one vehicle involved. We concluded that capture-recapture analysis is a good tool to estimate the incidence of road traffic deaths, and that capture-recapture analysis overcomes limitations of incomplete data sources. The World Health Organization estimated incidence of road traffic deaths for Malawi utilizing a binomial regression model and survey data and found a similar estimate despite strikingly different methods, suggesting both approaches are valid. Further research should seek to improve capture-recapture data through utilization of more than two data sources and improving accuracy of matches by minimizing missing data, application of geographic information systems, and use of names and civil registration numbers if available.
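A minimal sketch of a two-source capture-recapture estimate (Chapman's form of the Lincoln-Petersen estimator) is given below; the registry counts and person-years are illustrative assumptions, not the Malawi figures.

```python
# Two-source capture-recapture with illustrative counts (not the study's data):
# deaths recorded by police, deaths recorded by hospital, and matched cases
# appearing in both registries.
n_police, n_hospital, n_both = 250, 170, 60

# Chapman estimator of total deaths, including those missed by both sources.
n_total = (n_police + 1) * (n_hospital + 1) / (n_both + 1) - 1

person_years = 13.1e6  # approximate population at risk over one year (assumed)
print(f"estimated road traffic deaths: {n_total:.0f}")
print(f"estimated incidence: {1000 * n_total / person_years:.3f} deaths/1000 person-years")
```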
Lauritano, Dorina; Petruzzi, Massimo; Di Stasio, Dario; Lucchese, Alberta
2014-03-01
The aim of this study was to evaluate the efficacy of palifermin, an N-terminal truncated version of endogenous keratinocyte growth factor, in the control of oral mucositis during antiblastic therapy. Twenty patients undergoing allogeneic stem-cell transplantation for acute lymphoblastic leukaemia were treated with palifermin and compared to a control group with the same number of subjects and similar inclusion criteria. Statistical analyses were performed to compare the outcomes in the treatment vs. control groups. In the treatment group, we found a statistically significant reduction in the duration of parenteral nutrition (P=0.002), the duration of mucositis (P=0.003) and the average grade of mucositis (P=0.03). The statistical analysis showed that the drug was able to decrease the severity of mucositis. These data, although preliminary, suggest that palifermin could be a valid therapeutic adjuvant to improve the quality of life of patients suffering from leukaemia.
An Adaptive Buddy Check for Observational Quality Control
NASA Technical Reports Server (NTRS)
Dee, Dick P.; Rukhovets, Leonid; Todling, Ricardo; DaSilva, Arlindo M.; Larson, Jay W.; Einaudi, Franco (Technical Monitor)
2000-01-01
An adaptive buddy check algorithm is presented that adjusts tolerances for outlier observations based on the variability of surrounding data. The algorithm derives from a statistical hypothesis test combined with maximum-likelihood covariance estimation. Its stability is shown to depend on the initial identification of outliers by a simple background check. The adaptive feature ensures that the final quality control decisions are not very sensitive to prescribed statistics of first-guess and observation errors, nor to other approximations introduced into the algorithm. The implementation of the algorithm in a global atmospheric data assimilation system is described. Its performance is contrasted with that of a non-adaptive buddy check for the surface analysis of an extreme storm that took place in Europe on 27 December 1999. The adaptive algorithm allowed the inclusion of many important observations that differed greatly from the first guess and that would have been excluded on the basis of prescribed statistics. The analysis of the storm development was much improved as a result of these additional observations.
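The buddy-check idea can be sketched as follows: each observation is compared with nearby "buddies", and the rejection tolerance is inflated where the surrounding data are highly variable, so that observations in active weather are not discarded merely for differing from a smooth first guess. The tolerance formula below is a simplification of the published algorithm, and the observation network is synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = np.sort(rng.uniform(0, 1000, n))            # station positions (km)
field = 10 * np.sin(x / 120)                    # "true" surface anomaly
obs = field + rng.normal(0, 1.0, n)             # observations with noise
obs[120] += 9.0                                 # one genuine outlier

def buddy_check(x, obs, radius=60.0, base_tol=3.0):
    flags = np.zeros(obs.size, dtype=bool)
    for i in range(obs.size):
        buddies = (np.abs(x - x[i]) < radius) & (np.arange(obs.size) != i)
        if buddies.sum() < 3:
            continue                            # too few neighbours to decide
        local_mean = obs[buddies].mean()
        local_sd = obs[buddies].std(ddof=1)
        # Adaptive tolerance: grows where the surrounding data are variable.
        tol = base_tol * max(1.0, local_sd)
        flags[i] = abs(obs[i] - local_mean) > tol
    return flags

suspect = buddy_check(x, obs)
print("flagged indices:", np.flatnonzero(suspect))
```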
Statistical analysis of excitation energies in actinide and rare-earth nuclei
NASA Astrophysics Data System (ADS)
Levon, A. I.; Magner, A. G.; Radionov, S. V.
2018-04-01
Statistical analysis of distributions of the collective states in actinide and rare-earth nuclei is performed in terms of the nearest-neighbor spacing distribution (NNSD). Several approximations, such as the linear approach to the level repulsion density and that suggested by Brody for the NNSDs, were applied for the analysis. We found an intermediate character of the experimental spectra between order and chaos for a number of rare-earth and actinide nuclei. The spectra are closer to the Wigner distribution for energies below 3 MeV, and to the Poisson distribution for data including higher excitation energies and higher spins. The latter result is in agreement with the theoretical calculations. These features are confirmed by the cumulative distributions, where the Wigner contribution dominates at smaller spacings while the Poisson one is more important at larger spacings, and our linear approach improves the comparison with experimental data at all desired spacings.
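A minimal sketch of the NNSD comparison is given below: two synthetic spacing samples with unit mean spacing (one exponential, one drawn from the Wigner surmise) are scored against the Poisson and Wigner forms by mean log-likelihood. The samples stand in for unfolded experimental spectra.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000

# Two synthetic spacing samples with unit mean spacing: "Poisson" spacings
# (exponential, uncorrelated levels) and "Wigner" spacings sampled from the
# Wigner surmise via its inverse CDF (level repulsion, GOE-like).
s_poisson = rng.exponential(1.0, n)
s_wigner = np.sqrt(-4.0 / np.pi * np.log(1.0 - rng.uniform(size=n)))

pdf_poisson = lambda s: np.exp(-s)
pdf_wigner = lambda s: (np.pi * s / 2) * np.exp(-np.pi * s**2 / 4)

# Score each sample against both candidate forms by mean log-likelihood.
for label, s in [("Poisson-like sample", s_poisson), ("Wigner-like sample", s_wigner)]:
    ll_p = np.log(pdf_poisson(s)).mean()
    ll_w = np.log(pdf_wigner(s)).mean()
    best = "Poisson" if ll_p > ll_w else "Wigner"
    print(f"{label}: mean log-lik Poisson={ll_p:.3f}, Wigner={ll_w:.3f} -> {best}")
```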
Volcano plots in analyzing differential expressions with mRNA microarrays.
Li, Wentian
2012-12-01
A volcano plot displays unstandardized signal (e.g. log-fold-change) against noise-adjusted/standardized signal (e.g. t-statistic or −log10(p-value) from the t-test). We review the basic and interactive use of the volcano plot and its crucial role in understanding the regularized t-statistic. The joint filtering gene selection criterion based on regularized statistics has a curved discriminant line in the volcano plot, as compared to the two perpendicular lines for the "double filtering" criterion. This review attempts to provide a unifying framework for discussions on alternative measures of differential expression, improved methods for estimating variance, and visual display of a microarray analysis result. We also discuss the possibility of applying volcano plots to other fields beyond microarray.
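A minimal matplotlib sketch of a volcano plot with "double filtering" threshold lines, drawn from a simulated two-group expression matrix, is shown below; group sizes, effect sizes and cutoffs are illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(6)
n_genes, n_per_group = 5000, 6

# Simulated log2 expression: most genes unchanged, ~5% truly shifted.
base = rng.normal(8, 1, size=(n_genes, 1))
shift = np.where(rng.uniform(size=(n_genes, 1)) < 0.05,
                 rng.normal(0, 1.5, size=(n_genes, 1)), 0.0)
group_a = base + rng.normal(0, 0.5, size=(n_genes, n_per_group))
group_b = base + shift + rng.normal(0, 0.5, size=(n_genes, n_per_group))

log_fc = group_b.mean(axis=1) - group_a.mean(axis=1)     # unstandardized signal
t_stat, p = stats.ttest_ind(group_b, group_a, axis=1)    # standardized signal

plt.scatter(log_fc, -np.log10(p), s=4, alpha=0.4)
plt.axvline(1.0, ls="--"); plt.axvline(-1.0, ls="--")    # fold-change filter
plt.axhline(-np.log10(0.01), ls="--")                    # p-value filter
plt.xlabel("log2 fold change"); plt.ylabel("-log10(p-value)")
plt.title("Volcano plot (simulated data)")
plt.savefig("volcano.png", dpi=150)
```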
Ledbetter, Alexander K; Sohlberg, McKay Moore; Fickas, Stephen F; Horney, Mark A; McIntosh, Kent
2017-11-06
This study evaluated a computer-based prompting intervention for improving expository essay writing after acquired brain injury (ABI). Four undergraduate participants aged 18-21 with mild-moderate ABI and impaired fluid cognition at least 6 months post-injury reported difficulty with the writing process after injury. The study employed a non-concurrent multiple probe across participants, in a single-case design. Outcome measures included essay quality scores and number of revisions to writing counted then coded by type using a revision taxonomy. An inter-scorer agreement procedure was completed for quality scores for 50% of essays, with data indicating that agreement exceeded a goal of 85%. Visual analysis of results showed increased essay quality for all participants in intervention phase compared with baseline, maintained 1 week after. Statistical analyses showed statistically significant results for two of the four participants. The authors discuss external cuing for self-monitoring and tapping of existing writing knowledge as possible explanations for improvement. The study provides preliminary evidence that computer-based prompting has potential to improve writing quality for undergraduates with ABI.
The Equity of New York State's System of Financing Schools: An Update.
ERIC Educational Resources Information Center
Scheuer, Joan
1983-01-01
This statistical analysis of the equity and efficiency of New York's complex school finance system concludes that legislation since 1975 has neither significantly reduced wide disparities in local spending nor weakened the link between wealth and expenditure because the system cannot be improved without a substantial funding increase. (MJL)
Railroad safety program, task 2
NASA Technical Reports Server (NTRS)
1983-01-01
Aspects of railroad safety and the preparation of a National Inspection Plan (NIP) for rail safety improvement are examined. Methodology for the allocation of inspection resources, preparation of a NIP instruction manual, and recommendations for future NIP, are described. A statistical analysis of regional rail accidents is presented with causes and suggested preventive measures included.
Can Low Tuition Fee Policy Improve Higher Education Equity and Social Welfare?
ERIC Educational Resources Information Center
Zha, Xianyou; Ding, Shouhai
2007-01-01
Traditionally there has been a theoretical view that raising tuition fees will undermine education equity and social welfare. This study examines the effects of different tuition policies on both these factors. A statistical analysis is made on the theoretical relationship between higher education tuition fees and dropout probability, which leads…
New Tools for "New" History: Computers and the Teaching of Quantitative Historical Methods.
ERIC Educational Resources Information Center
Burton, Orville Vernon; Finnegan, Terence
1989-01-01
Explains the development of an instructional software package and accompanying workbook which teaches students to apply computerized statistical analysis to historical data, improving the study of social history. Concludes that the use of microcomputers and supercomputers to manipulate historical data enhances critical thinking skills and the use…
Economics: A Discriminant Analysis of Students' Perceptions of Web-Based Learning.
ERIC Educational Resources Information Center
Usip, Ebenge E.; Bee, Richard H.
1998-01-01
Users and nonusers of Web-based instruction (WBI) in undergraduate statistics classes at Youngstown State University were surveyed. Users concluded that distance learning via the Web was a good method of obtaining general information and a useful tool for improving their academic performance. Nonusers thought the university should provide…
Lisa A. Schulte; David J. Mladenoff; Erik V. Nordheim
2002-01-01
We developed a quantitative and replicable classification system to improve understanding of historical composition and structure within northern Wisconsin's forests. The classification system was based on statistical cluster analysis and two forest metrics, relative dominance (% basal area) and relative importance (mean of relative dominance and relative density...
Overview Of Recent Enhancements To The Bumper-II Meteoroid and Orbital Debris Risk Assessment Tool
NASA Technical Reports Server (NTRS)
Hyde, James L.; Christiansen, Eric L.; Lear, Dana M.; Prior, Thomas G.
2006-01-01
Discussion includes recent enhancements to the BUMPER-II program and input files in support of Shuttle Return to Flight. Improvements to the mesh definitions of the finite element input model will be presented. A BUMPER-II analysis process that was used to estimate statistical uncertainty is introduced.
Analyzing Exercise Training Effect and Its Impact on Cardiorespiratory and Cardiovascular Fitness
ERIC Educational Resources Information Center
Laumakis, Paul J.; McCormack, Kevin
2014-01-01
This paper provides a statistical investigation of the impact of heart rate levels on training effect for a specific exercise regimen, including an analysis of post-exercise heart rate recovery. Results indicate optimum target values for both average and maximum heart rate during exercise in order to improve both cardiorespiratory and…
ERIC Educational Resources Information Center
McEwan, Patrick J.
2015-01-01
I gathered 77 randomized experiments (with 111 treatment arms) that evaluated the effects of school-based interventions on learning in developing-country primary schools. On average, monetary grants and deworming treatments had mean effect sizes that were close to zero and not statistically significant. Nutritional treatments, treatments that…
Statistical analysis of 4 types of neck whiplash injuries based on classical meridian theory.
Chen, Yemeng; Zhao, Yan; Xue, Xiaolin; Li, Hui; Wu, Xiuyan; Zhang, Qunce; Zheng, Xin; Wang, Tianfang
2015-01-01
As one component of the Chinese medicine meridian system, the meridian sinew (Jingjin, (see text), tendino-musculo) is specially described as being for acupuncture treatment of the musculoskeletal system because of its dynamic attributes and tender point correlations. In recent decades, the therapeutic importance of the sinew meridian has become revalued in clinical application. Based on this theory, the authors have established therapeutic strategies of acupuncture treatment in Whiplash-Associated Disorders (WAD) by categorizing four types of neck symptom presentations. The advantage of this new system is to make it much easier for the clinician to find effective acupuncture points. This study attempts to prove the significance of the proposed therapeutic strategies by analyzing data collected from a clinical survey of various WAD using non-supervised statistical methods, such as correlation analysis, factor analysis, and cluster analysis. The clinical survey data have successfully verified discrete characteristics of four neck syndromes, based upon the range of motion (ROM) and tender point location findings. A summary of the relationships among the symptoms of the four neck syndromes has shown the correlation coefficient as having a statistical significance (P < 0.01 or P < 0.05), especially with regard to ROM. Furthermore, factor and cluster analyses resulted in a total of 11 categories of general symptoms, which implies syndrome factors are more related to the Liver, as originally described in classical theory. The hypothesis of meridian sinew syndromes in WAD is clearly supported by the statistical analysis of the clinical trials. This new discovery should be beneficial in improving therapeutic outcomes.
Bayesian inference for joint modelling of longitudinal continuous, binary and ordinal events.
Li, Qiuju; Pan, Jianxin; Belcher, John
2016-12-01
In medical studies, repeated measurements of continuous, binary and ordinal outcomes are routinely collected from the same patient. Instead of modelling each outcome separately, in this study we propose to jointly model the trivariate longitudinal responses, so as to take account of the inherent association between the different outcomes and thus improve statistical inferences. This work is motivated by a large cohort study in the North West of England, involving trivariate responses from each patient: Body Mass Index, Depression (Yes/No) ascertained with a cut-off score of at least 8 on the Hospital Anxiety and Depression Scale, and Pain Interference generated from the Medical Outcomes Study 36-item short-form health survey, with values returned on an ordinal scale of 1-5. There are some well-established methods for combined continuous and binary, or even continuous and ordinal responses, but little work has been done on the joint analysis of continuous, binary and ordinal responses. We propose conditional joint random-effects models, which take into account the inherent association between the continuous, binary and ordinal outcomes. Bayesian analysis methods are used to make statistical inferences. Simulation studies show that, by jointly modelling the trivariate outcomes, standard deviations of the estimates of parameters in the models are smaller and much more stable, leading to more efficient parameter estimates and reliable statistical inferences. In the real data analysis, the proposed joint analysis yields a much smaller deviance information criterion value than the separate analysis, and shows other good statistical properties too. © The Author(s) 2014.
Emery, R J
1997-03-01
Institutional radiation safety programs routinely use wipe test sampling and liquid scintillation counting analysis to indicate the presence of removable radioactive contamination. Significant volumes of liquid waste can be generated by such surveillance activities, and the subsequent disposal of these materials can sometimes be difficult and costly. In settings where large numbers of negative results are regularly obtained, the limited grouping of samples for analysis based on expected value statistical techniques is possible. To demonstrate the plausibility of the approach, single wipe samples exposed to varying amounts of contamination were analyzed concurrently with nine non-contaminated samples. Although the sample grouping inevitably leads to increased quenching with liquid scintillation counting systems, the effect did not impact the ability to detect removable contamination in amounts well below recommended action levels. Opportunities to further improve this cost effective semi-quantitative screening procedure are described, including improvements in sample collection procedures, enhancing sample-counting media contact through mixing and extending elution periods, increasing sample counting times, and adjusting institutional action levels.
Lamart, Stephanie; Griffiths, Nina M; Tchitchek, Nicolas; Angulo, Jaime F; Van der Meeren, Anne
2017-03-01
The aim of this work was to develop a computational tool that integrates several statistical analysis features for biodistribution data from internal contamination experiments. These data represent actinide levels in biological compartments as a function of time and are derived from activity measurements in tissues and excreta. These experiments aim at assessing the influence of different contamination conditions (e.g. intake route or radioelement) on the biological behavior of the contaminant. The ever increasing number of datasets and diversity of experimental conditions make the handling and analysis of biodistribution data difficult. This work sought to facilitate the statistical analysis of a large number of datasets and the comparison of results from diverse experimental conditions. Functional modules were developed using the open-source programming language R to facilitate specific operations: descriptive statistics, visual comparison, curve fitting, and implementation of biokinetic models. In addition, the structure of the datasets was harmonized using the same table format. Analysis outputs can be written in text files and updated data can be written in the consistent table format. Hence, a data repository is built progressively, which is essential for the optimal use of animal data. Graphical representations can be automatically generated and saved as image files. The resulting computational tool was applied using data derived from wound contamination experiments conducted under different conditions. In facilitating biodistribution data handling and statistical analyses, this computational tool ensures faster analyses and a better reproducibility compared with the use of multiple office software applications. Furthermore, re-analysis of archival data and comparison of data from different sources is made much easier. Hence this tool will help to understand better the influence of contamination characteristics on actinide biokinetics. Our approach can aid the optimization of treatment protocols and therefore contribute to the improvement of the medical response after internal contamination with actinides.
NASA Astrophysics Data System (ADS)
Singh, Sanjeev Kumar; Prasad, V. S.
2018-02-01
This paper presents a systematic investigation of medium-range rainfall forecasts from two versions of the National Centre for Medium Range Weather Forecasting (NCMRWF) Global Forecast System, based on a three-dimensional variational (3D-Var) and a hybrid analysis system (NGFS and HNGFS, respectively), during the Indian summer monsoon (June-September) of 2015. The NGFS uses the gridpoint statistical interpolation (GSI) 3D-Var data assimilation system, whereas the HNGFS uses a hybrid 3D ensemble-variational scheme. The analysis includes the evaluation of rainfall fields and comparisons of rainfall using statistical scores such as mean precipitation, bias, correlation coefficient, root mean square error and a forecast improvement factor. In addition, categorical scores such as the Peirce skill score and bias score are computed to describe particular aspects of forecast performance. The comparison of mean precipitation reveals that both versions of the model produce similar large-scale features of Indian summer monsoon rainfall for day-1 through day-5 forecasts. The inclusion of fully flow-dependent background error covariance significantly improved the wet biases of HNGFS over the Indian Ocean. The forecast improvement factor and Peirce skill score of HNGFS are also found to be better than those of NGFS for day-1 through day-5 forecasts.
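The continuous verification scores mentioned above (mean bias, root mean square error, correlation) and the Peirce skill score can be computed as in the short Python sketch below; the rainfall fields and contingency counts are random placeholders rather than NGFS/HNGFS output.

import numpy as np

def verify(forecast, observed):
    # Mean bias, root mean square error and Pearson correlation for gridded fields.
    bias = np.mean(forecast - observed)
    rmse = np.sqrt(np.mean((forecast - observed) ** 2))
    corr = np.corrcoef(forecast.ravel(), observed.ravel())[0, 1]
    return bias, rmse, corr

def peirce_skill_score(hits, misses, false_alarms, correct_negatives):
    # PSS = hit rate - false alarm rate, from a 2x2 contingency table.
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
    return hit_rate - false_alarm_rate

# Placeholder monthly-mean rainfall fields (mm/day) on a small grid.
rng = np.random.default_rng(1)
obs = rng.gamma(shape=2.0, scale=3.0, size=(10, 10))
fcst = obs + rng.normal(0.0, 1.0, size=obs.shape)
print(verify(fcst, obs))
print(peirce_skill_score(hits=40, misses=10, false_alarms=15, correct_negatives=60))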
Madden, Jennifer R; Mowry, Patricia; Gao, Dexiang; Cullen, Patsy McGuire; Foreman, Nicholas K
2010-01-01
This mixed-methods pilot study evaluated the effects of creative arts therapy (CAT) on the quality of life (QOL) of children receiving chemotherapy. A two-group, repeated-measures randomized design compared CAT with a volunteer's attention (n = 16). Statistical analysis of the randomized controlled phase of the study suggested an improvement in the following areas after CAT: parent report of the child's hurt (P = .03) and parent report of the child's nausea (P = .0061). A nonrandomized phase, using a different instrument, showed improved mood with statistical significance on the Faces Scale (P < .01); patients were also more excited (P < .05), happier (P < .02), and less nervous (P < .02). Provider focus groups revealed positive experiences. Case studies are included to exemplify the therapeutic process. With heightened interest in complementary therapy for children with cancer, future research with a larger sample size is needed to document the impact of incorporating creative arts into the healing process.
Rah, Jeong-Eun; Shin, Dongho; Oh, Do Hoon; Kim, Tae Hyun; Kim, Gwe-Ya
2014-09-01
To evaluate and improve the reliability of proton quality assurance (QA) processes and to provide optimal customized tolerance levels using the statistical process control (SPC) methodology. The authors investigated the consistency check of dose per monitor unit (D/MU) and range in proton beams to see whether it was within the tolerance level of the daily QA process. This study analyzed the difference between the measured and calculated ranges along the central axis, using process capability indices, to improve the patient-specific QA process for proton beams. The authors established a customized tolerance level of ±2% for D/MU and ±0.5 mm for beam range in the daily proton QA process. In the authors' analysis of the process capability indices, the patient-specific range measurements were capable of a specification limit of ±2% in clinical plans. SPC methodology is a useful tool for customizing optimal QA tolerance levels and improving the quality of proton machine maintenance, treatment delivery, and ultimately patient safety.
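A minimal sketch of the process-capability calculation that underlies tolerance setting of this kind, computing Cp and Cpk for daily D/MU deviations against a ±2% specification; the measurement values below are invented, not the study data.

import numpy as np

def capability(values, lower_spec, upper_spec):
    # Process capability indices Cp and Cpk for a set of QA measurements.
    mu, sigma = np.mean(values), np.std(values, ddof=1)
    cp = (upper_spec - lower_spec) / (6.0 * sigma)
    cpk = min(upper_spec - mu, mu - lower_spec) / (3.0 * sigma)
    return cp, cpk

# Invented daily D/MU deviations from baseline, in percent.
dmu_deviation = np.array([0.3, -0.5, 0.1, 0.8, -0.2, 0.4, -0.7, 0.2, 0.0, 0.5])
print(capability(dmu_deviation, lower_spec=-2.0, upper_spec=2.0))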
Yi, Wei; Sheng-de, Wu; Lian-Ju, Shen; Tao, Lin; Da-Wei, He; Guang-Hui, Wei
2018-05-24
To investigate whether the management of undescended testis (UDT) may be improved with educational updates and a new transfer model among referring providers (RPs). The ages at which orchidopexies were performed at the Children's Hospital of Chongqing Medical University were reviewed. We then introduced educational updates and a new transfer model among RPs, and collected the ages at orchidopexy after this intervention. Data were represented graphically, and the chi-square test for trend was used for statistical analysis. A total of 1543 orchidopexies were performed. The median age at orchidopexy did not match the target age of 6-12 months in any year. A survey of the RPs showed that 48.85% recommended surgery before 12 months of age; however, only 25.50% would directly make a surgical referral to pediatric surgery at this point. After the educational updates were introduced, tracking the age at orchidopexy revealed a statistically significant downward trend. The management of undescended testis may be improved with educational updates and a new transfer model among primary healthcare practitioners.
Using statistical process control to make data-based clinical decisions.
Pfadt, A; Wheeler, D J
1995-01-01
Applied behavior analysis is based on an investigation of variability due to interrelationships among antecedents, behavior, and consequences. This permits testable hypotheses about the causes of behavior, as well as the course of treatment, to be evaluated empirically. Such information provides corrective feedback for making data-based clinical decisions. This paper considers how a different approach to the analysis of variability, based on the writings of Walter Shewhart and W. Edwards Deming in the area of industrial quality control, helps to achieve similar objectives. Statistical process control (SPC) was developed to implement a process of continual product improvement while achieving compliance with production standards and other requirements for promoting customer satisfaction. SPC involves the use of simple statistical tools, such as histograms and control charts, as well as problem-solving techniques, such as flow charts, cause-and-effect diagrams, and Pareto charts, to implement Deming's management philosophy. These data-analytic procedures can be incorporated into a human service organization to help it achieve its stated objectives in a manner that leads to continuous improvement in the functioning of the clients who are its customers. Examples are provided to illustrate how SPC procedures can be used to analyze behavioral data. Issues related to the application of these tools for making data-based clinical decisions and for creating an organizational climate that promotes their routine use in applied settings are also considered.
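One of the simple SPC tools mentioned above, an individuals control chart with limits estimated from the average moving range, can be sketched as follows; the daily behavior counts are hypothetical.

import numpy as np

def individuals_chart_limits(x):
    # Center line and 3-sigma limits for an individuals (X) chart,
    # estimating sigma from the average moving range (MR-bar / 1.128).
    x = np.asarray(x, dtype=float)
    center = x.mean()
    moving_range = np.abs(np.diff(x))
    sigma_hat = moving_range.mean() / 1.128
    return center, center - 3 * sigma_hat, center + 3 * sigma_hat

# Hypothetical daily counts of a target behavior.
counts = [12, 9, 11, 14, 10, 13, 8, 12, 11, 15, 9, 10]
center, lcl, ucl = individuals_chart_limits(counts)
out_of_control = [c for c in counts if c < lcl or c > ucl]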
Interactivity fosters Bayesian reasoning without instruction.
Vallée-Tourangeau, Gaëlle; Abadie, Marlène; Vallée-Tourangeau, Frédéric
2015-06-01
Successful statistical reasoning emerges from a dynamic system including: a cognitive agent, material artifacts with their action possibilities, and the thoughts and actions that are realized while reasoning takes place. Five experiments provide evidence that enabling the physical manipulation of the problem information (through the use of playing cards) substantially improves statistical reasoning, without training or instruction, not only with natural frequency statements (Experiment 1) but also with single-event probability statements (Experiment 2). Improved statistical reasoning was not simply a matter of making all sets and subsets explicit in the pack of cards (Experiment 3), it was not merely due to the discrete and countable layout resulting from the card manipulation, and it was not mediated by participants' level of engagement with the task (Experiment 5). The positive effect of an increased manipulability of the problem information on participants' reasoning performance was generalizable both over problems whose numeric properties did not map perfectly onto the cards and over different types of cards (Experiment 4). A systematic analysis of participants' behaviors revealed that manipulating cards improved performance when reasoners spent more time actively changing the presentation layout "in the world" as opposed to when they spent more time passively pointing at cards, seemingly attempting to solve the problem "in their head." Although they often go unnoticed, the action possibilities of the material artifacts available and the actions that are realized on those artifacts are constitutive of successful statistical reasoning, even in adults who have ostensibly reached cognitive maturity. (c) 2015 APA, all rights reserved.
Wu, Johnny C; Gardner, David P; Ozer, Stuart; Gutell, Robin R; Ren, Pengyu
2009-08-28
The accurate prediction of the secondary and tertiary structure of an RNA with different folding algorithms is dependent on several factors, including the energy functions. However, an RNA higher-order structure cannot be predicted accurately from its sequence based on a limited set of energy parameters. The inter- and intramolecular forces between the RNA and other small molecules and macromolecules, in addition to other factors in the cell such as pH, ionic strength, and temperature, influence the complex dynamics associated with the transition of a single-stranded RNA to its secondary and tertiary structure. Since all of the factors that affect the formation of an RNA's 3D structure cannot be determined experimentally, statistically derived potential energies have been used in the prediction of protein structure. In the current work, we evaluate the statistical free energy of various secondary structure motifs, including base-pair stacks, hairpin loops, and internal loops, using their statistical frequency obtained from the comparative analysis of more than 50,000 RNA sequences stored in the RNA Comparative Analysis Database (rCAD) at the Comparative RNA Web (CRW) Site. Statistical energy was computed from the structural statistics for several datasets. While the statistical energy for a base-pair stack correlates with experimentally derived free energy values, suggesting a Boltzmann-like distribution, variation is observed between different molecules and their locations on the phylogenetic tree of life. Our statistical energy values calculated for several structural elements were utilized in the Mfold RNA-folding algorithm. The combined statistical energy values for base-pair stacks, hairpins, and internal loop flanks result in a significant improvement in the accuracy of secondary structure prediction; the hairpin flanks contribute the most.
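A minimal sketch of the Boltzmann-like conversion from observed motif frequency to a statistical (knowledge-based) energy, of the general kind described above; the frequencies below are invented and kT is arbitrarily set to 1.

import math

def statistical_energy(observed_freq, expected_freq, kT=1.0):
    # Knowledge-based energy: motifs seen more often than expected get a lower
    # (more favorable) energy under a Boltzmann-like assumption.
    return -kT * math.log(observed_freq / expected_freq)

# Invented frequency of a base-pair stack in a large alignment versus a
# background expectation; the negative result indicates a favorable motif.
print(statistical_energy(observed_freq=0.08, expected_freq=0.02))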
Directions for new developments on statistical design and analysis of small population group trials.
Hilgers, Ralf-Dieter; Roes, Kit; Stallard, Nigel
2016-06-14
Most statistical design and analysis methods for clinical trials have been developed and evaluated where at least several hundred patients could be recruited. These methods may not be suitable to evaluate therapies if the sample size is unavoidably small, which is usually termed a small population. The specific sample size cut-off, below which the standard methods fail, needs to be investigated. In this paper, the authors present their view on new developments for the design and analysis of clinical trials in small population groups, where conventional statistical methods may be inappropriate, e.g., because of lack of power or poor adherence to asymptotic approximations due to sample size restrictions. Following the EMA/CHMP guideline on clinical trials in small populations, we consider directions for new developments in the area of statistical methodology for design and analysis of small population clinical trials. We relate the findings to the research activities of three projects, Asterix, IDeAl, and InSPiRe, which have received funding since 2013 within the FP7-HEALTH-2013-INNOVATION-1 framework of the EU. As not all aspects of the wide research area of small population clinical trials can be addressed, we focus on areas where we feel advances are needed and feasible. The general framework of the EMA/CHMP guideline on small population clinical trials stimulates a number of research areas. These serve as the basis for the three projects, Asterix, IDeAl, and InSPiRe, which use various approaches to develop new statistical methodology for design and analysis of small population clinical trials. Small population clinical trials refer to trials with a limited number of patients. Small populations may result from rare diseases or specific subtypes of more common diseases. New statistical methodology needs to be tailored to these specific situations. The main results from the three projects will constitute a useful toolbox for improved design and analysis of small population clinical trials. They address various challenges presented by the EMA/CHMP guideline as well as recent discussions about extrapolation. There is a need for involvement of the patients' perspective in the planning and conduct of small population clinical trials for a successful therapy evaluation.
Whole-Range Assessment: A Simple Method for Analysing Allelopathic Dose-Response Data
An, Min; Pratley, J. E.; Haig, T.; Liu, D.L.
2005-01-01
Based on the typical biological responses of an organism to allelochemicals (hormesis), concepts of whole-range assessment and inhibition index were developed for improved analysis of allelopathic data. Examples of their application are presented using data drawn from the literature. The method is concise and comprehensive, and makes data grouping and multiple comparisons simple, logical, and possible. It improves data interpretation, enhances research outcomes, and is a statistically efficient summary of the plant response profiles. PMID:19330165
Measuring Technological and Content Knowledge of Undergraduate Primary Teachers in Mathematics
NASA Astrophysics Data System (ADS)
Doukakis, Spyros; Chionidou-Moskofoglou, Maria; Mangina-Phelan, Eleni; Roussos, Petros
Twenty-five final-year undergraduate students of primary education who were attending a course on mathematics education participated in a research project during the 2009 spring semester. A repeated-measures experimental design was used. Quantitative data on students' computer attitudes, self-efficacy in ICT, attitudes toward educational software (ES), and self-efficacy in maths were collected. Data analysis showed a statistically non-significant improvement in participants' computer attitudes and self-efficacy in ICT and ES, but a significant improvement in self-efficacy in mathematics.
Dennett, Amy M; Taylor, Nicholas F
2015-01-01
To determine the effectiveness of computer-based electronic devices that provide feedback in improving mobility and balance and reducing falls. Randomized controlled trials were searched from the earliest available date to August 2013. Standardized mean differences were used to complete meta-analyses, with statistical heterogeneity described using the I-squared statistic. The GRADE approach was used to summarize the level of evidence for each completed meta-analysis. Risk of bias for individual trials was assessed with the Physiotherapy Evidence Database (PEDro) scale. Thirty trials were included. There was high-quality evidence that computerized devices can improve dynamic balance in people with a neurological condition compared with no therapy. There was low-to-moderate-quality evidence that computerized devices have no significant effect on mobility, falls efficacy, and falls risk in community-dwelling older adults and people with a neurological condition compared with physiotherapy. There is high-quality evidence that computerized devices that provide feedback may be useful in improving balance in people with neurological conditions compared with no therapy, but there is a lack of evidence supporting more meaningful changes in mobility and falls risk.
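A small sketch of the meta-analytic quantities named above: inverse-variance (fixed-effect) pooling of standardized mean differences, Cochran's Q, and the I-squared statistic; the effect sizes and standard errors are invented, not taken from the review.

import numpy as np

def fixed_effect_meta(effects, std_errors):
    # Inverse-variance fixed-effect pooling with Cochran's Q and I-squared.
    effects, std_errors = np.asarray(effects, float), np.asarray(std_errors, float)
    weights = 1.0 / std_errors**2
    pooled = np.sum(weights * effects) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))
    q = np.sum(weights * (effects - pooled) ** 2)
    df = len(effects) - 1
    i_squared = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return pooled, pooled_se, q, i_squared

# Invented standardized mean differences and their standard errors.
print(fixed_effect_meta([0.4, 0.6, 0.2, 0.5], [0.15, 0.20, 0.25, 0.18]))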
Site Suitability Analysis for Beekeeping via Analytical Hierarchy Process, Konya Example
NASA Astrophysics Data System (ADS)
Sarı, F.; Ceylan, D. A.
2017-11-01
Over the past decade, the importance of beekeeping activities has been emphasized in the fields of biodiversity, ecosystems, agriculture, and human health. Efficient management and correct decisions about beekeeping activities therefore seem essential to maintain and improve productivity and efficiency. Given this importance, and considering the economic contribution to rural areas, the need for a suitability analysis has become apparent. At this point, the integration of Multi Criteria Decision Analysis (MCDA) and Geographical Information Systems (GIS) provides efficient solutions to the complex, decision-making structure of beekeeping activities. In this study, a site suitability analysis via the Analytical Hierarchy Process (AHP) was carried out for Konya city in Turkey. Slope, elevation, aspect, distance to water resources, roads and settlements, precipitation, and flora criteria were included to determine suitability. The requirements, expectations, and limitations of beekeeping activities were specified with the participation of experts and stakeholders. The final suitability map was validated against 117 existing beekeeping locations and the Turkish Statistical Institute 2016 beekeeping statistics for Konya province.
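A minimal sketch of how AHP criterion weights can be derived from a reciprocal pairwise comparison matrix (principal eigenvector method) with a consistency-ratio check; the three criteria and comparison values are illustrative, not those elicited in the study.

import numpy as np

def ahp_weights(pairwise):
    # Criterion weights from a reciprocal pairwise comparison matrix via the
    # principal eigenvector, plus the consistency ratio using Saaty's random index.
    a = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(a)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    n = a.shape[0]
    consistency_index = (eigvals[k].real - n) / (n - 1)
    random_index = {3: 0.58, 4: 0.90, 5: 1.12}[n]
    return w, consistency_index / random_index

# Illustrative comparisons for three criteria: flora, distance to water, slope.
matrix = [[1.0, 3.0, 5.0],
          [1/3, 1.0, 2.0],
          [1/5, 1/2, 1.0]]
weights, consistency_ratio = ahp_weights(matrix)   # CR below ~0.1 is usually acceptable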
ICAP - An Interactive Cluster Analysis Procedure for analyzing remotely sensed data
NASA Technical Reports Server (NTRS)
Wharton, S. W.; Turner, B. J.
1981-01-01
An Interactive Cluster Analysis Procedure (ICAP) was developed to derive classifier training statistics from remotely sensed data. ICAP differs from conventional clustering algorithms by allowing the analyst to optimize the cluster configuration by inspection, rather than by manipulating process parameters. Control of the clustering process alternates between the algorithm, which creates new centroids and forms clusters, and the analyst, who can evaluate and elect to modify the cluster structure. Clusters can be deleted, or lumped together pairwise, or new centroids can be added. A summary of the cluster statistics can be requested to facilitate cluster manipulation. The principal advantage of this approach is that it allows prior information (when available) to be used directly in the analysis, since the analyst interacts with ICAP in a straightforward manner, using basic terms with which he is more likely to be familiar. Results from testing ICAP showed that an informed use of ICAP can improve classification, as compared to an existing cluster analysis procedure.
Point-by-point compositional analysis for atom probe tomography.
Stephenson, Leigh T; Ceguerra, Anna V; Li, Tong; Rojhirunsakool, Tanaporn; Nag, Soumya; Banerjee, Rajarshi; Cairney, Julie M; Ringer, Simon P
2014-01-01
This new, alternative approach to data processing is necessary for analyses that traditionally employed grid-based counting methods because it removes a user-imposed coordinate system that not only limits an analysis but may also introduce errors. We have modified the widely used "binomial" analysis for APT data by replacing grid-based counting with coordinate-independent nearest-neighbour identification, improving the measurements and the statistics obtained, and allowing quantitative analysis of smaller datasets and of datasets from non-dilute solid solutions. It also allows better visualisation of compositional fluctuations in the data. Our modifications include: using spherical k-atom blocks identified by each detected atom's first k nearest neighbours; 3D data visualisation of block composition and nearest-neighbour anisotropy; and using z-statistics to directly compare experimental and expected composition curves. Similar modifications may be made to other grid-based counting analyses (contingency table, Langer-Bar-on-Miller, sinusoidal model) and could be instrumental in developing novel data visualisation options.
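A simplified sketch of the coordinate-independent counting idea described above: each detected atom's k nearest neighbours form a block, block solute counts are tallied, and their mean is compared with the binomial expectation via a z-statistic; the atom positions and labels are synthetic, and block overlap is ignored for simplicity.

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
n_atoms, k = 5000, 50
positions = rng.uniform(0.0, 20.0, size=(n_atoms, 3))   # synthetic reconstruction volume
is_solute = rng.random(n_atoms) < 0.1                    # ~10 at.% solute, randomly placed

tree = cKDTree(positions)
# Query k+1 neighbours because the first neighbour returned is the atom itself.
_, idx = tree.query(positions, k=k + 1)
block_counts = is_solute[idx[:, 1:]].sum(axis=1)         # solute atoms per k-atom block

p = is_solute.mean()
expected_mean = k * p
expected_sd = np.sqrt(k * p * (1.0 - p))
# Simplified z-statistic comparing the observed mean block count with the binomial
# expectation (treating blocks as independent); near zero for a random solid solution.
z = (block_counts.mean() - expected_mean) / (expected_sd / np.sqrt(n_atoms))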
Uei, Shu-Lin; Tsai, Chung-Hung; Kuo, Yu-Ming
2016-04-29
Telehealth cost analysis has become a crucial issue for governments in recent years. In this study, we examined cases of metabolic syndrome in Hualien County, Taiwan. This research adopted the framework proposed by Marchand to establish the study process. In addition, descriptive statistics, a t test, analysis of variance, and regression analysis were employed to analyze 100 questionnaires. The results of the t test revealed significant differences in medical health expenditure, the number of clinical visits for medical treatment, the average amount of time spent commuting to clinics, the amount of time spent undergoing medical treatment, and the average number of people accompanying patients to medical care facilities or assisting with other tasks in the past month, indicating that offering telehealth care services can reduce health expenditure. The statistical analysis results also revealed that customer satisfaction has a positive effect on reducing health expenditure. Therefore, this study shows that telehealth care systems can effectively reduce health expenditure and directly improve customer satisfaction with medical treatment.
NASA Technical Reports Server (NTRS)
Jongeward, Andrew R.; Li, Zhanqing; He, Hao; Xiong, Xiaoxiong
2016-01-01
Aerosols contribute to Earth's radiative budget both directly and indirectly, and large uncertainties remain in quantifying aerosol effects on climate. Variability in aerosol distribution and properties, as might result from changing emissions and transport processes, must be characterized. In this study, variations in aerosol loading across the eastern seaboard of the United States and the North Atlantic Ocean during 2002 to 2012 are analyzed to examine the impacts of anthropogenic emission control measures, using monthly mean data from MODIS, AERONET, and IMPROVE observations and Goddard Chemistry Aerosol Radiation and Transport (GOCART) model simulations. MODIS observes a statistically significant negative trend in aerosol optical depth (AOD) over the midlatitudes (-0.030 per decade). Correlation analyses with surface AOD from AERONET sites in the upwind region, combined with trend analysis from GOCART component AOD, confirm that the observed decrease in the midlatitudes is chiefly associated with anthropogenic aerosols, which exhibit significant negative trends from the eastern U.S. coast extending over the western North Atlantic. Additional analysis of IMPROVE surface PM2.5 observations demonstrates statistically significant negative trends in the anthropogenic components, with decreasing mass concentrations over the eastern United States. Finally, a seasonal analysis of the observational datasets is performed. The negative trend seen by MODIS is strongest during the spring (MAM) and summer (JJA) months. This is supported by AERONET seasonal trends and is identified from IMPROVE seasonal trends as resulting from decreases in ammonium sulfate during these seasons.
Gao, Xinxiao; Guo, Jia; Meng, Xin; Wang, Jun; Peng, Xiaoyan; Ikuno, Yasushi
2016-06-13
To evaluate the anatomical and visual outcomes of pars plana vitrectomy with or without internal limiting membrane (ILM) peeling in highly myopic eyes with macular hole retinal detachment (MHRD). MEDLINE (Ovid, PubMed) and EMBASE were searched for data up to September 30, 2015. Anatomical success, macular hole closure, and improvement in best corrected visual acuity (BCVA) at or beyond 6 months after the operation were assessed as the primary outcome measures. The meta-analysis was performed with a fixed-effects model. Seven comparative analyses involving a total of 373 patients were included in the present meta-analysis. The pooled data showed a statistically significant relative risk (RR) for primary reattachment between the ILM-peeling and non-peeling groups (RR, 1.19; 95% CI, 1.04 to 1.36; P = 0.012). An effect favoring ILM peeling with regard to macular hole closure was also detected (RR, 1.71; 95% CI, 1.20 to 2.43; P = 0.003). However, no statistically significant difference was found in the improvement in BCVA (logarithm of the minimum angle of resolution) at 6 months or more (95% CI, -0.31 to 0.44; P = 0.738). There is no proven benefit in postoperative visual improvement, but the available evidence from this study suggests a superiority of ILM peeling over no peeling for myopic patients with MHRD.
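A small sketch of fixed-effect (inverse-variance) pooling of risk ratios on the log scale, the kind of combination behind RR estimates such as those quoted above; the event counts below are invented, not the study data.

import numpy as np
from scipy import stats

def pooled_risk_ratio(events_t, n_t, events_c, n_c):
    # Fixed-effect (inverse-variance) pooling of risk ratios on the log scale.
    events_t, n_t = np.asarray(events_t, float), np.asarray(n_t, float)
    events_c, n_c = np.asarray(events_c, float), np.asarray(n_c, float)
    log_rr = np.log((events_t / n_t) / (events_c / n_c))
    var = 1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c
    w = 1.0 / var
    pooled = np.sum(w * log_rr) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    ci = np.exp([pooled - 1.96 * se, pooled + 1.96 * se])
    p = 2 * (1 - stats.norm.cdf(abs(pooled / se)))
    return np.exp(pooled), ci, p

# Invented reattachment counts for treated vs control arms in three studies.
print(pooled_risk_ratio([28, 35, 30], [30, 40, 33], [24, 30, 26], [30, 40, 33]))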
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerr, J.; Jones, G.L.
1996-01-01
Recent advances in hardware and software have given the interpreter and engineer new ways to view 3D seismic data and well bore information. Recent papers have also highlighted the use of various statistics and seismic attributes. By combining new 3D rendering technologies with recent trends in seismic analysis, the interpreter can improve the structural and stratigraphic resolution of hydrocarbon reservoirs. This paper gives several examples using 3D visualization to better define both the structural and stratigraphic aspects of several different structural types from around the world. Statistics, 3D visualization techniques and rapid animation are used to show complex faulting and detailed channel systems. These systems would be difficult to map using either 2D or 3D data with conventional interpretation techniques.
Comparative inference of duplicated genes produced by polyploidization in soybean genome.
Yang, Yanmei; Wang, Jinpeng; Di, Jianyong
2013-01-01
Soybean (Glycine max) is one of the most important crop plants for providing protein and oil, and it is important to investigate the soybean genome for its economic and scientific value. Polyploidy is a widespread and recursive phenomenon during plant evolution, and it can generate large numbers of duplicated genes, which are an important resource for genetic innovation. Improved sequence alignment criteria and statistical analysis were used to identify and characterize duplicated genes produced by polyploidization in soybean. Based on the collinearity method, duplicated genes produced by whole-genome duplication account for 70.3% in soybean. From the statistical analysis of the molecular distances between duplicated genes, our study indicates that the whole-genome duplication event occurred more than once during the genome evolution of soybean, with duplicates often distributed near the ends of chromosomes.
A RESEARCH DATABASE FOR IMPROVED DATA MANAGEMENT AND ANALYSIS IN LONGITUDINAL STUDIES
Bielefeld, Roger A.; Yamashita, Toyoko S.; Kerekes, Edward F.; Ercanli, Ehat; Singer, Lynn T.
2014-01-01
We developed a research database for a five-year prospective investigation of the medical, social, and developmental correlates of chronic lung disease during the first three years of life. We used the Ingres database management system and the Statit statistical software package. The database includes records containing 1300 variables each, the results of 35 psychological tests, each repeated five times (providing longitudinal data on the child, the parents, and behavioral interactions), both raw and calculated variables, and both missing and deferred values. The four-layer menu-driven user interface incorporates automatic activation of complex functions to handle data verification, missing and deferred values, static and dynamic backup, determination of calculated values, display of database status, reports, bulk data extraction, and statistical analysis. PMID:7596250
Therapies for acute myeloid leukemia: vosaroxin
Sayar, Hamid; Bashardoust, Parvaneh
2017-01-01
Vosaroxin, a quinolone-derivative chemotherapeutic agent, was considered a promising drug for the treatment of acute myeloid leukemia (AML). Early-stage clinical trials with this agent led to a large randomized double-blind placebo-controlled study of vosaroxin in combination with intermediate-dose cytarabine for the treatment of relapsed or refractory AML. The study demonstrated better complete remission rates with vosaroxin, but there was no statistically significant overall survival benefit in the whole cohort. A subset analysis censoring patients who had undergone allogeneic stem cell transplantation, however, revealed a modest but statistically significant improvement in overall survival particularly among older patients. This article reviews the data available on vosaroxin including clinical trials in AML and offers an analysis of findings of these studies as well as the current status of vosaroxin. PMID:28860803
Pearson, Glen J; Olson, Kari L; Panich, Nicole E; Majumdar, Sumit R; Tsuyuki, Ross T; Gilchrist, Dawna M; Damani, Ali; Francis, Gordon A
2008-01-01
Background: Specialty cardiovascular risk reduction clinics (CRRC) increase the proportion of patients attaining recommended lipid targets; however, it is not known if the benefits are sustained after discharge. We evaluated the impact of a CRRC on lipid levels and assessed the long-term effect of a CRRC in maintaining improved lipid levels following discharge. Methods: The medical records of consecutive dyslipidemic patients discharged ≥6 months from a tertiary hospital CRRC from January 1991 to January 2001 were retrospectively reviewed. The primary outcome was the change in patients' lipid levels between the final CRRC visit and the most recent primary care follow-up. A worst-case analysis was conducted to evaluate the potential impact of the patients in whom the follow-up lipid profiles post-discharge from the CRRC were not obtained. Results: Within the CRRC (median follow-up = 1.28 years in 1064 patients), we observed statistically significant improvements in all lipid parameters. In the 411 patients for whom post-discharge lipid profiles were available (median follow-up = 2.41 years), there were no significant differences observed in low-density lipoprotein-cholesterol, total cholesterol (TC), or triglycerides since CRRC discharge; however, there were small improvements in high-density lipoprotein-cholesterol (HDL-C) and TC:HDL ratio (p < 0.05 for both). The unadjusted worst-case analysis (653 patients with no follow-up lipid profiles) demonstrated statistically significant worsening of all lipid parameters between CRRC discharge and the most recent follow-up. However, when the change in lipid parameters between the baseline and the most recent follow-up was assessed in this analysis, the changes in all lipid parameters were significantly improved (p < 0.05). Conclusions: This study demonstrates that a CRRC can improve lipid levels and suggests that these benefits are sustained once patients are returned to the care of their primary physician. PMID:19183763
Sparse approximation of currents for statistics on curves and surfaces.
Durrleman, Stanley; Pennec, Xavier; Trouvé, Alain; Ayache, Nicholas
2008-01-01
Computing, processing, and visualizing statistics on shapes such as curves or surfaces is a real challenge, with many applications ranging from medical image analysis to computational geometry. Modelling such geometrical primitives with currents avoids both feature-based approaches and point-correspondence methods. This framework has proved powerful for registering brain surfaces and for measuring geometrical invariants. However, while state-of-the-art methods perform pairwise registrations efficiently, new numerical schemes are required to process groupwise statistics, owing to the increasing complexity as the size of the database grows. Statistics such as the mean and principal modes of a set of shapes often have a heavy and highly redundant representation. We therefore propose to find an adapted basis on which the mean and principal modes have a sparse decomposition. Besides the computational improvement, this sparse representation offers a way to visualize and interpret statistics on currents. Experiments show the relevance of the approach on 34 sets of 70 sulcal lines and on 50 sets of 10 meshes of deep brain structures.
Chopra, Vikram; Bairagi, Mukesh; Trivedi, P; Nagar, Mona
2012-01-01
Statistical process control is the application of statistical methods to the measurement and analysis of process variation. Various regulatory authorities, such as the Validation Guidance for Industry (2011), International Conference on Harmonisation ICH Q10 (2009), the Health Canada guidelines (2009), the Health Sciences Authority, Singapore: Guidance for Product Quality Review (2008), and International Organization for Standardization ISO-9000:2005, provide regulatory support for the application of statistical process control for better process control and understanding. In this study, risk assessments, normal probability distributions, control charts, and capability charts are employed for the selection of critical quality attributes, determination of normal probability distribution, statistical stability, and capability of production processes, respectively. The objective of this study is to determine tablet production process quality in the form of sigma process capability. By interpreting data and graph trends, forecasting of critical quality attributes, sigma process capability, and stability of the process were studied. The overall study contributes to an assessment of the process at the sigma level with respect to out-of-specification attributes produced. Finally, the study points to areas where the application of quality improvement and quality risk assessment principles can achieve six sigma-capable processes. Statistical process control is a highly advantageous tool for determining the quality of any production process, and it is new to the pharmaceutical tablet production process. In the case of pharmaceutical tablet production processes, the quality control parameters act as quality assessment parameters. Application of risk assessment allows the selection of critical quality attributes among the quality control parameters. Sequential application of normality distributions, control charts, and capability analyses provides a valid statistical process control study of the process. Interpretation of such a study provides information about stability, process variability, changing trends, and quantification of process ability against defective production. Comparative evaluation of critical quality attributes by Pareto charts identifies the least capable and most variable process, which is the candidate for improvement. Statistical process control thus proves to be an important tool for six sigma-capable process development and continuous quality improvement.
An Asynchronous Many-Task Implementation of In-Situ Statistical Analysis using Legion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pebay, Philippe Pierre; Bennett, Janine Camille
2015-11-01
In this report, we propose a framework for the design and implementation of in-situ analyses using an asynchronous many-task (AMT) model, using the Legion programming model together with the MiniAero mini-application as a surrogate for full-scale parallel scientific computing applications. The bulk of this work consists of converting the Learn/Derive/Assess model, which we had initially developed for parallel statistical analysis using MPI [PTBM11], from an SPMD to an AMT model. To this end, we propose an original use of the concept of Legion logical regions as a replacement for the parallel communication schemes used for the only operation of the statistics engines that requires explicit communication. We then evaluate this proposed scheme in a shared-memory environment, using the Legion port of MiniAero as a proxy for a full-scale scientific application, as a means to provide input data sets of variable size for the in-situ statistical analyses in an AMT context. We demonstrate in particular that the approach has merit and warrants further investigation, in collaboration with ongoing efforts to improve the overall parallel performance of the Legion system.
Scaling Laws in Canopy Flows: A Wind-Tunnel Analysis
NASA Astrophysics Data System (ADS)
Segalini, Antonio; Fransson, Jens H. M.; Alfredsson, P. Henrik
2013-08-01
An analysis of velocity statistics and spectra measured above a wind-tunnel forest model is reported. Several measurement stations downstream of the forest edge have been investigated, and it is observed that, while the mean velocity profile adjusts quickly to the new canopy boundary condition, the turbulence lags behind and shows a continuous penetration towards the free stream along the canopy model. The statistical profiles illustrate this growth and do not collapse when plotted as a function of the vertical coordinate. However, when the statistics are plotted as a function of the local mean velocity (normalized with a characteristic velocity scale), they do collapse, independently of the streamwise position and free-stream velocity. A new scaling for the spectra of all three velocity components is proposed based on the velocity variance and the integral time scale. This normalization improves the collapse of the spectra compared with existing scalings adopted in atmospheric measurements, and allows the determination of a universal function that provides the velocity spectrum. Furthermore, a comparison of the proposed scaling laws for two different canopy densities is shown, demonstrating that the vertical velocity variance is the statistical quantity most sensitive to the characteristics of the canopy roughness.
An introduction to Bayesian statistics in health psychology.
Depaoli, Sarah; Rus, Holly M; Clifton, James P; van de Schoot, Rens; Tiemensma, Jitske
2017-09-01
The aim of the current article is to provide a brief introduction to Bayesian statistics within the field of health psychology. Bayesian methods are increasing in prevalence in applied fields, and they have been shown in simulation research to improve the estimation accuracy of structural equation models, latent growth curve (and mixture) models, and hierarchical linear models. Likewise, Bayesian methods can be used with small sample sizes since they do not rely on large sample theory. In this article, we discuss several important components of Bayesian statistics as they relate to health-based inquiries. We discuss the incorporation and impact of prior knowledge into the estimation process and the different components of the analysis that should be reported in an article. We present an example implementing Bayesian estimation in the context of blood pressure changes after participants experienced an acute stressor. We conclude with final thoughts on the implementation of Bayesian statistics in health psychology, including suggestions for reviewing Bayesian manuscripts and grant proposals. We have also included an extensive amount of online supplementary material to complement the content presented here, including Bayesian examples using many different software programmes and an extensive sensitivity analysis examining the impact of priors.
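A minimal sketch of Bayesian updating with an informative prior, using a conjugate normal model with known data variance for a mean change such as blood pressure after an acute stressor; the observations and prior values are invented, and the known-variance assumption is a simplification of the models the article describes.

import numpy as np

def normal_posterior(prior_mean, prior_sd, data, data_sd):
    # Posterior for a normal mean with known data standard deviation (conjugate update).
    data = np.asarray(data, dtype=float)
    n = data.size
    prior_precision = 1.0 / prior_sd**2
    data_precision = n / data_sd**2
    post_var = 1.0 / (prior_precision + data_precision)
    post_mean = post_var * (prior_precision * prior_mean + data_precision * data.mean())
    return post_mean, np.sqrt(post_var)

# Invented systolic blood-pressure changes (mmHg) after an acute stressor,
# combined with a weakly informative prior centred on no change.
changes = [6.0, 9.5, 4.0, 12.0, 7.5, 5.0, 10.0, 8.0]
print(normal_posterior(prior_mean=0.0, prior_sd=10.0, data=changes, data_sd=8.0))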
Mai, Lan-Yin; Li, Yi-Xuan; Chen, Yong; Xie, Zhen; Li, Jie; Zhong, Ming-Yu
2014-05-01
The compatibility of traditional Chinese medicine (TCM) formulae, which contain an enormous amount of information, constitutes a complex component system. Applying mathematical statistics methods to research on the compatibility of TCM formulae has great significance for promoting the modernization of traditional Chinese medicine and for improving the clinical efficacy and optimization of formulae. As a tool for quantitative analysis, data inference, and exploring the inherent rules of substances, mathematical statistics methods can be used to reveal the working mechanisms of the compatibility of TCM formulae both qualitatively and quantitatively. By reviewing studies based on the application of mathematical statistics methods, this paper summarizes work from the perspectives of dosage optimization, efficacy, changes of chemical components, and the rules of incompatibility and contraindication of formulae, and provides references for further studying and revealing the working mechanisms and connotations of traditional Chinese medicine formulae.
Soman, Gopalan; Yang, Xiaoyi; Jiang, Hengguang; Giardina, Steve; Vyas, Vinay; Mitra, George; Yovandich, Jason; Creekmore, Stephen P; Waldmann, Thomas A; Quiñones, Octavio; Alvord, W Gregory
2009-08-31
A colorimetric cell proliferation assay using soluble tetrazolium salt [(CellTiter 96(R) Aqueous One Solution) cell proliferation reagent, containing the (3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium, inner salt) and an electron coupling reagent phenazine ethosulfate], was optimized and qualified for quantitative determination of IL-15 dependent CTLL-2 cell proliferation activity. An in-house recombinant Human (rHu)IL-15 reference lot was standardized (IU/mg) against an international reference standard. Specificity of the assay for IL-15 was documented by illustrating the ability of neutralizing anti-IL-15 antibodies to block the product specific CTLL-2 cell proliferation and the lack of blocking effect with anti-IL-2 antibodies. Under the defined assay conditions, the linear dose-response concentration range was between 0.04 and 0.17ng/ml of the rHuIL-15 produced in-house and 0.5-3.0IU/ml for the international standard. Statistical analysis of the data was performed with the use of scripts written in the R Statistical Language and Environment utilizing a four-parameter logistic regression fit analysis procedure. The overall variation in the ED(50) values for the in-house reference standard from 55 independent estimates performed over the period of 1year was 12.3% of the average. Excellent intra-plate and within-day/inter-plate consistency was observed for all four parameter estimates in the model. Different preparations of rHuIL-15 showed excellent intra-plate consistency in the parameter estimates corresponding to the lower and upper asymptotes as well as to the 'slope' factor at the mid-point. The ED(50) values showed statistically significant differences for different lots and for control versus stressed samples. Three R-scripts improve data analysis capabilities allowing one to describe assay variations, to draw inferences between data sets from formal statistical tests, and to set up improved assay acceptance criteria based on comparability and consistency in the four parameters of the model. The assay is precise, accurate and robust and can be fully validated. Applications of the assay were established including process development support, release of the rHuIL-15 product for pre-clinical and clinical studies, and for monitoring storage stability.
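A minimal sketch of a four-parameter logistic dose-response fit of the kind used to estimate ED50 values, here with SciPy rather than the authors' R scripts; the concentrations and absorbance readings are invented.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ed50, hill):
    # Four-parameter logistic model for a proliferation dose-response curve.
    return bottom + (top - bottom) / (1.0 + (ed50 / x) ** hill)

# Invented IL-15 concentrations (ng/ml) and background-corrected absorbances.
conc = np.array([0.01, 0.02, 0.04, 0.08, 0.17, 0.33, 0.66])
od = np.array([0.10, 0.15, 0.32, 0.68, 1.05, 1.25, 1.30])

params, _ = curve_fit(four_pl, conc, od, p0=[0.1, 1.3, 0.08, 1.0])
bottom, top, ed50, hill = params   # ed50 is the concentration giving the half-maximal response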
NASA Astrophysics Data System (ADS)
Karpov, A. V.; Yumagulov, E. Z.
2003-05-01
We have restored and ordered the archive of meteor observations carried out with a meteor radar complex "KGU-M5" since 1986. A relational database has been formed under the control of the Database Management System (DBMS) Oracle 8. We also improved and tested a statistical method for studying the fine spatial structure of meteor streams with allowance for the specific features of application of the DBMS. Statistical analysis of the results of observations made it possible to obtain information about the substance distribution in the Quadrantid, Geminid, and Perseid meteor streams.
[Development of Hospital Equipment Maintenance Information System].
Zhou, Zhixin
2015-11-01
Hospital equipment maintenance information systems play an important role in improving medical treatment quality and efficiency. Based on a requirements analysis of hospital equipment maintenance, the system function diagram is drawn. From an analysis of the input and output data, and of the tables and reports connected with the equipment maintenance process, the relationships between entities and attributes are identified, the E-R diagram is drawn, and the relational database tables are established. The software development should meet the actual maintenance process requirements and provide a friendly user interface and flexible operation. The software can also analyze failure causes by statistical analysis.
MacBride-Stewart, Sean; Marwick, Charis; Houston, Neil; Watt, Iain; Patton, Andrea; Guthrie, Bruce
2017-01-01
Background: It is uncertain whether improvements in primary care high-risk prescribing seen in research trials can be realised in the real-world setting. Aim: To evaluate the impact of a 1-year system-wide phase IV prescribing safety improvement initiative, which included education, feedback, support to identify patients to review, and small financial incentives. Design and setting: An interrupted time series analysis of targeted high-risk prescribing in all 56 general practices in NHS Forth Valley, Scotland, was performed. In 2013–2014, this focused on high-risk non-steroidal anti-inflammatory drugs (NSAIDs) in older people and NSAIDs with oral anticoagulants; in 2014–2015, it focused on antipsychotics in older people. Method: The primary analysis used segmented regression analysis to estimate impact at the end of the intervention, and 12 months later. The secondary analysis used difference-in-difference methods to compare Forth Valley changes with those in NHS Greater Glasgow and Clyde (GGC). Results: In the primary analysis, the downward trends in all three NSAID measures that existed before the intervention steepened statistically significantly following its implementation. At the end of the intervention period, 1221 fewer patients than expected were prescribed a high-risk NSAID. In contrast, antipsychotic prescribing in older people increased slowly over time, with no intervention-associated change. In the secondary analysis, reductions at the end of the intervention period in all three NSAID measures were statistically significantly greater in NHS Forth Valley than in NHS GGC, but only significantly greater for two of these measures 12 months after the intervention finished. Conclusion: There were substantial and sustained reductions in the high-risk prescribing of NSAIDs, although with some waning of effect 12 months after the intervention ceased. The same intervention had no effect on antipsychotic prescribing in older people. PMID:28347986
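The segmented regression used in the primary analysis can be expressed as an ordinary least-squares model with a pre-intervention trend, a level-change term, and a trend-change term. The sketch below uses simulated monthly prescribing rates, not the Forth Valley data; all variable names are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated monthly high-risk prescribing rates: 24 months before, 12 after.
rng = np.random.default_rng(0)
n_pre, n_post = 24, 12
t = np.arange(n_pre + n_post)
post = (t >= n_pre).astype(int)
time_since = np.where(post == 1, t - n_pre + 1, 0)
rate = 10 - 0.05 * t - 1.5 * post - 0.1 * time_since + rng.normal(0, 0.3, t.size)

df = pd.DataFrame({"rate": rate, "t": t, "post": post, "time_since": time_since})
# Level change (post) and trend change (time_since) relative to the pre-intervention trend (t).
model = smf.ols("rate ~ t + post + time_since", data=df).fit()
print(model.params)
```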
MacBride-Stewart, Sean; Marwick, Charis; Houston, Neil; Watt, Iain; Patton, Andrea; Guthrie, Bruce
2017-05-01
It is uncertain whether improvements in primary care high-risk prescribing seen in research trials can be realised in the real-world setting. To evaluate the impact of a 1-year system-wide phase IV prescribing safety improvement initiative, which included education, feedback, support to identify patients to review, and small financial incentives. An interrupted time series analysis of targeted high-risk prescribing in all 56 general practices in NHS Forth Valley, Scotland, was performed. In 2013-2014, this focused on high-risk non-steroidal anti-inflammatory drugs (NSAIDs) in older people and NSAIDs with oral anticoagulants; in 2014-2015, it focused on antipsychotics in older people. The primary analysis used segmented regression analysis to estimate impact at the end of the intervention, and 12 months later. The secondary analysis used difference-in-difference methods to compare Forth Valley changes with those in NHS Greater Glasgow and Clyde (GGC). In the primary analysis, the downward trends in all three NSAID measures that existed before the intervention steepened statistically significantly following its implementation. At the end of the intervention period, 1221 fewer patients than expected were prescribed a high-risk NSAID. In contrast, antipsychotic prescribing in older people increased slowly over time, with no intervention-associated change. In the secondary analysis, reductions at the end of the intervention period in all three NSAID measures were statistically significantly greater in NHS Forth Valley than in NHS GGC, but only significantly greater for two of these measures 12 months after the intervention finished. There were substantial and sustained reductions in the high-risk prescribing of NSAIDs, although with some waning of effect 12 months after the intervention ceased. The same intervention had no effect on antipsychotic prescribing in older people. © British Journal of General Practice 2017.
Statistical Techniques for Assessing water‐quality effects of BMPs
Walker, John F.
1994-01-01
Little has been published on the effectiveness of various management practices in small rural lakes and streams at the watershed scale. In this study, statistical techniques were used to test for changes in water‐quality data from watersheds where best management practices (BMPs) were implemented. Reductions in data variability due to climate and seasonality were accomplished through the use of regression methods. This study discusses the merits of using storm‐mass‐transport data as a means of improving the ability to detect BMP effects on stream‐water quality. Statistical techniques were applied to suspended‐sediment records from three rural watersheds in Illinois for the period 1981–84. None of the techniques identified changes in suspended sediment, primarily because of the small degree of BMP implementation and because of potential errors introduced through the estimation of storm‐mass transport. A Monte Carlo sensitivity analysis was used to determine the level of discrete change that could be detected for each watershed. In all cases, the use of regressions improved the ability to detect trends.
NASA Astrophysics Data System (ADS)
Li, Jiangtong; Luo, Yongdao; Dai, Honglin
2018-01-01
Water is the source of life and the essential foundation of all life. With the development of industrialization, water pollution has become more and more frequent, directly affecting human survival and development. Water quality detection is one of the necessary measures to protect water resources. Ultraviolet (UV) spectral analysis is an important research method in the field of water quality detection, in which partial least squares regression (PLSR) has become the predominant technique; however, in some special cases PLSR produces considerable errors. To address this problem, the traditional principal component regression (PCR) method was improved in this paper by using the principle of PLSR. The experimental results show that, for some special experimental data sets, the improved PCR method performs better than PLSR. PCR and PLSR are the focus of this paper. First, principal component analysis (PCA) is performed in MATLAB to reduce the dimensionality of the spectral data; on the basis of a large number of experiments, the optimized principal components, which carry most of the original data information, are extracted using the principle of PLSR. Second, linear regression analysis of the principal components is carried out with the Statistical Package for the Social Sciences (SPSS), from which the coefficients and relations of the principal components can be obtained. Finally, the same water spectral data set is analysed with both PLSR and the improved PCR and the two results are compared: the improved PCR and PLSR are similar for most data, but the improved PCR is better than PLSR for data near the detection limit. Both PLSR and the improved PCR can be used in UV spectral analysis of water, but for data near the detection limit the improved PCR gives better results than PLSR.
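A minimal sketch of the two regression strategies compared above, principal component regression and partial least squares, is shown below using scikit-learn on hypothetical spectra. It is not the authors' MATLAB/SPSS workflow or their improved PCR variant; the data and number of components are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LinearRegression

# Hypothetical UV spectra (50 samples x 200 wavelengths) and a water-quality index.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 200))
y = X[:, :10].sum(axis=1) + rng.normal(scale=0.1, size=50)

# Principal component regression: reduce dimensionality, then regress on the scores.
scores = PCA(n_components=5).fit_transform(X)
pcr = LinearRegression().fit(scores, y)

# Partial least squares regression for comparison.
pls = PLSRegression(n_components=5).fit(X, y)
print("PCR R^2:", pcr.score(scores, y), "PLSR R^2:", pls.score(X, y))
```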
Badenes-Ribera, Laura; Frias-Navarro, Dolores; Pascual-Soler, Marcos; Monterde-I-Bort, Héctor
2016-11-01
The statistical reform movement and the American Psychological Association (APA) defend the use of effect-size estimators and their confidence intervals, as well as the interpretation of the clinical significance of findings. A survey was conducted in which academic psychologists were asked about their behavior in designing and carrying out their studies. The sample was composed of 472 participants (45.8% men). The mean number of years as a university professor was 13.56 (SD = 9.27). The use of effect-size estimators is becoming generalized, as is the consideration of meta-analytic studies. However, several inadequate practices still persist: a traditional model of methodological behavior based on statistical significance tests is maintained, reflected in the predominance of Cohen's d and the unadjusted R²/η², which are not immune to outliers, departures from normality, or violations of statistical assumptions, and in the under-reporting of confidence intervals for effect-size statistics. The paper concludes with recommendations for improving statistical practice.
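For readers wanting to report effect sizes with uncertainty rather than bare significance tests, the sketch below computes Cohen's d with an approximate confidence interval; a large-sample standard error formula is assumed and the two groups are hypothetical.

```python
import numpy as np
from scipy import stats

def cohens_d_ci(x, y, alpha=0.05):
    """Cohen's d for two independent groups with an approximate confidence interval."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    d = (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)
    # Large-sample approximation to the standard error of d.
    se = np.sqrt((nx + ny) / (nx * ny) + d**2 / (2 * (nx + ny)))
    z = stats.norm.ppf(1 - alpha / 2)
    return d, (d - z * se, d + z * se)

group_a = np.array([5.1, 6.2, 5.8, 7.0, 6.5])
group_b = np.array([4.2, 5.0, 4.8, 5.5, 4.9])
print(cohens_d_ci(group_a, group_b))
```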
Jager, Tjalling
2013-02-05
The individuals of a species are not equal. These differences frustrate experimental biologists and ecotoxicologists who wish to study the response of a species (in general) to a treatment. In the analysis of data, differences between model predictions and observations on individual animals are usually treated as random measurement error around the true response. These deviations, however, are mainly caused by real differences between the individuals (e.g., differences in physiology and in initial conditions). Understanding these intraspecies differences, and accounting for them in the data analysis, will improve our understanding of the response to the treatment we are investigating and allow for a more powerful, less biased, statistical analysis. Here, I explore a basic scheme for statistical inference to estimate parameters governing stress that allows individuals to differ in their basic physiology. This scheme is illustrated using a simple toxicokinetic-toxicodynamic model and a data set for growth of the springtail Folsomia candida exposed to cadmium in food. This article should be seen as proof of concept; a first step in bringing more realism into the statistical inference for process-based models in ecotoxicology.
Improved Test Planning and Analysis Through the Use of Advanced Statistical Methods
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Maxwell, Katherine A.; Glass, David E.; Vaughn, Wallace L.; Barger, Weston; Cook, Mylan
2016-01-01
The goal of this work is, through computational simulations, to provide statistically-based evidence to convince the testing community that a distributed testing approach is superior to a clustered testing approach for most situations. For clustered testing, numerous, repeated test points are acquired at a limited number of test conditions. For distributed testing, only one or a few test points are requested at many different conditions. The statistical techniques of Analysis of Variance (ANOVA), Design of Experiments (DOE) and Response Surface Methods (RSM) are applied to enable distributed test planning, data analysis and test augmentation. The D-Optimal class of DOE is used to plan an optimally efficient single- and multi-factor test. The resulting simulated test data are analyzed via ANOVA and a parametric model is constructed using RSM. Finally, ANOVA can be used to plan a second round of testing to augment the existing data set with new data points. The use of these techniques is demonstrated through several illustrative examples. To date, many thousands of comparisons have been performed and the results strongly support the conclusion that the distributed testing approach outperforms the clustered testing approach.
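A response-surface fit with an ANOVA summary, of the kind described above, can be sketched on simulated distributed-test data as follows. The factor names, model form, and data are illustrative assumptions, not the study's actual test matrix or D-optimal design.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical distributed-test data: one response measured at many (x1, x2) conditions.
rng = np.random.default_rng(2)
x1 = rng.uniform(-1, 1, 40)
x2 = rng.uniform(-1, 1, 40)
y = 3 + 2 * x1 - x2 + 1.5 * x1 * x2 + 0.8 * x2**2 + rng.normal(0, 0.2, 40)
df = pd.DataFrame({"x1": x1, "x2": x2, "y": y})

# Quadratic response-surface model fit by least squares; the ANOVA table assesses term significance.
rsm = smf.ols("y ~ x1 + x2 + x1:x2 + I(x1**2) + I(x2**2)", data=df).fit()
print(sm.stats.anova_lm(rsm, typ=2))
```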
Gonçalves, Hernâni; Pinto, Paula; Silva, Manuela; Ayres-de-Campos, Diogo; Bernardes, João
2016-04-01
Fetal heart rate (FHR) monitoring is used routinely in labor, but conventional methods have a limited capacity to detect fetal hypoxia/acidosis. An exploratory study was performed on the simultaneous assessment of maternal heart rate (MHR) and FHR variability, to evaluate their evolution during labor and their capacity to detect newborn acidemia. MHR and FHR were simultaneously recorded in 51 singleton term pregnancies during the last two hours of labor and compared with newborn umbilical artery blood (UAB) pH. Linear/nonlinear indices were computed separately for MHR and FHR. Interaction between MHR and FHR was quantified through the same indices on FHR-MHR and through their correlation and cross-entropy. Univariate and bivariate statistical analysis included nonparametric confidence intervals and statistical tests, receiver operating characteristic curves and linear discriminant analysis. Progression of labor was associated with a significant increase in most MHR and FHR linear indices, whereas entropy indices decreased. FHR alone and in combination with MHR as FHR-MHR evidenced the highest auROC values for prediction of fetal acidemia, with 0.76 and 0.88 for the UAB pH thresholds 7.20 and 7.15, respectively. The inclusion of MHR in the bivariate analysis achieved sensitivity and specificity values of nearly 100% and 89.1%, respectively. These results suggest that simultaneous analysis of MHR and FHR may improve the identification of fetal acidemia compared with FHR alone, namely during the last hour of labor.
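The univariate and bivariate receiver operating characteristic comparison described above can be sketched with synthetic indices as follows; the index values and group sizes are hypothetical stand-ins, not the study's recordings or analysis pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

# Hypothetical per-recording indices: an FHR entropy index, an MHR index, and acidemia labels.
rng = np.random.default_rng(9)
fhr_index = np.r_[rng.normal(1.0, 0.2, 40), rng.normal(0.7, 0.2, 11)]
mhr_index = np.r_[rng.normal(0.9, 0.2, 40), rng.normal(0.8, 0.2, 11)]
acidemia = np.r_[np.zeros(40), np.ones(11)]

# Univariate discrimination of the FHR index alone (lower index = higher risk, hence the sign flip).
print("auROC (FHR only):", roc_auc_score(acidemia, -fhr_index))

# Bivariate analysis: linear discriminant analysis on the FHR and MHR indices combined.
features = np.column_stack([fhr_index, mhr_index])
lda = LinearDiscriminantAnalysis().fit(features, acidemia)
print("auROC (FHR + MHR):", roc_auc_score(acidemia, lda.decision_function(features)))
```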
Modelling and analysis of FMS productivity variables by ISM, SEM and GTMA approach
NASA Astrophysics Data System (ADS)
Jain, Vineet; Raj, Tilak
2014-09-01
Productivity has often been cited as a key factor in flexible manufacturing system (FMS) performance, and actions to increase it are said to improve profitability and the wage-earning capacity of employees. Improving productivity is seen as a key issue for the long-term survival and success of a manufacturing system. The purpose of this paper is to model and analyse the productivity variables of FMS. The study was performed using several approaches, viz. interpretive structural modelling (ISM), structural equation modelling (SEM), graph theory and matrix approach (GTMA), and a cross-sectional survey within manufacturing firms in India. ISM was used to develop a model of productivity variables, which was then analysed. Exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) are powerful statistical techniques; CFA was carried out through SEM. EFA was applied to extract the factors in FMS using the Statistical Package for the Social Sciences (SPSS 20) software, and these factors were confirmed by CFA using the Analysis of Moment Structures (AMOS 20) software. Twenty productivity variables were identified from the literature and four factors were extracted that drive the productivity of FMS: people, quality, machine, and flexibility. SEM using AMOS 20 was used to fit the first-order four-factor structure. GTMA is a multiple attribute decision making (MADM) methodology used to quantify the intensity of productivity variables in an organization. An FMS productivity index is proposed to quantify the factors that affect FMS.
Lu, Yan; He, Tian
2014-09-15
Much attention has been recently paid to ex-post assessments of socioeconomic and environmental benefits of payment for ecosystem services (PES) programs on poverty reduction, water quality, and forest protection. To evaluate the effects of a regional PES program on water quality, we selected chemical oxygen demand (COD) and ammonia-nitrogen (NH3-N) as indicators of water quality. Statistical methods and an intervention analysis model were employed to assess whether the PES program produced substantial changes in water quality at 10 water-quality sampling stations in the Shaying River watershed, China during 2006-2011. Statistical results from paired-sample t-tests and box plots of COD and NH3-N concentrations at the 10 stations showed that the PES program has played a positive role in improving water quality and reducing trans-boundary water pollution in the Shaying River watershed. Using the intervention analysis model, we quantitatively evaluated the effects of the intervention policy, i.e., the watershed PES program, on water quality at the 10 stations. The results suggest that this method could be used to assess the environmental benefits of watershed or water-related PES programs, such as improvements in water quality, seasonal flow regulation, erosion and sedimentation, and aquatic habitat. Copyright © 2014 Elsevier B.V. All rights reserved.
Fulcher, Yan G.; Fotso, Martial; Chang, Chee-Hoon; Rindt, Hans; Reinero, Carol R.
2016-01-01
Asthma is prevalent in children and cats, and needs means of noninvasive diagnosis. We sought to distinguish noninvasively the differences in 53 cats before and soon after induction of allergic asthma, using NMR spectra of exhaled breath condensate (EBC). Statistical pattern recognition was improved considerably by preprocessing the spectra with probabilistic quotient normalization and glog transformation. Classification of the 106 preprocessed spectra by principal component analysis and partial least squares with discriminant analysis (PLS-DA) appears to be impaired by variances unrelated to eosinophilic asthma. By filtering out confounding variances, orthogonal signal correction (OSC) PLS-DA greatly improved the separation of the healthy and early asthmatic states, attaining 94% specificity and 94% sensitivity in predictions. OSC enhancement of multi-level PLS-DA boosted the specificity of the prediction to 100%. OSC-PLS-DA of the normalized spectra suggest the most promising biomarkers of allergic asthma in cats to include increased acetone, metabolite(s) with overlapped NMR peaks near 5.8 ppm, and a hydroxyphenyl-containing metabolite, as well as decreased phthalate. Acetone is elevated in the EBC of 74% of the cats with early asthma. The noninvasive detection of early experimental asthma, biomarkers in EBC, and metabolic perturbation invite further investigation of the diagnostic potential in humans. PMID:27764146
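Probabilistic quotient normalization and the glog transform mentioned above can be sketched as below, following their common definitions; the spectral matrix is synthetic and the transform offset parameter is an assumed value, not the study's setting.

```python
import numpy as np

def pqn_normalize(spectra):
    """Probabilistic quotient normalization: scale each spectrum by the median
    quotient against a reference spectrum (here, the median spectrum)."""
    reference = np.median(spectra, axis=0)
    quotients = spectra / reference
    factors = np.median(quotients, axis=1, keepdims=True)
    return spectra / factors

def glog(x, lam=1e-4):
    """Generalized log transform, stabilizing the variance of small intensities."""
    return np.log(x + np.sqrt(x**2 + lam))

# Hypothetical positive NMR intensities (samples x spectral bins).
rng = np.random.default_rng(3)
spectra = np.abs(rng.normal(loc=1.0, scale=0.2, size=(10, 100)))
processed = glog(pqn_normalize(spectra))
print(processed.shape)
```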
Vanderveen, J A; Bassler, D; Robertson, C M T; Kirpalani, H
2009-05-01
To determine, in a systematic review, whether interventions for infant development that involve parents improve neurodevelopment at 12 months corrected age or older. Randomized trials were identified in which an infant intervention aimed at improving development involved the parents of preterm infants, and long-term neurodevelopment was reported using standardized tests at 12 months (or longer). The identified studies (n=25) used a variety of interventions, including parent education, infant stimulation, home visits, and individualized developmental care. Meta-analysis at 12 months (N=2198 infants) found significantly higher mental (N=2198) and physical (N=1319) performance scores favoring the intervention group. At 24 months, mental (N=1490) performance scores were improved, but the improvement in physical (N=1025) performance scores was not statistically significant. The improvement in neurodevelopmental outcome was not sustained at 36 months (N=961) and 5 years (N=1017). Positive clinically meaningful effects (>5 points) are seen up to the age of 36 months, but are no longer present at 5 years.
Mansouri, Majdi; Nounou, Mohamed N; Nounou, Hazem N
2017-09-01
In our previous work, we demonstrated the effectiveness of the linear multiscale principal component analysis (PCA)-based moving window (MW)-generalized likelihood ratio test (GLRT) technique over the classical PCA and multiscale principal component analysis (MSPCA)-based GLRT methods. The developed fault detection algorithm provided optimal properties by maximizing the detection probability for a particular false alarm rate (FAR) with different window sizes; however, most real systems are nonlinear, and the linear PCA method cannot handle this nonlinearity to a great extent. Thus, in this paper, we first apply a nonlinear PCA to obtain an accurate principal component representation of a set of data and handle a wide range of nonlinearities using the kernel principal component analysis (KPCA) model, which is among the most popular nonlinear statistical methods. Second, we extend the MW-GLRT technique to one that applies exponential weights to the residuals in the moving window (instead of equal weighting), as this may further improve fault detection performance by reducing the FAR through an exponentially weighted moving average (EWMA). The developed detection method, called EWMA-GLRT, provides improved properties, such as smaller missed detection rates and FARs and a smaller average run length. The idea behind the developed EWMA-GLRT is to compute a new GLRT statistic that integrates current and previous data information in a decreasing exponential fashion, giving more weight to the more recent data. This provides a more accurate estimation of the GLRT statistic and a stronger memory that enables better decision making with respect to fault detection. Therefore, in this paper, a KPCA-based EWMA-GLRT method is developed and applied in practice to improve fault detection in biological phenomena modeled by S-systems and to enhance monitoring of the process mean. The idea behind the KPCA-based EWMA-GLRT fault detection algorithm is to combine the advantages of the proposed EWMA-GLRT fault detection chart with the KPCA model. It is used to enhance fault detection of the Cad System in E. coli model by monitoring some of the key variables involved in this model, such as enzymes, transport proteins, regulatory proteins, lysine, and cadaverine. The results demonstrate the effectiveness of the proposed KPCA-based EWMA-GLRT method over the Q, GLRT, EWMA, Shewhart, and moving window-GLRT methods. The detection performance is assessed and evaluated in terms of FAR, missed detection rates, and average run length (ARL1) values.
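A much-simplified sketch of two ingredients of the approach, a kernel PCA model of normal operation and an exponentially weighted monitoring statistic, is given below. It is not the authors' full EWMA-GLRT algorithm; the data, simulated fault, smoothing constant, and use of a reconstruction-error statistic are all assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(4)
train = rng.normal(size=(200, 5))                       # hypothetical normal-operation data
test = np.vstack([rng.normal(size=(50, 5)),
                  rng.normal(loc=1.5, size=(20, 5))])   # a simulated fault starting at sample 50

# KPCA model of normal operation; monitor the squared reconstruction error of new samples.
kpca = KernelPCA(n_components=3, kernel="rbf", fit_inverse_transform=True).fit(train)
spe = np.sum((test - kpca.inverse_transform(kpca.transform(test))) ** 2, axis=1)

# EWMA of the monitoring statistic: recent samples get exponentially larger weight.
lam = 0.2
ewma = np.zeros_like(spe)
ewma[0] = spe[0]
for i in range(1, len(spe)):
    ewma[i] = lam * spe[i] + (1 - lam) * ewma[i - 1]
print("EWMA statistic before and after the fault:", ewma[49], ewma[-1])
```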
Predicting long-term catchment nutrient export: the use of nonlinear time series models
NASA Astrophysics Data System (ADS)
Valent, Peter; Howden, Nicholas J. K.; Szolgay, Jan; Komornikova, Magda
2010-05-01
After the Second World War the nitrate concentrations in European water bodies changed significantly as a result of increased nitrogen fertilizer use and changes in land use. In recent decades, however, as a consequence of the implementation of nitrate-reducing measures in Europe, nitrate concentrations in water bodies have slowly decreased. As a result, the mean and variance of the observed time series also change with time (nonstationarity and heteroscedasticity). In order to detect changes and properly describe the behaviour of such time series, linear models (such as autoregressive (AR), moving average (MA) and autoregressive moving average (ARMA) models) are no longer suitable. Time series with sudden changes in statistical characteristics can cause various problems in the calibration of traditional water quality models and thus give biased predictions. Proper statistical analysis of these non-stationary and heteroscedastic time series, with the aim of detecting and subsequently explaining the variations in their statistical characteristics, requires the use of nonlinear time series models. This information can then be used to improve model building and calibration of conceptual water quality models, or to select the right calibration periods in order to produce reliable predictions. The objective of this contribution is to analyze two long time series of nitrate concentrations of the rivers Ouse and Stour with advanced nonlinear statistical modelling techniques and compare their performance with traditional linear models of the ARMA class in order to identify changes in the time series characteristics. The time series were analysed with nonlinear models with multiple regimes, represented by self-exciting threshold autoregressive (SETAR) and Markov-switching (MSW) models. The analysis showed that, based on the residual sum of squares (RSS) in both datasets, SETAR and MSW models described the time series better than models of the ARMA class. In most cases the relative improvement of SETAR models over first-order AR models was low, ranging between 1% and 4%, with the exception of the three-regime model for the River Stour time series, where the improvement was 48.9%. In comparison, the relative improvement of MSW models was between 44.6% and 52.5% for two-regime models and from 60.4% to 75% for three-regime models. However, visual assessment of the models plotted against the original datasets showed that, despite a higher RSS, some ARMA models could describe the analyzed time series better than AR, MA and SETAR models with lower RSS values. In both datasets MSW models provided a very good visual fit, describing most of the extreme values.
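A two-regime SETAR(1) fit can be sketched as a grid search over candidate thresholds that minimizes the residual sum of squares, as below. The series is simulated, and the delay, autoregressive order, and threshold grid are assumptions, not the models fitted to the Ouse and Stour records.

```python
import numpy as np

def fit_setar(y, candidates=None):
    """Two-regime SETAR(1): separate AR(1) fits below/above a threshold on the
    lagged value, choosing the threshold that minimizes the total RSS."""
    lagged, target = y[:-1], y[1:]
    if candidates is None:
        candidates = np.quantile(lagged, np.linspace(0.15, 0.85, 30))
    best_thr, best_rss = None, np.inf
    for thr in candidates:
        rss = 0.0
        for mask in (lagged <= thr, lagged > thr):
            if mask.sum() < 5:          # require a minimum number of points per regime
                rss = np.inf
                break
            X = np.column_stack([np.ones(mask.sum()), lagged[mask]])
            beta = np.linalg.lstsq(X, target[mask], rcond=None)[0]
            rss += np.sum((target[mask] - X @ beta) ** 2)
        if rss < best_rss:
            best_thr, best_rss = thr, rss
    return best_thr, best_rss

# Simulated nitrate-like series whose AR dynamics differ below and above zero.
rng = np.random.default_rng(5)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = (0.9 if y[t - 1] <= 0 else 0.3) * y[t - 1] + rng.normal(0, 0.5)
print("threshold, RSS:", fit_setar(y))
```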
Developing and Testing a Model to Predict Outcomes of Organizational Change
Gustafson, David H; Sainfort, François; Eichler, Mary; Adams, Laura; Bisognano, Maureen; Steudel, Harold
2003-01-01
Objective: To test the effectiveness of a Bayesian model employing subjective probability estimates for predicting success and failure of health care improvement projects. Data Sources: Experts' subjective assessment data for model development and independent retrospective data on 221 healthcare improvement projects in the United States, Canada, and the Netherlands collected between 1996 and 2000 for validation. Methods: A panel of theoretical and practical experts and literature in organizational change were used to identify factors predicting the outcome of improvement efforts. A Bayesian model was developed to estimate probability of successful change using subjective estimates of likelihood ratios and prior odds elicited from the panel of experts. A subsequent retrospective empirical analysis of change efforts in 198 health care organizations was performed to validate the model. Logistic regression and ROC analysis were used to evaluate the model's performance using three alternative definitions of success. Data Collection: For the model development, experts' subjective assessments were elicited using an integrative group process. For the validation study, a staff person intimately involved in each improvement project responded to a written survey asking questions about model factors and project outcomes. Results: Logistic regression chi-square statistics and areas under the ROC curve demonstrated a high level of model performance in predicting success. Chi-square statistics were significant at the 0.001 level and areas under the ROC curve were greater than 0.84. Conclusions: A subjective Bayesian model was effective in predicting the outcome of actual improvement projects. Additional prospective evaluations as well as testing the impact of this model as an intervention are warranted. PMID:12785571
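The core Bayesian calculation described above, combining prior odds with expert-elicited likelihood ratios, can be sketched as follows; the numerical values are hypothetical and conditional independence of the factors is assumed.

```python
import numpy as np

def posterior_probability(prior_odds, likelihood_ratios):
    """Combine prior odds of success with likelihood ratios for the observed
    factor levels (assuming conditional independence of the factors)."""
    posterior_odds = prior_odds * np.prod(likelihood_ratios)
    return posterior_odds / (1.0 + posterior_odds)

# Hypothetical values: prior odds of project success and LRs for three observed factors.
print(posterior_probability(prior_odds=0.8, likelihood_ratios=[2.5, 1.3, 0.7]))
```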
Schenone, Mauro; Ziebarth, Sarah; Duncan, Jose; Stokes, Lea; Hernandez, Angela
2018-02-05
To investigate the proportion of documented ultrasound findings that were unsupported by stored ultrasound images in the obstetric ultrasound unit, before and after the implementation of a quality improvement process consisting of a checklist and feedback. A quality improvement process was created involving utilization of a checklist and feedback from physician to sonographer. The feedback was based on findings of the physician's review of the report and images using a check list. To assess the impact of this process, two groups were compared. Group 1 consisted of 58 ultrasound reports created prior to initiation of the process. Group 2 included 65 ultrasound reports created after process implementation. Each chart was reviewed by a physician and a sonographer. Findings considered unsupported by stored images by both reviewers were used for analysis, and the proportion of unsupported findings was compared between the two groups. Results are expressed as mean ± standard error. A p value of < .05 was used to determine statistical significance. Univariate analysis of baseline characteristics and potential confounders showed no statistically significant difference between the groups. The mean proportion of unsupported findings in Group 1 was 5.1 ± 0.87, with Group 2 having a significantly lower proportion (2.6 ± 0.62) (p value = .018). Results suggest a significant decrease in the proportion of unsupported findings in ultrasound reports after quality improvement process implementation. Thus, we present a simple yet effective quality improvement process to reduce unsupported ultrasound findings.
Data mining and statistical inference in selective laser melting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamath, Chandrika
Selective laser melting (SLM) is an additive manufacturing process that builds a complex three-dimensional part, layer-by-layer, using a laser beam to fuse fine metal powder together. The design freedom afforded by SLM comes associated with complexity. As the physical phenomena occur over a broad range of length and time scales, the computational cost of modeling the process is high. At the same time, the large number of parameters that control the quality of a part make experiments expensive. In this paper, we describe ways in which we can use data mining and statistical inference techniques to intelligently combine simulations and experiments to build parts with desired properties. We start with a brief summary of prior work in finding process parameters for high-density parts. We then expand on this work to show how we can improve the approach by using feature selection techniques to identify important variables, data-driven surrogate models to reduce computational costs, improved sampling techniques to cover the design space adequately, and uncertainty analysis for statistical inference. Here, our results indicate that techniques from data mining and statistics can complement those from physical modeling to provide greater insight into complex processes such as selective laser melting.
Eldein, Hebatallah Nour; Mansour, Nadia M.; Mohamed, Samar F.
2013-01-01
Introduction: Family physicians are the first point of medical contact for most patients, and they come into contact with a large number of smokers. They are also well suited to offer effective counseling, because they already have some knowledge of their patients and their social environments. Aims: The present study was conducted to assess family physicians' knowledge, attitude and practice of smoking cessation counseling, with the aim of improving the quality of smoking cessation counseling among family physicians. Materials and Methods: The study was a descriptive analytic cross-sectional study conducted within family medicine centers. The sample was comprehensive and included 75 family physicians, who were asked to fill in a previously validated anonymous questionnaire collecting data on their personal characteristics, knowledge, attitude and practice of smoking cessation counseling, barriers, and recommendations. Scores equal to or above the mean were used as the cut-off point for the best scores for knowledge, attitude and practice. Statistical Analysis: SPSS version 18 was used for data entry and statistical analysis. Results: The best knowledge, attitude and practice scores among family physicians in the study sample were 45.3%, 93.3% and 44%, respectively. Age (P = 0.039) and qualification of family physicians (P = 0.04) were significant variables with respect to knowledge scores, while there was no statistically significant association between the personal characteristics of family physicians and their attitude or practice scores regarding smoking cessation counseling. More than half of the family physicians recommended training to improve their smoking cessation counseling. Conclusions: Favorable attitude scores of family physicians exceeded their knowledge and practice scores. The need for knowledge and training is a stimulus to design an educational intervention to improve the quality of smoking cessation counseling. PMID:24479071
Motivating factors among Iranian nurses
Negarandeh, Reza; Dehghan-Nayeri, Nahid; Ghasemi, Elham
2015-01-01
Background: One of the most important challenges of the Iranian health care system is “quality of care,” and it is assumed that motivated nurses are more ready to provide better care. There are limited studies investigating Iranian nurses’ motivation; however, the factors which motivate them have not been studied yet. Identifying the motivating factors enables nurse managers to inspire nurses for continuous quality improvement. The aim of this study was to identify motivating factors for Iranian hospital nurses. Materials and Methods: This is a cross-sectional descriptive study in which 310 nurses working at 14 hospitals of Tehran University of Medical Sciences were selected by proportionate stratified random sampling. Data were collected in 2010 by a researcher-developed questionnaire. Descriptive statistics and independent t-test, analysis of variance, Tukey post-hoc test, Chi-Square and Fisher's exact test were used for statistical analysis by Statistical Package for Social Sciences (SPSS) version 16. Results: The mean score of motivation was 90.53 ± 10.76 (range: 59–121). Four motivating factors including “career development” (22.63 ± 5.66), “job characteristics” (34.29 ± 4), “job authority” (18.48 ± 2.79), and “recognition” (15.12 ± 2.5) were recognized. The lowest mean motivation score, considering the number of items, was 3.23 for career development, while the highest mean was 3.81 for job characteristics. Conclusions: The findings showed that motivation of nurses was at a medium level, which calls for improvement. The factors that have the greatest potential to motivate nurses were identified in this study, and they can help managers to achieve the goal of continuous quality improvement. PMID:26257797
The effect of endoscopic olfactory cleft polyp removal on olfaction.
Kuperan, Arjuna B; Lieberman, Seth M; Jourdy, Deya N; Al-Bar, Mohammad H; Goldstein, Bradley J; Casiano, Roy R
2015-01-01
The presence of olfactory cleft polyps in chronic rhinosinusitis with nasal polyposis is well documented, but the effect of endoscopic olfactory cleft polyp surgery on olfaction, versus observation, has not been well studied. This analysis assessed whether microdebridement of olfactory cleft polyps yields significant objective smell improvements in those with anosmia or hyposmia. A randomized prospective single-blinded study was performed on patients undergoing bilateral endoscopic sinus surgery with profound bilateral nasal polyposis, excluding those younger than 18 years or without olfactory polyps. A preoperative University of Pennsylvania Smell Identification Test (UPSIT), visual analog scale (VAS), and sinonasal outcomes 20 score (SNOT-20) were administered, with follow-up at 6 months. Two cohorts were created, one with cleft polyp removal (group A) and one with cleft polyps left in place (group B). There were 10 patients in group A and 7 in group B. Using the Wilcoxon signed rank test, the two groups were individually analyzed for changes in the preoperative UPSIT, VAS, and SNOT-20 versus the 6-month test results. In group A, the improvements in the UPSIT, VAS, and SNOT-20 were statistically significant at p < 0.05. For group B, only the improvement in the VAS was statistically significant, at p < 0.05. There was a statistically significant difference in clinical smell improvement between groups A and B at 6 months (p = 0.00512). Evidence exists that olfactory cleft polyp surgery improves olfactory function outcomes. Long-term data beyond 6 months are needed to further validate these early promising outcomes.
NASA Astrophysics Data System (ADS)
Shan, X.; Zhang, K.; Zhuang, Y.; Fu, R.; Hong, Y.
2017-12-01
Seasonal prediction of rainfall during the dry-to-wet transition season in austral spring (September-November) over southern Amazonia is central to improving crop planting and fire mitigation in that region. Previous studies have identified the key large-scale atmospheric dynamic and thermodynamic pre-conditions during the dry season (June-August) that influence rainfall anomalies during the dry-to-wet transition season over southern Amazonia. Based on these key dry-season pre-conditions, we have evaluated several statistical models and developed a neural network based statistical prediction system to predict rainfall during the dry-to-wet transition for southern Amazonia (5-15°S, 50-70°W). Multivariate empirical orthogonal function (EOF) analysis is applied to the following four fields during JJA from the ECMWF Reanalysis (ERA-Interim), spanning 1979 to 2015: geopotential height at 200 hPa, surface relative humidity, convective inhibition (CIN) index, and convective available potential energy (CAPE), to filter out noise and highlight the most coherent spatial and temporal variations. The first 10 EOF modes, accounting for at least 70% of the total variance in the predictor fields, are retained as inputs to the statistical models. We have tested several linear and non-linear statistical methods. While the regularized Ridge and Lasso regressions can generally capture the spatial pattern and magnitude of rainfall anomalies, we found that the neural network performs best, with an accuracy greater than 80%, as expected from the non-linear dependence of the rainfall on the large-scale atmospheric thermodynamic conditions and circulation. Further tests of various prediction skill metrics and hindcasts also suggest that this neural network approach can significantly improve seasonal prediction skill relative to dynamical predictions and regression-based statistical predictions. Thus, this statistical prediction system shows potential to improve real-time seasonal rainfall predictions in the future.
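The EOF-plus-neural-network pipeline described above can be sketched with scikit-learn as follows; the predictor fields, target, and network size are synthetic stand-ins, not the ERA-Interim data or the authors' trained model.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Hypothetical stand-in data: 37 "years" of flattened dry-season predictor fields
# and a rainfall-anomaly target for the transition season.
rng = np.random.default_rng(6)
fields = rng.normal(size=(37, 400))
rain = fields[:, :5].mean(axis=1) + 0.1 * rng.normal(size=37)

# EOF analysis via PCA: retain the 10 leading modes as predictors.
pcs = PCA(n_components=10).fit_transform(fields)
X_tr, X_te, y_tr, y_te = train_test_split(pcs, rain, test_size=0.3, random_state=0)

# Small feed-forward neural network trained on the leading principal components.
nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X_tr, y_tr)
print("Hindcast R^2:", nn.score(X_te, y_te))
```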
Shek, Samantha Y N; Yeung, Chi K; Chan, Johnny C Y; Chan, Henry H L
2016-02-01
Several studies have been published on the first-generation non-thermal focused ultrasound, with an average improvement of 0-3.95 cm reported. We aimed to investigate the efficacy of the second-generation non-thermal focused ultrasound device with a combined radiofrequency handpiece. With the addition of radiofrequency energy, the temperature of the adipose tissue is raised before focused ultrasound is applied, which facilitates the mechanical disruption of fat cells by focused ultrasound. Twenty subjects were recruited and underwent three treatments biweekly. Caliper readings, abdominal circumference, and standardized photographs were taken with the Vectra(®) system at all visits. Subjects were asked to stand in the same position, and photographs were taken after exhalation. Caliper and circumference measurements carry uncertainty; it is impossible to eliminate all uncertainty, but it can be reduced by having the same trained physician assistant perform the measurement at the same site and taking the average of three readings. Pain score and satisfaction were recorded by means of a visual analogue scale. Efficacy was defined as a statistically significant improvement in circumference based on an intention-to-treat analysis. Seventeen subjects completed the treatment schedule. Abdominal circumference showed statistically significant improvement at 2 weeks after the second treatment (P = 0.023) and at almost all subsequent follow-ups. Caliper readings showed statistically significant improvement at 2 weeks after the second treatment (P = 0.013) and at almost all follow-ups. The mean pain score reported was 2.3 on the visual analog scale, and 6% were unsatisfied with the overall treatments. Six incidents of wheal formation appeared immediately after treatment, all of which subsided spontaneously within several hours. The combination non-thermal focused ultrasound and radiofrequency device is effective for improving body contour in Asians. © 2015 Wiley Periodicals, Inc.
Yongqiang Liu
2003-01-01
It was suggested in a recent statistical correlation analysis that predictability of monthly-seasonal precipitation could be improved by using coupled singular value decomposition (SVD) patterns between soil moisture and precipitation instead of their values at individual locations. This study provides predictive evidence for this suggestion by comparing skills of two...
An In-Depth Analysis of Adult Learning Policies and Their Effectiveness in Europe
ERIC Educational Resources Information Center
European Union, 2015
2015-01-01
Adult learning policies, like any other policies, need to be effective: they need to reach their objectives and attain the desired impacts, which should be carefully defined. Understanding the performance of policies allows policy makers to change and improve them. A growing body of research and statistics provides important insights into how…
The Application of SPSS in Analyzing the Effect of English Vocabulary Strategy Instruction
ERIC Educational Resources Information Center
Chen, Shaoying
2010-01-01
Vocabulary learning is a very important part of college English teaching. Correct analysis of the results of vocabulary strategy instruction can offer feedback for English teaching and help teachers improve their teaching methods. In this article, the issue of how to use SPSS (Statistical Package for the Social Sciences) to…
Jansma, J Martijn; de Zwart, Jacco A; van Gelderen, Peter; Duyn, Jeff H; Drevets, Wayne C; Furey, Maura L
2013-05-15
Technical developments in MRI have improved the signal-to-noise ratio, allowing the use of analysis methods such as finite impulse response (FIR) analysis of rapid event-related functional MRI (er-fMRI). FIR is one of the most informative analysis methods, as it determines the onset and full shape of the hemodynamic response function (HRF) without any a priori assumptions. FIR is, however, vulnerable to multicollinearity, which is directly related to the distribution of stimuli over time. Efficiency can be optimized by simplifying a design and restricting the stimulus distribution to specific sequences, while more design flexibility necessarily reduces efficiency. However, the actual effect of efficiency on fMRI results has never been tested in vivo. Thus, it is currently difficult to make an informed choice between protocol flexibility and statistical efficiency. The main goal of this study was to assign concrete fMRI signal-to-noise values to the abstract scale of FIR statistical efficiency. Ten subjects repeated a perception task with five random and m-sequence-based protocols, with varying but, according to the literature, acceptable levels of multicollinearity. Results indicated substantial differences in signal standard deviation, the level of which was a function of multicollinearity. Experimental protocols varied by up to 55.4% in standard deviation. The results confirm that the quality of fMRI in an FIR analysis can significantly and substantially vary with statistical efficiency. Our in vivo measurements can be used to aid in making an informed decision between freedom in protocol design and statistical efficiency. Published by Elsevier B.V.
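FIR estimation rests on a design matrix of lagged stimulus indicators, and its statistical efficiency can be summarized from that matrix. The sketch below compares two hypothetical stimulus schedules using one common efficiency summary (the inverse trace of the parameter covariance); the schedules and the choice of metric are assumptions, not the study's protocols or exact efficiency measure.

```python
import numpy as np

def fir_design_matrix(onsets, n_scans, n_lags):
    """FIR design matrix: one column per post-stimulus lag, 1 where a stimulus
    occurred that many scans earlier."""
    X = np.zeros((n_scans, n_lags))
    for onset in onsets:
        for lag in range(n_lags):
            if onset + lag < n_scans:
                X[onset + lag, lag] = 1.0
    return X

def efficiency(X):
    """A common design-efficiency summary: inverse trace of the parameter covariance."""
    return 1.0 / np.trace(np.linalg.inv(X.T @ X))

n_scans, n_lags = 400, 10
rng = np.random.default_rng(7)
random_onsets = np.sort(rng.choice(np.arange(0, n_scans - n_lags, 2), size=60, replace=False))
regular_onsets = np.arange(0, 60 * 6, 6)   # evenly spaced events for comparison
print(efficiency(fir_design_matrix(random_onsets, n_scans, n_lags)),
      efficiency(fir_design_matrix(regular_onsets, n_scans, n_lags)))
```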
Morales, Daniel R; Flynn, Rob; Zhang, Jianguo; Trucco, Emmanuel; Quint, Jennifer K; Zutis, Kris
2018-05-01
Several models for predicting the risk of death in people with chronic obstructive pulmonary disease (COPD) exist but have not undergone large-scale validation in primary care. The objective of this study was to externally validate these models using statistical and machine learning approaches. We used a primary care COPD cohort identified using data from the UK Clinical Practice Research Datalink. Age-standardised mortality rates were calculated for the population by gender, and the discrimination of ADO (age, dyspnoea, airflow obstruction), COTE (COPD-specific comorbidity test), DOSE (dyspnoea, airflow obstruction, smoking, exacerbations) and CODEX (comorbidity, dyspnoea, airflow obstruction, exacerbations) at predicting death over 1-3 years was measured using logistic regression and a support vector machine (SVM) learning method. The age-standardised mortality rate was 32.8 (95%CI 32.5-33.1) and 25.2 (95%CI 25.4-25.7) per 1000 person-years for men and women, respectively. Complete data were available for 54879 patients to predict 1-year mortality. ADO performed best (c-statistic 0.730) compared with DOSE (c-statistic 0.645), COTE (c-statistic 0.655) and CODEX (c-statistic 0.649) at predicting 1-year mortality. Discrimination of ADO and DOSE improved at predicting 1-year mortality when combined with COTE comorbidities (c-statistic 0.780 ADO + COTE; c-statistic 0.727 DOSE + COTE). Discrimination did not change significantly over 1-3 years. Comparable results were observed using SVM. In primary care, ADO appears superior at predicting death in COPD. The performance of ADO and DOSE improved when combined with COTE comorbidities, suggesting better models may be generated with additional data facilitated by novel approaches. Copyright © 2018. Published by Elsevier Ltd.
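Discrimination statistics of the kind reported above can be sketched by fitting a logistic model and computing the area under the ROC curve (the c-statistic). The patient-level data below are simulated, and the score names are stand-ins for indices such as ADO and COTE, not the validated instruments.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

# Hypothetical cohort: a prognostic score, a comorbidity count, and 1-year death.
rng = np.random.default_rng(10)
n = 2000
score = rng.integers(0, 11, n)
comorbidity = rng.poisson(1.0, n)
p = 1 / (1 + np.exp(-(-4.0 + 0.35 * score + 0.4 * comorbidity)))
died = rng.binomial(1, p)
df = pd.DataFrame({"score": score, "comorbidity": comorbidity, "died": died})

# c-statistic of the score alone and with comorbidities added.
for formula in ("died ~ score", "died ~ score + comorbidity"):
    fit = smf.logit(formula, data=df).fit(disp=0)
    print(formula, "c-statistic:", round(roc_auc_score(df["died"], fit.predict(df)), 3))
```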
NASA Astrophysics Data System (ADS)
Hendikawati, P.; Arifudin, R.; Zahid, M. Z.
2018-03-01
This study aims to design an Android statistics data analysis application that can be accessed through mobile devices, making it easier for users to access. The application covers various topics in basic statistics along with parametric statistical data analysis. The output of the system is parametric statistical data analysis that can be used by students, lecturers, and other users who need the results of statistical calculations quickly and in an easily understood form. The Android application is developed using the Java programming language; the server-side language is PHP with the CodeIgniter framework, and the database used is MySQL. The system development methodology is the Waterfall methodology, with the stages of analysis, design, coding, testing, implementation, and system maintenance. This statistical data analysis application is expected to support statistics lectures and make it easier for students to understand statistical analysis on mobile devices.
Research of facial feature extraction based on MMC
NASA Astrophysics Data System (ADS)
Xue, Donglin; Zhao, Jiufen; Tang, Qinhong; Shi, Shaokun
2017-07-01
Based on the maximum margin criterion (MMC), a new algorithm for statistically uncorrelated optimal discriminant vectors and a new algorithm for orthogonal optimal discriminant vectors for feature extraction were proposed. The purpose of the maximum margin criterion is to maximize the inter-class scatter while simultaneously minimizing the intra-class scatter after projection. Compared with the original MMC method and the principal component analysis (PCA) method, the proposed methods are better at reducing or eliminating the statistical correlation between features and at improving the recognition rate. The experimental results on the Olivetti Research Laboratory (ORL) face database show that the new feature extraction method based on the statistically uncorrelated maximum margin criterion (SUMMC) is better in terms of recognition rate and stability. In addition, the relations between the maximum margin criterion and the Fisher criterion for feature extraction were revealed.
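A minimal sketch of feature extraction by the maximum margin criterion, projecting onto the leading eigenvectors of the difference between between-class and within-class scatter, is given below. The data are synthetic stand-ins for face feature vectors, and this is the basic MMC projection rather than the proposed SUMMC variant.

```python
import numpy as np

def mmc_projection(X, y, n_components):
    """Maximum margin criterion: project onto the leading eigenvectors of
    (between-class scatter - within-class scatter)."""
    overall_mean = X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for label in np.unique(y):
        Xc = X[y == label]
        diff = (Xc.mean(axis=0) - overall_mean)[:, None]
        Sb += Xc.shape[0] * diff @ diff.T
        Sw += (Xc - Xc.mean(axis=0)).T @ (Xc - Xc.mean(axis=0))
    eigvals, eigvecs = np.linalg.eigh(Sb - Sw)          # ascending eigenvalues
    return eigvecs[:, ::-1][:, :n_components]           # columns for the largest eigenvalues

# Hypothetical "face" feature vectors for three classes.
rng = np.random.default_rng(8)
X = np.vstack([rng.normal(loc=m, size=(20, 30)) for m in (0.0, 1.0, 2.0)])
y = np.repeat([0, 1, 2], 20)
W = mmc_projection(X, y, n_components=2)
print((X @ W).shape)
```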
OSO 8 observational limits to the acoustic coronal heating mechanism
NASA Technical Reports Server (NTRS)
Bruner, E. C., Jr.
1981-01-01
An improved analysis of time-resolved line profiles of the C IV resonance line at 1548 A has been used to test the acoustic wave hypothesis of solar coronal heating. It is shown that the observed motions and brightness fluctuations are consistent with the existence of acoustic waves. Specific account is taken of the effect of photon statistics on the observed velocities, and a test is devised to determine whether the motions represent propagating or evanescent waves. It is found that, on average, about as much energy is carried upward as downward, such that the net acoustic flux density is statistically consistent with zero. The statistical uncertainty in this null result is three orders of magnitude lower than the flux level needed to heat the corona.
Dart, Lyn; Vanbeber, Anne; Smith-Barbaro, Peggy; Costilla, Vanessa; Samuel, Charlotte; Terregino, Carol A.; Abali, Emine Ercikan; Dollinger, Beth; Baumgartner, Nicole; Kramer, Nicholas; Seelochan, Alex; Taher, Sabira; Deutchman, Mark; Evans, Meredith; Ellis, Robert B.; Oyola, Sonia; Maker-Clark, Geeta; Budnick, Isadore; Tran, David; DeValle, Nicole; Shepard, Rachel; Chow, Erika; Petrin, Christine; Razavi, Alexander; McGowan, Casey; Grant, Austin; Bird, Mackenzie; Carry, Connor; McGowan, Glynis; McCullough, Colleen; Berman, Casey M.; Dotson, Kerri; Sarris, Leah; Harlan, Timothy S.; Co-investigators, on behalf of the CHOP
2018-01-01
Background: Cardiovascular disease (CVD) annually claims more lives and costs more dollars than any other disease globally amid widening health disparities, despite the known significant reductions in this burden by low cost dietary changes. The world's first medical school-based teaching kitchen therefore launched CHOP-Medical Students as the largest known multisite cohort study of hands-on cooking and nutrition education versus traditional curriculum for medical students. Methods: This analysis provides a novel integration of artificial intelligence-based machine learning (ML) with causal inference statistics. 43 ML automated algorithms were tested, with the top performer compared to triply robust propensity score-adjusted multilevel mixed effects regression panel analysis of longitudinal data. Inverse-variance weighted fixed effects meta-analysis pooled the individual estimates for competencies. Results: 3,248 unique medical trainees met study criteria from 20 medical schools nationally from August 1, 2012, to June 26, 2017, generating 4,026 completed validated surveys. ML analysis produced similar results to the causal inference statistics based on root mean squared error and accuracy. Hands-on cooking and nutrition education compared to traditional medical school curriculum significantly improved student competencies (OR 2.14, 95% CI 2.00–2.28, p < 0.001) and MedDiet adherence (OR 1.40, 95% CI 1.07–1.84, p = 0.015), while reducing trainees' soft drink consumption (OR 0.56, 95% CI 0.37–0.85, p = 0.007). Overall improved competencies were demonstrated from the initial study site through the scale-up of the intervention to 10 sites nationally (p < 0.001). Discussion: This study provides the first machine learning-augmented causal inference analysis of a multisite cohort showing hands-on cooking and nutrition education for medical trainees improves their competencies counseling patients on nutrition, while improving students' own diets. This study suggests that the public health and medical sectors can unite population health management and precision medicine for a sustainable model of next-generation health systems providing effective, equitable, accessible care beginning with reversing the CVD epidemic. PMID:29850526
Monlezun, Dominique J; Dart, Lyn; Vanbeber, Anne; Smith-Barbaro, Peggy; Costilla, Vanessa; Samuel, Charlotte; Terregino, Carol A; Abali, Emine Ercikan; Dollinger, Beth; Baumgartner, Nicole; Kramer, Nicholas; Seelochan, Alex; Taher, Sabira; Deutchman, Mark; Evans, Meredith; Ellis, Robert B; Oyola, Sonia; Maker-Clark, Geeta; Dreibelbis, Tomi; Budnick, Isadore; Tran, David; DeValle, Nicole; Shepard, Rachel; Chow, Erika; Petrin, Christine; Razavi, Alexander; McGowan, Casey; Grant, Austin; Bird, Mackenzie; Carry, Connor; McGowan, Glynis; McCullough, Colleen; Berman, Casey M; Dotson, Kerri; Niu, Tianhua; Sarris, Leah; Harlan, Timothy S; Co-Investigators, On Behalf Of The Chop
2018-01-01
Cardiovascular disease (CVD) annually claims more lives and costs more dollars than any other disease globally amid widening health disparities, despite the known significant reductions in this burden by low cost dietary changes. The world's first medical school-based teaching kitchen therefore launched CHOP-Medical Students as the largest known multisite cohort study of hands-on cooking and nutrition education versus traditional curriculum for medical students. This analysis provides a novel integration of artificial intelligence-based machine learning (ML) with causal inference statistics. 43 ML automated algorithms were tested, with the top performer compared to triply robust propensity score-adjusted multilevel mixed effects regression panel analysis of longitudinal data. Inverse-variance weighted fixed effects meta-analysis pooled the individual estimates for competencies. 3,248 unique medical trainees met study criteria from 20 medical schools nationally from August 1, 2012, to June 26, 2017, generating 4,026 completed validated surveys. ML analysis produced similar results to the causal inference statistics based on root mean squared error and accuracy. Hands-on cooking and nutrition education compared to traditional medical school curriculum significantly improved student competencies (OR 2.14, 95% CI 2.00-2.28, p < 0.001) and MedDiet adherence (OR 1.40, 95% CI 1.07-1.84, p = 0.015), while reducing trainees' soft drink consumption (OR 0.56, 95% CI 0.37-0.85, p = 0.007). Overall improved competencies were demonstrated from the initial study site through the scale-up of the intervention to 10 sites nationally ( p < 0.001). This study provides the first machine learning-augmented causal inference analysis of a multisite cohort showing hands-on cooking and nutrition education for medical trainees improves their competencies counseling patients on nutrition, while improving students' own diets. This study suggests that the public health and medical sectors can unite population health management and precision medicine for a sustainable model of next-generation health systems providing effective, equitable, accessible care beginning with reversing the CVD epidemic.
"Geo-statistics methods and neural networks in geophysical applications: A case study"
NASA Astrophysics Data System (ADS)
Rodriguez Sandoval, R.; Urrutia Fucugauchi, J.; Ramirez Cruz, L. C.
2008-12-01
The study focuses on the Ebano-Panuco basin of northeastern Mexico, which is being explored for hydrocarbon reservoirs. These reservoirs are in limestones, and there is interest in determining porosity and permeability in the carbonate sequences. The porosity maps presented in this study are estimated from the application of multiattribute and neural network techniques, which combine geophysical logs and 3-D seismic data by means of statistical relationships. Multiattribute analysis is a process for predicting a volume of any underground petrophysical measurement from well-log and seismic data. The data consist of a series of target logs from wells which tie a 3-D seismic volume; the target logs are neutron porosity logs. From the 3-D seismic volume a series of sample attributes is calculated. The objective of this study is to derive a relationship between a set of attributes and the target log values. The selected set is determined by a process of forward stepwise regression. The analysis can be linear or nonlinear. In the linear mode the method consists of a series of weights derived by least-squares minimization. In the nonlinear mode, a neural network is trained using the selected attributes as inputs; in this case we used a probabilistic neural network (PNN). The method is applied to a real data set from PEMEX. For better reservoir characterization the porosity distribution was estimated using both techniques. The case showed a continuous improvement in the prediction of the porosity from the multiattribute to the neural network analysis. The improvement is in both the training and the validation, which are important indicators of the reliability of the results. The neural network showed an improvement in resolution over the multiattribute analysis. The final maps provide more realistic results for the porosity distribution.
Surgical adverse outcome reporting as part of routine clinical care.
Kievit, J; Krukerink, M; Marang-van de Mheen, P J
2010-12-01
In The Netherlands, health professionals have created a doctor-driven standardised system to report and analyse adverse outcomes (AO). The aim is to improve healthcare by learning from past experiences. The key elements of this system are (1) an unequivocal definition of an adverse outcome, (2) appropriate contextual information and (3) a three-dimensional hierarchical classification system. The objectives were, first, to assess whether routine doctor-driven AO reporting is feasible and, second, to investigate how doctors can learn from AO reporting and analysis to improve the quality of care. Feasibility was assessed by how well doctors reported AO in the surgical department of a Dutch university hospital over a period of 9 years. AO incidence was analysed per patient subgroup and over time, in a time-trend analysis of three equal 3-year periods (P1-P3). AO were analysed case by case and statistically, to learn lessons from past events. In 19,907 surgical admissions, 9,189 AOs were reported: one or more AO in 18.2% of admissions. On average, 55 lessons were learnt each year (in 4.3% of AO). More AO were reported in the third period (P3) than in the first (P1) (OR 1.39 (1.23-1.57)). Although minor AO increased, fatal AO decreased over time (OR 0.59 (0.45-0.77)). Doctor-driven AO reporting is shown to be feasible. Lessons can be learnt from case-by-case analyses of individual AO, as well as from statistical analysis of AO groups and subgroups (illustrated by time-trend analysis), thus contributing to the improvement of the quality of care. Moreover, by standardising AO reporting, data can be compared across departments or hospitals, to generate (confidential) mirror information for professionals cooperating in a peer-review setting.
Huynh, Lynn; Totev, Todor; Vekeman, Francis; Neary, Maureen P; Duh, Mei S; Benson, Al B
2017-09-01
To calculate the cost reduction associated with diarrhea/flushing symptom resolution/improvement following treatment with above-standard-dose octreotide-LAR, from the commercial payor's perspective. Diarrhea and flushing are two major carcinoid syndrome symptoms of neuroendocrine tumor (NET). Previously, a study of NET patients from three US tertiary oncology centers (NET 3-Center Study) demonstrated that dose escalation of octreotide-LAR to above-standard dose resolved/improved diarrhea/flushing in 79% of patients within 1 year. Time-course data on diarrhea/flushing symptoms were collected from the NET 3-Center Study, and daily healthcare costs were calculated from a commercial claims database analysis. For the patient cohort experiencing any diarrhea/flushing symptom resolution/improvement, the observation period was divided into days of symptom resolution/improvement and days of no improvement, which were then multiplied by the respective daily healthcare costs and summed over 1 year to yield the blended mean annual cost per patient. For patients who experienced no diarrhea/flushing symptom improvement, the mean annual healthcare cost was calculated by applying the daily cost of diarrhea/flushing across the full 1-year period. The economic model found that 108 NET patients who experienced diarrhea/flushing symptom resolution/improvement within 1 year had a statistically significantly lower mean annual healthcare cost per patient than patients with no symptom improvement, by $14,766 (p = .03). For the subset of 85 patients experiencing resolution/improvement of diarrhea, the cost reduction was more pronounced, at $18,740 (p = .01), statistically significantly lower than for those with no improvement; outpatient costs accounted for 56% of the cost reduction (p = .02), while inpatient, emergency department, and pharmacy costs accounted for the remaining 44%. The economic model relied on two different sources of data, with some heterogeneity in the prior treatment and disease status of patients. Symptom resolution/improvement of diarrhea/flushing after treatment with an above-standard dose of octreotide-LAR in NET was associated with a statistically significant healthcare cost decrease compared with a scenario of no symptom improvement.
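A minimal sketch of the blended annual-cost arithmetic described above, assuming hypothetical daily costs and day counts (the dollar values are placeholders, not the study's estimates):

```python
# Hypothetical daily healthcare costs (USD) and day counts over a 1-year window.
cost_per_day_improved   = 150.0   # days with diarrhea/flushing resolved/improved
cost_per_day_unimproved = 230.0   # days with no symptom improvement

days_improved = 280
days_unimproved = 365 - days_improved

blended_annual_cost = (days_improved * cost_per_day_improved
                       + days_unimproved * cost_per_day_unimproved)
print(f"Blended mean annual cost per patient: ${blended_annual_cost:,.0f}")
```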
Webster, R J; Williams, A; Marchetti, F; Yauk, C L
2018-07-01
Mutations in germ cells pose potential genetic risks to offspring. However, de novo mutations are rare events that are spread across the genome and are difficult to detect. Thus, studies in this area have generally been under-powered, and no human germ cell mutagen has been identified. Whole Genome Sequencing (WGS) of human pedigrees has been proposed as an approach to overcome these technical and statistical challenges. WGS enables analysis of a much wider breadth of the genome than traditional approaches. Here, we performed power analyses to determine the feasibility of using WGS in human families to identify germ cell mutagens. Different statistical models were compared in the power analyses (ANOVA and multiple regression for one-child families, and a mixed-effects model with two to four siblings sampled per family). Assumptions were based on parameters from the existing literature, such as the mutation-by-paternal-age effect. We explored two scenarios: a constant effect due to an exposure that occurred in the past, and an accumulating effect where the exposure is continuing. Our analysis revealed the importance of modeling inter-family variability of the mutation-by-paternal-age effect. Statistical power was improved by models accounting for this family-to-family variability. Our power analyses suggest that sufficient statistical power can be attained with 4-28 four-sibling families per treatment group when the increase in mutations ranges from 40% down to 10%, respectively. Modeling family variability using mixed-effects models reduced the required sample size compared with a multiple regression approach. Much larger sample sizes were required to detect an interaction effect between environmental exposures and paternal age. These findings inform study design and statistical modeling approaches to improve power and reduce sequencing costs for future studies in this area. Crown Copyright © 2018. Published by Elsevier B.V. All rights reserved.
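The mixed-effects power calculation can be approximated by simulation: generate four-sibling families with a family-level random effect, fit a linear mixed model, and count how often the exposure effect is detected. The sketch below, using statsmodels, assumes hypothetical effect sizes and variance components and a simple random-intercept model rather than the authors' exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
import warnings

warnings.filterwarnings("ignore")   # suppress occasional convergence warnings
rng = np.random.default_rng(1)

def simulate_power(n_families_per_group=20, n_sibs=4, effect=0.2,
                   family_sd=0.3, resid_sd=0.5, n_sims=200, alpha=0.05):
    """Fraction of simulated studies in which the exposure effect on a
    (normalised) de novo mutation burden is detected by a mixed model
    with a family random intercept."""
    hits = 0
    for _ in range(n_sims):
        rows = []
        for g, exposed in enumerate([0, 1]):
            for f in range(n_families_per_group):
                fam_id = f + g * n_families_per_group
                fam_eff = rng.normal(0, family_sd)     # inter-family variability
                for _ in range(n_sibs):
                    y = 1.0 + effect * exposed + fam_eff + rng.normal(0, resid_sd)
                    rows.append((fam_id, exposed, y))
        df = pd.DataFrame(rows, columns=["family", "exposed", "y"])
        fit = smf.mixedlm("y ~ exposed", df, groups=df["family"]).fit()
        if fit.pvalues["exposed"] < alpha:
            hits += 1
    return hits / n_sims

print(simulate_power())
```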
Brittberg, Mats; Recker, David; Ilgenfritz, John; Saris, Daniel B F
2018-05-01
Matrix-based cell therapy improves surgical handling, increases patient comfort, and allows for expanded indications with better reliability within the knee joint. Five-year efficacy and safety of autologous cultured chondrocytes on porcine collagen membrane (MACI) versus microfracture for treating cartilage defects have not yet been reported from any randomized controlled clinical trial. To examine the clinical efficacy and safety results at 5 years after treatment with MACI and compare these with the efficacy and safety of microfracture treatment for symptomatic cartilage defects of the knee. Randomized controlled trial; Level of evidence, 1. This article describes the 5-year follow-up of the SUMMIT (Superiority of MACI Implant Versus Microfracture Treatment) clinical trial conducted at 14 study sites in Europe. All 144 patients who participated in SUMMIT were eligible to enroll; analyses of the 5-year data were performed with data from patients who signed informed consent and continued in the Extension study. Of the 144 patients randomized in the SUMMIT trial, 128 signed informed consent and continued observation in the Extension study: 65 MACI (90.3%) and 63 microfracture (87.5%). The improvements in Knee injury and Osteoarthritis Outcome Score (KOOS) Pain and Function domains previously described were maintained over the 5-year follow-up. Five years after treatment, the improvement in MACI over microfracture in the co-primary endpoint of KOOS pain and function was maintained and was clinically and statistically significant (P = .022). Improvements in activities of daily living remained statistically significantly better (P = .007) in MACI patients, with quality of life and other symptoms remaining numerically higher in MACI patients but losing statistical significance relative to the results of the SUMMIT 2-year analysis. Magnetic resonance imaging (MRI) evaluation of structural repair was performed in 120 patients at year 5. As in the 2-year SUMMIT (MACI00206) results, the MRI evaluation showed improvement in defect filling for both treatments; however, no statistically significant differences were noted between treatment groups. Symptomatic cartilage knee defects 3 cm² or larger treated with MACI were clinically and statistically significantly improved at 5 years compared with microfracture treatment. No remarkable adverse events or safety issues were noted in this heterogeneous patient population.
Hitchman, Sean M.; Mather, Martha E.; Smith, Joseph M.; Fencl, Jane S.
2018-01-01
Conserving native biodiversity depends on restoring functional habitats in the face of human-induced disturbances. Low-head dams are a ubiquitous human impact that degrades aquatic ecosystems worldwide. To improve our understanding of how low-head dams impact habitat and associated biodiversity, our research examined complex interactions among three spheres of the total environment, i.e., how low-head dams (anthroposphere) affect aquatic habitat (hydrosphere) and native biodiversity (biosphere) in streams and rivers. Creation of lake-like habitats upstream of low-head dams is a well-documented major impact of dams. Alterations downstream of low-head dams also have important consequences, but these downstream dam effects are more challenging to detect. In a multidisciplinary field study at five dammed and five undammed sites within the Neosho River basin, KS, we tested hypotheses about two types of habitat sampling (transect and mosaic) and two types of statistical analyses (analysis of covariance and path analysis). We used fish as our example of biodiversity alteration. Our research provided three insights that can aid environmental professionals who seek to conserve and restore fish biodiversity in aquatic ecosystems threatened by human modifications. First, a mosaic approach identified habitat alterations below low-head dams (e.g., increased proportion of riffles) that were not detected using the more commonly used transect sampling approach. Second, the habitat mosaic approach illustrated how low-head dams reduced natural variation in stream habitat. Third, path analysis, a statistical approach that tests indirect effects, showed how dams, habitat, and fish biodiversity interact. Specifically, path analysis revealed that low-head dams increased the proportion of riffle habitat below dams and, as a result, indirectly increased fish species richness. Furthermore, the pool habitat that was created above low-head dams dramatically decreased fish species richness. As we show here, mosaic habitat sampling and path analysis can help conservation practitioners improve science-based management plans for disturbed aquatic systems worldwide.
Huang, Shou-Guo; Chen, Bo; Lv, Dong; Zhang, Yong; Nie, Feng-Feng; Li, Wei; Lv, Yao; Zhao, Huan-Li; Liu, Hong-Mei
2017-01-01
Purpose Using a network meta-analysis approach, our study aims to develop a ranking of six surgical procedures, that is, Plate, titanium elastic nail (TEN), tension band wire (TBW), hook plate (HP), reconstruction plate (RP) and Knowles pin, by comparing post-surgery Constant shoulder scores in patients with clavicular fracture (CF). Methods A comprehensive search of electronic scientific literature databases was performed to retrieve publications investigating surgical procedures in CF. With stringent eligibility criteria, clinical experimental studies of high quality and relevance to our area of interest were selected for network meta-analysis. Statistical analyses were conducted using Stata 12.0. Results A total of 19 studies that met our inclusion criteria were eventually enrolled into our network meta-analysis, representing 1164 patients who had undergone surgical procedures for CF (TEN group = 240; Plate group = 164; TBW group = 180; RP group = 168; HP group = 245; Knowles pin group = 167). The network meta-analysis results revealed that RP significantly improved the Constant shoulder score in patients with CF when compared with TEN, whereas post-operative Constant shoulder scores after Plate, TBW, HP, Knowles pin and TEN were similar, with no statistically significant differences. The relative ranking of treatments based on predictive probabilities of Constant shoulder scores after surgery showed that the surface under the cumulative ranking curve (SUCRA) value was highest for RP. Conclusion The current network meta-analysis suggests that RP may be the optimal surgical treatment among the six interventions for patients with CF, and that it can improve the shoulder score of patients with CF. Implications for Rehabilitation RP improves shoulder joint function after surgery. RP achieves stability with minimal complications after surgery. RP may be the optimal surgical treatment for rehabilitation of patients with CF.
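The SUCRA ranking quoted in the results can be computed directly from a matrix of ranking probabilities: for each treatment, sum the cumulative probabilities over the first a-1 ranks and divide by a-1, where a is the number of treatments. The probability matrix below is purely illustrative, not the paper's output.

```python
import numpy as np

# Hypothetical ranking-probability matrix: rows = treatments, columns = ranks
# (rank 1 = best); each row sums to 1.  Values are illustrative only.
treatments = ["Plate", "TEN", "TBW", "HP", "RP", "Knowles pin"]
p_rank = np.array([
    [0.10, 0.20, 0.25, 0.20, 0.15, 0.10],
    [0.05, 0.10, 0.15, 0.20, 0.25, 0.25],
    [0.10, 0.15, 0.20, 0.25, 0.15, 0.15],
    [0.15, 0.20, 0.15, 0.15, 0.20, 0.15],
    [0.45, 0.25, 0.15, 0.10, 0.05, 0.00],
    [0.15, 0.10, 0.10, 0.10, 0.20, 0.35],
])

# SUCRA_k = sum of cumulative ranking probabilities over the first a-1 ranks,
# divided by a-1 (a = number of treatments).
a = p_rank.shape[1]
cum = np.cumsum(p_rank, axis=1)[:, :-1]
sucra = cum.sum(axis=1) / (a - 1)
for t, s in sorted(zip(treatments, sucra), key=lambda x: -x[1]):
    print(f"{t:12s} SUCRA = {s:.2f}")
```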
NASA Astrophysics Data System (ADS)
Khatri, Ayisha Al; Jens, Grundmann; der Weth Rüdiger, van; Niels, Schütze
2015-04-01
Al Batinah coastal area is the main agricultural region in Oman. Agriculture is concentrated in Al Batinah because of more fertile soils and easier access to water, in the form of groundwater, compared with other administrative areas in the country. The region now faces a problem resulting from over-abstraction of fresh groundwater for irrigation from the main aquifer along the coast. This drives the inflow of seawater into the coastal aquifer and causes salinization of the groundwater. As a consequence, the groundwater is no longer suitable for irrigation, which affects the social and economic situation of farmers as well as the environment. The existing situation therefore generates conflicts between different stakeholders regarding water availability, sustainable aquifer management, and profitable agricultural production in the Al Batinah region. Several management measures to maintain the groundwater aquifer in the region were implemented by the government; however, these solutions showed only limited success for the existing problem. The aim of this study is to evaluate the implementation potential of several management interventions and their combinations by analysing the opinions and responses of all relevant stakeholders in the region. This is done in order to identify potential conflicts among stakeholders, to contribute to a participatory process within the frame of integrated water resources management, and to support decision makers in taking more informed decisions. Questionnaires were designed to collect data from different groups of stakeholders, e.g. water professionals, farmers from the study area, and decision makers of different organizations and ministries. These data were analysed statistically for each group separately, as well as for relations among groups, using the SPSS (Statistical Package for the Social Sciences) software package. Results show that the need to improve the situation is supported by all groups. However, significant differences exist between groups on how to achieve this improvement, since farmers prefer management interventions operating more on the water resources side while decision makers support measures for better management on the water demand side. Furthermore, the opinions within single groups are sometimes contradictory for several management interventions. The use of more advanced statistical methods such as discriminant analysis or Bayesian networks allows factors and drivers to be identified that explain these differences. Both approaches will help to understand stakeholders' behaviour and to evaluate the implementation potential of several management interventions. Keywords: IWRM, stakeholder participation, field survey, statistical analysis, Oman
Martín, Josune; Torre, Fernando; Padierna, Angel; Aguirre, Urko; González, Nerea; Matellanes, Begoña; Quintana, José M
2014-11-01
To assess whether an interdisciplinary intervention is more effective than usual care for improving the health-related quality of life (HRQoL) of patients with fibromyalgia (FM), and to identify variables that predict improvement in HRQoL. In a randomized controlled clinical trial carried out on an outpatient basis in a hospital pain management unit, 153 patients with FM were randomly allocated to an experimental group (EG) or a control group (CG). Participants completed the Fibromyalgia Impact Questionnaire (FIQ) at baseline and 6 months after the intervention. The EG received an interdisciplinary treatment (12 sessions over 6 weeks) consisting of coordinated psychological, medical, educational, and physiotherapeutic interventions, while the CG received standard-of-care pharmacologic treatment. Descriptive statistics, ANOVA, chi-square and Fisher tests, and generalized linear models were used for data analysis. Six months after the intervention, statistically significant improvements in HRQoL were observed in physical functioning (P = 0.01), pain (P = 0.03) and total FIQ score (P = 0.04) in the EG compared with the CG. This interdisciplinary intervention has shown effectiveness in improving the HRQoL of this sample of patients with FM, and the number of physical illnesses was identified as a predictor of that improvement. © 2013 World Institute of Pain.
Levine, Stephen Z; Rabinowitz, Jonathan; Rizopoulos, Dimitris
2011-08-15
The adequacy of the Positive and Negative Syndrome Scale (PANSS) items in measuring symptom severity in schizophrenia was examined using Item Response Theory (IRT). Baseline PANSS assessments were analyzed from two multi-center clinical trials of antipsychotic medication in chronic schizophrenia (n=1872). Generally, the results showed that the PANSS (a) item ratings discriminated symptom severity best for the negative symptoms; (b) has an excess of "Severe" and "Extremely severe" rating options; and (c) assessments are more reliable at medium than very low or high levels of symptom severity. Analysis also showed that the detection of statistically and non-statistically significant differences in treatment were highly similar for the original and IRT-modified PANSS. In clinical trials of chronic schizophrenia, the PANSS appears to require the following modifications: fewer rating options, adjustment of 'Lack of judgment and insight', and improved severe symptom assessment. 2011 Elsevier Ltd. All rights reserved.
Progress toward openness, transparency, and reproducibility in cognitive neuroscience.
Gilmore, Rick O; Diaz, Michele T; Wyble, Brad A; Yarkoni, Tal
2017-05-01
Accumulating evidence suggests that many findings in psychological science and cognitive neuroscience may prove difficult to reproduce; statistical power in brain imaging studies is low and has not improved recently; software errors in analysis tools are common and can go undetected for many years; and, a few large-scale studies notwithstanding, open sharing of data, code, and materials remains the rare exception. At the same time, there is a renewed focus on reproducibility, transparency, and openness as essential core values in cognitive neuroscience. The emergence and rapid growth of data archives, meta-analytic tools, software pipelines, and research groups devoted to improved methodology reflect this new sensibility. We review evidence that the field has begun to embrace new open research practices and illustrate how these can begin to address problems of reproducibility, statistical power, and transparency in ways that will ultimately accelerate discovery. © 2017 New York Academy of Sciences.
Evaluation of the quality of the teaching-learning process in undergraduate courses in Nursing.
González-Chordá, Víctor Manuel; Maciá-Soler, María Loreto
2015-01-01
To identify aspects for improvement in the quality of the teaching-learning process through the analysis of the tools used to evaluate the acquisition of skills by undergraduate Nursing students. A prospective longitudinal study was conducted in a population of 60 second-year Nursing students, based on registration data from which quality indicators evaluating the acquisition of skills were obtained, with descriptive and inferential analysis. Nine items and nine learning activities included in the assessment tools did not reach the established quality indicators (p<0.05), and there were statistically significant differences depending on the hospital and clinical practice unit (p<0.05). The analysis of the evaluation tools used in the subject "Nursing Care in Welfare Processes" of the analyzed university undergraduate course enabled the detection of areas for improvement in the teaching-learning process. The challenge of education in nursing is to reach the best clinical, research, and educational results, in order to provide improvements to the quality of education and health care.
Lee, Do Hyun; Oh, In Young; Koo, Kyo Tan; Suk, Jang Mi; Jung, Sang Wook; Park, Jin Oh; Kim, Beom Joon; Choi, Yoo Mi
2015-02-01
Skin aging is accompanied by wrinkle formation. At some sites, such as the periorbital skin, this is a relatively early phenomenon. We evaluated the anti-wrinkle effect of a preparation containing human growth factor and hyaluronic acid serum on periorbital wrinkles (crow's feet). In total, 23 Korean women (age range: 39-59 years), who were not pregnant, nursing, or undergoing any concurrent therapy, were enrolled in this study. All the patients completed an 8-week trial of twice-daily application of human growth factor and hyaluronic acid serum on the entire face. Efficacy was based on a global photodamage score, photographs, and image analysis using replicas and visiometer analysis every 4 weeks. The standard wrinkle and roughness parameters used in assessing skin by visiometer were calculated and statistically analyzed. Periorbital wrinkles were significantly improved after treatment, with improvements noted both by physician's assessment and visiometer analysis. Topical application of human growth factor and hyaluronic acid was beneficial in reducing periorbital wrinkles.
Guadalupe, Zenaida; Soldevilla, Alberto; Sáenz-Navajas, María-Pilar; Ayestarán, Belén
2006-04-21
A multiple-step analytical method was developed to improve the analysis of polymeric phenolics in red wines. With a common initial step based on the fractionation of wine phenolics by gel permeation chromatography (GPC), different analytical techniques were used: high-performance liquid chromatography-diode array detection (HPLC-DAD), HPLC-mass spectrometry (MS), capillary zone electrophoresis (CZE) and spectrophotometry. This method proved to be valid for analyzing different families of phenolic compounds, such as monomeric phenolics and their derivatives, polymeric pigments and proanthocyanidins. The analytical characteristics of fractionation by GPC were studied and the method was fully validated, yielding satisfactory statistical results. GPC fractionation substantially improved the analysis of polymeric pigments by CZE, in terms of response, repeatability and reproducibility. It also represented an improvement in the traditional vanillin assay used for proanthocyanidin (PA) quantification. Astringent proanthocyanidins were also analyzed using a simple combined method that allowed these compounds, for which only general indexes were available, to be quantified.
Cassimatis, Constantine; Liu, Karen P Y; Fahey, Paul; Bissett, Michelle
2016-09-01
A systematic review with meta-analysis was performed to investigate the effect of external sensory cued therapy on activities of daily living (ADL) performance, including walking and daily tasks such as dressing, for individuals with Parkinson's disease (PD). A detailed computer-aided search of the literature was applied to MEDLINE, Cumulative Index to Nursing and Allied Health Literature, EMBASE and PubMed. Studies investigating the effects of external sensory cued therapy on ADL performance for individuals with PD in all stages of disease progression were collected. Relevant articles were critically reviewed and study results were synthesized by two independent researchers. A structured data-extraction method was used to obtain data from the selected articles. A meta-analysis was carried out for all randomized-controlled trials. Six studies with 243 individuals with PD were included in this review. All six studies yielded positive findings in favour of external sensory cues. The meta-analysis showed statistically significant improvements in ADL performance after external sensory cued therapy, both post-treatment (P=0.011) and at follow-up (P<0.001). The results of this review provide evidence of an improvement in general ADL performance in individuals with PD. It is recommended that clinicians incorporate external sensory cues into a training programme focused on improving daily task performance.
Emotional and cognitive effects of peer tutoring among secondary school mathematics students
NASA Astrophysics Data System (ADS)
Alegre Ansuategui, Francisco José; Moliner Miravet, Lidón
2017-11-01
This paper describes an experience of same-age peer tutoring conducted with 19 eighth-grade mathematics students in a secondary school in Castellon de la Plana (Spain). Three constructs were analysed before and after launching the program: academic performance, mathematics self-concept and attitude of solidarity. Students' perceptions of the method were also analysed. The quantitative data were gathered by means of a mathematics self-concept questionnaire, an attitude of solidarity questionnaire and the students' numerical ratings, and were analysed statistically using Student's t-test. The qualitative information was gathered by means of discussion groups and a field diary, and was analysed through descriptive analysis and categorization. Results show statistically significant improvements in all the variables, together with a positive assessment of the experience and of the interactions that took place between the students.
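A minimal sketch of the pre/post comparison with Student's t-test for paired samples; the scores below are hypothetical, not the study's ratings.

```python
from scipy import stats

# Hypothetical pre- and post-programme mathematics scores for 19 students.
pre  = [5.1, 6.0, 4.8, 5.5, 6.2, 4.9, 5.8, 6.1, 5.0, 4.7,
        5.3, 6.4, 5.9, 4.5, 5.6, 6.0, 5.2, 4.9, 5.7]
post = [5.8, 6.5, 5.4, 6.1, 6.6, 5.5, 6.0, 6.8, 5.6, 5.2,
        5.9, 6.9, 6.3, 5.1, 6.2, 6.4, 5.8, 5.6, 6.1]

t, p = stats.ttest_rel(post, pre)   # paired (dependent-samples) t-test
print(f"t = {t:.2f}, p = {p:.4f}")
```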
Mortality and cause-of-death reporting and analysis systems in seven Pacific Island countries.
Carter, Karen L; Rao, Chalapati; Lopez, Alan D; Taylor, Richard
2012-06-13
Mortality statistics are essential for population health assessment. Despite limitations in data availability, Pacific Island Countries are considered to be in epidemiological transition, with non-communicable diseases increasingly contributing to premature adult mortality. To address rapidly changing health profiles, countries would require mortality statistics from routine death registration given their relatively small population sizes. This paper uses a standard analytical framework to examine death registration systems in Fiji, Kiribati, Nauru, Palau, Solomon Islands, Tonga and Vanuatu. In all countries, legislation on death registration exists but does not necessarily reflect current practices. Health departments carry the bulk of responsibility for civil registration functions. Medical cause-of-death certificates are completed for at least hospital deaths in all countries. Overall, significantly more information is available than perceived or used. Use is primarily limited by poor understanding, lack of coordination, limited analytical skills, and insufficient technical resources. Across the region, both registration and statistics systems need strengthening to improve the availability, completeness, and quality of data. Close interaction between health staff and local communities provides a good foundation for further improvements in death reporting. System strengthening activities must include a focus on clear assignment of responsibility, provision of appropriate authority to perform assigned tasks, and fostering ownership of processes and data to ensure sustained improvements. These human elements need to be embedded in a culture of data sharing and use. Lessons from this multi-country exercise would be applicable in other regions afflicted with similar issues of availability and quality of vital statistics.
Comparison of Accuracy Between a Conventional and Two Digital Intraoral Impression Techniques.
Malik, Junaid; Rodriguez, Jose; Weisbloom, Michael; Petridis, Haralampos
To compare the accuracy (ie, precision and trueness) of full-arch impressions fabricated using either a conventional polyvinyl siloxane (PVS) material or one of two intraoral optical scanners. Full-arch impressions of a reference model were obtained using addition silicone impression material (Aquasil Ultra; Dentsply Caulk) and two optical scanners (Trios, 3Shape, and CEREC Omnicam, Sirona). Surface matching software (Geomagic Control, 3D Systems) was used to superimpose the scans within groups to determine the mean deviations in precision and trueness (μm) between the scans, which were calculated for each group and compared statistically using one-way analysis of variance with post hoc Bonferroni (trueness) and Games-Howell (precision) tests (IBM SPSS ver 24, IBM UK). Qualitative analysis was also carried out from three-dimensional maps of differences between scans. Means and standard deviations (SD) of deviations in precision for conventional, Trios, and Omnicam groups were 21.7 (± 5.4), 49.9 (± 18.3), and 36.5 (± 11.12) μm, respectively. Means and SDs for deviations in trueness were 24.3 (± 5.7), 87.1 (± 7.9), and 80.3 (± 12.1) μm, respectively. The conventional impression showed statistically significantly improved mean precision (P < .006) and mean trueness (P < .001) compared to both digital impression procedures. There were no statistically significant differences in precision (P = .153) or trueness (P = .757) between the digital impressions. The qualitative analysis revealed local deviations along the palatal surfaces of the molars and incisal edges of the anterior teeth of < 100 μm. Conventional full-arch PVS impressions exhibited improved mean accuracy compared to two direct optical scanners. No significant differences were found between the two digital impression methods.
Humans make efficient use of natural image statistics when performing spatial interpolation.
D'Antona, Anthony D; Perry, Jeffrey S; Geisler, Wilson S
2013-12-16
Visual systems learn through evolution and experience over the lifespan to exploit the statistical structure of natural images when performing visual tasks. Understanding which aspects of this statistical structure are incorporated into the human nervous system is a fundamental goal in vision science. To address this goal, we measured human ability to estimate the intensity of missing image pixels in natural images. Human estimation accuracy is compared with various simple heuristics (e.g., local mean) and with optimal observers that have nearly complete knowledge of the local statistical structure of natural images. Human estimates are more accurate than those of simple heuristics, and they match the performance of an optimal observer that knows the local statistical structure of relative intensities (contrasts). This optimal observer predicts the detailed pattern of human estimation errors and hence the results place strong constraints on the underlying neural mechanisms. However, humans do not reach the performance of an optimal observer that knows the local statistical structure of the absolute intensities, which reflect both local relative intensities and local mean intensity. As predicted from a statistical analysis of natural images, human estimation accuracy is negligibly improved by expanding the context from a local patch to the whole image. Our results demonstrate that the human visual system exploits efficiently the statistical structure of natural images.
Measurement of self-evaluative motives: a shopping scenario.
Wajda, Theresa A; Kolbe, Richard; Hu, Michael Y; Cui, Annie Peng
2008-08-01
To develop measures of consumers' self-evaluative motives of Self-verification, Self-enhancement, and Self-improvement within the context of a mall shopping environment, an initial set of 49 items was generated by conducting three focus-group sessions. These items were subsequently converted into shopping-dependent motive statements. A sample of 250 undergraduate college students responded on a 7-point scale to each statement as it related to the acquisition of recent personal shopping goods. An exploratory factor analysis yielded five factors, accounting for 57.7% of the variance, three of which corresponded to the Self-verification motive (five items), Self-enhancement motive (three items), and Self-improvement motive (six items). These 14 items, along with 9 reconstructed items, yielded 23 items that were retained and subjected to additional testing. In a final round of data collection, 169 college students provided data for exploratory factor analysis, and 11 items were carried into confirmatory factor analysis. The analysis indicated that the 11-item scale adequately captured measures of the three self-evaluative motives; however, further data reduction produced a 9-item scale with marked improvement in statistical fit over the 11-item scale.
Collaborative Employee Wellness: Living Healthy With Diabetes.
Hovatter, Joan McGarvev; Cooke, Catherine E; de Bittner, Magaly Rodriguez
Innovative approaches to managing an employee population with a high prevalence of type 2 diabetes mellitus can mitigate costs for employers by improving employees' health. This article describes such an approach at McCormick & Company, Inc., where participants had statistically significant improvements in weight, average plasma glucose concentration (also called glycated hemoglobin or A1c) and cholesterol. A simulation analysis applying the findings of the study population to Maryland employees with a baseline A1c of greater than 6.0% showed that participation in the program could improve glycemic control in these patients, reducing the A1c by 0.24% on average, with associated cost savings for the employer.
Thompson, Cheryl Bagley
2009-01-01
This 13th article of the Basics of Research series is first in a short series on statistical analysis. These articles will discuss creating your statistical analysis plan, levels of measurement, descriptive statistics, probability theory, inferential statistics, and general considerations for interpretation of the results of a statistical analysis.
2013-01-01
Background The theoretical basis of genome-wide association studies (GWAS) is statistical inference of linkage disequilibrium (LD) between any polymorphic marker and a putative disease locus. Most methods widely implemented for such analyses are vulnerable to several key demographic factors and deliver poor statistical power for detecting genuine associations as well as a high false positive rate. Here, we present a likelihood-based statistical approach that properly accounts for the non-random nature of case–control samples with regard to genotypic distribution at the loci under study, and that confers flexibility to test for genetic association in the presence of different confounding factors such as population structure and non-randomness of samples. Results We implemented this novel method, together with several popular methods from the GWAS literature, to re-analyze recently published Parkinson's disease (PD) case–control samples. The real data analysis and computer simulation show that the new method confers not only significantly improved statistical power for detecting associations but also robustness to the difficulties stemming from non-random sampling and genetic structure when compared with its rivals. In particular, the new method detected 44 significant SNPs within 25 chromosomal regions of size < 1 Mb, whereas only 6 SNPs in two of these regions had previously been detected by trend test based methods. It discovered two SNPs located 1.18 Mb and 0.18 Mb from the PD candidate genes FGF20 and PARK8, without incurring false positive risk. Conclusions We developed a novel likelihood-based method which provides adequate estimation of LD and other population model parameters from case and control samples, eases the integration of samples from multiple genetically divergent populations, and thus confers statistically robust and powerful analyses of GWAS. On the basis of simulation studies and analysis of real datasets, we demonstrated significant improvement of the new method over the non-parametric trend test, the most widely used method in the GWAS literature. PMID:23394771
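For comparison, the non-parametric trend test mentioned above (the Cochran-Armitage test for a 2 x k genotype table) can be implemented in a few lines; the genotype counts below are hypothetical, not drawn from the PD data.

```python
import numpy as np
from scipy import stats

def cochran_armitage(cases, controls, scores=(0, 1, 2)):
    """Cochran-Armitage trend test for a 2 x k table
    (rows: cases/controls, columns: genotype categories)."""
    r = np.asarray(cases, dtype=float)      # case counts per genotype
    s = np.asarray(controls, dtype=float)   # control counts per genotype
    t = np.asarray(scores, dtype=float)
    n = r + s                               # column totals
    R, S, N = r.sum(), s.sum(), n.sum()
    T = np.sum(t * (r * S - s * R))
    var_T = (R * S / N) * (N * np.sum(t**2 * n) - np.sum(t * n)**2)
    z = T / np.sqrt(var_T)
    p = 2 * stats.norm.sf(abs(z))
    return z, p

# Hypothetical genotype counts (AA, Aa, aa) for cases and controls.
print(cochran_armitage(cases=[120, 230, 150], controls=[180, 240, 110]))
```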
Assessment of the beryllium lymphocyte proliferation test using statistical process control.
Cher, Daniel J; Deubner, David C; Kelsh, Michael A; Chapman, Pamela S; Ray, Rose M
2006-10-01
Despite more than 20 years of surveillance and epidemiologic studies using the beryllium blood lymphocyte proliferation test (BeBLPT) as a measure of beryllium sensitization (BeS) and as an aid for diagnosing subclinical chronic beryllium disease (CBD), improvements in specific understanding of the inhalation toxicology of CBD have been limited. Although epidemiologic data suggest that BeS and CBD risks vary by process/work activity, it has proven difficult to reach specific conclusions regarding the dose-response relationship between workplace beryllium exposure and BeS or subclinical CBD. One possible reason for this uncertainty could be misclassification of BeS resulting from variation in BeBLPT testing performance. The reliability of the BeBLPT, a biological assay that measures beryllium sensitization, is unknown. To assess the performance of four laboratories that conducted this test, we used data from a medical surveillance program that offered testing for beryllium sensitization with the BeBLPT. The study population was workers exposed to beryllium at various facilities over a 10-year period (1992-2001). Workers with abnormal results were offered diagnostic workups for CBD. Our analyses used a standard statistical technique, statistical process control (SPC), to evaluate test reliability. The study design involved a repeated measures analysis of BeBLPT results generated from the company-wide, longitudinal testing. Analytical methods included use of (1) statistical process control charts that examined temporal patterns of variation for the stimulation index, a measure of cell reactivity to beryllium; (2) correlation analysis that compared prior perceptions of BeBLPT instability to the statistical measures of test variation; and (3) assessment of the variation in the proportion of missing test results and how time periods with more missing data influenced SPC findings. During the period of this study, all laboratories displayed variation in test results that were beyond what would be expected due to chance alone. Patterns of test results suggested that variations were systematic. We conclude that laboratories performing the BeBLPT or other similar biological assays of immunological response could benefit from a statistical approach such as SPC to improve quality management.
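The SPC charting used here can be sketched as an individuals control chart: estimate sigma from the average moving range of a baseline period and flag points outside the 3-sigma limits. The stimulation-index series below is simulated, with an artificial shift injected to show an out-of-control signal; it is not laboratory data.

```python
import numpy as np

# Hypothetical weekly stimulation-index values from one laboratory; an
# upward shift is injected after week 40 to illustrate an out-of-control run.
rng = np.random.default_rng(7)
si = rng.normal(loc=2.0, scale=0.25, size=52)
si[40:] += 0.9

# Individuals (I) chart: centre line and sigma estimated from a baseline
# period, using the average moving range (MR-bar / d2, d2 = 1.128 for n = 2).
baseline = si[:40]
sigma_hat = np.abs(np.diff(baseline)).mean() / 1.128
centre = baseline.mean()
ucl, lcl = centre + 3 * sigma_hat, centre - 3 * sigma_hat

out_of_control = np.where((si > ucl) | (si < lcl))[0]
print(f"CL={centre:.2f}, UCL={ucl:.2f}, LCL={lcl:.2f}")
print("Out-of-control points at weeks:", out_of_control)
```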
Venigalla, Sriram; Nead, Kevin T; Sebro, Ronnie; Guttmann, David M; Sharma, Sonam; Simone, Charles B; Levin, William P; Wilson, Robert J; Weber, Kristy L; Shabason, Jacob E
2018-03-15
Soft tissue sarcomas (STS) are rare malignancies that require complex multidisciplinary management. Therefore, facilities with high sarcoma case volume may demonstrate superior outcomes. We hypothesized that STS treatment at high-volume (HV) facilities would be associated with improved overall survival (OS). Patients aged ≥18 years with nonmetastatic STS treated with surgery and radiation therapy at a single facility from 2004 through 2013 were identified from the National Cancer Database. Facilities were dichotomized into HV and low-volume (LV) cohorts based on total case volume over the study period. OS was assessed using multivariable Cox regression with propensity score-matching. Patterns of care were assessed using multivariable logistic regression analysis. Of 9025 total patients, 1578 (17%) and 7447 (83%) were treated at HV and LV facilities, respectively. On multivariable analysis, high educational attainment, larger tumor size, higher grade, and negative surgical margins were statistically significantly associated with treatment at HV facilities; conversely, black race and non-metropolitan residence were negative predictors of treatment at HV facilities. On propensity score-matched multivariable analysis, treatment at HV facilities versus LV facilities was associated with improved OS (hazard ratio, 0.87, 95% confidence interval, 0.80-0.95; P = .001). Older age, lack of insurance, greater comorbidity, larger tumor size, higher tumor grade, and positive surgical margins were associated with statistically significantly worse OS. In this observational cohort study using the National Cancer Database, receipt of surgery and radiation therapy at HV facilities was associated with improved OS in patients with STS. Potential sociodemographic disparities limit access to care at HV facilities for certain populations. Our findings highlight the importance of receipt of care at HV facilities for patients with STS and warrant further study into improving access to care at HV facilities. Copyright © 2017 Elsevier Inc. All rights reserved.
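A minimal sketch of a propensity-score-matched comparison in the spirit of this analysis: fit a logistic model for treatment at a high-volume facility, match treated to control patients 1:1 on the estimated score, and compare outcomes in the matched sample. The covariates, simulated data, and simple mean-difference outcome are hypothetical simplifications; the study itself used Cox regression on survival times.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(3)

# Hypothetical covariates (e.g., age, tumour size, grade) and treatment flag.
n = 2000
X = rng.normal(size=(n, 3))
p_treat = 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.3 * X[:, 1])))
treated = (rng.random(n) < p_treat).astype(int)
outcome = 0.4 * X[:, 0] + 0.3 * X[:, 2] - 0.25 * treated + rng.normal(size=n)

# 1) Propensity scores from logistic regression.
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# 2) 1:1 nearest-neighbour matching (with replacement) of treated to controls.
t_idx, c_idx = np.where(treated == 1)[0], np.where(treated == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[c_idx].reshape(-1, 1))
_, match = nn.kneighbors(ps[t_idx].reshape(-1, 1))
matched_controls = c_idx[match.ravel()]

# 3) Compare outcomes in the matched sample (here: difference in means).
diff = outcome[t_idx].mean() - outcome[matched_controls].mean()
print(f"Matched mean outcome difference (treated - control): {diff:.3f}")
```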
Kang, Jinbum; Lee, Jae Young; Yoo, Yangmo
2016-06-01
Effective speckle reduction in ultrasound B-mode imaging is important for enhancing image quality and improving the accuracy of image analysis and interpretation. In this paper, a new feature-enhanced speckle reduction (FESR) method based on multiscale analysis and feature enhancement filtering is proposed for ultrasound B-mode imaging. In FESR, clinical features (e.g., boundaries and borders of lesions) are selectively emphasized by edge, coherence, and contrast enhancement filtering from fine to coarse scales while speckle development is simultaneously suppressed via robust diffusion filtering. In the simulation study, the proposed FESR method showed statistically significant improvements in edge preservation, mean structure similarity, speckle signal-to-noise ratio, and contrast-to-noise ratio (CNR) compared with other speckle reduction methods, e.g., oriented speckle reducing anisotropic diffusion (OSRAD), nonlinear multiscale wavelet diffusion (NMWD), the Laplacian pyramid-based nonlinear diffusion and shock filter (LPNDSF), and the Bayesian nonlocal means filter (OBNLM). Similarly, the FESR method outperformed the OSRAD, NMWD, LPNDSF, and OBNLM methods in terms of CNR in the phantom study, i.e., 10.70 ± 0.06 versus 9.00 ± 0.06, 9.78 ± 0.06, 8.67 ± 0.04, and 9.22 ± 0.06, respectively. Reconstructed B-mode images developed using the five speckle reduction methods were reviewed by three radiologists, who evaluated them according to their diagnostic preferences. All three radiologists showed a significant preference for the abdominal liver images obtained using the FESR method in terms of conspicuity, margin sharpness, artificiality, and contrast (p<0.0001). For the kidney and thyroid images, the FESR method showed similar improvement over the other methods, although it did not show a statistically significant improvement over the OBNLM method in margin sharpness. These results demonstrate that the proposed FESR method can improve the image quality of ultrasound B-mode imaging by enhancing the visualization of lesion features while effectively suppressing speckle noise.
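The contrast-to-noise ratio used to compare the filters can be computed from lesion and background regions of interest as the absolute mean difference divided by the square root of the summed variances. The sketch below uses a synthetic speckle-like image, not the paper's phantoms or clinical data.

```python
import numpy as np

def cnr(lesion, background):
    """Contrast-to-noise ratio between two regions of interest."""
    return (abs(lesion.mean() - background.mean())
            / np.sqrt(lesion.var() + background.var()))

# Synthetic example: a dark circular lesion in a speckle-like background.
rng = np.random.default_rng(0)
img = rng.rayleigh(scale=1.0, size=(128, 128))      # crude speckle surrogate
yy, xx = np.mgrid[:128, :128]
mask = (yy - 64) ** 2 + (xx - 64) ** 2 < 20 ** 2
img[mask] *= 0.4                                     # lesion is hypoechoic

print(f"CNR = {cnr(img[mask], img[~mask]):.2f}")
```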
Improving Family Communication.
Medrano, Vilma; Bonilla, Gladys; Hernández, Ericka; Romanjek, Mariana Harnecker; Gómez, Adriana; Hernández, Jasón; Reyes, Marcela Ríos; Lindenberg, Cathy Strachan
2017-03-01
TeenSmart International harnesses the power and flexibility of technology to empower youth to take personal responsibility for their health and lifestyle choices. Access to the Internet via mobile phones is often cheaper than paying to connect to a wired broadband service, and in rural areas, mobile networks may be the only means of accessing the Internet. This study assessed the feasibility, acceptability, and impact of "cues to action", or brief motivating cell phone text messages, to improve adolescent family communication and relationships. A quasi-experimental design was used with a voluntary sample of 100 Nicaraguan youth at high risk for poor family communication. Pre- and post-test quantitative measures analysed with Student's t-test, a focus group, and a participant testimony provided the evaluation evidence. Findings suggest that there are economic and motivational barriers to the use of text messages, but when these barriers are eliminated, the behavioral results are positive. Youth who received two weekly text messages over a 6-month period demonstrated statistically significant improvements in family communication perceptions, attitudes, and behaviors, strengthening their family communication and relationships. Brief and personalized text messaging "cues to action" may be a cost-effective intervention to improve adolescent healthy lifestyle behaviors.
NASA Astrophysics Data System (ADS)
Haseltine, Jessica
2006-10-01
A statistical analysis of enrollment in AP mathematics and science courses in the Abilene Independent School District (AISD), between 2000 and 2005, examined the relationship between gender, enrollment, and performance. The data suggested that mid-scoring females were less likely than their male counterparts to enroll in AP-level courses. AISD showed higher female-to-male score ratios than national and state averages but no improvement in enrollment comparisons. Several programs are suggested to improve both the participation and the performance of females in upper-level math and science courses.
Patient Populations, Clinical Associations, and System Efficiency in Healthcare Delivery System
NASA Astrophysics Data System (ADS)
Liu, Yazhuo
The efforts to improve health care delivery usually involve studies and analysis of patient populations and healthcare systems. In this dissertation, I present research conducted in the following areas: identifying patient groups and improving treatments for specific conditions by using statistical as well as data mining techniques, and developing new operations research models to increase system efficiency from the health institution's perspective. The results provide a better understanding of high-risk patient groups, more accuracy in detecting disease correlations, and practical scheduling tools that account for uncertain operation durations and real-life constraints.
Measuring effectiveness of drugs in observational databanks: promises and perils
Krishnan, Eswar; Fries, James F
2004-01-01
Observational databanks have inherent strengths and shortcomings. As in randomized controlled trials, poor design of these databanks can either exaggerate or reduce estimates of drug effectiveness and can limit generalizability. This commentary highlights selected aspects of study design, data collection and statistical analysis that can help overcome many of these inadequacies. An international metaRegister and a formal mechanism for standardizing and sharing drug data could help improve the utility of databanks. Medical journals have a vital role in enforcing a quality checklist that improves reporting. PMID:15059263
Improving Learning Performance Through Rational Resource Allocation
NASA Technical Reports Server (NTRS)
Gratch, J.; Chien, S.; DeJong, G.
1994-01-01
This article shows how rational analysis can be used to minimize learning cost for a general class of statistical learning problems. We discuss the factors that influence learning cost and show that the problem of efficient learning can be cast as a resource optimization problem. Solutions found in this way can be significantly more efficient than the best solutions that do not account for these factors. We introduce a heuristic learning algorithm that approximately solves this optimization problem and document its performance improvements on synthetic and real-world problems.
O'Shannessy, Daniel J; Bendas, Katie; Schweizer, Charles; Wang, Wenquan; Albone, Earl; Somers, Elizabeth B; Weil, Susan; Meredith, Rhonda K; Wustner, Jason; Grasso, Luigi; Landers, Mark; Nicolaides, Nicholas C
2017-07-01
Farletuzumab (FAR) is a humanized monoclonal antibody (mAb) that binds to folate receptor alpha. A Ph3 trial in ovarian cancer patients treated with carboplatin/taxane plus FAR or placebo did not meet the primary statistical endpoint. Subgroup analysis demonstrated that subjects with high FAR exposure levels (Cmin > 57.6 μg/mL) showed statistically significant improvements in PFS and OS. The neonatal Fc receptor (fcgrt) plays a central role in albumin/IgG stasis and mAb pharmacokinetics (PK). Here we evaluated the fcgrt sequence and the association of its promoter variable number tandem repeats (VNTR) and coding single nucleotide variants (SNV) with albumin/IgG levels and FAR PK in the Ph3 patients. A statistical correlation existed between high FAR Cmin and AUC in patients within the highest quartile of albumin and lowest quartile of IgG1. Analysis of fcgrt identified 5 different VNTRs in the promoter region and 9 SNVs within the coding region, 4 of which are novel. Copyright © 2017. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Weisenseel, Robert A.; Karl, William C.; Castanon, David A.; DiMarzio, Charles A.
1999-02-01
We present an analysis of statistical model based data-level fusion for near-IR polarimetric and thermal data, particularly for the detection of mines and mine-like targets. Typical detection-level data fusion methods, approaches that fuse detections from individual sensors rather than fusing at the level of the raw data, do not account rationally for the relative reliability of different sensors, nor the redundancy often inherent in multiple sensors. Representative examples of such detection-level techniques include logical AND/OR operations on detections from individual sensors and majority vote methods. In this work, we exploit a statistical data model for the detection of mines and mine-like targets to compare and fuse multiple sensor channels. Our purpose is to quantify the amount of knowledge that each polarimetric or thermal channel supplies to the detection process. With this information, we can make reasonable decisions about the usefulness of each channel. We can use this information to improve the detection process, or we can use it to reduce the number of required channels.
A two-point diagnostic for the H II galaxy Hubble diagram
NASA Astrophysics Data System (ADS)
Leaf, Kyle; Melia, Fulvio
2018-03-01
A previous analysis of starburst-dominated H II galaxies and H II regions has demonstrated a statistically significant preference for the Friedmann-Robertson-Walker cosmology with zero active mass, known as the Rh = ct universe, over Λ cold dark matter (ΛCDM) and its related dark-matter parametrizations. In this paper, we employ a two-point diagnostic with these data to present a complementary statistical comparison of Rh = ct with Planck ΛCDM. Our two-point diagnostic compares, in a pairwise fashion, the difference between the distance modulus measured at two redshifts with that predicted by each cosmology. Our results support the conclusion drawn by a previous comparative analysis demonstrating that Rh = ct is statistically preferred over Planck ΛCDM. But we also find that the reported errors in the H II measurements may not be purely Gaussian, perhaps due to a partial contamination by non-Gaussian systematic effects. The use of H II galaxies and H II regions as standard candles may be improved even further with a better handling of the systematics in these sources.
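A sketch of the two-point diagnostic: compute the difference in distance modulus between two redshifts under Rh = ct, where D_L = (c/H0)(1+z)ln(1+z), and under flat ΛCDM, and compare with the measured difference. The "measured" moduli and the parameter values below are placeholders, not the paper's data.

```python
import numpy as np
from scipy.integrate import quad

C = 299792.458          # km/s
H0 = 67.8               # km/s/Mpc (illustrative Planck-like value)
OM = 0.31

def mu_rh_ct(z):
    """Distance modulus in the R_h = ct universe: D_L = (c/H0)(1+z) ln(1+z)."""
    dl = (C / H0) * (1 + z) * np.log(1 + z)          # Mpc
    return 5 * np.log10(dl) + 25

def mu_lcdm(z):
    """Distance modulus in flat LCDM: D_L = (c/H0)(1+z) * int_0^z dz'/E(z')."""
    E = lambda zp: np.sqrt(OM * (1 + zp) ** 3 + (1 - OM))
    dc, _ = quad(lambda zp: 1.0 / E(zp), 0, z)
    dl = (C / H0) * (1 + z) * dc
    return 5 * np.log10(dl) + 25

# Two-point diagnostic: Delta mu(z2, z1), measured versus predicted.
z1, z2 = 0.1, 1.5
mu1_obs, mu2_obs = 38.4, 45.2        # placeholder "measured" moduli
print(f"observed  : {mu2_obs - mu1_obs:.3f}")
print(f"R_h = ct  : {mu_rh_ct(z2) - mu_rh_ct(z1):.3f}")
print(f"flat LCDM : {mu_lcdm(z2) - mu_lcdm(z1):.3f}")
```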
Effect analysis of intradermal hyaluronic acid injection to treat enlarged facial pores.
Qian, Wei; Zhang, Yan-Kun; Hou, Ying; Lyu, Wei; Cao, Qian; Li, Yan-Qi; Fan, Ju-Feng
2017-08-08
To investigate the clinical application and efficacy of intradermal injection of low molecular weight hyaluronic acid (LMW-HA) for treating enlarged facial pores. From January 2015 to May 2016, 42 subjects who sought aesthetic treatment underwent intradermal injection of LMW-HA to improve enlarged facial pores. For each treatment, 2.5 mL (25 mg) of LMW-HA was injected into the skin of the full face. The treatment was repeated 2-5 times with an interval of 1 to 1.5 months between consecutive treatments. The postoperative follow-up period was 1 to 6 months. Statistical analysis was used to compare the degree of enlargement of facial pores before and after injection. The clinical efficacy and adverse effects were recorded. The enlarged facial pores before and after treatment were categorized and subjected to the Wilcoxon matched-pairs signed-rank test. The difference was statistically significant (P<.01). The improvement rate was 40.03±18.41%. No infection, nodules, or pigmentation was reported at the injection sites in the subjects who sought aesthetic treatment. The overall satisfaction rate was 92.8%. Intradermal injection of LMW-HA can significantly improve skin texture, reduce pore size, and enhance skin radiance. The injection technique was simple, safe, and effective and could easily be extended to clinical practice. © 2017 Wiley Periodicals, Inc.
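A minimal sketch of the Wilcoxon matched-pairs signed-rank comparison of pore grades before and after treatment; the grades below are hypothetical, not the study's data.

```python
from scipy import stats

# Hypothetical pore-enlargement grades (1-5) before and after treatment
# for 20 subjects; lower grade = smaller pores.
before = [4, 3, 4, 5, 3, 4, 4, 3, 5, 4, 3, 4, 5, 4, 3, 4, 4, 5, 3, 4]
after  = [3, 2, 3, 4, 3, 3, 2, 2, 4, 3, 2, 3, 4, 3, 2, 3, 3, 4, 3, 3]

stat, p = stats.wilcoxon(before, after)   # matched-pairs signed-rank test
print(f"W = {stat:.1f}, p = {p:.4f}")
```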
Updyke, Katelyn Mariko; Urso, Brittany; Beg, Shazia; Solomon, James
2017-10-09
Systemic lupus erythematosus (SLE) is a multi-organ, autoimmune disease in which patients lose self-tolerance and develop immune complexes that deposit systemically, causing multi-organ damage and inflammation. Patients often experience unpredictable flares of symptoms with poorly identified triggers. The literature suggests that exogenous exposures may contribute to flares in symptoms. An online pilot survey was marketed globally through social media to self-reported SLE patients with the goal of identifying specific subpopulations who are susceptible to disease state changes based on the analyzed exogenous factors. The pilot survey was promoted for two weeks; 80 respondents fully completed the survey and were included in the statistical analysis. Descriptive statistical analysis was performed on de-identified patient surveys and compared to previous literature reporting known or theorized triggers of the SLE disease state. The pilot survey identified exogenous triggers similar to those in the previous literature, including antibiotics, increased beef intake, and metal implants. The goal of the pilot survey is to use similar questions to develop a detailed internet-based patient interactive form that can be edited and time-stamped as a method to promote continuous quality improvement assessments. The ultimate objective of the platform is to interact with SLE patients from across the globe longitudinally to optimize disease control and improve quality of care by allowing them to avoid harmful triggers.
Course of Weaning from Prolonged Mechanical Ventilation after Cardiac Surgery
Herlihy, James P.; Koch, Stephen M.; Jackson, Robert; Nora, Hope
2006-01-01
In order to determine the temporal pattern of weaning from mechanical ventilation for patients undergoing prolonged mechanical ventilation after cardiac surgery, we performed a retrospective review of 21 patients' weaning courses at our long-term acute care hospital. Using multiple regression analysis of an estimate of individual patients' percentage of mechanical ventilator support per day (%MVSD), we determined that 14 of 21 patients (67%) showed a statistically significant quadratic or cubic relationship between time and %MVSD. These patients showed little or no improvement in their ventilator dependence until a point in time when, abruptly, they began to make rapid progress (a “wean turning point”), after which they progressed to discontinuation of mechanical ventilation in a relatively short period of time. The other 7 patients appeared to have a similar weaning pattern, although the data were not statistically significant. Most patients in the study group weaned from the ventilator through a specific temporal pattern that is newly described herein. Data analysis suggested that the mechanism for the development of a wean turning point was improvement of pulmonary mechanics rather than improvement in gas exchange or respiratory load. Although these observations need to be confirmed by a prospective trial, they may have implications for weaning cardiac surgery patients from prolonged mechanical ventilation, and possibly for weaning a broader group of patients who require prolonged mechanical ventilation. PMID:16878611
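A hedged sketch of the kind of polynomial time-trend fit described above, applied to a synthetic %MVSD series with an artificial turning point (not patient data), could be:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic %MVSD course: flat, then a "wean turning point", then rapid decline.
days = np.arange(1, 41)
pct_mvsd = np.clip(95 - 0.03 * (days - 25).clip(min=0) ** 3, 0, 100) \
           + np.random.normal(0, 2, days.size)

# Regress %MVSD on time, time^2 and time^3 and inspect the higher-order terms.
X = sm.add_constant(np.column_stack([days, days ** 2, days ** 3]))
fit = sm.OLS(pct_mvsd, X).fit()
print(fit.summary())  # significance of the quadratic/cubic coefficients
```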
Poikkeus, Tarja; Suhonen, Riitta; Katajisto, Jouko; Leino-Kilpi, Helena
2018-03-12
Organizations and nurse leaders do not always effectively support nurses' ethical competence. More information is needed about nurses' perceptions of this support and about the factors relevant to improving it. The aim of the study was to examine relationships between nurses' perceived organizational and individual support, ethical competence, ethical safety, and work satisfaction. A cross-sectional questionnaire survey was conducted. Questionnaires were distributed to nurses (n = 298) working in specialized, primary, or private health care in Finland. Descriptive statistics, multifactor analysis of variance, and linear regression analysis were used to test the relationships. The nurses reported low organizational and individual support for their ethical competence, whereas their perceptions of ethical competence, ethical safety, and work satisfaction were moderate. Both perceived individual and organizational support showed statistically significant positive correlations with nurses' ethical competence, work satisfaction, and ethical safety. Organizational and individual support for nurses' ethical competence should be strengthened, at least in Finland, by providing more ethics education and by addressing ethical problems in multiprofessional discussions. The findings confirm that organizational-level support for ethical competence improves nurses' work satisfaction. They also show that individual-level support improves nurses' sense of ethical safety, and that both organizational and individual support strengthen nurses' ethical competence. These findings should assist nurse leaders in implementing effective support practices to strengthen nurses' ethical competence, ethical safety, and work satisfaction.
Alignment-free genetic sequence comparisons: a review of recent approaches by word analysis
Steele, Joe; Bastola, Dhundy
2014-01-01
Modern sequencing and genome assembly technologies have provided a wealth of data, which will soon require an analysis by comparison for discovery. Sequence alignment, a fundamental task in bioinformatics research, may be used but with some caveats. Seminal techniques and methods from dynamic programming are proving ineffective for this work owing to their inherent computational expense when processing large amounts of sequence data. These methods are prone to giving misleading information because of genetic recombination, genetic shuffling and other inherent biological events. New approaches from information theory, frequency analysis and data compression are available and provide powerful alternatives to dynamic programming. These new methods are often preferred, as their algorithms are simpler and are not affected by synteny-related problems. In this review, we provide a detailed discussion of computational tools, which stem from alignment-free methods based on statistical analysis from word frequencies. We provide several clear examples to demonstrate applications and the interpretations over several different areas of alignment-free analysis such as base–base correlations, feature frequency profiles, compositional vectors, an improved string composition and the D2 statistic metric. Additionally, we provide detailed discussion and an example of analysis by Lempel–Ziv techniques from data compression. PMID:23904502
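As a small illustration of word-frequency (alignment-free) comparison, the following toy sketch counts overlapping k-mers and computes the basic D2 statistic as the inner product of the two count vectors; the sequences are invented.

```python
from collections import Counter

def kmer_counts(seq, k=3):
    """Count overlapping k-mers (words) in a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def d2(seq1, seq2, k=3):
    """D2 statistic: inner product of the two k-mer count vectors."""
    c1, c2 = kmer_counts(seq1, k), kmer_counts(seq2, k)
    return sum(c1[w] * c2[w] for w in c1.keys() & c2.keys())

a = "ATGCGATACGCTTGCATGCA"
b = "ATGCGTTACGCTAGCATGCA"
print("D2(a, b) =", d2(a, b))
```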
Guidelines for the Investigation of Mediating Variables in Business Research.
MacKinnon, David P; Coxe, Stefany; Baraldi, Amanda N
2012-03-01
Business theories often specify the mediating mechanisms by which a predictor variable affects an outcome variable. In the last 30 years, investigations of mediating processes have become more widespread with corresponding developments in statistical methods to conduct these tests. The purpose of this article is to provide guidelines for mediation studies by focusing on decisions made prior to the research study that affect the clarity of conclusions from a mediation study, the statistical models for mediation analysis, and methods to improve interpretation of mediation results after the research study. Throughout this article, the importance of a program of experimental and observational research for investigating mediating mechanisms is emphasized.
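To make the product-of-coefficients logic concrete, here is a minimal sketch (simulated data, percentile bootstrap) of estimating an indirect effect a*b; it is a generic illustration, not the authors' full recommended workflow.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)                       # predictor
m = 0.5 * x + rng.normal(size=n)             # mediator
y = 0.4 * m + 0.2 * x + rng.normal(size=n)   # outcome

def indirect(x, m, y):
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                         # X -> M
    b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().params[2]   # M -> Y | X
    return a * b

# Percentile bootstrap confidence interval for the indirect effect.
boot = [indirect(x[idx], m[idx], y[idx])
        for idx in (rng.integers(0, n, n) for _ in range(2000))]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect(x, m, y):.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```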
Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.
Faul, Franz; Erdfelder, Edgar; Buchner, Axel; Lang, Albert-Georg
2009-11-01
G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.
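G*Power itself is a standalone program; as a rough cross-check of the kind of calculation it performs, the sketch below uses the standard Fisher z approximation for the power of a two-sided test of a bivariate correlation (an approximation, not G*Power's exact routine).

```python
import numpy as np
from scipy.stats import norm

def power_correlation(r, n, alpha=0.05):
    """Approximate power of a two-sided test of H0: rho = 0,
    using the Fisher z transformation."""
    z_effect = np.arctanh(r) * np.sqrt(n - 3)
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.cdf(z_effect - z_crit) + norm.cdf(-z_effect - z_crit)

print(f"power(r=0.3, n=84) ~ {power_correlation(0.3, 84):.3f}")
```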
NASA Astrophysics Data System (ADS)
Amato, Umberto; Antoniadis, Anestis; De Feis, Italia; Masiello, Guido; Matricardi, Marco; Serio, Carmine
2009-03-01
Remote sensing of the atmosphere is changing rapidly thanks to the development of high-spectral-resolution infrared space-borne sensors. The aim is to provide increasingly accurate information on the lower atmosphere, as requested by the World Meteorological Organization (WMO), to improve the reliability and time span of weather forecasts and of Earth monitoring. In this paper we show the results we have obtained on a set of Infrared Atmospheric Sounding Interferometer (IASI) observations using a new statistical strategy based on dimension reduction. Retrievals have been compared to time- and space-collocated ECMWF analyses for temperature, water vapor, and ozone.
NASA Astrophysics Data System (ADS)
Li, H. J.; Wei, F. S.; Feng, X. S.; Xie, Y. Q.
2008-09-01
This paper investigates methods to improve the predictions of Shock Arrival Time (SAT) by the original Shock Propagation Model (SPM). According to the classical blast wave theory adopted in the SPM, the shock propagation speed is determined by the total energy of the original explosion together with the background solar wind speed. Noting that there exists an intrinsic limit to the transit times computed by the SPM for a specified ambient solar wind, we present a statistical analysis of the forecasting capability of the SPM using this intrinsic property. Two facts about the SPM are found: (1) the error in shock energy estimation is not the only cause of the prediction errors, so we should not expect the accuracy of the SPM to be improved drastically by an exact shock energy input; and (2) there are systematic differences in the prediction results both for strong shocks propagating into a slow ambient solar wind and for weak shocks propagating into a fast medium. The statistical analyses reveal physical details of shock propagation and thus clearly point out directions for future improvement of the SPM. A simple modification is presented here, which shows that there is room for improvement and thus that the original SPM is worthy of further development.
Hauber, A Brett; González, Juan Marcos; Groothuis-Oudshoorn, Catharina G M; Prior, Thomas; Marshall, Deborah A; Cunningham, Charles; IJzerman, Maarten J; Bridges, John F P
2016-06-01
Conjoint analysis is a stated-preference survey method that can be used to elicit responses that reveal preferences, priorities, and the relative importance of individual features associated with health care interventions or services. Conjoint analysis methods, particularly discrete choice experiments (DCEs), have been increasingly used to quantify preferences of patients, caregivers, physicians, and other stakeholders. Recent consensus-based guidance on good research practices, including two recent task force reports from the International Society for Pharmacoeconomics and Outcomes Research, has aided in improving the quality of conjoint analyses and DCEs in outcomes research. Nevertheless, uncertainty regarding good research practices for the statistical analysis of data from DCEs persists. There are multiple methods for analyzing DCE data. Understanding the characteristics and appropriate use of different analysis methods is critical to conducting a well-designed DCE study. This report will assist researchers in evaluating and selecting among alternative approaches to conducting statistical analysis of DCE data. We first present a simplistic DCE example and a simple method for using the resulting data. We then present a pedagogical example of a DCE and one of the most common approaches to analyzing data from such a question format: conditional logit. We then describe some common alternative methods for analyzing these data and the strengths and weaknesses of each alternative. We present the ESTIMATE checklist, which includes a list of questions to consider when justifying the choice of analysis method, describing the analysis, and interpreting the results. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
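A minimal conditional-logit sketch on simulated DCE data (attribute levels, choices, and part-worths are all invented) illustrates the likelihood being maximized:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n_sets, n_alts, n_attr = 300, 3, 2
beta_true = np.array([1.0, -0.5])

# Simulated DCE data: attribute levels and the chosen alternative per choice set.
X = rng.normal(size=(n_sets, n_alts, n_attr))
util = X @ beta_true + rng.gumbel(size=(n_sets, n_alts))
choice = util.argmax(axis=1)

def neg_loglik(beta):
    v = X @ beta                                             # systematic utilities
    log_p = v - np.log(np.exp(v).sum(axis=1, keepdims=True)) # conditional logit probabilities
    return -log_p[np.arange(n_sets), choice].sum()

fit = minimize(neg_loglik, x0=np.zeros(n_attr), method="BFGS")
print("estimated part-worths:", fit.x)
```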
A Cyber-Attack Detection Model Based on Multivariate Analyses
NASA Astrophysics Data System (ADS)
Sakai, Yuto; Rinsaka, Koichiro; Dohi, Tadashi
In the present paper, we propose a novel cyber-attack detection model that applies two multivariate-analysis methods to the audit data observed on a host machine. The statistical techniques used here are the well-known Hayashi's quantification method IV and cluster analysis. We quantify the observed qualitative audit event sequences via quantification method IV and group similar audit event sequences based on the cluster analysis. It is shown in simulation experiments that our model can improve cyber-attack detection accuracy in some realistic cases where both normal and attack activities are intermingled.
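Hayashi's quantification method IV is not available in mainstream Python libraries, so the sketch below substitutes a simple surrogate, count encoding of qualitative audit events followed by k-means, purely to convey the quantify-then-cluster idea; the event sequences are invented.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans

# Hypothetical qualitative audit-event sequences (one string per session).
sessions = [
    "login read read logout",
    "login read write logout",
    "login sudo passwd sudo logout",
    "login sudo passwd chmod logout",
]

# Quantify the qualitative sequences as event-count vectors (a stand-in for
# quantification method IV), then group similar sequences with k-means.
X = CountVectorizer().fit_transform(sessions)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster labels:", labels)
```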
Statistical PERT: An Improved Subnetwork Analysis Procedure
1975-11-01
ERIC Educational Resources Information Center
Abouelenein, Mahmoud S.
2012-01-01
The purpose of this quantitative, descriptive research study was to determine, through statistical analysis, any correlation between the perceived transformational leadership traits of CIOs at two-year community colleges in Kansas and measures of the job satisfaction among IT workers at those community colleges. The objectives of this research…
Schools and Data: The Educator's Guide for Using Data to Improve Decision Making
ERIC Educational Resources Information Center
Creighton, Theodore B.
2006-01-01
Since the first edition of "Schools and Data", the No Child Left Behind Act has swept the country, and data-based decision making is no longer an option for educators. Today's educational climate makes it imperative for all schools to collect data and use statistical analysis to help create clear goals and recognize strategies for…
The Chat Is Coming from inside the House: An Analysis of Perceived Chat Behavior and Reality
ERIC Educational Resources Information Center
Berndt-Morris, Elizabeth; Minnis, Samantha M.
2014-01-01
When looking for ways to improve library services, we considered what data sources were readily available to us and how we could harvest and use this data. We investigated three years of chat reference statistics at Central Michigan University, a large research institution, to gain a better understanding of our patrons' chat behavior. We then…
ERIC Educational Resources Information Center
Rushton, Gregory T.; Rosengrant, David; Dewar, Andrew; Shah, Lisa; Ray, Herman E.; Sheppard, Keith; Watanabe, Lynn
2017-01-01
Efforts to improve the number and quality of the high school physics teaching workforce have taken several forms, including those sponsored by professional organizations. Using a series of large-scale teacher demographic data sets from the National Center for Education Statistics (NCES), this study sought to investigate trends in teacher quality…
Efficient and Flexible Climate Analysis with Python in a Cloud-Based Distributed Computing Framework
NASA Astrophysics Data System (ADS)
Gannon, C.
2017-12-01
As climate models become progressively more advanced, and spatial resolution further improved through various downscaling projects, climate projections at a local level are increasingly insightful and valuable. However, the raw size of climate datasets presents numerous hurdles for analysts wishing to develop customized climate risk metrics or perform site-specific statistical analysis. Four Twenty Seven, a climate risk consultancy, has implemented a Python-based distributed framework to analyze large climate datasets in the cloud. With the freedom afforded by efficiently processing these datasets, we are able to customize and continually develop new climate risk metrics using the most up-to-date data. Here we outline our process for using Python packages such as XArray and Dask to evaluate netCDF files in a distributed framework, StarCluster to operate in a cluster-computing environment, cloud computing services to access publicly hosted datasets, and how this setup is particularly valuable for generating climate change indicators and performing localized statistical analysis.
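A minimal sketch of the workflow described above, with hypothetical file names, variable names, and coordinates (not Four Twenty Seven's actual pipeline), might look like:

```python
import xarray as xr

# Hypothetical downscaled daily maximum temperature files (netCDF, in Kelvin).
ds = xr.open_mfdataset("tasmax_day_*.nc", combine="by_coords",
                       chunks={"time": 365})          # lazy, Dask-backed arrays

# Example climate indicator: annual count of days above 35 degrees C at one site.
site = ds["tasmax"].sel(lat=37.8, lon=-122.3, method="nearest")
hot_days = (site > 308.15).resample(time="1Y").sum()

print(hot_days.compute())  # triggers the chunked (Dask) computation
```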
Box-Cox transformation of firm size data in statistical analysis
NASA Astrophysics Data System (ADS)
Chen, Ting Ting; Takaishi, Tetsuya
2014-03-01
Firm size data usually do not show the normality that is often assumed in statistical analyses such as regression analysis. In this study we focus on two firm-size variables: the number of employees and sales. Both deviate considerably from a normal distribution. To improve their normality we transform them by the Box-Cox transformation with appropriate parameters. The Box-Cox transformation parameters are determined so that the transformed data best match the kurtosis of a normal distribution. We find that the two transformed variables show a strong linear relationship, indicating that the number of employees and sales have similar properties as firm-size indicators. The Box-Cox parameters obtained for the firm size data are found to be very close to zero, in which case the Box-Cox transformation is approximately a log-transformation. This suggests that the firm size data we used are approximately log-normally distributed.
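A hedged sketch of the kurtosis-based selection of the Box-Cox parameter described above, applied to synthetic lognormal "firm size" data rather than the actual datasets, could be:

```python
import numpy as np
from scipy.stats import boxcox, kurtosis

rng = np.random.default_rng(3)
employees = rng.lognormal(mean=3.0, sigma=1.2, size=5000)  # synthetic firm sizes

# Pick the Box-Cox lambda whose transformed data has kurtosis closest to that
# of a normal distribution (excess kurtosis 0), as described in the abstract.
lambdas = np.linspace(-0.5, 0.5, 201)
excess = [abs(kurtosis(boxcox(employees, lmbda=lam))) for lam in lambdas]
best = lambdas[int(np.argmin(excess))]
print(f"selected lambda ~ {best:.3f}  (near 0 => approximately a log-transform)")
```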
Statistical analysis in MSW collection performance assessment.
Teixeira, Carlos Afonso; Avelino, Catarina; Ferreira, Fátima; Bentes, Isabel
2014-09-01
The increase in Municipal Solid Waste (MSW) generated over recent years forces waste managers to pursue more effective collection schemes that are technically viable, environmentally effective, and economically sustainable. The assessment of MSW services using performance indicators plays a crucial role in improving service quality. In this work, we focus on the relevance of regular system monitoring as a service assessment tool. In particular, we select and test a core set of MSW collection performance indicators (effective collection distance, effective collection time, and effective fuel consumption) that highlights collection system strengths and weaknesses and supports proactive management decision-making and strategic planning. A statistical analysis was conducted with data collected in the mixed collection system of Oporto Municipality, Portugal, during one year, one week per month. This analysis provides an operational assessment of the collection circuits and supports effective short-term municipal collection strategies concerning, for example, collection frequency, timetables, and container types. Copyright © 2014 Elsevier Ltd. All rights reserved.
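The exact indicator definitions are the authors'; as an assumed, simplified reading (resource use per tonne collected), a monitoring table could be summarized as follows, with invented circuit data:

```python
import pandas as pd

# Hypothetical circuit-level monitoring records (one row per collection run).
runs = pd.DataFrame({
    "circuit":      ["A", "A", "B", "B"],
    "waste_tonnes": [4.2, 3.9, 6.1, 5.8],
    "distance_km":  [38.0, 41.5, 52.3, 49.7],
    "time_h":       [5.1, 5.4, 6.8, 6.5],
    "fuel_l":       [21.0, 23.5, 30.2, 28.9],
})

# Assumed indicator definitions: resource use per tonne of waste collected.
per_circuit = runs.groupby("circuit").sum(numeric_only=True)
indicators = pd.DataFrame({
    "km_per_tonne":     per_circuit["distance_km"] / per_circuit["waste_tonnes"],
    "h_per_tonne":      per_circuit["time_h"] / per_circuit["waste_tonnes"],
    "fuel_l_per_tonne": per_circuit["fuel_l"] / per_circuit["waste_tonnes"],
})
print(indicators)
```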
Model-based reconstruction of synthetic promoter library in Corynebacterium glutamicum.
Zhang, Shuanghong; Liu, Dingyu; Mao, Zhitao; Mao, Yufeng; Ma, Hongwu; Chen, Tao; Zhao, Xueming; Wang, Zhiwen
2018-05-01
To develop an efficient synthetic promoter library for fine-tuned expression of target genes in Corynebacterium glutamicum, a synthetic promoter library for C. glutamicum was developed based on conserved sequences of the −10 and −35 regions. The synthetic promoter library covered a wide range of strengths, from 1% to 193% of the tac promoter. 68 promoters were selected and sequenced for correlation analysis between promoter sequence and strength using a statistical model. A new promoter library was then reconstructed with improved promoter strength and coverage based on the results of the correlation analysis. The tandem promoter P70 was finally constructed, with strength increased by 121% over the tac promoter. The promoter library developed in this study shows great potential for applications in metabolic engineering and synthetic biology for the optimization of metabolic networks. To the best of our knowledge, this is the first reconstruction of a synthetic promoter library based on statistical analysis in C. glutamicum.
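As a toy analogue of relating promoter sequence to strength (not the paper's statistical model), one could one-hot encode core promoter bases and fit an ordinary linear regression to log strength; all sequences and strengths below are invented for illustration only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical -10 hexamers and measured relative strengths (% of Ptac).
seqs     = ["TATAAT", "TATGAT", "TACAAT", "GATAAT", "TATACT", "CATAAT"]
strength = [193.0, 88.0, 60.0, 35.0, 20.0, 5.0]

def one_hot(seq):
    # Encode each position as four indicator features (A, C, G, T).
    return [1.0 if seq[i] == b else 0.0 for i in range(len(seq)) for b in "ACGT"]

X = np.array([one_hot(s) for s in seqs])
y = np.log(strength)

model = LinearRegression().fit(X, y)   # toy fit; far too few sequences in practice
print("predicted log-strength of TATAAT:", model.predict([one_hot("TATAAT")])[0])
```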
Individualism: a valid and important dimension of cultural differences between nations.
Schimmack, Ulrich; Oishi, Shigehiro; Diener, Ed
2005-01-01
Oyserman, Coon, and Kemmelmeier's (2002) meta-analysis suggested problems in the measurement of individualism and collectivism. Studies using Hofstede's individualism scores show little convergent validity with more recent measures of individualism and collectivism. We propose that the lack of convergent validity is due to national differences in response styles. Whereas Hofstede statistically controlled for response styles, Oyserman et al.'s meta-analysis relied on uncorrected ratings. Data from an international student survey demonstrated convergent validity between Hofstede's individualism dimension and horizontal individualism when response styles were statistically controlled, whereas uncorrected scores correlated highly with the individualism scores in Oyserman et al.'s meta-analysis. Uncorrected horizontal individualism scores and meta-analytic individualism scores did not correlate significantly with nations' development, whereas corrected horizontal individualism scores and Hofstede's individualism dimension were significantly correlated with development. This pattern of results suggests that individualism is a valid construct for cross-cultural comparisons, but that the measurement of this construct needs improvement.
Miccoli, Gabriele; Gaimari, Gianfranco; Seracchiani, Marco; Morese, Antonio; Khrenova, Tatyana; Di Nardo, Dario
2017-01-01
The aim of the study was to evaluate the effectiveness of different heat treatments in improving the resistance to fracture of Ni-Ti endodontic rotary instruments. Twenty-four new NiTi instruments similar in length and shape were tested in a 60° curved artificial root canal: 12 M3 instruments, tip size 25 and .06 taper (United Dental, Shanghai, China), and 12 M3 Pro Gold instruments, tip size 25 and .06 taper (United Dental, Shanghai, China). Each group received a different heat treatment. Cycles to fracture were calculated for each instrument. Differences among groups were evaluated with an analysis of variance test (significance level set at P < 0.05). Statistical analysis found significant differences between groups (p < 0.0213). The M3 Pro Gold instruments were significantly more resistant to fatigue (mean = 1012 cycles, SD ± 77) than the M3 instruments (mean = 748 cycles, SD ± 62). No statistically significant differences were found between fragment lengths (p > 0.05). Increased flexibility and a reduction of internal defects produced by heat treatments during or after manufacturing may be responsible for the improved resistance to cyclic fatigue and flexural stresses.
Sandoval-Castellanos, Edson; Palkopoulou, Eleftheria; Dalén, Love
2014-01-01
Inference of population demographic history has vastly improved in recent years due to a number of technological and theoretical advances, including the use of ancient DNA. Approximate Bayesian computation (ABC) stands among the most promising methods due to its simple theoretical foundation and exceptional flexibility. However, the limited availability of user-friendly programs that perform ABC analysis renders it difficult to implement, and hence programming skills are frequently required. In addition, there is limited availability of programs able to deal with heterochronous data. Here we present the software BaySICS: Bayesian Statistical Inference of Coalescent Simulations. BaySICS provides an integrated and user-friendly platform that performs ABC analyses by means of coalescent simulations from DNA sequence data. It estimates historical demographic population parameters and performs hypothesis testing by means of Bayes factors obtained from model comparisons. In addition to specific features that improve inference from datasets with heterochronous data, BaySICS has several capabilities that make it a suitable tool for analysing contemporary genetic datasets. Those capabilities include joint analysis of independent tables, a graphical interface, and the implementation of Markov-chain Monte Carlo without likelihoods.
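BaySICS itself is a dedicated program; to convey the underlying rejection-ABC idea, here is a deliberately simplified toy sketch in which the "coalescent simulator" is reduced to drawing segregating sites from a Poisson approximation (an assumption made only for brevity, not how BaySICS simulates data):

```python
import numpy as np

rng = np.random.default_rng(4)
n_samples, observed_S = 20, 45                    # sample size and observed segregating sites
a_n = sum(1.0 / i for i in range(1, n_samples))   # Watterson's correction factor

def simulate_S(theta):
    """Very simplified coalescent stand-in: segregating sites ~ Poisson(theta * a_n)."""
    return rng.poisson(theta * a_n)

# Rejection ABC: keep prior draws whose simulated summary matches the data closely.
prior = rng.uniform(0.0, 20.0, size=100_000)
keep = np.array([abs(simulate_S(t) - observed_S) <= 2 for t in prior])
accepted = prior[keep]
print(f"posterior mean of theta ~ {accepted.mean():.2f} from {accepted.size} accepted draws")
```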
Role of Organizational Climate in Organizational Commitment: The Case of Teaching Hospitals.
Bahrami, Mohammad Amin; Barati, Omid; Ghoroghchian, Malake-Sadat; Montazer-Alfaraj, Razieh; Ranjbar Ezzatabadi, Mohammad
2016-04-01
The commitment of employees is affected by several factors, including factors related to the organizational climate. The aim of this study was to investigate the relationship between the organizational commitment of nurses and the organizational climate in hospital settings. A cross-sectional study was conducted in 2014 at two teaching hospitals in Yazd, Iran. A total of 90 nurses in these hospitals participated. We used stratified random sampling of the nursing population. The required data were gathered using two valid questionnaires: Allen and Meyer's organizational commitment standard questionnaire and Halpin and Croft's Organizational Climate Description Questionnaire. Data analysis was performed with SPSS 20 statistical software (IBM Corp., Armonk, NY, USA), using descriptive statistics and Pearson's correlation coefficient. The findings indicated a positive and significant correlation between organizational commitment and organizational climate (r = 0.269, p = 0.01). There were also significant positive relationships between the avoidance dimension of organizational climate and affective commitment (r = 0.208, p = 0.049) and between focus on production and normative and continuance commitment (r = 0.308, p = 0.003). Improving the organizational climate could be a valuable strategy for improving organizational commitment.
Lognormal Assimilation of Water Vapor in a WRF-GSI Cycled System
NASA Astrophysics Data System (ADS)
Fletcher, S. J.; Kliewer, A.; Jones, A. S.; Forsythe, J. M.
2015-12-01
Recent publications have shown the viability of both detecting a lognormally-distributed signal for water vapor mixing ratio and the improved quality of satellite retrievals in a 1DVAR mixed lognormal-Gaussian assimilation scheme over a Gaussian-only system. This mixed scheme is incorporated into the Gridpoint Statistical Interpolation (GSI) assimilation scheme with the goal of improving forecasts from the Weather Research and Forecasting (WRF) Model in a cycled system. Results are presented of the impact of treating water vapor as a lognormal random variable. Included in the analysis are: 1) the evolution of Tropical Storm Chris from 2006, and 2) an analysis of a "Pineapple Express" water vapor event from 2005 where a lognormal signal has been previously detected.
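As a one-variable illustration of what a lognormal background term changes relative to a Gaussian one (this is not the GSI implementation), consider the following minimal variational cost with an identity observation operator and invented numbers:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical background and observation for a water-vapor mixing ratio (g/kg).
q_b, sigma_lnb = 6.0, 0.3      # lognormal background: median and log-space spread
y_obs, sigma_o = 7.5, 0.5      # Gaussian observation error; H = identity here

def cost(q):
    jb = (np.log(q) - np.log(q_b)) ** 2 / (2 * sigma_lnb ** 2)  # lognormal background term
    jo = (y_obs - q) ** 2 / (2 * sigma_o ** 2)                  # Gaussian observation term
    return jb + jo

analysis = minimize_scalar(cost, bounds=(0.1, 20.0), method="bounded")
print(f"analysis value q_a ~ {analysis.x:.2f} g/kg")
```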