Sample records for comprehensive statistical analysis

  1. Cognition of and Demand for Education and Teaching in Medical Statistics in China: A Systematic Review and Meta-Analysis

    PubMed Central

    Li, Gaoming; Yi, Dali; Wu, Xiaojiao; Liu, Xiaoyu; Zhang, Yanqi; Liu, Ling; Yi, Dong

    2015-01-01

    Background: Although a substantial number of studies focus on the teaching and application of medical statistics in China, few studies comprehensively evaluate the recognition of and demand for medical statistics. In addition, the results of these various studies differ and are insufficiently comprehensive and systematic. Objectives: This investigation aimed to evaluate the general cognition of and demand for medical statistics by undergraduates, graduates, and medical staff in China. Methods: We performed a comprehensive database search related to the cognition of and demand for medical statistics from January 2007 to July 2014 and conducted a meta-analysis of non-controlled studies with sub-group analysis for undergraduates, graduates, and medical staff. Results: There are substantial differences with respect to the cognition of theory in medical statistics among undergraduates (73.5%), graduates (60.7%), and medical staff (39.6%). The demand for theory in medical statistics is high among graduates (94.6%), undergraduates (86.1%), and medical staff (88.3%). Regarding specific statistical methods, the cognition of basic statistical methods is higher than that of advanced statistical methods. The demand for certain advanced statistical methods, including (but not limited to) multiple analysis of variance (ANOVA), multiple linear regression, and logistic regression, is higher than that for basic statistical methods. The use rates of the Statistical Package for the Social Sciences (SPSS) software and statistical analysis software (SAS) are only 55% and 15%, respectively. Conclusion: The overall statistical competence of undergraduates, graduates, and medical staff is insufficient, and their ability to practically apply their statistical knowledge is limited, which constitutes an unsatisfactory state of affairs for medical statistics education. Because the demand for skills in this area is increasing, the need to reform medical statistics education in China has become urgent. PMID:26053876

  2. Cognition of and Demand for Education and Teaching in Medical Statistics in China: A Systematic Review and Meta-Analysis.

    PubMed

    Wu, Yazhou; Zhou, Liang; Li, Gaoming; Yi, Dali; Wu, Xiaojiao; Liu, Xiaoyu; Zhang, Yanqi; Liu, Ling; Yi, Dong

    2015-01-01

    Although a substantial number of studies focus on the teaching and application of medical statistics in China, few studies comprehensively evaluate the recognition of and demand for medical statistics. In addition, the results of these various studies differ and are insufficiently comprehensive and systematic. This investigation aimed to evaluate the general cognition of and demand for medical statistics by undergraduates, graduates, and medical staff in China. We performed a comprehensive database search related to the cognition of and demand for medical statistics from January 2007 to July 2014 and conducted a meta-analysis of non-controlled studies with sub-group analysis for undergraduates, graduates, and medical staff. There are substantial differences with respect to the cognition of theory in medical statistics among undergraduates (73.5%), graduates (60.7%), and medical staff (39.6%). The demand for theory in medical statistics is high among graduates (94.6%), undergraduates (86.1%), and medical staff (88.3%). Regarding specific statistical methods, the cognition of basic statistical methods is higher than that of advanced statistical methods. The demand for certain advanced statistical methods, including (but not limited to) multiple analysis of variance (ANOVA), multiple linear regression, and logistic regression, is higher than that for basic statistical methods. The use rates of the Statistical Package for the Social Sciences (SPSS) software and statistical analysis software (SAS) are only 55% and 15%, respectively. The overall statistical competence of undergraduates, graduates, and medical staff is insufficient, and their ability to practically apply their statistical knowledge is limited, which constitutes an unsatisfactory state of affairs for medical statistics education. Because the demand for skills in this area is increasing, the need to reform medical statistics education in China has become urgent.
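    The meta-analysis in records 1 and 2 combines proportions (e.g., cognition rates) across non-controlled studies. A minimal sketch of fixed-effect inverse-variance pooling of proportions is shown below; the function name and the survey counts are hypothetical illustrations, not data from the study.

```python
import math

def pool_proportions(events, totals):
    """Fixed-effect inverse-variance pooling of proportions (illustrative sketch)."""
    weights, estimates = [], []
    for e, n in zip(events, totals):
        p = e / n
        var = p * (1 - p) / n          # binomial variance of a single proportion
        weights.append(1 / var)        # inverse-variance weight
        estimates.append(p)
    pooled = sum(w * p for w, p in zip(weights, estimates)) / sum(weights)
    se = math.sqrt(1 / sum(weights))   # standard error of the pooled estimate
    return pooled, se

# Hypothetical cognition rates from three surveys: 147/200, 61/100, 79/200
pooled, se = pool_proportions([147, 61, 79], [200, 100, 200])
```

    Larger studies receive proportionally more weight; a random-effects model would additionally widen the standard error to absorb between-study heterogeneity.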

  3. SSD for R: A Comprehensive Statistical Package to Analyze Single-System Data

    ERIC Educational Resources Information Center

    Auerbach, Charles; Schudrich, Wendy Zeitlin

    2013-01-01

    The need for statistical analysis in single-subject designs presents a challenge, as analytical methods that are applied to group comparison studies are often not appropriate in single-subject research. "SSD for R" is a robust set of statistical functions with wide applicability to single-subject research. It is a comprehensive package…

  4. Statistical Learning Analysis in Neuroscience: Aiming for Transparency

    PubMed Central

    Hanke, Michael; Halchenko, Yaroslav O.; Haxby, James V.; Pollmann, Stefan

    2009-01-01

    Encouraged by a rise of reciprocal interest between the machine learning and neuroscience communities, several recent studies have demonstrated the explanatory power of statistical learning techniques for the analysis of neural data. In order to facilitate a wider adoption of these methods, neuroscientific research needs to ensure a maximum of transparency to allow for comprehensive evaluation of the employed procedures. We argue that such transparency requires “neuroscience-aware” technology for the performance of multivariate pattern analyses of neural data that can be documented in a comprehensive, yet comprehensible way. Recently, we introduced PyMVPA, a specialized Python framework for machine learning based data analysis that addresses this demand. Here, we review its features and applicability to various neural data modalities. PMID:20582270

  5. The Empirical Review of Meta-Analysis Published in Korea

    ERIC Educational Resources Information Center

    Park, Sunyoung; Hong, Sehee

    2016-01-01

    Meta-analysis is a statistical method that is increasingly utilized to combine and compare the results of previous primary studies. However, because of the lack of comprehensive guidelines for how to use meta-analysis, many meta-analysis studies have failed to consider important aspects, such as statistical programs, power analysis, publication…

  6. Comprehension-Based versus Production-Based Grammar Instruction: A Meta-Analysis of Comparative Studies

    ERIC Educational Resources Information Center

    Shintani, Natsuko; Li, Shaofeng; Ellis, Rod

    2013-01-01

    This article reports a meta-analysis of studies that investigated the relative effectiveness of comprehension-based instruction (CBI) and production-based instruction (PBI). The meta-analysis only included studies that featured a direct comparison of CBI and PBI in order to ensure methodological and statistical robustness. A total of 35 research…

  7. An Analysis of Mathematics Course Sequences for Low Achieving Students at a Comprehensive Technical High School

    ERIC Educational Resources Information Center

    Edge, D. Michael

    2011-01-01

    This non-experimental study attempted to determine how the different prescribed mathematics tracks offered at a comprehensive technical high school influenced the mathematics performance of low-achieving students on standardized assessments of mathematics achievement. The goal was to provide an analysis of any statistically significant differences…

  8. Arthrodesis following failed total knee arthroplasty: comprehensive review and meta-analysis of recent literature.

    PubMed

    Damron, T A; McBeath, A A

    1995-04-01

    With the increasing duration of follow up on total knee arthroplasties, more revision arthroplasties are being performed. When revision is not advisable, a salvage procedure such as arthrodesis or resection arthroplasty is indicated. This article provides a comprehensive review of the literature regarding arthrodesis following failed total knee arthroplasty. In addition, a statistical meta-analysis of five studies using modern arthrodesis techniques is presented. A significantly greater fusion rate with intramedullary nail arthrodesis compared with external fixation is documented. Gram-negative and mixed infections are found to be significant risk factors for failure of arthrodesis.

  9. AN EMPIRICAL BAYES APPROACH TO COMBINING ESTIMATES OF THE VALUE OF A STATISTICAL LIFE FOR ENVIRONMENTAL POLICY ANALYSIS

    EPA Science Inventory

    This analysis updates EPA's standard VSL estimate by using a more comprehensive collection of VSL studies that include studies published between 1992 and 2000, as well as applying a more appropriate statistical method. We provide a pooled effect VSL estimate by applying the empi...

  10. Chi-Square Statistics, Tests of Hypothesis and Technology.

    ERIC Educational Resources Information Center

    Rochowicz, John A.

    The use of technology such as computers and programmable calculators enables students to find p-values and conduct tests of hypotheses in many different ways. Comprehension and interpretation of a research problem become the focus for statistical analysis. This paper describes how to calculate chi-square statistics and p-values for statistical…
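    The record above concerns computing chi-square statistics and p-values with technology. A minimal sketch using SciPy's goodness-of-fit test follows; the observed and expected counts are hypothetical, chosen only to illustrate the call.

```python
from scipy import stats

# Hypothetical observed counts across four equally likely categories
observed = [18, 22, 29, 31]
expected = [25, 25, 25, 25]

# chi2 = sum((O - E)^2 / E); p is the upper-tail probability at df = k - 1 = 3
chi2, p = stats.chisquare(observed, f_exp=expected)
```

    Here chi2 evaluates to 4.4, well below the df=3 critical value of 7.815 at alpha=0.05, so the uniform hypothesis would not be rejected.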

  11. An Assessment of the Effectiveness of Air Force Risk Management Practices in Program Acquisition Using Survey Instrument Analysis

    DTIC Science & Technology

    2015-06-18

    Engineering Effectiveness Survey. CMU/SEI-2012-SR-009. Carnegie Mellon University. November 2012. Field, Andy. Discovering Statistics Using SPSS, 3rd ... enough into the survey to begin answering questions on risk practices. All of the data statistical analysis will be performed using SPSS. Prior to ... probabilistically using distributions for likelihood and impact. Statistical methods like Monte Carlo can more comprehensively evaluate the cost and

  12. Cognition, comprehension and application of biostatistics in research by Indian postgraduate students in periodontics.

    PubMed

    Swetha, Jonnalagadda Laxmi; Arpita, Ramisetti; Srikanth, Chintalapani; Nutalapati, Rajasekhar

    2014-01-01

    Biostatistics is an integral part of research protocols. In any field of inquiry or investigation, the data obtained are subsequently classified, analyzed and tested for accuracy by statistical methods. Statistical analysis of collected data, thus, forms the basis for all evidence-based conclusions. The aim of this study is to evaluate the cognition, comprehension and application of biostatistics in research among postgraduate students in periodontics in India. A total of 391 postgraduate students registered for a master's course in periodontics at various dental colleges across India were included in the survey. Data regarding the level of knowledge, understanding and its application in the design and conduct of the research protocol were collected using a dichotomous questionnaire. Descriptive statistics were used for data analysis. Nearly 79.2% of students were aware of the importance of biostatistics in research, 55-65% were familiar with MS-EXCEL spreadsheets for graphical representation of data and with the statistical software available on the internet, 26.0% had biostatistics as a mandatory subject in their curriculum, 9.5% tried to perform statistical analysis on their own, while 3.0% were successful in performing statistical analysis of their studies on their own. Biostatistics should play a central role in the planning, conduct, interim analysis, final analysis and reporting of periodontal research, especially by postgraduate students. Indian postgraduate students in periodontics are aware of the importance of biostatistics in research, but the level of understanding and application is still basic and needs to be addressed.

  13. Emergent Readers' Social Interaction Styles and Their Comprehension Processes during Buddy Reading

    ERIC Educational Resources Information Center

    Christ, Tanya; Wang, X. Christine; Chiu, Ming Ming

    2015-01-01

    To examine the relations between emergent readers' social interaction styles and their comprehension processes, we adapted sociocultural and transactional views of learning and reading, and conducted statistical discourse analysis of 1,359 conversation turns transcribed from 14 preschoolers' 40 buddy reading events. Results show that interaction…

  14. Evaluating the consistency of gene sets used in the analysis of bacterial gene expression data.

    PubMed

    Tintle, Nathan L; Sitarik, Alexandra; Boerema, Benjamin; Young, Kylie; Best, Aaron A; Dejongh, Matthew

    2012-08-08

    Statistical analyses of whole genome expression data require functional information about genes in order to yield meaningful biological conclusions. The Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) are common sources of functionally grouped gene sets. For bacteria, the SEED and MicrobesOnline provide alternative, complementary sources of gene sets. To date, no comprehensive evaluation of the data obtained from these resources has been performed. We define a series of gene set consistency metrics directly related to the most common classes of statistical analyses for gene expression data, and then perform a comprehensive analysis of 3581 Affymetrix® gene expression arrays across 17 diverse bacteria. We find that gene sets obtained from GO and KEGG demonstrate lower consistency than those obtained from the SEED and MicrobesOnline, regardless of gene set size. Despite the widespread use of GO and KEGG gene sets in bacterial gene expression data analysis, the SEED and MicrobesOnline provide more consistent sets for a wide variety of statistical analyses. Increased use of the SEED and MicrobesOnline gene sets in the analysis of bacterial gene expression data may improve statistical power and utility of expression data.

  15. Operational Consequences of Literacy Gap.

    DTIC Science & Technology

    1980-05-01

    Comprehension Scores on the Safety and Sanitation Content ... Statistics on Experimental Groups' Performance by Sex and Content ... Analysis of Variance of Experimental Groups by Sex and Content ... Mean Comprehension Scores Broken Down by Content, Subject RGL and Reading Time ... Analysis of ... ratings along a scale of difficulty which parallels the school grade scale. Burkett (1975) and Klare (1963; 1974-1975) provide summaries of the extensive

  16. Anima: Modular Workflow System for Comprehensive Image Data Analysis

    PubMed Central

    Rantanen, Ville; Valori, Miko; Hautaniemi, Sampsa

    2014-01-01

    Modern microscopes produce vast amounts of image data, and computational methods are needed to analyze and interpret these data. Furthermore, a single image analysis project may require tens or hundreds of analysis steps starting from data import and pre-processing to segmentation and statistical analysis; and ending with visualization and reporting. To manage such large-scale image data analysis projects, we present here a modular workflow system called Anima. Anima is designed for comprehensive and efficient image data analysis development, and it contains several features that are crucial in high-throughput image data analysis: programming language independence, batch processing, easily customized data processing, interoperability with other software via application programming interfaces, and advanced multivariate statistical analysis. The utility of Anima is shown with two case studies focusing on testing different algorithms developed in different imaging platforms and an automated prediction of alive/dead C. elegans worms by integrating several analysis environments. Anima is fully open source and available with documentation at www.anduril.org/anima. PMID:25126541

  17. Cognition, comprehension and application of biostatistics in research by Indian postgraduate students in periodontics

    PubMed Central

    Swetha, Jonnalagadda Laxmi; Arpita, Ramisetti; Srikanth, Chintalapani; Nutalapati, Rajasekhar

    2014-01-01

    Background: Biostatistics is an integral part of research protocols. In any field of inquiry or investigation, the data obtained are subsequently classified, analyzed and tested for accuracy by statistical methods. Statistical analysis of collected data, thus, forms the basis for all evidence-based conclusions. Aim: The aim of this study is to evaluate the cognition, comprehension and application of biostatistics in research among postgraduate students in periodontics in India. Materials and Methods: A total of 391 postgraduate students registered for a master's course in periodontics at various dental colleges across India were included in the survey. Data regarding the level of knowledge, understanding and its application in the design and conduct of the research protocol were collected using a dichotomous questionnaire. Descriptive statistics were used for data analysis. Results: Nearly 79.2% of students were aware of the importance of biostatistics in research, 55-65% were familiar with MS-EXCEL spreadsheets for graphical representation of data and with the statistical software available on the internet, 26.0% had biostatistics as a mandatory subject in their curriculum, 9.5% tried to perform statistical analysis on their own, while 3.0% were successful in performing statistical analysis of their studies on their own. Conclusion: Biostatistics should play a central role in the planning, conduct, interim analysis, final analysis and reporting of periodontal research, especially by postgraduate students. Indian postgraduate students in periodontics are aware of the importance of biostatistics in research, but the level of understanding and application is still basic and needs to be addressed. PMID:24744547

  18. 2011 statistical abstract of the United States

    USGS Publications Warehouse

    Krisanda, Joseph M.

    2011-01-01

    The Statistical Abstract of the United States, published since 1878, is the authoritative and comprehensive summary of statistics on the social, political, and economic organization of the United States.Use the Abstract as a convenient volume for statistical reference, and as a guide to sources of more information both in print and on the Web.Sources of data include the Census Bureau, Bureau of Labor Statistics, Bureau of Economic Analysis, and many other Federal agencies and private organizations.

  19. Application of Ontology Technology in Health Statistic Data Analysis.

    PubMed

    Guo, Minjiang; Hu, Hongpu; Lei, Xingyun

    2017-01-01

    Research Purpose: To establish a health management ontology for the analysis of health statistic data. Proposed Methods: This paper established a health management ontology based on an analysis of the concepts in the China Health Statistics Yearbook and used Protégé to define the syntactic and semantic structure of health statistical data. Six classes of top-level ontology concepts and their subclasses were extracted, and the object properties and data properties were defined to establish the construction of these classes. By ontology instantiation, we can integrate multi-source heterogeneous data and enable administrators to have an overall understanding and analysis of the health statistic data. Ontology technology provides a comprehensive and unified information integration structure for the health management domain and lays a foundation for the efficient analysis of multi-source, heterogeneous health system management data and enhancement of management efficiency.

  20. Training in metabolomics research. II. Processing and statistical analysis of metabolomics data, metabolite identification, pathway analysis, applications of metabolomics and its future

    PubMed Central

    Barnes, Stephen; Benton, H. Paul; Casazza, Krista; Cooper, Sara; Cui, Xiangqin; Du, Xiuxia; Engler, Jeffrey; Kabarowski, Janusz H.; Li, Shuzhao; Pathmasiri, Wimal; Prasain, Jeevan K.; Renfrow, Matthew B.; Tiwari, Hemant K.

    2017-01-01

    Metabolomics, a systems biology discipline representing analysis of known and unknown pathways of metabolism, has grown tremendously over the past 20 years. Because of its comprehensive nature, metabolomics requires careful consideration of the question(s) being asked, the scale needed to answer the question(s), collection and storage of the sample specimens, methods for extraction of the metabolites from biological matrices, the analytical method(s) to be employed and the quality control of the analyses, how collected data are correlated, the statistical methods to determine metabolites undergoing significant change, putative identification of metabolites, and the use of stable isotopes to aid in verifying metabolite identity and establishing pathway connections and fluxes. This second part of a comprehensive description of the methods of metabolomics focuses on data analysis, emerging methods in metabolomics and the future of this discipline. PMID:28239968

  1. Data Processing System (DPS) software with experimental design, statistical analysis and data mining developed for use in entomological research.

    PubMed

    Tang, Qi-Yi; Zhang, Chuan-Xi

    2013-04-01

    A comprehensive but simple-to-use software package called DPS (Data Processing System) has been developed to execute a range of standard numerical analyses and operations used in experimental design, statistics and data mining. This program runs on standard Windows computers. Many of the functions are specific to entomological and other biological research and are not found in standard statistical software. This paper presents applications of DPS to experimental design, statistical analysis and data mining in entomology. © 2012 The Authors Insect Science © 2012 Institute of Zoology, Chinese Academy of Sciences.

  2. 2011 statistical abstract of the United States

    USGS Publications Warehouse

    Krisanda, Joseph M.

    2011-01-01

    The Statistical Abstract of the United States, published since 1878, is the authoritative and comprehensive summary of statistics on the social, political, and economic organization of the United States.


    Use the Abstract as a convenient volume for statistical reference, and as a guide to sources of more information both in print and on the Web.


    Sources of data include the Census Bureau, Bureau of Labor Statistics, Bureau of Economic Analysis, and many other Federal agencies and private organizations.

  3. Developing a Test for Assessing Elementary Students' Comprehension of Science Texts

    ERIC Educational Resources Information Center

    Wang, Jing-Ru; Chen, Shin-Feng; Tsay, Reuy-Fen; Chou, Ching-Ting; Lin, Sheau-Wen; Kao, Huey-Lien

    2012-01-01

    This study reports on the process of developing a test to assess students' reading comprehension of scientific materials and on the statistical results of the verification study. A combination of classic test theory and item response theory approaches was used to analyze the assessment data from a verification study. Data analysis indicates the…

  4. The Effects of Measurement Error on Statistical Models for Analyzing Change. Final Report.

    ERIC Educational Resources Information Center

    Dunivant, Noel

    The results of six major projects are discussed including a comprehensive mathematical and statistical analysis of the problems caused by errors of measurement in linear models for assessing change. In a general matrix representation of the problem, several new analytic results are proved concerning the parameters which affect bias in…

  5. Psychometric Analysis of Role Conflict and Ambiguity Scales in Academia

    ERIC Educational Resources Information Center

    Khan, Anwar; Yusoff, Rosman Bin Md.; Khan, Muhammad Muddassar; Yasir, Muhammad; Khan, Faisal

    2014-01-01

    A comprehensive psychometric analysis of Rizzo et al.'s (1970) Role Conflict & Ambiguity (RCA) scales was performed after their distribution among 600 academic staff working in six universities of Pakistan. The reliability analysis includes calculation of Cronbach Alpha Coefficients and Inter-Item statistics, whereas validity was determined by…

  6. [Comparison of application of Cochran-Armitage trend test and linear regression analysis for rate trend analysis in epidemiology study].

    PubMed

    Wang, D Z; Wang, C; Shen, C F; Zhang, Y; Zhang, H; Song, G D; Xue, X D; Xu, Z L; Zhang, S; Jiang, G H

    2017-05-10

    We described the time trend in acute myocardial infarction (AMI) incidence in Tianjin from 1999 to 2013 with the Cochran-Armitage trend (CAT) test and linear regression analysis, and the results were compared. Based on the actual population, the CAT test had much stronger statistical power than linear regression analysis for both the overall incidence trend and the age-specific incidence trends (Cochran-Armitage trend P value
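    The abstract above compares the Cochran-Armitage trend (CAT) test with linear regression for rate trends. The CAT statistic for a 2 x k table can be sketched as below; the function is a textbook-formula implementation, and the case counts and period scores are hypothetical, not the Tianjin data.

```python
import numpy as np
from scipy.stats import norm

def cochran_armitage(cases, totals, scores):
    """Two-sided Cochran-Armitage trend test for a 2 x k table (sketch)."""
    cases, totals, scores = map(np.asarray, (cases, totals, scores))
    n = totals.sum()
    p_bar = cases.sum() / n                       # overall event proportion
    # Trend statistic: score-weighted deviation of observed from expected cases
    t = (scores * cases).sum() - p_bar * (scores * totals).sum()
    var = p_bar * (1 - p_bar) * ((scores**2 * totals).sum()
                                 - (scores * totals).sum()**2 / n)
    z = t / np.sqrt(var)
    return z, 2 * norm.sf(abs(z))                 # normal-approximation p-value

# Hypothetical event counts across three ordered calendar periods
z, p = cochran_armitage(cases=[10, 20, 35], totals=[100, 100, 100], scores=[0, 1, 2])
```

    Because the test conditions on the binomial structure of each group's rate, it retains power where an ordinary least-squares fit to the rates treats every period as equally informative.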

  7. Sources of Safety Data and Statistical Strategies for Design and Analysis: Clinical Trials.

    PubMed

    Zink, Richard C; Marchenko, Olga; Sanchez-Kam, Matilde; Ma, Haijun; Jiang, Qi

    2018-03-01

    There has been an increased emphasis on the proactive and comprehensive evaluation of safety endpoints to ensure patient well-being throughout the medical product life cycle. In fact, depending on the severity of the underlying disease, it is important to plan for a comprehensive safety evaluation at the start of any development program. Statisticians should be intimately involved in this process and contribute their expertise to study design, safety data collection, analysis, reporting (including data visualization), and interpretation. In this manuscript, we review the challenges associated with the analysis of safety endpoints and describe the safety data that are available to influence the design and analysis of premarket clinical trials. We share our recommendations for the statistical and graphical methodologies necessary to appropriately analyze, report, and interpret safety outcomes, and we discuss the advantages and disadvantages of safety data obtained from clinical trials compared to other sources. Clinical trials are an important source of safety data that contribute to the totality of safety information available to generate evidence for regulators, sponsors, payers, physicians, and patients. This work is a result of the efforts of the American Statistical Association Biopharmaceutical Section Safety Working Group.

  8. Training in metabolomics research. II. Processing and statistical analysis of metabolomics data, metabolite identification, pathway analysis, applications of metabolomics and its future.

    PubMed

    Barnes, Stephen; Benton, H Paul; Casazza, Krista; Cooper, Sara J; Cui, Xiangqin; Du, Xiuxia; Engler, Jeffrey; Kabarowski, Janusz H; Li, Shuzhao; Pathmasiri, Wimal; Prasain, Jeevan K; Renfrow, Matthew B; Tiwari, Hemant K

    2016-08-01

    Metabolomics, a systems biology discipline representing analysis of known and unknown pathways of metabolism, has grown tremendously over the past 20 years. Because of its comprehensive nature, metabolomics requires careful consideration of the question(s) being asked, the scale needed to answer the question(s), collection and storage of the sample specimens, methods for extraction of the metabolites from biological matrices, the analytical method(s) to be employed and the quality control of the analyses, how collected data are correlated, the statistical methods to determine metabolites undergoing significant change, putative identification of metabolites and the use of stable isotopes to aid in verifying metabolite identity and establishing pathway connections and fluxes. This second part of a comprehensive description of the methods of metabolomics focuses on data analysis, emerging methods in metabolomics and the future of this discipline. Copyright © 2016 John Wiley & Sons, Ltd.

  9. Implementation of Statistics Textbook Support with ICT and Portfolio Assessment Approach to Improve Students Teacher Mathematical Connection Skills

    NASA Astrophysics Data System (ADS)

    Hendikawati, P.; Dewi, N. R.

    2017-04-01

    Statistics is needed in the data analysis process and is applied comprehensively in daily life, so students must master statistical material well. The use of a Statistics textbook supported with ICT and a portfolio assessment approach was expected to help students improve their mathematical connection skills. The subjects of this research were 30 student teachers taking Statistics courses. The result of this research is that the use of a Statistics textbook supported with ICT and a portfolio assessment approach can improve student teachers' mathematical connection skills.

  10. Ripening-dependent metabolic changes in the volatiles of pineapple (Ananas comosus (L.) Merr.) fruit: II. Multivariate statistical profiling of pineapple aroma compounds based on comprehensive two-dimensional gas chromatography-mass spectrometry.

    PubMed

    Steingass, Christof Björn; Jutzi, Manfred; Müller, Jenny; Carle, Reinhold; Schmarr, Hans-Georg

    2015-03-01

    Ripening-dependent changes of pineapple volatiles were studied in a nontargeted profiling analysis. Volatiles were isolated via headspace solid phase microextraction and analyzed by comprehensive 2D gas chromatography and mass spectrometry (HS-SPME-GC×GC-qMS). Profile patterns presented in the contour plots were evaluated applying image processing techniques and subsequent multivariate statistical data analysis. Statistical methods comprised unsupervised hierarchical cluster analysis (HCA) and principal component analysis (PCA) to classify the samples. Supervised partial least squares discriminant analysis (PLS-DA) and partial least squares (PLS) regression were applied to discriminate different ripening stages and describe the development of volatiles during postharvest storage, respectively. In this way, substantial chemical markers enabling class separation were revealed. The workflow permitted the rapid distinction between premature green-ripe pineapples and postharvest-ripened sea-freighted fruits. After PCA with only two principal components, volatile profiles of fully ripe air-freighted pineapples were similar to those of green-ripe fruits postharvest-ripened for 6 days after simulated sea-freight export. However, PCA considering also the third principal component allowed differentiation between air-freighted fruits and the four progressing postharvest maturity stages of sea-freighted pineapples.
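    The profiling workflow above classifies samples with PCA on a sample-by-feature matrix. A minimal mean-centered, SVD-based PCA sketch follows; the matrix dimensions and random values are hypothetical stand-ins for the pineapple intensity data.

```python
import numpy as np

def pca_scores(X, n_components=2):
    """PCA via SVD on mean-centered data; returns scores and explained-variance ratios."""
    Xc = X - X.mean(axis=0)                       # center each feature column
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]   # sample coordinates on the PCs
    evr = (s**2) / (s**2).sum()                   # variance explained per component
    return scores, evr[:n_components]

rng = np.random.default_rng(0)
# Hypothetical intensity matrix: 12 fruit samples x 6 volatile features
X = rng.normal(size=(12, 6))
scores, evr = pca_scores(X, n_components=2)
```

    Plotting the two score columns against each other gives the kind of 2-component map on which the air-freighted and sea-freighted classes overlapped; adding a third component corresponds to keeping one more column of `U * s`.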

  11. Parricide: An Empirical Analysis of 24 Years of U.S. Data

    ERIC Educational Resources Information Center

    Heide, Kathleen M.; Petee, Thomas A.

    2007-01-01

    Empirical analysis of homicides in which children have killed parents has been limited. The most comprehensive statistical analysis involving parents as victims was undertaken by Heide and used Supplementary Homicide Report (SHR) data for the 10-year period 1977 to 1986. This article provides an updated examination of characteristics of victims,…

  12. MetaGenyo: a web tool for meta-analysis of genetic association studies.

    PubMed

    Martorell-Marugan, Jordi; Toro-Dominguez, Daniel; Alarcon-Riquelme, Marta E; Carmona-Saez, Pedro

    2017-12-16

    Genetic association studies (GAS) aim to evaluate the association between genetic variants and phenotypes. In the last few years, the number of such studies has increased exponentially, but the results are not always reproducible due to experimental designs, low sample sizes and other methodological errors. In this field, meta-analysis techniques are becoming very popular tools to combine results across studies, increase statistical power and resolve discrepancies in genetic association studies. A meta-analysis summarizes research findings, increases statistical power and enables the identification of genuine associations between genotypes and phenotypes. Meta-analysis techniques are increasingly used in GAS, but the number of published meta-analyses containing errors is also increasing. Although there are several software packages that implement meta-analysis, none of them are specifically designed for genetic association studies, and in most cases their use requires advanced programming or scripting expertise. We have developed MetaGenyo, a web tool for meta-analysis in GAS. MetaGenyo implements a complete and comprehensive workflow that can be executed in an easy-to-use environment without programming knowledge. MetaGenyo has been developed to guide users through the main steps of a GAS meta-analysis, covering the Hardy-Weinberg test, statistical association for different genetic models, analysis of heterogeneity, testing for publication bias, subgroup analysis and robustness testing of the results. MetaGenyo is a useful tool to conduct comprehensive genetic association meta-analysis. The application is freely available at http://bioinfo.genyo.es/metagenyo/.
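    The core step of a GAS meta-analysis, combining per-study effect sizes, can be sketched as fixed-effect inverse-variance pooling of log odds ratios from 2x2 tables. This is a generic illustration with hypothetical counts, not MetaGenyo's actual implementation.

```python
import math

def pool_log_or(tables):
    """Fixed-effect (inverse-variance) pooling of odds ratios from 2x2 tables (sketch)."""
    num = den = 0.0
    # Each table: (cases exposed, cases unexposed, controls exposed, controls unexposed)
    for a, b, c, d in tables:
        log_or = math.log((a * d) / (b * c))
        var = 1/a + 1/b + 1/c + 1/d     # Woolf variance of the log odds ratio
        num += log_or / var
        den += 1 / var
    pooled_log_or = num / den
    return math.exp(pooled_log_or), math.sqrt(1 / den)

# Two hypothetical studies with identical 2x2 counts
or_pooled, se = pool_log_or([(30, 70, 20, 80), (30, 70, 20, 80)])
```

    With identical studies the pooled odds ratio equals each study's own, while the standard error shrinks; a full workflow would add a heterogeneity statistic (e.g., Cochran's Q) and publication-bias checks, as the abstract describes.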

  13. A Critical Review and Appropriation of Pierre Bourdieu's Analysis of Social and Cultural Reproduction

    ERIC Educational Resources Information Center

    Shirley, Dennis

    1986-01-01

    Makes accessible Bourdieu's comprehensive and systematic sociology of French education, which integrates classical sociological theory and statistical analysis. Isolates and explicates key terminology, links these concepts together, and critiques the work from the perspective of the philosophy of praxis. (LHW)

  14. Incorporating Multi-criteria Optimization and Uncertainty Analysis in the Model-Based Systems Engineering of an Autonomous Surface Craft

    DTIC Science & Technology

    2009-09-01

    SAS Statistical Analysis Software; SE Systems Engineering; SEP Systems Engineering Process; SHP Shaft Horsepower; SIGINT Signals Intelligence… management occurs (OSD 2002). The Systems Engineering Process (SEP), displayed in Figure 2, is a comprehensive, iterative and recursive problem…

  15. COMAN: a web server for comprehensive metatranscriptomics analysis.

    PubMed

    Ni, Yueqiong; Li, Jun; Panagiotou, Gianni

    2016-08-11

    Microbiota-oriented studies based on metagenomic or metatranscriptomic sequencing have revolutionised our understanding of microbial ecology and the roles of both clinical and environmental microbes. The analysis of massive metatranscriptomic data requires extensive computational resources, a collection of bioinformatics tools and expertise in programming. We developed COMAN (Comprehensive Metatranscriptomics Analysis), a web-based tool dedicated to automatically and comprehensively analysing metatranscriptomic data. The COMAN pipeline includes quality control of raw reads and removal of reads derived from non-coding RNA, followed by functional annotation, comparative statistical analysis, pathway enrichment analysis, co-expression network analysis and high-quality visualisation. The essential data generated by COMAN are also provided in tabular format for additional analysis and integration with other software. The web server has an easy-to-use interface and detailed instructions, and is freely available at http://sbb.hku.hk/COMAN/. CONCLUSIONS: COMAN is an integrated web server dedicated to comprehensive functional analysis of metatranscriptomic data, translating massive amounts of reads into data tables and high-standard figures. It is expected to help researchers with less expertise in bioinformatics answer microbiota-related biological questions and to increase the accessibility and interpretation of microbiota RNA-Seq data.

  16. Orchestrating high-throughput genomic analysis with Bioconductor

    PubMed Central

    Huber, Wolfgang; Carey, Vincent J.; Gentleman, Robert; Anders, Simon; Carlson, Marc; Carvalho, Benilton S.; Bravo, Hector Corrada; Davis, Sean; Gatto, Laurent; Girke, Thomas; Gottardo, Raphael; Hahne, Florian; Hansen, Kasper D.; Irizarry, Rafael A.; Lawrence, Michael; Love, Michael I.; MacDonald, James; Obenchain, Valerie; Oleś, Andrzej K.; Pagès, Hervé; Reyes, Alejandro; Shannon, Paul; Smyth, Gordon K.; Tenenbaum, Dan; Waldron, Levi; Morgan, Martin

    2015-01-01

    Bioconductor is an open-source, open-development software project for the analysis and comprehension of high-throughput data in genomics and molecular biology. The project aims to enable interdisciplinary research, collaboration and rapid development of scientific software. Based on the statistical programming language R, Bioconductor comprises 934 interoperable packages contributed by a large, diverse community of scientists. Packages cover a range of bioinformatic and statistical applications. They undergo formal initial review and continuous automated testing. We present an overview for prospective users and contributors. PMID:25633503

  17. Effect of social support on informed consent in older adults with Parkinson disease and their caregivers.

    PubMed

    Ford, M E; Kallen, M; Richardson, P; Matthiesen, E; Cox, V; Teng, E J; Cook, K F; Petersen, N J

    2008-01-01

    To evaluate the effects of social support on comprehension and recall of consent form information in a study of Parkinson disease patients and their caregivers. Comparison of comprehension and recall outcomes among participants who read and signed the consent form accompanied by a family member/friend versus those of participants who read and signed the consent form unaccompanied. Comprehension and recall of consent form information were measured at one week and one month respectively, using Part A of the Quality of Informed Consent Questionnaire (QuIC). The mean age of the sample of 143 participants was 71 years (SD = 8.6 years). Analysis of covariance was used to compare QuIC scores between the intervention group (n = 70) and control group (n = 73). In the 1-week model, no statistically significant intervention effect was found (p = 0.860). However, the intervention status by patient status interaction was statistically significant (p = 0.012). In the 1-month model, no statistically significant intervention effect was found (p = 0.480). Again, however, the intervention status by patient status interaction was statistically significant (p = 0.040). At both time periods, intervention group patients scored higher (better) on the QuIC than did intervention group caregivers, and control group patients scored lower (worse) on the QuIC than did control group caregivers. Social support played a significant role in enhancing comprehension and recall of consent form information among patients.

  18. Statistical analysis and model validation of automobile emissions

    DOT National Transportation Integrated Search

    2000-09-01

    The article discusses the development of a comprehensive modal emissions model that is currently being integrated with a variety of transportation models as part of National Cooperative Highway Research Program project 25-11. Described is the second-...

  19. miRNet - dissecting miRNA-target interactions and functional associations through network-based visual analysis

    PubMed Central

    Fan, Yannan; Siklenka, Keith; Arora, Simran K.; Ribeiro, Paula; Kimmins, Sarah; Xia, Jianguo

    2016-01-01

    MicroRNAs (miRNAs) can regulate nearly all biological processes and their dysregulation is implicated in various complex diseases and pathological conditions. Recent years have seen a growing number of functional studies of miRNAs using high-throughput experimental technologies, which have produced a large amount of high-quality data regarding miRNA target genes and their interactions with small molecules, long non-coding RNAs, epigenetic modifiers, disease associations, etc. These rich sets of information have enabled the creation of comprehensive networks linking miRNAs with various biologically important entities to shed light on their collective functions and regulatory mechanisms. Here, we introduce miRNet, an easy-to-use web-based tool that offers statistical, visual and network-based approaches to help researchers understand miRNA functions and regulatory mechanisms. The key features of miRNet include: (i) a comprehensive knowledge base integrating high-quality miRNA-target interaction data from 11 databases; (ii) support for differential expression analysis of data from microarray, RNA-seq and quantitative PCR; (iii) implementation of a flexible interface for data filtering, refinement and customization during network creation; (iv) a powerful, fully featured network visualization system coupled with enrichment analysis. miRNet offers a comprehensive tool suite to enable statistical analysis and functional interpretation of various data generated from current miRNA studies. miRNet is freely available at http://www.mirnet.ca. PMID:27105848

  20. Inquiring the Most Critical Teacher's Technology Education Competences in the Highest Efficient Technology Education Learning Organization

    ERIC Educational Resources Information Center

    Yung-Kuan, Chan; Hsieh, Ming-Yuan; Lee, Chin-Feng; Huang, Chih-Cheng; Ho, Li-Chih

    2017-01-01

    Under the hyper-dynamic education situation, this research, in order to comprehensively explore the interplay between Teacher Competence Demands (TCD) and Learning Organization Requests (LOR), cross-employs the data-refining methods of Descriptive Statistics (DS), Analysis of Variance (ANOVA), and Principal Components Analysis (PCA)…

  1. Statistics and quality assurance for the Northern Research Station Forest Inventory and Analysis Program, 2016

    Treesearch

    Dale D. Gormanson; Scott A. Pugh; Charles J. Barnett; Patrick D. Miles; Randall S. Morin; Paul A. Sowers; Jim Westfall

    2017-01-01

    The U.S. Forest Service Forest Inventory and Analysis (FIA) program collects sample plot data on all forest ownerships across the United States. FIA's primary objective is to determine the extent, condition, volume, growth, and use of trees on the Nation's forest land through a comprehensive inventory and analysis of the Nation's forest resources. The...

  2. A comprehensive analysis of the performance characteristics of the Mount Laguna solar photovoltaic installation

    NASA Technical Reports Server (NTRS)

    Shumka, A.; Sollock, S. G.

    1981-01-01

    This paper represents the first comprehensive survey of the Mount Laguna Photovoltaic Installation. The novel techniques used for performing the field tests have been effective in locating and characterizing defective modules. A comparative analysis on the two types of modules used in the array indicates that they have significantly different failure rates, different distributions in degradational space and very different failure modes. A life cycle model is presented to explain a multimodal distribution observed for one module type. A statistical model is constructed and it is shown to be in good agreement with the field data.

  3. A comprehensive study on pavement edge line implementation.

    DOT National Transportation Integrated Search

    2014-04-01

    The previous 2011 study, Safety Improvement from Edge Lines on Rural Two-Lane Highways, analyzed the crash data of three years before and one year after edge line implementation by using the latest safety analysis statistical method. It concl...

  4. DAnTE: a statistical tool for quantitative analysis of –omics data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polpitiya, Ashoka D.; Qian, Weijun; Jaitly, Navdeep

    2008-05-03

    DAnTE (Data Analysis Tool Extension) is a statistical tool designed to address challenges unique to quantitative bottom-up, shotgun proteomics data. This tool has also been demonstrated for microarray data and can easily be extended to other high-throughput data types. DAnTE features selected normalization methods, missing value imputation algorithms, peptide to protein rollup methods, an extensive array of plotting functions, and a comprehensive ANOVA scheme that can handle unbalanced data and random effects. The Graphical User Interface (GUI) is designed to be very intuitive and user friendly.
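    The ANOVA scheme such a tool provides rests on the classic F test. The following is a minimal hand-rolled one-way illustration, not DAnTE's implementation (which additionally handles unbalanced designs with random effects); the group measurements are invented, and deliberately unbalanced.

```python
# One-way ANOVA by hand: does mean abundance differ across three
# hypothetical treatment groups?
groups = [
    [5.1, 4.9, 5.4, 5.0],  # control
    [6.2, 6.0, 5.8],       # treatment A (unbalanced on purpose)
    [4.2, 4.5, 4.1, 4.4],  # treatment B
]

grand = [x for g in groups for x in g]
grand_mean = sum(grand) / len(grand)

# Between-group and within-group sums of squares.
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

df_between = len(groups) - 1
df_within = len(grand) - len(groups)
F = (ss_between / df_between) / (ss_within / df_within)
print(f"F({df_between}, {df_within}) = {F:.2f}")
```

    The F statistic would then be compared against an F distribution with (2, 8) degrees of freedom to obtain a p-value.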

  5. Does administering a comprehensive examination affect pass rates on the Registered Health Information Administrator certification examination?

    PubMed

    McNeill, Marjorie H

    2009-01-01

    The purpose of this research study was to determine whether the administration of a comprehensive examination before graduation increases the percentage of students passing the Registered Health Information Administrator certification examination. A t-test for independent means yielded a statistically significant difference between the Registered Health Information Administrator certification examination pass rates of health information administration programs that administer a comprehensive examination and programs that do not administer a comprehensive examination. Programs with a high certification examination pass rate do not require a comprehensive examination when compared with those programs with a lower pass rate. It is concluded that health information administration faculty at the local level should perform program self-analysis to improve student progress toward achievement of learning outcomes and entry-level competencies.

  6. An investigation of the feasibility of improving oculometer data analysis through application of advanced statistical techniques

    NASA Technical Reports Server (NTRS)

    Rana, D. S.

    1980-01-01

    The data reduction capabilities of the current data reduction programs were assessed and a search for a more comprehensive system with higher data analytic capabilities was made. Results of the investigation are presented.

  7. Global, Local, and Graphical Person-Fit Analysis Using Person-Response Functions

    ERIC Educational Resources Information Center

    Emons, Wilco H. M.; Sijtsma, Klaas; Meijer, Rob R.

    2005-01-01

    Person-fit statistics test whether the likelihood of a respondent's complete vector of item scores on a test is low given the hypothesized item response theory model. This binary information may be insufficient for diagnosing the cause of a misfitting item-score vector. The authors propose a comprehensive methodology for person-fit analysis in the…

  8. Comparing the Effectiveness of SPSS and EduG Using Different Designs for Generalizability Theory

    ERIC Educational Resources Information Center

    Teker, Gulsen Tasdelen; Guler, Nese; Uyanik, Gulden Kaya

    2015-01-01

    Generalizability theory (G theory) provides a broad conceptual framework for social sciences such as psychology and education, and a comprehensive construct for numerous measurement events by using analysis of variance, a strong statistical method. G theory, as an extension of both classical test theory and analysis of variance, is a model which…

  9. Counselling by primary care physicians may help patients with heartburn-predominant uninvestigated dyspepsia.

    PubMed

    Paré, Pierre; Lee, Joanna; Hawes, Ian A

    2010-03-01

    To determine whether strategies to counsel and empower patients with heartburn-predominant dyspepsia could improve health-related quality of life. Using a cluster randomized, parallel group, multicentre design, nine centres were assigned to provide either basic or comprehensive counselling to patients (age range 18 to 50 years) presenting with heartburn-predominant upper gastrointestinal symptoms, who would be considered for drug therapy without further investigation. Patients were treated for four weeks with esomeprazole 40 mg once daily, followed by six months of treatment that was at the physician's discretion. The primary end point was the baseline change in Quality of Life in Reflux and Dyspepsia (QOLRAD) questionnaire score. A total of 135 patients from nine centres were included in the intention-to-treat analysis. There was a statistically significant baseline improvement in all domains of the QOLRAD questionnaire in both study arms at four and seven months (P<0.0001). After four months, the overall mean change in QOLRAD score appeared greater in the comprehensive counselling group than in the basic counselling group (1.77 versus 1.47, respectively); however, this difference was not statistically significant (P=0.07). After seven months, the overall mean baseline change in QOLRAD score between the comprehensive and basic counselling groups was not statistically significant (1.69 versus 1.56, respectively; P=0.63). A standardized, comprehensive counselling intervention showed a positive initial trend in improving quality of life in patients with heartburn-predominant uninvestigated dyspepsia. Further investigation is needed to confirm the potential benefits of providing patients with comprehensive counselling regarding disease management.

  10. Counselling by primary care physicians may help patients with heartburn-predominant uninvestigated dyspepsia

    PubMed Central

    Paré, Pierre; Math, Joanna Lee M; Hawes, Ian A

    2010-01-01

    OBJECTIVE: To determine whether strategies to counsel and empower patients with heartburn-predominant dyspepsia could improve health-related quality of life. METHODS: Using a cluster randomized, parallel group, multicentre design, nine centres were assigned to provide either basic or comprehensive counselling to patients (age range 18 to 50 years) presenting with heartburn-predominant upper gastrointestinal symptoms, who would be considered for drug therapy without further investigation. Patients were treated for four weeks with esomeprazole 40 mg once daily, followed by six months of treatment that was at the physician’s discretion. The primary end point was the baseline change in Quality of Life in Reflux and Dyspepsia (QOLRAD) questionnaire score. RESULTS: A total of 135 patients from nine centres were included in the intention-to-treat analysis. There was a statistically significant baseline improvement in all domains of the QOLRAD questionnaire in both study arms at four and seven months (P<0.0001). After four months, the overall mean change in QOLRAD score appeared greater in the comprehensive counselling group than in the basic counselling group (1.77 versus 1.47, respectively); however, this difference was not statistically significant (P=0.07). After seven months, the overall mean baseline change in QOLRAD score between the comprehensive and basic counselling groups was not statistically significant (1.69 versus 1.56, respectively; P=0.63). CONCLUSIONS: A standardized, comprehensive counselling intervention showed a positive initial trend in improving quality of life in patients with heartburn-predominant uninvestigated dyspepsia. Further investigation is needed to confirm the potential benefits of providing patients with comprehensive counselling regarding disease management. PMID:20352148

  11. Unattended Sleep Studies in a VA Population: Initial Evaluation by Chart Review Versus Clinic Visit by a Midlevel Provider.

    PubMed

    Alsharif, Abdelhamid M; Potts, Michelle; Laws, Regina; Freire, Amado X; Sultan-Ali, Ibrahim

    2016-10-01

    Obstructive sleep apnea (OSA) is a prevalent disorder that is associated with multiple medical consequences. Although in-laboratory polysomnography is the gold standard for the diagnosis of OSA, portable monitors have been developed and studied to help increase efficiency and ease of diagnosis. We aimed to assess the adequacy of a midlevel provider specializing in sleep medicine to risk-stratify patients for OSA based on a chart review versus a comprehensive clinic evaluation before scheduling an unattended sleep study. This study was an observational, nonrandomized, retrospective data collection by chart review of patients accrued prospectively who underwent an unattended sleep study at the Sleep Health Center at the Memphis Veterans Affairs Medical Center during the first 13 months of the program (May 1, 2011-May 31, 2012). A total of 205 patients were included in the data analysis. Analysis showed no statistically significant differences between chart review and clinic visit groups (P = 0.54) in terms of OSA diagnosis. Although not statistically significant, the analysis shows a trend toward higher mean age (50.3 vs 47.4 years; P = 0.10) and lower mean body mass index (34.4 vs 36.0; P = 0.08) in individuals who were evaluated during a comprehensive clinic visit. A statistically significant difference is seen in terms of the pretest clinical probability of OSA being moderate or high in 62.2% of patients in the clinic visit group and 95.7% in the chart review group, with a χ2 P ≤ 0.0001. In the Veterans Health Administration's system, the assessment of pretest probability may be determined by a midlevel provider using chart review with equal efficacy to a comprehensive face-to-face evaluation in terms of OSA diagnosis via unattended sleep studies.

  12. Approximations to the distribution of a test statistic in covariance structure analysis: A comprehensive study.

    PubMed

    Wu, Hao

    2018-05-01

    In structural equation modelling (SEM), a robust adjustment to the test statistic or to its reference distribution is needed when its null distribution deviates from a χ2 distribution, which usually arises when data do not follow a multivariate normal distribution. Unfortunately, existing studies on this issue typically focus on only a few methods and neglect the majority of alternative methods in statistics. Existing simulation studies typically consider only non-normal distributions of data that either satisfy asymptotic robustness or lead to an asymptotic scaled χ2 distribution. In this work we conduct a comprehensive study that involves both typical methods in SEM and less well-known methods from the statistics literature. We also propose the use of several novel non-normal data distributions that are qualitatively different from the non-normal distributions widely used in existing studies. We found that several under-studied methods give the best performance under specific conditions, but the Satorra-Bentler method remains the most viable method for most situations. © 2017 The British Psychological Society.

  13. Cost-Effectiveness Analysis: a proposal of new reporting standards in statistical analysis

    PubMed Central

    Bang, Heejung; Zhao, Hongwei

    2014-01-01

    Cost-effectiveness analysis (CEA) is a method for evaluating the outcomes and costs of competing strategies designed to improve health, and has been applied to a variety of different scientific fields. Yet, there are inherent complexities in cost estimation and CEA from statistical perspectives (e.g., skewness, bi-dimensionality, and censoring). The incremental cost-effectiveness ratio that represents the additional cost per one unit of outcome gained by a new strategy has served as the most widely accepted methodology in the CEA. In this article, we call for expanded perspectives and reporting standards reflecting a more comprehensive analysis that can elucidate different aspects of available data. Specifically, we propose that mean and median-based incremental cost-effectiveness ratios and average cost-effectiveness ratios be reported together, along with relevant summary and inferential statistics as complementary measures for informed decision making. PMID:24605979
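    The quantities the article proposes to report together are straightforward to compute. The following is a sketch with hypothetical per-patient cost and effectiveness data; the numbers and variable names are illustrative, not from the article.

```python
import statistics

# Hypothetical per-patient costs (dollars) and effectiveness (QALYs)
# for a standard strategy and a new strategy.
cost_std = [1000, 1200, 900, 1500, 1100]
qaly_std = [0.70, 0.72, 0.68, 0.75, 0.71]
cost_new = [1800, 2100, 1700, 2600, 1900]
qaly_new = [0.78, 0.80, 0.76, 0.85, 0.79]

# Mean-based ICER: extra cost per extra unit of effectiveness.
icer_mean = (statistics.mean(cost_new) - statistics.mean(cost_std)) / \
            (statistics.mean(qaly_new) - statistics.mean(qaly_std))

# Median-based ICER, reported alongside the mean as proposed.
icer_median = (statistics.median(cost_new) - statistics.median(cost_std)) / \
              (statistics.median(qaly_new) - statistics.median(qaly_std))

# Average cost-effectiveness ratio (ACER) of each strategy on its own.
acer_new = statistics.mean(cost_new) / statistics.mean(qaly_new)
acer_std = statistics.mean(cost_std) / statistics.mean(qaly_std)
print(icer_mean, icer_median, acer_new, acer_std)
```

    With skewed cost data the mean- and median-based ratios can diverge substantially, which is precisely why the authors argue for reporting both.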

  14. 78 FR 8682 - Shipping Coordinating Committee; Notice of Committee Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-06

    ... the Protocol of 1978 (MARPOL 73/78); Casualty statistics and investigations; Harmonization of port State control activities; Port State Control (PSC) Guidelines on seafarers' hours of rest and PSC... control under the 2004 Ballast Water Management (BWM) Convention; Comprehensive analysis of difficulties...

  15. Statistical research on the bioactivity of new marine natural products discovered during the 28 years from 1985 to 2012.

    PubMed

    Hu, Yiwen; Chen, Jiahui; Hu, Guping; Yu, Jianchen; Zhu, Xun; Lin, Yongcheng; Chen, Shengping; Yuan, Jie

    2015-01-07

    Every year, hundreds of new compounds are discovered from the metabolites of marine organisms. Finding new and useful compounds is one of the crucial drivers for this field of research. Here we describe the statistics of bioactive compounds discovered from marine organisms from 1985 to 2012. This work is based on our database, which contains information on more than 15,000 chemical substances including 4196 bioactive marine natural products. We performed a comprehensive statistical analysis to understand the characteristics of the novel bioactive compounds and detail temporal trends, chemical structures, species distribution, and research progress. We hope this meta-analysis will provide useful information for research into the bioactivity of marine natural products and drug development.

  16. Reading Ability as a Predictor of Academic Procrastination among African American Graduate Students

    ERIC Educational Resources Information Center

    Collins, Kathleen M. T.; Onwuegbuzie, Anthony J.; Jiao, Qun G.

    2008-01-01

    The present study examined the relationship between reading ability (i.e., reading comprehension and reading vocabulary) and academic procrastination among 120 African American graduate students. A canonical correlation analysis revealed statistically significant and practically significant multivariate relationships between these two reading…

  17. Statistics and quality assurance for the Northern Research Station Forest Inventory and Analysis Program

    Treesearch

    Dale D. Gormanson; Scott A. Pugh; Charles J. Barnett; Patrick D. Miles; Randall S. Morin; Paul A. Sowers; James A. Westfall

    2018-01-01

    The U.S. Forest Service Forest Inventory and Analysis (FIA) program collects sample plot data on all forest ownerships across the United States. FIA’s primary objective is to determine the extent, condition, volume, growth, and use of trees on the Nation’s forest land through a comprehensive inventory and analysis of the Nation’s forest resources. The FIA program...

  18. Evaluation of risk communication in a mammography patient decision aid.

    PubMed

    Klein, Krystal A; Watson, Lindsey; Ash, Joan S; Eden, Karen B

    2016-07-01

    We characterized patients' comprehension, memory, and impressions of risk communication messages in a patient decision aid (PtDA), Mammopad, and clarified perceived importance of numeric risk information in medical decision making. Participants were 75 women in their forties with average risk factors for breast cancer. We used mixed methods, comprising a risk estimation problem administered within a pretest-posttest design, and semi-structured qualitative interviews with a subsample of 21 women. Participants' positive predictive value estimates of screening mammography improved after using Mammopad. Although risk information was only briefly memorable, through content analysis, we identified themes describing why participants value quantitative risk information, and obstacles to understanding. We describe ways the most complicated graphic was incompletely comprehended. Comprehension of risk information following Mammopad use could be improved. Patients valued receiving numeric statistical information, particularly in pictograph format. Obstacles to understanding risk information, including potential for confusion between statistics, should be identified and mitigated in PtDA design. Using simple pictographs accompanied by text, PtDAs may enhance a shared decision-making discussion. PtDA designers and providers should be aware of benefits and limitations of graphical risk presentations. Incorporating comprehension checks could help identify and correct misapprehensions of graphically presented statistics. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  19. Evaluation of risk communication in a mammography patient decision aid

    PubMed Central

    Klein, Krystal A.; Watson, Lindsey; Ash, Joan S.; Eden, Karen B.

    2016-01-01

    Objectives We characterized patients’ comprehension, memory, and impressions of risk communication messages in a patient decision aid (PtDA), Mammopad, and clarified perceived importance of numeric risk information in medical decision making. Methods Participants were 75 women in their forties with average risk factors for breast cancer. We used mixed methods, comprising a risk estimation problem administered within a pretest–posttest design, and semi-structured qualitative interviews with a subsample of 21 women. Results Participants’ positive predictive value estimates of screening mammography improved after using Mammopad. Although risk information was only briefly memorable, through content analysis, we identified themes describing why participants value quantitative risk information, and obstacles to understanding. We describe ways the most complicated graphic was incompletely comprehended. Conclusions Comprehension of risk information following Mammopad use could be improved. Patients valued receiving numeric statistical information, particularly in pictograph format. Obstacles to understanding risk information, including potential for confusion between statistics, should be identified and mitigated in PtDA design. Practice implications Using simple pictographs accompanied by text, PtDAs may enhance a shared decision-making discussion. PtDA designers and providers should be aware of benefits and limitations of graphical risk presentations. Incorporating comprehension checks could help identify and correct misapprehensions of graphically presented statistics. PMID:26965020

  20. GSuite HyperBrowser: integrative analysis of dataset collections across the genome and epigenome.

    PubMed

    Simovski, Boris; Vodák, Daniel; Gundersen, Sveinung; Domanska, Diana; Azab, Abdulrahman; Holden, Lars; Holden, Marit; Grytten, Ivar; Rand, Knut; Drabløs, Finn; Johansen, Morten; Mora, Antonio; Lund-Andersen, Christin; Fromm, Bastian; Eskeland, Ragnhild; Gabrielsen, Odd Stokke; Ferkingstad, Egil; Nakken, Sigve; Bengtsen, Mads; Nederbragt, Alexander Johan; Thorarensen, Hildur Sif; Akse, Johannes Andreas; Glad, Ingrid; Hovig, Eivind; Sandve, Geir Kjetil

    2017-07-01

    Recent large-scale undertakings such as ENCODE and Roadmap Epigenomics have generated experimental data mapped to the human reference genome (as genomic tracks) representing a variety of functional elements across a large number of cell types. Despite the high potential value of these publicly available data for a broad variety of investigations, little attention has been given to the analytical methodology necessary for their widespread utilisation. We here present a first principled treatment of the analysis of collections of genomic tracks. We have developed novel computational and statistical methodology to permit comparative and confirmatory analyses across multiple and disparate data sources. We delineate a set of generic questions that are useful across a broad range of investigations and discuss the implications of choosing different statistical measures and null models. Examples include contrasting analyses across different tissues or diseases. The methodology has been implemented in a comprehensive open-source software system, the GSuite HyperBrowser. To make the functionality accessible to biologists, and to facilitate reproducible analysis, we have also developed a web-based interface providing an expertly guided and customizable way of utilizing the methodology. With this system, many novel biological questions can flexibly be posed and rapidly answered. Through a combination of streamlined data acquisition, interoperable representation of dataset collections, and customizable statistical analysis with guided setup and interpretation, the GSuite HyperBrowser represents a first comprehensive solution for integrative analysis of track collections across the genome and epigenome. The software is available at: https://hyperbrowser.uio.no. © The Author 2017. Published by Oxford University Press.
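    One generic question such systems address, whether a query track overlaps a reference track more often than a null model predicts, can be illustrated with a small Monte Carlo permutation test. This is a toy version of one possible null model (uniform placement along a single chromosome), not the GSuite HyperBrowser's methodology; all coordinates are invented.

```python
import random

# Do query intervals overlap a reference track more than expected if the
# queries were placed uniformly at random along the chromosome?
CHROM_LEN = 100_000
reference = [(10_000, 12_000), (40_000, 45_000), (70_000, 71_000)]
query = [(10_500, 10_600), (44_000, 44_200), (90_000, 90_100), (11_000, 11_050)]

def overlaps(iv, track):
    """True if half-open interval iv intersects any interval in track."""
    return any(iv[0] < e and s < iv[1] for s, e in track)

def count_overlaps(qs):
    return sum(overlaps(iv, reference) for iv in qs)

observed = count_overlaps(query)

# Null distribution: redraw each query start uniformly, keep lengths.
random.seed(0)
null_counts = []
for _ in range(2000):
    shuffled = []
    for s, e in query:
        length = e - s
        start = random.randrange(CHROM_LEN - length)
        shuffled.append((start, start + length))
    null_counts.append(count_overlaps(shuffled))

p = sum(c >= observed for c in null_counts) / len(null_counts)
print(f"observed = {observed}, permutation p = {p:.4f}")
```

    Real track-analysis tools offer many alternative null models (e.g. preserving inter-interval distances or restricting placement to accessible regions), and the abstract's point is that this choice materially affects the conclusions.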

  1. Verbal Neuropsychological Functions in Aphasia: An Integrative Model

    ERIC Educational Resources Information Center

    Vigliecca, Nora Silvana; Báez, Sandra

    2015-01-01

    A theoretical framework which considers the verbal functions of the brain under a multivariate and comprehensive cognitive model was statistically analyzed. A confirmatory factor analysis was performed to verify whether some recognized aphasia constructs can be hierarchically integrated as latent factors from a homogenously verbal test. The Brief…

  2. Student Participation in Dual Enrollment and College Success

    ERIC Educational Resources Information Center

    Jones, Stephanie J.

    2014-01-01

    The study investigated the impact of dual enrollment participation on the academic preparation of first-year full-time college students at a large comprehensive community college and a large research university. The research design was causal-comparative and utilized descriptive and inferential statistics. Multivariate analyses of variance were…

  3. Transnational Partnerships in Higher Education in China: The Diversity and Complexity of Elite Strategic Alliances

    ERIC Educational Resources Information Center

    Montgomery, Catherine

    2016-01-01

    Transnational partnerships between universities can illustrate the changing political, social, and cultural terrain of global higher education. Drawing on secondary data analysis of government educational statistics, university web pages, and a comprehensive literature review, this article focuses on transnational partnerships with particular…

  4. 78 FR 50373 - Proposed Information Collection; Comment Request; Annual Capital Expenditures Survey

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-19

    ... source of detailed comprehensive statistics on actual business spending for non-farm companies, non-governmental companies, organizations, and associations operating in the United States. Both employer and nonemployer companies are included in the survey. The Bureau of Economic Analysis, the primary Federal user of...

  5. St. Paul Harbor, St. Paul Island, Alaska; Design for Wave and Shoaling Protection; Hydraulic Model Investigation

    DTIC Science & Technology

    1988-09-01

    [Report documentation page: OCR-garbled header; document marked Unclassified.] ...and selection of test waves. 30. Measured prototype wave data on which a comprehensive statistical analysis of wave conditions could be based were... Tests: Existing conditions. 32. Prior to testing of the various improvement plans, comprehensive tests were conducted for existing conditions (Plate 1

  6. Reusable, extensible, and modifiable R scripts and Kepler workflows for comprehensive single set ChIP-seq analysis.

    PubMed

    Cormier, Nathan; Kolisnik, Tyler; Bieda, Mark

    2016-07-05

    There has been an enormous expansion of use of chromatin immunoprecipitation followed by sequencing (ChIP-seq) technologies. Analysis of large-scale ChIP-seq datasets involves a complex series of steps and production of several specialized graphical outputs. A number of systems have emphasized custom development of ChIP-seq pipelines. These systems are primarily based on custom programming of a single, complex pipeline or supply libraries of modules and do not produce the full range of outputs commonly produced for ChIP-seq datasets. It is desirable to have more comprehensive pipelines, in particular ones addressing common metadata tasks, such as pathway analysis, and pipelines producing standard complex graphical outputs. It is advantageous if these are highly modular systems, available as both turnkey pipelines and individual modules, that are easily comprehensible, modifiable and extensible to allow rapid alteration in response to new analysis developments in this growing area. Furthermore, it is advantageous if these pipelines allow data provenance tracking. We present a set of 20 ChIP-seq analysis software modules implemented in the Kepler workflow system; most (18/20) were also implemented as standalone, fully functional R scripts. The set consists of four full turnkey pipelines and 16 component modules. The turnkey pipelines in Kepler allow data provenance tracking. Implementation emphasized use of common R packages and widely-used external tools (e.g., MACS for peak finding), along with custom programming. This software presents comprehensive solutions and easily repurposed code blocks for ChIP-seq analysis and pipeline creation. Tasks include mapping raw reads, peakfinding via MACS, summary statistics, peak location statistics, summary plots centered on the transcription start site (TSS), gene ontology, pathway analysis, and de novo motif finding, among others. 
These pipelines range from those performing a single task to those performing full analyses of ChIP-seq data. The pipelines are supplied as both Kepler workflows, which allow data provenance tracking, and, in the majority of cases, as standalone R scripts. These pipelines are designed for ease of modification and repurposing.

  7. Statistical properties of the radiation from SASE FEL operating in the linear regime

    NASA Astrophysics Data System (ADS)

    Saldin, E. L.; Schneidmiller, E. A.; Yurkov, M. V.

    1998-02-01

    The paper presents comprehensive analysis of statistical properties of the radiation from self amplified spontaneous emission (SASE) free electron laser operating in linear mode. The investigation has been performed in a one-dimensional approximation, assuming the electron pulse length to be much larger than a coherence length of the radiation. The following statistical properties of the SASE FEL radiation have been studied: field correlations, distribution of the radiation energy after monochromator installed at the FEL amplifier exit and photoelectric counting statistics of SASE FEL radiation. It is shown that the radiation from SASE FEL operating in linear regime possesses all the features corresponding to completely chaotic polarized radiation.

  8. A Guideline to Univariate Statistical Analysis for LC/MS-Based Untargeted Metabolomics-Derived Data

    PubMed Central

    Vinaixa, Maria; Samino, Sara; Saez, Isabel; Duran, Jordi; Guinovart, Joan J.; Yanes, Oscar

    2012-01-01

    Several metabolomic software programs provide methods for peak picking, retention time alignment and quantification of metabolite features in LC/MS-based metabolomics. Statistical analysis, however, is needed in order to discover those features significantly altered between samples. By comparing the retention time and MS/MS data of a model compound to that from the altered feature of interest in the research sample, metabolites can then be unequivocally identified. This paper presents a comprehensive overview of a workflow for statistical analysis to rank relevant metabolite features that will be selected for further MS/MS experiments. We focus on univariate data analysis applied in parallel on all detected features. Characteristics and challenges of this analysis are discussed and illustrated using four different real LC/MS untargeted metabolomic datasets. We demonstrate the influence of considering or violating the mathematical assumptions on which univariate statistical tests rely, using high-dimensional LC/MS datasets. Issues in data analysis such as determination of sample size, analytical variation, assumption of normality and homoscedasticity, or correction for multiple testing are discussed and illustrated in the context of our four untargeted LC/MS working examples. PMID:24957762
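    One step of the univariate workflow this record describes — correcting for multiple testing when a test is run in parallel on every detected feature — can be sketched in a few lines. This is an illustrative implementation of the Benjamini-Hochberg step-up procedure, not code from the paper; the input p-values are hypothetical.

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: return the (sorted) indices
    of features whose null hypotheses are rejected at FDR level alpha."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k = 0  # largest rank whose p-value clears its step-up threshold
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= alpha * rank / m:
            k = rank
    return sorted(order[:k])
```

    With hypothetical feature p-values [0.01, 0.02, 0.03, 0.5] and alpha = 0.05, the first three features survive the correction; a Bonferroni cut at 0.05/4 would keep only one.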

  9. A Guideline to Univariate Statistical Analysis for LC/MS-Based Untargeted Metabolomics-Derived Data.

    PubMed

    Vinaixa, Maria; Samino, Sara; Saez, Isabel; Duran, Jordi; Guinovart, Joan J; Yanes, Oscar

    2012-10-18

    Several metabolomic software programs provide methods for peak picking, retention time alignment and quantification of metabolite features in LC/MS-based metabolomics. Statistical analysis, however, is needed in order to discover those features significantly altered between samples. By comparing the retention time and MS/MS data of a model compound to that from the altered feature of interest in the research sample, metabolites can then be unequivocally identified. This paper presents a comprehensive overview of a workflow for statistical analysis to rank relevant metabolite features that will be selected for further MS/MS experiments. We focus on univariate data analysis applied in parallel on all detected features. Characteristics and challenges of this analysis are discussed and illustrated using four different real LC/MS untargeted metabolomic datasets. We demonstrate the influence of considering or violating the mathematical assumptions on which univariate statistical tests rely, using high-dimensional LC/MS datasets. Issues in data analysis such as determination of sample size, analytical variation, assumption of normality and homoscedasticity, or correction for multiple testing are discussed and illustrated in the context of our four untargeted LC/MS working examples.

  10. Statistical Research on the Bioactivity of New Marine Natural Products Discovered during the 28 Years from 1985 to 2012

    PubMed Central

    Hu, Yiwen; Chen, Jiahui; Hu, Guping; Yu, Jianchen; Zhu, Xun; Lin, Yongcheng; Chen, Shengping; Yuan, Jie

    2015-01-01

    Every year, hundreds of new compounds are discovered from the metabolites of marine organisms. Finding new and useful compounds is one of the crucial drivers for this field of research. Here we describe the statistics of bioactive compounds discovered from marine organisms from 1985 to 2012. This work is based on our database, which contains information on more than 15,000 chemical substances including 4196 bioactive marine natural products. We performed a comprehensive statistical analysis to understand the characteristics of the novel bioactive compounds and detail temporal trends, chemical structures, species distribution, and research progress. We hope this meta-analysis will provide useful information for research into the bioactivity of marine natural products and drug development. PMID:25574736

  11. Estimating the Diets of Animals Using Stable Isotopes and a Comprehensive Bayesian Mixing Model

    PubMed Central

    Hopkins, John B.; Ferguson, Jake M.

    2012-01-01

    Using stable isotope mixing models (SIMMs) as a tool to investigate the foraging ecology of animals is gaining popularity among researchers. As a result, statistical methods are rapidly evolving and numerous models have been produced to estimate the diets of animals—each with their benefits and their limitations. Deciding which SIMM to use is contingent on factors such as the consumer of interest, its food sources, sample size, the familiarity a user has with a particular framework for statistical analysis, or the level of inference the researcher desires to make (e.g., population- or individual-level). In this paper, we provide a review of commonly used SIMM models and describe a comprehensive SIMM that includes all features commonly used in SIMM analysis and two new features. We used data collected in Yosemite National Park to demonstrate IsotopeR's ability to estimate dietary parameters. We then examined the importance of each feature in the model and compared our results to inferences from commonly used SIMMs. IsotopeR's user interface (in R) will provide researchers a user-friendly tool for SIMM analysis. The model is also applicable for use in paleontology, archaeology, and forensic studies as well as estimating pollution inputs. PMID:22235246
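    The linear mixing idea underlying all SIMMs can be shown in its simplest form. The sketch below is a deterministic two-source, one-isotope solution; it is not IsotopeR itself, which is a Bayesian model incorporating measurement error, discrimination factors, and concentration dependence. The function name and δ-values are illustrative.

```python
def two_source_mixing(delta_mix, delta_a, delta_b):
    """Proportion p of source A in the diet for one isotope, from the
    linear mixing equation delta_mix = p*delta_a + (1 - p)*delta_b."""
    if delta_a == delta_b:
        raise ValueError("sources are isotopically indistinguishable")
    return (delta_mix - delta_b) / (delta_a - delta_b)
```

    For example, a consumer tissue value of -20 per mil between sources at -15 and -25 per mil implies a 50/50 diet; Bayesian SIMMs replace this point estimate with a full posterior distribution.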

  12. The relevance of receptive vocabulary in reading comprehension.

    PubMed

    Nalom, Ana Flávia de Oliveira; Soares, Aparecido José Couto; Cárnio, Maria Silvia

    2015-01-01

    To characterize the performance of students from the 5th year of primary school, with and without indicatives of reading and writing disorders, in receptive vocabulary and reading comprehension of sentences and texts, and to verify possible correlations between both. This study was approved by the Research Ethics Committee of the institution (no. 098/13). Fifty-two students in the 5th year of primary school, with and without indicatives of reading and writing disorders, from two public schools participated in this study. After signing the informed consent and undergoing a speech therapy assessment for the application of inclusion criteria, the students were submitted to a specific test for standardized evaluation of receptive vocabulary and reading comprehension. The data were studied using statistical analysis through the Kruskal-Wallis test, analysis of variance techniques, and Spearman's rank correlation coefficient, with the level of significance set at 0.05. A receiver operating characteristic (ROC) curve was constructed in which reading comprehension was considered the gold standard. The students without indicatives of reading and writing disorders presented better performance in all tests. No significant correlation was found between the tests that evaluated reading comprehension in either group. A correlation was found between reading comprehension of texts and receptive vocabulary in the group without indicatives. In the absence of indicatives of reading and writing disorders, the presence of a good range of vocabulary highly contributes to proficient reading comprehension of texts.

  13. A Quantile Regression Approach to Understanding the Relations Between Morphological Awareness, Vocabulary, and Reading Comprehension in Adult Basic Education Students

    PubMed Central

    Tighe, Elizabeth L.; Schatschneider, Christopher

    2015-01-01

    The purpose of this study was to investigate the joint and unique contributions of morphological awareness and vocabulary knowledge at five reading comprehension levels in Adult Basic Education (ABE) students. We introduce the statistical technique of multiple quantile regression, which enabled us to assess the predictive utility of morphological awareness and vocabulary knowledge at multiple points (quantiles) along the continuous distribution of reading comprehension. To demonstrate the efficacy of our multiple quantile regression analysis, we compared and contrasted our results with a traditional multiple regression analytic approach. Our results indicated that morphological awareness and vocabulary knowledge accounted for a large portion of the variance (82-95%) in reading comprehension skills across all quantiles. Morphological awareness exhibited the greatest unique predictive ability at lower levels of reading comprehension whereas vocabulary knowledge exhibited the greatest unique predictive ability at higher levels of reading comprehension. These results indicate the utility of using multiple quantile regression to assess trajectories of component skills across multiple levels of reading comprehension. The implications of our findings for ABE programs are discussed. PMID:25351773
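    The pinball (check) loss that quantile regression minimizes can be illustrated with the intercept-only case, where the minimizer is simply the τ-th sample quantile. This sketch is not the authors' analysis code; the data are hypothetical.

```python
def pinball_loss(c, ys, tau):
    """Mean pinball (check) loss of candidate quantile c at level tau."""
    return sum(
        tau * (y - c) if y >= c else (1.0 - tau) * (c - y) for y in ys
    ) / len(ys)

def fit_quantile(ys, tau):
    """Intercept-only quantile regression: among the observed values,
    pick the one minimizing the pinball loss (the tau-th sample quantile)."""
    return min(ys, key=lambda c: pinball_loss(c, ys, tau))
```

    Full quantile regression generalizes this by minimizing the same asymmetric loss over a linear predictor, so a different coefficient vector can be fit at each quantile of reading comprehension, which is exactly what lets predictor importance vary across the distribution.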

  14. Statistics and classification of the microwave zebra patterns associated with solar flares

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Baolin; Tan, Chengming; Zhang, Yin

    2014-01-10

    The microwave zebra pattern (ZP) is the most interesting, intriguing, and complex spectral structure frequently observed in solar flares. A comprehensive statistical study will certainly help us to understand its formation mechanism, which is not yet clear. This work presents a comprehensive statistical analysis of a large sample of 202 ZP events collected from observations at the Chinese Solar Broadband Radio Spectrometer at Huairou and the Ondřejov Radiospectrograph in the Czech Republic at frequencies of 1.00-7.60 GHz from 2000 to 2013. After investigating the parameter properties of ZPs, such as the occurrence in flare phase, frequency range, polarization degree, duration, etc., we find that the variation of zebra stripe frequency separation with respect to frequency is the best indicator for a physical classification of ZPs. Microwave ZPs can be classified into three types: equidistant ZPs, variable-distant ZPs, and growing-distant ZPs, possibly corresponding to mechanisms of the Bernstein wave model, whistler wave model, and double plasma resonance model, respectively. This statistical classification may help us to clarify the controversies between the existing various theoretical models and understand the physical processes in the source regions.
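    The classification indicator the record proposes — how stripe frequency separation varies with frequency — can be sketched as a toy classifier over measured stripe frequencies. The tolerance threshold and trend test below are my illustrative assumptions, not the authors' criteria.

```python
def classify_zebra(stripe_freqs, tol=0.05):
    """Toy classifier for zebra patterns based on how the stripe
    frequency separation varies with frequency. Returns 'equidistant',
    'growing-distant', or 'variable-distant'."""
    seps = [b - a for a, b in zip(stripe_freqs, stripe_freqs[1:])]
    mean = sum(seps) / len(seps)
    if all(abs(s - mean) <= tol * mean for s in seps):
        return "equidistant"
    if all(b > a for a, b in zip(seps, seps[1:])):
        return "growing-distant"
    return "variable-distant"
```

    Under the paper's proposed correspondence, these three outcomes would point toward the Bernstein wave, double plasma resonance, and whistler wave mechanisms, respectively.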

  15. North Atlantic Coast Comprehensive Study Phase I: Statistical Analysis of Historical Extreme Water Levels with Sea Level Change

    DTIC Science & Technology

    2014-09-01

    The U.S. North Atlantic coast is subject to coastal flooding as a result of both severe extratropical storms (e.g., Nor'easters)... Products and Services, excluding any kind of high-resolution hydrodynamic modeling. Tropical and extratropical storms were treated as a single... joint probability analysis and high-fidelity modeling of tropical and extratropical storms

  16. The Prompt-afterglow Connection in Gamma-ray Bursts: a Comprehensive Statistical Analysis of Swift X-ray Light-curves

    NASA Technical Reports Server (NTRS)

    Margutti, R.; Zaninoni, E.; Bernardini, M. G.; Chincarini, G.; Pasotti, F.; Guidorzi, C.; Angelini, Lorella; Burrows, D. N.; Capalbi, M.; Evans, P. A.; hide

    2012-01-01

    We present a comprehensive statistical analysis of Swift X-ray light-curves of Gamma-Ray Bursts (GRBs) collecting data from more than 650 GRBs discovered by Swift and other facilities. The unprecedented sample size allows us to constrain the rest-frame X-ray properties of GRBs from a statistical perspective, with particular reference to intrinsic time scales and the energetics of the different light-curve phases in a common rest-frame 0.3-30 keV energy band. Temporal variability episodes are also studied and their properties constrained. Two fundamental questions drive this effort: i) Does the X-ray emission retain any kind of "memory" of the prompt γ-ray phase? ii) Where is the dividing line between long and short GRB X-ray properties? We show that short GRBs decay faster, are less luminous and less energetic than long GRBs in the X-rays, but are interestingly characterized by similar intrinsic absorption. We furthermore reveal the existence of a number of statistically significant relations that link the X-ray to prompt γ-ray parameters in long GRBs; short GRBs are outliers of the majority of these 2-parameter relations. However and more importantly, we report on the existence of a universal 3-parameter scaling that links the X-ray and the γ-ray energy to the prompt spectral peak energy of both long and short GRBs: E_X,iso ∝ E_γ,iso^(1.00±0.06) / E_pk^(0.60±0.10).

  17. The prompt-afterglow connection in gamma-ray bursts: a comprehensive statistical analysis of Swift X-ray light curves

    NASA Astrophysics Data System (ADS)

    Margutti, R.; Zaninoni, E.; Bernardini, M. G.; Chincarini, G.; Pasotti, F.; Guidorzi, C.; Angelini, L.; Burrows, D. N.; Capalbi, M.; Evans, P. A.; Gehrels, N.; Kennea, J.; Mangano, V.; Moretti, A.; Nousek, J.; Osborne, J. P.; Page, K. L.; Perri, M.; Racusin, J.; Romano, P.; Sbarufatti, B.; Stafford, S.; Stamatikos, M.

    2013-01-01

    We present a comprehensive statistical analysis of Swift X-ray light curves of gamma-ray bursts (GRBs) collecting data from more than 650 GRBs discovered by Swift and other facilities. The unprecedented sample size allows us to constrain the rest-frame X-ray properties of GRBs from a statistical perspective, with particular reference to intrinsic time-scales and the energetics of the different light-curve phases in a common rest-frame 0.3-30 keV energy band. Temporal variability episodes are also studied and their properties constrained. Two fundamental questions drive this effort: (i) Does the X-ray emission retain any kind of `memory' of the prompt γ-ray phase? (ii) Where is the dividing line between long and short GRB X-ray properties? We show that short GRBs decay faster, are less luminous and less energetic than long GRBs in the X-rays, but are interestingly characterized by similar intrinsic absorption. We furthermore reveal the existence of a number of statistically significant relations that link the X-ray to prompt γ-ray parameters in long GRBs; short GRBs are outliers of the majority of these two-parameter relations. However and more importantly, we report on the existence of a universal three-parameter scaling that links the X-ray and the γ-ray energy to the prompt spectral peak energy of both long and short GRBs: E_X,iso ∝ E_γ,iso^(1.00 ± 0.06) / E_pk^(0.60 ± 0.10).

  18. Novel Image Encryption Scheme Based on Chebyshev Polynomial and Duffing Map

    PubMed Central

    2014-01-01

    We present a novel image encryption algorithm using Chebyshev polynomial based on permutation and substitution and Duffing map based on substitution. Comprehensive security analysis has been performed on the designed scheme using key space analysis, visual testing, histogram analysis, information entropy calculation, correlation coefficient analysis, differential analysis, key sensitivity test, and speed test. The study demonstrates that the proposed image encryption algorithm shows the advantages of a key space of more than 10^113 and a desirable level of security based on the good statistical results and theoretical arguments. PMID:25143970
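    The key-space figure in this record can be put in more familiar terms by converting it to an equivalent symmetric key length in bits; 10^113 corresponds to roughly 375 bits. A minimal conversion, as a sketch (the function name is my own):

```python
import math

def key_space_bits(key_space_size):
    """Equivalent symmetric key length in bits for a given key space."""
    return math.log2(key_space_size)
```

    By comparison, a 128-bit key gives a key space of about 3.4 × 10^38, so the scheme's claimed key space is far larger than typical symmetric key lengths.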

  19. Third Grade Proficiency in DC: Little Progress (2007-2011). Policy Brief

    ERIC Educational Resources Information Center

    O'Keefe, Bonnie

    2012-01-01

    Analysis of DC Comprehensive Assessment System (DC CAS) scores from 2007 to 2011 found no evidence of statistically significant changes in third grade math or reading proficiency at the citywide level, among traditional public schools or public charter schools, among racial and ethnic groups or by economic advantage or disadvantage.…

  20. Investigating the Relationship between Conceptual and Procedural Errors in the Domain of Probability Problem-Solving.

    ERIC Educational Resources Information Center

    O'Connell, Ann Aileen

    The relationships among types of errors observed during probability problem solving were studied. Subjects were 50 graduate students in an introductory probability and statistics course. Errors were classified as text comprehension, conceptual, procedural, and arithmetic. Canonical correlation analysis was conducted on the frequencies of specific…

  1. Proceedings: Conference on Computers in Chemical Education and Research, Dekalb, Illinois, 19-23 July 1971.

    ERIC Educational Resources Information Center

    1971

    Computers have effected a comprehensive transformation of chemistry. Computers have greatly enhanced the chemist's ability to do model building, simulations, data refinement and reduction, analysis of data in terms of models, on-line data logging, automated control of experiments, quantum chemistry and statistical and mechanical calculations, and…

  2. PHOXTRACK-a tool for interpreting comprehensive datasets of post-translational modifications of proteins.

    PubMed

    Weidner, Christopher; Fischer, Cornelius; Sauer, Sascha

    2014-12-01

    We introduce PHOXTRACK (PHOsphosite-X-TRacing Analysis of Causal Kinases), a user-friendly freely available software tool for analyzing large datasets of post-translational modifications of proteins, such as phosphorylation, which are commonly gained by mass spectrometry detection. In contrast to other currently applied data analysis approaches, PHOXTRACK uses full sets of quantitative proteomics data and applies non-parametric statistics to calculate whether defined kinase-specific sets of phosphosite sequences indicate statistically significant concordant differences between various biological conditions. PHOXTRACK is an efficient tool for extracting post-translational information of comprehensive proteomics datasets to decipher key regulatory proteins and to infer biologically relevant molecular pathways. PHOXTRACK will be maintained over the next years and is freely available as an online tool for non-commercial use at http://phoxtrack.molgen.mpg.de. Users will also find a tutorial at this Web site and can additionally give feedback at https://groups.google.com/d/forum/phoxtrack-discuss. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  3. An exploration of the relationship between metacomprehension strategy awareness and reading comprehension performance with narrative and science texts

    NASA Astrophysics Data System (ADS)

    York, Kathleen Christine

    This mixed method study explored the relationship between metacomprehension strategy awareness and reading comprehension performance with narrative and science texts. Participants, 132 eighth-grade, predominately African American students, attending one middle school in a southeastern state, were administered a narrative and science version of the Metacomprehension Strategy Index (MSI) and asked to identify helpful strategic behaviors from six clustered subcategories (predicting and verifying; previewing; purpose setting; self-questioning; drawing from background knowledge; and summarizing and applying fix-up strategies). Participants also read and answered comprehension questions about narrative and science passages. Findings revealed no statistically significant differences in overall metacomprehension awareness with narrative and science texts. Statistically significant (p<.05) differences were found for two of the six subcategories, indicating students preview and set purpose more often with science than narrative texts. Findings also indicated overall narrative and science metacomprehension awareness and comprehension performance scores were statistically significantly (p<.01) related. Specifically, the category of summarizing and applying fix-up strategies was the strongest predictor of comprehension performance for both narrative and science texts. The qualitative phase of this study explored the relationship between metacomprehension awareness with narrative and science texts and the comprehension performance of six middle school students, three of whom scored high overall on the narrative and science text comprehension assessments in phase one of the study, and three of whom scored low. 
A qualitative analysis of multiple sources of data, including video-taped interviews and think-alouds, revealed the three high scoring participants engaged in competent school-based, metacognitive conversations infused with goal, self, and narrative talk and demonstrated multi-strategic engagements with narrative and science texts. In stark contrast, the three low scoring participants engaged in dissonant school-based talk infused with disclaimers, over-generalized, decontextualized, and literalized answers and demonstrated robotic, limited (primarily rereading and restating), and frustrated strategic acts when interacting with both narrative and science texts. The educational implications are discussed. This dissertation was funded by the Office of Special Education Programs, Federal Office Grant Award No. 324E031501.

  4. The change of adjacent segment after cervical disc arthroplasty compared with anterior cervical discectomy and fusion: a meta-analysis of randomized controlled trials.

    PubMed

    Dong, Liang; Xu, Zhengwei; Chen, Xiujin; Wang, Dongqi; Li, Dichen; Liu, Tuanjing; Hao, Dingjun

    2017-10-01

    Many meta-analyses have been performed to study the efficacy of cervical disc arthroplasty (CDA) compared with anterior cervical discectomy and fusion (ACDF); however, there are few data referring to adjacent segment within these meta-analyses, or investigators are unable to arrive at the same conclusion in the few meta-analyses about adjacent segment. With the increased concerns surrounding adjacent segment degeneration (ASDeg) and adjacent segment disease (ASDis) after anterior cervical surgery, it is necessary to perform a comprehensive meta-analysis to analyze adjacent segment parameters. To perform a comprehensive meta-analysis to elaborate adjacent segment motion, degeneration, disease, and reoperation of CDA compared with ACDF. Meta-analysis of randomized controlled trials (RCTs). PubMed, Embase, and Cochrane Library were searched for RCTs comparing CDA and ACDF before May 2016. The analysis parameters included follow-up time, operative segments, adjacent segment motion, ASDeg, ASDis, and adjacent segment reoperation. The risk of bias scale was used to assess the papers. Subgroup analysis and sensitivity analysis were used to analyze the reason for high heterogeneity. Twenty-nine RCTs fulfilled the inclusion criteria. Compared with ACDF, the rate of adjacent segment reoperation in the CDA group was significantly lower (p<.01), and the advantage of that group in reducing adjacent segment reoperation increases with increasing follow-up time by subgroup analysis. There was no statistically significant difference in ASDeg between CDA and ACDF within the 24-month follow-up period; however, the rate of ASDeg in CDA was significantly lower than that of ACDF with the increase in follow-up time (p<.01). There was no statistically significant difference in ASDis between CDA and ACDF (p>.05). Cervical disc arthroplasty provided a lower adjacent segment range of motion (ROM) than did ACDF, but the difference was not statistically significant. 
Compared with ACDF, the advantages of CDA were lower ASDeg and adjacent segment reoperation. However, there was no statistically significant difference in ASDis and adjacent segment ROM. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Learning disabled and average readers' working memory and comprehension: does metacognition play a role?

    PubMed

    Swanson, H L; Trahan, M

    1996-09-01

    The present study investigates (a) whether learning disabled readers' working memory deficits that underlie poor reading comprehension are related to a general system, and (b) whether metacognition contributes to comprehension beyond what is predicted by working memory and word knowledge. To this end, performance between learning disabled (N = 60) and average readers (N = 60) was compared on the reading comprehension, reading rate, and vocabulary subtests of the Nelson Skills Reading Test, a Sentence Span test composed of high and low imagery words, and a Metacognitive Questionnaire. As expected, differences between groups in working memory, vocabulary, and reading measures emerged, whereas ability groups were statistically comparable on the Metacognitive Questionnaire. A within-group analysis indicated that the correlation patterns between working memory, vocabulary, metacognition, and reading comprehension were not the same between ability groups. For predicting reading comprehension, the metacognitive questionnaire best predicted learning disabled readers' performance, whereas the working memory span measure that included low-imagery words best predicted average achieving readers' comprehension. Overall, the results suggest that the relationship between learning disabled readers' generalised working memory deficits and poor reading comprehension may be mediated by metacognition.

  6. Interpretation of statistical results.

    PubMed

    García Garmendia, J L; Maroto Monserrat, F

    2018-02-21

    The appropriate interpretation of statistical results is crucial to understanding advances in medical science. Statistical tools allow us to transform the uncertainty and apparent chaos in nature into measurable parameters that are applicable to our clinical practice. Understanding the meaning and actual scope of these instruments is essential for researchers, for the funders of research, and for professionals who require continual updating based on good evidence and support for decision making. Various aspects of study design, results, and statistical analysis are reviewed, aiming to facilitate their comprehension from the basics to what is most common but not well understood, and offering a constructive, non-exhaustive but realistic perspective. Copyright © 2018 Elsevier España, S.L.U. y SEMICYUC. All rights reserved.

  7. How to Perform a Systematic Review and Meta-analysis of Diagnostic Imaging Studies.

    PubMed

    Cronin, Paul; Kelly, Aine Marie; Altaee, Duaa; Foerster, Bradley; Petrou, Myria; Dwamena, Ben A

    2018-05-01

    A systematic review is a comprehensive search, critical evaluation, and synthesis of all the relevant studies on a specific (clinical) topic that can be applied to the evaluation of diagnostic and screening imaging studies. It can be a qualitative or a quantitative (meta-analysis) review of available literature. A meta-analysis uses statistical methods to combine and summarize the results of several studies. In this review, a 12-step approach to performing a systematic review (and meta-analysis) is outlined under the four domains: (1) Problem Formulation and Data Acquisition, (2) Quality Appraisal of Eligible Studies, (3) Statistical Analysis of Quantitative Data, and (4) Clinical Interpretation of the Evidence. This review is specifically geared toward the performance of a systematic review and meta-analysis of diagnostic test accuracy (imaging) studies. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
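    Domain (3), the statistical analysis of quantitative data, typically begins by pooling per-study effect estimates with inverse-variance weights. The sketch below shows simple fixed-effect pooling on hypothetical effects and variances; it is a deliberate simplification of the methods the review covers, since diagnostic accuracy data are usually handled with bivariate or hierarchical models rather than a single pooled effect.

```python
import math

def fixed_effect_pool(effects, variances):
    """Inverse-variance fixed-effect pooling: returns the pooled effect,
    its variance, and an approximate 95% confidence interval."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    var = 1.0 / sum(weights)
    half = 1.96 * math.sqrt(var)
    return pooled, var, (pooled - half, pooled + half)
```

    Note how a study with half the variance receives twice the weight, pulling the pooled estimate toward the more precise study.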

  8. Statistical analysis of fNIRS data: a comprehensive review.

    PubMed

    Tak, Sungho; Ye, Jong Chul

    2014-01-15

    Functional near-infrared spectroscopy (fNIRS) is a non-invasive method to measure brain activity using changes in optical absorption in the brain through the intact skull. fNIRS has many advantages over other neuroimaging modalities such as positron emission tomography (PET), functional magnetic resonance imaging (fMRI), or magnetoencephalography (MEG), since it can directly measure blood oxygenation level changes related to neural activation with high temporal resolution. However, fNIRS signals are highly corrupted by measurement noise and physiology-based systemic interference. Careful statistical analyses are therefore required to extract neuronal activity-related signals from fNIRS data. In this paper, we provide an extensive review of historical developments of statistical analyses of fNIRS signals, which include motion artifact correction, short source-detector separation correction, principal component analysis (PCA)/independent component analysis (ICA), false discovery rate (FDR), serially-correlated errors, as well as inference techniques such as the standard t-test, F-test, analysis of variance (ANOVA), and the statistical parametric mapping (SPM) framework. In addition, to provide a unified view of various existing inference techniques, we explain a linear mixed effect model with restricted maximum likelihood (ReML) variance estimation, and show that most of the existing inference methods for fNIRS analysis can be derived as special cases. Some of the open issues in statistical analysis are also described. Copyright © 2013 Elsevier Inc. All rights reserved.
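    The GLM-with-t-test inference that this review surveys can be illustrated on simulated data. The sketch below is not from the paper; the boxcar task design, noise level and seed are invented for illustration. It fits y = Xβ + e by least squares and computes a t-statistic for the task regressor:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated fNIRS-like time series: a boxcar task regressor plus noise.
n = 200
task = np.zeros(n)
task[50:100] = 1.0   # first "on" block
task[150:200] = 1.0  # second "on" block
beta_true = 0.8      # illustrative effect size
y = beta_true * task + rng.normal(0.0, 0.5, n)

# GLM: y = X @ beta + e, with an intercept column.
X = np.column_stack([np.ones(n), task])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
dof = n - X.shape[1]
sigma2 = resid @ resid / dof

# Standard error of the task coefficient and its t-statistic.
cov = sigma2 * np.linalg.inv(X.T @ X)
t_stat = beta[1] / np.sqrt(cov[1, 1])
```

With a genuine task effect present, the t-statistic is large; the SPM-style analyses described in the review build on exactly this kind of per-channel GLM fit, with additional corrections for serially correlated errors.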

  9. A Systematic, Intensive Statistical Investigation of Data from the Comprehensive Analysis of Reported Drugs (CARD) for Compliance and Illicit Opioid Abstinence in Substance Addiction Treatment with Buprenorphine/naloxone.

    PubMed

    Blum, Kenneth; Han, David; Modestino, Edward J; Saunders, Scott; Roy, A Kennison; Jacobs, W; Inaba, Darryl S; Baron, David; Oscar-Berman, Marlene; Hauser, Mary; Badgaiyan, Rajendra D; Smith, David E; Femino, John; Gold, Mark S

    2018-01-28

    Buprenorphine and naloxone (bup/nal), a combination partial mu receptor agonist and low-dose delta mu antagonist, is presently recommended and used to treat opioid-use disorder. However, a literature review revealed a paucity of research involving data from urine drug tests that looked at compliance and abstinence in one sample. Statistical analysis of data from the Comprehensive Analysis of Reported Drugs (CARD) was used to assess compliance and abstinence during treatment in a large cohort of bup/nal patients attending chemical-dependency programs in the eastern USA in 2010 and 2011. Part 1: Bup/nal was present in 93.4% of first (n = 1,282; p < .0001) and 92.4% of last (n = 1,268; p < .0001) urine samples. Concomitantly, unreported illicit drugs were present in 47.7% (n = 655, p = .0261) of samples. Patients who were compliant with the bup/nal prescription were more likely than noncompliant patients to be abstinent during treatment (p = .0012; odds ratio = 1.69, 95% confidence interval 1.210-2.354). Part 2: An analysis of all samples collected in 2011 revealed a significant improvement in both compliance (p < 2.2 × 10⁻¹⁶) and abstinence (p < 2.2 × 10⁻¹⁶) during treatment. Conclusion/Importance: While significant use of illicit opioids during treatment with bup/nal is present, improvements in abstinence and high compliance during maintenance-assisted therapy programs may ameliorate fears of diversion in comprehensive programs. Expanded clinical datasets, the treatment modality, location, and year of sampling are important covariates for further studies. The potential for long-term antireward effects from bup/nal use requires consideration in future investigations.
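    The reported odds ratio and Wald confidence interval can be reproduced mechanically from a 2x2 table. The counts below are invented so that the odds ratio lands near the reported 1.69; they are not the actual CARD counts, so the resulting interval only roughly resembles the published one:

```python
import math

# Hypothetical 2x2 table (illustrative counts, NOT the CARD data):
#                     abstinent, not abstinent
a, b = 300, 500     # compliant with bup/nal
c, d = 50, 141      # noncompliant

odds_ratio = (a * d) / (b * c)

# Wald 95% confidence interval, computed on the log odds ratio scale.
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
```

An interval whose lower bound exceeds 1 corresponds to the abstract's conclusion that compliant patients were significantly more likely to be abstinent.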

  10. The Perseus computational platform for comprehensive analysis of (prote)omics data.

    PubMed

    Tyanova, Stefka; Temu, Tikira; Sinitcyn, Pavel; Carlson, Arthur; Hein, Marco Y; Geiger, Tamar; Mann, Matthias; Cox, Jürgen

    2016-09-01

    A main bottleneck in proteomics is the downstream biological analysis of highly multivariate quantitative protein abundance data generated using mass-spectrometry-based analysis. We developed the Perseus software platform (http://www.perseus-framework.org) to support biological and biomedical researchers in interpreting protein quantification, interaction and post-translational modification data. Perseus contains a comprehensive portfolio of statistical tools for high-dimensional omics data analysis covering normalization, pattern recognition, time-series analysis, cross-omics comparisons and multiple-hypothesis testing. A machine learning module supports the classification and validation of patient groups for diagnosis and prognosis, and it also detects predictive protein signatures. Central to Perseus is a user-friendly, interactive workflow environment that provides complete documentation of computational methods used in a publication. All activities in Perseus are realized as plugins, and users can extend the software by programming their own, which can be shared through a plugin store. We anticipate that Perseus's arsenal of algorithms and its intuitive usability will empower interdisciplinary analysis of complex large data sets.

  11. FMAP: Functional Mapping and Analysis Pipeline for metagenomics and metatranscriptomics studies.

    PubMed

    Kim, Jiwoong; Kim, Min Soo; Koh, Andrew Y; Xie, Yang; Zhan, Xiaowei

    2016-10-10

    Given the lack of a complete and comprehensive library of microbial reference genomes, determining the functional profile of diverse microbial communities is challenging. The available functional analysis pipelines lack several key features: (i) an integrated alignment tool, (ii) operon-level analysis, and (iii) the ability to process large datasets. Here we introduce our open-sourced, stand-alone functional analysis pipeline for analyzing whole metagenomic and metatranscriptomic sequencing data, FMAP (Functional Mapping and Analysis Pipeline). FMAP performs alignment, gene family abundance calculations, and statistical analysis (three levels of analyses are provided: differentially-abundant genes, operons and pathways). The resulting output can be easily visualized with heatmaps and functional pathway diagrams. FMAP functional predictions are consistent with currently available functional analysis pipelines. FMAP is a comprehensive tool for providing functional analysis of metagenomic/metatranscriptomic sequencing data. With the added features of integrated alignment, operon-level analysis, and the ability to process large datasets, FMAP will be a valuable addition to the currently available functional analysis toolbox. We believe that this software will be of great value to the wider biology and bioinformatics communities.

  12. Curbing Early-Career Teacher Attrition: A Pan-Canadian Document Analysis of Teacher Induction and Mentorship Programs

    ERIC Educational Resources Information Center

    Kutsyuruba, Benjamin; Tregunna, Leigha

    2014-01-01

    Over the past two decades, the phenomenon of teachers abandoning the profession has been noted internationally, and has increasingly caught the attention of policy makers and educational leaders. Despite this awareness, no pan-Canadian statistics or comprehensive reviews are available. This paper reports on the exploratory, pan-Canadian document…

  13. An intelligent system based on fuzzy probabilities for medical diagnosis– a study in aphasia diagnosis*

    PubMed Central

    Moshtagh-Khorasani, Majid; Akbarzadeh-T, Mohammad-R; Jahangiri, Nader; Khoobdel, Mehdi

    2009-01-01

    BACKGROUND: Aphasia diagnosis is particularly challenging due to linguistic uncertainty and vagueness, inconsistencies in the definition of aphasic syndromes, a large number of imprecise measurements, and natural diversity and subjectivity in test subjects as well as in the opinions of the experts who diagnose the disease. METHODS: Fuzzy probability is proposed here as the basic framework for handling the uncertainties in medical diagnosis, and particularly aphasia diagnosis. To efficiently construct this fuzzy probabilistic mapping, statistical analysis is performed to construct the input membership functions and to determine an effective set of input features. RESULTS: Considering the high sensitivity of performance measures to different distributions of testing/training sets, a statistical t-test of significance is applied to compare the fuzzy-probability approach with neural network (NN) results as well as the authors' earlier work using fuzzy logic. The proposed fuzzy probability estimator approach clearly provides better diagnosis for both classes of data sets. Specifically, for the first and second types of fuzzy probability classifiers, i.e. the spontaneous speech and comprehensive models, P-values are 2.24E-08 and 0.0059, respectively, strongly rejecting the null hypothesis. CONCLUSIONS: The technique is applied and compared on both comprehensive and spontaneous speech test data for diagnosis of four aphasia types: Anomic, Broca, Global and Wernicke. Statistical analysis confirms that the proposed approach can significantly improve accuracy using fewer aphasia features. PMID:21772867

  14. A Quantile Regression Approach to Understanding the Relations Among Morphological Awareness, Vocabulary, and Reading Comprehension in Adult Basic Education Students.

    PubMed

    Tighe, Elizabeth L; Schatschneider, Christopher

    2016-07-01

    The purpose of this study was to investigate the joint and unique contributions of morphological awareness and vocabulary knowledge at five reading comprehension levels in adult basic education (ABE) students. We introduce the statistical technique of multiple quantile regression, which enabled us to assess the predictive utility of morphological awareness and vocabulary knowledge at multiple points (quantiles) along the continuous distribution of reading comprehension. To demonstrate the efficacy of our multiple quantile regression analysis, we compared and contrasted our results with a traditional multiple regression analytic approach. Our results indicated that morphological awareness and vocabulary knowledge accounted for a large portion of the variance (82%-95%) in reading comprehension skills across all quantiles. Morphological awareness exhibited the greatest unique predictive ability at lower levels of reading comprehension whereas vocabulary knowledge exhibited the greatest unique predictive ability at higher levels of reading comprehension. These results indicate the utility of using multiple quantile regression to assess trajectories of component skills across multiple levels of reading comprehension. The implications of our findings for ABE programs are discussed. © Hammill Institute on Disabilities 2014.
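    The key property quantile regression exploits is that the q-th quantile minimizes the check (pinball) loss, which is what lets the technique fit different points of the reading-comprehension distribution. A minimal sketch of that property, on simulated skewed data rather than the ABE dataset, and with a constant predictor rather than a full regression:

```python
import numpy as np

def pinball_loss(y, pred, q):
    """Check (pinball) loss, which the q-th quantile minimizes."""
    e = y - pred
    return np.mean(np.maximum(q * e, (q - 1) * e))

rng = np.random.default_rng(1)
y = rng.exponential(scale=2.0, size=10_000)  # a skewed outcome variable

# For a constant predictor, the pinball-loss minimizer over a grid should
# land on the corresponding sample quantile.
grid = np.linspace(0.0, 10.0, 2001)
best = {q: grid[np.argmin([pinball_loss(y, g, q) for g in grid])]
        for q in (0.25, 0.5, 0.9)}
```

Replacing the constant with a linear predictor in morphological awareness and vocabulary, and minimizing the same loss at each q, gives the multiple quantile regression the study describes.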

  15. A brief introduction to probability.

    PubMed

    Di Paola, Gioacchino; Bertani, Alessandro; De Monte, Lavinia; Tuzzolino, Fabio

    2018-02-01

    The theory of probability has been debated for centuries: back in 1600, French mathematicians used the rules of probability to place and win bets. Subsequently, the knowledge of probability has significantly evolved and is now an essential tool for statistics. In this paper, the basic theoretical principles of probability will be reviewed, with the aim of facilitating the comprehension of statistical inference. After a brief general introduction on probability, we will review the concept of the "probability distribution", which is a function providing the probabilities of occurrence of the different possible outcomes of a categorical or continuous variable. Specific attention will be focused on the normal distribution, which is the most relevant distribution for statistical analysis.
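    The normal distribution the abstract highlights is easy to compute with directly, since its cumulative distribution function can be written in terms of the error function available in the Python standard library. A small sketch (illustrative, not from the paper) recovering the familiar 68-95 rule:

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """P(X <= x) for X ~ Normal(mu, sigma), via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Probability mass within one and two standard deviations of the mean:
within_1sd = normal_cdf(1.0) - normal_cdf(-1.0)   # about 0.683
within_2sd = normal_cdf(2.0) - normal_cdf(-2.0)   # about 0.954
```

Differences of this CDF at interval endpoints give the probability of any outcome range, which is the operational meaning of a continuous probability distribution.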

  16. Statistical analysis of the determinations of the Sun's Galactocentric distance

    NASA Astrophysics Data System (ADS)

    Malkin, Zinovy

    2013-02-01

    Based on several tens of R0 measurements made during the past two decades, a number of studies have been performed to derive the best estimate of R0. Some used just simple averaging to derive a result, whereas others provided comprehensive analyses of possible errors in published results. In either case, detailed statistical analyses of the data were not performed. However, a computation of the best estimates of the Galactic rotation constants is not only an astronomical but also a metrological task. Here we perform an analysis of 53 R0 measurements (published in the past 20 years) to assess the consistency of the data. Our analysis shows that they are internally consistent. It is also shown that any trend in the R0 estimates from the last 20 years is statistically negligible, which renders the presence of a bandwagon effect doubtful. On the other hand, the formal errors in the published R0 estimates improve significantly with time.
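    The metrological machinery involved is essentially inverse-variance weighting plus a consistency check. The sketch below uses invented R0 values and formal errors, not Malkin's 53-measurement dataset, and a reduced chi-square near 1 as the internal-consistency criterion:

```python
import numpy as np

# Hypothetical R0 estimates (kpc) with formal errors; illustrative only.
r0 = np.array([8.4, 7.9, 8.3, 8.0, 8.2, 8.5, 8.1])
err = np.array([0.4, 0.3, 0.2, 0.3, 0.25, 0.5, 0.15])

w = 1.0 / err**2
r0_hat = np.sum(w * r0) / np.sum(w)      # inverse-variance weighted mean
sigma_hat = 1.0 / np.sqrt(np.sum(w))     # formal uncertainty of the mean

# Reduced chi-square: values near 1 indicate the quoted errors are
# consistent with the scatter of the measurements.
chi2_dof = np.sum(w * (r0 - r0_hat)**2) / (len(r0) - 1)
```

A reduced chi-square well above 1 would instead suggest underestimated formal errors or real inconsistency between the published results.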

  17. Cancerouspdomains: comprehensive analysis of cancer type-specific recurrent somatic mutations in proteins and domains.

    PubMed

    Hashemi, Seirana; Nowzari Dalini, Abbas; Jalali, Adrin; Banaei-Moghaddam, Ali Mohammad; Razaghi-Moghadam, Zahra

    2017-08-16

    Discriminating driver mutations from the ones that play no role in cancer is a severe bottleneck in elucidating molecular mechanisms underlying cancer development. Since protein domains represent functional regions within proteins, mutations in them may disturb protein functionality. Therefore, studying mutations at the domain level may point researchers toward a more accurate assessment of the functional impact of the mutations. This article presents a comprehensive study to map mutations from 29 cancer types to both sequence- and structure-based domains. Statistical analysis was performed to identify candidate domains in which mutations occur with high statistical significance. For each cancer type, the corresponding type-specific domains were distinguished among all candidate domains. Subsequently, cancer type-specific domains facilitated the identification of specific proteins for each cancer type. Besides, performing interactome analysis on the specific proteins of each cancer type showed high levels of interconnectivity among them, which implies their functional relationship. To evaluate the role of mitochondrial genes, stem cell-specific genes and DNA repair genes in cancer development, their mutation frequency was determined via further analysis. This study has provided researchers with a publicly available data repository for studying both CATH and Pfam domain regions on protein-coding genes. Moreover, the associations between different groups of genes/domains and various cancer types have been clarified. The work is available at http://www.cancerouspdomains.ir.

  18. PinAPL-Py: A comprehensive web-application for the analysis of CRISPR/Cas9 screens.

    PubMed

    Spahn, Philipp N; Bath, Tyler; Weiss, Ryan J; Kim, Jihoon; Esko, Jeffrey D; Lewis, Nathan E; Harismendy, Olivier

    2017-11-20

    Large-scale genetic screens using CRISPR/Cas9 technology have emerged as a major tool for functional genomics. With its increased popularity, experimental biologists frequently acquire large sequencing datasets for which they often do not have an easy analysis option. While a few bioinformatic tools have been developed for this purpose, their utility is still hindered either due to limited functionality or the requirement of bioinformatic expertise. To make sequencing data analysis of CRISPR/Cas9 screens more accessible to a wide range of scientists, we developed a Platform-independent Analysis of Pooled Screens using Python (PinAPL-Py), which is operated as an intuitive web-service. PinAPL-Py implements state-of-the-art tools and statistical models, assembled in a comprehensive workflow covering sequence quality control, automated sgRNA sequence extraction, alignment, sgRNA enrichment/depletion analysis and gene ranking. The workflow is set up to use a variety of popular sgRNA libraries as well as custom libraries that can be easily uploaded. Various analysis options are offered, suitable to analyze a large variety of CRISPR/Cas9 screening experiments. Analysis output includes ranked lists of sgRNAs and genes, and publication-ready plots. PinAPL-Py helps to advance genome-wide screening efforts by combining comprehensive functionality with user-friendly implementation. PinAPL-Py is freely accessible at http://pinapl-py.ucsd.edu with instructions and test datasets.

  19. mapDIA: Preprocessing and statistical analysis of quantitative proteomics data from data independent acquisition mass spectrometry.

    PubMed

    Teo, Guoshou; Kim, Sinae; Tsou, Chih-Chiang; Collins, Ben; Gingras, Anne-Claude; Nesvizhskii, Alexey I; Choi, Hyungwon

    2015-11-03

    Data independent acquisition (DIA) mass spectrometry is an emerging technique that offers more complete detection and quantification of peptides and proteins across multiple samples. DIA allows fragment-level quantification, which can be considered as repeated measurements of the abundance of the corresponding peptides and proteins in the downstream statistical analysis. However, few statistical approaches are available for aggregating these complex fragment-level data into peptide- or protein-level statistical summaries. In this work, we describe a software package, mapDIA, for statistical analysis of differential protein expression using DIA fragment-level intensities. The workflow consists of three major steps: intensity normalization, peptide/fragment selection, and statistical analysis. First, mapDIA offers normalization of fragment-level intensities by total intensity sums as well as a novel alternative normalization by local intensity sums in retention time space. Second, mapDIA removes outlier observations and selects peptides/fragments that preserve the major quantitative patterns across all samples for each protein. Last, using the selected fragments and peptides, mapDIA performs model-based statistical significance analysis of protein-level differential expression between specified groups of samples. Using a comprehensive set of simulation datasets, we show that mapDIA detects differentially expressed proteins with accurate control of the false discovery rates. We also describe the analysis procedure in detail using two recently published DIA datasets generated for the 14-3-3β dynamic interaction network and the prostate cancer glycoproteome. The software was written in C++ and the source code is available for free through the SourceForge website http://sourceforge.net/projects/mapdia/. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015 Elsevier B.V. All rights reserved.
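    The first workflow step, total-intensity-sum normalization, amounts to rescaling each sample so that column sums agree. A toy sketch with an invented fragment-by-sample matrix (mapDIA's actual input comes from DIA search results, and its retention-time-local alternative is more involved):

```python
import numpy as np

# Toy fragment-intensity matrix: rows = fragments, columns = samples.
X = np.array([[100.0, 220.0,  95.0],
              [400.0, 900.0, 410.0],
              [250.0, 520.0, 240.0]])

# Total-intensity-sum normalization: scale each sample (column) so that all
# column sums are equal, removing gross sample-loading differences.
col_sums = X.sum(axis=0)
X_norm = X * (col_sums.mean() / col_sums)
```

After this step, remaining fragment-level differences between sample groups reflect relative abundance changes rather than differences in total loaded material.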

  20. A new computer code for discrete fracture network modelling

    NASA Astrophysics Data System (ADS)

    Xu, Chaoshui; Dowd, Peter

    2010-03-01

    The authors describe a comprehensive software package for two- and three-dimensional stochastic rock fracture simulation using marked point processes. Fracture locations can be modelled by a Poisson, a non-homogeneous, a cluster or a Cox point process; fracture geometries and properties are modelled by their respective probability distributions. Virtual sampling tools such as plane, window and scanline sampling are included in the software together with a comprehensive set of statistical tools including histogram analysis, probability plots, rose diagrams and hemispherical projections. The paper describes in detail the theoretical basis of the implementation and provides a case study in rock fracture modelling to demonstrate the application of the software.
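    The simplest of the point processes mentioned, the homogeneous Poisson process, can be simulated in two steps: draw a Poisson count for the window, then place that many points uniformly. A minimal two-dimensional sketch (illustrative parameters, not tied to the authors' software):

```python
import numpy as np

def poisson_point_process(intensity, width, height, rng):
    """Homogeneous Poisson process on a width x height window:
    Poisson-distributed count, then uniform independent locations."""
    n = rng.poisson(intensity * width * height)
    xs = rng.uniform(0.0, width, n)
    ys = rng.uniform(0.0, height, n)
    return np.column_stack([xs, ys])

rng = np.random.default_rng(42)
# e.g. fracture centres at 0.5 per unit area in a 20 x 10 window
pts = poisson_point_process(0.5, 20.0, 10.0, rng)
```

The non-homogeneous, cluster and Cox variants used for fracture locations generalize this by letting the intensity vary in space, cluster around parent points, or itself be random; fracture orientations and sizes are then attached to each point as "marks."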

  1. Language Characteristics and Academic Achievement: A Look at Asian and Hispanic Eighth Graders in NELS:88. Statistical Analysis Report.

    ERIC Educational Resources Information Center

    Bradby, Denise; And Others

    This report examines the demographic and language characteristics and educational aspirations of Asian American and Hispanic American eighth graders and relates that information to their mathematical ability and reading comprehension as measured by an achievement test. Special attention is paid to students who come from homes in which a…

  2. Minorities & Women in the Health Fields: Applicants, Students, and Workers. Health Manpower References.

    ERIC Educational Resources Information Center

    Philpot, Wilbertine P.; Bernstein, Stuart

    A comprehensive look at the current and future supply of women and minorities in the health professions and in health professions schools is provided in this statistical report. Its data are more extensive than those presented in either of two earlier reports, hence, it can prove useful in assisting analysis of the composition of the nation's…

  3. 2015 Distributed Wind Market Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orrell, Alice C.; Foster, Nikolas A.F.; Homer, Juliet S.

    The U.S. Department of Energy’s (DOE’s) annual Distributed Wind Market Report provides stakeholders with statistics and analysis of the market along with insights into its trends and characteristics. By providing a comprehensive overview of the distributed wind market, this report can help plan and guide future investments and decisions by industry, utilities, federal and state agencies, and other interested parties.

  4. Unbiased metabolite profiling by liquid chromatography-quadrupole time-of-flight mass spectrometry and multivariate data analysis for herbal authentication: classification of seven Lonicera species flower buds.

    PubMed

    Gao, Wen; Yang, Hua; Qi, Lian-Wen; Liu, E-Hu; Ren, Mei-Ting; Yan, Yu-Ting; Chen, Jun; Li, Ping

    2012-07-06

    Plant-based medicines are becoming increasingly popular around the world. Authentication of herbal raw materials is important to ensure their safety and efficacy. Some herbs belonging to closely related species but differing in medicinal properties are difficult to identify because of similar morphological and microscopic characteristics. Chromatographic fingerprinting is an alternative method to distinguish them. Existing approaches do not allow a comprehensive analysis for herbal authentication. We have now developed a strategy consisting of (1) full metabolic profiling of herbal medicines by rapid resolution liquid chromatography (RRLC) combined with quadrupole time-of-flight mass spectrometry (QTOF MS), (2) global analysis of non-targeted compounds by a molecular feature extraction algorithm, (3) multivariate statistical analysis for classification and prediction, and (4) marker compound characterization. This approach has provided a fast and unbiased comparative multivariate analysis of the metabolite composition of 33-batch samples covering seven Lonicera species. Individual metabolic profiles are performed at the level of molecular fragments without prior structural assignment. In the entire set, the obtained classifier for seven Lonicera species flower buds showed good prediction performance, and a total of 82 statistically different components were rapidly obtained by the strategy. The elemental compositions of discriminative metabolites were characterized by accurate mass measurement of the pseudomolecular ions, and their chemical types were assigned by the MS/MS spectra. The high-resolution, comprehensive and unbiased strategy for metabolite data analysis presented here is powerful and opens a new direction for authentication in herbal analysis. Copyright © 2012 Elsevier B.V. All rights reserved.

  5. A Proof of Concept Study of Function-Based Statistical Analysis of fNIRS Data: Syntax Comprehension in Children with Specific Language Impairment Compared to Typically-Developing Controls.

    PubMed

    Fu, Guifang; Wan, Nicholas J A; Baker, Joseph M; Montgomery, James W; Evans, Julia L; Gillam, Ronald B

    2016-01-01

    Functional near infrared spectroscopy (fNIRS) is a neuroimaging technology that enables investigators to indirectly monitor brain activity in vivo through relative changes in the concentration of oxygenated and deoxygenated hemoglobin. One of the key features of fNIRS is its superior temporal resolution, with dense measurements over very short periods of time (100 ms increments). Unfortunately, most statistical analysis approaches in the existing literature have not fully utilized the high temporal resolution of fNIRS. For example, many analysis procedures are based on linearity assumptions that only extract partial information, thereby neglecting the overall dynamic trends in fNIRS trajectories. The main goal of this article is to assess the ability of a functional data analysis (FDA) approach to detect significant differences in hemodynamic responses recorded by fNIRS. Children with and without SLI wore two 3 × 5 fNIRS caps situated over the bilateral parasylvian areas as they completed a language comprehension task. FDA was used to decompose the high dimensional hemodynamic curves into the mean function and a few eigenfunctions to represent the overall trend and variation structures over time. In contrast to the widely used general linear model (GLM), we did not assume any parametric structure and let the data speak for themselves. This analysis identified significant differences between the case and control groups in the oxygenated hemodynamic mean trends in the bilateral inferior frontal and left inferior posterior parietal brain regions. We also detected significant group differences in the deoxygenated hemodynamic mean trends in the right inferior posterior parietal cortex and left temporal parietal junction. These findings, using dramatically different approaches, experimental designs, data sets, and foci, were consistent with several other reports, confirming group differences in the importance of these two areas for syntax comprehension. 
The proposed FDA was consistent with the temporal characteristics of fNIRS, thus providing an alternative methodology for fNIRS analyses.
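    The "mean function plus a few eigenfunctions" decomposition at the heart of this FDA approach can be sketched as a principal component analysis over time, computed with an SVD of centred curves. The data below are simulated (invented mean trend, modes, and noise, not the fNIRS recordings):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "hemodynamic" curves: 40 subjects x 100 time points, built from a
# mean trend plus two smooth modes of variation plus noise.
t = np.linspace(0.0, 1.0, 100)
mean_fn = np.sin(2 * np.pi * t)
scores = rng.normal(size=(40, 2))
modes = np.vstack([np.cos(2 * np.pi * t), np.sin(4 * np.pi * t)])
curves = mean_fn + scores @ modes + rng.normal(0.0, 0.05, (40, 100))

# FDA-style decomposition: centre the curves, then take eigenfunctions
# (principal components over time) from the SVD of the centred matrix.
centred = curves - curves.mean(axis=0)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
var_explained = s**2 / np.sum(s**2)
# Rows of Vt are the eigenfunctions; the first two should capture
# nearly all the variation in this two-mode simulation.
```

Group comparisons then operate on the estimated mean functions and the low-dimensional scores, rather than on a pointwise linear model.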

  6. Propensity Score–Matched Analysis of Comprehensive Local Therapy for Oligometastatic Non-Small Cell Lung Cancer That Did Not Progress After Front-Line Chemotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheu, Tommy; Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas; Heymach, John V.

    2014-11-15

    Purpose: To retrospectively analyze factors influencing survival in patients with non-small cell lung cancer presenting with ≤3 synchronous metastatic lesions. Methods and Materials: We identified 90 patients presenting between 1998 and 2012 with non-small cell lung cancer and ≤3 metastatic lesions who had received at least 2 cycles of chemotherapy followed by surgery or radiation therapy before disease progression. The median number of chemotherapy cycles before comprehensive local therapy (CLT) (including concurrent chemoradiation as first-line therapy) was 6. Factors potentially affecting overall (OS) or progression-free survival (PFS) were evaluated with Cox proportional hazards regression. Propensity score matching was used to assess the efficacy of CLT. Results: Median follow-up time was 46.6 months. Benefits in OS (27.1 vs 13.1 months) and PFS (11.3 months vs 8.0 months) were found with CLT, and the differences were statistically significant when propensity score matching was used (P ≤ .01). On adjusted analysis, CLT had a statistically significant benefit in terms of OS (hazard ratio, 0.37; 95% confidence interval, 0.20-0.70; P ≤ .01) but not PFS (P=.10). In an adjusted subgroup analysis of patients receiving CLT, favorable performance status (hazard ratio, 0.43; 95% confidence interval, 0.22-0.84; P=.01) was found to predict improved OS. Conclusions: Comprehensive local therapy was associated with improved OS in an adjusted analysis and seemed to favorably influence OS and PFS when factors such as N status, number of metastatic lesions, and disease sites were controlled for with propensity score–matched analysis. Patients with favorable performance status had improved outcomes with CLT. Ultimately, prospective, randomized trials are needed to provide definitive evidence as to the optimal treatment approach for this patient population.

  7. Propensity score-matched analysis of comprehensive local therapy for oligometastatic non-small cell lung cancer that did not progress after front-line chemotherapy.

    PubMed

    Sheu, Tommy; Heymach, John V; Swisher, Stephen G; Rao, Ganesh; Weinberg, Jeffrey S; Mehran, Reza; McAleer, Mary Frances; Liao, Zhongxing; Aloia, Thomas A; Gomez, Daniel R

    2014-11-15

    To retrospectively analyze factors influencing survival in patients with non-small cell lung cancer presenting with ≤3 synchronous metastatic lesions. We identified 90 patients presenting between 1998 and 2012 with non-small cell lung cancer and ≤3 metastatic lesions who had received at least 2 cycles of chemotherapy followed by surgery or radiation therapy before disease progression. The median number of chemotherapy cycles before comprehensive local therapy (CLT) (including concurrent chemoradiation as first-line therapy) was 6. Factors potentially affecting overall (OS) or progression-free survival (PFS) were evaluated with Cox proportional hazards regression. Propensity score matching was used to assess the efficacy of CLT. Median follow-up time was 46.6 months. Benefits in OS (27.1 vs 13.1 months) and PFS (11.3 months vs 8.0 months) were found with CLT, and the differences were statistically significant when propensity score matching was used (P ≤ .01). On adjusted analysis, CLT had a statistically significant benefit in terms of OS (hazard ratio, 0.37; 95% confidence interval, 0.20-0.70; P ≤ .01) but not PFS (P=.10). In an adjusted subgroup analysis of patients receiving CLT, favorable performance status (hazard ratio, 0.43; 95% confidence interval, 0.22-0.84; P=.01) was found to predict improved OS. Comprehensive local therapy was associated with improved OS in an adjusted analysis and seemed to favorably influence OS and PFS when factors such as N status, number of metastatic lesions, and disease sites were controlled for with propensity score-matched analysis. Patients with favorable performance status had improved outcomes with CLT. Ultimately, prospective, randomized trials are needed to provide definitive evidence as to the optimal treatment approach for this patient population. Copyright © 2014 Elsevier Inc. All rights reserved.
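    The propensity score matching used in this study pairs each treated patient with an untreated patient of similar estimated treatment probability. A minimal greedy 1:1 nearest-neighbour sketch with a caliper, on invented scores (the study's actual matching algorithm and covariates are not specified in the abstract):

```python
import numpy as np

def greedy_match(ps_treated, ps_control, caliper=0.05):
    """Greedy 1:1 nearest-neighbour matching on the propensity score.
    Returns (treated_index, control_index) pairs within the caliper;
    each control is used at most once."""
    available = list(range(len(ps_control)))
    pairs = []
    for i, p in enumerate(ps_treated):
        if not available:
            break
        j = min(available, key=lambda k: abs(ps_control[k] - p))
        if abs(ps_control[j] - p) <= caliper:
            pairs.append((i, j))
            available.remove(j)
    return pairs

rng = np.random.default_rng(7)
ps_t = rng.uniform(0.3, 0.8, 30)  # hypothetical scores, CLT patients
ps_c = rng.uniform(0.1, 0.7, 60)  # hypothetical scores, non-CLT patients
pairs = greedy_match(ps_t, ps_c)
```

Survival comparisons (here, OS and PFS) are then run on the matched pairs, so that treated and untreated groups are balanced on the covariates entering the propensity model.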

  8. Calypso: a user-friendly web-server for mining and visualizing microbiome-environment interactions.

    PubMed

    Zakrzewski, Martha; Proietti, Carla; Ellis, Jonathan J; Hasan, Shihab; Brion, Marie-Jo; Berger, Bernard; Krause, Lutz

    2017-03-01

    Calypso is an easy-to-use online software suite that allows non-expert users to mine, interpret and compare taxonomic information from metagenomic or 16S rDNA datasets. Calypso has a focus on multivariate statistical approaches that can identify complex environment-microbiome associations. The software enables quantitative visualizations, statistical testing, multivariate analysis, supervised learning, factor analysis, multivariable regression, network analysis and diversity estimates. Comprehensive help pages, tutorials and videos are provided via a wiki page. The web-interface is accessible via http://cgenome.net/calypso/ . The software is programmed in Java, Perl and R and the source code is available from Zenodo ( https://zenodo.org/record/50931 ). The software is freely available for non-commercial users. l.krause@uq.edu.au. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  9. A methodological analysis of chaplaincy research: 2000-2009.

    PubMed

    Galek, Kathleen; Flannelly, Kevin J; Jankowski, Katherine R B; Handzo, George F

    2011-01-01

    The present article presents a comprehensive review and analysis of quantitative research conducted in the United States on chaplaincy and closely related topics published between 2000 and 2009. A combined search strategy identified 49 quantitative studies in 13 journals. The analysis focuses on the methodological sophistication of the studies, compared to earlier research on chaplaincy and pastoral care. Cross-sectional surveys of convenience samples still dominate the field, but sample sizes have increased somewhat over the past three decades. Reporting of the validity and reliability of measures continues to be low, although reporting of response rates has improved. Improvements in the use of inferential statistics and statistical controls were also observed, compared to previous research. The authors conclude that more experimental research is needed on chaplaincy, along with an increased use of hypothesis testing, regardless of the research designs that are used.

  10. Individual Differences in Statistical Learning Predict Children's Comprehension of Syntax

    ERIC Educational Resources Information Center

    Kidd, Evan; Arciuli, Joanne

    2016-01-01

    Variability in children's language acquisition is likely due to a number of cognitive and social variables. The current study investigated whether individual differences in statistical learning (SL), which has been implicated in language acquisition, independently predicted 6- to 8-year-old's comprehension of syntax. Sixty-eight (N = 68)…

  11. Drug target inference through pathway analysis of genomics data

    PubMed Central

    Ma, Haisu; Zhao, Hongyu

    2013-01-01

    Statistical modeling coupled with bioinformatics is commonly used for drug discovery. Although there exist many approaches for single-target-based drug design and target inference, recent years have seen a paradigm shift to system-level pharmacological research. Pathway analysis of genomics data represents one promising direction for computational inference of drug targets. This article aims at providing a comprehensive review of the evolving issues in this field, covering methodological developments, their pros and cons, as well as future research directions. PMID:23369829

  12. Toward improved analysis of concentration data: Embracing nondetects.

    PubMed

    Shoari, Niloofar; Dubé, Jean-Sébastien

    2018-03-01

    Various statistical tests on concentration data serve to support decision-making regarding characterization and monitoring of contaminated media, assessing exposure to a chemical, and quantifying the associated risks. However, the routine statistical protocols cannot be directly applied because of challenges arising from nondetects or left-censored observations, which are concentration measurements below the detection limit of measuring instruments. Despite the existence of techniques based on survival analysis that can adjust for nondetects, these are seldom taken into account properly. A comprehensive review of the literature showed that managing policies regarding analysis of censored data do not always agree and that guidance from regulatory agencies may be outdated. Therefore, researchers and practitioners commonly resort to the most convenient way of tackling the censored data problem by substituting nondetects with arbitrary constants prior to data analysis, although this is generally regarded as a bias-prone approach. Hoping to improve the interpretation of concentration data, the present article aims to familiarize researchers in different disciplines with the significance of left-censored observations and provides theoretical and computational recommendations (under both frequentist and Bayesian frameworks) for adequate analysis of censored data. In particular, the present article synthesizes key findings from previous research with respect to 3 noteworthy aspects of inferential statistics: estimation of descriptive statistics, hypothesis testing, and regression analysis. Environ Toxicol Chem 2018;37:643-656. © 2017 SETAC.
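    The model-based alternative to substitution that the article recommends can be sketched as a maximum-likelihood fit of a lognormal distribution to left-censored data: each detected value contributes the density of its log, and each nondetect contributes the probability mass below the detection limit. The concentrations, detection limit, and coarse grid search below are illustrative only.

```python
import math

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def loglik(mu, sigma, detects, n_nd, dl):
    """Left-censored lognormal log-likelihood (additive constants dropped).
    mu, sigma parameterize the normal distribution of log-concentration."""
    p_below = norm_cdf((math.log(dl) - mu) / sigma)
    ll = n_nd * math.log(max(p_below, 1e-300))  # guard against log(0)
    for x in detects:
        z = (math.log(x) - mu) / sigma
        ll += -math.log(sigma) - 0.5 * z * z
    return ll

detects = [1.2, 3.4, 0.9, 5.6, 2.1]  # measured concentrations (illustrative)
n_nd, dl = 3, 0.5                    # three values reported as "<0.5"

# coarse grid search over (mu, sigma); a real analysis would use an optimizer
grid = ((mu / 100.0, s / 100.0)
        for mu in range(-200, 201) for s in range(20, 300, 5))
mu_hat, sigma_hat = max(grid, key=lambda p: loglik(p[0], p[1], detects, n_nd, dl))
```

    Unlike substituting DL/2 for each nondetect, this uses the censoring information itself, which is the bias reduction the article argues for.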

  13. Emotion comprehension: the impact of nonverbal intelligence.

    PubMed

    Albanese, Ottavia; De Stasio, Simona; Di Chiacchio, Carlo; Fiorilli, Caterina; Pons, Francisco

    2010-01-01

    A substantial body of research has established that emotion understanding develops throughout early childhood and has identified three hierarchical developmental phases: external, mental, and reflexive. The authors analyzed nonverbal intelligence and its effect on children's improvement of emotion understanding and hypothesized that cognitive level is a consistent predictor of emotion comprehension. In all, 366 children (182 girls, 184 boys) between the ages of 3 and 10 years were tested using the Test of Emotion Comprehension and the Coloured Progressive Matrices. The data obtained by using the path analysis model revealed that nonverbal intelligence was statistically associated with the ability to recognize emotions in the 3 developmental phases. The use of this model showed the significant effect that cognitive aspect plays on the reflexive phase. The authors aim to contribute to the debate about the influence of cognitive factors on emotion understanding.

  14. Holo-analysis.

    PubMed

    Rosen, G D

    2006-06-01

    Meta-analysis is a vague descriptor used to encompass very diverse methods of data collection and analysis, ranging from simple averages to more complex statistical methods. Holo-analysis is a fully comprehensive statistical analysis of all available data and all available variables in a specified topic, with results expressed in a holistic factual empirical model. The objectives and applications of holo-analysis include software production for prediction of responses with confidence limits, translation of research conditions to praxis (field) circumstances, exposure of key missing variables, discovery of theoretically unpredictable variables and interactions, and planning future research. Holo-analyses are cited as examples of the effects of exogenous phytases on broiler feed intake and live weight gain, which account for 70% of variation in responses in terms of 20 highly significant chronological, dietary, environmental, genetic, managemental, and nutrient variables. Even better future accountancy of variation will be facilitated if and when authors of papers routinely provide key data for currently neglected variables, such as temperatures, complete feed formulations, and mortalities.

  15. Characterizing microstructural features of biomedical samples by statistical analysis of Mueller matrix images

    NASA Astrophysics Data System (ADS)

    He, Honghui; Dong, Yang; Zhou, Jialing; Ma, Hui

    2017-03-01

    As one of the salient features of light, polarization carries abundant structural and optical information about media. Recently, as a comprehensive description of polarization properties, Mueller matrix polarimetry has been applied to various biomedical studies such as cancerous tissue detection. In previous works, it has been found that the structural information encoded in the 2D Mueller matrix images can be presented by other transformed parameters with a more explicit relationship to certain microstructural features. In this paper, we present a statistical analysis method to transform the 2D Mueller matrix images into frequency distribution histograms (FDHs) and their central moments to reveal the dominant structural features of samples quantitatively. The experimental results for porcine heart, intestine, stomach, and liver tissues demonstrate that the transformation parameters and central moments based on the statistical analysis of Mueller matrix elements have simple relationships to the dominant microstructural properties of biomedical samples, including the density and orientation of fibrous structures and the depolarization power, diattenuation, and absorption abilities. It is shown in this paper that the statistical analysis of 2D images of Mueller matrix elements may provide quantitative or semi-quantitative criteria for biomedical diagnosis.
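    The central moments used to summarize the frequency distribution histograms can be computed directly from the pixel values; the short list below stands in for a flattened Mueller matrix element image and is purely illustrative.

```python
def central_moments(values):
    """Mean, variance, skewness and (non-excess) kurtosis of a 1-D sample,
    e.g. the pixel values of one Mueller matrix element image."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    sd = var ** 0.5
    skew = sum((v - mean) ** 3 for v in values) / (n * sd ** 3)
    kurt = sum((v - mean) ** 4 for v in values) / (n * var ** 2)
    return mean, var, skew, kurt

pixels = [1.0, 2.0, 3.0, 4.0, 5.0]       # illustrative pixel values
print(central_moments(pixels))           # → (3.0, 2.0, 0.0, 1.7)
```

    The variance captures the spread of an FDH, while skewness and kurtosis quantify its asymmetry and tail weight, which is how the paper links histogram shape to microstructure.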

  16. Statistical models of lunar rocks and regolith

    NASA Technical Reports Server (NTRS)

    Marcus, A. H.

    1973-01-01

    The mathematical, statistical, and computational approaches used in the investigation of the interrelationship of lunar fragmental material, regolith, lunar rocks, and lunar craters are described. The first two phases of the work explored the sensitivity of the production model of fragmental material to mathematical assumptions, and then completed earlier studies on the survival of lunar surface rocks with respect to competing processes. The third phase combined earlier work into a detailed statistical analysis and probabilistic model of regolith formation by lithologically distinct layers, interpreted as modified crater ejecta blankets. The fourth phase of the work dealt with problems encountered in combining the results of the entire project into a comprehensive, multipurpose computer simulation model for the craters and regolith. Highlights of each phase of research are given.

  17. Risk Comprehension and Judgments of Statistical Evidentiary Appeals: When a Picture Is Not Worth a Thousand Words

    ERIC Educational Resources Information Center

    Parrott, Roxanne; Silk, Kami; Dorgan, Kelly; Condit, Celeste; Harris, Tina

    2005-01-01

    Too little theory and research has considered the effects of communicating statistics in various forms on comprehension, perceptions of evidence quality, or evaluations of message persuasiveness. In a considered extension of Subjective Message Construct Theory (Morley, 1987), we advance a rationale relating evidence form to the formation of…

  18. Development of a Comprehensive Digital Avionics Curriculum for the Aeronautical Engineer

    DTIC Science & Technology

    2006-03-01

    able to analyze and design aircraft and missile guidance and control systems, including feedback stabilization schemes and stochastic processes, using ... Uncertainty modeling for robust control; Robust closed-loop stability and performance; Robust H-infinity control; Robustness check using mu-analysis ... Controlled feedback (reduces noise) 3. Statistical group response (reduce pressure toward conformity) When used as a tool to study a complex problem

  19. The role of strategic forest inventories in aiding land management decision-making: Examples from the U.S

    Treesearch

    W. Keith Moser; Renate Bush; John D. Shaw; Mark H. Hansen; Mark D. Nelson

    2010-01-01

    A major challenge for today’s resource managers is the linking of stand- and landscape-scale dynamics. The U.S. Forest Service has made major investments in programs at both the stand- (national forest project) and landscape/regional (Forest Inventory and Analysis [FIA] program) levels. FIA produces the only comprehensive and consistent statistical information on the...

  20. A Principal Component Analysis/Fuzzy Comprehensive Evaluation for Rockburst Potential in Kimberlite

    NASA Astrophysics Data System (ADS)

    Pu, Yuanyuan; Apel, Derek; Xu, Huawei

    2018-02-01

    Kimberlite is an igneous rock which sometimes bears diamonds. Most of the diamonds mined in the world today are found in kimberlite ores. Burst potential in kimberlite has not been investigated, because kimberlite is mostly mined using open-pit mining, which poses very little threat of rock bursting. However, as the mining depth keeps increasing, the mines convert to underground mining methods, which can pose a threat of rock bursting in kimberlite. This paper focuses on the burst potential of kimberlite at a diamond mine in northern Canada. A combined model with the methods of principal component analysis (PCA) and fuzzy comprehensive evaluation (FCE) is developed to process data from 12 different locations in kimberlite pipes. Based on the 12 calculated fuzzy evaluation vectors, 8 locations show a moderate burst potential, 2 locations show no burst potential, and 2 locations show strong and violent burst potential, respectively. Using statistical principles, a Mahalanobis distance is adopted to build a comprehensive fuzzy evaluation vector for the whole mine, and the final evaluation for burst potential is moderate, which is verified by a practical rockbursting situation at the mine site.
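    The fuzzy comprehensive evaluation step can be sketched as a weighted aggregation of a membership matrix into an evaluation vector, with the grade taken as the largest component. The weights (standing in for PCA-derived indicator weights), membership values, and grade labels below are illustrative, not the paper's data.

```python
def fuzzy_evaluate(weights, membership, grades):
    """Weighted-average FCE operator: b_j = sum_i w_i * r_ij, where row i of
    `membership` gives indicator i's degrees of membership in each grade."""
    b = [sum(w * row[j] for w, row in zip(weights, membership))
         for j in range(len(grades))]
    return grades[b.index(max(b))], b

grades = ["none", "moderate", "strong"]
weights = [0.5, 0.3, 0.2]                # illustrative indicator weights
membership = [[0.1, 0.7, 0.2],           # one row per indicator
              [0.2, 0.5, 0.3],
              [0.0, 0.6, 0.4]]

label, vector = fuzzy_evaluate(weights, membership, grades)
print(label)  # the maximum-membership grade, "moderate" for these numbers
```

    The same aggregation, applied to a mine-wide vector built from the 12 locations, yields the paper's overall "moderate" rating.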

  1. Quantitative characterization of galectin-3-C affinity mass spectrometry measurements: Comprehensive data analysis, obstacles, shortcuts and robustness.

    PubMed

    Haramija, Marko; Peter-Katalinić, Jasna

    2017-10-30

    Affinity mass spectrometry (AMS) is an emerging tool in the field of the study of protein•carbohydrate complexes. However, experimental obstacles and data analysis are preventing faster integration of AMS methods into the glycoscience field. Here we show how analysis of direct electrospray ionization mass spectrometry (ESI-MS) AMS data can be simplified for screening purposes, even for complex AMS spectra. A direct ESI-MS assay was tested in this study and binding data for the galectin-3C•lactose complex were analyzed using a comprehensive and simplified data analysis approach. In the comprehensive data analysis approach, noise, all protein charge states, alkali ion adducts and signal overlap were taken into account. In a simplified approach, only the intensities of the fully protonated free protein and the protein•carbohydrate complex for the main protein charge state were taken into account. In our study, for high intensity signals, noise was negligible, sodiated protein and sodiated complex signals cancelled each other out when calculating the Kd value, and signal overlap influenced the Kd value only to a minor extent. Influence of these parameters on low intensity signals was much higher. However, low intensity protein charge states should be avoided in quantitative AMS analyses due to poor ion statistics. The results indicate that noise, alkali ion adducts, signal overlap, as well as low intensity protein charge states, can be neglected for preliminary experiments, as well as in screening assays. One comprehensive data analysis performed as a control should be sufficient to validate this hypothesis for other binding systems as well. Copyright © 2017 John Wiley & Sons, Ltd.
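    In a direct ESI-MS assay of this kind, the Kd follows from simple mass balance once the complex-to-protein intensity ratio is read off the spectrum. The sketch below assumes equal ionization response for free and bound protein (the simplification the abstract discusses); the concentrations are hypothetical, not the galectin-3C•lactose values.

```python
def kd_from_ratio(r, p0, l0):
    """Dissociation constant from r = I(PL)/I(P) at one titration point,
    given total protein p0 and total ligand l0 (same concentration units).
    Assumes free and bound protein ionize with equal efficiency."""
    pl = r * p0 / (1.0 + r)   # bound-protein concentration from mass balance
    return (l0 - pl) / r      # Kd = [P][L]/[PL] = ([L]0 - [PL]) / r

# hypothetical titration point: 10 uM protein, 55 uM ligand, r = 1
print(kd_from_ratio(1.0, 10.0, 55.0))  # → 50.0
```

    The comprehensive approach in the paper additionally sums over charge states and adducts before forming this ratio; the arithmetic above is the screening-level shortcut.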

  2. Lower education level is a major risk factor for peritonitis incidence in chronic peritoneal dialysis patients: a retrospective cohort study with 12-year follow-up.

    PubMed

    Chern, Yahn-Bor; Ho, Pei-Shan; Kuo, Li-Chueh; Chen, Jin-Bor

    2013-01-01

    Peritoneal dialysis (PD)-related peritonitis remains an important complication in PD patients, potentially causing technique failure and influencing patient outcome. To date, no comprehensive study in the Taiwanese PD population has used a time-dependent statistical method to analyze the factors associated with PD-related peritonitis. Our single-center retrospective cohort study, conducted in southern Taiwan between February 1999 and July 2010, used time-dependent statistical methods to analyze the factors associated with PD-related peritonitis. The study recruited 404 PD patients for analysis, 150 of whom experienced at least 1 episode of peritonitis during the follow-up period. The incidence rate of peritonitis was highest during the first 6 months after PD start. A comparison of patients in the two groups (peritonitis vs null-peritonitis) by univariate analysis showed that the peritonitis group included fewer men (p = 0.048) and more patients of older age (≥65 years, p = 0.049). In addition, patients who had never received compulsory education showed a statistically higher incidence of PD-related peritonitis in the univariate analysis (p = 0.04). A proportional hazards model identified education level (less than elementary school vs any higher education level) as having an independent association with PD-related peritonitis (hazard ratio [HR]: 1.45; 95% confidence interval [CI]: 1.01 to 2.06; p = 0.045). Comorbidities measured using the Charlson comorbidity index (score >2 vs ≤2) showed borderline statistical significance (HR: 1.44; 95% CI: 1.00 to 2.13; p = 0.053). A lower education level is a major risk factor for PD-related peritonitis independent of age, sex, hypoalbuminemia, and comorbidities. Our study emphasizes that a comprehensive PD education program is crucial for PD patients with a lower education level.

  3. Periodontal disease and carotid atherosclerosis: A meta-analysis of 17,330 participants.

    PubMed

    Zeng, Xian-Tao; Leng, Wei-Dong; Lam, Yat-Yin; Yan, Bryan P; Wei, Xue-Mei; Weng, Hong; Kwong, Joey S W

    2016-01-15

    The association between periodontal disease and carotid atherosclerosis has been evaluated primarily in single-center studies, and whether periodontal disease is an independent risk factor of carotid atherosclerosis remains uncertain. This meta-analysis aimed to evaluate the association between periodontal disease and carotid atherosclerosis. We searched PubMed and Embase for relevant observational studies up to February 20, 2015. Two authors independently extracted data from included studies, and odds ratios (ORs) with 95% confidence intervals (CIs) were calculated for overall and subgroup meta-analyses. Statistical heterogeneity was assessed by the chi-squared test (P<0.1 for statistical significance) and quantified by the I² statistic. Data analysis was conducted using the Comprehensive Meta-Analysis (CMA) software. Fifteen observational studies involving 17,330 participants were included in the meta-analysis. The overall pooled result showed that periodontal disease was associated with carotid atherosclerosis (OR: 1.27, 95% CI: 1.14-1.41; P<0.001) but statistical heterogeneity was substantial (I² = 78.90%). Subgroup analysis of adjusted smoking and diabetes mellitus showed borderline significance (OR: 1.08; 95% CI: 1.00-1.18; P=0.05). Sensitivity and cumulative analyses both indicated that our results were robust. Findings of our meta-analysis indicated that the presence of periodontal disease was associated with carotid atherosclerosis; however, further large-scale, well-conducted clinical studies are needed to explore the precise risk of developing carotid atherosclerosis in patients with periodontal disease. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
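    The inverse-variance pooling and I² heterogeneity statistic described above can be sketched in a few lines; the study-level odds ratios and confidence intervals below are invented for illustration, not the fifteen studies analyzed in the paper.

```python
import math

def pool_fixed(ors, cis):
    """Fixed-effect inverse-variance pooling of log odds ratios.
    Each study's SE is recovered from its 95% CI: (ln hi - ln lo) / 3.92.
    Returns the pooled OR, Cochran's Q, and the I-squared statistic (%)."""
    logs = [math.log(o) for o in ors]
    ws = [(3.92 / (math.log(hi) - math.log(lo))) ** 2 for lo, hi in cis]
    pooled = sum(w, l) if False else sum(w * l for w, l in zip(ws, logs)) / sum(ws)
    q = sum(w * (l - pooled) ** 2 for w, l in zip(ws, logs))
    i2 = max(0.0, (q - (len(ors) - 1)) / q) * 100.0 if q > 0 else 0.0
    return math.exp(pooled), q, i2

# illustrative study-level odds ratios and 95% CIs
ors = [1.45, 1.10, 1.30]
cis = [(1.10, 1.91), (0.95, 1.27), (1.05, 1.61)]
or_pooled, q, i2 = pool_fixed(ors, cis)
```

    With substantial heterogeneity such as the paper's I² of 78.90%, a random-effects model would usually be preferred over this fixed-effect sketch.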

  4. Analysis of repeated measurement data in the clinical trials

    PubMed Central

    Singh, Vineeta; Rana, Rakesh Kumar; Singhal, Richa

    2013-01-01

    Statistics is an integral part of clinical trials. Elements of statistics span clinical trial design, data monitoring, analyses and reporting. A solid understanding of statistical concepts by clinicians improves the comprehension and the resulting quality of clinical trials. In biomedical research, researchers frequently use the t-test and ANOVA to compare means between groups of interest irrespective of the nature of the data. In clinical trials, data are recorded on the same patients at more than two time points. In such a situation, using standard ANOVA procedures is not appropriate, as they do not account for dependencies between observations within subjects. To deal with such study data, repeated measures ANOVA should be used. In this article, the application of one-way repeated measures ANOVA is demonstrated using the software SPSS (Statistical Package for the Social Sciences) Version 15.0 on data collected at four time points (day 0, day 15, day 30, and day 45) of a multicentre clinical trial conducted on Pandu Roga (~Iron Deficiency Anemia) with an Ayurvedic formulation, Dhatrilauha. PMID:23930038
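    The one-way repeated measures ANOVA described above partitions the total sum of squares into time, subject, and error components, removing between-subject variability from the error term. The sketch below shows the computation; the values (three hypothetical patients at four time points) are invented, not the trial's data.

```python
def rm_anova(data):
    """One-way repeated measures ANOVA.
    data: one list per subject, one value per time point.
    Returns (F statistic for the time effect, df_time, df_error)."""
    s, t = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (s * t)
    time_means = [sum(row[j] for row in data) / s for j in range(t)]
    subj_means = [sum(row) / t for row in data]
    ss_time = s * sum((m - grand) ** 2 for m in time_means)
    ss_subj = t * sum((m - grand) ** 2 for m in subj_means)
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_err = ss_total - ss_time - ss_subj     # residual after removing subjects
    df_time, df_err = t - 1, (t - 1) * (s - 1)
    return (ss_time / df_time) / (ss_err / df_err), df_time, df_err

# hypothetical measurements for 3 subjects at 4 visits
data = [[10, 11, 12, 13],
        [9, 10, 12, 12],
        [11, 12, 12, 14]]
print(rm_anova(data))  # F = 20.0 on (3, 6) df for these numbers
```

    Subtracting ss_subj from the error term is precisely what ordinary one-way ANOVA fails to do, which is why it is inappropriate for repeated measurements.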

  5. Evolution of Precipitation Particle Size Distributions within MC3E Systems and its Impact on Aerosol-Cloud-Precipitation Interactions: Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kollias, Pavlos

    2017-08-08

    This is a multi-institutional, collaborative project using observations and modeling to study the evolution (e.g. formation and growth) of hydrometeors in continental convective clouds. Our contribution was in data analysis for the generation of high-value cloud and precipitation products and the derivation of cloud statistics for model validation. We contributed in two areas of data analysis: i) the development of novel, state-of-the-art dual-wavelength radar algorithms for the retrieval of cloud microphysical properties and ii) the evaluation of large domain, high-resolution models using comprehensive multi-sensor observations. Our research group developed statistical summaries from numerous sensors and developed retrievals of vertical air motion in deep convection.

  6. RAD-ADAPT: Software for modelling clonogenic assay data in radiation biology.

    PubMed

    Zhang, Yaping; Hu, Kaiqiang; Beumer, Jan H; Bakkenist, Christopher J; D'Argenio, David Z

    2017-04-01

    We present a comprehensive software program, RAD-ADAPT, for the quantitative analysis of clonogenic assays in radiation biology. Two commonly used models for clonogenic assay analysis, the linear-quadratic model and the single-hit multi-target model, are included in the software. RAD-ADAPT uses the maximum likelihood estimation method to obtain parameter estimates under the assumption that cell colony count data follow a Poisson distribution. The program has an intuitive interface, generates model prediction plots, tabulates model parameter estimates, and allows automatic statistical comparison of parameters between different groups. The RAD-ADAPT interface is written using the statistical software R and the underlying computations are accomplished by the ADAPT software system for pharmacokinetic/pharmacodynamic systems analysis. The use of RAD-ADAPT is demonstrated using an example that examines the impact of pharmacologic ATM and ATR kinase inhibition on the human lung cancer cell line A549 after ionizing radiation. Copyright © 2017 Elsevier B.V. All rights reserved.
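    The linear-quadratic model fitted by RAD-ADAPT describes the surviving fraction as SF(D) = exp(-(αD + βD²)). As a minimal stand-in for the program's Poisson maximum-likelihood fit (which this sketch does not reproduce), the model can be linearised to -ln(SF)/D = α + βD and fitted by ordinary least squares; the doses and surviving fractions below are synthetic.

```python
import math

def fit_lq(doses, sf):
    """Least-squares fit of the linearised LQ model
    -ln(SF)/D = alpha + beta*D (requires all doses > 0)."""
    ys = [-math.log(s) / d for d, s in zip(doses, sf)]
    n = len(doses)
    mx, my = sum(doses) / n, sum(ys) / n
    beta = (sum(x * y for x, y in zip(doses, ys)) - n * mx * my) / (
        sum(x * x for x in doses) - n * mx * mx)
    alpha = my - beta * mx
    return alpha, beta

# synthetic survival data generated from alpha = 0.2 /Gy, beta = 0.05 /Gy^2
doses = [1.0, 2.0, 4.0, 6.0, 8.0]
sf = [math.exp(-(0.2 * d + 0.05 * d * d)) for d in doses]
alpha, beta = fit_lq(doses, sf)
```

    A Poisson-likelihood fit, as in RAD-ADAPT, additionally weights each colony count by its counting statistics, which matters for real, noisy assay data.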

  7. CHESS (CgHExpreSS): a comprehensive analysis tool for the analysis of genomic alterations and their effects on the expression profile of the genome.

    PubMed

    Lee, Mikyung; Kim, Yangseok

    2009-12-16

    Genomic alterations frequently occur in many cancer patients and play important mechanistic roles in the pathogenesis of cancer. Furthermore, they can modify the expression level of genes due to altered copy number in the corresponding region of the chromosome. An accumulating body of evidence supports the possibility that strong genome-wide correlation exists between DNA content and gene expression. Therefore, more comprehensive analysis is needed to quantify the relationship between genomic alteration and gene expression. A well-designed bioinformatics tool is essential to perform this kind of integrative analysis. A few programs have already been introduced for integrative analysis. However, there are many limitations in their performance of comprehensive integrated analysis using published software because of limitations in implemented algorithms and visualization modules. To address this issue, we have implemented the Java-based program CHESS to allow integrative analysis of two experimental data sets: genomic alteration and genome-wide expression profile. CHESS is composed of a genomic alteration analysis module and an integrative analysis module. The genomic alteration analysis module detects genomic alteration by applying a threshold based method or SW-ARRAY algorithm and investigates whether the detected alteration is phenotype specific or not. On the other hand, the integrative analysis module measures the genomic alteration's influence on gene expression. It is divided into two separate parts. The first part calculates overall correlation between comparative genomic hybridization ratio and gene expression level by applying following three statistical methods: simple linear regression, Spearman rank correlation and Pearson's correlation. 
In the second part, CHESS detects the genes that are differentially expressed according to the genomic alteration pattern with three alternative statistical approaches: Student's t-test, Fisher's exact test and Chi square test. By successive operations of two modules, users can clarify how gene expression levels are affected by the phenotype specific genomic alterations. As CHESS was developed in both Java application and web environments, it can be run on a web browser or a local machine. It also supports all experimental platforms if a properly formatted text file is provided to include the chromosomal position of probes and their gene identifiers. CHESS is a user-friendly tool for investigating disease specific genomic alterations and quantitative relationships between those genomic alterations and genome-wide gene expression profiling.
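    Two of the three correlation measures CHESS applies between CGH ratio and expression level, Pearson's correlation and Spearman rank correlation, can be sketched compactly (this simple rank routine ignores ties; the vectors are illustrative):

```python
def pearson(xs, ys):
    """Pearson's product-moment correlation of two equal-length vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def spearman(xs, ys):
    """Spearman rank correlation: Pearson's correlation of the ranks.
    No tie handling, for brevity."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank + 1)
        return r
    return pearson(ranks(xs), ranks(ys))
```

    Spearman's version is the natural choice when copy number affects expression monotonically but not linearly, which is why tools like CHESS offer both.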

  8. MetaComp: comprehensive analysis software for comparative meta-omics including comparative metagenomics.

    PubMed

    Zhai, Peng; Yang, Longshu; Guo, Xiao; Wang, Zhe; Guo, Jiangtao; Wang, Xiaoqi; Zhu, Huaiqiu

    2017-10-02

    During the past decade, the development of high-throughput nucleic acid sequencing and mass spectrometry analysis techniques has enabled the characterization of microbial communities through metagenomics, metatranscriptomics, metaproteomics and metabolomics data. To reveal the diversity of microbial communities and interactions between living conditions and microbes, it is necessary to introduce comparative analysis based upon integration of all four types of data mentioned above. Comparative meta-omics, especially comparative metagenomics, has been established as a routine process to highlight the significant differences in taxon composition and functional gene abundance among microbiota samples. Meanwhile, biologists are increasingly concerned about the correlations between meta-omics features and environmental factors, which may further decipher the adaptation strategy of a microbial community. We developed a graphical comprehensive analysis software package named MetaComp, comprising a series of statistical analysis approaches with visualized results for metagenomics and other meta-omics data comparison. The software can read files generated by a variety of upstream programs. After data loading, analyses such as multivariate statistics, hypothesis testing of two-sample, multi-sample and two-group designs, and a novel function-regression analysis of environmental factors are offered. Here, regression analysis regards meta-omic features as independent variables and environmental factors as dependent variables. Moreover, MetaComp can automatically choose an appropriate two-group sample test based upon the traits of the input abundance profiles. We further evaluate the performance of this choice, and exhibit applications for metagenomics, metaproteomics and metabolomics samples. 
MetaComp, an integrative software tool applicable to all meta-omics data, distills the influence of the living environment on a microbial community by regression analysis. Moreover, since the automatically chosen two-group sample test has been verified to perform well, MetaComp is friendly to users without adequate statistical training. These improvements aim to overcome the new challenges that all meta-omics data pose in the big-data era. MetaComp is available at: http://cqb.pku.edu.cn/ZhuLab/MetaComp/ and https://github.com/pzhaipku/MetaComp/ .

  9. A comprehensive comparison of RNA-Seq-based transcriptome analysis from reads to differential gene expression and cross-comparison with microarrays: a case study in Saccharomyces cerevisiae

    PubMed Central

    Nookaew, Intawat; Papini, Marta; Pornputtapong, Natapol; Scalcinati, Gionata; Fagerberg, Linn; Uhlén, Matthias; Nielsen, Jens

    2012-01-01

    RNA-seq has recently become an attractive method of choice in the study of transcriptomes, promising several advantages compared with microarrays. In this study, we sought to assess the contribution of the different analytical steps involved in the analysis of RNA-seq data generated with the Illumina platform, and to perform a cross-platform comparison based on the results obtained through Affymetrix microarrays. As a case study for our work, we used the Saccharomyces cerevisiae strain CEN.PK 113-7D, grown under two different conditions (batch and chemostat). Here, we assess the influence of genetic variation on the estimation of gene expression level using three different aligners for read-mapping (Gsnap, Stampy and TopHat) on the S288c genome, evaluate the capabilities of five different statistical methods to detect differential gene expression (baySeq, Cuffdiff, DESeq, edgeR and NOISeq), and explore the consistency between RNA-seq analysis using a reference genome and a de novo assembly approach. High reproducibility among biological replicates (correlation ≥0.99) and high consistency between the two platforms for analysis of gene expression levels (correlation ≥0.91) are reported. The results from differential gene expression identification derived from the different statistical methods, as well as their integrated analysis results based on gene ontology annotation, are in good agreement. Overall, our study provides a useful and comprehensive comparison between the two platforms (RNA-seq and microarrays) for gene expression analysis and addresses the contribution of the different steps involved in the analysis of RNA-seq data. PMID:22965124

  10. Study/experimental/research design: much more than statistics.

    PubMed

    Knight, Kenneth L

    2010-01-01

    The purpose of study, experimental, or research design in scientific manuscripts has changed significantly over the years. It has evolved from an explanation of the design of the experiment (ie, data gathering or acquisition) to an explanation of the statistical analysis. This practice makes "Methods" sections hard to read and understand. To clarify the difference between study design and statistical analysis, to show the advantages of a properly written study design on article comprehension, and to encourage authors to correctly describe study designs. The role of study design is explored from the introduction of the concept by Fisher through modern-day scientists and the AMA Manual of Style. At one time, when experiments were simpler, the study design and statistical design were identical or very similar. With the complex research that is common today, which often includes manipulating variables to create new variables and the multiple (and different) analyses of a single data set, data collection is very different than statistical design. Thus, both a study design and a statistical design are necessary. Scientific manuscripts will be much easier to read and comprehend. A proper experimental design serves as a road map to the study methods, helping readers to understand more clearly how the data were obtained and, therefore, assisting them in properly analyzing the results.

  11. Multilingualism and fMRI: Longitudinal Study of Second Language Acquisition

    PubMed Central

    Andrews, Edna; Frigau, Luca; Voyvodic-Casabo, Clara; Voyvodic, James; Wright, John

    2013-01-01

    BOLD fMRI is often used for the study of human language. However, there are still very few attempts to conduct longitudinal fMRI studies in the study of language acquisition by measuring auditory comprehension and reading. The following paper is the first in a series concerning a unique longitudinal study devoted to the analysis of bi- and multilingual subjects who are: (1) already proficient in at least two languages; or (2) are acquiring Russian as a second/third language. The focus of the current analysis is to present data from the auditory sections of a set of three scans acquired from April, 2011 through April, 2012 on a five-person subject pool who are learning Russian during the study. All subjects were scanned using the same protocol for auditory comprehension on the same General Electric LX 3T Signa scanner in Duke University Hospital. Using a multivariate analysis of covariance (MANCOVA) for statistical analysis, proficiency measurements are shown to correlate significantly with scan results in the Russian conditions over time. The importance of both the left and right hemispheres in language processing is discussed. Special attention is devoted to the importance of contextualizing imaging data with corresponding behavioral and empirical testing data using a multivariate analysis of variance. This is the only study to date that includes: (1) longitudinal fMRI data with subject-based proficiency and behavioral data acquired in the same time frame; and (2) statistical modeling that demonstrates the importance of covariate language proficiency data for understanding imaging results of language acquisition. PMID:24961428

  13. Dealing with missing standard deviation and mean values in meta-analysis of continuous outcomes: a systematic review.

    PubMed

    Weir, Christopher J; Butcher, Isabella; Assi, Valentina; Lewis, Stephanie C; Murray, Gordon D; Langhorne, Peter; Brady, Marian C

    2018-03-07

    Rigorous, informative meta-analyses rely on availability of appropriate summary statistics or individual participant data. For continuous outcomes, especially those with naturally skewed distributions, summary information on the mean or variability often goes unreported. While full reporting of original trial data is the ideal, we sought to identify methods for handling unreported mean or variability summary statistics in meta-analysis. We undertook two systematic literature reviews to identify methodological approaches used to deal with missing mean or variability summary statistics. Five electronic databases were searched, in addition to the Cochrane Colloquium abstract books and the Cochrane Statistics Methods Group mailing list archive. We also conducted cited reference searching and emailed topic experts to identify recent methodological developments. Details recorded included the description of the method, the information required to implement the method, any underlying assumptions and whether the method could be readily applied in standard statistical software. We provided a summary description of the methods identified, illustrating selected methods in example meta-analysis scenarios. For missing standard deviations (SDs), following screening of 503 articles, fifteen methods were identified in addition to those reported in a previous review. These included Bayesian hierarchical modelling at the meta-analysis level; summary statistic level imputation based on observed SD values from other trials in the meta-analysis; a practical approximation based on the range; and algebraic estimation of the SD based on other summary statistics. Following screening of 1124 articles for methods estimating the mean, one approximate Bayesian computation approach and three papers based on alternative summary statistics were identified. 
Illustrative meta-analyses showed that when replacing a missing SD the approximation using the range minimised loss of precision and generally performed better than omitting trials. When estimating missing means, a formula using the median, lower quartile and upper quartile performed best in preserving the precision of the meta-analysis findings, although in some scenarios, omitting trials gave superior results. Methods based on summary statistics (minimum, maximum, lower quartile, upper quartile, median) reported in the literature facilitate more comprehensive inclusion of randomised controlled trials with missing mean or variability summary statistics within meta-analyses.
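
    The two best-performing substitutions described in this record can be sketched in a few lines of Python. This is a minimal illustration only; the formulas actually evaluated in the review include variants that, for example, adjust the range-based estimate for sample size.

```python
def estimate_sd_from_range(minimum, maximum):
    """Practical approximation for a missing SD: roughly range / 4."""
    return (maximum - minimum) / 4.0

def estimate_mean_from_quartiles(q1, median, q3):
    """Estimate a missing mean as the average of Q1, the median, and Q3."""
    return (q1 + median + q3) / 3.0
```

    For instance, a trial reporting only a minimum of 0 and a maximum of 8 would be imputed an SD of 2.0 under this approximation.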

  14. Enhancing predictive accuracy and reproducibility in clinical evaluation research: Commentary on the special section of the Journal of Evaluation in Clinical Practice.

    PubMed

    Bryant, Fred B

    2016-12-01

    This paper introduces a special section of the current issue of the Journal of Evaluation in Clinical Practice that includes a set of 6 empirical articles showcasing a versatile, new machine-learning statistical method, known as optimal data (or discriminant) analysis (ODA), specifically designed to produce statistical models that maximize predictive accuracy. As this set of papers clearly illustrates, ODA offers numerous important advantages over traditional statistical methods, advantages that enhance the validity and reproducibility of statistical conclusions in empirical research. This issue of the journal also includes a review of a recently published book that provides a comprehensive introduction to the logic, theory, and application of ODA in empirical research. It is argued that researchers have much to gain by using ODA to analyze their data. © 2016 John Wiley & Sons, Ltd.

  15. How language production shapes language form and comprehension

    PubMed Central

    MacDonald, Maryellen C.

    2012-01-01

    Language production processes can provide insight into how language comprehension works and language typology—why languages tend to have certain characteristics more often than others. Drawing on work in memory retrieval, motor planning, and serial order in action planning, the Production-Distribution-Comprehension (PDC) account links work in the fields of language production, typology, and comprehension: (1) faced with substantial computational burdens of planning and producing utterances, language producers implicitly follow three biases in utterance planning that promote word order choices that reduce these burdens, thereby improving production fluency. (2) These choices, repeated over many utterances and individuals, shape the distributions of utterance forms in language. The claim that language form stems in large degree from producers' attempts to mitigate utterance planning difficulty is contrasted with alternative accounts in which form is driven by language use more broadly, language acquisition processes, or producers' attempts to create language forms that are easily understood by comprehenders. (3) Language perceivers implicitly learn the statistical regularities in their linguistic input, and they use this prior experience to guide comprehension of subsequent language. In particular, they learn to predict the sequential structure of linguistic signals, based on the statistics of previously-encountered input. Thus, key aspects of comprehension behavior are tied to lexico-syntactic statistics in the language, which in turn derive from utterance planning biases promoting production of comparatively easy utterance forms over more difficult ones. This approach contrasts with classic theories in which comprehension behaviors are attributed to innate design features of the language comprehension system and associated working memory. The PDC instead links basic features of comprehension to a different source: production processes that shape language form. 
PMID:23637689

  16. Salutogenic factors for mental health promotion in work settings and organizations.

    PubMed

    Graeser, Silke

    2011-12-01

    With companies and organizations increasingly aware of mental health conditions in work settings, the salutogenic perspective provides a promising approach to identifying supportive factors and resources of organizations that promote mental health. Based on the sense of coherence (SOC), usually treated as an individual personality-trait concept, an organization-based SOC scale was developed to identify potential salutogenic factors of a university as an organization and workplace. Based on the results of two samples of employees (n = 362, n = 204), factors associated with the organization-based SOC were evaluated. Statistical analysis yielded significant correlations between mental health and the setting-based SOC, as well as between mental health and the three SOC factors yielded by factor analysis: comprehensibility, manageability and meaningfulness. Significant results of bivariate and multivariate analyses emphasize the importance, for an organization-based SOC, of aspects such as participation and comprehensibility at the organizational level, social cohesion and social climate at the social level, and recognition at the individual level. Potential approaches for the further development of workplace health promotion interventions based on salutogenic factors and resources at the individual, social and organizational levels are elaborated, and the transcultural dimensions of these factors are discussed.

  17. The relationship among pressure ulcer risk factors, incidence and nursing documentation in hospital-acquired pressure ulcer patients in intensive care units.

    PubMed

    Li, Dan

    2016-08-01

    To explore the quality/comprehensiveness of nursing documentation of pressure ulcers and to investigate the relationship between nursing documentation and the incidence of pressure ulcers in four intensive care units. Pressure ulcer prevention requires consistent assessment and documentation to decrease pressure ulcer incidence. Currently, most research is focused on devices to prevent pressure ulcers. Studies have rarely considered the relationship among pressure ulcer risk factors, incidence and nursing documentation; thus, a study investigating this relationship is needed to fill the information gap. A retrospective, comparative, descriptive, correlational study. A convenience sample of 196 intensive care unit patients at the selected medical centre comprised the study sample. All medical records of patients admitted to intensive care units between September 1, 2011 and September 30, 2012 were audited. Data used in the analysis included 98 pressure ulcer patients and 98 non-pressure ulcer patients. The quality and comprehensiveness of pressure ulcer documentation were measured by the modified European Pressure Ulcer Advisory Panel Pressure Ulcers Assessment Instrument and the Comprehensiveness in Nursing Documentation instrument. The correlations between the quality/comprehensiveness of pressure ulcer documentation and the incidence of pressure ulcers were not statistically significant. Patients with pressure ulcers had longer lengths of stay than patients without pressure ulcers. There were no statistically significant differences in quality/comprehensiveness scores of pressure ulcer documentation between dayshift and nightshift. This study revealed a lack of quality/comprehensiveness in nursing documentation of pressure ulcers. This study demonstrates that staff nurses often perform poorly in documenting pressure ulcer appearance, staging and treatment. 
Moreover, nursing documentation of pressure ulcers does not provide a complete picture of patients' care needs that require nursing interventions. The implication of this study involves pressure ulcer prevention and litigable risk of nursing documentation. © 2016 John Wiley & Sons Ltd.

  18. Relevance of graph literacy in the development of patient-centered communication tools.

    PubMed

    Nayak, Jasmir G; Hartzler, Andrea L; Macleod, Liam C; Izard, Jason P; Dalkin, Bruce M; Gore, John L

    2016-03-01

    To determine the literacy skill sets of patients in the context of graphical interpretation of interactive dashboards, we assessed the literacy characteristics of prostate cancer patients and their comprehension of quality-of-life dashboards. Health literacy, numeracy and graph literacy were assessed with validated tools. We divided patients into low vs. high numeracy and graph literacy groups. We report descriptive statistics on literacy, dashboard comprehension, and relationships between groups. We used correlation and multiple linear regression to examine factors associated with dashboard comprehension. Despite high health literacy in educated patients (78% college educated), there was variation in numeracy and graph literacy. Numeracy and graph literacy scores were correlated (r=0.37). In those with low literacy, graph literacy scores most strongly correlated with dashboard comprehension (r=0.59-0.90). On multivariate analysis, graph literacy was independently associated with dashboard comprehension, adjusting for age, education, and numeracy level. Even among highly educated patients, variation in the ability to comprehend graphs exists. Clinicians must be aware of these differential proficiencies when counseling patients. Tools for patient-centered communication that employ visual displays need to account for literacy capabilities to ensure that patients can effectively engage these resources. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
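
    The multivariate step reported here (multiple linear regression of dashboard comprehension on age, education, numeracy, and graph literacy) can be sketched with ordinary least squares. The data below are simulated stand-ins, not the study's measurements; only the modeling step mirrors the record.

```python
import numpy as np

# Hypothetical predictors: age, education (years), numeracy, graph literacy.
rng = np.random.default_rng(2)
n = 120
age = rng.uniform(50, 80, n)
education = rng.uniform(12, 20, n)
numeracy = rng.normal(0, 1, n)
graph_lit = rng.normal(0, 1, n)
# Simulate comprehension driven mainly by graph literacy (assumed effect sizes).
comprehension = 0.9 * graph_lit + 0.1 * numeracy + rng.normal(0, 0.2, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), age, education, numeracy, graph_lit])
coef, *_ = np.linalg.lstsq(X, comprehension, rcond=None)
```

    In this simulation the fitted coefficient for graph literacy (`coef[4]`) dominates the others, mirroring the reported finding of an independent association.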

  19. Load Model Verification, Validation and Calibration Framework by Statistical Analysis on Field Data

    NASA Astrophysics Data System (ADS)

    Jiao, Xiangqing; Liao, Yuan; Nguyen, Thai

    2017-11-01

    Accurate load models are critical for power system analysis and operation. A large amount of research work has been done on load modeling. Most of the existing research focuses on developing load models, while little has been done on developing formal load model verification and validation (V&V) methodologies or procedures. Most existing load model validation is based on qualitative rather than quantitative analysis. In addition, not all aspects of the model V&V problem have been addressed by the existing approaches. To complement the existing methods, this paper proposes a novel load model verification and validation framework that can systematically and more comprehensively examine a load model's effectiveness and accuracy. Statistical analysis, instead of visual checks, quantifies the load model's accuracy and provides model users with a confidence level for the developed load model. The analysis results can also be used to calibrate load models. The proposed framework can serve as guidance for utility engineers and researchers in systematically examining load models. The proposed method is demonstrated through analysis of field measurements collected from a utility system.
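
    A quantitative check of this kind can be as simple as summarising model error against field data. The sketch below is a hypothetical illustration, not the paper's actual procedure: it computes the RMSE between measured and simulated values and a normal-approximation confidence interval for the mean model error.

```python
import math
import statistics

def rmse(measured, simulated):
    """Root-mean-square error between field measurements and model output."""
    return math.sqrt(sum((m - s) ** 2 for m, s in zip(measured, simulated)) / len(measured))

def error_confidence_interval(measured, simulated, z=1.96):
    """Approximate 95% CI for the mean model error (normal approximation)."""
    errors = [m - s for m, s in zip(measured, simulated)]
    mean_err = statistics.fmean(errors)
    sem = statistics.stdev(errors) / math.sqrt(len(errors))
    return (mean_err - z * sem, mean_err + z * sem)
```

    If the interval excludes zero, the model has a systematic bias worth calibrating; if it is wide, more field data are needed before drawing conclusions.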

  20. Characterizing chaotic melodies in automatic music composition

    NASA Astrophysics Data System (ADS)

    Coca, Andrés E.; Tost, Gerard O.; Zhao, Liang

    2010-09-01

    In this paper, we initially present an algorithm for automatic composition of melodies using chaotic dynamical systems. Afterward, we characterize chaotic music in a comprehensive way as comprising three perspectives: musical discrimination, dynamical influence on musical features, and musical perception. With respect to the first perspective, the coherence between generated chaotic melodies (continuous as well as discrete chaotic melodies) and a set of classical reference melodies is characterized by statistical descriptors and melodic measures. The significant differences among the three types of melodies are determined by discriminant analysis. Regarding the second perspective, the influence of dynamical features of chaotic attractors, e.g., Lyapunov exponent, Hurst coefficient, and correlation dimension, on melodic features is determined by canonical correlation analysis. The last perspective is related to perception of originality, complexity, and degree of melodiousness (Euler's gradus suavitatis) of chaotic and classical melodies by nonparametric statistical tests.

  1. ProUCL version 4.1.00 Documentation Downloads

    EPA Pesticide Factsheets

    ProUCL version 4.1.00 is a comprehensive statistical software package equipped with the statistical methods and graphical tools needed to address many environmental sampling and statistical issues, as described in various guidance documents.

  2. Oral cancer screening: knowledge is not enough.

    PubMed

    Tax, C L; Haslam, S Kim; Brillant, Mgs; Doucette, H J; Cameron, J E; Wade, S E

    2017-08-01

    The purpose of this cross-sectional study was to investigate whether dental hygienists are transferring their knowledge of oral cancer screening into practice. This study also sought insight into the barriers that might prevent dental hygienists from performing these screenings. A 27-item survey instrument was constructed to study the oral cancer screening practices of licensed dental hygienists in Nova Scotia. A total of 623 practicing dental hygienists received the survey. The response rate was 34% (n = 212), yielding a maximum margin of error of 5.47% at a 95% confidence level. Descriptive statistics were calculated using IBM SPSS Statistics v21 software (Armonk, NY: IBM Corp). Qualitative thematic analysis was performed on open-ended responses. This study revealed that while dental hygienists perceived themselves as being knowledgeable about oral cancer screening, they were not transferring this knowledge to actual practice. Only a small percentage (13%) of respondents were performing a comprehensive extra-oral examination, and 7% were performing a comprehensive intra-oral examination. The respondents identified several barriers that prevented them from completing a comprehensive oral cancer screening. Early detection of oral cancer reduces mortality rates, so there is a professional responsibility to ensure that comprehensive oral cancer screenings are being performed on patients. Dental hygienists may not have the authority in a dental practice to overcome all of the barriers that are preventing them from performing these screenings. Public awareness about oral cancer screenings could increase the demand for screenings and thereby play a role in changing practice norms. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  3. A virtual climate library of surface temperature over North America for 1979-2015

    NASA Astrophysics Data System (ADS)

    Kravtsov, Sergey; Roebber, Paul; Brazauskas, Vytaras

    2017-10-01

    The most comprehensive continuous-coverage modern climatic data sets, known as reanalyses, come from combining state-of-the-art numerical weather prediction (NWP) models with diverse available observations. These reanalysis products estimate the path of climate evolution that actually happened, and their use in a probabilistic context—for example, to document trends in extreme events in response to climate change—is, therefore, limited. Free runs of NWP models without data assimilation can in principle be used for the latter purpose, but such simulations are computationally expensive and are prone to systematic biases. Here we produce a high-resolution, 100-member ensemble simulation of surface atmospheric temperature over North America for the 1979-2015 period using a comprehensive spatially extended non-stationary statistical model derived from the data based on the North American Regional Reanalysis. The surrogate climate realizations generated by this model are independent from, yet nearly statistically congruent with reality. This data set provides unique opportunities for the analysis of weather-related risk, with applications in agriculture, energy development, and protection of human life.
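
    A statistical emulator of this kind can be illustrated, in grossly simplified form, by generating an ensemble of autoregressive surrogate series. The AR(1) model and the parameter values below are illustrative assumptions only, not the paper's spatially extended non-stationary model.

```python
import random

def ar1_surrogate(n, mean, phi, sigma, seed=None):
    """One AR(1) surrogate series: x[t] = mean + phi*(x[t-1] - mean) + noise."""
    rng = random.Random(seed)
    x = [mean]
    for _ in range(n - 1):
        x.append(mean + phi * (x[-1] - mean) + rng.gauss(0, sigma))
    return x

# A 100-member ensemble of year-long surrogate temperature series
# (hypothetical persistence and noise parameters).
ensemble = [ar1_surrogate(365, mean=10.0, phi=0.8, sigma=1.0, seed=m)
            for m in range(100)]
```

    Each member shares the same statistics but follows its own trajectory, which is what makes such ensembles useful for probabilistic risk analysis.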

  6. Statistical Abstract of the United States: 2012. 131st Edition

    ERIC Educational Resources Information Center

    US Census Bureau, 2011

    2011-01-01

    "The Statistical Abstract of the United States," published from 1878 to 2012, is the authoritative and comprehensive summary of statistics on the social, political, and economic organization of the United States. It is designed to serve as a convenient volume for statistical reference, and as a guide to other statistical publications and…

  7. Metabolomic fingerprinting employing DART-TOFMS for authentication of tomatoes and peppers from organic and conventional farming.

    PubMed

    Novotná, H; Kmiecik, O; Gałązka, M; Krtková, V; Hurajová, A; Schulzová, V; Hallmann, E; Rembiałkowska, E; Hajšlová, J

    2012-01-01

    The rapidly growing demand for organic food requires the availability of analytical tools enabling its authentication. Recently, metabolomic fingerprinting/profiling has been demonstrated to be a promising option for comprehensive characterisation of the small molecules occurring in plants, since their pattern may reflect the impact of various external factors. In a two-year pilot study concerned with the classification of organic versus conventional crops, ambient mass spectrometry consisting of a direct analysis in real time (DART) ion source and a time-of-flight mass spectrometer (TOFMS) was employed. This novel methodology was tested on 40 tomato and 24 pepper samples grown under specified conditions. To build statistical models, the obtained data (mass spectra) were processed by principal component analysis (PCA) followed by linear discriminant analysis (LDA). The results from the positive ionisation mode enabled better differentiation between organic and conventional samples than the results from the negative mode. In this case, the recognition ability obtained by LDA was 97.5% for tomato and 100% for pepper samples, and the prediction abilities were above 80% for both sample sets. The results suggest that the year of production had a stronger influence on the metabolomic fingerprints than the type of farming (organic versus conventional). In any case, DART-TOFMS is a promising tool for rapid screening of samples. Establishing comprehensive (multi-sample) long-term databases may further help to improve the quality of statistical classification models.
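
    The PCA-then-LDA workflow described here can be sketched with scikit-learn on simulated fingerprints. The data, group means, and component counts below are invented for illustration; only the pipeline structure mirrors the record.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Toy stand-in for DART-TOFMS fingerprints: rows are samples, columns m/z bins.
rng = np.random.default_rng(0)
organic = rng.normal(loc=1.0, scale=0.1, size=(20, 50))
conventional = rng.normal(loc=1.2, scale=0.1, size=(20, 50))
X = np.vstack([organic, conventional])
y = np.array(["organic"] * 20 + ["conventional"] * 20)

# Dimensionality reduction with PCA, then class separation with LDA.
model = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
model.fit(X, y)
recognition = model.score(X, y)  # fraction of training samples classified correctly
```

    In practice the prediction ability would be estimated on held-out samples (as the authors did), since training-set recognition alone is optimistic.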

  8. Reporting Practices and Use of Quantitative Methods in Canadian Journal Articles in Psychology.

    PubMed

    Counsell, Alyssa; Harlow, Lisa L

    2017-05-01

    With recent focus on the state of research in psychology, it is essential to assess the nature of the statistical methods and analyses used and reported by psychological researchers. To that end, we investigated the prevalence of different statistical procedures and the nature of statistical reporting practices in recent articles from the four major Canadian psychology journals. The majority of authors evaluated their research hypotheses through the use of analysis of variance (ANOVA), t-tests, and multiple regression. Multivariate approaches were less common. Null hypothesis significance testing remains a popular strategy, but the majority of authors reported a standardized or unstandardized effect size measure alongside their significance test results. Confidence intervals on effect sizes were infrequently employed. Many authors provided minimal details about their statistical analyses, and fewer than a third of the articles reported data complications such as missing data and violations of statistical assumptions. Strengths of, and areas needing improvement in, the reporting of quantitative results are highlighted. The paper concludes with recommendations for how researchers and reviewers can improve comprehension and transparency in statistical reporting.
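
    A standardized effect size with a confidence interval, the reporting practice encouraged here, can be computed as follows. This is a generic sketch: the pooled-SD form of Cohen's d and a large-sample normal approximation for its CI, not a method specific to the reviewed articles.

```python
import math
import statistics

def cohens_d(group1, group2):
    """Pooled-SD standardized mean difference (Cohen's d)."""
    n1, n2 = len(group1), len(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (statistics.fmean(group1) - statistics.fmean(group2)) / pooled_sd

def cohens_d_ci(group1, group2, z=1.96):
    """Approximate 95% CI for d (large-sample normal approximation)."""
    n1, n2 = len(group1), len(group2)
    d = cohens_d(group1, group2)
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return (d - z * se, d + z * se)
```

    Reporting the interval alongside d, rather than d alone, is exactly the practice the authors found to be infrequent.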

  9. Statistical analysis of modal properties of a cable-stayed bridge through long-term structural health monitoring with wireless smart sensor networks

    NASA Astrophysics Data System (ADS)

    Asadollahi, Parisa; Li, Jian

    2016-04-01

    Understanding the dynamic behavior of complex structures such as long-span bridges requires dense deployment of sensors. Traditional wired sensor systems are generally expensive and time-consuming to install due to cabling. With wireless communication and on-board computation capabilities, wireless smart sensor networks have the advantages of being low cost and easy to deploy and maintain, and they therefore facilitate dense instrumentation for structural health monitoring. A long-term monitoring project was recently carried out for a cable-stayed bridge in South Korea with a dense array of 113 smart sensors, featuring the world's largest wireless smart sensor network for civil structural monitoring. This paper presents a comprehensive statistical analysis of the modal properties, including natural frequencies, damping ratios and mode shapes, of the monitored cable-stayed bridge. The data analyzed in this paper comprise structural vibration signals monitored during a 12-month period under ambient excitation. The correlation between environmental temperature and the modal frequencies is also investigated. The results show the long-term statistical structural behavior of the bridge, which serves as the basis for Bayesian statistical updating of the numerical model.
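
    The temperature-frequency correlation analysis mentioned here reduces, in its simplest form, to a Pearson correlation between two monitored series. The monthly values below are hypothetical placeholders, not the bridge's data.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: a modal frequency that drops as temperature rises.
temperature = [5, 10, 15, 20, 25, 30]          # degrees C
frequency = [0.44, 0.43, 0.43, 0.42, 0.41, 0.40]  # Hz
r = pearson_r(temperature, frequency)
```

    A strongly negative r of this kind is one way environmental effects show up in long-term modal statistics.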

  10. Speech comprehension and emotional/behavioral problems in children with specific language impairment (SLI).

    PubMed

    Gregl, Ana; Kirigin, Marin; Bilać, Snježana; Sućeska Ligutić, Radojka; Jaksić, Nenad; Jakovljević, Miro

    2014-09-01

    This research aims to investigate differences in speech comprehension between children with specific language impairment (SLI) and their developmentally normal peers, and the relationship between speech comprehension and emotional/behavioral problems on Achenbach's Child Behavior Checklist (CBCL) and Caregiver-Teacher Report Form (C-TRF) according to the DSM-IV. The clinical sample comprised 97 preschool children with SLI, while the peer sample comprised 60 developmentally normal preschool children. Children with SLI had significant delays in speech comprehension and more emotional/behavioral problems than their peers. In children with SLI, speech comprehension significantly correlated with scores on the Attention Deficit/Hyperactivity Problems (CBCL and C-TRF) and Pervasive Developmental Problems (CBCL) scales (p < 0.05). In the peer sample, speech comprehension significantly correlated with scores on the Affective Problems and Attention Deficit/Hyperactivity Problems (C-TRF) scales. Regression analysis showed that 12.8% of the variance in speech comprehension is accounted for by 5 CBCL variables, of which Attention Deficit/Hyperactivity (beta = -0.281) and Pervasive Developmental Problems (beta = -0.280) are statistically significant (p < 0.05). In the reduced regression model, Attention Deficit/Hyperactivity explains 7.3% of the variance in speech comprehension (beta = -0.270, p < 0.01). It is possible that, to a certain degree, the same neurodevelopmental process underlies problems with speech comprehension, problems with attention and hyperactivity, and pervasive developmental problems. This study confirms the importance of triage for behavioral problems and attention training in the rehabilitation of children with SLI and of children with normal language development who exhibit ADHD symptoms.

  11. Implementation of a novel communication tool and its effect on patient comprehension of care and satisfaction.

    PubMed

    Simmons, Stefanie Anne; Sharp, Brian; Fowler, Jennifer; Singal, Bonita

    2013-05-01

    Emergency department (ED) communication has been demonstrated as requiring improvement and ED patients have repeatedly demonstrated poor comprehension of the care they receive. Through patient focus groups, the authors developed a novel tool designed to improve communication and patient comprehension. This is a prospective, randomised controlled clinical trial to test the efficacy of a novel, patient-centred communication tool. Patients in a small community hospital ED were randomised to receive the instrument, which was utilised by the entire ED care team and served as a checklist or guide to the patients' ED stay. At the end of the ED stay, patients completed a survey of their comprehension of the care and a communication assessment tool-team survey (a validated instrument to assess satisfaction with communication). Three blinded chart reviewers scored patients' comprehension of their ED care as concordant, partially concordant or discordant with charted care. The authors tested whether there was a difference in satisfaction using a two-sample t test and a difference in comprehension using ordinal logistic regression analysis. 146 patients were enrolled in the study with 72 randomised to receive the communication instrument. There was no significant difference between groups in comprehension (OR=0.65, 95% CI 0.34 to 1.23, p=0.18) or communication assessment tool-team scores (difference=0.2, 95% CI: -3.4 to 3.8, p=0.91). Using their novel communication tool, the authors were not able to show a statistically significant improvement in either comprehension or satisfaction, though a tendency towards improved comprehension was seen.
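
    The satisfaction comparison in this trial used a two-sample t test. A minimal sketch of the test statistic is below; the Welch (unequal-variance) form is an assumption here, since the record does not specify pooled vs. unpooled variances.

```python
import math
import statistics

def welch_t(group1, group2):
    """Welch's two-sample t statistic (unequal variances assumed)."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.fmean(group1), statistics.fmean(group2)
    se = math.sqrt(statistics.variance(group1) / n1 +
                   statistics.variance(group2) / n2)
    return (m1 - m2) / se
```

    The statistic would then be compared against a t distribution (with Welch-Satterthwaite degrees of freedom) to obtain the p value.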

  12. Study/Experimental/Research Design: Much More Than Statistics

    PubMed Central

    Knight, Kenneth L.

    2010-01-01

    Context: The purpose of study, experimental, or research design in scientific manuscripts has changed significantly over the years. It has evolved from an explanation of the design of the experiment (ie, data gathering or acquisition) to an explanation of the statistical analysis. This practice makes “Methods” sections hard to read and understand. Objective: To clarify the difference between study design and statistical analysis, to show the advantages of a properly written study design on article comprehension, and to encourage authors to correctly describe study designs. Description: The role of study design is explored from the introduction of the concept by Fisher through modern-day scientists and the AMA Manual of Style. At one time, when experiments were simpler, the study design and statistical design were identical or very similar. With the complex research that is common today, which often includes manipulating variables to create new variables and the multiple (and different) analyses of a single data set, data collection is very different from statistical design. Thus, both a study design and a statistical design are necessary. Advantages: Scientific manuscripts will be much easier to read and comprehend. A proper experimental design serves as a road map to the study methods, helping readers to understand more clearly how the data were obtained and, therefore, assisting them in properly analyzing the results. PMID:20064054

  13. Time-resolved metabolomics reveals metabolic modulation in rice foliage

    PubMed Central

    Sato, Shigeru; Arita, Masanori; Soga, Tomoyoshi; Nishioka, Takaaki; Tomita, Masaru

    2008-01-01

    Background To elucidate the interaction of dynamics among modules that constitute biological systems, comprehensive datasets obtained from "omics" technologies have been used. In recent plant metabolomics approaches, the reconstruction of metabolic correlation networks has been attempted using statistical techniques. However, the results were unsatisfactory and effective data-mining techniques that apply appropriate comprehensive datasets are needed. Results Using capillary electrophoresis mass spectrometry (CE-MS) and capillary electrophoresis diode-array detection (CE-DAD), we analyzed the dynamic changes in the level of 56 basic metabolites in plant foliage (Oryza sativa L. ssp. japonica) at hourly intervals over a 24-hr period. Unsupervised clustering of comprehensive metabolic profiles using Kohonen's self-organizing map (SOM) allowed classification of the biochemical pathways activated by the light and dark cycle. The carbon and nitrogen (C/N) metabolism in both periods was also visualized as a phenotypic linkage map that connects network modules on the basis of traditional metabolic pathways rather than pairwise correlations among metabolites. The regulatory networks of C/N assimilation/dissimilation at each time point were consistent with previous works on plant metabolism. In response to environmental stress, glutathione and spermidine fluctuated synchronously with their regulatory targets. Adenine nucleosides and nicotinamide coenzymes were regulated by phosphorylation and dephosphorylation. We also demonstrated that SOM analysis was applicable to the estimation of unidentifiable metabolites in metabolome analysis. Hierarchical clustering of a correlation coefficient matrix could help identify the bottleneck enzymes that regulate metabolic networks. 
Conclusion Our results showed that our SOM analysis with appropriate metabolic time-courses effectively revealed the synchronous dynamics among metabolic modules and elucidated the underlying biochemical functions. The application of discrimination of unidentified metabolites and the identification of bottleneck enzymatic steps even to non-targeted comprehensive analysis promise to facilitate an understanding of large-scale interactions among components in biological systems. PMID:18564421
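
    The unsupervised clustering step named above can be sketched in miniature. The following is a hypothetical, minimal 1-D Kohonen self-organizing map in Python, not the authors' implementation; the unit count, decay schedules, and toy two-cluster data are illustrative assumptions.

    ```python
    import math
    import random

    def train_som(data, n_units=4, epochs=100, seed=1):
        """Minimal 1-D Kohonen SOM: each sample pulls its best-matching unit
        (and, more weakly, that unit's grid neighbours) toward itself, with
        the learning rate and neighbourhood radius decaying over epochs."""
        rng = random.Random(seed)
        dim = len(data[0])
        weights = [[rng.uniform(0.0, 1.0) for _ in range(dim)] for _ in range(n_units)]
        for t in range(epochs):
            lr = 0.5 * (1.0 - t / epochs)                  # learning rate decays toward 0
            radius = max(1.0 * (1.0 - t / epochs), 0.01)   # neighbourhood shrinks
            for x in data:
                b = best_matching_unit(weights, x)
                for u in range(n_units):
                    h = math.exp(-((u - b) ** 2) / (2.0 * radius ** 2))
                    for k in range(dim):
                        weights[u][k] += lr * h * (x[k] - weights[u][k])
        return weights

    def best_matching_unit(weights, x):
        """Index of the unit with the smallest squared distance to x."""
        return min(range(len(weights)),
                   key=lambda u: sum((weights[u][k] - x[k]) ** 2 for k in range(len(x))))
    ```

    After training on, say, hourly metabolite profiles, mapping each profile to its best-matching unit groups profiles with similar temporal behaviour, which is the basis for the pathway classification described above.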

  14. Improving information retrieval in functional analysis.

    PubMed

    Rodriguez, Juan C; González, Germán A; Fresno, Cristóbal; Llera, Andrea S; Fernández, Elmer A

    2016-12-01

    Transcriptome analysis is essential to understand the mechanisms regulating key biological processes and functions. The first step usually consists of identifying candidate genes; to find out which pathways are affected by those genes, however, functional analysis (FA) is mandatory. The most frequently used strategies for this purpose are Gene Set and Singular Enrichment Analysis (GSEA and SEA) over Gene Ontology. Several statistical methods have been developed and compared in terms of computational efficiency and/or statistical appropriateness. However, whether their results are similar or complementary, the sensitivity to parameter settings, or possible bias in the analyzed terms has not been addressed so far. Here, two GSEA and four SEA methods and their parameter combinations were evaluated in six datasets by comparing two breast cancer subtypes with well-known differences in genetic background and patient outcomes. We show that GSEA and SEA lead to different results depending on the chosen statistic, model and/or parameters. Both approaches provide complementary results from a biological perspective. Hence, an Integrative Functional Analysis (IFA) tool is proposed to improve information retrieval in FA. It provides a common gene expression analytic framework that grants a comprehensive and coherent analysis. Only a minimal user parameter setting is required, since the best SEA/GSEA alternatives are integrated. IFA utility was demonstrated by evaluating four prostate cancer and the TCGA breast cancer microarray datasets, which showed its biological generalization capabilities. Copyright © 2016 Elsevier Ltd. All rights reserved.
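
    As a concrete reference point, SEA methods typically reduce each Gene Ontology term to a hypergeometric (one-sided Fisher) over-representation test. The sketch below illustrates that test in isolation; it is not any of the specific SEA/GSEA tools evaluated in the paper, and the toy counts are invented.

    ```python
    from math import comb

    def sea_pvalue(genome_size, term_size, selected, hits):
        """One-sided hypergeometric over-representation p-value: P(X >= hits)
        when `selected` genes are drawn from a genome of `genome_size` genes,
        `term_size` of which carry the annotation."""
        upper = min(term_size, selected)
        total = comb(genome_size, selected)
        return sum(
            comb(term_size, k) * comb(genome_size - term_size, selected - k)
            for k in range(hits, upper + 1)
        ) / total

    # 3 of 5 candidate genes hit a 5-gene term in a 20-gene toy genome:
    print(round(sea_pvalue(20, 5, 5, 3), 4))  # 0.0726
    ```

    A full SEA tool repeats this test for every term and then corrects for multiple testing, which is one of the parameter choices the paper shows can change the results.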

  15. On Teaching about the Coefficient of Variation in Introductory Statistics Courses

    ERIC Educational Resources Information Center

    Trafimow, David

    2014-01-01

    The standard deviation is related to the mean by virtue of the coefficient of variation. Teachers of statistics courses can make use of that fact to make the standard deviation more comprehensible for statistics students.
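
    The relationship is simply CV = s / x̄, so two samples with identical standard deviations can have very different relative spread. A minimal illustration with toy numbers:

    ```python
    import statistics

    def coefficient_of_variation(sample):
        """Ratio of the sample standard deviation to the sample mean."""
        return statistics.stdev(sample) / statistics.mean(sample)

    # Same spread (s = 2) around different means:
    a = [98, 100, 102]   # mean 100 -> CV = 0.02
    b = [8, 10, 12]      # mean 10  -> CV = 0.2
    print(coefficient_of_variation(a))  # 0.02
    print(coefficient_of_variation(b))  # 0.2
    ```

    Seeing the standard deviation expressed as a fraction of the mean is often what makes its magnitude interpretable for students.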

  16. Nevada's Children: Selected Educational and Social Statistics. Nevada and National.

    ERIC Educational Resources Information Center

    Horner, Mary P., Comp.

    This statistical report describes the successes and shortcomings of education in Nevada and compares some statistics concerning education in Nevada to national norms. The report, which provides a comprehensive array of information helpful to policy makers and citizens, is divided into three sections. The first section presents statistics about…

  17. Statistics Report on TEQSA Registered Higher Education Providers

    ERIC Educational Resources Information Center

    Australian Government Tertiary Education Quality and Standards Agency, 2015

    2015-01-01

    This statistics report provides a comprehensive snapshot of national statistics on all parts of the sector for the year 2013, by bringing together data collected directly by TEQSA with data sourced from the main higher education statistics collections managed by the Australian Government Department of Education and Training. The report provides…

  18. Frontiers of Two-Dimensional Correlation Spectroscopy. Part 1. New concepts and noteworthy developments

    NASA Astrophysics Data System (ADS)

    Noda, Isao

    2014-07-01

    A comprehensive survey review of new and noteworthy developments, which are advancing the frontiers in the field of 2D correlation spectroscopy during the last four years, is compiled. This review covers books, proceedings, and review articles published on 2D correlation spectroscopy, a number of significant conceptual developments in the field, data pretreatment methods and other pertinent topics, as well as patent and publication trends and citation activities. Developments discussed include projection 2D correlation analysis, concatenated 2D correlation, and correlation under multiple perturbation effects, as well as orthogonal sample design, predicting 2D correlation spectra, manipulating and comparing 2D spectra, correlation strategies based on segmented data blocks, such as moving-window analysis, features like determination of sequential order and enhanced spectral resolution, statistical 2D spectroscopy using covariance and other statistical metrics, hetero-correlation analysis, and sample-sample correlation techniques. Data pretreatment operations prior to 2D correlation analysis are discussed, including the correction for physical effects, background and baseline subtraction, selection of the reference spectrum, normalization and scaling of data, derivative spectra and deconvolution techniques, and smoothing and noise reduction. Other pertinent topics include chemometrics and statistical considerations, peak position shift phenomena, variable sampling increments, computation and software, and display schemes, such as color-coded formats, slice and power spectra, and tabulation.

  19. A user-friendly workflow for analysis of Illumina gene expression bead array data available at the arrayanalysis.org portal.

    PubMed

    Eijssen, Lars M T; Goelela, Varshna S; Kelder, Thomas; Adriaens, Michiel E; Evelo, Chris T; Radonjic, Marijana

    2015-06-30

    Illumina whole-genome expression bead arrays are a widely used platform for transcriptomics. Most of the tools available for the analysis of the resulting data are not easily applicable by less experienced users. ArrayAnalysis.org provides researchers with an easy-to-use and comprehensive interface to the functionality of R and Bioconductor packages for microarray data analysis. As a modular open source project, it allows developers to contribute modules that provide support for additional types of data or extend workflows. To enable data analysis of Illumina bead arrays for a broad user community, we have developed a module for ArrayAnalysis.org that provides a free and user-friendly web interface for quality control and pre-processing for these arrays. This module can be used together with existing modules for statistical and pathway analysis to provide a full workflow for Illumina gene expression data analysis. The module accepts data exported from Illumina's GenomeStudio, and provides the user with quality control plots and normalized data. The outputs are directly linked to the existing statistics module of ArrayAnalysis.org, but can also be downloaded for further downstream analysis in third-party tools. The Illumina bead arrays analysis module is available at http://www.arrayanalysis.org . A user guide, a tutorial demonstrating the analysis of an example dataset, and R scripts are available. The module can be used as a starting point for statistical evaluation and pathway analysis provided on the website or to generate processed input data for a broad range of applications in life sciences research.
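
    Pre-processing of bead-array data generally includes a normalization step; quantile normalization is one common choice for expression arrays. The sketch below is a generic illustration of that technique with made-up values, not the module's actual default pipeline.

    ```python
    def quantile_normalize(columns):
        """Quantile normalization: every column's value at rank r is replaced
        by the mean, across all columns, of the rank-r values."""
        n = len(columns[0])
        sorted_cols = [sorted(col) for col in columns]
        rank_means = [sum(sc[r] for sc in sorted_cols) / len(columns) for r in range(n)]
        normalized = []
        for col in columns:
            order = sorted(range(n), key=lambda i: col[i])  # indices by ascending value
            out = [0.0] * n
            for rank, i in enumerate(order):
                out[i] = rank_means[rank]
            normalized.append(out)
        return normalized

    # Two toy arrays (columns) over three probes:
    print(quantile_normalize([[5, 2, 3], [4, 1, 4]]))  # [[4.5, 1.5, 3.5], [3.5, 1.5, 4.5]]
    ```

    After this step every array shares the same value distribution, which is what makes downstream between-array statistics comparable.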

  20. Time-variant random interval natural frequency analysis of structures

    NASA Astrophysics Data System (ADS)

    Wu, Binhua; Wu, Di; Gao, Wei; Song, Chongmin

    2018-02-01

    This paper presents a new robust method, namely the unified interval Chebyshev-based random perturbation method, to tackle the hybrid random-interval structural natural frequency problem. In the proposed approach, the random perturbation method is implemented to furnish the statistical features (i.e., mean and standard deviation), and a Chebyshev surrogate model strategy is incorporated to formulate the statistical information of the natural frequency with regard to the interval inputs. The comprehensive analysis framework combines the strengths of both methods in a way that dramatically reduces computational cost. The presented method is thus capable of investigating the day-to-day time-variant natural frequency of structures accurately and efficiently under concrete intrinsic creep effects with probabilistic and interval uncertain variables. The extreme bounds of the mean and standard deviation of the natural frequency are captured through the optimization strategy embedded within the analysis procedure. Three numerical examples, progressively varied in both structure type and uncertainty variables, are presented to demonstrate the applicability, accuracy, and efficiency of the proposed method.

  1. Statistical sensitivity analysis of a simple nuclear waste repository model

    NASA Astrophysics Data System (ADS)

    Ronen, Y.; Lucius, J. L.; Blow, E. M.

    1980-06-01

    This work is a preliminary step in a comprehensive sensitivity analysis of the modeling of a nuclear waste repository. The purpose of the complete analysis is to determine which modeling parameters and physical data are most important in determining key design performance criteria, and then to obtain the uncertainty in the design for safety considerations. The theory for a statistical screening design methodology is developed for later use in the overall program. The theory was applied to the test case of determining the relative importance of the sensitivity of the near-field temperature distribution in a single-level salt repository to modeling parameters. The exact values of the sensitivities to these physical and modeling parameters were then obtained using direct methods of recalculation. The sensitivity coefficients found to be important for the sample problem were the thermal loading, the distance between the spent fuel canisters, and the canister radius. Other important parameters were those related to salt properties at a point of interest in the repository.

  2. IGESS: a statistical approach to integrating individual-level genotype data and summary statistics in genome-wide association studies.

    PubMed

    Dai, Mingwei; Ming, Jingsi; Cai, Mingxuan; Liu, Jin; Yang, Can; Wan, Xiang; Xu, Zongben

    2017-09-15

    Results from genome-wide association studies (GWAS) suggest that a complex phenotype is often affected by many variants with small effects, known as 'polygenicity'. Tens of thousands of samples are often required to ensure statistical power of identifying these variants with small effects. However, it is often the case that a research group can only get approval for the access to individual-level genotype data with a limited sample size (e.g. a few hundreds or thousands). Meanwhile, summary statistics generated using single-variant-based analysis are becoming publicly available. The sample sizes associated with the summary statistics datasets are usually quite large. How to make the most efficient use of existing abundant data resources largely remains an open question. In this study, we propose a statistical approach, IGESS, to increasing statistical power of identifying risk variants and improving accuracy of risk prediction by integrating individual-level genotype data and summary statistics. An efficient algorithm based on variational inference is developed to handle the genome-wide analysis. Through comprehensive simulation studies, we demonstrated the advantages of IGESS over the methods which take either individual-level data or summary statistics data as input. We applied IGESS to perform integrative analysis of Crohn's disease from WTCCC and summary statistics from other studies. IGESS was able to significantly increase the statistical power of identifying risk variants and improve the risk prediction accuracy from 63.2% (±0.4%) to 69.4% (±0.1%) using about 240 000 variants. The IGESS software is available at https://github.com/daviddaigithub/IGESS . zbxu@xjtu.edu.cn or xwan@comp.hkbu.edu.hk or eeyang@hkbu.edu.hk. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
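
    IGESS itself relies on a variational Bayes algorithm, but the underlying intuition — pooling a small individual-level cohort with a much larger summary-statistics cohort boosts power — is already visible in the classical sample-size-weighted Stouffer combination of per-variant z-scores. The sketch below is that textbook baseline, not IGESS, and the cohort sizes and z-scores are invented.

    ```python
    import math

    def stouffer_combine(z_a, n_a, z_b, n_b):
        """Sample-size-weighted Stouffer combination of two independent
        z-scores for the same variant (weights proportional to sqrt(n))."""
        w_a, w_b = math.sqrt(n_a), math.sqrt(n_b)
        return (w_a * z_a + w_b * z_b) / math.sqrt(n_a + n_b)

    # A modest signal (z = 1.5) in a 500-person individual-level cohort crosses
    # the z = 1.96 threshold once a consistent 50,000-person summary cohort
    # (z = 2.0) is pooled in:
    print(stouffer_combine(1.5, 500, 2.0, 50000) > 1.96)  # True
    ```

    The same arithmetic explains why summary statistics from large consortia are such a valuable complement to restricted individual-level data.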

  3. The Association Between Nutrition Facts Label Utilization and Comprehension among Latinos in Two East Los Angeles Neighborhoods

    PubMed Central

    Sharif, Mienah Z.; Rizzo, Shemra; Prelip, Michael L; Glik, Deborah C; Belin, Thomas R; Langellier, Brent A; Kuo, Alice A.; Garza, Jeremiah R; Ortega, Alexander N

    2014-01-01

    Background The Nutrition Facts label can facilitate healthy dietary practices. There is a dearth of research on Latinos’ utilization and comprehension of the Nutrition Facts label. Objective To measure Nutrition Facts label use and comprehension and to identify their correlates among Latinos in East Los Angeles. Design Cross-sectional interviewer-administered survey using computer-assisted personal interview (CAPI) software, conducted in either English or Spanish in the participant’s home. Participants/Setting Eligibility criteria were: living in a household within the block clusters identified, being age 18 or over, speaking English or Spanish, identifying as Latino and as the household’s main food purchaser and preparer. Analyses were based on 269 eligible respondents. Statistical analyses performed Chi-square tests and multivariate logistic regression analysis assessed the association between the main outcomes and demographics. Multiple imputation addressed missing data. Results Sixty percent reported using the label; only 13% showed adequate comprehension of the label. Utilization was associated with being female, speaking Spanish and being below the poverty line. Comprehension was associated with younger age, not being married, and higher education. Utilization was not associated with comprehension. Conclusions Latinos who are using the Nutrition Facts label are not correctly interpreting the available information. Targeted education is needed to improve Nutrition Facts label use and comprehension, to directly improve diet, particularly among males, older Latinos, and those with less than a high school education. PMID:24974172

  4. Causality

    NASA Astrophysics Data System (ADS)

    Pearl, Judea

    2000-03-01

    Written by one of the pre-eminent researchers in the field, this book provides a comprehensive exposition of modern analysis of causation. It shows how causality has grown from a nebulous concept into a mathematical theory with significant applications in the fields of statistics, artificial intelligence, philosophy, cognitive science, and the health and social sciences. Pearl presents a unified account of the probabilistic, manipulative, counterfactual and structural approaches to causation, and devises simple mathematical tools for analyzing the relationships between causal connections, statistical associations, actions and observations. The book will open the way for including causal analysis in the standard curriculum of statistics, artificial intelligence, business, epidemiology, social science and economics. Students in these areas will find natural models, simple identification procedures, and precise mathematical definitions of causal concepts that traditional texts have tended to evade or make unduly complicated. This book will be of interest to professionals and students in a wide variety of fields. Anyone who wishes to elucidate meaningful relationships from data, predict effects of actions and policies, assess explanations of reported events, or form theories of causal understanding and causal speech will find this book stimulating and invaluable.

  5. Error Analysis for RADAR Neighbor Matching Localization in Linear Logarithmic Strength Varying Wi-Fi Environment

    PubMed Central

    Tian, Zengshan; Xu, Kunjie; Yu, Xiang

    2014-01-01

    This paper studies the statistical errors for the fingerprint-based RADAR neighbor matching localization with the linearly calibrated reference points (RPs) in logarithmic received signal strength (RSS) varying Wi-Fi environment. To the best of our knowledge, little comprehensive analysis work has appeared on the error performance of neighbor matching localization with respect to the deployment of RPs. However, in order to achieve the efficient and reliable location-based services (LBSs) as well as the ubiquitous context-awareness in Wi-Fi environment, much attention has to be paid to the highly accurate and cost-efficient localization systems. To this end, the statistical errors of the widely used neighbor matching localization are discussed in detail in this paper to examine the inherent mathematical relations between the localization errors and the locations of RPs by using a basic linear logarithmic strength varying model. Furthermore, based on the mathematical demonstrations and some testing results, the closed-form solutions to the statistical errors by RADAR neighbor matching localization can be an effective tool to explore alternative deployment of fingerprint-based neighbor matching localization systems in the future. PMID:24683349

  6. Error analysis for RADAR neighbor matching localization in linear logarithmic strength varying Wi-Fi environment.

    PubMed

    Zhou, Mu; Tian, Zengshan; Xu, Kunjie; Yu, Xiang; Wu, Haibo

    2014-01-01

    This paper studies the statistical errors for the fingerprint-based RADAR neighbor matching localization with the linearly calibrated reference points (RPs) in logarithmic received signal strength (RSS) varying Wi-Fi environment. To the best of our knowledge, little comprehensive analysis work has appeared on the error performance of neighbor matching localization with respect to the deployment of RPs. However, in order to achieve the efficient and reliable location-based services (LBSs) as well as the ubiquitous context-awareness in Wi-Fi environment, much attention has to be paid to the highly accurate and cost-efficient localization systems. To this end, the statistical errors of the widely used neighbor matching localization are discussed in detail in this paper to examine the inherent mathematical relations between the localization errors and the locations of RPs by using a basic linear logarithmic strength varying model. Furthermore, based on the mathematical demonstrations and some testing results, the closed-form solutions to the statistical errors by RADAR neighbor matching localization can be an effective tool to explore alternative deployment of fingerprint-based neighbor matching localization systems in the future.
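
    The "basic linear logarithmic strength varying model" referred to in both records is the standard log-distance path-loss form RSS(d) = P₀ − 10·n·log₁₀(d/d₀). A small sketch of that model follows; the reference power and path-loss exponent are illustrative values, not parameters from the paper.

    ```python
    import math

    def rss_log_model(p0_dbm, path_loss_exp, d, d0=1.0):
        """Linear logarithmic strength model: received power decays by
        10 * n * log10(d / d0) dB relative to the power at reference
        distance d0, where n is the path-loss exponent."""
        return p0_dbm - 10.0 * path_loss_exp * math.log10(d / d0)

    # Doubling the distance with path-loss exponent n = 2 costs ~6.02 dB:
    drop = rss_log_model(-40.0, 2.0, 2.0) - rss_log_model(-40.0, 2.0, 4.0)
    print(round(drop, 2))  # 6.02
    ```

    In fingerprint localization, it is the deviation of measured RSS from this idealized curve that determines how the placement of reference points shapes the matching errors analyzed in the paper.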

  7. The writer independent online handwriting recognition system frog on hand and cluster generative statistical dynamic time warping.

    PubMed

    Bahlmann, Claus; Burkhardt, Hans

    2004-03-01

    In this paper, we give a comprehensive description of our writer-independent online handwriting recognition system frog on hand. The focus of this work concerns the presentation of the classification/training approach, which we call cluster generative statistical dynamic time warping (CSDTW). CSDTW is a general, scalable, HMM-based method for variable-sized, sequential data that holistically combines cluster analysis and statistical sequence modeling. It can handle general classification problems that rely on this sequential type of data, e.g., speech recognition, genome processing, robotics, etc. Contrary to previous attempts, clustering and statistical sequence modeling are embedded in a single feature space and use a closely related distance measure. We show character recognition experiments of frog on hand using CSDTW on the UNIPEN online handwriting database. The recognition accuracy is significantly higher than reported results of other handwriting recognition systems. Finally, we describe the real-time implementation of frog on hand on a Linux Compaq iPAQ embedded device.
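
    CSDTW builds on the classic dynamic time warping recurrence. The sketch below is plain DTW with an absolute-difference local cost on scalar sequences — a hypothetical baseline for intuition, not the cluster-generative statistical extension (CSDTW) the paper describes.

    ```python
    def dtw_distance(x, y):
        """Dynamic time warping distance between two scalar sequences, using
        the standard (match, insertion, deletion) recurrence over a cost grid."""
        inf = float("inf")
        n, m = len(x), len(y)
        d = [[inf] * (m + 1) for _ in range(n + 1)]
        d[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(x[i - 1] - y[j - 1])
                d[i][j] = cost + min(d[i - 1][j],      # insertion
                                     d[i][j - 1],      # deletion
                                     d[i - 1][j - 1])  # match
        return d[n][m]

    # Two pen strokes with the same shape but different timing align perfectly:
    print(dtw_distance([0, 1, 2, 1, 0], [0, 0, 1, 2, 1, 0]))  # 0.0
    ```

    The elastic alignment is what makes DTW-family methods suitable for variable-sized sequential data such as pen trajectories or speech, as the abstract notes.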

  8. Metrological traceability in education: A practical online system for measuring and managing middle school mathematics instruction

    NASA Astrophysics Data System (ADS)

    Torres Irribarra, D.; Freund, R.; Fisher, W.; Wilson, M.

    2015-02-01

    Computer-based, online assessments modelled, designed, and evaluated for adaptively administered invariant measurement are uniquely suited to defining and maintaining traceability to standardized units in education. An assessment of this kind is embedded in the Assessing Data Modeling and Statistical Reasoning (ADM) middle school mathematics curriculum. Diagnostic information about middle school students' learning of statistics and modeling is provided via computer-based formative assessments for seven constructs that comprise a learning progression for statistics and modeling from late elementary through the middle school grades. The seven constructs are: Data Display, Meta-Representational Competence, Conceptions of Statistics, Chance, Modeling Variability, Theory of Measurement, and Informal Inference. The end product is a web-delivered system built with Ruby on Rails for use by curriculum development teams working with classroom teachers in designing, developing, and delivering formative assessments. The online accessible system allows teachers to accurately diagnose students' unique comprehension and learning needs in a common language of real-time assessment, logging, analysis, feedback, and reporting.

  9. Statistical modeling of optical attenuation measurements in continental fog conditions

    NASA Astrophysics Data System (ADS)

    Khan, Muhammad Saeed; Amin, Muhammad; Awan, Muhammad Saleem; Minhas, Abid Ali; Saleem, Jawad; Khan, Rahimdad

    2017-03-01

    Free-space optics is an innovative technology that uses atmosphere as a propagation medium to provide higher data rates. These links are heavily affected by atmospheric channel mainly because of fog and clouds that act to scatter and even block the modulated beam of light from reaching the receiver end, hence imposing severe attenuation. A comprehensive statistical study of the fog effects and deep physical understanding of the fog phenomena are very important for suggesting improvements (reliability and efficiency) in such communication systems. In this regard, 6-months real-time measured fog attenuation data are considered and statistically investigated. A detailed statistical analysis related to each fog event for that period is presented; the best probability density functions are selected on the basis of Akaike information criterion, while the estimates of unknown parameters are computed by maximum likelihood estimation technique. The results show that most fog attenuation events follow normal mixture distribution and some follow the Weibull distribution.
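
    The model-selection step described above — fit candidate distributions by maximum likelihood and keep the one minimizing AIC — can be sketched with two closed-form fits. Comparing only normal vs exponential is an illustrative simplification (the paper considers richer families such as normal mixtures and Weibull), and the sample values are invented.

    ```python
    import math

    def aic_normal(data):
        """AIC = 2k - 2 ln L for a normal model with MLE parameters (k = 2)."""
        n = len(data)
        mu = sum(data) / n
        var = sum((x - mu) ** 2 for x in data) / n   # MLE variance (divide by n)
        loglik = -0.5 * n * (math.log(2.0 * math.pi * var) + 1.0)
        return 2 * 2 - 2.0 * loglik

    def aic_exponential(data):
        """AIC for an exponential model with MLE rate (k = 1)."""
        n = len(data)
        lam = n / sum(data)
        loglik = n * math.log(lam) - lam * sum(data)
        return 2 * 1 - 2.0 * loglik

    # Symmetric attenuation-like readings (dB) should favour the normal model:
    sample = [9.0, 9.5, 10.0, 10.2, 10.5, 11.0, 10.8, 9.7]
    best = min([("normal", aic_normal(sample)), ("exponential", aic_exponential(sample))],
               key=lambda pair: pair[1])
    print(best[0])  # normal
    ```

    Repeating this comparison per fog event, with the candidate set widened to the distributions the paper considers, is exactly the selection procedure the abstract describes.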

  10. Environmental Health Practice: Statistically Based Performance Measurement

    PubMed Central

    Enander, Richard T.; Gagnon, Ronald N.; Hanumara, R. Choudary; Park, Eugene; Armstrong, Thomas; Gute, David M.

    2007-01-01

    Objectives. State environmental and health protection agencies have traditionally relied on a facility-by-facility inspection-enforcement paradigm to achieve compliance with government regulations. We evaluated the effectiveness of a new approach that uses a self-certification random sampling design. Methods. Comprehensive environmental and occupational health data from a 3-year statewide industry self-certification initiative were collected from representative automotive refinishing facilities located in Rhode Island. Statistical comparisons between baseline and postintervention data facilitated a quantitative evaluation of statewide performance. Results. The analysis of field data collected from 82 randomly selected automotive refinishing facilities showed statistically significant improvements (P<.05, Fisher exact test) in 4 major performance categories: occupational health and safety, air pollution control, hazardous waste management, and wastewater discharge. Statistical significance was also shown when a modified Bonferroni adjustment for multiple comparisons was performed. Conclusions. Our findings suggest that the new self-certification approach to environmental and worker protection is effective and can be used as an adjunct to further enhance state and federal enforcement programs. PMID:17267709
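
    The study's central test — a Fisher exact test on 2×2 baseline-vs-postintervention compliance counts, with a Bonferroni-style correction across the four performance categories — can be sketched as below. The table counts are invented for illustration, not the paper's data.

    ```python
    from math import comb

    def fisher_exact_two_sided(a, b, c, d):
        """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
        sums the hypergeometric probabilities of every table with the same
        margins that is no more likely than the observed table."""
        r1, r2, c1 = a + b, c + d, a + c
        n = r1 + r2

        def prob(x):  # probability of the table whose top-left cell is x
            return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)

        p_obs = prob(a)
        lo, hi = max(0, c1 - r2), min(r1, c1)
        return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs * (1 + 1e-9))

    # Hypothetical compliant/non-compliant counts before vs after intervention,
    # judged against a Bonferroni threshold for 4 performance categories:
    p = fisher_exact_two_sided(3, 1, 1, 3)
    print(round(p, 4), p < 0.05 / 4)  # 0.4857 False
    ```

    With realistic facility counts per category, the same routine reproduces the kind of per-category significance calls (P < .05, Bonferroni-adjusted) reported in the abstract.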

  11. Learning Outcomes in a Laboratory Environment vs. Classroom for Statistics Instruction: An Alternative Approach Using Statistical Software

    ERIC Educational Resources Information Center

    McCulloch, Ryan Sterling

    2017-01-01

    The role of any statistics course is to increase the understanding and comprehension of statistical concepts and those goals can be achieved via both theoretical instruction and statistical software training. However, many introductory courses either forego advanced software usage, or leave its use to the student as a peripheral activity. The…

  12. The Effect of Local Smokefree Regulations on Birth Outcomes and Prenatal Smoking.

    PubMed

    Bartholomew, Karla S; Abouk, Rahi

    2016-07-01

    Objectives We assessed the impact of varying levels of smokefree regulations on birth outcomes and prenatal smoking. Methods We exploited variations in timing and regulation restrictiveness of West Virginia's county smokefree regulations to assess their impact on birthweight, gestational age, low birthweight, very low birthweight, preterm birth, and prenatal smoking. We conducted regression analysis using state Vital Statistics individual-level data for singletons born to West Virginia residents between 1995-2010 (N = 293,715). Results Only more comprehensive smokefree regulations were associated with statistically significant favorable effects on birth outcomes in the full sample: Comprehensive (workplace/restaurant/bar ban) demonstrated increased birthweight (29 grams, p < 0.05) and gestational age (1.64 days, p < 0.01), as well as reductions in very low birthweight (-0.4 %, p < 0.05) and preterm birth (-1.5 %, p < 0.01); Restrictive (workplace/restaurant ban) demonstrated a small decrease in very low birthweight (-0.2 %, p < 0.05). Among less restrictive regulations: Moderate (workplace ban) was associated with a 23 g (p < 0.01) decrease in birthweight; Limited (partial ban) had no effect. Comprehensive's improvements extended to most maternal groups, and were broadest among mothers 21+ years, non-smokers, and unmarried mothers. Prenatal smoking declined slightly (-1.7 %, p < 0.01) only among married women with Comprehensive. Conclusions Regulation restrictiveness is a determining factor in the impact of smokefree regulations on birth outcomes, with comprehensive smokefree regulations showing promise in improving birth outcomes. Favorable effects on birth outcomes appear to stem from reduced secondhand smoke exposure rather than reduced prenatal smoking prevalence. This study is limited by an inability to measure secondhand smoke exposure and the paucity of data on policy implementation and enforcement.

  13. Design and Implementation of a Comprehensive Web-based Survey for Ovarian Cancer Survivorship with an Analysis of Prediagnosis Symptoms via Text Mining

    PubMed Central

    Sun, Jiayang; Bogie, Kath M; Teagno, Joe; Sun, Yu-Hsiang (Sam); Carter, Rebecca R; Cui, Licong; Zhang, Guo-Qiang

    2014-01-01

    Ovarian cancer (OvCa) is the most lethal gynecologic disease in the United States, with an overall 5-year survival rate of 44.5%, about half of the 89.2% for all breast cancer patients. To identify factors that possibly contribute to the long-term survivorship of women with OvCa, we conducted a comprehensive online Ovarian Cancer Survivorship Survey from 2009 to 2013. This paper presents the design and implementation of our survey, introduces its resulting data source, the OVA-CRADLE™ (Clinical Research Analytics and Data Lifecycle Environment), and illustrates a sample application of the survey and data by an analysis of prediagnosis symptoms, using text mining and statistics. The OVA-CRADLE™ is an application of our patented Physio-MIMI technology, facilitating Web-based access, online query and exploration of data. The prediagnostic symptoms and association of early-stage OvCa diagnosis with endometriosis provide potentially important indicators for future studies in this field. PMID:25861211

  14. Reframing Serial Murder Within Empirical Research.

    PubMed

    Gurian, Elizabeth A

    2017-04-01

    Empirical research on serial murder is limited due to the lack of consensus on a definition, the continued use of primarily descriptive statistics, and linkage to popular culture depictions. These limitations also inhibit our understanding of these offenders and affect credibility in the field of research. Therefore, this comprehensive overview of a sample of 508 cases (738 total offenders, including partnered groups of two or more offenders) provides analyses of solo male, solo female, and partnered serial killers to elucidate statistical differences and similarities in offending and adjudication patterns among the three groups. This analysis of serial homicide offenders not only supports previous research on offending patterns present in the serial homicide literature but also reveals that empirically based analyses can enhance our understanding beyond traditional case studies and descriptive statistics. Further research based on these empirical analyses can aid in the development of more accurate classifications and definitions of serial murderers.

  15. Statistical power comparisons at 3T and 7T with a GO/NOGO task.

    PubMed

    Torrisi, Salvatore; Chen, Gang; Glen, Daniel; Bandettini, Peter A; Baker, Chris I; Reynolds, Richard; Yen-Ting Liu, Jeffrey; Leshin, Joseph; Balderston, Nicholas; Grillon, Christian; Ernst, Monique

    2018-07-15

    The field of cognitive neuroscience is weighing evidence about whether to move from standard field strength to ultra-high field (UHF). The present study contributes to the evidence by comparing a cognitive neuroscience paradigm at 3 Tesla (3T) and 7 Tesla (7T). The goal was to test and demonstrate the practical effects of field strength on a standard GO/NOGO task using accessible preprocessing and analysis tools. Two independent matched healthy samples (N = 31 each) were analyzed at 3T and 7T. Results show gains at 7T in statistical strength, the detection of smaller effects and group-level power. With an increased availability of UHF scanners, these gains may be exploited by cognitive neuroscientists and other neuroimaging researchers to develop more efficient or comprehensive experimental designs and, given the same sample size, achieve greater statistical power at 7T. Published by Elsevier Inc.

  16. Geostatistics and GIS: tools for characterizing environmental contamination.

    PubMed

    Henshaw, Shannon L; Curriero, Frank C; Shields, Timothy M; Glass, Gregory E; Strickland, Paul T; Breysse, Patrick N

    2004-08-01

    Geostatistics is a set of statistical techniques used in the analysis of georeferenced data that can be applied to environmental contamination and remediation studies. In this study, the 1,1-dichloro-2,2-bis(p-chlorophenyl)ethylene (DDE) contamination at a Superfund site in western Maryland is evaluated. Concern about the site and its future cleanup has triggered interest within the community because residential development surrounds the area. Spatial statistical methods, of which geostatistics is a subset, are becoming increasingly popular, in part due to the availability of geographic information system (GIS) software in a variety of application packages. In this article, the joint use of ArcGIS software and the R statistical computing environment is demonstrated as an approach for comprehensive geostatistical analyses. The spatial regression method, kriging, is used to provide predictions of DDE levels at unsampled locations both within the site and the surrounding areas where residential development is ongoing.
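The kriging step named above can be illustrated with a minimal ordinary-kriging sketch (the study itself used ArcGIS and R). The coordinates, concentrations, and exponential semivariogram parameters below are hypothetical; the sketch solves the standard kriging system (variogram matrix bordered by the unbiasedness constraint) for the prediction weights.

```python
import math

def exp_variogram(h, sill=1.0, rng=50.0):
    """Exponential semivariogram with zero nugget (assumed model)."""
    return sill * (1.0 - math.exp(-h / rng))

def solve(a, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def ordinary_kriging(points, target):
    """points: [(x, y, z), ...]; returns the kriged estimate at target (x, y)."""
    n = len(points)
    d = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    # Kriging system: variogram matrix bordered by the unbiasedness constraint.
    a = [[exp_variogram(d(points[i], points[j])) for j in range(n)] + [1.0]
         for i in range(n)]
    a.append([1.0] * n + [0.0])
    b = [exp_variogram(d(p, target)) for p in points] + [1.0]
    w = solve(a, b)                      # n weights plus a Lagrange multiplier
    return sum(w[i] * points[i][2] for i in range(n))

# Hypothetical sample locations and DDE-like concentrations
samples = [(0, 0, 4.0), (30, 0, 7.0), (0, 30, 5.0), (30, 30, 9.0)]
print(ordinary_kriging(samples, (15, 15)))   # interior estimate
```

With a zero nugget, ordinary kriging honors the data exactly: predicting at a sampled location returns that sample's value.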

  17. Computational methods to extract meaning from text and advance theories of human cognition.

    PubMed

    McNamara, Danielle S

    2011-01-01

    Over the past two decades, researchers have made great advances in the area of computational methods for extracting meaning from text. This research has to a large extent been spurred by the development of latent semantic analysis (LSA), a method for extracting and representing the meaning of words using statistical computations applied to large corpora of text. Since the advent of LSA, researchers have developed and tested alternative statistical methods designed to detect and analyze meaning in text corpora. This research exemplifies how statistical models of semantics play an important role in our understanding of cognition and contribute to the field of cognitive science. Importantly, these models afford large-scale representations of human knowledge and allow researchers to explore various questions regarding knowledge, discourse processing, text comprehension, and language. This topic includes the latest progress by the leading researchers in the endeavor to go beyond LSA. Copyright © 2010 Cognitive Science Society, Inc.

  18. Education Statistics Quarterly, Spring 2001.

    ERIC Educational Resources Information Center

    Education Statistics Quarterly, 2001

    2001-01-01

    The "Education Statistics Quarterly" gives a comprehensive overview of work done across all parts of the National Center for Education Statistics (NCES). Each issue contains short publications, summaries, and descriptions that cover all NCES publications, data products and funding opportunities developed over a 3-month period. Each issue…

  19. Tile-based Fisher ratio analysis of comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC × GC-TOFMS) data using a null distribution approach.

    PubMed

    Parsons, Brendon A; Marney, Luke C; Siegler, W Christopher; Hoggard, Jamin C; Wright, Bob W; Synovec, Robert E

    2015-04-07

    Comprehensive two-dimensional (2D) gas chromatography coupled with time-of-flight mass spectrometry (GC × GC-TOFMS) is a versatile instrumental platform capable of collecting highly informative, yet highly complex, chemical data for a variety of samples. Fisher-ratio (F-ratio) analysis applied to the supervised comparison of sample classes algorithmically reduces complex GC × GC-TOFMS data sets to find class distinguishing chemical features. F-ratio analysis, using a tile-based algorithm, significantly reduces the adverse effects of chromatographic misalignment and spurious covariance of the detected signal, enhancing the discovery of true positives while simultaneously reducing the likelihood of detecting false positives. Herein, we report a study using tile-based F-ratio analysis whereby four non-native analytes were spiked into diesel fuel at several concentrations ranging from 0 to 100 ppm. Spike level comparisons were performed in two regimes: comparing the spiked samples to the nonspiked fuel matrix and to each other at relative concentration factors of two. Redundant hits were algorithmically removed by refocusing the tiled results onto the original high resolution pixel level data. To objectively limit the tile-based F-ratio results to only features which are statistically likely to be true positives, we developed a combinatorial technique using null class comparisons, called null distribution analysis, by which we determined a statistically defensible F-ratio cutoff for the analysis of the hit list. After applying null distribution analysis, spiked analytes were reliably discovered at ∼1 to ∼10 ppm (∼5 to ∼50 pg using a 200:1 split), depending upon the degree of mass spectral selectivity and 2D chromatographic resolution, with minimal occurrence of false positives. To place the relevance of this work among other methods in this field, results are compared to those for pixel and peak table-based approaches.
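The per-feature F-ratio screening and null-distribution cutoff described above can be caricatured in a few lines. Everything below is a toy stand-in: the per-tile signal values are invented, and a label-permutation null replaces the paper's null-class comparisons.

```python
import random
import statistics

def f_ratio(class_a, class_b):
    """Fisher ratio: between-class variance over pooled within-class variance."""
    grand = statistics.mean(class_a + class_b)
    between = (len(class_a) * (statistics.mean(class_a) - grand) ** 2 +
               len(class_b) * (statistics.mean(class_b) - grand) ** 2)
    within = (sum((x - statistics.mean(class_a)) ** 2 for x in class_a) +
              sum((x - statistics.mean(class_b)) ** 2 for x in class_b))
    return between / within if within else float("inf")

def null_cutoff(values, n_a, trials=2000, quantile=0.95, seed=1):
    """Permute class labels to build an empirical null F distribution;
    return the cutoff above which an F-ratio is unlikely by chance."""
    rng = random.Random(seed)
    fs = []
    for _ in range(trials):
        perm = values[:]
        rng.shuffle(perm)
        fs.append(f_ratio(perm[:n_a], perm[n_a:]))
    fs.sort()
    return fs[int(quantile * len(fs)) - 1]

# Hypothetical tile signals: one spiked analyte, one matrix-only feature.
spiked  = [10.1, 10.4, 9.8, 10.2]; blank   = [14.9, 15.3, 15.1, 14.7]
noise_a = [5.2, 4.8, 5.1, 4.9];    noise_b = [5.0, 5.3, 4.7, 5.1]

f_hit  = f_ratio(spiked, blank)      # class-distinguishing feature
f_null = f_ratio(noise_a, noise_b)   # spurious feature
cutoff = null_cutoff(spiked + blank, len(spiked))
print(f_hit, cutoff, f_null)
```

Only features whose F-ratio exceeds the empirical cutoff would survive the hit list, which is the thresholding idea behind the paper's null distribution analysis.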

  20. MALDI-TOF Mass Spectrometry Enables a Comprehensive and Fast Analysis of Dynamics and Qualities of Stress Responses of Lactobacillus paracasei subsp. paracasei F19

    PubMed Central

    Schott, Ann-Sophie; Behr, Jürgen; Quinn, Jennifer; Vogel, Rudi F.

    2016-01-01

    Lactic acid bacteria (LAB) are widely used as starter cultures in the manufacture of foods. Upon preparation, these cultures undergo various stresses resulting in losses of survival and fitness. In order to find conditions for the subsequent identification of proteomic biomarkers and their exploitation for preconditioning of strains, we subjected Lactobacillus (Lb.) paracasei subsp. paracasei TMW 1.1434 (F19) to different stress qualities (osmotic stress, oxidative stress, temperature stress, pH stress and starvation stress). We analysed the dynamics of its stress responses based on the expression of stress proteins using MALDI-TOF mass spectrometry (MS), which has so far been used for species identification. Exploiting the methodology of accumulating protein expression profiles by MALDI-TOF MS followed by the statistical evaluation with cluster analysis and discriminant analysis of principal components (DAPC), it was possible to monitor the expression of low molecular weight stress proteins, identify a specific time point when the expression of stress proteins reached its maximum, and statistically differentiate types of adaptive responses into groups. Beyond the specific result for F19 and its stress response, these results demonstrate the discriminatory power of MALDI-TOF MS to characterize even dynamics of stress responses of bacteria and enable a knowledge-based focus on the laborious identification of biomarkers and stress proteins. To our knowledge, the implementation of MALDI-TOF MS protein profiling for the fast and comprehensive analysis of various stress responses is new to the field of bacterial stress responses. Consequently, we generally propose MALDI-TOF MS as an easy and quick method to characterize responses of microbes to different environmental conditions, to focus efforts of more elaborate approaches on time points and dynamics of stress responses. PMID:27783652

  1. Iterative Monte Carlo analysis of spin-dependent parton distributions

    DOE PAGES

    Sato, Nobuo; Melnitchouk, Wally; Kuhn, Sebastian E.; ...

    2016-04-05

    We present a comprehensive new global QCD analysis of polarized inclusive deep-inelastic scattering, including the latest high-precision data on longitudinal and transverse polarization asymmetries from Jefferson Lab and elsewhere. The analysis is performed using a new iterative Monte Carlo fitting technique which generates stable fits to polarized parton distribution functions (PDFs) with statistically rigorous uncertainties. Inclusion of the Jefferson Lab data leads to a reduction in the PDF errors for the valence and sea quarks, as well as in the gluon polarization uncertainty at x ≳ 0.1. Furthermore, the study also provides the first determination of the flavor-separated twist-3 PDFs and the d2 moment of the nucleon within a global PDF analysis.

  2. Detecting Disease Specific Pathway Substructures through an Integrated Systems Biology Approach

    PubMed Central

    Alaimo, Salvatore; Marceca, Gioacchino Paolo; Ferro, Alfredo; Pulvirenti, Alfredo

    2017-01-01

    In the era of network medicine, pathway analysis methods play a central role in the prediction of phenotype from high throughput experiments. In this paper, we present a network-based systems biology approach capable of extracting disease-perturbed subpathways within pathway networks in connection with expression data taken from The Cancer Genome Atlas (TCGA). Our system extends pathways with missing regulatory elements, such as microRNAs, and their interactions with genes. The framework enables the extraction, visualization, and analysis of statistically significant disease-specific subpathways through an easy to use web interface. Our analysis shows that the methodology is able to fill the gap in current techniques, allowing a more comprehensive analysis of the phenomena underlying disease states. PMID:29657291

  3. Introduction to the DISRUPT postprandial database: subjects, studies and methodologies.

    PubMed

    Jackson, Kim G; Clarke, Dave T; Murray, Peter; Lovegrove, Julie A; O'Malley, Brendan; Minihane, Anne M; Williams, Christine M

    2010-03-01

    Dysregulation of lipid and glucose metabolism in the postprandial state are recognised as important risk factors for the development of cardiovascular disease and type 2 diabetes. Our objective was to create a comprehensive, standardised database of postprandial studies to provide insights into the physiological factors that influence postprandial lipid and glucose responses. Data were collated from subjects (n = 467) taking part in single and sequential meal postprandial studies conducted by researchers at the University of Reading, to form the DISRUPT (DIetary Studies: Reading Unilever Postprandial Trials) database. Subject attributes including age, gender, genotype, menopausal status, body mass index, blood pressure and a fasting biochemical profile, together with postprandial measurements of triacylglycerol (TAG), non-esterified fatty acids, glucose, insulin and TAG-rich lipoprotein composition are recorded. A particular strength of the studies is the frequency of blood sampling, with on average 10-13 blood samples taken during each postprandial assessment, and the fact that identical test meal protocols were used in a number of studies, allowing pooling of data to increase statistical power. The DISRUPT database is the most comprehensive postprandial metabolism database that exists worldwide and preliminary analysis of the pooled sequential meal postprandial dataset has revealed both confirmatory and novel observations with respect to the impact of gender and age on the postprandial TAG response. Further analysis of the dataset using conventional statistical techniques along with integrated mathematical models and clustering analysis will provide a unique opportunity to greatly expand current knowledge of the aetiology of inter-individual variability in postprandial lipid and glucose responses.

  4. Post Second World War immigration from Balkan countries to Turkey.

    PubMed

    Kirisci, K

    1995-01-01

    "Although there are some works, both in English and Turkish, that have studied migration into the Ottoman empire from the Balkans during the 19th century...it is difficult to find any systematic and comprehensive literature that examines the period since the establishment of the Turkish Republic.... This article aims at filling some of this gap....[The article offers] an analysis of the size and causes of migration from the Balkans to Turkey since the end of the Second World War. The statistics for tables used in this article, unless stated otherwise, have been obtained from the General Directorate of Village Works in Ankara, which is responsible for keeping the statistical records on immigrants arriving in Turkey." excerpt

  5. Assessment of NDE reliability data

    NASA Technical Reports Server (NTRS)

    Yee, B. G. W.; Couchman, J. C.; Chang, F. H.; Packman, D. F.

    1975-01-01

    Twenty sets of relevant nondestructive test (NDT) reliability data were identified, collected, compiled, and categorized. A criterion for the selection of data for statistical analysis considerations was formulated, and a model to grade the quality and validity of the data sets was developed. Data input formats, which record the pertinent parameters of the defect/specimen and inspection procedures, were formulated for each NDE method. A comprehensive computer program was written and debugged to calculate the probability of flaw detection at several confidence limits by the binomial distribution. This program also selects the desired data sets for pooling and tests the statistical pooling criteria before calculating the composite detection reliability. An example of the calculated reliability of crack detection in bolt holes by an automatic eddy current method is presented.
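The binomial probability-of-detection (POD) calculation described above can be sketched as a one-sided lower confidence bound (a generic reconstruction, not the report's actual program): given x detections in n trials, bisect for the p whose upper-tail binomial probability equals the significance level. The classic NDE "29 of 29" case recovers the familiar 90% POD at 95% confidence.

```python
import math

def binom_tail(p, n, x):
    """P(X >= x) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x, n + 1))

def pod_lower_bound(x, n, confidence=0.95):
    """One-sided lower confidence bound on the detection probability:
    the p solving P(X >= x | n, p) = 1 - confidence, found by bisection."""
    alpha = 1 - confidence
    lo, hi = 0.0, 1.0
    for _ in range(60):                  # bisection to ~1e-18 interval width
        mid = (lo + hi) / 2
        if binom_tail(mid, n, x) < alpha:
            lo = mid
        else:
            hi = mid
    return lo

# 29 detections in 29 demonstrations -> lower bound p = 0.05**(1/29) ≈ 0.902
print(pod_lower_bound(29, 29))
```

For x = n the tail reduces to p**n, so the bound is analytic (0.05**(1/n)); the bisection handles the general x < n case the same way.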

  6. A Comparison of the Achievement of Statistics Students Enrolled in Online and Face-to-Face Settings

    ERIC Educational Resources Information Center

    Christmann, Edwin P.

    2017-01-01

    This study compared the achievement of male and female students who were enrolled in an online univariate statistics course to students enrolled in a traditional face-to-face univariate statistics course. The subjects, 47 graduate students enrolled in univariate statistics classes at a public, comprehensive university, were randomly assigned to…

  7. Whole-Range Assessment: A Simple Method for Analysing Allelopathic Dose-Response Data

    PubMed Central

    An, Min; Pratley, J. E.; Haig, T.; Liu, D.L.

    2005-01-01

    Based on the typical biological responses of an organism to allelochemicals (hormesis), concepts of whole-range assessment and inhibition index were developed for improved analysis of allelopathic data. Examples of their application are presented using data drawn from the literature. The method is concise and comprehensive, and makes data grouping and multiple comparisons simple, logical, and possible. It improves data interpretation, enhances research outcomes, and is a statistically efficient summary of the plant response profiles. PMID:19330165

  8. Towards a Comprehensive Approach to the Analysis of Categorical Data.

    DTIC Science & Technology

    1983-06-01

    have made methodological contributions to the topic include R.L. Anderson, K. Hinkelman, O. Kempthorne, and K. Koehler. The next section of this paper...their sufficient statistics that originated in the 1930s with Fisher (e.g., see Dempster, 1971, and the discussion in Andersen, 1980). From the...ones from the original density for which h, h, .... h are MSS's, and h ,h. h are the corresponding MSS's in the conditional density (see Andersen, 1980

  9. USING STATISTICAL METHODS FOR WATER QUALITY MANAGEMENT: ISSUES, PROBLEMS AND SOLUTIONS

    EPA Science Inventory

    This book is readable, comprehensible and I anticipate, usable. The author has an enthusiasm which comes out in the text. Statistics is presented as a living breathing subject, still being debated, defined, and refined. This statistics book actually has examples in the field...

  10. Implementing a Web-Based Decision Support System to Spatially and Statistically Analyze Ecological Conditions of the Sierra Nevada

    NASA Astrophysics Data System (ADS)

    Nguyen, A.; Mueller, C.; Brooks, A. N.; Kislik, E. A.; Baney, O. N.; Ramirez, C.; Schmidt, C.; Torres-Perez, J. L.

    2014-12-01

    The Sierra Nevada is experiencing changes in hydrologic regimes, such as decreases in snowmelt and peak runoff, which affect forest health and the availability of water resources. Currently, the USDA Forest Service Region 5 is undergoing Forest Plan revisions to include climate change impacts into mitigation and adaptation strategies. However, there are few processes in place to conduct quantitative assessments of forest conditions in relation to mountain hydrology, while easily and effectively delivering that information to forest managers. To assist the USDA Forest Service, this study is the final phase of a three-term project to create a Decision Support System (DSS) to allow ease of access to historical and forecasted hydrologic, climatic, and terrestrial conditions for the entire Sierra Nevada. This data is featured within three components of the DSS: the Mapping Viewer, Statistical Analysis Portal, and Geospatial Data Gateway. Utilizing ArcGIS Online, the Sierra DSS Mapping Viewer enables users to visually analyze and locate areas of interest. Once the areas of interest are targeted, the Statistical Analysis Portal provides subbasin level statistics for each variable over time by utilizing a recently developed web-based data analysis and visualization tool called Plotly. This tool allows users to generate graphs and conduct statistical analyses for the Sierra Nevada without the need to download the dataset of interest. For more comprehensive analysis, users are also able to download datasets via the Geospatial Data Gateway. The third phase of this project focused on Python-based data processing, the adaptation of the multiple capabilities of ArcGIS Online and Plotly, and the integration of the three Sierra DSS components within a website designed specifically for the USDA Forest Service.

  11. Characterization of Low-Molecular-Weight Heparins by Strong Anion-Exchange Chromatography.

    PubMed

    Sadowski, Radosław; Gadzała-Kopciuch, Renata; Kowalkowski, Tomasz; Widomski, Paweł; Jujeczka, Ludwik; Buszewski, Bogusław

    2017-11-01

    Currently, detailed structural characterization of low-molecular-weight heparin (LMWH) products is an analytical subject of great interest. In this work, we carried out a comprehensive structural analysis of LMWHs and applied a modified pharmacopeial method, as well as methods developed by other researchers, to the analysis of novel biosimilar LMWH products; and, for the first time, compared the qualitative and quantitative composition of commercially available drugs (enoxaparin, nadroparin, and dalteparin). For this purpose, we used strong anion-exchange (SAX) chromatography with spectrophotometric detection because this method is more helpful, easier, and faster than other separation techniques for the detailed disaccharide analysis of new LMWH drugs. In addition, we subjected the obtained results to statistical analysis (factor analysis, t-test, and Newman-Keuls post hoc test).

  12. Novel features and enhancements in BioBin, a tool for the biologically inspired binning and association analysis of rare variants

    PubMed Central

    Byrska-Bishop, Marta; Wallace, John; Frase, Alexander T; Ritchie, Marylyn D

    2018-01-01

    Motivation: BioBin is an automated bioinformatics tool for the multi-level biological binning of sequence variants. Herein, we present a significant update to BioBin which expands the software to facilitate a comprehensive rare variant analysis and incorporates novel features and analysis enhancements. Results: In BioBin 2.3, we extend our software tool by implementing statistical association testing, updating the binning algorithm, as well as incorporating novel analysis features, providing for a robust, highly customizable, and unified rare variant analysis tool. Availability and implementation: The BioBin software package is open source and freely available to users at http://www.ritchielab.com/software/biobin-download. Contact: mdritchie@geisinger.edu. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28968757

  13. Association of obesity with hypertension and type 2 diabetes mellitus in India: A meta-analysis of observational studies

    PubMed Central

    Babu, Giridhara R; Murthy, G V S; Ana, Yamuna; Patel, Prital; Deepa, R; Neelon, Sara E Benjamin; Kinra, Sanjay; Reddy, K Srinath

    2018-01-01

    AIM To perform a meta-analysis of the association of obesity with hypertension and type 2 diabetes mellitus (T2DM) in India among adults. METHODS To conduct the meta-analysis, we performed a comprehensive electronic literature search in PubMed, CINAHL Plus, and Google Scholar. We restricted the analysis to studies documenting some measure of obesity, namely body mass index, waist-hip ratio, or waist circumference, and a diagnosis of hypertension or T2DM. By obtaining summary estimates of all included studies, the meta-analysis was performed using both RevMan version 5 and the “metan” command in STATA version 11. Heterogeneity was measured by the I2 statistic. Funnel plot analysis was performed to assess publication bias. RESULTS Of the 956 studies screened, 18 met the eligibility criteria. The pooled odds ratio between obesity and hypertension was 3.82 (95%CI: 3.39 to 4.25). The heterogeneity around this estimate (I2 statistic) was 0%, indicating low variability. The pooled odds ratio from the included studies showed a statistically significant association between obesity and T2DM (OR = 1.14, 95%CI: 1.04 to 1.24) with a high degree of variability. CONCLUSION Despite methodological differences, obesity showed a significant, potentially plausible association with hypertension and T2DM in studies conducted in India. Being a modifiable risk factor, our study informs the setting of policy priorities and intervention efforts to prevent debilitating complications. PMID:29359028

  14. Association of obesity with hypertension and type 2 diabetes mellitus in India: A meta-analysis of observational studies.

    PubMed

    Babu, Giridhara R; Murthy, G V S; Ana, Yamuna; Patel, Prital; Deepa, R; Neelon, Sara E Benjamin; Kinra, Sanjay; Reddy, K Srinath

    2018-01-15

    To perform a meta-analysis of the association of obesity with hypertension and type 2 diabetes mellitus (T2DM) in India among adults. To conduct the meta-analysis, we performed a comprehensive electronic literature search in PubMed, CINAHL Plus, and Google Scholar. We restricted the analysis to studies documenting some measure of obesity, namely body mass index, waist-hip ratio, or waist circumference, and a diagnosis of hypertension or T2DM. By obtaining summary estimates of all included studies, the meta-analysis was performed using both RevMan version 5 and the "metan" command in STATA version 11. Heterogeneity was measured by the I2 statistic. Funnel plot analysis was performed to assess publication bias. Of the 956 studies screened, 18 met the eligibility criteria. The pooled odds ratio between obesity and hypertension was 3.82 (95%CI: 3.39 to 4.25). The heterogeneity around this estimate (I2 statistic) was 0%, indicating low variability. The pooled odds ratio from the included studies showed a statistically significant association between obesity and T2DM (OR = 1.14, 95%CI: 1.04 to 1.24) with a high degree of variability. Despite methodological differences, obesity showed a significant, potentially plausible association with hypertension and T2DM in studies conducted in India. Being a modifiable risk factor, our study informs the setting of policy priorities and intervention efforts to prevent debilitating complications.
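The pooling reported in the abstract (inverse-variance pooled odds ratio with the I2 heterogeneity statistic, as RevMan and Stata's metan compute internally) can be sketched with hypothetical study values; fixed-effect pooling is shown for brevity, and the log odds ratios and standard errors below are not those of the 18 included studies.

```python
import math

# Hypothetical per-study log odds ratios and standard errors.
log_or = [math.log(2.0), math.log(3.0)]
se     = [0.20, 0.25]

# Fixed-effect inverse-variance pooling on the log-OR scale.
w = [1 / s**2 for s in se]
pooled_log_or = sum(wi * y for wi, y in zip(w, log_or)) / sum(w)
pooled_or = math.exp(pooled_log_or)

# Cochran's Q and the I^2 heterogeneity statistic.
q = sum(wi * (y - pooled_log_or) ** 2 for wi, y in zip(w, log_or))
df = len(log_or) - 1
i_squared = max(0.0, (q - df) / q) if q > 0 else 0.0

print(round(pooled_or, 2), round(100 * i_squared, 1))
```

I2 expresses the share of between-study variability beyond chance; an I2 of 0%, as reported for the hypertension estimate, means Q did not exceed its degrees of freedom.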

  15. [Analysis on the current use and trend of drugs for digestive system through comprehensive statistics index, in Hangzhou].

    PubMed

    Chen, Xin-yu; Zhang, Chun-quan; Zhang, Hong; Liu, Wei; Wang, Jun-yan

    2010-05-01

    To investigate the use and trends of digestive system drugs in the Hangzhou area using the comprehensive statistics index (CSI). Using an analytical method based on total consumption and the CSI, the use of digestive system drugs from different manufacturers in Hangzhou from 2005 to 2007 was analyzed. Except for H2 receptor antagonists, the total consumption of digestive system drugs increased yearly; the four leading classes by total consumption were proton pump inhibitors, microecological agents (probiotics), antiemetic drugs, and gastroprokinetic agents. The Laspeyres indices of digestive system drugs increased to varying extents: for proton pump inhibitors, probiotics, antiemetic drugs, and gastroprokinetic agents they were 1.39650, 1.02042, 1.72890, and 1.14850 in 2006, and 2.08110, 1.21755, 2.22350, and 1.15660 in 2007, respectively. The CSI results characterize the use and trends of digestive system drugs in Hangzhou. Factors such as the rationality, efficiency, and costs of the drugs, as well as the etiology of the disease, were also explored to some degree.
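The Laspeyres index used in the abstract weights current-period prices by base-period quantities, L = Σ(p1·q0) / Σ(p0·q0); a minimal sketch with hypothetical figures (not the Hangzhou data):

```python
# Hypothetical prices and quantities for two digestive-system drugs.
base_price    = [10.0, 20.0]   # p0: base-period unit prices
base_quantity = [5.0, 3.0]     # q0: base-period quantities
new_price     = [12.0, 22.0]   # p1: current-period unit prices

# Laspeyres index: current prices valued at base-period quantities.
laspeyres = (sum(p1 * q0 for p1, q0 in zip(new_price, base_quantity)) /
             sum(p0 * q0 for p0, q0 in zip(base_price, base_quantity)))
print(laspeyres)   # 126/110
```

An index above 1 indicates growth relative to the base period, which is how the abstract's 2006 and 2007 values should be read.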

  16. A weighted U-statistic for genetic association analyses of sequencing data.

    PubMed

    Wei, Changshuai; Li, Ming; He, Zihuai; Vsevolozhskaya, Olga; Schaid, Daniel J; Lu, Qing

    2014-12-01

    With advancements in next-generation sequencing technology, a massive amount of sequencing data is generated, which offers a great opportunity to comprehensively investigate the role of rare variants in the genetic etiology of complex diseases. Nevertheless, the high-dimensional sequencing data poses a great challenge for statistical analysis. The association analyses based on traditional statistical methods suffer substantial power loss because of the low frequency of genetic variants and the extremely high dimensionality of the data. We developed a Weighted U Sequencing test, referred to as WU-SEQ, for the high-dimensional association analysis of sequencing data. Based on a nonparametric U-statistic, WU-SEQ makes no assumption of the underlying disease model and phenotype distribution, and can be applied to a variety of phenotypes. Through simulation studies and an empirical study, we showed that WU-SEQ outperformed a commonly used sequence kernel association test (SKAT) method when the underlying assumptions were violated (e.g., the phenotype followed a heavy-tailed distribution). Even when the assumptions were satisfied, WU-SEQ still attained comparable performance to SKAT. Finally, we applied WU-SEQ to sequencing data from the Dallas Heart Study (DHS), and detected an association between ANGPTL 4 and very low density lipoprotein cholesterol. © 2014 WILEY PERIODICALS, INC.

  17. A hierarchical fuzzy rule-based approach to aphasia diagnosis.

    PubMed

    Akbarzadeh-T, Mohammad-R; Moshtagh-Khorasani, Majid

    2007-10-01

    Aphasia diagnosis is a particularly challenging medical diagnostic task due to the linguistic uncertainty and vagueness, inconsistencies in the definition of aphasic syndromes, large number of measurements with imprecision, natural diversity and subjectivity in test objects as well as in opinions of experts who diagnose the disease. To efficiently address this diagnostic process, a hierarchical fuzzy rule-based structure is proposed here that considers the effect of different features of aphasia by statistical analysis in its construction. This approach can be efficient for diagnosis of aphasia and possibly other medical diagnostic applications due to its fuzzy and hierarchical reasoning construction. Initially, the symptoms of the disease, each of which consists of different features, are analyzed statistically. The measured statistical parameters from the training set are then used to define membership functions and the fuzzy rules. The resulting two-layered fuzzy rule-based system is then compared with a back-propagating feed-forward neural network for diagnosis of four aphasia types: Anomic, Broca, Global and Wernicke. In order to reduce the number of required inputs, the technique is applied and compared on both comprehensive and spontaneous speech tests. Statistical t-test analysis confirms that the proposed approach uses fewer aphasia features while also presenting a significant improvement in terms of accuracy.
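The construction described above (membership functions defined from training-set statistics, then per-class fuzzy rules fired and compared) can be sketched minimally. Everything below is hypothetical: the two features, the Gaussian membership form, and the toy scores are illustrative stand-ins for the paper's aphasia features, not its actual data or rule base.

```python
import math
import statistics

def gaussian_membership(x, mu, sigma):
    """Fuzzy membership derived from a class's training mean and std."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def build_rules(training):
    """training: {class: [(feat1, feat2), ...]} -> per-class (mu, sigma) per feature."""
    rules = {}
    for label, rows in training.items():
        cols = list(zip(*rows))
        rules[label] = [(statistics.mean(c), statistics.stdev(c)) for c in cols]
    return rules

def classify(rules, sample):
    """Fire each class rule with min (fuzzy AND) over features; take the max."""
    def firing(params):
        return min(gaussian_membership(x, mu, sg) for x, (mu, sg) in zip(sample, params))
    return max(rules, key=lambda label: firing(rules[label]))

# Hypothetical two-feature test scores for two of the four syndromes.
training = {
    "Broca":    [(30.0, 70.0), (35.0, 72.0), (32.0, 68.0)],
    "Wernicke": [(75.0, 30.0), (70.0, 28.0), (78.0, 33.0)],
}
rules = build_rules(training)
print(classify(rules, (33.0, 69.0)))   # sample near Broca's training statistics
```

The paper's hierarchical structure stacks a second rule layer over such per-symptom rules; this sketch shows only the single-layer idea of statistics-derived memberships.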

  18. Arrhythmias Following Comprehensive Stage II Surgical Palliation in Single Ventricle Patients.

    PubMed

    Wilhelm, Carolyn M; Paulus, Diane; Cua, Clifford L; Kertesz, Naomi J; Cheatham, John P; Galantowicz, Mark; Fernandez, Richard P

    2016-03-01

    Post-operative arrhythmias are common in pediatric patients following cardiac surgery. Following hybrid palliation in single ventricle patients, a comprehensive stage II palliation is performed. The incidence of arrhythmias in patients following comprehensive stage II palliation is unknown. The purpose of this study is to determine the incidence of arrhythmias following comprehensive stage II palliation. A single-center retrospective chart review was performed on all single ventricle patients undergoing a comprehensive stage II palliation from January 2010 to May 2014. Pre-operative, operative, and post-operative data were collected. A clinically significant arrhythmia was defined as an arrhythmia which led to cardiopulmonary resuscitation or required treatment with either pacing or antiarrhythmic medication. Statistical analysis was performed with Wilcoxon rank-sum test and Fisher's exact test with p < 0.05 significant. Forty-eight single ventricle patients were reviewed (32 hypoplastic left heart syndrome, 16 other single ventricle variants). Age at surgery was 185 ± 56 days. Cardiopulmonary bypass time was 259 ± 45 min. Average vasoactive-inotropic score was 5.97 ± 7.58. Six patients (12.5 %) had clinically significant arrhythmias: four sinus bradycardia, one 2:1 atrioventricular block, and one slow junctional rhythm. No tachyarrhythmias were documented for this patient population. Presence of arrhythmia was associated with elevated lactate (p = 0.04) and cardiac arrest (p = 0.002). Following comprehensive stage II palliation, single ventricle patients are at low risk for development of tachyarrhythmias. The most frequent arrhythmia seen in these patients was sinus bradycardia associated with respiratory compromise.
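Fisher's exact test named in the analysis above reduces, for a 2x2 table, to a hypergeometric tail sum; a minimal one-sided sketch with hypothetical counts (not the study's 48 patients):

```python
import math

def fisher_exact_greater(a, b, c, d):
    """One-sided (greater) Fisher exact p-value for the 2x2 table
    [[a, b], [c, d]]: the upper hypergeometric tail of the a cell."""
    row1, col1, n = a + b, a + c, a + b + c + d
    p = 0.0
    for k in range(a, min(row1, col1) + 1):
        p += (math.comb(col1, k) * math.comb(n - col1, row1 - k)
              / math.comb(n, row1))
    return p

# Hypothetical table: rows = arrhythmia yes/no, columns = cardiac arrest yes/no.
p = fisher_exact_greater(8, 2, 1, 5)
print(p)   # 196/8008
```

The two-sided version sums all tables at least as extreme in either direction; the one-sided tail above is the simplest form of the same exact computation.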

  19. The effect of a rehabilitation nursing intervention model on improving the comprehensive health status of patients with hand burns.

    PubMed

    Li, Lin; Dai, Jia-Xi; Xu, Le; Huang, Zhen-Xia; Pan, Qiong; Zhang, Xi; Jiang, Mei-Yun; Chen, Zhao-Hong

    2017-06-01

    To observe the effect of a rehabilitation intervention on the comprehensive health status of patients with hand burns. Most studies of hand-burn patients have focused on functional recovery; there have been no studies involving a biological-psychological-social rehabilitation model of hand-burn patients. A randomized controlled design was used. Sixty patients with hand burns were recruited and separated into two groups: (1) the rehabilitation intervention model group (n=30), which completed the rehabilitation intervention model comprising enhanced social support, intensive health education, comprehensive psychological intervention, and graded exercise; and (2) the control group (n=30), which completed routine treatment. The intervention lasted 5 weeks. Analysis of variance (ANOVA) and Student's t test were conducted. The rehabilitation intervention group had significantly better scores than the control group for comprehensive health, physical function, psychological function, social function, and general health. The differences between the index scores of the two groups were statistically significant. The rehabilitation intervention improved the comprehensive health status of patients with hand burns and has favorable clinical application. The comprehensive rehabilitation intervention model used here provides scientific guidance for medical staff aiming to improve the integrated health status of hand-burn patients and accelerate their recovery. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
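    The ANOVA and Student's t test comparison reported above can be sketched as follows, with hypothetical scores; note that with only two groups, the one-way ANOVA F statistic is exactly the squared t statistic:

```python
import numpy as np
from scipy.stats import ttest_ind, f_oneway

rng = np.random.default_rng(0)
# Hypothetical post-intervention comprehensive-health scores (0-100 scale).
intervention = rng.normal(80, 6, 30)
control = rng.normal(68, 6, 30)

t, p = ttest_ind(intervention, control)      # two-sample t test
F, p_anova = f_oneway(intervention, control)  # one-way ANOVA

# With exactly two groups the two tests are equivalent: F == t**2.
print(p < 0.05, np.isclose(F, t**2))
```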

  20. Moral Virtue and Practical Wisdom: Theme Comprehension in Children, Youth and Adults

    PubMed Central

    Narvaez, Darcia; Gleason, Tracy; Mitchell, Christyan

    2010-01-01

    Three hypotheses were tested about the relation of moral comprehension to prudential comprehension by contrasting comprehension of themes in moral stories with comprehension of themes in prudential stories among third grade, fifth grade, and college students (n = 168) in Study 1, and among college students, young and middle-aged adults, and older adults (n = 96) in Study 2. In both studies, all groups were statistically significantly better at moral theme comprehension than prudential theme comprehension, suggesting that moral comprehension may develop prior to prudential comprehension. In Study 2, all groups performed equally on moral theme generation, whereas both adult groups were significantly better than college students on prudential theme generation. Overall, the findings of these studies provide modest evidence that moral and prudential comprehension each develop separately, and that the latter may develop more slowly. PMID:21171549

  1. Transportation statistics annual report 2000

    DOT National Transportation Integrated Search

    2001-01-01

    The Transportation Statistics Annual Report (TSAR) is a Congressionally mandated publication with wide distribution. The TSAR provides the most comprehensive overview of U.S. transportation that is done on an annual basis. TSAR examines the extent of...

  2. Results of the NaCo Large Program: probing the occurrence of exoplanets and brown dwarfs at wide orbit

    NASA Astrophysics Data System (ADS)

    Vigan, A.; Chauvin, G.; Bonavita, M.; Desidera, S.; Bonnefoy, M.; Mesa, D.; Beuzit, J.-L.; Augereau, J.-C.; Biller, B.; Boccaletti, A.; Brugaletta, E.; Buenzli, E.; Carson, J.; Covino, E.; Delorme, P.; Eggenberger, A.; Feldt, M.; Hagelberg, J.; Henning, T.; Lagrange, A.-M.; Lanzafame, A.; Ménard, F.; Messina, S.; Meyer, M.; Montagnier, G.; Mordasini, C.; Mouillet, D.; Moutou, C.; Mugnier, L.; Quanz, S. P.; Reggiani, M.; Ségransan, D.; Thalmann, C.; Waters, R.; Zurlo, A.

    2014-01-01

    Over the past decade, a growing number of deep imaging surveys have started to provide meaningful constraints on the population of extrasolar giant planets at large orbital separation. Primary targets for these surveys have been carefully selected based on their age, distance and spectral type, and often on their membership in young nearby associations where all stars share common kinematic, photometric and spectroscopic properties. The next step is a wider statistical analysis of the frequency and properties of low-mass companions as a function of stellar mass and orbital separation. In late 2009, we initiated a coordinated European Large Program using angular differential imaging in the H band (1.66 μm) with NaCo at the VLT. Our aim is to provide a comprehensive and statistically significant study of the occurrence of extrasolar giant planets and brown dwarfs at large (5-500 AU) orbital separation around ~150 young, nearby stars, a large fraction of which have never been observed at very deep contrast. The survey has now been completed and we present the data analysis and detection limits for the observed sample, for which we reach the planetary-mass domain at separations of >~50 AU on average. We also present the results of the statistical analysis that has been performed over the 75 targets newly observed at high contrast. We discuss the details of the statistical analysis and the physical constraints that our survey provides for the frequency and formation scenario of planetary mass companions at large separation.

  3. Toward a comprehensive and systematic methylome signature in colorectal cancers.

    PubMed

    Ashktorab, Hassan; Rahi, Hamed; Wansley, Daniel; Varma, Sudhir; Shokrani, Babak; Lee, Edward; Daremipouran, Mohammad; Laiyemo, Adeyinka; Goel, Ajay; Carethers, John M; Brim, Hassan

    2013-08-01

    CpG Island Methylator Phenotype (CIMP) is one of the underlying mechanisms in colorectal cancer (CRC). This study aimed to define a methylome signature in CRC through a methylation microarray analysis and a compilation of promising CIMP markers from the literature. Illumina HumanMethylation27 (IHM27) array data were generated and analyzed based on statistical differences in methylation data (1st approach) or based on overall differences in methylation percentages using the lower 95% CI (2nd approach). Pyrosequencing was performed for the validation of nine genes. A meta-analysis was used to identify CIMP and non-CIMP markers that were hypermethylated in CRC but had not yet made it onto the list of CIMP genes. Our 1st approach for array data analysis demonstrated the limitations in selecting genes for further validation, highlighting the need for the 2nd bioinformatics approach to adequately select genes with differential aberrant methylation. A more comprehensive list, which included non-CIMP genes such as APC, EVL, CD109, PTEN, TWIST1, DCC, PTPRD, SFRP1, ICAM5, RASSF1A, EYA4, 30ST2, LAMA1, KCNQ5, ADHEF1, and TFPI2, was established. Array data are useful to categorize and cluster colonic lesions based on their global methylation profiles; however, their usefulness in identifying robust methylation markers is limited and relies on the data analysis method. We have identified 16 non-CIMP-panel genes for which we provide a rationale for inclusion in a more comprehensive characterization of CIMP+ CRCs. The identification of a definitive list of methylome-specific genes in CRC will contribute to better clinical management of CRC patients.

  4. Jllumina - A comprehensive Java-based API for statistical Illumina Infinium HumanMethylation450 and Infinium MethylationEPIC BeadChip data processing.

    PubMed

    Almeida, Diogo; Skov, Ida; Lund, Jesper; Mohammadnejad, Afsaneh; Silva, Artur; Vandin, Fabio; Tan, Qihua; Baumbach, Jan; Röttger, Richard

    2016-10-01

    Measuring differential methylation of the DNA is nowadays the most common approach to linking epigenetic modifications to diseases (in so-called epigenome-wide association studies, EWAS). Owing to their low cost, efficiency, and easy handling, the Illumina HumanMethylation450 BeadChip and its successor, the Infinium MethylationEPIC BeadChip, are by far the most popular techniques for conducting EWAS in large patient cohorts. Despite the popularity of this chip technology, raw data processing and statistical analysis of the array data remain far from trivial and still lack dedicated software libraries enabling high-quality and statistically sound downstream analyses. As yet, only R-based solutions are freely available for low-level processing of the Illumina chip data. However, the lack of alternative libraries poses a hurdle for the development of new bioinformatic tools, in particular when it comes to web services or applications where run time and memory consumption matter, or where EWAS data analysis is an integrative part of a bigger framework or data analysis pipeline. We have therefore developed and implemented Jllumina, an open-source Java library for raw data manipulation of Illumina Infinium HumanMethylation450 and Infinium MethylationEPIC BeadChip data, supporting the developer with Java functions covering reading and preprocessing of the raw data, down to statistical assessment, permutation tests, and identification of differentially methylated loci. Jllumina is fully parallelizable and publicly available at http://dimmer.compbio.sdu.dk/download.html.

  5. Jllumina - A comprehensive Java-based API for statistical Illumina Infinium HumanMethylation450 and MethylationEPIC data processing.

    PubMed

    Almeida, Diogo; Skov, Ida; Lund, Jesper; Mohammadnejad, Afsaneh; Silva, Artur; Vandin, Fabio; Tan, Qihua; Baumbach, Jan; Röttger, Richard

    2016-12-18

    Measuring differential methylation of the DNA is nowadays the most common approach to linking epigenetic modifications to diseases (in so-called epigenome-wide association studies, EWAS). Owing to their low cost, efficiency, and easy handling, the Illumina HumanMethylation450 BeadChip and its successor, the Infinium MethylationEPIC BeadChip, are by far the most popular techniques for conducting EWAS in large patient cohorts. Despite the popularity of this chip technology, raw data processing and statistical analysis of the array data remain far from trivial and still lack dedicated software libraries enabling high-quality and statistically sound downstream analyses. As yet, only R-based solutions are freely available for low-level processing of the Illumina chip data. However, the lack of alternative libraries poses a hurdle for the development of new bioinformatic tools, in particular when it comes to web services or applications where run time and memory consumption matter, or where EWAS data analysis is an integrative part of a bigger framework or data analysis pipeline. We have therefore developed and implemented Jllumina, an open-source Java library for raw data manipulation of Illumina Infinium HumanMethylation450 and Infinium MethylationEPIC BeadChip data, supporting the developer with Java functions covering reading and preprocessing of the raw data, down to statistical assessment, permutation tests, and identification of differentially methylated loci. Jllumina is fully parallelizable and publicly available at http://dimmer.compbio.sdu.dk/download.html.
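    Jllumina's own Java API is not spelled out in the abstract, but the low-level processing it covers rests on the standard Illumina beta-value formula, beta = M / (M + U + 100), plus permutation tests for differential methylation. A Python sketch of both, with invented intensities at a single hypothetical CpG site:

```python
import numpy as np

def beta_value(meth, unmeth, offset=100):
    """Standard Illumina beta value: M / (M + U + offset)."""
    return meth / (meth + unmeth + offset)

def permutation_pvalue(case, ctrl, n_perm=2000, seed=0):
    """Two-sided permutation test on the difference of group means."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([case, ctrl])
    observed = abs(case.mean() - ctrl.mean())
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabeling of the samples
        diff = abs(pooled[:len(case)].mean() - pooled[len(case):].mean())
        if diff >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)   # add-one avoids a p-value of zero

# Hypothetical methylated/unmethylated intensities at one CpG site.
case = beta_value(np.array([9000., 8500., 9200., 8800.]),
                  np.array([1000., 1200.,  900., 1100.]))
ctrl = beta_value(np.array([3000., 2800., 3200., 3100.]),
                  np.array([7000., 7400., 6800., 6900.]))
p = permutation_pvalue(case, ctrl)
print(p)
```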

  6. The Effect of Folate and Folate Plus Zinc Supplementation on Endocrine Parameters and Sperm Characteristics in Sub-Fertile Men: A Systematic Review and Meta-Analysis.

    PubMed

    Irani, Morvarid; Amirian, Malihe; Sadeghi, Ramin; Lez, Justine Le; Latifnejad Roudsari, Robab

    2017-08-29

    To evaluate the effect of folate and folate plus zinc supplementation on endocrine parameters and sperm characteristics in sub-fertile men. We conducted a systematic review and meta-analysis. The electronic databases Medline, Scopus, Google Scholar, and Persian databases (SID, Iran medex, Magiran, Medlib, Iran doc) were searched from 1966 to December 2016 using a set of relevant keywords including "folate or folic acid AND (infertility, infertile, sterility)". All available randomized controlled trials (RCTs), conducted on samples of sub-fertile men with semen analyses who took oral folic acid or folate plus zinc, were included. Data collected included endocrine parameters and sperm characteristics. Statistical analyses were done with Comprehensive Meta-analysis Version 2. In total, seven studies were included, six of which had sufficient data for meta-analysis. Sperm concentration was statistically higher in men supplemented with folate than with placebo (P < .001). However, folate supplementation alone did not seem to be more effective than placebo on sperm morphology (P = .056) or motility (P = .652). Folate plus zinc supplementation did not show any statistically different effect on serum testosterone (P = .86), inhibin B (P = .84), FSH (P = .054), or sperm motility (P = .169) as compared to placebo. Yet folate plus zinc showed a statistically greater effect on sperm concentration (P < .001), morphology (P < .001), and serum folate level (P < .001) as compared to placebo. Folate plus zinc supplementation has a positive effect on sperm characteristics in sub-fertile men. However, these results should be interpreted with caution owing to the important heterogeneity of the studies included in this meta-analysis. Further trials are still needed to confirm the current findings.
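    The inverse-variance pooling that underlies a fixed-effect meta-analysis of this kind can be sketched in a few lines. The effect sizes and standard errors below are invented for illustration, not the review's trials:

```python
import math

# Hypothetical per-study effects (mean difference in sperm concentration,
# millions/mL) and their standard errors.
effects = [4.0, 2.5, 3.2, 5.1]
ses = [1.2, 0.9, 1.5, 2.0]

weights = [1 / se**2 for se in ses]               # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
print(round(pooled, 2), tuple(round(x, 2) for x in ci))
# -> 3.26 (2.05, 4.47)
```

A random-effects model (as typically used when heterogeneity is important, which the review's conclusion flags) would additionally add a between-study variance term to each weight.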

  7. MetaboLyzer: A Novel Statistical Workflow for Analyzing Post-Processed LC/MS Metabolomics Data

    PubMed Central

    Mak, Tytus D.; Laiakis, Evagelia C.; Goudarzi, Maryam; Fornace, Albert J.

    2014-01-01

    Metabolomics, the global study of small molecules in a particular system, has in the last few years risen to become a primary -omics platform for the study of metabolic processes. With the ever-increasing pool of quantitative data yielded from metabolomic research, specialized methods and tools with which to analyze and extract meaningful conclusions from these data are becoming more and more crucial. Furthermore, the depth of knowledge and expertise required to undertake a metabolomics-oriented study is a daunting obstacle to investigators new to the field. As such, we have created a new statistical analysis workflow, MetaboLyzer, which aims both to simplify analysis for investigators new to metabolomics and to provide experienced investigators the flexibility to conduct sophisticated analysis. MetaboLyzer's workflow is specifically tailored to the unique characteristics and idiosyncrasies of post-processed liquid chromatography/mass spectrometry (LC/MS) based metabolomic datasets. It utilizes a wide gamut of statistical tests, procedures, and methodologies that belong to classical biostatistics, as well as several novel statistical techniques that we have developed specifically for metabolomics data. Furthermore, MetaboLyzer conducts rapid putative ion identification and putative biologically relevant analysis via incorporation of four major small molecule databases: KEGG, HMDB, Lipid Maps, and BioCyc. MetaboLyzer incorporates these aspects into a comprehensive workflow that outputs easy-to-understand statistically significant and potentially biologically relevant information in the form of heatmaps, volcano plots, 3D visualization plots, correlation maps, and metabolic pathway hit histograms. For demonstration purposes, a urine metabolomics data set from a previously reported radiobiology study, in which samples were collected from mice exposed to gamma radiation, was analyzed. MetaboLyzer was able to identify 243 statistically significant ions out of a total of 1942. Numerous putative metabolites and pathways were found to be biologically significant from the putative ion identification workflow. PMID:24266674
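    The core of a volcano-plot step, a per-ion fold change paired with a test p-value, can be sketched as follows. The intensities are synthetic stand-ins for an LC/MS dataset, with the first 20 ions deliberately shifted in the "exposed" group:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n_ions = 200
# Synthetic log-normal ion intensities: 8 control vs. 8 exposed samples.
control = rng.lognormal(mean=5.0, sigma=0.3, size=(8, n_ions))
exposed = rng.lognormal(mean=5.0, sigma=0.3, size=(8, n_ions))
exposed[:, :20] *= 3.0   # plant a 3-fold increase in the first 20 ions

log2_fc = np.log2(exposed.mean(axis=0) / control.mean(axis=0))
_, pvals = ttest_ind(np.log(exposed), np.log(control), axis=0)

# A volcano plot draws log2 fold change against -log10(p); here we just
# count the ions passing a typical double cutoff.
hits = int(np.sum((np.abs(log2_fc) > 1.0) & (pvals < 0.05)))
print(hits)
```

Real workflows like the one described would follow this with multiple-testing correction and database lookup of the significant ions.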

  8. Sources of Error and the Statistical Formulation of M_S:m_b Seismic Event Screening Analysis

    NASA Astrophysics Data System (ADS)

    Anderson, D. N.; Patton, H. J.; Taylor, S. R.; Bonner, J. L.; Selby, N. D.

    2014-03-01

    The Comprehensive Nuclear-Test-Ban Treaty (CTBT), a global ban on nuclear explosions, is currently in a ratification phase. Under the CTBT, an International Monitoring System (IMS) of seismic, hydroacoustic, infrasonic and radionuclide sensors is operational, and the data from the IMS are analysed by the International Data Centre (IDC). The IDC provides CTBT signatories basic seismic event parameters and a screening analysis indicating whether an event exhibits explosion characteristics (for example, shallow depth). An important component of the screening analysis is a statistical test of the null hypothesis H0: explosion characteristics, using empirical measurements of seismic energy (magnitudes). The established magnitude used for event size is the body-wave magnitude (denoted m_b), computed from the initial segment of a seismic waveform. IDC screening analysis is applied to events with m_b greater than 3.5. The Rayleigh wave magnitude (denoted M_S) is a measure of later-arriving surface wave energy. Magnitudes are measurements of seismic energy that include adjustments (a physical correction model) for path and distance effects between event and station. Relative to m_b, earthquakes generally have a larger M_S magnitude than explosions. This article proposes a hypothesis test (screening analysis) using M_S and m_b that expressly accounts for physical correction model inadequacy in the standard error of the test statistic. With this hypothesis test formulation, the 2009 Democratic People's Republic of Korea announced nuclear weapon test fails to reject the null hypothesis H0: explosion characteristics.
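    The screening logic can be illustrated schematically: reject H0 (explosion characteristics) when the observed M_S sits far above an assumed explosion population line, with a model-inadequacy term folded into the standard error, as the article proposes. All constants below are illustrative assumptions, not IDC values:

```python
import math

def ms_mb_screen(ms, mb, slope=1.25, intercept=-2.20,
                 sigma_model=0.15, sigma_station=0.2, n_stations=10):
    """Schematic M_S:m_b screen.

    H0: the event has explosion characteristics, i.e. M_S lies on (or
    below) an assumed explosion population line M_S = slope*m_b + intercept.
    The standard error combines averaged station scatter with a fixed
    sigma_model term representing physical-correction-model inadequacy.
    Every constant here is a made-up illustration.
    """
    expected_ms = slope * mb + intercept
    se = math.sqrt(sigma_model**2 + sigma_station**2 / n_stations)
    z = (ms - expected_ms) / se
    # Earthquakes sit above the explosion line, so reject H0 for large z.
    z_crit = 1.645  # one-sided 5% point of the standard normal
    return z, z > z_crit

# An event with a large surface-wave magnitude relative to its m_b:
z, earthquake_like = ms_mb_screen(ms=4.6, mb=4.5)
print(round(z, 2), earthquake_like)
```

Inflating the standard error with sigma_model is the article's key point: ignoring correction-model inadequacy makes the test overconfident and screens out too many events.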

  9. South Carolina Higher Education Statistical Abstract, 2014. 36th Edition

    ERIC Educational Resources Information Center

    Armour, Mim, Ed.

    2014-01-01

    The South Carolina Higher Education Statistical Abstract is a comprehensive, single-source compilation of tables and graphs which report data frequently requested by the Governor, Legislators, college and university staff, other state government officials, and the general public. The 2014 edition of the Statistical Abstract marks the 36th year of…

  10. South Carolina Higher Education Statistical Abstract, 2015. 37th Edition

    ERIC Educational Resources Information Center

    Armour, Mim, Ed.

    2015-01-01

    The South Carolina Higher Education Statistical Abstract is a comprehensive, single-source compilation of tables and graphs which report data frequently requested by the Governor, Legislators, college and university staff, other state government officials, and the general public. The 2015 edition of the Statistical Abstract marks the 37th year of…

  11. Statistical Handbook on Consumption and Wealth in the United States.

    ERIC Educational Resources Information Center

    Kaul, Chandrika, Ed.; Tomaselli-Moschovitis, Valerie, Ed.

    This easy-to-use statistical handbook features the most up-to-date and comprehensive data related to U.S. wealth and consumer spending patterns. More than 300 statistical tables and charts are organized into 8 detailed sections. Intended for students, teachers, and general users, the handbook contains these sections: (1) "General Economic…

  12. Monte Carlo investigation of thrust imbalance of solid rocket motor pairs

    NASA Technical Reports Server (NTRS)

    Sforzini, R. H.; Foster, W. A., Jr.

    1976-01-01

    The Monte Carlo method of statistical analysis is used to investigate the theoretical thrust imbalance of pairs of solid rocket motors (SRMs) firing in parallel. Sets of the significant variables are selected using a random sampling technique and the imbalance calculated for a large number of motor pairs using a simplified, but comprehensive, model of the internal ballistics. The treatment of burning surface geometry allows for the variations in the ovality and alignment of the motor case and mandrel as well as those arising from differences in the basic size dimensions and propellant properties. The analysis is used to predict the thrust-time characteristics of 130 randomly selected pairs of Titan IIIC SRMs. A statistical comparison of the results with test data for 20 pairs shows the theory underpredicts the standard deviation in maximum thrust imbalance by 20% with variability in burning times matched within 2%. The range in thrust imbalance of Space Shuttle type SRM pairs is also estimated using applicable tolerances and variabilities and a correction factor based on the Titan IIIC analysis.
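    The Monte Carlo scheme described, drawing random sets of the significant variables and computing the imbalance for many motor pairs, can be sketched with a toy surrogate standing in for the internal-ballistics model. All parameter values and tolerances below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

def max_thrust(burn_rate, throat_area, density):
    """Toy surrogate for the internal-ballistics model (illustrative only)."""
    return 1.0e6 * burn_rate * density / throat_area

n_pairs = 130
imbalances = []
for _ in range(n_pairs):
    # Each motor of a pair gets its own random draw of the variables:
    # a nominal value times a small manufacturing tolerance.
    thrusts = [max_thrust(burn_rate=0.01 * rng.normal(1.0, 0.010),
                          throat_area=0.5 * rng.normal(1.0, 0.005),
                          density=1750 * rng.normal(1.0, 0.008))
               for _ in range(2)]
    imbalances.append(abs(thrusts[0] - thrusts[1]))

imbalances = np.array(imbalances)
# The distribution of imbalance over many sampled pairs is the quantity
# the study compares against test data.
print(round(float(imbalances.mean()), 1), round(float(imbalances.std()), 1))
```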

  13. Improved Bond Equations for Fiber-Reinforced Polymer Bars in Concrete.

    PubMed

    Pour, Sadaf Moallemi; Alam, M Shahria; Milani, Abbas S

    2016-08-30

    This paper explores a set of new equations to predict the bond strength between fiber reinforced polymer (FRP) rebar and concrete. The proposed equations are based on a comprehensive statistical analysis and existing experimental results in the literature. Namely, the most effective parameters on bond behavior of FRP concrete were first identified by applying a factorial analysis to part of the available database. Then the database, which contains 250 pullout tests, was divided into four groups based on the concrete compressive strength and the rebar surface. Afterward, nonlinear regression analysis was performed for each study group in order to determine the bond equations. The results show that the proposed equations can predict bond strengths more accurately than previously reported models.
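    A nonlinear regression step of the kind described can be sketched with scipy.optimize.curve_fit. The functional form and the synthetic data below are illustrative assumptions, not the paper's equations or database:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical bond-strength model: tau = a * sqrt(fc) * (cover/d)**b.
def bond_model(X, a, b):
    fc, cover_ratio = X
    return a * np.sqrt(fc) * cover_ratio**b

# Synthetic "pullout test" data generated from known parameters plus noise,
# so we can check that the fit recovers them.
rng = np.random.default_rng(3)
fc = rng.uniform(25, 60, 80)        # concrete compressive strength, MPa
cover = rng.uniform(1.0, 3.0, 80)   # cover-to-diameter ratio
tau = 1.8 * np.sqrt(fc) * cover**0.35 + rng.normal(0, 0.3, 80)

(a_hat, b_hat), _ = curve_fit(bond_model, (fc, cover), tau, p0=(1.0, 0.5))
print(round(a_hat, 2), round(b_hat, 2))
```

Repeating such a fit separately within each subgroup (here, by concrete strength and rebar surface) is what yields a family of bond equations rather than a single global one.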

  14. Overlapping Genetic and Child-Specific Nonshared Environmental Influences on Listening Comprehension, Reading Motivation, and Reading Comprehension

    PubMed Central

    Schenker, Victoria J.; Petrill, Stephen A.

    2015-01-01

    This study investigated the genetic and environmental influences on observed associations between listening comprehension, reading motivation, and reading comprehension. Univariate and multivariate quantitative genetic models were conducted in a sample of 284 pairs of twins at a mean age of 9.81 years. Genetic and nonshared environmental factors accounted for statistically significant variance in listening and reading comprehension, and nonshared environmental factors accounted for variance in reading motivation. Furthermore, listening comprehension demonstrated unique genetic and nonshared environmental influences but also had overlapping genetic influences with reading comprehension. Reading motivation and reading comprehension each had unique and overlapping nonshared environmental contributions. Therefore, listening comprehension appears to be related to reading primarily due to genetic factors whereas motivation appears to affect reading via child-specific, nonshared environmental effects. PMID:26321677

  15. Overlapping genetic and child-specific nonshared environmental influences on listening comprehension, reading motivation, and reading comprehension.

    PubMed

    Schenker, Victoria J; Petrill, Stephen A

    2015-01-01

    This study investigated the genetic and environmental influences on observed associations between listening comprehension, reading motivation, and reading comprehension. Univariate and multivariate quantitative genetic models were conducted in a sample of 284 pairs of twins at a mean age of 9.81 years. Genetic and nonshared environmental factors accounted for statistically significant variance in listening and reading comprehension, and nonshared environmental factors accounted for variance in reading motivation. Furthermore, listening comprehension demonstrated unique genetic and nonshared environmental influences but also had overlapping genetic influences with reading comprehension. Reading motivation and reading comprehension each had unique and overlapping nonshared environmental contributions. Therefore, listening comprehension appears to be related to reading primarily due to genetic factors whereas motivation appears to affect reading via child-specific, nonshared environmental effects. Copyright © 2015 Elsevier Inc. All rights reserved.
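    The study fits full univariate and multivariate quantitative genetic models; the underlying logic of splitting variance into genetic, shared-environmental, and nonshared-environmental parts can be illustrated with Falconer's textbook estimates from identical (MZ) and fraternal (DZ) twin correlations. The correlations below are hypothetical, not the study's estimates:

```python
def falconer(r_mz, r_dz):
    """Falconer's estimates: additive genetic variance A = 2*(r_MZ - r_DZ),
    shared environment C = 2*r_DZ - r_MZ, nonshared environment (plus
    measurement error) E = 1 - r_MZ."""
    a2 = 2 * (r_mz - r_dz)
    c2 = r_mz - a2
    e2 = 1 - r_mz
    return a2, c2, e2

# Hypothetical twin correlations for a comprehension measure.
a2, c2, e2 = falconer(r_mz=0.60, r_dz=0.35)
print(a2, c2, e2)
```

The three components sum to 1 by construction; model-fitting approaches such as those in the study estimate the same decomposition with standard errors and allow overlap across traits to be tested.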

  16. Accuracy statistics in predicting Independent Activities of Daily Living (IADL) capacity with comprehensive and brief neuropsychological test batteries.

    PubMed

    Karzmark, Peter; Deutsch, Gayle K

    2018-01-01

    This investigation was designed to determine the predictive accuracy of a comprehensive neuropsychological and a brief neuropsychological test battery with regard to the capacity to perform instrumental activities of daily living (IADLs). Accuracy statistics, including measures of sensitivity, specificity, positive and negative predictive power, and positive likelihood ratio, were calculated for both types of batteries. The sample was drawn from a general neurological group of adults (n = 117) that included a number of older participants (age > 55; n = 38). Standardized neuropsychological assessments were administered to all participants and comprised the Halstead-Reitan Battery and portions of the Wechsler Adult Intelligence Scale-III. A comprehensive test battery yielded a moderate increase over base rate in predictive accuracy that generalized to older individuals. There was only limited support for using a brief battery, for although sensitivity was high, specificity was low. We found that a comprehensive neuropsychological test battery provided good classification accuracy for predicting IADL capacity.
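    The accuracy statistics listed in this record follow directly from a 2x2 table of predicted versus actual IADL capacity; a sketch with invented counts:

```python
def accuracy_stats(tp, fp, fn, tn):
    """Accuracy statistics from a 2x2 classification table
    (counts are illustrative, not the study's data)."""
    sens = tp / (tp + fn)            # sensitivity
    spec = tn / (tn + fp)            # specificity
    ppv = tp / (tp + fp)             # positive predictive power
    npv = tn / (tn + fn)             # negative predictive power
    lr_pos = sens / (1 - spec)       # positive likelihood ratio
    return sens, spec, ppv, npv, lr_pos

sens, spec, ppv, npv, lr_pos = accuracy_stats(tp=40, fp=10, fn=8, tn=59)
print(round(sens, 2), round(spec, 2), round(ppv, 2), round(npv, 2))
```

The pattern the study reports for the brief battery, high sensitivity with low specificity, shows up here as a small LR+: a positive result adds little evidence when false positives are common.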

  17. Beyond Readability: Investigating Coherence of Clinical Text for Consumers

    PubMed Central

    Hetzel, Scott; Dalrymple, Prudence; Keselman, Alla

    2011-01-01

    Background A basic tenet of consumer health informatics is that understandable health resources empower the public. Text comprehension holds great promise for helping to characterize consumer problems in understanding health texts. The need for efficient ways to assess consumer-oriented health texts and the availability of computationally supported tools led us to explore the effect of various text characteristics on readers' understanding of health texts, as well as to develop novel approaches to assessing these characteristics. Objective The goal of this study was to compare the impact of two different approaches to enhancing readability, and three interventions, on individuals' comprehension of short, complex passages of health text. Methods Participants were 80 university staff, faculty, or students. Each participant was asked to "retell" the content of two health texts: one a clinical trial in the domain of diabetes mellitus, and the other typical Visit Notes. These texts were transformed for the intervention arms of the study. Two interventions provided terminology support via (1) a standard dictionary or (2) contextualized vocabulary definitions. The third intervention provided coherence improvement. We assessed participants' comprehension of the clinical texts through propositional analysis, an open-ended questionnaire, and analysis of the number of errors made. Results For the clinical trial text, the effect of text condition was not significant in any of the comparisons, suggesting no differences in recall despite the varying levels of support (P = .84). For the Visit Note, however, the difference in the median total propositions recalled between the Coherent and the (Original + Dictionary) conditions was significant (P = .04). This suggests that participants in the Coherent condition recalled more of the original Visit Notes content than did participants in the Original and the Dictionary conditions combined. However, no difference was seen between (Original + Dictionary) and Vocabulary (P = .36) or between Coherent and Vocabulary (P = .62). No statistically significant effect of any document transformation was found either in the open-ended questionnaire (clinical trial: P = .86, Visit Note: P = .20) or in the error rate (clinical trial: P = .47, Visit Note: P = .25). However, post hoc power analysis suggested that increasing the sample size by approximately 6 participants per condition would result in a significant difference for the Visit Note, but not for the clinical trial text. Conclusions Statistically, the results of this study attest that improving coherence has a small effect on consumer comprehension of clinical text, but the task is extremely labor intensive and not scalable. Further research is needed using texts from more diverse clinical domains and more heterogeneous participants, including actual patients. Since comprehensibility of clinical text appears difficult to automate, informatics support tools may most productively support the health care professionals tasked with making clinical information understandable to patients. PMID:22138127

  18. Primary Health Care Evaluation: the view of clients and professionals about the Family Health Strategy.

    PubMed

    da Silva, Simone Albino; Baitelo, Tamara Cristina; Fracolli, Lislaine Aparecida

    2015-01-01

    To evaluate the attributes of primary health care (access; longitudinality; comprehensiveness; coordination; family counseling and community counseling) in the Family Health Strategy, triangulating and comparing the views of the stakeholders involved in the care process. This was evaluative research with a quantitative approach and cross-sectional design. Data were collected using the Primary Care Assessment Tool in interviews with 527 adult clients, 34 health professionals, and 330 parents of children up to two years old, related to 33 family health teams in eleven municipalities. Analysis was conducted in the Statistical Package for the Social Sciences software, with a confidence interval of 95% and an error of 0.1. The three groups assessed first-contact access (accessibility) with low scores. Professionals evaluated the other attributes with high scores. Clients assigned low scores to the attributes community counseling, family counseling, comprehensiveness (services rendered), and comprehensiveness (available services). The quality of performance self-reported by the professionals of the Family Health Strategy is not perceived or valued by clients, and the actions and services may have been developed inappropriately or insufficiently to be apprehended through the clients' experience.

  19. PlantNATsDB: a comprehensive database of plant natural antisense transcripts.

    PubMed

    Chen, Dijun; Yuan, Chunhui; Zhang, Jian; Zhang, Zhao; Bai, Lin; Meng, Yijun; Chen, Ling-Ling; Chen, Ming

    2012-01-01

    Natural antisense transcripts (NATs), as one type of regulatory RNAs, occur prevalently in plant genomes and play significant roles in physiological and pathological processes. Although their important biological functions have been reported widely, a comprehensive database is lacking up to now. Consequently, we constructed a plant NAT database (PlantNATsDB) involving approximately 2 million NAT pairs in 69 plant species. GO annotation and high-throughput small RNA sequencing data currently available were integrated to investigate the biological function of NATs. PlantNATsDB provides various user-friendly web interfaces to facilitate the presentation of NATs and an integrated, graphical network browser to display the complex networks formed by different NATs. Moreover, a 'Gene Set Analysis' module based on GO annotation was designed to dig out the statistical significantly overrepresented GO categories from the specific NAT network. PlantNATsDB is currently the most comprehensive resource of NATs in the plant kingdom, which can serve as a reference database to investigate the regulatory function of NATs. The PlantNATsDB is freely available at http://bis.zju.edu.cn/pnatdb/.
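    The 'Gene Set Analysis' overrepresentation test described for GO categories reduces, in its simplest form, to a hypergeometric tail probability; a sketch with invented counts:

```python
from scipy.stats import hypergeom

# Hypothetical numbers: a genome of 25000 annotated genes, 400 of which
# carry a given GO term; a NAT sub-network of 150 genes contains 12 hits.
M, n, N, k = 25000, 400, 150, 12

# P(X >= k): probability of drawing 12 or more term members by chance
# when sampling 150 genes from the genome without replacement.
p = hypergeom.sf(k - 1, M, n, N)
print(p < 0.001)
```

The expected number of hits by chance is N*n/M = 2.4, so 12 hits is a strong enrichment signal; a real pipeline would repeat this per GO category and correct for multiple testing.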

  20. Challenges in Biomarker Discovery: Combining Expert Insights with Statistical Analysis of Complex Omics Data

    PubMed Central

    McDermott, Jason E.; Wang, Jing; Mitchell, Hugh; Webb-Robertson, Bobbie-Jo; Hafen, Ryan; Ramey, John; Rodland, Karin D.

    2012-01-01

    Introduction The advent of high throughput technologies capable of comprehensive analysis of genes, transcripts, proteins and other significant biological molecules has provided an unprecedented opportunity for the identification of molecular markers of disease processes. However, it has simultaneously complicated the problem of extracting meaningful molecular signatures of biological processes from these complex datasets. The process of biomarker discovery and characterization provides opportunities for more sophisticated approaches to integrating purely statistical and expert knowledge-based approaches. Areas covered In this review we will present examples of current practices for biomarker discovery from complex omic datasets and the challenges that have been encountered in deriving valid and useful signatures of disease. We will then present a high-level review of data-driven (statistical) and knowledge-based methods applied to biomarker discovery, highlighting some current efforts to combine the two distinct approaches. Expert opinion Effective, reproducible and objective tools for combining data-driven and knowledge-based approaches to identify predictive signatures of disease are key to future success in the biomarker field. We will describe our recommendations for possible approaches to this problem including metrics for the evaluation of biomarkers. PMID:23335946

  1. Challenges in Biomarker Discovery: Combining Expert Insights with Statistical Analysis of Complex Omics Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDermott, Jason E.; Wang, Jing; Mitchell, Hugh D.

    2013-01-01

    The advent of high throughput technologies capable of comprehensive analysis of genes, transcripts, proteins and other significant biological molecules has provided an unprecedented opportunity for the identification of molecular markers of disease processes. However, it has simultaneously complicated the problem of extracting meaningful signatures of biological processes from these complex datasets. The process of biomarker discovery and characterization provides opportunities both for purely statistical and expert knowledge-based approaches and would benefit from improved integration of the two. Areas covered In this review we will present examples of current practices for biomarker discovery from complex omic datasets and the challenges that have been encountered. We will then present a high-level review of data-driven (statistical) and knowledge-based methods applied to biomarker discovery, highlighting some current efforts to combine the two distinct approaches. Expert opinion Effective, reproducible and objective tools for combining data-driven and knowledge-based approaches to biomarker discovery and characterization are key to future success in the biomarker field. We will describe our recommendations for possible approaches to this problem, including metrics for the evaluation of biomarkers.

  2. Selecting the "Best" Factor Structure and Moving Measurement Validation Forward: An Illustration.

    PubMed

    Schmitt, Thomas A; Sass, Daniel A; Chappelle, Wayne; Thompson, William

    2018-04-09

    Despite the broad literature base on factor analysis best practices, research seeking to evaluate a measure's psychometric properties frequently fails to consider or follow these recommendations. This leads to incorrect factor structures, numerous and often overly complex competing factor models and, perhaps most harmful, biased model results. Our goal is to demonstrate a practical and actionable process for factor analysis through (a) an overview of six statistical and psychometric issues and approaches to be aware of, investigate, and report when engaging in factor structure validation, along with a flowchart for recommended procedures to understand latent factor structures; (b) demonstrating these issues to provide a summary of the updated Posttraumatic Stress Disorder Checklist (PCL-5) factor models and a rationale for validation; and (c) conducting a comprehensive statistical and psychometric validation of the PCL-5 factor structure to demonstrate all the issues we described earlier. Considering previous research, the PCL-5 was evaluated using a sample of 1,403 U.S. Air Force remotely piloted aircraft operators with high levels of battlefield exposure. Previously proposed PCL-5 factor structures were not supported by the data, but instead a bifactor model is arguably more statistically appropriate.

  3. Choroidal Thickness Analysis in Patients with Usher Syndrome Type 2 Using EDI OCT.

    PubMed

    Colombo, L; Sala, B; Montesano, G; Pierrottet, C; De Cillà, S; Maltese, P; Bertelli, M; Rossetti, L

    2015-01-01

    To characterize Usher Syndrome type 2 by analyzing choroidal thickness and comparing the data with published literature on RP and healthy subjects. Methods. 20 eyes of 10 patients with clinical signs and a genetic diagnosis of Usher Syndrome type 2 were included. Each patient underwent a complete ophthalmologic examination including Best Corrected Visual Acuity (BCVA), intraocular pressure (IOP), axial length (AL), automated visual field (VF), and EDI OCT. Both retinal and choroidal thicknesses were measured. Statistical analysis was performed to correlate choroidal thickness with age, BCVA, IOP, AL, VF, and RT. Comparisons with data from healthy people and nonsyndromic RP patients were performed. Results. Mean subfoveal choroidal thickness (SFCT) was 248.21 ± 79.88 microns. SFCT was significantly correlated with age (correlation coefficient -0.7248179, p < 0.01). No statistically significant correlation was found between SFCT and BCVA, IOP, AL, VF, or RT. SFCT was reduced compared to healthy subjects (p < 0.01). No difference was found when compared to choroidal thickness in nonsyndromic RP patients (p = 0.2138). Conclusions. Our study demonstrated in vivo choroidal thickness reduction in patients with Usher Syndrome type 2. These data are important for understanding disease mechanisms and for evaluating therapeutic approaches.

  4. Toward standardized reporting for a cohort study on functioning: The Swiss Spinal Cord Injury Cohort Study.

    PubMed

    Prodinger, Birgit; Ballert, Carolina S; Brach, Mirjam; Brinkhof, Martin W G; Cieza, Alarcos; Hug, Kerstin; Jordan, Xavier; Post, Marcel W M; Scheel-Sailer, Anke; Schubert, Martin; Tennant, Alan; Stucki, Gerold

    2016-02-01

    Functioning is an important outcome to measure in cohort studies. Clear and operational outcomes are needed to judge the quality of a cohort study. This paper outlines guiding principles for reporting functioning in cohort studies and addresses some outstanding issues. Principles of how to standardize reporting of data from a cohort study on functioning, by deriving scores that are most useful for further statistical analysis and reporting, are outlined. The Swiss Spinal Cord Injury Cohort Study Community Survey serves as a case in point to provide a practical application of these principles. Development of reporting scores must be conceptually coherent and metrically sound. The International Classification of Functioning, Disability and Health (ICF) can serve as the frame of reference for this, with its categories serving as reference units for reporting. To derive a score for further statistical analysis and reporting, items measuring a single latent trait must be invariant across groups. The Rasch measurement model is well suited to test these assumptions. Our approach is a valuable guide for researchers and clinicians, as it fosters comparability of data, strengthens the comprehensiveness of scope, and provides invariant, interval-scaled data for further statistical analyses of functioning.
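
    The Rasch measurement model cited above relates a person's latent trait level θ to the probability of endorsing a dichotomous item of difficulty b via P = exp(θ − b) / (1 + exp(θ − b)). A minimal illustration of that formula (not the SwiSCI scoring code):

```python
import math

def rasch_probability(theta, b):
    """Dichotomous Rasch model: probability that a person with latent
    trait level theta endorses an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals item difficulty the probability is exactly 0.5;
# easier items (lower b) are endorsed more often at a given theta.
print(rasch_probability(0.0, 0.0))   # 0.5
print(rasch_probability(1.0, -1.0))  # well above 0.5
```

    The invariance property the paper relies on is that item difficulties b stay the same across groups; group-dependent b values signal differential item functioning.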

  5. Data article on the effectiveness of entrepreneurship curriculum contents on entrepreneurial interest and knowledge of Nigerian university students.

    PubMed

    Olokundun, Maxwell; Iyiola, Oluwole; Ibidunni, Stephen; Ogbari, Mercy; Falola, Hezekiah; Salau, Odunayo; Peter, Fred; Borishade, Taiye

    2018-06-01

    The article presented data on the effectiveness of entrepreneurship curriculum contents on university students' entrepreneurial interest and knowledge. The study focused on the perceptions of Nigerian university students, with emphasis on the first four universities in Nigeria to offer a degree programme in entrepreneurship. The study adopted a quantitative approach with a descriptive research design to establish trends related to the objective of the study. A survey was used as the quantitative research method. The population of this study included all students in the selected universities. Data were analyzed with the Statistical Package for the Social Sciences (SPSS), using the mean score as the statistical tool of analysis. The field data set is made widely accessible to enable critical or more comprehensive investigation.

  6. Screening of oil sources by using comprehensive two-dimensional gas chromatography/time-of-flight mass spectrometry and multivariate statistical analysis.

    PubMed

    Zhang, Wanfeng; Zhu, Shukui; He, Sheng; Wang, Yanxin

    2015-02-06

    Using comprehensive two-dimensional gas chromatography coupled to time-of-flight mass spectrometry (GC×GC/TOFMS), volatile and semi-volatile organic compounds in crude oil samples from different reservoirs or regions were analyzed for the development of a molecular fingerprint database. Based on the GC×GC/TOFMS fingerprints of crude oils, principal component analysis (PCA) and cluster analysis were used to distinguish the oil sources and find biomarkers. As a supervised technique, the geological characteristics of crude oils, including thermal maturity, sedimentary environment, etc., are assigned to the principal components. The results show that the tri-aromatic steroid (TAS) series are suitable marker compounds for oil screening, and the relative abundances of individual TAS compounds correlate strongly with oil sources. To correct for the effects of external factors other than oil source, the variables were defined as content ratios of target compounds, and 13 parameters were proposed for the screening of oil sources. With the developed model, the crude oils were easily discriminated, and the result is in good agreement with the practical geological setting. Copyright © 2014 Elsevier B.V. All rights reserved.
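
    As an illustration of the PCA step (not the authors' code; the fingerprint values are invented), samples can be projected onto the first principal component, which for two fingerprint variables has a closed form from the 2×2 covariance matrix:

```python
import math
import statistics

def first_pc_scores(x, y):
    """Project paired observations (x, y) onto the first principal
    component of their sample covariance matrix (closed form for 2x2)."""
    mx, my = statistics.mean(x), statistics.mean(y)
    a, c = statistics.variance(x), statistics.variance(y)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (len(x) - 1)
    lam = (a + c + math.sqrt((a - c) ** 2 + 4 * b * b)) / 2  # largest eigenvalue
    if b != 0:
        vx, vy = b, lam - a            # eigenvector for lam
    elif a >= c:
        vx, vy = 1.0, 0.0
    else:
        vx, vy = 0.0, 1.0
    norm = math.hypot(vx, vy)
    return [((xi - mx) * vx + (yi - my) * vy) / norm for xi, yi in zip(x, y)]

# Hypothetical TAS abundance ratios for four oils; strongly correlated
# variables load on a single dominant component.
scores = first_pc_scores([0.2, 0.4, 0.6, 0.8], [0.1, 0.2, 0.3, 0.4])
print(scores)  # samples ordered along the dominant direction of variation
```

    With many fingerprint variables the same idea applies via eigendecomposition of the full covariance matrix; oils from the same source then cluster in the score plot.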

  7. Missing Value Imputation Approach for Mass Spectrometry-based Metabolomics Data.

    PubMed

    Wei, Runmin; Wang, Jingye; Su, Mingming; Jia, Erik; Chen, Shaoqiu; Chen, Tianlu; Ni, Yan

    2018-01-12

    Missing values are widespread in mass spectrometry (MS)-based metabolomics data. Various methods have been applied to handle missing values, but the choice can significantly affect subsequent data analyses. Typically, there are three types of missing values: missing not at random (MNAR), missing at random (MAR), and missing completely at random (MCAR). Our study comprehensively compared eight imputation methods (zero, half minimum (HM), mean, median, random forest (RF), singular value decomposition (SVD), k-nearest neighbors (kNN), and quantile regression imputation of left-censored data (QRILC)) for different types of missing values using four metabolomics datasets. Normalized root mean squared error (NRMSE) and NRMSE-based sum of ranks (SOR) were applied to evaluate imputation accuracy. Principal component analysis (PCA)/partial least squares (PLS)-Procrustes analysis was used to evaluate the overall sample distribution. Student's t-test followed by correlation analysis was conducted to evaluate the effects on univariate statistics. Our findings demonstrated that RF performed best for MCAR/MAR and QRILC was favored for left-censored MNAR. Finally, we proposed a comprehensive strategy and developed a publicly accessible web tool for missing value imputation in metabolomics ( https://metabolomics.cc.hawaii.edu/software/MetImp/ ).
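
    The NRMSE criterion used above to score imputations can be sketched in a few lines (a toy illustration, not the MetImp code; the values are invented):

```python
import math
import statistics

def nrmse(true_vals, imputed_vals):
    """Normalized root mean squared error: RMSE between the true (masked)
    values and their imputations, divided by the standard deviation of
    the true values, so 0 is perfect and ~1 is no better than the mean."""
    mse = sum((t - e) ** 2 for t, e in zip(true_vals, imputed_vals)) / len(true_vals)
    return math.sqrt(mse) / statistics.pstdev(true_vals)

truth = [5.0, 7.0, 6.0, 8.0]   # masked ground-truth abundances
mean_imp = [6.5] * 4           # mean imputation
half_min = [2.5] * 4           # half-minimum (suits left-censored MNAR)
print(nrmse(truth, mean_imp))  # ~1.0 by construction here
print(nrmse(truth, half_min))  # much worse when the data are not censored
```

    Benchmarking proceeds by masking observed values under a chosen missingness mechanism, imputing them, and ranking methods by NRMSE, as the study's SOR metric does across datasets.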

  8. Difficult Decisions: A Qualitative Exploration of the Statistical Decision Making Process from the Perspectives of Psychology Students and Academics

    PubMed Central

    Allen, Peter J.; Dorozenko, Kate P.; Roberts, Lynne D.

    2016-01-01

    Quantitative research methods are essential to the development of professional competence in psychology. They are also an area of weakness for many students. In particular, students are known to struggle with the skill of selecting quantitative analytical strategies appropriate for common research questions, hypotheses and data types. To begin understanding this apparent deficit, we presented nine psychology undergraduates (who had all completed at least one quantitative methods course) with brief research vignettes, and asked them to explicate the process they would follow to identify an appropriate statistical technique for each. Thematic analysis revealed that all participants found this task challenging, and even those who had completed several research methods courses struggled to articulate how they would approach the vignettes on more than a very superficial and intuitive level. While some students recognized that there is a systematic decision making process that can be followed, none could describe it clearly or completely. We then presented the same vignettes to 10 psychology academics with particular expertise in conducting research and/or research methods instruction. Predictably, these “experts” were able to describe a far more systematic, comprehensive, flexible, and nuanced approach to statistical decision making, which begins early in the research process, and pays consideration to multiple contextual factors. They were sensitive to the challenges that students experience when making statistical decisions, which they attributed partially to how research methods and statistics are commonly taught. This sensitivity was reflected in their pedagogic practices. When asked to consider the format and features of an aid that could facilitate the statistical decision making process, both groups expressed a preference for an accessible, comprehensive and reputable resource that follows a basic decision tree logic. 
For the academics in particular, this aid should function as a teaching tool, which engages the user with each choice-point in the decision making process, rather than simply providing an “answer.” Based on these findings, we offer suggestions for tools and strategies that could be deployed in the research methods classroom to facilitate and strengthen students' statistical decision making abilities. PMID:26909064

  9. Difficult Decisions: A Qualitative Exploration of the Statistical Decision Making Process from the Perspectives of Psychology Students and Academics.

    PubMed

    Allen, Peter J; Dorozenko, Kate P; Roberts, Lynne D

    2016-01-01

    Quantitative research methods are essential to the development of professional competence in psychology. They are also an area of weakness for many students. In particular, students are known to struggle with the skill of selecting quantitative analytical strategies appropriate for common research questions, hypotheses and data types. To begin understanding this apparent deficit, we presented nine psychology undergraduates (who had all completed at least one quantitative methods course) with brief research vignettes, and asked them to explicate the process they would follow to identify an appropriate statistical technique for each. Thematic analysis revealed that all participants found this task challenging, and even those who had completed several research methods courses struggled to articulate how they would approach the vignettes on more than a very superficial and intuitive level. While some students recognized that there is a systematic decision making process that can be followed, none could describe it clearly or completely. We then presented the same vignettes to 10 psychology academics with particular expertise in conducting research and/or research methods instruction. Predictably, these "experts" were able to describe a far more systematic, comprehensive, flexible, and nuanced approach to statistical decision making, which begins early in the research process, and pays consideration to multiple contextual factors. They were sensitive to the challenges that students experience when making statistical decisions, which they attributed partially to how research methods and statistics are commonly taught. This sensitivity was reflected in their pedagogic practices. When asked to consider the format and features of an aid that could facilitate the statistical decision making process, both groups expressed a preference for an accessible, comprehensive and reputable resource that follows a basic decision tree logic. 
For the academics in particular, this aid should function as a teaching tool, which engages the user with each choice-point in the decision making process, rather than simply providing an "answer." Based on these findings, we offer suggestions for tools and strategies that could be deployed in the research methods classroom to facilitate and strengthen students' statistical decision making abilities.

  10. Analysis of regional deformation and strain accumulation data adjacent to the San Andreas fault

    NASA Technical Reports Server (NTRS)

    Turcotte, Donald L.

    1991-01-01

    A new approach to the understanding of crustal deformation was developed under this grant. This approach combined aspects of fractals, chaos, and self-organized criticality to provide a comprehensive theory for deformation on distributed faults. It is hypothesized that crustal deformation is an example of comminution: Deformation takes place on a fractal distribution of faults resulting in a fractal distribution of seismicity. Our primary effort under this grant was devoted to developing an understanding of distributed deformation in the continental crust. An initial effort was carried out on the fractal clustering of earthquakes in time. It was shown that earthquakes do not obey random Poisson statistics, but can be approximated in many cases by coupled, scale-invariant fractal statistics. We applied our approach to the statistics of earthquakes in the New Hebrides region of the southwest Pacific because of the very high level of seismicity there. This work was written up and published in the Bulletin of the Seismological Society of America. This approach was also applied to the statistics of the seismicity on the San Andreas fault system.

  11. MutAIT: an online genetic toxicology data portal and analysis tools.

    PubMed

    Avancini, Daniele; Menzies, Georgina E; Morgan, Claire; Wills, John; Johnson, George E; White, Paul A; Lewis, Paul D

    2016-05-01

    Assessment of genetic toxicity and/or carcinogenic activity is an essential element of chemical screening programs employed to protect human health. Dose-response and gene mutation data are frequently analysed by industry, academia and governmental agencies for regulatory evaluations and decision making. Over the years, a number of efforts at different institutions have led to the creation and curation of databases to house genetic toxicology data, largely, with the aim of providing public access to facilitate research and regulatory assessments. This article provides a brief introduction to a new genetic toxicology portal called Mutation Analysis Informatics Tools (MutAIT) (www.mutait.org) that provides easy access to two of the largest genetic toxicology databases, the Mammalian Gene Mutation Database (MGMD) and TransgenicDB. TransgenicDB is a comprehensive collection of transgenic rodent mutation data initially compiled and collated by Health Canada. The updated MGMD contains approximately 50 000 individual mutation spectral records from the published literature. The portal not only gives access to an enormous quantity of genetic toxicology data, but also provides statistical tools for dose-response analysis and calculation of benchmark dose. Two important R packages for dose-response analysis are provided as web-distributed applications with user-friendly graphical interfaces. The 'drsmooth' package performs dose-response shape analysis and determines various points of departure (PoD) metrics and the 'PROAST' package provides algorithms for dose-response modelling. The MutAIT statistical tools, which are currently being enhanced, provide users with an efficient and comprehensive platform to conduct quantitative dose-response analyses and determine PoD values that can then be used to calculate human exposure limits or margins of exposure. © The Author 2015. Published by Oxford University Press on behalf of the UK Environmental Mutagen Society. 
All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  12. Striking changes in tea metabolites due to elevational effects.

    PubMed

    Kfoury, Nicole; Morimoto, Joshua; Kern, Amanda; Scott, Eric R; Orians, Colin M; Ahmed, Selena; Griffin, Timothy; Cash, Sean B; Stepp, John Richard; Xue, Dayuan; Long, Chunlin; Robbat, Albert

    2018-10-30

    Climate effects on crop quality at the molecular level are not well understood. Gas and liquid chromatography-mass spectrometry were used to measure changes in hundreds of compounds in tea grown at different elevations in Yunnan Province, China. Some increased in concentration while others decreased by hundreds of percent. Orthogonal projection to latent structures-discriminant analysis revealed that compounds exhibiting analgesic, antianxiety, antibacterial, anticancer, antidepressant, antifungal, anti-inflammatory, antioxidant, anti-stress, and cardioprotective properties statistically (p = 0.003) differentiated high- from low-elevation tea. Sweet, floral, honey-like notes were also higher in concentration in the former, while the latter displayed grassy, hay-like aromas. In addition, multivariate analysis of variance showed that low-elevation tea had statistically (p = 0.0062) higher concentrations of caffeine, epicatechin gallate, gallocatechin, and catechin, all bitter compounds. Although volatiles represent a small fraction of the total mass, this is the first comprehensive report illustrating how a normal variation in temperature of 5 °C, due to elevational effects, impacts tea quality. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Enabling a Comprehensive Teaching Strategy: Video Lectures

    ERIC Educational Resources Information Center

    Brecht, H. David; Ogilby, Suzanne M.

    2008-01-01

    This study empirically tests the feasibility and effectiveness of video lectures as a form of video instruction that enables a comprehensive teaching strategy used throughout a traditional classroom course. It examines student use patterns and the videos' effects on student learning, using qualitative and nonparametric statistical analyses of…

  14. Most Likely to Succeed: Exploring Predictor Variables for the Counselor Preparation Comprehensive Examination

    ERIC Educational Resources Information Center

    Hartwig, Elizabeth Kjellstrand; Van Overschelde, James P.

    2016-01-01

    The authors investigated predictor variables for the Counselor Preparation Comprehensive Examination (CPCE) to examine whether academic variables, demographic variables, and test version were associated with graduate counseling students' CPCE scores. Multiple regression analyses revealed all 3 variables were statistically significant predictors of…

  15. BioInfra.Prot: A comprehensive proteomics workflow including data standardization, protein inference, expression analysis and data publication.

    PubMed

    Turewicz, Michael; Kohl, Michael; Ahrens, Maike; Mayer, Gerhard; Uszkoreit, Julian; Naboulsi, Wael; Bracht, Thilo; Megger, Dominik A; Sitek, Barbara; Marcus, Katrin; Eisenacher, Martin

    2017-11-10

    The analysis of high-throughput mass spectrometry-based proteomics data must address the specific challenges of this technology. To this end, the comprehensive proteomics workflow offered by the de.NBI service center BioInfra.Prot provides indispensable components for the computational and statistical analysis of this kind of data. These components include tools and methods for spectrum identification and protein inference, protein quantification, expression analysis as well as data standardization and data publication. All particular methods of the workflow which address these tasks are state-of-the-art or cutting edge. As has been shown in previous publications, each of these methods is adequate to solve its specific task and gives competitive results. However, the methods included in the workflow are continuously reviewed, updated and improved to adapt to new scientific developments. All of these particular components and methods are available as stand-alone BioInfra.Prot services or as a complete workflow. Since BioInfra.Prot provides manifold fast communication channels to get access to all components of the workflow (e.g., via the BioInfra.Prot ticket system: bioinfraprot@rub.de) users can easily benefit from this service and get support by experts. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  16. Applications of Remote Sensing and GIS(Geographic Information System) in Crime Analysis of Gujranwala City.

    NASA Astrophysics Data System (ADS)

    Munawar, Iqra

    2016-07-01

    Crime mapping is a dynamic process that can be used to assist all stages of the problem-solving process. Mapping crime can help police protect citizens more effectively. The decision to use a certain type of map or design element may change based on the purpose of the map, the audience, or the available data. If the purpose of the crime analysis map is to assist in the identification of a particular problem, selected data may be mapped to identify patterns of activity that were previously undetected. The main objective of this research was to study the spatial distribution patterns of four common crimes, i.e., narcotics, arms, burglary, and robbery, in Gujranwala City, using spatial statistical techniques to identify hotspots. Hotspots, or locations of clusters, were identified using the Getis-Ord Gi* statistic. Crime analysis mapping can be used to conduct a comprehensive spatial analysis of the problem. Graphic presentations of such findings provide a powerful medium to communicate conditions, patterns, and trends, creating an avenue for analysts to bring about significant policy changes. Moreover, crime mapping also helps reduce crime rates.
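
    The Getis-Ord Gi* statistic used above returns a z-score for each location; large positive values flag clusters of high counts (hotspots), large negative values flag cold spots. A minimal sketch of the standard formula (the weights and counts below are invented, not the study's data):

```python
import math

def gi_star(values, w_row):
    """Getis-Ord Gi* z-score for one focal location.
    values: attribute (e.g., crime counts) at all n locations.
    w_row: spatial weights from the focal location to every location j,
    including itself (w_jj > 0 is what distinguishes Gi* from Gi)."""
    n = len(values)
    xbar = sum(values) / n
    s = math.sqrt(sum(v * v for v in values) / n - xbar ** 2)
    wsum = sum(w_row)
    num = sum(w * v for w, v in zip(w_row, values)) - xbar * wsum
    den = s * math.sqrt((n * sum(w * w for w in w_row) - wsum ** 2) / (n - 1))
    return num / den

# Six grid cells; the focal cell (index 3) and its neighbor hold most incidents.
counts = [1, 1, 1, 9, 8, 1]
print(gi_star(counts, [0, 0, 0, 1, 1, 0]))  # positive z: candidate hotspot
print(gi_star(counts, [1, 1, 0, 0, 0, 0]))  # negative z: low-value cluster
```

    In GIS software the w_row weights typically come from a distance band or contiguity rule, and z-scores beyond ±1.96 are flagged at the 5% level.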

  17. A Comprehensive Database and Analysis Framework To Incorporate Multiscale Data Types and Enable Integrated Analysis of Bioactive Polyphenols.

    PubMed

    Ho, Lap; Cheng, Haoxiang; Wang, Jun; Simon, James E; Wu, Qingli; Zhao, Danyue; Carry, Eileen; Ferruzzi, Mario G; Faith, Jeremiah; Valcarcel, Breanna; Hao, Ke; Pasinetti, Giulio M

    2018-03-05

    The development of a given botanical preparation for eventual clinical application requires extensive, detailed characterizations of the chemical composition, as well as the biological availability, biological activity, and safety profiles of the botanical. These issues are typically addressed using diverse experimental protocols and model systems. Based on this consideration, in this study we established a comprehensive database and analysis framework for the collection, collation, and integrative analysis of diverse, multiscale data sets. Using this framework, we conducted an integrative analysis of heterogeneous data from in vivo and in vitro investigation of a complex bioactive dietary polyphenol-rich preparation (BDPP) and built an integrated network linking data sets generated from this multitude of diverse experimental paradigms. We established a comprehensive database and analysis framework as well as a systematic and logical means to catalogue and collate the diverse array of information gathered, which is securely stored and added to in a standardized manner to enable fast query. We demonstrated the utility of the database in (1) a statistical ranking scheme to prioritize response to treatments and (2) in depth reconstruction of functionality studies. By examination of these data sets, the system allows analytical querying of heterogeneous data and the access of information related to interactions, mechanism of actions, functions, etc., which ultimately provide a global overview of complex biological responses. Collectively, we present an integrative analysis framework that leads to novel insights on the biological activities of a complex botanical such as BDPP that is based on data-driven characterizations of interactions between BDPP-derived phenolic metabolites and their mechanisms of action, as well as synergism and/or potential cancellation of biological functions. 
    Our integrative analytical approach provides a novel means for the systematic integrative analysis of heterogeneous data types in the development of complex botanicals, such as polyphenols, for eventual clinical and translational applications.

  18. Quality of statistical reporting in developmental disability journals.

    PubMed

    Namasivayam, Aravind K; Yan, Tina; Wong, Wing Yiu Stephanie; van Lieshout, Pascal

    2015-12-01

    Null hypothesis significance testing (NHST) dominates quantitative data analysis, but its use is controversial and has been heavily criticized. The American Psychological Association has advocated the reporting of effect sizes (ES), confidence intervals (CIs), and statistical power analysis to complement NHST results and provide a more comprehensive understanding of research findings. The aim of this paper is to carry out a sample survey of statistical reporting practices in the two journals with the highest h5-index scores in the areas of developmental disability and rehabilitation. Using a checklist that includes critical recommendations by the American Psychological Association, we examined 100 randomly selected articles out of 456 articles reporting inferential statistics in the year 2013 in the Journal of Autism and Developmental Disorders (JADD) and Research in Developmental Disabilities (RDD). The results showed that for both journals, ES were reported only about half the time (JADD 59.3%; RDD 55.87%). These findings are similar to those for psychology journals, but in stark contrast to ES reporting in educational journals (73%). Furthermore, a priori power and sample size determination (JADD 10%; RDD 6%), along with reporting and interpreting precision measures (CI: JADD 13.33%; RDD 16.67%), were the least reported metrics in these journals, though not dissimilar to journals in other disciplines. To advance the science in developmental disability and rehabilitation and to bridge the research-to-practice divide, reforms in statistical reporting, such as providing supplemental measures to NHST, are clearly needed.
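
    Reporting an effect size alongside NHST, as advocated above, can be as simple as Cohen's d with a pooled standard deviation (a textbook sketch; the scores below are invented):

```python
import math
import statistics

def cohens_d(group1, group2):
    """Cohen's d: standardized mean difference with pooled sample SD."""
    n1, n2 = len(group1), len(group2)
    pooled_sd = math.sqrt(((n1 - 1) * statistics.variance(group1) +
                           (n2 - 1) * statistics.variance(group2)) /
                          (n1 + n2 - 2))
    return (statistics.mean(group1) - statistics.mean(group2)) / pooled_sd

# Two hypothetical outcome groups: d expresses how far apart they are
# in pooled-SD units, complementing a bare p value.
print(cohens_d([2.0, 4.0, 6.0], [1.0, 3.0, 5.0]))  # 0.5, a "medium" effect
```

    By Cohen's conventional benchmarks, d around 0.2, 0.5, and 0.8 correspond to small, medium, and large effects, though field-specific interpretation is preferable.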

  19. Can You Read Me Now? Disciplinary Literacy Reading Strategies in the 7th Grade Science Classroom

    NASA Astrophysics Data System (ADS)

    McQuaid, Kelly Kathleen

    Adolescent readers require a broad range of reading skills to deal with the challenges of reading complex text. Some researchers argue for a discipline-specific focus to address the low reading proficiency rates among secondary students. Disciplinary literacy attends to the different ways disciplines, such as science, generate and communicate knowledge. The purpose of this quasi-experimental study was to examine if and to what degree disciplinary literacy reading strategies impact student learning outcomes in reading comprehension and science content knowledge for 132 7th grade science students in five Southern Arizona charter schools, and whether reading ability moderates that impact. The theoretical foundation for this study rested on expert-novice theory and Halliday's theory of critical moments of language development. The MANCOVA results were not statistically significant, nor was the moderation analysis for the influence of reading ability on reading comprehension in the disciplinary literacy group. However, the moderation analysis for the influence of reading ability on science content knowledge yielded conditionally significant results for low (p < .01) and average readers (p < .05). Low to average readers in the disciplinary literacy group appeared to benefit the most from reading comprehension instruction focused on learning science content in the science classroom.

  20. [Effects of a coaching program on comprehensive lifestyle modification for women with gestational diabetes mellitus].

    PubMed

    Ko, Jung Mi; Lee, Jong Kyung

    2014-12-01

    The purpose of this study was to investigate the effects of using a Coaching Program on Comprehensive Lifestyle Modification with pregnant women who have gestational diabetes. The research design for this study was a non-equivalent control group quasi-experimental study. Pregnant women with gestational diabetes were recruited from D women's hospital located in Gyeonggi Province from April to October, 2013. Participants were 34 women in the experimental group and 34 in the control group. The experimental group participated in the Coaching Program on Comprehensive Lifestyle Modification. The program consisted of education, small group coaching and telephone coaching over 4 weeks. Statistical analysis was performed using the SPSS 21.0 program. There were significant improvements in self-care behavior, and decreases in depression, fasting blood sugar and HbA1C in the experimental group compared to the control group. However, no significant differences were found between the two groups for knowledge of gestational diabetes mellitus. The Coaching Program on Comprehensive Lifestyle Modification used in this study was found to be effective in improving self-care behavior and reducing depression, fasting blood sugar and HbA1C, and is recommended for use in clinical practice as an effective nursing intervention for pregnant women with gestational diabetes.

  1. Testing a Nursing-Specific Model of Electronic Patient Record documentation with regard to information completeness, comprehensiveness and consistency.

    PubMed

    von Krogh, Gunn; Nåden, Dagfinn; Aasland, Olaf Gjerløw

    2012-10-01

    To present the results from the test site application of the documentation model KPO (quality assurance, problem solving and caring) designed to impact the quality of nursing information in the electronic patient record (EPR). The KPO model was developed by means of a consensus group and clinical testing. Four documentation arenas and eight content categories, nursing terminologies and a decision-support system were designed to impact the completeness, comprehensiveness and consistency of nursing information. The testing was performed in a pre-test/post-test time series design, three times at a one-year interval. Content analysis of nursing documentation was accomplished through the identification, interpretation and coding of information units. Data from the pre-test and post-test 2 were subjected to statistical analyses. To estimate the differences, paired t-tests were used. At post-test 2, the information is found to be more complete, comprehensive and consistent than at pre-test. The findings indicate that documentation arenas combining work flow and content categories deduced from theories on nursing practice can influence the quality of nursing information. The KPO model can be used as a guide when shifting from paper-based to electronic-based nursing documentation with the aim of obtaining complete, comprehensive and consistent nursing information. © 2012 Blackwell Publishing Ltd.
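
The pre-test/post-test comparison above relies on paired t-tests. A minimal sketch of the paired t statistic, using hypothetical documentation scores (not the study's data):

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t statistic: mean difference over its standard error (df = n - 1)."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Hypothetical completeness scores at pre-test and at post-test 2
pre = [10, 12, 9, 11, 13]
post = [13, 15, 11, 14, 16]
t = paired_t(pre, post)  # compare against t(0.975, df = 4) ≈ 2.776
```

In practice one would obtain the p-value from a t distribution with n − 1 degrees of freedom (e.g. `scipy.stats.ttest_rel`); the statistic itself is just the ratio above.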

  2. Stratification of Recanalization for Patients with Endovascular Treatment of Intracranial Aneurysms

    PubMed Central

    Ogilvy, Christopher S.; Chua, Michelle H.; Fusco, Matthew R.; Reddy, Arra S.; Thomas, Ajith J.

    2015-01-01

    Background With increasing utilization of endovascular techniques in the treatment of both ruptured and unruptured intracranial aneurysms, the issue of obliteration efficacy has become increasingly important. Objective Our goal was to systematically develop a comprehensive model for predicting retreatment with various types of endovascular treatment. Methods We retrospectively reviewed medical records that were prospectively collected for 305 patients who received endovascular treatment for intracranial aneurysms from 2007 to 2013. Multivariable logistic regression was performed on candidate predictors identified by univariable screening analysis to detect independent predictors of retreatment. A composite risk score was constructed based on the proportional contribution of independent predictors in the multivariable model. Results Size (>10 mm), aneurysm rupture, stent assistance, and post-treatment degree of aneurysm occlusion were independently associated with retreatment while intraluminal thrombosis and flow diversion demonstrated a trend towards retreatment. The Aneurysm Recanalization Stratification Scale was constructed by assigning the following weights to statistically and clinically significant predictors. Aneurysm-specific factors: Size (>10 mm), 2 points; rupture, 2 points; presence of thrombus, 2 points. Treatment-related factors: Stent assistance, -1 point; flow diversion, -2 points; Raymond Roy 2 occlusion, 1 point; Raymond Roy 3 occlusion, 2 points. This scale demonstrated good discrimination with a C-statistic of 0.799. Conclusion Surgical decision-making and patient-centered informed consent require comprehensive and accessible information on treatment efficacy. We have constructed the Aneurysm Recanalization Stratification Scale to enhance this decision-making process. This is the first comprehensive model that has been developed to quantitatively predict the risk of retreatment following endovascular therapy. PMID:25621984
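
The point weights quoted in the abstract can be turned directly into a scoring function. A minimal sketch of the Aneurysm Recanalization Stratification Scale as described (illustrative only, not a clinical tool):

```python
def recanalization_score(size_gt_10mm, ruptured, thrombus,
                         stent_assist, flow_diversion, raymond_roy):
    """Composite retreatment-risk score using the weights from the abstract.

    raymond_roy: post-treatment occlusion class (1, 2, or 3).
    """
    score = 0
    # Aneurysm-specific factors
    if size_gt_10mm:
        score += 2
    if ruptured:
        score += 2
    if thrombus:
        score += 2
    # Treatment-related factors
    if stent_assist:
        score -= 1
    if flow_diversion:
        score -= 2
    if raymond_roy == 2:
        score += 1
    elif raymond_roy == 3:
        score += 2
    return score

# Example: large, ruptured aneurysm with Raymond Roy 3 occlusion
high_risk = recanalization_score(True, True, False, False, False, 3)
```

The abstract's C-statistic of 0.799 refers to how well such a score discriminates retreated from non-retreated cases; the function above only reproduces the published point assignments.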

  3. The Impact of Language Experience on Language and Reading: A Statistical Learning Approach

    ERIC Educational Resources Information Center

    Seidenberg, Mark S.; MacDonald, Maryellen C.

    2018-01-01

    This article reviews the important role of statistical learning for language and reading development. Although statistical learning--the unconscious encoding of patterns in language input--has become widely known as a force in infants' early interpretation of speech, the role of this kind of learning for language and reading comprehension in…

  4. The cost-effectiveness of NBPTS teacher certification.

    PubMed

    Yeh, Stuart S

    2010-06-01

    A cost-effectiveness analysis of the National Board for Professional Teaching Standards (NBPTS) program suggests that Board certification is less cost-effective than a range of alternative approaches for raising student achievement, including comprehensive school reform, class size reduction, a 10% increase in per pupil expenditure, the use of value-added statistical methods to identify effective teachers, and the implementation of systems where student performance in math and reading is rapidly assessed 2-5 times per week. The most cost-effective approach, rapid assessment, is roughly three orders of magnitude more cost-effective than Board certification.
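
Cost-effectiveness comparisons of this kind reduce to a ratio of achievement effect to cost. A minimal sketch with entirely hypothetical figures (not the study's estimates):

```python
def cost_effectiveness(effect_size, cost_per_pupil):
    """Achievement effect (in SD units) gained per dollar spent per pupil."""
    return effect_size / cost_per_pupil

# Hypothetical effect sizes and per-pupil costs, for illustration only
interventions = {
    "rapid assessment": cost_effectiveness(0.30, 10.0),
    "class size reduction": cost_effectiveness(0.20, 200.0),
    "NBPTS certification": cost_effectiveness(0.05, 500.0),
}
best = max(interventions, key=interventions.get)
```

Ranking interventions by this ratio, rather than by raw effect size, is what drives the paper's conclusion that cheap, frequent assessment can dominate far more expensive programs.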

  5. Excellence through Special Education? Lessons from the Finnish School Reform

    NASA Astrophysics Data System (ADS)

    Kivirauma, Joel; Ruoho, Kari

    2007-05-01

    The present article focuses on connections between part-time special education and the good results of Finnish students in PISA studies. After a brief summary of the comprehensive school system and special education in Finland, PISA results are analysed. The analysis shows that the relative amount of special education targeted at language problems is highest in Finland among those countries from which comparative statistics are available. The writers argue that this preventive language-oriented part-time special education is an important factor behind the good PISA results.

  6. Linguistic Strategies for Improving Informed Consent in Clinical Trials Among Low Health Literacy Patients.

    PubMed

    Krieger, Janice L; Neil, Jordan M; Strekalova, Yulia A; Sarge, Melanie A

    2017-03-01

    Improving informed consent to participate in randomized clinical trials (RCTs) is a key challenge in cancer communication. The current study examines strategies for enhancing randomization comprehension among patients with diverse levels of health literacy and identifies cognitive and affective predictors of intentions to participate in cancer RCTs. Using a post-test-only experimental design, cancer patients (n = 500) were randomly assigned to receive one of three message conditions for explaining randomization (ie, plain language condition, gambling metaphor, benign metaphor) or a control message. All statistical tests were two-sided. Health literacy was a statistically significant moderator of randomization comprehension (P = .03). Among participants with the lowest levels of health literacy, the benign metaphor resulted in greater comprehension of randomization as compared with plain language (P = .04) and control (P = .004) messages. Among participants with the highest levels of health literacy, the gambling metaphor resulted in greater randomization comprehension as compared with the benign metaphor (P = .04). A serial mediation model showed a statistically significant negative indirect effect of comprehension on behavioral intention through personal relevance of RCTs and anxiety associated with participation in RCTs (P < .001). The effectiveness of metaphors for explaining randomization depends on health literacy, with a benign metaphor being particularly effective for patients at the lower end of the health literacy spectrum. The theoretical model demonstrates the cognitive and affective predictors of behavioral intention to participate in cancer RCTs and offers guidance on how future research should employ communication strategies to improve the informed consent processes. © The Author 2016. Published by Oxford University Press.
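
The serial mediation finding above (comprehension → personal relevance → anxiety → intention) estimates an indirect effect as the product of the path coefficients. A minimal sketch with hypothetical standardized paths (not the study's estimates):

```python
def serial_indirect_effect(a, d, b):
    """Indirect effect of X on Y through M1 then M2: a * d * b,
    where a: X -> M1, d: M1 -> M2, b: M2 -> Y."""
    return a * d * b

# Hypothetical paths chosen so the indirect effect is negative,
# matching the sign reported in the abstract
effect = serial_indirect_effect(0.5, 0.4, -0.3)
```

In applied work the significance of such a product term is usually assessed with bootstrapped confidence intervals rather than a single point estimate.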

  7. Linguistic Strategies for Improving Informed Consent in Clinical Trials Among Low Health Literacy Patients

    PubMed Central

    Neil, Jordan M.; Strekalova, Yulia A.; Sarge, Melanie A.

    2017-01-01

    Abstract Background: Improving informed consent to participate in randomized clinical trials (RCTs) is a key challenge in cancer communication. The current study examines strategies for enhancing randomization comprehension among patients with diverse levels of health literacy and identifies cognitive and affective predictors of intentions to participate in cancer RCTs. Methods: Using a post-test-only experimental design, cancer patients (n = 500) were randomly assigned to receive one of three message conditions for explaining randomization (ie, plain language condition, gambling metaphor, benign metaphor) or a control message. All statistical tests were two-sided. Results: Health literacy was a statistically significant moderator of randomization comprehension (P = .03). Among participants with the lowest levels of health literacy, the benign metaphor resulted in greater comprehension of randomization as compared with plain language (P = .04) and control (P = .004) messages. Among participants with the highest levels of health literacy, the gambling metaphor resulted in greater randomization comprehension as compared with the benign metaphor (P = .04). A serial mediation model showed a statistically significant negative indirect effect of comprehension on behavioral intention through personal relevance of RCTs and anxiety associated with participation in RCTs (P < .001). Conclusions: The effectiveness of metaphors for explaining randomization depends on health literacy, with a benign metaphor being particularly effective for patients at the lower end of the health literacy spectrum. The theoretical model demonstrates the cognitive and affective predictors of behavioral intention to participate in cancer RCTs and offers guidance on how future research should employ communication strategies to improve the informed consent processes. PMID:27794035

  8. Statistical methods and errors in family medicine articles between 2010 and 2014-Suez Canal University, Egypt: A cross-sectional study.

    PubMed

    Nour-Eldein, Hebatallah

    2016-01-01

    Given the limited statistical knowledge of most physicians, it is not uncommon to find statistical errors in research articles. To determine the statistical methods used and to assess the statistical errors in family medicine (FM) research articles that were published between 2010 and 2014. This was a cross-sectional study. All 66 FM research articles that were published over 5 years by FM authors with affiliation to Suez Canal University were screened by the researcher between May and August 2015. Types and frequencies of statistical methods were reviewed in all 66 FM articles. All 60 articles with identified inferential statistics were examined for statistical errors and deficiencies. A comprehensive 58-item checklist based on statistical guidelines was used to evaluate the statistical quality of FM articles. Inferential methods were recorded in 62/66 (93.9%) of FM articles. Advanced analyses were used in 29/66 (43.9%). Contingency tables 38/66 (57.6%), regression (logistic, linear) 26/66 (39.4%), and t-test 17/66 (25.8%) were the most commonly used inferential tests. Within the 60 FM articles with identified inferential statistics, deficiencies included: no prior sample size calculation in 19/60 (31.7%), application of the wrong statistical test in 17/60 (28.3%), incomplete documentation of statistics in 59/60 (98.3%), reporting a P value without the test statistic in 32/60 (53.3%), no confidence interval reported with effect size measures in 12/60 (20.0%), and use of the mean (standard deviation) to describe ordinal/non-normal data in 8/60 (13.3%); interpretation errors were mainly conclusions unsupported by the study data, in 5/60 (8.3%). Inferential statistics were used in the majority of FM articles. Data analysis and reporting statistics are areas for improvement in FM research articles.

  9. Statistical methods and errors in family medicine articles between 2010 and 2014-Suez Canal University, Egypt: A cross-sectional study

    PubMed Central

    Nour-Eldein, Hebatallah

    2016-01-01

    Background: Given the limited statistical knowledge of most physicians, it is not uncommon to find statistical errors in research articles. Objectives: To determine the statistical methods used and to assess the statistical errors in family medicine (FM) research articles that were published between 2010 and 2014. Methods: This was a cross-sectional study. All 66 FM research articles that were published over 5 years by FM authors with affiliation to Suez Canal University were screened by the researcher between May and August 2015. Types and frequencies of statistical methods were reviewed in all 66 FM articles. All 60 articles with identified inferential statistics were examined for statistical errors and deficiencies. A comprehensive 58-item checklist based on statistical guidelines was used to evaluate the statistical quality of FM articles. Results: Inferential methods were recorded in 62/66 (93.9%) of FM articles. Advanced analyses were used in 29/66 (43.9%). Contingency tables 38/66 (57.6%), regression (logistic, linear) 26/66 (39.4%), and t-test 17/66 (25.8%) were the most commonly used inferential tests. Within the 60 FM articles with identified inferential statistics, deficiencies included: no prior sample size calculation in 19/60 (31.7%), application of the wrong statistical test in 17/60 (28.3%), incomplete documentation of statistics in 59/60 (98.3%), reporting a P value without the test statistic in 32/60 (53.3%), no confidence interval reported with effect size measures in 12/60 (20.0%), and use of the mean (standard deviation) to describe ordinal/non-normal data in 8/60 (13.3%); interpretation errors were mainly conclusions unsupported by the study data, in 5/60 (8.3%). Conclusion: Inferential statistics were used in the majority of FM articles. Data analysis and reporting statistics are areas for improvement in FM research articles. PMID:27453839

  10. Prediction, Error, and Adaptation during Online Sentence Comprehension

    ERIC Educational Resources Information Center

    Fine, Alex Brabham

    2013-01-01

    A fundamental challenge for human cognition is perceiving and acting in a world in which the statistics that characterize available sensory data are non-stationary. This thesis focuses on this problem specifically in the domain of sentence comprehension, where linguistic variability poses computational challenges to the processes underlying…

  11. Differences in Students' Reading Comprehension of International Financial Reporting Standards: A South African Case

    ERIC Educational Resources Information Center

    Coetzee, Stephen A.; Janse van Rensburg, Cecile; Schmulian, Astrid

    2016-01-01

    This study explores differences in students' reading comprehension of International Financial Reporting Standards in a South African financial reporting class with a heterogeneous student cohort. Statistically significant differences were identified for prior academic performance, language of instruction, first language and enrolment in the…

  12. Transportation statistics annual report 1994

    DOT National Transportation Integrated Search

    1994-01-01

    The Transportation Statistics Annual Report (TSAR) provides the most comprehensive overview of U.S. transportation that is done on an annual basis. TSAR examines the extent of the system, how it is used, how well it works, how it affects people and t...

  13. The implementation of the Strategy Europe 2020 objectives in European Union countries: the concept analysis and statistical evaluation.

    PubMed

    Stec, Małgorzata; Grzebyk, Mariola

    2018-01-01

    The European Union (EU), striving for economic dominance on the global market, has prepared a comprehensive development programme, initially the Lisbon Strategy and subsequently the Strategy Europe 2020. The attainment of the strategic goals included in these prospective development programmes was intended to transform the EU into the world's most competitive knowledge-based economy. This paper presents a statistical evaluation of the progress being made by EU member states in meeting Europe 2020. As the basis of the assessment, the authors proposed a general synthetic measure in dynamic terms, which allows EU member states to be compared objectively on 10 major statistical indicators. The results indicate that most EU countries show only average progress in realising the Europe 2020 programme, which suggests that the goals may not be achieved within the prescribed time. It is particularly important to monitor the implementation of Europe 2020 to arrive at the right decisions which will guarantee the accomplishment of the EU's development strategy.
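
A "general synthetic measure" of this kind typically normalizes each indicator across countries and averages the results. A minimal sketch under the simplifying assumption that every indicator is a stimulant (higher is better) and varies across countries; the paper's actual construction may differ:

```python
def min_max_normalize(values):
    """Rescale a column of indicator values to [0, 1] across countries."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def synthetic_measure(indicator_matrix):
    """Average of min-max-normalized indicators for each country.

    indicator_matrix[i][j]: value of indicator j for country i.
    """
    n_ind = len(indicator_matrix[0])
    cols = [min_max_normalize([row[j] for row in indicator_matrix])
            for j in range(n_ind)]
    return [sum(col[i] for col in cols) / n_ind
            for i in range(len(indicator_matrix))]

# Toy data: three countries, two indicators
scores = synthetic_measure([[1, 10], [3, 20], [2, 15]])
```

Destimulant indicators (where lower is better) would be inverted before normalization; tracking the measure over several years gives the "dynamic terms" the abstract mentions.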

  14. Can hospital episode statistics support appraisal and revalidation? Randomised study of physician attitudes.

    PubMed

    Croft, Giles P; Williams, John G; Mann, Robin Y; Cohen, David; Phillips, Ceri J

    2007-08-01

    Hospital episode statistics were originally designed to monitor activity and allocate resources in the NHS. Recently their uses have widened to include analysis of individuals' activity, to inform appraisal and revalidation, and monitor performance. This study investigated physician attitudes to the validity and usefulness of these data for such purposes, and the effect of supporting individuals in data interpretation. A randomised study was conducted with consultant physicians in England, Wales and Scotland. The intervention group was supported by a clinician and an information analyst in obtaining and analysing their own data. The control group was unsupported. Attitudes to the data and confidence in their ability to reflect clinical practice were examined before and after the intervention. It was concluded that hospital episode statistics are not presently fit for monitoring the performance of individual physicians. A more comprehensive description of activity is required for these purposes. Improvements in the quality of existing data through clinical engagement at a local level, however, are possible.

  15. Climate Change Impacts at Department of Defense

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kotamarthi, Rao; Wang, Jiali; Zoebel, Zach

    This project is aimed at providing the U.S. Department of Defense (DoD) with a comprehensive analysis of the uncertainty associated with generating climate projections at the regional scale that can be used by stakeholders and decision makers to quantify and plan for the impacts of future climate change at specific locations. The merits and limitations of commonly used downscaling models, ranging from simple to complex, are compared, and their appropriateness for application at installation scales is evaluated. Downscaled climate projections are generated at selected DoD installations using dynamic and statistical methods, with an emphasis on generating probability distributions of climate variables and their associated uncertainties. The selection of sites, variables, and parameters for downscaling was based on a comprehensive understanding of the current and projected roles that weather and climate play in operating, maintaining, and planning DoD facilities and installations.

  16. False-Belief Understanding and Language Ability Mediate the Relationship between Emotion Comprehension and Prosocial Orientation in Preschoolers.

    PubMed

    Ornaghi, Veronica; Pepe, Alessandro; Grazzani, Ilaria

    2016-01-01

    Emotion comprehension (EC) is known to be a key correlate and predictor of prosociality from early childhood. In the present study, we examined this relationship within the broad theoretical construct of social understanding which includes a number of socio-emotional skills, as well as cognitive and linguistic abilities. Theory of mind, especially false-belief understanding, has been found to be positively correlated with both EC and prosocial orientation. Similarly, language ability is known to play a key role in children's socio-emotional development. The combined contribution of false-belief understanding and language to explaining the relationship between EC and prosociality has yet to be investigated. Thus, in the current study, we conducted an in-depth exploration of how preschoolers' false-belief understanding and language ability each contribute to modeling the relationship between children's comprehension of emotion and their disposition to act prosocially toward others, after controlling for age and gender. Participants were 101 4- to 6-year-old children (54% boys), who were administered measures of language ability, false-belief understanding, EC and prosocial orientation. Multiple mediation analysis of the data suggested that false-belief understanding and language ability jointly and fully mediated the effect of preschoolers' EC on their prosocial orientation. Analysis of covariates revealed that gender exerted no statistically significant effect, while age had a trivial positive effect. Theoretical and practical implications of the findings are discussed.

  17. Use of Statistical Analyses in the Ophthalmic Literature

    PubMed Central

    Lisboa, Renato; Meira-Freitas, Daniel; Tatham, Andrew J.; Marvasti, Amir H.; Sharpsten, Lucie; Medeiros, Felipe A.

    2014-01-01

    Purpose To identify the most commonly used statistical analyses in the ophthalmic literature and to determine the likely gain in comprehension of the literature that readers could expect if they were to sequentially add knowledge of more advanced techniques to their statistical repertoire. Design Cross-sectional study Methods All articles published from January 2012 to December 2012 in Ophthalmology, American Journal of Ophthalmology and Archives of Ophthalmology were reviewed. A total of 780 peer-reviewed articles were included. Two reviewers examined each article and assigned categories to each one depending on the type of statistical analyses used. Discrepancies between reviewers were resolved by consensus. Main Outcome Measures Total number and percentage of articles containing each category of statistical analysis were obtained. Additionally we estimated the accumulated number and percentage of articles that a reader would be expected to be able to interpret depending on their statistical repertoire. Results Readers with little or no statistical knowledge would be expected to be able to interpret the statistical methods presented in only 20.8% of articles. In order to understand more than half (51.4%) of the articles published, readers were expected to be familiar with at least 15 different statistical methods. Knowledge of 21 categories of statistical methods was necessary to comprehend 70.9% of articles, while knowledge of more than 29 categories was necessary to comprehend more than 90% of articles. Articles in retina and glaucoma subspecialties showed a tendency for using more complex analysis when compared to cornea. Conclusions Readers of clinical journals in ophthalmology need to have substantial knowledge of statistical methodology to understand the results of published studies in the literature. The frequency of use of complex statistical analyses also indicates that those involved in the editorial peer-review process must have sound statistical knowledge in order to critically appraise articles submitted for publication. The results of this study could provide guidance to direct the statistical learning of clinical ophthalmologists, researchers and educators involved in the design of courses for residents and medical students. PMID:24612977
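
The "accumulated percentage of interpretable articles" idea can be sketched directly: given the set of methods each article requires and an ordered learning sequence (e.g. methods ranked by frequency), count the articles fully covered after learning the first k methods. A toy illustration, not the paper's data:

```python
def cumulative_comprehension(articles, methods_by_frequency):
    """Fraction of articles fully interpretable after learning the first
    k methods, for k = 0 .. len(methods_by_frequency).

    articles: list of sets, each the statistical methods one article uses.
    """
    fractions = []
    for k in range(len(methods_by_frequency) + 1):
        known = set(methods_by_frequency[:k])
        readable = sum(1 for required in articles if required <= known)
        fractions.append(readable / len(articles))
    return fractions

# Toy example: four articles, three methods learned in frequency order
articles = [set(), {"t-test"}, {"t-test", "anova"}, {"survival"}]
coverage = cumulative_comprehension(articles, ["t-test", "anova", "survival"])
```

The abstract's figures (20.8% with no methods, 51.4% after 15, 70.9% after 21) are exactly this kind of cumulative curve computed over 780 articles and 29+ method categories.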

  18. Proteome-wide quantitative multiplexed profiling of protein expression: carbon-source dependency in Saccharomyces cerevisiae

    PubMed Central

    Paulo, Joao A.; O’Connell, Jeremy D.; Gaun, Aleksandr; Gygi, Steven P.

    2015-01-01

    The global proteomic alterations in the budding yeast Saccharomyces cerevisiae due to differences in carbon sources can be comprehensively examined using mass spectrometry–based multiplexing strategies. In this study, we investigate changes in the S. cerevisiae proteome resulting from cultures grown in minimal media using galactose, glucose, or raffinose as the carbon source. We used a tandem mass tag 9-plex strategy to determine alterations in relative protein abundance due to a particular carbon source, in triplicate, thereby permitting subsequent statistical analyses. We quantified more than 4700 proteins across all nine samples; 1003 proteins demonstrated statistically significant differences in abundance in at least one condition. The majority of altered proteins were classified as functioning in metabolic processes and as having cellular origins of plasma membrane and mitochondria. In contrast, proteins remaining relatively unchanged in abundance included those having nucleic acid–related processes, such as transcription and RNA processing. In addition, the comprehensiveness of the data set enabled the analysis of subsets of functionally related proteins, such as phosphatases, kinases, and transcription factors. As a resource, these data can be mined further in efforts to understand better the roles of carbon source fermentation in yeast metabolic pathways and the alterations observed therein, potentially for industrial applications, such as biofuel feedstock production. PMID:26399295

  19. Insect transformation with piggyBac: getting the number of injections just right

    PubMed Central

    Morrison, N. I.; Shimeld, S. M.

    2016-01-01

    Abstract The insertion of exogenous genetic cargo into insects using transposable elements is a powerful research tool with potential applications in meeting food security and public health challenges facing humanity. piggyBac is the transposable element most commonly utilized for insect germline transformation. The described efficiency of this process is variable in the published literature, and a comprehensive review of transformation efficiency in insects is lacking. This study compared and contrasted all available published data with a comprehensive data set provided by a biotechnology group specializing in insect transformation. Based on analysis of these data, with particular focus on the more complete observational data from the biotechnology group, we designed a decision tool to aid researchers' decision‐making when using piggyBac to transform insects by microinjection. A combination of statistical techniques was used to define appropriate summary statistics of piggyBac transformation efficiency by species and insect order. Publication bias was assessed by comparing the data sets. The bias was assessed using strategies co‐opted from the medical literature. The work culminated in building the Goldilocks decision tool, a Markov‐Chain Monte‐Carlo simulation operated via a graphical interface and providing guidance on best practice for those seeking to transform insects using piggyBac. PMID:27027400
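
The Goldilocks tool itself is a Markov-Chain Monte-Carlo simulation; a far simpler sketch of the underlying estimation problem is a Beta-binomial model of per-injection transformation efficiency. The function names and prior below are assumptions for illustration, not the published tool's interface:

```python
import math

def beta_posterior_mean(transformants, injections, alpha=1.0, beta=1.0):
    """Posterior mean efficiency under a Beta(alpha, beta) prior
    with a binomial likelihood (uniform prior by default)."""
    return (alpha + transformants) / (alpha + beta + injections)

def injections_needed(target_lines, efficiency):
    """Expected number of injections to obtain the target number of
    independent transformed lines, given a point-estimate efficiency."""
    return math.ceil(target_lines / efficiency)

# Hypothetical pilot data: 5 transformants from 100 injected embryos
efficiency = beta_posterior_mean(5, 100)
plan = injections_needed(3, 0.1)
```

A full treatment would propagate the whole posterior (as an MCMC tool does) rather than a point estimate, which matters when pilot data are sparse.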

  20. Late paleozoic fusulinoidean gigantism driven by atmospheric hyperoxia.

    PubMed

    Payne, Jonathan L; Groves, John R; Jost, Adam B; Nguyen, Thienan; Moffitt, Sarah E; Hill, Tessa M; Skotheim, Jan M

    2012-09-01

    Atmospheric hyperoxia, with pO2 in excess of 30%, has long been hypothesized to account for late Paleozoic (360-250 million years ago) gigantism in numerous higher taxa. However, this hypothesis has not been evaluated statistically because comprehensive size data have not been compiled previously at sufficient temporal resolution to permit quantitative analysis. In this study, we test the hyperoxia-gigantism hypothesis by examining the fossil record of fusulinoidean foraminifers, a dramatic example of protistan gigantism with some individuals exceeding 10 cm in length and exceeding their relatives by six orders of magnitude in biovolume. We assembled and examined comprehensive regional and global, species-level datasets containing 270 and 1823 species, respectively. A statistical model of size evolution forced by atmospheric pO2 is conclusively favored over alternative models based on random walks or a constant tendency toward size increase. Moreover, the ratios of volume to surface area in the largest fusulinoideans are consistent in magnitude and trend with a mathematical model based on oxygen transport limitation. We further validate the hyperoxia-gigantism model through an examination of modern foraminiferal species living along a measured gradient in oxygen concentration. These findings provide the first quantitative confirmation of a direct connection between Paleozoic gigantism and atmospheric hyperoxia. © 2012 The Author(s). Evolution © 2012 The Society for the Study of Evolution.
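
Favoring a forced model over a random walk is a model-selection exercise; one standard criterion is the AIC, which trades fit against parameter count. A minimal sketch with hypothetical log-likelihoods (not the study's fitted values):

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion: 2k - 2 ln L. Lower is better."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical fits: a pO2-forced size model vs. an unbiased random walk
aic_forced = aic(log_likelihood=-120.0, n_params=3)
aic_random_walk = aic(log_likelihood=-150.0, n_params=2)
favored = "forced" if aic_forced < aic_random_walk else "random walk"
```

The forced model is only "conclusively favored" when its better fit outweighs the penalty for its extra parameter, which is what a large AIC gap indicates.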

  1. The Comprehension Problems of Children with Poor Reading Comprehension Despite Adequate Decoding: A Meta-Analysis

    ERIC Educational Resources Information Center

    Spencer, Mercedes; Wagner, Richard K.

    2018-01-01

    The purpose of this meta-analysis was to examine the comprehension problems of children who have a specific reading comprehension deficit (SCD), which is characterized by poor reading comprehension despite adequate decoding. The meta-analysis included 86 studies of children with SCD who were assessed in reading comprehension and oral language…

  2. Uncertainty

    USGS Publications Warehouse

    Hunt, Randall J.

    2012-01-01

    Management decisions will often be directly informed by model predictions. However, we now know there can be no expectation of a single ‘true’ model; thus, model results are uncertain. Understandable reporting of underlying uncertainty provides necessary context to decision-makers, as model results are used for management decisions. This, in turn, forms a mechanism by which groundwater models inform a risk-management framework because uncertainty around a prediction provides the basis for estimating the probability or likelihood of some event occurring. Given that the consequences of management decisions vary, it follows that the extent of and resources devoted to an uncertainty analysis may depend on the consequences. For events with low impact, a qualitative, limited uncertainty analysis may be sufficient for informing a decision. For events with a high impact, on the other hand, the risks might be better assessed and associated decisions made using a more robust and comprehensive uncertainty analysis. The purpose of this chapter is to provide guidance on uncertainty analysis through discussion of concepts and approaches, which can vary from heuristic (i.e. the modeller’s assessment of prediction uncertainty based on trial and error and experience) to a comprehensive, sophisticated, statistics-based uncertainty analysis. Most of the material presented here is taken from Doherty et al. (2010) if not otherwise cited. Although the treatment here is necessarily brief, the reader can find citations for the source material and additional references within this chapter.
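
At the comprehensive, statistics-based end of the spectrum described above, prediction uncertainty is often summarized as a percentile interval over an ensemble of model runs. A minimal sketch (the ensemble here is a stand-in, not output from any real groundwater model):

```python
from statistics import quantiles

def prediction_interval(simulated_predictions, level=0.90):
    """Central percentile interval from an ensemble of model predictions."""
    qs = quantiles(simulated_predictions, n=100, method="inclusive")
    tail = (1.0 - level) / 2.0            # e.g. 5% in each tail for 90%
    lo = qs[int(round(tail * 100)) - 1]           # lower percentile
    hi = qs[int(round((1.0 - tail) * 100)) - 1]   # upper percentile
    return lo, hi

# Stand-in ensemble of 100 simulated predictions
ensemble = list(range(1, 101))
low, high = prediction_interval(ensemble)
```

Reporting such an interval alongside the prediction gives decision-makers the probability framing the chapter describes: the chance that the quantity of interest falls outside the interval is bounded by the chosen level.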

  3. Clinical decision support tools: personal digital assistant versus online dietary supplement databases.

    PubMed

    Clauson, Kevin A; Polen, Hyla H; Peak, Amy S; Marsh, Wallace A; DiScala, Sandra L

    2008-11-01

    Clinical decision support tools (CDSTs) on personal digital assistants (PDAs) and online databases assist healthcare practitioners who make decisions about dietary supplements. To assess and compare the content of PDA dietary supplement databases and their online counterparts used as CDSTs. A total of 102 question-and-answer pairs were developed within 10 weighted categories of the most clinically relevant aspects of dietary supplement therapy. PDA versions of AltMedDex, Lexi-Natural, Natural Medicines Comprehensive Database, and Natural Standard and their online counterparts were assessed by scope (percent of correct answers present), completeness (3-point scale), ease of use, and a composite score integrating all 3 criteria. Descriptive statistics and inferential statistics, including a chi-square (χ²) test, Scheffé's multiple comparison test, McNemar's test, and the Wilcoxon signed rank test, were used to analyze data. The scope scores for PDA databases were: Natural Medicines Comprehensive Database 84.3%, Natural Standard 58.8%, Lexi-Natural 50.0%, and AltMedDex 36.3%, with Natural Medicines Comprehensive Database statistically superior (p < 0.01). Completeness scores were: Natural Medicines Comprehensive Database 78.4%, Natural Standard 51.0%, Lexi-Natural 43.5%, and AltMedDex 29.7%. Lexi-Natural was superior in ease of use (p < 0.01). Composite scores for PDA databases were: Natural Medicines Comprehensive Database 79.3, Natural Standard 53.0, Lexi-Natural 48.0, and AltMedDex 32.5, with Natural Medicines Comprehensive Database superior (p < 0.01). There was no difference between the scope for PDA and online database pairs with Lexi-Natural (50.0% and 53.9%, respectively) or Natural Medicines Comprehensive Database (84.3% and 84.3%, respectively) (p > 0.05), whereas differences existed for AltMedDex (36.3% vs 74.5%, respectively) and Natural Standard (58.8% vs 80.4%, respectively) (p < 0.01).
For composite scores, AltMedDex and Natural Standard online were better than their PDA counterparts (p < 0.01). Natural Medicines Comprehensive Database achieved significantly higher scope, completeness, and composite scores compared with other dietary supplement PDA CDSTs in this study. There was no difference between the PDA and online databases for Lexi-Natural and Natural Medicines Comprehensive Database, whereas online versions of AltMedDex and Natural Standard were significantly better than their PDA counterparts.

  4. Increasing URM Undergraduate Student Success through Assessment-Driven Interventions: A Multiyear Study Using Freshman-Level General Biology as a Model System

    PubMed Central

    Carmichael, Mary C.; St. Clair, Candace; Edwards, Andrea M.; Barrett, Peter; McFerrin, Harris; Davenport, Ian; Awad, Mohamed; Kundu, Anup; Ireland, Shubha Kale

    2016-01-01

    Xavier University of Louisiana leads the nation in awarding BS degrees in the biological sciences to African-American students. In this multiyear study with ∼5500 participants, data-driven interventions were adopted to improve student academic performance in a freshman-level general biology course. The three hour-long exams were common and administered concurrently to all students. New exam questions were developed using Bloom’s taxonomy, and exam results were analyzed statistically with validated assessment tools. All but the comprehensive final exam were returned to students for self-evaluation and remediation. Among other approaches, course rigor was monitored by using an identical set of 60 questions on the final exam across 10 semesters. Analysis of the identical sets of 60 final exam questions revealed that overall averages increased from 72.9% (2010) to 83.5% (2015). Regression analysis demonstrated a statistically significant correlation between high-risk students and their averages on the 60 questions. Additional analysis demonstrated statistically significant improvements for at least one letter grade from midterm to final and a 20% increase in the course pass rates over time, also for the high-risk population. These results support the hypothesis that our data-driven interventions and assessment techniques are successful in improving student retention, particularly for our academically at-risk students. PMID:27543637

  5. Bayesian models: A statistical primer for ecologists

    USGS Publications Warehouse

    Hobbs, N. Thompson; Hooten, Mevin B.

    2015-01-01

    Bayesian modeling has become an indispensable tool for ecological research because it is uniquely suited to deal with complexity in a statistically coherent way. This textbook provides a comprehensive and accessible introduction to the latest Bayesian methods—in language ecologists can understand. Unlike other books on the subject, this one emphasizes the principles behind the computations, giving ecologists a big-picture understanding of how to implement this powerful statistical approach. Bayesian Models is an essential primer for non-statisticians. It begins with a definition of probability and develops a step-by-step sequence of connected ideas, including basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and inference from single and multiple models. This unique book places less emphasis on computer coding, favoring instead a concise presentation of the mathematical statistics needed to understand how and why Bayesian analysis works. It also explains how to write out properly formulated hierarchical Bayesian models and use them in computing, research papers, and proposals. This primer enables ecologists to understand the statistical principles behind Bayesian modeling and apply them to research, teaching, policy, and management. The book presents the mathematical and statistical foundations of Bayesian modeling in language accessible to non-statisticians; covers basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and more; deemphasizes computer coding in favor of basic principles; and explains how to write out properly factored statistical expressions representing Bayesian models.

  6. Burr-hole Irrigation with Closed-system Drainage for the Treatment of Chronic Subdural Hematoma: A Meta-analysis

    PubMed Central

    XU, Chen; CHEN, Shiwen; YUAN, Lutao; JING, Yao

    2016-01-01

    There is controversy among neurosurgeons regarding whether irrigation or drainage is necessary for achieving a lower revision rate for the treatment of chronic subdural hematoma (CSDH) using burr-hole craniostomy (BHC). Therefore, we performed a meta-analysis of all available published reports. Multiple electronic health databases were searched to identify all studies published between 1989 and June 2012 that compared irrigation and drainage. Data were processed by using Review Manager 5.1.6. Effect sizes are expressed as pooled odds ratio (OR) estimates. Due to heterogeneity between studies, we used a random-effects, inverse-variance weighted model to perform the meta-analysis. Thirteen published reports were selected for this meta-analysis. The comprehensive results indicated that there were no statistically significant differences in mortality or complication rates between drainage and no drainage (P > 0.05). Additionally, there were no differences in recurrence between irrigation and no irrigation (P > 0.05). However, the difference between drainage and no drainage in recurrence rate reached statistical significance (P < 0.01). The results from this meta-analysis suggest that burr-hole surgery with closed-system drainage can reduce the recurrence of CSDH; however, irrigation is not necessary for every patient. PMID:26377830
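
    The abstract does not reproduce the pooling formulas, but a standard random-effects, inverse-variance weighted meta-analysis (the DerSimonian-Laird estimator) can be sketched as below. The per-study log odds ratios and standard errors are hypothetical illustration values, not data from the thirteen reports.

```python
import math

def dersimonian_laird(log_ors, ses):
    """Pool per-study log odds ratios with a DerSimonian-Laird
    random-effects, inverse-variance weighted model.

    Returns (pooled log OR, between-study variance tau^2)."""
    w = [1.0 / se**2 for se in ses]                           # fixed-effect weights
    fe = sum(wi * y for wi, y in zip(w, log_ors)) / sum(w)    # fixed-effect pool
    q = sum(wi * (y - fe) ** 2 for wi, y in zip(w, log_ors))  # Cochran's Q
    df = len(log_ors) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                             # between-study variance
    w_re = [1.0 / (se**2 + tau2) for se in ses]               # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, log_ors)) / sum(w_re)
    return pooled, tau2

# Hypothetical per-study effects (log odds ratios) and standard errors
pooled, tau2 = dersimonian_laird([0.9, 0.4, 1.1, 0.7], [0.3, 0.25, 0.4, 0.2])
print(math.exp(pooled))  # pooled odds ratio
```

    When Cochran's Q does not exceed its degrees of freedom, tau² truncates to zero and the model reduces to a fixed-effect analysis; otherwise the extra tau² term widens every study's variance and pulls the weights toward equality.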

  7. Why weight? Modelling sample and observational level variability improves power in RNA-seq analyses

    PubMed Central

    Liu, Ruijie; Holik, Aliaksei Z.; Su, Shian; Jansz, Natasha; Chen, Kelan; Leong, Huei San; Blewitt, Marnie E.; Asselin-Labat, Marie-Liesse; Smyth, Gordon K.; Ritchie, Matthew E.

    2015-01-01

    Variations in sample quality are frequently encountered in small RNA-sequencing experiments, and pose a major challenge in a differential expression analysis. Removal of high variation samples reduces noise, but at a cost of reducing power, thus limiting our ability to detect biologically meaningful changes. Similarly, retaining these samples in the analysis may not reveal any statistically significant changes due to the higher noise level. A compromise is to use all available data, but to down-weight the observations from more variable samples. We describe a statistical approach that facilitates this by modelling heterogeneity at both the sample and observational levels as part of the differential expression analysis. At the sample level this is achieved by fitting a log-linear variance model that includes common sample-specific or group-specific parameters that are shared between genes. The estimated sample variance factors are then converted to weights and combined with observational level weights obtained from the mean–variance relationship of the log-counts-per-million using ‘voom’. A comprehensive analysis involving both simulations and experimental RNA-sequencing data demonstrates that this strategy leads to a universally more powerful analysis and fewer false discoveries when compared to conventional approaches. This methodology has wide application and is implemented in the open-source ‘limma’ package. PMID:25925576
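
    As a simplified, hypothetical illustration of the weighting idea (the actual method fits full gene-wise linear models in the 'limma' package), sample-level and observation-level weights can be combined multiplicatively and used in a weighted group comparison; all numbers below are invented:

```python
# Combined weights for one gene: sample-level quality weights times
# observation-level precision weights, then a weighted group-mean contrast.
obs_weights    = [1.2, 0.9, 1.1, 1.0, 0.8, 1.3]   # from the mean-variance trend
sample_weights = [1.0, 1.0, 0.3, 1.0, 1.0, 0.4]   # down-weight two noisy samples
group          = [0, 0, 0, 1, 1, 1]               # two-group design
logcpm         = [5.1, 5.3, 6.8, 7.0, 7.2, 5.9]   # log-counts-per-million

w = [ow * sw for ow, sw in zip(obs_weights, sample_weights)]

def wmean(vals, wts):
    """Weighted mean of vals under weights wts."""
    return sum(wi * v for wi, v in zip(wts, vals)) / sum(wts)

ctrl = wmean([y for y, g in zip(logcpm, group) if g == 0],
             [wi for wi, g in zip(w, group) if g == 0])
trt  = wmean([y for y, g in zip(logcpm, group) if g == 1],
             [wi for wi, g in zip(w, group) if g == 1])
print(trt - ctrl)  # weighted log-fold-change estimate
```

    Down-weighting the third and sixth samples shrinks the influence of their outlying values on the group means, which is the compromise the abstract describes: the variable samples still contribute, but less than the reliable ones.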

  8. Improved Bond Equations for Fiber-Reinforced Polymer Bars in Concrete

    PubMed Central

    Pour, Sadaf Moallemi; Alam, M. Shahria; Milani, Abbas S.

    2016-01-01

    This paper explores a set of new equations to predict the bond strength between fiber reinforced polymer (FRP) rebar and concrete. The proposed equations are based on a comprehensive statistical analysis and existing experimental results in the literature. Namely, the parameters with the strongest effect on the bond behavior of FRP-reinforced concrete were first identified by applying a factorial analysis to a part of the available database. Then the database, which contains 250 pullout tests, was divided into four groups based on the concrete compressive strength and the rebar surface. Afterward, nonlinear regression analysis was performed for each study group in order to determine the bond equations. The results show that the proposed equations can predict bond strengths more accurately than other previously reported models. PMID:28773859
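
    The paper's actual regression models are not reproduced in the abstract; as a sketch of the general approach, assuming a simple power-law bond model τ = a·f_c^b and invented pullout data, the coefficients can be fitted by least squares on the log-transformed model:

```python
import math

# Hypothetical pullout-test data: concrete strength f_c (MPa) vs bond strength (MPa).
fc  = [20, 25, 30, 35, 40, 45, 50, 60]
tau = [2.9, 3.3, 3.6, 3.9, 4.2, 4.4, 4.7, 5.1]

# Fit tau = a * fc**b via ordinary least squares on the linearized model
# ln(tau) = ln(a) + b * ln(fc).
x = [math.log(v) for v in fc]
y = [math.log(v) for v in tau]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
    sum((xi - xbar) ** 2 for xi in x)
a = math.exp(ybar - b * xbar)
print(a, b)  # fitted power-law coefficients
```

    Fitting separate (a, b) pairs per data group mirrors the paper's strategy of deriving one equation per concrete-strength/rebar-surface group, though the published equations may use a different functional form.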

  9. The mortality of companies

    PubMed Central

    Daepp, Madeleine I. G.; Hamilton, Marcus J.; West, Geoffrey B.; Bettencourt, Luís M. A.

    2015-01-01

    The firm is a fundamental economic unit of contemporary human societies. Studies on the general quantitative and statistical character of firms have produced mixed results regarding their lifespans and mortality. We examine a comprehensive database of more than 25 000 publicly traded North American companies, from 1950 to 2009, to derive the statistics of firm lifespans. Based on detailed survival analysis, we show that the mortality of publicly traded companies manifests an approximately constant hazard rate over long periods of observation. This regularity indicates that mortality rates are independent of a company's age. We show that the typical half-life of a publicly traded company is about a decade, regardless of business sector. Our results shed new light on the dynamics of births and deaths of publicly traded companies and identify some of the necessary ingredients of a general theory of firms. PMID:25833247
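
    The link between a constant hazard rate and a half-life can be illustrated directly: with constant hazard λ, lifespans are exponentially distributed and the half-life is ln(2)/λ. The sketch below simulates firm lifespans under a hypothetical hazard matching the reported ten-year half-life and recovers it from the simulated data:

```python
import math
import random

random.seed(1)
# Simulated firm lifespans (years) under the constant hazard implied by a
# ten-year half-life (a hypothetical rate chosen for illustration).
true_rate = math.log(2) / 10.0
lifespans = [random.expovariate(true_rate) for _ in range(20000)]

# With a constant hazard, lifespans are exponential; the maximum-likelihood
# estimate of the hazard is 1 / mean lifespan, and half-life = ln(2) / hazard.
hazard = 1.0 / (sum(lifespans) / len(lifespans))
half_life = math.log(2) / hazard
print(round(half_life, 1))  # close to the 10-year input
```

    The age-independence finding in the abstract is exactly the defining property of this exponential model: the probability a firm survives another year is the same whether it is five years old or fifty.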

  10. Associations between host characteristics and antimicrobial resistance of Salmonella typhimurium.

    PubMed

    Ruddat, I; Tietze, E; Ziehm, D; Kreienbrock, L

    2014-10-01

    A collection of Salmonella Typhimurium isolates obtained from sporadic salmonellosis cases in humans from Lower Saxony, Germany between June 2008 and May 2010 was used to perform an exploratory risk-factor analysis on antimicrobial resistance (AMR) using comprehensive host information on sociodemographic attributes, medical history, food habits and animal contact. Multivariate resistance profiles of minimum inhibitory concentrations for 13 antimicrobial agents were analysed using a non-parametric approach with multifactorial models adjusted for phage types. Statistically significant associations were observed for consumption of antimicrobial agents, region type and three factors on egg-purchasing behaviour, indicating that besides antimicrobial use the proximity to other community members, health consciousness and other lifestyle-related attributes may play a role in the dissemination of resistances. Furthermore, a statistically significant increase in AMR from the first study year to the second year was observed.

  11. The mortality of companies.

    PubMed

    Daepp, Madeleine I G; Hamilton, Marcus J; West, Geoffrey B; Bettencourt, Luís M A

    2015-05-06

    The firm is a fundamental economic unit of contemporary human societies. Studies on the general quantitative and statistical character of firms have produced mixed results regarding their lifespans and mortality. We examine a comprehensive database of more than 25 000 publicly traded North American companies, from 1950 to 2009, to derive the statistics of firm lifespans. Based on detailed survival analysis, we show that the mortality of publicly traded companies manifests an approximately constant hazard rate over long periods of observation. This regularity indicates that mortality rates are independent of a company's age. We show that the typical half-life of a publicly traded company is about a decade, regardless of business sector. Our results shed new light on the dynamics of births and deaths of publicly traded companies and identify some of the necessary ingredients of a general theory of firms.

  12. RCT: Module 2.03, Counting Errors and Statistics, Course 8768

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hillmer, Kurt T.

    2017-04-01

    Radiological sample analysis involves the observation of a random process that may or may not occur and an estimation of the amount of radioactive material present based on that observation. Across the country, radiological control personnel are using the activity measurements to make decisions that may affect the health and safety of workers at those facilities and their surrounding environments. This course will present an overview of measurement processes, a statistical evaluation of both measurements and equipment performance, and some actions to take to minimize the sources of error in count room operations. This course will prepare the student with the skills necessary for radiological control technician (RCT) qualification by passing quizzes, tests, and the RCT Comprehensive Phase 1, Unit 2 Examination (TEST 27566) and by providing in-the-field skills.

  13. Assessment of NDE Reliability Data

    NASA Technical Reports Server (NTRS)

    Yee, B. G. W.; Chang, F. H.; Couchman, J. C.; Lemon, G. H.; Packman, P. F.

    1976-01-01

    Twenty sets of relevant Nondestructive Evaluation (NDE) reliability data have been identified, collected, compiled, and categorized. A criterion for the selection of data for statistical analysis considerations has been formulated. A model to grade the quality and validity of the data sets has been developed. Data input formats, which record the pertinent parameters of the defect/specimen and inspection procedures, have been formulated for each NDE method. A comprehensive computer program has been written to calculate the probability of flaw detection at several confidence levels by the binomial distribution. This program also selects the desired data sets for pooling and tests the statistical pooling criteria before calculating the composite detection reliability. Probability of detection curves at 95 and 50 percent confidence levels have been plotted for individual sets of relevant data as well as for several sets of merged data with common sets of NDE parameters.
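
    The report's own program is not available here, but computing a probability-of-detection (POD) bound at a given confidence level from binomial data is standard; a sketch using the Clopper-Pearson lower bound, solved by bisection, follows. The "29 of 29" case is a common rule of thumb in NDE reliability, giving roughly 90% POD at 95% confidence.

```python
from math import comb

def pod_lower_bound(detections, trials, confidence=0.95):
    """Clopper-Pearson lower confidence bound on probability of detection.

    Finds the largest p for which observing >= `detections` successes in
    `trials` attempts still has probability <= 1 - confidence."""
    alpha = 1.0 - confidence

    def upper_tail(p):  # P(X >= detections) under Binomial(trials, p)
        return sum(comb(trials, i) * p**i * (1 - p)**(trials - i)
                   for i in range(detections, trials + 1))

    lo, hi = 0.0, 1.0
    for _ in range(60):              # bisection: the tail is increasing in p
        mid = (lo + hi) / 2
        if upper_tail(mid) < alpha:
            lo = mid
        else:
            hi = mid
    return lo

# 29 detections in 29 trials at 95% confidence:
print(round(pod_lower_bound(29, 29), 3))  # about 0.902
```

    Relaxing the confidence level raises the bound (fewer trials are needed to claim the same POD), which is why POD curves are conventionally reported at both the 95 and 50 percent levels, as in this study.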

  14. A CS1 pedagogical approach to parallel thinking

    NASA Astrophysics Data System (ADS)

    Rague, Brian William

    Almost all collegiate programs in Computer Science offer an introductory course in programming primarily devoted to communicating the foundational principles of software design and development. The ACM designates this introduction to computer programming course for first-year students as CS1, during which methodologies for solving problems within a discrete computational context are presented. Logical thinking is highlighted, guided primarily by a sequential approach to algorithm development and made manifest by typically using the latest, commercially successful programming language. In response to the most recent developments in accessible multicore computers, instructors of these introductory classes may wish to include training on how to design workable parallel code. Novel issues arise when programming concurrent applications which can make teaching these concepts to beginning programmers a seemingly formidable task. Student comprehension of design strategies related to parallel systems should be monitored to ensure an effective classroom experience. This research investigated the feasibility of integrating parallel computing concepts into the first-year CS classroom. To quantitatively assess student comprehension of parallel computing, an experimental educational study using a two-factor mixed group design was conducted to evaluate two instructional interventions in addition to a control group: (1) topic lecture only, and (2) topic lecture with laboratory work using a software visualization Parallel Analysis Tool (PAT) specifically designed for this project. A new evaluation instrument developed for this study, the Perceptions of Parallelism Survey (PoPS), was used to measure student learning regarding parallel systems. 
The results from this educational study show a statistically significant main effect among the repeated measures, implying that student comprehension levels of parallel concepts as measured by the PoPS improve immediately after the delivery of any initial three-week CS1 level module when compared with student comprehension levels just prior to starting the course. Survey results measured during the ninth week of the course reveal that performance levels remained high compared to pre-course performance scores. A second result produced by this study reveals no statistically significant interaction effect between the intervention method and student performance as measured by the evaluation instrument over three separate testing periods. However, visual inspection of survey score trends and the low p-value generated by the interaction analysis (0.062) indicate that further studies may verify improved concept retention levels for the lecture w/PAT group.

  15. MNE software for processing MEG and EEG data

    PubMed Central

    Gramfort, A.; Luessi, M.; Larson, E.; Engemann, D.; Strohmeier, D.; Brodbeck, C.; Parkkonen, L.; Hämäläinen, M.

    2013-01-01

    Magnetoencephalography and electroencephalography (M/EEG) measure the weak electromagnetic signals originating from neural currents in the brain. Using these signals to characterize and locate brain activity is a challenging task, as evidenced by several decades of methodological contributions. MNE, whose name stems from its capability to compute cortically-constrained minimum-norm current estimates from M/EEG data, is a software package that provides comprehensive analysis tools and workflows including preprocessing, source estimation, time–frequency analysis, statistical analysis, and several methods to estimate functional connectivity between distributed brain regions. The present paper gives detailed information about the MNE package and describes typical use cases while also warning about potential caveats in analysis. The MNE package is a collaborative effort of multiple institutes striving to implement and share best methods and to facilitate distribution of analysis pipelines to advance reproducibility of research. Full documentation is available at http://martinos.org/mne. PMID:24161808

  16. Face recognition using an enhanced independent component analysis approach.

    PubMed

    Kwak, Keun-Chang; Pedrycz, Witold

    2007-03-01

    This paper is concerned with an enhanced independent component analysis (ICA) and its application to face recognition. Typically, face representations obtained by ICA involve unsupervised learning and high-order statistics. In this paper, we develop an enhancement of the generic ICA by augmenting this method by the Fisher linear discriminant analysis (LDA); hence, its abbreviation, FICA. The FICA is systematically developed and presented along with its underlying architecture. A comparative analysis explores four distance metrics, as well as classification with support vector machines (SVMs). We demonstrate that the FICA approach leads to the formation of well-separated classes in low-dimension subspace and is endowed with a great deal of insensitivity to large variation in illumination and facial expression. The comprehensive experiments are completed for the facial-recognition technology (FERET) face database; a comparative analysis demonstrates that FICA comes with improved classification rates when compared with some other conventional approaches such as eigenface, fisherface, and the ICA itself.

  17. Use of Bloom's Taxonomy in Developing Reading Comprehension Specifications

    ERIC Educational Resources Information Center

    Luebke, Stephen; Lorie, James

    2013-01-01

    This article is a brief account of the use of Bloom's Taxonomy of Educational Objectives (Bloom, Engelhart, Furst, Hill, & Krathwohl, 1956) by staff of the Law School Admission Council in the 1990 development of redesigned specifications for the Reading Comprehension section of the Law School Admission Test. Summary item statistics for the…

  18. Cost-Effectiveness of Comprehensive School Reform in Low Achieving Schools

    ERIC Educational Resources Information Center

    Ross, John A.; Scott, Garth; Sibbald, Tim M.

    2012-01-01

    We evaluated the cost-effectiveness of Struggling Schools, a user-generated approach to Comprehensive School Reform implemented in 100 low achieving schools serving disadvantaged students in a Canadian province. The results show that while Struggling Schools had a statistically significant positive effect on Grade 3 Reading achievement, d = 0.48…

  19. Oakton Community College Comprehensive Annual Financial Report, Fiscal Year Ended June 30, 1996.

    ERIC Educational Resources Information Center

    Hilquist, David E.

    Consisting primarily of tables, this report provides financial data on Oakton Community College in Illinois for the fiscal year ending on June 30, 1996. This comprehensive annual financial report consists of an introductory section, financial section, statistical section, and special reports section. The introductory section includes a transmittal…

  20. Effects of a Decoding Program on a Child with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Infantino, Josephine; Hempenstall, Kerry

    2006-01-01

    This case study examined the effects of a parent-presented Direct Instruction decoding program on the reading and language skills of a child with high functioning Autism Spectrum Disorder. Following the 23 hour intervention, reading comprehension, listening comprehension and fluency skills improved to grade level, whilst statistically significant…

  1. Elaborating Selected Statistical Concepts with Common Experience.

    ERIC Educational Resources Information Center

    Weaver, Kenneth A.

    1992-01-01

    Presents ways of elaborating statistical concepts so as to make course material more meaningful for students. Describes examples using exclamations, circus and cartoon characters, and falling leaves to illustrate variability, null hypothesis testing, and confidence interval. Concludes that the exercises increase student comprehension of the text…

  2. Improving DHH students' grammar through an individualized software program.

    PubMed

    Cannon, Joanna E; Easterbrooks, Susan R; Gagné, Phill; Beal-Alvarez, Jennifer

    2011-01-01

    The purpose of this study was to determine if the frequent use of a targeted, computer software grammar instruction program, used as an individualized classroom activity, would influence the comprehension of morphosyntax structures (determiners, tense, and complementizers) in deaf/hard-of-hearing (DHH) participants who use American Sign Language (ASL). Twenty-six students from an urban day school for the deaf participated in this study. Two hierarchical linear modeling growth curve analyses showed that the influence of LanguageLinks: Syntax Assessment and Intervention (LL) resulted in statistically significant gains in participants' comprehension of morphosyntax structures. Two dependent t tests revealed statistically significant results between the pre- and postintervention assessments on the Diagnostic Evaluation of Language Variation-Norm Referenced. The daily use of LL increased the morphosyntax comprehension of the participants in this study and may be a promising practice for DHH students who use ASL.

  3. A Study of Bicycle and Passenger Car Collisions Based on Insurance Claims Data

    PubMed Central

    Isaksson-Hellman, Irene

    2012-01-01

    In Sweden, bicycle crashes are under-reported in the official statistics that are based on police reports. Statistics from hospital reports show that cyclists constitute the highest percentage of severely injured road users compared to other road user groups. However, hospital reports lack detailed information about the crash. To get a more comprehensive view, additional data are needed to accurately reflect the casualty situation for cyclists. An analysis based on 438 cases of bicycle and passenger car collisions is presented, using data collected from insurance claims. The most frequent crash situations are described, including where and when collisions occur and the age and gender of the involved cyclists and drivers. Information on environmental circumstances such as road status, weather and light conditions, speed limits, and traffic environment is also included. Based on the various crash events, a total of 32 different scenarios have been categorized, and it was found that more than 75% were different kinds of intersection-related situations. From the data, it was concluded that factors such as estimated impact speed and age significantly influence injury severity. The insurance claims data complement the official statistics and provide a more comprehensive view of bicycle and passenger car collisions by considering all levels of crash and injury severity. The detailed descriptions of the crash situations also provide an opportunity to find countermeasures to prevent or mitigate collisions. The results provide a useful basis and facilitate the work of reducing the number of bicycle and passenger car collisions with serious consequences. PMID:23169111

  4. Racism as a determinant of health: a protocol for conducting a systematic review and meta-analysis.

    PubMed

    Paradies, Yin; Priest, Naomi; Ben, Jehonathan; Truong, Mandy; Gupta, Arpana; Pieterse, Alex; Kelaher, Margaret; Gee, Gilbert

    2013-09-23

    Racism is increasingly recognized as a key determinant of health. A growing body of epidemiological evidence shows strong associations between self-reported racism and poor health outcomes across diverse minority groups in developed countries. While the relationship between racism and health has received increasing attention over the last two decades, a comprehensive meta-analysis focused on the health effects of racism has yet to be conducted. The aim of this review protocol is to provide a structure from which to conduct a systematic review and meta-analysis of studies that assess the relationship between racism and health. This research will consist of a systematic review and meta-analysis. Studies will be considered for review if they are empirical studies reporting quantitative data on the association between racism and health for adults and/or children of all ages from any racial/ethnic/cultural groups. Outcome measures will include general health and well-being, physical health, mental health, healthcare use and health behaviors. Scientific databases (for example, Medline) will be searched using a comprehensive search strategy and reference lists will be manually searched for relevant studies. In addition, use of online search engines (for example, Google Scholar), key websites, and personal contact with experts will also be undertaken. Screening of search results and extraction of data from included studies will be independently conducted by at least two authors, including assessment of inter-rater reliability. Studies included in the review will be appraised for quality using tools tailored to each study design. Summary statistics of study characteristics and findings will be compiled and findings synthesized in a narrative summary as well as a meta-analysis. This review aims to examine associations between reported racism and health outcomes. 
This comprehensive and systematic review and meta-analysis of empirical research will provide a rigorous and reliable evidence base for future research, policy and practice, including information on the extent of available evidence for a range of racial/ethnic minority groups.

  5. Impact of an integrated science and reading intervention (INSCIREAD) on bilingual students' misconceptions, reading comprehension, and transferability of strategies

    NASA Astrophysics Data System (ADS)

    Martinez, Patricia

    This thesis describes a research study that resulted in an instructional model directed at helping diverse fourth-grade students improve their science knowledge, their reading comprehension, their awareness of the relationship between science and reading, and their ability to transfer strategies. The focus of the instructional model emerged from the intersection of constructs in science and reading literacy; the model identifies cognitive strategies that can be used in science and reading, and inquiry-based instruction related to the science content read by participants. The intervention is termed INSCIREAD (Instruction in Science and Reading). The GoInquire web-based system (2006) was used to develop students' content knowledge in slow landform change. Seventy-eight students participated in the study. The treatment group comprised 49 students without disabilities and 8 students with disabilities. The control group comprised 21 students without disabilities. The design of the study is a combination of a mixed-methods quasi-experimental design (Study 1) and a single-subject design with groups as the unit of analysis (Study 2). The results from the quantitative measures showed that the text recall analysis from Study 1 approached statistical significance when comparing the performance of students without disabilities in the treatment group to that of the control group. Visual analyses of the results from the text recall data from Study 2 showed at least minimal change in all groups. The results of the data analysis of the level of the generated questions show there was a statistically significant increase in the scores students without disabilities obtained in the questions they generated from the pre to the posttest.
The analyses conducted to detect incongruities, to summarize and rate importance, and to determine the number of propositions on a science and reading concept map data showed a statistically significant difference between students without disabilities in the treatment and the control groups on post-intervention scores. The analysis of the data from the number of misconceptions of students without disabilities showed that the frequency of 4 of the 11 misconceptions changed significantly from pre to post elicitation stages. The analyses of the qualitative measures of the think alouds and interviews generally supported the above findings.

  6. Education Statistics Quarterly, Summer 2002.

    ERIC Educational Resources Information Center

    Dillow, Sally, Ed.

    2002-01-01

    This publication provides a comprehensive overview of work done across all parts of the National Center for Education Statistics (NCES). Each issue contains short publications, summaries, and descriptions that cover all NCES publications, data products, and funding opportunities developed over a 3-month period. Each issue also contains a message…

  7. Education Statistics Quarterly, Spring 2002.

    ERIC Educational Resources Information Center

    Dillow, Sally, Ed.

    2002-01-01

    This publication provides a comprehensive overview of work done across all parts of the National Center for Education Statistics (NCES). Each issue contains short publications, summaries, and descriptions that cover all NCES publications, data products, and funding opportunities developed over a 3-month period. Each issue also contains a message…

  8. Harmonizing health information systems with information systems in other social and economic sectors.

    PubMed Central

    Macfarlane, Sarah B.

    2005-01-01

    Efforts to strengthen health information systems in low- and middle-income countries should include forging links with systems in other social and economic sectors. Governments are seeking comprehensive socioeconomic data on the basis of which to implement strategies for poverty reduction and to monitor achievement of the Millennium Development Goals. The health sector is looking to take action on the social factors that determine health outcomes. But there are duplications and inconsistencies between sectors in the collection, reporting, storage and analysis of socioeconomic data. National offices of statistics give higher priority to collection and analysis of economic than to social statistics. The Report of the Commission for Africa has estimated that an additional US$ 60 million a year is needed to improve systems to collect and analyse statistics in Africa. Some donors recognize that such systems have been weakened by numerous international demands for indicators, and have pledged support for national initiatives to strengthen statistical systems, as well as sectoral information systems such as those in health and education. Many governments are working to coordinate information systems to monitor and evaluate poverty reduction strategies. There is therefore an opportunity for the health sector to collaborate with other sectors to lever international resources to rationalize definition and measurement of indicators common to several sectors; streamline the content, frequency and timing of household surveys; and harmonize national and subnational databases that store socioeconomic data. Without long-term commitment to improve training and build career structures for statisticians and information technicians working in the health and other sectors, improvements in information and statistical systems cannot be sustained. PMID:16184278

  9. Large-scale gene function analysis with the PANTHER classification system.

    PubMed

    Mi, Huaiyu; Muruganujan, Anushya; Casagrande, John T; Thomas, Paul D

    2013-08-01

    The PANTHER (Protein ANalysis THrough Evolutionary Relationships) classification system (http://www.pantherdb.org/) is a comprehensive system that combines gene function, ontology, pathways and statistical analysis tools that enable biologists to analyze large-scale, genome-wide data from sequencing, proteomics or gene expression experiments. The system is built with 82 complete genomes organized into gene families and subfamilies, and their evolutionary relationships are captured in phylogenetic trees, multiple sequence alignments and statistical models (hidden Markov models or HMMs). Genes are classified according to their function in several different ways: families and subfamilies are annotated with ontology terms (Gene Ontology (GO) and PANTHER protein class), and sequences are assigned to PANTHER pathways. The PANTHER website includes a suite of tools that enable users to browse and query gene functions, and to analyze large-scale experimental data with a number of statistical tests. It is widely used by bench scientists, bioinformaticians, computer scientists and systems biologists. In the 2013 release of PANTHER (v.8.0), in addition to an update of the data content, we redesigned the website interface to improve both user experience and the system's analytical capability. This protocol provides a detailed description of how to analyze genome-wide experimental data with the PANTHER classification system.
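    A minimal sketch of the kind of statistical test such tools apply to a gene list: an overrepresentation test asks whether an annotation term appears in the input list more often than chance predicts, via a hypergeometric upper tail. This is an illustration, not PANTHER's actual implementation, and all counts below are invented.

```python
from math import comb

def hypergeom_sf(k, N, K, n):
    """P(X >= k) when drawing n genes from a genome of N genes,
    K of which carry the annotation (hypergeometric upper tail)."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

# All counts hypothetical: a 20,000-gene genome, 200 genes annotated
# with a term, and a 50-gene input list containing 8 annotated genes.
p = hypergeom_sf(8, 20000, 200, 50)
```

Under the null, a 50-gene list would contain about 0.5 annotated genes on average, so observing 8 yields a very small p-value.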

  10. Thermal heterogeneity within aqueous materials quantified by 1H NMR spectroscopy: Multiparametric validation in silico and in vitro

    NASA Astrophysics Data System (ADS)

    Lutz, Norbert W.; Bernard, Monique

    2018-02-01

    We recently suggested a new paradigm for statistical analysis of thermal heterogeneity in (semi-)aqueous materials by 1H NMR spectroscopy, using water as a temperature probe. Here, we present a comprehensive in silico and in vitro validation that demonstrates the ability of this new technique to provide accurate quantitative parameters characterizing the statistical distribution of temperature values in a volume of (semi-)aqueous matter. First, line shape parameters of numerically simulated water 1H NMR spectra are systematically varied to study a range of mathematically well-defined temperature distributions. Then, corresponding models based on measured 1H NMR spectra of agarose gel are analyzed. In addition, dedicated samples based on hydrogels or biological tissue are designed to produce temperature gradients changing over time, and dynamic NMR spectroscopy is employed to analyze the resulting temperature profiles at sub-second temporal resolution. Accuracy and consistency of the previously introduced statistical descriptors of temperature heterogeneity are determined: weighted median and mean temperature, standard deviation, temperature range, temperature mode(s), kurtosis, skewness, entropy, and relative areas under temperature curves. Potential and limitations of this method for quantitative analysis of thermal heterogeneity in (semi-)aqueous materials are discussed in view of prospective applications in materials science as well as biology and medicine.
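    The statistical descriptors listed above (weighted mean and median, standard deviation, skewness, kurtosis, entropy, range) can be computed directly from any weighted temperature distribution. The sketch below uses an assumed Gaussian-like profile as a stand-in for a distribution recovered from a water 1H NMR line shape; it is an illustration of the descriptors, not the authors' spectral-analysis pipeline.

```python
import numpy as np

# Hypothetical temperature distribution: temperatures T (deg C) with
# normalized weights w, standing in for an NMR-derived profile.
T = np.linspace(30.0, 45.0, 301)
w = np.exp(-0.5 * ((T - 37.0) / 1.5) ** 2)   # assumed Gaussian-like shape
w /= w.sum()

mean = np.sum(w * T)                                   # weighted mean
sd = np.sqrt(np.sum(w * (T - mean) ** 2))              # weighted std dev
skew = np.sum(w * ((T - mean) / sd) ** 3)              # weighted skewness
kurt = np.sum(w * ((T - mean) / sd) ** 4) - 3.0        # excess kurtosis
median = T[np.searchsorted(np.cumsum(w), 0.5)]         # weighted median
entropy = -np.sum(w[w > 0] * np.log(w[w > 0]))         # Shannon entropy
t_range = np.ptp(T[w > 1e-6 * w.max()])                # effective range
```

For the symmetric profile assumed here, the weighted mean and median coincide and the skewness is near zero; an asymmetric temperature gradient would shift them apart.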

  11. Power-up: A Reanalysis of 'Power Failure' in Neuroscience Using Mixture Modeling

    PubMed Central

    Wood, John

    2017-01-01

    Recently, evidence for endemically low statistical power has cast neuroscience findings into doubt. If low statistical power plagues neuroscience, then this reduces confidence in the reported effects. However, if statistical power is not uniformly low, then such blanket mistrust might not be warranted. Here, we provide a different perspective on this issue, analyzing data from an influential study reporting a median power of 21% across 49 meta-analyses (Button et al., 2013). We demonstrate, using Gaussian mixture modeling, that the sample of 730 studies included in that analysis comprises several subcomponents so the use of a single summary statistic is insufficient to characterize the nature of the distribution. We find that statistical power is extremely low for studies included in meta-analyses that reported a null result and that it varies substantially across subfields of neuroscience, with particularly low power in candidate gene association studies. Therefore, whereas power in neuroscience remains a critical issue, the notion that studies are systematically underpowered is not the full story: low power is far from a universal problem. SIGNIFICANCE STATEMENT Recently, researchers across the biomedical and psychological sciences have become concerned with the reliability of results. One marker for reliability is statistical power: the probability of finding a statistically significant result given that the effect exists. Previous evidence suggests that statistical power is low across the field of neuroscience. Our results present a more comprehensive picture of statistical power in neuroscience: on average, studies are indeed underpowered—some very seriously so—but many studies show acceptable or even exemplary statistical power. We show that this heterogeneity in statistical power is common across most subfields in neuroscience. 
This new, more nuanced picture of statistical power in neuroscience could affect not only scientific understanding, but potentially policy and funding decisions for neuroscience research. PMID:28706080
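    The core technique here, decomposing a distribution of per-study power estimates into Gaussian subcomponents, can be sketched with a small expectation-maximization (EM) loop. The data below are simulated (a hypothetical low-power and high-power subgroup), not the 730 studies analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical per-study power estimates drawn from two subgroups:
# a low-powered one near 0.15 and a well-powered one near 0.80.
power = np.concatenate([rng.normal(0.15, 0.05, 400),
                        rng.normal(0.80, 0.08, 300)])

def fit_gmm_1d(x, mu, sigma, weight, n_iter=200):
    """EM for a two-component one-dimensional Gaussian mixture."""
    for _ in range(n_iter):
        # E-step: responsibility of each component for each observation
        dens = np.stack([weight[k] / (sigma[k] * np.sqrt(2 * np.pi))
                         * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
                         for k in range(2)])
        resp = dens / dens.sum(axis=0)
        # M-step: re-estimate weights, means and standard deviations
        nk = resp.sum(axis=1)
        weight = nk / len(x)
        mu = (resp * x).sum(axis=1) / nk
        sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk)
    return mu, sigma, weight

mu, sigma, weight = fit_gmm_1d(power,
                               mu=np.array([0.3, 0.6]),
                               sigma=np.array([0.2, 0.2]),
                               weight=np.array([0.5, 0.5]))
```

The fitted component means recover the two subgroups, illustrating why a single summary statistic (such as a median power of 21%) cannot characterize a mixture.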

  12. Comprehensive detection of genes causing a phenotype using phenotype sequencing and pathway analysis.

    PubMed

    Harper, Marc; Gronenberg, Luisa; Liao, James; Lee, Christopher

    2014-01-01

    Discovering all the genetic causes of a phenotype is an important goal in functional genomics. We combine an experimental design for detecting independent genetic causes of a phenotype with a high-throughput sequencing analysis that maximizes sensitivity for comprehensively identifying them. Testing this approach on a set of 24 mutant strains generated for a metabolic phenotype with many known genetic causes, we show that this pathway-based phenotype sequencing analysis greatly improves sensitivity of detection compared with previous methods, and reveals a wide range of pathways that can cause this phenotype. We demonstrate our approach on a metabolic re-engineering phenotype, the PEP/OAA metabolic node in E. coli, which is crucial to a substantial number of metabolic pathways and is attracting renewed interest in biofuel research. Out of 2157 mutations in these strains, pathway-phenoseq discriminated just five gene groups (12 genes) as statistically significant causes of the phenotype. Experimentally, these five gene groups, and the next two high-scoring pathway-phenoseq groups, either have a clear connection to the PEP metabolite level or offer an alternative path for producing oxaloacetate (OAA), and thus clearly explain the phenotype. These high-scoring gene groups also show strong evidence of positive selection pressure, compared with strictly neutral selection in the rest of the genome.

  13. Versatile Analysis of Single-Molecule Tracking Data by Comprehensive Testing against Monte Carlo Simulations

    PubMed Central

    Wieser, Stefan; Axmann, Markus; Schütz, Gerhard J.

    2008-01-01

    We propose here an approach for the analysis of single-molecule trajectories which is based on a comprehensive comparison of an experimental data set with multiple Monte Carlo simulations of the diffusion process. It allows quantitative data analysis, particularly whenever analytical treatment of a model is infeasible. Simulations are performed on a discrete parameter space and compared with the experimental results by a nonparametric statistical test. The method provides a matrix of p-values that assess the probability for having observed the experimental data at each setting of the model parameters. We show the testing approach for three typical situations observed in the cellular plasma membrane: (i) free Brownian motion of the tracer; (ii) hop diffusion of the tracer in a periodic meshwork of squares; and (iii) transient binding of the tracer to slowly diffusing structures. By plotting the p-value as a function of the model parameters, one can easily identify the most consistent parameter settings but also recover mutual dependencies and ambiguities which are difficult to determine by standard fitting routines. Finally, we used the test to reanalyze previous data obtained on the diffusion of the glycosylphosphatidylinositol-protein CD59 in the plasma membrane of the human T24 cell line. PMID:18805933
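    The grid-search-plus-nonparametric-test idea can be sketched for the simplest case, free Brownian motion: simulate step-length samples for each candidate diffusion coefficient D and compare each against the "experimental" sample with a Kolmogorov-Smirnov statistic. This is a toy sketch (synthetic data, statistic rather than a full p-value matrix), not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)

def ks_stat(a, b):
    """Two-sample Kolmogorov-Smirnov statistic (max ECDF distance)."""
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.abs(cdf_a - cdf_b).max()

def brownian_steps(D, dt, n):
    """Simulated 1-D Brownian step lengths for diffusion coefficient D."""
    return rng.normal(0.0, np.sqrt(2.0 * D * dt), n)

# "Experimental" steps generated with a true D of 1.0 (arbitrary units)
exp_steps = brownian_steps(1.0, dt=0.01, n=5000)

# Score every candidate D on a discrete parameter grid against the data
grid_D = [0.25, 0.5, 1.0, 2.0, 4.0]
scores = {D: ks_stat(exp_steps, brownian_steps(D, 0.01, 5000))
          for D in grid_D}
best_D = min(scores, key=scores.get)
```

Plotting `scores` over the grid reveals how sharply the data constrain D, the one-parameter analogue of the paper's p-value matrix.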

  14. Association of community-based dental education components with fourth-year dental students' clinical performance.

    PubMed

    Major, Nicole; McQuistan, Michelle R; Qian, Fang

    2014-08-01

    The purpose of this study was to assess which components of a community-based dental education (CBDE) program at The University of Iowa College of Dentistry & Dental Clinics were associated with overall student performance. This retrospective study analyzed data for 444 fourth-year students who graduated in 2006 through 2011. Information pertaining to students' CBDE rotations and their final grades from the comprehensive clinic (in two areas: Production and Competence) were used for statistical analysis. Bivariate analyses indicated that students who completed CBDE in the fall were more likely to receive an A or B in Production compared to students who completed CBDE in the spring. However, students who completed CBDE in the beginning or end of the academic year were more likely to receive an A or B in Competence compared to those who completed CBDE in the middle of the year. Students who treated a variety of patient types during CBDE experiences (comprehensive and emergency care vs. mainly comprehensive care) were more likely to receive better grades in Production, while CBDE clinic type was not associated with grades. Dental schools should consider how CBDE may impact students' performance in their institutional clinics when developing and evaluating CBDE programs.

  15. Use of a modified Comprehensive Pain Evaluation Questionnaire: Characteristics and functional status of patients on entry to a tertiary care pain clinic

    PubMed Central

    Nelli, Jennifer M; Nicholson, Keith; Lakha, S Fatima; Louffat, Ada F; Chapparo, Luis; Furlan, Julio; Mailis-Gagnon, Angela

    2012-01-01

    BACKGROUND: With increasing knowledge of chronic pain, clinicians have attempted to assess chronic pain patients with lengthy assessment tools. OBJECTIVES: To describe the functional and emotional status of patients presenting to a tertiary care pain clinic; to assess the reliability and validity of a diagnostic classification system for chronic pain patients modelled after the Multidimensional Pain Inventory; to provide psychometric data on a modified Comprehensive Pain Evaluation Questionnaire (CPEQ); and to evaluate the relationship between the modified CPEQ construct scores and clusters with Diagnostic and Statistical Manual, Fourth Edition – Text Revision Pain Disorder diagnoses. METHODS: Data on 300 new patients over the course of nine months were collected using standardized assessment procedures plus a modified CPEQ at the Comprehensive Pain Program, Toronto Western Hospital, Toronto, Ontario. RESULTS: Cluster analysis of the modified CPEQ revealed three patient profiles, labelled Adaptive Copers, Dysfunctional, and Interpersonally Distressed, which closely resembled those previously reported. The distribution of modified CPEQ construct T scores across profile subtypes was similar to that previously reported for the original CPEQ. A novel finding was that of a strong relationship between the modified CPEQ clusters and constructs with Diagnostic and Statistical Manual, Fourth Edition – Text Revision Pain Disorder diagnoses. DISCUSSION AND CONCLUSIONS: The CPEQ, either the original or modified version, yields reproducible results consistent with the results of other studies. This technique may usefully classify chronic pain patients, but more work is needed to determine the meaning of the CPEQ clusters, what psychological or biomedical variables are associated with CPEQ constructs or clusters, and whether this instrument may assist in treatment planning or predict response to treatment. PMID:22518368

  16. Predicting reading comprehension academic achievement in late adolescents with velo-cardio-facial (22q11.2 deletion) syndrome (VCFS): A longitudinal study

    PubMed Central

    Antshel, Kevin M.; Hier, Bridget O.; Fremont, Wanda; Faraone, Stephen V.; Kates, Wendy R.

    2015-01-01

    Background The primary objective of the current study was to examine the childhood predictors of adolescent reading comprehension in velo-cardio-facial syndrome (VCFS). Although much research has focused on mathematics skills among individuals with VCFS, no studies have examined predictors of reading comprehension. Methods 69 late adolescents with VCFS, 23 siblings of youth with VCFS and 30 community controls participated in a longitudinal research project and had repeat neuropsychological test batteries and psychiatric evaluations every 3 years. The Wechsler Individual Achievement Test – 2nd edition (WIAT-II) Reading Comprehension subtest served as our primary outcome variable. Results Consistent with previous research, children and adolescents with VCFS had mean reading comprehension scores on the WIAT-II approximately two standard deviations below the mean and word reading scores approximately one standard deviation below the mean. A more novel finding is that, relative to both control groups, individuals with VCFS demonstrated a longitudinal decline in reading comprehension abilities yet a slight increase in word reading abilities. In the combined control sample, WISC-III FSIQ, WIAT-II Word Reading, WISC-III Vocabulary and CVLT-C List A Trial 1 accounted for 75% of the variance in Time 3 WIAT-II Reading Comprehension scores. In the VCFS sample, WISC-III FSIQ, BASC-Teacher Aggression, CVLT-C Intrusions, Tower of London, Visual Span Backwards, WCST non-perseverative errors, WIAT-II Word Reading and WISC-III Freedom from Distractibility index accounted for 85% of the variance in Time 3 WIAT-II Reading Comprehension scores. A principal component analysis with promax rotation computed on the statistically significant Time 1 predictor variables in the VCFS sample resulted in three factors: Word reading decoding / Interference control, Self-Control / Self-Monitoring and Working Memory. 
Conclusions Childhood predictors of late adolescent reading comprehension in VCFS differ in some meaningful ways from predictors in the non-VCFS population. These results offer some guidance for how best to consider intervention efforts to improve reading comprehension in the VCFS population. PMID:24861691

  17. Statistical properties of radiation from VUV and X-ray free electron laser

    NASA Astrophysics Data System (ADS)

    Saldin, E. L.; Schneidmiller, E. A.; Yurkov, M. V.

    1998-03-01

    The paper presents a comprehensive analysis of the statistical properties of the radiation from a self-amplified spontaneous emission (SASE) free electron laser operating in the linear and nonlinear modes. The investigation has been performed in a one-dimensional approximation assuming the electron pulse length to be much larger than the coherence length of the radiation. The following statistical properties of the SASE FEL radiation have been studied in detail: time and spectral field correlations, distribution of the fluctuations of the instantaneous radiation power, distribution of the energy in the electron bunch, distribution of the radiation energy after the monochromator installed at the FEL amplifier exit, and the radiation spectrum. The linear high gain limit is studied analytically. It is shown that the radiation from a SASE FEL operating in the linear regime possesses all the features corresponding to completely chaotic polarized radiation. A detailed study of the statistical properties of the radiation from a SASE FEL operating in the linear and nonlinear regimes has been performed by means of time-dependent simulation codes. All numerical results presented in the paper have been calculated for the 70 nm SASE FEL at the TESLA Test Facility under construction at DESY.
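    A textbook consequence of the "completely chaotic polarized radiation" result can be checked numerically: if the complex field amplitude is circular Gaussian, the instantaneous power follows a negative-exponential distribution, so its standard deviation equals its mean (100% fluctuations). This sketch illustrates that standard property of chaotic light, not the paper's FEL simulation codes.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

# Completely chaotic polarized light: the complex field amplitude is a
# circular Gaussian random variable, so the instantaneous power P = |E|^2
# follows the negative-exponential law p(P) = exp(-P / <P>) / <P>.
E = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2.0)
P = np.abs(E) ** 2

# For an exponential law the standard deviation equals the mean, and the
# fraction of samples below the mean tends to 1 - 1/e (about 0.632).
```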

  18. Relationships of French and English Morphophonemic Orthographies to Word Reading, Spelling, and Reading Comprehension during Early and Middle Childhood

    ERIC Educational Resources Information Center

    Abbott, Robert D.; Fayol, Michel; Zorman, Michel; Casalis, Séverine; Nagy, William; Berninger, Virginia W.

    2016-01-01

    Two longitudinal studies of word reading, spelling, and reading comprehension identified commonalities and differences in morphophonemic orthographies--French (Study 1, n = 1,313) or English (Study 2, n = 114) in early childhood (Grade 2) and middle childhood (Grade 5). For French and English, statistically significant concurrent relationships…

  19. Metabolomics and Integrative Omics for the Development of Thai Traditional Medicine

    PubMed Central

    Khoomrung, Sakda; Wanichthanarak, Kwanjeera; Nookaew, Intawat; Thamsermsang, Onusa; Seubnooch, Patcharamon; Laohapand, Tawee; Akarasereenont, Pravit

    2017-01-01

    In recent years, interest in studies of traditional medicine in Asian and African countries has gradually increased due to its potential to complement modern medicine. In this review, we provide an overview of current developments in Thai traditional medicine (TTM) and of ongoing TTM research activities related to metabolomics. This review will also focus on three important elements of systems biology analysis of TTM, including analytical techniques, statistical approaches, and bioinformatics tools for handling and analyzing untargeted metabolomics data. The main objective of this data analysis is to gain a comprehensive understanding of the system-wide effects that TTM has on individuals. Furthermore, potential applications of metabolomics and systems medicine in TTM will also be discussed. PMID:28769804

  20. A peaking-regulation-balance-based method for wind & PV power integrated accommodation

    NASA Astrophysics Data System (ADS)

    Zhang, Jinfang; Li, Nan; Liu, Jun

    2018-02-01

    The rapid current and future development of new energy in China should focus on the coordination of wind and PV power. Based on an analysis of the system peaking balance, combined with the statistical features of wind and PV power output characteristics, a method for the comprehensive integrated accommodation analysis of wind and PV power is put forward. Wind power installed capacity is determined first from the electric power balance during the night peak-load period of a typical day; PV power installed capacity is then obtained from the midday peak-load hours. This effectively resolves the uncertainty that arises when traditional methods attempt to determine the wind and solar capacities simultaneously. The simulation results validate the effectiveness of the proposed method.
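    The two-stage logic (size wind against the night peak, when PV is absent, then size PV against the midday peak) can be sketched with toy arithmetic. Every number and capacity-credit factor below is a hypothetical placeholder, not a value from the paper.

```python
# Two-stage sizing sketch following the peaking-balance logic above.
# All numbers are hypothetical (units: GW); the capacity-credit
# factors are assumptions for illustration only.
night_peak_load = 60.0    # night peak load, when PV output is zero
conventional_cap = 52.0   # dispatchable capacity available at the peak
wind_credit = 0.10        # fraction of wind capacity firm at night peak

# Stage 1: wind capacity is fixed by the night peaking balance alone,
# because PV contributes nothing after dark.
wind_cap = (night_peak_load - conventional_cap) / wind_credit   # ~80 GW

midday_peak_load = 70.0
pv_credit = 0.55          # fraction of PV capacity firm at midday peak

# Stage 2: PV fills the remaining midday gap, with wind counted at
# its (small) capacity credit.
pv_cap = (midday_peak_load - conventional_cap
          - wind_credit * wind_cap) / pv_credit                 # ~18 GW
```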

  1. Development and Validation of the Caring Loneliness Scale.

    PubMed

    Karhe, Liisa; Kaunonen, Marja; Koivisto, Anna-Maija

    2016-12-01

    The Caring Loneliness Scale (CARLOS) includes 5 categories derived from earlier qualitative research. This article assesses the reliability and construct validity of a scale designed to measure patient experiences of loneliness in a professional caring relationship. Statistical analysis with 4 different sample sizes included Cronbach's alpha and exploratory factor analysis with principal axis factoring extraction. The sample size of 250 gave the most useful and comprehensible structure, but all 4 samples yielded underlying content of loneliness experiences. The initial 5 categories were reduced to 4 factors with 24 items and Cronbach's alpha ranging from .77 to .90. The findings support the reliability and validity of CARLOS for the assessment of Finnish breast cancer and heart surgery patients' experiences, but, as with all instruments, further validation is needed.

  2. A Statistical Analysis of Corona Topography: New Insights into Corona Formation and Evolution

    NASA Technical Reports Server (NTRS)

    Stofan, E. R.; Glaze, L. S.; Smrekar, S. E.; Baloga, S. M.

    2003-01-01

    Extensive mapping of the surface of Venus and continued analysis of Magellan data have allowed a more comprehensive survey of coronae to be conducted. Our updated corona database contains 514 features, an increase from the 326 coronae of the previous survey. We include a new set of 106 Type 2 or stealth coronae, which have a topographic rather than a fracture annulus. The large increase in the number of coronae over the 1992 survey results from several factors, including the use of the full Magellan data set and the addition of features identified as part of the systematic geologic mapping of Venus. Parameters of the population that we have analyzed to date include size and topography.

  3. Stochastic Flow Cascades

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo I.; Shlesinger, Michael F.

    2012-01-01

    We introduce and explore a Stochastic Flow Cascade (SFC) model: A general statistical model for the unidirectional flow through a tandem array of heterogeneous filters. Examples include the flow of: (i) liquid through heterogeneous porous layers; (ii) shocks through tandem shot noise systems; (iii) signals through tandem communication filters. The SFC model combines together the Langevin equation, convolution filters and moving averages, and Poissonian randomizations. A comprehensive analysis of the SFC model is carried out, yielding closed-form results. Lévy laws are shown to universally emerge from the SFC model, and characterize both heavy tailed retention times (Noah effect) and long-ranged correlations (Joseph effect).

  4. Texas Academic Library Statistics, 1986.

    ERIC Educational Resources Information Center

    Texas State Library, Austin. Dept. of Library Development.

    This publication is the latest in a series of annual publications which are intended to provide a comprehensive source of statistics on academic libraries in Texas. The report is divided into four sections containing data on four-year public institutions, four-year private institutions, two-year colleges (both public and private), and law schools…

  5. Education Statistics Quarterly, Fall 2002.

    ERIC Educational Resources Information Center

    Dillow, Sally, Ed.

    2003-01-01

    This publication provides a comprehensive overview of work done across all parts of the National Center for Education Statistics (NCES). Each issue contains short publications, summaries, and descriptions that cover all NCES publications and data products released in a 3-month period. Each issue also contains a message from the NCES on a timely…

  6. Statistical Tables on Manpower.

    ERIC Educational Resources Information Center

    Manpower Administration (DOL), Washington, DC.

    The President sends to the Congress each year a report on the Nation's manpower, as required by the Manpower Development and Training Act of 1962, which includes a comprehensive report by the Department of Labor on manpower requirements, resources, utilization, and training. This statistical appendix to the Department of Labor report presents data…

  7. Education Statistics Quarterly, Fall 2001.

    ERIC Educational Resources Information Center

    Dillow, Sally, Ed.

    2001-01-01

    The publication gives a comprehensive overview of work done across all parts of the National Center for Education Statistics (NCES). Each issue contains short publications, summaries, and descriptions that cover all NCES publications, data products, and funding opportunities developed over a 3-month period. Each issue also contains a message from…

  8. Higher Education in the U.S.S.R.: Curriculums, Schools, and Statistics.

    ERIC Educational Resources Information Center

    Rosen, Seymour M.

    This study is designed to provide more comprehensive information on Soviet higher learning, emphasizing its increasingly close alignment with Soviet national planning and the economy. Following introductory material, Soviet curriculums in higher education and schools and statistics are reviewed. Highlights include: (1) A major development in Soviet…

  9. Education Statistics Quarterly. Volume 5, Issue 1.

    ERIC Educational Resources Information Center

    Dillow, Sally, Ed.

    2003-01-01

    This publication provides a comprehensive overview of work done across all parts of the National Center for Education Statistics (NCES). Each issue contains short publications, summaries, and descriptions that cover all NCES publications, data products, and funding opportunities developed over a 3-month period. Each issue also contains a message…

  10. Education Statistics Quarterly, Winter 2001.

    ERIC Educational Resources Information Center

    Dillow, Sally, Ed.

    2002-01-01

    This publication provides a comprehensive overview of work done across all parts of the National Center for Education Statistics (NCES). Each issue contains short publications, summaries, and descriptions that cover all NCES publications and data products released in a 3-month period. Each issue also contains a message from the NCES on a timely…

  11. A longitudinal analysis of bibliometric and impact factor trends among the core international journals of nursing, 1977-2008.

    PubMed

    Smith, Derek R

    2010-12-01

    Although bibliometric analysis affords significant insight into the progression and distribution of information within a particular research field, detailed longitudinal studies of this type are rare within the field of nursing. This study aimed to investigate, from a bibliometric perspective, the progression and trends of core international nursing journals over the longest possible time period. A detailed bibliometric analysis was undertaken among 7 core international nursing periodicals using custom historical data sourced from the Thomson Reuters Journal Citation Reports®. In the 32 years between 1977 and 2008, the number of citations received by these 7 journals increased by over 700%. A sustained and statistically significant (p<0.001) 3-fold increase was also observed in the average impact factor score during this period. Statistical analysis revealed that all periodicals experienced significant (p<0.001) improvements in their impact factors over time, with gains ranging from approximately 2- to 78-fold. Overall, this study provides one of the most comprehensive, longitudinal bibliometric analyses ever conducted in the field of nursing. Impressive and continual impact factor gains suggest that published nursing research is being increasingly seen, heard and cited in the international academic community. Copyright © 2010 Elsevier Ltd. All rights reserved.
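    A longitudinal impact-factor trend of the kind reported here is typically summarized by a least-squares slope and a correlation with time. The series below is simulated as a stand-in (the real values sit in the Journal Citation Reports); only the mechanics of the trend fit are illustrated.

```python
import numpy as np

# Hypothetical impact-factor series for a single journal, 1977-2008.
rng = np.random.default_rng(3)
years = np.arange(1977, 2009)
impact = 0.3 + 0.025 * (years - 1977) + rng.normal(0, 0.05, years.size)

slope, intercept = np.polyfit(years, impact, 1)   # least-squares trend
r = np.corrcoef(years, impact)[0, 1]              # strength of the trend
```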

  12. Morphological representation of order-statistics filters.

    PubMed

    Charif-Chefchaouni, M; Schonfeld, D

    1995-01-01

    We propose a comprehensive theory for the morphological bounds on order-statistics filters (and their repeated iterations). Conditions are derived for morphological openings and closings to serve as bounds (lower and upper, respectively) on order-statistics filters (and their repeated iterations). Under various assumptions, morphological open-closings and close-openings are also shown to serve as (tighter) bounds (lower and upper, respectively) on iterations of order-statistics filters. Finally, simulations applying these results to image restoration are provided.
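    The basic bounding relationship can be verified numerically in the simplest case: for a flat window of size 3, the opening lower-bounds and the closing upper-bounds the median filter pointwise. This sketch uses plain sliding-window reductions (not the paper's general framework) and edge padding at the boundaries.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_filter(x, size, op):
    """Sliding-window reduction with a flat window and edge padding."""
    pad = size // 2
    windows = sliding_window_view(np.pad(x, pad, mode="edge"), size)
    return op(windows, axis=-1)

rng = np.random.default_rng(5)
x = rng.normal(size=1000)
size = 3

# opening = dilation(erosion(x)); closing = erosion(dilation(x))
opening = local_filter(local_filter(x, size, np.min), size, np.max)
closing = local_filter(local_filter(x, size, np.max), size, np.min)
median = local_filter(x, size, np.median)

# Sandwich bound for a flat window: opening <= median <= closing.
assert np.all(opening <= median) and np.all(median <= closing)
```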

  13. Differential effects of oral reading to improve comprehension with severe learning disabled and educable mentally handicapped students.

    PubMed

    Chang, S Q; Williams, R L; McLaughlin, T F

    1983-01-01

    The purpose of this study was to evaluate the effectiveness of oral reading as a teaching technique for improving reading comprehension of 11 Educable Mentally Handicapped or Severe Learning Disabled adolescents. Students were tested on their ability to answer comprehension questions from a short factual article. Comprehension improved following the oral reading for students with a reading grade equivalent of less than 5.5 (as measured by the Wide Range Achievement Test) but not for those students having a grade equivalent of greater than 5.5. This association was statistically significant (p < .01). Oral reading appeared to improve comprehension among the poorer readers but not for readers with moderately high ability.

  14. Assessment of statistical education in Indonesia: Preliminary results and initiation to simulation-based inference

    NASA Astrophysics Data System (ADS)

    Saputra, K. V. I.; Cahyadi, L.; Sembiring, U. A.

    2018-01-01

    In this paper, we assess our traditional elementary statistics education and introduce elementary statistics with simulation-based inference. To assess our statistics class, we adapt the well-known CAOS (Comprehensive Assessment of Outcomes in Statistics) test, which serves as an external, generally accepted measure of students' basic statistical literacy. We also introduce a new teaching method in the elementary statistics class: in contrast to the traditional course, we use a simulation-based inference method to conduct hypothesis testing. The literature has shown that this new teaching method works very well in increasing students' understanding of statistics.
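Simulation-based inference of the kind referred to above is typically taught through permutation tests. A minimal sketch (all data simulated; the group sizes and effect size are arbitrary assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(42)
n_a = n_b = 30
group_a = rng.normal(loc=0.0, size=n_a)    # e.g. scores under one teaching method
group_b = rng.normal(loc=1.0, size=n_b)    # e.g. scores under the other

observed = group_b.mean() - group_a.mean()
pooled = np.concatenate([group_a, group_b])

n_sim = 5000
diffs = np.empty(n_sim)
for i in range(n_sim):
    shuffled = rng.permutation(pooled)     # re-randomize the group labels
    diffs[i] = shuffled[n_a:].mean() - shuffled[:n_a].mean()

# Two-sided p-value: fraction of shuffles at least as extreme as observed.
p_value = np.mean(np.abs(diffs) >= abs(observed))
assert 0.0 <= p_value <= 1.0
```

The point of the simulation-based approach is that the null distribution is built by re-randomization rather than by appeal to a t-distribution.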

  15. Comprehensive machine learning analysis of Hydra behavior reveals a stable basal behavioral repertoire

    PubMed Central

    Taralova, Ekaterina; Dupre, Christophe; Yuste, Rafael

    2018-01-01

    Animal behavior has been studied for centuries, but few efficient methods are available to automatically identify and classify it. Quantitative behavioral studies have been hindered by the subjective and imprecise nature of human observation, and the slow speed of annotating behavioral data. Here, we developed an automatic behavior analysis pipeline for the cnidarian Hydra vulgaris using machine learning. We imaged freely behaving Hydra, extracted motion and shape features from the videos, and constructed a dictionary of visual features to classify pre-defined behaviors. We also identified unannotated behaviors with unsupervised methods. Using this analysis pipeline, we quantified 6 basic behaviors and found surprisingly similar behavior statistics across animals within the same species, regardless of experimental conditions. Our analysis indicates that the fundamental behavioral repertoire of Hydra is stable. This robustness could reflect a homeostatic neural control of "housekeeping" behaviors which could have been already present in the earliest nervous systems. PMID:29589829

  16. A systematic review and meta-analysis of music therapy for the older adults with depression.

    PubMed

    Zhao, K; Bai, Z G; Bo, A; Chi, I

    2016-11-01

    To determine the efficacy of music therapy in the management of depression in the elderly. We conducted a systematic review and meta-analysis of randomized controlled trials. Change in depressive symptoms was measured with various scales. Standardized mean differences were calculated for each therapy-control contrast. A comprehensive search yielded 2,692 citations; 19 articles met inclusion criteria. Meta-analysis suggests that music therapy plus standard treatment significantly reduces depressive symptoms among older adults (standardized mean difference = 1.02; 95% CI = 0.87, 1.17). This systematic review and meta-analysis suggests that music therapy has an effect on reducing depressive symptoms to some extent. However, high-quality trials evaluating the effects of music therapy on depression are required. Copyright © 2016 John Wiley & Sons, Ltd.
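For readers unfamiliar with how standardized mean differences (SMDs) are pooled, here is a minimal fixed-effect, inverse-variance sketch. The SMDs and standard errors below are invented for illustration and are not the review's data:

```python
import math

smds = [0.8, 1.1, 0.9, 1.3]        # per-study standardized mean differences (invented)
ses = [0.20, 0.25, 0.15, 0.30]     # per-study standard errors (invented)

# Inverse-variance weights: more precise studies count for more.
weights = [1.0 / se**2 for se in ses]
pooled = sum(w * d for w, d in zip(weights, smds)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

ci_low = pooled - 1.96 * pooled_se
ci_high = pooled + 1.96 * pooled_se

# The pooled estimate lies within the range of the study estimates.
assert min(smds) <= pooled <= max(smds)
```

A random-effects model would additionally estimate between-study heterogeneity; this sketch omits that.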

  17. Combined magnetic and gravity analysis

    NASA Technical Reports Server (NTRS)

    Hinze, W. J.; Braile, L. W.; Chandler, V. W.; Mazella, F. E.

    1975-01-01

    Efforts are made to identify methods of decreasing magnetic interpretation ambiguity by combined gravity and magnetic analysis, to evaluate these techniques in a preliminary manner, to consider the geologic and geophysical implications of correlation, and to recommend a course of action to evaluate methods of correlating gravity and magnetic anomalies. The major thrust of the study was a search and review of the literature. The literature of geophysics, geology, geography, and statistics was searched for articles dealing with spatial correlation of independent variables. An annotated bibliography referencing the germane articles and books is presented. The methods of combined gravity and magnetic analysis techniques are identified and reviewed. A more comprehensive evaluation of two types of techniques is presented. Internal correspondence of anomaly amplitudes is examined and a combined analysis is done utilizing Poisson's theorem. The geologic and geophysical implications of gravity and magnetic correlation based on both theoretical and empirical relationships are discussed.

  18. DASS-GUI: a user interface for identification and analysis of significant patterns in non-sequential data.

    PubMed

    Hollunder, Jens; Friedel, Maik; Kuiper, Martin; Wilhelm, Thomas

    2010-04-01

    Many large 'omics' datasets have been published and many more are expected in the near future. New analysis methods are needed for best exploitation. We have developed a graphical user interface (GUI) for easy data analysis. Our discovery of all significant substructures (DASS) approach elucidates the underlying modularity, a typical feature of complex biological data. It is related to biclustering and other data mining approaches. Importantly, DASS-GUI also allows handling of multi-sets and calculation of statistical significances. DASS-GUI contains tools for further analysis of the identified patterns: analysis of the pattern hierarchy, enrichment analysis, module validation, analysis of additional numerical data, easy handling of synonymous names, clustering, filtering and merging. Different export options allow easy usage of additional tools such as Cytoscape. Source code, pre-compiled binaries for different systems, a comprehensive tutorial, case studies and many additional datasets are freely available at http://www.ifr.ac.uk/dass/gui/. DASS-GUI is implemented in Qt.

  19. A Powerful Procedure for Pathway-Based Meta-analysis Using Summary Statistics Identifies 43 Pathways Associated with Type II Diabetes in European Populations.

    PubMed

    Zhang, Han; Wheeler, William; Hyland, Paula L; Yang, Yifan; Shi, Jianxin; Chatterjee, Nilanjan; Yu, Kai

    2016-06-01

    Meta-analysis of multiple genome-wide association studies (GWAS) has become an effective approach for detecting single nucleotide polymorphism (SNP) associations with complex traits. However, it is difficult to integrate the readily accessible SNP-level summary statistics from a meta-analysis into more powerful multi-marker testing procedures, which generally require individual-level genetic data. We developed a general procedure called Summary based Adaptive Rank Truncated Product (sARTP) for conducting gene and pathway meta-analysis that uses only SNP-level summary statistics in combination with genotype correlation estimated from a panel of individual-level genetic data. We demonstrated the validity and power advantage of sARTP through empirical and simulated data. We conducted a comprehensive pathway-based meta-analysis with sARTP on type 2 diabetes (T2D) by integrating SNP-level summary statistics from two large studies consisting of 19,809 T2D cases and 111,181 controls with European ancestry. Among 4,713 candidate pathways from which genes in neighborhoods of 170 GWAS established T2D loci were excluded, we detected 43 T2D globally significant pathways (with Bonferroni corrected p-values < 0.05), which included the insulin signaling pathway and T2D pathway defined by KEGG, as well as the pathways defined according to specific gene expression patterns on pancreatic adenocarcinoma, hepatocellular carcinoma, and bladder carcinoma. Using summary data from 8 eastern Asian T2D GWAS with 6,952 cases and 11,865 controls, we showed that 7 of the 43 pathways identified in European populations remained significant in eastern Asians at the false discovery rate of 0.1. We created an R package and a web-based tool for sARTP with the capability to analyze pathways with thousands of genes and tens of thousands of SNPs.
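The adaptive rank truncated product idea can be sketched roughly as follows. This is a simplified, assumed reconstruction: it permutes p-values directly and ignores the genotype-correlation machinery that sARTP applies to summary statistics; all inputs are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def per_k_stats(pvals, ks):
    """Truncated-product style statistic at each truncation point K:
    sum of -log of the K smallest p-values."""
    s = np.sort(pvals)
    return np.array([-np.log(s[:k]).sum() for k in ks])

def artp_pvalue(pathway_pvals, ks=(1, 5, 10), n_perm=1000):
    m = pathway_pvals.size
    obs = per_k_stats(pathway_pvals, ks)
    null = np.array([per_k_stats(rng.uniform(size=m), ks)
                     for _ in range(n_perm)])
    p_obs = (null >= obs).mean(axis=0)            # p-value at each K
    ranks = null.argsort(axis=0).argsort(axis=0)  # 0 = smallest statistic
    p_null = (n_perm - ranks) / n_perm            # null p-values at each K
    # Adapt over K: compare the best (smallest) p across truncation points
    # against its own permutation distribution.
    return (p_null.min(axis=1) <= p_obs.min()).mean()

# Toy pathway: 5 strongly associated SNPs among 95 null ones (all invented).
pvals = np.concatenate([rng.uniform(0, 1e-3, size=5), rng.uniform(size=95)])
p = artp_pvalue(pvals)
assert 0.0 <= p <= 1.0
```

Adapting over several truncation points is what makes the test robust to not knowing in advance how many SNPs in a pathway are truly associated.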

  1. Exploiting mineral data: applications to the diversity, distribution, and social networks of copper mineral

    NASA Astrophysics Data System (ADS)

    Morrison, S. M.; Downs, R. T.; Golden, J. J.; Pires, A.; Fox, P. A.; Ma, X.; Zednik, S.; Eleish, A.; Prabhu, A.; Hummer, D. R.; Liu, C.; Meyer, M.; Ralph, J.; Hystad, G.; Hazen, R. M.

    2016-12-01

    We have developed a comprehensive database of copper (Cu) mineral characteristics. These data include crystallographic, paragenetic, chemical, locality, age, structural complexity, and physical property information for the 689 Cu mineral species approved by the International Mineralogical Association (rruff.info/ima). Synthesis of this large, varied dataset allows for in-depth exploration of statistical trends and visualization techniques. With social network analysis (SNA) and cluster analysis of minerals, we create sociograms and chord diagrams. SNA visualizations illustrate the relationships and connectivity between mineral species, which often form cliques associated with rock type and/or geochemistry. Using mineral ecology statistics, we analyze mineral-locality frequency distribution and predict the number of missing mineral species, visualized with accumulation curves. By assembling 2-dimensional KLEE diagrams of co-existing elements in minerals, we illustrate geochemical trends within a mineral system. To explore mineral age and chemical oxidation state, we create skyline diagrams and compare trends with varying chemistry. These trends illustrate mineral redox changes through geologic time and correlate with significant geologic occurrences, such as the Great Oxidation Event (GOE) or Wilson Cycles.
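A mineral-locality accumulation curve of the kind mentioned can be sketched as follows (the localities and species counts are invented, not the rruff.info data): the count of distinct species grows as localities are sampled, and the flattening of the curve is what supports predictions of missing species.

```python
import random

random.seed(2)

# Invented locality -> species-id lists: a few common species (high weight)
# and many rare ones, mimicking a typical frequency distribution.
localities = [random.choices(range(60), weights=[10] * 5 + [1] * 55, k=8)
              for _ in range(200)]

seen, curve = set(), []
for loc in random.sample(localities, len(localities)):  # random sampling order
    seen.update(loc)
    curve.append(len(seen))

assert curve == sorted(curve)   # accumulation is non-decreasing
assert 0 < curve[-1] <= 60      # cannot exceed the species pool
```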

  2. Statistical Analysis of Bus Networks in India

    PubMed Central

    2016-01-01

    In this paper, we model the bus networks of six major Indian cities as graphs in L-space, and evaluate their various statistical properties. While airline and railway networks have been extensively studied, a comprehensive study on the structure and growth of bus networks is lacking. In India, where bus transport plays an important role in day-to-day commutation, it is of significant interest to analyze its topological structure and answer basic questions on its evolution, growth, robustness and resiliency. Although the common feature of small-world property is observed, our analysis reveals a wide spectrum of network topologies arising due to significant variation in the degree-distribution patterns in the networks. We also observe that these networks, although robust and resilient to random attacks, are particularly degree-sensitive. Unlike virtual real-world networks such as the Internet, the WWW and airline networks, bus networks are physically constrained. Our findings therefore throw light on the evolution of such geographically constrained networks, which will help us design more efficient bus networks in the future. PMID:27992590
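The degree-sensitivity claim can be illustrated with a toy experiment (my own sketch on a synthetic preferential-attachment graph, not the Indian bus-network data): remove the same number of nodes at random and by highest degree, and compare the surviving giant component.

```python
import random
from collections import defaultdict

random.seed(0)

def pa_graph(n, m=2):
    """Toy preferential-attachment graph: each new node attaches to m
    distinct existing nodes chosen roughly proportionally to degree."""
    adj = defaultdict(set)
    adj[0].add(1)
    adj[1].add(0)
    stubs = [0, 1]                     # nodes repeated once per edge endpoint
    for v in range(2, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.choice(stubs))
        for u in chosen:
            adj[v].add(u)
            adj[u].add(v)
            stubs.extend([u, v])
    return adj

def giant_component(adj, removed):
    """Size of the largest connected component after deleting `removed`."""
    seen, best = set(), 0
    for start in adj:
        if start in removed or start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            node = stack.pop()
            size += 1
            for nb in adj[node]:
                if nb not in removed and nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        best = max(best, size)
    return best

n, k = 300, 30
adj = pa_graph(n)
hubs = set(sorted(adj, key=lambda v: len(adj[v]), reverse=True)[:k])
randoms = set(random.sample(sorted(adj), k))

# Targeted (hub) removal typically fragments such a network more than random removal.
g_targeted = giant_component(adj, hubs)
g_random = giant_component(adj, randoms)
assert 0 < g_targeted <= n - k and 0 < g_random <= n - k
```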

  3. Have Basic Mathematical Skills Grown Obsolete in the Computer Age: Assessing Basic Mathematical Skills and Forecasting Performance in a Business Statistics Course

    ERIC Educational Resources Information Center

    Noser, Thomas C.; Tanner, John R.; Shah, Situl

    2008-01-01

    The purpose of this study was to measure the comprehension of basic mathematical skills of students enrolled in statistics classes at a large regional university, and to determine if the scores earned on a basic math skills test are useful in forecasting student performance in these statistics classes, and to determine if students' basic math…

  4. False-Belief Understanding and Language Ability Mediate the Relationship between Emotion Comprehension and Prosocial Orientation in Preschoolers

    PubMed Central

    Ornaghi, Veronica; Pepe, Alessandro; Grazzani, Ilaria

    2016-01-01

    Emotion comprehension (EC) is known to be a key correlate and predictor of prosociality from early childhood. In the present study, we examined this relationship within the broad theoretical construct of social understanding which includes a number of socio-emotional skills, as well as cognitive and linguistic abilities. Theory of mind, especially false-belief understanding, has been found to be positively correlated with both EC and prosocial orientation. Similarly, language ability is known to play a key role in children’s socio-emotional development. The combined contribution of false-belief understanding and language to explaining the relationship between EC and prosociality has yet to be investigated. Thus, in the current study, we conducted an in-depth exploration of how preschoolers’ false-belief understanding and language ability each contribute to modeling the relationship between children’s comprehension of emotion and their disposition to act prosocially toward others, after controlling for age and gender. Participants were 101 4- to 6-year-old children (54% boys), who were administered measures of language ability, false-belief understanding, EC and prosocial orientation. Multiple mediation analysis of the data suggested that false-belief understanding and language ability jointly and fully mediated the effect of preschoolers’ EC on their prosocial orientation. Analysis of covariates revealed that gender exerted no statistically significant effect, while age had a trivial positive effect. Theoretical and practical implications of the findings are discussed. PMID:27774075
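Mediation analyses of this kind commonly rest on products of regression coefficients with bootstrap confidence intervals. A single-mediator sketch follows (simulated data; the study itself used multiple mediation with two mediators, and the variable names are stand-ins, not the study's measures):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
x = rng.normal(size=n)                        # stand-in for emotion comprehension
m = 0.5 * x + rng.normal(size=n)              # stand-in for false-belief understanding
y = 0.4 * m + 0.2 * x + rng.normal(size=n)    # stand-in for prosocial orientation

def slopes(response, predictors):
    """OLS slopes (intercept fitted but not returned)."""
    X = np.column_stack([np.ones(len(response)), predictors])
    return np.linalg.lstsq(X, response, rcond=None)[0][1:]

def indirect_effect(idx):
    a = slopes(m[idx], x[idx])[0]                             # X -> M path
    b = slopes(y[idx], np.column_stack([m[idx], x[idx]]))[0]  # M -> Y, X controlled
    return a * b

# Percentile bootstrap of the indirect effect a*b.
boot = np.array([indirect_effect(rng.integers(0, n, n)) for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])

# A CI for a*b that excludes zero is the usual evidence of mediation.
assert lo < hi and boot.mean() > 0
```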

  5. Functional annotation of regulatory pathways.

    PubMed

    Pandey, Jayesh; Koyutürk, Mehmet; Kim, Yohan; Szpankowski, Wojciech; Subramaniam, Shankar; Grama, Ananth

    2007-07-01

    Standardized annotations of biomolecules in interaction networks (e.g. Gene Ontology) provide comprehensive understanding of the function of individual molecules. Extending such annotations to pathways is a critical component of functional characterization of cellular signaling at the systems level. We propose a framework for projecting gene regulatory networks onto the space of functional attributes using multigraph models, with the objective of deriving statistically significant pathway annotations. We first demonstrate that annotations of pairwise interactions do not generalize to indirect relationships between processes. Motivated by this result, we formalize the problem of identifying statistically overrepresented pathways of functional attributes. We establish the hardness of this problem by demonstrating the non-monotonicity of common statistical significance measures. We propose a statistical model that emphasizes the modularity of a pathway, evaluating its significance based on the coupling of its building blocks. We complement the statistical model by an efficient algorithm and software, Narada, for computing significant pathways in large regulatory networks. Comprehensive results from our methods applied to the Escherichia coli transcription network demonstrate that our approach is effective in identifying known, as well as novel biological pathway annotations. Narada is implemented in Java and is available at http://www.cs.purdue.edu/homes/jpandey/narada/.
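Functional-category enrichment of the kind such tools build on typically rests on a hypergeometric tail test. A minimal sketch with toy counts (this is the generic test, not Narada's model, which additionally accounts for pathway modularity):

```python
from math import comb

def enrichment_p(N, K, n, k):
    """P(X >= k) for X ~ Hypergeometric: N genes in the background,
    K of them in the category, n drawn into the gene set, k overlapping."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Toy numbers: a 50-gene module overlaps a 200-gene category in 20 genes,
# against a 5000-gene background (expected overlap by chance: 2).
p = enrichment_p(N=5000, K=200, n=50, k=20)
assert 0.0 < p < 1e-6   # far more overlap than expected by chance
```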

  6. Statistical analysis of midlatitude spread F using multi-station digisonde observations

    NASA Astrophysics Data System (ADS)

    Bhaneja, P.; Earle, G. D.; Bullett, T. W.

    2018-01-01

    A comprehensive statistical study of midlatitude spread F (MSF) is presented for five midlatitude stations in the North American sector. These stations include Ramey AFB, Puerto Rico (18.5°N, 67.1°W, -14° declination angle), Wallops Island, Virginia (37.95°N, 75.5°W, -11° declination angle), Dyess, Texas (32.4°N, 99.8°W, 6.9° declination angle), Boulder, Colorado (40°N, 105.3°W, 10° declination angle), and Vandenberg AFB, California (34.8°N, 120.5°W, 13° declination angle). Pattern recognition algorithms are used to determine the presence of both range and frequency spread F. Data from 1996 to 2011 are analyzed, covering all of Solar Cycle 23 and the beginning of Solar Cycle 24. Variations with respect to season and solar activity are presented, including the effects of the extended minimum between cycles 23 and 24.

  7. Scaling of global input-output networks

    NASA Astrophysics Data System (ADS)

    Liang, Sai; Qi, Zhengling; Qu, Shen; Zhu, Ji; Chiu, Anthony S. F.; Jia, Xiaoping; Xu, Ming

    2016-06-01

    Examining scaling patterns of networks can help understand how structural features relate to the behavior of the networks. Input-output networks consist of industries as nodes and inter-industrial exchanges of products as links. Previous studies consider limited measures for node strengths and link weights, and also ignore the impact of dataset choice. We consider a comprehensive set of indicators in this study that are important in economic analysis, and also examine the impact of dataset choice, by studying input-output networks in individual countries and the entire world. Results show that Burr, Log-Logistic, Log-normal, and Weibull distributions can better describe scaling patterns of global input-output networks. We also find that dataset choice has limited impacts on the observed scaling patterns. Our findings can help examine the quality of economic statistics, estimate missing data in economic statistics, and identify key nodes and links in input-output networks to support economic policymaking.
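One of the candidate distributions named above, the log-normal, can be fitted by maximum likelihood simply by averaging logarithms (the log of a log-normal sample is normal). A sketch on synthetic "link weights" with arbitrary parameters, not the input-output data:

```python
import numpy as np

rng = np.random.default_rng(3)
weights = rng.lognormal(mean=2.0, sigma=0.7, size=5000)  # synthetic link weights

# MLE for a log-normal: mean and standard deviation of the logs.
logs = np.log(weights)
mu_hat, sigma_hat = logs.mean(), logs.std()

assert abs(mu_hat - 2.0) < 0.05
assert abs(sigma_hat - 0.7) < 0.05
```

Comparing such fits across candidate families (Burr, log-logistic, Weibull, ...) is usually done via log-likelihood or goodness-of-fit statistics.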

  8. Descriptive epidemiology of breast cancer in China: incidence, mortality, survival and prevalence.

    PubMed

    Li, Tong; Mello-Thoms, Claudia; Brennan, Patrick C

    2016-10-01

    Breast cancer is the most common neoplasm diagnosed amongst women worldwide and is the leading cause of female cancer death. However, breast cancer in China is not comprehensively understood compared with Westernised countries, although the 5-year prevalence statistics indicate that approximately 11 % of worldwide breast cancer occurs in China and that the incidence has increased rapidly in recent decades. This paper reviews the descriptive epidemiology of Chinese breast cancer in terms of incidence, mortality, survival and prevalence, and explores relevant factors such as age of manifestation and geographic locations. The statistics are compared with data from the Westernised world with particular emphasis on the United States and Australia. Potential causal agents responsible for differences in breast cancer epidemiology between Chinese and other populations are also explored. The need to minimise variability and discrepancies in methods of data acquisition, analysis and presentation is highlighted.

  9. The Problem of Size in Robust Design

    NASA Technical Reports Server (NTRS)

    Koch, Patrick N.; Allen, Janet K.; Mistree, Farrokh; Mavris, Dimitri

    1997-01-01

    To facilitate the effective solution of multidisciplinary, multiobjective complex design problems, a departure from the traditional parametric design analysis and single objective optimization approaches is necessary in the preliminary stages of design. A necessary tradeoff becomes one of efficiency vs. accuracy as approximate models are sought to allow fast analysis and effective exploration of a preliminary design space. In this paper we apply a general robust design approach for efficient and comprehensive preliminary design to a large complex system: a high speed civil transport (HSCT) aircraft. Specifically, we investigate the HSCT wing configuration design, incorporating life cycle economic uncertainties to identify economically robust solutions. The approach is built on the foundation of statistical experimentation and modeling techniques and robust design principles, and is specialized through incorporation of the compromise Decision Support Problem for multiobjective design. For large problems however, as in the HSCT example, this robust design approach developed for efficient and comprehensive design breaks down with the problem of size - combinatorial explosion in experimentation and model building with the number of variables - and both efficiency and accuracy are sacrificed. Our focus in this paper is on identifying and discussing the implications and open issues associated with the problem of size for the preliminary design of large complex systems.
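The combinatorial explosion referred to above is easy to quantify for a full-factorial experiment (a generic illustration; these are not the HSCT study's actual experiment sizes):

```python
def full_factorial_runs(n_vars, levels=3):
    """Number of runs in a full-factorial design: levels ** variables."""
    return levels ** n_vars

# Run counts grow exponentially with the number of design variables.
assert full_factorial_runs(4) == 81        # 3^4
assert full_factorial_runs(10) == 59049    # 3^10 -- already impractical
```

Fractional-factorial and space-filling designs exist precisely to escape this growth, at the cost of the accuracy the abstract mentions.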

  10. [The general methodological approaches identifying strategic positions in developing healthy lifestyle of population].

    PubMed

    Dorofeev, S B; Babenko, A I

    2017-01-01

    The article deals with analysis of national and international publications concerning methodological aspects of elaborating a systematic approach to a healthy lifestyle of the population. This scope of inquiry plays a key role in the development of human capital. The costs related to a healthy lifestyle are to be considered as personal investment into future income due to physical incrementation of human capital. The definitions of healthy lifestyle, its categories and supportive factors are to be considered in the process of developing strategies and programs for a healthy lifestyle. The implementation of particular strategies entails application of comprehensive information and educational programs meant for various categories of the population; therefore, different motivation techniques are to be considered for children, adolescents, the able-bodied population and the elderly. This approach should result in establishing particular responsibilities for the national government, territorial administrations, health care administrations, employers and the population itself. The necessity of complex legislative measures is emphasized. Recent social hygiene studies have focused mostly on particular aspects of developing a healthy lifestyle of the population; hence, there is demand for long-term exploration of organizational and functional models implementing preventive medical measures on the basis of comprehensive information analysis using statistical, sociological and professional expertise.

  11. Minfi: a flexible and comprehensive Bioconductor package for the analysis of Infinium DNA methylation microarrays

    PubMed Central

    Aryee, Martin J.; Jaffe, Andrew E.; Corrada-Bravo, Hector; Ladd-Acosta, Christine; Feinberg, Andrew P.; Hansen, Kasper D.; Irizarry, Rafael A.

    2014-01-01

    Motivation: The recently released Infinium HumanMethylation450 array (the ‘450k’ array) provides a high-throughput assay to quantify DNA methylation (DNAm) at ∼450 000 loci across a range of genomic features. Although less comprehensive than high-throughput sequencing-based techniques, this product is more cost-effective and promises to be the most widely used DNAm high-throughput measurement technology over the next several years. Results: Here we describe a suite of computational tools that incorporate state-of-the-art statistical techniques for the analysis of DNAm data. The software is structured to easily adapt to future versions of the technology. We include methods for preprocessing, quality assessment and detection of differentially methylated regions from the kilobase to the megabase scale. We show how our software provides a powerful and flexible development platform for future methods. We also illustrate how our methods empower the technology to make discoveries previously thought to be possible only with sequencing-based methods. Availability and implementation: http://bioconductor.org/packages/release/bioc/html/minfi.html. Contact: khansen@jhsph.edu; rafa@jimmy.harvard.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24478339

  12. Navigating freely-available software tools for metabolomics analysis.

    PubMed

    Spicer, Rachel; Salek, Reza M; Moreno, Pablo; Cañueto, Daniel; Steinbeck, Christoph

    2017-01-01

    The field of metabolomics has expanded greatly over the past two decades, both as an experimental science with applications in many areas and with regard to data standards and bioinformatics software tools. The diversity of experimental designs and instrumental technologies used for metabolomics has led to the need for distinct data analysis methods and the development of many software tools. Our objective was to compile a comprehensive list of the most widely used freely available software and tools used primarily in metabolomics. The most widely used tools were selected for inclusion in the review by either ≥ 50 citations on Web of Science (as of 08/09/16) or the use of the tool being reported in the recent Metabolomics Society survey. Tools were then categorised by the type of instrumental data (i.e. LC-MS, GC-MS or NMR) and the functionality (i.e. pre- and post-processing, statistical analysis, workflow and other functions) they are designed for. A comprehensive list of the most used tools was compiled. Each tool is discussed within the context of its application domain and in relation to comparable tools of the same domain. An extended list including additional tools is available at https://github.com/RASpicer/MetabolomicsTools, which is classified and searchable via a simple controlled vocabulary. This review presents the most widely used tools for metabolomics analysis, categorised based on their main functionality. As future work, we suggest a direct comparison of tools' abilities to perform specific data analysis tasks, e.g. peak picking.

  13. WholePathwayScope: a comprehensive pathway-based analysis tool for high-throughput data

    PubMed Central

    Yi, Ming; Horton, Jay D; Cohen, Jonathan C; Hobbs, Helen H; Stephens, Robert M

    2006-01-01

    Background Analysis of High Throughput (HTP) Data such as microarray and proteomics data has provided a powerful methodology to study patterns of gene regulation at genome scale. A major unresolved problem in the post-genomic era is to assemble the large amounts of data generated into a meaningful biological context. We have developed a comprehensive software tool, WholePathwayScope (WPS), for deriving biological insights from analysis of HTP data. Result WPS extracts gene lists with shared biological themes through color cue templates. WPS statistically evaluates global functional category enrichment of gene lists and pathway-level pattern enrichment of data. WPS incorporates well-known biological pathways from KEGG (Kyoto Encyclopedia of Genes and Genomes) and Biocarta, GO (Gene Ontology) terms as well as user-defined pathways or relevant gene clusters or groups, and explores gene-term relationships within the derived gene-term association networks (GTANs). WPS simultaneously compares multiple datasets within biological contexts either as pathways or as association networks. WPS also integrates Genetic Association Database and Partial MedGene Database for disease-association information. We have used this program to analyze and compare microarray and proteomics datasets derived from a variety of biological systems. Application examples demonstrated the capacity of WPS to significantly facilitate the analysis of HTP data for integrative discovery. Conclusion This tool represents a pathway-based platform for discovery integration to maximize analysis power. The tool is freely available at . PMID:16423281

  14. Examining patient comprehension of emergency department discharge instructions: Who says they understand when they do not?

    PubMed

    Lin, Margaret Jane; Tirosh, Adva Gutman; Landry, Alden

    2015-12-01

    Patient comprehension of emergency department (ED) discharge instructions is important for ensuring that patients understand their diagnosis, recommendations for treatment, appropriate follow-up, and reasons to return. However, many patients may not fully understand their instructions. Furthermore, some patients may state they understand their instructions even when they do not. We surveyed 75 patients on their perception of their understanding of their ED discharge instructions, and asked them specific questions about the instructions. We also performed a chart review, and examined patients' answers for correlation with the written instructions and medical chart. We then performed a statistical analysis evaluating which patients claimed understanding but who were found to have poor understanding on chart review. Overall, there was no significant correlation between patient self-reported understanding and physician evaluation of their understanding (ρ = 0.221, p = 0.08). However, among female patients and patients with less than 4 years of college, there was significant positive correlation between self-report and physician evaluation of comprehension (ρ = 0.326, p = 0.04 and ρ = 0.344, p = 0.04, respectively), whereas there was no correlation for male patients and those with more than 16 years of education (ρ = 0.008, p = 0.9, ρ = -0.041, p = 0.84, respectively). Patients' perception of their understanding may not be accurate, especially among men, and those with greater than college education. Identifying which patients say they understand their discharge instructions, but may actually have poor comprehension could help focus future interventions on improving comprehension.
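The correlations reported above are Spearman's ρ, i.e. the Pearson correlation of the ranks. A small sketch with invented scores (not the study's data; ties are ignored for simplicity):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rho for tie-free data: Pearson correlation of the ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return np.corrcoef(rx, ry)[0, 1]

self_report = np.array([5, 4, 4.5, 2, 3, 1, 4.2, 2.5])   # invented ratings
physician = np.array([4, 4.4, 3, 2.2, 3.5, 1, 5, 2])     # invented ratings

rho = spearman_rho(self_report, physician)
assert -1.0 <= rho <= 1.0
```

With ties present, average ranks should be used instead of the double-argsort trick.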

  15. Impact of comprehensive psychological training on mental health of recruits in Xinjiang.

    PubMed

    Lv, Shi-ying; Zhang, Lan

    2015-04-01

    To examine the effect of comprehensive psychological training on the mental health of recruits and to provide a basis for promoting mental health among recruits in Xinjiang. From September to December 2013, convenience sampling was used to select 613 recruits from Xinjiang. These recruits were assigned to the training group (n=306) and the control group (n=307). The Simplified Coping Style Questionnaire, the Questionnaire of Armymen's Emotion Regulation Types and the Chinese Military Personnel Social Support Scale were used to evaluate the levels of mental health at baseline and at the end of comprehensive psychological training. After comprehensive psychological training, the negative coping style score of the training group was significantly lower than that of the control group (P=0.000), and there were differences in cognitive focus (P=0.000) and behavior restraint (P=0.005); there was also a significant difference on the social support scale (P<0.05). Coping style showed positive correlation with emotion regulation and all its factors (P<0.05). Social support and all its factors were positively correlated with positive coping style (P<0.05) and negatively correlated with negative coping style (P<0.05). Social support and all its factors showed positive correlation with affective appeal and self-comfort (P<0.05) and negative correlation with cognitive focus and behavior restraint (P<0.05). As shown by stepwise regression analysis, the positive and negative coping styles had statistically significant impacts on cognitive focus, affective appeal, behavior restraint, and self-comfort (all P<0.05). Comprehensive psychological training is useful in improving the mental health of recruits.

  16. Using Statistics and Data Mining Approaches to Analyze Male Sexual Behaviors and Use of Erectile Dysfunction Drugs Based on Large Questionnaire Data.

    PubMed

    Qiao, Zhi; Li, Xiang; Liu, Haifeng; Zhang, Lei; Cao, Junyang; Xie, Guotong; Qin, Nan; Jiang, Hui; Lin, Haocheng

    2017-01-01

    The prevalence of erectile dysfunction (ED) has been extensively studied worldwide, and ED drugs have shown great efficacy in treating the condition. To help doctors understand patients' drug-taking preferences and prescribe more appropriately, it is crucial to analyze who actually takes ED drugs and how sexual behaviors relate to drug use. Existing clinical studies have typically relied on descriptive statistics and regression analysis applied to small volumes of data. In this paper, based on a large volume of data (48,630 questionnaires), we use data mining approaches in addition to statistics and regression analysis to comprehensively analyze the relation between male sexual behaviors and use of ED drugs, and to characterize the patients who take them. We first analyze the impact of multiple sexual behavior factors on whether ED drugs are used. We then mine decision rules for stratification to discover patients who are more likely to take the drugs. Based on these decision rules, patients can be partitioned into four groups by likelihood of ED drug use: high potential, intermediate potential-1, intermediate potential-2, and low potential. Experimental results show that 1) the sexual behavior factors of erectile hardness and preparation time (how long patients prepare for sexual activity ahead of time) have larger impacts both in the correlation analysis and in discovering potential drug-taking patients; and 2) the odds ratio between patients identified as low potential and high potential was 6.098 (95% confidence interval, 5.159-7.209), with statistically significant differences in drug-taking potential detected between all groups.
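
    The group comparison above rests on a standard 2×2 odds ratio with a Wald confidence interval on the log scale. A minimal sketch, using invented counts rather than the questionnaire data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI for a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Illustrative counts (not the study's data): drug use among patients
# labelled "high potential" versus "low potential".
or_, lo, hi = odds_ratio_ci(240, 260, 60, 440)
```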

  17. Spatio-temporal analysis of sub-hourly rainfall over Mumbai, India: Is statistical forecasting futile?

    NASA Astrophysics Data System (ADS)

    Singh, Jitendra; Sekharan, Sheeba; Karmakar, Subhankar; Ghosh, Subimal; Zope, P. E.; Eldho, T. I.

    2017-04-01

    Mumbai, the commercial and financial capital of India, experiences incessant annual rain episodes, mainly attributable to an erratic rainfall pattern during monsoons and an urban heat-island effect driven by escalating urbanization, leading to increasing vulnerability to frequent flooding. After the infamous 2005 Mumbai torrential rains, when only two rain gauging stations existed, the governing civic body, the Municipal Corporation of Greater Mumbai (MCGM), installed 26 automatic weather stations (AWS) in June 2006 (MCGM 2007), later increased to 60 AWS. A comprehensive statistical analysis of the spatio-temporal pattern of rainfall over Mumbai, or any other coastal city in India, had never been attempted earlier. In the current study, a thorough analysis of available rainfall data for 2006-2014 from these stations was performed; the 2013-2014 sub-hourly data from 26 AWS were found useful for further analyses owing to their consistency and continuity. The correlogram cloud indicated no pattern of significant correlation from the closest to the farthest gauging station relative to the base station; this impression was also supported by the semivariogram plots. Gini index values, a statistical measure of temporal non-uniformity, were above 0.8 at a visible majority of gauging stations and showed an increasing trend at most of them, which led us to conclude that inconsistency in daily rainfall gradually increases as the monsoon progresses. Interestingly, night rainfall was less than daytime rainfall. The pattern-less, high spatio-temporal variation observed in Mumbai rainfall data signifies the futility of independently applying advanced statistical techniques, and thus calls for the simultaneous inclusion of physics-centred models such as meso-scale numerical weather prediction systems, particularly the Weather Research and Forecasting (WRF) model.
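
    The Gini index used here measures how unevenly rainfall is spread across days (0 = perfectly uniform, values near 1 = concentrated in a few days). A minimal sketch with invented daily totals:

```python
def gini(values):
    """Gini index of non-negative values via the sorted cumulative sum."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

# Hypothetical daily rainfall (mm) for one station over a month:
# most of the rain falls on a handful of days, so the index is high.
daily_rain = [0, 0, 0, 120, 0, 0, 5, 0, 0, 0, 80, 0, 0, 0, 0,
              2, 0, 0, 0, 0, 150, 0, 0, 0, 0, 0, 10, 0, 0, 0]
g = gini(daily_rain)
```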

  18. The Effects of Conditioned Reinforcement for Reading on Reading Comprehension for 5th Graders

    ERIC Educational Resources Information Center

    Cumiskey Moore, Colleen

    2017-01-01

    In three experiments, I tested the effects of conditioned reinforcement for reading (R+Reading) on reading comprehension with 5th graders. In Experiment 1, I conducted a series of statistical analyses with data from 18 participants for one year. I administered 4 pre/post measurements of reading repertoires, which included: 1) state-wide…

  19. The Impact of the 2004 Hurricanes on Florida Comprehensive Assessment Test Scores: Implications for School Counselors

    ERIC Educational Resources Information Center

    Baggerly, Jennifer; Ferretti, Larissa K.

    2008-01-01

    What is the impact of natural disasters on students' statewide assessment scores? To answer this question, Florida Comprehensive Assessment Test (FCAT) scores of 55,881 students in grades 4 through 10 were analyzed to determine if there were significant decreases after the 2004 hurricanes. Results reveal that there was statistical but no practical…

  20. A Quantile Regression Approach to Understanding the Relations among Morphological Awareness, Vocabulary, and Reading Comprehension in Adult Basic Education Students

    ERIC Educational Resources Information Center

    Tighe, Elizabeth L.; Schatschneider, Christopher

    2016-01-01

    The purpose of this study was to investigate the joint and unique contributions of morphological awareness and vocabulary knowledge at five reading comprehension levels in adult basic education (ABE) students. We introduce the statistical technique of multiple quantile regression, which enabled us to assess the predictive utility of morphological…
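
    Quantile regression generalizes least squares by minimizing the asymmetric "pinball" loss, so predictors' contributions can be assessed separately at different comprehension levels. A minimal sketch with invented scores, where a constant predictor stands in for a full regression model:

```python
def pinball_loss(tau, y_true, y_pred):
    """Asymmetric loss minimized by quantile regression: under-predictions
    are weighted by tau, over-predictions by (1 - tau)."""
    total = 0.0
    for y, q in zip(y_true, y_pred):
        diff = y - q
        total += tau * diff if diff >= 0 else (tau - 1) * diff
    return total / len(y_true)

# Hypothetical comprehension scores: the constant predictor minimizing
# pinball loss at tau = 0.5 is a median, at tau = 0.9 an upper quantile.
scores = [42, 55, 61, 48, 70, 66, 53, 59, 75, 50]
candidates = range(40, 80)
best_median = min(candidates,
                  key=lambda q: pinball_loss(0.5, scores, [q] * len(scores)))
best_p90 = min(candidates,
               key=lambda q: pinball_loss(0.9, scores, [q] * len(scores)))
```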

  1. The Direct and Indirect Effects of Word Reading and Vocabulary on Adolescents' Reading Comprehension: Comparing Struggling and Adequate Comprehenders

    ERIC Educational Resources Information Center

    Oslund, Eric L.; Clemens, Nathan H.; Simmons, Deborah C.; Simmons, Leslie E.

    2018-01-01

    The current study examined statistically significant differences between struggling and adequate readers using a multicomponent model of reading comprehension in 796 sixth through eighth graders, with a primary focus on word reading and vocabulary. Path analyses and Wald tests were used to investigate the direct and indirect relations of word…

  2. The Effects of Cultural Familiarity and Question Preview Type on the Listening Comprehension of L2 Learners at the Secondary Level

    ERIC Educational Resources Information Center

    Li, Chen-Hong; Chen, Cai-Jun; Wu, Meng-Jie; Kuo, Ya-Chu; Tseng, Yun-Ting; Tsai, Shi-Yi; Shih, Hung-Chun

    2017-01-01

    We examined the effect of cultural familiarity and question-preview types on the listening comprehension of L2 learners. The results showed that the participants who received the full question-preview format scored higher than those receiving either the answer-option preview or question-stem preview, despite a statistically nonsignificant…

  3. Investigation of interpolation techniques for the reconstruction of the first dimension of comprehensive two-dimensional liquid chromatography-diode array detector data.

    PubMed

    Allen, Robert C; Rutan, Sarah C

    2011-10-31

    Simulated and experimental data were used to measure the effectiveness of common interpolation techniques during chromatographic alignment of comprehensive two-dimensional liquid chromatography-diode array detector (LC×LC-DAD) data. Interpolation was used to generate a sufficient number of data points in the sampled first chromatographic dimension to allow alignment of retention times from different injections. Five interpolation methods were investigated: linear interpolation followed by cross correlation, piecewise cubic Hermite interpolating polynomial, cubic spline, Fourier zero-filling, and Gaussian fitting. The fully aligned chromatograms, in both the first and second chromatographic dimensions, were analyzed by parallel factor analysis (PARAFAC) to determine the relative area of each peak in each injection. A calibration curve was generated for the simulated data set, and the standard error of prediction and percent relative standard deviation were calculated for the simulated peak for each technique. Gaussian fitting resulted in the lowest standard error of prediction and average relative standard deviation for the simulated data. However, when the interpolation techniques were applied to the experimental data, most methods did not produce statistically different relative peak areas from one another, although performance was improved relative to the PARAFAC results obtained from the unaligned data. Copyright © 2011 Elsevier B.V. All rights reserved.
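
    The core idea, upsampling the coarsely sampled first dimension before alignment, can be sketched with linear interpolation in NumPy. A simulated Gaussian peak stands in for real LC×LC-DAD data; the study's cubic-spline or Gaussian-fitting variants would replace the `np.interp` step:

```python
import numpy as np

# Hypothetical coarsely sampled first-dimension chromatogram:
# a Gaussian peak observed at only a few retention times.
t_coarse = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
signal = np.exp(-(t_coarse - 2.0) ** 2)

# Upsample 10x by linear interpolation so retention times from
# different injections can be aligned on a common fine grid.
t_fine = np.linspace(0.0, 4.0, 41)
signal_fine = np.interp(t_fine, t_coarse, signal)
```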

  4. How immediate and significant is the outcome of training on diversified diets, hygiene and food safety? An effort to mitigate child undernutrition in rural Malawi.

    PubMed

    Seetha, Anitha; Tsusaka, Takuji W; Munthali, Timalizge W; Musukwa, Maggie; Mwangwela, Agnes; Kalumikiza, Zione; Manani, Tinna; Kachulu, Lizzie; Kumwenda, Nelson; Musoke, Mike; Okori, Patrick

    2018-04-01

    The present study examined the impacts of training on nutrition, hygiene and food safety designed by the Nutrition Working Group, Child Survival Collaborations and Resources Group (CORE). Adapted from the 21 d Positive Deviance/Hearth model, mothers were trained in appropriate complementary feeding, water, sanitation and hygiene (WASH) practices, and aflatoxin contamination in food. To assess the impacts on child undernutrition, a randomised controlled trial was implemented on a sample of 179 mothers and their children (<2 years old) in two districts of Malawi, namely Mzimba and Balaka. The setting was a 21 d intensive learning-by-doing process using the positive deviance approach, with Malawian children and their mothers as participants. Difference-in-difference panel regression analysis revealed that the impacts of the comprehensive training were positive and statistically significant on the Z-scores for wasting and underweight, with effects increasing steadily over the 21 d time frame. As for stunting, the coefficients were not statistically significant during the 21 d programme, although significance began to build after 2 weeks, indicating that stunting should also be alleviated over a slightly longer time horizon. The study clearly suggests that comprehensive training immediately guides mothers into improved dietary and hygiene practices, and that improved practices take immediate and progressive effect in ameliorating children's undernutrition.
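
    The difference-in-difference estimator nets out the change seen in the control group from the change in the trained group. A minimal sketch with invented mean Z-scores (not the study's data):

```python
def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-difference estimate: the change in the treated
    group net of the change in the control group."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Illustrative mean weight-for-age Z-scores (invented): both groups
# improve over the programme, but the trained group improves more.
effect = diff_in_diff(treat_pre=-1.8, treat_post=-1.1,
                      ctrl_pre=-1.7, ctrl_post=-1.5)
```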

  5. Use of eCompliance, an innovative biometric system for monitoring of tuberculosis treatment in rural Uganda.

    PubMed

    Snidal, Sarah Jane; Barnard, Genevieve; Atuhairwe, Emmanuel; Ben Amor, Yanis

    2015-06-01

    Directly observed therapy short-course (DOTS) requires direct observation of tuberculosis (TB) patients and manual recording of doses taken. Programmatically, manual tracking is both time-consuming and prone to human error. Our project in western Uganda assessed the impact on TB treatment outcomes of a comprehensive patient support program including eCompliance, a biometric medical record device, with the aim of increasing TB patient retention. Through an observational study of 142 patients, DOTS outcomes of patients in the intervention group were compared with two control groups. Descriptive statistical comparisons, case-cohort analysis, and difference in change over time were used to assess the impact. Intervention patients had a higher cure rate than all other patients (55.6% versus 28.3% [P < 0.01]), and their odds of having a "cured" outcome were 3.17 times higher (P < 0.05). The intervention group had statistically significantly lower odds of having a negative outcome (0% versus 17% [P < 0.01]) than patients from the control groups. Additionally, the intervention group had a lost-to-follow-up rate lower than all other groups (0% versus 7%), a difference that trended toward significance. In resource-limited settings, implementing comprehensive DOTS including eCompliance may reduce the occurrence of negative DOTS outcomes for patients. © The American Society of Tropical Medicine and Hygiene.

  6. Compositional differences among Chinese soy sauce types studied by (13)C NMR spectroscopy coupled with multivariate statistical analysis.

    PubMed

    Kamal, Ghulam Mustafa; Wang, Xiaohua; Bin Yuan; Wang, Jie; Sun, Peng; Zhang, Xu; Liu, Maili

    2016-09-01

    Soy sauce, a well-known seasoning all over the world, especially in Asia, is available on the global market in a wide range of types based on its purpose and processing method. Its composition varies with the fermentation process and the addition of additives, preservatives, and flavor enhancers. A comprehensive (1)H NMR-based study of the metabonomic variations of soy sauce to differentiate among the types available on the global market has been limited by the complexity of the mixture. In the present study, (13)C NMR spectroscopy coupled with multivariate statistical data analysis, such as principal component analysis (PCA) and orthogonal partial least squares-discriminant analysis (OPLS-DA), was applied to investigate metabonomic variations among different types of soy sauce, namely super light, super dark, red cooking, and mushroom soy sauce. The main additives in soy sauce, such as glutamate, sucrose, and glucose, were easily distinguished and quantified using (13)C NMR spectroscopy, whereas they are difficult to assign and quantify from (1)H NMR spectra owing to serious signal overlap. The significantly higher concentration of sucrose in dark, red cooking, and mushroom-flavored soy sauce can be linked directly to the addition of caramel; similarly, the significantly higher level of glutamate in super light soy sauce, as compared with super dark and mushroom-flavored soy sauce, may come from the addition of monosodium glutamate. The study highlights the potential of (13)C NMR-based metabonomics coupled with multivariate statistical data analysis for differentiating between types of soy sauce on the basis of additive levels, raw materials, and fermentation procedures. Copyright © 2016 Elsevier B.V. All rights reserved.
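
    The PCA step can be sketched via SVD of a mean-centred data matrix (rows = samples, columns = signal intensities). The numbers below are invented stand-ins for integrated (13)C NMR intensities, arranged so that two sauce types separate along the first component:

```python
import numpy as np

def pca_scores(X, n_components=2):
    """PCA via SVD on the mean-centred data matrix; returns the
    projections of each sample onto the leading components."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Hypothetical intensities for 6 soy-sauce samples x 4 signals
# (e.g. glutamate, sucrose, glucose, ethanol); two loose clusters.
X = np.array([[9.0, 1.0, 2.1, 0.5],
              [8.8, 1.2, 2.0, 0.6],
              [9.1, 0.9, 2.2, 0.4],
              [4.0, 6.0, 5.1, 1.5],
              [4.2, 5.8, 5.0, 1.6],
              [3.9, 6.1, 5.2, 1.4]])
scores = pca_scores(X)
```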

  7. Incorporating Budget Impact Analysis in the Implementation of Complex Interventions: A Case of an Integrated Intervention for Multimorbid Patients within the CareWell Study.

    PubMed

    Soto-Gordoa, Myriam; Arrospide, Arantzazu; Merino Hernández, Marisa; Mora Amengual, Joana; Fullaondo Zabala, Ane; Larrañaga, Igor; de Manuel, Esteban; Mar, Javier

    2017-01-01

    To develop a framework for the management of complex health care interventions within the Deming continuous improvement cycle, and to test the framework in the case of an integrated intervention for multimorbid patients in the Basque Country within the CareWell project. Statistical analysis alone, although necessary, may not always represent the practical significance of the intervention; thus, to ascertain the true economic impact of the intervention, the statistical results can be integrated into a budget impact analysis. The intervention of the case study consisted of a comprehensive approach that integrated new provider roles and new technological infrastructure for multimorbid patients, with the aim of reducing patient decompensations by 10% over 5 years. The study period was 2012 to 2020. Given the aging of the general population, the conventional scenario predicts an increase of 21% in the health care budget for care of multimorbid patients during the study period. With a successful intervention, this figure should drop to 18%. The statistical analysis, however, showed no significant differences in costs either in primary care or in hospital care between 2012 and 2014. The real costs in 2014 were far closer to those in the conventional scenario than to the reductions expected in the objective scenario. The present implementation should be reappraised, because the present expenditure did not move closer to the objective budget. This work demonstrates the capacity of budget impact analysis to enhance the implementation of complex interventions. Its integration in the context of the continuous improvement cycle is transferable to other contexts in which implementation depth and time are important. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  8. Integrative pathway analysis of a genome-wide association study of V̇o2max response to exercise training

    PubMed Central

    Vivar, Juan C.; Sarzynski, Mark A.; Sung, Yun Ju; Timmons, James A.; Bouchard, Claude; Rankinen, Tuomo

    2013-01-01

    We previously reported the findings from a genome-wide association study of the response of maximal oxygen uptake (V̇o2max) to an exercise program. Here we follow up on these results to generate hypotheses on genes, pathways, and systems involved in the ability to respond to exercise training. A systems biology approach can help us better establish a comprehensive physiological description of what underlies V̇o2max trainability. The primary materials for this exploration were the individual single-nucleotide polymorphisms (SNPs), SNP-gene mappings, and statistical significance levels. We aimed to generate novel hypotheses through analyses that go beyond statistical association of single-locus markers. This was accomplished through three complementary approaches: 1) building de novo evidence of gene candidacy through informatics-driven literature mining; 2) aggregating evidence from statistical associations to link variant enrichment in biological pathways to V̇o2max trainability; and 3) predicting possible consequences of variants residing in the pathways of interest. We started with candidate gene prioritization followed by pathway analysis focused on overrepresentation analysis and gene set enrichment analysis. Subsequently, leads were followed using in silico analysis of predicted SNP functions. Pathways related to cellular energetics (pantothenate and CoA biosynthesis; PPAR signaling) and immune functions (complement and coagulation cascades) had the highest levels of SNP burden. In particular, long-chain fatty acid transport and fatty acid oxidation genes and sequence variants were found to influence differences in V̇o2max trainability. Together, these methods allow for the hypothesis-driven ranking and prioritization of genes and pathways for future experimental testing and validation. PMID:23990238
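
    Overrepresentation analysis typically scores a pathway with a hypergeometric upper-tail test: given N background genes, K of them in the pathway, and n candidate genes of which k fall in the pathway, how surprising is k? A minimal sketch with invented counts (not the study's gene sets):

```python
from scipy.stats import hypergeom

# Invented counts: N background genes, K in the pathway,
# n candidate genes, k of the candidates in the pathway.
N, K, n, k = 20000, 150, 300, 10

# Enrichment p-value: P(X >= k) under the hypergeometric null.
p_enrich = hypergeom.sf(k - 1, N, K, n)
```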

  9. Why weight? Modelling sample and observational level variability improves power in RNA-seq analyses.

    PubMed

    Liu, Ruijie; Holik, Aliaksei Z; Su, Shian; Jansz, Natasha; Chen, Kelan; Leong, Huei San; Blewitt, Marnie E; Asselin-Labat, Marie-Liesse; Smyth, Gordon K; Ritchie, Matthew E

    2015-09-03

    Variations in sample quality are frequently encountered in small RNA-sequencing experiments and pose a major challenge for differential expression analysis. Removal of high-variation samples reduces noise, but at the cost of reducing power, thus limiting our ability to detect biologically meaningful changes. Similarly, retaining these samples in the analysis may not reveal any statistically significant changes due to the higher noise level. A compromise is to use all available data, but to down-weight the observations from more variable samples. We describe a statistical approach that facilitates this by modelling heterogeneity at both the sample and observational levels as part of the differential expression analysis. At the sample level this is achieved by fitting a log-linear variance model that includes common sample-specific or group-specific parameters that are shared between genes. The estimated sample variance factors are then converted to weights and combined with observational-level weights obtained from the mean-variance relationship of the log-counts-per-million using 'voom'. A comprehensive analysis involving both simulations and experimental RNA-sequencing data demonstrates that this strategy leads to a universally more powerful analysis and fewer false discoveries when compared to conventional approaches. This methodology has wide application and is implemented in the open-source 'limma' package. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
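
    The weighting idea can be illustrated with a precision-weighted mean: each observation is weighted by the inverse of its estimated variance, so a noisy sample contributes little. This is a toy analogue with invented numbers, not the actual voom/limma computation:

```python
import numpy as np

def weighted_mean(estimates, variances):
    """Precision-weighted mean: weight = 1 / variance, so noisier
    observations are down-weighted rather than discarded."""
    w = 1.0 / np.asarray(variances, dtype=float)
    return float(np.sum(w * np.asarray(estimates)) / np.sum(w))

# Hypothetical log-fold-change estimates from 4 samples; the last
# sample is of poor quality, so its large variance down-weights it.
est = [1.1, 0.9, 1.0, 3.0]
var = [0.1, 0.1, 0.1, 2.0]
m = weighted_mean(est, var)
```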

  10. Simulation on a car interior aerodynamic noise control based on statistical energy analysis

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Wang, Dengfeng; Ma, Zhengdong

    2012-09-01

    Accurate simulation of interior aerodynamic noise is an important problem in car interior noise reduction. The unsteady aerodynamic pressure on body surfaces proves to be the key factor in controlling high-frequency car interior aerodynamic noise at high speed. In this paper, a detailed statistical energy analysis (SEA) model is built, and vibro-acoustic power inputs are loaded onto the model to obtain valid results for car interior noise analysis. The model is a solid foundation for further optimization of car interior noise control. Once SEA comprehensive analysis identifies the subsystems whose power contributions to interior noise are most sensitive, the sound pressure level of car interior aerodynamic noise can be reduced by improving their acoustic and damping characteristics. Further vehicle testing shows that interior acoustic performance can be improved by using the detailed SEA model, which comprises more than 80 subsystems, together with the calculation of unsteady aerodynamic pressure on body surfaces and improved sound and damping properties of materials. A reduction of more than 2 dB is achievable at centre frequencies above 800 Hz in the spectrum. The proposed optimization method can serve as a reference for car interior aerodynamic noise control using a detailed SEA model integrated with unsteady computational fluid dynamics (CFD) and sensitivity analysis of acoustic contributions.

  11. Vitamin D and depression: a systematic review and meta-analysis comparing studies with and without biological flaws.

    PubMed

    Spedding, Simon

    2014-04-11

    The efficacy of Vitamin D supplements in depression is controversial, awaiting further literature analysis. Biological flaws in primary studies are a possible reason why meta-analyses of Vitamin D have failed to demonstrate efficacy. This systematic review and meta-analysis of Vitamin D and depression compared studies with and without biological flaws. The systematic review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The literature search was undertaken through four databases for randomized controlled trials (RCTs). Studies were critically appraised for methodological quality and biological flaws in relation to the hypothesis and study design. Meta-analyses were performed for studies according to the presence of biological flaws. The 15 RCTs identified provide a more comprehensive evidence base than previous systematic reviews; the methodological quality of the studies was generally good and the methodology was diverse. A meta-analysis of all studies without flaws demonstrated a statistically significant improvement in depression with Vitamin D supplements (+0.78; CI +0.24, +1.27). Studies with biological flaws were mainly inconclusive, with the meta-analysis demonstrating a statistically significant worsening in depression from taking Vitamin D supplements (-1.1; CI -0.7, -1.5). Vitamin D supplementation (≥800 I.U. daily) was somewhat favorable in the management of depression in studies that demonstrated a change in vitamin levels, and the effect size was comparable to that of anti-depressant medication.
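
    The pooled estimates above come from a meta-analysis; the simplest variant is the inverse-variance fixed-effect model. A minimal sketch with invented per-study effects (a random-effects model would add a between-study variance term):

```python
import math

def fixed_effect_meta(effects, ses):
    """Inverse-variance fixed-effect pooled estimate with a 95% CI."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

# Illustrative standardized effects and standard errors for four
# hypothetical trials (not the review's data).
effects = [0.9, 0.6, 1.1, 0.4]
ses = [0.30, 0.25, 0.40, 0.35]
pooled, lo, hi = fixed_effect_meta(effects, ses)
```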

  12. Recruitment Methods and Show Rates to a Prostate Cancer Early Detection Program for High-Risk Men: A Comprehensive Analysis

    PubMed Central

    Giri, Veda N.; Coups, Elliot J.; Ruth, Karen; Goplerud, Julia; Raysor, Susan; Kim, Taylor Y.; Bagden, Loretta; Mastalski, Kathleen; Zakrzewski, Debra; Leimkuhler, Suzanne; Watkins-Bruner, Deborah

    2009-01-01

    Purpose Men with a family history (FH) of prostate cancer (PCA) and African American (AA) men are at higher risk for PCA. Recruitment and retention of these high-risk men into early detection programs have been challenging. We report a comprehensive analysis of recruitment methods, show rates, and participant factors from the Prostate Cancer Risk Assessment Program (PRAP), a prospective, longitudinal PCA screening study. Materials and Methods Men 35–69 years old are eligible if they have a FH of PCA, are AA, or have a BRCA1/2 mutation. Recruitment methods were analyzed with respect to participant demographics and show rates at the first PRAP appointment using standard statistical methods. Results Out of 707 men recruited, 64.9% showed to the initial PRAP appointment. More individuals were recruited via radio than by referral or other methods (χ2 = 298.13, p < .0001). Men recruited via radio were more likely to be AA (p<0.001), less educated (p=0.003), not married or partnered (p=0.007), and have no FH of PCA (p<0.001). Men recruited via referral had higher incomes (p=0.007) and were more likely to attend their initial PRAP visit than those recruited by radio or other methods (χ2 = 27.08, p < .0001). Conclusions This comprehensive analysis finds that radio recruitment reaches more AA men of lower socioeconomic status; however, these are the high-risk men with lower show rates for PCA screening. Targeted motivational measures need to be studied to improve show rates for PCA risk assessment among these high-risk men. PMID:19758657
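
    The χ2 comparisons above are standard contingency-table tests. A minimal sketch with an invented show/no-show by recruitment-method table (not the PRAP counts):

```python
from scipy.stats import chi2_contingency

# Illustrative 2x3 table: rows = showed / did not show to the first
# appointment, columns = recruitment method (radio, referral, other).
table = [[200, 120, 80],
         [150, 40, 60]]
chi2, p, dof, expected = chi2_contingency(table)
```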

  13. Terrain-analysis procedures for modeling radar backscatter

    USGS Publications Warehouse

    Schaber, Gerald G.; Pike, Richard J.; Berlin, Graydon Lennis

    1978-01-01

    The collection and analysis of detailed information on the surface of natural terrain are important aspects of radar-backscattering modeling. Radar is especially sensitive to surface-relief changes at the millimeter-to-decimeter scale for conventional K-band (~1-cm wavelength) to L-band (~25-cm wavelength) radar systems. Surface roughness statistics that characterize these changes in detail have been generated from sets of field measurements by a comprehensive set of seven programmed calculations for radar-backscatter modeling. The seven programs are 1) formatting of data in readable form for the subsequent topographic analysis programs; 2) relief analysis; 3) power spectral analysis; 4) power spectrum plots; 5) slope angle between slope reversals; 6) slope angle against slope interval plots; and 7) base-length slope angle and curvature. This complete Fortran IV software package, 'Terrain Analysis', is here presented for the first time. It was originally developed a decade ago for investigations of lunar morphology and surface trafficability for the Apollo Lunar Roving Vehicle.
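
    Program 3, power spectral analysis, can be sketched with an FFT of a synthetic relief profile (invented geometry: a 1.6 m undulation plus fine 0.4 m ripples, sampled every 0.1 m):

```python
import numpy as np

# Synthetic 1-D terrain relief profile: 512 samples at 0.1 m spacing,
# a 1.6 m undulation (5 cm amplitude) plus 0.4 m ripples (1 cm).
dx = 0.1
x = np.arange(512) * dx
profile = 0.05 * np.sin(2 * np.pi * x / 1.6) + 0.01 * np.sin(2 * np.pi * x / 0.4)

# Power spectrum of the detrended profile; the dominant spatial
# frequency corresponds to the 1.6 m undulation (0.625 cycles/m).
spec = np.abs(np.fft.rfft(profile - profile.mean())) ** 2
freqs = np.fft.rfftfreq(len(profile), d=dx)
peak_freq = freqs[np.argmax(spec)]
```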

  14. Education Statistics Quarterly. Volume 6, Issue 4, 2004. NCES 2006-613

    ERIC Educational Resources Information Center

    National Center for Education Statistics, 2006

    2006-01-01

    The "Quarterly" offers a comprehensive overview of work done across all of the National Center for Education Statistics (NCES). Each issue includes short publications and summaries covering all NCES publications and data products released in a given time period as well as notices about training and funding opportunities. In addition,…

  15. American Indians. 1970 Census of Population, Subject Reports.

    ERIC Educational Resources Information Center

    Department of Commerce, Washington, DC.

    The in-depth statistical profile of the American Indian's condition today is the most comprehensive ever done on the subject by the Bureau of the Census (U.S. Department of Commerce, Social and Economic Statistics Administration). Presenting information from the 1970 Census of Population and Housing, it includes tribal and reservation data and…

  16. Public Library Statistics, 1950. Bulletin, 1953, No. 9

    ERIC Educational Resources Information Center

    Dunbar, Ralph M.

    1954-01-01

    The Office of Education has long been interested in the development of public libraries as agencies to further the educational progress of the nation. Beginning in 1870, it has issued at intervals statistical compilations on the status of the various types of libraries. Marking a change in that program, the comprehensive collection covering…

  17. The Role of Statistics and Research Methods in the Academic Success of Psychology Majors: Do Performance and Enrollment Timing Matter?

    ERIC Educational Resources Information Center

    Freng, Scott; Webber, David; Blatter, Jamin; Wing, Ashley; Scott, Walter D.

    2011-01-01

    Comprehension of statistics and research methods is crucial to understanding psychology as a science (APA, 2007). However, psychology majors sometimes approach methodology courses with derision or anxiety (Onwuegbuzie & Wilson, 2003; Rajecki, Appleby, Williams, Johnson, & Jeschke, 2005); consequently, students may postpone…

  18. Education Statistics Quarterly. Volume 4 Issue 4, 2002.

    ERIC Educational Resources Information Center

    National Center for Education Statistics, 2002

    2002-01-01

    This publication provides a comprehensive overview of work done across all parts of the National Center for Education Statistics (NCES). Each issue contains short publications, summaries, and descriptions that cover all NCES publications and data products released in a 3-month period. Each issue also contains a message from the NCES on a timely…

  19. Association between Hypertension and Epistaxis: Systematic Review and Meta-analysis.

    PubMed

    Min, Hyun Jin; Kang, Hyun; Choi, Geun Joo; Kim, Kyung Soo

    2017-12-01

    Objective Whether there is an association or a cause-and-effect relationship between epistaxis and hypertension is a subject of longstanding controversy. The objective of this systematic review and meta-analysis was to determine the association between epistaxis and hypertension and to verify whether hypertension is an independent risk factor of epistaxis. Data Sources A comprehensive search was performed using the MEDLINE, EMBASE, and Cochrane Library databases. Review Methods The review was performed according to the Meta-analysis of Observational Studies in Epidemiology guidelines and reported using the Preferred Reporting Items for Systematic Reviews and Meta-analysis guidelines. Results We screened 2768 unique studies and selected 10 for this meta-analysis. Overall, the risk of epistaxis was significantly increased for patients with hypertension (odds ratio, 1.532 [95% confidence interval (CI), 1.181-1.986]; number needed to treat, 14.9 [95% CI, 12.3-19.0]). Results of the Q test and the I² statistic suggested considerable heterogeneity ([Formula: see text] = 0.038, I² = 49.3%). The sensitivity analysis was performed by excluding 1 study at a time, and it revealed no change in statistical significance. Conclusion Although this meta-analysis had some limitations, our study demonstrated that hypertension was significantly associated with the risk of epistaxis. However, since this association does not support a causal relationship between hypertension and epistaxis, further clinical trials with large patient populations will be required to determine the impact of hypertension on epistaxis.
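
    The I² heterogeneity statistic reported above is derived from Cochran's Q as I² = max(0, (Q − df)/Q) × 100. As a sketch, a Q of about 17.75 across the 10 studies reproduces the ~49.3% figure; the Q value here is back-calculated for illustration, not taken from the paper:

```python
def i_squared(q, k):
    """Higgins' I^2: percentage of variability across k studies
    attributable to heterogeneity rather than chance."""
    df = k - 1
    return max(0.0, (q - df) / q) * 100.0

# Illustrative back-calculation: Q = 17.75 over 10 studies gives ~49.3%.
i2 = i_squared(17.75, 10)
```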

  20. Evaluating and interpreting cross-taxon congruence: Potential pitfalls and solutions

    NASA Astrophysics Data System (ADS)

    Gioria, Margherita; Bacaro, Giovanni; Feehan, John

    2011-05-01

    Characterizing the relationship between different taxonomic groups is critical to identifying potential surrogates for biodiversity. Previous studies have shown that cross-taxa relationships are generally weak and/or inconsistent. The difficulty in finding predictive patterns has often been attributed to the spatial and temporal scales of these studies and to differences in the measures used to evaluate such relationships (species richness versus composition). However, the choice of the analytical approach used to evaluate cross-taxon congruence inevitably represents a major source of variation. Here, we describe a range of methods that can be used to comprehensively assess cross-taxa relationships. To do so, we used data for two taxonomic groups, wetland plants and water beetles, collected from 54 farmland ponds in Ireland. Specifically, we used the Pearson correlation and rarefaction curves to analyse patterns in species richness, while Mantel tests, Procrustes analysis, and co-correspondence analysis were used to evaluate congruence in species composition. We compared the results of these analyses and describe some of the potential pitfalls associated with each of these statistical approaches. Cross-taxon congruence was moderate to strong, depending on the choice of analytical approach, the nature of the response variable, and local and environmental conditions. Our findings indicate that multiple approaches and measures of community structure are required for a comprehensive assessment of cross-taxa relationships. In particular, we showed that the selection of surrogate taxa in conservation planning should not be based on a single statistic expressing the degree of correlation in species richness or composition. Potential solutions to the analytical issues associated with the assessment of cross-taxon congruence are provided, and the implications of our findings for the selection of surrogates for biodiversity are discussed.

  1. Enriched pathways for major depressive disorder identified from a genome-wide association study.

    PubMed

    Kao, Chung-Feng; Jia, Peilin; Zhao, Zhongming; Kuo, Po-Hsiu

    2012-11-01

    Major depressive disorder (MDD) has caused a substantial burden of disease worldwide with moderate heritability. Despite efforts through conducting numerous association studies and now, genome-wide association (GWA) studies, the success of identifying susceptibility loci for MDD has been limited, which is partially attributed to the complex nature of depression pathogenesis. A pathway-based analytic strategy to investigate the joint effects of various genes within specific biological pathways has emerged as a powerful tool for complex traits. The present study aimed to identify enriched pathways for depression using a GWA dataset for MDD. For each gene, we estimated its gene-wise p value using combined and minimum p value, separately. Canonical pathways from the Kyoto Encyclopedia of Genes and Genomes (KEGG) and BioCarta were used. We employed four pathway-based analytic approaches (gene set enrichment analysis, hypergeometric test, sum-square statistic, sum-statistic). We adjusted for multiple testing using Benjamini & Hochberg's method to report significant pathways. We found 17 significantly enriched pathways for depression, which presented low-to-intermediate crosstalk. The top four pathways were long-term depression (p ≤ 1×10⁻⁵), calcium signalling (p ≤ 6×10⁻⁵), arrhythmogenic right ventricular cardiomyopathy (p ≤ 1.6×10⁻⁴) and cell adhesion molecules (p ≤ 2.2×10⁻⁴). In conclusion, our comprehensive pathway analyses identified promising pathways for depression that are related to neurotransmitter and neuronal systems, immune system and inflammatory response, which may be involved in the pathophysiological mechanisms underlying depression. We demonstrated that pathway enrichment analysis is promising to facilitate our understanding of complex traits through a deeper interpretation of GWA data. Application of this comprehensive analytic strategy in upcoming GWA data for depression could validate the findings reported in this study.
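
    Of the four pathway-based approaches named, the hypergeometric test is the simplest to illustrate: it asks whether a pathway contains more trait-associated genes than expected by chance. A minimal sketch with hypothetical gene counts (the function and all numbers below are illustrative assumptions, not taken from the study):

```python
from math import comb

def hypergeom_enrichment_p(N, K, n, k):
    """Upper-tail hypergeometric P(X >= k): probability of seeing k or more
    pathway genes when drawing n associated genes from N background genes,
    of which K belong to the pathway."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Hypothetical counts: 20,000 background genes, a 150-gene pathway,
# 500 depression-associated genes, 12 of which fall in the pathway.
p = hypergeom_enrichment_p(20000, 150, 500, 12)
print(f"enrichment p = {p:.3g}")
```

    In practice such per-pathway p values would then be adjusted for multiple testing, e.g. with the Benjamini-Hochberg procedure used in the study.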

  2. CMS: A Web-Based System for Visualization and Analysis of Genome-Wide Methylation Data of Human Cancers

    PubMed Central

    Huang, Yi-Wen; Roa, Juan C.; Goodfellow, Paul J.; Kizer, E. Lynette; Huang, Tim H. M.; Chen, Yidong

    2013-01-01

    Background DNA methylation of promoter CpG islands is associated with gene suppression, and its unique genome-wide profiles have been linked to tumor progression. Coupled with high-throughput sequencing technologies, genome-wide methylation profiles in cancer cells can now be determined efficiently. In addition, experimental and computational technologies make it possible to find the functional relationship between cancer-specific methylation patterns and their clinicopathological parameters. Methodology/Principal Findings The cancer methylome system (CMS) is a web-based database application designed for the visualization, comparison and statistical analysis of human cancer-specific DNA methylation. Methylation intensities were obtained from MBDCap sequencing, pre-processed and stored in the database. 191 patient samples (169 tumor and 22 normal specimens) and 41 breast cancer cell lines are deposited in the database, comprising about 6.6 billion uniquely mapped sequence reads. This provides among the most comprehensive genome-wide epigenetic portraits of human breast cancer and endometrial cancer to date. Two views are offered to help users understand methylation structure at the genomic level and systemic methylation alteration at the gene level. In addition, a variety of annotation tracks are provided to cover genomic information. CMS includes important analytic functions for the interpretation of methylation data, such as the detection of differentially methylated regions, statistical calculation of global methylation intensities, multiple gene sets of biologically significant categories, and interactivity with UCSC via custom-track data. We also present examples of discoveries utilizing the framework. Conclusions/Significance CMS provides visualization and analytic functions for cancer methylome datasets. A comprehensive collection of datasets, a variety of embedded analytic functions and extensive applications with biological and translational significance make this system powerful and unique in cancer methylation research. CMS is freely accessible at: http://cbbiweb.uthscsa.edu/KMethylomes/. PMID:23630576

  3. Development of a Self-Report Physical Function Instrument for Disability Assessment: Item Pool Construction and Factor Analysis

    PubMed Central

    McDonough, Christine M.; Jette, Alan M.; Ni, Pengsheng; Bogusz, Kara; Marfeo, Elizabeth E; Brandt, Diane E; Chan, Leighton; Meterko, Mark; Haley, Stephen M.; Rasch, Elizabeth K.

    2014-01-01

    Objectives To build a comprehensive item pool representing work-relevant physical functioning and to test the factor structure of the item pool. These developmental steps represent initial outcomes of a broader project to develop instruments for the assessment of function within the context of Social Security Administration (SSA) disability programs. Design Comprehensive literature review; gap analysis; item generation with expert panel input; stakeholder interviews; cognitive interviews; cross-sectional survey administration; and exploratory and confirmatory factor analyses to assess item pool structure. Setting In-person and semi-structured interviews; internet and telephone surveys. Participants A sample of 1,017 SSA claimants and a normative sample of 999 adults from the US general population. Interventions Not applicable. Main Outcome Measure Model fit statistics. Results The final item pool consisted of 139 items. Within the claimant sample, 58.7% were white; 31.8% were black; 46.6% were female; and the mean age was 49.7 years. Initial factor analyses revealed a 4-factor solution which included more items and allowed separate characterization of: 1) Changing and Maintaining Body Position, 2) Whole Body Mobility, 3) Upper Body Function and 4) Upper Extremity Fine Motor. The final 4-factor model included 91 items. Confirmatory factor analyses for the 4-factor models for the claimant and the normative samples demonstrated very good fit. Fit statistics for claimant and normative samples, respectively, were: Comparative Fit Index = 0.93 and 0.98; Tucker-Lewis Index = 0.92 and 0.98; Root Mean Square Error of Approximation = 0.05 and 0.04. Conclusions The factor structure of the Physical Function item pool closely resembled the hypothesized content model. The four scales relevant to work activities offer promise for providing reliable information about claimant physical functioning relevant to work disability. PMID:23542402
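
    The fit indices reported (CFI, TLI, RMSEA) are all derived from the chi-square statistics of the fitted model and a baseline (independence) model. A sketch using their standard formulas; the chi-square inputs below are hypothetical placeholders, not the study's actual values:

```python
import math

def fit_indices(chi2_m, df_m, chi2_b, df_b, n):
    """Common confirmatory factor analysis fit indices.

    chi2_m/df_m: fitted model; chi2_b/df_b: baseline (independence) model;
    n: sample size. Inputs in the example are hypothetical.
    """
    # Comparative Fit Index: excess chi-square relative to baseline
    cfi = 1.0 - max(chi2_m - df_m, 0.0) / max(chi2_b - df_b, chi2_m - df_m, 1e-12)
    # Tucker-Lewis Index: penalizes model complexity via chi2/df ratios
    tli = ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1.0)
    # Root Mean Square Error of Approximation
    rmsea = math.sqrt(max(chi2_m - df_m, 0.0) / (df_m * (n - 1)))
    return cfi, tli, rmsea

cfi, tli, rmsea = fit_indices(chi2_m=7000.0, df_m=4000.0,
                              chi2_b=90000.0, df_b=4095.0, n=1017)
print(f"CFI = {cfi:.3f}, TLI = {tli:.3f}, RMSEA = {rmsea:.3f}")
```

    By common rules of thumb, CFI/TLI above roughly 0.90-0.95 and RMSEA below roughly 0.06 indicate good fit, which is why the reported values support the 4-factor model.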

  4. Development of a self-report physical function instrument for disability assessment: item pool construction and factor analysis.

    PubMed

    McDonough, Christine M; Jette, Alan M; Ni, Pengsheng; Bogusz, Kara; Marfeo, Elizabeth E; Brandt, Diane E; Chan, Leighton; Meterko, Mark; Haley, Stephen M; Rasch, Elizabeth K

    2013-09-01

    To build a comprehensive item pool representing work-relevant physical functioning and to test the factor structure of the item pool. These developmental steps represent initial outcomes of a broader project to develop instruments for the assessment of function within the context of Social Security Administration (SSA) disability programs. Comprehensive literature review; gap analysis; item generation with expert panel input; stakeholder interviews; cognitive interviews; cross-sectional survey administration; and exploratory and confirmatory factor analyses to assess item pool structure. In-person and semistructured interviews and Internet and telephone surveys. Sample of SSA claimants (n=1017) and a normative sample of adults from the U.S. general population (n=999). Not applicable. Model fit statistics. The final item pool consisted of 139 items. Within the claimant sample, 58.7% were white; 31.8% were black; 46.6% were women; and the mean age was 49.7 years. Initial factor analyses revealed a 4-factor solution, which included more items and allowed separate characterization of: (1) changing and maintaining body position, (2) whole body mobility, (3) upper body function, and (4) upper extremity fine motor. The final 4-factor model included 91 items. Confirmatory factor analyses for the 4-factor models for the claimant and the normative samples demonstrated very good fit. Fit statistics for claimant and normative samples, respectively, were: Comparative Fit Index=.93 and .98; Tucker-Lewis Index=.92 and .98; and root mean square error approximation=.05 and .04. The factor structure of the physical function item pool closely resembled the hypothesized content model. The 4 scales relevant to work activities offer promise for providing reliable information about claimant physical functioning relevant to work disability. Copyright © 2013 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  5. CMS: a web-based system for visualization and analysis of genome-wide methylation data of human cancers.

    PubMed

    Gu, Fei; Doderer, Mark S; Huang, Yi-Wen; Roa, Juan C; Goodfellow, Paul J; Kizer, E Lynette; Huang, Tim H M; Chen, Yidong

    2013-01-01

    DNA methylation of promoter CpG islands is associated with gene suppression, and its unique genome-wide profiles have been linked to tumor progression. Coupled with high-throughput sequencing technologies, genome-wide methylation profiles in cancer cells can now be determined efficiently. In addition, experimental and computational technologies make it possible to find the functional relationship between cancer-specific methylation patterns and their clinicopathological parameters. The cancer methylome system (CMS) is a web-based database application designed for the visualization, comparison and statistical analysis of human cancer-specific DNA methylation. Methylation intensities were obtained from MBDCap sequencing, pre-processed and stored in the database. 191 patient samples (169 tumor and 22 normal specimens) and 41 breast cancer cell lines are deposited in the database, comprising about 6.6 billion uniquely mapped sequence reads. This provides among the most comprehensive genome-wide epigenetic portraits of human breast cancer and endometrial cancer to date. Two views are offered to help users understand methylation structure at the genomic level and systemic methylation alteration at the gene level. In addition, a variety of annotation tracks are provided to cover genomic information. CMS includes important analytic functions for the interpretation of methylation data, such as the detection of differentially methylated regions, statistical calculation of global methylation intensities, multiple gene sets of biologically significant categories, and interactivity with UCSC via custom-track data. We also present examples of discoveries utilizing the framework. CMS provides visualization and analytic functions for cancer methylome datasets. A comprehensive collection of datasets, a variety of embedded analytic functions and extensive applications with biological and translational significance make this system powerful and unique in cancer methylation research. CMS is freely accessible at: http://cbbiweb.uthscsa.edu/KMethylomes/.

  6. Animal movement: Statistical models for telemetry data

    USGS Publications Warehouse

    Hooten, Mevin B.; Johnson, Devin S.; McClintock, Brett T.; Morales, Juan M.

    2017-01-01

    The study of animal movement has always been a key element in ecological science, because it is inherently linked to critical processes that scale from individuals to populations and communities to ecosystems. Rapid improvements in biotelemetry data collection and processing technology have given rise to a variety of statistical methods for characterizing animal movement. The book serves as a comprehensive reference for the types of statistical models used to study individual-based animal movement. 

  7. Incidence of Minimally Invasive Colorectal Cancer Surgery at National Comprehensive Cancer Network Centers

    PubMed Central

    Yeo, Heather; Niland, Joyce; Milne, Dana; ter Veer, Anna; Bekaii-Saab, Tanios; Farma, Jeffrey M.; Lai, Lily; Skibber, John M.; Small, William; Wilkinson, Neal; Schrag, Deborah

    2015-01-01

    Background: Laparoscopic colectomy has been shown to have equivalent oncologic outcomes to open colectomy for the management of colon cancer, but its adoption nationally has been slow. This study investigates the prevalence and factors associated with laparoscopic colorectal resection at National Comprehensive Cancer Network (NCCN) centers. Methods: Data on patients undergoing surgery for colon and rectal cancer at NCCN centers from 2005 to 2010 were obtained from chart review of medical records for the NCCN Outcomes Project and included information on socioeconomic status, insurance coverage, comorbidity, and physician-reported Eastern Cooperative Oncology Group (ECOG) performance status. Associations between receipt of minimally invasive surgery and patient and clinical variables were analyzed with univariate and multivariable logistic regression. All statistical tests were two-sided. Results: A total of 4032 patients, diagnosed between September 2005 and December 2010, underwent elective colon or rectal resection for cancer at NCCN centers. Median age of colon cancer patients was 62.6 years, and 49% were men. The percent of colon cancer patients treated with minimally invasive surgery (MIS) increased from 35% in 2006 to 51% in 2010 across all centers but varied statistically significantly between centers. On multivariable analysis, factors associated with minimally invasive surgery for colon cancer patients who had surgery at an NCCN institution were older age (P = .02), male sex (P = .006), fewer comorbidities (P ≤ .001), lower final T-stage (P < .001), median household income greater than or equal to $80000 (P < .001), ECOG performance status = 0 (P = .02), and NCCN institution (P ≤ .001). Conclusions: The use of MIS increased at NCCN centers. However, there was statistically significant variation in adoption of MIS technique among centers. PMID:25527640

  8. Predicting heart failure mortality in frail seniors: comparing the NYHA functional classification with the Resident Assessment Instrument (RAI) 2.0.

    PubMed

    Tjam, Erin Y; Heckman, George A; Smith, Stuart; Arai, Bruce; Hirdes, John; Poss, Jeff; McKelvie, Robert S

    2012-02-23

    Though the NYHA functional classification is recommended in clinical settings, concerns have been raised about its reliability particularly among older patients. The RAI 2.0 is a comprehensive assessment system specifically developed for frail seniors. We hypothesized that a prognostic model for heart failure (HF) developed from the RAI 2.0 would be superior to the NYHA classification. The purpose of this study was to determine whether a HF-specific prognostic model based on the RAI 2.0 is superior to the NYHA functional classification in predicting mortality in frail older HF patients. Secondary analysis of data from a prospective cohort study of a HF education program for care providers in long-term care and retirement homes. Univariate analyses identified RAI 2.0 variables predicting death at 6 months. These and the NYHA classification were used to develop logistic models. Two RAI 2.0 models were derived. The first includes six items: "weight gain of 5% or more of total body weight over 30 days", "leaving 25% or more food uneaten", "unable to lie flat", "unstable cognitive, ADL, moods, or behavioural patterns", "change in cognitive function" and "needing help to walk in room"; the C statistic was 0.866. The second includes the CHESS health instability scale and the item "requiring help walking in room"; the C statistic was 0.838. The C statistic for the NYHA scale was 0.686. These results suggest that data from the RAI 2.0, an instrument for comprehensive assessment of frail seniors, can better predict mortality than the NYHA classification. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  9. Trends in study design and the statistical methods employed in a leading general medicine journal.

    PubMed

    Gosho, M; Sato, Y; Nagashima, K; Takahashi, S

    2018-02-01

    Study design and statistical methods have become core components of medical research, and the methodology has become more multifaceted and complicated over time. A study of the comprehensive details and current trends in study design and statistical methods is required to support the future implementation of well-planned clinical studies providing information for evidence-based medicine. Our purpose was to illustrate the study designs and statistical methods employed in recent medical literature. This was an extension of Sato et al. (N Engl J Med 2017; 376: 1086-1087), which reviewed 238 articles published in 2015 in the New England Journal of Medicine (NEJM) and briefly summarized the statistical methods employed in NEJM. Using the same database, we performed a new investigation of the detailed trends in study design and individual statistical methods that were not reported in the Sato study. Under the CONSORT statement, prespecification and justification of sample size are obligatory in planning intervention studies. Although standard survival methods (eg, the Kaplan-Meier estimator and Cox regression model) were most frequently applied, the Gray test and Fine-Gray proportional hazards model for considering competing risks were sometimes used for more valid statistical inference. With respect to handling missing data, model-based methods, which are valid for missing-at-random data, were more frequently used than single imputation methods. The latter are not recommended as a primary analysis, but they have been applied in many clinical trials. Group sequential design with interim analyses was one of the standard designs, and novel designs, such as adaptive dose selection and sample size re-estimation, were sometimes employed in NEJM. Model-based approaches for handling missing data should replace single imputation methods for primary analysis in light of the information found in some publications. Use of adaptive designs with interim analyses is increasing following the presentation of the FDA guidance on adaptive design. © 2017 John Wiley & Sons Ltd.
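
    Of the survival methods mentioned, the Kaplan-Meier estimator is the easiest to sketch: survival is multiplied down by (1 − d/n) at each observed event time, with censored patients contributing to the at-risk count only. A minimal illustration on hypothetical toy data, not from any reviewed trial:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve from follow-up times and event flags.

    times: follow-up times; events: 1 = event observed, 0 = censored.
    Returns [(event_time, survival_probability), ...].
    """
    distinct_event_times = sorted({t for t, e in zip(times, events) if e})
    s, curve = 1.0, []
    for t in distinct_event_times:
        n = sum(1 for tt in times if tt >= t)                    # at risk just before t
        d = sum(1 for tt, e in zip(times, events) if tt == t and e)  # events at t
        s *= 1.0 - d / n
        curve.append((t, s))
    return curve

# Hypothetical toy cohort of 5 patients, censored at t = 2 and t = 4
curve = kaplan_meier([1, 2, 2, 3, 4], [1, 1, 0, 1, 0])
print(curve)  # survival drops at each observed event time
```

    Competing-risks alternatives such as the Fine-Gray model replace this cause-ignoring estimate with a cumulative incidence function, which is why they were sometimes preferred in the reviewed articles.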

  10. CircadiOmics: circadian omic web portal.

    PubMed

    Ceglia, Nicholas; Liu, Yu; Chen, Siwei; Agostinelli, Forest; Eckel-Mahan, Kristin; Sassone-Corsi, Paolo; Baldi, Pierre

    2018-06-15

    Circadian rhythms play a fundamental role at all levels of biological organization. Understanding the mechanisms and implications of circadian oscillations continues to be the focus of intense research. However, there has been no comprehensive and integrated way of accessing and mining all circadian omic datasets. The latest release of CircadiOmics (http://circadiomics.ics.uci.edu) fills this gap by providing the most comprehensive web server for studying circadian data. The newly updated version contains 227 high-throughput omic datasets corresponding to over 74 million measurements sampled over 24-h cycles. Users can visualize and compare oscillatory trajectories across species, tissues and conditions. Periodicity statistics (e.g. period, amplitude, phase, P-value, q-value, etc.) obtained from BIO_CYCLE and other methods are provided for all samples in the repository and can easily be downloaded in the form of publication-ready figures and tables. New features and substantial improvements in performance and data volume make CircadiOmics a powerful web portal for the integrated analysis of circadian omic data.

  11. Comprehensive non-dimensional normalization of gait data.

    PubMed

    Pinzone, Ornella; Schwartz, Michael H; Baker, Richard

    2016-02-01

    Normalizing clinical gait analysis data is required to remove variability due to physical characteristics such as leg length and weight. This is particularly important for children, in whom both are associated with age. Most clinical centres use conventional normalization (by mass only), whereas there is a stronger biomechanical argument for non-dimensional normalization. This study used data from 82 typically developing children to compare how the two schemes performed over a wide range of temporal-spatial and kinetic parameters by calculating the coefficients of determination with leg length, weight and height. 81% of the conventionally normalized parameters had a coefficient of determination above the threshold for a statistical association (p<0.05), compared to 23% of those normalized non-dimensionally. All the conventionally normalized parameters exceeding this threshold showed a reduced association with non-dimensional normalization. In conclusion, non-dimensional normalization is more effective than conventional normalization in reducing the effects of height, weight and age across a comprehensive range of temporal-spatial and kinetic parameters. Copyright © 2015 Elsevier B.V. All rights reserved.
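
    Non-dimensional normalization of the kind compared here scales each parameter by combinations of body mass, leg length, and gravitational acceleration so that the result is dimensionless. A sketch along the lines of the widely cited Hof-style scheme; the two parameters shown and the example values are illustrative assumptions, not the study's data:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def nondimensionalize(mass_kg, leg_len_m, speed_mps, moment_nm):
    """Non-dimensional gait normalization (Hof-style scheme).

    Conventional normalization divides kinetics by mass only; the
    non-dimensional scheme also removes leg-length dependence.
    """
    speed_nd = speed_mps / math.sqrt(G * leg_len_m)    # Froude-type dimensionless speed
    moment_nd = moment_nm / (mass_kg * G * leg_len_m)  # dimensionless joint moment
    return speed_nd, moment_nd

# Hypothetical child: 30 kg, 0.70 m leg length, walking at 1.1 m/s
# with a 25 N*m knee moment
s, m = nondimensionalize(30.0, 0.70, 1.1, 25.0)
print(f"dimensionless speed = {s:.3f}, moment = {m:.4f}")
```

    Because both mass and leg length are divided out, correlations of the normalized parameter with weight, height, and hence age are reduced, which is the effect the study quantifies.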

  12. Renewable Energy Zones for the Africa Clean Energy Corridor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Grace C.; Deshmukh, Ranjit; Ndhlukula, Kudakwashe

    Multi-criteria Analysis for Planning Renewable Energy (MapRE) is a study approach developed by the Lawrence Berkeley National Laboratory with the support of the International Renewable Energy Agency (IRENA). The approach combines geospatial, statistical, energy engineering, and economic methods to comprehensively identify and value high-quality wind, solar PV, and solar CSP resources for grid integration based on techno-economic criteria, generation profiles (for wind), and socio-environmental impacts. The Renewable Energy Zones for the Africa Clean Energy Corridor study sought to identify and comprehensively value high-quality wind, solar photovoltaic (PV), and concentrating solar power (CSP) resources in 21 countries in the East and Southern Africa Power Pools to support the prioritization of areas for development through a multi-criteria planning process. These countries include Angola, Botswana, Burundi, Djibouti, Democratic Republic of Congo, Egypt, Ethiopia, Kenya, Lesotho, Libya, Malawi, Mozambique, Namibia, Rwanda, South Africa, Sudan, Swaziland, Tanzania, Uganda, Zambia, and Zimbabwe. The study describes the methodology and key results, including renewable energy potential for each region.

  13. CloVR-ITS: Automated internal transcribed spacer amplicon sequence analysis pipeline for the characterization of fungal microbiota

    PubMed Central

    2013-01-01

    Background Besides the development of comprehensive tools for high-throughput 16S ribosomal RNA amplicon sequence analysis, there exists a growing need for protocols emphasizing alternative phylogenetic markers such as those representing eukaryotic organisms. Results Here we introduce CloVR-ITS, an automated pipeline for comparative analysis of internal transcribed spacer (ITS) pyrosequences amplified from metagenomic DNA isolates and representing fungal species. This pipeline performs a variety of steps similar to those commonly used for 16S rRNA amplicon sequence analysis, including preprocessing for quality, chimera detection, clustering of sequences into operational taxonomic units (OTUs), taxonomic assignment (at class, order, family, genus, and species levels) and statistical analysis of sample groups of interest based on user-provided information. Using ITS amplicon pyrosequencing data from a previous human gastric fluid study, we demonstrate the utility of CloVR-ITS for fungal microbiota analysis and provide runtime and cost examples, including analysis of extremely large datasets on the cloud. We show that the largest fractions of reads from the stomach fluid samples were assigned to Dothideomycetes, Saccharomycetes, Agaricomycetes and Sordariomycetes but that all samples were dominated by sequences that could not be taxonomically classified. Representatives of the Candida genus were identified in all samples, most notably C. quercitrusa, while sequence reads assigned to the Aspergillus genus were only identified in a subset of samples. CloVR-ITS is made available as a pre-installed, automated, and portable software pipeline for cloud-friendly execution as part of the CloVR virtual machine package (http://clovr.org). 
Conclusion The CloVR-ITS pipeline provides fungal microbiota analysis that can be complementary to bacterial 16S rRNA and total metagenome sequence analysis allowing for more comprehensive studies of environmental and host-associated microbial communities. PMID:24451270

  14. Factors associated with comprehensive dental care following an initial emergency dental visit.

    PubMed

    Johnson, Jeffrey T; Turner, Erwin G; Novak, Karen F; Kaplan, Alan L

    2005-01-01

    The purpose of this study was to characterize patients' utilization of a dental home, grouped by: (1) age; (2) sex; and (3) payment method. A retrospective chart review of 1,020 patients who initially presented for an emergency visit was performed. From the original data pool, 2 groups were delineated: (1) patients who returned for comprehensive dental care; and (2) those who did not. Patients with private dental insurance or Medicaid dental benefits were statistically more likely to return for comprehensive oral health care than those with no form of dental insurance. Younger patients (≤3 years of age) were least likely to return for comprehensive dental care. Socioeconomic factors play a crucial role in care-seeking behaviors, and these obstacles are often a barrier to preventive and comprehensive oral health care.

  15. Comprehensive Analyses of Ventricular Myocyte Models Identify Targets Exhibiting Favorable Rate Dependence

    PubMed Central

    Bugana, Marco; Severi, Stefano; Sobie, Eric A.

    2014-01-01

    Reverse rate dependence is a problematic property of antiarrhythmic drugs that prolong the cardiac action potential (AP). The prolongation caused by reverse rate dependent agents is greater at slow heart rates, resulting in both reduced arrhythmia suppression at fast rates and increased arrhythmia risk at slow rates. The opposite property, forward rate dependence, would theoretically overcome these parallel problems, yet forward rate dependent (FRD) antiarrhythmics remain elusive. Moreover, there is evidence that reverse rate dependence is an intrinsic property of perturbations to the AP. We have addressed the possibility of forward rate dependence by performing a comprehensive analysis of 13 ventricular myocyte models. By simulating populations of myocytes with varying properties and analyzing population results statistically, we simultaneously predicted the rate-dependent effects of changes in multiple model parameters. An average of 40 parameters were tested in each model, and effects on AP duration were assessed at slow (0.2 Hz) and fast (2 Hz) rates. The analysis identified a variety of FRD ionic current perturbations and generated specific predictions regarding their mechanisms. For instance, an increase in L-type calcium current is FRD when this is accompanied by indirect, rate-dependent changes in slow delayed rectifier potassium current. A comparison of predictions across models identified inward rectifier potassium current and the sodium-potassium pump as the two targets most likely to produce FRD AP prolongation. Finally, a statistical analysis of results from the 13 models demonstrated that models displaying minimal rate-dependent changes in AP shape have little capacity for FRD perturbations, whereas models with large shape changes have considerable FRD potential. This can explain differences between species and between ventricular cell types. 
Overall, this study provides new insights, both specific and general, into the determinants of AP duration rate dependence, and illustrates a strategy for the design of potentially beneficial antiarrhythmic drugs. PMID:24675446

  16. Comprehensive analyses of ventricular myocyte models identify targets exhibiting favorable rate dependence.

    PubMed

    Cummins, Megan A; Dalal, Pavan J; Bugana, Marco; Severi, Stefano; Sobie, Eric A

    2014-03-01

    Reverse rate dependence is a problematic property of antiarrhythmic drugs that prolong the cardiac action potential (AP). The prolongation caused by reverse rate dependent agents is greater at slow heart rates, resulting in both reduced arrhythmia suppression at fast rates and increased arrhythmia risk at slow rates. The opposite property, forward rate dependence, would theoretically overcome these parallel problems, yet forward rate dependent (FRD) antiarrhythmics remain elusive. Moreover, there is evidence that reverse rate dependence is an intrinsic property of perturbations to the AP. We have addressed the possibility of forward rate dependence by performing a comprehensive analysis of 13 ventricular myocyte models. By simulating populations of myocytes with varying properties and analyzing population results statistically, we simultaneously predicted the rate-dependent effects of changes in multiple model parameters. An average of 40 parameters were tested in each model, and effects on AP duration were assessed at slow (0.2 Hz) and fast (2 Hz) rates. The analysis identified a variety of FRD ionic current perturbations and generated specific predictions regarding their mechanisms. For instance, an increase in L-type calcium current is FRD when this is accompanied by indirect, rate-dependent changes in slow delayed rectifier potassium current. A comparison of predictions across models identified inward rectifier potassium current and the sodium-potassium pump as the two targets most likely to produce FRD AP prolongation. Finally, a statistical analysis of results from the 13 models demonstrated that models displaying minimal rate-dependent changes in AP shape have little capacity for FRD perturbations, whereas models with large shape changes have considerable FRD potential. This can explain differences between species and between ventricular cell types. Overall, this study provides new insights, both specific and general, into the determinants of AP duration rate dependence, and illustrates a strategy for the design of potentially beneficial antiarrhythmic drugs.
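
    The population-based statistical strategy described above can be sketched in miniature: build a random population of model variants, compute an outcome at two pacing rates, and regress the outcome on the (log) parameter scale factors. Every number below, from the parameter effects to the noise levels, is invented for illustration; the study itself runs 13 ventricular myocyte models, not a linear toy.

```python
import numpy as np

rng = np.random.default_rng(6)
n_cells, n_params = 500, 4

# population of model variants: random scale factors on 4 hypothetical currents
scales = np.exp(0.3 * rng.standard_normal((n_cells, n_params)))

# toy "APD model": each parameter's effect on AP duration depends on rate
# (coefficients are invented; a real study simulates myocyte models instead)
coef_slow = np.array([-40.0, 25.0, -10.0, 5.0])   # effects at 0.2 Hz
coef_fast = np.array([-20.0, 30.0, -10.0, 5.0])   # effects at 2 Hz
apd_slow = 300 + np.log(scales) @ coef_slow + rng.normal(0, 2, n_cells)
apd_fast = 250 + np.log(scales) @ coef_fast + rng.normal(0, 2, n_cells)

# regress APD on log-parameters at each rate to recover the sensitivities
X = np.column_stack([np.log(scales), np.ones(n_cells)])
b_slow, *_ = np.linalg.lstsq(X, apd_slow, rcond=None)
b_fast, *_ = np.linalg.lstsq(X, apd_fast, rcond=None)

# in this toy setup, a parameter whose effect on APD is clearly larger at the
# fast rate than at the slow rate acts forward rate dependently
frd = b_fast[:n_params] > b_slow[:n_params] + 1.0
```

    Running the regression at both rates and comparing coefficients parameter by parameter is what lets a single simulated population yield rate-dependence predictions for many parameters at once.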

  17. The Relationship between Background Classical Music and Reading Comprehension on Seventh and Eighth Grade Students

    ERIC Educational Resources Information Center

    Falcon, Evelyn

    2017-01-01

    The purpose of this study was to examine if there is any relationship on reading comprehension when background classical music is played in the setting of a 7th and 8th grade classroom. This study also examined if there was a statistically significant difference in test anxiety when listening to classical music while completing a test. Reading…

  18. Statistical analysis of trypanosomes' motility

    NASA Astrophysics Data System (ADS)

    Zaburdaev, Vasily; Uppaluri, Sravanti; Pfohl, Thomas; Engstler, Markus; Stark, Holger; Friedrich, Rudolf

    2010-03-01

    The trypanosome is a parasite that causes sleeping sickness. How it moves in the blood stream and penetrates various obstacles is an area of active research. Our goal was to investigate free trypanosome motion in a planar geometry. Our analysis of trypanosome trajectories reveals two correlation times: one associated with the fast motion of the body, and a second with the slower rotational diffusion of the trypanosome treated as a point object. We propose a system of Langevin equations to model this motion. One of its peculiarities is the presence of multiplicative noise, which predicts a higher noise level at higher trypanosome velocities. Theoretical and numerical results give a comprehensive description of the experimental data, including the mean squared displacement, the velocity distribution, and the auto-correlation function.
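
    A Langevin equation with multiplicative noise of the kind described above can be illustrated with a minimal one-dimensional sketch. All parameter values are assumed for illustration; the paper's actual model couples body motion and rotational diffusion.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps = 0.01, 20000
tau = 1.0      # velocity relaxation time (assumed)
sigma = 0.5    # noise strength (assumed)

v = np.zeros(n_steps)
x = np.zeros(n_steps)
for i in range(1, n_steps):
    # Euler-Maruyama step; the noise amplitude grows with |v|,
    # i.e. multiplicative noise: faster motion is noisier
    noise = sigma * (1.0 + abs(v[i - 1])) * np.sqrt(dt) * rng.standard_normal()
    v[i] = v[i - 1] - (v[i - 1] / tau) * dt + noise
    x[i] = x[i - 1] + v[i - 1] * dt

# mean-squared displacement at increasing lag times
lags = [10, 100, 1000]
msd = [float(np.mean((x[lag:] - x[:-lag]) ** 2)) for lag in lags]
```

    The simulated trajectory yields exactly the observables the abstract mentions: the mean squared displacement, the velocity distribution (histogram of `v`), and the velocity auto-correlation function.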

  19. Joint approximate diagonalization of eigenmatrices as a high-throughput approach for analysis of hyphenated and comprehensive two-dimensional gas chromatographic data.

    PubMed

    Zarghani, Maryam; Parastar, Hadi

    2017-11-17

    The objective of the present work is the development of joint approximate diagonalization of eigenmatrices (JADE), a member of the independent component analysis (ICA) family, for the analysis of gas chromatography-mass spectrometry (GC-MS) and comprehensive two-dimensional gas chromatography-mass spectrometry (GC×GC-MS) data, to address the incomplete separation problem that occurs during the analysis of complex sample matrices. In this regard, simulated GC-MS and GC×GC-MS data sets with different numbers of components and different degrees of overlap and noise were evaluated. In the case of simultaneous analysis of multiple samples, column-wise augmentation for GC-MS and column-wise super-augmentation for GC×GC-MS were used before JADE analysis. The performance of JADE was evaluated in terms of the statistical parameters lack of fit (LOF), mutual information (MI), and the Amari index, as well as analytical figures of merit (AFOMs) obtained from calibration curves. In addition, the area of feasible solutions (AFS) was calculated by two different approaches, MCR-BANDs and the polygon inflation algorithm (FACPACK). Furthermore, JADE performance was compared with multivariate curve resolution-alternating least squares (MCR-ALS) and with the ICA algorithms mean-field ICA (MFICA) and mutual information least dependent component analysis (MILCA). In all cases, JADE could successfully resolve the elution and spectral profiles in GC-MS and GC×GC-MS data with acceptable statistical and calibration parameters, and its solutions were within the AFS. To check the applicability of JADE in real cases, it was used for the resolution and quantification of phenanthrene and anthracene in the aromatic fraction of heavy fuel oil (HFO) analyzed by GC×GC-MS. Surprisingly, the pure elution and spectral profiles of the target compounds were properly resolved in the presence of baseline and interferences using JADE. Once more, the performance of JADE was compared with that of MCR-ALS in the real case. In this comparison, the mutual information (MI) values were 1.01 and 1.13 for profiles resolved by JADE and MCR-ALS, respectively. In addition, LOD values (μg/mL) were 1.36 and 1.24 for phenanthrene and 1.26 and 1.09 for anthracene using MCR-ALS and JADE, respectively, which showed the outperformance of JADE over MCR-ALS. Copyright © 2017 Elsevier B.V. All rights reserved.
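
    The unmixing problem JADE addresses can be illustrated with a toy blind-source-separation sketch. Note that this uses a simple kurtosis-based rotation search after whitening, not the actual JADE algorithm (which jointly diagonalizes fourth-order cumulant matrices); the elution profiles and mixing matrix are invented for illustration.

```python
import numpy as np

# two synthetic "elution profiles" (Gaussian peaks) and a 2x2 mixing matrix,
# all invented for illustration
t = np.linspace(0.0, 1.0, 400)
s1 = np.exp(-((t - 0.35) / 0.05) ** 2)
s2 = np.exp(-((t - 0.65) / 0.05) ** 2)
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
X = A @ S                                   # observed, incompletely separated mixtures

# whiten the centered data
Xc = X - X.mean(axis=1, keepdims=True)
cov = Xc @ Xc.T / Xc.shape[1]
d, E = np.linalg.eigh(cov)
Z = (E @ np.diag(d ** -0.5) @ E.T) @ Xc

def excess_kurtosis(y):
    return np.mean(y ** 4) / np.mean(y ** 2) ** 2 - 3.0

# search rotations of the whitened data for the most non-Gaussian projection;
# sparse chromatographic peaks have high kurtosis, mixtures of them have less
thetas = np.linspace(0.0, np.pi, 361)
best = max(thetas,
           key=lambda th: abs(excess_kurtosis(np.cos(th) * Z[0] + np.sin(th) * Z[1])))
recovered = np.cos(best) * Z[0] + np.sin(best) * Z[1]
```

    The recovered projection closely tracks one of the underlying peaks, which is the essence of resolving overlapped elution profiles by ICA-type methods.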

  20. Hierarchical multivariate covariance analysis of metabolic connectivity.

    PubMed

    Carbonell, Felix; Charil, Arnaud; Zijdenbos, Alex P; Evans, Alan C; Bedell, Barry J

    2014-12-01

    Conventional brain connectivity analysis is typically based on the assessment of interregional correlations. Given that correlation coefficients are derived from both covariance and variance, group differences in covariance may be obscured by differences in the variance terms. To facilitate a comprehensive assessment of connectivity, we propose a unified statistical framework that interrogates the individual terms of the correlation coefficient. We have evaluated the utility of this method for metabolic connectivity analysis using [18F]2-fluoro-2-deoxyglucose (FDG) positron emission tomography (PET) data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) study. As an illustrative example of the utility of this approach, we examined metabolic connectivity in angular gyrus and precuneus seed regions of mild cognitive impairment (MCI) subjects with low and high β-amyloid burdens. This new multivariate method allowed us to identify alterations in the metabolic connectome, which would not have been detected using classic seed-based correlation analysis. Ultimately, this novel approach should be extensible to brain network analysis and broadly applicable to other imaging modalities, such as functional magnetic resonance imaging (MRI).
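
    The paper's core observation, that group differences in covariance can be masked by variance differences when only correlations are compared, is easy to demonstrate on simulated data (all numbers below are invented).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# two simulated groups of paired regional signals with the SAME covariance
# but different variances (values invented for illustration)
g1 = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=n)
g2 = rng.multivariate_normal([0, 0], [[4.0, 0.5], [0.5, 4.0]], size=n)

def cov_and_corr(data):
    c = np.cov(data, rowvar=False)
    covariance = c[0, 1]
    correlation = covariance / np.sqrt(c[0, 0] * c[1, 1])
    return covariance, correlation

cov1, r1 = cov_and_corr(g1)
cov2, r2 = cov_and_corr(g2)
# the covariances are nearly equal, yet the correlations differ markedly,
# because the larger variances in group 2 dilute r = cov / (sd1 * sd2)
```

    Interrogating covariance and variance terms separately, as the proposed framework does, avoids mistaking this variance effect for a genuine connectivity difference.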

  1. Prevalence of suicidal ideation and suicide attempts in the general population of China: A meta-Analysis

    PubMed Central

    CAO, XIAO-LAN; ZHONG, BAO-LIANG; XIANG, YU-TAO; UNGVARI, GABOR S.; LAI, KELLY Y. C.; CHIU, HELEN F. K.; CAINE, ERIC D.

    2015-01-01

    Objective The objective of this meta-analysis is to estimate the pooled prevalence of suicidal ideation and suicide attempts in the general population of Mainland China. Methods A systematic literature search was conducted via the following databases: PubMed, PsycINFO, MEDLINE, China Journals Full-Text Databases, the Chongqing VIP database for Chinese Technical Periodicals, and Wan Fang Data. Statistical analysis used the Comprehensive Meta-Analysis program. Results Eight studies met the inclusion criteria for the analysis; five reported on the prevalence of suicidal ideation and seven on that of suicide attempts. The estimated lifetime prevalences of suicidal ideation and suicide attempts were 3.9% (95% confidence interval [CI]: 2.5%–6.0%) and 0.8% (95% CI: 0.7%–0.9%), respectively. The estimated female-to-male ratios for the lifetime prevalence of suicidal ideation and suicide attempts were 1.7 and 2.2, respectively. Only the gender difference in suicide attempts was statistically significant. Conclusion This was the first meta-analysis of the prevalence of suicidal ideation and suicide attempts in the general population of Mainland China. The pooled lifetime prevalences of suicidal ideation and suicide attempts are relatively low; however, caution is required when assessing these self-report data. Women had a modestly higher prevalence of suicide attempts than men. The frequencies of suicidal ideation and suicide attempts in urban regions were similar to those in rural areas. PMID:26060259
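
    As a sketch of how such a pooled prevalence is obtained, the snippet below applies fixed-effect inverse-variance pooling on the logit scale to invented study-level data. (The Comprehensive Meta-Analysis program used in the paper also supports random-effects models; the study prevalences and sample sizes here are hypothetical.)

```python
import math

# hypothetical study-level prevalences and sample sizes (illustrative only;
# not the studies pooled in the paper)
studies = [(0.035, 1200), (0.042, 800), (0.050, 1500), (0.030, 2000)]

weights, logits = [], []
for p, n in studies:
    logit = math.log(p / (1.0 - p))      # logit transform stabilizes small proportions
    var = 1.0 / (n * p * (1.0 - p))      # variance of the estimated logit
    weights.append(1.0 / var)
    logits.append(logit)

# inverse-variance weighted average on the logit scale, then back-transform
pooled_logit = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
pooled = 1.0 / (1.0 + math.exp(-pooled_logit))
```

    Working on the logit scale keeps the pooled estimate and its confidence bounds inside the (0, 1) range, which matters for rare outcomes such as suicide attempts.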

  2. An R package for analyzing and modeling ranking data

    PubMed Central

    2013-01-01

    Background In medical informatics, psychology, market research and many other fields, researchers often need to analyze and model ranking data. However, there is no statistical software that provides tools for the comprehensive analysis of ranking data. Here, we present pmr, an R package for analyzing and modeling ranking data with a bundle of tools. The pmr package enables descriptive statistics (mean rank, pairwise frequencies, and marginal matrix), Analytic Hierarchy Process models (with Saaty’s and Koczkodaj’s inconsistencies), probability models (Luce model, distance-based model, and rank-ordered logit model), and the visualization of ranking data with multidimensional preference analysis. Results Examples of the use of package pmr are given using a real ranking dataset from medical informatics, in which 566 Hong Kong physicians ranked the top five incentives (1: competitive pressures; 2: increased savings; 3: government regulation; 4: improved efficiency; 5: improved quality care; 6: patient demand; 7: financial incentives) for the computerization of clinical practice. The mean ranks showed that item 4 was the most preferred item and item 3 the least preferred, and a significant difference was found between physicians’ preferences with respect to their monthly income. A multidimensional preference analysis identified two dimensions that explain 42% of the total variance. The first can be interpreted as the overall preference for the seven items (labeled “internal/external”), and the second as the overall variance of the items (labeled “push/pull factors”). Various statistical models were fitted, and the best were found to be weighted distance-based models with Spearman’s footrule distance. Conclusions In this paper, we presented the R package pmr, the first package for analyzing and modeling ranking data. The package provides insight to users through descriptive statistics of ranking data. Users can also visualize ranking data through multidimensional preference analysis. Various probability models for ranking data are also included, allowing users to choose the one most suitable to their specific situations. PMID:23672645
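
    The descriptive statistics pmr provides can be sketched in a few lines. The toy rankings below are invented, and the computation is shown in Python rather than via the package's R functions.

```python
import numpy as np

# each row is one respondent's ranking of 4 items (rank 1 = most preferred);
# the data are invented for illustration
rankings = np.array([
    [1, 2, 3, 4],
    [2, 1, 3, 4],
    [1, 3, 2, 4],
    [3, 1, 2, 4],
])

mean_rank = rankings.mean(axis=0)   # lower mean rank = more preferred

# pairwise frequency matrix: fraction of respondents ranking item i above item j
n_items = rankings.shape[1]
pairwise = np.zeros((n_items, n_items))
for i in range(n_items):
    for j in range(n_items):
        if i != j:
            pairwise[i, j] = np.mean(rankings[:, i] < rankings[:, j])
```

    Here the last item has mean rank 4.0 (always least preferred), and `pairwise[0, 3]` is 1.0: every respondent ranks the first item above the last, the kind of summary from which preference structure is read off before any probability model is fitted.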

  3. An R package for analyzing and modeling ranking data.

    PubMed

    Lee, Paul H; Yu, Philip L H

    2013-05-14

    In medical informatics, psychology, market research and many other fields, researchers often need to analyze and model ranking data. However, there is no statistical software that provides tools for the comprehensive analysis of ranking data. Here, we present pmr, an R package for analyzing and modeling ranking data with a bundle of tools. The pmr package enables descriptive statistics (mean rank, pairwise frequencies, and marginal matrix), Analytic Hierarchy Process models (with Saaty's and Koczkodaj's inconsistencies), probability models (Luce model, distance-based model, and rank-ordered logit model), and the visualization of ranking data with multidimensional preference analysis. Examples of the use of package pmr are given using a real ranking dataset from medical informatics, in which 566 Hong Kong physicians ranked the top five incentives (1: competitive pressures; 2: increased savings; 3: government regulation; 4: improved efficiency; 5: improved quality care; 6: patient demand; 7: financial incentives) for the computerization of clinical practice. The mean ranks showed that item 4 was the most preferred item and item 3 the least preferred, and a significant difference was found between physicians' preferences with respect to their monthly income. A multidimensional preference analysis identified two dimensions that explain 42% of the total variance. The first can be interpreted as the overall preference for the seven items (labeled "internal/external"), and the second as the overall variance of the items (labeled "push/pull factors"). Various statistical models were fitted, and the best were found to be weighted distance-based models with Spearman's footrule distance. In this paper, we presented the R package pmr, the first package for analyzing and modeling ranking data. The package provides insight to users through descriptive statistics of ranking data. Users can also visualize ranking data through multidimensional preference analysis. Various probability models for ranking data are also included, allowing users to choose the one most suitable to their specific situations.

  4. Statistical Supplement to the Annual Report of the Coordinating Board, Texas College and University System for the Fiscal Year 1980.

    ERIC Educational Resources Information Center

    Texas Coll. and Univ. System, Austin. Coordinating Board.

    Comprehensive statistical data on Texas higher education is presented. Data and formulas relating to student enrollments and faculty headcounts, program development and productivity, faculty salaries and teaching loads, campus development, funding, and the state student load program are included. Student headcount enrollment data are presented by…

  5. Statistical Supplement to the Annual Report of the Coordinating Board, Texas College and University System for Fiscal Year 1978.

    ERIC Educational Resources Information Center

    Ashworth, Kenneth H.

    This supplement to the 1978 Annual Report of the Coordinating Board, Texas College and University System, contains comprehensive statistical data on higher education in Texas. The supplement provides facts, figures, and formulas relating to student enrollments and faculty headcounts, program development and productivity, faculty salaries and…

  6. Replicate This! Creating Individual-Level Data from Summary Statistics Using R

    ERIC Educational Resources Information Center

    Morse, Brendan J.

    2013-01-01

    Incorporating realistic data and research examples into quantitative (e.g., statistics and research methods) courses has been widely recommended for enhancing student engagement and comprehension. One way to achieve these ends is to use a data generator to emulate the data in published research articles. "MorseGen" is a free data generator that…

  7. African Americans' Participation in a Comprehensive Intervention College Prep Program

    ERIC Educational Resources Information Center

    Sianjina, Rayton R.; Phillips, Richard

    2014-01-01

    The National Center for Educational Statistics, in conjunction with the U.S. Department of Education, compiles statistical data for U.S. schools. As charts indicate, in 2001, it reported that nationwide, 76% of high-income graduates immediately enroll in colleges or trade schools. However, only 49% of Hispanic and 59% of African Americans enroll…

  8. Transparency in State Debt Disclosure. Working Papers. No. 17-10

    ERIC Educational Resources Information Center

    Zhao, Bo; Wang, Wen

    2017-01-01

    We develop a new measure of relative debt transparency by comparing the amount of state debt reported in the annual Census survey and the amount reported in the statistical section of the state Comprehensive Annual Financial Report (CAFR). GASB 44 requires states to start reporting their total debt in the CAFR statistical section in FY 2006.…

  9. The determinants of bond angle variability in protein/peptide backbones: A comprehensive statistical/quantum mechanics analysis.

    PubMed

    Improta, Roberto; Vitagliano, Luigi; Esposito, Luciana

    2015-11-01

    The elucidation of the mutual influence between peptide bond geometry and local conformation has important implications for protein structure refinement, validation, and prediction. To gain insights into the structural determinants and the energetic contributions associated with protein/peptide backbone plasticity, we here report an extensive analysis of the variability of the peptide bond angles by combining statistical analyses of protein structures and quantum mechanics calculations on small model peptide systems. Our analyses demonstrate that all the backbone bond angles strongly depend on the peptide conformation and unveil the existence of regular trends as function of ψ and/or φ. The excellent agreement of the quantum mechanics calculations with the statistical surveys of protein structures validates the computational scheme here employed and demonstrates that the valence geometry of protein/peptide backbone is primarily dictated by local interactions. Notably, for the first time we show that the position of the H(α) hydrogen atom, which is an important parameter in NMR structural studies, is also dependent on the local conformation. Most of the trends observed may be satisfactorily explained by invoking steric repulsive interactions; in some specific cases the valence bond variability is also influenced by hydrogen-bond like interactions. Moreover, we can provide a reliable estimate of the energies involved in the interplay between geometry and conformations. © 2015 Wiley Periodicals, Inc.

  10. Spatial analyses identify the geographic source of patients at a National Cancer Institute Comprehensive Cancer Center.

    PubMed

    Su, Shu-Chih; Kanarek, Norma; Fox, Michael G; Guseynova, Alla; Crow, Shirley; Piantadosi, Steven

    2010-02-01

    We examined the geographic distribution of patients to better understand the service area of the Sidney Kimmel Comprehensive Cancer Center at Johns Hopkins, a designated National Cancer Institute (NCI) comprehensive cancer center located in an urban center. Like most NCI cancer centers, the Sidney Kimmel Comprehensive Cancer Center serves a population beyond city limits. Urban cancer centers are expected to serve their immediate neighborhoods and to address disparities in access to specialty care. Our purpose was to learn the extent and nature of the cancer center service area. Statistical clustering of patient residence in the continental United States was assessed for all patients and by gender, cancer site, and race using SaTScan. Primary clusters detected for all cases and demographically and tumor-defined subpopulations were centered at Baltimore City and consisted of adjacent counties in Delaware, Pennsylvania, Virginia, West Virginia, New Jersey and New York, and the District of Columbia. Primary clusters varied in size by race, gender, and cancer site. Spatial analysis can provide insights into the populations served by urban cancer centers, assess centers' performance relative to their communities, and aid in developing a cancer center business plan that recognizes strengths, regional utility, and referral patterns. Today, 62 NCI cancer centers serve a quarter of the U.S. population in their immediate communities. From the Baltimore experience, we might project that the population served by these centers is actually more extensive and varies by patient characteristics, cancer site, and probably cancer center services offered.

  11. Differentiation of five body fluids from forensic samples by expression analysis of four microRNAs using quantitative PCR.

    PubMed

    Sauer, Eva; Reinke, Ann-Kathrin; Courts, Cornelius

    2016-05-01

    Applying molecular genetic approaches for the identification of forensically relevant body fluids, which often yield crucial information for the reconstruction of a potential crime, is a current topic of forensic research. Due to their body fluid specific expression patterns and stability against degradation, microRNAs (miRNA) have emerged as a promising molecular species, with a range of candidate markers published. The analysis of miRNA via quantitative real-time PCR, however, should be based on a sound strategy for normalizing non-biological variances in order to deliver reliable and biologically meaningful results. The work presented here is the most comprehensive study to date of forensic body fluid identification via miRNA expression analysis, based on a thoroughly validated qPCR procedure and unbiased statistical decision making for the identification of single-source samples. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
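
    qPCR normalization of the kind the study validates commonly follows the ΔCq approach, in which target quantification cycles are referenced to a stably expressed control. A minimal sketch, with hypothetical markers and Cq values:

```python
# relative quantification via the ΔCq method: target miRNA Cq values are
# normalized against a stably expressed reference miRNA
# (fluid names and Cq numbers below are hypothetical)
samples = {
    "venous blood": {"target_cq": 24.0, "reference_cq": 20.0},
    "saliva":       {"target_cq": 28.5, "reference_cq": 20.2},
}

relative_expression = {}
for fluid, cq in samples.items():
    delta_cq = cq["target_cq"] - cq["reference_cq"]
    # 2**-ΔCq: each cycle earlier roughly doubles the detected amount
    relative_expression[fluid] = 2.0 ** (-delta_cq)
# the hypothetical marker is relatively far more abundant in blood than saliva
```

    Body fluid calls are then made by comparing such normalized expression values across candidate fluids, rather than raw Cq values, which also reflect input amount and assay efficiency.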

  12. Evaluation of the sustainability of contrasted pig farming systems: economy.

    PubMed

    Ilari-Antoine, E; Bonneau, M; Klauke, T N; Gonzàlez, J; Dourmad, J Y; De Greef, K; Houwers, H W J; Fabrega, E; Zimmer, C; Hviid, M; Van der Oever, B; Edwards, S A

    2014-12-01

    The aim of this paper is to present an efficient tool for evaluating the economic dimension of the sustainability of pig farming systems. The selected tool, IDEA, was tested on a sample of farms from 15 contrasting systems in Europe. A statistical analysis was carried out to check the capacity of the indicators to illustrate the variability of the population and to determine which of the indicators contributed most to it. The scores obtained for the farms were consistent with the reality of pig production, and the variable distributions showed considerable variability in the sample. Principal component analysis and cluster analysis separated the sample into five subgroups in which the six main indicators differed significantly, which underlines the robustness of the tool. The IDEA method proved easy to comprehend, requiring few initial variables and offering an efficient benchmarking system; all six indicators contributed to fully describing a varied and contrasted population.
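
    The principal component step of such an analysis can be sketched with plain NumPy on invented farm-indicator data. Two latent economic factors are planted so that the first two components dominate; the matrix is an illustrative stand-in, not the IDEA indicators themselves.

```python
import numpy as np

rng = np.random.default_rng(2)
# hypothetical farm-level matrix: 30 farms x 6 economic indicators,
# driven by two latent factors plus measurement noise (all invented)
base = rng.normal(size=(30, 2))
loadings = rng.normal(size=(2, 6))
X = base @ loadings + 0.1 * rng.normal(size=(30, 6))

# principal component analysis via SVD of the centered data
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
# with two planted factors, the first two components capture nearly all variance
```

    In the study, the farm scores on such components were then clustered, separating the sample into subgroups with significantly different indicator profiles.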

  13. TASI: A software tool for spatial-temporal quantification of tumor spheroid dynamics.

    PubMed

    Hou, Yue; Konen, Jessica; Brat, Daniel J; Marcus, Adam I; Cooper, Lee A D

    2018-05-08

    Spheroid cultures derived from explanted cancer specimens are an increasingly utilized resource for studying complex biological processes like tumor cell invasion and metastasis, representing an important bridge between the simplicity and practicality of 2-dimensional monolayer cultures and the complexity and realism of in vivo animal models. Temporal imaging of spheroids can capture the dynamics of cell behaviors and microenvironments, and when combined with quantitative image analysis methods, enables deep interrogation of biological mechanisms. This paper presents a comprehensive open-source software framework for Temporal Analysis of Spheroid Imaging (TASI) that allows investigators to objectively characterize spheroid growth and invasion dynamics. TASI performs spatiotemporal segmentation of spheroid cultures, extraction of features describing spheroid morpho-phenotypes, mathematical modeling of spheroid dynamics, and statistical comparisons of experimental conditions. We demonstrate the utility of this tool in an analysis of non-small cell lung cancer spheroids that exhibit variability in metastatic and proliferative behaviors.
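
    The segmentation-then-quantification idea behind TASI can be sketched on synthetic frames: a bright disk that grows over time plus noise. Frame sizes, radii, and the threshold are invented, and TASI's actual pipeline is considerably more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(4)

def frame(radius, size=64, noise=0.05):
    # synthetic image: bright disk of the given radius on a noisy background
    yy, xx = np.mgrid[:size, :size]
    disk = ((xx - size // 2) ** 2 + (yy - size // 2) ** 2) <= radius ** 2
    return disk.astype(float) + noise * rng.standard_normal((size, size))

# a "time series" of frames from a growing spheroid (radii invented)
frames = [frame(r) for r in (5, 8, 11, 14)]

# simple threshold segmentation, then track the segmented area over time
areas = [int((f > 0.5).sum()) for f in frames]
# the areas grow monotonically as the synthetic spheroid expands
```

    Feature curves like this area trajectory are what get fed into mathematical models of growth and into statistical comparisons between experimental conditions.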

  14. MPTinR: analysis of multinomial processing tree models in R.

    PubMed

    Singmann, Henrik; Kellen, David

    2013-06-01

    We introduce MPTinR, a software package developed for the analysis of multinomial processing tree (MPT) models. MPT models represent a prominent class of cognitive measurement models for categorical data with applications in a wide variety of fields. MPTinR is the first software for the analysis of MPT models in the statistical programming language R, providing a modeling framework that is more flexible than standalone software packages. MPTinR also introduces important features such as (1) the ability to calculate the Fisher information approximation measure of model complexity for MPT models, (2) the ability to fit models for categorical data outside the MPT model class, such as signal detection models, (3) a function for model selection across a set of nested and nonnested candidate models (using several model selection indices), and (4) multicore fitting. MPTinR is available from the Comprehensive R Archive Network at http://cran.r-project.org/web/packages/MPTinR/ .
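
    The kind of model MPTinR fits can be illustrated with the one-high-threshold recognition model, a textbook MPT, estimated here by a coarse grid search over the likelihood. The data and code are invented for illustration and are unrelated to MPTinR's own implementation, which uses proper numerical optimization.

```python
import math

# toy recognition data: hits/misses on old items, false alarms/correct
# rejections on new items (all counts invented)
hits, misses = 75, 25
fas, crs = 20, 80

# one-high-threshold MPT model:
#   P("old" | old item) = D + (1 - D) * g   (detect it, or fail and guess "old")
#   P("old" | new item) = g                 (guess "old")
def neg_log_lik(D, g):
    p_hit = D + (1.0 - D) * g
    p_fa = g
    return -(hits * math.log(p_hit) + misses * math.log(1.0 - p_hit)
             + fas * math.log(p_fa) + crs * math.log(1.0 - p_fa))

# coarse grid search for the maximum-likelihood parameter estimates
best = min((neg_log_lik(D / 100, g / 100), D / 100, g / 100)
           for D in range(1, 100) for g in range(1, 100))
_, D_hat, g_hat = best
```

    The tree structure, latent detection and guessing processes mapped onto observed category counts, is the defining feature of MPT models, and the binomial likelihood above is exactly what packages like MPTinR maximize.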

  15. QUEST/Ada (Query Utility Environment for Software Testing) of Ada: The development of a program analysis environment for Ada

    NASA Technical Reports Server (NTRS)

    Brown, David B.

    1988-01-01

    A history of the Query Utility Environment for Software Testing (QUEST)/Ada is presented. A fairly comprehensive literature review which is targeted toward issues of Ada testing is given. The definition of the system structure and the high level interfaces are then presented. The design of the three major components is described. The QUEST/Ada IORL System Specifications to this point in time are included in the Appendix. A paper is also included in the appendix which gives statistical evidence of the validity of the test case generation approach which is being integrated into QUEST/Ada.

  16. Homogeneous buoyancy-generated turbulence

    NASA Technical Reports Server (NTRS)

    Batchelor, G. K.; Canuto, V. M.; Chasnov, J. R.

    1992-01-01

    Using a theoretical analysis of fundamental equations and a numerical simulation of the flow field, the statistically homogeneous motion that is generated by buoyancy forces after the creation of homogeneous random fluctuations in the density of an infinite fluid at an initial instant is examined. It is shown that the analytical results together with the numerical results provide a comprehensive description of the 'birth, life, and death' of buoyancy-generated turbulence. The numerical simulations yielded the mean-square density and mean-square velocity fluctuations and the associated spectra as functions of time for various initial conditions, and the time required for the mean-square density fluctuation to fall to a specified small value was estimated.

  17. Ramifications of increased training in quantitative methodology.

    PubMed

    Zimiles, Herbert

    2009-01-01

    Comments on the article "Doctoral training in statistics, measurement, and methodology in psychology: Replication and extension of Aiken, West, Sechrest, and Reno's (1990) survey of PhD programs in North America" by Aiken, West, and Millsap. The current author asks three questions that are provoked by the comprehensive identification of gaps and deficiencies in the training of quantitative methodology that led Aiken, West, and Millsap to call for expanded graduate instruction resources and programs. This comment calls for greater attention to how advances and expansion in the training of quantitative analysis are influencing who chooses to study psychology and how and what will be studied. PsycINFO Database Record 2009 APA.

  18. Not virtual, but a real, live, online, interactive reference service.

    PubMed

    Jerant, Lisa Lott; Firestein, Kenneth

    2003-01-01

    In today's fast-paced environment, traditional medical reference services alone are not adequate to meet users' information needs. Efforts to find new ways to provide comprehensive service to users, where and when needed, have often included the use of new and developing technologies. This paper describes the experience of an academic health science library in developing and providing an online, real-time reference service. Issues discussed include selecting software, training librarians, staffing the service, and considering the future of the service. Use statistics, question type analysis, and feedback from users of the service and librarians who staff the service, are also presented.

  19. Employee resourcing strategies and universities' corporate image: A survey dataset.

    PubMed

    Falola, Hezekiah Olubusayo; Oludayo, Olumuyiwa Akinrole; Olokundun, Maxwell Ayodele; Salau, Odunayo Paul; Ibidunni, Ayodotun Stephen; Igbinoba, Ebe

    2018-06-01

    The data examined the effect of employee resourcing strategies on corporate image. The data were generated from a total of 500 copies of a questionnaire administered to the academic staff of six selected private universities in Southwest Nigeria, of which four hundred and forty-three (443) were retrieved. Stratified and simple random sampling techniques were used to select the respondents for this study. Descriptive statistics and linear regression were used for the presentation of the data, and the mean score was used as the statistical tool of analysis. The data presented in this article are therefore made available to facilitate further and more comprehensive investigation of the subject matter.

  20. PathJam: a new service for integrating biological pathway information.

    PubMed

    Glez-Peña, Daniel; Reboiro-Jato, Miguel; Domínguez, Rubén; Gómez-López, Gonzalo; Pisano, David G; Fdez-Riverola, Florentino

    2010-10-28

    Biological pathways are crucial to much of the scientific research today including the study of specific biological processes related with human diseases. PathJam is a new comprehensive and freely accessible web-server application integrating scattered human pathway annotation from several public sources. The tool has been designed for both (i) being intuitive for wet-lab users providing statistical enrichment analysis of pathway annotations and (ii) giving support to the development of new integrative pathway applications. PathJam’s unique features and advantages include interactive graphs linking pathways and genes of interest, downloadable results in fully compatible formats, GSEA compatible output files and a standardized RESTful API.
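
    The "statistical enrichment analysis of pathway annotations" mentioned above is conventionally a hypergeometric (one-sided Fisher) test. A minimal sketch with invented gene counts; this illustrates the standard approach, not PathJam's internal implementation.

```python
from math import comb

# pathway enrichment via the hypergeometric test (all counts invented)
N = 20000   # genes in the annotated background
K = 150     # genes annotated to the pathway of interest
n = 300     # genes in the user's input list
k = 12      # overlap between the input list and the pathway

# P(X >= k): probability of drawing at least k pathway genes by chance
# when sampling n genes without replacement from the background
p_value = sum(comb(K, i) * comb(N - K, n - i)
              for i in range(k, min(K, n) + 1)) / comb(N, n)
# the expected overlap by chance is n * K / N = 2.25, so 12 is highly enriched
```

    A server would run this test for every pathway in its annotation sources and then correct the resulting p-values for multiple testing.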

  1. Innovative intelligent technology of distance learning for visually impaired people

    NASA Astrophysics Data System (ADS)

    Samigulina, Galina; Shayakhmetova, Assem; Nuysuppov, Adlet

    2017-12-01

    The aim of the study is to develop innovative intelligent technology and information systems for the distance education of people with impaired vision (PIV). To solve this problem, a comprehensive approach has been proposed that combines artificial intelligence methods with statistical analysis. Creating an accessible learning environment and identifying the intellectual, physiological, and psychophysiological characteristics of perception and information awareness in this category of people is based on a cognitive approach. On the basis of fuzzy logic, an individually oriented learning path for PIV is constructed with the aim of providing high-quality engineering education with modern equipment in joint-use laboratories.

  2. Exploring the complementarity of THz pulse imaging and DCE-MRIs: Toward a unified multi-channel classification and a deep learning framework.

    PubMed

    Yin, X-X; Zhang, Y; Cao, J; Wu, J-L; Hadjiloucas, S

    2016-12-01

    We provide a comprehensive account of recent advances in biomedical image analysis and classification from two complementary imaging modalities: terahertz (THz) pulse imaging and dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The work aims to highlight underlying commonalities in both data structures so that a common multi-channel data fusion framework can be developed. Signal pre-processing in both datasets is discussed briefly, taking into consideration advances in multi-resolution analysis and model-based fractional order calculus system identification. Developments in statistical signal processing using principal component and independent component analysis are also considered. These algorithms have been developed independently by the THz-pulse imaging and DCE-MRI communities, and there is scope to place them in a common multi-channel framework to provide better software standardization at the pre-processing de-noising stage. A comprehensive discussion of feature selection strategies is also provided, and the importance of preserving textural information is highlighted. Feature extraction and classification methods are presented, taking into consideration recent advances in support vector machine (SVM) and extreme learning machine (ELM) classifiers and their complex extensions. An outlook on Clifford algebra classifiers and deep learning techniques suitable for both types of datasets is also provided. The work points toward the development of a new unified multi-channel signal processing framework for biomedical image analysis that will explore synergies from both sensing modalities for inferring disease proliferation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  3. High-order fuzzy time-series based on multi-period adaptation model for forecasting stock markets

    NASA Astrophysics Data System (ADS)

    Chen, Tai-Liang; Cheng, Ching-Hsue; Teoh, Hia-Jong

    2008-02-01

    Stock investors usually make their short-term investment decisions according to recent stock information such as the latest market news, technical analysis reports, and price fluctuations. To reflect these short-term factors that impact stock price, this paper proposes a comprehensive fuzzy time-series model, which factors both linear relationships between recent periods of stock prices and fuzzy logical relationships (nonlinear relationships) mined from the time-series into its forecasting processes. In the empirical analysis, the TAIEX (Taiwan Stock Exchange Capitalization Weighted Stock Index) and HSI (Hang Seng Index) are employed as experimental datasets, and four recent fuzzy time-series models, Chen’s (1996), Yu’s (2005), Cheng’s (2006) and Chen’s (2007), are used as comparison models. In addition, for comparison with a conventional statistical method, the method of least squares is utilized to estimate auto-regressive models for the testing periods within the databases. The performance comparisons indicate that the multi-period adaptation model proposed in this paper can effectively improve the forecasting performance of conventional fuzzy time-series models, which factor only fuzzy logical relationships into their forecasting processes. From the empirical study, the traditional statistical method and the proposed model both reveal that stock price patterns in the Taiwan and Hong Kong stock markets are short-term.
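    The first-order fuzzy time-series machinery that this model builds on (partitioning the universe of discourse, fuzzifying observations, mining fuzzy logical relationships, and defuzzifying by interval midpoints) can be sketched as below. This is a minimal illustration in the style of Chen (1996) on made-up prices, not the paper's multi-period adaptation model.

```python
import numpy as np

def chen_fuzzy_forecast(prices, n_intervals=7):
    """First-order fuzzy time-series forecast in the style of Chen (1996):
    partition the universe of discourse, fuzzify each observation, mine
    fuzzy logical relationships A_i -> A_j, and defuzzify by midpoints."""
    lo, hi = min(prices) - 1, max(prices) + 1
    edges = np.linspace(lo, hi, n_intervals + 1)
    mids = (edges[:-1] + edges[1:]) / 2

    # Fuzzify: index of the interval containing each observation.
    states = np.clip(np.searchsorted(edges, prices, side="right") - 1,
                     0, n_intervals - 1)

    # Mine fuzzy logical relationship groups: A_i -> {A_j, ...}
    groups = {}
    for a, b in zip(states[:-1], states[1:]):
        groups.setdefault(int(a), set()).add(int(b))

    # Forecast t+1 as the mean midpoint of the group for the last state.
    last = int(states[-1])
    successors = groups.get(last, {last})
    return float(np.mean([mids[j] for j in successors]))

series = [120.0, 123.5, 121.0, 125.2, 127.8, 126.1, 129.4, 128.0]
print(round(chen_fuzzy_forecast(series), 2))
```

    The proposed comprehensive model additionally blends such fuzzy forecasts with linear terms from recent periods, which this sketch omits.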

  4. Stress perception and social indicators for low back, shoulder and joint pains in Japan: national surveys in 1995 and 2001.

    PubMed

    Takeuchi, Takeaki; Nakao, Mutsuhiro; Nishikitani, Mariko; Yano, Eiji

    2004-07-01

    This study aims to clarify the effects of stress perception and related social indicators on three major musculoskeletal symptoms: low back, shoulder, and joint pains in a Japanese population. Twenty health-related variables (stress perception and 19 social indicators) and the three symptoms were obtained from the following Japanese national surveys: the Comprehensive Survey of Living Condition of the People on Health and Welfare, the System of Social and Demographic Statistics of Japan, and the Statistical Report on Health Administration Services. The results were compared among 46 Japanese prefectures in 1995 and 2001. By factor analysis, the 19 indicators were classified into three factors: urbanization, aging and life-regularity, and individualization. The prevalence of stress perception was significantly correlated with the eight indicators of the urbanization factor. Although simple correlation analysis revealed a significant relationship of stress perception only to shoulder pain (in both years) and low back pain (in 2001), the results of multiple regression analysis showed that stress perception and some urbanization indicators were significantly associated with all three symptoms in both years, except joint pain in 1995. Taking the effects of urbanization into consideration, stress perception seems to be closely related to complaints of musculoskeletal symptoms in Japan.

  5. Integrated Application of Multivariate Statistical Methods to Source Apportionment of Watercourses in the Liao River Basin, Northeast China

    PubMed Central

    Chen, Jiabo; Li, Fayun; Fan, Zhiping; Wang, Yanjie

    2016-01-01

    Source apportionment of river water pollution is critical in water resource management and aquatic conservation. Comprehensive application of various GIS-based multivariate statistical methods was performed to analyze datasets (2009–2011) on water quality in the Liao River system (China). Cluster analysis (CA) classified the 12 months of the year into three groups (May–October, February–April and November–January) and the 66 sampling sites into three groups (groups A, B and C) based on similarities in water quality characteristics. Discriminant analysis (DA) determined that temperature, dissolved oxygen (DO), pH, chemical oxygen demand (CODMn), 5-day biochemical oxygen demand (BOD5), NH4+–N, total phosphorus (TP) and volatile phenols were significant variables affecting temporal variations, with 81.2% correct assignments. Principal component analysis (PCA) and positive matrix factorization (PMF) identified eight potential pollution factors for each part of the data structure, explaining more than 61% of the total variance. Oxygen-consuming organics from cropland and woodland runoff were the main latent pollution factor for group A. For group B, the main pollutants were oxygen-consuming organics, oil, nutrients and fecal matter. For group C, the evaluated pollutants primarily included oxygen-consuming organics, oil and toxic organics. PMID:27775679
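    The cluster-analysis and PCA steps described above can be sketched with standard tools. The snippet below is an illustrative re-implementation on synthetic data, not the Liao River dataset: Ward-linkage hierarchical clustering groups sampling sites by similarity, and an SVD-based PCA reports the variance explained by the leading component.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Synthetic stand-in for a sites x water-quality-variables matrix
# (columns like DO, pH, CODMn, BOD5, NH4-N, TP); NOT the Liao River data.
X = np.vstack([rng.normal(m, 1.0, size=(22, 6)) for m in (0.0, 3.0, 6.0)])

# Cluster analysis (CA): Ward linkage on standardized data, cut at 3 groups.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
groups = fcluster(linkage(Xs, method="ward"), t=3, criterion="maxclust")

# Principal component analysis (PCA) via SVD of the standardized matrix.
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
explained = s**2 / np.sum(s**2)
print(len(set(groups)), round(float(explained[0]), 2))
```

    Standardizing before clustering and PCA is the usual choice when variables are on different scales, as water-quality parameters are.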

  6. Schooling mediates brain reserve in Alzheimer's disease: findings of fluoro-deoxy-glucose-positron emission tomography.

    PubMed

    Perneczky, R; Drzezga, A; Diehl-Schmid, J; Schmid, G; Wohlschläger, A; Kars, S; Grimmer, T; Wagenpfeil, S; Monsch, A; Kurz, A

    2006-09-01

    Functional imaging studies report that higher education is associated with more severe pathology in patients with Alzheimer's disease when controlling for disease severity. Therefore, schooling seems to provide brain reserve against neurodegeneration. This study aimed to provide further evidence for brain reserve in a large sample, using a sensitive technique for the indirect assessment of brain abnormality (18F-fluoro-deoxy-glucose positron emission tomography (FDG-PET)), a comprehensive measure of global cognitive impairment to control for disease severity (total score of the Consortium to Establish a Registry for Alzheimer's Disease Neuropsychological Battery) and an approach unbiased by predefined regions of interest for the statistical analysis (statistical parametric mapping (SPM)). 93 patients with mild Alzheimer's disease and 16 healthy controls underwent 18F-FDG-PET imaging of the brain. A linear regression analysis with education as the independent variable and glucose utilisation as the dependent variable, adjusted for global cognitive status and demographic variables, was conducted in SPM2. The regression analysis showed a marked inverse association between years of schooling and glucose metabolism in the posterior temporo-occipital association cortex and the precuneus in the left hemisphere. In line with previous reports, the findings suggest that education is associated with brain reserve and that people with higher education can cope with brain damage for a longer time.
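    The covariate-adjusted regression at the heart of this design (an outcome regressed on education while adjusting for cognition and demographics) can be sketched with ordinary least squares. The variables below are simulated for illustration, not the FDG-PET measurements, and the coefficient values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 93
# Hypothetical variables mirroring the design: glucose utilisation
# regressed on years of schooling, adjusted for global cognition and
# age (simulated data, not the study's measurements).
school = rng.uniform(6, 20, n)
cognition = rng.normal(70, 10, n)
age = rng.normal(72, 7, n)
glucose = 10 - 0.15 * school + 0.02 * cognition + rng.normal(0, 0.5, n)

# Ordinary least squares with an intercept via lstsq.
X = np.column_stack([np.ones(n), school, cognition, age])
beta, *_ = np.linalg.lstsq(X, glucose, rcond=None)
print(round(float(beta[1]), 2))   # adjusted slope for schooling
```

    In SPM this regression is fitted voxel-wise across the whole brain; the sketch shows the single-outcome version of the same model.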

  7. 3Drefine: an interactive web server for efficient protein structure refinement

    PubMed Central

    Bhattacharya, Debswapna; Nowotny, Jackson; Cao, Renzhi; Cheng, Jianlin

    2016-01-01

    3Drefine is an interactive web server for consistent and computationally efficient protein structure refinement with the capability to perform web-based statistical and visual analysis. The 3Drefine refinement protocol utilizes iterative optimization of the hydrogen-bonding network combined with atomic-level energy minimization of the optimized model, using composite physics- and knowledge-based force fields, for efficient protein structure refinement. The method has been extensively evaluated on blind CASP experiments as well as on large-scale and diverse benchmark datasets and exhibits consistent improvement over the initial structure in both global and local structural quality measures. The 3Drefine web server allows for convenient protein structure refinement through text or file input submission, with email notification and a provided example submission, and is freely available without any registration requirement. The server also provides comprehensive analysis of submissions through various energy and statistical feedback and interactive visualization of multiple refined models through the JSmol applet, which is equipped with numerous protein model analysis tools. The web server has been extensively tested and used by many users. As a result, the 3Drefine web server conveniently provides a useful tool easily accessible to the community. The 3Drefine web server has been made publicly available at the URL: http://sysbio.rnet.missouri.edu/3Drefine/. PMID:27131371

  8. Examining the delivery modes of metacognitive awareness and active reading lessons in a college nonmajors introductory biology course.

    PubMed

    Hill, Kendra M; Brözel, Volker S; Heiberger, Greg A

    2014-05-01

    Current research supports the role of metacognitive strategies to enhance reading comprehension. This study measured the effectiveness of online versus face-to-face metacognitive and active reading skills lessons introduced by Biology faculty to college students in a nonmajors introductory biology course. These lessons were delivered in two lectures either online (Group 1: N = 154) or face to face (Group 2: N = 152). Previously validated pre- and post- surveys were used to collect and compare data by paired and independent t-test analysis (α = 0.05). Pre- and post- survey data showed a statistically significant improvement in both groups in metacognitive awareness (p = 0.001, p = 0.003, respectively) and reading comprehension (p < 0.001 for both groups). When comparing the delivery mode of these lessons, no difference was detected between the online and face-to-face instruction for metacognitive awareness (pre- p = 0.619, post- p = 0.885). For reading comprehension, no difference in gains was demonstrated between online and face-to-face (p = 0.381); however, differences in pre- and post- test scores were measured (pre- p = 0.005, post- p = 0.038). This study suggests that biology instructors can easily introduce effective metacognitive awareness and active reading lessons into their course, either through online or face-to-face instruction.
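    The analysis plan above (paired t-tests on pre/post scores within a group, and independent t-tests between delivery modes, at α = 0.05) can be sketched as follows. The scores are simulated for illustration and are not the study's data; group sizes are rounded from the reported N.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha = 0.05

# Hypothetical pre/post metacognitive-awareness scores for one group
# (illustrative numbers, not the study's data).
pre = rng.normal(60, 8, size=150)
post = pre + rng.normal(4, 6, size=150)       # simulated gain

# Paired t-test: within-group pre vs post change.
t_paired, p_paired = stats.ttest_rel(pre, post)

# Independent t-test: compare gains between two delivery modes.
gain_online = post - pre
gain_f2f = rng.normal(4, 6, size=150)
t_ind, p_ind = stats.ttest_ind(gain_online, gain_f2f)

print(p_paired < alpha, p_ind < alpha)
```

    The paired test gains power by removing between-student variability, which is why the within-group comparison uses it rather than an independent test.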

  9. Perspectives on statistics education: observations from statistical consulting in an academic nursing environment.

    PubMed

    Hayat, Matthew J; Schmiege, Sarah J; Cook, Paul F

    2014-04-01

    Statistics knowledge is essential for understanding the nursing and health care literature, as well as for applying rigorous science in nursing research. Statistical consultants providing services to faculty and students in an academic nursing program have the opportunity to identify gaps and challenges in statistics education for nursing students. This information may be useful to curriculum committees and statistics educators. This article aims to provide perspective on statistics education stemming from the experiences of three experienced statistics educators who regularly collaborate and consult with nurse investigators. The authors share their knowledge and express their views about data management, data screening and manipulation, statistical software, types of scientific investigation, and advanced statistical topics not covered in the usual coursework. The suggestions provided promote a call for data to study these topics. Relevant data about statistics education can assist educators in developing comprehensive statistics coursework for nursing students. Copyright 2014, SLACK Incorporated.

  10. Meta-analysis methods for combining multiple expression profiles: comparisons, statistical characterization and an application guideline

    PubMed Central

    2013-01-01

    Background As high-throughput genomic technologies become accurate and affordable, an increasing number of data sets have been accumulated in the public domain, and genomic information integration and meta-analysis have become routine in biomedical research. In this paper, we focus on microarray meta-analysis, where multiple microarray studies with relevant biological hypotheses are combined in order to improve candidate marker detection. Many methods have been developed and applied in the literature, but their performance and properties have only been minimally investigated. There is currently no clear conclusion or guideline as to the proper choice of a meta-analysis method given an application; the decision essentially requires both statistical and biological considerations. Results We applied 12 microarray meta-analysis methods to combine multiple simulated expression profiles; these methods can be categorized by hypothesis setting: (1) HS(A): DE genes with non-zero effect sizes in all studies, (2) HS(B): DE genes with non-zero effect sizes in one or more studies and (3) HS(r): DE genes with non-zero effect in the "majority" of studies. We then performed a comprehensive comparative analysis through six large-scale real applications using four quantitative statistical evaluation criteria: detection capability, biological association, stability and robustness. We elucidated the hypothesis settings behind the methods and further applied multi-dimensional scaling (MDS) and an entropy measure to characterize the meta-analysis methods and data structure, respectively. Conclusions The aggregated results from the simulation study categorized the 12 methods into three hypothesis settings (HS(A), HS(B), and HS(r)). Evaluation in real data and results from the MDS and entropy analyses provided an insightful and practical guideline for the choice of the most suitable method in a given application. 
All source files for simulation and real data are available on the author’s publication website. PMID:24359104

  11. Meta-analysis methods for combining multiple expression profiles: comparisons, statistical characterization and an application guideline.

    PubMed

    Chang, Lun-Ching; Lin, Hui-Min; Sibille, Etienne; Tseng, George C

    2013-12-21

    As high-throughput genomic technologies become accurate and affordable, an increasing number of data sets have been accumulated in the public domain, and genomic information integration and meta-analysis have become routine in biomedical research. In this paper, we focus on microarray meta-analysis, where multiple microarray studies with relevant biological hypotheses are combined in order to improve candidate marker detection. Many methods have been developed and applied in the literature, but their performance and properties have only been minimally investigated. There is currently no clear conclusion or guideline as to the proper choice of a meta-analysis method given an application; the decision essentially requires both statistical and biological considerations. We applied 12 microarray meta-analysis methods to combine multiple simulated expression profiles; these methods can be categorized by hypothesis setting: (1) HS(A): DE genes with non-zero effect sizes in all studies, (2) HS(B): DE genes with non-zero effect sizes in one or more studies and (3) HS(r): DE genes with non-zero effect in the "majority" of studies. We then performed a comprehensive comparative analysis through six large-scale real applications using four quantitative statistical evaluation criteria: detection capability, biological association, stability and robustness. We elucidated the hypothesis settings behind the methods and further applied multi-dimensional scaling (MDS) and an entropy measure to characterize the meta-analysis methods and data structure, respectively. The aggregated results from the simulation study categorized the 12 methods into three hypothesis settings (HS(A), HS(B), and HS(r)). Evaluation in real data and results from the MDS and entropy analyses provided an insightful and practical guideline for the choice of the most suitable method in a given application. 
All source files for simulation and real data are available on the author's publication website.
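    One classical member of the HS(B) family of p-value combination methods evaluated in studies like this is Fisher's method, which is sensitive to a signal present in one or more studies. A minimal sketch, with made-up p-values for one gene:

```python
import numpy as np
from scipy import stats

def fisher_combine(pvals):
    """Fisher's method: X^2 = -2 * sum(log p_k) follows a chi-squared
    distribution with 2k degrees of freedom under the null hypothesis."""
    pvals = np.asarray(pvals, dtype=float)
    x2 = -2.0 * np.log(pvals).sum()
    return stats.chi2.sf(x2, df=2 * pvals.size)

# One gene measured in four independent microarray studies
# (illustrative p-values only).
print(round(fisher_combine([0.01, 0.20, 0.03, 0.50]), 4))
```

    By contrast, HS(A)-targeted methods such as the maximum p-value statistic require evidence in every study, which is why the hypothesis setting matters when choosing among the 12 methods.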

  12. Lessons learned from IDeAl - 33 recommendations from the IDeAl-net about design and analysis of small population clinical trials.

    PubMed

    Hilgers, Ralf-Dieter; Bogdan, Malgorzata; Burman, Carl-Fredrik; Dette, Holger; Karlsson, Mats; König, Franz; Male, Christoph; Mentré, France; Molenberghs, Geert; Senn, Stephen

    2018-05-11

    IDeAl (Integrated Design and Analysis of small population clinical trials) is an EU-funded project developing new statistical design and analysis methodologies for clinical trials in small population groups. Here we provide an overview of IDeAl findings and give recommendations to applied researchers. The description of the findings is broken down by the nine scientific IDeAl work packages and summarizes results from the project's more than 60 publications to date in peer-reviewed journals. In addition, we applied text mining to evaluate the publications and the IDeAl work packages' output in relation to the design and analysis terms derived from the IRDiRC task force report on small population clinical trials. The results are summarized, describing the developments from an applied viewpoint. The main result presented here is a set of 33 practical recommendations drawn from the work, giving researchers comprehensive guidance on the improved methodology. In particular, the findings will help researchers design and analyse efficient clinical trials in rare diseases with a limited number of patients available. We developed a network representation relating the hot topics developed by the IRDiRC task force on small population clinical trials to IDeAl's work, as well as relating the methodologies that IDeAl identified as necessary to consider in the design and analysis of small-population clinical trials. These network representations establish a new perspective on the design and analysis of small-population clinical trials. IDeAl has provided a large number of options for refining the statistical methodology for small-population clinical trials from various perspectives. The 33 recommendations, developed within and related to the work packages, help researchers design small-population clinical trials. The route to improvements is displayed in the IDeAl network, which represents the statistical methodological skills necessary for the design and analysis of small-population clinical trials. 
The methods are ready for use.

  13. Evaluating comprehensiveness in children's healthcare.

    PubMed

    Diniz, Suênia Gonçalves de Medeiros; Damasceno, Simone Soares; Coutinho, Simone Elizabeth Duarte; Toso, Beatriz Rosana Gonçalves de Oliveira; Collet, Neusa

    2016-12-15

    To evaluate the presence and extent of comprehensiveness in children's healthcare in the context of the Family Health Strategy. Evaluative, quantitative, cross-sectional study conducted with 344 family members of children at the Family Health Units of João Pessoa, PB, Brazil. Data were collected using the PCATool Brazil - child version and analysed according to descriptive and exploratory statistics. The attribute of comprehensiveness did not obtain satisfactory scores in the two evaluated dimensions, namely "available services" and "provided services". The low scores reveal that the attribute comprehensiveness is not employed as expected in a primary care unit and points to the issues that must be altered. It was concluded that the services should be restructured to ensure cross-sector performance in the provision of child care. It is also important to improve the relations between professionals and users to promote comprehensive and effective care.

  14. The Southampton-York Natural Scenes (SYNS) dataset: Statistics of surface attitude

    PubMed Central

    Adams, Wendy J.; Elder, James H.; Graf, Erich W.; Leyland, Julian; Lugtigheid, Arthur J.; Muryy, Alexander

    2016-01-01

    Recovering 3D scenes from 2D images is an under-constrained task; optimal estimation depends upon knowledge of the underlying scene statistics. Here we introduce the Southampton-York Natural Scenes dataset (SYNS: https://syns.soton.ac.uk), which provides comprehensive scene statistics useful for understanding biological vision and for improving machine vision systems. In order to capture the diversity of environments that humans encounter, scenes were surveyed at random locations within 25 indoor and outdoor categories. Each survey includes (i) spherical LiDAR range data (ii) high-dynamic range spherical imagery and (iii) a panorama of stereo image pairs. We envisage many uses for the dataset and present one example: an analysis of surface attitude statistics, conditioned on scene category and viewing elevation. Surface normals were estimated using a novel adaptive scale selection algorithm. Across categories, surface attitude below the horizon is dominated by the ground plane (0° tilt). Near the horizon, probability density is elevated at 90°/270° tilt due to vertical surfaces (trees, walls). Above the horizon, probability density is elevated near 0° slant due to overhead structure such as ceilings and leaf canopies. These structural regularities represent potentially useful prior assumptions for human and machine observers, and may predict human biases in perceived surface attitude. PMID:27782103
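    The surface-normal estimation underlying the attitude statistics is typically a local plane fit: the eigenvector of the neighborhood covariance matrix with the smallest eigenvalue is orthogonal to the best-fit plane. The sketch below shows this standard fixed-scale fit on synthetic points, not the paper's adaptive scale selection algorithm.

```python
import numpy as np

def normal_from_points(points):
    """Estimate a surface normal by fitting a plane to a local point
    neighborhood: the eigenvector of the covariance matrix with the
    smallest eigenvalue is orthogonal to the best-fit plane."""
    pts = np.asarray(points, dtype=float)
    cov = np.cov((pts - pts.mean(axis=0)).T)
    w, v = np.linalg.eigh(cov)        # eigenvalues in ascending order
    n = v[:, 0]
    return n / np.linalg.norm(n)

# Noisy samples from a horizontal patch (ground plane, tilt 0 deg).
rng = np.random.default_rng(2)
xy = rng.uniform(-1, 1, size=(200, 2))
z = 0.01 * rng.normal(size=200)           # small range noise
n = normal_from_points(np.column_stack([xy, z]))
print(round(abs(float(n[2])), 2))
```

    Slant and tilt statistics then follow by converting each estimated normal to spherical coordinates relative to the viewing direction.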

  15. Making texts in electronic health records comprehensible to consumers: a prototype translator.

    PubMed

    Zeng-Treitler, Qing; Goryachev, Sergey; Kim, Hyeoneui; Keselman, Alla; Rosendale, Douglas

    2007-10-11

    Narrative reports from electronic health records are a major source of content for personal health records. We designed and implemented a prototype text translator to make these reports more comprehensible to consumers. The translator identifies difficult terms, replaces them with easier synonyms, and generates and inserts explanatory texts for them. In feasibility testing, the application was used to translate 9 clinical reports. The majority (68.8%) of text replacements and insertions were deemed correct and helpful by expert review. User evaluation demonstrated a non-statistically significant trend toward better comprehension when translation is provided (p=0.15).
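    The replace-and-explain pipeline described above can be sketched with a toy synonym map. The dictionary entries and explanation text below are illustrative assumptions, not the translator's actual lexicon or algorithm.

```python
import re

# Toy consumer-health synonym map (illustrative only, not the
# prototype's actual lexicon).
SYNONYMS = {
    "hypertension": "high blood pressure",
    "myocardial infarction": "heart attack",
    "edema": "swelling",
}
EXPLANATIONS = {
    "high blood pressure": "(blood pushing too hard against artery walls)",
}

def simplify(report):
    """Replace difficult terms with easier synonyms and insert a short
    explanatory text after terms that have one."""
    # Replace longer terms first so multiword terms are not split.
    for term, easy in sorted(SYNONYMS.items(), key=lambda kv: -len(kv[0])):
        pattern = re.compile(re.escape(term), flags=re.IGNORECASE)
        report = pattern.sub(easy, report)
    for easy, note in EXPLANATIONS.items():
        report = report.replace(easy, f"{easy} {note}", 1)
    return report

print(simplify("History of hypertension and pedal edema."))
```

    A production system would also need term disambiguation and sentence-aware insertion, which this sketch omits.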

  16. Making Texts in Electronic Health Records Comprehensible to Consumers: A Prototype Translator

    PubMed Central

    Zeng-Treitler, Qing; Goryachev, Sergey; Kim, Hyeoneui; Keselman, Alla; Rosendale, Douglas

    2007-01-01

    Narrative reports from electronic health records are a major source of content for personal health records. We designed and implemented a prototype text translator to make these reports more comprehensible to consumers. The translator identifies difficult terms, replaces them with easier synonyms, and generates and inserts explanatory texts for them. In feasibility testing, the application was used to translate 9 clinical reports. The majority (68.8%) of text replacements and insertions were deemed correct and helpful by expert review. User evaluation demonstrated a non-statistically significant trend toward better comprehension when translation is provided (p=0.15). PMID:18693956

  17. SU-F-T-227: A Comprehensive Patient Specific, Structure Specific, Pre-Treatment 3D QA Protocol for IMRT, SBRT and VMAT - Clinical Experience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gueorguiev, G; Cotter, C; Young, M

    2016-06-15

    Purpose: To present a 3D QA method and clinical results for 550 patients. Methods: Five hundred and fifty patient treatment deliveries (400 IMRT, 75 SBRT and 75 VMAT) from various treatment sites, planned on the Raystation treatment planning system (TPS), were measured on three beam-matched Elekta linear accelerators using IBA's COMPASS system. The difference between the TPS-computed and delivered dose was evaluated in 3D by applying three statistical parameters to each structure of interest: absolute average dose difference (AADD, 6% allowed difference), absolute dose difference greater than 6% (ADD6, 4% structure volume allowed to fail) and a 3D gamma test (3%/3mm DTA, 4% structure volume allowed to fail). If the allowed value was not met for a given structure, manual review was performed. The review consisted of overlaying the dose difference or gamma results with the patient CT and scrolling through the slices. For QA to pass, areas of high dose difference or gamma must be small and not on consecutive slices. For AADD to manually pass QA, the average dose difference must be less than 50 cGy. The QA protocol also includes DVH analysis based on QUANTEC and TG-101 recommended dose constraints. Results: Figures 1–3 show the results for the three parameters per treatment modality. Manual review was performed on 67 deliveries (27 IMRT, 22 SBRT and 18 VMAT), all of which passed QA. Results show that the statistical parameter AADD may be overly sensitive for structures receiving low dose, especially for the SBRT deliveries (Fig. 1). The TPS-computed and measured DVH values were in excellent agreement, with minimal differences. Conclusion: Applying DVH analysis and different statistical parameters to any structure of interest, as part of the 3D QA protocol, provides a comprehensive treatment plan evaluation. Author G. Gueorguiev discloses receiving travel and research funding from IBA for work unrelated to this project. Author B. 
Crawford discloses receiving travel funding from IBA for work unrelated to this project.
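    Per-structure statistics like AADD and ADD6 can be sketched on voxel dose arrays as below. This is an illustrative re-implementation on simulated doses, not COMPASS code, and the normalization by the structure's mean planned dose is an assumption; the abstract does not specify the reference dose used.

```python
import numpy as np

def dose_qa_stats(planned, measured, threshold=0.06):
    """Per-structure 3D dose-comparison statistics in the spirit of the
    protocol above (illustrative; normalization choice is assumed):
    AADD - absolute average dose difference, as a fraction of the
           structure's mean planned dose;
    ADD6 - fraction of voxels whose absolute dose difference exceeds
           6% of the mean planned dose."""
    planned = np.asarray(planned, dtype=float)
    diff = np.abs(np.asarray(measured, dtype=float) - planned)
    ref = planned.mean()
    aadd = diff.mean() / ref
    add6 = np.mean(diff / ref > threshold)
    return float(aadd), float(add6)

rng = np.random.default_rng(3)
planned = rng.uniform(180, 220, size=10000)     # cGy, hypothetical PTV
measured = planned * rng.normal(1.0, 0.02, size=10000)
aadd, add6 = dose_qa_stats(planned, measured)
print(aadd < 0.06, add6 < 0.04)
```

    The 3D gamma test additionally credits spatial agreement (distance-to-agreement), which a pure dose-difference statistic cannot capture.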

  18. Contrast-Enhanced Ultrasonography in Differential Diagnosis of Benign and Malignant Ovarian Tumors

    PubMed Central

    Qiao, Jing-Jing; Yu, Jing; Yu, Zhe; Li, Na; Song, Chen; Li, Man

    2015-01-01

    Objective To evaluate the accuracy of contrast-enhanced ultrasonography (CEUS) in differential diagnosis of benign and malignant ovarian tumors. Methods The scientific literature databases PubMed, Cochrane Library and CNKI were comprehensively searched for studies relevant to the use of CEUS technique for differential diagnosis of benign and malignant ovarian cancer. Pooled summary statistics for specificity (Spe), sensitivity (Sen), positive and negative likelihood ratios (LR+/LR−), and diagnostic odds ratio (DOR) and their 95%CIs were calculated. Software for statistical analysis included STATA version 12.0 (Stata Corp, College Station, TX, USA) and Meta-Disc version 1.4 (Universidad Complutense, Madrid, Spain). Results Following a stringent selection process, seven high quality clinical trials were found suitable for inclusion in the present meta-analysis. The 7 studies contained a combined total of 375 ovarian cancer patients (198 malignant and 177 benign). Statistical analysis revealed that CEUS was associated with the following performance measures in differential diagnosis of ovarian tumors: pooled Sen was 0.96 (95%CI = 0.92∼0.98); the summary Spe was 0.91 (95%CI = 0.86∼0.94); the pooled LR+ was 10.63 (95%CI = 6.59∼17.17); the pooled LR− was 0.04 (95%CI = 0.02∼0.09); and the pooled DOR was 241.04 (95% CI = 92.61∼627.37). The area under the SROC curve was 0.98 (95% CI = 0.20∼1.00). Lastly, publication bias was not detected (t = −0.52, P = 0.626) in the meta-analysis. Conclusions Our results revealed the high clinical value of CEUS in differential diagnosis of benign and malignant ovarian tumors. Further, CEUS may also prove to be useful in differential diagnosis at early stages of this disease. PMID:25764442
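    The pooled diagnostic odds ratio reported above is conventionally computed by inverse-variance weighting of per-study log DORs. The sketch below uses hypothetical 2x2 tables, not the seven reviewed trials, and shows a fixed-effect pooling with a 0.5 continuity correction.

```python
import numpy as np

def pooled_dor(studies):
    """Fixed-effect pooled diagnostic odds ratio: inverse-variance
    weighting of log DOR from per-study 2x2 counts (TP, FP, FN, TN),
    with a 0.5 continuity correction to guard against zero cells."""
    log_dors, weights = [], []
    for tp, fp, fn, tn in studies:
        tp, fp, fn, tn = (x + 0.5 for x in (tp, fp, fn, tn))
        log_dor = np.log((tp * tn) / (fp * fn))
        var = 1 / tp + 1 / fp + 1 / fn + 1 / tn   # Woolf variance
        log_dors.append(log_dor)
        weights.append(1 / var)
    pooled = np.average(log_dors, weights=weights)
    return float(np.exp(pooled))

# Hypothetical 2x2 tables (TP, FP, FN, TN); not the reviewed trials.
tables = [(28, 3, 2, 25), (40, 4, 1, 33), (25, 2, 2, 30)]
print(round(pooled_dor(tables), 1))
```

    Tools such as Meta-Disc use the same 2x2 inputs to derive the pooled sensitivity, specificity, and likelihood ratios alongside the DOR.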

  19. Part-time versus full-time occlusion therapy for treatment of amblyopia: A meta-analysis.

    PubMed

    Yazdani, Negareh; Sadeghi, Ramin; Momeni-Moghaddam, Hamed; Zarifmahmoudi, Leili; Ehsaei, Asieh; Barrett, Brendan T

    2017-06-01

    To compare full-time occlusion (FTO) and part-time occlusion (PTO) therapy in the treatment of amblyopia, with the secondary aim of evaluating the minimum number of hours of part-time patching required for the maximal effect from occlusion. A literature search was performed in PubMed, Scopus, Science Direct, Ovid, Web of Science and the Cochrane library. The methodological quality of the literature was evaluated according to the Oxford Centre for Evidence-Based Medicine and the modified Newcastle-Ottawa scale. Statistical analyses were performed using Comprehensive Meta-Analysis (version 2, Biostat Inc., USA). The present meta-analysis included six studies [three randomized controlled trials (RCTs) and three non-RCTs]. The pooled standardized difference in mean visual acuity changes was 0.337 [lower and upper limits: -0.009, 0.683] higher in the FTO group than in the PTO group; however, this difference was not statistically significant (P = 0.056; Cochrane Q = 20.4, P = 0.001; I² = 75.49%). Egger's regression intercept was 5.46 (P = 0.04). The pooled standardized difference in mean visual acuity changes was 1.097 [lower and upper limits: 0.68, 1.513] higher in the FTO arm (P < 0.001), and 0.7 [lower and upper limits: 0.315, 1.085] higher in the PTO arm (P < 0.001), compared to PTO of less than two hours. This meta-analysis shows no statistically significant difference between PTO and FTO in the treatment of amblyopia. However, our results suggest that the minimum effective PTO duration to observe maximal improvement in visual acuity is six hours per day.
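    The heterogeneity statistics quoted above (Cochran's Q and I²) summarize how much study-level effects disagree beyond sampling error. A minimal sketch with hypothetical effect sizes and variances, not the six reviewed occlusion-therapy studies:

```python
import numpy as np

def q_and_i2(effects, variances):
    """Cochran's Q heterogeneity statistic and the I^2 index (share of
    variability beyond sampling error) for study-level effect sizes,
    using fixed-effect weights w = 1/v."""
    y = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    pooled = np.sum(w * y) / np.sum(w)
    q = float(np.sum(w * (y - pooled) ** 2))
    df = y.size - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Hypothetical standardized mean differences and variances
# (illustrative numbers only).
q, i2 = q_and_i2([0.2, 0.8, 0.1, 0.9, 0.5, 0.4],
                 [0.04, 0.05, 0.06, 0.04, 0.05, 0.05])
print(round(q, 1), round(i2, 1))
```

    An I² of 75%, as reported above, is conventionally read as substantial heterogeneity, motivating a random-effects model or subgroup exploration.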

  20. Racism as a determinant of health: a protocol for conducting a systematic review and meta-analysis

    PubMed Central

    2013-01-01

    Background Racism is increasingly recognized as a key determinant of health. A growing body of epidemiological evidence shows strong associations between self-reported racism and poor health outcomes across diverse minority groups in developed countries. While the relationship between racism and health has received increasing attention over the last two decades, a comprehensive meta-analysis focused on the health effects of racism has yet to be conducted. The aim of this review protocol is to provide a structure from which to conduct a systematic review and meta-analysis of studies that assess the relationship between racism and health. Methods This research will consist of a systematic review and meta-analysis. Studies will be considered for review if they are empirical studies reporting quantitative data on the association between racism and health for adults and/or children of all ages from any racial/ethnic/cultural groups. Outcome measures will include general health and well-being, physical health, mental health, healthcare use and health behaviors. Scientific databases (for example, Medline) will be searched using a comprehensive search strategy and reference lists will be manually searched for relevant studies. In addition, use of online search engines (for example, Google Scholar), key websites, and personal contact with experts will also be undertaken. Screening of search results and extraction of data from included studies will be independently conducted by at least two authors, including assessment of inter-rater reliability. Studies included in the review will be appraised for quality using tools tailored to each study design. Summary statistics of study characteristics and findings will be compiled and findings synthesized in a narrative summary as well as a meta-analysis. Discussion This review aims to examine associations between reported racism and health outcomes. 
This comprehensive and systematic review and meta-analysis of empirical research will provide a rigorous and reliable evidence base for future research, policy and practice, including information on the extent of available evidence for a range of racial/ethnic minority groups. PMID:24059279

  1. Systematic Evaluation of “Compliance” to Prescribed Treatment Medications and “Abstinence” from Psychoactive Drug Abuse in Chemical Dependence Programs: Data from the Comprehensive Analysis of Reported Drugs

    PubMed Central

    2014-01-01

    This is the first quantitative analysis of data from urine drug tests for compliance with treatment medications and abstinence from drug abuse across “levels of care” in six eastern states of America. Comprehensive Analysis of Reported Drugs (CARD) data were used in this post-hoc retrospective observational study of 10,570 patients, filtered to include a total of 2,919 patients prescribed at least one treatment medication during 2010 and 2011. The first and last urine samples (5,838 specimens) were analyzed; compliance with treatment medications and abstinence from drugs of abuse supported treatment effectiveness for many. Compared to non-compliant patients, compliant patients were marginally less likely to abuse opioids, cannabinoids, and ethanol during treatment, although more likely to abuse benzodiazepines. Almost 17% of the non-abstinent patients used benzodiazepines, 15% used opiates, and 10% used cocaine during treatment. Compliance was significantly higher in residential than in non-residential treatment facilities. Independent of level of care, 67.2% of the patients (n = 1963; P<.001) had every treatment medication found in both first and last urine specimens (compliance). In addition, 39.2% of the patients (n = 1143; P<.001) had no substance of abuse detected in either the first or last urine samples (abstinence). Moreover, in 2010, 16.9% of the patients (n = 57) were abstinent at the first but not the last urine test (deteriorating abstinence); in 2011 the percentage dropped to 13.3% (n = 174), and this improvement over the years was statistically significant. A longitudinal analysis of abstinence and compliance was conducted in a randomized subset from 2011 (n = 511), representing 17.5% of the total cohort. A statistically significant upward trend in abstinence rates (p = 2.353×10−8), as well as a similar but stronger trend in compliance (p = 2.200×10−16), was found. 
Given the trend toward linking urine drug testing to medical necessity and eliminating abusive screening, the interpretation of these valuable results requires further intensive investigation. PMID:25247439

  2. World War II War Production-Why Were the B-17 and B-24 Produced in Parallel?

    DTIC Science & Technology

    1997-03-01

    Winton, A Black Hole in the Wild Blue Yonder: The Need for a Comprehensive Theory of Airpower (Air Command and Staff College War Theory Coursebook ... statistical comparisons made, of which most are summarized as follows: 1. Statistical data compiled on the utilization of both planes showed that the B-17 was...easier to maintain and therefore more available for combat. 2. Statistical data on time from aircraft acceptance to delivery in theater showed that

  3. Predictors of nutrition information comprehension in adulthood.

    PubMed

    Miller, Lisa M Soederberg; Gibson, Tanja N; Applegate, Elizabeth A

    2010-07-01

    The goal of the present study was to examine relationships among several predictors of nutrition comprehension. We were particularly interested in exploring whether nutrition knowledge or motivation moderated the effects of attention on comprehension across a wide age range of adults. Ninety-three participants, ages 18-80, completed measures of nutrition knowledge and motivation and then read nutrition information (from which attention allocation was derived) and answered comprehension questions. In general, predictor variables were highly intercorrelated. However, knowledge, but not motivation, had direct effects on comprehension accuracy. In contrast, motivation influenced attention, which in turn influenced accuracy. Results also showed that comprehension accuracy decreased, and knowledge increased, with age. When knowledge was statistically controlled, age declines in comprehension increased. Knowledge is an important predictor of nutrition information comprehension and its role increases in later life. Motivation is also important; however, its effects on comprehension differ from knowledge. Health educators and clinicians should consider cognitive skills such as knowledge as well as motivation and age of patients when deciding how to best convey health information. The increased role of knowledge among older adults suggests that lifelong educational efforts may have important payoffs in later life. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.

  4. Predictors of Nutrition Information Comprehension in Adulthood

    PubMed Central

    Miller, Lisa M. Soederberg; Gibson, Tanja N.; Applegate, Elizabeth A.

    2009-01-01

    Objective The goal of the present study was to examine relationships among several predictors of nutrition comprehension. We were particularly interested in exploring whether nutrition knowledge or motivation moderated the effects of attention on comprehension across a wide age range of adults. Methods Ninety-three participants, ages 18 to 80, completed measures of nutrition knowledge and motivation and then read nutrition information (from which attention allocation was derived) and answered comprehension questions. Results In general, predictor variables were highly intercorrelated. However, knowledge, but not motivation, had direct effects on comprehension accuracy. In contrast, motivation influenced attention, which in turn influenced accuracy. Results also showed that comprehension accuracy decreased, and knowledge increased, with age. When knowledge was statistically controlled, age declines in comprehension increased. Conclusion Knowledge is an important predictor of nutrition information comprehension and its role increases in later life. Motivation is also important; however, its effects on comprehension differ from knowledge. Practice Implications Health educators and clinicians should consider cognitive skills such as knowledge as well as motivation and age of patients when deciding how to best convey health information. The increased role of knowledge among older adults suggests that lifelong educational efforts may have important payoffs in later life. PMID:19854605

  5. Power-up: A Reanalysis of 'Power Failure' in Neuroscience Using Mixture Modeling.

    PubMed

    Nord, Camilla L; Valton, Vincent; Wood, John; Roiser, Jonathan P

    2017-08-23

    Recently, evidence for endemically low statistical power has cast neuroscience findings into doubt. If low statistical power plagues neuroscience, then this reduces confidence in the reported effects. However, if statistical power is not uniformly low, then such blanket mistrust might not be warranted. Here, we provide a different perspective on this issue, analyzing data from an influential study reporting a median power of 21% across 49 meta-analyses (Button et al., 2013). We demonstrate, using Gaussian mixture modeling, that the sample of 730 studies included in that analysis comprises several subcomponents, so the use of a single summary statistic is insufficient to characterize the nature of the distribution. We find that statistical power is extremely low for studies included in meta-analyses that reported a null result and that it varies substantially across subfields of neuroscience, with particularly low power in candidate gene association studies. Therefore, whereas power in neuroscience remains a critical issue, the notion that studies are systematically underpowered is not the full story: low power is far from a universal problem. SIGNIFICANCE STATEMENT Recently, researchers across the biomedical and psychological sciences have become concerned with the reliability of results. One marker for reliability is statistical power: the probability of finding a statistically significant result given that the effect exists. Previous evidence suggests that statistical power is low across the field of neuroscience. Our results present a more comprehensive picture of statistical power in neuroscience: on average, studies are indeed underpowered, some very seriously so, but many studies show acceptable or even exemplary statistical power. We show that this heterogeneity in statistical power is common across most subfields in neuroscience. 
This new, more nuanced picture of statistical power in neuroscience could affect not only scientific understanding, but potentially policy and funding decisions for neuroscience research. Copyright © 2017 Nord, Valton et al.
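The definition of power quoted in the significance statement admits a compact worked example. As a minimal sketch (not drawn from the paper), the power of a two-sided, two-sample comparison at α = 0.05 can be approximated analytically via the normal approximation to the t-test; the effect sizes and sample sizes below are illustrative only.

```python
import math

def two_sample_power(d, n_per_group):
    """Approximate power of a two-sided, two-sample comparison at
    alpha = 0.05: the probability of a significant result given that
    a standardized effect of size d truly exists (normal approximation)."""
    z_crit = 1.959963985  # Phi^-1(0.975), the two-sided 5% cutoff
    ncp = d * math.sqrt(n_per_group / 2.0)  # expected z under the alternative
    # Power ~= P(Z > z_crit - ncp); the opposite rejection tail is negligible.
    return 0.5 * (1.0 + math.erf((ncp - z_crit) / math.sqrt(2.0)))
```

For a medium effect (d = 0.5), 64 subjects per group gives power of roughly 0.8, while 10 per group falls below 0.25, which is the underpowered regime the reanalysis is concerned with.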

  6. Cognitive and attitudinal predictors related to graphing achievement among pre-service elementary teachers

    NASA Astrophysics Data System (ADS)

    Szyjka, Sebastian P.

    The purpose of this study was to determine the extent to which six cognitive and attitudinal variables predicted pre-service elementary teachers' performance on line graphing. Predictors included Illinois teacher education basic skills sub-component scores in reading comprehension and mathematics, logical thinking performance scores, as well as measures of attitudes toward science, mathematics and graphing. This study also determined the strength of the relationship between each prospective predictor variable and the line graphing performance variable, as well as the extent to which measures of attitude towards science, mathematics and graphing mediated relationships between scores on mathematics, reading, logical thinking and line graphing. Ninety-four pre-service elementary education teachers enrolled in two different elementary science methods courses during the spring 2009 semester at Southern Illinois University Carbondale participated in this study. Each subject completed five different instruments designed to assess science, mathematics and graphing attitudes as well as logical thinking and graphing ability. Sixty subjects provided copies of primary basic skills score reports that listed subset scores for both reading comprehension and mathematics. The remaining scores were supplied by a faculty member who had access to a database from which the scores were drawn. Seven subjects, whose scores could not be found, were eliminated from final data analysis. Confirmatory factor analysis (CFA) was conducted in order to establish validity and reliability of the Questionnaire of Attitude Toward Line Graphs in Science (QALGS) instrument. CFA tested the statistical hypothesis that the five main factor structures within the Questionnaire of Attitude Toward Statistical Graphs (QASG) would be maintained in the revised QALGS. Stepwise Regression Analysis with backward elimination was conducted in order to generate a parsimonious and precise predictive model. 
This procedure allowed the researcher to explore the relationships among the affective and cognitive variables that were included in the regression analysis. The results for CFA indicated that the revised QALGS measure was sound in its psychometric properties when tested against the QASG. Reliability statistics indicated that the overall reliability for the 32 items in the QALGS was .90. The learning preferences construct had the lowest reliability (.67), while enjoyment (.89), confidence (.86) and usefulness (.77) constructs had moderate to high reliabilities. The first four measurement models fit the data well as indicated by the appropriate descriptive and statistical indices. However, the fifth measurement model did not fit the data well statistically, and only fit well with two descriptive indices. The results addressing the research question indicated that mathematical and logical thinking ability were significant predictors of line graph performance among the remaining group of variables. These predictors accounted for 41% of the total variability on the line graph performance variable. Partial correlation coefficients indicated that mathematics ability accounted for 20.5% of the variance on the line graphing performance variable when removing the effect of logical thinking. The logical thinking variable accounted for 4.7% of the variance on the line graphing performance variable when removing the effect of mathematics ability.
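The partial correlation coefficients reported above have a closed form in the first-order case. A minimal sketch follows; the correlation values in the example are hypothetical, not taken from the study.

```python
import math

def partial_corr(r_xy, r_xz, r_yz):
    """First-order partial correlation between x and y, controlling for z.
    Its square is the share of variance in y accounted for by x after
    removing the effect of z."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))
```

For instance, if graphing performance correlated 0.6 with mathematics ability, and both correlated moderately (0.4 and 0.5) with logical thinking, the partial correlation of mathematics with graphing, controlling for logical thinking, would be about 0.50, i.e. roughly 25% of variance.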

  7. 2011-12 National Postsecondary Student Aid Study (NPSAS:12). Data File Documentation. Appendix J-O. NCES 2014-182_2

    ERIC Educational Resources Information Center

    Wine, Jennifer; Bryan, Michael; Siegel, Peter

    2013-01-01

    The National Postsecondary Student Aid Study (NPSAS) helps fulfill the U.S. Department of Education's National Center for Education Statistics (NCES) mandate to collect, analyze, and publish statistics related to education. The purpose of NPSAS is to compile a comprehensive research dataset, based on student-level records, on financial aid…

  8. 2011-12 National Postsecondary Student Aid Study (NPSAS:12). Data File Documentation. Appendix A-I. NCES 2014-182_1

    ERIC Educational Resources Information Center

    Wine, Jennifer; Bryan, Michael; Siegel, Peter

    2013-01-01

    The National Postsecondary Student Aid Study (NPSAS) helps fulfill the U.S. Department of Education's National Center for Education Statistics (NCES) mandate to collect, analyze, and publish statistics related to education. The purpose of NPSAS is to compile a comprehensive research dataset, based on student-level records, on financial aid…

  9. Crowdsourcing Participatory Evaluation of Medical Pictograms Using Amazon Mechanical Turk

    PubMed Central

    Willis, Matt; Sun, Peiyuan; Wang, Jun

    2013-01-01

    Background Consumer and patient participation proved to be an effective approach for medical pictogram design, but it can be costly and time-consuming. We proposed and evaluated an inexpensive approach that crowdsourced the pictogram evaluation task to Amazon Mechanical Turk (MTurk) workers, who are usually referred to as the “turkers”. Objective To answer two research questions: (1) Is the turkers’ collective effort effective for identifying design problems in medical pictograms? and (2) Do the turkers’ demographic characteristics affect their performance in medical pictogram comprehension? Methods We designed a Web-based survey (open-ended tests) to ask 100 US turkers to type in their guesses of the meaning of 20 US pharmacopeial pictograms. Two judges independently coded the turkers’ guesses into four categories: correct, partially correct, wrong, and completely wrong. The comprehensibility of a pictogram was measured by the percentage of correct guesses, with each partially correct guess counted as 0.5 correct. We then conducted a content analysis on the turkers’ interpretations to identify misunderstandings and assess whether the misunderstandings were common. We also conducted a statistical analysis to examine the relationship between turkers’ demographic characteristics and their pictogram comprehension performance. Results The survey was completed within 3 days of our posting the task to the MTurk, and the collected data are publicly available in the multimedia appendix for download. The comprehensibility for the 20 tested pictograms ranged from 45% to 98%, with an average of 72.5%. The comprehensibility scores of 10 pictograms were strongly correlated to the scores of the same pictograms reported in another study that used oral response–based open-ended testing with local people. The turkers’ misinterpretations shared common errors that exposed design problems in the pictograms. 
Participants' performance was positively correlated with their educational level. Conclusions The results confirmed that crowdsourcing can be used as an effective and inexpensive approach for participatory evaluation of medical pictograms. Through Web-based open-ended testing, the crowd can effectively identify problems in pictogram designs. The results also confirmed that education has a significant effect on the comprehension of medical pictograms. Since low-literate people are underrepresented in the turker population, further investigation is needed to examine to what extent turkers’ misunderstandings overlap with those elicited from low-literate people. PMID:23732572
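The scoring rule described in the Methods (each partially correct guess counts as 0.5 correct) reduces to a weighted percentage. A minimal sketch, with made-up category counts:

```python
def comprehensibility(codes):
    """Percent comprehensibility of one pictogram from the judges' codes:
    'correct' counts 1, 'partially correct' counts 0.5, and the two
    wrong categories count 0."""
    weights = {"correct": 1.0, "partially correct": 0.5}
    return 100.0 * sum(weights.get(c, 0.0) for c in codes) / len(codes)
```

With 60 correct, 20 partially correct, and 20 wrong guesses out of 100, the pictogram scores 70%.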

  10. Crowdsourcing participatory evaluation of medical pictograms using Amazon Mechanical Turk.

    PubMed

    Yu, Bei; Willis, Matt; Sun, Peiyuan; Wang, Jun

    2013-06-03

    Consumer and patient participation proved to be an effective approach for medical pictogram design, but it can be costly and time-consuming. We proposed and evaluated an inexpensive approach that crowdsourced the pictogram evaluation task to Amazon Mechanical Turk (MTurk) workers, who are usually referred to as the "turkers". To answer two research questions: (1) Is the turkers' collective effort effective for identifying design problems in medical pictograms? and (2) Do the turkers' demographic characteristics affect their performance in medical pictogram comprehension? We designed a Web-based survey (open-ended tests) to ask 100 US turkers to type in their guesses of the meaning of 20 US pharmacopeial pictograms. Two judges independently coded the turkers' guesses into four categories: correct, partially correct, wrong, and completely wrong. The comprehensibility of a pictogram was measured by the percentage of correct guesses, with each partially correct guess counted as 0.5 correct. We then conducted a content analysis on the turkers' interpretations to identify misunderstandings and assess whether the misunderstandings were common. We also conducted a statistical analysis to examine the relationship between turkers' demographic characteristics and their pictogram comprehension performance. The survey was completed within 3 days of our posting the task to the MTurk, and the collected data are publicly available in the multimedia appendix for download. The comprehensibility for the 20 tested pictograms ranged from 45% to 98%, with an average of 72.5%. The comprehensibility scores of 10 pictograms were strongly correlated to the scores of the same pictograms reported in another study that used oral response-based open-ended testing with local people. The turkers' misinterpretations shared common errors that exposed design problems in the pictograms. Participants' performance was positively correlated with their educational level. 
The results confirmed that crowdsourcing can be used as an effective and inexpensive approach for participatory evaluation of medical pictograms. Through Web-based open-ended testing, the crowd can effectively identify problems in pictogram designs. The results also confirmed that education has a significant effect on the comprehension of medical pictograms. Since low-literate people are underrepresented in the turker population, further investigation is needed to examine to what extent turkers' misunderstandings overlap with those elicited from low-literate people.

  11. Statistical analysis of mirror mode waves in sheath regions driven by interplanetary coronal mass ejection

    NASA Astrophysics Data System (ADS)

    Ala-Lahti, Matti M.; Kilpua, Emilia K. J.; Dimmock, Andrew P.; Osmane, Adnane; Pulkkinen, Tuija; Souček, Jan

    2018-05-01

    We present a comprehensive statistical analysis of mirror mode waves and the properties of their plasma surroundings in sheath regions driven by interplanetary coronal mass ejections (ICMEs). We have constructed a semi-automated method to identify mirror modes from the magnetic field data. We analyze 91 ICME sheath regions from January 1997 to April 2015 using data from the Wind spacecraft. The results imply that, similarly to planetary magnetosheaths, mirror modes are also common structures in ICME sheaths. However, they occur almost exclusively as dip-like structures and in mirror-stable plasma. We observe mirror modes throughout the sheath, from the bow shock to the ICME leading edge, but their amplitudes are largest closest to the shock. We also find that the shock strength (measured by the Alfvén Mach number) is the most important parameter controlling the occurrence of mirror modes. Our findings suggest that in ICME sheaths the dominant source of free energy for mirror mode generation is the shock compression. We also suggest that mirror modes found deeper in the sheath are remnants from earlier times in the sheath's evolution, likewise generated in the vicinity of the shock.

  12. Social relationships and risk of dementia: A systematic review and meta-analysis of longitudinal cohort studies.

    PubMed

    Kuiper, Jisca S; Zuidersma, Marij; Oude Voshaar, Richard C; Zuidema, Sytse U; van den Heuvel, Edwin R; Stolk, Ronald P; Smidt, Nynke

    2015-07-01

    It is unclear to what extent poor social relationships are related to the development of dementia. A comprehensive systematic literature search identified 19 longitudinal cohort studies investigating the association between various social relationship factors and incident dementia in the general population. Relative risks (RRs) with 95% confidence intervals (CIs) were pooled using random-effects meta-analysis. Low social participation (RR: 1.41 (95% CI: 1.13-1.75)), less frequent social contact (RR: 1.57 (95% CI: 1.32-1.85)), and more loneliness (RR: 1.58 (95% CI: 1.19-2.09)) were statistically significantly associated with incident dementia. The results of the association between social network size and dementia were inconsistent. No statistically significant association was found between low satisfaction with social network and the onset of dementia (RR: 1.25 (95% CI: 0.96-1.62)). We conclude that social relationship factors that represent a lack of social interaction are associated with incident dementia. The strength of the associations between poor social interaction and incident dementia is comparable with other well-established risk factors for dementia, including low educational attainment, physical inactivity, and late-life depression. Copyright © 2015 Elsevier B.V. All rights reserved.
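The pooling step described above can be sketched with the DerSimonian-Laird random-effects estimator, a standard choice (the abstract does not name the exact estimator used). Standard errors are recovered from each study's 95% CI on the log scale; the study inputs in the example are illustrative, not the review's data.

```python
import math

def pool_random_effects(rrs, ci_lows, ci_highs, z=1.96):
    """Pool relative risks with a DerSimonian-Laird random-effects model.
    Each study contributes an RR and its 95% CI; pooling is done on the
    log scale because RRs are multiplicative."""
    y = [math.log(rr) for rr in rrs]
    # Recover each study's standard error from the width of its log-CI.
    se = [(math.log(hi) - math.log(lo)) / (2 * z)
          for lo, hi in zip(ci_lows, ci_highs)]
    w = [1 / s**2 for s in se]                      # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar)**2 for wi, yi in zip(w, y))  # heterogeneity Q
    k = len(y)
    denom = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / denom)          # between-study variance
    wstar = [1 / (s**2 + tau2) for s in se]         # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(wstar, y)) / sum(wstar)
    se_pooled = 1 / math.sqrt(sum(wstar))
    return (math.exp(pooled),
            math.exp(pooled - z * se_pooled),
            math.exp(pooled + z * se_pooled))
```

Pooling two hypothetical studies that each report RR 1.5 (95% CI 1.2-1.875) returns the same point estimate with a narrower CI, as expected.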

  13. Performance Evaluation of 14 Neural Network Architectures Used for Predicting Heat Transfer Characteristics of Engine Oils

    NASA Astrophysics Data System (ADS)

    Al-Ajmi, R. M.; Abou-Ziyan, H. Z.; Mahmoud, M. A.

    2012-01-01

    This paper reports the results of a comprehensive study that aimed at identifying best neural network architecture and parameters to predict subcooled boiling characteristics of engine oils. A total of 57 different neural networks (NNs) that were derived from 14 different NN architectures were evaluated for four different prediction cases. The NNs were trained on experimental datasets performed on five engine oils of different chemical compositions. The performance of each NN was evaluated using a rigorous statistical analysis as well as careful examination of smoothness of predicted boiling curves. One NN, out of the 57 evaluated, correctly predicted the boiling curves for all cases considered either for individual oils or for all oils taken together. It was found that the pattern selection and weight update techniques strongly affect the performance of the NNs. It was also revealed that the use of descriptive statistical analysis such as R2, mean error, standard deviation, and T and slope tests, is a necessary but not sufficient condition for evaluating NN performance. The performance criteria should also include inspection of the smoothness of the predicted curves either visually or by plotting the slopes of these curves.
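The descriptive statistics named above (R², mean error, standard deviation) are straightforward to compute for any predicted-versus-measured boiling curve. A minimal sketch; the function and variable names are hypothetical, not from the paper's software.

```python
def regression_stats(y_true, y_pred):
    """R^2, mean error (bias), and standard deviation of the residuals:
    the kind of descriptive statistics used to score each network's
    predictions against the measured data."""
    n = len(y_true)
    resid = [t - p for t, p in zip(y_true, y_pred)]
    mean_err = sum(resid) / n
    sd = (sum((r - mean_err)**2 for r in resid) / (n - 1)) ** 0.5
    mean_y = sum(y_true) / n
    ss_res = sum(r**2 for r in resid)
    ss_tot = sum((t - mean_y)**2 for t in y_true)
    r2 = 1 - ss_res / ss_tot        # 1.0 means a perfect fit
    return r2, mean_err, sd
```

As the paper notes, a high R² alone is a necessary but not sufficient criterion: a network can score well on these statistics yet still produce a non-smooth boiling curve, so curve inspection remains part of the evaluation.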

  14. Supporting Regularized Logistic Regression Privately and Efficiently.

    PubMed

    Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei

    2016-01-01

    As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data of human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has nevertheless not been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency, and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc.
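Setting aside the cryptographic layer, the underlying model can be sketched as plain L2-regularized logistic regression fit by batch gradient descent. This is a from-scratch illustration under simple assumptions, not the authors' protocol; the hyperparameters are arbitrary. One natural decomposition for the multi-institution setting is for each site to compute its local gradient sums before secure aggregation.

```python
import math

def train_l2_logreg(X, y, lam=0.1, lr=0.1, epochs=500):
    """L2-regularized logistic regression fit by batch gradient descent.
    X: list of feature vectors, y: labels in {0, 1}, lam: L2 penalty
    strength. The bias term b is not regularized, as is conventional."""
    d = len(X[0])
    w = [0.0] * d
    b = 0.0
    n = len(X)
    for _ in range(epochs):
        gw = [lam * wi for wi in w]   # gradient of the L2 penalty term
        gb = 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability
            for j in range(d):
                gw[j] += (p - yi) * xi[j] / n
            gb += (p - yi) / n
        w = [wj - lr * gj for wj, gj in zip(w, gw)]
        b -= lr * gb
    return w, b

def predict(w, b, x):
    """Hard 0/1 prediction from the fitted weights."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if z > 0 else 0
```

On a small linearly separable toy set, the fitted model recovers the class boundary; the regularization term keeps the weights from growing without bound on separable data.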

  15. GIS-based bivariate statistical techniques for groundwater potential analysis (an example of Iran)

    NASA Astrophysics Data System (ADS)

    Haghizadeh, Ali; Moghaddam, Davoud Davoudi; Pourghasemi, Hamid Reza

    2017-12-01

    Groundwater potential analysis provides a better comprehension of the hydrological settings of different regions. This study shows the capability of two GIS-based, data-driven bivariate techniques, namely the statistical index (SI) and Dempster-Shafer theory (DST), to analyze groundwater potential in the Broujerd region of Iran. The research was done using 11 groundwater conditioning factors and 496 spring positions. Based on the groundwater potential maps (GPMs) of the SI and DST methods, 24.22% and 23.74% of the study area is covered by the poor zone of groundwater potential, and 43.93% and 36.3% of the Broujerd region is covered by the good and very good potential zones, respectively. The validation of outcomes showed that the area under the curve (AUC) of the SI and DST techniques is 81.23% and 79.41%, respectively, which indicates that the SI method has a slightly better performance than the DST technique. Therefore, the SI and DST methods are advantageous for analyzing groundwater capacity and scrutinizing the complicated relation between groundwater occurrence and groundwater conditioning factors, which permits investigation of both systemic and stochastic uncertainty. Finally, these techniques are very beneficial for groundwater potential analysis and can be practical for water-resource management experts.
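The AUC figures used for validation can be computed directly from the potential scores at known spring and non-spring locations, via the Mann-Whitney formulation. A minimal sketch; the scores and labels in the example are invented for illustration.

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive cell (e.g. a known
    spring location, label 1) scores higher than a randomly chosen
    negative cell (label 0). Ties count as half a win."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = 0.0
    for p in pos:
        for q in neg:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 means the map ranks springs no better than chance; the reported values of roughly 0.81 and 0.79 mean the SI and DST maps rank a true spring above a non-spring about four times out of five.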

  16. 1H NMR-based metabolic profiling for evaluating poppy seed rancidity and brewing.

    PubMed

    Jawień, Ewa; Ząbek, Adam; Deja, Stanisław; Łukaszewicz, Marcin; Młynarz, Piotr

    2015-12-01

    Poppy seeds are widely used in household and commercial confectionery. The aim of this study was to demonstrate the application of metabolic profiling for industrial monitoring of the molecular changes which occur during minced poppy seed rancidity and the brewing processes performed on raw seeds. Both forms of poppy seeds were obtained from a confectionery company. Proton nuclear magnetic resonance (1H NMR) was applied as the analytical method of choice, together with multivariate statistical data analysis. Metabolic fingerprinting was applied as a bioprocess control tool to monitor the trajectory of rancidity and the progression of brewing. Low-molecular-weight compounds were found to be statistically significant biomarkers of these bioprocesses. Changes in the concentrations of chemical compounds were explained relative to the biochemical processes and external conditions. The obtained results provide valuable and comprehensive information for a better understanding of the biology of rancidity and brewing, while demonstrating the potential of NMR spectroscopy combined with multivariate data analysis tools for quality control in food industries involved in the processing of oilseeds.

  17. Supporting Regularized Logistic Regression Privately and Efficiently

    PubMed Central

    Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei

    2016-01-01

    As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data of human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has nevertheless not been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency, and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc. PMID:27271738

  18. Vitamin D and Depression: A Systematic Review and Meta-Analysis Comparing Studies with and without Biological Flaws

    PubMed Central

    Spedding, Simon

    2014-01-01

    The efficacy of Vitamin D supplements in depression is controversial, awaiting further literature analysis. Biological flaws in primary studies are a possible reason why meta-analyses of Vitamin D have failed to demonstrate efficacy. This systematic review and meta-analysis of Vitamin D and depression compared studies with and without biological flaws. The systematic review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The literature search was undertaken through four databases for randomized controlled trials (RCTs). Studies were critically appraised for methodological quality and biological flaws, in relation to the hypothesis and study design. Meta-analyses were performed for studies according to the presence of biological flaws. The 15 RCTs identified provide a more comprehensive evidence base than previous systematic reviews; the methodological quality of the studies was generally good and the methodology was diverse. A meta-analysis of all studies without flaws demonstrated a statistically significant improvement in depression with Vitamin D supplements (+0.78, CI +0.24 to +1.27). Studies with biological flaws were mainly inconclusive, with the meta-analysis demonstrating a statistically significant worsening in depression with Vitamin D supplements (−1.1, CI −0.7 to −1.5). Vitamin D supplementation (≥800 I.U. daily) was somewhat favorable in the management of depression in studies that demonstrated a change in vitamin levels, and the effect size was comparable to that of anti-depressant medication. PMID:24732019

  19. Performance evaluation of tile-based Fisher Ratio analysis using a benchmark yeast metabolome dataset.

    PubMed

    Watson, Nathanial E; Parsons, Brendon A; Synovec, Robert E

    2016-08-12

    Performance of tile-based Fisher Ratio (F-ratio) data analysis, recently developed for discovery-based studies using comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry (GC×GC-TOFMS), is evaluated with a metabolomics dataset that had previously been analyzed in great detail, albeit with a brute-force approach. The previously analyzed data (referred to herein as the benchmark dataset) were intracellular extracts from Saccharomyces cerevisiae (yeast), either metabolizing glucose (repressed) or ethanol (derepressed), which define the two classes in the discovery-based analysis to find metabolites that are statistically different in concentration between the two classes. Beneficially, this previously analyzed dataset provides a concrete means to validate the tile-based F-ratio software. Herein, we demonstrate and validate the significant benefits of applying tile-based F-ratio analysis. The yeast metabolomics data are analyzed more rapidly, in about one week versus one year for the prior studies with this dataset. Furthermore, a null distribution analysis is implemented to statistically determine an adequate F-ratio threshold, whereby variables with F-ratio values below the threshold can be ignored as not class distinguishing, which provides the analyst with confidence when analyzing the hit table. Forty-six of the fifty-four benchmarked changing metabolites were discovered by the new methodology, while all but one of the nineteen previously identified false-positive metabolites were consistently excluded. Copyright © 2016 Elsevier B.V. All rights reserved.
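The two ingredients described above, a per-variable F-ratio and a permutation-based null threshold, can be sketched for a single variable. This is a simplified illustration of the general idea under stated assumptions (two classes, one variable at a time), not the tile-based software itself; the concentration values in the example are invented.

```python
import random

def f_ratio(class_a, class_b):
    """One-way, two-class F-ratio for a single variable: between-class
    variance (1 degree of freedom for two classes) over pooled
    within-class variance."""
    na, nb = len(class_a), len(class_b)
    ma = sum(class_a) / na
    mb = sum(class_b) / nb
    grand = (sum(class_a) + sum(class_b)) / (na + nb)
    ss_between = na * (ma - grand)**2 + nb * (mb - grand)**2
    ss_within = (sum((x - ma)**2 for x in class_a) +
                 sum((x - mb)**2 for x in class_b))
    return (ss_between / 1) / (ss_within / (na + nb - 2))

def null_threshold(class_a, class_b, n_perm=1000, quantile=0.95, seed=0):
    """Estimate an F-ratio threshold from a null distribution built by
    permuting the class labels; variables whose F-ratio falls below it
    are treated as not class distinguishing."""
    rng = random.Random(seed)
    pooled = list(class_a) + list(class_b)
    na = len(class_a)
    nulls = []
    for _ in range(n_perm):
        rng.shuffle(pooled)
        nulls.append(f_ratio(pooled[:na], pooled[na:]))
    nulls.sort()
    return nulls[int(quantile * n_perm) - 1]
```

A variable whose concentrations differ strongly between the repressed and derepressed classes yields an F-ratio orders of magnitude above the permutation threshold, while a pure-noise variable lands near zero.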

  20. No Association of Coronary Artery Disease with X-Chromosomal Variants in Comprehensive International Meta-Analysis.

    PubMed

    Loley, Christina; Alver, Maris; Assimes, Themistocles L; Bjonnes, Andrew; Goel, Anuj; Gustafsson, Stefan; Hernesniemi, Jussi; Hopewell, Jemma C; Kanoni, Stavroula; Kleber, Marcus E; Lau, King Wai; Lu, Yingchang; Lyytikäinen, Leo-Pekka; Nelson, Christopher P; Nikpay, Majid; Qu, Liming; Salfati, Elias; Scholz, Markus; Tukiainen, Taru; Willenborg, Christina; Won, Hong-Hee; Zeng, Lingyao; Zhang, Weihua; Anand, Sonia S; Beutner, Frank; Bottinger, Erwin P; Clarke, Robert; Dedoussis, George; Do, Ron; Esko, Tõnu; Eskola, Markku; Farrall, Martin; Gauguier, Dominique; Giedraitis, Vilmantas; Granger, Christopher B; Hall, Alistair S; Hamsten, Anders; Hazen, Stanley L; Huang, Jie; Kähönen, Mika; Kyriakou, Theodosios; Laaksonen, Reijo; Lind, Lars; Lindgren, Cecilia; Magnusson, Patrik K E; Marouli, Eirini; Mihailov, Evelin; Morris, Andrew P; Nikus, Kjell; Pedersen, Nancy; Rallidis, Loukianos; Salomaa, Veikko; Shah, Svati H; Stewart, Alexandre F R; Thompson, John R; Zalloua, Pierre A; Chambers, John C; Collins, Rory; Ingelsson, Erik; Iribarren, Carlos; Karhunen, Pekka J; Kooner, Jaspal S; Lehtimäki, Terho; Loos, Ruth J F; März, Winfried; McPherson, Ruth; Metspalu, Andres; Reilly, Muredach P; Ripatti, Samuli; Sanghera, Dharambir K; Thiery, Joachim; Watkins, Hugh; Deloukas, Panos; Kathiresan, Sekar; Samani, Nilesh J; Schunkert, Heribert; Erdmann, Jeanette; König, Inke R

    2016-10-12

    In recent years, genome-wide association studies have identified 58 independent risk loci for coronary artery disease (CAD) on the autosomes. However, due to the sex-specific data structure of the X chromosome, it has been excluded from most of these analyses. While females have 2 copies of chromosome X, males have only one. Also, one of the female X chromosomes may be inactivated. Therefore, special test statistics and quality control procedures are required. Thus, little is known about the role of X-chromosomal variants in CAD. To fill this gap, we conducted a comprehensive X-chromosome-wide meta-analysis including more than 43,000 CAD cases and 58,000 controls from 35 international study cohorts. For quality control, sex-specific filters were used to adequately take the special structure of X-chromosomal data into account. For single-study analyses, several logistic regression models were calculated allowing for inactivation of one female X chromosome, adjusting for sex and investigating interactions between sex and genetic variants. Then, meta-analyses including all 35 studies were conducted using random effects models. None of the investigated models revealed genome-wide significant associations for any variant. Although we analyzed the largest-to-date sample, currently available methods were not able to detect any associations of X-chromosomal variants with CAD.
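
    As an illustration of the random-effects pooling step used in such meta-analyses, here is a minimal DerSimonian-Laird sketch in Python (numpy). The function name and interface are our own, not the study's actual pipeline; it pools per-study effect estimates (e.g. log odds ratios) and their variances.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """DerSimonian-Laird random-effects pooling of per-study effect
    estimates and their within-study variances."""
    e = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                    # fixed-effect weights
    fixed = np.sum(w * e) / np.sum(w)
    q = np.sum(w * (e - fixed) ** 2)               # Cochran's Q heterogeneity
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(e) - 1)) / c)        # between-study variance
    w_star = 1.0 / (v + tau2)                      # random-effects weights
    pooled = np.sum(w_star * e) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2
```

When the studies are perfectly homogeneous, the between-study variance estimate collapses to zero and the pooled estimate reduces to the inverse-variance-weighted mean.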

  1. Using the Job Burden-Capital Model of Occupational Stress to Predict Depression and Well-Being among Electronic Manufacturing Service Employees in China

    PubMed Central

    Wang, Chao; Li, Shuang; Li, Tao; Yu, Shanfa; Dai, Junming; Liu, Xiaoman; Zhu, Xiaojun; Ji, Yuqing; Wang, Jin

    2016-01-01

    Background: This study aimed to identify the association of occupational stress with depression and well-being by proposing a comprehensive and flexible job burden-capital model with its corresponding hypotheses. Methods: For this research, 1618 valid samples were gathered from the electronic manufacturing service industry in Hunan Province, China; self-rated questionnaires were administered to participants for data collection after obtaining their written consent. The proposed model was fitted and tested through structural equation model analysis. Results: Single-factor correlation analysis results indicated that the coefficients between all items and dimensions were statistically significant. The final model demonstrated satisfactory global goodness of fit (CMIN/DF = 5.37, AGFI = 0.915, NNFI = 0.945, IFI = 0.952, RMSEA = 0.052). Both the measurement and structural models showed acceptable path loadings. Job burden and capital were directly associated with depression and well-being or indirectly related to them through personality. Multi-group structural equation model analyses indicated general applicability of the proposed model to basic features of such a population. Gender, marriage and education led to differences in the relation between occupational stress and health outcomes. Conclusions: The job burden-capital model of occupational stress, depression and well-being was found to be more systematic and comprehensive than previous models. PMID:27529267
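
    The reported fit indices can be cross-checked with the standard formulas CMIN/DF = χ²/df and RMSEA = sqrt(max(χ² − df, 0) / (df · (N − 1))): with CMIN/DF = 5.37 and N = 1618, these imply RMSEA ≈ 0.052, consistent with the abstract. A minimal Python sketch (the function name is ours, chosen for illustration):

```python
import math

def sem_fit(chi2, df, n):
    """Global SEM fit summaries from the model chi-square:
    CMIN/DF and the point estimate of RMSEA."""
    cmin_df = chi2 / df
    rmsea = math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
    return cmin_df, rmsea
```

Note that for χ² > df, RMSEA = sqrt((CMIN/DF − 1) / (N − 1)), so the check does not depend on the (unreported) degrees of freedom.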

  3. Combinatorial modification of human histone H4 quantitated by two-dimensional liquid chromatography coupled with top down mass spectrometry.

    PubMed

    Pesavento, James J; Bullock, Courtney R; LeDuc, Richard D; Mizzen, Craig A; Kelleher, Neil L

    2008-05-30

    Quantitative proteomics has focused heavily on correlating protein abundances, ratios, and dynamics by developing methods that are protein expression-centric (e.g. isotope-coded affinity tag, isobaric tag for relative and absolute quantification, etc.). These methods effectively detect changes in protein abundance but fail to provide a comprehensive perspective of the diversity of proteins such as histones, which are regulated by post-translational modifications. Here, we report the characterization of modified forms of HeLa cell histone H4 with a dynamic range >10^4 using a strictly Top Down mass spectrometric approach coupled with two dimensions of liquid chromatography. This enhanced dynamic range enabled the precise characterization and quantitation of 42 forms uniquely modified by combinations of methylation and acetylation, including those with trimethylated Lys-20, monomethylated Arg-3, and the novel dimethylated Arg-3 (each <1% of all H4 forms). Quantitative analyses revealed distinct trends in acetylation site occupancy depending on Lys-20 methylation state. Because both modifications are dynamically regulated through the cell cycle, we simultaneously investigated acetylation and methylation kinetics through three cell cycle phases and used these data to statistically assess the robustness of our quantitative analysis. This work represents the most comprehensive analysis of histone H4 forms present in human cells reported to date.

  4. In silico identification and comparative analysis of differentially expressed genes in human and mouse tissues

    PubMed Central

    Pao, Sheng-Ying; Lin, Win-Li; Hwang, Ming-Jing

    2006-01-01

    Background Screening for differentially expressed genes on the genomic scale and comparative analysis of the expression profiles of orthologous genes between species to study gene function and regulation are becoming increasingly feasible. Expressed sequence tags (ESTs) are an excellent source of data for such studies using bioinformatic approaches because of the rich libraries and tremendous amount of data now available in the public domain. However, any large-scale EST-based bioinformatics analysis must deal with the heterogeneous, and often ambiguous, tissue and organ terms used to describe EST libraries. Results To deal with the issue of tissue source, in this work we carefully screened and organized more than 8 million human and mouse ESTs into 157 human and 108 mouse tissue/organ categories, to which we applied an established statistical test at different p-value thresholds to identify genes differentially expressed in different tissues. Further analysis of the tissue distribution and level of expression of human and mouse orthologous genes showed that tissue-specific orthologs tended to have more similar expression patterns than those lacking significant tissue specificity. On the other hand, a number of orthologs were found to have significant disparity in their expression profiles, hinting at novel functions, divergent regulation, or new ortholog relationships. Conclusion Comprehensive statistics on the tissue-specific expression of human and mouse genes were obtained in this very large-scale, EST-based analysis. These statistical results have been organized into a database, freely accessible at our website, for easy searching of human and mouse tissue-specific genes and for investigating gene expression profiles in the context of comparative genomics. 
Comparative analysis showed that, although highly tissue-specific genes tend to exhibit similar expression profiles in human and mouse, there are significant exceptions, indicating that orthologous genes, while sharing basic genomic properties, could result in distinct phenotypes. PMID:16626500
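
    One simple way to flag tissue-enriched expression from EST counts is a one-sided two-proportion test comparing a gene's count in one tissue against its count in the pooled remaining tissues. The sketch below is an illustrative stand-in with a hypothetical function name; the study applied its own established statistical test, not necessarily this one.

```python
import math

def two_proportion_p(k1, n1, k2, n2):
    """One-sided two-proportion z-test: p-value for the hypothesis that
    the rate k1/n1 (gene ESTs in one tissue) exceeds k2/n2 (elsewhere)."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return 0.5 * math.erfc(z / math.sqrt(2))  # upper-tail normal p-value
```

A gene with 50 ESTs out of 1000 in one tissue but only 5 out of 10000 elsewhere yields a vanishingly small p-value, while equal rates yield p = 0.5.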

  5. Performance of dental impression materials: Benchmarking of materials and techniques by three-dimensional analysis.

    PubMed

    Rudolph, Heike; Graf, Michael R S; Kuhn, Katharina; Rupf-Köhler, Stephanie; Eirich, Alfred; Edelmann, Cornelia; Quaas, Sebastian; Luthardt, Ralph G

    2015-01-01

    Among other factors, the precision of dental impressions is an important and determining factor for the fit of dental restorations. The aim of this study was to examine the three-dimensional (3D) precision of gypsum dies made using a range of impression techniques and materials. Ten impressions of a steel canine were fabricated for each of the 24 material-method combinations and poured with type 4 die stone. The dies were optically digitized, aligned to the CAD model of the steel canine, and 3D differences were calculated. The results were statistically analyzed using one-way analysis of variance. Depending on material and impression technique, the mean values ranged between +10.9/-10.0 µm (SD 2.8/2.3) and +16.5/-23.5 µm (SD 11.8/18.8). Qualitative analysis using color-coded graphs showed a characteristic location of deviations for different impression techniques. Three-dimensional analysis provided a comprehensive picture of the achievable precision. Processing aspects and impression technique had a significant influence.

  6. The Top 50 most-cited articles on Total Ankle Arthroplasty: A bibliometric analysis.

    PubMed

    Malik, Azeem Tariq; Noordin, Shahryar

    2018-03-29

    Total Ankle Arthroplasty (TAA) is a relatively new and evolving field in foot and ankle surgery. We conducted a citation analysis to identify the characteristics of the top 50 most-cited articles on total ankle arthroplasty. Using the Web of Science database and the search strategy "total ankle arthroplasty" OR "total ankle replacement", we identified 2445 articles. After filtering for relevance, the top 50 cited articles on total ankle arthroplasty were retrieved for descriptive and statistical analysis. The publication years ranged from 1979 to 2013. The USA was the most productive country in terms of research output, followed by the UK. Though citation analysis has its flaws, this is a comprehensive list of the top 50 articles significantly impacting the literature on total ankle arthroplasty. Based on our study, we conclude that there is a marked deficiency of highly cited high-level-of-evidence articles, and future research should address this gap with high-quality studies.

  7. Comprehensive machine learning analysis of Hydra behavior reveals a stable basal behavioral repertoire.

    PubMed

    Han, Shuting; Taralova, Ekaterina; Dupre, Christophe; Yuste, Rafael

    2018-03-28

    Animal behavior has been studied for centuries, but few efficient methods are available to automatically identify and classify it. Quantitative behavioral studies have been hindered by the subjective and imprecise nature of human observation, and by the slow speed of annotating behavioral data. Here, we developed an automatic behavior analysis pipeline for the cnidarian Hydra vulgaris using machine learning. We imaged freely behaving Hydra, extracted motion and shape features from the videos, and constructed a dictionary of visual features to classify pre-defined behaviors. We also identified unannotated behaviors with unsupervised methods. Using this analysis pipeline, we quantified 6 basic behaviors and found surprisingly similar behavior statistics across animals within the same species, regardless of experimental conditions. Our analysis indicates that the fundamental behavioral repertoire of Hydra is stable. This robustness could reflect homeostatic neural control of "housekeeping" behaviors that may already have been present in the earliest nervous systems. © 2018, Han et al.

  8. Introducing computational thinking through hands-on projects using R with applications to calculus, probability and data analysis

    NASA Astrophysics Data System (ADS)

    Benakli, Nadia; Kostadinov, Boyan; Satyanarayana, Ashwin; Singh, Satyanand

    2017-04-01

    The goal of this paper is to promote computational thinking among mathematics, engineering, science and technology students through hands-on computer experiments. These activities have the potential to empower students to learn, create and invent with technology, and they engage computational thinking through simulations, visualizations and data analysis. We present nine computer experiments, and suggest several more, with applications to calculus, probability and data analysis. We use the free, open-source statistical programming language R. Our goal is to give a taste of what R offers rather than to present a comprehensive tutorial on the R language. In our experience, these kinds of interactive computer activities can be easily integrated into a smart classroom. Furthermore, they tend to keep students motivated and actively engaged in learning, problem solving and developing better intuition for complex mathematical concepts.

  9. Social-Ecological Resilience and Sustainable Commons Management Paradigms in State Comprehensive Water Planning Legislation: Are We Adapting or Maintaining the Status Quo?

    NASA Astrophysics Data System (ADS)

    Dyckman, C.

    2016-12-01

    Water shortage has been increasing throughout the country, as record drought grips the western states and several southeastern states have sued adjoining states over shared water resources. State water planning can avert or lessen conflicts by balancing sectoral needs and legal priority within each state. State comprehensive water planning laws dictate the state water plan's process, coverage, and content, and the extent to which they codify the allocation status quo. The plans can contain the latest resource management paradigms that respond to climate change uncertainty, namely sustainable commons management (SCM) and social-ecological resilience (SER). Building on the work of Pahl-Wostl (2009), Ostrom and Cox (2010), Agrawal (2003), and Walker and Salt (2012), who have advocated for and empirically researched the presence of SCM and SER processes in water management, I surveyed all 50 states to determine which had comprehensive water planning legislation. For the 26 states that did, I evaluated their legislative content using an augmented coercive-versus-cooperative analysis metric (May, 1993; Berke and French, 1994) that includes codifiable SCM and SER measures. I found that the majority of the states' legislation did not contain the SER and SCM measures; they also lack integral comprehensive water planning measures (i.e., conjoined surface and groundwater planning, instream flow protection, critical area planning, and water conservation practices) (Dyckman, forthcoming). There is a statistically significant inverse relationship between the indices within the metric, affirming that the greater the legislation's coerciveness, the lower its adaptive capacity and its water planning comprehensiveness (Ostrom, 2010; Pendall, 2001). Planners in states with more SER and SCM measures in their state water planning statutes are more likely to have the autonomy and ability to respond to localized water needs, with more comprehensive water planning tools.

  10. Requirements for Next Generation Comprehensive Analysis of Rotorcraft

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne; Datta, Anubhav

    2008-01-01

    The unique demands of rotorcraft aeromechanics analysis have led to the development of software tools that are described as comprehensive analyses. The next generation of rotorcraft comprehensive analyses will be driven and enabled by the tremendous capabilities of high-performance computing, particularly modular and scalable software executed on multiple cores. Development of a comprehensive analysis based on high-performance computing both demands and permits a new analysis architecture. This paper describes a vision of the requirements for this next generation of comprehensive analyses of rotorcraft. The requirements are described and substantiated for what must be included, and justification is provided for what should be excluded. With this guide, a path to the next-generation code can be found.

  11. The Comprehension Problems of Children with Poor Reading Comprehension despite Adequate Decoding: A Meta-Analysis.

    PubMed

    Spencer, Mercedes; Wagner, Richard K

    2018-06-01

    The purpose of this meta-analysis was to examine the comprehension problems of children who have a specific reading comprehension deficit (SCD), which is characterized by poor reading comprehension despite adequate decoding. The meta-analysis included 86 studies of children with SCD who were assessed in reading comprehension and oral language (vocabulary, listening comprehension, storytelling ability, and semantic and syntactic knowledge). Results indicated that children with SCD had deficits in oral language (d = -0.78, 95% CI [-0.89, -0.68]), but these deficits were not as severe as their deficit in reading comprehension (d = -2.78, 95% CI [-3.01, -2.54]). When compared to reading comprehension age-matched normal readers, the oral language skills of the two groups were comparable (d = 0.32, 95% CI [-0.49, 1.14]), which suggests that the oral language weaknesses of children with SCD represent a developmental delay rather than developmental deviance. Theoretical and practical implications of these findings are discussed.
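
    The effect sizes above are standardized mean differences. A minimal Python sketch of Cohen's d with an approximate 95% confidence interval follows; the formulas are the standard pooled-SD versions and the function name is our own:

```python
import math

def cohens_d_ci(mean1, mean2, sd1, sd2, n1, n2):
    """Pooled-SD standardized mean difference (Cohen's d) with an
    approximate 95% confidence interval."""
    sp = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d, (d - 1.96 * se, d + 1.96 * se)
```

A group scoring one pooled standard deviation below its comparison group yields d = -1 with a CI excluding zero, whereas equal means yield d = 0 with a CI straddling zero (as with the d = 0.32 comparison above).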

  12. Identifying and Investigating Unexpected Response to Treatment: A Diabetes Case Study.

    PubMed

    Ozery-Flato, Michal; Ein-Dor, Liat; Parush-Shear-Yashuv, Naama; Aharonov, Ranit; Neuvirth, Hani; Kohn, Martin S; Hu, Jianying

    2016-09-01

    The availability of electronic health records creates fertile ground for developing computational models of various medical conditions. We present a new approach for detecting and analyzing patients with unexpected responses to treatment, building on machine learning and statistical methodology. Given a specific patient, we compute a statistical score for the deviation of the patient's response from responses observed in other patients having similar characteristics and medication regimens. These scores are used to define cohorts of patients showing deviant responses. Statistical tests are then applied to identify clinical features that correlate with these cohorts. We implement this methodology in a tool that is designed to assist researchers in the pharmaceutical field to uncover new features associated with reduced response to a treatment. It can also aid physicians by flagging patients who are not responding to treatment as expected and hence deserve more attention. The tool provides comprehensive visualizations of the analysis results and the supporting data, both at the cohort level and at the level of individual patients. We demonstrate the utility of our methodology and tool in a population of type II diabetic patients, treated with antidiabetic drugs, and monitored by the HbA1C test.
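
    A simple stand-in for such a deviation score is a z-score of the patient's response against responses observed in similar patients. The sketch below is illustrative only, not the paper's actual statistic, and the function name is hypothetical:

```python
import statistics

def deviation_score(patient_response, peer_responses):
    """z-score of one patient's response relative to responses observed
    in patients with similar characteristics and medication regimens."""
    mu = statistics.fmean(peer_responses)
    sd = statistics.stdev(peer_responses)
    return (patient_response - mu) / sd
```

Patients whose scores fall in the extreme tails of this distribution would form the "deviant response" cohorts that are then tested for correlated clinical features.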

  13. The Global Signature of Ocean Wave Spectra

    NASA Astrophysics Data System (ADS)

    Portilla-Yandún, Jesús

    2018-01-01

    A global atlas of ocean wave spectra is developed and presented. The development is based on a new technique for deriving wave spectral statistics, which is applied to the extensive ERA-Interim database from the European Centre for Medium-Range Weather Forecasts. The spectral statistics are based on the idea of long-term wave systems, which are unique and distinct at every geographical point. The identification of those wave systems allows their separation from the overall spectrum using the partition technique. Their further characterization is made using standard integrated parameters, which turn out to be much more meaningful when applied to the individual components than to the total spectrum. The parameters developed include the density distribution of spectral partitions, which is the main descriptor; the identified wave systems; the individual distribution of the characteristic frequencies, directions, wave height, wave age, and seasonal variability of wind and waves; return periods derived from extreme value analysis; and crossing-sea probabilities. This information is made available in web format for public use at http://www.modemat.epn.edu.ec/#/nereo. It is found that wave spectral statistics offers the possibility to synthesize data while providing a direct and comprehensive view of the local and regional wave conditions.

  14. Best practices for evaluating the capability of nondestructive evaluation (NDE) and structural health monitoring (SHM) techniques for damage characterization

    NASA Astrophysics Data System (ADS)

    Aldrin, John C.; Annis, Charles; Sabbagh, Harold A.; Lindgren, Eric A.

    2016-02-01

    A comprehensive approach to NDE and SHM characterization error (CE) evaluation is presented that follows the framework of the "ahat-versus-a" regression analysis for POD assessment. Characterization capability evaluation is typically more complex than current POD evaluations and thus requires engineering and statistical expertise in the model-building process to ensure all key effects and interactions are addressed. Justifying the statistical model choice with its underlying assumptions is key. Several sizing case studies are presented with detailed evaluations of the most appropriate statistical model for each data set. A model-assisted approach is introduced to help assess the reliability of NDE and SHM characterization capability under a wide range of part, environmental and damage conditions. Best practices for using models are presented for both an eddy-current NDE sizing case study and a vibration-based SHM case study. The results of these studies highlight the general protocol feasibility, emphasize the importance of evaluating key application characteristics prior to the study, and demonstrate an approach to quantify the role of varying SHM sensor durability and environmental conditions on characterization performance.
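
    The core of an "ahat-versus-a" evaluation is a regression of the measured flaw size (ahat) on the true size (a), together with the residual scatter about that line. A minimal numpy sketch, with names of our own choosing and none of the censoring or model-selection machinery the full protocol requires:

```python
import numpy as np

def ahat_vs_a(a_true, a_hat):
    """Least-squares fit of a_hat = b0 + b1 * a_true, plus the residual
    standard error -- the basic ingredients of an 'ahat-versus-a'
    characterization-error evaluation."""
    A = np.column_stack([np.ones_like(a_true), a_true])
    coef, *_ = np.linalg.lstsq(A, a_hat, rcond=None)
    resid = a_hat - A @ coef
    s = float(np.sqrt(resid @ resid / (len(a_true) - 2)))  # residual std. error
    return coef, s
```

The fitted intercept/slope quantify systematic sizing bias, while the residual standard error feeds the scatter term from which detection and sizing reliability are derived.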

  15. The Business Of Urban Animals Survey: the facts and statistics on companion animals in Canada.

    PubMed

    Perrin, Terri

    2009-01-01

    At the first Banff Summit for Urban Animal Strategies (BSUAS) in 2006, delegates clearly indicated that a lack of reliable Canadian statistics hampers municipal leaders and legislators in their efforts to develop urban animal strategies that create and sustain a healthy community for pets and people. To gain a better understanding of the situation, BSUAS municipal delegates and other industry stakeholders partnered with Ipsos Reid, one of the world's leading polling firms, to conduct a national survey on the "Business of Urban Animals." The results of the survey, summarized in this article, were presented at the BSUAS meeting in October 2008. In addition, each participating community will receive a comprehensive written analysis, as well as a customized report. The online survey was conducted from September 22 to October 1, 2008. There were 7208 participants, including 3973 pet owners and 3235 non-pet owners from Ipsos Reid's proprietary Canadian online panel. The national results were weighted to reflect the true population distribution across Canada, and the panel was balanced on all major demographics to mirror Statistics Canada census information. The margin of error for the national results is +/- 1.15%.

  16. Pediatric Cancer Survivorship Research: Experience of the Childhood Cancer Survivor Study

    PubMed Central

    Leisenring, Wendy M.; Mertens, Ann C.; Armstrong, Gregory T.; Stovall, Marilyn A.; Neglia, Joseph P.; Lanctot, Jennifer Q.; Boice, John D.; Whitton, John A.; Yasui, Yutaka

    2009-01-01

    The Childhood Cancer Survivor Study (CCSS) is a comprehensive multicenter study designed to quantify and better understand the effects of pediatric cancer and its treatment on later health, including behavioral and sociodemographic outcomes. The CCSS investigators have published more than 100 articles in the scientific literature related to the study. As with any large cohort study, high standards for methodologic approaches are imperative for valid and generalizable results. In this article we describe methodological issues of study design, exposure assessment, outcome validation, and statistical analysis. Methods for handling missing data, intrafamily correlation, and competing risks analysis are addressed; each with particular relevance to pediatric cancer survivorship research. Our goal in this article is to provide a resource and reference for other researchers working in the area of long-term cancer survivorship. PMID:19364957

  17. Space flight risk data collection and analysis project: Risk and reliability database

    NASA Technical Reports Server (NTRS)

    1994-01-01

    The focus of the NASA 'Space Flight Risk Data Collection and Analysis' project was to acquire and evaluate space flight data with the express purpose of establishing a database containing measurements of specific risk assessment, reliability, availability, maintainability, and supportability (RRAMS) parameters. The developed comprehensive RRAMS database will support the performance of future NASA and aerospace industry risk and reliability studies. One of the primary goals has been to acquire unprocessed information relating to the reliability and availability of launch vehicles and the subsystems and components thereof from the 45th Space Wing (formerly the Eastern Space and Missile Command, ESMC) at Patrick Air Force Base. After evaluating and analyzing this information, it was encoded in terms of parameters pertinent to ascertaining reliability and availability statistics, and then assembled into an appropriate database structure.

  18. Interpreting comprehensive two-dimensional gas chromatography using peak topography maps with application to petroleum forensics.

    PubMed

    Ghasemi Damavandi, Hamidreza; Sen Gupta, Ananya; Nelson, Robert K; Reddy, Christopher M

    2016-01-01

    Comprehensive two-dimensional gas chromatography (GC×GC) provides high-resolution separations across hundreds of compounds in a complex mixture, thus unlocking unprecedented information for intricate quantitative interpretation. We exploit this compound diversity across the GC×GC topography to provide quantitative compound-cognizant interpretation beyond target compound analysis, with petroleum forensics as a practical application. We focus on the GC×GC topography of biomarker hydrocarbons, hopanes and steranes, as they are generally recalcitrant to weathering. We introduce peak topography maps (PTM) and topography partitioning techniques that consider a notably broader and more diverse range of target and non-target biomarker compounds compared to traditional approaches that consider approximately 20 biomarker ratios. Specifically, we consider a range of 33-154 target and non-target biomarkers, with the highest-to-lowest peak ratio within an injection ranging from 4.86 to 19.6 (precise numbers depend on the biomarker diversity of individual injections). We also provide a robust quantitative measure for directly determining "match" between samples, without necessitating training data sets. We validate our methods across 34 GC×GC injections from a diverse portfolio of petroleum sources, and provide quantitative comparison of performance against established statistical methods such as principal components analysis (PCA). Our data set includes a wide range of samples collected following the 2010 Deepwater Horizon disaster that released approximately 160 million gallons of crude oil from the Macondo well (MW). Samples that were clearly collected following this disaster exhibit a statistically significant match [Formula: see text] using PTM-based interpretation against other closely related sources. 
    PTM-based interpretation also provides higher differentiation between closely correlated but distinct sources than obtained using PCA-based statistical comparisons. In addition to results based on this experimental field data, we also provide extensive perturbation analysis of the PTM method over numerical simulations that introduce random variability of peak locations over the GC×GC biomarker ROI image of the MW pre-spill sample (sample [Formula: see text] in Additional file 4: Table S1). We compare the robustness of the cross-PTM score against peak-location variability in both dimensions and compare the results against PCA analysis over the same set of simulated images. A detailed description of the simulation experiment and discussion of results are provided in Additional file 1: Section S8. We provide a peak-cognizant informational framework for quantitative interpretation of the GC×GC topography. The proposed topographic analysis enables GC×GC forensic interpretation across target petroleum biomarkers, while including the nuances of lesser-known non-target biomarkers clustered around the target peaks. This allows potential discovery of hitherto unknown connections between target and non-target biomarkers.
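
    The PCA baseline referenced above can be sketched in a few lines via the SVD of the mean-centered data matrix; this is a generic illustration of the comparison method, not the paper's implementation:

```python
import numpy as np

def pca_scores(X, k=2):
    """Project samples (rows of X) onto the top-k principal components
    via the SVD of the mean-centered data matrix."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T
```

Samples from distinct sources then separate along the leading components, which is the behavior PTM-based interpretation is benchmarked against.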

  19. Comprehensive nutrition and lifestyle education improves weight loss and physical activity in Hispanic Americans following gastric bypass surgery: a randomized controlled trial.

    PubMed

    Nijamkin, Monica Petasne; Campa, Adriana; Sosa, Jorge; Baum, Marianna; Himburg, Susan; Johnson, Paulette

    2012-03-01

    As morbid obesity increasingly affects Hispanic Americans, the incidence of bariatric procedures among this population is rising. Despite this, prospective research on the effects of comprehensive postoperative education-centered interventions on weight loss and physical activity focused on Hispanic Americans is lacking. To examine whether a comprehensive nutrition education and behavior modification intervention improves weight loss and physical activity in Hispanic Americans with obesity following Roux-en-Y gastric bypass surgery (RYGB). A prospective randomized-controlled trial was conducted between November 2008 and April 2010. At 6 months following RYGB, 144 Hispanic Americans with obesity were randomly assigned to a comprehensive nutrition and lifestyle educational intervention (n=72) or a noncomprehensive approach (comparison group n=72). Those in the comprehensive group received education sessions every other week for 6 weeks in small groups and frequent contact with a registered dietitian. Those in the comparison group received brief, printed healthy lifestyle guidelines. Patients were reassessed at 12 months following surgery. Main outcome measures were excess weight loss and physical activity changes over time. Statistical analyses used t test, ?(2) test, Wilcoxon signed rank, Mann-Whitney U test, and intent-to-treat analysis, significance P<0.05. Participants (mean age 44.5 ± 13.5 years) were mainly Cuban-born women (83.3%). Mean preoperative excess weight and body mass index (calculated as kg/m(2)) were 72.20 ± 27.81 kg and 49.26 ± 9.06, respectively. At 12 months following surgery, both groups lost weight significantly, but comprehensive group participants experienced greater excess weight loss (80% vs 64% from preoperative excess weight; P<0.001) and greater body mass index reduction (6.48 ± 4.37 vs 3.63 ± 3.41; P<0.001) than comparison group participants. 
Comprehensive group participants were also significantly more involved in physical activity (+14 min/wk vs −4 min/wk; P<0.001) than comparison group participants. Mean protein intake was significantly lower in the comparison group than in the comprehensive group (P<0.024). These findings support the importance of comprehensive nutrition education for achieving more effective weight reduction in Hispanic Americans following RYGB. Copyright © 2012 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.

  20. Protein and gene model inference based on statistical modeling in k-partite graphs.

    PubMed

    Gerster, Sarah; Qeli, Ermir; Ahrens, Christian H; Bühlmann, Peter

    2010-07-06

    One of the major goals of proteomics is the comprehensive and accurate description of a proteome. Shotgun proteomics, the method of choice for the analysis of complex protein mixtures, requires that experimentally observed peptides are mapped back to the proteins they were derived from. This process is also known as protein inference. We present Markovian Inference of Proteins and Gene Models (MIPGEM), a statistical model based on clearly stated assumptions to address the problem of protein and gene model inference for shotgun proteomics data. In particular, we are dealing with dependencies among peptides and proteins using a Markovian assumption on k-partite graphs. We are also addressing the problems of shared peptides and ambiguous proteins by scoring the encoding gene models. Empirical results on two control datasets with synthetic mixtures of proteins and on complex protein samples of Saccharomyces cerevisiae, Drosophila melanogaster, and Arabidopsis thaliana suggest that the results with MIPGEM are competitive with existing tools for protein inference.

  1. A Vignette (User's Guide) for “An R Package for Statistical ...

    EPA Pesticide Factsheets

    StatCharrms is a graphical user front-end for ease of use in analyzing data generated from OCSPP 890.2200, the Medaka Extended One Generation Reproduction Test (MEOGRT), and OCSPP 890.2300, the Larval Amphibian Gonad Development Assay (LAGDA). The analyses StatCharrms can perform are: the Rao-Scott adjusted Cochran-Armitage test for trend By Slices (RSCABS), a standard Cochran-Armitage test for trend By Slices (SCABS), the mixed-effects Cox proportional hazards model, the Jonckheere-Terpstra step-down trend test, the Dunn test, one-way ANOVA, weighted ANOVA, mixed-effects ANOVA, repeated-measures ANOVA, and the Dunnett test. This document provides a User’s Manual (termed a Vignette by the Comprehensive R Archive Network (CRAN)) for the previously created R-code tool StatCharrms (Statistical analysis of Chemistry, Histopathology, and Reproduction endpoints using Repeated measures and Multi-generation Studies). The StatCharrms R code has been publicly available directly from EPA staff since the approval of OCSPP 890.2200 and 890.2300, and is now also publicly available on CRAN.

  2. GenAlEx 6.5: genetic analysis in Excel. Population genetic software for teaching and research--an update.

    PubMed

    Peakall, Rod; Smouse, Peter E

    2012-10-01

    GenAlEx: Genetic Analysis in Excel is a cross-platform package for population genetic analyses that runs within Microsoft Excel. GenAlEx offers analysis of diploid codominant, haploid and binary genetic loci and DNA sequences. Both frequency-based (F-statistics, heterozygosity, HWE, population assignment, relatedness) and distance-based (AMOVA, PCoA, Mantel tests, multivariate spatial autocorrelation) analyses are provided. New features include calculation of new estimators of population structure: G'(ST), G''(ST), Jost's D(est) and F'(ST) through AMOVA, Shannon Information analysis, linkage disequilibrium analysis for biallelic data and novel heterogeneity tests for spatial autocorrelation analysis. Export to more than 30 other data formats is provided. Teaching tutorials and expanded step-by-step output options are included. The comprehensive guide has been fully revised. GenAlEx is written in VBA and provided as a Microsoft Excel Add-in (compatible with Excel 2003, 2007, 2010 on PC; Excel 2004, 2011 on Macintosh). GenAlEx, along with supporting documentation and tutorials, is freely available at: http://biology.anu.edu.au/GenAlEx. Contact: rod.peakall@anu.edu.au.

  3. Microarray R-based analysis of complex lysate experiments with MIRACLE

    PubMed Central

    List, Markus; Block, Ines; Pedersen, Marlene Lemvig; Christiansen, Helle; Schmidt, Steffen; Thomassen, Mads; Tan, Qihua; Baumbach, Jan; Mollenhauer, Jan

    2014-01-01

    Motivation: Reverse-phase protein arrays (RPPAs) allow sensitive quantification of relative protein abundance in thousands of samples in parallel. Typical challenges involved in this technology are antibody selection, sample preparation and optimization of staining conditions. The issue of combining effective sample management and data analysis, however, has been widely neglected. Results: This motivated us to develop MIRACLE, a comprehensive and user-friendly web application bridging the gap between spotting and array analysis by conveniently keeping track of sample information. Data processing includes correction of staining bias, estimation of protein concentration from response curves, normalization for total protein amount per sample and statistical evaluation. Established analysis methods have been integrated with MIRACLE, offering experimental scientists an end-to-end solution for sample management and for carrying out data analysis. In addition, experienced users have the possibility to export data to R for more complex analyses. MIRACLE thus has the potential to further spread utilization of RPPAs as an emerging technology for high-throughput protein analysis. Availability: Project URL: http://www.nanocan.org/miracle/ Contact: mlist@health.sdu.dk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25161257

  4. Microarray R-based analysis of complex lysate experiments with MIRACLE.

    PubMed

    List, Markus; Block, Ines; Pedersen, Marlene Lemvig; Christiansen, Helle; Schmidt, Steffen; Thomassen, Mads; Tan, Qihua; Baumbach, Jan; Mollenhauer, Jan

    2014-09-01

    Reverse-phase protein arrays (RPPAs) allow sensitive quantification of relative protein abundance in thousands of samples in parallel. Typical challenges involved in this technology are antibody selection, sample preparation and optimization of staining conditions. The issue of combining effective sample management and data analysis, however, has been widely neglected. This motivated us to develop MIRACLE, a comprehensive and user-friendly web application bridging the gap between spotting and array analysis by conveniently keeping track of sample information. Data processing includes correction of staining bias, estimation of protein concentration from response curves, normalization for total protein amount per sample and statistical evaluation. Established analysis methods have been integrated with MIRACLE, offering experimental scientists an end-to-end solution for sample management and for carrying out data analysis. In addition, experienced users have the possibility to export data to R for more complex analyses. MIRACLE thus has the potential to further spread utilization of RPPAs as an emerging technology for high-throughput protein analysis. Project URL: http://www.nanocan.org/miracle/. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.

  5. High second-language proficiency protects against the effects of reverberation on listening comprehension

    PubMed Central

    Sörqvist, Patrik; Hurtig, Anders; Ljung, Robert; Rönnberg, Jerker

    2014-01-01

    The purpose of this experiment was to investigate whether classroom reverberation influences second-language (L2) listening comprehension. Moreover, we investigated whether individual differences in baseline L2 proficiency and in working memory capacity (WMC) modulate the effect of reverberation time on L2 listening comprehension. The results showed that L2 listening comprehension decreased as reverberation time increased. Participants with higher baseline L2 proficiency were less susceptible to this effect. WMC was also related to the effect of reverberation (although just barely significant), but the effect of WMC was eliminated when baseline L2 proficiency was statistically controlled. Taken together, the results suggest that top-down cognitive capabilities support listening in adverse conditions. Potential implications for the Swedish national tests in English are discussed. PMID:24646043

  6. Energy Facts; Subcommittee on Energy of the Committee on Science and Astronautics, House of Representatives, Ninety-Third Congress, First Session [Committee Print].

    ERIC Educational Resources Information Center

    Library of Congress, Washington, DC. Congressional Research Service.

    This handbook contains a comprehensive selection of United States and foreign energy statistics in the form of graphs and tables. The data are classified according to resources, production, consumption and demand, energy and gross national product, and research and development. Statistics on energy sources such as coal, oil, gas, nuclear energy,…

  7. The Effectiveness of CPS-ALM Model in Enhancing Statistical Literacy Ability and Self Concept of Elementary School Student Teacher

    ERIC Educational Resources Information Center

    Takaria, J.; Rumahlatu, D.

    2016-01-01

    The focus of this study is to comprehensively examine the statistical literacy and self-concept enhancement of elementary school student teachers through the CPS-ALM model, with enhancement measured by N-gain. The results of the study indicate that the use of the Collaborative Problem Solving model assisted by literacy media (CPS-ALM) model…
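
    The N-gain used as the outcome measure above is the normalized gain, (post − pre) / (max − pre); a minimal sketch (the function name and scores are illustrative, not from the study):

```python
def normalized_gain(pre: float, post: float, max_score: float) -> float:
    """Normalized gain: fraction of the possible improvement actually achieved."""
    if max_score <= pre:
        raise ValueError("pre-test score must be below the maximum score")
    return (post - pre) / (max_score - pre)

# Example: pre-test 40/100, post-test 70/100 -> gain of 0.5 (a "medium" gain)
g = normalized_gain(pre=40, post=70, max_score=100)
print(g)  # 0.5
```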

  8. Data-Base for Communication Planning. The Basic and Statistical Data Required for the Elaboration of a Plan for a National Communication System.

    ERIC Educational Resources Information Center

    Rahim, Syed A.

    Based in part on a list developed by the United Nations Educational, Scientific, and Cultural Organization (UNESCO) for use in Afghanistan, this document presents a comprehensive checklist of items of statistical and descriptive data required for planning a national communication system. It is noted that such a system provides the vital…

  9. Examining the Effects of Classroom Discussion on Students' Comprehension of Text: A Meta-Analysis

    ERIC Educational Resources Information Center

    Murphy, P. Karen; Wilkinson, Ian A. G.; Soter, Anna O.; Hennessey, Maeghan N.; Alexander, John F.

    2009-01-01

    The role of classroom discussions in comprehension and learning has been the focus of investigations since the early 1960s. Despite this long history, no syntheses have quantitatively reviewed the vast body of literature on classroom discussions for their effects on students' comprehension and learning. This comprehensive meta-analysis of…

  10. The Effects of Meta-Cognitive Instruction on Students' Reading Comprehension in Computerized Reading Contexts: A Quantitative Meta-Analysis

    ERIC Educational Resources Information Center

    Lan, Yi-Chin; Lo, Yu-Ling; Hsu, Ying-Shao

    2014-01-01

    Comprehension is the essence of reading. Finding appropriate and effective reading strategies to support students' reading comprehension has always been a critical issue for educators. This article presents findings from a meta-analysis of 17 studies of metacognitive strategy instruction on students' reading comprehension in computerized…

  11. Probabilistic Graphical Model Representation in Phylogenetics

    PubMed Central

    Höhna, Sebastian; Heath, Tracy A.; Boussau, Bastien; Landis, Michael J.; Ronquist, Fredrik; Huelsenbeck, John P.

    2014-01-01

    Recent years have seen a rapid expansion of the model space explored in statistical phylogenetics, emphasizing the need for new approaches to statistical model representation and software development. Clear communication and representation of the chosen model is crucial for: (i) reproducibility of an analysis, (ii) model development, and (iii) software design. Moreover, a unified, clear and understandable framework for model representation lowers the barrier for beginners and nonspecialists to grasp complex phylogenetic models, including their assumptions and parameter/variable dependencies. Graphical modeling is a unifying framework that has gained in popularity in the statistical literature in recent years. The core idea is to break complex models into conditionally independent distributions. The strength lies in the comprehensibility, flexibility, and adaptability of this formalism, and the large body of computational work based on it. Graphical models are well-suited to teach statistical models, to facilitate communication among phylogeneticists and in the development of generic software for simulation and statistical inference. Here, we provide an introduction to graphical models for phylogeneticists and extend the standard graphical model representation to the realm of phylogenetics. We introduce a new graphical model component, tree plates, to capture the changing structure of the subgraph corresponding to a phylogenetic tree. We describe a range of phylogenetic models using the graphical model framework and introduce modules to simplify the representation of standard components in large and complex models. Phylogenetic model graphs can be readily used in simulation, maximum likelihood inference, and Bayesian inference using, for example, Metropolis–Hastings or Gibbs sampling of the posterior distribution. [Computation; graphical models; inference; modularization; statistical phylogenetics; tree plate.] PMID:24951559
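
    The core idea described above (breaking a complex model into conditionally independent distributions) can be sketched with a toy chain-structured model; the variable names and probabilities below are invented for illustration and are not from the paper:

```python
# Toy three-node chain: Rate -> BranchLength -> Data (all names hypothetical).
# The joint distribution factorizes into conditionally independent pieces:
#   P(r, b, data) = P(r) * P(b | r) * P(data | b)
p_r = {"slow": 0.6, "fast": 0.4}
p_b_given_r = {"slow": {"short": 0.8, "long": 0.2},
               "fast": {"short": 0.3, "long": 0.7}}
p_data_given_b = {"short": 0.9, "long": 0.5}

def joint(r: str, b: str) -> float:
    """Probability of (r, b, observed data) under the factorized model."""
    return p_r[r] * p_b_given_r[r][b] * p_data_given_b[b]

# Marginal likelihood of the data: sum the factorized joint over latent states
likelihood = sum(joint(r, b) for r in p_r for b in p_b_given_r[r])
print(round(likelihood, 3))  # 0.74
```

    The factorization is what makes Gibbs-style samplers practical: each variable needs only its local conditional distribution, not the full joint.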

  12. Statistical image quantification toward optimal scan fusion and change quantification

    NASA Astrophysics Data System (ADS)

    Potesil, Vaclav; Zhou, Xiang Sean

    2007-03-01

    Recent advances in imaging technology have brought new challenges and opportunities for automatic and quantitative analysis of medical images. With broader accessibility of more imaging modalities for more patients, fusion of modalities/scans from one time point and longitudinal analysis of changes across time points have become the two most critical differentiators to support more informed, more reliable and more reproducible diagnosis and therapy decisions. Unfortunately, scan fusion and longitudinal analysis are both inherently plagued with increased levels of statistical errors. A lack of comprehensive analysis by imaging scientists and a lack of full awareness by physicians pose potential risks in clinical practice. In this paper, we discuss several key error factors affecting imaging quantification, study their interactions, and introduce a simulation strategy to establish general error bounds for change quantification across time. We quantitatively show that image resolution, voxel anisotropy, lesion size, eccentricity, and orientation are all contributing factors to quantification error; and there is an intricate relationship between voxel anisotropy and lesion shape in affecting quantification error. Specifically, when two or more scans are to be fused at feature level, optimal linear fusion analysis reveals that scans with voxel anisotropy aligned with lesion elongation should receive a higher weight than other scans. As a result of such optimal linear fusion, we will achieve a lower variance than naïve averaging. Simulated experiments are used to validate theoretical predictions. Future work based on the proposed simulation methods may lead to general guidelines and error lower bounds for quantitative image analysis and change detection.

  13. The Need for Anticoagulation Following Inferior Vena Cava Filter Placement: Systematic Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ray, Charles E.; Prochazka, Allan

    Purpose. To perform a systematic review to determine the effect of anticoagulation on the rates of venous thromboembolism (pulmonary embolus, deep venous thrombosis, inferior vena cava (IVC) filter thrombosis) following placement of an IVC filter. Methods. A comprehensive computerized literature search was performed to identify relevant articles. Data were abstracted by two reviewers. Studies were included if it could be determined whether or not subjects received anticoagulation following filter placement, and if follow-up data were presented. A meta-analysis of patients from all included studies was performed. A total of 14 articles were included in the final analysis, but the data from only nine articles could be used in the meta-analysis; five studies were excluded because they did not present raw data which could be analyzed in the meta-analysis. A total of 1,369 subjects were included in the final meta-analysis. Results. The summary odds ratio for the effect of anticoagulation on venous thromboembolism rates following filter deployment was 0.639 (95% CI 0.351 to 1.159, p = 0.141). There was significant heterogeneity in the results from different studies [Q statistic of 15.95 (p = 0.043)]. Following the meta-analysis, there was a trend toward decreased venous thromboembolism rates in patients with post-filter anticoagulation (12.3% vs. 15.8%), but the result failed to reach statistical significance. Conclusion. Inferior vena cava filters can be placed in patients who cannot receive concomitant anticoagulation without placing them at significantly higher risk of development of venous thromboembolism.
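
    The summary odds ratio and Cochran's Q statistic reported above are the standard outputs of an inverse-variance meta-analysis; a minimal sketch of that computation (the per-study 2×2 counts below are hypothetical, not data from this review):

```python
import math

# Hypothetical per-study 2x2 counts: (events_tx, n_tx, events_ctrl, n_ctrl)
studies = [(5, 100, 9, 110), (3, 80, 4, 75), (12, 200, 18, 190)]

log_ors, weights = [], []
for a, n1, c, n2 in studies:
    b, d = n1 - a, n2 - c                            # non-events in each arm
    log_ors.append(math.log((a * d) / (b * c)))      # per-study log odds ratio
    weights.append(1 / (1/a + 1/b + 1/c + 1/d))      # inverse-variance weight

# Fixed-effect summary log-OR, and Cochran's Q heterogeneity statistic
pooled = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
q_stat = sum(w * (y - pooled) ** 2 for w, y in zip(weights, log_ors))
print(f"summary OR = {math.exp(pooled):.3f}, Q = {q_stat:.3f}")
```

    A large Q relative to its chi-square reference (degrees of freedom = number of studies minus one), as in the review above, signals heterogeneity and argues for a random-effects model.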

  14. Implementation of Quality Systems in Nuclear Medicine: Why It Matters. An Outcome Analysis (Quality Management Audits in Nuclear Medicine Part III).

    PubMed

    Dondi, Maurizio; Paez, Diana; Torres, Leonel; Marengo, Mario; Delaloye, Angelika Bischof; Solanki, Kishor; Van Zyl Ellmann, Annare; Lobato, Enrique Estrada; Miller, Rodolfo Nunez; Giammarile, Francesco; Pascual, Thomas

    2018-05-01

    The International Atomic Energy Agency (IAEA) developed a comprehensive program, Quality Management Audits in Nuclear Medicine (QUANUM). This program covers all aspects of nuclear medicine practices including, but not limited to, clinical practice, management, operations, and services. The QUANUM program, which includes quality standards detailed in relevant checklists, aims at introducing a culture of comprehensive quality audit processes that are patient oriented, systematic, and outcome based. This paper focuses on the impact of the implementation of QUANUM on daily routine practices in audited centers. Thirty-seven centers, which had been externally audited by experts under IAEA auspices at least 1 year earlier, were invited to run an internal audit using the QUANUM checklists. The external audits also served as training in quality management and in the use of QUANUM for the local teams, which were responsible for conducting the internal audits. Twenty-five of the 37 centers provided their internal audit report, which was compared with the previous external audit. The program requires that auditors score each requirement within the QUANUM checklists on a scale of 0-4, where 0-2 means nonconformance and 3-4 means conformance to the international regulations and standards on which QUANUM is based. Our analysis, covering both general and clinical areas, assessed changes in conformance status in a binary manner as well as changes in the level of conformance scores. Statistical analysis was performed using nonparametric statistical tests. The evaluation of the general checklists showed a global improvement in both the status and the levels of conformance (P < 0.01). The evaluation of the requirements by checklist also showed a significant improvement in all, with the exception of Hormones and Tumor marker determinations, where changes were not significant. 
Of the 25 evaluated institutions, 88% (22 of 25) and 92% (23 of 25) improved their status and levels of conformance, respectively. Fifty-five requirements, on average, increased from nonconformance to conformance status. In 8 key areas, the number of improved requirements was well above the average: Administration & Management (checklist 2); Radiation Protection & Safety (checklist 4); General Quality Assurance system (checklist 6); Imaging Equipment Quality Assurance or Quality Control (checklist 7); General Diagnostic (checklist 9); General Therapeutic (checklist 12); Radiopharmacy Level 1 (checklist 14); and Radiopharmacy Level 2 (checklist 15). Analysis of results related to clinical activities showed an overall positive impact on both the status and the level of conformance to international standards. Similar results were obtained for the most frequently performed clinical imaging and therapeutic procedures. Our study shows that the implementation of a comprehensive quality management system through the IAEA QUANUM program has a positive impact on nuclear medicine practices. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  15. Assessment of the dynamics of urbanized areas by remote sensing

    NASA Astrophysics Data System (ADS)

    Yeprintsev, S. A.; Klevtsova, M. A.; Lepeshkina, L. A.; Shekoyan, S. V.; Voronin, A. A.

    2018-01-01

    This research presents the results of a study of the spatial ecological zoning of urban territories using NDVI analysis of multi-channel satellite images from Landsat-7 and Landsat-8 over the Voronezh region for the period 2001 to 2016. The results obtained from interpreting the satellite images and processing the statistical information were compiled in the GIS environment “Ecology of cities Voronezh region”, on the basis of which a comprehensive ecological zoning of the studied urbanized areas was carried out. Data were obtained on the spatial classification of urban and suburban areas, the dynamics of weakly and strongly anthropogenically transformed territories, hydrological features, and vegetation.
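
    The NDVI analysis mentioned above is computed per pixel as (NIR − Red) / (NIR + Red); a minimal sketch (the reflectance values are invented; for Landsat-8 OLI the red and near-infrared reflectances come from bands 4 and 5):

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index for a single pixel."""
    if nir + red == 0:
        return 0.0  # guard against division by zero on no-signal pixels
    return (nir - red) / (nir + red)

# Hypothetical surface reflectances
print(round(ndvi(nir=0.50, red=0.08), 2))  # dense vegetation -> 0.72
print(round(ndvi(nir=0.25, red=0.20), 2))  # sparse cover / bare soil -> 0.11
```

    Thresholding the resulting NDVI raster (e.g., separating vegetated from built-up pixels) is what supports the kind of ecological zoning the study describes.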

  16. The Hog Cycle of Law Professors: An Econometric Time Series Analysis of the Entry-Level Job Market in Legal Academia.

    PubMed

    Engel, Christoph; Hamann, Hanjo

    2016-01-01

    The (German) market for law professors fulfils the conditions for a hog cycle: In the short run, supply cannot be extended or limited; future law professors must be hired soon after they first present themselves, or leave the market; demand is inelastic. Using a comprehensive German dataset, we show that the number of market entries today is negatively correlated with the number of market entries eight years ago. This suggests short-sighted behavior of young scholars at the time when they decide to prepare for the market. Using our statistical model, we make out-of-sample predictions for the German academic market in law until 2020.

  17. First-Grade Cognitive Abilities as Long-Term Predictors of Reading Comprehension and Disability Status

    PubMed Central

    Fuchs, Douglas; Compton, Donald L.; Fuchs, Lynn S.; Bryant, V. Joan; Hamlett, Carol L.; Lambert, Warren

    2012-01-01

    In a sample of 195 first graders selected for poor reading performance, the authors explored four cognitive predictors of later reading comprehension and reading disability (RD) status. In fall of first grade, the authors measured the children’s phonological processing, rapid automatized naming (RAN), oral language comprehension, and nonverbal reasoning. Throughout first grade, they also modeled the students’ reading progress by means of weekly Word Identification Fluency (WIF) tests to derive December and May intercepts. The authors assessed their reading comprehension in the spring of Grades 1–5. With the four cognitive variables and the WIF December intercept as predictors, 50.3% of the variance in fifth-grade reading comprehension was explained: 52.1% of this 50.3% was unique to the cognitive variables, 13.1% to the WIF December intercept, and 34.8% was shared. All five predictors were statistically significant. The same four cognitive variables with the May (rather than December) WIF intercept produced a model that explained 62.1% of the variance. Of this amount, the cognitive variables and May WIF intercept accounted for 34.5% and 27.7%, respectively; they shared 37.8%. All predictors in this model were statistically significant except RAN. Logistic regression analyses indicated that the accuracy with which the cognitive variables predicted end-of-fifth-grade RD status was 73.9%. The May WIF intercept contributed reliably to this prediction; the December WIF intercept did not. Results are discussed in terms of a role for cognitive abilities in identifying, classifying, and instructing students with severe reading problems. PMID:22539057
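
    The unique/shared variance partition reported above follows standard commonality analysis for two predictor sets; a minimal sketch with hypothetical R² values (not the study's figures):

```python
def commonality(r2_full: float, r2_a: float, r2_b: float):
    """Partition the full-model R^2 over two predictor sets A and B."""
    unique_a = r2_full - r2_b                # variance explained only by set A
    unique_b = r2_full - r2_a                # variance explained only by set B
    shared = r2_full - unique_a - unique_b   # variance the sets jointly explain
    return unique_a, unique_b, shared

# Hypothetical R^2 values: full model 0.50, set A alone 0.45, set B alone 0.24
ua, ub, sh = commonality(r2_full=0.50, r2_a=0.45, r2_b=0.24)
print(round(ua, 2), round(ub, 2), round(sh, 2))  # 0.26 0.05 0.19
```

    In the study's terms, one set would be the cognitive variables and the other the WIF intercept; each unique component is the R² drop when that set is removed from the full model.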

  18. First-grade cognitive abilities as long-term predictors of reading comprehension and disability status.

    PubMed

    Fuchs, Douglas; Compton, Donald L; Fuchs, Lynn S; Bryant, V Joan; Hamlett, Carol L; Lambert, Warren

    2012-01-01

    In a sample of 195 first graders selected for poor reading performance, the authors explored four cognitive predictors of later reading comprehension and reading disability (RD) status. In fall of first grade, the authors measured the children's phonological processing, rapid automatized naming (RAN), oral language comprehension, and nonverbal reasoning. Throughout first grade, they also modeled the students' reading progress by means of weekly Word Identification Fluency (WIF) tests to derive December and May intercepts. The authors assessed their reading comprehension in the spring of Grades 1-5. With the four cognitive variables and the WIF December intercept as predictors, 50.3% of the variance in fifth-grade reading comprehension was explained: 52.1% of this 50.3% was unique to the cognitive variables, 13.1% to the WIF December intercept, and 34.8% was shared. All five predictors were statistically significant. The same four cognitive variables with the May (rather than December) WIF intercept produced a model that explained 62.1% of the variance. Of this amount, the cognitive variables and May WIF intercept accounted for 34.5% and 27.7%, respectively; they shared 37.8%. All predictors in this model were statistically significant except RAN. Logistic regression analyses indicated that the accuracy with which the cognitive variables predicted end-of-fifth-grade RD status was 73.9%. The May WIF intercept contributed reliably to this prediction; the December WIF intercept did not. Results are discussed in terms of a role for cognitive abilities in identifying, classifying, and instructing students with severe reading problems.

  19. Integrating the iPad into an intensive, comprehensive aphasia program.

    PubMed

    Hoover, Elizabeth L; Carney, Anne

    2014-02-01

    The proliferation of tablet technology and the development of apps to support aphasia rehabilitation offer increasing opportunities for speech-language pathologists in a clinical setting. This article describes the components of an Intensive Comprehensive Aphasia Program at Boston University and details how usage of the iPad (Apple Inc., Cupertino, CA) was incorporated. We describe how the iPad was customized for use in individual, dyadic, and group treatment formats and how its use was encouraged through home practice tasks. In addition to providing the participants with step-by-step instructions for the usage of each new app, participants had multiple opportunities for practice across various treatment formats. Examples of how the participants continued using their iPad beyond the program suggest how the usage of this device has generalized into their day-to-day life. An overall summary of performance on targeted linguistic measures, as well as an analysis of functional and quality-of-life measures, reveals statistically significant improvements pre- to posttreatment.

  20. Overview of Emerging Contaminants and Associated Human Health Effects

    PubMed Central

    Lei, Meng; Zhang, Lun; Lei, Jianjun; Zong, Liang; Li, Jiahui; Wu, Zheng; Wang, Zheng

    2015-01-01

    In recent decades, because of significant progress in the analysis and detection of trace pollutants, emerging contaminants have been discovered and quantified in living beings and diverse environmental substances; however, the adverse effects of environmental exposure on the general population are largely unknown. This review summarizes the conclusions of the comprehensive epidemiological literature and representative case reports relevant to emerging contaminants and the human body to address concerns about potential harmful health effects in the general population. The most prevalent emerging contaminants include perfluorinated compounds, water disinfection byproducts, gasoline additives, manufactured nanomaterials, human and veterinary pharmaceuticals, and UV filters. Rare but statistically meaningful associations have been reported between a number of contaminants and cancer and reproductive risks. Because of contradictions in the outcomes of some investigations and the limited number of articles, no significant conclusions regarding the relationship between adverse effects on humans and extents of exposure can be drawn at this time. Here, we report that the current evidence is not conclusive and comprehensive and suggest prospective cohort studies in the future to evaluate the associations between human health outcomes and emerging environmental contaminants. PMID:26713315

  1. Comprehensive analysis of correlation coefficients estimated from pooling heterogeneous microarray data

    PubMed Central

    2013-01-01

    Background The synthesis of information across microarray studies has been performed by combining statistical results of individual studies (as in a mosaic), or by combining data from multiple studies into a large pool to be analyzed as a single data set (as in a melting pot of data). Specific issues relating to data heterogeneity across microarray studies, such as differences within and between labs or differences among experimental conditions, could lead to equivocal results in a melting pot approach. Results We applied statistical theory to determine the specific effect of different means and heteroskedasticity across 19 groups of microarray data on the sign and magnitude of gene-to-gene Pearson correlation coefficients obtained from the pool of 19 groups. We quantified the biases of the pooled coefficients and compared them to the biases of correlations estimated by an effect-size model. Mean differences across the 19 groups were the main factor determining the magnitude and sign of the pooled coefficients, which showed largest values of bias as they approached ±1. Only heteroskedasticity across the pool of 19 groups resulted in less efficient estimations of correlations than did a classical meta-analysis approach of combining correlation coefficients. These results were corroborated by simulation studies involving either mean differences or heteroskedasticity across a pool of N > 2 groups. Conclusions The combination of statistical results is best suited for synthesizing the correlation between expression profiles of a gene pair across several microarray studies. PMID:23822712
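
    The central finding above (mean differences across pooled groups inflating gene-to-gene Pearson correlations) is easy to reproduce in simulation; a self-contained sketch with invented data:

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(1)
# Two groups: within each, the two "genes" are independent noise, but the
# group means differ, mimicking lab/condition effects across studies.
g1x = [random.gauss(0, 1) for _ in range(200)]
g1y = [random.gauss(0, 1) for _ in range(200)]
g2x = [random.gauss(5, 1) for _ in range(200)]
g2y = [random.gauss(5, 1) for _ in range(200)]

within = (pearson(g1x, g1y) + pearson(g2x, g2y)) / 2  # near zero
pooled = pearson(g1x + g2x, g1y + g2y)                # inflated by mean shift
print(f"mean within-group r = {within:.2f}, pooled r = {pooled:.2f}")
```

    The shared group-mean shift drives the pooled coefficient toward 1 even though neither group shows any real association, which is why the paper favors combining per-study correlations (a meta-analysis of coefficients) over pooling raw data.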

  2. Methodological and Reporting Quality of Systematic Reviews and Meta-analyses in Endodontics.

    PubMed

    Nagendrababu, Venkateshbabu; Pulikkotil, Shaju Jacob; Sultan, Omer Sheriff; Jayaraman, Jayakumar; Peters, Ove A

    2018-06-01

    The aim of this systematic review (SR) was to evaluate the quality of SRs and meta-analyses (MAs) in endodontics. A comprehensive literature search was conducted to identify relevant articles in the electronic databases from January 2000 to June 2017. Two reviewers independently assessed the articles for eligibility and data extraction. SRs and MAs on interventional studies with a minimum of 2 therapeutic strategies in endodontics were included in this SR. Methodologic and reporting quality were assessed using A Measurement Tool to Assess Systematic Reviews (AMSTAR) and Preferred Reporting Items for Systematic Review and Meta-Analyses (PRISMA), respectively. The interobserver reliability was calculated using the Cohen kappa statistic. Statistical analysis with the level of significance at P < .05 was performed using Kruskal-Wallis tests and simple linear regression analysis. A total of 30 articles were selected for the current SR. Using AMSTAR, the item related to the scientific quality of studies used in conclusion was adhered by less than 40% of studies. Using PRISMA, 3 items were reported by less than 40% of studies, which were on objectives, protocol registration, and funding. No association was evident comparing the number of authors and country with quality. Statistical significance was observed when quality was compared among journals, with studies published as Cochrane reviews superior to those published in other journals. AMSTAR and PRISMA scores were significantly related. SRs in endodontics showed variability in both methodologic and reporting quality. Copyright © 2018 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  3. VizieR Online Data Catalog: GRB Swift X-ray light curves analysis (Margutti+, 2013)

    NASA Astrophysics Data System (ADS)

    Margutti, R.; Zaninoni, E.; Bernardini, M. G.; Chincarini, G.; Pasotti, F.; Guidorzi, C.; Angelini, L.; Burrows, D. N.; Capalbi, M.; Evans, P. A.; Gehrels, N.; Kennea, J.; Mangano, V.; Moretti, A.; Nousek, J.; Osborne, J. P.; Page, K. L.; Perri, M.; Racusin, J.; Romano, P.; Sbarufatti, B.; Stafford, S.; Stamatikos, M.

    2013-11-01

    We present a comprehensive statistical analysis of Swift X-ray light curves of gamma-ray bursts (GRBs) collecting data from more than 650 GRBs discovered by Swift and other facilities. The unprecedented sample size allows us to constrain the rest-frame X-ray properties of GRBs from a statistical perspective, with particular reference to intrinsic time-scales and the energetics of the different light-curve phases in a common rest-frame 0.3-30 keV energy band. Temporal variability episodes are also studied and their properties constrained. Two fundamental questions drive this effort: (i) Does the X-ray emission retain any kind of 'memory' of the prompt γ-ray phase? (ii) Where is the dividing line between long and short GRB X-ray properties? We show that short GRBs decay faster, are less luminous and less energetic than long GRBs in the X-rays, but are interestingly characterized by similar intrinsic absorption. We furthermore reveal the existence of a number of statistically significant relations that link the X-ray to prompt γ-ray parameters in long GRBs; short GRBs are outliers of the majority of these two-parameter relations. However and more importantly, we report on the existence of a universal three-parameter scaling that links the X-ray and the γ-ray energy to the prompt spectral peak energy of both long and short GRBs: E_X,iso ∝ E_γ,iso^(1.00±0.06) / E_pk^(0.60±0.10). (3 data files).
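
    A scaling of the form E_X,iso ∝ E_γ,iso^1.00 / E_pk^0.60 is a power law, so it becomes linear in log space and the exponents can be recovered by least squares. A sketch with invented, noise-free synthetic energies (arbitrary units, not the paper's data):

```python
import math

# Hypothetical rest-frame energies (arbitrary units, invented for the sketch).
e_gamma = [1e52, 3e52, 1e53, 5e53]       # prompt isotropic gamma-ray energy
e_pk = [100.0, 300.0, 500.0, 1000.0]     # prompt spectral peak energy

# Noise-free E_X,iso generated from the scaling with exponents +1.00, -0.60.
e_x = [eg ** 1.00 * ep ** -0.60 for eg, ep in zip(e_gamma, e_pk)]

# Least squares in log space: log E_X = a*log E_gamma + b*log E_pk.
lg = [math.log(v) for v in e_gamma]
lp = [math.log(v) for v in e_pk]
ly = [math.log(v) for v in e_x]
sgg = sum(u * u for u in lg)
spp = sum(v * v for v in lp)
sgp = sum(u * v for u, v in zip(lg, lp))
sgy = sum(u * w for u, w in zip(lg, ly))
spy = sum(v * w for v, w in zip(lp, ly))
det = sgg * spp - sgp ** 2          # Cramer's rule on the normal equations
exp_gamma = (sgy * spp - spy * sgp) / det
exp_pk = (spy * sgg - sgy * sgp) / det

print(f"fitted exponents: {exp_gamma:.2f}, {exp_pk:.2f}")
```

    With real light-curve data the fit would of course carry scatter, which is what the quoted ±0.06 and ±0.10 uncertainties summarize.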

  4. Navigating Return to Work and Breastfeeding in a Hospital with a Comprehensive Employee Lactation Program.

    PubMed

    Froh, Elizabeth B; Spatz, Diane L

    2016-11-01

    The Surgeon General's Call to Action to Support Breastfeeding details the need for comprehensive employer lactation support programs. Our institution has an extensive employee lactation program, and our breastfeeding initiation and continuation rates are statistically significantly higher than state and national data, with more than 20% of our employees breastfeeding for more than 1 year. The objective of this research was a secondary analysis of qualitative data collected as part of a larger study on breastfeeding outcomes. In the larger study, 545 women who returned to work full or part time completed an online survey with the ability to provide free-text qualitative data and feedback regarding their experiences with breastfeeding after return to work. Qualitative data were pulled from the online survey platform. The responses to these questions were analyzed using conventional content analysis by the research team (2 PhD-prepared nurse researchers trained and experienced in qualitative methodologies and 1 research assistant) in order to complete a thematic analysis of the survey data. Analysis of the data yielded 5 major themes: (1) positive reflections, (2) nonsupportive environment/work culture, (3) supportive environment/work culture, (4) accessibility of resources, and (5) internal barriers. The themes that emerged from this research clearly indicate that even in a hospital with an extensive employee lactation program, women have varied experiences, some more positive than others. Returning to work while breastfeeding requires time and commitment from the mother, and a supportive employee lactation program may ease that transition back to work.

  5. Research diagnostic criteria for temporomandibular disorders (RDC/TMD): development of image analysis criteria and examiner reliability for image analysis.

    PubMed

    Ahmad, Mansur; Hollender, Lars; Anderson, Quentin; Kartha, Krishnan; Ohrbach, Richard; Truelove, Edmond L; John, Mike T; Schiffman, Eric L

    2009-06-01

    As part of the Multisite Research Diagnostic Criteria For Temporomandibular Disorders (RDC/TMD) Validation Project, comprehensive temporomandibular joint diagnostic criteria were developed for image analysis using panoramic radiography, magnetic resonance imaging (MRI), and computerized tomography (CT). Interexaminer reliability was estimated using the kappa (κ) statistic, and agreement between rater pairs was characterized by overall, positive, and negative percent agreement. Computerized tomography was the reference standard for assessing validity of other imaging modalities for detecting osteoarthritis (OA). For the radiologic diagnosis of OA, reliability of the 3 examiners was poor for panoramic radiography (κ = 0.16), fair for MRI (κ = 0.46), and close to the threshold for excellent for CT (κ = 0.71). Using MRI, reliability was excellent for diagnosing disc displacements (DD) with reduction (κ = 0.78) and for DD without reduction (κ = 0.94) and good for effusion (κ = 0.64). Overall percent agreement for pairwise ratings was ≥82% for all conditions. Positive percent agreement for diagnosing OA was 19% for panoramic radiography, 59% for MRI, and 84% for CT. Using MRI, positive percent agreement for diagnoses of any DD was 95% and of effusion was 81%. Negative percent agreement was ≥88% for all conditions. Compared with CT, panoramic radiography and MRI had poor and marginal sensitivity, respectively, but excellent specificity in detecting OA. Comprehensive image analysis criteria for the RDC/TMD Validation Project were developed, which can reliably be used for assessing OA using CT and for disc position and effusion using MRI.

  6. A lead discovery strategy driven by a comprehensive analysis of proteases in the peptide substrate space

    PubMed Central

    Sukuru, Sai Chetan K; Nigsch, Florian; Quancard, Jean; Renatus, Martin; Chopra, Rajiv; Brooijmans, Natasja; Mikhailov, Dmitri; Deng, Zhan; Cornett, Allen; Jenkins, Jeremy L; Hommel, Ulrich; Davies, John W; Glick, Meir

    2010-01-01

    We present here a comprehensive analysis of proteases in the peptide substrate space and demonstrate its applicability for lead discovery. Aligned octapeptide substrates of 498 proteases taken from the MEROPS peptidase database were used for the in silico analysis. A multiple-category naïve Bayes model, trained on the two-dimensional chemical features of the substrates, was able to classify the substrates of 365 (73%) proteases and elucidate statistically significant chemical features for each of their specific substrate positions. The positional awareness of the method allows us to identify the most similar substrate positions between proteases. Our analysis reveals that proteases from different families, based on the traditional classification (aspartic, cysteine, serine, and metallo), could have substrates that differ at the cleavage site (P1–P1′) but are similar away from it. Caspase-3 (cysteine protease) and granzyme B (serine protease) are previously known examples of cross-family neighbors identified by this method. To assess whether peptide substrate similarity between unrelated proteases could reliably translate into the discovery of low molecular weight synthetic inhibitors, a lead discovery strategy was tested on two other pairs of cross-family neighbors, namely cathepsin L2 and matrix metalloproteinase 9, and calpain 1 and pepsin A. For both these pairs, a naïve Bayes classifier model trained on inhibitors of one protease could successfully enrich those of its neighbor from a different family and vice versa, indicating that this approach could be prospectively applied to lead discovery for a novel protease target with no known synthetic inhibitors. PMID:20799349
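
    The position-aware naïve Bayes idea can be sketched on residues directly (the paper trains on two-dimensional chemical features of the substrates; per-position residue counts are a simplification, and the substrate sets below are invented toy examples, not MEROPS entries):

```python
import math
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def train(substrates_by_protease):
    """Per protease: Laplace-smoothed log-probabilities of each residue
    at each substrate position (a position-aware naive Bayes model)."""
    models = {}
    for protease, peptides in substrates_by_protease.items():
        positions = []
        for i in range(len(peptides[0])):
            counts = Counter(p[i] for p in peptides)
            total = len(peptides) + len(AMINO_ACIDS)
            positions.append({aa: math.log((counts[aa] + 1) / total)
                              for aa in AMINO_ACIDS})
        models[protease] = positions
    return models

def classify(models, peptide):
    """Assign a peptide to the protease with the highest log-likelihood."""
    scores = {protease: sum(pos[aa] for pos, aa in zip(positions, peptide))
              for protease, positions in models.items()}
    return max(scores, key=scores.get)

# Toy 4-mer substrates (P4-P1), invented here rather than taken from MEROPS.
substrates = {
    "caspase_like": ["DEVD", "DEHD", "DGVD"],
    "granzymeB_like": ["IEPD", "IETD", "IEAD"],
}
models = train(substrates)
print(classify(models, "DEVD"))  # classified as caspase_like
```

    Because each position contributes its own term to the score, the model can also report which positions discriminate best between two proteases, the "positional awareness" the abstract highlights.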

  7. Chapter 11. Community analysis-based methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, Y.; Wu, C.H.; Andersen, G.L.

    2010-05-01

    Microbial communities are each a composite of populations whose presence and relative abundance in water or other environmental samples are a direct manifestation of environmental conditions, including the introduction of microbe-rich fecal material and factors promoting persistence of the microbes therein. As shown by culture-independent methods, different animal-host fecal microbial communities appear distinctive, suggesting that their community profiles can be used to differentiate fecal samples and to potentially reveal the presence of host fecal material in environmental waters. Cross-comparisons of microbial communities from different hosts also reveal relative abundances of genetic groups that can be used to distinguish sources. In increasing order of their information richness, several community analysis methods hold promise for MST applications: phospholipid fatty acid (PLFA) analysis, denaturing gradient gel electrophoresis (DGGE), terminal restriction fragment length polymorphism (TRFLP), cloning/sequencing, and PhyloChip. Specific case studies involving TRFLP and PhyloChip approaches demonstrate the ability of community-based analyses of contaminated waters to confirm a diagnosis of water quality based on host-specific marker(s). The success of community-based MST for comprehensively confirming fecal sources relies extensively upon using appropriate multivariate statistical approaches. While community-based MST is still under evaluation and development as a primary diagnostic tool, results presented herein demonstrate its promise. Coupled with its inherently comprehensive ability to capture an unprecedented amount of microbiological data that is relevant to water quality, the tools for microbial community analysis are increasingly accessible, and community-based approaches have unparalleled potential for translation into rapid, perhaps real-time, monitoring platforms.

  8. Soy intake and breast cancer risk: A meta-analysis of epidemiological studies

    NASA Astrophysics Data System (ADS)

    Bahrom, Suhaila; Idris, Nik Ruzni Nik

    2016-06-01

    The impact of soy intake on breast cancer risk has been investigated extensively. However, these studies reported conflicting results. The objective of this study is to perform a comprehensive review and an updated meta-analysis of the association between soy intake and breast cancer risk and to identify significant factors which may contribute to the inconsistencies in the results of the individual studies. Based on reviews of existing meta-analyses, we identified four main factors which contributed to the inconsistencies in the results of individual studies on the association of soy intake and breast cancer risk, namely region, menopausal status of the patients, soy type, and study design. Accordingly, we performed an updated meta-analysis of 57 studies grouped by the identified factors. Pooled ORs of studies carried out in Asian countries suggested that soy isoflavones consumption was inversely associated with the risk of breast cancer among both pre- and postmenopausal women (OR=0.63, 95% CI: 0.54-0.74 for premenopausal women; OR=0.63, 95% CI: 0.52-0.75 for postmenopausal women). However, the pooled OR of studies carried out in Western countries shows that there is no statistically significant association between soy intake and breast cancer risk (OR=0.98, 95% CI: 0.93-1.03). Our study suggests that soy food intake is associated with a significantly reduced risk of breast cancer for women in Asian but not in Western countries. Further epidemiological studies need to be conducted with more comprehensive information about dietary intake and relative exposure among the women in these two different regions.
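
    Pooled ORs of this kind are typically computed by inverse-variance weighting on the log-odds scale. A minimal fixed-effect sketch (the paper's actual pooling model is not stated here, and random-effects pooling is also common); the two input estimates echo the abstract's Asian subgroup figures, treated purely for illustration as two independent strata:

```python
import math

def pool_odds_ratios(studies):
    """Fixed-effect inverse-variance pooling of odds ratios, each given
    as (OR, CI_lower, CI_upper) with 95% confidence intervals."""
    weights, log_ors = [], []
    for or_value, ci_lo, ci_hi in studies:
        # Back out the standard error of log(OR) from the CI width.
        se = (math.log(ci_hi) - math.log(ci_lo)) / (2 * 1.96)
        weights.append(1.0 / se ** 2)
        log_ors.append(math.log(or_value))
    pooled_log = sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)
    pooled_se = 1.0 / math.sqrt(sum(weights))
    return (math.exp(pooled_log),
            math.exp(pooled_log - 1.96 * pooled_se),
            math.exp(pooled_log + 1.96 * pooled_se))

pooled_or, pooled_lo, pooled_hi = pool_odds_ratios(
    [(0.63, 0.54, 0.74), (0.63, 0.52, 0.75)])
print(f"pooled OR = {pooled_or:.2f} (95% CI {pooled_lo:.2f}-{pooled_hi:.2f})")
```

    Pooling two concordant estimates leaves the point estimate unchanged while narrowing the confidence interval, which is the whole point of combining studies.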

  9. A Comprehensive Approach to Fusion for Microsensor Networks: Distributed and Hierarchical Inference, Communication, and Adaption

    DTIC Science & Technology

    2000-08-01

    Only report-form fragments of this record survive extraction. The recoverable items reference a keynote talk by Sergio Verdu at LATIN 2006 (Latin American Theoretical Informatics, Valdivia, Chile, March 2006); an article in IEEE Transactions on Wireless Communications, February 2006; and work by A.P. George, W.B. Powell, and S.R. Kulkarni on the statistics of hierarchical aggregation.

  10. Methodological Standards for Meta-Analyses and Qualitative Systematic Reviews of Cardiac Prevention and Treatment Studies: A Scientific Statement From the American Heart Association.

    PubMed

    Rao, Goutham; Lopez-Jimenez, Francisco; Boyd, Jack; D'Amico, Frank; Durant, Nefertiti H; Hlatky, Mark A; Howard, George; Kirley, Katherine; Masi, Christopher; Powell-Wiley, Tiffany M; Solomonides, Anthony E; West, Colin P; Wessel, Jennifer

    2017-09-05

    Meta-analyses are becoming increasingly popular, especially in the fields of cardiovascular disease prevention and treatment. They are often considered to be a reliable source of evidence for making healthcare decisions. Unfortunately, problems among meta-analyses such as the misapplication and misinterpretation of statistical methods and tests are long-standing and widespread. The purposes of this statement are to review key steps in the development of a meta-analysis and to provide recommendations that will be useful for carrying out meta-analyses and for readers and journal editors, who must interpret the findings and gauge methodological quality. To make the statement practical and accessible, detailed descriptions of statistical methods have been omitted. Based on a survey of cardiovascular meta-analyses, published literature on methodology, expert consultation, and consensus among the writing group, key recommendations are provided. Recommendations reinforce several current practices, including protocol registration; comprehensive search strategies; methods for data extraction and abstraction; methods for identifying, measuring, and dealing with heterogeneity; and statistical methods for pooling results. Other practices should be discontinued, including the use of levels of evidence and evidence hierarchies to gauge the value and impact of different study designs (including meta-analyses) and the use of structured tools to assess the quality of studies to be included in a meta-analysis. We also recommend choosing a pooling model for conventional meta-analyses (fixed effect or random effects) on the basis of clinical and methodological similarities among studies to be included, rather than the results of a test for statistical heterogeneity. © 2017 American Heart Association, Inc.

  11. A comprehensive segmentation analysis of crude oil market based on time irreversibility

    NASA Astrophysics Data System (ADS)

    Xia, Jianan; Shang, Pengjian; Lu, Dan; Yin, Yi

    2016-05-01

    In this paper, we perform a comprehensive entropic segmentation analysis of crude oil future prices from 1983 to 2014, using the Jensen-Shannon divergence as the statistical distance between segments, and analyze the results from the original series S and the series beginning in 1986 (marked as S∗) to find common segments which have the same boundaries. We then apply time irreversibility analysis to each segment to divide all segments into two groups according to their degree of asymmetry. Based on the temporal distribution of the common segments and the high-asymmetry segments, we find that these two types of segments appear alternately and basically do not overlap in the daily group, while the common portions are also high-asymmetry segments in the weekly group. In addition, the temporal distribution of the common segments is fairly close to the times of crises, wars, and other events, because the shock from severe events to the oil price makes these common segments quite different from their adjacent segments. The common segments can be confirmed in the daily group series or the weekly group series due to the large divergence between common segments and their neighbors, while the identification of high-asymmetry segments helps identify the segments which were not badly affected by the events and can recover to steady states automatically. Finally, we rearrange the segments by merging connected common segments or high-asymmetry segments into a single segment, and conjoin the connected segments which are neither common nor highly asymmetric.
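
    The Jensen-Shannon divergence used as the between-segment distance is a symmetrized, bounded relative entropy. A minimal implementation, applied to invented histograms standing in for two price segments (not the oil data):

```python
import math

def jensen_shannon(p, q):
    """Jensen-Shannon divergence between two discrete distributions
    (base-2 logarithms, so the value lies in [0, 1])."""
    m = [(a + b) / 2 for a, b in zip(p, q)]
    def kl(a, b):
        # Kullback-Leibler divergence, skipping zero-probability bins.
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Invented return histograms standing in for two price segments.
same = jensen_shannon([0.25, 0.25, 0.5], [0.25, 0.25, 0.5])
far = jensen_shannon([1.0, 0.0], [0.0, 1.0])
print(f"identical segments: {same:.2f}, disjoint segments: {far:.2f}")
```

    Identical distributions score 0 and non-overlapping ones score 1, which is why a large divergence between a candidate segment and its neighbors marks a segmentation boundary.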

  12. The Effects of Music on Pain: A Meta-Analysis.

    PubMed

    Lee, Jin Hyung

    2016-01-01

    Numerous meta-analyses have been conducted on the topic of music and pain, with the latest comprehensive study published in 2006. Since that time, more than 70 randomized controlled trials (RCTs) have been published, necessitating a new and comprehensive review. The aim of this meta-analysis was to examine published RCT studies investigating the effect of music on pain. The present study included RCTs published between 1995 and 2014. Studies were obtained by searching 12 databases and hand-searching related journals and reference lists. Main outcomes were pain intensity, emotional distress from pain, vital signs, and amount of analgesic intake. Study quality was evaluated according to the Cochrane Collaboration guidelines. Analysis of the 97 included studies revealed that music interventions had statistically significant effects in decreasing pain on 0-10 pain scales (MD = -1.13), other pain scales (SMD = -0.39), emotional distress from pain (MD = -10.83), anesthetic use (SMD = -0.56), opioid intake (SMD = -0.24), non-opioid intake (SMD = -0.54), heart rate (MD = -4.25), systolic blood pressure (MD = -3.34), diastolic blood pressure (MD = -1.18), and respiration rate (MD = -1.46). Subgroup and moderator analyses yielded additional clinically informative outcomes. Considering all the possible benefits, music interventions may provide an effective complementary approach for the relief of acute, procedural, and cancer/chronic pain in the medical setting. © the American Music Therapy Association 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
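
    The SMD figures quoted above are standardized mean differences: raw group differences divided by a pooled standard deviation so that outcomes measured on different scales can be combined. A minimal sketch (Cohen's d with a pooled SD; the pain ratings below are invented for illustration):

```python
import math

def smd(treatment, control):
    """Standardized mean difference (Cohen's d with a pooled SD)."""
    n1, n2 = len(treatment), len(control)
    m1 = sum(treatment) / n1
    m2 = sum(control) / n2
    v1 = sum((x - m1) ** 2 for x in treatment) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in control) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Invented 0-10 pain ratings: the music group scores 2 points lower.
music_group = [3, 4, 2, 5, 3, 4]
control_group = [5, 6, 4, 7, 5, 6]
d = smd(music_group, control_group)
print(f"SMD = {d:.2f}")  # negative, i.e. less pain in the music group
```

    Meta-analyses often apply a small-sample correction (Hedges' g) on top of this; the sign convention here matches the abstract, where negative values favor the music intervention.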

  13. Practical limits for reverse engineering of dynamical systems: a statistical analysis of sensitivity and parameter inferability in systems biology models.

    PubMed

    Erguler, Kamil; Stumpf, Michael P H

    2011-05-01

    The size and complexity of cellular systems make building predictive models an extremely difficult task. In principle dynamical time-course data can be used to elucidate the structure of the underlying molecular mechanisms, but a central and recurring problem is that many and very different models can be fitted to experimental data, especially when the latter are limited and subject to noise. Even given a model, estimating its parameters remains challenging in real-world systems. Here we present a comprehensive analysis of 180 systems biology models, which allows us to classify the parameters with respect to their contribution to the overall dynamical behaviour of the different systems. Our results reveal candidate elements of control in biochemical pathways that differentially contribute to dynamics. We introduce sensitivity profiles that concisely characterize parameter sensitivity and demonstrate how this can be connected to variability in data. Systematically linking data and model sloppiness allows us to extract features of dynamical systems that determine how well parameters can be estimated from time-course measurements, and associates the extent of data required for parameter inference with the model structure, and also with the global dynamical state of the system. The comprehensive analysis of so many systems biology models reaffirms the inability to estimate precisely most model or kinetic parameters as a generic feature of dynamical systems, and provides safe guidelines for performing better inferences and model predictions in the context of reverse engineering of mathematical models for biological systems.
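
    The parameter-sensitivity idea can be illustrated on a deliberately tiny stand-in model (logistic growth with invented parameter values, not one of the 180 surveyed models): near steady state the output is insensitive to the growth rate r but fully sensitive to the capacity K, the kind of "sloppy" versus "stiff" parameter distinction such an analysis classifies.

```python
def simulate(r, K, x0=0.1, dt=0.01, t_end=50.0):
    """Euler integration of logistic growth dx/dt = r*x*(1 - x/K)."""
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * r * x * (1 - x / K)
    return x

def relative_sensitivity(f, p, h=1e-4):
    """d ln f / d ln p estimated by central finite differences."""
    return (f(p * (1 + h)) - f(p * (1 - h))) / (2 * h * f(p))

r0, K0 = 1.0, 2.0  # invented parameter values for the toy model
s_r = relative_sensitivity(lambda r: simulate(r, K0), r0)
s_K = relative_sensitivity(lambda K: simulate(r0, K), K0)
print(f"sensitivity to r: {s_r:.3f}, to K: {s_K:.3f}")
```

    By the end of the run the trajectory has settled at K, so r is practically uninferable from this observation alone, a toy version of the paper's conclusion that many kinetic parameters cannot be estimated precisely from time-course data.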

  14. The Comprehension Problems for Second-Language Learners with Poor Reading Comprehension Despite Adequate Decoding: A Meta-Analysis

    ERIC Educational Resources Information Center

    Spencer, Mercedes; Wagner, Richard K.

    2017-01-01

    We conducted a meta-analysis of 16 existing studies to examine the nature of the comprehension problems for children who were second-language learners with poor reading comprehension despite adequate decoding. Results indicated that these children had deficits in oral language (d = -0.80), but these deficits were not as severe as their reading…

  15. CH-47D Rotating System Fault Sensing for Condition Based Maintenance

    DTIC Science & Technology

    2011-03-01

    This research seeks to create an analytical model in the Rotorcraft Comprehensive Analysis System which will enable the identification of... The remaining extracted text consists of acknowledgment fragments (thanks to Dr. Jon Keller and Mr. Clayton Kachelle at AMRDEC) and table-of-contents entries.

  16. Skin antiseptics in venous puncture site disinfection for preventing blood culture contamination: A Bayesian network meta-analysis of randomized controlled trials.

    PubMed

    Liu, Wenjie; Duan, Yuchen; Cui, Wenyao; Li, Li; Wang, Xia; Dai, Heling; You, Chao; Chen, Maojun

    2016-07-01

    To compare the efficacy of several antiseptics in decreasing the blood culture contamination rate. Network meta-analysis. Electronic searches of PubMed and Embase were conducted up to November 2015. Only randomized controlled trials or quasi-randomized controlled trials were eligible. We applied no language restriction. A comprehensive review of articles in the reference lists was also accomplished for possible relevant studies. Relevant studies evaluating the efficacy of different antiseptics at the venous puncture site for decreasing the blood culture contamination rate were included. The data were extracted from the included randomized controlled trials by two authors independently. The risk of bias was evaluated using the Detsky scale by two authors independently. We used WinBUGS 1.43 software and the statistical model described by Chaimani to perform this network meta-analysis. Graphs of the statistical results were then generated with the 'networkplot', 'ifplot', 'netfunnel', and 'sucra' procedures in STATA 13.0. Odds ratios and 95% confidence intervals were assessed for dichotomous data. A probability of p less than 0.05 was considered statistically significant. Compared with ordinary meta-analyses, this network meta-analysis offered hierarchies for the efficacy of different antiseptics in decreasing the blood culture contamination rate. Seven randomized controlled trials involving 34,408 blood samples were eligible for the meta-analysis. No significant difference was found in the blood culture contamination rate among different antiseptics. No significant difference was found between non-alcoholic and alcoholic antiseptics, between alcoholic chlorhexidine and povidone iodine, between chlorhexidine and iodine compounds, or between povidone iodine and iodine tincture. Different antiseptics may not affect the blood culture contamination rate. Different intervals between skin disinfection and venous puncture, different settings (emergency room, medical wards, and intensive care units), and the performance of the phlebotomy may affect the blood culture contamination rate. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. [The effectiveness of comprehensive rehabilitation after a first episode of ischemic stroke].

    PubMed

    Starosta, Michał; Niwald, Marta; Miller, Elżbieta

    2015-05-01

    Ischemic stroke is the most common cause of hospitalization in the Department of Neurological Rehabilitation. Comprehensive rehabilitation is essential for regaining lost functional efficiency. The aim of the study was to evaluate the effectiveness of a disorder-specific rehabilitation program in 57 patients with first-ever ischemic stroke. The study included 57 patients (27 women, 30 men) aged from 47 to 89. Patients were admitted for comprehensive rehabilitation lasting an average of 25 days. The treatment program consisted of exercises aimed at reeducation of posture and gait. In addition, physical treatments were used. The effectiveness of rehabilitation was measured using the Activities of Daily Living (ADL) scale, the Modified Rankin Scale, the Rivermead Motor Assessment (RMA1, global movements; RMA2, lower limb and trunk; RMA3, upper limb), and two psychological tests, the Geriatric Depression Scale (GDS) and the Beck Depression Inventory (BDI). As a result of comprehensive rehabilitation, improvement in functional status and mental health was observed: 32% on the ADL scale (women 36%, men 30%) and 22% on the Rankin scale (women 22%, men 21%). On the RMA, improvement was observed with statistical significance of p=0.001 on all of the subscales. The highest rate of improvement was in upper limb function, RMA3 (41%). On the other subscales, women achieved statistically greater improvement than men (RMA1, 43% versus 25%; RMA2, 41% versus 30%). The psychological assessment showed statistically significant improvement on the GDS (p<0.001; patients under 60 years old) and on the BDI in men over 60 years old (p=0.038). The Spearman correlation coefficient showed no relation between mental state and functional improvement (GDS versus ADL; BDI versus ADL). The 25-day comprehensive rehabilitation program during the subacute stroke phase mainly improves upper limb function. Women achieved better functional improvement on all of the parameters. In addition, symptoms of depression were present across the whole study group, and improvement in mental state occurred primarily in patients over 60 years old. © 2015 MEDPRESS.

  18. Identifying customer-focused performance measures : final report 655.

    DOT National Transportation Integrated Search

    2010-10-01

    The Arizona Department of Transportation (ADOT) completed a comprehensive customer satisfaction : assessment in July 2009. ADOT commissioned the assessment to acquire statistically valid data from residents : and community leaders to help it identify...

  19. The auditory comprehension changes over time after sport-related concussion can indicate multisensory processing dysfunctions.

    PubMed

    Białuńska, Anita; Salvatore, Anthony P

    2017-12-01

    Although science findings and treatment approaches to concussion have changed in recent years, there continue to be challenges in understanding the nature of post-concussion behavior. There is a growing body of evidence that some deficits can be related to impaired auditory processing. To assess auditory comprehension changes over time following sport-related concussion (SRC) in young athletes. A prospective, repeated-measures mixed design was used. A sample of concussed athletes (n = 137) and a control group of age-matched, non-concussed athletes (n = 143) were administered Subtest VIII of the Computerized-Revised Token Test (C-RTT). The 88 concussed athletes selected for final analysis (no previous history of brain injury, neurological or psychiatric problems, or auditory deficits) were evaluated after injury during three sessions (PC1, PC2, and PC3); controls were tested once. Between- and within-group comparisons using RMANOVA were performed on the C-RTT Efficiency Score (ES). The ES of the SRC athletes improved over consecutive testing sessions (F = 14.7, p < .001), and post-hoc analysis showed that PC1 results differed from PC2 and PC3 (ts ≥ 4.0, ps < .001), but PC2 and PC3 C-RTT ES did not change statistically (t = 0.6, p = .557). The SRC athletes demonstrated lower ES for all test sessions when compared to the control group (ts > 2.0, ps < .01). Dysfunctional auditory comprehension performance following a concussion improved over time, but after the second testing session improved performance slowed, especially in terms of its timing. Yet not only auditory processing but also sensorimotor integration and/or motor execution can be compromised after a concussion.

  20. How reliable and accurate is the AO/OTA comprehensive classification for adult long-bone fractures?

    PubMed

    Meling, Terje; Harboe, Knut; Enoksen, Cathrine H; Aarflot, Morten; Arthursson, Astvaldur J; Søreide, Kjetil

    2012-07-01

    Reliable classification of fractures is important for treatment allocation and study comparisons. The overall accuracy of scoring applied to a general population of fractures is little known. This study aimed to investigate the accuracy and reliability of the comprehensive Arbeitsgemeinschaft für Osteosynthesefragen/Orthopedic Trauma Association classification for adult long-bone fractures and identify factors associated with poor coding agreement. Adults (>16 years) with long-bone fractures coded in a Fracture and Dislocation Registry at the Stavanger University Hospital during the fiscal year 2008 were included. An unblinded reference code dataset was generated for the overall accuracy assessment by two experienced orthopedic trauma surgeons. Blinded analysis of intrarater reliability was performed by rescoring and of interrater reliability by recoding of a randomly selected fracture sample. Proportion of agreement (PA) and kappa (κ) statistics are presented. Uni- and multivariate logistic regression analyses of factors predicting accuracy were performed. During the study period, 949 fractures were included and coded by 26 surgeons. For the intrarater analysis, overall agreements were κ = 0.67 (95% confidence interval [CI]: 0.64-0.70) and PA 69%. For interrater assessment, κ = 0.67 (95% CI: 0.62-0.72) and PA 69%. The accuracy of surgeons' blinded recoding was κ = 0.68 (95% CI: 0.65-0.71) and PA 68%. Fracture type, frequency of the fracture, and segment fractured significantly influenced accuracy whereas the coder's experience did not. Both the reliability and accuracy of the comprehensive Arbeitsgemeinschaft für Osteosynthesefragen/Orthopedic Trauma Association classification for long-bone fractures ranged from substantial to excellent. Variations in coding accuracy seem to be related more to the fracture itself than the surgeon. Diagnostic study, level I.
